
IBM Product Master 12.0.

© Copyright IBM Corp. 2023.
US Government Users Restricted Rights - Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
Table of Contents
IBM Product Master documentation 1
Release summary 1
What's new for 12.0 Fix Packs 3
Considerations for General Data Protection Regulation (GDPR) readiness 8
GDPR 8
Product configuration considerations for GDPR readiness 9
Data life cycle 9
Data collection 10
Data storage 10
Data access 11
Data processing 11
Data deletion 12
Data monitoring 12
Responding to data subject rights 13
Information roadmap 13
Overview 14
Product Master architecture 14
Information flow 16
Product services 17
Performance planning 17
Global Data Synchronization architecture 18
Key concepts 19
Attribute collections 20
Catalogs 21
Exports 22
Global Data Synchronization (GDS) 22
Data model 23
Data pool 24
GDS attributes 24
Supply side extension attributes 24
Adding new extension 25
Global location number (GLN) 25
Global trade item number (GTIN) 25
GS1 messages 25
Supply Side 1SYNC messages 26
Subscriptions 27
Supply Side Global Data Synchronization 27
Trade items 28
Trading partners 28
Hierarchies 28
Imports 28
Link attributes versus relationship attributes 29
Location attributes 29
Lookup tables 29
Specs 29
Roles and tasks 30
Business users tasks 30
System administrators tasks 31
Solution developers tasks 32
Interacting with Product Master 32
Extending Product Master capabilities 33
Product accessibility 33
Product accessibility for the Admin UI 34
Product accessibility for the Persona-based UI 35
Installing 35
Planning to install 36
Installation scenarios 36
Installation and configuration worksheets 37
Installation directory worksheet 38
IBM Db2 data source worksheet 38
Oracle data source worksheet 38
WebSphere Application Server installation worksheet 39
Application configuration worksheet 39
System requirements 40
Preparing for the installation 43
Downloading and extracting the installer 44
Installing Perl 44
Sources of Perl 45
Installing GNU utilities 45
Installing Perl in the Product Master home directory 46
Including Perl directory in the PATH statement 46
Sample .bashrc file 47
Installing Perl modules 47
Installing IBM Installation Manager 48
Adding offerings to IBM Installation Manager 48
Installing and setting up the database 49
Setting up your Db2 database 49
Buffer pool requirements 50
Table space requirements 51
Creating the Db2 instance 51
Creating the Db2 database 52
Creating buffer pools 52
Creating table spaces 53
Adding database users and granting permissions 54
Db2 configurations 55
IBM Db2 database profile registry updates 55
Db2 database manager configuration parameters 56
Db2 database configuration parameters 56
Transaction log files for the database 58
Setting up the Db2 client on Product Master 58
IBM Db2 database setup checklist 59
Setting up your Oracle Database 59
Disk considerations for the database 60
Creating a database 61
Oracle setup for high availability 61
Oracle parameter file settings 62
Oracle table space settings 63
Setting up transaction logs 65
Creating database schema users 65
Setting up Oracle on the application server 66
Installing Oracle XML DB component 66
Oracle setup checklist 66
Configuring WebSphere® Application Server 67
Exporting and importing LTPA tokens between WAS domains 68
Installing OpenSearch (Fix Pack 8 and later) 68
Configuring OpenSearch 69
Installing Elasticsearch (Fix Pack 7 and earlier) 70
Configuring Elasticsearch 71
Installing Hazelcast IMDG 72
Installing MongoDB 73
Enabling MongoDB authentication 74
Enabling SSL for MongoDB 74
Installing Python and machine learning modules 75
IBM Product Master interactive installation 76
Installing by using IBM® Installation Manager 77
Installing the product in graphical mode 78
Installing the product using console mode 79
Installing the product silently 79
Creating a response file while you are running a graphical installation 80
Customizing the silent mode response file 80
Silent installation database parameters for Db2 81
Silent installation database parameters for Oracle 82
Silent installation WebSphere Application Server parameters 82
Silent installation Application configuration parameters 83
Disabling the installer splash screen during silent installation 83
Installing silently by using a response file 84
Installing the product manually 84
Creating the env_settings.ini file 86
Setting the common parameters in the env_settings.ini file 86
Setting the common database parameters 87
Storing database passwords in an encrypted format 87
Setting Db2 parameters 88
Setting Oracle parameters 89
Setting up Oracle to use the OCI drivers 90
Configuring the application server parameters 91
Setting the common application server parameters 91
Setting WebSphere Application Server parameters 91
Configuring WebSphere MQ parameters 92
Configuring Persona-based UI parameters 92
Running configuration scripts 93
Configuring the WebSphere Application Server 94
Deploying Product Master 95
Run schema creation scripts 96
Custom table space names 97
Testing the database connectivity 98
Error handling for table space name mapping file 98
Configuring GDS feature 99
Creating a WebSphere Message Queue .bindings file 100
Creating a .bindings file for Windows 101
Creating a .bindings file for UNIX 102
Setting Global Data Synchronization parameters 103
Configuring Global Data Synchronization memory parameters for messaging 104
Setting up an AS2 connector 104
Connecting to a data pool 104
Installing (accelerated deployment) 105
Installing the product by using Operators 105
Downloading the Product Master Docker images (Operator) 107
Configuring Product Master deployment YAML (Fix Pack 8) 108
ipm_12.0.x_cr.yaml file parameters 117
Enabling a feature through CR 118
Configuring Product Master deployment YAML (Fix Pack 7 and later) 119
Configuring Product Master deployment YAML (Fix Pack 6 and later) 128
Configuring Product Master deployment YAML (Fix Pack 4 and later) 136
Configuring Product Master deployment YAML (Fix Pack 3 and later) 144
Creating or migrating database schema 145
Deploying on the Kubernetes cluster 146
Installing OpenSearch on an OpenShift cluster 147
Deploying on the Red Hat OpenShift cluster 151
Customizing the Docker container (Fix Pack 8 and later) 152
Customizing the Docker container (Fix Pack 7 and later) 154
Customizing the Docker container (Fix Pack 5 and later) 156
Customizing the Docker container (Fix Pack 3 and later) 158
Enabling Horizontal Pod Autoscaler (HPA) 159
Configuring SAML SSO (Accelerated deployment) 163
Configure the IBM Product Master (Accelerated deployment) 164
Enable the SAML Web browser SSO (Accelerated deployment) 165
Configure SSO partners (Accelerated deployment) 166
Configure SSO in the browser (Accelerated deployment) 166
Timeout behavior in the Persona UI 167
Known issues and limitations 168
Migrating Product Master Kubernetes or Red Hat OpenShift deployment (Fix Pack 7 and later) 169
Migrating Product Master Kubernetes or Red Hat OpenShift deployment (Fix Pack 4 to Fix Pack 6) 171
Migrating Product Master Kubernetes or OpenShift deployment (Fix Pack 2 to Fix Pack 3) 173
Installing the product by using the Docker images 175
Downloading the Product Master Docker assets 176
Environment variables for deployed Docker image 177
Customizing the Docker container 179
Starting, accessing, or closing the Docker containers 181
Customizing Elasticsearch, Hazelcast, and MongoDB services 182
Customizing IBM MQ service (GDS) 183
Deploying on the Oracle Server 183
Deploying on the Kubernetes cluster 185
Known issues and limitations 187
Deploying on a clustered environment 187
Preparing the logging and configuration directories 188
Maintaining a cluster environment 188
Deploying Product Master using WebSphere Application Server Deployment Manager 188
Enabling horizontal cluster for the IBM Product Master 189
Configuring a cluster environment 196
Horizontal clustering 197
Vertical clustering 199
Post-installation instructions 200
Enabling customizations 203
Using custom Angular scripts to add custom component 204
Navigating to the Persona-based UI from custom Angular scripts 206
Enabling Custom Actions by using Entry Preview scripts 208
Navigating to the Persona-based UI from Entry Preview scripts 209
Enabling Custom Tabs by using Trigger scripts 210
Enabling Custom Tools by using Custom Tool script 211
Developing custom REST API 211
Upgrading the existing custom sample library 212
Enabling HTTPS ports 212
Enabling IBM Operational Decision Manager 213
Verifying the installation 213
Verifying the database and WebSphere Application Server settings 213
Creating test company 214
Accessing the product 214
Applying fix pack 216
Extracting and installing the fix pack 217
Database schema migration 218
Verifying the installation 218
Installing password security update 219
Reset password patch 220
Migrating 222
Migration prerequisites 222
Migrating to IBM Product Master Version 12.0 223
Troubleshooting 225
Managing master data 226
Persona-based UI 226
Types of Personas 227
Role access 229
Navigating the Persona-based UI 230
Customizing the Persona-based UI features 233
Customizing Personal settings 234
Password criteria 235
Customizing Application settings 236
Enriching an entry 237
Tabs for an entry 238
Enriching entries (Bulk updates) 240
Export and Import feature (collaboration area) 242
Lookup table and relationship attributes 243
Using Data Management 244
Uploading assets 245
Linking assets and items 246
Editing assets 247
Generating renditions 248
Migrating assets 248
Disabling DAM 249
Using Explorer 249
Export and Import feature (catalogs and hierarchies) 250
Using Search 252
Supported operators 254
Using Free text search (Fix Pack 8 and later) 255
Using OpenSearch to access indexed content (Fix Pack 8 and later) 256
Using Free text search (Fix Pack 7 and earlier) 261
Filtering Free text search results 263
Excluding content from the Free text search results 264
Using Data Model Manager 264
Using Collaboration Area console 265
Editing or creating a collaboration area 266
Using File Explorer 266
Using Job console 267
Adding or editing a job schedule 268
Using Lookup Table console 269
Editing or creating a Lookup table 269
Using Rules console 270
Editing or creating a rule 271
Supported operators and rules 273
Rules expressions 274
Using Spec console 276
Editing or creating a spec 276
Publishing specs to the IBM Watson Knowledge Catalog 279
Using Attribute collection console 280
Editing or creating an attribute collection 280
Using Workflow console 281
Editing or creating a workflow 281
Viewing dashboards 283
Audit History dashboard 284
DAM Summary dashboard 285
Data Sanity dashboard 285
Job Summary dashboard 286
User Summary dashboard 286
Vendor Summary dashboard 287
Worklist Summary dashboard 288
Additional features 288
Relationship and Linked editors 289
Bulk loading items 290
Generating Stock Keeping Unit (SKU) 291
Using machine learning (ML) assisted data stewardship 292
Training the machine learning services model 293
Configuring machine learning Services 294
Accessing the machine learning by using the REST APIs 295
Using Suspect Duplicate Processing (SDP) 297
Running SDP automation script 298
Performing Suspect Duplicate Processing (SDP) 299
Automating SDP 300
Configuring Item completeness 300
Working with the Vendor persona 302
Admin UI 304
Navigating the Admin UI 304
Password criteria 235
User interface panels 306
Product manager module 307
Catalog Console 307
Catalog to Catalog Export Console 308
Hierarchy Console 308
Hierarchy Mapping Console 308
Selection Console 309
Report Console 309
Lookup Table Console 309
Collaboration manager module 310
Import Console 310
Export Console 311
Collaboration Area Console 311
Queue Console 311
Message Console 311
Web Service Console 312
Transaction Console 312
Document Store 312
Data Source Console 313
Routing Console 313
Generated Files 313
Data model manager module 314
Jobs Console 314
Specs Console 315
Spec Map Console 315
Attribute Collection Console 315
Scripts Console 316
Script Sandbox 316
User Console 316
Role Console 317
Access Control Group Console 317
Catalog Access Privileges Console 317
Defining access for roles 318
Hierarchy Access Privilege Console 333
Activity Log 333
Alerts Subscription Console 333
Staging Area Console 334
Workflow Console 334
System administrator module 334
Audit 335
DB Admin 335
Performance Info 335
Profiling 336
Performance 336
Database Performance 336
Caches 336
Properties 337
Log Directory Listing 337
System Status 337
Profiler 337
Import Environment 338
Selective Export 338
Selective Import 338
Size Distribution 339
Customizing Admin UI features 339
Importing user settings 341
Customizing your user settings 341
Choosing a locale 342
Setting the default date and time attributes 342
Customizing Unit of Measure (UOM) 343
Working with the Admin UI 344
Creating an object 344
Creating an item 344
Creating a category 345
Using search 345
Creating a search template 346
Transforming the search template 347
Search using flag attributes 348
Search using the list view 348
Using IBM Watson search 349
Configuring a Queue with WebSphere MQ 350
Configuring with WebSphere MQ Explorer 350
Create a queue manager 350
Create a local queue 351
Create initial context 351
Create a connection factory 352
Configuring with the command line 352
Using the command line to configure WebSphere MQ 352
Using the command line to configure JMS 353
Using the command line to configure the binding file 354
Installing IBM Data Explorer 354
Selecting catalogs for indexing 355
Selecting attributes for indexing 355
Configuring IBM Product Master to use IBM Data Explorer 355
Exporting the Product Master catalog data into a Data Explorer database 356
Creating reports 357
Creating the export report 357
Creating the receiver report 357
Performance parameters for the export and receiver report jobs 358
Running the export and receiver reports 359
Editing objects 359
Screens for editing objects 361
Editing items in a catalog 362
Editing categories in a hierarchy 363
Editing lookup table type attribute values 363
Lookup table and relationship attributes 364
Editing attribute values using the pop-up menu 365
Restricting editing of all attributes in the [System Default] view 366
Restricting catalog selections for relationship attributes 366
Checking out and editing items and categories 367
Using link attributes 367
Categorizing an item 367
Filtering a list of items and categories 368
Changing where categories and items are displayed in the left pane 369
Viewing relationships 369
Working with collaboration areas 369
Rich text editor overview 370
Running and managing jobs 370
Running a report 371
Running an import 371
Running an export 372
Scheduling jobs 372
Synchronizing product data (GDS) 373
Navigating Supply Side GDS pages 373
Manage items (GDS) 373
Synchronization reports (GDS) 374
Creating master data by using Supply Side GDS 375
Administering 376
Administering database 376
Backup and recovery 376
Exporting and importing the database schema 377
Database maintenance tasks 378
Checking and deleting old object versions with scripts 378
Monitoring databases 379
Dropping temporary tables 380
Administering IBM Db2 database 380
Reorganizing Db2 databases 382
Purging performance profiling data in DB2 382
Scripts and commands for Db2 383
Tuning your IBM Db2 database 384
Administering Oracle Database 384
Reorganizing Oracle Database 385
Scripts and commands for Oracle 387
Database administrator troubleshooting responsibilities 387
Administering system 388
Integrating Lightweight Directory Access Protocol (LDAP) 388
Integrating with Product Master 390
Global Data Synchronization (GDS) administration 391
System monitoring 392
Measuring performance 393
Disk space management 393
Caching web pages 394
Managing document store 395
Document store maintenance 396
Setting the FTP directory 397
Using file system as document store 398
Mounting a folder to the docstore 398
Setting document store access privileges 399
Deleting files 399
Compressing document store files 400
Data maintenance 400
Old object versions 401
Report jobs for maintaining old object versions 401
Job history 402
Creating a catalog version 403
Performance profile 403
Updating DB statistics 404
Enabling and configuring the spell checker 404
Application server performance checklist 405
Application and Business expert troubleshooting responsibilities 405
Administering Product Master system 406
Company management 406
Creating a company 406
Object index maintenance 407
Regenerating indexes for objects 407
Company deployment 408
Preparing to deploy a company 409
Deploying a company 409
Limitations for deploying a company 410
Limitations for exporting a company 410
Limitations for importing a company 411
Deploying a company from the user interface 412
Selectively importing objects 413
Configuring the script input spec and using a predefined export script 414
Deploying a company from the command-line 415
Deploying a company using the Java APIs 416
Java interfaces for exporting and importing data models 416
Sample EnvironmentExporter interface usage 417
Sample ExportList interface usage 418
Sample ExportRequisiteList interface usage 418
Sample EnvironmentImporter interface usage 419
Sample ImportList interface usage 419
Sample ImportRequisiteList interface usage 420
Scripts for company deployment 421
Generating export scripts 422
Deployment script operations 423
Deployment action modes 423
Object type dependencies 424
Specifying file names for exported objects 426
Packaging files with the command-line 427
Sample package control file usage 429
Sample hierarchy content file usage 430
Sample UDL file usage 430
Viewing your company deployment status 431
Propagate a company 431
Restrictions for propagating scripts 432
Restrictions for propagating specs 432
Restrictions for propagating attribute collections 433
Restrictions for propagating roles 434
Restrictions for propagating users 434
Restrictions for propagating access control groups 434
Restrictions for propagating lookup tables 435
Restrictions for propagating views 435
Restrictions for propagating workflows 435
Workflow change propagation 436
Restrictions for propagating collaboration areas 437
Restrictions for propagating hierarchies 437
Managing the system and services 438
Starting the system and all services 438
Stopping the system and all services 438
Checking the status of the system and all services 439
Aborting the system and all services 440
Cache management 440
Configuring initial spec cache loading during server startup 441
Product services 442
Service names 442
List all defined services 443
Configuring memory flags for each service type 443
Starting a service 444
Stopping a service 444
Aborting a service 445
Getting the short status of a service 445
Service checks 447
System administrator troubleshooting responsibilities 448
Performance tuning 449
Performance best practices 449
Performance checklist 450
Volume testing checklist 450
Performance tuning for the application server 451
Typical server and process configuration 451
Tune the application server 452
Hardware sizing and tuning 453
System tuning 453
Performance tuning for the Persona-based UI 455
Troubleshooting 455
Troubleshooting issues 456
Troubleshooting general issues 457
During installation, no error in the Test Connection option on the Database Configuration window 458
Workflow events are not working properly 458
Items stuck in the merge workflow step 459
Appsvr (WebSphere® Application Server) process stops responding when started or stopped 461
Error processing xml queue while saving 461
Errors while running create_schema.sh script 462
Error occurred in XML processing error 463
CWPAP0127E: The Java API object reference is inaccessible due to deletion or similar operation 464
Cannot end Appsvr (WebSphere® Application Server) process 465
Invalid special character strings 465
Importing hierarchy content for hierarchies with categories that have a relationship attribute set not supported 467
Date fields in the Mass Transactions screen not displaying proper values 467
Cannot access items or categories due to lock busy messages 467
No such file or directory warning when using the CCD_CLASSPATH environment variable to compile Java classes 468
Restrictions with multi-domain entities and translation 468
Some docstore files do not show up in the file system 470
Unable to start the Java Message Service (JMS) receiver 470
Invalid value in the GDS lookup tables 471
Receiving "en_US.error.searchgln" error 471
Troubleshooting profiling agents issues 471
WASX7016E: Exception received while reading file 472
Can't load "libjprofiler.a" error 472
Profiling agent libraries are not loaded on the AIX platform 472
Generating and sharing a jProfiler profile task with the IBM Support 473
Troubleshooting application server issues 473
Unresponsive WebSphere® Application Server 473
Invalid configuration settings 474
Unresponsive user interface 474
Troubleshooting connectivity issues 474
JMS connectivity issues by using WebSphere Application Server bundled with WebSphere MQ 475
JMS connectivity issues using WebSphere MQ server 479
Java connectivity issues 479
java.net.ConnectException: Connection refused error 480
java.sql.SQLRecoverableException: IO Error: Connection reset error 481
java.sql.SQLException: Listener refused the connection error 482
java.lang.StackOverflow error 483
java.lang.OutOfMemoryError: Failed to create a thread error 483
Transaction log full error for imports implemented with the Java API 484
Unable to change to remote directory error 485
Product Master box does not see the target destination 485
Is distributor working 485
Troubleshooting user interface issues 486
Browser issues 486
Could not load error after login 488
Error 500: com.ibm.websphere.servlet.session.UnauthorizedSessionRequestException error 489
Item Rich Search on a multi-occurring attribute returns incorrect results 490
Login page is getting reloaded 490
Unauthorized error for the static selection user 491
Troubleshooting database issues 491
Character set error messages during data export or import 492
Database space allocation problems 492
Slow performance if a running job is stopped 492
Redo log switch problems 493
ORA-00054: resource busy and acquire with NOWAIT specified or timeout expired error 493
Product Master hangs and the client UI is frozen 493
Product Master hangs when running analyze schema 494
com.ibm.db2.jcc.b.SqlException: Failure in loading T2 native library db2jcct2 error 494
Improving query performance 494
Drop temporary aggregate tables and indexes 495
Complex SQL statements are running very slow 495
Changed database tables after running migrateToInstalledFP.sh script 495
DBV_0_UK DBV_VERSION index error 496
java.sql.SQLException: ORA-00600: internal error code, arguments error 497
Troubleshooting performance issues 497
Understanding and defining the performance problem 497
Basic health checks to ensure optimal performance 498
Locating the performance problem in the stack 501
Client-side checks 502
Product Master side checks 502
DB server-side checks 502
Individual troubleshooting techniques 503
General performance issues 503
Check update statistics time in DB2 504
Check update statistics time in Oracle 504
Check existence of required tables and indexes 505
Check for invalid indices 506
Manual update of statistics 506
Check SQL runtimes directly on database server 507
Check query plan 507
Check table layout 509
Checking the size of the Document Store 509
Troubleshooting multicast 510
Confirm caching is working in cluster 510
Checklist of multicast settings 510
Configuring multicast 511
Verify the network and machine configuration 511
My machines are not listening to multicast traffic 512
Confirm cache is set up correctly in the log files 512
Allowing multicast broadcasting on a Linux machine 513
Debugging ehcache using the ehcache debugger 513
Contents of the native_cache.log file 514
Troubleshooting product installation 515
Troubleshooting migration issues 516
Troubleshooting the Persona-based UI issues 519
Troubleshooting the SAML SSO issues 522
Troubleshooting the operator issues 524
Troubleshooting the docstore issues 525
Troubleshooting import job schedule issues 526
Troubleshooting Admin UI "Error 203" issue 526
Tools for troubleshooting 527
Collecting data with the IBM Support Assistant 528
Monitoring Java virtual machine (JVM) 528
Using Sumall utility 529
Using LTA tool 530
Using profiling mechanism 531
Contacting IBM Support 532
Log files 533
Viewing log files 535
Viewing the Global Data Synchronization log files 536
Collecting log files 536
Troubleshooting script-related errors 537
Enable custom logging 537
Debugging Persona-based UI logs 538
FAQs - Cloud offering 538
Develop applications 540
Languages and code 540
Java API 541
Java API components 542
Required components 542
Requirements and restrictions 543
Develop code using Java API 543
Develop Java API-based stand-alone applications 544
Java API stand-alone application example 544
Developing and implementing extension points 544
Developing an extension point implementation class 545
Extension points methods arguments 546
Implementation class packaging 546
Sample implementation class 546
Making the extension point class available 547
Registering a URL for invocation of extension points 548
Registering the document store loaded class 548
Register the custom user JAR-based class 548
Starting Java API extension points from scripting 549
Sample scripting code 549
Testing extension point implementation classes 550
Developing in a team mode 551
Security in extension points 551
Caching extension point classes 552
Sample extension points in Java API 552
Sample entry preview extension point in Java API 552
Sample catalog preview extension point in Java API 553
Sample pre-processing and post-processing extension point in Java API 553
Sample code for transaction and exception handling in import and report jobs 554
Transactions and exception handling guidelines 556
Limitations on using transactions within extension points 556
Debugging Java API code 556
Advanced programming with Java API 557
PIMCollection class: Collections of product entities 558
Attributes in Java API 558
Committing and backing out units of work 559
Multi-catalog batching for imports 560
Savepoints 560
Custom Resource Bundle 561
Using setExitValue API to route the entries in automated steps of a workflow 561
Java API code best practices 561
Java API migration 562
Migration from script to Java 562
Attribute collection operations - script to Java migration 563
Attribute operations - script to Java migration 564
Catalog operations - script to Java migration 565
Category operations - script to Java migration 566
Collaboration area operations - script to Java migration 566
Currency operations - script to Java migration 569
Database script operations - script to Java migration 570
Date operations - script to Java migration 570
Distribution operations - script to Java migration 570
Environment operations - script to Java migration 571
Excel operations - script to Java migration 572
Hierarchy operations - script to Java migration 573
Item operations - script to Java migration 574
Java API for the getCtgItemCatSpecificAttribsList script operation 576
Job operations - script to Java migration 578
JMS operations - script to Java migration 578
Locale operations - script to Java migration 579
LookupTable operations - script to Java migration 579
Math operations - script to Java component 580
MQ operations - script to Java migration 580
Number operations - script to Java migration 581
Organization operations - script to Java migration 581
RE operations - script to Java migration 584
Reader operations - script to Java migration 584
Search operations - script to Java migration 585
Selections operations - script to Java migration 586
Spec operations - script to Java migration 587
String manipulations operations - script to Java migration 589
System-Utils operations - script to Java migration 590
Timezone operations - script to Java migration 591
User defined log operations - script to Java migration 592
Workflow operations - script to Java migration 593
Writer operations - script to Java migration 594
XML operations - script to Java migration 595
Zip operations - script to Java migration 595
Sample Java code outside of Java API 595
BidiUtils sample 596
BuildCSV sample 598
BuildDelim sample 599
BuildFixedWidth sample 599
CSVParser sample 600
DelimiterParser sample 602
ExcelParser sample 603
FixedWidthParser sample 605
SampleTokenizer sample 606
StringUtils sample 608
Troubleshooting Java API 610
Resources 611
Script API 611
Types of scripts (extension points) 612
String Enumeration rule script 613
UI Refresh script 614
Value rule script 614
Query all lookup table names through scripts 615
Delete a non-multi-occurring attribute group through scripts 616
Retrieve a select group of items through scripting 616
Script expressions 616
Scriptlets 617
Language constructs 617
Functions and variables 617
Looping and conditionals 618
HashMaps and arrays 618
Calling a method 619
Local variables 619
Global variables 619
Declaring variables and functions 620
Getting script by path 620
Void definition 621
New definition 621
Guidance on transactions and exception handling 621
useTransaction and startTransaction script operations 622
Nesting transactions 622
Catching errors in a transaction 623
Transaction-sensitive operations 623
Workflow operations and transactional implications 623
Guidelines for exception handling in scripting 624
Sample code for exception handling 625
Compiled scripts 625
Settings for Product Master scripting mode 626
Common script compilation errors 626
Common runtime problems 627
Resolve runtime problems 628
Debugging scripts 628
Predefined scripts 629
Export script example 629
Scripting tips 629
JDBC script operations 630
Query language 630
Query language syntax 631
Notational conventions 632
Member data: [] notation 632
Objects join: dot notation 632
Function: () notation 632
String: Single quotation mark 633
Objects and terminals 633
Leading objects 634
Attributes 634
Named attributes 634
Spec-driven attributes 635
Compound attributes 635
Constant attributes 635
Functions 636
date() function 636
path() function 636
timestamp() function 636
Aggregate functions 637
Predicates 637
Atomic predicates 637
Compound predicate 638
Semantics of an object join 638
Sample queries 639
Sample queries for items in a catalog 639
Sample queries for items in a collaboration area 644
Sample queries for categories in hierarchies 644
Creating web services 647
Web services within Product Master 648
Document literal style web services 649
Planning for web services 649
Implementing a simple web service 650
Sample implementation script and WSDL document 652
Implementing a complex web service 653
Deploying a web service 655
Debugging web services 657
Access web services 659
Examples of implementing and deploying web services 659
Example of using the UI to create a web service for a document literal type 659
Example of creating a web service to query the number of items in a catalog 661
Example of creating a web service to search for an item 662
Example of creating a web service that uses a business object and complex logic 664
Example of creating a web service that retrieves information from an item 669
Example of handling namespaces when you create a web service 673
Web services outside of Product Master 675
Deploying a web service 675
Access web services 676
Developing scripts and extension points with the script workbench 676
Overview of the script workbench 676
Installing the script workbench 677
Installing the script workbench into Rational Software Architect 677
Uninstalling the alphaWorks version 677
Modify preferences and properties 678
Setting project properties 678
Setting script workbench preferences 678
Enabling task tags 679
Setting file associations 679
Develop scripts with the script workbench for Product Master 679
Opening the Product Master perspective 680
Creating a project 680
Apply the Product Master project nature to an existing project 681
Overview of the script editor 681
Creating a script 681
Importing an existing script 681
Updating scripts from the server 682
Changing file extensions 682
File extensions 682
Specify comments in Javadoc format 683
Debugging scripts 683
Run a script 683
Running a script on the server 684
Viewing the output in the console from running a script 684
Develop Java extension points using the script workbench 684
Getting started with Java extension points 684
Creating a Java project 685
Setting up the Java Extension Points wizard 685
Ensure the extension point class file is available on the server 686
Troubleshooting checklist for script workbench for Product Master 686
Deploying a third party or custom user .jar file 686
Samples 687
JavaScript extensions samples 687
Customizing the right pane user interfaces 688
Customizing the new data entry screens 688
Installing the JavaScript extensions sample 688
Web services samples 689
Using web services samples 690
Developing the solution 691
Creating data model objects 691
Data modeling overview 692
Performance design considerations 693
Product attributes 694
Core product attributes in the data model 694
Item identifiers 695
Item types 695
Common attributes 696
Pricing and cost attributes 696
Image link attributes 696
Extension attributes in the data model 697
Location attributes in the data model 698
Product classifications in the data model 698
Item relationships 700
Item-to-item relationships 700
Product bundles 701
Item variations and differentiators 701
Cross-sell and up-sell products 702
Accessory products 702
Substitute and replacement products 702
Item-to-other-entity relationships 703
Item-to-supplier relationships 703
Item-to-client relationships 703
Item-to-location relationships 704
Creating specs 704
Types of specs 705
Creating primary specs 705
Creating secondary specs 706
Creating sub specs 707
Attribute node modeling and naming guidelines for specs 707
Validation rules for number, integer and currency attribute types 708
Setting a date attribute type without creating a time stamp 708
Internal representations of attribute types 709
Specifying the length of attributes in specs 709
Specifying currency symbols 710
Associating specs to objects 710
Creating hierarchies 712
Creating catalogs 713
Defining linked attributes 714
Creating views 714
Creating attribute collections 717
Defining location attributes for entries 718
Creating item relationships 719
Creating lookup tables 720
Modifying lookup tables by adding new attribute values 721
Modify lookup tables to contain values for Global Data Synchronization attributes 721
Modeling considerations for lookup values 722
Modeling considerations for lookup tables (and cached catalogs) 722
Attaching files to a staging area 723
Defining multi-occurrence group labels 723
Creating objects for business processes 723
Defining use cases 724
Creating business processes 724
Creating collaboration areas 725
Building collaboration areas by role 725
Reserving and releasing items 726
Creating a collaboration area 726
Creating workflows 728
Workflow steps 729
Types of steps 730
Performers for steps 731
Step logic in a workflow 731
Workflow step extension points 732
Workflow engine - transactions and events 732
Caching workflow data 733
Asynchronous and synchronous processing for workflow events 733
Exit values in workflow steps 735
Workflow validations 736
Persistence of attribute values 737
Workflows versus custom tools 738
CEH logging and debugging 738
Creating a workflow 738
Creating a workflow in the UI 739
Defining routing 740
Sample Java code for creating a workflow 740
Sample script code for creating a workflow 741
Representing steps in a workflow 742
Steps that are not included in a workflow 742
Modeling the process to steps in a workflow 743
Creating a nested workflow 743
Multiple workflow processes 743
Running multiple workflow engines on the same server for a Product Master instance 744
Running multiple workflow engine processes for a Product Master instance across a cluster 744
Enabling the business user workflow dashboard for business processes 745
Installing the business user workflow dashboard 745
Integrating with upstream and downstream systems 746
Creating data sources 747
Creating and scheduling jobs 748
Job design 749
Import job design 749
Import job design considerations 750
Types of imports 751
Common import scenarios 752
Importing large volume of data 752
Export job design 753
Report job design 753
Report-based imports 754
Choosing between imports and reports 755
Creating imports 755
Creating exports 757
Creating reports 758
Setting up schedules for jobs 759
Creating scripts 760
Running a trigger script 761
Using the parameters of invoker.jsp 761
Creating message queues 763
Creating selections and searches 764
Search design and best practices 765
Creating selections 766
Creating static selections 766
Creating dynamic selections 767
Creating searches 768
Creating catalog searches 768
Creating hierarchy searches 768
Creating collaboration area searches 769
Using Recent Searches 770
Exporting search results 770
Creating the security model 771
Setting up a Single Sign-on (SSO) environment 772
Configuring LDAP and Product Master internal repositories synchronization 772
Setting up WebSphere Application Server 773
Enabling SSO in WebSphere Application Server 773
Configuring LTPA in WebSphere Application Server 774
Configuring Product Master in the WebSphere Application Server 774
Configuring SSO 774
Configuring SAML SSO 775
Configuring the IBM Product Master 776
Enabling the SAML Web browser SSO 777
Configuring SSO partners 778
Enabling SAML Service Provider Initiated web SSO 779
Configuring SSO in the browser 782
Timeout behavior in the Persona UI 167
Known issues and limitations (SSO) 784
Configuring SPNEGO and Kerberos SSO 784
Defining users 785
Creating users 786
Revising passwords 787
Defining roles 787
Creating roles 788
Configuring GDS users and roles 789
Creating a GDS role 789
Creating a GDS user 789
Creating custom tools with the UI framework 790
Flow configurations of the UI framework 790
flow-config.xml file for the UI framework 791
Error and login for the UI framework 791
Navigation control and dispatch for the UI framework 792
Synchronous and page navigation 792
Asynchronous and Ajax calls for the UI framework 793
Configuring third-party modules for the left pane 793
Samples for implementing a custom interface with the UI framework 794
Sample for implementing a parameter-enabled interface 795
Sample for implementing a request-response-enabled interface 795
Sample for implementing a simple command with no interface 796
Sample for implementing an asynchronous interface 797
Scenarios for creating custom tools 798
Scenario 1: Creating a simple custom tool 798
Scenario 2: Modifying a custom tool to display a collaboration area 799
Customizing labels and icons for items and categories 802
Defining domain entities in the XML spec 802
Generating a CSS file for the domain entities 805
Associating values to the domain entities using the user interface 805
Error handling and limitations for domain entities 806
Scenario 1: The domain spec file does not exist 806
Scenario 2: Missing XML tag specified for domain entity 807
Scenario 3: Wrong type for domain entity 807
Scenario 4: Duplicate domain entity specified 807
Scenario 5: Invalid characters specified in domain entity ID 808
Scenario 6: Invalid empty label value specified for required labels on a domain entity 808
Scenario 7: A domain entity has been defined without a company tag 808
Enabling event logging using history manager 809
Event logging 809
Log messages in the data model 809
Configuring logging for supported events of an object 810
history_subscriptions.xml file 811
Using filters in the history manager 811
Objects that support event logging 813
Error Handling 813
Subscription policies 814
Consuming logged history data 814
Configuring the Entity Count Tool 816
Integrating with other products 818
Integrating IBM Content Integrator 819
Installing IBM Content Integrator for IBM Content Management 820
Installing IBM Content Integrator for IBM FileNet P8 Platform 821
Configuring the content management system 822
content_management_system_properties.xml File 823
Configuring IBM Product Master to use IBM Content Integrator services 824
Managing content in a content management system 824
Adding content to the content management system 824
Searching the content management system 827
Searching content management system using case insensitive searches 830
Configuring uploads to content management system for case insensitive searches 831
Configuring for performing case insensitive searches 831
Using a wildcard search string 832
Escape characters 832
Associating a content management system document to an item 833
Viewing content in the content management system 834
Content Management troubleshooting checklist 835
Java API for the content management system 836
Integrating IBM InfoSphere Physical Master Data Management 838
Overview of the Banking Solution 838
Key concepts 840
Banking Solution Sample 841
Deploying the Banking Solution Sample 841
Importing the Banking Solution sample 844
Managing product and offers 844
Managing features 845
Managing sellable items 845
Managing offers 846
Adding sellable items to the offer 847
Adding features to the offer 848
Adding target market segments to the offer 848
Adding target market locations to the offer 848
Managing categories 849
Developing with the Banking Solution Sample and Toolkit 850
Integrating IBM InfoSphere Information Server 851
Loading data from upstream systems 852
Importing an InfoSphere DataStage output file 855
csv_loader_invoke_job.sh Script 856
Create report using table_invoke_job.sh script 857
Sample InfoSphere DataStage and InfoSphere QualityStage CSV output file data 858
CSV columns and column values 858
Data and status table format rules 859
Property file information 859
Publishing data to downstream systems 860
Exporting the product data 860
Defining the attribute collection 861
Exporting the attribute collection 861
Importing the schema into IBM InfoSphere Information Server 862
Building InfoSphere DataStage and InfoSphere QualityStage jobs 863
Integrating Operational Decision Manager 864
Introduction of Advanced Business Rules through IBM Operational Decision Manager 864
Key concepts 865
Roles and tasks 866
Installing and configuring the Advanced Business Rules solution 866
Enabling Advanced Business Rules in your solution 867
Enabling the Advanced Business Rules user interface 867
Updating your workflows 868
Specifying decision types 869
Adding attributes to specs 869
Authenticate with Advanced Business Rules 869
Access control for rule viewing and authoring 870
Additional customization required by your solution 870
Associating decision types to rule projects 870
Copying entity-specific rules between entities 871
Deleting entity-specific rules 871
Working with Advanced Business Rules sample model 871
Import the Advanced Business Rules solution sample data 871
Enabling the Advanced Business Rules user interface for the sample model 872
Description of sample model 872
Adding rules 873
Integrating WebSphere Commerce 874
Introduction of Advanced Catalog Management 874
Solution architecture overview 875
Key concepts 875
Attribute dictionary 876
Catalog entry repositories 876
Catalog entry categories 876
Stores 877
Workspaces 877
DescriptionOverride attributes 877
CalculationCode Attributes 877
Search Engine Optimization 878
Attribute groups 878
Data model 879
Roles and tasks 879
Integration approaches 881
Managing business user privileges 882
Installing and configuring Advanced Catalog Management 883
Installing WebSphere Commerce Server 883
Prerequisites checklist for WebSphere MQ 884
Installing WebSphere Commerce Server bundle 884
Installing DB2 885
Installing Oracle 885
Installing HTTP Server 885
Installing WebSphere Application Server 885
Installing WebSphere Commerce Server 885
Install WebSphere Application Server fix pack and WebSphere Commerce Server feature pack 886
Installing WebSphere Application Server fix pack 886
Installing WebSphere Commerce Server feature pack 886
Enabling WebSphere Commerce Server feature pack 887
Creating a WebSphere Commerce Server instance 887
Publishing WebSphere Commerce Server stores 888
Installing and configuring WebSphere MQ for WebSphere Commerce Server 888
Installing WebSphere MQ for WebSphere Commerce Server use 888
Configuring WebSphere MQ for WebSphere Commerce use 889
Creating queue managers 889
Configuring WebSphere Application Server for use with WebSphere MQ 890
Enabling WebSphere MQ for WebSphere Commerce Server 891
Installing and configuring FTP server 891
Installing and configuring Advanced Catalog Management on WebSphere Commerce Server 892
Enabling ACM in WebSphere Commerce Server 892
Enabling ACM at the WebSphere Commerce Server binary level 893
Enabling ACM at the WebSphere Commerce Server instance level 894
Modifying the wc-dataload-env.xml file 894
Configuring the wcs.properties file 894
Verifying the information in the wcs.properties file 895
Copying the messages 896
Modifying the wc-server.xml file 896
Modifying the struts-config-ext.xml file 896
Enabling the jar libraries 897
Enable MQ Listener 897
Installing and configuring Advanced Catalog Management on Product Master 897
Configuring Advanced Catalog Management on Product Master 898
Setting up stores, catalogs and attribute dictionary 899
Adding a Direct Store with Master Catalog and Attribute Dictionary 900
Adding a Catalog Asset Store with Master Catalog and Attribute Dictionary 900
Adding an eSite Store to extend the Catalog Asset Store with Sales Catalog 901
Troubleshooting 901
Instance creation failed 902
Failure in enabling WebSphere Commerce Server 902
WebSphere MQ call failed 902
Enabling multiple languages 903
Enabling predefined languages 903
Enabling new languages 904
Attributes which support multiple languages by default 905
Restrictions and limitations 905
Restrictions on creating a localized enumeration attribute in the attribute dictionary 906
Restrictions on modifying an enumeration value of an attribute in a catalog entry which is localized 906
Managing catalog data 906
Managing attribute dictionary 907
Creating descriptive attributes 907
Creating predefined descriptive attributes 908
Creating defining attributes 908
Setting sequence values for descriptive and defining attributes 908
Setting facets for attribute dictionary attributes 909
Creating attribute groups 909
Providing different values to code and name of Attribute Dictionary attributes 909
Managing catalog groups 910
Managing catalog groups in a workflow 910
Creating catalog groups in a workflow 911
Creating catalog groups with SEO data 911
Adding calculation codes to catalog groups 911
Managing catalog entries 912
DescriptionOverride attributes 877
Implementing Description Override 913
Managing catalog entry attributes 913
Adding attribute dictionary-based attributes 915
Setting sequence values for descriptive and defining attributes in catalog entries 915
Creating catalog entries with SEO data 915
Adding calculation codes to catalog entries 915
Creating catalog entries with Description Override in eSite Store 916
Managing catalog entries in a workflow 916
Creating catalog entries in a workflow 917
Catalog entry tabbed view 917
Searching catalog entries 918
Generating SKUs 918
Generating SKUs with product associations 919
Generating SKUs without product associations 919
Creating calculation codes in WebSphere Commerce 919
Publishing catalog data 920
Publishing an attribute dictionary 920
Mapping 921
Sample XML data 921
Export status 922
Publishing catalog entry categories (Catalog Group) 922
Publishing a catalog group in a workflow 922
Publishing catalog groups with SEO data 923
Publishing catalog groups with CalculationCode 923
Mapping 923
XML data for SOAP request 924
Export status 924
Publishing catalog attributes 924
Publishing catalog entries with SEO data 925
Publishing catalog entries with CalculationCode 925
Publishing catalog entries with DescriptionOverride to eSite Store 925
Publishing catalog entries 925
XSL syntax 926
Mapped attributes 926
XML data for data load 927
Data load report 927
Disassociating an attribute in a catalog entry 928
Updating an enumeration value of an attribute in a catalog entry 928
Publishing attribute groups 929
Refreshing data 929
Objects and classes 929
Creating custom logger 931
Development environment setup 931
Debugging examples 932
Extending Advanced Catalog Management 932
Integrating WebSphere MQ 934
Integrating scheduler applications 936
Creating the scripts for external scheduling 936
Running a job from the Tivoli Workload Scheduler interface 937
Integrating connectors 938
Adobe InDesign connector 939
Amazon Marketplace Web Service connector 941
eBay Commerce Network Merchant Center connector 943
Google Merchant Center connector 945
JD Edwards connector 947
Magento2 connector 949
Message archive service 951
SAP connector 951
Reference 953
REST APIs 954
REST API error codes 954
Integrations REST APIs 963
Integrations REST APIs - Catalog 966
Integrations REST APIs - Hierarchy 971
Integrations REST APIs - Collaboration Area 973
Integrations REST APIs - Lookup Table 974
Integrations REST APIs - Search 976
Configuration files 977
admin_properties.xml file parameters 977
application.properties (Indexer) file parameters 978
application.properties (pim collector) file parameters 979
common.properties file parameters 980
Editing common.properties file 992
config.json file parameters 992
dam.properties file parameters 994
damConfig.properties file parameters 995
dashboards_config.ini properties file parameters 996
data_entry_properties.xml file parameters 997
db.xml file parameters 998
docstore_mount.xml file parameters 998
env_settings.ini file parameters 999
gdsConfig.properties file parameters 1001
mass-edit.properties file parameters 1001
mdm-cache-config.properties parameters 1002
all.useMulticast parameter 1003
accessCache.maxElementsInMemory parameter 1003
attrGroupCache.maxElementsInMemory parameter 1004
attrGroupCache.timeToLiveSeconds parameter 1004
catalogCache parameter 1004
catalogDefinitionCache.maxElementsInMemory parameter 1005
ctgViewCache.maxElementsInMemory 1005
lookupCache.maxElementsInMemory parameter 1006
roleCache.maxElementsInMemory parameter 1006
scriptCache.maxElementsInMemory parameter 1006
specCache__KEY_START_VERSION_TO_VALUE.maxElementsInMemory parameter 1007
workflowCache.maxElementsInMemory parameter 1007
wsdlCache.maxElementsInMemory parameter 1007
mdm-cache-config.xml.template file parameters 1008
mdmce-roles.json.default file parameters 1009
ml_configuration file parameters 1010
restConfig.properties file parameters 1010
Shell scripts 1013
abort_local.sh script 1015
analyze_schema.sh script 1015
checkForCompileError.sh script 1016
cleanup_cmp.sh script 1016
configureEnv.sh script 1017
copy_files_for_appsvr.sh script 1017
create_appsvr.sh script 1018
create_cmp.sh script 1018
create_pimdb.sh script 1019
create_pimdevdb.sh script 1019
create_schema.sh script 1019
create_vhost.sh script 1020
db2_export.sh script 1020
db2_fast_import.sh script 1021
delete_old_versions script 1022
drop_schema.sh script 1023
envexpimpXMLValidator.sh script 1023
estimate_old_version script 1023
exportCompanyAsZip.sh script 1024
get_ccd_version.sh script 1025
get_params.sh script 1025
get_service_status.sh script 1025
importCompanyFromZip.sh script 1026
importRelationshipsAsCompany.sh script 1026
indexRegenerator.sh script 1027
install_war.sh script 1027
load.sh script 1028
pimprof.sh script 1028
pimSupport.sh script 1031
rename_cmp.sh script 1033
rmi_status.sh script 1034
run_job_template.sh script 1034
start_local.sh script 1035
start_local_rmlogs.sh script 1036
start_rmi_and_appsvr.sh script 1036
stop_local.sh script 1036
svc_control.sh script 1037
test_db.sh script 1038
updateRtClasspath.sh script 1038
Script operations 1039
Global Data Synchronization configuration files 1074
Editing the GDS configuration files 1074
Viewing system properties 1074
gds.properties file 1075
daysInPast parameter 1075
userIdInXML parameter 1076
gds_system.properties file 1076
TRADE_ITEM_VALIDATION_SCRIPT_PATH parameter 1076
IBM Product Master on Cloud 1077
IBM Product Master documentation
Welcome to the IBM® Product Master release documentation, where you can find information about how to install, configure, and use IBM Product Master, Version 12.0.

Getting started

Release summary
System requirements
Product overview
Product accessibility

Common tasks
Installing

Troubleshooting and support


Troubleshooting issues
IBM Electronic Support
IBM Support portal

More information

IBM Passport Advantage


IBM Fix Central


IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Release summary
Release summary for IBM® Product Master Version 12.0.

Contents
Description
What's new
Defect fixes
Enhancements
System requirements
Media
Installing
Support

Description
IBM Product Master provides a highly scalable, enterprise Product Information Management (PIM) solution. Product Master is middleware that establishes a single, integrated, consistent view of an enterprise's product and service information.

IBM Product Master is the new name for InfoSphere® Master Data Management Collaboration Server - Collaborative Edition, a product that is used to manage entities such as products, suppliers, assets, locations, and in some cases even customers. Although its flexible data model and adaptive user interface let you manage all of these entities, the product is built primarily for Product Information Management and provides all the core capabilities that a robust PIM solution requires.

Although the former name reflected the product offering (collaborative authoring of master data through workflows), it did not convey "PIM solution" and was potentially confusing to users, many of whom did not know that IBM had a PIM offering. Also, because the name lacked the keyword "Product", it was a poor fit for search engines and digital marketing.

To address these concerns, the product was renamed to IBM Product Master. IBM Product Master retains the underlying InfoSphere Master Data Management Collaboration Server - Collaborative Edition architecture and functions. The IBM Product Master V12.0 release introduces more capabilities that enable user personas to perform tasks more efficiently. IBM Product Master continues to be part of the IBM InfoSphere Master Data Management family with the same operational support.

Type of Release                                             Availability
General Availability of IBM Product Master, Version 12.0    15 May 2020

What's new



Simplified installer
Admin UI and Persona-based UI are now bundled into a single installer, which deploys both of them on a single WebSphere® Application Server.
Persona-based UI collaboration area enhancements

Quickly search for items in a collaboration area by filtering out empty steps and collaboration areas.
Ability to undo check-out for items in collaboration areas.
Directly open checked-out items from the Search, Explorer, or Free text Search pages and browse to an item under a specific step.

Persona-based UI interface enhancements

Ability to delete Saved Search or Saved Template from the Persona-based UI.
Display a Lookup type attribute as a drop-down list by using "Show as drop down", which controls whether an individual Lookup attribute is displayed as a drop-down list or a pop-up window.
Ability to control the order of attributes on the single-edit page by using "New Line Before" and "New Line After". You can group or split sets of attributes for display as required; for example, a unit of measurement and its actual value can be displayed together.
Ability to set the Collaboration Area or Explorer grid pagination size from the interface instead of through a configuration parameter. You can set the page size individually.
Ability to manage secondary specifications from the Spec console.

Machine learning-based Standardization
The machine learning-based product standardization feature helps data stewards improve data quality. To address data quality issues, use machine learning to standardize product descriptions and fix evident misspellings. The machine learning service learns contextual information from product descriptions, which it uses to identify and replace incorrect words in the descriptions.
Stock Keeping Units (SKU) Generation workflow sample
A SKU generator sample for automatically creating an array of SKUs for a product. After the product and the SKU category are defined, run the SKU generator to
automatically generate all the SKUs.

Defect fixes
The following APARs are addressed in IBM Product Master 12.0.
APAR      Description
JR61351   In a multi-occurrence grouping node, a user can search only in the first occurrence of the group node, and there is no way to search in the other occurrences
of the group node.
JR61353 Any custom UI on the entry preview pop-up window, when maximized, does not use the maximized screen.
JR61356 Unable to copy text from the Elasticsearch grid.
JR61488 Catalog view is not working for the Item pop-up window and Item location.
JR61550 None of the Action scripts works on a single-edit of search after clearing the cache.
JR61675 The timeout pop-up window does not display properly in the Mozilla Firefox browser.
JR61686 The column search does not find an item on the Search grid and works only on the loaded items.
JR61848 The secondary hierarchy of the catalog disappears in the interface.
JR61879 Reading database properties from the db.xml file and not the common.properties file for the IBM InfoSphere DataStage® integration.
JR61880 Importing data issue.
JR61913 Need multi-edit links access from the Entry Previews.
JR62026 Problem with String Enumeration rule list in the Persona-based UI.
JR62043 The Clear option in the Column filter is not working.
JR62170 Angular Preview scripts have a change in the preview-script.json file. The "container id" has changed to "container name" due to which the toolbar
buttons are not getting displayed.
JR62186 Incorrect script operation description for STRING[] WORKFLOWSTEP ::GETWFLSTEPATTRIBUTEGROUPS() in the workbench.
JR62219 Before saving multi-occurrence attributes, "<Lookup_Table_Name> >> <PK>" added irrespective of the display format set.
JR62220 In the Lookup table drop-down list after deleting the entry, Save is not enabled.
JR62221 In the Lookup table drop-down list, value selection is not working.

Enhancements
The following enhancements are included in IBM Product Master 12.0.

IBM Product Master needs discard or undo check-out feature for the collaboration areas. For more information, see Enriching an entry.
Preview scripts in the Persona-based UI require a system-based "container ID". For more information, see Developing Custom Tabs by using custom Angular
scripts.
Persona-based UI does not display change details for the collaboration areas. For more information, see Enriching an entry and Enriching entries (Bulk updates).
Not possible to hide empty steps and empty collaboration areas. For more information, see Navigating the Persona-based UI.
Multi-edit page fetches all item attributes even when a view specifies a subset.
The sequence of the entries in the collaboration area step is the same as the sequence in which they were moved to the collaboration area step.
Cannot delete a Saved Search or Saved Template. For more information, see Using Search.
Persona-based UI needs a collaboration area search. For more information, see Navigating the Persona-based UI.
Click Check out on the multi-edit page to open the item in the collaboration area. For more information, see Using Explorer.
Ability to decide the order of attributes on the single-edit page, for example, Unit of measurement and value to be grouped for display. For more information, see
Editing or creating a spec.
Free text search takes much time to load results. Support is added for configuring display attributes on the Free text search results. For more information, see
Enabling Free text search.
Amazon Relational Database Service (RDS) for Oracle for the IBM Product Master.
Consolidate the REST API logging from the mdm-rest.war file.

System requirements
The system requirements for the IBM Product Master installations.

For more information about system requirements for the IBM Product Master, see System requirements.

2 IBM Product Master 12.0.0


To see the detailed system requirements in the Software Product Compatibility Reports (SPCR), search for IBM Product Master and press Enter.

Media
IBM Product Master consists of the following general media packages:

Media                                                      Part number
IBM Product Master Multiplatform Multilingual eAssembly    CC6AWML
License Information                                        CC6AXML
IBM Product Master Key Version 12.0 (Docker containers)    CC6AYML

Installing
For step-by-step installation instructions, see Installing. For detailed migration instructions, see Migrating to Version 12.0.

Support
IBM Electronic Support offers a portfolio of online support tools and resources that provide comprehensive technical information to diagnose and resolve problems and
maintain your IBM products. IBM developed many smart online tools and proactive features that can help you prevent problems from occurring in the first place, or
quickly and easily troubleshoot problems when they occur.

IBM's improved personalization of support resources helps you focus on and be alerted to exactly the information and resources that are needed for efficient and effective
problem prevention and resolution. For more information, see:

https://support.podc.sl.edst.ibm.com/support/home/


IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

What's new for 12.0 Fix Packs


Learn about the new and updated features and functions available with the fix packs for the Product Master 12.0.

Jump to:

IBM® Product Master Version 12.0 Fix Pack 8
IBM Product Master Version 12.0 Fix Pack 7
IBM Product Master Version 12.0 Fix Pack 6
IBM Product Master Version 12.0 Fix Pack 5
IBM Product Master Version 12.0 Fix Pack 4
IBM Product Master Version 12.0 Fix Pack 3
IBM Product Master Version 12.0 Fix Pack 2
IBM Product Master Version 12.0 Fix Pack 1

Table 1. Important links

Available fix packs and interim fixes:
    Fix Central

Latest REST API definitions:
    REST APIs
    REST APIs swagger file
    Integrations REST APIs
    Integrations REST APIs swagger file

System requirements:
    System requirements documentation topic link
    System Requirements Software Product Compatibility Report link

Release notes for interim fixes:
    Release notes: IBM Product Master 12.0 Interim Fix 1 - APARs
    Release notes: IBM Product Master 12.0 Fix Pack 2 Interim Fix 1 - APARs
    Release notes: IBM Product Master 12.0 Fix Pack 2 Interim Fix 2 - APARs
    Release notes: IBM Product Master 12.0 Fix Pack 4 Interim Fix 1 - APARs, Stack upgrade, and enhancements
    Release notes: IBM Product Master 12.0 Fix Pack 4 Interim Fix 2 - APARs
    Release notes: IBM Product Master 12.0 Fix Pack 5 Interim Fix 1 - APARs, Stack upgrade, Security fixes, and new Integration REST APIs
    Release notes: IBM Product Master 12.0 Fix Pack 5 Interim Fix 2 - APARs
    Release notes: IBM Product Master 12.0 Fix Pack 7 Interim Fix 1 - APARs
    Release notes: IBM Product Master 12.0 Fix Pack 7 Interim Fix 2 - APARs
    Release notes: IBM Product Master 12.0 Fix Pack 7 Interim Fix 3 - APARs
    Release notes: IBM Product Master 12.0 Fix Pack 8 Interim Fix 1 - APARs

IBM Product Master Version 12.0 Fix Pack 8 (February 2023)


Persona-based UI enhancements

Free Text Search enhancements


Ability to save a free text search and load a saved free text search, so you can reuse previously saved searches.
Ability to switch between a detailed and a compact data view in the enhanced, more intuitive user interface.
Support for a category-level filter while performing an item search.
For more information, see Using Free text search.
OpenSearch replaces Elasticsearch because of the licensing change by Elastic NV (Elasticsearch is no longer open source). You must move to OpenSearch
2.4.1 or later and run a full indexing. You must install and configure OpenSearch by using your own SSL certificate.
For more information, see Installing and configuring the OpenSearch.
Data flattening
Ability to publish product data in a flattened format in OpenSearch.
Data is stored in a JSON format as "Attribute Name:Value" pair instead of "Node Id:Value" pair.
Ability to quickly export this data to external systems for any kind of analytics or reporting purposes.
For more information, see Using OpenSearch to access indexed content.
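As an illustration only (the attribute names, node IDs, and values below are hypothetical, not taken from the product), the same item might be indexed as follows
before and after flattening:

    Before flattening (Node Id:Value pairs):
        { "10231": "Espresso Machine", "10232": "Red" }

    After flattening (Attribute Name:Value pairs):
        { "Product Name": "Espresso Machine", "Color": "Red" }

The flattened form is what makes quick export to external analytics or reporting systems practical, because consumers do not need the Product Master data model to
interpret the keys.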
Traditional search enhancements
Ability to search on multiple values by using an "IN" operator. You are no longer required to add multiple conditions to search for a set of values; instead,
you can group them into a comma-separated list and search, for example: 'Red', 'Blue', 'Green'.
Ability to search based on a comparison of string lengths by using the string length operators "greaterThanOrEqualTo" and "lessThanOrEqualTo".
For more information, see Using Search.

Other enhancements

New Horizontal Pod Autoscaling (HPA) feature - The HPA changes the resource allocation by automatically increasing or decreasing the number of pods in response
to CPU or memory consumption. You need to enable the feature from the CR file; a hedged CR excerpt appears after this list.
For more information, see Enabling Horizontal Pod Autoscaler.
UI now supports Angular v12.
A new facet in the specification allows users to view multi-occurrence grouping attributes in a tabular format, thus reducing the challenges and complexity
of handling multi-level grouping attributes.
For more information, see Customizing Personal settings.
Ability to quickly filter categories when a product is associated with multiple categories on the Categories tab of the single-edit page.
For more information, see Tabs for an entry.
Support for inline editing of all attribute types within the collaboration page, including multi-occurrence attributes.
For more information, see Enriching entries (Bulk updates).
Options to rearrange the images after uploading into a multi-occurrence image or thumbnail attribute.
For more information, see Enriching entries (Bulk updates).
Support for logging of username and IP address in the log files.
For more information, see Enable custom logging.
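For orientation, the following is a hedged excerpt of what HPA enablement in the CR file might look like. The field names are hypothetical placeholders; the actual
schema for your operator version is defined in the Enabling Horizontal Pod Autoscaler topic:

    # Hypothetical CR excerpt; check the operator documentation for the
    # actual field names and defaults.
    spec:
      hpa:
        enabled: true
        minReplicas: 2
        maxReplicas: 5
        targetCPUUtilizationPercentage: 80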

Integrations REST APIs

POST API to search for an item in the catalog based on the search criteria.
For more information, see Integrations REST APIs - Search.
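As a rough sketch only (the endpoint path, payload schema, and header usage here are hypothetical; the authoritative contract is in the Integrations REST APIs -
Search topic and its swagger file), a search request that uses the "in" operator might resemble:

    # Hypothetical request shape, shown for illustration only.
    curl -X POST "https://<host>:<port>/api/catalogs/<catalog-name>/items/search" \
      -H "Content-Type: application/json" \
      -H "Authorization: Bearer <token>" \
      -d '{
            "attributePath": "Product Specs/Color",
            "operator": "in",
            "values": ["Red", "Blue", "Green"]
          }'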

Release notes
IBM Product Master, Version 12.0 Fix Pack 8 release notes

IBM Product Master Version 12.0 Fix Pack 7 (October 2022)


Persona-based UI enhancements

Workflow management
Ability to create, edit, or delete workflows through the Persona-based UI.
Ability to view end-to-end flow of an entry through a workflow.
Ability to drag steps while you are designing a workflow.
For more information, see Using Workflow console.
Automated Suspect Duplicate Processing (SDP)
Automated step in the traditional SDP workflow.
Compares an item with the master record and the threshold limit, and automatically marks as a "Match" or a "No-Match" (configurable).
For more information, see Automating SDP.
Container-related enhancements
Product Master pods are now created or deleted based on whether a feature is enabled or disabled, to save CPU and memory.
New customization approach. Instead of creating a custom image, you can use the default image for deployment and then proceed with customization.
Switched workloads from Deployment to StatefulSet for better pods management.
Product Master containerized version now supports Magento connector.
For more information, see Installing the product by using Operators.

Other enhancements

A workflow step has an error status icon that, when clicked, displays an error pane where you can view the attributes that failed validation. Select a record to see the
attributes with errors highlighted. For more information, see Validation error.
An administrator can release items that are reserved by another user. For more information, see Enriching an entry.
Support of expression predicates for String and String enumeration attributes in the Rules console. For more information, see Rules expressions.
Integrated IBM Watson® Assistant (chatbot) within the application. For more information, see Chatbot.



Relationship search allows searching for entries based on any attribute of the selected catalog (except the Display attribute). For more information, see Relationship
editor.
All tabs are also available as a drop-down list on the single-edit page. Select a tab to browse directly to it. For more information, see View tabs.
Email notification from a workflow step now supports direct action (Approve or Reject). For more information, see Notifications.
Save search results on the Free text Search as a saved list. For more information, see Save as list.
Merged Composite rule support. For more information, see Composite rule.
Integrations REST APIs
API for Get Collaboration area entries (Item or categories).
PUT API to update category from collaboration area.
For more information, see Integrations REST APIs - Collaboration Area.
Improvements
Improved workflow processing for items, which ensures that items do not remain in a step for a long time.
Improved performance for saving an item that is mapped to more than 30 categories.
Improved loading of hierarchies with more than 1500 subcategories.
Improved application performance when the data has many localized attributes configured.

Release notes
IBM Product Master, Version 12.0 Fix Pack 7 release notes

IBM Product Master Version 12.0 Fix Pack 6 (May 2022)


Persona-based UI enhancements

Manage daily Job execution and analysis.


Focused dashboard depicting a consolidated view of all types of jobs.
Graphical views of job runs by status and failure enable you to quickly review failures and take appropriate action.
Ability to run and create new schedules for all the preexisting jobs.
A holistic view of the job run status, schedules, and status of the run jobs.
Enhanced UI that enables you to search for a job and easily navigate from one job type to another.
For more information, see Job Summary dashboard.

Personalize your home page.


New actionable metrics provide more contextual and useful information for daily activities.
Multiple predesigned widgets with wider insights of the product data.
Chosen widgets are retained even after re-login.
Quick glimpse of the entire Data Model with navigation to each available data type.
For more information, see Navigating the Persona-based UI.

Customize user interface.


Ability to select a "Custom theme".
Admin - You can now create multiple themes and choose a theme from the Settings page, under the Application Settings tab. The Choose Theme drop-
down list displays only user-created themes along with the default theme.

The Settings page now has separate Personal Settings and Application Settings tabs that help to differentiate between settings for the entire application and user-
level settings.
Admin - Sets application-level settings such as enabling free text search, Digital Asset Management (DAM), home page titles, themes, and so on.

New configuration settings to enable you to perform more customizations from the UI.
For more information, see Customizing Application settings and Customizing Personal settings.

Other enhancements

Easy-to-interpret Import Job status report.


Import Job status report for success, failures, and errors is now available in a Microsoft Excel file format that is simpler to interpret.
The Microsoft Excel file has a detailed report for import that is performed on workflow step, catalog, lookup table, categories, and mass edit.
For more information, see Using Job console.

Vendor user login with LDAP is now supported. For more information, see Integrating with Product Master and Working with the Vendor persona.
Vendor user with SSO is now supported. For more information, see Configuring SAML SSO (Accelerated deployment) and Timeout behavior in the Persona UI.
Persisting of item-level rule violations - The user is not required to save the item to view the error panel when the user navigates from:
the Data Sanity by Rules dashboard to the single-edit page of an item, or
a workflow step multi-edit page to the single-edit page of an item.
For more information, see Data Sanity dashboard.

Help text for Free Text Search page that guides you with types of supported searches.
UI enhancements to the Multi-occurrence grouping attributes make it easier to identify each occurrence and group by using color differentiation for each nested
level of grouping.
Maintain catalog selection on relationship pop-up window.
Ability to view the details of a specification. To see a quick overview of the entire spec, click Grid View on the Spec details page to view all the attributes and facets
in a tabular form.
The updated folder structure for the Customization and Custom Theme features for a new configuration is as follows:
Customization: mdm_ui.war.ear/mdm_ui.war/js/custom

Custom theme: mdm_ui.war.ear/mdm_ui.war/custom-style

Integrations REST APIs


PUT API to update category from the Explorer.
Lookup table – Add, update, and delete entries.
Delete items from catalog.

Generic documentation updates

Added product uninstallation steps for Kubernetes and Red Hat® OpenShift® platforms. For more information, see Uninstalling the product.



Updated Machine Learning services documentation. For more information, see Using machine learning (ML) assisted data stewardship.
Updated GDS documentation. For more information, see Global Data Synchronization (GDS) and Synchronizing product data (GDS).
Updated documentation for the History, Audit, and Change Review tabs. For more information, see Tabs for an entry.
Reorganized Managing Master Data section. Also, each feature topic now displays role-based access. For more information, see Persona-based UI.
Added a chart that displays Operator vs Product version mapping. For more information, see Installing the product by using Operators.

Release notes
IBM Product Master, Version 12.0 Fix Pack 6 release notes

IBM Product Master Version 12.0 Fix Pack 5 (October 2021)


Pagination support for Multi-occurrence group attributes

Ability to enable pagination for multi-occurrence grouping attributes by setting the value of the multiOccGrpPaginationView property to true in the config.json
file, as shown in the sketch after this list.
Ability to switch to any page by using the navigation buttons like first, last, next, and previous.
The user is navigated directly to the last page when a new occurrence is added.
Ability to jump to any page by providing the page number in the pagination section.
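For example, a minimal sketch of the relevant entry in the config.json file (surrounding properties are omitted; only the multiOccGrpPaginationView key is taken
from the description above):

    {
      "multiOccGrpPaginationView": true
    }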

License update to address the high capacity restriction for IBM Db2®
A new eAssembly enables users to deploy IBM Db2 on higher hardware configurations.
Release notes
IBM Product Master, Version 12.0 Fix Pack 5 release notes

IBM Product Master Version 12.0 Fix Pack 4 (August 2021)


Persona-based UI enhancements

Category management including the ability to view, add, edit, or clone the categories. For more information, see Using Explorer.
View categories attribute details from the Explorer page. For more information, see Using Explorer.
Create, modify, or delete Attribute collection. For more information, see Using Attribute collection console.
Lookup Table management with the ability to create Lookup spec and add or edit Lookup table values. For more information, see Using Lookup Table console.
Availability of the docstore to view files and folders. For more information, see Using File Explorer.

Other enhancements

Import and export items directly in a catalog from the Explorer and Search pages. For more information, see Export and Import feature (catalogs and
hierarchies).
Visualization view for the Relationship and Linked entries on the single-edit page. For more information, see Tabs for an entry.
API to show step and collaboration areas in the Persona-based UI.
UIHelper interface APIs provide the URL for the Persona-based UI for a specific set of screens or operations.
Support to view changes on the multi-edit page.
Ability to view or download import, export, and other logs from the Job Details pop-up window.
Support for the unit of measure (UOM) as a new spec schema facet in the Admin and Persona-based UI.
Ability to customize sample Vendor code according to the business requirements. For more information, see Using the sample Vendor code.

Release notes
IBM Product Master, Version 12.0 Fix Pack 4 release notes

IBM Product Master Version 12.0 Fix Pack 3 (June 2021)


Dockerization and deployment

Support for the WebSphere® Liberty Images for containerized setup.


Lightweight Java™ runtime environment images for Cloud.
Images are highly composable, start fast, use less memory, and scale easily.
Full support for the Operator Lifecycle Management for containerized deployments.
For more information, see Installing the product by using Operators.
Support for the Simple Mail Transfer Protocol (SMTP) server port configuration and service authentication.
Enables Product Master integration with SMTP services that enforce authentication and run on a nondefault port; a hedged configuration sketch appears after this list.
For more information, see Simple Mail Transfer Protocol (SMTP) parameters.
Support for SAML 2.0 Just In Time (JIT) provisioning for the Admin UI and Persona-based UI.
JIT enables more efficient integration of SAML to provide a seamless login experience for users.
JIT automates user account and group creation and does not need a local LDAP. You need to reconfigure the existing SAML configuration.
For more information, see Configuring SSO.
Database JAR files are now included as part of the application package. Deployment effort is reduced because you no longer need to install the IBM Db2 or
Oracle database client before product installation.
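As a hedged sketch of the SMTP configuration (the property names below are hypothetical placeholders; the actual names and accepted values are listed in the
Simple Mail Transfer Protocol (SMTP) parameters topic):

    # Hypothetical SMTP properties, for illustration only.
    smtp_host=smtp.example.com
    smtp_port=587
    smtp_auth_enabled=true
    smtp_auth_user=pim-notifications@example.com
    smtp_auth_password=<encrypted-password>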

Security enhancements

Enhanced password security by using the SHA-256 with salting mechanism.


To mitigate password attacks and align with industry-standard security practices for cloud, passwords are made more secure by enforcing uniqueness and
increased complexity. You therefore need to update the existing passwords.
For more information, see Installing password security update.
Admin UI support for password strength, password expiry, and user lockout.
New properties in the common.properties file enable enhanced password restriction and security for all users; a hedged sketch of such properties appears after this list.
For more information, see common.properties file parameters.
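A hedged sketch of such password-policy properties (the property names below are hypothetical placeholders; the actual names and defaults are documented in the
common.properties file parameters topic):

    # Hypothetical password-policy properties, for illustration only.
    password_min_length=8
    password_expiry_days=90
    max_invalid_login_attempts=5
    account_lockout_duration_minutes=30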

Business UI enhancement

Improved usability in the workflows.



Better user experience - Actions to be taken on the entries in the current workflow step are now moved to the right of the toolbar as separate buttons
for each action.
Easy navigation - Navigate to any step of the workflow by using a drop-down list of all the available steps, from the multi-edit page of the workflow step.
Custom theme
Improved support for custom themes enhances the user experience.
For more information, see Customizing theme.
Performance improvements
Caching capability added for the String Enumeration rules and Lookup table values.
Caching mechanism added to the REST APIs to avoid too many database calls.
Import and Export enhancements on workflow processing.
Ability to export selected items and selected attributes to a Microsoft Excel file from a workflow step.
Map items to categories through the import.
All available Enumeration and Lookup table values are provided in the Microsoft Excel file as a drop-down list.
Adding or updating secondary specification attributes is supported through the import.
For more information, see Export and Import feature (collaboration area).
Ability to refresh the collaboration area items count on the dashboard.
Ability to expand all, collapse all, and show more options added to the Multi-occurrence type of attributes.
Improved the UI performance by displaying 10 occurrences on load and then providing the flexibility to show more occurrences.
All the occurrences can be expanded and collapsed on a single click.
API support added to export items from the catalog.
Users who have direct access to items can export them by using the API. UI support is planned for the next release.
Added capability to publish specs from Product Master to the Watson™ Knowledge Catalog as assets. For more information, see Publishing specs to the
IBM Watson Knowledge Catalog.

Stack upgrade

MongoDB upgraded to version 4.0.22.


Hazelcast IMDG upgraded to version 4.1.

Release notes
IBM Product Master, Version 12.0 Fix Pack 3 release notes

IBM Product Master Version 12.0 Fix Pack 2 (December 2020)


Added support for expressions predicate type in the Rules engine.
Implementation of redirection to linked or related items from the Linked items or Related items tab on the single-edit page.
Implementation of the Flag Attribute as a radio button on the single-edit page.
Security Assertion Markup Language (SAML v2.0) support for the Persona-based UI and Admin UI with Windows and form authentication methods.
Operator-based product deployment with container images. For more information, see Deploying on the Kubernetes cluster.
Official support of the Microsoft Edge browser for both the Admin UI and the Persona-based UI.
Magento connector support upgrade from version 2.0 to version 2.4. For more information, see Magento2 connector.
Persona-based UI and custom sample library supports Angular version 8. For existing custom components, upgrade your custom sample library. For more
information, see Upgrading the existing custom sample library.
Authentication is now mandatory for MongoDB, so you must correctly enable authentication for Digital Asset Management and Machine Learning. For more
information, see Enabling MongoDB authentication.
Secure Sockets Layer (SSL) support is available for MongoDB. For more information, see Enabling SSL for MongoDB. A hedged mongod configuration sketch
appears after this list.
Persona-based UI supports password strength check, password expiry, and user lockout features. For more information, see common.properties file parameters.
Admin UI supports password strength check feature while creating or updating user information.
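For background, enabling authentication and SSL on the MongoDB side itself uses standard mongod options; a minimal sketch for MongoDB 4.0 follows (the
certificate path is a placeholder, and the Product Master-side settings are described in the linked topics):

    # mongod.conf excerpt (standard MongoDB 4.0 options)
    security:
      authorization: enabled
    net:
      ssl:
        mode: requireSSL
        PEMKeyFile: /etc/ssl/mongodb.pem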

IBM Product Master, Version 12.0 Fix Pack 2 release notes

IBM Product Master Version 12.0 Fix Pack 1 (August 2020)


GDS Supply-Side implementation through the Persona-based UI
Using the GDS Persona, you can access the GDS feature in the Persona-based UI. The GDS feature is accessible to the Full Admin and GDS Supply Editor roles. The
GDS Supply Editor role is loaded when you run the loadGDSDatamodel script.
Item completeness feature:

Item completeness feature allows you to track the completion percentage of any item. The completion is calculated based on the preselected attributes in
an attribute collection.
Completeness tab in the single-edit page for the item completeness.
Completeness indicators on the single-edit, multi-edit, Search, Explorer, and Free Text Search pages.

For more information, see Configuring Item completeness.


Improved data visualization through Data Sanity and other dashboards

“Data Completeness by catalog and channels” and “Data Quality by Rules” metrics provide key indicators around data quality and completeness.
Ability to drill down to the detailed view and open an Item or a Category from dashboards.

For more information, see Viewing dashboards.


Saved Templates as Quick Links on the Home page
For more information, see Using Search.
Fuzzy search support in Free Text Search

Ability to enable Free Text Search fuzzy search to search for a list of results based on likely relevance even though search argument words and spellings may
not exactly match.

Documentation updates

Vendor-related: Postinstallation instructions, and Working with the Vendor persona



Installation-related: Installing
Log4j file-related: Updates related to the log4j.xml file
Machine learning-related: Accessing the machine learning by using the REST APIs
Integration APIs-related: Integrations REST APIs
SKU-related: Generating Stock Keeping Unit (SKU)
SDP-related: Using Suspect Duplicate Processing (SDP)
DAM (Generic and Renditions-related): Using Data Management and Generating renditions
Connectors-related: Integrating connectors.

Release notes
IBM Product Master, Version 12.0 Fix Pack 1 release notes

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

IBM Product Master considerations for General Data Protection Regulation (GDPR)
readiness
This section provides information about using IBM Product Master to help your organization achieve GDPR readiness.

For PID: 5725-E59, UT:BDP30, IBM Product Master

Disclaimer
Note: This document is intended to help you in your preparations for GDPR readiness. It provides information about features of Product Master that you can configure, and
aspects of the product's use that you should consider, to help your organization with GDPR readiness. This information is not an exhaustive list, due to the many ways that
clients can choose and configure features, and the large variety of ways that the product can be used in itself and with third-party applications and systems.
Clients are responsible for ensuring their own compliance with various laws and regulations, including the European Union General Data Protection Regulation.
Clients are solely responsible for obtaining advice of competent legal counsel as to the identification and interpretation of any relevant laws and regulations that
may affect the clients’ business and any actions the clients may need to take to comply with such laws and regulations.

The products, services, and other capabilities described herein are not suitable for all client situations and may have restricted availability. IBM does not provide
legal, accounting, or auditing advice or represent or warrant that its services or products will ensure that clients are in compliance with any law or regulation.

GDPR
General Data Protection Regulation (GDPR) has been adopted by the European Union ("EU") and applies from May 25, 2018.
Product configuration considerations for GDPR readiness
This section provides considerations for configuring Product Master to help your organization with GDPR readiness.
Data life cycle
Product Master is a general-purpose product information management solution that allows organizations to create a single view of product information. It is part
of the Product Master portfolio. The product master data is made available for use by application systems and business processes.
Data collection
When assessing your use of Product Master and the requirements of GDPR, you should consider the types of personal data that, in your circumstances, pass
through Product Master.
Data storage
As described in the data collection section, Product Master stores information in its catalogs.
Data access
Data stored in the Product Master repository can be accessed through several product interfaces.
Data processing
The way data is processed in Product Master is controlled by the business users through the user interface, in concert with defined workflows that reflect the
requirements of product categories and secondary specifications. Tasks can also be invoked through the Product Master API by calling applications.
Data deletion
GDPR provides individuals with the right to have their data deleted upon request. Product Master includes services that enable organizations to delete
individual/person data.
Data monitoring
You should regularly test, assess, and evaluate the effectiveness of your technical and organizational measures to comply with GDPR. These measures should
include ongoing privacy assessments, threat modeling, centralized security logging and monitoring among others.
Responding to data subject rights
Product Master's range of capabilities is available to address the needs to respond to data subject rights requests.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

GDPR
General Data Protection Regulation (GDPR) has been adopted by the European Union ("EU") and applies from May 25, 2018.

Why is GDPR important?


GDPR establishes a stronger data protection regulatory framework for processing of personal data of individuals. GDPR brings:



New and enhanced rights for individuals
Widened definition of personal data
New obligations for processors
Potential for significant financial penalties for non-compliance
Compulsory data breach notification

Related information
GDPR EU
Transform your business with the GDPR

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Product configuration considerations for GDPR readiness


This section provides considerations for configuring Product Master to help your organization with GDPR readiness.

Configuration to support data handling requirements


Product Master offers a range of capabilities to assist with this. The following sections provide considerations for data handling to help your organization prepare for GDPR
readiness.

Configuration to support Data Privacy


Product Master supports Data Privacy in active ways with a number of features. The following sections provide insight into these features and how they can best be used
to prepare for GDPR readiness.

Configuration to support Data Security


Similarly, Product Master offers a range of capabilities to achieve appropriate Data Security. The following sections provide considerations for data handling to help your
organization to prepare for GDPR readiness.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Data life cycle


Product Master is a general-purpose product information management solution that allows organizations to create a single view of product information. It is part of the
Product Master portfolio. The product master data is made available for use by application systems and business processes.

Although the primary use of Product Master is to master product data, it does allow a range of information to be captured. If this includes personal data, this may require
you to determine your GDPR readiness. Review the following section to determine whether this applies to your organization.

Product Master ingests data in real-time or batch mode and is typically integrated with several source systems. Likewise, Product Master makes master data available
through publishing to target systems (such as an eCommerce site).

Product Master implements workflow-based processes that involve various individuals and teams that perform certain tasks (such as legal review, pricing, and approval) to
prepare the product information for publishing.

What types of data flow through Product Master?


As a general-purpose MDM for Product solution, there is no one definitive answer to this question because use cases vary from user to user. However, it is likely that
customers of Product Master use it to interact with data that relates to the following categories:

Product information - This is at the core of this solution: the entry, maintenance, and publishing of product information.
Supplier information - Often associated with the product information is supplier data and potentially contact information.
Business user data – The user of the system may have some of their information stored in the system. This may be limited to user credentials or contain a broader
range of information.

What are the lawful bases for processing?


The lawful bases for processing are set out in Article 6 of the GDPR. At least one of these must apply whenever you process personal data:

Consent
The individual has given clear consent for you to process their personal data for a specific purpose.
Contract
The processing is necessary for a contract you have with the individual, or because they have asked you to take specific steps before entering into a contract.



Legal obligation
The processing is necessary for you to comply with the law (not including contractual obligations).
Vital interests
The processing is necessary to protect someone’s life.
Public task
The processing is necessary for you to perform a task in the public interest or for your official functions, and the task or function has a clear basis in law.
Legitimate interests
The processing is necessary for your legitimate interests or the legitimate interests of a third party unless there is a good reason to protect the individual’s personal
data which overrides those legitimate interests. (This cannot apply if you are a public authority processing data to perform your official tasks.)

Personal data used for online contact with IBM


Clients can submit online comments, feedback, or requests to contact IBM about Product Master topics in a variety of ways, primarily:

Public comments area on pages in the IBM Product Master community on the IBM® Developer site.
Public comments area on pages of IBM Product Master product documentation.
Feedback forms in the IBM Product Master community.

Typically, only the client name and email address are used, to enable personal replies for the subject of the contact, and the use of personal data conforms to the IBM
Online Privacy Statement.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Data collection
When assessing your use of Product Master and the requirements of GDPR, you should consider the types of personal data that, in your circumstances, pass through
Product Master.

While Product Master’s core data entity is product information, personal data can potentially also be stored. If your implementation stores personal data, such as contact
information for supplier organizations, consider changing this practice as part of your GDPR preparations. The data will normally arrive through Information Integration
tooling or directly through the program’s APIs. This data is then persisted in Product Master.

Product Master operates in an environment where other elements are required, such as IBM® WebSphere® Application Server and IBM Db2® or Oracle database. In doing
so, information flows to and from those elements. This may include user credentials.

Because this information enters through other systems, the required consent is assumed to be secured by those systems or the business processes associated with them.
Nonetheless, you may wish to consider aspects such as:

Was lawful consent in the upstream systems and processes established based on the criteria described in the previous section?
How does the data arrive into Product Master? Are all data paths known and verified? Is the data encrypted per enterprise standards?
Where user credentials are exchanged with other applications, the inclusion of personal data should be avoided; authentication data (user IDs, passwords, and
API keys) is collected in Product Master. Ensure that this information is encrypted by using the facilities provided with Product Master (which leverage WebSphere
Application Server functionality to do so) or consider the use of an external directory such as LDAP.
Product Master provides many ways for you to configure and customize the use of this solution. Thus, during development, data is entered in the Product Master
solution for testing purposes. Consider what data is used at this stage; in the case of personal data, avoid the use of any real-world data or, if real data must be used,
apply appropriate masking and anonymization capabilities.

Types of data collected


Product Master is aimed at the collection and refinement of product information, but it allows for the collection of a range of data. This information is stored in catalogs,
and these catalogs can be used for virtually any purpose. The use of the catalogs is determined by your needs. It is possible that one or more catalogs are focused on or
include personal data. You should review the catalogs and their content for such inclusions. This should be the focus of your assessment of your GDPR readiness.

As previously described, the use of this personal data may be covered under lawful use and thus not require additional handling, but this needs to be established in your
review. Data minimization may be an important aspect of that review (removal of any personal data that is not covered under the lawful-use provisions). Any additional
attributes need to be reviewed.

For more information, see the documentation about the Roles and tasks of Product Master.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Data storage
As described in the data collection section, Product Master stores information in its catalogs.

Transparency and data minimization


The Product Master catalogs are defined by the developers as part of the implementation of the solution. There should be a record of implementation artifacts that can
provide insight into the attributes that are captured in the catalogs. Alternatively, reports can be run to get information on the catalog attributes.



All data is available for access, subject to sufficient access rights, through Product Master APIs. It can thus be made available and reviewed as needed.

Protection
Product Master provides a comprehensive security model that provides you with mechanisms to protect the data stored in the Product Master datastore as required by
you needs. These include:

Definition of users and roles


Access control groups
Data access and protected catalogs

These mechanisms should be considered together when deriving a strategy around data security for the granular protection that is needed. More on Product Master
security can be found at Creating a security model.

Additional considerations
The following data storage mechanisms are used by Product Master, which users may wish to consider when assessing their GDPR readiness.

Storage of account data


Storage in backups

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Data access
Data stored in the Product Master repository can be accessed through several product interfaces.

The following Product Master product interfaces are designed to enable data access. Some interfaces provide access through a remote connection, while others exist to
provide access through a local connection.

Web services (remote)


RESTful API (remote)
Web user interfaces (remote) - Business Users, Administrators

These interfaces are designed to allow for real-time integration directly with other systems and business processes (web services, APIs) or to give business users and
administration staff interactive access. All access is controlled through Product Master's security facilities.

Authentication of users is typically handled externally through integration with an LDAP-compatible directory service. Once authenticated, the user authority can be
established and, through Product Master security, the user has only the granted levels of access and rights. For more information about LDAP integration, see LDAP
integration.

Product Master provides optional logging and history management functionality. For more information about event logging and history management, see Enabling event
logging using history manager.

If you are using these functions, give due consideration to securing the associated history and log files. Product Master security can be applied to these files in the Product
Master environment. If these files are used outside of the Product Master environment, such as for data replication purposes, further security measures may need to be
taken.

Additional considerations
Development Users and Troubleshooting
During the development stage, developers, data engineers, and other staff work with an instance of Product Master to create the Product Master
solution that addresses the specific use cases that your organization was, or is, implementing. For this, the development users need test data that is
representative of the real-world data to be effective. Care should be given to the creation of the test data; there should be no data that is, or may be reconstituted to
be, actual client data. Similarly, care should be given to data access and use when developers assist in the resolution of problems with the production
environment.
Administrators
Product Master administrators may have special rights to access different parts of the Product Master solution and potentially have access to personal data. Thus,
this role should be granted only to select individuals. Further, Product Master security should be used to prevent administrators from performing tasks that
fall outside of their needs.
User roles
For a full list of Product Master user roles, see Roles and tasks.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Data processing
The way data is processed in Product Master is controlled by the business users through the user interface, in concert with defined workflows that reflect the
requirements of product categories and secondary specifications. Tasks can also be invoked through the Product Master API by calling applications.



Encryption for data at rest as well as in transit is available, and it is recommended that you implement it if exposure exists for unauthorized access to the MDM systems
environment. This is especially applicable if remote connections extend beyond your firewall.

Physical storage and hosting of personal data


Product Master is a potentially critical part of an enterprise's systems environment, and this means that measures need to be taken to ensure that the appropriate level of
protection and redundancy is implemented to achieve a suitable SLA level. In other words, high availability and disaster recovery architectures are common for Product
Master deployments. Several patterns exist for this, but a full review of them is beyond the scope of this document. Contact your IBM representative for more information
and assistance to address your unique needs. However, the following aspects should be considered, if applicable:

Primary data center


Per the information in this document.
Backup sites
If applicable, the same measures and protections that are in place for the primary data center should apply to any backup sites. Furthermore, the connectivity
and switchover facilities that exist between the primary site and the backup sites must ensure the integrity of the data protection of the MDM data, especially the
personal data content.
Archives
Product Master does not include an archiving capability. However, if archiving is implemented through external facilities, its data protection, access rules, and so on
should be in line with the main Product Master instance, or potentially even more restrictive.
Mirroring
In some configurations, mirroring, data replication, or other forms of maintaining multiple instances of the Product Master environment are used. In those cases, the
same considerations apply as described above for backup sites, with the additional consideration of the mechanisms used to direct the transaction activities
(requests/responses) to the correct instance (such as load balancers).

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Data deletion
GDPR provides individuals with the right to have their data deleted upon request. Product Master includes services that enable organizations to delete individual/person
data.

Right to erasure
Article 17 of the GDPR states that data subjects have the right to have their personal data removed from the systems of controllers and processors - without undue delay -
under a set of circumstances.

Data deletion characteristics


Product Master offers services for deletion of the stored information; because the primary use scenario is product information, deletion is tied into the lifecycle of those
products. These services can be invoked as needed. The deletion services have different options, so that the data for a single record or for multiple records can be
deleted, with or without its history. This functionality can also be used to remove personal data if the solution has been used to store it. As described in an earlier portion
of this document, the cases where personal data is stored should be minimized and documented.

Note: Product Master allows deletion services to be invoked by the business user, if so authorized, as well as through transactions through the offering's API.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Data monitoring
You should regularly test, assess, and evaluate the effectiveness of your technical and organizational measures to comply with GDPR. These measures should include
ongoing privacy assessments, threat modeling, centralized security logging and monitoring among others.

Product Master provides optional logging and transaction audit trail functionality. It is strongly recommended that you make use of these facilities if personal data is
stored in the solution. See Enabling event logging using history manager.

Consider the following issues when leveraging the product capabilities in this area:

Product Master logging can be configured to your needs, but should, at a minimum, include successful and unsuccessful logon events, privileged activities, and
security events.
A security event should be logged and investigated if a potential attempted or successful breach of access controls is detected.
Ensure logs contain sufficient information about the event. For example, include the type of event, when the event occurred, where the event occurred, the source
of the event, the outcome (success or failure) of the event, and the identity of any user/subject/device associated with the event.
Retain logs on the system for at least 90 days.
Protect logs against unauthorized access.
Keep system clocks synchronized with a common reference time source to improve log accuracy.
Product Master uses IBM® WebSphere® Application Server (WAS), and many monitoring facilities exist within WAS. These should be leveraged to extend the data
monitoring to this level, potentially in concert with other applications, whether they are integrated with Product Master or not.



IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Responding to data subject rights


Product Master's range of capabilities is available to address the needs to respond to data subject rights requests.

Product Master includes the ability to:

See various views of the data subject's information (this is not common).
Modify or correct any data subject information attribute (excluding those that are essential to data processing requirements and where lawful consent exists).
Delete data subject records or individual attributes as described in a previous portion of this document.

If it becomes clear that access, security, data usage policies, and so on need to be refined to meet regulatory requirements, Product Master's flexibility allows for rapid
customizations to achieve this.

The nature of the Product Master approach is that the unique needs of your business and implemented MDM use cases can be reflected in the solution. The Script
workbench is the primary tooling to accomplish this.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

IBM Product Master information roadmap


Find links to the information resources that are available for the IBM® Product Master.

Category: Product overview
    IBM Product Master website
    This website provides a broad range of information and resources about IBM Product Master and related topics.
    Introduction to IBM Product Master
    This information introduces you to IBM Product Master including features, architecture, and components.
    What's new in version 12.0
    This information describes the new features and enhancements in the IBM Product Master.
    What's new in fix packs
    This information describes the new features and enhancements included in the fix packs.

Category: Installing
    System requirements
    This information describes the system requirements for the IBM Product Master.
    Installing and uninstalling
    This information helps you to prepare for installation, install the product, and verify the installation. There is also information to help you to uninstall the product.
    Configuration files
    This information provides details about the parameters that are defined in the properties files.

Category: Migrating and upgrading
    Migrating
    This information describes the process of migrating from InfoSphere® Master Data Management Collaboration Server - Collaborative Edition to IBM Product Master.
    Upgrading
    This information describes the process of migrating from an older fix pack version to the latest fix pack version of the IBM Product Master.

Category: Managing master data
    Managing
    This information describes how to manage master data by using either the Admin or the Persona-based user interface, depending on the tasks that you want to
    perform. You can also view details for synchronizing product data with Global Data Synchronization (GDS).

Category: Administering
    Administering database
    This information describes how to set up and maintain your database for your IBM Product Master system.
    Administering system
    This information describes how to manage data models, users, roles, and system services.
    Administering Product Master system
    This information describes how to set up your company, manage your IBM Product Master system, and all of your services.

Category: Develop applications
    Developing
    This information describes how to develop applications to use with IBM Product Master.
    Developing the solution
    This information describes how a solution developer can take the specifications from the solution architects and create the data model objects, workflows and
    collaboration areas, import, export, and report jobs, selections and searches, and security objects.

Category: Integrating with other products
    Integrating
    This information guides you on how to integrate IBM Product Master with other products like IBM InfoSphere Physical Master Data Management, IBM InfoSphere
    Information Server, IBM Operational Decision Manager, IBM WebSphere® Commerce, IBM WebSphere MQ, scheduler applications, and connectors like Adobe
    InDesign, Amazon Marketplace Web Service, eBay Commerce Network Merchant Center, Google Merchant Center, JD Edwards, Magento2, and SAP.

Category: Performance
    Performance tuning
    This information explains how to monitor, plan, and tune the performance of your IBM Product Master application, application server, and Persona-based UI. You
    can also refer to best practices, the performance checklist, and the volume testing checklist.

Category: Troubleshooting
    Troubleshooting
    This information describes how to troubleshoot different issues, the tools available for troubleshooting, viewing log files, and commonly asked FAQs.

Category: Reference
    Reference
    This information provides details about the IBM Product Master Javadoc, REST APIs, Integrations REST APIs, configuration files, shell scripts, script operations,
    and Global Data Synchronization configuration files.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Overview
IBM® Product Master provides a highly scalable, enterprise Product Information Management (PIM) solution. Product Master is the middleware that establishes a single,
integrated, consistent view of products and services information of an enterprise.

Product Master capabilities


To become a global, on-demand e-business, an enterprise needs a source of product and services information to address business needs such as global data
synchronization, e-commerce, supply-chain management, and trading-partner management.
Product Master provides PIM solutions to the enterprise that centralize and optimize product data. Relevant and unique content is then delivered to any person, system,
business partner, or customer. Product Master provides an enterprise solution that accelerates the time to market, increases market share and customer satisfaction, and
reduces costs. With Product Master, companies can manage, link, and synchronize item, location, organization, trading partner, and trade terms internally and externally.

Product Master provides the following PIM solutions:

A flexible, scalable repository that manages and links product, location, trading partner, organization, and terms-of-trade information.
Tools for modeling, capturing, creating, and managing information with high user productivity and high information quality.
The ability to integrate and synchronize information with existing systems, enterprise applications, repositories, and masters.
Business user workflows for supporting multi-department and multi-enterprise business processes.
The ability to exchange and synchronize information externally with Business Partners.

Product Master architecture


IBM Product Master has a scalable architecture that provides security and operational redundancy (or high availability).
Key concepts
To use IBM Product Master, you need to understand the application key concepts.
Roles and tasks
For most PIM systems, the tasks that are accomplished in the product are covered by users in three main roles: business users, system administrators, and solution
developers. Depending on the role that you have, certain tasks might not be available for you.
Interacting with Product Master
There are two ways of interacting with IBM Product Master; through the user interface or the Product Master workbench.
Extending Product Master capabilities
You can extend the "out-of-the-box" capabilities of IBM Product Master with custom business rules, validations, and integration capabilities. Product Master
supports creating extensions with three programming languages.
Product accessibility
Accessibility features help users with physical disabilities, such as restricted mobility or limited vision, to use software products successfully.


Product Master architecture


IBM® Product Master has a scalable architecture that provides security and operational redundancy (or high availability).

Product Master has a component-based architecture that can consist of a two-tier or three-tier configuration. The Product Master components include core
components, integration components, and collaboration components. For best performance, run Product Master on a dedicated system, and restrict access to the system
to maintain security.

Product Master provides a PIM solution development platform with scheduler, business process management, event processing, queue management, and other common
components. It supports SQL-like business data query, object-oriented scripting, Java™ API programming, web service development, and other PIM solution development
features.

Product Master includes a web-based application with a three-tier architecture that consists of:

1. A web browser user interface for rendering PIM content in a browser, including static, dynamic, or cached data, on the client side.
2. A middle tier, which runs on an application server, with the functional modules that process user requests and produce PIM content on the server side.

3. A database management system (DBMS) that stores the data that is required by the middle tier that runs on a database server.

Core components
API layer

Java API
You use the Java APIs to extend a Product Master solution by creating Java client programs, services, or GUIs that invoke the APIs to access Product
Master functionality. You can also implement pre-defined interfaces and make the compiled classes available to the system, such that when a specific
extension point is reached, the system invokes the class by using a special URL. For a minimal illustrative sketch of a Java API client, see the example
at the end of this API layer description. For more information about Java APIs and how you can use them to access Product Master, see:
Java API
Search API
You use the Search API to search for information in the Collaborative Edition system. You can customize and define the searching model to support different
business user requirements. For more information, see
Searching the content management system.
Script API
The Script APIs provide a full-featured set of capabilities to extend a Product Master solution. For more information, see
Script API.
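
The following minimal sketch illustrates the flavor of a Java API client. It is a sketch only: the names that are used here (PIMContextFactory, Context, getCatalogManager, getCatalogs, cleanUp) are assumptions drawn from the published Javadoc, so verify them against the Javadoc that is installed with your release before use.

// Minimal Java API client sketch. Class and method names are assumptions;
// verify them against the Product Master Javadoc for your release.
import com.ibm.pim.catalog.Catalog;
import com.ibm.pim.context.Context;
import com.ibm.pim.context.PIMContextFactory;

public class ListCatalogs {
    public static void main(String[] args) throws Exception {
        // Log in to a company; the credentials here are placeholders.
        Context ctx = PIMContextFactory.getContext("admin", "password", "MyCompany");
        try {
            // Print the name of every catalog that this user can see.
            for (Catalog catalog : ctx.getCatalogManager().getCatalogs()) {
                System.out.println(catalog.getName());
            }
        } finally {
            ctx.cleanUp(); // always release the API session
        }
    }
}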
Business Object layer

Data objects
You use these objects to model the instance level objects at your site. Instance level objects are the objects that you perform actions on most often in
the course of a day. For example, products, SKUs, work items, and so on.
Metadata objects
You use these objects to model the structure of an implementation, for example, catalogs, hierarchies, and so on. These objects also define the
structure of the data objects, for example, specs that define attributes of items.
User modeling objects
You use these objects to capture the user model of an enterprise. The user model defines items like the reporting hierarchy of users, roles, data
access privileges, permissions, and so on.

Infrastructure layer

Queue manager
The queue manager service sends documents outside of Product Master.
Event processor
The event processor service dispatches events between all the modules.
Admin service
The Admin service is used to start or stop services. The Admin service also keeps track of the current state of other services. This service is required on
all computers where one or more services are running.
RMI registry
The RMI registry coordinates the communication between services by using RMI.
Scheduler service
The scheduler service runs the scheduled jobs such as imports, exports, and reports.
Storage layer

Docstore
The set of physical database tables and file system locations where you store the extended or unstructured content. For example, feed files, reports,
export job output, and so on. You use the document store to manage all incoming and outgoing files.
Product Master repository
The set of physical database tables that are used to persist the business objects in.

Integration Components
Custom tools
Custom tools provide a custom user interface in the Product Master system. You can use custom tools to extend the user interface of the Product Master system.
Web services
You can use the web services to invoke standard web service requests. A set of sample web services and custom web services that use the Product Master
scripting language are also provided. For more information, see
Planning for web services.
Import-export
The jobs that are responsible for ingesting the incoming data (imports) and generating data (export) to send to the external systems. These jobs move data between
Product Master and other applications from a system integration perspective. For more information, see
Integrating with upstream and downstream systems.

Collaboration Components
Workflow engine
This service processes the events that are related to business objects that are moving through business processes and have been captured in the workflows.
Data authoring UI
A set of user interface screens that are used to interact with the data objects (instance level business objects) to specify and enrich the data that is provided for
them and to set up associations between them. All user interface interaction is processed by the appsvr process.
Import-export
The jobs that are responsible for ingesting the incoming data (imports) and generating data (export) to send to the external systems. These jobs run in the context
of a scheduler service.

Information flow
The flow of information in Product Master is interdependent on several components.
Product services
IBM Product Master includes several components that are implemented as JVM services.
Performance planning
Ensure that you plan before you install IBM Product Master. Your planning can greatly affect the performance of Product Master.
Global Data Synchronization architecture
The Global Data Synchronization feature of IBM Product Master architecture consists of the supply and demand sides. Organizations that create and own data are
on the supply side, and the consumers of the data are on the demand side. The two sides communicate through third-party entities called data pools.


Information flow
The flow of information in Product Master is interdependent on several components.

The following image provides an example of the flow of information between the client, web server, or application server (where Product Master is installed), and the
database server.

The scheduler service, which manages import and export jobs in the background, can be on the application server or on a separate server, depending on load
requirements. If the scheduler is placed on a separate server, Product Master must bind the scheduler service to a specific Remote Method Invocation (RMI) port.

The following information flow is illustrated in this image:

An application server handles HTTP requests from users.
Services are started or stopped by using RMI.
The scheduler service uses the same RMI port as the one that is used to control services.
Application and scheduler servers communicate with the database server by using JDBC; a minimal connectivity sketch follows this list.
The scheduler can be run on a dedicated computer or on an application server.
In this example, the first server runs every service except the scheduler, and the second server runs the RMI registry, admin process, and scheduler.
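
The JDBC link in the preceding list can be verified with a short, standalone check. This is a minimal sketch that uses only the standard java.sql API; the connection URL, user name, and password are placeholders for your own Db2 or Oracle configuration, and the matching JDBC driver must be on the classpath.

// Standalone JDBC connectivity check from an application or scheduler host
// to the database server. The URL and credentials are placeholders; the
// Db2 or Oracle JDBC driver must be on the classpath.
import java.sql.Connection;
import java.sql.DriverManager;

public class DbPing {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:db2://dbhost:50000/PIMDB"; // placeholder Db2 URL
        long start = System.nanoTime();
        try (Connection conn = DriverManager.getConnection(url, "pimuser", "password")) {
            // isValid() sends a lightweight request to confirm the session works.
            System.out.println("Connected, valid=" + conn.isValid(5));
        }
        System.out.printf("Connect round trip: %.1f ms%n", (System.nanoTime() - start) / 1e6);
    }
}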


Product services
IBM® Product Master includes several components that are implemented as JVM services.

Six JVM services and the RMI (Java™ Remote Method Invocation) registry run concurrently in the product. The RMI registry registers all product services and
must be running before all other services are started.

Table 1. JVM Services

admin
The admin service starts and stops modules on remote computers.
appsvr
The application server service serves JavaServer Pages.
eventprocessor
The event processor service dispatches events between all the modules.
queuemanager
The queue manager service sends documents outside of Product Master.
scheduler
The scheduler service runs all scheduled jobs in the background. The scheduler provides a unified view to manage all jobs that are scheduled within Product Master. Through the Jobs Console, a job can be run based on a defined timetable and monitored with status information.
The scheduler service communicates with the application through the unified database server, the file system, and the rmiregistry.
workflow
The workflow engine processes workflow events that are posted to the database.
rmiregistry
The RMI (Remote Method Invocation) registry service is a standard Java mechanism that finds and starts methods or functions on remote systems. RMI is a type of RPC (Remote Procedure Call). In Java, a remote system can be on another physical system or on the same computer but in a different JVM. The rmiregistry is a simple directory: Java objects connect to the registry and register how to connect to them and what methods or functions they have. Other services look up the function that they need in the registry to find out where it is, and then call the remote object and run the method. An example is shutting down a service: the RootAdmin Java object looks up Product Master services in the registry, finds out how to contact them, and starts the shutdown method. As such, the rmiregistry service does not require a great deal of system resources. A generic code sketch of this registry pattern follows this table.
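
The registry pattern that is described for the rmiregistry service can be illustrated with the standard java.rmi API. This is a generic sketch only: the ServiceControl interface and the "admin" binding name are hypothetical, because Product Master's real remote interfaces and binding names are internal to the product.

// Generic illustration of the RMI registry pattern. The ServiceControl
// interface and the "admin" binding name are hypothetical; Product Master's
// real service bindings are internal to the product.
import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;

interface ServiceControl extends Remote {          // hypothetical remote interface
    void shutdown() throws RemoteException;
}

public class RmiLookupDemo {
    public static void main(String[] args) throws Exception {
        // Connect to a running rmiregistry and list everything bound in it.
        Registry registry = LocateRegistry.getRegistry("localhost", 1099);
        for (String name : registry.list()) {
            System.out.println("bound: " + name);
        }
        // Look up one named service and invoke a remote method on it.
        ServiceControl svc = (ServiceControl) registry.lookup("admin");
        svc.shutdown();
    }
}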


Performance planning
Ensure that you plan before you install IBM® Product Master. Your planning can greatly affect the performance of Product Master.

Adhere to the following common guidelines so that you avoid major performance problems:

Testing, profiling, and modifying the solution
For each line item, ensure that you set aside 20% extra time to complete testing, profiling, and modifying the solution as needed. Include this extra time
even before any performance problems are known.
Define all of your use cases
Determine which use cases are performance sensitive. Ensure that you identify the requirements, dependencies, and required performance. Allow extra
time for use cases that have a high potential for performance problems, for example, use cases that involve many specs, a large amount of location data,
or large numbers of workflow steps. Test and profile the use cases as they are developed, or as early as possible if there are any other dependencies.
Ensure that you do not delay performance testing until the end of the project. You need to establish a baseline for the use cases and have it approved
by the customer.
Identify the hardware that is needed for testing
Ensure that you identify the hardware that is needed for performance testing and have it available early in the project. The hardware for
performance testing should be a replica of the hardware that is planned for production. Performance testing and user acceptance testing should always be
done on hardware that is identical to production.
Allocate the size of the hardware
Allocating the correct hardware is critical to sustaining the performance of the solution. The correct size of the hardware that is required to effectively
run the final solution depends on:
The volume of the activity on the system
The overall complexity of the solution
Correct sizing can be done by working with the technical sales team, IBM services team, or the performance team.
Tune the allocated hardware

Correctly sized hardware is only effective when it is properly tuned. The following two key areas commonly appear as the cause of performance
problems:

Latency and bandwidth between the application server and database
The latency should be under 0.25 ms between the application server and the database. It can be measured by running the traceroute command on
most systems; a minimal Java sketch of such a check follows at the end of this topic. The connection between the two should be Gigabit Ethernet
that is capable of transferring large files at 25 MB/s through FTP.
Important: The preceding value is only applicable to a non-clustered environment.
Number of open descriptors is too low
Unexpected problems can be avoided by checking the number of open descriptors and verifying that they are set to 8000, according to the
WebSphere® Application Server guidelines. The number of open descriptors can be checked by using the ulimit -a command on most computers.

Balance the load and allow for failover
An easy way to address potential overloading of the application server is to use a load balancer. Multiple instances of the scheduler can be started on one or
more servers, and the various services of the scheduler load balance themselves automatically.
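
As a rough cross-check of the latency guideline in this topic, the following minimal sketch times a plain TCP connect from the application server to the database host. The host name and the port (50000, a common Db2 default) are placeholders; because a TCP connect includes handshake overhead, treat the result as an upper bound, and use ping or traceroute for authoritative numbers.

// Rough latency probe: times a TCP connect to the database host.
// Host and port are placeholders; a connect includes handshake overhead,
// so treat the result as an upper bound on network latency.
import java.net.InetSocketAddress;
import java.net.Socket;

public class LatencyProbe {
    public static void main(String[] args) throws Exception {
        String host = args.length > 0 ? args[0] : "dbhost"; // placeholder host
        int port = 50000;                                   // placeholder port
        long best = Long.MAX_VALUE;
        for (int i = 0; i < 5; i++) {                       // keep the best of 5 probes
            long t0 = System.nanoTime();
            try (Socket s = new Socket()) {
                s.connect(new InetSocketAddress(host, port), 2000);
            }
            best = Math.min(best, System.nanoTime() - t0);
        }
        System.out.printf("Best TCP connect time: %.3f ms%n", best / 1e6);
    }
}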


Global Data Synchronization architecture


The Global Data Synchronization feature of IBM® Product Master architecture consists of the supply and demand sides. Organizations that create and own data are on the
supply side, and the consumers of the data are on the demand side. The two sides communicate through third-party entities called data pools.

Supply Side (Supplier)


AS2 Product
The software application that communicates with the source data pool by sending and receiving XML documents by using the secure AS2 protocol. This software
application is a distinct product that you must purchase, install, and configure separately.
Supply side Global Data Synchronization feature
The feature of Product Master that is responsible for synchronizing item data with retailers.
IBM Product Master
The enterprise Product Information Management (PIM) solution that is used to manage, link, and synchronize item, location, organization, trading partner, and trade
terms data.
Supplier Content Repository
The database on the supply side that is used to store item, location, organization, trading partner, and trade terms data.
Legacy Systems
Software systems that run in parallel to the Global Data Synchronization feature of Product Master for other functions such as accounting and inventory
management. These systems exchange data with the Global Data Synchronization feature.

Global Data Synchronization Network
Source Data Pool
Suppliers publish data to this data pool.
Other Data Pools
Data pools other than the data pool that a certain pair of supplier and retailer subscribe to. Suppliers and recipients might subscribe to one or more data pools.
Recipient Data Pool
Recipients subscribe to this data pool and consume data that is published by the suppliers.

Demand Side (Retailer)


AS2 Product
The software application that communicates with the recipient data pool by sending and receiving XML documents by using the secure AS2 protocol. This software
application is a distinct product that you must purchase, install, and configure separately.
Demand side Global Data Synchronization feature
The feature of Product Master that is responsible for synchronizing data with the supplier.
IBM Product Master
The enterprise Product Information Management (PIM) solution that is used to manage, link, and synchronize item, location, organization, trading partner, and trade
terms data.
Retailer Content Repository
The database on the demand side that is used to store item, location, organization, trading partner, and trade terms data.
Legacy Systems
Software systems that run in parallel to the Global Data Synchronization feature of Product Master for other functions such as accounting and inventory
management. These systems exchange data with the Global Data Synchronization feature.


Key concepts
To use IBM® Product Master, you need to understand the application key concepts.

For more information on some key concepts, click the links at the end.

Access Control Groups (ACGs)


Access Control Groups provide a way to control user access. An ACG allows or restricts access to different parts of the system, such as viewing catalogs,
viewing items in the catalogs, editing items, deleting items, and so on. ACGs are assigned to roles, which in turn provide access to the users that have these roles.
Attribute value
An attribute value is the data that is captured against an attribute on an item or category.
Attributes (or attribute names)
An attribute is the definition of a field, allowing data to be collected on an item or category. An attribute has a type, validations, and other metadata that are
used in the capture of data.
Categories
Categories are used both for browsing and organizing products. A category must be created within a hierarchy. When a hierarchy is applied to a catalog, each item in
the catalog can be associated to one or more of the categories within that hierarchy. Every hierarchy must have exactly one root category. Every category can have
zero or more categories within it, known as subcategories.
Collaboration Areas
Collaboration areas are the runtime manifestations of workflows. You can define collaboration areas for a hierarchy or a catalog, based on the workflow definition.
When defined, items from a catalog or categories from a hierarchy can be checked out into the workflows for reviews and approvals by various people in the
organization.
Company
A Company is the topmost container in IBM Product Master. All the Product Information Management (PIM) operations are done under a company. You can
create multiple companies in an application.
Custom Tools
Enables users to add their own custom screens. These custom screens can be invoked by using the system menus.
Data model
A representation of business and data requirements, designed by using various and flexible business objects such as Catalogs, Items, or Lookup Tables.
Document Store (docstore)
A store of documents that are required by the system. Any custom scripts, Java code, or data files can be kept here, and the system can access the docstore to
get these documents.
Dynamic Selections
Dynamic selections also select a set of items, but the selection criteria are driven by a WebSphere Query Language (WQL) query. The result set can vary based
on the criteria that are provided in the query.
Entry Preview
A type of extension point that allows you to create a sample view of a current item set that can then be run from the data entry screens. For example, you can write
a script to view how an item displays when you use an XML format.
Environment import-export
Environment import-export is backup and restore functionality in IBM Product Master that helps in cloning a running system. For example, a test system can
be exported to a production box, and all entities are cloned on the production system.
Extension Points
If users want to extend any out-of-the-box functionality, they can provide their own functionality by using the extension points. These extension points can be written
by using the Java APIs or the scripting language and can be hooked to the system so that they run at certain invocation points in the system.
Items

Items represent products and services, for example, stock-keeping units (SKUs), global trade item numbers (GTINs), market offers, or any other objects as defined
by the business. You can perform edits by using the stand-alone or advanced content authoring screens. From either screen, items can be added or edited.
Location attributes
Allow IBM Product Master to maintain item and location-specific information.
Java APIs
Java-based APIs can be run from any extension point. Java APIs expose the complete system functionality.
Post Processing
This is an extension point that is run when items or categories are being saved. The post-processing extension point is invoked before the item is saved, but
after the processing operation is completed.
Post Save
This is an extension point that is run after items or categories are saved.
Pre-Processing
This is an extension point that is run when items or categories are being saved. The preprocessing extension point is invoked before the item is saved and before
any processing for the save starts.
Reports job
Reports are asynchronous jobs that can be used to perform various operations.
Role
A Role entity represents the role that is played by a set of users. For example, a role can be an Admin role, an Approver role, or a Reviewer role. Multiple users
can have the same role.
Selections
Selections are used to select a subset of items. You can either predefine these subsets or select the items at run time. Items that are selected from a selection can
be used for various purposes, such as running exports or reports, or simply displaying the items on the screens.
Static Selections
Static selections are a type of selection that has predefined criteria for the selection of items. For example, all items under a set of categories are selected.
Sub Spec
A sub spec is a portion of a spec that can be contained within a main spec and can be reused by several other specs.
Views
A set of attribute collections that users can use to control what they see on the screen. Users can select the view that they want to work with so that they
see limited information on the screens.
Workflow
Workflows define the business flow. Steps are defined to perform certain business logic, and these steps are connected to establish the flow. Workflows define the
metadata for the collaboration areas, and the runtime behavior of these collaboration areas is controlled by the workflow definition.
WebSphere Query Language (WQL)
IBM Product Master provides its own SQL-like query language, called WQL, that users can use to directly access all the internal entities or objects.

Attribute collections
An attribute collection is a group of item or category attributes that are associated with each other or behave the same way in a context. Attribute collections
are used for workflow step validation and for catalog and hierarchy views.
Catalogs
Catalogs are collections of items that are related to each other through a business context. Catalogs are containers for items and can be associated with any number
of hierarchies. For example, the Spring Print Catalog is a collection of just the print catalog products from the spring collection. It has its own hierarchy to organize
the products within the print catalog and holds only the fields that apply to the print medium or channel.
Exports
Enables users to send data out of IBM Product Master into external systems. Data is sent out asynchronously, and users can control the amount of
data that is sent out.
Global Data Synchronization (GDS)
Global data synchronization (GDS) is an ongoing business process that enables continuous exchange of data between trading partners to ensure synchronized
information.
Hierarchies
A hierarchy (or category hierarchy, category tree, or taxonomy) is composed of categories and the relationships between them. Typically used to organize browsing
or navigation, categories are like folders that can contain items or other categories.
Imports
Enables users to import data into IBM Product Master. The data can correspond to items or categories. Import supports mass import of data that is run in
asynchronous mode.
Link attributes versus relationship attributes
You can use links and relationships between catalogs and items.
Location attributes
Locations allow an IBM Product Master solution to maintain item and location-specific information.
Lookup tables
Lookup tables provide a way of storing highly used, constant data that needs to be looked up by the users many times. Like a catalog or a hierarchy, lookup tables
are also driven by Specs for data modelling.
Specs
A specification (spec) defines the data model (meta-model) for items, categories, locations, import or export files, and lookup tables.


Attribute collections
An attribute collection is a group of item or category attributes that are associated with each other or behave the same way in a context. Attribute collections are
used for workflow step validation and for catalog and hierarchy views.

Overview

Having an attribute collection increases the efficiency of data model management and improves performance with many attributes (500+). Attribute collections simplify the
management of large sets of attributes. Instead of working with an entire attribute group, it is possible to work on a functional subset of attributes. The subset of attributes
can be used to create views, tabs, workflows, inheritance rules, and privileges. Associating a subset is more efficient than associating individual attributes.

If an Attribute Collection property is not created for a hierarchy and the property is left set to "auto-generated Core Collection", you run into issues. When the hierarchy has a
significant number of categories, not all of the categories can display on a single page in the multi-edit user interface because the vertical scroll bar is not displayed.
Alternatively, you can use the Up and Down arrows to browse through the list of categories. The workaround is to create an Attribute Collection property and set it on the
hierarchy. After this property is set, the vertical scroll bar appears.

There are two types of core attributes:

Default core attributes


Default core attributes are system defined attributes that are retrieved and saved for every object. The default core attributes include only the attributes that are
critical to making sure that an item is not saved to the database in violation of key rules.
The default core attributes include:

The primary key


The path attribute (for categories only)
Any required attributes (either from a primary or a secondary spec)

For other attributes, it is not possible for the system to determine whether they are needed. Therefore, in some cases, validation rules are run for every item, and
those rules must be included in the core attributes. Hence, user-defined core attributes can be added to the total set of core attributes.
User-defined core attributes
User-defined core attributes are required when an attribute needs to be retrieved and saved for every object. These core attributes are defined per container and
include attributes that need to be validated or calculated every time an item or category is saved. A set of user-defined core attributes is associated per container.
When creating user-defined core attribute collections, include any required attributes that are in secondary specs. For each catalog and category tree, it is possible
to associate an attribute collection as the user-defined core attributes.
Note: To achieve optimal performance, keep the number of user-defined core attributes to a minimum.

In addition to the core attributes, there are also the following types of attributes:

Item attribute and category attribute (entry node)


A property of an item or a category, for example ID, name or price.

An item takes its attributes from the attributes (spec nodes) of the catalog spec
A category takes its attributes from the attributes (spec nodes) of the hierarchy spec

Every attribute on an item or category can have a specified value.


Spec attribute (spec node)
A property of a spec. By adding attributes to a spec you can specify what data fields can be stored for:

Items from a catalog that uses this spec


Categories from a hierarchy that uses this spec

Spec node attribute


A property of a spec node, such as type, minimum and maximum occurrence, minimum length, maximum length, and validation rule. This attribute determines
properties of the spec node and decides how the spec node is used as an item attribute or category attribute when data is stored within it.

In the single edit screen:

Specs are rendered in the order that is specified in the View (or the Tab, if tabbed view).
Nodes within a spec are rendered in the order that is specified in the View. Typically, this order is the same as the order in the spec definition, but it can be
overridden for a tab to not use spec ordering.
The Primary Key node is only rendered if the View or Tab explicitly includes it. This behavior allows certain tabs to not show the primary spec and primary key node
if the solution requires it.
The Primary Key is no longer rendered always as the first node in the primary spec. Instead, it is rendered at the correct location within the primary spec, as
specified by the spec order (or view or tab order).

Related concepts
Hierarchies


Catalogs
Catalogs are collections of items that are related to each other through a business context. Catalogs are containers for items and can be associated with any number of
hierarchies. For example, the Spring Print Catalog is a collection of just the print catalog products from the spring collection. It has its own hierarchy to organize the
products within the print catalog and holds only the fields that apply to the print medium or channel.

Overview
Each catalog must be associated with at least one primary hierarchy, which determines the structure for classifying that catalog's items. A catalog can have zero or more
secondary hierarchies. Catalogs are repositories or containers for items with the following characteristics:

Catalogs are defined by several components that are required to build a catalog (Name, Primary spec, Primary category hierarchy, ACG (Access control group).

More optional attributes can be specified for any catalog (Secondary category hierarchy, Linked catalogs, catalog scripts that are used for preprocessing and post
processing).
Catalogs can be linked, which allows attributes to be managed for a single item across multiple catalogs.
Catalog views can be customized to personalize how item attributes are presented to a user.

Catalog Console
Through the catalog console you can view, manage, and edit your product information before it is exported to a defined destination. You can also perform the following
tasks:

Browse and modify a catalog


Create a catalog
View items from the rich search
View and modify a catalog-associated spec
View and modify catalog attributes
View catalog differences
Roll back a catalog version
Search a catalog's content
Delete a catalog
Customize catalog views

The catalog console displays all of the current catalogs that were created. The console view can be customized to show specific catalog attributes, if needed. The default
view shows the catalog name, catalog spec, primary and secondary hierarchies, access control group, and the catalog view that is applied to the catalog.

To access the catalog console, click Product Manager > Catalog > Catalog Console.


Exports
Enables users to send data out of IBM® Product Master into external systems. Data is sent out asynchronously, and users can control the amount of data that
is sent out.

Overview
You use exports, also called syndications, to export data from a catalog or a hierarchy to external files.

Export Console
With IBM Product Master flexible architecture, businesses can connect to multiple marketplaces with multiple catalogs, by using a single repository of data. You can
perform the following tasks by using the Export Console:

Export catalogs into any format


Publish variances
Publish subsets of content
Merge destination-specific data and standard information

To access the Export Console, click Collaboration Manager > Exports > Export Console.


Global Data Synchronization (GDS)


Global data synchronization (GDS) is an ongoing business process that enables continuous exchange of data between trading partners to ensure synchronized information.

GDS is based on a publish/subscribe model. The supplier is required to publish the product information to a data pool, and the data pool then matches the published data
to the known subscribers of the data. The product information can be about price, party, and other relationship-specific attributes. This process ensures that all
stakeholders for a product are notified about the latest information about the product.

GDS uses a global network that is called the Global Data Synchronization Network (GDSN), which facilitates the synchronization of item information between retailer and
suppliers by using a single global registry. It is a single point of entry into the network for suppliers and retailers through selected data pools. After connection, the trading
partners can subscribe to or publish all of their item information in a single and consistent process for all other trading partners that are connected to the GDSN.

The following figure shows the flow of product data between trading partners, data pools, and the global registry within the GDSN.

The GDSN synchronization process involves the following steps:

1. The supplier submits item information to the source data pool.
2. The source data pool registers the item in the global registry, which helps the GDSN community to locate data sources and manage ongoing synchronization
relationships between trading partners.
3. After the item is registered at the global registry, the supplier publishes the item in the source data pool.
4. The retailer subscribes to the item by sending a subscription message to the recipient data pool.
5. The recipient data pool, along with the subscription details, requests the item information from the source data pool through the global registry.
6. Based on the retailer's subscription information, the source data pool synchronizes with the recipient data pool to share the item information.
7. After getting the information from the source data pool, the recipient data pool forwards the item information to the retailer.
8. The retailer confirms to the recipient data pool whether the item is approved or rejected.
9. The recipient data pool sends the item confirmation to the source data pool.
10. The source data pool forwards the item confirmation to the supplier.

Data model
A default data model for Global Data Synchronization is provided with Product Master.
Data pool
A data pool is a centralized repository of data where trading partners (retailers, distributors, or suppliers) can obtain, maintain, and exchange information about
products in a standard format. Suppliers can upload data to a data pool, which retailers receive through their data pool.
GDS attributes
Trade items rely on specific key attributes to support global data synchronization with data pools.
Global location number (GLN)
A global location number (GLN) is a unique 13-digit number that is used to identify a trade location. The first 7 digits represent the company prefix. The next 5 digits
represent the trade location, and the last digit is the check digit.
Global trade item number (GTIN)
A global trade item number (GTIN) is a unique 14-digit number that is used to identify trade items. The first 13 digits represent the item reference number and the
last digit is the check digit.
GS1 messages
GS1 is the global organization responsible for the design and implementation of global standards and solutions to improve efficiency and visibility in the supply and
demand chains across sectors. The GS1 system of standards is the most widely used supply-chain standards system in the world.
Subscriptions
A subscription is a message that establishes a request for trade item information for a trading partner who is receiving the data on a continuous basis.
Supply Side Global Data Synchronization
The Supply Side Global Data Synchronization is a process whereby suppliers register product data to a source data pool, which retailers can then subscribe to.
Suppliers review and act upon requests from retailers about the provided product data.
Trade items
A trade item is any product or service that can be priced, ordered, or invoiced at any point in any supply chain.
Trading partners
A trading partner is a party to a transaction in the supply chain, such as a supplier (seller) or a retailer (buyer).


Data model
A default data model for Global Data Synchronization is provided with Product Master.

The data model for Supply Side and Demand side adheres to the latest available specifications by 1WorldSync. The data model acts as a quick starter for product
information sharing through data pool. The current supported specifications are:

1WorldSync v8.11 XML on Supply Side.


GDSN BMS v3.1.5 XML on Demand Side.

These default data models consist of the following:

Specs that contain attributes as defined by the corresponding specifications. The specs might be Primary, Secondary, Lookup, and so on.
Lookup tables attached to spec attributes containing valid value lists (lookup table content) that are pre-defined by the specifications.
Data model to help with messaging choreography.
Attribute collections that users use to view spec attributes in the GDS user interface.
GDS core working catalogs that are attached to Primary specs defining core trade item attributes.
Hierarchies, such as GPC_Hierarchy, populated with sample content.
Workflows as required by GDS processes within the product component.
Scripts as required by various GDS features.

Default access control groups that are defined on privileges that are needed for GDS functions.

This default data model is provided in English only because the attribute specifications and their valid values are predefined by GDSN and extended by the data pool in English.


Data pool
A data pool is a centralized repository of data where trading partners (retailers, distributors, or suppliers) can obtain, maintain, and exchange information about products
in a standard format. Suppliers can upload data to a data pool, which retailers receive through their data pool.

Data pools store trade items that contain key attributes, such as the Global Trade Item Number (GTIN), in standardized formats that allow trading partners to easily
synchronize their data.

Product Master supports the following data pool service:

Supply Side Global Data Synchronization supports 1SYNC (v8.11 XML schema)

These data pool services enable trading partners to perform demand side and supply side data synchronization. For more information about the GDSN standard or the
supported data pool services, see the following websites:

Global Data Synchronization Supply Side: 1WorldSync


Standards body: GS1


GDS attributes
Trade items rely on specific key attributes to support global data synchronization with data pools.

The attributes in GDSN are classified into two broad categories:

Global attributes: A global attribute is an attribute that is relevant for business cases around the world, and can only have a single value throughout the world. One
example of a global attribute is the GTIN attribute.
Target market attributes: These attributes can be further classified into:
Core target market attributes: These attributes are for a trade item that can have different values for different target markets, or are mandatory for one
target market and optional for another. In global data synchronization, these attributes are referred to as global-local attributes. A global-local attribute is an
attribute that is relevant for business cases around the world, but it might have a different value depending on the geography. An example of a global-local
attribute is the VAT tax value, which varies by country, such as 1.00 in France or 1.05 in Belgium.
Extension attributes: These are business driven or industry vertical driven attributes. In global data synchronization, these attributes are referred to as
Extension attributes.

Supply side extension attributes


Supply Side Global Data Synchronization supports extension attributes as defined by the 1WorldSync proprietary XML specifications. The framework to support
extension attributes is provided, and any extension attribute can be modeled and enabled for publishing.
Adding new extension
If needed, you can add a new extension under the Interoperability_Hierarchy.


Supply side extension attributes


Supply Side Global Data Synchronization supports extension attributes as defined by the 1WorldSync proprietary XML specifications. The framework to support
extension attributes is provided, and any extension attribute can be modeled and enabled for publishing.

The out-of-the-box Global Data Synchronization data model for Supply Side includes the following extension attribute samples across the different extensions.

Animal Feeding
Child Nutrition Label
Dairy Fish Meat Poultry
Food and Beverage Ingredient
Food and Beverage Preparation Serving
Food and Beverage Properties Information
French
Lowes Specific
Netherlands

Non Food Ingredient
Nutritional Information
Packaging Information
Product Formulation Statement
Regulatory Compliance (RegulatedTradeItem)
SINFOS
Trade Item Data Carrier & Identification
Trade Item Licensing
UDEX

You can use these extension attribute samples as they are, or customize them to meet your requirements.


Adding new extension


If needed, you can add a new extension under the Interoperability_Hierarchy.

Procedure
1. Create a secondary spec if one is not available. All extensions have a secondary spec (for example, ExtensionName_Extension_Attribute_Spec) available in the data model.
2. Create a lookup table with the same name as that of the new extension that is added in the Interoperability Hierarchy. This step is not mandatory, but it helps identify
the lookup table and extension association.
a. Use the Flexi_Attributes_Spec when the new lookup table is created. The Flexi_Attributes_Spec has the following attributes:

Name
Name of the attribute (not the full path) present in the secondary spec of the associated extension.
Flexi_Type
For each attribute present in the extension spec, you must check the Data Type column in the IM 8.11 Data Source 1WS XML Guide to identify the
value for this attribute.
isRdd
Always set the value to false.

b. Create, in the lookup table, all the entries for the attributes that are present in the secondary spec of the associated extension.
While adding the entries in the lookup table, if there are attributes that have the same name, you must add the full path of the attribute as an entry.
3. Create an entry in the Interoperability_AttrGroup_Lookup lookup table.

Interoperability Spec Name


Add the secondary spec name associated with the extension.
Example
ExtensionName_Extension_Attribute_Spec
Interoperability Name
Extension name from the Interoperability_Hierarchy.
IsGlobal
Set the value to false.
Interoperability Lookup Name
Name of the lookup table that is created for this extension in Step 2.


Global location number (GLN)


A global location number (GLN) is a unique 13-digit number that is used to identify a trade location. The first 7 digits represent the company prefix. The next 5 digits
represent the trade location, and the last digit is the check digit.


Global trade item number (GTIN)


A global trade item number (GTIN) is a unique 14-digit number that is used to identify trade items. The first 13 digits represent the item reference number and the last digit
is the check digit.
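
Both GLN and GTIN end with the same GS1 mod-10 check digit: starting from the digit immediately to the left of the check digit, the digits are weighted 3, 1, 3, 1, and so on, and the check digit brings the weighted total up to a multiple of 10. The following minimal sketch validates either format; the number in the example is a sample value that passes the check.

// Validates the shared GS1 mod-10 check digit used by both GLN (13 digits)
// and GTIN (up to 14 digits). Digits are weighted 3,1,3,1,... starting from
// the digit immediately left of the check digit.
public class Gs1CheckDigit {

    static int checkDigit(String dataDigits) {
        int sum = 0;
        int weight = 3; // the rightmost data digit always carries weight 3
        for (int i = dataDigits.length() - 1; i >= 0; i--) {
            sum += (dataDigits.charAt(i) - '0') * weight;
            weight = (weight == 3) ? 1 : 3;
        }
        return (10 - (sum % 10)) % 10; // digit that rounds the sum up to a multiple of 10
    }

    static boolean isValid(String code) { // full GLN or GTIN, check digit last
        String data = code.substring(0, code.length() - 1);
        int expected = code.charAt(code.length() - 1) - '0';
        return checkDigit(data) == expected;
    }

    public static void main(String[] args) {
        System.out.println(isValid("614141000036")); // sample number: prints true
    }
}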


GS1 messages
GS1 is the global organization responsible for the design and implementation of global standards and solutions to improve efficiency and visibility in the supply and
demand chains across sectors. The GS1 system of standards is the most widely used supply-chain standards system in the world.

The GDS component of Product Master supports communication between the source data pool and the recipient data pool. GDS uses the supply-side and demand-side
types of messages that are defined by the GS1 organization.

Supply Side 1SYNC messages


Supply side Global Data Synchronization uses these types of messages in the messaging process:


Supply Side 1SYNC messages


Supply side Global Data Synchronization uses these types of messages in the messaging process:

Item Request
An item request is a 1SYNC message that a supplier uses to send item information to the source data pool. The following list describes the types of item request
messages that are sent:

Add
A supplier uses an item add request message to add fresh item information to the 1SYNC data pool.
Modify
A supplier sends an item modify request message to the 1SYNC data pool to modify existing item information.

Item Response
An item response is a 1SYNC message that is sent by the data pool to the supplier to indicate success or failure of any type of item request message. The following
list describes the types of item response messages that are sent:

documentAcknowledgment
Indicates success.
documentException
Indicates failure.

GDSN Item Registry Response


A GDSN item registry response is a message that is sent from the GDSN Item Registry to the supplier for every item request message. The following list describes
the types of GDSN item registry response messages:

documentAcknowledgment
Indicates success.
documentException
Indicates failure.

Item Link Request


An item link request is a 1SYNC message that is used by the supplier to create or delete item links. The item links define relations of registered items in the
source data pool. The following list describes the types of item link request messages that are sent:

Add
A supplier sends an item add link request message to the 1SYNC data pool to create item links.
Delete
A supplier sends an item delete link request message to the 1SYNC data pool to delete item links.

Item Link Response


An item link response is a 1SYNC message that is sent by the data pool to the supplier to indicate success or failure of an item link request message. The following
list describes the types of item link response messages that are sent:

documentAcknowledgment
Indicates success and that the links and hierarchies are registered or deleted at the data pool.
documentException
Indicates failure.

Publication Request - New/Initial Load


A publication request - new/initial load is a 1SYNC message that is used by a supplier to request the data pool to publish item data to the external world (trading
partners). The following list describes the types of publication request messages that are sent:

Add
A supplier sends an add publication request - new/initial load message to the 1SYNC data pool to request it to publish fresh item data to the external world.
Delete
A supplier sends a delete publication request - new/initial load message to the 1SYNC data pool to request it to delete item data that is published earlier.
Republish
A supplier sends a republish publication request - new/initial load message to the 1SYNC data pool to request it to republish item data to the external world.

Publication Response - New/Initial Load
A publication response - new/initial load is a 1SYNC message. It is sent by the data pool to the supplier to indicate success or failure of a publication request -
new/initial load message. The following list describes the types of publication response messages that are sent:

documentAcknowledgment
Indicates success.
documentException
Indicates failure.

Item Authorization Response


An item authorization response is a 1SYNC message that is sent as a confirmation to the supplier from the retailer after accepting the publication.
Message Exception
A message exception is a 1SYNC message that is sent from the data pool when the XML file gets rejected due to errors in the message that is sent by the supplier.
Item sync Response
An item sync response is a 1SYNC message that is generated and sent to the data source only when errors occur during item synchronization with
the data recipient. The following list describes the types of item sync response messages that are sent:

New
Initial load
Modify
Correct
Delete


Subscriptions
A subscription is a message that establishes a request for trade item information for a trading partner who is receiving the data on a continuous basis.


Supply Side Global Data Synchronization


The Supply Side Global Data Synchronization is a process whereby suppliers register product data to a source data pool, which retailers can then subscribe to. Suppliers
review and act upon requests from retailers about the provided product data.

Supply side Global Data Synchronization facilitates the synchronization of data by using this messaging process:

The supplier at Global Data Synchronization Supply Side sends the new item information as an "Item Add" message to the source data pool.
After the item information gets added at the source data pool, the supplier gets an acknowledgment with an "Item Response" message.
The item information gets registered on the global registry, and the supplier receives an "Item Registry Response" from the source data pool.
The supplier then creates item links that define relations of registered items and sends them to the data pool through an "Item Link Request".
The links and hierarchies are registered at the data pool and acknowledged through an "Item Link Response".
The supplier then requests the publishing of the item data to the external world (trading partners) through its data pool by using a "Publication Request - New/Initial Load".
After the successful publishing of data, the "Publication Response - New/Initial Load" is sent to the supplier.
The retailer accepts the publication, and the confirmation is sent back by an "Item Authorization Response".

GDS connects to the source data pool services to enable a supplier to perform data synchronization.


Trade items
A trade item is any product or service that can be priced, ordered, or invoiced at any point in any supply chain.


Trading partners
A trading partner is a party to a transaction in the supply chain, such as a supplier (seller) or a retailer (buyer).

Manufacturers can play the role of suppliers as well; however, not all suppliers are manufacturers. It is by their role that they are classified as "Suppliers"
and "Retailers". Suppliers and retailers are collectively called "Trading Partners".


Hierarchies
A hierarchy (or category hierarchy, category tree, or taxonomy) is composed of categories and the relationships between them. Typically used to organize browsing or
navigation, categories are like folders that can contain items or other categories.

Overview
There are two types of hierarchies: a Category Hierarchy is used by catalogs to classify items, while an Organization Hierarchy is used to manage Product Master users.
Each hierarchy is defined with a default view. This view includes all attributes applicable to the hierarchy. The view can be a combination of default core attributes and
user-defined core attributes.

Hierarchy Console
To access the Hierarchy Console, click Product Manager > Hierarchy > Hierarchy Console.

Related concepts
Attribute collections


Imports
Enables users to import data into IBM® Product Master. The data can correspond to items or categories. Import supports mass import of data that is run in
asynchronous mode.

Overview
You use imports, also called feeds, to import data into a catalog or a hierarchy from external files. The imported data is validated and standardized before it can be
imported successfully.

An import is first configured manually and can then be run on a scheduled or on-demand basis. Product Master allows the import of multiple types of data, such as items,
binary files, category trees, and categorization mappings, from multiple sources, to accomplish multiple purposes, such as update, replace, and delete.

Import Console
You use the Import Console to create catalogs from various sources of data. The console acts as the switchboard for importing all data into Product Master. From here you
can view, modify, and create data that is being fed into Product Master. To create an import, you need to define and perform the numerous data and file feeds into Product
Master. The Import Console presents a list of feeds that are created.

To access the Import Console, click Collaboration Manager > Imports > Import Console.


Link attributes versus relationship attributes


You can use links and relationships between catalogs and items.

The following lists describe the different characteristics of link attributes and relationship attributes.

Link attributes:
Used in item objects only.
Associated through the primary key of the target item.
Can be associated to an item that does not exist yet.
Target a catalog that is identified at design time (in catalog attributes).
Limited to the primary spec.

Relationship attributes:
Used in both item and category objects.
Associated through the internal ID of the target item.
Cannot be associated to an item that does not exist yet in the target catalog.
Target a catalog that is chosen at run time.
Can be used in primary and secondary specs.


Location attributes
Locations allow an IBM® Product Master solution to maintain item and location-specific information.

Overview
For example, an item might be available at only one of several stores, or an item that is available at all locations might have price variations that are based on the store location.

The following list describes the item-location attribute relationship:

You can mark a single attribute with a null value. The child of a parent with a null value inherits the null value from the parent.
You can mark all attributes for an item-location with a null value. This marking applies to all current and future attributes for an item-location. Marking with a null
value does not suggest that the item is not available for a location. To make the item unavailable, it is necessary to make the location unavailable.
You can mark all attributes in a multi-occurrence group with a null value.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Lookup tables
Lookup tables provide a way of storing frequently used, constant data that users need to look up many times. Like a catalog or a hierarchy, lookup tables are also
driven by specs for data modeling.

Lookup tables are provided to enhance the content management functions that are available in IBM® Product Master. You can use the tables to create standard tables, for example,
units of measure (UOM), currencies, or countries. You can also create custom replacement tables, for example, BK = Black and BL = Blue.

The creation and management of lookup table records is similar to standard item creation and management. The set of tools available to manage lookup tables includes
bulk operations such as add, edit, and search.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Specs
A specification (spec) defines the data model (meta-model) for items, categories, locations, import or export files, and lookup tables.

Overview
The following list identifies spec characteristics:

A model for how data is stored, calculated, and managed within IBM® Product Master.
A model for data as it resides outside of Product Master.
Flexible data templates that are used for data validation and can be changed.
Can be updated and maintained by a selected group of users.
Contains attributes that can be indexed to improve the speed of search operations.

There are several types of specs:

Destination spec
A destination spec defines the structure of the data that is exported to a destination data file for a catalog or hierarchy export.
File spec
A file spec defines the structure of a data file from a data source to use for a catalog or hierarchy import.
Lookup spec
A lookup spec is the data model that is used to define a Lookup Table. A lookup spec must have a primary key.
Primary spec
A primary spec is the data model that is used to define hierarchies and catalogs. A primary spec must have a primary key.
Script input spec
A script input spec defines the structure of the parameters that are passed to a Product Master script; especially a script that runs through the Product Master UI
(for example, a report script).
Secondary spec
A secondary spec is the data model that is used to define location attributes and supplementary attributes that are associated with particular categories. Secondary
specs do not have a primary key.
Sub spec
A sub spec is a reusable spec, which can be used as part of either a primary or a secondary spec, for example, to group a set of attributes that always occur
together. If a spec node from a sub spec is going to be used as a primary spec's primary key, it must be defined to have a minimum and maximum occurrence of 1.

Specs Console
The Specs Console enables you to browse and view the following specifications:

File specs
Primary specs
Lookup specs
Destination specs
Secondary specs
Script input specs

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Roles and tasks


For most PIM systems, the tasks that are accomplished in the product are covered by users in three main roles: business users, system administrators, and solution
developers. Depending on the role that you have, certain tasks might not be available for you.

Business users
Users who manage the content of the product information.
System administrators
Users who manage the database and system, logs and performance, and access for other users.
Solution developers
Users who create the objects that make up the PIM system, such as workflows and collaboration areas, specs, imports, and web services.

Business users tasks


Business users manage the content of the product information.
System administrators tasks
System administrators manage the database and system, logs and performance, and access for other users. Typical responsibilities of system administrators
include deploying and managing company data models, managing users and roles, managing system services, defining clusters, integrating LDAP, and monitoring
IBM Product Master.
Solution developers tasks
Solution developers take specifications from solution architects and create the objects that make up the PIM system. Typical responsibilities of solution developers
include creating data model objects (such as specs, sub specs, hierarchies, catalogs, and lookup tables), creating workflows and collaboration areas, creating the
security model (such as creating users, roles, and access control groups), and creating import and export jobs, searches, and selections.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Business users tasks


Business users manage the content of the product information.

As a business user, you can use the product interface for your tasks. In most cases, you use the appropriate screens to add and edit item content or work with categories.
If you have more responsibilities, you might use the consoles and menus that are available under the Product Manager, Collaboration Manager, and Data Model Manager
menu choices in the interface.

Screens for adding and editing items and categories


You use the Single edit and Multi-edit screens to add and edit items and categories. You can select an item from the Worklist UI or select a catalog from the left pane to
access one of these screens.

Menus for more objects and tasks


Some business users might have responsibilities in a PIM system in addition to working with items and categories. You can find the menus and consoles for some
additional tasks under the Product Manager, Collaboration Manager, and Data Model Manager menu choices in the interface. Depending on the role that you have and the
level of access and customization, certain tasks might not be available for you, and the menus might be different.

Product manager menu


You can use the Product Manager menu and consoles to do tasks such as:

importing a catalog
running a catalog export
creating a hierarchy
creating a hierarchy mapping
running a report

Collaboration manager menu


You can use the Collaboration Manager menu and consoles to do tasks such as:

running import and export jobs


viewing collaboration areas
viewing documents
viewing destination files

Data model manager menu


You can use the Data Model Manager menu and consoles to do tasks such as:

scheduling and running jobs


adding alerts
enabling and disabling alerts
viewing collaboration areas of a workflow

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

System administrators tasks


System administrators manage the database and system, logs and performance, and access for other users. Typical responsibilities of system administrators include
deploying and managing company data models, managing users and roles, managing system services, defining clusters, integrating LDAP, and monitoring IBM® Product
Master.

In many cases, you can use scripts for your administration tasks, such as for deploying a company. You can also use the product interface for many tasks. The following
lists show the items that you typically work with in the user interface in your role as a system administrator. The tasks in these lists are typical and do not represent an
exhaustive set.

The system administrator tasks are available from the consoles and menus under the System Administrator, Product Manager, Collaboration Manager, and Data Model
Manager menu choices in the interface. Depending on the role that you have and the level of access and customization, certain tasks might not be available for you.

System administrator menu


You can use the System Administrator menu and consoles to do tasks such as:

running SQL queries


viewing configuration properties or log files
creating or stopping a service
importing an environment
defining scripts to include in an export
importing files
searching and displaying JVMs
viewing database performance information
viewing cache details

Product manager menu


You can use the Product Manager menu and consoles to do tasks such as:

creating or importing catalogs


creating catalog-to-catalog exports
creating a hierarchy
creating a hierarchy mapping
creating and running a report
creating lookup tables


Collaboration manager menu
You can use the Collaboration Manager menu and consoles to do tasks such as:

creating and running import and export jobs


creating collaboration areas
creating queues and message queues
creating web services and viewing their transactions
creating data sources
creating distributions

Data model manager menu


You can use the Data Model Manager menu and consoles to do tasks such as:

scheduling and running jobs


creating and editing scripts
enabling and disabling users
creating access control groups
creating privileges for catalogs and hierarchies
enabling logging
adding and enabling alerts
creating a staging area
creating workflows

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Solution developers tasks


Solution developers take specifications from solution architects and create the objects that make up the PIM system. Typical responsibilities of solution developers
include creating data model objects (such as specs, sub specs, hierarchies, catalogs, and lookup tables), creating workflows and collaboration areas, creating the security
model (such as creating users, roles, and access control groups), and creating import and export jobs, searches, and selections.

In many cases, you can use Java™ or scripts to create the objects that you need for your PIM system, but you can also use the product interface for some of these tasks.
The solution developer tasks are available from the consoles and menus under the Product Manager, Collaboration Manager, and Data Model Manager menu choices in the
interface. Depending on the role that you have and the level of access and customization, certain tasks might not be available for you.

Product manager menu


You can use the Product Manager menu and consoles to do tasks such as:

creating or importing catalogs


creating catalog-to-catalog exports
creating a hierarchy
creating a hierarchy mapping
creating and running a report
creating lookup tables

Collaboration manager menu


You can use the Collaboration Manager menu and consoles to do tasks such as:

creating and running import and export jobs


creating collaboration areas
creating queues and message queues
creating web services and viewing their transactions
creating data sources
creating distributions

Data model manager menu


You can use the Data Model Manager menu and consoles to do tasks such as:

scheduling and running jobs


creating and importing specs
creating spec maps
creating attribute collections
creating scripts
enabling and disabling users
creating access control groups
creating privileges for catalogs and hierarchies
enabling logging
adding and enabling alerts
creating a staging area
creating workflows

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Interacting with Product Master
There are two ways of interacting with IBM® Product Master; through the user interface or the Product Master workbench.

Using the workbench makes it easier to write and debug Product Master scripts. Using the user interface ensures that you do not have any installations to perform. You
can use the workbench if you do not want to write and debug Product Master scripts in the Script Sandbox from the user interface. For more information, see the "Navigating
the interface" section in the IBM Product Master overview.

Related information
Navigating the interface

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Extending Product Master capabilities


You can extend the "out-of-the-box" capabilities of IBM® Product Master with custom business rules, validations, and integration capabilities. Product Master supports
creating extensions with three programming languages.

To extend the PIM solution, you can use Script, Java™, or REST API. You can use a mixture of Script and Java so that you can migrate to Java from Script over time.

Java
The IBM Product Master Java application programming interface (API) provides a full featured set of Java interfaces and a set of utility classes and methods so that you
can extend Product Master.
The Product Master Java API provides these advantages for extending a Product Master solution:

An easier learning curve for Java programmers.


The ability to write object-oriented code.
The ability to call existing Java utilities and applications directly in your Java code.
Support for extending Product Master with a nonproprietary programming language, which makes it easier to find coding help or references.

You can use any existing Java IDE (integrated development environment), such as RSA (Rational® Software Architect), to develop Product Master Java extensions. Product
Master support for application-server-based web services is provided only in the Java API.

Script
The IBM Product Master Scripting application programming interface (API) provides a full featured set of capabilities to extend the Product Master solution.

REST API
IBM Product Master, Version 12.0 provides a REST API layer to access the database data for constructing dashboards and for other purposes. Using these REST API
commands requires the same permissions as using the web interface. These REST APIs are available so that you can retrieve data outside of the web interface. For
more information, see the "REST API documentation" section in the REST APIs definitions.
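
As a purely illustrative sketch (the host, port, credentials, and endpoint path are hypothetical placeholders, not the documented Product Master REST contract), calling such an API from a shell might look like this:

# Hypothetical endpoint and credentials; consult the REST API documentation
# for the actual resource paths and authentication scheme.
curl -s -u myuser:mypassword \
  -H "Accept: application/json" \
  "https://pim.example.com:7507/rest/catalogs"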

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Product accessibility
Accessibility features help users with physical disabilities, such as restricted mobility or limited vision, to use software products successfully.

The major accessibility features in this product enable users to do the following tasks:

Use assistive technologies, such as screen-reader software and digital speech synthesizer, to hear what is displayed on the screen. Consult the product
documentation of the assistive technology for details about using those technologies with this product.
Operate specific or equivalent features by using only the keyboard.
Magnify what is displayed on the screen.

In addition, the documentation includes the following features to aid accessibility:

All documentation is available in the HTML format to give the maximum opportunity for users to apply screen-reader software technology.
All images in the documentation are provided with an alternative text so that users with vision impairments can understand the contents of the images.

Product accessibility for the Admin UI


Accessibility features of the Admin UI help users with physical disabilities, such as restricted mobility or limited vision, to use software products successfully.

Product accessibility for the Persona-based UI
Accessibility features of the Persona-based UI help users with physical disabilities, such as restricted mobility or limited vision, to use software products
successfully.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Product accessibility for the Admin UI


Accessibility features of the Admin UI help users with physical disabilities, such as restricted mobility or limited vision, to use software products successfully.

Keyboard shortcuts
Table 1. Keyboard shortcuts
Action Command
Single-edit UI
Open the editor. Press Ctrl + Enter
Accept and close the editor. Press Ctrl + Enter
Maximize the size of the editor. Press Ctrl + M (if editor is not maximized)
Minimize the size of the editor. Press Ctrl + M (if editor is maximized)
Cancel or close the editor. Press Esc
Go to the previous attribute editor. Press Ctrl + Up
Go to the next attribute editor. Press Ctrl + Down arrow
Undo any change to a value. Press Ctrl + Z
Move sequentially through the attributes without opening the editor. Press Tab
Move backwards through the attributes without opening the editors. Press Shift + Tab
Open a single editor. Select Cancel, the X at the upper-right corner, or press Esc.
Close an editor without accepting input. Select Cancel, the X at the upper-right corner, or press Esc.
Close an editor and accept keyboard input. Select OK or Ctrl + Enter
Close or cancel a screen. Press Esc
Open editors sequentially while accepting keyboard input. Press Ctrl + Down arrow
Open editors in backwards sequence while accepting keyboard input. Press Ctrl + Up arrow
Browse through days within the calendar view. Press the Left arrow, Right arrow, Up arrow, and Down arrow keys
Browse through months within the calendar view. Press Shift + PageUp and Shift + PageDown
Browse through years within the calendar view. Press Ctrl + PageUp and Ctrl + PageDown
Browse through time within the calendar view. Press Up arrow and Down arrow
Select your input. Press Enter.
Rich Text editor in Single Edit
Select text in the Rich Text editor in the Single Edit screen. Press Ctrl + Shift + Left/Right/Up/Down arrows
Browse through the text editor options (for example, bold, italics, and underline). Press Shift + Tab
Move between options. Press the Left and Right arrows
Apply the option to the selected text. Press Enter
Relationship editor in Single Edit
Browse through the rows of a table. Press the Down arrow and Up arrow
Select the highlighted row within the table. Press Enter
Close the editor. Press Enter again
Browsing for a related item by using the tree
Move between folders. Press the Down arrow and Up arrow
Open a folder. Press the Right arrow
Move between a child node and a parent node. Press Tab
Populate the Key and Display fields with the highlighted item. Ensure that the currently highlighted node is an item. Press Enter
Close the editor. Press Enter
Lookup Table editor in Single Edit
Browse through the options within the table. Press Down arrow and Up arrow
Select a row and populate the Key field. Press Enter
Close the Lookup Table editor. Press Enter again
Flag, num enum, string enum, and timezone fields in Single Edit
Remove the content. Press Delete or backspace
Browse through the options. Press Up arrow and Down arrow
Filter the list of options in a field. Type the first letter to display the applicable options that start with that letter. These field attributes are case-sensitive.
Accept the highlighted option. Press Enter
URL, Thumbnail image URL, Thumbnail, Image URL, Image fields in Single Edit
Open the options view. Press Enter
Switch between the view content and add content options. Press Tab and then press Enter to select the highlighted option

If an image is associated, view the full-size pop-up window in the View Content editor. Press Enter
Close an image. Press Esc
String editor in Single Edit
Select text in the add content option. Press Ctrl + Shift + Left/Right/Up/Down arrows
Switch between options. Press Tab and then press Enter to select the highlighted option

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Product accessibility for the Persona-based UI


Accessibility features of the Persona-based UI help users with physical disabilities, such as restricted mobility or limited vision, to use software products successfully.

Keyboard shortcuts
Table 1. Keyboard shortcuts
Action Command
Single-edit page
Open the rich text editor. Press Ctrl + Enter
Accept and close the rich text editor. Press Ctrl + Enter
Cancel or close the editor. Press Esc
Undo any change to a value. Press Ctrl + Z
Move sequentially through the attributes. Press Tab
Move backwards through the attributes. Press Shift + Tab
Open a single editor. Double-click
Close an editor without accepting input. Select Cancel, the X at the upper-right corner, or press Esc
Close an editor and accept keyboard input. Select OK
Close or cancel a screen. Press Esc
Relationship editor
Browse through the options within the table. Press Down arrow and Up arrow
Close the editor. Press Esc, OK, or cancel
Lookup Table editor
Browse through the options within the table. Press Down arrow and Up arrow
Close the editor. Select OK or close
Flag, num enum, string enum, and timezone fields
Remove the content. Press Delete or backspace
Browse through the options. Press Up arrow and Down arrow
Filter the list of options in a field. Type the first letter to display the applicable options that start with that letter. These field attributes are case-sensitive.
Accept the highlighted option. Press Enter
String editor
Select text in the add content option. Press Ctrl + Shift + Left/Right/Up/Down arrows
Open Relationship attribute pop-up window. Double-click
Open Look-up table attribute pop-up window. Double-click
Multi-edit page
Cancel or close the editor. Press Esc
Go to the previous attribute editor. Press Shift + Tab
Undo any change to a value. Press Ctrl + Z

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Installing
IBM® Product Master enables companies to create a single, up-to-date repository of product and service information that can be used throughout their organization for
strategic business initiatives.

Organizations that use IBM Product Master can benefit from its robust features, including:

Intuitive and extensible out-of-the-box user interfaces


Business process collaboration tools including workflow
Data aggregation and syndication capabilities

Granular access privileges based on roles
Flexible data modeling capabilities
Sophisticated hierarchy management features
A robust service-oriented architecture (SOA)

Planning to install
Before you install IBM Product Master, make sure that you complete the planning steps and meet the prerequisites.
Preparing for the installation
Before you can install IBM Product Master, ensure that you review all of the hardware and software requirements to run Product Master.
IBM Product Master interactive installation
Deploying on a clustered environment
Developers, administrators, and transition engineers who want to set up a typical Product Master clustered environment can follow one of two approaches.
Post-installation instructions
After completing the automated installation, you need to complete the following steps to configure Product Master for the Persona-based UI.
Verifying the installation
To verify that you have successfully installed IBM Product Master, log in to the product user interface.
Applying fix pack
When IBM releases a fix pack for the IBM Product Master, you can apply the fix pack.
Migrating
You can migrate to the latest version of the IBM Product Master.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Planning to install
Before you install IBM® Product Master, make sure that you complete the planning steps and meet the prerequisites.

Before you begin


Read the release notes for information about supported product features or enhancements to the release. For more information, see Passport Advantage Online for
customers.
Review the installation scenarios and determine the installation approach that you are going to use.
Review and complete the installation worksheet and decide the various parameters to use in the installation. For more information, see Installation and
configuration worksheets.
In addition to these general prerequisites, there are other specific prerequisite tasks for installing Product Master. These tasks are outlined in the following topics.

Installation scenarios
You can install IBM Product Master on a single computer or in a clustered environment on several computers.
Installation and configuration worksheets
The installation worksheets list all of the values that you must specify during the Product Master installation process. Completing the installation worksheets before
you install the components can help you plan your installation, save time, and enforce consistency during the installation and configuration process.
System requirements
The system requirements describe the supported hardware and software requirements for Product Master. Ensure that you are familiar with the minimum product
levels that you must install before you open a problem report.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Installation scenarios
You can install IBM® Product Master on a single computer or in a clustered environment on several computers.

Simple configuration
In the simple configuration, the product services are run on a single computer. The following image depicts a simple configuration of the product:

For more information, see Installing.

Complex configuration
In the complex configuration, the product services are run in a cluster across several computers. The following image depicts a complex configuration of the product:

For more information, see Deploying on a clustered environment.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Installation and configuration worksheets


The installation worksheets list all of the values that you must specify during the Product Master installation process. Completing the installation worksheets before you
install the components can help you plan your installation, save time, and enforce consistency during the installation and configuration process.

Reuse the worksheets for each runtime environment that you plan to implement. For example, you might have a production environment, a test environment, and a
training environment.
The worksheets cover applications and components whose base configuration settings are defined within IBM Installation Manager. Any operational server,
user application, or component configuration steps that are required outside of IBM Installation Manager are described in separate individual application or component
topics.

Installation directory worksheet


Use this worksheet to record the root directory of the host (MDM_INSTALL_HOME) on which you want to install Product Master.
IBM Db2 data source worksheet
Use this data source worksheet to identify parameters for the IBM DB2® data source to which your Product Master is connecting.

Oracle data source worksheet
Use the Oracle data source worksheet to identify parameters for the data source to which your Product Master is connecting.
WebSphere Application Server installation worksheet
Use the IBM WebSphere® Application Server configuration worksheet to identify parameters for the application server that is used to host your Product Master.
Application configuration worksheet
Use the application configuration worksheet to identify parameters for the Product Master.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Installation directory worksheet


Use this worksheet to record the root directory of the host (MDM_INSTALL_HOME) on which you want to install Product Master.

If you install more runtime environments later, they might not point to the same database as the one that is used for the initial environment. If you are installing multiple runtime
environments, reuse the installation worksheet to define the unique directory values for each environment.
Your installation directory path (for both the MDM_INSTALL_HOME and IBMIMShared directories) must not contain any spaces. IBMIMShared is the directory
that Installation Manager uses to store installation artifacts.
The parameters that are listed in the following table equate to user prompts or fields that you see in IBM® Installation Manager.
Use the existing package group
Choose this option if you want the Product Master components to be installed into an existing Eclipse shell or directory. You cannot modify the directory name if you choose this option. Do not choose this option if you previously installed other products by using IBM Installation Manager, such as IBM Rational® Application Developer (RAD).
Create a package group
This option is the default setting. IBM Installation Manager creates a default IBM/MDM directory under the root directory that you choose. Or, you can name the directory as you want. For example: MDM_INSTALL_HOME/IBM/MDM_test or MDM_INSTALL_HOME/IBM/MDM_prod

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

IBM Db2 data source worksheet


Use this data source worksheet to identify parameters for the IBM® DB2® data source to which your Product Master is connecting.

When you define the names for your databases and user accounts, consider giving the associated database instance, user account, and data source configuration the
same name. You might also want to include the Product Master version in your name. Using this naming convention can help other members of your organization and IBM
Software Support understand the mapping between instances, accounts, and databases.
The parameters that are listed in the following table equate to user prompts or fields that you see in IBM Installation Manager.
Database type
Product Master supports Db2.
Database host name
Identify the fully qualified address of the host on which the database is installed. The default is localhost.
Database port
Identify the database port or use the default port number provided. The Db2 default is 50000.
Database user name
The database user name must have DBA privileges. Restrictions on length and supported characters for user names and passwords depend on any restrictions that might be imposed by your operating system.
Database password
Provide a password for the database user name.
Local database name
Provide a name that identifies the MDM database. The default is MDMDB. The name must consist of 12 or fewer alphanumeric characters. Underscore (_) characters can be used in the name. Other characters are not supported. A physical MDM implementation uses the Db2 local client to run database scripts and requires a local database name.
Remote database name
Provide a name that identifies the remote MDM database. The default is MDMDB.
Database home
Provide the parent directory of SQLLIB. For example, on IBM AIX®, Linux®, or Solaris: /home/db2inst1/sqllib
Database schema
Specify the database schema name. By default, the schema name is the same as the database application user.
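
Before you run the installer, you can sanity-check the worksheet values from the Db2 command line. The following is a minimal sketch that assumes a remote database; the node, host, database, and user names are placeholders taken from the worksheet defaults:

# Placeholder node, host, port, database, and user values from the worksheet.
db2 catalog tcpip node mdmnode remote dbhost.example.com server 50000
db2 catalog database MDMDB as MDMDB at node mdmnode
db2 connect to MDMDB user mdmdb1
db2 terminate

If the connection succeeds, the host name, port, database name, and credentials that you recorded are consistent.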

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Oracle data source worksheet


Use the Oracle data source worksheet to identify parameters for the data source to which your Product Master is connecting.

When you define the names for your databases and user accounts, consider giving the associated database instance, user account, and data source configuration the
same name. You might also want to include the Product Master version in your name. Using this naming convention can help other members of your organization and IBM®
Software Support understand the mapping between instances, accounts, and databases.
The parameters that are listed in the following table equate to user prompts or fields that you see in IBM Installation Manager.
Database type
Product Master supports Oracle.
Database host name
Identify the fully qualified address of the host on which the database is installed. The default is localhost.
Database port
Identify the database port or use the default port number provided. The Oracle default is 1521.
Database user name
The database user name must have DBA privileges. Restrictions on length and supported characters for user names and passwords depend on any restrictions that might be imposed by your operating system.
Database password
Provide a password for the database user name.
Database name
Provide the database name.
TNS
Specify the name of the service that is used to connect to the Oracle database. This parameter is required because this service can also be used to connect to the remote database.
Database home
Provide the fully qualified directory where the database is installed. For example, on IBM AIX®, Linux®, or Solaris: /home/mdm/oracle/product/11.2.0/dbhome_1
SID
Provide the database system ID (SID).
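
Similarly, you can confirm before installation that the TNS service name that you recorded resolves and accepts connections. The following sketch uses the standard Oracle client utilities tnsping and sqlplus; the service name and user are placeholders:

# MDMDB and system are placeholders for the TNS name and a database user.
tnsping MDMDB
sqlplus system@MDMDB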

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

WebSphere Application Server installation worksheet


Use the IBM® WebSphere® Application Server configuration worksheet to identify parameters for the application server that is used to host your Product Master.

The parameters that are listed in the following table equate to user prompts or fields that you see in IBM Installation Manager.
You can set the Deployment type parameter to either Network Deployment Edition or Base Edition. The Network Deployment Edition is used for server or cluster
installations. A Base Edition deployment is typically used in a workstation or demonstration installation. If you choose Network Deployment Edition, the installer runs a
sequence of commands against the IBM WebSphere Application Server deployment manager process to configure application servers and deploy applications. The
deployment manager and node agents must be configured and running before the deployment can proceed. For example, use a profile name of Dmgr01.

If you have a cluster enabled, see Enabling horizontal cluster for the IBM Product Master.

If you choose Base Edition, the application server cannot be deployed on server1 of WebSphere Application Server Base. You must therefore provide a new server as
the Product Master application server and deploy applications on the chosen server. The installer then creates the new server and runs a sequence of commands against
the newly created server to configure the application server and deploy the applications. Make sure that server1 is running before you proceed with the deployment. For
example, use a profile name like AppSrv01.

Deployment type
Specify the deployment type and note the IBM WebSphere Application Server profile name. Your options are Network Deployment Edition or Base Edition.
IBM WebSphere Application Server home
Specify the fully qualified directory in which IBM WebSphere Application Server is installed. The default is /opt/IBM/WebSphere/AppServer.
IBM WebSphere Application Server profile home
If you are using a base deployment, specify the fully qualified path of the application server profile home directory. The default is /opt/IBM/WebSphere/AppServer/profiles
Host name
Identify the fully qualified address of the host on which IBM WebSphere Application Server is installed. The default is localhost.
SOAP port
Identify the SOAP port of the deployment manager on the remote computer, if you are using remote deployment. The default port for the base edition is 8880.
User name
Identify the IBM WebSphere Application Server user name. The user must have administrative privileges.
Password
The IBM WebSphere Application Server user password.
Cell
Specify the IBM WebSphere Application Server cell where you want to deploy Product Master. If you have IBM WebSphere Application Server already installed and configured, you can click Retrieve Host Details during the installation process and have IBM Installation Manager retrieve the information for Cell, Node, and Server.
Node
Specify the IBM WebSphere Application Server node where you want to deploy Product Master. After you select the cell in IBM Installation Manager, all of the nodes within that cell are available in the list.
Server
Specify the server where you want to deploy Product Master. After you select the node in IBM Installation Manager, all of the servers that are available for that node show up in the list. If you want to create a new server for deployment, you can specify the new name on the configuration pane and it is created in IBM WebSphere Application Server during the installation process.
Virtual host name
Specify the IBM WebSphere Application Server virtual host where you want to deploy Product Master.
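
Before deployment, it can help to confirm that the profile name and SOAP port that you recorded match the running configuration. A minimal sketch, assuming the default installation path and the Dmgr01 profile name from the example above:

# Paths and profile names are the defaults from this worksheet; adjust as needed.
/opt/IBM/WebSphere/AppServer/bin/manageprofiles.sh -listProfiles
grep SOAP_CONNECTOR_ADDRESS /opt/IBM/WebSphere/AppServer/profiles/Dmgr01/properties/portdef.props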

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Application configuration worksheet
Use the application configuration worksheet to identify parameters for the Product Master.

The parameters that are listed in the following table equate to user prompts or fields that you see in IBM® Installation Manager on the Application Configuration pane.
Perl Directory
Provide the Perl home directory. Either enter the directory or click Browse to choose the Perl home directory. For example, if the which perl command returns /usr/bin/perl, the Perl directory is /usr. The default is /opt/Perl.
Java™ SE Development Kit Path
Provide the Java SE Development Kit home directory. Either enter the directory or click Browse to choose the Java SE Development Kit home directory. For example, the Java directory in the WebSphere® installation. The default is /opt/IBM/WebSphere/AppServer/java.
Locale
Specify the language to be used by the Product Master application and code tables. You can select only one language as the application resource language. The default is English.
Cache Multicast Address
Provide the cache multicast address. It ranges from 239.1.1.1 to 239.255.255.255. The default is 239.1.1.1.
Cache Multicast TTL
Provide the cache multicast TTL: 0 for a single-system installation and 1 for clusters. The default is 0.
RMI port
Specify the port on which the Remote Method Invocation (RMI) registry service listens for connections from other services. In a clustered environment, all nodes must use the same RMI port to communicate. The default is 17507.
Application Server HTTP port
Specify the Hypertext Transfer Protocol (HTTP) port on which the Product Master application runs. The port must not already be in use. The default is 7507.
Create database tables to be used by the product
Select the option to create the database tables to be used by the product along with the application installation.
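
Because the RMI and HTTP ports must be free on each node, a quick pre-check on Linux can save a failed installation. This sketch uses the worksheet defaults of 17507 and 7507:

# No output means that the default RMI (17507) and HTTP (7507) ports are free.
ss -ltn | grep -E ':(17507|7507)[[:space:]]'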

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

System requirements
The system requirements describe the supported hardware and software requirements for Product Master. Ensure that you are familiar with the minimum product levels
that you must install before you open a problem report.

To see the detailed system requirements in the Software Product Compatibility Reports (SPCR), search for "IBM® Product Master", and press Enter.

Operating Systems
Containers
Prerequisites
Supported Software
Hardware
Co-packaged Software
Hypervisors

Operating Systems

AIX®
Bit value: 64-Tolerate
AIX 7.2 TL2 and AIX 7.1; hardware: POWER® System - Big Endian

Linux®
Bit value: 64-Tolerate
Red Hat® Enterprise Linux (RHEL) Server 8.4 (minimum 8.4); hardware: POWER System - Big Endian, x86-64
Red Hat Enterprise Linux (RHEL) Server 7.3 (minimum 7.3); hardware: POWER System - Big Endian, x86-64
Red Hat Enterprise Linux (RHEL) Server 7 (minimum 7.1); hardware: IBM z Systems®, POWER System - Big Endian, x86-64
SUSE Linux Enterprise Server (SLES) 12 (minimum SP1); hardware: Z Systems, x86-64
Ubuntu 14.04 LTS (minimum Base 14.04.02); hardware: POWER System - Big Endian, x86-64
Ubuntu 16.04 LTS (minimum Base); hardware: x86-64

Note: Red Hat Enterprise Linux (RHEL) Server 8.4 does not support Mongo 4.0.23.

Containers

Kubernetes

Docker
and earlier - 19.03.4 and later
Kubernetes cluster
and later - 1.24 and later
and earlier - 1.19 and later
Operator Lifecycle Manager
0.18.3 and later
0.17.0 and later
NGINX Ingress Controller
1.0 and later
0.47.0 and later

OpenShift® container platform

4.10.x
12.0.0.4.1 and 4.8.x
4.7.4
4.4.6
4.4.x
3.11.x

IBM WebSphere® Liberty


22.0.0.5 and later
20.0.0.12 and later
Note: IBM WebSphere Liberty software is included with the product images, but you do need to install the other listed software before proceeding with the installation.

Prerequisites
Application Servers
WebSphere Application Server Network Deployment: 9.0.5 and future fix packs; 9.0 and future fix packs
WebSphere MQ: 9.0 and future fix packs; 8.0 and future fix packs

Databases
Db2® Advanced Enterprise Server Edition: 11.1.0 and future fix packs
Db2 Enterprise Server Edition: 11.5.0 and future fix packs; 11.1.0 and future fix packs
Db2 Standard Edition VPC Option: 11.5.0.0
Note: Though from the IBM Product Master 12.0 release only the IBM Db2 Standard Edition VPC Option 11.5.0.0 is packaged with the product, you can continue to use other supported Db2 versions if you have the required entitlement.
Oracle Database 12c: Release 2 (12.2) Enterprise Edition; Release 2 (12.2) Standard Edition
Oracle Database 18c: (18.3) Enterprise Edition and future fix packs
Oracle Database 19c: (19.3) Enterprise Edition and future fix packs
Amazon RDS for Oracle (Oracle RDS): 12c Release 2; 18c; 19c
Elasticsearch
and later - Elasticsearch 7.17.1
Interim Fix 1 and later - Elasticsearch 7.16.2
and later - Elasticsearch 7.13.0
and later - Elasticsearch 7.7.0
- Elasticsearch 5.5.3

Important: Starting from IBM Product Master Fix Pack 8 onwards, Elasticsearch is replaced with OpenSearch because of a change in licensing strategy (no
longer open source) by Elastic NV. Because of this change, you need to shift to OpenSearch 2.4.1 and later, and run full indexing. For
more information, see Installing OpenSearch (Fix Pack 8 and later).
OpenSearch
OpenSearch 2.4.1

Hazelcast
Interim Fix 1 and later - Hazelcast IMDG 4.2.4
and later - Hazelcast IMDG 4.1
and later - Hazelcast IMDG 3.12.5

MongoDB Community Edition
and later - MongoDB 4.0.25
and later - MongoDB 4.0.23
and later - MongoDB 4.0.22
and later - MongoDB 3.4

Java™ SDK

IBM Runtime Environment, Java Technology Edition: 8 and future fix packs

Runtime Environment
Open source Perl: 5.30.1 and future fix packs

Web Browsers
Google Chrome: 109.0.5414.122 and future fix packs; 105.0.5195.127; 88 and future fix packs
Microsoft Edge Chromium: 110.0.1587.50 and future fix packs; 106.0.1370.42 and future fix packs; 96 and future fix packs; 88 and future fix packs
Mozilla Firefox: ESR 102.8 and future fix packs; ESR 102.3 and future fix packs; ESR 78 and future fix packs

Note: Minimum supported screen resolution is 1366x768 pixels.

Supported Software
IBM InfoSphere® Information Server: 11.7 and future fix packs
IBM Security Directory Server: 6.4.0.0 and future fix packs
IBM Security Directory Suite: 8.0.1 and future fix packs
Lotus® Domino®: 8.5.3 and future fix packs
Microsoft Active Directory: 2019, 2016, 2012, 2008, and 2003, each with future fix packs
Novell eDirectory: 8.8 and future fix packs
Oracle Directory Server: 11g Enterprise Edition Release 1 (11.1) and future fix packs
Oracle Internet Directory: 11g Release 1 and future fix packs
IBM MQ: 9.0.0 and future fix packs
IBM Operational Decision Manager: 8.9.2; 8.8.1; 8.7.1

Hardware
Details for the IBM Product Master server apply to all supported operating systems. The following list shows the minimum requirements to get the application running; they do not sustain any prolonged load. For exact hardware requirements, you can request the sizing estimations.

Disk space: 100 GB for the application server; 100 - 200 GB in a RAID configuration for the database server
Memory: 16 GB RAM for the application server; 16 GB RAM for the database server
CPU: 8 - 16 processor cores for the application server; 8 processor cores for the database server

Note: These are the minimum requirements for Product Master to run functionally. To meet a specific service level agreement, additional capacity might be required.

Co-packaged Software
Product Name Version
Cognos® Analytics 11.1.5.0
IBM MQ 9.1
IBM Db2 Standard Edition VPC Option 11.5.0.0
IBM App Connect Enterprise 11.0.0.7
IBM Installation Manager 1.9
IBM Security Directory Suite 8.0.1.12
IBM Business Automation Workflow 19.0.0.3
WebSphere Application Server Network Deployment 9.0
WebSphere Application Server 9.0

Hypervisors
Hypervisors AIX Linux Windows
IBM PowerVM® Hypervisor (LPAR, DPAR, Micro-Partition) any supported version ✓ ❌ ❌
IBM PR/SM any version ❌ ✓ ❌
IBM z/VM® Hypervisor 6.4 ❌ ✓ ❌
Microsoft Hyper-V R3 ❌ ✓ ✓
Red Hat KVM as delivered with Red Hat Enterprise Linux (RHEL) and its RHEV equivalent 7.0 ❌ ✓ ✓
VMware ESXi 5.0 ❌ ✓ ✓
VMware ESXi 5.1 ❌ ✓ ✓
VMware ESXi 5.5 ❌ ✓ ✓
VMware ESXi 6.0 ❌ ✓ ✓
VMware ESXi 6.5 ❌ ✓ ✓
z/VM 6.1 ❌ ✓ ❌
z/VM 6.2 ❌ ✓ ❌
z/VM 6.3 ❌ ✓ ❌

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Preparing for the installation


Before you can install IBM® Product Master, ensure that you review all of the hardware and software requirements to run Product Master.

You must set up both a client system and one or more server systems. The application server, database server, and HTTP server can all be on the same server computer,
they can be combined in various ways across server computers, or they can each be on their own server computer. The HTTP server is recommended, but optional.

Product Master can run on a computer that has a host name with a length of up to 63 characters.

You must be logged on with an account that owns the IBM WebSphere® Application Server directories and binary files. The database JDBC drivers must be accessible by
this account. The instructions in the preparation topics assume that you are doing the installation locally on the server.

For best results, install Product Master as a non-root user:

For IBM WebSphere Application Server, use the wasadmin ID. This ID must own a DB2® client or a Db2 instance and be a member of the mqm management group.
For Db2, the suggested installation method is to set up one or more restricted users on a system for database schema users. Because Db2 uses the operating system to
authenticate a new user, a user ID such as mdmdb1 with a restricted shell is the best choice (see the sketch after this list). This user is not required to be a member of any of the Db2 groups.
You can also do a simple installation by using a single ID for both the Db2 installation ID and the schema ID. The default ID is db2inst1. For more information
about IBM Db2, see the product documentation.
A different database user and schema must exist for each deployment of Product Master. Different databases for each deployment are not required.
When you install on IBM WebSphere Application Server, ensure that no server named server and no cluster named cluster is in use on IBM WebSphere
Application Server. The names server and cluster are used by the IBM Product Master installation.
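
As an illustrative sketch only (the user name is the example from the list above, the restricted shell path varies by distribution, and you should adapt this to your own security policies), such a restricted schema user might be created like this on Linux:

# Placeholder user name and restricted shell; adjust to your security policies.
useradd -m -s /bin/rbash mdmdb1
passwd mdmdb1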

Downloading and extracting the installer


Download and extract the IBM Product Master ZIP file to perform product installation with the IBM Installation Manager.
Installing Perl
Product Master requires Perl and several Perl modules.
Installing IBM Installation Manager
Use this procedure if IBM Installation Manager is not installed.
Installing and setting up the database
You must set up the database to complete the IBM Product Master installation.
Configuring WebSphere® Application Server
Before IBM Product Master runs correctly, you must configure the application server.
Installing OpenSearch (Fix Pack 8 and later)
OpenSearch is a scalable, flexible, and extensible open source software suite for search, analytics, and observability applications that is licensed
under Apache 2.0.
Installing Elasticsearch (Fix Pack 7 and earlier)
Elasticsearch is a highly scalable open source full-text search and analytics engine that allows you to store, search, and
analyze large volumes of data quickly. Elasticsearch is generally used as the underlying engine or technology that powers applications that have
complex search features and requirements.

Installing Hazelcast IMDG
This topic describes how to configure the Hazelcast IMDG.
Installing MongoDB
You can install MongoDB as described in this topic.
Installing Python and machine learning modules
You can install Python and modules that are required for machine learning by using shell scripts that are packaged with Product Master.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Downloading and extracting the installer


Download and extract the IBM® Product Master ZIP file to perform product installation with the IBM Installation Manager.

Procedure
1. Download the electronic installation images from the IBM Passport Advantage® website.
2. Copy the installer to the /opt/ directory.
3. Extract the installer content, so that repository.config file is available for the installation.
After you extract the Product Master files, you can see the following files and folders:
bin - This directory contains the shell scripts and product set-up configuration files that are required for installation.
data_explorer - This directory contains ccd_data_explorer.jar file.
dcl - This directory contains compressed ZIP file for the data change logging (DCL) component.
etc - This directory contains product configuration files.
export_dcl - This directory contains compressed ZIP file for the data change export (DCE) component.
Generic_Data_Loader - This directory contains a data loader utility to import items into a catalog and perform actions on items, such as categorizing them or
checking them out to a collaboration area. It supports CSV and database sources for loading items.
jars - This directory contains JAR files.
javaapi - This directory contains the Java™ API JAR file, which allows Java API classes to be developed in isolation from an IBM Product Master instance, and the
reference documentation ZIP file.
locales - This directory contains locales-specific files.
logs - This directory contains all log files of product.
mdmui - This directory contains the Persona-based UI files.
blobstore - This directory contains the digital assets.
bin - This directory contains the shell scripts that are required for installation.
deployable - This directory contains the war files for deployment.
dynamic - This directory contains property files.
env-export - This directory contains the Persona-based roles that are mapped in the Admin UI.
libs - This directory contains JAR files.
logs - This directory contains log files of installation.
machinelearning - This directory contains machine learning modules and scripts.
samples - This directory contains custom samples for the REST and Angular components.
plugins - This directory contains various plug-ins required by the Product Master services.
properties - This directory contains product installation related files.
public_html - This directory contains company-specific files.
setup - This directory contains product build related files.
src - This directory contains SQL scripts, docstore scripts, and other maintenance scripts.
tmp - This directory contains product temporary files.
4. Give executable permission to the mdmui directory by running the following command:
chmod -R 755 mdmui/
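
As a condensed shell walk-through of steps 1 - 4 (the ZIP file name is a placeholder for the image that you download from Passport Advantage):

# The ZIP file name is a placeholder for the downloaded installation image.
cp ProductMaster_12.0.zip /opt/
cd /opt
unzip ProductMaster_12.0.zip
ls repository.config   # confirm that the installer repository file is present
chmod -R 755 mdmui/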

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Installing Perl
Product Master requires Perl and several Perl modules.

Before you begin


The supported version is as follows:
Open source Perl: 5.30.1 and future fix packs

Procedure
To install Perl successfully, you must complete these steps:

1. Select the source that you want to use for Perl. For more information about selecting your source for Perl, see Sources of Perl.

2. If necessary, build and install Perl from the source.
Note: To validate your version of Perl, use the command:
perl -version
3. Install any Perl modules that Product Master requires.

Sources of Perl
Consider these factors when you are deciding which version of Perl to install and use with Product Master.
Installing GNU utilities
Depending on your operating system and your choice for the source of Perl, you might need to install the freely available GNU utilities. If you plan to use the version
of Perl that is supplied with your operating system, and you have the C compiler for your operating system that is installed on the server where you are installing
Product Master, you do not need to install the GNU utilities.
Installing Perl in the Product Master home directory
You can install Perl in the home directory of the IBM Product Master user.
Including Perl directory in the PATH statement
If you installed Perl in the home directory for the IBM Product Master user, you must prefix the \bin directory of the directory where you installed Perl to the PATH
statement. This directory must be first on the PATH statement so that this installation of Perl is found before any other installation of Perl.
Sample .bashrc file
A .bashrc file is a system file for UNIX and Linux®. This file sets up the initial execution environment for deploying and running a PIM instance on a UNIX and Linux
server.
Installing Perl modules
After you install Perl, you might need to install the Perl modules. If you are using the version of Perl provided by your operating system, you need to use the C
compiler that was used to build Perl.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Sources of Perl
Consider these factors when you are deciding which version of Perl to install and use with Product Master.

You can use one of the following sources of Perl:

The Perl that is typically installed by default with your operating system
A commercially distributed Perl, such as ActiveState ActivePerl
A custom installation of Perl in the home directory for your IBM® Product Master user

Table 1. Choosing which version of Perl to install

Supplied with the operating system
Root access: required.
C compiler: required if you are installing modules from source; specifically, the C compiler that is provided by the operating system. Except for Linux®, which includes the GCC C compiler, the full C compiler is not included as part of the operating system and must be purchased separately: AIX®: IBM xlc; HP-UX: HP ANSI/C; Solaris: Sun Studio C compiler.
Perl modules: you must install required modules that are not installed by default.
Installation portability: limited; must install within the operating system.
Technical experience required: minimal, because Perl is generally part of the operating system.

Commercially distributed
Root access: not required if you are installing in the home directory for the Product Master user.
C compiler: not required.
Perl modules: all required modules are installed by default.
Installation portability: self-contained; can be reinstalled on other servers.
Technical experience required: minimal, due to ease of installation.

Installed in the Product Master home directory
Root access: not required for Perl, but is required temporarily for the GNU utilities.
C compiler: required, but built by using the freely available GNU compiler; you can instead use the C compiler for the operating system, if you prefer.
Perl modules: all are included.
Installation portability: self-contained; you can copy the installation directory to similar servers that have an identical PATH on each server.
Technical experience required: moderate; experience building from source is highly recommended.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Installing GNU utilities

Depending on your operating system and your choice for the source of Perl, you might need to install the freely available GNU utilities. If you plan to use the version of Perl
that is supplied with your operating system, and the C compiler for your operating system is installed on the server where you are installing Product Master, you do not
need to install the GNU utilities.

Before you begin


You must have root access to install the GNU utilities.

About this task


Installing Perl requires the following GNU utilities: GNU C compiler (gcc), GNU autoconf, GNU automake, GNU m4, GNU libtool, and GNU Make.

Procedure
1. Download the GNU utilities for your operating system.
Linux® already includes the GNU utilities; you can download the GNU utilities for the other supported operating systems from these websites:

IBM® AIX®
AIX toolbox, available at: http://www.ibm.com/systems/power/software/aix/linux/toolbox/download.html
Sun Solaris
Sun Freeware, available at: http://www.sunfreeware.com
HP-UX
HP-UX Porting and Archive Center, available at: http://hpux.connect.org.uk. GCC is available from the HP Developer & Solution Partner Program (DSPP).

2. Install the GNU utilities by following the information that is provided with the package that you downloaded.
3. Make sure that the directory that contains the GCC utility, the C compiler, is the first directory in the PATH statement.
For example, if GCC is installed in /usr/local/bin, /usr/local/bin should be first in the PATH statement.
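You can verify the result with a quick check like the following sketch (assuming, as in the example above, that GCC is installed in /usr/local/bin):

export PATH=/usr/local/bin:$PATH
which gcc        # should print /usr/local/bin/gcc
gcc --version    # confirms that the GNU compiler is the first one found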

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Installing Perl in the Product Master home directory


You can install Perl in the home directory of the IBM® Product Master user.

Before you begin


You have installed a C compiler.
Make sure that the PATH statement for the Product Master user includes the directory where the C compiler is installed.

Procedure
1. Download the Perl source code from the following website: http://www.perl.com
2. Decompress the Perl source code into a writable directory.
3. Change directories to the directory where you decompressed the Perl source code.
4. Run the following command to configure the Perl build:

./Configure -des -Dprefix=<mdmhome>/perl -Dcc=gcc

mdmhome is the directory where Product Master is installed.


5. Run the make command.
6. Run the make test command.
Do not proceed until this command completes successfully.
7. Run the make install command.
This command copies the Perl interpreter and all standard modules into the directory that you specified earlier as the custom Perl installation directory for Product
Master.

What to do next
Make sure to prefix this Perl installation directory to the PATH statement for this user.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Including Perl directory in the PATH statement


If you installed Perl in the home directory for the IBM® Product Master user, you must prefix the bin subdirectory of the Perl installation directory to the PATH
statement. This directory must be first on the PATH statement so that this installation of Perl is found before any other installation of Perl.



Procedure
1. Edit the .bashrc file for the IBM Product Master user.
2. Add the following statement to this .bashrc file:

PATH={mdmhome}/perl/bin:$PATH

Replace mdmhome with the home directory for the IBM Product Master user.

3. Save the .bashrc file.


4. Update the current shell by running the same statement:

PATH={mdmhome}/perl/bin:$PATH

Replace mdmhome with the home directory for the IBM Product Master user.
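You can then confirm that the custom Perl installation is the one found first (a quick sketch; mdmhome stands for the Product Master home directory, as above):

which perl       # should print {mdmhome}/perl/bin/perl
perl -v          # displays the version of the Perl that you built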

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Sample .bashrc file


A .bashrc file is a system file for UNIX and Linux®. This file sets up the initial execution environment for deploying and running a PIM instance on a UNIX or Linux server.

Sample .bashrc file in a WebSphere Application Server environment


The following sample .bashrc file is used in a WebSphere® Application Server environment.

export TOP=<mdm4pim installdir>

# set Oracle specific settings


export ORACLE_HOME=/opt/oracle/instantclient_11_1
export LD_LIBRARY_PATH=${ORACLE_HOME}
export LIBPATH=${ORACLE_HOME}
export PATH=$ORACLE_HOME:$ORACLE_HOME/bin:$PATH

# set DB2 specific setting


. <db2 installdir>/sqllib/db2profile

export PERL5LIB=$TOP/bin/perllib
export JAVA_HOME=<WAS installdir>/java
export LANG=en_US

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Installing Perl modules


After you install Perl, you might need to install the Perl modules. If you are using the version of Perl provided by your operating system, you need to use the C compiler that
was used to build Perl.

Before you begin


If you are using the operating system installation of Perl, you must have root access.
If you are using a custom installation of Perl, make sure that the Perl installation directory is the first directory in the PATH statement.

About this task


You can install the Perl modules with or without internet connection. If you do not have an internet connection for each of the servers in your IBM® Product Master
installation, you can download the Perl modules from CPAN and then copy them to your servers for you to install. If you have an internet connection for each of the servers
in your IBM Product Master installation, you can use the CPAN module that is part of your default Perl installation to download and install other Perl modules.

When you are configuring Product Master, your Perl installation is validated and any missing Perl modules are displayed.
You might want to download and use the Devel::Loaded module because it displays which modules you already have installed. After you install this module, enter the
pmall command.
Note: If pmall is not in your PATH, it is located in the bin directory in the root directory of your Perl installation. You can use the which perl command to find the
location of the Perl interpreter in a Perl installation that is supplied by your operating system. The which perl command returns a symlink that points to the root of the
Perl installation.
Currently, the following Perl modules are required:

Config::IniFiles (included with Product Master, no need to install this module separately)
Config::Properties (included with Product Master, no need to install this module separately)
File::Find
Getopt::Long



Net::Domain
File::Copy
File::Temp
File::Basename
IO::Handle
File::Path
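As a quick sanity check, you can verify that the Perl on your PATH can load these modules with a one-line sketch such as the following (the module names come from the preceding list; any module that fails to load still needs to be installed):

perl -e 'use File::Find; use Getopt::Long; use Net::Domain; use File::Copy; use File::Temp; use File::Basename; use IO::Handle; use File::Path; print "All modules found\n";'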

Procedure
1. Download the Perl modules from metacpan.
You must review the dependencies for the modules and download all of those dependent modules that are not installed on your Product Master servers as well.
Note: The home page of many modules on the CPAN site has a dependencies link that you can review to determine the dependencies of a module.
2. For each of the modules that you downloaded, complete the following steps:
a. Unpack it into a writable directory.
b. Run the Perl configure command: perl Makefile.PL.
c. Run the make command.
d. Run the make test command.
Do not proceed until this command completes successfully.
e. Run the make install command.
3. To install the Perl module with an internet connection, run the CPAN command.
To run CPAN in a shell environment where you can run commands and respond to prompts, type cpan and press Enter. Or, you can run the following command to
install specific modules and any of its dependent modules:

perl -MCPAN -e 'install <module_name>'

If you are running CPAN for the first time, you must configure CPAN. Accept all the default values when prompted. When configuration is complete, you are either
given a prompt to initiate an action or the action that you already specified is initiated.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Installing IBM Installation Manager


Use this procedure if IBM® Installation Manager is not installed.

Before you begin


Following is the supported version.

Software                    Version
IBM Installation Manager    1.9

Preparing IBM Installation Manager


IBM Installation Manager uses defined repositories to determine what packages are available for you to install. These repositories point to your installation media.
You can manually add your offerings to IBM Installation Manager. Ensure that the Display variable is exported for the user interface mode of installation. Then, go to
the IM directory and enter the ./IBMIM command to start IBM Installation Manager.
Setting up the installation media
The installation media for installing IBM Product Master is available either as physical CDs or as downloadable installation image files from Passport Advantage®.

1. If you obtained Product Master in the form of physical CDs, check that you have all of the installation disks.
2. If you downloaded installation image files for Product Master from Passport Advantage, decompress the installation image files into the wanted installation
directory.

About this task


If you want to install IBM Installation Manager as non-root, do not install IBM Installation Manager in admin mode.

Procedure
1. From your installation media or from Passport Advantage, download IBM Installation Manager.
2. Extract the IBM Installation Manager compressed file. The name of the compressed file is dependent upon your operating system.
3. Edit the install.ini file and replace Admin with nonadmin.
4. Ensure that the Display variable is exported for the user interface mode of installation.
5. Open a command prompt.
6. Issue the ./install command and complete the installation wizard.
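For example, steps 3 - 6 might look like the following on Linux (a sketch; the extraction directory is an assumption, and the exact contents of install.ini depend on the package that you downloaded):

cd /opt/IM_extracted                      # directory where you extracted the compressed file
sed -i 's/Admin/nonadmin/' install.ini    # step 3: replace Admin with nonadmin
export DISPLAY=:0                         # step 4: export the Display variable
./install                                 # step 6: start the installation wizard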

What to do next
Continue with adding offerings to the IBM Installation Manager.

Adding offerings to IBM Installation Manager


Use this procedure to add IBM Product Master to the list of offerings that are installed by IBM Installation Manager.

IBM Product Master 12.0 Fix Pack 8



Operating Systems: AIX, Linux, and Windows (Workbench only)

Adding offerings to IBM Installation Manager


Use this procedure to add IBM® Product Master to the list of offerings that are installed by IBM Installation Manager.

Before you begin


Make sure that you installed IBM Installation Manager and that you did not install it in admin mode.

Procedure
1. Start IBM Installation Manager.
2. Click File > Preferences.
3. On the Preferences, select Repositories > Add Repository.
4. On the Add Repository, click Browse.
5. Locate and select repository.config file.
6. On the Add Repository, click OK.
7. On the Preferences, click OK.

What to do next
Continue with preparing for and installing Product Master.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Installing and setting up the database


You must set up the database to complete the IBM® Product Master installation.

To install and set up the database, refer to the documentation for your database. To configure the database for use with Product Master, refer to the following topics.

Product Master uses two kinds of connections to connect to the database:

A native database client to run scripts for creating schema or companies


A JDBC driver

For installation purposes, set up one or more restricted users on a system for database schema users. Because Db2® uses the operating system to authenticate a new
user, use a user ID such as mdmdb1 with a restricted shell. This user is not required to be a member of any Db2 groups. You can also do a simple installation by using a
single ID for both the Db2 installation ID and the schema ID. The default ID is db2inst1. For more information, see your Db2 documentation.
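For example, on Linux you might create such a restricted schema user as follows (a sketch; the user name mdmdb1 comes from the example above, and the restricted shell path varies by distribution):

useradd -m -s /bin/rbash mdmdb1    # restricted shell; requires root authority
passwd mdmdb1                      # set an initial password for the user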

Setting up your Db2 database


If you plan to use a Db2 database with IBM Product Master, you must install the supported version of Db2 before you install Product Master.
Setting up your Oracle Database
If you plan to use an Oracle Database with IBM Product Master, you must install the supported version of Oracle before you install Product Master.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Setting up your Db2 database


If you plan to use a Db2® database with IBM® Product Master, you must install the supported version of Db2 before you install Product Master.

Before you begin


Following are the supported versions.

Software                                   Version
Db2® Advanced Enterprise Server Edition    11.1.0 and future fix packs
Db2 Enterprise Server Edition              11.5.0 and future fix packs; 11.1.0 and future fix packs
Db2 Standard Edition VPC Option            11.5.0.0

Note: Though from the IBM Product Master 12.0 release only the IBM Db2 Standard Edition VPC Option 11.5.0.0 is packaged with the product, you can continue to use other supported Db2 versions if you have the required entitlement.
After you install the database software, make sure that you apply the most current fix pack.

To set up a Db2 database and its environment, you must use these guidelines to create the instance, database, buffer pools, and table spaces.

The following aspects must be considered when you set up the database:

Database instance
Create a new, separate database for the Product Master schema. In the examples, PIMDB is used as the name of this new database. Because of the large amounts of
data that Product Master manages, do not share an existing database; instead, create a new one. The database must be created by using the UTF-8 character
encoding.

In most implementations, the Product Master database uses approximately 90% OLTP (online transaction processing) and 10% batch processing. OLTP causes lots
of concurrent activity and single row updates during business hours and large batch processing activity during off-peak time.

To ensure that the Db2 system is not I/O bound, it is important to use 10 - 15 spindles per processor and dedicated LUNs (Logical Unit Numbers) per database file
system. It is also advisable to separate the Db2 transaction logs and data on separate spindles and LUNs. Use file systems instead of raw devices, and create one file
system per LUN. Use RAID-10 for transaction logs and RAID-10 or RAID-5 for data LUNs. Set the DB2_PARALLEL_IO registry variable and set the EXTENTSIZE to
the RAID stripe size. Use AUTOMATIC (the default) for NUM_IOCLEANERS, NUM_IOSERVERS, and PREFETCHSIZE.

Note: For more information about achieving balanced I/O for your Db2 system, see IBM Information Management Best Practices.

Buffer pool requirements


Due to the large size of tables in Product Master, the page sizes that are used to create the buffer pools are 16 KB and 32 KB.
Table space requirements
The following table lists the storage type, buffer pool, and management type for the required table spaces.
Creating the Db2 instance
The first step in setting up Db2 for use with IBM Product Master is to create a Db2 instance.
Creating the Db2 database
The second step in setting up Db2 for use with IBM Product Master is to create a Db2 database.
Creating buffer pools
The third step in setting up Db2 for use with IBM Product Master is to create the buffer pools for use by Db2.
Creating table spaces
The fourth step in setting up Db2 for use with IBM Product Master is to create the table spaces in a database that has automatic storage that is enabled.
Adding database users and granting permissions
To install and use IBM Product Master effectively, you must add a database user and grant the necessary permissions.
Db2 configurations
You can customize profile registry variables, database manager configuration parameters, and Db2 configuration parameters to optimize performance with IBM
Product Master.
Setting up the Db2 client on Product Master
You must set up the Db2 client on Product Master.
IBM Db2 database setup checklist
Use this checklist to verify your IBM Db2 setup before installing IBM Product Master.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Buffer pool requirements


Due to the large size of tables in Product Master, the page sizes that are used to create the buffer pools are 16 KB and 32 KB.

You must create the buffer pools, and stop and restart the DB2® instance, before you create the table spaces.

The following table lists the buffer pools that are needed for use by table spaces. The size of each buffer pool is Automatic.

Buffer pool Used by this table space


USERSBP USERS
INDXBP INDX
BLOBBP BLOB_TBL_DATA
XML_DATA_BP XML_DATA
XML_LARGE_BP XML_LARGE_DATA
XML_INDX_BP XML_INDEX
ITA_DATA_BP ITA_DATA
ITA_IX_BP ITA_IX
ITM_DATA_BP ITM_DATA
ITM_IX_BP ITM_IX
ITD_DATA_BP ITD_DATA
ITD_IX_BP ITD_IX
ICM_DATA_BP ICM_DATA
ICM_IX_BP ICM_IX
LCK_DATA_BP LCK_DATA

LCK_IX_BP LCK_IX
TEMPUSRBP user's temporary table space
TEMPSYSBP system's temporary table space
IBMDEFAULTBP SYSCATSPACE

The SYSCATSPACE table space is automatically created when you create the database.
The buffer pools ITA_DATA_BP, ITA_IX_BP, ITD_DATA_BP, ITD_IX_BP, ITM_DATA_BP, ITM_IX_BP, LCK_DATA_BP, and LCK_IX_BP are required for Product
Master production instances.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Table space requirements


The following table lists the storage type, buffer pool, and management type for the required table spaces.

Only the USERS, INDX, BLOB_TBL_DATA, TEMP_USER, and TEMP_SYSTEM table spaces are required for a default development environment. In a production environment,
create all listed table spaces and buffer pools so that highly used tables such as ITA, ITD, ITM, ICM, and LCK can be associated with separate table spaces when you run the
create_schema.sh script.
Note: You must use a table space-mapping file to use these additional table spaces and buffer pools. This file is described in the Run schema creation scripts section.
Management: Automatic for the following table spaces.
Table space Type Buffer pool
USERS LARGE USERSBP
INDX LARGE INDXBP
BLOB_TBL_DATA LARGE BLOBBP
XML_DATA LARGE XML_DATA_BP
XML_LARGE_DATA LARGE XML_LARGE_BP
XML_INDEX LARGE XML_INDX_BP
TEMP_USER USER TEMPORARY TEMPUSRBP
TEMP_SYSTEM SYSTEM TEMPORARY TEMPSYSBP
ITA_DATA LARGE ITA_DATA_BP
ITM_DATA LARGE ITM_DATA_BP
ITD_DATA LARGE ITD_DATA_BP
ICM_DATA LARGE ICM_DATA_BP
LCK_DATA LARGE LCK_DATA_BP
ITA_IX LARGE ITA_IX_BP
ITM_IX LARGE ITM_IX_BP
ITD_IX LARGE ITD_IX_BP
ICM_IX LARGE ICM_IX_BP
LCK_IX LARGE LCK_IX_BP
TEMP_USER32 USER TEMPORARY TEMPUSRBP32
TEMP_SYSTEM32 SYSTEM TEMPORARY TEMPSYSBP32
There are two types of table space management:

Database-managed space (DMS)


Space that is managed by DB2®.
System-managed space (SMS)
Space that is managed by the operating system.

The TEMP_USER and TEMP_USER32 table spaces are SMS user temporary table spaces that store the declared temporary tables after the application defines such tables.
The use of temporary table space increases throughput of data while you run complex SQL queries that need extra space to process large amounts of data.

By creating intermediate tables that are used to process large amounts of data that is made available during the application connection, you reduce the need to rebuild
these intermediate tables, improving the performance of the system.

The TEMP_SYSTEM and TEMP_SYSTEM32 are SMS system temporary table spaces that are used during SQL operations for internal temporary tables, for sorting, storing
intermediate results, and for reorganizing tables and other transient data.

When you create physical and logical volumes for the table spaces, physically spread the table spaces across different disks to use parallel I/O. Specifically, spread the ITA_IX
table space across different high-performance disks because it is a high-use and high-growth table space.

The table spaces that are listed need to be created with the AUTORESIZE YES option.

Instead of using database-managed or operating system-managed table spaces, you can use automatic storage for all the table spaces. With the automatic storage
option, the database manager automatically manages the container and space allocation for the table spaces as you create and populate them. This is the default
behavior when a new database is created.

IBM Product Master 12.0 Fix Pack 8



Operating Systems: AIX, Linux, and Windows (Workbench only)

Creating the Db2 instance


The first step in setting up DB2® for use with IBM® Product Master is to create a Db2 instance.

About this task


An instance is a logical database manager environment where you create databases and set the configuration parameters globally. You can have many databases in an
instance, however, you need to have one instance with one database for your Product Master production environment.

For more information about how to create a Db2 instance, see the Db2 documentation or consult your DBA.
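For example, a DBA with root authority might create the instance with the db2icrt command (a sketch; the installation path and the user names db2inst1 and db2fenc1 are examples only):

cd /opt/ibm/db2/V11.5/instance     # assumed Db2 installation directory
./db2icrt -u db2fenc1 db2inst1     # -u names the fenced user; db2inst1 becomes the instance owner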

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Creating the Db2 database


The second step in setting up DB2® for use with IBM® Product Master is to create a Db2 database.

About this task


It is assumed that you know how to create a Db2 database. For more information about how to create a Db2 database, see the Db2 documentation or consult your DBA.

Procedure
Create the Db2 database. When you create the database, make sure to use the CODESET UTF-8 option in the CREATE DATABASE statement.

Example
Sample statement for creating a database:

CREATE DATABASE PIMDB AUTOMATIC STORAGE YES ON '/u01/db2inst1',
'/u02/db2inst1' USING CODESET UTF-8 TERRITORY US

In this example, PIMDB is the database name, and /u01/db2inst1 and /u02/db2inst1 are the automatic storage paths on the Db2 server; change the storage paths to paths
appropriate to your server. Change the territory from US to your appropriate territory. For more information, see the Db2 documentation for supported values for territory.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Creating buffer pools


The third step in setting up DB2® for use with IBM® Product Master is to create the buffer pools for use by Db2.

About this task


A buffer pool is memory that you use to cache table and index data pages as they are being read from disk or being modified.

For more information about how to create buffer pools, see the Db2 documentation or consult your DBA.

For information on buffer pool requirements for use with Product Master, see Buffer pool requirements.

Procedure
Create the buffer pools.

Example
Use the following statements for creating buffer pools in Db2:
Note: Some buffer pools have 32 K page size.

CREATE BUFFERPOOL USERSBP SIZE AUTOMATIC PAGESIZE 16K;


CREATE BUFFERPOOL INDXBP SIZE AUTOMATIC PAGESIZE 16K;



CREATE BUFFERPOOL BLOBBP SIZE AUTOMATIC PAGESIZE 16K;
CREATE BUFFERPOOL TEMPUSRBP SIZE AUTOMATIC PAGESIZE 16K;
CREATE BUFFERPOOL TEMPSYSBP SIZE AUTOMATIC PAGESIZE 16K;
CREATE BUFFERPOOL TEMPUSRBP32 SIZE AUTOMATIC PAGESIZE 32K;
CREATE BUFFERPOOL TEMPSYSBP32 SIZE AUTOMATIC PAGESIZE 32K;
CREATE BUFFERPOOL XML_DATA_BP SIZE AUTOMATIC PAGESIZE 32K;
CREATE BUFFERPOOL XML_LARGE_BP SIZE AUTOMATIC PAGESIZE 32K;
CREATE BUFFERPOOL XML_INDX_BP SIZE AUTOMATIC PAGESIZE 32K;

All buffer pools are required for successful schema and product installation.
If you are using custom table spaces, then the following extra buffer pools are required.

CREATE BUFFERPOOL ITA_DATA_BP SIZE AUTOMATIC PAGESIZE 16K;


CREATE BUFFERPOOL ITA_IX_BP SIZE AUTOMATIC PAGESIZE 16K;
CREATE BUFFERPOOL ITD_DATA_BP SIZE AUTOMATIC PAGESIZE 16K;
CREATE BUFFERPOOL ITD_IX_BP SIZE AUTOMATIC PAGESIZE 16K;
CREATE BUFFERPOOL ITM_DATA_BP SIZE AUTOMATIC PAGESIZE 16K;
CREATE BUFFERPOOL ITM_IX_BP SIZE AUTOMATIC PAGESIZE 16K;
CREATE BUFFERPOOL ICM_DATA_BP SIZE AUTOMATIC PAGESIZE 16K;
CREATE BUFFERPOOL ICM_IX_BP SIZE AUTOMATIC PAGESIZE 16K;
CREATE BUFFERPOOL LCK_DATA_BP SIZE AUTOMATIC PAGESIZE 16K;
CREATE BUFFERPOOL LCK_IX_BP SIZE AUTOMATIC PAGESIZE 16K;

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Creating table spaces


The fourth step in setting up DB2® for use with IBM® Product Master is to create the table spaces in a database that has automatic storage that is enabled.

About this task


For more information about how to create table spaces, see the Db2 documentation or consult your DBA.

For more information, see Table space requirements and Custom table space names.

Procedure
Create the table spaces.

Example
The following example provides sample statements for creating table spaces. Modify the container paths /db/a1/db2inst1/ and /db/a5/db2inst1/ to the appropriate paths
on your Db2 server.
Large table spaces:

CREATE LARGE TABLESPACE USERS PAGESIZE 16K MANAGED BY AUTOMATIC STORAGE


BUFFERPOOL USERSBP
NO FILE SYSTEM CACHING AUTORESIZE YES INCREASESIZE 1G;

CREATE LARGE TABLESPACE INDX PAGESIZE 16K MANAGED BY AUTOMATIC STORAGE


BUFFERPOOL INDXBP
NO FILE SYSTEM CACHING AUTORESIZE YES INCREASESIZE 1G;

CREATE LARGE TABLESPACE BLOB_TBL_DATA PAGESIZE 16K MANAGED BY AUTOMATIC STORAGE


BUFFERPOOL BLOBBP
FILE SYSTEM CACHING AUTORESIZE YES INCREASESIZE 1G;

CREATE LARGE TABLESPACE XML_DATA PAGESIZE 32K MANAGED BY AUTOMATIC STORAGE


BUFFERPOOL XML_DATA_BP
NO FILE SYSTEM CACHING AUTORESIZE YES INCREASESIZE 100M;

CREATE LARGE TABLESPACE XML_LARGE_DATA PAGESIZE 32K MANAGED BY AUTOMATIC STORAGE


BUFFERPOOL XML_LARGE_BP
NO FILE SYSTEM CACHING AUTORESIZE YES INCREASESIZE 100M;

CREATE LARGE TABLESPACE XML_INDEX PAGESIZE 32K MANAGED BY AUTOMATIC STORAGE


BUFFERPOOL XML_INDX_BP
FILE SYSTEM CACHING AUTORESIZE YES INCREASESIZE 100M;

If you are using custom table spaces, ensure that you include the following extra table spaces:

CREATE LARGE TABLESPACE ITA_DATA PAGESIZE 16K MANAGED BY AUTOMATIC STORAGE


BUFFERPOOL ITA_DATA_BP
NO FILE SYSTEM CACHING AUTORESIZE YES INCREASESIZE 1G;

CREATE LARGE TABLESPACE ITM_DATA PAGESIZE 16K MANAGED BY AUTOMATIC STORAGE


BUFFERPOOL ITM_DATA_BP
NO FILE SYSTEM CACHING AUTORESIZE YES INCREASESIZE 1G;

CREATE LARGE TABLESPACE ITD_DATA PAGESIZE 16K MANAGED BY AUTOMATIC STORAGE


BUFFERPOOL ITD_DATA_BP
NO FILE SYSTEM CACHING AUTORESIZE YES INCREASESIZE 1G;



CREATE LARGE TABLESPACE ICM_DATA PAGESIZE 16K MANAGED BY AUTOMATIC STORAGE
BUFFERPOOL ICM_DATA_BP
NO FILE SYSTEM CACHING AUTORESIZE YES INCREASESIZE 1G;

CREATE LARGE TABLESPACE LCK_DATA PAGESIZE 16K MANAGED BY AUTOMATIC STORAGE


BUFFERPOOL LCK_DATA_BP
NO FILE SYSTEM CACHING AUTORESIZE YES INCREASESIZE 1G;

CREATE LARGE TABLESPACE ITA_IX PAGESIZE 16K MANAGED BY AUTOMATIC STORAGE


BUFFERPOOL ITA_IX_BP
NO FILE SYSTEM CACHING AUTORESIZE YES INCREASESIZE 1G;

CREATE LARGE TABLESPACE ITM_IX PAGESIZE 16K MANAGED BY AUTOMATIC STORAGE


BUFFERPOOL ITM_IX_BP
NO FILE SYSTEM CACHING AUTORESIZE YES INCREASESIZE 1G;

CREATE LARGE TABLESPACE ITD_IX PAGESIZE 16K MANAGED BY AUTOMATIC STORAGE


BUFFERPOOL ITD_IX_BP
NO FILE SYSTEM CACHING AUTORESIZE YES INCREASESIZE 1G;

CREATE LARGE TABLESPACE ICM_IX PAGESIZE 16K MANAGED BY AUTOMATIC STORAGE


BUFFERPOOL ICM_IX_BP
NO FILE SYSTEM CACHING AUTORESIZE YES INCREASESIZE 1G;

CREATE LARGE TABLESPACE LCK_IX PAGESIZE 16K MANAGED BY AUTOMATIC STORAGE


BUFFERPOOL LCK_IX_BP
NO FILE SYSTEM CACHING AUTORESIZE YES INCREASESIZE 1G;

Temporary table spaces:

CREATE USER TEMPORARY TABLESPACE TEMP_USER PAGESIZE 16K MANAGED


BY AUTOMATIC STORAGE
BUFFERPOOL TEMPUSRBP;

CREATE SYSTEM TEMPORARY TABLESPACE TEMP_SYSTEM PAGESIZE 16K MANAGED


BY AUTOMATIC STORAGE
BUFFERPOOL TEMPSYSBP;

CREATE USER TEMPORARY TABLESPACE TEMP_USER32 PAGESIZE 32K MANAGED


BY AUTOMATIC STORAGE
BUFFERPOOL TEMPUSRBP32;

CREATE SYSTEM TEMPORARY TABLESPACE TEMP_SYSTEM32 PAGESIZE 32K MANAGED


BY AUTOMATIC STORAGE
BUFFERPOOL TEMPSYSBP32;

Note: All table spaces are required for successful schema and product installation.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Adding database users and granting permissions


To install and use IBM® Product Master effectively, you must add a database user and grant the necessary permissions.

Before you begin


The Product Master database schema needs a database user that is authenticated at the server level.

About this task


The following database privileges are required only during the installation phase, before the create_schema command is run:

BINDADD
EXTERNAL ROUTINE
IMPLSCHEMA
NOFENCE

These privileges can be revoked after the create_schema command is run. These database privileges are not required during a Fix Pack installation or a new version
migration.
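For example, after the create_schema command completes, you might revoke the installation-only authorities with statements such as the following sketch (the user name PIM and the authority names follow the GRANT examples later in this topic):

db2 CONNECT TO <databasename> user <Username> using <password>
db2 REVOKE BINDADD, CREATE_NOT_FENCED, IMPLICIT_SCHEMA ON DATABASE FROM USER PIM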

Procedure
1. Create a user at the operating system level.
a. Create an operating system user.
You must have root authority to create a user. If you do not have root authority to create a user, contact your DB2® server administrator for help.
b. Set the password for the user.
You can try to connect to the server by using the user ID to verify that the user can connect to the server.
c. Set a new password for the user.
With AIX®, the password expires immediately after you log in to the server.



2. Create a database user pim and grant the permissions by using the instance owner login (the default instance owner login is db2inst1).
You must grant these permissions:
DBADM
CREATETAB
BINDADD
CONNECT
CREATE_NOT_FENCED
IMPLICIT_SCHEMA
LOAD ON DATABASE
Sample SQL:

db2 CONNECT TO <databasename> user <Username> using <password>


db2 GRANT DBADM, CREATETAB, BINDADD, CONNECT, CREATE_NOT_FENCED,
IMPLICIT_SCHEMA, LOAD ON DATABASE TO USER PIM

3. Grant user permissions to use space on all the Product Master specific table spaces.
You can grant user permissions by using the following SQL statements:

db2 GRANT USE OF TABLESPACE USERS TO PIM


db2 GRANT USE OF TABLESPACE INDX TO PIM
db2 GRANT USE OF TABLESPACE BLOB_TBL_DATA TO PIM
db2 GRANT USE OF TABLESPACE TEMP_USER TO PIM
db2 GRANT USE OF TABLESPACE XML_DATA TO PIM
db2 GRANT USE OF TABLESPACE XML_LARGE_DATA TO PIM
db2 GRANT USE OF TABLESPACE XML_INDEX TO PIM

4. Grant user permissions to any additional table spaces that you create for the Product Master production environment.
You can grant user permissions by using the following SQL statements:

db2 GRANT USE OF TABLESPACE ICM_DATA TO PIM


db2 GRANT USE OF TABLESPACE ICM_IX TO PIM
db2 GRANT USE OF TABLESPACE ITM_DATA TO PIM
db2 GRANT USE OF TABLESPACE ITM_IX TO PIM
db2 GRANT USE OF TABLESPACE ITD_DATA TO PIM
db2 GRANT USE OF TABLESPACE ITD_IX TO PIM
db2 GRANT USE OF TABLESPACE ITA_DATA TO PIM
db2 GRANT USE OF TABLESPACE ITA_IX TO PIM
db2 GRANT USE OF TABLESPACE LCK_DATA TO PIM
db2 GRANT USE OF TABLESPACE LCK_IX TO PIM

5. Create a schema entitled PIM for the user PIM.


Sample SQL as created by Control Center.

CREATE SCHEMA PIM AUTHORIZATION PIM;

What to do next
Repeat these steps if you want another database schema user for another instance of Product Master. For example, if you want another test instance of Product Master,
then create a database user and schema with the name pimtest in the database. This operation needs an operating system user with the name pimtest.
Important: You can share the database for Product Master with development or QA environments, but not with a production environment. Sharing the Product Master
production database adversely affects production performance.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Db2 configurations
You can customize profile registry variables, database manager configuration parameters, and DB2® configuration parameters to optimize performance with IBM® Product
Master.

IBM Db2 database profile registry updates


IBM Product Master requires that certain profile registry values be updated for Db2.
Db2 database manager configuration parameters
IBM Product Master requires certain Db2 database manager configuration parameters to be set before you install and use the product.
Db2 database configuration parameters
IBM Product Master requires certain Db2 configuration parameters to be set before you install and use the product.
Transaction log files for the database
Transaction log files provide you with the ability to recover your environment to a consistent state and preserve the integrity of your data. Log file storage must be
optimized because log files are written sequentially, and the database manager reads log files sequentially during database recovery.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

IBM Db2 database profile registry updates


IBM® Product Master requires that certain profile registry values be updated for DB2®.



The following profile registry variables are required for use by Product Master:

DB2CODEPAGE
The DB2CODEPAGE registry variable is used to specify the character set that is used during export and import of data in Db2.

Set the value to 1208.

DB2COMM
The DB2COMM registry variable determines which protocol's connection managers are enabled when the database manager is started. You can set this variable for
multiple communication protocols by separating the keywords with commas.

Set the value to tcpip.

DB2_PARALLEL_IO
The DB2_PARALLEL_IO registry variable changes the way Db2 calculates the input and output parallelism of a table space. You can enable input and output
parallelism by providing the correct number of prefetch requests. You can enable input and output parallelism either implicitly, by using multiple containers, or
explicitly, by setting DB2_PARALLEL_IO. Each prefetch request is a request for an extent of pages. For example, a table space has two containers and the prefetch
size is four times the extent size. If the registry variable is set, a prefetch request for this table space is broken into four requests (one extent per request) with a
possibility of four prefetchers that service the requests in parallel.

Set the value to "*" (asterisk).

Other Profile Registry variables are not required, but can be set if there is a specific requirement.

You can set the Db2 registry variables by using the following Db2 commands on the Db2 server:

db2set DB2COMM=tcpip
db2set DB2_PARALLEL_IO=*
db2set DB2CODEPAGE=1208

If you are migrating from an older release of Db2, ensure that you set the following registry variables and their values:

DB2_SKIPDELETED=OFF
DB2_SKIPINSERTED=OFF
DB2_EVALUNCOMMITTED=NO
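For example, as db2set commands that correspond to the values above:

db2set DB2_SKIPDELETED=OFF
db2set DB2_SKIPINSERTED=OFF
db2set DB2_EVALUNCOMMITTED=NO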

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Db2 database manager configuration parameters


IBM® Product Master requires certain DB2® database manager configuration parameters to be set before you install and use the product.

The following table shows the database manager configuration parameters and corresponding values that must be set for use with Product Master. In each case, the
syntax of the command to update the parameter is:

db2 update dbm cfg using <parameter> <value>

MON_HEAP_SZ (Value: Automatic)
    The memory that is required for maintaining the private views of the database system monitor data is allocated from the monitor heap. Its size is controlled by the mon_heap_sz configuration parameter.

SHEAPTHRES (Value: 0 (Automatic))
    Private and shared sorts use memory from two different memory sources. The size of the shared sort memory area is statically predetermined at the time of the first connection to a database, based on the value of sheapthres. The size must be at least two times the size of the sortheap of any database hosted by the Db2 instance.

Sample statements for updating database manager configurations:

update dbm cfg using MON_HEAP_SZ automatic


update dbm cfg using SHEAPTHRES 0

There is no requirement to update SHEAPTHRES for new installations as 0 is the default value.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Db2 database configuration parameters


IBM® Product Master requires certain DB2® configuration parameters to be set before you install and use the product.

The following table shows the database configuration parameters that must be set for use with Product Master:

Table 1. Db2 database configuration parameters

DFT_QUERYOPT (Value: 5)
    The query optimization class is used to direct the optimizer to use different degrees of optimization when you compile SQL queries. This parameter provides more flexibility by setting the default query optimization class.

DBHEAP (Value: Automatic)
    There is one database heap per database, and the database manager uses it on behalf of all instances of Product Master connected to the database. It contains control block information for tables, indexes, table spaces, and buffer pools. It also contains space for the log buffer (logbufsz) and the catalog cache (catalogcache_sz). Therefore, the size of the heap depends on the number of control blocks that are stored in the heap at a specific time. The control block information is kept in the heap until all instances of Product Master disconnect from the database. The minimum amount the database manager needs to get started is allocated at the first connection. The data area is expanded as needed, up to the maximum specified by dbheap.

CATALOGCACHE_SZ (Value: 5120)
    This parameter indicates the maximum amount of space that the catalog cache can use from the database heap (dbheap).

LOGBUFSZ (Value: 4096)
    This parameter enables you to specify the amount of the database heap (defined by the dbheap parameter) to use as a buffer for log records before these records are written to disk. This parameter must also be less than or equal to the dbheap parameter.

UTIL_HEAP_SZ (Value: 5120)
    This parameter indicates the maximum amount of memory that can be used simultaneously by the BACKUP, RESTORE, and LOAD and load recovery utilities.

LOCKLIST (Value: Automatic)
    This parameter indicates the amount of storage that is allocated to the lock list. There is one lock list per database, and it contains the locks that are held by all instances of Product Master concurrently connected to the database. Depending on the size of the database, this parameter might require an increase.

SORTHEAP (Value: Automatic)
    This parameter defines the maximum number of private memory pages to be used for private sorts, or the maximum number of shared memory pages to be used for shared sorts.

STMTHEAP (Value: Automatic)
    The statement heap is used as a workspace for the SQL compiler during compilation of an SQL statement. This parameter specifies the size of this workspace.

APPLHEAPSZ (Value: Automatic)
    This parameter defines the number of private memory pages available to be used by the database manager on behalf of a specific agent or subagent.

STAT_HEAP_SZ (Value: Automatic)
    This parameter indicates the maximum size of the heap that is used in collecting statistics from running the RUNSTATS command.

MAXLOCKS (Value: Automatic)
    Lock escalation is the process of replacing row locks with table locks, reducing the number of locks in the list. This parameter defines a percentage of the lock list that is held by an application that must be filled before the database manager performs escalation.

LOCKTIMEOUT (Value: 60)
    This parameter specifies the number of seconds that Product Master waits to obtain a lock.

NUM_IOCLEANERS (Value: Automatic)
    This parameter enables you to specify the number of asynchronous page cleaners for a database. The page cleaners write changed pages from the buffer pool to disk before a database agent requires the space in the buffer pool.

NUM_IOSERVERS (Value: Automatic)
    I/O servers are used on behalf of the database agents to perform prefetch I/O and asynchronous I/O by utilities such as backup and restore. This parameter specifies the number of I/O servers for a database.

MAXAPPLS (Value: Automatic)
    This parameter specifies the maximum number of concurrent instances of Product Master that can be connected (both local and remote) to a database.

AVG_APPLS (Value: Automatic)
    The SQL optimizer uses this parameter to help estimate how much of the buffer pool is available at run time.

MAXFILOP (Value: 640)
    This parameter specifies the maximum number of file handles that can be open for each database agent.

CUR_COMMIT (Value: ON)
    This parameter controls the behavior of cursor stability (CS) scans.

AUTO_MAINT (Value: ON)
    This parameter is the parent of all the other automatic maintenance database configuration parameters.

AUTO_TBL_MAINT (Value: ON)
    This parameter is the parent of all table maintenance parameters.

AUTO_RUNSTATS (Value: ON)
    This automated table maintenance parameter enables or disables automatic table RUNSTATS operations for a database.

AUTO_STMT_STATS (Value: ON)
    This parameter enables and disables the collection of real-time statistics.
Sample statement for updating database configurations:

update db cfg using SELF_TUNING_MEM ON


update db cfg using DFT_QUERYOPT 5
update db cfg using CATALOGCACHE_SZ 6000
update db cfg using LOGBUFSZ 4096
update db cfg using UTIL_HEAP_SZ 5120
update db cfg using BUFFPAGE 1024
update db cfg using LOCKTIMEOUT 60
update db cfg using MAXFILOP 640
update db cfg using AUTO_MAINT ON
update db cfg using AUTO_TBL_MAINT ON
update db cfg using AUTO_RUNSTATS ON
update db cfg using AUTO_STMT_STATS ON

You must not update the following parameters for new installations. The parameters are already set with correct values, by default:

DBHEAP
LOCKLIST
MAXLOCKS
SORTHEAP
STMTHEAP
APPLHEAPSZ
STAT_HEAP_SZ
NUM_IOCLEANERS



NUM_IOSERVERS
MAXAPPLS
AVG_APPLS

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Transaction log files for the database


Transaction log files provide you with the ability to recover your environment to a consistent state and preserve the integrity of your data. Log file storage must be
optimized because log files are written sequentially, and the database manager reads log files sequentially during database recovery.

Put the logs on a file system that is placed on its own physical disks, separate from the database table spaces and database software. Ideally, the disks should be
dedicated to Db2® logging to avoid the possibility of any other processes accessing or writing to these disks. Ideal placement of the logs is on the outer edge of
the disk, where there are more data blocks per track. It is recommended that you protect the log against single disk failures by using a RAID 10 or RAID 5 array.
Table 1. Transaction log files and parameters
Parameter Description
NEWLOGPATH This parameter is used to change the log path to create the transaction log files on a separate partition or volume than the default volume or the one
used for database table space containers.

Set it to a directory that is the destination of log files. Make sure that the directory is created before you set it. Make sure that there is enough space on
the destination before you set the new log path.

For example: update db cfg for PIMDB using NEWLOGPATH /u02/db2data/logs


LOGFILSIZ This parameter defines the size of each primary and secondary log file. The size of these log files limits the number of log records that can be written
to them before they become full and a new log file is required. Set it to 30000 for a development or test database; otherwise, set it to 60000. The
size is the number of pages, each of size 4 KB.

For example: update db cfg for PIMDB using LOGFILSIZ 60000


LOGPRIMARY The primary log files establish a fixed amount of storage that is allocated to the recovery log files. This parameter enables you to specify the number of
primary log files to be pre-allocated. Set it to 20 for a development database; otherwise, set it to 40.

For example: update db cfg for PIMDB using LOGPRIMARY 40


LOGSECOND This parameter specifies the number of secondary log files that are created and used for recovery log files (only as needed). When the primary log files
become full, the secondary log files (of size logfilsiz) are allocated one at a time as needed, up to a maximum number as controlled by this parameter.
Set its value to 2.

For example: update db cfg for PIMDB using LOGSECOND 2


Restart the database after you make the db configuration changes with db2stop and db2start commands:

db2stop force
db2start

The following table has information about values of different configuration parameters that influence the transaction log size and numbers for small, medium, and large
Product Master database implementations:
Table 2. Values of different configuration parameters
Parameter Small Medium Large
LOGFILSIZ 30000 60000 70000
LOGPRIMARY 30 40 50
LOGSECOND 2 2 2
Total Space Required 3.7 GB 9.6 GB 13.8 GB

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Setting up the Db2 client on Product Master


You must set up the DB2® client on Product Master.

Before you begin


Before you set up the Db2 client on the operating system running Product Master, you must:

Ensure that the system or database administrator has installed Db2 Admin, Developer, or Run-Time client on the operating system.
Get the Db2 client home directory on the operating system.
Get the host name, port number, and database name for the Db2 database server from the database administrator.

Procedure



1. Add the following line to the .bash_profile file in the home directory of the user ID that is used for installing Product Master.
. <Db2 client home>/sqllib/db2profile
Where you replace <Db2 client home> with the Db2 client home directory.
For example: . /opt/db2inst1/sqllib/db2profile
2. Log out and log back in to the operating system. Ensure that the Db2 libraries are added by checking for the <Db2 client
home>/sqllib/bin directory in the $PATH variable.
3. Set up the Db2 server information in the Db2 client by using the following commands:

db2 "catalog tcpip node <nodename> remote <dbhostname> server <sname/port#>"
db2 terminate
db2 "catalog database <dbname> as <dbname> at node <nodename>"
db2 terminate

Where you replace the following variables:

<nodename>
The name for the remote instance.
<dbhostname>
The host name or IP address of the database server.
<sname or port#>
The service name or port number for the connection port of the Db2 instance.
<dbname>
The database name.
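For example, with hypothetical values (node name pimnode, database host dbhost.example.com, port 50000, and database PIMDB), the sequence might look like this:

db2 "catalog tcpip node pimnode remote dbhost.example.com server 50000"
db2 terminate
db2 "catalog database PIMDB as PIMDB at node pimnode"
db2 terminate
db2 "connect to PIMDB user pim using <password>"    # optional connectivity check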

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

IBM Db2 database setup checklist


Use this checklist to verify your IBM® DB2® setup before installing IBM Product Master.

You can also run the perl $TOP/bin/perllib/db_checklist.pl script to verify that the Db2 parameters and configuration are correctly set for Product Master.

Setting Description
Check the IBM Db2 server release The version of the Db2 server must be a version that is identified in the system requirements for your product version.
Check the database code set The database code set must be UTF-8. On the database server, logged in as the instance owner, run:

$ db2 get db cfg for <database name>

The output should show the Database code set value set to UTF-8.


Check the parameter file entries Follow the Db2 configuration sections provided to make sure that you have made required parameter changes for the Db2 registry
variables, database manager, and the database.
Check the table spaces setup Make sure that the required table spaces are set up in the database.
Check the transaction logs setup Make sure that the transaction logs are created on a separate partition.
Check the database user setup View the database user name and password in the $TOP/bin/conf/env_settings.ini file and make sure that the database user is created
and all required privileges are granted to the user.
Check the connectivity to the The database server and the database server node must be cataloged on the application server and the database must be accessible
database server from the application server.

Check the database connectivity with the following script:

$TOP/bin/test_db.sh

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Setting up your Oracle Database


If you plan to use an Oracle Database with IBM® Product Master, you must install the supported version of Oracle before you install Product Master.

Before you begin


Following are the supported versions.

Software                         Version
Oracle Database 12c Release 2    (12.2) Enterprise Edition; (12.2) Standard Edition
Oracle Database 18c              (18.3) Enterprise Edition and future fix packs
Oracle Database 19c              (19.3) Enterprise Edition and future fix packs

After you install the database software, make sure that you apply the most current fix pack.


Supported Oracle versions are described in the system requirements on the product support site. For more information, see System requirements.


Use the Oracle Database configuration guidelines in the following sections to set up your Oracle Database.

Updating operating system settings for Oracle


There are several settings for system semaphores and shared memory that Oracle recommends. For configuration information, see the Oracle documentation
specific to your platform.

Disk considerations for the database


The preparation for optimal workload distribution is a significant consideration when you set up the database for IBM Product Master.
Creating a database
Set up a separate database for use with IBM Product Master.
Oracle setup for high availability
For high availability and scalability, Oracle provides the Transparent Application Failover (TAF) feature that is a part of Real Application Clusters (RAC). TAF enables
IBM Product Master to be available continuously in the event of database server failure.
Oracle parameter file settings
Oracle uses configuration parameters to locate files and specify runtime parameters common to all Oracle products.
Oracle table space settings
These table spaces must be created in the IBM Product Master database.
Setting up transaction logs
Oracle relies on online redo log files to record transactions. Each time a transaction takes place in the database, an entry is added to the redo log files.
Creating database schema users
Oracle Database schema users must be set up for use with IBM Product Master.
Setting up Oracle on the application server
After you create a database, set the character sets, and create the table spaces, transaction logs, and database schema users, you
are ready to install the Oracle client on the application server.
Installing Oracle XML DB component
You need to install Oracle XML DB component to store XML documents in the database.
Oracle setup checklist
You can check your installation of Oracle against this checklist.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Disk considerations for the database


The preparation for optimal workload distribution is a significant consideration when you set up the database for IBM® Product Master.

In most of the customer implementations, the Product Master database processing workload is shared in the following way:

90% OLTP (Online Transaction Processing)


10% Batch Processing

Distributing the workload in this manner means that concurrent activity and single-row updates are done during peak business hours and large batch processing is done
during off-peak time. You must understand the type of workload your database is expected to perform so that you can lay out the physical database most effectively.

To achieve balanced I/O, the Product Master DBA team recommends that you use a greater number of relatively low-capacity disks for the database rather than fewer
high-capacity disks. A minimum of 6 to 10 disks per processor is ideal for optimal performance. Having too few large disks can cause the database to wait on disk I/O and
impact performance.

RAID 10 provides excellent performance and availability. If overall cost is a concern, use RAID 5 with Fast Write Cache. If cost is not a concern, then RAID 10 is ideal for
storing data.

The Product Master DBA team also recommends physically separating the data, index, and UNDOTBS1 table spaces on the disks when you create table spaces and add data files.



IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Creating a database
Set up a separate database for use with IBM® Product Master.

About this task


For more information about how to create an Oracle database, see the Oracle documentation or consult your DBA.

Create a database after you are certain that the installation and set up prerequisites are met.

Procedure
Create an Oracle database.
Important: Product Master uses the AL32UTF8 character set. Therefore, the database character set must be set to AL32UTF8 and the national character set must be set to
AL16UTF16 at the time you create the Product Master database.
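After the database is created, you can verify the character sets with a query such as the following sketch (run it in SQL*Plus or a similar client):

SELECT parameter, value
  FROM nls_database_parameters
 WHERE parameter IN ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');

The query should return AL32UTF8 and AL16UTF16, respectively.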

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Oracle setup for high availability


For high availability and scalability, Oracle provides the Transparent Application Failover (TAF) feature that is a part of Real Application Clusters (RAC). TAF enables IBM®
Product Master to be available continuously in the event of database server failure.

About this task


The Oracle TAF feature supports failover of read transactions only; write transactions during database failure are rolled back. Product Master continues to run when there
is a database failure. However, Product Master users are required to resubmit the transaction after failure. Unsaved data is lost and users are required to reenter the data.
Because Oracle will not load balance the existing database connections between all the nodes after the failover, it is recommended that you restart Product Master after
the failover to use all the database server nodes.

For installation and configuration of Oracle database with RAC, refer to the Oracle documentation. It is recommended that you set up the server-side TAF service on the
Oracle server.

Procedure
1. Configure the Oracle client for TAF.
a. Configure the TAF parameters along with the Oracle RAC nodes in the tnsnames.ora file of the Oracle client. The following sample shows the tnsnames
entry:
ibm.world =
  (DESCRIPTION_LIST =
    (FAILOVER = yes)
    (LOAD_BALANCE = yes)
    (DESCRIPTION =
      (ADDRESS =
        (PROTOCOL = TCP)
        (HOST = fresno1)(PORT = 1521)
        (HOST = fresno2)(PORT = 1521)
      )
      (CONNECT_DATA =
        (SERVICE_NAME = ibm.world)
        (SERVER = dedicated)
        (FAILOVER_MODE =
          (BACKUP = ibm.world.bkp)
          (TYPE = select)
          (METHOD = preconnect)
          (RETRIES = 20)
          (DELAY = 3)
        )
      )
    )
  )
The FAILOVER_MODE section of the tnsnames.ora file lists the failover parameters and their values:

BACKUP=ibm.world.bkp
This parameter names the backup service name that takes over failed connections when a node crashes. In this example, the primary server is
fresno1 and TAF reconnects failed transactions to the fresno2 instance in case of server failure.
TYPE=select
This parameter tells TAF to restart all read-only in-flight transactions from the beginning of the transaction.
METHOD=preconnect
This parameter directs TAF to create two connections when the transactions start: one to the primary fresno1 database and a backup connection to
the fresno2 database. If the instance fails, the fresno2 database is ready to resume the failed transaction.
RETRIES=20
This parameter directs TAF to retry a failover connection up to 20 times.
DELAY=3
This parameter tells TAF to wait 3 seconds between connection retries.

2. Configure Product Master to use the OCI driver when you are using TAF. See Setting Oracle parameters for setting up the OCI driver.
a. After the configuration is complete, you must manually modify the db_url property in the db.xml file. The db_url property must use the tnsnames.ora entry with the TAF parameters, as shown in the following example:
db_url=jdbc:oracle:oci:@ibm.world

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Oracle parameter file settings


Oracle uses configuration parameters to locate files and specify runtime parameters common to all Oracle products.

When an Oracle program or application requires the value of a particular configuration variable, Oracle consults the associated parameter. Oracle parameters are stored in the server parameter file (SPFILE) or in the initialization parameter file (PFILE).

The following parameters must be set for use with IBM® Product Master:

SGA_TARGET
Specifies the total size of all SGA components. If SGA_TARGET is specified, then the following memory pools are automatically sized: buffer cache (DB_CACHE_SIZE), shared pool (SHARED_POOL_SIZE), large pool (LARGE_POOL_SIZE), Java™ pool (JAVA_POOL_SIZE), and streams pool (STREAMS_POOL_SIZE).
Required value: 50% of the physical memory on the DB server, assuming that the DB server is used for Oracle only and the Oracle DB is used for Product Master only.

DB_BLOCK_SIZE
Sets the size (in bytes) of an Oracle database block. This value is set at database creation and cannot be changed later. DB_BLOCK_SIZE is critical for the Product Master schema and must be at least 8192. Schema creation fails if db_block_size is too small.
Required value: 8192

QUERY_REWRITE_ENABLED
Enables or disables query rewriting for materialized views.
Required value: TRUE

PROCESSES
Specifies the maximum number of operating system user processes that can simultaneously connect to an Oracle server.
Required value: 200

OPEN_CURSORS
Specifies the maximum number of open cursors that a session can have at once, and constrains the PL/SQL cursor cache size, which PL/SQL uses to avoid reparsing statements that are re-executed by a user.
Required value: 600

MAX_ENABLED_ROLES
Specifies the maximum number of database roles that a user can enable, including subroles.
Required value: 60

LOG_BUFFER
Specifies the amount of memory, in bytes, that is used to buffer redo entries before they are written to a redo log file by LGWR. Redo entries keep a record of changes that are made to database blocks.
Required value: 5242880

OPTIMIZER_INDEX_CACHING
Adjusts the cost-based optimizer's assumption for the percentage of index blocks that are expected to be in the buffer cache for nested loop joins. This affects the cost of running a nested loop join where an index is used; setting this parameter to a higher value makes nested loop joins look less expensive to the optimizer. The range of values is 0 - 100 percent.
Required value: 90

OPTIMIZER_INDEX_COST_ADJ
Tunes optimizer behavior when too few or too many index access paths are considered. A lower value makes the optimizer more likely to select an index; that is, setting it to 50 percent makes an index access path look half as expensive as normal. The range of values is 1 - 10000.
Required value: 50

NLS_LENGTH_SEMANTICS
Configures whether CHAR and VARCHAR2 columns are created by using byte or character length semantics. For example, Col1 Varchar2(20) is 20 bytes with byte semantics or 20 characters with character semantics (20*4 bytes if you have defined UTF8). Existing columns are not affected. The data dictionary always uses byte semantics.
Required value: BYTE (this is the default value for Oracle)
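Assuming, for example, a dedicated database server with 16 GB of memory, the required values can be applied from SQL*Plus as in the following sketch. The SGA size and the parameter scopes are assumptions; DB_BLOCK_SIZE is deliberately absent because it can be set only at database creation.

-- Sketch: apply the recommended Product Master parameter values (run as SYSDBA).
ALTER SYSTEM SET sga_target = 8G SCOPE=SPFILE;             -- ~50% of physical memory on a dedicated server (assumption)
ALTER SYSTEM SET query_rewrite_enabled = TRUE SCOPE=BOTH;
ALTER SYSTEM SET processes = 200 SCOPE=SPFILE;             -- static parameter; takes effect after restart
ALTER SYSTEM SET open_cursors = 600 SCOPE=BOTH;
ALTER SYSTEM SET log_buffer = 5242880 SCOPE=SPFILE;        -- static parameter; takes effect after restart
ALTER SYSTEM SET optimizer_index_caching = 90 SCOPE=BOTH;
ALTER SYSTEM SET optimizer_index_cost_adj = 50 SCOPE=BOTH;
ALTER SYSTEM SET nls_length_semantics = BYTE SCOPE=BOTH;   -- affects new sessions only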

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Oracle table space settings


These table spaces must be created in the IBM® Product Master database.

To prepare the database to store, retrieve, and process data, you must create table spaces. Because of the large size of the Product Master tables, ensure that the database block size is at least 8192 bytes when you create the database (see DB_BLOCK_SIZE in Oracle parameter file settings).

For more details on custom table space implementation with the product, see Custom table space names.

Required table spaces


Only the USERS, INDX, BLOB_TBL_DATA, TEMP_USER, and TEMP_SYSTEM table spaces are required for a default development environment. The ITA_DATA,
ITA_IX, ITD_DATA, ITD_IX, ITM_DATA, ITM_IX, LCK_DATA, and LCK_IX table spaces are required for Product Master production instances. Use the table space mapping file that
is described in the section Running schema creation scripts to use these table spaces.

Table 1. Required table spaces

ICM_DATA
Stores TCTG_ICM_ITEM_CATEGORY_MAP table data. Recommended size: a minimum of 1 GB with auto-resize.
ICM_IX
Stores TCTG_ICM_ITEM_CATEGORY_MAP index data. Recommended size: a minimum of 1 GB with auto-resize.
ITM_DATA
Stores TCTG_ITM_ITEM table data. Recommended size: a minimum of 1 GB with auto-resize.
ITM_IX
Stores TCTG_ITM_ITEM index data. Recommended size: a minimum of 1 GB with auto-resize.
ITD_DATA
Stores TCTG_ITD_ITEM_DETAIL table data. Recommended size: a minimum of 5 GB with auto-resize.
ITD_IX
Stores TCTG_ITD_ITEM_DETAIL index data. Recommended size: a minimum of 5 GB with auto-resize.
ITA_DATA
Stores TCTG_ITA_ITEM_ATTRIBUTES table data. Recommended size: a minimum of 10 GB with auto-resize.
ITA_IX
Stores TCTG_ITA_ITEM_ATTRIBUTES index data. Recommended size: a minimum of 10 GB with auto-resize.
LCK_DATA
Stores TUTL_LCK_LOCK table data. Recommended size: a minimum of 1 GB with auto-resize.
LCK_IX
Stores TUTL_LCK_LOCK index data. Recommended size: a minimum of 1 GB with auto-resize.
SYSTEM
The default table space that is created automatically in the Oracle Database. The SYSTEM table space stores the data dictionary and the objects that are created by the system user. This is a permanent table space. Recommended size: a minimum of 300 MB with auto-resize.
USERS
Stores all the Product Master database tables except tables that store large objects (LOBs). This is a permanent, locally managed table space. Recommended size: a minimum of 15 GB with auto-resize.
INDX
Stores all the Product Master database indexes. This is a permanent, locally managed table space. Recommended size: a minimum of 30 GB with auto-resize.
BLOB_TBL_DATA
Stores Product Master database tables that contain large objects, such as catalogs and images. This is a permanent, locally managed table space. Recommended size: a minimum of 1 GB with auto-resize.
XML_DATA
Stores Product Master database tables that contain XML documents. This is a permanent, locally managed table space. Recommended size: a minimum of 1 GB with auto-resize.
XML_INDEX
Stores Product Master database indexes on XML documents. This is a permanent, locally managed table space. Recommended size: a minimum of 1 GB with auto-resize.
UNDOTBS1
The undo table space. Recommended size: a minimum of 15 GB with auto-resize.
TEMP
Stores objects temporarily for database operations such as sorting and grouping. This is a temporary table space. Recommended size: a minimum of 6 GB with auto-resize.



Oracle table space information
Table 2. Oracle table space information

SYSTEM
Minimum size: 400 MB. Recommended storage parameters: default.
USERS
Minimum size: 5 GB. Recommended storage parameters: EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO.
INDX
Minimum size: 20 GB. Recommended storage parameters: EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO.
BLOB_TBL_DATA
Minimum size: 1 GB. Recommended storage parameters: EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO.
XML_DATA
Minimum size: 1 GB. Recommended storage parameters: EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO.
XML_INDEX
Minimum size: 1 GB. Recommended storage parameters: EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO.
UNDOTBS1
Minimum size: 10 GB. Undo table space; leave the default values.
TEMP
Minimum size: 5 GB. Temporary table space; leave the default values.

Sample statements for creating Oracle table spaces


You must change the datafile path from <database_folder> to the appropriate path in your file system. You can also modify the maxsize parameter to set a limit.
Note: A single gigabyte (1G) of space is enough to maintain around 3 million records. Verify your space requirements against your expected capacity.

CREATE TABLESPACE "USERS"


LOGGING
DATAFILE '<database_folder>/users1.dbf' SIZE 1G REUSE
AUTOEXTEND ON NEXT 1G MAXSIZE UNLIMITED;

CREATE TABLESPACE "INDX"


LOGGING
DATAFILE '<database_folder>/indx1.dbf' SIZE 1G REUSE
AUTOEXTEND ON NEXT 1G MAXSIZE UNLIMITED;

CREATE TABLESPACE "BLOB_TBL_DATA"


LOGGING
DATAFILE '<database_folder>/blob1.dbf' SIZE 1G REUSE
AUTOEXTEND ON NEXT 1G MAXSIZE UNLIMITED;

CREATE TABLESPACE "ICM_DATA"


LOGGING
DATAFILE '<database_folder>/icm_data1.dbf' SIZE 1G REUSE
AUTOEXTEND ON NEXT 1G MAXSIZE UNLIMITED;

CREATE TABLESPACE "ICM_IX"


LOGGING
DATAFILE '<database_folder>/icm_ix1.dbf' SIZE 1G REUSE
AUTOEXTEND ON NEXT 1G MAXSIZE UNLIMITED;

CREATE TABLESPACE "XML_DATA"


LOGGING
DATAFILE '<database_folder>/xml_data1.dbf' SIZE 1G REUSE
AUTOEXTEND ON NEXT 1G MAXSIZE UNLIMITED;

CREATE TABLESPACE "XML_INDEX"


LOGGING
DATAFILE '<database_folder>/xml_index1.dbf' SIZE 1G REUSE
AUTOEXTEND ON NEXT 1G MAXSIZE UNLIMITED;

CREATE TABLESPACE "XML_LARGE_DATA"


LOGGING
DATAFILE '<database_folder>/xml_lrgdata1.dbf' SIZE 1G REUSE
AUTOEXTEND ON NEXT 1G MAXSIZE UNLIMITED;

CREATE TABLESPACE "ITM_DATA"


LOGGING
DATAFILE '<database_folder>/itm_data1.dbf' SIZE 1G REUSE
AUTOEXTEND ON NEXT 1G MAXSIZE UNLIMITED;

CREATE TABLESPACE "ITM_IX"


LOGGING
DATAFILE '<database_folder>/itm_ix1.dbf' SIZE 1G REUSE
AUTOEXTEND ON NEXT 1G MAXSIZE UNLIMITED;

CREATE TABLESPACE "ITD_DATA"


LOGGING
DATAFILE '<database_folder>/itd_data1.dbf' SIZE 1G REUSE
AUTOEXTEND ON NEXT 1G MAXSIZE UNLIMITED;

CREATE TABLESPACE "ITD_IX"

64 IBM Product Master 12.0.0


LOGGING
DATAFILE '<database_folder>/itd_ix1.dbf' SIZE 1G REUSE
AUTOEXTEND ON NEXT 1G MAXSIZE UNLIMITED;

CREATE TABLESPACE "ITA_DATA"


LOGGING
DATAFILE '<database_folder>/ita_data1.dbf' SIZE 1G REUSE
AUTOEXTEND ON NEXT 1G MAXSIZE UNLIMITED;

CREATE TABLESPACE "ITA_IX"


LOGGING
DATAFILE '<database_folder>/ita_ix1.dbf' SIZE 1G REUSE
AUTOEXTEND ON NEXT 1G MAXSIZE UNLIMITED;

CREATE TABLESPACE "LCK_DATA"


LOGGING
DATAFILE '<database_folder>/lck_data1.dbf' SIZE 1G REUSE
AUTOEXTEND ON NEXT 1G MAXSIZE UNLIMITED;

CREATE TABLESPACE "LCK_IX"


LOGGING
DATAFILE '<database_folder>/lck_ix1.dbf' SIZE 1G REUSE
AUTOEXTEND ON NEXT 1G MAXSIZE UNLIMITED;

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Setting up transaction logs


Oracle relies on online redo log files to record transactions. Each time a transaction takes place in the database, an entry is added to the redo log files.

About this task


Database performance can be increased by correctly tuning the size of the redo log files. Uncommitted transactions also generate the redo log entries.

Some important considerations when you create redo logs:

Place all the redo log groups on dedicated disks that contain no other files; that is, separate the redo log files from the data files.
Use the fastest available disks for redo logs, if possible.
Consider availability: place members of the same group on different physical disks and controllers for recoverability purposes.
Avoid the use of RAID 5 for redo logs. See Disk considerations for the database for information on optimal disk allocations.
Separate redo logs from archived redo logs by creating them on separate disks.

Redo log files are written sequentially by the Log Writer (LGWR) process. This operation can be made faster if there is no concurrent activity on the same disk. Dedicating
separate disks to redo log files usually ensures that LGWR runs smoothly with no further tuning necessary. If your system supports asynchronous I/O, but this feature is
not currently configured, then test to see whether this feature is beneficial.

Procedure
1. Create six redo log groups with files of size 300 MB each.
2. Multiplex (mirror) the redo logs by creating two members in each redo log group.
Important: No two members of the same group can be on the same disk.
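The following SQL*Plus sketch illustrates these two steps; the group numbers and the /redo1 and /redo2 paths are assumptions, and each path should map to a separate physical disk.

-- Sketch: add multiplexed redo log groups of 300 MB with two members each.
-- Repeat until six groups exist in total; choose group numbers that are not already in use.
ALTER DATABASE ADD LOGFILE GROUP 4 ('/redo1/redo04a.log', '/redo2/redo04b.log') SIZE 300M;
ALTER DATABASE ADD LOGFILE GROUP 5 ('/redo1/redo05a.log', '/redo2/redo05b.log') SIZE 300M;
ALTER DATABASE ADD LOGFILE GROUP 6 ('/redo1/redo06a.log', '/redo2/redo06b.log') SIZE 300M;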

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Creating database schema users


Oracle Database schema users must be set up for use with IBM® Product Master.

Before you begin


Before you can create database schema users, you need the following user information:

Default table space: USERS.
Temporary table space: TEMP.
Authentication: password.
Status: unlocked.
Roles to be granted: CONNECT and RESOURCE.
System privileges to be granted: UNLIMITED TABLESPACE, SELECT ANY DICTIONARY, QUERY REWRITE, and CREATE ANY SYNONYM.

About this task

IBM Product Master 12.0.0 65


You can create a database user for Product Master that is referenced in the common.properties file by using SQL commands.

Procedure
Run these SQL commands at the SQL prompt:

SQL> CREATE USER PIM IDENTIFIED BY PIM DEFAULT TABLESPACE users TEMPORARY TABLESPACE temp;

SQL> GRANT create session, connect, resource, unlimited tablespace, select any dictionary, query rewrite, create any synonym TO PIM;

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Setting up Oracle on the application server

After you create the database, set the character sets, and create the table spaces, transaction logs, and database schema users, you are ready to install the Oracle client on the application server.

About this task


The tnsnames.ora file can be found in the $ORACLE_HOME/network/admin directory. Check connectivity between the application server and the database server by using tnsping or SQL*Plus on the application server.

Procedure
Install the Oracle client on the application server. Make sure that there is an entry for the database in the tnsnames.ora file on the application server where the Oracle client is installed.
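For example, a quick connectivity check from the application server might look like the following sketch; the ibm.world service name and the PIM credentials are carried over from the earlier examples and are assumptions for your environment.

# Sketch: verify Oracle client connectivity from the application server.
tnsping ibm.world            # resolves the tnsnames.ora entry and pings the listener
sqlplus PIM/PIM@ibm.world    # confirms that the schema user can log in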

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Installing Oracle XML DB component


You need to install Oracle XML DB component to store XML documents in the database.

About this task


The Oracle XML DB component is required for IBM® Product Master. This component enables the efficient processing and storage of XML documents in the database. You can perform these steps to manually install this component. You can also use Oracle's Database Configuration Assistant for the installation. For more information, see Administering Oracle XML DB.

Procedure
1. Change directory to $ORACLE_HOME/rdbms/admin on the Oracle Database server.
2. Log on to SQL*Plus as the SYS user with SYSDBA privileges: sqlplus "/ as sysdba"
3. Run the catqm.sql script with the following parameters:
xdb_password is the password for the XML DB repository.
xdb_ts_name is the table space to use for Oracle XML DB; it must be XML_DATA.
temp_ts_name is the temporary table space, for example, TEMP.
secure_file_for_repo is NO (if you want to use SecureFile LOBs, the XML_DATA table space can use Oracle's Automatic Storage Management).
For example,

@catqm.sql pass4xdb XML_DATA TEMP NO

4. Ensure that the XML DB installation is successful.
Note: XML DB protocol access is not required for Product Master.
5. In the Oracle initialization parameter file, add the compatible parameter or ensure that its existing value is 11.2.0.1. Restart Oracle after you change the parameter value to compatible = 11.2.0.1.
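One way to confirm step 4, assuming you have data-dictionary access, is to query the component registry, as in this sketch:

-- Sketch: verify the Oracle XML DB component after running catqm.sql.
SELECT comp_name, version, status
  FROM dba_registry
 WHERE comp_id = 'XDB';
-- Expect STATUS = 'VALID' for the Oracle XML Database component.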

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Oracle setup checklist



You can check your installation of Oracle against this checklist.

You can also run the perl $TOP/bin/perllib/db_checklist.pl script to verify that the Oracle parameters and configuration are correctly set for Product Master.

Check the Oracle Database server version.
The version of the Oracle server must be a version that is identified in the system requirements for your product release.

Check the database character set.
The character set must be AL32UTF8 and the national character set must be AL16UTF16. Connect as the system user and check the character set of the database:

select * from nls_database_parameters
where PARAMETER in ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');

Check the init parameter file entries.
Run the SQL statements in $TOP/bin/db_checklist/oracle_checklist.sql to verify that all of the init parameter file entries are set correctly according to the Product Master recommendations.

Check the table spaces setup.
Make sure that the required table spaces are set up in the database.

Check the redo log files.
Make sure that enough redo log files are created in the database. To get information about the existing redo log files in the database, connect as a system user and run the following query:

select * from v$log;

Check the database user setup.
View the database user name and password in the $TOP/etc/default/common.properties file and make sure that the database user is created and all required privileges are granted to the user. For more information, see Adding database users and granting permissions.

Check the tnsnames.ora file entry for the database.
Make sure that there is an entry for the database in the tnsnames.ora file on the application server where the Oracle client is installed. The tnsnames.ora file is in the $ORACLE_HOME/network/admin directory.
Note: Due to a limitation in the schema installation, the service name in tnsnames.ora must match the SID of the database; in other words, OCI utilities such as sqlplus must be able to connect by using a service name that is the same as the SID.

Check the listener on the database server.
The database must be accessible from the application server.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Configuring WebSphere® Application Server


Before IBM® Product Master runs correctly, you must configure the application server.

Before you begin


Following are the supported versions:
WebSphere® Application Server Network Deployment 9.0
WebSphere Application Server 9.0

You must start the Admin Console of the WebSphere Application Server to complete the configuration tasks on the server before you install the Persona-based UI.

1. On the computer where the WebSphere Application Server is installed, go to the bin directory of the WebSphere Application Server profile. For example,
/opt/IBM/WebSphere/AppServer/profiles/AppSrv01/bin.
2. Run the following command to start the application server:
./startServer.sh server1

3. Enter the following web address of Admin Console in the address bar of a browser: https://<IP address>:9043/ibm/console/logon.jsp or http://<IP
address>:9060/ibm/console/logon.jsp, where IP is the IP address of the computer where the WebSphere Application Server is installed.
4. Enter the credentials and log in. The welcome page of Admin Console opens.

Important: On the Servers > Server Types > WebSphere application servers > %SERVER_NAME% > Session management page, you must set the value of the Set cookie path field to / (the root folder). The cookie path must be set to the root folder because IBM Product Master needs the cookies throughout the application.
Note: You must set the WebSphere Application Server deployment manager (Dmgr) JVM initial and maximum heap sizes to 512 MB and 1024 MB.
To increase the heap size:

1. Open the WebSphere Application Server Integrated Solutions Console and go to System Administration > Deployment Manager.
2. Under Server Infrastructure, expand Java and Process Management, then click Process definition.
3. Under Additional Properties, click Java Virtual Machine.
4. Set the Initial heap size to 512 MB and the Maximum heap size to 1024 MB.
5. Click OK, save your changes, and synchronize your changes with the nodes.

Exporting and importing LTPA tokens between WAS domains

If you use more than one server in your environment, and single sign-on is required, all of the WAS servers must share the same LTPA key in order to validate and create the LTPA tokens.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Exporting and importing LTPA tokens between WAS domains


If you use more than one server in your environment, and single sign-on is required, all of the WAS servers must share the same LTPA key in order to validate and create
the LTPA tokens.

About this task


You need to use LTPA keys in order for WAS to digitally sign LTPA tokens.

Procedure
1. Log into the local instance admin console.
2. Click Security > Secure administration, applications, and infrastructure.
3. Click Authentication mechanisms and expiration under Authentication.
4. In the Cross-cell single sign-on section, provide the following:
A password in the Password and Confirm Password fields. This password encrypts and decrypts the LTPA keys that are contained in either an imported or
exported property file.
A fully qualified key file name. Ensure that the value is a fully qualified file name that points to the properties file that you export the LTPA keys to. For
example, /opt/IBM/MDM/mdmkeys.properties
Click Export keys to export the LTPA keys to the fully qualified key file name.
5. Copy the mdmkeys.properties file to the remote server.
6. Log into the remote instance admin console and repeat Step 2 and Step 3.
7. Provide the fully qualified key file name and password of the LTPA keys.
8. Click Import keys to import the LTPA keys from the fully qualified key file name.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Installing OpenSearch (Fix Pack 8 and later)

OpenSearch is a scalable, flexible, and extensible open source software suite for search, analytics, and observability applications that is licensed under Apache 2.0.

Before you begin


Ensure the following on the workstation where you want to install OpenSearch:

You log in as a non-root user. You can also use an existing user and workstation that you have used for Elasticsearch installation and configuration.
The JAVA_HOME variable is not set.

About this task


This topic describes how to install OpenSearch.
The supported version is OpenSearch 2.4.1.

Procedure
1. Download and extract the OpenSearch.
2. Test your OpenSearch installation in either security enabled (SSL) or disabled (non-SSL) setup.
a. Test with security enabled.
b. Test with security disabled.
3. Optional: Configure Transport Layer Security (TLS) for additional security for your multi-node setup.

What to do next
Configure OpenSearch.

Configuring OpenSearch
You can configure the OpenSearch on a single node or across multi-nodes.



Related information
Configuring TLS certificates
Generating self-signed certificates
OpenSearch documentation

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Configuring OpenSearch
You can configure the OpenSearch on a single node or across multi-nodes.

Procedure
1. Browse to the opensearch-2.4.1/config/ folder.
2. Edit the following properties in the opensearch-2.4.1/config/opensearch.yml file.

cluster.name
The name of the cluster.
By default, the application uses cluster.name=es-cluster.
If you modify the value of the cluster.name property, you must also update the elastic_cluster_name property in the env_settings.ini file.
node.name
The descriptive name for the node.
indices.query.bool.max_clause_count
Maximum number of clauses a Lucene BooleanQuery can contain. Update the value to 10000.
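Put together, a minimal opensearch.yml reflecting these settings might look like the following sketch; the node name is a hypothetical value.

# Sketch: minimal opensearch.yml entries for Product Master (illustrative values).
cluster.name: es-cluster                      # must match elastic_cluster_name in env_settings.ini
node.name: pim-search-node1                   # hypothetical descriptive node name
indices.query.bool.max_clause_count: 10000    # required maximum BooleanQuery clause count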

3. Configure indexer service.


a. Download and install IBM® Semeru Runtime Certified Edition, Version 11 (formerly called IBM SDK, Java™ Technology Edition, Version 11) from the following
location.
IBM Semeru 11.0.17.0
b. Extract the file by using the following command.

tar -xvf ibm-semeru-certified-jdk_x64_linux_11.0.17.0.tar.gz

c. Add the java11_home property in the "env" section of the env_settings.ini file and update the property value with the <jdk11_home path>.
d. Update the following properties in the application.properties file of the indexer service.
es.serverIp
es.isPasswordEncrypted
es.isSslEnabled
es.username
es.password
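
For orientation, the indexer-service entries might look like the following sketch; the host, port, and credentials are assumptions for your environment.

# Sketch: application.properties entries for the indexer service (illustrative values).
es.serverIp=https://pim-search.example.com:9200
es.isSslEnabled=true
es.isPasswordEncrypted=false
es.username=admin
es.password=admin_password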
4. Configure REST API service.
a. Update the following properties in the restConfig.properties file of the REST API service.
elastic_search_service_uri
fts_is_auth_required
fts_is_password_encrypted
fts_auth_username
fts_auth_password
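
Similarly, a sketch of the REST API service entries, with assumed values:

# Sketch: restConfig.properties entries for the REST API service (illustrative values).
elastic_search_service_uri=https://pim-search.example.com:9200
fts_is_auth_required=true
fts_is_password_encrypted=false
fts_auth_username=admin
fts_auth_password=admin_password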
5. Optional: For SSL setup, proceed as follows.
a. Add the following content in the indexer_start.sh file.

-Djavax.net.ssl.trustStore=<JDK11_HOME>/lib/security/cacerts -Djavax.net.ssl.trustStorePassword=changeit -
Djavax.net.ssl.trustStoreType=JKS -Dcom.ibm.jsse2.overrideDefaultTLS=true -Djdk.tls.client.protocols=TLSv1.2

b. Import OpenSearch SSL certificates in the cacerts file of Java by running the following command.
Java 8

keytool -importcert -keystore <JDK8_HOME>/lib/security/cacerts -storepass changeit -alias opensearch -file /opt/abc_com.crt

Java 11

keytool -importcert -keystore <JDK11_HOME>/lib/security/cacerts -storepass changeit -alias opensearch -file /opt/abc_com.crt

c. Update the following on the WebSphere® Application Server.


i. Log in to the WebSphere Application Server.
ii. Go to Expand Server Types > Application Servers > IPM APPSERVER name.
iii. Under Server Infrastructure, expand Java and Process Management, then click Process definition.
iv. Under Additional Properties, click Java Virtual Machine.
v. Append the following content in the Generic JVM arguments field and update the JAVA_HOME_PATH according to your environment, and click Apply.

-Djavax.net.ssl.trustStore=<JAVA8_HOME_PATH>/jre/lib/security/cacerts
-Djavax.net.ssl.trustStorePassword=changeit
-Djavax.net.ssl.trustStoreType=JKS
-Dcom.ibm.jsse2.overrideDefaultTLS=true
-Djdk.tls.client.protocols=TLSv1.2

vi. Add the following key/value pair in the Custom properties field, and click Apply.

com.ibm.jsse2.overrideDefaultTLS
Possible value: true
javax.net.ssl.trustStore
Possible value: the absolute path of the cacerts file from the installed Java 8 folder.
javax.net.ssl.trustStorePassword
Possible value: the password of the cacerts file.
jdk.tls.client.protocols
Possible value: TLSv1.2
vii. Go to Security > SSL certificate and key management > SSL Configurations.
viii. Click NodeDefaultSSLSettings > Quality of protection (QoP) settings, then select TLSv1.2 from the protocol list.
ix. Click OK, and then Save.
x. Find the ssl.client.props files in the following folders and change the value of the com.ibm.ssl.protocol property to "TLSv1.2".
(was_home)\profiles\(serverName)\properties
(was_home)\profiles\(DmgrName)\properties
xi. Restart the application server.

What to do next
Perform the following tasks after installation and configuration of OpenSearch.

Restart all services.
Run a full indexing job.

Related information
env_settings.ini file parameters
application.properties (Indexer) file parameters

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Installing Elasticsearch (Fix Pack 7 and earlier)

Elasticsearch is a highly scalable open source full-text search and analytics engine that allows you to store, search, and analyze large volumes of data quickly. Elasticsearch is generally used as the underlying engine for applications that have complex search features and requirements.

Before you begin


Important: Starting from IBM® Product Master 12.0 Fix Pack 8, Elasticsearch is replaced with OpenSearch because of the change in licensing strategy (no longer open source) by Elastic NV. You must move to OpenSearch 2.4.1 or later and run a full indexing job. For more information, see Installing OpenSearch (Fix Pack 8 and later).
Ensure the following on the workstation where you want to install Elasticsearch:

You log in as a non-root user.
The JAVA_HOME variable is not set.

About this task


This topic describes how to install Elasticsearch.
Following are the supported versions, depending on your fix pack level:
Elasticsearch 7.17.1
Elasticsearch 7.16.2
Elasticsearch 7.13.0
Elasticsearch 7.7.0
Elasticsearch 5.5.3

Procedure
1. Download and extract the Elasticsearch.
2. Set some Operating System level parameters. The following two properties are more commonly set.

Max virtual memory [vm.max_map_count]


Needs to be increased to the value of at least 262144. For more information, see Virtual memory.
Max file descriptors
Needs to be increased to the value of at least 65536. For more information, see File Descriptors.
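On Linux, these limits are typically raised as shown in the following sketch (run as root); persisting vm.max_map_count through /etc/sysctl.conf is an assumption about your setup.

# Sketch: raise the OS limits to the documented minimums for Elasticsearch.
sysctl -w vm.max_map_count=262144                    # max virtual memory map areas
echo 'vm.max_map_count=262144' >> /etc/sysctl.conf   # persist across reboots
ulimit -n 65536                                      # max open file descriptors for the current shell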

3. Create a dedicated system user to run Elasticsearch service.


4. Run Elasticsearch service by using the following command.

./bin/elasticsearch



5. Verify that the search service is running by opening the http://localhost:9200 URL.

{
"name" : "<name>",
"cluster_name" : "<should match the cluster name specified in the elasticsearch.yml file>",
"cluster_uuid" : "CxrC0eL4S4yqi-iVCPbXlA",
"version" : {
"number" : "x.x.x",
"build_flavor" : "default",
"build_type" : "zip",
"build_hash" : "81a1e9eda8e6183f5237786246f6dced26a10eaf",
"build_date" : "2020-05-12T02:01:37.602180Z",
"build_snapshot" : false,
"lucene_version" : "8.5.1",
"minimum_wire_compatibility_version" : "6.8.0",
"minimum_index_compatibility_version" : "6.0.0-beta1"
},
"tagline" : "You Know, for Search"
}

What to do next
Configure Elasticsearch.

Configuring Elasticsearch
You can configure the Elasticsearch on a single node or across multi-nodes.

Related concepts
Customizing Application settings

Related tasks
Using Free text search (Fix Pack 7 and earlier)
Installing OpenSearch

Related reference
env_settings.ini file parameters

Related information
Elasticsearch documentation

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Configuring Elasticsearch
You can configure the Elasticsearch on a single node or across multi-nodes.

Procedure
1. Browse to the elasticsearch-7.17.1/config folder.
2. Edit the following properties in the elasticsearch-7.17.1/config/elasticsearch.yml file.

cluster.name
The name of the cluster.
By default, the application uses cluster.name =es-cluster.
If you modify the value of the cluster.name property, you must also update the elastic_cluster_name property in the env_settings.ini file.
node.name
The descriptive name for the node.
indices.query.bool.max_clause_count
Maximum number of clauses a Lucene BooleanQuery can contain. Update the value to 10000.
network.host
The IP address of the host to make Elasticsearch service accessible from other workstations in the network.
path.data
The file path where Elasticsearch stores the data. The default is the data folder in the <elasticsearch installation directory>.
path.logs
The file path where the Elasticsearch logs are generated. The default is the logs folder in the <elasticsearch installation directory>.
http.port
The HTTP port.
transport.port
The TCP port.
discovery.type
The Elasticsearch node elects itself as master and does not join a cluster with any other node. For more information, see Single-node discovery.
transport.compress
Enable compression on the Elasticsearch response. By default, the value is "true".
cluster.initial_master_nodes
The IP address:TransportPort of the initial master node information in an array format. For example, ["IP address:TransportPort"].
discovery.seed_hosts
The IP address:TransportPort of the self-node and other nodes in an array format. For example,
discovery.seed_hosts: ["IP address:TransportPort", "IP address:TransportPort1"]
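Put together, a single-node elasticsearch.yml reflecting these properties might look like the following sketch; the host IP, paths, and node name are hypothetical values.

# Sketch: single-node elasticsearch.yml entries for Product Master (illustrative values).
cluster.name: es-cluster                      # must match elastic_cluster_name in env_settings.ini
node.name: pim-es-node1                       # hypothetical descriptive node name
network.host: 192.0.2.10                      # hypothetical host IP
path.data: /opt/elasticsearch/data
path.logs: /opt/elasticsearch/logs
http.port: 9200
transport.port: 9300
discovery.type: single-node                   # single-node setup; omit for multi-node clusters
indices.query.bool.max_clause_count: 10000    # required maximum BooleanQuery clause count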

3. Restart the Elasticsearch service for changes to take effect.

Related information
Security for Elasticsearch
Transport Layer Security (TLS)

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Installing Hazelcast IMDG


This topic describes how to install and configure Hazelcast IMDG.

Before you begin


Following are the supported versions, depending on your fix pack level:
Hazelcast IMDG 4.2.4
Hazelcast IMDG 4.1
Hazelcast IMDG 3.12.5

Procedure
1. Download the Hazelcast IMDG compressed file from the following location:
Hazelcast IMDG
2. Extract the compressed file format to your workstation.
3. Browse to the /hazelcast-<version_number>/bin directory and copy the hazelcast.xml file from the $TOP/mdmui/dynamic/hazelcast folder.
4. Use the following command to start the Hazelcast service as a background service.
nohup ./start.sh &

What to do next
You need to configure Hazelcast in the TCP/IP mode to enable discovering cluster members by using TCP. When you configure Hazelcast to discover members by TCP/IP,
you must list all or a subset of the members' host names, IP addresses or both as the cluster members. You do not have to list all of the cluster members, but at least one
of the listed members must be active in the cluster when a new member joins. To set your Hazelcast to be a full TCP/IP cluster, set the following configuration elements:

Set the enabled attribute of the multicast element to "false".
Set the enabled attribute of the aws element to "false".
Set the enabled attribute of the tcp-ip element to "true".
Set your member elements within the tcp-ip element.

For more information on the TCP/IP discovery configuration elements, see tcp-ip element.

<hazelcast>
...
<network>
...
<join>
<multicast enabled="false">
</multicast>
<tcp-ip enabled="true">
<member>machine1:5702</member>
<member>machine2:5702</member>
<member>machine3:5702</member>
</tcp-ip>
...
</join>
...
</network>
...
</hazelcast>



As shown in the example, you can provide IP addresses or host names for the member elements. Ensure that you provide the correct port along with the IP address. For
more details, see Discovering Members by TCP.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Installing MongoDB
You can install MongoDB as described in this topic.

Before you begin


Following are the supported versions of MongoDB Community Edition, depending on your fix pack level:
MongoDB 4.0.25
MongoDB 4.0.23
MongoDB 4.0.22
MongoDB 3.4

Procedure
1. Install MongoDB by using either of the following links:
MongoDB 3.4
MongoDB 4.0.22
2. Optional: To upgrade, use the following links. You must upgrade in sequence: MongoDB 3.4 > MongoDB 3.6 > MongoDB 4.0.
MongoDB 3.6
MongoDB 4.0.22
3. Open Mongo shell by using the following command:

mongo

4. Open MongoDB configuration file by using the following command:

vi /etc/mongod.conf

5. In the mongod.conf file, add the following lines:

# Where and how to store data.
storage:
  dbPath: /var/lib/mongo
  journal:
    enabled: true
#  engine:
#  mmapv1:
#  wiredTiger:

# How the process runs.
processManagement:
  fork: true  # fork and run in background
  pidFilePath: /var/run/mongodb/mongod.pid  # location of pidfile

# Network interfaces.
net:
  port: 27017
  bindIp: 127.0.0.1  # Listen to local interface only; comment out to listen on all interfaces.

#security:
#operationProfiling:
#replication:
#sharding:

## Enterprise-Only Options
#auditLog:
#snmp:

Note: If MongoDB is installed on the same physical server as IBM® Product Master, set the value of bindIp to 127.0.0.1. If the server is different, set the value of bindIp to 0.0.0.0.
6. Quit the MongoDB shell by pressing Ctrl+C, and then restart MongoDB.
7. Verify that the MongoDB has started successfully. You can verify that the mongod process has started successfully by either of the following.
7. Verify that the MongoDB has started successfully. You can verify that the mongod process has started successfully by either of the following.
Checking the contents of the log file at /var/log/mongodb/mongod.log for the following line:

[initandlisten] waiting for connections on port <port>

Where,
<port> - Port that is configured in the /etc/mongod.conf file. The default value is 27017.
Use the following command.

chkconfig mongod on

8. Machine learning: To create the database and user in MongoDB, run the following commands in the MongoDB console:

>mongo

>use <database_name>

>db.createCollection("<test_collection_name>")

>show databases

To create user for the database, see step 1 in the Enabling authentication.



Note: The database for DAM is created per company when you add the DAM settings in the Settings page > Application settings page of the Persona-based UI. For
more information, see Customizing the features.

Enabling MongoDB authentication

You need to enable MongoDB authentication for the Digital Assets Management (DAM) and Machine learning features.
Enabling SSL for MongoDB
You can configure MongoDB to support SSL.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Enabling MongoDB authentication

You need to enable MongoDB authentication for the Digital Assets Management (DAM) and Machine learning features.

Procedure
1. In the MongoDB console, create a user that has the following role and access by using the following commands.

>mongo
use admin;
db.createUser(
{
user: "<username>",
pwd: "<password>",
roles: [ { role: "userAdminAnyDatabase", db: "admin" }, "readWriteAnyDatabase" ]
}
)

2. Open MongoDB configuration file by using the following command.

vi /etc/mongod.conf

3. In the mongod.conf file, edit the value of the following property.

security:
authorization: enabled

4. Quit the MongoDB shell by pressing Ctrl+C, and then restart MongoDB by using the following command.

service mongod restart

5. In the env_settings.ini file, specify the values of the following properties by using the values that you specified in step 1.

mongodb_username=
mongodb_password=

6. Run the configureEnv.sh script with the -ov option. The specified values get copied to the dam.properties, common.properties, and ml_configuration.ini files.
7. Optional: If you do not want to run the configureEnv.sh script with the -ov option, you can manually update the dam.properties, common.properties, and
ml_configuration.ini files as follows.
a. Run the following command to encrypt the password specified in the step 1.

$JAVA_RT com.ibm.ccd.common.generate.config.DBEncryptionUtils -encrypt --password=<password>

b. Copy and paste the MongoDB credentials in all the three files.
8. Restart IBM® Product Master services.

Related tasks
Password encryption utility

Related information
Enable Auth
Role-Based Access Control

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Enabling SSL for MongoDB

You can configure MongoDB to support SSL.



Before you begin
Generate the SSL certificate. For more information, see:
MongoDB 3.4: Configure mongod and mongos for TLS/SSL
MongoDB 4.0: Configure mongod and mongos for TLS/SSL

Procedure
1. Open MongoDB configuration file by using the following command.

vi /etc/mongod.conf

2. In the mongod.conf file, add the following SSL properties created in the prerequisite.
mode
PEMKeyFile
CAFile
Example

net:
  ssl:
    mode: requireSSL
    PEMKeyFile: /etc/ssl/mongodb.pem
    CAFile: /etc/ssl/ca.pem
    allowConnectionsWithoutCertificates: true

3. Quit the MongoDB shell by pressing Ctrl+C, and then restart MongoDB by using the following command.

service mongod restart

4. Configure MongoDB SSL URL in the dam.properties, common.properties, and ml_configuration.ini files.
common.properties and dam.properties files - For more information, see Connection String URI Format.
common.properties

mongo_hostname=<host_name:port_number>/?[options]

dam.properties

mongo.url=<host_name:port_number>/?[options]

ml_configuration.ini ([MONGO DB] section) - For more information, see TLS/SSL and PyMongo.

host=<host_name:port_number>/?[options]

5. Go to the $JAVA_HOME/bin folder, import the SSL certificate in the truststore by using the following JAVA keytool command.

keytool -import -alias <alias name> -keystore <JAVA truststore path> -file <certificate file> -storepass <password>

Example

keytool -import -alias mongo -keystore "$JAVA_HOME/jre/lib/security/cacerts" -file /etc/ssl/mongodb-cert.cer -storepass password

6. Using the WebSphere® Application Server administrative console, add the following properties to the Java™ Virtual Machine (JVM) arguments.

-Djavax.net.ssl.trustStore=$JAVA_HOME/jre/lib/security/cacerts
-Djavax.net.ssl.trustStorePassword=password
-Djavax.net.ssl.trustStoreType=JKS

7. Restart IBM® Product Master services.


Note: If you redeploy IBM Product Master on a base setup that had SSL configured, the properties that were added to the Java™ Virtual Machine (JVM) arguments are lost, so you must perform steps 6 and 7 again.

Related reference
dam.properties file parameters
ml_configuration file parameters

Related information
TLS/SSL Configuration for Clients
Configure mongod and mongos for TLS/SSL

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Installing Python and machine learning modules


You can install Python and modules that are required for machine learning by using shell scripts that are packaged with Product Master.



Before you begin
The supported version is Python 3.9.7.

Procedure
1. Go to the $TOP/mdmui/machinelearning/requirements folder.
2. Select the appropriate system-specific requirements_<operating_system>.sh file.
3. Run the following command to start installation of the Python and dependent modules.

sudo sh requirements_<operating_system>.sh

Related concepts
Using machine learning (ML) assisted data stewardship

IBM Product Master 12.0 Fix Pack 8


Operating Systems: AIX, Linux, and Windows (Workbench only)


On-prem deployment offers you a choice of installation through IBM® Installation Manager, manually, or accelerated deployment (by using a Docker image).

IBM Installation Manager - Provides you an option of installation through Graphical (by using interactive graphical user interface), Console (through on-screen text
prompts), or Silent (by using command line using response file) mode.

Manually - You can use the packaged scripts that are bundled with the installer. Use manual deployment when you want to monitor and perform a customized
installation.

Accelerated - During an accelerated deployment, IBM Product Master is installed from a Docker image, which ensures a simple and consistent installation
experience. Use accelerated deployment when you need to deploy consistently and quickly across multiple instances.



IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Installing by using IBM® Installation Manager


You can install Product Master in graphical mode, console mode, or silent mode by using IBM® Installation Manager. Consider which installation method works best for your environment.

Important: IBM Installation Manager does not support Product Master installation with the Oracle RAC database.

Graphical mode
If the computer on which you are running Product Master can render a graphical user interface, then graphical mode is the preferred option. The IBM Installation Manager
displays a series of screens that walk you through the selection of features, basic parameter configuration, and provides a summary of the options that you selected
before the installation began.

Console mode
If the computer on which you are running Product Master cannot render a graphical user interface, or if you prefer to work in a text interface, you can choose the console mode installation option. Console mode uses IBM Installation Manager to provide a series of on-screen prompts that walk you through the selection of features and basic parameter configuration. Essentially, console mode installation is a text-based version of the graphical mode installation.

Silent mode
If you are planning identical installations on multiple computers, you might consider the silent option. A silent installation is started from the command line and uses a
response file. This option does not require you to specify the installation options. Instead, the installation options are read from a response file. You can create a response
file manually or by using the graphical installation wizard. A response file can be created without installing any software or during an installation. The steps that are taken
in the installation process and errors that are encountered are logged to a file.

Installing the product in graphical mode


You can use IBM Installation Manager to perform a graphical mode installation. You can choose from two options: graphical mode installation or extraction of the product files. With the extraction option, you use the installer to extract the product files, and then you perform the configuration and deployment of the product to the application server yourself.
Installing the product using console mode
Installing the product silently
To install IBM Product Master silently, you must edit the sample silent mode response files.
Installing the product using console mode
Installing the product silently
To install IBM Product Master silently, you must edit the sample silent mode response files.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Installing the product in graphical mode


You can use IBM® Installation Manager to perform a graphical mode installation. You can choose from two options: graphical mode installation or extraction of the product files. With the extraction option, you use the installer to extract the product files, and then you perform the configuration and deployment of the product to the application server yourself.

Before you begin


Before you begin, make sure that you meet these prerequisites:

You completed the installation preparation tasks (including preparing your IBM WebSphere® Application Server and database).
You added the Product Master offering in the Preferences section of the IBM Installation Manager.

Procedure
1. Start IBM Installation Manager. Go to the /<IM directory>/eclipse directory and run the ./IBMIM command to start IBM Installation Manager.
2. On the IBM Installation Manager home screen, click Install.
3. On the Install Packages screen, select the IBM Product Master product version, and click Next.
4. Accept the license agreement terms, and click Next.
5. On the second Install Packages screen:
a. Select the Installation Directory into which you want to install each component. If you choose to install a component in a directory other than the default,
select that component and click Browse in the Installation Directory field.
Attention: If you have IBM Rational® Application Developer installed, make sure that you do not install Product Master into the same package group. On the
Install Packages screen, select Create a new package group.
The Persona-based UI is installed in the specified installation directory mdmui folder.
b. For Architecture Selection, ensure that 64-bit is selected, and click Next.
6. Select the language and click Next.
7. You can follow any of the following methods to proceed further:
a. Select the Extract the product files feature to install and click Next.
i. Review the information that is given on the Extract Information screen and click Next.
ii. Review the information that is given on the Installation Summary screen and click Install.
b. Install with IBM WebSphere Application Server Network Deployment.
i. On the Database Configuration screen, enter the database details and click Test Connection before you exit the screen.
Note: Ensure that you use the same database name for both the remote and local database fields. For Oracle Database client versions higher than 12c,
installation with IBM WebSphere Application Server Network Deployment is not supported; you can extract the product files and install manually by using the scripts.
ii. On the WebSphere Application Server Configuration screen:
1. Enter the information that you used during the application server preparation.
2. Select Retrieve Host Details to obtain your Cell, Node, and Server information. Select an existing application server or provide a new one. Provide a virtual host.
iii. On the Application Configuration screen:
1. Provide your Perl installation home directory, JDK home directory, cache multicast address, TTL for multicast packets, RMI port, and application server HTTP port. Set the value of TTL for multicast packets to 0 for single-machine installations and 1 for clusters.
2. Select the Locale that you want to use for the installation. By default, the selected locale is English.
3. If you want the installer to create the tables to be used by Product Master, select the Create database tables to be used by the product checkbox.
iv. Review the configured parameters on the Summary screen and click Next.
v. Review the installation summary information and click Install.
vi. Enter the configuration information. Use the installation worksheets for guidance.
8. On the final IBM Installation Manager screen, click View Log Files to view the logs.
9. Click Finish and close IBM Installation Manager.

What to do next
Verify a successful installation by viewing the log files.

Related tasks
Post-installation instructions
Configuring GDS feature
Configuring Persona-based UI parameters

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Installing the product using console mode

Before you begin


Before you begin, make sure that you meet these prerequisites:

You completed the installation preparation tasks (including preparing your IBM® WebSphere® Application Server and database).
You added the Product Master offering in the Preferences section of the IBM Installation Manager.

Procedure
1. Review the prerequisites listed earlier in this topic and ensure that you have completed all of the necessary preparation steps. These steps are not optional.
2. Optionally, enable enhanced debug logging in INSTALLATION_MANAGER_HOME/logs by copying
STARTUPKIT_INSTALL_HOME/InstallationManagerDebug/log.properties into ./InstallationManager/logs.
Important: After you have enabled enhanced debug logging, the logged information in ./InstallationManager/logs can include password details that are entered by
the user during the installation. Ensure that these logs are stored in a secure place to avoid password exposure.
3. Start IBM Installation Manager in console mode:
a. From a command prompt, navigate to INSTALLATION_MANAGER_HOME/eclipse/tools.
b. Run the command imcl -c.
4. Select option 1, Install.
5. Set your repository by using the Preferences menu.
6. Select the Product Master edition to install and any additional features that you require (such as workbench, if you are installing a workstation).
7. Review and accept the license agreement.
8. Choose whether to install into an existing package group or create a new package group.
Tip: If you are unsure of what to choose, then accept the default. Most installations should create a new package group.
9. Define the installation directory into which you want to install each component.
10. Select the languages for this deployment.
English is always selected. If you want to support any languages in addition to English, select them.
11. Select whether to install Product Master with IBM WebSphere Application Server Network Deployment or to extract the product files for manual installation later.
a. To continue with manual installation by using the scripts, select extract the product files, and then select Install on the next screen.
b. Optional: For installation with IBM WebSphere Application Server Network Deployment:
i. Enter the database configuration details, as prompted.
ii. Enter the database table space configuration details, as prompted.
iii. Enter the required details of the WebSphere Application Server instance where Product Master is to be installed.
iv. Provide the remaining Product Master deployment and configuration details as prompted by the console. For example, details of the PERL directory,
JAVA_HOME, RMI port and HTTP port. If required, you can choose the creation of database tables in this step.
The installation console runs a series of validation tests. If necessary, take any corrective action to address any warnings or errors.
v. When all of the validation tests pass successfully, choose the Install option. The installation application installs Product Master. Depending on your
configuration selections, the installation process can take a significant amount of time.

Results
A success message indicates that the installation has succeeded and the installation verification tests have successfully completed. You can also view the log files to verify
a successful installation. If the installation is not successful, view the log files and use the information in the troubleshooting topics to assist you.

What to do next
Log in to the Product Master UI using the HTTP port to confirm the successful installation of Product Master.

Related tasks
Post-installation instructions
Configuring GDS feature
Configuring Persona-based UI parameters

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)



Installing the product silently
To install IBM® Product Master silently, you must edit the sample silent mode response files.

About this task


Sample silent mode response files are provided in the STARTUP_INSTALL_HOME/StartupKit directory. The following sample silent mode response files for Product Master
are available:

CE_WAS_ND_DB2.xml
Use this sample response file to install Product Master with IBM WebSphere® Application Server in Network Deployment mode and IBM DB2® database.
CE_WAS_BASE_DB2.xml
Use this sample response file to install Product Master with IBM WebSphere Application Server in Base Edition and IBM Db2 database.
CE_WAS_ND_ORACLE.xml
Use this sample response file to install Product Master with IBM WebSphere Application Server in Network Deployment mode and Oracle Database.
CE_WAS_BASE_ORACLE.xml
Use this sample response file to install Product Master with IBM WebSphere Application Server in Base Edition and Oracle Database.
CE_PAYLOAD_EXTRACTION.xml
Use this sample response file to extract Product Master files for manual installation.

Creating a response file while you are running a graphical installation


Use this procedure to capture responses and create a response file when you are running IBM Installation Manager in graphical mode.
Customizing the silent mode response file
You use this procedure to customize your silent mode installation response file.
Disabling the installer splash screen during silent installation
Use this procedure to disable the IBM Installation Manager splash screen for silent installations. This task must be completed for the silent installation to run
successfully.
Installing silently by using a response file
You can install Product Master silently, where the installation choices are provided in an options file instead of in the interactive IBM Installation Manager panes.
This type of installation is helpful when you are doing multiple identical installations.

Related tasks
Post-installation instructions
Configuring GDS feature
Configuring Persona-based UI parameters

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Creating a response file while you are running a graphical installation


Use this procedure to capture responses and create a response file when you are running IBM® Installation Manager in graphical mode.

Before you begin


The password values in the file are encrypted. If the password value is changed in the system, you must enter the correct password value in the response file before you
use it for a silent installation. You can enter a new unencrypted value for the password, and the system encrypts it when the file is used during installation.

Procedure
Issue the ../IBMIM -record $YOUR_PATH/mysilent.res command to start the installation; IBM Installation Manager records your responses and creates the response file at the specified path.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Customizing the silent mode response file


You use this procedure to customize your silent mode installation response file.

About this task


Although code examples might be shown with line breaks in the following content, the text between <.../> must be entered in the response file as one line without breaks.

Procedure
1. Open your response file.
2. Specify the home and shared resource directories.
a. To specify the MDM_INSTALL_HOME directory, add the following lines to your response file:

<profile id='IBM Product Master' installLocation='/home/mdmpim/IBM/MDM'>

Where /home/mdmpim/IBM/MDM is the Product Master installation home directory.


b. To specify the Installation Manager Shared Resource directory, add the following lines to your response file:

<preference name='com.ibm.cic.common.core.preferences.eclipseCache' value='${sharedLocation}'/>

Where ${sharedLocation} is the Installation Manager Shared Resource directory.


3. Specify the IBM® Product Master offering version and the features that you want to install by adding the following line:

<offering profile='IBM Product Master' id='com.ibm.mdm.collaborative' version='12.0.0.<FP00IF000_20200505-0316>'/>

Where 12.0.0.FP00IF000_20200505-0316 is the Product Master version.


Note: You can find the version number by looking in your installation media folder, for example, download_path/MDM/disk1/md/Offerings and locating the offering
JAR file. For example, disk1/md/Offerings/com.ibm.mdm.collaborative_x.x.x_YYYYMMDDHHMM.jar, where x.x.x_YYYYMMDDHHMM is the version number.
4. Specify the feature to install during the single IBM Installation Manager session by adding this line:

<offering profile='IBM Product Master' id='com.ibm.mdm.collaborative' version='12.0.0.<FP00IF000_20200505-0316>' features='com.ibm.im.mdm.db.feature'/>

Where features='com.ibm.im.mdm.db.feature' is the specific feature to install. For more information, see Examples for specifying features for a silent
installation.
5. Specify your database parameters.
Note: For the Extract the Product Files option, steps 5 - 7 are not required.
6. Specify your WebSphere® Application Server parameters.
7. Specify your Application Configuration parameters.

Example
To install only Product Master with database and application server, add this line:

features='com.ibm.im.mdm.db.feature'

To extract only the product files, include this line:

features='com.ibm.im.mdm.wl.feature'
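
For orientation, here is a minimal sketch of how these elements might fit together in one complete response file. The <agent-input>, <server>, and <install> wrapper elements and the repository path follow the standard IBM Installation Manager response file layout, and all paths and values are placeholders; the data keys are described in the topics that follow.

<?xml version="1.0" encoding="UTF-8"?>
<agent-input>
<!-- Repository that contains the Product Master offering; the path is a placeholder -->
<server>
<repository location='download_path/MDM/disk1'/>
</server>
<!-- Installation Manager Shared Resource directory -->
<preference name='com.ibm.cic.common.core.preferences.eclipseCache' value='/home/mdmpim/IBM/IMShared'/>
<profile id='IBM Product Master' installLocation='/home/mdmpim/IBM/MDM'>
<!-- Database, WebSphere Application Server, and Application configuration data keys go here -->
<data key='user.db.type,com.ibm.mdm.collaborative' value='DB2'/>
</profile>
<install>
<offering profile='IBM Product Master' id='com.ibm.mdm.collaborative' version='12.0.0.<FP00IF000_20200505-0316>' features='com.ibm.im.mdm.db.feature'/>
</install>
</agent-input>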

What to do next
Continue with disabling the installer splash screen and running the silent installation.

Silent installation database parameters for Db2


You must specify parameters for your IBM DB2® database in your silent installation response file.
Silent installation database parameters for Oracle
You must specify parameters for your Oracle database in your silent installation response file.
Silent installation WebSphere Application Server parameters
You must specify parameters for WebSphere Application Server in your silent installation response file.
Silent installation Application configuration parameters
You must specify parameters for Application configuration in your silent installation response file.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Silent installation database parameters for Db2


You must specify parameters for your IBM® DB2® database in your silent installation response file.

Enter the following lines in your response file if you are using a Db2 database. Change value= to the specific value used by your database.
Although code examples might be shown with line breaks in the following content, the text between <.../> must be entered in the response file as one line without breaks.

Database type

<data key='user.db.type,com.ibm.mdm.collaborative' value='DB2'/>

Database alias in a database catalog for the Db2 client

<data key='user.db.name,com.ibm.mdm.collaborative' value='PIMDB'/>

Database name

<data key='user.db.name.remote,com.ibm.mdm.collaborative' value='YOURDBASENAME'/>

Database schema name

<data key='user.db.schema,com.ibm.mdm.collaborative' value='DB2INST1'/>

Database server hostname

<data key='user.db.host,com.ibm.mdm.collaborative' value='localhost'/>

Database server port

<data key='user.db.port,com.ibm.mdm.collaborative' value='portnumber'/>

Database username (should be the same as the schema name)

<data key='user.db.user,com.ibm.mdm.collaborative' value='USERNAME'/>

Database password

<data key='user.db.password,com.ibm.mdm.collaborative' value='db2inst1'/>

Database client home directory

<data key='user.db2.home,com.ibm.mdm.collaborative' value='/home/db2inst1/sqllib'/>

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Silent installation database parameters for Oracle


You must specify parameters for your Oracle database in your silent installation response file.

Enter the following lines in your response file if you are using an Oracle database. Change value= to the specific value used by your database.
Although code examples might be shown with line breaks in the following content, the text between <.../> must be entered in the response file as one line without breaks.

Database type

<data key="user.db.type,com.ibm.mdm.collaborative' value='ORACLE'/>

Oracle client TNS name

<data key='user.db.name,com.ibm.mdm.collaborative' value='TNSNAME'/>

Oracle server database name

<data key='user.db.name.remote,com.ibm.mdm.collaborative' value='DBASENAME'/>

Database username (should be the same as the schema name)

<data key='user.db.user,com.ibm.mdm.collaborative' value='USERNAME'/>

Database user password

<data key='user.db.password,com.ibm.mdm.collaborative' value='DBPASSWORD'/>

Database server hostname

<data key='user.db.host,com.ibm.mdm.collaborative' value='DBHOSTNAME'/>

Database server port

<data key='user.db.port,com.ibm.mdm.collaborative' value='1521'/>

Database schema name

<data key='user.db.schema,com.ibm.mdm.collaborative' value='SCHEMANAME'/>

Oracle client home directory

<data key='user.L2.db.home,com.ibm.mdm.collaborative' value='ORACLEHOMEPATH'/>

Oracle system identifier name

<data key='user.oracle.sid,com.ibm.mdm.collaborative' value='ORACLESID'/>

Important: The silent installation process is not compatible with an Oracle Real Application Clusters (RAC) installation or with an Oracle Database that operates only on a
service name instead of the System Identifier (SID).

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Silent installation WebSphere Application Server parameters


You must specify parameters for WebSphere® Application Server in your silent installation response file.

Enter the following lines in your response file. Change value= to the specific value used by your application server.
Although code examples might be shown with line breaks in the following content, the text between <.../> must be entered in the response file as one line without breaks.

WebSphere Application Server installation home directory

<data key='user.L1.was.home,com.ibm.mdm.collaborative' value='/home/mdmpim/opt/IBM/WebSphere/AppServer'/>

WebSphere Application Server type, either ND (Federated) or BASE (stand-alone)

<data key='user.was.type,com.ibm.mdm.collaborative' value='Base'/>

Profile Home

<data key='user.was.profile.home,com.ibm.mdm.collaborative'
value='/home/mdmpim/opt/IBM/WebSphere/AppServer/profiles/AppSrv01'/>

WebSphere Application Server Network Deployment Manager or WebSphere Application Server Base server1 SOAP port

<data key='user.deploy.port,com.ibm.mdm.collaborative' value='8880'/>

WebSphere Application Server HTTP port

<data key='user.ce.http.port,com.ibm.mdm.collaborative' value='7507'/>

WebSphere Application Server Deployment Manager or WebSphere Application Server Base hostname

<data key='user.deploy.host,com.ibm.mdm.collaborative' value='localhost'/>

WebSphere Application Server Deployment Manager or WebSphere Application Server Base virtual hostname

<data key='user.deploy.vHost,com.ibm.mdm.collaborative' value='IPM_VHOST'/>

WebSphere Application Server deployment target

<data key='user.was.server,com.ibm.mdm.collaborative' value='IPM_APPSERVER'/>


<data key='user.was.cell,com.ibm.mdm.collaborative' value='node02Node01Cell'/>
<data key='user.was.node,com.ibm.mdm.collaborative' value='node02Node01'/>

WebSphere Application Server security parameters

<data key='user.was.user,com.ibm.mdm.collaborative' value='wasadmin'/>


<data key='user.was.password,com.ibm.mdm.collaborative' value='wasadmin'/>

Note: The following parameters must not be modified in your response file:

<data key='user.was.cluster,com.ibm.mdm.collaborative' value='None'/>


<data key='user.was.cluster.flag,com.ibm.mdm.collaborative' value='false'/>

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Silent installation Application configuration parameters


You must specify parameters for Application configuration in your silent installation response file.

Enter the following lines in your response file. Change value= to the specific value used by your application server.
Although code examples might be shown with line breaks in the following content, the text between <.../> must be entered in the response file as one line without breaks.

Perl installation home directory

<data key='user.ce.perl.directory,com.ibm.mdm.collaborative' value='/usr'/>

JDK home directory

<data key='user.ce.jdk.path,com.ibm.mdm.collaborative' value='/home/mdmpim/opt/IBM/WebSphere/AppServer/java/8.0'/>

Locale (Language to be used by Product Master)

<data key='user.ce.locale,com.ibm.mdm.collaborative' value='en_US'/>

Cache multicast address

<data key='user.ce.cache.multicast.address,com.ibm.mdm.collaborative' value='ipaddress'/>

Cache multicast time-to-live (0 for single server and 1 for clusters)

<data key='user.ce.cache.multicast.ttl,com.ibm.mdm.collaborative' value='0'/>

RMI port

<data key='user.ce.rmi.port,com.ibm.mdm.collaborative' value='17507'/>

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Disabling the installer splash screen during silent installation


Use this procedure to disable the IBM® Installation Manager splash screen for silent installations. This task must be completed for the silent installation to run
successfully.

About this task


Follow these steps to add the -nosplash parameter in the IBMIM.ini file.

Procedure
1. Go to the INSTALLATIONMANAGER_INSTALL_HOME/eclipse directory.
2. Open the IBMIM.ini file.
3. Add the -nosplash parameter. For example (Linux and UNIX),

vi IBMIM.ini
/opt/IBM/InstallationManager/eclipse/
jre_6.0.0.sr9_20110208_03/jre/bin/java
-nosplash
-vmargs
-Xquickstart
-Xgcpolicy:gencon

4. Save and close the file.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Installing silently by using a response file


You can install Product Master silently, where the installation choices are provided in an options file instead of in the interactive IBM® Installation Manager panes. This
type of installation is helpful when you are doing multiple identical installations.

Before you begin


Verify that the installation startup kit is installed. The response files in the kit can be used for a silent installation. Ensure that you have completed the steps in the
Disabling the installer splash screen during silent installation topic.

About this task


A properties file is generated when you run the interactive installation program. To use a silent installation, you must edit the properties file or create your own file by
editing one of the sample response files.

Procedure
1. To use a sample response file, go to STARTUPKIT_INSTALL_HOME. Response files have a .res extension. Use the file that is applicable to your operating system.
2. Edit the response file and make any necessary changes before you start the installation.
3. Start the installation with the applicable command:
a. Issue the IBMIM -record recordedFile command to run IBM Installation Manager and then generate the response file.
b. Issue the IBMIM -acceptLicense -silent -input inputFile command to run IBM Installation Manager in silent mode.
4. If an unrecoverable problem occurs during the silent installation, look for the cause of the problem in the log files in the MDM_INSTALL_HOME/logs/logs directory.
After you correct the issue, run the silent installation again.
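
For example, assuming that the response file was recorded to /home/mdmpim/mysilent.res (a placeholder path), the two commands from step 3 might look like this:

./IBMIM -record /home/mdmpim/mysilent.res
./IBMIM -acceptLicense -silent -input /home/mdmpim/mysilent.res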

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Installing the product manually


You can install the product manually with scripts. Make sure to set your environment variables, your runtime properties, your database drivers, and the application server
settings.

Before you begin


The Product Master files must be extracted by using IBM Installation Manager. For more information, see Installing by using IBM® Installation Manager.
Attention: You must provide all the parameters that you are prompted for during the product installation stage. If you fail to provide all the parameters, the product
installation remains incomplete.

Free text search - If enabled, you must install and configure Hazelcast IMDG and Elasticsearch. For more information, see Installing Hazelcast IMDG and
Installing Elasticsearch.

Digital Asset Management - If enabled, you must install MongoDB. For more information, see Installing MongoDB.

Machine Learning (ML) services - If enabled, you must install and configure MongoDB. For more information, see Installing MongoDB. For more information about
installing Python 3.9.7, the machine learning modules, and their dependencies on AIX, Ubuntu, RHEL, or CentOS, see Installing Python and machine learning
modules.

1. Set your environment variables.


2. Create the env_settings.ini file.
3. Validate the environment.
4. Configure the installation.
5. Run the compatibility scripts.
6. Configure the WebSphere® Application Server.
7. Configure a cluster environment.
8. Deploy the product in a cluster environment.
9. Configure runtime properties.
10. Run schema creation scripts.
11. Configure GDS feature.

Set your environment variables


You must set up specific environment variables in order for IBM Product Master to run successfully.

The configuration parameters are specified in the <install dir>/bin/conf/env_settings.ini file. You can create an env_settings.ini file by using the <install
dir>/bin/conf/env_settings.ini.default template.

Procedure

1. Set and export the following environment variables in the .bashrc file of the Product Master user:
TOP=<install_dir>
PERL5LIB=<install_dir>/bin/perllib
LANG=<locale value>, for example, en_US


Note: Locale C should not be set as the default because it can cause problems when you use Perl.
2. Log out and log in as the Product Master user to pick up the changes to the .bashrc file.
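
For example, the exports in the Product Master user's .bashrc file might look like the following sketch; the installation directory is a placeholder:

export TOP=/home/mdmpim/IBM/MDM
export PERL5LIB=$TOP/bin/perllib
export LANG=en_US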

Create the env_settings.ini file


For more information, see Creating the env_settings.ini file.

Validate the environment


Before you can configure the WebSphere Application Server, you need to verify that your environment is installed and configured properly.
Note: Run this script only one time.
Procedure

1. Go to the <install_dir> directory.


2. Run the <install_dir>/setup.sh script to:
Check whether the database client is configured.
Validate the Perl installation and notify if any Perl modules are missing.

Note: If any Perl modules are missing, install those Perl modules and run this script again.

Configuring the installation


For more information, see Running configuration scripts.

Running the compatibility scripts


Use the compatibility scripts to add some of the old environment variables that are used in previous versions of IBM Product Master. The variables include $TOP,
$CCD_DB, and $JAVA_RT.

Procedure

Add the following compatibility script lines to the .bashrc file:


rootDir=`perl $PERL5LIB/getTop.pl`

source $rootDir/bin/compat.sh

Configure the WebSphere Application Server


For more information, see Configuring the WebSphere Application Server.

Configure a cluster environment


For more information, see Configuring a cluster environment.

Deploy the product in a cluster environment

For more information, see Deploying on a clustered environment.

Configuring runtime properties


You must set a few runtime properties in the common.properties file as part of configuring IBM Product Master. For more information about these parameters, see the
comments in the common.properties file.

Procedure

1. If you are using FTP, set the directory for using FTP for import operations by specifying the FTP_root_dir parameter.
2. Set the temporary directory by specifying a value for the tmp_dir parameter. The directory /tmp is the default.

Run schema creation scripts


For more information, see Run schema creation scripts.

Configure GDS feature


For more information, see Configuring GDS feature.

Creating the env_settings.ini file


If you installed the product manually and you did not use the installer application, you need to create and edit the env_settings.ini file manually.
Running configuration scripts
Before you can configure the WebSphere Application Server, you need to configure the installation.
Configuring the WebSphere Application Server
To run IBM Product Master successfully, you must configure the WebSphere Application Server.
Deploying Product Master
After you update the configurations by using the configureEnv.sh script, run the following scripts in the order listed.
Run schema creation scripts
After you install the WebSphere Application Server, the database and the IBM Product Master application, you must run the scripts to create the schema for the
database.
Configuring GDS feature
You must configure a few GDS parameters in order for IBM Product Master to be able to exchange product data with a data pool.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Creating the env_settings.ini file


If you installed the product manually and you did not use the installer application, you need to create and edit the env_settings.ini file manually.

Before you begin


Ensure that you run the <install dir>/setup.sh script before you create the env_settings.ini file.

Procedure
1. Copy the <install dir>/bin/conf/env_settings.ini.default file as:
cd <install dir>/bin/conf

cp env_settings.ini.default env_settings.ini
2. Set the appropriate environment parameters.

Setting the common parameters in the env_settings.ini file


After you create the env_settings.ini file, you need to set the common parameters.
Setting the common database parameters
If you want to set up the database, you need to configure the database type and common parameters and configure the database type-specific parameters.
Storing database passwords in an encrypted format
For audit and security purposes, always store sensitive information, such as passwords, in an encrypted format.
Setting Db2 parameters
Ensure that you set the following DB2® parameters.
Setting Oracle parameters
Ensure that you set the following Oracle parameters.
Configuring the application server parameters
After you install the product and configure your database, you can perform more configuration of the application server. After the installation is configured, use
the shell scripts in the <install_dir>/bin/go directory to start, stop, and abort Product Master.
Configuring WebSphere MQ parameters
For IBM® Product Master functions that have dependencies on WebSphere® MQ to work, you need to update the env_settings.ini file.
Configuring Persona-based UI parameters
To enable features like Free text search, Machine learning, Digital Asset Management, and Vendor portal, you need to update configuration parameters in the
env_settings.ini file.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Setting the common parameters in the env_settings.ini file


After you create the env_settings.ini file, you need to set the common parameters.

Procedure
1. Open the env_settings.ini file.
2. Set the following parameters:

java_home
The path of Java™ home. Set the value of the java_home parameter to the supported IBM® SDK, Java ™ Technology Edition.
Note: Oracle JDK or Open Java Development Kit (OpenJDK) are not supported software development kit (SDK) packages.
jar_dir
The location of the third-party JAR files.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Setting the common database parameters


If you want to set up the database, you need to configure the database type and common parameters and configure the database type-specific parameters.

You need to set the following parameters regardless of the database you are using:

type
Possible values are DB2® or Oracle.
home
The database home directory.
username
The user name to connect to the database.
password
The password to connect to the database.
Note: Decide whether the database password is to be stored in an encrypted format or plain format, and set the encrypt_password parameter in the
env_settings.ini file accordingly.
hostname
The host name of the DB2 or Oracle server.
port
The port the database server listens on.
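
As a sketch, a [db] section with these common parameters might look like the following example; all values are placeholders, and the type-specific sections are described in the next topics:

[db]
type=db2
home=/home/db2inst1/sqllib
username=dbuser
password=somepwd
hostname=my-dbserver.company.com
port=60004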

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Storing database passwords in an encrypted format


For audit and security purposes, always store sensitive information, such as passwords, in an encrypted format.

About this task


There are other database-related scripts, for example:

Schema, company creation, and deletion scripts


Migration scripts
Maintenance scripts
Note: The maintenance scripts require the argument dbpassword if the encrypt_password parameter is set to yes in the env_settings.ini file.

If the argument is not passed, you are prompted for the database password. If the encrypt_password parameter is set to no or is not set at all, you can run the scripts
without the dbpassword argument. Depending on the value of the encrypt_password parameter, the db.xml file stores either the plain text password or encrypted
password.

Procedure
1. Add the encrypt_password parameter to the [db] section of the env_settings.ini file.
2. Set the encrypt_password parameter to yes if you want the password to be encrypted.
Note: If you do not want to encrypt the password, keep the password parameter, as is, in the [db] section.

3. Run the bin/configureEnv.sh --dbpassword=<database password> command. You are prompted to enter the password if it is not given as an argument.
If the --overwrite option is not used, a warning is displayed that asks you to run the script with the dbpassword argument. If the encrypt_password
parameter in the env_settings.ini file is not set or is set to no, the dbpassword argument is not required for the configureEnv.sh script.
4. Confirm that the script created the db.xml file in the $TOP/etc/default directory.
This step is important because:
It is the only place from where the Java™ code can read the encrypted password or plain text password
The decrypted password can be used in a JDBC connection.
5. Whenever any of the database-related properties in the [db] section of the env_settings.ini file are changed, run the bin/configureEnv.sh --dbpassword=<database password> command to re-create the db.xml file. You are prompted to enter the password if it is not given as an argument. In this case, if the
encrypt_password parameter in the env_settings.ini file is not set or is set to no, the dbpassword argument is not required for the configureEnv.sh script.
6. Confirm that the script created the db.xml file in the $TOP/etc/default directory.
This step is important because:
It is the only place from where the Java code can read the encrypted password or plain text password
The decrypted password can be used in a JDBC connection.
7. When you create the IBM® Product Master schema, run the create_schema.sh script.
For example:
bin/db/create_schema.sh --dbpassword=<database password>
You are prompted to enter the password if it is not given as an argument. If the encrypt_password parameter in the env_settings.ini file is not set or is set to no, the
dbpassword argument is not required for the scripts.
8. Run the bin/test_db.sh --dbpassword=<database password> command. You are prompted to enter the password if it is not given as an argument. If the
encrypt_password parameter in the env_settings.ini file is not set or is set to no, the dbpassword argument is not required for the scripts.

encrypt_password
If you choose to encrypt the database password, add the encrypt_password parameter to the [db] section of the env_settings.ini file, and set it to
yes. Remove the password parameter from the [db] section of the env_settings.ini file. This ensures that the database password is not present
anywhere in plain text; it is present only in encrypted form in the db.xml file. If you choose to leave the database password in plain format, add
the encrypt_password parameter to the [db] section of the env_settings.ini file, and set it to no. Keep the password parameter in the [db] section of
the env_settings.ini file as in earlier versions of Product Master.
Remove the following properties from the common.properties file:
db_userName
db_password
db_url
db_class_name
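
As a sketch, the resulting workflow with encrypt_password=yes might look like the following commands, run from the installation directory; the password value is a placeholder:

# Re-create the db.xml file with the encrypted password
bin/configureEnv.sh --dbpassword=mySecretPwd
# Schema creation and the connectivity test then require the same argument
bin/db/create_schema.sh --dbpassword=mySecretPwd
bin/test_db.sh --dbpassword=mySecretPwd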

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Setting Db2 parameters


Ensure that you set the following DB2® parameters.

About this task


Db2 provides a JDBC driver that can be used in either a Type 2 or a Type 4 architecture. You can use either architecture with IBM® Product Master.

Procedure
1. Configure the [db.<type>] section that corresponds to the value of type in the database section.
For example, if you are using Db2, set type=db2 in the database section and configure the [db.db2] section.
2. Configure the JDBC driver type parameter in the [db] section for Db2.
a. For Type 4 architecture:
i. Type 4 is the default architecture.
ii. Set the jdbc_driver_type parameter to 4.
iii. Set the port parameter in the [db] section to the port that the Db2 listener is on. Ask your database administrator for the port.
b. For Type 2 architecture:
i. Set the jdbc_driver_type parameter to 2. The port and hostname parameters in the [db] section are ignored.
3. Configure the [db.db2] section for Db2 by setting the following parameters:

alias
This parameter is in the [db.db2] section in the env_settings.ini file. This parameter is used by the CLP and the JDBC Type 2 drivers. It is the alias the CLP
uses in the CONNECT statement.
db_name
This parameter is in the [db.db2] section in the env_settings.ini file. The db_name parameter defaults to the value of the alias parameter, therefore,
db_name must be set only when the name of the database differs from the alias the client uses. This parameter is only used for Type 4 connections.

Example
Here is a simple example of a Type 4 connection, where the client alias and the database name are both 'mydb':

[db]
type=db2
username=dbuser
password=somepwd
home=/home/db2inst1/sqllib
hostname=my-dbserver.company.com
port=60004
jdbc_driver_type=4

[db.db2]
alias=mydb
Here is an example of a Type 4 connection when the alias (mydb) differs from the database name (mdmpim):

[db]
type=db2
username=dbuser
password=somepwd
home=/home/db2inst1/sqllib
hostname=my-dbserver.company.com
port=60004
jdbc_driver_type=4

[db.db2]
alias=mydb
db_name=mdmpim

Here is a simple example of a Type 2 connection; the hostname and port parameters are ignored and can remain commented out:

[db]
type=db2
username=dbuser
password=somepwd
home=/home/db2inst1/sqllib
#hostname=my-dbserver.company.com
#port=60004
jdbc_driver_type=2

Note: For Oracle Database 18c and Oracle Database 19c, manually update the jars-oracle6.txt file in the $TOP/bin/conf/classpath folder for the JARs related to the Oracle
client.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Setting Oracle parameters


Ensure that you set the following Oracle parameters.

About this task


Oracle supports the following JDBC driver types:

thin
This type is the default type.
OCI
The OCI driver allows the use of TAF (Transparent Application Failover) for RAC (Real Application Cluster) installations.

You can use either driver type with IBM® Product Master.

Procedure
1. Configure the [db.<type>] section, which corresponds to the value of type in the database section.
For example, if you are using Oracle, set type=oracle in the database section and configure the [db.oracle] section.
2. Configure the [db] section for Oracle.
a. Set the JDBC driver type. Set the driver type to either thin or OCI.
3. Configure the [db.oracle] section for Oracle.

instance
The name of the Oracle instance. This value is used in the JDBC connect string, and in the SQLPlus connect string if the tns_name parameter is not set.
tns_name
This parameter is in the [db.oracle] section in the env_settings.ini file. The TNS name is used by SQLPlus to connect to the database. Set this parameter only if
SQLPlus uses a different name than JDBC to connect to the database. This parameter defaults to the value of the instance parameter; therefore,
tns_name must be set only when the client connection name differs from the SID of the database.
SID
Oracle System Identifier (SID) is unique for each Oracle Database system. The Oracle SID identifies the system and SERVICE_NAME identifies the remote
service. This parameter is in the [db.oracle] section in the env_settings.ini file.
success_token
If the Oracle Database client used is in a language other than English, in order for the test_db.sh script and other shell scripts to work, specify in this
parameter the text that is returned when a successful connection is made to the Oracle Database server.

Example
Here is a simple example, where the database SID is 'mydb' and the client also uses 'mydb' to connect through SQLPlus:

[db]
type=oracle
username=dbuser
password=somepwd
home=/opt/oracle/app/product/11.1.0/db_1
hostname=my-dbserver.company.com
port=1525

[db.oracle]
instance=mydb

Here is an example where the SQLPlus connect name differs from the SID (database SID = 'mdmpim', client uses 'mydb' to connect through SQLPlus):

[db]
type=oracle
username=dbuser
password=somepwd
home=/opt/oracle/app/product/11.1.0/db_1
hostname=my-dbserver.company.com
port=1525

[db.oracle]
instance=mydb
tns_name=mdmpim

Setting up Oracle to use the OCI drivers


The OCI is an application-programming interface to Oracle databases. It consists of a library of C language routines to allow C programs (or programs that are
written in other third-generation languages) to send SQL statements to the database and interact with it in other ways. The OCI driver allows the use of TAF
(Transparent Application Failover) for RAC (Real Application Cluster) installations. You need to set up and configure support for the OCI driver for Oracle, and
change the IBM Product Master environment settings.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Setting up Oracle to use the OCI drivers


The OCI is an application-programming interface to Oracle databases. It consists of a library of C language routines to allow C programs (or programs that are written in
other third-generation languages) to send SQL statements to the database and interact with it in other ways. The OCI driver allows the use of TAF (Transparent Application
Failover) for RAC (Real Application Cluster) installations. You need to set up and configure support for the OCI driver for Oracle, and change the IBM® Product Master
environment settings.

Before you begin


Ensure that the Oracle client is installed. For more information, see System requirements.

Procedure
Add the following environment variables in the .bashrc or .bash_profile file of the IBM Product Master user.

$ORACLE_HOME - This variable is the directory where the Oracle client software is installed.
$LD_LIBRARY_PATH - This variable is an environment variable for Sun and Linux®. Use $LIBPATH for AIX® and $SHLIB_PATH for HP-UX.
$PATH

For example, the environment variables in the .bashrc or .bash_profile file look like this:

export ORACLE_HOME=/opt/oracle/11g/client_1
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$LD_LIBRARY_PATH
export PATH=$ORACLE_HOME/bin:$PATH

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Configuring the application server parameters


After you install the product and configure your database, you can perform more configuration of the application server. After the installation is configured, use the shell
scripts in the <install_dir>/bin/go directory to start, stop, and abort Product Master.

Configuring the application server requires four steps in the env_settings.ini file:

1. Set the appserver type and common properties in the [appserver] section.
2. Configure the parameters for the appserver type in the [appserver.<type>] section.
3. Configure the parameters for each appserver service in the [appserver.<service name>] section.
4. Add the security properties username and password to the [appserver] section.

Setting the common application server parameters


To set up the application server, you need to configure the application server type and common parameters, as well as the application type-specific
parameters.
Setting WebSphere Application Server parameters
If you are using WebSphere® Application Server as your application server for running IBM® Product Master, you must verify the configuration settings, start the
application server, configure group and server settings, run some scripts, and then start the application server.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Setting the common application server parameters


To set up the application server, you need to configure the application server type and common parameters, as well as the application type-specific
parameters.

Procedure
Set the following parameters in the [appserver] section of the env_settings.ini file:

type
The application server type. Refer to the env_settings.ini.default file for the entire list of supported sections.
home
The home directory of the appserver.
rmi_port
The RMI port.
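
As a sketch, the [appserver] section might look like the following example. The type value websphere is an assumption that is consistent with the [appserver.websphere] section name used in the next topic; the path and port are placeholders:

[appserver]
type=websphere
home=/home/mdmpim/opt/IBM/WebSphere/AppServer
rmi_port=17507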

What to do next
See Setting WebSphere Application Server parameters.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Setting WebSphere Application Server parameters


If you are using WebSphere® Application Server as your application server for running IBM® Product Master, you must verify the configuration settings, start the
application server, configure group and server settings, run some scripts, and then start the application server.

Before you begin
Before you can set up Product Master, ensure that WebSphere Application Server is installed and configured.

Procedure
1. Configure the [appserver.websphere] section with the following parameters:

application_server_profile
The name of the WebSphere Application Server profile.
cell_name
The name of the WebSphere Application Server cell where Product Master is installed.
node_name
The name of the node in the WebSphere Application Server cell where Product Master is installed.
admin_security
Set this parameter to true if WebSphere Application Server administrative security is enabled.

2. Configure the [appserver.appsvr] section with the following parameters:

port
The port that Product Master runs on.
appserver_name
The name of the WebSphere Application Server component, which is created in a later step.
vhost_name
The name of the WebSphere Application Server virtual host component, which is created in a later step.

3. Add the security parameters username and password to the [appserver.websphere] section.
For example:

# Application server admin user name and password. This info is
# required by WebSphere when admin_security in the
# [appserver.websphere] section is set to true. If username and
# password are not provided in env_settings.ini, the user needs
# to provide the values on the command line when invoking scripts
# like start_local.sh, or the user is prompted to enter the
# values before the execution of the script continues.
#username=
#password=

Note: If you choose not to save the credential information in the env_settings.ini file, you can provide it from the command line when you run the install_war.sh script.
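
Putting these parameters together, a sketch of the two sections might look like the following example; the profile, cell, node, and credential values are placeholders taken from the silent installation examples earlier in this documentation:

[appserver.websphere]
application_server_profile=AppSrv01
cell_name=node02Node01Cell
node_name=node02Node01
admin_security=true
username=wasadmin
password=wasadmin

[appserver.appsvr]
port=7507
appserver_name=IPM_APPSERVER
vhost_name=IPM_VHOST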

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Configuring WebSphere MQ parameters


For IBM® Product Master functions that have dependencies on WebSphere® MQ to work, you need to update the env_settings.ini file.

Procedure
1. Open the env_settings.ini file, and go to the [mq] section.
2. Set the following parameters:

enabled
Set to yes to enable the support for functions, which have dependencies on WebSphere MQ.
home
The installation directory of the WebSphere MQ client.

3. Save your changes.
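
As a sketch, the resulting [mq] section might look like the following example; the client installation directory is a placeholder based on a typical WebSphere MQ client location:

[mq]
enabled=yes
home=/opt/mqm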

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Configuring Persona-based UI parameters


To enable features like Free text search, Machine learning, Digital Asset Management, and Vendor portal, you need to update configuration parameters in the
env_settings.ini file.

Procedure
1. Open the env_settings.ini file, and update the following parameters according to the modules that you want to enable.

2. Set the following parameters. For more information, see env_settings.ini file parameters.

Digital Asset Management


[dam] section
enable
[mongo] section
mongo_database
mongo_hostname
mongo_password
mongo_port
mongo_username
mongodb_encrypt_password
Free Text Search
[freetext-search] section
elastic_authentication
elastic_cluster_name
elastic_encrypt_password
elastic_password
elastic_server_hostname
elastic_username
enable
indexer_port
pimcollector_port
[hazelcast] section
hazelcast_server_IpAddress
hazelcast_server_port
Machine learning
[mlservice] section
enable
ml_service_protocol
ml_service_host
ml_controller_port
ml_attributes_port
ml_categorization_port
ml_standardization_port
Product Master UI WAR
[mdmui-app-war] section
enable
Product Master REST WAR
[mdmrest-app-war] section
enable
Simple Mail Transfer Protocol (SMTP)
[smtp] section
from_address
smtp_additional_props
smtp_address
smtp_authentication
smtp_encrypt_password
smtp_password
smtp_port
smtp_username
Single Sign-on (SSO)
[sso] section
enable_sso
sso_company
Vendor portal
[vendor-portal] section
enable

3. Save your changes.
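
As a sketch, enabling two of these modules might look like the following excerpt. The yes values, host names, and port numbers are placeholder assumptions; see the env_settings.ini file parameters topic for the authoritative values:

[dam]
enable=yes

[freetext-search]
enable=yes
elastic_server_hostname=localhost
elastic_cluster_name=elasticsearch
elastic_authentication=no

[hazelcast]
hazelcast_server_IpAddress=localhost
hazelcast_server_port=5701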

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Running configuration scripts


Before you can configure the WebSphere® Application Server, you need to configure the installation.

Before you begin


You must run the configureEnv.sh script after you change the env_settings.ini file or add any JAR file to the jar directory in an enterprise installation.

About this task

The configureEnv.sh script does the following tasks:

Validates the env_settings.ini file and notifies if there are any errors.
Generates the configuration for IBM® Product Master services.
Generates the following Persona-based UI settings files:

File name | Used for | File location
application.properties | pim-collector and indexer | $TOP/mdmui/dynamic/indexer and $TOP/mdmui/dynamic/pimcollector
config.json | User interface configuration | $TOP/mdmui/dynamic/MDMUI
dam.properties | Digital Asset Management | $TOP/etc/default/dam/config
damConfig.properties | Digital Asset Management | $TOP/mdmui/dynamic/mdmrest
dashboards_config.ini | Dashboards | $TOP/mdmui/dashboards/tnpmoed/conf
ml_configuration.ini | Machine learning services | $TOP/mdmui/machinelearning/config
restConfig.properties | REST APIs | $TOP/mdmui/dynamic/mdmrest
Generates a <install dir>/build/build.properties file for Apache Ant.
Generates the common.properties file.
Note:
Comments inside the common.properties file are deleted after you run the configureEnv.sh script. If you want the descriptions for each property, refer the
common.properties.default file.
If the common.properties file exists, a warning message is reported and displays the missing properties, which exist in the common.properties.template file.
If the common.properties file does not exist, you can either copy the properties from the common.properties.default file or delete the common.properties
file and run the configureEnv.sh script to generate a new file.
If the configureEnv.sh script is run with the overwrite (--ov) option, the script does not overwrite all the existing properties; it appends only the newly
added properties to the property file.

Following are the parameters for the configureEnv.sh script.

--ov
Overwrites all the existing generated files; you might lose your custom configurations in such files.
--dbpassword=<database password>
Passes the database password so that it is not stored in plain text. For more information, see Storing database passwords in an encrypted format.
--mqpassword=<MQ password>
Passes the IBM MQ password so that it is not stored in plain text. Pass this parameter if the value is removed from the env_settings.ini file.
--mongodbpassword=<MongoDB password>
Passes the MongoDB password so that it is not stored in plain text. The value of the mongodb_encrypt_password parameter should be set to yes and the password
value should be removed from the env_settings.ini file.
--smtppassword=<SMTP password>
Passes the SMTP password so that it is not stored in plain text. The value of the smtp_encrypt_password parameter should be set to yes and the password value
should be removed from the env_settings.ini file.
--espassword=<Elastic Search password>
Passes the Elasticsearch password so that it is not stored in plain text. The value of the elastic_encrypt_password parameter should be set to yes and the password
value should be removed from the env_settings.ini file.

Procedure
1. Go to the <install dir>/bin directory.
Note: Use a short folder name for the <install dir> to avoid the "java.lang.IllegalArgumentException: Value too long" error message during installation. For example,
use /opt/MDM instead of /opt/IBM/MDM/MDMv12.

2. Run the <install dir>/bin/configureEnv.sh script.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Configuring the WebSphere Application Server


To run IBM® Product Master successfully, you must configure the WebSphere® Application Server.

Before you begin


If you are installing the product on AIX®, you must increase the size of the ncargs parameter to accommodate the long list of arguments that the product installation
requires. Run the following command:
chdev -l sys0 -a ncargs=NewValue
Where NewValue can be a value from 6 (the operating system default) to 128 and represents the number of 4 K blocks to be allocated for the argument list.

Procedure

1. Add a WebSphere Application Server group.
This group is used to grant permissions in ${WAS_HOME}, which is necessary for the Product Master application server. Some examples of group names are:
wasgrp, wasgroup, or pimgroup.
On an AIX server, you can add a group by using the SMIT administration tool. For more information about creating a group and setting permissions for the group, see
your operating system documentation. Ensure that the Product Master user is always part of the WebSphere Application Server group.

2. Add the Product Master user to the group created in the previous step.
To check group membership, run the id command from the UNIX command prompt as the Product Master user. If the group is not in the list of groups, log out, log
in, and run the id command to check the user again.
3. Start the WebSphere Application Server default server.
To start the WebSphere Application Server default server, run the following command as root:

${WAS_HOME}/bin/startServer.sh server1

4. In the WebSphere Application Server console, change the umask for the server1 process to 002. In the Run as group text box for server1, set the text box to the
group created in the first step.
5. Stop the WebSphere Application Server console.
To stop the administration console, run the following command as root:

${WAS_HOME}/bin/stopServer.sh server1

6. Change the permissions on the WebSphere Application Server directory so that the group has write permission:

# chgrp -R wasgroup <WAS_HOME>
# chmod -R g+rw <WAS_HOME>

Note: The WAS_HOME variable is not defined here because you must run the commands as root. You must manually enter the WebSphere Application Server
installation path, for example: chmod -R g+rw /opt/IBM/WebSphere/AppServer. The group that is used here must be the same as the one you set up in step
4.
7. Start the application server and the administrative console.
To start the WebSphere Application Server, run the following command as root:

${WAS_HOME}/bin/startServer.sh server1

Generally, Product Master is installed on the root directory of WebSphere Application Server but some installations use an installation ID (such as wsadmin) to run
the WebSphere Application Server console.

What to do next
Deploy the Product Master. For more information, see Deploying Product Master.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Deploying Product Master


After you update the configurations by using the configureEnv.sh script, run the following scripts in the order listed.

About this task


$TOP/bin/websphere/create_vhost.sh
You are prompted to create a virtual host.

$TOP/bin/websphere/create_appsvr.sh
You are prompted to create an application server.

$TOP/bin/websphere/install_war.sh
You are prompted to install the application server that is configured for IBM® Product Master in the [appserver.appsvr] section of the env_settings.ini file.
Install Product Master on the default application server (appsvr_<SERVER_NAME>). For more information, see Enabling horizontal cluster for the IBM Product
Master.
When WebSphere® Application Server security is enabled, add the wsadminUsername and wsadminPwd arguments to the install_war.sh command.

Syntax

install_war.sh [--wsadminUsername=<WAS admin user name> --wsadminPwd=<password for WAS admin user>]

The install_war.sh script installs the WAR file for each application server that is defined in the [services] section in the env_settings.ini file.

$TOP/mdmui/bin/installAll.sh
Installs and deploys the Persona-based UI on the application server. Also starts the following services, if they are enabled:

Free Text Search


Machine learning services

Procedure
1. Enter the following URL in the address bar of your browser to access the GUI:

Admin UI

http://host_name:port_number/utils/enterLogin.jsp
Persona-based UI
http://host_name:port_number/mdm_ui

2. To manually start these services, proceed as follows:


ML services
a. Go to the scripts folder, for example "$TOP/mdmui/machinelearning/scripts".
b. Use the following commands as required:
To start the service:

python3.9 start.pyc

To stop the service:

python3.9 stop.pyc

To check the status of the service:

python3.9 status.pyc

All services
To start all the services:

$TOP/bin/go/start_local.sh

To stop all the services:

$TOP/bin/go/stop_local.sh

To get status of all the services:

$TOP/bin/go/rmi_status.sh

Free Text Search services


To start the service:

$TOP/mdmui/bin/startFtsServices.sh

To stop the service:

$TOP/mdmui/bin/stopFtsServices.sh

Enable Free text search. For more details, see Using Free Text Search.

What to do next
Complete the steps that are listed in the Post-installation instructions.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Run schema creation scripts


After you install WebSphere® Application Server, the database, and the IBM® Product Master application, you must run the scripts to create the schema for the database.

Creating a schema
IBM Product Master provides a script that you use to create the schema for your database.

Before you can create a schema, ensure that you:

Create valid table space names.
Verify database connectivity.

If you run the create_schema.sh script without the tablespace option, all tables and indexes are created in the default table spaces USERS and INDX only. If you
created all of the buffer pools and table spaces as described in Creating table spaces, make sure to use an appropriate table space mapping file.
The <install dir>/bin/db/analyze_schema.sh script runs the local database schema analyzer.

1. Use the following shell script to create the schema: <install dir>/bin/db/create_schema.sh. It creates a log file that is called <install dir>/logs/schema.log.
Attention: When you run the create_schema.sh script, errors are not displayed. Ensure that you review the log file to view any errors.
Note: Run create_schema.sh only one time. If you run create_schema.sh on an existing schema, you replace the existing schema with an empty schema.
2. Optional: You can specify the --tablespace argument to specify a table space name mapping file that shows the customized table space names for the required
table spaces: create_schema.sh --tablespace=<tablespace name mapping file>.
If you do not specify the argument --tablespace=tablespace_name_mapping_file on the command line when you first run the create_schema.sh script, all tables
and indexes are created in the default table spaces USERS and INDX. If you do not specify the argument --tablespace=tablespace_name_mapping_file in a later
run, the name that is used for the previous create schema operation is used. For more information about table spaces, see Table space names for static tables.

3. You can specify the --silent argument to suppress confirmation messages when the script runs, as in the following command:

nohup ./create_schema.sh --silent &

4. After you run the create_schema.sh command, review the $TOP/logs/schema.log file to check for errors.

Custom table space names
There are two types of tables: static tables and runtime tables. Product Master creates these tables by two methods.
Testing the database connectivity
Before you can use IBM Product Master, you must create the database schema.
Error handling for table space name mapping file
In addition to the standard script errors in 'create schema', and the new tablespace_name_mapping_file command-line argument, the mapping file errors are
validated.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Custom table space names


There are two types of tables: static tables and runtime tables. Product Master creates these tables by two methods.

Static tables
These tables are created during installation phase when you run the create_schema.sh script.
Runtime tables
These tables are created during run time when the following functions are used:

Creating user-defined logs


Deleting a catalog
Importing items
Integrity verification scripts
Docstore Maintenance script

By default, table spaces USERS, INDX, and BLOB_TBL_DATA are used for creating Product Master database tables. Table space name customization for overriding default
table space names is available in the following section.
This custom table space function addresses the deployment issues and is for new installations only, so there are no migration issues. If you have an existing system that is
deployed, your DBA must manually change the table space names for all tables under the Product Master database schema in an appropriate maintenance window. Also, the
table space parameters in the common.properties file must be updated to the new table space names where you want the runtime tables to be created.

Table space names for static tables


Product Master creates tables during installation phase by using default table space names: USERS, INDX, and BLOB_TBL_DATA.

A table space mapping file can be used to define custom table space names instead of the default table space names that are mentioned previously. This file is a comma-
delimited text file that maps the tables, table spaces, and index table spaces together. This file is used as a parameter for the create_schema.sh script, for example:

$TOP/bin/db/create_schema.sh --tablespace=<table space name mapping file>

The table space name mapping file has the following format for each line:

table_name,(table_tablespace_name),(index_tablespace_name)

Both table_tablespace_name and index_tablespace_name are optional. For example:

tctg_sel_selection,ctg_tables,
tctg_dys_dynamic_selection,ctg_tables,
tctg_itm_item,
tctg_itd_item_detail,ctg_tables,ctg_indx
tctg_ita_item_attributes,ctg_tables,ctg_indx

The table space name mapping file includes the following properties:

If any of the table_tablespace_name and index_tablespace_name values are not specified, the default table space names are used.
Tables that are used by Product Master but are not included in the mapping file use the default table space names.
Blank lines are ignored.
Lines that start with # are considered as comment lines and are ignored, for example:
#----------------------------

# This is a comment line

#----------------------------

A default mapping file is in $TOP/src/db/schema/gen/tablespace_name_mapping_file.txt.

This file follows the format that is specified previously, and it can be used as a template for customizing table space names. It includes all required table spaces that are
created during the installation phase of Product Master.

In production environments, it is ideal to use the table spaces as outlined in the table space requirements section, so that highly used tables such as itd, ita, itm, icm,
and lck are stored in separate table spaces and buffer pools. This separation helps to improve overall performance. To do so, create a table space mapping
file with the following contents:

tctg_itd_item_detail,itd_data,itd_ix
tctg_ita_item_attributes,ita_data,ita_ix
tctg_itm_item,itm_data,itm_ix
tctg_icm_item_category_map,icm_data,icm_ix
tutil_lck_lock,lck_data,lck_ix

Table space names for runtime tables


IBM® Product Master creates tables during run time by using default table space names: USERS and INDX.

The default table space names can be changed through the $TOP/etc/default/common.properties file.

You can change these defaults by setting the user_tablespace_name and index_tablespace_name parameters in that file. For example:

user_tablespace_name=data
index_tablespace_name=index

In this example, data and index replace the USERS and INDX table spaces. These table spaces are used for tables that are created during run time.

Important: These properties are optional. If any of them is not defined, the hardcoded default values are used.
Note: The properties user_tablespace_name and index_tablespace_name are not listed in the $TOP/etc/default/common.properties file by default. If you intend to change
your table space names, add these properties to the $TOP/etc/default/common.properties file and set the required values.
After you modify the table space name parameters, you must restart Product Master. Ensure that the table spaces are created and usable by the Product Master
database user before you restart.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Testing the database connectivity


Before you can use IBM® Product Master, you must create the database schema.

About this task


When you are connecting to the database, the <install dir>/bin/test_db.sh script tests the native client and JDBC connections. The script prints any errors.
The schema generation script, create_schema.sh, does not stop if it encounters an error, nor does it display errors. Examine the log file, <install dir>/logs/schema.log, to ensure that the schema was successfully created.

You can run the create_schema.sh script with the --verbose option. This option prints logging information to the log file, including the SQL that was sent to the database and the output from the Java™ programs.

Procedure
1. Verify database user.
Verify that the database user referenced in the common.properties file exists in the database with the correct privileges.
2. Verify that you have command-line connectivity and that Product Master can connect to the database by running the following shell script:
a. Run test_db.sh.
The test_db.sh command tests command-line connectivity by using DB2® or sqlplus. It also tests JDBC connectivity by using Java.
3. After the database user is set up correctly, create the Product Master database schema.
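For example, a typical verification-and-creation sequence (the --verbose flag is optional, as noted above):

$TOP/bin/test_db.sh
$TOP/bin/db/create_schema.sh --verbose
# create_schema.sh does not stop or display errors; review the log afterward:
cat $TOP/logs/schema.log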

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Error handling for table space name mapping file


In addition to the standard create_schema.sh script errors, the table space name mapping file that is passed with the new --tablespace command-line argument is validated.

The following mapping file errors are validated:

1. If the mapping file does not exist, the system returns this error:
The tablespace name mapping file 'file_name' does not exist.
If this output happens, the script stops.
2. If the mapping file is not a readable text file or is invalid, the system returns this error:
The tablespace name mapping file 'file_name' is invalid.
If this output happens, the script stops.
3. If a line in the mapping file is not formatted as required, the system returns this error:
The following line in the tablespace name mapping file 'file_name' is invalid and will be ignored: the_line.
If this output happens, the script continues.
4. If a table name in a line in the mapping file does not exist or is not a valid IBM® Product Master table, the system ignores the line and returns this error:
Invalid table name: table_name.
If this output happens, the script continues.
5. If a table name or the whole line is duplicated, the system ignores the line and reports this warning:
Duplicated table name: table_name.
If this output happens, the script continues.

The table_tablespace_name and index_tablespace_name are optional. If either of them is missing, the create_schema.sh script uses the default table space names. No error message or warning is returned.

If the table space name mapping is completed successfully, the system returns this message:

The system has applied the tablespace name(s) for each table successfully.
If any error is reported, the system returns this message:
The system failed to apply the tablespace name(s) for each tables
All of these message strings are localized.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Configuring GDS feature


You must configure a few GDS parameters so that IBM® Product Master can exchange product data with a data pool.

Before you begin


You need to load the Generic Persona Model. For more information, see Post-installation instructions.

Procedure
1. Stop all the services.
2. If you want to create a new specific company for the GDS Persona Supply Side configuration, run the create_cmp.sh script that is located in the $TOP/bin/db folder.
You must complete the Post-installation instructions for the new company.
3. Configure the env_settings.ini file for the GDS configuration. Update the following default properties in the [gds] section as required.

#Possible value is """"yes"""" or """"no"""". Set this to """"yes"""" only if GDS is to be enabled. Default
value is """"no"""".
enabled=yes
#Company code to load GDS datamodel in.
company_code=
#Possible value for gds_app_type is """"Demand"""" or """"Supply""""
gds_app_type=
#Possible values Transora,WWREV6
ACTIVE_DATA_POOL_ID=
#Messaging related parameters. Self GLN is needed for GDS Demand side only.
inbound_queue_name=
outbound_queue_name=
queue_connection_factory=
datapool_gln=
self_gln=

4. Install IBM MQ. For more information, see IBM MQ documentation.


5. Set the following default properties in the [mq] section as required. To enable IBM MQ, the value of the enabled property should be true.

enabled=true
#home will default to /opt/mqm if not set
home=<mq home>
# if mq security is enabled, value should be true. Defaults to false
mq_security=false
username=<user name>
password=<password>
# encrypt_password defaults to no. If the user wants the password to be encrypted, it should be set to yes. And value of
password parameter above should be removed.
encrypt_password=no

6. Run the configureEnv.sh script that is located in the $TOP/bin folder. If you cannot override the property files by running the configureEnv.sh script, update the properties manually as follows.
a. Add the IBM MQ JAR files from the jars-mq.txt file and directory to the class path.
b. Set enable_gds=true in the common.properties file in the $TOP/etc/default folder.
c. Set the following properties in the gds.properties file in the $TOP/etc/default folder (see the sample after this list).
company_code
gds_app_type - Possible values are Supply and Demand.
Internal_Classification_Scheme - Possible values are Internal_Hierarchy and GPC_Hierarchy. The default values are set based on the value of the gds_app_type parameter.
If the value of the gds_app_type=Supply then Internal_Classification_Scheme=Internal_Hierarchy.
If the value of the gds_app_type=Demand then Internal_Classification_Scheme=GPC_Hierarchy.
ACTIVE_DATA_POOL_ID - Possible values are Transora (for 1SYNC) and WWREV6 (for GDSN/SA2).
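A minimal Supply-side sketch of these gds.properties entries (the company code is a placeholder that you replace with your own):

company_code=<company code>
gds_app_type=Supply
Internal_Classification_Scheme=Internal_Hierarchy
ACTIVE_DATA_POOL_ID=Transora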
d. Update the mq.xml file in the $TOP/etc/default folder.
If the file is missing, create an mq.xml file as follows with the correct user name and password.

<?xml version="1.0" encoding="UTF-8"?>


<mq_config>
<mq_security_enabled>true</mq_security_enabled>
<mq_userName><username></mq_userName>
<mq_password_encrypted/>



<mq_password_plain><password></mq_password_plain>
</mq_config>

e. Update following properties in the properties.xml file.


INBOUND_QUEUE_NAME
OUTBOUND_QUEUE_NAME
QUEUE_CONNECTION_FACTORY
DATAPOOL_GLN (Demand side)
SELF_GLN (Demand side)
Supply side path for the properties.xml file:
$TOP/etc/messaging/xml/supply/transora
Demand side path for the properties.xml file:
$TOP/etc/messaging/xml/demand/wwrev6
7. Run the loadGDSDatamodel.sh script located in the $TOP/bin/db folder.
8. Start the services from the $TOP/bin/go folder.
9. In the Admin UI,
a. Go to Home > My Settings > General Settings. For all the users, set the Locale for Item and Category Data Display to English (United States).
b. Verify that the Full Admin and GDS Supply Editor roles are available in the Role console and assign the appropriate role to the user.
c. Add appropriate values according to your requirements to the following default static selections in the Selection console:
Product_GPC - Selection with all the data present in the GPC Hierarchy.
Product_IC - Selection with all the data present in the Internal Hierarchy.
Product_TMH - Selection with all the data present in the Target Market Hierarchy.
Note:
Do not select the TA as an available option for the Target market as this is a root category for the Target Market Hierarchy and is not the actual Target
market.
Select the leaf categories present under the GPC Hierarchy and Internal Hierarchy while creating the respective selections.
To populate the TPC_IP selection, see step h.
d. Go to Product Manager > Reports > Reports console. Run the following report after selecting the appropriate input parameters:
Update GPC Lookup - Updates GPC_Category_Selection_List_Lookup table
Update IC Lookup - Updates Internal_Category_Selection_List_Lookup table
Update IP Lookup - Updates IP_Category_Selection_List_Lookup table
Update TMH Lookup - Updates Target_Market_Selection_List_Lookup table
To run a report, you must open the report and provide the input parameter (selection name).

Following is the order of execution of reports and respective input parameters:


Update GPC Lookup report and Product_GPC input parameter
Update IC Lookup report and Product_IC input parameter
Update TMH Lookup report and Product_TMH input parameter
e. Verify the following Lookup table updates after report execution:
GPC_Category_Selection_List_Lookup
Internal_Category_Selection_List_Lookup
Target_Market_Selection_List_Lookup
f. Create an item (with information provider details) under the Trading Partner Catalog under IP Hierarchy in any category.
g. Update the static TPC_IP selection through the Selection console by selecting the information provider that you created in the previous step.
i. Run the Update IP Lookup report by providing static TPC_IP selection as an input.

ii. You must verify that the IP_Category_Selection_List_Lookup content is updated after the report execution.

h. Add a post-processing script named TradeItemValidation under the catalog attributes in the GDS Product Catalog.
i. Select the Post Processing and click Add.
ii. Add the following in the Catalog Script Editor and click Save.

//script_execution_mode=java_api="japi://com.ibm.gds.extensions.postprocessing.ValidationsPostProcessing.class"

iii. Select the new script from the list in the Catalog attribute page.
Note: After each execution of the loadGDSDatamodel.sh script, you need to add or select this script.
10. Create the gds.properties file by using the gds.properties.default file that is located in the $TOP/etc/default folder.
11. Create the gdsConfig.properties file by using the gdsConfig.properties.default file that is located in the $TOP/mdmui/dynamic/mdmrest folder.
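For example, both files can be created by copying the shipped defaults (a sketch that assumes the default installation paths):

cp $TOP/etc/default/gds.properties.default $TOP/etc/default/gds.properties
cp $TOP/mdmui/dynamic/mdmrest/gdsConfig.properties.default $TOP/mdmui/dynamic/mdmrest/gdsConfig.properties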

What to do next
Log in to the application and access the GDS feature.

Creating a WebSphere Message Queue .bindings file


To properly configure the IBM Product Master GDS settings, you need to create a .bindings file.
Setting Global Data Synchronization parameters
Ensure that you set the following Global Data Synchronization feature parameters.
Configuring Global Data Synchronization memory parameters for messaging
You need to configure the Global Data Synchronization memory parameters for the messaging module before you can use the Global Data Synchronization
messaging service.
Setting up an AS2 connector
The AS2 (Applicability Statement 2) protocol is used for securely transmitting business documents in the XML, binary, and Electronic Data Interchange (EDI)
formats over the internet. It is frequently used in business-to-business data exchange operations. To ensure correct XML data exchange, the Global Data
Synchronization Network has identified and defined AS2 as the standard for communication between suppliers and data pools, and data pools and retailers for end-
to-end connectivity.
Connecting to a data pool
You need to connect to a data pool to send or receive data.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)



Creating a WebSphere Message Queue .bindings file
To properly configure the IBM® Product Master GDS settings, you need to create a .bindings file.

Before you begin


Ensure that you have a working server-side installation of WebSphere® Message Queue. For more information, see IBM MQ Version 9.0 documentation.
Ensure that a queue manager is available for use and is started.

Procedure
You can use either IBM WebSphere MQ Explorer or MQSC commands to create and start a queue manager. Use the MQSC commands as follows:

1. Log in to shell by using mqm user.


2. Create a queue manager. Type the crtmqm -q <QUEUE_MGR_NAME> command, for example:
crtmqm -q bcg.queue.manager
3. Start the queue manager. Type strmqm.

Creating a .bindings file for Windows


To configure the IBM Product Master GDS WebSphere MQ settings, you need to create a .bindings file.
Creating a .bindings file for UNIX
To properly configure the IBM Product Master GDS WebSphere MQ settings, you need to create a .bindings file.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Creating a .bindings file for Windows


To configure the IBM® Product Master GDS WebSphere® MQ settings, you need to create a .bindings file.

About this task


The following steps create a .bindings file on the Windows operating system.

Procedure
1. Set up the WebSphere Message Queue class path.
The default WebSphere Message Queue installation directory on Windows is C:\Program Files\IBM\WebSphere MQ; this directory is referred to as MQ_INSTALL_DIR. You must update the system class path variable (CLASSPATH) with the following JAR files (see the sketch after this list):
<MQ_INSTALL_DIR>\Java\lib\providerutil.jar
<MQ_INSTALL_DIR>\Java\lib\com.ibm.mqjms.jar
<MQ_INSTALL_DIR>\Java\lib\ldap.jar
<MQ_INSTALL_DIR>\Java\lib\jta.jar
<MQ_INSTALL_DIR>\Java\lib\jndi.jar
<MQ_INSTALL_DIR>\Java\lib\jms.jar
<MQ_INSTALL_DIR>\Java\lib\connector.jar
<MQ_INSTALL_DIR>\Java\lib\fscontext.jar
<MQ_INSTALL_DIR>\Java\lib\com.ibm.mq.jar
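For example, a minimal way to append these JAR files to the class path for the current Command Prompt session (a sketch that assumes the default installation directory; set the system CLASSPATH variable in the Windows system settings to make the change permanent):

set MQ_INSTALL_DIR=C:\Program Files\IBM\WebSphere MQ
set CLASSPATH=%CLASSPATH%;%MQ_INSTALL_DIR%\Java\lib\providerutil.jar;%MQ_INSTALL_DIR%\Java\lib\com.ibm.mqjms.jar;%MQ_INSTALL_DIR%\Java\lib\ldap.jar;%MQ_INSTALL_DIR%\Java\lib\jta.jar;%MQ_INSTALL_DIR%\Java\lib\jndi.jar;%MQ_INSTALL_DIR%\Java\lib\jms.jar;%MQ_INSTALL_DIR%\Java\lib\connector.jar;%MQ_INSTALL_DIR%\Java\lib\fscontext.jar;%MQ_INSTALL_DIR%\Java\lib\com.ibm.mq.jar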
2. Create a directory on the WebSphere MQ server and call it: C:\JNDI-Directory.
Note: If this directory exists, delete any earlier versions of the .bindings files from it.
3. Go to the <MQ_INSTALL_DIR>\Java\bin directory and add the following changes to the JMSAdmin.config file.
Ensure that the values for the following parameters are:
INITIAL_CONTEXT_FACTORY=com.sun.jndi.fscontext.RefFSContextFactory
PROVIDER_URL=file:/C:/JNDI-Directory
Note: If these parameters are not present, include these parameters with the previous values.
4. Open a command prompt and change the directory to <MQ_INSTALL_DIR>\Java\bin. Run the JMSAdmin.bat file. On successful initialization of this batch file, you see an InitCtx> prompt. You are now ready to run the MQSC commands.
If an exception occurs, check whether the class path system variable is properly set for the JAR files that are listed in step 1.
5. Run the following commands in the sequence:

InitCtx> def q(INBOUND_QUEUE_NAME)


InitCtx> def q(OUTBOUND_QUEUE_NAME)
InitCtx> def qcf(QUEUE_CONNECTION_FACTORY_NAME) transport(CLIENT) channel(java.channel) host(WMQ_SERVER_IP) port(WMQ_SERVER_DEFINED_PORT) qmgr(QUEUE_MANAGER_NAME)
InitCtx> end

For example:

InitCtx> def q(XML_IN)



InitCtx> def q(XML_OUT)
InitCtx> def qcf(ptpQcf) transport(CLIENT) channel(java.channel) host(9.121.222.84) port(1414) qmgr(bcg.queue.manager)
InitCtx> end

Where:

XML_IN
The inbound queue that is used by the GDS messaging service to read from.
XML_OUT
The outbound queue to which GDS posts messages.
ptpQcf
The queue connection factory name as defined in $TOP/bin/conf/env_settings.ini as a value for the queue_connection_factory parameter under the [gds]
section.
9.121.222.84
The WebSphere MQ server IP address.
1414
The listen port that is defined on the WebSphere MQ server.
bcg.queue.manager
The queue manager name under which the queues are defined.

Note: If you receive a message similar to unable to bind object, check whether the JNDI-Directory directory exists. Also, if an earlier version of the .bindings file is already in the folder, delete it and redo steps 4 and 5.
6. Copy the generated .bindings file from the JNDI-Directory directory and paste the file to the required destination at Product Master under the $TOP/etc/default/
folder.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Creating a .bindings file for UNIX


To properly configure the IBM® Product Master GDS WebSphere® MQ settings, you need to create a .bindings file.

About this task


The following steps create a .bindings file on the UNIX operating system.

Procedure
1. Set up the WebSphere Message Queue class path.
The default WebSphere Message Queue installation directory on UNIX is /opt/mqm; this directory is referred to as MQ_INSTALL_DIR. You must update the system class path variable (CLASSPATH) with the following JAR files (see the sketch after this list):
<MQ_INSTALL_DIR>/java/lib/providerutil.jar
<MQ_INSTALL_DIR>/java/lib/com.ibm.mqjms.jar
<MQ_INSTALL_DIR>/java/lib/ldap.jar
<MQ_INSTALL_DIR>/java/lib/jta.jar
<MQ_INSTALL_DIR>/java/lib/jndi.jar
<MQ_INSTALL_DIR>/java/lib/jms.jar
<MQ_INSTALL_DIR>/java/lib/connector.jar
<MQ_INSTALL_DIR>/java/lib/fscontext.jar
<MQ_INSTALL_DIR>/java/lib/com.ibm.mq.jar
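For example, a minimal way to append these JAR files to the class path in a Bourne-compatible shell (a sketch that assumes the default installation directory):

MQ_INSTALL_DIR=/opt/mqm
for jar in providerutil com.ibm.mqjms ldap jta jndi jms connector fscontext com.ibm.mq; do
  CLASSPATH=$CLASSPATH:$MQ_INSTALL_DIR/java/lib/$jar.jar
done
export CLASSPATH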
2. Create /opt/mqm/JNDI-Directory directory on the WebSphere MQ server.
Note: If this directory exists, delete any earlier versions of the .bindings files in it.
3. Go to the <MQ_INSTALL_DIR>/java/bin directory and add the following changes to the JMSAdmin.config file.
Ensure that the values for the following parameters are:
INITIAL_CONTEXT_FACTORY=com.sun.jndi.fscontext.RefFSContextFactory
PROVIDER_URL=/opt/mqm/JNDI-Directory
Note: If these parameters are not present, include the parameters with mentioned values.
4. Open a command line and change the directory to <MQ_INSTALL_DIR>/java/bin. Run the JMSAdmin script. On successful initialization of this script, you see an InitCtx> prompt. You can now enter the MQSC commands.
If an exception occurs, then check whether the class path system variable is properly set for the JAR files that are listed in step 1.
5. Run the following commands in the sequence:

InitCtx> def q(INBOUND_QUEUE_NAME)


InitCtx> def q(OUTBOUND_QUEUE_NAME)
InitCtx> def qcf(QUEUE_CONNECTION_FACTORY_NAME) transport(CLIENT) channel(java.channel) host(WMQ_SERVER_IP) port(WMQ_SERVER_DEFINED_PORT) qmgr(QUEUE_MANAGER_NAME)
InitCtx> end

For example:

InitCtx> def q(XML_IN)


InitCtx> def q(XML_OUT)
InitCtx> def qcf(ptpQcf) transport(CLIENT) channel(java.channel) host(9.121.222.84) port(1414) qmgr(bcg.queue.manager)
InitCtx> end



6. Copy the generated .bindings file from the JNDI-Directory directory and paste the file to the required destination at Product Master under the $TOP/etc/default and
$TOP/etc/appsvr_<SERVER_NAME> directories.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Setting Global Data Synchronization parameters


Ensure that you set the following Global Data Synchronization feature parameters.

Procedure
1. Open the env_settings.ini file, and go to the [gds] section.
2. Set the following parameters:

enabled
Set this parameter to either yes or no. If you want to enable GDS, set it to yes. The default is no.
company_code
Set this parameter to the company code for which you want to load the GDS data model.
gds_app_type
Set this parameter to either Demand or Supply value.
ACTIVE_DATA_POOL_ID
Set this parameter to either Transora or WWREV6.
inbound_queue_name
Enter the name of the WebSphere® MQ inbound queue from where the listener of the Global Data Synchronization feature of IBM® Product Master reads XML
messages.
outbound_queue_name
Enter the name of the WebSphere MQ outbound queue where the Global Data Synchronization feature of IBM Product Master posts or pushes XML
messages.
queue_connection_factory
Enter the name of the WebSphere MQ queue connection factory.
datapool_gln
Enter the GLN of the data pool with which the Global Data Synchronization feature of Product Master is supposed to exchange XML messages.
self_gln
Enter the GLN of your organization. You need to populate this parameter only if your organization is a retailer type of organization (gds_app_type=Demand).

3. Because many functions in Global Data Synchronization have dependencies on the services that are provided by WebSphere MQ, set the following parameters in
the [mq] section:

enabled
To enable the support for functions that have dependencies on WebSphere MQ, set this parameter to yes.
home
The installation directory of the WebSphere MQ client.
mq_security
Set this parameter to either true or false. If you want to enable WebSphere MQ security, set the value to true. The default is false.
username
Enter the user name that has access to WebSphere MQ. If WebSphere MQ security is set to true, you must provide a user.
password
Enter the password. If WebSphere MQ security is set to true and encrypt_password is set to false, you need to provide the password.
encrypt_password
If you want the password to be encrypted, set the value to yes. Remove the value from the password parameter. The default is no.

4. Save and close the env_settings.ini file.


5. If you are using WebSphere Application Server as your application server, you must complete these configuration steps for the WebSphere MQ .jar files.
a. Change directories to the <install_dir>/jars directory.
b. Create the following three soft links, replacing WAS_HOME with the home directory for WebSphere Application Server and DEFAULT_APPSVR with the name
of the default application server:
ln -s <WAS_HOME>/profiles/<DEFAULT_APPSVR>/installedConnectors/wmq.jmsra.rar/com.ibm.mq.jar com.ibm.mq.jar
ln -s <WAS_HOME>/profiles/<DEFAULT_APPSVR>/installedConnectors/wmq.jmsra.rar/com.ibm.mqjms.jar com.ibm.mqjms.jar
ln -s <WAS_HOME>/profiles/<DEFAULT_APPSVR>/installedConnectors/wmq.jmsra.rar/dhbcore.jar dhbcore.jar
c. Run the bin/configureEnv.sh script to update the class path.

Example
Examples for the different sections of the env_settings.ini file:

services section

[services]
admin=admin
eventprocessor=eventprocessor
queuemanager=queuemanager
scheduler=scheduler
workflowengine=workflowengine

mq section



[mq]
enabled=yes
home=/opt/mqm

mq_security=false
username=root
password=pwd
encrypt_password=no
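gds section (a sketch; the queue names, connection factory, and data pool GLN reuse illustrative values from the examples in this section, and company_code is a placeholder)

[gds]
enabled=yes
company_code=<company code>
gds_app_type=Supply
ACTIVE_DATA_POOL_ID=Transora
inbound_queue_name=XML_IN
outbound_queue_name=XML_OUT
queue_connection_factory=ptpQcf
datapool_gln=8380160030003
self_gln=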

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Configuring Global Data Synchronization memory parameters for messaging


You need to configure the Global Data Synchronization memory parameters for the messaging module before you can use the Global Data Synchronization messaging
service.

Procedure
1. Open the <Install_Dir>/bin/gdsmsg.sh file.
2. Set the values for the initial heap size and the maximum heap size on the CCD_JMS_JVM_DEBUG_OPTS parameter.
The default values are -Xmx1024m -Xms512m.
Note: You must not set the maximum heap size to more than the physical memory that is available on your computer.
3. Save and close the <Install_Dir>/bin/gdsmsg.sh file.

Example
This is an example of setting the CCD_JMS_JVM_DEBUG_OPTS parameter for an initial heap size of 512 MB and a maximum heap size of 1024 MB.
CCD_JMS_JVM_DEBUG_OPTS="-Xmx1024m -Xms512m"

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Setting up an AS2 connector


The AS2 (Applicability Statement 2) protocol is used for securely transmitting business documents in the XML, binary, and Electronic Data Interchange (EDI) formats over
the internet. It is frequently used in business-to-business data exchange operations. To ensure correct XML data exchange, the Global Data Synchronization Network has
identified and defined AS2 as the standard for communication between suppliers and data pools, and data pools and retailers for end-to-end connectivity.

The AS2 protocol is based on the HTTP and SMIME protocols. It allows messages to be encrypted and signed. It also allows the receiver of a message to generate a
confirmation message that is sent to the sender of the message. The Global Data Synchronization feature of IBM® Product Master requires a distinct AS2 connector
software application that uses the AS2 protocol to communicate with a data pool. The business documents that are exchanged between Product Master and the data pool
are in the XML format.

One example of an AS2 connector software that you can use is IBM WebSphere® Partner Gateway.

You must install and configure an AS2 connector to enable IBM Product Master to communicate with a data pool. The AS2 protocol is used for communication between Product Master and a data pool. This protocol provides fast and secure transmission of business data.

Procedure

Install and configure the AS2 connector. For more information, refer to your AS2 connector documentation.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Connecting to a data pool


You need to connect to a data pool to send or receive data.

Before you begin


Before you can connect to a data pool, you must register with the data pool. Registration with a data pool requires you to enter into a contract and to pay the subscription
fee. On successful registration, you get the URL that the data pool exposes to customers, and one or more global location numbers (GLNs).



Procedure
1. Define a participant connection for sending information from IBM® Product Master to the data pool.
You need to enter the URL that the data pool exposes, the global location numbers that you received from the data pool on registering, and the protocol in which
you plan to send your product documents.
2. Define a participant connection for receiving information from the data pool.
You need to enter the protocol in which you plan to receive product documents.
3. Activate the connections.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Installing (accelerated deployment)


During an accelerated deployment, IBM® Product Master is installed from Docker images, which ensures a simple and consistent installation experience. Use accelerated
deployment when you need to consistently and quickly deploy a standard installation image.

Agile DevOps methodologies increase efficiency, and provide flexible and fast deployment options.

IBM Product Master Docker images are built on the following base images and run on all Docker-supported hosts.

Linux® Ubuntu base Docker image
Red Hat® Enterprise Linux (RHEL) 7 Universal Base Image (UBI) base Docker image
Red Hat Enterprise Linux (RHEL) 8 Universal Base Image (UBI) base Docker image ( and later)

Product Master supports containerized deployment on the following:

Docker
Red Hat OpenShift® ( and later)
Kubernetes ( and later)

Docker configuration
In the Docker configuration, you use accelerated deployment to consistently and quickly deploy a standard installation image. The following image depicts a Docker configuration of the product:

The following types of interactions are available:

API
QUEUE
DATABASE
DOCSTORE

Installing the product by using Operators


( and later) Operator based containerized version of the Product Master images enables accelerated deployment. Product Master container
environments are consistent, predictable, and replicable. They enable complete control over the environment in which the services run.
Installing the product by using the Docker images
Using Docker images, you can set up the Product Master environment. You can also use the Docker images to set up the Elasticsearch, Hazelcast, and MongoDB services.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)



Installing the product by using Operators

( and later) Operator based containerized version of the Product Master images enables accelerated deployment. Product Master container environments are
consistent, predictable, and replicable. They enable complete control over the environment in which the services run.

In addition to product images, third-party images for Elasticsearch, Hazelcast, MongoDB, and IBM® Message Queue are also installed as part of the deployment.

Accelerated deployment of the Product Master ensures a simple and consistent installation experience. Use accelerated deployment when you need to deploy a standard
installation image consistently and quickly.

Product Master container images are operator-based and built on the Red Hat® Enterprise Linux® (RHEL) 8 Universal Base Image (UBI) base Docker image, which ensures that they are compatible with all environments that support Operator Lifecycle Manager (OLM) and Kubernetes (K8s). Moreover, the images are lightweight because the WebSphere® Application Server has been replaced with the lighter IBM WebSphere Liberty application server.

Further improvements completely remove the dependency on the database clients from the container images. Support for both IBM Db2® and Oracle is available out of the box, without the overhead of creating custom images with the database clients.

Supported platforms
Product Master supports OLM deployment on the following platforms:

Kubernetes
Red Hat OpenShift® Container Platform

System Requirements
Before proceeding with deployment of the IBM Product Master, ensure that you meet the software and hardware requirements. For more information, see Containers.

The following software is installed on the Docker containers as part of the Product Master image deployment:

Red Hat Enterprise Linux (RHEL) 8 Universal Base Image (UBI) base Docker image
IBM WebSphere Liberty Kernel Version 20.0.0.12 and required features
Perl 5.30.1
Java™ 8
Python 3.9.7 (Machine learning image)
Unix utilities

IBM Product Master versions


Release | Operator version | Replaces | Operand version | Default channel
IBM Product Master 12.0 Fix Pack 4 | 1.0.1 | | 12.0.4 | v1.0
IBM Product Master 12.0 Fix Pack 5 | 1.0.2 | 1.0.1 | 12.0.5 | v1.0
IBM Product Master 12.0 Fix Pack 4 Interim Fix 1 | 1.0.3 | 1.0.1 | 12.0.4.1 | v1.0
IBM Product Master 12.0 Fix Pack 5 Interim Fix 1 | 1.0.4 | 1.0.2 | 12.0.5.1 | v1.0
IBM Product Master 12.0 Fix Pack 4 Interim Fix 2 | 1.0.5 | 1.0.3 | 12.0.4.2 | v1.0
IBM Product Master 12.0 Fix Pack 6 | 1.0.6 | 1.0.4 | 12.0.6 | v1.0
IBM Product Master 12.0 Fix Pack 7 | 1.0.7 | 1.0.6 | 12.0.7 | v1.0
IBM Product Master 12.0 Fix Pack 7 Interim Fix 1 | 1.0.8 | 1.0.7 | 12.0.7.1 | v1.0
IBM Product Master 12.0 Fix Pack 8 | 1.0.9 | 1.0.8 | 12.0.8 | v1.0

Log files
You can check logs inside the Product Master containers at the following locations.

Table 1. Log file locations and later

Service | Log location
ipm-admin | /logs/<service>_<hostname>, for example, /logs/appsvr_PRODUCTMASTER-ADMIN-5B64BBFB89-HMMDS
ipm-personaui | /logs/appsvr_PRODUCTMASTER-PERSONAUI-74CFCF9F6D-R9Z22
ipm-fts-indexer | /logs/indexer/productmaster-fts-indexer-0
ipm-fts-pim | /logs/pim-collector/productmaster-fts-pim-0
ipm-gds | /logs/appsvr_PRODUCTMASTER-GDS-0
ipm-ml | /logs/machinelearning
ipm-restapi | /logs/appsvr_PRODUCTMASTER-RESTAPI-0
ipm-sch | /logs/scheduler_PRODUCTMASTER-SCH-0
ipm-wfl | /logs/workflowengine_PRODUCTMASTER-WFL-0

Table 2. Log file locations and earlier

Service | Log location | Example
ipm-admin | /logs/<service>_<hostname> | /logs/appsvr_PRODUCTMASTER-ADMIN-5B64BBFB89-HMMDS
ipm-fts-indexer | /opt/MDM/mdmui/logs/indexer/<hostname> | /opt/MDM/mdmui/logs/indexer/productmaster-fts-indexer-84559c94c7-j2sjt
ipm-fts-pim | /opt/MDM/mdmui/logs/pim-collector/<hostname> | /opt/MDM/mdmui/logs/pim-collector/productmaster-fts-pim-6ccb8cdffb-bt984
ipm-gds | /opt/mdmlogs/<service>_<hostname> | /opt/mdmlogs/appsvr_PRODUCTMASTER-GDS-6695589F64-NBQQM
ipm-ml | /opt/MDM/mdmui/logs/machinelearning | /opt/MDM/mdmui/logs/machinelearning
ipm-personaui | /logs/<service>_<hostname> | /logs/appsvr_PRODUCTMASTER-PERSONAUI-74CFCF9F6D-R9Z22
ipm-restapi | /logs/<service>_<hostname> | /logs/appsvr_PRODUCTMASTER-RESTAPI-5BCDD57DD9-XGSWN
ipm-restapi | /opt/MDM/mdmui/logs/<service>_<hostname> | /opt/MDM/mdmui/logs/mdmrest/appsvr_PRODUCTMASTER-RESTAPI-5BCDD57DD9-XGSWN
ipm-sch | /opt/mdmlogs/<service>_<hostname> | /opt/mdmlogs/scheduler_PRODUCTMASTER-SCH-0
ipm-wfl | /opt/mdmlogs/<service>_<hostname> | /opt/mdmlogs/workflowengine_PRODUCTMASTER-WFL-0

Downloading the Product Master Docker images (Operator)


To acquire the Product Master Docker assets, you must download them from the IBM Passport Advantage®. You can use a script to acquire the Product
Master Docker images from the IBM Support Fix Central.
Configuring Product Master deployment YAML (Fix Pack 8)
( and later) Before deployment you need to configure Product Master YAML file.
Creating or migrating database schema
If you are installing the product for the first time, you need to run create schema.
Deploying on the Kubernetes cluster
IBM Product Master services are deployed on the Kubernetes cluster by using OLM.
Installing OpenSearch on an OpenShift cluster
Using Helm, you can easily install and manage OpenSearch in an OpenShift cluster. Helm is the best way to find, share, and use software built for OpenShift.
Deploying on the Red Hat OpenShift cluster
IBM Product Master services are deployed on the OpenShift cluster by using OLM. As part of deployment, product images, operator images, and third-
party images are downloaded.
Customizing the Docker container (Fix Pack 8 and later)
Product Master images can be customized according to your requirement.
Enabling Horizontal Pod Autoscaler (HPA)
( and later) The HPA changes the resource allocation by increasing or decreasing the number of Pods in response to CPU or memory consumption.
Configuring SAML SSO (Accelerated deployment)
IBM Product Master supports SAML 2.0 web single sign-on with Just In Time (JIT) provisioning for the Admin UI and Persona-based UI.
Migrating Product Master Kubernetes or Red Hat OpenShift deployment (Fix Pack 7 and later)
( and later) This topic explains the procedure to migrate Kubernetes or Red Hat OpenShift cluster.

Related concepts
Troubleshooting the operator issues
Uninstalling the product

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Downloading the Product Master Docker images (Operator)

To acquire the Product Master Docker assets, you must download them from the IBM® Passport Advantage®. You can use a script to acquire the Product
Master Docker images from the IBM Support Fix Central.

Prerequisites
Following are the prerequisites for downloading the Docker images:

Verify that you have an IBM ID. If you do not have an IBM ID, contact IBM Support.
Your IBM ID has permission to access and download Product Master artifacts.
Your network has an open connection to the IBM Bluemix® registry.

Downloading
Proceed as follows to download the Docker images:

1. On the host computer, create a Docker working directory for all your Docker activity (downloads, installation, running, and monitoring).
2. Give the directory a meaningful name, such as /mdm.
3. Open a browser and browse to the IBM Support Fix Central site.
4. Locate and download the Product Master artifacts. For more information, see the Download IBM Product Master document to determine the part numbers that you should download.
5. Unpack each of the parts into the Docker working directory that you created on your host machine (/mdm).
6. Open IPM_12.0.x_DOCKER.zip and extract the Download_IPM_Docker script.
7. If you want to push the Product Master images to a local repository or customize the images, proceed as follows.
a. Run Download_IPM_Docker script:

$ ./Download_IPM_Docker.sh.x -version=12.0.x



The script prompts you to accept the license. After you accept the license, the script logs in to the IBM Docker registry and starts downloading the latest
version of Docker images.

b. Verify that all images are downloaded on the local VM by using the following command.

$ docker images

The following images are downloaded for Product Master.

registry.ng.bluemix.net/product_master/ipm-admin-ubi8:12.0.3
registry.ng.bluemix.net/product_master/ipm-personaui-ubi8:12.0.3
registry.ng.bluemix.net/product_master/ipm-restapi-ubi8:12.0.3
registry.ng.bluemix.net/product_master/ipm-sch-ubi8:12.0.3
registry.ng.bluemix.net/product_master/ipm-wfl-ubi8:12.0.3
registry.ng.bluemix.net/product_master/ipm-fts-pim-ubi8:12.0.3
registry.ng.bluemix.net/product_master/ipm-fts-indexer-ubi8:12.0.3
registry.ng.bluemix.net/product_master/ipm-ml-ubi8:12.0.3
registry.ng.bluemix.net/product_master/ipm-gds-ubi8:12.0.3
registry.ng.bluemix.net/product_master/ipm-mongodb:4.0.22
registry.ng.bluemix.net/product_master/ipm-elasticsearch:7.7.0
registry.ng.bluemix.net/product_master/ipm-hazelcast:4.1.0
registry.ng.bluemix.net/product_master/ipm-mq:9.2.0.0-r2

registry.ng.bluemix.net/product_master/ipm-admin-ubi8:12.0.x
registry.ng.bluemix.net/product_master/ipm-personaui-ubi8:12.0.x
registry.ng.bluemix.net/product_master/ipm-restapi-ubi8:12.0.x
registry.ng.bluemix.net/product_master/ipm-sch-ubi8:12.0.x
registry.ng.bluemix.net/product_master/ipm-wfl-ubi8:12.0.x
registry.ng.bluemix.net/product_master/ipm-fts-pim-ubi8:12.0.x
registry.ng.bluemix.net/product_master/ipm-fts-indexer-ubi8:12.0.x
registry.ng.bluemix.net/product_master/ipm-ml-ubi8:12.0.x
registry.ng.bluemix.net/product_master/ipm-gds-ubi8:12.0.x
registry.ng.bluemix.net/product_master/ipm-mongodb:4.0.22
registry.ng.bluemix.net/product_master/ipm-elasticsearch:7.13.0
registry.ng.bluemix.net/product_master/ipm-hazelcast:4.1.1
registry.ng.bluemix.net/product_master/ipm-mq:9.2.0.0-r2

The following table lists the available Docker images along with their purpose and contents.

Docker image | To deploy | Contains
ipm-admin | Admin UI | WebSphere® Liberty appserver; Admin, RMI, Event processor, and Queue manager services
ipm-personaui | Persona-based UI | WebSphere Liberty appserver
ipm-restapi | REST APIs | WebSphere Liberty appserver
ipm-wfl | Workflow | Workflow service, Admin and RMI services
ipm-sch | Scheduler | Scheduler service, Admin and RMI services
ipm-fts-pim | Pim-collector | Pim collector and dependencies
ipm-fts-indexer | Indexer | Indexer and dependencies
ipm-ml | Machine Learning | Machine learning services, Python, and dependencies
ipm-gds | Supply Side GDS | GDS queue services
ipm-mongodb | Mongo DB | Mongo DB 4.0.22
ipm-elasticsearch | Elastic Search | Elastic Search 7.7.0 or 7.13.0
ipm-hazelcast | Hazelcast | Hazelcast 4.1.0 or 4.1.1
ipm-mq | IBM MQ | IBM MQ 9.2.0.0-r2


The following files are present in the compressed file that you download from the IBM Support Fix Central site.

File | Purpose
registry_secret.yaml | Secret for downloading images from the Bluemix repository.
app_secrets.yaml | Product Master application secrets required for services.
catalog_source.yaml | Operator catalog source spec.
operator_group.yaml | Operator group spec.
subscription.yaml | Operator subscription spec.
volumes.yaml | Persistent volumes. The volumes are not auto provisioned on the local clusters.
ipm_12.0.x_cr.yaml | Custom resource definition (CRD) for Product Master deployment.
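As a rough sketch, these files are typically applied in the following order with kubectl; the namespace name here is an assumption, and the authoritative procedure is in the deployment topics that follow:

kubectl create namespace ipm
kubectl apply -n ipm -f registry_secret.yaml
kubectl apply -n ipm -f app_secrets.yaml
kubectl apply -n ipm -f volumes.yaml
kubectl apply -n ipm -f catalog_source.yaml
kubectl apply -n ipm -f operator_group.yaml
kubectl apply -n ipm -f subscription.yaml
kubectl apply -n ipm -f ipm_12.0.x_cr.yaml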

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)



Configuring Product Master deployment YAML (Fix Pack 8)

( and later) Before deployment, you need to configure the Product Master YAML file.

Before you begin


Ensure that the system requirements for Containers are met and the required software is installed.
With the IBM® Product Master 12.0 Fix Pack 8 release, Elasticsearch is replaced with OpenSearch. You need to install and configure OpenSearch by using your own SSL certificate. Install OpenSearch from the following location:
Installing OpenSearch - Helm

File storage is mandatory for the 'appdata' persistent volume. File storage is recommended for all the persistent volumes. If you use file storage, 'access_modes' should be set to ReadWriteMany.
All persistent volume claims are mandatory for a successful deployment.
Before performing un-deployment, it is recommended that you take a complete backup of all the persistent volumes.

Procedure
1. Review and update the secret values in the app_secrets.yaml file.
a. Generate a self-signed certificate, or procure a valid certificate that is issued by a certificate authority, to apply to the Ingress routes, which in turn are applied to the application URLs. Convert the certificate and key to Base64 format by using the following commands.

cat crt.cert | base64
cat cert.key | base64

Convert the output to a single line for the crt.cert and cert.key files.
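Alternatively, if your base64 command supports the -w option (GNU coreutils does; this is an assumption about your environment), you can produce single-line output directly:

base64 -w 0 crt.cert
base64 -w 0 cert.key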
2. Update the values in the app_secrets.yaml file in the following format.

apiVersion: v1
kind: Secret
metadata:
name: app-secret
namespace: <namespace>
type: Opaque
stringData:
#Database details
db_type: "<db2 / oracle>"
db_name: "<DB name>"
db_host: "<IP or hostname of DB>"
db_user: "<DB username>"
db_pass: "<DB password. For FP5 and earlier releases provide encrypted password.>"
db_port: "<DB server port>"
#SSL details
db_ssl_crt: "<database plain text ssl certificate>"
#OpenSearch details for Free Text Search
opensearch_host: "https://<OpenSearch hostname>"
opensearch_user: "<OpenSearch username>"
opensearch_pass: "<OpenSearch password>"
opensearch_port: "<OpenSearch port>"
opensearch_ssl_crt: "<OpenSearch plain text ssl certificate>"
#IBM Watson Knowledge Catalog details
cpd_host_url: "<CPD host URL>"
cpd_user_name: "<CPD User name>"
wkc_auth_api_key: "<WKC API Key>"
wkc_catalog_name: "<WKC Catalog name>"
#Simple Mail Transfer Protocol (SMTP) details
smtp_address: "<SMTP server hostname>"
from_address: "<From email address>"
smtp_port: "<SMTP server port>"
smtp_user: "<SMTP username or API key>"
smtp_pass: "<SMTP password>"
smtp_additional_props: "<SMTP additional properties>"
#Security Assertion Markup Language (SAML) Single sign-on (SSO) details
sso_company: "<company code>"
sso_config_adminui: "<AdminUI SAML WebSSO configuration for IBM WebSphere® Liberty, in the <samlWebSso20>..</samlWebSso20> format>"
sso_config_personaui: "<PersonaUI SAML WebSSO configuration for IBM WebSphere Liberty, in the <samlWebSso20>..</samlWebSso20> format>"
sso_idp_metadata: "<Identity provider metadata file content>"
#Magento connector details
magento_user: "<Magento service username>"
magento_pass: "<Magento service password>"
magento_host: "<Magento IP or hostname>"
magento_port: "<Magento connector port>"
magento_ipm_user: "<application username>"
magento_ipm_pass: "<application password>"
magento_ipm_company: "<application Magento company name>"

apiVersion: v1
kind: Secret
metadata:
name: tls-secret
namespace: <namespace>
type: kubernetes.io/tls
data:
#Specify the Base64 converted domain name certificate and key in a single line without any space



tls.crt: ""
tls.key: ""

3. The following is a list of sample deployment files for your reference.

catalog_source.yaml

apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
name: ibm-productmaster-catalog
namespace: <namespace>
spec:
displayName: IBM Product Master
publisher: IBM
sourceType: grpc
image: registry.ng.bluemix.net/product_master/ipm-operator-catalog@<DIGEST_VALUE>

operator_group.yaml

apiVersion: operators.coreos.com/v1alpha2
kind: OperatorGroup
metadata:
name: ibm-productmaster-catalog-group
namespace: <namespace>
spec:
targetNamespaces:
- <namespace>

subscription.yaml

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
name: ibm-productmaster-catalog-subscription
namespace: <namespace>
spec:
channel: v1.0
name: productmaster-operator
installPlanApproval: Automatic
source: ibm-productmaster-catalog
sourceNamespace: <namespace>

volumes.yaml

kind: PersistentVolume
apiVersion: v1
metadata:
name: appdata-volume
namespace: default
labels:
svc: appdata-volume
spec:
capacity:
storage: 10Gi
accessModes:
- ReadWriteMany
storageClassName: standard
hostPath:
path: /mnt/ipm12/appdata
---
kind: PersistentVolume
apiVersion: v1
metadata:
name: admin-log-volume
namespace: default
labels:
svc: admin-log-volume
spec:
capacity:
storage: 2Gi
accessModes:
- ReadWriteOnce
storageClassName: standard
hostPath:
path: /mnt/ipm12/admin
---
kind: PersistentVolume
apiVersion: v1
metadata:
name: ftsind-log-volume
namespace: default
labels:
svc: ftsind-log-volume
spec:
capacity:
storage: 2Gi
accessModes:
- ReadWriteOnce
storageClassName: standard
hostPath:
path: /mnt/ipm12/ftsindlog



---
kind: PersistentVolume
apiVersion: v1
metadata:
name: ftspim-log-volume
namespace: default
labels:
svc: ftspim-log-volume
spec:
capacity:
storage: 2Gi
accessModes:
- ReadWriteOnce
storageClassName: standard
hostPath:
path: /mnt/ipm12/ftspimlog
---
kind: PersistentVolume
apiVersion: v1
metadata:
name: gds-log-volume
namespace: default
labels:
svc: gds-log-volume
spec:
capacity:
storage: 2Gi
accessModes:
- ReadWriteOnce
storageClassName: standard
hostPath:
path: /mnt/ipm12/gds
---
kind: PersistentVolume
apiVersion: v1
metadata:
name: ml-log-volume
namespace: default
labels:
svc: ml-log-volume
spec:
capacity:
storage: 2Gi
accessModes:
- ReadWriteOnce
storageClassName: standard
hostPath:
path: /mnt/ipm12/ml
---
kind: PersistentVolume
apiVersion: v1
metadata:
name: mongodb-data-volume
namespace: default
labels:
svc: mongodb-data-volume
spec:
capacity:
storage: 1Gi
accessModes:
- ReadWriteOnce
storageClassName: standard
hostPath:
path: /mnt/ipm12/mongodb
---
kind: PersistentVolume
apiVersion: v1
metadata:
name: mongodb-log-volume
namespace: default
labels:
svc: mongodb-log-volume
spec:
capacity:
storage: 1Gi
accessModes:
- ReadWriteOnce
storageClassName: standard
hostPath:
path: /mnt/ipm12/mongodblog
---
kind: PersistentVolume
apiVersion: v1
metadata:
name: mq-data-volume
namespace: default
labels:
svc: mq-data-volume
spec:
capacity:
storage: 1Gi
accessModes:
- ReadWriteOnce
storageClassName: standard
hostPath:



path: /mnt/ipm12/mq-data
---
kind: PersistentVolume
apiVersion: v1
metadata:
name: personaui-log-volume
namespace: default
labels:
svc: personaui-log-volume
spec:
capacity:
storage: 2Gi
accessModes:
- ReadWriteOnce
storageClassName: standard
hostPath:
path: /mnt/ipm12/personaui
---
kind: PersistentVolume
apiVersion: v1
metadata:
name: restapi-log-volume
namespace: default
labels:
svc: restapi-log-volume
spec:
capacity:
storage: 5Gi
accessModes:
- ReadWriteOnce
storageClassName: standard
hostPath:
path: /mnt/ipm12/restapi
---
kind: PersistentVolume
apiVersion: v1
metadata:
name: sch-log-volume
namespace: default
labels:
svc: sch-log-volume
spec:
capacity:
storage: 2Gi
accessModes:
- ReadWriteOnce
storageClassName: standard
hostPath:
path: /mnt/ipm12/sch
---
kind: PersistentVolume
apiVersion: v1
metadata:
name: wfl-log-volume
namespace: default
labels:
svc: wfl-log-volume
spec:
capacity:
storage: 2Gi
accessModes:
- ReadWriteOnce
storageClassName: standard
hostPath:
path: /mnt/ipm12/wfl
---
kind: PersistentVolume
apiVersion: v1
metadata:
name: magento-connector-log-volume
namespace: default
labels:
svc: magento-connector-log-volume
spec:
capacity:
storage: 2Gi
accessModes:
- ReadWriteOnce
storageClassName: standard
hostPath:
path: /mnt/ipm12/magento-connector

You can specify any value for the path: /mnt/ipm12.


ipm_12.0.x_cr.yaml

apiVersion: productmaster.ibm.com/v1
kind: ProductMaster
metadata:
name: productmaster
spec:
license:
accept: true
deployment_platform: 'openshift'
#deployment_platform: 'k8s'



version: 12.0.7

domain_name: ""

enable:
fts: 1
vendor: 1
dam: 1
ml: 1
gds: 1
wkc: 0
sso: 1
mountmgr: 1
image:
registry: registry.ng.bluemix.net/product_master
pullpolicy: Always

productmastersecret: "ipm-registry"
app_secret_name: "app-secret"
random_secret_name: "random-secret"
certs_secret_name: "tls-secret"

enable_volume_selectors: true

volume_details:
admin:
log:
name: admin-log-volume
storage: 2Gi
access_modes: ReadWriteOnce
storage_class_name: standard
ftspim:
log:
name: ftspim-log-volume
storage: 2Gi
access_modes: ReadWriteOnce
storage_class_name: standard
ftsind:
log:
name: ftsind-log-volume
storage: 2Gi
access_modes: ReadWriteOnce
storage_class_name: standard
personaui:
log:
name: personaui-log-volume
storage: 2Gi
access_modes: ReadWriteOnce
storage_class_name: standard
restapi:
log:
name: restapi-log-volume
storage: 2Gi
access_modes: ReadWriteOnce
storage_class_name: standard
wfl:
log:
name: wfl-log-volume
storage: 2Gi
access_modes: ReadWriteOnce
storage_class_name: standard
sch:
log:
name: sch-log-volume
storage: 2Gi
access_modes: ReadWriteOnce
storage_class_name: standard
gds:
log:
name: gds-log-volume
storage: 2Gi
access_modes: ReadWriteOnce
storage_class_name: standard
appdata:
name: appdata-volume
storage: 1Gi
access_modes: ReadWriteMany
storage_class_name: standard
########################################################################
# This is file storage
########################################################################
mq:
data:
name: mq-data-volume
storage: 1Gi
access_modes: ReadWriteOnce
storage_class_name: standard
mongodb:
data:
name: mongodb-data-volume
storage: 1Gi
access_modes: ReadWriteOnce
storage_class_name: standard
log:
name: mongodb-log-volume



storage: 1Gi
access_modes: ReadWriteOnce
storage_class_name: standard
ml:
log:
name: ml-log-volume
storage: 2Gi
access_modes: ReadWriteOnce
storage_class_name: standard

################################

ml_service:
replica_count: 1
image: ipm-ml-ubi8
imagetag: 12.0.7
http:
ext_port: 31005
httpa:
ext_port: 31105
httpb:
ext_port: 31205
httpc:
ext_port: 31305
readiness_probe:
initial_delay_seconds: 30
timeout_seconds: 900
period_seconds: 60
liveness_probe:
initial_delay_seconds: 30
timeout_seconds: 900
period_seconds: 60
resources:
requests:
memory: 2048Mi
cpu: 500m
limits:
memory: 3072Mi
cpu: 600m

admin_service:
replica_count: 1
image: ipm-admin-ubi8
imagetag: 12.0.7
memflag: -Xmx1024m -Xms256m
evmemflag: -Xmx128m -Xms48m
quememflag: -Xmx128m -Xms48m
admmemflag: -Xmx128m -Xms48m
readiness_probe:
initial_delay_seconds: 90
timeout_seconds: 900
period_seconds: 10
liveness_probe:
initial_delay_seconds: 120
timeout_seconds: 900
period_seconds: 60
resources:
requests:
memory: 2048Mi
cpu: 1000m
limits:
memory: 3072Mi
cpu: 1400m

wfl_service:
replica_count: 1
image: ipm-wfl-ubi8
imagetag: 12.0.7
memflag: -Xmx1024m -Xms48m
readiness_probe:
initial_delay_seconds: 30
timeout_seconds: 900
period_seconds: 60
liveness_probe:
initial_delay_seconds: 30
timeout_seconds: 900
period_seconds: 60
resources:
requests:
memory: 1536Mi
cpu: 600m
limits:
memory: 2048Mi
cpu: 800m

sch_service:
replica_count: 1
image: ipm-sch-ubi8
imagetag: 12.0.7
memflag: -Xmx1024m -Xms48m
readiness_probe:
initial_delay_seconds: 30
timeout_seconds: 900
period_seconds: 60
liveness_probe:
initial_delay_seconds: 30



timeout_seconds: 900
period_seconds: 60
resources:
requests:
memory: 1536Mi
cpu: 600m
limits:
memory: 2048Mi
cpu: 800m

gds_service:
replica_count: 1
image: ipm-gds-ubi8
imagetag: 12.0.7
memflag: -Xmx1024m -Xms48m
gds_app_type: Supply
gds_datapool_gln: 8380160030003
gds_self_gln: 0864471000477
company_name: <Company where GDS module loaded>
readiness_probe:
initial_delay_seconds: 30
timeout_seconds: 900
period_seconds: 60
liveness_probe:
initial_delay_seconds: 30
timeout_seconds: 900
period_seconds: 60
resources:
requests:
memory: 1536Mi
cpu: 600m
limits:
memory: 2048Mi
cpu: 800m
magento_service:
replica_count: 1
image: ipm-magento-connector-ubi8
imagetag: 12.0.7
memflag: -Xmx1024m -Xms48m
mg_company: magento3
readiness_probe:
initial_delay_seconds: 30
timeout_seconds: 900
period_seconds: 60
liveness_probe:
initial_delay_seconds: 30
timeout_seconds: 900
period_seconds: 60
resources:
requests:
memory: 1536Mi
cpu: 600m
limits:
memory: 2048Mi
cpu: 800m

ftsind_service:
replica_count: 1
image: ipm-fts-indexer-ubi8
imagetag: 12.0.7
memflag: -Xmx1024m -Xms256m
readiness_probe:
initial_delay_seconds: 30
timeout_seconds: 900
period_seconds: 10
liveness_probe:
initial_delay_seconds: 60
timeout_seconds: 5000
period_seconds: 60
resources:
requests:
memory: 1536Mi
cpu: 500m
limits:
memory: 2048Mi
cpu: 600m

ftspim_service:
replica_count: 1
image: ipm-fts-pim-ubi8
imagetag: 12.0.7
memflag: -Xmx1024m -Xms256m
readiness_probe:
initial_delay_seconds: 30
timeout_seconds: 900
period_seconds: 10
liveness_probe:
initial_delay_seconds: 60
timeout_seconds: 900
period_seconds: 60
resources:
requests:
memory: 1536Mi
cpu: 500m
limits:
memory: 2048Mi



cpu: 600m

personaui_service:
replica_count: 1
image: ipm-personaui-ubi8
imagetag: 12.0.7
maxmem: 1280
readiness_probe:
initial_delay_seconds: 30
timeout_seconds: 900
period_seconds: 60
liveness_probe:
initial_delay_seconds: 30
period_seconds: 60
timeout_seconds: 900
resources:
requests:
memory: 1536Mi
cpu: 1000m
limits:
memory: 2048Mi
cpu: 1400m

restapi_service:
replica_count: 1
image: ipm-restapi-ubi8
imagetag: 12.0.7
maxmem: 1536
readiness_probe:
initial_delay_seconds: 90
timeout_seconds: 900
period_seconds: 10
liveness_probe:
initial_delay_seconds: 120
period_seconds: 60
timeout_seconds: 900
resources:
requests:
memory: 1536Mi
cpu: 1000m
limits:
memory: 2048Mi
cpu: 1400m

hazelcast_service:
replica_count: 1
image: ipm-hazelcast
imagetag: 4.2.4
readiness_probe:
initial_delay_seconds: 10
timeout_seconds: 900
period_seconds: 5
liveness_probe:
initial_delay_seconds: 15
period_seconds: 10
timeout_seconds: 900
resources:
requests:
memory: 1002Mi
cpu: 498m
limits:
memory: 1254Mi
cpu: 612m

ibm_mq_service:
replica_count: 1
image: ipm-mq
imagetag: 9.2.0.0-r2
mq_qmgr_name: QM1
mq_channel_name: DEV.APP.SVRCONN
queue_connection_factory: ptpQcf
inbound_queue_name: DEV.QUEUE.1
outbound_queue_name: DEV.QUEUE.2
readiness_probe:
initial_delay_seconds: 30
timeout_seconds: 5000
period_seconds: 10
port: http
liveness_probe:
initial_delay_seconds: 30
timeout_seconds: 5000
period_seconds: 10
port: http
service: productmaster-mq-service
port: 1414
resources:
requests:
memory: 240Mi
cpu: 80m
limits:
memory: 300Mi
cpu: 100m

mongodb_service:
replica_count: 1
image: ipm-mongodb



imagetag: 4.0.22
privileged: true
memflag: -Xmx2048m -Xms500m
readiness_probe:
initial_delay_seconds: 90
timeout_seconds: 5000
period_seconds: 10
liveness_probe:
initial_delay_seconds: 120
timeout_seconds: 5000
period_seconds: 110
resources:
requests:
memory: 3216Mi
cpu: 488m
limits:
memory: 3500Mi
cpu: 600m

4. Update the parameters in the ipm_12.0.x_cr.yaml file before deployment, as required. For more information, see ipm_12.0.x_cr.yaml file parameters.
Note: Unless otherwise stated, the values are applicable to both IBM Product Master 12.0 Fix Pack 3 and Fix Pack 4 releases.

ipm_12.0.x_cr.yaml file parameters


Defines the ipm_12.0.x_cr.yaml file parameters.
Enabling a feature through CR
You can use Product Master CR file to enable a feature.
Configuring Product Master deployment YAML (Fix Pack 7 and later)
Before deployment you need to configure Product Master YAML file.
Configuring Product Master deployment YAML (Fix Pack 6 and later)
Before deployment you need to configure Product Master YAML file.
Configuring Product Master deployment YAML (Fix Pack 4 and later)
Before deployment you need to configure Product Master YAML file.
Configuring Product Master deployment YAML (Fix Pack 3 and later)
Before deployment you need to configure Product Master YAML file.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

ipm_12.0.x_cr.yaml file parameters


Defines the ipm_12.0.x_cr.yaml file parameters.

Property | Subproperties | Description
apiVersion | | Specifies the Product Master API version name. The value is "productmaster.ibm.com/v1".
kind | | Specifies the CRD name. The value is "ProductMaster".
metadata | name | Specifies the CR name.
spec | license.accept | Specifies the license acceptance. The value is "true".
spec | deployment_platform | Specifies the type of deployment platform. Valid value is "openshift" or "k8s".
spec | version | Specifies the Product Master release version.
spec | domain_name | Set the domain name (the Ingress URL to access the Product Master interface, for example, "ipm.persistent.com"). This domain name is the same as the domain for which you specified the certificate details in the app-secret.yaml file.
enable | fts, vendor, dam, ml, gds, wkc, sso, mountmgr | Specifies whether to enable the listed features. The valid value is 1 (enabled) or 0 (disabled).
image | registry | Specifies the registry where the images are located.
image | pullpolicy | Specifies the pull policy. Valid value is "Always" or "IfNotPresent".
productmastersecret | | Specifies the secret name of the registry. Default value is "ipm-registry".
app_secret_name | | Specifies the application secret name. Default value is "app-secret".
certs_secret_name | | Specifies the certificate secret name. Default value is "tls-secret".
enable_volume_selectors | | The default value is "true".
volume_details | storage | Specifies the storage size. The value is in gibibytes, for example, "2Gi".
volume_details | access_modes | Specifies the access modes. For file storage the value is "ReadWriteMany"; for block storage the value is "ReadWriteOnce".
volume_details | storage_class_name | Specifies the storage details. Recommended storage is file storage for all volumes. You can also use block storage for all volumes except "appdata"; the "appdata" volume can be file storage only.

The following properties are applicable to all services unless indicated otherwise:

Property | Subproperties | Description
replica_count | | Specifies the number of identical running pods. The value of this property cannot be more than 1 for the third-party services (Elasticsearch, Hazelcast, MongoDB, and IBM® MQ).
image | | Specifies the image name of the service.
imagetag | | Specifies the image tag of the service.
memflag (admin, wfl, sch, gds, ftsind, ftspim, and mongodb services) | | Specifies the Java™ memory size.
evmemflag (admin service) | | Specifies the event manager Java memory size.
quememflag (admin service) | | Specifies the queue manager Java memory size.
admmemflag (admin service) | | Specifies the admin Java memory size.
readiness_probe | initial_delay_seconds, timeout_seconds, period_seconds | A readiness probe determines whether a container is ready to service requests. initial_delay_seconds specifies the initial wait time interval, in seconds, before the first probe starts; timeout_seconds specifies the time interval, in seconds, to wait for a probe to finish; period_seconds specifies the time interval, in seconds, for performing a readiness probe.
liveness_probe | initial_delay_seconds, timeout_seconds, period_seconds | A liveness probe checks whether the container in which it is configured is still running. The subproperties have the same meanings as for the readiness probe.
resources | requests (memory, cpu) | Specifies the minimum memory and CPU. The memory value is in mebibytes, for example, "1536Mi"; the CPU value is in millicpu, for example, "600m".
resources | limits (memory, cpu) | Specifies the maximum memory and CPU, in the same units.
http<> (Machine learning service) | ext_port | Specifies the external port for accessing the machine learning service externally.
gds_app_type (GDS service) | | Specifies the component of the application that is being installed. Default value is Supply. Valid value can be Supply or Demand.
gds_datapool_gln (GDS service) | | Specifies the global location number (GLN®) of the 1SYNC server.
gds_self_gln (GDS service) | | Specifies the GLN of the trading partner that is using the application.
company_name (GDS service) | | Specifies the name of the company where the GDS module is loaded.
maxmem (Persona-based UI service, REST API service) | | Specifies the maximum Java memory.
IBM MQ | mq_qmgr_name | Specifies the IBM MQ queue manager name.
IBM MQ | mq_channel_name | Specifies the IBM MQ queue manager channel name.
IBM MQ | queue_connection_factory | Creates the JMS connections to queues provided by IBM MQ for point-to-point messaging.
IBM MQ | inbound_queue_name | Specifies the name of the IBM MQ messaging provider inbound queue destination.
IBM MQ | outbound_queue_name | Specifies the name of the IBM MQ messaging provider outbound queue destination.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Enabling a feature through CR

You can use the Product Master CR file to enable a feature.

About this task


You can enable individual services by using the CR file, which minimizes CPU usage and memory consumption.

The following table shows the mapping between features and services:


Feature                       Service
Digital Assets Management     MongoDB
Free text search              indexer, pimcollector, elasticsearch
Machine learning              mongodb, ml
Global Data Synchronization   ibmmq, gds

Procedure
1. In the ipm_12.0.x_cr.yaml file, to make only the GDS-related pods (ibmmq and gds) accessible, set the value of the gds property under enable to 1.

apiVersion: productmaster.ibm.com/v1
kind: ProductMaster
metadata:
name: productmaster
spec:
license:
accept: true
deployment_platform: 'openshift'
#deployment_platform: 'k8s'

version: 12.0.7

domain_name: ""

enable:
fts: 0
vendor: 1
dam: 0
ml: 0
gds: 1
wkc: 0
sso: 0
mountmgr: 1

2. Reapply the updated CR by using the following command.

oc apply -f ipm_12.0.x_cr.yaml
productmaster.productmaster.ibm.com/productmaster configured

Results
GDS-related pods (ibmmq and gds) are up and running.
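
You can confirm the pod status with a quick check; a sketch, assuming the default pod naming used in this document:

oc get pods -n <namespace> | grep -E 'mq|gds'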

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Configuring Product Master deployment YAML (Fix Pack 7 and later)

Before deployment, you need to configure the Product Master YAML file.

Before you begin


Ensure that the system requirements for Containers are met and that the required software is installed.
File storage is mandatory for the ‘appdata’ persistent volume. We recommend file storage for all the persistent volumes. If you use file storage, set ‘access_modes’ to ReadWriteMany.
All persistent volume claims are mandatory for successful deployment.
Before undeploying, it is recommended to take a complete backup of all the persistent volumes.

Procedure
1. Review and update secret values in the app_secrets.yaml file.
a. Generate a self-signed certificate, or procure a valid certificate that is issued by a certificate authority, to apply to the Ingress routes, which in turn are applied to the application URLs. Convert the certificate and key to Base64 format by using the following commands.

cat crt.cert | base64
cat cert.key | base64

Convert the output to a single line for the crt.cert and cert.key files.
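
If your system has GNU coreutils (an assumption about your environment), the -w 0 option of base64 emits single-line output directly, which avoids the manual join:

cat crt.cert | base64 -w 0
cat cert.key | base64 -w 0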
2. Update the values in the app_secrets.yaml file in the following format.

apiVersion: v1
kind: Secret
metadata:
name: app-secret
namespace: <namespace>
type: Opaque
stringData:
#Database details
db_type: "<db2 / oracle>"
db_name: "<DB name>"
db_host: "<IP or hostname of DB>"
db_user: "<DB username>"
db_pass: "<DB password. For FP5 and earlier releases provide encrypted password.>"
db_port: "<DB server port>"
#SSL details
db_ssl_crt: "<database plain text ssl certificate>"
#IBM Watson Knowledge Catalog details
cpd_host_url: "<CPD host URL>"
cpd_user_name: "<CPD User name>"
wkc_auth_api_key: "<WKC API Key>"
wkc_catalog_name: "<WKC Catalog name>"
#Simple Mail Transfer Protocol (SMTP) details
smtp_address: "<SMTP server hostname>"
from_address: "<From email address>"
smtp_port: "<SMTP server port>"
smtp_user: "<SMTP username or API key>"
smtp_pass: "<SMTP password>"
smtp_additional_props: "<SMTP additional properties>"
#Security Assertion Markup Language (SAML) Single sign-on (SSO) details
sso_company: "<company code>"
sso_config_adminui: "<AdminUI SAML WebSSO configuration for IBM WebSphere® Liberty, in the <samlWebSso20>..
</samlWebSso20>> format"
sso_config_personaui: "<PersonaUI SAML WebSSO configuration for IBM WebSphere® Liberty, in the format
<samlWebSso20>..</samlWebSso20>>"
sso_idp_metadata: "<Identity provider metadata file content>"
#Magento connector details
magento_user: "<Magento service username>"
magento_pass: "<Magento service password>"
magento_host: "<Magento IP or hostname>"
magento_port: "<Magento connector port>"
magento_ipm_user: "<application username>"
magento_ipm_pass: "<application password>"
magento_ipm_company: "<application Magento company name>"

apiVersion: v1
kind: Secret
metadata:
name: tls-secret
namespace: <namespace>
type: kubernetes.io/tls
data:
#Specify the Base64 converted domain name certificate and key in a single line without any space
tls.crt: ""
tls.key: ""
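
After you update both secrets, you can apply them to the cluster; a sketch, using the same apply command as the deployment procedures later in this document (use oc apply on Red Hat OpenShift):

kubectl apply -f app_secrets.yaml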

3. The following sample deployment files are provided for your reference.

catalog_source.yaml

apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
name: ibm-productmaster-catalog
namespace: <namespace>
spec:
displayName: IBM Product Master
publisher: IBM
sourceType: grpc
image: registry.ng.bluemix.net/product_master/ipm-operator-catalog@<DIGEST_VALUE>

operator_group.yaml

apiVersion: operators.coreos.com/v1alpha2
kind: OperatorGroup
metadata:
name: ibm-productmaster-catalog-group
namespace: <namespace>
spec:
targetNamespaces:
- <namespace>

subscription.yaml

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription

metadata:
name: ibm-productmaster-catalog-subscription
namespace: <namespace>
spec:
channel: v1.0
name: productmaster-operator
installPlanApproval: Automatic
source: ibm-productmaster-catalog
sourceNamespace: <namespace>

volumes.yaml

kind: PersistentVolume
apiVersion: v1
metadata:
name: appdata-volume
namespace: default
labels:
svc: appdata-volume
spec:
capacity:
storage: 10Gi
accessModes:
- ReadWriteMany
storageClassName: standard
hostPath:
path: /mnt/ipm12/appdata
---
kind: PersistentVolume
apiVersion: v1
metadata:
name: admin-log-volume
namespace: default
labels:
svc: admin-log-volume
spec:
capacity:
storage: 2Gi
accessModes:
- ReadWriteOnce
storageClassName: standard
hostPath:
path: /mnt/ipm12/admin
---
kind: PersistentVolume
apiVersion: v1
metadata:
name: ftsind-log-volume
namespace: default
labels:
svc: ftsind-log-volume
spec:
capacity:
storage: 2Gi
accessModes:
- ReadWriteOnce
storageClassName: standard
hostPath:
path: /mnt/ipm12/ftsindlog

---
kind: PersistentVolume
apiVersion: v1
metadata:
name: ftspim-log-volume
namespace: default
labels:
svc: ftspim-log-volume
spec:
capacity:
storage: 2Gi
accessModes:
- ReadWriteOnce
storageClassName: standard
hostPath:
path: /mnt/ipm12/ftspimlog
---
kind: PersistentVolume
apiVersion: v1
metadata:
name: gds-log-volume
namespace: default
labels:
svc: gds-log-volume
spec:
capacity:
storage: 2Gi
accessModes:
- ReadWriteOnce
storageClassName: standard
hostPath:
path: /mnt/ipm12/gds
---
kind: PersistentVolume
apiVersion: v1

metadata:
name: ml-log-volume
namespace: default
labels:
svc: ml-log-volume
spec:
capacity:
storage: 2Gi
accessModes:
- ReadWriteOnce
storageClassName: standard
hostPath:
path: /mnt/ipm12/ml
---
kind: PersistentVolume
apiVersion: v1
metadata:
name: mongodb-data-volume
namespace: default
labels:
svc: mongodb-data-volume
spec:
capacity:
storage: 1Gi
accessModes:
- ReadWriteOnce
storageClassName: standard
hostPath:
path: /mnt/ipm12/mongodb
---
kind: PersistentVolume
apiVersion: v1
metadata:
name: mongodb-log-volume
namespace: default
labels:
svc: mongodb-log-volume
spec:
capacity:
storage: 1Gi
accessModes:
- ReadWriteOnce
storageClassName: standard
hostPath:
path: /mnt/ipm12/mongodblog
---
kind: PersistentVolume
apiVersion: v1
metadata:
name: mq-data-volume
namespace: default
labels:
svc: mq-data-volume
spec:
capacity:
storage: 1Gi
accessModes:
- ReadWriteOnce
storageClassName: standard
hostPath:
path: /mnt/ipm12/mq-data
---
kind: PersistentVolume
apiVersion: v1
metadata:
name: personaui-log-volume
namespace: default
labels:
svc: personaui-log-volume
spec:
capacity:
storage: 2Gi
accessModes:
- ReadWriteOnce
storageClassName: standard
hostPath:
path: /mnt/ipm12/personaui
---
kind: PersistentVolume
apiVersion: v1
metadata:
name: restapi-log-volume
namespace: default
labels:
svc: restapi-log-volume
spec:
capacity:
storage: 5Gi
accessModes:
- ReadWriteOnce
storageClassName: standard
hostPath:
path: /mnt/ipm12/restapi
---
kind: PersistentVolume
apiVersion: v1

metadata:
name: sch-log-volume
namespace: default
labels:
svc: sch-log-volume
spec:
capacity:
storage: 2Gi
accessModes:
- ReadWriteOnce
storageClassName: standard
hostPath:
path: /mnt/ipm12/sch
---
kind: PersistentVolume
apiVersion: v1
metadata:
name: wfl-log-volume
namespace: default
labels:
svc: wfl-log-volume
spec:
capacity:
storage: 2Gi
accessModes:
- ReadWriteOnce
storageClassName: standard
hostPath:
path: /mnt/ipm12/wfl
---
kind: PersistentVolume
apiVersion: v1
metadata:
name: elasticsearch-data-volume
namespace: default
labels:
svc: elasticsearch-data-volume
spec:
capacity:
storage: 1Gi
accessModes:
- ReadWriteOnce
storageClassName: standard
hostPath:
path: /mnt/ipm12/elasticsearch-data
---
kind: PersistentVolume
apiVersion: v1
metadata:
name: magento-connector-log-volume
namespace: default
labels:
svc: magento-connector-log-volume
spec:
capacity:
storage: 2Gi
accessModes:
- ReadWriteOnce
storageClassName: standard
hostPath:
path: /mnt/ipm12/magento-connector

You can replace the /mnt/ipm12 portion of each path with any value.
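
For example, to keep the volumes under a different mount point (a hypothetical path), change the hostPath entry of each PersistentVolume accordingly:

hostPath:
  path: /data/ipm12/appdata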


ipm_12.0.x_cr.yaml

apiVersion: productmaster.ibm.com/v1
kind: ProductMaster
metadata:
name: productmaster
spec:
license:
accept: true
deployment_platform: 'openshift'
#deployment_platform: 'k8s'

version: 12.0.7

domain_name: ""

enable:
fts: 1
vendor: 1
dam: 1
ml: 1
gds: 1
wkc: 0
sso: 1
mountmgr: 1
image:
registry: registry.ng.bluemix.net/product_master
pullpolicy: Always

productmastersecret: "ipm-registry"
app_secret_name: "app-secret"

random_secret_name: "random-secret"
certs_secret_name: "tls-secret"

enable_volume_selectors: true

volume_details:
admin:
log:
name: admin-log-volume
storage: 2Gi
access_modes: ReadWriteOnce
storage_class_name: standard
ftspim:
log:
name: ftspim-log-volume
storage: 2Gi
access_modes: ReadWriteOnce
storage_class_name: standard
ftsind:
log:
name: ftsind-log-volume
storage: 2Gi
access_modes: ReadWriteOnce
storage_class_name: standard
personaui:
log:
name: personaui-log-volume
storage: 2Gi
access_modes: ReadWriteOnce
storage_class_name: standard
restapi:
log:
name: restapi-log-volume
storage: 2Gi
access_modes: ReadWriteOnce
storage_class_name: standard
wfl:
log:
name: wfl-log-volume
storage: 2Gi
access_modes: ReadWriteOnce
storage_class_name: standard
sch:
log:
name: sch-log-volume
storage: 2Gi
access_modes: ReadWriteOnce
storage_class_name: standard
gds:
log:
name: gds-log-volume
storage: 2Gi
access_modes: ReadWriteOnce
storage_class_name: standard
appdata:
name: appdata-volume
storage: 1Gi
access_modes: ReadWriteMany
storage_class_name: standard
########################################################################
# This is file storage
########################################################################
mq:
data:
name: mq-data-volume
storage: 1Gi
access_modes: ReadWriteOnce
storage_class_name: standard
mongodb:
data:
name: mongodb-data-volume
storage: 1Gi
access_modes: ReadWriteOnce
storage_class_name: standard
log:
name: mongodb-log-volume
storage: 1Gi
access_modes: ReadWriteOnce
storage_class_name: standard
ml:
log:
name: ml-log-volume
storage: 2Gi
access_modes: ReadWriteOnce
storage_class_name: standard
elasticsearch:
data:
name: elasticsearch-data-volume
claim: elasticsearch-data-volume-claim
storage: 1Gi
access_modes: ReadWriteOnce
storage_class_name: standard

################################

ml_service:

replica_count: 1
image: ipm-ml-ubi8
imagetag: 12.0.7
http:
ext_port: 31005
httpa:
ext_port: 31105
httpb:
ext_port: 31205
httpc:
ext_port: 31305
readiness_probe:
initial_delay_seconds: 30
timeout_seconds: 900
period_seconds: 60
liveness_probe:
initial_delay_seconds: 30
timeout_seconds: 900
period_seconds: 60
resources:
requests:
memory: 2048Mi
cpu: 500m
limits:
memory: 3072Mi
cpu: 600m

admin_service:
replica_count: 1
image: ipm-admin-ubi8
imagetag: 12.0.7
memflag: -Xmx1024m -Xms256m
evmemflag: -Xmx128m -Xms48m
quememflag: -Xmx128m -Xms48m
admmemflag: -Xmx128m -Xms48m
readiness_probe:
initial_delay_seconds: 90
timeout_seconds: 900
period_seconds: 10
liveness_probe:
initial_delay_seconds: 120
timeout_seconds: 900
period_seconds: 60
resources:
requests:
memory: 2048Mi
cpu: 1000m
limits:
memory: 3072Mi
cpu: 1400m

wfl_service:
replica_count: 1
image: ipm-wfl-ubi8
imagetag: 12.0.7
memflag: -Xmx1024m -Xms48m
readiness_probe:
initial_delay_seconds: 30
timeout_seconds: 900
period_seconds: 60
liveness_probe:
initial_delay_seconds: 30
timeout_seconds: 900
period_seconds: 60
resources:
requests:
memory: 1536Mi
cpu: 600m
limits:
memory: 2048Mi
cpu: 800m

sch_service:
replica_count: 1
image: ipm-sch-ubi8
imagetag: 12.0.7
memflag: -Xmx1024m -Xms48m
readiness_probe:
initial_delay_seconds: 30
timeout_seconds: 900
period_seconds: 60
liveness_probe:
initial_delay_seconds: 30
timeout_seconds: 900
period_seconds: 60
resources:
requests:
memory: 1536Mi
cpu: 600m
limits:
memory: 2048Mi
cpu: 800m

gds_service:
replica_count: 1
image: ipm-gds-ubi8

imagetag: 12.0.7
memflag: -Xmx1024m -Xms48m
gds_app_type: Supply
gds_datapool_gln: 8380160030003
gds_self_gln: 0864471000477
company_name: <Company where GDS module loaded>
readiness_probe:
initial_delay_seconds: 30
timeout_seconds: 900
period_seconds: 60
liveness_probe:
initial_delay_seconds: 30
timeout_seconds: 900
period_seconds: 60
resources:
requests:
memory: 1536Mi
cpu: 600m
limits:
memory: 2048Mi
cpu: 800m
magento_service:
replica_count: 1
image: ipm-magento-connector-ubi8
imagetag: 12.0.7
memflag: -Xmx1024m -Xms48m
mg_company: magento3
readiness_probe:
initial_delay_seconds: 30
timeout_seconds: 900
period_seconds: 60
liveness_probe:
initial_delay_seconds: 30
timeout_seconds: 900
period_seconds: 60
resources:
requests:
memory: 1536Mi
cpu: 600m
limits:
memory: 2048Mi
cpu: 800m

ftsind_service:
replica_count: 1
image: ipm-fts-indexer-ubi8
imagetag: 12.0.7
memflag: -Xmx1024m -Xms256m
readiness_probe:
initial_delay_seconds: 30
timeout_seconds: 900
period_seconds: 10
liveness_probe:
initial_delay_seconds: 60
timeout_seconds: 5000
period_seconds: 60
resources:
requests:
memory: 1536Mi
cpu: 500m
limits:
memory: 2048Mi
cpu: 600m

ftspim_service:
replica_count: 1
image: ipm-fts-pim-ubi8
imagetag: 12.0.7
memflag: -Xmx1024m -Xms256m
readiness_probe:
initial_delay_seconds: 30
timeout_seconds: 900
period_seconds: 10
liveness_probe:
initial_delay_seconds: 60
timeout_seconds: 900
period_seconds: 60
resources:
requests:
memory: 1536Mi
cpu: 500m
limits:
memory: 2048Mi
cpu: 600m

personaui_service:
replica_count: 1
image: ipm-personaui-ubi8
imagetag: 12.0.7
maxmem: 1280
readiness_probe:
initial_delay_seconds: 30
timeout_seconds: 900
period_seconds: 60
liveness_probe:
initial_delay_seconds: 30

period_seconds: 60
timeout_seconds: 900
resources:
requests:
memory: 1536Mi
cpu: 1000m
limits:
memory: 2048Mi
cpu: 1400m

restapi_service:
replica_count: 1
image: ipm-restapi-ubi8
imagetag: 12.0.7
maxmem: 1536
readiness_probe:
initial_delay_seconds: 90
timeout_seconds: 900
period_seconds: 10
liveness_probe:
initial_delay_seconds: 120
period_seconds: 60
timeout_seconds: 900
resources:
requests:
memory: 1536Mi
cpu: 1000m
limits:
memory: 2048Mi
cpu: 1400m

hazelcast_service:
replica_count: 1
image: ipm-hazelcast
imagetag: 4.2.4
readiness_probe:
initial_delay_seconds: 10
timeout_seconds: 900
period_seconds: 5
liveness_probe:
initial_delay_seconds: 15
period_seconds: 10
timeout_seconds: 900
resources:
requests:
memory: 1002Mi
cpu: 498m
limits:
memory: 1254Mi
cpu: 612m

elasticsearch_service:
replica_count: 1
image: ipm-elasticsearch
imagetag: 7.16.2
privileged: true
readiness_probe:
initial_delay_seconds: 60
timeout_seconds: 900
period_seconds: 10
liveness_probe:
initial_delay_seconds: 120
timeout_seconds: 900
period_seconds: 60
resources:
requests:
memory: 2006Mi
cpu: 498m
limits:
memory: 2506Mi
cpu: 612m

ibm_mq_service:
replica_count: 1
image: ipm-mq
imagetag: 9.2.0.0-r2
mq_qmgr_name: QM1
mq_channel_name: DEV.APP.SVRCONN
queue_connection_factory: ptpQcf
inbound_queue_name: DEV.QUEUE.1
outbound_queue_name: DEV.QUEUE.2
readiness_probe:
initial_delay_seconds: 30
timeout_seconds: 5000
period_seconds: 10
port: http
liveness_probe:
initial_delay_seconds: 30
timeout_seconds: 5000
period_seconds: 10
port: http
service: productmaster-mq-service
port: 1414
resources:
requests:
memory: 240Mi

cpu: 80m
limits:
memory: 300Mi
cpu: 100m

mongodb_service:
replica_count: 1
image: ipm-mongodb
imagetag: 4.0.22
privileged: true
memflag: -Xmx2048m -Xms500m
readiness_probe:
initial_delay_seconds: 90
timeout_seconds: 5000
period_seconds: 10
liveness_probe:
initial_delay_seconds: 120
timeout_seconds: 5000
period_seconds: 110
resources:
requests:
memory: 3216Mi
cpu: 488m
limits:
memory: 3500Mi
cpu: 600m

4. Update the parameters in the ipm_12.0.x_cr.yaml file before deployment, as required. For more information, see ipm_12.0.x_cr.yaml file parameters.
Note: Unless otherwise stated, the values are applicable to both IBM® Product Master 12.0 Fix Pack 3 and Fix Pack 4 releases.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Configuring Product Master deployment YAML (Fix Pack 6 and later)

Before deployment, you need to configure the Product Master YAML file.

Before you begin


Ensure that the system requirements for Containers are met and that the required software is installed.
File storage is mandatory for the ‘appdata’ persistent volume. We recommend file storage for all the persistent volumes. If you use file storage, set ‘access_modes’ to ReadWriteMany.
All persistent volume claims are mandatory for successful deployment.
Before undeploying, it is recommended to take a complete backup of all the persistent volumes.

Procedure
1. Review and update secret values in the app_secrets.yaml file.
a. Generate a self-signed certificate, or procure a valid certificate that is issued by a certificate authority, to apply to the Ingress routes, which in turn are applied to the application URLs. Convert the certificate and key to Base64 format by using the following commands.

cat crt.cert | base64


cat cert.key | base64

Convert the output to a single line for the crt.cert and cert.key files.
2. Update the values in the app_secrets.yaml file in the following format.

apiVersion: v1
kind: Secret
metadata:
name: app-secret
namespace: <namespace>
type: Opaque
stringData:
#Database details
db_type: "<db2 / oracle>"
db_name: "<DB name>"
db_host: "<IP or hostname of DB>"
db_user: "<DB username>"
db_pass: "<DB password. For FP5 and earlier releases provide encrypted password.>"
db_port: "<DB server port>"
#SSL details
db_ssl_crt: "<database plain text ssl certificate>"
#IBM Watson Knowledge Catalog details
cpd_host_url: "<CPD host URL>"
cpd_user_name: "<CPD User name>"
wkc_auth_api_key: "<WKC API Key>"
wkc_catalog_name: "<WKC Catalog name>"
#Simple Mail Transfer Protocol (SMTP) details
smtp_address: "<SMTP server hostname>"
from_address: "<From email address>"
smtp_port: "<SMTP server port>"

smtp_user: "<SMTP username or API key>"
smtp_pass: "<SMTP password>"
smtp_additional_props: "<SMTP additional properties>"
#Security Assertion Markup Language (SAML) Single sign-on (SSO) details
sso_company: "<company code>"
sso_config_adminui: "<AdminUI SAML WebSSO configuration for IBM WebSphere® Liberty, in the <samlWebSso20>..
</samlWebSso20>> format"
sso_config_personaui: "<PersonaUI SAML WebSSO configuration for IBM WebSphere® Liberty, in the format
<samlWebSso20>..</samlWebSso20>>"
sso_idp_metadata: "<Identity provider metadata file content>"

apiVersion: v1
kind: Secret
metadata:
name: tls-secret
namespace: <namespace>
type: kubernetes.io/tls
data:
#Specify the Base64 converted domain name certificate and key in a single line without any space
tls.crt: ""
tls.key: ""

3. The following sample deployment files are provided for your reference.

catalog_source.yaml

apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
name: ibm-productmaster-catalog
namespace: <namespace>
spec:
displayName: IBM Product Master
publisher: IBM
sourceType: grpc
image: registry.ng.bluemix.net/product_master/ipm-operator-catalog@<DIGEST_VALUE>

operator_group.yaml

apiVersion: operators.coreos.com/v1alpha2
kind: OperatorGroup
metadata:
name: ibm-productmaster-catalog-group
namespace: <namespace>
spec:
targetNamespaces:
- <namespace>

subscription.yaml

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
name: ibm-productmaster-catalog-subscription
namespace: <namespace>
spec:
channel: v1.0
name: productmaster-operator
installPlanApproval: Automatic
source: ibm-productmaster-catalog
sourceNamespace: <namespace>

volumes.yaml

kind: PersistentVolume
apiVersion: v1
metadata:
name: appdata-volume
namespace: default
labels:
svc: appdata-volume
spec:
capacity:
storage: 10Gi
accessModes:
- ReadWriteMany
storageClassName: standard
hostPath:
path: /mnt/ipm12/appdata
---
kind: PersistentVolume
apiVersion: v1
metadata:
name: admin-log-volume
namespace: default
labels:
svc: admin-log-volume
spec:
capacity:
storage: 2Gi
accessModes:
- ReadWriteOnce

storageClassName: standard
hostPath:
path: /mnt/ipm12/admin
---
kind: PersistentVolume
apiVersion: v1
metadata:
name: ftsind-log-volume
namespace: default
labels:
svc: ftsind-log-volume
spec:
capacity:
storage: 2Gi
accessModes:
- ReadWriteOnce
storageClassName: standard
hostPath:
path: /mnt/ipm12/ftsindlog

---
kind: PersistentVolume
apiVersion: v1
metadata:
name: ftspim-log-volume
namespace: default
labels:
svc: ftspim-log-volume
spec:
capacity:
storage: 2Gi
accessModes:
- ReadWriteOnce
storageClassName: standard
hostPath:
path: /mnt/ipm12/ftspimlog
---
kind: PersistentVolume
apiVersion: v1
metadata:
name: gds-log-volume
namespace: default
labels:
svc: gds-log-volume
spec:
capacity:
storage: 2Gi
accessModes:
- ReadWriteOnce
storageClassName: standard
hostPath:
path: /mnt/ipm12/gds
---
kind: PersistentVolume
apiVersion: v1
metadata:
name: ml-log-volume
namespace: default
labels:
svc: ml-log-volume
spec:
capacity:
storage: 2Gi
accessModes:
- ReadWriteOnce
storageClassName: standard
hostPath:
path: /mnt/ipm12/ml
---
kind: PersistentVolume
apiVersion: v1
metadata:
name: mongodb-data-volume
namespace: default
labels:
svc: mongodb-data-volume
spec:
capacity:
storage: 1Gi
accessModes:
- ReadWriteOnce
storageClassName: standard
hostPath:
path: /mnt/ipm12/mongodb
---
kind: PersistentVolume
apiVersion: v1
metadata:
name: mongodb-log-volume
namespace: default
labels:
svc: mongodb-log-volume
spec:
capacity:
storage: 1Gi
accessModes:

- ReadWriteOnce
storageClassName: standard
hostPath:
path: /mnt/ipm12/mongodblog
---
kind: PersistentVolume
apiVersion: v1
metadata:
name: mq-data-volume
namespace: default
labels:
svc: mq-data-volume
spec:
capacity:
storage: 1Gi
accessModes:
- ReadWriteOnce
storageClassName: standard
hostPath:
path: /mnt/ipm12/mq-data
---
kind: PersistentVolume
apiVersion: v1
metadata:
name: personaui-log-volume
namespace: default
labels:
svc: personaui-log-volume
spec:
capacity:
storage: 2Gi
accessModes:
- ReadWriteOnce
storageClassName: standard
hostPath:
path: /mnt/ipm12/personaui
---
kind: PersistentVolume
apiVersion: v1
metadata:
name: restapi-log-volume
namespace: default
labels:
svc: restapi-log-volume
spec:
capacity:
storage: 5Gi
accessModes:
- ReadWriteOnce
storageClassName: standard
hostPath:
path: /mnt/ipm12/restapi
---
kind: PersistentVolume
apiVersion: v1
metadata:
name: sch-log-volume
namespace: default
labels:
svc: sch-log-volume
spec:
capacity:
storage: 2Gi
accessModes:
- ReadWriteOnce
storageClassName: standard
hostPath:
path: /mnt/ipm12/sch
---
kind: PersistentVolume
apiVersion: v1
metadata:
name: wfl-log-volume
namespace: default
labels:
svc: wfl-log-volume
spec:
capacity:
storage: 2Gi
accessModes:
- ReadWriteOnce
storageClassName: standard
hostPath:
path: /mnt/ipm12/wfl
---
kind: PersistentVolume
apiVersion: v1
metadata:
name: elasticsearch-data-volume
namespace: default
labels:
svc: elasticsearch-data-volume
spec:
capacity:
storage: 1Gi
accessModes:

- ReadWriteOnce
storageClassName: standard
hostPath:
path: /mnt/ipm12/elasticsearch-data

You can replace the /mnt/ipm12 portion of each path with any value.


ipm_12.0.x_cr.yaml

apiVersion: productmaster.ibm.com/v1
kind: ProductMaster
metadata:
name: productmaster
spec:
license:
accept: true
deployment_platform: 'openshift'
#deployment_platform: 'k8s'

version: 12.0.6

domain_name: ""

enable:
fts: 1
vendor: 1
dam: 1
ml: 1
gds: 1
wkc: 0
sso: 1
mountmgr: 1
image:
registry: registry.ng.bluemix.net/product_master
pullpolicy: Always

productmastersecret: "ipm-registry"
app_secret_name: "app-secret"
random_secret_name: "random-secret"
certs_secret_name: "tls-secret"

enable_volume_selectors: true

volume_details:
admin:
log:
name: admin-log-volume
storage: 2Gi
access_modes: ReadWriteOnce
storage_class_name: standard
ftspim:
log:
name: ftspim-log-volume
storage: 2Gi
access_modes: ReadWriteOnce
storage_class_name: standard
ftsind:
log:
name: ftsind-log-volume
storage: 2Gi
access_modes: ReadWriteOnce
storage_class_name: standard
personaui:
log:
name: personaui-log-volume
storage: 2Gi
access_modes: ReadWriteOnce
storage_class_name: standard
restapi:
log:
name: restapi-log-volume
storage: 2Gi
access_modes: ReadWriteOnce
storage_class_name: standard
wfl:
log:
name: wfl-log-volume
storage: 2Gi
access_modes: ReadWriteOnce
storage_class_name: standard
sch:
log:
name: sch-log-volume
storage: 2Gi
access_modes: ReadWriteOnce
storage_class_name: standard
gds:
log:
name: gds-log-volume
storage: 2Gi
access_modes: ReadWriteOnce
storage_class_name: standard
appdata:
name: appdata-volume
storage: 1Gi

access_modes: ReadWriteMany
storage_class_name: standard
########################################################################
# This is file storage
########################################################################
mq:
data:
name: mq-data-volume
storage: 1Gi
access_modes: ReadWriteOnce
storage_class_name: standard
mongodb:
data:
name: mongodb-data-volume
storage: 1Gi
access_modes: ReadWriteOnce
storage_class_name: standard
log:
name: mongodb-log-volume
storage: 1Gi
access_modes: ReadWriteOnce
storage_class_name: standard
ml:
log:
name: ml-log-volume
storage: 2Gi
access_modes: ReadWriteOnce
storage_class_name: standard
elasticsearch:
data:
name: elasticsearch-data-volume
claim: elasticsearch-data-volume-claim
storage: 1Gi
access_modes: ReadWriteOnce
storage_class_name: standard

################################

ml_service:
replica_count: 1
image: ipm-ml-ubi8
imagetag: 12.0.6
http:
ext_port: 31005
httpa:
ext_port: 31105
httpb:
ext_port: 31205
httpc:
ext_port: 31305
readiness_probe:
initial_delay_seconds: 30
timeout_seconds: 900
period_seconds: 60
liveness_probe:
initial_delay_seconds: 30
timeout_seconds: 900
period_seconds: 60
resources:
requests:
memory: 2048Mi
cpu: 500m
limits:
memory: 3072Mi
cpu: 600m

admin_service:
replica_count: 1
image: ipm-admin-ubi8
imagetag: 12.0.6
memflag: -Xmx1024m -Xms256m
evmemflag: -Xmx128m -Xms48m
quememflag: -Xmx128m -Xms48m
admmemflag: -Xmx128m -Xms48m
readiness_probe:
initial_delay_seconds: 90
timeout_seconds: 900
period_seconds: 10
liveness_probe:
initial_delay_seconds: 120
timeout_seconds: 900
period_seconds: 60
resources:
requests:
memory: 2048Mi
cpu: 1000m
limits:
memory: 3072Mi
cpu: 1400m

wfl_service:
replica_count: 1
image: ipm-wfl-ubi8
imagetag: 12.0.6
memflag: -Xmx1024m -Xms48m
readiness_probe:

initial_delay_seconds: 30
timeout_seconds: 900
period_seconds: 60
liveness_probe:
initial_delay_seconds: 30
timeout_seconds: 900
period_seconds: 60
resources:
requests:
memory: 1536Mi
cpu: 600m
limits:
memory: 2048Mi
cpu: 800m

sch_service:
replica_count: 1
image: ipm-sch-ubi8
imagetag: 12.0.6
memflag: -Xmx1024m -Xms48m
readiness_probe:
initial_delay_seconds: 30
timeout_seconds: 900
period_seconds: 60
liveness_probe:
initial_delay_seconds: 30
timeout_seconds: 900
period_seconds: 60
resources:
requests:
memory: 1536Mi
cpu: 600m
limits:
memory: 2048Mi
cpu: 800m

gds_service:
replica_count: 1
image: ipm-gds-ubi8
imagetag: 12.0.6
memflag: -Xmx1024m -Xms48m
gds_app_type: Supply
gds_datapool_gln: 8380160030003
gds_self_gln: 0864471000477
company_name: <Company where GDS module loaded>
readiness_probe:
initial_delay_seconds: 30
timeout_seconds: 900
period_seconds: 60
liveness_probe:
initial_delay_seconds: 30
timeout_seconds: 900
period_seconds: 60
resources:
requests:
memory: 1536Mi
cpu: 600m
limits:
memory: 2048Mi
cpu: 800m

ftsind_service:
replica_count: 1
image: ipm-fts-indexer-ubi8
imagetag: 12.0.6
memflag: -Xmx1024m -Xms256m
readiness_probe:
initial_delay_seconds: 30
timeout_seconds: 900
period_seconds: 10
liveness_probe:
initial_delay_seconds: 60
timeout_seconds: 5000
period_seconds: 60
resources:
requests:
memory: 1536Mi
cpu: 500m
limits:
memory: 2048Mi
cpu: 600m

ftspim_service:
replica_count: 1
image: ipm-fts-pim-ubi8
imagetag: 12.0.6
memflag: -Xmx1024m -Xms256m
readiness_probe:
initial_delay_seconds: 30
timeout_seconds: 900
period_seconds: 10
liveness_probe:
initial_delay_seconds: 60
timeout_seconds: 900
period_seconds: 60
resources:

requests:
memory: 1536Mi
cpu: 500m
limits:
memory: 2048Mi
cpu: 600m

personaui_service:
replica_count: 1
image: ipm-personaui-ubi8
imagetag: 12.0.6
maxmem: 1280
readiness_probe:
initial_delay_seconds: 30
timeout_seconds: 900
period_seconds: 60
liveness_probe:
initial_delay_seconds: 30
period_seconds: 60
timeout_seconds: 900
resources:
requests:
memory: 1536Mi
cpu: 1000m
limits:
memory: 2048Mi
cpu: 1400m

restapi_service:
replica_count: 1
image: ipm-restapi-ubi8
imagetag: 12.0.6
maxmem: 1536
readiness_probe:
initial_delay_seconds: 90
timeout_seconds: 900
period_seconds: 10
liveness_probe:
initial_delay_seconds: 120
period_seconds: 60
timeout_seconds: 900
resources:
requests:
memory: 1536Mi
cpu: 1000m
limits:
memory: 2048Mi
cpu: 1400m

hazelcast_service:
replica_count: 1
image: ipm-hazelcast
imagetag: 4.2.4
readiness_probe:
initial_delay_seconds: 10
timeout_seconds: 900
period_seconds: 5
liveness_probe:
initial_delay_seconds: 15
period_seconds: 10
timeout_seconds: 900
resources:
requests:
memory: 1002Mi
cpu: 498m
limits:
memory: 1254Mi
cpu: 612m

elasticsearch_service:
replica_count: 1
image: ipm-elasticsearch
imagetag: 7.16.2
privileged: true
readiness_probe:
initial_delay_seconds: 60
timeout_seconds: 900
period_seconds: 10
liveness_probe:
initial_delay_seconds: 120
timeout_seconds: 900
period_seconds: 60
resources:
requests:
memory: 2006Mi
cpu: 498m
limits:
memory: 2506Mi
cpu: 612m

ibm_mq_service:
replica_count: 1
image: ipm-mq
imagetag: 9.2.0.0-r2
mq_qmgr_name: QM1
mq_channel_name: DEV.APP.SVRCONN

queue_connection_factory: ptpQcf
inbound_queue_name: DEV.QUEUE.1
outbound_queue_name: DEV.QUEUE.2
readiness_probe:
initial_delay_seconds: 30
timeout_seconds: 5000
period_seconds: 10
port: http
liveness_probe:
initial_delay_seconds: 30
timeout_seconds: 5000
period_seconds: 10
port: http
service: productmaster-mq-service
port: 1414
resources:
requests:
memory: 240Mi
cpu: 80m
limits:
memory: 300Mi
cpu: 100m

mongodb_service:
replica_count: 1
image: ipm-mongodb
imagetag: 4.0.22
privileged: true
memflag: -Xmx2048m -Xms500m
readiness_probe:
initial_delay_seconds: 90
timeout_seconds: 5000
period_seconds: 10
liveness_probe:
initial_delay_seconds: 120
timeout_seconds: 5000
period_seconds: 110
resources:
requests:
memory: 3216Mi
cpu: 488m
limits:
memory: 3500Mi
cpu: 600m

4. Update the parameters in the ipm_12.0.x_cr.yaml file before deployment, as required. For more information, see ipm_12.0.x_cr.yaml file parameters.
Note: Unless otherwise stated, the values are applicable to both IBM® Product Master 12.0 Fix Pack 3 and Fix Pack 4 releases.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Configuring Product Master deployment YAML (Fix Pack 4 and later)

Before deployment, you need to configure the Product Master YAML file.

Before you begin


Ensure that the system requirements for Containers are met and that the required software is installed.
File storage is mandatory for the ‘appdata’ persistent volume. We recommend file storage for all the persistent volumes. If you use file storage, set ‘access_modes’ to ReadWriteMany.
All persistent volume claims are mandatory for successful deployment.
Before undeploying, it is recommended to take a complete backup of all the persistent volumes.

Procedure
1. Review and update secret values in the app_secrets.yaml file.
a. Encrypt your IBM® Db2® or Oracle password by using the following openssl command (requires OpenSSL 1.1.1g (FIPS) or later).

$ echo <Db2_password> | openssl enc -e -base64 -aes-256-cbc -salt -pbkdf2 -k <key_name>

Where,
<Db2_password> - Plain text password for the database.
<key_name> - Passphrase, any random alphanumeric string.
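To verify the encrypted value before you place it in the app_secrets.yaml file, you can decrypt it with the matching command; a sketch using the same placeholders (the salted, Base64-encoded output always begins with "U2FsdGVkX1"):

$ echo '<encrypted_password>' | openssl enc -d -base64 -aes-256-cbc -salt -pbkdf2 -k <key_name>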
b. Generate a self-signed certificate, or procure a valid certificate that is issued by a certificate authority, to apply to the Ingress routes, which in turn are applied to the application URLs. Convert the certificate and key to Base64 format by using the following commands.

cat crt.cert | base64


cat cert.key | base64

Convert the output to a single line for the crt.cert and cert.key files.
2. Update the values in the app_secrets.yaml file in the following format.

apiVersion: v1
kind: Secret
metadata:
name: app-secret
namespace: <namespace>
type: Opaque
stringData:
db_type: "<db2 / oracle>"
db_name: "<DB name>"
db_host: "<IP/Hostname of DB>"
db_user: "<DB username>"
encryption_key: "<Encryption key used to encrypt DB2/Oracle password>"
db_pass: "<Encrypted DB password>"
db_port: "<DB server port>"
#Update below details only if you are going to use DAM/ML features, else you can remove these secret entries.
mongodb_name: "<mongoDB database>"
mongodb_user: "<mongoDB user>"
mongodb_pass: "<mongoDB plain text password>"
#Update below details only if you are going to use FTS features, else you can remove these secret entries.
elastic_user: "elastic"
elastic_pass: "<elastic plain text password>"
#Update below details only if you are going to use GDS feature, else you can remove these secret entries.
mq_app_user: "<IBM MQ app user which will be created on IBM MQ Pod>"
mq_app_pass: "<Set IBM MQ app user plain text password>"
mq_ui_pass: "<IBM MQ UI plain text password>"
#Update below details only if you are going to use WKC feature, else you can remove these secret entries.
cpd_host_url: "<CPD host URL>"
cpd_user_name: "<CPD User name>"
wkc_auth_api_key: "<WKC API Key>"
wkc_catalog_name: "<WKC Catalog name>"
#Update below details only if SMTP is required, else you can remove these secret entries.
smtp_address: "<SMTP server hostname>"
from_address: "<From email address>"
smtp_port: "<SMTP server port>"
smtp_user: "<SMTP username or API key>"
smtp_pass: "<SMTP plain text password>"
smtp_additional_props: "<SMTP Additional Properties>"
#Update below details only if SAML SSO is required, else you can remove these secret entries.
sso_company: "<Company code>"
sso_config_adminui: "<AdminUI SAML WebSSO configuration for IBM Liberty, in the format <samlWebSso20>..</samlWebSso20>>"
sso_config_personaui: "<PersonaUI SAML WebSSO configuration for IBM Liberty, in the format <samlWebSso20>..
</samlWebSso20>>"
sso_idp_metadata: "<Identity provider metadata file content>"
---
apiVersion: v1
kind: Secret
metadata:
name: tls-secret
namespace: <namespace>
type: kubernetes.io/tls
data:
tls.crt: <base64 converted domain name certificate in single line format>
tls.key: <base64 converted domain name certificate key in single line format>

3. The following sample deployment files are provided for your reference.

catalog_source.yaml

apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
name: ibm-productmaster-catalog
namespace: <namespace>
spec:
displayName: IBM Product Master
publisher: IBM
sourceType: grpc
image: registry.ng.bluemix.net/product_master/ipm-operator-catalog@<DIGEST_VALUE>

operator_group.yaml

apiVersion: operators.coreos.com/v1alpha2
kind: OperatorGroup
metadata:
name: ibm-productmaster-catalog-group
namespace: <namespace>
spec:
targetNamespaces:
- <namespace>

subscription.yaml

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
name: ibm-productmaster-catalog-subscription
namespace: <namespace>
spec:
channel: v1.0
name: productmaster-operator
installPlanApproval: Automatic

source: ibm-productmaster-catalog
sourceNamespace: <namespace>

volumes.yaml

kind: PersistentVolume
apiVersion: v1
metadata:
name: appdata-volume
namespace: default
labels:
svc: appdata-volume
spec:
capacity:
storage: 10Gi
accessModes:
- ReadWriteMany
storageClassName: standard
hostPath:
path: /mnt/ipm12/appdata
---
kind: PersistentVolume
apiVersion: v1
metadata:
name: admin-log-volume
namespace: default
labels:
svc: admin-log-volume
spec:
capacity:
storage: 2Gi
accessModes:
- ReadWriteOnce
storageClassName: standard
hostPath:
path: /mnt/ipm12/admin
---
kind: PersistentVolume
apiVersion: v1
metadata:
name: ftsind-log-volume
namespace: default
labels:
svc: ftsind-log-volume
spec:
capacity:
storage: 2Gi
accessModes:
- ReadWriteOnce
storageClassName: standard
hostPath:
path: /mnt/ipm12/ftsindlog

---
kind: PersistentVolume
apiVersion: v1
metadata:
name: ftspim-log-volume
namespace: default
labels:
svc: ftspim-log-volume
spec:
capacity:
storage: 2Gi
accessModes:
- ReadWriteOnce
storageClassName: standard
hostPath:
path: /mnt/ipm12/ftspimlog
---
kind: PersistentVolume
apiVersion: v1
metadata:
name: gds-log-volume
namespace: default
labels:
svc: gds-log-volume
spec:
capacity:
storage: 2Gi
accessModes:
- ReadWriteOnce
storageClassName: standard
hostPath:
path: /mnt/ipm12/gds
---
kind: PersistentVolume
apiVersion: v1
metadata:
name: ml-log-volume
namespace: default
labels:
svc: ml-log-volume
spec:
capacity:

storage: 2Gi
accessModes:
- ReadWriteOnce
storageClassName: standard
hostPath:
path: /mnt/ipm12/ml
---
kind: PersistentVolume
apiVersion: v1
metadata:
name: mongodb-data-volume
namespace: default
labels:
svc: mongodb-data-volume
spec:
capacity:
storage: 1Gi
accessModes:
- ReadWriteOnce
storageClassName: standard
hostPath:
path: /mnt/ipm12/mongodb
---
kind: PersistentVolume
apiVersion: v1
metadata:
name: mongodb-log-volume
namespace: default
labels:
svc: mongodb-log-volume
spec:
capacity:
storage: 1Gi
accessModes:
- ReadWriteOnce
storageClassName: standard
hostPath:
path: /mnt/ipm12/mongodblog
---
kind: PersistentVolume
apiVersion: v1
metadata:
name: mq-data-volume
namespace: default
labels:
svc: mq-data-volume
spec:
capacity:
storage: 1Gi
accessModes:
- ReadWriteOnce
storageClassName: standard
hostPath:
path: /mnt/ipm12/mq-data
---
kind: PersistentVolume
apiVersion: v1
metadata:
name: personaui-log-volume
namespace: default
labels:
svc: personaui-log-volume
spec:
capacity:
storage: 2Gi
accessModes:
- ReadWriteOnce
storageClassName: standard
hostPath:
path: /mnt/ipm12/personaui
---
kind: PersistentVolume
apiVersion: v1
metadata:
name: restapi-log-volume
namespace: default
labels:
svc: restapi-log-volume
spec:
capacity:
storage: 5Gi
accessModes:
- ReadWriteOnce
storageClassName: standard
hostPath:
path: /mnt/ipm12/restapi
---
kind: PersistentVolume
apiVersion: v1
metadata:
name: sch-log-volume
namespace: default
labels:
svc: sch-log-volume
spec:
capacity:

storage: 2Gi
accessModes:
- ReadWriteOnce
storageClassName: standard
hostPath:
path: /mnt/ipm12/sch
---
kind: PersistentVolume
apiVersion: v1
metadata:
name: wfl-log-volume
namespace: default
labels:
svc: wfl-log-volume
spec:
capacity:
storage: 2Gi
accessModes:
- ReadWriteOnce
storageClassName: standard
hostPath:
path: /mnt/ipm12/wfl
---
kind: PersistentVolume
apiVersion: v1
metadata:
name: elasticsearch-data-volume
namespace: default
labels:
svc: elasticsearch-data-volume
spec:
capacity:
storage: 1Gi
accessModes:
- ReadWriteOnce
storageClassName: standard
hostPath:
path: /mnt/ipm12/elasticsearch-data

You can replace the /mnt/ipm12 portion of each path with any value.


ipm_12.0.x_cr.yaml

apiVersion: productmaster.ibm.com/v1
kind: ProductMaster
metadata:
name: productmaster
spec:
license:
accept: true
deployment_platform: 'openshift'
#deployment_platform: 'k8s'

version: 12.0.x

domain_name: ""

enable:
fts: 1
vendor: 1
dam: 1
ml: 1
gds: 1
wkc: 0
sso: 0
mountmgr: 1
image:
registry: registry.ng.bluemix.net/product_master
pullpolicy: Always

productmastersecret: "ipm-registry"
app_secret_name: "app-secret"
certs_secret_name: "tls-secret"

enable_volume_selectors: true

volume_details:
admin:
log:
storage: 2Gi
access_modes: ReadWriteOnce
storage_class_name: standard
ftspim:
log:
storage: 2Gi
access_modes: ReadWriteOnce
storage_class_name: standard
ftsind:
log:
storage: 2Gi
access_modes: ReadWriteOnce
storage_class_name: standard
personaui:
log:
storage: 2Gi

access_modes: ReadWriteOnce
storage_class_name: standard
restapi:
log:
storage: 2Gi
access_modes: ReadWriteOnce
storage_class_name: standard
wfl:
log:
storage: 2Gi
access_modes: ReadWriteOnce
storage_class_name: standard
sch:
log:
storage: 2Gi
access_modes: ReadWriteOnce
storage_class_name: standard
gds:
log:
storage: 2Gi
access_modes: ReadWriteOnce
storage_class_name: standard
appdata:
storage: 1Gi
access_modes: ReadWriteMany
storage_class_name: standard
########################################################################
# This is file storage
########################################################################
mq:
data:
storage: 1Gi
access_modes: ReadWriteOnce
storage_class_name: standard
mongodb:
data:
storage: 1Gi
access_modes: ReadWriteOnce
storage_class_name: standard
log:
storage: 1Gi
access_modes: ReadWriteOnce
storage_class_name: standard
ml:
log:
storage: 2Gi
access_modes: ReadWriteOnce
storage_class_name: standard
elasticsearch:
data:
claim: elasticsearch-data-volume-claim
storage: 1Gi
access_modes: ReadWriteOnce
storage_class_name: standard

################################

ml_service:
replica_count: 1
image: ipm-ml-ubi8
imagetag: 12.0.x
http:
ext_port: 31005
httpa:
ext_port: 31105
httpb:
ext_port: 31205
httpc:
ext_port: 31305
httpd:
ext_port: 31405
httpe:
ext_port: 31505
httpf:
ext_port: 31605
httpg:
ext_port: 31705
httph:
ext_port: 31805
httpi:
ext_port: 31905
readiness_probe:
initial_delay_seconds: 30
timeout_seconds: 900
period_seconds: 60
liveness_probe:
initial_delay_seconds: 30
timeout_seconds: 900
period_seconds: 60
resources:
requests:
memory: 2048Mi
cpu: 500m
limits:
memory: 3072Mi
cpu: 600m

admin_service:
replica_count: 1
image: ipm-admin-ubi8
imagetag: 12.0.x
memflag: -Xmx1024m -Xms256m
evmemflag: -Xmx128m -Xms48m
quememflag: -Xmx128m -Xms48m
admmemflag: -Xmx128m -Xms48m
readiness_probe:
initial_delay_seconds: 90
timeout_seconds: 900
period_seconds: 10
liveness_probe:
initial_delay_seconds: 120
timeout_seconds: 900
period_seconds: 60
resources:
requests:
memory: 2048Mi
cpu: 1000m
limits:
memory: 3072Mi
cpu: 1400m

wfl_service:
replica_count: 1
image: ipm-wfl-ubi8
imagetag: 12.0.x
memflag: -Xmx1024m -Xms48m
readiness_probe:
initial_delay_seconds: 30
timeout_seconds: 900
period_seconds: 60
liveness_probe:
initial_delay_seconds: 30
timeout_seconds: 900
period_seconds: 60
resources:
requests:
memory: 1536Mi
cpu: 600m
limits:
memory: 2048Mi
cpu: 800m

sch_service:
replica_count: 1
image: ipm-sch-ubi8
imagetag: 12.0.x
memflag: -Xmx1024m -Xms48m
readiness_probe:
initial_delay_seconds: 30
timeout_seconds: 900
period_seconds: 60
liveness_probe:
initial_delay_seconds: 30
timeout_seconds: 900
period_seconds: 60
resources:
requests:
memory: 1536Mi
cpu: 600m
limits:
memory: 2048Mi
cpu: 800m

gds_service:
replica_count: 1
image: ipm-gds-ubi8
imagetag: 12.0.x
memflag: -Xmx1024m -Xms48m
gds_app_type: Supply
gds_datapool_gln: 8380160030003
gds_self_gln: 0864471000477
company_name: <Company where GDS module loaded>
readiness_probe:
initial_delay_seconds: 30
timeout_seconds: 900
period_seconds: 60
liveness_probe:
initial_delay_seconds: 30
timeout_seconds: 900
period_seconds: 60
resources:
requests:
memory: 1536Mi
cpu: 600m
limits:
memory: 2048Mi
cpu: 800m

ftsind_service:
replica_count: 1
image: ipm-fts-indexer-ubi8
imagetag: 12.0.x

memflag: -Xmx1024m -Xms256m
readiness_probe:
initial_delay_seconds: 30
timeout_seconds: 900
period_seconds: 10
liveness_probe:
initial_delay_seconds: 60
timeout_seconds: 5000
period_seconds: 60
resources:
requests:
memory: 1536Mi
cpu: 500m
limits:
memory: 2048Mi
cpu: 600m

ftspim_service:
replica_count: 1
image: ipm-fts-pim-ubi8
imagetag: 12.0.x
memflag: -Xmx1024m -Xms256m
readiness_probe:
initial_delay_seconds: 30
timeout_seconds: 900
period_seconds: 10
liveness_probe:
initial_delay_seconds: 60
timeout_seconds: 900
period_seconds: 60
resources:
requests:
memory: 1536Mi
cpu: 500m
limits:
memory: 2048Mi
cpu: 600m

personaui_service:
replica_count: 1
image: ipm-personaui-ubi8
imagetag: 12.0.x
maxmem: 1280
readiness_probe:
initial_delay_seconds: 30
timeout_seconds: 900
period_seconds: 60
liveness_probe:
initial_delay_seconds: 30
period_seconds: 60
timeout_seconds: 900
resources:
requests:
memory: 1536Mi
cpu: 1000m
limits:
memory: 2048Mi
cpu: 1400m

restapi_service:
replica_count: 1
image: ipm-restapi-ubi8
imagetag: 12.0.x
maxmem: 1536
readiness_probe:
initial_delay_seconds: 90
timeout_seconds: 900
period_seconds: 10
liveness_probe:
initial_delay_seconds: 120
period_seconds: 60
timeout_seconds: 900
resources:
requests:
memory: 1536Mi
cpu: 1000m
limits:
memory: 2048Mi
cpu: 1400m

hazelcast_service:
replica_count: 1
image: ipm-hazelcast
imagetag: 4.2.4
readiness_probe:
initial_delay_seconds: 10
timeout_seconds: 900
period_seconds: 5
liveness_probe:
initial_delay_seconds: 15
period_seconds: 10
timeout_seconds: 900
resources:
requests:
memory: 1002Mi
cpu: 498m

limits:
memory: 1254Mi
cpu: 612m

elasticsearch_service:
replica_count: 1
image: ipm-elasticsearch
imagetag: 7.16.2
privileged: true
readiness_probe:
initial_delay_seconds: 60
timeout_seconds: 900
period_seconds: 10
liveness_probe:
initial_delay_seconds: 120
timeout_seconds: 900
period_seconds: 60
resources:
requests:
memory: 2006Mi
cpu: 498m
limits:
memory: 2506Mi
cpu: 612m

ibm_mq_service:
replica_count: 1
image: ipm-mq
imagetag: 9.2.0.0-r2
mq_qmgr_name: QM1
mq_channel_name: DEV.APP.SVRCONN
queue_connection_factory: ptpQcf
inbound_queue_name: DEV.QUEUE.1
outbound_queue_name: DEV.QUEUE.2
readiness_probe:
initial_delay_seconds: 30
timeout_seconds: 5000
period_seconds: 10
port: http
liveness_probe:
initial_delay_seconds: 30
timeout_seconds: 5000
period_seconds: 10
port: http
service: productmaster-mq-service
port: 1414
resources:
requests:
memory: 240Mi
cpu: 80m
limits:
memory: 300Mi
cpu: 100m

mongodb_service:
replica_count: 1
image: ipm-mongodb
imagetag: 4.0.22
privileged: true
memflag: -Xmx2048m -Xms500m
readiness_probe:
initial_delay_seconds: 90
timeout_seconds: 5000
period_seconds: 10
liveness_probe:
initial_delay_seconds: 120
timeout_seconds: 5000
period_seconds: 110
resources:
requests:
memory: 3216Mi
cpu: 488m
limits:
memory: 3500Mi
cpu: 600m

4. Update the parameters in the ipm_12.0.x_cr.yaml file before deployment, as required. For more information, see ipm_12.0.x_cr.yaml file parameters.
Note: Unless otherwise stated, the values are applicable to both IBM Product Master 12.0 Fix Pack 3 and Fix Pack 4 releases.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Configuring Product Master deployment YAML (Fix Pack 3 and later)

Before deployment, you need to configure the Product Master YAML file.

Before you begin


Ensure that the system requirements for Containers are met and that the required software is installed.

Procedure
1. Review and update secret values in the app_secrets.yaml file.
a. Encrypt your IBM® Db2® or Oracle password by using the following openssl command (requires OpenSSL 1.1.1g (FIPS) or later).

$ echo <Db2_password> | openssl enc -e -base64 -aes-256-cbc -salt -pbkdf2 -k <key_name>

Where,
<Db2_password> - Plain text password for the database.
<key_name> - Passphrase, any random alphanumeric string.
b. Generate a self-signed certificate, or procure a valid certificate that is issued by a certificate authority, to apply to the Ingress routes, which in turn are applied to the application URLs. Convert the certificate and key to Base64 format by using the following commands.

cat crt.cert | base64


cat cert.key | base64

Convert the output to a single line for the crt.cert and cert.key files.
2. Update the values in the app_secrets.yaml file in the following format.

apiVersion: v1
kind: Secret
metadata:
name: app-secret
namespace: <namespace>
type: Opaque
stringData:
db_type: "<db2 / oracle>"
db_name: "<DB name>"
db_host: "<IP/Hostname of DB>"
db_user: "<DB username>"
encryption_key: "<Encryption key used to encrypt DB2/Oracle password>"
db_pass: "<Encrypted DB password>"
db_port: "<DB server port>"
#Update below details only if you are going to use DAM/ML features, else you can remove these secret entries.
mongodb_name: "<mongoDB database>"
mongodb_user: "<mongoDB user>"
mongodb_pass: "<mongoDB plain text password>"
#Update below details only if you are going to use FTS features, else you can remove these secret entries.
elastic_user: "elastic"
elastic_pass: "<elastic plain text password>"
#Update below details only if you are going to use GDS feature, else you can remove these secret entries.
mq_app_user: "<IBM MQ app user which will be created on IBM MQ Pod>"
mq_app_pass: "<Set IBM MQ app user plain text password>"
mq_ui_pass: "<IBM MQ UI plain text password>"
#Update below details only if you are going to use WKC feature, else you can remove these secret entries.
cpd_host_url: "<CPD host URL>"
cpd_user_name: "<CPD User name>"
wkc_auth_api_key: "<WKC API Key>"
wkc_catalog_name: "<WKC Catalog name>"
#Update below details only if SMTP is required, else you can remove these secret entries.
smtp_address: "<SMTP server hostname>"
from_address: "<From email address>"
smtp_port: "<SMTP server port>"
smtp_user: "<SMTP username or API key>"
smtp_pass: "<SMTP plain text password>"
smtp_additional_props: "<SMTP Additional Properties>"
#Update below details only if SAML SSO is required, else you can remove these secret entries.
sso_company: "<Company code>"
sso_config_adminui: "<AdminUI SAML WebSSO configuration for IBM Liberty, in the format <samlWebSso20>..</samlWebSso20>>"
sso_config_personaui: "<PersonaUI SAML WebSSO configuration for IBM Liberty, in the format <samlWebSso20>..
</samlWebSso20>>"
sso_idp_metadata: "<Identity provider metadata file content>"
---
apiVersion: v1
kind: Secret
metadata:
name: tls-secret
namespace: <namespace>
type: kubernetes.io/tls
data:
tls.crt: <base64 converted domain name certificate in single line format>
tls.key: <base64 converted domain name certificate key in single line format>

3. Update the parameters in the ipm_12.0.x_cr.yaml file before deployment, as required. For more information, see ipm_12.0.x_cr.yaml file parameters.
Note: Unless otherwise stated, the values are applicable to both IBM Product Master 12.0 Fix Pack 3 and Fix Pack 4 releases.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Creating or migrating database schema

If you are installing the product for the first time, you need to run the schema creation scripts.

Before you begin


Install the database. For more information, see Installing and setting up the database.

Procedure
1. Set replica_count to 0 for all the services except one (admin or restapi).
2. Increase the readiness and liveness probe values for that service in the ipm_12.0.x_cr.yaml file as follows.

admin:
replica_count: 1
………………….
readiness_probe:
initial_delay_seconds: 3600
timeout_seconds: 7200
period_seconds: 7200
liveness_probe:
initial_delay_seconds: 3600
timeout_seconds: 7200
period_seconds: 7200
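
For each of the remaining services, a minimal override sets the replica count to zero; a sketch using the personaui_service key from the CR samples:

personaui_service:
  replica_count: 0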

3. Complete the deployment so that only one pod is up and running. For more information, see Deploying on the Kubernetes cluster and Deploying on the Red Hat
OpenShift cluster.
4. Stop the services on the pod by using the following commands.

cd /home/default
source .bash_profile
./stop.sh

5. Run the schema creation script. For more information, see Run schema creation scripts.
6. Optional: If the database schema exists, run only the migration scripts. For more information, see Database schema migration and Installing password security
update.
7. After you update the database schema, restore the default values in the ipm_12.0.x_cr.yaml file and perform the deployment again.
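
A sketch of reapplying the deployment with the restored values, using the same apply command as the deployment procedures (use oc apply on Red Hat OpenShift):

kubectl apply -f ipm_12.0.x_cr.yaml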

Related concepts
Installing and setting up the database

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Deploying on the Kubernetes cluster

IBM® Product Master services are deployed on the Kubernetes cluster by using OLM.

Procedure
Proceed as follows for volume creation when deploying on a local cluster.

Note: This is not applicable in a cloud environment, because volumes are auto-provisioned there. Also, you must specify an appropriate storageclass for each volume in the ipm_12.0.x_cr.yaml file.

1. Create directories for the volumes, with the required directory structure and proper permissions, on all the master and slave nodes of the Kubernetes cluster by using the following commands.

mkdir -p /mnt/ipm12/
cd /mnt/ipm12/
mkdir admin
mkdir ftsindlog
mkdir ftspimlog
mkdir gds
mkdir ml
mkdir mongodb
mkdir mongodblog
mkdir mq-data
mkdir personaui
mkdir sch
mkdir wfl
mkdir appdata
mkdir elasticsearch-data
mkdir restapi
mkdir magento-connector

Add a user with UID 5000 and GID 0.

adduser default --uid 5000 --gid 0


chown -R 5000.0 /mnt/ipm12



chmod -R 777 /mnt/ipm12

If you get an error while adding the user, remove the existing user that has UID 5000.
Note: If you are using on-premises Kubernetes, you need to share the appdata-volume among cluster nodes by using the Network File System (NFS) service. If you are deploying on a Kubernetes cloud environment, you must use File storage as the storageclass for the appdata-volume-claim. The rest of the volumes can use a "Block" storage kind of storageclass. You can also specify the File storage storageclass for all the volumes.
2. Create volumes by using the following command.

kubectl apply -f volumes.yaml
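
The volumes.yaml file is supplied with the deployment artifacts. For orientation only, a single hostPath PersistentVolume entry in it might look like the following sketch; the name, capacity, and storageClassName are assumptions and must match the volume details in the ipm_12.0.x_cr.yaml file.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: admin-log-volume
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: standard
  hostPath:
    path: /mnt/ipm12/admin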

3. View the created volumes by using the following command.

kubectl get pv

4. Create a namespace by using the following command.

kubectl create ns productmaster

5. Modify the app_secrets.yaml, catalog_source.yaml, operator_group.yaml, registry_secret.yaml, and subscription.yaml files to update <namespace>.
6. Deploy the operator by using the following command.

kubectl config set-context --current --namespace=productmaster


kubectl apply -f registry_secret.yaml
kubectl create serviceaccount ibm-productmaster-catalog -n productmaster
kubectl patch -n productmaster serviceaccount default -p '{"imagePullSecrets": [{"name": "ipm-registry"}]}'
kubectl patch -n productmaster serviceaccount serviceaccount -p '{"imagePullSecrets": [{"name": "ipm-registry"}]}'
kubectl patch -n productmaster serviceaccount ibm-productmaster-catalog -p '{"imagePullSecrets": [{"name": "ipm-registry"}]}'
kubectl apply -f catalog_source.yaml
kubectl apply -f operator_group.yaml
kubectl apply -f subscription.yaml
kubectl apply -f app_secrets.yaml

7. Deploy Product Master by using the following command.

kubectl apply -f ipm_12.0.x_cr.yaml

8. View all resources by using the following command.

kubectl get all -n productmaster

9. View specific resources by using the following command.

kubectl get <pods/deploy/svc/pvc/cm/pv> -n productmaster

Example

kubectl get pods -n productmaster

10. View the Nginx Ingress resources by using the following command. Note the port mappings.

kubectl get svc -n ingress-nginx

11. Check logs for a specific pod by using the following command.

kubectl logs -f <pod name> -n productmaster

12. You can log in to a specific pod by using the following command.

kubectl exec -it <pod name> -- /bin/bash

Note: It is recommended to stop the services by using /home/default/stop.sh before proceeding with any configuration changes on the pod.

Results
Access the Persona-based UI by using the following URL and the Nginx Ingress HTTPS port.
https://<hostname>:31521/mdm_ui/#/login

Example

https://ipm.persistent.com:31521/mdm_ui/#/login

Access the Admin UI by using the following URL and Nginx Ingress HTTPS port.
https://<hostname>:31521/

Example

https://ipm.persistent.com:31521/

You can get the Nginx external HTTPS port by using the following command.

kubectl get svc -A | grep nginx
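
The output resembles the following sample; the service name, addresses, and ports are illustrative and differ per cluster. The HTTPS node port is the value that 443 maps to (31521 in this sample).

ingress-nginx-controller   LoadBalancer   10.96.121.38   <none>   80:31080/TCP,443:31521/TCP   9d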

Related concepts
Uninstalling the product

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)



Installing OpenSearch on an OpenShift cluster

Using Helm, you can easily install and manage OpenSearch in an OpenShift® cluster. Helm is the best way to find, share, and use software built for OpenShift.

Before you begin


You need to have the following software installed:

helm

Procedure
1. Create a namespace by using the following command.

oc new-project opensearch

2. Go to the opensearch namespace and assign the required privileges for this service by using the following command.

oc adm policy add-scc-to-user privileged -z default

3. Create the PersistentVolumeClaim resources by using the following commands.

cat <<EOF| oc apply -f -


apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: opensearch-cluster-master-opensearch-cluster-master-0
namespace: opensearch
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 2Gi
selector:
matchLabels:
svc: opensearch-cluster-master-opensearch-cluster-master-0-volume
storageClassName: standard
volumeMode: Filesystem
volumeName: opensearch-cluster-master-opensearch-cluster-master-0-volume
EOF

cat <<EOF| oc apply -f -


apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: opensearch-cluster-master-opensearch-cluster-master-1
namespace: opensearch
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 2Gi
selector:
matchLabels:
svc: opensearch-cluster-master-opensearch-cluster-master-1-volume
storageClassName: standard
volumeMode: Filesystem
volumeName: opensearch-cluster-master-opensearch-cluster-master-1-volume
EOF

cat <<EOF| oc apply -f -


apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: opensearch-cluster-master-opensearch-cluster-master-2
namespace: opensearch
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 2Gi
selector:
matchLabels:
svc: opensearch-cluster-master-opensearch-cluster-master-2-volume
storageClassName: standard
volumeMode: Filesystem
volumeName: opensearch-cluster-master-opensearch-cluster-master-2-volume
EOF
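
Each claim above binds through its selector and volumeName to a pre-created PersistentVolume. A matching volume for the first claim might look like the following sketch; the hostPath and storage class are assumptions, and you create one such volume per claim.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: opensearch-cluster-master-opensearch-cluster-master-0-volume
  labels:
    svc: opensearch-cluster-master-opensearch-cluster-master-0-volume
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: standard
  hostPath:
    path: /mnt/opensearch/data-0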

4. Add "opensearch helm-charts" repository to Helm by using the following command.

helm repo add opensearch https://opensearch-project.github.io/helm-charts/

5. Update the available charts locally from the chart repositories by using the following command.



helm repo update

6. List the chart repositories configured for Helm by using the following command.

helm repo list

Output

NAME URL
opensearch https://opensearch-project.github.io/helm-charts/

7. Install OpenSearch by using the following command.

helm install my-deployment opensearch/opensearch

Example output

NAME: <deployment_name>
LAST DEPLOYED: Wed Jan 4 14:36:04 2023
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Watch all cluster members come up.
$ kubectl get pods --namespace=default -l app.kubernetes.io/component=opensearch-cluster-master -w

8. View pods by using the following commands.

oc get pods

Example output

NAME READY STATUS RESTARTS AGE


opensearch-cluster-master-0 1/1 Running 0 68s
opensearch-cluster-master-1 0/1 PodInitializing 0 68s
opensearch-cluster-master-2 1/1 Running 0 68s

9. View service details by using the following command.

oc get svc | grep open

Example output

opensearch-cluster-master ClusterIP <IP address> <none> 9200/TCP,9300/TCP 67m

10. Create a service named "opensearch-cluster-master-ext-name" of type "ExternalName" by using the following command.

cat <<EOF | oc create -f -


apiVersion: v1
kind: Service
metadata:
name: opensearch-cluster-master-ext-name
namespace: opensearch
spec:
type: ExternalName
#The format for the ExternalName is <service_name>.<namespace>.svc.cluster.local
externalName: opensearch-cluster-master.opensearch.svc.cluster.local
EOF

11. Generate your SSL certificates by using OpenSSL.


a. Generate an RSA private key by using the following command.

openssl genrsa -out root-ca-key.pem 2048

b. Generate a certificate authority (CA) certificate by using the following command.

openssl req -new -x509 -sha256 -key root-ca-key.pem -out root-ca.pem -days 730

You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [XX]: XXXXX
State or Province Name (full name) []:XXXXX
Locality Name (eg, city) [Default City]:
Organization Name (eg, company) [Default Company Ltd]: XXXXX
Organizational Unit Name (eg, section) []:XXXXX
Common Name (eg, your name or your server's hostname) []:ROOT CA
Email Address []:

c. Generate an admin key by using the following command.

openssl genrsa -out admin-key-temp.pem 2048

d. Convert your key to a PKCS#8 (P8) format key by using the following command.

openssl pkcs8 -inform PEM -outform PEM -in admin-key-temp.pem -topk8 -nocrypt -v1 PBE-SHA1-3DES -out admin-key.pem

e. Generate a certificate signing request (CSR) by using the following command.

openssl req -new -key admin-key.pem -out admin.csr



f. Specify "opensearch-cluster-master.opensearch.svc.cluster.local" as the common name.

You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [XX]: XXXXX
State or Province Name (full name) []:XXXXX
Locality Name (eg, city) [Default City]: XXXXX
Organization Name (eg, company) [Default Company Ltd]: XXXXX
Organizational Unit Name (eg, section) []:XXXXX
Common Name (eg, your name or your server's hostname) []:opensearch-cluster-master.opensearch.svc.cluster.local
Email Address []:

Please enter the following 'extra' attributes
to be sent with your certificate request
A challenge password []:
An optional company name []:

g. Create an empty v3.ext file, for example, by using the following command.

touch v3.ext

h. Copy the following content to the v3.ext file.

subjectKeyIdentifier = hash
authorityKeyIdentifier = keyid:always,issuer:always
basicConstraints = CA:TRUE
keyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment, keyAgreement, keyCertSign
subjectAltName = DNS:opensearch-cluster-master.opensearch.svc.cluster.local, DNS:opensearch-cluster-master, DNS:opensearch-cluster-master.opensearch, DNS:opensearch-cluster-master.opensearch.svc
issuerAltName = issuer:copy

i. Generate the admin certificate from the CSR by using the CA certificate and the v3.ext file with the following command.

openssl x509 -req -in admin.csr -CA root-ca.pem -CAkey root-ca-key.pem -CAcreateserial -sha256 -out admin.pem -days 730 -extfile v3.ext

Signature ok
subject=C=xx ST=xx L=xx O=xx OU=xx CN= opensearch-cluster-master.opensearch.svc.cluster.local
Getting CA Private Key

12. Upload the SSL certificates to each pod.


a. Go to the pods by using the following commands.

oc rsh opensearch-cluster-master-0

oc rsh opensearch-cluster-master-1

oc rsh opensearch-cluster-master-2

b. Create a folder on each pod by using the following command.

sh-4.2$ mkdir /usr/share/opensearch/data/ssl


sh-4.2$ exit

c. Copy the admin key, admin certificate, and CA certificate to the /usr/share/opensearch/data/ssl folder on each pod by using the following commands.

oc cp admin.pem opensearch-cluster-master-0:/usr/share/opensearch/data/ssl
oc cp admin-key.pem opensearch-cluster-master-0:/usr/share/opensearch/data/ssl
oc cp root-ca.pem opensearch-cluster-master-0:/usr/share/opensearch/data/ssl

oc cp admin.pem opensearch-cluster-master-1:/usr/share/opensearch/data/ssl
oc cp admin-key.pem opensearch-cluster-master-1:/usr/share/opensearch/data/ssl
oc cp root-ca.pem opensearch-cluster-master-1:/usr/share/opensearch/data/ssl

oc cp admin.pem opensearch-cluster-master-2:/usr/share/opensearch/data/ssl
oc cp admin-key.pem opensearch-cluster-master-2:/usr/share/opensearch/data/ssl
oc cp root-ca.pem opensearch-cluster-master-2:/usr/share/opensearch/data/ssl

d. Edit the opensearch-cluster-master-config ConfigMap to load this SSL configuration by using the following command.

oc edit configmap opensearch-cluster-master-config

e. Add the following details to the "http" section of the opensearch-cluster-master-config ConfigMap.

http:
enabled: true
pemcert_filepath: /usr/share/opensearch/data/ssl/admin.pem
pemkey_filepath: /usr/share/opensearch/data/ssl/admin-key.pem
pemtrustedcas_filepath: /usr/share/opensearch/data/ssl/root-ca.pem
allow_unsafe_democertificates: true
allow_default_init_securityindex: true
authcz:
admin_dn:
- CN= opensearch-cluster-master.opensearch.svc.cluster.local,OU=xx,O=xx,L=xx,C=xx

f. Edit the StatefulSet and set the replica count to 0 by using the following command.

oc edit sts opensearch-cluster-master

g. View pods by using the following command.



oc get pod

Output

NAME READY STATUS RESTARTS AGE


opensearch-cluster-master-0 1/1 Terminating 0 22m
opensearch-cluster-master-1 1/1 Terminating 0 22m
opensearch-cluster-master-2 1/1 Terminating 0 22m

h. Edit the StatefulSet again to restore the replica count so that the pods are re-created with SSL enabled, and then verify by using the following command.

oc get pod

Output

NAME READY STATUS RESTARTS AGE


opensearch-cluster-master-0 1/1 Running 0 10m
opensearch-cluster-master-1 1/1 Running 0 10m
opensearch-cluster-master-2 1/1 Running 0 10m

During the IBM® Product Master installation, specify the following in the app_secrets.yaml file:
Property Value
opensearch_host https://opensearch-cluster-master.opensearch.svc.cluster.local
opensearch_port 9200
opensearch_user admin
opensearch_pass admin
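
In YAML form, the entries in the app_secrets.yaml file look as follows; the values are taken from the table above, and you should change the default password for production use.

opensearch_host: "https://opensearch-cluster-master.opensearch.svc.cluster.local"
opensearch_port: "9200"
opensearch_user: "admin"
opensearch_pass: "admin"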

Related information
OpenSearch documentation: Helm

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Deploying on the Red Hat OpenShift cluster

IBM® Product Master services are deployed on the OpenShift® cluster by using OLM. As part of deployment, product images, operator images, and third-party
images are downloaded.

Procedure
Proceed as follows for volume creation when deploying on the local cluster.

Note: Not applicable in a cloud environment because volumes are auto-provisioned; in that case, you must specify an appropriate storageclass for each volume in the ipm_12.0.x_cr.yaml file.

1. Log in to your OpenShift cluster by using the following command.

oc login [Options]

[Options] - Check your oc command line tool documentation for the supported options.
2. Create directories for volumes with the required directory structure and proper permissions on all the Master and Slave nodes on the Red Hat® OpenShift cluster by using the following commands.

mkdir -p /mnt/ipm12/
cd /mnt/ipm12/
mkdir admin
mkdir ftsindlog
mkdir ftspimlog
mkdir gds
mkdir ml
mkdir mongodb
mkdir mongodblog
mkdir mq-data
mkdir personaui
mkdir sch
mkdir wfl
mkdir appdata
mkdir elasticsearch-data
mkdir restapi

Add a user with UID 5000 and GID 0.

adduser default --uid 5000 --gid 0


chown -R 5000.0 /mnt/ipm12
chmod -R 777 /mnt/ipm12

If you get an error while adding the user, remove the existing user that has UID 5000.
Note: If you are using on-premises OpenShift, you need to share the appdata-volume among cluster nodes by using the Network File System (NFS) service. If you are deploying on an OpenShift cloud environment, you must use File storage as the storageclass for the appdata-volume-claim. The rest of the volumes can use a "Block" storage kind of storageclass. You can also specify the File storage storageclass for all the volumes.
3. Create volumes by using the following command.

oc apply -f volumes.yaml

4. View the created volumes by using the following command.

oc get pv

5. Create a namespace by using the following command.

oc create ns productmaster

6. Modify the app_secrets.yaml, catalog_source.yaml, operator_group.yaml, registry_secret.yaml, and subscription.yaml files to update <namespace>.
7. Deploy the operator by using the following command.

oc config set-context --current --namespace=productmaster


oc apply -f registry_secret.yaml
oc create serviceaccount ibm-productmaster-catalog -n productmaster
oc patch -n productmaster serviceaccount default -p '{"imagePullSecrets": [{"name": "ipm-registry"}]}'
oc patch -n productmaster serviceaccount serviceaccount -p '{"imagePullSecrets": [{"name": "ipm-registry"}]}'
oc patch -n productmaster serviceaccount ibm-productmaster-catalog -p '{"imagePullSecrets": [{"name": "ipm-registry"}]}'
oc apply -f catalog_source.yaml
oc apply -f operator_group.yaml
oc apply -f subscription.yaml
oc apply -f app_secrets.yaml

8. Deploy Product Master by using the following command.

oc apply -f ipm_12.0.x_cr.yaml

9. View all resources by using the following command.

oc get all -n productmaster

10. View specific resources by using the following command.

oc get <pods/deploy/svc/pvc/cm/pv> -n productmaster

Example

oc get pods -n productmaster

11. Check logs for a specific pod by using the following command.

oc logs -f <pod name> -n productmaster

12. You can log in to a specific pod by using the following command.

oc rsh <pod name>

Or

oc exec -it <pod name> -- /bin/bash


source /home/default/.bash_profile

Note: It is recommended to stop the services by using /home/default/stop.sh before proceeding with any configuration changes on the pod.

Results
Access the Persona-based UI by using the following URL and ingress HTTPS port (443 by default).
https://<hostname>/mdm_ui/#/login

Example

https://ipm.persistent.com/mdm_ui/#/login

Access the Admin UI by using the following URL and Ingress HTTPS port (443 by default).
https://<hostname>/

Example

https://ipm.persistent.com/

Related concepts
Uninstalling the product

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Customizing the Docker container (Fix Pack 8 and later)

Product Master images can be customized according to your requirement.



Following is the list of customizable containers:

productmaster-admin
productmaster-fts-indexer
productmaster-fts-pim
productmaster-gds
productmaster-ml
productmaster-personaui
productmaster-restapi
productmaster-sch
productmaster-wfl

With this release, for a fresh installation, you do not need to create a custom image; you can use the default image for customization. During the deployment process, a fixed set of directories is created, and you need to copy your modified (custom) files to the specific directories listed in the Pods and configuration files mapping table.
Important: Do not update the endpoint URLs, ports, user names, passwords, or context paths in the ConfigMaps and XML files.

1. Configure custom files.


Starting the images creates the following folders, script files, and .template files (backup copies of the latest changes, which can be used for reference) in the /opt/MDM/public_html/customization folder inside the pods.

+-- config-files
+-- top-jars
+-- custom-jars
+-- user-scripts
+-- ml-training
+-- user-files
+-- restapi
+-- jars
+-- classes
+-- persona-ui

Copy the required .template file, create a new file without the .template extension, and then update the file as required. The changes get picked up with the next deployment or upgrade. The customization folder is shared among the customizable pods; any update to the folders on one pod becomes available to the remaining pods. After the pods are up with the new version, the deployment script copies the latest version of the file from the original location as a .template file in the customization folder. Compare your previous file with the new .template file and, if there is a change, update your file.
Note: You can use kubectl or oc client for copying your files and taking backup.
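
For example, the following sketch backs up the current file and then copies a customized version into the shared folder; the pod name is a placeholder, and any customizable pod works because the folder is shared.

oc cp <pod name>:/opt/MDM/public_html/customization/config-files/flow-config.xml ./flow-config.xml.bak
oc cp ./flow-config.xml <pod name>:/opt/MDM/public_html/customization/config-files/flow-config.xml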

config-files
Copy configurable files (XML). The application creates .template files for reference and you need to edit the files. For more information, see Pods and
configuration files mapping table.
Note: Do not change the log directory path in the log4j2.xml files.
top-jars
Copy the jar files to this directory. These jars are added in the $TOP folder.
custom-jars
Copy the jar files to this folder if you want to add them through the jars-custom.txt file.
user-scripts
There is a blank pre-config.sh and post-config.sh for each customizable pod. Pre-configuration or post-configuration (any processing before or after running the configureEnv.sh file) is done through the <service-name>_pre-config.sh and <service-name>_post-config.sh scripts. For any processing after the installation but before the service starts, copy the script to this folder. If needed, you can use the pre-config.sh and post-config.sh of a specific pod for further customization.
ml-training
Not currently used.
user-files
Copy all the admin service-related JS, JSP, CSS, and redirectBusinessUI.js files that need to be copied to the public_html/users folder.
restapi
jars - Copy custom jars.
classes - Copy custom classes with the package structure.
Example
/opt/MDM/public_html/customization/restapi/classes/com/ibm/rs/custom/api/CustomController.class
persona-ui
Copy PNG, ICO, CSS, JSON (config.json and mdmce-roles.json), and chunk files.

Table 1. Pods and configuration files mapping


Pods File name Use name
admin, pimcollector, gds, scheduler, workflow flow-config.xml flow-config.xml
admin, pimcollector, gds, restapi, scheduler, workflow log_cbe.xml log_cbe.xml
log_cbe_pattern.xml log_cbe_pattern.xml
data_entry_properties.xml data_entry_properties.xml
mdm-ehcache-config.xml mdm-ehcache-config.xml
log4j2.xml admin_log4j2.xml
history_subscriptions.xml history_subscriptions.xml
admin_properties.xml admin_properties.xml
docstore_mount.xml docstore_mount.xml
admin server.xml admin_server.xml
indexer log4j2.xml indexer_log4j2.xml
pimcollector log4j2.xml pim_log4j2.xml
ml machinelearning.ini machinelearning.ini
restapi log4j2.xml rest_log4j2.xml
mdm-rest-cache-config.xml mdm-rest-cache-config.xml
server.xml restapi_server.xml
personaui server.xml personaui_server.xml



2. Update properties files.
The values in the following properties files are updated through ConfigMaps for each customizable pod.
File ConfigMap Add prefix to the key Applicable to services
common.properties productmaster-custom-common common_ admin, pimcollector, gds, restapi, scheduler, workflow
mdm-cache-config.properties productmaster-custom-mdmcacheconfig cacheconfig_
dam.properties productmaster-custom-dam dam_ admin, restapi, scheduler, workflow
damConfig.properties productmaster-custom-damconfig damconfig_ restapi
restConfig.properties productmaster-custom-restconfig restconfig_
application.properties productmaster-custom-applicationindexer appindexer_ indexer
application.properties productmaster-custom-applicationpimcollector apppimcollector_ pimcollector
You need to update the "data" section of the ConfigMap. The updated value from the ConfigMap takes effect once pods are deleted by setting the replica count to
zero (CR file) and then reverting back the replica count to original value.

To access configMaps, use the following command.


Note: You can use kubectl or oc client for accessing configMaps.

oc get cm

To edit a configMap, use the following command:

oc edit cm <configmap name>

Example

oc edit cm productmaster-custom-common

Example

# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
# Make sure prefixes need to be added before key based upon above table
# NOTE: Please add prefix- 'common_' before each key as given in table
# If key name is examplekey then key should be named as given below
# common_examplekey: somevalue
apiVersion: v1
data:
common_blob.min.size: "50020"
common_enable_referer_check: "false"
common_enable_xframe_header: "true"
common_xframe_header_option: ALLOWALL
kind: ConfigMap
metadata:
creationTimestamp: "2022-10-07T06:27:52Z"
name: productmaster-custom-common
namespace: productmaster
ownerReferences:
- apiVersion: productmaster.ibm.com/v1
kind: ProductMaster
name: productmaster
uid: a282bcb4-3ee3-4a74-a753-2d043186886d
resourceVersion: "50158847"
uid: 8ec2e8c5-705f-4c18-a224-642f401d4e37

Customizing the Docker container (Fix Pack 7 and later)


Product Master images can be customized according to your requirement.
Customizing the Docker container (Fix Pack 5 and later)
Product Master images can be customized with custom tabs and extensions.
Customizing the Docker container (Fix Pack 3 and later)
Product Master images can be customized with custom tabs and extensions.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Customizing the Docker container (Fix Pack 7 and later)

Product Master images can be customized according to your requirement.

Following is the list of customizable containers:

productmaster-admin
productmaster-fts-indexer
productmaster-fts-pim
productmaster-gds
productmaster-ml
productmaster-personaui
productmaster-restapi
productmaster-sch
productmaster-wfl



With this release, for a fresh installation, you do not need to create a custom image; you can use the default image for customization. During the deployment process, a fixed set of directories is created, and you need to copy your modified (custom) files to the specific directories listed in the Pods and configuration files mapping table.
Important: Do not update the endpoint URLs, ports, user names, passwords, or context paths in the ConfigMaps and XML files.

1. Configure custom files.


Starting the images creates the following folders, script files, and .template files (backup copies of the latest changes, which can be used for reference) in the /opt/MDM/public_html/customization folder inside the pods.

+-- config-files
+-- top-jars
+-- custom-jars
+-- user-scripts
+-- ml-training
+-- user-files
+-- restapi
+-- jars
+-- classes
+-- persona-ui

Copy the required .template file, create a new file without the .template extension, and then update the file as required. The changes get picked up with the next deployment or upgrade. The customization folder is shared among the customizable pods; any update to the folders on one pod becomes available to the remaining pods. After the pods are up with the new version, the deployment script copies the latest version of the file from the original location as a .template file in the customization folder. Compare your previous file with the new .template file and, if there is a change, update your file.
Note: You can use kubectl or oc client for copying your files and taking backup.

config-files
Copy configurable files (XML). The application creates .template files for reference and you need to edit the files. For more information, see Pods and
configuration files mapping table.
Note: Do not change the log directory path in the log4j2.xml files.
top-jars
Copy the jar files to this directory. These jars are added in the $TOP folder.
custom-jars
Copy the jar files to this folder if you want to add them through the jars-custom.txt file.
user-scripts
There is a blank pre-config.sh and post-config.sh per customizable pod. Pre-configuration or post-configuration (Any processing before or after running the
configureEnv.sh file) through the <service-name>_pre-config.sh and <service-name>_post-config.sh scripts. For any processing after the installation but
before service starts, copy the script to this folder. If you want, you can use pre-config.sh and post-config.sh of a specific pod for further customization.
ml-training
Not currently used.
user-files
Copy all the admin service-related JS, JSP, CSS, and redirectBusinessUI.js files that need to be copied to the public_html/users folder.
restapi
jars - Copy custom jars.
classes - Copy custom classes with the package structure.
Example
/opt/MDM/public_html/customization/restapi/classes/com/ibm/rs/custom/api/CustomController.class
persona-ui
Copy PNG, ICO, CSS, JSON (config.json and mdmce-roles.json), and chunk files.

Table 1. Pods and configuration files mapping


Pods File name Use name
admin, pimcollector, gds, scheduler, workflow flow-config.xml flow-config.xml
admin, pimcollector, gds, restapi, scheduler, workflow log_cbe.xml log_cbe.xml
log_cbe_pattern.xml log_cbe_pattern.xml
data_entry_properties.xml data_entry_properties.xml
mdm-ehcache-config.xml mdm-ehcache-config.xml
log4j2.xml admin_log4j2.xml
history_subscriptions.xml history_subscriptions.xml
admin_properties.xml admin_properties.xml
docstore_mount.xml docstore_mount.xml
admin server.xml admin_server.xml
indexer log4j2.xml indexer_log4j2.xml
pimcollector log4j2.xml pim_log4j2.xml
ml machinelearning.ini machinelearning.ini
restapi log4j2.xml rest_log4j2.xml
mdm-rest-cache-config.xml mdm-rest-cache-config.xml
server.xml restapi_server.xml
personaui server.xml personaui_server.xml
2. Update properties files.
The values in the following properties files are updated through ConfigMaps for each customizable pod.
File ConfigMap Add prefix to the key Applicable to services
common.properties productmaster-custom-common common_ admin, pimcollector, gds, restapi, scheduler, workflow
mdm-cache-config.properties productmaster-custom-mdmcacheconfig cacheconfig_
dam.properties productmaster-custom-dam dam_ admin, restapi, scheduler, workflow
damConfig.properties productmaster-custom-damconfig damconfig_ restapi
restConfig.properties productmaster-custom-restconfig restconfig_
application.properties productmaster-custom-applicationindexer appindexer_ indexer
application.properties productmaster-custom-applicationpimcollector apppimcollector_ pimcollector



You need to update the "data" section of the ConfigMap. The updated value from the ConfigMap takes effect once pods are deleted by setting the replica count to
zero (CR file) and then reverting back the replica count to original value.

To access configMaps, use the following command.


Note: You can use kubectl or oc client for accessing configMaps.

oc get cm

To edit a configMap, use the following command:

oc edit cm <configmap name>

Example

oc edit cm productmaster-custom-common

Example

# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
# Make sure prefixes need to be added before key based upon above table
# NOTE: Please add prefix- 'common_' before each key as given in table
# If key name is examplekey then key should be named as given below
# common_examplekey: somevalue
apiVersion: v1
data:
common_blob.min.size: "50020"
common_enable_referer_check: "false"
common_enable_xframe_header: "true"
common_xframe_header_option: ALLOWALL
kind: ConfigMap
metadata:
creationTimestamp: "2022-10-07T06:27:52Z"
name: productmaster-custom-common
namespace: productmaster
ownerReferences:
- apiVersion: productmaster.ibm.com/v1
kind: ProductMaster
name: productmaster
uid: a282bcb4-3ee3-4a74-a753-2d043186886d
resourceVersion: "50158847"
uid: 8ec2e8c5-705f-4c18-a224-642f401d4e37

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Customizing the Docker container (Fix Pack 5 and later)

Product Master images can be customized with custom tabs and extensions.

Most of the customization requires additional JAR files and JSP files, with modifications to the data_entry_properties.xml, flow-config.xml, and common.properties configuration files. The Product Master application runs with the default user identifier (UID) 5000 and group identifier (GID) 0. Ensure that you use the appropriate user while customizing the Docker images.

You can update the properties values by using any of the following methods.

Create a ConfigMap and use the values in the post-config.sh file.

Hardcode the values in the post-config.sh file.

With a ConfigMap, you do not need to re-create the Docker image to change a property value. You can change the property value directly in the ConfigMap and restart the associated pod.
Following are the ConfigMap names.

productmaster-admin-configmap
productmaster-personaui-configmap
productmaster-restapi-configmap
productmaster-sch-configmap
productmaster-wfl-configmap
productmaster-fts-indexer-configmap
productmaster-fts-pim-configmap
productmaster-ml-configmap
productmaster-gds-configmap

You can create these ConfigMaps before deployment.

Add all your required properties in the productmaster-admin-configmap.yaml file in the following format.

apiVersion: v1
data:
XFRAME_HEADER_OPTION: ALLOWALL
RICH_SEARCH_DEFAULT_VIEW_INDEXED_ONLY: false
kind: ConfigMap
metadata:



name: productmaster-admin-configmap
namespace: <>

Apply the productmaster-admin-configmap.yaml file in the namespace where your deployment is present or where you are going to deploy. If you expect similar changes in other pods, like personaui or scheduler, follow the same steps for those services too.
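
For example:

kubectl apply -f productmaster-admin-configmap.yaml -n <namespace>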

Following is the automation process to place the custom JAR files in the $TOP folder or configure additional dependencies by using the jars-custom.txt file. The JAR
files get added to the Product Master class path.

+-- top-jars
+-- custom-jars
+-- Dockerfile
+-- public_html
¦ +-- user
+-- user-scripts

top-jars
Copy the JAR files to this directory. These files are added in the $TOP folder.
custom-jars
Copy the JAR files to this folder if you want to add them through the jars-custom.txt file.
user-scripts
Pre-configuration or Post-configuration (Any processing before or after the configureEnv.sh file) through pre-config.sh or post-config.sh scripts. For any processing
that happens after the installation but before starting the service, copy the script to this folder.
public_html/user
Any other files that need to be copied into the image.

Sample Dockerfile to update the Admin UI image

FROM ipm-admin-ubi8:12.0.x
USER root
RUN mkdir -p /home/default/user-scripts; \
mkdir -p /home/default/custom-jars; \
mkdir -p /home/default/files
COPY user-scripts/*.sh /home/default/user-scripts/
COPY custom-jars/*.jar /home/default/custom-jars/
COPY etc/default/flow-config.xml /home/default/files/
COPY etc/default/data_entry_properties.xml /home/default/files/
COPY public_html/user/* /home/default/files/
RUN chmod 755 /home/default/user-scripts/*.sh; \
chown 5000.0 /home/default/ -R;
USER 5000
WORKDIR /home/default
ENTRYPOINT ["./cmd.sh"]

Additional configuration is required for custom tabs and entry preview functions that involve interaction between the UI elements in the Admin UI and the Persona-based
UIs. You can perform the configuration by using the post-config.sh script. When the container is initialized, the post-config.sh script runs and updates the configuration. In
the following example, an entry preview script is configured to open a new results window.

Sample Dockerfile

FROM ipm-restapi-ubi8:12.0.x
USER root
RUN mkdir -p /home/default/user-scripts; \
mkdir -p /home/default/custom-jars; \
mkdir -p /home/default/files
COPY user-scripts/*.sh /home/default/user-scripts/
COPY custom-jars/*.jar /home/default/custom-jars/
COPY etc/default/flow-config.xml /home/default/files/
COPY etc/default/data_entry_properties.xml /home/default/files/
COPY public_html/user/* /home/default/files/
RUN chmod 755 /home/default/user-scripts/*.sh; \
chown 5000.0 /home/default/ -R;
USER 5000
WORKDIR /home/default
ENTRYPOINT ["./cmd.sh"]

Sample post-config.sh script

#!/bin/sh
echo "Starting post-config.sh script....."
cp -f /home/default/files/flow-config.xml /opt/MDM/etc/default/
cp -f /home/default/files/data_entry_properties.xml /opt/MDM/etc/default/
cp -f /home/default/files/CustomAction.jsp /opt/MDM/public_html/user/CustomAction.jsp
cp -f /home/default/files/SamplePage.jsp /opt/MDM/public_html/user/SamplePage.jsp
cp -f /home/default/files/systemTab.css /opt/MDM/public_html/user/systemTab.css
sed -i "s/^xframe_header_option=.*/xframe_header_option=$XFRAME_HEADER_OPTION/g" /opt/MDM/etc/default/common.properties
sed -i "s/^rich_search_default_view_indexed_only=.*/rich_search_default_view_indexed_only=$RICH_SEARCH_DEFAULT_VIEW_INDEXED_ONLY/g" /opt/MDM/etc/default/common.properties

Similarly, the pre-config.sh file can be used to customize the application. The script runs before the configureEnv.sh file runs when the container is initialized. In the following example, Angular-based entry preview, custom tool, and custom tab scripts are added. Similarly, configuration for custom REST APIs can be added in the ipm-restapi-ubi8:12.0.x image.

Sample Docker file

FROM ipm-personaui-ubi8:12.0.x
USER root
RUN mkdir -p /home/default/user-scripts; \
mkdir -p /home/default/custom-jars; \
mkdir -p /home/default/files;\
mkdir -p /home/default/files/personaCustomTabsLibrary.chunk;\



mkdir -p /home/default/files/classes/
COPY user-scripts/*.sh /home/default/user-scripts/
COPY angular-custom-code/*.json /home/default/files/
COPY angular-custom-code/*.js /home/default/files/personaCustomTabsLibrary.chunk/
COPY rest-custom-code/*.class /home/default/files/classes/
RUN chmod 755 /home/default/user-scripts/*.sh; \
chown 5000.0 /home/default/ -R;
USER 5000
WORKDIR /home/default
ENTRYPOINT ["./cmd.sh"]

Sample pre-config.sh file

#!/bin/sh
echo "Starting pre-config.sh script....."
cp -ar /home/default/files/redirectBusinessUI.js /opt/MDM/public_html/user/redirectBusinessUI.js
cp -ar /home/default/files/preview-script.json /opt/MDM/mdmui/custom/ui/json/preview-script.json
cp -ar /home/default/files/persona-custom-tabs.json /opt/MDM/mdmui/custom/ui/json/persona-custom-tabs.json
cp -ar /home/default/files/persona-custom-tools.json /opt/MDM/mdmui/custom/ui/json/persona-custom-tools.json
cp -ar /home/default/files/personaCustomTabsLibrary.chunk/personaCustomTabsLibrary.bundle.js /opt/MDM/mdmui/custom/ui/js/personaCustomTabsLibrary.bundle.js
cp -ar /home/default/files/personaCustomTabsLibrary.chunk/personaCustomToolsLibrary.bundle.js /opt/MDM/mdmui/custom/ui/js/personaCustomToolsLibrary.bundle.js
cp -ar /home/default/files/personaCustomTabsLibrary.chunk/previewScriptLibrary.bundle.js /opt/MDM/mdmui/custom/ui/js/previewScriptLibrary.bundle.js
cp -ar /home/default/files/classes /opt/MDM/mdmui/custom/api/

If you want to change the value of a property after the custom images are deployed, you can adjust the value anytime from the OpenShift UI > ConfigMap and then restart the pod.

If you are using Kubernetes, you can edit the ConfigMap by using the following command and then restart the pod.

kubectl edit configmap productmaster-admin-configmap

You can follow the same steps for other pods that are using custom images.

Best practice for starting or stopping Product Master services


Make the replica count 0 for the required service in the ipm_12.0.x_cr.yaml file and apply the CR file. As soon as you apply the CR file, the associated service pod is removed.
To restart the service, change the replica count back to 1 and apply the CR file. A new pod is created and started.
Database backup or maintenance - Database maintenance tasks like REORG or backup should be performed only when all the pods are in the stopped state.
Product maintenance scripts - Scripts like delete_old_versions.sh or estimate_old_version.sh do not need the pods to be in a stopped state. The recommendation is to run the scripts when the system is under low load. After you log in to the pod, run the following command to load the required environment variables of Product Master.

source /home/default/.bash_profile

The only product scripts that should be run with all pods in the stopped state are create_schema.sh and migration scripts. For more information, see Creating or
migrating database schema.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Customizing the Docker container (Fix Pack 3 and later)

Product Master images can be customized with custom tabs and extensions.

Most of the customization requires additional JAR files and JSP files, with modifications to the data_entry_properties.xml, flow-config.xml, and common.properties configuration files. The Product Master application runs with the default UID 5000 and GID 0. Ensure that you use the appropriate user while customizing the Docker images.
Following is the automation process to place the custom JAR files in the $TOP folder or configure additional dependencies by using the jars-custom.txt file. The JAR files get added to the Product Master class path.

+-- top-jars
+-- custom-jars
+-- Dockerfile
+-- public_html
¦ +-- user
+-- user-scripts

top-jars
Copy the JAR files to this directory. These JAR files are added in the $TOP folder.
custom-jars
Copy the JAR files to this folder if you want to add them through the jars-custom.txt file.
user-scripts
Pre-configuration or Post-configuration (Any processing before or after the configureEnv.sh file) through pre-config.sh or post-config.sh scripts. For any processing
that happens after the installation but before starting the service, copy the script to this folder.
public_html/user
Any other files that need to be copied into the image.

Sample Dockerfile to update the Admin UI image



FROM ipm-admin-ubi8:12.0.x
USER root
RUN mkdir -p /home/default/user-scripts; \
mkdir -p /home/default/custom-jars; \
mkdir -p /home/default/files
COPY user-scripts/*.sh /home/default/user-scripts/
COPY custom-jars/*.jar /home/default/custom-jars/
COPY etc/default/flow-config.xml /home/default/files/
COPY etc/default/data_entry_properties.xml /home/default/files/
COPY public_html/user/* /home/default/files/
RUN chmod 755 /home/default/user-scripts/*.sh; \
chown 5000.0 /home/default/ -R;
USER 5000
WORKDIR /home/default
ENTRYPOINT ["./cmd.sh"]

Additional configuration is required for custom tabs and entry preview functions that involve interaction between the UI elements in the Admin UI and the Persona-based
UIs. You can perform the configuration by using the post-config.sh script. When the container is initialized, the post-config.sh script runs and updates the configuration. In
the following example, an entry preview script is configured to open a new results window.

Sample Dockerfile

FROM ipm-restapi-ubi8:12.0.x
USER root
RUN mkdir -p /home/default/user-scripts; \
mkdir -p /home/default/custom-jars; \
mkdir -p /home/default/files
COPY user-scripts/*.sh /home/default/user-scripts/
COPY custom-jars/*.jar /home/default/custom-jars/
COPY etc/default/flow-config.xml /home/default/files/
COPY etc/default/data_entry_properties.xml /home/default/files/
COPY public_html/user/* /home/default/files/
RUN chmod 755 /home/default/user-scripts/*.sh; \
chown 5000.0 /home/default/ -R;
USER 5000
WORKDIR /home/default
ENTRYPOINT ["./cmd.sh"]

Sample post-config.sh script

#!/bin/sh
echo "Starting post-config.sh script....."
cp -f /home/default/files/flow-config.xml /opt/MDM/etc/default/
cp -f /home/default/files/data_entry_properties.xml /opt/MDM/etc/default/
cp -f /home/default/files/CustomAction.jsp /opt/MDM/public_html/user/CustomAction.jsp
cp -f /home/default/files/SamplePage.jsp /opt/MDM/public_html/user/SamplePage.jsp
cp -f /home/default/files/systemTab.css /opt/MDM/public_html/user/systemTab.css
sed -i "s/^xframe_header_option=.*/xframe_header_option=ALLOWALL/g" /opt/MDM/etc/default/common.properties

To display the results, perform the following additional configuration:

Setting the value of the xframe_header_option=ALLOWALL for both the ipm-admin and the ipm-restapi containers in the common.properties property.
Initializing the ipm-persona container by configuring the Admin UI port by using the MDM_APP_SVR_PORT argument.

Similarly, the pre-config.sh file can be used to customize the application. The script runs before the configureEnv.sh file runs when the container is initialized. In the following
example, Angular-based entry preview, custom tool, and custom tab scripts are added. Configuration for custom REST APIs can be added similarly in the ipm-restapi-
ubi8:12.0.x image.

Sample Dockerfile

FROM ipm-personaui-ubi8:12.0.x
USER root
RUN mkdir -p /home/default/user-scripts; \
mkdir -p /home/default/custom-jars; \
mkdir -p /home/default/files;\
mkdir -p /home/default/files/personaCustomTabsLibrary.chunk;\
mkdir -p /home/default/files/classes/
COPY user-scripts/*.sh /home/default/user-scripts/
COPY angular-custom-code/*.json /home/default/files/
COPY angular-custom-code/*.js /home/default/files/personaCustomTabsLibrary.chunk/
COPY rest-custom-code/*.class /home/default/files/classes/
RUN chmod 755 /home/default/user-scripts/*.sh; \
chown 5000.0 /home/default/ -R;
USER 5000
WORKDIR /home/default
ENTRYPOINT ["./cmd.sh"]

Sample pre-config.sh script

#!/bin/sh
echo "Starting pre-config.sh script....."
cp -ar /home/default/files/redirectBusinessUI.js /opt/MDM/public_html/user/redirectBusinessUI.js
cp -ar /home/default/files/preview-script.json /opt/MDM/mdmui/custom/ui/json/preview-script.json
cp -ar /home/default/files/persona-custom-tabs.json /opt/MDM/mdmui/custom/ui/json/persona-custom-tabs.json
cp -ar /home/default/files/persona-custom-tools.json /opt/MDM/mdmui/custom/ui/json/persona-custom-tools.json
cp -ar /home/default/files/personaCustomTabsLibrary.chunk/personaCustomTabsLibrary.bundle.js /opt/MDM/mdmui/custom/ui/js/personaCustomTabsLibrary.bundle.js
cp -ar /home/default/files/personaCustomTabsLibrary.chunk/personaCustomToolsLibrary.bundle.js /opt/MDM/mdmui/custom/ui/js/personaCustomToolsLibrary.bundle.js
cp -ar /home/default/files/personaCustomTabsLibrary.chunk/previewScriptLibrary.bundle.js /opt/MDM/mdmui/custom/ui/js/previewScriptLibrary.bundle.js
cp -ar /home/default/files/classes /opt/MDM/mdmui/custom/api/

IBM Product Master 12.0 Fix Pack 8



Operating Systems: AIX, Linux, and Windows (Workbench only)

Enabling Horizontal Pod Autoscaler (HPA)

The HPA changes the resource allocation by increasing or decreasing the number of pods in response to CPU or memory consumption.

Before you begin


The metrics-server pod must be installed and running on the Kubernetes cluster.
Note: There is a default metric server in the Red Hat® OpenShift® cluster.
To create the metrics server on your Kubernetes cluster, save the following YAML content as metric_server.yaml, and then run the following command.

kubectl apply -f metric_server.yaml

YAML content

apiVersion: v1
kind: ServiceAccount
metadata:
labels:
k8s-app: metrics-server
name: metrics-server
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
k8s-app: metrics-server
rbac.authorization.k8s.io/aggregate-to-admin: "true"
rbac.authorization.k8s.io/aggregate-to-edit: "true"
rbac.authorization.k8s.io/aggregate-to-view: "true"
name: system:aggregated-metrics-reader
rules:
- apiGroups:
- metrics.k8s.io
resources:
- pods
- nodes
verbs:
- get
- list
- watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
k8s-app: metrics-server
name: system:metrics-server
rules:
- apiGroups:
- ""
resources:
- nodes/metrics
verbs:
- get
- apiGroups:
- ""
resources:
- pods
- nodes
verbs:
- get
- list
- watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
labels:
k8s-app: metrics-server
name: metrics-server-auth-reader
namespace: kube-system
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
name: metrics-server
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
labels:
k8s-app: metrics-server



name: metrics-server:system:auth-delegator
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:auth-delegator
subjects:
- kind: ServiceAccount
name: metrics-server
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
labels:
k8s-app: metrics-server
name: system:metrics-server
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:metrics-server
subjects:
- kind: ServiceAccount
name: metrics-server
namespace: kube-system
---
apiVersion: v1
kind: Service
metadata:
labels:
k8s-app: metrics-server
name: metrics-server
namespace: kube-system
spec:
ports:
- name: https
port: 443
protocol: TCP
targetPort: https
selector:
k8s-app: metrics-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
k8s-app: metrics-server
name: metrics-server
namespace: kube-system
spec:
selector:
matchLabels:
k8s-app: metrics-server
strategy:
rollingUpdate:
maxUnavailable: 0
template:
metadata:
labels:
k8s-app: metrics-server
spec:
containers:
- args:
- --cert-dir=/tmp
- --secure-port=4443
- --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
- --kubelet-use-node-status-port
- --metric-resolution=15s
- --kubelet-insecure-tls=true
image: k8s.gcr.io/metrics-server/metrics-server:v0.6.2
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 3
httpGet:
path: /livez
port: https
scheme: HTTPS
periodSeconds: 10
name: metrics-server
ports:
- containerPort: 4443
name: https
protocol: TCP
readinessProbe:
failureThreshold: 3
httpGet:
path: /readyz
port: https
scheme: HTTPS
initialDelaySeconds: 20
periodSeconds: 10
resources:
requests:
cpu: 100m
memory: 200Mi
securityContext:
allowPrivilegeEscalation: false



readOnlyRootFilesystem: true
runAsNonRoot: true
runAsUser: 1000
volumeMounts:
- mountPath: /tmp
name: tmp-dir
nodeSelector:
kubernetes.io/os: linux
priorityClassName: system-cluster-critical
serviceAccountName: metrics-server
volumes:
- emptyDir: {}
name: tmp-dir
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
labels:
k8s-app: metrics-server
name: v1beta1.metrics.k8s.io
spec:
group: metrics.k8s.io
groupPriorityMinimum: 100
insecureSkipTLSVerify: true
service:
name: metrics-server
namespace: kube-system
version: v1beta1
versionPriority: 100

Ensure that you have configured apiVersion: autoscaling/v2 on the Kubernetes cluster. Run the following command to check.

kubectl api-versions
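
The output should include the autoscaling/v2 entry, for example (truncated sample):

autoscaling/v1
autoscaling/v2
batch/v1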

About this task


You need to plan capacity based on the peak-time workload; even then, the cluster might be under-utilized most of the time, while the available resources cannot be used by workloads that require more resources. HPA introduces a way to dynamically scale up and down driven by the user workloads. HPA is available at the service level, which includes the platform layer, and uses both CPU and memory metrics (key resource metrics that are highly correlated). All the pods for a service are scaled proportionally.

A scaling policy controls how the container platform HPA scales pods. Scaling policies allow you to restrict the rate at which HPAs scale pods up or down by setting a
specific number or specific percentage to scale in a specified period of time.

behavior:
scaleUp:
stabilizationWindowSeconds: 0
policies:
- type: Pods
value: 1
periodSeconds: 60
- type: Percent
value: 10
periodSeconds: 60
scaleDown:
stabilizationWindowSeconds: 60
policies:
- type: Pods
value: 2
periodSeconds: 60
- type: Percent
value: 10
periodSeconds: 60
selectPolicy: Max

Where,

behavior
Specifies the direction for the scaling policy, with valid value as either scaleDown or scaleUp.
policies
Specifies the scaling policy.
type
Specifies whether the policy scales by a specific number of pods or a percentage of pods during each iteration. By default, the value is pods.
type: pods value
Specifies the amount of scaling by the number of pods during each iteration.
periodSeconds
Specifies the length of a scaling iteration. By default, the value is 15 seconds.
type: percentage value
Specifies the amount of scaling by the percentage of pods during each iteration.
selectPolicy
Specifies which policy to use first, if multiple policies are set. Valid values are Max (use the policy that allows the highest amount of change), Min (use the policy that allows the lowest amount of change), or Disabled (prevent the HPA from scaling). By default, the value is Max.
stabilizationWindowSeconds
Specifies the time period during which the HPA should look at the desired states. By default, the value is 0.
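
As a worked example with the sample policy above: if 20 pods are running, a scale-up iteration can add max(1 pod, 10% of 20 = 2 pods) = 2 pods every 60 seconds because selectPolicy is Max, while a scale-down first waits for the 60-second stabilization window and then removes at most max(2 pods, 10% of 20 = 2 pods) = 2 pods every 60 seconds.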

Procedure
Proceed as in the following sample to enable HPA for an admin_service pod after deployment.



1. In the ipm_12.0.x_cr.yaml file, set the value of the autoScaleConfig property to "true".

apiVersion: productmaster.ibm.com/v1
kind: ProductMaster
metadata:
name: productmaster
spec:
license:
accept: true
deployment_platform: 'openshift'
#deployment_platform: 'k8s'
version: 12.0.8
autoScaleConfig: true
domain_name: ""

enable:
fts: 1
vendor: 1
dam: 1
ml: 1
gds: 1
wkc: 0
sso: 1
mountmgr: 1
image:
registry: registry.ng.bluemix.net/product_master
pullpolicy: Always

productmastersecret: "ipm-registry"
app_secret_name: "app-secret"
random_secret_name: "random-secret"
certs_secret_name: "tls-secret"

enable_volume_selectors: true

volume_details:
admin:
log:
name: admin-log-volume
storage: 2Gi
access_modes: ReadWriteOnce
storage_class_name: standard
################################
admin_service:
replica_count: 1
minReplicas: 1
maxReplicas: 10
image: ipm-admin-ubi8
imagetag: 12.0.8
memflag: -Xmx1024m -Xms256m
evmemflag: -Xmx128m -Xms48m
quememflag: -Xmx128m -Xms48m
admmemflag: -Xmx128m -Xms48m
readiness_probe:
initial_delay_seconds: 90
timeout_seconds: 900
period_seconds: 10
liveness_probe:
initial_delay_seconds: 120
timeout_seconds: 900
period_seconds: 60
resources:
requests:
memory: 2048Mi
cpu: 1000m
limits:
memory: 3072Mi
cpu: 1400m

Where,

replica_count
Specifies the number of identical running pods. The value of this property cannot be more than 1 for the third-party services (Elasticsearch, Hazelcast,
MongoDB, and IBM® MQ).
minReplicas
Specifies the minimum number of replicas during deployment.
maxReplicas
Specifies the maximum number of replicas to scale to.

2. Verify and monitor the HPA status by using the following command.

$ oc get hpa
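
The output resembles the following sample; the values are illustrative.

NAME                  REFERENCE                        TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
productmaster-admin   Deployment/productmaster-admin   41%/70%   1         10        2          3d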

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Configuring SAML SSO (Accelerated deployment)



IBM® Product Master supports SAML 2.0 web single sign-on with Just In Time (JIT) provisioning for the Admin UI and Persona-based UI.

About this task


Security Assertion Markup Language (SAML) is an OASIS open standard for representing and exchanging user identity, authentication, and attribute information. JIT
enables more efficient integration of SAML to provide a seamless application login experience for users as it automates user account and group creation. SAML JIT now
does not need a local LDAP for user authentication and instead relies on SAML attributes that are received as claims in the SAML assertion to retrieve user attributes and
groups. WebSphere® Liberty acts as a SAML service provider. A web user authenticates to a SAML identity provider, which produces a SAML assertion, and the WebSphere
Liberty SAML service provider consumes the SAML assertion to establish a security context for the web user and grants access to the IBM Product Master Admin UI and
Persona-based UI web applications. Admin UI and Persona-based UI applications extract SAML attributes that are received as claims in the SAML assertion to create
users and roles in the Product Master. It is important to set the SAML assertion attribute mappings on the SSO partners.
Note: You must have a valid role in the Product Master to be able to log in to the application. Roles created as a result of the SAML login are created with default ACG
permissions. It is the Administrator's responsibility to assign the correct role to the user or update the permission in the roles. The newly created roles are not added to
the $TOP/mdmui/dynamic/mdm-rest/mdmce-roles.json file. The user is assigned a basic role, and allowed login to the Persona-based UI. You can disable the role
creation in the SSO Configuration lookup table. For more information, see Configuring SSO properties.

Procedure
1. Configure the Product Master.
2. Enable the SAML Web browser SSO.
3. Configure SSO partners.
4. Configure SSO in the browser.

Configure the IBM Product Master (Accelerated deployment)


Before configuring SAML SSO, complete the following task.
Enable the SAML Web browser SSO (Accelerated deployment)
To enable SAML Service Provider Initiated (SP-Initiated) web SSO (SSO), complete the following task.
Configure SSO partners (Accelerated deployment)
To configure SSO partners, complete the following tasks.
Configure SSO in the browser (Accelerated deployment)
To configure your browser to authenticate SSO, complete the following task in your browser.
Timeout behavior in the Persona UI
When SAML SSO is enabled, the session timeout for Product Master Persona-based UI is based on the following properties.
Known issues and limitations
Certain product features assume that the system is deployed by using a centralized deployment model where services share the
file system and product binaries. With containerized deployments, services no longer have a common file system and are working in isolation.

Related concepts
Troubleshooting the SAML SSO issues

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Configure the IBM Product Master (Accelerated deployment)


Before configuring SAML SSO, complete the following task.

Configuring SSO properties


You need to enable the SSO properties. To do so, proceed as follows.

1. Enable SSO authentication in the Login.wpcs file. The Login.wpcs file identifies the authentication mechanism; to enable SSO authentication, you must set the
wpcOnlyAuthentication flag in this file to false.
a. Click Data Model Manager > Scripting > Scripts Console.
b. Select Login script from the drop-down list.
c. Click Edit for the Login.wpcs script.
d. Find and set the wpcOnlyAuthentication flag to false.
2. Populate SAML attributes in the SSO Configuration lookup table from Admin UI.
a. Import the mdm-env.zip file located at $TOP/mdmui/env-export/mdm-env, if not already done.
b. Go to Product Manager > Lookup Table > Lookup Table Console.
c. Select the SSO Configuration lookup table and add a new entry.
d. Populate all the attributes as follows.
Id
The primary key of the lookup table entry, auto-generated.
SSO Type
SAMLv2.0
Create Role
After logging in to Product Master:
True: User roles are created if the roles do not exist.
False: User roles are not created and the Administrator needs to manually create roles.
First Name Attribute
The user attribute, which represents the given name in the SAML assertion, for example, http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name.
Last Name Attribute
The user attribute, which represents the surname in the SAML assertion, for example, http://schemas.xmlsoap.org/ws/2005/05/identity/claims/surname.
Mail ID Attribute
The user attribute, which represents the mail ID in the SAML assertion, for example, http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress.
Telephone Number Attribute
The user attribute, which represents the telephone number in the SAML assertion, for example, http://schemas.xmlsoap.org/ws/2005/05/identity/claims/telephone.
Fax Number Attribute
The user attribute, which represents the fax number in the SAML assertion, for example, http://schemas.xmlsoap.org/ws/2005/05/identity/claims/fax.
Postal Address Attribute
The user attribute, which represents the postal address in the SAML assertion, for example, http://schemas.xmlsoap.org/ws/2005/05/identity/claims/address.
Title Attribute
The user attribute, which represents the title in the SAML assertion, for example, http://schemas.xmlsoap.org/ws/2005/05/identity/claims/title.
Roles Attribute
The member-of attribute, which represents the group in the SAML assertion, for example, http://schemas.xmlsoap.org/claims/Group.
Organization Attribute
The user attribute, which represents the organization in the SAML assertion, for example, http://schemas.xmlsoap.org/claims/organization.
This attribute is required only for the Vendor Persona users. The vendor user is created under the Vendor Organization Hierarchy based on the value
of the organization attribute. Possible values are: Vendor1OU, ParentOU/Vendor1OU, and so on.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Enable the SAML Web browser SSO (Accelerated deployment)


To enable SAML Service Provider Initiated (SP-Initiated) web SSO (SSO), complete the following task.

Procedure
1. Update the app_secrets.yaml file to set the SSO secrets as follows.

sso_company: "<Company code>"
sso_config_adminui: "<AdminUI SAML WebSSO configuration for IBM Liberty, in the format <samlWebSso20>..</samlWebSso20>>"
sso_config_personaui: "<PersonaUI SAML WebSSO configuration for IBM Liberty, in the format <samlWebSso20>..</samlWebSso20>>"
sso_idp_metadata: "<Identity provider metadata file content>"

Example

sso_company: "demo"
sso_config_adminui: |
  <samlWebSso20 id="defaultSP" enabled="false"></samlWebSso20>
  <samlWebSso20 id="adminSP"
    spHostAndPort="https://<IPM_HOSTNAME>:<SSL_PORT>" createSession="true"
    idpMetadata="/opt/MDM/etc/default/idpMetadata.xml" realmName="http://<IDP_HOSTNAME>/adfs/services/trust"
    wantAssertionsSigned="true" authnRequestsSigned="true" includeX509InSPMetadata="true" authFilterRef="authFilter"
    authnContextClassRef="urn:oasis:names:tc:SAML:2.0:ac:classes:Password"
    nameIDFormat="urn:oasis:names:tc:SAML:2.0:nameid-format:transient" useRelayStateForTarget="false"
    targetPageUrl="https://<IPM_HOSTNAME>:<PORT>/utils/enterLogin.jsp" clockSkew="7h"></samlWebSso20>
  <authFilter id="authFilter">
    <requestUrl id="url" urlPattern="/" matchType="contains" /></authFilter>
sso_config_personaui: |
  <samlWebSso20 id="defaultSP" enabled="false"></samlWebSso20>
  <samlWebSso20 id="personaSP"
    spHostAndPort="https://<IPM_HOSTNAME>:<SSL_PORT>" createSession="true"
    idpMetadata="/opt/MDM/etc/default/idpMetadata.xml" realmName="http://<IDP_HOSTNAME>/adfs/services/trust"
    wantAssertionsSigned="true" authnRequestsSigned="true" includeX509InSPMetadata="true" authFilterRef="authFilter"
    authnContextClassRef="urn:oasis:names:tc:SAML:2.0:ac:classes:Password"
    nameIDFormat="urn:oasis:names:tc:SAML:2.0:nameid-format:transient" useRelayStateForTarget="false"
    targetPageUrl="https://<IPM_HOSTNAME>:<PORT>/mdm_ui/#/login" clockSkew="7h"></samlWebSso20>
  <authFilter id="authFilter">
    <requestUrl id="url" urlPattern="mdm_ui" matchType="contains" />
    <requestUrl id="excludeUrl" urlPattern="mdm_ui/assets/json" matchType="notContain" /></authFilter>
sso_idp_metadata: |
  <EntityDescriptor ID="_6a205bd8-1fff-426d-8541-795799789c1d" entityID="http://<IDP_HOSTNAME>/adfs/services/trust"
    xmlns="urn:oasis:names:tc:SAML:2.0:metadata"> …………………

For more information on the SAML configuration with the WebSphere Liberty, see Configuring SAML Web Browser SSO in Liberty.

2. Enable SSO in the ipm_12.0.x_cr.yaml file by setting the property sso to 1; a sketch of the CR fragment follows this list.
3. Install the Product Master services. For more information, see Deploying on the Kubernetes cluster.
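For reference, a sketch of the corresponding CR fragment for step 2; the exact nesting of the sso property can differ, so verify it against your downloaded ipm_12.0.x_cr.yaml file.

apiVersion: productmaster.ibm.com/v1
kind: ProductMaster
metadata:
  name: productmaster
spec:
  # assumption: sso sits under spec; 1 enables SAML SSO
  sso: 1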

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Configure SSO partners (Accelerated deployment)


To configure SSO partners, complete the following tasks.

Configure SAML attribute mappings on the single sign-on partner.


Add an identity provider by using metadata of the identity provider (IdP).
Export the Service Provider metadata file.

Configure SAML attribute mappings on the single sign-on partner


The SAML subject identifies the authenticated user. Product Master SAML SSO requires that the single sign-on partner configure NameID as the SAML assertion
subject.

The single sign-on partner should also define attribute mappings for group memberships.

Setting the NameID and Group mappings in the SAML assertion is mandatory for Product Master login. Other optional mappings can be defined for user attributes, for
example, First Name, Last Name, Title, Email Address, Telephone, Fax, and Address.

If you want to enable Vendor users to log in with SAML SSO, the Organization mapping must also be set in the SAML assertion. Ensure that the Organization attribute
value matches the Vendor organization present in the Product Master Vendor Organization Hierarchy.

Add an identity provider by using metadata of the identity provider


Use the metadata file export from Identity Provider as an input for sso_idp_metadata secret in the app_secrets.yaml file.

Export the Service Provider metadata file


1. Export the service provider metadata file for the Admin UI using the following URL.

Kubernetes
https://<IPM_HOSTNAME>:<PORT>/ibm/saml20/adminSP/samlmetadata

OpenShift®
https://<IPM_HOSTNAME>/ibm/saml20/adminSP/samlmetadata

2. Export the service provider metadata file for the Persona-based UI using the following URL.

Kubernetes
https://<IPM_HOSTNAME>:<PORT>/ibm/saml20/personaSP/samlmetadata

OpenShift
https://<IPM_HOSTNAME>/ibm/saml20/personaSP/samlmetadata

The service provider metadata files can be consumed by the SSO partners.
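If you prefer the command line, the metadata can also be fetched with curl; a sketch, where the -k flag skips certificate verification and the output file names are illustrative:

$ curl -k -o adminSP-metadata.xml https://<IPM_HOSTNAME>:<PORT>/ibm/saml20/adminSP/samlmetadata
$ curl -k -o personaSP-metadata.xml https://<IPM_HOSTNAME>:<PORT>/ibm/saml20/personaSP/samlmetadata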

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Configure SSO in the browser (Accelerated deployment)


To configure your browser to authenticate SSO, complete the following task in your browser.

166 IBM Product Master 12.0.0


This configuration is required only if the Windows authentication method is configured for SAML in Enable the SAML Web browser SSO.

Microsoft Internet Explorer


Chromium
Mozilla Firefox

Access the interfaces with the following URLs.

Admin UI
https://<ipmserver.ipm.com>:<port>
Persona-based UI
https://<ipmserver.ipm.com>:<port>/mdm_ui

Note: The context root for the Admin UI is "/" and for the Persona-based UI is "/mdm_ui". These are specified in the SAML properties and only these URL patterns are
intercepted by the SAML web SSO TAI.

Microsoft Internet Explorer


1. Open Microsoft Internet Explorer browser.
2. Select Tools > Internet Options > Security tab.
a. Select Local intranet and click Sites to display the list of trusted sites.
b. Select the first two options:
i. Include all local (intranet) sites not listed in other zones.
ii. Include all sites that bypass the proxy server.
c. Click Advanced and add the URL of the Identity Provider to the list of trusted sites.
d. Click Custom level and, under User Authentication > Logon, select the Automatic logon with current username and password security setting.
e. In the Advanced > Security section, ensure that Enable Integrated Windows Authentication is selected.
f. Click OK and restart Microsoft Internet Explorer.
g. Similar steps are applicable for the Trusted sites.

Chromium
If you are using Google Chromium, it automatically picks up the SSO settings that are configured in Microsoft Internet Explorer.
To import bookmarks from Microsoft Internet Explorer:

1. Open Chromium browser.


2. At the upper right, click More.
3. Select Bookmarks > Import Bookmarks and Settings.
4. Select the program that contains the bookmarks you would like to import.
5. Click Import and Done.

Mozilla Firefox
1. Open the Mozilla Firefox browser.
2. In the URL field, enter about:config, and press Enter.
3. Ignore the warning, and click I accept the risk!.
4. In the Search field, enter network.negotiate-auth.trusted-uris. This preference lists the trusted sites for Kerberos authentication.
5. Double-click network.negotiate-auth.trusted-uris.
6. In the Enter string value field, enter the Fully Qualified Domain Name (FQDN) of the host running the Product Master, and click OK.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Timeout behavior in the Persona UI


When SAML SSO is enabled, the session timeout for Product Master Persona-based UI is based on the following properties.

Lightweight Third Party Authentication (LTPA) timeout
Hypertext Transfer Protocol (HTTP) session timeout
Session inactivity timeout

Additionally, this topic provides information about the following.

How do the Session and LTPA timeout work
Best practices

LTPA timeout
When you log in to the Persona-based UI, IBM® WebSphere® Liberty issues you an LTPA token, which is used to validate your access to the applications. The
LTPA token cannot be extended or renewed, even for an active user session. As a result, your session ends after the LTPA timeout: you are logged out of the application
and must provide login credentials again to get a new token. This fixed LTPA lifetime is a security mechanism that prevents an unlimited user session, which would be
vulnerable to exploitation from unauthorized sources.

With the LTPA mechanism, you can lose your unsaved work. As a result, the LTPA timeout must be set to the longest time allowed by your IT Security team according to your
corporate compliance policies. The LTPA timeout is common for all applications. You can specify the LTPA timeout while you are installing the applications, and you can
modify it after installation by changing the settings in the IBM WebSphere Liberty.

You can change the LTPA timeout as follows.

Containerized deployment
The LTPA timeout setting must be added in the Persona UI pod configuration.
In the IBM WebSphere Liberty, increase the LTPA cookie expiration timeout in the server.xml file in the /opt/ibm/wlp/usr/servers/ipm folder. The default timeout is
120 minutes.
Add the following tag to specify the timeout value as 8 hours.

<ltpa expiration="480"/>

Non-containerized deployment

1. Log in to the IBM WebSphere Application Server.


2. Click Security > Global Security > Authentication. By default, LTPA is selected.
3. Click the LTPA link.
4. Under LTPA timeout setting, change the LTPA timeout value, and click Apply. The default value is 480 minutes.
5. To save the change directly to the master configuration, click the Save link.

HTTP session timeout


The HTTP session timeout settings keep your application session active while you are actively working in the session. When you access the Persona-based UI, an HTTP
session is created. A session is timed out after a specified period of inactivity for better management of memory resources. Note that the active session still ends when the
LTPA timeout limit is reached.

You can change the HTTP session timeout as follows.

In the config.json file in the /opt/MDM/mdmui/dynamic/mdmui folder, increase the value of the timeoutTS property as follows to specify the timeout value as 4
hours. The default timeout is 30 minutes.

"timeoutTS": "14400"

Session inactivity timeout


With the session inactivity timeout countdown, you are alerted about the session timeout in advance so that sudden session termination is avoided. If the session does
time out, any task that is work in progress is lost.

For example, suppose that the HTTP session timeout is set to 30 minutes, the session inactivity timeout is set to 25 minutes, and the inactivity timeout countdown is set to 5 minutes.
With these settings, if a user session is inactive for 25 minutes, the application UI starts displaying a countdown of 5 minutes. A link is displayed that you can click to
extend the session without logging out. If you do not click the link before the end of the countdown, you are logged out from the application. Note that the session still ends when
the LTPA timeout limit is reached.

You can change the session inactivity timeout as follows.

In the config.json file in the /opt/MDM/mdmui/dynamic/mdmui folder, increase the value of the idleTS property as follows to specify the timeout value as 3.5 hours.
The default timeout is 25 minutes.

"idleTS": "12600"

Note: The default LTPA timeout in the IBM WebSphere Liberty that hosts the application is 120 minutes. The default HTTP session timeout for the Persona-based UI is 30
minutes. Though the default values for the LTPA and HTTP session timeouts can be extended, consult your IT team to determine the appropriate timeout interval.

How do the Session and LTPA timeout work


You must set the LTPA timeout to a value greater than the Session timeout value. If a session for an application is idle for longer than the Session timeout value and you
click in the application, the application opens in the same window because the LTPA token is still valid. However, when the session of any application is idle for longer
than the LTPA timeout value and you click in an application, you are logged out of the application and must log in again to access it.

Regardless of whether a user session was active or inactive, the LTPA session expires in 480 minutes and no new session is established. You are logged out and must log
in again to access the applications.

Note: If the browser tab in which the Persona-based UI is running is closed but the browser is still open, you can come back to the application without logging in again
until the LTPA timeout is reached.

Best practices
To avoid loss of data or other inconveniences, follow these recommendations.

Before you leave your application session idle, save any unsaved changes and log out of your session.
Be aware of the LTPA limit set by your organization. Even if you work continuously in a session without idling, save your work and log off before your LTPA
timeout limit is reached. You can log back in to the system to start a new session.
The Session Inactivity timeout for the Persona-based UI must not exceed the Session timeout.
The Session timeout must be less than the LTPA timeout.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)



Known issues and limitations

Certain product features assume that the system is deployed by using a centralized deployment model where services share the file
system and product binaries. With containerized deployments, services no longer have a common file system and are working in isolation.

Following is the list of known issues and limitations.


Feature: System Administrator > Properties
Admin UI: Displays all the properties that are listed in the configuration files, such as common.properties and db.xml.
Containerized deployment: Displays property values that are configured for the ipm-admin container.

Feature: System Administrator > Log Files
Admin UI: Displays all the log files for each service in the log folder.
Containerized deployment: Displays log files present in the ipm-admin container.

After SAML SSO is enabled, the default Admin user, as well as any local user that does not exist on the Identity Provider, cannot log in to the Admin UI and
Persona-based UI applications.
Roles that are created by the SAML SSO login get default ACG permissions; the Administrator needs to set the correct permissions for such roles.
Any deletion of users or roles on the Identity Provider is not synced with the Product Master database as part of SAML SSO. This must be handled by the customer
as a routine administrative task.
User attributes for users who are created by the SAML SSO login cannot be edited from the Product Master Admin UI because the user edit screen is in read-only mode.
Although a Logout option is available on the Admin UI and Persona-based UI, when the user clicks Logout, the user is redirected to the main
application URL, which triggers the SAML SSO flow again.
There are different session timeouts that users need to be aware of: each UI has its own configured timeout, and the SAML token has an expiry that is set by the Identity
Provider. It is recommended to increase the session timeouts for both the application UIs and the SAML token for a better user experience when SAML SSO is
enabled.
REST API access with SAML SSO through an external source (custom code) is not yet supported. This is a known limitation.

Related concepts
Troubleshooting the SAML SSO issues

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Migrating Product Master Kubernetes or Red Hat OpenShift deployment (Fix Pack 7
and later)

This topic explains the procedure to migrate a Kubernetes or Red Hat® OpenShift® cluster deployment.

Before you begin


Ensure that you understand how the Product Master is installed by using operators. For more information, see Installing the product by using Operators.
Ensure that the Containers system requirements are met.
Download the Product Master docker assets. For more information, see Downloading the Product Master Docker images (Operator).
Deployment YAML files are configured. For more information, see Configuring Product Master deployment YAML (Fix Pack 7 and later).
You have all the product deployment files for your current fix pack version.
Download all the latest fix pack release deployment files from the IBM Support Fix Central site.

Procedure
1. Verify that the secret details are present by using the following commands.

# oc get secret | grep ipm-registry

# oc get secret | grep 'app-secret\|tls-secret'

2. Take a backup of the existing release deployment files, customizations, app-secret, random-secret, the IBM Db2® database and MongoDB, and the /opt/MDM/public_html
folder of the admin pod.
To take a backup of MongoDB, proceed as follows:
a. Go to the MongoDB pod by using the following command.

# oc rsh productmaster-mongodb-5464cb968-qcnts

Note: Get the MongoDB user and password through the random-secret file.
b. Take the backup of the MongoDB database by using the following command.

$ cd /data/db
$ mongodump \
  --username <username> \
  --authenticationDatabase admin \
  --password <password> \
  -d mldb2

c. Download the /data/db/dump folder to your local machine by using the following command.

oc rsync productmaster-mongodb-6975f78c4d-4hx5w:/data/db/dump .

3. Check the CR name by using the following command.

# oc get productmaster

Note: You need to use the same CR name in the ipm_12.0.x_cr.yaml file.
4. Update the ipm_12.0.x_cr.yaml file to set the replica count of all pods to 0, and apply it again.
5. Confirm that all the pods are removed by using the following command.

# oc get pods

6. Optional: If you have created any manual ingress route for IBM Product Master, delete it by using the following command.

# oc delete ingress <name>

7. Check existing packagemanifest before applying subscription by using the following command.

# oc describe packagemanifest productmaster-operator | grep -C3 "Default Channel"

8. Update the new catalog source docker images into the catalog_source.yaml file. You can find the new image in the latest downloaded catalog_source.yaml file (IBM
Product Master deployment files that are downloaded from the IBM Support Fix Central site).
9. Apply the catalog_source.yaml file by using the following command.

# oc apply -f catalog_source.yaml

a. Verify creation of new pods starts by using the following command.

# oc get pods

b. Verify that the new catalog is created by using the following command.

# oc get pods | grep catalog

c. Verify that the packagemanifest is updated with the new channel version and operator image by using the following command.

# oc describe packagemanifest productmaster-operator | grep -C3 "Default Channel"

10. Update CSV or Operator controller. This step is required only if you have set the value of InstallPlanApproval to "Manual" because when the value is "Automatic",
subscription gets updated automatically.
11. Add the volume details for the new magento and message-archive services as follows in your volumes.yaml file.

kind: PersistentVolume
apiVersion: v1
metadata:
  name: magento-connector-log-volume
  namespace: default
  labels:
    svc: magento-connector-log-volume
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: standard
  hostPath:
    path: /mnt/ipm12/magento-connector
You can also copy this block from the volumes.yaml file that is downloaded from the IBM Support Fix Central site. Only the magento volume is shown here; a sketch of the message-archive volume follows.
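The message-archive volume follows the same pattern. A sketch, assuming analogous names and host path; verify both against the downloaded volumes.yaml file:

kind: PersistentVolume
apiVersion: v1
metadata:
  name: message-archive-log-volume
  namespace: default
  labels:
    svc: message-archive-log-volume
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: standard
  hostPath:
    path: /mnt/ipm12/message-archive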

What to do next
Upgrade IBM Product Master by using the newly downloaded ipm_12.0.x_cr.yaml file as follows.

1. Update the storage class to the same value that is used in your current fix pack deployment of IBM Product Master.
2. If you are using your own registry, update the value of image > registry.
3. Ensure that the CR name for both releases is the same.

apiVersion: productmaster.ibm.com/v1
kind: ProductMaster
metadata:
  name: <CR name>

4. Update all the changes that you need in your new CR, including the storage type and the new service for the Magento Connector.
5. Enable the features that are required and set the rest to 0 so that the associated pods do not come up and consume CPU and memory. For more information, see
con_oper_custmfeature.html.
6. Apply the CR file. After all the pods are up and running, you can log in to IBM Product Master by using the same login URLs that you used for your current fix pack version.
7. If you are using dynamic volume provisioning, perform the following steps to restore the MongoDB database that you backed up earlier.
a. Copy dump into the /data/db folder of the MongoDB pod by using the following command.

oc rsync dump productmaster-mongodb-0:/data/db

b. Restore the MongoDB database by using the following command.

$ mongorestore \
  --username <username> \
  --authenticationDatabase admin \
  --password <password> \
  /data/db/dump

8. Perform the database migration.


a. Access the productmaster-admin-<container-name> pod by using the following command.

oc rsh productmaster-admin-<container-name>

b. Run the migrateToInstalledFP.sh script in the $TOP/bin/migration folder by using the following command; an example invocation follows this list.

migrateToInstalledFP.sh --fromversion=<fixpackversion>

9. If you customized the deployment by creating custom images, follow the con_oper_custmfp7.html instructions to reapply your customizations.
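For reference, an example invocation of the migration script from step 8; the version value here is illustrative, so substitute the fix pack version that you are migrating from:

migrateToInstalledFP.sh --fromversion=12.0.7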

Migrating Product Master Kubernetes or Red Hat OpenShift deployment (Fix Pack 4 to Fix Pack 6)
This topic explains the procedure to migrate a Kubernetes or Red Hat OpenShift cluster from Fix Pack 3 onwards to the later releases (12.0.x).
Migrating Product Master Kubernetes or OpenShift deployment (Fix Pack 2 to Fix Pack 3)
This topic explains the procedure to migrate a Kubernetes or OpenShift cluster from the Fix Pack 2 to the Fix Pack 3 release.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Migrating Product Master Kubernetes or Red Hat OpenShift deployment (Fix Pack 4
to Fix Pack 6)

This topic explains the procedure to migrate a Kubernetes or Red Hat® OpenShift® cluster from Fix Pack 3 onwards to the later releases (12.0.x).

Before you begin


Ensure that you understand how the Product Master is installed by using operators.
Ensure that the Containers system requirements are met.
Download the Product Master docker assets.
Deployment YAML files are configured.
You have all the product deployment files for your current fix pack version.
Download all the latest fix pack release deployment files from the IBM® Support Fix Central site.

About this task


With the IBM Product Master 12.0 Fix Pack 4 release, the Product Master container images are operator-based and built on the Red Hat Enterprise Linux® (RHEL) 8 Universal Base
Image (UBI) base Docker image. This ensures that the images are compatible with all the environments that support Operator Life Cycle Management (OLM) and
Kubernetes (K8s). Moreover, the images are lightweight because WebSphere® Application Server has been replaced with WebSphere Liberty, a light version of the
application server.

Support for both the Db2® and Oracle databases is available out of the box without the overhead of creating custom images with database clients.

Following are the major changes in the Operator Life Cycle Management (OLM)-based deployment of the IBM Product Master 12.0 Fix pack 4 and later.

Implemented network policies.


Following table lists the implemented network policies for reference purpose only. You do not need to create these policies.
Table 1. Reference - Network policies
Network policy name Incoming traffic rule
np-admin Allows traffic from the Ingress and RESTAPI pods on any port.
np-elasticsearch Allows traffic from the FTS, Indexer, and RESTAPI pods on the "9200" port.
np-ftsind Denies traffic from all the services.
np-ftspim Allows traffic from the Scheduler pod on the "9095" port.
np-hazelcast Allows traffic from the FTS, Indexer, RESTAPI, ML, AdminUI, and PIM pods on the "5702" port.
np-ml Allows traffic from the Scheduler and Workflow pods on the "5000 - 5009" port range.
np-mongodb Allows traffic from the ML, RESTAPI, and Scheduler pods on the "27017" port.
np-personaui Allows traffic from the Ingress pod on any port.
np-restapi Allows traffic from the Ingress and AdminUI pods on any port.
np-sch Allows traffic from the AdminUI and RESTAPI pods on any port.
np-wfl Allows traffic from the AdminUI and RESTAPI pods on any port.
Changed service type from NodePort to ClusterIP for all the services except machine learning service.
Fixed 504 intermittent timeout error for the IBM Product Master in the Red Hat OpenShift Container platform.
Added imagepullsecret for the third-party images.

Procedure
1. Verify that the secret details are present by using the following command.

# oc get secret | grep ipm-registry

# oc get secret | grep 'app-secret\|tls-secret'

Note:
If you are using Kubernetes, use the kubectl command instead of the oc command.
You can also use the oc command on Kubernetes if you download and install the Red Hat OpenShift CLI.
2. Take a backup of the existing release deployment files.
3. Check the CR name by using the following command.

# oc get productmaster

You need to use the same CR name in the ipm_12.0.x_cr.yaml file.


4. Update the ipm_12.0.x_cr.yaml file to set the replica count of all pods to 0, and apply it again.
5. Confirm that all the pods are removed by using the following command.

# oc get pods

6. If you have created any manual ingress route for IBM Product Master, delete it by using the following command.

# oc delete ingress <name>

7. Check existing packagemanifest before applying subscription by using the following command.

# oc get packagemanifest | grep product

8. Check the current configuration of catalog source by using the following command.

# oc describe packagemanifest productmaster-operator | grep -C3 "Default Channel"

9. Configure the "app-secrets" by updating app_secrets.yaml file by using the following steps.
a. Use a plain (decrypted) password for the db_pass parameter. If you do not know the password decrypt by using the following command.

echo <db_pass(encrypted)> | openssl enc -d -base64 -aes-256-cbc -salt -pbkdf2 -k <encryption_key>

b. Remove the encryption_key parameter.
c. Apply the updated app_secrets.yaml file by using the following command.

# oc apply -f app_secrets.yaml

10. Update the new catalog source docker images into the catalog_source.yaml file. You can find the new image in the latest downloaded catalog_source.yaml file (IBM
Product Master deployment files that are downloaded from the IBM Support Fix Central site).
11. Apply the catalog_source.yaml file by using the following command.

# oc apply -f catalog_source.yaml

You can verify that the creation of new pods starts by using the following command.

# oc get pods

You can verify that a new catalog is created by using the following command.

# oc get pods | grep catalog

You can verify that the packagemanifest is updated with the new channel version and operator image by using the following command.

# oc describe packagemanifest productmaster-operator | grep -C3 "Default Channel"


registry.ng.bluemix.net/product_master/ipm-operator@sha256:xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Version: 1.0.1
Name: v1.0
Default Channel: v1.0
Package Name: productmaster-operator
Provider:
Name: IBM

12. Update CSV or Operator controller. This step is required only if you have set the value of InstallPlanApproval to "Manual" because when the value is "Automatic",
subscription gets updated automatically.
a. Update the subscription.yaml file and change the channel version from the "alpha" to "v1.0".

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: ibm-productmaster-catalog-subscription
  namespace: <namespace>
spec:
  channel: v1.0
  name: productmaster-operator
  installPlanApproval: Automatic
  source: ibm-productmaster-catalog
  sourceNamespace: <namespace>

b. Apply the subscription.yaml file by using the following command.

# oc apply -f subscription.yaml

What to do next
Upgrade IBM Product Master by using the newly downloaded ipm_12.0.x_cr.yaml file as follows.



1. Update the storage class to the same value that is used in your current fix pack deployment of IBM Product Master.
2. If you are using your own registry, update the value of image > registry.
3. If you are using customized images, update the value of image > imagetag for each service.
4. Ensure that the CR name for both releases is the same.

apiVersion: productmaster.ibm.com/v1
kind: ProductMaster
metadata:
  name: <CR name>

5. Create the 'random-secret' by using the following command.

# oc get secret app-secret -o json | jq '.metadata.name = "random-secret"' | jq 'del(.metadata.ownerReferences)' | oc create -f -

6. Apply the CR file. After all the pods are up and running, you can log in to the IBM Product Master by using the same login URLs that you used for the IBM Product
Master your current fix pack version.
7. Run the migration script.
a. Run the migrateToInstalledFP.sh script in the $TOP/bin/migration directory of the Admin pod.
b. If you are using Machine Learning services, you need to delete the existing models from the MongoDB database as follows.
i. Log in to the Machine Learning pod by using the following command.

# oc rsh <ml_pod>

ii. Update the source by using the following command.

source /home/default/.bash_profile

iii. Run the clear_db script by using the following command.

python3.9 $TOP/mdmui/machinelearning/scripts/clear_db.pyc

8. If your environment is Kubernetes and you get a 502 Bad Gateway error for the Admin UI or Persona-based UI login URLs, fix the issue by running the following
command.

$ oc delete ingress productmaster-ingress

To roll back to the previous release, proceed as follows.

Update the ipm_12.0.x_cr.yaml file as follows and apply it again.

1. Set the replica count of all the pods to 0.
2. Add version: 12.0.x in the following format.

apiVersion: productmaster.ibm.com/v1
kind: ProductMaster
metadata:
  name: productmaster
spec:
  license:
    accept: true
  deployment_platform: 'openshift'
  #deployment_platform: 'k8s'
  version: 12.0.x

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Migrating Product Master Kubernetes or OpenShift deployment (Fix Pack 2 to Fix


Pack 3)

This topic explains the procedure to migrate a Kubernetes or OpenShift® cluster from the Fix Pack 2 to the Fix Pack 3 release.

Before you begin


Ensure that you understand how the Product Master is installed by using operators.
Ensure that the Containers system requirements are met.
Download the Product Master docker assets.
Deployment YAML files are configured. For more information, see Configuring Product Master deployment YAML (Fix Pack 3 and later).

About this task


Starting with the IBM® Product Master 12.0 Fix Pack 3 release, the Product Master container images are operator-based and built on the Red Hat® Enterprise Linux® (RHEL) 8
Universal Base Image (UBI) base Docker image. This ensures that the images are compatible with all the environments that support Operator Life Cycle Management
(OLM) and Kubernetes (K8s). Moreover, the images are lightweight because WebSphere® Application Server has been replaced with WebSphere Liberty, a light version of the
application server.
Support for both the Db2® and Oracle databases is available out of the box without the overhead of creating custom images with database clients.



Following are the major changes in the Fix Pack 3 release.

1. Images are based on WebSphere Liberty and the RHEL 8 UBI base image.
2. New images for REST API and operator.
3. Images for third-party software (MongoDB, Elasticsearch, Hazelcast, and IBM MQ).
4. Images have the default user UID 5000 and GID 0. The was, svcuser, and svcgroup users and group have been removed from all the images.
5. New volumes have been added for elasticsearch-data and restapi.
6. Existing volumes have been renamed: appsvr to admin, dam to appdata, and newui to personaui.
7. MongoDB has been upgraded to 4.0.22, and 3.x version support is deprecated.
8. Hazelcast has been upgraded to 4.1, and 4.0 version support is deprecated.
9. Admin UI and Persona-based UI applications are now accessed by using the Nginx Ingress URL and port.

Procedure
IBM Product Master services are deployed on the Kubernetes cluster by using OLM. You need to back up all persistent volumes, the MongoDB database, and the Db2 or
Oracle database before proceeding with the migration.

1. Log in to the MongoDB pod and generate MongoDB data backup.


2. Copy the data dump from the MongoDB pod to a backup server.
3. Create a backup copy of all directories that are being used as persistent volume on all worker nodes.
Example

$ cp -ar /mnt/ipm12 /mnt/ipm12-backup

4. If you are using the Cloud environment, backup all your data before proceeding.
5. MongoDB does not support a direct upgrade to the 4.0.22 version; an intermediate upgrade to version 3.6 is required. Run the following commands to make
MongoDB available for upgrade to the 3.6 version during Product Master deployment.

$ kubectl exec ipm-mongodb-xxxxxxxx -- mongo --username admin --authenticationDatabase admin \
  --password xxxxxxxx --eval "db.adminCommand( { getParameter: 1, featureCompatibilityVersion: 1 } )"

$ kubectl exec ipm-mongodb-xxxxxxxx -- mongo --username admin --authenticationDatabase admin \
  --password xxxxxxx --eval "db.adminCommand( { setFeatureCompatibilityVersion: '3.6' } )"

6. Uninstall the Product Master application.
7. Delete the created persistent volumes by using the following command.

$ kubectl delete -f volumes.yaml

Note: Do not delete directories that contain application data (/mnt/ipm12).


8. Rename the following persistent volume directories; example mv commands are shown after this procedure.
/mnt/ipm12/dam to /mnt/ipm12/appdata
/mnt/ipm12/appsrv to /mnt/ipm12/admin
/mnt/ipm12/newui to /mnt/ipm12/personaui
9. Use the following commands to create the new volume directories.

$ mkdir /mnt/ipm12/elasticsearch-data
$ mkdir /mnt/ipm12/restapi

10. Set the permissions for the new user "default" with UID 5000 and GID 0.
11. Use the following commands on all the worker nodes to assign ownership to the "default" user.

adduser default --uid 5000 --gid 0
chown 5000.0 /mnt/ipm12 -R
chmod 775 /mnt/ipm12

If you get an error while adding the user, remove the existing user that has UID 5000.
Note: If you are using multiple worker nodes and /mnt/ipm12/dam is shared between the nodes by using Network File System (NFS), you need to update the NFS
configuration because the directory is now changed to /mnt/ipm12/appdata. Also, you need to make the directory structure changes on all the worker nodes.
12. Set the MongoDB version as 3.6 in the ipm_12.0.3_cr.yaml CRD file. According to the MongoDB documentation, there is no direct upgrade from MongoDB
version 3.4 to 4.0.22; you must upgrade from version 3.4 to 3.6 and then to 4.0.22.
13. Verify that the storageclass is the same as mentioned in the volumes.yaml file for all the persistent volumes.
14. The replica count of MongoDB, Elasticsearch, IBM MQ, and Hazelcast should be 1. If you increase it to a value greater than 1, the services start failing.
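The directory renames from step 8 can be done with mv on each worker node. A sketch, assuming the volumes are plain host paths:

$ mv /mnt/ipm12/dam /mnt/ipm12/appdata
$ mv /mnt/ipm12/appsrv /mnt/ipm12/admin
$ mv /mnt/ipm12/newui /mnt/ipm12/personaui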

Results
You can now deploy the IBM Product Master 12.0 Fix Pack 3 release services. For more information, see:

Deploying on the Kubernetes cluster
Deploying on the Red Hat OpenShift cluster

What to do next
Upgrade your MongoDB version.

1. Edit the ipm_12.0.3_cr.yaml file and set replica count to 0 for the MongoDB service and apply the CRD using the following command.

$ kubectl apply -f ipm_12.0.3_cr.yaml

2. Monitor the pods by using the following command.

$ kubectl get pods



3. After the MongoDB pod is deleted, edit the ipm_12.0.3_cr.yaml file again, change the MongoDB version from 3.6 to 4.0.22, and set the replica count to 1.
4. Apply the CRD again by using the following command.

$ kubectl apply -f ipm_12.0.3_cr.yaml

Your deployment should now be running with the MongoDB version 4.0.22.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Installing the product by using the Docker images

Using Docker images, you can set up the Product Master environment. You can also use the Docker images to set up the Elasticsearch,
Hazelcast, and MongoDB services.

During an accelerated deployment, Product Master is installed from Docker images, which ensures a simple and consistent installation experience. Use accelerated
deployment when you need to consistently and quickly deploy a standard installation image.

The Product Master Docker images are built on a Red Hat® Enterprise Linux® (RHEL) 7 Universal Base Image (UBI) base Docker image that runs on all Docker supported
hosts. Product Master supports containerized deployment on the following:

Docker
Kubernetes

If you want to deploy the Product Master Docker images in a high availability environment, you can use a Kubernetes cluster.

System requirements and software stack


Before using Docker images to deploy IBM® Product Master, ensure the following hardware and software requirements:

Operating System
Linux Ubuntu or another Docker-supported operating system
Hardware
32 GB memory and a quad-core processor
Docker software
Docker Community Edition version 19.03.4
Following software is installed on the Docker containers as part of the Product Master image deployment:

Red Hat Enterprise Linux (RHEL) 7 Universal Base Image (UBI) base Docker image
IBM Db2® Version 11.5
IBM WebSphere® Application Server Base Edition Version 9.0.5
Perl 5.30.1
JAVA 8
Python 3.6 (ML image)

Downloading the Product Master Docker assets


To acquire the Product Master Docker assets, you must download them from the IBM Passport Advantage®. You can use a script
to acquire the Product Master Docker images from the IBM Docker registry.
Environment variables for deployed Docker image
The deployed Product Master images have a number of default settings that define how the deployed instance behaves.
Customizing the Docker container
The Docker containers can be customized with custom tabs and extensions.
Starting, accessing, or closing the Docker containers
Start, access, or close the Docker containers by using the following commands.
Customizing Elasticsearch, Hazelcast, and MongoDB services
You can customize the Elasticsearch, Hazelcast, and MongoDB services that Product Master uses.
Customizing IBM MQ service (GDS)
To use GDS service in docker environment, IBM MQ service is mandatory.
Deploying on the Oracle Server
You need to perform extra configuration for Product Master docker images to work with an Oracle Server.
Deploying on the Kubernetes cluster
IBM Product Master services are deployed on the Kubernetes cluster by using operators.
Known issues and limitations
Certain product features assume that the system is deployed by using a centralized deployment model where services share the file system and product binaries.
With containerized deployments, services no longer have a common file system and are working in isolation.

Related concepts
Downloading the Product Master Docker assets
Customizing the Docker container
Starting, accessing, or closing the Docker containers
Known issues and limitations

Related tasks

IBM Product Master 12.0.0 175


Deploying on the Oracle Server
Deploying on the Kubernetes cluster

Related information
Environment variables for deployed Docker image
Customizing Elasticsearch, Hazelcast, and MongoDB services

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Downloading the Product Master Docker assets

To acquire the Product Master Docker assets, you must download them from the IBM® Passport Advantage®. You can use a script to
acquire the Product Master Docker images from the IBM Docker registry.

Prerequisites
Following are the prerequisites for downloading the Docker images:

Verify that you have an IBM ID. If you do not have an IBM ID, you can create one at the IBM Passport Advantage site.
The primary IBM Passport Advantage contact for your company has granted your IBM ID permission to access and download IBM Product Master artifacts.
All containers need the Db2® IP address, username, password, database name, and port number.
Digital Asset Management (DAM) needs MongoDB IP address and port number.
Free text search needs Elasticsearch and Hazelcast to be started, because it requires the Indexer and pim-collector images.
Machine learning needs MongoDB IP address, port, user, password, and DB name.

Downloading
Proceed as follows to download the Docker images:

1. On the host computer, create a Docker working directory for all your Docker activity (downloads, installation, running, and monitoring).
2. Give the directory a meaningful name, such as /mdm.
3. Open a browser and browse to the IBM Passport Advantage site.
Important: For fix packs, you should download the required image from the IBM Support Fix Central. For more information, see Deploying on the Kubernetes
cluster.
4. Locate and download the IBM Product Master artifacts. For more information, see the Download IBM® Product Master version 12.0 to determine the part numbers
that you should download.
5. Unpack each of the parts into the Docker working directory that you created on your host machine (/mdm).
6. Open IPM_12.0.0_DOCKER.zip and extract the Download_IPM_Docker script.
7. Run the Download_IPM_Docker script:

$ ./Download_IPM_Docker.sh.x

The script prompts you to accept the license. After you accept the license, the script logs in to the IBM Docker registry and starts downloading the latest version of
the Docker images. If you want to download a specific version, for example, all the Product Master Docker images for version 12.0, use the
following command:

$ ./Download_IPM_Docker.sh.x -version=12.0.0

8. Verify that all images are downloaded on the local VM by using the following command:

$ docker images

Following images are downloaded for the Product Master.

registry.ng.bluemix.net/product_master/ipm-admin-ubi7 12.0.0
registry.ng.bluemix.net/product_master/ipm-persona-ubi7 12.0.0
registry.ng.bluemix.net/product_master/ipm-sch-ubi7 12.0.0
registry.ng.bluemix.net/product_master/ipm-wfl-ubi7 12.0.0
registry.ng.bluemix.net/product_master/ipm-fts-pim-ubi7 12.0.0
registry.ng.bluemix.net/product_master/ipm-fts-indexer-ubi7 12.0.0
registry.ng.bluemix.net/product_master/ipm-ml-ubi7 12.0.0

Following table lists the various Docker images available along with their purpose and contents.
ipm-admin
Deploys: Admin UI. Contains: Db2 client, WebSphere® Application Server, MDM AppSvr, Event processor, and Queue manager service.
ipm-persona
Deploys: Persona-based UI. Contains: Db2 client, WebSphere Application Server, Persona-based UI, and MDM REST service.
ipm-wfl
Deploys: Workflow service. Contains: Db2 client, Java™, and workflow service dependencies.
ipm-sch
Deploys: Scheduler service. Contains: Db2 client, Java, and scheduler service dependencies.
ipm-fts-pim
Deploys: pim-collector service. Contains: Db2 client, Java, and fts-pim dependencies.
ipm-fts-indexer
Deploys: Indexer service. Contains: Db2 client, Java, and fts-indexer dependencies.
ipm-ml
Deploys: Machine learning. Contains: Python, Db2 client, and Java.
hazelcast
Deploys: Hazelcast service. Contains: Hazelcast service 3.12.5.

176 IBM Product Master 12.0.0


IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Environment variables for deployed Docker image

The deployed Product Master images have a number of default settings that define how the deployed instance behaves.

As a prerequisite, the Docker images should exist on the computer where the containers are being created. Additionally, use the following command to create a
network, which is used by the containers to communicate with each other.

docker network create mdm-network

The Docker commands use this network. Modify the network name in the commands if you choose to use a different name.
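For example, the environment variables that follow are passed to a container with -e flags on docker run. A minimal, illustrative invocation for the ipm-admin image is sketched below; the variable values and container name are assumptions, and the full set of required variables depends on the features that you enable:

docker run -d --name ipm-admin --network mdm-network \
  -e DB_HOST=192.0.2.10 -e DB_PORT=50000 \
  -e DB_USER_NAME=db2inst1 -e DB_PASSWORD=db2inst1 \
  -e MDM_DB_NAME=database1 -e MDM_DB_TYPE=db2 \
  registry.ng.bluemix.net/product_master/ipm-admin-ubi7:12.0.0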

Mandatory environment variables


Note: The following environment variables are applicable to all the Docker images, unless otherwise specified.

DB_HOST
The IP of the database server. The default value is x.x.x.x.
DB_PASSWORD
The password of the authentication account. The default value is db2inst1.
DB_PORT
The database port for database connection. The default value is 50000.
DB_USER_NAME
The user account configured for database authentication. The default value is db2inst1.
ENABLE_GDS
Enable Global Data Synchronization.
EXPOSED_NEWUI_SVR_PORT
The port value mapped to the HTTPS port configured. The default value is 31192 or 9445.
Note: Applicable only to deployments on Kubernetes.
ipm-persona
INDEXER_MEMORY_FLAG
The minimum or maximum memory that is configured for the indexer service. This environment variable is optional and set to the default value of -Xmx1024m -Xms1024m.
ipm-fts-pim ipm-sch
INGRESS_HOST_URL
The ingress route URL defined in the OpenShift® for routing requests. This environment variable must be set when routes or ingress URLs are defined for accessing
the Persona-based UI. The default value is ipm-persona-svc.mynamespace.routerdefault.svc.cluster.local.

MDM_ADMIN_MEMORY_FLAG
The minimum or maximum memory that is configured. The default value is -Xmx128m -Xms48m.
ipm-admin
MDM_APPSVR_MEMORY_FLAG
The minimum or maximum memory that is configured. The default value is -Xmx2048m -Xms256m.
ipm-admin

MDM_APP_SVR_PORT
The port number used for the Admin UI. This environment variable is optional. The default value is 7507.
MDM_DB_NAME
The name of the database. The default value is database1.
MDM_DB_TYPE
The database type to configure in the MDM configuration files. The default value is db2. Applicable to all the containers if the value is oracle.
MDM_EVENTPROCESSOR_MEMORY_FLAG
The minimum or maximum memory that is configured for event processor service. The default value is -Xmx128m -Xms48m.
ipm-admin
MDM_LOG_DIR
The default log location used by all the containers. The default value is /opt/mdmlogs.
MDM_SCHEDULER_MEMORY_FLAG
The minimum or maximum memory that is configured for scheduler service. The default value is -Xmx1024m -Xms48m.
ipm-sch
MDM_QUEUEMANAGER_MEMORY_FLAG
The minimum or maximum memory that is configured for queue manager service. The default value is -Xmx128m -Xms48m.
ipm-admin
MONGODB_DATABASE
The name of the Mongo database. The default value is mgdb.
MONGODB_PASSWORD
The password of the authentication account. The default value is mluser.
NODE_TYPE
The type of the node. Valid value can be Primary or Additional.
NEWUI_MAX_MEM
The maximum memory configured for the Persona-based UI. The default value is 2048.
NEWUI_SVR_PORT
The port number used for the Persona-based UI. The default value is 9092.
PIMCOLLECTOR_MEMORY_FLAG
The minimum or maximum memory that is configured for pim-collector service. The default value is -Xmx1024m -Xms1024m.

IBM Product Master 12.0.0 177


ipm-persona ipm-sch
PIMCOLLECTOR_URI
The URI to communicate with the pim-collector service. The default value is http://<ip address>:9095.
SERVICES
The value of service name. Depending upon the Docker image, the possible value can be machinelearning, scheduler, or workflowengine.
ipm-ml ipm-sch ipm-wfl

GDS environment variables


COMPANY_NAME
The name of the GDS company.
ipm-gds ipm-wfl

GDS_APP_TYPE
The component of the application being installed.
ipm-gds ipm-wfl
GDS_DATAPOOL_GLN
The Global Location Number (GLN®) of the data pool service.
ipm-gds ipm-wfl
GDS_SELF_GLN
The GLN of the trading partner that is using this application.
ipm-gds ipm-wfl
MQ_APP_PASSWORD
The password that the application uses to connect to the IBM® MQ service.
ipm-gds
MQ_APP_USERNAME
The username that the application uses to connect to the IBM MQ service.
ipm-gds
MQ_CHANNEL_NAME
The name of the IBM MQ service channel.
ipm-gds
MQ_HOST
The hostname or IP address of the IBM MQ queue manager.
ipm-gds
MQ_PORT
The port number of the IBM MQ queue manager.
ipm-gds
MQ_QUEUE_MANAGER
The queue manager of the IBM MQ service.
ipm-gds
INBOUND_QUEUE_NAME
The name of the queue for the inbound messages.
ipm-gds ipm-wfl
OUTBOUND_QUEUE_NAME
The name of the queue for the outbound messages.
ipm-gds ipm-wfl

QUEUE_CONNECTION_FACTORY
The name of the connection factory.
ipm-gds ipm-wfl

Digital Assets Management environment variables


ENABLE_DAM
To enable DAM. The default value is 0.
ipm-admin ipm-persona ipm-sch
IPM_SHARED_VOLUME_DIR
The directory shared between ipm-admin, ipm-persona, and ipm-sch containers. Depending on the IBM Product Master version, the value of this variable can be,
IBM Product Master 12.0 and later
/opt/ipm_shared_vol
IBM Product Master Version 12.0 Fix Pack 2 and later
/opt/MDM/public_html
ipm-admin ipm-persona ipm-sch
MONGODB_IP
The IP address of the server hosting the MongoDB service. The default value is x.x.x.x.
ipm-admin ipm-persona ipm-sch
MONGODB_PORT
The port number configured to use with the MongoDB. The default value is 27017.
ipm-admin ipm-persona ipm-sch

Free text search environment variables


ES_AUTH_PASSWORD
The password used for authentication with Elasticsearch. The default value is changeme.
ipm-admin ipm-persona ipm-fts-indexer ipm-fts-pim ipm-sch ipm-wfl
ES_AUTH_USERNAME
The user account name used for authentication with Elasticsearch. The default value is elastic.
ipm-persona ipm-fts-indexer ipm-fts-pim ipm-sch

178 IBM Product Master 12.0.0


ES_CLUSTER_IP
The IP address of the server hosting the Elasticsearch service. The default value is x.x.x.x.
ipm-persona ipm-fts-indexer ipm-fts-pim
ES_CLUSTER_NAME
The name of the Elasticsearch cluster. The default value is docker-cluster.
ipm-persona ipm-fts-indexer ipm-fts-pim
ES_HTTP_PORT
The default HTTP port for Elasticsearch. The default value is 9200.
ipm-persona ipm-fts-indexer ipm-fts-pim
ENABLE_FTS
To enable free text search. The default value is 0.
ipm-admin ipm-persona ipm-sch ipm-wfl
ES_PASSWORD_ENCRYPTED
Flag to hint that the password is in encrypted format (similar to the database connections). The default value is 0.
ipm-persona ipm-fts-indexer ipm-fts-pim
ES_TRANSPORT_PORT
The default TCP port for transport-based communication. The default value is 9300.
ipm-persona ipm-fts-indexer ipm-fts-pim
ES_USE_AUTH
Flag to enable authentication with the Elasticsearch. This environment variable, by default, is set to value 0 as enabling authentication requires some additional
JARs, which are not free and carry license cost. You would also need to customize the Docker image to specify the additional JARs. The default value is 0.
ipm-persona ipm-fts-indexer ipm-fts-pim
HAZELCAST_IP
The IP address of the server hosting the Hazelcast service. The default value is x.x.x.x.
ipm-admin ipm-persona ipm-fts-pim ipm-fts-indexer ipm-ml ipm-sch ipm-wfl
HAZELCAST_PORT
The port number configured to use with Hazelcast. The default value is 5702.
ipm-admin ipm-persona ipm-fts-pim ipm-fts-indexer ipm-ml ipm-sch ipm-wfl
PIMCOLLECTOR_URI
The URI to communicate with the pim-collector service. The default value is http://<ip address>:9095.
ipm-persona ipm-fts-indexer ipm-fts-pim

Machine learning environment variables


ENABLE_ML
To enable machine learning. The default value is 0.
ipm-admin ipm-persona ipm-sch ipm-wf ipm-ml

ML_HOST
The IP address of the server hosting the Machine learning service. The default value is x.x.x.x.
ipm-admin ipm-persona ipm-sch ipm-wf ipm-ml

ML_PORT
The port number configured to use with the Machine learning. The default value is 5000.
ipm-admin ipm-persona ipm-sch ipm-wf ipm-ml

MONGODB_DATABASE
The name of the Mongo database. The default value is mgdb.
ipm-admin ipm-persona ipm-sch ipm-ml

MONGODB_IP
The IP address of the server hosting the MongoDB service. The default value is x.x.x.x.
ipm-admin ipm-persona ipm-sch ipm-ml

MONGODB_PASSWORD
Password of the authentication account. The default value is mluser.
ipm-admin ipm-persona ipm-sch ipm-ml

MONGODB_PORT
The port number configured to use with the MongoDB. The default value is 27017.
ipm-admin ipm-persona ipm-sch

Vendor Persona environment variables


ENABLE_VENDOR
To enable Vendor portal. The default value is 0.
ipm-admin ipm-persona ipm-sch ipm-wfl

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Customizing the Docker container

The Docker containers can be customized with custom tabs and extensions.

IBM Product Master 12.0.0 179


Most of the customization requires additional JAR files and JSP files, with modification to the data_entry_properties.xml, flow-config.xml, and the common.properties
configuration files.

The Product Master application runs with the "was" and "svcuser" non-root users. Both users belong to the "svcgroup" group. Ensure that you use the appropriate user for each image when you customize the Docker images:

User - Images
was - ipm-admin-ubi7, ipm-persona-ubi7
svcuser - ipm-sch-ubi7, ipm-wfl-ubi7, ipm-fts-pim-ubi7, ipm-fts-indexer-ubi7, ipm-ml-ubi7, ipm-gds-ubi7

The following directory structure automates placing the custom JAR files in the $TOP folder, or configures them as additional dependencies by using the jars-custom file. The JAR files are added to the Product Master class path.

+-- top-jars
+-- custom-jars
+-- Dockerfile
+-- public_html
|   +-- user
+-- user-scripts

top-jars
Copy the JAR files to this directory. These JARs are added to the $TOP folder.
custom-jars
Copy the JAR files to this folder if you want to add them through the jars-custom file.
user-scripts
Holds the pre-config.sh and post-config.sh scripts for pre-configuration or post-configuration (any processing before or after the configureEnv.sh file runs). For any processing after the installation but before the service starts, copy the script to this folder.
public_html/user
Any other files that need to be copied into the image.

Sample Dockerfile to update the Persona-based UI image

FROM ipm-persona:12.0.0

USER root

RUN mkdir -p /home/was/user-scripts; \
    mkdir -p /home/was/custom-jars; \
    mkdir -p /home/was/files

COPY user-scripts/*.sh /home/was/user-scripts/
COPY custom-jars/*.jar /home/was/custom-jars/
COPY etc/default/flow-config.xml /home/was/files/
COPY etc/default/data_entry_properties.xml /home/was/files/
COPY public_html/user/* /home/was/files/

RUN chmod 755 /home/was/user-scripts/*.sh; \
    chown was:svcgroup /home/was/ -R

USER was

WORKDIR /home/was
ENTRYPOINT ["./cmd.sh"]

Sample Dockerfile to make root-level changes in the Scheduler image

FROM ipm-sch-ubi7:12.0.0

USER root
RUN mkdir -p /home/oracle; \
chown svcuser:svcgroup /home/oracle -R;

USER svcuser

WORKDIR /home/svcuser
ENTRYPOINT ["./cmd.sh"]

Additional configuration is required for custom tabs and entry preview functions that involve interaction between the UI elements in the Admin UI and the Persona-based UI. You can perform the configuration by using the post-config.sh script. When the container is initialized, the post-config.sh script runs and updates the configuration. In the following example, an entry preview script is configured to open a new results window.

Sample Dockerfile

FROM ipm-persona:12.0.0

USER root

RUN mkdir -p /home/was/user-scripts; \
    mkdir -p /home/was/custom-jars; \
    mkdir -p /home/was/files

COPY user-scripts/*.sh /home/was/user-scripts/
COPY custom-jars/*.jar /home/was/custom-jars/
COPY etc/default/flow-config.xml /home/was/files/
COPY etc/default/data_entry_properties.xml /home/was/files/
COPY public_html/user/* /home/was/files/

RUN chmod 755 /home/was/user-scripts/*.sh; \
    chown was:svcgroup /home/was/ -R

USER was

WORKDIR /home/was
ENTRYPOINT ["./cmd.sh"]

Sample pre-config.sh script

#!/bin/sh

echo "Starting pre-config.sh script....."

cp -f /home/was/files/flow-config.xml /opt/MDM/etc/default/
cp -f /home/was/files/data_entry_properties.xml /opt/MDM/etc/default/
cp -f /home/was/files/CustomAction.jsp /opt/MDM/public_html/user/CustomAction.jsp
cp -f /home/was/files/SamplePage.jsp /opt/MDM/public_html/user/SamplePage.jsp
cp -f /home/was/files/systemTab.css /opt/MDM/public_html/user/systemTab.css

sed -i "s/^xframe_header_option=.*/xframe_header_option=ALLOWALL/g" /opt/MDM/etc/default/common.properties

Sample post-config.sh script

#!/bin/sh

echo "Starting post-config.sh script....."

cp -f /home/was/files/flow-config.xml /opt/MDM/etc/default/
cp -f /home/was/files/data_entry_properties.xml /opt/MDM/etc/default/
cp -f /home/was/files/CustomAction.jsp /opt/MDM/public_html/user/CustomAction.jsp
cp -f /home/was/files/SamplePage.jsp /opt/MDM/public_html/user/SamplePage.jsp
cp -f /home/was/files/systemTab.css /opt/MDM/public_html/user/systemTab.css

sed -i "s/^xframe_header_option=.*/xframe_header_option=ALLOWALL/g" /opt/MDM/etc/default/common.properties

To display the results, perform the following additional configuration (a minimal sketch follows this list):

Set the value of xframe_header_option=ALLOWALL for both the ipm-admin and the ipm-persona containers in the common.properties file.
Initialize the ipm-persona container with the Admin UI port configured by using the MDM_APP_SVR_PORT argument.
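
A minimal sketch of the two settings follows; the Admin UI port value 7507 is illustrative, and the ellipsis stands for the remaining required variables.

# In /opt/MDM/etc/default/common.properties for both the ipm-admin and ipm-persona containers
xframe_header_option=ALLOWALL

# When initializing the ipm-persona container, pass the Admin UI port
docker container run -t -d --rm -e MDM_APP_SVR_PORT=7507 ... ipm-persona:12.0.0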

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Starting, accessing, or closing the Docker containers

Start, access, or close the Docker containers by using the following commands.

Starting the Docker containers


Following is the command to start the Docker containers:

docker container run -t -d --rm -e NODE_TYPE=<> -e DB_HOST=x.x.x.x
-e MDM_DB_NAME=<> -e DB_USER_NAME=<> -e DB_PASSWORD=<> -e DB_PORT=<> -e NEWUI_SVR_PORT=<>
-e MDM_APP_SVR_PORT=<> -e NEWUI_MAX_MEM=<> -e ES_CLUSTER_NAME=<> -e ES_CLUSTER_IP=<>
-e ES_HTTP_PORT=<> -e ES_TRANSPORT_PORT=<> -e PIMCOLLECTOR_URI=<> -e ES_USE_AUTH=<>
-e ES_PASSWORD_ENCRYPTED=<> -e ES_AUTH_USERNAME=<> -e ES_AUTH_PASSWORD=<>
-e MDM_SHARED_VOLUME_DIR=<> -e INDEXER_MEMORY_FLAG=<>
-e MDM_APPSVR_MEMORY_FLAG=<> -e MDM_EVENTPROCESSOR_MEMORY_FLAG=<>
-e MDM_QUEUEMANAGER_MEMORY_FLAG=<> -e MDM_ADMIN_MEMORY_FLAG=<> -e ENABLE_FTS=<> -e HAZELCAST_IP=<>
-e HAZELCAST_PORT=<> -e ENABLE_VENDOR=<> -e ENABLE_DAM=<> -e MONGODB_IP=<> -e MONGODB_PORT=<>
-e ENABLE_ML=<> -e ML_HOST=<> -e ML_PORT=<> -e MONGODB_DATABASE=<> -e MONGODB_USERNAME=<>
-e MONGODB_PASSWORD=<> --network=<> --name=<> -v <path> ipm-image_name:12.0.0

Where,

-t means allocate a pseudo-TTY
-d means run the container in the background and print the container ID
--rm means automatically remove the container when it exits
-e means set environment variables
-p means publish the port of a container to the host
--network means connect a container to a network
-v means volume of containers with the following probable values:
/mnt/dockervol1/persona_logs:/opt/IBM/WebSphere/AppServer/profiles/AppSrv01/logs
/mnt/dockervol1/admin_logs:/opt/IBM/WebSphere/AppServer/profiles/AppSrv01/logs
/mnt/dockervol1/ipmsch_logs:/opt/mdmlogs
/mnt/dockervol1/ipmwfl_logs:/opt/mdmlogs
/mnt/dockervol1/ipmfts_logs:/opt/MDM/mdmui/logs
/mnt/dockervol1/ipm_share_vol:/opt/ipm_share_vol
/mnt/dockervol1/ipmml_logs:/opt/MDM/mdmui/logs

For more information on the properties used to start the command, see Environment variables for deployed Docker image.



Accessing the Docker containers
When your deployment is up and running, you can access your Docker containers to begin using the Product Master application.

Proceed as follows to access the Docker containers:

1. Run the Docker list command to get a list of all the Docker containers running in your system:

docker container ls

2. For remote access, attach to each Product Master Docker container, as needed.

docker exec -it <container name> /bin/bash

Closing the Docker containers


To stop a Docker container, run the following command. Containers that are started with the --rm flag are removed automatically when they stop:

docker stop <container name>
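
For example, assuming a container named ipm-admin-container (the name is illustrative) that was started without the --rm flag, remove it explicitly after stopping it:

docker stop ipm-admin-container
docker rm ipm-admin-container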

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Customizing Elasticsearch, Hazelcast, and MongoDB services

You need to perform extra configuration for the Elasticsearch, Hazelcast, and MongoDB services that the Product Master docker images depend on.

Prerequisites
Supported Elasticsearch versions:
- Elasticsearch 5.5.3 and later
- Elasticsearch 7.7.0 and later
Supported Hazelcast IMDG versions:
- Hazelcast IMDG 3.12.5 and later
- Hazelcast IMDG 4.0 and later

Customizing Hazelcast service


You need to customize the Hazelcast service with configuration parameters as required. Typically, this involves configuring the hazelcast.xml file, so you need to create a new Docker image for Hazelcast with the customized settings. Create a Dockerfile with the following content:

FROM hazelcast/hazelcast:<version_number>
# Adding custom hazelcast.xml
ADD hazelcast.xml ${HZ_HOME}
ENV JAVA_OPTS -Dhazelcast.config=${HZ_HOME}/hazelcast.xml

You can use the hazelcast.xml supplied with the Product Master build to customize the image.
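If you do not have the supplied file at hand, the following is a minimal hazelcast.xml sketch for Hazelcast IMDG 4.x; the cluster name is illustrative, and the fixed port matches the HAZELCAST_PORT default of 5702.

<?xml version="1.0" encoding="UTF-8"?>
<hazelcast xmlns="http://www.hazelcast.com/schema/config"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
           xsi:schemaLocation="http://www.hazelcast.com/schema/config
                               http://www.hazelcast.com/schema/config/hazelcast-config-4.0.xsd">
    <!-- Illustrative cluster name; align with your deployment -->
    <cluster-name>ipm-cluster</cluster-name>
    <network>
        <!-- Fixed port so that it matches the HAZELCAST_PORT environment variable -->
        <port auto-increment="false">5702</port>
    </network>
</hazelcast>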

1. Create a customized Hazelcast image by using the following command:

docker build -t mdm-hazelcast:<version_number> .

The command creates a customized Hazelcast Docker image with the "mdm-hazelcast" name and "<version_number>" tag.
2. Start the Hazelcast service by using the following command:

docker run -itd -p 5702:5702 mdm-hazelcast:<version_number>

For more information, see Hazelcast Docker image.

Customizing Elasticsearch service


Elasticsearch by default is set up with authentication enabled. If you want to use the community edition capabilities of Elasticsearch, you need to disable authentication in Elasticsearch.
Elasticsearch 5.x.x

To start Elasticsearch with authentication disabled, use the following command:

docker run -itd --rm -p 9200:9200 -p 9300:9300 -e "http.host=0.0.0.0"
-e "transport.host=0.0.0.0" -e xpack.security.enabled=false --name=elasticserver
docker.elastic.co/elasticsearch/elasticsearch:5.x.x

To start Elasticsearch with authentication enabled, use the following command:

docker run -itd --rm -p 9200:9200 -p 9300:9300 -e "http.host=0.0.0.0"
-e "transport.host=0.0.0.0" --name=elasticserver docker.elastic.co/elasticsearch/elasticsearch:5.x.x



Elasticsearch 7.7.x (IBM® Product Master Fix Pack 1 and later)
To start Elasticsearch, use the following command (emphasis on the Elasticsearch 7.7.x variables):

docker container run -t -d --rm -e discovery.type=single-node -e cluster.name=docker-cluster
-e http.host=0.0.0.0 -e transport.host=0.0.0.0 -e xpack.security.enabled=false
-e transport.compress=true -e xpack.ml.enabled=false -e indices.query.bool.max_clause_count=10000
-p 9200:9200 -p 9300:9300 --network=mdm-network --name=elasticsearch-container docker.elastic.co/elasticsearch/elasticsearch:7.7.x

To avoid data loss of Elasticsearch, mount the local volume on the following directories:

/usr/share/elasticsearch/data - Contains data
/usr/share/elasticsearch/logs - Contains logs

Note: For production usage, you might want to configure Elasticsearch with a volume that is bound to /usr/share/elasticsearch/data to persist data across container restarts. For more information, see the Production mode section in Install Elasticsearch with Docker.
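
A minimal sketch of such a start command with both directories mounted follows; the host paths are illustrative.

docker container run -t -d --rm -e discovery.type=single-node \
  -v /mnt/dockervol1/es_data:/usr/share/elasticsearch/data \
  -v /mnt/dockervol1/es_logs:/usr/share/elasticsearch/logs \
  -p 9200:9200 -p 9300:9300 --network=mdm-network --name=elasticsearch-container \
  docker.elastic.co/elasticsearch/elasticsearch:7.7.x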

Customizing MongoDB service


The required MongoDB docker image version is 3.4.

1. Create the /mnt/dockervol/docker-entrypoint-initdb.d directory on the docker host.


2. Copy the mongo-init.js file to the /mnt/dockervol/docker-entrypoint-initdb.d directory.
3. Edit the mongo-init.js file.

db = db.getSiblingDB('<MONGODB_DATABASE>');
db.createCollection("test");

4. Start the MongoDB container by using the following command:

docker container run -itd --rm -p 27017:27017
-v /mnt/dockervol1/mongo_init_file:/docker-entrypoint-initdb.d
-v /mnt/dockervol1/mongodb/log:/var/log/mongodb
-v /mnt/dockervol1/mongodb/data:/data/db -e MONGO_INITDB_ROOT_USERNAME=<MONGODB_USERNAME>
-e MONGO_INITDB_ROOT_PASSWORD=<MONGODB_PASSWORD> --network=mdm-network --name=mongodb-container mongo:3.4

5. View the container by using the following command:

# docker ps | grep mongo

CONTAINER ID   IMAGE       COMMAND                  CREATED              STATUS              PORTS                      NAME
d62367f33879   mongo:3.4   "docker-entrypoint.s…"   About a minute ago   Up About a minute   0.0.0.0:27017->27017/tcp   mongodb-container

The MongoDB data and logs are stored on the mounted volume, so the data remains persistent even if you restart or re-create the containers.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Customizing IBM MQ service (GDS)

To use the GDS service in a docker environment, the IBM® MQ service is mandatory.

Use the following command to fetch the docker image from the IBM MQ registry and start the IBM MQ container.

docker run --env LICENSE=accept --env MQ_QMGR_NAME=QM1 --volume qm1data:/mnt/mqm
--publish 1414:1414 --publish 9443:9443 --detach --env MQ_APP_PASSWORD=xxxxxx
--env MQ_ADMIN_PASSWORD=xxxxxx ibmcom/mq:latest

By default, the IBM MQ container creates the following queues.

DEV.QUEUE.1
DEV.QUEUE.2
DEV.QUEUE.3

The IBM MQ username is "app" and the web console username is "admin".
To ensure that GDS works properly, add the following generated details to the IBM Product Master application in the docker environment.

MQ_QUEUE_MANAGER=QM1
INBOUND_QUEUE_NAME=DEV.QUEUE.1
OUTBOUND_QUEUE_NAME=DEV.QUEUE.2
MQ_APP_USERNAME=app
MQ_APP_PASSWORD=xxxxxx
MQ_HOST=<Docker Host IP>
MQ_PORT=1414
MQ_CHANNEL_NAME=DEV.APP.SVRCONN

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)



Deploying on the Oracle Server

You need to perform extra configuration for Product Master docker images to work with an Oracle Server.

Before you begin


1. Download the appropriate client libraries for your Oracle Database from the Instant Client Downloads for Linux x86-64 (64-bit) website:
instantclient-basic-linux.x64-x.x.x.x.x.zip
instantclient-sqlplus-linux.x64-x.x.x.x.x.zip
2. Note the name of the folder that is created after you extract the package. The folder name is required during the docker image build process.
3. Create a copy of the ipm-sch-orcl.tar.gz and then extract the package. The package contains a sample Dockerfile that can be used to patch the ipm-sch:12.0.0.<fp>
image for the Oracle client binaries.
4. Copy the following JAR files to the respective locations and delete the placeholder files.
JAR file Location
ojdbc6.jar oraclejars/jdbc/lib
xmlparserv2.jar oraclejars/lib
xdb.jar oraclejars/rdbms/jlib
Note: This topic assumes that the Oracle Database is configured with Product Master schema and a company is provisioned.

Procedure
Proceed as follows to configure the Product Master for creating docker images:

1. Set up a local HTTPD service for the Oracle Database client binaries. Following is a sample Dockerfile to configure and run HTTPD service:

FROM httpd
EXPOSE 80

2. Create a directory, and create a Dockerfile in it by using the sample content.


3. Build the docker image to configure HTTPD to port 80 using the following command:

docker build -t web .

4. Start the HTTPD container by using the following command:

docker container run -d -t -p 8080:80 --rm --name websvr -v /mnt/dockervol1/install/:/usr/local/apache2/htdocs/ web

Where,
/mnt/dockervol1/install contains the Oracle Database client binary files.
5. Copy the directories that you created in steps 3 and 4 of the Before you begin section.
6. Build the docker images for each Product Master service. Following is a sample Dockerfile with emphasis on the sections that require updates:

FROM <MDM-svc-base-image>:<TAG>
ARG ORCL_CLIENT_DIR_NAME
ENV MDM_ORACLE_CLIENT_DIR=${ORCL_CLIENT_DIR_NAME:-"instantclient_x_x"}

USER root

RUN mkdir -p /home/oracle; \
    wget -O /opt/MDM/instantclient-basic-linux.x64-x.x.x.x.x.zip \
      http://<server-running-httpd-service>:<httpd-port>/instantclient-basic-linux.x64-x.x.x.x.x.zip; \
    unzip /opt/MDM/instantclient-basic-linux.x64-x.x.x.x.x.zip -d /home/oracle/; \
    wget -O /opt/MDM/instantclient-sqlplus-linux.x64-x.x.x.x.x.zip \
      http://<server-running-httpd-service>:<httpd-port>/instantclient-sqlplus-linux.x64-x.x.x.x.x.zip; \
    unzip /opt/MDM/instantclient-sqlplus-linux.x64-x.x.x.x.x.zip -d /home/oracle/; \
    mkdir -p /home/oracle/$MDM_ORACLE_CLIENT_DIR/network/admin; \
# Clean up unwanted files
    rm -rf /opt/MDM/instantclient-basic-linux.x64-x.x.x.x.x.zip; \
    rm -rf /opt/MDM/instantclient-sqlplus-linux.x64-x.x.x.x.x.zip

COPY ./tnsnames.ora /home/oracle/$MDM_ORACLE_CLIENT_DIR/network/admin
COPY ./oraclejars/ /home/oracle/$MDM_ORACLE_CLIENT_DIR/

RUN chown svcuser:svcgroup /opt/mdmlogs -R; \
    chown svcuser:svcgroup /opt/MDM -R; \
    chown svcuser:svcgroup /home/svcuser/ -R; \
    chown svcuser:svcgroup /home/oracle -R

USER svcuser

WORKDIR /home/svcuser
ENTRYPOINT ["./cmd.sh"]

Where,
<MDM-svc-base-image>:<TAG> - Specify the image name and tag to update the image. To update the ipm-admin image, specify ipm-admin:12.0.0.<fp>.
<server-running-httpd-service>:<httpd-port> - Specify the IP address of the server and the port details of the HTTPD service where you copied the Oracle Database client binaries.
The "svcuser" name is replaced by "was" if you are customizing the ipm-admin-ubi7 or ipm-persona-ubi7 image.
7. Run the following command to build the image:

docker build --build-arg ORCL_CLIENT_DIR_NAME=instantclient_x_x -t ipm-admin-orcl:12.0.0.<fp> .



8. Repeat steps 5, 6, and 7 for each Product Master docker image.
9. Run the following command to start the Docker container:

docker container run -t -d --rm -e NODE_TYPE=PRIMARY -e DB_HOST=x.x.x.x
-e MDM_DB_TYPE=oracle -e MDM_DB_NAME=database2 -e DB_USER_NAME=dev -e DB_PASSWORD=dev -e DB_PORT=1521
-e MDM_APP_SVR_PORT=7707 -e MDM_APPSVR_MEMORY_FLAG="-Xmx2048m -Xms256m" -e MDM_EVENTPROCESSOR_MEMORY_FLAG="-Xmx128m -Xms48m"
-e MDM_QUEUEMANAGER_MEMORY_FLAG="-Xmx128m -Xms48m" -e MDM_ADMIN_MEMORY_FLAG="-Xmx128m -Xms48m" -e ENABLE_VENDOR=1
-e ENABLE_DAM=1 -e MONGODB_IP=x.x.x.x -e MONGODB_PORT=27017 -e ENABLE_FTS=1 -e ENABLE_ML=1 -e ML_HOST=x.x.x.x
-e ML_PORT=5000 -e MONGODB_DATABASE=mgdb -e MONGODB_USERNAME=mluser -e MONGODB_PASSWORD=mluser -p 7507:7507
-p 9060:9060 -p 9444:9444 --network=mdm-network --name=<container_name>-container -v /mnt/dockervol1/<log file location>
x.x.x.x:5000/<container_name>:12.0.0.<fp>

Where,
-t means allocate a pseudo-TTY
-d means run the container in the background and print the container ID
--rm means automatically remove the container when it exits
-e means set environment variables
-p means publish a container's port to the host
--network means connect a container to the network
<log file location> means the log file location, which can be any of the following depending upon the container:
Admin UI - mdmappsvr_logs:/opt/mdmlogs
Persona-based UI - newui_logs:/opt/IBM/WebSphere/AppServer/profiles/AppSrv01/logs
Scheduler - mdmsch_logs:/opt/mdmlogs
Workflow - mdmwfl_logs:/opt/mdmlogs
pim-collector and indexer - mdmfts_logs:/opt/NEWUI/mdmui/logs
Machine learning - mdmml_logs:/opt/NEWUI/mdmui/logs
To start the Docker containers with the Free text search feature enabled, set the value of the ENABLE_FTS property as 1. Similarly, to enable the DAM feature, set
the value of the ENABLE_DAM property as 1.

What to do next
You need to perform extra configuration for the Elasticsearch, Hazelcast, and MongoDB services. For more information, see Customizing Elasticsearch, Hazelcast, and MongoDB services.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Deploying on the Kubernetes cluster

IBM® Product Master services are deployed on the Kubernetes cluster by using operators.

Before you begin


Download the following archive and push the images that it contains to the local registry.
IBM-Product-Master-Docker

About this task


The following files are present in the compressed file that you download by using the above link.
File - Purpose
volumes.yaml - Deploys volumes. The volumes are not auto-provisioned on local clusters.
ipm.psl.com_ipms_crd.yaml - Custom resource definition (CRD); enables you to add your own or custom objects to the Kubernetes cluster.
operator.yaml - Operator deployment spec.
role_binding.yaml - Operator role-based access control (RBAC).
role.yaml - Operator role-based access control (RBAC).
service_account.yaml - Operator role-based access control (RBAC).
ipm-12.0.2_cr.yml - Custom resource (CR) instance.

Procedure
Proceed as follows for volume creation when deploying on the local cluster.

Note: Not applicable in the cloud environment because volumes are auto-provisioned there. Also, you must specify an appropriate storageClass for each volume in the ipm-12.0.2_cr.yml file.

1. Create directories for volumes with the required directory structure and proper permissions on every node of the Kubernetes cluster by using the following commands.

mkdir <parent directory>
cd <parent directory>
mkdir appsrv ftsindlog ftspimlog gds ml mongodb mongodblog mq-data newui sch wfl dam
chmod 777 <parent directory> -R

# For example:
mkdir -p /mnt/ipm12/
cd /mnt/ipm12/
mkdir appsrv ftsindlog ftspimlog gds ml mongodb mongodblog mq-data newui sch wfl dam
chmod 777 /mnt/ipm12/ -R

Note: You need to share the local directory for the DAM volume "/dam" among cluster nodes by using the Network File System (NFS) service.
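
For reference, the volumes.yaml file defines one volume per directory. The following is a minimal sketch of such an entry, assuming hostPath persistent volumes; the name, size, and access mode are illustrative.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: ipm-appsrv-pv
spec:
  capacity:
    storage: 10Gi              # illustrative size
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /mnt/ipm12/appsrv    # one of the directories created in step 1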
2. Create volumes by using the following command.

kubectl apply -f volumes.yaml

3. View the created volumes by using the following command.

kubectl get pv

4. Create namespace by using the following command.

kubectl create ns ipm

5. Modify the operator.yaml file to update the operator registry, image, and version (to point to the local registry). Do not modify the other files.
a. Update the line image: <local_registry_name>/ipm-operator:12.0.2 with the local registry.
6. Deploy the operator by using the following commands.

kubectl apply -f ipm.psl.com_ipms_crd.yaml


kubectl apply -f role_binding.yaml
kubectl apply -f role.yaml
kubectl apply -f service_account.yaml
kubectl apply -f operator.yaml

The custom resource ipm-12.0.2_cr.yml file contains the application deployment specification.
While editing the CR, replace xxxxx with the wanted value and set the replica count as required.
7. Deploy the Product Master by using the following command.

kubectl apply -f ipm-12.0.2_cr.yml

8. View all resources by using the following command.

kubectl get all -n ipm

9. View specific resources by using the following command.

kubectl get <pods/deploy/svc/pvc/cm/pv> -n ipm

Example

kubectl get pods -n ipm

10. Check logs for a specific pod by using the following command.

kubectl logs -f <pod name> -n ipm

11. To uninstall, use the following commands.

kubectl delete -f ipm-12.0.2_cr.yml


kubectl delete -f operator.yaml
kubectl delete -f role_binding.yaml
kubectl delete -f role.yaml
kubectl delete -f service_account.yaml
kubectl delete -f volumes.yaml
kubectl delete -f ipm.psl.com_ipms_crd.yaml
kubectl delete ns ipm

Results
Access the Persona-based UI by using the following URL.

https://<hostname>:32044/mdm_ui/#/MDM/Home

Example

https://x.x.x.x:32044/mdm_ui/#/MDM/Home

Access the Admin UI by using the following URL.

https://<hostname>:31144/utils/enterLogin.jsp



Example

https://x.x.x.x:31144/utils/enterLogin.jsp

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Known issues and limitations


Certain product features assume that the system is deployed by using a centralized deployment model where services share the file system and product binaries. With containerized deployments, services no longer have a common file system and work in isolation.

Following is the list of known issues and limitations.


Feature: System Administrator > Properties
Admin UI: Displays all the properties that are listed in the configuration files such as common.properties and db.xml.
Containerized deployment: Displays property values that are configured for the ipm-admin container.

Feature: System Administrator > Log Files
Admin UI: Displays all the log files for each service in the log folder.
Containerized deployment: Displays log files present in the ipm-admin container.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Deploying on a clustered environment


Developers, administrators, and transition engineers who want to set up a typical Product Master clustered environment can follow one of two ways.

Deploying Product Master on an application server and then converting it into a cluster.
Deploying Product Master directly on a computer cluster.

Overview of clustering and workload management


You can use deployment scripts to deploy IBM® Product Master in a clustered environment. You use the WebSphere® Application Server Deployment Manager to deploy
Product Master.

IBM Product Master uses the clustering and workload management features provided by WebSphere Application Server.

Product Master supports deployment of the product in this clustered environment, as shown in the following figure.

Figure 1. Cluster deployment for Product Master

Preparing the logging and configuration directories


To configure horizontal clustering, you need to prepare the logging and configuration directories.
Maintaining a cluster environment
To maintain a cluster, some common tasks that you perform are adding more servers to the cluster, stopping the cluster, and restarting the cluster.
Deploying Product Master using WebSphere Application Server Deployment Manager
Before you can deploy IBM Product Master, you must configure your application server, create a cluster, configure your host, sync the application servers, and



restart the cluster and Product Master.
Enabling horizontal cluster for the IBM Product Master
Horizontal clustering involves adding more computers to a cluster. You can spread the cluster across several computers horizontally.
Configuring a cluster environment
To improve performance, you can run services in a clustered environment so that you can run multiple services on one computer or on multiple computers.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Preparing the logging and configuration directories


To configure horizontal clustering, you need to prepare the logging and configuration directories.

Procedure
1. Install IBM® Product Master. Ensure that the <install dir> directory is shared across all of the machines in the cluster and is found on each machine at the same path.
The Product Master user on each machine must have write permissions to the <install dir> directory. NFS is the ideal approach. To configure for horizontal
clustering, you must manually install Product Master.
2. Create the configuration directory. Create a directory that is writable by the Product Master user.
For example, /home/mdmpim/config on each machine. Each machine in the cluster requires its own directory for configuration files.
3. Create the logging directory. Create a directory that is writable by the Product Master user.
For example, /home/mdmpim/logs on each machine. Each machine in the cluster can have its own logging directory.
Note: If you want to see the same log files for all of the services together, ensure that the logging directory is shared across all the machines in the cluster.
4. On one system in the cluster, copy all the files and directories from the <install dir>/bin/conf directory to the configuration directory that you created in step 2.
For example, cp -r /usr/local/mdmpim/bin/conf/* /home/mdmpim/config. Then, delete all the files and directories under the <install dir>/bin/conf directory.
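
As a sketch, the full preparation sequence with the example paths used above:

# Run on every machine in the cluster
mkdir -p /home/mdmpim/config /home/mdmpim/logs

# Run on one system only: move the configuration out of the shared install tree
cp -r /usr/local/mdmpim/bin/conf/* /home/mdmpim/config
rm -rf /usr/local/mdmpim/bin/conf/*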

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Maintaining a cluster environment


To maintain a cluster, some common tasks that you perform are adding more servers to the cluster, stopping the cluster, and restarting the cluster.

Before you begin


Ensure that you start the WebSphere® Application Server Deployment Manager before you install Product Master and create the cluster.
Important: Ensure that you do not use the install_war.sh script on an existing cluster or cluster member. If you run this script on an existing cluster or cluster
member, a failed error message is displayed.
Important: Before the deployment, ensure that only server1 exists. If any other servers exist, delete them.
Ensure that you set up the WebSphere Application Server. For more information, see Configuring WebSphere® Application Server.

Procedure
1. Create more application servers. For more information about creating application servers, see: WebSphere Application Server product documentation.
Create more members, for example, server2 at port 9081 and server3 at port 9082, in the cluster.
2. Restart the cluster.
a. Stop the cluster. For more information about stopping the cluster, see: WebSphere Application Server product documentation.
For example, select the cluster MDMPIM and click Stop. All servers (members) are stopped.
b. Start the cluster. For more information about starting the cluster, see: WebSphere Application Server product documentation.
For example, select the cluster MDMPIM and click Start. All servers (members) are started.
3. Stop and start servers server1, server2, and server3.
a. Stop all of the servers. For more information, see the "Stopping the product" section in the Accessing the product.
b. Start all of the servers. For more information, see the "Starting the product" section in the Accessing the product.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Deploying Product Master using WebSphere Application Server Deployment Manager


Before you can deploy IBM® Product Master, you must configure your application server, create a cluster, configure your host, sync the application servers, and restart the
cluster and Product Master.

188 IBM Product Master 12.0.0


Before you begin
Ensure you meet the following prerequisites:

Start the WebSphere® Application Server Deployment Manager before you install Product Master and create the cluster.
Important: Ensure that you do not use the install_war.sh script on an existing cluster or cluster member. If you run this script on an existing cluster or cluster
member, a failed error message is displayed.
Important: Before the deployment, ensure that only server1 exists. If any other servers exist, delete them.
Ensure that you set up WebSphere Application Server. For more information about setting up the server, see Configuring WebSphere® Application Server.
If you are using IBM WebSphere MQ, ensure that you install WebSphere MQ Client on all instances of Product Master on every cluster.

About this task


Deploy Product Master using WebSphere Application Server Deployment Manager:

Procedure
1. Deploy the application on the application server. For more information, see Configuring the WebSphere Application Server.
a. Ensure server1 is used in the env_settings.ini file.
For example:

[appserver.websphere]
application_server_profile=mdmpim
cell_name=mdmpimNode01Cell

node_name=mdmpimNode01
# set security to true if administrative security is enabled. Defaults to false if not set
admin_security=false

[appserver.appsvr]
port=9080
# for websphere, add appserver_name and vhost_name
appserver_name=server1
vhost_name=mdmvhost

b. Install Product Master to the application server server1.


c. Log in to Product Master: http://<hostname>:<port>/utils/enterLogin.jsp
2. Create the cluster with one existing application server. For more information about creating a cluster, see: WebSphere Application Server product documentation.
A new cluster is created with one member that is converted from the application server server1.
3. Create more application servers. For more information about creating an application server, see: WebSphere Application Server product documentation.
Create more members, for example, server2 at port 9081 and server3 at port 9082, in the cluster.
4. Configure the virtual host. For more information about configuring a virtual host, see: WebSphere Application Server product documentation.
Configure the virtual host mdmvhost with host aliases for all cluster members (hostnames/ports) on the WebSphere Application Server admin console, for example:
*:9080 - for server1
*:9081 - for server2
*:9082 - for server3
5. Sync all of the application servers. Before you restart all of the cluster members, do a full synchronization and populate the application on server1 to all other
servers, that is, server2 and server3. For more information about syncing, see: WebSphere Application Server product documentation.
Select the node that is used for this application, for example, mdmpimNode01, and click Full Resynchronize. For more information about Full Resynchronize, see
WebSphere Application Server product documentation.
6. Restart the cluster.
a. Stop the cluster. For more information about stopping the cluster, see: WebSphere Application Server product documentation.
For example, select the cluster MDMPIM and click Stop. All servers (members) are stopped.
b. Start the cluster. For more information about starting the cluster, see: WebSphere Application Server product documentation.
For example, select the cluster MDMPIM and click Start. All servers (members) are started.
7. Stop and start servers server1, server2, and server3.
a. Stop all of the servers. For more information, see the "Stopping the product" section in the Accessing the product.
b. Start all of the servers. For more information, see the "Starting the product" section in the Accessing the product.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Enabling horizontal cluster for the IBM Product Master


Horizontal clustering involves adding more computers to a cluster. You can spread the cluster across several computers horizontally.

With horizontal clustering, multiple application servers are deployed on the different physical computers, which provide an easy and convenient way of scaling beyond
hardware limitations of a single computer. If a computer fails, the remaining computers are unaffected and the application can still be served by another cluster member.
Horizontal clustering also increases the scalability of the system by distributing the processing across several CPUs. Typically, each physical computer comprises one
node, so generally, you can say that each computer is a cluster node.

To enable a horizontal cluster for the Product Master you need to perform the following tasks:

Setting up WebSphere® Application Server


Setting up horizontal cluster on the two boxes running Product Master over the IBM HTTP Server
Deploying the Product Master in the clustered environment

IBM Product Master 12.0.0 189


Deploying the Product Master on the AppSrv01Host01
Configuring the WebSphere Application Server on Host02
Creating a cluster of Horizontal topology
Configuring JVM parameters and Product Master parameters for the clustered members
Configuring Product Master parameters for the clustered members
Configuring IBM HTTP Server
Enabling SSL for the cluster
Enabling sticky sessions

Note: The examples in this topic mainly use Linux® commands and show results on a Linux box. Differences might exist in the commands, output, or both if you are using AIX® or other types of Linux.

Setting up WebSphere Application Server


Before you can install IBM® Product Master, you must set up your WebSphere Application Server.

If you plan to use the clustering and workload management features of WebSphere Application Server, you must install the deployment manager, as shown in the
following figure.

Figure 1. Installing WebSphere Application Server

The previous figure shows the following two profiles:

1. Dmgr01 - with dmgr01 for the deployment manager
2. AppSrv01 - with server1 for a managed node, for example, mdmpimNode01.

Setting up horizontal cluster on the two boxes running Product Master over the IBM HTTP
Server
Prerequisites

IBM WebSphere Application Server 9 Network Deployment on each box. Run the following command to confirm the installed version:

{WAS_install_directory}/bin/versionInfo.sh

SDK 1.8 (depending on the version of IBM WebSphere Application Server used) is enabled on all profiles.
You have installed a database server.
You must also create a database specifically for the Product Master. As a DB admin, run the following command:

create_pimdb.sh

Install a database client on each machine in the cluster and on the workstation.


Disable the firewalls. Run the following command to disable the firewall:
Red Hat

service firewalld stop

CentOS

systemctl stop firewalld

The database that is used must be cataloged on each member of the cluster. As a DB admin, run the following command on each box:

catalog tcpip node ClustNod remote Hostname server 50000
catalog database pimdb at node ClustNod
terminate

Where,
ClustNod is an arbitrary name for the database node (8 characters or less).
The database port is 50000.
The database for the Product Master is pimdb.
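
To verify the catalog entries, a quick connection test can help; the user name db2inst1 is illustrative.

db2 connect to pimdb user db2inst1
db2 terminate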

Deploying the Product Master in the clustered environment


Deploy the Product Master on an application server and then convert the application server into a cluster.

1. Configuring WebSphere Application Server on Host01.


2. Create Deployment Manager (Dmgr01Host01) and a federated application server profile (AppSrv01Host01).
3. Mount directories over the Network File System (NFS) to share installation directories.
4. Configure IBM Installation Manager for the Product Master installation.
5. Install the Product Master on the AppSrv01Host01 using the Installation Manager or through the installation scripts.
6. Configure the WebSphere Application Server on Host02.
7. Create the application server profile (AppSrv02Host02) on the Host02 while you are federating the profile to the Dmgr01Host01.
8. Propagate the Product Master installation to the horizontal cluster topology.
9. Configure the JVM Parameters and Product Master parameters for the clustered members.
10. Add aliases to virtual host (MDMCE_VHOST01_HOST01).

190 IBM Product Master 12.0.0


11. Install a single HTTP server and configure the server to work as a front end to the cluster.
Note: You can perform all the tasks as a root user, at least initially. File access and ownership can be changed later, if needed.

Create Deployment Manager and a federated application server profile

To create the profile, proceed as follows:

1. Open Profile Management Tool (PMT).


2. Open the WebSphere Customization Toolbox by using the following command in the command prompt, or from the menu if you are accessing the server with the VNC client UI:
{Websphere Deployment Directory}/bin/ProfileManagement/eclipse/pmt.sh
3. In the WebSphere Customization Toolbox window, browse to the Profile Management Tool > Create.
4. In the Environment Selection window, select Cell (deployment manager and a federated application server), and then click Next.
5. In the Profile Creation Options window, select Advanced profile creation, and then click Next.
6. In the Optional Application Deployment window, select Deploy the administrative console, and then click Next.
7. In the Optional Application Deployment window, enter the names and home directories for the profiles to be created, for example:
Deployment manager profile - Dmgr01Host01
Application server profile - AppSrv01Host01
Profile directory - /opt/IBM/WebSphere/AppServer
8. In the Node, Host, and Cell Names window, enter the following values:
Deployment manager node name - CellManager01Host01
Application server node name - Node01Host01
Hostname - Hostname1
Cell name - Cell01Host01
9. In the Administrative Security window, enable administrative security by providing a username and a password. Use defaults in the next steps, review details in the
Profile Creation Summary window, and click Create.
10. After a successful profile creation, click Finish on the Profile Creation Complete window, and clear the Launch the first steps console checkbox. The WebSphere
Customization Toolbox window shows profiles that are created.
11. From the command prompt, use the following commands to start Dmgr01Host01 manager and AppSrv01Host01 node agent:

/opt/IBM/WebSphere/AppServer/profiles/Dmgr01Host01/bin/startManager.sh
/opt/IBM/WebSphere/AppServer/profiles/AppSrv01Host01/bin/startNode.sh

12. Use the following command to confirm that the SDK 1.7, 1.7.1 or 1.8 is enabled on these profiles:

/opt/IBM/WebSphere/AppServer/bin/managesdk.sh -listEnabledProfileAll

13. Run the following command to enable the appropriate SDK on the profile if needed:

/opt/IBM/WebSphere/AppServer/bin/managesdk.sh -enableProfile -profileName <profileName> -sdkname 8.0_64

Mount directories over the Network File System (NFS)

To set up Product Master directory over the NFS:

1. Create a directory for the Product Master files.

/opt/IBM/MDMCE

2. Give the folder full access rights by using the following command:

chmod -R 777 MDMCE

3. Check whether the NFS services are running by using the following command:

/sbin/service nfs status

4. If NFS services are not running, start them using the following command:

/sbin/service nfs start

5. Specify which part of the file system is to be accessed, or shared, by the nodes. Edit or create the file /etc/exports to export directories and set options for access by
remote systems through NFS. The format is
<export dir> <host1>(<options>)<host2>(<options>)...
6. To export the directory so that directory is available for mounting on the other servers, add the following line into the exports file:

/opt/IBM/MDMCE *(rw,sync,no_root_squash)

7. Export the directory by using the following command:

/usr/sbin/exportfs -a

8. Create the Product Master $TOP directory to deploy Product Master.


9. On Host02, create a directory by using the following command:

mkdir -p /opt/IBM/MDMCE

10. Give the folder full access rights by using the following command:

chmod -R 777 MDMCE

11. Mount the $TOP folder on Host02. Log in to Host02 and use the following command to mount the Host01 directories on Host02:
mount hostname:/opt/IBM/MDMCE /opt/IBM/MDMCE
For example: mount Hostname1:/opt/IBM/MDMCE /opt/IBM/MDMCE
Important: Ensure that the directory path of the shared directory on each box is identical.

Deploying the Product Master on the AppSrv01Host01

IBM Product Master 12.0.0 191


Prerequisites

The server Host01 has the latest version of IBM Installation Manager.
The database is cataloged on each workstation in the cluster.
The database is created for Product Master and the DB admin started the database by using the following command:
db2start
Firewalls are deactivated.

To deploy the Product Master proceed as follows:

1. Export the required environment variables by inserting the following lines in the ~/.bashrc, ~/.bash_profile, or ~/.profile file, depending on the default shell, on both
Host1 and Host2.

# User specific environment and startup programs

source /home/db2inst1/sqllib/db2profile

#--------------------------------------------#
# PIM configuration #
#--------------------------------------------#
PATH=$PATH:$HOME/bin

export TOP=/opt/IBM/MDMCE
export PERL5LIB=$TOP/bin/perllib
export JAVA_HOME=/opt/IBM/WebSphere/AppServer/java/8.0
export COMPAT=$TOP/bin/compat.sh
export PATH=$JAVA_HOME/bin:$JAVA_HOME/jre/bin:$PATH:.

export MQ_INSTALL_DIR=/opt/mqm

export LANG=en_US
export JAVAC=`perl $HOME/.config.pl --option=javac 2>/dev/null`

export ANT_HOME=/opt/IBM/WebSphere/AppServer/deploytool/itp/plugins/org.eclipse.wst.command.env_1.0.409.v201004211805.jar
export ANT_OPTS=-Xmx1024m
export ANT_ARGS=-noclasspath
export MAVEN_USERNAME="username@xx.ibm.com"
export MAVEN_PASSWORD="xxxxxxxxx"

export CLASSPATH=$CLASSPATH:/opt/mqm/java/lib/providerutil.jar;
export CLASSPATH=$CLASSPATH:/opt/mqm/java/lib/ldap.jar;
export CLASSPATH=$CLASSPATH:/opt/mqm/java/lib/jta.jar;
export CLASSPATH=$CLASSPATH:/opt/mqm/java/lib/jndi.jar;
export CLASSPATH=$CLASSPATH:/opt/mqm/java/lib/jms.jar;
export CLASSPATH=$CLASSPATH:/opt/mqm/java/lib/connector.jar;
export CLASSPATH=$CLASSPATH:/opt/mqm/java/lib/fscontext.jar;
export CLASSPATH=$CLASSPATH:/opt/mqm/java/lib/com.ibm.mqjms.jar;
export CLASSPATH=$CLASSPATH:/opt/mqm/java/lib/com.ibm.mq.jar;

export PLUGIN_HOME=/opt/IBM/WebSphere/Plugins

PATH=$PATH:$ANT_HOME/bin

export WAS_HOME=/opt/IBM/WebSphere/AppServer

Note: Ensure that the db2profile, JAVA_HOME, and TOP values match your setup.
2. Deploy the Product Master full release files under a directory:

/opt/IBM/MDMCE

The $TOP is /opt/IBM/MDMCE.


3. Run the installation steps for the Product Master.
4. In the env_settings.ini file that is located in the /opt/IBM/MDMCE/bin/conf directory, check the following:
Edit the [cache] section as follows:

# multicast ip addr for MDMPIM cache. Must be unique on the LAN


multicast_addr=239.0.10.1
# TTL for multicast packets. Set to 0 for single-machine installations or 1 for clusters
multicast_ttl=1

Edit the [appserver.websphere]section as follows:

cell_name=Cell01Host01
node_name=Node01Host01
# set security to true if administrative security is enabled.
admin_security=true

Edit the [appserver.appsvr]section as follows:

port=7507
appserver_name=MDMCE_APPSERVER01_HOST01
vhost_name=MDMCE_VHOST01_HOST01

Check path to the common_properties and java_home properties.


5. Run the following commands:

$TOP/setup.sh -ov
configure.sh -ov
compat.sh
cd bin/websphere
create_appserver.sh
create_vhost.sh

192 IBM Product Master 12.0.0


install_war.sh
start_local.sh
$TOP/mdmui/bin/installAll.sh

6. Verify that the deployment was successful by logging in to the installed Product Master.
7. Confirm all the services are running by using the following command:

$TOP/bin/go/rmi_status.sh

8. Stop the instance by using the following command:

$TOP/bin/go/abort_local.sh

9. In the Product Master, import the following data models:

$TOP/mdmui/env-export/dammodel/dammodel.zip
$TOP/mdmui/env-export/mdmenv/mdm-env.zip

10. Stop the IBM Product Master by using the following command:

$TOP/bin/go/stop_local.sh

Application URLs:
Admin UI - http://<hostname>:<port>/utils/enterLogin.jsp
Persona-based UI - http://<hostname>:<port>/mdm_ui

Configure the WebSphere Application Server on Host02


To create the profile, proceed as follows:

1. Open Profile Management Tool (PMT).


2. Open the WebSphere Customization Toolbox by using the following command in the command prompt, or from the menu if you are accessing the server with the VNC client UI:

{Websphere Deployment Directory}/bin/ProfileManagement/eclipse64/pmt.sh

3. In the WebSphere Customization Toolbox window, browse to the Profile Management Tool > Create.
4. In the Environment Selection window, select Custom profile, and then click Next.
5. In the Profile Creation Options window, select Advanced profile creation, and then click Next.
6. In the Profile Name and Location window, enter the following values:
Field Value
Profile name AppSrv02Host02
Profile directory /opt/IBM/WebSphere/AppServer/profiles/AppSrv02Host02
7. In the Node and Host window, enter the following values:
Node name - Node02Host02
Hostname - Hostname2
8. Specify the details of the existing Deployment manager for federation, and enter the following values:
Field Value
Deployment manager hostname Hostname1
Deployment manager SOAP port 8879
Username configadmin
Password ******
9. Clear the Federate this node later checkbox, and click Next.
10. Use defaults in the next few windows, and review details in the Profile Creation Summary window, and click Create.
11. Start the AppSrv02Host02 node agent by using the following command:

/opt/IBM/WebSphere/AppServer/profiles/AppSrv02Host02/bin/startNode.sh

Creating a cluster of Horizontal topology


To span the existing Product Master application in to a WebSphere cluster, proceed as follows:

1. Log in to the WebSphere Application Server Integrated Solutions Console (admin console) on Host01. The admin console is for Dmgr01Host01 (manager profile).
URL - https://<hostname>:<port>/ibm/console/logon.jsp
Credentials - configadmin/passw0rd
2. Browse to System Administration > Console Preferences, select Synchronize changes with Nodes, and then click Save.
3. Browse to Servers > Clusters > WebSphere application server clusters, and click New.
4. In the Create a New Cluster page, specify the cluster name as MDMMDMCEHost01.
5. Clear Configure HTTP session memory-to-memory replication and click Next.
6. In the Create First Cluster Member page, enter the following:
Field Value
Member name MDMCE_APPSERVER01_HOST01
Select node Node01Host01(ND 9.0.0.0)
Weight 2
7. In the Create additional cluster members page, enter the following, and click Add Member.
Field Value
Member name MDMCE_APPSERVER02_HOST02
Select node Node02Host02(ND 9.0.0.0)
Weight 2
8. Clear Generate unique HTTP ports checkbox so that all the Product Master nodes use the same port to access the application.
9. Select the following check boxes, and click Next:

IBM Product Master 12.0.0 193


Select how the server resources are promoted in the cluster.
Create the member by converting an existing application server.
10. Review the information, on the Summary page, and click Finish.

To review the cluster that is created, click Servers > Clusters > Cluster Topology, and expand the MDMMDMCEHost01. Click Servers > Clusters > WebSphere Application
Server clusters to see the new cluster on the right pane.
Important: You can obtain the HTTP ports of the cluster MDMMDMCEHost01 members (MDMCE_APPSERVER01_HOST01 and MDMCE_APPSERVER02_HOST02
application servers) by browsing to the Servers > WebSphere application servers > {MDMCE_APPSERVER_HOST} > Ports, and then search for the port.

Configuring JVM parameters and Product Master parameters for the clustered members
Proceed as follows:

1. Browse to Servers > Server Types > WebSphere application servers > MDMCE_APPSERVER02_HOST02.
2. Under the Server Infrastructure, browse to Java and Process Management > Process definition > Java Virtual Machine, and update the host name (Hostname2):
-Dsvc_name=appsvr_Hostname2 -DTOP=/opt/IBM/MDMCE -DCCD_ETC_DIR=/opt/IBM/MDMCE/etc -Dsvc_etc_dir=/opt/IBM/MDMCE/etc/default
-Dtrigo.memflags=-Xmx1024m_-Xms256m -Djava.security.policy=/opt/IBM/MDMCE/etc/default/java.policy
-Dexit_if_config_file_not_found=false -DenableJava2Security=true -Dsysout.dir=/opt/IBM/MDMCE/logs/appsvr_Hostname2

Note: You do not need to do these steps for the MDMCE_APPSERVER01_HOST01 because the generic JVM arguments are already configured correctly for Hostname1.

Configuring Product Master parameters for the clustered members


You need to add the following host names so that Product Master services are aware of the cluster and nodes:

Edit the admin_properties.xml file that is at the $TOP/etc/default/ folder to add the two hostnames, as follows:

<admin>
<cluster>
<host name="Hostname1"/>
<host name="Hostname2"/>
</cluster>
</admin>

The two nodes are sharing the configuration files. The service list needs to be different at both the nodes, so you need to provide two different configuration files for both
the nodes. Proceed as follows:

1. Create a folder that is named cluster under the $TOP directory.


2. Create folders mdmapp1 and mdmapp2 in the cluster folder.
3. Go to the $TOP/bin/conf directory on Node 1 and run the following commands:

cp -R * ../../cluster/mdmapp1
cp -R * ../../cluster/mdmapp2

4. On Host01, add this line to the .bashrc file:

export CCD_CONFIG_DIR=/opt/IBM/MDMCE/cluster/mdmapp1

5. On Host02 add this line to the .bashrc file:

export CCD_CONFIG_DIR=/opt/IBM/MDMCE/cluster/mdmapp2

6. Enter bash on both Host01 and Host02. Each host then has its own configuration files.
7. On Host02, add the following to the env_settings.ini file at the $CCD_CONFIG_DIR/ folder to configure the scheduler:
Edit the [appserver.websphere] section as follows:

admin_security=true
application_server_profile=AppSrv02Host02
cell_name=Cell01Host01
node_name=Node02Host02

Edit the [services] section as follows:

admin=admin
eventprocessor=eventprocessor
queuemanager=queuemanager
scheduler=scheduler
workflowengine=workflowengine
appsvr=appsvr

8. On Host01, add the following to the env_settings.ini file at the $CCD_CONFIG_DIR/ folder to remove the workflow service:
Edit the [services] section as follows:

admin=admin
eventprocessor=eventprocessor
queuemanager=queuemanager
scheduler=scheduler
appsvr=appsvr

9. On both Host01 and Host02 run the following script:

$TOP/setup.sh

10. Rerun $TOP/bin/configureEnv.sh and then run the following script:

$TOP/bin/compat.sh

194 IBM Product Master 12.0.0


11. Add the host aliases to the virtual host. In the IBM WebSphere administrative console, browse to Environment > Virtual Hosts > MDMCE_VHOST01_HOST01 > Host
Aliases, and add the HTTP ports of the two cluster members.

Starting the Appservers

1. Browse to the $TOP/bin/go on Host01 and run the ./start_local.sh script.


2. Browse to the $TOP/bin/go on Host02 and run the ./start_local.sh script.
3. Check the services that are running on both the workstations by running the ./rmi_status.sh script.
Important:
After the cluster is created, do not run the following scripts; otherwise, you would need to delete and re-create the cluster:
create_vhost.sh
create_appsvr.sh
install_war.sh
installAll.sh
Because the Admin UI and the Persona-based UI are deployed on the same Appserver, the application URL uses the port of the Admin UI, for example, 7507. Thus, in
the env_settings.ini file, when the value of cluster enable=yes, the was-server port is ignored.
Note: You need to re-create cluster if you update the instance or apply any fix pack.

Configuring IBM HTTP Server


Proceed as follows:

1. Start IBM Installation Manager, configure the IBM Installation Manager repository that is applicable to WebSphere Application Server, and click Install.
2. In the Install Packages window, select the IBM HTTP Server and Web Server Plug-in for IBM WebSphere Application Server, and click Next.
3. Enter the IBM HTTP Server and web server plug-in destination directories, and click Next.
/opt/IBM/PIM/Websphere/HTTPServer
/opt/IBM/PIM/Websphere/Plugins
4. Specify the IBM HTTP Server port number as 80 (default port) and click Next.
5. Click Install on the final Install Packages window that lists all the packages.
6. Click Finish after successful installation of IBM HTTP Server message.

Create a web server instance through WebSphere Application Server console

1. Log in to the WebSphere Application Server Integrated Solutions Console.


2. Browse to the Servers > Server Types > Web Servers, and click New to create a new web server.
3. In the Select a node for the Web server and Select the Web Server Type page, specify the following, and then click Next.
Select node - Node01Host01
Server name - IBMHTTPServer01
Type - IBM HTTP Server
4. In the Select a Web Server Template page, select IHS, and click Next.
Field Value
Port 80
Web server installation location /opt/IBM/PIM/Websphere/HTTPServer
Plug-in installation location /opt/IBM/PIM/Websphere/Plugins
Application mapping to the Web server All
5. In the Confirm new Web Server page, click Finish.

Configuring IHS for Application Server


The plug-in enables IBM HTTP Server (IHS) to communicate with the cluster. To use this plug-in, you must configure the HTTP Server with the details of the plug-in. The configuration changes are typically recorded in the httpd.conf file of IHS.

Add the following entries to the httpd.conf file in the /opt/IBM/WebSphere/HTTPServer/conf/ folder.

LoadModule was_ap24_module /opt/IBM/WebSphere/Plugins/bin/64bits/mod_was_ap24_http.so

WebSpherePluginConfig /opt/IBM/WebSphere/Plugins/config/IBMHTTPServer01/plugin-cfg.xml

Virtual host

<VirtualHost *:80>
ServerName Hostname1
ServerAlias Hostname1
ServerAlias Hostname2
ErrorLog logs/error_log
CustomLog logs/access_log common
</VirtualHost>

Map Modules to configured HTTP Server

1. Browse to Application > Application Types > Websphere Enterprise Applications. You have the following applications installed:
ccd_MDMCE_APPSERVER01_HOST01
mdm-rest.war
mdm_ui.war
2. Click Applications > Application Types > WebSphere enterprise applications > application_name > Manage modules in the console navigation.
3. Select the IBM HTTP Server and the cluster entry, and click Apply.
4. Select each application, specify the IBM HTTP Server and the cluster entry in Manage modules, and click Apply.

Add virtual host alias

1. Click Environment > Virtual Hosts > MDMCE_VHOST01_HOST01 > Host Aliases > New, and add port number 80.

Generate plugin-cfg.xml file

IBM Product Master 12.0.0 195


1. Click Servers > Server Types > Web Servers, select IBMHTTPServer01, and click Generate Plug-in.
2. Click Servers > Server Types > Web Servers, select IBMHTTPServer01, and click Propagate Plug-in.

Updating config.json files of the HTTP server

Update the IP address or hostname of the concerned node in its respective config.json file, and try to log in to the application.
Location: $WAS_HOME/profiles/<your profile>/installedApps/<your cell>/mdm_ui.war.ear/mdm_ui.war/assets/config.json

Enabling SSL for the cluster


1. Browse to Servers > Server Types > WebSphere Application Server.
2. In the Preferences, click MDMCE_APPSERVER01_HOST01.
3. In the Container Settings, expand Web Container Settings, click the Web container transport chains and select WCInboundDefaultSecure.
4. Check the Enable checkbox, then click Apply > Save.
5. Repeat steps 3 and 4 for MDMCE_APPSERVER02_HOST02.
6. Click Environment > Virtual Hosts > MDMCE_VHOST01_HOST01 > Host Aliases > New.
7. Modify the port to have the same value as displayed in step 4, and click Apply > Save.
8. Add the following entries in the httpd.conf file in the /opt/IBM/WebSphere/HTTPServer/conf/ folder.
Note: Ensure that the path for KeyFile is correct.

LoadModule ibm_ssl_module modules/mod_ibm_ssl.so


Listen 443
<VirtualHost *:443>
SSLEnable
SSLServerCert default
ServerName Hostname1
ServerAlias Hostname1
ServerAlias Hostname2
KeyFile /opt/IBM/WebSphere/Plugins/config/IBMHTTPServer_Sticky/plugin-key.kdb
ErrorLog logs/error_log
CustomLog logs/access_log common
</VirtualHost>

9. Stop Product Master services.
10. Run the updateRtProperties.sh script in the $TOP/mdmui/bin folder.
11. Start Product Master services.
12. Repeat steps 9-11 for the HOST02.
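After the services are restarted, you can optionally confirm that IHS accepts HTTPS connections. A minimal check, assuming Hostname1 from the virtual host entry above; -k skips certificate validation for self-signed certificates:

curl -k -o /dev/null -w "%{http_code}\n" https://Hostname1/utils/enterLogin.jsp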

Enabling sticky sessions


The sticky session feature enables the load balancer to bind a user's session to a specific instance. This ensures that all requests from the user during the session are sent
to the same instance.

Proceed as follows to enable sticky sessions for the IBM HTTP Server:

1. Browse to Servers > Server Types > WebSphere Application Server.


2. In the Preferences, click MDMCE_APPSERVER01_HOST01.
3. In the Container Settings, expand Session management, click the General properties and select Enable Cookies.
4. Check the Use the context root check box, then click Apply > Save.
5. Browse to Servers > Server Types > WebSphere Application Server.
6. In the Preferences, click MDMCE_APPSERVER01_HOST01.
7. In the Container Settings, expand Web Container Settings > Web Container > Custom properties > New.
8. Add HttpSessionCloneId property with the following values:
Field Value
Name HttpSessionCloneId
Value MDMCE_APPSERVER01_HOST01
Description HttpSessionCloneId
9. Repeat steps 2-8 for the MDMCE_APPSERVER02_HOST02.
10. Browse to Servers > Server Types > Web Servers, select web server and click Generate Plug-in.
11. Restart the application and web servers, and then verify that the application uses sticky sessions.
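To optionally confirm that the clone IDs were written into the regenerated and propagated plug-in configuration, you can search the file. A minimal sketch, assuming the plug-in location that is used earlier in this section:

grep -i "CloneID" /opt/IBM/PIM/Websphere/Plugins/config/IBMHTTPServer01/plugin-cfg.xml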

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Configuring a cluster environment


To improve performance, you can run services in a clustered environment so that you can run multiple services on one computer or on multiple computers.

Clustering Services
The most common system setup is to place every service within the same logical computer. This scenario is outlined in the installation documentation. This scenario is
acceptable for smaller installations and development installations. This scenario is the easiest to manage and set up because the administrator is required to use only one
set of scripts, on one logical computer to manage the instance. If you outgrow this type of installation, you can grow into a cluster configuration.
Advantages and possibilities of clustering are:



Each of the six product services performs some specific tasks, and is isolated at run time because each service runs in a separate JVM. The advantage of this design
is that each service has well-defined responsibilities, and can start or shut down independently.
You can use clustering to distribute the load and processing within the product domain to make the best use of the system infrastructure. You can also use
clustering to improve availability and system performance. The most common reason for setting up a clustered environment is for improving performance and
scalability. The common clustered environment configuration includes having multiple schedulers on dedicated computers. Separating the scheduler and appsvr
services onto separate computers increases the performance in any installation where the scheduler is used for frequent, large, or long-running jobs.
Determine the required number of scheduler and appsvr services you need to improve overall responsiveness. The number of concurrent jobs and their complexity
determines the number of schedulers and threads (number of jobs) that each scheduler can run.
You might want to cluster multiple product instances to work as a group, either at the application server level, or product application level.
When you create a clustered environment, if the primary server fails, the services that were not previously running on a secondary server can be restarted with
minimal effort and downtime.

Limitations of clustering are:

Running appsvr services on different servers can be problematic.


You can start multiple services to distribute loads, such as multiple schedulers to run jobs on one or more servers. However, the schedulers are simple independent
instances and do not support failover in case one instance is down.
Clustering for high availability can require more testing for your implementation.

The two types of application server clustering are vertical and horizontal:

Vertical clustering
Vertical clustering is effectively making the application server larger, or scaling it vertically. For example, if memory is slowing down performance, you can add
memory inexpensively without any additional licensing costs, and more services can be deployed on the same hardware. This application is a simple and
inexpensive way to achieve more performance from the system. With vertical clustering, you add more services to the same server.
Horizontal clustering
Horizontal clustering uses multiple servers. Deploying on more servers is similar to deploying more services on the same computer (vertical clustering), but you
must modify and run scripts for each server. You configure the admin_properties.xml file that is in the $TOP/etc/default directory to define each computer in your
clustered environment.
When you deploy on multiple servers, you must use a shared disk solution such as NFS.

Horizontal clustering
You can enable multiple appsvr, eventprocessor, queuemanager, or scheduler services on multiple workstations to increase the capacity of your system installation.
Vertical clustering
You can enable multiple appsvr and scheduler services on a single workstation to increase the capacity of your installation.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Horizontal clustering
You can enable multiple appsvr, eventprocessor, queuemanager, or scheduler services on multiple workstations to increase the capacity of your system installation.

The following figure shows a horizontally clustered environment where multiple services exist on multiple application servers.

Restriction: The following restrictions apply to horizontal clustering:

Each workstation must run rmiregistry.
Each workstation must run at least the admin service.
Each workstation in the cluster requires a separate directory for configuration files (the contents of <install dir>/bin/conf) and a separate logging directory.



To tune a complex installation, you implement multiple services and spread them across multiple systems. Tuning a complex installation is the same as tuning single
application servers, but tuning a complex installation might also involve using a hardware load balancer that routes user HTTP requests to a pool of application servers.

To tune an application server “pool”, you:

Plan the location and number of services
Tune individual servers

Planning the location and number of application servers for scaling


In a system deployment that involves more than one application server, each application server must run one admin and one rmiregistry service. The appsvr,
eventprocessor, queuemanager, or scheduler services can be instantiated multiple times on a single or on multiple physical systems and must be instantiated at least
once. However, the services that do the bulk of the work are the appsvr and the scheduler services. You typically need only one eventprocessor service and one
queuemanager service.

Given these restrictions, best practices are:

Run the eventprocessor and queuemanager services on any workstation with any other service. These services are not “heavy” services.
If a system runs the scheduler and appsvr services, use one or more dedicated systems for the scheduler. The application servers that you dedicate for the scheduler service must also run the admin and rmiregistry services. If memory and CPU capacity exists, multiple schedulers can run on the same workstation.
If possible, do not run the appsvr service on a workstation that also runs the scheduler.
To improve response time for users, use multiple appsvr services. These appserver services can be on a single workstation, or on multiple workstations, or both. If
possible, do not run the appsvr and scheduler services on the same system.

Tuning individual application servers


Tuning the application servers in the pool is similar to tuning a stand-alone application server. Although fewer services might be running on a system, the practical
maximum JVM size of 1.5 GB applies on 32-bit systems. If you have fewer services per system, you can use smaller individual systems where applicable.

Exception: In an environment with multiple application servers, the binary files and document store must be on a shared file system, most likely NFS. The connection
between each application server and the NFS server must be examined for performance. Because Product Master does not create a high demand on the disk, it is possible
to use one of the application servers as the NFS server. You must ensure that the NFS server is robust because the entire installation fails if the NFS server fails.

Configuring member workstations


On each workstation in the cluster, you need to configure the member workstations and every system must run at least the admin service.

Procedure

1. Create the init script.


a. In the Product Master user .bashrc file on each workstation, add the environment variable CCD_CONFIG_DIR and set it to the configuration directory.
b. Log out and log in or source the init script.
2. Set your runtime parameters.
a. Create an env_settings.ini file in the configuration directory.
b. Set the log_dir parameter in the [env] section of the env_settings.ini file to the logging directory.
Note: If you want to see the same log files for all of the services together, ensure that the logging directory is shared across all of the workstations in the
cluster.
c. Define the services to be run on each system.
d. Run setup.sh for each system.
e. Run configureEnv.sh for each system.
3. Update the admin_properties.xml file.
On one system, edit the <install dir>/etc/default/admin_properties.xml file and add the host name of each node.

The following example depicts a horizontal cluster using the following configuration:

IBM® Product Master is installed in the /usr/local/mdmpim directory. This directory is shared between all nodes and is available at /usr/local/mdmpim on all nodes.
The Product Master user has read, write, and run permissions to the directory and all the files and sub-directories.
The Product Master user name is mdmpim
The Product Master user's directory is /home/mdmpim
The cluster consists of three systems:
node1.mycompany.com
node2.mycompany.com
node3.mycompany.com
The logging directory is /home/mdmpim/logs
The configuration directory is /home/mdmpim/config
Node1 runs the appserver. Node 2 runs the workflowengine and a scheduler. Node 3 runs only a scheduler service.

Procedure

1. Create the logging directory. On all three nodes, run the mkdir /home/mdmpim/logs command.
2. Create the configuration directory. On all three nodes, run the mkdir /home/mdmpim/config command.
a. On node1, run the cp -r /usr/local/mdmpim/bin/conf/* /home/mdmpim/config command.
b. On node1, run the rm -fr /usr/local/mdmpim/bin/conf/* command.
3. Configure the Product Master user's environment on all three nodes (see the sketch after this procedure):
a. Edit the $HOME/.bashrc file.
b. Set and export PERL5LIB and LANG.
c. Set and export CCD_CONFIG_DIR=/home/mdmpim/config.
d. Log out and log in.
4. Configure runtime parameters on all three nodes:



a. Create and edit the env_settings.ini file.
b. Uncomment and set log_dir=/home/mdmpim/logs.
c. Configure services.
i. On node1, edit the [services] section to read:
admin=admin
eventprocessor=eventprocessor
queuemanager=queuemanager
appsvr=appsvr
ii. On node2:
admin=admin
scheduler=scheduler
workflowengine=workflowengine
iii. On node3:
admin=admin
scheduler=scheduler
d. Set the appserver and db parameters.
5. Start services on all three nodes, change to the <install dir>/bin/go directory and run the start_local.sh script.
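The following is a minimal sketch of the $HOME/.bashrc additions from step 3 of this example; the PERL5LIB and LANG values are illustrative and depend on your Perl installation and locale:

# illustrative values; adjust PERL5LIB and LANG to your environment
export PERL5LIB=/usr/local/mdmpim/perl5/lib
export LANG=en_US.UTF-8
export CCD_CONFIG_DIR=/home/mdmpim/config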

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Vertical clustering
You can enable multiple appsvr and scheduler services on a single workstation to increase the capacity of your installation.

Vertical clustering is only supported within the WebSphere® Application Server environment. The following guidelines must be met:

Ensure that there is an app server named appsvr. All other app servers must have unique names.
The names of all the other WebSphere Application Server app server and virtual host components must be unique. The WebSphere Application Server appserver
and virtual host components are installed in the same cell on the same node.
The app servers must use separate ports.

The following figure shows a single application server configuration where you can add more admin, eventprocessor, queuemanager, or scheduler services to vertically scale the application server and create a clustered environment.

Creating a second application server service on the same server


To deploy a second application server service on the same server for vertical clustering, you must configure the application server to run on a different port and as a
different instance name.

Two application servers are being defined:

appsvr (the default)
Runs on port 7507
The WebSphere Application Server components are my_was_server and my_was_vhost.
appsvr2
Runs on port 7508
The WebSphere Application Server components are my_was_server2 and my_was_vhost2.



Both app servers are installed in profile AppSvr01 in node myNode01 in cell myCell01.
Procedure

1. Stop all IBM® Product Master services.


2. Update the env_settings.ini file. Add the application server to the appsvr line in the [services] section.
3. Add a section to the env_settings.ini file for the new service. For example, if the new application server is called appsvr01, then you must add a section that is
called appserver.appsvr01. You can copy an existing appserver section and update it.
4. Install the WebSphere Application Server components.
Run the following scripts:
create_vhost.sh
create_appsvr.sh
install_war.sh
These scripts replace the existing WebSphere Application Server components.
5. Restart all Product Master services. For example, the resulting env_settings.ini file contains:
[services]
appsvr=appsvr, appsvr2

[appserver]
type=websphere
rmi_port=17507
home=/opt/IBM/WebSphere/AppServer

[appserver.websphere]
application_server_profile=AppSrv01
cell_name=myCell01
node_name=myNode01
#admin_security=false

[appserver.appsvr]
port=7507
appserver_name=my_was_server
vhost_name=my_vhost

[appserver.appsvr2]
port=7508
appserver_name=my_was_server2
vhost_name=my_vhost2
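After the services are restarted, you can optionally confirm that both application server services are registered. A minimal check, assuming the status script resides in $TOP/bin/go alongside start_local.sh:

cd $TOP/bin/go
./rmi_status.sh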

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Post-installation instructions
After completing the automated installation, you need to complete the following steps to configure Product Master for the Persona-based UI.

Procedure
1. Import the required data model in the Product Master.
2. Assign appropriate roles to the Product Master users as follows.

Generic Persona

Data model - <install_dir>/mdmui/env-export/mdmenv/mdmce-env.zip



Roles: Admin, Basic, Content Editor, Catalog Manager, Category Manager, GDS Supply Editor, Full Admin, and Solution Developer.
Important: Take a backup export of Roles, Specs, Users, Lookup Tables, and similar objects that might have the same name as objects in the compressed file, because when you import the mdmce-env.zip file such objects can be overwritten. To take a backup, log in to Admin UI > System Administrator > Selective Export, select each object type that might contain the same name as an object in the mdmce-env.zip file, and click Perform Export. The export is generated in the docstore. If any object changes due to the import of the mdmce-env.zip file, you can reimport this export.

Item completeness feature

Channel Hierarchy
Attribute Collection for Channel Hierarchy
    Channel Attributes AC
Primary Spec for Channel Hierarchy
    Channel Hierarchy Specification
Channels
    Amazon
    Adobe
    eBay
    Google
    Magento
Secondary Spec for each channel
    Amazon specific Attributes
    Adobe specific Attributes
    eBay specific Attributes
    Google specific Attributes
    Magento specific Attributes
Completeness Lookup Table
Completeness Lookup Table Spec

Machine learning feature

Report jobs
Machine Learning Services Training Report
Machine Learning Services Retraining Report
Machine Learning Active Services Report
Spec for report jobs
Machine Learning Services Training Spec
Machine Learning Services Retraining Spec
Machine Learning Active Services Spec
Lookup table
Machine Learning Services Threshold Lookup
Machine Learning Attributes Configuration Lookup
Lookup table specs
Machine Learning Services Threshold
Machine Learning Attributes Configuration

Scripts for report jobs

Machine Learning Services Training

//script_execution_mode=java_api="japi://com.ibm.ml.jobs.MLTrainingJob.class"

Machine Learning Services Retraining

//script_execution_mode=java_api="japi://com.ibm.ml.jobs.MLRetrainingJob.class"

Machine Learning Active Services

//script_execution_mode=java_api="japi://com.ibm.ml.jobs.MLGetRunningServices.class"

Lookup table (SAML JIT (Just-In-Time) feature)
SSO Configuration
Lookup table spec (SAML JIT (Just-In-Time) feature)
SSO Configuration Spec

Item bulk load feature

Report jobs
Bulk Load Job Report
Spec for report jobs
Bulk Load Job Report Spec
Lookup table
Bulk Load Error Configuration
Survivorship Configuration
Lookup table specs
Bulk Load Error Configuration Spec
Survivorship Configuration Spec
Scripts for report jobs
Bulk Load Job Report

//script_execution_mode=java_api="japis://com.ibm.rs.extension.catalogimport.ItemImportJob.class"

Digital Asset Management Persona



Data model - <install_dir>/mdmui/env-export/dammodel/dammodel.zip

Roles: Digital Asset Manager and Merchandise Manager

Catalog and Hierarchy: Digital Asset Catalog and Digital Asset hierarchy
Important: Do not modify this system-defined catalog.
Others: Report jobs, file spec, and scripts for asset upload and renditions

Lookup table: Lookup table for DAM rendition configuration

Vendor Persona

Data model - <install_dir>/mdmui/env-export/vendorportal/vendorportal.zip

Roles: Vendor

Hierarchy: Vendor Organization hierarchy

Workflow: Product Enrichment workflow and Supplier Products Approval workflow

3. Enter the following URL in the address bar of your browser to access the GUI.

Admin UI
http://host_name:port_number/utils/enterLogin.jsp

Persona-based UI
http://host_name:port_number/mdm_ui

If Free Text Search service is enabled, then running the installAll.sh file also starts the pim-collector and indexer services. If the machine learning service is
enabled, then running the installAll.sh file also starts the machine learning services.
4. Optional: To manually start services, proceed as follows:

ML services
Go to the scripts folder, for example "$TOP/mdmui/machinelearning/scripts".
Use the following commands as required:
To start the service:

python3.9 start.pyc

To stop the service:

python3.9 stop.pyc

To check the status of the service:

python3.9 status.pyc

Persona-based UI server
To start the service:

$TOP/mdmui/bin/startServer.sh

To stop the service:

$TOP/mdmui/bin/stopServer.sh

Free Text Search services


To start the service:

$TOP/mdmui/bin/startFtsServices.sh

To stop the service:

$TOP/mdmui/bin/stopFtsServices.sh

To enable Free text search, see Using Free Text Search.

5. Optional: To implement custom modifications in the config.json file that is located in the $TOP/mdmui/dynamic/mdmui folder, proceed as follows.
a. Update the required properties and run the updateRtProperties.sh script present in the $TOP/mdmui/bin folder.
b. Restart the server by using the start or stop server scripts present in the $TOP/mdmui/bin folder.
6. Optional: To copy custom modifications to the WebSphere® Application Server folder where REST and Persona-based UI war files are extracted after deployment,
run the updateRtProperties.sh script present in the $TOP/mdmui/bin folder.
To copy custom modifications successfully, you need to first place the custom modifications by using either of the following methods.
Place the custom code in the $TOP/mdmui/custom folder. The custom folder has the following structure:
API folder (custom code for REST API)
UI folder (custom files for UI)
JS folder (JS file for UI)
JSON folder (JSON file for UI)
Place the custom code as a folder or a JAR file. The custom code should follow the "<directory>/com.ibm.rs.custom.<directory/files>" folder structure.
7. Optional: To use jars-custom.txt, proceed as follows (see the sketch after this procedure).
a. Add the custom JAR full path to the jars-custom.txt file.
b. Run the configureEnv.sh file in the $TOP/bin folder.
c. Restart services with the start_local.sh script in the $TOP/bin/go folder.
8. Optional: To configure the thumbnail and description for the single-edit header, proceed as follows.
The supported image file formats are PNG, GIF, JPG, and JPEG, and the supported image size is in the KB range.
a. Create the following attributes in the primary spec.



Attribute Name | Attribute Type
Display Image | Thumbnail image or image
Display Image Description | String
Note: If required, you can change the attribute names.
b. Set the following properties in the restconfig.properties file.

displayThumbnailAttributeName=Display Image
displayDescriptionAttributeName=Display Image Description

c. Restart the Persona-based UI server.
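The following is a minimal command-line sketch of step 7, assuming a hypothetical custom JAR path:

# hypothetical JAR path; adjust to your environment
echo "/opt/custom/my-extension.jar" >> $TOP/bin/conf/classpath/jars-custom.txt
cd $TOP/bin && ./configureEnv.sh
cd $TOP/bin/go && ./start_local.sh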

Enabling customizations
You can enable Admin UI-based or REST API-based customizations in the Persona-based UI.
Enabling HTTPS ports
For certain circumstances, you might want to deploy Product Master by using HTTPS ports.
Enabling IBM Operational Decision Manager
You can enable the IBM Operational Decision Manager in the Persona-based UI.

Related concepts
Enabling customizations
Using machine learning (ML) assisted data stewardship
Using Data Management
Working with the Vendor persona

Related tasks
Enabling IBM Operational Decision Manager

Related information
Enabling

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Enabling customizations
You can enable Admin UI-based or REST API-based customizations in the Persona-based UI.

Custom tools page role access


Table 1. Custom Tools feature
Feature | Admin | Basic | Catalog Manager | Category Manager | Content Editor | Digital Asset Manager | Full Admin | GDS Supply Editor | Merchandise Manager | Service Account | Solution Developer | Vendor
Custom Tools | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
The Persona-based UI supports the following customizations:

Admin UI
REST API

The Admin UI supports custom components in the Persona-based UI by using either Trigger scripts or custom Angular scripts.

Custom Actions
This option is available in the Action drop-down list on the multi-edit page and single-edit page. An Entry Preview script allows you to create a sample view of a current item set, which can be run from the data entry screens. For example, you can write a script to view how an item displays when you use an XML format.
Custom Tabs
This option gives users the ability to add a customized tab in the single-edit page. You can show content that is related to Product Information Management (PIM) or any other external content on this tab.
Custom Tools
This option can be used for multiple purposes, such as creating an independent UI interface, showing any external content or third-party page, or adding any application-wide feature specific to your requirement.

Prerequisites
Before you begin, you need the following software:

Node v14.x or later
Visual Studio Code editor
Angular version support:
IBM® Product Master Fix Pack 2 onwards - Angular version 8.
IBM Product Master Fix Pack 8 onwards - Angular version 12.
For existing custom components, upgrade your custom sample library.



For more information, see Upgrading the existing custom sample library.

Using custom Angular scripts to add custom component


Use the custom library application provided with the Persona-based UI installer to develop Custom Actions, Custom Tabs, or Custom Tools.
Enabling Custom Actions by using Entry Preview scripts
You can enable Custom Actions in the Persona-based UI by using Entry Preview scripts.
Enabling Custom Tabs by using Trigger scripts
You can enable Custom Tabs in the Persona-based UI by using Trigger scripts.
Enabling Custom Tools by using Custom Tool script
You can enable Custom Tools in the Persona-based UI. The Custom Tool includes the Job Console pane to view the running jobs status.
Developing custom REST API
You can develop custom REST API in the Persona-based UI using Eclipse.
Upgrading the existing custom sample library
If you have existing custom sample library, you need to upgrade that to the latest Angular version.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Using custom Angular scripts to add custom component


Use the custom library application provided with the Persona-based UI installer to develop Custom Actions, Custom Tabs, or Custom Tools.

Before you upgrade, take a backup of the custom code. Upgrading replaces the existing JS and JSON files.

Procedure
To add a custom component, proceed as follows.

1. Download the custom-library.zip file available in the $TOP/samples folder.


2. Extract the custom-library.zip file.
Following are the main folders in the custom-library folder.

custom-library\src\app
The app folder has the following subfolders:
persona-custom-tabs
persona-custom-tools
preview-script
Each of the subfolders has a component folder. You need to add your custom component to this folder.

custom-library\src\assets
The assets folder has an important folder, json. The json folder has the following files:
persona-custom-tabs.json
persona-custom-tools.json
preview-script.json
You need to add your custom component to the respective JSON file in this folder.
3. Open the custom-library folder in the Visual Studio Code editor.
4. In the command terminal, run the npm install command.
5. Create components inside the predefined custom module. For more information, see Guidelines.
6. Add entry of the components in the respective JSON file.
scriptDisplayName - Appears as the name in the single-edit page.
scriptPath - Should be the selector of the custom component.
scriptType - Used for identification of the script; do not change.

Custom Actions

{ "scripts":
[
{
"scriptDisplayName": "Action1",
"scriptType": "Persona UI Entry Preview",
"scriptPath": "first-preview",
"containerName": [catalogname]
},
{
"scriptDisplayName": "Action2",
"scriptType": "Persona UI Entry Preview",
"scriptPath": "second-preview",
"containerName": ["catalogname"]
}
]
}

Custom Tabs

{ "scripts":
[
{

204 IBM Product Master 12.0.0


"scriptDisplayName": "Sample Tab1",
"scriptType": "Persona UI Custom Tabs",
"scriptPath": "sample-tab1",
"containerName": [5405]
},
{
"scriptDisplayName": "Sample Tab1",
"scriptType": "Persona UI Custom Tabs",
"scriptPath": "sample-tab2",
"containerName": [5407]
}
]
}

Custom Tools

{ "scripts":
[
{
"scriptDisplayName": "Search Item by PK",
"scriptType": "Search Item by PK",
"scriptPath": "custom-tool1",
"scriptType": "Persona_Custom_Tools",
},
{
"scriptDisplayName": "Action2",
"scriptType": "Persona UI Entry Preview",
"scriptPath": "second-preview",
}
]
}

7. Run the npm run generateLibrary command. In the mdm_ui.war\custom folder, the application creates a .bundle library inside the respective module bundle folder (see the sketch after this procedure).
8. Copy the generated JS files to the $TOP/mdmui/custom/ui/js folder and JSON files to the $TOP/mdmui/custom/ui/json folder.
9. Run the $TOP/mdmui/bin/updateRtProperties.sh script for WebSphere® Application Server-based on-premise deployments.
The script copies files to the following folders in the WebSphere Application Server.
JSON files to $WAS_HOME/profiles/<WAS profile>/installedApps/<Node Cell>/mdm_ui.war.ear/assets/json
JS files to $WAS_HOME/profiles/<WAS profile>/installedApps/<Node Cell>/mdm_ui.war.ear/js/custom
Note: For Operator-based deployments, see Customizing the Docker container.
10. Press Ctrl + Shift + Delete to clean the browser cache and close all the browser instances.
11. Open a new browser and verify that your changes reflect in the Persona-based UI.
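Steps 7 through 9 can also be run from the command line. A minimal sketch, assuming illustrative bundle and JSON file names; use the files that your build actually produces:

npm run generateLibrary
# the build writes .bundle files under mdm_ui.war/custom (see step 7); names are illustrative
cp mdm_ui.war/custom/persona-custom-tools/persona-custom-tools.bundle.js $TOP/mdmui/custom/ui/js/
cp src/assets/json/persona-custom-tools.json $TOP/mdmui/custom/ui/json/
$TOP/mdmui/bin/updateRtProperties.sh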

Guidelines
Use the following guidelines while developing custom components.

Extend your component from the base component.


Use the CustomItemApiService and HttpErrorHandlerService for the custom API calls. Provide these services on the component level.
Use inline template and inline CSS in the custom component.
Add your component inside respective module.ts files.
Create your custom services inside the services folder at the component level.
Do not change the structure of any JSON file.
Do not modify the base component.
Other than adding components, do not update the preview module.
Do not create nested components.
Do not use any third-party package apart from those that are specified in the package.json of custom library. The product support team cannot guarantee the
compatibility of any such third-party library.
While creating the HTML file for the new components:

Custom Actions
Do not change the structure of the modal (pop-up).
Do not remove the following first div from the HTML file:

<div class="backdrop" [ngStyle]="{ display: display }"></div>

Do not remove the close buttons and their event handlers from the modal header and footer.
You can change the content of the modal including the following header title and body as required:

<span class="ui-dialog-title">Change the title</span>

Custom Tabs and Custom Tools


While creating the HTML file for the new components, design the page as you want it to display on the single-edit page.

Sample custom components


See the following sample custom components.

Custom Actions

FirstPreviewScriptComponent - Displays all items IDs in a pop-up window. You can configure this for both the collaboration and catalog.
SecondPreviewScriptComponent - Displays the selected items IDs in a pop-up window. You can configure this for both the collaboration and catalog.
UpdateNotesComponent - You must configure a catalog and add a new attribute (specName/attributeName) in the catalog, for example, "ProductDataDetails/Special Note". Using this component, you can update notes for all the items that are not checked out, and you can also delete the selected items. You can check any catalog with configuration in any category having consistent item data.
Note: Do not use the delete operation in the single-edit page.



Custom Tabs

SampleTab1Component - Displays selected items ID along with container name in a table.


SampleTab2Component - Displays selected items ID along with container name in a table.

Custom Tools

SampleTool1Component - You can perform an Item search based on the catalog name and primary key that is entered in the input boxes. The component
shows the read-only view of the attributes and values with the display image.

Navigating to the Persona-based UI from custom Angular scripts


From the custom angular pop-up window, you can browse to the catalog or collaboration area pages of the Persona-based UI.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Navigating to the Persona-based UI from custom Angular scripts


From the custom angular pop-up window, you can browse to the catalog or collaboration area pages of the Persona-based UI.

The goToSingleEditCatalog(catalogObj) method is used for navigation to the catalog or collaboration area of the single-edit page.

public goToSingleEditCatalog(catalogObj: any) {
  this.redirectItem.next(catalogObj);
}

The goToSingleEditCollab(collabObj) method is used for navigation to the collaboration area of the multi-edit page.

public goToSingleEditCollab(collabObj: any) {
  this.redirectItem.next(collabObj);
}

The flow handles invalid inputs. For example, if an invalid catalogId is specified, then an error notification "Please provide valid input criteria. Click Back, to redirect to the previous page." is displayed. The following table lists the supported redirection flow:
Redirection
Entry points Catalog single-edit Catalog multi-edit Collaboration single-edit Collaboration multi-edit
Catalog single-edit ❌ ❌ ✓ ✓
Catalog multi-edit ❌ ❌ ✓ ✓
Collaboration single-edit ❌ ❌ ❌ ❌
Collaboration multi-edit ❌ ❌ ❌ ❌
Explorer ❌ ❌ ✓ ✓
Search ❌ ❌ ✓ ✓
Custom Tool ✓ ✓ ✓ ✓

Limitations
The limitations are as follows:

Do not add a base-preview.component.ts method as an inline script.
Always pass all the parameters with valid values while calling methods.

Sample custom page


The following sample custom page demonstrates navigation to the catalog or collaboration single-edit page. To run the redirection methods, you must provide valid input parameters to the methods of the base-preview.component.ts file. Valid input parameters can be passed as illustrated in the following sample code methods:

redirectToSECatalog()
redirectToSECollab()

import { Component, OnInit } from '@angular/core';


import { BasePreview } from './base-preview.component';
@Component({ selector: "redirectSE",
template: `
<div class="backdrop" [ngStyle]="{ display: display }"></div>
<!-- modal -->
<div
class="modal"
tabindex="-1"
role="dialog"
[ngStyle]="{ display: display }"
>
<!-- modal-dialog -->
<div class="modal-dialog">
<!-- modal-content -->
<div class="modal-content">
<!-- modal-header -->
<div class="modal-header">
<span class="ui-dialog-title"> Show All Items </span>



<button
type="button"
class="close"
(click)="closeModalDialog()"
aria-label="Close"
title="Close"
>
<span aria-hidden="true"></span>
</button>
</div>
<!-- modal-body -->
<div class="modal-body">
<div>
<div>Related Items:</div>
<div *ngFor="let data of relatedItems">
<input
#checkId
type="checkbox"
[value]="data"
(change)="changeSelection(data, checkId.checked)"
/>
{{ data }}
</div>
</div>
<div class="action-btn-wrapper">
<button
class="btn btn-primary"
[disabled]="enableButtons()"
(click)="redirectToSECatalog()"
>
Go to Single Edit
</button>
<button class="btn btn-primary" (click)="redirectToSECollab()">
Go to Single Edit Collab
</button>
</div>
</div>
<!-- modal-footer -->
<div class="modal-footer">
<button
type="button"
class="btn btn-default"
(click)="closeModalDialog()"
title="Close" >
Close
</button>
</div>
</div>
</div>
</div>
`
})
export class RedirectSEComponent extends BasePreview implements OnInit {
display: any = "block";
selectedIds: Array<number> = [];
relatedItems: Array<any> = [];
constructor() {
super();
}
ngOnInit() {
this.relatedItems = [45335, 45336, 45337];
}
closeModalDialog() {
this.display = "none";
}
changeSelection(id, isSelected)
{
if (isSelected) {
this.selectedIds.push(id);
} else {
let index = this.selectedIds.indexOf(id);
this.selectedIds.splice(index, 1);
}
}
redirectToSECatalog() {
let obj = {
type: "catalog",
catalogId: 5764,
itemIds: this.selectedIds
};
this.display = "none";
this.goToSingleEditCatalog(obj); }
redirectToSECollab() {
let obj = {
type: "collab",
collabType: "item",
entryIds: [60805, 61602],
collabId: 1247,
stepId: 614,
stepName: "step1"
};
this.display = "none";
this.goToSingleEditCollab(obj); }
enableButtons(): boolean {



return this.selectedIds.length > 0 ? false : true;
}
}

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Enabling Custom Actions by using Entry Preview scripts


You can enable Custom Actions in the Persona-based UI by using Entry Preview scripts.

Before you begin


For Custom Actions support, you should first install and configure Hazelcast.

Install Hazelcast. For more information, see Installing Hazelcast IMDG.

Procedure
1. Create an implementation class for entry preview extension point which implements EntryPreviewFunction.
2. Implement entryPreview() method so that it writes a servlet URL in PrintWriter.write().
For example,

String url = "/redirect.wpc?itemList=" + itemDetails


+ "&containerName=" + catalogName;
printWriter.write(url);

3. Create a servlet which implements AsyncEnabled and RequestResponseEnabled interfaces.


4. Create servlet and JSP mapping in the flow-config.xml file.

<flow path="redirect" command="com.test.Redirect" method="entryPoint">


<flow-dispatch name="ok" location="/user/entryPreview.jsp" dispatchType="forward" />
</flow>

5. Create a JSP to render response from servlet and place the JSP in the $TOP/public_html/user folder.
6. In the entry preview extension point pass parameters as query params to servlet URL.
7. Read the params from request in servlet as set in the previous step.
8. Set the ids in request params or HTTP Session in the servlet so that they are available to subsequent requests like JSP.
Important: Due to these changes, the same entry preview script no longer works in the Admin UI.
9. Create a JAR for compiled extension code and place it in classpath for the Admin UI:
a. Add custom JAR entry in the $TOP/bin/conf/classpath/jars-custom.txt file.
b. Run the $TOP/bin/conf/updateRtClasspath.sh file.
10. Create a JAR for compiled extension code and set the classpath in the Websphere Application Server for the Persona-based UI.
Servers > Server Types > WebSphere Application servers > mdmui > Java and Process Management > Process definition > Java Virtual Machine
11. Restart the Admin UI and the Persona-based UI Servers.
12. In the Admin UI, create an entry preview script in the Script Console.
a. Click New, and then select container type, Catalog, and input parameter specs.
b. In the Selected type list, select ASP/JSP like.
c. Write a script body containing JSP URL in the IFRAME:

<%
var pk;
forEachEntrySetElement(entrySet, entry)
{
pk = entry.getEntryId();
break;
}
%>
<html>
<body>
<iframe src="/home/markdown/jenkins/workspace/Transform/in/SSADN3_12.0.0/installing/user/SamplePage.jsp" >
</iframe>
</body>
</html>

d. Add the following Java API URL, and save the script.

//script_execution_mode=java_api="japi://com.test.EntryPreviewTest.class"

e. Copy the JSP file to the $TOP/public_html/user folder.

Results
Select an item from the Catalog Explorer or a collaboration area, and click the Action list to select the configured script. The configured script runs and the resulting JSP is displayed in a pop-up window.

Navigating to the Persona-based UI from Entry Preview scripts


From the Entry Preview pop-up window, you can browse to the single-edit and multi-edit pages of the Persona-based UI. You can create Entry Preview actions, by
using JSP, WPC, or HTML Entry Preview scripts.



IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Navigating to the Persona-based UI from Entry Preview scripts


From the Entry Preview pop-up window, you can browse to the single-edit and multi-edit pages of the Persona-based UI. You can create Entry Preview actions, by using
JSP, WPC, or HTML Entry Preview scripts.

Configuring the navigation


The redirectBusinessUI.js file contains the methods that are used for navigation to the Persona-based UI.
Note: Use the same instructions to enable navigation to the Persona-based UI for the Custom Tools from the Custom Tools page.

1. Get the redirectBusinessUI.js file from the following location for JSP and WPC Entry preview script:
$WAS_HOME/profiles/<WAS profile>/installedApps/<NodeCell>/mdm_ui.war.ear/mdm_ui.war/assets/customtool/scripts/redirectBusinessUI.js
2. Add the redirectBusinessUI.js file in the Entry Preview script page as an external script from the above location.

The redirectToSingleEditCollab is used for navigation to the collaboration area of the single-edit page.

function navigateToSingleEditCollab()
{
//set either item or category as string value
var collabType = 'item';
//set itemIds/categoryIds as entryIds based on collabType
var entryIds = [201, 202, 203];
var collabId = 101;
var collabName = 'Sample Collab';
var stepId = 1;
var stepName = 'Step1';
//call the method below for navigation
businessUI.targetOrigin = "http://<ip address>:9090"; //Persona UI url
businessUI.redirectToSingleEditCollab(entryIds, collabType, collabId, collabName, stepId, stepName);
}

The redirectToMultiEditCollab method is used for navigation to the collaboration area of the multi-edit page.

function navigateToMultiEditCollab()
{
//set either item or category as string value
var collabType = 'item';
var collabId = 101;
var collabName = 'Sample Collab';
var stepId = 1;
var stepName = 'Step1';
//set the step item count and the reserve-to-edit flag (illustrative values; not set in the original sample)
var stepItemCount = 10;
var isReserveToEditEnabled = false;
//call the method below for navigation
businessUI.targetOrigin = "http://<ip address>:9090"; //Persona UI config.json's customScriptBaseUrl
businessUI.redirectToMultiEditCollab(collabType, collabId, collabName, stepId, stepName, stepItemCount, isReserveToEditEnabled);
}

The redirection flow handles invalid inputs. For example, if an invalid catalogId is specified, then an error notification "Please provide valid input criteria. Click Back, to redirect to the previous page." is displayed. The following table lists the supported redirection flow:
Redirection
Entry points Catalog single-edit Catalog multi-edit Collaboration single-edit Collaboration multi-edit
Catalog single-edit ❌ ❌ ✓ ✓
Catalog multi-edit ❌ ❌ ✓ ✓
Collaboration single-edit ❌ ❌ ❌ ❌
Collaboration multi-edit ❌ ❌ ❌ ❌
Explorer ❌ ❌ ✓ ✓
Search ❌ ❌ ✓ ✓
Custom Tool ✓ ✓ ✓ ✓

Limitations
The limitations are as follows:

Do not add a redirectBusinessUI.js method as an inline script.


The JS file must be included as an external script for WPC/JSP Entry Preview page.
Always pass all the parameters with valid values while calling methods.
The Persona-based UI catalog multi-edit page is a read-only page that provides only the Open and Action options.
Considering the overall system design, do not press Back on the catalog multi-edit page.

Sample custom page



The following sample custom page demonstrates navigation to the catalog single-edit or multi-edit page. Based on the number of items, the page redirects to the single-edit or the multi-edit page. To run the action script inside the intended domain, set the targetOrigin property of the redirectBusinessUI.js file. The targetOrigin property should have the value of the URL of the Persona-based UI.

<!DOCTYPE html>
<html>
<head>
<script type="text/javascript"
src="/home/markdown/jenkins/workspace/Transform/in/SSADN3_12.0.0/installing/user/redirectBusinessUI.js"></script>
<script>
businessUI.targetOrigin = “http://10.20.30.125:9090” //Persona UI url
function goToSingleEdit()
{
//set catalog id
var catalogId = 102 //
set itemids
var itemIds = [2022, 2023]
//call below method for navigation
businessUI.redirectToEditCatalogItem(catalogId, itemIds);
}
</script>
</head>
<body>
<center>
<h1></h1>
<!-- illustrative trigger for the navigation; adjust as needed -->
<button onclick="goToSingleEdit()">Go to Single Edit</button>
</center>
</body>
</html>

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Enabling Custom Tabs by using Trigger scripts


You can enable Custom Tabs in the Persona-based UI by using Trigger scripts.

Before you begin


For Custom Tabs support, you should first install and configure Hazelcast.

Install Hazelcast. For more information, see Installing Hazelcast IMDG.

Procedure
1. In the Admin UI, create a trigger script in the Script Console.
Following is a sample trigger script:

function getURL(entry){
var itemId = 0;
itemId =entry.getEntryId();
var colArea= getColAreaByName("Camera Features");
var url="/CustomTab.wpc?itemId="+itemId+"&collabId="+colArea.getColAreaId();
return url;
}

2. Add the trigger script in the data_entry_properties.xml file.


3. Create a servlet that implements AsyncEnabled and RequestResponseEnabled interfaces.
4. Create servlet and JSP mapping in the flow-config.xml file.

<flow path=" CustomTab" command="com.test.CustomTab" method="entryPoint">


<flow-dispatch name="ok" location="/user/custom_tab.jsp" dispatchType="forward" />
</flow>

5. Create a JSP to render response from servlet and place the JSP in the $TOP/public_html/user folder.
6. Create a JAR for compiled extension code and place it in class path for the Admin UI:
a. Add custom JAR entry in the $TOP/bin/conf/classpath/jars-custom.txt file.
b. Run the $TOP/bin/conf/updateRtClasspath.sh file.
7. Create a JAR for compiled extension code and set the class path in the WebSphere® Application Server for the Persona-based UI.
Servers > Server Types > WebSphere Application servers > mdmui > Java and Process Management > Process definition > Java Virtual Machine
8. Restart the Admin UI and the Persona-based UI Servers.
9. In the Admin UI, create an entry preview script in the Script Console.
a. Click New, and then select container type, Catalog, and input parameter specs.
b. In the Selected type list, select Regular.
c. Implement getURL() method, as illustrated in the following sample:

function getURL(entry){
var pk = entry.getEntryId();
var content = "/user/ SamplePage.jsp?mdmId="+pk;
return content;
}

d. Copy the JSP file to the $TOP/public_html/user folder.


Important: You can integrate a Custom Tab for a user-defined custom stand-alone JSP with the following limitations:
Place the JSP in the $TOP/public_html/user folder.
The JSP should not contain any static includes such as:

<%@ include file="/utils/header.jsp.include" %>
<%@ include file="/utils/footer.jsp.include" %>

The JSP should contain all the required Java imports.

Results
Select an item from the Catalog Explorer or a collaboration area; the custom tab is visible in the single-edit page. The configured script runs and the resulting JSP is displayed in the tab.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Enabling Custom Tools by using Custom Tool script


You can enable Custom Tools in the Persona-based UI. The Custom Tool includes the Job Console pane to view the running jobs status.

Before you begin


For Custom Tools support, you should first install and configure Hazelcast.

Install Hazelcast. For more information, see Installing Hazelcast IMDG.

Procedure
1. Browse to the $TOP/etc/default/common.properties file.
2. In the common.properties file, change the value of the xframe_header_option to ALLOWALL.
3. Browse to the $TOP/mdmui/dynamic/mdmui/config.json file.
4. In the config.json file, change the value of the following:

customScriptBaseUrl=http://<old_ui_ip>:<port>

5. Update the config.json file by running the following command:

$TOP/mdmui/bin/updateRtProperties.sh

6. Run the $TOP/bin/go/start_local.sh script to restart the Admin UI server.
Restriction: For the Content Editor role, the Custom tab and the Custom Tools icon (left pane) are only visible in the Google Chrome browser and display a certificate error in the Mozilla Firefox and Microsoft Internet Explorer browsers.
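The following is a condensed sketch of steps 1 through 6, assuming that the xframe_header_option property already exists uncommented in common.properties:

# step 2: allow the Admin UI pages to be framed by the Persona-based UI
sed -i 's/^xframe_header_option=.*/xframe_header_option=ALLOWALL/' $TOP/etc/default/common.properties
# step 4: edit customScriptBaseUrl in $TOP/mdmui/dynamic/mdmui/config.json, then propagate:
$TOP/mdmui/bin/updateRtProperties.sh
# step 6: restart the Admin UI server
$TOP/bin/go/start_local.sh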

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Developing custom REST API


You can develop custom REST API in the Persona-based UI using Eclipse.

Procedure
1. Open Eclipse, and go to File > New > Project > Maven Project.
2. Extract the sample project (custom-rest.zip) located in the $TOP/mdmui/samples folder to the new maven project.
3. Right-click the maven project, and go to Build Path > Configure Build Path.
4. In the Java Build Path window, add the following.
JRE System Library
Admin UI JAR files located in the $TOP/jars folder
REST API JAR file dependencies located in the $TOP/mdmui/libs folder
Custom REST API JAR file (mdm-rest.jar) located in the $TOP/mdmui/libs/custom folder
Following are the project structure guidelines.
The package name should have prefix of "com.ibm.rs.custom".
The controller class should contain public methods and follow package structure "com.ibm.rs.custom.api".
The service class should perform business validations for input and invoke Data Access Object (DAO) class. It should follow package structure
"com.ibm.rs.custom.service".
The DAO class should invoke the Java™ APIs to perform business operations and convert them to some higher-level constructs (objects, collections). It
should follow package structure "com.ibm.rs.custom.dao" and "com.ibm.rs.custom.bean".
5. To compile, right-click the custom-rest project (pom.xml file), and go to Maven > Update project.
6. Stop the services by using the following command.

cd $TOP/bin/go
./stop_local.sh



7. To copy the custom modifications to the WebSphere® Application Server folder where REST and Persona-based UI war files are extracted after the deployment, run
the updateRtProperties.sh script present in the $TOP/mdmui/bin folder.
To copy custom modifications successfully, place the custom code in the $TOP/mdmui/custom folder. The custom folder has the following structure:

API folder (custom code for REST API)
classes (class files)
Place the REST API custom code as the folder structure "$TOP/mdmui/custom/api/classes/com/ibm/rs/custom/…"
OR
As a JAR file: $TOP/mdmui/custom/api/my-new-customapi.jar
The custom code should follow the "<directory>/com.ibm.rs.custom.<directory/files>" folder structure.
UI folder (custom files for UI)
JS folder (JS file for UI)
JSON folder (JSON file for UI)

8. Start the services by using the following command.

cd $TOP/bin/go
./start_local.sh

What to do next
You can test the custom REST APIs by using a REST client like Postman or Advanced REST Client plug-in in the Google Chrome browser.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Upgrading the existing custom sample library


If you have existing custom sample library, you need to upgrade that to the latest Angular version.

Procedure
1. Download and install the Node v14.x.x from the Downloads site.
2. Go to the $TOP/mdmui/samples folder.
3. Extract the custom-library.zip file (Angular-based custom sample library).
4. Open the custom sample library in the Visual Studio Code editor.
5. In the command terminal, run the npm install command.
6. Move your existing components into the Angular custom sample library.
7. Update your existing components according to the latest library.
8. Copy the generated JS files to the $TOP/mdmui/custom/ui/js folder and JSON files to the $TOP/mdmui/custom/ui/json folder.
9. Run the $TOP/mdmui/bin/updateRtProperties.sh script for WebSphere® Application Server-based on-premise deployments.
The script copies files to the following folders in the WebSphere Application Server.
JSON files to $WAS_HOME/profiles/<WAS profile>/installedApps/<Node Cell>/mdm_ui.war.ear/assets/json
JS files to $WAS_HOME/profiles/<WAS profile>/installedApps/<Node Cell>/mdm_ui.war.ear/js/custom
Note: For Operator-based deployments, see Customizing the Docker container.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Enabling HTTPS ports


For certain circumstances, you might want to deploy Product Master by using HTTPS ports.

Procedure
1. Open the WebSphere® Application Server console:
http://server:port/ibm/console
2. Log in to the WebSphere Application Server console by specifying the User ID as root.
3. Click Servers > Server Types > WebSphere Application servers and click {application_server_name} link.
4. Expand Web Container Settings and click Web container transport chains.
5. Click Web container transport chains > WCInboundDefaultSecure.
6. Check the Enable box, then click Apply > Save.
Note: After you save the changes, ensure that the web container transport chains console shows that WCInboundDefaultSecure is now enabled.
7. Click Environment > Virtual Hosts > {host_name} > Host Aliases > New and add the new port.
Important:
The new port number should be the same as the port number displayed in step 6.
Do not modify the host name. Keep the default host name (*).
8. Click Apply > OK. Verify that the new vhost is created successfully.
9. Click Resources > Resource Environment > Resource environment entries > DataView > Custom properties and modify the following properties:
HTTP to HTTPS



port - Set to the port number specified in step 7.
host - Should contain the host name of the computer.
Click OK, and verify that the properties are modified.
10. Run the following command to update the Persona-based UI properties:
$TOP/mdmui/bin/updateRtProperties.sh

11. Import the certificate into the WebSphere Application Server JAVA_HOME truststore by using the following command:

keytool -import -trustcacerts -file <File Path> -alias <alias name for certificate> -keystore <path of cacerts>

Example:

keytool -import -trustcacerts -file /opt/IBM/WebSphere/AppServer/profiles/AppSrv01/etc/certnew.cer -alias certificate1 -keystore $JAVA_HOME/jre/lib/security/cacerts

12. Restart the server that is used for the Product Master by using the following commands:
$TOP/bin/go/stop_local.sh
$TOP/bin/go/start_local.sh
13. Log in to Product Master by using the HTTPS port.
https://server_ip:port/mdm_ui
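To optionally confirm that the certificate from step 11 is present in the truststore, you can list it by alias; keytool prompts for the keystore password (changeit is the default for a Java cacerts file):

keytool -list -keystore $JAVA_HOME/jre/lib/security/cacerts -alias certificate1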

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Enabling IBM Operational Decision Manager


You can enable the IBM Operational Decision Manager in the Persona-based UI.

Procedure
1. Browse to the $TOP/bin/conf/classpath/jars-custom.txt file and add following line:
etc/default/wodm/jars/wodm.command.jar
2. Add the wodm.command.jar file to the application server class path:
Servers > Server Types > WebSphere Application servers > {application_server_name} > Java and Process Management > Process definition > Java Virtual Machine
3. Browse to the $TOP/etc/default/common.properties file.
4. In the common.properties file, change the value of the xframe_header_option to ALLOWALL.
5. Browse to the $TOP/bin/conf/env_settings.ini file.
6. In the env_settings.ini file, change the value of the multicast_ttl property to 1.
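A quick sketch to confirm the edited values before you restart the services:

grep xframe_header_option $TOP/etc/default/common.properties
grep multicast_ttl $TOP/bin/conf/env_settings.ini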

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Verifying the installation


To verify that you have successfully installed IBM® Product Master, log in to the product user interface.

Verifying the database and WebSphere Application Server settings


After you install IBM Product Master, use the following checks to ensure that you set up the application correctly.
Creating test company
Your Product Information Management (PIM) data is organized in IBM Product Master by companies. To be able to log in to Product Master, you must create a test
company by using the script that is provided.
Accessing the product
You need to start the IBM Product Master services to access the product.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Verifying the database and WebSphere Application Server settings


After you install IBM® Product Master, use the following checks to ensure that you set up the application correctly.

Database configuration and settings


Oracle Database - All necessary Product Master configurations for Oracle are enabled in the Oracle init.ora file. For more information, see Setting up your Oracle Database. After the Oracle Database is up, you can verify the necessary Product Master settings. You can consult your DBA for assistance with this.



Db2® database - All necessary Product Master settings for the Db2 database are enabled in three different configuration areas:
1. Db2 registry variables.
2. Db2 database manager configuration.
3. Db2 database configuration.
Check these settings by running a shell script on the Db2 server, as the Db2 instance owner ID, to compare the current values of the configuration settings with the values that the IBM Product Master Installation Guide recommends. For more information, see Setting up your Db2 database. For assistance with the shell script, consult your DBA.

WebSphere Application Server settings


Check settings in the WebSphere® Application Server admin console against recommendations for WebSphere Application Server in the WebSphere Application Server
documentation and against those made in the Product Master Installation Guide. You can verify that the correct Java™ and JDK libraries are used. For more information,
see Setting WebSphere Application Server parameters.

Product settings
All product settings are stored in the common.properties file. Verify that all the required settings are understood and used. If necessary, verify that the Mount manager is
installed and configured correctly.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Creating test company


Your Product Information Management (PIM) data is organized in IBM® Product Master by companies. To be able to log in to Product Master, you must create a test
company by using the script that is provided.

About this task


The create_cmp.sh script at <install dir>/bin/db can be used to create a new company. The --code argument is required for the create_cmp.sh script.

Procedure
1. Use the following shell script to create the schema:
<install dir>/bin/db/create_cmp.sh
For example,

create_cmp.sh --code=<company code> --name=<company name>

The script creates create_cmp.log file at <install dir>/logs/.


This script creates the company. You can use this empty test company in your test environment. You can use the create_cmp.sh script to create more test
companies.
Note: The ID and password information is hardcoded when you create the company by using the script, and so they are case-sensitive.
2. Create an empty company called 'test' (which contains no predefined data), by running the following shell script:
<install dir>/bin/db/create_cmp.sh --code=test
The demonstration company is created with a single user: Admin. The password for Admin is trinitron. Passwords are case-sensitive. The Admin user is created
with full privileges and should be used by an administrator.

Remember:
You must run the create_cmp.sh shell script only when your system is down.
You must not run multiple instances of the create_cmp.sh shell script in parallel at a given time; otherwise, the scripts fail.
3. Review the log file after running create_cmp.sh to check for errors.
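For instance, the following sequence creates a test company and then scans the log for errors; a minimal sketch that assumes $TOP is the installation directory, as elsewhere in this guide:

$TOP/bin/db/create_cmp.sh --code=test --name="Test Company"
grep -i error $TOP/logs/create_cmp.log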

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Accessing the product


You need to start the IBM® Product Master services to access the product.

Starting the product


When you start the Product Master, you also start all the related services and the application servers, including the application server of the Global Data Synchronization
feature.
Important: Before you start the product, you must ensure that you have configured the Global Data Synchronization application server in the [services] section of the
env_settings.ini file.
You need to run the start_local.sh script to start the product and all of the services that are needed to run the product, and the rmi_status.sh script to verify that
the application is running. You also need to start the messaging service of the Global Data Synchronization feature to send and receive XML messages. An environment
that provides graphical support (for example, VNC) is recommended. If global security is enabled in WebSphere® Application Server, but the
admin_security=false parameter is specified in the env_settings.ini file, you are prompted with a window to provide the WebSphere Application Server administrative
user name and password. If you use an environment like PuTTY, which does not have graphical support, the execution of the command appears hung.
To start the services, proceed as follows:

1. Run the start_local.sh script at the <install directory>/bin/go directory. If the admin_security=true parameter is set in the env_settings.ini file, but the user
name and password are not provided in the [appserver] section, specify these values in the command line as follows:

start_local.sh --wsadminUsername=<was_admin_user> --wsadminPwd=<was_admin_password>
2. Run the gdsmsg.sh script at the <install directory>/bin directory with the start parameter if you have enabled the Global Data Synchronization feature. You can use
the status parameter to fetch the status of the Global Data Synchronization listener service.
$<Install_Dir>/bin/gdsmsg.sh start
3. Run the rmi_status.sh script to verify that the application is up and running, and that the services have started. The following information is displayed:

admin_<name of instance>
appsvr_<name of instance>
eventprocessor_<name of instance>
queuemanager_<name of instance>
scheduler_<name of instance>
workflowengine_<name of instance>

This process takes approximately 30 - 40 seconds, depending on the speed of the processor.

Example
Following are examples of the output after running the rmi_status.sh script, showing that all services have started on pimserver1.

When the Global Data Synchronization feature is enabled:

[pim1@pimserver1 pim1]$ /opt/pim/pim1/pim900/bin/go/rmi_status.sh


[success] rmistatus (Mon Mar 8 14:00:49 PDT 2010)
//pimserver1:17507/samplemart/admin/admin_pimserver1
//pimserver1:17507/samplemart/appsvr/appsvr_pimserver1
//pimserver1:17507/samplemart/queuemanager/queuemanager_pimserver1
//pimserver1:17507/samplemart/workflowengine/workflowengine_pimserver1
//pimserver1:17507/samplemart/scheduler/scheduler_pimserver1
//pimserver1:17507/samplemart/eventprocessor/eventprocessor_pimserver1

When the Global Data Synchronization feature is not enabled:

[pim1@pimserver1 pim1]$ /opt/pim/pim1/pim900/bin/go/rmi_status.sh


[success] rmistatus (Mon Mar 8 14:00:49 PDT 2010)
//pimserver1:17507/samplemart/admin/admin_pimserver1
//pimserver1:17507/samplemart/appsvr/appsvr_pimserver1
//pimserver1:17507/samplemart/queuemanager/queuemanager_pimserver1
//pimserver1:17507/samplemart/workflowengine/workflowengine_pimserver1
//pimserver1:17507/samplemart/scheduler/scheduler_pimserver1
//pimserver1:17507/samplemart/eventprocessor/eventprocessor_pimserver1

Logging in to the product


Product Master provides a browser-based user interface that you can use to manage and administer your system. You log in with the user name, password, and company
that you created from the demonstration scripts.

1. Open your web browser and enter the URL and port for the web server. It is important to enter a fully qualified host name along with "/utils/enterLogin.jsp". The URL
is similar to the following:
http://<DNS name or IP address>/utils/enterLogin.jsp

Where,

<DNS name or IP address> is the DNS name or the IP address of the system on which the product is running, and the port is defined in the common.properties file.

Note: During the product installation, the web server port was set to 7507 in a two-tier configuration. If a different port is used, change the port reference in the
server.xml file for a three-tier configuration.
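For example, with the default two-tier port from the note above and a purely illustrative host name, the login URL looks like this:

http://pimserver1.example.com:7507/utils/enterLogin.jsp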
2. Use the user name, password, and company that was created from running the demonstration test scripts. For example, enter the following information:
User name: Admin
Password: trinitron
Company Code: acme
For more information, see Creating test company.

Note: The ID and password information was hardcoded when you created the company by using the script, and hence are case-sensitive.
If the product home page loads, the installation was successful. Log out of the application.

Stopping the product


To update or change the runtime configuration of the Product Master, you must stop the product and all the related services, and the application servers including the
application server of the Global Data Synchronization feature.

You need to run the abort_local.sh script to stop the product and all of the services. You also need to stop the messaging service of the Global Data Synchronization
feature. An environment that provides graphical support (for example, VNC) is recommended. If global security is enabled in WebSphere Application
Server, but the admin_security=false parameter is specified in the env_settings.ini file, you are prompted with a window to provide the WebSphere Application Server
administrative user name and password. If you use an environment like PuTTY, which does not have graphical support, the execution of the command appears hung.

To stop the product, proceed as follows:

1. Run the abort_local.sh script at the <install directory>/bin/go directory. If the admin_security=true parameter is set in the env_settings.ini file, but the user
name and password are not provided in the [appserver] section, specify these values in the command line as follows:

abort_local.sh --wsadminUsername=<was_admin_user> --wsadminPwd=<was_admin_password>
2. Run the gdsmsg.sh script at the <install directory>/bin directory with the stop parameter, if you have enabled the Global Data Synchronization feature.
stop parameter - Use to complete processing the messages that have been picked from the message queue and then stop the Global Data Synchronization
messaging service.
abort parameter - Use to stop the Global Data Synchronization messaging service immediately without processing the messages that have been picked from
the message queue.
$<Install_Dir>/bin/gdsmsg.sh stop
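Putting the two steps together, a typical shutdown with GDS enabled might look like the following sketch; the graceful stop is shown, and abort is reserved for when you cannot wait for in-flight messages:

cd <install directory>/bin/go
./abort_local.sh --wsadminUsername=<was_admin_user> --wsadminPwd=<was_admin_password>
cd ..
./gdsmsg.sh stop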

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Applying fix pack


When IBM® releases a fix pack for the IBM Product Master, you can apply the fix pack.

Before you begin


Ensure that you have installed and configured the base version of IBM Product Master 12.0.
For applying any interim fix, ensure that you have installed and configured full build of IBM Product Master 12.0.
Example
To apply IBM Product Master Fix Pack 1 Interim Fix 1, proceed as follows:
IBM Product Master 12.0 GA > IBM Product Master 12.0 Fix Pack 1 > IBM Product Master 12.0 Fix Pack 1 Interim Fix 1
Download the fix pack from Fix Central to a user or temporary directory and extract the fix pack artifacts. For more information, see Extracting and installing the fix
pack.
Review the lists of known issues and issues that are fixed in the latest release. These lists are available on the IBM Support website.
To see the detailed system requirements from Software Product Compatibility Reports (SPCR), search "IBM Product Master" in the following link.
Software Product Compatibility Reports
Password encryption utility: Run the following command to generate an encrypted password that can then be pasted into the respective properties file.

$JAVA_RT com.ibm.ccd.common.generate.config.DBEncryptionUtils -encrypt --password=password

Copy the existing env_settings.ini file at $TOP/bin/conf/, which contains all the important configuration, to another folder.
To avoid incompatibilities and issues in the user interface, clear your browser cache so that the latest JavaScript files are loaded and used by the user
interface.
Stop the Product Master application on the local server.
Check the scheduler to make sure that no critical jobs are running or need to complete.
Stop the scheduler manually by running the following shell script, if the scheduler queue is clear:
$TOP/bin/go/svc_control.sh --action=stop --svc_name=scheduler
Check the workflow engine to make sure that no critical workflow events are running or need to complete by running the following shell script:
$TOP/bin/go/svc_control.sh --action=short_status --svc_name=workflowengine
Check the scheduler status to make sure that no critical jobs are running or need to complete by running the following shell script:
$TOP/bin/go/svc_control.sh --action=short_status --svc_name=scheduler
Shut down the workflow engine manually by running the following shell script, if no critical workflow events are running:
$TOP/bin/go/svc_control.sh --action=stop --svc_name=workflowengine
Stop all remaining Product Master services by running the following script, for all applications deployed in a cluster environment:
$TOP/bin/go/abort_local.sh
Note: Running the abort_local.sh shell script does not affect any of the other JVM services.
Ensure that all processes are stopped by using the ps command.
If the GDS feature is enabled, stop the GDS message listener:
To allow the GDS message listener to finish processing the messages that are already picked up and then stop, run the following script:
$TOP/bin/gdsmsg.sh stop
To stop the GDS message listener immediately, without processing the picked-up messages, run the following script:
$TOP/bin/gdsmsg.sh abort
Back up your system. The installation overwrites your current files with updated versions from the fix pack. If any issues occur when you install the fix pack, you can
use this backup copy to roll back the installation.
Create a full backup of all of your Product Master directories, especially the following directories, where the configuration files common.properties,
admin_properties.xml, and env_settings.ini are stored:
$TOP/etc/default
$TOP/bin/conf
Back up the following GDS messaging files and restore them after the installation is completed:
$TOP/etc/default/gds.properties
$TOP/etc/default/.bindings
Create a full backup of your database.
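A minimal backup sketch for the directories and GDS files called out above; the destination path /backup/pim is an assumption, so adjust it to your environment:

mkdir -p /backup/pim
cp -r $TOP/etc/default /backup/pim/etc-default
cp -r $TOP/bin/conf /backup/pim/bin-conf
cp $TOP/etc/default/gds.properties $TOP/etc/default/.bindings /backup/pim/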

Extracting and installing the fix pack


After you download the fix pack, complete the steps that are listed here.
Database schema migration
If the fix pack needs database schema migration, complete the steps that are listed here.
Verifying the installation
After you install a fix pack, verify that the installation was successful by completing the steps that are listed here.
Installing password security update
The following instructions are applicable only to an Administrator user. Running the resetPasswords.sh script resets passwords for all the users in a given
company that are not enabled for LDAP (except the password for the user running the command), produces an XML file showing the changes, and optionally sends
an email to each user with the login instructions. Passwords for users that are enabled for LDAP are not changed, and such users are not affected by the command.



Reset password patch
Running the resetPassword.sh script resets passwords for all the users, produces changes in an XML file, and if configured, sends an email to each user
with the login instructions.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Extracting and installing the fix pack


After you download the fix pack, complete the steps that are listed here.

Before you begin


Ensure that the default WebSphere® (server1) is installed and running. Verify that you can log in to the server. For more information, see WebSphere Application Server
Network Deployment welcome page.

Procedure
To extract the downloaded file, complete the following steps:

1. Change the directory to $TOP (or the current working directory) and verify that the correct permissions exist for unpacking the files by running the following
command.

cd $TOP && chmod -R 755 $TOP

2. Unpack the files by using the GNU tar utility; extract the TGZ file to the same folder where the Product Master 12.0 version is installed by using the following
command.

gtar zxvf $TOP/<fix pack tar file name>

Important: The Product Master files are packed by using the GNU gtar utility. Use the GNU Tar utility to unpack the archive file for best results, especially on
computers that run on the AIX® operating system. If you use the AIX version of the tar command, you might not be able to correctly unpack all files in the archive.
One indication of incorrect unpacking of the archive is the presence of the @LongLink file in the directory in which you unpack the archive.
3. Overwrite the existing env_settings.ini file in the extracted folder with your backup file.
4. Run the setup.sh script located in the $TOP folder.
5. Run the configureEnv.sh script located in the $TOP/bin folder to regenerate the compat.sh file.
Note:
If encrypt_password=yes is set in the [db] section of the env_settings.ini file, you can avoid the password prompt by specifying the dbpassword
argument when you run the configureEnv.sh script.

$TOP/bin/configureEnv.sh -dbpassword=database_password

If you customize the content of some configuration files (except common.properties) and you choose to overwrite such files when running the
configureEnv.sh script, you need to manually restore customized values by using the backup copy. The file name of the backup copy is the name of the
corresponding configuration file with the BAK suffix.
If you run the configureEnv.sh script two or more times, you can lose the original customized configuration files because every time the configureEnv.sh
script is run, the generated backup copy has the same file name.
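Because rerunning configureEnv.sh regenerates backups under the same names, you might preserve the generated backup first; a sketch that assumes the BAK suffix is appended to the file name and reuses the illustrative /backup/pim destination:

cp $TOP/bin/conf/env_settings.ini.BAK /backup/pim/env_settings.ini.BAK.$(date +%Y%m%d)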
6. Run the compat.sh file to set the most commonly used environment variables.
7. Verify that the db.xml file is created in the $TOP/etc/default folder.
8. Recompile the solution Java™ code by using the latest product JAR files that are included with the fix pack.
9. Redeploy the IBM® Product Master to the application server.
Note: If you are using the Script workbench, when you install this fix pack the Script workbench communication .jsp and .jar files are removed from your server
configuration.
10. Integrate the IBM Product Master and the WebSphere Application Server.
Admin UI
a. Run the install_war.sh script in the $TOP/bin/websphere folder, with the following optional parameters.

install_war.sh [ --wsadminUsername=WAS admin user name --wsadminPwd=password for WAS admin user]

The script installs the WAR file for each app server that is defined in the [services] section in the env_settings.ini file.
b. If the WebSphere Application Server security is enabled, add the wsadminUsername and wsadminPwd arguments to the install_war.sh script.
Note: The following arguments are no longer required in the $TOP/bin/start_local.sh, $TOP/bin/start_rmi_appsrv.sh, $TOP/bin/stop_local.sh, and
$TOP/bin/rmi_status.sh commands.

--wsadminUsername=<WAS admin user name> --wsadminPwd=<password for WAS admin user>

Persona-based UI
a. Run the installAll.sh script in the $TOP/mdmui/bin folder, with parameters for update installation.

installAll.sh

The installAll.sh script installs the mdm-rest.war and mdm_ui.war WAR files. The script also configures features like dashboard, FTS, DAM, machine learning,
and starts services based on the configuration.
Note: If the Admin UI services are already started, then the script restarts it and starts the Persona-based UI services.
11. GDS If you want to install the GDS feature, run the install_gds_war.sh file in the $TOP/bin/websphere folder.
Note: You need to configure GDS feature before you start the service. For more information, see Configuring GDS feature.



What to do next
Enter the following URL in the address bar of your browser to access the interface.
Admin UI

http://host_name:port_number/utils/enterLogin.jsp
Persona-based UI

http://host_name:port_number/mdm_ui

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Database schema migration


If the fix pack needs database schema migration, complete the steps that are listed here.

Before you begin


Important:

Ensure that the database user for the Product Master application has the appropriate database privileges, as mentioned in DB2® privileges or Oracle privileges. If the database
privileges are modified for any reason, the migration script fails.
Ensure that the Product Master application is stopped on the local server.
Identify the version of fix pack that you are migrating from.

Procedure
1. Run the migrateToInstalledFP.sh script in the $TOP/bin/migration directory.
For earlier fix pack levels:
migrateToInstalledFP.sh --fromversion=BASE, FP1, FP2 [--dbpassword=<database password>]
fromversion - Depends on the version that you are migrating from.
dbpassword (Optional) - Stores the encrypted database password in the Product Master.
For later fix pack levels:
migrateToInstalledFP.sh --fromversion=BASE, FP1, FP2, FP3
fromversion - Depends on the version that you are migrating from.
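For example, migrating a schema from the Fix Pack 2 level might look like the following; the fromversion value is illustrative and must match the level you are actually migrating from:

cd $TOP/bin/migration
./migrateToInstalledFP.sh --fromversion=FP2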
2. Check the migration messages in the console to verify successful migration.
Types of migration messages

Migration successful:
--------------------------------------
Summary of the migration:
--------------------------------------
Migration to IBM Product Master 12.0.0 successful.

Migration failed:
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
Migration Failed : xxxx
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
Please see the file for further details :
/home/pimuser/mdmcs12/logs/errfile.log

Migration failed for specific modules:
--------------------------------------
Summary of the migration:
--------------------------------------
Migration of the following modules failed : <module names>
Review the errfile.log file for more messages.
Review the $TOP/logs/migrateToInstalledFP.log file for more details.
For SQL errors, find the detailed error message from the SQL error code, correct the error, and run the migration script again.
For more information, see Troubleshooting migration issues.
Contact IBM Software Support if the problem persists after you correct the errors.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Verifying the installation


After you install a fix pack, verify that the installation was successful by completing the steps that are listed here.

Procedure
To verify that the installation was successful, complete the following steps:



1. Start Product Master.
2. Run the start_local.sh script in the $TOP/bin/go directory to start all the services needed to run the Product Master.
3. Run the svc_control.sh script to start the individual services.
The svc_control.sh script supports starting multiple services from the same command:
svc_control.sh --action=start --svc_name=<service name> [--svc_name=<service name>]
Example

svc_control.sh --action=start --svc_name=appsvr --svc_name=admin --svc_name=scheduler


ML services
a. Go to the scripts folder, for example "$TOP/mdmui/machinelearning/scripts".
b. Use the following commands as required:
To start the service:

python3.9 start.pyc

Free Text Search services


To start the service:

$TOP/mdmui/bin/startFtsServices.sh

To stop the service:

$TOP/mdmui/bin/stopFtsServices.sh

4. Run the gdsmsg.sh script located in the $TOP/bin/ directory to start the GDS messaging service (optional):
gdsmsg.sh start
Important: For Product Master running WebSphere® Application Server, as part of a new installation or an upgrade to an existing installation, use the redeploy
option when running the start_local.sh script to ensure that the web services are deployed to the Product Master application.
This process might take approximately 30 - 40 seconds, depending on the speed of your processor.

5. Verify that all Product Master JVM services are started.


6. Run the $TOP/bin/go/rmi_status.sh script and verify that the following services are started correctly:
admin_machine_Name
appsvr_machine_Name
eventprocessor_machine_Name
queuemanager_machine_Name
scheduler_machine_Name
workflow_machine_Name
Dashboards service status
FTS service status
Machine learning service status
7. Run the following script to verify that the GDS messaging services are started correctly:
$TOP/bin/gdsmsg.sh status

8. Verify the Product Master installation by reviewing the version of the installed Product Master.
a. Run the get_ccd_version.sh shell script in the $TOP/bin directory.
$TOP/bin/get_ccd_version.sh

b. View the installation version through the Product Master user interface.
For the Admin UI, go to Window > About Current PageId.
For the Persona-based UI, in the upper-right corner of the interface, click more > About IBM Product Master.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Installing password security update

The following instructions are applicable only to an Administrator user. Running the resetPasswords.sh script resets passwords for all the users in a given
company that are not enabled for LDAP (except the password for the user running the command), produces an XML file showing the changes, and optionally sends an
email to each user with the login instructions. Passwords for users that are enabled for LDAP are not changed, and such users are not affected by the command.

Before you begin


Complete the following steps:

Back up the database, or at least the TSEC_SCU_USER table (alias SCU).
Run the following commands.

cd $TOP/bin/migration
unzip resetPassword.zip
cd $TOP/bin/migration/resetPassword

Using the following command, provide execute permission to the resetPasswords.sh file.

chmod 755 resetPasswords.sh

Before you run the resetPasswords.sh file, ensure that you set the $JAVA_RT environment variable. To set the variable, run the compat.sh file by using the following
command.

$TOP/bin/compat.sh



If the ResetPasswords.class file does not exist, the javac command is used to create the ResetPasswords.class file. You must ensure that the environment variable
$JAVA_HOME contains a copy of $JAVA_HOME/bin/javac whose version is compatible with the installed version of IBM® Product Master.
In the Persona-based UI, when you create a user, do not use a colon (:) in the username.

Procedure
1. Enter the following command:

cd $TOP/bin/migration

2. Run the resetPasswords.sh script with the following parameters.

./resetPasswords.sh [option] Admin adminpw company output-file

Where,

[option]
If you do not specify any value, the resetPasswords.sh script generates the output-file, changes passwords, and sends email to each user.
[option]= --dry-run or -d
Generates the output-file only (does not change password or send any email).
[option]= --no-email or -n
Generates the output-file and changes password only (does not send any email).
Admin
The username of the administrator.
adminpw
The password of the administrator.
Note: The resetPasswords.sh script does not change administrator password.
company
The company code.
output-file
The full path name of the output file with an XML extension.

3. Check the generated output-file. The file contains all the usernames for a specified company (except administrator), the corresponding new passwords, and the
corresponding email addresses.
a. If you have used -n option, you need to send each user an email with the instructions mentioned in the What to do next section.
b. For other users, share the information by appropriate method. You can also use output-file to write your own script to transmit the information.
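For instance, a dry run against a hypothetical company code acme, writing the report to a temporary file; the adminpw value and output path are placeholders:

cd $TOP/bin/migration/resetPassword
./resetPasswords.sh --dry-run Admin <adminpw> acme /tmp/reset-report.xml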
4. Delete following files in the $TOP/bin/migration directory:
resetPasswords.sh
ResetAdminPW.class
ResetPasswords.class
resetPasswordsEmailTemplate.txt
resetPassword.html
5. As a best practice:
You should change the Administrator's password because this script can allow a malicious user to overwrite the Administrator's password.
If you have edited the resetPasswordsEmailTemplate.txt file, you might want to save a copy of the file before deleting it, in case you have to repeat this
procedure.
6. To configure automatic emails,
a. Edit the content of the resetPasswordsEmailTemplate.txt file.

<Email_subject>
----
In order to repair a security vulnerability, your password for IBM Product Master has been reset.
The next time you log in, please do so using the following password:
xxxxxxxx
and then immediately change your password.
If you wish, you may change your password to the password you used before it was reset.

Attention: Do not edit the delimiter "----" and password "xxxxxxxx" placeholders. The password placeholder gets replaced at the run time by the password of
the user to whom the email is being sent.
b. Set the following two properties in the common.properties file:

smtp_address
from_address
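These two properties might be set as follows; the host and address values are placeholders:

smtp_address=smtp.example.com
from_address=pim-admin@example.com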

Importing data that was exported from an older Product Master version into an existing or new company results in new users being added. You should run the reset
password utility to change the passwords according to the new policy. Alternatively, you can change the user passwords from the Admin UI by using administrator
access.
Note: The reset password utility changes the password of all the users in the given company.

What to do next
As a user who got an email with new password, log in to the Product Master with your username and the new password. You can change your password to anything you
want, including the password you used before.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Reset password patch



Running the resetPassword.sh script resets passwords for all the users, produces changes in an XML file, and if configured, sends an email to each user with
the login instructions.

This is an obsolete feature. Starting from IBM® Product Master Fix Pack 3, see Installing password security update.

Before you begin


Complete the following steps:

Back up the database, or at least the TSEC_SCU_USER table (alias SCU).
Run the following commands.

cd $TOP/bin/migration
unzip resetPassword.zip
cd $TOP/bin/migration/resetPassword

Using the following command, provide execute permission to the resetPassword.sh file.

chmod 755 resetPassword.sh

Before you run the resetPassword.sh file, ensure that you set the $JAVA_RT environment variable. To set the variable, run the compat.sh file by using the following
command.

$TOP/bin/compat.sh

If the ResetPasswords.class file does not exist, the javac command is used to create the ResetPasswords.class file. You must ensure that the environment variable
$JAVA_HOME contains a copy of $JAVA_HOME/bin/javac whose version is compatible with the installed version of IBM Product Master.
In the Persona-based UI, when you create a user, do not use colon ":" in the username.

About this task


The resetPassword.sh script resets passwords for all the users (except the password for the user that runs the script), produces changes in an XML file, and if configured,
sends an email to each user with the login instructions.

This patch contains following files:

secpatch.html
resetPassword.zip that contains following files:
resetPassword.sh
ResetPasswords.java
MailContent.xml
The MailContent.xml file contains a Subject line for the emails and the informational content of the email, and is so arranged that the new password can be
automatically inserted into the email text.
If you need to translate the email text to the local language of your users:
Translate the email text only.
Do not translate the complete MailContent.xml file.
Retain the MailContent.xml file name.

Importing data that was exported from an older Product Master version into an existing or new company results in new users being added. You should run the reset password
utility to change the passwords according to the new policy. Alternatively, you can change the user passwords from the Admin UI by using administrator access.
Note: The reset password utility changes the password of all the users in a company.

Procedure
1. Enter the following command:

cd $TOP/bin/migration

2. Run the resetPassword.sh script with the following parameters.

./resetPassword.sh [option] Admin adminpw company output-file

Where,

[option]
If you do not specify any value, the resetPassword.sh script generates the output-file, changes passwords, and sends email to each user.
[option]= --dry-run or -d
Generates the output-file only (does not change password or send any email).
[option]= --no-email or -n
Generates the output-file and changes password only (does not send any email).
Admin
The username of the administrator.
adminpw
The password of the administrator.
Note: The resetPassword.sh script does not change administrator password.
company
The company code.
output-file
The full path name of the output file with an XML extension.

3. Check the generated file. The generated file contains all the usernames for the specified company (except administrator), corresponding new passwords, and email
addresses.
a. If you have used -n option, you need to send each user an email with the instructions mentioned in the What to do next section.



b. For other users, share the information by appropriate method. You can also use output-file to write your own script to transmit the information.
4. Log in to the Product Master and change your password to any string you want (referred to here as newpw), provided that the password contains only characters whose
decimal value is less than 256.
5. Apply the patch and restart Product Master.
6. Log in to Product Master by using following credentials and then change your password to any string you want (including adminpw):
User name - Admin
Password- newpw
7. Delete following files in the $TOP/bin/migration directory:
resetPassword.sh
ResetPasswords.java
ResetPasswords.class
MailContent.xml
8. To configure automatic mails, set the following two properties in the common.properties file:
smtp_address
from_address

What to do next
As a user who got an email with new password, log in to the Product Master with your username and the new password. You can change your password to anything you
want, including the password you used before.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Migrating
You can migrate to the latest version of the IBM® Product Master.

Migration involves the following basic steps:

1. Verifying that enough database table space is available.


2. Stopping the instance that is using the database schema.
3. Backing up the existing database schema.
4. Installing IBM Product Master version 12.0.
5. Modifying the database schema by running the appropriate migration script to the IBM Product Master version 12.0 level.

Migration scenarios
Following are the possible migration scenarios:
Scenario: Fresh installation for the IBM Product Master version 12.0 release
Solution: Install by using the appropriate full build.

Scenario: Migrating from the InfoSphere® Master Data Management Collaboration Server - Collaborative Edition (11.4, 11.5, or 11.6) release to the IBM Product Master version 12.0 release
Solution: Run the appropriate migration script. For more information, see Migrating to IBM Product Master Version 12.0.

Migration prerequisites
Ensure successful migration by following the listed procedure.
Migrating to IBM Product Master Version 12.0
To migrate to IBM Product Master Version 12.0, you must run the appropriate migration script.
Troubleshooting
You can use the following solutions to resolve common migration issues.

Related concepts
Troubleshooting

Related tasks
Migration prerequisites
Migrating to IBM Product Master Version 12.0

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Migration prerequisites
Ensure successful migration by following the listed procedure.



Before you begin
Before you start migration, ensure you complete the following steps:

You must have adequate permissions for the following actions:

To create and modify tables and indexes in the database.
To create and delete files in the subdirectories of the $TOP directory.
You must verify that you have at least 30% free space in the database.
You must delete old Product Master data before you migrate to avoid running out of log space.
You must have a $TOP/logs directory before you run any migration scripts.
You must use the Db2® or Oracle database version as mentioned in the system requirements. For more information, see System requirements. See the Db2 or
Oracle documentation for upgrading your Db2 or Oracle database to the supported version.
You must ensure that the Product Master database user has appropriate privileges. Consult with your database Administrator team for any changes in database
user privileges because of any security policy.
While migrating to a higher release, and not a fix pack of the same release, you must use a new installation directory to avoid JAR file version mismatches.

Procedure
To ensure a successful migration, ensure you complete the following steps:

1. Manually copy the following files and folder because the migration method does not support exporting local system files.
$TOP/etc/default/docstore_mount.xml
$TOP/etc/default/user_jars.list
$TOP/public_html/suppliers/<cmp code>/ctg_files folder
Required user JAR files
For more information, see Deploying a custom user .jar file
2. Do not overwrite the property files from an older version to a new version.
3. Modify each customized property from a previous release to the corresponding file in the new version.
4. Manually redo any customization changes that you made to the Default Rich Search Results Report Script because migration overwrites the existing script available
in the /scripts/report docstore folder. The Default Rich Search Results Report Script is used for exporting Product Master object search results to an Excel
sheet.
For more information, see Exporting search results to Excel.
5. Ensure that there is no verbose compilation option that is set for compiling scripts.
6. Verify that in the common.properties file you do not use verbose as the value for the script_compiler_options parameter. For more information, see
script_compiler_options parameters.
7. If you are using custom table space names instead of the default USERS, INDX, or BLOB_TBL_DATA, you must manually modify the table space names in the
following files:
create_pimdb.sh and create_pimdb_for_zLinux.sh script files located at $TOP/bin/db_creation folder
IBM® Db2 SQL files located at $TOP/src/db/schema/dbscripts/db2 folder
Oracle database SQL files located at $TOP/src/db/schema/dbscripts/oracle folder
Database: IBM Db2
Modify for 11.4 and 11.5 release: addCehEntryId.sql
Modify for 11.6 release: alter_ers_table_detail_status.sql, alter_scu_table_password_date.sql
Database: Oracle database
Modify for 11.4 and 11.5 release: addSstShared.sql, add_icm_index.sql, add_lot_index.sql, add_sca_index.sql, add_sit_index1.sql, create_rules_engine.sql, modifyIcm1Index.sql, redefine_ctg_indexes.sql
Modify for 11.6 release: alter_scu_table_email.sql, create_conn_workflow_connector.sql, create_idx_sch_job_status.sql, ddl_export_item_status_table_connector.sql
8. Run the setup.sh script located in the $TOP folder to check whether the version of Perl installed on your computer is compatible with the version of Product
Master that you install. If the current version of Perl is not compatible, you must install a new version.
For more information, see Installing Perl.
9. If you install the new version of Product Master in a new folder, you must update the value of TOP and PERL5LIB variables in the .bash_profile file. You must set
the PERL5LIB variable to $TOP/bin/perllib directory.
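For example, the .bash_profile updates might look like the following; the installation path is a placeholder:

export TOP=/opt/IBM/ProductMaster12
export PERL5LIB=$TOP/bin/perllib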

Related tasks
Migrating to IBM Product Master Version 12.0

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Migrating to IBM Product Master Version 12.0


To migrate to IBM® Product Master Version 12.0, you must run the appropriate migration script.

Procedure
1. Stop the running Product Master instance.
2. Back up the existing database schema.
3. Install Product Master Version 12.0 into a different folder.
4. Point the Product Master Version 12.0 instance to the database schema of existing instance, by adding the database parameters in the env_settings.ini file.
For more information, see Setting the common database parameters.
5. Run the appropriate migration script in the $TOP/bin/migration folder. If the migration was not successful, you can run the migration script again.
Product Master instance Migration script
Version 11.4 migrateFrom1140FP
Version 11.5 migrateFrom1150FP
Version 11.6 migrateFrom1160FP
See Results for the migration summary.
6. Run the test_db.sh script to verify the database schema, connectivity between Product Master and databases, and to check for JDBC and native client
connections. For more information, see test_db.sh script.
7. Start the IBM Product Master 12.0 version instance.
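For example, migrating from an 11.6 instance might look like the following sketch; the .sh extension is an assumption based on the other scripts in this guide:

cd $TOP/bin/migration
./migrateFrom1160FP.sh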

Results
All messages, such as the status of individual migration components and the overall migration summary, are displayed in the console. After you run the migration script,
the message that is displayed in the console indicates whether the migration was successful.

Successful migration message

-----------------------------------------------------------
Summary of the migration
-----------------------------------------------------------
Migration to IBM Product Master 12.0 is complete. Check the
messages for any errors.

Generating database verification report...


___________________________________________________________

Changed tables
===========================================================
There are no changed tables
___________________________________________________________

___________________________________________________________

Missing Tables
===========================================================
There are no missing tables
___________________________________________________________

___________________________________________________________

Changed Indexes
===========================================================
There are no changed indexes
___________________________________________________________

___________________________________________________________

Missing Indexes
===========================================================
There are no missing indexes
___________________________________________________________

LOG FILE: $TOP/logs/default/ipm.log

Migration failed message

!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
Migration Failed : xxxx
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
Please see the file for further details : /home/pimuser/mdmcs11/logs/errfile.log

Migration of specific modules failed message

-----------------------------------------------------------
Summary of the migration
-----------------------------------------------------------
Migration of the following modules failed :
<module names>

Discrepancy in the database tables or indexes message

Generating database verification report...


___________________________________________________________

Changed tables
===========================================================
<table names>
___________________________________________________________

___________________________________________________________

Missing Tables
===========================================================
<table names>
___________________________________________________________

___________________________________________________________

Changed Indexes
===========================================================
<index names>
___________________________________________________________



___________________________________________________________

Missing Indexes
===========================================================
<index names>
___________________________________________________________

LOG FILE: $TOP/logs/default/ipm.log

What to do next
Review the errfile.log file for more messages. For SQL errors, find the detailed error message from the SQL error code, correct the error, and run the migration script
again. Contact IBM Software Support if the problem persists after you rectify the errors.
For recommended action on migration failures, see Troubleshooting migration issues.
For more information about the Persona-based UI issues, see Troubleshooting the issues.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Troubleshooting
You can use the following solutions to resolve common migration issues.

Running compiled migration scripts


You need to modify the common.properties file before you use compiled scripts. To use compiled scripts, make sure that the common.properties file in the
$TOP/etc/default directory has the following setting:

script_execution_mode = compiled_only

You can disable script compilation for individual scripts by including the following directive at the beginning of the script:

script_execution_mode=not_compiled

However, disabling script compilation at the script level is not ideal because it leads to significant performance degradation. To avoid performance degradation when you
use non-compiled scripts, change the server setting to not_compiled rather than disabling compilation at the script level. If the server setting in the
common.properties file is set to not_compiled, then script compilation for individual scripts cannot be enabled with the script-level directives.
Note: Using a combination of compiled and non-compiled scripts degrades performance, and is not ideal. However, if you must use this combination, there is a limitation: a
non-compiled script can start functions in a compiled script, but a compiled script cannot start a function in a non-compiled script.

Common runtime errors and problems


There are known issues that you might encounter when you are running scripts. These examples illustrate some of those known issues and provide some insights on
avoiding or resolving them.

Invalid argument type:


An invalid argument type occurs when you are passing the wrong type of argument to a function (for example, a HashMap when it requires a String). You can also
receive an invalid argument type when IBM® Product Master cannot infer the type correctly. To resolve this issue, you might need to use a script operation such as
checkString() to make the type explicit.
Mismatched argument types in comparisons:
If the same data type does not appear on both sides of a conditional operator, such as ==, >, <, or <=, the expression evaluates to false. This does not result in
an error message, but the corresponding code does not run. For example, the following does not work.

var id = "12345" ;
var my_id = item.getEntryAttrib(path to some attribute that is a sequence) ;
if ( id == my_id) {
// statements that need to be executed but won't be
}

The solution in this case is to explicitly use:

var id = "12345" ;
var my_id = checkString(item.getEntryAttrib(path to some attribute that is a sequence), "") ;
if ( id == my_id) {
// statements to be executed
}

XML parsing:
The following code works in non-compiled mode and even in compiled mode when run from the script sandbox:

new XmlDocument(xmlDoc) ;
forEachXmlNode("item") {
// do the needful
}

However, in compiled mode, if this code is used in a script library function that is started by multiple users, then the statements inside the forEachXmlNode block do
not run. There is no error message, but you can use the following code as a workaround.

var doc = new XmlDocument(xmlDoc) ;
var xmlNode ;
forEachXmlNode(doc, "item", xmlNode) {
//do the needful
}

Resolve runtime errors and problems


To resolve runtime errors on the appserver, see the svc.out file in the appsvr log directory. Sometimes, examining the exception.log and the default.log might be helpful.
With the Java™ file naming convention, it is easy to identify which script failed. The error message also identifies the line number in the generated Java file. To resolve the
problem, view the generated Java file and scroll to the line where the runtime error occurred. The generated Java code now includes actual script code as comments every
few lines. For example, consider the following portion of code from a sample-generated Java file:

// function checkIfPartyPartyTypeExist(party, partyType)


public static Object ScriptFunction__checkIfPartyPartyTypeExist(HashMap hmContext, Object party, Object
partyType) throws Exception
{
// var bRet = false;
Object bRet = (java.lang.Boolean) Boolean.FALSE;
// var rootEntry = party.getRootEntryNode();
Object rootEntry = GenGetRootEntryNodeOperation.execute(hmContext , (IEntry) party);
// var entryNodes = rootEntry.getEntryNodes(getCatalogSpecName() + "/Party Types/Party Type Code");
Object entryNodes = GenGetEntryNodesOperation.execute(hmContext , (EntryNode) rootEntry, (String)
BinaryOperation.execute(BinaryOperation.PLUS, ScriptFunction__getCatalogSpecName(hmContext), "/Party
Types/Party Type Code"));
// var entryNodesSize = entryNodes.size();
Object entryNodesSize = (java.lang.Integer) GenSizeOperation.execute(hmContext , (HashMap) entryNodes);

Each of the lines that begin with // in the preceding code is actual code from the corresponding IBM Product Master script. This makes it easy to identify
where the failure occurred in the script.
For recommended action on migration failures, see Troubleshooting migration issues.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Managing master data


You can choose Admin or Persona-based user interface depending on the managing tasks that you want to perform.

Persona-based UI
Admin UI
The Admin UI provides a user interface that handles administrative tasks depending upon your role and privileges.
Synchronizing product data (GDS)
You can synchronize product data with Global Data Synchronization (GDS).

IBM Product Master 12.0 Fix Pack 8


Operating Systems: AIX, Linux, and Windows (Workbench only)

The Persona-based UI provides an enhanced user interface, which includes improved navigation experience, search, and collaboration areas on a home page depending
upon your role and privileges. You can choose Persona-based UI depending on the managing task that you want to perform.

The Persona-based UI of the Product Master supports the following personas: Full Admin, Content Editor, Catalog Manager, Category Manager, Digital Asset Manager
(DAM), Merchandiser Manager, Solution Developer, and Vendor.



IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Types of Personas
The Persona-based UI of the Product Master supports the following personas.

Admin and Full Admin


These users are the Administrators of the application. They are responsible for managing the specifications based on the business requirements and have access to the
entire system.

The user with this role has permission to perform the following tasks.

Access the workflow process to which the user is assigned as a performer.


Approve or Reject items that are sent by Vendors.
Upload Digital assets.
Link assets to items.



Create, modify, and delete items directly in the catalog.
Create, modify, and delete categories.
Create, modify, and delete specifications.
Create, modify, and delete rules.
Create and delete the search.
Perform the Free Text Search (FTS).
Access all the dashboards available in the application.
Enable or disable FTS and schedule indexing of data.
Enable or disable the chatbot feature for a company.

Basic
The Basic role has basic non-administrator access in the application.

Content Editor
Content Editor user is responsible for adding and manipulating the data in the system. The user with this role has the authority to create and modify data only through
the workflow process that is created for the user, that is, a workflow on which the user is the performer. This user has limited access to the application. This user does not have
permission to directly operate on the catalog entries.

The user with this role has permission to perform the following tasks.

Access the workflow process to which the user is assigned as a performer.


View items in the catalog.
Create and delete the search.
Perform the FTS search.
Access the Workflow Summary dashboard.

Catalog Manager
Catalog Manager is responsible for adding and manipulating the data in the system directly in the catalog. This user can also add and update hierarchies and categories.
The user might also be responsible for approving content within the collaboration areas.

The user with this role has permission to perform the following tasks.

Access the workflow process to which the user is assigned as a performer.


Create, modify, and delete items in the catalog.
Create and delete the search.
Perform the FTS search.
Access Audit history, Data Sanity, Vendor Summary, and Workflow Summary dashboards.

Category Manager
Category Manager is responsible for hierarchy maintenance and modifications. The user with this role has the authority to add and update hierarchies. The user has
limited access to the catalogs. This user can add, modify, delete, search, and map categories. The categories can be created by using the import jobs and the user also has
the authority for approvals on the collaboration areas.

The user with this role has permission to perform the following tasks.

Access the workflow process to which the user is assigned as a performer.


Create, modify, and delete categories.
Create and delete the search.
Perform the FTS search.
Access to the Workflow Summary dashboard.

Digital Asset Manager (DAM)


Digital asset manager is responsible for managing the digital assets in the system. The user has access only to the Digital Asset Catalog and Hierarchy.

The user with this role has permission to perform the following tasks.

Upload, modify, and delete digital assets.


Link assets to the items.
Search digital assets.
Perform the FTS search for the digital assets.
Access the DAM summary and Audit history dashboards.
Access the Digital Asset Catalog only in the Explorer section.

GDS Supply Editor


Global Data Synchronization (GDS) Supply Editor is responsible for creating products and publishing the products to the data pool for GDS.

Merchandiser manager
Merchandiser manager is responsible for associating the correct assets to the products in the system.

The user with this role has permission to perform the following tasks.



Upload and modify digital assets through a dedicated workflow process.
Link assets to the items through a dedicated workflow process.
Search digital assets and search on any catalog if given special access.
Perform the FTS search.
Access to the DAM Summary dashboard.
Access to the Digital Asset Catalog and other catalogs if given special access in the Explorer section.

Service Account
Service Account is responsible for performing activities that are normally performed by external services, such as connectors (third-party services that use the REST API
interface).

Solution Developer
Solution Developer is responsible for working on the specifications that are shared by the solution architects. This user has access almost like that of the Admin
user.

The user with this role has permission to perform the following tasks.

Access the workflow process to which the user is assigned as a performer.


Upload digital assets.
Link assets to items.
View items in the catalog.
Create, modify, and delete specifications.
Create, modify, and delete rules.
Create and delete the search.
Perform the FTS search.
Access DAM Summary, User Summary, and Vendor Summary dashboards.

Vendor
The user with this role is an external user and these users access the IBM Product Master Application by using the Vendor portal. This role has restricted access to the
application.

The user with this role has permission to perform the following tasks.

Can add or update items only through the collaboration area to which the user has access.
View the user items through search only.

For more information, see Working with the Vendor persona.

Role access
Different roles of the Persona-based UI have different levels of interface access.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Role access
Different roles of the Persona-based UI have different levels of interface access.

The role columns in each of the following tables are, in order: Admin, Basic, Catalog Manager, Category Manager, Content Editor, Digital Asset Manager, Full Admin, GDS Supply Editor, Merchandise Manager, Service Account, Solution Developer, and Vendor.

Table 1. Home page
Home: ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓

Table 2. Data Management feature
Digital Asset Management: ✓ ✓ ✓ ✓ ✓ ✓

Table 3. Explorer feature
Item Explorer: ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓
Category Explorer: ✓ ✓ ✓ ✓ ✓

Table 4. Search feature
Item Search: ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓
Digital Assets: ✓ ✓ ✓ ✓ ✓
Category Search: ✓ ✓ ✓ ✓ ✓
Transactions: ✓

Table 5. Data Model Manager feature
Collaboration Area console: ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓
Spec Console: ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓
Rules Console: ✓ ✓ ✓
Lookup Table Console: ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓
File Explorer: ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓
Job Console: ✓ ✓ ✓ ✓ ✓ ✓ ✓
Workflow console: ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓

Table 6. Dashboards feature
Audit History: ✓ ✓ ✓ ✓
DAM Summary: ✓ ✓ ✓ ✓ ✓
Data Sanity: ✓ ✓ ✓
Jobs Summary: ✓ ✓ ✓ ✓ ✓ ✓ ✓
User Summary: ✓ ✓ ✓
Vendor Summary: ✓ ✓ ✓ ✓
Worklist Summary: ✓ ✓ ✓ ✓ ✓

Table 7. Custom Tools feature
Custom Tools: ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓

Table 8. Global Data Synchronization feature
Manage Items: ✓
Synchronization Reports: ✓

Table 9. Settings feature
Personalization Settings: ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓
FTS Settings: ✓ ✓ ✓
DAM Settings: ✓ ✓ ✓ ✓ ✓ ✓
Change Password: ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓
Chatbot Settings: ✓ ✓ ✓
Home page custom titles: ✓ ✓ ✓
Theme: ✓ ✓
Polling Time: ✓ ✓
Error Panel Settings: ✓ ✓
If you create a custom role through the Data Model Manager > Security > Role Console in the Admin UI:

You can add your role in the mdmce-roles.json file and apply the appropriate default role permissions.
Important: You should not remove the "name": "Settings" content from the file because it is required to log in to the Persona-based UI.
OR
You can create a role and apply appropriate parent role permissions for each user.
Example
You create a custom role, for example, "XYZ_editor", based on the Content Editor role, and for each user, apply the default Content Editor permissions to the XYZ_editor role.
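The following is a minimal sketch of how such a custom role entry might look in the mdmce-roles.json file. The "XYZ_editor" name comes from the example above; the exact schema and permission entries in your installation can differ, so treat the structure here as illustrative only. Per the earlier note, keep the "name": "Settings" entry, because it is required to log in to the Persona-based UI.

{
  "roles": [
    {
      "name": "XYZ_editor",
      "permissions": [
        { "name": "Home" },
        { "name": "Search" },
        { "name": "Settings" }
      ]
    }
  ]
}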

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Navigating the Persona-based UI


The Persona-based UI provides an enhanced user interface, which includes improved navigation experience, search, and collaboration areas on a home page depending
upon your role and privileges.

Home page role access


Table 1. Home page
Subfeature | Admin | Basic | Catalog Manager | Category Manager | Content Editor | Digital Asset Manager | Full Admin | GDS Supply Editor | Merchandise Manager | Service Account | Solution Developer | Vendor
Home | ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓

Free Text Search


Login details
Show notifications
Log out
Personalize
Show more panels
Left pane

Free Text Search


In the upper-right corner of the interface, click Free Text Search to use the Free text search feature. For more information, see Using Free text search (Fix Pack 7 and earlier).

Login details
The upper-right corner of the interface displays the username that you used to log in to the application.

LDAP user
Displays the first name and last name, if specified; otherwise, displays the username.
Non LDAP user
Displays first name and last name (mandatory fields).

Show notifications
On the upper-right corner of the page, click Show notifications to see the following.

Completed jobs
Click to expand and view the list of notifications for successful or failed jobs. You can see notifications for the five most recent jobs. Different types of jobs are displayed, but you can see the schedule status only for the import entries jobs, that is, import or export jobs.
To see the job details, click the <job name> link to open the Job details pop-up window. You can see the job name, the running start and completion time, and the job status.
About IBM Product Master
View the version of the IBM® Product Master.

Log out
In the upper-right corner of the interface, click Log out to log out of the application.

Personalize
You can customize which panels and widgets you want on the Home page of the Persona-based UI. On the upper-right corner of the page, click Personalize. The Personalize home page pop-up window displays the following default titles:

Data model summary


My tasks
Added items or Modified items
Items by attribute-1
Items by attribute-2
Items by attribute-3
Item completeness
Added assets or Modified assets

To change a title, select the title, and specify the custom title name. Click OK to save the updates. The maximum number of characters that are allowed in a title is 50.

In the Personalize home page pop-up window, you can select and display the following panels and widgets.

Panels - You can display the following two panels on your home page.

Data model summary



The data model summary displays a compact view of the number of Items, Catalogs, Hierarchies, Categories, Assets, Workflows, Specs, and Lookups in a company, depending on your role.
My tasks
Displays available collaboration areas as cards. The name of collaboration area is displayed as the card name. The total number of entries (item, or category)
in a collaboration area is displayed to the right of the card name. You can distinguish between a category collaboration card and item collaboration card by
the card icon. The card also displays collaboration steps. Click any step to view details for the collaboration step. Click Show more to load more cards.
A card might have a red flag (high priority) or an amber flag (medium priority) depending upon the timeout status that is specified for any entry in a
collaboration step. If a card has a red flag, you need to take immediate action for all the expiring entries.
Category collaboration card
Click any <workflow step> to open the multi-edit page. Click primary key to directly open the category in single-edit page.
Item collaboration card
Click any <workflow step> to open the multi-edit page. Click Id to directly open the item in single-edit page.
Click the Filter icon to filter the collaboration area cards. In the Filter tasks pop-up window, filter by any of the following, and click Apply:
Task name
View empty tasks
View empty steps
To further sort the task names, click Ascending or Descending.

Key metrics - Specify a catalog in the Filter by catalog field to load the data in the catalog-dependent widgets.
Note: To personalize your home page, you need to have at least one catalog in a company.

Added items or Modified items


Displays new or modified items. You can further filter the data by specifying the time (last 7, 15, or 30 days).
Requires history subscription. For more information, see history_subscriptions.xml file.
Click the <item name> link to open the item in the single-edit page.
Click to view all the items in the multi-edit page.
Items by attribute-<1, 2, or 3> <attribute name>
Displays a bar graph that is populated with the X-axis displaying the item count and Y-axis displaying the items by <attribute value>.
Click the bar graph to open the items in the multi-edit page.
Requires Free Text Search. For more information, see Using Free text search (Fix Pack 7 and earlier).
You can either display attribute name or attribute name and path in the Group by field through the widgetsAttributeDisplayFormat property in the config.json
file.
This widget does not support the following attributes: Relationship, Linked, Password, Rich text, Unique key, Sequence, and Primary key.
Item completeness
Displays a donut graph that is populated with the items grouped according to the completeness percentage range (0 - 25%, 25 - 50%, 50 - 75%, 75 - 99.9%,
and 100%).
Requires completeness configuration for the selected catalog. For more information, see Configuring Item completeness.
Click a section of the graph to open the items in the multi-edit page. You cannot click the 100% section.
Added assets or Modified assets
Displays added or modified assets. You can further filter the data by specifying the time (last 7, 15, or 30 days).
Requires history subscription. For more information, see history_subscriptions.xml file.
Click the <image name> link to open the item in the single-edit page. For more information, see Using Data Management.
Click to view all the items in the multi-edit page.

Show more panels


On the upper-right corner of the page, click Show more panels to see the following.

My quick links
Displays shortcuts to quickly access saved searches, templates, and lists through different tabs.

My notes
Add your notes, with a maximum character limit of 5000. You can configure the value for the maximum character limit through the myNotesMaxCharLimit property in the config.json file.
Note: Known limitation - pressing Enter contributes 11 HTML characters toward the limit; if the limit is exceeded this way, the contents are discarded but no error message is displayed.
My shortcuts
Add a maximum of 10 URLs as your quick access shortcuts. You can configure the value for the maximum number of shortcuts through the myShortCutsMaxCount property in the config.json file.
The Name field can have a maximum of 25 characters.
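The myNotesMaxCharLimit and myShortCutsMaxCount property names come from the descriptions above; the fragment below is an illustrative sketch only, assuming the properties sit at the top level of the config.json file, with the documented defaults as values.

{
  "myNotesMaxCharLimit": 5000,
  "myShortCutsMaxCount": 10
}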

Left pane
Left pane displays quick links to the different pages depending on your role.

Home
Displays Personalize, Show more panels, and Show notifications. For more information, see Personalize, Show more panels, and Show notifications.
Data Management
Access to view all the digital assets and access job execution. For more information, see Using Data Management.
Explorer
Access to see filtered list of entries in a category. You can filter the category by using catalog and hierarchy. For more information, see Using Explorer.
Search
Access to create and run a new search, access a saved search, a saved template, or a saved list. For more information, see Using Search.
Data Model Manager
Access to view Spec, Attribute collection, Rules, Lookup Table, Job consoles, and File Explorer. For more information, see Using Spec console, Using Attribute
collection console, Using Rules console, Using Lookup Table console, Using Job console and Using File Explorer.
Dashboards
Access to view various dashboards. For more information, see Viewing dashboards.



Custom Tools
Access to view custom tools.
For more information, see Enabling customizations.
Settings
Access to view or modify the following settings, depending upon your role:

Personal settings
General preferences
Edit item screen settings
Locale and date/time
Password
For more information, see Customizing Personal settings.
Application settings
Free text search
Digital asset management
Chatbot
Table display settings (rows per page)
Home page widget and panel titles
Themes
For more information, see Customizing Application settings.

Global Data Synchronization


Access to use the GDS feature. For more information, see Navigating Supply Side GDS pages.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Customizing the Persona-based UI features


You can customize your Persona-based UI experience. The settings that you can access depend on your role. As a best practice after you do any type of customization,
clear your browser cache.

Settings page role access


Table 1. Settings feature
Subfeature | Admin | Basic | Catalog Manager | Category Manager | Content Editor | Digital Asset Manager | Full Admin | GDS Supply Editor | Merchandise Manager | Service Account | Solution Developer | Vendor
Personalization Settings | ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓
FTS Settings | ✓ ✓ ✓
DAM Settings | ✓ ✓ ✓ ✓ ✓ ✓
Change Password | ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓
Chatbot Settings | ✓ ✓ ✓
Home page custom titles | ✓ ✓ ✓
Theme | ✓ ✓
Polling Time | ✓ ✓
Error Panel Settings | ✓ ✓
Proceed as follows to access the Settings page.

1. Log in to the Persona-based UI interface.


2. In the left panel, click the Settings icon. The Settings page opens.
The Settings page consists of the following tabs and subsections.
Personal settings
General preferences
Edit item screen settings
Locale and date/time
Password
For more information, see Customizing Personal settings.
Application settings
Themes
Free text search
Digital asset management
Chatbot
Table display settings (rows per page)
Home page widget and panel titles
For more information, see Customizing Application settings.

You can also do the following customizations.

Customizing company name, company logo, or Login page.


Customizing theme.



Customizing company name, company logo, or Login page
You can configure your Persona-based UI workspace to display your company name and company logo, customize the theme, or auto-populate the Login page.
Customizing | Description
Company name | Update the companyName property in the config.json file.
Company logo | Replace the logo.png file in the assets/images folder.
Auto-populating the Company field in the Login page | Update the defaultCompany property in the config.json file.
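A minimal sketch of the corresponding config.json fragment; the property names are from the table above, while the values and top-level placement are illustrative assumptions.

{
  "companyName": "Example Corporation",
  "defaultCompany": "ExampleCo"
}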

Customizing theme
The ability to apply a customized theme is an important feature of the Product Master application. Using themes, you can promote your brand influence.

To apply a custom theme, proceed as follows.

1. Navigate to the following App server folder path:

$WAS_HOME/profiles/<WAS profile>/installedApps/<NodeCell>/mdm_ui.war.ear/mdm_ui.war/css/custom-style
2. Copy and rename the custom.css file as required.
You can customize the following aspects of the UI.
Header
Left panel
Grid
Hyperlinks
Primary call to action (buttons)
Accordion
Tabs
Job panel
Form control focused outline (Form Attribute)
Highlighted border color
Note:
A custom theme cannot be applied to some system colors and icons of the application.
Only the Admin role can change the application theme.
The Theme drop-down list in the Settings page is visible only for an Admin role that has access or permission to the Theme Lookup table.
The following are some important and mandatory CSS theme variables in the custom.css file that accept a color name or hex color code.

--header-background-color: #242424;
--header-text-color: #cfcfcf;
--link-color: #008c2e;
--left-nav-background-color: #323232;
--left-nav-link-color-normal: #f0f0f0;
--left-nav-active-background-color:#000000;
--control-highlight-color: #b3ebb3;
--control-outline-color: #00260c;
--button-primary-text: #ffffff;
--button-secondary-background-color: #3d3d3d;
--button-secondary-hover-background-color: #6f6f6f;
--button-secondary-text: #fff;
--button-secondary-text-hover: #fff;
--grid-row-selected: #cbe8cb;
--tree-node-background-selected: #e2f8db;
--job-panel-background-color: #181f16;
--featured-sections-title: #54af54;
--accordion-background-color: #88cc88;
--accordion-border: #1c6904;
--tab-hover-color: #104400;
--tab-background-color-normal:#14360a;
--tab-textcolor-normal: #ffffff;
--single-edit-item-header-backgroung-color: #f5f5f5;

3. You can choose between any of the following scenarios for the Themes Lookup table.
a. You can import the mdmce-env.zip file to dynamically create the Themes Lookup table with default entries.
b. If the Themes Lookup table exists, you can add an entry directly in the same table.
c. You can manually create a Lookup table with the name Themes.
To manually create the Lookup table, create a spec with the name Theme List with the mandatory attributes Id and Name.
Where,
Id - Same as the custom theme file name specified in Step 2.
Name - Any name that you want to display in the Theme drop-down list on the Settings page.
Note: Do not change the table name, spec name, or attribute names.
4. Save all the files and clear your browser cache to see your changes.

Customizing Personal settings


The Personal settings tab of the Settings page enables you to set general preferences, specify edit item screen settings, locale and date/time settings, and change
password.
Customizing Application settings
The Application settings tab of the Settings page enables you to configure free text search, digital asset management, chatbot, table display settings, home page
widget and panel titles, and themes.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)



Customizing Personal settings
The Personal settings tab of the Settings page enables you to set general preferences, specify edit item screen settings, locale and date/time settings, and change
password.

Proceed as follows to access the Personal settings tab.

1. In the Settings page, click the Personal settings tab. The Personal settings tab contains the following settings:
General preferences
Edit item screen settings
Locale and date/time
Password
Table 1. General preferences
Field | Description
Set landing page as | Select and specify an appropriate system page or dashboard as the default home page.
Generate report maximum entries | Select an appropriate value from the drop-down list to specify the maximum number of entries in a report.
Table 2. Edit item screen settings
Field | Description
Grouping multi-occurring attributes per page | Select an appropriate value from the drop-down list to specify the number of grouping multi-occurring attributes per page.
Relationship visualization diagram tab based on | Specify whether the Relationship visualization diagram tab is based on attribute or catalog.
Table 3. Locale and date/time
Field | Description
Locale for UI | Select the language that you want the user interface to display. The default value is English.
Locale for item and category | Select the language that you want the item and category data to display. The default value is NONE.
Time zone | Select the time zone that you want the user interface to display.
Select input/output format | Select the date and time format from the following list:
dd-MMM-yyyy
yyyy-MM-dd
dd-MMM-yyyy HH:mm
M/d/yy h:mm a (default)
MM-dd-yyyy HH:mm:ss
dd-MMM-yyyy HH:mm:ss
yyyy-MM-dd HH:mm:ss
Specify input/output format | Specify your preference for the date and time format from the following list:
dd/MM/yyyy
MM-yy-dd
yyyy-dd-MM
MMM-dd-yyyy
MMM-dd-yy
yy-MM-dd
yy-MMM-dd
MMMM-dd-yyyy
Locales applicable for specs | Select an available locale for specs.
Table 4. Password
Field | Description
New Password | Specify a new password.
Confirm Password | Reenter the new password to confirm. Click Show password to see the password.

Password criteria
The following are the default password criteria for both the Admin UI and the Persona-based UI.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Password criteria
The following are the default password criteria for both the Admin UI and the Persona-based UI.

General criteria
The password must not contain the username.
The specified new password cannot be the same as a previous password.

Property-specified criteria
These criteria are specified by the password_strength_criteria parameter in the common.properties file.



The length of the password.
The password must contain at least one character each from the following criteria:
Uppercase alphabet character [A–Z] (Or equivalent characters from other supported locales)
Lowercase alphabet characters [a–z] (Or equivalent characters from other supported locales)
Base 10 digits [0–9]
Allowed special characters
:;=?@!#$()*+,-.{}[]~\|^_

The password should not contain white space.

Properties used
Password criteria uses the following properties from the common.properties file.

enable_password_expiry
enable_user_lockout
force_strong_password_for_users
maximum_password_attempts
maximum_password_age_for_users
maximum_password_age_for_vendor
password_strength_criteria
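The following sketch shows how these parameters might appear in the common.properties file. The parameter names are from the list above; the values are placeholders for illustration, not product defaults.

enable_password_expiry=true
enable_user_lockout=true
force_strong_password_for_users=true
maximum_password_attempts=5
maximum_password_age_for_users=90
maximum_password_age_for_vendor=90
password_strength_criteria=8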

Note: A password is considered to be in the English language if it does not contain any non-English alphabet characters.

Related reference
common.properties file parameters

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Customizing Application settings


The Application settings tab of the Settings page enables you to configure free text search, digital asset management, chatbot, table display settings, home page widget
and panel titles, and themes.

Proceed as follows to access the Application settings tab.

1. In the Settings page, click the Application settings tab. The Application settings tab contains the following settings:
Themes
Free text search
Digital asset management
Chatbot
Polling for item status check
Load error panel
Table 1. Themes
Field | Description
Choose Theme | Select an appropriate theme from the drop-down list. Displays only user-created themes along with a default theme.
Table 2. Free text search
Field | Description
Enable Free text search | Select to enable the Free text search feature.
Company Name | Displays the company name.
User ID | Specify the username.
Important: If you are enabling the Free text search feature for a company where Security Assertion Markup Language (SAML) Single Sign-on (SSO) is enabled, enter the credentials of the PIM database user (preferably one who has the Full Admin persona).
Password | Specify the password.
Verify | Click to verify the user credentials that are entered in the User ID and Password fields.
Full indexing schedule | Specify the full indexing schedule. The default value is MM-DD-YYYY HH:MM.
Remember: If the first full indexing schedule is successful, the next schedule deletes all the existing index data.
Table 3. Digital asset management
Field | Description
URL | The URL of the Persona-based UI server.
Credentials
User ID | Enter the credentials for accessing the DAM repository. Select Same as DAM credentials to use existing Digital Assets Management credentials.
Password | Enter the password for accessing the DAM repository. Select Show Password to see the password.
Note: The DAM Settings cannot be enabled by a user already present in the MongoDB repository.
Table 4. Chatbot
Field | Description
Enable chatbot | Select to enable the feature in the lower-right corner of the GUI.
In the Chatbot script field, enter the script (without the "script" tags) that you get by referring to Embedding the web chat on your page.
For more information, see Creating an assistant on the IBM Watson® Assistant site.
Table 5. Polling for item status check
Field | Description
Choose polling time | Select an appropriate polling time interval, in milliseconds, from the drop-down list for checking the item status during checkout and edit.
Table 6. Load error panel
Field | Description
Load error panel | If the value is "true", the error panel loads automatically for any error in the single-edit page.

Related concepts
Using Data Management

Related tasks
Using Free text search (Fix Pack 7 and earlier)

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Enriching an entry
In a single-edit page, depending upon your role, you can edit an item, category, or a digital asset (henceforth referred to as entry).

You can access the single-edit page through Home > {collaboration step} > {multi-edit page} > {entry name} > {single-edit page}.

Adding an entry
Before you begin: Ensure that you have a catalog with entries.

You can create an item, category, or a digital asset (henceforth referred to as entry). You can create an entry to store information about the entities that are managed in
your implementation.

Only the Full Admin, Content Editor, and Category Manager roles can add or edit an entry.
If you have the Category Manager role and access to a collaboration area that has items, or you are an administrator of such a collaboration area, you can edit items too.

To add an entry, proceed as follows:

1. Browse to Home > <Collaboration step> > Add. A new entry is created in the multi-edit page.
2. Select the new entry and click Open.
3. In the single-edit page, define the entry details, and click Save.

In both the single-edit and multi-edit pages,

Click the icon to open the Configure Tabs pop-up window.


To reorder a tab, click Move Up or Move Down arrows or drag a tab in the Configure Tabs pop-up window or in the interface.
To remove a tab, clear the checkbox for the tab.
To hide a tab, click Hide tab from this view icon. You cannot hide a tab that has mandatory fields.

Click to see a list of all the tabs that are available (except hidden tabs) for the selected view.
The active tab is highlighted in bold.
Any modified tab displays an icon.
A tab with any input error displays an error icon after you save your modifications.
Clicking a tab makes it the active tab.
By default, you can see only 10 tabs at a time in the list.
The number of tabs that you can view is configurable through the singleEditMaxTabCount and multiEditMaxTabCount properties in the config.json file (see the sketch after this list). For example, if you have a total of 20 tabs and you click the 10th tab through the tab list, the application loads the 9th to 16th tabs.

You can select a tab either by manually clicking the tab or through Click > Tab > Up/Down arrow key.
You cannot see a tab that is hidden by using the Configure Tabs pop-up window.
For a new or cloned entry, the following tabs are disabled: History or Audit History, Relationship, and Linked.
To see the error details, click the error pane that appears at the end of the single-edit page. You can see the following details in the error pane:
Attribute Name - The name of the attribute that has an input error. Click <attribute_name> to go directly to the attribute.
Error Type - The type of the error.
Messages - Resolve the error by using the displayed guidelines.
Known limitation - If you have the same attribute on different attribute tabs and you enter a wrong value in the attribute field, you get a validation error. If you toggle between such tabs, the validation error disappears.
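The sketch below shows the two tab-count properties in the config.json file, as referenced in the list above. The property names are documented; the values and top-level placement are illustrative assumptions (10 matches the documented default number of visible tabs).

{
  "singleEditMaxTabCount": 10,
  "multiEditMaxTabCount": 10
}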

Table 1. Tasks of the single-edit page

Save | Click to save the edits. By default, the Save option is enabled in the interface. If you enable Reserve, the Save option gets disabled.
Revert | Click to undo the edits.
Refresh | Click to refresh the page.
Reserve* | Click to reserve an available entry for editing. This disables the Save option.
Release* | Click to release a reserved entry.
Add | Click to add an entry.
Clone | Click to clone an entry.
Checkin* | Displayed only on the FIXIT step of a collaboration area.
Categorize | Displayed only if All Re-categorization is selected for a step.
Action | Displayed only if the entry preview script is configured for a catalog.
Approver | Approve
Click to approve and process an entry.
Reject
Click to move the selected checked-out entry to the FIXIT workflow step. FIXIT enables releasing an entry that was checked out to the collaboration area by mistake.
Done
Click to process the entry further.
Search Attributes | To search an attribute, click in the upper-right corner and either enter the attribute name or select it from the drop-down list. Using the attrsearchtype property in the config.json file, you can configure the Search Attributes field to display either the attribute name or the attribute path.
Important: For normal or grouping multi-occurrence attributes, the search works only for the first occurrence of the searched attribute.
Checked-out attributes | If an entry is checked out, an icon is displayed beside the view on the right side of the single-edit page. Click the Checked-out attributes icon to open the Checked-out Attributes pop-up window, which displays a list of all the collaboration areas in which the entry is checked out, along with a list of all the attributes that are checked out in the respective collaboration areas. If an entry is available, a different icon is displayed. Even if an ACG does not have a collaboration list permission, such a user can still see the checked-out attributes.
Note: The Checked-out Attributes pop-up window does not display attributes in any specific order.
Grouping attribute single occurrence | The Delete icon is in the root accordion header, the Plus icon is hidden if the maximum occurrence is reached, and both icons are hidden if the minimum and the maximum occurrence are set to 1.
Multi-occurrence attribute | The Add icon is in the row header; the Clear and Delete icons are added on every occurrence of the multi-occurring attribute and are displayed on hovering over the attribute.
* - An entry checked out by another user to the step can be released, reserved, or checked in, in the following scenarios:

If the Admin (Admin, Full admin, or Solution developer) user is a performer on the step,
If the Admin user is an Administrator on a collaboration area,
If any user is an Administrator on a collaboration area.

Tabs for an entry


The details of an entry (item, category, or a digital asset) are displayed across the following tabs.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Tabs for an entry


The details of an entry (item, category, or a digital asset) are displayed across the following tabs.

You can access the single-edit page through Home > {collaboration step} > {multi-edit page} > {entry name} > {single-edit page}.

Attributes | Catalog, Collaboration area, and Category

Displays various configured attributes for an entry. The Attributes tab supports multi-line input.

Using the display_flag_attributes_as_dropdown_list property in the common.properties file, you can specify whether the Flag attribute is displayed as a drop-down list or a radio button. By default, the value of the property is True (drop-down list).

For the items of the container name SDP Collaboration Area, click the icon to open the spec details directly from the single-edit page.

The relationship field has an editor that you can use to search entries based on any attribute of the selected catalog. For more information, see Relationship editor.

The linked field has an editor that you can use to search entries based on any attribute of the selected catalog. For more information, see Linked editor.
Audit History | Catalog
You can view the modified changes as a graph, but cannot edit any details.
View details - Subscribe to the history manager by updating the history_subscriptions.xml file in the install_dir/etc/default folder with the following tags:

<subscription object_type="ITEM">
<subscription_levels>
<log event_type="CREATE" history_level="DELTA"/>
<log event_type="UPDATE" history_level="DELTA"/>
<log event_type="DELETE" history_level="DELTA"/>
<log event_type="MAPPING" history_level="DELTA"/>
</subscription_levels>
</subscription>
<subscription object_type="CATEGORY">
<subscription_levels>
<log event_type="CREATE" history_level="DELTA"/>
<log event_type="UPDATE" history_level="DELTA"/>
<log event_type="DELETE" history_level="DELTA"/>
<log event_type="MAPPING" history_level="DELTA"/>
<log event_type="HIERARCHY" history_level="DELTA"/>
</subscription_levels>
</subscription>
View by - Select an appropriate time interval from the View By list. The valid values are:

Last 6 days
Last 6 weeks
Last 6 months
Last year

<attribute name>: History section - Displays a graph that is populated with x-axis displaying the specified time interval and y-axis displaying the count
of attributes updated. Click any count(s) of attributes changed label in the graph to see more details in the History Details section.
Note: Even if you change an attribute several times in the Attributes tab, the graph counts only one instance.
History details section - Displays the details about the update done. To see specific audit history detail, enter a relevant keyword in the Search field.
Click to sort the Name or Applicable To column. In the pop-up window, depending upon your requirement click Ascending or Descending or type a
filter search criteria, and click Apply. You can drag column headers to group the details. To export the Audit History details as a Microsoft Excel
Workbook, click Export to Excel. To copy a cell, row, or entire table, select, and press Ctrl + C.
Field | Description
Date | The date when the attribute was modified.
Container | The name of the container to which the item belongs.
Attribute path | Displays the attribute name/updated field.
Old value | The existing value.
New value | The updated value.
User | The name of the user who updated the attribute.
You can sort the data by using the table headers.
Categories | Catalog and Collaboration area
Displays the list of entries belonging to a category. To filter the category tree,

Select a hierarchy in the Hierarchy list.


Enter an appropriate keyword in the Search Category field, and click Search. To filter out empty categories (categories that have no entries or subcategories), click Filter, and select Hide Unpopulated categories.

Change Review | Collaboration area


Displays attribute differences of the entries. The modified values for a particular attribute can be reverted if needed. You can also refresh an individual entry for an attribute.

Click Show changes only if you want to view only the attribute and category differences of the entries from the collaboration area and the catalog.

Attribute - Displays the following columns:

Date - Displays the date on which attribute was last modified.


Proposed Value - If the value is highlighted in the color:
Green indicates that a new value is added in the collaboration area and the catalog value is blank.
Purple indicates that the existing value of the catalog has changed in the collaboration area (the value in the catalog and the collaboration area is different).
Red indicates that there is no value in the collaboration area, but the catalog has a value.
Step - Displays the step name in which the attribute was modified.
User Name - Displays the name of the user who modified the attribute.
Revert - Click Revert to replace the value in the Proposed Value column with the value in the Original Value column. Click Undo to reset the recent changes before saving.

Category - Displays the mapped category differences of the entries. The section has Original Categories, Proposed Categories, and Revert columns.
Note: The change review data is not applicable for the newly created entries in the collaboration area.
Comments | Collaboration area
Enter comments, if any are applicable to the entry.
Completeness | Category
Displays the details for the Item completeness feature only if the catalog has localized attributes or the item is attached to any channel. For more
details, see Configuring Item completeness.
Digital Assets | Catalog and Collaboration area
Linked Assets - Displays the list of linked assets. To edit, select an asset and click Edit. Enter the required details in the Metadata and Value fields, and
click Add. To delete an existing metadata, select the metadata, and click Delete. Click Delink to unlink the selected asset.

Browse Assets - Displays the Category list. To link an asset to the specified category, select the asset, and click Link.

Search Assets - Displays the Category list. To link an asset to the specified category, select the asset, and click Link.

Upload Local Assets - Enables uploading a local asset to the repository. To upload an asset, click and select the asset from your computer.

Note:

If a digital asset is checked-out (category) or unreserved (collaboration area), the Digital Asset tab opens only in read-only mode.
The Digital Assets attribute is only supported as a normal attribute in the Primary Spec. The attribute does not support the following:
Grouping attributes
Localized attributes
Grouping multi-occurrence attributes

Hierarchy | Category
Displays the list of secondary specs that are mapped to the category.

On the left, the Hierarchy field and Category section displays selected hierarchy as root level category. You can map categories to the selected
hierarchy.
History | Collaboration area
Displays the details about the user-specific edits that are done to the entry. You can view the modified changes, but cannot edit any details.

To see a specific detail, enter a relevant keyword in the Search field. Click to sort the columns. In the pop-up window, depending upon your requirement, click Ascending or Descending, or type a filter search criteria, and click Apply. You can drag column headers to group the details. To export the details as a Microsoft Excel Workbook, click Export to Excel.
the details as a Microsoft Excel Workbook, click Export to Excel.

The following are the various fields.

Date - Displays the date when the entry was modified.


Step - Displays the step in which the entry was modified.
Event - Displays the name of the event in which the entry was modified.
Attribute Path - Displays the attribute name/updated field.
Old Value - The existing value.
New Value - The updated value.
User - Displays the name of the user who updated the entry.
Comments - Displays the comments, if any for the update done.

Linked items | Catalog and Collaboration area


Displays the entries that link to an entry by using link attributes. You can view the following for the linked entry:

Catalog Name
Primary Key
Display Name

Relationships | Catalog and Collaboration area


In the Visualization section, you can view a relationship diagram for the current entry that is generated by using the relationship type attribute.

Using relationshipsDisplayType property in the config.json file, you can toggle the display node between "Catalog Name" or the "Relationship Attribute
Name". If the value of the property is catalog, the page displays Catalog Name, and if it is attribute the page displays Relationship Attribute Name.

In the Details section, you can view the following details for the related entry (read-only view). Click an entry in the Primary key column to open it in the single-edit page.

Catalog name | Relationship attribute name
Primary key | Primary key
Display name | Display name
Catalog name | Relationship attribute name
Specs | Category
Displays list of secondary specs that are mapped to the category.

Select item specs to map from the Item specs section. You can search or click to select an item spec. Click to add the item specs. The Mapping
details pop-up window opens. You can add the item specs across specs, across mapping, or to children.

Select secondary specs to map from the Secondary specs section. You can search or click to select a secondary spec.
Suspect Duplicate Processing | Catalog and Collaboration area
Displays the comparison of a new entry in an operational catalog against the master catalog. The tab is visible only if you have duplicates. For more information, see Performing Suspect Duplicate Processing (SDP).

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Enriching entries (Bulk updates)


In the multi-edit page, depending upon your role, you can edit and batch process entries (item, category, or digital asset).

You can access the multi-edit page through Home > {collaboration step} > {multi-edit page}.

You can see tabs in the multi-edit page only if you assign attributes to the tabs through the Admin UI.

Table 1. Tasks of the multi-edit page

Open | Select entries and click to open the entry details page.
Save | Click to save the edits.
Revert | Click to undo the edits.
Replace | Click to open the Find and Replace pop-up window. If any previous job was run for this particular step of the collaboration area, the Find and Replace pop-up window displays the Schedule status table. The Schedule status table has Job info, Schedule info, Running time, Current status, and Return value details for the previous job.

1. From the Select column list, select the appropriate column for which you want to replace text.
2. In the Replace and By fields, enter the appropriate text, and click Replace to replace the text.
3. Click Refresh in the multi-edit page to reflect the updates.

Important:

As a best practice, before you perform a bulk replace, first enter and validate the target replace criteria as a filter.
Find and Replace provides an option to bulk update product data and is designed to work with requests for large volumes of data updates. To avoid any impact on the user experience, a background job performs the replace operation.

Refresh | Click to refresh the page.


Reserve* | Click to reserve an available entry for editing. This disables the Save option.
Release* | Click to release reserved entries.
Add | Click to add an entry.
Clone | Click to clone an entry.
Import | Click to import the updated Microsoft Excel workbook. For more information, see Export and Import feature (collaboration area).
Export | Click to export all the entries to a Microsoft Excel workbook. For more information, see Export and Import feature (collaboration area).
Categorize | Displayed only if All Re-categorization is selected for a step.
Action | Displayed only if the custom tools or tabs are configured for a catalog or a collaboration area. Click to get redirected to the following pages.

Custom tools
Single-edit page for catalog or collaboration area
Multi-edit page for catalog or collaboration area
Custom tabs
Single-edit page for catalog or collaboration area

Approver | Approve
Click to approve and process an entry.
Reject
Click to move the selected checked-out entry to the FIXIT workflow step. FIXIT enables releasing an entry that was checked out to the collaboration area by mistake.
Done*
Click to process the entry further.

Error | In the multi-edit page, if you manually enter invalid data in an editable field or select an invalid value in a list and try to save the updates, you get data validation errors. If more than four data validation errors are generated, the remaining errors are displayed in a scrolling format.
Sort entries | You can sort the list of entries based on the time of creation in ascending or descending order. You can specify the number of entries that can be displayed on a page by using the page size list.
Pagination | From the right, you can select an appropriate pagination size (50, 100, 200, or 500) to specify the number of search results to be displayed. By default, the pagination size is 50.
Click to open the Settings pop-up window.

To remove a column, clear the checkbox for the column.


To select all the columns, click Select All.

Table 2. Important columns of the multi-edit page

ID column | Click to directly open an entry in the single-edit page. This is a static column.
Status column | Displays three types of statuses. You can hover the mouse pointer over the icon in the Status column to see the details. This is a static column.
- A new entry, which you need to edit and save.
- A newly added entry from the collaboration area.
- Entry saved successfully without any errors.
- Entry saved with validation errors. You can save the invalid values in the collaboration area if the value of the save_as_draft_enabled property is true in the $TOP/etc/default/common.properties file.
Important: Invalid values for Lookup and Linked attributes might lead to data corruption.
When you click the error icon, all the cell values that have errors get highlighted in red.
Details of the errors are displayed in the error panel, which is in the lower left of the window.
Availability column | Displays three types of statuses. You can hover the mouse pointer over the icon in the Availability column to see the details. This is a static column.
- Entry is available; click Reserve for further processing.
- Entry is reserved by you; click Release to make the entry available.
- Entry is reserved by someone else and will be available only after it is released.
Reserved by column | You can display the Reserved by column by setting the value of the showReservedByUser property in the config.json file to true. Displays two types of statuses.
None - The entry is available and can be reserved for further processing if you click Reserve.
<username> - The entry is reserved and becomes available only after <username> clicks Release. You can sort and filter values through this column.
Binary | Click to directly upload a binary file from your computer. After upload, you can click to manage the uploaded binary file (download or delete). You can drag and change the sequence of uploaded binary files.
Image, Image url | Click to directly upload an image file from your computer. After upload, you can click to manage the uploaded image (download or delete). You can drag and change the sequence of uploaded images or image URLs.
Thumbnail, Thumbnail url | Click to directly upload a thumbnail file from your computer. After upload, you can click to manage the uploaded thumbnail (download or delete). You can drag and change the sequence of uploaded thumbnails or thumbnail URLs.
Relationship column | The Relationship column has an editor that you can use to search entries based on any attribute of the selected catalog (except the Display attribute). For more information, see Relationship editor.
Linked column | The Linked column has an editor that you can use to search entries based on any attribute of the selected catalog (except the Destination attribute). For more information, see Linked editor.
Multi-occurrence attributes | The column that contains multi-occurrence attributes (Relationship, Linked, Images, and so on) is signified by an icon. You can click an appropriate cell to open a pop-up window that enables you to add, delete, or update such attributes directly from the multi-edit page. The pop-up window also displays the count of the number of occurrences added for an attribute.
For multi-occurrence attributes under a group node, to add the first occurrence, expand and save the grouping node from the single-edit page. Only then can you edit the multi-occurrence attribute from the multi-edit page.
The multi-edit page for the Content Editor role supports Currency and Date formats.

* - An entry checked out by another user to the step can be released, reserved, or checked in, in the following scenarios:

If the Admin (Admin, Full admin, or Solution developer) user is a performer on the step,
If the Admin user is an Administrator on a collaboration area,
If any user is an Administrator on a collaboration area.

Export and Import feature (collaboration area)


The Export and Import feature allows you to perform bulk data enrichment of items or categories (collectively called "entries") in a collaboration area by using Microsoft Excel.
Lookup table and relationship attributes
In the Product Master, you can select the lookup table and the relationship type attributes. More facets are available to control the display of these types of
attributes.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Export and Import feature (collaboration area)


The Export and Import feature allows you to perform bulk data enrichment of items or categories (collectively called "entries") in a collaboration area by using Microsoft Excel.

About this task


You can export selected entries and columns from a collaboration area step, edit the generated Microsoft Excel file, and then import the updated Microsoft Excel file.

You can edit the following:


Change the values of the attributes
Add attribute occurrences
Add entries
Attributes can be of any attribute type.
The format for the category name is "Hierarchy Name/Category Primary key" (the full category path, without any spaces), and the exported Microsoft Excel file contains the full category path.
For secondary spec attribute values, the item must be mapped to any categories that are required for the attribute path to be valid.
If you create a new entry (by specifying a primary key that does not exist) and specify values for the secondary spec attributes, you must ensure that the categories that are associated with those secondary specs are given in the normally hidden columns to the left of the primary key column.
If an exported item is mapped to one or more categories, the paths of the categories are exported to cells in the item's row immediately to the left of the primary key attribute column.
The following are some important scenarios for this feature.
Table 1. List of Export and Import scenarios (collaboration area)
Scenario | Result
Value of the save_as_draft property is set to "true" in the common.properties file | Saves invalid values in the collaboration area. The errors show up only when you attempt to move the entry to the next step.
Importing a reserved entry | Entry is imported successfully.
Importing an entry that exists in the source container, but does not exist in the step | No new entry is created. An error is logged.
Invalid attribute value | Import job is successful. Displays an error when an entry is saved. An error is logged.
Updating a read-only attribute | Existing value is not changed. An error is logged.
Invalid category path | Import job is successful. Appropriate warning message is logged.
Importing an existing primary key | Import job is successful. Entry is updated successfully.
Importing updates to existing items, but all the items are reserved by another user | Import job is successful. Appropriate message is logged.
Importing a new primary key value | New item is created in the collaboration area step.
Empty collaboration step | Exported file contains a blank template with all the attributes selected.
Adding new attributes, removing occurrences, deleting entries, deleting attributes | Import job is successful. Adding new attributes - an error message is logged. Removing occurrences, deleting entries, and deleting attributes - not supported.
Removing existing category mapping | Import job is successful. Removing category mapping is not supported.
Invalid file name | Import job is successful. An error is logged with a correct name format suggestion for the worksheet name.
Click more in the upper-right corner of the interface to see the detailed report for an import job.

Procedure
1. Browse to Home > <collaboration_step>.
2. In the multi-edit page, click Export.
3. Double-click to open the exported data in Microsoft Excel.
The exported Microsoft Excel file contains three tabs. The first tab has brief usage instructions, the second tab has attribute values, and the third tab has ranges of cells that define valid values for enumeration and lookup attributes.
For a required attribute, the header rows highlight the attribute path and name in red.
The first column contains the primary key attribute.
The category columns are hidden, by default.
Attribute type | Export details
Grouping and multi-occurrence | Each occurrence is exported to a separate column. To add an occurrence, insert a column immediately to the right of the highest existing occurrence, copy the two header cells to the column, and increase the occurrence number in the attribute path in the header by 1. Ensure that there are no gaps in the sequence of occurrence numbers. You cannot change the value of an existing occurrence to null. If no value is specified in the new column, then the occurrences are not created for such attributes. Multi-occurrence sequence and multi-occurrence relationship attributes are ignored during import.
Date and Date/Time | For import, specify the value in the MM/DD/YYYY HH:MM AM/PM format for both the attributes.
Linked and Lookup | The value is the primary key of the link target or the Lookup table row. Exported only as a primary key. You can specify a new value from the drop-down list in the column. Any invalid values in the Lookup and Linked attribute columns might lead to data corruption.
Relationship | The value is container_name>>primary_key, where container_name is the name of the target's container and primary_key is the primary key of the target.
Currency | Does not contain any currency symbols.
Binary, Image, Thumbnail | On import, only the string value is replaced, and any associated uploaded file is not changed.
Sequence, String enumeration | On import, values of Sequence type for attributes other than the primary key are ignored. For import of primary keys of the Sequence type: if the value matches that of an existing entry, the feature associates the values in the row with the existing entry; if the value is -1, a new entry is created with the next key in sequence and the values given in the row; if the value is null or does not match that of an existing entry, the row is ignored. For a String enumeration, you can specify a new value from the drop-down list in the column.
Password | For export, the password value is exported as "********" (excluding the double quotation marks). For import, any value other than "********" replaces the existing value.
4. Update the data as required and save the updated Microsoft Excel file.
5. To bulk enrich the entries, click Import, and select the saved Microsoft Excel file to trigger an import job. Click more in the upper-right corner of the interface to check the import job status.
6. Click Refresh to reflect the updates in the multi-edit page.

Results
After a successful import job completes, click more in the upper-right corner of the interface to access the Completed jobs and see the details. To see the summary status, click the Download Report link. For any technical issues, check the $TOP/logs/scheduler_<hostname>/ipm.log file.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Lookup table and relationship attributes


In Product Master, you can select the lookup table and the relationship type attributes. More facets are available to control the display of these types of attributes.

Lookup table type attributes


When defining any of the following specs, the lookup table value display format and lookup table value display attribute are available:

File Spec
Primary Spec
Lookup Spec
Secondary Spec
Sub-Spec
Script Input Spec

Note: The lookup table value display format and lookup table value display attributes are not available for the Destination Spec.



Lookup table value display format - This attribute type controls the display format of a lookup table type attribute. The following formatting options are available, and you can pick one of them. By default, lookup table type attribute values are displayed by using the Primary Key format.
Primary Key
Display Attribute
Primary Key > Display Attribute
Lookup Table Name > Primary Key
Lookup Table Name > Display Attribute
Lookup Table Name > Primary Key > Display Attribute
Lookup table value display attribute - This attribute enables you to pick any attribute of the selected lookup table as the display attribute. A list of all the attributes of the selected lookup table is displayed on the spec screen. You can select an attribute as the display attribute from this list. By default, the first attribute in the list is selected when the facet is added. The attribute can be the primary key of the lookup table as well. The lookup table attribute can be indexed or non-indexed.
The default Lookup table attribute view is a pop-up window. Using the lookupTableProperties section in the config.json file in the $TOP/MDMUI/dynamic/mdmui/ folder, you can change the Lookup table attribute view to a drop-down list.
Important: To apply the updates to the config.json file, run the updateRtProperties.sh file in the $TOP/mdmui/bin folder.
editorMode - Specifies whether the lookup-table editor is displayed as a drop-down or a dialog. The possible values are 'dropdown' or 'dialog'.
minTypeAheadLength - Specifies the minimum number of characters that you have to type before the lookup-table editor starts displaying suggestions in the drop-down mode.
maxItemsCount - Specifies the maximum number of lookup-table entries to fetch from the server in a single server call.
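A sketch of how the lookupTableProperties section might look in the config.json file; the key names are from the list above, while the nesting and values are illustrative assumptions.

{
  "lookupTableProperties": {
    "editorMode": "dropdown",
    "minTypeAheadLength": 3,
    "maxItemsCount": 50
  }
}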
The Lookup table attribute search is based on the value of the Display Attribute or Primary Key. By default, the search preference is Display Attribute if Lookup Table Value Display Attribute is not set in the Admin UI. The Lookup table drop-down list displays the formatted value depending on the Lookup Table Value Display Format in the Admin UI. By default, if no format is specified, the Primary Key is displayed. The Lookup table drop-down list searches based on the Display Attribute only.
Note: For the Admin UI, by default the search is on both the primary key and the display attribute (if set).

Relationship type attributes


When defining any of the following specs, the relationship value display format attribute is available:

File Spec
Primary Spec
Lookup Spec
Secondary Spec
Sub-Spec
Script Input Spec

The relationship value display format attribute enables you to select the display format of the relationship attribute values. This optional attribute is available only for relationship type attributes. After an import, the value of the relationship attribute is displayed only for items. The following formatting options are available, and you can pick one of them. By default, relationship attribute values are displayed by using the Catalog Name > Primary Key > Display Attribute format.

Primary Key
Display Attribute
Primary Key > Display Attribute
Catalog Name > Primary Key
Catalog Name > Display Attribute
Catalog Name > Primary Key > Display Attribute
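For illustration, assuming a related item in a catalog named Product Offerings with primary key 3912 and display attribute value Nokia 7205 (names borrowed from the sample data later in this document), the formats would render approximately as follows; the exact separator rendering can differ:

Primary Key: 3912
Primary Key > Display Attribute: 3912 > Nokia 7205
Catalog Name > Primary Key > Display Attribute: Product Offerings > 3912 > Nokia 7205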

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Using Data Management


You can view digital media using the Data Management icon from the left pane.

Data Management page role access


Table 1. Data Management feature (roles: Admin, Basic, Catalog Manager, Category Manager, Content Editor, Digital Asset Manager, Full Admin, GDS Supply Editor, Merchandise Manager, Service Account, Solution Developer, Vendor)
Digital Asset Management ✓ ✓ ✓ ✓ ✓ ✓
Digital Asset Management (DAM) is a process that you can use to upload, organize, store, and retrieve digital media according to organizational needs. The digital
media is referred to as assets in the Persona-based UI. DAM provides access to the assets anytime and anywhere, which ensures a faster turnaround time, consistency,
and control over the assets.

The DAM Persona provides the following features:

Uploading digital assets
Linking assets and items
Transforming uploaded digital assets
Generating renditions of the uploaded assets
Migrating digital assets

Assets can be uploaded from either an FTP location or from your computer. You can opt for bulk upload and upload multiple assets. An asset can be linked to or unlinked
from multiple items, which eliminates duplication.



A DAM user has the Digital Assets Manager role. A user with the Merchandise Manager role can link and unlink the assets to the items and perform various transformations on
the assets. The All assets tab lists all the assets that are uploaded to the Digital Assets Manager, while the Uncategorized assets tab lists all the assets present under the
unassigned category.

You can select an asset and perform the following tasks in the Digital Assets Management page.
Icon Description
Open Click to open the asset details.
Refresh Depending upon the task, click to see recently added assets (when you are uploading assets or generating renditions) or to revert the changes that are made in the
Assets Details page.
Note: This icon is visible only in the Digital Asset Management (DAM) page.
Download Click to download the asset to your computer in a .zip file format. The .zip file name has a YYYYMMDDHHMMSS format.
Note: When you download an asset with renditions, the renditions also get downloaded.
Rendition Click to generate rendition of the asset.
Metadata Click to open the Metadata tab for the selected digital asset. Enter the required details in the Metadata and Value fields, and click Add.
Note: You can delete existing metadata using Open.
Categorize Click to open the Category tab for the selected digital asset.

Left pane displays categories for the DAM catalog and DAM hierarchy in a tree format.
You can map only a single category to a digital asset.
You can categorize multiple assets from the Digital Asset Management (DAM) page.
Note: Mapping multiple assets overwrites any existing category mapping.

Delete Click to delete the asset. In the Delete Confirmation pop-up window, click Yes.
Note: Deleting an asset also deletes all the asset renditions.
Before you can perform the following tasks, you need to enable the DAM feature from the Settings page of the Persona-based UI. For more information, see Digital asset
management settings.

Uploading assets
You can upload digital assets through the Digital Assets Management page.
Linking assets and items
Using a linkage file, you can re-categorize existing assets or map assets to items. If a category does not exist during re-categorization, the category is
automatically created. The format of a linkage file is CSV.
Editing assets
Makes the job of the digital asset manager easier by providing the ability to edit image assets by using features such as transform (flip, rotate, zoom in or zoom out, crop,
and resize), filter (original, black and white, saturation, and brightness), watermark, asset rename, and asset type change. The feature ensures that digital asset
managers can finish their work effectively without the need to switch between different applications for editing the assets.
Generating renditions
You can generate renditions (variations) of different resolutions of an asset image for different channels.
Migrating assets
Using the import and export feature of IBM® Product Master, you can export items from the Digital Asset Catalog of a source company and import them into the
destination company.
Disabling DAM
If needed, you can disable DAM.

Related concepts
Setting for DAM in the UI

Related tasks
Installing MongoDB

Related reference
damConfig.properties file parameters
dam.properties file parameters
DAM properties - env_settings.ini file

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Uploading assets
You can upload digital assets through the Digital Assets Management page.

Procedure
To upload digital assets:

1. From the left navigation pane, click Data Management. The Digital Assets Management page opens.
2. In the Digital Assets Management page, select Assets on the upper-right corner of the interface.
3. To upload assets from FTP, click FTP. The Upload Assets From FTP page opens.



4. In the FTP Server Configuration section, specify the following fields.
URL - The URL to access the FTP server.
Port - The port number of the FTP server.
User ID - The username with upload permissions on the FTP server.
Password - The password for accessing the FTP server.
5. Optional: To upload assets from your workstation, click Local folder. Browse to the folder on your local computer, and select the assets. In the Upload Assets From
Local page, select Automate linking with items along with upload to specify the linkage file (CSV format), and click Upload.
Note: When the assets are uploaded, they are copied to the /blobstore directory that is configured in the $TOP/etc/default/config/dam/dam.properties file.
MongoDB stores the image metadata and version information.
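A minimal sketch of the corresponding dam.properties entry; the key name blobstore_path is hypothetical and shown only for illustration (check your dam.properties file for the actual key):

# hypothetical key name; the value is the directory to which uploaded assets are copied
blobstore_path=/blobstore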

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Linking assets and items


Using a linkage file, you can re-categorize existing assets or map assets to items. If a category does not exist during re-categorization, the category is automatically
created. The format of a linkage file is CSV.

Before you begin


To use the linkage feature, you must have the following:

The following permissions on the ACG that is mapped to the related catalog:


Catalog View
Catalog List
Catalog Modify Items
Catalog Attributes

The related item must have one multi-occurrence attribute (the value of the Max Occurrence property set to more than 1) with the name "Digital Assets" of type
Relationship in the primary spec of the catalog. The Digital Assets attribute stores the digital assets that are related to the item.
The linkage file must have the following fixed header as the first line:

ASSET_NAME,CATEGORY_FULL_PATH,CATALOG_NAME,ITEM_PK

Example:

/32_727nokia_6600.jpeg,/mobiles/nokia,Product Catalog,nokiaMobile
/32_727nokia_1700.jpeg,/mobiles/nokia,Product Catalog,nokiaMobile

Column Description
ASSET_NAME The absolute path of the asset.
Example

Asset present in Unassigned category


/Nokia_1100.jpeg

Asset present in Mobiles category


/Mobiles/Nokia_100.jpeg

Asset upload plus linkage - For new assets, the path should be /AssetName.
/Nokia_1200.jpeg

CATEGORY_FULL_PATH The destination category for the assets.


Example

/Mobiles
/Mobiles/Samsung

CATALOG_NAME The name of the catalog that contains the item to which the asset is to be linked.
ITEM_PK The primary key of the item to which the asset is to be linked.
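Putting the columns together, a hypothetical complete linkage file (the fixed header from above plus rows that combine the column examples) might look like the following:

ASSET_NAME,CATEGORY_FULL_PATH,CATALOG_NAME,ITEM_PK
/Nokia_1100.jpeg,/Mobiles,Product Catalog,nokiaMobile
/Mobiles/Nokia_100.jpeg,/Mobiles/Samsung,Product Catalog,nokiaMobile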

About this task


A linkage file maps an asset to an item. For the Related From column to be populated, the linkage file must correctly specify the following:

Asset path (ASSET_NAME)
Destination category (CATEGORY_FULL_PATH)
Catalog (CATALOG_NAME)
Item primary key (ITEM_PK)

Procedure
To upload a linkage file:

1. From the left navigation pane, click Data Management. The Digital Assets Management page opens.
2. In the Digital Assets Management page, select Linkage file on the upper-right corner of the interface.
Important: If there are no assets, the Linkage file option is unavailable.
3. To upload assets from FTP, click FTP. The Upload assets from FTP page opens.



4. In the FTP server configuration section, specify the following fields.
URL - The URL to access the FTP server.
Port - The port number of the FTP server.
User ID - The username with upload permissions on the FTP server.
Password - The password for accessing the FTP server.
5. To upload assets from your workstation, click Local folder. Browse to the folder on your local computer, and select the assets. In the Upload assets from Local page,
select Automate linking with items along with upload to specify the linkage file (CSV format), and click Upload.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Editing assets
Makes the job of the digital asset manager easier by providing the ability to edit image assets by using features such as transform (flip, rotate, zoom in or zoom out, crop and
resize), filter (original, black and white, saturation, and brightness), watermark, asset rename, and asset type change. The feature ensures that digital asset managers
can finish their work effectively without the need to switch between different applications for editing the assets.

About this task


Asset (image) editing has the following features that can help you give a new look to your image:

Transform: Horizontal flip, Vertical flip, Left rotate, Right rotate, Zoom in and zoom out, Free crop, Crop reset, Resize
Filter: Original, Black and White, Saturation, Brightness
Watermark
Asset rename
Asset type change
Undo

Procedure
To edit an asset:

1. From the left navigation pane, click Data Management. The Digital Assets Management page opens.
2. Select assets that you want to edit, and click Open. The Asset Details page opens.
3. In the Asset Edit page, you can perform following actions,
Feature Description
Transform You can perform image transforms without destroying the original image.

Flip Vertical 90
Flips the original image into a mirror view vertically by 90 degrees.
Flip Horizontal 90
Flips the original image into a mirror view horizontally by 90 degrees.
Rotate Left 45
Rotates the original image left by 45 degrees.
Rotate Right 45
Rotates the original image right by 45 degrees.

Crop Using this asset editing feature, an asset can be cropped as required, including a free-size crop. Click Custom crop box to apply a crop to the image; the crop
box is adjustable in size. Click Apply to crop the asset, or click Reset Default to undo the changes. An asset can be resized from the Resize Dimension in
Pixel (px) fields; the default value is the actual asset dimension, and the fields are editable. Modify the values and click Apply to apply the new dimensions
to the asset.
Filter Asset filter applies colors to the image with the following variations; click Apply when done:

Black/White
Converts the asset to the black-and-white scale according to the RHS color chart.
Saturation
Saturates an asset. Adjust the slider on the thumbnail to control the saturation.
Brightness
Increases or decreases the brightness. Adjust the slider on the thumbnail to control the brightness.

Note: Microsoft Internet Explorer has a CSS filter limitation due to which the filter thumbnails do not appear correctly.



Feature Description
Watermark Specify the following values, and click Apply:

Type Text
Enter the text that you want to use as a watermark.
Text Position
Click and specify the watermark text position on the image.
Font
Click and specify the font type for the watermark text.
Size
Click and specify the font size for the watermark text.
Color
Click and specify the font color for the watermark text.

Asset Rename In the Type Text field, enter the new asset name.
Change File Type In the Select File Type list, click and select the new asset file type. You can convert the asset to the following types: PNG, GIF, JPG, JPEG.
The Zoom feature enables you to examine the asset closely by using the Zoom In and Zoom Out controls, which are provided on the right of the asset editing screen.
You can configure the zoom in and zoom out values through the config.json file by using the following properties:
{"ImageDefaultZoom": "200", "ImageMaxZoom": "400", "ImageMinZoom": "25"}

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Generating renditions
You can generate renditions (variations) of different resolutions of an asset image for different channels.

Before you begin


Add the channel name and resolution (%) in the DAM Renditions Configuration Lookup table.
Note: Even if multiple channels have the same resolution, only a single rendition is created.
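For example, hypothetical channel names and resolutions in the lookup table (the two columns follow the description above; the values are placeholders):

Channel     Resolution (%)
Web         50
Thumbnail   25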

About this task


You can add a rendition for an image type asset (JPG, JPEG, PNG, GIF, or BMP) only.

Procedure
To add a rendition:

1. From the left navigation pane, click Data Management. The Digital Assets Management page opens.
2. Select an asset and click Open.
3. In the asset details page, click Renditions.
Note: The Renditions button is disabled if the asset is not an image type or if the asset was itself created by using the Renditions button.
4. If a rendition exists, click Yes in the Renditions Confirmation window. If a rendition does not exist, a job is scheduled to generate the rendition.
5. Click Show notifications in the upper-right of the interface to see the job status. After the job is successful, click Refresh.

Results
Click the Renditions tab to see the renditions of the asset. Each rendition displays the name of the rendition and the name of the channel.

Click a rendition to see the enlarged image of the rendition. The enlarged rendition displays the name of the rendition, the name of the channel, the size, the date
created, and the image type.
Note: If you do not see the rendition, reschedule the job.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Migrating assets
Using the import and export feature of IBM® Product Master, you can export items from the Digital Asset Catalog of a source company and import them into the destination
company.



Before you begin
Download the tool for upgrading Jackrabbit repositories to Oak from the following site.
Oak upgrade
Note: You must delete the Oak upgrade JAR file after completing the asset migration.
If the source and destination company are on different servers, you need to manually copy the blobstore folder that is located in the $TOP/mdmui folder from the
source to the destination server.
Important: Applicable only if asset migration is performed on a destination server that has no uploaded assets, to avoid data loss or corruption.
You need to manually copy the thumbnail images that are located in the $TOP/public_html/suppliers/<companyname>/ctg_files folder from the source to the
destination server.

Procedure
To migrate assets:

1. Add the Oak upgrade JAR file to the $TOP/mdmui/libs/dam folder of the destination server.
2. Run the following command on the destination server to copy the binary references from the source to the destination server.

java -jar oak-upgrade-*.jar [options] source destination

Where source and destination each use the following form:

mongodb://[username:password@]host1[:port1][/[database][?options]]

Example
Migration from one server to another server for the same company (run from the $TOP/mdmui/libs/dam folder):

java -jar oak-upgrade-1.6.1.jar mongodb://[username:password@]hostname1:port1/[Company name]?authSource=admin mongodb://[username:password@]hostname2:port2/[Company name]?authSource=admin

Migration from one company to another company within the same server:

java -jar oak-upgrade-1.6.1.jar mongodb://[username:password@]hostname:port/[Company name1]?authSource=admin mongodb://[username:password@]hostname:port/[Company name2]?authSource=admin
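For instance, with hypothetical hosts and credentials substituted into the documented syntax (all values below are placeholders, not defaults):

java -jar oak-upgrade-1.6.1.jar "mongodb://damuser:secret@source-host:27017/MyCompany?authSource=admin" "mongodb://damuser:secret@target-host:27017/MyCompany?authSource=admin"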

What to do next
In the destination server, log in as a DAM user, and save the DAM user settings. You can see the imported digital assets.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Disabling DAM
If needed, you can disable DAM.

Procedure
1. Remove all the DAM JAR entries from the jars-persona-internal.txt or jars-custom.txt file that is located in the
$TOP/bin/conf/classpath/ folder.
2. Run the following command to update the Product Master class path properties:

$TOP/bin/updateRtClasspath.sh

3. Restart Product Master by using the following commands:

$TOP/bin/go/stop_local.sh

$TOP/bin/go/start_local.sh

4. Update the [dam] section in the env_settings.ini file that is located in the $TOP/bin/conf/ folder by setting the enable property to no, as shown in the sketch that follows.
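A minimal sketch of the resulting section, assuming no other [dam] properties are customized:

[dam]
enable=no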

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Using Explorer
Using the Explorer icon from the left pane, you can view catalogs or hierarchies.



Explorer feature role mapping
Table 1. Explorer feature (roles: Admin, Basic, Catalog Manager, Category Manager, Content Editor, Digital Asset Manager, Full Admin, GDS Supply Editor, Merchandise Manager, Service Account, Solution Developer, Vendor)
Item Explorer ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓
Category Explorer ✓ ✓ ✓ ✓ ✓
You can filter the category further by specifying catalog and hierarchy. If you do not specify a catalog, the Explorer then shows all the hierarchy-related content.

1. In the left pane, click Explorer icon. The Explorer page opens.
2. Select a catalog from the Catalog list. You can type ahead the name to filter the list. By default, the selected catalog is None.
3. Depending on the selected catalog, select an available hierarchy from the Hierarchy list. You can type ahead the name to filter the list. By default, the selected
hierarchy is None. If the selected catalog is None, the Hierarchy list displays all the available hierarchies. If you hover over the category name, you only see the
category count.
4. On the left, the Category section displays selected hierarchy as root level category.

Number of subcategories and items
To see the number of subcategories and items, hover over a category name. The category name displays the details in the <number of subcategories> | <number of items> format.
Filter
To filter the category tree, enter an appropriate keyword in the Search Category field, and click Search.
To filter out the empty categories (no items or subcategories), click Filter, and select Hide unpopulated categories.
Resize
To resize the category pane, drag the slider to see long category names.
You can also resize the items pane on the right to full width to see items.
The minimum width of the right pane is set to 600 pixels, and the minimum width of the left pane is set to 0 pixels.
Views
Select a view from the Views list on the right. Views provide a more efficient, task-specific view of items by grouping attributes that are related to a
specific data entry or data maintenance process.
Entries per page
Specify the number of items that can be displayed on a page by using the page size list.
Status
You can hover the mouse pointer over the icon in the Status column to see the details. The Status column displays the following types of statuses for an item:
icon - The item is checked out to a collaboration area. Click to edit the item in the single-edit page of the collaboration area.
icon - You can check out the item because it is not checked out to any collaboration area.
icon - You cannot check out the item because it is already checked out to a collaboration area.

Table 2. Tasks - Explorer page

Action: Right-click the root category on the left to perform the tasks for a category.
Tasks: Depending on the user permissions, you can perform the following tasks: Add category, Add item, Clone category, Checkout category, Open category, Rename category, and Delete category.
Note: You cannot delete a category that has existing items or subcategories.

Action: Click the <number of subcategories> hyperlink to open the list of entries on the right pane.
Tasks: Depending on the user permissions, you can perform the following tasks for a category: Open, Refresh, Add, Clone, Import, Export, Checkout, Actions, Delete, and Publish.
For more information on the Export and Import feature, see Export and Import feature (catalogs and hierarchies).
The Related Product field has an editor that you can use to search entries based on any attribute of the selected catalog (except the Display attribute). For more information, see Relationship editor.
The Linked Product field has an editor that you can use to search entries based on any attribute of the selected catalog (except the Destination attribute). For more information, see Linked editor.

Action: Click the <number of items> hyperlink to open the list of entries on the right pane.
Tasks: Depending on the user permissions, you can perform the following tasks: Open, Refresh, Add, Clone, Import, Export, Save as list, Generate report, Actions, Delete, and Publish.
If you open any item from the Explorer, for the Merchandiser role, the Digital Assets tab is in the read-only mode; for the Catalog Manager role, you can directly access the single-edit page and check out or delete the item from the catalog.

Action: If the catalog is Digital Asset Catalog, click the <number of assets> hyperlink to open the list of assets on the right pane.
Tasks: Depending on the user permissions, you can perform the following tasks: Open, Refresh, Download, Rendition, Metadata, Categorize, and Delete. For more information, see Using Data Management.

Action: If you select Digital Asset Hierarchy.
Tasks: Right-click the root category on the left, and click Add category to open the Add category pop-up window. Type the name of the new category, and click OK.
Note: If you click Add category for any other hierarchy, you are redirected to the single-edit page.
5. Select an appropriate entry and click Open. Depending on your role, you can see the following tabs for an item: Attributes, Suspect Duplicate Processing, Digital Assets,
Categories, Relationships, Linked items, History, Comments, Change Review, Audit History, Specs, Hierarchy, and Completeness. For more information, see Tabs for
an entry.

Export and Import feature (catalogs and hierarchies)


The Export and Import feature allows you to perform bulk data updates and enrichment of catalogs or hierarchies (collectively called "entries") in the Explorer page by
using Microsoft Excel.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)



Export and Import feature (catalogs and hierarchies)
The Export and Import feature allows you to perform bulk data updates and enrichment of catalogs or hierarchies (collectively called "entries") in the Explorer page by using
Microsoft Excel.

About this task


You can export selected entries and columns, edit the generated Microsoft Excel file, and then import the updated Microsoft Excel file.

You can edit the following:

Change the values of the attributes
Add attribute occurrences
Add entries

Attributes can be of any attribute type.
For items - The columns before the primary key column contain categories, and the exported Microsoft Excel contains the full category path that is mapped to items.
You can add columns in the Microsoft Excel worksheet to map multiple categories.
For categories - The column immediately before the primary key column contains the parent category (only one parent is allowed), and the exported Microsoft Excel
contains the full category path.
For secondary spec attribute values, the item must be mapped to any categories that are required for the attribute path to be valid.
If you create a new entry (by specifying a primary key that does not exist) and specify values for the secondary spec attributes, you must ensure that the categories
that are associated with those secondary specs are given in the normally hidden columns to the left of the primary key column.
If an exported item is mapped to one or more categories, the paths of the categories are exported to cells in the item's row immediately to the left of the primary
key attribute column.
Performance tuning - For more details on the performance tuning best practices, see Performance tuning for the Persona-based UI.
Table 1. List of Export and Import scenarios (catalogs and hierarchies)

Scenario: Invalid Microsoft Excel worksheet file name
Result: Import job is successful. An error is logged with a correct name format suggestion for the worksheet name.

Scenario: Invalid Sequence type primary key value
Result: Import job is successful. An error message is logged (except for -1) and the entry gets ignored.

Scenario: Invalid attribute path
Result: Import job is successful. An error is logged for the incorrect attribute path and the attribute gets ignored.

Scenario: Additional attribute columns
Result: Import job is successful. An error is logged for the incorrect attribute path and the attribute gets ignored.

Scenario: Invalid category path
Result: Import job is successful. A warning message is displayed for the category that has an invalid hierarchy path.

Scenario: No value specified in the mandatory attributes column
Result: Import job is successful. An error is logged. No new entry is created.

Scenario: Invalid number, integer, flag, or seq type attribute values
Result: Import job is successful. An error is logged and the modification or creation of the entry is ignored.

Scenario: Invalid multi-occurrence value
Result: Import job is successful. An error is logged when the maximum allowed occurrence limit is reached.

Scenario: Importing an existing primary key
Result: Import job is successful. The existing entry gets updated.

Scenario: No value specified in the mandatory secondary spec attributes column
Result: Import job is successful. An error is logged and the modification or creation of the entry is ignored.

Scenario: Importing an item with a category mapping, but without re-categorization permission
Result: Import job is successful. An error is logged and the modification or creation of the entry is ignored.

Scenario: Importing an item with secondary spec mapping to a category, but without having the secondary specs mapped
Result: Import job is successful. The item gets imported to the "Unassigned" category.

Scenario: Adding invalid values in the Enum_Ranges worksheet
Result: Invalid values are not displayed in the drop-down list.
Note: Do not tamper with the values in the Enum_Ranges worksheet.

Scenario: Data that fails validation rules
Result: Import job is successful. An error is logged with an appropriate message that indicates the reason for the failure.

Scenario: Invalid attribute name
Result: Import job is successful. An error is logged in the report.out file.
Note: Of the two header rows, import uses the first row to retrieve the attribute name.

Scenario: Providing alphanumeric values for a numeric attribute key
Result: Import job is successful. An error is logged and no new entry is created.
In the upper-right corner of the page, click Show notifications to see the detailed report for an import job.

Procedure
1. In the left pane, click Explorer icon. The Explorer page opens.
2. Select a catalog from the Catalog list. You can type ahead the name to filter the list.



3. Depending on the selected catalog, select an available hierarchy from the Hierarchy list.
4. On the left, the Category section displays selected hierarchy as root level category.
5. Click <number of subcategories> hyperlink to open the list of entries on the right pane.
6. Select all the items and click Export.
7. Double-click to open the exported data in Microsoft Excel.
The exported Microsoft Excel file contains three worksheets. The first worksheet has brief usage instructions, the second worksheet has the attribute values, and the
third worksheet has ranges of cells that define valid values for enumeration and lookup attributes.
For a mandatory attribute, the header rows highlight the attribute path and name in red.
The first column contains the primary key attribute.
The category columns are hidden by default.
Some important scenarios for this feature:

Grouping and multi-occurrence
Each occurrence is exported to a separate column. To add an occurrence, insert a column immediately to the right of the highest existing occurrence, copy the two
header cells to the column, and increase the occurrence number in the attribute path in the header by 1. Ensure that there are no gaps in the sequence of
occurrence numbers.
You cannot change the value of an existing occurrence to null.
If no value is specified in the new column, then the occurrences are not created for such attributes.
Multi-occurrence sequence and multi-occurrence relationship attributes are ignored during import.
Date and Date/Time
For import, specify the value in the MM/DD/YYYY HH:MM AM/PM format for both the attributes (for example, 03/11/2023 05:30 PM).
Linked and Lookup
The value is the primary key of the link target or the Lookup table row, and is exported only as a primary key. You can specify a new value from the
drop-down list in the column. Any invalid values in the Lookup and Linked attribute columns might lead to data corruption.
Relationship
The value is container_name>>primary_key, where container_name is the name of the target's container and primary_key is the primary key of the target
(for example, Product Offerings>>3912).
Currency
Does not contain any currency symbols.
Binary, Image, Thumbnail
On import, only the string value is replaced, and any associated uploaded file is not changed.
Sequence, String enumeration
On import, values of Sequence type for attributes other than the primary key are ignored.
For import of primary keys of the Sequence type:
If the value matches that of an existing entry, the feature associates the values in the row with the existing entry.
If the value is -1, a new entry is created with the next key in sequence and the values given in the row.
If the value is null or does not match that of an existing entry, the row is ignored.
For a String enumeration, you can specify a new value from the drop-down list in the column.
Password
For export, the password value is exported as "********" (excluding the double quotation marks).
For import, any value other than "********" replaces the existing value.
8. Update the data as required and save the updated Microsoft Excel file.
9. To bulk enrich the entries, click Import, and select the saved Microsoft Excel file to trigger an import job.
10. Click Refresh to reflect the updates in the Explorer page.

Results
After a successful import job completes, click more in the upper-right corner of the interface to access Completed jobs and see the details. To see the summary status,
click the Download Report link. For any technical issues, check the $TOP/logs/scheduler_<hostname>/ipm.log file.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Using Search
You need to find the entry (item, collaboration area, or digital asset) so that you can provide attribute values.

From the left pane, click Search to display the Search page. The Search page displays the Products and Digital Assets tabs.

Search page role access


Table 1. Search feature (roles: Admin, Basic, Catalog Manager, Category Manager, Content Editor, Digital Asset Manager, Full Admin, GDS Supply Editor, Merchandise Manager, Service Account, Solution Developer, Vendor)
Item Search ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓
Digital Assets ✓ ✓ ✓ ✓ ✓
Category Search ✓ ✓ ✓ ✓ ✓
Transactions ✓

Products tab
The Products tab displays the following options:



New Search - Displays the search results based on your specified criteria.
Saved Search - Displays the saved searches with pre-defined search specifications.
Saved Template - Displays the saved searches with pre-defined, but editable search specifications.
Saved List - Displays the search results with pre-defined search specifications.

Note: You can directly edit values within fields without the use of editors in the Search page.
The states of all the options in the Products tab are saved across user sessions.

New Search
To create a new search, proceed as follows:

1. Select the required catalog from the Catalog list.


2. In the Select Category field, click List View. The Select category dialog box opens.
3. In the Select category dialog box,
To filter the category tree, enter appropriate keyword in the Search Category field, and click Search.
To filter the empty categories (no items or subcategories), click Filter, and select Hide unpopulated categories.
4. Specify the required category. The Mapped category | count pane gets populated.
5. Click OK to save your selection. The Select Category field in the Search page reflects your selected category.
6. Click Add criterion.
a. Select an appropriate attribute.
b. Specify an operator. You can choose from various operator types. For more information, see Supported operators.
c. To further filter the search results with an appropriate logical operator (And/Or), click Add criterion and specify the details.
Note: The And condition has a higher precedence than the Or condition, which means that the And logical operation is evaluated before an Or
operation; for example, A Or B And C is evaluated as A Or (B And C).
7. Click Search.
Click the Relationship icon to open the Relationship pop-up window. You can use the editor to search entries based on any attribute of the selected catalog. For more
information, see Relationship editor.

Click the Linked icon to open the Linked pop-up window. You can use the editor to search entries based on any attribute of the selected catalog. For more information,
see Linked editor.

Saved Search
To search by using a saved search, from the Saved Searches list, select the appropriate search, and click Search.
To see the list of saved searches, click List View. The Saved Searches | count dialog box opens. The Catalog, Category, and Add criterion fields get auto-populated.
To delete a saved search, click Delete selected saved search.

Saved Template
To search by using a saved template, from the Saved Templates list, select the appropriate template, and click Search.
To see the list of saved templates, click List View. The Saved Templates | count dialog box opens. The Catalog, Select Category, and Add criterion fields get auto-populated.
To delete a saved template, click Delete selected saved template.
Saved List
To save the search specifications, click Save Search. The Save Search dialog box opens. In the Search name field, enter an appropriate name; you can also enter a
description. To save with editable search specifications, click Save Template. The Save Template dialog box opens. In the Template name field, enter an appropriate
name; you can also enter a description. The Search name and Template name fields do not support special characters.
Tasks
You can perform the following tasks on the search results. From the right, select an appropriate pagination size (50, 100, 200, or 500) to specify the number of
search results that are displayed on a page. By default, the pagination size is 50.

Open - Select an item from the search results, and click Open. The single-edit page opens to display the item details in a read-only mode.
Refresh - Click to refresh the search results.
Checkout - Click Checkout > collaboration area> to check out the selected item from the specified collaboration area. The single-edit page opens to display
the item details in an editable mode.
Important: You must select Checkout and Edit checkbox while you are creating a collaboration area to enable the Checkout functionality.
Save as list - Click to open the Save as list pop-up window. Enter an appropriate name in the List name field, and click Apply. You can access this saved list
anytime through the Saved List option on the Search page.
Note: The Saved list icon is not visible on the Search page.
Import - Click to import the updated Microsoft Excel workbook. For more information, see Export and Import feature (collaboration area).
Export - Click to export all the items to a Microsoft Excel workbook. For more information, see Export and Import feature (collaboration area).
Export to Excel - Click to open Export to Excel pop-up window. Select the relevant script from the Select script list, and click Apply. A Microsoft Excel file
format report of the search results gets downloaded to your computer.
Restriction: The product can contain out-of-the-box default scripts. However, the scripts are provided as samples and might not be production-ready.
Customization of such scripts might be required depending on the usage scenario. Customization might involve modifying or optimizing the out-of-the-box script
or writing an entirely new script and selecting the new script when running the particular functionality.
Actions - Click Actions > script to perform a preconfigured entry preview script for a catalog.

Digital Assets tab


Digital Asset Manager

You can search for a digital asset by either the name or the format type. Select Include metadata while search to search for a digital asset by using the associated metadata.

The Search Results table displays the search results based on your specified criteria in a tabular format. You can perform the same functions on a searched digital asset as
you do on the digital asset in the single-edit page. For more information, see Working with the DAM Persona.

Transactions tab
GDS Supply Editor

Enter details for any of the following fields, and click Search.



Message id
Specifies a unique identifier for the transaction message.
Type
Specifies the type of the transaction. A valid value can be Add Item, Add Item Link, Item Unlink, Publish Initial Load, Publish New Item, or Synchronize Changes.
Status
Specifies the status of the transaction. A valid value can be Registered, Registered - GS1, Registered Failed, or Submitted For Registration.
From date
Specify the from date for the transaction.
To date
Specify the to date for the transaction.

The Search Results table displays the search results based on your specified criteria in a tabular format.

Supported operators
This topic describes different types of supported operators in the Search page.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Supported operators
This topic describes different types of supported operators in the Search page.

Supported operators
The following table contains the list of supported operators in the Search page.
Operator Type
Begins with String operator
Contains String operator
Ends with String operator
Equals Relational operator
Empty Generic for multiple operators
Does not begin with String operator
Does not contain String operator
Does not end with String operator
Does not equal Relational operator
Is not empty Generic for multiple operators
In Membership operator
Not in Membership operator
Length less than String length operator
Length greater than String length operator
Matches String operator
Does not match String operator
Begins with match case String operator
Ends with match case String operator
Contains match case String operator
Greater than Relational operator
Greater than or equal to Relational operator
Less than Relational operator
Less than or equal to Relational operator
Is one of Identity operator
Is not one of Identity operator
Does not begin with match case String operator
Does not end with match case String operator
Does not contain with match case String operator
Equals match case String operator
Not equals with match case String operator
Is not greater than Relational operator
Is not greater than or equal to Relational operator
Is not less than Relational operator
Is not less than or equal to Relational operator

Attributes and supported operators mapping


The following table describes the mapping between attributes and supported operators.
Attribute | In (Scope: All) | Not in (Scope: All) | Length greater than | Length less than
Binary ❌ ❌ ❌ ❌

Currency ✓ ✓ ❌ ❌
Date ❌* ❌* ❌ ❌
Flag ✓ ✓ ❌ ❌
Image ❌ ❌ ❌ ❌
Image url ✓ ✓ ❌ ❌
Integer ✓ ✓ ❌ ❌
Lookup ✓ ✓ ❌ ❌
Number ✓ ✓ ❌ ❌
NumberEnum ✓ ✓ ❌ ❌
Password ❌ ❌ ❌ ❌
Relationship ✓ ✓ ❌ ❌
Rich Text ✓ ✓ ❌ ❌
Sequence ❌ ❌ ❌ ❌
String ✓ ✓ ❌ ✓
String Enum ✓ ✓ ❌ ✓
Thumbnail ❌ ❌ ❌ ❌
Thumbnail url ✓ ✓ ❌ ❌
Timezone ❌* ❌* ❌ ❌
URL ✓ ✓ ❌ ❌
* Supported starting with the IBM Product Master 12.0 Fix Pack 9 release.

Known limitations
The following is the list of known limitations.

Membership operators and Length operators:
Support only a single hierarchy.
Support only indexed attributes.
Do not work in combination with each other.
Do not support special characters in the values or category mapping.
Membership operators with scope "ALL" do not work with Membership operators with scope "ANY" or without a scope.
Membership operators with scope "ALL" and Length operators support only the "Contains" operator.
Membership operators support a comma-separated list of values.

String length operators support only String and String enumeration type attributes.
Lookup table and Relationship attributes support the following formats:

Lookup table attribute:
LKPNAME_PK_DISPATTRIB
LKPNAME_PK
Relationship attribute:
CTG_PK_DA
CTG_PK

When you create a search, the "LKPNAME" and "CTG" names are mandatory in the display format; otherwise, the primary key or the display attribute cannot be fetched. The
primary key is unique to its container, but the display attribute is never unique, and hence the search might fail.

The Sequence attribute is not supported in a multi-occurrence format.
Passwords are stored in encrypted form and hence cannot be searched.
You cannot search images, binary, or thumbnail files.
Membership operators do not support Date and Timezone attributes.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Using Free text search (Fix Pack 8 and later)

(Fix Pack 8 and later) You can use the Free text search feature through the Product Master interface.

Before you begin


Important: Starting from IBM® Product Master Fix Pack 8 onwards, Elasticsearch is replaced with OpenSearch because of a change in the licensing strategy (no
longer open source) by Elastic NV. You must move to OpenSearch 2.4.1 or later and run full indexing. For more information, see Installing OpenSearch (Fix
Pack 8 and later).

Enable Free text search feature from the Settings > Application settings page of the Persona-based UI. For more information, see Free Text Search settings.
Ensure that the account that you have configured for the Free text search meets following requirement:
The account should have the Full Admin role.



The Full Admin role should have full access on all the catalogs, which are to be indexed. Access can be given by modifying the ACG associated with the
catalog to select the Full Admin role and grant all permissions on all the catalogs for the Full Admin role.

Procedure
1. Click the Free Text Search icon in the upper-right corner of the interface. The left pane loads the free text search criteria.
2. Select Catalog or Hierarchy.
3. Specify the catalog or hierarchy from the drop-down list.
4. In the Search query field, specify the search keyword.
5. Optional: In the Select category dialog,
a. Optional: Select an appropriate hierarchy from the list, select the appropriate categories, and click OK.
b. Optional: Specify categories within the Search field.
c. Optional: You can also use the type-ahead feature to select categories. In the Select category dialog, enter the first 3 characters of a category name to display the
category.
Note: To enable the feature, set the value of the allowTypeAhead property under the categorySelector property in the config.json file to "true". You can configure
the minimum number of characters that you must enter by changing the value of the minTypeAheadLength property under the categorySelector property in the
config.json file. By default, the value is 3. A sketch of this configuration follows the table at the end of this procedure.
6. Click Search.
a. Optional: Click Save free text search to save and reuse the defined criteria.
b. Optional: To search by using a saved free text search, select an appropriate free text search from the Saved free text search list, and click Load. You can also
view the list of saved searches or delete a saved search.
c. Optional: To map an attribute collection for a catalog, open the Attribute collection dialog, select an appropriate attribute collection, and click OK. You can
also delete a mapped attribute collection.
7. The right pane is populated with free text search results depending on your specified criteria. You can perform following tasks on the results.
Table 1. Tasks of the FTS page
Icon Description
Open Select one or more items, and click to open the Item details page.
Refresh Click to refresh the search results.
Checkout Catalog
Select an item, and click Checkout > <collaboration area> to check out the selected item to the specific collaboration area.
Save as list Catalog
Click to open Save as list pop-up window. Enter an appropriate name in the List name field, and click Apply. You can access this saved list
anytime through the Saved List option on the Search page.
Generate Report Catalog
Click to export search results to a Microsoft Excel format. In the Generate Report pop-up window, select the appropriate script, and click OK.
Note:
Specify the value for the number of entries through the Maximum number of entries to be written to a report on Generate Report in Multi
Edit list in the Admin UI.
The generated report contains the PRIMARY_SPEC details only.
Actions Catalog
Select an item, and click Actions > <action name> to open the <action name> pop-up window.
Note: The Actions icon is only displayed if the entry preview script is configured for the selected catalog or container.
Clear search Click to undo the search results, and reset the search field.
Publish Click to publish specs from the IBM Product Master to the IBM Watson® Knowledge Catalog. For more information, see Publishing specs to the
IBM Watson Knowledge Catalog.
Compact view Click to see the search results displayed as a compact or tabular view.
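As referenced in step 5, a minimal sketch of the type-ahead settings, assuming the categorySelector section nests at the top level of config.json (the surrounding structure of your file can differ):

{
  "categorySelector": {
    "allowTypeAhead": true,
    "minTypeAheadLength": 3
  }
}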

Using OpenSearch to access indexed content (Fix Pack 8 and later)


(Fix Pack 8 and later) You can access indexed content through OpenSearch.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Using OpenSearch to access indexed content (Fix Pack 8 and later)

(Fix Pack 8 and later) You can access indexed content through OpenSearch.

Before you begin


Ensure that you have installed and configured OpenSearch. For more information, see Installing OpenSearch (Fix Pack 8 and later).

About this task


Using OpenSearch, you can publish product data in a flattened format. Data is stored in JSON format as “Attribute Name:Value” pairs. You can quickly export this data to
external systems for analytics or reporting purposes.

Procedure



1. Use the following API details to search by using a keyword, for example "item2".

URL
http://<opensearch-host>:<port>/<index_id>/_search
Method
POST
Request body

{
"query": {
"bool": {
"must": [
{
"query_string": {
"query": "item2",
"type": "cross_fields",
"default_operator": "AND"
}
},
{
"term": {
"type": "item"
}
},
{
"term": {
"containerName.keyword": "catalog"
}
}
]
}
},
"size": 50,
"from": 0,
"aggs": {
"Containers": {
"filter": {
"term": {
"containerName.keyword": "catalog"
}
}
}
}
}

Response body

{
"took": 33,
"timed_out": false,
"_shards": {
"total": 1,
"successful": 1,
"skipped": 0,
"failed": 0
},
"hits": {
"total": {
"value": 1,
"relation": "eq"
},
"max_score": 4.698229,
"hits": [
{
"_index": "2",
"_id": "1027",
"_score": 4.698229,
"_source": {
"type": "ITEM",
"companyName": "mytest",
"containerName": "catalog",
"identifier": "1027",
"primaryKey": "item2",
"displayName": "item2",
"attributes": {
"cat_specs": {
"item_id": "item2",
"Name": null,
"Value": 120.0,
"ExpDate": "2023-03-11T00:00:00+0580",
"NewDate": null,
"Date2": null
}
},
"mappings": {
"hier": [
"cat1"
]
},
"dataCompletenessPercent": [],
"validationErrorCount": 0,
"lastModifiedOn": 1678699738000,
"lastModifiedBy": "Admin",
"versionNumber": 1
}



}
]
},
"aggregations": {
"Containers": {
"doc_count": 1
}
}
}

2. Optional: Use the following API details to search by using a key-value pair, for example "item_id" and "item2".

URL
http://<opensearch-host>:<port>/<index_id>/_search
Method
POST
Request body

{
"query": {
"bool": {
"must": [
{
"query_string": {
"fields": [
"attributes.cat_specs.item_id"
],
"query": "item2",
"default_operator": "AND"
}
},
{
"term": {
"type": "item"
}
},
{
"term": {
"containerName.keyword": "catalog"
}
}
]
}
},
"highlight": {
"fields": {
"attributes.cat_specs/item_id": {}
}
},
"size": 50,
"from": 0,
"aggs": {
"Containers": {
"filter": {
"term": {
"containerName.keyword": "catalog"
}
}
}
}
}

Where,
cat_specs is the spec name,
item_id is the attribute name,
catalog is the containerName.
Response body

{
"took": 11,
"timed_out": false,
"_shards": {
"total": 1,
"successful": 1,
"skipped": 0,
"failed": 0
},
"hits": {
"total": {
"value": 1,
"relation": "eq"
},
"max_score": 4.02548,
"hits": [
{
"_index": "2",
"_id": "1027",
"_score": 4.02548,
"_source": {
"type": "ITEM",
"companyName": "mytest",
"containerName": "catalog",
"identifier": "1027",
"primaryKey": "item2",



"displayName": "item2",
"attributes": {
"cat_specs": {
"item_id": "item2",
"Name": null,
"Value": 120.0,
"ExpDate": "2023-03-11T00:00:00+0580",
"NewDate": null,
"Date2": null
}
},
"mappings": {
"hier": [
"cat1"
]
},
"dataCompletenessPercent": [],
"validationErrorCount": 0,
"lastModifiedOn": 1678699738000,
"lastModifiedBy": "Admin",
"versionNumber": 1
}
}
]
},
"aggregations": {
"Containers": {
"doc_count": 1
}
}
}
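For reference, a minimal sketch of submitting either request with curl, assuming the request body is saved in a file named query.json (a hypothetical file name) and that your OpenSearch cluster does not require authentication; add the appropriate authentication options otherwise:

curl -X POST "http://<opensearch-host>:<port>/<index_id>/_search" -H "Content-Type: application/json" -d @query.json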

Example

JSON file format

{
"took": 21,
"timed_out": false,
"_shards": {
"total": 1,
"successful": 1,
"skipped": 0,
"failed": 0
},
"hits": {
"total": {
"value": 1,
"relation": "eq"
},
"max_score": 10.309448,
"hits": [
{
"_index": "2",
"_id": "9980",
"_score": 10.309448,
"_source": {
"type": "ITEM",
"companyName": "demo",
"containerName": "Product Offerings",
"identifier": "9980",
"primaryKey": "3912",
"displayName": "Nokia 7205",
"attributes": {
"Product Specification": {
"Product ID": "3912",
"Model ID": "Nokia 7205",
"Model Name": "Intrigue",
"Vendor": "Vendor Lookup»Nokia»",
"Product Type": "Mobile",
"Dimensions": null,
"Colors": [],
"Images": {
"Web Image": "nokia7205-1356314092-0.jpg",
"Image URL": "http://hdimages-raw.s3.amazonaws.com/nokia7205-1356314092-0.jpg"
},
"Features": [
"500 entries",
"Caller groups",
"Multiple numbers per contact",
"Picture ID",
"Ring ID",
"Calendar",
"Alarm",
"To-Do",
"Calculator",
"World clock",
"Stopwatch",
"Notes",
"SMS",
"MMS",
"Predictive text input",



"Instant Messaging",
"Games",
"Advanced Audio Distribution (A2DP)",
"Basic Imaging (BIP)",
"Basic Printing (BPP)",
"Dial-up networking (DUN)",
"File Transfer (FTP)",
"Generic Access (GAP)",
"Generic Audio\/Video Distribution (GAVDP)",
"Handsfree (HFP)",
"Headset (HSP)",
"Object Push (OPP)",
"Serial Port (SPP)",
"Polyphonic ringtones",
"Vibration",
"Phone profiles",
"Speakerphone",
"Voice dialing (Speaker independent)",
"Voice commands (Speaker independent)",
"Voice recording",
"TTY\/TDD"
],
"Descriptions": {},
"Display Image": "nokia7205-1356314092-0.jpg",
"Display Image Description": null,
"Digital Assets": [],
"Related": [],
"Completeness_Core": null,
"Completeness_Amazon": null,
"Completeness_eBay": null,
"Completeness_Adobe": null,
"Completeness_Google": null,
"Completeness_Magento": null
},
"Publication Secondary Spec": {
"Publication Group": []
},
"Region": {
"Region": []
},
"Core": {
"0001_INTERNAL_PRODUCT_CODE": null,
"0590_CODE_INFOS": {},
"1039_SAP_AFFILIATE": {},
"0794_CONSUMER_UNIT_INFOS": {},
"0797_TRADE_UNIT": {}
},
"Mobile Specifications": {
"General": {
"App": null,
"App Version": null,
"App Category": "microSD,microSDHC",
"CPU": null,
"Connectors": "MicroUSB,2.5mm Audio",
"Battery": "Li-Ion 860 mAh",
"Aliases": null,
"Platform": null,
"Platform Version": null,
"Platform Version Max": null,
"Benchmark Max": "0",
"Benchmark Min": null,
"Memory Internal": "150MB",
"Memory Slot": "MP3,AAC,AAC+,WMA",
"Network": "CDMA800,CDMA1900,Bluetooth 2.0",
"Browser": null,
"Browser Version": null,
"Language": "240"
},
"Display": {
"Size": "2.2",
"CSS Screen Sizes": "240x320",
"Virtual": "0",
"EUSAR": null,
"Type": "TFT",
"Pixel Ratio": "1",
"PPI": "182",
"X Ratio": "2",
"Y Ratio": "0",
"Display Colors": "262K",
"Other": "Second External"
},
"Media": {
"Front Camera": "2MP,1600x1200",
"Secondary Camera": null,
"Video Capture": "Yes",
"Video Playback": null,
"Audio": "Digital zoom,LED Flash",
"Other": "Internal"
},
"Design": {
"Form Factor": "Clamshell",
"Keyboard": "Numeric",
"Weight": "90",
"Side Keys": "Volume",
"Dimensions": "90 x 47 x 14",
"Antenna": null,



"Softkeys": "320"
}
},
"Approval Secondary Spec": {
"Approval Details": []
}
},
"mappings": {
"Primary Hierarchy": [
"Nokia"
]
},
"dataCompletenessPercent": [
{
"channel": "Core",
"completeness": null
},
{
"channel": "Amazon",
"completeness": null
},
{
"channel": "eBay",
"completeness": null
},
{
"channel": "Adobe",
"completeness": null
},
{
"channel": "Google",
"completeness": null
},
{
"channel": "Magento",
"completeness": null
}
],
"validationErrorCount": 0,
"lastModifiedOn": 1619539957000,
"lastModifiedBy": null,
"versionNumber": -1
}
}
]
},
"aggregations": {
"Containers": {
"doc_count": 1
}
}
}

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Using Free text search (Fix Pack 7 and earlier)

(Fix Pack 7 and earlier) You can use the Free text search feature through the Product Master interface.

Before you begin


Important: Starting from IBM® Product Master Fix Pack 8 onwards, Elasticsearch is replaced with OpenSearch because of a change in the licensing strategy (no
longer open source) by Elastic NV. You must move to OpenSearch 2.4.1 or later and run full indexing. For more information, see Installing OpenSearch (Fix
Pack 8 and later).

Enable Free text search feature from the Settings > Application settings page of the Persona-based UI. For more information, see Free Text Search settings.
Ensure that the account that you have configured for the Free text search meets following requirement:
The account should have the Full Admin role.
The Full Admin role should have full access on all the catalogs, which are to be indexed. Access can be given by modifying the ACG associated with the
catalog to select the Full Admin role and grant all permissions on all the catalogs for the Full Admin role.

Procedure
1. Click the Free Text Search icon in the upper-right corner of the interface.
2. Select appropriate option (Catalog or Hierarchy), enter the search term, and click Search.
Table 1. Tasks of the FTS page
Icon Description
Open Select an item or multiple items and click to open Items details page.
Refresh Click to refresh the search results.
Checkout Catalog
Select an item, and click Checkout > <collaboration area> to check out the selected item to the specific collaboration area.
Save as list Click to open Save as list pop-up window. Enter an appropriate name in the List name field, and click Apply. You can access this saved list
anytime through the Saved List option on the Search page.
Generate Report Click to export search results to a Microsoft Excel format. In the Generate Report pop-up window, select the appropriate script, and click OK.
Note:
Specify the value for the number of entries through the Maximum number of entries to be written to a report on Generate Report in Multi
Edit list in the Admin UI.
The generated report contains the PRIMARY_SPEC details only.
Actions Select an item, and click Actions > <action name> to open the <action name> pop-up window.
Note: The Actions icon is only displayed if the entry preview script is configured for the selected catalog or container.
Clear search Click to undo the search results, and reset the search field.
Publish Click to publish specs from the IBM Product Master to the IBM Watson® Knowledge Catalog. For more information, see Publishing specs to the
IBM Watson Knowledge Catalog.
3. To display only the required attributes in an attribute collection, configure the attributes that are shown in the Free text search results by adding the FTS Display Attribute Collection=<attribute collection name> property in the Catalog Attributes. By default, an entire attribute set is mapped to a catalog, so the Free text search results can go beyond 200.
4. To automatically enable a fuzzy search that uses "~" (tilde), set the value of the fts_enable_fuzzy_search property to True in the restConfig.properties file. By default, the value of the fts_enable_fuzzy_search property is False.
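For example, the corresponding line in the restConfig.properties file (property name and values as documented in this step):

fts_enable_fuzzy_search=True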

Fuzzy search
Format: Append "~" (tilde) to each word.
Result: Returns all documents where the search term is found.
The Free text search uses the query_string query with the value of the fuzziness parameter set to "AUTO". Fuzziness is interpreted by using the Levenshtein edit distance (the number of one-character changes that must be made to one string to make it the same as another string). The allowed Levenshtein edit distance is as follows:
If a word is 2 characters or fewer, no edits are allowed; the word must match exactly.
If a word is 3 - 5 characters long, one edit is allowed. For example, for the word "Apple", one misspelling works, but more than that fetches no results.
If a word is longer than 5 characters, two edits are allowed. For example, for the word "Apples", two misspellings work, but more than that fetches no results.
Note: Due to performance impact, do not use more than four words in the fuzzy search.
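A minimal sketch of the equivalent query_string request against OpenSearch or Elasticsearch; the index name catalog_index is a placeholder, because the actual index name depends on your configuration:

POST /catalog_index/_search
{
  "query": {
    "query_string": {
      "query": "Strxxt~",
      "fuzziness": "AUTO"
    }
  }
}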
Example
Search term: Strxxt~
Result: Street
Search term: NXw~ ZXXland~ or Country=NXw~ ZXXland~
Result: New Zealand
Search term: title=Wenn;city=Auckland or title=Wenx~;city=Aucklanx~
Result: Wenn and Auckland
Search term: Uz~
Result: No results
Search term: Applx~ +Silvex~
Result: Apple with Silver
Search term: Release Date=[2020-01-13 TO 2021-08-14]
Result: Results having the specified range

Important: If you take a backup of a company, then delete it, and create a new company with the same name, you must restart the Persona-based UI services before you import the data into the new company. OpenSearch (Fix Pack 8 and later) or Elasticsearch (Fix Pack 7 and earlier) picks up the new set of data after full indexing. If you do not restart the services, OpenSearch (Fix Pack 8 and later) or Elasticsearch (Fix Pack 7 and earlier) picks up the container IDs from the cache, and the item data does not open on the single-edit page.

Example
Following are the possible search options, where xyz is the search term and abc is a second search term:

Simple search term
Format: xyz
Result: Returns all documents where the search term is found.
Simple "AND" search
Format: xyz abc
Result: Returns all documents where both the search terms are found.
Wildcard search
Format: xyz*
Result: Returns all documents where the search term is found as an exact match or as a substring.
Important:

Do not use any other special character in the search term with Asterisk (*).
If you have restricted access (static selection user), do not search by using only Asterisk (*).

Special character search


Format:

xyz +abc [Plus sign (+)]


xyz -abc [Minus or hyphen sign (-)]
x?z [Question mark (?)]



Result: Returns all documents where the search term is found.
Multiple attribute values search
Format: Attribute1 = Value1 ; Attribute2 = Value2 ; [Attribute3 = Value3]
Specific value search
Example: color=silver OR red
Specified date range search
Format: Attribute1 = [Date1 TO Date2]
Example: date = [2019-03-29 TO 2019-03-31]
Attribute name search
Example: color=silver;name=iPhone
Numeric attribute search
Format: Attribute1 = [Value1 TO Value2]
Example: price = [2000 TO *] or price = [2000 TO 10000]

Filtering Free text search results


You can search for attribute names that are an exact match or a substring of the actual attribute name. The results are matched through the attributes in the spec and the associated attribute paths in the spec.
Excluding content from the Free text search results
Using the Admin UI, you can exclude specific content from being indexed into the Free text search.

Related concepts
Setting for FTS in the UI

Related tasks
Installing Elasticsearch (Fix Pack 7 and earlier)

Related reference
FTS properties - env_settings.ini file

Related information
Fuzzy search - Query String Query
Fuzzy search - Fuzziness
Fuzzy search - Levenshtein distance

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Filtering Free text search results


You can search for attribute names that are an exact match or a substring of the actual attribute name. The results are matched through the attributes in the spec and the associated attribute paths in the spec.

The search terms must be in the Key = Value format, for example, "color = red", where color is used as the key to match against the set of attributes and red is used as the value to search for in the matching records. Similarly, multiple search attributes can be specified in the following format by using a semicolon (;) as a separator:

Key = Value ; Key = Value [; Key = Value]


Search results are accurate if the specified search attribute matches exactly one attribute in the specs that are associated with a catalog or hierarchy. If the specified search attribute matches more than one attribute in the specs that are associated with a catalog or hierarchy, then the results depend on the matching criteria that are configured by using the fts_should_match_expression property in the restConfig.properties file. By default, the value of the fts_should_match_expression property is set to 1<90%. If more than one attribute from the spec matches the attribute that is specified in the search term, the 90% clause is applied: results are returned only when floor(n x 90%) of the attribute values match, where n is the number of attributes that matched the search term. For example, if the search attribute matches three spec attributes (n = 3), at least floor(3 x 90%) = 2 of the attribute values must match.
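A minimal restConfig.properties sketch that makes this default explicit (in the 1<90% syntax, a single matching attribute is required outright; for more than one, floor(n x 90%) of them must match):

fts_should_match_expression=1<90%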

Following types of attributes are supported:

String
Numbers
Dates

Note:

If a specified attribute matches multiple attributes of incompatible types, no results are returned. For example, if you specify the date = today search term and the spec has an attribute named date (type String) and a Start Date attribute (type Date), then Elasticsearch produces an "unable to convert string today to date" error.
You can exclude a term from the search operation by using a hyphen (-). For example, if you search for item-00001, the search results show attribute names that contain "item" but not "00001".
Omit trailing zeros when you search with a number type attribute. For example, search for attribute1 = 19.24 instead of attribute1 = 19.24000.

Examples

String attribute:
Display records with attribute name "iphone" and color "silver":



color=silver; name=iphone
No result because a phone cannot be both "silver" and "red" colored:
color=silver red
Display records with either of the specified values:
color=silver OR red
Display records by using "OR" operator:
color=silver OR red ; brand=google
Numeric attribute:
Display records for phones having price higher than 2000
price = [2000 TO *]
Display records for phones in the 2000 - 10,000 price range
price = [2000 TO 10000]
Date attribute:
Display all records with specified attribute date range (2019-03-29/30/31) including lower and upper limits:
date = [2019-03-29 TO 2019-03-31]
Display all records with attribute date range until 2019-03-30, but exclude 2019-03-31:
date = {* TO 2019-03-31}
Display all records with attribute date range of 2019-03-29/30, but exclude upper limit 2019-03-31:
date = [2019-03-29 TO 2019-03-31}
Display all records with attribute date range of 2019-03-29/30, but exclude upper and lower limits:
date = {2019-03-29 TO 2019-03-31}
Mixed attributes:
Display all records for "oppo" phones having price higher than 2000:
name = oppo ; price = [2000 TO *]
Displays all records with attribute date range of 2019-03-29/30, exclude the upper limit 2019-03-31, and price in the range 3000 - 4000:
price = [3000 TO 4000] ; date = [2019-03-29 TO 2019-03-31}

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Excluding content from the Free text search results


Using the Admin UI, you can exclude specific content from being indexed into the Free text search.

Prerequisites
You have configured Admin UI and Persona-based UI successfully.
You have started and configured Free text search services (pim-collector and indexer) successfully.
Hazelcast and Elasticsearch are started and configured successfully.

Excluding items of a catalog


To exclude all the items of a catalog from being indexed into Elasticsearch, proceed as follows:

1. Log in to the Admin UI.


2. In the Modules tab on the left, right-click and select Catalog Attributes. The right pane displays the Catalog Detail for '<catalog name>' pane.
3. In the Catalog Detail for '<catalog name>' section, add an attribute with the following value, and click Add.
Exclude from Free Text Search=True
The custom attribute gets associated to the catalog.

Important:

If you add or remove Exclude from Free Text Search from the Catalog attributes, reschedule full indexing; otherwise, the catalog might still appear in the search results.

Excluding attribute
To exclude an attribute, proceed as follows:

1. Log in to the Admin UI.


2. Go to Data Model Manager > Specs Mappings > Specs Console.
3. For the spec you want to modify, click Edit.
4. Click <attribute name> that you want to exclude.
5. Select Exclude from Free Text Search from the list.
6. Click Select a type as the spec node type to associate with the selected attribute. The right pane now displays Exclude from Free Text Search attribute.
7. To enable, select the Exclude from Free Text Search checkbox, and click Save.

Important:

Child attributes (grouping or multi-occurrence), if any, are also excluded.


If you select or clear the Exclude from Free Text Search checkbox for an attribute, reschedule full indexing; otherwise, the attribute might still appear in the search results.
Searching for excluded attributes might return an invalid result.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)



Using Data Model Manager
Using the Data Model Manager, you can view or edit specs, attribute collections, rules, and lookup tables, and you can access the File Explorer.

Data Model Manager page role access


Table 1. Data Model Manager feature. Each row lists a subfeature followed by check marks for the roles that can access it. Role columns, in order: Admin, Basic, Catalog Manager, Category Manager, Content Editor, Digital Asset Manager, Full Admin, GDS Supply Editor, Merchandise Manager, Service Account, Solution Developer, Vendor.
Collaboration Area console ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓
Spec Console ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓
Rules Console ✓ ✓ ✓
Lookup Table Console ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓
File Explorer ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓
Job Console ✓ ✓ ✓ ✓ ✓ ✓ ✓
Workflow console ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓
Access the following features:

Spec console
Attribute collection console
Rules console
Lookup Table console
File Explorer

Using Collaboration Area console


Access the Collaboration Area console from the left pane by clicking Data Model Manager > Collaboration area console.
Using File Explorer
The File Explorer manages all incoming and outgoing files, including import feeds, scripts, reports, and specs. You can access the File Explorer from the left pane by clicking Data Model Manager > File Explorer. The File Explorer page displays the total number of files and the number of selected files.
Using Job console
Access the Job console from the left pane by clicking Data Model Manager > Job console. The Job console page displays the total number of existing jobs and allows you to set a schedule for a job.
Using Lookup Table console
Access the Lookup Table console from the left pane by clicking Data Model Manager > Lookup Table console. The Lookup Table console page displays the total number of lookup tables and the number of selected lookup tables.
Using Rules console
Access the Rules console from the left pane by clicking Data Model Manager > Rule console. The Rules console page displays the total number of rules and the number of selected rules.
Using Spec console
Access the Spec console from the left pane by clicking Data Model Manager > Spec console and Attribute collection. The Primary, Secondary, and Lookup tabs display the various specs that are uploaded in the application.
Using Attribute collection console
Access the Attribute collection console from the left pane by clicking Data Model Manager > Spec console and Attribute collection.
Using Workflow console
Access the Workflow console from the left pane by clicking Data Model Manager > Workflow console.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Using Collaboration Area console


Access the Collaboration Area console from the left pane by clicking Data Model Manager > Collaboration area console.

Tasks
You can perform the following tasks on the Collaboration area console page.
Task Description
Refresh Click to refresh and reflect the updates that are done in the Collaboration area console page.
Delete Select a collaboration area, and click Delete. The Delete confirmation pop-up window opens. To delete the collaboration area, click Yes.
On the right side of the page, click to filter the collaboration areas. In the Search field, enter the collaboration area name, and click Apply.
Sort Use the drop-down list to sort the collaboration area names by ascending or descending order. Default value is ascending order.
Create collab Click to create a collaboration area. For more details, see Editing or creating a collaboration area.

Editing or creating a collaboration area


Using the collaboration area console, you can edit an existing collaboration area or create a new collaboration area.



IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Editing or creating a collaboration area


Using the collaboration area console, you can edit an existing collaboration area or create a new collaboration area.

About this task


To edit an existing collaboration area or create a new collaboration area, proceed as follows.

To edit an existing collaboration area, click the <collaboration area name> link.


To create a new collaboration area, click Create collab.

The collaboration area details page opens.

Procedure
1. In the Collab area properties section, add or edit the following.
Field Description
Name Displays name of the collaboration area.
Description Displays description for the collaboration area.
Delete when empty? Select if you want to delete the collaboration area when empty.
Check out and edit? Select if you want to check out and edit the collaboration area.
Workflow Click to select the workflow.
Container Click to specify the container type as catalog or hierarchy.
Access Control Group Click to select the Access Control Group.
2. In the Timeout section, add or edit the following.
None - Select to specify no timeout.
After - Enter a value to specify x days of timeout.
On - Click to select and specify the date and time interval for the timeout.
3. In the Administrators section, add or edit the following.
Users - From the left pane, select users to move them to right pane, and click OK.
Roles - From the left pane, select roles to move them to right pane, and click OK.
4. Click Save.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Using File Explorer


The File Explorer manages all incoming and outgoing files, including import feeds, scripts, reports, and specs. You can access the File Explorer from the left pane by clicking Data Model Manager > File Explorer. The File Explorer page displays the total number of files and the number of selected files.

You can access the following file directories for files that are stored in your Oracle or Db2 database:

archives
public_html
eventprocessor
schedule_logs
feed_files
scripts
FTP
tmp
params
users

The ftp and public_html directories are file system directories that are mounted into the document store and are defined in the docstore_mount.xml configuration file in
the $TOP/etc directory. The docstore_mount.xml file provides the location of your file system mount points and uses the ftp_root_dir and supplier_base_dir parameters
from the common.properties file:

<mnt doc_path="/public_html/" real_path="$supplier_base_dir/" inbound="yes"/>


<mnt doc_path="/ftp/" real_path="$supplier_ftp_dir/" inbound="yes"/>

Tasks
You can perform the following tasks on the File Explorer page.
Task Description

Upload Click to open the Upload file pop-up window.

File - Click Browse to upload a file from your computer


Read the package name from .class file - If the file extension is .class, select to read the package name from the class file.
Destination - Specifies the upload location (read-only).
Access Control Group - Specify an ACG from the Access Control Group list.

The maximum file size that you can upload is 50 MB.


Delete Select a file, and click Delete. The Delete files pop-up window opens. To delete the file, click Yes.
Click to toggle between details or large icons views.
Click to locate files that contain the specified search keyword. You must specify a minimum of 3 characters for a search keyword to yield valid results.
Download Click to download the selected file in .zip format. You can configure the file size limit for downloads according to your usage requirement by using the docstore_download_size_mb_limit property in the restConfig.properties file (see the properties sketch later in this topic). For more information, see docstore_download_size_mb_limit.
Cut Click to cut and move a file to a new location.
Copy Click to copy a file to a new location.
Paste Browse to the required folder and click Paste to add the file from the previous "cut" or "copy" operations.
Rename Click to open Rename file pop-up window. In the Name field, type the new file name, and click OK.
You can also right-click any file to perform download, view or edit Access Control Group, cut, copy, paste, rename, or delete operations.

You can access two reports for maintaining the file explorer. For more information, see Document store maintenance.
To make the FTP directory work in the file explorer, set up the FTP directory. For more information, see Setting the FTP directory.
You can also use a file system instead of the database. For more information, see Using file system as document store.
You can set user privileges for users by setting an access control group (ACG) for each file. For more information, see Setting document store access privileges.
To save space, you can compress the BLOB files. For more information, see Compressing document store files.
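For example, a restConfig.properties sketch for the download limit mentioned in the Download row above; 100 is an illustrative value only, so choose a limit that suits your usage:

docstore_download_size_mb_limit=100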

Related information
Troubleshooting the docstore issues

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Using Job console


Access the Job console from the left pane by clicking Data Model Manager > Job console. The Job console page displays the total number of existing jobs and allows you to set a schedule for a job.

The Job console enables you to add schedules, disable jobs, and delete jobs.

Tasks
You can perform the following tasks on the Job console page.
Task Description
Job type Click to select a type from the drop-down list. You can either view all the job types, or filter by the following types.

Import jobs - Select to view Import jobs page. For more information, see Import jobs.
Export jobs - Select to view Export jobs page. For more information, see Export jobs.
Report jobs - Select to view Report jobs page. For more information, see Report jobs.

View all job runs Click to view Job runs page.


Schedule Specify a new or edit an existing job schedule (Using Schedule pattern in the job details page). You can specify multiple schedules for a single job. You
can add one time or recurring schedule. All the schedules added for import jobs run only with the latest uploaded import file.
For more information, see Adding or editing a job schedule.
Disable Select a job, and click Disable to disable all the schedules of the job.
Delete job Select a job, and click Delete to delete jobs other than import, export, or report. The Delete confirmation pop-up window opens. To delete the job, click
Yes.
Click the icon on a column to sort by ascending or descending order. Default value is ascending order. To filter the list, in the Search field enter a name,
and click Apply.

Import jobs
On the Import jobs page, you can perform the following tasks.
Task Description
Add run Specify a new job run schedule. Run job adds an immediate schedule to the import job. For more information, see Adding or editing a job schedule.
Upload file Select a job, and click Upload file. The Upload file pop-up window opens. Click Browse to specify the file you want to upload in the Select file field, and
click Upload.
For the PULL_FTP and DOC_STORE data source types, a backend job is triggered to upload the file to the document store.
Download file Select a job, and click Download file.

Delete job Select a job, and click Delete. The Delete confirmation pop-up window opens. To delete the job, click Yes.
Approve job (Approver) Select a job and click Approve job (enabled only for jobs that have pending approval status). The Approve job pop-up window opens. Enter the reason for approval and click Approve.
Reject job (Approver) Select a job and click Reject job (enabled only for jobs that have pending approval status). The Reject job pop-up window opens. Enter the reason for rejection and click Reject.
Click the icon on a column to sort by ascending or descending order. Default value is ascending order. To filter the list, in the Search field enter a name,
and click Apply.
Source Click <source name> link for any import job to view the datasource details.
Spec mapping Click <mapping name> link for any import job to view the source and destination spec mapping details in the Spec mapping details pop-up window.
Click any import job to view more details like job details, enable, disable, delete, or add a schedule, loading process, and job run status.

Export jobs
On the Export jobs page, you can perform the following tasks.

Task Description
Add run Specify a new or edit an existing job run schedule. Run job adds an immediate schedule to the export job. For more information, see Adding or editing
a job schedule.
Delete job Select a job, and click Delete. The Delete confirmation pop-up window opens. To delete the job, click Yes.
Approve job (Approver) Select a job and click Approve job (enabled only for jobs that have pending approval status). The Approve job pop-up window opens. Enter the reason for approval and click Approve.
Reject job (Approver) Select a job and click Reject job (enabled only for jobs that have pending approval status). The Reject job pop-up window opens. Enter the reason for rejection and click Reject.
Click the icon on a column to sort by ascending or descending order. Default value is ascending order. To filter the list, in the Search field enter a name,
and click Apply.
Spec mapping Click <mapping name> link for any export job to view the source and destination spec mapping details in the Spec mapping details pop-up window.
Script file Click Details link to view the script file details.
Distributions Click <number> link to view distribution details.
Alerts Click on the <alert count> to view the alert details.
Click any export job to view more details like job details, enable, disable, delete, or add a schedule, catalog export, and job run status.

Report jobs
On the Report jobs page, you can perform the following tasks.

Task Description
Add run Specify a new or edit an existing job run schedule. Run job adds an immediate schedule to the report job. For more information, see Adding or editing a job schedule.
Manage Select a job, and click Manage parameters. The Manage parameters pop-up window opens. View or modify the parameters and click Save.
parameters
Delete job Select a job, and click Delete. The Delete confirmation pop-up window opens. To delete the job, click Yes.
Click the icon on a column to sort by ascending or descending order. Default value is ascending order. To filter the list, in the Search field enter a
name, and click Apply.
File script Click <file script name> link for any report job to view the script file details.
Distribution Click <distribution name> link for any report job. The Distribution pop-up window opens. You can view distribution, distribution type, and distribution
script type.
Click any report job to view more details like job details, enable, disable, delete, or add a schedule, report details, and job run status.

Adding or editing a job schedule


Using Job console page, you can add or edit an existing job schedule.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Adding or editing a job schedule


Using Job console page, you can add or edit an existing job schedule.

About this task


To edit an existing job schedule or create a new job schedule, proceed as follows.

Procedure



Select a job, and click Schedule. The Add Schedule pop-up window opens. Specify the following details, and click Save.

Schedule name
Name of the schedule.
Start date and time
Specify the date and time for the job run.
Schedule pattern
You can select one time or recurring schedule.
Schedule pattern - One time
Select to run the job at the specified date and time.
Recurring
Select to specify the recurrence pattern.

Minute - Select an appropriate value from the Minutes drop-down list to run the job at the specified minute interval.
Hourly - Select an appropriate value from the Hour drop-down list to run the job every hour at the specified time.
Daily - Select an appropriate value from the Hour drop-down list to run the job every day at the specified time.
Weekly - Select a day to run the job every week on the specified day.
Monthly - Specify a date to run the job every month on the specified date.
Yearly - Specify the month and date on which you want the schedule to run every year.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Using Lookup Table console


Access the Lookup Table console from the left pane by clicking Data Model Manager > Lookup Table console. The Lookup Table console page displays the total number of lookup tables and the number of selected lookup tables.

Tasks
You can perform the following tasks on the Lookup Table console page.
Task Description
Refresh Click to refresh and reflect the updates that are done in the Lookup Table console page.
Delete Select a lookup table, and click Delete. The Delete confirmation pop-up window opens. To delete the lookup table, click Yes.
Select all Click to select all the lookup tables.
On the right side of the page, click to filter the lookup tables. In the Search field, enter the lookup table name, and click Apply.
Sort Use the drop-down list to sort the lookup table names by ascending or descending order. Default value is ascending order.
View
Click on the right side of the page to toggle between the card and grid view. The selected view displays an Orange highlight. The default value
is card view.
Create spec Click to directly create a spec through the Lookup Table console. For more details on the fields, see Editing or creating a spec.
Create table Click to open Create lookup table pop-up window. Specify the Lookup table name, select the appropriate value from the Select lookup spec list, and
click Create table and add entries. For more details on the fields, see Editing or creating a Lookup table.
To see details of any lookup table, click <lookup table name> link. You can also click in the Spec Name column to directly open the Attributes properties section of a
spec in an editable mode.

Editing or creating a Lookup table


Using the Lookup Table console, you can edit an existing Lookup table or create a new Lookup table.

Related concepts
Lookup tables

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Editing or creating a Lookup table


Using the Lookup Table console, you can edit an existing Lookup table or create a new Lookup table.

About this task


To edit an existing Lookup table or create a new lookup table, click <Lookup table name> link. The Lookup Table details page opens.

Procedure
Add or update the following in the Lookup Table details page.
Task Description

Lookup table name Displays the Lookup table name.
Lookup spec Displays the Lookup table spec name.
Note: For Lookup spec, use "String enum" as the attribute instead of the Lookup Table attribute.
View version Select the version of the Lookup Table entries you want to view.
Save Click to save the updates that are done in the Lookup Table details page.
Revert Click to undo the updates that were done in the Lookup Table details page.
Refresh Click to refresh and reflect the updates that are done in the Lookup Table details page.
Add Click to add a Lookup table entry. Specify the value in the id column, and click Save.
Clone Click to clone an existing Lookup table entry. Specify the value in the id column, and click Save.
Import Click to bulk update the Lookup table entries. Select the updated Grid Data Export Report - YYYYMMDD_HHMMSS.xlsx file from your computer.
Export Click to download a list of Lookup table entries in the Grid Data Export Report - YYYYMMDD_HHMMSS.xlsx format.
Delete Select a Lookup table entry, and click Delete. The Delete confirmation pop-up window opens. To delete the Lookup table, click Yes.
Action Displayed only if the entry preview script is configured for a Lookup table.
Select all Click to select all the Lookup tables.
Click to locate a Lookup table entry containing the specified search keyword.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Using Rules console


Access the Rules console from the left pane by clicking Data Model Manager > Rule console. The Rules console page displays the total number of rules and the number of selected rules.

Prerequisites
Following are the prerequisites for using the Rules feature:

Run the migration script and verify that the tctg_rle_rule table is created in the database.
Import the mdmce-env.ZIP file from the $TOP/mdmui/env-export/mdmenv folder. Verify that you can see the RulesLookupTable in the Lookup Table console.

Tasks
You can perform the following tasks on the Rules console page.
Task Description
Clone Select a rule, and click Clone to open the cloned rule in the Rules details page.
Delete Select a rule, and click Delete. The Delete confirmation pop-up window opens. To delete the rule, click Yes.
Basic/Composite Displays whether the rule is basic or composite.
A "Basic" rule is a rule that is created on a specific extension (stand-alone rule).
A "Composite" rule is a rule that consists of two or more "Basic" rules.

Status Click to enable a rule directly from the Rules console page.
Click to open the Settings pop-up window.

To remove a column, clear the checkbox for the column.


To select all the columns, click Select All.

To see details of any rule, click <rule name> link.

Known Limitations
Following is the list of known limitations:

Rules feature

The Rules engine does not support the following:


Password, binary, image, thumbnail image, grouping, localized grouping, and external content reference attributes
Post-save rule extension (catalog)
Nested "if/else" statements
The linked type of attribute is supported only for catalogs.
If you manually remove a script from the Admin UI from an extension point that also has a rule that is attached from the Persona-based UI, the rule script does not run. As a workaround, save the same rule again from the Persona-based UI to generate a dummy script.
Only the conditions that were added while creating a basic rule are considered in a composite rule.
If a basic rule in a composite rule is edited, the update is not considered.
A rule with the length operator and the predicate type "attribute" is not supported.
If you use the "Is mapped to" operator in the "IF" part of the rule to map an item to a category during rule creation, the rule fails.
Currently, rule support is for an item only.
A string enumeration attribute type with a string enumeration rule script has a string type editor instead of the string enumeration drop-down list.

Restriction: The category path of a mapping expression (condition or action) in a rule must not contain the forward slash (/) character, which is the delimiter for the category path.



"In" and "Not in" operators

Currently, supports only the "Value" predicate.


Multi-value-selector:
To add data, you need to press Enter for multi-value selector.
Only deals with list of strings.
No validations are present in the multi-value selector.
Rich text editor: A rule is not applied if during rule creation the HTML tags that need to be compared are not added.
Specialized editors - Following attributes have specialized editors:
Date, Date-time, Timezone, Relationship, Lookup table, and Linked.

Lookup table and Relationship attributes


The following is the display format for Relationship and Lookup table attributes:

Lookup table attribute
LKPNAME_PK_DISPATTRIB
LKPNAME_PK
Relationship type attribute
CTG_PK_DA
CTG_PK

The CTG name or the LKPNAME is mandatory in the display format while you are creating rules; otherwise, the primary key or the display attribute cannot be fetched and the rule creation fails.
Date attribute
The input and output formats must be set the same as specified in the Settings page.
Multi-occurrence attribute
You cannot add Sequence in the "Then" part of the rule.
Catalog and workflow rules do not support this type in the "Then" part of the rule.

Editing or creating a rule


Using the Rules console, you can edit an existing rule or create a new rule.
Supported operators and rules
This topic describes different types of operators in the Rules console and the rules supported.
Rules expressions
This topic defines syntax and semantics of expressions that are used in the predicate and assignment sections of the Rules.json file.

Related concepts
Types of scripts (extension points)

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Editing or creating a rule


Using the Rules console, you can edit an existing rule or create a new rule.

Tasks
You can perform the following tasks on the Rules details page.
Icons Description
New Click New to create a new rule in the Rules details page.
Save Click Save to save the rule.
Refresh Click Refresh to refresh the Rules details page.
Clone Select a rule, and click Clone to clone a rule.
Delete Select a rule, and click Delete. The Delete confirmation pop-up window opens. To delete the rule, click Yes.
Append to rule Click to open the Create composite rule pop-up window. Select New to specify a new composite rule or Existing to append an existing composite rule.

About this task


To edit an existing rule or create a new rule, proceed as follows.

To edit an existing rule, click <rule name> link.


To create a new rule, click Create rule.

The Rules details page opens.

Procedure
1. In the Rule name field, type the name of the rule.
2. In the Rule properties section, select an appropriate value from the Apply rule to list. Depending upon your selection, the fields of the Rule properties section
further populate.



Catalog
a. Select the catalog that you want to apply the rule to from the Catalog list.
b. Select the extension point that you want to apply the rule to from the Extension point list. The available options are Entry build, Pre processing, and Post processing.
Spec
a. Select the spec that you want to apply the rule to from the Spec list.
b. Select and specify the attribute for the selected spec. To distinguish the attributes in a localized spec, you can set the value of the property
attributeDisplayFormat=path in the config.json file.
c. Select the rule type from the Rule type list. The available options are Value rule, Validation rule, and Default value rule. If the attribute type is string enumeration, the Rule type list also displays a String enumeration rule option.
Workflow
a. Select the workflow that you want to apply the rule to from the Workflow list.
b. Select and specify the workflow step to which you want to apply the rule to from the Step list.
c. Select the catalog that you want to apply the rule to from the Catalog list. If the workflow is a hierarchy, the Rule type list also displays an option of
Hierarchy.
Note: If you create the rule in a nested workflow and specify a child workflow as the step, you can specify "Item" mapping only.

3. In the Rule structure section, you can create and configure the rule structure. The Rule structure section displays the Design view and Code view tabs. You define a
rule in the Design view tab.
Basic rule

You define a rule in the Design view tab. The Design view tab has the following available building blocks for a rule:
If statement operator
And logical operator
Or logical operator
Then conditional operator
The Rule structure section displays the operators depending on your selections in the Rule properties section.
Note: The Rule structure of a basic rule is read-only while the rule is associated with a composite rule. To edit the rule, remove the basic rule from the composite rule.

Design view tab: "If" statement operator

Logical operator
And/OR
List of attributes
Select an appropriate attribute.
Operator
You can choose from various operator types. For more information, see Supported operators and rules.
Predicate
Available options are Value, Attribute, and Function. Depending on your selection, the next field can be an integer or text.
Relationship and Linked editors
Click icon to open Relationship pop-up window. You can use the editor to search entries based on any attribute of the selected catalog. For more
information, see Relationship editor.
Click icon to open Linked pop-up window. You can use the editor to search entries based on any attribute of the selected catalog. For more
information, see Linked editor.

Design view tab: "Then" statement operator

List of attributes
Select an appropriate attribute.
Operator
Relational operator
Predicate
Available options are Value, Attribute, and Function. Depending on your selection, the next field can be an integer or text.

Design view tab: "Then" statement operator for an "Item" attribute

Logical operator
And/OR
List of attributes
Item
Operator
You can choose from following Category operator types:
Is mapped to category
Is not mapped to category
Edit Category mapping
Click Edit category mapping to open the Select category pop-up window. Select a category, click Category to be mapped, and then click OK.

Design view tab: "Then" statement operator for an "Item" attribute

Operator
Map to category

If you are creating a rule for Workflow, use the Design view tab to define statement and logical operators for each of the In, Out, and Timeout tabs.

Click Add criterion to specify the rule parameters. If required, to specify additional nested rule parameter and logical operator, click Add set of criteria. To remove a
rule, click Remove criterion.

Associations tab



A basic rule becomes view-only when it is associated with any composite rule. This tab provides a list of all the composite rules with which the basic rule is associated. You can clone the basic rule, append the rule to more composite rules, and add any new rule. You cannot, however, delete the basic rule.

Composite rule

Applicable rules section - Displays a list of all "Basic" rules with matching rule properties. Select a rule and click Add. You can select multiple rules.

Selected rules section - Displays list of your selected "Basic" rules. You can reorder the rules by using the Move up or Move down icons.

4. Click Save.

What to do next
After you have configured the rule, click the Code view tab to check the JSON output for the rule structure, and click Save. If you want, you can click Copy to copy the JSON file. For more information on the syntax and semantics of expressions that are used in the JSON file, see Rules expressions.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Supported operators and rules


This topic describes different types of operators in the Rules console and the rules supported.

Supported operators
Following table lists different types of supported operators in the Rules console.
Operator Type
Begins with String operator
Contains String operator
Ends with String operator
Equals Relational operator
Empty Generic for multiple operators
Does not begin with String operator
Does not contain String operator
Does not end with String operator
Does not equal Relational operator
Is not empty Generic for multiple operators
In Membership operator
Not in Membership operator
Length less than String operator
Length greater than String operator
Matches String operator
Does not match String operator
Begins with match case String operator
Ends with match case String operator
Contains match case String operator
Greater than Relational operator
Greater than or equal to Relational operator
Less than Relational operator
Less than or equal to Relational operator
Is one of Identity operator
Is not one of Identity operator
Does not begin with match case String operator
Does not end with match case String operator
Does not contain with match case String operator
Equals match case String operator
Not equals with match case String operator
Is not greater than Relational operator
Is not greater than or equal to Relational operator
Is not less than Relational operator
Is not less than or equal to Relational operator
Occurrence count not equal to Count operator
Occurrence count equal to Count operator
Occurrence count greater than Count operator
Occurrence count less than Count operator

Rules and supported operators mapping



Following are the rules for each supported operator in the Rules console, each with an example rule condition and the result if the condition is satisfied.
Begins with
Rule condition: Checks whether the String36 attribute of the spec begins with the word "Item".
If satisfied: Rule sets the value of the number enumeration attribute of the spec to 23.
Contains
Rule condition: Checks whether the value specified for "trade" is present in the String36 attribute of the spec.
If satisfied: Rule sets the value of the number attribute of the spec to "100".
Does not begin with
Rule condition: Checks whether the "id" attribute (primary key) of the spec does not start with the word "Cat".
If satisfied: Rule sets the value of the flag attribute of the spec to "False".
Does not contain
Rule condition: Checks whether the value specified for "ed" is not present in the String36 attribute of the spec.
If satisfied: Rule sets the value of the flag attribute of the spec to "True".
Does not end with
Rule condition: Checks whether the String36 attribute of the spec does not end with "ed".
If satisfied: Rule sets the value of the integer attribute of the spec to "1000".
Does not equal
Rule condition: Checks whether the currency attribute of the spec does not equal "500".
If satisfied: Rule sets the value of the time zone attribute of the spec to "(GMT-09:00) Alaska". The rule does not run if the "If" condition is not satisfied.
Empty
Rule condition: Checks whether there is any value in the currency attribute of the spec.
If satisfied: Rule sets the value of the date attribute of the spec to "9/11/22 1:42 PM".
Ends with
Rule condition: Checks whether the value of the String36 attribute of the spec ends with the word "tion".
If satisfied: Rule sets the value of the integer attribute of the spec to "5500".
Equals
Rule condition: Checks whether the value in the integer attribute of the spec equals the value "50".
If satisfied: Rule sets the value "System_Unit_Conversions»m»m" in the Lookup table attribute of the spec.
Greater than
Rule condition: Checks whether the value in the integer attribute of the spec is greater than the value "50".
If satisfied: Rule sets the value "Rules Catalog»Itm33»Itm33" in the Relationship attribute of the spec.
Greater than or equal to
Rule condition: Checks whether the value in the currency attribute of the spec is greater than or equal to the value "500".
If satisfied: Rule sets the value "true" in the flag attribute of the spec.
IN
Rule condition: Checks the given "If" condition in any or all the multi-occurrences, depending on the scope, and applies the "Then" part. The operator has the following scopes:
Any - The condition is satisfied in any of the occurrences. For example, apply a rule on the color attribute (a multi-occurrence type attribute); if "Any" of the values of this attribute is "blue" or "black", the rule sets the value of the flag iscolour to "true".
All - The condition is satisfied in all the occurrences. For example, apply a rule on the color attribute (a multi-occurrence type attribute); only when "All" of the values of this attribute are "blue" or "black" does the rule set the value of the flag iscolour to "true".
Is not empty
Rule condition: Checks whether the currency attribute of the spec contains any value.
If satisfied: Rule sets the value of the URL attribute to "www.google.com".
Length greater than
Rule condition: Checks whether the length of the value in the String36 attribute of the spec is greater than "5".
If satisfied: Rule sets the value of the currency attribute of the spec to "600".
Length less than
Rule condition: Checks whether the length of the value in the String36 attribute of the spec is less than "4".
If satisfied: Rule sets the value of the number attribute of the spec to "2123".
Less than
Rule condition: Checks whether the value in the currency attribute of the spec is less than the value "500".
If satisfied: If the rule runs successfully, it maps the "Item" to the category "C11".
Less than or equal to
Rule condition: Checks whether the value in the currency attribute of the spec is less than or equal to the value "500".
If satisfied: Rule sets the value "System_Unit_Conversions»US gallon»US gallon" in the Lookup table attribute of the spec.
Occurrence count equal to
Rule condition: Checks whether the multi-occurrence attribute of the spec has a number of occurrences equal to "4".
If satisfied: Rule sets the value of the number attribute of the spec to "2123".
Occurrence count greater than
Rule condition: Checks whether the multi-occurrence attribute of the spec has a number of occurrences greater than "4".
If satisfied: Rule sets the value of the number attribute of the spec to "2123".
Occurrence count less than
Rule condition: Checks whether the multi-occurrence attribute of the spec has a number of occurrences less than "4".
If satisfied: Rule sets the value of the number attribute of the spec to "2123".
Occurrence count not equal to
Rule condition: Checks whether the multi-occurrence attribute of the spec has a number of occurrences greater than or less than "4", but not equal to "4".
If satisfied: Rule sets the value of the number attribute of the spec to "2123".

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Rules expressions



This topic defines syntax and semantics of expressions that are used in the predicate and assignment sections of the Rules.json file.

For an existing rule, you can view the content of the Rules.json file through Rules console > Rule details > Rule structure > Code view.

Context
Expressions can occur in the predicate and assignment sections of rules, in the "rules" property. This property is an object that has following four properties:

Property Description
field Specifies the attribute path.
operator Specifies the operator.
value The value can be a number or text, depending on the predicateType property.
predicateType Specifies the type of predicate. The type can be "Value" (constant value), "Attribute" (attribute path), "Expression" (either numeric or string), or "Special" (function).

Syntax terms
Numeric constant
Sequence of digits that can optionally contain a decimal point.
Example
123
3.14
42.0
String constant
Sequence of characters enclosed in single quotation marks.
Example
'string'
'string with blanks'
'string with single quote\''
Attribute reference
Attribute path enclosed in double quotation marks.
Example
"Search Ctg Spec/integer"
"Global_Local_Attributes_Spec/suggestedRetail/effectiveStartDate"
Substring function
Single token. The first argument is the string (an attribute reference). The following function fetches a substring from a string:

substring(string, start-index, end-index)

Output - Sequence of characters from string, from start-index to end-index - 1.

substring(string, start-index)

Output - Sequence of all the characters from start-index.


Both start-index and end-index must be literal integers.
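For example, assuming zero-based indexing, which is consistent with the Java semantics that are noted under Expression:

substring('New Zealand', 0, 3) returns 'New'
substring('New Zealand', 4) returns 'Zealand'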
Expression
Either a numeric expression or a string expression. The expression type is determined by the type of the first operand. The semantics of expressions correspond to their semantics in the IBM® Product Master script, which are equivalent to those of expressions in Java™.

Operator precedence for numeric expressions is the same as in Java.


The type of an expression is the attribute in the field property.
All the tokens in an expression must be of the same type.
A constant has the type and value of itself.
An attribute reference has the type and value of the attribute at its path. The value can be null, or the attribute might not be defined for the path, in which case the value is null. Operations on null values are not allowed.

Numeric expression - Predicate

{
"field": "Search Ctg Spec/integer",
"operator": "lessThan",
"value": "2 * \"Search Ctg Spec/integer-RTS\"",
"predicateType": "Expression"
}

Numeric expression - Assignment

{
"field": "Search Ctg Spec/integer",
"operator": "equal",
"value": "\"Search Ctg Spec/integer-RTS\" / 2 * 4.5 *
(\"Search Ctg Spec/number-RTS\" - \"Search Ctg Spec/number\")",
"predicateType": "Expression"
}

String expression - Predicate

{
"field": "Search Ctg Spec/string-RTS",
"operator": "beginsWith",
"value": "substring(\"Search Ctg Spec/string-RTS\", 3, 6) + 'abc'",
"predicateType": "Expression"
}



String expression - Assignment

{
"field": "Search Ctg Spec/pk",
"operator": "equal",
"value": "'p\'qr' + \"Search Ctg Spec/string\"",
"predicateType": "Expression"
}

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Using Spec console


Access the Spec console from the left pane by clicking Data Model Manager > Spec console and Attribute collection. The Primary, Secondary, and Lookup tabs display the various specs that are uploaded in the application.

Prerequisites
Following are the configurable predefined limits when you create or update a spec:

Maximum number of specs is 500.
Maximum occurrences in an attribute are 50.
Maximum number of attributes is 500.
Maximum nested grouping levels are 8.

To update the limits, adjust the values in the restConfig.properties file in the mdmui/dynamic/mdmrest folder.

Tasks
You can perform the following tasks on the Spec console page.
Task Description
Refresh Click to refresh and reflect the updates that are done in the Spec console page.
Import Select a spec, and click Import to upload a spec in the XML file format. The Upload Spec XML pop-up window opens.

1. Select Overwrite if you want to overwrite the existing spec XML file.
2. In the File field, click Browse to select the new spec XML file.
3. To upload the new spec XML file, click Upload.

Export Select a spec, and click Export to download the spec XML file in a compressed format with the exportSpec_YYYYMMDD_HHMMSS.zip label. In the
XML, if you edit the value of the <SpecName> and <Value> fields, you can import the edited XML file to create a new spec.
Note: You can batch export specs.
Publish Click to publish specs from the IBM® Product Master to the IBM Watson® Knowledge Catalog. For more information, see Publishing specs to the IBM
Watson Knowledge Catalog.
Delete Select a spec, and click Delete. The Delete confirmation pop-up window opens. To delete the spec, click Yes.
Note: If the spec you want to delete has any associations (dependencies), you need to remove the associations to successfully delete the spec.
Select all Click to select all the specs.
On the right side of the page, click to filter the specs. In the Search field enter the spec name, and click Apply.
Sort Use the drop-down list to sort the spec names by ascending or descending order. Default value is ascending order.
View
Click on the right side of the page to toggle between the card and grid view. The selected view displays an Orange highlight. The default value
is card view.
Show more Click to display more specs. By default, the page displays 40 specs and supports lazy loading for a better user experience. You can keep loading more specs through Show more, but after you click Refresh, you see only 120 specs. You can configure the number of spec cards that the Spec console page displays through the specPageSize property in the config.json file.
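For example, a minimal config.json sketch that keeps the documented default of 40 spec cards; the surrounding structure of the file is assumed:

{
  "specPageSize": 40
}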
You can see details of any spec through any of the following methods:

<spec name> link
Associations link
Attributes link
Locales link

Editing or creating a spec


Using the Spec console, you can edit an existing spec or create a new spec.
Publishing specs to the IBM Watson Knowledge Catalog
You can publish specs from the IBM Product Master to the IBM Watson Knowledge Catalog.

Related concepts
Specs

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)



Editing or creating a spec
Using the Spec console, you can edit an existing spec or create a new spec.

About this task


To edit an existing spec or create a new spec, proceed as follows.

To edit an existing spec, click any of the following.


<spec name> link
Associations link
Attributes link
Locales link
To create a new spec, click Create spec.

The spec details page opens.

Procedure
1. In the Spec properties section, add or edit the following.

Spec properties section


Locales Displays list of available locales. Select the applicable locales.
Associations Displays associations with various catalogs, hierarchies, or attributes collection types.
2. In the left pane of the Attribute structure and details section, add or edit the following. The left pane displays a node tree that has details about the associated
attribute, child attribute, or sub spec.

Attribute structure and details


Attribute Click to add an attribute as a node.
Child attribute Select a node, and click to add a child attribute.
Sub spec After you save this spec, you can add a node as a child spec.
Note: You cannot edit any attribute properties for a sub spec. You can add a sub spec only if the locales match with the parent spec.
Clone Clone a selected node with all specified properties.
Note: You cannot clone a localized or grouping node.
Delete Delete the selected node.
Search attribute Find an attribute in the node tree. All nodes that have the searched keyword are highlighted. Use Previous or Next to scroll through the
search results.
If you try to create an attribute with an existing attribute name, the node tree section highlights the duplicate field. If there are multiple duplicate fields, the last edited field is highlighted. Click Refresh to remove the duplicate highlight.
Important: You cannot remove the Primary key property from a node.
You can drag and reorder a node with the following limitations in comparison to the Admin UI:
You can drag a node within same hierarchy only.
You can drag a node only at the space between the two nodes and not over a node.
You cannot drag the second-to-last or the last node, because you can drag a node only to a position above another node.
3. In the right pane of the Attribute structure and details section, add or edit the following. The right pane displays attribute details when a node is selected. The
Attribute type list displays 21 types of attribute for Primary and Secondary tabs and 18 attributes for the Lookup tab. Depending on your choice, each attribute
displays different fields.
Click (Grid view) to quickly view all the attributes and facets in a tabular form. Click (Tree view) to toggle to the traditional tree view. Click Maximize to
enlarge the view.

Attribute details section


Name Specifies the name of the new attribute.
Maximum occurrence Specifies a value more than 1 for the attribute to have multiple values. By default, the value is 1.
Multi occurrence Group label Select a child attribute to use as a multi-occurrence group label to provide more group occurrence information.
Minimum occurrence By default, the value is 0.
Editable, Unique, Link, Hidden, Indexed, Select to specify additional properties to this attribute.
Non Persistent, Localized, Cascade,
Exclude from Free Text Search
UI display flags Select New line before to display attribute from a new row and select New line after to end the attribute
on this row (next attribute starts from a new row) in a single-edit page.
Note: For the Grouping data type and localized attributes, both UI Display flags are disabled. For multi-occurrence attributes, the UI Display flags are enabled, but because a multi-occurrence attribute consumes a whole row, the UI Display flag settings do not take effect.
View as table - Select to view and edit the attributes in a tabular format on single-edit page. Applicable
only for grouping type and must be configured on the last nested child. Sub-spec does not support this
feature.

Common fields in the Attribute type properties section


Default value Specifies the default value.
Value rule Select to specify a Value rule. Click Add to open the Add rule window. Select the Spec attributes, and build the rule by using the rule editor.
Default value rule Select to specify a Default value rule. Click Add to open the Add rule window. Select the Spec attributes, and build the rule by using the rule editor.
Validation rule Specify the validation rule.
Non persisted rule Select the Non persisted rule to specify a rule. Click Add to open the Add rule window. Select the Spec attribute, and build the rule by using the rule editor.
Help text Add a custom tooltip by using common HTML tags like <a>, <b>, <i>, <u>, and <br>, and click Save.
Note: For the hyperlink <a> tag, specify a valid URL for the href attribute and ensure that target=_blank is set so that the URL opens in a new window.
You can add a URL without anchor tags. If the URL is valid, you see the HELP link. In case of an invalid URL, you see plain text.
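For example (an illustration; the attribute and URL are hypothetical), a Help text value that combines formatting tags with a hyperlink might look like the following:

<b>Warranty period</b> in months.<br>For details, see the <a href="https://www.example.com/warranty-guide" target=_blank>warranty guide</a>.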

Mapping fields of the Attribute type properties section


The following list maps each attribute type to the fields that are available for it. The possible fields are: Currency symbol, Total number of digits, Fractional digits, Precision, Minimum length, Minimum value (Exclusive), Maximum value (Exclusive), Minimum value (Inclusive), Maximum value (Inclusive), Date only, Default timezone, Single line string, Pattern (Regular expression), and Maximum length.

Binary: Pattern (Regular expression), Maximum length
Currency: Currency symbol, Total number of digits, Fractional digits, Precision, Minimum length, Minimum value (Exclusive), Maximum value (Exclusive), Minimum value (Inclusive), Maximum value (Inclusive), Pattern (Regular expression)
Date: Date only, Pattern (Regular expression)
External Content Reference: Pattern (Regular expression), Maximum length
Flag: No mapping fields
Image: Pattern (Regular expression), Maximum length
Image URL: Pattern (Regular expression), Maximum length
Integer: Total number of digits, Minimum length, Minimum value (Exclusive), Maximum value (Exclusive), Minimum value (Inclusive), Maximum value (Inclusive), Pattern (Regular expression)
Number: Total number of digits, Fractional digits, Precision, Minimum length, Minimum value (Exclusive), Maximum value (Exclusive), Minimum value (Inclusive), Maximum value (Inclusive), Pattern (Regular expression)
Number Enum: Same fields as Number
Password: Pattern (Regular expression), Maximum length, Minimum length
Rich Text: Pattern (Regular expression), Maximum length
String: Single line string, Pattern (Regular expression), Maximum length, Minimum length
String Enum: Pattern (Regular expression), Maximum length, Minimum length
Thumbnail: Pattern (Regular expression), Maximum length
Thumbnail Image: Pattern (Regular expression), Maximum length
Thumbnail URL: Pattern (Regular expression), Maximum length
Timezone: Default timezone
URL: Pattern (Regular expression), Maximum length

Attribute properties section - Lookup table attribute fields


Name Select the name of the Lookup table.
Display attribute Select the display attribute. If you do not specify a value for the Display attribute field, the Value search option field takes the value of Primary key and is in read-only mode.
Display format Select the display format. The display format can be one of the following options:
Primary key
Display attribute
Primary key>>Display attribute
Lookup table name>>Primary key
Lookup table name>>Display attribute
Lookup table name>>Primary key>>Display attribute
Value search option Select an appropriate value from the list and click Save. Depending upon the selection, you can then search the attribute in the single-edit page through:
Primary key
Display attribute
Primary key and Display attribute
Show as a dropdown Select to display the Lookup table as a drop-down list. In a spec, an attribute can have the Lookup table either as a drop-down list or as a pop-up window.
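For example (an illustration with hypothetical values), for a Lookup table named Colors whose primary key is RED and whose display attribute value is Red, the Primary key>>Display attribute format renders the value as RED>>Red, and the Lookup table name>>Primary key format renders it as Colors>>RED.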

Attribute properties section - Relationship attribute fields


Default catalog Select the default catalog.
Default hierarchy Select the default hierarchy.
Destination catalogs Click Add to open the Destination Catalogs window. Select a catalog from the Destination catalogs list, and click Add to display it in the Selected destination catalogs list.
Value display format Select the value display format. The display format can be one of the following options:
Primary key
Display attribute
Primary key>>Display attribute
Lookup table name>>Primary key
Lookup table name>>Display attribute
Lookup table name>>Primary key>>Display attribute

Attribute properties section - Sequence attribute fields


Amount by which to increment sequence Specifies the amount by which you want to increment the sequence.
Default value for sequence start Specifies the default value for starting the sequence.



Filter and multi-occurrence
The Persona-based UI supports filter and multi-occurrence for the following attribute types on the Collaboration, Explorer, and Search pages.
number
integer
sequence
currency
string enum
number enum

Unit of measure (UOM)


Select the appropriate value from the Select unit of measure list.
The Persona-based UI supports UOM as a suffix in the Attribute name column for the following attributes.
number
integer
number enum
If the UNIT_OF_MEASURE schema facet is defined, the value becomes the key that is described in the System_Unit_Conversions Lookup table and the
corresponding factor is applied to the stored value.
Table 1. System_Unit_Conversions Lookup table
Key Factor Category
m 1.0 Length
km 0.001 Length
in 39.37007874 Length
ha 1.0 Area
acre 2.4710538147 Area
l 1.0 Volume
US gallon 0.2641720524 Volume
kg 1.0 Mass
lb 2.2046226218 Mass
Units that have a factor of 1.0 are the internal system units. Factors should not be changed, but new units can be added to the Lookup table. The Category field ensures that different units of measure remain consistent. International System (SI) units are represented by their standard abbreviations.

The System_Unit_Conversions Lookup table contains the following existing units of measure: US gallon, acre, ha, in, kg, km, l, lb, and m.

You can update the existing unit names and conversion factors in the System_Unit_Conversions Lookup table.
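For example (a worked illustration using the factors in Table 1), because the factor is applied to the stored value, a stored length of 2500 (in the internal unit m) with a UNIT_OF_MEASURE facet of km displays as 2500 × 0.001 = 2.5 km, and a stored mass of 10 (in the internal unit kg) with a facet of lb displays as 10 × 2.2046226218 ≈ 22.05 lb.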

4. Click Save.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Publishing specs to the IBM Watson Knowledge Catalog


You can publish specs from the IBM® Product Master to the IBM Watson® Knowledge Catalog.

Before you begin


Create a catalog in the IBM Watson Knowledge Catalog. The IBM Product Master specs are published as assets in this catalog.
Create an account in the IBM Watson Knowledge Catalog. The account should have "Editor" or "Admin" permissions to create and edit asset types and assets.
For more information, see Managing access to a catalog (Watson™ Knowledge Catalog).
Generate an authentication key to automatically authenticate to the IBM® Cloud Pak for Data platform or to a specific instance of a service from a script or
application. For more information, see Generating API keys for authentication.

About this task


The IBM Product Master specs get created in the IBM Watson Knowledge Catalog with a distinct "ipm_spec_asset" asset type. Each spec is created as an asset and can be
viewed in the IBM Watson Knowledge Catalog.

Procedure
To start publishing to the IBM Watson Knowledge Catalog, proceed as follows.

1. Specify the following properties in the restConfig.properties file. For more information, see restConfig.properties file parameters.
cpd_cluster_host_url
cpd_username
wkc_auth_api_key
wkc_ipm_catalog_name
2. Specify the publishSpecOnWKC property in the config.json file. For more information, see config.json file parameters.
3. Add the following in the mdm-rest-cache-config.xml file.

<!-- cache to store tokens for WKC access -->
<cache name="accessTokenCache" maxElementsInMemory="100">
    <cacheEventListenerFactory
        class="net.sf.ehcache.distribution.RMICacheReplicatorFactory"
        properties="replicateAsynchronously=true, replicatePuts=true,
                    replicateUpdates=true, replicateUpdatesViaCopy=false,
                    replicateRemovals=true"/>
</cache>
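For reference, the restConfig.properties entries from step 1 might look like the following (a sketch with placeholder values; only the property names are taken from this documentation):

cpd_cluster_host_url=https://cpd-cluster.example.com
cpd_username=pm_publisher
wkc_auth_api_key=<generated API key>
wkc_ipm_catalog_name=Product Master Specs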

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Using Attribute collection console


Access the Attribute collection console from the left pane by clicking Data Model Manager > Spec console and Attribute collection.

Tasks
You can perform the following tasks in the Attribute collection console page.
Task Description
Refresh Click to refresh and reflect the updates that are done in the Attribute collection console page.
Delete Select an attribute collection, and click Delete. The Delete Confirmation pop-up window opens. To delete the attribute collection, click Yes.
Note: If the attribute collection that you want to delete has any associations (dependencies), you need to remove the associations to successfully delete the attribute collection.
Select all Click to select all the attribute collections.
On the right side of the page, click to filter the attribute collections. In the Search field, enter the attribute collection name, and click Apply.
Select View system generated attribute collection to also display all the system-generated attribute collections. The system-generated attribute collections start with a prefix of "[*]Generated Default Core Collection for <name>" and are read-only.
Sort Use the drop-down list to sort the attribute collection names by ascending or descending order. The default value is ascending order.
View
Click on the right side of the page to toggle between the card and grid view. The selected view displays an orange highlight. The default value is the card view.
Show more Click to display more attribute collections. By default, the page displays 40 attribute collections and supports lazy loading for a better user experience. You can keep loading more attribute collections through Show more, but after you click Refresh, you see only 120 attribute collections. You can configure the number of attribute collection cards that the Attribute collection console page displays through the specPageSize property in the config.json file. For more information, see specPageSize.
You can see the details of any attribute collection through any of the following links:

<attribute collection name> link
Associations link
Attributes link
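For example, an entry to change the page size might look like the following (a sketch; the specPageSize property name is documented, while the surrounding structure of the config.json file is assumed):

{
  "specPageSize": 60
}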

Editing or creating an attribute collection


Using the Attribute collection console, you can edit an existing attribute collection or create an attribute collection.

Related concepts
Attribute collections

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Editing or creating an attribute collection


Using the Attribute collection console, you can edit an existing attribute collection or create an attribute collection.

About this task


To edit an existing attribute collection or create a new attribute collection, proceed as follows.

To edit an existing attribute collection, click any of the following.


<attribute collection name> link
Associations link
Attributes link
To create a new attribute collection, click Create AC.

The attribute collection details page opens.

Procedure
1. In the Name field, add or edit the name of the attribute collection.
2. In the Description field, add or edit the description for the attribute collection.



3. In the Attribute collection properties section, add or edit the following.
Table 1. Attribute collection properties section
Associations Displays associations with various catalogs, hierarchies, and workflows.
4. In the Attribute collection section, you can build an attribute collection by searching and selecting attributes from existing specs.
a. In the left pane, search for the specs and attributes.
Table 2. Search specs and attributes section - Search
Spec name Specifies the name of the spec.
Attribute path Specifies the path or name of the attribute to be searched inside a spec.
Spec type Specifies the type of spec.
Primary spec
Secondary spec
Locales Specifies all the locales set for the company.
Search Select whether you want to search specs and attributes or only specs.
Show more Click to display more specs and attributes. By default, the pane displays 50 nodes and supports lazy loading for a better user experience. You can keep loading more nodes through Show more, but after you click Refresh, you see only 120 nodes. You can configure the number of nodes that the pane displays through the attributeCollectionTreeNodeCount property in the config.json file. For more information, see attributeCollectionTreeNodeCount.
b. Select the specs and attributes, and click to move them to the Selected specs and attributes pane. You can select single or multiple specs and attributes (click while pressing Ctrl or Shift).
c. Optional: To search again, click .
d. Optional: To remove a spec or attribute, select it and click Remove.
e. Verify the additions in the Selected specs and attributes pane, and click Save.

Related reference
config.json file parameters

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Using Workflow console


Access the Workflow console from the left pane by clicking Data Model Manager > Workflow console.

Tasks
You can perform the following tasks in the Workflow console page.
Task Description
Refresh Click to refresh and reflect the updates that are done in the Workflow console page.
Delete Select a workflow, and click Delete. The Delete confirmation pop-up window opens. To delete the workflow, click Yes.
Note: If the workflow you want to delete has any associations (dependencies), you need to remove the associations to successfully delete the
workflow.
On the right side of the page, click to filter the workflows. In the Search field, enter the workflow name, and click Apply.
Sort Use the drop-down list to sort the workflow names by ascending or descending order. The default value is ascending order.
Create collab Click to directly create a collaboration area through the Workflow console. For more details, see Editing or creating a collaboration area.
Create workflow Click to create a workflow through the Workflow console. For more details, see Editing or creating a workflow.

Editing or creating a workflow


Using the Workflow console, you can edit an existing workflow or create a new workflow.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Editing or creating a workflow


Using the Workflow console, you can edit an existing workflow or create a new workflow.

About this task


To edit an existing workflow or create a new workflow, proceed as follows.

To edit an existing workflow, click <workflow name> link.


To create a new workflow, click Create workflow.

The workflow details page opens.



Procedure
1. In the Workflow properties section, add or edit the following.
Field Description
Name Displays the name of the workflow.
Description Displays the description for the workflow.
Access Control Group Click to select the Access Control Group.
Container type Click to specify the container type as catalog or hierarchy.
2. In the Workflow structure section, drag-and-drop the steps to start building a workflow. You can use the following types of steps to build a workflow.
Table 1. System steps
Step Description
Initial First step of a workflow. There is only one Initial step per workflow. Every workflow must end in either a Success, Failure, or Fixit step.
Success Last step of a workflow. In this step, the system attempts to check the entries into the source container (catalog or hierarchy) of the collaboration area for the workflow.
Failure Last step of a workflow. In this step, the system removes the entries from the collaboration area, and changes to the attribute values in the workflow are lost.
Fixit Special step that is used to repair entries. You can directly send entries from any step in the workflow to the Fixit step for not satisfying a condition. From the Fixit step, the administrator of the collaboration area can decide whether to remove the entry from the workflow or check the entry into the source container (catalog or hierarchy) of the collaboration area. You cannot correct the item and return it to its original place.
Table 2. User-defined steps
Step Description
General Allows you to modify attributes. When this step is created, by default, Reserve to edit is disabled.
Modify Allows you to modify attributes. When this step is created, by default, Reserve to edit is enabled.
Or Approval An approval step where any performer can approve or reject entries and immediately move them to the next step, as specified by either the APPROVED or REJECTED exit values.
And Approval An approval step where all the performers must approve or reject entries before they move to the next step, as specified by either the APPROVED or REJECTED exit values. If the administrator of the collaboration area approves or rejects any entries, they immediately move to the next step.
Dispatch Allows you to decide the next step in the workflow by adding additional exit values to the workflow step definition. This is a view-only step, and you cannot modify any attributes.
Wait Makes entries wait in the step. When used with a timeout, you can check in entries to the source container at a specific date. You can also use it with logic in the IN() and OUT() functions to trigger an action or delay.
Table 3. Automated steps
Step Description
Automated Does not require any action to proceed to the next step. The logic of this step is defined in the IN() and OUT() functions of the step's script.
Make Unique Removes all copies of the entries from other branches of the workflow, and thus ensures that entries reaching this step do not exist anywhere else in the workflow before moving to the next step. This usually happens following a split, where there are several next steps that are associated with a single exit value.
Merge Merges the entries coming from several steps after a split. If you have 'n' steps pointing to the merge step, then 'n' copies of each entry must enter the merge step before that entry can move to the next step. Use the Condenser step to reduce the number of incoming steps.
Condenser Used before a merge step to reduce the number of paths into the merge step by having several steps pointing to the condenser.
Interim Checkin Saves any changes that are made to attribute values in this workflow back to the source container (catalog or hierarchy), and then moves the entries to the next step in the workflow.
Interim Checkout Undoes the changes that are made to attribute values in this workflow. Performs a checkout that re-fetches the attribute values from the source container (catalog or hierarchy).
Nested Workflow Used to include another workflow as a step. The exit values for this step are the same as the termination exit values for the included nested workflow.
Delete workflow step Select a workflow step, and click Delete. The Delete confirmation pop-up window opens. To delete the workflow step, click Yes.
In the Workflow structure section, you can click to toggle the grid view, click to rearrange the workflow steps, or click to take a printout of the workflow.

3. In the Step properties : <name> section, add or edit the following.


Table 4. Step properties: <name> section
Field Description
Step details
Step type Displays the type of the step.
Step name Displays the name of the step.
Allow recategorization Allows recategorization of entries in this step.
Allow import into step Allows import of entries into this step.
Reserve to edit Allows editing of entries only if reserved.
Exit details
Exit value Displays the exit value of the step.
Exit step Displays the exit step.
Performers (User-defined steps)
Users Click to open the Add or edit users pop-up window. From the left pane, select users to move them to the right pane, and click OK.
Roles Click to open the Add or edit roles pop-up window. From the left pane, select roles to move them to the right pane, and click OK.
Attribute collections
For single edit Click to open the Add or edit attribute collections pop-up window. From the left pane, select attribute collections to move them to the right pane. You can specify which attribute collections you want as read-only, editable, or required, and then click OK.
For multi edit
For item popup
(Not in the Initial step)
Notifications
Send emails when step starts Select and enter the recipient email addresses to send emails when the step starts.
Send emails when step timeouts Select and enter the recipient email addresses to send emails when the step times out.
A performer can directly Approve or Reject through the email notification.
Script
Add script Click to open the Add script pop-up window. From the left pane, select supported expressions and move them to the right pane, and click OK.
Any section with a validation error is highlighted with an icon. Click the icon to see the error details and view the attributes with failed validation.
4. Click Save.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Viewing dashboards
The dashboard feature enables you to view various integrated dashboards.

Dashboard page role access


Table 1. Dashboards feature
The available roles are: Admin, Basic, Catalog Manager, Category Manager, Content Editor, Digital Asset Manager, Full Admin, GDS Supply Editor, Merchandise Manager, Service Account, Solution Developer, and Vendor. Each dashboard subfeature is accessible to the following roles.
Subfeature Roles with access
Audit History Catalog Manager, Digital Asset Manager, Full Admin
DAM Summary Full Admin, Merchandise Manager, Digital Asset Manager
Data Sanity Catalog Manager, Full Admin
Jobs Summary Full Admin, Content Editor, Merchandise Manager, Catalog Manager, Category Manager, Solution Developer
User Summary Full Admin
Vendor Summary Full Admin, Catalog Manager, Category Manager
Worklist Summary Full Admin, Content Editor, Catalog Manager, Category Manager, Supplier Manager
Depending upon your privileges, you can access various reporting dashboards in Product Master. The dashboards are designed to visualize key metrics for each persona. From the summary-level dashboards (main level), you can drill down to the detailed-level dashboards for easy data analysis.

Types of dashboards
Following are the various types of dashboards available:

Audit History dashboard
DAM Summary dashboard
Data Sanity dashboard
Job Summary dashboard
User Summary dashboard
Vendor Summary dashboard
Worklist Summary dashboard

Specific tasks
On individual dashboards, when applicable, click the relevant icon and perform the following tasks.

Open
Click to open the item in the single-edit page. For a category, the page also supports state maintenance.
Refresh
Click to refresh the page.
Save as list
Click to open the Save as List pop-up window. Enter an appropriate name in the List name field, and click Apply.
Export to Excel
Click to open the Export to Excel pop-up window. Select the relevant script from the Select script list, and click Apply. A Microsoft Excel file format report of the items gets downloaded to your computer.
Checkout
Click Checkout > <collaboration area> to check out the selected item or category from the specified collaboration area. The single-edit page opens to display the item or category details in an editable mode.



Important: You must select the Checkout and Edit checkbox while you are creating a collaboration area to enable the Checkout functionality.
Actions
Click to see entry preview scripts.

Import
Click to import a Microsoft Excel file format report of the items from your computer.
Export
Click to download a Microsoft Excel file format report of the items to your computer.

Updates to the mdmce-roles.json file


The Dashboards module format is updated in the mdmce-roles.json file with new additions and modifications to the existing dashboard structure.

Example

Existing format (for example, the Audit History dashboard):

{
  "name": "Dashboards",
  "index": "6",
  "sequentialMenu": [
    {
      "name": "Audit History"
    },
    {
      "name": "DAM Summary"
    },
    {
      "name": "User Summary"
    },
    {
      "name": "Worklist Summary"
    }
  ]
}

New format (for example, the Data Model Summary dashboard):

{
  "name": "Dashboards",
  "index": "6",
  "sequentialMenu": [
    {
      "name": "Audit History",
      "properties": {
        "url": "",
        "userid": "",
        "dashboardId": ""
      }
    },
    {
      "name": "Data Model Summary",
      "properties": {
        "url": "/dashboards/jsp/index.jsp?dashboardName=Data_Model_Summary&previewMode=true",
        "userid": "Full Admin",
        "dashboardId": "1001"
      }
    }
  ]
}

Audit History dashboard


The Audit History dashboard is accessible to the Catalog Manager, Digital Asset Manager, and Full Admin roles.
DAM Summary dashboard
The DAM Summary dashboard is accessible to the Full Admin, Merchandise Manager, and DAM roles.
Data Sanity dashboard
The Data Sanity dashboard is accessible to the Catalog Manager and Full Admin roles.
Job Summary dashboard
The Job Summary dashboard is accessible to the Full Admin, Content editor, Merchandise Manager, Catalog Manager, Category Manager, and Solution Developer
roles.
User Summary dashboard
The User Summary dashboard is accessible to the Full Admin role.
Vendor Summary dashboard
The Vendor Summary dashboard is designed for the Vendor persona and is accessible to the Full Admin, Catalog Manager, and Category Manager roles.
Worklist Summary dashboard
The Worklist Summary dashboard is accessible to the Full Admin, Content Editor, Catalog Manager, Category Manager, and Supplier Manager roles.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Audit History dashboard


The Audit History dashboard is accessible to the Catalog Manager, Digital Asset Manager, and Full Admin roles.

Click Dashboards > Audit History to see the Audit History page. The Audit History page displays the Catalog and Hierarchy tabs.

Catalog tab
Catalog
Select an appropriate catalog from the Catalog list.
View By
Select an appropriate time interval from the View By list. The valid values are:

Last 6 days
Last 6 weeks
Last 6 months
Last year

<catalog name>: History


Displays a column chart that is populated with X-axis displaying the specified time interval and Y-axis displaying items that are edited, added, or removed in the
specified catalog.



Hierarchy tab
Hierarchy
Select an appropriate hierarchy from the Hierarchy list.
View By
Select an appropriate time interval from the View By list. The valid values are:

Last 6 days
Last 6 weeks
Last 6 months
Last year

<hierarchy name>: History


Displays a column chart that is populated with X-axis displaying the specified time interval and Y-axis displaying categories that are edited, added, or removed in
the specified hierarchy.

Important:

Even if you change an attribute several times, the chart counts only a single instance.
You can sort the data by using the table header columns.
In the following scenarios, the Audit History dashboard might display mismatched data:
The old and new values are the same.
While creating a new item or category, the graph and table display the count of modified attributes.

For the specific tasks that you can perform on this dashboard (when applicable), see Specific tasks.
Note: Following are known limitations for a Selection user.

On the Y-axis, there is no user access filter for the removed items or categories count. The removed count remains the same for all the users.
When you add an item to a selection list and later delete the item, the graph shows such an item in the Removed event only, and not in the Added event. For other users, such an item entry gets added in both the Added and Removed events.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

DAM Summary dashboard


The DAM Summary dashboard is accessible to the Full Admin, Merchandise Manager, and DAM roles.

Click Dashboards > DAM Summary to see the DAM Summary page. The DAM Summary page provides the key metrics that are related to digital assets (Digital Asset
catalog and Digital Asset Primary hierarchy).

TOTAL ASSETS
Displays the total number of digital assets available in the Digital Asset catalog.
AVG ASSETS ADDED / DAY
Displays the average count of digital assets that are added per day in the last 30 days as a pie chart.
AVG ASSETS UPDATED / DAY
Displays the average count of digital assets that are updated per day in the last 30 days as a pie chart.
Assets by Size
Displays a doughnut chart that is populated with the digital assets information based on the size; small (less than 5 MB), medium (5 - 10 MB), and large (more than
10 MB) file size.
Assets by File Type
Displays a doughnut chart that is populated with the digital assets count based on file type; JPG, PNG, GIF, BMP, and Others (REST assets).
Assets Published by Channel
Displays a doughnut chart that is populated with the maximum count of digital assets across channels. Also, shows the maximum count of unpublished digital
assets.
Assets With Items
Displays a histogram that is populated with X-axis displaying the number of items that are associated and Y-axis displaying assets. Also, shows count of the digital
assets with no items.
Assets by Popular Categories
Displays a histogram that is populated with X-axis displaying the categories and Y-axis displaying assets. Also, shows the count of unassigned digital assets.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Data Sanity dashboard


The Data Sanity dashboard is accessible to the Catalog Manager and Full Admin roles.

Click Dashboards > Data Sanity to see the Data Sanity page. The Data Sanity page displays the following sections.

Data Completeness by catalog and channels



Catalog
Select an appropriate catalog from the Catalog list.
Channel
Select an appropriate channel from the Channel list.
Items Completeness
Displays a donut graph that is populated with the items grouped according to the completeness percentage range (0 - 100 %). You cannot click the 0 % and 100 %
sections.
This widget can be configured for the percentage range. In the restConfig.properties file, change the values of the following properties:

completenessLowSplit
completenessMediumSplit
completenessHighSplit

By default, the percentage range is 0 - 25, 25 - 50, 50 - 75, and 75 - 100.
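For example, the default ranges correspond to entries like the following in the restConfig.properties file (a sketch that assumes each property holds the upper bound of its band):

completenessLowSplit=25
completenessMediumSplit=50
completenessHighSplit=75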

Data Quality by Rules


Catalog
Select an appropriate catalog from the Catalog list.
Rule type
Select an appropriate rule type from the Rule type list.
Compliant Vs Non-compliant items
Displays a donut graph that is populated with the items grouped according to the compliant and non-compliant criteria.
In the donut graph, click Non-compliant > Multi-edit page.
In the multi-edit page, if any of the items has a validation error, the Product ID column header displays an error flag.
Click an item to open the details in the single-edit page. In the single-edit page, you can see the following:

The page title displays the total number of errors in the item.
Click the <error count> link to highlight all the attributes that violate the rule.

Step Performance
View By
Select to see the performance of steps on a fortnightly, monthly, or quarterly basis.
Top 5 Steps
Displays a bar graph that is populated with the list of the top five steps based on performance in minutes, in ascending order.
Worst 5 Steps
Displays a bar graph that is populated with the list of the worst five steps based on performance in weeks, in ascending order.

For the specific tasks that you can perform on this dashboard (when applicable), see Specific tasks.

If your Data Sanity dashboard is loading slowly, see Data Sanity dashboard loading slow.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Job Summary dashboard


The Job Summary dashboard is accessible to the Full Admin, Content editor, Merchandise Manager, Catalog Manager, Category Manager, and Solution Developer roles.

Click Dashboards > Job Summary dashboard to see the Job Summary dashboard page. The Job Summary dashboard page displays the following sections.

Total Jobs
Displays the total number of Jobs.
Jobs by Types
Displays the total number of Jobs across the following types: Import, Export, Report, and Other.
Job runs by status
Displays a doughnut chart that is populated with the Job runs based on the status: Completed, Failed, and Running. Enter a date and time to see the Jobs status for a specific date and time range.
Job runs by failure
Displays a line graph that is populated with the X-axis displaying the types of Jobs and the Y-axis displaying the number of Jobs failed.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

User Summary dashboard


The User Summary dashboard is accessible to the Full Admin role.

Click Dashboards > User Summary to see the User Summary dashboard page.



Note: In the User Summary dashboard page, to filter the user summary for a vendor or an internal user, select the appropriate value from the User Filter list, and click Apply Filter.

TOTAL USERS
Displays the total number of the users for a company.
ACTIVE USER SESSIONS
Displays the total active user sessions present in the last 24 hours for a company.
Active vs Inactive Users
Displays a doughnut chart that is populated with the count of active and inactive users present for a company.
Active User Sessions
Displays a list of all the active user activity sessions for a company depending on the login time in hours. To filter the active user sessions, select an appropriate time duration from the list. By default, the duration is Last 24 hours.
Peak Usage Trend
Displays a line chart that is populated with the peak system usage for a duration of every four hours, starting from 12am-4am and ending at 8pm-12am, for a day. Also displays the page hits for each duration.
Recent User Activity
Displays the recent user activity for a company depending on the time visited in days. To filter the recent user activities, select an appropriate day duration from the list. By default, the duration is Last 1 day.
Users
Displays a list of all the users for a company.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Vendor Summary dashboard


The Vendor Summary dashboard is designed for the Vendor persona and is accessible to the Full Admin, Catalog Manager, and Category Manager roles.

Before you begin, ensure that you configure and set up the Vendor Persona.

Click Dashboards > Vendor Summary dashboard to see the Vendor Summary dashboard page. The Vendor Summary dashboard page displays reports about vendors that are useful for the Supplier or Vendor Manager to track the vendor performance.

Catalog
Select the catalog for which you want to see the vendor data.
View By
Select the duration as either the last 7 days, 15 days, 1 month, 6 months, or 1 year.
TOTAL VENDORS
Displays the count of total vendors present in a company.
Top 5 Vendors By Items Submitted
Displays a column chart that is populated with the five best-performing vendors based on the items that are submitted or checked in to the specified catalog for the time duration. All the calculation is done at the catalog level.
Bottom 5 Vendors By Items Submitted
Displays a column chart that is populated with the five least-performing vendors based on the items that are submitted or checked in to the specified catalog for the time duration. All the calculation is done at the catalog level.
Vendor Score
Displays a scatter chart that is populated with the X-axis displaying the items processed and the Y-axis displaying the approval rate in percentage. You can hover the mouse pointer over any point on the graph to see the name of the vendor.

The scatter chart maps the X-axis to the vendor score, which is based on the number of accurate items sent by a vendor.
Vendor Efficiency
Displays a scatter chart that is populated with the X-axis displaying the items reworked and the Y-axis displaying the turnaround time (TAT) in hours. You can hover the mouse pointer over any point on the graph to see the name of the vendor. The calculation for vendor efficiency is based on the Collaboration Area transactions, for the catalogs where the Vendor Organization Hierarchy is present as the secondary hierarchy.
For the items that the vendor sends to the Supplier or Vendor Manager, any of the following scenarios are applicable:

The Supplier Manager approves the items, and the items move to the catalog.
The Supplier Manager rejects the items, the items are sent back to the vendor for rework, and after the rework the vendor sends the items back to the Supplier Manager. The same scenarios then apply to the items again.
The Supplier Manager takes no action on the items.

The Vendor Summary dashboard considers only the items that are reworked for the vendor efficiency calculation (items that have passed at least a single iteration of rework and have eventually moved to the catalog). The TAT is measured for such items, and it governs the efficiency of a vendor.
The formula for calculating the vendor turnaround time (TAT) is:

Vendor TAT = (sum of the TAT for all the items of the vendor) / (total number of the items that are checked in to the catalog by the vendor)

Example
If a vendor has a total of four items, with the TAT for the first item being 1 hour, the second item 2 hours, the third item 3 hours, and the fourth item 4 hours, the vendor TAT is calculated as:

(1+2+3+4)/4 = 10/4 = 2 hours, 30 minutes

The TAT for an item is the time that is taken by the vendor to rework the item (the time that is taken by the vendor to move an item from the rework step of the Vendor Collaboration Area to the Internal approval step of the Owner Collaboration Area). The TAT for an item is the sum of the time that is taken for all the iterations of rework done by the vendor on that item.



Related concepts
Working with the Vendor persona

Related tasks
Configure Vendor Persona

Related reference
Vendor properties - env_settings.ini file
Vendor properties - common.properties file

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Worklist Summary dashboard


The Worklist Summary dashboard is accessible to the Full Admin, Content Editor, Catalog Manager, Category Manager, and Supplier Manager roles.

Click Dashboards > Worklist Summary dashboard to see the Worklist Summary dashboard page. The Worklist Summary dashboard page displays the following sections.

Total Entries
Displays the total number of entries (items) present in the selected collaboration area.

Entries by priority
Displays the count of entries of different collaboration areas that have different priorities (High, Medium, Low, or None). Click any card to see the list of the collaboration areas with entry counts.

Entries by age
Displays the age of the entries from when they were checked out in the particular collaboration area. By default, the age range is less than 15 days, 15 - 30 days, 30 - 60 days, and more than 60 days. This widget can be configured for the age range. In the restConfig.properties file, change the values of the following properties:

ageLowSplit
ageMediumSplit
ageHighSplit
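For example, the default age ranges correspond to entries like the following in the restConfig.properties file (a sketch that assumes each property holds the upper bound of its band, in days):

ageLowSplit=15
ageMediumSplit=30
ageHighSplit=60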

Worklist entries by priority or by age


Appears when you click any card. Displays the list of the collaboration areas with entry counts. Click any collaboration area to display a graph of steps with counts.

For the specific tasks that you can perform on this dashboard (when applicable), see Specific tasks.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Additional features
You can additionally configure the Free Text Search, Generate Stock Keeping Unit (SKU), machine learning (ML) assisted data stewardship, Suspect Duplicate Processing (SDP), Vendor Persona, and Global Data Synchronization (GDS) features.

Relationship and Linked editors


IBM® Product Master application has Relationship and Linked editors in the single-edit, search, multi-edit, and Rule edit pages.
Bulk loading items
The Bulk loading items feature enables any third-party integration to load items into the IBM Product Master catalog efficiently.
Generating Stock Keeping Unit (SKU)
Any product that has multiple variations is often represented by a base product number that is common to all the variations defined by specific SKUs.
Using machine learning (ML) assisted data stewardship
Machine learning assisted data stewardship provides multi-fold benefits like accelerating manual tasks, shortening review cycles, and improving data quality. The machine learning-based product categorization feature helps data stewards spend time on more meaningful activities than deciding the right category for the products. The feature is based on open source, lightweight Python libraries, which have no additional licensing overhead.
Using Suspect Duplicate Processing (SDP)
Suspect Duplicate Processing involves identifying possible suspects, matching, and if applicable merging data from an Operational catalog to the Master catalog.



When an item is imported or updated into the Operational catalog, the item is compared against the existing items of the Master catalog, and duplicate details, if any, are displayed in the Suspect Duplicate Processing tab. Then, the item can be processed further based on the user selection.
Configuring Item completeness
Item completeness feature allows you to track the completion percentage of any item. The completion is calculated based on the preselected attributes in an
attribute collection.
Working with the Vendor persona
Vendor persona enables an external user to access Product Master to add or update their products.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Relationship and Linked editors


IBM® Product Master application has Relationship and Linked editors in the single-edit, search, multi-edit, and Rule edit pages.

Relationship editor
You can use the Relationship editor to select related item, and also search entries based on any attribute of the selected catalog.

By default, the selected attribute is the Display attribute of the selected catalog. The attributes in the Select attribute list are populated based on the rich_search_indexed_only property that is set in the common.properties file. All the attributes from Primary specs, Secondary specs, and sub-specs are populated in the Select attribute list. You can quickly identify an indexed attribute (theme accent color label) and a non-indexed attribute (gray label) by the color labels.

To use the editor, proceed as follows:

1. Click the icon to open the Relationship pop-up window.


2. Specify a catalog from the Select catalog list.
3. Select an attribute from the Select attribute list.
4. Specify a search term in the Search value field, and press Enter.

Linked editor
You can use the Linked editor to search entries based on any attribute of the selected catalog.

By default, the selected attribute is the Destination attribute of the selected catalog. The attributes in the Select attribute list are populated based on the rich_search_indexed_only property that is set in the common.properties file. All the attributes from Primary specs, Secondary specs, and sub-specs are populated in the Select attribute list. You can quickly identify an indexed attribute (theme accent color label) and a non-indexed attribute (gray label) by the color labels.

To use the editor, proceed as follows:

1. Click the icon to open the Linked pop-up window.


2. Select an attribute from the Select attribute list.
3. Specify a search term in the Search value field, and press Enter.

Search results
The search results display all the entries matching the search term in the chosen attribute.
Table 1. Attribute and Input details
Attribute Input
Binary A free text field with an input value as a file name.
Currency A free text field (except the currency symbol).
Date only Specify an appropriate value by using the icons.
Date and Time Specify an appropriate value by using the icons.
Flag Select and specify either True or False.
Image A free text field with an input value as a file name.
Image URL A free text field with an input value as a URL.
Integer A free text field.
Linked A free text field with an input value as the Primary key of an item.
Lookup Table A free text field with an input value as the Primary key of an item.
Number A free text field.
Number Enum Select and specify a value of the enumeration.
Relationship A free text field with an input value as the Primary key of an item.
Rich Text A free text field.
Sequence A free text field.
String A free text field.
String Enum Select and specify a value of the enumeration.
String Enum Rule A free text field.
Thumbnail Image A free text field with an input value as a file name.
Thumbnail Image URL A free text field with an input value as a URL.
Time zone Select and specify a value of the time zone.
URL A free text field.

Known limitations
- The editor does not support a search on the attribute type of Password.
- Searching by using a term that contains a number sign (#) or an ampersand (&) does not fetch search results, and displays the "Error occurred while fetching related item" error message.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Bulk loading items

The Bulk loading items feature enables any third-party integration to load items into the IBM Product Master catalog efficiently.

There are two integration APIs available to fetch the schema and load items, and the APIs are compliant with the standard JSON schema. The bulk load is done by scheduled jobs, thus keeping the APIs separate from the back-end processing.

Prerequisites
Import the mdmce-env.zip file. After the file is imported successfully, verify that the data model mentioned in the Postinstallation instructions topic for the Bulk loading items feature is present.
The common.properties file has data_import_queue properly configured. For more information, see common.properties file parameters.

Procedure
1. Get the spec schema by using the Get Schema API.

Usage
Get the spec schema for a catalog.
Method
GET
URL
https://productmaster.<host>.com/api/v1/integrations/catalogs/<catalogname>/schema?category=<categoryname>
Where,
catalogname - Specifies the name of the catalog.
categoryname - Specifies the name of the category whose secondary spec schema needs to be returned in the response. It is an optional query parameter. If not specified, the schema for all the secondary specs for all categories from all hierarchies that are associated with the catalog gets returned in the response.
Table 1. Data types and standard JSON schema types
Data type Standard JSON schema type
CURRENCY Number
NUMBER Number
NUMBER_ENUMERATION Number
FLAG Boolean
INTEGER Integer
SEQUENCE Integer
Other data types String
For more information on response body, see Integrations REST APIs - Catalog.

2. Push the item data JSON to the queue by using the bulk load API.

Usage
Load bulk items to the queue to import them into the catalog.
Method
POST
URL
https://productmaster.<host>.com/api/v1/integrations/bulkload
Request and response body
The Primary spec of the catalog in which the item needs to be imported must have an attribute that is unique, indexed, and mandatory. The attribute path of this attribute should be mentioned against identifierAttributeName, and the attribute value should be mentioned against identifierAttributeValue in the request body.
The following lists represent the supported and non-supported attribute types for the attribute that is mentioned against identifierAttributeName in the request body.
Supported attribute types: Currency, Date, Integer, Image URL, Number, Number Enumeration, Rich Text, Sequence, String, String Enumeration, Thumbnail Image URL, and URL.
Non-supported attribute types: Binary, Flag, Image, Lookup Table, Password, Relationship, Thumbnail Image, Time Zone, and Linked.
For more information on request and response body, see Integrations REST APIs - Catalog.
Deduplication of items
If the request JSON contains multiple data JSON objects for the same item:
Multiple item data JSON objects get merged into a single JSON such that attribute values from the last JSON entry overwrite all the previous entries present in the request body.
Category "mappings" from multiple item data JSON objects get merged.

Survivorship
In the Survivorship Configuration Lookup table, each attribute is mapped to a trusted source system from which the application can accept an update.
The source for each item in the request JSON gets checked against the trusted source system from the Survivorship Configuration Lookup table to decide whether to accept the update on an individual attribute basis.

Survivorship Configuration Lookup table details

Name: Survivorship Configuration
Attributes:
Source: Name of the source for item data.
Attributes: List of comma-separated full paths of leaf attributes (attributes that have no child attribute).
Example
Product Spec/Product Name, Product Spec/Price, Product Spec/Delivery Address/City

Mandatory attributes should be present in the Attributes list against the specific catalog in the Lookup table, else the item save fails.
Note: If the Survivorship Configuration Lookup table is empty, then the survivorship check is skipped.
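For example (an illustration with hypothetical source names), the following two rows would make the application accept name and price updates only from an ERP feed, and city updates only from a CRM feed:

Source: ERP_FEED
Attributes: Product Spec/Product Name, Product Spec/Price

Source: CRM_FEED
Attributes: Product Spec/Delivery Address/City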

3. Configure and trigger the "Bulk Load Job Report" to fetch item data JSON from the queue and load items into the catalog.

Usage
Fetch item data JSON from the queue and load items into a catalog.
Configuration
The following table lists the inputs that are required for configuration.
Name Type Description Is Mandatory? Default Value
Container Name String Name of the catalog to which items need to be imported. Yes NA
Batch Size String Number of items to be committed to the database in one go. No 500
Poll timeout (secs) String Timeout for queue polling. (The job waits for messages to be available on the queue until the poll timeout expires.) No 3600 secs
Run the report job to load items into the configured catalog.
If an item exists in the catalog matching identifierAttributeName and identifierAttributeValue, then the existing item gets updated. The category is mapped (according to "mappings" from the item data JSON) and the attributes are set (according to "attributes" from the item data JSON).
Otherwise, a new item is created in the catalog. The category is mapped (according to "mappings" from the item data JSON) and the attributes are set (according to "attributes" from the item data JSON).
Loading existing items into the catalog:
Multi-occurring attributes: All the occurrences are replaced with the incoming data. This is true for:
Multi-occurring attributes of a simple type (say, String),
Multi-occurring grouping attributes,
Grouping attributes that contain multi-occurring attributes.
Category mappings:
If "mappings" has a null or empty array (that is, "[]") value in the item data JSON, there is no change in the existing category mapping of the item.
The existing category mapping for an item is never deleted.
New item import: If "mappings" has a null or empty array, the item gets created under the unassigned category.
Move item to collaboration step: If there is an error while saving an item to a catalog, the item gets created or updated in the collaboration area step that is configured against the catalog in the Bulk Load Error Configuration Lookup table.

Bulk Load Error Configuration Lookup table details

Name: Bulk Load Error Configuration
Attributes:
Container Name
Collaboration Area Name
Collaboration Step Name
The collaboration area step name that is configured in the Lookup table should have:
The "Allow Import into Step" flag enabled.
The "Allow Recategorization" flag enabled.
Note: If the Bulk Load Error Configuration Lookup table is empty, then this step is skipped.

After all the items from the queue are processed or the "Poll Timeout" expires, the report job gets completed and a report.out file is generated (on successful completion of the job) containing a summary of the import.
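The following is a minimal sketch of the two calls (the URLs, identifierAttributeName, identifierAttributeValue, "attributes", "mappings", and the per-item source are taken from this documentation; the exact JSON envelope and the catalog, attribute, and category names are assumptions, so see Integrations REST APIs - Catalog for the authoritative request and response bodies):

# Fetch the JSON schema for a catalog; the category query parameter is optional
curl -X GET "https://productmaster.<host>.com/api/v1/integrations/catalogs/Product%20Catalog/schema?category=Electronics"

# Push one item data JSON object to the queue for bulk loading
curl -X POST "https://productmaster.<host>.com/api/v1/integrations/bulkload" \
  -H "Content-Type: application/json" \
  -d '{
    "source": "ERP_FEED",
    "identifierAttributeName": "Product Spec/SKU Code",
    "identifierAttributeValue": "SKU-1001",
    "attributes": {
      "Product Spec/Product Name": "Wireless Mouse",
      "Product Spec/Price": "19.99"
    },
    "mappings": ["Hierarchy/Electronics/Accessories"]
  }'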

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)



Generating Stock Keeping Unit (SKU)
Any product that has multiple variations is often represented by a base product number that is common to all the variations defined by specific SKUs.

Before you begin


1. Create a Lookup table with the name "SKU Generation Config Lookup" and the following details.
Field Description
Catalog The name of the catalog, for example, "Product Offerings".
Attribute The full path of the comma-separated attribute list on which the SKU needs to be generated, for example, "Product Specification/Colors, Product Specification/Features".
Product Category The full path of the category for an item, including the hierarchy, for example, "Hierarchy/Electronics/Mobile and Smartphones/Apple".
Sku Category The full path of the category for SKUs, including the hierarchy, for example, "Hierarchy/SKU".
2. In the Primary or Secondary spec of an item, create an attribute "Parent Product" of the "relationship" type.
3. Create a category in which the variants need to be mapped, and specify the same in the Sku Category field in the Lookup table.
4. Create a new workflow for SKU generation.
5. Create a "Sku Generation" step in the new workflow.
The "Sku Generation" step launches a class file that creates variations of the products based on the attributes that are specified in the Lookup table, and maps them to the SKU category.
6. In the workflow step, select Allow Re-categorization to enable the categorization of the product.
7. Add the SKUExplosion-ext.jar to the class path for the workflow extensions.
8. Add the following path in the "Sku Generation" workflow:

//script_execution_mode=java_api="japi://com.ibm.sku.extensions.workflow.SKUExplosion.class"

You can also import the class files from the com.ibm.mdm.extensions.sku.zip file by using the Docstore custom tools, and then update the class path for the "Sku Generation" workflow:

//script_execution_mode=java_api="japi://uploaded_java_classes:com.ibm.sku.extensions.workflow.SKUExplosion.class"

Note: For generating SKUs, the Lookup table supports only two attributes of the String type.

About this task


In IBM® Product Master, the SKU generator is used to create an array of SKUs for a product automatically. After you define the product and the SKU category, run the SKU generator to automatically build all the SKU products.

Procedure
1. Create a new collaboration area with the catalog and workflow that is created for SKU generation.
2. Check out an item from the catalog into the collaboration area.
3. Add values to the attributes specified in the Lookup Table on which the variant needs to be generated.
4. Click Exit in the workflow step.

Results
After successful completion of the workflow, the variants are generated and mapped to the SKU category.
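For example (a worked illustration using the attributes from the SKU Generation Config Lookup example above), if the checked-out item has Product Specification/Colors = Black, White and Product Specification/Features = 5G, LTE, the workflow generates four SKU variants (Black + 5G, Black + LTE, White + 5G, and White + LTE) and maps each variant to the Sku Category, with each variant relating back to the base product through the "Parent Product" relationship attribute that is created in the Before you begin steps.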

What to do next
Right-click com.ibm.mdm.extensions.sku.zip, and select Save Target As to download the sample SKU code to your computer.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Using machine learning (ML) assisted data stewardship


Machine learning assisted data stewardship provides multi-fold benefits like accelerating manual tasks, shortening review cycles, and improving data quality. The machine learning-based product categorization feature helps data stewards spend time on more meaningful activities than deciding the right category for the products. The feature is based on open source, lightweight Python libraries, which have no additional licensing overhead.

Product Auto Categorization


In Product Master, machine learning (ML) is used to automatically categorize product names into appropriate product categories. The trained ML model assigns appropriate categories to products that are imported into Product Master. This feature requires an initial data file that contains product names and the expected product categories to be specified for the training. The training data must have a representation of product names from every category. This representation must contain enough data samples that the model learns the possible variations of product names in every category. With this feature, the next sets of products that are imported into the system do not need their categories specified at import time; ML assigns the appropriate categories. The model can be improved by passing in the feedback from a manual review of identified misclassifications.



Ensure that the spec contains a Suggested Categories grouping multi-occurrence attribute, which can be in the primary or secondary spec. The value of the minimum occurrence is 0, the value of the maximum occurrence is 3, and the attributes are as follows:

Suggested Categories/Name
Suggested Categories/Confidence Score

The spec should have a Feedback attribute of the Boolean type. The Feedback attribute can be present in the primary or secondary spec and is used in retraining. If you manually update the mapping of an item in the collaboration area, the value of the Feedback attribute gets set to True in the workflow.

Product Attributes Enrichment


In Product Master, the product description is used to identify the values of attributes for the underlying data model. This feature enables enriching a product just from its descriptions, with no additional need to set the values explicitly. To use this feature, you must provide the list of categories and their products, with their attributes and values, in a Microsoft Excel or CSV file. Internally, the feature uses probabilistic data structures to store the data model, and at run time it assigns the best value to attributes from a given product description.

Product data standardization


To address data quality issues, IBM® Product Master uses ML to standardize product descriptions and fix any evident misspellings. The ML service learns contextual information from product descriptions. These learnings help to identify and replace the incorrect words in the descriptions. The service also uses feedback to improve on identifying and fixing the known issues. As part of training the ML service, a Microsoft Excel or CSV file containing all true product descriptions needs to be provided.

Sample machine learning workflow project


Right-click sample_project_ML.zip, and select Save Target As to download the sample workflow code to your computer.

Training the machine learning services model


You can train the machine learning services model by using the training reports in the Admin UI.
Configuring machine learning Services
You need to set up machine learning services before you can use the feature.
Accessing the machine learning by using the REST APIs
The REST API documentation contains URLs, a description of each URL, and sample input and output data for the machine learning APIs.

Related tasks
Installing Python and machine learning modules

Related reference
ml_configuration file parameters
ML properties - common.properties file

Related information
ML - Post installation instructions

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Training the machine learning services model


You can train the machine learning services model by using the training reports in the Admin UI.

About this task


In IBM® Product Master, there are three types of reports that can be used for training the ML services.

Machine Learning Services Training Report - Used for the initial training of the ML services. After successful completion of this job, the data is inserted into the
MongoDB collection, and the ML services start training.
Input parameter

Container Name
The name of the catalog.
File Location
The location of the file that is used for training.
If the file location is System Directory, add the file name only.
The file should be present in the following location.
$TOP/supplier_base_dir/<company_name>/ctg_files/<file_name>
Example
catalog_data.xlsx
If the file location is Docstore, add the relative path of the file.
Example

/archives/catalog_data.xlsx
Training Samples file
The file used for training.
Service type
The type of training (categorization).

Output

report.out file
Example

{"detail": "Started <ServiceType> training for model: <CompanyCode>_<CatalogName>_<ServiceType>."}

Machine Learning Active Services Report - Used to monitor training jobs and fetch the list of all models that are successfully deployed by ML services, along with the version, URL, status, and port number details.
Models whose status is deployed are successfully trained. Other statuses can be training_inprogress and training_initiated.

Output

report.out file
Example

{ "services": [ { "name": "<CompanyCode>_<CatalogName>_<ServiceType>", "version": <Version>, "status": "<Status>",


"category": "<ServiceType>" } ] }

Machine Learning Services Retraining Report - Used for categorization retraining. The report fetches the list of items that were mapped after the last execution of the
retraining job; if no retraining job is found, the report fetches the last initial training job. This report combines the items whose mapping was updated in the
collaboration area and the items that you categorized manually due to an incorrect prediction from the ML service. After successful completion of this job, the data is added
to the MongoDB collection with the existing training data, and ML training starts on the entire data.
Input parameter

Container Name
The name of the catalog.
Collaboration Name
The name of the collaboration area.
Training Samples file
The file used for training.
Service type
The type of training (categorization, attributes, and standardization).

Output

report.out file
A report.out file that contains a message.
Example

{"detail": "Started <ServiceType> training for model: <CompanyCode>_<CatalogName>_<ServiceType>."}

To fetch items that were manually mapped, enable the subscription in the $TOP/etc/default/history_subscriptions.xml file for the collaboration area as follows, and restart the
server.

<subscription object_type="COLLABORATION_ITEM">
<subscription_levels>
<log event_type="CREATE" history_level="DELTA"/>
<log event_type="UPDATE" history_level="DELTA"/>
<log event_type="DELETE" history_level="DELTA"/>
</subscription_levels>
</subscription>

Procedure
To run a training report, proceed as follows:

1. Log in to the Admin UI.


2. Go to Product Manager > Reports > Reports Console.
3. From the Action column, click the Report Console icon to run the report.
4. Check the job status through the Schedule column. You can also check the status through Data Model Manager > Scheduler > Scheduler status.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Configuring machine learning Services


You need to set up machine learning services before you can use the feature.

Before you begin


Ensure that you complete the following tasks:

Enable machine learning while installing the Persona-based UI, and ensure that the machine learning services are up and running.

Add an entry in the Machine Learning Attributes Configuration Lookup table for attribute value-mapping (the name of the attribute on which machine learning is trained)
with the following parameters:
Id: Auto-generated depending on the Container and Service type.
Container: The name of the catalog.
Service type: The type of service (Categorization, Attributes, and Standardization).
Attributes: The comma-separated full paths of the attributes on which the machine learning model is trained, in the <Spec_name>/<Attribute_name> format. For multiple attributes, the sequence should be the same as in the training sheet.
Example
Product Specification/Description, Product Specification/Name
Add an entry in the Machine Learning Services Threshold Lookup Table to set the threshold value over which a category is mapped for an item, with the
following parameters:
Id: Auto-generated depending on the Container and Service type.
Container: The name of the catalog.
Service type: The type of service (Categorization).
Threshold Value: If the confidence score is greater than the threshold value, the category gets mapped.
You can add custom code for the workflow by any of the following methods:
Upload the class files in the Docstore folder (sample ZIP file).
Sample path

//script_execution_mode=java_api="japi://uploaded_java_classes:com.ibm.ml.extensions.workflow.MLAutoCategorizationStep.class"

Add the JAR file to the class path.
Sample path

//script_execution_mode=java_api="japi://com.ibm.ml.extensions.workflow.MLAutoCategorizationStep.class"

Procedure
To start using machine learning services in your existing workflows or create new workflows, proceed as follows:

1. Log in to the Admin UI.


2. Go to Data Model Manager > Workflows > New Workflow.
3. In the right pane, under the New Workflow section, specify the appropriate values in the Name, Description, Access Control Group, and Container Type fields.
4. Click Add step, add the following steps, and click Save.

Product Standardization
Launches the machine learning standardization service /mlapi/v1/standardization and standardizes spellings and variations based on the description.
Add the following script, which runs on exit from the step:

//script_execution_mode=java_api="japi://com.ibm.ml.extensions.workflow.MLStandardizationStep.class"

Product Classification
Launches the machine learning categorization service /mlapi/v1/categorization and categorizes items based on the machine learning response if
the confidence score is greater than the threshold value mentioned in the Machine Learning Services Threshold Lookup Table. Also sets the
Suggested Categories and Confidence Score attributes. Add the following script, which runs on exit from the step; the item then moves to the next step, Distribute:

//script_execution_mode=java_api="japi://com.ibm.ml.extensions.workflow.MLAutoCategorizationStep.class"

Distribute
This is an automated step:
If an item is mapped with a category, the item moves to the next step mentioned in the Pass exit value (Product Values).
If no category is mapped to an item, the item moves to the next step mentioned in the Fail exit value (Manual Categorization).
Add the following script, which runs after exit of this step:

//script_execution_mode=java_api="japi://com.ibm.ml.extensions.workflow.DistributeStep.class"

Product Values
Launches the machine learning attributes service /mlapi/v1/attributes and sets the item attributes that are suggested by the machine learning service
based on the long description. Add the following script, which runs on exit from the step; the item then moves to the next step, Approval:

//script_execution_mode=java_api="japi://com.ibm.ml.extensions.workflow.MLAttributeValueStep.class"

Manual Categorization
The item moves to manual categorization in the following scenarios:
If the hierarchy returned for the category does not exist.
If the category does not exist in the hierarchy.
If the machine learning response has a confidence score below the specified threshold value.
In manual categorization, you must manually map the category to the item. When the category is mapped, the value of the Feedback attribute is set
to true; this is later used by the retraining job. Add the following script, which runs after exit of this step; the item then moves to the next step, Approval:

//script_execution_mode=java_api="japi://com.ibm.ml.extensions.workflow.ManualCategorizationStep.class"

Approval
Approval step for the user to review changes that were made during the Product Classification and Product Values steps. Removes the last two
occurrences of suggested categories set in the Product Classification step. Add the following script, which runs on exit from the step:

//script_execution_mode=java_api="japi://com.ibm.ml.extensions.workflow.ApprovalStep.class"

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Accessing the machine learning by using the REST APIs
The REST API documentation contains URLs, a description of each URL, and sample input and output data for the machine learning APIs.

There are three types of REST APIs used for machine learning:

Training REST APIs - Train on the input data in a MongoDB collection and store the resulting model in the MongoDB database.
Prediction REST APIs - Load and query the model that is created by the training REST APIs to predict the appropriate output.
Status REST API - Returns the status of the current models.

For more information, see the following machine learning REST API documentation:

Machine learning REST APIs
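
As an illustration of calling one of the prediction endpoints, the following sketch POSTs a product description to the /mlapi/v1/categorization service with java.net.http. The host, port, and JSON field names below are placeholder assumptions; take the actual request and response contract from the REST API documentation linked above.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

/**
 * Sketch of invoking the categorization prediction endpoint. The host,
 * port, and JSON field names are placeholders, not the documented contract.
 */
public class CategorizationPredictionCall {
    public static void main(String[] args) throws Exception {
        String body = "{\"description\": \"iPhone 7 Plus provides better cameras...\"}";
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://ml-host:9999/mlapi/v1/categorization"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        // Expected (assumed) shape: suggested categories with confidence scores.
        System.out.println(response.statusCode() + ": " + response.body());
    }
}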

The training REST APIs require data in Microsoft Excel format, which gets added to the database. The data needs to be precise, accurate, and clean for the machine learning
module to conduct the training operation.

Create a Microsoft Excel XLSX format file in the English language with the following columns:

Categorization
The sheet has three columns: the category, followed by one column per Spec Name/Attribute Name.
Example row 1
Category: Primary Hierarchy/Electronics/Mobile and Smartphones/Apple
Spec Name/Attribute Name: iPad pro 11(2020) smartphone was launched on March 2020. The phone comes with an 11-inch touchscreen display with a resolution of 1668 x 2388 pixels at a pixel density of 265 pixels per inch (ppi). iPad pro 11(2020) is powered by an Octa-core Apple A12Z Bionic processor. It comes with 4 GB of RAM. The iPad pro 11(2019) runs iOS 13.4.1 and is powered by 8134 mAh nonremovable battery.
Spec Name/Attribute Name: Apple XS
Example row 2
Category: Primary Hierarchy/Electronics/Mobile and Smartphones/Apple
Spec Name/Attribute Name: iPhone 7 Plus provides better cameras, long-lasting battery life, powerful processor, enhanced stereo speakers along with vibrant display. Sleek design with water- and splash-resistant enclosure makes it every bit impressive.
Spec Name/Attribute Name: Apple iPad

Standardization
The sheet has two columns: the category and one Spec Name/Attribute Name column.
Example row 1
Category: Primary Hierarchy/Electronics/Mobile and Smartphones/Apple
Spec Name/Attribute Name: iPad pro 11(2020) smartphone was launched on March 2020. The phone comes with an 11-inch touchscreen display with a resolution of 1668 x 2388 pixels at a pixel density of 265 pixels per inch (ppi). iPad pro 11(2020) is powered by an Octa-core Apple A12Z Bionic processor. It comes with 4 GB of RAM. The iPad pro 11(2019) runs iOS 13.4.1 and is powered by 8134 mAh nonremovable battery.
Example row 2
Category: Primary Hierarchy/Electronics/Mobile and Smartphones/Apple
Spec Name/Attribute Name: iPhone 7 Plus provides better cameras, long-lasting battery life, powerful processor, enhanced stereo speakers along with vibrant display. Sleek design with water- and splash-resistant enclosure makes it every bit impressive.
Where:

Category
Full path of the category, including the hierarchy, for example,

Primary Hierarchy/Electronics/Mobile and Smartphones/Apple

Attribute
Full path of the attribute (specName/Attribute Name), for example,

Product Specification/Model Name

Note:
The Lookup table configuration and training sheet can have only one attribute for standardization. The following attribute types are supported: String,
Number, Integer, Number Enum, and String Enum.
All categories must have an approximately equal number of products in the training data. Training on 20 to 50 products per category should be sufficient for a good
model.
Attributes
The sheet has the following columns: the full path of the category, the full path of the attribute, the actual value for the attribute, and then N columns that each hold one variation of the attribute value (Variation 1 through Variation n).
Example row 1
Category: Primary Hierarchy/Electronics/Mobile and Smartphones/Apple
Attribute: Product Specification/Model Name
Actual value: Apple
Variations: Apple-7s, Apple7
Example row 2
Category: Primary Hierarchy/Electronics/Mobile and Smartphones/Apple
Attribute: Product Specification/Model Name
Actual value: iPhone 7
Variations: IPhone - 7
Example row 3
Category: Primary Hierarchy/Electronics/Mobile and Smartphones/Apple
Attribute: Product Specification/Colors
Actual value: Blue
Variations: Blues
Where:

Category
Full path of the category, including the hierarchy, for example,

Primary Hierarchy/Electronics/Mobile and Smartphones/Apple

Attribute
Full path of the attribute (specName/Attribute Name), for example,

Product Specification/Model Name

Value
Actual value for the attribute, which the machine learning model should predict, for example,

Apple 4

Variation
Variation in the actual value of the attribute, for example,

Apple 7

Note: The training sheet can have any number of variations of the actual value, in the N variation columns.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Using Suspect Duplicate Processing (SDP)


Suspect Duplicate Processing involves identifying possible suspects, matching, and, if applicable, merging data from an Operational catalog to the Master catalog. When an
item is imported into or updated in the Operational catalog, the item is compared against the existing items of the Master catalog, and duplicate details, if any, are displayed
in the Suspect Duplicate Processing tab. Then, the item can be processed further based on the user selection.

As an example, consider a data model where Product Master is used to perform master data management of products. Once a product is introduced or updated in the SDP
Collaboration Area or Operational catalog, the SDP engine, using Elasticsearch, compares the attributes of the incoming product against the existing products in the Master
catalog. Based on product attributes like Product Identifier, Name, and Description, the SDP engine displays a list of possible suspects with a duplication score. You can
review and compare all the duplicate suspects with this incoming product through the attributes merge table. The attributes merge table is displayed only after selection of any
one of the possible suspects. After reviewing all the suspected duplicate products, you can mark the incoming product as a Match or No Match.
SDP covers the following possible scenarios:

Scenario: Confident that the two products are the same

If all or the important attributes of the current incoming product are exactly the same as those of an existing master product and you are confident that the incoming product is a
duplicate, select the existing master product and click Match to mark the incoming product as a duplicate of the master product. Once a product is marked as a
"Match" (duplicate product), the product remains in the Operational Catalog and is not created in the Master Catalog.

Scenario: Unsure whether the two products are the same

If the important attributes of the current incoming product partially match those of an existing master product and you conclude that the incoming product
is a duplicate, select the existing master product and click Match to mark the incoming product as a duplicate of the respective master product. Also, if you are
confident that the incoming product attributes are more accurate than those of the existing master product, you can merge attributes from the incoming product into
the existing master product by selecting individual attributes and clicking Merge.

Scenario: Confident that the two products are not the same

If only one or a few low-priority attributes match, and you are confident that the two products are not the same, mark the incoming product as a "No
Match". After you select No Match, the product is created in the master catalog.

Prerequisites
Ensure that you have completed the following tasks:

Deploy and configure the latest Admin UI and Persona-based UI versions.


Enable the Free text search feature and start the Free text search services.
Identify, create, and index all catalogs with Elasticsearch.
Deploy the sdp-ext.jar file located at $TOP/mdmui/libs/mdm/sdpExtensions and update the classpath for the Admin UI and Persona-based UI. Get the list of
master catalogs and users performing the SDP operations.
Configure SDP for catalogs. For more information, see Running SDP automation script.

SDP terms
The following terms are used in SDP:

Golden copy
The item that exists in the Master catalog.
Match
An item in the Operational catalog that is a duplicate of an item in the Master catalog.
No match
An item in the Operational catalog that is not a duplicate.

Running SDP automation script


To use the SDP feature, you need to create SDP entries for the master catalog. SDP entries are created by using the SDP automation script. Multi-occurrence, grouping,
and grouping multi-occurrence attributes are not supported in the SDP feature.
Performing Suspect Duplicate Processing (SDP)
Suspect Duplicate Processing involves searching, matching, and if applicable merging data from an Operational catalog to a Master catalog.
Automating SDP
The SDP feature requires you to manually open each item and make a match or no-match decision. This is mostly a time-consuming and tedious activity. You can
automate the SDP process so that entries are automatically flagged without any manual intervention.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Running SDP automation script


To use the SDP feature, you need to create SDP entries for the master catalog. SDP entries are created by using the SDP automation script. Multi-occurrence, grouping, and
grouping multi-occurrence attributes are not supported in the SDP feature.

Procedure
You need to be an Admin user to run this script because the script creates catalogs, hierarchies, specs, and workflows for the operational catalog.

1. Log in to the Admin UI.


2. Go to Data Model Manager > Scripting > Script Sandbox.
3. In the Script Input Pane field, enter the name of the master catalog (in square brackets), username, and password, separated by commas with no spaces.
Format example
[Catalog1Name,Catalog2Name],Username,Password
Note: If you are not going to use the Automated SDP feature, specify an empty string ("").
4. In the Script Pane field, enter the following automation script Java™ class, and click Run Script.

//script_execution_mode=java_api="japi://
com.ibm.mdm.extensions.sdp.datamodel.ScriptingSandboxGenerateDataModelImpl.class"

Results
After successful completion of the SDP automation script, the following are created:
Table 1. SDP entries

Operational Catalog
Created corresponding to the specified Master catalog. The operational catalog has an "SDP" suffix. Associates with the Post Save script.
Reference Field Attribute
A Reference Field grouping attribute for the primary specs of the master catalog. The attribute has the following read-only fields:
MasterID - Contains the primary key of the master item for which the current item is a duplicate.
Score - Duplicate score returned by Elasticsearch.
User - Name of the user who requested the SDP operation.
Reference Attribute Collection
Contains the Reference Field grouping attribute.
<Container Name> SDP Attribute Collection
The <Master Catalog Name> SDP attribute collection. Associates with the primary and secondary specs that are associated with the master catalog.
SDP view
Creates an operational catalog view that is called SDP View. Associates with the following:
<Container Name> SDP Attribute Collection
Reference Attribute Collection
<Container Name> SDP Collaboration Area
The SDP Collaboration Area for each Operational catalog. The source container for this collaboration area is the Operational catalog. Associates
with the following:
SDP Workflow
SDP Step
Master Item Creation script
Creates an item in the Master Catalog during SDP processing. The Master Item Creation Script gets associated with the Operational catalog as a Post
Save script; during SDP processing, if an item is marked as "No Match", this script creates the same item in the Master catalog.
The Master Item Creation Script also gets associated with the SDP Step; in this case, the script creates an item in the Master catalog only after the item exits the step.
Associates with the following:
SDP Workflow
SDP Step
Auto Match Item Script
Creates an Auto Match Item Script. This script performs SDP automation based on the Do_AutoSDP flag and the threshold value in the SDP
Container Lookup Table, and automatically classifies items as match or no match based on the score and the threshold value.
The script is attached to the Automated Step, is implemented in the IN method, and processes items that come out of the "Product Enrichment" step.
Lookup Tables
Two lookup tables that are used for the SDP configuration:
SDP Container Lookup Table
SDP Workflow Step Mapping Lookup Table
Duplicate Delete Report
Creates a delete report that is used to delete "No Match" items present in the Operational catalog. You can run the report to delete duplicate
items immediately, or you can schedule a job to delete them at a specific time.
Table 2. SDP Container Lookup Table

Master_Catalog
The name of the master catalog.
Operational_Catalog
The name of the operational catalog.
Enable_SDP
Specify to enable SDP. Possible value is True or False. The default value is none.
Matching_Attributes
Attributes that help in finding the possible duplicates. The default value is All. The possible values are as follows:
All - All attributes are used to identify possible duplicates.
Attribute collection name - If duplicate identification is to be based on a limited set of attributes, you can provide an attribute collection name.
Note: Matching_Attributes does not support Currency, Number, or Date attributes.
SDP_Response_Size
The maximum number of duplicates that can be displayed on the Suspect Duplicate Processing tab. The default value is 3.
Minimum_Should_Match
Controls the number of terms that must match. Possible value is a valid integer percentage in the 30 - 90% range. The default value is 30%.
Threshold
The minimum value against which the score of a duplicate is compared.
Attribute_Weights
The name of the attributes weight lookup table. If you want to give weights to the attributes, create a lookup table by using the SDP Attributes
Weight Lookup Spec. This feature helps in deciding the matching entry: attributes that have more weight contribute more to the decision.
Attribute weights are applied only when both of the following conditions are fulfilled:
The Matching_Attributes field value is a valid attribute collection name. If the value is set to the default value of "All", attribute weights are not applied; for more information, see the Matching_Attributes field description.
The Attribute_Weights field value is a valid lookup table name.
Do_AutoSDP
Specify whether you need to automate the SDP process. Possible value is true or false. By default, the value is false.
Username
Specify your username.
Password
Specify your password in plain text.

Table 3. SDP Attributes Weight Lookup Spec

Attribute_Name
The absolute attribute path in the specName/attributeName format. For example, Product Details Spec/Product Id.
Weightage
A valid integer value (1 - 100). The lower the value, the lower the weight of the respective attribute; the higher the value, the higher the weight.

Table 4. SDP Workflow Step Mapping Lookup Table

Key
A valid workflow name.
Value
A valid workflow step name where the SDP is processed.
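
The following sketch illustrates the weighting idea only; the actual duplicate score is computed by Elasticsearch's Lucene scoring, and the method and map names here are hypothetical.

import java.util.Map;

/**
 * Illustrative weighting sketch; the real duplicate score comes from
 * Elasticsearch's Lucene scoring. This only shows how per-attribute
 * weights (1-100) tilt a match score toward high-weight attributes.
 */
public class WeightedAttributeMatch {

    static double weightedScore(Map<String, Boolean> attributeMatches,
                                Map<String, Integer> weights) {
        double gained = 0, possible = 0;
        for (Map.Entry<String, Boolean> e : attributeMatches.entrySet()) {
            int weight = weights.getOrDefault(e.getKey(), 1);
            possible += weight;
            if (e.getValue()) {
                gained += weight;
            }
        }
        return possible == 0 ? 0 : 100.0 * gained / possible;
    }

    public static void main(String[] args) {
        Map<String, Boolean> matches = Map.of(
                "Product Details Spec/Product Id", true,
                "Product Details Spec/Color", false);
        Map<String, Integer> weights = Map.of(
                "Product Details Spec/Product Id", 90,
                "Product Details Spec/Color", 10);
        System.out.printf("score = %.1f%n", weightedScore(matches, weights)); // 90.0
    }
}
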
By default, the SDP Workflow and SDP Step are added to the respective lookup table. If you want to add your own workflow and step, associate the Master Item
Creation Script with your step as follows:

1. Go to Workflows > Workflow Console.


2. Select your workflow.
3. Click <step_name> > Script > Edit.
4. In the Script Pane field, enter the following automation script Java class, and click Run Script.

//script_execution_mode=java_api="japi://com.ibm.mdm.extensions.sdp.scripts.ScriptCreateMasterItem.class"

What to do next
You need to complete the following tasks after you have successfully deployed and configured SDP.

Check the users in the SDP Collaboration Area, SDP Workflow, and SDP Steps. By default, only an Admin user is added as a performer. The Admin user then needs to add
the users or roles who need to perform SDP processing.

Ensure that the Master Item Creation Script is selected in the Post-save Script list. If the Master Item Creation Script is missing, proceed as follows:
1. Go to Data Model Manager > Scripting > Script Console.
2. Select Catalog Script from the list.
3. Click Edit for the Master Item Creation Script.
4. Select the Master Item Creation Script in the Post-save Script list, and then click Save.
Check the SDP Container Lookup Table for the master catalog lookup entry. Verify all attribute values, such as the master catalog name, operational catalog name, Enable_SDP,
and matching attributes.
Check the SDP Workflow Step Mapping Lookup Table and verify an entry having Key=SDP Workflow and Value=SDP Step.
Check for the Reference Field. The Reference Field should be associated with the primary spec of the catalog.
The Reference Field should not be present in the view of the Master catalog. If present, remove the Reference Field from the attribute collection that is associated
with the Master catalog view. The Reference Field should be present in the operational catalog view (SDP View). The SDP View should be the default view for the
operational catalog.
Check the following tabs:
The single-edit page for the Master catalog item should have the Duplicate tab.
The single-edit page for the Operational catalog item should have the Suspect Duplicate Processing tab.
Open the report console and schedule the report jobs for deletion for each catalog.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Performing Suspect Duplicate Processing (SDP)
Suspect Duplicate Processing involves searching, matching, and if applicable merging data from an Operational catalog to a Master catalog.

Procedure
To perform Suspect Duplicate Processing for an item, proceed as follows:

1. Log in to the Persona-based UI. The Home page opens.


2. In the Home page, locate the <Container Name> SDP Collaboration Area. For more information, see Configuring SDP for a catalog.
3. Click Product Enrichment to open the Product Enrichment page.
4. Click Add to add an item.
5. Select the new item, and click Open.
6. In the Attribute tab, enter the details for the new item, and click Save.
Important: In SDP, do not add Reference Field in an attribute collection and map it to a workflow or container as a mandatory attribute because the value is system-
generated.
7. Click SDP Step to open the SDP Step page.
8. Click the Suspect Duplicate Processing tab. A table displays the list of suspected duplicate values. The matching score is calculated based on the Lucene scoring formula of
the Elasticsearch feature. The duplicates are displayed in descending order of the matching score.
9. Select a suspected duplicate value (highlighted in color) to see a table displaying the following details:

Attributes
Displays the attributes of the item.
Current Entry
Displays the value of the attribute in the Operational catalog.
Matching Entries
Displays the value of the attribute in the Master catalog.

10. You can perform the following actions:

If the suspected duplicate is a duplicate, click Mark as Match.
If the suspected duplicate is a golden copy, click No Match.
If you select Mark as Match, you can merge the values of the suspected duplicate:
To merge, select the row and click Merge. You can also click Merge all differences to merge all the differences.
From the single-edit page, click Done to complete the merge and move the item to the next step.
From the Explorer page, click Save to complete the merge.
Important: You cannot merge checked-out items from either the operational or the master catalog.
If you want to go back and select another suspected duplicate, click Change Selection.
Important: You cannot change the selection after merging.
You can also use the SDP feature through the Explorer page.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Automating SDP
The SDP feature requires you to manually open each item and make a match or no-match decision. This is mostly a time-consuming and tedious activity. You can automate the SDP
process so that entries are automatically flagged without any manual intervention.

Before you begin


Ensure that you completed and verified the following:

Running SDP automation script.


SDP Container Lookup Table

This feature adds a step to the existing workflow process:

Product enrichment
Automated SDP (new step)
SDP processing step

When you add data in the Product Enrichment step and move the entries to the Automated SDP step, the algorithm compares the matching score with the threshold,
decides whether the entry is a "match" or a "no match", and takes the appropriate action.

If the item is designated as a "match", the item is not created in the Master catalog and remains as an entry in the Operational catalog.
If the item is designated as a "no match" (the match score is less than the threshold value), the item is created in the Master catalog.

You can bypass the feature by setting the value of the Do_AutoSDP attribute to false. When the feature is bypassed, all the items move directly from the Product Enrichment step to the
SDP step.
Note: Depending on the item in the Operational catalog, you need to manually merge any data from the current item to the Master catalog item.
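
A minimal sketch of that decision, assuming the score and threshold are already available; the class, enum, and method names are illustrative, not the shipped Auto Match Item Script.

/** Illustrative sketch of the Automated SDP decision; names are not the product API. */
public class AutoSdpDecision {

    enum Outcome { MATCH_STAYS_IN_OPERATIONAL, NO_MATCH_CREATE_IN_MASTER }

    /**
     * Compares the best suspect's score against the Threshold value from
     * the SDP Container Lookup Table, mirroring the behavior described above.
     */
    static Outcome decide(double bestSuspectScore, double threshold) {
        return bestSuspectScore >= threshold
                ? Outcome.MATCH_STAYS_IN_OPERATIONAL
                : Outcome.NO_MATCH_CREATE_IN_MASTER;
    }

    public static void main(String[] args) {
        System.out.println(decide(12.4, 10.0)); // MATCH_STAYS_IN_OPERATIONAL
        System.out.println(decide(4.2, 10.0));  // NO_MATCH_CREATE_IN_MASTER
    }
}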

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Configuring Item completeness
The Item completeness feature allows you to track the completion percentage of any item. The completion is calculated based on the preselected attributes in an attribute
collection.

About this task


Catalog or category completeness in the Search, Explorer, or Free Text Search pages
Displays a Percentage Complete column for a catalog on which completeness is enabled. The Percentage Complete column has three distinct color-coded ranges:

Red (Incomplete) - less than 50%
Orange (Partially complete) - greater than 50% but less than 100%
Green (Complete) - equal to 100%

You can hover the mouse pointer over the icon in the Percentage Complete column to see the details. If an item is not saved after completeness is enabled on a
catalog or hierarchy, you see "Completeness information not available".
Catalog or category completeness in the single-edit page
The upper right area of the single-edit page displays the COMPLETED section. The section displays the completion information for the core attributes. The range of
completion is 0 - 100%, with three distinct color-coded ranges:

Red - less than 50%
Orange - greater than 50% but less than 100%
Green - equal to 100%

The section also displays the count of Missing Attributes (the number of attributes that are required to make an item 100% complete). Click the Missing Attributes link to see all the
missing attributes.
Note:

You cannot click the Missing Attributes link if the completion percentage is 100%.
If a catalog has existing items and you enable item completeness, all the items initially show "0%" and "0 Missing Attributes". To see details in the
COMPLETED section and Completeness tab, save the item.

Tip: You can save all the items through the Catalog pane in the Admin UI.
Completeness tab
Displayed only if the catalog has localized attributes or the item is attached to any channel. The tab has the following sections:

Completeness by Core Attributes/Channels
Click any channel to view details. By default, you can see five channels. The section is not displayed if the item is not mapped to any channel.
Completeness by Locales
Displays the following sections:

Completeness by Locales : <Core Attributes/Channels>
Missing Attributes : <Core Attributes/Channels>

If a catalog has no localized attributes but has channels, click <channel name> to be redirected to the Attribute tab of the single-edit page, with a
pane displaying the list of missing attributes. You can click each locale under the Completeness by Locales : <Core Attributes/Channels> section to see the
list of the corresponding missing attributes in the Missing Attributes : <Core Attributes/Channels> section. Click a missing attribute to be redirected to the
Attribute tab.
Restriction: If a missing attribute is part of a group or a multi-occurrence group, you need to first add an occurrence of that attribute to see the pane
displaying the list of missing attributes.
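
For reference, the percentage and color bands described above reduce to a calculation like the following sketch. The method names are illustrative, and the treatment of an exact 50% (shown here as orange) is an assumption, since the ranges above leave that boundary unspecified.

/** Illustrative calculation of the Percentage Complete value and its color band. */
public class ItemCompleteness {

    static double percentComplete(int filledAttributes, int totalTrackedAttributes) {
        if (totalTrackedAttributes == 0) {
            return 0;
        }
        return 100.0 * filledAttributes / totalTrackedAttributes;
    }

    /** Bands as described above: red below 50, orange up to (but not including) 100, green at 100. */
    static String band(double percent) {
        if (percent < 50) return "Red (Incomplete)";
        if (percent < 100) return "Orange (Partially complete)";
        return "Green (Complete)";
    }

    public static void main(String[] args) {
        double p = percentComplete(9, 12); // e.g. 9 of 12 tracked attributes populated
        System.out.printf("%.0f%% -> %s%n", p, band(p)); // 75% -> Orange (Partially complete)
    }
}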

Procedure
To use the Item completeness feature, import the Data Model file and complete the following procedure:

1. Log in to Admin UI.


2. Import the mdmce-env.zip file.
a. Browse to System administrator > Import Company Environment.
b. Click Browse and select the mdmce-env.zip file.
c. Click Import to load the generic persona data model. For more information, see Postinstallation instructions.
3. Go to Product Manager > Catalogs > Catalog Console.
4. Select the <catalog name> you want, and click Attributes.
5. In the Catalog detail for '<catalog name>' section, attach the Channel Hierarchy imported from the mdmce-env.zip file by using the add secondary hierarchy list.
a. If you want to use an existing Hierarchy, specify a hierarchy link, and map the Path Attribute to the name of the channel.
b. If you do not want to update an existing Hierarchy, create a hierarchy as follows:
i. Go to Product Manager > Hierarchy > Hierarchy Console > New.
ii. Create a hierarchy for the channel and select the channel name as the value for the Select Path Attribute list.
iii. Go to Data Model Manager > Specs/Mapping > Spec Console > Lookup Table > Completeness Lookup Table Spec > Edit. If you have done a full import
or Completeness Lookup Table import, save entries of the Lookup Table.
iv. Click ChannelName link, and edit as follows:
1. In the String enumeration rule field, click CLICK HERE.
2. In the Enumeration Rule Editor, add the name of the new hierarchy in the following code line for the rule:

ctr = getCategoryTreeByName("Hierarchy");

v. Click ChannelPath link, and edit as follows:
1. In the Value rule field, click CLICK HERE.
2. In the Value Rule Editor, add the name of the new hierarchy in the following code line for the rule:

ctr = getCategoryTreeByName("Hierarchy");

3. Edit the CompletenessValueRule.wpcs Value Rule that is attached to each Completeness_X attribute of the Catalog Primary Spec, and add the name of
the new hierarchy in the following code line for the rule:

function isMappedTo(channelPath) {

var categories = entry.getCtgItemCategoryPaths("/", true, getCategoryTreeByName("Hierarchy"));

}

vi. Click CatTreeId_CatId and edit as follows:


1. In the Value rule field, click CLICK HERE.
2. In the Value Rule Editor, add the name of the new hierarchy in the following code line for the rule:

catTree = getCategoryTreeByName("Hierarchy");

6. Add completeness attributes in the Catalog Primary Spec that store the Completeness score. There should be one Completeness_Core attribute for Core and one
Completeness_<channel name> attribute for each <channel name>, for example, "Completeness_Amazon", with the following specifications:
Type=Number, Precision=16, Minimum Occurrence=0, Hidden and Indexed.
Note: A completeness attribute should not be localized and also should not have "Localized Names" set.
7. Attach the CompletenessValueRule.wpcs Value rule script that is at the $TOP/mdmui/samples folder to each completeness attribute.
8. Add the name of the new <channel name> in the following code line as the value of the channelName variable for the rule. By default, the value of this variable is
"Core".

var channelName = "Core";

Example

var channelName = "Amazon";

Important: Do not add the Completeness attribute in a user-defined Attribute Collection.


9. Create an attribute collection on which you want to calculate Item Completeness. Go to Data Model Manager > Attributes Collections > Attribute Collection Console
> New. You can add primary and secondary specs. You can add a maximum of 50 attributes in this attribute collection.
Important:
Do not include the following SDP-specific attributes in this attribute collection: Master Id, User, Score, and MergedDate.
If you have updated this attribute collection:
Save the entries of the Lookup Table.
Save the item to get the latest missing attribute count on the modified data.
10. Add an entry in the Completeness Lookup table. Go to Product Manager > Lookup Tables > Lookup Table Console > Completeness Lookup Table > Action > Add.
a. In the AttributeCollectionName column, add the name of the new attribute collection that was created earlier.
b. In the ContainerName column, click to select the catalog on which you want to implement Item completeness.
c. In the ChannelName column, click to select the channel name. The list displays all channels available under the Hierarchy and the Core channel (Primary
Hierarchy).
d. Click Save to auto-populate the Container_Channel and ChannelPath columns. For the "Core" channel, the ChannelPath column is empty.
11. Add all the completeness attributes to the attribute collection that is provided as the value of the FTS Display Attribute Collection property in the Catalog Attributes
of the catalog for which you configured Item Completeness.
12. Click Save to see the completion percentage in the following pages:
Single-edit page (of the specified catalog)
Multi-edit page (of the specified catalog)
Search page
Explorer page
Free Text Search page
Note: The Item Completeness feature calculates the completeness status on the populated or empty attributes before Save. The attributes that get populated with
the help of scripts (Entry Build, Pre-processing, or Post Save Script) have an initial empty value and hence are calculated as empty. However, after you save the
item again, the updated value is automatically reflected in the completion percentage.

Related concepts
Item completeness in the UI

Related reference
Item completeness properties - restConfig.properties file

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Working with the Vendor persona


The Vendor persona enables an external user to access Product Master to add or update their products.

The Vendor user can add or update items from the collaboration area specific to the vendor, but can view items belonging to the Vendor organization only through
Search.

Vendors access the system through the internet, so a security model is in place to restrict each user to only the specific data pertaining to their organization. Vendors
cannot access the Admin UI.
Important: The application supports LDAP-based Vendor user login with the following conditions:

The user should be a member of the Vendor group on the LDAP server.

The user should be mapped under an organization on the LDAP server, and that organization should also be present in the Product Master application under the Vendor
Organization Hierarchy.

Table 1. Security model

Collaboration Area
Each vendor has its own collaboration area, which can be created on demand. All users who are listed under the vendor in the organization hierarchies
are administrators of this collaboration area. Because all users are assigned to the vendor role too, they get access to the underlying workflow steps, but only
through their own collaboration area.
Workflow
The workflow on which the vendor collaboration areas can be created. The workflow has a Product Edit step, an External Approval step, and a Rework
step. The Product Edit step allows import and recategorization. The External Approval step is for reviewing and making final changes to the item before
it moves to the owner for approval. The Rework step is where the item moves back in case the owner rejects the item for any further changes to
be made. The vendor role can be the performer on each step of this workflow.
Supplier Products Approval Workflow
The approval workflow, which can be handled by the manager or the administrator persona of the owner organization. All items from the vendor
workflow are moved here for approval. The items that are created by any vendor move to the Catalog only when the administrator approves the items.
Comment
The approver or owner can share a comment for item rejection with the vendor user. Add the appropriate comment in the Comments tab in the single-edit
page and click Reject to reject the item. The item then progresses to the Rework step of the vendor user's collaboration area. The vendor user can see the
comment in the Approver Details tab.
Supplier products approval collaboration area
The collaboration area of the owner to which the items are sent by the vendor for approval.
Search
Enables the Vendor user to check all the created items. The user can access only the items in the organization that the user belongs to and cannot see any
other items. The user can check the items out back to the collaboration area to make further additions, but the item then needs to go through
the approval steps again for the changes to be reflected in the system.
Password restrictions
The password should be alphanumeric with at least 8 characters and contain at least one number, one lowercase letter, one uppercase letter, and one
special character, without any white space or the username. By default, the password expiry is set to 90 days, but the administrator can tweak this
value depending on the system requirements. If a vendor user enters a wrong password three times, access is blocked and the user needs the
administrator to re-enable the account.
To use a vendor user to add products (items), you use the "Product Enrichment" collaboration area, and to approve items by an admin user, you use the "Supplier Products Approval"
collaboration area. If you want to change these default collaboration area names, change the value of the following properties in the common.properties file:

vendor_product_collab_prefix
For the product enrichment collaboration area.

owner_approval_collab_prefix
For the supplier products approval collaboration area.

Restriction: If you are using custom classes or a custom ZIP file that is created for the vendor user, then you need to change the value of the vendor_product_collab_prefix
and owner_approval_collab_prefix properties.
The Product Edit collaboration area name is as follows:

"{vendor_product_collab_prefix}" + "-" + {ctg_name} + "-" + {VendorName}

The Supplier Products Approval collaboration area name is as follows:

"{owner_approval_collab_prefix}" + "-" + {ctg_name}

Using the sample Vendor code


Vendor code is distributed between the following JAR files:

vendor-ext.jar
Contains vendor hierarchy creation, hierarchy update, and vendor user creation code.
vendor-ext-wfl-sample.jar
Contains compiled vendor workflow-related extension code.
This code for workflow extension is customizable and allows you to create your own vendor workflows with custom extensions.
The sample source code for workflow extensions is provided here:
com.ibm.mdm.extensions.sample.vendorportal.zip
Right-click and select Save Target As to download the sample Vendor code to your computer. Use the sample code as a reference for implementing the logic of your
vendor workflows. Compile the custom code and replace the vendor-ext-wfl-sample.jar file.

Note: If you are using the sample vendor ZIP file provided with the product, you do not need to change the value of the vendor_product_collab_prefix and
owner_approval_collab_prefix properties.

Related concepts
Vendor Summary dashboard

Related tasks
Configure Vendor Persona

Related reference
Vendor properties - env_settings.ini file
Vendor properties - common.properties file

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Admin UI
The Admin UI provides a user interface that handles administrative tasks depending upon your role and privileges.

The Admin UI of Product Master supports the following roles:

Admin
Full Admin
Catalog Manager
Category Manager
Content Editor
Digital Asset Manager
Solution Developer

Navigating the Admin UI


You can navigate the Admin UI pages with the left panel toolbar, menu bar, consoles, and the navigation map. The pages that you use depend on your role.
Customizing Admin UI features
You can customize your Admin UI workspace by specifying your user settings, configuring the user interface, and localizing the user interface.
Working with the Admin UI
Creating items, searching for items, and editing content within items and catalogs are key tasks in master data management.
Running and managing jobs
Running and managing jobs enables you to automate when a report, import, or export is run.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Navigating the Admin UI


You can navigate the Admin UI pages with the left panel toolbar, menu bar, consoles, and the navigation map. The pages that you use depend on your role.

1. In your web browser, type the URL provided by your administrator. The user interface login screen opens. If the page fails to open, contact your administrator.
2. Type the details in the username, password, and company fields, and click Login.
3. Type your username and password for Global Data Synchronization.
Note: If the administrator has set a default company name, the company name is automatically populated.

The home page has a left panel, the collaboration console, and the menu bar containing modules.

Modules (Menu)
The menu bar is at the upper area of the Admin UI application. It provides tabs that allow you to navigate between various tasks, such as managing items and managing
messages.

Home
Access tasks that are related to your home page, task lists, settings, and profile.
Product Manager
Access tasks that are related to catalogs, hierarchies, selections, reports, and lookup tables for data entry and data maintenance. Manage content from
multiple sources and audiences and provide customer-focused content. This module is designed for users who manage product information every day,
such as content managers, product managers, and pricing specialists.
Collaboration Manager
Access tasks related to imports, exports, collaboration areas, queues, web services, the document store, routing, and generated files. Import information
into and export information from Product Master, view the list of files that are stored in the document store, and create and manage workflows. This
module is designed for business process analysts who set up the data model and business rules for content managers.
Data Model Manager
Access tasks that are related to the scheduler, specs and mappings, attribute collections, scripting, security, alerts, staging areas, and workflows. Manage
scheduled activities, alerts, and external communication; create and edit specifications and mappings; administer users, roles, and privileges; and view
and create scripts. This module is designed for business process analysts who set up the data model and business rules for content managers.
System Administrator
Access tasks that are related to audit, database administration, performance information, properties, log files, system status, the profiler, import
environment, selective export, selective import, and size distribution. Use the functional tools that are available to troubleshoot and maintain the
application. You can review configuration settings and application performance measurements, and view various log files and system status information.
This module is designed for system administrators who are responsible for the operation of the Product Master application and related services.
Custom Tools
Access tasks that are related to custom tools.
Window
Access tasks related to enabling the left panel or using the navigation map to connect to consoles. The navigation map displays the same structure as
the menu bar, but shows more detail. The structure of the navigation map is a detailed list of components that mirror the same top-level menu items of
the menu bar. From the navigation map, you can select and connect to any component.

Collaboration Area Console

The Collaboration Area Console consists of the left panel access bar and the collaboration area on the right page. The console lists the collaboration areas and the number
of items that are associated with each area. Click the filter icon to apply filters to the list. Click the properties icon to open the Properties section. In the Properties section,
you can hide empty steps, inaccessible steps, or empty collaboration areas. You can also choose the columns that you want to display in the list, and specify the number of
rows per page.

Click the refresh icon to refresh the information.

Modules (Left panel)

Located in the left panel, displays the quick links and quick search. You can customize and develop your own left panel module.
Add a module as follows:

1. In the left panel, click Module > Please select a module to add.

2. In the Please select a module to add list, select the module, and click the add icon.
Click the refresh icon to refresh the table information. Click the corresponding icons to use the On-Demand Filter or the On-Demand Search.

You can add the following modules to the left panel:

Alerts
Identifies the status of any current alert. The alerts table presents the status and number of alerts that are active. The types of alerts include error
alerts, information alerts, and actionable alerts.
Bookmarks
Links to a location that you define, including searches. You can select this object for components that are most commonly used. You can add different
areas of the Admin UI as bookmarks that appear in the bookmarks list.
Custom Tools
Displays a list of custom tools that are available to you.
Documents
The document store shows Admin UI documents, including hyperlinks to the file location.
Jobs
If you have more than one job running at a time, it is difficult to check the status of all the jobs from a single screen. Therefore, management and
monitoring of jobs is critical. The job explorer module reduces the time and provides ease in managing and monitoring the status of all of your running
jobs. A progress bar for each running job displays in the left panel navigation. You can filter the jobs based on the type of job.
Important: This feature is no longer supported.
Last Visited
Stores the last 10 pages that were visited. When a page is visited, it is automatically added to the Last Visited section. The list is cleared when you log
out of the Admin UI. You can click the page name to display the page.
Selections
Displays all of the statically saved catalog selections. When a catalog item selection is created, it appears in the Item Selection Console and is
automatically added to the Selections section in the left panel.
Spec Explorer
Displays the search results of specifications and attributes. You can search and view all specs that are available to you.
From the Spec Explorer, you can click Search and search for a spec by specifying search criteria. From the results list, click a spec name to add it to
the Spec Explorer section.
Catalogs
Displays all the catalogs that are available to you. A catalog module allows you to browse and search a given catalog and perform various operations
on it.
Collaboration Areas
Displays all the collaboration areas that are available to you and includes the collaboration area hierarchy. You can expand the collaboration area
section to see a list of steps that are contained by the workflow of the collaboration area. You can also expand the steps to see the items in each step.
Right-click to perform these tasks:
View Workflows Definitions - Opens the View Workflow Details page.
Hide inaccessible steps - Display only the steps on which you can perform an action.
Hierarchies
Displays all the hierarchies that are available to you. A hierarchy module allows you to browse and search a given hierarchy, as well as perform various
operations on it.
Right-click to perform these tasks:
List View - Toggle the view.
Category Rich Search - Opens the Search for Categories - New Category Search page.
Hierarchy Attributes - Opens the hierarchy details.
Organizations
Displays all the organizations that are available to you. An organization module allows you to browse and search a given organization, as well as
perform various operations on it.
Right-click to perform these tasks:
List View - Toggle the view.
Organization Rich Search - Opens the Search for Categories - New Category Search page.
Hierarchy Attributes - Opens the hierarchy details.

Customizing the left panel modules


You need to edit the configuration file before you define your module.

1. Edit the $TOP/etc/default/user_leftpanel.conf.xml extension point.
This file defines the custom modules. You must specify the names of the custom modules. For those modules that allow multiple instances, you must also specify
the fully qualified class path of your implementation of LeftpanelDataObjectFactory.

2. Edit the $TOP/public_html/user/user_jsp_include.jsp extension point.
This file is the JavaScript interface for i18n labels for the custom modules. You must edit this file to provide the appropriate i18n labels for your custom modules.
3. Optional: You can edit the $TOP/public_html/user/userjs/module/<custom module name>.jsp file.
This file is the JavaScript part of the custom module implementation. You can edit this file to include your JavaScript files.

Password criteria
Following is the default password criteria for both the Admin UI and the Persona-based UI.
User interface panels
IBM® Product Master uses a modular design that aligns the user interface with your job role. The four modules, Product Manager, Collaboration Manager, Data
Model Manager, and System Administrator, provide information that is specific to three primary job roles, including content management, business process analysis,
and system administration.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Password criteria
Following is the default password criteria for both the Admin UI and the Persona-based UI.

General criteria
The password should not contain the username.
The specified new password cannot be the same as a previous password.

Property-specified criteria
This criteria is specified by the password_strength_criteria parameter in the common.properties file.

The length of the password.


The password must contain at least one character each from the following criteria:
Uppercase alphabet character [A–Z] (Or equivalent characters from other supported locales)
Lowercase alphabet characters [a–z] (Or equivalent characters from other supported locales)
Base 10 digits [0–9]
Allowed special characters
:;=?@!#$()*+,-.{}[]~\|^_

The password should not contain white space.

Properties used
Password criteria uses the following properties from the common.properties file.

enable_password_expiry
enable_user_lockout
force_strong_password_for_users
maximum_password_attempts
maximum_password_age_for_users
maximum_password_age_for_vendor
password_strength_criteria

Note: A password is considered to be in the English language if it does not contain any non-English-language alphabetic characters.
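For illustration, the following common.properties excerpt shows one way that these parameters might be set. The values shown are examples only, not shipped defaults, and the exact value format of password_strength_criteria is product-specific; check the comments in your own common.properties file.

    # Example values only - not shipped defaults
    enable_password_expiry=true
    enable_user_lockout=true
    force_strong_password_for_users=true
    maximum_password_attempts=5
    maximum_password_age_for_users=90
    maximum_password_age_for_vendor=90
    # Value format is product-specific; see the comments in common.properties
    password_strength_criteria=...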

Related reference
common.properties file parameters

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

User interface panels


IBM® Product Master uses a modular design that aligns the user interface with your job role. The four modules, Product Manager, Collaboration Manager, Data Model
Manager, and System Administrator, provide information that is specific to three primary job roles, including content management, business process analysis, and system
administration.

The main menu for the Product Master user interface consists of a Home menu, menus for the four role-based modules, and a Window menu. From each module menu,
you access various consoles that provide direct access to operations that you commonly perform based on your job role.

Note: When you create a Product Master object on any of the User interface panels, do not use special characters. For more information, see Limitations for special character strings.



Product manager module
Using the Product Manager module, you can manage product information on a daily basis. Use the tools in the Product Manager module to define and store product
information in catalogs, maintain hierarchy information, create and maintain attributes, and manage localization. The module also provides access to functions for
managing lookup tables and reports.
Collaboration manager module
Using the Collaboration Manager module, you can import objects into and export objects from the IBM Product Master. Use the tools in the Collaboration Manager
module to maintain queues and messages, view a list of files stored in the document store, and manage collaboration areas.
Data model manager module
Using the Data Model Manager module, you can set up the data models and business rules for content management. Use the tools in the Data Model Manager
module to schedule and monitor jobs, to configure data models, to add scripts that help manage business processes, to administer users, roles, and organizations,
to create alerts, to create workflows, to manage hierarchies, and to maintain catalog items.
System administrator module
You can use the System Administrator module to support, maintain, and troubleshoot the IBM Product Master environment. You can review configuration settings,
view system log files, view system status, and analyze database and application performance.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Product manager module


Using the Product Manager module, you can manage product information on a daily basis. Use the tools in the Product Manager module to define and store product
information in catalogs, maintain hierarchy information, create and maintain attributes, and manage localization. The module also provides access to functions for
managing lookup tables and reports.

Catalog Console
The Catalog Console provides a centralized location for managing product information. You can use the console to create, modify, delete, and search for catalogs. In
addition, you can compare any two existing catalogs by using the Catalog Console.
Catalog to Catalog Export Console
Use the Catalog to Catalog Export Console to load a catalog to another catalog. You can load a print catalog or any other catalog to a different catalog. The Catalog
to Catalog Export process loads the categorization and items from one catalog to another.
Hierarchy Console
Use the Hierarchy Console to manage hierarchies. You can add or modify hierarchies from the Hierarchy Console.
Hierarchy Mapping Console
You can use the Hierarchy Mapping Console to map categories of one hierarchy to other categories within another hierarchy so that you can categorize an item to a
category in the first hierarchy and the same item is automatically categorized to the second hierarchy.
Selection Console
Use the Selection Console to manage your selections, which you use to return a specific set of items or to export a group of items. You can create a static selection
or a dynamic selection. You can add, update, or delete selections from the Selection Console.
Report Console
Use the Report Console to manage your reports. You can create, preview, run, and view status of scheduled reports from the Report Console.
Lookup Table Console
Use the Lookup Table Console to manage lookup tables, which you use to add, delete, and search for data within a lookup table.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Catalog Console
The Catalog Console provides a centralized location for managing product information. You can use the console to create, modify, delete, and search for catalogs. In
addition, you can compare any two existing catalogs by using the Catalog Console.

The Catalog Console displays a list of current catalogs. To customize the columns that are included in the table, click in the upper right corner of the table to display the
Properties section. Click directly on the column header to sort through that particular column. To search for specific catalogs and display only those rows that match your
search criteria, click at the top of the table.

To perform an advanced search, click Rich Search at the top of the Catalog Console. An advanced search is more comprehensive and shows results in the Single Edit or
Multiple Edit panel.

You can create, modify, and delete catalogs from the Catalog Console. To create a new catalog, click at the top of the Catalog Console. To modify specific details about
a catalog, click the appropriate link of the catalog. To delete a catalog, select the catalog and click Delete at the top of the console.

To create a new view for the catalog, select the catalog and click Views at the top of the console. To associate the view that is created with the catalog, select the view
from the menu in the Views column.

To edit attributes for a particular catalog, select the catalog you want to edit and click Attributes on the toolbar.

To compare two different versions of a catalog, select the two catalogs in the Catalog Console, and click Differences on the toolbar. If you decide that you want to use a
prior version of a catalog after making a comparison, select the catalog and click Rollback on the toolbar, and then select the version of the catalog that you want to roll
back to. The rollback operation is not reversible. If you roll back to the previous version, the current version is permanently deleted. However, the rollback operation does
not revert to the previous version of the spec, and items continue to display according to the latest spec. Also, the new version of the catalog does not show the attributes
for spec nodes that are deleted, even if the spec nodes exist in that version of the catalog.



You can also import or export a catalog. You can use the import functionality to create a catalog from various other sources of data. You can use the export option to
publish and distribute product information to customers.

Related concepts
Catalogs

Related reference
Catalog Console

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Catalog to Catalog Export Console


Use the Catalog to Catalog Export Console to load a catalog to another catalog. You can load a print catalog or any other catalog to a different catalog. The Catalog to
Catalog Export process loads the categorization and items from one catalog to another.

The Catalog to Catalog Export Console displays a list of catalog exports. To customize the columns that are included in the table, click in the upper right corner of the
table to display the Properties section. Click directly on the column header to sort through that particular column. To search for specific catalogs exports and display only
those rows that match your search criteria, click at the top of the table.

You can set up a catalog for loading, run a catalog export, and delete a catalog export from the Catalog to Catalog Export Console. To set up a catalog for loading, click the new icon at the top of the Catalog to Catalog Export Console. To run a catalog export, select the catalog export from the list and click the run icon on the toolbar. To delete a catalog export, select the catalog export and click the delete icon on the toolbar.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Hierarchy Console
Use the Hierarchy Console to manage hierarchies. You can add or modify hierarchies from the Hierarchy Console.

The Hierarchy Console displays hierarchies as rows in a table. To customize the columns that are included in the table, click in the upper right corner of the table to
display the Properties section. Click directly on the column header to sort through that particular column.

To perform an advanced search within a hierarchy, select the hierarchy and click Rich Search at the top of the Hierarchy Console. You can specify detailed search criteria
such as, Hierarchy Node Common Attributes, Default Hierarchy Primary Spec, and Item Hierarchy Specs to search within a hierarchy.

You can create, modify, and delete hierarchies from the Hierarchy Console. To create a new hierarchy, click at the top of the console. To edit a hierarchy, click the
hierarchy name. To modify the attributes of a hierarchy, select the hierarchy and click Attributes at the top of the console.

To delete a hierarchy, select the hierarchy and click at the top of the console. An error occurs if you try to delete a hierarchy that is currently being used by another
object such as, a catalog or organization.

By default, all hierarchies have the System Default view. To add a new view to a hierarchy, select the hierarchy and click Views at the top of the console.

If you decide that you want to use a prior version of a hierarchy, select the hierarchy and click Rollback on the toolbar, and then select the version of the hierarchy that you
want to roll back to. The rollback operation is not reversible. If you roll back to the previous version, the current version is permanently deleted.

You can also import and export a hierarchy from the Hierarchy Console. To import a hierarchy, select a hierarchy and click Import on top of the console. To export a
hierarchy, select a hierarchy and click Export on top of the console.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Hierarchy Mapping Console


You can use the Hierarchy Mapping Console to map categories of one hierarchy to other categories within another hierarchy so that you can categorize an item to a
category in the first hierarchy and the same item is automatically categorized to the second hierarchy.

The Hierarchy Mapping Console displays the hierarchy mappings as rows in a table. To customize the columns that are included in the table, click in the upper right corner of the table to display the Properties section. Click directly on the column header to sort through that particular column. To search for specific hierarchy mappings and display only those rows that match your search criteria, click at the top of the table.

You can create, edit, or delete hierarchy mappings from the Hierarchy Mapping Console. To create a new hierarchy mapping, click the new icon located at the top of the console. To edit a hierarchy mapping, click the edit icon in the table row of the hierarchy mapping that you want to edit. To delete a hierarchy mapping, click the delete icon in the table row of the hierarchy mapping that you want to delete.

Note: Only users with the default administrator role can view subcategories while creating hierarchy mappings. Other users can view only the top-level parent category while creating the hierarchy mapping.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Selection Console
Use the Selection Console to manage your selections, which you use to return a specific set of items or to export a group of items. You can create a static selection or a
dynamic selection. You can add, update, or delete selections from the Selection Console.

The Selection Console displays the selections as rows in a table. To customize the columns that are included in the table, click in the upper right corner of the table to
display the Properties section. Click directly on the column header to sort through that particular column. To search for specific selections and display only those rows that
match your search criteria, click at the top of the table.

You can create either a static selection or a dynamic selection from the Selection Console. To create a static selection, click New Static Selection at the top of the Selection
Console. To create a dynamic selection, click New Dynamic Selection on the toolbar.

To edit a selection, click the edit icon next to the selection in the Options column. To delete a selection, click the delete icon next to the selection in the Options column.

To preview the items for a selection, click the preview icon next to the selection in the Options column.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Report Console
Use the Report Console to manage your reports. You can create, preview, run, and view status of scheduled reports from the Report Console.

The Report Console displays the reports as rows in a table. To customize the columns that are included in the table, click in the upper right corner of the table to
display the Properties section. Click directly on the column header to sort through that particular column. To search for specific reports and display only those rows that
match your search criteria, click at the top of the table.

To create a new report, click the new icon at the top of the Report Console. To set values for report input parameters, click the report name.

You can verify the contents of a report before you run it by clicking the report type link for that report. To view the results of the previous execution of the report, click the results icon in the Action column for the report.

To run a report, click the run icon in the Action column for the report. To view the status of a scheduled report, click the status icon in the Action column for the report.

To delete a report, click the delete icon, which is in the first column of the row of the report.

To modify the location of the report, click in the Delivery location column.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Lookup Table Console


Use the Lookup Table Console to manage lookup tables, which you use to add, delete, and search for data within a lookup table.

The Lookup Table Console displays the lookup tables as rows in a table. To customize the columns that are included in the table, click in the upper right corner of the
table to display the Properties section. Click directly on the column header to sort through that particular column. To search for specific lookup tables and display only
those rows that match your search criteria, click at the top of the table.

You can create, browse, delete, and search for lookup tables from the Lookup Table Console. To create a new lookup table, click the new icon at the top of the Lookup Table Console. To search for data within a lookup table, click the search icon on the lookup table toolbar. To delete a lookup table, select the lookup table and click the delete icon on the lookup table toolbar.



You can also browse the contents of a lookup table. To browse a lookup table, select the lookup table and click the browse icon in the last column of the row of the lookup table.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Collaboration manager module


Using the Collaboration Manager module, you can import objects into and export objects from the IBM® Product Master. Use the tools in the Collaboration Manager
module to maintain queues and messages, view a list of files stored in the document store, and manage collaboration areas.

Import Console
Use the Import Console to manage your imports, which you use to add, update, or delete data in IBM Product Master. You can also manually run the imports from
the Import Console.
Export Console
Use the Export Console to manage your exports, which you use to publish your product information to either internal or external customers.
Collaboration Area Console
Use the Collaboration Area Console to manage your collaboration areas, which indicate the number of items at any step in a workflow.
Queue Console
Use the Queue Console to manage queues, which you must define to handle inbound and outbound messages from external sources or destinations.
Message Console
Use the Message Console to search for messages in a queue.
Web Service Console
Use the Web Service Console to manage Web services used by SOAP over HTTP.
Transaction Console
Use the Transaction Console to view all Web service transactions.
Document Store
The Document Store is a repository of all incoming and outgoing files, including: import scripts, reports, and specs. You can use the Document Store to view details,
control access, or delete documents.
Data Source Console
Use the Data Source Console to manage your data sources, which are entities that define how data can be imported into IBM Product Master.
Routing Console
Use the Routing Console to manage the distributions, which you use to notify individuals about changes to the system.
Generated Files
Use the Generated Files panel to search for generated files by specifying a given date range or narrowing your search to include only the files that were generated
with a specific destination spec.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Import Console
Use the Import Console to manage your imports, which you use to add, update, or delete data in IBM® Product Master. You can also manually run the imports from the
Import Console.

The Import Console displays the imports as rows in a table. To customize the columns that are included in the table, click in the upper right corner of the table to
display the Properties section. Click directly on the column header to sort through that particular column. To search for specific imports and display only those rows that
match your search criteria, click at the top of the table.

You can create, modify, and delete imports from the Import Console. To create a new import, click the new icon at the top of the Import Console. When you create a new import, you can select your data import type. In the Select data import type field, select one of the following:

Binary feed
This type of import is used to upload a zip file, which can contain any type of information. While creating the import, you can specify where the zip file is to be uploaded in the docstore. When the import is run, the files in the zip package are extracted under the ctg_files directory in the docstore. All of the catalogs that have a binary image or thumbnail image attribute can use the file name from any of these files to populate the attributes.
Hierarchy feed
This type of feed is used to import categories from a file to a hierarchy.
Item feed
This type of feed is used to import items from a file to a catalog.
Item to category map feed
This type of feed is used to import item data from a file to a catalog and then map it to certain categories. For example, if each line in a CSV file has category data and item data, that item is imported to that category. If the category does not exist, it is created under the hierarchy that is related to the catalog.
To allow a mapping of categories, you need to first define a typical file spec with the following two attributes (a sample file follows this list):

the item primary key
the category, which has to be of type "Category". The type "Category" is a special type for a file spec.

In addition, the "delimiter for the category path" needs to be specified. The delimiter is a one-character delimiter that separates the elements of the category path; for example, "/", as in "top/middle/bottom".

Because you can modify only minimal aspects of an import, to modify an import it is best to delete the existing import and create a new one. To delete an import, click the delete icon in the first column of the row next to the import.



You can also manually run imports from the Import Console. To run an import, you first load the file into Product Master and then you start the import.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Export Console
Use the Export Console to manage your exports, which you use to publish your product information to either internal or external customers.

The Export Console displays the exports as rows in a table. To customize the columns that are included in the table, click in the upper right corner of the table to
display the Properties section. Click directly on the column header to sort through that particular column. To search for specific exports and display only those rows that
match your search criteria, click at the top of the table.

You can create, run, and delete exports from the Export Console. To create a new export, click the new icon at the top of the console. To run an export, select the export and click the run icon at the top of the console. To delete an export, select an export and click Delete at the top of the console.

You can modify only certain aspects of an export, such as its destination spec, mapping, and hierarchy. To modify the mapping of an export, click the edit icon in the Mapping column for the export. To modify the destination spec of the export, click the spec name in the Destination Spec column for the export. To modify the hierarchy of the export, click the hierarchy name in the Hierarchy column for the export.

If you have started an export by clicking the run icon, you can view the status of that export job by clicking the link in the Job Info column for the export.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Collaboration Area Console


Use the Collaboration Area Console to manage your collaboration areas, which indicate the number of items at any step in a workflow.

The Collaboration Area Console displays the collaboration areas as rows in a table. To customize the columns that are included in the table, click in the upper right of
the table to display the Properties section. Click directly on the column header to sort through that particular column. To search for specific collaboration areas and display
only those rows that match your search criteria, click at the top of the table.

You can specify the column on which collaboration area data must be sorted in the Configure Table. You can also choose to hide specific columns by clicking the check
boxes next to the columns in the Configure Table.

You can create, view, and delete collaboration areas from the Collaboration Area Console. To create a new collaboration area, click at the top of the console. To view a
collaboration area, select the collaboration area and click View at the top of the Collaboration Area Console. To delete a collaboration area, click Delete at the top of the
console.

You can also view a graphical representation of the workflow that is associated with a collaboration area. To view the graphical representation of the steps of a workflow,
click at the end of the Name column for the collaboration area.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Queue Console
Use the Queue Console to manage queues, which you must define to handle inbound and outbound messages from external sources or destinations.

The Queue Console displays the queues as rows in a table. To customize the columns that are included in the table, click in the upper right corner of the table to display
the Properties section. Click directly on the column header to sort through that particular column. To search for specific queues and display only those rows that match
your search criteria, click at the top of the table.

You can create and delete queues from the Queue Console. To create a new queue, click the new icon at the top of the Queue Console. To edit a queue, click the name of the queue in the Distribution Name column for the queue. To delete a queue, click the delete icon in the last column for the queue.

You can also view messages in a queue from the Queue Console. To view the messages of a queue, click the number in the Messages column for the queue.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)



Message Console
Use the Message Console to search for messages in a queue.

To search for messages in a queue, enter the arrival date range and click in the Queue Messages Search panel.

The list of messages that are associated with the selected queue are displayed in the Queue Messages panel.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Web Service Console


Use the Web Service Console to manage Web services used by SOAP over HTTP.

The Web Service Console displays the Web services as rows in a table. To customize the columns that are included in the table, click in the upper right of the table to
display the Properties section. Click directly on the column header to sort through that particular column. To search for specific Web services and display only those rows
that match your search criteria, click at the top of the table.

You can create a new web service, view details of a web service, view the number of transactions for a web service, or delete a web service from the Web Service Console. To create a new web service, click the new icon at the top of the Web Service Console. To view the details of a web service, click the name of the web service. To view the transactions of a web service, click the number in the Transactions column for the web service. To delete a web service from the console, click the delete icon for the web service.

You can also implement a Web Services Description Language (WSDL) use case. To do so, in the Web Services Console, click new and enter the required information for the following fields:

Web service name
For example, Item_Request.
Web service description
Enables an external application to request an item's detail based on ID attributes for the item, such as GTIN or UPC, GLN, and Target Market.
Web services description file
The WSDL file is uploaded from the Web Services Console and contains a description of the Web service in WSDL 1.2 format. The Web service uses SOAP 1.2
request and response encoding and the WSDL file includes the following:
XSD for the request message
XSD for the response message
XSD for the fault message
All other content required by WSDL 1.2
The Web services definition file is published to the default HTTP server, which is the HTTP server for IBM® Product Master. This is also where the Web
Services Description Script is published. For assistance, click Help.
Web services implementation script
The Web services implementation script is invoked by an incoming SOAP 1.2 request message that complies with the Web service definition. The Web services implementation script undertakes the following tasks:
Parses the SOAP 1.2 request message by using the Web service definition.
Typically queries one or more IBM Product Master containers for product information.
Creates a SOAP 1.2 response message that contains product information.
Transmits the SOAP 1.2 response message to the requestor application.
Optionally (as determined by the WebSphere® administrator in the script), logs the request and response messages into the Document Store, together with a link to the messages in the Document Store, in a manner that can be accessed in the Message Console.
Requestor application request message
The administrator of the requestor application writes a process to create a SOAP 1.2 message in compliance with the Web services definition.
Requestor application response message
The administrator of the requestor application writes a process to receive and handle a SOAP 1.2 message in compliance with the Web services definition.
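As a sketch of what such a request might look like for the Item_Request example above, the following SOAP 1.2 envelope asks for an item by GTIN, GLN, and Target Market. The namespace, element names, and values are hypothetical; the actual message format is defined by the request XSD in the WSDL file that you upload.

    <?xml version="1.0" encoding="UTF-8"?>
    <!-- Hypothetical request; the real structure comes from your WSDL -->
    <soap:Envelope xmlns:soap="http://www.w3.org/2003/05/soap-envelope"
                   xmlns:req="http://example.com/pim/Item_Request">
      <soap:Body>
        <req:ItemRequest>
          <req:GTIN>00012345678905</req:GTIN>
          <req:GLN>0614141000005</req:GLN>
          <req:TargetMarket>US</req:TargetMarket>
        </req:ItemRequest>
      </soap:Body>
    </soap:Envelope>

The implementation script parses such a request, queries the relevant containers, and builds a SOAP 1.2 response message that conforms to the response XSD.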

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Transaction Console
Use the Transaction Console to view all Web service transactions.

The Transaction Console displays an option to specify the date range for searching for transactions. To search for transactions, enter the date range and click the search icon.

The Transaction Console displays a list of transactions that match the search criteria. To view the details of a transaction, click the view icon in the Response or Request column for the transaction.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)



Document Store
The Document Store is a repository of all incoming and outgoing files, including: import scripts, reports, and specs. You can use the Document Store to view details,
control access, or delete documents.

The Document Store saves the content in a hierarchical structure just like the folders and files on the local file system. To customize the columns that are included in the
table, click in the upper right corner of the table to display the Properties section. Click directly on the column header to sort through that particular column.

The Document Store displays a list of folders and sub-folders that contain categorized documents. The Document Store has some pre-existing files and folders. During the
execution of scripts the documents are stored in the Document Store. For export purpose, the product generates a compressed file which is also stored in the Document
Store.

The metadata of the documents is stored in the database, however, the real files can be stored in the database (as a BLOB) or on the local file system.

On each leaf node of the Document Store, you can perform these tasks:

View contents
To view the contents of a file, click the file to open a pop-up window. If the file is a text document, the content appears in the pop-up window; otherwise, the browser prompts you to either save the file or open it with the respective application.
View metadata
To view audit information of a document, click the metadata icon next to the document.
Delete document
To delete a document, click the delete icon next to the document.
Set security restrictions
To update the access control of any document, click the access control icon next to the document.

Restriction: All file names that are uploaded to the document store cannot contain the following special characters: !@$%^&()=+.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Data Source Console


Use the Data Source Console to manage your data sources, which are entities that define how data can be imported into IBM® Product Master.

The Data Source Console displays the data sources as rows in a table. To customize the columns that are included in the table, click in the upper right corner of the
table to display the Properties section. Click directly on the column header to sort through that particular column. To search for specific data sources and display only
those rows that match your search criteria, click at the top of the table.

You can create, modify, and delete data sources from the Data Source Console. To create a new data source, click the new icon at the top of the Data Source Console. To change the data source configuration of a data source, click the edit icon in the Action column for the data source. To delete a data source, click the delete icon in the Action column for the data source.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Routing Console
Use the Routing Console to manage the distributions, which you use to notify individuals about changes to the system.

The Routing Console displays the distributions as rows in a table. To customize the columns that are included in the table, click in the upper right corner of the table to
display the Properties section. Click directly on the column header to sort through that particular column. To search for specific routings and display only those rows that
match your search criteria, click at the top of the table.

You can create, modify, and delete distributions or distribution groups from the Routing Console. To create a new distribution group, click New Distribution Group at the top of the console. To create a new distribution, click New Distribution at the top of the console. To edit a distribution or a distribution group, click the edit icon in the Control column for the distribution or distribution group. To delete a distribution or a distribution group, click the delete icon in the Control column for the distribution or distribution group.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Generated Files



Use the Generated Files panel to search for generated files by specifying a given date range or narrowing your search to include only the files that were generated with a
specific destination spec.

To search for generated files, in the Search Generated Files panel, enter the date range in the Date From and Date To fields, select a destination spec from the Destination Spec menu if you want to narrow your search, and then click the search icon.

The search results include the file path of each generated file and the date and time that each file was generated.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Data model manager module


Using the Data Model Manager module, you can set up the data models and business rules for content management. Use the tools in the Data Model Manager module to
schedule and monitor jobs, to configure data models, to add scripts that help manage business processes, to administer users, roles, and organizations, to create alerts, to
create workflows, to manage hierarchies, and to maintain catalog items.

Jobs Console
Use the Jobs Console to manage your scheduled jobs, which you use to view, update, compare, disable, and delete jobs schedules in IBM Product Master.
Specs Console
Use the Specs Console to manage all of the different types of specs, which define a data template for how data is stored, validated, and maintained both within IBM
Product Master and outside of IBM Product Master.
Spec Map Console
Use the Spec Map Console to manage your spec maps, which define how information from one source is routed to another source, such as when you export data
from IBM Product Master to another system.
Attribute Collection Console
Use the Attribute Collection Console to manage your attribute collections, which define a manageable set of attributes that can be used to create views, tabs,
access privileges, inheritance rules, or workflows.
Scripts Console
Use the Scripts Console to manage your scripts, which you use to apply business rules, cleanse data, validate data, or run custom reports. Scripts are saved in the
document store.
Script Sandbox
Use the Script Sandbox to create sample scripts and test scripts before they are implemented.
User Console
The User Console manages your users. You can view user privileges, enable or disable users, and edit profile information from the User Console.
Role Console
Use the Role Console to manage your roles, which you use to control a user's privileges to the catalog. You can also view the users that are assigned to a role and
edit the association of a role and your custom tools from the Role Console.
Access Control Group Console
Use the Access Control Group Console to manage your access control groups, which you use to define access privileges for users to specific objects.
Catalog Access Privileges Console
Use the Catalog Access Privileges Console to manage your catalog access privileges, which you use to define the access privileges of catalogs for roles.
Defining access for roles
Use the Defining Privileges for Roles Console to set access to the role. If you assign a user to multiple roles, the user inherits the access from each role.
Hierarchy Access Privilege Console
Use the Hierarchy Access Privilege Console to manage user-based hierarchy access permissions.
Activity Log
Use the Activity Log to monitor the activities of users. You can enable logging user activities and send notifications to users from the Activity Log.
Alerts Subscription Console
Use the Alert Subscription Console to manage alerts, which identify the occurrence of an event.
Staging Area Console
Use the Staging Area Console to manage staging areas, which you can use to launch data changes on an item level. A staging area can be associated with an export.
The export operation, when run, populates the associated staging area with documents.
Workflow Console
Use the Workflow Console to manage your workflows, which implement business processes for managing project information management (PIM) data in IBM
Product Master.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Jobs Console
Use the Jobs Console to manage your scheduled jobs, which you use to view, update, compare, disable, and delete jobs schedules in IBM® Product Master.

The Jobs Console displays jobs that are scheduled to run as rows in a table, including: import, export, and report jobs. To customize the columns that are included in the
table, click in the upper right corner of the table to display the Properties section. Click directly on the column header to sort through that particular column. To search
for specific jobs and display only those rows that match your search criteria, click at the top of the table.

You can view the schedules and status, update, delete, disable, and compare jobs from the Jobs Console. To view a job schedule, click its description in the Description column. For each job, there are action buttons in the Action column of each row of the table that are specific to what you can perform on each job. To view the status of a job, click the status icon. To update a job, click the update icon, then select the type of schedule and specify both the schedule details and job description. If a job is associated with a schedule, a disable icon is available for you to disable the schedule from running the specific job. To delete a job, click the delete icon. To compare jobs that have already been executed, click the compare icon.

You can also search for jobs by their schedule status and view job schedule details from the Jobs Console. To search for a job by schedule status, click Search Schedule
Status in the upper left corner of the Jobs Console. Specify the state, date, and the user who created the schedule. Optionally, you can specify whether to display the jobs
that were simultaneously executed by the system during a schedule if you select View System Job Runs. To view the job schedule details, click the description in the
Schedule Information column of the Jobs Console. The job schedule detail panel displays the job information and the associated schedules which you can enable, disable,
edit, view, and delete.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Specs Console
Use the Specs Console to manage all of the different types of specs, which define a data template for how data is stored, validated, and maintained both within IBM®
Product Master and outside of IBM Product Master.

The Specs Console displays each type of spec separately in separate tables. Use the buttons at the top of the Specs Console to select the type of specs that display in the
table. To sort the specific type of specs by name, click the corresponding letter located directly below the spec buttons.

You can create, edit, and delete specs from the Specs Console. To create a new spec, click the new icon located at the top of the table. To edit a spec, click the edit icon next to the spec in the table. To delete a spec, click the check boxes in each row of the specs that you want to delete and click the delete icon.

You can also import specs and export specs. To import specs, click Import Spec. To refresh the spec table and display the specs you import, click the spec display button
of that spec type. To export specs into the document store, click the check boxes next to each of the specs in the table that you want to export and click either Export XML
or Export XSD. If you export a spec into either of the two file formats, XML or XSD, when you export the same spec again, regardless of the format, the existing file is
overwritten.

To delete a spec, follow these steps:

1. Check whether the spec that you want to delete is in any attribute collection that is associated with any workflow.
a. Check whether any collaboration area is attached to that workflow.
b. Delete the collaboration area.
2. Remove the reference of the attribute collection in the workflow step.
Note: If the workflow is not needed, you can delete the workflow.
3. Go to the Catalog Console and check the catalogs to which the spec is attached.
4. Check whether any view is created for the attribute collection.
a. Delete the view.
b. Delete the catalog.
Note: If you directly delete the catalog without deleting the view, the reference to the attribute collection in the view remains and hence you cannot delete the
attribute collection.
5. Delete the attribute collection.
6. Delete the spec.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Spec Map Console


Use the Spec Map Console to manage your spec maps, which define how information from one source is routed to another source, such as when you export data from
IBM® Product Master to another system.

The Spec Map Console displays spec maps as rows in a table. To customize the columns that are included in the table, click in the upper right corner of the table to
display the Properties section. Click directly on the column header to sort through that particular column. To search for specific spec maps and display only those rows
that match your search criteria, click at the top of the table.

You can view, create, edit, and delete spec maps from the Spec Map Console. To create a new spec map, select the type of spec map you want to create from the menu at
the top of the Spec Map Console and click New Map.

To view a spec map, click the view icon at the end of the row for the spec map. You can also view the source spec details from the Spec Map Console by clicking the source name in the row of the spec map. To edit a spec map, click the edit icon at the end of the row for the spec map. To delete a spec map, click the delete icon at the beginning of the row of the spec map.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Attribute Collection Console


Use the Attribute Collection Console to manage your attribute collections, which define a manageable set of attributes that can be used to create views, tabs, access
privileges, inheritance rules, or workflows.

The Attribute Collection Console displays the attribute collections as rows in a table. To customize the columns that are included in the table, click in the upper right
corner of the table to display the Properties section. Click directly on the column header to sort through that particular column. To search for specific attribute collections
and display only those rows that match your search criteria, click at the top of the table.

You can create, view, edit, and delete attribute collections from the Attribute Collection Console. To create a new attribute collection, click at the top of the Attribute
Collection Console. To view or edit an attribute collection, click on the name of the attribute collection in the Name column.

To delete an attribute collection, click the check box next to the attribute collection, and then click at the top of the Attribute Collection Console. You cannot delete
attribute collections if they are associated with either a view, tab, access privilege, inheritance rule, or workflow.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Scripts Console
Use the Scripts Console to manage your scripts, which you use to apply business rules, cleanse data, validate data, or run custom reports. Scripts are saved in the
document store.

Business users should not have access to the System Administrator Console because users can modify and even delete data from the system and from the database. Only
administrators or developers should have access to this console. Any access restrictions specified in the product UI have no effect if they can run scripts in the Script
Sandbox.

The Scripts Console displays the scripts as rows in a table. From the menu at the top of the Scripts Console, select the type of scripts that you want displayed in the table.
To filter the scripts by name, click the corresponding letter located directly below the menu.

You can create or edit scripts from the Scripts Console. To create a new script, click the new icon at the top of the table. To view or edit a script, click the edit icon next to the script in the table.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Script Sandbox
Use the Script Sandbox to create sample scripts and test scripts before they are implemented.

Business users should not have access to the System Administrator Console because users can modify and even delete data from the system and from the database. Only
administrators or developers should have access to this console. Any access restrictions specified in the product UI have no effect if they can run scripts in the Script
Sandbox.

The Script Sandbox consists of the Expression Builder, Script Input Pane, and Script Pane sections, which you can use to build your scripts.

When you select an expression from the Expression Builder section, the available operators and operands display in the column next to the Expression Builder section.
You can click an operand to add an operand to the script. The selected operand displays in the Script Pane.

You can edit a script in the Script Pane section. To run a script, click Run Script at the top of the panel. The results of script execution display at the bottom of the panel.
You can continue the process of updating and running the script to test the script before it is implemented.

When you use the IBM® HTTP Server, a sandbox script might not return results if the script takes a relatively long time to complete. For example, a script that takes more than two minutes to complete might not return results. This behavior is caused by the ServerIOTimeout timeout parameter of IBM HTTP Server. You can set the value of the ServerIOTimeout parameter to 0 to avoid this issue when you use sandbox scripts that take a long time to complete.

CAUTION:
This setting has not been tested completely and could bear some risk when used.
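As an illustration, ServerIOTimeout is typically set on the Server element in the web server plug-in configuration file (plugin-cfg.xml); the cluster name, server name, host, and port below are hypothetical, and the file location depends on your installation. Subject to the caution above, a value of 0 disables the timeout.

    <!-- Hypothetical names; only ServerIOTimeout="0" is the relevant setting -->
    <ServerCluster Name="PIM_Cluster" LoadBalance="Round Robin">
        <Server Name="pim_server1" ServerIOTimeout="0">
            <Transport Hostname="pim.example.com" Port="9080" Protocol="http"/>
        </Server>
    </ServerCluster>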

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

User Console
The User Console manages your users. You can view user privileges, enable or disable users, and edit profile information from the User Console.

The User Console displays the users as rows in a table. To customize the columns that are included in the table, click in the upper right corner of the table to display the
Properties section. Click directly on the column header to sort through that particular column. To search for specific users and display only those rows that match your
search criteria, click at the top of the table.



You can view user access and enable or disable users from the User Console. The current user access is indicated in each row of the table by an icon that shows whether the user is enabled or disabled. If a user is an administrator, an asterisk is displayed to the right of the User name column.

If a user is an LDAP user, an icon is displayed in the LDAP user column and the LDAP URL and Entry DN columns are populated. If the user is not an LDAP user, a different icon is displayed in the LDAP user column.

To enable an IBM® Product Master user, click the check box in the Select column next to the user, and then click the enable icon at the top of the User Console. To disable a user, click the check box in the Select column next to the user, and then click the disable icon at the top of the User Console.

You can also edit a user's profile information from the User Console. To edit a profile, click the name of the user in the User name column.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Role Console
Use the Role Console to manage your roles, which you use to control a user's privileges to the catalog. You can also view the users that are assigned to a role and edit the
association of a role and your custom tools from the Role Console.

The Role Console displays the roles as rows in a table. To customize the columns that are included in the table, click in the upper right corner of the table to display the
Properties section. Click directly on the column header to sort through that particular column. To search for specific roles and display only those rows that match your
search criteria, click at the top of the table.

You can create, modify, and delete roles from the Role Console. To create a new role, click the new icon. To edit a role, click the check box next to the role, and then click the edit icon. To delete a role, click the check box next to each role, and then click the delete icon.

You can also view the users that are assigned to a role and edit associations between roles and your custom tools from the Role Console. To display the users that are
assigned to a role, click the description in the Assigned to column of the table. To edit a role association, click the check box next to the role, and then click Custom Tools
Map.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Access Control Group Console


Use the Access Control Group Console to manage your access control groups, which you use to define access privileges for users to specific objects.

The Access Control Group Console displays the access control groups (ACGs) as rows in a table. To customize the columns that are included in the table, click in the upper right side of the table to display the Properties section. Click directly on the column header to sort through that particular column. To search for specific access control groups and display only those rows that match your search criteria, click the search icon.

You can create, view, or edit privileges of an ACG from the Access Control Group Console. You create an ACG to set privileges for each role that is assigned to the ACG, therefore creating privileges for users that are assigned to each role in an ACG. To create an ACG, click the new icon. To view or edit an ACG, click the edit icon in the Actions column for the ACG. You cannot delete an ACG, but you can edit every component of an ACG, effectively creating a new one.

You can also define system-wide access privileges for users from the Access Control Group Console. To define system-wide access, click Edit System-wide Access.

Important: For lookup table permissions, the Catalog section of the Default ACG is used because, internally, a lookup table is treated as a catalog and its entries are treated as items.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Catalog Access Privileges Console


Use the Catalog Access Privileges Console to manage your catalog access privileges, which you use to define the access privileges of catalogs for roles.

The Catalog Access Privileges Console displays the access privileges for the catalogs as rows in a table. To customize the columns that are included in the table, click in
the upper right corner of the table to display the Properties section. Click directly on the column header to sort through that particular column. To search for specific
catalog access privileges and display only those rows that match your search criteria, click at the top of the table.

You can create, edit, and delete catalog access from the Catalog Access Privileges Console. To create a new Catalog Access Privilege, click the new icon at the top of the Catalog Access Privileges Console. To view or edit a Catalog Access Privilege, click the edit icon in the Action column of the table. To delete a Catalog Access Privilege, click the delete icon in the Action column of the table.



IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Defining access for roles


Use the Defining Privileges for Roles Console to set access to the role. If you assign a user to multiple roles, the user inherits the access from each role.

Based on the requirements of different users, you can define access at different levels. There are three main types of access that you can define for roles.

system wide access
screen access
access control group (ACG), also known as object wide access

Note: For a detailed list of ACGs or object wide access, see ACG object types and access.

For directions on how to grant access permissions to a role to access system screens so that users have access to the required screens, see Granting permission to access
screens.

System wide access

Table 1. System wide access
Option Privileges
Scheduler__view_company_jobs Scheduler__view_company_jobs privilege grants access to view all scheduler jobs and schedules in a company.
Screen__view Screen__view privilege grants access to view all screens/pages in a company.
Script__create_modify_scripts Script__create_modify_scripts privilege grants access to create and modify all scripts in a company.
Security__modify_roles_access Security__modify_roles_access privilege grants access to modify all roles and the access privileges in roles in a company.
Security__modify_users Security__modify_users privilege grants access to modify all users and their related information in a company.
Spec__modify_spec Spec__modify_spec privilege grants access to modify all specs in a company.
Spec__modify_spec_map Spec__modify_spec_map privilege grants access to modify all spec maps in a company.

Screen access
Table 2. Screen access
Screen Java API
Screen ID Screen name Screen description Privilege description
privilege screen privilege
Home_tab Home PAGE_OBJ_HO ScreenPrivilege This privilege grants access
ME_TAB___vie .VIEW_HOME_T to view the Home/Home_tab
w AB_SCREEN screen in a company.
my_home My Home PAGE_OBJ_MY_ ScreenPrivilege. This privilege grants access to
HOME__view VIEW_MY_HOM view the My Home/my_home
E_SCREEN screen in a company.
new_homepage New Home Page PAGE_OBJ_NEW ScreenPrivilege. This privilege grants access to
_ui _HOMEPAGE_UI VIEW_NEW_HO view the New Home
___view MEPAGE_UI_SC Page/new_homepage_ui
REEN screen in a company.
new_worklist_ui New Worklist PAGE_OBJ_NEW ScreenPrivilege. This privilege grants access to
Details Page _WORKLIST_UI VIEW_NEW_WO view the New Worklist Details
___view RKLIST_UI_SCR Page/new_worklist_ui screen
EEN in a company.
new_item_detail New Item Detail PAGE_OBJ_NEW ScreenPrivilege. This privilege grants access to
_ui UI _ITEM_DETAIL_ VIEW_NEW_ITE view the New Item Detail
UI___view M_DETAIL_UI_S UI/new_item_detail_ui screen
CREEN in a company.
new_category_d New Category PAGE_OBJ_NEW ScreenPrivilege. This privilege grants access to
etail_ui Detail UI _CATEGORY_DE VIEW_NEW_CAT view the New Category Detail
TAIL_UI___view EGORY_DETAIL UI/new_category_detail_ui
_UI_SCREEN screen in a company.
my_tasklist My Task List This home page displays information of your choice, on pages PAGE_OBJ_MY_ ScreenPrivilege. This privilege grants access to
designed to your preferences. Get started by clicking Add New TASKLIST___vie VIEW_MY_TASK view the My Task
Page button. w LIST_SCREEN List/my_tasklist screen in a
company.
customize Customize Page PAGE_OBJ_CUS ScreenPrivilege. This privilege grants access to
TOMIZE___view VIEW_CUSTOMI view the Customize
ZE_SCREEN Page/customize screen in a
company.
save_customize Save PAGE_OBJ_SAV ScreenPrivilege. This privilege grants access to
Customization E_CUSTOMIZE_ VIEW_SAVE_CU view the Save
__view STOMIZE_SCRE Customization/save_customiz
EN e screen in a company.

318 IBM Product Master 12.0.0


Screen Java API
Screen ID Screen name Screen description Privilege description
privilege screen privilege
edit_params | Edit Params | - | PAGE_OBJ_EDIT_PARAMS___view | ScreenPrivilege.VIEW_EDIT_PARAMS_SCREEN | This privilege grants access to view the Edit Params/edit_params screen in a company.
save_params | Save Params | - | PAGE_OBJ_SAVE_PARAMS___view | ScreenPrivilege.VIEW_SAVE_PARAMS_SCREEN | This privilege grants access to view the Save Params/save_params screen in a company.
request_detail | Request Detail | Status of Submitted Tasks that Require Approval. | PAGE_OBJ_REQUEST_DETAIL___view | ScreenPrivilege.VIEW_REQUEST_DETAIL_SCREEN | This privilege grants access to view the Request Detail/request_detail screen in a company.
alert_actionables | Actionables | List of actionable tasks. | PAGE_OBJ_ALERT_ACTIONABLES___view | ScreenPrivilege.VIEW_ALERT_ACTIONABLES_SCREEN | This privilege grants access to view the Actionables/alert_actionables screen in a company.
custom_page | Custom Page | Get a custom page from a script. | PAGE_OBJ_CUSTOM_PAGE___view | ScreenPrivilege.VIEW_CUSTOM_PAGE_SCREEN | This privilege grants access to view the Custom Page/custom_page screen in a company.
user_settings | My Settings | Use this screen to review user settings. | PAGE_OBJ_USER_SETTINGS___view | ScreenPrivilege.VIEW_USER_SETTINGS_SCREEN | This privilege grants access to view the My Settings/user_settings screen in a company.
adm_user_view | My Profile | - | PAGE_OBJ_ADM_USER_VIEW___view | ScreenPrivilege.VIEW_ADM_USER_VIEW_SCREEN | This privilege grants access to view the My Profile/adm_user_view screen in a company.
configure_table | Configure Table | - | PAGE_OBJ_CONFIGURE_TABLE___view | ScreenPrivilege.VIEW_CONFIGURE_TABLE_SCREEN | This privilege grants access to view the Configure Table/configure_table screen in a company.
help_intro | Help Introduction | - | PAGE_OBJ_HELP_INTRO___view | ScreenPrivilege.VIEW_HELP_INTRO_SCREEN | This privilege grants access to view the Help Introduction/help_intro screen in a company.
Content_Mgmt | Product Manager | - | PAGE_OBJ_CONTENT_MGMT___view | ScreenPrivilege.VIEW_CONTENT_MGMT_SCREEN | This privilege grants access to view the Product Manager/Content_Mgmt screen in a company.
catalog_parent | Catalogs | - | PAGE_OBJ_CATALOG_PARENT___view | ScreenPrivilege.VIEW_CATALOG_PARENT_SCREEN | This privilege grants access to view the Catalogs/catalog_parent screen in a company.
ctg_console | Catalog Console | - | PAGE_OBJ_CTG_CONSOLE___view | ScreenPrivilege.VIEW_CTG_CONSOLE_SCREEN | This privilege grants access to view the Catalog Console/ctg_console screen in a company.
cat_create | New Catalog | - | PAGE_OBJ_CAT_CREATE___view | ScreenPrivilege.VIEW_CAT_CREATE_SCREEN | This privilege grants access to view the New Catalog/cat_create screen in a company.
item_location_attributes | Item Location Attributes | - | PAGE_OBJ_ITEM_LOCATION_ATTRIBUTES___view | ScreenPrivilege.VIEW_ITEM_LOCATION_ATTRIBUTES_SCREEN | This privilege grants access to view the Item Location Attributes/item_location_attributes screen in a company.
relation_ctg_create | Define Location Specific Attributes | - | PAGE_OBJ_RELATION_CTG_CREATE___view | ScreenPrivilege.VIEW_RELATION_CTG_CREATE_SCREEN | This privilege grants access to view the Define Location Specific Attributes/relation_ctg_create screen in a company.
ctg_tab_view | Tab View | - | PAGE_OBJ_CTG_TAB_VIEW___view | ScreenPrivilege.VIEW_CTG_TAB_VIEW_SCREEN | This privilege grants access to view the Tab View/ctg_tab_view screen in a company.
item_rollback | Item Rollback | - | PAGE_OBJ_ITEM_ROLLBACK___view | ScreenPrivilege.VIEW_ITEM_ROLLBACK_SCREEN | This privilege grants access to view the Item Rollback/item_rollback screen in a company.
cat_item_list | Item List | - | PAGE_OBJ_CAT_ITEM_LIST___view | ScreenPrivilege.VIEW_CAT_ITEM_LIST_SCREEN | This privilege grants access to view the Item List/cat_item_list screen in a company.

item_list_popup | Item List Popup | - | PAGE_OBJ_ITEM_LIST_POPUP___view | ScreenPrivilege.VIEW_ITEM_LIST_POPUP_SCREEN | This privilege grants access to view the Item List Popup/item_list_popup screen in a company.
cat_add_mod_item | Edit Item | - | PAGE_OBJ_CAT_ADD_MOD_ITEM___view | ScreenPrivilege.VIEW_CAT_ADD_MOD_ITEM_SCREEN | This privilege grants access to view the Edit Item/cat_add_mod_item screen in a company.
item_data_entry | Item Data Entry | - | PAGE_OBJ_ITEM_DATA_ENTRY___view | ScreenPrivilege.VIEW_ITEM_DATA_ENTRY_SCREEN | This privilege grants access to view the Item Data Entry/item_data_entry screen in a company.
data_entry_view_search | Data Entry View Search | - | PAGE_OBJ_DATA_ENTRY_VIEW_SEARCH___view | ScreenPrivilege.VIEW_ENTRY_VIEW_SEARCH_SCREEN | This privilege grants access to view the Data Entry View Search/data_entry_view_search screen in a company.
data_entry_validation | Data Entry Validation Errors | - | PAGE_OBJ_DATA_ENTRY_VALIDATION___view | ScreenPrivilege.VIEW_DATA_ENTRY_VALIDATION_SCREEN | This privilege grants access to view the Data Entry Validation Errors/data_entry_validation screen in a company.
category_data_entry | Category Data Entry | - | PAGE_OBJ_CATEGORY_DATA_ENTRY___view | ScreenPrivilege.VIEW_CATEGORY_DATA_ENTRY_SCREEN | This privilege grants access to view the Category Data Entry/category_data_entry screen in a company.
cat_bulk_edit | Bulk Edit Item | - | PAGE_OBJ_CAT_BULK_EDIT___view | ScreenPrivilege.VIEW_CAT_BULK_EDIT_SCREEN | This privilege grants access to view the Bulk Edit Item/cat_bulk_edit screen in a company.
image_uploader | Image Uploader | - | PAGE_OBJ_IMAGE_UPLOADER___view | ScreenPrivilege.VIEW_IMAGE_UPLOADER_SCREEN | This privilege grants access to view the Image Uploader/image_uploader screen in a company.
doc_uploader_into_cms | Doc Uploader Into CMS | - | PAGE_OBJ_DOC_UPLOADER_INTO_CMS___view | ScreenPrivilege.VIEW_DOC_UPLOADER_INTO_CMS_SCREEN | This privilege grants access to view the Doc Uploader Into CMS/doc_uploader_into_cms screen in a company.
doc_uploader_into_cms_properties | Doc Uploader Into CMS Properties | - | PAGE_OBJ_DOC_UPLOADER_INTO_CMS_PROPERTIES___view | ScreenPrivilege.VIEW_DOC_UPLOADER_INTO_CMS_PROPERTIES_SCREEN | This privilege grants access to view the Doc Uploader Into CMS Properties/doc_uploader_into_cms_properties screen in a company.
search_cms | Search CMS | - | PAGE_OBJ_SEARCH_CMS___view | ScreenPrivilege.VIEW_SEARCH_CMS_SCREEN | This privilege grants access to view the Search CMS/search_cms screen in a company.
tabs_cms | Tabs CMS | - | PAGE_OBJ_TABS_CMS___view | ScreenPrivilege.VIEW_TABS_CMS_SCREEN | This privilege grants access to view the Tabs CMS/tabs_cms screen in a company.
content_preview | Content Preview | - | PAGE_OBJ_CONTENT_PREVIEW___view | ScreenPrivilege.VIEW_CONTENT_PREVIEW_SCREEN | This privilege grants access to view the Content Preview/content_preview screen in a company.
content_properties | Content Properties | - | PAGE_OBJ_CONTENT_PROPERTIES___view | ScreenPrivilege.VIEW_CONTENT_PROPERTIES_SCREEN | This privilege grants access to view the Content Properties/content_properties screen in a company.
content_versions | Content Versions | - | PAGE_OBJ_CONTENT_VERSIONS___view | ScreenPrivilege.VIEW_CONTENT_VERSIONS_SCREEN | This privilege grants access to view the Content Versions/content_versions screen in a company.
content_references | Content References | - | PAGE_OBJ_CONTENT_REFERENCES___view | ScreenPrivilege.VIEW_CONTENT_REFERENCES_SCREEN | This privilege grants access to view the Content References/content_references screen in a company.
entry_preview | Actionable Entry Preview | - | PAGE_OBJ_ENTRY_PREVIEW___view | ScreenPrivilege.VIEW_ENTRY_PREVIEW_SCREEN | This privilege grants access to view the Actionable Entry Preview/entry_preview screen in a company.
show_inheritance_paths | Inheritance Paths | - | PAGE_OBJ_SHOW_INHERITANCE_PATHS___view | ScreenPrivilege.VIEW_SHOW_INHERITANCE_PATHS_SCREEN | This privilege grants access to view the Inheritance Paths/show_inheritance_paths screen in a company.

show_inheritance_entries | Inheritance Entries | - | PAGE_OBJ_SHOW_INHERITANCE_ENTRIES___view | ScreenPrivilege.VIEW_SHOW_INHERITANCE_ENTRIES_SCREEN | This privilege grants access to view the Inheritance Entries/show_inheritance_entries screen in a company.
cat_rollback | Catalog Rollback | - | PAGE_OBJ_CAT_ROLLBACK___view | ScreenPrivilege.VIEW_CAT_ROLLBACK_SCREEN | This privilege grants access to view the Catalog Rollback/cat_rollback screen in a company.
ctg_view | Catalog View | Use this screen to define/edit user views on a catalog. | PAGE_OBJ_CTG_VIEW___view | ScreenPrivilege.VIEW_CTG_VIEW_SCREEN | This privilege grants access to view the Catalog View/ctg_view screen in a company.
ctr_view | Hierarchy View | Use this screen to define/edit user views on a hierarchy. | PAGE_OBJ_CTR_VIEW___view | ScreenPrivilege.VIEW_CTR_VIEW_SCREEN | This privilege grants access to view the Hierarchy View/ctr_view screen in a company.
catalog_detail | Catalog Detail | - | PAGE_OBJ_CATALOG_DETAIL___view | ScreenPrivilege.VIEW_CATALOG_DETAIL_SCREEN | This privilege grants access to view the Catalog Detail/catalog_detail screen in a company.
container_preview | Container Preview Popup | - | PAGE_OBJ_CONTAINER_PREVIEW___view | ScreenPrivilege.VIEW_CONTAINER_PREVIEW_SCREEN | This privilege grants access to view the Container Preview Popup/container_preview screen in a company.
catalog_version_statistics | Catalog Version Statistics | - | PAGE_OBJ_CATALOG_VERSION_STATISTICS___view | ScreenPrivilege.VIEW_CATALOG_VERSION_STATISTICS_SCREEN | This privilege grants access to view the Catalog Version Statistics/catalog_version_statistics screen in a company.
loc_inh_setup | Setup Location Attributes Inheritance | Specify the attribute groups on which to perform inheritance for this location hierarchy and catalog. | PAGE_OBJ_LOC_INH_SETUP___view | ScreenPrivilege.VIEW_LOC_INH_SETUP_SCREEN | This privilege grants access to view the Setup Location Attributes Inheritance/loc_inh_setup screen in a company.
cat_differences | Catalog Differences | Compare two versions of a catalog and highlight modifications made to items and categories. Also revert to an older version of a specific item or category. | PAGE_OBJ_CAT_DIFFERENCES___view | ScreenPrivilege.VIEW_CAT_DIFFERENCES_SCREEN | This privilege grants access to view the Catalog Differences/cat_differences screen in a company.
cat_item_differences | View Differences | - | PAGE_OBJ_CAT_ITEM_DIFFERENCES___view | ScreenPrivilege.VIEW_CAT_ITEM_DIFFERENCES_SCREEN | This privilege grants access to view the View Differences/cat_item_differences screen in a company.
select_diff_cat_items | Differences Options | - | PAGE_OBJ_SELECT_DIFF_CAT_ITEMS___view | ScreenPrivilege.VIEW_SELECT_DIFF_CAT_ITEMS_SCREEN | This privilege grants access to view the Differences Options/select_diff_cat_items screen in a company.
select_diff_type | Differences Types | - | PAGE_OBJ_SELECT_DIFF_TYPE___view | ScreenPrivilege.VIEW_SELECT_DIFF_TYPE_SCREEN | This privilege grants access to view the Differences Types/select_diff_type screen in a company.
cat_delete | Catalog Delete | - | PAGE_OBJ_CAT_DELETE___view | ScreenPrivilege.VIEW_CAT_DELETE_SCREEN | This privilege grants access to view the Catalog Delete/cat_delete screen in a company.
cat_user_defined_logs | Catalog Activity | Report User-defined Log activity for selected period. | PAGE_OBJ_CAT_USER_DEFINED_LOGS___view | ScreenPrivilege.VIEW_CAT_USER_DEFINED_LOGS_SCREEN | This privilege grants access to view the Catalog Activity/cat_user_defined_logs screen in a company.
cat_run_preview_script | Catalog Preview | Run a catalog preview script. | PAGE_OBJ_CAT_RUN_PREVIEW_SCRIPT___view | ScreenPrivilege.VIEW_CAT_RUN_PREVIEW_SCRIPT_SCREEN | This privilege grants access to view the Catalog Preview/cat_run_preview_script screen in a company.
catalog_to_catalog_export_main | Catalog to Catalog Export | Export content to a catalog. Specify items, version, and attributes to be exported. Map catalog content to target catalog and create data validation or conversion rules. | PAGE_OBJ_CATALOG_TO_CATALOG_EXPORT_MAIN___view | ScreenPrivilege.VIEW_CATALOG_TO_CATALOG_EXPORT_MAIN_SCREEN | This privilege grants access to view the Catalog to Catalog Export/catalog_to_catalog_export_main screen in a company.
catalog_to_catalog_export_console | Catalog to Catalog Export Console | Export content to a catalog. Specify items, version, and attributes to be exported. Map catalog content to target catalog and create data validation or conversion rules. | PAGE_OBJ_CATALOG_TO_CATALOG_EXPORT_CONSOLE___view | ScreenPrivilege.VIEW_CATALOG_TO_CATALOG_EXPORT_CONSOLE_SCREEN | This privilege grants access to view the Catalog to Catalog Export Console/catalog_to_catalog_export_console screen in a company.

new_catalog_to_catalog_export | New Catalog to Catalog Export | Set up an export of a catalog's content (whole or partial) to a new catalog. | PAGE_OBJ_NEW_CATALOG_TO_CATALOG_EXPORT___view | ScreenPrivilege.VIEW_NEW_CATALOG_TO_CATALOG_EXPORT_SCREEN | This privilege grants access to view the New Catalog to Catalog Export/new_catalog_to_catalog_export screen in a company.
cat_content | Hierarchies | Add, edit, and manage multiple categories and map categories to each other. | PAGE_OBJ_CAT_CONTENT___view | ScreenPrivilege.VIEW_CAT_CONTENT_SCREEN | This privilege grants access to view the Hierarchies/cat_content screen in a company.
category_tree_console | Hierarchy Console | - | PAGE_OBJ_CATEGORY_TREE_CONSOLE___view | ScreenPrivilege.VIEW_CATEGORY_TREE_CONSOLE_SCREEN | This privilege grants access to view the Hierarchy Console/category_tree_console screen in a company.
ctr_create | New Hierarchy | Allows the user to create a new hierarchy. | PAGE_OBJ_CTR_CREATE___view | ScreenPrivilege.VIEW_CTR_CREATE_SCREEN | This privilege grants access to view the New Hierarchy/ctr_create screen in a company.
ctr_rollback | Hierarchy Rollback | - | PAGE_OBJ_CTR_ROLLBACK___view | ScreenPrivilege.VIEW_CTR_ROLLBACK_SCREEN | This privilege grants access to view the Hierarchy Rollback/ctr_rollback screen in a company.
ctr_delete | Hierarchy Delete | - | PAGE_OBJ_CTR_DELETE___view | ScreenPrivilege.VIEW_CTR_DELETE_SCREEN | This privilege grants access to view the Hierarchy Delete/ctr_delete screen in a company.
category_tree_detail | Hierarchy Detail | - | PAGE_OBJ_CATEGORY_TREE_DETAIL___view | ScreenPrivilege.VIEW_CATEGORY_TREE_DETAIL_SCREEN | This privilege grants access to view the Hierarchy Detail/category_tree_detail screen in a company.
select_mkt_category | Item Categorization | - | PAGE_OBJ_SELECT_MKT_CATEGORY___view | ScreenPrivilege.VIEW_SELECT_MKT_CATEGORY_SCREEN | This privilege grants access to view the Item Categorization/select_mkt_category screen in a company.
select_spec_attrib | Select Spec Attributes | - | PAGE_OBJ_SELECT_SPEC_ATTRIB___view | ScreenPrivilege.VIEW_SELECT_SPEC_ATTRIB_SCREEN | This privilege grants access to view the Select Spec Attributes/select_spec_attrib screen in a company.
category_tree_map_parent | Hierarchy Maps | - | PAGE_OBJ_CATEGORY_TREE_MAP_PARENT___view | ScreenPrivilege.VIEW_CATEGORY_TREE_MAP_PARENT_SCREEN | This privilege grants access to view the Hierarchy Maps/category_tree_map_parent screen in a company.
category_tree_map_console | Hierarchy Mapping Console | - | PAGE_OBJ_CATEGORY_TREE_MAP_CONSOLE___view | ScreenPrivilege.VIEW_CATEGORY_TREE_MAP_CONSOLE_SCREEN | This privilege grants access to view the Hierarchy Mapping Console/category_tree_map_console screen in a company.
cat_map_cat_generic | Map Hierarchies | - | PAGE_OBJ_CAT_MAP_CAT_GENERIC___view | ScreenPrivilege.VIEW_CAT_MAP_CAT_GENERIC_SCREEN | This privilege grants access to view the Map Hierarchies/cat_map_cat_generic screen in a company.
load_ccm_map | Update Items with Hierarchy Map | - | PAGE_OBJ_LOAD_CCM_MAP___view | ScreenPrivilege.VIEW_LOAD_CCM_MAP_SCREEN | This privilege grants access to view the Update Items with Hierarchy Map/load_ccm_map screen in a company.
selections_parent | Selections | - | PAGE_OBJ_SELECTIONS_PARENT___view | ScreenPrivilege.VIEW_SELECTIONS_PARENT_SCREEN | This privilege grants access to view the Selections/selections_parent screen in a company.
selections_console | Item Selections | This screen allows you to create groups of individual items at the per-item level, by category, or by defining a rule or set of rules to use in order to create the selection of items. | PAGE_OBJ_SELECTIONS_CONSOLE___view | ScreenPrivilege.VIEW_SELECTIONS_CONSOLE_SCREEN | This privilege grants access to view the Item Selections/selections_console screen in a company.
cat_preview_select | Preview Selection | Preview the items that have been chosen by an item selection. | PAGE_OBJ_CAT_PREVIEW_SELECT___view | ScreenPrivilege.VIEW_CAT_PREVIEW_SELECT_SCREEN | This privilege grants access to view the Preview Selection/cat_preview_select screen in a company.
cat_browse_select | Static Selection | Create a grouping of items using static forms of selection, such as selecting an entire category or each item individually. | PAGE_OBJ_CAT_BROWSE_SELECT___view | ScreenPrivilege.VIEW_CAT_BROWSE_SELECT_SCREEN | This privilege grants access to view the Static Selection/cat_browse_select screen in a company.

cat_item_list_popup | Item List Popup | - | PAGE_OBJ_CAT_ITEM_LIST_POPUP___view | ScreenPrivilege.VIEW_CAT_ITEM_LIST_POPUP_SCREEN | This privilege grants access to view the Item List Popup/cat_item_list_popup screen in a company.
cat_browse_select_popup | Browse Select Popup | Create a grouping of items using static forms of selection, such as selecting an entire category or each item individually. | PAGE_OBJ_CAT_BROWSE_SELECT_POPUP___view | ScreenPrivilege.VIEW_CAT_BROWSE_SELECT_POPUP_SCREEN | This privilege grants access to view the Browse Select Popup/cat_browse_select_popup screen in a company.
cat_item_sel_dynamic | Dynamic Selection | Create a grouping of items using dynamic forms of selection, such as grouping together all items that cost more than $50 or all items that have had more than 30 orders in the past week. | PAGE_OBJ_CAT_ITEM_SEL_DYNAMIC___view | ScreenPrivilege.VIEW_CAT_ITEM_SEL_DYNAMIC_SCREEN | This privilege grants access to view the Dynamic Selection/cat_item_sel_dynamic screen in a company.
expression_editor | Expression Editor | - | PAGE_OBJ_EXPRESSION_EDITOR___view | ScreenPrivilege.VIEW_EXPRESSION_EDITOR_SCREEN | This privilege grants access to view the Expression Editor/expression_editor screen in a company.
validation_rule_editor | Validation Rule Editor | - | PAGE_OBJ_VALIDATION_RULE_EDITOR___view | ScreenPrivilege.VIEW_VALIDATION_RULE_EDITOR_SCREEN | This privilege grants access to view the Validation Rule Editor/validation_rule_editor screen in a company.
reports | Reports | - | PAGE_OBJ_REPORTS___view | ScreenPrivilege.VIEW_REPORTS_SCREEN | This privilege grants access to view the Reports/reports screen in a company.
report_console | Reports Console | - | PAGE_OBJ_REPORT_CONSOLE___view | ScreenPrivilege.VIEW_REPORT_CONSOLE_SCREEN | This privilege grants access to view the Reports Console/report_console screen in a company.
report_def | Create/Edit Reports | - | PAGE_OBJ_REPORT_DEF___view | ScreenPrivilege.VIEW_REPORT_DEF_SCREEN | This privilege grants access to view the Create/Edit Reports/report_def screen in a company.
report_view | Preview a Report | - | PAGE_OBJ_REPORT_VIEW___view | ScreenPrivilege.VIEW_REPORT_VIEW_SCREEN | This privilege grants access to view the Preview a Report/report_view screen in a company.
user_defined_logs_console | User Defined Logs Console | Console displaying user defined logs for a container. | PAGE_OBJ_USER_DEFINED_LOGS_CONSOLE___view | ScreenPrivilege.VIEW_USER_DEFINED_LOGS_CONSOLE_SCREEN | This privilege grants access to view the User Defined Logs Console/user_defined_logs_console screen in a company.
user_defined_logs_detail | User Defined Log Detail | Screen to view details of a user defined log. | PAGE_OBJ_USER_DEFINED_LOGS_DETAILS___view | ScreenPrivilege.VIEW_USER_DEFINED_LOGS_DETAILS_SCREEN | This privilege grants access to view the User Defined Log Detail/user_defined_logs_detail screen in a company.
user_defined_logs_for_entry | User Defined Logs For Entry | Screen displaying all logs for particular entry {Item|Category}. | PAGE_OBJ_USER_DEFINED_LOGS_FOR_ENTRY___view | ScreenPrivilege.VIEW_USER_DEFINED_LOGS_FOR_ENTRY_SCREEN | This privilege grants access to view the User Defined Logs For Entry/user_defined_logs_for_entry screen in a company.
supplier_history | Company Activity History | - | PAGE_OBJ_SUPPLIER_HISTORY___view | ScreenPrivilege.VIEW_SUPPLIER_HISTORY_SCREEN | This privilege grants access to view the Company Activity History/supplier_history screen in a company.
lkp_parent | Lookup Tables | - | PAGE_OBJ_LKP_PARENT___view | ScreenPrivilege.VIEW_LKP_PARENT_SCREEN | This privilege grants access to view the Lookup Tables/lkp_parent screen in a company.
lkp_main | Lookup Table Console | Gets the information about the lookup tables available for a particular supplier. | PAGE_OBJ_LKP_MAIN___view | ScreenPrivilege.VIEW_LKP_MAIN_SCREEN | This privilege grants access to view the Lookup Table Console/lkp_main screen in a company.
lkp_add | New Lookup Table | - | PAGE_OBJ_LKP_ADD___view | ScreenPrivilege.VIEW_LKP_ADD_SCREEN | This privilege grants access to view the New Lookup Table/lkp_add screen in a company.
lkp_table_popup | Lookup Table Popup | - | PAGE_OBJ_LKP_TABLE_POPUP___view | ScreenPrivilege.VIEW_LKP_TABLE_POPUP_SCREEN | This privilege grants access to view the Lookup Table Popup/lkp_table_popup screen in a company.
lkp_import_res | Lookup Table Import Result | - | PAGE_OBJ_LKP_IMPORT_RES___view | ScreenPrivilege.VIEW_LKP_IMPORT_RES_SCREEN | This privilege grants access to view the Lookup Table Import Result/lkp_import_res screen in a company.

Connectivity | Collaboration Manager | - | PAGE_OBJ_CTR_CONNECTIVITY___view | ScreenPrivilege.VIEW_CONNECTIVITY_SCREEN | This privilege grants access to view the Collaboration Manager/Connectivity screen in a company.
feeds_main | Imports | Create and run data feeds to import product information, taxonomies, and binary files. Map source files to Product Master and create validation rules to be applied to imported data. | PAGE_OBJ_FEEDS_MAIN___view | ScreenPrivilege.VIEW_FEEDS_MAIN_SCREEN | This privilege grants access to view the Imports/feeds_main screen in a company.
import_center | Import Console | Create and run data feeds to import product information, taxonomies, and binary files. Map source files to Product Master and create validation rules to be applied to imported data. | PAGE_OBJ_IMPORT_CENTER___view | ScreenPrivilege.VIEW_IMPORT_CENTER_SCREEN | This privilege grants access to view the Import Console/import_center screen in a company.
add_new_feed | New Import | Add a new feed. | PAGE_OBJ_ADD_NEW_FEED___view | ScreenPrivilege.VIEW_ADD_NEW_FEED_SCREEN | This privilege grants access to view the New Import/add_new_feed screen in a company.
feed_fetch_handler | Feed Fetch Handler | - | PAGE_OBJ_FEED_FETCH_HANDLER___view | ScreenPrivilege.VIEW_FEED_FETCH_HANDLER_SCREEN | This privilege grants access to view the Feed Fetch Handler/feed_fetch_handler screen in a company.
fetch_process_push_www | Process Browser Fetch | - | PAGE_OBJ_FETCH_PROCESS_PUSH_WWW___view | ScreenPrivilege.VIEW_FETCH_PROCESS_PUSH_WWW_SCREEN | This privilege grants access to view the Process Browser Fetch/fetch_process_push_www screen in a company.
load_process | Process Load | - | PAGE_OBJ_LOAD_PROCESS___view | ScreenPrivilege.VIEW_LOAD_PROCESS_SCREEN | This privilege grants access to view the Process Load/load_process screen in a company.
syndication_main | Exports | Export content to multiple destinations. Specify items, version, and attributes to be exported. Map Product Master content to target destinations and create data validation or conversion rules. | PAGE_OBJ_SYNDICATION_MAIN___view | ScreenPrivilege.VIEW_SYNDICATION_MAIN_SCREEN | This privilege grants access to view the Exports/syndication_main screen in a company.
export_center | Export Console | Export content to multiple destinations. Specify items, version, and attributes to be exported. Map Product Master content to target destinations and create data validation or conversion rules. | PAGE_OBJ_EXPORT_CENTER___view | ScreenPrivilege.VIEW_EXPORT_CENTER_SCREEN | This privilege grants access to view the Export Console/export_center screen in a company.
cat_ctg_to_mkt_upload | New Export | Schedule a catalog to be published to a marketplace. | PAGE_OBJ_CAT_CTG_TO_MKT_UPLOAD___view | ScreenPrivilege.VIEW_CAT_CTG_TO_MKT_UPLOAD_SCREEN | This privilege grants access to view the New Export/cat_ctg_to_mkt_upload screen in a company.
collaboration_top | Collaboration Areas | The Collaboration screens allow the user to create Item and Category sets for collaboration. | PAGE_OBJ_COLLABORATION_TOP___view | ScreenPrivilege.VIEW_COLLABORATION_TOP_SCREEN | This privilege grants access to view the Collaboration Areas/collaboration_top screen in a company.
collaboration_console | Collaboration Area Console | - | PAGE_OBJ_COLLABORATION_CONSOLE___view | ScreenPrivilege.VIEW_COLLABORATION_CONSOLE_SCREEN | This privilege grants access to view the Collaboration Area Console/collaboration_console screen in a company.
col_area_item_differences | View Differences | - | PAGE_OBJ_COL_AREA_ITEM_DIFFERENCES___view | ScreenPrivilege.VIEW_COL_AREA_ITEM_DIFFERENCES_SCREEN | This privilege grants access to view the View Differences/col_area_item_differences screen in a company.
col_area_info | Information on Checked Out Attributes | - | PAGE_OBJ_COL_AREA_INFO___view | ScreenPrivilege.VIEW_COL_AREA_INFO_SCREEN | This privilege grants access to view the Information on Checked Out Attributes/col_area_info screen in a company.
collaboration_area_setup | New Collaboration Area | - | PAGE_OBJ_COLLABORATION_AREA_SETUP___view | ScreenPrivilege.VIEW_COLLABORATION_AREA_SETUP_SCREEN | This privilege grants access to view the New Collaboration Area/collaboration_area_setup screen in a company.
view_collaboration_area_setup | Collaboration Area Details | - | PAGE_OBJ_VIEW_COLLABORATION_AREA_SETUP___view | ScreenPrivilege.VIEW_VIEW_COLLABORATION_AREA_SETUP_SCREEN | This privilege grants access to view the Collaboration Area Details/view_collaboration_area_setup screen in a company.
queue_top | Queues | - | PAGE_OBJ_QUEUE_TOP___view | ScreenPrivilege.VIEW_QUEUE_TOP_SCREEN | This privilege grants access to view the Queues/queue_top screen in a company.
queue_console | Queue Console | Lists all the queues. | PAGE_OBJ_QUEUE_CONSOLE___view | ScreenPrivilege.VIEW_QUEUE_CONSOLE_SCREEN | This privilege grants access to view the Queue Console/queue_console screen in a company.

message_console | Message Console | Lists the messages for queues. | PAGE_OBJ_MESSAGE_CONSOLE___view | ScreenPrivilege.VIEW_MESSAGE_CONSOLE_SCREEN | This privilege grants access to view the Message Console/message_console screen in a company.
queue_detail | New Queue | Lists the details for a queue. | PAGE_OBJ_QUEUE_DETAIL___view | ScreenPrivilege.VIEW_QUEUE_DETAIL_SCREEN | This privilege grants access to view the New Queue/queue_detail screen in a company.
queue_msg_list | Message List for Queue | Lists the message types that can be received by a queue. | PAGE_OBJ_QUEUE_MSG_LIST___view | ScreenPrivilege.VIEW_QUEUE_MSG_LIST_SCREEN | This privilege grants access to view the Message List for Queue/queue_msg_list screen in a company.
queue_msg_type_detail | Message Type Detail | Set up the detail for a message type. | PAGE_OBJ_QUEUE_MSG_TYPE_DETAIL___view | ScreenPrivilege.VIEW_QUEUE_MSG_TYPE_DETAIL_SCREEN | This privilege grants access to view the Message Type Detail/queue_msg_type_detail screen in a company.
invoke_queue | Invoke Queue | - | PAGE_OBJ_INVOKE_QUEUE___view | ScreenPrivilege.VIEW_INVOKE_QUEUE_SCREEN | This privilege grants access to view the Invoke Queue/invoke_queue screen in a company.
wbs_top | Web Services | - | PAGE_OBJ_WBS_TOP___view | ScreenPrivilege.VIEW_WBS_TOP_SCREEN | This privilege grants access to view the Web Services/wbs_top screen in a company.
wbs_console | Web Service Console | Lists all the Web services. | PAGE_OBJ_WBS_CONSOLE___view | ScreenPrivilege.VIEW_WBS_CONSOLE_SCREEN | This privilege grants access to view the Web Service Console/wbs_console screen in a company.
transaction_console | Transaction Console | Lists the recorded transactions for Web services. | PAGE_OBJ_TRANSACTION_CONSOLE___view | ScreenPrivilege.VIEW_TRANSACTION_CONSOLE_SCREEN | This privilege grants access to view the Transaction Console/transaction_console screen in a company.
wbs_detail | New Web Service | Lists the details for a Web service. | PAGE_OBJ_WBS_DETAIL___view | ScreenPrivilege.VIEW_WBS_DETAIL_SCREEN | This privilege grants access to view the New Web Service/wbs_detail screen in a company.
adm_browse_docstore | Document Store | Browse the document store. | PAGE_OBJ_ADM_BROWSE_DOCSTORE___view | ScreenPrivilege.VIEW_ADM_BROWSE_DOCSTORE_SCREEN | This privilege grants access to view the Document Store/adm_browse_docstore screen in a company.
edit_script | View/Edit Script | - | PAGE_OBJ_EDIT_SCRIPT___view | ScreenPrivilege.VIEW_EDIT_SCRIPT_SCREEN | This privilege grants access to view the View/Edit Script/edit_script screen in a company.
edit_input_params | View/Edit Input Parameters | - | PAGE_OBJ_EDIT_INPUT_PARAMS___view | ScreenPrivilege.VIEW_EDIT_INPUT_PARAMS_SCREEN | This privilege grants access to view the View/Edit Input Parameters/edit_input_params screen in a company.
edit_doc_access | Edit Document Access | - | PAGE_OBJ_EDIT_DOC_ACCESS___view | ScreenPrivilege.VIEW_EDIT_DOC_ACCESS_SCREEN | This privilege grants access to view the Edit Document Access/edit_doc_access screen in a company.
edit_subscript | View/Edit Script Part | - | PAGE_OBJ_EDIT_SUBSCRIPT___view | ScreenPrivilege.VIEW_EDIT_SUBSCRIPT_SCREEN | This privilege grants access to view the View/Edit Script Part/edit_subscript screen in a company.
adm_view_document | View Document | - | PAGE_OBJ_ADM_VIEW_DOCUMENT___view | ScreenPrivilege.VIEW_ADM_VIEW_DOCUMENT_SCREEN | This privilege grants access to view the View Document/adm_view_document screen in a company.
adm_view_document_content | View Document Content | - | PAGE_OBJ_ADM_VIEW_DOCUMENT_CONTENT___view | ScreenPrivilege.VIEW_ADM_VIEW_DOCUMENT_CONTENT_SCREEN | This privilege grants access to view the View Document Content/adm_view_document_content screen in a company.
adm_invalidate_script | Invalidate Script Cache | - | PAGE_OBJ_ADM_INVALIDATE_SCRIPT___view | ScreenPrivilege.VIEW_ADM_INVALIDATE_SCRIPT_SCREEN | This privilege grants access to view the Invalidate Script Cache/adm_invalidate_script screen in a company.
invoker | Invoker | - | PAGE_OBJ_INVOKER___view | ScreenPrivilege.VIEW_INVOKER_SCREEN | This privilege grants access to view the Invoker/invoker screen in a company.

secure_invoker | Secure Invoker | - | PAGE_OBJ_SECURE_INVOKER___view | ScreenPrivilege.VIEW_SECURE_INVOKER_SCREEN | This privilege grants access to view the Secure Invoker/secure_invoker screen in a company.
data_src_parent | Data Sources | - | PAGE_OBJ_DATA_SRC_PARENT___view | ScreenPrivilege.VIEW_DATA_SRC_PARENT_SCREEN | This privilege grants access to view the Data Sources/data_src_parent screen in a company.
data_src_console | Data Source Console | - | PAGE_OBJ_DATA_SRC_CONSOLE___view | ScreenPrivilege.VIEW_DATA_SRC_CONSOLE_SCREEN | This privilege grants access to view the Data Source Console/data_src_console screen in a company.
data_source_create | New Data Source | - | PAGE_OBJ_DATA_SOURCE_CREATE___view | ScreenPrivilege.VIEW_DATA_SOURCE_CREATE_SCREEN | This privilege grants access to view the New Data Source/data_source_create screen in a company.
distribution_parent | Routing | Use this set of screens to create and edit distribution lists. | PAGE_OBJ_DISTRIBUTION_PARENT___view | ScreenPrivilege.VIEW_DISTRIBUTION_PARENT_SCREEN | This privilege grants access to view the Routing/distribution_parent screen in a company.
distribution_table | Routing Console | - | PAGE_OBJ_DISTRIBUTION_TABLE___view | ScreenPrivilege.VIEW_DISTRIBUTION_TABLE_SCREEN | This privilege grants access to view the Routing Console/distribution_table screen in a company.
distribution | New Distribution | - | PAGE_OBJ_DISTRIBUTION___view | ScreenPrivilege.VIEW_DISTRIBUTION_SCREEN | This privilege grants access to view the New Distribution/distribution screen in a company.
select_distribution | Routing Selection | - | PAGE_OBJ_SELECT_DISTRIBUTION___view | ScreenPrivilege.VIEW_SELECT_DISTRIBUTION_SCREEN | This privilege grants access to view the Routing Selection/select_distribution screen in a company.
distribution_group | New Distribution Group | - | PAGE_OBJ_DISTRIBUTION_GROUP___view | ScreenPrivilege.VIEW_DISTRIBUTION_GROUP_SCREEN | This privilege grants access to view the New Distribution Group/distribution_group screen in a company.
generated_files | Generated Files | You can list and search all the catalogs you have generated for the different marketplaces. | PAGE_OBJ_GENERATED_FILES___view | ScreenPrivilege.VIEW_GENERATED_FILES_SCREEN | This privilege grants access to view the Generated Files/generated_files screen in a company.
setup | Data Model Manager | - | PAGE_OBJ_SETUP___view | ScreenPrivilege.VIEW_SETUP_SCREEN | This privilege grants access to view the Data Model Manager/setup screen in a company.
schedule_main_page | Scheduler | Use this screen to do any scheduler-related activities. | PAGE_OBJ_SCHEDULE_MAIN_PAGE___view | ScreenPrivilege.VIEW_SCHEDULE_MAIN_PAGE_SCREEN | This privilege grants access to view the Scheduler/schedule_main_page screen in a company.
job_console | Jobs Console | - | PAGE_OBJ_JOB_CONSOLE___view | ScreenPrivilege.VIEW_JOB_CONSOLE_SCREEN | This privilege grants access to view the Jobs Console/job_console screen in a company.
job_details | Job Details | - | PAGE_OBJ_JOB_DETAILS___view | ScreenPrivilege.VIEW_JOB_DETAILS_SCREEN | This privilege grants access to view the Job Details/job_details screen in a company.
sch_schedule_main | Schedule | For adding and removing schedules. | PAGE_OBJ_SCH_SCHEDULE_MAIN___view | ScreenPrivilege.VIEW_SCH_SCHEDULE_MAIN_SCREEN | This privilege grants access to view the Schedule/sch_schedule_main screen in a company.
schedule_popup | Schedule Information | This screen displays information about a given schedule. | PAGE_OBJ_SCHEDULE_POPUP___view | ScreenPrivilege.VIEW_SCHEDULE_POPUP_SCREEN | This privilege grants access to view the Schedule Information/schedule_popup screen in a company.
complete_status | Schedule Information | This screen displays information about a given schedule. | PAGE_OBJ_COMPLETE_STATUS___view | ScreenPrivilege.VIEW_COMPLETE_STATUS_SCREEN | This privilege grants access to view the Schedule Information/complete_status screen in a company.
update_schedule | upd sch | - | PAGE_OBJ_UPDATE_SCHEDULE___view | ScreenPrivilege.VIEW_UPDATE_SCHEDULE_SCREEN | This privilege grants access to view the upd sch/update_schedule screen in a company.

entry_processor_runs | Entry Processor Status | Displays status of Item processors. | PAGE_OBJ_ENTRY_PROCESSOR_RUNS___view | ScreenPrivilege.VIEW_ENTRY_PROCESSOR_RUNS_SCREEN | This privilege grants access to view the Entry Processor Status/entry_processor_runs screen in a company.
sch_schedule_status | Schedule Status | For checking the status of a schedule. | PAGE_OBJ_SCH_SCHEDULE_STATUS___view | ScreenPrivilege.VIEW_SCH_SCHEDULE_STATUS_SCREEN | This privilege grants access to view the Schedule Status/sch_schedule_status screen in a company.
schedule_run_info | Schedule Run Information | - | PAGE_OBJ_SCHEDULE_RUN_INFO___view | ScreenPrivilege.VIEW_SCHEDULE_RUN_INFO_SCREEN | This privilege grants access to view the Schedule Run Information/schedule_run_info screen in a company.
schedule_run_perf_info | Schedule Run Performance Information | - | PAGE_OBJ_SCHEDULE_RUN_PERF_INFO___view | ScreenPrivilege.VIEW_SCHEDULE_RUN_PERF_INFO_SCREEN | This privilege grants access to view the Schedule Run Performance Information/schedule_run_perf_info screen in a company.
schedule_run_debug_info | Schedule Run Debug Information | - | PAGE_OBJ_SCHEDULE_RUN_DEBUG_INFO___view | ScreenPrivilege.VIEW_SCHEDULE_RUN_DEBUG_INFO_SCREEN | This privilege grants access to view the Schedule Run Debug Information/schedule_run_debug_info screen in a company.
schedule_run_progress_info | Schedule Run Progress Information | - | PAGE_OBJ_SCHEDULE_RUN_PROGRESS_INFO___view | ScreenPrivilege.VIEW_SCHEDULE_RUN_PROGRESS_INFO_SCREEN | This privilege grants access to view the Schedule Run Progress Information/schedule_run_progress_info screen in a company.
schedule_run_progress_compare | Schedule Run Progress Comparison | - | PAGE_OBJ_SCHEDULE_RUN_PROGRESS_COMPARE___view | ScreenPrivilege.VIEW_SCHEDULE_RUN_PROGRESS_COMPARE_SCREEN | This privilege grants access to view the Schedule Run Progress Comparison/schedule_run_progress_compare screen in a company.
schedule_stop | Stop schedule | - | PAGE_OBJ_SCHEDULE_STOP___view | ScreenPrivilege.VIEW_SCHEDULE_STOP_SCREEN | This privilege grants access to view the Stop schedule/schedule_stop screen in a company.
cat_spec | Specs/Mappings | View, Create, and Edit Specs and Mappings for Files, Catalogs, and Marketplaces. | PAGE_OBJ_CAT_SPEC___view | ScreenPrivilege.VIEW_CAT_SPEC_SCREEN | This privilege grants access to view the Specs/Mappings/cat_spec screen in a company.
specs_console | Specs Console | View, Create, and Edit Specs for Files, Catalogs, and Marketplaces. | PAGE_OBJ_SPECS_CONSOLE___view | ScreenPrivilege.VIEW_SPECS_CONSOLE_SCREEN | This privilege grants access to view the Specs Console/specs_console screen in a company.
spec_tree | Spec Tree Create/Edit | Screen to create/edit Specs. | PAGE_OBJ_SPEC_TREE___view | ScreenPrivilege.VIEW_SPEC_TREE_SCREEN | This privilege grants access to view the Spec Tree Create/Edit/spec_tree screen in a company.
enter_spec_to_db | Spec Tree to DB | - | PAGE_OBJ_ENTER_SPEC_TO_DB___view | ScreenPrivilege.VIEW_ENTER_SPEC_TO_DB_SCREEN | This privilege grants access to view the Spec Tree to DB/enter_spec_to_db screen in a company.
spec_enum_popup | Spec Enum Popup | PopUp Screen to show enums. | PAGE_OBJ_SPEC_ENUM_POPUP___view | ScreenPrivilege.VIEW_SPEC_ENUM_POPUP_SCREEN | This privilege grants access to view the Spec Enum Popup/spec_enum_popup screen in a company.
sub_spec_popup | Sub Spec Popup | PopUp Screen to select subspec for addition. | PAGE_OBJ_SUB_SPEC_POPUP___view | ScreenPrivilege.VIEW_SUB_SPEC_POPUP_SCREEN | This privilege grants access to view the Sub Spec Popup/sub_spec_popup screen in a company.
subnode_spec_associations | Edit Spec Associations For Node | Use this screen to specify if a subspec node should be part of the specs associated with the node's subspec. | PAGE_OBJ_SUBNODE_SPEC_ASSOCIATIONS___view | ScreenPrivilege.VIEW_SUBNODE_SPEC_ASSOCIATIONS_SCREEN | This privilege grants access to view the Edit Spec Associations For Node/subnode_spec_associations screen in a company.
node_group_associations | Edit Attribute Collection Associations For Node | Use this screen to specify if a node should be part of the Attribute Collection. | PAGE_OBJ_NODE_GROUP_ASSOCIATIONS___view | ScreenPrivilege.VIEW_NODE_GROUP_ASSOCIATIONS_SCREEN | This privilege grants access to view the Edit Attribute Collection Associations For Node/node_group_associations screen in a company.

node_tab_tabular_associations | Edit Tabular Associations For Node | Use this screen to specify if a grouping node present in a view tab should be displayed tabular or not. | PAGE_OBJ_NODE_TAB_TABULAR_ASSOCIATIONS___view | ScreenPrivilege.VIEW_NODE_TAB_TABULAR_ASSOCIATIONS_SCREEN | This privilege grants access to view the Edit Tabular Associations For Node/node_tab_tabular_associations screen in a company.
import_spec | Import Spec | - | PAGE_OBJ_IMPORT_SPEC___view | ScreenPrivilege.VIEW_IMPORT_SPEC_SCREEN | This privilege grants access to view the Import Spec/import_spec screen in a company.
impexp_spec | Import/Export Spec | - | PAGE_OBJ_IMPEXP_SPEC___view | ScreenPrivilege.VIEW_IMPEXP_SPEC_SCREEN | This privilege grants access to view the Import/Export Spec/impexp_spec screen in a company.
spec_map_parent | Spec Maps | - | PAGE_OBJ_SPEC_MAP_PARENT___view | ScreenPrivilege.VIEW_SPEC_MAP_PARENT_SCREEN | This privilege grants access to view the Spec Maps/spec_map_parent screen in a company.
mapping_console | Spec Map Console | Create new Spec Mappings. | PAGE_OBJ_MAPPING_CONSOLE___view | ScreenPrivilege.VIEW_MAPPING_CONSOLE_SCREEN | This privilege grants access to view the Spec Map Console/mapping_console screen in a company.
spec_map | Spec Maps | - | PAGE_OBJ_SPEC_MAP___view | ScreenPrivilege.VIEW_SPEC_MAP_SCREEN | This privilege grants access to view the Spec Maps/spec_map screen in a company.
inheritance_rule_console | Inheritance Rule Console | Screen to view/create/edit inheritance rules for a catalog or hierarchy. | PAGE_OBJ_INHERITANCE_RULE_CONSOLE___view | ScreenPrivilege.VIEW_INHERITANCE_RULE_CONSOLE_SCREEN | This privilege grants access to view the Inheritance Rule Console/inheritance_rule_console screen in a company.
create_inheritance_rule | Create Inheritance Rule | Screen to view/create/edit inheritance rules for a catalog or category tree. | PAGE_OBJ_CREATE_INHERITANCE_RULE___view | ScreenPrivilege.VIEW_CREATE_INHERITANCE_RULE_SCREEN | This privilege grants access to view the Create Inheritance Rule/create_inheritance_rule screen in a company.
attribute_groups_parent | Attribute Collections | - | PAGE_OBJ_ATTRIBUTE_GROUPS_PARENT___view | ScreenPrivilege.VIEW_ATTRIBUTE_GROUPS_PARENT_SCREEN | This privilege grants access to view the Attribute Collections/attribute_groups_parent screen in a company.
attribute_group_console | Attribute Collection Console | - | PAGE_OBJ_ATTRIBUTE_GROUP_CONSOLE___view | ScreenPrivilege.VIEW_ATTRIBUTE_GROUP_CONSOLE_SCREEN | This privilege grants access to view the Attribute Collection Console/attribute_group_console screen in a company.
edit_attribute_group | New Attribute Collection | - | PAGE_OBJ_EDIT_ATTRIBUTE_GROUP___view | ScreenPrivilege.VIEW_EDIT_ATTRIBUTE_GROUP_SCREEN | This privilege grants access to view the New Attribute Collection/edit_attribute_group screen in a company.
scripts_pages | Scripting | - | PAGE_OBJ_SCRIPTS_PAGES___view | ScreenPrivilege.VIEW_SCRIPTS_PAGES_SCREEN | This privilege grants access to view the Scripting/scripts_pages screen in a company.
scripts_console | Scripts Console | View, Create, and Edit Scripts. | PAGE_OBJ_SCRIPTS_CONSOLE___view | ScreenPrivilege.VIEW_SCRIPTS_CONSOLE_SCREEN | This privilege grants access to view the Scripts Console/scripts_console screen in a company.
adm_script_sandbox | Script Sandbox | Playing with the scripting engine. | PAGE_OBJ_ADM_SCRIPT_SANDBOX___view | ScreenPrivilege.VIEW_ADM_SCRIPT_SANDBOX_SCREEN | This privilege grants access to view the Script Sandbox/adm_script_sandbox screen in a company.
adm_org_user | Security | Manage users, roles and access. | PAGE_OBJ_ADM_ORG_USER___view | ScreenPrivilege.VIEW_ADM_ORG_USER_SCREEN | This privilege grants access to view the Security/adm_org_user screen in a company.
adm_display_users | User Console | - | PAGE_OBJ_ADM_DISPLAY_USERS___view | ScreenPrivilege.VIEW_ADM_DISPLAY_USERS_SCREEN | This privilege grants access to view the User Console/adm_display_users screen in a company.
adm_sec_user_role_map | User-Role Map | - | PAGE_OBJ_ADM_SEC_USER_ROLE_MAP___view | ScreenPrivilege.VIEW_ADM_SEC_USER_ROLE_MAP_SCREEN | This privilege grants access to view the User-Role Map/adm_sec_user_role_map screen in a company.
adm_usr_profile | My Profile | Use this screen to review your personal information. | PAGE_OBJ_ADM_USR_PROFILE___view | ScreenPrivilege.VIEW_ADM_USR_PROFILE_SCREEN | This privilege grants access to view the My Profile/adm_usr_profile screen in a company.

user_brief_view | User Information | - | PAGE_OBJ_USER_BRIEF_VIEW___view | ScreenPrivilege.VIEW_USER_BRIEF_VIEW_SCREEN | This privilege grants access to view the User Information/user_brief_view screen in a company.
adm_sec_roles | Role Console | - | PAGE_OBJ_ADM_SEC_ROLES___view | ScreenPrivilege.VIEW_ADM_SEC_ROLES_SCREEN | This privilege grants access to view the Role Console/adm_sec_roles screen in a company.
adm_display_users_for_role | Users for Role | - | PAGE_OBJ_ADM_DISPLAY_USERS_FOR_ROLE___view | ScreenPrivilege.VIEW_ADM_DISPLAY_USERS_FOR_ROLE_SCREEN | This privilege grants access to view the Users for Role/adm_display_users_for_role screen in a company.
role_security_access | Role-Security Access Map | - | PAGE_OBJ_ROLE_SECURITY_ACCESS___view | ScreenPrivilege.VIEW_ROLE_SECURITY_ACCESS_SCREEN | This privilege grants access to view the Role-Security Access Map/role_security_access screen in a company.
edit_screen_access_popup | Edit Screen Access | - | PAGE_OBJ_EDIT_SCREEN_ACCESS_POPUP___view | ScreenPrivilege.VIEW_EDIT_SCREEN_ACCESS_POPUP_SCREEN | This privilege grants access to view the Edit Screen Access/edit_screen_access_popup screen in a company.
adm_role_controller | Role Controller | - | PAGE_OBJ_ADM_ROLE_CONTROLLER___view | ScreenPrivilege.VIEW_ADM_ROLE_CONTROLLER_SCREEN | This privilege grants access to view the Role Controller/adm_role_controller screen in a company.
role_to_script_map | Role to Product Master App Map | Use this screen to define/edit the mapping of roles to Product Master applications. | PAGE_OBJ_ROLE_TO_SCRIPT_MAP___view | ScreenPrivilege.VIEW_ROLE_TO_SCRIPT_MAP_SCREEN | This privilege grants access to view the Role to Product Master App Map/role_to_script_map screen in a company.
map_container_to_locales | Container to Locales Map | - | PAGE_OBJ_MAP_CONTAINER_TO_LOCALES___view | ScreenPrivilege.VIEW_MAP_CONTAINER_TO_LOCALES_SCREEN | This privilege grants access to view the Container to Locales Map/map_container_to_locales screen in a company.
adm_company_attributes | Company Attributes | Manage company attributes. | PAGE_OBJ_ADM_COMPANY_ATTRIBUTES___view | ScreenPrivilege.VIEW_ADM_COMPANY_ATTRIBUTES_SCREEN | This privilege grants access to view the Company Attributes/adm_company_attributes screen in a company.
acg_parent | Access Control Groups | - | PAGE_OBJ_ACG_PARENT___view | ScreenPrivilege.VIEW_ACG_PARENT_SCREEN | This privilege grants access to view the Access Control Groups/acg_parent screen in a company.
map_object_to_access_control_group | Object to Access Control Group Map | The screen allows the user to map an object to a different Access Control Group. | PAGE_OBJ_MAP_OBJECT_TO_ACCESS_CONTROL_GROUP___view | ScreenPrivilege.VIEW_MAP_OBJECT_TO_ACCESS_CONTROL_GROUP_SCREEN | This privilege grants access to view the Object to Access Control Group Map/map_object_to_access_control_group screen in a company.
access_control_group_console | Access Control Group Console | - | PAGE_OBJ_ACCESS_CONTROL_GROUP_CONSOLE___view | ScreenPrivilege.VIEW_ACCESS_CONTROL_GROUP_CONSOLE_SCREEN | This privilege grants access to view the Access Control Group Console/access_control_group_console screen in a company.
edit_access_control_group | Edit Access Control Group | - | PAGE_OBJ_EDIT_ACCESS_CONTROL_GROUP___view | ScreenPrivilege.VIEW_EDIT_ACCESS_CONTROL_GROUP_SCREEN | This privilege grants access to view the Edit Access Control Group/edit_access_control_group screen in a company.
edit_system_wide_access | Edit System-wide Access | - | PAGE_OBJ_EDIT_SYSTEM_WIDE_ACCESS___view | ScreenPrivilege.VIEW_EDIT_SYSTEM_WIDE_ACCESS_SCREEN | This privilege grants access to view the Edit System-wide Access/edit_system_wide_access screen in a company.
access_parent | Access Privileges | - | PAGE_OBJ_ACCESS_PARENT___view | ScreenPrivilege.VIEW_ACCESS_PARENT_SCREEN | This privilege grants access to view the Access Privileges/access_parent screen in a company.
ctg_access_console | Catalog Access Privilege Console | List of catalogs and the access privileges associated with them. | PAGE_OBJ_CTG_ACCESS_CONSOLE___view | ScreenPrivilege.VIEW_CTG_ACCESS_CONSOLE_SCREEN | This privilege grants access to view the Catalog Access Privilege Console/ctg_access_console screen in a company.

ctg_access_prv | New Catalog Access Privileges | Use this screen to define Viewing and Editing access of a role on a catalog. | PAGE_OBJ_CTG_ACCESS_PRV___view | ScreenPrivilege.VIEW_CTG_ACCESS_PRV_SCREEN | This privilege grants access to view the New Catalog Access Privileges/ctg_access_prv screen in a company.
ctr_access_console | Hierarchy Access Privilege Console | List of hierarchies and the access privileges associated with them. | PAGE_OBJ_CTR_ACCESS_CONSOLE___view | ScreenPrivilege.VIEW_CTR_ACCESS_CONSOLE_SCREEN | This privilege grants access to view the Hierarchy Access Privilege Console/ctr_access_console screen in a company.
ctr_access_prv | New Hierarchy Access Privileges | Use this screen to define Viewing and Editing access of a role on a hierarchy. | PAGE_OBJ_CTR_ACCESS_PRV___view | ScreenPrivilege.VIEW_CTR_ACCESS_PRV_SCREEN | This privilege grants access to view the New Hierarchy Access Privileges/ctr_access_prv screen in a company.
adm_alg_parent | Activity Logs | - | PAGE_OBJ_ADM_ALG_PARENT___view | ScreenPrivilege.VIEW_ADM_ALG_PARENT_SCREEN | This privilege grants access to view the Activity Logs/adm_alg_parent screen in a company.
adm_alg_main | Activity Log | The activity log enables you to monitor any user on the site. When logging is turned on for a particular user, the pages visited by that user are logged into the database along with the date and time of the visit. | PAGE_OBJ_ADM_ALG_MAIN___view | ScreenPrivilege.VIEW_ADM_ALG_MAIN_SCREEN | This privilege grants access to view the Activity Log/adm_alg_main screen in a company.
adm_alg_viewlog | View Log | - | PAGE_OBJ_ADM_ALG_VIEWLOG___view | ScreenPrivilege.VIEW_ADM_ALG_VIEWLOG_SCREEN | This privilege grants access to view the View Log/adm_alg_viewlog screen in a company.
adm_login_history_view | Login History | - | PAGE_OBJ_ADM_LOGIN_HISTORY_VIEW___view | ScreenPrivilege.VIEW_ADM_LOGIN_HISTORY_VIEW_SCREEN | This privilege grants access to view the Login History/adm_login_history_view screen in a company.
cal_company_preference | Event Audit | Select the audit events to be exposed to the users. | PAGE_OBJ_CAL_COMPANY_PREFERENCE___view | ScreenPrivilege.VIEW_CAL_COMPANY_PREFERENCE_SCREEN | This privilege grants access to view the Event Audit/cal_company_preference screen in a company.
event_details | Event Details | - | PAGE_OBJ_EVENT_DETAILS___view | ScreenPrivilege.VIEW_EVENT_DETAILS_SCREEN | This privilege grants access to view the Event Details/event_details screen in a company.
site_map | Navigation Map | - | PAGE_OBJ_SITE_MAP___view | ScreenPrivilege.VIEW_SITE_MAP_SCREEN | This privilege grants access to view the Navigation Map/site_map screen in a company.
no_privilege | Not Authorized | - | PAGE_OBJ_NO_PRIVILEGE___view | ScreenPrivilege.VIEW_NO_PRIVILEGE_SCREEN | This privilege grants access to view the Not Authorized/no_privilege screen in a company.
renderer | Renderer | - | PAGE_OBJ_RENDERER___view | ScreenPrivilege.VIEW_RENDERER_SCREEN | This privilege grants access to view the Renderer/renderer screen in a company.
search_template | Search Template | - | PAGE_OBJ_SEARCH_TEMPLATE___view | ScreenPrivilege.VIEW_SEARCH_TEMPLATE_SCREEN | This privilege grants access to view the Search Template/search_template screen in a company.
test_widgets | Test Widgets | - | PAGE_OBJ_TEST_WIDGETS___view | ScreenPrivilege.VIEW_TEST_WIDGETS_SCREEN | This privilege grants access to view the Test Widgets/test_widgets screen in a company.
consoles | Consoles | - | PAGE_OBJ_CONSOLES___view | ScreenPrivilege.VIEW_CONSOLES_SCREEN | This privilege grants access to view the Consoles/consoles screen in a company.
change_look | Display Styles | Change the look of the Product Master application. | PAGE_OBJ_CHANGE_LOOK___view | ScreenPrivilege.VIEW_CHANGE_LOOK_SCREEN | This privilege grants access to view the Display Styles/change_look screen in a company.
alerts | Alerts Server | - | PAGE_OBJ_ALERTS___view | ScreenPrivilege.VIEW_ALERTS_SCREEN | This privilege grants access to view the Alerts Server/alerts screen in a company.
show_alert_subs | Alerts Console | Manage your subscriptions for alerts. | PAGE_OBJ_SHOW_ALERT_SUBS___view | ScreenPrivilege.VIEW_SHOW_ALERT_SUBS_SCREEN | This privilege grants access to view the Alerts Console/show_alert_subs screen in a company.

alert_sub_detail | Alerts Description Detail | - | PAGE_OBJ_ALERT_SUB_DETAIL___view | ScreenPrivilege.VIEW_ALERT_SUB_DETAIL_SCREEN | This privilege grants access to view the Alerts Description Detail/alert_sub_detail screen in a company.
subscribe_alerts | New Alert Subscription | Add subscriptions for alerts. | PAGE_OBJ_SUBSCRIBE_ALERTS___view | ScreenPrivilege.VIEW_SUBSCRIBE_ALERTS_SCREEN | This privilege grants access to view the New Alert Subscription/subscribe_alerts screen in a company.
select_user_ids | User Selection | - | PAGE_OBJ_SELECT_USER_IDS___view | ScreenPrivilege.VIEW_SELECT_USER_SCREEN | This privilege grants access to view the User Selection/select_user_ids screen in a company.
alerts_display | Alerts Display | Use this to search for and view alerts. | PAGE_OBJ_ALERTS_DISPLAY___view | ScreenPrivilege.VIEW_ALERTS_DISPLAY_SCREEN | This privilege grants access to view the Alerts Display/alerts_display screen in a company.
alert_detail | Alerts Detail | - | PAGE_OBJ_ALERT_DETAIL___view | ScreenPrivilege.VIEW_ALERT_DETAIL_SCREEN | This privilege grants access to view the Alerts Detail/alert_detail screen in a company.
staging_area_parent | Staging Areas | - | PAGE_OBJ_STAGING_AREA_PARENT___view | ScreenPrivilege.VIEW_STAGING_AREA_PARENT_SCREEN | This privilege grants access to view the Staging Areas/staging_area_parent screen in a company.
staging_area_console | Staging Area Console | - | PAGE_OBJ_STAGING_AREA_CONSOLE___view | ScreenPrivilege.VIEW_STAGING_AREA_CONSOLE_SCREEN | This privilege grants access to view the Staging Area Console/staging_area_console screen in a company.
staging_area_create | New Staging Area | - | PAGE_OBJ_STAGING_AREA_CREATE___view | ScreenPrivilege.VIEW_STAGING_AREA_CREATE_SCREEN | This privilege grants access to view the New Staging Area/staging_area_create screen in a company.
wfl_top | Workflow | - | PAGE_OBJ_WFL_TOP___view | ScreenPrivilege.VIEW_WFL_TOP_SCREEN | This privilege grants access to view the Workflow/wfl_top screen in a company.
workflow_console | Workflow Console | - | PAGE_OBJ_WORKFLOW_CONSOLE___view | ScreenPrivilege.VIEW_WORKFLOW_CONSOLE_SCREEN | This privilege grants access to view the Workflow Console/workflow_console screen in a company.
workflow_step_setup | Workflow Step Setup | - | PAGE_OBJ_WORKFLOW_STEP_SETUP___view | ScreenPrivilege.VIEW_WORKFLOW_STEP_SETUP_SCREEN | This privilege grants access to view the Workflow Step Setup/workflow_step_setup screen in a company.
select_performers | Performers Selection | - | PAGE_OBJ_SELECT_PERFORMERS___view | ScreenPrivilege.VIEW_SELECT_PERFORMERS_SCREEN | This privilege grants access to view the Performers Selection/select_performers screen in a company.
select_attr_groups_for_wfl_step | Attribute Collections Selection | - | PAGE_OBJ_SELECT_ATTR_GROUPS_FOR_WFL_STEP___view | ScreenPrivilege.VIEW_SELECT_ATTR_GROUPS_FOR_WFL_STEP_SCREEN | This privilege grants access to view the Attribute Collections Selection/select_attr_groups_for_wfl_step screen in a company.
wfl_step_view | Workflow Step View | - | PAGE_OBJ_WFL_SETP_VIEW___view | ScreenPrivilege.VIEW_WFL_SETP_VIEW_SCREEN | This privilege grants access to view the Workflow Step View/wfl_step_view screen in a company.
display_wfl_nodes | Nodes Display | - | PAGE_OBJ_DISPLAY_WFL_NODES___view | ScreenPrivilege.VIEW_DISPLAY_WFL_NODES_SCREEN | This privilege grants access to view the Nodes Display/display_wfl_nodes screen in a company.
add_next_steps_for_wfl_exit_values | Nodes Selection | - | PAGE_OBJ_ADD_NEXT_STEPS_FOR_WFL_EXIT_VALUES___view | ScreenPrivilege.VIEW_ADD_NEXT_STEPS_FOR_WFL_EXIT_VALUES_SCREEN | This privilege grants access to view the Nodes Selection/add_next_steps_for_wfl_exit_values screen in a company.
workflow_edit_gui | Workflow GUI Creation | - | PAGE_OBJ_WORKFLOW_EDIT_GUI___view | ScreenPrivilege.VIEW_WORKFLOW_EDIT_GUI_SCREEN | This privilege grants access to view the Workflow GUI Creation/workflow_edit_gui screen in a company.
workflow_view_gui | Workflow GUI | - | PAGE_OBJ_WORKFLOW_VIEW_GUI___view | ScreenPrivilege.VIEW_WORKFLOW_VIEW_GUI_SCREEN | This privilege grants access to view the Workflow GUI/workflow_view_gui screen in a company.

IBM Product Master 12.0.0 331


Screen Java API
Screen ID Screen name Screen description Privilege description
privilege screen privilege
workflow_setup New Workflow PAGE_OBJ_WO ScreenPrivilege. This privilege grants access to
RKFLOW_SETUP VIEW_WORKFL view the New
___view OW_SETUP_SCR Workflow/workflow_setup
EEN screen in a company.
item_workflow_ Perform PAGE_OBJ_ITE ScreenPrivilege. This privilege grants access to
step Workflow Step M_WORKFLOW_ VIEW_ITEM_WO view the Perform Workflow
STEP___view RKFLOW_STEP_ Step/item_workflow_step
SCREEN screen in a company.
admin_tab System PAGE_OBJ_AD ScreenPrivilege This privilege grants access
Administrator MIN_TAB___vie .VIEW_ADMIN_ to view the System
w TAB_SCREEN Administrator/admin_tab
screen in a company.
aud_search Search Audit PAGE_OBJ_AUD ScreenPrivilege. This privilege grants access to
Logs _SEARCH___vie VIEW_AUD_SEA view the Search Audit
w RCH_SCREEN Logs/aud_search screen in a
company.
adm_db_query_ Database PAGE_OBJ_ADM ScreenPrivilege. This privilege grants access to
detail Administration _DB_QUERY_DE VIEW_ADM_DB_ view the Database
TAIL___view QUERY_DETAIL Administration/adm_db_quer
_SCREEN y_detail screen in a company.
analyze_db Analyze DB Select tables to analyze. PAGE_OBJ_ANA ScreenPrivilege. This privilege grants access to
Tables LYZE_DB___vie VIEW_ANALYZE view the Analyze DB
w _DB_SCREEN Tables/analyze_db screen in a
company.
adm_perf_main App. You can monitor various parts of the system and monitor the PAGE_OBJ_ADM ScreenPrivilege. This privilege grants access to
Performance JVM using the various tools available. These allow you to _PERF_MAIN__ VIEW_ADM_PE view the App.
monitor the number of running processes, and other info about _view RF_MAIN_SCRE Performance/adm_perf_main
the system health. Logging levels and other server data can EN screen in a company.
also be controlled from here.
adm_perf_profil Application Get profiling data for every page and executable. PAGE_OBJ_ADM ScreenPrivilege. This privilege grants access to
e Profiling _PERF_PROFILE VIEW_ADM_PE view the Application
___view RF_PROFILE_SC Profiling/adm_perf_profile
REEN screen in a company.
adm_perf_profil Analyze DB Analyze the profiling data. PAGE_OBJ_ADM ScreenPrivilege. This privilege grants access to
e_analysis Profiles _PERF_PROFILE VIEW_ADM_PE view the Analyze DB
_ANALYSIS___vi RF_PROFILE_A Profiles/adm_perf_profile_an
ew NALYSIS_SCREE alysis screen in a company.
N
adm_perf_info Monitor Check the performance of your JVM by noting the times for PAGE_OBJ_ADM ScreenPrivilege. This privilege grants access to
Performance page completions. _PERF_INFO___ VIEW_ADM_PE view the Monitor
view RF_INFO_SCREE Performance/adm_perf_info
N screen in a company.
adm_db_perf_in Monitor Check the performance of your JVM by noting the times for PAGE_OBJ_ADM ScreenPrivilege. This privilege grants access to
fo DataBase various Database calls. _DB_PERF_INFO VIEW_ADM_DB view the Monitor DataBase
Performance ___view _PERF_INFO_SC Performance/adm_db_perf_in
REEN fo screen in a company.
spec_cache Caches Screen to show information about objects being cached in PAGE_OBJ_SPE ScreenPrivilege. This privilege grants access to
various caches. C_CACHE___vie VIEW_SPEC_CA view the Caches/spec_cache
w CHE_SCREEN screen in a company.
adm_cfg_proper Properties View configuration file properties. PAGE_OBJ_ADM ScreenPrivilege. This privilege grants access to
ties _CFG_PROPERT VIEW_ADM_CFG view the
IES___view _PROPERTIES_ Properties/adm_cfg_properti
SCREEN es screen in a company.
tail_log Show Tail of PAGE_OBJ_TAIL ScreenPrivilege. This privilege grants access to
System Logs _LOG___view VIEW_TAIL_LOG view the Show Tail of System
_SCREEN Logs/tail_log screen in a
company.
proc_tail Tail Log Result Shows tail portion of the system logs. PAGE_OBJ_PRO ScreenPrivilege. This privilege grants access to
C_TAIL___view VIEW_PROC_TA view the Tail Log
IL_SCREEN Result/proc_tail screen in a
company.
adm_system_st System Status PAGE_OBJ_ADM ScreenPrivilege. This privilege grants access to
atus _SYSTEM_STAT VIEW_ADM_SYS view the System
US___view TEM_STATUS_S Status/adm_system_status
CREEN screen in a company.
ccd_profiler_ma Product Master PAGE_OBJ_CCD ScreenPrivilege. This privilege grants access to
in Profiler _PROFILER_MA VIEW_CCD_PRO view the Product Master
IN___view FILER_MAIN_S Profiler/ccd_profiler_main
CREEN screen in a company.
import_env_mai Import Use this screen to import the company environment. PAGE_OBJ_IMP ScreenPrivilege. This privilege grants access to
n Environment ORT_ENV_MAIN VIEW_IMPORT_ view the Import
___view ENV_MAIN_SCR Environment/import_env_mai
EEN n screen in a company.

332 IBM Product Master 12.0.0


Screen Java API
Screen ID Screen name Screen description Privilege description
privilege screen privilege
selective_export Selective Export PAGE_OBJ_SEL ScreenPrivilege. This privilege grants access to
ECTIVE_EXPOR VIEW_SELECTIV view the Selective
T___view E_EXPORT_SCR Export/selective_export
EEN screen in a company.
selective_import Selective Import PAGE_OBJ_SEL ScreenPrivilege. This privilege grants access to
ECTIVE_IMPOR VIEW_SELECTIV view the Selective
T___view E_IMPORT_SCR Import/selective_import
EEN screen in a company.
fetch_execute Fetch Execute PAGE_OBJ_FET ScreenPrivilege. This privilege grants access to
CH_EXECUTE__ VIEW_FETCH_E view the Fetch
_view XECUTE_SCREE Execute/fetch_execute screen
N in a company.
adm_blob_size_ Blob Size Check the size distributions of various paths in the document PAGE_OBJ_ADM ScreenPrivilege. This privilege grants access to
dist Distribution store. _BLOB_SIZE_DI VIEW_ADM_BL view the Blob Size
ST___view OB_SIZE_DIST_ Distribution/adm_blob_size_d
SCREEN ist screen in a company.
adm_eff_compa Change Change your effective organization to access customer PAGE_OBJ_ADM ScreenPrivilege. This privilege grants access to
ny Organization accounts. _EFF_COMPANY VIEW_ADM_EFF view the Change
___view _COMPANY_SCR Organization/adm_eff_compa
EEN ny screen in a company.
widgets_unit_te Widgets Unit PAGE_OBJ_WID ScreenPrivilege. This privilege grants access to
sts Tests GETS_UNIT_TES VIEW_WIDGETS view the Widgets Unit
TS___view _UNIT_TESTS_S Tests/widgets_unit_tests
CREEN screen in a company.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Hierarchy Access Privilege Console


Use the Hierarchy Access Privilege Console to manage user-based hierarchy access permissions.

The Hierarchy Access Privilege Console displays the hierarchy access permissions as rows in a table. To customize the columns that are included in the table, click the icon in the upper right corner of the table to display the Properties section. Click a column header to sort by that column. To search for specific hierarchy access privileges and display only the rows that match your search criteria, click the search icon at the top of the table.

You can create, modify, and delete hierarchy access permissions from the Hierarchy Access Privilege Console. To define access permissions to a hierarchy for a particular user, click the create icon at the top of the console. To update access permissions to a hierarchy or to provide access to a hierarchy to different users, click the edit icon in the Action column for the hierarchy access permission. To delete access permission to a hierarchy, click the delete icon in the Action column for the hierarchy access permission.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Activity Log
Use the Activity Log to monitor the activities of users. You can enable logging user activities and send notifications to users from the Activity Log.

The Activity Log displays a list of users as rows in a table. Users who are currently logged on are marked with an icon. To modify logging and notification activities, select the appropriate check boxes in the row of the user, and then click Update.
Use the Notify Users panel to send notifications to users. You can send notifications to all users or only to those users who are currently logged on.

An email address must be specified for each user who is to receive the email, and each address must be in the form a@b.c, for example, jdoe@abc.def.com. If any entry in the email address list is malformed, the send email function in the Activity Log screen does not work.
Note: If an entry in the email address list is of the form a@b.c but the address is not reachable, a delivery failure notification is sent to the email address associated with the Product Master user who performed the Send function.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Alerts Subscription Console


Use the Alert Subscription Console to manage alerts, which identify the occurrence of an event.



The Alerts Subscription Console displays a list of alerts, categorized by the event types that you can define subscriptions for. To subscribe to an alert, click the subscribe icon next to the event type. To delete an alert, click the delete icon next to the alert.

When you subscribe to an alert, the alert is enabled by default and the enabled icon displays next to the alert. To disable an alert for some period of time, click the icon next to the alert; the selected alert becomes unavailable and the disabled icon displays next to it.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Staging Area Console


Use the Staging Area Console to manage staging areas, which you can use to stage data changes at an item level. A staging area can be associated with an export. The export operation, when run, populates the associated staging area with documents.

The Staging Area Console displays the staging areas as rows in a table. To customize the columns that are included in the table, click the icon in the upper right corner of the table to display the Properties section. Click a column header to sort by that column. To search for specific staging areas and display only the rows that match your search criteria, click the search icon at the top of the table.

You can create and view details of staging areas from the Staging Area Console. To create a new staging area, click Create new Staging Area at the top of the console. To view all the associated files of a staging area in the Document Store, click the name of the staging area. To view the content of a document in a staging area, click the document name.

To view the audit log for a document in a staging area, click the audit log icon next to the document.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Workflow Console
Use the Workflow Console to manage your workflows, which implement business processes for managing product information management (PIM) data in IBM® Product Master.

The Workflow Console displays individual workflows as rows in a table. To customize the columns that are included in the table, click the icon in the upper right corner of the table to display the Properties section. Click a column header to sort by that column.

In the Workflow Console, you can create workflow instances that contain a series of steps, where the steps correspond to a specific business process. You can then edit, integrate, set alerts for, or delete these workflows to handle your overall PIM processes. To create a new workflow, click the create icon at the top of the Workflow Console. To edit a workflow, click the radio button next to the workflow in the Select column, and then click the edit icon at the top of the Workflow Console. To delete a workflow, click the radio button next to the workflow in the Select column, and then click the delete icon.

You can also view the details about the access control groups and collaboration areas that are associated with a workflow. To view the access control group details, click the access control group name in the ACG column. To view the collaboration area, click the collaboration area name in the Collaboration Areas column.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

System administrator module


You can use the System Administrator module to support, maintain, and troubleshoot the IBM® Product Master environment. You can review configuration settings, view
system log files, view system status, and analyze database and application performance.

Business users should not have access to the System Administrator module because users can modify and even delete data from the system and from the database. Only
administrators or developers should have access to this module.

Audit
Use the Audit panel to search for and display all activities performed by a user in IBM Product Master in a defined time period. You can view details about users who
log on or log out of the application, items or roles that were created, and import and export activities from the Audit panel.
DB Admin
Use the DB Admin panel to run simple SQL queries. Ensure that you run queries that do not jeopardize the integrity of the data in the IBM Product Master database.
Performance Info
The Performance Info menu includes links to panels where you can view profiling statistics, analyze performance data, and view the details of cached objects.
Properties
Use the Properties panel to view the current configuration files.
Log Directory Listing
Use the Log Directory Listing panel to view the details of your log files.



System Status
Use the System Status panel to manage system services. The Product Master system consists of six types of services that run concurrently: admin server, scheduler service, workflow engine, event processor, queue manager, and application server.
Profiler
Use the Profiler tool to capture performance metrics of your JVM services: admin server, application server, event processor, queue manager, scheduler service, and
workflow engine.
Import Environment
Use the Import Company Environment pane to import an entire instance of IBM Product Master. This tool addresses the content modeling administration needs of the IBM Product Master production environment.
Selective Export
Use the Selective Export feature to export objects that are specified by you to a compressed file.
Selective Import
Use the Selective Import feature to import specific objects into the IBM Product Master.
Size Distribution
Use the Size Distribution panel to view distribution of files in the document store.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Audit
Use the Audit panel to search for and display all activities performed by a user in IBM® Product Master in a defined time period. You can view details about users who log
on or log out of the application, items or roles that were created, and import and export activities from the Audit panel.

In the Audit Log Search panel, enter the details of an event to search for. To customize the columns that are included in the table, click the icon in the upper right corner of the table to display the Properties section. Click a column header to sort by that column. To display the search results, specify the user and the date range, and then click the search icon at the bottom of the search interface.

The details about events that match the specified search criteria are displayed in the Search Audit Logs panel. To view more details about the user who performed an activity, click the link on the user name.

All audit logs are globalized and display in the language of your current locale.

Trimming records from auditing tables


The TAUD_ALO_AUDIT_LOG table in the database contains auditing information that is safe to delete. This table is an audit log for specific events, such as password changes and items added to a catalog.

Remember: These instructions apply strictly to the mentioned table only; applying similar changes to other application tables will cause the application to malfunction.
Important: Back up your database before running any of the SQL statements or doing any database actions.
Depending on the number of records in the table, the delete statements might take a while to run. The SQL statements below assume that you want to keep records after March 1, 2009. You can run all the SQL statements through the Product Master UI by going to System Administrator > DB Admin.

Estimate records for Db2: select count(*) from alo where date(ALO_TIMESTAMP) < '2009-03-01'

Delete records for Db2: delete from alo where date(ALO_TIMESTAMP) < '2009-03-01'

Estimate records for Oracle: select count(*) from alo where ALO_TIMESTAMP < to_date('2009-03-01','YYYY-MM-DD')

Delete records for Oracle: delete from alo where ALO_TIMESTAMP < to_date('2009-03-01','YYYY-MM-DD')
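
If you prefer a rolling retention window instead of a fixed cutoff date, the same statements can be written against the current date. The following is a minimal sketch that assumes a 90-day retention period; the interval is an example only, so adjust it to your retention policy:

Db2: delete from alo where date(ALO_TIMESTAMP) < current date - 90 days

Oracle: delete from alo where ALO_TIMESTAMP < sysdate - 90

As with the fixed-date statements, run the corresponding count(*) query first to estimate how many rows the delete will remove.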

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

DB Admin
Use the DB Admin panel to run simple SQL queries. Ensure that you run queries that do not jeopardize the integrity of the data in the IBM® Product Master database.

Business users should not have access to the System Administrator console because users can modify and even delete data from the system and from the database. Only administrators or developers should have access to this console. Any access restrictions that are specified in the product UI have no effect for users who can run SQL in the DB Admin panel.

The DB Admin panel displays a list of saved SQL queries in the History pane. To run an SQL query, enter the query in the SQL Command pane and click Run Query at the top of the panel. To view the execution plan for the SQL query, click Explain Plan at the top of the panel.

To erase an existing SQL query and enter a new query, click CLEAR in the SQL Command pane.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)



Performance Info
The Performance Info menu includes links to panels where you can view profiling statistics, analyze performance data, and view the details of cached objects.

Profiling
Use the Profiling panel to view profiling statistics for pages, executables, and workflow events.
Performance
Use the Performance panel to monitor the performance of Java™ virtual machines (JVMs) in IBM Product Master.
Database Performance
Use the Database Performance panel to monitor the performance of all the pages in IBM Product Master.
Caches
Use the Caches panel to view details of various objects that are cached in IBM Product Master.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Profiling
Use the Profiling panel to view profiling statistics for pages, executables, and workflow events.

The Profiling panel displays a list of profiles as rows in separate tables. The page, executable, and workflow event profiles display in their respective tables: Page Profiles,
Executable Profiles, and Workflow Event Profiles. For each table, a profile provides details on the time duration of any given activity. An activity can be anything from a
click on the screen to a job, for example, an import job or a workflow event.

The Page Profiles table lists the details of a page's loading time after a link is clicked in the user interface. The Executable Profiles table lists the time duration of every job that was run by the scheduler, and the Workflow Event Profiles table lists the time duration of every workflow event that was invoked by item transitions.

To narrow the listed profiles in the Workflow Event Profiles table, enter a pattern in the Pattern field and a time duration in the Time Duration field and click Submit Query.

To view details of an action that is listed in each table, click the action in the Action column. To flush all profile details for a given table, click the flush icon at the top right of the table.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Performance
Use the Performance panel to monitor the performance of Java™ virtual machines (JVMs) in IBM® Product Master.

You can search for and view details of JVMs from the Performance panel. To view details of a JVM, select the JVM from the Search menu, and click the search icon. The Performance Results pane displays the details of the selected JVM. To clear all performance measurements, click Flush to DB.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Database Performance
Use the Database Performance panel to monitor the performance of all the pages in IBM® Product Master.

You can analyze the performance of each page by viewing the length of time it spends executing queries in the database. You can also view the number of times each
query is made and the average time duration it takes for each query to complete.

The Database Performance panel displays the current performance results for each page in the Current Performance Result pane. The Query Performance Statistics pane
displays query details across all pages. To delete the existing performance results and statistics, click CLEAR at the top of the panel.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Caches



Use the Caches panel to view details of various objects that are cached in IBM® Product Master.

The Caches panel displays the details of caches as rows in a table. To display details of caches for different objects, select the required object from the menu at the top of the Caches panel. To refresh the cache information, click the refresh icon at the top of the panel.

To flush the cache, click Flush Cache at the top of the panel.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Properties
Use the Properties panel to view the current configuration files.

The Properties panel displays the current configuration files. You can only view the details of the parameters from the Properties panel. To edit the parameter values, open
the specific configuration file with a text editor outside of IBM® Product Master.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Log Directory Listing


Use the Log Directory Listing panel to view the details of your log files.

The Log Directory Listing panel displays the log files grouped by categories. The log files are listed in alphabetical order within each group.

You can view a log file or a subset of it. To view an entire log file, select the log file from the list by selecting the radio button in the last column, click the Entire Log File
check box at the bottom of the list, and then click Submit. To view only a few lines of a log file, select the log file, specify the number of lines that you need to view in the
text field, and then click Submit at the bottom of the list.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

System Status
Use the System Status panel to manage system services. The Product Master system consists of six types of services that run concurrently: admin server, scheduler service, workflow engine, event processor, queue manager, and application server.

The System Status panel displays, in a tabular format in the Services Status pane, all the services that are currently running, along with a short description of their status. The panel also displays a legend at the top of the table that lists all the types of services. To view a detailed description of the status of a service, click the icon in the column next to the service. You can also view detailed or summarized status of all the services in the table by using the icons in the column headers.

You can create, abort, or stop a service from the System Status panel. To create a new service, specify the details of the service and click the create icon in the Create New Service pane. To abort a service, which interrupts the currently running tasks in order to stop the service, select the service and click Abort at the top of the panel. To stop a service, which ensures that the currently running tasks complete smoothly, select the service and click Stop at the top of the panel.

To view the latest status of all the services, click Refresh at the top of the panel.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Profiler
Use the Profiler tool to capture performance metrics of your JVM services: admin server, application server, event processor, queue manager, scheduler service, and
workflow engine.

The Profiler tool is recommended for use only when advised by IBM® support during performance troubleshooting. The Profiler tool is disabled by default due to its
negative performance impact.

You have the option to purchase and use either of the vendor-acquired JVM profilers that have been partially integrated into the user interface: YourKit or JProfiler.

From the Profiler tool interface, you can do the following:

Choose a JVM service to profile for performance metrics



Choose the detail level for profiling your data
Depending on which JVM profiler you use, there are other options you can choose, including:
Capture heap data
Capture HPROF data
Save on exit
Reset and clear previous profiling data
Capture memory snapshots
Save intermediate profiling statistics

There are four action buttons in the Profiler tool:

Start
Initiates capturing of profiling information requested by the user. When profiling completes, your profiling statistics are saved into a file that you specify in the File
text field of the Parameters panel. If you do not specify a file name and directory, the memory snapshot is saved in the default directory: $TOP/profiler/snapshots.
Stop
Stops the profiler from capturing information and saves your profiling statistics to the same directory as it would if profiling completed.
Capture Memory Snapshot
Saves a memory snapshot into the file that you specify in the File text field of the Parameters panel.
Save
Though your profiling statistics are saved when profiling completes or when you click Stop, you can also save intermediate profiling statistics into a file that you specify in the File text field located below the Save Intermediate title. These intermediate profiling statistics are additional to the statistics that are saved when profiling stops or completes. The intermediate profiling statistics are captured immediately when you click Save so that you can view your profiling statistics throughout the performance profiling process without interrupting the session.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Import Environment
Use the Import Company Environment pane to import an entire instance of IBM® Product Master. This tool addresses the content modeling administration needs of the IBM Product Master production environment.

The environment import feature is designed to port data from one instance to another instance. No validation is performed while data is imported into the system; you must validate the data in the environment package before importing it. For example, for a unique attribute that is defined in a spec, the package that is created by using the IBM Product Master user interface application performs the validation at the application level before the environment is exported. If you modify the .csv files manually, you must ensure that the attribute values remain unique.

When you use the environment import to import items or categories with primary keys of type "sequence", the values of the system-generated sequences are not automatically incremented chronologically. For example, it is possible that after you import an item with the primary key "64388" through the environment import, a new item that you create gets a primary key with a value below this number, such as "4755".

To import an environment, browse for a file that contains the instance of the data model, and click Import. The application uploads the file to the Document Store and
schedules the importing activity. The import process, when run, results in the creation of a new data model that replicates the instance of model that is stored in the
imported file.
Restriction: Do not use the default company "trigo" in your product environment. For instance, not all the scripts might get loaded into the document store if company
"trigo" is used.
Note: The file systems that are used by Linux®, AIX®, Solaris, and HP-UX systems limit the size of files to 2 GB. To ensure a successful import of your compressed file, split the import data, based on the size of your metadata and data, into files smaller than 2 GB.
Important: If catalog or lookup items with a primary key in the CSV file exist on an instance, their corresponding attribute values can be modified.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Selective Export
Use the Selective Export feature to export objects that are specified by you to a compressed file.

To selectively export objects, complete the following steps:

1. Browse to System Administrator > Selective Export.
2. In the Select Object Types And Objects For Export page, select the check box for each object that you want to export, and click Perform Export. A pop-up window is displayed.
Note: To see a list of objects that have the same object type, click an object type in the Object Type column.
3. In the pop-up window, verify the list of objects and their dependencies, and click OK.
4. Click the Document Store link to download the new compressed file (in the export_YYYYDDMM_HHMMSS.zip format).

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)



Selective Import
Use the Selective Import feature to import specific objects into the IBM® Product Master.

To selectively import objects, complete the following steps:

1. Browse to System Administrator > Selective Import.
2. Specify the compressed file that was produced by a selective export, and click Upload. For more information, see Selective Export.
3. Click Start Import to deploy your source system's company data model from the compressed file into your destination system.

Important: If catalog or lookup items with a primary key in the CSV file exist on an instance, their corresponding attribute values can be modified.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Size Distribution
Use the Size Distribution panel to view distribution of files in the document store.

The Size Distribution panel lists all the companies in Product Master for which you can view the size distribution of files.

To view the size distribution of a company, select one or more companies in the Select Companies menu, and then click the search icon at the top of the Size Distribution panel. To obtain more detailed results, you can specify a higher number of levels of subdirectories to search in the Maximum Path Depth field. The DocStore Size Distribution Statistics panel displays the size distribution details of the companies that you select.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Customizing Admin UI features


You can customize your Admin UI workspace by specifying your user settings, configuring the user interface, and localizing the user interface.

Customizing settings
You can access the My Settings page through Home > My Settings. The right panel displays the following sections.

General Settings
Table Display Settings
Specific Screen Settings

General Settings
Locale for user interface display - Select the language that you want the user interface to display. The default value is English.
Note: Even if you specify the Français (French) locale, the keyword OK is not displayed as Bien.
Locale for Item and Category Data Display - Select the language that you want the item and category data to display. The default value is NONE.
Restrict the displayed attributes in item and category screens to the selected locale - Select the locales to which the display of locale-specific attributes in the item and category screens is restricted. The default value is no restriction.
TimeZone - Used for the scheduler service for jobs.
Datetime Output Format - Select your preference for displaying the output date and time format. The default value is M/d/yy h:mm a.
Datetime Input Format - Select your preference for the format of entering the date and time. The default date and time input format is M/d/yy h:mm a.
Note: For the Datetime Output Format and Datetime Input Format, though the timestamp mentions the Seconds value in the format, the value is for display purposes only and is not used in any function.
Base font size used application wide - Select the text font size that you want the user interface to display.
Email upon alert - Select the check box if you want to receive alerts through email.

Table Display Settings
Rows per Page in Specs Console - Specify the maximum number of rows per page in the Specs Console.
Rows per Page in Scripts Console - Specify the maximum number of rows per page in the Scripts Console.
Rows per Page in Item Set - Specify the maximum number of rows per page in the Item Set section.
Rows per Page in Alerts Display - Specify the maximum number of rows per page in the Alerts Console and Alerts Display section.

Specific Screen Settings
Homepage, Category and Hierarchy Consoles, Left Nav, Specs
Configure the home page - Specify the screen that you want to make your home page. The default value is Collaboration Area Console.
Display the locked icon on the catalog and hierarchy consoles - Select to display the locked icon on the catalog and hierarchy consoles.
Hide left navigation panel - Select to hide the left navigation panel. The default is to show the left navigation panel.
Remember Last Saved Category Tree Used For Browsing a Catalog in the Left Pane - Select to remember the last saved category tree that was used for browsing a catalog in the left panel.
Default Search Attribute for left panel Item Search - Specify the search attribute for the left panel Item search. The default value is Display or primary key.
Default Search Attribute for left panel Category Search - Specify the search attribute for the left panel Category search. The default value is Display or path or primary key.
Display the type of node in the Specs screen - Select to display the type of node in the Specs screen.
Use detailed mode icons in the Specs screen - Select to display the detailed node icons in the Specs screen.
Display spec attributes as - Specify the format in which you want the spec attributes to be displayed. The default value is drop-down.
Editing and entering data
Sort all item lists by - Select the type of sort that you want to sort item lists by.
In Single Edit, spacing between fields - Select the vertical spacing that you want between the attribute fields.
In Single Edit, Field Length - Select the size of the attribute field that you want as a proportion of the available space.
Note: There is a minimum field length set for a single edit view of an entry, depending on the available space and the deepest node level of the entry. The minimum field length feature has two limitations:
The deepest node level is based on all of the attributes that exist for a given item, irrespective of whether the attributes are present in the given current view. Therefore, in a tabbed view for example, the minimum field length that is applied to an entry's single edit view might not be the applicable minimum field length for that particular tab view.
In a boundary condition where the window size is small and the field length has reached its minimum length, any new attribute fields that are added to an entry in single edit view might move out of alignment. Once saved or refreshed, the alignment is restored to its original state.
In Single Edit, Label width [50 - 800 px] - Specify, in pixels, the width of the labels for the attribute fields. If the label width for a given attribute exceeds the width that is specified by this setting, the attribute label is truncated. The complete attribute label text can be accessed by hovering over the label.
Collapse nodes of grouped or multi-occurring attributes that are deeper than this level - Select the number of levels deep that you want collapsed.
Collapse multi-occurring attributes that have more than this number of occurrences - Select the maximum number of occurrences that are to be displayed. All other occurrences are collapsed.
Multi-occurrences: Enable paging when the number of occurrences is larger than this number - Select the maximum number of occurrences that can be displayed without paging. If the number of occurrences for a multi-occurring attribute exceeds this number, paging is enabled for that multi-occurring attribute.
Multi-occurrences: Number of occurrences that will display per page - Select the number of occurrences that you want displayed per page.
Multi-occurrences: Always display delete controls - Select to display the delete controls.
Maximum number of entries to be written to a report on Generate Report in Multi-Edit - Specify the maximum number of entries that you want written to a report in either the single edit or multi-edit screen.

Customizing tasks
You can access the My Task List page through Home > My Task List. Proceed as follows to customize your task list.

1. In the upper right corner, click the Add New Page icon.
2. In the Page Name field, type a name for the task list screen.
3. Select a module from the Available Modules column and click Add. You can arrange the order of the modules by using the Move Up and Move Down buttons. Click Remove to remove a module.
4. Click Save to save all the modules that you selected for your customized view. The selected modules appear in your My Task List view.

Customizing profile
You can access the My Profile page through Home > My Profile. Use this component to set your name, user name, email, and role. You can also create and change your password. You can review and edit your profile settings in the My Profile interface. Proceed as follows to customize your profile settings:

1. Modify the user profile settings. Mandatory fields are marked with a red asterisk (*).
2. To save the modifications, click Modify User Profile or Reset Password.

Importing user settings


For installations with a large number of users, it can be tedious to manually define the user settings (in the My Settings menu). You can use Java APIs to migrate user settings from one system to another system.
Customizing your user settings
Each Admin UI user has a customizable home page with settings that are saved in the application. You can log in and access your customized settings.



Choosing a locale
Admin UI adheres to ISO Globalization Standards and supports the various locales for the user interface. Locales are created based on language or country pairs.
Setting the default date and time attributes
The application server processes all of the values by using the server time zone and stores the date attributes as Coordinated Universal Time (UTC) values in the
database.
Customizing Unit of Measure (UOM)
The single-edit screen of the Product Master application provides an extra UOM feature for displaying user-defined content. Only one instance of UOM pane can be
configured in a single-edit screen.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Importing user settings


For installations with a large number of users, it can be tedious to manually define the user settings (in the My Settings menu). You can use Java APIs to migrate user settings from one system to another system.

Procedure
1. Develop a Java API based extension point (for example, a Scripting Sandbox extension point), custom JSP or a WebService.
2. Retrieve the user settings from the source system using getUserSettingValue() Java API.
3. Set the values to the target system using the Java API setUserSettingValue().

Example
The following code is an example of the usage of the getUserSettingValue() API, which reads the user settings for user "User1":

Context context = PIMContextFactory.getCurrentContext();
OrganizationManager manager = context.getOrganizationManager();
User user = manager.getUser("User1");

for (UserSetting setting : UserSetting.values()) {
    System.out.println("User Setting value: " + user.getUserSettingValue(setting));
}

The following code is an example of the usage of the setUserSettingValue() API, which sets the user settings values for user "User1":

Context context = PIMContextFactory.getCurrentContext();
OrganizationManager manager = context.getOrganizationManager();
User user = manager.getUser("User1");
user.setUserSettingValue(UserSetting.LOCALE, "fr_FR");
user.setUserSettingValue(UserSetting.DATETIMEOUTPUTFORMAT, "M/d/yy h:mm a");
user.save();
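
Combining the two APIs, the following is a minimal migration sketch that copies every setting from one user to another. The user names and the assumption that both users are reachable from a single context are illustrative only; in a real source-to-target migration, you read the values on the source system and apply them on the target system.

Context context = PIMContextFactory.getCurrentContext();
OrganizationManager manager = context.getOrganizationManager();
// "User1" is the user to copy settings from; "User2" is a hypothetical target user
User source = manager.getUser("User1");
User target = manager.getUser("User2");

// Copy every setting that UserSetting enumerates from the source to the target
for (UserSetting setting : UserSetting.values()) {
    target.setUserSettingValue(setting, source.getUserSettingValue(setting));
}
target.save();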

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Customizing your user settings


Each Admin UI user has a customizable home page with settings that are saved in the application. You can log in and access your customized settings.

You can access the following associated components from the Home module:

My Task List
My Settings
My Profile

Customizing task list


This component shows a list of tasks.

Proceed as follows to customize your task list:

1. Click Home > My Task List.


2. In the upper right corner, click the Add New Page icon.
3. In the Page Name field, type a name for the task list screen.
4. Select a module from the Available Modules column and click Add. You can arrange the order of the modules by using the Move Up and Move Down buttons. Click Remove to remove a module.
5. Click Save to save all the modules that you selected for your customized view. The selected modules appear in your My Task List view.

Customizing settings



Use this component to choose which user interface screen you want to use, select different settings for the screens, and to determine what content your My Home link
displays. You can also use this component to choose the language, time zone, date, time, font size, toolbar position, and how you receive email notifications. You can
review and edit your user settings in the My Settings interface. For more information, see Customizing settings.

Customizing profile
Use this component to set your name, user name, email, and your role. You can also create and change your password for the Admin UI application. You can review and
edit your profile settings in the My Profile interface.

Proceed as follows to customize your profile settings:

1. Click Home > My Profile to update or review your user profile settings.
2. Modify the user profile settings. Mandatory fields are marked with a red asterisk (*).
3. To save the modifications, click Modify User Profile or Reset Password.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Choosing a locale
Admin UI adheres to ISO globalization standards and supports various locales for the user interface. Locales are created based on language and country pairs.

There are three levels of localization:

User interface localization


Metadata localization
Data localization

Locales are created based on language and country pairs. The Admin UI supports the following languages for the user interface:

en_US: English (US)


de_DE: German (Germany)
el_GR: Greek (Greece)
es_ES: Spanish (Spain)
fr_FR: French (France)
it_IT: Italian (Italy)
ja_JP: Japanese (Japan)
ko_KR: Korean (Korea)
pl_PL: Polish (Poland)
pt_BR: Portuguese (Brazil)
ru_RU: Russian (Russia)
tr_TR: Turkish (Turkey)
zh_CN: Chinese (Simplified)
zh_TW: Chinese (Traditional)

You can localize data to any language or country pair, including those locales that are not listed, by adding the locale to the available locales list in My Settings and then by
localizing the fields of the spec of the catalog or hierarchy.
Localization is configured at the spec level (primary, lookup table, or secondary spec); therefore, you can localize a single node or multiple nodes and set localized display names for each node. You must restrict the displayed attributes in item and category screens to the selected locale in My Settings.
Note: If you localize pre-defined specs such as the Catalog Entry spec or Catalog Entry Categories spec, the localized attributes are not added to the existing attribute collections automatically. You must update the respective attribute collections that are part of the Tabbed View to include these localized attributes.
Proceed as follows to specify a locale:

1. Click Data Model Manager > Security > Company Attributes.


2. Select a locale, and click Add to add the locale to the Selected Locales column, and click Save. Click Remove to remove a locale. Similarly, you can also specify
currency.

Configuring language-specific attributes


You can localize attribute values for any catalog attribute or hierarchy node. Locales are created based on language and country pairs, which provide variances across
countries, for example US English and British English. You can localize a single attribute value or display name.

Proceed as follows:

1. Ensure that the required locale is added to the list of Selected Locales.
a. Open the spec and enable the Localized option.
b. Select the locale and click Add.
c. Click the attribute to localize the attribute's display name.
d. Enable the Localized option under the Attribute Collection Associations section. The attribute is localized for all the locales.
2. Click Data Model Manager > Security > Company Attributes. After the required locales are added, click Save in the company Attributes user interface.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)



Setting the default date and time attributes
The application server processes all of the values by using the server time zone and stores the date attributes as Coordinated Universal Time (UTC) values in the database.

About this task


The TimeZone attribute type allows you to add the "default TimeZone" facet, which shows up by default when you add an item. For example, if you select (GMT+05:30) Calcutta, Chennai, Mumbai, New Delhi for the Default Time Zone field, then when you create a new item with this spec, the timezone field is pre-populated with the default time zone.

The Date attribute type allows you to add the "default Value" facet, which is pre-populated when you add an item and provide a value.
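
Because date attributes are stored as UTC values (see the start of this topic), a local date and time such as the 3/25/2011 7:15 PM example maps to a single UTC instant. The following generic sketch uses only standard java.time classes, not a Product Master API, and the Asia/Kolkata zone ID is an assumption standing in for the (GMT+05:30) time zone of the example:

import java.time.*;

public class UtcStorageDemo {
    public static void main(String[] args) {
        // The date-time as entered in the (assumed) server time zone
        ZonedDateTime serverTime = ZonedDateTime.of(
                LocalDateTime.of(2011, 3, 25, 19, 15),
                ZoneId.of("Asia/Kolkata"));
        // The equivalent UTC instant, which is the form that gets persisted
        Instant utc = serverTime.toInstant();
        System.out.println("Server time:  " + serverTime);
        System.out.println("Stored (UTC): " + utc);
    }
}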

Procedure
To define default date or time attribute, proceed as follows.

Click Data Model Manager > Spec mapping > Spec console and create a new spec.

Time Zone attribute:
a. Add the TimeZone type attribute.
b. Add the "default TimeZone" facet and provide your time zone.
c. Add a new item to the catalog. The timezone field is pre-populated.

Date attribute:
a. Add the Date type attribute.
b. Add the "default Value" facet with the 3/25/2011 7:15 PM value.
c. Add an item to the catalog.
d. Select the date attribute that is pre-populated.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Customizing Unit of Measure (UOM)


The single-edit screen of the Product Master application provides an extra UOM feature for displaying user-defined content. Only one instance of UOM pane can be
configured in a single-edit screen.

About this task


Using the single UOM browser pane, the single-edit screen displays only user-defined or manipulated content that is related to the Product Master data. You can configure UOM by using either the getContent() or getURL() method.

Procedure
To configure UOM for the single-edit screen.

1. Browse to Data Model Manager > Scripting > Script Console > trigger script, and create a trigger script that implements either the getContent() or getURL() method.
getContent method

function getContent(entry){
   var _htmls = "<div id='scriptWrapper' style='height:XXXpx; width:100%;'>XXXXX</div>";
   return _htmls;
}

getURL() method

function getURL(entry){
   return url;
}

function getContent(entry){
   var _url = getURL(entry);
   var _htmls = "<div id='scriptWrapper' style='height:XXXpx; width:100%;'>";
   _htmls += "<iframe src='" + _url + "' height='100%' width='100%'></iframe></div>";
   return _htmls;
}

Where:
XXX - Defines the height and cannot be empty or 0.
XXXXX - Defines the custom HTML element.
For more information, see Running a trigger script.
2. Edit the data_entry_properties.xml file and add the following:

<catalog name="catalogOrHierarchyNameUOMShouldDisplay">
    <script>
        <type>content</type>
        <extra>height='XXX'</extra>
        <title>Special</title>
        <path>/scripts/triggers/scriptName</path>
        <extraincludes>
            <include/>
        </extraincludes>
    </script>
</catalog>

For more information, see the data_entry_properties.xml file parameters.
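
For illustration only, a filled-in entry might look like the following; the catalog name, height, and script path are hypothetical values for this example, not defaults:

<catalog name="Winter Catalog">
    <script>
        <type>content</type>
        <extra>height='300'</extra>
        <title>Special</title>
        <path>/scripts/triggers/uom_content</path>
        <extraincludes>
            <include/>
        </extraincludes>
    </script>
</catalog>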

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Working with the Admin UI


Creating items, searching for items, and editing content within items and catalogs are key tasks in master data management.

Creating an object
You can create an item or a category.
Using search
You can search on items, entries, and categories using the left navigation, single edit, single edit simplified, or single and multi-edit with rich search screens.
Using IBM Watson search
IBM Watson® search is a free text search capability that scans the content of your Product Master catalogs according to the search parameters that you enter, and then returns detailed search results with descriptions and thumbnails. IBM Watson search is an optional capability that can be enabled or disabled through the enable_de_search property in the $TOP/etc/default/common.properties configuration file (see the sketch after this list). Once enabled, the IBM Watson Search text box and the search button are always available at the right of the row where the menu bar appears.
Editing objects
You can edit an item or category.
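
As a minimal sketch, enabling IBM Watson search amounts to setting the property named above in the configuration file. The true value shown is an assumption of a boolean toggle; confirm the accepted values for your release:

# $TOP/etc/default/common.properties
enable_de_search=true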

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Creating an object
You can create an item or a category.

The IBM® Product Master provides two authoring screens, the Single Edit screen and the Multi Edit screen, for editing and for adding items and categories.

Creating an item
You can create an item to store information about the entities managed in your implementation.
Creating a category
You can create a category within a category hierarchy. A category hierarchy is a container with a hierarchical structure.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Creating an item
You can create an item to store information about the entities managed in your implementation.

Procedure
1. Expand the catalog module in the left pane navigation to view the list of categories.
2. Right-click a category and select Add Item to create an item under the selected category.
3. Provide values for the attributes of the item.
4. Optional: Add an item.
a. Expand the hierarchy to view the categories. Right-click the link next to the category that lists the number of items that are associated with that category, and
select Edit All. The multi-edit screen opens and a list of items display.
b. Select Add. The new item is added to the bottom of the list. The Add button is only available if the user has the privileges for adding items to the catalog.
c. Provide the values for the attributes of the item.
d. Click Save.
5. Optional: You can also import items by using the following options:
Through a workflow step:
a. Click Data Model Manager > Workflows > Workflow Console.
b. Select a workflow and a step.
c. Select the Allow import into step check box to import a new item into the collaboration area at that particular workflow step.
d. Add items manually or through an import job. If you add items manually, you must navigate to the multi-edit window from the collaboration console or the worklist page, and then add items by using the same steps as defined in step 4.
Through an import job:
a. Click Collaboration Manager > Imports > New Imports.
b. In the New Import wizard, complete each step to specify the necessary details for the import.
c. Select the Item feed option in the Select data import type field.
d. Click Save.
Through an import environment:
a. Click System Administrator > Import Environment.
b. Browse for a file that has the list of items.
c. Click Import.
6. Click Save.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Creating a category
You can create a category within a category hierarchy. A category hierarchy is a container with a hierarchical structure.

Before you begin


Ensure that you have a catalog with a hierarchy already created. You should not create a category with the name Unassigned.

About this task


While you add a new category or clone a category, the parents of this category are displayed in the Add categories section. After the new category is saved, the parent categories are displayed in the Parent categories section.
Note: On the Single edit screen, if you choose to clone a categorized item, click the Categories tab. The mapped category appears in the Add category to categories screen. After you successfully save the mapped category, the cloned item displays in the Parent categories screen.

Procedure
1. Expand the catalog module in the left pane navigation to view the list of hierarchies.
2. Right-click a hierarchy, and select Add Category to create a category under the selected hierarchy.
3. Provide values for the category fields in the right window pane.
4. Optional: Add a category.
a. Expand the catalog module in the left-pane navigation to view the list of hierarchies. Right-click the link next to the hierarchy that lists the number of
categories, and select Edit All. The multi-edit screen opens and a list of categories display.
b. Select Add. The new category is added to the bottom of the list. The Add button is only available if the user has the privileges for adding categories to the hierarchy.
c. Provide the values for the attributes of the category. Do not use Unassigned as a category name.
5. Click Save.
When using the new business user interfaces, the Checkout button is disabled until you save the category.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Using search
You can search on items, entries, and categories using the left navigation, single edit, single edit simplified, or single and multi-edit with rich search screens.

From the left panel, you can search by using either the Modules tab or the Search tab.

Modules tab
In the Modules tab, click the search icon to use the On-Demand Search.

1. Specify the search keyword.
2. Select Item Search or Category Search.
3. Specify an option for the Search on field.
4. Click the search icon to search.

Search tab



The Search tab displays the following:

New Search option


Recent Searches
Saved Searches
Search Templates

To create a new search, proceed as follows,

1. From the left panel, click Search tab.


2. Click New Search.
3. Select Item Search or Category Search.
4. In the Search for Items - New Item Search window, select the required catalog from the Search in Catalog list.
5. In the Search within Categories section, specify one of the following options.
Any categories - The items matching the search criteria returned in the final result set exist in one or more of the categories.
In ALL categories - The items matching the search criteria returned in the final result set exist in all the categories.
6. In the Search Attributes section, specify attribute, operator, value, or condition filters.
Note: The And condition has a higher precedence than the Or condition, which means that an And logical operation is evaluated before an Or operation.
Attention: Grouping of search predicates by using parentheses is not currently supported in the search user interface. Because of this limitation, and because the And operator has a higher precedence than the Or operator, to define a search of the form (A or B) and C you first need to transform it so that parentheses are not needed. In this example, the search specification is transformed to A and C or B and C. In this form, parentheses are not needed, so the search can be defined in the search user interface (see the short illustration after this procedure).
The numeric attribute types that support the Between operator are the following:
Integer
Number
Number Enumeration
Sequence
Currency
The Between operator does not support multi-value search on a multi-occurrence attribute.

For a hierarchy rich search in a product hierarchy or organization hierarchy module, the Between operator on level is not supported.

Note: Attributes display just the attribute name; however, you can hover over the name to display a tooltip with the full path, which helps to identify attributes
when more than one has the same name.
Note: You can use the same attribute in two or more search criteria.
Remember: The attributes that are available for searching depend on the index type and an environment configuration property (a configuration example follows
this procedure). To include all specs in the search attributes, set the rich_search_indexed_only property to false in the common.properties file. To include only the
indexed specs in the search attributes, set the rich_search_indexed_only property to true in the common.properties file.
7. In the Search Options section, specify Search Scope, Sort by, and Save Results preferences.
8. Click Search.
To save the search criteria, click Save. Specify the name of the saved search and provide an optional description.
To save the search specification, click Save As > New Saved Search. Specify the name of the saved search and provide an optional description.
To save the search specification as a new search template, click Save As > New Saved Template. Specify the name of the saved template and provide an
optional description.
You can click Modify to edit an existing search or Run to run a long running search in the background.
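
The Remember note in step 6 corresponds to a single property in the configuration file. A minimal excerpt (the value shown is only an example, not a recommended setting):

# In $TOP/etc/default/common.properties
# false = include all specs in the search attributes
# true = include only the indexed specs
rich_search_indexed_only=false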

Creating a search template


You create a search template to define a view such as the attributes of an item to display on the rich search page.
Search using flag attributes
You can have three values for the flag attribute in rich search: true, false, and unset.
Search using the list view
From the list view in the left navigation pane, you can search the catalog, hierarchy, and organization modules. The classic catalog module and hierarchy
module come with two distinct views, tree view and list view. The list view is designed to address a major usability limitation associated with the tree view on
catalogs with a large number of subcategories.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Creating a search template


You create a search template to define a view such as the attributes of an item to display on the rich search page.

About this task


Search templates appear in a separate area in the Search tab. You can select a template from the drop-down list of the Templates area. When you select a template, a
read-only view displays the contents of the template. After you select a template, you can click Search, which performs a search and displays the results in the right
panel.

Sharing a search template


A search template can be private to the creator or can be shared with all of the users of the company in which the search template is defined.
Working with the default search template
There is one default search template per container (a catalog, a hierarchy, or a workflow step). When a user starts Rich Search on the container with the default
search template defined, the information that is encapsulated in the default search template is used in initializing the content to be rendered in the simple search
user interface screen.
Defining an effective default search template on a container



A private default search template is a default search template that is defined by a user and the template is not shared with any other user in the company. Each user
can have at most one private default search template for a container.

Procedure
1. In the left panel, select the Search tab. Right-click New Search and select Item Search or Category Search.
2. For both item search and category search, specify the catalog to search in the Search in Catalog field.
3. In the Search within Categories (applicable only to item search) and Search Attributes sections, specify your search criteria.
4. In the Search Options section, select the search scope, the attribute to sort on, and the save type that specifies whether the results are saved as a dynamic or
static selection. Click Save.
5. Specify a name for the search and whether it is saved as a template or as a saved search, and click OK.
A saved search saves both the fields searched and the parameters. A search template saves the fields searched but gives you the option of whether the parameters
are saved.

Results
You now have a saved search template. When you view a list of saved search templates, the templates are first grouped by creator, for example, user1, user3,
user4, and so on. Then, among the search templates created by each user, the templates are listed in alphabetical order; for example, for templates created by
user1, the order is 'Def', 'SSSTM0020', 'SSSTN0017_', and so on.

Transforming the search template


You can transform the classic user interface search template to the business user interface search template. The transformSearchTemplates.sh script
enables an administrator to transform existing search templates in the format of the classic user interface search to search templates in the format of simplified
search.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Transforming the search template


You can transform the classic user interface search template to the business user interface search template. The transformSearchTemplates.sh script enables an
administrator to transform existing search templates in the format of the classic user interface search to search templates in the format of simplified search.

Running the shell script from the command line


1. Log in to the Linux account that is used to perform maintenance on a Product Master installation.
2. Set the current directory to $TOP/bin/migration.
3. Run the following command.

./transformSearchTemplates.sh --cmpCode=companyCode [--adminUser=adminUser]

companyCode
The code of the company whose templates are to be transformed (the same company code that you specify in the Product Master login screen).
adminUser
Optional: The name of the administrative user who is assigned ownership of templates whose owner is not defined in the classic UI. If no value is
provided, the default is Admin.
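
For example, the following invocation transforms the templates of a company with the hypothetical company code acme and assigns ownership of orphaned templates to the Admin user:

./transformSearchTemplates.sh --cmpCode=acme --adminUser=Admin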

Log file usage


Log files are in the directory $TOP/logs. Before the script is run for the first time, there is no log file. When the script is run for the company cmp, it generates a log file with
the name transformSearchTemplates_cmp.log. If the script is run a second time, it uses this log file to limit processing to just those templates for which the initial attempt
to transform failed. Before it writes the new log file, the script copies the existing log file to a backup called transformSearchTemplates_cmp.log.bak in the same directory.
Subsequently, the script does not run if the backup file exists; instead, it prompts the user to archive this file (if wanted) and then remove it.
If a user creates a bookmark in the classic UI after the transform script has been run, you must rename the previously created log files for that company to
transform the newly created bookmarks.

The lines in the log file look like this:

+:2803:202: Template testc for user Admin copied to new UI templates
-:2805:202: Template ntestc has a name conflict for user Admin
*:6413:5602: Template excp for user user1 encountered exception
java.lang.NullPointerException: null:
com.ibm.ccd.ui.search.TransformSearchTemplates.makeBlob(TransformSearchTemplates.java:863)

The first character in the line denotes whether the transform attempt succeeded or failed, as follows:

Plus "+"
The template was transformed successfully.
Minus "-"
The template could not be transformed because a template with the same name exists in simplified UI format.
Asterisk "*"
The template could not be transformed because the specified exception was encountered. This probably indicates that the data for the classic UI template is
incomplete or inconsistent.

If the script is run when a log file exists, it attempts to transform only those templates listed in the log file whose line does not start with a "+". This means that an existing
but empty log file causes no templates to be processed.
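
For example, to list the log entries for templates that were not transformed successfully, you can filter the log on the command line. A minimal sketch, assuming the log format shown above and the company code cmp:

# Show every line that does not record a successful transform ("+").
grep -v '^+' $TOP/logs/transformSearchTemplates_cmp.log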



Transform rules
Ensure that you are familiar with the following rules that govern the behavior of the program when the transformation is complete.

The default template option is always turned off for a transformed template.
The transformed template is marked as shared if and only if the new owner is a user with administrator privileges. In the classic UI, if the template was created by
an admin user, the owner ID is set to -1; therefore, the actual owner is effectively lost. By default, the owner in the transformed template is set to the user given on
the command line (or Admin). However, if there is associated data, that data always has an identifiable owner, so that owner replaces the unknown admin owner in
the new template. In that case, the transformed template is shared if and only if that owner has administrator privileges.
The Save values option is turned on in the transformed template if and only if the corresponding source template has associated data. Data are saved by use of the
Bookmark button in the classic UI search screen.
The classic UI has an Included check box for each attribute. An attribute is included in the search only if that box is checked; at least one attribute must be checked
or the search fails. When the data are saved by using the Bookmark button, data are saved for all the attributes, whether or not the box is checked, and the state of
the box is also saved. There is no corresponding capability in the new UI; all attributes that are displayed in the template are included in the search. During the
transformation, all attributes in the data for the source template (if it has data) are included in the transformed template, whatever the state of the Included check
box.
It is sometimes necessary to create default data in the transformed template. The classic UI and new UI differ in the way that they assign the default predicate operator.
In the classic UI, the operator choices are presented in a drop-down list ordered alphabetically for the user's locale, and the first operator in the list is the default (in
en_US, it is usually "Begins with"). In the new UI, the order is defined by a list in Product Master and a particular value is chosen as the default (often "Equals"). The
default operator in the transformed template is the one that applies in the classic UI; that is, the operators in the source and transformed templates are literally the
same.
In the classic UI, the template and data are stored separately, whereas in the new UI the template and data are stored together. There is a common use case where
this matters, for example, in the classic UI an admin user creates a (shared) template with no data, and three other users load this and then bookmark their own
data; there is one template and there are three data sets. In the transformed templates, there are three templates with data, one for each user. In addition, a fourth
"transformed" template is created, whose owner is the specified admin user, with empty default data.
The order of attributes might differ between classic bookmarks and transformed templates. Templates are attribute collections, which are sets and therefore not
necessarily ordered.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Search using flag attributes


You can have three values for the flag attribute in rich search: true, false, and unset.

If you search for a flag attribute value of not true, the items with a flag attribute value that equals false or unset are returned. For example, this is a standard ternary
(3-valued) system:

Data:
Flag
item1: true
item2: false
item3: NONE/UNSET

Search: Expected Results


Flag == true: item1
Flag == false: item2
Flag Is Empty: item3
Flag != true: item2, item3
Flag != false: item1, item3
Flag Is Not Empty: item1, item2

Another example is the following:

(Flag == true) union (Flag != true) returns all the entries, and
(Flag == true) intersect (Flag != true) returns an empty set.
The same principle applies to other operators.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Search using the list view


From the list view in the left navigation pane, you can search the catalog, hierarchy, and organization modules. The classic catalog module and hierarchy module
come with two distinct views, tree view and list view. The list view is designed to address a major usability limitation associated with the tree view on catalogs with a large
number of subcategories.

About this task


The list view enables you to see a summary of the content within a particular module. You can then navigate within the content. For example, suppose that you have a category
that contains hundreds of items. In the standard left navigation pane, using the tree view, you would see only the first 100 items and would not be able to navigate past
item 100. However, using the list view, you still see a limited number of items, but you have the option to navigate within all of the items in any particular
category.
The following list describes the list view advantages:

It keeps track of parent categories on a branch of a hierarchy tree.
It provides node-level pagination support for categories.
It displays one list of the immediate child categories of the last visited parent category. Because the categories are all immediate children, no indentation is necessary in
the presentation of categories.
It limits the display of child category nodes to a maximum of ten nodes at a time. The rest of the sibling child nodes are accessible through pagination.

The following list describes the known limitations of the list view.

Within a catalog, you cannot check out an item to a collaboration area.
Within a catalog, you cannot cut, copy, insert before, remove, or delete an item or category through the pop-up menu.
Within a hierarchy, you cannot cut, copy, insert before, remove, or delete a category through the pop-up menu.
Within an organization, you cannot cut, copy, paste, remove, or delete an organization through the pop-up menu.
Within an organization, you cannot cut, copy, or remove a user through the pop-up menu.

Procedure
1. Right-click in the white space for a catalog or hierarchy module in the left pane navigation.
2. Select List View in the right-click menu. The list view displays.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Using IBM Watson search


IBM Watson® search is a free text search capability that scans the content of your Product Master catalogs according to the search parameters that you enter, and then returns
detailed search results with descriptions and thumbnails. IBM Watson search is an optional capability that can be enabled or disabled through the enable_de_search
property in the $TOP/etc/default/common.properties configuration file. Once enabled, the IBM Watson Search text box and the search button are always available
at the right of the row where the menu bar appears.

Before you begin


1. Install IBM® Product Master.
2. Install IBM WebSphere® MQ. For more information, see Configuring a Queue with WebSphere MQ or Configuring with the command line.
3. Install IBM Data Explorer. For more information, see Installing IBM Data Explorer.
4. Load the catalogs from your collaborative MDM instance database into the Data Explorer BigIndex database. For more information, see Exporting the Product
Master catalog data into a Data Explorer database.
5. Configure IBM Product Master to use IBM Data Explorer. For more information, see Configure IBM Product Master to use IBM Data Explorer.
6. After you install WebSphere MQ, you must update the env_settings.ini file to enable WebSphere MQ to work with Product Master.
For more information, see Configuring WebSphere MQ parameters.

Procedure
Proceed as follows to use the search.

1. In the IBM Watson Search text box, enter the information that you want to locate. Use these constructs to create searches:
Simple search
Use this format:

my_term

where my_term is your search term.

Simple OR search
Use this format:

my_term_1 OR my_term_2

where my_term_1 is your first search term and my_term_2 is your second search term.

Simple AND search
Use either of these formats:

my_term_1 AND my_term_2

my_term_1 my_term_2

where my_term_1 is your first search term and my_term_2 is your second search term.

Search a specific catalog
Use this format:

catalog:my_catalog my_term

where my_catalog is the catalog that you want to search and my_term is the term that you want to search for.
If the name of your catalog contains a space, such as first catalog, you must bracket the catalog name with apostrophes, as in this
example: 'first catalog'.
2. References for items that match the search are shown in the result panel. To review the item details, select one or multiple items and open them in a single or multi-
edit screen.



Example
You can combine any of the search options to create a complex search, as in these examples:

catalog:my_catalog my_term_1 my_term_2* 12 true

This example searches in my_catalog for all of these terms: my_term_1, my_term_2*, 12, and true.

catalog:my_catalog ite* 410 OR catalog:my_catalog 12

This example searches my_catalog for all of the items with the strings ite* and 410, or searches my_catalog for items with the string 12.

Configuring a Queue with WebSphere MQ


Complete these installation and configuration tasks to enable WebSphere MQ for Watson Search.
Installing IBM Data Explorer
Follow these steps to install IBM Data Explorer.
Selecting catalogs for indexing
You need to index the data in IBM Product Master business catalogs to make it available for free-text search.
Selecting attributes for indexing
You can select the set of attributes to index before exporting data to IBM Data Explorer.
Configuring IBM Product Master to use IBM Data Explorer
After you install IBM Data Explorer, you must configure it.
Exporting the Product Master catalog data into a Data Explorer database
Before you can use Watson search, you must export your Product Master catalog data into a Data Explorer Big Index database.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Configuring a Queue with WebSphere MQ


Complete these installation and configuration tasks to enable WebSphere® MQ for Watson Search.

Complete a default installation of the WebSphere MQ version 7.5 that is provided with the product.

Configuring with WebSphere MQ Explorer


You can use WebSphere MQ Explorer to configure a queue manager, local queue, initial context, and connection factory.
Configuring with the command line
You can use the command line to configure WebSphere MQ queues, JMS, and the .binding file.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Configuring with WebSphere MQ Explorer


You can use WebSphere® MQ Explorer to configure a queue manager, local queue, initial context, and connection factory.

Create a queue manager


Use the Create Queue Manager wizard to create a new Queue Manager.
Create a local queue
Use the New Local Queue wizard to create a new local queue.
Create initial context
Use the Add Initial Context wizard to create an initial context.
Create a connection factory
Use the New Connection Factory wizard to create a new connection factory.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Create a queue manager


Use the Create Queue Manager wizard to create a new Queue Manager.

Procedure
1. Open WebSphere® MQ Explorer.
2. In the MQ Explorer - Navigator pane, right-click the Queue Managers folder.
3. Select New > Queue Manager...



The New Queue Manager wizard displays.
4. Enter these values in the Enter basic values window and then click Next:
Queue manager name: MyQueueManager
Max handle limit: 256
Trigger interval: 999999999
Max uncommitted messages: 1000
5. Enter these values and select these options in the Enter data and log values window and then click Next:
Queue manager name: MyQueueManager
Use circular logging
Log file size: 4096
Log primary files: 3
Log secondary files: 2
Use default paths
6. Enter these values and select these options in the Enter configuration options window and then click Next:
Queue manager name: MyQueueManager
Start queue manager after it has been created
Type of queue manager startup: Service
Create server-connection channel
7. Enter these values and select these options in the Enter listener options window and then click Next:
Queue manager name: MyQueueManager
Create listener configured for TCP/IP
Listen on port number: 1415
8. Enter these values and select these options in the Enter MQ Explorer options window and then click Finish:
Queue manager name: MyQueueManager
Autoreconnect
Automatically refresh information shown for this queue manager
Interval (seconds): 15

Results
The queue manager is created.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Create a local queue


Use the New Local Queue wizard to create a new local queue.

Procedure
1. In the MQ Explorer - Navigator pane, right-click the Queue Managers > bcg.queue.manager > Queues folder.
2. Select New > Local Queues...
The New Local Queue wizard displays.
3. Enter or select these values in the Change properties window and then click Next:
Queue name: MyQueue
Queue type: Local
Put messages: Allowed
Get messages: Allowed
Default priority: 0
Default persistence: Not persistent
Scope: Queue
Usage: Normal

Results
You receive a message that the object was created successfully.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Create initial context


Use the Add Initial Context wizard to create an initial context.

Procedure
1. In the MQ Explorer - Navigator pane, right-click the JMS Administered Objects folder.



2. Select Add Initial Context...
The Add Initial Context wizard displays.
3. Enter or select these values in the Connection details window and then click Next:
File system
Factory class: com.sun.jndi.fscontext.RefFSContextFactory
Bindings directory for Windows: C:\JNDI_DIRECTORY
Bindings directory for UNIX: /var/mqm/JNDI_DIRECTORY/
4. Enter or select these values in the User preferences window and then click Finish:
Context nickname: file:/C:/JNDI_LOCATION/
Connect immediately on finish

Results
Your Initial Context displays in the JMS Administered Objects pane.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Create a connection factory


Use the New Connection Factory wizard to create a new connection factory.

Procedure
1. In the MQ Explorer - Navigator pane, right-click the JMS Administered Objects > file:/C:/JNDI_LOCATION/ > Connection Factories folder.
2. Select New > New Connection Factory.
The New Connection Factory wizard displays.
3. Enter or select these values in the Enter the details of the connection factory window and then click Next:
Name: TestConFactory
Messaging provider: WebSphere MQ
4. Enter or select these values in the Select the type of connection factory window and then click Next:
Name: TestConFactory
Type: Queue Connection Factory
5. Enter or select these values in the Select the transport that the connections will use window and then click Next:
Name: TestConFactory
Transport: MQ Client
6. Click Finish.
7. Navigate to the WebSphere® MQ /bin directory.
8. Run this sequence of four commands to disable the channel authentication:
a. runmqsc QueueMgrName
b. DISPLAY QMGR CHLAUTH
c. ALTER QMGR CHLAUTH(DISABLED)
d. DISPLAY QMGR CHLAUTH
9. Add the administrator to the MQM group:
a. On the Windows desktop, click Start > Administrative Tools > Computer Management.
b. Enter your password.
c. In the Computer Management pane, click System Tools > Local Users and Groups > Groups.
d. Right-click the MQM group and choose the Add to group... option.
e. In the window, click Add....
f. Type the name of the administrator.
g. Click OK.
The administrator is added to the MQM group.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Configuring with the command line


You can use the command line to configure WebSphere® MQ queues, JMS, and the .binding file.

Using the command line to configure WebSphere MQ


You can use the command line to create and configure WebSphere MQ queues.
Using the command line to configure JMS
You can use the command line to create and configure JMS.
Using the command line to configure the binding file
You can use the command line to configure the binding file.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)



Using the command line to configure WebSphere MQ
You can use the command line to create and configure WebSphere® MQ queues.

Before you begin


1. Open a command window.
2. Navigate to the directory where WebSphere MQ is installed, such as /opt/mqm/bin.

About this task


Follow these steps to configure WebSphere MQ from the command line.

Procedure
1. Run this command to create a queue manager that is named MyQueueManager, where MyQueueManager is your designated queue manager name:

crtmqm -c "Data Explorer Queue Manager" -ll -q 'MyQueueManager'

2. Run this command to start the queue manager:

strmqm MyQueueManager

3. Run this command to open a WebSphere MQ Script:

runmqsc MyQueueManager

4. Run this command to define your local queue, where MyQueue is your designated queue name:

define qlocal(MyQueue)

5. Run these commands in sequence to disable the channel authentication:


a. To display the channel authentication status, run:

DISPLAY QMGR CHLAUTH

b. To disable channel authentication, run:

ALTER QMGR CHLAUTH(DISABLED)

c. To verify channel authentication, run:

DISPLAY QMGR CHLAUTH

6. Run this command to define the channel, where MY.CHANNEL is the channel name, svrconn is the channel type, and tcp is the transport type:

define channel(MY.CHANNEL) chltype(svrconn) trptype(tcp)

7. Run this command to define the listener, where MY_LISTENER is the listener name and 1416 is your designated listening port:

DEFINE listener(MY_LISTENER) trptype(tcp) port(1416)

8. Run this command to start the listener:

start listener(MY_LISTENER)

Results
Verify that you receive the message that MY_LISTENER was started.
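
The same configuration can also be scripted end to end. A minimal sketch, using the placeholder names from the steps above and feeding the MQSC commands to runmqsc on standard input:

# Create and start the queue manager, then define all MQ objects in one pass.
crtmqm -c "Data Explorer Queue Manager" -ll -q MyQueueManager
strmqm MyQueueManager
runmqsc MyQueueManager <<'EOF'
DEFINE QLOCAL(MyQueue)
ALTER QMGR CHLAUTH(DISABLED)
DEFINE CHANNEL(MY.CHANNEL) CHLTYPE(SVRCONN) TRPTYPE(TCP)
DEFINE LISTENER(MY_LISTENER) TRPTYPE(TCP) PORT(1416)
START LISTENER(MY_LISTENER)
EOF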

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Using the command line to configure JMS


You can use the command line to create and configure JMS.

Before you begin


1. Open a command window.
2. Navigate to the <mqm installation directory>/java/bin folder, such as the /opt/mqm/java/bin directory.

About this task



Follow these steps to configure JMS from the command line.

Procedure
1. Run this command to set up the environment:

setjmsenv

2. Locate and open the JMSAdmin.config file.


3. In the JMSAdmin.config file:
a. Revise the INITIAL_CONTEXT_FACTORY parameter to this definition:
INITIAL_CONTEXT_FACTORY=com.sun.jndi.fscontext.RefFSContextFactory
b. Ensure that the PROVIDER_URL parameter is defined, for example: PROVIDER_URL=file:/opt/mqm/JNDI-Directory. If the directory does not exist, ensure that you
create it; otherwise, the JMSAdmin command fails.
4. Run this command to open the JMS Administration command line:

JMSAdmin

If the JMSAdmin command fails with an error like java.lang.NoClassDefFoundError: com.ibm.mq.jms.admin.JMSAdmin, you might need to set the
CLASSPATH variable pointing to the mqjms.jar file. For example, export CLASSPATH=/opt/mqm/java/lib/com.ibm.mqjms.jar
5. Run this command to define the queue, where MyQueue is the queue name.

def q (MyQueue) queue (MyQueue)

6. Run this command to define the Queue Connection Factory, where myQcf is the queue connection factory, MY.CHANNEL is the existing channel name, host is the
host name for this WebSphere® MQ instance, and MyQueueManager is the queue manager.

def qcf(myQcf) transport(CLIENT) channel(MY.CHANNEL) host(9.30.203.110) port(1416) qmgr(MyQueueManager)

7. Enter and run this command to close the initial context mode:

end

Note: When everything runs successfully, a .bindings file is created in the configured JNDI_Directory directory. To verify the creation of the file, run the ls -la
/var/mqm/JNDI_DIRECTORY command.
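
The JMSAdmin steps can also be run non-interactively. A minimal sketch, assuming that setjmsenv is sourced to set the environment and that JMSAdmin reads its commands from standard input, with the placeholder names from the steps above:

cd /opt/mqm/java/bin
. ./setjmsenv
./JMSAdmin <<'EOF'
def q(MyQueue) queue(MyQueue)
def qcf(myQcf) transport(CLIENT) channel(MY.CHANNEL) host(9.30.203.110) port(1416) qmgr(MyQueueManager)
end
EOF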

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Using the command line to configure the binding file


You can use the command line to configure the binding file.

Before you begin


1. Open a command window.
2. Navigate to the /var/mqm/JNDI_DIRECTORY directory or the directory that is identified in the PROVIDER_URL parameter of the JMSAdmin.config file.

Note: You cannot bind the object if the .bindings file is already present in the /var/mqm/JNDI_DIRECTORY directory.

About this task


Follow these steps to configure the binding file from the command line.

Procedure
1. Locate the .bindings file in the /var/mqm/JNDI_DIRECTORY directory.
If necessary, run the ls -la command to list the files.
2. Copy the .bindings file to the $TOP/etc/default/data_explorer directory.
3. Use the details from the previous MQ configuration steps, found in the .bindings file, to revise the settings in the BigIndexConfig.xml file:
a. Revise the queue name to MyQueue.
b. Revise the queue connection factory to myQcf.
c. Revise the PROVIDER_URL to file:///home/mdmadmin/JNDI-Directory.
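
After these revisions, the queueConfiguration section of the BigIndexConfig.xml file would look like the following excerpt (a sketch that uses the values from the steps above; the full file structure is shown later in this documentation):

<queueConfiguration>
<queue>MyQueue</queue>
<queueConnectionFactory>myQcf</queueConnectionFactory>
<providerURL>file:///home/mdmadmin/JNDI-Directory</providerURL>
</queueConfiguration>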

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Installing IBM Data Explorer


Follow these steps to install IBM® Data Explorer.



Procedure
1. Locate the IS_DATA_EXPLORER_V9.0_INSTALL_GUIDE.pdf.
2. Follow the instructions in the IS_DATA_EXPLORER_V9.0_INSTALL_GUIDE.pdf to install the product. During installation, you must:
a. Enable the embedded web server that is included in the InfoSphere® Data Explorer engine.
b. Enable and install the Apache ZooKeeper service.
3. After InfoSphere Data Explorer installation is completed, run this command to enable the embedded web server:

velocity-config webserver/is-enabled=True

4. Navigate to the $DE_INSTALL_DIR/Engine/bin or DE_INSTALL_DIR\Engine\bin directory.


5. From the directory, run one of these commands to start the web server and enable access to the InfoSphere Data Explorer engine:
Linux®: velocity-startup
Windows: velocity-startup.exe
6. To access the Velocity administrative interface, enter this URL in your browser: http://localhost:9080/vivisimo/cgi-bin/admin.exe. The default user name
and password for the Velocity administrative interface are username: data-explorer-admin, password: TH1nk1710.
Note: To access the administrative interface from an alternative computer, replace localhost with the DNS name or IP address of the alternative computer.
7. Enter the default user name and password to the admin console.
Note: You should change your password after your first login. To change the password, click Management > Users to access and modify the setting.
8. Navigate to the Apache ZooKeeper server installation directory: DE_INSTALL_DIR
9. Start the Apache ZooKeeper server with one of these commands:
Linux: Zookeeper/zookeeper/bin/zkServer.sh start
Windows: Zookeeper\zookeeper\bin\zkServer.cmd start
10. To enable wildcard support, follow the instructions in the Enabling Wildcard Support at the Collection Level topic.
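
For reference, the startup commands from steps 3, 5, and 9 can be combined into one short sequence. A minimal sketch for Linux, assuming that DE_INSTALL_DIR points at the Data Explorer installation directory:

# Enable the embedded web server, start the engine, then start ZooKeeper.
velocity-config webserver/is-enabled=True
$DE_INSTALL_DIR/Engine/bin/velocity-startup
$DE_INSTALL_DIR/Zookeeper/zookeeper/bin/zkServer.sh start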

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Selecting catalogs for indexing


You need to index the data in IBM® Product Master business catalogs to make it available for free-text search.

In the IBM Product Master environment, not all catalogs are business catalogs. Generally, only data from business catalogs needs to be available for free-text search.
To be available for free-text search, data from these catalogs must be indexed on Watson™ Data Explorer. Use the Exclude from Free Text Search check box on the
catalog attributes page to control which catalogs are indexed; clear the check box for the catalogs that you want to index. This check box is selected by default and is
displayed on the catalog attributes page if the enable_de_search property in the $TOP/etc/default/common.properties file is set to true. By selecting only the required
catalogs for indexing, you also reduce the data that is populated in the IBM Data Explorer collections.
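
For reference, the property that controls both Watson search and the visibility of this check box is a single line in the configuration file. A minimal excerpt:

# In $TOP/etc/default/common.properties
# Enables IBM Watson search and displays the Exclude from Free Text Search
# check box on the catalog attributes page.
enable_de_search=true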

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Selecting attributes for indexing


You can select the set of attributes to index before exporting data to IBM® Data Explorer.

You can select the set of attributes that you want to index to reduce the data that is sent to IBM Data Explorer. This selection also reduces the number of attributes that are
displayed on the free-text search result screen. To select this set of attributes for indexing, use the Enable for Free Text search facet that is available on attributes of the
Product Master spec. This facet is of type flag.

You can add this facet to secondary specs and subspecs, and enable or disable it for individual attributes. However, it is not available for the Lookup Spec and
Destination Spec types. The default value of the facet is true. When you export catalog data to IBM Data Explorer, only those attributes whose facet value is set to true are
exported and available for free-text search.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Configuring IBM Product Master to use IBM Data Explorer


After you install IBM® Data Explorer, you must configure it.

Before you begin


You must confirm your Velocity user name and password.



Procedure
1. Navigate to the etc/default/ directory.
2. Open the common.properties file and set the enable_de_search property to true. Close the file.
3. Navigate to the etc/default/data_explorer/ directory.
4. Open the BigIndexConfig.xml file.
5. Revise the settings for following attributes in the BigIndexConfig.xml file:
Note: The queueConfiguration details are from the IBM WebSphere® MQ installation; the zooKeeperConfiguration and dataExploreEngineInstance
details are from the IBM Data Explorer installation.

<?xml version='1.0' encoding='utf-8'?>


<indexConfiguration>

<queueConfiguration>
<queue>MyQueue</queue>
<queueConnectionFactory>MyConFactory</queueConnectionFactory>
<providerURL>file:///home/user/JNDI-Directory/</providerURL>
</queueConfiguration>

<bigIndexConfiguration>
<zooKeeperConfiguration>
<endpoint>
<host>localhost</host>
<port>2181</port>
</endpoint>
</zooKeeperConfiguration>
<dataExplorerEngineInstance>
<username>api-user</username>
<password>trinitron</password>
<url>http://localhost:9080/vivisimo/cgi-bin/velocity.exe?v.app=api-soap&wsdl=1&use-types=true&</url>
</dataExplorerEngineInstance>
</bigIndexConfiguration>
</indexConfiguration>

6. Close the BigIndexConfig.xml file.


7. From the IBM InfoSphere Data Explorer Admin console, complete these steps:
a. Click Management > Services > Search Engine > Options.
b. Click Edit.
c. In the Allowed IPs field, enter an asterisk (*).
d. Click OK.
All remote machines are now set up to search services on Data Explorer.
8. Configure the $TOP/bin/conf/env_settings.ini file to enable MQ:

[mq]
enabled=yes
#home will default to /opt/mqm if not set
home=/opt/mqm

After you have applied the changes to the env_settings.ini file, ensure that you run the $TOP/bin/configureEnv.sh script.
Important: If the version of the installed WebSphere MQ client does not match with the version of WebSphere MQ Resource Adapter (RA) that is shipped with the
WebSphere Application Server, see the instructions at JMS connectivity issues using WebSphere MQ server to prevent an error due to this version mismatch.
9. Open a browser and enter this url: http://localhost:9080/vivisimo/cgi-bin/admin.exe.
Note: In a new Data Explorer setup for Watson Search, the following steps are not required because the ce-collection* collections do not yet exist. They are created when the
receiver report is run for the first time.
The Velocity administrative login screen is displayed.
10. Enter your user name and password.
11. Click the Search Collections link.
12. Verify that the Crawling and Indexing options are stopped.
13. Select each of the following collections and then click Delete:
ce-collection_1_1
ce-collection_1_1-tmpImpl
Unnecessary collections are removed.
Note: On a new installation, neither collection exists, so no action is required. The collections need to be deleted only if there were previous
synchronization attempts.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Exporting the Product Master catalog data into a Data Explorer database
Before you can use Watson search, you must export your Product Master catalog data into a Data Explorer Big Index database.

About this task


Follow these steps to export Product Master catalog data into a Data Explorer Big Index database.

Creating reports
You must create two reports.



Performance parameters for the export and receiver report jobs
You can modify parameters to improve the performance of the export and receiver report jobs. However, in most cases the default values provide optimal
performance. Do not change these parameters unless you understand the implications.
Running the export and receiver reports
Follow these steps to run the export and receiver reports.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Creating reports
You must create two reports.

Creating the export report


Follow these steps to create the export report.
Creating the receiver report
Follow these steps to create the receiver report and to view the Report Console.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Creating the export report


Follow these steps to create the export report.

Procedure
1. Click Product Manager > Reports > Reports Console.
The Create/Edit Report Wizard displays.
2. Click New to create a new Report.
3. Click New to create a new Report Type.
4. Click New to create a new Input Parameters Spec.
5. In the Spec Name option box, select SearchExportInputSpec. Click New.
6. Create two attributes in the SearchExportInputSpec Parameters Spec:
a. Create the generateDifferencesAfterFirstRun attribute.
b. Create the chunkSize attribute.
For the default values of these attributes, see Running the export and receiver reports.
7. In the Select Distribution option box, select the Default option.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Creating the receiver report


Follow these steps to create the receiver report and to view the Report Console.

Procedure
1. Click Product Manager > Reports > Reports Console.
The Create/Edit Report Wizard displays.
2. Click New to create a new Report.
3. Click New to create a new Report Type.
4. Select the CatalogSummaryInputs input parameter.
5. In the Report Type option box, enter CESearchDataReceiverReportType.
6. Click Next.
7. In the Select type option box, select Regular.
8. Enter this code in the Scriptlet Editor:

//script_execution_mode=java_api="japi://com.ibm.mdm.ce.de.queue.extensionpoints.JMSQueueReceiverReportGenerateFunction.class"

9. Click Save.
The CESearchDataReceiverReportType report type is created.
10. Select the CESearchDataReceiverReportType report type.
11. In the Report Name option box, enter CESearchDataReceiverReport.
12. In the Select Distribution option box, select the Default option.



Results
The Report Console contains two reports with these names and types:
Table 1. Report Console results
Name                          Type
CESearchDataExportReport      CESearchDataExportReportType
CESearchDataReceiverReport    CESearchDataReceiverReportType

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Performance parameters for the export and receiver report jobs


You can modify parameters to improve the performance of the export and receiver report jobs. However, in most cases the default values provide optimal
performance. Do not change these parameters unless you understand the implications.

About this task


A brief description of these parameters is as follows:

threadCount - The number of concurrent indexers created by receiver jobs. The default is 1.
waitForIndexingAfter - The receiver thread waits for all indexing to complete after synchronizing the number of items defined by this value. The default is 500.
maxWaitWhenQueueEmpty - The receiver jobs are designed to tolerate an empty queue for the duration defined by this value, in seconds. In a situation where the
Exporter and Receiver are working together, and the Exporter happens to be slow, the receiver might encounter an empty queue. In this situation, the receiver does not
exit immediately; instead, it waits for this duration before exiting. The default is 60 seconds.
tpsReportingInterval - The interval in seconds after which the receiver reports the TPS in the logs. The default is 30 seconds.
flushBatchSize - The number of outstanding jobs submitted through the Indexer that can exist before jobs are flushed to be indexed. The default is 100.
flushInterval - The frequency before flushing jobs to be indexed. The default is 5000 ms.
indexVerificationBatchSize - The number of outstanding jobs submitted for the asynchronous indexing operations. The default is 1000.
indexVerificationInterval - The frequency with which the success of the asynchronous indexing operations is checked. The default is 300 ms.
maximumInProgressRequests - The maximum byte size of outstanding jobs submitted through the Indexer that can be processed without future job submissions being
blocked. The default is 5000.

Procedure
1. Navigate to the etc/default/data_explorer/ directory.
2. Open the BigIndexConfig.xml file.
3. Revise the settings for the following attributes in the BigIndexConfig.xml file:
threadCount
flushBatchSize
flushInterval
indexVerificationBatchSize
indexVerificationInterval
maximumInProgressRequests
waitForIndexingAfter
maxWaitWhenQueueEmpty
tpsReportingInterval
For example, observe the attributes within the dataExplorerIndexerBuilderOptions element in the following sample:

<?xml version='1.0' encoding='utf-8'?>


<indexConfiguration>

<queueConfiguration>
<queue>MyQueue</queue>
<queueConnectionFactory>TestConFactory</queueConnectionFactory>
<providerURL>file:/C:/JNDI_DIRECTORY/</providerURL>
</queueConfiguration>

<bigIndexConfiguration>

<zooKeeperConfiguration>
<endpoint>
<host>localhost</host>
<port>2181</port>
</endpoint>
</zooKeeperConfiguration>

<dataExplorerIndexerBuilderOptions>

<threadCount>1</threadCount>
<flushBatchSize>100</flushBatchSize>
<flushInterval>5000</flushInterval>
<indexVerificationBatchSize>1000</indexVerificationBatchSize>
<indexVerificationInterval>300</indexVerificationInterval>
<maximumInProgressRequests>5000</maximumInProgressRequests>
<waitForIndexingAfter>500</waitForIndexingAfter>
<maxWaitWhenQueueEmpty>60</maxWaitWhenQueueEmpty>



<tpsReportingInterval>30</tpsReportingInterval>

</dataExplorerIndexerBuilderOptions>

<dataExplorerEngineInstance>

<auto-namespace>false</auto-namespace>
<username>api-user</username>
<password>trinitron</password>
<url>http://localhost:9080/vivisimo/cgi-bin/velocity.exe?v.app=api-soap&wsdl=1&use-types=true&</url>

</dataExplorerEngineInstance>
</bigIndexConfiguration>

</indexConfiguration>

4. Save and close the BigIndexConfig.xml file.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Running the export and receiver reports


Follow these steps to run the export and receiver reports.

Before you begin


Review this information about the default report values.

generateDifferenceAfterFirstRun = false
The false value exports every item in the catalogs. The true value exports only the items that were updated since the last export report.
chunkSize = 100
The 100 value designates 100 attributes for inclusion in the XML chunk that is sent to the WebSphere® MQ JMS queue.
A smaller value sends fewer XML chunks to the WebSphere MQ JMS queue and results in a shorter response time. A larger value sends more XML chunks and results
in a longer response time.

To change the values:

1. In the Report Console, click the name of your report.


2. Adjust the parameters to your preference.

Procedure
1. In the Report Console, click the arrow in the CESearchDataExportReport row.
You ran the CESearchDataExportReport report.
2. In the Schedule Status Information twistie, verify that the report ran without errors.
3. In the WebSphere MQ explorer queue, confirm that the TestQueue is populated.
4. In the Report Console, click the arrow in the CESearchDataReceiverReport row.
You ran the CESearchDataReceiverReport report.
5. In the Schedule Status Information twistie, verify that the report ran without errors.
6. In the WebSphere MQ explorer queue, confirm that the TestQueue is cleared.

Results
The exported items are indexed and available for search through Watson Search.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Editing objects
You can edit an item or category.

Inline editing
In the IBM® Product Master interface, for inline-editable attributes you can edit the value in the text box without opening the pop-up editor. Each text box has an icon
toward its top-right corner; clicking the icon opens the pop-up editor so that you can edit the same value there. The inline text box also has an
auto-expand and auto-shrink feature, which means that it expands and shrinks in height as you type and edit the value inline. The height of the text box keeps
expanding until it reaches the maximum height of the text area, as defined in the user settings. As the height of the text box reaches its maximum, the icon to open the pop-up editor
stays at the top-right corner of the text box. Once the content expands beyond the maximum height of the text box, a scroll bar appears. If at any point you switch back and
forth between the pop-up and inline editors, the edited value is synchronized in both editors.

Editing using Single Edit


You can navigate to the Single Edit UI in the following ways:

Select an item on the left-navigation panel. This option launches the single-item edit page.
Search for an item and select it from the list of results in the Search results page. This option launches the single-edit pop-up editor.
Select an item in a multi-edit page. This option launches the single-edit pop-up editor.

You can edit the attributes of an item in the Single Edit screen in any of the following ways:

Edit the value inline (if the attribute is inline editable) by using the inline editing feature.
Click the image button (icon) within the field to display the editor for that particular attribute.
Double-click within the field.
Use the keyboard command Ctrl+Enter.

When you modify an attribute in a spec from the item edit screen, the spec attribute changes are visible only after you exit the screen and return to the item.

Editing using Multi Edit


You can navigate to the Multi Edit UI in the following ways:

Select the View X items link on the left-navigation panel. This option launches the multi-edit page.
Search for items and view the list of results in the Search results page. This option provides a multi-edit view.

You can use the inline editing feature in the Multi Edit screen in any of the following ways:

Edit the value inline (if the attribute is inline editable) by using the inline editing feature when in edit mode.
Click the image button icon.
Double-click within the field when in edit mode.
Use the keyboard command Ctrl+Enter when in edit mode.

Note: The multi-edit table shows a distinction between editable and non-editable cells through the use of shading. If a row is shaded, it is not editable.
When you edit a multiple-line input field, for example, when the input field wraps to a new line while it is being edited, the grid does not display correctly until you finish
in Edit mode and press Enter or click another cell. After either of these actions, the grid refreshes and the row is displayed correctly.

When you are creating an item, ensure that you are familiar with the following notes (see the example after this list):

The maximum length defined for a string or rich text attribute is defined by the number of bytes (instead of characters).
For a rich text attribute, the HTML markup tags are included in the total number of bytes calculation. For example, <b>Hello</b> is counted as 12 bytes instead of
5 bytes.
The maximum number of characters that can be entered in a string attribute is fewer than the maxLength if any of the following types of characters are used:
double-byte characters, or
2-byte characters, or
4-byte GB18030 characters.
As an example, a string attribute with a maxLength of 5000 allows up to 1333 double-byte characters.
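
To see the difference between character count and byte count for yourself, you can compare the two on the command line. A minimal sketch, assuming a UTF-8 locale:

# Character count versus byte count for sample attribute values.
printf '%s' '<b>Hello</b>' | wc -m   # 12 characters
printf '%s' '<b>Hello</b>' | wc -c   # 12 bytes (ASCII characters are 1 byte each)
printf '%s' 'Québec' | wc -m         # 6 characters
printf '%s' 'Québec' | wc -c         # 7 bytes (é is 2 bytes in UTF-8)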

See Product accessibility for shortcuts that can help you navigate between editors on the Single Edit page using the new user interface.

Screens for editing objects


You can view or modify objects, such as items and categories with a content authoring screen.
Editing items in a catalog
You edit an item to provide values for its attributes.
Editing categories in a hierarchy
You edit a category by providing values for its attributes.
Editing lookup table type attribute values
You can edit lookup table type attribute values.
Editing attribute values using the pop-up menu
You can edit attribute values for either categories or items using the pop-up menu.
Restricting editing of all attributes in the [System Default] view
Customized catalog views are created to limit editing of or access to the attribute values of an item; however, you can still switch the view to [System Default] in the
Catalog Console screen, which makes all the attributes within an item editable. There is, however, a way to prevent a user from being able to edit all the attributes of
an item.
Restricting catalog selections for relationship attributes
You can configure the list of available choices and defaults for the relationship attributes of your spec-node.
Checking out and editing items and categories
You can check out an entry and have that entry open automatically for you to edit.
Using link attributes
When creating the items under a catalog, items from the target catalog are used as values of link attributes. Linked attribute values are displayed using the
destination attribute picked on the Catalog Attributes screen.
Categorizing an item
You can map one or more items into one or more categories. This category mapping enables you to group related items together. You can categorize an item using
Single Edit, where you add one item to multiple categories, or Multi Edit, where you can add multiple items to multiple categories.
Filtering a list of items and categories
You can use the text field boxes on the Multi Edit screen to filter through the list of items or categories for easier searching and modifying.
Changing where categories and items are displayed in the left pane
Using the Use Ordering feature, you can modify the display position of individual items and categories in the left pane. You can use Java APIs and Script APIs to
read items or categories that are displayed in the user-defined order.



Viewing relationships
You can view relationships between items in the Linked Items and Related Items tabs in the single-edit screen. The Linked Items tab shows the items that link to
the given item. The Related Items tab shows the relationships of the current given item.
Working with collaboration areas
A collaboration area is an area in which a group of users can work on certain sets of attributes for an entry (an item or category). A collaboration area has an
associated source container, which is either a catalog or a hierarchy, and it has an associated workflow, which specifies the steps and the flow of steps in the
collaboration area.
Rich text editor overview
You use the rich text editor to customize the format, colors, and various other features of text that is associated with an attribute of type Rich Text. You can
use the native rich text editor that is shipped with the product, or you can use a rich text editor of your choice.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Screens for editing objects


You can view or modify objects, such as items and categories with a content authoring screen.

Item and Category Detail screens


You use the Item and Category Detail screens to view, edit, or create an item or category, and to add attribute values. The Item Detail screen displays when you are
working with an item. The Category Detail screen displays when you are working with a category. To access one of these screens, you can select an item or category
from the catalog or hierarchy modules in the left pane, the collaboration area console, or the Worklist UI.
Single Edit screen
Business users can navigate to the Single Edit screen in several ways:

Select an item or category on the left navigation pane
This launches the single edit screen.
Search for an item or category, and the search returns a single object
This launches the single edit screen.
Search for an item or category, select it from the list of search results in the multi-edit results page, and click "Open"
This launches the Single Edit pop-up screen. This applies only to the new UI (single and multi-edit with rich search simplified).
Select an item in a multi-edit page and click "Open"
This launches the Single Edit pop-up screen. This applies only to the new UI (single and multi-edit with rich search simplified).
More
This button, if visible, lists the additional available functions that you can choose from. This applies to all four types of screens of the new UI:
single edit, multi edit, workflow single edit, and workflow multi edit.

You use the Single Edit screen to edit a single object. This screen displays a single object that you can view or edit. The icon in the top section provides the status of
the object. For example, if the object is in sync with the database, a green icon is displayed. If an error has occurred, a red icon is displayed.

When you use the Single Edit screen, you can directly edit values within text, date, and URL type fields without the use of pop-up editors in the new UI.

When you select a single object from the left pane toolbar, that object displays in a Single Edit screen. When you select multiple objects, those objects display in a
Multiple Edit screen. You can edit objects and navigate through the list of objects in both screens.

Multiple Edit screen


Business users can navigate to the Multiple Edit screen in several ways:

Select the "View X items" or "View X categories" link on the left navigation panel
This launches the Multiple Edit screen.
Search for items or categories that return more than one object
This launches the Multiple Edit pop-up screen.
More
This button, if visible, lists the additional available functions that you can choose from.

You use the Multiple Edit screen to edit multiple objects at the same time. You can also keep track of objects that you have previously visited in this screen.

Note: In the Multiple Edit screen,

You can clear the previously selected items by clicking the Deselect All check box. However, there is no Select All feature available currently.
You must use the background mode when you generate reports for catalogs with many items. To use the background mode, select the Submit as a
background job option when you click Generate Report.
For range selection, to ensure the best performance, let the items load completely before you perform any operation in the browser, and work with a limited number of items.

Important: Wait for items to load completely before you perform any action on the multi-edit screen.
For every item that you click in the Multiple Edit panel, the item details display in the Single Edit panel and you must click the Multiple Edit panel tab at the top of
the panel to return to your search results. To highlight and track the items that you have visited in the Multiple Edit panel, you can set the track_visited_entries
parameter value to true. After you select an object from your search results, that object is saved in your browser history.

Workflow Single Edit screen


You can view and edit an item or category in a workflow step in the Workflow Single Edit screen. Only the attributes that were defined as accessible in the workflow
step definition are displayed in the Workflow Single Edit screen.
Business users can navigate to the Workflow Single Edit screen in several ways:

Select an item or category on the left navigation pane in a step in the collaboration area left navigation module.
This will launch the single edit screen.
Search for an item or category in a step of a collaboration area and the search returns a single object



This will launch the single edit screen.
Search for an item or category in a step of a collaboration area and select it from a list of search results in the multi-edit results page and click "Open"
This will launch the single edit pop-up screen.
Select an item or category in a multi-edit page for a collaboration step and click "Open"
This will launch the single edit pop-up screen.
Select an item or category in the worklist page for a collaboration step and click "Open"
This will launch the single edit screen.

Note: In the Single Edit screen, any entered comments are cleared after they are successfully applied by a Reserve or Release action. This prevents the same
comments from being applied to subsequent actions unintentionally.
Workflow Multi Edit screen
You can view and edit multiple items or categories in a workflow step in the Workflow Multi Edit screen. Only the attributes that were defined as accessible in the
workflow step definition are displayed in the Workflow Multi Edit screen.
Business users can navigate to the Workflow Multiple Edit screen in several ways:

Click a collaboration step in the collaboration area console


This will launch the multiple edit screen.
Search for items or categories in a step of a collaboration area that returns more than one object
This will launch the multiple edit screen.
Select multiple items or categories in the worklist page for a collaboration step and click "Open"
This will launch the single edit screen.

CAUTION:
IBM® Product Master does not support any browser-based external feature or plug-in.
Example

In Mozilla Firefox, do not right-click an image and click View Image.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Editing items in a catalog


You edit an item to provide values for its attributes.

Before you begin


You must search for the items that you want to edit.

About this task


In the catalog attributes, you can specify a spec attribute as the one to be displayed as the item's name. In the new business user interface, what is shown depends on the attribute collections and views. For example, in the new business user interface, the primary and secondary spec attributes that are included in the selected view are shown in the single edit screen. The display attribute's value for the item is shown at the top of the screen above the action buttons. The display attribute or primary key attribute is only shown if the selected catalog view includes those attributes in its attribute collections.

Procedure
1. Expand the catalog module in the left pane navigation to view the list of items.
2. Click on an item to open in the right pane.
3. Modify the attributes of the item.
If you have an attribute whose type is string enumeration rule, and whose value is dependent on another attribute, then you must perform the following:
In version 6.0.0 and 9.0.0, click Refresh after you make your unsaved changes. The refresh reflects the changed value in the drop-down for the string enumeration.
In version 9.0.0 fix pack 2 and later, click the drop-down. The string enumeration rule is evaluated by using the most up-to-date values of attributes on the client, even those that have not been saved.
Note: In the new business user UI, all of the item attributes are rendered in a multi-step process. In the first step, only those attributes are rendered that can be accommodated on the screen. The next set of attributes is rendered only if you scroll down. While the rendering of the first set of attributes is very fast, you will notice a slight delay if you need to scroll down to view the next set of item attributes. This slight delay is the normal behavior in the new UI and not a performance issue.
a. Focus on the attribute field to modify an attribute of the item.
b. Provide values for the attributes inline in the attribute field for certain attribute types that accept inline input. Alternatively, click on the icon visible towards
the right of the attribute field to open a popup editor where available. Provide values and click Ok to accept the changes.
Note: The inline editor in the new UI displays warnings in runtime about input exceeding attribute size for attributes of type String only.
c. Click Save.
You can also make multiple changes to multiple items. To make changes to a specified attribute across multiple items, which was previously referred to as the "fill-
down" capability, you use the copy and paste features in the Collaborative MDM UI:
a. Select the cell. Ensure that you select the cell itself, not the contents in the cell.
b. Right-click and choose Copy Cell. Alternatively, use Ctrl+c to copy the contents of the cell.
c. Select the rows that you want to update the attributes with this data that you just copied. Use standard Windows multi-select options, for example, select a
row, then Shift and select the last row in the set.
d. Right-click and choose Paste. The paste shows you what attribute will be pasted. Alternatively, use Ctrl+v to paste the previously copied contents.
If you are editing through the Single edit pop-up window of the Multi-edit screen, do the following:
a. Right-click on the attribute field and choose Copy Attribute.



b. Close the Single edit pop-up window to return to Multi-edit screen.
c. Select the rows that you want to update the attributes with this data that you just copied. Use standard Windows multi-select options, for example, select a
row, then press Shift and select the last row in the set.
d. Right-click and choose Paste. The paste shows you what attribute will be pasted.
e. Click Save.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Editing categories in a hierarchy


You edit a category by providing values for its attributes.

Before you begin


You must search for the categories that you want to edit or browse to the categories you want to edit using the left navigation pane.

Procedure
1. Expand the catalog module in the left pane navigation to view the list of categories.
2. Click on a category to open in the right pane.
3. Modify the attributes of the category.
Note: If you have an attribute whose type is string enumeration rule, and whose value is dependent on another attribute, then you must perform the following:
In version 6.0.0 and 9.0.0, click Refresh after you make your unsaved changes. The refresh will reflect the changed value in the drop-down for the string
enumeration.
In version 9.0.0 fix pack 2 and later, click the drop-down. The string enumeration rule is evaluated using the most up-to-date values of attributes on the
client, even those that have not been saved.
a. Focus on the attribute field to modify an attribute of the category.
b. Provide values for the attributes inline in the attribute field for certain attribute types that accept inline input. Alternatively, click on the icon visible at the
right of the attribute field to open a pop-up editor where available. Provide values and click Ok to accept the changes.
c. Click Save.
You can also make multiple changes to multiple categories. To make changes to a specified attribute across multiple categories, which was previously referred to as
the "fill-down" capability, you use the copy and paste features in the Collaborative MDM user interface:
a. Select the cell. Ensure that you select the cell itself, not the contents in the cell.
b. Right-click and choose Copy Cell. Alternatively, use Ctrl+c to copy the contents of the cell.
c. Select the rows that you want to update the attributes with this data that you just copied. Use standard Windows multi-select options, for example, select a
row, then press Shift and select the last row in the set.
d. Right-click and choose Paste. The paste shows you what attribute will be pasted. Alternatively, use Ctrl+v to paste the previously copied contents.
e. Click Save.
If you are editing through the Single edit pop-up window of the Multi-edit screen, perform these steps:
a. Right-click on the attribute field and choose Copy Attribute.
b. Close the Single edit pop-up window to return to Multi-edit screen.
c. Select the rows that you want to update the attributes with this data that you just copied. Use standard Windows multi-select options, for example, select a
row, then Shift and select the last row in the set.
d. Right-click and choose Paste. The paste shows you what attribute will be pasted.
e. Click Save.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Editing lookup table type attribute values


You can edit lookup table type attribute values.

Before you begin


You must search for the categories or items that you want to edit. Ensure that you already have a lookup table type attribute that is defined in a spec.

About this task


When defining the File Spec, Primary Spec, Lookup Spec, Secondary Spec, Sub-Spec and Script Input Spec, there are specific fields where you can select the type of
display for your lookup table attribute values.

To select the display of the lookup table attribute values, you can either select a specific format from the Lookup Table Value Display Format field or the Lookup Table
Value Display Attribute field drop-down list. For relationship type attributes, you can select a value for the Relationship Value Display Format field.

The Lookup Table Value Display Format field enables you to choose the type of format. The Lookup Table Value Display Attribute field enables you to choose any attribute of the selected lookup table as the display attribute; the attribute can be the primary key of the lookup table as well. The Relationship Value Display Format field enables you to display the different types of relationship values.



The single edit and multi-edit screens display the lookup table and relationship attribute values in the specified format and also use the specified display attribute.

The lookup table displays both the primary key of the lookup table attribute and the specified display attribute. The relationship attributes display the destination catalog name, primary key, and the display attribute.

Procedure
1. Expand the catalog module in the left pane navigation to view the list of categories or items.
2. Click on a category or an item to open in the right pane.
3. Edit a lookup table type attribute value.
a. Modify the attribute value by using the select lookup table entry window. This window opens as a pop-up when you focus on the lookup table type attribute field and click the icon towards the right of the attribute field.
b. Provide the primary key for the lookup table entry in the Key field. Alternatively, if a display attribute was identified in the spec definition of this lookup table attribute, provide the display name for the lookup table entry in the Display field.
c. Alternatively, if you do not know what the primary key or the display value is, you can provide search criteria under the Search for Entries section. Click the green arrow to run the search. Alternatively, if you want to look at all the entries in the lookup table without specifying search criteria, click View All Rows. Choose an entry from the search results by clicking it.
Note: To make changes to a specified attribute across multiple items, do the following:
i. Select the cell. Right-click and choose Copy Cell.
ii. Select the rows that you want to update the attributes with this data that you just copied. Use standard Windows multi-select options, for example,
select a row, then press Shift and select the last row in the set.
iii. Right-click and choose Paste. The paste shows you what attribute will be pasted.
d. Click Ok.
e. Click Save.
To modify lookup table type attribute for multiple categories or items:
a. Click the link next to the hierarchy or category that lists the number of categories or items.
b. Select Edit All. The right pane displays the list of categories or items with its attributes.
c. Modify the attribute value by using the select lookup table entry window. This window opens as a pop-up when you focus on the lookup table type attribute field and click the icon towards the right of the attribute field.
d. Provide the primary key for the lookup table in the Key field. Alternatively, if a display attribute was identified in the spec definition of this lookup table attribute, provide the display name for the lookup table entry in the Display field.
e. Alternatively, if you do not know what the primary key or the display value is, you can provide search criteria under the Search for Entries section. Click the green arrow to run the search. Alternatively, if you want to look at all the entries in the lookup table without specifying search criteria, click View All Rows. Choose an entry from the search results by clicking it.
f. Click Ok.
g. Click Save all modified.

Lookup table and relationship attributes


Within the spec screen, you can select the lookup table and the relationship type attributes. Additional facets are available to control the display of these types of attributes.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Lookup table and relationship attributes


Within the spec screen, you can select the lookup table and the relationship type attributes. Additional facets are available to control the display of these types of attributes.

Lookup table type attributes


This section focuses on the lookup table value display format type and the lookup table value display attribute type. These two types are optional facets available only for
the lookup table type attributes.
These attribute types are available when defining any of the following:

File Spec
Primary Spec
Lookup Spec
Secondary Spec
Sub-Spec
Script Input Spec

These attribute types are not available for the Destination Spec type.
Lookup table value display format: This attribute type controls the display format of a lookup table type attribute. The following formatting options are available, and you can pick one of them; a small illustrative sketch follows the list. By default, lookup table type attribute values are displayed by using the Primary Key format.

Primary Key
Display Attribute
Primary Key > Display Attribute
Lookup Table Name > Primary Key
Lookup Table Name > Display Attribute
Lookup Table Name > Primary Key > Display Attribute
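
In effect, each format joins up to three parts (lookup table name, primary key, and display attribute) with a " > " separator. The following minimal Java sketch illustrates the six combinations; it is not the product implementation, and the class, enum, and sample values are hypothetical.

public final class LookupDisplayFormat {

    // Hypothetical names for the six formats listed above.
    enum Format { PK, DISPLAY, PK_DISPLAY, TABLE_PK, TABLE_DISPLAY, TABLE_PK_DISPLAY }

    static String format(Format f, String table, String pk, String display) {
        switch (f) {
            case PK:            return pk;
            case DISPLAY:       return display;
            case PK_DISPLAY:    return pk + " > " + display;
            case TABLE_PK:      return table + " > " + pk;
            case TABLE_DISPLAY: return table + " > " + display;
            default:            return table + " > " + pk + " > " + display;
        }
    }

    public static void main(String[] args) {
        // Default behavior: only the primary key is shown.
        System.out.println(format(Format.PK, "Countries", "US", "United States"));               // US
        System.out.println(format(Format.TABLE_PK_DISPLAY, "Countries", "US", "United States")); // Countries > US > United States
    }
}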



Lookup table value display attribute: This attribute enables you to pick any attribute of the selected lookup table as the display attribute. A drop-down of all the attributes of the selected lookup table is displayed on the spec screen. You can select an attribute as the display attribute from this drop-down. The first attribute in the drop-down is selected by default when the facet is added. The attribute can be the primary key of the lookup table as well. The lookup table attribute can be indexed or non-indexed.

Note: The lookup table entries are stored in the ITM table. If an item has an indexed lookup attribute, the value is stored in the ITA.ITA_NUMERIC_VALUE column of the ITA table. If you have an empty lookup attribute for an item, the respective entry for ITA.ITA_NUMERIC_VALUE is NULL; hence, a search query that involves an indexed lookup attribute does not fetch items whose lookup attribute value is NULL.
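
The following runnable Java toy model illustrates the NULL behavior that this note describes, using an in-memory map in place of the ITA table; the item names, values, and class name are invented for illustration only.

import java.util.HashMap;
import java.util.Map;
import java.util.Objects;

public final class IndexedLookupNullDemo {
    public static void main(String[] args) {
        // Hypothetical ITA_NUMERIC_VALUE entries per item; Item C's lookup attribute is empty (NULL).
        Map<String, Long> itaNumericValue = new HashMap<>();
        itaNumericValue.put("Item A", 42L);
        itaNumericValue.put("Item B", 7L);
        itaNumericValue.put("Item C", null);

        long searched = 42L;
        // Like the SQL predicate "ITA_NUMERIC_VALUE = 42": a NULL value never satisfies equality.
        itaNumericValue.forEach((item, value) -> {
            if (Objects.equals(value, searched)) {
                System.out.println("Fetched: " + item);             // only Item A
            }
        });
        // Items with an empty lookup attribute are fetched only if NULL is requested
        // explicitly (as in SQL "... OR ITA_NUMERIC_VALUE IS NULL").
        itaNumericValue.forEach((item, value) -> {
            if (value == null || Objects.equals(value, searched)) {
                System.out.println("Fetched with IS NULL: " + item); // Item A and Item C
            }
        });
    }
}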

Relationship type attributes


The relationship value display format attribute enables you to select the display format of the relationship attribute values. This optional attribute is available only for
relationship type attributes. After an import, the value of the relationship attribute is displayed only for items.
The following formatting options are available, and you can pick one of them. By default, relationship attribute values are displayed by using the Catalog Name > Primary Key > Display Attribute format.
Note: The Display Attribute is the display attribute of the selected target catalog for the relationship attribute.

Primary Key
Display Attribute
Primary Key > Display Attribute
Catalog Name > Primary Key
Catalog Name > Display Attribute
Catalog Name > Primary Key > Display Attribute

This attribute displays when defining any of the following:

File Spec
Primary Spec
Lookup Spec
Secondary Spec
Sub-Spec
Script Input Spec

This attribute type is not available for the Destination Spec type.
Relationship type attributes allow users to save references in a given item to other items.

For example, let's say that you want Item A in Catalog A to contain a reference to Item B in Catalog B. To store this reference, you create a relationship attribute in a spec that is implemented by Item A, and populate that attribute in Item A with a reference to Item B.

In the new business user interface, a related item is selected using the popup relationship editor. You can select the related item in one of the following ways:

Select the primary key of the related item from the key picker, or the display value of the related item from the display picker in the top section of the editor. You can choose the value from the drop-down, or type part of the value in the text field and select the desired value from the list of matches offered. If you do not select a valid value before clicking OK, you are prompted to choose a valid value. You can click Cancel to exit the relationship editor without choosing a valid value. When you type in the entire matching value, verify that your value selections become visible before you click OK. This is to ensure that the values are populated correctly.
Browse for the related item in the hierarchy tree in the Search and Browse for items section.
Search for related items using the search fields in the Search and Browse for items section and choose the desired related item from the search results.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Editing attribute values using the pop-up menu


You can edit attribute values for either categories or items using the pop-up menu.

Before you begin


You must search for the items that you want to edit.

Procedure
1. Expand the catalog module in the left pane navigation to view the list of categories or items.
2. Click on a category or an item to open in the right pane.
3. Modify an attribute value for an entry choosing any option:
Deleting attribute text:
a. Right-click any text-populated field row. A list of options to edit the attribute value displays.
b. Delete the text within the field by selecting Clear Field.
Note: Selecting Clear Field deletes the content within the field, not the field itself.
Adding an occurrence to an attribute:
a. Click Add more next to an attribute. Alternatively, you can right-click the name of an attribute and select Add occurrence. An occurrence is added to the attribute.
Deleting an occurrence from an attribute:
a. Right-click the number next to the occurrence of an attribute that you want to delete.
b. Select Delete occurrence. The occurrence is deleted from the attribute.
4. Optional: You can make changes to a specified attribute across multiple items.



Copying an attribute value:
a. Select the cell. Ensure that you select the cell itself, not the contents in the cell.
b. Right-click and choose Copy Cell.
c. Select the rows that you want to update the attributes with this data that you just copied. Use standard Windows multi-select options, for example, select a row, then press Shift and select the last row in the set.
d. Right-click and choose Paste. The paste shows you what attribute will be pasted.
Deleting an attribute value:
a. Select the categories or items from the left pane navigation. A list of attributes related to the categories or items displays in the Multi Edit screen. The attributes are listed in columns.
b. Right-click any cell or row. A list of options to edit the attribute value displays.
c. Select Delete Cell to delete the text within the cell.
Note: Selecting Delete Cell deletes the content within the field, not the field itself.
Deleting a category or item:
a. Select the categories or items from the left pane navigation. A list of attributes related to the categories or items displays in the Multi Edit screen. The attributes are listed in columns.
b. Right-click any cell or row. A list of options to edit the attribute displays.
c. Delete the row by selecting Delete Row. The category or item is deleted.
If you are editing through the Single edit pop-up window of the Multi-edit screen:
a. Right-click and choose Copy Attribute. Close the Single edit pop-up window to return to the Multi edit screen.
b. Select the rows that you want to update the attributes with this data that you just copied. Use standard Windows multi-select options, for example, select a row, then press Shift and select the last row in the set.
c. Right-click and choose Paste. The paste shows you what attribute will be pasted.
d. Click Save.
5. Click Save.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Restricting editing of all attributes in the [System Default] view


Customized catalog views are created to limit editing or access to attribute values of an item; however, you can still switch the view to [System Default] in the Catalog Console screen, which makes all the attributes within an item editable. There is, however, a way to prevent a user from being able to edit all the attributes of an item.

About this task


The View drop-down in the top-right corner of the Catalog Console screen of the application is designed to display all the views. However, this catalog view drop-down is
not associated with any role or ACG (access control group), so a business user can edit all the attributes of an item by choosing [System Default] as the view.
Although there is no way to suppress the [System Default] view, you can restrict edit access of the items in this default view.

Procedure
1. Select Product Manager > Catalogs > Catalog Console.
2. Choose the catalog that you want to work on.
3. After choosing the catalog, click Attrs from the panel at the top.
4. Change User Defined Core Attribute Collection to None.

Results
Now, only the primary key of the item is seen and is editable; all other attributes are not shown and therefore cannot be edited.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Restricting catalog selections for relationship attributes


You can configure the list of available choices and defaults for the relationship attributes of your spec-node.

About this task


Let's say an application designer creates a spec that includes a relationship attribute. The application designer wants to configure a list of allowed destination catalogs, along with a default catalog and a hierarchy. When a user edits the entry with the relationship attribute, the user interface reflects the application designer's intent to limit the choices of the destination catalogs and defaults. You can use any of the following three properties to configure a relationship attribute on a spec node:

Relationship Default Catalog
Relationship Default Hierarchy
Relationship Destination Catalogs



Procedure
1. Select the drop-down field under the Type and above the Attribute Collection Associations fields. The following three fields display:
Relationship Default Catalog
Relationship Default Hierarchy
Relationship Destination Catalogs
2. Select the default catalog name from the drop-down list in the Relationship Default Catalog field.
Note: If no default catalog is set, or if the chosen default is not in the list of destinations, or if the chosen catalog is "source" but the attribute is shown on a hierarchy, then the first catalog alphabetically in the drop-down is automatically selected, which then picks the corresponding hierarchy.
3. Select the default hierarchy name from the drop-down list in the Relationship Default Hierarchy field.
Note: If no default hierarchy is specified, or if the default value is invalid because the chosen default is not mapped to the picked catalog, then the default is the primary hierarchy of the chosen catalog. An example of an invalid default hierarchy is one that does not correspond to the selected catalog.
4. Click CLICK HERE for the Relationship Destination Catalogs field to specify which catalogs you want to add or remove from the mapping.
Note: If no destination catalog is specified, then all of the catalogs display. If a destination catalog is selected, but it does not resolve, then no catalogs display in
the user interface.
5. Select the type and click Save.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Checking out and editing items and categories


You can check out an entry and have that entry open automatically for you to edit.

Before you begin


Ensure that you are a performer for the particular workflow step where you are checking out an item or category.

About this task


The checkout and edit feature enables you to check out an item or category into a collaboration area from either the left navigation pane or the Single and multi-edit with rich search simplified single edit user interface screen. The checked-out item or category opens and displays in the right pane window, ready to be modified. You can then directly modify the item or category in the right pane. This feature bypasses the need to manually browse through the collaboration area, select the item or category that you want to modify, and then open the entry to edit it.

Procedure
1. Create a collaboration area.
2. Select the Checkout and Edit check box to check out and immediately edit an item or category.
Note: If the Checkout and Edit check box is selected in a collaboration area and there is a script attached to the workflow step, the UI does not show the updated
data because the App server service does not wait for the workflow engine to complete the execution. Click Refresh to see the updated data.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Using link attributes


When you create items under a catalog, items from the target catalog are used as the values of link attributes. Link attribute values are displayed by using the destination attribute that is picked on the Catalog Attributes screen.

Manual entry of linked attribute values


You can manually enter the values directly into the link attribute editor.

In the new business user interface, if a display attribute is used instead of the primary key of the destination catalog, the value of the display attribute is used for display purposes.

The system then looks up the corresponding item in the destination catalog by using this value and retrieves the primary key of the destination item for storage as the value of the link attribute, as illustrated in the sketch below.
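
The following conceptual Java sketch models that resolution. The destination catalog is reduced to a plain map of primary key to display attribute value; the class name, item keys, and display values are invented, and this is not the product API.

import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Optional;

public final class LinkValueResolution {

    // Hypothetical destination catalog content: primary key -> display attribute value.
    private static final Map<String, String> DESTINATION_CATALOG = new LinkedHashMap<>();
    static {
        DESTINATION_CATALOG.put("SKU-001", "Cotton shirt");
        DESTINATION_CATALOG.put("SKU-002", "Linen trousers");
    }

    // Finds the primary key whose display attribute matches the value that was typed
    // into the link attribute editor; that primary key is what gets stored.
    static Optional<String> resolveToPrimaryKey(String typedDisplayValue) {
        return DESTINATION_CATALOG.entrySet().stream()
                .filter(e -> e.getValue().equals(typedDisplayValue))
                .map(Map.Entry::getKey)
                .findFirst();
    }

    public static void main(String[] args) {
        // Typing "Cotton shirt" stores "SKU-001" as the link attribute value.
        System.out.println(resolveToPrimaryKey("Cotton shirt").orElse("<no match>"));
    }
}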

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Categorizing an item



You can map one or more items into one or more categories. This category mapping enables you to group related items together. You can categorize an item using Single
Edit, where you add one item to multiple categories, or Multi Edit, where you can add multiple items to multiple categories.

Before you begin


Ensure that you already have an item created.

Procedure
Categorize an item. Based on your settings choices, you have the following options available:
Single edit:
a. Select an item in the left pane navigation. The Single edit screen displays.
b. Click the Categories tab.
c. Choose the hierarchy in which to find categories to map to by using the Hierarchy drop-down.
d. Browse to the category to map to by using the hierarchy tree that is displayed. Double-click the category to map to. Alternatively, click the category to map to and click Add Category.
e. Alternatively, search for the category that you want to map the items to by using the Search section. Double-click the category to map to. Alternatively, click the category to map to and click Add Category.
f. Alternatively, choose the category to map to from the drop-down in the "Add Categories to Item" section.
g. Click OK to save your mapping. The saved mapping is then captured in the Multi edit screen.
h. Click Save > Save all modified. To view your changes in the left pane navigation, refresh the left pane navigation. The mapped items display under the categories.
i. Click Save to save everything in the Attributes and Categories tabs. The mapping displays in the Mapped categories box.
Multi edit:
a. Click the link next to the category that lists the number of items that are associated with that category. The Multi Edit screen displays.
b. Select one or more items that you want to map to one or more categories, and click Categorize. The Category Mapping pop-up displays.
c. Choose the hierarchy in which to find categories to map to by using the Hierarchy drop-down.
d. Browse to the category to map to by using the hierarchy tree that is displayed. Double-click the category to map to. Alternatively, click the category to map to and click Add Category.
e. Alternatively, search for the category that you want to map the items to by using the Search section. Double-click the category to map to. Alternatively, click the category to map to and click Add Category.
f. Search on one or more categories that you want to map the items to, and click Add Category. The mapping is displayed in the Add Categories to Item box.
g. Click OK to save your mapping. The saved mapping is then captured in the Multi edit screen.
h. Click Save > Save all modified. To view your changes in the left pane navigation, refresh the left pane navigation. The mapped items display under the categories.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Filtering a list of items and categories


You can use the text field boxes on the Multi Edit screen to filter through the list of items or categories for easier searching and modifying.

Before you begin


Ensure that the attributes on which you want to filter are indexed.

About this task


You can filter only the string and string-based data types.
The following list contains examples of some of the string types of attributes:

String
String enumeration
URL - In the URL field of the Persona-based UI, specify any URL (Uniform Resource Locator) and click the icon to open the page in a new browser tab. If you enter a wrong URL, the icon remains disabled.
Note: You can specify only English-language URLs.
Image URL
Thumbnail image
Thumbnail image URL
Binary
Image

Important: You cannot filter on numeric attributes.

Procedure
1. Add a catalog in the left pane navigation and click +.
2. Expand the hierarchy to view the categories. Click the link next to the category that lists the number of items that are associated with that category, and select Edit
All. The Multi-edit screen opens and a list of items display.



3. Select the filter icon in the top right corner to turn the filtering on. A row of text boxes display under each column.
4. Click in the text box under the column that you want to filter on.
5. Provide a value for your search criteria in the text box and press Enter to apply the filter. Remove the contents of the filter text box and press Enter to remove the
filter.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Changing where categories and items are displayed in the left pane
Using the Use Ordering feature, you can modify the display position of individual items and categories in the left pane. You can use Java APIs and Script APIs to read items
or categories that are displayed in the user-defined order.

About this task


By default, any specific display order of items or categories in the left pane is not guaranteed. Using this feature, you can set a customized display order for items or
categories in the left pane. The ordering of items and categories is with respect to their parent category. Modifying the display position of individual items and categories to
the top of the list reduces scrolling when you have many items and categories but use only a few frequently.
Note: The actual positions of individual items and categories in the list are not changed. They are displayed at different positions as long as the Use Ordering check box in the Catalog Attributes window is selected. If you clear this check box, items and categories are displayed in the order in which they were displayed before you selected the check box.

Procedure
1. Go to the Catalog Attributes window.
You can go to the Catalog Attributes window either from the catalog console or by right-clicking a catalog in the left pane.
2. Select the Use Ordering check box from the Catalog Attributes window.
3. Right-click the item or category that you would like to move to a different position.
4. Click Cut from the menu.
5. Right-click the item before which the earlier item needs to be displayed.
6. Click Insert Before from the menu.
The Insert Before option is enabled because the Use Ordering check box is selected.
7. Click OK when you are asked to confirm the reordering.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Viewing relationships
You can view relationships between items in the Linked Items and Related Items tabs in the single-edit screen. The Linked Items tab shows the items that link to the given
item. The Related Items tab shows the relationships of the current given item.

Linked items
The Linked Items tab displays the items that link to this item using link attributes. The link attribute exists on the items that are shown in the tab and the attributes point
to this item as the target of the link. Within this tab, you can view the catalog of the source item of the link, the primary key of the linked item, and the display name of the
linked item. You can also open any of the linked items in a single-edit popup screen to view more details about that item.

Related items
The Related Items tab displays the items that the current item is related to through the relationship type attribute. The relationship attribute exists on the current item.
Within this tab, you can view the catalog of the related item, the relationship (which is the name of the relationship attribute), the primary key, and the display name of the
related item. You can also open any of the related items in a single-edit popup screen to view more details about that item.
Consider the following limitations with viewing the related items:

Any relationships that have just been set in the Attributes tab but not yet saved in the item will not show up in the Related Items tab.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Working with collaboration areas


A collaboration area is an area in which a group of users can work on certain sets of attributes for an entry (an item or category). A collaboration area has an associated
source container, which is either a catalog or a hierarchy, and it has an associated workflow, which specifies the steps and the flow of steps in the collaboration area.



Collaboration area users

An admin user is a user who has administrator privileges and is therefore authorized to take any action that is subject to constraint by the general set of privileges. An admin user is analogous to the root user in a UNIX system.

A collaboration area may have one or more administrative users designated for it. They must already be users in Product Master. Being an admin user of a
collaboration area does not confer any extra privileges on the user outside the collaboration area. However, only designated users may perform certain operations on a
collaboration area, such as moving entries in a Fixit step to some other disposition. That is, even an admin user may not perform such actions in a collaboration area
unless such a user is a designated admin user of the collaboration area.

A designated admin user of a collaboration area may edit its attributes, for example, such a user may add another admin user. An admin user may also manipulate entries
in the collaboration area.

Within the collaboration area, you work with attributes of entries, for example, the price and description attributes of a shirt. You check out entries to the collaboration
area. When you check out an entry, the attributes of that entry that are associated with the workflow (for example, the price and description attributes), are locked in the
source container (catalog or hierarchy). The attributes remain locked until the entry completes the workflow. Any modifications that you make in a collaboration area to
the attributes of an entry are made only to the copy of the corresponding entry in the collaboration area and are copied back into the source container when the workflow
completes, and then the attributes are unlocked.

After you update the appropriate attributes for an entry, you specify that you are done with the step. The collaboration area might have more than one button to choose from, such as Approve and Reject, to indicate that you are done. Based on the exit value for that button, the entry moves to the next step as defined in the workflow. Each button might be assigned to numerous next steps. For example, the Approve button might send the entry to three other steps, rather than to just one step, as pictured in the sketch that follows.
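
The fan-out from a button's exit value to one or more next steps can be pictured as a simple mapping. The following Java sketch is purely illustrative; the step and button names are invented, and this is not the workflow engine's actual data model.

import java.util.List;
import java.util.Map;

public final class StepTransitions {
    public static void main(String[] args) {
        // Each exit value (button) on a step maps to one or more next steps.
        Map<String, List<String>> nextStepsByExitValue = Map.of(
                "Approve", List.of("Dimension", "Price", "Audit"), // one button, three next steps
                "Reject", List.of("Fixit"));

        String pressed = "Approve";
        nextStepsByExitValue.get(pressed)
                .forEach(step -> System.out.println("Entry moves to step: " + step));
    }
}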
Important: Ensure that you open the collaboration area and click Refresh whenever attributes (for example, catalog scripts or link attributes) are modified on the
associated source catalog.

An entry moving through a collaboration area


For a clothing store that sells shirts, each type of shirt needs to have a description, dimension for shipping, and a price for the product. For this scenario, the item in the
collaboration area is a specific type of shirt.

This workflow is a simple three-step workflow:

Step 1 is the Description step.
Step 2 is the Dimension step.
Step 3 is the Price step.

This workflow has three performers. Mary is responsible for providing the description of the shirt, Bob is responsible for providing the dimensions for shipping, and Joe is
responsible for specifying a price for the shirt.
For this workflow, the collaboration area is called the Master data refinement collaboration area.

To move the shirt through a collaboration area:

1. Mary goes to the catalog that has the item. She opens the item and checks it out by clicking the Check out button and specifying the Master data refinement
collaboration area where the item goes. Because Mary chose the Master data refinement collaboration area, three attributes of that item are now locked: the
description, dimension, and price attributes. The item is checked out to the collaboration area, however, the three attributes (description, dimension, and price) are
enabled for editing. The item is checked out to the Master data refinement collaboration area. The item is now in the first step in the collaboration area.
2. Mary clicks Collaboration Manager > Collaboration Areas > Collaboration Area Console to view the Master data refinement collaboration area. The item is listed in
the collaboration area under the Description step. Mary selects the item, provides a description, and clicks the Save button. When Mary clicks the Save button, she
sees a message that verifies whether what she provided for the description is acceptable. For example, if Mary typed a description that exceeded the character limit
for that attribute, a validation error appears. Mary must correct the problem before the item can move to the next step. Mary then clicks the button at the bottom of
the screen to move the item to the next step.
3. Bob goes to the Collaboration Area Console where he selects the Master data refinement collaboration area, and a list of steps for the workflow displays. The item
is listed in the collaboration area under the Dimension step. Bob selects the item, provides the dimensions for shipping, and clicks the Save button. Bob then clicks
the button at the bottom of the screen to move the item to the final step.
4. Joe goes to the Collaboration Area Console where he selects the Master data refinement collaboration area from the list. A step for the workflow displays. The item
is listed in the collaboration area under the Price step. Joe selects the item, provides a price, and clicks the Save button. Joe then clicks the button at the bottom of
the screen to indicate that he is done. The item is now checked back into the source container. This means that the source item's former description, dimension, and price attribute values are now replaced by the values that they have in the collaboration area. The item is no longer present in the collaboration area after it is successfully checked in.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Rich text editor overview


You use the rich text editor to customize the format, colors, and other features of text that is associated with an attribute of type Rich Text. You can use the native rich text editor that is shipped with the product, or you can use a rich text editor of your choice.

You can enrich master data using a rich text editor by editing an object as it would be presented.

Depending on how you want to use a rich text editor, there are multiple ways to access it. In the Simplified Single Edit screen, you can click the F2 button. In the Multi Edit screen, the Single Edit screen, and the Rich Edit Single Edit screen, the rich text editor opens as a pop-up when you focus on the field or when you double-click the entry.
Note: Ensure that the rich text attribute values are valid HTML content. If invalid HTML content is provided, the content might not display correctly.
Note: If the value of a rich text attribute is specified through the editor that is provided in the product graphical user interface, the HTML content is always of the correct form. Invalid HTML content can be introduced only if the value of the rich text attribute is set through the script operations or a Java extension point. In such situations, you need to correct the script or the Java program to ensure that HTML of the correct form is used. A hedged sketch of one way to guard such values follows.
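
For example, a script or Java extension point could guard rich text values before saving them. The following sketch assumes the open-source jsoup library (version 1.14 or later, where Whitelist became Safelist); it is one possible approach, not a product API, and the sample values are invented.

import org.jsoup.Jsoup;
import org.jsoup.safety.Safelist;

public final class RichTextGuard {
    public static void main(String[] args) {
        // Reject markup that uses tags outside a safelist ...
        String suspect = "<p>Red <b>shirt</b> <script>alert(1)</script>";
        System.out.println(Jsoup.isValid(suspect, Safelist.relaxed())); // false

        // ... or normalize markup into well-formed HTML (unclosed tags are closed)
        // before setting it as the rich text attribute value.
        String cleaned = Jsoup.clean("<p>Red <b>shirt", Safelist.relaxed());
        System.out.println(cleaned); // prints well-formed HTML, e.g. <p>Red <b>shirt</b></p>
    }
}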

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)



Running and managing jobs
Running and managing jobs enables you to automate when a report, import, or export is run.

Before you begin


To run or manage a job, ensure that you have created a report, import, or export and that you have scheduled a job for that object.

Running a report
You can easily create a report structure for analyzing and reporting progress and results, and then run many instances of it against different catalogs.
Running an import
An import is configured manually and can then run on a scheduled or on-demand basis.
Running an export
After the export job is configured, it can be generated manually or automatically through the scheduler. If the job is dependent on approval, the approving authority
must accept the job before it can be generated.
Scheduling jobs
A job is an import, export, or report. You can specify multiple schedules for a single job. The Jobs Console enables you to disable a job, compare jobs, view the
status of a job, or update schedule information from within the console.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Running a report
You can easily create a report structure for analyzing and reporting progress and results, and then run many instances of it against different catalogs.

Before you begin


Ensure that you have created a report.

About this task


You cannot run the same report multiple times in parallel. As a workaround, create multiple reports with different names. These "different reports" (different by name) can then be run in parallel.
If Java API-based extension point classes are used to implement the report, use different extension point classes to implement the different reports. These classes can possibly have the same implementation code.

Procedure
1. Click Product Manager > Reports > Reports Console. A list of reports displays.
2. Click the timer icon under the Schedule column.
3. Click the Job Details button.
4. Click the + button and select a schedule to run your report.

Results
Your report will run during the scheduled time.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Running an import
An import is configured manually and can then run on a scheduled or on-demand basis.

Before you begin


Ensure that you have created an import.

Procedure
1. Click Collaboration Manager > Imports > Import Console. A list of imports displays.
2. Click the timer icon under the Load column.



3. Click the Job Details button.
4. Click the + button and select a schedule to run your import.

Results
Your import will run during the scheduled time.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Running an export
After the export job is configured, it can be generated manually or automatically through the scheduler. If the job is dependent on approval, the approving authority must
accept the job before it can be generated.

Before you begin


Ensure that you have created an export.

Procedure
1. Click Collaboration Manager > Exports > Export Console. A list of exports displays.
2. Click the timer icon under the Job Info column.
3. Click the Job Details button.
4. Click the + button and select a schedule to run your export.

Results
Your export runs during the scheduled time. After the export job completes, check the output, warnings, or error files in the following docStore path:
/archives/ctg/generated/{destination_spec used}/{source container used}/{date ran}

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Scheduling jobs
A job is an import, export, or report. You can specify multiple schedules for a single job. The Jobs Console enables you to disable a job, compare jobs, view the status of a
job, or update schedule information from within the console.

Before you begin


To schedule a job, you must first create a report, import, or export.

Procedure
1. Click Data Model Manager > Scheduler > Jobs Console.
2. Optional: Search for the job based on either the creator of the job or the description of the job.
a. Specify a search function.
b. Provide a search string and click the arrow in the search string box. Alternatively, you can specify a * as a wildcard and click the blue and white arrow to
display everything. Your results display.
3. Select a job that is marked No Associated Schedules under the Schedule Information column.
4. Click the timer icon or the + button under the Action column to set the schedule for that particular job.
5. Manage the jobs. You can:
Click the description link to view the details of the job in a calendar view.
Click the schedule information link to view the details of the schedule for the job.
Click the clock icon under the Action column to run a job.
Click the + to update the schedule information.
Click the x to delete a job.
Click the compare button to compare two different jobs.

Results
A description, schedule information, and action appear for each job. You can click on any one of these items to view detailed information for that particular job. Once a
schedule is set for a particular object, you will see x associated schedules display under the Schedule Information column, where x is the number of schedules associated
with that object.



IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Synchronizing product data (GDS)


You can synchronize product data with Global Data Synchronization (GDS).

You can perform these tasks in either user interface or through scripting.

Before you begin


Ensure that you have completed the required configuration. For more information, see Configuring GDS feature and Post-installation instructions.

Navigating Supply Side GDS pages


You can access the GDS feature in the Persona-based UI through the GDS Persona. GDS supports the GDS Supply Editor role that gets loaded when you run the
loadGDSDatamodel script.
Creating master data by using Supply Side GDS
You need to configure the GDS Persona - Supply Side after the installation.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Navigating Supply Side GDS pages


You can access the GDS feature in the Persona-based UI through the GDS Persona. GDS supports the GDS Supply Editor role that gets loaded when you run the
loadGDSDatamodel script.

GDS page role access


Table 1. Global Data Synchronization feature
Of the available roles (Admin, Basic, Catalog Manager, Category Manager, Content Editor, Digital Asset Manager, Full Admin, GDS Supply Editor, Merchandise Manager, Service Account, Solution Developer, and Vendor), the GDS Supply Editor role has access to the following subfeatures:

Manage Items
Synchronization Reports

Logging in to the GDS


Proceed as follows:

1. In your web browser, type the URL provided by your administrator. The user interface login screen opens. If the page fails to open, contact your administrator.
2. Type the details for the Username, Password, and Company fields, and click Login. The home page opens.

Home page
The Home page for the Full Admin role displays following specific collaboration area cards for GDS.

Item Enrichment
Trading Partner Enrichment

For the GDS Supply Editor role, the Manage Items page is the home page.

Manage items (GDS)


The Manage items page has the Quick links and Summary sections.
Synchronization reports (GDS)
You can access the Synchronization reports page through the Left pane > Global Data Synchronization > Synchronization reports.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Manage items (GDS)


The Manage items page has the Quick links and Summary sections.



Quick links
Use the quick links to perform the following tasks.

Create item
Click to open Create item window. Specify the following details and click Create item.

Global Trade Item Number (GTIN) - Specify the 14-digit number that is used to identify trade items. (A check-digit validation sketch follows the Quick links section.)
Information Provider - Select an appropriate information provider.
Target Market - Select an appropriate target market.
Select Product Type - Select an appropriate product type.
Select Internal Classification Code - Select an appropriate internal classification code.
Internal Classification Description - Auto populates based on the internal classification code.
Global Item Classification Description - Select an appropriate global item classification description.
Select Global Item Classification Code - Auto populates based on the global item classification description.

Register item, Edit GDS item, Edit GDS link, Publish GDS item, Edit Trading Partner
Click to open a saved template on the Search page with pre-defined, but editable, search specifications. Click Search to see a list of the items for further processing.
GDS Explorer
Click to open the Search page, which enables you to search for results based on your specified criteria.
Create New Partner
Click to open Create Trading Partner window. Specify the following details and click Create trading partner.

Global Location Number - Specify the 13-digit number that is used to identify a trade location.
GLN Identifier - Select an appropriate Global Location Number (GLN) identifier.
Trading Partner Name - Specify the name of the trading partner.
Trading Partner Country - Specify the country of the trading partner.

Summary
The Summary section displays the following details. Click each status type to see more details.

Items
Displays the total number of items and a breakdown of the items based on the following status:

New items
Modified items
Registered items
Published items
Unfinished items

Responses received
Displays the total number of responses that were received in the past 15 days and a breakdown of the responses based on the following status:

Item add
Item modify
Item link
Item unlink
Item publication
Item confirmation

Errors
Displays the total number of errors that were received in the past 15 days and a breakdown of the errors based on the following status:

Validation
Sync

Saved Reports
Displays the search reports with pre-defined specifications.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Synchronization reports (GDS)


You can access the Synchronization reports page through the Left pane > Global Data Synchronization > Synchronization reports.

Specify the following details, and click Search to see specific synchronization reports.

Field Description
From date Specify the from date for the synchronization report.
To date Specify the to date for the synchronization report.
Internal item category Select an appropriate internal item category.
Target Market (TM) Select an appropriate target market.
Information Provider (IP) Select an appropriate information provider.



IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Creating master data by using Supply Side GDS


You need to configure the GDS Persona - Supply Side after the installation.

Load GDS data model


You need to load the GDS data model to configure the GDS Persona - Supply Side; see Post-installation instructions. These steps are to be performed after installation. Any configuration changes that are related to properties or the env_settings.ini file require a restart of the server. Complete the configuration fully to avoid multiple restarts.
Configure Market Group
Market groups that are defined in 1WorldSync can be used when you publish items. To use these groups, you need to configure the market groups of interest in the Market Group Catalog. The members (trading partners) that are associated with a market group must also be added in the market group definition. This configuration is useful to identify the partners to whom items were published and to record their responses (Received, Synchronized, and so on). It is recommended to configure, in the Market Group Catalog, all the market groups that are to be used during publishing of items. If you do not intend to use market groups during publishing, you can skip this configuration.

Troubleshooting
Following are some troubleshooting tips.

mdmce-env.zip
The file is available in the following folder and loaded in the environment where the GDS persona is being configured.
$TOP/mdmui/env-export/mdmenv
LoadGDSDatamodel Script
Some database connection-related errors can be seen; they are expected and should be ignored. Check the errors while importing the MDM objects.
Example

+++++ Loading look up specs and tables


value of top is : D:/mdm/120Patch/ccd_src
Note: \tmp\19100\WPCenvironmentcatalogcontentimport15902150569300.java uses unchecked or unsafe operations.
Note: Recompile with -Xlint:unchecked for details.
Caught Exception in processing: CWPCM0298E:There was an error while executing the batch statement.
Exception in thread "Thread-1673" java.lang.RuntimeException: CWPCM0298E:There was an error while executing the batch
statement.
at com.ibm.ccd.connectivity.common.ContentSaveConsumer.run(ContentSaveConsumer.java:68)
at java.lang.Thread.run(Thread.java:811)
Caused by: CWPCM0298E:There was an error while executing the batch statement.
at com.ibm.ccd.common.context.common.DBContext.commitPrepBatch(DBContext.java:838)
at com.ibm.ccd.common.context.common.DBContext.commitBatch(DBContext.java:890)
at com.ibm.ccd.content.common.Item.BOFtoDB(Item.java:2642)
at com.ibm.ccd.content.common.Item.insertAll(Item.java:341)
at com.ibm.ccd.element.common.BaseObject.persist(BaseObject.java:1315)
at com.ibm.ccd.element.common.PersistenceManager.persistObject(PersistenceManager.java:77)
at com.ibm.ccd.element.common.PersistenceManager.persistObjectBasedOnInBatch(PersistenceManager.java:114)
at com.ibm.ccd.element.common.PersistenceManager.persistObject(PersistenceManager.java:144)
at com.ibm.ccd.content.common.Item.toDB(Item.java:2563)
at com.ibm.ccd.content.common.Item.toDB(Item.java:2504)
at com.ibm.ccd.common.script.ScriptOperationsItem.insert(ScriptOperationsItem.java:1192)
at com.ibm.ccd.common.script.ScriptOperationsItem.saveCtgItem(ScriptOperationsItem.java:1125)
at com.ibm.ccd.common.interpreter.operation.generated.GenSaveCtgItemOperation.execute(GenSaveCtgItemOperation.java:74)
at com.ibm.ccd.connectivity.common.ContentSaveConsumer.run(ContentSaveConsumer.java:62)
... 1 more
Caused by: com.ibm.db2.jcc.am.SqlIntegrityConstraintViolationException: Error for batch element #1: DB2 SQL Error:
SQLCODE=-803, SQLSTATE=23505, SQLERRMC=2;PIMDBUSR.TCTG_ITM_ITEM, DRIVER=4.19.49
at com.ibm.db2.jcc.am.kd.a(Unknown Source)
at com.ibm.db2.jcc.am.kd.a(Unknown Source)
at com.ibm.db2.jcc.am.kd.a(Unknown Source)
at com.ibm.db2.jcc.t4.bb.a(Unknown Source)
at com.ibm.db2.jcc.t4.bb.a(Unknown Source)
at com.ibm.db2.jcc.t4.p.a(Unknown Source)
at com.ibm.db2.jcc.t4.wb.a(Unknown Source)
at com.ibm.db2.jcc.am.gp.a(Unknown Source)
at com.ibm.db2.jcc.am.gp.d(Unknown Source)
at com.ibm.db2.jcc.am.gp.a(Unknown Source)
at com.ibm.db2.jcc.am.gp.c(Unknown Source)
at com.ibm.db2.jcc.am.gp.executeBatch(Unknown Source)
at org.apache.commons.dbcp2.DelegatingStatement.executeBatch(DelegatingStatement.java:345)
at org.apache.commons.dbcp2.DelegatingStatement.executeBatch(DelegatingStatement.java:345)
at com.ibm.ccd.common.context.common.DBContext.commitPrepBatch(DBContext.java:815)
... 14 more

Data model updates


Verify all the objects that are present in the environment, including Catalog, Collaboration area, and Workflow, before beginning the business flow.

Workflow out script is attached to the workflow steps.


Custom property is attached to the GDS Product catalog.

IBM MQ-related
If the mq.xml file is not generated in the $TOP/etc/default folder, create a new file by copying the mq.xml.default file, and then run the configureEnv command in the $TOP/bin folder, for example:
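A minimal sketch of the recovery steps follows; the configureEnv.sh script name is an assumption based on the command that is named above:

cd $TOP/etc/default
cp mq.xml.default mq.xml      # restore the missing file from its default template
cd $TOP/bin
./configureEnv.sh             # regenerate the environment configuration (assumed script name)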
gdsConfig.properties file
The file should be present in the $TOP/mdmui/dynamic/mdmrest folder for GDS to work.
gds.properties file
The file should be present in the $TOP/etc/default folder. After the build is deployed, you should verify the presence of the file and its content.
gdsSupplySideTradeItemFunctions.properties file
The file should be present in the $TOP/etc/default folder. After the build is deployed, you should verify the presence of the file. If the file is not present in the folder, you can create a new file by copying the gdsSupplySideTradeItemFunctions.properties.default file, for example:
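A minimal sketch of restoring the file from its default template, using the folder and file names given above:

cd $TOP/etc/default
cp gdsSupplySideTradeItemFunctions.properties.default gdsSupplySideTradeItemFunctions.properties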

Related information
GS1 GDSN
GS1 US Resource Library

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Administering
Administering in Product Master involves database administrators, system administrators, Product Master Server system administrators, performance optimization, and
solution deployment.

Administering database
You must set up and maintain a database to store all of the Product Master data. You can use an Oracle database or an IBM® Db2® database, which you generally set up on a separate server.
Administering system
System administrators can integrate LDAP and perform hardware performance tuning.
Administering Product Master system
Product Master system administrators can manage company data models, users, roles, and system services; define clusters; and monitor the system.
Performance tuning
When you are planning for performance, you need to account for the project plan, use cases, timing, hardware, and tracking efforts.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Administering database
You must set up and maintain a database to store all of the Product Master data. You can use an Oracle database or an IBM® Db2® database, which you generally set up on a separate server.

The performance and reliability of Product Master is highly dependent on a well-managed database. All the data that you enter in the system is stored in the underlying
database, therefore it is critical to ensure optimal database performance and availability. You can optimize the database server by configuring the database lock wait,
buffer pool, and memory parameters to meet the system performance requirements.

Database server performance checklist


Use the database server performance checklist to resolve common issues with your database.
The following list identifies possible resolutions that can help you to identify the source of your performance database issues:

Ensure that the server where the database is installed has the appropriate capacity to handle the load and is not shared with other systems.
Check and make sure that the database statistics are up to date.
Check memory allocation to make sure that there are no unnecessary disk reads.
Check to see whether the database needs defragmentation.

Backup and recovery


Data in the Product Master can be rendered unusable because of hardware or software failure. To prevent the potential loss of data, set up a backup and recovery plan, which includes scheduling regular backups. You must back up data whenever a large data feed file is imported or a large amount of data is entered or modified.
Database maintenance tasks
You must monitor and optimize the database server to ensure database availability and performance. Maintaining Db2 and Oracle databases involves updating
statistics, monitoring database, server, and space utilization, and planning backup and recovery strategies.
Administering IBM Db2 database
As the database administrator, you must set up and maintain your IBM Db2® database to store all of your system data and ensure that you have configured the
database for optimal performance.
Administering Oracle Database
As the database administrator, you must set up and maintain your Oracle Database to store all of your system data and ensure that you configure the database for
optimal performance.
Database administrator troubleshooting responsibilities
The database administrator takes the initiative to resolve problems and ensures that the application is running well with respect to the database. They are familiar with the schema, including all Product Master tables and indexes, and with the create scripts in $TOP/src/db on the application server.

IBM Product Master 12.0 Fix Pack 8


Operating Systems: AIX, Linux, and Windows (Workbench only)

Backup and recovery


Data in the Product Master can be rendered unusable because of hardware or software failure. To prevent the potential loss of data, set up a backup and recovery plan, which includes scheduling regular backups. You must back up data whenever a large data feed file is imported or a large amount of data is entered or modified.

The process of backing up and recovering data consists of the following tasks:

File system backup and recovery


Database backup and recovery

File system backup and recovery


To back up your system, back up the $TOP folder as defined in the common.properties file. Schedule daily backups because the files in these folders change frequently. You must have a backup schedule that consists of a regular full backup and daily incremental backups.
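For example, a minimal sketch of such a schedule; the archive location /backup/pim and the use of tar are assumptions, and TOP must be set in the shell that runs the commands:

# Weekly full backup of the $TOP folder (run from a scheduler of your choice)
tar -czf /backup/pim/top_full_$(date +%Y%m%d).tar.gz "$TOP"
# Daily incremental backup: archive only the files that changed in the last day
find "$TOP" -type f -mtime -1 -print0 | tar -czf /backup/pim/top_incr_$(date +%Y%m%d).tar.gz --null -T -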

To recover your system and the supporting files, you must restore the missing files or folders to their original locations, and then start the Product Master.

Database backup and recovery


You can use several methods to back up your database, such as exports, hot backups, cold backups, or mirroring. Use any method that is convenient for you to back up the database schema, as defined in the common.properties file.

To ensure data safety, you must have a well-defined backup strategy in place that backs up data at regular intervals, and whenever a large feed file is imported or a large amount of data is modified.
Important: Always check the logs and status to make sure that the backups and restores are completed successfully with no errors.
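For example, a minimal sketch of an offline Db2 backup and restore; the database name wpcdb and the target directory /backup/db are assumptions:

db2 backup database wpcdb to /backup/db       # offline backup; run while Product Master is shut down
db2 list history backup all for wpcdb         # check the status of the backup in the recovery history
db2 restore database wpcdb from /backup/db    # restore from the most recent backup image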

Exporting and importing the database schema


You can export the IBM Product Master schema to a .tar file by using the db2_export.sh script.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Exporting and importing the database schema


You can export the IBM® Product Master schema to a .tar file by using the db2_export.sh script.

Before you begin


Before you run the db2_export.sh script in the $TOP/src/db/scripts/backup directory, you must:

Shut down the Product Master application that is connected to the database schema.
Ensure that the backup directory exists on the local disks on the database server and does not contain DB2® data files.
Ensure that the owner of the backup directory is the Db2 instance owner on the database server.
Ensure that you log in to the database server as the Db2 instance owner to copy and run the db2_export.sh script.
Important: You must not run the script from the application server.
If you copy db2_export.sh script from a Windows-based computer to the database server, ensure that the script is saved in the UNIX format. If the file is not in
the UNIX format, the system might introduce end-of-line characters that cause the script to fail.

About this task


Running the db2_export.sh script generates the SQL scripts that are required to create tables, indexes, and sequences with their current values, and stores them in a
.tar file in the backup directory. The .tar file is useful for maintaining a backup of your database schema and for creating a similar environment on another computer. You
can also upload this file when you need to send a copy of your database schema to IBM Support for resolution of any issue.

Procedure
Run the db2_export.sh shell script that is in the $TOP/src/db/scripts/backup folder.
For example, to export the wpcdb database schema to the /u01/backup directory in a file that is called july10bkp, you can specify the following command:

db2_export.sh --db=wpcdb --dbuser=wpc1 --dbpassword=passwd --backupdir=/u01/backup --bkpfile=july10bkp

What to do next
You can use the db2_fast_import.sh script to import the IBM Product Master schema from the .tar file that is created by running the db2_export.sh script.
Importing the schema is useful for restoring a backup of your database schema and for creating a similar environment on another computer. Before you run the
db2_fast_import.sh script, you must:


Ensure that there are no errors at the time of export. If there are any errors, you must rectify the errors, run the db2_export.sh script again, and then try the
import.
Ensure that there is enough space in the table spaces before you import the database schema. You can use the db2 list tablespaces show detail command
to check the available space in the table spaces. If required, you can add more space.
Ensure that the db2_fast_import.sh script is saved in the UNIX format if you copy it from a Windows-based computer to the database server. If it is not in the
UNIX format, the system might introduce end-of-line characters that cause the script to fail.
Run this script only when you log in as the Db2 database administrator.
Optional: Create a new empty database user if you want to import to a new database user.

CAUTION:
If you import into an existing schema with tables, all existing tables are dropped and data is lost.
You can import the schema into the same or a different database under the same schema name or a different schema name.

1. Run the db2_fast_import.sh shell script that is in the $TOP/src/db/scripts/backup folder.


For example, to import the trigodev database schema from the /u01/backup/july10/july10bkp directory to the pimdb database, you can specify the following
command:

db2_fast_import.sh --db=pimdb --dbuser=trigodev --dbpassword=trigodev --backupdir=/u01/backup/july10/july10bkp

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Database maintenance tasks


You must monitor and optimize the database server to ensure database availability and performance. Maintaining DB2® and Oracle databases involves updating statistics,
monitoring database, server, and space utilization, and planning backup and recovery strategies.

Apply the latest database server and client software fix pack on the database and application
server
Always apply the latest fix pack on the database and application server because fix packs contain bug fixes and performance enhancements. Both the database and client must be at the same fix pack level to avoid issues that are related to database version mismatch.
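For example, on Db2 you can compare the fix pack level of the server and the client with the db2level command:

# Run on the database server and again on the application server (Db2 client);
# the reported build levels, for example "DB2 v11.5.8.0", should match
db2level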

Tuning the database for performance


You must ensure that the database is tuned properly. Databases are highly adjustable, and you can modify them or set up monitoring to increase performance. Tuning the
database to improve response time can dramatically decrease the amount of time that users spend waiting for operations to finish running, enable the implementation to
keep up with the speed at which business is conducted, and optimize resource usage to ensure that existing hardware investments are used efficiently.

Reorganizing and generating database statistics


The database optimizer requires statistical metadata information about its user tables to choose the best way to access data. The optimizer requires up-to-date statistical
metadata information, including number of rows and cardinality.
The database administrator must manage and organize tables and indexes of your database schema to find and avoid potential problems or issues. Most common issues
are data and index fragmentation, row migration, and old statistics.

Managing old object versions


With IBM® Product Master versioning mechanism, many old versions can accumulate in IBM Product Master internal tables over time (especially in icm, ita, itd, itm, nod,
noa tables).
With increased data volume along with potential fragmentation of data, performance to retrieve data might decrease over time dramatically. It is important to maintain old
versions regularly.

Two types of jobs are provided: one that estimates the number of old version rows and another that deletes old version rows.

Managing database connections


You can define the maximum number of connection pools for each of your services to optimize the balance between your resources and the number of required
connections that are made to either your Db2 or Oracle Database.
Your database must be configured with enough connections to meet the demands of all implementation processes in both typical and clustered environments for each of
your services: admin, application server, scheduler service, event processor, queue manager, and workflow engine.

Checking and deleting old object versions with scripts


You can get size estimates or delete old object versions and collaboration area history in the various tables of IBM Product Master using shell scripts.
Monitoring databases
Monitoring the state of the database ensures better performance and availability of the database. The process of monitoring databases involves checking the
availability of free space for all the table spaces and checking for database and server errors.
Dropping temporary tables
You can drop all data from your temporary tables and indexes that hold data because a job process failed to complete. Run the following scripts weekly or whenever
you have planned maintenance or downtime.

IBM Product Master 12.0 Fix Pack 8


Operating Systems: AIX, Linux, and Windows (Workbench only)

Checking and deleting old object versions with scripts


You can get size estimates or delete old object versions and collaboration area history in the various tables of IBM® Product Master using shell scripts.

Before you begin


Before you delete old object versions and collaboration area history, you must back up the database.

About this task


Old versions are created for objects like catalogs, items, and categories whenever the objects are exported, imported, or modified. You can use
delete_old_versions.sh shell script to delete old versions of objects that are no longer needed. Deleting old object versions restores the database storage space and
therefore improves database performance.
You can use the estimate_old_versions.sh shell script to get a size estimate for the number of old object versions in various Product Master database tables.

When the execution of the delete_old_versions.sh script completes, you are prompted whether you want to remove Collaboration Area History. The
delete_collab_history.sh script is merged with the delete_old_versions.sh script. The collaboration history records are created for objects like items and categories
whenever the objects go through a workflow. You can delete collaboration history records that are no longer needed with the delete_old_versions.sh script. Deleting
collaboration history records restores the database storage space and thus improves database performance.

These scripts can be run while Product Master is running. To avoid any potential impact on performance, you should run these scripts when the Product Master application is lightly used, for example at night or on weekends.

Procedure
Run the estimate_old_versions.sh shell script or the delete_old_versions.sh shell script that are in the $TOP/src/maintenance/old_versions directory.
Collaboration area history takes the code and end_date parameters from the delete_old_versions.sh script. For more information about the parameters and usage, see the delete_old_versions script and estimate_old_versions script files.
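A minimal sketch of a maintenance run follows; the assumption that both scripts can be started without arguments is illustrative, and the exact parameters (such as code and end_date) are documented in the script files named above:

cd $TOP/src/maintenance/old_versions
./estimate_old_versions.sh    # first, estimate how many old version rows exist (assumed no-argument start)
./delete_old_versions.sh      # then delete them; you are prompted about Collaboration Area History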

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Monitoring databases
Monitoring the state of the database ensures better performance and availability of the database. The process of monitoring databases involves checking the availability of
free space for all the table spaces and checking for database and server errors.

To prevent critical database error messages or data loss, ensure that there is sufficient free space for all of the table spaces, especially the USERS, and INDX table spaces.
You can check the log files that are generated by the database regularly to check if the problems that are encountered by your system are related to database failure.

Typical monitoring tasks


You must constantly monitor the database to ensure that it is up and running and is not bogged down with heavy jobs that slow performance. The database administrator
can receive notification alerts when system errors occur and when resources are being used excessively. The database administrator must also run their own checklists of
monitoring tasks, which can include the following:

Monitor free space in table spaces (especially USERS and INDX because they are most likely to run out of space)
Check log files for database errors
Check for free disk space and disk usage (know when the database is reaching a shortage of free disk space)
Monitor server resources usage and errors (processor, memory, swapping, disk I/O, network I/O)
Check the status of periodic backups
Monitor rollback usage (when rollbacks occur and if they take too long)
Set alerts (for example, when unique constraints are violated, locks are held for a long time or deadlocks occur, queries run slowly, or resources are used
excessively)

Checking for free space in table spaces


You must constantly monitor the available free space of all table spaces, especially the USERS, and the INDX table spaces, because they are most likely to run out of
space. You can check the availability of free space at regular intervals that depend on the initial configuration of free space and the workload. Sufficient free space
prevents critical error messages and data loss.

Listing table space usage in DB2®


You can check for table space usage from the command line by using any of the following commands:

db2 connect to dbname
db2 list tablespaces show detail
db2pd -db dbname -tablespaces


Listing table space usage in Oracle
You can check table space usage in Oracle by using the Oracle Enterprise Manager Console.
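If you prefer the command line over the Enterprise Manager Console, a standard query against the DBA_FREE_SPACE view gives a quick sketch of the free space per table space (run it as a user with access to the DBA views):

SELECT tablespace_name, ROUND(SUM(bytes)/1024/1024) AS free_mb
FROM   dba_free_space
GROUP  BY tablespace_name
ORDER  BY free_mb;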

Remember: Always check with your database administrator on the health of the table spaces before you run a large import.

Monitoring slow running queries


There are several reasons for slow running queries, including table fragmentation, data volume, application server process, or server load. You can monitor the db.log log
file for DELAYED QUERY log entries to locate and determine if there are slow running queries. Each service has its own db.log log file. Each of the db.log files is located in
their respective $TOP/logs/service/service_Name directories, where service_Name is the unique service name of one of your services.

To ensure that you have current statistics in the db.log file, you should update your database statistics before viewing your service's db.log file to check for slow running
queries. For more information about updating your database statistics, see Reorganizing Db2 databases.

You must enable logging for delayed queries by setting the profiling_query_delay_threshold parameter in the common.properties file.
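For example, a minimal sketch of the setting in the common.properties file; the threshold value, in milliseconds, is an assumption that you should tune for your environment:

# log any query that takes longer than 300 ms as a DELAYED QUERY entry (assumed value)
profiling_query_delay_threshold=300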

Example
This is an example of a DELAYED QUERY log entry in the db.log file:

2009-01-23 15:27:08,961 [main] WARN com.ibm.ccd.common.db.Query - DELAYED QUERY (341 ms)

Checking for database errors


As the database administrator, you must check the log files that are provided by Oracle or DB2 regularly. Many Product Master problems might have their root in failures in the underlying databases. You can find all the DB2 log files in the DB2 instance directory under sqllib/db2dump. You can also set up alerts to be notified when unique constraints are violated, when locks are held for a long time or deadlocks occur, and when queries start to run slowly. You can find all Oracle error messages in the $ORACLE_BASE/admin/<oracle SID> directory.

For more information about monitoring your Oracle database, see:

Monitoring and tuning the Oracle 10g database


Oracle 10g database documentation
Oracle 11g database documentation

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Dropping temporary tables


You can drop all data from your temporary tables and indexes that hold data because a job process failed to complete. Run the following scripts weekly or whenever you
have planned maintenance or downtime.

Dropping temporary aggregate tables and indexes


You can drop all of your temporary aggregate tables and indexes of your import and update jobs to restore storage space and speed up database utilities. The drop_temp_agg_tables.pl script starts CLI and SQL for DB2®, or SQLPLUS for Oracle, for the respective backend databases.

1. Run the drop_temp_agg_tables.pl script that is in the $TOP/src/db/schema/util folder:

$TOP/src/db/schema/util/drop_temp_agg_tables.pl

Dropping temporary user log tables


You can drop all of your temporary tables that start with TCTG_ULE_USER_LOG_ENTRY and associated temporary sequences in your database to restore storage space
and speed up database utilities. The drop_log_tables_seq.pl script starts CLI and SQL for Db2, or SQLPLUS for Oracle, for the respective backend
databases.
Important: If you use "User-Defined Logs" in business logic or for auditing, do not run this script. Starting this script deletes all "User-Defined Logs" entries.

1. Run the drop_log_tables_seq.pl script that is in the $TOP/src/db/schema folder:

$TOP/src/db/schema/drop_log_tables_seq.pl

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Administering IBM Db2 database


As the database administrator, you must set up and maintain your IBM® Db2® database to store all of your system data and ensure that you have configured the database
for optimal performance.


Db2 database performance
You can configure the following aspects of Db2 to improve the database performance:

Applying the latest fix packs


Apply the latest Db2 fix packs to the Db2 database on the Db2 server and the Db2 client on the application server because they might contain some bug fixes and
performance enhancements. You can download the latest Db2 fix pack from the Download DB2 Fix Packs page.
Note: The database and client must be at the same fix pack level to avoid any conflicting database version issues.
Tuning the operating system
Tuning the operating system configuration in the areas of memory utilization and file caching can maximize the performance of the Db2 database.
Tuning the storage for performance
You can tune the storage to be able to store and retrieve data quickly and efficiently. You must understand the advantages and disadvantages of choosing a
particular data placement strategy to achieve this goal. The placement of data can directly affect the performance of the Product Master system that uses the
database. The overall goal is to spread the database effectively across as many disks as possible to try to minimize I/O wait.
Tuning the database manager and the database configuration
You can configure parameters in Db2 both at the instance level and the database level.

Db2 connection parameters


To optimize the number of database connections to your Db2 database, determine the number of required connections to the database and add 10 more connections to
that number to act as your buffer connections and account for any Db2 background processes.

Db2 parameters - For more information, see GET DATABASE MANAGER CONFIGURATION command.
MAXAGENTS parameter - For more information, see maxagents - Maximum number of agents configuration parameter.
MAXAPPLS parameter - For more information, see maxappls - Maximum number of active applications configuration parameter.
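For example, a minimal sketch that applies the guidance above; the database name wpcdb and the value 143 (133 required connections plus a buffer of 10) are assumptions:

db2 update dbm cfg using maxagents 143            # instance-wide agent limit (assumed value)
db2 update db cfg for wpcdb using maxappls 143    # per-database limit of active applications
db2 get db cfg for wpcdb | grep -i maxappls       # verify the new setting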

Db2 table spaces


Data that is entered into Product Master gets stored in the underlying database and is organized in different table spaces in the database. Because large volumes of data
transactions occur, you must ensure that there is enough free space in all the table spaces of the database. You can further improve the performance and availability of the
database by monitoring the free space available to table spaces and by using temporary tables.

Configuration parameters for Db2 databases


To ensure optimal performance of your database, you can configure database manager parameters and database configuration parameters.

Db2 server optimization


You can optimize the database server for better response time, output, and availability of the server. To ensure optimal performance, you configure your database lock
wait, memory, and buffer pool parameters to meet the system requirements.
You can maintain optimal performance of a Db2 database in the following ways:

Use sufficient agents for the workload.


Restrict Db2 from closing and opening unnecessary files.
Prevent extended lock waits.
Manage Db2 sort memory conservatively and do not mask sort problems with large SORTHEAP values.
Analyze table access activity and identify tables with unusually high numbers of rows read per transaction or overflow counts.
Analyze the performance characteristics of each table space to improve the performance of the table spaces with the slowest read times, longest write times, highest physical I/O read rates, worst hit ratios, and access attributes that are inconsistent with expectations.
Create multiple buffer pools, and make assignments of table spaces to buffer pools such that access attributes are shared.
Examine Db2 SQL statement Event Monitor information to discover which SQL statements are consuming the largest proportions of computing resources, and take corrective actions.
Reevaluate configuration and physical design settings after high-cost SQL is eliminated.

Db2 database health check


You must check the health of the database systems at regular intervals to ensure high availability of the systems. You can use the Db2 Health Center to monitor the state
of a set of health indicators by defining the warning and threshold values for the indicators. If the current value of a health indicator crosses the acceptable operating
range that is defined by its warning and threshold values, the health monitor generates a health alert.
For example, you can monitor the time that is taken to run database queries. To do so, you can define the time limits within which the queries must be run. If any query
needs more time to be run, the health monitor sends an alert. You can also configure the alert settings so that the alert reaches specific users. Db2 has a set of predefined
values for the health monitors. You can use the Health Center to customize these values as required.

You can perform the following tasks by using the Db2 Health Center:

View the status of the database environment.


View alerts for an instance or a database.
View detailed alert information and the recommended actions.
Configure health monitor settings for an object.
Set the recipients for alerts.
Review alert history for an instance.

Reorganizing Db2 databases


To provide the optimizer with accurate information for determining an optimal access plan, tables and index statistics should be updated periodically. If necessary,
tables and indexes need to be reorganized for optimal performance.
Purging performance profiling data in DB2
You can purge your performance profiling data to restore free disk space on your DB2 database.


Scripts and commands for Db2
You can use database scripts and commands to manage your Db2 database.
Tuning your IBM Db2 database
You must tune your database by increasing the heap size setting for IBM Db2 database server.

Related information
IBM® Db2 Version 11.5
IBM® Db2 Version 11.1

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Reorganizing Db2 databases


To provide the optimizer with accurate information for determining an optimal access plan, tables and index statistics should be updated periodically. If necessary, tables
and indexes need to be reorganized for optimal performance.

Before you begin


Check with your database administrator regarding the database maintenance plan.

About this task


Tables and index statistics can be generated by using one of the following methods.
Note: Use the automatic statistics collection method for continued optimal performance. For more information, see Enabling automatic statistics collection.

Procedure
1. Automatic statistics collection.
Database statistics can be collected automatically by setting the following database configuration parameters:
a. From the database server, by using the DB2® instance owner user ID, run the following commands after you connect to the database.

db2 update db cfg using auto_maint on


db2 update db cfg using auto_runstats on

2. Manual statistics collection.


Database statistics can be collected manually by running the following shell script from Product Master server.
a. If significant update activity has occurred since the last statistic update, run this shell script regularly, at least weekly:

$TOP/src/db/schema/util/analyze_schema.sh

What to do next
Occasionally, it is required to reorganize the database for optimal performance. Db2 provides the REORGCHK utility to report the tables that need reorganization.
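For example, a minimal sketch of such a check; the database name wpcdb is a placeholder, and the table in the REORG step is only an example taken from the Product Master schema:

db2 connect to wpcdb
db2 reorgchk update statistics on table all         # refresh statistics and flag tables that need REORG
db2 reorg table PIMDBUSR.TCTG_ITA_ITEM_ATTRIBUTES   # reorganize one flagged table (example name)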

Related information
Reorg operators
REORGCHK command

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Purging performance profiling data in DB2


You can purge your performance profiling data to restore free disk space on your DB2® database.

Before you begin


Before you can purge your performance profiling data, you must deactivate profiling in IBM® Product Master.

Procedure
1. Shut down your system by running the following commands:


cd $TOP/bin/go
./abort_local.sh

2. Connect to the DB2 database as the DB user by running the following command:

db2 connect to db_Service_Name user db_userName using db_password

Where db_Service_Name is the last value of the db_url parameter, and db_userName and db_password are the respective db_userName and db_password
parameters that are in the common.properties file.
3. Drop and re-create the performance tables by running the ddl_pfm_performance.sql SQL command:

db2 -tvf $TOP/src/db/schema/gen/ddl_pfm_performance.sql

4. Drop and re-create the indexes on the performance tables by running the idx_pfm_performance.sql SQL command:

db2 -tvf $TOP/src/db/schema/gen/idx_pfm_performance.sql

5. Collect your runstats by running the following commands:

cd $TOP/src/bin/db/
./analyze_schema.sh

6. Restart the system by running the following commands:

cd $TOP/bin/go
./start_local.sh

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Scripts and commands for Db2


You can use database scripts and commands to manage your DB2® database.

You can perform the following tasks with the database management scripts and commands for Db2:

Scripts for Db2


analyze_schema.sh
Updates the statistics and generates a report that indicates the indexes that need reorganization.
drop_temp_agg_tables.pl
A script that drops temporary aggregate tables.
drop_log_tables_seq.pl
A script that drops all of your temporary tables that start with TCTG_ULE_USER_LOG_ENTRY and associated temporary sequences in your database to restore storage space. Starts CLI and SQL for Db2.
ddl_pfm_performance.sql
Drops and re-creates the performance tables.
idx_pfm_performance.sql
Drops and re-creates the indexes on the performance tables.
sysinfo.sql
Collects configuration details for the database.

Commands for Db2


Db2 reorgchk
Updates the statistics in your Db2 database and checks at the same time whether reorganization is needed.
Db2 reorg
Reorganizes specific tables.
select count(*) from tctg_ita_item_attributes where ita_next_version_id = 999999999 and ita_company_id = company_ID;
Counts the number of rows in a database table that belong to the latest object version.
select count(*) from tctg_ita_item_attributes where ita_next_version_id < 999999999 and ita_company_id = company_ID;
Counts the total number of rows in a database table that belong to old object versions.
select * from cmp
Retrieves the CompanyID from the CMP table.
select * from ctg
Retrieves the CatalogID from the CTG table.
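A short sketch that ties these commands together; the database name wpcdb, the user pimdbusr, and the company ID 1234 are placeholders:

db2 connect to wpcdb user pimdbusr
# note the company ID for your company in the output of the CMP table
db2 "select * from cmp"
# rows that belong to the latest object versions
db2 "select count(*) from tctg_ita_item_attributes where ita_next_version_id = 999999999 and ita_company_id = 1234"
# rows that belong to old object versions
db2 "select count(*) from tctg_ita_item_attributes where ita_next_version_id < 999999999 and ita_company_id = 1234"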

Related concepts
Dropping temporary tables

Related tasks
Reorganizing Db2 databases
Purging performance profiling data in DB2


Related information
REORG TABLE command

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Tuning your IBM Db2 database


You must tune your database by increasing the heap size settings for the IBM® Db2® database server.

Before you begin


Before you can tune your Db2 database, you must log in as the Db2 administrator.

Procedure
1. Run the following commands to increase the heap size setting.

db2 update database configuration for database <dbname> using applheapsz 8192
db2 update database configuration for database <dbname> using app_ctl_heap_sz 8192
db2 update database configuration for database <dbname> using LOGFILSIZ 4000

Where dbname is the actual name of the Db2 database. You can find the current values of these parameters with the following commands:

db2 get dbm cfg


db2 get db cfg for <dbname>

2. Run the following commands to maintain log conditions.

update db config for <dbname> using logprimary 10;


update db config for <dbname> using logsecond 20;

Note: The previous numbers in Db2 commands are indicative only. You must get the correct parameter values for your environment from your DBA.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Administering Oracle Database


As the database administrator, you must set up and maintain your Oracle Database to store all of your system data and ensure that you configure the database for optimal
performance.

Oracle Database performance


Configure and tune your Oracle Database to ensure optimal performance. You can optimize the database for memory usage, storage allocation, and connections.
Recommendation: Enabling Automatic Shared Memory Management can help in efficiently using system memory.

Oracle connection parameters


Run your Oracle Database in "dedicated server" mode, where you need one database connection (that is, one Oracle process) for each application connection (user process).
In Oracle databases, the maximum number of connections that are allowed by the database is set by using the PROCESSES parameter in the initsid.ora file where sid is
the system ID of your database.

Example
If you have only one Product Master instance that is connected to your database and the instance needs a maximum of 133 connections, then ensure that the PROCESSES
parameter is set to at least 143, which is 10 more than your required maximum. You need 10 extra connections to account for Oracle background processes and some
SQL PLUS connections. In addition, if you add another Product Master instance, which also requires 133 connections to the same database, then you should set the
PROCESSES parameter to at least 276, where 143 accounts for the first instance and an extra 133 connections account for the second Product Master instance.
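For example, a minimal sketch of checking and raising the limit from SQL*Plus; the value 276 follows the calculation above, and the ALTER SYSTEM form assumes that your database uses a server parameter file (with a plain initsid.ora file, edit the PROCESSES parameter directly):

SHOW PARAMETER processes;
-- raise the limit; takes effect only after the database is restarted
ALTER SYSTEM SET processes = 276 SCOPE = SPFILE;
SHUTDOWN IMMEDIATE;
STARTUP;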

Oracle table spaces


Data that is entered into Product Master gets stored in the underlying database and is organized in different table spaces in the database. Because large volumes of data transactions occur, you must ensure that the database is tuned optimally.
You can further improve the performance and availability of the database by monitoring the free space available to table spaces and by using temporary tables.


Configuration parameters for Oracle databases
To ensure optimal performance of your database, you can configure the parameters in database configuration.
Oracle uses configuration parameters to locate files and specify runtime parameters that are common to all Oracle products. All Oracle parameters are stored in the init.ora file.

Oracle Database health check


You can use the following Oracle tools to monitor your database:

Oracle Alerts
Monitoring table space usage.
Automatic Workload Repository (AWR) Reports
Monitoring various performance diagnostics.
Oracle Enterprise Manager for database
Monitoring state and workload.

Reorganizing Oracle Database


To provide the optimizer with accurate information for determining an optimal access plan, current statistics are essential. Therefore, it is highly recommended to
use automatic statistics gathering.
Scripts and commands for Oracle
You can use database scripts and commands to manage your Oracle Database.

Related information
Oracle Database Release 19c
Oracle Database 18c
Oracle Database 12c Release 2

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Reorganizing Oracle Database


To provide the optimizer with accurate information for determining an optimal access plan, current statistics are essential. Therefore, it is highly recommended to use
automatic statistics gathering.

Procedure
1. Ensure that the automatic statistics gathering job is scheduled, by using the following SQL.

SELECT ENABLED FROM DBA_SCHEDULER_PROGRAMS WHERE PROGRAM_NAME = 'GATHER_STATS_PROG';

Output should be 'TRUE'.


Note: GATHER_STATS_PROG is the program that runs the internal procedure GATHER_DATABASE_STATS_JOB_PROC.
2. Delete any existing statistics, by using the following command from SQLPLUS. Replace WPCUSER with the user who is specified on the db_user variable found in the
$TOP/etc/default/common.properties file.

EXEC DBMS_STATS.DELETE_SCHEMA_STATS('WPCUSER');

3. Change the default "ESTIMATE_PERCENT" to 100% for DBMS_STATS. Use the following procedure from SQLPLUS to change the default. Run this using the Oracle
SYSDBA user ID:

EXEC DBMS_STATS.SET_PARAM('ESTIMATE_PERCENT','100');

4. Run GATHER_DATABASE_STATS using the following command from SQLPLUS. Run this command by using the Oracle SYSDBA user ID.

EXEC DBMS_STATS.GATHER_DATABASE_STATS(ESTIMATE_PERCENT => DBMS_STATS.AUTO_SAMPLE_SIZE, METHOD_OPT => 'FOR ALL COLUMNS SIZE AUTO', DEGREE => 2, CASCADE => TRUE);

5. Shut down and start the Oracle Database.


Note: This step flushes any dynamic SQL plans that are still using old access paths that did not use the current updated statistics.
6. Modify the default maintenance window for the automatic statistics gathering job.
Oracle provides utilities to change the default schedule window of automatic statistics gathering.
This example changes the default maintenance schedule to 1:00 AM to 3:00 AM every day. You must run these commands as the SYSDBA user.

EXECUTE DBMS_SCHEDULER.SET_ATTRIBUTE(name => 'WEEKNIGHT_WINDOW', attribute => 'repeat_interval', value => 'freq=daily;byday=MON, TUE, WED, THU, FRI;byhour=1;byminute=0;bysecond=0');
EXECUTE DBMS_SCHEDULER.SET_ATTRIBUTE(name => 'WEEKNIGHT_WINDOW', attribute => 'duration', value => interval '120' minute);
EXECUTE DBMS_SCHEDULER.SET_ATTRIBUTE(name => 'WEEKEND_WINDOW', attribute => 'repeat_interval', value => 'freq=daily;byday=SAT, SUN;byhour=1;byminute=0;bysecond=0');
EXECUTE DBMS_SCHEDULER.SET_ATTRIBUTE(name => 'WEEKEND_WINDOW', attribute => 'duration', value => interval '120' minute);

What to do next


Occasionally, it is required to reorganize the database for optimal performance. To check for table fragmentation, you can use the oracle_table_fragmentation_report.sh script.

1. Create the oracle_table_fragmentation_report.sh file on the Product Master server, for example in $TOP/bin/db, and paste the script content that is shown below.
a. Add execute permission to the file: chmod u+x oracle_table_fragmentation_report.sh
b. Generate the fragmentation report by running the following command:

./oracle_table_fragmentation_report.sh <db_user> <db_password> <dbname>

The generated fragmentation report is created in the local directory with the name table_fragmentation_report.out. The report looks similar to the following:

Table name : <25% full Blocks : 25-50% full Blocks : 50-75% full Blocks : >75% full Blocks : Full Blocks
...
TCTG_CAD_CATEGORY_DETAIL : 7 : 1117 : 239 : 977 : 80177
TCTG_CAT_CATEGORY : 5 : 121 : 64 : 521 : 13466
TCTG_CAX_CATEGORY_CONTENT : 1 : 3 : 1 : 542 : 92918
TCTG_CCE_CTLG_CTLG_EXPORT : 0 : 0 : 0 : 0 : 0
TCTG_CCM_CATEGORY_CATEGORY_MAP : 0 : 1 : 2 : 0 : 2
...
TCTG_ICM_ITEM_CATEGORY_MAP : 65 : 14389 : 6020 : 3746 : 31777
TCTG_ITA_ITEM_ATTRIBUTES : 39 : 46833 : 32414 : 138070 : 878941
TCTG_ITD_ITEM_DETAIL : 54 : 38286 : 22507 : 53205 : 270075
TCTG_ITM_ITEM : 642 : 5630 : 2307 : 1518 : 17340
TCTG_ITX_ITEM_CONTENT : 26 : 18 : 21 : 1328 : 953764
...

Tables that show a high portion of blocks with low usage (<50% full) should be considered for reorganization.

oracle_table_fragmentation_report.sh
#!/bin/bash
# _______________________________________________________ {COPYRIGHT-TOP} _____
# IBM Confidential
# OCO Source Materials
#
# 5725-E59
#
# (C) Copyright IBM Corp. 2007, 2014 All Rights Reserved.
#
# The source code for this program is not published or otherwise
# divested of its trade secrets, irrespective of what has been
# deposited with the U.S. Copyright Office.
# ________________________________________________________ {COPYRIGHT-END} _____

if [ $# -lt 3 ]
then
echo "Usage " $0 "db_user db_password db_name"
echo "Example: " $0 "pimuser pass4pim pimdb"
exit 0
fi
userid=$1
passwd=$2
dbname=$3

CONNECTION="$userid/$passwd@$dbname"

#echo 'CONNECTION: ' $CONNECTION

echo '== Generating Table fragmentation report for ' ${userid} ' =='

sqlplus -s ${CONNECTION} <<EOF >table_fragmentation_report.out 2>&1


SET term off ver off trims on serveroutput on size 1000000 feed off;
declare
v_unformatted_blocks number;
v_unformatted_bytes number;
v_fs1_blocks number;
v_fs1_bytes number;
v_fs2_blocks number;
v_fs2_bytes number;
v_fs3_blocks number;
v_fs3_bytes number;
v_fs4_blocks number;
v_fs4_bytes number;
v_full_blocks number;
v_full_bytes number;
cursor table_list is select table_name from user_tables where partitioned = 'NO' order by table_name;
cursor table_part_list is select table_name, partition_name from user_tab_partitions where high_value is not null order by table_name;
begin
dbms_output.put_line('Table name : <25% full Blocks : 25-50% full Blocks : 50-75% full Blocks : >75% full Blocks : Full Blocks');
for table_list_rec in table_list loop
dbms_space.space_usage (USER,table_list_rec.table_name,'TABLE', v_unformatted_blocks,
v_unformatted_bytes, v_fs1_blocks, v_fs1_bytes, v_fs2_blocks, v_fs2_bytes,
v_fs3_blocks, v_fs3_bytes, v_fs4_blocks, v_fs4_bytes, v_full_blocks, v_full_bytes);
dbms_output.put_line(table_list_rec.table_name || ' : ' || v_fs1_blocks || ' : ' || v_fs2_blocks || ' : ' ||
v_fs3_blocks|| ' : ' || v_fs4_blocks || ' : ' || v_full_blocks);
end loop;
for table_part_list_rec in table_part_list loop
dbms_space.space_usage (USER,table_part_list_rec.table_name,'TABLE PARTITION', v_unformatted_blocks,
v_unformatted_bytes, v_fs1_blocks, v_fs1_bytes, v_fs2_blocks, v_fs2_bytes,
v_fs3_blocks, v_fs3_bytes, v_fs4_blocks, v_fs4_bytes, v_full_blocks,
v_full_bytes,table_part_list_rec.partition_name);
dbms_output.put_line(table_part_list_rec.table_name || ' : ' || v_fs1_blocks || ' : ' || v_fs2_blocks || ' : ' ||
v_fs3_blocks || ' : ' || v_fs4_blocks || ' : ' || v_full_blocks);
end loop;
end;
/
EOF

echo '== Check Table fragmentation report at table_fragmentation_report.out =='


exit 0

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Scripts and commands for Oracle


You can use database scripts and commands to manage your Oracle Database.

Use the following scripts and commands to manage Oracle databases:

Scripts for Oracle


analyze_schema.sh
Updates the statistics.
drop_temp_agg_tables.pl
A script that drops temporary aggregate tables.
drop_log_tables_seq.pl
A script that drops all temporary tables that begin with TCTG_ULE_USER_LOG_ENTRY and its associated temporary sequences in the database to restore
storage space. Starts SQLPLUS for Oracle.
Important: If you use "User-Defined Logs" in business logic or for auditing, do not run this script. Starting this script deletes all "User-Defined Logs" entries.
index_compress.sql
Periodically compressing indexes on table ita might help reduce the database size and increase performance of the rich-search function on "indexed" attributes.

Commands for Oracle


Place the following commands in the index_compress.sql file:

ALTER INDEX ICTG_ITA_0 REBUILD COMPRESS;


ALTER INDEX ICTG_ITA_1 REBUILD COMPRESS;
ALTER INDEX ICTG_ITA_2 REBUILD COMPRESS;
ALTER INDEX ICTG_ITA_3 REBUILD COMPRESS;

Note: You must shut down Product Master before you run these ALTER INDEX commands.
Save the index_compress.sql file in the $TOP directory and run the commands from within the $TOP directory in the following manner:

perl $PERL5LIB/runSQL.pl --sql_file=index_compress.sql

This process can take from a few minutes to several hours to complete.

Related concepts
Dropping temporary tables

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Database administrator troubleshooting responsibilities


The database administrator takes the initiative to resolve problems and ensures that the application is running well with respect to the database. They are familiar with the schema, including all Product Master tables and indexes, and with the create scripts in $TOP/src/db on the application server.

The database administrator performs the following responsibilities:

Backup and recovery

Performs periodic backups


Has full knowledge of the restore procedure

Monitoring database activity


The database administrator understands the following:

When transaction rollbacks occur


When the database is out of system disk space
When unique constraints have been violated (this can be accomplished by using alerts)
When not to shut down the database while the application is running


Performance

Takes immediate action when performance issues arise:


Analyzes SQL statements and if some are taking an inordinate amount of time to run, determines the cause:
Explain plan
Checks updated statistics
Monitors when the database performs a rollback on a very large transaction causing performance issues with other transactions
Owns recalculation of database statistics
Verifies that the database is running in an optimized fashion, not only at the system level but at the level of tables and queries as well
Tunes procedure for gathering statistics to obtain optimal performance
Calculates how often statistics need to be updated to obtain optimal performance
Reorganizes the tables and indexes at regular intervals of time

Locks

Analyzes where locks are coming from


Gets trace of SQLs
Matches SIDs to server or process
Detects deadlocks
Checks why the source of the block is still blocking:
If it is a long-running job because of slow-running SQLs, why are the SQLs slow?
Perhaps the DB is doing a rollback on a session and the application is still generating SQLs
Maybe it is a bad explain plan (check SQL performance)
Possibly the DB is doing a rollback on a transaction
The size of the transaction could be a factor

Note: If the trained Database Administrator has followed all of these guidelines and still encounters difficulty, we recommend opening a PMR.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Administering system
System administrators can integrate LDAP and perform hardware performance tuning.

Integrating Lightweight Directory Access Protocol (LDAP)


You can integrate LDAP into Product Master so that you can locate organizations, individuals, and other resources such as files and devices in a network.
Global Data Synchronization (GDS) administration
Depending on your requirements, the following configurations must be performed. The configurations take place in Product Master; however, the settings are reflected in GDS. Ensure that you are logged in to the InfoSphere Master Data Management Collaboration Server - Collaborative Edition user interface.
System monitoring
You can monitor both your software and hardware components of Product Master to ensure that your system operates at an optimal level.
Managing document store
You can manage the document store to manage all incoming and outgoing files, including import feeds, scripts, reports, and specs.
Data maintenance
New report jobs are available to estimate, delete, or archive obsolete or unused data in the Product Master system. These report jobs are based on the options that
are provided in a lookup table, and can be scheduled to run on an appropriate maintenance window by using the in-built scheduler.
Enabling and configuring the spell checker
You must enable and configure IBM® Product Master parameters and the Sentry Spell Checking Engine spell checker from Wintertree to provide spell checking
capabilities within the Single Edit screen for items and categories.
Application server performance checklist
Use this checklist to resolve common issues of the performance of your application server service.
Application and Business expert troubleshooting responsibilities
The application and business expert tracks all changes to the application and system to determine if a change is causing a problem. They also interpret patch
release notes and notify all internally affected parties and works closely with the database administrator and system expert to fix problems and increase
performance.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Integrating Lightweight Directory Access Protocol (LDAP)


You can integrate LDAP into Product Master so that you can locate organizations, individuals, and other resources such as files and devices in a network.

LDAP integration enables your system to support over 1000 casual users, where each user requires authorization for various internal and external roles. For example, Category Manager is an internal role and Assistant Brand Manager is an external role. With LDAP integration, you can distribute your LDAP directory over several servers and improve your security infrastructure through:

Real-time LDAP user entitlement
User import from an LDAP server for immediate setup
User authentication within the same LDAP server that you import from


You can also integrate a separate LDAP server tool into your system to use for the authentication process. In this case, the system authorization infrastructure is used to authorize LDAP users, and the separate LDAP server tool is used to authenticate each user. To differentiate each LDAP user in your system, you use LDAP flags. This process of entitlement for LDAP users and roles into your system is done during run time and is based on either user-invoked or system-invoked script operations.

For more information, see the product documentation.

LDAP users and roles


The following list describes how LDAP users and roles function in the Product Master:

If a user is authenticated in a session, then the user continues to be authenticated until the end of the session. Even if the user identity changes during that period,
the user is still authenticated. For example, a change in role or password does not invalidate user authentication.
If the user exists in the Product Master and the LDAP flag is not set, then authentication is run against Product Master.
If the user exists in the Product Master and the LDAP flag is set, then authentication is run against the LDAP server. Any Product Master roles that are set within the
LDAP server must match the user-role mappings in the Product Master.
If the user exists and contains a role on the LDAP server but does not exist in the Product Master, the required entitlements for the user are created in an LDAP flag
set.
If you delete or remove an LDAP user from the Active Directory, you need to manually disable such a user in the Product Master through Admin UI > Security > User
Console.
The getLdapUserInfo and getAllLdapUsersInfo script operations enable you to source a list of users from the LDAP server.

LDIF integration
You can use the ASCII file format, Lightweight Directory Interchange Format (LDIF), to exchange and synchronize your data between LDAP servers.
You can synchronize your LDAP directories by extracting either a full or partial directory and then formatting the contents into LDIF files. The LDIF format is used to convey either a directory of information or a description of a set of changes that are made to directory entries. An LDIF file consists of a series of records that are separated by line separators. Each record in an LDIF file consists of either a sequence of lines that describe a specific directory entry or a set of changes to a directory entry.
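For example, a minimal sketch of an LDIF record that describes a single directory entry; the distinguished name and attribute values are illustrative only:

dn: uid=jsmith,ou=people,dc=example,dc=com
objectClass: inetOrgPerson
cn: John Smith
sn: Smith
uid: jsmith
mail: jsmith@example.com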

Product Master also includes an LDIF parser that can read and write directly to the LDIF protocol into LDAP core objects that include the following:

Distinguished name
Object classes
Associated attributes
Other core objects

You can write script functions to access the LDAP core objects that are based on your business logic requirements.

LDAP limitations
The following lists the LDAP integration limitations.

Single sign-on capabilities


Product Master supports single sign-on with the LDAP v3-compliant LDAP servers. LDAP support does not enable single sign-on support automatically.
When Product Master and WebSphere® Application Server applications are integrated with LDAP, local users from Product Master and WebSphere Application Server are not allowed to log in to their applications. Only LDAP users can log in to both applications with the default company. By default, LDAP or single sign-on disables the local Admin and other local users, so users cannot log in separately with and without single sign-on in the same application instance. For more information, see Configuring SSO.
Note: WebSphere Global Security is required for Product Master application if you are enabling LDAP configuration over Secure Sockets Layer (SSL).
Locale-specific string extraction
LDAP entry searches are not certified.
SASL binding
Novell eDirectory server has a known issue with SASL bind (integrated with DIGEST-MD5) in the globalized environment. Contact Novell technical support to
determine whether this problem is applicable to your environment.

Supported LDAP servers


Product Master can work with the LDAP v3 compliant LDAP servers.

Configuring LDAP over Secure Socket Layer


If you must configure LDAP over Secure Socket Layer (SSL), then you must perform the following extra steps to be able to log in to IBM® Product Master.

1. Configuration that is required for an SSL connection by using Active Directory and IBM WebSphere Application Server: import the certificate that is exported from the LDAP server into the WebSphere Application Server cell truststore (see the sketch after these steps).
2. Configuration that is required for IBM Product Master and Active Directory over SSL: customize the LDAP script (WPCS file) that is provided by Product Master after you complete the previous step. To customize the LDAP script, modify the LDAPLibrary.wpcs trigger script to replace the system properties from:

runJavaMethod(null,setPropertyMethod,"javax.net.ssl.trustStore",keystore);

To

runJavaMethod(null,setPropertyMethod,"com.ibm.ssl.trustStore", "/webAS/config/ssl/cacerts");

where /webAS/config/ssl/cacerts is the directory in the WebSphere Application Server installation where certificates are stored.
You can point the keystore attribute in the LDAP Properties lookup table in Product Master to either the custom keystore path or the WebSphere Application Server
truststore path.
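
For example, a minimal sketch of step 1 using the standard JDK keytool utility; the certificate path, alias, and truststore password are placeholders that you must adjust for your installation:

# Import the LDAP server certificate into the WebSphere cell truststore.
# /tmp/ldap-server.cer is a hypothetical export location.
keytool -importcert \
  -alias ldapssl \
  -file /tmp/ldap-server.cer \
  -keystore /webAS/config/ssl/cacerts \
  -storepass changeit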


IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Integrating with Product Master


The following sections describe the configurations that are involved in LDAP integration with Product Master.

Before you begin


WebSphere® Global Security is required for Product Master application if you are enabling LDAP configuration over Secure Sockets Layer (SSL).
Ensure that you are aware of the case sensitivity as LDAP server attribute values are case-sensitive.

Procedure
1. Enable LDAP authentication. To enable LDAP authentication, you must set the wpcOnlyAuthentication flag in the Login.wpcs file to false in case LDAP authentication
is required. The Login.wpcs file identifies the authentication mechanism.
a. Click Data Model Manager > Scripting > Scripts Console.
b. Select Login Script from the drop-down list.
c. Click Edit for the Login.wpcs script.
d. Find and set the wpcOnlyAuthentication flag to false.
2. Enable logger.
To enable the logger, you must add a logger and an appender for the ldap logger in the $TOP/etc/default/log4j2.xml file. In the Login.wpcs script, the default logger is ldap. For example:
Table 1. Logger definitions and scripts

Category definition:

<Logger name="com.ibm.ccd.wpc_user_scripting.ldap" level="info" additivity="false">
    <AppenderRef ref="LDAPLOGGER" />
</Logger>

Appender definition:

<RollingFile name="LDAPLOGGER" fileName="%LOG_DIR%/${svc_name}/ldap.log" append="true"
             filePattern="%LOG_DIR%/${svc_name}/ldap-%d{MM-dd-yyyy}-%i.log">
    <PatternLayout>
        <Pattern>%d [%t] %-5p %c %x- %m%n</Pattern>
    </PatternLayout>
    <Policies>
        <TimeBasedTriggeringPolicy />
        <SizeBasedTriggeringPolicy size="10 MB" />
    </Policies>
    <DefaultRolloverStrategy max="2" />
</RollingFile>
3. Configure LDAP server.
Perform the following tasks, from the Novell iManager web console, to set up users and groups in the various LDAP servers that are supported.
a. Create an organization.
i. Click eDirectory Administration > Create Object.
ii. Provide a name for the organization and context in which the organization should reside.
b. Create a user.
i. Click Users > Create User.
ii. Select the newly created organization for this user.
iii. Set the NDS Password and Simple Password.
Note: For more information, see LDAP User name.
c. Create a group.
i. Click Groups > Create Group.
ii. Select the newly created organization for this group.
iii. Modify the group and associate the users to the groups.
4. Add a matching role in Product Master.
Before you can add a matching role, create a role in Product Master with the same name as the group configured in the LDAP server whose members are to be
authenticated through this integration.
5. Populate the lookup table. Ensure that you provide the details about the LDAP configuration so that Product Master can use it for connectivity and authentication.
a. Click Product Manager > Lookup Tables > Lookup Table Console.
b. Click the magnifying icon beside the row for LDAP Properties.
c. Click + to add a row, and enter the information for the LDAP configuration from step 3.
The following table describes each attribute for the LDAP server:
Table 2. LDAP server attributes

Bind Type
    The bind type, which can be one of the following: simple, sasl, or ssl. This type is provided as an enum.
    Example: simple
FAX Number Attribute
    The user attribute that represents the fax number in LDAP, for example, facsimiletelephonenumber in Tivoli®.
Full Name Attribute
    The user attribute that represents the full name in LDAP, for example, cn in Tivoli.
Given name Attribute
    The user attribute that represents the given name in LDAP, for example, givenname in Tivoli.
groupClassNames
    The groups class name in the LDAP server.
    Example: group
Group Parent DNs
    The pipe (|) delimited parent DNs where the groups are likely to be found. If you do not know the parent DN, you can set it to "".
    Example: DC=ipm,DC=com
Keystore
    The location of the keystore file that was imported into the JVM.
LDAP Group Naming Attribute
    The naming attribute for the groups in this LDAP server.
    Example: sAMAccountName
LDAP URL
    The LDAP server URL; the primary key of the lookup table entry. The values are for the LDAP server.
    Example: <ldap-server-hostname>:389
LDAP User Naming Attribute
    The naming attribute for the users in this LDAP server.
    Example: sAMAccountName
Mail ID Attribute
    The user attribute that represents the mail ID in LDAP, for example, mail in Tivoli.
personClassNames
    The person class name in the LDAP server.
    Example: person
Postal Address Attribute
    The user attribute that represents the postal address in LDAP, for example, postaladdress in Tivoli.
Root Entry DN
    The root user's entry DN in this LDAP server.
    Example: CN=<username>,OU=Apple,DC=ipm,DC=com
Root Password
    The password of the root user.
    Example: <password of user>
SSL Bind Type
    The subtypes that are allowed in SSL bind, which can be one of the following: simple or DIGEST-MD5. This type is provided as an enum.
    Example: simple
supportedSaslMechanisms
    Subset of the server-supported SASL mechanisms with which to authenticate LDAP users when the bind type is sasl. The list of mechanisms is delimited by a space character.
Surname Attribute
    The user attribute that represents the surname in LDAP, for example, sn in Tivoli.
Telephone Number Attribute
    The user attribute that represents the telephone number in LDAP, for example, telephonenumber in Tivoli.
Title Attribute
    The user attribute that represents the title in LDAP, for example, title in Tivoli.
User Parent DNs
    The pipe (|) delimited parent DNs where the users are likely to be found. If you do not know the parent DN, you can set it to "".
    Example: DC=ipm,DC=com
LDAP username
An LDAP username can contain special characters, but some of the special characters need to be handled with an escape character. When you enter user details in the LDAP Properties table, make either of the following changes to use special characters (see the example after this section):
Append the escape character, backslash (\), before the special character
Enclose the username containing special characters within double quotation marks (" ")
Important: The following special characters can be used in an LDAP username without an escape character:
Colon (:)
Pound (£)
Exclamation mark (!)
Tilde (~)
At sign (@)
Dollar sign ($)
Percent sign (%)
Caret (^)
Ampersand (&)
Hyphen (-)
Underscore (_)
Note:
The escape character does not work for backslash (\), double quotation marks (" "), asterisk (*), and parentheses (), so do not use these special characters in an LDAP username.
After LDAP user login, multiple log files get populated with an "Unprocessed Continuation Reference" exception, which is a child exception of "Partial Result Exception". This exception is just a warning and can be ignored. You can avoid this exception and reduce space consumption. For more information, see Log files.
Important: After you edit the First Name, Last Name, Title, Email Address, Telephone, Fax, or Address details of an LDAP user from IBM® Product Master, you need to either log out and log in again or refresh your browser.
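
For example, a hypothetical username that contains a comma could be entered in either of these equivalent forms:

Smith\,John
"Smith,John"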

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Global Data Synchronization (GDS) administration

According to your requirements, the following configurations need to be performed. The configurations take place in Product Master; however, the settings are reflected in GDS. Ensure that you are logged in to the Product Master user interface.

The general GDS administration steps are applicable for Supply side GDS administration. The system administrators must complete certain configuration settings to make the GDS solution available according to the requirements. The administrator has access to all the features of GDS and user management.

Processing CINs from the data pool


A CIN XML from the data pool passes through various stages in GDS. The GDS Receiver module receives the CIN XML from the data pool. The further flow has two main branches that a CIN traverses, depending on whether the pass through model is enabled.

If the pass through model is enabled, the pass through model scripts are triggered, which automatically run the process flow on the CIN XML. Otherwise, the CIN XML is passed through the conventional workflow for processing. These scripts are prepackaged and can be modified only by the development team. In the conventional workflow, your interaction is required at every workflow step, making it a potentially time-consuming process.

In the pass through model, you are only involved if an error occurs during CIN processing. During the execution of the pass through model script on a CIN, if no errors are
encountered, the message item does not enter any workflow and it is ready for further processing. If an error is encountered, the message item is checked out into the
appropriate step in the custom GDS workflow. You are notified to perform the appropriate action at this stage. Once you have finished your task, the rest of the script is run
on the message item. The process is repeated for every error that occurs until the end of the script. After the script has completed execution, the message item is ready
for further processing.

Another reason for a message item to get caught in a workflow step is that the category mapping settings are not set to automatic processing. To enable automatic processing, set the following properties to true in the $TOP/etc/default/gds.properties file, as shown in the sketch after this list:

mandatory_gpc_validation
gpc_to_internal_hierarchy_mapping
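
A minimal sketch of the corresponding entries in $TOP/etc/default/gds.properties, assuming the usual key=value property format:

mandatory_gpc_validation=true
gpc_to_internal_hierarchy_mapping=true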

Essentially, processing and final output of CIN XML messages in the pass through or conventional model are the same. The difference lies in your actions at different steps.
The pass through model offers the following advantages over the conventional model:

Less user intervention: You are notified only if an error occurs while you process a CIN XML.
Better performance: The pass through model is tuned to perform better. More time is saved in processing a CIN XML as most of the processing is automatic.

The pass through model is the default model of CIN processing in GDS. Thus, there are no setup costs and no services engagement is required to use this model. However, if you want to customize the features of the pass through model, you can write the necessary scripts for these customizations.

The settings that are required for the pass through model are:

Ensure that the publicationprocess.properties file is present in the $TOP/etc/processflow/wwre folder. (By default, this file exists.)

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

System monitoring
You can monitor both your software and hardware components of Product Master to ensure that your system operates at an optimal level.

The following tasks contribute to ensuring optimal performance in your system:

Routine performance monitoring and maintenance


If you maintain a consistent routine for monitoring and maintaining your system, you can ensure that your system runs at optimal performance. To reach optimal performance, you must configure your system based on your system's needs.
Use the following list of guidelines to develop customized monitoring tools or routines for monitoring performance:

From the System Administrator > Profiler window in the user interface, you can use the processor and memory profiling agent to capture performance metrics.
Use the Log and Trace Analyzer for log file review and to pinpoint and resolve any performance issues.
Monitor the log files of your services to locate and determine whether an underlying problem exists.

Use the following list of monitoring options to assist you in maintaining the performance of your system:

Create a virtual host on your application server or a separate monitoring server. The rmi_status.sh and svc_control.sh scripts create system load and can affect performance, so using a separate monitoring server is appropriate for many production environments.
Use a scripting language such as Perl to create a Common Gateway Interface (CGI) wrapper for the svc_control.sh shell script to control and display status for specified services.
Create an alias in the web server that points to $TOP/logs for direct access to log files.
Create a utility that parses your log files in $TOP/logs for any exceptions or other errors and to check specified service status.
Create a utility to send email or other means of notification for error events (see the sketch after this list).
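
The last two options can be combined into a small shell utility. A minimal sketch, assuming GNU find and grep and a configured mailx command on the server; the report path and recipient address are placeholders:

#!/bin/sh
# Scan Product Master logs modified in the last day for exceptions and errors.
LOG_DIR="${TOP}/logs"
REPORT=/tmp/pim_log_errors.txt

# Collect matching lines, prefixed with the file name they came from.
find "$LOG_DIR" -name '*.log' -mtime -1 -exec \
    grep -H -E 'Exception|ERROR' {} \; > "$REPORT"

# Notify an administrator only if something was found.
if [ -s "$REPORT" ]; then
    mailx -s "Product Master log errors on $(hostname)" admin@example.com < "$REPORT"
fi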

Meeting hardware requirements


You must also configure your system based on your usage demands and ensure that you have the minimum hardware requirements to meet those needs.

Measuring performance
You can measure your system's performance through each Java™ Virtual Machine (JVM) by measuring the time that it takes for individual pages to complete.
Disk space management
You can manage your available disk space to ensure optimal performance for Product Master and all temporary storage partitions.



Caching web pages
You can configure IBM® Product Master to cache web pages when you have low connectivity bandwidth.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Measuring performance
You can measure your system's performance through each Java™ Virtual Machine (JVM) by measuring the time that it takes for individual pages to complete.

Before you begin


Ensure you complete the following prerequisites (the properties file sketch after this list shows the corresponding entries):

Enable profiling by setting the profiling_info_collection_depth parameter in the common.properties file to a high number, for example 50.
Enable profiling of scheduled jobs by setting the profiling_scheduled_jobs parameter in the common.properties file to full.
Delete all of the former profiling information so that retrieving profiling information is not slow and does not consume large amounts of database disk space.
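
A minimal sketch of the corresponding common.properties entries, based on the parameter names listed above:

profiling_info_collection_depth=50
profiling_scheduled_jobs=full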

Procedure
1. Open the Performance window: System Administrator > Performance Info > Performance.

2. In the Performance Search window, select a search value from the Search menu, then click the search icon.
3. Use the following list to determine your performance that is based on the results that display in the Current Performance Result pane.

N – Ref
The name of the JVM service.
Max
The longest time that it took for a JVM service to run.
Min
The shortest time that it took for a JVM service to run.
Avg
The average time that it took for a JVM service to run.
Visits
The number of visits that were made to the JVM service.

4. Optional: Clear all performance results and measure only your current conditions by clicking Flush to DB.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Disk space management


You can manage your available disk space to ensure optimal performance for Product Master and all temporary storage partitions.

Although Product Master is not a particularly disk-intensive system, you still need to manage your disk space to ensure adequate storage space for your system and all temporary partitions. Managing your disk space involves maintaining separate file systems, providing sufficient disk space, using shared devices for storage, and managing temporary files.

Use separate file systems


For improved disk space administration, optimal configuration is to use separate file systems for each application server and database server:

Application server

OS components
Third party components
Product Master:
Executable files
Temporary work files
Log files

Database server

Document store
Database-related files

Use shared storage devices


For clustered machine environments, shared storage is necessary for your application servers and recommended for your web servers:

The $TOP directory must be shared in the same location on all application servers in the cluster. For example, if $TOP is /usr/local/envs/wpc, then all machines in the cluster should see $TOP as /usr/local/envs/wpc.
Install the application server and support applications such as Apache and the JDK on your local storage device.

Log files can be stored either in your local storage or in shared storage.

Store your temp directory on your local storage device by specifying the tmp_dir parameter in the common.properties file.
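
For example, a hedged common.properties entry that points the temp directory at local storage might look like the following; the path is a placeholder:

# keep temporary work files on a local (non-shared) file system
tmp_dir=/local/pim/tmp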

Manage temporary files


Temporary directories hold runtime-generated files. Store your temporary files on your shared storage device in the $TOP/public_html/created_files/distributor directory.
Your temporary file directories might differ from those that are listed in the following tables, depending on your version of the product:

$TOP/public_html/created_files/distributor
Table 1. The $TOP/public_html/created_files/distributor directory

Purpose
    For outbound FTP distributions, the queue manager downloads a document from the database into this directory for temporary storage, then transfers the file to the destination.
When to delete
    Delete all of the files in this directory during scheduled application downtimes.
Recommendation
    Use a seven-day life span for the temporary files and sort all files by date. Then, delete anything older than seven days.
Example
    These examples, for Linux®, show how to find, sort, and delete files older than seven days from the command line. A scheduled variant of the delete command follows this table.
    To access the distributor file directory:

    cd $TOP/public_html/created_files/distributor

    To view files that are older than seven days:

    find . -type f -mtime +7 -exec ls -l {} \;

    To delete the files that are older than seven days:

    find . -type f -mtime +7 -exec rm -f {} \;
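
A minimal crontab sketch for scheduling the delete command above, assuming $TOP expands to the example installation path /usr/local/envs/wpc and that cleanup every Sunday at 02:00 is acceptable:

# m h dom mon dow command
0 2 * * 0 find /usr/local/envs/wpc/public_html/created_files/distributor -type f -mtime +7 -exec rm -f {} \;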
$TOP/public_html/suppliers/company code/aggregated_files
Table 2. The $TOP/public_html/suppliers/company code/aggregated_files directory

Purpose
    Import and export files that are retrieved through an FTP fetch are temporarily stored in this directory.
When to delete
    Do not delete this directory from within the file system. If necessary, access this directory from the Product Master user interface to delete files.
$TOP/public_html/suppliers/company code/tmp_files
Table 3. The $TOP/public_html/suppliers/company code/tmp_files directory

Purpose
    This directory stores temporary work files.
When to delete
    You can purge files automatically when Product Master is restarted or with a schedule you define.
Recommendation
    Save the files in this directory for a few weeks and purge them regularly.
$TOP/logs
Table 4. The $TOP/logs directory

Purpose
    This directory holds the Product Master middleware log files.
When to delete
    Provide sufficient disk space based on your defined logging detail level of either error or debug mode. The average full day generates approximately 30 - 40 MB of log files. You can set up Product Master to automatically purge these log files by configuring the log4j2.xml file in the $TOP/etc/default directory.
Recommendation
    Specify 2 - 3 GB of disk space in case you need to use the debug mode.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Caching web pages


You can configure IBM® Product Master to cache web pages when you have low connectivity bandwidth.

About this task


By default, the no_cache parameter is set to off so that proxy servers do not cache web pages.

If the no_cache parameter is set to on, web page caching is enabled but your web browser's Back function is limited. Your web browser's Back function works correctly, without causing errors, only when the no_cache parameter is set to off.

Note: Users must clean their browser cache before they use the user interface for the first time after a fix pack, interim fix, or test fix is applied. Frequently, JavaScript files that the user interface depends on are updated and installed with each release. These JavaScript files are cached by the browser when the user interface loads. To avoid incompatibilities and issues in using the user interface, you must clean your browser cache so that the latest JavaScript files are loaded and used by the user interface.



Procedure
1. With a text editor, open the common.properties file in the $TOP/etc/default directory.
2. Set the no_cache parameter to on.

Example
In this example, web page caching is enabled.

no_cache=on

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Managing document store


You can manage the document store to manage all incoming and outgoing files, including import feeds, scripts, reports, and specs.

The document store is accessible through the user interface at Collaboration Manager > Document Store. The Documents window provides the following accessible file
directories to the files that are stored on your Oracle or DB2® database:

archives
eventprocessor
feed_files
FTP
params
public_html
schedule_logs
scripts
tmp
users

The ftp and public_html directories are file system directories that are mounted into the document store and are defined in the docstore_mount.xml configuration file in
the $TOP/etc directory. The docstore_mount.xml file provides the location of your file system mount points and uses the ftp_root_dir and supplier_base_dir parameters
from the common.properties file:

<mnt doc_path="/public_html/" real_path="$supplier_base_dir/" inbound="yes"/>
<mnt doc_path="/ftp/" real_path="$supplier_ftp_dir/" inbound="yes"/>

For each file in the document store, you can view file details and audit log information of who has accessed the file and when.

File details

You can use the document store as a backup engine because every file that goes through your system is copied and stored into the document store.

Complete the following tasks in the document store:

Control access to files
Compress file size
Delete files
Defragment the document store

The document store database architecture includes table spaces that are designated for all files that are stored in the document store. When a file is stored in the document store, a new record is created in the database. The database stores the file as a BLOB (binary large object). A BLOB refers to a large block of memory bits that is stored in a database; because a BLOB cannot be interpreted as a specific object type from within the database, each object is viewable only as an indistinct BLOB file. The database stores BLOB files within one of the table spaces in the database. The advantage of using BLOB files and table spaces is that the database can protect your table data by using database server mechanisms, including backup-and-recovery and security mechanisms.

If you use two application server instances that share a database, you might find that some documents disappear. When you upload documents through the portal into the public_html folder, the documents seem to disappear from the docstore. From the log files of both application servers, you can see that the files are deleted. The mount manager on each instance periodically polls and synchronizes the database with its file system. When the mount manager on the second instance synchronizes the database with its file system, it removes the docstore entry added by the first instance, because the file does not exist in its file system.

A shared NFS mount is the solution to this problem. The $TOP/public_html directory must be NFS-shared for the clustering to work. The docstore_mount.xml file contains the mount manager configuration. The inbound attribute in this configuration file must be set to "yes" for the synchronization process to occur.

Table space management is an ongoing task. The document store table grows and shrinks in size depending on use. You must ensure that disk space is being used efficiently and that sufficient available disk space exists to support large binary files without interruption.

To maintain the performance of the document store, you should regularly defragment the files. Defragmentation chunks all of the files that exist in the document store into one continuous cluster and improves file import duration times.

Restriction: All file names that are uploaded to the document store cannot contain the following special characters: !@$%^&()=+.
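
As an illustration of this restriction, a minimal shell sketch that flags a file name the document store would reject; the file name variable is a placeholder:

# Reject file names that contain any of: ! @ $ % ^ & ( ) = +
fname='report(final).csv'
if printf '%s' "$fname" | grep -q '[!@$%^&()=+]'; then
    echo "invalid docstore file name: $fname"
else
    echo "ok: $fname"
fi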

Document store maintenance


The document store maintenance reports are available to report, delete, and archive files in the Document Store. Reports are created using Product Master Scripts
and can be scheduled to run on an appropriate maintenance window using the Product Master scheduler.



Setting the FTP directory
Specific settings must be set in the common.properties and docstore_mount.xml files to make the FTP directory work in the docstore.
Using file system as document store
In IBM® Product Master, a file system can be used as a document store instead of using the database.
Setting document store access privileges
You can set user privileges for users by setting an access control group (ACG) for each file in the document store.
Deleting files
You can delete files from the document store.
Compressing document store files
You can compress BLOB files to reduce their size in the document store.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Document store maintenance


The document store maintenance reports are available to report, delete, and archive files in the Document Store. Reports are created using Product Master Scripts and can
be scheduled to run on an appropriate maintenance window using the Product Master scheduler.

There are two reports for document store maintenance:

IBM® MDMPIM DocStore Volume Report


This report provides the number of files in each of the root directories in the Document Store and also the number of files in the sub-directories of three high volume
directories.
IBM MDMPIM DocStore Maintenance Report
This report deletes or archives files in the Document Store directories that are based on the information that is provided in a Lookup table.

These two reports have the following dependent Product Master objects:

IBM MDMPIM DocStore Volume Script
    Type: Script file in DocStore directory /scripts/reports/
    Usage: Script file that is used by the IBM MDMPIM DocStore Volume Report
IBM MDMPIM DocStore Maintenance Script
    Type: Script file in DocStore directory /scripts/reports/
    Usage: Script file that is used by the IBM MDMPIM DocStore Maintenance Report
DB_Queries
    Type: Script file in DocStore directory /scripts/IBM MDMPIM Data Maintenance/
    Usage: Script file that is used by the IBM MDMPIM DocStore Maintenance Report
IBM_MDMDocStore_Maintenance_Lookup
    Type: Lookup Table
    Usage: Input parameters that are used by the IBM MDMPIM DocStore Maintenance Report
IBM_MDMDocStore_Maintenance_Lookup_Spec
    Type: Lookup Table Spec
    Usage: Lookup Table Spec for the IBM_MDMDocStore_Maintenance_Lookup lookup table
IBM MDMPIM Data Maintenance Distribution
    Type: Distribution
    Usage: A dummy email distribution that is used by reports. Modify the distribution with an appropriate distribution method.
Attribute definitions for the IBM_MDMDocStore_Maintenance_Lookup table:

Key
    Lookup key.
Directory_Name
    DocStore directory name to be archived or purged. The value should be a valid DocStore directory name that starts with "/".
Action_on_Directory_files
    Action to be performed on the DocStore directory. Valid values are "purge" or "archive".
Days_to_keep
    Number of days before the current date for which the DocStore directory files are preserved. The action is performed on the DocStore directory files older than the given number of days. This value should be a positive integer or zero.
Default values in the IBM_MDMDocStore_Maintenance_Lookup table and the default action by the IBM MDMPIM DocStore Maintenance Report:

/reports/ (Action_on_Directory_files: purge, Days_to_keep: 30)
    Files in the /reports/ DocStore directory older than 30 days are deleted.
/job_summary/ (Action_on_Directory_files: purge, Days_to_keep: 30)
    Files in the /job_summary/ DocStore directory older than 30 days are deleted.
/archives/ (Action_on_Directory_files: archive, Days_to_keep: 30)
    Files in the /archives/ DocStore directory older than 30 days are archived to a compressed file and then deleted.
/entryprocessor/ (Action_on_Directory_files: purge, Days_to_keep: 30)
    Files in the /entryprocessor/ DocStore directory older than 30 days are deleted.
/feed_files/ (Action_on_Directory_files: purge, Days_to_keep: 365)
    Files in the /feed_files/ DocStore directory older than 365 days are deleted.

Modify these lookup table values or add new values per your environment before you run the maintenance report job.
Sample IBM MDMPIM DocStore Volume Report output

DocStore Volume Report for ibm Company

Files in Top-Level Directories


Directory name Number of files
/job_summary 31
/reports 31
/scripts 10
/params 2
/schedule_logs 2
Files in Subdirectories of /job_summary

No Subdirectories

Files in Subdirectories of /reports


Directory name Number of files
/reports/03-MAR-2010 14:41:42 1
/reports/03-MAR-2010 14:54:42 10
/reports/03-MAR-2010 15:05:26 12
/reports/03-MAR-2010 16:54:46 8
Files in Subdirectories of /scripts
Directory name Number of files
/scripts/triggers 4
/scripts/reports 2
/scripts/ldap_usr_fetch 1
/scripts/login 1
/scripts/logout 1
/scripts/report 1
End of DocStore Volume Report
Sample IBM MDMPIM DocStore Maintenance Report output

DocStore Maintenance Report for ibm Company


Directory name Action completed Files deleted Cut off date Archived compressed file
/reports/ purge 30 2010-02-23 00:05:55 none
/job_summary/ purge 30 2010-02-23 00:05:55 none
/archives/ archive 30 2010-02-23 00:05:55 /IBM_MDMDocStore_Maintenance/Archives/2010-03-25 00:05:55/archives.zip
/entryprocessor/ purge 30 2010-02-23 00:05:55 none
/feed_files/ purge 365 2009-03-25 00:05:55 none
DocStore Maintenance completed successfully

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Setting the FTP directory


Specific settings must be set in the common.properties and docstore_mount.xml files to make the FTP directory work in the docstore.

Procedure
1. Create the directories and a sample file in IBM® Product Master.
For example /ftp/test/trigo/sample.txt. Set these directories to allow read and write privilege for Product Master users.
2. Stop Product Master.
3. In the common.properties file, make the following changes:
a. Set the ftp_root_dir parameter, specifying a slash at the end of an absolute directory.
For example:

...

# base directory for each supplier (relative to ${TOP}) MUST START with public_html!!!!!!!!!
supplier_base_dir=/public_html/suppliers/

# Can multiple ctg files (for image/binary attributes) exist with the same name?
# If false, will store files in ctg_files within the supplier base
# If true, will store files in subdirectories within ctg_files with timestamps for names
allow_multiple_files_with_same_name=false

# root directory for ftp
# (must end with a "/")
#ftp_root_dir=/u01/ftp/
ftp_root_dir=/ftp/test/

...
Product Master appends the company name, "trigo" in this example, assuming that your company is "trigo". Make sure that this directory exists and that files exist in the directory.
Note: Files that are stored in the /ftp/test/COMPANY_CODE/files directory show up in the FTP section of the docstore. If the files do not show up in the docstore, use the touch command to change the dates on the files. Also, make sure that the files have the correct permissions. Files that are stored in the /ftp/test directory do not show up in the docstore, because the docstore looks only in the /ftp/test/COMPANY_CODE/ directories.
b. Set the enable_mountmgr property to true:

....

# other docstore specific parameters in austin.properties

# mount mgr daemon info
mountmgr_daemon_sleep_time=120000

# MountMgr is only useful if an external process adds/deletes files
# to file system directories mounted in the docstore
# values: true/false
enable_mountmgr=true

....
4. In the docstore_mount.xml file, make sure that you specify the real path for the FTP doc_path parameter and set the inbound parameter to yes:

<?xml version="1.0"?>
<mnts>
    <mnt doc_path="/public_html/" real_path="$supplier_base_dir/" inbound="yes" />
    <mnt doc_path="/ftp/" real_path="$supplier_ftp_dir/" inbound="yes" />
</mnts>

If inbound is set to yes, the file system is replicated to the docstore.
Note: The setting of the ftp_root_dir parameter in the common.properties file is copied onto the supplier_ftp_dir parameter in the docstore_mount.xml file. Therefore, they have the same value.
5. Start Product Master. To test the docstore mount manager for immediate results, you can run the following command:
$JAVA_RT com/ibm/ccd/docstore/common/MountMgr -op=sync -company_code=trigo
Where company_code is set to your company, in this case "trigo".

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Using file system as document store


In IBM® Product Master, a file system can be used as a document store instead of using the database.

Inbound parameter
If the inbound parameter is set to "yes", then file system directories are mounted to the docstore. If it is set to "no", then file system directories are not mounted to the docstore. If you set the inbound parameter to "yes" for a particular directory, and later set it to "no", changes to the directory from that point on are not reflected in the docstore. The docstore retains the old data. This parameter has no effect on the copying of files from the docstore to the file system.
Note: Product Master supports only one-way synchronization, which works from the file system to the docstore. The reverse is not true; uploading a file to the docstore does not create a copy of the file in the file system.
To mount a file system directory to the docstore, you need to add a line in the docstore_mount.xml file. This file is in the $TOP/etc/default directory on the server that is running the IBM Product Master instance.

<mnt doc_path="/TestMount/" real_path="/usr/local/TestMount/" inbound="yes" />

Where:

doc_path="/TestMount/"
Directory in the docstore under which you can view the folder of the mounted local file system.
real_path="/usr/local/TestMount/"
Directory from the local file system that you are going to mount.
inbound="yes"
Specifies the behavior. Because this parameter is set to yes, the TestMount directory from the file system is mounted to the docstore.

Mounting a folder to the docstore


You can mount file system directories to the docstore and then manipulate the directories with script operations or Java™ API. The docstore acts as a mount point
and does not physically store the directories and files, which reside in the file system.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)



Mounting a folder to the docstore
You can mount file system directories to the docstore and then manipulate the directories with script operations or Java™ API. The docstore acts as a mount point and
does not physically store the directories and files, which reside in the file system.

About this task


The mount manager does not start with the start of the application server. Instead, you need to manually log in to the user interface; after that, the mount manager daemon starts and the file system is mounted onto the docstore.

Procedure
1. Stop the server instance.
2. Enable the mount manager. In $TOP/etc/default/common.properties, set enable_mountmgr=true to enable the mount manager to mount the local file system to the docstore. Also set mountmgr_daemon_sleep_time=1000 to set the interval after which the mount manager scans the local file system for any changes to reflect in the docstore.
While scanning, the mount manager looks for files that do not exist in the docstore and loads such files into the docstore. If the last modified time stamp of a file on the file system has changed, then it loads that file again into the docstore. In addition to LAST_MODIFIED_TIMESTAMP, another attribute, FILESYSTEM_TIMESTAMP, is saved for each file; its value is set to the modified time stamp of the real file. This attribute is used to check whether the file on the file system was modified and needs to be loaded again into the docstore.
If you set the mountmgr_daemon_sleep_time attribute to less than 1 minute, that value is ignored and 1 minute is used as the default sleep time. This is to avoid unexpected results.
3. Create a folder, for example /usr/local/TestMount/, on the Linux® or UNIX server file system.
4. Mount the file system directory. Open the $TOP/etc/default/docstore_mount.xml file on the server that is running the IBM® Product Master instance and add the
following line to the file:

<mnt doc_path="/TestMount/" real_path="/usr/local/TestMount/" inbound="yes" />

Where:

doc_path="/TestMount/"
    Is the folder in the docstore under which you view the folder of the mounted local file system.
real_path="/usr/local/TestMount/"
    Is the folder from the local file system that you are going to mount.
inbound="yes"
    Specifies the behavior. If inbound="no", files are exported to the file system from the docstore.
    If inbound="yes", files are imported into the docstore. Also, if any files are added, deleted, or changed on the local file system outside Product Master, the docstore reflects those changes in the Document Store.

5. Start the server instance and log in to Product Master. Go to the docstore and click the TestMount directory. You see your folder structure.
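
To verify the mount quickly, you can combine the steps above with the MountMgr sync command that is shown in the FTP directory topic; a minimal sketch, assuming the example company code "trigo" and the mount path from this procedure:

# Create a test file in the mounted directory on the file system.
mkdir -p /usr/local/TestMount
echo "mount test" > /usr/local/TestMount/hello.txt

# Force an immediate synchronization instead of waiting for the daemon.
$JAVA_RT com/ibm/ccd/docstore/common/MountMgr -op=sync -company_code=trigo

# The file should now be visible in the docstore under /TestMount/.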

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Setting document store access privileges


You can set user privileges for users by setting an access control group (ACG) for each file in the document store.

Before you begin


Before you can set document store access privileges, you must determine the access privileges that you want to set for specific users. This includes the ability to create,
view, delete, or have full control of a file in the document store.

Procedure
1. Open the document store: Collaboration Manager > Document Store.
2. Select the file name that you want to associate an ACG to.
3. Click the icon next to the file name to open the Docstore Access Details window.
4. In the Docstore Access Details window, select the ACG that you want to associate with the file:

To select an ACG that exists:
    Select the ACG from the Access Control Group menu, then click the save icon.
To create and associate a new ACG:
    a. Create the ACG, then click the save icon.
    b. Select the ACG you created from the Access Control Group menu, then click the save icon.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)



Deleting files
You can delete files from the document store.

About this task


Deleting a file does not provide more free disk space. Deleting a file from the document store deletes the BLOB from the memory block where it resides. The memory blocks are not deleted and remain as allocated disk space when your BLOB files are deleted. Any available memory blocks that are empty are reused to store new files.
CAUTION:
Files that you delete cannot be restored.

Procedure
1. Open the document store: Collaboration Manager > Document Store.
2. Select the file name that you want to delete.
3. Click the delete icon next to the name of the file.
4. Click OK to confirm the file deletion.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Compressing document store files


You can compress BLOB files to reduce their size in the document store.

About this task


Enabling BLOB file compression compresses all files that are stored in BLOB.

Procedure
1. Open the common.properties file with a text editor from the $TOP/etc/default directory.
2. Set the gzip_blobs parameter to true:

gzip_blobs=true

3. Save the common.properties file.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Data maintenance
New report jobs are available to estimate, delete, or archive obsolete or unused data in the Product Master system. These report jobs are based on the options that are
provided in a lookup table, and can be scheduled to run on an appropriate maintenance window by using the in-built scheduler.

Attribute definitions for Product Master Data Maintenance Lookup

Component
    PIM component that is to be maintained; do not modify this value.
Days_to_keep
    Integer field for specifying the number of days before the current date for which PIM component data is preserved. Obsolete data older than the given number of days is deleted. This value must be a positive integer, zero, or null. If null, the End_date_to_delete attribute value is used. If zero, all obsolete data is deleted.
Start_date_to_delete
    Optional date attribute that is used along with End_date_to_delete. If a date is provided, data between the start date and end date is deleted.
End_date_to_delete
    Date attribute that is used with or without Start_date_to_delete. If both start and end dates are provided, obsolete data between the start date and end date is deleted. If only the end date is provided, obsolete data that was created before the end date is deleted.
Catalog_Name
    Optional string attribute for entering a catalog name to delete old version data specific to that catalog. This attribute is used only for "Old Version" maintenance.
Note:

A value for either the Days_to_keep or End_date_to_delete attribute must be provided. If both attribute values are provided, Days_to_keep takes precedence.
The Days_to_keep attribute helps in automatic scheduling of the maintenance report jobs without the need to modify the lookup table each time (see the sample row after this note).
Modify these lookup table values per your requirement before you run the maintenance report job.
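
For illustration only, a hypothetical row for "Old Version" maintenance that keeps 90 days of version data for one catalog might look like the following; the catalog name is a placeholder:

Component:            Old Version
Days_to_keep:         90
Start_date_to_delete: (empty)
End_date_to_delete:   (empty)
Catalog_Name:         Product Catalog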



Old object versions
You can check for old object versions and delete them from your database tables to restore storage space, enhance IBM® Product Master performance, and speed
up database utilities.
Job history
You can purge your historical job information that is scheduled in Product Master to improve performance, restore free disk space, and speed up database utilities.
Creating a catalog version
IBM Product Master automatically creates catalog versions each time a change is made to the catalog. However, you might want to manually create a specific catalog version, for example after bulk item imports, to have a point of reference to roll back to.
Performance profile
You can purge your performance profiling data to restore free disk space and speed up database utilities.
Updating DB statistics
The database table and index statistics should be up-to-date for an optimal performance so that the database optimizer can choose the best (quickest) access plan
to retrieve data for each SQL query.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Old object versions


You can check for old object versions and delete them from your database tables to restore storage space, enhance IBM® Product Master performance, and speed up
database utilities.

Versioning is a backup feature for all objects so that you can roll back your objects to a specific object version from within the user interface if you encounter problems.
The object versioning process creates duplicate entries in your respective database tables for every new version of your objects.

As an alternative to running the old version maintenance report, you can also run the shell script. For more information, see Checking and deleting old object versions with
scripts.

There are two types of versioning:

Implicit versioning
Occurs when the current version of an object is automatically archived as a backup and a new object version is created during a manual item edit, export, or import
process.
Explicit versioning
Occurs when you manually request a backup.

Creating duplicate object entries in your database tables results in an increase of the overall size of your database and might cause a performance decrease when you
access your objects. You can evaluate how many object versions you stored in your database and determine which version to delete to restore your storage space.

Check for old object versions at least once every month.

Report jobs for maintaining old object versions


IBM Product Master provides two reports for maintaining old object versions.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Report jobs for maintaining old object versions


IBM® Product Master provides two reports for maintaining old object versions.

1. IBM MDMPIM Estimate Old Versions Report – This report provides a row count of old object versions in Product Master database table that is based on the criteria
that are mentioned in the lookup table.
2. IBM MDMPIM Delete Old Versions Report – This report deletes old object versions in Product Master based on the criteria that are mentioned in the lookup table.

These two reports have the following dependent Product Master objects:

IBM MDMPIM Estimate Old Versions Script
    Type: Script file in docstore directory /scripts/reports/
    Usage: Script file that is used by the IBM MDMPIM Estimate Old Versions Report
IBM MDMPIM Delete Old Versions Script
    Type: Script file in docstore directory /scripts/reports/
    Usage: Script file that is used by the IBM MDMPIM Delete Old Versions Report
DB_Queries
    Type: Script file in docstore directory /scripts/IBM MDMPIM Data Maintenance/
    Usage: Script file that is used by the report jobs
IBM MDMPIM Data Maintenance Lookup
    Type: Lookup Table
    Usage: Input parameters that are used by the report jobs
IBM MDMPIM Data Maintenance Lookup Spec
    Type: Lookup Table Spec
    Usage: Lookup Table Spec for the IBM MDMPIM Data Maintenance Lookup table
IBM MDMPIM Data Maintenance Distribution
    Type: Distribution
    Usage: A dummy email distribution that is used by the reports. Modify the distribution with an appropriate distribution method.
The default value of the Days_to_keep attribute in the IBM MDMPIM Data Maintenance lookup table is 0. Hence, the default action for the IBM MDMPIM Delete Old Versions Report is to delete all old object versions.



Modify the lookup table values per your requirement before you run the IBM MDMPIM Delete Old Versions Report job.

Sample Report:

Estimate Old Versions for Company: ibm

Job run date: 2010-08-09 17:59:39

Old Version data between dates: 2010-04-01 17:55:00 & 2010-04-28 00:56:58
Database Table Name Old Version Rows
TCTG_ICM_ITEM_CATEGORY_MAP 74844
TCTG_ITA_ITEM_ATTRIBUTES 412500
TCTG_ITD_ITEM_DETAIL 37500
TCTG_ITM_ITEM 37500
TCTG_VER_VERSION 48
TAUD_SHI_SIMPLE_HIERARCHY 0
TAUD_SMP_SIMPLE_OBJECT 0
TAUD_SSM_SIMPLE_SIMPLE_MAP 0
TCNT_CNA_CONTAINER_ATTRIBUTES 0
TCNT_EEM_ENTRY_ENTRY_MAP 0
TCNT_EFP_ENTRY_FULL_PATHS 0
TCNT_EHI_ENTRY_HIERARCHY 0
TCNT_ENT_ENTRY 0
TCNT_ESA_ENTRY_SYS_ATTR 0
TCNT_ESM_ENTRY_SPEC_MAP 0
TCNT_ETA_ENTRY_ATTRIBUTES 0
TCTG_AGD_ATTR_GROUP_DYN_NODE 0
TCTG_AGL_ATTR_GROUP_LOCALE 0
TCTG_AGN_ATTR_GROUP_NODE 0
TCTG_CAA_CATALOG_ATTRIBUTES 0
TCTG_CAB_CATEGORY_ATTRIBUTES 0
TCTG_CAD_CATEGORY_DETAIL 0
TCTG_CAT_CATEGORY 0
TCTG_CCM_CATEGORY_CATEGORY_MAP 0
TCTG_CFP_CAT_FULL_PATHS 0
TCTG_CGM_CATALOG_CATALOG_MAP 0
TCTG_CHI_CATEGORY_HIERARCHY 0
TCTG_CSA_CAT_SYS_ATTR 0
TCTG_CSM_CATEGORY_SPEC_MAP 0
TCTG_CTA_CATEG_TREE_ATTRIBUTE 0
TCTG_NOA_NODE_ATTRIBUTES 0
TCTG_NOD_NODE 0
TCTG_NOH_NODE_HIERARCHY 0
TCTG_NOM_NODE_MAPPING 0
TCTG_OBM_OBJECT_MAP 0
TCTG_ODD_ORD_DETAIL 0
TCTG_OMD_OBJECT_MAP_DETAIL 0
TCTG_OMS_OBJECT_MAP_SCRIPTS 0
TCTG_REA_RELATION_ATTRIBUTES 0
TCTG_RGM_RUL_GROUP_MAP 0
TCTG_RUL_INHERITANCE_RULE 0
TCTG_SPA_SPEC_ATTRIBUTES 0
TCTG_TMT_TEST_MULTI_ROW 0
Estimate Old Versions completed

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Job history
You can purge your historical job information that is scheduled in Product Master to improve performance, restore free disk space, and speed up database utilities.

Maintaining job history


The IBM® Job Data Maintenance Report is available to delete job data based on the options provided in a lookup table. This report is created using Product Master scripts and can be scheduled to run in an appropriate maintenance window using the Product Master scheduler.
The Job Data Maintenance Report has the following dependent Product Master objects:

IBM Job Data Maintenance script
    Type: Script file, /scripts/reports/ in the docstore directory
    Usage: Script file that is used by the IBM Job Data Maintenance Report
DB Queries
    Type: Script file, /scripts/IBM_MDMDocStore_Maintenance/ in the docstore directory
    Usage: Script included in the file that is used by the IBM Job Data Maintenance Report
Product Master Data Maintenance Lookup
    Type: Lookup Table
    Usage: Input parameters that are used by the IBM Job Data Maintenance Report
Product Master Data Maintenance Spec
    Type: Lookup Table Spec
    Usage: Lookup Table Spec for the Product Master Data Maintenance Lookup table

Important: The default value of the Days_to_keep attribute in the Product Master Data Maintenance lookup table is 30. Therefore, the default action for the Job Data Maintenance Report is to delete all of the job schedule information older than 30 days. Ensure that you modify the lookup table values per your requirement before running the Job Data Maintenance Report.

Sample IBM Job Data Maintenance Report output


Job data maintenance for company: ibm

Job run date: 2013-01-01 16:57:14

Deleting completed job schedules older than date: 2013-01-01 16:57:14

Total job schedules deleted : 54

Job data maintenance completed

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Creating a catalog version


IBM® Product Master automatically creates catalog versions each time a change is made to the catalog. However, you might want to manually create a specific catalog version, for example after bulk item imports, to have a point of reference to roll back to.

About this task


You can manually create a catalog version using a script operation or using the user interface.

Procedure
Create a catalog version.

Scenario 1: Creating a catalog version using a script operation
    You create an import using a dummy catalog, which updates items in the destination catalog "ctg1." After several runs of this import job, ctg1 is populated to the point where a significant milestone is reached. You want to create a new version for ctg1 so that it is possible to roll back to this point later on if necessary.
    You can use the script operation insertNewVersion() in the dummy import script to identify these milestones and automatically create a new version when a milestone is reached. The syntax for this script operation is:

    Version Container::insertNewVersion(String sName)

    Call this script operation from a container and provide the name of the version to create.
Scenario 2: Creating a catalog version using the user interface
    Users are adding items to a catalog from a workflow, and at some point you want to manually set a new version to enable rollback to this catalog state.
    a. Select the catalog that you want to create a new version of, and then click Attrs.
    b. Type a name in the Add a version with the name section, and then click the plus sign to save the new catalog version.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Performance profile
You can purge your performance profiling data to restore free disk space and speed up database utilities.

If you encounter performance problems and are asked to activate profiling to determine your performance bottleneck, the profiling data that is written into specific database tables might become large over time. After you complete your performance analysis and deactivate profiling, you can delete the profiling data that you have collected to restore disk space.

Maintaining performance profile data


The IBM® MDMPIM Profile Data Maintenance Report is available to delete performance profile data based on the options provided in a lookup table. This report is created using Product Master scripts and can be scheduled to run in an appropriate maintenance window using the Product Master scheduler.

The Profile Data Maintenance Report has the following dependent Product Master objects:

IBM MDMPIM Profile Data Maintenance Script
    Type: Script file in docstore directory /scripts/reports/
    Usage: Script file that is used by the IBM MDMPIM Profile Data Maintenance Report
DB Queries
    Type: Script file in docstore directory /scripts/IBM_MDMDocStore_Maintenance/
    Usage: Script file that is used by the IBM MDMPIM Profile Data Maintenance Report
IBM MDMPIM Data Maintenance Lookup
    Type: Lookup Table
    Usage: Input parameters that are used by the IBM MDMPIM Profile Data Maintenance Report
IBM MDMPIM Data Maintenance Spec
    Type: Lookup Table Spec
    Usage: Lookup Table Spec for the IBM MDMPIM Data Maintenance Lookup table

Sample IBM MDMPIM Profile Data Maintenance Report output


Profile Data Maintenance for ibm Company

Date: 2010-05-25 16:57:14

Delete Profile data older than 2010-05-24 16:57:14

Profile Table Name Rows deleted


TPFM_PSD_SCHEDULE_DETAIL 5
TPFM_PSI_PROFILE_SCHEDULE_INFO 10
TPFM_PPR_PROFILE 107
TPFM_PPI_PROFILE_INFO 24
Profile Data Maintenance completed.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Updating DB statistics
The database table and index statistics should be up-to-date for optimal performance so that the database optimizer can choose the best (quickest) access plan to retrieve data for each SQL query.

IBM® Product Master provides a report for updating database statistics. The report fetches the tables whose last maintenance date exceeds 15 days and runs RUNSTATS on those tables and their indexes. For Oracle, gather_table_stats is performed instead. This ensures that the database statistics are up-to-date and enhances IBM Product Master performance.

The Update DB Statistics report has the following dependent Product Master objects:

IBM MDMPIM Update DB Statistics Report
    Object type: Script file in docstore directory /scripts/reports/
    Usage: Script file that is used by the IBM MDMPIM Update DB Statistics Report.
IBM MDMPIM Data Maintenance Distribution
    Object type: Distribution
    Usage: A dummy email distribution that is used by the reports. Modify the distribution with the appropriate distribution method.


Enabling and configuring the spell checker


You must enable and configure IBM® Product Master parameters and the Sentry Spell Checking Engine spell checker from Wintertree to provide spell checking capabilities
within the Single Edit screen for items and categories.

About this task


IBM Product Master is not bundled with a spell checker but you can purchase and install the separate Sentry Spell Checking Engine from Wintertree for integrated use in
the Single Edit screen.

A valid Wintertree license is required to enable spell checking. The Wintertree spell checker must be installed on the Product Master server in a location that is readable by the Product Master user.



Procedure
1. Stop Product Master services.
2. Add the Wintertree JAR file by performing one of the following steps:
a. Copy the ssce.jar file from the runtime/lib directory in the Wintertree home directory to the <install dir>/lib directory.
b. Add the absolute path of the ssce.jar file to the <install dir>/bin/conf/classpath/jars-custom.txt file.
For example, add <wintertree home>/runtime/lib/ssce.jar to jars-custom.txt.
3. Update the wintertree_home parameter in the env_settings.ini file.
4. Run the script configureEnv.sh to regenerate the Wintertree configuration files.
Note: Do not overwrite the common.properties file when prompted.
5. Update the <install dir>/etc/default/common.properties file by providing the following values:
spell_check=true
spell_check_vendor=wintertree
spell_check_vendor_class=com.ibm.ccd.common.plugins.wintertree.WinterTreePlugin
spell_license=<your license number>
6. Start Product Master services.


Application server performance checklist


Use this checklist to resolve common issues of the performance of your application server service.

These possible resolutions can help you identify the source of the performance problem that is occurring within the application server:

Make sure that the server where the application server is installed has the appropriate capacity to handle the load and is not shared with other systems.
Verify that the application server is connected to the database by using a gigabit network.
Verify that the network card is set to gigabit full duplex and not to auto-negotiate.
Verify that the average ping time between the application server and the database is less than 0.25 milliseconds.
Check the ulimit settings to make sure that the number of open file descriptors is set to 8000 (a quick check is shown after this list).
If you are a WebSphere® Application Server user, make sure that the server is configured according to the following recommendation in the WebSphere Application
Server product documentation: Tuning performance.
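For example, you can verify the file descriptor and ping items from a bash shell on the application server; the database host name below is an illustrative assumption:

# Show the current limit on open file descriptors; it should report 8000
ulimit -n

# Average round-trip time to the database server should be below 0.25 ms
ping -c 10 db-host.example.com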


Application and Business expert troubleshooting responsibilities


The application and business expert tracks all changes to the application and system to determine whether a change is causing a problem. They also interpret patch release notes, notify all internally affected parties, and work closely with the database administrator and system expert to fix problems and increase performance.

The Application and Business expert performs the following responsibilities:

Determines severity level for PMRs

Determines the severity level of all PMRs based on impact to business operations only (impact to development or maintenance operations is considered only insofar as it affects business operations)
Follows guidelines to ensure that resources are properly allocated and does not overestimate the severity, because overestimating can pull resources off other, more important issues
Provides the reasoning behind severity 1 and severity 2 levels to help the support team understand the business impact

Performance

Always checks and interprets profiling information


Works with the Database Administrator and System Expert to see if the problem is in their area of expertise:
Check whether items are updating according to the feed file
Activate debug logging for SQL scripts and check when things were run

Enhancements

Is cognizant of all the ways users are using the product and is therefore able to determine the validity and impact of user enhancement requests
Determines the benefit to the business that the enhancement will have (quantified on a scale of 1-10) with associated reasoning
Understands the trade-off between costs and benefits and does not consider requests that would not add at least moderate benefit to the business
Always evaluates requests coming from users and does not simply forward them to Support
Understands all current application functionality to find ways to fulfill enhancement requests by existing means instead of submitting code change requests
Remembers to keep enhancement requests generic to the application and not tied to customer specific data, setup, or custom scripted functionality
Distinguishes between user requests for custom scripted functionality enhancements and product code change enhancements

Scripted solutions



Understands all custom scripted functions and solutions in order to be able to support them
Properly submits bug reports that relate only to IBM® Product Master script operations, not custom scripted functionality
Never sends an entire script to IBM Software Support directly; instead, identifies the problem in a small script test case

Errors

Always works to determine the root cause of an error, fix it, and find a workaround
Provides all findings and problem research in ESR case log
Creates test cases to reproduce the problem and records all such tests in the ESR case log
If the problem is not consistently reproducible, flags this in the ESR case log by describing its nature as intermittent if appropriate
If the error relates to the system, contacts the System Expert
If the error relates to the database, contacts the Database Administrator
Always thoroughly examines logs for all entries relating to the error event

Checking logs

Gets a timestamp for when the problem occurs and correlates errors or user observations with logs
Always checks audit logs to see what actions were occurring during a problem
Tries to determine if operations occurring in parallel are causing the problem
Understands different log levels and when to activate them (log4j2.xml file):
For example: setting up a monitoring window for all database SQLs to trap a problem
Understands impact to disk space and makes sure enough is available
Ensures that application logs never exceed the amount of free space available in the file system
When the customer environment consists of more than one server, knows which server to investigate
Knows which log files to examine:
Exception and default logs for script line numbers
Default for memory usage statistics
Knows how to get all the information needed from the logs


Administering Product Master system


Product Master system administrators can manage company data models, users, roles, and system services; define clusters; and monitor the system.

All shell scripts that are packaged with the product are bash shell scripts. They must be run in a bash shell environment, even if bash shell is not the default command shell. IBM® does not support modified scripts, nor problems that are directly or indirectly caused by scripts that you modified.

Company management
You can deploy, propagate, or create companies in Product Master.
Managing the system and services
You can start, stop, or cancel Product Master and all services from the command-line.
System administrator troubleshooting responsibilities
The system administrator is responsible for managing an organization's computer and operating systems: the day-to-day maintenance of the operating system, including backup and recovery, adding and deleting user accounts, and performing software upgrades, as well as installing, configuring, and maintaining the network. They work closely with the database administrator and the application and business expert to fix problems and increase performance, and ensure that all current operating system patches are applied.


Company management
You can deploy, propagate, or create companies in Product Master.

Creating a company
You can create a new company for production deployment or a test company that you can use to test your installation and initial login to IBM® Product Master.
Object index maintenance
To ensure that your rich search results are accurate, you can use the index regeneration script, indexRegenerator.sh, to regenerate object indexes.
Company deployment
Company deployment reduces your content modeling activities and improves administration productivity. You can deploy a company from your test instance to your
production instance of Product Master or you can propagate your company data model from one production instance to another.
Propagate a company
To deploy a company by propagating a data model between two production instances of Product Master, follow the instructions for data model deployment, but be
aware that some objects must be propagated in certain ways.



Creating a company
You can create a new company for production deployment or a test company that you can use to test your installation and initial login to IBM® Product Master.

Procedure
Run the script create_cmp.sh from the <install dir>/bin/db directory to create a company.

Syntax

create_cmp.sh --code=<company_code> --name=<company_name>

Parameters

--code
Specifies the company code. This parameter is required.
--name
Specifies the name of the company.

This script creates a log file called <install dir>/logs/create_cmp.log.
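For example, a sample invocation might look like the following; the company code and name are illustrative:

$TOP/bin/db/create_cmp.sh --code=acme --name="Acme Corporation"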


Object index maintenance


To ensure that your rich search results are accurate, you can use the index regeneration script, indexRegenerator.sh, to regenerate object indexes.

The indexRegenerator.sh script is not run automatically; however, to ensure that your search results are accurate, run the script after you change an attribute from non-
indexed to indexed.

You cannot run the indexRegenerator.sh script on workflows or collaboration areas.

Use the following guidelines to determine when to run the indexRegenerator.sh script:

If attributes are newly marked as indexed for existing items, those attributes are not updated in the relational table automatically, which would result in an incomplete result set. Running the indexRegenerator.sh tool parses all existing items and updates the relational table with all attribute values that are marked as indexed.
If a new version of the container is generated after you regenerate indexes for an object, you do not need to run the indexRegenerator.sh script.
If you roll back to a version of an object that includes regenerated indexes, you do not need to run the indexRegenerator.sh script.
If the object that you roll back to does not include regenerated indexes, you need to run the indexRegenerator.sh script.
Restriction: If you roll back a catalog or hierarchy object concurrently while the indexRegenerator.sh script is processing one of those objects, the
indexRegenerator.sh script exits and logs an error in the log file.

Regenerating indexes for objects


Use the indexRegenerator.sh script to regenerate the object indexes of a catalog, hierarchy, or all items that are specified in a CSV file.


Regenerating indexes for objects


Use the indexRegenerator.sh script to regenerate the object indexes of a catalog, hierarchy, or all items that are specified in a CSV file.

About this task


Due to performance impacts, run the indexRegenerator.sh script only during scheduled maintenance hours.

Procedure
Run the index regeneration script, indexRegenerator.sh from the $TOP/bin directory:
Option: For catalogs in a company
When you run the indexRegenerator.sh script, the items of the catalog are iterated through, and when the script completes, the items are saved. When you save the items, new indexes are created. This process can take hours to complete if there are many items in a catalog.

a. Run the indexRegenerator.sh script:

Syntax
indexRegenerator.sh --catalog=catalogName --company=companyName

Parameters

--catalog
The catalogName parameter is the name of the catalog.
You cannot combine the catalog parameter with any other parameters. For example, the --catalog and --hierarchy parameters cannot be specified together. If you specify the --catalog and --items parameters together, only .pk files are generated and index regeneration is not performed.
--company
The companyName parameter is the name of the company.

Option: For hierarchies in a company
When you run the indexRegenerator.sh script, the categories of the hierarchy are iterated through, and when the script completes, the category tree is saved. When you save the category tree, new indexes are created.

a. Run the indexRegenerator.sh script:

Syntax
indexRegenerator.sh --hierarchy=hierarchyName --company=companyName

Parameters

--hierarchy
The hierarchyName parameter is the name of the hierarchy.
You cannot combine the hierarchy parameter with any other parameters. For example, the --hierarchy and --catalog parameters cannot be specified together.
--company
The companyName parameter is the name of the company.

Option: For items in a company
When you run the indexRegenerator.sh script, the items are iterated through, and when the script completes, the items are saved. When you save the items, new indexes are created.

a. Prepare a CSV file to specify all the items that you want to regenerate indexes for.
b. Run the indexRegenerator.sh script:

Syntax
indexRegenerator.sh --items=CSV_file_dir --company=companyName --encoding=encoding

Parameters

--items
The CSV_file_dir parameter is the fully qualified directory of the CSV file.
You must specify pairs of catalog names and item primary key values for each item and separate each pair with a comma. For parameter values that contain spaces, you must enclose the parameter value between quotation marks ("). For special characters in parameter values that are not enclosed between quotation marks, you must escape every special character with a backslash (\).
If you specify more than one file, insert a file number before the file extension. For example, items.csv becomes items-1.csv or items-2.csv.
If you specify the --items and --catalog parameters together, only .pk files are generated and index regeneration is not performed.
--company
The companyName parameter is the name of the company.
--encoding
The encoding parameter is the encoding type of the items in the CSV file. If not specified, the value that is specified for the charset_value parameter in the common.properties file is used.

Example
In this example, the indexRegenerator.sh script is run on the items that are listed in the CSV file $TOP/item-list.csv, in the company that is named test_Co, and
uses the utf8 encoding type.

$TOP/bin/indexRegenerator.sh --items=$TOP/item-list.csv --company=test_Co --encoding=utf8
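Comparable invocations for the catalog and hierarchy modes might look like the following; the catalog and hierarchy names are illustrative:

$TOP/bin/indexRegenerator.sh --catalog="Master Catalog" --company=test_Co
$TOP/bin/indexRegenerator.sh --hierarchy="Web Hierarchy" --company=test_Co

A sample items CSV file, with one catalog name and item primary key pair per line (values are illustrative), might contain:

"Master Catalog",100001
"Master Catalog",100002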


Company deployment
Company deployment reduces your content modeling activities and improves administration productivity. You can deploy a company from your test instance to your
production instance of Product Master or you can propagate your company data model from one production instance to another.



You typically create a company and test it, and then deploy the company from a test instance to production. You deploy a company by exporting it from the test instance to
a compressed file and then importing the compressed file to the production instance.

You deploy your company through both the export and the import environments. You extract data model objects from your test instance of Product Master with the export
environment, which exports a compressed file into the document store. You can extract all objects of your data model or only the objects and object dependents that you specify.

To deploy your company on the production instance of Product Master, you use the import environment to import your compressed file.

The compressed file that you export for deployment can contain either the XML, comma-separated value (CSV), or both file formats depending on the object types that
you export. You can perform your company deployment from the user interface or from the command-line.

If you are deploying your company from one production instance to another production instance for data model propagation, see Propagate a company to review the
propagation restrictions.

Restrictions
The following restrictions exist for company deployment:

You cannot export or import data models as a means to migrate between different release versions of Product Master because the data model varies between each
release version. Company deployment is supported only between instances of the same release version.
Manual modification to the content in a compressed file is not supported and if you attempt to import a compressed file that was modified, the import might fail.
If you stop an import that you initiated, you might cause inconsistencies, and you need to manually remove any elements that were imported.
The export and import environments do not have transaction management to ensure that the objects in your data model are committed, so if your export or import operation is interrupted, your target instance might become inconsistent and unusable.
During an import, all rules on catalogs and hierarchies are disabled, including required fields, length validation, validation rules, value rules, and pre- and post-processing scripts.
See the troubleshooting checklists for workaround details for importing hierarchy content with categories that have a relationship attribute set and opening
compressed files that appear empty.

Restriction: Do not use the default company "trigo" in your product environment. For instance, not all the scripts are loaded into the document store if company "trigo" is
used.

Preparing to deploy a company


You must take steps to prepare for deploying your company to ensure that the company data model is deployed successfully.
Deploying a company
You can deploy a company using the user interface, command-line, or the Java™ API to export and import your company data models between instances of IBM®
Product Master.
Viewing your company deployment status
During the deployment process, you can view your export and import debug log files to view the status of the import operation. The debug log files display the
completion percentage of the process and include details of any errors that are encountered.


Preparing to deploy a company


You must take steps to prepare for deploying your company to ensure that the company data model is deployed successfully.

Use a phased approach to reduce the overall processing time of your company deployment and to help identify possible problem areas.

Review the complexity of the system implementation to determine what the features and data types are so that you can identify which of your objects might require
manual migration.


Deploying a company
You can deploy a company using the user interface, command-line, or the Java™ API to export and import your company data models between instances of IBM® Product
Master.

Limitations for deploying a company


When you deploy a company ensure that you are aware of the company export and import limitations.
Deploying a company from the user interface
To deploy a company, you can use the user interface to export your company data model from a source instance of IBM Product Master and import it into a target
instance.
Deploying a company from the command-line
To deploy a company, you can use the command-line to export your company data model from a source instance of IBM Product Master and import it into a target
instance.
Deploying a company using the Java APIs
To deploy a company, you can use the Java APIs to export your company data model from a source instance of IBM Product Master and import it into a target
instance.



Scripts for company deployment
You can create an export script to select your objects and object types for company deployment by manually writing the script or through the Selective Export
window in the user interface.
Specifying file names for exported objects
When you export objects from a company, an extra file that is called NameMapping.xml is created. This file provides a mapping for the objects that are exported and
the names of the corresponding files that are created. You can provide your own mapping XML file by using an optional parameter for the mapping file path of the
exportEnv script operation.
Packaging files with the command-line
You must package your company files before you can export or import the company data.


Limitations for deploying a company


When you deploy a company ensure that you are aware of the company export and import limitations.

Limitations for exporting a company


When you deploy a company, ensure that you are aware of the company export limitations.
Limitations for importing a company
When you deploy a company ensure that you are aware of the company import limitations.


Limitations for exporting a company


When you deploy a company, ensure that you are aware of the company export limitations.

The deployment process does not support the import and export of:
Inherited data
Catalog and hierarchy content that supports inheritance
Collaboration area content
Default objects
Role to locale mapping
Organization hierarchy content
Ordering information of catalog items
The deployment process does not support the export of content of non-persistent attributes during environment exports. The values of non-persistent attributes
are not stored in the database but are shown in the user interface by using the logic that is provided in the non-persistent script editor.
Keep the size of the document store as small as possible to avoid any Out Of Memory errors. You can make a backup of any previously exported compressed files
and delete them from the document store to reduce the document store size.
Sample script of document store export:

envObjList = new EnvObjectList();
envObjList.addAllObjectsToExport("DOC_STORE");
result = exportEnv(envObjList, "521docstore.zip");
out.writeln(result);

Modify your scripts in the source instance to make them compatible with the target instance. If your source instance of Product Master is an older release version, then after you import your document store, you need to modify all scripts in the target instance.
The company trigo, which is packaged by default, does not support a fully functional import or export environment. You must not use the company trigo as a source or target for imports or exports.

Basic objects
The basic objects with export limitations include:

COMPANY_ATTRIBUTES
SPEC
LOOKUP_TABLE
LOOKUP_TABLE_CONTENT

Sample script to manually export basic objects:

envObjList = new EnvObjectList();
envObjList.addAllObjectsToExport("COMPANY_ATTRIBUTES");
envObjList.addAllObjectsToExport("SPEC");
envObjList.addAllObjectsToExport("LOOKUP_TABLE");
envObjList.addAllObjectsToExport("LOOKUP_TABLE_CONTENT");
result = exportEnv(envObjList, "521basicdata.zip");
out.writeln(result);

Data definitions
The data definitions with export limitations include:

ACG
ROLES
USERS
ATTRIBUTE_COLS
HIERARCHY
CATALOG
CONTAINER_ACCESSPRV
WORKFLOW
COLLABORATION_AREA
MAPS
DATASOURCE
FEEDS
JOBS
DISTRIBUTION
EXPORTS
REPORTS
HIERARCHY_VIEW
CATALOG_VIEW
MY_SETTINGS
WEBSERVICE
DISTRIBUTION_GROUP
ALERT
QUEUE
UDL

Sample script to manually export data definitions:

envObjList = new EnvObjectList();
envObjList.addAllObjectsToExport("ACG");
envObjList.addAllObjectsToExport("ROLES");
envObjList.addAllObjectsToExport("USERS");
envObjList.addAllObjectsToExport("ATTRIBUTE_COLS");
envObjList.addAllObjectsToExport("HIERARCHY");
envObjList.addAllObjectsToExport("CATALOG");
envObjList.addAllObjectsToExport("CONTAINER_ACCESSPRV");
envObjList.addAllObjectsToExport("WORKFLOW");
envObjList.addAllObjectsToExport("COLLABORATION_AREA");
envObjList.addAllObjectsToExport("MAPS");
envObjList.addAllObjectsToExport("DATASOURCE");
envObjList.addAllObjectsToExport("FEEDS");
envObjList.addAllObjectsToExport("JOBS");
envObjList.addAllObjectsToExport("DISTRIBUTION");
envObjList.addAllObjectsToExport("EXPORTS");
envObjList.addAllObjectsToExport("REPORTS");
envObjList.addAllObjectsToExport("HIERARCHY_VIEW");
envObjList.addAllObjectsToExport("CATALOG_VIEW");
envObjList.addAllObjectsToExport("MY_SETTINGS");
envObjList.addAllObjectsToExport("WEBSERVICE");
envObjList.addAllObjectsToExport("DISTRIBUTION_GROUP");
envObjList.addAllObjectsToExport("ALERT");
envObjList.addAllObjectsToExport("QUEUE");
envObjList.addAllObjectsToExport("UDL");
result = exportEnv(envObjList, "521definition.zip");
out.writeln(result);

Data
The data with export limitations includes:

HIERARCHY_CONTENT
CATALOG_CONTENT
SELECTION
UDL_CONTENT

Sample script to manually export data:

envObjList = new EnvObjectList();
envObjList.addAllObjectsToExport("HIERARCHY_CONTENT");
envObjList.addAllObjectsToExport("CATALOG_CONTENT");
envObjList.addAllObjectsToExport("SELECTION");
envObjList.addAllObjectsToExport("UDL_CONTENT");
result = exportEnv(envObjList, "521data.zip");
out.writeln(result);


Limitations for importing a company


When you deploy a company ensure that you are aware of the company import limitations.



The following items, data, and objects must be manually imported for successful company deployment:

The deployment process does not support the import of:


Inherited data
Catalog and hierarchy content that supports inheritance
Collaboration area content
Default objects
Role to locale mapping
Organization hierarchy content
Ordering information of catalog items
The deployment process supports the import of:
Binary
Category
Currency
Date
Flag
Image
ImageURL
Integer
LookupTable
Number
NumberEnumeration
Password
String
StringEnumeration
ThumbnailImage
ThumbnailImageURL
Timezone
URL
The deployment process does not support the import of:
Relationship attribute information.
Items and categories with relationship information. All relationship information is lost in the target instance. For example, if an item has a spec node of type relationship and another item is assigned to that spec node, the assignment information is not present in the target instance.
Data with an integer value larger than the maximum Integer value.
For all imports, you should always use the scheduler for large volumes of data.
To avoid errors, you must ensure that all objects that you export for deployment are successfully imported into your target instance.
When you import jobs with an environment import, historic schedules are not imported. Historic schedules are non-repeating schedules in the past. Therefore, you see the following error in the import log: ERROR:CWXIM0272E:Create not supported for job of type CTGTODB. If this error message is thrown in the context of an environment import for objects of type "Job", it means that the job has only historic, non-repeating schedules and that the job is not imported into the target company. Job schedules that took place in the past in the source company are not imported into a company through an environment import.

The company trigo, which is packaged by default, does not support a fully functional import or export environment. You must not use the company trigo as a source or target for imports or exports.


Deploying a company from the user interface


To deploy a company, you can use the user interface to export your company data model from a source instance of IBM® Product Master and import it into a target
instance.

Before you begin


Before you can deploy a company from the user interface, you must:

Ensure that a company is designed, and the data model exists.


Ensure that a target instance of Product Master exists as the target of your company deployment.
If you plan to deploy a company between multiple production instances of Product Master, ensure that you create a system backup of the target instance before
you start the import.
Ensure that your Product Master is configured for the deployment process. To configure the Product Master, you must:
Ensure that you have the following memory configuration for the scheduler service in the .bashrc file:

export SCHEDULER_MEMORY_FLAG="-Xmx1024m -Xms48m"

Ensure that a compressed file of 200 MB or larger can be deployed into the target instance.
Ensure that you have the following browser timeout setting in the common.properties file:

max_inactive_interval=36000

Procedure
1. Create a report type in the source instance:
a. Open the Reports Console in the user interface: Product Manager > Reports > Reports Console
b. Create a report by clicking New.
c. Create a report type by clicking Select on the first row of the Create/Edit Report window.
2. Create your export script to select the object types you want to deploy:
Restriction: Do not use the default company "trigo" in your product environment. For instance, not all the scripts are loaded into the document store if company
"trigo" is used.



Option: To use the Selective Export window to generate your export script
See Generating export scripts for details on using the Selective Export window.

Option: To configure a script input spec and use a predefined export script
See Configuring the script input spec and using a predefined export script for details on creating your export script.

Option: To write your whole export script
a. Select any script from the Select Import Parameters Spec menu and click Select.
b. Enter a report type name in the Report Type field and click Next.
c. Select Regular from the Select Type menu and click Select.
d. In the Scriptlet Editor, enter your deployment export script. For additional information on writing your export script:
See Scripts for company deployment for information on how to write your export script.
See Object type dependencies to ensure that you add all dependent object types for every object type you add to your export scripts.
See Deployment action modes to ensure that you specify the action modes you want performed on your objects when they are imported.

3. Click above the Report Type window, then click to return to the Create/Edit Report window.
4. Create the report:
a. Select your report type from the Select Report Type menu and click Select.
b. Type a report name in the Report Name field and click Next.
c. Specify a distribution type:

To select an existing distribution:


Select a distribution type from the Select Distribution menu then click Select.
To create a new distribution:

i. Click on the third row of the Create/Edit Report window.


ii. Type a distribution name in the Distribution Name field and click Next.
iii. Select the distribution type from the Select Distribution Type menu and click Select.

iv. Depending on the distribution type you select, enter the requested information, then click .

d. Click to return to the Report Console window.


5. Set the parameter values:
a. In the Report Console window, click your report name to open the Parameters Value Set window.
b. In the Enter Values window, click each Value checkbox that you want to deploy.

c. Click above the Parameters Value Set window.

d. Click above the Parameters Value Set window.

6. Run the export. In the Report Console window, click to run the export.

7. To check the status of the job, click in the Schedule column.


Your compressed file is stored in the directory that you specify in the document store when the export completes.
The status bar reflects the percentage of the total number of object types that are exported or imported; it does not reflect the actual progress of the export or import process in terms of file size or time.

8. Go to the target instance and import the compressed file:


Option: Use the Import Company Environment window
a. In the target instance, open the Import Company Environment window: System Administrator > Import Environment
b. In the File field, specify the fully qualified directory of your compressed file or click Browse to locate the file.
c. Click Import to import the compressed file.
d. To view the files or the status, click the linked import results statement that appears in the Import Company Environment window.

Option: Use the Selective Import window
See Selectively importing objects for details on using the Selective Import window.

What to do next
You should log in to the target system and confirm that your data model is imported correctly. See Limitations for importing a company for details on the objects that you must ensure are imported.

Selectively importing objects


You can use the Selective import window to selectively import objects into Product Master.
Configuring the script input spec and using a predefined export script
You can configure the script input spec and use a predefined export script to deploy your company from the user interface.


Selectively importing objects


You can use the Selective import window to selectively import objects into Product Master.

Use the Selective import window to specify objects that you want to import from an exported Product Master compressed file.

After you select the objects you want to import, Product Master performs a dependency verification check on all of the objects you specify before it creates a new compressed file for import. The new import compressed file is created with the import_File_Name file name, in the $TOP/public_html/suppliers/company_Name/envexpimp/Time_Stamp directory, where import_File_Name is the file name of the initial compressed file, company_Name is the name of your company, and Time_Stamp is a time stamp of when the file was created.

For compressed files that were exported from earlier versions of Product Master, the Action tag and specified action mode are not available, so all object imports use the default action mode: CREATE_OR_UPDATE.


Configuring the script input spec and using a predefined export script
You can configure the script input spec and use a predefined export script to deploy your company from the user interface.

Procedure
1. In the IBM® Product Master instance, create a script input spec:

a. Click on the first row of the Report Type window.


b. Type a spec name in the Spec Name field and click Next.
c. In the spec details that appear, click next to the spec name that you specified.
2. Specify the object types that you want to deploy in the Add Attribute to Spec field and click to add the object type.
You can specify the following object types:
Lookup tables
Roles
Users
Scripts
Feeds
Exports
Schedules
The object type attribute names are case-sensitive and you must ensure that you add every dependent object type for every object type that you want to deploy.
See Object type dependencies for details.

Each attribute is created as type flag.

3. Optional: To capture an exported archive file path, specify String in the Type menu in the Details window.
4. In the Details window, add each additional object type by clicking next to the name of the spec you created and click Save at the top of the window when you

finish adding all the object types you want to deploy then click to return to the Report Type window.
5. Select your spec name from the Select Import Parameters Spec menu and click Select.
6. Type a report type name in the Report Type field and click Next.
7. Select Regular from the Select Type menu and click Select.
8. In the Scriptlet Editor window, enter the following export script

function isChecked(paramName)
{
return checkString(inputs[paramName], "FALSE") == "TRUE";
}
var sExportFilePath = "/archives/export." + formatDate(today(), "hhmmss") + ".zip";
catchError(e)
{
exportList = new EnvObjectList();

if (isChecked("Lookup tables"))
{ exportList.addAllObjectsToExport("LOOKUP_TABLE");
}
if (isChecked("Roles"))
{
exportList.addAllObjectsToExport("ROLES");
}
if (isChecked("Users"))
{
exportList.addAllObjectsToExport("USERS");
}
if (isChecked("Scripts"))
{
exportList.addObjectByNameToExport("scripts", "DOC_STORE");
}
if (isChecked("Feeds"))
{
exportList.addAllObjectsToExport("FEEDS");
}
if (isChecked("Exports"))
{
exportList.addAllObjectsToExport("EXPORTS");
}
if (isChecked("Schedules"))
{
exportList.addAllObjectsToExport("JOBS");
}
exportList.exportEnv(sExportFilePath);
}



if (e == null)
{
out.writeln("Export ran successfully <a target=\"_blank\"
href="#_home_markdown_jenkins_workspace_Transform_in_SSADN3_12.0.0_administering_sys_admin_ +
getHrefForDocPath(sExportFilePath) + ">[Export]</a>");
}
if (e != null)
{
out.writeln("Export failed to run [" + e + "]");
}

What to do next
Return to Deploying a company from the user interface and complete the remaining company deployment steps.


Deploying a company from the command-line


To deploy a company, you can use the command-line to export your company data model from a source instance of IBM® Product Master and import it into a target
instance.

Before you begin


Before you can deploy a company from the command-line, you must:

Ensure that a company is designed, and the data model exists.


Ensure that a target instance of Product Master exists as the target of your company deployment.
Ensure that you create a system backup of the target instance before you start the import if you deploy a company between multiple production instances of Product Master.
Ensure that your Product Master is configured for the deployment process. To configure the Product Master, you must:
Ensure that you have the following memory configuration for the scheduler service in the .bashrc file:

export SCHEDULER_MEMORY_FLAG="-Xmx1024m -Xms48m"

Ensure that a compressed file of 200 MB or larger can be deployed into the target instance.
Ensure that you have the following browser timeout setting in the common.properties file:

max_inactive_interval=36000

Procedure
1. Create your export script and save it to your source Product Master application server:
Option: To write your whole export script
See Scripts for company deployment for information on how to write your export script.
See Object type dependencies to ensure that you add all dependent object types for every object type you add to your export scripts.
See Deployment action modes to ensure that you specify the action modes you want performed on your objects when they are imported.

Option: To use the Selective Export window in the user interface to generate your export script
See Generating export scripts for details on using the Selective Export window.

Option: To configure a script input spec and use a predefined export script
See Configuring the script input spec and using a predefined export script for details on creating your export script.
2. Log in to your source Product Master application server.
3. In the command-line, run the exportCompanyAsZip.sh shell script that is in the $TOP/bin directory:

Syntax

exportCompanyAsZip.sh --company_code=code --script_path=script_Path

Parameters

--company_code=code
Specifies the company code, where code is your company code.
--script_path=script_path
Specifies the location of your export script, where script_path is the fully qualified path and file name of your export script on your source Product Master.

4. Ensure that the compressed file is in the local file system of your target Product Master application server.
5. In the command-line, run the importCompanyFromZip.sh shell script that is in the $TOP/bin directory:

Syntax

importCompanyFromZip.sh --company_code=code --zipfile_path=zipFile_Path

Parameters



--company_code=code
Specifies the company code, where code is your company code.
--zipfile_path=zipFile_Path
Specifies the location of the compressed file, where zipFile_Path is the fully qualified path and file name of the exported compressed file on your target Product Master.
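For example, a sample export followed by the corresponding import; the company code and file paths are illustrative:

$TOP/bin/exportCompanyAsZip.sh --company_code=acme --script_path=$TOP/scripts/acme_export.script
$TOP/bin/importCompanyFromZip.sh --company_code=acme --zipfile_path=$TOP/archives/acme_export.zip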

What to do next
You should log in to the target system and confirm that your data model is imported correctly. See Limitations for importing a company for details on the objects that you must ensure are imported.


Deploying a company using the Java APIs


To deploy a company, you can use the Java™ APIs to export your company data model from a source instance of IBM® Product Master and import it into a target instance.

Before you begin


Before you can deploy a company using the Java APIs, you must:

Ensure that a company is designed, and the data model exists.


Ensure that a target instance of Product Master exists as the target of your company deployment.
Ensure that you create a system backup of the target instance before you start the import if you deploy a company between multiple production instances of
Product Master.
Ensure that your Product Master is configured for the deployment process. To configure the Product Master, you must:
Ensure that you have the following memory configuration for the scheduler service in the .bashrc file:

export SCHEDULER_MEMORY_FLAG="-Xmx1024m -Xms48m"

Ensure that a compressed file of 200 MB or larger can be deployed into the target instance.
Ensure that you have the following browser timeout setting in the common.properties file:

max_inactive_interval=36000

Procedure
Write your code to use the Java APIs provided for this feature.
See IBM® Javadoc Documentation to view all the available information about the parameters, returns, exceptions, and syntax for the deployment of a company.

What to do next
You must log in to the target system and confirm that your data model is imported correctly. See Limitations for importing a company for details on the objects that you must ensure are imported.

Java interfaces for exporting and importing data models


You can use the Java API to export and import your data models between source and target instances of Product Master.


Java interfaces for exporting and importing data models


You can use the Java™ API to export and import your data models between source and target instances of Product Master.

When you export many Product Master objects, the export might fail with the following error if the data files are too large:

Error occurred while archiving..., Exception; A file cannot be larger than the value set by ulimit.
java.io.IOException: A file cannot be larger than the value set by ulimit
The reason is that Java limits the size of compressed files to 2 GB. If the objects that you are exporting will create a data file that is larger than 2 GB, the export fails with
this error. Any data that exceeds this 2 GB limit is not included in the compressed file.

A large export can be divided into smaller exports where each export would have fewer objects than those in the original export. If there is a single object that is larger
than 2 GB and cannot be exported in smaller exports, contact IBM® support for further assistance.
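A minimal sketch of such a split, using the ExportList methods shown in the samples that follow, might look like the following. The object-type grouping and file paths are illustrative assumptions, and the try/catch error handling shown in the full samples is omitted for brevity.

Context ctx = PIMContextFactory.getContext("Admin", "xxx", "MyCompany");
EnvironmentExporter envExporter = ctx.getEnvironmentExporter();

// First, smaller export: specs only
ExportList specList = envExporter.createExportList();
specList.addAllObjects(ExportList.Type.SPEC, ExportList.ActionMode.CREATE_OR_UPDATE);
envExporter.export(specList, "/data/wpc-specs.zip", true);

// Second, smaller export: catalogs only
ExportList catalogList = envExporter.createExportList();
catalogList.addAllObjects(ExportList.Type.CATALOG, ExportList.ActionMode.CREATE_OR_UPDATE);
envExporter.export(catalogList, "/data/wpc-catalogs.zip", true);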

See IBM Javadoc Documentation to view all of the available information on the parameters, returns, exceptions, and syntax for all Java classes.

Sample EnvironmentExporter interface usage


Use the EnvironmentExporter interface to create an export list container object that holds the names of objects to be exported and the action modes that each



object uses during import. The interface also defines methods for exporting objects of a specified export list container object into a compressed file in the
document store.
Sample ExportList interface usage
Use the ExportList interface to create an export list container object that lists all the object names and types to be exported and the action mode that each object
uses during import.
Sample ExportRequisiteList interface usage
Use the ExportRequisiteList interface to determine and retrieve all the dependent objects for objects in an export list container object and hold them in a requisite list container object.
Sample EnvironmentImporter interface usage
Use the EnvironmentImporter interface to import compressed files that were exported from a source instance into a target instance of Product Master.
Sample ImportList interface usage
Use the ImportList interface to create an import list container object to hold the names, types, and action modes of the objects that are in a given compressed file
that was created in the export environment of the source instance of Product Master.
Sample ImportRequisiteList interface usage
Use the ImportRequisiteList interface to determine and retrieve all the dependent objects for objects in an import list container object and hold the object names in a requisite list container object.


Sample EnvironmentExporter interface usage


Use the EnvironmentExporter interface to create an export list container object that holds the names of objects to be exported and the action modes that each object uses
during import. The interface also defines methods for exporting objects of a specified export list container object into a compressed file in the document store.

See IBM® Javadoc Documentation to view all the available information about the parameters, returns, exceptions, and syntax for the EnvironmentExporter class.

This sample code uses the export(ExportList exportList, java.lang.String documentPath, boolean checkForRequisites) method to export all the objects in the export list container object, exportList, to the /data/wpc.zip file in the document store. If the export fails, the export is attempted again after the export list container object is updated to include all dependent objects.

Context ctx = null;


EnvironmentExporter envExporter = null;
ExportList exportList = null;
try
{
ctx = PIMContextFactory.getContext("Admin","xxx","MyCompany");
envExporter= ctx.getEnvironmentExporter();
exportList=envExporter.createExportList();
//Add objects to the exportList
try{
envExporter.export(exportList, "/data/wpc.zip", true);
}
catch (PIMInternalException exc)
{
exportList.addAllRequisites();
envExporter.export(exportList, "/data/wpc.zip", true);
//Modify the exportList to include required objects and retry the
//export
}

}
catch (PIMAuthorizationException ae)
{
// Expected a failure
System.out.println("Authorization Failure");
return;
}
catch(PIMInternalException ie)
{
System.out.println("Internal Error");
return;
}
catch(Exception e)
{
System.out.println(e.getMessage());
}

This sample code uses the export(ExportList exportList, java.lang.String documentPath, boolean checkForRequisites, java.lang.String mappingPath) method to export all the objects in the export list container object, exportList, to the /data/wpc.zip file. It creates the file names inside the compressed file package according to the name mapping provided in the XML file at mappingPath.

Context ctx = null;


EnvironmentExporter envExporter = null;
ExportList exportList = null;
try
{
ctx = PIMContextFactory.getContext("Admin","xxx","MyCompany");
envExporter= ctx.getEnvironmentExporter();
exportList=envExporter.createExportList();
//Add objects to the exportList
try{
envExporter.export(exportList, "/data/wpc.zip", true, "/config/namemapping.xml");
}
catch (PIMInternalException exc)



{
exportList.addAllRequisites();
envExporter.export(exportList, "/data/wpc.zip", true, "/config/namemapping.xml");
//Modify the exportList to include required objects and retry the
//export
}

}
catch (PIMAuthorizationException ae)
{
// Expected a failure
System.out.println("Authorization Failure");
return;
}
catch(PIMInternalException ie)
{
System.out.println("Internal Error");
return;
}
catch(Exception e)
{
System.out.println(e.getMessage());
}


Sample ExportList interface usage


Use the ExportList interface to create an export list container object that lists all the object names and types to be exported and the action mode that each object uses
during import.

See IBM® Javadoc Documentation to view all of the available information on the parameters, returns, exceptions, and syntax for the ExportList class.

This sample code shows two calls to the addAllObjects(ExportList.Type type, ExportList.ActionMode actionMode) method. The first call adds the objects of type ExportList.Type.CATALOG to the export list container object with their action modes specified as ExportList.ActionMode.CREATE_OR_UPDATE. The second call adds the objects of type ExportList.Type.SPEC to the export list container object with their action modes specified as ExportList.ActionMode.CREATE_OR_UPDATE.

Context ctx = null;


EnvironmentExporter envExporter = null;
ExportList exportList = null;

try
{
ctx = PIMContextFactory.getContext("Admin","xxx","MyCompany");
envExporter = ctx.getEnvironmentExporter();
exportList=envExporter.createExportList();
try{
exportList.addAllObjects(ExportList.Type.CATALOG, ExportList.ActionMode.CREATE_OR_UPDATE);
exportList.addAllObjects(ExportList.Type.SPEC, ExportList.ActionMode.CREATE_OR_UPDATE);
}
catch(IllegalArgumentException exc)
{
System.out.println(exc.getMessage());
}
exportList.addAllRequisites();
envExporter.export(exportList, "/envexport/wpc.zip");

}
catch (PIMAuthorizationException ae)
{
// Expected a failure
System.out.println("Authorization Failure");
return;
}
catch(PIMInternalException ie)
{
System.out.println("Internal Error");
return;
}
catch(Exception e)
{
System.out.println(e.getMessage());
}


Sample ExportRequisiteList interface usage



Use the ExportRequisiteList interface to determine and retrieve all the dependent objects for objects in an export list container object and hold them in a requisite list container object.

See IBM® Javadoc Documentation to view all of the available information on the parameters, returns, exceptions, and syntax for the ExportRequisiteList class.

This sample code shows the getRequisiteTypes() method that is being used to retrieve the dependent object types for all the objects in the export list container object that
correspond to the objects in the requisite list container object.

Context ctx = null;


EnvironmentExporter envExporter = null;
ExportList exportList = null;
try
{
ctx = PIMContextFactory.getContext("Admin","xxx","MyCompany");
envExporter = ctx.getEnvironmentExporter();
exportList=envExporter.createExportList();
try{
exportList.addAllObjects(ExportList.Type.CATALOG, ExportList.ActionMode.CREATE_OR_UPDATE);
ExportRequisiteList requisites = exportList.getRequisites();
for(ExportList.Type objectType : requisites.getRequisiteTypes())
{
if (ExportList.Type.SPEC.equals(objectType))
{
exportList.addAllObjects(ExportList.Type.SPEC, ExportList.ActionMode.CREATE_OR_UPDATE);
}
}
}
catch(PIMInternalException exc)
{
System.out.println("Action Mode not permitted: " + exc.getMessage());
}
}
catch (PIMAuthorizationException ae)
{
// Expected a failure
System.out.println("Authorization Failure");
return;
}
catch(PIMInternalException ie)
{
System.out.println("Internal Error");
return;
}
catch(Exception e)
{
System.out.println(e.getMessage());
}


Sample EnvironmentImporter interface usage


Use the EnvironmentImporter interface to import compressed files that were exported from a source instance into a target instance of Product Master.

See IBM® Javadoc Documentation to view all of the available information on the parameters, returns, exceptions, and syntax for the EnvironmentImporter class.

This sample code shows the getImportList(Document document) method that is being used to retrieve the list of the objects from the document /data/wpc.zip in the document store, and then uses the importEnvironment(Document document) method to import the document into the file system of the company.

Context ctx = null;


EnvironmentImporter envImporter = null;
ImportList importList = null;
try
{
ctx = PIMContextFactory.getContext("Admin","xxx","MyCompany");
envImporter = ctx.getEnvironmentImporter();
Document document = ctx.getDocstoreManager().getDocument("/data/wpc.zip");
importList=envImporter.getImportList(document);
envImporter.importEnvironment(document);
}
catch (PIMAuthorizationException ae)
{
// Expected a failure
System.out.println("Authorization Failure");
return;
}
catch(PIMInternalException ie)
{
System.out.println("Internal Error");
return;
}
catch(Exception e)
{
System.out.println(e.getMessage());
}


Sample ImportList interface usage


Use the ImportList interface to create an import list container object to hold the names, types, and action modes of the objects that are in a given compressed file that was
created in the export environment of the source instance of Product Master.

You use the import list container object to create new compressed files that contain only the objects that you specify for import into the target instance of Product Master.

See IBM® Javadoc Documentation to view all of the available information on the parameters, returns, exceptions, and syntax for the ImportList class.

This sample code shows the getObjectTypes() method that is being used to retrieve all the object types from the import list container object, then uses the removeObject(ExportList.Type type, java.lang.String objectName) method to remove the object of type Type.CATALOG with name MyCatalog from the import list container object. The removeObjects(ExportList.Type type) method is also used to remove all objects of type Type.CATALOG_CONTENT. The createZip(java.lang.String documentPath) method is then used to create a compressed file that contains only the objects that remain in the import list container object and stores the compressed file at the /data/wpc.zip path in the document store.

Context ctx = null;
EnvironmentImporter envImporter = null;
ImportList importList = null;
try
{
  ctx = PIMContextFactory.getContext("Admin", "xxx", "MyCompany");
  envImporter = ctx.getEnvironmentImporter();
  Document archiveDocument = ctx.getDocstoreManager().getDocument("/data/wpc.zip");
  importList = envImporter.getImportList(archiveDocument);
  Set<Type> objTyps = importList.getObjectTypes();
  importList.removeObject(Type.CATALOG, "MyCatalog");
  importList.removeObjects(Type.CATALOG_CONTENT);
  importList.createZip("/data/wpc.zip");
}
catch (PIMAuthorizationException ae)
{
  // Expected a failure
  System.out.println("Authorization Failure");
  return;
}
catch (PIMInternalException ie)
{
  System.out.println("Internal Error");
  return;
}
catch (Exception e)
{
  System.out.println(e.getMessage());
}

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Sample ImportRequisiteList interface usage


Use the ImportRequisiteList interface to determine and retrieve all the dependent objects for the objects in an import list container object and to hold the object names in a
requisite list container object.

The requisite list container object lists all the dependent object names that are required for import but are not specified in the import list container object. The object
names in the object container are obtained from the objects in the compressed file. The object names were exported from the source Product Master instance.

Note: A verification check is not performed during import to determine whether the objects exist in the destination Product Master instance from where you start these
Java™ methods.
See IBM® Javadoc Documentation to view all of the available information on the parameters, returns, exceptions, and syntax for the ImportRequisiteList class.

This sample code shows the getRequisiteObjects(ExportList.Type type) method being used to retrieve all objects of type ExportList.Type.SPEC. The
getRequisiteTypesForObject(ExportList.Type type, java.lang.String objectName) method is also used to retrieve all of the object types that the object with the specified type
and name depends on in the import list container object. The getAllRequisiteObjectsForObjectByType(ExportList.Type importObjectType, java.lang.String objectName,
ExportList.Type requisiteObjectType) method is then used to retrieve all of the objects of type reqdType that the object with the specified type and name depends on in the
import list container object.

Context ctx = null;
EnvironmentImporter envImporter = null;
ImportList importList = null;
try
{
  ctx = PIMContextFactory.getContext("Admin", "xxx", "MyCompany");
  envImporter = ctx.getEnvironmentImporter();
  Document archive = ctx.getDocstoreManager().getDocument("/data/wpc.zip");
  importList = envImporter.getImportList(archive);
  importList.removeObjects(ExportList.Type.SPEC);
  ImportRequisiteList requisites = importList.getRequisites();
  List<String> specNames = requisites.getRequisiteObjects(ExportList.Type.SPEC);
  for (String name : specNames)
  {
    try
    {
      importList.addObject(ExportList.Type.SPEC, name);
    }
    catch (PIMInternalException exc)
    {
      System.out.println("Spec " + name + " does not exist in archive");
    }
  }

  // For each object type
  for (Type type : importList.getObjectTypes())
  {
    // Get the objects of the type
    for (String name : importList.getObjectNames(type))
    {
      // For each object, get the types that this object needs; for instance, CATALOG needs SPEC, HIERARCHY
      for (Type reqdType : requisites.getRequisiteTypesForObject(type, name))
      {
        System.out.println(type + " " + name + " requires the following objects of type " + reqdType);
        // Get the objects of each type; for example, CATALOG MyCatalog depends on SPEC primSpec and secSpec
        for (String reqdObjectName : requisites.getAllRequisiteObjectsForObjectByType(type, name, reqdType))
        {
          System.out.println(reqdObjectName);
        }
      }
    }
  }
}
catch (PIMAuthorizationException ae)
{
  // Expected a failure
  System.out.println("Authorization Failure");
  return;
}
catch (PIMInternalException ie)
{
  System.out.println("Internal Error");
  return;
}
catch (Exception e)
{
  System.out.println(e.getMessage());
}

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Scripts for company deployment


You can create an export script that selects your objects and object types for company deployment, either by manually writing the script or through the Selective Export window in
the user interface.

You must create scripts to deploy your company from either the command line or the user interface. Create your deployment scripts to specify the objects and object
types that you want to deploy from a source Product Master system to a destination system. You can use any combination in your scripts; for example, you can
include all objects of a certain object type and only certain objects of another object type.

Use the following two example scripts to develop your deployment scripts. See Deployment script operations to view the descriptions for script operations in the scripts.

Script example to deploy object types with all the object type entities:

envObjList = new EnvObjectList();

envObjList.addAllObjectsToExport("CATALOG");
envObjList.addAllObjectsToExport("CATALOG_VIEW");
envObjList.addAllObjectsToExport("HIERARCHY_VIEW");
envObjList.addAllObjectsToExport("MAPS");
envObjList.addAllObjectsToExport("LOOKUP_TABLE");

//ADD ADDITIONAL OBJECT TYPES HERE

sDocFilePath = "archives/mycompleteexport.zip";
exportEnv(envObjList, sDocFilePath);

If you want to specify the name mapping file, then you must add the optional mappingFilePath parameter. The following example code enables you to specify the
optional mappingFilePath parameter.

envObjList = new EnvObjectList();

envObjList.addAllObjectsToExport("CATALOG");
envObjList.addAllObjectsToExport("CATALOG_VIEW");
envObjList.addAllObjectsToExport("HIERARCHY_VIEW");
envObjList.addAllObjectsToExport("MAPS");
envObjList.addAllObjectsToExport("LOOKUP_TABLE");

//ADD ADDITIONAL OBJECT TYPES HERE

sDocFilePath = "archives/mycompleteexport.zip";
mappingFilePath = "archives/nameMapping.xml";
exportEnv(envObjList, sDocFilePath, mappingFilePath);

For more information, see Specifying file names for exported objects.
Script example to deploy object types with only the object type entities that the script specifies:
When you specify the catalog and hierarchy content object types, you must:

1. Specify the object type.
2. Set the name of the hierarchy or catalog whose content to export with the setHierarchyByNameToExport or setCatalogByNameToExport script operation.
3. Set the attribute collections of the hierarchy or catalog content with the addObjectByNameToExport or addAllObjectsToExport script operation.
4. Place envObjList.addAllObjectsToExport() after both the envObjList.setTypeToExport("CATALOG_CONTENT") and
envObjList.setCatalogByNameToExport("Catalog Name") operations if you have catalogs that are set up without a user-defined core attribute collection.

For item-to-category maps and hierarchy maps, use the following script operations:

1. Set the source and destination hierarchies to export with the setHierarchyMapToExport script operation.
2. Set both the catalog and hierarchy names that the items are mapped to with the setItemCategoryMapToExport script operation.

//Create the list for export
envObjList = new EnvObjectList();

//Set the type of object to be exported
envObjList.setTypeToExport("CATALOG");

//Set the entities for that specified object type
//"Admin Ctg" will be exported in "CREATE" mode
envObjList.addObjectByNameToExport("Admin Ctg", "CREATE");

//"CE Admin 1 Ctg" will be exported in "UPDATE" mode
envObjList.addObjectByNameToExport("CE Admin 1 Ctg", "UPDATE");

//Set the object type to export
envObjList.setTypeToExport("HIERARCHY_CONTENT");

//Set the name of the hierarchy whose content needs to be exported
envObjList.setHierarchyByNameToExport("CE Master CatTree");

//Add the list of attribute collections to be exported
envObjList.addObjectByNameToExport("Attr Collec1");

//Set the object type to export
envObjList.setTypeToExport("CATALOG_CONTENT");

//Set the name of the catalog whose content needs to be exported
envObjList.setCatalogByNameToExport("CE Master CTG");

//Add the list of attribute collections to be exported
envObjList.addObjectByNameToExport("Attr Collec2");

//Set the hierarchy maps to export; Hierarchy1 refers to the source
//hierarchy and Hierarchy2 refers to the destination hierarchy
//The CREATE_OR_UPDATE action mode will be used
envObjList.setHierarchyMapToExport("Hierarchy1", "Hierarchy2");

//Set the item-to-category maps to export; CATALOG refers to the
//catalog name and HIERARCHY refers to the hierarchy name
envObjList.setItemCategoryMapToExport("CATALOG", "HIERARCHY");

//Set the document store path to export files
envObjList.setTypeToExport("DOC_STORE");
envObjList.addObjectByNameToExport("/scripts");

//ADD ADDITIONAL OBJECT TYPES HERE

//The document store path where the exported file is stored
sDocFilePath = "archives/partialcontentexport.zip";

//Pass the object list and the document store path to the exportEnv operation
exportEnv(envObjList, sDocFilePath);

For this example script to work successfully, you must revise the attribute collection Attr Collec2 and set it to your user-defined core attribute collection for the
catalog.

Generating export scripts


You can use the Selective export window to generate an export script that contains all the objects you want to export.
Deployment script operations
You can use script operations to export your objects and data models for company deployment in Product Master.
Deployment action modes
You can define the action mode of each object that you export to specify what action is performed on the object when it is imported.
Object type dependencies
You can export and import objects that are based on their dependency to other objects within the data model.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Generating export scripts



You can use the Selective export window to generate an export script that contains all the objects you want to export.

All Product Master object types are displayed in the Selective export window for you to specify which objects you want to include in your export script. You can specify the
object type and action mode for each object type or you can select specific objects and specify each object's action mode. When you generate an export script, Product
Master performs a dependency verification check on all of the objects you specify to ensure that your export script is valid.

You can also define the action mode for each object that you export to specify how the object is imported. When you generate the export script, it is displayed in the Selective
export window and can be saved as a report in the document store.

Remember: You can use the name mapping even after the script is generated. To do so, add the parameter for the name mapping file path in the exportEnv operation
within the script before you use it, as shown in the following example.
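
For example, if the generated script ends with exportEnv(envObjList, sDocFilePath);, pass the mapping file path as a third argument (the path in this sketch is hypothetical):

exportEnv(envObjList, sDocFilePath, "archives/nameMapping.xml");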

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Deployment script operations


You can use script operations to export your objects and data models for company deployment in Product Master.

The following table shows each export script operation and description. You can create scripts that call these script operations to export your company during
deployment. You can combine the script operations to export specific object types in separate parts, or you can export your complete data model.
Table 1. Script operations for exporting a company for deployment

new EnvObjectList
Returns a container for exporting the object types. This class is used to add and retrieve the object types for export.

setTypeToExport
Sets the object types for export.

addObjectByNameToExport
Sets the object for export by specifying the object name as an argument. Use the optional argument, sActionMode, to specify the action mode that the
object uses for export. If you do not provide a value for this argument, the default action mode CREATE_OR_UPDATE is used on the object during import.
For catalog and hierarchy content export, use this script operation to specify the attribute collection that is associated with the object.
For a document store partial export, use this script operation to specify the document store path.

addAllObjectsToExport
Sets all the entities of a specific object type for export. Use the optional argument, sActionMode, to specify the action mode that the object uses for
export. If you do not provide a value for this argument, the default action mode CREATE_OR_UPDATE is used on the object during import.

setCatalogByNameToExport
Sets the catalog contents for export.

setHierarchyByNameToExport
Sets the hierarchy contents for export.

getCatalogNameToExport
Returns the last values that are set with setCatalogByNameToExport.

setHierarchyMapToExport
Sets the source and destination hierarchy mappings for export. Use the optional argument, sActionMode, to specify the action mode that the object
uses for export. If you do not provide a value for this argument, the default action mode CREATE_OR_UPDATE is used on the object during import.

setItemCategoryMapToExport
Sets the catalog and hierarchy item category mappings for export. Use the optional argument, sActionMode, to specify the action mode that the object
uses for export. If you do not provide a value for this argument, the default action mode CREATE_OR_UPDATE is used on the object during import.

getHierarchyNameToExport
Returns the last values that are set with setHierarchyByNameToExport.

getTypeToExport
Returns the last object type that is set with setTypeToExport.

getTypesToExport
Returns all the object types that are set with setTypeToExport.

exportEnv
Exports the object types that are specified in envObjList at the specified document store path. Use the argument, sDocFilePath, to specify the
directory file path of the compressed file that you export into the document store.
To create a new instance, export only the necessary objects of the document store.

importEnv
Imports the content of the exported archive file into the company and returns a string that represents the debug log.
You can print the debug log or send it as an email.

setActionModeToExport
Sets the action mode on subsequent objects that are added to the export list that you use for export. See Deployment action modes for details.
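
The setActionModeToExport operation is not demonstrated in the earlier examples. The following minimal sketch, which assumes that a catalog that is named MyCatalog exists in your company and that the operation accepts the action mode as an uppercase string, shows how it applies an action mode to the objects that are added after it:

envObjList = new EnvObjectList();
envObjList.setTypeToExport("CATALOG");

//Objects added after this call are exported with the UPDATE action mode
envObjList.setActionModeToExport("UPDATE");
envObjList.addObjectByNameToExport("MyCatalog");

exportEnv(envObjList, "archives/updatemodeexport.zip");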

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Deployment action modes


You can define the action mode of each object that you export to specify what action is performed on the object when it is imported.

You can choose from four action modes when you export objects to define how they are imported into Product Master.



When you create an export script and specify action modes for your objects, an Action tag is stored in the corresponding XML file of your export compressed file to specify
the action mode of each object.

The four action modes are:

CREATE
The CREATE action mode creates the object in the specified company of the target system. An error message is logged if you attempt to import an object that exists
in the company.
Limitations:

Jobs of executable type, except for the Import Environment objects, cannot be created through an export or import process.
The following objects are always updated, or created if they do not exist:
Lookup table content
Hierarchy content
Catalog content

UPDATE
The UPDATE action mode updates an existing object in the target system. An error message is logged if you attempt to import an object that does not exist in the
company. Only the attributes of objects that can be modified from the user interface or through a script are modified when you use this mode.
Limitations:

The UPDATE action mode is not applicable with the following object types:
Exports
Feeds
Collaboration areas
Reports
When an alert object is exported and then imported with the UPDATE action mode, a new alert is created even if an alert with the same description exists.
The following objects are always updated, or created if they do not exist:
Lookup table content
Hierarchy content
Catalog content
The UPDATE action mode is the only action that you specify for the following object types:
Company attributes: You can only specify to update either all or none of the objects.
My Settings: You can only specify to update either all or none of the objects.

DELETE
The DELETE action mode deletes the object from the specified company of the target system. An error message is logged if you attempt to delete an object that
does not exist in the company.
Limitation:

The DELETE mode is not applicable with the following object types:
Access control group (ACG)
Users
Company attributes
My settings
Alerts
Jobs of executable type, except for the Import Environment objects, cannot be deleted through an export or import process.

CREATE_OR_UPDATE
The CREATE_OR_UPDATE action mode creates an object if it does not exist, and updates the object if it exists in the company of the target system. This action mode
is the default mode and is used when you do not define the action mode.
When you import compressed files that were generated in earlier versions of Product Master, the CREATE_OR_UPDATE action mode is used.

Limitations:

The UPDATE action mode is not applicable with the following object types:
Exports
Imports
Feeds
Collaboration areas
Reports
When an alert object is exported and then imported with the UPDATE action mode, a new alert is created even if an alert with the same description exists.
Jobs of executable type, except for the Import environment, cannot be created through an export or import process.
The following objects are always updated, or created if they do not exist:
Lookup table content
Hierarchy content
Catalog content

For writing scripts, ensure that you specify the action modes by using uppercase text only. An error message is logged if you specify an incorrect action mode.
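
For example, the following minimal script sketch (the catalog name is hypothetical) specifies an uppercase action mode explicitly:

envObjList = new EnvObjectList();
envObjList.setTypeToExport("CATALOG");

//Action modes must be uppercase: CREATE, UPDATE, DELETE, or CREATE_OR_UPDATE
envObjList.addObjectByNameToExport("MyCatalog", "CREATE_OR_UPDATE");

exportEnv(envObjList, "archives/actionmodeexport.zip");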

When you import jobs with an environment import, historic schedules are not imported. Historic schedules are schedules in the past that do not repeat. Therefore, you see
the following error in the import log: ERROR:CWXIM0272E:Create not supported for job of type CTGTODB. When this error message is thrown in the context of an
environment import for objects of type "Job", it means that the job has only historic, non-repeating schedules and that the job is not imported into the target
company. Job schedules that took place in the past in the source company are not imported into a company through an environment import.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Object type dependencies


You can export and import objects that are based on their dependency on other objects within the data model.

The objects that are required for any environment export depend on what you select for export. The base objects are specs and lookup table specs, which are the root
objects that all other objects require. The dependency sequence is similar to the following:
Collaboration Area > Workflow > Catalog > Hierarchy > Spec > Lookup Tables > Lookup Spec
Additionally, collaboration areas and workflows depend on users and roles. All container objects, for example, collaboration areas, catalogs, and hierarchies,
depend on access control groups. Views depend on attribute collections.
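
For example, a minimal export script sketch that honors this sequence adds the root object types before the containers that require them (the archive path is hypothetical):

envObjList = new EnvObjectList();

//Root objects first: lookup table specs, lookup tables, and specs
envObjList.addAllObjectsToExport("LOOKUP_TABLE_SPEC");
envObjList.addAllObjectsToExport("LOOKUP_TABLE");
envObjList.addAllObjectsToExport("SPEC");

//Containers that depend on the specs
envObjList.addAllObjectsToExport("HIERARCHY");
envObjList.addAllObjectsToExport("CATALOG");

exportEnv(envObjList, "archives/dependencyorderexport.zip");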
The following table lists all the object types that you can deploy in Product Master and provides a description, the export format, and object dependencies.
Table 1. Object types

Object type | Object type name | Export format | Object type dependency
Access control group (ACG) | ACG | XML | Roles
Alerts | ALERT | XML | Distributions, Distribution groups, Users
Attribute collections | ATTRIBUTE_COLS | XML | Specs
Catalogs | CATALOG | XML | Primary specs, Scripts (document store), Users, Roles
Catalog content | CATALOG_CONTENT | CSV | Catalogs
Catalog view | CATALOG_VIEW | XML | Catalogs, Attribute collections
Collaboration area | COLLABORATION_AREA | XML | Workflows, Catalogs, Hierarchies, ACGs, Users, Roles
Collaboration area content | COLLABORATION_AREA_CONTENT | XML | Collaboration areas
Company attributes | COMPANY_ATTRIBUTES (LOCALES) | XML | None
Container access privileges | CONTAINER_ACCESSPRV | XML | Catalogs, Hierarchies, Roles, Attribute collections
Data source | DATASOURCE | XML | None
Destination spec | DESTINATION_SPEC | XML | None
Distribution | DISTRIBUTION | XML | None
Distribution group | DISTRIBUTION_GROUP | XML | Distributions
Document store | DOC_STORE | XML | None
Exports | EXPORTS | XML | Catalogs, Lookup tables, Destination specs, Hierarchies, Spec maps, Distributions, Document stores, Users, Scripts (document store)
Feeds | FEEDS | XML | ACGs, Data sources, File specs, Catalogs, Lookup tables, Document stores, Users, Scripts (document store)
File spec | FILE_SPEC | XML | None
Hierarchy | HIERARCHY | XML | Primary specs, Scripts (document store)
Hierarchy content | HIERARCHY_CONTENT | CSV | Hierarchies
Category to category maps | HIERARCHY_MAPS | XML | Hierarchies
Hierarchy view | HIERARCHY_VIEW | XML | Hierarchies, Attribute collections
Item to category map, referenced as setItemCategoryMaps in the export script | Item to category map (setItemCategoryMaps) | CSV | Hierarchies, Catalogs, Hierarchy content, Catalog content
Jobs (schedules) | JOBS | XML | Users
Lookup table | LOOKUP_TABLE | XML | Lookup table specs
Lookup table content | LOOKUP_TABLE_CONTENT | CSV | Lookup tables
Lookup table specs | LOOKUP_TABLE_SPEC | XML | Company attributes
Maps | MAPS | XML | Specs, Catalogs
My settings | MY_SETTINGS | XML | Company attributes
Organization hierarchy | ORG_HIERARCHY | XML | Primary specs, Scripts (document store)
Organization hierarchy content | ORG_HIERARCHY_CONTENT | CSV | Organization hierarchies
Primary specs | PRIMARY_SPEC | XML | Sub-specs, Company attributes
Queue | QUEUE | XML | None
Reports | REPORTS | XML | Script input specs, Distributions, Document stores
Role locale access | ROLE_LOCALE_ACCESS | XML | Roles
Roles | ROLES | XML | ACGs
Script input specs | SCRIPT_INPUT_SPEC | XML | None
Search templates | SEARCH_TEMPLATES | XML | Catalogs, Workflows
Secondary specs | SECONDARY_SPEC | XML | Company attributes, Sub-specs
Static selections | SELECTION | XML | Catalogs, Hierarchies, Hierarchy content, ACGs
Specs | SPEC | XML | None
Sub specs | SUB_SPEC | XML | Company attributes
User-defined logs | UDL | XML | Catalogs, Hierarchies
User-defined log content | UDL_CONTENT | CSV | Catalogs, Hierarchies, Catalog content, Hierarchy content, UDLs
Users | USERS | XML | Roles
Web service | WEBSERVICE | XML | None
Workflow | WORKFLOW | XML | ACGs, Users, Roles, Attribute collections, Scripts (document store)

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Specifying file names for exported objects


When you export objects from a company, an extra file that is called NameMapping.xml is created. This file provides a mapping for the objects that are exported and the
names of the corresponding files that are created. You can provide your own mapping XML file by using an optional parameter for the mapping file path of the exportEnv
script operation.

Procedure
To use the name mapping feature, you must add a parameter to the exportEnv script operation in the export script. This parameter provides the path for the name
mapping file.
For example,

var result = exportEnv(envObjList, "export_20090519_125511.zip", "C:/mapping_files/NameMapping_ip.xml");

For more information about how to write your script, see Scripts for company deployment.

Example
You can use the sample mapping file to create your own name mapping file.
The following sample includes a few object types. The name mapping file that is created by the export generates elements for the object types, and some of these elements can
contain default file names for the objects. When you create the input name mapping file, create a similar file with the specific file names that are required for the objects.

<?xml version="1.0" encoding="UTF-8"?>


<NameMapList CreatedDate="Mon May 18 17:01:20 PDT 2009" name="" version="6.1.0 ">
<ObjectType type="HIERARCHY_MAPS">
<Mapping>



<ObjectName>HIERARCHY_MAPS</ObjectName>
<FileName>HIERARCHY_MAPS.xml</FileName>
</Mapping>
</ObjectType>
<ObjectType type="SEARCH_TEMPLATES">
<Mapping>
<ObjectName>SEARCH_TEMPLATES</ObjectName>
<FileName>SEARCH_TEMPLATES.xml</FileName>
</Mapping>
</ObjectType>
<ObjectType type="LKP_SPEC">
<Mapping>
<ObjectName>LKP_SPEC1</ObjectName>
<FileName>LKP_SPEC_1.xml</FileName>
</Mapping>
<Mapping>
<ObjectName>LKP_SPEC2</ObjectName>
<FileName>LKP_SPEC_2.xml</FileName>
</Mapping>
<Mapping>
<ObjectName>LKP_SPEC3</ObjectName>
<FileName>LKP_SPEC_3.xml</FileName>
</Mapping>

</ObjectType>
<ObjectType type="MKT_SPEC">
<Mapping>
<ObjectName>dest spec</ObjectName>
<FileName>MKT_SPEC_1.xml</FileName>
</Mapping>
</ObjectType>
<ObjectType type="CATALOG_MKT_MAP">
<Mapping>
<ObjectName>destmap1</ObjectName>
<FileName>CATALOG_MKT_MAP_2208.xml</FileName>
</Mapping>
</ObjectType>
<ObjectType type="FILE_CATALOG_MAP">
<Mapping>
<ObjectName>fileMap</ObjectName>
<FileName>FILE_CATALOG_MAP_2209.xml</FileName>
</Mapping>
</ObjectType>
<ObjectType type="CATALOG_CONTENT">
<Mapping>
<ObjectName>Lkp1</ObjectName>
<FileName>Lkp1.xml</FileName>
</Mapping>
<Mapping>
<ObjectName>Lkp2</ObjectName>
<FileName>Lkp2.xml</FileName>
</Mapping>
<Mapping>
<ObjectName>Catalog1</ObjectName>
<FileName>CTG_1.xml</FileName>
</Mapping>
<Mapping>
<ObjectName>Catalog2</ObjectName>
<FileName>CTG_2.xml</FileName>
</Mapping>
</ObjectType>
<ObjectType type="PRIMARY_SPEC">
<Mapping>
<ObjectName>PRIMARY_SPEC1</ObjectName>
<FileName>PRIMARY_SPEC_1.xml</FileName>
</Mapping>
<Mapping>
<ObjectName>PRIMARY_SPEC2</ObjectName>
<FileName>PRIMARY_SPEC_2.xml</FileName>
</Mapping>
</ObjectType>
<ObjectType type="CATALOG">
<Mapping>
<ObjectName>CATALOG</ObjectName>
<FileName>CATALOG.xml</FileName>
</Mapping>
</ObjectType>
<ObjectType type="SUB_SPEC">
<Mapping>
<ObjectName>test_subspec1</ObjectName>
<FileName>test_subspec_1.xml</FileName>
</Mapping>
</ObjectType>
</NameMapList>

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Packaging files with the command-line



You must package your company files before you can export or import the company data.

Start the packaging tool with the following command:

Syntax

$TOP/bin/runPackagingTool.sh --controlfilelocation=path_to_control_file --uploadlocation=path_to_upload_location --zippackage=true/false

Parameters

--controlfilelocation=path_to_control_file
Specifies the control file location where path_to_control_file is the path to your control file.

--uploadlocation=path_to_upload_location
Specifies the upload location where path_to_upload_location is the path to your upload location.

--zippackage=true/false
Specifies whether the compressed file package is required.
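
For example, the following invocation (the file paths are hypothetical) packages the files that are listed in the control file and compresses the resulting package:

$TOP/bin/runPackagingTool.sh --controlfilelocation=/Packaging_Tool/ImportEnvControl.xml --uploadlocation=/Packaging_Tool/upload --zippackage=true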

The control file that you provide for the packaging tool has the same XML format as an import control file created by the environment export. You can refer to the
ImportEnvControl.xml file in any export package that is created during the environment export for the format.

Inside the <File> tag in this control file, you must provide the complete file system path of each file that is included in the package to be created. The "type"
attribute of the <Import> tag provides the object type of the objects that are described in the files inside the <Import> tag.

Sample:

<Import enable="true" type="CATALOG">


<File>/Packaging_Tool/EXPORT_3/CATALOG/CATALOG.xml</File>
</Import>

You must also update all the files that have pointers to other files included in them.

Content data XML files


For the catalog content, hierarchy content, lookup table content, and organization hierarchy content object types, the control file must provide the path of an XML file that
in turn includes the paths of the actual content files.

Sample:

<Container>
<ContainerType>CATEGORY_TREE</ContainerType>
<ContainerName>Web</ContainerName>
<EntryDataFilePath>/Packaging_Tool/EXPORT_rde8/HIERARCHY_CONTENT/HIERARCHY_CONTENT_9206_DATA.csv</EntryDataFilePath>
<Encoding>UTF-8</Encoding>
</Container>

The <EntryDataFilePath> tag provides the complete path of the hierarchy content CSV file. Similar changes are required for the XML files that point to the CSV files
for the catalog content, lookup table content, and organization hierarchy content object types. The absolute paths of these data XML files must be included in the control
file that is mentioned previously.

XML file for docstore


For the format of this file, refer to the DOCSTORE.xml file from any package that is created by an environment export; the file is located within the DOCSTORE directory
of the export package. This file provides the paths of the files to be imported into the docstore when the package is imported. You must provide one such file, and its absolute
path must be provided in the control file that is mentioned previously.

Sample:

<Doc type="any">
<LocalPath>/Packaging_Tool/EXPORT_rde19/DOCSTORE/FILES/1240493467 360-Preview</LocalPath>
<Action>CREATE_OR_UPDATE</Action>
<StorePath>/scripts/workflow/SalesWorkflow/Preview</StorePath>
<Attribs>
<Attrib name="CHARSET" value="UTF-8"/>
<Attrib name="LAST_MODIFIED_TIMESTAMP" value="14-APR-2009 19:29:13"/>
<Attrib name="COMPRESSED" value="false"/>
</Attribs>
</Doc>

In this case, you must provide the absolute path of the file in the <LocalPath> tag so that it can be picked up by the packaging utility.

XML file for UDL content


This file provides the paths of the UDL content CSV files to be imported. The absolute path of this file is provided in the control file. You must provide the absolute paths of the
CSV files in the <CSVFileName> tags in this file.

Sample:

<UDL_CONTENT>
<Name>eCom_Delta_UDL</Name>
<ContainerType>CATEGORY_TREE</ContainerType>
<ContainerName>Emergency ColArea-Hierarchy</ContainerName>
<CSVFileName>/Packaging_Tool/EXPORT_5/UDL_CONTENT/CATEGORY_TREE_9208_811_DATA.csv</CSVFileName>
</UDL_CONTENT>



After the contents are provided, the packaging utility picks up the required files and packages them in a format that the Product Master environment
import understands.

You are expected to understand all the dependencies between objects and to include all the required object files in the control file so that they are included
in the package.

Control file for map XML file


The control file provides the absolute paths of the XML files for map objects. It contains a one-line reference for each map XML file, and each reference provides the
absolute path of that file. Change these paths to the absolute paths of the XML files for your maps. The format of this file is similar to any MAPS.xml file that is found within
the maps folder in an exported package. The absolute path of this file must be given in the control file, as mentioned in the Content data XML files section, for the packaging utility.

Sample:

/Packaging_Tool/EXPORT_20/MAPS/map1202.xml

Sample package control file usage


The package control file must be created correctly before you can use it to export your company data.
Sample hierarchy content file usage
The hierarchy content file must be updated appropriately before you can use it.
Sample UDL file usage
The UDL file must be updated appropriately before you can use it.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Sample package control file usage


The package control file must be created correctly before you can use it to export your company data.

This sample code of the package control file lists all the files that are to be exported:

<?xml version="1.0" encoding="UTF-8"?>


<ImportList CreatedDate="Fri May 23 20:56:06 PDT 2008" name="" version="5.4.0 ">
<Import enable="true" type="WORKFLOW">
<File>/Export_files_1/WORKFLOW/WORKFLOW.xml</File>
</Import>
<Import enable="true" type="ROLES">
<File>/Export_files_2/ROLES/ROLES47003.xml</File>
<File>/Export_files_2/ROLES/ROLES44619.xml</File>
<File>/Export_files_2/ROLES/ROLES44620.xml</File>
</Import>
<Import enable="true" type="SPEC">
<File>/Export_files_3/SPECS/LKP_SPEC/LKP_SPEC_16338_SPEC.xml</File>
<File>/Export_files_3/SPECS/LKP_SPEC/LKP_SPEC_16339_SPEC.xml</File>
<File>/Export_files_3/SPECS/LKP_SPEC/LKP_SPEC_16340_SPEC.xml</File>
<File>/Export_files_3/SPECS/PRIMARY_SPEC/PRIMARY_SPEC_16341_SPEC.xml</File>
<File>/Export_files_3/SPECS/PRIMARY_SPEC/PRIMARY_SPEC_16342_SPEC.xml</File>
<File>/Export_files_3/SPECS/PRIMARY_SPEC/PRIMARY_SPEC_16343_SPEC.xml</File>
<File>/Export_files_3/SPECS/PRIMARY_SPEC/PRIMARY_SPEC_16344_SPEC.xml</File>
<File>/Export_files_3/SPECS/PRIMARY_SPEC/PRIMARY_SPEC_16345_SPEC.xml</File>
<File>/Export_files_3/SPECS/PRIMARY_SPEC/PRIMARY_SPEC_16346_SPEC.xml</File>
<File>/Export_files_3/SPECS/PRIMARY_SPEC/PRIMARY_SPEC_16347_SPEC.xml</File>
<File>/Export_files_3/SPECS/PRIMARY_SPEC/PRIMARY_SPEC_16348_SPEC.xml</File>
<File>/Export_files_3/SPECS/PRIMARY_SPEC/PRIMARY_SPEC_16349_SPEC.xml</File>
<File>/Export_files_3/SPECS/PRIMARY_SPEC/PRIMARY_SPEC_16350_SPEC.xml</File>
<File>/Export_files_3/SPECS/PRIMARY_SPEC/PRIMARY_SPEC_16351_SPEC.xml</File>
<File>/Export_files_3/SPECS/PRIMARY_SPEC/PRIMARY_SPEC_16352_SPEC.xml</File>
<File>/Export_files_3/SPECS/PRIMARY_SPEC/PRIMARY_SPEC_16353_SPEC.xml</File>
<File>/Export_files_3/SPECS/PRIMARY_SPEC/PRIMARY_SPEC_16354_SPEC.xml</File>
<File>/Export_files_3/SPECS/PRIMARY_SPEC/PRIMARY_SPEC_16355_SPEC.xml</File>
<File>/Export_files_3/SPECS/PRIMARY_SPEC/PRIMARY_SPEC_16356_SPEC.xml</File>
<File>/Export_files_3/SPECS/PRIMARY_SPEC/PRIMARY_SPEC_16357_SPEC.xml</File>
<File>/Export_files_3/SPECS/PRIMARY_SPEC/PRIMARY_SPEC_16358_SPEC.xml</File>
<File>/Export_files_3/SPECS/PRIMARY_SPEC/PRIMARY_SPEC_16359_SPEC.xml</File>
<File>/Export_files_3/SPECS/PRIMARY_SPEC/PRIMARY_SPEC_16360_SPEC.xml</File>
<File>/Export_files_3/SPECS/PRIMARY_SPEC/PRIMARY_SPEC_16361_SPEC.xml</File>
<File>/Export_files_3/SPECS/PRIMARY_SPEC/PRIMARY_SPEC_16362_SPEC.xml</File>
<File>/Export_files_3/SPECS/PRIMARY_SPEC/PRIMARY_SPEC_16363_SPEC.xml</File>
<File>/Export_files_3/SPECS/PRIMARY_SPEC/PRIMARY_SPEC_16364_SPEC.xml</File>
<File>/Export_files_3/SPECS/PRIMARY_SPEC/PRIMARY_SPEC_16365_SPEC.xml</File>
<File>/Export_files_3/SPECS/PRIMARY_SPEC/PRIMARY_SPEC_16366_SPEC.xml</File>
<File>/Export_files_3/SPECS/PRIMARY_SPEC/PRIMARY_SPEC_16367_SPEC.xml</File>
<File>/Export_files_3/SPECS/SECONDARY_SPEC/SECONDARY_SPEC_16368_SPEC.xml</File>
<File>/Export_files_3/SPECS/SECONDARY_SPEC/SECONDARY_SPEC_16369_SPEC.xml</File>
<File>/Export_files_3/SPECS/SUB_SPEC/SUB_SPEC_16336_SPEC.xml</File>
<File>/Export_files_3/SPECS/SUB_SPEC/SUB_SPEC_16337_SPEC.xml</File>
</Import>
<Import enable="true" type="COMPANY_ATTRIBUTES">
<File>/Export_files_4/COMPANY_ATTRIBUTES/COMPANY_ATTRIBUTES.xml</File>
</Import>
<Import enable="true" type="CATALOG">
<File>/Export_files_5/CATALOG/CATALOG.xml</File>
</Import>
<Import enable="true" type="MY_SETTINGS">



<File>/Export_files_6/MY_SETTINGS/MY_SETTINGS.xml</File>
</Import>
<Import enable="true" type="ATTRIBUTE_COLS">
<File>/Export_files_7/ATTRIBUTE_COLS/ATTRIBUTE_COLS.xml</File>
</Import>
<Import enable="true" type="HIERARCHY">
<File>/Export_files_8/HIERARCHY/HIERARCHY.xml</File>
</Import>
<Import enable="true" type="CATALOG_CONTENT">
<File>/Export_files_9/CATALOG_CONTENT/CATALOG_CONTENT_DATA.xml</File>
</Import>
<Import enable="true" type="HIERARCHY_CONTENT">
<File>/Export_files_10/HIERARCHY_CONTENT/HIERARCHY_CONTENT_DATA.xml</File>
</Import>
<Import enable="true" type="LOOKUP_TABLE">
<File>/Export_files_11/LOOKUP_TABLE/LOOKUP_TABLE.xml</File>
</Import>
<Import enable="true" type="USERS">
<File>/Export_files_12/USERS/USERS.xml</File>
</Import>
<Import enable="true" type="COLLABORATION_AREA">
<File>/Export_files_13/COLLABORATION_AREA/COLLABORATION_AREA.xml</File>
</Import>
<Import enable="true" type="LOOKUP_TABLE_CONTENT">
<File>/Export_files_14/LOOKUP_TABLE_CONTENT/LOOKUP_TABLE_CONTENT_DATA.xml</File>
</Import>
</ImportList>

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Sample hierarchy content file usage


The hierarchy content file must be updated appropriately before you can use it.

<?xml version="1.0" encoding="UTF-8" ?>


<ExportContainerEntries>
<Container>
<ContainerType>CATEGORY_TREE</ContainerType>
<ContainerName>Packaging Type</ContainerName>

<EntryDataFilePath>/Packaging_Tool/EXPORT_8/HIERARCHY_CONTENT/HIERARCHY_CONTENT_9202_DATA.csv</EntryDataFilePath>
<Encoding>UTF-8</Encoding>
</Container>
<Container>
<ContainerType>CATEGORY_TREE</ContainerType>
<ContainerName>Product Type</ContainerName>

<EntryDataFilePath>/Packaging_Tool/EXPORT_8/HIERARCHY_CONTENT/HIERARCHY_CONTENT_9204_DATA.csv</EntryDataFilePath>
<Encoding>UTF-8</Encoding>
</Container>
<Container>
<ContainerType>CATEGORY_TREE</ContainerType>
<ContainerName>SAP</ContainerName>

<EntryDataFilePath>/Packaging_Tool/EXPORT_8/HIERARCHY_CONTENT/HIERARCHY_CONTENT_9205_DATA.csv</EntryDataFilePath>
<Encoding>UTF-8</Encoding>
</Container>
<Container>
<ContainerType>CATEGORY_TREE</ContainerType>
<ContainerName>Store</ContainerName>

<EntryDataFilePath>/Packaging_Tool/EXPORT_8/HIERARCHY_CONTENT/HIERARCHY_CONTENT_9207_DATA.csv</EntryDataFilePath>
<Encoding>UTF-8</Encoding>
</Container>
<Container>
<ContainerType>CATEGORY_TREE</ContainerType>
<ContainerName>Web</ContainerName>

<EntryDataFilePath>/Packaging_Tool/EXPORT_8/HIERARCHY_CONTENT/HIERARCHY_CONTENT_9206_DATA.csv</EntryDataFilePath>
<Encoding>UTF-8</Encoding>
</Container>
</ExportContainerEntries>

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Sample UDL file usage


The UDL file must be updated appropriately before you can use it.



<?xml version="1.0" encoding="UTF-8"?>
<UDLContent version="6.0.0">
<UDL_CONTENT>
<Name>eCom_Delta_UDL</Name>
<ContainerType>CATEGORY_TREE</ContainerType>
<ContainerName>Web</ContainerName>
<CSVFileName>/Packaging_Tool/EXPORT_5/UDL_CONTENT/CATEGORY_TREE_9206_803_DATA.csv</CSVFileName>
</UDL_CONTENT>
<UDL_CONTENT>
<Name>eCom_Delta_UDL</Name>
<ContainerType>CATEGORY_TREE</ContainerType>
<ContainerName>Emergency ColArea-Hierarchy</ContainerName>
<CSVFileName>/Packaging_Tool/EXPORT_5/UDL_CONTENT/CATEGORY_TREE_9208_811_DATA.csv</CSVFileName>
</UDL_CONTENT>
</UDLContent>

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Viewing your company deployment status


During the deployment process, you can check your export and import debug log files to see the status of the operation. The debug log files display the completion
percentage of the process and include details of any errors that are encountered.

Before you begin


Before you can view your company deployment status, you must:

Ensure that the profiling_scheduled_jobs parameter is set to full in the common.properties file, as shown in the example after this list.
Run an export or import before viewing your deployment progress in the debug log file.
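
For example, the following line in the common.properties file enables full job profiling:

profiling_scheduled_jobs=full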

Procedure
1. Open the Job Console in the user interface: Data Model Manager > Scheduler > Jobs Console.
2. Click in the Action column to check the status of the job.
3. Click in the Job Description column to display the Schedule Run Progress Chart window.
4. Click Debug Report in the Schedule Status Information window to display the debug log.

What to do next
When you deploy a company, you must complete some post deployment tasks to ensure that your company data model was deployed successfully.

If the Product Master source instance contains modified default objects, manually modify those default objects in the target instance.
Modify your scripts in the source instance to make them compatible with the target instance. If your source instance is an older release version, then after you import
your document store, you must modify all scripts in the target instance.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Propagate a company
To deploy a company by propagating a data model between two production instances of Product Master, follow the instructions for data model deployment, but be aware
that some objects must be propagated in certain ways.

Read the propagation restrictions for each of the following objects before you propagate a company.

Restrictions for propagating scripts


Use the script propagation restrictions to ensure that your data model is propagated successfully.
Restrictions for propagating specs
Use the spec propagation restrictions to ensure that your data model is propagated successfully.
Restrictions for propagating attribute collections
Use the attribute collection propagation restrictions to ensure that your data model is propagated successfully.
Restrictions for propagating roles
Use the role propagation restrictions to ensure that your data model is propagated successfully.
Restrictions for propagating users
Use the user propagation restrictions to ensure that your data model is propagated successfully.
Restrictions for propagating access control groups
Use the access control group propagation restrictions to ensure that your data model is propagated successfully.
Restrictions for propagating lookup tables
Use the lookup table propagation restrictions to ensure that your data model is propagated successfully.



Restrictions for propagating views
Use the view propagation restrictions to ensure that your data model is propagated successfully.
Restrictions for propagating workflows
Use the workflow propagation restrictions to ensure that your data model is propagated successfully.
Restrictions for propagating collaboration areas
Use the collaboration area propagation restrictions to ensure that your data model is propagated successfully.
Restrictions for propagating hierarchies
Use the hierarchy propagation restrictions to ensure that your data model is propagated successfully.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Restrictions for propagating scripts


Use the script propagation restrictions to ensure that your data model is propagated successfully.

Script extensions help to define programmatic rules. When you propagate an entity change that contains script extensions, the scripts are not automatically propagated.
You must manually ensure that any necessary scripts are propagated.

Scripts are not automatically propagated when a company is deployed between multiple production instances of Product Master.

Propagation is supported for the following script changes:

Adding a script
Deleting a script
Modifying a script

Script renaming is not supported.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Restrictions for propagating specs


Use the spec propagation restrictions to ensure that your data model is propagated successfully.

Specs define the metadata of your Product Master solution. When you propagate spec changes, you must ensure that all necessary specs are propagated into the target
instance.

Different recommendations exist for each of the different types of specs and spec attributes.

Workflow specs
For propagating workflow spec changes into the target instance, you can either:

Clear all spec attributes from your collaboration areas.
If you are unable to clear all collaboration areas, propagate your attribute additions with the following limitations:
Mandatory attributes can move into the fixit step.
Non-mandatory attributes have no known issues.

Do not propagate attribute deletions and modifications while the attributes are checked out to a collaboration area.

Spec attributes that reference lookup tables


For spec changes that are dependent on the addition of new lookup tables, deploy your solution changes in two stages:

1. Deploy your lookup table specs and then deploy lookup table definitions in the first stage.
2. Deploy the specs that reference your lookup tables in the second stage.

Spec attributes
For spec attributes, renaming is not supported and changing the attribute type is not recommended.
Changing the type of an attribute can result in meaningless or unavailable data after propagation. Instead, delete and then re-add attributes by either:

Refreshing your attribute changes from the source system.
Exporting your attribute data before propagation, and then importing the data after you propagate the changed attribute definition, such as a changed length.

Subspec attributes
For subspecs, the propagation restrictions include:

Subspec attribute additions
Attribute additions must be manually propagated to consuming specs.
Attribute additions with the consuming spec
To ensure that attributes are not lost during import, do not export consuming specs at the same time as subspecs.
Develop a custom solution to reattach changed subspecs in the consuming spec.
Subspec attribute changes
The following subspec attribute changes are automatically propagated with the consuming spec:

Making an attribute mandatory or non-mandatory
Making an attribute indexed or non-indexed
Increasing the maximum number of occurrences
Increasing or decreasing the minimum number of occurrences
Increasing an attribute length

The following subspec attribute changes are automatically propagated with the consuming spec but can cause data loss:

Decreasing the maximum number of occurrences can cause data truncation or complete data loss
Decreasing an attribute length can cause data truncation or complete data loss

Subspec attribute addition or deletion


The following subspec attribute deletions are automatically propagated with the consuming spec but can result in data loss:

Deleting an attribute
Deleting an attribute group

The following subspec attribute additions must be manually propagated to the consuming spec:

Adding an attribute
Adding an attribute group with subgroup attributes

Primary and secondary specs


For primary and secondary specs, the propagation restrictions include:

Primary or secondary spec definitions for specs that consume a subspec
Identical spec definitions on both the source and target instances are automatically propagated, but the source and target instances can have different spec
definitions if you manually defined them.
Primary or secondary spec attribute changes
The following primary and secondary spec attribute changes are automatically propagated:

Making an attribute mandatory or non-mandatory
Making an attribute indexed or non-indexed
Increasing the maximum number of occurrences
Increasing or decreasing the minimum number of occurrences
Increasing an attribute length

The following primary and secondary spec attribute changes are automatically propagated but can result in data loss:

Decreasing the maximum number of occurrences might result in data truncation or complete data loss
Decreasing an attribute length might result in data truncation or complete data loss

Primary or secondary attribute spec additions or deletions


The following primary or secondary attribute additions are automatically propagated:

Adding an attribute
Adding an attribute group with subgroup attributes

The following primary or secondary attribute deletions are automatically propagated but might result in data loss:

Deleting an attribute
Deleting an attribute group

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Restrictions for propagating attribute collections


Use the attribute collection propagation restrictions to ensure that your data model is propagated successfully.

Attribute collections are groups of item or category attributes that, in a context, are associated and behave the same way.

When you propagate attribute collection changes, be aware of any additional attributes or specs that are dependent on primary or secondary spec changes, and ensure that
the primary or secondary spec changes are propagated at the same time.

Primary or secondary specs that have identical attribute collections on both the source and target instances are automatically propagated.

For primary or secondary specs that do not have identical attribute collections on both the source and target instances, the source and target instances are not synchronized
because the target instance receives objects that did not exist before propagation. The import fails for objects that have the same names as objects that already exist in the
target instance.

The following attribute collection changes are automatically propagated:



Adding an attribute collection
Modifying attribute collection attributes
Deleting an attribute collection if the attribute collection is not associated to a catalog or collaboration area. Deleting an attribute collection can be propagated only through
a delete import job, through Collaboration Manager > Imports > Import Console in the user interface.

If you try to delete an attribute collection that is linked to a catalog or workflow, the import fails and you do not receive an error message to inform you of the failure.

Renaming an attribute collection is not supported.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Restrictions for propagating roles


Use the role propagation restrictions to ensure that your data model is propagated successfully.

You can propagate roles to use for both the source and target instances.

The following are propagated automatically:

Adding a role.
Modifying a role.
Deleting a role if the role is not associated to an access control group. Deleting a role can be propagated only through a delete import job, through Collaboration
Manager > Imports > Import Console in the user interface.

If you try to delete a role that is associated to an access control group, the import fails and you do not receive an error message to inform you of the failure.
Role-to-custom-tool mapping information is not automatically propagated.

Renaming a role is not supported. Propagating roles is also not supported when either the source or the target Product Master instance uses an LDAP user repository.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Restrictions for propagating users


Use the user propagation restrictions to ensure that your data model is propagated successfully.

You can propagate a local user repository to use for both the source and target instances.

Propagating users is not supported when either the source or the target instance uses an LDAP user repository.

The following user changes are propagated automatically:

Adding a user
Modifying a user
Disabling a user

The following user changes are not supported:

Renaming a user
Deleting a user

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Restrictions for propagating access control groups


Use the access control group propagation restrictions to ensure that your data model is propagated successfully.

Access control groups reference other entities, such as roles, within their definitions. When you propagate any type of access control group addition or change that contains
entity references, the entities are not automatically propagated, and you must ensure that any necessary referenced entities are propagated.

Propagating users is not supported when either the source or the target instance uses an LDAP user repository.

The following access control group changes are propagated automatically:

Adding an access control group


Modifying an access control group

The following access control group changes must be manually propagated:



Renaming an access control group
Deleting an access control group

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Restrictions for propagating lookup tables


Use the lookup table propagation restrictions to ensure that your data model is propagated successfully.

Programmatic rules are defined within lookup tables. When you propagate lookup table changes that contain script extensions, the scripts are not automatically
propagated. You must manually ensure that all necessary scripts are propagated.

For lookup tables that have dependent spec changes, deploy your solution changes in two stages, as shown in the sketch after this list:

1. Deploy your lookup table specs and lookup table definitions in the first stage.
2. Deploy the specs that reference your lookup tables in the second stage.
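
The following minimal script sketch illustrates the two stages (the archive paths are hypothetical); import the first package into the target instance before you export and import the second:

//Stage 1: lookup table specs and lookup table definitions
envObjList = new EnvObjectList();
envObjList.addAllObjectsToExport("LOOKUP_TABLE_SPEC");
envObjList.addAllObjectsToExport("LOOKUP_TABLE");
exportEnv(envObjList, "archives/stage1_lookuptables.zip");

//Stage 2: the specs that reference the lookup tables
envObjList = new EnvObjectList();
envObjList.addAllObjectsToExport("PRIMARY_SPEC");
envObjList.addAllObjectsToExport("SECONDARY_SPEC");
exportEnv(envObjList, "archives/stage2_specs.zip");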

The following lookup table changes are propagated automatically:

Adding a lookup table
Deleting a lookup table if the lookup table is not referenced within a spec. Deleting a lookup table can be propagated only through a delete import job, through
Collaboration Manager > Imports > Import Console in the user interface.

If you try to delete a lookup table that is referenced within a spec, the import fails and you do not receive an error message to inform you of the failure.
The following actions are not supported:

Renaming a lookup table
Modifying a lookup table

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Restrictions for propagating views


Use the view propagation restrictions to ensure that your data model is propagated successfully.

Views reference other entities such as attribute collections within their definitions. When you propagate any type of view addition or change that contains references, the
referenced entities are not automatically propagated and you must ensure that any necessary entities are manually propagated.

The following view changes are propagated automatically:

Adding a view
Important: The import fails and you do not receive an error message to inform you of the failure if you attempt to add a view when the required entity and
associated entities do not exist.
Modifying a view
Deleting a view if the view is not associated to a catalog or workflow. Deleting a view can be propagated only through a delete import job, through Collaboration
Manager > Imports > Import Console in the user interface.

If you try to delete a view that is associated to a catalog or workflow, the import fails and you do not receive an error message to inform you of the failure.
Renaming a view is not supported.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Restrictions for propagating workflows


Use the workflow propagation restrictions to ensure that your data model is propagated successfully.

Workflows reference other entities such as views, roles, users, attribute collections, and scripts within their definitions. When you propagate any type of workflow addition
or change that contains entity references, the entities are not automatically propagated and you must ensure that any necessary entities are manually propagated.

The following workflow additions and deletions are propagated automatically:

Adding a workflow
The import fails and you do not receive an error message to inform you of the failure if you attempt to add a workflow when the referenced entities do not exist.
Deleting a workflow if the workflow is not associated to a collaboration area. Deleting a workflow can be propagated only through a delete import job, through
Collaboration Manager > Imports > Import Console in the user interface.
The import fails and you do not receive an error message to inform you of the failure if you attempt to delete a workflow that is associated to a collaboration area.

Renaming a workflow is not supported.

The following workflow definition changes are automatically propagated:

Adding a workflow
The import fails and you do not receive an error message to inform you of the failure if you attempt to propagate changes to a workflow definition when the
associated entities, including users, roles, and attribute collections, are not available in the target system.
Workflow access control group

The following step changes are automatically propagated:

Adding a step
Deleting a step

Deleting a step is not supported.

Adding an attribute collection is automatically propagated.

Removing an attribute collection from a workflow step is not supported.

The following performer changes are automatically propagated:

Adding a performer
Removing a performer

The following extension points and step links are automatically propagated:

Adding an extension point


Changing an extension point link
Deleting an extension point

Renaming an extension point is not supported.


Workflow change propagation


Earlier, a workflow definition could not be updated while the workflow was in use; for instance, if the workflow had a collaboration area associated with it and items
were checked out to that collaboration area. Changes were made in Version 6.5.0 to enable you to change a workflow while it is in use.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Workflow change propagation


Earlier, a workflow definition could not be updated while the workflow was in use; for instance, if the workflow had a collaboration area associated with it and items were
checked out to that collaboration area. Changes were made in Version 6.5.0 to enable you to change a workflow while it is in use.

A workflow is "in use" if there is any collaboration area defined that references the workflow, whether or not that collaboration area is empty. It is recommended that
workflows that are in use NOT be updated. However, such workflows may be updated if the following guidelines are followed:

Environment export and import


The workflow definition can be changed in the source environment and exported. The definition can then be imported into the target environment in which the
corresponding workflow is in use. When complex workflow changes need to be applied, the "Environment export and import" approach is recommended
because it applies the changes in a consistent manner within short update intervals. Do this while you have a full online database backup and have
stopped all workflow activity.
Script operations
In the Product Master environment, script operations to update a workflow can be completed while the workflow is in use.
Java™ APIs
The Java API methods can be used to update a workflow while it is in use.

Important: Deleting a workflow is still not allowed while it is in use or associated with a collaboration area.
While you are updating the workflow, follow these guidelines:

Make sure that no operation is running with any collaboration area associated with the workflow that you want to update. These operations include activities like
checking out items or importing items into a step. The update fails in such a case.
After the workflow update operation is started, make sure that no new events are posted to the collaboration areas that are associated with that specific workflow. This
again includes activities like checking out or importing items to a step, or working with items that are already present in the step. These operations fail while the update
operation is still going on; they can be run after the update is completed.

You can complete the following operations on a workflow to update it while it is still in use:

Add a step to the workflow. The associated collaboration areas are refreshed to get the new step.
Delete a step from the workflow. If any items in a related collaboration area are checked out to this step at the time, they are moved to
the FIXIT step of the workflow. The events that are pending for this step are also deleted. If this step is in a nested workflow, the entries in the corresponding steps of the
containing workflows are moved to the FIXIT step of the containing workflow.
Edit the other attributes of a workflow. If other attributes of a workflow, such as the name or description, are changed, the changes are reflected in the environment.

Edit the properties of a step in a workflow. The properties or attributes of any step can also be changed and the workflow updated. Here is a list of some valid
scenarios:
The deadline or timeout for a step can be changed.
The Reserve to edit flag can be cleared. In this case, all the entries in the step for which this change is made are unreserved if they are in a reserved state.
The performer for a step can be deleted. If a user is no longer a performer for the step, due to a change in the performers that are defined for the step, then all the
entries in the step that are reserved by that user are unreserved.

If any of the updates are made to a nested workflow, the change is propagated to all the containing workflows, and the collaboration areas that are associated with them.

The target environment can have items that are checked out to collaboration areas that are related to the workflow. The changes that are made must allow the
workflow in the target environment to be updated, and the items must be adjusted according to the status of the new workflow.

Workflow aspects, including the name of the workflow or of a workflow step, can be changed in the source environment and propagated to the target environment. If the name is
changed, the target environment considers this workflow or step to be a new one and does not update the existing workflow or step. Take care of this aspect if you
change the name of a workflow or a step.

After the export is complete, the workflow description is available in the WORKFLOW.xml file within the exported compressed file.

Sample:

<WORKFLOW>
<Name>NewProductIntroductionV2</Name>
<Action>CREATE_OR_UPDATE</Action>
<Desc>Process for introducing new products.</Desc>
<ACG isDefault="true"/>
<ContainerType>CATALOG</ContainerType>
<GUIDocStorePath>/workflow/gui/NewProductIntroductionV2.html</GUIDocStorePath>
</WORKFLOW>

If you change the name, you must manually provide an extra tag in this exported file after the <Name> tag as follows:
<OldName>NewProductIntroductionV1</OldName>.

You must follow the same approach if the name of a step is changed:

<Step>
<StepName>TechnicalEnrichmentStep_new</StepName>
<StepOldName>TechnicalEnrichmentStep</StepOldName>
<StepType>MODIFY</StepType>
<StepDesc></StepDesc>
</Step>

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Restrictions for propagating collaboration areas


Use the collaboration area propagation restrictions to ensure that your data model is propagated successfully.

Collaboration areas reference other Product Master entities such as workflows within their definitions. When you propagate any type of collaboration area that contains
entity references, the entities are not automatically propagated and you must manually ensure that any necessary entities are propagated.

The following collaboration area changes are automatically propagated:

Adding a collaboration area


Modifying a collaboration area definition if the collaboration area is empty.
If you try to modify a collaboration area definition that is not empty, the import fails without an error message to inform you of the failure.

Deleting a collaboration area if the collaboration area is empty. Deletion of a collaboration area can be propagated only through a delete import job, which you run from the
Collaboration Manager > Imports > Import Console in the user interface.
If you try to delete a collaboration area that is not empty, the import fails without an error message to inform you of the failure.

Renaming collaboration areas is not supported.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Restrictions for propagating hierarchies


Use the hierarchy propagation restrictions to ensure that your data model is propagated successfully.

Hierarchies reference other entities such as scripts within their definitions. When you propagate any type of hierarchy addition or change that contains entity references,
the entities are not automatically propagated and you must manually ensure that any necessary entities are propagated.

The following hierarchy changes are automatically propagated:

Adding hierarchies
Modifying a hierarchy definition if the hierarchy is not associated with a catalog.
If you try to modify a hierarchy that is associated with a catalog, the import fails without an error message to inform you of the failure.

Deleting a hierarchy if the hierarchy is not associated with a catalog. Deletion of a hierarchy can be propagated only through a delete import job, which you run from the
Collaboration Manager > Imports > Import Console in the user interface.
If you try to delete a hierarchy that is associated with a catalog, the import fails without an error message to inform you of the failure.

Renaming a hierarchy is not supported.

The following hierarchy category changes are automatically propagated:

Adding a category


Deleting a category, but any items that are assigned to only this category become unassigned.

Renaming a category is not supported.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Managing the system and services


You can start, stop, or cancel Product Master and all services from the command-line.

You can also individually start, stop, or cancel your services while the RMI registry and Product Master remain running.

Optionally, you can add multiple instances of certain services to create a clustered environment and improve the performance of a growing system.

Starting the system and all services


You can start IBM® Product Master and all services from the command-line.
Stopping the system and all services
You can stop IBM Product Master and all services from the command-line.
Checking the status of the system and all services
You can check the status of IBM Product Master and all services from the command line.
Aborting the system and all services
You can abort IBM Product Master and all services from the command-line.
Cache management
Keeping frequently accessed objects in cache is a primary technique for improving performance. Product Master has a built-in cache mechanism for some Product
Master objects, and at the solution level, some data can be cached for reuse to improve performance.
Configuring initial spec cache loading during server startup
Product Master provides a mechanism to populate the spec cache during service startup.
Product services
IBM Product Master includes several components that are implemented as JVM services.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Starting the system and all services


You can start IBM® Product Master and all services from the command-line.

Before you begin


Before you can start the system and all services, you must complete a successful installation.

Procedure
Run the start_local.sh script from the $TOP/bin/go/ directory to start the system and all services.

Syntax

start_local.sh

The average system startup time is approximately 30 - 40 seconds.

You can use the optional argument --rm_logs with this script. This argument deletes all the files under the logging directory. The default location of the logging directory
is <install dir>/logs.
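For example, to start the system and delete the previous log files in one step:

./start_local.sh --rm_logs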

What to do next
Check the status of your system and all of the services before you log in; see Checking the status of the system and all services.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Stopping the system and all services


You can stop IBM® Product Master and all services from the command-line.

Before you begin


Before you can stop the system and all services, you must ensure that no important tasks are running that would be affected by stopping all services.

About this task


Stopping a service is not the same as aborting a service. If you stop a service, Product Master attempts to stop the service, but only after all tasks that are currently
using the service are stopped. If you abort a service, Product Master attempts to shut down the service, but if a task is using the service, the abort might fail.

Procedure
Run the stop_local.sh script from the $TOP/bin/go/ directory to stop the system and all services.

Syntax

stop_local.sh

Example
The following example shows the output after you run the stop_local.sh shell script:

#./stop_local.sh
stopping services on localhost
++ [success] stop service 'appsvr_LORAX' (Mon Aug 26 17:55:46 PDT 2002)
Product Master will stop in 5 seconds
++ [success] stop service 'admin_LORAX' (Mon Aug 26 17:55:47 PDT 2002)
admin will stop in 5 seconds
++ [success] stop service 'eventprocessor' (Mon Aug 26 17:55:47 PDT 2002)
event processor stopped
++ [success] stop service 'scheduler' (Mon Aug 26 17:55:47 PDT 2002)
scheduler will stop in 5 seconds
++ [success] stop service 'queuemanager' (Mon Aug 26 17:55:48 PDT 2002)
queue manager stopped
killing service 'rmi'

What to do next
To restart your system, see Starting the system and all services.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Checking the status of the system and all services


You can check the status of IBM® Product Master and all services from the command line.

Procedure
1. Run the shell script rmi_status.sh from the $TOP/bin/go/ directory to verify that the system is up and running:

Syntax

rmi_status.sh

2. Review the output of the rmi_status.sh shell script to verify that the following services are started:
Admin service: admin_machine name
Application server: appsvr_machine name
Event processor: eventprocessor
Queue manager: queuemanager
Scheduler: scheduler
Workflow engine: workflow

Example
The following example shows the output after you run the rmi_status.sh shell script:

#./rmi_status.sh
++ [success] rmistatus (Mon Aug 26 17:29:47 PDT 2003)
rmi://machine name:17507/CMP1/appsvr/appsvr_machine name
rmi://machine name:17507/CMP1/admin/admin_machine name
rmi://machine name:17507/CMP1/eventprocessor/eventprocessor_machine name
rmi://machine name:17507/CMP1/scheduler/scheduler_machine name
rmi://machine name:17507/CMP1/queuemanager/queuemanager_machine name
rmi://machine name:17507/CMP1/workflow/workflow_machine name

What to do next
You can now log in to your system.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Aborting the system and all services


You can abort IBM® Product Master and all services from the command-line.

Before you begin


Before you can abort the system and all services, you must ensure that no important tasks are running that would be affected by stopping all services.

About this task


Stopping a service is not the same as aborting a service:

If you stop a service, the system will attempt to stop the service but only after all tasks that are currently using the service are stopped.
If you abort a service, the system attempts to shut down the service but if a task is using the service, the abort might fail.

Procedure
Run the shell script abort_local.sh from the $TOP/bin/go/abort/ directory.

Syntax

abort_local.sh
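For example:

cd $TOP/bin/go/abort
./abort_local.sh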

What to do next
To restart your system, see Starting the system and all services.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Cache management
Keeping frequently accessed objects in cache is a primary technique for improving performance. Product Master has a built-in cache mechanism for some Product Master
objects, and at the solution level, some data can be cached for reuse to improve performance.

The use of a cache must balance the number of objects against memory consumption. If the cache size is too small, it can lead to frequent object replacement and
cache misses; if the cache size is too large, it can lead to unnecessary memory consumption because the size limit might never take effect to flush unused cache objects.

Check the following parameters, which are used to tune performance and resource utilization, and make appropriate changes for your
environment. The parameters are listed in priority order.
Table 1. Performance parameters
Parameter Defined in file name
max_specs_in_cache mdm-cache-config.properties
max_lookups_in_cache mdm-cache-config.properties
max_ctgviews_in_cache mdm-cache-config.properties
max_roles_in_cache mdm-cache-config.properties
max_accesses_in_cache mdm-cache-config.properties

Example
1. If one entry has a view with 1000 secondary specs that are associated with it (as the attribute collection that is used by the view), the system tries to load or access those specs
during entry build. If the spec cache is set to 500, the cache hit rate could be zero: the 1000 specs are loaded in sequence, so later ones do not get a cache hit but
push out the earlier ones. In such a case, processing time and memory can be wasted at the same time. In this scenario, it is better to set the cache size to 1000,
so that the loads get good cache hits and the cache push-out is avoided.
2. If one implementation has 1000 lookup tables for various purposes, but they are not likely to be used in a single user scenario, it might not be necessary to set the
cache size to 1000; that is, you do not need to cache all lookup tables all the time. With the cache size limit control, the least recently used (LRU) cache entries can be flushed to free up memory.

The effectiveness of a cache depends on how frequently objects are accessed; the key metric is the cache hit rate. If it is not properly configured, a cache setup can have a negative
effect on performance (when you get mostly cache misses).
Product Master object caches are configured based on object type, with a maximum object count and a cache timeout value to put a limit on the memory consumption. When
the object count reaches the cache size limit, some cached objects are flushed out based on the LRU algorithm. When a cached object reaches its timeout value of inactivity, that
object is reclaimed. This ensures that memory is freed up promptly according to the configuration.
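As an illustrative sketch, the parameters from Table 1 are set as key=value pairs in the mdm-cache-config.properties file; the values below are assumptions that you must tune against your own data model and memory budget:

max_specs_in_cache=1000
max_lookups_in_cache=300
max_ctgviews_in_cache=100
max_roles_in_cache=50
max_accesses_in_cache=50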

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Configuring initial spec cache loading during server startup


Product Master provides a mechanism to populate the spec cache during service startup.

About this task


You can modify the initial_spec_load.xml file in the $TOP/etc/default directory. It is important to understand that the files in this directory define the defaults for all services.
But because the initial spec loading most probably makes sense only for the appsvr, workflow engine, and scheduler processes, the initial_spec_load.xml file must be defined for
each wanted service individually.

Procedure
1. Create a directory under the $TOP/etc directory for each service. The directory must be named exactly like the service. The correct naming is best checked by
looking at the directory names in $TOP/logs.
For example, when ls in $TOP/logs shows:

admin_JONAS-MDM eventprocessor_JONAS-MDM rmi_JONAS-MDM


workflowengine_JONAS-MDM appsvr_JONAS-MDM default queuemanager_JONAS-MDM
scheduler_JONAS-MDM

You would create following new directories:

$TOP/etc/appsvr_JONAS-MDM
$TOP/etc/scheduler_JONAS-MDM
$TOP/etc/workflowengine_JONAS-MDM

When you define multiple scheduler or appsvr services, you would need to create multiple scheduler or appsvr directories.
2. Copy the initial_spec_load.xml file and, if needed, the common.properties file from the $TOP/etc/default directory to those newly added directories. This enables
you to specify individual settings for each of those services. If you want to use the same settings for all three services, you can copy the
initial_spec_load.xml and common.properties files to yet another location, for example, $TOP/etc/common_app_sched_wfl, and then create a link from the respective
service configuration directories to the same files in common_app_sched_wfl. For example:

cd $TOP/etc/appsvr_JONAS-MDM
ln -s $TOP/etc/common_app_sched_wfl/initial_spec_load.xml
ln -s $TOP/etc/common_app_sched_wfl/common.properties

Repeat these commands for each wanted service.

By doing so, you would only need to modify files in $TOP/etc/common_app_sched_wfl/ and the appsvr, scheduler, and workflow engine process would pick up the
settings automatically.
Note: Do not modify the initial_spec_load.xml in $TOP/etc/default. If you do so, this setting is used by all other Product Master services for which you do not
provide a service-specific setting as outlined previously. In consequence, services that are configured with a lesser amount of memory (init_ccd_vars.sh) might not
start successfully because they run out of memory while loading all of the specs.
3. Modify the respective initial_spec_load.xml to configure the spec loading behavior. The embedded instructions in that file state:
This file contains information about which specs need to be preinstalled by the application on start. This is useful because it results in the specs being fetched in
bulk, and hence results in faster performance.

Consideration needs to be given to the number of specs that are being loaded. Make sure that the number of specs that are listed here is less than the number that is
allowed in the cache setting in common.properties.

Bulk fetch size needs to be tuned for each setup because the best size depends on how many nodes each spec has. For smaller specs, keep this number at about 100;
for larger specs, set it to a lesser, more conservative number like 20. For example,

<specs bulkfetchsize="100">
<spec pattern="spec*"/>
<spec name="other spec"/>
</specs>"

There is no definitive answer for what value to use for bulkfetchsize, but going with the smaller size is probably the safer option. It might take slightly longer
for Product Master to start up, but ultimately all of the specs go into the cache.
To load all secondary specs that start with _ETIM and end with a number [0-9] (and not load the *stand-alone specs), you can use the following settings:

<?xml version="1.0"?>
<specs bulkfetchsize="30">
<spec pattern="_ETIM_*0"/>

IBM Product Master 12.0.0 441


<spec pattern="_ETIM_*1"/>
<spec pattern="_ETIM_*2"/>
<spec pattern="_ETIM_*3"/>
<spec pattern="_ETIM_*4"/>
<spec pattern="_ETIM_*5"/>
<spec pattern="_ETIM_*6"/>
<spec pattern="_ETIM_*7"/>
<spec pattern="_ETIM_*8"/>
<spec pattern="_ETIM_*9"/>
</specs>

4. Restart Product Master. All of the specs should then be cached into memory upon startup and the performance of many aspects should hopefully be improved as a
result.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Product services
IBM® Product Master includes several components that are implemented as JVM services.

The six JVM services and the RMI (Java™ Remote Method Invocation) registry run concurrently in the product. The RMI registry registers all product services and
must be running before all other services are started.

Table 1. JVM Services

JVM Service Description
admin The admin service starts and stops modules on remote machines.
appsvr The application server service serves JavaServer Pages.
eventprocessor The event processor service dispatches events between all the modules.
queuemanager The queue manager service sends documents outside of Product Master.
scheduler The scheduler service runs all scheduled jobs in the background.
The scheduler provides a unified view to manage all jobs that are scheduled within Product Master. Through the Jobs Console, a job can be run based
on a defined timetable and monitored with status information.

The scheduler service communicates with the application through the unified database server and file system, as well as through the rmiregistry.
workflow The workflow engine processes workflow events that are posted to the database.
rmiregistry The RMI (Remote Method Invocation) registry service is a standard Java method that finds and starts methods or functions on remote systems.
RMI is a type of RPC (Remote Procedure Call). In Java, a remote system can be on another physical system or on the same machine but in a different
JVM. The rmiregistry is a simple directory. Java objects connect to the registry and register how to connect to them and what methods or functions
they have. Other services look up the function they need in the registry to find out where it is, then call the remote object and run the method. An
example is to shut down a service. The RootAdmin Java object looks up Product Master services in the registry, finds out how to contact them, and
starts their shutdown method. As such, the rmiregistry service does not require a great deal of system resources.
rootadmin rootadmin is a Java class that enables you to provide command options to get certain information about each service type and to perform specific
activities that are associated with each service.

Service names
You can specify unique service names to differentiate the multiple services in a clustered environment.
List all defined services
Display a list of the full names for all the services that are running on your system. Type svc_control.sh --action=list or rmi_status.sh to display a list of
the full names for all of the services. The svc_control.sh script is in the <install dir>/bin/go directory.
Configuring memory flags for each service type
You can set memory flags to performance tune each service's JVM.
Starting a service
You can start individual services in IBM Product Master from the command-line.
Stopping a service
You can stop individual services in IBM Product Master from the command-line.
Aborting a service
From the command-line, you can abort IBM Product Master services and make them unavailable for use.
Getting the short status of a service
You can view the status of a service as a short status message from the command line.
Service checks
Service checks enable you to specify which applications you want to run before any service starts up, shuts down, or is aborted. Likewise, applications can be run
before a particular service is started, stopped, or aborted. When you run a service check before any service is started, you are validating all of the policies that are
defined in the history_subscriptions.xml file for history logging.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Service names
You can specify unique service names to differentiate the multiple services in a clustered environment.

Each service has a long name and a short name, and in clustered environments, you can identify certain services with unique service names. Run the
$TOP/bin/go/svc_control.sh --action=list script to see the full names of all the services that are defined on a server.

The workflow service can have only one instance running in all environments. All services other than the workflow service can have multiple instances running, so
ensure that each service name is different, because a service does not start if another service in the clustered environment shares the same name.

Fixed short names


The admin and appsvr service names are fixed by Product Master:

Admin service
admin_machine_name
Application service
appsvr_machine_name

Default short names


The service default short names are:

Event processor
eventprocessor
Queue manager
queuemanager
Scheduler service
scheduler
Workflow engine
workflow
RMI registry
rmiregistry

Long service names


The long names for services are built internally by the system and are based on the short name of each service:

rmi://machine_name:rmi_port/database_user_name/service/service_short_name

Long service name variables:


machine_name is your machine name
rmi_port is the port that your RMI registry listens on
database_user_name is the user name of your database
service is the Product Master service
service_short_name is the short name of the service

Examples of long names


If you are running a scheduler service on a machine that connects to a database with the following settings:

Machine name: server1
RMI port: 17507
Database user name: pauadm
Service: scheduler
Service short name: sch1

The long name is rmi://server1:17507/pauadm/scheduler/sch1


For the same database user and RMI port, if another scheduler service with the short name sch2 runs on another server with the machine name server2, the long name is:
rmi://server2:17507/pauadm/scheduler/sch2.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

List all defined services


Display a list of the full names for all the services that are running on your system. Type svc_control.sh --action=list or rmi_status.sh to display a list of the
full names for all of the services. The svc_control.sh script is in the <install dir>/bin/go directory.
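For example:

cd <install dir>/bin/go
./svc_control.sh --action=list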

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Configuring memory flags for each service type

You can set memory flags to performance tune each service's JVM.

Procedure
1. Stop all services.
2. Edit the $TOP/bin/conf/service_mem_settings.ini file.
Specify the maximum amount of memory that each JVM can use along with the minimum amount of memory that each JVM starts with. When you modify your memory
flags, take into account the size of your system's total memory. The recommended memory flag settings are sufficient for most
production systems; an illustrative tuning example follows this procedure. For more support on setting your memory flags to optimize performance, contact IBM® Software Support.

a. For each JVM, specify the -Xmx parameter to define the maximum amount of memory each JVM can use.
b. For each JVM, specify the -Xms parameter to define the minimum amount of memory each JVM starts with.

Default settings

export ADMIN_MEMORY_FLAG='-Xmx64m -Xms48m'


export APPSVR_MEMORY_FLAG='-Xmx512m -Xms64m'
export EVENTPROCESSOR_MEMORY_FLAG='-Xmx64m -Xms48m'
export QUEUEMANAGER_MEMORY_FLAG='-Xmx64m -Xms48m'
export SCHEDULER_MEMORY_FLAG='-Xmx1024m -Xms48m'
export WORKFLOWENGINE_MEMORY_FLAG='-Xmx64m -Xms48m'

3. Restart all services.
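For example, to give the scheduler a larger heap for heavy import jobs, you might set the following; the values are illustrative assumptions that you must size against your system's total physical memory:

export SCHEDULER_MEMORY_FLAG='-Xmx2048m -Xms256m'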

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Starting a service
You can start individual services in IBM® Product Master from the command-line.

Before you begin


Before you can start a service, you must:

Ensure that the service is not already running. See Checking the status of the system and all services to determine what services you are currently running.
Ensure that the machines are listed in the admin_properties.xml configuration file, if you are running services on other machines in a horizontal clustering
environment.

Note: If you start a service with the name of a service that is already running, the service that is running is first aborted then restarted.

Procedure
1. Ensure that the RMI registry is running.
You must start the RMI registry before you start any services. To start the RMI registry, run the svc_control.sh --action=start --svc_name=<rmi service name> script.
2. Run the shell script of the specified service from the <install dir>/bin/go directory. For Product Master services, the svc_control script is useful for displaying
information about the configuration of a service. The <install dir>/bin/go/svc_control.sh --action=show_config --svc_name=<svc full name> script displays all
configuration information, which includes the class path and JVM system properties that are used to start a service (see the sketch after these steps).
a. Display a list of the full names for all the services that are running on your system. Type svc_control --action=list.
b. Start a service on your system. Type svc_control --action=start --svc_name=<full name>.
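As a sketch of the show_config action (the service name here follows the naming examples in this documentation and is an assumption for your system):

./svc_control.sh --action=show_config --svc_name=sch1_MYHOST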

Example
In this example, the scheduler service is started and given the service name sch1.

svc_control --action=start --svc_name=sch1_MYHOST

What to do next
You can check the status of the service to ensure that it is started correctly, see Getting the short status of a service.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Stopping a service
You can stop individual services in IBM® Product Master from the command-line.

About this task
Stopping a service is not the same as aborting a service. If you stop a service, Product Master attempts to stop the service, but only after all tasks that are currently
using the service are stopped. The scheduler stops only when it completes all of the jobs that are currently running. If you abort a service, Product Master attempts to shut
down the service, but if a task is using the service, the abort might fail.
If the service is blocked, it might not stop because it is waiting for I/O, such as file reading or writing.

Procedure
Run the shell script of the specified service from the <install dir>/bin/go directory.
A service is stopped after all currently running tasks that use the specified service are stopped.

a. Display a list of the full names for all the services that are running on your system. Type svc_control --action=list.
b. Stop a service on your system. Type svc_control --action=stop
--svc_name=<full name>.

Example
In this example, the scheduler service is stopped and given the service name sch1.

svc_control --action=stop --svc_name=sch1_MYHOST

What to do next
To restart the service that you just stopped, see Starting a service.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Aborting a service
From the command-line, you can abort IBM® Product Master services and make them unavailable for use.

About this task


Stopping a service is not the same as aborting a service:
If you stop a service, Product Master attempts to stop the service, but only after all tasks that are currently using the service are stopped.
If you abort a service, Product Master attempts to shut down the service, but if a task is using the service, the abort might fail.
If you abort the RMI registry, you are unable to contact services on remote machines.

Procedure
Run the shell script of the specified service from the <install dir>/bin/go directory.

a. Display a list of the full names for all the services that are running on your system. Type svc_control --action=list.
b. Abort a service on your system. Type svc_control --action=abort
--svc_name=<full name>.

Example
In this example, the scheduler service is aborted and given the service name sch1.

svc_control --action=abort --svc_name=sch1_MYHOST

What to do next
To restart the service that you just aborted, see Starting a service.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Getting the short status of a service


You can view the status of a service as a short status message from the command line.

About this task

The Java™ class RootAdmin has a public interface that enables you to provide command options to get certain information about each service type and to perform specific
activities that are associated with each service. Besides the administration of the services, the following useful information can be captured by using the public interface
of the RootAdmin class:

Memory usage (service and connection pool)


Connection pool status (size of connection pool, current connection usage)
Database pool status (connections in use, maximum connections possible)
Thread names and count per service / process
Session cache information

Running the RootAdmin class without options, shows the following usage information:

$JAVA_RT com.ibm.ccd.admin.common.RootAdmin

Usage: RootAdmin -cmd=<cmd> [-host=<host>] [-svc=<svc name>] [-type=<type>]


See config file: <WPC INSTALL DIR>/etc/default/admin_properties.xml
1/ to start a service: -cmd=start -host=<host> -svc=<svc name> -type=<svc type>
where <svc type> is:
-admin
-appsvr
-eventprocessor
-queuemanager
-scheduler
-workflowengine
and <host> is:
-jonas1
2/ to stop a service: -cmd=stop -svc=<svc name>
3/ to stop a thread: -cmd=stop_thread -svc=<svc name> -thread=<thread number>
4/ to abort a service: -cmd=abort -svc=<svc name>
5/ to check a service: -cmd=check -svc=<svc name>
6/ to get the status of a service: -cmd=status -svc=<svc name>
7/ to get the list of services: -cmd=rmi_status
8/ to unbind a service: -cmd=unbind -svc=<svc name>
9/ to stop all local services: -cmd=stop_local

Procedure
Run the rootadmin.sh shell script that is in the $TOP/bin/go directory:

Syntax

rootadmin.sh -cmd=status -svc=service_name

Parameters

-cmd=status
Specifies the script command where status can be one of the following commands:

check
stop
stop_thread
abort
rmi_status
stop_local

Use check to display the short status.


-svc=service_name
Specifies the service where service_name is the service name.
The service name for the option -svc can be found by going to $TOP/logs/. This directory provides the names of the respective service types, which include
the hostname.

Short status messages

Running
The service is running and responding.

Not found
The service is not found. The service might not be started or it might have stopped.

Found but not responding


The service was found as registered with the RMI registry, but the service is not responding to the "heartbeat" function. The service might need to be
restarted.

Example
In this example, the scheduler short status for the service displays.

rootadmin.sh -cmd=check -svc=scheduler

In the following example, the directory for the appserver process under $TOP/logs has the name "appsvr_Jonas1", and this name needs to be supplied to the "-svc" option.

$JAVA_RT com.ibm.ccd.admin.common.RootAdmin -cmd=status -svc=appsvr_Jonas1

The resulting HTML output from this command can be put into a file and is readable by an Internet browser.

Memory usage: 107102k

connection pool status


local date:Wed Jun 03 12:00:58 CEST 2009
memory (in kb): used:118400,free:154226,total:272628
service type:appsvr
db instance:pimdb
db user:wpcfp11
broker size:10
broker in use:0
conn file:/opt/wpc/envs/jonas15/wpc_532_FP11_DB2/logs/appsvr_JONAS1/db_pool/db_pool.txt.28-MAY_11_48_43_176

db pool status(size:2)
name:default
used:0
max:26

name:system
used:0
max:4

db thread status(size:7)


name:Thread-21
create time:Thu May 28 11:48:44 CEST 2009
pool:default
block:false
blocked:false

name:memorymonitor_daemon1
create time:Thu May 28 11:48:44 CEST 2009
pool:system
block:true
blocked:false

name:dst_worker_1
create time:Thu May 28 11:49:28 CEST 2009
pool:default
block:false

name:qmgr_deamon
create time:Thu May 28 11:49:28 CEST 2009
pool:system
block:true
blocked:false

name:dst_worker_0
create time:Thu May 28 11:49:28 CEST 2009
pool:default
block:false

name:dst_worker_2
create time:Thu May 28 11:49:28 CEST 2009
pool:default
block:false

name:email_deamon
create time:Thu May 28 11:49:28 CEST 2009
pool:system
block:false

db connection status(size:0)

Status Information for Session Cache


CtgViewMgr:Admin[1252476390]
Type:View, Name:RegView
Type:View, Name:RegView
Type:View, Name:[System Default]
Type:View, Name:RegView
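For example, to capture the status output in a file that you can open in a browser (the file name is illustrative):

$JAVA_RT com.ibm.ccd.admin.common.RootAdmin -cmd=status -svc=appsvr_Jonas1 > rootadmin_status.html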

What to do next
If the service status shows that the service has failed or is unresponsive, you can stop or abort the service, see either Stopping a service or Aborting a service.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Service checks
Service checks enable you to specify which applications you want to run before any service starts up, shuts down, or is aborted. Likewise, applications can be run before
a particular service is started, stopped, or aborted. When you run a service check before any service is started, you are validating all of the policies that are defined in the
history_subscriptions.xml file for history logging.

About this task


When an application is run, it must return 0 on success and a nonzero value on failure. If the application fails before a service starts, the service does not start.
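As a minimal sketch, a before_start check can be a small shell script; the writable-directory test here is an illustrative assumption, and the only contract is the exit code:

#!/bin/sh
# Illustrative before_start check: block startup if the logging directory is not writable.
# A nonzero exit code prevents the service from starting; 0 lets it proceed.
test -w "$TOP/logs" || exit 1
exit 0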

There are three types of service checks:

before_start
Runs before the service starts.
after_stop
Runs after the service stops.
after_abort
Runs after the service is aborted.

Currently, only one check of each type is supported. If multiple checks of the same type exist for the same purpose, only the last one is run.

Procedure
1. Add a line to the [service.check all] section in the env_settings.ini file if you want a service check that runs for any service.
2. Add a section that is called [service.check <service name>], where <service name> is the name of one of the services from the [services] section, if you want a
service check that runs for a particular service. Then, add a line for the service check.
Note: Refer to the comments in the [service.check all] section in the env_settings.ini.default file for restrictions on syntax and quoting.

Example
In this example, a service check runs before any service is started:

[service.check all]

before_start=my test --option=some arg

In this example, a service check runs before a service named appsvr2 starts:

[service.check appsvr2]

before_start=my test --option=some arg

In this example, two service checks run for a service named scheduler:

[service.check scheduler]

after_abort: my_cleanup_script.sh

before_start: check_something.exe

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

System administrator troubleshooting responsibilities


The system administrator is responsible for managing an organization's computers and operating systems, for the day-to-day maintenance of the operating system,
including backup and recovery, adding and deleting user accounts, and performing software upgrades, and for installing, configuring, and maintaining the network. System
administrators work closely with the database administrator and the application and business experts to fix problems and increase performance, and they ensure that all
current operating system patches are applied.

The System Expert and Administrator has the following responsibilities:

Errors

Always checks the system logs around the error timestamp to see whether the problem was caused by a system error
For example: if Product Master reports a system error while saving a file to the docstore, the System Expert checks the system logs to see whether it is
an I/O problem, a full disk, or a file system problem.

Monitor

Monitors all system errors and critical system messages


Checks for disk space getting full

Performance

When performance problems arise, determines whether I/O bandwidth, memory, swapping, or CPU usage indicates a bottleneck in the current hardware setup
Checks for the existence of zombie or defunct processes and determines the cause of a freeze

Client PCs

Determines whether different software packages on the PC might be causing a problem

If the cause cannot be determined, removes all nonstandard packages and sees whether the problem goes away
Tracks modifications to all PC settings to determine whether a problem was caused by configuration changes
For example: Internet Explorer and network settings

Network

Configures and maintains the load balancer (if applicable) and knows when it might be the cause of a problem
For example: if users are unexpectedly logged out of the system, correlates the time of the problem with the load balancer logs to see whether the cause was the load balancer

Might bypass the load balancer completely for a period of time to see whether that fixes a problem
Configures and maintains the proxy server
If a proxy server is used, ensures that all relevant proxy server HTTP caches are flushed when a Product Master patch is installed
Monitors network bandwidth

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Performance tuning
When you are planning for performance, you need to account for the project plan, use cases, timing, hardware, and tracking efforts.

Project plan
Ensure that you include 20% extra time in each of your line items for testing, profiling, and tweaking the solution. Include this extra time before any known
performance problems are uncovered. Allow extra time for use cases that have a high potential for performance problems, for example, many specs, a large amount of
location data, or many workflow steps.

Use cases
Define all the use cases that are performance sensitive for the users up front in the project. Identify the requirements, dependencies, and wanted performance. Set
general customer expectations at a proper level; for example, a “subsecond” response is not realistic in any web application. Reset the expectations as performance tuning
and testing progress; then, the business-specific processing time can be considered.

Timing
Test and profile use cases as they are developed, or as early as possible if there are any other dependencies. Do not delay performance testing until the end of the
project.

Hardware
Identify the hardware that is needed for performance testing and have it available early in the project. The hardware for performance testing should be a replica of the
hardware that is planned for production. Performance testing and user acceptance testing should always be done on hardware that is identical to production.

Tracking
Establish a baseline for the use cases and have it approved by the customer. For each subsequent iteration, maintain the history data of tuning actions and their
performance effects.

Performance best practices


Ensure that you follow the performance best practices.
Performance checklist
Use the IBM® Product Master performance checklist to resolve common performance issues with the product.
Volume testing checklist
Use this checklist to conduct volume testing and potentially detect any performance problems. The objectives of conducting volume testing are to preemptively
address potential performance problems and to reset any expectations if necessary.
Performance tuning for the application server
You can optimize application servers, both at the software and hardware level.
Performance tuning for the Persona-based UI
Ensure that you follow the performance best practices for the Persona-based UI.
Implement and maintain table partitioning
Table partitioning greatly improves the application performance during high concurrency and search, reduces the database maintenance time, and helps improve the
application scalability.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Performance best practices


Ensure that you follow the performance best practices.

Hardware sizing and tuning


Modeling considerations for lookup tables
Modeling the process to steps in a workflow
Performance design considerations
Performance checklist
Volume testing checklist
Cache management

Cache property files
Performance tuning for the Persona-based UI
Troubleshooting performance issues

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Performance checklist
Use the IBM® Product Master performance checklist to resolve common performance issues with the product.

Important: Correct installation and administration of the database is critical to ensure performance. For more information, see Administering database.
While it is known that large numbers of objects can lead to performance issues, they are often not avoidable. A custom implementation can require a large amount of
data, with complex entity attributes and many specs, together with sophisticated business logic. A smart design or arrangement can improve the performance to some
extent; the remaining issues must be dealt with case by case.
Typical questions include "how large is large?" and "what is the optimal number for different objects?". There is no absolute number or standard answer, as the different
components need to work together and display unique characteristics for the specific user scenarios of a specific customer implementation. There is some benchmark data from
volume testing, which can be considered a general reference.

The following list identifies possible resolutions that can help you to identify the source of your product performance issues:

User interface response time due to a large number of concurrent users. When there are large numbers of users (100+), the appsvr process can be stressed due to:
1. More processing
2. More memory consumption (for all the user sessions)
3. Potential threading issues
Import jobs for initial load. The following are the characteristics of an initial data load:
1. Large volume of data
2. Business validation logic may or may not be turned off
3. Mostly database writes (many fewer reads)
Potential solutions:
1. Parallel jobs
2. Check data record dependencies
3. Keep validation logic to a minimum
4. Cache candidates for things that are accessed for most of the items (such as a category cache, if the items need to be mapped under the categories)
5. Check whether the performance bottleneck is in the scheduler or at the database side
Import jobs for incremental load. The following are the characteristics of an incremental load:
1. Data volume is less than the initial load
2. Might require more business validation logic
3. Might need more checks on existing item or category data
4. Combination of database reads, searches, and writes
Potential solutions:
1. Parallel jobs
2. Optimization of scripts, with potential object or data caches
System resource sharing between various services. The various services of Product Master on the same box can compete for the limited system resources. For
example, while you run jobs, the user interface response time can become slower due to the processor cycles on the appserver machine and, potentially, more load
on the database machine. So, if possible, schedule the jobs to run during off-hours when users are not in the system.
Solution scripts. Much of the business logic exists as either script or custom Java™ code (that uses the Product Master Java API) in Product Master. Code optimization is
important to improve the user interface or job performance.
1. Use compiled mode.
2. Get the read-only version of objects if the script operation supports that option.
3. Be careful of interaction and validation with external systems.
4. Potentially use a cache mechanism to reuse objects that are frequently accessed.
5. Profile the scenario for deep analysis. Sort the methods by where the most time is spent and look for the following:
a. How much time is being spent (focus on the largest number first)
b. The number of executions of the method where a large amount of time is spent
6. Apply solution-specific knowledge to the numbers.
For example, if you know that your import is loading 100 items, which map to only three possible categories, but you are calling
getCategoryByPath() 100 times, you are wasting processing; if you cached the category, you would call this at most three times, despite
processing 100 (or even 10 million) items.
For example, if you see that a particular database query is taking 10 seconds, consider whether you can optimize the query (retrieve less data,
simplify what it searches for, use indexed attributes).

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Volume testing checklist


Use this checklist to conduct volume testing and potentially detect any performance problems. The objectives of conducting volume testing are to preemptively address
potential performance problems and to reset any expectations if necessary.

These possible considerations can help you identify the source of the performance problem that is occurring when volume testing:

As with any sort of testing, the earlier volume testing is done, the more time the development team has to correct any problems. However, testing without realistic test cases
and data yields inaccurate results. The best time to begin testing is early in the development cycle, but after the basic entities (specs, catalogs, hierarchies, and so
on) are identified.
Identify areas of the solution that extend beyond the normal range. Examples of these are:
Large number of items
Lots of searchable attributes
Large number of lookup tables
Large number of specs
Many location mapping/attributes per item
Many multi-occurrences, especially nested multi-occurrences
Large number of relationships between items (50+ relationships from a single item to other items)
Large number of relationships between items and categories (50+ categorizations of a single item)
Prioritize tests by criticality and availability. In some instances, the test cases that are most critical might not be ready for testing before other, less critical test cases.
This does not mean that you should wait for the most critical tests to begin testing. For example, if a solution is expected to have 20,000,000 items and 5000
specs, it might be easy to test browsing a single item with all 5000 specs before all 20,000,000 items are loaded. This way, any caching issues can be addressed
and problems avoided further down the line.
Establish a baseline. Run a set of basic tests in a controlled environment to establish the baseline. The tests need to be run in a manner that is repeatable and yields
consistent results for repeated runs.
Identify targets. After a baseline is established, the use cases can be analyzed for improvement. Then, the expected improvements can be estimated and targets
identified. With these estimates in place, the next step is to see how they line up with customer expectations and whether there is a need to reset them.
Analyze the use cases. After a baseline is established, the use cases can be analyzed for improvement, the expected improvements estimated, and
targets identified. With these estimates in place, the next step is to see how they line up with the customer's expectations and whether there is a need to reset
them.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Performance tuning for the application server


You can optimize application servers, both at the software and hardware level.

Typical server and process configuration


Your Product Master system is a three-tier Java™ Platform, Enterprise Edition application. In a typical production system, the web and application tiers are
combined on one or more physical systems (application servers).
Tune the application server
You tune a single application server in the same way that you tune multiple servers.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Typical server and process configuration


Your Product Master system is a three-tier Java™ Platform, Enterprise Edition application. In a typical production system, the web and application tiers are combined on
one or more physical systems (application servers).

The application server work station runs a Java application server, typically WebSphere® Application Server. Your system does not run within the context of the Java
application server, but as a separate process. The third tier, the data store, is a Relational Database Management System (RDBMS) on a separate physical system.

The application server communicates with the data store by using Java Database Connectivity (JDBC) over the underlying network, which is usually TCP/IP over Ethernet.
Users interact with the system over the network with HTTP, either through a web server such as Apache or directly with the Java application server.

The system consists of the following services:

1. admin service
2. rmiregistry
3. eventprocessor
4. scheduler
5. queuemanager
6. workflowengine
7. appsvr service
8. pim-collector service
9. indexer service
10. ML services

The appsvr process is the Java Platform, Enterprise Edition application server. The admin and rmiregistry services must run on each system. You can
implement load sharing by instantiating one or more of each of the remaining services (except the workflowengine service) on the same or separate physical work
stations. The pim-collector service interacts with the IBM® Product Master server to extract item and category data for indexing. The extracted data is then
published to the indexer service over a messaging channel. After full indexing is scheduled, the pim-collector service receives the trigger from the Product Master
server to start fetching items from all the existing catalogs, converts them into JSON, and publishes them to a queue. The indexer service transforms the item data
that is received from the pim-collector service and indexes it into Elasticsearch. The service uses Elasticsearch API endpoints to index catalog items and categories.



Optimizing the application server tier involves optimizing the server hardware for the chosen application server and tuning the Java application server. You must tune your
system for your environment and system demands.

Following is an example of a typical system configuration where all services exist on one physical workstation.

In clustered environments, tuning involves the placement and number of the services on multiple physical workstations. Following are the requirements for clustered environments:

The system binary files that are stored in the $TOP and $TOP/public_html folders should exist on a shared file system such as Network File System (NFS).
The Java application server must be installed on local storage, and all application servers must use the same User Identifier (UID) and Group ID (GID) to access the shared file system (see the verification sketch after this list).
In cases where there is a separate web tier (for example, the system uses Apache or some other web server that passes requests to the Java application server), the load that is placed on the system by the web server can be assumed to be negligible.
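
A quick way to verify the UID/GID requirement, as a minimal sketch (the user name pimuser is only an example):

# Run on each application server; the UID and GID values must match across servers
id pimuser
# Confirm that $TOP resolves to the shared file system
df -hT $TOP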

Following is an example of a clustered system configuration with multiple services spread across multiple workstations.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Tune the application server


You tune a single application server in the same way that you tune multiple servers.

You need to tune the following things:

Hardware
Product Master
WebSphere® Application Server
For tuning information, see WebSphere Application Server documentation.



Hardware sizing and tuning
Product Master is a Java™ Platform, Enterprise Edition application where RAM and processor performance are more important than, for example, disk access
speed.
System tuning
To tune Product Master, you must tune the JVM memory settings, horizontally and vertically scale the services, tune the scheduler, and tune the workflow.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Hardware sizing and tuning


Product Master is a Java™ Platform, Enterprise Edition application where RAM and processor performance are more important than, for example, disk access speed.

Allocating the hardware


Allocating the correct hardware is critical for sustained performance of the solution. The correct size of the hardware that is required to effectively run the final solution
depends on the volume of the activity on the system and the overall complexity of the solution. Correct sizing can be done by working with the technical sales team, IBM®
services team, or the performance team.

System memory
Ensure that there is sufficient physical memory available so that no memory swapping and paging occurs. On an environment that runs Product Master services, make sure that the sum of the -Xmx settings of all services does not exceed (total physical memory - 1.5 GB). For example, on a server with 16 GB of RAM, the combined -Xmx values of all services should stay below 14.5 GB.

For more information, see Configuring memory flags for each service type.

Tuning
Correctly sized hardware is only effective when it is properly tuned. There are a few key areas that commonly appear as the cause of performance problems:

Latency and bandwidth between the application server and database. The latency should be under 0.25 ms between the application server and the database; it can be measured by using the traceroute command on most systems. The connection between the two should be a gigabit Ethernet link capable of transferring large files at 25 MB/s through FTP.
Number of open file descriptors is too low. Unexpected problems can be avoided by checking the number of open descriptors and verifying that they are set to 8000, according to WebSphere® Application Server guidelines. The number of open descriptors can be checked by using the ulimit -a command on most computers, and can be reset by using the ulimit -n command.
Note: The ulimit command displays the settings for the logged-in user.
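
For example, to check and raise the limit for the current session:

# Display all limits for the logged-in user
ulimit -a
# Raise the open file descriptor limit to the recommended value
ulimit -n 8000

To make the change permanent, use the limits.conf entries that are described in the Performance tuning section later in this topic.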

File system
Although Product Master does not create a high demand on the disk, a separate file system is recommended. A separate file system might not affect performance greatly,
but it does separate the system data and log files from other files on the system and improves disk space administration. A file system for the document store and another
file system for the log files is recommended.

Product Master does not make heavy use of the disk for the subsystem in local installations, so nearly any disk configuration should suffice, whether it is single-disk,
spanning, or RAID arrays.

Use at least Gigabit Ethernet links between the database and application servers. In a clustered installation, Gigabit Ethernet between the application servers and the NFS server is a minimum, as is a well-tuned and suitably sized NFS server.

Load balancing and failover


The proper configuration of any application is to balance the load and allow for failover in case of unexpected problems. An easy way to address potential overloading of
the application server is to use a load balancer. Multiple instances of the scheduler can be started on one or more servers and various services of the scheduler load
balance themselves automatically.

Processor
You should obtain the fastest processor and the most memory affordable. Multiprocessor systems improve performance greatly, and a system for your services with less
than 8 GB of memory should not be considered for most applications. A dual-processor dual-core system and at least 8 GB of RAM is recommended.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

System tuning
To tune Product Master, you must tune the JVM memory settings, horizontally and vertically scale the services, tune the scheduler, and tune the workflow.



JVM tuning
Memory settings for all the Product Master services are in the $TOP/bin/conf/service_mem_settings.ini file. The default memory settings are not optimized.

Following are the best practices for JVM tuning:

Size your Java™ heap so that your application runs with a minimum heap usage of 40%, and a maximum heap usage of 70%.
Set the -Xmx and -Xms parameters for the scheduler, appsvr, and workflowengine services to 1024 or 1536 m. Other services can initially remain at the default size of 64 m.
On 64-bit environments, increase memory settings, if needed, to a value higher than 1536 m, as long as there is sufficient physical memory available so that no memory swap occurs.
On 32-bit environments, the -Xmx setting should not be increased higher than 1536 m, as this increases the risk of running out of native Java memory.
For optimal settings, memory usage needs to be monitored and adapted. For more information, see JVM monitoring and Configuring memory flags for each service type.
Note: You can add the -verbose:gc flag to the $TOP/bin/conf/service_mem_settings.ini file. By default, the verbose recording occurs in the svc.out file. You can specify a different file by adding the following:

-Xverbosegclog:<file path and file name>

To take a snapshot of JVM service memory usage at any point in time, run the following:

$JAVA_RT com.ibm.ccd.common.wpcsupport.util.SupportUtil --cmd=getRunTimeMemDetails

However, for continued monitoring, use of verbose gc is the recommended method.
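
As an illustration, the per-service flags might look like the following. This is a sketch only; the exact entry names in service_mem_settings.ini are an assumption, so verify them against your own file:

# Illustrative entries only; confirm the key names in your installation
scheduler=-Xms1024m -Xmx1536m -verbose:gc -Xverbosegclog:$TOP/logs/scheduler/gc.log
appsvr=-Xms1024m -Xmx1536m
workflowengine=-Xms1024m -Xmx1536m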

The memory settings for the appsvr service are set in the application object, and can be changed in the System Status screen of the System Administrator module by re-creating the application object.

Scheduler tuning
To tune the scheduler you set memory flags for the size of the largest job and instantiate enough schedulers that are based on the number of processors in the application
server workstation.

The number of schedulers to set up is determined by the number of processors in the scheduler server at a 1:1 ratio. This ratio includes hyper-threaded processors, but the ratio can be increased slightly for dual-core processors to 2:1. You should test this ratio to measure its effect on performance gains.

Each scheduler can run multiple worker threads. Each worker thread can run multiple jobs, and 8 is the default number of worker threads. The number of threads is specified by the num_threads parameter in the $TOP/etc/default/common.properties file, as shown in the example after this paragraph. In environments with large numbers of jobs, this number can be increased to 10 or even 20, but increasing the number of schedulers, scheduler servers, or both is more useful.
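
For example, the relevant line in the properties file looks like this:

# $TOP/etc/default/common.properties
# Worker threads per scheduler; the default is 8
num_threads=8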

Large jobs can benefit more from configurations with multiple schedulers where each scheduler runs a single thread. A single thread per scheduler increases the amount
of memory that each scheduler has per job.

Tip: If possible, do not run the scheduler on a system that also runs the appserver service.

Workflow tuning
Increasing the memory by setting the -Xmx parameter to 1536 m is the only tunable aspect of the workflowengine service.

Horizontal and vertical scaling


For information about horizontal and vertical scaling, which involves implementing product services across multiple application servers or multiple services on the same server to improve performance, see Configure a cluster environment.

Performance tuning
Important: You need sudo user access to set the following parameters.

Ulimit parameters
Add the following lines in the limits.conf file at the /etc/security/ folder to improve performance:

* soft nofile 100000


* hard nofile 100000
* soft nproc 100000
* hard nproc 100000
* soft core unlimited
* hard core unlimited
<db instance owner> soft nproc 100000
<db instance owner> hard nproc 100000

Note: Depending upon the scenario, increase these values in case of any performance issue.
TCP tuning
Open a command-line window, and run the following commands:

sysctl -w net.ipv4.tcp_tw_recycle=1
sysctl -w net.ipv4.tcp_tw_reuse=1

To verify, run the following command:

sysctl -a | egrep "reuse|recycle"

The value of the following keys should now be 1:



net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_tw_reuse = 1

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Performance tuning for the Persona-based UI


Ensure that you follow the performance best practices for the Persona-based UI.

Best practices for the Export and Import feature (catalogs and hierarchies)
Following are some of the best practices.

Use the Export and Import feature for a small set of items.
The total number of items to be imported or exported should ideally be around 5,000 items, with a maximum limit of 10,000. Increasing the items beyond these limits might degrade performance (the time that is taken to complete the import transactions).
For import or export of more than 10,000 items, you can use the Admin UI jobs.
Increasing the number of jobs linearly increases the number of items that are imported in a given time.
For example, importing 10,000 items with 1 job takes approximately 24 minutes, while importing 10,000 items with 2 jobs (each job having 5,000 items) takes approximately 12 minutes.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Troubleshooting
To help you understand, isolate, and resolve problems with your IBM® software, the troubleshooting information contains instructions for using the problem-determination
resources that are provided with your IBM product.

The first step in the troubleshooting process is to describe the problem by using the following basic questions:

"What are the symptoms of the problem?"


"Where does the problem occur?"
"When does the problem occur?"
"Under which conditions does the problem occur?"
"Can the problem be reproduced? "

Answers to these questions typically lead to a good problem description, which can then lead to a problem resolution.

What are the symptoms of the problem?


Following questions help you create a more descriptive picture of the problem:

Who, or what, is reporting the problem?


What are the error codes and messages?
How does the system fail?
For example, is it a loop, hang, crash, performance degradation, or incorrect result?

Where does the problem occur?


Following questions help you to focus on where the problem occurs to isolate the problem layer:

Is the problem specific to one platform or operating system?


Is it common across multiple platforms or operating systems?
Is the current environment and configuration supported?

Describe the problem environment, including the operating system and version, all corresponding software and versions, and hardware information. Also, confirm that you
are running within an environment that is a supported configuration.

When does the problem occur?


You can easily develop a timeline by working backward: Start at the time an error was reported (as precisely as possible, even down to the millisecond), and work
backward through the available logs and information.

Following questions help you develop a detailed timeline of events:

Does the problem happen only at a certain time of day or night?


How often does the problem happen?
What sequence of events leads up to the time that the problem is reported?
Does the problem happen after an environment change, such as upgrading or installing software or hardware?



Under which conditions does the problem occur?
Following questions about your environment can help you to identify the root cause of the problem:

Does the problem always occur when the same task is being performed?
Does a certain sequence of events need to occur for the problem to surface?
Do any other applications fail at the same time?

Remember that just because multiple problems might have occurred around the same time, the problems are not necessarily related.

Can the problem be reproduced?


From a troubleshooting perspective, an ideal problem is one that can be reproduced. The following questions can help you answer this question:

Can the problem be re-created on a test system?


Are multiple users or applications encountering the same type of problem?
Can the problem be re-created by running a single command, a set of commands, or a particular application?

Troubleshooting issues
Use the following troubleshooting sections to help identify your problem and possible resolution.
Tools for troubleshooting
Tools are available to collect and analyze both system and performance diagnostic data.
Log files
Log files generally record Product Master runtime events, including exception traces and error messages and can help you resolve issues.
FAQs - Cloud offering
Read frequently asked questions about Product Master.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Troubleshooting issues
Use the following troubleshooting sections to help identify your problem and possible resolution.

Following questions can help you identify the source of a problem that is occurring in Product Master:

1. Is the configuration supported? For more information, see System requirements.


2. Have any error messages been issued? For more information, see Using LTA tool.
3. How long has the problem been occurring?
Did you recently install or begin by using this product feature for the first time?
Did the feature work until some point and then start to fail?
4. If the problem occurred subsequent to some period of normal operation, did anything change in the environment?
Was the client, host, or server upgraded to a new patch level?
Was an operating system patch applied?
Did the network environment change? For example, was a server moved or a domain migrated?
Did the system (client or server) recently fail or abnormally terminate?
5. Can you reproduce the problem on a test system (so that you do not negatively affect the production system)? What steps are required to reproduce the problem?
6. How many users are impacted?
Is this problem affecting one, some, or all users?
Is the problem occurring only for a user who was recently added to the environment, such as a new employee?
Do differences exist between the users who are affected and the users who are not affected?
7. How many applications or business processes are impacted?
Is this problem affecting one, some, or all applications or business processes?
Is the problem occurring only for a new application or business process?
Do differences exist between the applications or business processes that are affected and the applications or business processes that are not affected by the
problem?
8. Is the problem specific to this feature in the product?
Are multiple features within the product affected?
Are similar problems occurring outside of the application, such as with other applications or operating system operations?
9. Do the logs or system output identify a specific error?
When this problem occurs, is a specific error message or error code issued?
Are the errors reported in a popup window, in logs on the client or server, or elsewhere?
Is trace output of the operation available?
10. If the topics do not guide you to a resolution, you can collect more diagnostic data. This data is necessary for IBM Software Support to effectively troubleshoot and
assist you in resolving the problem. For more information, see Contacting IBM Support.

Troubleshooting general issues


Use the following topics to help resolve the common issues in Product Master.
Unable to start the Java Message Service (JMS) receiver
Unable to start or stop the messaging receiver, also known as the Java Message Service (JMS) receiver, in Global Data Synchronization for Product Master.
Troubleshooting profiling agents issues
Use the following topics to help resolve issues pertaining to profiling agents. These topics will help you to resolve common issues of the CPU and memory profiling
agents in Product Master.
Troubleshooting application server issues
Use the following topics to help resolve application server issues.



Troubleshooting connectivity issues
Use the following topics to help resolve connectivity issues.
Troubleshooting user interface issues
Use the following topics to resolve common issues with the Product Master client.
Troubleshooting database issues
Use the following topics to resolve common issues with the database.
Troubleshooting performance issues
Use the following topics to resolve common issues with Product Master performance.
Troubleshooting multicast
Product Master uses ehcache to perform distributed caching of objects between Java™ virtual machines (JVMs), both on the same machine and in the same network. Each Product Master JVM needs to know about all of the other JVMs in the cluster. To achieve this, a JVM periodically sends a multicast message, for example saying "I'm here!". Without multicast, the configuration of all the caches would be tedious and error-prone. Consequently, IBM recommends that you use the multicast configuration of ehcache. Although you can configure the cache setup in the $TOP/etc/default/mdm-ehcache-config.xml file yourself, note that IBM does not support non-multicast configurations.
Troubleshooting product installation
If the installation of Product Master fails, you can try performing the following debug steps.
Troubleshooting migration issues
Describes some common issues for troubleshooting that might come up during migration in IBM Product Master.
Troubleshooting the Persona-based UI issues
Use the following topics to resolve common issues with the Persona-based UI.
Troubleshooting the SAML SSO issues
Use the following topics to resolve common issues with the SAML SSO.
Troubleshooting the operator issues
Use the following topics to resolve common issues with the operator-based deployment.
Troubleshooting the docstore issues
Use the following topics to resolve common issues with the docstore.
Troubleshooting import job schedule issues
Use the following topic to resolve common issues with the import job schedule.
Troubleshooting Admin UI "Error 203" issue
Use the following topic to resolve the "Error 203" issue in the Admin UI.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Troubleshooting general issues


Use the following topics to help resolve the common issues in Product Master.

During installation, no error in the Test Connection option on the Database Configuration window
I have never received this error before. Why am I getting this error and how can I fix it?
Workflow events are not working properly
You need to stop and restart Product Master when workflow events do not seem to be working.
Items stuck in the merge workflow step
We have recently seen a pattern where items stay in the "Merge" step for an extended time. How can we check whether any of these items are stuck or whether it is
waiting for a split item entry to join before moving to the next step? If items are stuck, how can we move them to the next step and how can we find the root cause?
Appsvr (WebSphere® Application Server) process stops responding when started or stopped
When global security is enabled in WebSphere® Application Server but admin_security=false is specified in the env_settings.ini file, you are prompted with a
dialog box to provide the WebSphere Application Server administrative user name and password.
Error processing xml queue while saving
Whenever I save a particular item (or category) in the user interface or try to use it in a job, I get XML parser errors; other data objects are fine. Why am I getting this
error and how can I correct it?
Errors while running create_schema.sh script
I am trying to install the product with DB2® database. When I run create_schema.sh, I get SQL0552N and/or SQL0403W errors. Why am I getting these errors and
how can I prevent them?
Error occurred in XML processing error
I have never received this error before. Why am I getting these errors and how can I fix them?
CWPAP0127E: The Java API object reference is inaccessible due to deletion or similar operation
Why am I getting the CWPAP0127E: The Java API object reference is inaccessible due to deletion or similar operation exception and
how do I fix it?
Cannot end Appsvr (WebSphere® Application Server) process
When global security is enabled in WebSphere Application Server and admin_security=true is set in the env_settings.ini file, if an incorrect WebSphere
Application Server administrator user name or password is specified in the env_settings.ini file or in the command line, the command to stop the appsvr process will
fail.
Invalid special character strings
In general, you should avoid all uses of HTML special character strings.
Importing hierarchy content for hierarchies with categories that have a relationship attribute set not supported
I have never received this error before. Why am I getting this error and how can I fix it?
Date fields in the Mass Transactions screen not displaying proper values
I have never received this error before. Why am I getting this error and how can I fix it?
Cannot access items or categories due to lock busy messages
Encountering slave or stale locks does not impact performance, but rather prevents proper access to objects.
No such file or directory warning when using the CCD_CLASSPATH environment variable to compile Java classes
When using CCD_CLASSPATH environment variable to compile Java classes for Product Master warnings about .jar files not found are displayed.
Restrictions with multi-domain entities and translation
In Product Master versions 10.1 and earlier, Product Master assumed the default product catalog entities to be items, and the hierarchy entities to be categories. The multi-domain feature removes this restriction and allows the product entity to be a user-defined domain entity that is specified on the product catalog and hierarchy as well.
Some docstore files do not show up in the file system
An export job re-creates a few output files in a docstore folder, which has been mounted to the file system. While I see some of the files copied from the docstore
into the file system with the correct time stamp, the rest of the files do not show up.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

During installation, no error in the Test Connection option on the Database Configuration window


I have never received this error before. Why am I getting this error and how can I fix it?

Symptoms
While installing Product Master, if you provide the database home of a different version of Db2® from the one on which the Product Master database is created and you select the Test Connection option, no error is displayed.

Causes
This issue occurs if you have more than one version of Db2 installed. In that case, if you enter the database home of the older version of Db2 and all the other parameters of the latest Db2 version, the Test Connection is still established successfully, because the database home is not used while establishing the connection to the database.

Resolving the problem


While installing Product Master, make sure that you specify the correct database home. Also, avoid installing multiple versions of Db2.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Workflow events are not working properly


You need to stop and restart Product Master when workflow events do not seem to be working.

Symptoms
Sometimes reserving an item in a workflow step by clicking Reserve does not succeed. Even clicking Refresh multiple times does not show the item as being reserved. The reservation of items in workflows is processed by the workflow engine; therefore, the logs for the workflow engine must be checked. The logs are located in the $TOP/logs/workflowengine/ folder.

Causes
If there are exceptions in the log files with time stamps close to the time when Reserve was clicked, then search for Technotes describing such messages; otherwise, open a ticket with Support.

Diagnosing the problem


When most of the workflow engine log files (especially the ipm.log) have not been updated recently (check with ls -l), this might be an indication that the workflow engine has stopped running. In this case, you would also see an increasing backlog of new workflow events when triggering new workflow-related actions like checking out items or reserving items. This can be checked by running the following query, for example, from within the DB Admin console (click System Administrator > DB Admin):
select count(*) from wfe where wfe_event_status ='NEW';
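
For example, a quick freshness check of the logs from the command line (using the log folder described above):

# Show the most recently modified workflow engine log files
ls -lt $TOP/logs/workflowengine/ | head

If the log files are stale and the count of 'NEW' events keeps growing between runs of the preceding query, the workflow engine is likely no longer running.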

Resolving the problem


If there is an indication that the workflow engine is no longer responding, follow these steps:

1. Stop all Product Master services:


$TOP/bin/go/stop_local.sh
2. Make sure that there are no other processes running by entering the following command:
ps -ealf | grep <user>
The pattern of related Java™ processes looks something like the following examples:
AIX

jonas13 471166 1 0 Jun 15 - 3:59 /opt/WebSphere/AppServer/java/bin/java
-DTOP=/opt/wpc/envs/jonas13/wpc_532_IF11_DB2
-DCCD_ETC_DIR=/opt/wpc/envs/jonas13/wpc_532_IF11_DB2/etc
...
-Dprofiler_opts=__ com.ibm.ccd.workflow.common.WorkflowEngine force

Linux

0 S stan9 19261 1 0 75 0 - 298022 schedu Jun03 ? 00:43:49 /opt/WebSphere/AppServer6/java/bin/java
-DTOP=/wpc/envs/stan9/wpc_5328_DB2
-DCCD_ETC_DIR=/wpc/envs/stan9/wpc_5328_DB2/etc -Dfile.encoding=ISO8859_1
-classpath /wpc/envs/stan9/wpc_5328_DB2/jars/ccd_svr.jar:/wpc/envs/stan9/wpc_5328_DB2/jars/ccd...

If you find processes like those shown in the preceding step, you need to end them by using the kill command. The process IDs to use are shown in the preceding examples. The syntax for the kill command is:
kill -9 <stale process id> <stale process id> ...
For example: kill -9 471166 19261
3. Restart all Product Master services:
$TOP/bin/go/start_local.sh
4. Start the user interface and reserve an item in any workflow. This time the item should be reserved successfully, showing the reserve sign. However, if the problem
is not solved, contact IBM® Software Support.

Stopping and restarting Product Master

To ensure that Product Master starts correctly, first stop, then restart Product Master by using the following steps.

Procedure

1. Attempt to gently stop Product Master by running the stop_local.sh script:


$TOP/bin/go/stop_local.sh
2. Wait for approximately one minute, then run the following command:
ps -u $USERNAME
3. If there are any active Java processes, a scheduled job might still be in progress. You can let the job complete or you can stop it manually by using the
abort_local.sh script:
$TOP/bin/go/abort_local.sh
4. Wait for approximately thirty seconds, then run the following command:
ps -u $USERNAME
5. If there continues to be active Java processes, the JVM might have crashed. The Java processes must be manually stopped by running the following command:
kill -9 `ps -u $USERNAME | grep java | cut -b1-5`
6. If any Java processes still exist, restart the system.
7. Once all Java processes have been stopped, restart Product Master by using the start_local.sh script:
$TOP/bin/go/start_local.sh
8. After the start_local.sh script completes, run the rmi_status.sh script to verify that all services return a status and confirm that Product Master has started
correctly:
$TOP/bin/go/rmi_status.sh
9. Open a browser and ensure that you can log into Product Master.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Items stuck in the merge workflow step


We have recently seen a pattern where items stay in the "Merge" step for an extended time. How can we check whether any of these items are stuck or whether it is
waiting for a split item entry to join before moving to the next step? If items are stuck, how can we move them to the next step and how can we find the root cause?

Symptoms
An item appears in the merge step when any of the feeder steps complete processing. The item stays there waiting for the remaining feeder steps to complete processing
and maintains an internal count of how many of the feeder steps have completed processing. When all prior steps have completed processing, the item will move onto the
next step. Therefore, for an item to be stuck, all feeder steps must have completed processing and the item should still be present in the merge step.

Diagnosing the problem


Following are some of the ways of identifying a stuck item:

1. Manually review all the steps from item split to merge and check whether an entry exists for that item in the prior steps. If it does, then the merge step is waiting for
that entry before sending the item through and is hence not stuck. If an entry doesn't exist in prior steps, then it is stuck.
2. The product maintains an audit log of all activities in the workflow. This is done in the CEH table and among other things, it stores the following information in
database columns:
2.1. CEH_DATE: Time the event happened
2.2. CEH_ENTRY_KEY: Primary key of item
2.3. CEH_WFL_NAME: Name of the workflow
2.4. CEH_USER_NAME: The user who moved the item
2.5. CEH_STEP_PATH: Name of the workflow step
2.6. CEH_EVENT: The event, which happened like BEGINSTEP, ENDSTEP, RESERVE_ACTIVE_LOCK, RELEASE_ACTIVE_LOCK and so on



Therefore, if an item entered the workflow step, then it should register a BEGINSTEP event for that item. If x number of input steps feed an item, then the item will wait for x BEGINSTEP events before moving through to the next step (through the ENDSTEP event in the workflow). If there are fewer than x BEGINSTEP events, then the item is not stuck and is just waiting for one of the split entries to join. If it has x BEGINSTEP events and is still in the merge step, then it is stuck. You can use the following SQL to get this information (see also the example query after this list):
SELECT * FROM CEH WHERE CEH_ENTRY_KEY = '<Primary Key of Item>' ORDER BY ceh_date DESC;
3. The number of split item entries, which have reached the merge step is saved in the CAE_DATA column of the CAE table. For a merge step with x feeding steps, if
the CAE_DATA has less than x, then it is not stuck while a value of x and the item still in the workflow step would indicate it is stuck.
4. We can also get the equivalent of the number of BEGINSTEP events fired, or CAE_DATA, programmatically by using the getEntryMergeState function. We can run this function for one or all items in the workflow step, and if the value returned is x (x being the number of feeder steps), then the item is stuck. Usage:
int CollaborationArea::getEntryMergeState(Entry entry, String stepPath)
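
Building on step 2, a minimal sketch of a query that counts the BEGINSTEP events for a single item in the merge step (the placeholders are yours to fill in):

SELECT COUNT(*) FROM CEH
WHERE CEH_ENTRY_KEY = '<Primary Key of Item>'
AND CEH_STEP_PATH = '<Path_To_Merge_Step>'
AND CEH_EVENT = 'BEGINSTEP';

If the count equals the number of feeder steps and the item is still in the merge step, the item is stuck.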

Resolving the problem


You can use any of the following methods to move an item through to the next step:

1. Open the merge step and manually move the stuck item to the next step.
2. You can use code to move an item through to the next step by using the following function:
HashMap CollaborationArea::moveEntryToNextStep(Entry entry, String stepPath, String exitValue)
The above method posts a request to move the entry from the specified stepPath to the next step for the given exitValue. It returns a hash map of item primary key to string of validation errors (which can be zero-length). The move will take place after the current transaction has committed.

3. If multiple items are stuck, then you can move all of them simultaneously through code. Following sample code loops through all the items in a collaboration area
workflow step, checks for items whose CAE_DATA is equal to x (x being number of feeder steps) and then moves them to the next step:
var sColAreaName = "<WORKFLOW_NAME>";
var sStepPath = "<Path_To_Workflow_Step>";
var sStepAction = "DONE";
var sAttribPath = "Attribute_Path_For_Primary_Key";
var hmResult;
var oColArea = getColAreaByName(sColAreaName);
var oEntrySet = oColArea.getEntriesInStep(sStepPath);
forEachEntrySetElement(oEntrySet, oEntry)
{
   if(null != oEntry)
   {
      var iEntryMergeState = oColArea.getEntryMergeState(oEntry, sStepPath);
      out.writeln("INFO:: Merge state for Entry PK ["+oEntry.getEntryAttrib(sAttribPath)+"] is: "+ iEntryMergeState);
      if(null != iEntryMergeState && iEntryMergeState == x) //Replace x with the number of feeder steps
      {
         hmResult = oColArea.moveEntryToNextStep(oEntry, sStepPath, sStepAction);
         if(null != hmResult && hmResult.size() > 0)
         {
            out.writeln("ERROR:: Result of moveEntryToNextStep() call: "+hmResult);
         }
      }
   }
}

If you identify a stuck item, then you can use the following approach to identify the root cause:

1. Query the CEH table to look for anomalous entries for the stuck item, for example, automated steps that have a BEGINSTEP but no ENDSTEP. You can use the same SQL as mentioned earlier:
SELECT * FROM CEH WHERE CEH_ENTRY_KEY = '<Primary Key of Item>' ORDER BY ceh_date DESC;
2. If you do not find suspicious entries in the CEH audit trail, then note down the time stamps for events like the BEGINSTEP event of the merge step, the ENDSTEP event of the feeder step, and so on. They are printed as part of the preceding SQL output.
3. Analyze the logs (especially the workflow engine logs and custom logging) and search for errors or warnings during that time stamp.
If you do not find a descriptive error message, then raise the logging level to debug and repeat the preceding process. Refer to the knowledge center on how to
increase the level of logging to debug.

Merge Type Workflow Step

Symptoms

A merge step ensures all of the incoming steps are completed for that entry and then creates a single merged item before forwarding it to the next workflow step. If x
number of steps point to the merge step, then x copies of the entry must reach this merge step before this item can move to the next step.



Workflow steps can be broadly categorized into two main types: "User steps" and "Automated". User steps are ones where the user (or script) must go to the workflow step, make the wanted modifications to an item or items, and then move the item or items to the next step by selecting an Exit value. Automated steps are ones where items move through them and on to the next one without any user interaction. The purpose of these types of steps is to take a predefined action or do logical checks by using code in the IN and OUT functions of these workflow steps and then move the items to the next step.
A merge type step is a special type of automated step. During setup, you specify more than one entry point (multiple steps feeding entries to this step), and the step combines all these entries and outputs a consolidated form of the item.
Note: The Admin user can move items out of a merge step even when all of the inputs to the merge have not arrived.
A step is defined as a merge step by setting the "Type" field to "Merge" in the workflow step definition page. You can reach the workflow step definition page by using the following path: Data Model Manager > Workflows > Workflow Console > Select a workflow to open and click Add Step or open an existing step.

Split steps are not a separate step type. We define them by specifying multiple entries in the "Next Step" column for that step in the workflow definition screen.

These splits and merges are used for division of labor and to have users work on distinct workflow activities in parallel; thus increasing efficiency and reducing
bottlenecks.

Resolving the problem

An item appears in the merge step when any of the feeder steps complete processing. In the preceding example, as soon as any of the feeder steps complete (namely Split11, Split21, Split31, and Split41), an item appears in the "Sample Merge" step. But the entry in the "Sample Merge" step waits for the rest of the feeder steps to complete processing, and once all of them finish, the item moves through to the next step. These feeder steps can be of any type: user steps or automated. At any time while the merge step is waiting for input, a user can manually (or via script) open the item in the merge step and move the entry through to the next step. The item moves through fine and no data inconsistencies are introduced, but the changes that have been made in other feeder steps that have not yet completed processing will be lost.
As the item is waiting, it maintains an internal count of how many of the feeder steps have completed processing. We can get this information by querying the CAE_DATA column in the CAE table. For a merge step, this count is 1 when the first of the feeder steps completes processing. This count increases by 1 every time another feeder step completes processing. These increments continue until all the feeder steps have completed; 4 in the case of the preceding example. When this count reaches four, the item moves through to the next step.

We can also get the state of the item (CAE_DATA value) programmatically by using the following function:

int CollaborationArea::getEntryMergeState(Entry entry, String stepPath)

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Appsvr (WebSphere® Application Server) process stops responding when started or stopped


When global security is enabled in WebSphere® Application Server but admin_security=false is specified in the env_settings.ini file, you are prompted with a dialog
box to provide the WebSphere Application Server administrative user name and password.

Symptoms
The attempt to start or stop the appsvr process appears to hang.

Causes
The likely cause is that you used an environment like PuTTY, which does not have graphical support; the attempt to display a dialog box led to a lengthy timeout.

Resolving the problem


If you decide to keep global security enabled in WebSphere Application Server, update the env_settings.ini file and set admin_security=true. In addition, you need to
specify the administrator user name and password in the username and password parameters in the env_settings.ini file or provide these values in the
wsadminUsername and wsadminPwd arguments in the command line.
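
As an illustrative sketch only (the placeholder values are assumptions, and the exact layout of your env_settings.ini file can differ):

# env_settings.ini fragment (illustrative)
admin_security=true
username=<WebSphere administrative user name>
password=<WebSphere administrative password>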

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Error processing xml queue while saving


Whenever I save a particular item (or category) in the user interface or try to use it in a job, I get XML parser errors; other data objects are fine. Why am I getting this error
and how can I correct it?

Symptoms
The application creates an XML copy of all the items and categories before committing the save operation in the database. If an item (or category) contains an unreadable
character, then trying to save that item will result in an XML parser error as shown below:
Caused by: Error processing xml queue. m_xmlPendingQueue contained at time of failure: []
...
com.ibm.ccd.content.common.EntryXmlProcessor.sendPendingXmlToDb(EntryXmlProcessor.java:311)
at com.ibm.ccd.common.context.common.DBContext.ensurePendingXMLWrittenToDB(DBContext.java:592)
...
Caused by: CWPCM0577E:failed to update xml: '<entry>
...
at com.ibm.ccd.common.util.db2.DB2Utils.mergeXML(DB2Utils.java:197)
at com.ibm.ccd.content.common.EntryXmlProcessor.sendPendingXmlToDb(EntryXmlProcessor.java:294)
... 25 more
Caused by: com.ibm.db2.jcc.b.yn: DB2 SQL Error: SQLCODE=-16111, SQLSTATE=2200M, SQLERRMC=2, DRIVER=3.53.95
at com.ibm.db2.jcc.b.bd.a(bd.java:668)
...

Causes
The application places few restrictions on what type of data can be used with the software. But while saving special characters like &, $, Ð and so on, to the database, it
is essential that these characters are first converted into database compatible format, for example and can need to be converted to &. If such a conversion does not take
place, then the invalid format can result in an unreadable character causing data corruption.

Resolving the problem


Follow these steps to correct the data corruption for an item:

1. Open the item in the single edit screen.


2. Browse the item attributes for special characters.
3. Remove the special character and try to save the item. If the save operation completes, then there is no further corruption and you can proceed to step 6.
4. If you can't save the item, then repeat steps 2 and 3 until all the special characters have been accounted for.
5. If you remove all special characters and still can't save the item, then delete it and create a new one with identical information.
6. If you were able to successfully save the item, then add all the special characters back to the item attributes and save the item for one last time. Follow similar
steps for a category.
Note: The application can support most special characters and a correctly saved special character will not cause this error. The root cause of the error is that the
character was not saved properly in the database leading to an unreadable database entry. This might be due to a number of reasons including an exception while
trying to perform the save operation, the database not being stable and so on.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Errors while running create_schema.sh script


I am trying to install the product with DB2® database. When I run create_schema.sh, I get SQL0552N and/or SQL0403W errors. Why am I getting these errors and how can
I prevent them?

Symptoms
As the error text suggests, SQL0552N means that the database user does not have the necessary permissions to create or edit data structures like tables.

Causes
Before using the application, you must create a database schema by running $TOP/bin/db/create_schema.sh. Since this script creates data structures like tables and indexes, the database user of the application must have the necessary permissions to run this script successfully.
If the database user does not have these permissions, then you get SQL0552N or SQL0403W errors, usually in the following form:

===== ERROR SQL0552N "Database_User" does not have the privilege to perform operation
or
===== ERROR SQL0403W The newly defined alias "Database_User.XYZ" resolved to the object
===== ERROR SQLSTATE=01522
===== ERROR SQLSTATE=42601

where XYZ is the table name.

Resolving the problem


To correct this error, give the database user the following permissions:

1. CREATETAB
2. BINDADD



3. CONNECT
4. CREATE_NOT_FENCED
5. IMPLICIT_SCHEMA
6. LOAD ON DATABASE

Meanwhile, SQL0403W occurs when the table or view being referenced does not exist. The reason can be that the user does not have the necessary table space usage permissions. As of version 10.1.0, the database user needs usage permission for the following table spaces: USERS, INDX, BLOB_TBL_DATA, XML_DATA, and XML_LARGE_DATA.

If you are using a custom table space mapping file with the table spaces ICM_DATA, ICM_IX, ITM_DATA, ITM_IX, ITD_DATA, ITD_IX, ITA_DATA, ITA_IX, LCK_DATA, and LCK_IX, then you should grant the necessary usage permissions for those table spaces as well.
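
For example, a minimal sketch using the Db2 command line; the database name and user name below are placeholders:

db2 CONNECT TO <database_name>
db2 "GRANT CONNECT, CREATETAB, BINDADD, CREATE_NOT_FENCED, IMPLICIT_SCHEMA, LOAD ON DATABASE TO USER <database_user>"
db2 "GRANT USE OF TABLESPACE USERS TO USER <database_user>"

Repeat the USE OF TABLESPACE grant for each of the table spaces that are listed above.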

For more information, refer to the related URL section.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Error occurred in XML processing error


I have never received this error before. Why am I getting these errors and how can I fix them?

Symptoms
I was saving item data (or running migrateDataToXml.sh) and got an error saying invalid XML character.

Causes
IBM® Product Master uses XML format to store item data in the database. XML has universal standards and supports only a clearly defined set of Unicode characters (see Valid characters in XML).
If you try to save an item with an invalid XML character in one of the attributes, then the application will not be able to store the item attribute in XML format. This can happen during:

1. Entering data into the system, for example, data entry in the user interface or import jobs.
2. Running scripts like $TOP/bin/migration/migrateDataToXml.sh. This script is run during migration to any post-10.0.0 Fix Pack 1 version of the product to create the XML format of all existing data. If this script finds any invalid characters while creating the XML format, it gives an error stack similar to the following:
Error stack for Oracle:

java.sql.SQLException: ORA-31061: XDB error: XML event error
ORA-19202: Error occurred in XML processing
In line 240 of orastream:
LPX-00217: invalid character 19 (U+0013)
at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:445)
at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:396)
at oracle.jdbc.driver.T4C8Oall.processError(T4C8Oall.java:879)
at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:450)
at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:192)
at oracle.jdbc.driver.T4C8Oall.doOALL(T4C8Oall.java:531)
at oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.java:207)
at oracle.jdbc.driver.T4CPreparedStatement.executeForRows(T4CPreparedStatement.java:1044)
at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1329)
at oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:3584)
at oracle.jdbc.driver.OraclePreparedStatement.executeUpdate(OraclePreparedStatement.java:3665)
at oracle.jdbc.driver.OraclePreparedStatementWrapper.executeUpdate(OraclePreparedStatementWrapper.java:1352)
at com.ibm.ccd.common.util.oracle.OracleUtils.mergeXML(Unknown Source)
at com.ibm.ccd.content.common.EntryXmlProcessor.sendPendingXmlToDb(Unknown Source)
at com.ibm.ccd.common.context.common.DBContext.ensurePendingXMLWrittenToDB(Unknown Source)
at com.ibm.ccd.common.context.common.DBContext.commit(Unknown Source)
at com.ibm.ccd.common.context.common.DBContext.commit(Unknown Source)

Resolving the problem


To correct this error, remove the invalid character from the item attribute. Since only certain types of attributes can hold these characters, you can ignore many attribute types like integers, enumerations, lookup tables, sequences, and so on. These bad values probably come from string type attributes and can contain long strings or external URLs. These invalid attributes can be introduced with a feed file import or be present in the database from previous versions.
The following list describes some of the ways to detect them:

1. Open the item and browse through the attributes to look for the invalid character.
2. Find the attribute through trial and error. This can be done by removing an attribute value and saving the item. If it gives the same error, then the attribute is good, so paste the value back. Repeat this process until you are able to save the item; the last removed attribute is the one with the bad value.
3. Use the following script to print the item and browse through it to look for attributes, which might be a candidate for invalid characters:
var ctg;

ctg = getCtgByName("Catalog_Name");

if(ctg != null)

out.writeln(ctg1.getEntryByPrimaryKey("pk1"));

out.writeln(ctg1.getEntryByPrimaryKey("pk2"));

....

}
4. You can also use any third-party XML tool to detect the invalid character.

To fix this attribute, you can:

1. Open the item in the user interface, remove the bad character from the attribute and save the item.
2. Use SQL, WQL, or a script to delete the attribute value and set it to NULL.
3. If there are lots of items, then write an export job to export values of all the items. Then, edit the bad attribute values, and import these values again by using a feed
file or an import job.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

CWPAP0127E: The Java API object reference is inaccessible due to deletion or


similar operation
Why am I getting the CWPAP0127E: The Java API object
reference is inaccessible due to deletion or similar operation exception and how do I fix it?

Symptoms
Certain methods are not designed to be used with newly created items, for example, getModifiedAttributesWithNewData. It can not be used with a new items because it
compares the new and old versions of the attribute value and in the absence of old data.

Resolving the problem


You might encounter the above-mentioned error while adding an item in the single edit screen, collaboration area, import job, or custom tool. To troubleshoot it, check the corresponding exception.log file for the error stack, which should be similar to the following:
com.ibm.pim.common.exceptions.PIMInternalException: CWPAP0127E:The Java API object reference is inaccessible due to deletion or similar operation.
at com.ibm.ccd.api.attribute.AttributeChangesImpl.checkOwners(Unknown Source)
at com.ibm.ccd.api.attribute.AttributeChangesImpl.getEntryChangedData(Unknown Source)
at com.ibm.ccd.api.attribute.AttributeChangesImpl.Problematic_Function(Unknown Source)
*** location of the custom code where the problematic function was encountered ***
... so on
Identify the exact location of the problematic function by using the sample error stack above and modify the logic accordingly. For example, if you get the following exception:

com.ibm.pim.common.exceptions.PIMInternalException: CWPAP0127E:The Java API object reference is inaccessible due to deletion or similar operation.
at com.ibm.ccd.api.attribute.AttributeChangesImpl.checkOwners(Unknown Source)
at com.ibm.ccd.api.attribute.AttributeChangesImpl.getEntryChangedData(Unknown Source)
at com.ibm.ccd.api.attribute.AttributeChangesImpl.getModifiedAttributesWithNewData(Unknown Source)
at com.company_name.pim.validations.validateItem(ValidateAttribute.java)
at com.company_name.pim.validations.ValidateAttribute.java.applyRule(ValidateAttribute.java)
... so on



You can prevent the above error by modifying ValidateAttribute.java to check whether the item is new, and to not use getModifiedAttributesWithNewData for new items. This behavior changes from version 10.0.0 Fix Pack 5 onwards, where the function works for newly created items and returns a list of all attribute changes for that item.
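
A minimal sketch of such a guard follows; the newness check is illustrative only and is not a documented Product Master API, so substitute whatever mechanism your rule uses to distinguish new items:

// Hypothetical guard (isNewItem is an assumed helper, not the product API)
if (!isNewItem(item))
{
    // Safe: an old version exists, so the new/old comparison can run
    attributeChanges.getModifiedAttributesWithNewData();
}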

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Cannot end Appsvr (WebSphere® Application Server) process


When global security is enabled in WebSphere® Application Server and admin_security=true is set in the env_settings.ini file, if an incorrect WebSphere Application
Server administrator user name or password is specified in the env_settings.ini file or in the command line, the command to stop the appsvr process will fail.

Symptoms
The appsvr process remains running even if other processes that are related to the product are stopped.

Causes
Incorrect WebSphere Application Server administrator user name or password was provided.

Resolving the problem


Find out the process ID of the appsvr process and issue kill -9 to end it. To avoid a similar situation from happening again, provide the correct WebSphere Application Server administrator user name and password either in the env_settings.ini file or in the arguments of the command.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Invalid special character strings


In general, you should avoid all uses of HTML special character strings.

Symptoms
The commonly used HTML special character strings such as &quot;, &lt;, and &gt; inconsistently display as either the HTML special character strings themselves or as the HTML-converted equivalents such as ", <, and >.

Causes
When you create Product Master object entity names that contain an HTML special character string, the object entity name can display incorrectly in the user interface.
Limitations for special character strings

Use the list of known limitations for special character strings when you are creating entity names.

In general, you should avoid all uses of HTML special character strings. When you create Product Master object entity names that contain an HTML special character string, the object entity name can display incorrectly in the user interface. Commonly used HTML special character strings such as &quot;, &lt;, and &gt; inconsistently display as either the HTML special character strings or as the HTML-transformed equivalents such as ", <, and >.
Note: Ensure that you use alphanumeric characters and avoid use of special characters.
Table 1. Special character string limitations

Symbol: " (Opening quotation marks); HTML name: &quot;; Result: Displayed as "
UI scenarios:
Selections Console: A selection that is called &quot; is created.
Import Console: An import that is called &quot; is assigned to an ACG and you want to modify the ACG.
Collaboration Area Console: A collaboration area that is called &quot; is created.
Attribute Collections Console: An attribute collection that is called &quot; is created.
Lookup Table Console: A lookup table is associated with a spec called &quot;.
Catalog Console: A catalog is associated with a spec called &quot;; the primary hierarchy for a catalog is called &quot;; the secondary hierarchy for a catalog is called &quot;; a catalog is assigned to an ACG called &quot;.
Job Console: A report called &quot; is scheduled.
Hierarchy: Creating a hierarchy mapping, if the hierarchy mapping contains a category that is called &quot;; the hierarchy itself is called &quot;.
Multi-edit page: An item or category has a value of &quot; for one of its attributes and the cell is not selected for editing. Note: The special character appears correctly if the cell is selected for editing.
Single-edit page: The parent path of a category (including a hierarchy name) contains a category that is called &quot;.
Scheduler service job: The return value of a scheduler service job contains a reference to an entity called &quot;.

Symbol: » (Left double angle brackets); HTML name: &raquo;; Result: Value data is truncated.
UI scenario:
Lookup Table Console: The value of the Display Attribute in the Lookup Table Display Format. Note: Ensure that the data you want to import in the Lookup Table Console does not have any special characters.

Symbol: & (Ampersand); HTML name: &amp;; Result: Banking asset fails.
UI scenario:
An object name has &.

Symbol: ; (Semicolon); HTML name: &#59;; Result: A WebSphere Application Server index out of bounds error page appears instead of the document.
UI scenario:
On Product Master implementations that use WebSphere® Application Server, you open a file with a ; in the name from the Document Store. Note: You should avoid using a semicolon in a file name.
Table 2. Mapping between the special characters and specific names
Columns, in order: Symbol; Special Character Name; Object Entity name; Digital Asset name (supporting Search and Filter); Digital Asset SFTP Server folder name; Spec name; Attribute and Workflow name; Admin UI and Persona-based UI names (supporting Search and Filter).
:\\\\ :\\\\ ⛌ ⛌
» &raquo; ⛌
& Ampersand ⛌ ✓ ✓ ⛌
<> Angle brackets ⛌ ⛌ ✓ ⛌ ⛌
' Apostrophe ⛌ ⛌
* Asterisk ⛌ ⛌ ✓ ⛌ ⛌
@ At sign ⛌ ✓ ✓
\ Backslash ⛌ ⛌ ⛌ ⛌
Blank space ✓
{} Braces ⛌ ✓ ✓ ⛌ ⛌
[] Brackets ⛌ ⛌ ✓ ⛌ ⛌
^ Caret ✓ ✓
: Colon ⛌ ✓ ⛌
, Comma ⛌ ✓ ✓ ⛌ ⛌
$ Dollar sign ✓
" " or “ ” Double ⛌ ✓ ⛌ ⛌
quotation marks
` Grave accent ✓ ✓
= Equal sign ⛌ ✓
! Exclamation ✓ ✓
point
/ Forward slash ⛌ ⛌ ⛌ ⛌
- Minus sign ✓ ✓
# Number sign ⛌ ✓ ✓ ⛌ ⛌ ⛌
() Parentheses ⛌ ✓ ⛌ ⛌
. Period ✓ ✓ ⛌
+ Plus sign ⛌ ✓ ⛌ ⛌
% Percent sign ✓
? Question mark ⛌
; Semicolon ✓ ✓
' ' or ‘ ’ Single quotation ⛌ ✓ ✓ ⛌
marks
~ Tilde ✓ ✓
_ Underscore ✓ ⛌
| Vertical bar ⛌ ⛌ ✓ ⛌

Special characters impacting the catalog group status

When special characters are used in the code attribute for a catalog group, the status of the catalog group remains in an Open state even after a successful export.

Symptoms
When special characters are used in the code attribute for a catalog group, the status of the catalog group remains in an Open state even after a successful export; the status field shows Open. On the WebSphere Commerce user interface, the group displays correctly. In the scheduler default.log, the escaped characters can be seen in the SOAP response.

Causes
This issue occurs because the special characters in the code attribute for the catalog group are sent correctly in the SOAP request, but the SOAP response returns them in escaped hexadecimal format, which cannot be matched with the original identifier, so the status is not updated correctly.

Environment
All Collaborative MDM environments with the Advanced Catalog Management solution installed.

Diagnosing the problem


The $TOP/logs/<service_name>/default.log file shows both SOAP request and response. You can see that the identifier tag value is mismatched.

Resolving the problem


To work around this issue, do not use accented, special, or language-specific characters in the code attribute for catalog groups.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Importing hierarchy content for hierarchies with categories that have a relationship attribute set not supported
I have never received this error before. Why am I getting this error and how can I fix it?

Symptoms
Importing hierarchy content for hierarchies that have categories with data specified for relationship attributes is not supported.

Resolving the problem


Important: This workaround results in a loss of relationship data.

1. Extract the compressed file that was generated from the company export.
2. Browse to the HIERARCHY_CONTENTS directory.
3. Edit each CSV file and delete the column or columns that specify relationship data. Remember to save the files after editing them.
4. Compress the files again. The directory structure must be the same, where the file ImportEnvControl.xml and the archives directory are at the root level.
5. Import the updated compressed file.
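
A command-line sketch of the same workaround, assuming the company export archive is named company_export.zip (the archive name and paths are illustrative):

unzip company_export.zip -d company_export
# edit each CSV file under company_export/HIERARCHY_CONTENTS and delete the
# column or columns that specify relationship data (which columns depends on your data model)
cd company_export
zip -r ../company_export_fixed.zip .
# ImportEnvControl.xml and the archives directory stay at the root of the archive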

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Date fields in the Mass Transactions screen not displaying proper values
I have never received this error before. Why am I getting this error and how can I fix it?

Symptoms
The Date fields in the Mass Transactions screen are not displaying proper values.

Resolving the problem


In Product Master, under My Settings, change the date setting to User Select, and then select any value other than the first element.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Cannot access items or categories due to lock busy messages


Encountering slave or stale locks does not impact performance, but it prevents proper access to objects.

Symptoms
While trying to work with objects, such as items or categories, you can see messages like the following:
Lock Busy for item
or
Checkout failed - the entry is already locked by collaboration areas

Resolving the problem


Detect and delete invalid locks.
Note: These statements remove all locks. If you want to remove a lock for a specific item only, find the right item ID and row in the lck table, and then customize an SQL statement to delete only that row; a sketch follows the steps below.

1. Back up the database.


2. Shut down Product Master to guarantee that valid locks are not deleted.
3. Run SQL statements (A) and (B) to check for stale slave and master locks:
(A): select * from lck where LCK_TYPE='S';

(B): select * from lck where LCK_TYPE='M' and (LCK_THREAD_ID != '-1' or LCK_JVM_ID != '-1');
4. If (A) returns any rows, remove all slave locks while Product Master is down.
(C): delete from lck where LCK_TYPE='S';
5. If (B) returns any rows, reset the LCK_THREAD_ID and LCK_JVM_ID of any master lock to -1.
(D): update lck set LCK_THREAD_ID = '-1', LCK_JVM_ID = '-1'

where (LCK_THREAD_ID != '-1' OR LCK_JVM_ID != '-1') and LCK_TYPE='M';

(E): commit;
6. Restart Product Master.
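
The following is a minimal sketch of a targeted cleanup for a single item, assuming a Db2 command line (use your database client of choice). The item reference column, shown here as LCK_ITEM_ID, and the item ID 1234 are hypothetical; check the lck table definition for the actual column name and the row that you identified:

# inspect the locks for the one item before deleting anything
db2 "select * from lck where LCK_TYPE='S' and LCK_ITEM_ID=1234"
# delete only that item's stale slave lock, then commit
db2 "delete from lck where LCK_TYPE='S' and LCK_ITEM_ID=1234"
db2 "commit"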

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

No such file or directory warning when using the CCD_CLASSPATH environment variable to compile Java classes
When you use the CCD_CLASSPATH environment variable to compile Java classes for Product Master, warnings about .jar files not being found are displayed.

Symptoms
When you use the CCD_CLASSPATH environment variable to compile Java classes, you can get warnings about some .jar files not being found. These warnings occur because the particular .jar file is not found at the mentioned class path. To remove these warnings, make sure that all the required .jar files are present at the class path. The following example is a sample for reference only.

javac /usr/local/envs/mdm6000/*.java -Xlint:all -classpath $CCD_CLASSPATH -d $TOP/classes

The sample warning that can occur:

warning: [path] bad path element "/apps/PIM6/jars/xml-apis.jar": no such file or directory

Causes
Here, the class path is specified through the $CCD_CLASSPATH environment variable; because the /apps/PIM6/jars/xml-apis.jar entry on that class path was not found, the warning message occurred.
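
One way to locate such missing entries is to test every class path element before you compile; a minimal sketch (the loop is not part of the product, and it assumes class path entries without spaces):

for entry in $(echo "$CCD_CLASSPATH" | tr ':' ' '); do
    # report any class path element that does not exist on disk
    [ -e "$entry" ] || echo "missing: $entry"
done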

Resolving the problem


You can avoid these warnings by excluding the path lint check (-Xlint:all,-path) when you compile Java classes with the CCD_CLASSPATH environment variable. The workaround for the preceding example is:

javac /usr/local/envs/mdm6000/*.java -Xlint:all,-path -classpath $CCD_CLASSPATH -d $TOP/classes

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Restrictions with multi-domain entities and translation



In Product Master versions 10.1 and earlier, Product Master assumed the default domain entities: catalog entities were items, and hierarchy entities were categories. The multi-domain feature removes this restriction and allows the entity to be a user-defined domain entity that is specified on the catalog and hierarchy as well.

With this feature, all of the labels and messages with the keywords item, items, category, and categories are replaced with the corresponding new domain entity name labels that are defined in the $TOP/etc/default/domains/multi_domain_entities_${locale}.xml file.
There are two conditions for this limitation:

multi-domain entity management is not enabled


multi-domain entity management is enabled

There are many issues for different languages involving gender, animate and inanimate nouns, and vowels and consonants. There is no common translation that fits unknown substitutes for non-English languages. Even in English, there are similar issues; for example, take "This is a pencil". If you change this sentence to "This is a ${0}" and then replace ${0} with "apple", the sentence reads "This is a apple" when it should read "This is an apple." This example is a minor grammar error.

Multi-domain entity management is not enabled


Even when the multi-domain entity feature is not enabled, there are still variable substitutions in some of the texts retrieved from the resource bundle. These substitutions were not there in prior releases of the product. The terms used in the substitutions come from the DomainEntityUtils.js file of the respective locale. The terms in this file are translated without the context of how they are used. Furthermore, there are situations where one term is used in different contexts that require different translations due to the grammar rules specific to a locale. Because of these two reasons, grammar problems can occur in some non-English locales. This type of problem is most commonly found in Polish and Russian.

Multi-domain entity management is enabled


If the multi-domain entity management function is enabled, the item and category keywords are replaced with user-defined strings. This can cause grammar errors for non-English languages. Articles, adjectives, pronouns, verbs, quantifiers, and so on need to change according to the substitutes that are used. For example:
Table 1. Grammar rules for non-English languages

German, Spanish, French, Italian, Portuguese (Brazil), Russian, and Greek
    In these languages, all nouns have a gender: masculine or feminine. Articles, adjectives, some pronouns, and some verbs have to change depending on the gender of the noun they modify. Also, articles and prepositions have to change depending on whether the following word starts with a vowel or a consonant. With two genders, masculine with masculine and feminine with feminine are correct; it is not correct to pair a masculine word with a feminine variable.
    Note: In addition to the above description,
    - The German language has three genders, singular and plural forms, and different endings of nouns, adjectives, and so on.
    - The Russian language has two types of nouns. For example, animate nouns (employee) have a different grammar paradigm than inanimate nouns (item).
    - The Greek language has three noun genders: masculine, feminine, and neutral. Each of these genders is declined by case (nominative, genitive, and accusative) as well as by number (singular and plural). The endings of nouns differ for each combination of gender, case, and number. The endings of other surrounding words such as articles, adjectives, and pronouns need to change according to the noun.

Japanese
    The word order has to change from "Verb + Object" in English to "Object + Verb" according to Japanese grammar. Without knowing what the "Object" is, the translated text in Japanese may be grammatically incorrect.

Korean
    The postposition has to change depending on whether the word ends with a vowel or a consonant.

Polish
    Polish grammar declines words by case. There are several cases for singulars and plurals; therefore, depending on where the variable ${1} is, it will be replaced with a custom string. For example, when the word "Employee" occurs in a sentence, in Polish it can be:

    1. pracownik
    2. pracownika
    3. pracownikowi
    4. pracownikiem
    5. pracownicy
    6. pracowników
    7. pracownikom
    8. pracownikami
    9. pracownikach

    Additionally, Polish grammar distinguishes between genders, and they affect, among other things, pronouns and verbs. For example, when the gender of the variable ${1} changes (for example, Server in Polish is masculine, Group is feminine, Link is neutral), other words in the sentence need to change as well:

    English
        ${1} was successfully updated.
    Polish masculine
        ${1} został pomyślnie zaktualizowany.
    Polish feminine
        ${1} została pomyślnie zaktualizowana.
    Polish neutral
        ${1} zostało pomyślnie zaktualizowane.

Simplified Chinese and traditional Chinese
    There are possible translation issues in simplified and traditional Chinese. For example, there are occasions where a different quantifier must be used before a noun in Chinese. In these situations, the enablement of the feature may result in an incorrect use of a quantifier.

Turkish
    Turkish is an agglutinative language and has no gender. The language contains words that may take several grammatical suffixes to determine meaning. For example, the dictionary form of a noun might take different inflectional suffixes, such as suffixes of possession and plurality. In the Turkish language, vowels are modified to ensure vowel harmony: if a suffix is attached to a stem, the vowel in the suffix agrees with the last vowel in the stem in terms of roundedness, frontness, and backness.
    Besides, Turkish generally follows the SOV model. This model means that Turkish sentences are formed in the order of "Subject + Object + Verb", unlike English sentences, which are formed in the order of SVO, "Subject + Verb + Object". Without knowing the "Object", the translated text in Turkish might be grammatically incorrect.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Some docstore files do not show up in the file system


An export job re-creates a few output files in a docstore folder, which has been mounted to the file system. While I see some of the files copied from the docstore into the
file system with the correct time stamp, the rest of the files do not show up.

Symptoms
Incorrect information in the DHI table for a file can be one of the factors that causes this behavior.

Resolving the problem


The DHI table has a record for every file that is copied from the docstore to the file system. It has two columns: DHI_DOC_PATH, which is the relative path of the folder in the docstore where the file is located, and DHI_DOC_REAL_PATH, which is the relative path of the folder in the file system where the file is copied to.
For example, while the file hello.txt refers to the correct real path (/usr/local/envs/sup4/docs/), the file helloA.txt refers to an incorrect real path (/usr/local/envs/sup4/sample/). As a result, hello.txt syncs with the file system, but helloA.txt, although present in the docstore, never syncs with the file system because its file system path is incorrect in the DHI table.

This situation can happen if the database memory dump has been copied across instances or if a file has been copied across mounted folders.
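
To confirm a mismatch, you can compare the two columns directly. The following is a minimal sketch for a Db2 command line; it assumes that the DHI records are stored in the tdoc_dhi_doc_hierarchy table and that the expected mount point is /usr/local/envs/sup4/docs/, so verify both against your environment:

# list DHI rows whose file system path does not point at the expected mount point
db2 "select DHI_DOC_PATH, DHI_DOC_REAL_PATH from tdoc_dhi_doc_hierarchy where DHI_DOC_REAL_PATH not like '/usr/local/envs/sup4/docs/%'"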

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Unable to start the Java Message Service (JMS) receiver


Unable to start or stop the messaging receiver, also known as the Java Message Service (JMS) receiver, in Global Data Synchronization for Product Master.

Symptoms
When you run the application command gdsmsg.sh start, the message Couldn't start GDS Message Receiver! appears. When you start the application, you can notice that the message receiver is not processing the messages.

Resolving the problem


Verify that the following product configurations are set correctly:

In the $TOP/etc/default/ product installation directory, verify that the value of the SEND_TO_JMS variable in the gds_system.properties file is set to TRUE.
In the $TOP/etc/messaging/xml/ product installation directory, choose the /demand or /supply directory based on the application that is installed, go to the /datapool directory according to your requirement, and find the properties.xml file. Open the properties.xml file, and set the correct values for the <inBoundQueueName>, <outBoundQueueName>, and <queueConnectionFactory> parameters.

In addition, to start, stop, or check the status of the JMS receiver, use the following scripts that can be found in the $TOP/bin product installation directory:

To start JMS receiver:

gdsmsg.sh start

To abort the JMS receiver:

gdsmsg.sh abort

To check the status of the JMS receiver:

gdsmsg.sh status

Note: Before you start the JMS receiver, make sure that the .binding file in the $TOP/etc/default product installation directory has been updated with the queue that was configured previously.
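
A minimal pre-start check that combines the settings above (the supply/datapool directory is one example of a data pool folder; adjust the path to your installation):

# confirm the messaging flag, the queue parameters, and the .binding file
grep SEND_TO_JMS $TOP/etc/default/gds_system.properties
grep -E "inBoundQueueName|outBoundQueueName|queueConnectionFactory" $TOP/etc/messaging/xml/supply/datapool/properties.xml
ls -l $TOP/etc/default/.binding
# then start the receiver and verify its status
$TOP/bin/gdsmsg.sh start
$TOP/bin/gdsmsg.sh status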



Unable to start the Java Message Service (JMS) receiver
Unable to start or stop the messaging receiver, also known as the Java Message Service (JMS) receiver, in Global Data Synchronization for Product Master.
Invalid value in the GDS lookup tables
I have never received this error before. Why am I getting this error and how can I fix it?
Receiving "en_US.error.searchgln" error
I have never received this error before. Why am I getting this error and how can I fix it?

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Invalid value in the GDS lookup tables


I have never received this error before. Why am I getting this error and how can I fix it?

Symptoms
When an XML with an attribute that has a value is sent, the value is sent as NULL by the business object, causing the item to fail validation and auto review. This delays the setup of the item.

Resolving the problem


The Global Data Synchronization feature of Product Master is developed based on the data pool's data model. The Global Data Synchronization lookup tables are based on the data model that is defined by the data pool. The schema and documentation are supposed to match, but in some cases there are discrepancies. Unfortunately, there are so many values that it is impossible to compare each and every value to catch discrepancies. Only some of the discrepancies can be caught; the data pools are informed about those, but others are missed.
In this scenario, two cases are possible:

Case 1: The schema contains a value that does not exist in the document. In this case, add the needed value to the lookup table in order to support the value. This is a simple change.
Case 2: The document contains a value that does not exist in the schema. In this case, there is no solution other than a product update or the following quick fix: remove the value from the lookup table that defines the valid values associated with the attribute, and do not use it in the communication to the data pool.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Receiving "en_US.error.searchgln" error


I have never received this error before. Why am I getting this error and how can I fix it?

Symptoms
An "en_US.error.searchgln" error when click Search for a trade item in the Manage Items tab.

Causes
The "en_US.error.searchgln" error message can occur for any or all trade items when you click Search.

Resolving the problem


To avoid this exception, change the following configuration settings:

From the Administration Console, confirm that the class path at <WAS> Application Servers > GDS_Application_server > Process Definition > Java Virtual Machine has this jar file added: <GDS_Install_Dir>/etc/ajax_jars/jsonrpc-cvs.jar
Make sure that you set the default language for supply side and demand side in the <GDS_Install_Dir>/etc/default/gds.properties file.

DEFAULT_LANGUAGE=en

Make sure the DefaultLocale property is set to "en_US" in the properties.xml file. The properties.xml file is in the appropriate data pool folder, which is in the
demand or supply folder (based on the product installed), which is in the <GDS_Install_Dir>/etc/messaging/xml/ directory.
Restart the Global Data Synchronization for Product Master.
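
A quick way to verify the two property settings above (a sketch; the supply/datapool path is one example of the data pool folder, so adjust it to your installation):

grep DEFAULT_LANGUAGE <GDS_Install_Dir>/etc/default/gds.properties   # expect DEFAULT_LANGUAGE=en
grep DefaultLocale <GDS_Install_Dir>/etc/messaging/xml/supply/datapool/properties.xml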

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)



Troubleshooting profiling agents issues
Use the following topics to resolve common issues with the CPU and memory profiling agents in Product Master.

WASX7016E: Exception received while reading file


I have never received this error before. Why am I getting this error and how can I fix it?
Can't load "libjprofiler.a" error
I have never received this error before. Why am I getting this error and how can I fix it?
Profiling agent libraries are not loaded on the AIX platform
I have never received this error before. Why am I getting this error and how can I fix it?
Generating and sharing a jProfiler profile task with the IBM Support
I have never received this error before. Why am I getting this error and how can I fix it?

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

WASX7016E: Exception received while reading file


I have never received this error before. Why am I getting this error and how can I fix it?

Symptoms
When you enable your profiling agent for the appsvr service, you receive the following error: WASX7016E: Exception received while reading file
"$TOP/bin/websphere/modify_jvmargs.pyc"; exception information: sun.io.MalformedInputException

Resolving the problem


To resolve this issue, export LANG=C in the shell prompt, then run the pimprof.sh shell script.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Can't load "libjprofiler.a" error


I have never received this error before. Why am I getting this error and how can I fix it?

Symptoms
When you enable your profiling agent for a Product Master service, the following message appears in the $TOP/logs/service_Name/svc.out log file, where service_Name is the name of the service that you want to profile:

Can't load "libjprofiler.a", because load ENOENT on shared libraries.

Resolving the problem


The log file message appears when system libraries that the libjprofiler.a file depends on are missing from your application server. To resolve this issue, run the following command to determine whether any system libraries are missing, where library_name is the name of a system library:

ldd library_name

If you determine that a system library is missing, notify your system administrator to install the missing system library.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Profiling agent libraries are not loaded on the AIX platform


I have never received this error before. Why am I getting this error and how can I fix it?

Symptoms
Profiling agent libraries are not loaded.

Resolving the problem


Run export LIBPATH=$LIBPATH:$TOP/profiler/jprofiler before you start the appserver and enable profiling for any services. The LIBPATH environment
variable must be exported in the same shell prompt that you use to start the appserver or enable profiling for any services.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Generating and sharing a jProfiler profile task with the IBM Support
I have never received this error before. Why am I getting this error and how can I fix it?

Symptoms
You have removed as much extraneous business logic as possible, rerun the task, and identified a hotspot.

Resolving the problem


Generate a jProfiler profile of the task and send it to IBM® Support. Bundle the specific profiler files, and ensure that you include only the exact files and the license agreements in the bundle.

1. Modify the startup scripts so that the server starts with profiling turned on.
Note: Do not leave profiling turned on during normal operation because it slightly slows performance.
2. Restart the server.
3. Get the system ready to do the action to be profiled. Perform the following:
a. For the Appserver, go to the user interface and be ready to click the button that starts the "slow" operation.
b. For the Scheduler, get ready to start the import job.
c. For the Workflow, get ready to start the workflow process.
4. On the system status page, click Start Profiling.
5. Start the action that you want to profile from the previous step 3. Click Save.
6. When that action finishes, click Stop Profiling. The profile files are saved under $TOP/profiles.
7. Rename the most recently created profile file to reflect the action being profiled.
8. Send the profile file to IBM Support. See Contacting IBM Software Support.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Troubleshooting application server issues


Use the following topics to help resolve application server issues.

Unresponsive WebSphere® Application Server


The Product Master pseudo-user on the application server must have the following environment variables configured before Product Master is started.
Invalid configuration settings
In the common.properties file, an incorrect database specifier was set.
Unresponsive user interface
I have never received this error before. Why am I getting this error and how can I fix it?

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Unresponsive WebSphere® Application Server


The Product Master pseudo-user on the application server must have the following environment variables configured before Product Master is started.

Symptoms
If the following variables are not set before Product Master is started, the application server cannot start. Pseudo-user environment variables:

TOP is the top directory of your Product Master installation


DB2_HOME is necessary for DB2® client binaries
JAVA_HOME is necessary for the Java™ SDK
PATH must include $DB2_HOME/bin and should include $JAVA_HOME/bin
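
For example, the pseudo-user's profile might contain the following exports (the values are illustrative; substitute the paths for your installation):

export TOP=/opt/IBM/ProductMaster                       # Product Master installation directory
export DB2_HOME=/home/db2inst1/sqllib                   # Db2 client binaries
export JAVA_HOME=/opt/IBM/WebSphere/AppServer/java      # Java SDK
export PATH=$DB2_HOME/bin:$JAVA_HOME/bin:$PATH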

Causes
There are two potential causes:

The application server becomes unresponsive. Although it is possible to ping the server, users cannot log into their environment and the administrator cannot log in
to the application server.
The app_server service cannot be started (with an Oracle database) on some systems if the environment variables cause the command-line arguments to exceed
4000 characters.

Diagnosing the problem


See whether a user recently started an unusually large job. If the job was intentional, review the script that is used by the job.

Resolving the problem


The total length of the command-line arguments exceeds the maximum length of 4000 characters that are allowed by some operating systems. Create a symbolic link
with a short path to the installation directory for the product. By shortening the path by using the symbolic link, the command-line arguments do not exceed the 4000
character limit for the operating system.
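
For example (the paths are illustrative):

ln -s /opt/IBM/very/long/installation/path/ProductMaster /pim

Then start the services from the short /pim path so that the expanded command-line arguments stay under the limit.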

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Invalid configuration settings


In the common.properties file, an incorrect database specifier was set.

Symptoms
These services will not start:
appsvr
eventprocessor
queuemanager
scheduler
workflowengine
Errors appear in these log files:
$TOP/etc/logs/db_pool
$TOP/etc/logs/svc

Resolving the problem


Ensure that the smtp_address parameter points to an SMTP relay. Send email from the localhost or another system, which is capable of sending messages outside
of the organization.
Make sure that the server has the appropriate capacity to handle the load and it is not shared with other systems.
Verify that the application server is connected to the database through a gigabit network.
Verify that the network card is set to gigabit full duplex and not on autonegotiate.
Verify that the average ping time between the application server and the database is less than 0.25 ms.
Check the ulimit settings to make sure that the number of open file descriptors is 8000.
If using WebSphere® Application Server, make sure that the server is configured according to the recommendation.
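
A combined sketch of these checks (the database host name is illustrative):

grep smtp_address $TOP/etc/default/common.properties   # confirm the SMTP relay setting
ping -c 5 dbhost.example.com                           # average round trip should be below 0.25 ms
ulimit -n                                              # expect 8000 open file descriptors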

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Unresponsive user interface


I have never received this error before. Why am I getting this error and how can I fix it?

Symptoms
If you encounter issues with the user interface, the issues might be caused by an incompatibility between the Java™ Development Kit that you are using and the
application server.

Resolving the problem


You must ensure that you are using the level of Java Development Kit that is included with the application server.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)



Troubleshooting connectivity issues
Use the following topics to help resolve connectivity issues.

JMS connectivity issues by using WebSphere Application Server bundled with WebSphere MQ
JMS connection for Product Master is set and configured by using WebSphere® Application Server bundled with the WebSphere MQ messaging provider.
JMS connectivity issues using WebSphere MQ server
JMS connection for Product Master is set and configured by using WebSphere MQ server.
Java connectivity issues
I have never received this error before. Why am I getting this error and how can I fix it?
java.net.ConnectException: Connection refused error
I am getting errors in the logs that are related to connection refusal.
java.sql.SQLRecoverableException: IO Error: Connection reset error
When the application is running, after a certain interval the read/write SQL queries fail due to the java.sql.SQLRecoverableException: IO Error:
Connection reset error.
java.sql.SQLException: Listener refused the connection error
Running the test_db.sh script sometimes results in the java.sql.SQLException: Listener refused the connection with the following
error: ORA-12505, TNS:listener does not currently know of SID given in connect descriptor error.
java.lang.StackOverflow error
Why am I getting this error and how can I prevent it?
java.lang.OutOfMemoryError: Failed to create a thread error
Why am I getting this and how can I correct it?
Transaction log full error for imports implemented with the Java API
Processing large data sets through imports that are implemented with the Java™ API may perform poorly and eventually cause a Transaction log full error (for example, SQL0964C on DB2®) when data is not committed intermittently.
Unable to change to remote directory error
Product Master tried to log in to a target FTP server and failed.
Product Master box does not see the target destination
I have never received this error before. Why am I getting this error and how can I fix it?
Is distributor working
I have never received this error before. Why am I getting this error and how can I fix it?

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

JMS connectivity issues by using WebSphere Application Server bundled with WebSphere MQ
JMS connection for Product Master is set and configured by using WebSphere® Application Server bundled with the WebSphere MQ messaging provider.

Symptoms
Using the Product Master script sandbox user interface, a JMS connection can be established, but the same script code snippet fails with exceptions when it is used in a report job that runs under the scheduler service.

Environment
The following exceptions are examples of what you might see:
2013-06-26 13:12:01,533 [sch_worker_0] ERROR com.ibm.ccd.common.error.AustinException JOB_ID:2091- CWPCM0002E:Generic error /
Exception: Generic Error, Exception:Failed to create InitialContext using factory specified in hashtable
javax.naming.NoInitialContextException: Failed to create InitialContext using factory specified in hashtable [Root exception is
java.lang.NullPointerException]

at javax.naming.spi.NamingManager.getInitialContext(NamingManager.java:255)

at javax.naming.InitialContext.initializeDefaultInitCtx(InitialContext.java:318)

at javax.naming.InitialContext.getDefaultInitCtx(InitialContext.java:348)

at javax.naming.InitialContext.internalInit(InitialContext.java:286)

at javax.naming.InitialContext.<init>(InitialContext.java:211)

at com.ibm.ccd.connectivity.jms.TrigoJMS.getInitialContext(TrigoJMS.java:88)

at com.ibm.ccd.common.script.ScriptOperationsJms.jmsGetContext(ScriptOperationsJms.java:42)

at com.ibm.ccd.common.interpreter.operation.generated.GenJmsGetContextOperation.execute(GenJmsGetContextOperation.java:74)

at WPCLGReport13722406649020.run(WPCLGReport13722406649020.java:37)

at com.ibm.ccd.common.interpreter.engine.Script.runFunction(Script.java:554)

at com.ibm.ccd.common.interpreter.engine.Script.execute(Script.java:484)

at com.ibm.ccd.common.interpreter.engine.Script.run(Script.java:335)



at com.ibm.ccd.report.common.Report.generate(Report.java:305)

at com.ibm.ccd.report.common.ReportExe.execute(ReportExe.java:104)

at com.ibm.ccd.scheduler.threads.SchedulerThread.fuzaoRun(SchedulerThread.java:262)

at com.ibm.ccd.common.util.FuzaoRunnableAdapter.run(FuzaoRunnableAdapter.java:54)

at com.ibm.ccd.common.util.FuzaoThread.run(FuzaoThread.java:123)

Caused by: java.lang.NullPointerException

at javax.naming.spi.NamingManager.getInitialContext(NamingManager.java:247)

... 16 more

2013-06-26 13:12:01,534 [sch_worker_0] ERROR com.ibm.ccd.common.error.AustinException JOB_ID:2091- CWPCM0002E:Generic error /


Exception: Script execution failed (com.ibm.ccd.common.error.AustinException) Exception:Generic Error Generic Error

at com.ibm.ccd.common.script.ScriptOperationsJms.jmsGetContext(ScriptOperationsJms.java:46)

at com.ibm.ccd.common.interpreter.operation.generated.GenJmsGetContextOperation.execute(GenJmsGetContextOperation.java:74)

at WPCLGReport13722406649020.run(WPCLGReport13722406649020.java:37)

at com.ibm.ccd.common.interpreter.engine.Script.runFunction(Script.java:554)

at com.ibm.ccd.common.interpreter.engine.Script.execute(Script.java:484)

at com.ibm.ccd.common.interpreter.engine.Script.run(Script.java:335)

at com.ibm.ccd.report.common.Report.generate(Report.java:305)

at com.ibm.ccd.report.common.ReportExe.execute(ReportExe.java:104)

at com.ibm.ccd.scheduler.threads.SchedulerThread.fuzaoRun(SchedulerThread.java:262)

at com.ibm.ccd.common.util.FuzaoRunnableAdapter.run(FuzaoRunnableAdapter.java:54)

at com.ibm.ccd.common.util.FuzaoThread.run(FuzaoThread.java:123)

Caused by: javax.naming.NoInitialContextException: Failed to create InitialContext using factory specified in hashtable [Root
exception is java.lang.NullPointerException]

at javax.naming.spi.NamingManager.getInitialContext(NamingManager.java:255)

at javax.naming.InitialContext.initializeDefaultInitCtx(InitialContext.java:318)

at javax.naming.InitialContext.getDefaultInitCtx(InitialContext.java:348)

at javax.naming.InitialContext.internalInit(InitialContext.java:286)

at javax.naming.InitialContext.<init>(InitialContext.java:211)

at com.ibm.ccd.connectivity.jms.TrigoJMS.getInitialContext(TrigoJMS.java:88)

at com.ibm.ccd.common.script.ScriptOperationsJms.jmsGetContext(ScriptOperationsJms.java:42)

... 10 more

2013-07-17 10:36:48,401 [sch_worker_1] ERROR com.ibm.ccd.common.error.AustinException JOB_ID:6203-CWPCM0002E:Generic error /


Exception: Generic Error, Exception: javax.naming.Reference incompatible with javax.jms.QueueConnectionFactory

java.lang.ClassCastException: javax.naming.Reference incompatible with javax.jms.QueueConnectionFactory

at com.ibm.ccd.connectivity.jms.TrigoJMS.getConnectionFactory(TrigoJMS.java:104)

at com.ibm.ccd.common.script.ScriptOperationsJms.jmsGetConnectionFactory(ScriptOperationsJms.java:57)

at
com.ibm.ccd.common.interpreter.operation.generated.GenJmsGetConnectionFactoryOperation.execute(GenJmsGetConnectionFactoryOperat
ion.java:74)

at WPCJms13740502073320.run(WPCJms13740502073320.java:39)

at com.ibm.ccd.common.interpreter.engine.Script.runFunction(Script.java:554)

at com.ibm.ccd.common.interpreter.engine.Script.execute(Script.java:484)

at com.ibm.ccd.common.interpreter.engine.Script.run(Script.java:335)

at com.ibm.ccd.report.common.Report.generate(Report.java:305)

at com.ibm.ccd.report.common.ReportExe.execute(ReportExe.java:104)

at com.ibm.ccd.scheduler.threads.SchedulerThread.fuzaoRun(SchedulerThread.java:262)

at com.ibm.ccd.common.util.FuzaoRunnableAdapter.run(FuzaoRunnableAdapter.java:54)

at com.ibm.ccd.common.util.FuzaoThread.run(FuzaoThread.java:123)



2013-07-17 10:36:48,401 [sch_worker_1] ERROR com.ibm.ccd.common.error.AustinException JOB_ID:6203-CWPCM0002E:Generic error /
Exception: Script execution failed(com.ibm.ccd.common.error.AustinException) Exception:Generic Error Generic Error

at com.ibm.ccd.common.script.ScriptOperationsJms.jmsGetConnectionFactory(ScriptOperationsJms.java:61)

at
com.ibm.ccd.common.interpreter.operation.generated.GenJmsGetConnectionFactoryOperation.execute(GenJmsGetConnectionFactoryOperat
ion.java:74)

at WPCJms13740502073320.run(WPCJms13740502073320.java:39)

at com.ibm.ccd.common.interpreter.engine.Script.runFunction(Script.java:554)

at com.ibm.ccd.common.interpreter.engine.Script.execute(Script.java:484)

at com.ibm.ccd.common.interpreter.engine.Script.run(Script.java:335)

at com.ibm.ccd.report.common.Report.generate(Report.java:305)

at com.ibm.ccd.report.common.ReportExe.execute(ReportExe.java:104)

at com.ibm.ccd.scheduler.threads.SchedulerThread.fuzaoRun(SchedulerThread.java:262)

at com.ibm.ccd.common.util.FuzaoRunnableAdapter.run(FuzaoRunnableAdapter.java:54)

at com.ibm.ccd.common.util.FuzaoThread.run(FuzaoThread.java:123)

Caused by: java.lang.ClassCastException: javax.naming.Reference incompatible with javax.jms.QueueConnectionFactory

at com.ibm.ccd.connectivity.jms.TrigoJMS.getConnectionFactory(TrigoJMS.java:104)

at com.ibm.ccd.common.script.ScriptOperationsJms.jmsGetConnectionFactory(ScriptOperationsJms.java:57)

... 10 more

, Exception:Generic Error

Generic Error

at com.ibm.ccd.common.script.ScriptOperationsJms.jmsGetConnectionFactory(ScriptOperationsJms.java:61)

at
com.ibm.ccd.common.interpreter.operation.generated.GenJmsGetConnectionFactoryOperation.execute(GenJmsGetConnectionFactoryOperat
ion.java:74)

at WPCJms13740502073320.run(WPCJms13740502073320.java:39)

at com.ibm.ccd.common.interpreter.engine.Script.runFunction(Script.java:554)

at com.ibm.ccd.common.interpreter.engine.Script.execute(Script.java:484)

at com.ibm.ccd.common.interpreter.engine.Script.run(Script.java:335)

at com.ibm.ccd.report.common.Report.generate(Report.java:305)

at com.ibm.ccd.report.common.ReportExe.execute(ReportExe.java:104)

at com.ibm.ccd.scheduler.threads.SchedulerThread.fuzaoRun(SchedulerThread.java:262)

at com.ibm.ccd.common.util.FuzaoRunnableAdapter.run(FuzaoRunnableAdapter.java:54)

at com.ibm.ccd.common.util.FuzaoThread.run(FuzaoThread.java:123)

Caused by: java.lang.ClassCastException: javax.naming.Reference incompatible with javax.jms.QueueConnectionFactory

at com.ibm.ccd.connectivity.jms.TrigoJMS.getConnectionFactory(TrigoJMS.java:104)

at com.ibm.ccd.common.script.ScriptOperationsJms.jmsGetConnectionFactory(ScriptOperationsJms.java:57)

... 10 more

Resolving the problem


JMS jars that are bundled in WebSphere Application Server are used by WebSphere Application Server runtime and are provisioned for services that are started within
WebSphere Application Server boundaries (contained by WebSphere Application Server). When using a stand-alone client, process, or service for creating a JMS
connection that is maintained by WebSphere Application Server, these jars are required to be included in the class path of the JVM for that client, process, or service. In
this case, it is Product Master scheduler service. Therefore, to solve this issue, the following WebSphere Application Server bundled runtime jars are required to be added
in the class path for the scheduler service.

For WebSphere Application Server 8.5.x:


$WAS_HOME/runtimes/com.ibm.ws.admin.client_8.5.0.jar
$WAS_HOME/runtimes/com.ibm.jaxws.thinclient_8.5.0.jar
$WAS_HOME/runtimes/com.ibm.ws.sib.client.thin.jms_8.5.0.jar

The following additional JAR files are also required:

$WAS_HOME/plugins/com.ibm.ws.runtime.jar
$WAS_HOME/plugins/com.ibm.ws.sib.server.jar
$WAS_HOME/installedConnectors/wmq.jmsra.rar/com.ibm.mq.commonservices.jar
$WAS_HOME/installedConnectors/wmq.jmsra.rar/com.ibm.mq.connector.jar
$WAS_HOME/installedConnectors/wmq.jmsra.rar/com.ibm.mq.headers.jar
$WAS_HOME/installedConnectors/wmq.jmsra.rar/com.ibm.mq.jar
$WAS_HOME/installedConnectors/wmq.jmsra.rar/com.ibm.mq.jmqi.jar
$WAS_HOME/installedConnectors/wmq.jmsra.rar/com.ibm.mq.jmqi.local.jar
$WAS_HOME/installedConnectors/wmq.jmsra.rar/com.ibm.mq.jmqi.remote.jar
$WAS_HOME/installedConnectors/wmq.jmsra.rar/com.ibm.mq.jmqi.system.jar
$WAS_HOME/installedConnectors/wmq.jmsra.rar/com.ibm.mq.jms.admin.jar
$WAS_HOME/installedConnectors/wmq.jmsra.rar/com.ibm.mq.pcf.jar
$WAS_HOME/installedConnectors/wmq.jmsra.rar/com.ibm.mqjms.jar
$WAS_HOME/installedConnectors/wmq.jmsra.rar/com.ibm.msg.client.commonservices.j2se.jar
$WAS_HOME/installedConnectors/wmq.jmsra.rar/com.ibm.msg.client.commonservices.jar
$WAS_HOME/installedConnectors/wmq.jmsra.rar/com.ibm.msg.client.jms.internal.jar
$WAS_HOME/installedConnectors/wmq.jmsra.rar/com.ibm.msg.client.jms.jar
$WAS_HOME/installedConnectors/wmq.jmsra.rar/com.ibm.msg.client.matchspace.jar
$WAS_HOME/installedConnectors/wmq.jmsra.rar/com.ibm.msg.client.provider.jar
$WAS_HOME/installedConnectors/wmq.jmsra.rar/com.ibm.msg.client.ref.jar
$WAS_HOME/installedConnectors/wmq.jmsra.rar/com.ibm.msg.client.wmq.common.jar
$WAS_HOME/installedConnectors/wmq.jmsra.rar/com.ibm.msg.client.wmq.factories.jar
$WAS_HOME/installedConnectors/wmq.jmsra.rar/com.ibm.msg.client.wmq.jar
$WAS_HOME/installedConnectors/wmq.jmsra.rar/com.ibm.msg.client.wmq.v6.jar
$WAS_HOME/installedConnectors/wmq.jmsra.rar/dhbcore.jar

Note: After these jars are added under the classpath parameter in the $TOP/bin/conf/env_settings.ini file, the Product Master appserver fails to start because their inclusion causes issues in the WebSphere Application Server class path. Therefore, the following steps are recommended.
Note: Any time Product Master is upgraded with a new maintenance patch, these steps are required again.

1. Stop all of the Product Master services by using the stop script:
cd $TOP/bin/go

./abort_local.sh
2. Edit the $TOP/bin/conf/env_settings.ini file to add the WebSphere Application Server JMS runtime jars towards the end of the classpath variable separated by ":",
for example:
classpath=
<Existing_Classpath_Entries>:/opt/IBM/WebSphere/AppServer/runtimes/com.ibm.ws.admin.client_7.0.0.jar:/opt/IBM/WebSphere/Ap
pServer/runtimes/com.ibm.jaxws.thinclient_7.0.0.jar:/opt/IBM/WebSphere/AppServer/runtimes/com.ibm.ws.sib.client.thin.jms_7
.0.0.jar:/opt/IBM/WebSphere/AppServer/plugins/com.ibm.ws.runtime.jar:/opt/IBM/WebSphere/AppServer/plugins/com.ibm.ws.sib.s
erver.jar:/opt/IBM/WebSphere/AppServer/installedConnectors/wmq.jmsra.rar/com.ibm.mq.commonservices.jar:/opt/IBM/WebSphere/
AppServer/installedConnectors/wmq.jmsra.rar/com.ibm.mq.connector.jar:/opt/IBM/WebSphere/AppServer/installedConnectors/wmq.
jmsra.rar/com.ibm.mq.headers.jar:/opt/IBM/WebSphere/AppServer/installedConnectors/wmq.jmsra.rar/com.ibm.mq.jar:/opt/IBM/We
bSphere/AppServer/installedConnectors/wmq.jmsra.rar/com.ibm.mq.jmqi.jar:/opt/IBM/WebSphere/AppServer/installedConnectors/w
mq.jmsra.rar/com.ibm.mq.jmqi.local.jar:/opt/IBM/WebSphere/AppServer/installedConnectors/wmq.jmsra.rar/com.ibm.mq.jmqi.remo
te.jar:/opt/IBM/WebSphere/AppServer/installedConnectors/wmq.jmsra.rar/com.ibm.mq.jmqi.system.jar:/opt/IBM/WebSphere/AppSer
ver/installedConnectors/wmq.jmsra.rar/com.ibm.mq.jms.admin.jar:/opt/IBM/WebSphere/AppServer/installedConnectors/wmq.jmsra.
rar/com.ibm.mq.pcf.jar:/opt/IBM/WebSphere/AppServer/installedConnectors/wmq.jmsra.rar/com.ibm.mqjms.jar:/opt/IBM/WebSphere
/AppServer/installedConnectors/wmq.jmsra.rar/com.ibm.msg.client.commonservices.j2se.jar:/opt/IBM/WebSphere/AppServer/insta
lledConnectors/wmq.jmsra.rar/com.ibm.msg.client.commonservices.jar:/opt/IBM/WebSphere/AppServer/installedConnectors/wmq.jm
sra.rar/com.ibm.msg.client.jms.internal.jar:/opt/IBM/WebSphere/AppServer/installedConnectors/wmq.jmsra.rar/com.ibm.msg.cli
ent.jms.jar:/opt/IBM/WebSphere/AppServer/installedConnectors/wmq.jmsra.rar/com.ibm.msg.client.matchspace.jar:/opt/IBM/WebS
phere/AppServer/installedConnectors/wmq.jmsra.rar/com.ibm.msg.client.provider.jar:/opt/IBM/WebSphere/AppServer/installedCo
nnectors/wmq.jmsra.rar/com.ibm.msg.client.ref.jar:/opt/IBM/WebSphere/AppServer/installedConnectors/wmq.jmsra.rar/com.ibm.m
sg.client.wmq.common.jar:/opt/IBM/WebSphere/AppServer/installedConnectors/wmq.jmsra.rar/com.ibm.msg.client.wmq.factories.j
ar:/opt/IBM/WebSphere/AppServer/installedConnectors/wmq.jmsra.rar/com.ibm.msg.client.wmq.jar:/opt/IBM/WebSphere/AppServer/
installedConnectors/wmq.jmsra.rar/com.ibm.msg.client.wmq.v6.jar:/opt/IBM/WebSphere/AppServer/installedConnectors/wmq.jmsra
.rar/dhbcore.jar
3. Start all of the Product Master services by using the start script:
cd $TOP/bin/go

./start_local_rmlogs.sh
Note: You might need to back up the $TOP/logs directory first.
Note: The Product Master appserver process fails to start, and this is expected. It is caused by the inclusion of the WebSphere Application Server runtime jars in the class path, which gets updated in WebSphere Application Server for the Product Master appserver. Therefore, the next step is to remove these WebSphere Application Server JMS runtime jars from the deployed Product Master appserver in WebSphere Application Server.
4. Start the default server "server1" of WebSphere Application Server, if not already started:
cd <WAS_HOME>/profiles/AppSrv01/bin

./startServer.sh server1
5. Log in to the WebSphere Application Server admin console:
http://<ServerName_Or_IP>:9060/ibm/console/login.do
where 9060 is your console port number.
6. On left side navigation, click Servers > Server Types > WebSphere Application Servers.
7. Open the server where Product Master is installed, as defined in the $TOP/bin/conf/env_settings.ini file. Click Java and Process Management > Process definition >
Java Virtual Machine.
8. Remove the entries for the WebSphere Application Server JMS runtime jars from the Classpath property. The added jars are towards the end of the class path
entry.
9. Click Apply and then Save.
10. Stop all of the Product Master services by using the stop script:
cd $TOP/bin/go

./abort_local.sh

11. Start all of the Product Master services by using the start script:
cd $TOP/bin/go

./start_local_rmlogs.sh



Note: You might need to back up the $TOP/logs directory first.

Now all of the services should start normally. You can restart all Product Master services by using the default out-of-the-box scripts, unless a redeployment is done by using the install_war.sh file to move to a higher fix pack or interim fix, in which case the WebSphere Application Server class path needs to be modified manually as detailed in the preceding section.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

JMS connectivity issues using WebSphere MQ server


JMS connection for Product Master is set and configured by using WebSphere® MQ server.

Symptoms
Unable to connect to WebSphere MQ by using Product Master. The queues, queue connection factory, and queue manager are defined in the WebSphere MQ server.

Resolving the problem


WebSphere MQ client jars are required to be included in the Product Master class path. To configure this environment, follow these steps:

1. Ensure that the WebSphere MQ client is installed. The client should be of the same version as the WebSphere MQ server.
Important: If the version of WebSphere MQ client and the WebSphere MQ Resource Adapter (RA) included with WebSphere Application Server do not match, the
following error is generated:
MQJCA1008: An incorrect version of the WebSphere MQ classes for JMS was found. Deployment failed during Websphere Application
server startup due to MQ version missmatch in between MQ files included in Websphere Application server installation and MQ
client installation

To avoid this error, create the following symbolic links to load the JAR files directly into the class path from the resource adapter of the WebSphere Application
Server.

ln -s $WAS_HOME/installedConnectors/wmq.jmsra.rar/com.ibm.mq.jar $TOP/jars/com.ibm.mq.jar
ln -s $WAS_HOME/installedConnectors/wmq.jmsra.rar/com.ibm.mq.jmqi.jar $TOP/jars/com.ibm.mq.jmqi.jar
ln -s $WAS_HOME/installedConnectors/wmq.jmsra.rar/com.ibm.mqjms.jar $TOP/jars/com.ibm.mqjms.jar
2. Stop all of the Product Master services by using the stop script:
cd $TOP/bin/go

./abort_local.sh

3. Edit the $TOP/bin/conf/env_settings.ini file to enable the inclusion of WebSphere MQ jars. The section details are:
#MQ client section

[mq]

enabled=yes

#home will default to /opt/mqm if not set

home=<mq_home>
If /opt/mqm is not the defined home for WebSphere MQ client installation, include the correct directory value in <mq_home>.
4. Run the $TOP/bin/configureEnv.sh script to reset the classpath variable. This includes the WebSphere MQ client jars in the class path.
5. Start all of the Product Master services by using the start script:
cd $TOP/bin/go

./start_local_rmlogs.sh
Note: You might need to back up the $TOP/logs folder first.
You should now be able to connect to WebSphere MQ server by using the JMS connection configured.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Java connectivity issues


I have never received this error before. Why am I getting this error and how can I fix it?

Symptoms
The JDBC URL is defined in the common.properties file.



Diagnosing the problem
Test the Java™ connectivity from the Product Master server to the JDBC URL.

Resolving the problem


Use the following script to attempt to connect to the database and run a simple select count(*) from dual SQL command. If a connection is established, the results from the test script appear.

$TOP/bin/test_db.sh

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

java.net.ConnectException: Connection refused error


I am getting errors in the logs that are related to connection refusal.

Symptoms
Along with the errors, I am also seeing the following behavior intermittently:

1. The CPU usage spikes up to over 100%.


2. The application user owning the installation is unable to log in to the server; other users are working fine.
3. The downstream systems are unable to interact with the application.

The product works normally upon an application restart. Why am I getting this and how can I prevent it?

Causes
When a user is created, a resource usage limit is specified for the user and for the processes that the user creates. This prevents the user from being able to crash the whole server in case of failure. However, if this limit is too low, it can also lead to issues for the user and the processes that the user owns.
One of the common reasons for getting the java.net.ConnectException is that the user is unable to create the process that is requested by the application and is hence unable to connect to the back-end server. The configuration controlling this limit is the "ulimit -u" setting, which specifies the maximum number of processes available to the user. If the user exhausts all available processes, it cannot make new connections, resulting in the issues that are mentioned above and exceptions like the following:

Exception:Connection refused to host: 10.15.66.164; nested exception is:

java.net.ConnectException: Connection refused

java.rmi.ConnectException: Connection refused to host: 10.15.66.164; nested exception is:

java.net.ConnectException: Connection refused

at sun.rmi.transport.tcp.TCPEndpoint.newSocket(TCPEndpoint.java:613)

at sun.rmi.transport.tcp.TCPChannel.createConnection(TCPChannel.java:210)

at sun.rmi.transport.tcp.TCPChannel.newConnection(TCPChannel.java:196)

at sun.rmi.server.UnicastRef.invoke(UnicastRef.java:122)

at com.ibm.ccd.scheduler.common.Scheduler_Stub.getRunningInfo(Scheduler_Stub.java:236)

at com.ibm.ccd.scheduler.common.JobStatus._updateStatus(JobStatus.java:71)

at com.ibm.ccd.scheduler.common.JobStatus.getCached(JobStatus.java:147)

at com.ibm.ccd.scheduler.common.JobStatus.getRunningByJobId(JobStatus.java:158)

at com.ibm.ccd.scheduler.threads.MasterThread.checkJobNotCurrentlyRunningOnAnyJVM(MasterThread.java:266)

at com.ibm.ccd.scheduler.threads.MasterThread.fuzaoRun(MasterThread.java:418)

at com.ibm.ccd.common.util.FuzaoRunnableAdapter.run(FuzaoRunnableAdapter.java:54)

at com.ibm.ccd.common.util.FuzaoThread.run(FuzaoThread.java:123)

Caused by: java.net.ConnectException: Connection refused

at java.net.PlainSocketImpl.socketConnect(Native Method)

at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:381)

at java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:243)

at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:230)

at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:377)

at java.net.Socket.connect(Socket.java:539)



at java.net.Socket.connect(Socket.java:488)

at java.net.Socket.<init>(Socket.java:385)

at java.net.Socket.<init>(Socket.java:199)

at sun.rmi.transport.proxy.RMIDirectSocketFactory.createSocket(RMIDirectSocketFactory.java:34)

at sun.rmi.transport.proxy.RMIMasterSocketFactory.createSocket(RMIMasterSocketFactory.java:140)

at sun.rmi.transport.tcp.TCPEndpoint.newSocket(TCPEndpoint.java:607)

... 11 more

Resolving the problem


Check the maximum number of allowed processes on the server by using the following command:

ulimit -u

If this is set to a low value, say 1024, then increase it to 131072 or unlimited:

ulimit -u 131072
ulimit -u unlimited

Once this value is increased, the application user should work normally.
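
To see how close the user is to the limit, compare the setting with the current usage (a sketch; the user name is illustrative and the ps options are for Linux):

ulimit -u                    # maximum processes for the current user
ps -L -u pimuser | wc -l     # approximate count of processes and threads owned by pimuser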

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

java.sql.SQLRecoverableException: IO Error: Connection reset error


When the application is running, after a certain interval the read/write SQL queries fail due to the java.sql.SQLRecoverableException: IO Error: Connection
reset error.

Symptoms
Following is an example of the stack trace:

2017-12-06 20:02:49,711 [sch_worker_0] ERROR


com.ibm.ccd.common.db.Query JOB_ID:2203- CWPCM0040E:The query
failed : [ Static Query [name:
GEN_DOC_DHI_DOC_HIERARCHY_GETBYPATH]
[id: 681]
SELECT *
FROM tdoc_dhi_doc_hierarchy
WHERE dhi_cmp_id = 1
AND dhi_doc_path = '/scripts/reports/Matt report'], Exception:IO Error: Connection reset
java.sql.SQLRecoverableException: IO Error: Connection reset
at
oracle.jdbc.driver.T4CPreparedStatement.executeForDescribe(T4CPreparedStatement.java:790)
at
oracle.jdbc.driver.OracleStatement.executeMaybeDescribe(OracleStatement.java:925)
at
oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1111)
at
oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:4798)
at
oracle.jdbc.driver.OraclePreparedStatement.executeQuery(OraclePreparedStatement.java:4845)
at
oracle.jdbc.driver.OraclePreparedStatementWrapper.executeQuery(OraclePreparedStatementWrapper.java:1501)
at
org.apache.commons.dbcp2.DelegatingPreparedStatement.executeQuery(DelegatingPreparedStatement.java:83)
at
org.apache.commons.dbcp2.DelegatingPreparedStatement.executeQuery(DelegatingPreparedStatement.java:83)
at com.ibm.ccd.common.db.Query.execute(Query.java:982)
at com.ibm.ccd.common.db.Query.execute(Query.java:858)
at com.ibm.ccd.common.db.Query.execute(Query.java:824)
at com.ibm.ccd.common.db.Query.execute(Query.java:806)
at
com.ibm.ccd.common.gendb.GenDocDhiDocHierarchyTable.getByPath(GenDocDhiDocHierarchyTable.java:615)
at com.ibm.ccd.docstore.common.Doc$3.run(Doc.java:423)
at
com.ibm.ccd.common.context.common.DBContext.runInNewDBContext(DBContext.java:1392)
at com.ibm.ccd.docstore.common.Doc.get(Doc.java:418)
at
com.ibm.ccd.docstore.common.DocStoreMgr.get(DocStoreMgr.java:67)
at
com.ibm.ccd.docstore.common.DocStoreMgr.get(DocStoreMgr.java:62)
at
com.ibm.ccd.docstore.common.DocStoreMgr.get(DocStoreMgr.java:54)
at com.ibm.ccd.report.common.Report.<init>(Report.java:105)
at
com.ibm.ccd.report.common.ReportMgr.buildReport(ReportMgr.java:64)
at
com.ibm.ccd.report.common.ReportMgr.buildReport(ReportMgr.java:79)
at
com.ibm.ccd.report.common.ReportMgr.getReportById(ReportMgr.java:106)



at
com.ibm.ccd.report.common.ReportExe.execute(ReportExe.java:97)
at
com.ibm.ccd.scheduler.threads.SchedulerThread.fuzaoRun(SchedulerThread.java:262)
at
com.ibm.ccd.common.util.FuzaoRunnableAdapter.run(FuzaoRunnableAdapter.java:54)
at
com.ibm.ccd.common.util.FuzaoThread.run(FuzaoThread.java:123)
Caused by: java.net.SocketException: Connection reset

Causes
A connection reset error indicates that the database connection drop can be influenced by the network.

Resolving the problem


The network administrator can check the following:

1. Check whether there is a load balancer or gateway that can cause the connection to drop.
2. Run a trace route command from the application server to the database server to check whether any hop goes through a load balancer.
3. Check the network statistics with the netstat -rn command for a gateway that is present under the VLAN or network.
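
For example (the database host name is illustrative):

traceroute dbhost.example.com   # look for unexpected load-balancer or gateway hops
netstat -rn                     # review the gateways reported for the VLAN or network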

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

java.sql.SQLException: Listener refused the connection error


Running the test_db.sh script sometimes fails with the java.sql.SQLException: Listener refused the connection error, accompanied by the following message:
ORA-12505, TNS:listener does not currently know of SID given in connect descriptor

Symptoms
java.sql.SQLException: Listener refused the connection with the following error:
ORA-12505, TNS:listener does not currently know of SID given in connect descriptor

at oracle.jdbc.driver.T4CConnection.logon(T4CConnection.java:743)
at oracle.jdbc.driver.PhysicalConnection.connect(PhysicalConnection.java:662)
at oracle.jdbc.driver.T4CDriverExtension.getConnection(T4CDriverExtension.java:32)
at oracle.jdbc.driver.OracleDriver.connect(OracleDriver.java:560)
at java.sql.DriverManager.getConnection(DriverManager.java:426)
at java.sql.DriverManager.getConnection(DriverManager.java:474)
at com.ibm.ccd.common.db.TestDB.main(TestDB.java:82)
Caused by: oracle.net.ns.NetException: Listener refused the connection with the following error:
ORA-12505, TNS:listener does not currently know of SID given in connect descriptor

at oracle.net.ns.NSProtocolStream.negotiateConnection(NSProtocolStream.java:275)
at oracle.net.ns.NSProtocol.connect(NSProtocol.java:264)
at oracle.jdbc.driver.T4CConnection.connect(T4CConnection.java:1452)
at oracle.jdbc.driver.T4CConnection.logon(T4CConnection.java:496)
... 6 more

The test_db.sh script tries to create two types of connections:

Client database connection
JDBC connection

The syntax after the port number uses either a colon (:) or a forward slash (/), and each connection type interprets the string after it differently:

Client database connection
Colon (:) - The string is treated as a local service name. If the string is present in the tnsnames.ora file, the connection is successful.
jdbc:oracle:thin:@IP address:port number:mdmdb (mdmdb is treated as a local service name)
Note: If the string is missing from the tnsnames.ora file, the connection fails with an ORA-12154: Unknown service name error. To resolve, create a TNS entry having the same value as the string.
Forward slash (/) - The string is treated as an Oracle Database service that is present on the Oracle server and registered with the listener, and the connection is successful.
jdbc:oracle:thin:@IP address:port number/mdmdb_India (mdmdb_India is treated as an Oracle Database service name)
JDBC connection
Colon (:) - The string is treated as a SID, and the connection is successful.
jdbc:oracle:thin:@IP address:port number:mdmdb (mdmdb is treated as a SID)

Causes
For Oracle Database 12c Release 2 or Oracle RAC, the db_url that is generated in the db.xml file must use a forward slash "/" after the port number. If it contains a colon ":", change the colon to a forward slash "/".

Resolving the problem



Proceed as follows to resolve the connection issue:

Browse to the db.xml file in the $TOP/etc/default folder and change the colon ":" to a forward slash "/" as shown:

<?xml version="1.0" encoding="UTF-8"?>


<db_config>
<db_userName>PIMDB</db_userName>
<db_password_encrypted/>
<db_password_plain>PIMDB</db_password_plain>
<db_url>jdbc:oracle:thin:@IP address:port number/PIMDB</db_url>
<db_class_name>oracle.jdbc.driver.OracleDriver</db_class_name>
<db_type>oracle</db_type>
</db_config>
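After editing, you can confirm the change and retest connectivity. A quick check, assuming the default file location and the test_db.sh script in the $TOP/bin directory:

# Confirm that db_url now uses a forward slash after the port number.
grep db_url $TOP/etc/default/db.xml

# Retest the database and JDBC connections.
$TOP/bin/test_db.sh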

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

java.lang.StackOverflowError error
Why am I getting this error and how can I prevent it?

Symptoms
I am seeing java.lang.StackOverflowError while running the application.

Causes
Every thread that is created in a Java program or Java Virtual Machine (JVM) has its own stack space, which is independent of the Java heap. The stack size for each thread is determined during startup; when a thread's call stack exceeds that size, the result is a java.lang.StackOverflowError
like the following:
com.ibm.ws.webcontainer.servlet.ServletWrapper service SRVE0068E: Uncaught exception created in one of the service methods of
the servlet utils.secure_invoker in application ccd_mdmprod-appsvr. Exception created :
com.ibm.websphere.servlet.error.ServletErrorReport: java.lang.StackOverflowError

at org.apache.jasper.runtime.PageContextImpl.handlePageException(PageContextImpl.java:695)

at utils.secure_invoker._jspService(Unknown Source)

at com.ibm.ws.webcontainer.jsp.runtime.HttpJspBase.service(HttpJspBase.java:103)

at javax.servlet.http.HttpServlet.service(HttpServlet.java:831)

at com.ibm.ws.webcontainer.servlet.ServletWrapper.service(ServletWrapper.java:1657)

at com.ibm.ws.webcontainer.servlet.ServletWrapper.service(ServletWrapper.java:1597)

at com.ibm.ws.webcontainer.filter.WebAppFilterChain.doFilter(WebAppFilterChain.java:104)

at com.ibm.ws.webcontainer.filter.WebAppFilterChain._doFilter(WebAppFilterChain.java:77)

at com.ibm.ws.webcontainer.filter.WebAppFilterManager.doFilter(WebAppFilterManager.java:908)

Resolving the problem


To prevent this error, the stack size available to the application should be increased. Following are the steps to do so:

1. Locate the service_mem_settings.ini file in the conf directory. This directory is usually under the $TOP/bin directory but can be in a custom location if you have horizontal clustering.
2. This file contains JVM arguments for the various services, for example, the following flag for the appserver process: APPSVR_MEMORY_FLAG=-Xmx1024m -Xms256m
This entry shows that the maximum heap size is 1024 MB and the initial heap size is 256 MB. To increase the stack size, add an -Xss argument to it:

APPSVR_MEMORY_FLAG=-Xmx1024m -Xms256m -Xss2048k

This entry sets the stack size to 2 MB; you can increase it further if that is not sufficient. This applies to all services, not just the appserver.

3. Restart the application by using scripts under $TOP/bin/go directory.


Note:
a. This issue is rare unless there are large data models. Usually the defaults are sufficient.
b. Setting the default stack space to a very large value can result in performance degradation. Start with 2 MB and increase it as needed.
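For example, the resulting configuration and restart sequence might look as follows; this is a sketch, and the start_local.sh script name is an assumption based on the default scripts under the $TOP/bin/go directory:

# In $TOP/bin/conf/service_mem_settings.ini, the edited entry:
# APPSVR_MEMORY_FLAG=-Xmx1024m -Xms256m -Xss2048k

# Restart the services so that the new stack size takes effect.
$TOP/bin/go/stop_local.sh
$TOP/bin/go/start_local.sh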

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

java.lang.OutOfMemoryError: Failed to create a thread error



Why am I getting this and how can I correct it?

Symptoms
When I try to run a script or start the product services, the environment is unable to create Java threads and throws an OutOfMemoryError exception.

Causes
The java.lang.OutOfMemoryError: Failed to create a thread message occurs when the system does not have enough resources to create a new thread.
There are three possible causes for this message:

Inadequate user or application resources.
The system has run out of native memory to use for the new thread. Threads require native memory for internal JVM structures, a Java stack, and a native stack.
There are too many threads already running and the system has run out of internal resources to create new threads.

Sample error stack:


java.lang.OutOfMemoryError: Failed to create a thread: retVal -1073741830, errno 11

at java.lang.Thread.startImpl(Native Method)

at java.lang.Thread.start(Thread.java:891)

at java.util.concurrent.ThreadPoolExecutor.addIfUnderMaximumPoolSize(ThreadPoolExecutor.java:738)

at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:668)

at sun.rmi.transport.tcp.TCPTransport$AcceptLoop.executeAcceptLoop(TCPTransport.java:396)

at sun.rmi.transport.tcp.TCPTransport$AcceptLoop.run(TCPTransport.java:353)

at java.lang.Thread.run(Thread.java:738)
Note: The native, or system, heap is different from the Java heap. The Java heap contains the instances of Java objects and is maintained by garbage collection. The maximum size of the Java heap is pre-allocated during JVM startup as one contiguous area, even if the minimum heap size setting is lower. Meanwhile, the native heap is allocated by using the underlying malloc and free mechanisms of the operating system and is used for the underlying implementation of Java objects. You can increase or decrease the maximum native heap available by altering the size of the Java heap. This relationship between the heaps occurs because the process address space that is not used by the Java heap is available for native heap usage.

Resolving the problem


Refer to the following (in order) to correct this error:

1. Linux has a maximum allowed processes-per-user limit, which you can check by using the ulimit -u command. If this value is low (the default is 1024), either make it unlimited or raise it to a high value, say 131072. This limit is also shown as max user processes in the ulimit -a output. Use the following command to set it to unlimited: ulimit -u unlimited (see the sketch after this list).
2. Increase the amount of native memory available by lowering the size of the Java heap by using the -Xmx option. The process address space that is not used by the Java heap is available for native heap usage. The Java heap is set in the $TOP/bin/conf/service_mem_settings.ini file for the 6 services and as custom_java_options in the $TOP/bin/conf/env_settings.ini file for the back-end scripts.
3. Check disk space by using the df -k command.
4. Lower the number of threads being used.
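For step 1, a minimal sketch on Linux; pimuser is a placeholder for the operating system user that runs the Product Master services:

# Check the current per-user process limit.
ulimit -u

# Raise it for the current shell session.
ulimit -u 131072

# To make the change persistent, add entries to /etc/security/limits.conf:
# pimuser soft nproc 131072
# pimuser hard nproc 131072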

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Transaction log full error for imports implemented with the Java API
Processing large data sets through imports that are implemented with the Java™ API may perform poorly and eventually cause a Transaction log full error (for example, SQL0964C on DB2®) when data is not committed intermittently.

Symptoms
During the early phase of large imports, entry.save() operations perform quickly, but performance decreases significantly over time. Eventually save operations might fail due to SQL0964C errors (Db2) and the scheduler service exception.log files show entries like:
2013-01-24 12:08:32,809 [sch_worker_0] ERROR com.ibm.ccd.common.error.AustinException JOB_ID:5613- CWPCM0002E:Generic
error / Exception: CWPCO0015E:Update failed, the operation has been rolled back: , Exception:DB2 SQL Error: SQLCODE=-964,
SQLSTATE=57011, SQLERRMC=null, DRIVER=3.64.82

com.ibm.db2.jcc.am.SqlException: DB2 SQL Error: SQLCODE=-964, SQLSTATE=57011, SQLERRMC=null, DRIVER=3.64.82

at com.ibm.db2.jcc.am.bd.a(bd.java:682)

...

at com.ibm.db2.jcc.am.jo.executeUpdate(jo.java:750)

...

at com.ibm.ccd.api.catalog.item.ItemImpl.save(ItemImpl.java:698)

...



at com.ibm.ccd.api.extensionpoints.ImportFunctionArgumentsImpl.run(ImportFunctionArgumentsImpl.java:79)

Causes
If no additional transaction handling is implemented, all items that are processed by one import are handled in one transaction, which can cause poor import performance and exhaust the transaction log space on the database server side. This is likely to occur only while processing large data sets.

Diagnosing the problem


Add logging statements to your import extension point code to monitor progress in data processing and check exception.log files for errors.

Resolving the problem


Implement transaction handling in your import extension point code and commit data at regular intervals. For more information, see:

1. Sample code for transaction and exception handling in import and report jobs
2. Transactions and exception handling guidelines for Java APIs: Common import scenarios

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Unable to change to remote directory error


Product Master tried to log in to a target FTP server and failed.

Symptoms
If Product Master tried to log in to a target FTP server and failed to find the specified directory, an error occurs, "Unable to change to remote directory.".

Causes
There are a couple of reasons for this error:

The target FTP address is not accessible from the Product Master server.
The file name might be wrong.

Resolving the problem


Check for capitalization and spelling errors. From the Product Master server, try to FTP directly to the target server and verify the file transfer.
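For example, a manual check from the Product Master server; the host and directory names are placeholders:

# Verify that the FTP server is reachable and that the directory exists.
ftp ftp.example.com
# After logging in:
#   cd /expected/remote/directory
#   put testfile.txt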

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Product Master box does not see the target destination


I have never received this error before. Why am I getting this error and how can I fix it?

Symptoms
I cannot get the Product Master box to see my target destination.

Resolving the problem


Use a Linux® or UNIX HTTP browser, such as Lynx, and type in the Product Master URL to see whether the target is accessible.
If a browser is not available from the Product Master server, telnet to port 80 on the destination because port 80 is the default HTTP port on most web servers. For
example, if the destination URL is http://myserver/urlname, type telnet myserver 80.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Is the distributor working



I have never received this error before. Why am I getting this error and how can I fix it?

Symptoms
I cannot tell if the Product Master distributor is working correctly.

Causes
Check the following (see the example commands after this list):

Check for the existence of new files under $TOP/public_html/created_files/distributor.
Check whether any file has the approximate time stamp of when you tried to push the file through. It is possible that a runaway script generated a bad output file.
Check the file size. Does the file size correspond to what you were expecting?
If the file is an XML file or an otherwise readable file, display its contents. Does it contain the correct information that you were expecting?
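For example, the checks in this list can be run as follows; this is a sketch using standard commands:

# List the newest generated files with their time stamps and sizes.
ls -lt $TOP/public_html/created_files/distributor | head

# Display the contents of a readable output file, such as XML.
cat $TOP/public_html/created_files/distributor/<file name>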

Resolving the problem


If the file exists, is the transfer in progress?
You can use different tools to see whether an actual transfer is in progress. At minimum, use:

For Solaris, netstat and snoop
For Linux®, tcpdump

If the file size is 300 MB and it is being posted to a URL over the internet, the transfer cannot exceed the maximum speed of the internet connection.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Troubleshooting user interface issues


Use the following topics to resolve common issues with the Product Master client.

When specific user interface actions perform slowly, it is important to determine:

how long it takes until the requested page is rendered (total runtime), and
what portion of that overall runtime is spent on the appsvr service to retrieve the required information from the database and build the result that is sent to the client for rendering. This can be determined from most result pages by checking the performance value.

Screen rendering in the Persona-based UI interface


Single edit screen
There is a limit on how many attributes are rendered when you open an item in the single-edit screen. The limit is determined by how many attributes can fit into the initial display page and hence depends on the type of attributes. To display more attributes, scroll down. There is a one-second delay, followed by the resizing of the scroll bar, and then all the attributes are rendered.
Multi edit screen
There is no limit on the number of items that you can open at a time. However, with many items, loading of the multi-edit screen does not mean that all the items have been rendered. Only the items on the section of the page in focus are rendered; for example, if you open 1000 items in the multi-edit screen, then initially only the first few items are rendered. As you scroll down, you notice a pattern of dots, which means that the items are being rendered, and then you see the items. If you scroll through a page too fast, the page does not get a chance to render and still shows dots when you scroll up. These items need to be rendered only once unless the page is refreshed.

Browser issues
Browser issues might cause Product Master user interface performance problems. The topic discusses resolutions specifically for the Microsoft Internet Explorer browser, but they largely apply to other browsers too.
Could not load error after login
Why am I getting this error and how can I fix it?
Error 500: com.ibm.websphere.servlet.session.UnauthorizedSessionRequestException error
You encounter an error if you are using the same browser for two different applications.
Item Rich Search on a multi-occurring attribute returns incorrect results
If you run Item Rich Search on a multi-occurring attribute in a search template, then the result set includes items that appear to have incorrect values of the
attribute.
Login page is getting reloaded
After entering credentials on the login page and clicking on "Login", the page reloads instead of logging the user in. Internet Explorer shows a small exclamation
mark icon indicating there were errors on the page.
Unauthorized error for the static selection user
In the catalog explorer single-edit page of the Persona-based UI, a static selection user is not able to add an item.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)



Browser issues
Browser issues might cause Product Master user interface performance problems. This topic discusses resolutions specifically for the Microsoft Internet Explorer browser, but they largely apply to other browsers too.

Browser back button does not load the previous application page
Symptoms
On clicking the "back" button in the browser, the application does not load the previous page.

Causes
The browser "back" button behavior is not controlled by the application.

Resolving the problem


Use the Back button that is provided in the application for navigating back to the previous application page.

Wrong or slow display of user interface


Symptoms
The user interface displays incorrectly (especially menu drop-down anomalies), or displays slowly while the browser shows high CPU usage when rendering a page, or the anti-virus software shows high CPU usage.

Resolving the problem


If you encounter performance or display anomalies, check the following:
Table 1. Checklist for performance or display anomalies

Browser compatibility
Using the latest supported version ensures optimal performance. For more information, see System requirements.
Security settings
Configured to the recommended levels.
Zoom level
100%
Browser cache
Clean the browser cache after applying a fix pack, interim fix, or test fix. Frequently, JavaScript files that the user interface depends on are updated and installed with each release. These JavaScript files are cached by the browser when the user interface loads. So, to avoid incompatibilities and issues in using the user interface, you must clean your browser cache so that the latest JavaScript files are loaded and used by the user interface.
Avoid multiple windows or tabs
Use only one browser window or tab at a time to access the interface. As an administrator, you can disable tabbed browsing and ensure that each Internet Explorer window uses its own session.
To disable tabbed browsing, proceed as follows:
1. Open the Internet Explorer browser.
2. Click Tools > Internet Options > General > Tabs.
3. Clear all the Tabbed Browsing options.
4. Click OK > Apply > OK.
5. Restart Internet Explorer.
To ensure that each Internet Explorer window uses its own session, either type the following on the command line and press Enter:
iexplore.exe -noframemerging
or right-click the Internet Explorer icon on your desktop, add the -noframemerging parameter in the Target field, and click OK:
"C:\Program Files\Internet Explorer\iexplore.exe" -noframemerging
Screen resolution
Minimum of 1366x768 pixels
Anti-virus software
Configure Product Master as a trusted application in the anti-virus software (if possible), to bypass the scanning.
Note: Anti-virus or malware software scans can cause slow rendering or UI malformation across most browsers because of JavaScript errors. The JavaScript errors occur because the scanning interferes with the JavaScript scripts running asynchronously. You can identify JavaScript errors by using the Developer tools of most browsers.
Document Mode
Set the Document Mode in the Internet Explorer browser correctly. Use an HTTP Monitor application to see when data is received by the browser, and monitor the total CPU usage, especially the CPU usage of the browser process (open Windows Task Manager, click the Processes tab, and watch the iexplore.exe process), to determine whether the slowness is on the client side; otherwise check the server side.
Compatibility View mode
Disable the intranet site for the Compatibility View mode in the Internet Explorer browser.
Avoid CTRL-N keyboard shortcut
Do not use the keyboard shortcut combination, CTRL-N, to open a new interface session.
Search bars
Do not have any vendor-acquired search bars installed in your browser.
Working with complex items
Check the following options when working with complex items (that use many groupings and multi-occurring values):
Collapse nodes of grouped or multi-occurring attributes that are deeper than this level.
Collapse multi-occurring attributes that have more than this number of occurrences.
Multi-occurrences - Enable paging when the number of occurrences is larger than this number.
Multi-occurrences - Number of occurrences that display per page to gain faster display.
Browser back button
Use the Back button that is provided in the application instead of the browser back button to navigate to the previous application page.

Internet Explorer does not load web pages correctly


Symptoms
You experience browser exceptions like "Could not load" or "Errors on this web page might cause it to work incorrectly", or some parts of the web page are blank.

Causes
Product documentation and enterprise software include resources that seldom change, such as CSS files, image files, and JavaScript files. These resources take time to download over the network, increasing the time that is taken to load a web page. Browsers cache this information to prevent the need for a round trip. Once a resource is cached, a browser or proxy can refer to the locally cached copy instead of downloading it again on subsequent visits to the web page.
Thus, there can be a scenario where resources on the application or the product documentation hosting server have changed or been upgraded, but the browser still has the previous information cached. The browser issues mentioned previously might occur in such circumstances.
Example

Web page error details:


User Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0;
.NET CLR 1.1.4322; .NET CLR 2.0.50727; .NET CLR 3.0.4506.2152; .NET CLR
3.5.30729; InfoPath.1; MS-RTC LM 8)Timestamp: Sat, 6 Oct 2012 04:59:15 UTC

Message: Could not load 'dijit.MenuItem'; last tried


'../dijit/MenuItem.js'
Line: 16
Char: 6218
Code: 0
URI: https://Product_Center_URL/js/dojo_toolkit/release/dojo/dojo/dojo.js

Resolving the problem


For resolution, try the following in the Internet Explorer browser:

Browse to Tools > Internet Options > General tab. In the Browsing history section, click Delete to delete the temporary files and cookies, and restart your browser.
Browse to Tools > Safety > Delete Browsing History.
Clear the Preserve Favorite website data check box.
Select the Temporary internet files and website files check box.
Click Delete to delete the cache.
Restart the browser and check whether the page loads properly.
Browse to Tools > Internet Options > Advanced tab. Click Restore advanced settings and restart the browser to reset all advanced settings to the default values with which the application and product documentation were tested.
Browse to Tools > Internet Options > Advanced tab. In the Reset Internet Explorer settings section, click Reset to restore all the settings to the default values.
If you still have an issue, open a PMR.

An item fails to load in the navigational pane


Symptoms
An item fails to load in the navigational pane with the SingleEditQueryStore JavaScript error.

Causes
An encryption configuration on the load balancer affects certain Product Master web pages that contain JSON.

Resolving the problem


Disable the encryption configuration on the load balancer.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Could not load error after login


Why am I getting this error and how can I fix it?

Symptoms
I migrated to a newer version or fix pack of the product. After I log in to the application using my web browser, I get a "Could not load" error.

Causes



The migration process of IBM® Product Master includes deploying the ccd.war file. This file contains the source code and determines which code is run when a user performs a certain action. The ccd.war file of the older version of the product becomes unreachable and can lead to issues if the Internet Explorer cache still contains references to the previous ccd.war file. The user can get errors similar to the following when logging in:
Web page error details:

User Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; .NET CLR 1.1.4322; .NET CLR 2.0.50727; .NET CLR
3.0.4506.2152; .NET CLR 3.5.30729; InfoPath.1; MS-RTC LM 8)Timestamp: Sat, 6 Oct 2012 04:59:15 UTC

Message: Could not load 'dijit.MenuItem'; last tried '../dijit/MenuItem.js'

Line: 16

Char: 6218

Code: 0

URI: https://Product_Center_URL/js/dojo_toolkit/release/dojo/dojo/dojo.js

Message: Could not load 'ibm.ccd.util.PIMDojoFixes'; last tried '../ibm/ccd/util/PIMDojoFixes.js'

Line: 16

Char: 6218

Code: 0

URI: https://Product_Center_URL/js/dojo_toolkit/release/dojo/dojo/dojo.js

Message: Could not load class 'dijit.MenuBar'. Did you spell the name correctly and use a full path, like 'dijit.form.Button'?

Line: 16

Char: 69269

Code: 0

URI: https://Product_Center_URL/js/dojo_toolkit/release/dojo/dojo/mdmdojo-main.js

Resolving the problem


Clear the cache.
Internet Explorer

Click Tools > Internet Options > General tab, browse to Browsing History section and click Delete. On the next screen, click Delete again to clear the cache.
Note: When clearing the Internet Explorer cache, make sure that the browser does not retain any data. For example, if the Preserve Favorites website data option is checked in Internet Explorer, then JavaScript and CSS files can be preserved in the browser cache, leading to unreachable file references from the previous product level (before applying a patch). Ensure that this option is not checked before clearing the cache.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Error 500: com.ibm.websphere.servlet.session.UnauthorizedSessionRequestException error
You encounter an error if you are using the same browser for two different applications.

Symptoms
Logging in to the WebSphere® Application Server and the Admin UI on different tabs, but in the same browser, produces the following error.

Error 500: com.ibm.websphere.servlet.session.UnauthorizedSessionRequestException:


SESN0008E: A user authenticated as anonymous has attempted to access a session owned by user:
defaultWIMFileBasedRealm/uid=admin,o=defaultWIMFileBasedRealm

Causes
When single sign-on (SSO) is enabled on the WebSphere Application Server, SSO uses the same session for all the applications that are open in different tabs of the same browser window. When you first log in to the WebSphere Application Server, SSO reuses that first WebSphere Application Server session, which is not applicable to IBM® Product Master, resulting in an exception.

Resolving the problem


Always use different browsers for the two applications so that each application has its own session.



IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Item Rich Search on a multi-occurring attribute returns incorrect results


If you run Item Rich Search on a multi-occurring attribute in a search template, then the result set includes items that appear to have incorrect values of the attribute.

Symptoms
Item Rich Search on a multi-occurring attribute returns incorrect results.

Causes
Incorrect interpretation of results returned by the Item Rich Search on a multi-occurring attribute.

Diagnosing the problem


The Item Rich Search on a multi-occurring attribute is designed to return all items that contain the search value in any occurrence of the attribute.
To illustrate this behavior, consider the following example. Consider a catalog that contains a list of DVDs from your personal collection. For simplicity, let the catalog spec contain only three attributes: MovieID, MovieTitle, and Actors. Because there can be many actors in a movie, we can define the Actors attribute as a multi-occurring attribute with a minimum occurrence of 1 and a maximum occurrence of 10.

When you search for a movie that has a certain actor, the result set contains all the movies that have this actor as one of the actors. However, when the search results are
displayed in the Multi Edit UI screen, only the first value of that attribute is displayed. This means that if a certain record has this actor's name as the third actor, then the
record is included in the result set but will display the name of the first actor in the Multi Edit UI screen. If you move your mouse pointer to the column header of the multi-
occurring attribute, you see a tooltip that indicates that only the first occurrence of the attribute is displayed.

To verify this, you can highlight the record that you suspect to have been returned in error, and click Open to see the record in the Single Edit mode. In the Single Edit
mode, you can see that the actor whose name you had included in the search operation is actually the third actor who is listed in the multi-occurring attribute Actors.

This behavior is true regardless of the type of the attribute, and regardless of the operator specified in the search criteria.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Login page is getting reloaded


After entering credentials on the login page and clicking on "Login", the page reloads instead of logging the user in. Internet Explorer shows a small exclamation mark icon
indicating there were errors on the page.

Symptoms
The login page does not fully render or redirects to itself, and there is an IE exclamation point icon at the lower left-hand corner of the screen, which, when clicked, shows multiple missing JS and JSP files. When attempting to directly access JS files from the browser, the user is directed back to the login page, which can be partially rendered.
If the IE exclamation point is clicked, the following (or similar) JavaScript errors are displayed in a window:

Message: Syntax error


Line: 22
Char: 5
Code: 0
URI: http://appsvr.domain.com:7507/utils/enterLogin.jsp

Message: Syntax error


Line: 22
Char: 5
Code: 0
URI: http://appsvr.domain.com:7507/utils/enterLogin.jsp

Message: Syntax error


Line: 22
Char: 5
Code: 0
URI: http://appsvr.domain.com:7507/utils/enterLogin.jsp

Message: 'is_minor' is undefined


Line: 193
Char: 1
Code: 0
URI: http://appsvr.domain.com:7507/utils/enterLogin.jsp

Message: 'bLoginAllowed' is undefined


Line: 213
Char: 1



Code: 0
URI: http://appsvr.domain.com:7507/utils/enterLogin.jsp

Message: 'dojo' is undefined


Line: 224
Char: 5
Code: 0
URI: http://appsvr.domain.com:7507/utils/enterLogin.jsp

Message: Object expected


Line: 144
Char: 2
Code: 0
URI: http://appsvr.domain.com:7507/utils/enterLogin.jsp

Causes
Certain enabled WebSphere Application Server security settings can cause this behavior.

Diagnosing the problem


Open the application server's WebSphere Application Server admin console and check whether either of the following settings is enabled (checked):

1. Application servers > pimAppSvr_* > Security > Security domain > Application security > Enable application security
2. Security > Global security > Custom properties > com.ibm.ws.security.addHttpOnlyAttributeToCookies

Resolving the problem


Clear one or both of the settings that are listed under "Diagnosing the problem", save the changes in the console, and then restart the application cleanly.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Unauthorized error for the static selection user


In the catalog explorer single-edit page of the Persona-based UI, a static selection user is not able to add an item.

Symptoms
If you are a static selection user with only SELECTION_MEMBERS_ADD_ITEMS permission, and you click Add to add an item, you see the following error.
You are not authorized, contact your administrator for more details.

Causes
When you click Add to add an item, the system checks for the SELECTION_MEMBERS_ADD_ITEMS and SELECTION_MEMBERS_RECATEGORIZE_ITEMS permissions. If
any permission is missing, you get an unauthorized error.

Resolving the problem


A static selection user needs to have both of the following permissions in the static selection ACG.

SELECTION_MEMBERS_ADD_ITEMS
SELECTION_MEMBERS_RECATEGORIZE_ITEMS

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Troubleshooting database issues


Use the following topics to resolve common issues with the database.

Ensure that you have performed the following database best practices:

Make sure that the server has the appropriate capacity to handle the load and is not shared with other systems.
Check and make sure that the database statistics are up to date (see the example after this list).
Check memory allocation to make sure that there are no unnecessary disk reads.
Check to see whether the database needs to be defragmented.
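For example, statistics can be refreshed as follows; this is a sketch, the schema, table, and credentials are placeholders, and you should coordinate such maintenance with your DBA:

# Db2: update statistics for a table.
db2 "RUNSTATS ON TABLE PIMDB.TCTG_ITA_ITEM_ATTRIBUTES WITH DISTRIBUTION AND DETAILED INDEXES ALL"

# Oracle: gather statistics for the whole schema.
echo "EXEC DBMS_STATS.GATHER_SCHEMA_STATS('PIMDB');" | sqlplus -s pimdb/password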

Character set error messages during data export or import


Character set error messages display when you create test environments by exporting and importing a copy of the database.



Database space allocation problems
Occasionally, import and export jobs fail because of insufficient space allocated for tables, indexes, rollback segments, and temporary segments.
Slow performance if a running job is stopped
Whenever a job, such as an import or export job, is stopped, the database system must roll back the complete transaction to bring the database to a consistent state.
Redo log switch problems
I have never received this error before. Why am I getting this error and how can I fix it?
ORA-00054: resource busy and acquire with NOWAIT specified or timeout expired error
I was running the database migration script or modifying the database schema and received an error.
Product Master hangs and the client UI is frozen
If errors occur when accessing your Product Master, it is possible that the connection to the database might have been lost.
Product Master hangs when running analyze schema
The analyze schema command is run only for Db2®; there is no need to run it for Oracle.
com.ibm.db2.jcc.b.SqlException: Failure in loading T2 native library db2jcct2 error
You receive an error when using the Db2 9.1 64-bit client in AIX®.
Improving query performance
I have never received this error before. Why am I getting this error and how can I fix it?
Drop temporary aggregate tables and indexes
Dropping temporary aggregate tables at regular intervals can save a lot of disk space.
Complex SQL statements are running very slow
If the Db2 database configuration parameter STMTHEAP SZ is set too high, it can have a severe impact on total query runtime, especially for complex SQL statements, in both Product Master and Global Data Synchronization.
Changed database tables after running migrateToInstalledFP.sh script
I use Oracle database. After running the migrateToInstalledFP.sh migration scripts, there are tables that were reported as changed by the database verification
report. How do I fix them?
DBV_0_UK DBV_VERSION index error
After running the migrateToInstalledFP.sh migration scripts, the console output reported that the index is missing. How do I fix it?
java.sql.SQLException: ORA-00600: internal error code, arguments error
Why am I getting this error while running the migrateDataToXml.sh script and how can I fix it?

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Character set error messages during data export or import


Character set error messages display when you create test environments by exporting and importing a copy of the database.

Symptoms
For example, if you export a database that uses the US7ASCII character set, the following error message is recorded in the export log: Export done in US7ASCII
character set and UTF8 NCHAR character set server uses UTF8 character set (possible charset conversion).

Resolving the problem


Whenever exporting and importing the database, set NLS_LANG to american_america.utf8.
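For example, before running the Oracle export and import utilities; the user name, password, and file name are placeholders:

export NLS_LANG=american_america.utf8

# Run the export and import with the same NLS_LANG setting.
exp pimdb/password file=pimdb.dmp
imp pimdb/password file=pimdb.dmp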

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Database space allocation problems


Occasionally, import and export jobs fail because of insufficient space allocated for tables, indexes, rollback segments, and temporary segments.

Symptoms
If the rollback segment is full or the rollback segment table space is full, an error message is recorded in the alert log file that is similar to the following error message:
ORA-1650: unable to extend rollback segment RBS8 by 512 in
tablespace RBS.
Failure to extend rollback segment 9 because of 1650 condition
FULL status of rollback segment 9 set.

Resolving the problem


Make sure that you have enough free space in the table spaces. For larger jobs, more space might be needed in rollback and temporary segments. Check the alert log file
of the database every day to see whether there are any errors generated related to space issues in the database.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)



Slow performance if a running job is stopped
Whenever a job, such as an import or export job, is stopped, the database system must roll back the complete transaction to bring the database to a consistent state.

Symptoms
This rollback process consumes significant system resources such as CPU time and memory.

Resolving the problem


Wait until the rollback completes and the system returns to a normal state. Do not stop a running job unless necessary.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Redo log switch problems


I have never received this error before. Why am I getting this error and how can I fix it?

Symptoms
An inadequate number or size of redo log files can cause the database system to wait a long time for a log switch when all the redo log files are still active.

Resolving the problem


Increase the number of log files.
Increase the size of the redo log files.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

ORA-00054: resource busy and acquire with NOWAIT specified or timeout expired error
I was running the database migration script or modifying the database schema and received an error.

Symptoms
I received the following error: ORA-00054: resource busy and acquire with NOWAIT specified or timeout expired. Why am I getting this error and how can I prevent it?

Causes
Data definition language (DDL) is used to define database schemas and data manipulation language (DML) is used to modify tables in your database. To preserve data integrity, a database locks a table or a row within that table before updating or reading it (in various modes such as read, write, exclusive, and so on). Because a DDL statement affects all rows within a table, it needs an exclusive lock, and an existing lock on any of the rows within that table causes it to fail.
In Oracle, when a DDL statement encounters such a lock, you get the following error: ORA-00054: resource busy and acquire with NOWAIT specified or timeout expired

Resolving the problem


DDLs or database migrations are only meant to be run during downtime. All the product services should be turned completely off by using $TOP/bin/go/stop_local.sh, and the database should not be in use.
Other alternatives are:

1. Find and stop the session that is preventing the exclusive lock (see the sketch after this list).
2. In Oracle 11g, you can set ddl_lock_timeout to allow DDL to wait for the object to become available; simply specify how long you would like it to wait:
SQL> alter session set ddl_lock_timeout = 600;

Session altered.
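For alternative 1, the blocking session can be located with a query like the following; this is a sketch to be run by a DBA, and killing sessions should be coordinated with your DBA:

sqlplus / as sysdba <<'EOF'
-- Each returned row is a waiting session; blocking_session is the SID of the
-- session that holds the lock.
SELECT sid, serial#, blocking_session, event, seconds_in_wait
FROM v$session
WHERE blocking_session IS NOT NULL;
-- To stop the offending session, if appropriate:
-- ALTER SYSTEM KILL SESSION '<sid>,<serial#>';
EOF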

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)



Product Master hangs and the client UI is frozen
If errors occur when accessing your Product Master, it is possible that the connection to the database might have been lost.

Symptoms
Your Product Master UI freezes or is in a constant wait state.
Errors occur when attempting to access your Product Master.

Resolving the problem


Check the status of the listener process in Oracle:
1. Log on to the Oracle database server by using your Oracle ID.
2. Run the following command to check whether the listener is running: lsnrctl status
If the command completes successfully and you get the "Listening Endpoints Summary..." message, then the listener is running.
If the command ends in an error, there might be problems with your database server and you should contact your database administrator.
Check the status of your Db2® database:
Check the database connectivity with the test_db.sh script located in the $TOP/bin directory.
Check the JDBC connectivity with the test_db.sh script located in the $TOP/bin directory. The Admin user can check JDBC connectivity only after running create_schema.sh.
Check whether all the services of Product Master have been started. You can run the rmi_status.sh script to check the service status (see the example after this list).
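For example, the checks in this list can be run as follows; the rmi_status.sh path is an assumption based on the default scripts directory, so adjust it to your installation:

# Oracle: check that the listener is running.
lsnrctl status

# Check client database and JDBC connectivity.
$TOP/bin/test_db.sh

# Check that all Product Master services are up.
$TOP/bin/go/rmi_status.sh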

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Product Master hangs when running analyze schema


The analyze schema command is run only for Db2®; there is no need to run it for Oracle.

Symptoms
When you load large amounts of data into the database, or delete or purge tables in the database, you must analyze the schema. Before running the analyze schema command, you must stop Product Master. If Product Master is not stopped, the analyze schema job might hang because the tables are being used by the product.

Resolving the problem


If analyze schema hangs, stop the analyze job, stop Product Master, run analyze schema again, and then start Product Master. Analyze the schema at regular intervals to collect the latest statistics about the data distribution in the database.
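A typical sequence looks as follows; this is a sketch, and the analyze_schema.sh and start_local.sh paths are assumptions based on the default installation layout:

# Stop all Product Master services first.
$TOP/bin/go/stop_local.sh

# Run analyze schema to refresh the statistics.
$TOP/src/db/schema/util/analyze_schema.sh

# Restart the services.
$TOP/bin/go/start_local.sh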

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

com.ibm.db2.jcc.b.SqlException: Failure in loading T2 native library db2jcct2 error


You receive an error when using the DB2® 9.1 64-bit client in AIX®.

Symptoms
You receive the com.ibm.db2.jcc.b.SqlException: Failure in loading T2 native library db2jcct2 error when running the test_db.sh shell script.

Resolving the problem


Make sure that you are using a type 4 connection in $TOP/bin/conf/env_settings.ini.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Improving query performance



I have never received this error before. Why am I getting this error and how can I fix it?

Symptoms
You need to reorganize your existing indexes to improve query performance.

Resolving the problem


You can use the analyze_schema.sh maintenance script to re-create all indexes. For more information, see:

Oracle - Generating statistics and reorganizing Oracle databases
Db2 - Generating statistics and reorganizing DB2® databases

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Drop temporary aggregate tables and indexes


Dropping temporary aggregate tables at regular intervals can save a lot of disk space.

Symptoms
You need to restore storage space and speed up database utilities. Usage:
$TOP/src/db/schema/util/drop_temp_agg_tables.sh

Resolving the problem


You can use the drop_temp_agg_tables.pl maintenance script to drop temporary aggregate tables and indexes. For more information, see Dropping temporary aggregate
tables and indexes.
For more information, see the following links for a list of database admin scripts and commands:

Oracle - Scripts and commands for Oracle


DB2®- Scripts and commands for DB2

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Complex SQL statements are running very slow


If the Db2 database configuration parameter STMTHEAP SZ is set too high, it can have a severe impact on total query runtime, especially for complex SQL statements, in both Product Master and Global Data Synchronization.

Symptoms
The Db2 database configuration value found in the database configuration file for the parameter STMTHEAP SZ (statement heap size) was set too high, which causes the Db2 optimizer to spend too much time on query preparation, resulting in a high preparation time, especially if the SQL statement is complex, such as one that uses many subselects or joins.

Resolving the problem


The following steps are based on the assumption that the slow return of the SQL query result set in the UI is caused by a slow SQL statement. Furthermore, it is assumed that the database statistics are all up to date.
Once you have identified the slow SQL statement, check the query execution and query preparation time values.

Only if the preparation time shows a considerably larger value compared to the execution time is there a strong indication that the Db2 query optimizer is spending too much time preparing the SQL statement, and a good chance that this time can be reduced considerably by decreasing the value of the database configuration parameter for the statement heap size (STMTHEAP SZ). To determine which time value, "prepare" or "run", of the SQL query is slow, put the SQL statement into a file (for example, slow.sql) and run the db2batch command as follows:

db2batch -d <dbname> -a <userid>/<userpasswd> -f slow.sql -i complete -o f -1 p 2 o 5

To resolve the problem, it is useful to iteratively change the STMTHEAP SZ parameter and test the query performance after each iteration with the db2batch command as shown previously. So, whenever slow and complex SQL statements are found, and database statistics, network performance, and so on have been verified, it is worth using this approach to tune complex SQL statements.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)



Changed database tables after running migrateToInstalledFP.sh script
I use Oracle database. After running the migrateToInstalledFP.sh migration scripts, there are tables that were reported as changed by the database verification report.
How do I fix them?

Symptoms
After running the migrateToInstalledFP.sh migration scripts, the following tables were reported as changed by the database verification report:
- TCTG_ITA_ITEM_ATTRIBUTES
- TSEC_COT_COMPANY_ATTRIB
- TSEC_SCU_USER

Causes
This is not a result of migration to any of the currently supported versions of the product. Instead, it is caused by preexisting issues in the database schema.

Resolving the problem


Following are the steps to fix these tables.

TCTG_ITA_ITEM_ATTRIBUTES
Check if the $TOP/logs/default/ipm.log has the following entry:

[main] INFO com.ibm.ccd.common.util.SystemDB - Information: COLUMN LENGTH ITA_VALUE_STRING 3000 4000

Here 3000 refers to the existing length of the ITA_VALUE_STRING column while 4000 is the length required by the product. Run the following command to change
this length.

perl $PERL5LIB/runSQL.pl --sql_command="alter table TCTG_ITA_ITEM_ATTRIBUTES modify (ita_value_string VARCHAR2(4000));"

TSEC_COT_COMPANY_ATTRIB
Check if the $TOP/logs/default/ipm.log has the following entry.

[main] INFO com.ibm.ccd.common.util.SystemDB - Information: COLUMN LENGTH COT_COMPANY_ID 10,0 9,0

This should be ignored, no modification is necessary. Existing precision "number (10,0)" is higher than the required precision "number(9,0)".
TSEC_SCU_USER
Check if the $TOP/logs/default/ipm.log has the following entry.

[main] INFO com.ibm.ccd.common.util.SystemDB - Information: COLUMN LENGTH SCU_USER_EMAIL 40 100

Here 40 refers to the existing length of the SCU_USER_EMAIL column while 100 is the length required by the product.
Run the following command to change this length.

perl $PERL5LIB/runSQL.pl --sql_command="alter table TSEC_SCU_USER modify (SCU_USER_EMAIL VARCHAR2(100));"

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

DBV_0_UK DBV_VERSION index error


After running the migrateToInstalledFP.sh migration scripts, the console output reported that the index is missing. How do I fix it?

Symptoms
After running the migrateToInstalledFP.sh migration scripts, the console output reported the following index under "Missing Indexes" section:
DBV_0_UK DBV_VERSION

Causes
This is a defect in the migration script and will be fixed with a future fix pack.

Resolving the problem


To fix the issue manually, run the following commands.

perl $PERL5LIB/runSQL.pl --sql_file=$TOP/src/db/schema/gen/db2/ddl_ver_synchronize.sql
. $TOP/bin/compat.sh
$JAVA_RT com.ibm.ccd.synchronize.DBSchemaVersion --autoupd

Verify the fix by running the $TOP/bin/db/verify_tables_indexes.sh script and proceed with the next migration steps as documented in the product documentation.



Note: The previous fix remains valid even if you did not notice the missing index originally and only discovered it later by running the database verification report manually through the $TOP/bin/db/verify_tables_indexes.sh script.
The verify_tables_indexes.sh script outputs the list of out-of-the-box indexes that are missing from the database. If there is a performance issue, run this script to determine whether the slowness is due to missing indexes.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

java.sql.SQLException: ORA-00600: internal error code, arguments error


Why am I getting this error while running the migrateDataToXml.sh script and how can I fix it?

Symptoms
When I run the migrateDataToXml.sh script, I get the following error:
java.sql.SQLException: ORA-00600: internal error code, arguments:

[KGHALP1], [0x000000000], [], [], [], [], [], [], [], [], [], []

Causes
Oracle introduced certain performance improvements in the XML parser of the 11g Release 2 version of the database. These improvements also contain a defect, which causes the preceding exception.

Resolving the problem


Work with your DBA to either install the Oracle patch fix or disable the feature that contains this defect. You should either contact Oracle to get details about the patch, or you can disable the feature with the following command:
alter system set event='31156 trace name context forever, level 0x400' scope=spfile;
IBM recommends installing the patch.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Troubleshooting performance issues


Use the following topics to resolve common issues with Product Master performance.

Understanding and defining the performance problem


There are many components involved that can affect performance. As an initial step, it is recommended to identify the component within the hardware and software stack that is causing the performance problem before going into details in the second phase.
Basic health checks to ensure optimal performance
Whenever you encounter performance issues, first perform basic health checks. These help to ensure that basic configuration and maintenance requirements are met and help to identify whether there are resource bottlenecks. A set of basic configuration and maintenance guidelines is provided with Product Master. You can follow these guidelines to help avoid general performance problems. This document summarizes how to quickly check recommended configuration and maintenance status by using the pimSupport.sh script.
Locating the performance problem in the stack
There are many components involved in performing a specific task, and each can influence performance severely. This needs to be kept in mind while trying to locate a performance problem. What needs to be checked also depends on the type of use case. If scheduled job activity (reports, imports, or exports) is slow, first focus on Product Master side checks, with special attention to the scheduler service log files and activity. If working with the user interface is slow, include client-side checks in your investigations as well.
Individual troubleshooting techniques
Use the following topics to resolve specific performance issues with Product Master.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Understanding and defining the performance problem


There are many components involved that can affect performance. As an initial step, it is recommended to identify the component within the hardware and software stack that is causing the performance problem before going into details in the second phase.

The following questions should help you to narrow down a performance problem to a specific component or will help others to start investigations in case assistance is
required, for example from support or services.



What exactly is slow:
only a specific use case (for example, opening an item from a specific catalog in the user interface; what container scripts, specs, and views are used; do you see specific errors in the appsvr service log files; and so on), or
general slowness, which affects all user interface interactions and scheduled jobs (this can point to some basic configuration or maintenance deficiencies or resource bottlenecks).
When did the problem manifest:
did performance degrade over time (old version data or general database maintenance, for example reorganizing or updating statistics, might help), or
was there an abrupt change in performance after, for example, some new application customization (what has been changed: spec definition, container script, and so on), or some large data processing, for example an import or report task (update statistics, reorganize database tables).
Who is affected:
all users working with the user interface, or
only specific users (do they do specific tasks, do they have specific permissions, are they located in a different location or network, what workstation and Internet Explorer version are they using).

Answers to these questions can already provide important hints about where to look in more detail first.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Basic health checks to ensure optimal performance


Whenever you encounter performance issues, first perform basic health checks. These help to ensure that basic configuration and maintenance requirements are met and help to identify whether there are resource bottlenecks. A set of basic configuration and maintenance guidelines is provided with Product Master. You can follow these guidelines to help avoid general performance problems. This document summarizes how to quickly check recommended configuration and maintenance status by using the pimSupport.sh script.

Symptoms
Product Master delivers slow performance.

Causes
Incorrect configuration and the presence of many old version rows might cause slow performance.

Diagnosing the problem


Whenever you face a performance issue, your first task should be to answer the following questions:

Is the database configured correctly?
Does the current table and index layout match the default schema?
Are the table and index statistics up to date?
Are old versions being deleted regularly?
Are caches configured with sufficient space, and do they show a cache hit ratio that is greater than 97%?
Are there any CPU or memory-related bottlenecks on the application or database server side?

If the answer to any of the questions is no, then you must take action to correct the situation.

How to verify configuration and maintenance healthiness


Use the pimSupport.sh script to perform a basic health check. An example of using this script and sample output is given below:

$TOP/bin/pimSupport.sh -b
=======================================
Software and System Stack: starting....Done
Setup and Configuration: starting......Done
Implementation & Deployment: starting..Done
Runtime Statistics: starting...........Done
=======================================

Collecting information on DB2


COMMON_INFO_COLLECT_INFO_SH WebSphere Application Server
All results can be found in $TOP/tmp/pimSupport_MM_DD_YYYY__HH_MM.tar.gz file

It is best to run the script while users are working on the system and in situations when you normally observe performance issues. The script collects more information than you might require, but in the context of this basic health check you can find all needed information in the generated archive under
.../health_check/healthCheckResult.out.

Interpreting healthCheckResult.out contents for basic health check


The contents of the healthCheckResult.out file vary slightly depending on whether IBM® Db2® or Oracle is used as the back-end database server and on which release and version the script is run.

Database configuration

With DB2 as backend DB server:


====================================================
Generating database configuration checklist...
====================================================
db2 DB2 v10.1
====================================================



DB2_SKIPDELETED INSTALL GUIDE VALUE = NO/OFF FOR DB2 v10.1 : DATABASE SETTING OFF
DB2_EVALUNCOMMITTED INSTALL GUIDE VALUE = NO/OFF FOR DB2 v10.1 : DATABASE SETTING OFF
DB2_SKIPINSERTED INSTALL GUIDE VALUE = NO/OFF FOR DB2 v10.1 : DATABASE SETTING OFF
DB2_PARALLEL_IO INSTALL GUIDE VALUE = * : DATABASE SETTING *
mon_heap_sz INSTALL GUIDE VALUE = AUTOMATIC : DATABASE SETTING = AUTOMATIC 90
sheapthres INSTALL GUIDE VALUE = 0 : DATABASE SETTING = 0
dft_queryopt INSTALL GUIDE VALUE = 5 : DATABASE SETTING = 5
applheapsz INSTALL GUIDE VALUE = AUTOMATIC : DATABASE SETTING = AUTOMATIC CURRENT VALUE = 10240
....

With Oracle as backend DB server:


oracle 11.2.0.1.0
====================================================
db_block_size INSTALL_GUIDE_VALUE = 8192 DATABASE SETTING = 8192
query_rewrite_enabled INSTALL_GUIDE_VALUE = TRUE DATABASE SETTING = TRUE
processes INSTALL_GUIDE_VALUE >= 200 DATABASE SETTING = 800
open_cursors INSTALL_GUIDE_VALUE >= 600 DATABASE SETTING = 600
db_cache_size INSTALL_GUIDE_VALUE = 0 OR >= 1 GB DATABASE SETTING = 0
shared_pool_size INSTALL_GUIDE_VALUE = 0 OR >= 200 MB DATABASE SETTING = 0
optimizer_index_caching INSTALL_GUIDE_VALUE = 90 DATABASE SETTING = 90
optimizer_index_cost_adj INSTALL_GUIDE_VALUE = 50 DATABASE SETTING = 50
GATHER_STATS_PROG INSTALL_GUIDE_VALUE = TRUE DATABASE SETTING = TRUE
STATS ESTIMATE PERCENT INSTALL_GUIDE_VALUE = 100 DATABASE SETTING = 100

The first column shows the parameter, the second column shows the recommended settings and the third column shows the current settings. The current settings
should match the recommended settings. In the case of resource allocations, the current settings should have higher values than the recommended ones.
For details about recommended settings, see Installing and setting up the database.

Schema verification
Starting with the heading Indexes status, you find a report on changed and missing Tables and Indexes. If nothing is reported, then the status is correct. If there are
deviations that are listed in the report, then you must clarify them.
Note: It is a requirement that your environment conforms with the default database schema. In most installations, this ensures optimal performance.
If your database administrators suggest modifications to the current schema (for example, deleting, adding, or modifying indexes), it is recommended to recheck those changes with product support so that potential negative side effects can be avoided. This also ensures that schema changes that improve performance for all customers become part of the default schema in the future.

Indexes status
_______________________________________________________________
|Changed Tables |
|===============================================================|
| There are no changed tables |
|_______________________________________________________________|
_______________________________________________________________
|Missing Tables |
|===============================================================|
| There are no missing tables |
|_______________________________________________________________|
_______________________________________________________________
|Changed Indexes |
|===============================================================|
| |
| ICTG_CHI_0 |
| Current Column Structure: |
| CHI_CHILD_ID |
| CHI_NEXT_VERSION_ID |
| CHI_PARENT_ID |
| CHI_VERSION_ID |
| CHI_CAT_TREE_ID |
| CHI_COMPANY_ID |
| Required Column Structure: |
| CHI_CHILD_ID |
| CHI_NEXT_VERSION_ID |
| CHI_VERSION_ID |
| CHI_COMPANY_ID |
| CHI_CAT_TREE_ID |
| CHI_PARENT_ID |
| |
|_______________________________________________________________|
_______________________________________________________________
|Missing Indexes |
|===============================================================|
| ICTG_CHI_0 CHI_CHILD_IDCHI_NEXT_VERSION_ID....IDCHI_PARENT_ID|
|_______________________________________________________________|

Changed or missing indexes do not trigger any product malfunction, but they might cause slow-running SQL (Delayed Queries), leading to overall slow product performance. To correct schema deviations, you can check the *.sql files for the required schema layout in the following directory:
$TOP/src/db/schema/gen/[db2|oracle]
For example, if you want to see the DDL for index ICTG_CHI_0:

cd $TOP/src/db/schema/gen/db2
grep -i ICTG_CHI_0 *.sql ==> this shows that the file idx_ctg_catalog.sql contains the definition
open idx_ctg_catalog.sql and search for chi_0

Table and index statistics


For optimal performance, it is essential that database table and index statistics are up to date. This ensures that the database optimizer can choose the best (quickest) access plan to retrieve data for each SQL query.
In the healthCheckResult.out file, you find a section with the heading "Last maintenance", which shows the last statistics update time for each table and index.

With DB2 as backend DB server:


SCHEMA_NAME TABLE_NAME STATS_TIME NO_OF_ROWS PROFILE_USED
----------- ------------------------------ ---------------- -----------
SCHEMA TCTG_ITA_ITEM_ATTRIBUTES 2013-06-10 08:30 2002101 RUNSTATS ON TABLE
"SCHEMA"."TCTG_ITA_ITEM_ATTRIBUTES" ON ALL COLUMNS WITH DISTRIBUTION ON ALL COLUMNS AND SAMPLED DETAILED INDEXES ALL



SCHEMA TPFM_PSD_SCHEDULE_DETAIL 2013-06-23 21:00 1129281 RUNSTATS ON TABLE
"SCHEMA"."TPFM_PSD_SCHEDULE_DETAIL" ON ALL COLUMNS WITH DISTRIBUTION ON ALL COLUMNS AND SAMPLED DETAILED INDEXES ALL
...

SCHEMA_NAME INDEX_NAME STATS_TIME NO_OF_ROWS COLUMNS
------------------ ------------------------- ---------------- -----------
SCHEMA ICTG_ITA_1 2013-06-10 08:30 2025185 +ITA_NODE_ID+ITA_VALUE_NUMERIC+ITA_NEXT_VERSION_ID+ITA_VERSION_ID+ITA_CATALOG_ID+ITA_OCCURRENCE_ID+ITA_ITEM_ID
SCHEMA ICTG_ITA_3 2013-06-10 08:30 2024506 +ITA_NODE_ID+ITA_VALUE_STRING_IGNORECASE+ITA_NEXT_VERSION_ID+ITA_VERSION_ID+ITA_CATALOG_ID+ITA_OCCURRENCE_ID+ITA_ITEM_ID
SCHEMA ICTG_ITA_2 2013-06-10 08:30 2023320
...

With Oracle as backend DB server:

TABLE_NAME LAST_ANALYZED NUM_OF_ROWS SAMPLE_SIZE TABLESPACE_NAME
======================================================================================================
TCTG_NOA_NODE_ATTRIBUTES 18-JUN-13 896814 896814 USERS
TPFM_PPR_PROFILE 18-JUN-13 892969 892969 USERS
...

INDEX_NAME TABLE_NAME LAST_ANALYZED NUM_ROWS SAMPLE_SIZE DISTINCT_KEYS STATUS TABLE_SPACE_NAME
==========================================================================================================================
ICTG_NOA_3 TCTG_NOA_NODE_ATTRIBUTES 18-JUN-13 896814 896814 896814 VALID INDX
ICTG_NOA_4 TCTG_NOA_NODE_ATTRIBUTES 18-JUN-13 896814 896814 6 VALID INDX
...

Ensure that statistics are up to date and that a detailed sampling ratio is used.
For Db2 based installations, you should see:
RUNSTATS ON TABLE .... ON ALL COLUMNS WITH DISTRIBUTION ON ALL COLUMNS AND SAMPLED DETAILED INDEXES ALL
For Oracle based installations, a 100% sampling ratio should be configured, which is confirmed when the num_rows column shows the same number as sample_size per table/index. For more information, see:

Db2: Generating statistics and reorganizing DB2® databases
Oracle: Generating statistics and reorganizing Oracle databases

Old version counts


Overall performance might decrease significantly as more and more old version rows accumulate. Therefore, taking care of old versions is an important
administrative task. For more information, see
Old object versions.
You can quickly check whether too many old version rows have accumulated over time by observing the “current/old row count ratios on tables" section in
healthCheckResult.out.

current/old row count ratios on tables


====================================================

table name current version row count old version row count ratio current/old row count
---------- ------------------------- --------------------- ---------------------------
noa 22117 49925 .44
nod 3673 7469 .49
noh 3342 6959 .48
icm 181523 164387 1.10
ita 1013552 988832 1.02
itm 95660 99404 .96
itd 95660 99403 .96

For selected tables, the current and old version row counts are provided along with the ratio of current to old version row count. If the current version row count for table itm or itd is more than 100,000 and the ratio is less than 0.3, then it is highly recommended to delete old versions.
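As a cross-check, you can count current and old version rows for the item table directly with SQL. The following Db2 sketch assumes the version-boundary convention that is visible in the queries later in this document, where current rows carry a next-version ID at the 999999999 boundary; adapt the schema name and boundary to your installation:

db2 connect to <dbname>
# current item rows (assumption: itm_next_version_id >= 999999999 marks the current version)
db2 "SELECT COUNT(*) FROM <schema>.TCTG_ITM_ITEM WHERE itm_next_version_id >= 999999999"
# old version rows: everything below the boundary
db2 "SELECT COUNT(*) FROM <schema>.TCTG_ITM_ITEM WHERE itm_next_version_id < 999999999"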

Cache hit utilization


Product Master has a built-in cache mechanism for some Product Master objects. Ensuring high cache hit ratios can be the key to optimal performance.
For more information, see Cache management.
The healthCheckResult.out file provides a quick overview of the current cache configuration and utilization for each service in the "PIM CACHE SNAPSHOT" section:

PIM CACHE SNAPSHOT:


Title Current Max Cache Cache Hit
C. Size C. Size Hits Requests Percentage
...
--------------appsvr_LORIOT----------------
Spec Cache 400 126 15953 16089 99%
Spec Key To Current Start Version Cache 400 257 15999 16287 98%
Spec Key Version to Start Version Cache 400 13 0 6 0%
Lookup Table Cache 500 24 1610 1636 98%
Role Cache 2000 13 111872 111885 100%
Access Cache 5000 0 0 0 0%
Spec Name Cache 2000 7 29 37 78%
Attribute Collections Cache 500 92 3335 3563 94%
WSDL Cache 0 0 0 0 0%
Workflow Definition Cache 250 3 63 67 94%
Script Cache 1000 17 139 170 82%
Catalog Cache 100 2 245 247 99%
Catalog Definition Cache 2000 49 500 554 90%
Node Id Cache 2000 127 800 927 86%



View Cache 5000 35 865 954 91%
...

The Current Cache Size column displays the current configuration settings, whereas the Max Cache Size column shows how many objects are cached at that time.
The maxElementsInMemory setting for each cache, as defined in the mdm-cache-config.properties file, should be increased iteratively (as long as there is sufficient memory) whenever you find that the Max Cache Size reaches the defined upper limit and the Hit Percentage on a well-used cache is lower than 95%. Whether a cache is heavily used can be judged by the Cache Hits and Cache Requests values.

If you need to monitor cache hit utilization, then you can trigger a cache snapshot by running:

$JAVA_RT com.ibm.ccd.common.wpcsupport.util.SupportUtil --cmd=getRunTimeCacheDetails
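For example, to follow cache utilization over a longer period, the snapshot command can be wrapped in a simple loop (the interval and the output file are arbitrary choices made for this sketch):

# append a cache snapshot to a log file every 10 minutes
while true; do
  date >> /tmp/cache_snapshots.log
  $JAVA_RT com.ibm.ccd.common.wpcsupport.util.SupportUtil --cmd=getRunTimeCacheDetails >> /tmp/cache_snapshots.log
  sleep 600
done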

Checking Resource utilization


Performance issues during peak usage times might be triggered by running out of resources on either the Product Master or the database server side. This can be:

a server-side CPU bottleneck
a server-side memory bottleneck

To check for server-side bottlenecks, you can use various utilities like top, topas, sar, vmstat, or nmon. Which one to use depends on your preferences. For monitoring purposes, nmon might be preferable, but it requires installation of the respective package.
If the CPU usage is over 90% for long periods of time or the server starts to swap memory, then you must investigate the reasons for high resource usage and eventually make more resources available.
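For example, vmstat can collect a sample every 5 seconds over a 10-minute window on both the application and the database server; sustained low idle CPU (id column) or nonzero swap activity (si/so columns) points to a bottleneck:

# 120 samples at 5-second intervals (10 minutes)
vmstat 5 120 > /tmp/vmstat_$(hostname)_$(date +%Y%m%d_%H%M).out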

Performance issues might also surface if some Product Master service JVMs are using memory close to the upper limit, thus causing long garbage collection times. The healthCheckResult.out file provides you with the services' JVM memory configuration details and a one-time snapshot of memory usage for each service.

== 2. SETUP and CONFIGURATION


-----------------------------------------------------------------------------------------
PARAMETER VALUE
SCHEDULER_MEMORY_FLAG -Xmx1024m -Xms48m
QUEUEMANAGER_MEMORY_FLAG -Xmx64m -Xms48m
APPSVR_MEMORY_FLAG -Xmx1024m -Xms256m
ADMIN_MEMORY_FLAG -Xmx64m -Xms48m
WORKFLOWENGINE_MEMORY_FLAG -Xmx1024m -Xms48m
EVENTPROCESSOR_MEMORY_FLAG -Xmx64m -Xms48m

JVM LEVEL SERVICE MEMORY SNAPSHOT:


Service Short Name Service Type Total Mem(MB) Total Used Mem(MB)
workflowengine_LORIOT workflowengine 73815 34535
scheduler_LORIOT scheduler 71584 30339
appsvr_LORIOT appsvr 268435 88690
queuemanager_LORIOT queuemanager 64601 26746
admin_LORIOT admin 50331 6327
eventprocessor_LORIOT eventprocessor 66647 28488

Under SETUP and CONFIGURATION, you find the configured maximum memory size for each JVM (-Xmx), and in the JVM LEVEL SERVICE MEMORY SNAPSHOT section you see, for each service's JVM, how much memory is currently allocated and how much memory is in use. If the Total Mem and Total Used Mem values are close to the defined -Xmx setting, then it might be an indicator that the respective service requires more memory and the -Xmx setting needs to be increased. The JVM Level Service Memory Snapshot can also be triggered on its own by running:

$JAVA_RT com.ibm.ccd.common.wpcsupport.util.SupportUtil --cmd=getRunTimeMemDetails

Note: This command should not be run at short intervals as it triggers garbage collection on each JVM.
If JVM memory usage is an issue, then more detailed monitoring needs to be set up, for example by enabling -verbose:gc for the respective service.
General hints about Java™ heap sizing can be found at:
How to do heap sizing
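As a sketch of how such monitoring could be enabled: -verbose:gc is the standard JVM option, and on IBM JDKs the output can additionally be redirected to a file with -Xverbosegclog. Where exactly the *_MEMORY_FLAG parameters shown above are maintained depends on your installation, so verify the configuration file location before editing:

# sketch: enable verbose GC logging for the appsvr service JVM
# (assumption: the memory flags are maintained in a central configuration file;
#  check your installation before editing)
APPSVR_MEMORY_FLAG=-Xmx1024m -Xms256m -verbose:gc -Xverbosegclog:appsvr_gc.log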

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Locating the performance problem in the stack


There are many components involved in performing a specific task, each of which can influence performance severely. Keep this in mind while trying to locate a performance problem. What needs to be checked also depends on the type of use case. If some scheduled job activity (reports, imports, or exports) is slow, you mainly have to focus first on Product Master side checks, with special focus on the scheduler service's log files and activity. But if working with the user interface is slow, then you have to include client-side checks in your investigations as well.

To understand the Product Master architecture, see Architecture of Product Master and Product overview.
A three-tier architecture is used that consists of the following:

a web-based user interface for rendering Product Master content in a web browser on the client side
a middle tier with the functional modules and services that process user requests and data on the central Product Master server
Note: The middle tier consists of six services, which are implemented as separate JVM processes. Each service is responsible for specific tasks and logs debugging or error messages in its own directory under $TOP/logs. The major services are:
the application service (process appsvr_<hostname>), which handles user interaction with the system
the scheduler service (process scheduler_<hostname>), which processes all of the scheduled jobs like imports, exports, and reports
the workflow service (process workflowengine_<hostname>), which handles processing of items through the workflow
a database management system, for example Db2 or Oracle, that stores the data

All three tiers, if located on different servers, use the network to transfer data and requests.



Client-side checks
Measure the time that it takes to display the item in the user interface.
Product Master side checks
Any task, whether it is a request to display an item or running an import job, triggers some type of workload.
DB server-side checks
If Product Master side checks reveal that most of the time is spent in long SQL runtimes, first verify that the basic health checks are met.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Client-side checks
Measure the time that it takes to display the item in the user interface.

It is recommended to also monitor at the same time:

CPU usage of the Internet Explorer process (iexplore.exe), by observing it in the Processes tab of the Windows Task Manager.
If available, use some type of HTTP traffic monitoring tool to trace when data packets are received on the client side from the server.

If you find that data is received quickly, but it takes a long time to display the results in the user interface, then you most likely face a client-side issue. For more information, see: Browser issues.
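If no dedicated HTTP monitoring tool is available, curl can give a first approximation of the server-side response time for a page, which you can then compare with the time the browser needs to render it (the URL is a placeholder; use any page of your installation):

# measure connect, first-byte, and total transfer time for a Product Master page
curl -k -s -o /dev/null \
  -w "connect: %{time_connect}s  first byte: %{time_starttransfer}s  total: %{time_total}s\n" \
  "http://<appserver-host>:<port>/<page>"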

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Product Master side checks


Any task, whether it is a request to display an item or running an import job, triggers some type of workload, such as:

item details (attribute, spec, category, security, and similar details) need to be retrieved from the local cache, or
when details are not available, they need to be retrieved from the database server by using SQL, and
retrieved information needs to be processed; this includes running container scripts, attribute value rules, or other custom scripts, which can be quite resource intensive, and
data packets need to be sent to the client.

There are different factors to consider that can influence the performance on the Product Master side:

Are cache settings sufficiently high so that SQL statements to retrieve information from the database can be avoided? See Tools for troubleshooting.
What is the SQL footprint when facing performance issues? Use the Sumall utility (see Sumall utility to collect SQL workload and performance statistics) to analyze the db.log files collected in debug mode, to see whether there are many long-running SQLs accounting for most of the runtime, or whether it is rather the accumulated runtime of fast-performing SQLs that impacts the overall performance. Too many SQLs of the same type might be an indicator of customization issues (complex custom scripts). The SQL footprint might be problematic if, for a defined use case, the total SQL runtime for the respective Product Master service accounts for a high portion of the monitored time period. For a quick first look, see the sketch after this list.
Are there any CPU or memory bottlenecks on the server that throttle overall performance? You would need to use tools like nmon, vmstat, top, or topas to track the physical resource usage.
Is the respective Product Master service JVM started with the right options to allow optimal performance (for example, memory settings and garbage collection policy)? See System tuning for more information.
What is the complexity of the involved custom scripts (for example, container scripts, attribute value rules, import script logic, and so on)? This can result in both retrieval of large amounts of data from the database and high CPU usage of the respective Product Master service JVM. The following two methods can be used to diagnose such problems:
monitor the SQL workload by using the Sumall utility
profile the respective service's JVM to see which functions contribute most to the overall workload
See Monitoring Java virtual machine (JVM) for more information.
See the "JVM monitoring" section in Troubleshooting IBM Product Master.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

DB server-side checks
If Product Master side checks reveal that most of the time is spent in long SQL runtimes, first verify that the basic health checks are met.

See Basic health checks to ensure optimal performance for more information. In addition, check the following:

database configuration matches general guidelines
all required indexes exist
table and index statistics are up to date
old versions are cleaned from the system regularly
DB server physical resource utilization: CPU, memory
buffer pool settings and buffer pool hit ratio
need for table and index reorganization
Note: For more information, see Generating statistics and reorganizing DB2® databases and Check for reorganization need.

If none of the above checks reveals an issue, select some of the identified slow-running SQLs and measure their runtime while triggering them from within a native database client (for example, the Db2 command line processor or SQL*Plus):

directly on the database server side
on the Product Master side

Repeat the tests a couple of times.

If the measured runtimes differ largely between both locations, then a slow network between both servers is the likely culprit and needs to be checked.
If the runtimes measured on both servers by using the native database clients are much faster than what is logged by the Product Master services in the db.log files, then this would be an indication that the respective Product Master service's JVM is resource-constrained, meaning the JVM threads cannot pick up the returned results fast enough. In this case, JVM profiling should help to reveal the problems.

If measured runtimes are slow on the first execution, but much faster on subsequent executions, it might be an indication that the database buffer pools are too small to hold most of the relevant data in memory, or that data-intensive queries run often and evict data from the buffer pool, so the data needs to be read from disk continuously, which is expensive.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Individual troubleshooting techniques


Use the following topics to resolve specific performance issues with Product Master.

General performance issues


Many performance issues can be spotted and resolved by performing scripting code reviews and verification testing.
Check update statistics time in DB2
Table and index statistics should be up to date to enable the database optimizer to choose optimal query plans. When in doubt, SQLs can be used to check the last
update statistics collection time.
Check update statistics time in Oracle
Table and index statistics should be up to date to enable the database optimizer to choose optimal query plans. When in doubt, SQLs can be used to check the last
update statistics collection time.
Check existence of required tables and indexes
It is a requirement that your environment conforms with the default database schema that is deployed when the schema for Product Master is created. This ensures optimal performance for the majority of installations.
Check for invalid indices
Sometimes indices might become invalid. This should be rather unusual, but when using Oracle, invalid indices can be checked via SQL.
Manual update of statistics
In case you have not enabled automatic statistics collection yet, but need to update table and index statistics right away, the easiest approach is to run $TOP/bin/db/analyze_schema.sh. This updates the statistics for all tables and indexes of the current Product Master schema.
Check SQL runtimes directly on database server
Sometimes slow queries are identified based on the delayed queries flag in the db.log file; however, the slowness might result from JVM issues on the Product Master server. To exclude this, such slow queries can be run directly on the database server to exclude network or JVM issues. Of course, you should use only SELECT statements for such runtime measurement tests.
Check query plan
Once slow-performing queries are identified and initial checks do not reveal major problems, it might make sense to investigate the query plan that the database generates for the slow-running query. Interpreting those plans, and especially drawing the right conclusions from them, is not always easy and might require assistance from database experts.
Check table layout
You can compare the current table layout with the required layout according to the DDLs in the $TOP/src/db/schema/gen/* directory.
Checking the size of the Document Store
You should use the docstore maintenance report to check the docstore size and delete documents that are no longer needed. Alternatively, you can use the following SQL statements to quickly check the number of documents in the main docstore directories.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

General performance issues


Many performance issues can be spotted and resolved by performing scripting code reviews and verification testing.

Symptoms
For most of the user interface actions, there are two portions of time taken to respond to the action:

1. the Product Master internal processing, mostly dictated by modeling
2. custom logic hooked into various points

Code reviews can uncover performance issues caused by custom logic.



Diagnosing the problem
In the scenario of loading an item for user interface display, an entry build script may be invoked to do certain processing. By turning the entry build script on and off, you can find out how much difference it makes and whether it is worth looking into the entry build script. In the scenario of saving an item, there can be pre-processing, post-processing, and post-save scripting logic involved. Try to isolate each piece and determine its impact.

Resolving the problem


A common finding on the scripting level is that database access is invoked unnecessarily. Identify whether there is any processing in a loop. Identify whether there is an opportunity to cache data objects in the script to avoid repeated fetches. If some specific scenario is still performing poorly after a code review, profile the scenario to see how the processing time is distributed. Java profiling usually gives accurate information for tuning. Sometimes a performance issue is caused by memory usage and limitations, and the JVM may start to spend too much time on garbage collection. Memory profiling helps to understand the potential cause and to identify the responsible Java objects.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Check update statistics time in DB2®


Table and index statistics should be up to date to enable the database optimizer to choose optimal query plans. When in doubt, SQLs can be used to check the last update
statistics collection time.

Symptoms
The column "stats_time" shows when statistics were updated the last time. If this timestamp is not current (within the last week), then the statistics need to be updated. The following additional information is also retrieved:

in which table space the table is created (tbspace)
in which table space indices will be created (idx_tbspace)
cardinality (card)

and respectively for indexes:

the columns that constitute the index (colnames)
cardinality (indcard)

For table statistics, you can use the following SQL:

SELECT substr(tabschema, 1, 10) AS schema_name
      ,substr(tabname, 1, 30) AS table_name
      ,substr(to_char(stats_time, 'YYYY-MM-DD HH24:Mi'), 1, 16) AS stats_time
      ,cast(card AS INT) AS no_of_rows
      ,statistics_profile AS profile_used
FROM syscat.tables
WHERE tabschema = <CURRENT_SCHEMA>
AND type = 'T'
ORDER BY no_of_rows DESC;

For indexes, you can use the following SQL:

SELECT substr(indschema, 1, 18) AS schema_name
      ,substr(indname, 1, 25) AS index_name
      ,substr(to_char(stats_time, 'YYYY-MM-DD HH24:Mi'), 1, 16) AS stats_time
      ,cast(numrids AS INT) AS no_of_rows
      ,rtrim(substr(colnames, 1, 140)) AS columns
FROM syscat.indexes
WHERE tabschema = <CURRENT_SCHEMA>
ORDER BY no_of_rows DESC;
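To run either query, you can save it to a file (for example, check_stats.sql, a name chosen here for illustration) and execute it with the Db2 command line processor; replace <CURRENT_SCHEMA> with your schema name in single quotation marks:

db2 connect to <dbname>
db2 -tf check_stats.sql    # -t: statements end with ';', -f: read statements from file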

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)



Check update statistics time in Oracle
Table and index statistics should be up to date to enable the database optimizer to choose optimal query plans. When in doubt, SQLs can be used to check the last update
statistics collection time.

Symptoms
For tables, the following information is retrieved:

table_name: name of the table
num_rows: number of rows in the table
sample_size: the detail level of the statistics collected
last_analyzed: last update statistics timestamp

and in addition for indexes:

index_name: name of the index
table_name: table on which the index is defined
distinct_keys: number of distinct keys
status: index status
Note: The value should show "VALID"; otherwise, it is an indicator that the index needs to be rebuilt.

To check table statistics, use:

select owner,
       table_name,
       num_rows,
       sample_size,
       last_analyzed,
       tablespace_name
from dba_tables where owner='<SCHEMANAME>'
order by owner;

To check index statistics, use:

select index_name,
       table_name,
       num_rows,
       sample_size,
       distinct_keys,
       last_analyzed,
       status
from user_indexes where table_owner='<SCHEMANAME>'
order by table_owner;

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Check existence of required tables and indexes


It is a requirement that your environment conforms with the default database schema that is deployed when the schema for Product Master is created. This ensures optimal performance for the majority of installations.

Symptoms
If you are having problems with missing or corrupted indexes, you should check the existence of required tables and indexes.

Environment
$JAVA_RT com.ibm.ccd.common.util.SystemDB

_________________________________________________________________________
|Extra Indexes
|=========================================================================|
| There are no extra indexes
|_________________________________________________________________________|
_________________________________________________________________________
|Changed Indexes
|=========================================================================|
| There are no changed indexes
|_________________________________________________________________________|
_________________________________________________________________________
|Missing Indexes
|=========================================================================|
| CTG_1_UK
| DOA_0_UK
| ISEC_OAC_0
| ISEC_OBT_0

Diagnosing the problem


If your database administrators suggest modifications to the current schema (for example, deleting, adding, or modifying indexes), it is recommended to recheck those changes with product support so that potential negative side effects can be avoided. This also ensures that schema changes that improve performance for all customers become part of the default schema in the future. To check, issue the following command: $JAVA_RT com.ibm.ccd.common.util.SystemDB

Resolving the problem


If you need to check existing index definitions, one option is to query the database system tables. This is only necessary if you find some index definition deviations with the SystemDB utility and need to compare them with the required original definitions. The schema definition can be found in the $TOP/src/db/schema/gen/* directory.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Check for invalid indices


Sometimes indices might become invalid. This should be rather unusual, but when using Oracle, invalid indices can be checked via SQL.

Symptoms
DB2: There is no method to check for invalid indices by using SQL.
Oracle:
select count(1) from user_indexes where status <> 'VALID';
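If invalid indexes are found, a common follow-up on Oracle (a general database administration step, not a Product Master-specific procedure) is to generate the corresponding rebuild statements:

sqlplus -s <user>/<passwd> <<'EOF'
-- emit an ALTER INDEX ... REBUILD statement for every invalid index
set pagesize 0 linesize 200
select 'ALTER INDEX ' || index_name || ' REBUILD;' from user_indexes where status <> 'VALID';
quit;
EOF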

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Manual update of statistics


In case you have not enabled automatic statistics collection yet, but need to update table and index statistics right away, the easiest approach is to run $TOP/bin/db/analyze_schema.sh. This updates the statistics for all tables and indexes of the current Product Master schema.

Symptoms
If statistics are not up to date, performance could be compromised.

Environment
If you need to update statistics only for a limited number of tables, you might use:

On DB2 (see also the sketch after this list):
RUNSTATS ON TABLE <schema>.<tablename> WITH DISTRIBUTION ON KEY COLUMNS AND INDEXES ALL
On Oracle:
Run within SQL*Plus:
exec DBMS_STATS.GATHER_TABLE_STATS('<SCHEMANAME IN UPPERCASE>', '<TABLENAME IN UPPERCASE>', estimate_percent => 100, method_opt => 'FOR ALL COLUMNS SIZE AUTO', DEGREE => 2, cascade => TRUE);
Note: Ensure that you abide by the following guidelines:

Specify the schema and table name in uppercase letters.
Set the option cascade to TRUE. This causes an update of the index statistics as well.
For the estimate_percent parameter, use 100 for full sampling.
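For example, on Db2 you can refresh the statistics for several heavily used tables in one go (the table names are examples that appear elsewhere in this document; substitute your schema and tables):

db2 connect to <dbname>
for t in TCTG_ITM_ITEM TCTG_ITA_ITEM_ATTRIBUTES TCTG_NOA_NODE_ATTRIBUTES; do
  db2 "RUNSTATS ON TABLE <schema>.$t WITH DISTRIBUTION ON KEY COLUMNS AND INDEXES ALL"
done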

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Check SQL runtimes directly on database server


Sometimes slow queries are identified based on the delayed queries flag in the db.log file; however, the slowness might result from JVM issues on the Product Master server. To exclude this, such slow queries can be run directly on the database server to exclude network or JVM issues. Of course, you should use only SELECT statements for such runtime measurement tests.

Symptoms
Oracle:
You can get the query runtime by using sqlplus. Create a file containing the SQL along with some settings, for example, sql1.sql. The file has to end with .sql, otherwise sqlplus does not recognize the input.

set timing on;
set linesize 1000;
set pagesize 1000;
<SQL statement>;
quit;

Run the query:
sqlplus <user>/<passwd> @sql1.sql
The query result is printed on screen with the runtime shown like:
Elapsed: 00:00:02.71
If you want to avoid screen display, you can use additional settings to redirect the output to a file:

set timing on;
set linesize 1000;
set pagesize 1000;
set term off;
spool tmp.out;
<SQL statement>;
spool off;
quit;

DB2®:
Use the db2batch utility to collect the query execution time.
Save the SQL into a file. Aggregate the SQL on one line and terminate it with a semicolon.
Invoke db2batch:
db2batch -d <dbname> -a <userid>/<passwd> -f <sqlfile> -i complete -o f -1 p 1 o 5

The options have the following meaning:

-i complete: the time to prepare, execute, and fetch is expressed separately
-o specifies a set of options like:
  f: number of rows to fetch (-1 = all, n = n rows)
  p: level of performance information to be returned (2 = return elapsed time and CPU time)
  o: query optimization level (default 5)

IBM Product Master 12.0 Fix Pack 8



Operating Systems: AIX, Linux, and Windows (Workbench only)

Check query plan


Once slow-performing queries are identified and initial checks do not reveal major problems, it might make sense to investigate the query plan that the database generates for the slow-running query. Interpreting those plans, and especially drawing the right conclusions from them, is not always easy and might require assistance from database experts.

Symptoms
If the goal is to update or retrieve only a couple of rows from within some large tables, it is always best for the database server to find the respective rows by using indices instead of running a table scan.

Diagnosing the problem


To check the query plan used for a specific SQL, the following approaches can be used.

Oracle:
You can view the query plan on Oracle based installations directly from within the DB Admin console. Paste the query into the SQL Command window and click Explain plan.
Note: Do not click Run Query, especially if you are investigating an update, insert, or delete statement.
DB2:
There are various options to generate query plans with DB2, but the preferred option is to use the db2exfmt utility, as the generated output contains the most details.
Perform the following steps to generate the query plan:
1. Log in to the DB2 server as the instance owner.
2. Ensure that the explain tables exist, or re-create them. Details can be found in the DB2 information center. For example:

db2 connect to <database name>
db2 "CALL SYSPROC.SYSINSTALLOBJECTS('EXPLAIN', 'C', CAST (NULL AS VARCHAR(128)), CAST (NULL AS VARCHAR(128)))"

or

db2 connect to <database name>
cd ~/sqllib/misc
db2 -tvf EXPLAIN.DDL


3. Save the problematic SQL statement in a file, for example, sql1.sql. Statements should be terminated by ;. Either use fully qualified table names (including the schema name) or prepend the SQL statement with a set current schema statement, for example:

set current schema nes1011b;
select * from (SELECT itm_id, itm_primary_key pk , '' d
  FROM tctg_itm_item
  WHERE itm_id IN (
    select pk_row.ita_item_id
    from tctg_ita_item_attributes pk_row
    where pk_row.ita_node_id = 1657
    and pk_row.ita_value_string like '%335245' ESCAPE '\'
    and pk_row.ita_next_version_id >= 999999999
    and pk_row.ita_version_id <= 999999999
    and pk_row.ita_catalog_id = 1602
    fetch first 500 rows only ) AND itm_container_id = 1602
  AND itm_version_id <= 999999999
  AND itm_next_version_id >= 999999999
  ORDER BY UPPER(pk) ) AS OM98 fetch first 10 rows only;


4. Use the db2exfmt command to generate a query plan:

db2 set current explain mode explain
db2 -tvf sql1.sql
db2exfmt -d <dbname> -# 0 -w -1 -g TIC -n % -s % -o sql1_exfmt
db2 set current explain mode no

The generated sql1_exfmt file contains a graphical query plan with statistical information for each data retrieval step.
As an alternative, use the db2support feature to collect optimizer-related diagnostic information.



5. Run the following command as the DB2 instance owner:
db2support . -d <dbname> -sf search.sql -cl 1
It generates a db2support file with all of the required information, also showing the db2exfmt output. More details can be found in the DB2 technote Collecting Data for DB2 Compiler Issues.

Resolving the problem


If the optimizer chooses an inefficient access plan (for example, a large table scan, or an inappropriate index that does not contain the columns used as search predicates), then this might be because:

the statistics are not up to date (update the statistics), or
the statistics were generated by using an inappropriate sampling method (use a 100% sampling ratio), or
the index does not contain the correct set of columns to allow quick access (make sure that your table and index layout conforms to the Product Master default schema).

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Check table layout


Use the following approaches if you need to compare the current table layout with the required layout according to the DDLs in the $TOP/src/db/schema/gen/* directory.

Symptoms
Oracle:
To check a specific table, run either of the following commands within sqlplus:
describe <tablename>
or the SQL:

SELECT column_name "Name",
       nullable "Null?",
       concat(concat(concat(data_type,'('),data_length),')') "Type"
FROM user_tab_columns
WHERE table_name='<tablename in uppercase>';


DB2®:
On the command line, you can retrieve details by running the following command:
db2 describe table VAN1012E.TUTL_LCK_LOCK show detail
Alternatively, you can issue the db2look command to get the DDLs for various database objects. For example, to extract all of the table and index definitions for a schema:
db2look -d <DBname> -z <schemaname> -e -f -o db2look.sql

-e: extract DDL statements
-f: extract configuration parameters and registry variables that affect the query optimizer

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Checking the size of the Document Store


You should use the docstore maintenance report to check the docstore size and delete documents that are no longer needed. Alternatively, you can use the following SQL statements to quickly check the number of documents in the main docstore directories.

Symptoms
The size of your docstore report is quite large and contains unnecessary documents.

Resolving the problem


Use the docstore maintenance report, see Document store maintenance, to check docstore size and delete no longer needed documents.

Oracle
SELECT DIRECTORY, count ( 1 ) FROM
  (SELECT SUBSTR( dhi_doc_path,0,INSTR( dhi_doc_path,'/', 1,2 )) DIRECTORY FROM dhi
   WHERE not dhi_doc_path is null )
GROUP BY DIRECTORY
ORDER BY count(1) DESC

DB2
SELECT DIRECTORY, count ( 1 ) FROM
  (SELECT SUBSTR( dhi_doc_path,1,posstr(strip(dhi_doc_path,L,'/'),'/')) DIRECTORY FROM dhi
   WHERE not dhi_doc_path is null )
GROUP BY DIRECTORY
ORDER BY count(1) DESC

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Troubleshooting multicast
Product Master uses ehcache to perform distributed caching of objects between Java™ virtual machines (JVMs), both on the same machine and in the same network. Each Product Master JVM needs to know about all of the other JVMs in the cluster. To achieve this, a JVM periodically sends a multicast message, for example saying "I'm here!". Without multicast, the configuration of all the caches would be very tedious and error-prone. Consequently, IBM recommends that you use the multicast configuration of ehcache. Although you can configure the cache setup in the $TOP/etc/default/mdm-ehcache-config.xml file yourself, note that IBM® does not support a non-multicast configuration.

Confirm caching is working in cluster


If you have distributed caching working and you turn the ehcache logging setting to "debug", you will see log entries similar to the following in the native_cache.log
file.
Configuring multicast
The following sample code displays the defaults for the multicast configuration for Product Master. Do not change these settings unless you are familiar with
multicast configuration. The address and the ttl values are defined in the env_settings.ini file. The configure_env.sh script will write these values to the appropriate
xml file.
Verify the network and machine configuration
IBM recommends that you first confirm that multicast communication works on your network before proceeding to the Product Master configuration. Typically, a network and its servers need some configuration to enable multicast. Once you have confirmed that multicast communication works, check the logs to confirm that all the JVMs in your Product Master cluster are properly sharing caches.
Allowing multicast broadcasting on a Linux machine
You must allow for multicast broadcasting to take place in order for ehcache to sync successfully between servers on a Linux® machine.
Contents of the native_cache.log file
You can review the following examples of how the contents within the native_cache.log file should look.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Confirm caching is working in cluster


If you have distributed caching working and you turn the ehcache logging setting to "debug", you will see log entries similar to the following in the native_cache.log file.

The log entries show whether the JVM is receiving messages from the remote caches. Within the log file, you should see the given IP address, port, and cache name.
The following sample code shows the cache setting in the log4j2.xml file:

<Logger name=" net.sf.ehcache " level="debug" additivity="false">


<AppenderRef ref="NATIVE_CACHE" />
</Logger>

The following sample code shows caching is working in a cluster in the native_cache.log file.
Note: The URL and port will be different for your setup.
2013-05-14 22:25:25,293 [Multicast keep-alive Heartbeat Receiver thread-1]

DEBUG net.sf.ehcache.distribution.RMICacheManagerPeerProvider -

Lookup URL //192.168.1.69:59569/workflowCache

Checklist of multicast settings


If you believe your caching is not working in your cluster, go through the following checklist to help debug the problem.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)



Checklist of multicast settings
If you believe your caching is not working in your cluster, go through the following checklist to help debug the problem.

1. Issue the ping 224.0.0.1 command to see if the machines on your network are listening to multicast traffic. You can run ping 224.0.0.1 | grep <network IP of other
server> for example, ping 224.0.0.1 | grep 192.168.1.100 to see if the given server in the cluster is listening to your server.
2. Modify the network adapter of your local Linux® machine to allow multicast traffic. Here is example output from ifconfig in which multicast is configured:
en5:
flags=5e080863,c0<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,CHECKSUM_OFFLOAD(ACTIVE),PSEG,LARGESEND,
CHAIN>

inet 9.156.178.105 netmask 0xffffff00 broadcast 9.156.178.255

tcp_sendspace 131072 tcp_recvspace 65536 rfc1323 0

lo0: flags=e08084b<UP,BROADCAST,LOOPBACK,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT>

inet 127.0.0.1 netmask 0xff000000 broadcast 127.255.255.255

inet6 ::1/0
3. On some machines you may have to add a route for multicast traffic. This applies to AIX® machines as well as some Linux distributions. Here is an example of how
this could look:
Note: 10.0.0.10 is my default gateway. This is optional and the command may look different on your machine.
route add -net 224.0.0.0/3 10.0.0.10
4. Check that you can run MulticastHelloWorld successfully to verify that there are no security or configuration issues at the network or server level.
5. Make sure the configuration of the multicast_addr is the same for all of the machines in the cluster. It is easy to change one and not the others. This can be verified in the native_cache.log file.
6. The multicast address used by the caching mechanism is defined in two configuration files: the $TOP/bin/conf/env_settings.ini file and, after running the $TOP/bin/configure_env.sh command, also in the $TOP/etc/default/mdm-ehcache-config.xml file. Usually, you only have to modify the $TOP/bin/conf/env_settings.ini configuration file and invoke the $TOP/bin/configure_env.sh command; there is no need to configure the $TOP/etc/default/mdm-ehcache-config.xml file manually. The default address is 239.0.10.1 and the default multicastGroupPort is 4446. Ask your network administrator whether this address and port are free to use in your network and whether there are any firewall or router rules that prevent multicast traffic.
7. Try using a different multicast address, for example: 224.0.0.1.
8. Ensure the ttl setting is set to 1 on all of the machines if using a cluster.
9. Make sure that if you ping your machine name from that machine, you get a valid IP address on the network and not 127.0.1.1.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Configuring multicast
The following sample code displays the defaults for the multicast configuration for Product Master. Do not change these settings unless you are familiar with multicast
configuration. The address and the ttl values are defined in the env_settings.ini file. The configure_env.sh script will write these values to the appropriate xml file.

[cache]
# multicast ip addr for MDMPIM cache. Must be unique on the LAN
multicast_addr=239.0.10.1
# TTL for multicast packets. Set to 0 for single-machine installations or 1 for clusters
multicast_ttl=0

In the mdm-ehcache-config.xml file, the defaults are as follows:

<cacheManagerPeerProviderFactory
    class="net.sf.ehcache.distribution.RMICacheManagerPeerProviderFactory"
    properties="peerDiscovery=automatic,
                multicastGroupAddress=239.0.10.1,
                multicastGroupPort=4446, timeToLive=0"/>

Ensure that you use a unique multicast_addr for each multicast application on your network. A different multicastGroupPort would serve the same purpose.
Note: There is a very specific range of IP addresses that can be used for multicast: 224.0.0.0 to 239.255.255.255.
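For example, to switch a single-machine default to a cluster setup, adjust the [cache] section and regenerate the XML file; a sketch using sed (editing the file manually works just as well):

# set the multicast TTL for a clustered installation, then regenerate the ehcache XML
sed -i 's/^multicast_ttl=.*/multicast_ttl=1/' $TOP/bin/conf/env_settings.ini
$TOP/bin/configure_env.sh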

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Verify the network and machine configuration



IBM® recommends that you first confirm that multicast communication works on your network before proceeding to the Product Master configuration. Typically, a network and its servers need some configuration to enable multicast. Once you have confirmed that multicast communication works, check the logs to confirm that all the JVMs in your Product Master cluster are properly sharing caches.

My machines are not listening to multicast traffic


I do not know if my machines are listening to multicast traffic. How do I confirm this?
Confirm the cache is set up correctly in the log files
To confirm that the cache is set up correctly in the log files, ensure that you have the following values set.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

My machines are not listening to multicast traffic


I do not know if my machines are listening to multicast traffic. How do I confirm this?

Symptoms
My server is not picking up any multicast traffic.

Diagnosing the problem


Issue the ping 224.0.0.1 command to see if the machines on your network are listening to multicast traffic. You can run ping 224.0.0.1 | grep <network IP of other server>, for example, ping 224.0.0.1 | grep 192.168.1.100, to see if the given server in the cluster is listening to your server.
User response: In the created output, you should find the IP addresses of your Product Master servers. If you do not find the IP addresses, talk to your network administrators and ask them to allow multicast traffic in the network segment in which your Product Master servers are located.

Resolving the problem


Modify the network adapter of your local Linux® machine to allow multicast traffic. Here is example output from ifconfig in which multicast is configured:
en5:
flags=5e080863,c0<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,CHECKSUM_OFFLOAD(ACTIVE),PSEG,LARGESEND,CHAIN
>

inet 9.156.178.105 netmask 0xffffff00 broadcast 9.156.178.255

tcp_sendspace 131072 tcp_recvspace 65536 rfc1323 0

lo0: flags=e08084b<UP,BROADCAST,LOOPBACK,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT>

inet 127.0.0.1 netmask 0xff000000 broadcast 127.255.255.255

inet6 ::1/0
On some machines you may have to add a route for multicast traffic. This applies to AIX® machines as well as some Linux distributions. Here is an example of how this could look:
Note: 10.0.0.10 is my default gateway. This is optional and the command may look different on your machine.
route add -net 224.0.0.0/3 10.0.0.10

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Confirm the cache is set up correctly in the log files


To confirm that the cache is set up correctly in the log files, ensure that you have the following values set.

Change the priority value parameter to debug in the log4j2.xml file. For example:
<Logger name="net.sf.ehcache" level="debug" additivity="false">
    <AppenderRef ref="NATIVE_CACHE" />
</Logger>

When the cache peers are advertised, a message similar to the following displays:
2013-05-14 22:25:15,287 [Multicast Heartbeat Sender Thread]
DEBUG net.sf.ehcache.distribution.PayloadUtil - Cache peers for this
CacheManager to be advertised: //192.168.1.69:59569/workflowCache|/
/192.168.1.69:59569/specCache__KEY_VERSION_TO_START_VERSION|//192.168.1.69:59569/
ctgViewCache|//192.168.1.69:59569/lookupCache|//192.168.1.69:59569/catalogCache|/
/192.168.1.69:59569/specNameToSpecIdCache|//192.168.1.69:59569/roleCache|/
/192.168.1.69:59569/nodeIdToSpecIdCache|//192.168.1.69:59569/attrGroupCache|/
/192.168.1.69:59569/catalogDefinitionCache|//192.168.1.69:59569/scriptCache|/
/192.168.1.69:59569/specCache__KEY_START_VERSION_TO_VALUE|//192.168.1.69:59569/
specCache__KEY_TO_CURRENT_START_VERSION|//192.168.1.69:59569/wsdlCache|/
/192.168.1.69:59569/accessCache



Note: The IP address must not be 127.0.1.1.
You will see a message similar to the following when a machine is receiving notification from peers in its cluster, in the native_cache.log file:
2013-05-14 22:25:15,287 [Multicast Heartbeat Receiver Thread]
DEBUG net.sf.ehcache.distribution.MulticastKeepaliveHeartbeatReceiver - rmiUrls
received //<IP-address>:<port>/workflowCache|//<IP-address>:<port>/attrGroupCache|/
/<IP-address>:<port>/specCache__KEY_VERSION_TO_START_VERSION|//<IP-address>:<port>/
scriptCache|//<IP-address>:<port>/ctgViewCache|//<IP-address>:<port>/
specCache__KEY_TO_CURRENT_START_VERSION|//<IP-address>:<port>/
specCache__KEY_START_VERSION_TO_VALUE|//<IP-address>:<port>/lookupCache|/
/<IP-address>:<port>/catalogCache|//<IP-address>:<port>/roleCache|/
/<IP-address>:<port>/wsdlCache|//<IP-address>:<port>/accessCache
You will see a message similar to the following if the ehcache is still not working:
2013-05-14 22:25:15,287 [net.sf.ehcache.CacheManager@6d0b6d0b]
DEBUG net.sf.ehcache.util.UpdateChecker - Update check failed:
java.net.SocketTimeoutException: connect timed out

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Allowing multicast broadcasting on a Linux machine


You must allow for multicast broadcasting to take place in order for ehcache to sync successfully between servers on a Linux® machine.

Issue the iptables --list command. If multicast broadcasting is allowed, you will see a message similar to the following:
...
ACCEPT all -- anywhere anywhere PKTTYPE = multicast
...
If multicast broadcasting is not allowed, issue the following to allow it:
$ iptables -A INPUT -m pkttype --pkt-type multicast -j ACCEPT

$ service iptables save


The command will look similar to:

./iptables -I <rule_name> <position_in_rule> -m pkttype --pkt-type multicast -j ACCEPT


for example:
iptables -I RH-Firewall-1-INPUT 26 -m pkttype --pkt-type multicast -j ACCEPT
ehcache mechanism:
- Displaying the current configuration:
./etc/rc.d/init.d/iptables status
oder
iptables -list
- Add / Grant multicast traffic:
iptables -A INPUT -m pkttype --pkt-type multicast -j ACCEPT
oder
iptables -A INPUT -m pkttype --pkt-type multicast -j ACCEPT
oder
vi /etc/sysconfig/iptables
oder
iptables -I RH-Firewall-1-INPUT 33 -m pkttype --pkt-type multicast -j ACCEPT
- Permanently save this firewall configuration:
service iptables save
oder
./etc/rc.d/init.d/iptables save

Debugging ehcache using the ehcache debugger


You download a file from the ehcache web site and issue a command that monitors the network for cache events. This provides a much easier way to verify that distributed caching is working, as compared to doing something like modifying workflow scripts and checking that the workflow engine picked up the modified script.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Debugging ehcache using the ehcache debugger


You download a file from the ehcache web site and issue a command that monitors the network for cache events. This provides a much easier way to verify that distributed caching is working, as compared to doing something like modifying workflow scripts and checking that the workflow engine picked up the modified script.

Procedure
1. Download the ehcache debugger tar file from the ehcache web site, for example: http://ehcache.org/
2. Unpack the archive and place the downloaded JAR file, for example ehcache-debugger-1.7.1.jar, into the $TOP/jars directory.
3. Add the following line to the $TOP/bin/conf/classpath/jars-custom.txt file:
jars/ehcache-debugger-1.7.1.jar
4. Run the $ bin/configure_env.sh command. You can answer no to all of the questions to avoid disturbing your changes to the log4j2.xml file.
5. Run the $ echo $JAVA_RT command and make sure that the ehcache-debugger-1.7.1.jar is in the command.
6. Run the $ . bin/compat.sh command.



7. Run the $ $JAVA_RT net.sf.ehcache.distribution.RemoteDebugger $TOP/etc/default/mdm-ehcache-config.xml command.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Contents of the native_cache.log file


You can review the following examples of how the contents within the native_cache.log file should look.

Values at the beginning of the native_cache.log file must be the same for all JVMs in the cluster
2013-05-14 21:56:38,785 [wfl_root] DEBUG net.sf.ehcache.util.PropertyUtil - Value found for multicastGroupAddress: 224.0.0.1
2013-05-14 21:56:38,785 [wfl_root] DEBUG net.sf.ehcache.util.PropertyUtil - Value found for multicastGroupPort: 4446
2013-05-14 21:56:38,785 [wfl_root] DEBUG net.sf.ehcache.util.PropertyUtil - Value found for timeToLive: 1

Example of the workflow engine log showing that it hears the application server (127.0.1.1:51913):
2013-05-14 21:56:41,149 [Multicast keep-alive Heartbeat Receiver thread-1] DEBUG
net.sf.ehcache.distribution.RMICacheManagerPeerProvider
- Lookup URL //127.0.1.1:51913/workflowCache
2013-05-14 21:56:41,156 [Multicast keep-alive Heartbeat Receiver thread-1] DEBUG
net.sf.ehcache.distribution.RMICacheManagerPeerProvider
- Lookup URL //127.0.1.1:51913/specCache__KEY_VERSION_TO_START_VERSION
2013-05-14 21:56:41,160 [Multicast keep-alive Heartbeat Receiver thread-1] DEBUG
net.sf.ehcache.distribution.RMICacheManagerPeerProvider
- Lookup URL //127.0.1.1:51913/ctgViewCache
2013-05-14 21:56:41,164 [Multicast keep-alive Heartbeat Receiver thread-1] DEBUG
net.sf.ehcache.distribution.RMICacheManagerPeerProvider
- Lookup URL //127.0.1.1:51913/lookupCache
2013-05-14 21:56:41,168 [Multicast keep-alive Heartbeat Receiver thread-1] DEBUG
net.sf.ehcache.distribution.RMICacheManagerPeerProvider
- Lookup URL //127.0.1.1:51913/catalogCache
2013-05-14 21:56:41,170 [Multicast keep-alive Heartbeat Receiver thread-1] DEBUG
net.sf.ehcache.distribution.RMICacheManagerPeerProvider
- Lookup URL //127.0.1.1:51913/specNameToSpecIdCache
2013-05-14 21:56:41,171 [Multicast keep-alive Heartbeat Receiver thread-1] DEBUG
net.sf.ehcache.distribution.RMICacheManagerPeerProvider
- Lookup URL //127.0.1.1:51913/roleCache
2013-05-14 21:56:41,173 [Multicast keep-alive Heartbeat Receiver thread-1] DEBUG
net.sf.ehcache.distribution.RMICacheManagerPeerProvider
- Lookup URL //127.0.1.1:51913/nodeIdToSpecIdCache
2013-05-14 21:56:41,175 [Multicast keep-alive Heartbeat Receiver thread-1] DEBUG
net.sf.ehcache.distribution.RMICacheManagerPeerProvider
- Lookup URL //127.0.1.1:51913/attrGroupCache
2013-05-14 21:56:41,176 [Multicast keep-alive Heartbeat Receiver thread-1] DEBUG
net.sf.ehcache.distribution.RMICacheManagerPeerProvider
- Lookup URL //127.0.1.1:51913/catalogDefinitionCache
2013-05-14 21:56:41,178 [Multicast keep-alive Heartbeat Receiver thread-1] DEBUG
net.sf.ehcache.distribution.RMICacheManagerPeerProvider
- Lookup URL //127.0.1.1:51913/scriptCache
2013-05-14 21:56:41,180 [Multicast keep-alive Heartbeat Receiver thread-1] DEBUG
net.sf.ehcache.distribution.RMICacheManagerPeerProvider
- Lookup URL //127.0.1.1:51913/specCache__KEY_TO_CURRENT_START_VERSION
2013-05-14 21:56:41,181 [Multicast keep-alive Heartbeat Receiver thread-1] DEBUG
net.sf.ehcache.distribution.RMICacheManagerPeerProvider
- Lookup URL //127.0.1.1:51913/specCache__KEY_START_VERSION_TO_VALUE
2013-05-14 21:56:41,183 [Multicast keep-alive Heartbeat Receiver thread-1] DEBUG
net.sf.ehcache.distribution.RMICacheManagerPeerProvider
- Lookup URL //127.0.1.1:51913/wsdlCache
2013-05-14 21:56:41,184 [Multicast keep-alive Heartbeat Receiver thread-1] DEBUG
net.sf.ehcache.distribution.RMICacheManagerPeerProvider
- Lookup URL //127.0.1.1:51913/accessCache
2013-05-14 21:56:43,806 [Multicast Heartbeat Sender Thread] DEBUG net.sf.ehcache.distribution.PayloadUtil
- Cache peers for this CacheManager to be advertised:
//127.0.1.1:39290/workflowCache|//127.0.1.1:39290/specCache__KEY_VERSION_TO_START_VERSION|/
/127.0.1.1:39290/ctgViewCache|//127.0.1.1:39290/lookupCache|//127.0.1.1:39290/catalogCache|//127.0.1.1:39290/specNameToSpecIdCache
|/
/127.0.1.1:39290/roleCache|//127.0.1.1:39290/attrGroupCache|//127.0.1.1:39290/nodeIdToSpecIdCache|//127.0.1.1:39290/catalogDefinit
ionCache|/
/127.0.1.1:39290/scriptCache|//127.0.1.1:39290/specCache__KEY_TO_CURRENT_START_VERSION|//127.0.1.1:39290/specCache__KEY_START_VERS
ION_TO_VALUE|/
/127.0.1.1:39290/accessCache|//127.0.1.1:39290/wsdlCache
Similarly, the application server might report the following messages in the native_cache.log file.
Note: The log file shows that our machine received 127.0.1.1:39290, which is what the workflow engine is broadcasting.
2013-05-14 21:56:43,815 [Multicast keep-alive Heartbeat Receiver thread-1] DEBUG
net.sf.ehcache.distribution.RMICacheManagerPeerProvider
- Lookup URL //127.0.1.1:39290/workflowCache
2013-05-14 21:56:43,826 [Multicast keep-alive Heartbeat Receiver thread-1] DEBUG
net.sf.ehcache.distribution.RMICacheManagerPeerProvider
- Lookup URL //127.0.1.1:39290/specCache__KEY_VERSION_TO_START_VERSION
2013-05-14 21:56:43,833 [Multicast keep-alive Heartbeat Receiver thread-1] DEBUG
net.sf.ehcache.distribution.RMICacheManagerPeerProvider
- Lookup URL //127.0.1.1:39290/ctgViewCache
2013-05-14 21:56:43,834 [Multicast keep-alive Heartbeat Receiver thread-1] DEBUG

net.sf.ehcache.distribution.RMICacheManagerPeerProvider
- Lookup URL //127.0.1.1:39290/lookupCache
2013-05-14 21:56:43,836 [Multicast keep-alive Heartbeat Receiver thread-1] DEBUG
net.sf.ehcache.distribution.RMICacheManagerPeerProvider
- Lookup URL //127.0.1.1:39290/catalogCache
2013-05-14 21:56:43,837 [Multicast keep-alive Heartbeat Receiver thread-1] DEBUG
net.sf.ehcache.distribution.RMICacheManagerPeerProvider
- Lookup URL //127.0.1.1:39290/specNameToSpecIdCache
2013-05-14 21:56:43,839 [Multicast keep-alive Heartbeat Receiver thread-1] DEBUG
net.sf.ehcache.distribution.RMICacheManagerPeerProvider
- Lookup URL //127.0.1.1:39290/roleCache
2013-05-14 21:56:43,842 [Multicast keep-alive Heartbeat Receiver thread-1] DEBUG
net.sf.ehcache.distribution.RMICacheManagerPeerProvider
- Lookup URL //127.0.1.1:39290/attrGroupCache
2013-05-14 21:56:43,843 [Multicast keep-alive Heartbeat Receiver thread-1] DEBUG
net.sf.ehcache.distribution.RMICacheManagerPeerProvider
- Lookup URL //127.0.1.1:39290/nodeIdToSpecIdCache
2013-05-14 21:56:43,845 [Multicast keep-alive Heartbeat Receiver thread-1] DEBUG
net.sf.ehcache.distribution.RMICacheManagerPeerProvider
- Lookup URL //127.0.1.1:39290/catalogDefinitionCache
2013-05-14 21:56:43,847 [Multicast keep-alive Heartbeat Receiver thread-1] DEBUG
net.sf.ehcache.distribution.RMICacheManagerPeerProvider
- Lookup URL //127.0.1.1:39290/scriptCache
2013-05-14 21:56:43,848 [Multicast keep-alive Heartbeat Receiver thread-1] DEBUG
net.sf.ehcache.distribution.RMICacheManagerPeerProvider
- Lookup URL //127.0.1.1:39290/specCache__KEY_TO_CURRENT_START_VERSION
2013-05-14 21:56:43,850 [Multicast keep-alive Heartbeat Receiver thread-1] DEBUG
net.sf.ehcache.distribution.RMICacheManagerPeerProvider
- Lookup URL //127.0.1.1:39290/specCache__KEY_START_VERSION_TO_VALUE
2013-05-14 21:56:43,852 [Multicast keep-alive Heartbeat Receiver thread-1] DEBUG
net.sf.ehcache.distribution.RMICacheManagerPeerProvider
- Lookup URL //127.0.1.1:39290/accessCache
2013-05-14 21:56:43,854 [Multicast keep-alive Heartbeat Receiver thread-1] DEBUG
net.sf.ehcache.distribution.RMICacheManagerPeerProvider
- Lookup URL //127.0.1.1:39290/wsdlCache

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Troubleshooting product installation


If the installation of Product Master fails, try the following debugging steps.

Error messages in the IBM Installation Manager or installation logs


Symptoms
After the installation is complete, you can see error messages in IBM® Installation Manager on the Installation Complete window. You might also see errors in the
installation logs.
Resolving the problem

1. Review the messages in the installation log files to diagnose and correct individual error conditions.
2. For DB2® or application server problems, use their tools to diagnose and correct individual error conditions.
3. Rerun IBM Installation Manager to reinstall Product Master.

Issue: Configuring the default_locale parameter to ensure that users can log in successfully
When you install Product Master, version 6.0.0 or later, you might have login problems and receive an error after importing a database dump into a different environment.

The following errors might occur after importing the database dump:

In the user interface


WPC - Error

AUS-20-014 Invalid username/password/company code.


In the $TOP/logs/<Appserver_hostname>/exception.log file

2009-05-19 07:35:14,651 [jsp_2: enterLogin.jsp] ERROR
com.ibm.ccd.common.error.AustinException - Could not find lookup table
with name: Propriétés LDAP, Exception:Could not find lookup table
with name: Propriétés LDAP


One possible reason that users cannot log in to Product Master is that the value of the default_locale parameter is configured incorrectly. The default_locale value in the environment into which a database dump is imported must match the value in the environment from which the dump was exported. That is, if the default_locale parameter was set to fr_FR in the exported environment, it must also be set to fr_FR in the importing environment. If the values are not the same, users cannot log in to the system.
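A quick way to compare the two environments is shown below, assuming the parameter is set in the $TOP/etc/default/common.properties file (the file location is an assumption; check your installation):

grep default_locale $TOP/etc/default/common.properties
default_locale=fr_FR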



Issue: Product Master AppServer might not start, or install_war.sh can fail with "arg list too
long" error
When installing Product Master, Version 6.0.0 or later, the following error might occur:

/bin/go/init_svc_vars.sh: line 21: /usr/bin/mkdir:
The parameter or environment lists are too long.
/bin/read_config_file.sh: line 34: /usr/bin/awk:
The parameter or environment lists are too long.

If the ncargs parameter on AIX® is not set to a high enough value, the install_war.sh installation script might fail with the error message "arg list too long". This problem
might also cause the application server to fail.

The AIX default for the ncargs parameter is set to 6 * 4k.

Since the product needs a long list of arguments to install and run correctly, this parameter might not specify enough memory to run the install_war.sh script or to start the
product.

To resolve the problem, the AIX administrator must increase the size of the ncargs parameter, by issuing the following command:

chdev -l sys0 -a ncargs=NewValue


where NewValue is the number of 4 KB blocks to be allocated for the argument list. You can specify a number between 6 (the default value) and 128. You can review the current setting by issuing the following command:
lsattr -E -l sys0 -a ncargs
You can also change the value of the ncargs parameter (and view additional information) by using the smit or smitty AIX configuration commands. For more information, see the AIX documentation.
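For example, to raise the limit to 128 blocks and then confirm the new setting (the value 128 is illustrative; choose a value that fits your workload):

chdev -l sys0 -a ncargs=128
lsattr -E -l sys0 -a ncargs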

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Troubleshooting migration issues


Describes some common issues that might come up during migration in IBM® Product Master.

FAILED:Export of AccessControlGroups error message


Symptoms
When you migrate to Product Master, you might see the following error that is related to exporting access control groups and roles:

Exporting ACG and Roles for company: trigo 7/4/13 6:27 AM

STARTED:Export 7/4/13 6:27 AM
STARTED:Export of AccessControlGroups 7/4/13 6:27 AM
INFO:Skipping export of [Default] 7/4/13 6:27 AM
FAILED:Export of AccessControlGroups

Cause
There was a failure in exporting the access control group (ACG).

Solution
Check the logs in the $TOP/logs/default directory to find out the reason for the failure.

Case-sensitive GDS migration questions


During GDS migration, the script prompts: All the GDS access control group migrations are case-sensitive: Enter Y or N. Enter the answer in uppercase.

Missing indexes error


Symptoms
When you migrate to Product Master, you might see missing indexes in the migration console report.
The following errors that are related to missing indexes might occur during migrating:

Missing indexes on the SCA table

Missing Indexes
|============================================================
| SCA_0_PK: SCA_CATEGORY_ID, SCA_SELECTION_ID, SCA_COMPANY_ID, SCA_CATALOG_ID, SCA_CAT_TREE_ID

You can run the following Perl script to resolve this issue. Choose the db2 or oracle directory as appropriate.

perl $PERL5LIB/runSQL.pl --sql_file=$TOP/src/db/schema/dbscripts/<db2 or oracle>/sca_pk.sql

Missing indexes on the DBV table



Missing Indexes
|============================================================
| DBV_0_UK: DBV_VERSION

You can run the following Perl script to resolve this issue. Choose the db2 or oracle directory as appropriate.

perl $PERL5LIB/runSQL.pl --sql_file=$TOP/src/db/schema/gen/<db2 or oracle>/ddl_ver_synchronize.sql
. $TOP/bin/compat.sh
$JAVA_RT com.ibm.ccd.synchronize.DBSchemaVersion --autoupd

Missing indexes on the DOA and CTG tables (DB2® only)

Missing Indexes
|==============================================
| CTG_1_UK: CTG_COMPANY_ID, CTG_NAME
| DOA_0_UK: DOA_DOC_ID, DOA_CMP_ID, DOA_NAME

You can run the following Perl commands to resolve this issue:

perl $PERL5LIB/runSQL.pl
--sql_command="alter table tctg_ctg_catalog drop constraint ctg_1_uk ;"
perl $PERL5LIB/runSQL.pl
--sql_command="drop index ictg_ctg_2;"

perl $PERL5LIB/runSQL.pl
--sql_command="alter table tctg_ctg_catalog
add constraint ctg_1_uk unique (ctg_company_id, ctg_name);"

perl $PERL5LIB/runSQL.pl
--sql_command="create index ictg_ctg_2 on
tctg_ctg_catalog ( ctg_name, ctg_company_id)
ALLOW REVERSE SCANS;"

perl $PERL5LIB/runSQL.pl
--sql_command="alter table TDOC_DOA_DOC_ATTRIBUTES
drop constraint doa_0_uk ;"

perl $PERL5LIB/runSQL.pl
--sql_command="drop index idoc_doa_0;"

perl $PERL5LIB/runSQL.pl
--sql_command="alter table tdoc_doa_doc_attributes
add constraint doa_0_uk unique (doa_doc_id, doa_cmp_id, doa_name);"

perl $PERL5LIB/runSQL.pl
--sql_command="create index idoc_doa_0
on tdoc_doa_doc_attributes ( doa_name, doa_doc_id, doa_cmp_id)
ALLOW REVERSE SCANS;"

Missing tables ITX and CAX error


Symptoms
When you migrate to Product Master, you might see this error in the DB verification report:

___________________________________________________________
|Missing Tables
|===========================================================
| TCTG_CAX_CATEGORY_CONTENT
| TCTG_ITX_ITEM_CONTENT

|Missing Indexes
|===========================================================
| CAX_0_PK: CAX_ENTRY_ID, CAX_NEXT_VERSION_ID
| ITX_0_PK: ITX_ENTRY_ID, ITX_NEXT_VERSION_ID

Causes
Indicates a problem with table space creation, XDB installation, or database parameters.

Solution
Check the errfile.log file in the $TOP/logs directory for the exact message. Ensure that the database setup is done correctly and then run the migration script again.

Insufficient privileges error message


Symptoms
When you run a migration script for migrating to Product Master, you might see an insufficient privileges error in the console as follows:

Oracle database

create index icnt_eem_2 on tcnt_eem_entry_entry_map (


*
ERROR at line 1:
ORA-01031: insufficient privileges

Db2 database



SQL0551N "USERNAME" does not have the privilege to perform operation
"CREATE INDEX" on object "USERNAME.TWFL_WFE_WORKFLOW_EVENT".
SQLSTATE=42501

Causes
The database user does not have enough privileges to create an index in the database.

Solution
You must grant the "create index" privilege to the database user and then run the migration script again. You must also grant the user the privileges to create and modify tables.

Migration script failure


Symptoms
When you run a migration script for migrating to Product Master, the script might fail as follows:

-----------------------------------------------------------
Summary of the migration
-----------------------------------------------------------
Migration of the following modules failed:
data_maintenance_reports

The $TOP/logs/errfile.log file contains the following content:

net.sf.ehcache.distribution.MulticastKeepaliveHeartbeatReceiver$MulticastReceiverThread run
SEVERE: Multicast receiver thread caught throwable. Cause was null. Continuing...

Solution
There is an issue with the cache configuration parameters, but the migration script ran and completed successfully. There is no need to rerun the migration script.
For more information, see Configuring cache parameters.

Common script compilation errors


Symptoms
When you work with compiled scripts, a script can be saved in the script console only if it compiles correctly. If there is an error, check the svc.out file in the appsvr logs directory for the full javac output and error message.

Solution
The following are some common compilation errors and their resolutions:

1. A break or return statement inside a forEach*Element() block does not compile, due to an "unreachable code" error. For example, the following does not compile:

forEachItemSetElement(itemSet, item)
{
return item;
}

Change it to:

forEachItemSetElement(itemSet, item)
{
if (item != null)
{
return item;
}
}

This code is equivalent, but satisfies the compiler.


2. If you return a value from a function, you need to return a value in every case. In other words, this code does not compile:

function sample() {
var e = null;
catchError (e) {
// do something...
return "a string";
} if (e != null) {
reportError(...);
}
}

This code does not compile because it does not return a value if an exception occurs in the catchError block. Change it to:

function sample() {
   var e = null;
   catchError (e) {
      // do something...
      return "a string";
   }
   if (e != null) {
      reportError(...);
   }
   return null;
}



3. For major compilation issues, you can look at the generated Java™ files. These generated Java files are in the directory that is specified by the tmp_dir parameter in the common.properties file. The Java file name includes the script name and a generated sequence number, for example:
MyScript12345.java

4. Additionally, the full path of the script in the docstore is placed as a comment at the top of each generated Java file. If you map the docstore to the file system, you can run a recursive grep command to find out which Java file matches a script, as shown in the sketch below.
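A minimal sketch of such a search, assuming tmp_dir points to /tmp/pim and the docstore script path is /scripts/triggers/MyScript (both values are illustrative):

# list the generated Java files whose header comment references the script path
grep -rl "/scripts/triggers/MyScript" /tmp/pim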

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Troubleshooting the Persona-based UI issues


Use the following topics to resolve common issues with the Persona-based UI.

Login issues
Symptoms
The login page for the Persona-based UI displays the Could not find required class - org.glassfish.jersey.servlet.ServletContainer error.
Solution
This error appears when the Jersey JAR files are missing from the $TOP_MDMUI folder. Ensure that you installed the full build for the Persona-based UI and then upgraded to the latest fix pack.

"Response to preflight request doesn't pass access control check" error


Symptoms
You are unable to log in, and the browser debugging tool displays the following common error messages:

Response to preflight request doesn't pass access control check: No
'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'http://localhost'
is therefore not allowed access.

Response to preflight request doesn't pass access control check: A wildcard '*' cannot
be used in the 'Access-Control-Allow-Origin' header when the credentials flag is true.

Solution
Evaluate the baseUrl parameter in the config.json file to ensure that you have the correct IP address or fully qualified domain name. The preflight request errors in the browser are caused by configuring the REST server URL with one hostname form and accessing the URL with another. For example, if you configure the REST server URL as 1.200.30.40 but access the URL as http://mynextgenui.com/api/v1/, the request is treated as cross-domain.
Note: The baseUrl parameter value in the config.json file must be the IP numerical hostname or the fully qualified domain name of the server. The short name of the
server cannot be used.
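For example, a well-formed entry in the config.json file (the host name, port, and path are illustrative; use the fully qualified domain name of your own REST server):

{
  "baseUrl": "http://mdm.example.com:7507/api/v1/"
}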

Error uploading assets in Digital Asset Management with the IOException in logs
Symptoms
The scheduler default.log file shows the following error:
java.io.IOException: org.apache.jackrabbit.core.data.DataStoreException: Could not add record
Log snippet:

----------- UPLOAD ASSET SUMMARY -----------

Total assets for upload: 1, Total assets uploaded: 0

2017-09-06 04:25:30,617 [sch_worker_0] INFO com.ibm.mdm.dam.bulkupload.impl.BulkUploadHandler
JOB_ID:40006- CWCUS0001I:doUpload:

-----------FAILED ASSETS SUMMARY----------
Sr No, Asset Name, Reason
1, selfie1.png, javax.jcr.RepositoryException: java.io.IOException:
org.apache.jackrabbit.core.data.DataStoreException: Could not add record

Solution
Check whether the Blobstore folder that is specified in the $TOP/etc/default/dam/config/dam.properties file has write permissions for the MDM root user.
blob.store.dir=/blobstore
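A quick check is shown below, assuming the default /blobstore path from the property above (adjust the path to your configured directory):

# show the owner and permissions of the blobstore directory
ls -ld /blobstore
# grant write permission to the owning user if it is missing
chmod u+w /blobstore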

Digital Assets tabs are not visible for the Merchandise Manager
Symptoms
Digital Assets tabs are not visible for the Merchandise Manager in the single-edit page of the collaboration area.
Solution
Check the following:

Enable the DAM module from Settings > DAM Settings page. For more information, see Customizing the features.
Ensure that the Digital Assets attribute of the type multi-occurrence and relationship is present in the editable attribute collection on the collaboration area
step.
If the tab view is defined, then ensure that the Digital Assets attribute is present in one of the tab views.



Updates to the catalog definition not reflecting in the collaboration area
Symptoms
From the Admin UI, the catalog definition can be modified to add scripts such as the following:

Entry preview
Pre-processing
Post-processing
Post Save

In the Persona-based UI, these scripts are not run when a collaboration item is saved.
Solution
To resolve, either:

1. Update the catalogDefinitionCache properties in the $TOP/etc/default/mdm-ehcache-config.xml file. Set the value of replicatePuts and replicateUpdates to
true.
Example:

<cacheEventListenerFactory
class="net.sf.ehcache.distribution.RMICacheReplicatorFactory"
properties="replicateAsynchronously=true, replicatePuts=true, replicateUpdates=true, replicateUpdatesViaCopy=false,
replicateRemovals=true "/>

2. Run the abort_local.sh and then the start_local.sh scripts in the $TOP/bin/go/ directory to restart the Product Master server.
3. Run the following scripts to restart the Persona-based UI server:
$TOP/mdmui/bin/stopServer.sh

$TOP/mdmui/bin/startServer.sh

OR

Whenever the catalog definition is updated, clear the cache from the Admin UI to reflect the changes in the Persona-based UI.

Failed to get search response while searching using Elasticsearch feature


Symptoms
Though the Elasticsearch feature is enabled, searching returns "Failed to get search response".
Cause
This error can occur due to any of the following reasons:

The MDM REST service is not configured correctly


The search index is not yet created on Elasticsearch

Solution
To resolve, proceed as follows:

Try to connect to the Elasticsearch URL from the other hosts where the pim-collector and indexer services are running. Test by using a cURL request to http://<elasticsearch-server>:9200/. Check and modify the port; the default value is 9200.
In a manual installation, check if the following properties are correctly configured.
Property: common.properties
Location: $TOP/etc/default
Description: If you have used the hazelcast.xml with the MDM deployment package, only the IP address needs to be updated, as the group name and password are set to the default values.
Example:
hazelcast_group_name=mdmce-hazelcast-instance
hazelcast_password=mdmce-hazelcast
hazelcast_network_ip_address=10.51.239.105:5702

Property: restConfig.properties
Location: mdmrest
Description: The properties that point to the Elasticsearch service and the pim-collector service need to be updated.
Example:
elastic_search_service_uri=http://10.53.17.174:9200/
event_receiver_app_url=http://10.53.17.174:9096/

Property: application.properties
Location: $TOP/mdmui/dynamic/pimcollector
Description: The pim-collector service properties should be configured correctly for Hazelcast and the $TOP location.
Example:
mdm.topDir=/opt/11.6/MDM
mdm.etcDir=/opt/11.6/MDM/etc
mdm.ccdClasspath=/opt/11.6/MDM/jars/ccd_svr.jar
mq.groupName=mdmce-hazelcast-instance
mq.password=mdmce-hazelcast
mq.networkIpAddress=10.51.239.105:5702
app.datasource.username=<username>
app.datasource.password=<db_password_plain_or_encrypted>
instance.dbPasswordEncrypted=<true_or_false>

Property: application.properties
Location: $TOP/mdmui/dynamic/indexer
Description: The indexer service properties should be configured correctly for Hazelcast and Elasticsearch.
Example:
es.clusterName=escluster
es.serverIp=10.53.17.174
es.httpPort=9200
es.transportPort=9300
mq.groupName=mdmce-hazelcast-instance
mq.password=mdmce-hazelcast
mq.networkIpAddress=10.51.239.105:5702
app.datasource.username=<username>
app.datasource.password=<db_password_plain_or_encrypted>
instance.dbPasswordEncrypted=<true_or_false>
Ensure that the database migration scripts create the TIDX_IDX_SCH_JOB_STATUS table.

"com.ibm.ccd.common.error.PimIndexEventException" error
Symptoms
You receive the following error:

2018-07-04 13:51:23,269 [jsp_2260: saveEntries.wpc] ERROR com.ibm.ccd.content.common.Item - Error occured in
prepareAndSendPimEvent() while sending ITEM data for indexing., Exception:Failed to send ITEM having Id = 355429 and Name = TestItem for indexing.
com.ibm.ccd.common.error.PimIndexEventException: Failed to send ITEM having Id = 355429 and Name = TestItem for indexing.

Cause
The Hazelcast service was restarted after the Admin UI service was up. The Admin UI cannot access the new Hazelcast instance and keeps trying to connect to the old instance, which impacts the item add, update, and delete operations.
Solution
If you restart the Hazelcast service, also restart the Admin UI service.

"Error 500: javax.servlet.ServletException" error


Symptoms
When you open custom tool that is created through Admin UI in the Persona-based UI, you receive the following error:

Error 500: javax.servlet.ServletException: NEWUIUSERINFOCACHE does not contain the JWT token

Cause
This error occurs because multicast is not working between the Persona-based UI and the Admin UI.
Solution
Ensure that the following properties are set correctly and that the same multicast address and port are used by the Admin UI and the Persona-based UI.

1. Browse to the common.properties file located in the $TOP/etc/default folder.


2. Change the value of the xframe_header_option property to ALLOWALL.
3. Browse to the env_settings.ini file located in the $TOP/bin/conf folder.
4. Change the value of the multicast_ttl property to 1.
5. Browse to the config.json file located in the $WAS_HOME/profiles/<AppSrv>/installedApps/<NodeCell>/mdm_ui.war.ear/mdm_ui.war/assets/ folder.
6. Set the customScriptBaseUrl property to http://<old_ui_ip>:<port>.
7. Browse to the mdm-rest-cache-config.xml file located in the $WAS_HOME/AppServer/profiles/<AppSrv>/installedApps/<NodeCell>/mdm-
rest.war.ear/mdm-rest.war/WEB-INF/classes/ folder.
8. Change the value of the following properties:

multicastGroupAddress=<multicast address>
multicastGroupPort=<multicast port>

"Too many files open" error


Solution

1. Increase the open file limit on the server.


2. Add the following lines in the /etc/security/limits.conf file:

* soft nofile 32000
* hard nofile 32000

3. If the Free text search feature is enabled, increase the open file limit further to 65536 for Elasticsearch.
4. If you observe a performance issue with a catalog that has a large number of attributes (more than 200), update the memory flags for the pim-collector and indexer services in the /mdmui/bin/fts/start_collector.sh and /mdmui/bin/fts/start_indexer.sh files to the following values:

pim-collector: -Xms4096m -Xmx4096m -XX:NewSize=1000m -XX:MaxNewSize=1000m
indexer service: -Xms1200m -Xmx1200m -XX:NewSize=400m -XX:MaxNewSize=400m

"Could not listen on port 1535 on host 0.0.0.0:java.net.BindException: Address already in use
(Bind failed)" error
Symptoms
When you try to connect to the jdbc:derby://localhost:1535/prddb URL, you receive the following error:

Could not listen on port 1535 on host 0.0.0.0:java.net.BindException: Address already in use

Cause
The Dashboards feature installs the Apache Derby database on port 1535. If the port is already used by another process, the prddb folder is not created, and thus the Dashboards do not work or are not displayed.

Solution

1. Using PuTTY, log in to the virtual machine where you are installing the dashboard feature.
2. Run the following command to list all process IDs that use port 1535:

lsof -t -i:1535

3. End all the processes by using the following command for each process ID:

kill -9 <process id>

4. Reinstall the Persona-based UI.

"cp: cannot stat (logback-test.xml log file)" warning


Symptoms
During dashboard installation, you receive the following soft warning for the logback-test.xml log file:



Warning : cp: cannot stat
'/opt/IBM/WebSphere/AppServer/profiles/AppSrv01Host01/installedApps/Cell01Host01//dashboards.ear/oed-1.4.0.0.war/WEB-
INF/classes/logback-test.xml': No such file or directory
/opt/IBM/mdmui/dashboards/tnpmoed/bin/requiredModules.sh: line 375:
/opt/IBM/WebSphere/AppServer/profiles/AppSrv01Host01/installedApps/Cell01Host01//dashboards.ear/oed-1.4.0.0.war/WEB-
INF/classes/logback-test.xml: No such file or directory

Solution
You can ignore the warning message, since the warning does not impact the installation process. This will be fixed in a future release.

Data Sanity dashboard loading slow


Symptoms
Data Sanity dashboard is loading slowly.
Cause
The Collaboration Area History table contains too much historical data.
Solution
Clean the Collaboration Area History table.

Run the delete_old_versions.sh file in the $TOP/src/maintenance/old_versions directory.
Select Yes in the prompt to purge the collaboration history.

"Error getting details of saved search : java.lang.ClassCastException" error


Symptoms
When you click Saved Search, you get the following error. A similar error also occurs when you click Saved Template.

Error getting details of saved search : java.lang.ClassCastException:


com.ibm.rs.bean.MSearchOption$JaxbAccessorM_getReserved_by_setReserved_by_java_lang_String incompatible with
com.sun.xml.internal.bind.v2.runtime.reflect.Accessor

Solution
Follow these steps to add the -Dcom.sun.xml.bind.v2.bytecode.ClassTailor.noOptimize=true property to the generic JVM arguments on the WebSphere Application Server (WAS).

1. In the Administration Console select Servers.


2. Expand Server Type and select WebSphere application servers.
3. Click the name of your server.
4. Expand Java and Process Management and select Process Definition.
5. Under the Additional Properties section, click Java virtual machine .
6. Scroll and locate the text box for Generic JVM arguments.
7. Add -Dcom.sun.xml.bind.v2.bytecode.ClassTailor.noOptimize=true to the Generic JVM arguments, click OK, save the changes, and restart the server.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Troubleshooting the SAML SSO issues


Use the following topics to resolve common issues with the SAML SSO.

1. In the WebSphere® Application Server administrative console, go to Troubleshooting > Logs and trace > Open the application > Change log details levels.
2. To enable trace logs, add the following trace specification and select Enable log and trace correlation.

*=info: com.ibm.ws.security.web.*=all:
com.ibm.ws.security.saml.*=all:
com.ibm.websphere.wssecurity.*=all:
com.ibm.ws.wssecurity.*=all:
com.ibm.ws.wssecurity.platform.audit.*=off

"The relying party trust <acs_url_here> indicates that authentication requests sent by this
relying party will be signed but no signature is present" error in the identity provider log
Cause
Indicates that the service provider (relying party) has not signed the authentication request. By default, when you export service provider metadata from WebSphere Application Server, the value of the AuthnRequestsSigned property is set to true in the metadata XML file.
Solution
Ensure that the WebSphere Application Server signing certificate is correctly set and is exported with the metadata XML file.
If request signing is not required for development environments, try to set the value of the AuthnRequestsSigned property to false in the metadata XML file and import the file again in the identity provider.

"Error 203: SRVE0295E: Error reported: 203" error when accessing the Admin UI
Solution

1. Go to the $TOP/etc/default/common.properties file.


2. Set the value of the enable_referer_check property to false.



3. Restart the application and ensure that all services are running.

"CWPKI0428I: The signer might need to be added to the local trust store" error when
accessing the Admin UI
Solution
You can use the Retrieve from port option in the WebSphere Application Server administrative console to retrieve the certificate and resolve the problem. If you determine that the request is trusted, complete the following steps.

1. Log in to the administrative console.


2. Expand Security and click SSL certificate and key management.
3. Under the Configuration settings, click Manage endpoint security configurations.
4. Select the appropriate outbound configuration to get to the (cell):<CELL>:(node):<NODE> management scope.
5. Under the Related Items, click Key stores and certificates and click the NodeDefaultTrustStore key store.
6. Under the Additional Properties, click Signer certificates and Retrieve From Port.
7. Enter the host name, port, and alias.
8. Click Retrieve Signer Information and verify the certificate information.
9. Click Apply and Save.

On IdP session expiry with Windows authentication enabled for SAML, the Admin UI and Persona-based UI do not load after refreshing the browser.
Solution

1. Close the browser.


2. Open the browser and access the Admin UI or Persona-based UIs. Confirm that you can now log in to both the interfaces.

Admin UI or Persona-based UI login screen is displayed


Cause
The login page can be displayed for multiple reasons, for example, when the URL that is used for accessing the application does not match the pattern that is given in the SAML SSO configuration:
Admin UI URL - https://<hostname>:<port>/
Persona-based UI - https://<hostname>:<port>/mdm_ui/#/login
Solution

If the session has expired, refresh the URL in the browser to log in again. You can also increase the session timeout for the application.
If the SAML authentication has failed, check your SAML configuration.
Enable the following loggers to trace the issue:
1. Enable the Login.wpcs logger.
To enable the logger, you must add a logger and an appender for this LDAP logger in the $TOP/etc/default/log4j2.xml file. In the Login.wpcs script, the default logger is ldap.
For example, the category definition:

<Logger name="com.ibm.ccd.wpc_user_scripting.ldap" level="info" additivity="false">
   <AppenderRef ref="LDAPLOGGER" />
</Logger>

And the appender definition:

<RollingFile name="LDAPLOGGER" fileName="%LOG_DIR%/${svc_name}/ldap.log" append="true"
   filePattern="%LOG_DIR%/${svc_name}/ldap-%d{MM-dd-yyyy}-%i.log">
   <PatternLayout>
      <Pattern>%d [%t] %-5p %c %x- %m%n</Pattern>
   </PatternLayout>
   <Policies>
      <TimeBasedTriggeringPolicy />
      <SizeBasedTriggeringPolicy size="10 MB" />
   </Policies>
   <DefaultRolloverStrategy max="2" />
</RollingFile>
2. Enable the SSO request filter logger.
To enable the debug logger, you must set level="debug" in the $TOP/etc/default/log4j2.xml file for the following logger:

<Logger name="com.ibm.ccd.ui.sso.filters" level="debug" additivity="false">


<AppenderRef ref="SERVLET_FILTERS" />
</Logger>

3. Check the ipm.log file for SAML attributes and roles that are assigned to the user.

HTTP error message


Symptoms
On accessing the Admin UI or Persona-based UI, you get "HTTP Error 403 – Forbidden" error.
Cause
The error indicates that the SAML token has expired.
Solution

Refresh the URL in the browser and SAML login should work.
Increase the SAML token expiry on your SSO Partner.

Related concepts
Known issues and limitations



IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Troubleshooting the operator issues


Use the following topics to resolve common issues with the operator-based deployment.

Error while creating pods


Symptoms
Error creating: pods "productmaster-elasticsearch-fxxxxxx-" is forbidden: unable to
validate against any security context constraint: [spec.containers[0].securityContext.privileged:
Invalid value: true: Privileged containers are not allowed]
Solution
In the elasticsearch section of the ipm_12.0.x_cr.yaml file, update the value of the privileged property to "false" and apply again.
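An illustrative excerpt of the change, assuming the privileged property sits directly under the elasticsearch section as the symptom indicates (the surrounding structure is an assumption; follow the existing layout of your CR file):

elasticsearch:
  # privileged containers are rejected by the security context constraint
  privileged: "false"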

Database connection errors


Solution
Before you start deployment, ensure that the following connections are open.

OpenShift® or Kubernetes platforms and the database server


OpenShift or Kubernetes platforms and the Bluemix® registry (registry.ng.bluemix.net/product_master)

Failing IBM MQ pod


Symptoms
The IBM MQ pod fails with the following error:
Creating queue manager: Permission denied attempting to access an INI file.
Solution
If you are using NFS file storage for the IBM® MQ pod, then in the mq section of the ipm_12.0.x_cr.yaml file, change the value of the storage property to "block" and apply again.

No route to host (Host unreachable) error


Symptoms
Error opening socket to the server (dbserver.somedomain.com/xx.xx.xx.xx) on port 52332, with the following message in the ipm.log file of the Admin UI pod.
No route to host (Host unreachable)
Causes
The error indicates a database connection issue.
Solution
Verify whether database connection is getting established in your environment. You can run the following commands to test the database connection.

kubectl exec -it <pod name> -- /bin/bash


source /home/default/.bash_profile
cd $TOP/bin/
./test_db.sh

Deploying multiple Product Master pods failing


Symptoms
When you try to deploy multiple instances of the Product Master pods, the deployment fails.
Causes
The deployment fails because the exposed ports are already occupied by the first instance of the deployment.
Solution
In the ipm_12.0.x_cr.yaml file, update all the ext_port property values so that they are unique, and apply again. This avoids conflicts with the existing Product Master deployment.

Admin UI pod shows error after deployment


Symptoms
In some OpenShift environments, the Admin UI pod displays an error after deployment.



Solution
Run the following command on the OpenShift environment and refresh the page:

oc get --namespace openshift-ingress-operator ingresscontrollers/default --output jsonpath='{.status.endpointPublishingStrategy.type}'

If the output is HostNetwork, run the following command:

oc label namespace default 'network.openshift.io/policy-group=ingress'

Hazelcast service error


Symptoms
Though the Hazelcast service is running, the Scheduler pod is unable to connect and reports the following error:
java.lang.Exception: Hazelcast instance found to be null. Possible reason is unable to connect to any address.
Causes
The Hazelcast service is blocking the Scheduler service.
Solution
To open the Scheduler service to the Hazelcast service, apply the following hz-sch-networkpolicy.yaml file to each deployment.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: np-hazelcast2
  namespace: <>
spec:
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: productmaster-sch
    ports:
    - port: 5702
      protocol: TCP
  podSelector:
    matchLabels:
      app: productmaster-hazelcast
  policyTypes:
  - Ingress
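One way to apply the policy, using kubectl (fill in the namespace placeholder as in the file above):

kubectl apply -f hz-sch-networkpolicy.yaml -n <namespace>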

MongoDB pod-related
Symptoms
The MongoDB pod fails to run with either of the following errors:

Another mongod instance is already running on the /data/db directory, terminating

No space left on device

Solution
Change the storage class from IBM Cloud File Storage (ibmc-file-gold-gid) to IBM Cloud Block Storage (ibmc-block-gold) in the Persistent Volume Claim for
MongoDB on the IBM Cloud Public (ROKS) cluster.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Troubleshooting the docstore issues


Use the following topics to resolve common issues with the docstore.

Symptoms
When you try to run the IBM® MDMPIM DocStore Maintenance report, the report fails with the following message in the console.



The Job did not sucessfully.
CWREP0009E: There was an error generating this report.
Could not execute query.

Causes
If you create a company in IBM Db2® and then migrate the database to Oracle, the database query script that is present in the docstore does not get updated because it is database-dependent, and thus the report fails.

Resolving the problem


Run the $TOP/bin/db/install_Maintenance_Reports.sh script to resolve the issue:

install_Maintenance_Reports.sh --code=<name of the company>

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Troubleshooting import job schedule issues


Use the following topic to resolve common issues with the import job schedule.

Symptoms
When you click Import, you get the following error.
Could not initialize class sun.awt.X11GraphicsEnvironment (initialization failure)

Resolving the problem


1. Using PuTTY, log in with Admin credentials.
2. Go to the $TOP folder by using the following command.

cd $TOP

3. Go to the go folder by using the following command.

cd bin/go

4. Stop all the local services by using the following command.

./stop_local.sh

5. After all the services stop, run the following command.

unset DISPLAY

6. Start all the local services by using the following command.

./start_local.sh

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Troubleshooting Admin UI "Error 203" issue

Use the following topic to resolve the "Error 203" issue in the Admin UI.

Symptoms
When you try to log in to the Admin UI, you get the following error.
Error 203: SRVE0295E: Error reported: 203

Causes
A reverse proxy such as IBM® HTTP Server or Red Hat® OpenShift® route is used with the Admin UI application.

Resolving the problem


1. Open the common.properties file located in the $TOP/etc/default folder.
2. Edit the value of the enable_referer_check property to "false" and save the file.



3. Restart the application and ensure that all services are running.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Tools for troubleshooting


Tools are available to collect and analyze both system and performance diagnostic data.

Troubleshooting in Product Master focuses primarily on analyzing system log files, but the following tools are available to help you during the troubleshooting process.

Diagnostic data collection


Using pimSupport.sh script
Using Db2® server tools
Using Oracle Database server tools
Using IBM® Support Assistant
Monitoring Java™ virtual machine (JVM)
Using Sumall utility
Using LTA tool
Using the built-in cache mechanism
Using the built-in profiling mechanism

Diagnostic data collection


Using pimSupport.sh script
Use the pimSupport.sh script to collect basic environment configuration and system status information, along with Product Master and application server log files, as a starting point when you require assistance from support. For more information, see pimSupport.sh script.

Using Db2 server tools


Using a single command for each tool, basic configuration and error message logs (db2diag.log) can be collected from the Db2 server.
db2support archives all of the files into a compressed file with either the file name that is specified by the user (-o option) or the default file name, db2support.zip.

db2support . -d MyDbName -c

db2pd collects all the db2pd diagnostics. For more information, see Monitoring and troubleshooting using db2pd command.
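For example, a common invocation that collects the full set of db2pd data (the -everything option is documented for the db2pd command; scope the collection down with options such as -db MyDbName if you want data for one database only):

db2pd -everything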
Using Oracle Database server tools
The database instance stores various types of diagnostic data in the Automatic Diagnostic Repository. Especially when performance problems are investigated, IBM
support requires Automatic Diagnostic Repository reports. You can query V$DIAG_INFO to see where to find various logs.

select * from v$diag_info;


1;Diag Enabled;TRUE
1;ADR Base;/opt/oracle/11
1;ADR Home;/opt/oracle/11/diag/rdbms/jora11g/jora11g
1;Diag Trace;/opt/oracle/11/diag/rdbms/jora11g/jora11g/trace
1;Diag Alert;/opt/oracle/11/diag/rdbms/jora11g/jora11g/alert
1;Diag Incident;/opt/oracle/11/diag/rdbms/jora11g/jora11g/incident
1;Diag Cdump;/opt/oracle/11/admin/jora11g/cdump
1;Health Monitor;/opt/oracle/11/diag/rdbms/jora11g/jora11g/hm
1;Default Trace File;/opt/oracle/11/diag/rdbms/jora11g/jora11g/trace/jora11g_s011_1114164.trc
1;Active Problem Count;3
1;Active Incident Count;186

By default, the Oracle Database generates snapshots once every hour and retains the statistics in the workload repository for 7 days. When necessary, you can use the DBMS_WORKLOAD_REPOSITORY procedures to manually create, drop, and modify the snapshots. To run these procedures, a user must be granted the database administrator role. For more information, see Managing the Automatic Workload Repository.
To generate an HTML or text Automatic Workload Repository report for a range of snapshot IDs, run the awrrpt.sql script at the SQL prompt.

sqlplus system/manager @$ORACLE_HOME/rdbms/admin/awrrpt.sql

You can then specify snapshots to be used for the report in an interactive mode and generate the workload repository report.

First, you need to specify whether you want an HTML or a text report.
Enter value for report_type: html

Specify the number of days for which you want to list snapshot Ids.
Enter value for num_days: 2

After the list displays, you are prompted for the beginning and ending snapshot Id for the workload repository report.
Enter value for begin_snap: 150
Enter value for end_snap: 160

Next, accept the default report name or enter a report name. The default name is accepted in the following example:
Enter value for report_name:
Using the report name awrrpt_1_150_160

Using IBM Support Assistant


For more information, see Collecting data with the IBM Support Assistant.

Monitoring Java virtual machine (JVM)


For more information, see Monitoring Java virtual machine (JVM).

Using Sumall utility


For more information, see Using Sumall utility.

Using LTA tool


For more information, see Using LTA tool.

Using the built-in cache mechanism


Product Master has a built-in cache mechanism for some Product Master objects. Ensuring high cache hit ratios can be the key to optimal performance. If you need to monitor cache hit utilization, you can trigger a cache snapshot by running the following command.
$JAVA_RT com.ibm.ccd.common.wpcsupport.util.SupportUtil --cmd=getRunTimeCacheDetails

For more information, see Cache management.

Using the built-in profiling mechanism


For more information, see Using profiling mechanism.

Collecting data with the IBM Support Assistant


You can use the IBM Support Assistant to collect your data.
Monitoring Java virtual machine (JVM)
IBM Health Center can be used with the Product Master as a diagnostic tool for monitoring the status of the JVM of the application.
Using Sumall utility
Sumall utility collects SQL workload and performance statistics.
Using LTA tool
Analyze your log files with the Log and Trace Analyzer (LTA) of the IBM Support Assistant to analyze and troubleshoot all Product Master issues.
Using profiling mechanism
The Product Master profiling mechanism provides information about bottlenecks, especially when investigating performance problems due to custom
implementations (any scripted implementations).
Contacting IBM Support
IBM Support provides assistance with product defects, answering FAQs, and performing rediscovery.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Collecting data with the IBM Support Assistant


You can use the IBM® Support Assistant to collect your data.

Before you begin


Before you collect data with the IBM Support Assistant, you must install the data collection add-on.

Procedure
1. In the IBM Support Assistant, on the Home panel, click Analyze Problem.
2. On the Collect Data tab, click the Select Collectors tab.
3. Click the icon to expand the category list. Select an option based on your requirement.
4. Click the Add button located on the right side of the Select Collectors panel to add a job to the Collector Queue panel.
5. In the Collector Queue panel, click the job that you just added, then click the Collect All button.
6. In the User Input window, enter the root directory of Product Master.
7. Click the OK to collect your data.
8. In the Collector Queue panel click the job that you created, then click the View Details button to view the status of the job.
9. Optional: In the User Input window regarding feedback, you can provide feedback to the IBM Support Assistant team by clicking the Yes button.
10. In the User Input window regarding FTP logs, select an option, then click the OK button.
11. Retrieve the location of the data collector results from the bottom of the Current Status panel.

Results
Send the data collector results to IBM Software Support. For information about collecting data and contacting IBM Software Support, see Contacting IBM Support.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)



Monitoring Java virtual machine (JVM)
IBM® Health Center can be used with the Product Master as a diagnostic tool for monitoring the status of the JVM of the application.

About this task


IBM Health Center provides information on garbage collection, locking, and profiling that helps in troubleshooting potential problems in Product Master. For more information, see IBM Monitoring and Diagnostic Tools - Health Center.

Procedure
1. Download and install the Health Center Agent on the JVM of the server. For more information, see Installing the Health Center agent.
2. Start the Java™ applications with the IBM Health Center agent enabled. For more information, see Starting a Java application with the Health Center agent enabled.
Product Master can be enabled for the IBM Health Center in either of the following ways. To find the correct option to use for your specific Java version, see Platform requirements.

Option 1: WebSphere® Application Server Admin console
a. Log in to the WebSphere Application Server Admin console.
b. Click Servers > Server Types > WebSphere application servers and choose your server.
c. Click Java and Process Management > Process definition.
d. Under the Additional Properties section, select Java Virtual Machine.
e. Provide the Java argument to enable the IBM Health Center agent in the Generic JVM arguments section. For example, if you are using Java 6 SR4, provide the following code in the Generic JVM arguments section:

-agentlib:healthcenter -Xtrace:output=perfmon.%p.out

Note: To disable the IBM Health Center agent in the appserver by using this method, remove the added option from the Generic JVM arguments section and continue to the next step.
f. Restart the server.

Option 2: Setting the IBM_JAVA_OPTIONS environment variable
a. Set the IBM_JAVA_OPTIONS environment variable. For example, on Linux® using Java 6 SR4, set the variable to the following value:

export IBM_JAVA_OPTIONS="-agentlib:healthcenter -Xtrace:output=perfmon.%p.out"

Note: This enables all of the Product Master services for the IBM Health Center. By default, the first service is assigned port 1972; the port increments by 1 for each subsequent service that is started. To disable the IBM Health Center agent by using this method, remove the environment variable: unset IBM_JAVA_OPTIONS.
b. Restart the server.
3. Connect to the Product Master by using the IBM Health Center client. For more information, see Connecting to a Java application using the Health Center client.
Note: If you have more than one service enabled for the IBM Health center, ensure that you select Scan next 100 ports for available connections. To determine
which service you are connected to, select the Environment > Configuration tabs and check the Java command line field. The -Dsvc_name shows the name of the
service you are connected to.

What to do next
Use IBM Health Center profiles to identify performance bottlenecks in Product Master and custom code. Sort by the Tree(%) column in descending order to identify the methods that consume the most time. The following are some common methods that can show up in the profiles and are helpful for performance analysis:

com.ibm.ccd.common.interpreter.engine.Script.run
com.ibm.ccd.element.common.EntryPopulater.runContainerScript
com.ibm.ccd.element.common.AbstractEntry.runEntryBuildScript
com.ibm.ccd.connectivity.common.CtgToDb.execute
com.ibm.ccd.workflow.common.events.BeginStepEvent.runStepScript
com.ibm.ccd.workflow.common.events.EndStepEvent.runStepScript
com.ibm.ccd.element.common.EntryPopulator.populateEntry

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Using Sumall utility


Sumall utility collects SQL workload and performance statistics.

Before you begin


Set the environment variable $JAVA_RT, by using the following command to source the contents of the compat.sh file:

. $TOP/bin/compat.sh

About this task



The Sumall utility parses the db.log files and generates a statistical summary of tracked SQL run times. While investigating performance problems, it is recommended to track all the SQL statements by setting the DB appender to debug mode. Only by seeing the full SQL workload can you evaluate the relative impact of specific SQL statements. Not only long-running SQLs but also fast-running SQLs can impact performance, and these are seen only when the DB appender is tracked in debug mode.

Procedure
Use the following command to run the Sumall utility:

$JAVA_RT com.ibm.ccd.common.wpcsupport.util.Sumall <db.log files>

Results
By default, only the SQLs that are not marked as DELAYED are evaluated. Equal SQLs are aggregated into one line. Dynamic queries are differentiated by the first n characters, as defined by the leng option (default = 70). Parameter values are replaced with '?', so that similar SQLs can be aggregated.
For each "unique SQL", the following statistical values are generated:

Total runtime in milliseconds: sum(ms)
Number of executions: cnt
Average runtime in milliseconds: avg(ms)
Coefficient of variation for the tracked SQLs: CV
SQL identifier: QUERY

The SQL statistics are sorted by overall runtime in ascending order. An aggregated statistical overview is printed at the end, which shows:

Monitored timeperiod (MT) - the time range for which SQLs were tracked.
Aggregated total statistics - with a %MT calculation that represents the total tracked SQL time in relation to the monitored time period.

What to do next
Use the following options to modify the default behavior.

$JAVA_RT com.ibm.ccd.common.wpcsupport.util.Sumall
[leng=n characters] [delayed] [printsql] [cvon] [cleanoff]
[fromtime="yyyy-MM-dd HH:mm:ss[,SSS]"] [totime="yyyy-MM-dd HH:mm:ss[,SSS]"]
[threadname="<threadname>"] [threadlist] [version] [h|help] <db.log files>

Where,

leng
Defines how many characters from the beginning of the SQL are used for statistical aggregation.

printsql
Extract SQLs and runtime in sequential order, print one line per SQL.
delayed
Only SQLs that are marked as delayed are analyzed.
cvon
Provides a normalized metric on how much a single runtime varies compared to average runtime.
cleanoff
When set, parameter values are not replaced in the Dynamic SQL strings. Depending on the leng, might cause dynamic queries of the same type to be reported on
the separate lines, in case the individual parameter values are contained in the first n (leng) characters.
fromtime
Analyze only SQLs with a timestamp > fromtime, required input format: yyyy-MM-dd HH:mm:ss.
totime
Analyze only SQLs with a timestamp < totime, required input format: yyyy-MM-dd HH:mm:ss.
threadname
SQLs are analyzed only for the provided thread. Use a threadname as listed by the threadlist.
threadlist
Print sqlstatistics per thread ordered by total SQLruntime.
version
Print the current version.

Sample output:
java -jar Sumall.jar db.log* threadname="jsp_57678: moveEntriesToNextStep.wpc"



130 130 1,0 GEN_CTG_CSA_CAT_SYS_ATTR_GETBYCATEGORYID] SELECT * FROM tctg_csa_cat_s
134 130 1,0 GEN_CTG_CHI_CATEGORY_HIERARCHY_GETBYCHILDID] SELECT * FROM tctg_chi_ca
134 130 1,0 GEN_CTG_CFP_CAT_FULL_PATHS_GETBYCATEGORYID] SELECT * FROM tctg_cfp_cat
158 1 158,0 GET_EVENT_ENTRY_MATCHING_SIGNATURE] SELECT wee_entry_id FROM twfl_wfe_
219 235 0,9 GEN_SEC_SCU_USER_GETBYID] SELECT * FROM tsec_scu_user WHERE scu_user_i
223 220 1,0 GEN_SEC_CMP_COMPANY_GETBYID] SELECT * FROM tsec_cmp_company WHERE cmp_
267 278 1,0 GEN_CTG_ICM_ITEM_CATEGORY_MAP_GETBYITEMVERSION] SELECT * FROM tctg_icm
276 250 1,1 GEN_CTG_ITD_ITEM_DETAIL_GETBYIDVERSION] SELECT * FROM tctg_itd_item_de
431 428 1,0 GEN_CTG_ITM_ITEM_GETBYPRIMARYKEYVERSION] SELECT * FROM tctg_itm_item W
-----------------------------------------------
sum(ms) cnt avg(ms) QUERY

Monitored timeperiod (MT): 2015-07-24 10:37:54,467 - 2015-07-24 10:37:57,563 : 0,00 h | 0,00 m | 3,10 s
Statistical values for all queries
Total Count Average Total Total Runtime % of MT
(ms) (cnt) (avg ms) (minutes) (hours) (%)
2862 2733 1 0,0 0,0 92,4

IBM Product Master 12.0 Fix Pack 8



Operating Systems: AIX, Linux, and Windows (Workbench only)

Using LTA tool


Use the Log and Trace Analyzer (LTA) of the IBM® Support Assistant to analyze and troubleshoot Product Master issues.

Before you begin


Install the IBM Support Assistant and enable the LTA.
Make the log file accessible for import into the LTA.
Install the Common Base Event adapter plug-in.

About this task


You use the LTA along with the symptom catalog as a centralized location to retrieve and analyze information about log file messages. You can import into the LTA only log
files that use the Common Base Event format. Common Base Event is an IBM implementation of the Web Services Distributed Management standard. When you install the
Common Base Event adapter, every time you import your log files into the LTA, they are automatically reformatted into the Common Base Event format.

Procedure
1. In the $TOP/jars directory, open the Common Base Event adapter plug-in file.

com.ibm.ccd.logging.parsers_n.n.n.jar

2. Copy the .jar file into your IBM Support Assistant plugins directory.
3. Open the IBM Support Assistant, and then click Analyze Problem.
4. In the Analyze Problem tab, click the Tools tab.
5. In the Tools Catalog panel, select Log Analyzer.
6. Click Launch in the main panel to launch the LTA.
7. Import the symptom catalog:
a. On the File menu, click Import Symptom Catalog.
b. Select From Local host and specify the xxxx.symptom file from within the $TOP/locales/en_US/system_resource_bundle directory where xxxx is the name of
your symptom catalog.
c. Click Finish to import the symptom catalog.
8. Import your log files:
a. On the File menu, click Import Log File > Add.
b. Under Log types, in the Filter panel, select IBM Product Master log.
c. Under Log details, in the Enter the properties of the log files panel, click the Details tab.
d. In the IBM Product Master log file path field, enter the fully qualified file name and directory of the log file that you want to import.
e. Optional: On the Log View tab that corresponds to the log file you imported, sort the messages by clicking on a column heading. Sorting the messages by
Severity collects and displays all messages by severity level, which can help you to determine where to start your analysis.
9. Right-click the message that you want to analyze, and click Analyze Selection to retrieve and display all available information from the symptom catalog.
10. From the Symptom Analysis Results View, click the Recommendations and actions tab to view the message descriptions and possible resolutions.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Using profiling mechanism


The Product Master profiling mechanism provides information about bottlenecks, especially when investigating performance problems due to custom implementations
(any scripted implementations).

You need to enable profiling by setting the profiling_info_collection_depth parameter in the common.properties file to a sufficiently high number (for example, 50). To
profile scheduled jobs, the profiling_scheduled_jobs parameter in the common.properties file must be set to full.
Note: Delete all former profiling information first; otherwise, retrieval of profiling information might be slow and a large amount of database disk space is consumed.
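For reference, a minimal sketch of the corresponding entries in the common.properties file (the depth value is the example from above):

profiling_info_collection_depth=50
profiling_scheduled_jobs=full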
To access the collected performance information in the Product Master user interface, click System Administrator > Performance Info > … menu

Profiling
To access profiling in the Product Master user interface, click System Administrator > Performance Info > Profiling. Profiling displays the function calls in which the most
time is spent. Profiling information is accessed in either of the following ways:

System Administrator > Performance Info > Profiling, or
for scheduled jobs, click View Details on the Schedule Status Information page and then click Performance data.

Database performance
To access profiling in the Product Master user interface for the database performance, click System Administrator > Performance Info > DataBase Performance.



The database performance view provides a statistical overview for each static query, and also for the queries that are associated with each page. Analyzing the db.log file
with the sumall.awk script is more straightforward, and dynamic queries can be differentiated there.

Clicking a specific query displays the query automatically in the DB Admin window, and up to three argument sets can be retrieved.
Note: Dynamic queries are not tracked.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Contacting IBM Support


IBM® Support provides assistance with product defects, answering FAQs, and performing rediscovery.

Before you begin


You can use Fix Central to find the fixes that are recommended by IBM Support for various products. With Fix Central, you can search, select, order, and download fixes for
your system with a choice of delivery options.

To download a fix pack or an interim fix for IBM Product Master, browse to Fix Central.

About this task


After trying to find your answer or solution by using other self-help options, you can contact IBM Support. Your company must have an active IBM software subscription
and support contract, and you must be authorized to submit problems to IBM. The type of software subscription and support contract that you need depends on the type
of product you have. For information about the types of available support, see Support details for IBM Product Master.

Procedure
Complete the following steps to contact IBM Support with a problem:

1. Define the problem, gather background information, and determine the severity of the problem. To determine the severity level, you need to understand and assess
the business impact of the problem you are reporting. Use the following criteria:
Table 1. Severity Descriptions

Severity 1: System or Service Down
A business-critical software component is inoperable or a critical interface has failed. This usually applies to a production environment and indicates that you are unable to use the program, resulting in a critical impact on operations. This condition requires an immediate solution.
Note: We will work with you 7x24 to resolve critical problems, providing you have a technical resource available to work those hours.
Example: The company website is down, affecting all users. A production server is down.

Severity 2: Significant business impact
A software component is severely restricted in its use, or you are in jeopardy of missing business deadlines because of problems with a new application roll-out.
Example: All users of an application receive an error when attempting to access a service.

Severity 3: Some business impact
The program is usable, but less significant features (not critical to operations) are unavailable.
Example: A client cannot connect to a server.

Severity 4: Minimal business impact
A non-critical software component is malfunctioning, causing minimal impact, or a non-technical request is made.
Example: Documentation is incorrect, or additional documentation is requested.
2. Gather diagnostic information. For example,
What software versions were you running when the problem occurred?
Do you have logs, traces, and messages that are related to the problem symptoms? IBM Support is likely to ask for this information.
Can the problem be re-created? If so, what steps led to the failure?
Have any changes been made to the system (for example, hardware, operating system, networking software, and so on)?
Are you currently using a workaround for this problem? If so, please be prepared to explain it when you report the problem.
3. Submit the problem to IBM Support.

IBM Support Assistant (ISA)
Browse to IBM Support Assistant.
Online
Browse to Service requests and support cases to open, update, and view all your Service Requests.
Phone
For the phone number to call in your country, see the IBM Directory of worldwide contacts.

Results
If the problem you submit is for a software defect or for missing or inaccurate documentation, IBM Support creates an Authorized Program Analysis Report (APAR). The
APAR describes the problem in detail. Whenever possible, IBM Support provides a workaround that you can implement until the APAR is resolved and a fix is delivered.



IBM publishes resolved APARs on the IBM Support website daily so that other users who experience the same problem can benefit from the same resolution. You can
subscribe to the APAR from the IBM Support to receive content updates and delivery notices of the APAR.

What to do next
To stay informed of important information about the IBM products that you use, you can subscribe to updates. You can subscribe to updates by using one of two
approaches:

RSS feeds and social media subscriptions
Download and install an RSS reader, and use your reader to subscribe to the IBM Product Master feed.
My Notifications
With My Notifications, you can subscribe to support updates for IBM Product Master and customize the delivery methods that best suit your needs.

Related information
IBM Support
IBM Support Guide
Fix Central
Support details for IBM Product Master
IBM Support Assistant
Service requests and support cases
IBM Directory of worldwide contacts
My Notifications
IBM Product Master feed

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Log files
Log files generally record Product Master runtime events, including exception traces and error messages and can help you resolve issues.

The Product Master log files are in the <install dir>/logs folder and can provide helpful debugging information.

For troubleshooting and analysis of your log files, you can use the Log and Trace Analyzer (LTA). For message troubleshooting and analysis, you can use the symptom
catalog. All Product Master log file messages are collected and listed in the symptom catalog to create a centralized knowledge base of issues and resolutions. The
symptom catalog uses the XML file format and contains specific message information to help you identify issues by providing the following information about the message:

Message ID numbers
Descriptions
Time stamps
Thread IDs
Severity levels
Possible resolutions
Source components

The symptom catalog file is named xxxx, where xxxx is the name of your catalog, and is located in the $TOP/locales/en_US/system_resource_bundle folder. For more
information, see Using LTA tool.

Runtime log files


Each product service includes several different runtime-generated log files that record certain events within each of the services. The svc.out log file is a runtime log file
that each service creates, and should be one of the first log files that you analyze when troubleshooting a problem. After starting a service, view the svc.out file for errors
or exceptions to ensure that the service started correctly.

The svc.out log file is located in the following folders:

$TOP/logs/service/service_Name

Where,

service - Can be admin, appsvr, default, eventprocessor, queuemanager, rmi, scheduler, or workflowengine.
service_Name - The unique service name of a service. For example, the svc.out file of a scheduler service named scheduler01 (an illustrative name) would be in the $TOP/logs/scheduler/scheduler01 folder.

Types of log files


The following tables define each type of log file that is under each service folder and refer to Product Master as the 'Application'. To view the Logs folder structure, see the
"Fetching the log folder structure" section.
Table 1. Admin UI log files
Log file name Contains
ehcache.log Messages for EHCache third-party component.
gdsmsg.log Errors, events, and messages for message processing from the gdsmsg component.
mdm_cache.log Errors, events, and diagnostic information for the system cache.

svc.err Runtime errors for each JVM. It is the standard error (STDERR) output.
svc.out Runtime errors for each service created, and it should be one of the first log files that you analyze when troubleshooting a problem. After starting a
service, view the svc.out file for errors or exceptions to ensure that the service started correctly. For more information, see Log files.
Table 2. Persona-based UI log files
Log file name Contains
derby_installation_logs Contains the Apache Derby database installation logs.
engine.log Runtime errors for Dashboards OED engine.
indexer.log Runtime errors for the Free text search services.
installation.log Contains the Persona-based UI configuration and installation logs.
pim_collector.log Runtime errors for the Free text search services.
attributes.log Runtime errors for the machine learning attributes service APIs.
categorization.log Runtime errors for the machine learning categorization service APIs.
controller.log Runtime errors for the machine learning controller APIs.
standardization.log Runtime errors for the machine learning standardization service APIs.
restapi.log Runtime errors for the Product Master REST APIs.

Log file format


You can set the log file format of all of your log files to either the log4j format or the Common Base Event (CBE) format.

By default, all messages are logged by using the log4j PatternLayout format. The log4j format provides the Product Master logging infrastructure with an easy-to-read
output of one message per line in a .log file. The optional CBE log file format, which is used with the Log and Trace Analyzer (LTA), produces messages in the XML
file format and is more appropriate for machine reading.

You specify the log format with the log_format parameter in the common.properties file. The following are the valid values of the log_format parameter.

PatternLayout
The default log4j log file format.
CBELayout
The CBE log file format.
CBELayout_PatternLayout
Logs messages in both the log4j and the CBE log file formats.
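For example, to log messages in both formats while you troubleshoot with the LTA, the entry in the common.properties file might read as follows (a sketch):

log_format=CBELayout_PatternLayout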

Note: The CBE log file format creates a performance overhead and is recommended for use only when instructed by IBM Software Support, for example, when performing
log analysis and troubleshooting with the LTA.

Customizing log4j log files


You can customize your log4j formatted log files including file location, size, and number of backup files.

Use the log formatting options to modify and manage your log4j log files. Edit the log4j2.xml file located in the $TOP/etc/default folder.

Generated file location


To change the location of a generated log file, change the fileName and filePattern parameters in the specified log configuration file.
Example

<RollingFile name="EXCEP" fileName="$TOP/logs/webserver_db.log" append="true"
    filePattern="/opt/MDM/logs/${svc_name}/exception-%d{MM-dd-yyyy}-%i.log">

Log file size


To control when the log file begins to truncate, change the Size parameter value in the SizeBasedTriggeringPolicy tag.
In this example, the maximum file size is 10 MB.

<SizeBasedTriggeringPolicy Size="10MB"/>

Maximum backup files


To control the number of files you store, change the max parameter value.
Example

<DefaultRolloverStrategy max="2" />

Ignoring a particular exception through the log4j2.xml file


If you want to ignore a particular type of exception that is logged in the log files, add a filter in the log4j2.xml file located in the $TOP/etc/default folder. By adding the
filter, you can prevent warnings and less severe exceptions from being logged.

In the log4j2.xml file, add the following to filter out a particular exception:

<filter class="org.apache.log4j.varia.StringMatchFilter">
<param name="StringToMatch" value="ExceptionName" />
<param name="AcceptOnMatch" value="false" />
</filter>

ExceptionName - The name of the exception that you do not want to be logged.

AcceptOnMatch - Set to false so that whenever the filter finds an exception with the specified exception name, the exception is not logged.

<RollingFile name="EXCEP" fileName="/opt/MDM/logs/${svc_name}/exception.log" append="true"
    filePattern="/opt/MDM/logs/${svc_name}/exception-%d{MM-dd-yyyy}-%i.log">
    <PatternLayout>
        <Pattern>%d [%t] %-5p %c %x- %m%n</Pattern>
    </PatternLayout>
    <Policies>
        <TimeBasedTriggeringPolicy />
        <SizeBasedTriggeringPolicy size="10 MB" />
    </Policies>
    <DefaultRolloverStrategy max="2" />
    <filter class="org.apache.log4j.varia.StringMatchFilter">
        <param name="StringToMatch" value="ExceptionName" />
        <param name="AcceptOnMatch" value="false" />
    </filter>
</RollingFile>

Fetching the log folder structure


Proceed as follows:

1. Stop the application.


2. Upgrade the application to the latest build.
3. Run the configureEnv.sh file, and overwrite the following files:
log4j2.xml.bak
log_cbe_xml.bak
log_cbe_pattern_xml.bak
4. Take a backup of the existing Logs folder by using the following command:

cp -r $TOP/logs <destination_directory>
OR
cp -r $TOP/opt <destination_directory>

Example

cp -r $TOP/logs /bkp_folder

5. Delete the existing Logs folder by using the following command:

rm -rf $TOP/logs
OR
rm -rf $TOP/opt

6. Run the ./start_local.sh file. After the services are started, the folder structure resembles the log files that are listed in the "Types of log files" section.

Viewing log files


You can view your log files from the user interface, the log file directory, or with the Log and Trace Analyzer (LTA).
Enable custom logging
You can add custom loggers and appenders in the log4j2.xml file to enable custom logging.
Debugging Persona-based UI logs
You can troubleshoot and debug logs for the Persona-based UI.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Viewing log files


You can view your log files from the user interface, the log file directory, or with the Log and Trace Analyzer (LTA).

Before you begin


Before you access your log files with the LTA, you must download and install the IBM® Support Assistant and enable the LTA plug-in.

Procedure
View the log files by using one of the following methods: user interface, text editor, or LTA.

User interface
a. Click System Administrator > Log Files. The Log Directory Listing pane appears.
b. Select the log files that you want to view.
c. In the Log Directory Listing pane, select any of the following options, and click Submit.
Select the Entire Log File check box to view the entire file.
Specify the maximum number of lines that you want to view in the text field.

Text editor
a. Browse to the logs directory.
Admin UI: All log files are in the $TOP/logs directory.
Persona-based UI: The installer logs are in the $TOP_MDMUI/logs directory, and the log files for services are in the $TOP_MDMUI/logs/service_name directory.
b. Open the appropriate file directory to locate the log file that you want to view.
c. Use any text editor to open and view your log files.

LTA
For instructions on how to view log files with the LTA, see Using LTA tool.

Viewing the Global Data Synchronization log files


You can view log files from the Administration Console of Global Data Synchronization.
Collecting log files
You can choose whether to collect all the required Product Master and WebSphere® Application Server log files.
Troubleshooting script-related errors
Often you see errors in the exception or application log files that are caused by scripts.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Viewing the Global Data Synchronization log files


You can view log files from the Administration Console of Global Data Synchronization.

Procedure
View the log files by using one of the following methods: user interface or text editor.

User interface
a. From the menu bar, select Administration > View Log Files.
b. Click the log file that you want to view. The file opens for viewing.
Note: You cannot edit the displayed file from the Administration Console.

Text editor
a. Go to the $TOP/logs directory for log4j formatted log files.
b. Open the appropriate file directory to locate the log file that you want to view.
c. Use any text editor to open and view your log files.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Collecting log files


You can choose whether to collect all the required Product Master and WebSphere® Application Server log files.

Before you begin


To use the date and time-based log entry filtering mechanism, ensure that your log file format includes the date and time format pattern.

About this task


Using the pimSupport.sh script file, you can collect all the Product Master and WebSphere Application Server log files.

Procedure
1. Open a command prompt.
2. To collect all the log files, enter the following command:

pimSupport.sh --collectpimlogs=[all | appsvr]

3. Optional: To collect data within a specified time range, enter the following command:

./pimSupport.sh --collectpimlogs="all" --fromtime=<MM_dd_yyyy__HH_mm_SS> --totime=<MM_dd_yyyy__HH_mm_SS>

Where,

--fromtime is the start date.
--totime is the end date.

The fromtime and totime values depend on the ConversionPattern specification in the $TOP/etc/default/log4j2.xml or
$TOP/etc/default/log_cbe_pattern.xml file. The log_format parameter in the $TOP/etc/default/common.properties file specifies which one of these files is used.



Note: Do not modify the ConversionPattern string in the file that is associated with the log_format parameter before you use the time-based filtering mechanism
to collect the log entries.

Example
The following example command collects data within a specified time range.

./pimSupport.sh --collectpimlogs="all" --fromtime="10_31_2020__00_18" --totime="11_13_2020__13_18"

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Troubleshooting script-related errors


Often you see errors in the exception or application log files that are caused by scripts.

Procedure
Use the following steps to get details on the script causing the errors.

1. Check whether it is a scheduler error by looking at the corresponding entry in the scheduler_default or scheduler_exception logs. If so, you can research
the issue in the scheduler log.
2. If not, then it is most likely caused by a user action. The user actions can be traced from System Administrator > Audit logs by selecting the
appropriate time when you experienced the error.
3. If you need more help, you can also increase the debug level in the appsvr logs and see whether you can get more information about this error. To increase the
debug level, perform the following steps:
a. Look for the logs directory under the Product Master installation directory on the server.
b. Locate and edit the appsvr.log4j2.xml file to change the level value to "debug" for the following austin.error.AustinException logger.

<Logger name="austin.error.AustinException" level="debug" additivity="false">
    <AppenderRef ref="EXCEP" />
</Logger>

c. Save your change. After this change is done, you should see more information about this error in the appsvr logs, which helps you to better locate the error.
4. You can change the suspect scripts to add the following code:

catchError(e) {
    // ...the entire script goes here...
}
wt = createOtherOut("test.out");
wt.println("script=" + script_name); // enter the name of this script
wt.println("error message = " + e);
wt.println("user name = " + getCurrentUserName());
wt.save("temp/test.out" + script_name); // the script name goes here, along with any docstore path
wt.close();

This creates a new file for each script run.


Note: Run this for only a short time because it might fill up the logs and create a memory issue. Delete the temporary docstore files after your testing is complete.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Enable custom logging


You can add custom loggers and appenders in the log4j2.xml file to enable custom logging.

Procedure
Use the following steps to enable custom logging.

1. Go to the $TOP/etc/default folder.


2. Edit the log4j2.xml file.
3. Go to the <Loggers> section.
4. Add the following custom logger.

<Loggers>
<Logger name="package_name" level="debug" additivity="false">
<AppenderRef ref="DEFAULT" />
</Logger>
</Loggers>

The default appender in the log4j2.xml file is configured to log the logging events in the ipm.log file.
5. Optional: To log the custom logs in a different log file (for example, custom.log), you can add the following appender and point a logger at it, as sketched after the appender description.



<RollingFile name="CUSTOM" fileName="/opt/MDM/logs/${svc_name}/custom.log" append="true"
filePattern="%LOG_DIR%/${svc_name}/ipm-%d{MM-dd-yyyy}-%i.log">
<PatternLayout>
<ScriptPatternSelector defaultPattern="%d [%t] %-5p %c %x- %m%n">
<ScriptRef ref="decideLoggingPattern"/>
<PatternMatch key="showIpAddressAndUserName" pattern="%d
ip_address=[%X{loggedInUserIp}] user=[%X{UserName}] [%t] %-5p %c %x- %m%n"/>
</ScriptPatternSelector>
</PatternLayout>
<Policies>
<TimeBasedTriggeringPolicy />
<SizeBasedTriggeringPolicy size="10 MB" />
</Policies>
<DefaultRolloverStrategy max="2" />
</RollingFile>

loggedInUserIp - Visible if the enable_client_ip_username_logging property is enabled in the common.properties file. For more information, see
common.properties file parameters. This property logs user details, such as the IP address and user name, into the ipm.log file of the application server. You need to
overwrite the log4j2.xml file to enable this; when you run the configureEnv.sh script file, enter "y" when prompted for the log4j2.xml file.
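The procedure shows only the appender definition; to route events to it, a logger must reference the CUSTOM appender by name. A minimal sketch, assuming the com.test package from the example that follows:

<Loggers>
    <Logger name="com.test" level="debug" additivity="false">
        <AppenderRef ref="CUSTOM" />
    </Logger>
</Loggers>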

Example
<Loggers>
<Logger name="com.test" level="debug" additivity="false">
<AppenderRef ref="DEFAULT" />
</Logger>
</Loggers>

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Debugging Persona-based UI logs


You can troubleshoot and debug logs for the Persona-based UI.

Procedure
Use the following steps to enable debug level logs for the Persona-based UI.

1. Go to the $WAS_HOME/profiles/<WAS profile>/installedApps/<NodeCell>/mdm-rest.war.ear/mdm-rest.war/WEB-INF/classes folder.


2. Edit the log4j2.xml file.
3. Go to the <Loggers> section.
4. Change the value of level from "info" to "debug", for example,

<Logger name="com.ibm.rs" level="debug" additivity="false">


<AppenderRef ref="RollingFile" /> </Logger>

5. Restart the Persona-based UI services.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

FAQs - Cloud offering


Read frequently asked questions about Product Master.

1. What is the role of the product team in the IBM® Product Master Hosted Cloud offering?
The product team provisions only the offering and hands over the account details to the customer.

2. What type of deployment does the IBM Product Master cloud-hosted offering support?
IBM Product Master cloud offering supports container-based deployment.

3. Does the offering support high availability?


Yes, the offering supports HA with multiple nodes on Kubernetes (Open source version).

4. What is the supported infrastructure?


The new offering supports IBM Cloud® Virtual Private Cloud (VPC).

5. Where is the database deployed?


The database is deployed on a virtual machine (VM).

6. What plan does the offering support?


The offering supports a basic level plan that has IBM Cloud Internet Services, Log Analysis, and Cloud tracker enabled.

7. Does the offering support upgrading the basic level plan?



Yes, you can upgrade the basic level plan that is provided with the offering.

8. Does the offering support backup and recovery?


Yes, the offering has IBM Spectrum® Protect for IBM Db2® database and Velero for Kubernetes backup solutions.

9. Which different T-shirt sizes are supported by the offering?


The offering supports three T-shirt sizes: small, medium, and large.

10. Which environments does the offering support?


The offering has separate Production, Staging, and DEV/QA environments.

11. Do all the environments support the same set of T-shirt sizes?
No, following are the T-shirt size and environment-mapping details:
Large T-shirt size: Production, Staging, and DEV/QA environments
Small and medium T-shirt size: Production environment.
12. What do you need to provide to the product team?
You need to provide an IBM Cloud account with administrative privileges for services required during provisioning.

13. How many subnets does a production environment have?


The production environment has three subnets: Network policies (Subnet 1), High availability disaster recovery (HADR) cluster (Subnet 2), and IBM Spectrum
Protect (Subnet 3).

14. What does the first subnet contain in a production environment?


The first subnet contains Kubernetes Master VMs and Worker VMs.

15. Where are the product images installed?


The product images are installed in subnet 1 under the Product Master namespace.

16. What does the second subnet contain in a production environment?


The second subnet contains a primary and standby IBM Db2 database.

17. What does the third subnet contain in a production environment?


The third subnet contains IBM Spectrum Protect that takes backup of subnet 2 databases.

18. What would Kubernetes use for product installation in the offering?
Kubernetes uses Persistent Volumes (PV) on the IBM Cloud File Storage.

19. What does Persistent Volume contain?


The Persistent Volume contains product data like logs, uploaded images, and imports.

20. Does IBM Spectrum Protect also back up File Storage?


Yes, IBM Spectrum Protect also takes a backup of File Storage.

21. What is the difference in the network architecture for Production, Staging, and DEV/QA environments?
The Staging and DEV/QA environments do not have IBM Spectrum Protect backup service, and the DEV/QA environment has only 1 primary IBM Db2 database.

22. Does the offering have the ability to display the public URL for the application?
The offering has IBM Cloud load balancers and hence can display the public URL for the application.

23. How is infrastructure managed?


You can manage infrastructure through a VPN connection.

24. What options does the VPN gateway provide in the offering?
The VPN gateway provides you with an option of either site-to-site VPN or client-to-site VPN.

25. How many VMs does the Kubernetes system require?


The Kubernetes system requires six VMs for the Production and Staging environments and two VMs for the DEV/QA environment.

26. What does the Product Master namespace have after the installation?
The Product Master namespace has primary, secondary, and third-party services.

27. What are the primary services in the Product Master namespace?
Primary services include Admin UI, Persona-based UI, REST API, and Machine learning services.

28. What are the secondary services in the Product Master namespace?
Secondary services include Workflow, Scheduler, PIM collector, and Indexer services.

29. What are the third-party services in the Product Master namespace?
Third-party services include Elasticsearch, Hazelcast, IBM MQ, and MongoDB services.

30. What are the various monitoring and infrastructure tools that the offering uses?
IBM Cloud Log Analysis service, IBM Cloud Activity Tracker service, IBM Cloud Internet Services, IBM Spectrum Protect, and Velero are the various monitoring and
infrastructure tools that the offering uses.

31. What extra monitoring and infrastructure tool does the offering have?
You can avail of the ManageEngine EventLog Analyzer Premium Edition and ManageEngine Vulnerability Manager Plus tools at an extra cost, with a services team that is
managed by Persistent Systems Limited.

32. What services and tools are used by the provisioning team?
VPC, IBM Cloud Virtual Servers, IBM Cloud Block Storage, IBM Cloud File Storage, IBM Cloud Object Storage, Application load balancers, IBM Key Protect,
VPN gateway, IBM Db2 database, and Kubernetes orchestration.
33. How do you download the Product Master Docker images?
For operator-based accelerated installation, you can download Product Master Docker assets from the IBM Passport Advantage® or use a script to download the
images from the IBM Support Fix Central. For more information, see Downloading the Product Master Docker images (Operator).

34. How many Docker images does the Product Master application have?



The Product Master application has almost 13 different Docker images. For more information, see Types of Docker images.

35. How many YAML files do you download through the Product Master Docker images?
You download 7 different YAML files. For more information, see Types of YAML files.

36. How do you deploy the Product Master Docker images on Kubernetes?
You can follow the procedure that is listed in the Deploying on the Kubernetes cluster topic.

37. How can you upgrade a Kubernetes deployment to the latest version?
You can follow the procedure that is listed in the Migrating Product Master Kubernetes or Red Hat OpenShift deployment (Fix Pack 7 and later) topic.

38. How is the Product Master deployed through a YAML file?


You can follow the procedure that is listed in the Configuring Product Master deployment YAML (Fix Pack 7 and later) topic.

39. How is the database schema migrated?


You can follow the procedure that is listed in the Creating or migrating database schema topic.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Develop applications
You can develop applications to use with IBM® Product Master.

Important: Code examples are meant only to explain and show concepts; they are not to be used as working code.

Languages and code


There are two different types of code you can write and use with IBM Product Master: the Java™ API or the Script API. Queries can be called from both script and
Java code. After you choose the type of code, you then need to add the code to Product Master.
Creating web services
You can create a web service so that users can access Product Master Server system data from an external application. For example, you can create a web service
to search for items in a specific catalog.
Developing scripts and extension points with the script workbench
You can develop scripts and Java extension points with script workbench for IBM Product Master.
Deploying a third-party or custom user .jar file
To use third-party code or code that is available from custom JARs, those custom JARs need to be deployed into the system.
Samples
IBM Product Master provides several different samples that you can use to develop your Product Master Server solution.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Languages and code


There are two different types of code you can write and use with IBM® Product Master: the Java™ API or the Script API. Queries can be called from both script
and Java code. After you choose the type of code, you then need to add the code to Product Master.

You can use scripting to extend basic function, add flexibility, and manipulate data. Scripting extends the basic function of Product Master in the following ways:

Implements custom business rules and processes
Imports from and exports to virtually any standard and custom file format
Completes mass updates

Scripting provides added flexibility on the way data is formulated:

Product Master scripting engine allows for sophisticated data manipulations during the import or export of information. The added flexibility enables you to:
Apply business rules to standardize data
Define calculated fields
Run custom reports
Perform rules-based cleansing of data
Scripting is used to do the following data manipulations:
Cleanse
Transform
Validate
Calculate

Other uses for scripting include:

Imports and exports
Data manipulation and cleansing
File formatting
Expression mappings
Mass updates
Pre- and post-catalog processing, providing enhanced data integrity



Business rules
Calculated values
Attribute relationships

Java API
The Java API provides a Java interface with a set of utility classes and methods that you can use to write Java code. The code can access IBM Product Master
entities directly, without the need for custom scripts. You use the Java application programming interface (API) to interact with Product Master.
Script API
The Script API is similar to Java API. Script API operations extend the basic function of IBM Product Master.
Query language
You can use the IBM Product Master query language to easily write queries that retrieve complicated Product Master Server-specific data from Product Master
Server systems. The query language adopts the syntax of the Structured Query Language (SQL).

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Java API
The Java™ API provides a Java interface with a set of utility classes and methods that you can use to write Java code. The code can access IBM® Product Master entities
directly, without the need for custom scripts. You use the Java application programming interface (API) to interact with Product Master.

Developing applications for Product Master by using Java code has these benefits:

You can write object-oriented code, which allows good design practices, greater code encapsulation, and reuse.
You can use existing Java IDEs (integrated development environments) such as Rational® Software Architect.
Debugging the user code becomes much easier.
You can call and use any third-party Java code (.jar) within your code easily.
You can use existing Java skills.
You can use existing Java utilities, and applications can be called directly in your Java code.
You can reference the worldwide Java development community to find coding help.

The Java API has three parts:

The Java API interface (used for development)


The Java API interface is a set of Java interfaces that document all the classes and methods that are available to you. This API is included as a ccd_javaapi.jar
(Version 5.3.2) and ccd_javaapi2.jar (Version 6.0.0 and later) file in the javaapi folder in the Product Master installation directory. Import the
ccd_javaapi2.jar file into your Java development environment to allow Java API classes to be developed in isolation from a Product Master server.
The Java API implementation (used at run time)
The Java API implementation refers to the Product Master internal code that provides the function that is documented in the Java API interface. No action is
required to enable this implementation. A Java class that uses the Java API interface automatically uses this internal code when deployed.
The Java API reference documentation
Explanations are provided for some common classes and methods that are available in the Java API interface. For a complete technical reference with detailed
explanations of the classes and methods that are available in your Product Master server instance, see the Product Master Javadoc. This document is generated
from the Product Master code. For more information, see "Javadoc" section in the Reference.

Java API components


The Java API for IBM Product Master supports over 20 components, including components for extension points, items, catalogs, and categories.
Required components
The following components are required for writing Java API-based code.
Requirements and restrictions
Before you develop Java API code, be sure that you read the requirements and restrictions.
Develop code using Java API
You can develop Java API-based stand-alone applications or Java API-based extension points or Java API-based web services.
Develop Java API-based stand-alone applications
Java API-based stand-alone applications are Java classes with a main() method and can be run from command line or within an IDE like Rational Software
Architect.
Developing and implementing extension points
An extension point is a point in the application where custom code can be started, such as entry preview script, post save script, and validation rule script. You can
either use Java API or IBM Product Master scripting language to develop the extension point code. Extension points are the various points within Product Master
where you can modify the behavior by running some user-defined business logic.
Transactions and exception handling guidelines
You can use the Java API to call the startTransaction(), commit(), and rollback() methods of the context interface to run your code within a transaction. Running related
code inside a transaction provides you the ability to ensure atomic execution of the related code. You can also use transactions in long-running jobs, such as import
or report jobs, for periodic commit of the changes to the database. Finally, you can use the setSavepoint() and rollbackToSavepoint() methods of the context
interface to perform partial rollbacks within a transaction.
Debugging Java API code
You can use the following methods to debug your Java API code. With the IBM Product Master Java API, these debugging methods are possible only for
web-service-deployed Java API code and extension point Java API code.
Advanced programming with Java API
PIMCollection usage, multi-catalog batching support, and save points are some of the other topics to consider while programming through Java API.
Java API code best practices
Best practices for working with the Java API include reusing objects, catching and handling exceptions, and using JUnit tests to test the code.
Java API migration
You can migrate code that is written with Product Master Script API to use the Java APIs.
Troubleshooting Java API
The following are some of the troubleshooting tips for developing or running the Java API-based code.



Resources
The following resources are available on Java API.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Java API components


The Java™ API for IBM® Product Master supports over 20 components, including components for extension points, items, catalogs, and categories.

The following table shows the component names and descriptions. For information, see the Javadoc.
Component Description
com.ibm.pim.attribute Supports attribute functions, including attribute definitions, attribute instances, and attribute collection handling.
com.ibm.pim.catalog Supports catalog functions.
com.ibm.pim.catalog.item Supports item functions.
com.ibm.pim.collaboration Supports collaboration-area functions, including history.
com.ibm.pim.collection Supports PIMCollection functions.
com.ibm.pim.common Supports some of the common functions that include batch, version information, processing options, and validation errors.
com.ibm.pim.common.exceptions Holds a list of exceptions that are used by the Java API.
com.ibm.pim.context Holds context-factory interfaces.
com.ibm.pim.docstore Supports docstore functions.
com.ibm.pim.extensionpoints Supports extension point functions, including argument beans and function interfaces.
com.ibm.pim.hierarchy Supports hierarchy functions.
com.ibm.pim.hierarchy.category Supports category functions.
com.ibm.pim.history Supports history manager functions.
com.ibm.pim.integration Supports third-party integration, including Content Manager System.
com.ibm.pim.job Supports job functions, including schedules, imports, exports, and reports.
com.ibm.pim.lookuptable Supports lookup table functions.
com.ibm.pim.organization Supports organization functions, including company, user, role, and organization.
com.ibm.pim.search Supports search functions.
com.ibm.pim.selection Supports selection functions.
com.ibm.pim.spec Supports spec functions.
com.ibm.pim.system Supports system functions, including getPageURL support.
com.ibm.pim.userdefinedlog Supports user-defined log functions.
com.ibm.pim.utils Supports utility functions, including environment import-export, logger, distribution, and data source.
com.ibm.pim.view Supports view functions.
com.ibm.pim.webservice Supports web services functions.
com.ibm.pim.workflow Supports workflow functions.
For more information, see "Javadoc" section in the Reference.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Required components
The following components are required for writing Java™ API-based code:

IBM® Product Master installation: These files must be extracted or untared to a local folder on the file system. Ensure that the following directories are available
before you run a Java API-based application:

etc/default
Contains the API configuration files, which specify the connection parameters to the Product Master server.
jars
Contains all of the Product Master and supporting JAR files that are required for connecting to the server.
locales
Contains the locale-specific resources.
logs
Contains the log files that are generated by the API.

IBM or Sun JDK: The version of the JDK depends on the version of Product Master being used. Supported JDK versions start at 1.5.
The tools.jar file from JDK is required to be present in the class path of the Java project.
Access to a running Product Master instance is required. This instance can be a remote instance; for example, it does not need to be running on the same computer
where the IDE is being run to develop the application.
Client connectivity to the database, which is being used by the Product Master instance.
Database-specific client libraries:
For Oracle, ojdbc5.jar or ojdbc6.jar is required on the class path of the project.
For DB2®, db2jcc.jar and db2jcc_license_cu.jar are required on the class path.



Verify the database-related parameters in the db.xml file.
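For illustration only, a stand-alone test class might be compiled and run with a class path along the following lines; all paths are assumptions that you must adjust to your installation, JDK, and database client (this sketch assumes Db2):

# Compile against the Java API interface JAR (located in the javaapi folder noted above)
javac -cp "$TOP/javaapi/ccd_javaapi2.jar" JAPIDemoApp.java
# Run with the product JARs, tools.jar, and the Db2 client JARs on the class path
# (the "$TOP/jars/*" wildcard requires Java 6 or later; list the JARs explicitly on older JDKs)
java -cp ".:$TOP/javaapi/ccd_javaapi2.jar:$TOP/jars/*:$JAVA_HOME/lib/tools.jar:/path/to/db2jcc.jar:/path/to/db2jcc_license_cu.jar" JAPIDemoApp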

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Requirements and restrictions


Before you develop Java™ API code, be sure that you read the requirements and restrictions.

Requirements for running Java API code:

Java APIs can be started from any of the following invocation points of Product Master:
Product hosted web service, directly through the Java APIs.
Product hosted web service, through reflection support in scripting API.
Custom-hosted web service, directly hosted on an application server such as WebSphere® Application Server.
Scripting API, through reflection support.
Custom tool, through reflection support of scripting API.
Extension point, through reflection support of scripting API.
Extension point, through the URL mechanism that Product Master provides. This extension point also includes custom tool extension point.
Custom JavaServer Pages (JSPs) and servlets.
Product Master must be running when you run the Java API code.
Ensure that any Java API code that you develop is:
Reentrant
Thread safe

Restrictions for Java API:

Do not use Java API code within a multi-thread program. This is important for Product Master extension points which must adhere to the Java Platform, Enterprise
Edition guidelines.
Do not call the Java API methods from a stand-alone Java class. Only the invocation points that are listed are supported.
Do not mix the Script API scripts and Java API extension point redirection URL scripts. You can either run the script or redirect the execution to a Java API extension
point class.
Do not call Script API scripts from Java API code. No mechanism is available to pass variables to the script execution environment from the Java API. You can
access Java API from Script API scripts by using a reflection mechanism. However, passing parameters from scripts to Java API is not supported.
The Java API does not support several of the wrapper operations that were supported by the Script API, including currency operations, JDBC operations, and Excel
operations. You can use the corresponding independent software vendor libraries directly within the Java environment instead.
The Java API can be called from Script API through script reflection support. Also, both Script API and Java API support that starts user transactions. However,
starting of multiple transactions in the same operation that uses both Script API and Java API is not supported. Such usage can lead to undesired data persistence
and transaction rollback issues.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Develop code using Java API


You can develop Java™ API-based stand-alone applications or Java API-based extension points or Java API-based web services.

Java API programming pattern


When you are programming through the Java API, you generally follow a programming pattern. The pattern involves obtaining a context, obtaining a manager from the
context, obtaining a Java API entity object, and modifying the object and saving it.

Obtaining a context
To access IBM® Product Master entities and methods, the Java code must first obtain a PIMContext. You can obtain a PIMContext in two ways:

You can obtain a fresh context by providing the username, password, and company information to the API: PIMContextFactory.getContext(user
name, password, company name). This method is used when you are writing stand-alone Java API applications or unsecured web services.
If the Java API code is running in an already authenticated context (for example, an extension point implementation class that runs within the Product
Master application, or a secure web service where authentication information was provided when the web service was started), you can obtain the existing
context through the API: PIMContextFactory.getCurrentContext(). Using this approach removes the overhead of creating extra contexts.

Obtaining a manager from the context


After the context is available, you can retrieve from the context a Java API manager object that corresponds to the entity. For example, a manager for the catalog
can be obtained by using the Java API.
CatalogManager ctgManager = context.getCatalogManager();
Obtaining a Java API entity object
After manager for an entity is obtained from the context, you can access the entity itself. For example, a catalog object can be obtained from the catalog manager by
using the Java API.
Catalog catalog = ctgManager.getCatalog("my catalog");
Modifying the object and saving it



The entity object that you obtained from the manager class can be modified by using the APIs that are available within the object itself. After the modifications are
done, you save them to the database with the save() method. For example, a catalog object that is obtained from the catalog manager can be modified and saved by
using the Java API.
catalog.addSecondaryHierarchy(hierarchy);

catalog.save();
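Putting the pattern together, the following is a minimal sketch that assumes an already authenticated context, a pre-existing catalog named "my catalog", and a hierarchy object that was obtained earlier:

// 1. Obtain a context (here, the current authenticated context)
Context ctx = PIMContextFactory.getCurrentContext();
// 2. Obtain a manager from the context
CatalogManager ctgManager = ctx.getCatalogManager();
// 3. Obtain a Java API entity object
Catalog catalog = ctgManager.getCatalog("my catalog");
// 4. Modify the object and save it
catalog.addSecondaryHierarchy(hierarchy);
catalog.save();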

Java API and Rational Software Architect


You can use IDEs like RSA (Rational® Software Architect) to develop your application and resolve Java compilation errors from the application.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Develop Java API-based stand-alone applications


Java™ API-based stand-alone applications are Java classes with a main() method and can be run from command line or within an IDE like Rational® Software Architect.

Ensure that you install IBM® Product Master. See Developing scripts and extension points with the script workbench for details about the script workbench for IBM Product Master.
Important: Stand-alone Java API programs are not supported, but this mode can be used to develop test programs before they are deployed as extension points or are
used within web services.

Java API stand-alone application example


The following example is a stand-alone Java API program. This program can be compiled and can be run as a stand-alone program from command prompt or from
within an IDE like Rational Software Architect. This program assumes certain user name, spec, and catalog are available on the IBM Product Master system.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Java API stand-alone application example


The following example is a stand-alone Java™ API program. This program can be compiled and can be run as a stand-alone program from command prompt or from within
an IDE like Rational® Software Architect. This program assumes certain user name, spec, and catalog are available on the IBM® Product Master system.

File: JAPIDemoApp.java

import com.ibm.pim.catalog.Catalog;
import com.ibm.pim.catalog.CatalogManager;
import com.ibm.pim.context.Context;
import com.ibm.pim.context.PIMContextFactory;
import com.ibm.pim.spec.Spec;
import com.ibm.pim.spec.SpecManager;

public class JAPIDemoApp
{
    public static void main(String[] args)
    {
        Context ctx = null;
        try
        {
            // Obtain the context
            ctx = PIMContextFactory.getContext("user", "password", "MyCompany");
            System.out.println("Context" + ctx.toString());

            // Create a Spec object for a pre-existing spec "Test Spec"
            SpecManager specMgr = ctx.getSpecManager();
            Spec spec = specMgr.getSpec("Test Spec");

            // Load a pre-existing catalog in the system
            CatalogManager ctgManager = ctx.getCatalogManager();
            Catalog ctg = ctgManager.getCatalog("WS Catalog1");

            ctx.cleanUp();
            System.exit(0);
        }
        catch (Exception e)
        {
            // If any exception is encountered, print the stack trace and a message
            e.printStackTrace();
            System.out.println("JAPIDemoApp failed");
        }
    }
}

IBM Product Master 12.0 Fix Pack 8



Operating Systems: AIX, Linux, and Windows (Workbench only)

Developing and implementing extension points


An extension point is a point in the application where custom code can be started, such as entry preview script, post save script, and validation rule script. You can either
use Java™ API or IBM® Product Master scripting language to develop the extension point code. Extension points are the various points within Product Master where you
can modify the behavior by running some user-defined business logic.

About this task


To implement the extension points through the Java API, you need to implement a set of predefined interfaces, which are supplied with Product Master. More than 20
extension point interfaces are included with Product Master.

In Product Master, Java-enabled extension points are started through a redirection script that appears as a script to Product Master but points at an implementation in Java.
Each Java-enabled extension point is represented by a unique interface within the Java API. For example, a custom tool implementation must implement the
CustomToolsFunction interface. Methods on these interfaces accept specific arguments that determine what data is passed to the extension points.

For developing extension points through the script workbench, see Developing scripts and extension points with the script workbench.

Procedure
Following are the steps to be used when you develop Java API-based extension points:

1. Develop an extension point implementation class.


2. Make the extension point class available to Product Master.
3. Provide a URL for invocation of extension points by the Product Master.

Developing an extension point implementation class


You develop an extension point class to provide custom code for a particular method.
Making the extension point class available
To make the extension point class available to IBM Product Master, you need to load the .class file to the document store, or make the .class file
available in a .JAR file on the class path of the Product Master instance.
Registering a URL for invocation of extension points
Before IBM Product Master can start a specific extension point, the Java-based extension point implementation class needs to be registered from the
corresponding extension point of the Product Master user interface.
Starting Java API extension points from scripting
You can start Java API extension points from existing scripting API code by passing parameters across the scripting environment and the Java API code. This
passing allows the customer solutions that are already developed through Scripting APIs to take advantage of the new capabilities in Java API.
Testing extension point implementation classes
Ensure that you unit test your extension point implementation code before you deploy the code to the production environment. The implementation of the Java
methods can be unit tested by adding the main() method to the implementation class or through unit testing frameworks like JUnit.
Developing in a team mode
Because there can be multiple users who work on the development of Java API-based code, there might be a need to develop and test the code in a team
environment.
Security in extension points
Java API extension points can be started in secure or insecure mode.
Caching extension point classes
You can upload Java classes individually to the document store or make them available with the custom or user JARs mechanism. For some extension points, the
compiled scripts are cached, which means that if you change a file and load it to the document store, you must clear the cache to see the changes.
Sample extension points in Java API
The following samples provide code to preview entry, catalog, pre-, and post-processing extension points.

Related reference
Developing scripts and extension points with the script workbench

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Developing an extension point implementation class


You develop an extension point class to provide custom code for a particular method.

For example, the scripting sandbox extension point demonstrates the implementation of an extension point. You use the scripting sandbox to run custom code. The following sample code is an interface with no implementation for the method scriptingSandbox(ScriptingSandboxFunctionArguments inArgs). This extension point is implemented by writing a class that implements the ScriptingSandboxFunction interface and provides custom code for the scriptingSandbox(ScriptingSandboxFunctionArguments inArgs) method.

package com.ibm.pim.extensionpoints;

/**

* Interface that represents the Scripting Sandbox function
*
* @mdmpim.scriptequiv scripts launched via the Scripting Sandbox
* @since 6.0.0
*/
public interface ScriptingSandboxFunction
{
public static final String copyright = com.ibm.pim.utils.Copyright.copyright;

/**
* Java function that can be invoked via the sandbox.
*
* @param inArgs
* the arguments for this invocation.
*/
public void scriptingSandbox(ScriptingSandboxFunctionArguments inArgs);
}

After the implementation class is written, compile it to produce a .class file. You can compile by using an IDE (such as Rational® Software Architect) or through the Java™ command-line compiler; the implementation class must compile without any errors to create the .class file. The .class file is uploaded to the IBM® Product Master system as outlined in the following sections.

Extension points methods arguments


Each extension point has its own set of argument interfaces. Arguments to extension point methods are Java interfaces by themselves and are available as part of
Java APIs.
Implementation class packaging
When there are multiple extension point implementation classes, ensure that you package them using Java packages.
Sample implementation class
The following sample shows an implementation class for the ScriptingSandboxFunction interface.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Extension points methods arguments


Each extension point has its own set of argument interfaces. Arguments to extension point methods are Java™ interfaces by themselves and are available as part of Java
APIs.

In the method scriptingSandbox(ScriptingSandboxFunctionArguments inArgs), ScriptingSandboxFunctionArguments is the argument interface. Argument interfaces provide access to IBM® Product Master entities like items and categories, which are relevant to the extension point.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Implementation class packaging


When there are multiple extension point implementation classes, ensure that you package them using Java™ packages.

For example, in the following sample code, the sandbox implementation class is in the package mdmpim.extend.myextensionpoints.

package mdmpim.extend.myextensionpoints;

import java.io.PrintWriter;
import com.ibm.pim.extensionpoints.ScriptingSandboxFunction;
import com.ibm.pim.extensionpoints.ScriptingSandboxFunctionArguments;

public class ScriptingSandboxTestImpl implements ScriptingSandboxFunction
{
//implementation of the method scriptingSandbox
public void scriptingSandbox (ScriptingSandboxFunctionArguments inArgs)
{
PrintWriter cpOut = inArgs.getOutput();
cpOut.write("Testing Sandbox Functionality for JavaApi");
}
}

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Sample implementation class


The following sample shows an implementation class for the ScriptingSandboxFunction interface.


package mdmpim.extend.myextensionpoints;

import java.io.PrintWriter;
import com.ibm.pim.extensionpoints.ScriptingSandboxFunction;
import com.ibm.pim.extensionpoints.ScriptingSandboxFunctionArguments;

public class ScriptingSandboxTestImpl implements ScriptingSandboxFunction
{
//implementation of the method scriptingSandbox
public void scriptingSandbox (ScriptingSandboxFunctionArguments inArgs)
{
PrintWriter cpOut = inArgs.getOutput();
cpOut.write("Testing Sandbox Functionality for JavaApi");
}
}

When started from the scripting sandbox, this extension point implementation class prints the message Testing Sandbox Functionality for JavaApi on the
scripting sandbox output window.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Making the extension point class available


To make the extension point class available to IBM® Product Master, you need to load the .class file to the document store. Or, you need to make the .class file available
using a .JAR file in the class path of the Product Master instance.

Before you begin


After you implement an extension point and create the compiled .class file, the .class file is made available to Product Master so that it can be started by the system when that particular extension point is reached.

About this task


1. Use one of the following methods to make the extension point class available to Product Master:
Load the .class file directly to the document store. You can use a simple Java™ class, which is based on the Product Master Java API, to upload a .class file to the document store. The class can also be loaded under a subfolder in the document store. The file system can be used as a document store in Product Master, and the same file system location can be used as a repository of .class files.
Note: Loading .class files directly to the document store suits users who are individually working on a particular extension point. This approach might be suitable while you develop the extension point implementation code, but it is not suited for production environments where multiple extension point implementations need to be deployed. Because a Product Master deployment typically has multiple implemented extension points, using a single JAR file to bundle all these classes and deploy them is less error prone than handling them individually.

Approach 1: Mount the file system as docstore


By using a file system-mounted docstore, the local file system can be used as a repository of Java API extension point classes. After the file system is
mounted, the .class files can be directly copied to that location.
For more information about mounting, see Mounting a folder to the docstore.

Approach 2: Use Java APIs to load the .class into docstore


This approach uses a Java API-based application to upload the .class file. In the following sample code, the .class file gets loaded into the /uploaded_java_classes directory area of the document store. This code segment can be used in a stand-alone Java program, which is compiled and run as a Java application. The program can be further updated to clean up all of the classes in the docstore before you upload, and to upload more than one .class file.
In this example, the path of the uploaded .class file in the document store reflects the package structure of the .class file. The document store path is:
/uploaded_java_classes/mdmpim/extend/myextensionpoints/ScriptingSandboxTestImpl.class
The fully qualified class name that indicates the package structure for the class is:
mdmpim.extend.myextensionpoints.ScriptingSandboxTestImpl.class

Context ctx = PIMContextFactory.getContext("user", "password", "MyCompany");

//Get a handle to the docstore manager
DocstoreManager dsMgr = ctx.getDocstoreManager();

//Create an empty document in the doc store, with full path
Document doc = dsMgr.createAndPersistDocument
("/uploaded_java_classes/mdmpim/extend/myextensionpoints/ScriptingSandboxTestImpl.class");

//Set the content
//create an inputstream from the file system file for your
//java class for extension point
FileInputStream inStream = new FileInputStream
("C:\\project\\classes\\mdmpim\\extend\\myextensionpoints\\ScriptingSandboxTestImpl.class");

doc.setContent(inStream);
inStream.close();

ctx.cleanUp();

Click Collaboration manager > Document Store to check the existence of the .class file.
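Alternatively, you can verify the upload programmatically. The following is a minimal sketch that looks the document up again by its docstore path; it assumes a DocstoreManager.getDocument(String) lookup method, so treat the method name as an assumption and confirm it against the Java API reference for your release.

//Hypothetical verification sketch: re-fetch the uploaded document by path
Document uploaded = dsMgr.getDocument(
"/uploaded_java_classes/mdmpim/extend/myextensionpoints/ScriptingSandboxTestImpl.class");
if (uploaded != null) {
System.out.println("Extension point class is present in the document store.");
}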


Use the custom or user JAR mechanism. You can use the custom or user .jar mechanism to make an extension point implementation class available to the
Product Master system. With this mechanism, all of the extension point implementation classes are bundled into a JAR file and the JAR file is made available
to Product Master. This approach is suitable when there are several extension point implementation classes.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Registering a URL for invocation of extension points


Before IBM® Product Master can start a specific extension point, the Java-based extension point implementation class needs to be registered from the corresponding
extension point of the Product Master user interface.

About this task


Register the extension point class with an invocation URL so that Product Master can start the specific extension point class when that extension point is reached.

Procedure
Register an extension point implementation class by using a special URL. Following is the syntax of the URL:

//script_execution_mode=java_api="identification of the User class"

For example, //script_execution_mode=java_api="japi:///docstore/path:org.pkg.MyJava.class"

Registering the document store loaded class


A URL can be registered to start the extension point class in the document store.
Register the custom user JAR-based class
A URL can be registered to start a class in a custom JAR file that is made available to the IBM Product Master system with the custom or user JAR mechanism.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Registering the document store loaded class


A URL can be registered to start the extension point class in the document store.

About this task


If the class that implements the extension point is loaded into the document store, the format of the URL is:

japi:///<docstore folder>[/<docstore folder>...]:<class to use>

Where the <docstore folder> segments might repeat as required, and are terminated by the fully qualified class to use.

If the .class file is loaded under a subfolder in the docstore, multiple <docstore folder> segments are specified in the URL. Product Master looks for the .class file in each of the specified folders, in order, until it finds the file.

For example,
//script_execution_mode=java_api="japi:///uploaded_java_classes/folder2/folder3:mdmpim.extend.myextensionpoints.ScriptingSandboxTestImpl.class"

Procedure
To start the scripting sandbox extension point implementation class that is developed before, proceed as follows:

1. Click Data model manager > Script Sandbox.
2. Provide the URL in the Script Pane field of the scripting sandbox extension point. Following is the invocation URL:
//script_execution_mode=java_api="japi:///uploaded_java_classes:mdmpim.extend.myextensionpoints.ScriptingSandboxTestImpl.class"

If the docstore is mounted on the file system, a URL similar to the following can be used to register the extension point:
//script_execution_mode=java_api="japi:///public_html/mdmpim.extend.myextensionpoints.ScriptingSandboxTestImpl.class"
Note: If you are syncing your custom JAR classes from your file system with the Product Master document store, ensure that you set enable_mountmgr=true and set up your designated file system directory. For more information, see Using file system as document store.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)


Register the custom user JAR-based class
A URL can be registered to start a class in a custom JAR file that is made available to the IBM® Product Master system with the custom or user JAR mechanism.

If the class that implements the extension point is made available to Product Master by bundling it in a custom JAR file through the custom or user .jar mechanism, the class is not in the document store, so the document store path section of the invocation URL can be omitted.

For example,

//script_execution_mode=java_api="japi://mdmpim.extend.myextensionpoints.ScriptingSandboxTestImpl.class"

In this example, the fully qualified name of the class is mdmpim.extend.myextensionpoints.ScriptingSandboxTestImpl.class.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Starting Java API extension points from scripting


You can start Java™ API extension points from existing scripting API code by passing parameters across the scripting environment and the Java API code. This passing
allows the customer solutions that are already developed through Scripting APIs to take advantage of the new capabilities in Java API.

The following features are supported:

Argument passing
User's custom objects can be passed between IBM® Product Master scripts and Java API implementations.
Note: For Product Master entities, the same set of objects that is used in scripting is available to Java API code through the argument interfaces.
Seamless modification of Product Master entities
Because Scripting API code and Java API extension points can be mixed, it is possible to update Product Master entities in scripting and then update the same entities further through Java API code. This allows the updates on Product Master entities to be split across Scripting API code and Java API code. For example, an Item object can be modified partially in Scripting API code and the rest of the modifications can be done in an extension point implementation Java API class, which accesses the same Item object through the argument interface.
Multiple invocations
Multiple invocations of compatible extension point Java code from the same Product Master script are supported.

Invocation of Java API extension points from the scripting API is achieved through the scripting API operation runJavaApiExtensionPoint(). This script operation takes a Java API URL, which points to the extension point implementation to be started. Additionally, to facilitate parameter passing across the scripting environment and Java APIs, the argument interfaces of extension points provide the following two extra methods, which can be used to read or set custom parameters:

Object getCustomParameter(String key);
void setCustomParameter(String key, Object userObject);

Basically, users can set a custom variable in the script implementation through the setScriptContextValue() script operation and retrieve it in the Java implementation through the getCustomParameter() method. Similarly, they can set a variable in the Java implementation through the setCustomParameter() method and retrieve it in the script implementation through the getScriptContextValue() script operation. Through this approach, you can mix Product Master script and Java API implementations and pass non-Product Master objects between the two implementations.

Product Master Java API method getCustomParameter() returns the custom value from script context, in the context of extension point invocation. Custom values can
be placed in the context through the method setCustomParameter(), which accepts the key/value pair as input. Each value is identified through the key and the value
must be an object. This method is useful when values need to be transferred across script API and Java API extension point code. This method is available from the
function argument objects, which are passed to the extension point implementation classes.
For example,

public void reportGenerate(ReportGenerateFunctionArguments arg0)
{
try {
String value1Val = (String) arg0.getCustomParameter("value1");
}
catch (Exception e) {
e.printStackTrace();
System.out.println(" Exception"+ e.getMessage());
}
}

Usage Guidelines
To obtain the Java API context in the extension point implementation class, use the API getCurrentContext() instead of creating a new context.
The method setCustomParameter() has the same restrictions as defined for setScriptContextValue. For more information, see setScriptContextValue
script operation.
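For example, the following minimal sketch shows both directions of the parameter exchange inside a report extension point; the parameter names "value1" and "result" are hypothetical and stand in for whatever keys your script uses with setScriptContextValue() and getScriptContextValue().

public void reportGenerate(ReportGenerateFunctionArguments inArgs)
{
//Read a value that the calling script placed with setScriptContextValue("value1", ...)
String input = (String) inArgs.getCustomParameter("value1");

//Place a result for the script to read back with getScriptContextValue("result")
inArgs.setCustomParameter("result", input == null ? "" : input.toUpperCase());
}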

Sample scripting code


The following sample scripting API code shows the usage of starting Java API extension points from scripting approach.

IBM Product Master 12.0 Fix Pack 8


Operating Systems: AIX, Linux, and Windows (Workbench only)

Sample scripting code


The following sample scripting API code shows the usage of starting Java™ API extension points from scripting approach.

The script that is used in this case is the Pre-processing Script of the catalog.

var logger = getLogger("default");

item.setCtgItemAttrib("Default Hierarchy Primary Spec/Name", "Item attribute value set from script");
logger.loggerInfo(item.getCtgItemAttrib("Default Hierarchy Primary Spec/Name"));

logger.loggerInfo("Java API extension point being invoked to update the same item attribute…");
runJavaApiExtensionPoint("japi:///uploaded_java_classes:packaged_class_sample.com.ibm.pim.PrePostProcessingFunctionImpl.class");

logger.loggerInfo(item.getEntryAttrib("Default Hierarchy Primary Spec/Name"));

Java API extension point implementation class:

public class PrePostProcessingFunctionImpl implements PrePostProcessingFunction
{
Context ctx = null;

public PrePostProcessingFunctionImpl() throws Exception {
ctx = PIMContextFactory.getCurrentContext();
}

public void prePostProcessing (com.ibm.pim.extensionpoints.ItemPrePostProcessingFunctionArguments arg0)
{
Item item = arg0.getItem();
item.setAttributeValue("Default Hierarchy Primary Spec/Name", "Item attribute value set from Java API extension point");
}

public void prePostProcessing (com.ibm.pim.extensionpoints.CategoryPrePostProcessingFunctionArguments arg0)
{
}

public void prePostProcessing (com.ibm.pim.extensionpoints.CollaborationItemPrePostProcessingFunctionArguments arg0)
{
}

public void prePostProcessing (com.ibm.pim.extensionpoints.CollaborationCategoryPrePostProcessingFunctionArguments arg0)
{
}
}

When the item is saved, the combination of script and Java API extension point produces the following output (as seen in the logs). The value of the attribute reflects the
value set from the Java API extension point:

Item attribute value set from script
Java API extension point being invoked to update the same item attribute…
Item attribute value set from Java API extension point

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Testing extension point implementation classes


Ensure that you unit test your extension point implementation code before you deploy the code to the production environment. The implementation of the Java™ methods can be unit tested by adding a main() method to the implementation class or through unit testing frameworks like JUnit.

About this task


You can create a JUnit test against each of the methods in the implemented interface. The methods for extension point interfaces take arguments through the argument
interface. When extension points are started in real time, IBM® Product Master passes the populated argument objects to the extension point method. However, when you
are unit testing, this is not the case.

You can use the mock object testing design pattern to provide these argument objects during testing. This means that you can create a mock implementation of the
argument objects that a method takes and pass an instance of that in the method call. The mock objects can be created manually or by using mock object frameworks like
EasyMock or JMock.
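The following is a minimal sketch of a manually created mock that uses a JDK dynamic proxy instead of a mocking framework, so that only the methods the test exercises need behavior. It assumes the implementation under test (the ScriptingSandboxTestImpl class from the earlier sections) calls only getOutput() on the argument interface.

import java.io.PrintWriter;
import java.lang.reflect.Proxy;
import com.ibm.pim.extensionpoints.ScriptingSandboxFunctionArguments;

public class ScriptingSandboxTest
{
public static void main(String[] args)
{
//Dynamic-proxy mock: answers getOutput() with a console writer and
//returns null for any other method the implementation might call.
ScriptingSandboxFunctionArguments mockArgs =
(ScriptingSandboxFunctionArguments) Proxy.newProxyInstance(
ScriptingSandboxFunctionArguments.class.getClassLoader(),
new Class<?>[] { ScriptingSandboxFunctionArguments.class },
(proxy, method, methodArgs) ->
"getOutput".equals(method.getName())
? new PrintWriter(System.out, true)
: null);

//Exercise the extension point implementation outside Product Master.
new mdmpim.extend.myextensionpoints.ScriptingSandboxTestImpl()
.scriptingSandbox(mockArgs);
}
}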

The following code shows an example of how an extension point class can be tested by adding the main() method. The same class can be used as the extension point class as well because it implements the required interface.


public class ItemInitialload implements ImportFunction
{
public void doImport(ImportFunctionArguments args)
{
Context ctx = PIMContextFactory.getCurrentContext();
......

try {
ItemInitialload.mainProcess(ctx, hmArgs);
} catch (Exception e) {
e.printStackTrace();
}
}

//setup required parameters when invoking the program.
public static void main(String[] args)
{
try {
hmArgs = getArgs();
m_ctx = PIMContextFactory.getContext("user", "password",
"MyCompany");
mainProcess(m_ctx, hmArgs);

System.exit(0);

} catch(Exception e) {
e.printStackTrace();
}
}

public static void mainProcess (Context m_ctx, HashMap<String,String> hmArgs) throws Exception {

......
}
}

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Developing in a team mode


Because there can be multiple users who work on the development of Java™ API-based code, there might be a need to develop and test the code in a team environment.

During the development and unit testing of the Java API-based code, the document store can be used as the repository of the classes. After the classes are developed and
unit tested, a compressed file can be built and deployed through the user.jar mechanism.

The custom and user JAR mechanism requires that you restart IBM® Product Master to pick up the new changes in the JAR. However, when the document store-based approach is used, you can clear the script cache to reload the new classes in the document store.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Security in extension points


Java™ API extension points can be started in secure or insecure mode.

If an extension point is started in secure mode, a check is performed to make sure every Product Master Server object that is exposed through the argument bean is
accessible, and a PIMAuthorizationException error is thrown if the user does not have required privileges. If extension points are started in insecure mode, the
authorization check is not performed for every object.

You can configure the security with the javaapi_security flag in the common.properties file. By default, the javaapi_security flag is set to true. You can disable
the security by setting javaapi_security to false.
Important: The javaapi_security flag affects both API and UI.
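For example, a minimal configuration sketch; the common.properties location shown here ($TOP/etc/default/common.properties) is an assumption, so locate the file in your own installation:

# $TOP/etc/default/common.properties (path is an assumption)
# Enforce authorization checks for Java API extension points started with japis:// URLs
javaapi_security=true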

Starting an extension point in a secure environment


Set the javaapi_security flag to true, and use the japis:// URL instead of the japi:// URL.
Example

//script_execution_mode=java_api="japis:
///uploaded_java_classes:wpc.javaapi.test.extensionpoints.CatalogPreviewTestImpl.class"

Starting an extension point in an insecure environment

If the javaapi_security flag is set to false - Both japi:// and japis:// URLs run in the insecure mode where no permission authorization is
performed.
If the javaapi_security flag is set to true - You can run the extension point in an insecure mode by using the japi:// URL instead of the japis://
URL.


IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Caching extension point classes


You can upload Java™ classes individually to the document store or make them available with the custom or user JARs mechanism. For some extension points, the
compiled scripts are cached, which means that if you change a file and load it to the document store, you must clear the cache to see the changes.

The cache is cleared when the system is restarted, but you can clear the script cache without restarting the system. To clear the script cache, go to System Administrator >
Performance Info > Caches, select script, and click Flush Cache. After you clear the cache, the new code is picked up when the extension point is started.

If the docstore is mounted on the file system, the following steps can be used to clear the cache:

1. Remove the old class file from the docstore and the file system.
2. Flush the script cache from System Administrator > Performance Info > Caches.
3. Copy the new class file to the file system and test the class file after it appears in the docstore.

When you load Java classes with the custom or user JAR mechanism, you must restart the IBM® Product Master instances to see the changes to the extension point code.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Sample extension points in Java API


The following samples provide code for the entry preview, catalog preview, and pre- and post-processing extension points.

Sample entry preview extension point in Java API


The following sample code shows how to use the entry preview extension point.
Sample catalog preview extension point in Java API
The following sample code shows how to use the extension point to preview a catalog.
Sample pre-processing and post-processing extension point in Java API
The following sample code shows how to use the pre-processing and post-processing extension point.
Sample code for transaction and exception handling in import and report jobs
The following sample code illustrates transactions and exception handling within import and report jobs. Batchable interface might optionally be used in import and
report extension point implementation.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Sample entry preview extension point in Java™ API


The following sample code shows how to use the entry preview extension point.

The entry preview extension point is an entry script that you can use to create custom previews of entries. A preview can be in HTML format, a CSV file, or any form of writable data.

You can query an item, category, or workflow information when appropriate, preview your entries, and write your output.

Sample code for the entry preview extension point


public class UserCode implements EntryPreviewFunction {
public UserCode() {
}

public void entryPreview(ItemPreviewFunctionArguments inArgs) {
PIMCollection<Item> items = inArgs.getItems();
PrintWriter out = inArgs.getOutput();

if (items != null)
out.println("<html><body>the count of entry:" + items.size() + "</body></html>");
}

public void entryPreview(CollaborationItemPreviewFunctionArguments inArgs) {
PIMCollection<CollaborationItem> items = inArgs.getCollaborationItems();
PrintWriter out = inArgs.getOutput();

if (items != null)
out.println("<html><body>the count of entry:" + items.size() + "</body></html>");
}

public void entryPreview(CategoryPreviewFunctionArguments inArgs) {
PIMCollection<Category> categories = inArgs.getCategories();
PrintWriter out = inArgs.getOutput();

if (categories != null)
out.println("<html><body>the count of entry:" + categories.size()
+ "</body></html>");
}

public void entryPreview(
CollaborationCategoryPreviewFunctionArguments inArgs) {
PIMCollection<CollaborationCategory> categories = inArgs
.getCollaborationCategories();
PrintWriter out = inArgs.getOutput();

if (categories != null)
out.println("<html><body>the count of entry:" + categories.size()
+ "</body></html>");
}
}

Note: IBM® Product Master does not support the use of frames in scripting.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Sample catalog preview extension point in Java™ API


The following sample code shows how to use the extension point to preview a catalog.

You use this extension point to generate custom previews of a catalog or a selection of items of a catalog.

Sample code for the catalog preview extension point


public class UserCode implements CatalogPreviewFunction
{
public UserCode()
{
}

public void catalogPreview(CatalogPreviewFunctionArguments inArgs) {
Catalog c = inArgs.getCatalog();
PrintWriter out = inArgs.getOutput();
out.println("Preview of catalog "+c.getName());

}
}

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Sample pre-processing and post-processing extension point in Java™ API


The following sample code shows how to use the pre-processing and post-processing extension point.

The pre-processing and post-processing scripts share extension points. When you use the script, you need to specify which type of processing you want.

The pre-processing script runs before any other operation, such as value rules and validation rules on an item or category. The post-processing script runs after any other
operation.

Sample code for the pre-processing and post-processing extension point


public class UserCode implements PrePostProcessingFunction
{
public UserCode()
{
}

public void prePostProcessing(ItemPrePostProcessingFunctionArguments inArgs)
{
Item item = inArgs.getItem();
item.setPrimaryKey("FISH");
}

public void prePostProcessing(CollaborationItemPrePostProcessingFunctionArguments inArgs)
{
CollaborationItem item = inArgs.getCollaborationItem();
item.setPrimaryKey("FISH");
}

public void prePostProcessing(CategoryPrePostProcessingFunctionArguments inArgs)
{
Category cat = inArgs.getCategory();
cat.setAttributeValue("attributeInstancePath", "value");
}

public void prePostProcessing(CollaborationCategoryPrePostProcessingFunctionArguments inArgs)
{
CollaborationCategory cat = inArgs.getCollaborationCategory();
cat.setAttributeValue("attributeInstancePath", "value");
}
}

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Sample code for transaction and exception handling in import and report jobs
The following sample code illustrates transactions and exception handling within import and report jobs. The Batchable interface might optionally be used in import and report extension point implementations.

Sample import extension point


Sample code for transactions and exception handling in a Java™ API-based import exit point implementation. This sample uses the Batchable interface for persisting items:

public class sampleImport implements ImportFunction
{
// This function wraps an Exception in a RuntimeException to allow
// ImportFunction to throw an unchecked exception, which is compatible
// with its signature.
private void throwRuntime(Exception e)
{
throw new RuntimeException(e);
}

// This function evaluates the exception to determine if it is critical.
// The code below is just an example, where an exception is deemed
// critical if it is an instance of SQLException.
private boolean isCriticalException(Exception e)
{
Throwable rootCause = e;
Throwable cause = (e == null) ? null : e.getCause();
while (cause != null)
{
rootCause = cause;
cause = cause.getCause();
}
return rootCause instanceof SQLException;
}

// This function must return null if there are no more items.
private Collection<Item> getNextItemSetFromInput()
{
// Read the data from input, create new items and set the values
// using spec map.
}

public void doImport(ImportFunctionArguments inArgs)
{
Catalog catalog = inArgs.getCatalog();
catalog.startBatchProcessing();
Collection<Item> itemSet = null;
Context context = PIMContextFactory.getCurrentContext();

while((itemSet = getNextItemSetFromInput()) != null)
{
for(Item item : itemSet)
{
item.save();
}
try
{
List<ExtendedValidationErrors> errs = catalog.flushBatch();
// Process the failed items as identified from errs.
// Perform any additional work for succeeded items.
context.commit();
}
catch(Exception e)
{
context.rollback();
if (isCriticalException(e))
throwRuntime(e);
// Process all items in the batch as failed
}
finally
{
try
{
context.startTransaction();
}
catch(PIMAlreadyInTransactionException paie)
{
// unexpected exception. re-throw
throwRuntime(paie);
}
}
}
catalog.stopBatchProcessing();
}
}

Sample report extension point


Sample code for transactions and exception handling in a Java API-based report exit point implementation:

public class sampleReport implements ReportFunction
{
// This function wraps an Exception in a RuntimeException.
private void throwRuntime(Exception e)
{
throw new RuntimeException(e);
}

private boolean isCriticalException(Exception e)
{
// evaluate the exception to determine if it is critical
}

// This function must return null if there are no more items.
private Item getNextItemToSave()
{
// Create a new item to be saved as part of this report
}

public void doReport(ReportFunctionArguments inArgs)
{
int numItemsPerTransaction = 200; // Commit every 200
Item item = null;
int numItemsSinceCommit = 0;
Context context = PIMContextFactory.getCurrentContext();

while((item = getNextItemToSave()) != null)
{
try
{
ExtendedValidationErrors errs = item.save();
if (errs != null)
{
// handle the errors
}
else
{
numItemsSinceCommit++;
// perform any additional actions for this item.
}
}
catch(Exception e)
{
if (isCriticalException(e))
throwRuntime(e);
// Process the failure for this item
}
if (numItemsSinceCommit >= numItemsPerTransaction)
{
numItemsSinceCommit = 0;
try
{
context.commit();
}
catch(Exception e)
{
context.rollback();
if (isCriticalException(e))
throwRuntime(e);
// Process failure for all items since last commit
}
finally
{
try
{
context.startTransaction();
}
catch(PIMAlreadyInTransactionException e)
{
// unexpected exception. re-throw
throwRuntime(e);
}
}
}
}
}
}

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Transactions and exception handling guidelines


You can use the Java™ API to start the startTransaction(), commit(), and rollback() methods of the Context interface to run your code within a transaction. Running related code inside a transaction ensures atomic execution of that code. You can also use transactions in long-running jobs, such as import or report jobs, for periodic commit of the changes to the database. Finally, you can use the setSavepoint() and rollbackToSavepoint() methods of the Context interface to perform partial rollbacks within a transaction.

When you are using transaction-related operations, you must take adequate care to ensure that your code does not unintentionally disrupt an already active transaction. For example, if you start commit() or rollback() when the currently active transaction was not started by your code, you cause premature completion of the currently active transaction. Use of savepoints can never disrupt a transaction.

IBM® Product Master prevents disruption of the active transaction by user code in certain extension points to ensure consistency of the product operations that rely on
those transactions. For more information, see Limitations on using transactions within extension points.

When your code that starts Product Master Java API methods receives an exception, it might choose, according to the requirements of the business logic, to handle the
exception or rethrow it to the calling code. However, if the root cause of the exception is an SQLException, and if the code is running within an active transaction, your code
must ensure that the changes that are made by the failing operation are rolled back.

If the user code that received the exception started the transaction, it must complete one of the following two tasks:
Roll back the transaction by starting the rollback() method on the Context interface.
If it previously set a savepoint before the failing operation, use the rollbackToSavepoint() method on the Context interface.
If the user code that received the exception did not start the current transaction and did not set a savepoint before the failing operation in the current transaction, it should rethrow the exception to the calling code. The calling code might be another piece of user code or product code. Eventually, the exception is propagated to the owner of the transaction and that owner rolls back the change.

The inTransaction() method of the context interface can be used to ascertain whether the code is running within an active transaction.
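The following is a minimal sketch of these guidelines, assuming the code runs inside a transaction that it does not own and that item is an entity being updated:

Context ctx = PIMContextFactory.getCurrentContext();
if (ctx.inTransaction())
{
//Protect only this code's updates; savepoints never disrupt the owner's transaction.
String savepoint = ctx.setSavepoint();
try
{
item.save(); //the potentially failing operation
}
catch (Exception e)
{
//Undo only the changes made since the savepoint, then let the
//transaction owner decide whether to commit or roll back.
ctx.rollbackToSavepoint(savepoint);
throw new RuntimeException(e);
}
}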

Limitations on using transactions within extension points


IBM Product Master starts user code for certain extension points inside an active transaction, and expects that the transaction is not disrupted by the user code for
extension point. Product Master throws an exception when an attempt is made to commit or roll back the active transaction within any of these extension points.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Limitations on using transactions within extension points


IBM® Product Master starts user code for certain extension points inside an active transaction, and expects that the transaction is not disrupted by the user code for
extension point. Product Master throws an exception when an attempt is made to commit or roll back the active transaction within any of these extension points.

Currently, it is recommended that you avoid transactional logic in the following extension points:

Value rule
Default value rule
String enumerated value rule
Validation rule script
Post save script
Pre and post-processing scripts

Additionally, if you are using import script or catalog to catalog export script, commit or rollback is prevented if batch processing is being used (enabled by default).
However, transactional operations are not prevented if the script disables the batch processing through the disableBatchProcessingForItems() script operation.

Invocation of script operations useTransaction(), startTransaction(), and Java™ API methods such as Context.commit() and Context.rollback() can cause disruption to
the current transaction. It is recommended that they are not used within these extension points. Neither Context.setSavepoint() nor Context.rollbackToSavepoint() can
disrupt a transaction, so these methods can be used freely within an extension point.

If you are migrating from previous versions of the product, review your extension point implementation for use of these operations.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Debugging Java API code


You can use the following methods to debug your Java™ API code. With the IBM® Product Master Java API, these debugging methods are possible only for web-service-
deployed Java API code and extension point Java API code.

Procedure
Debug your Java API code by using one of the following options:
Print statements
Print statements are sent to the log file of the application server where the product and its web services are running.

System.out.println("Search routine completed successfully.");
System.err.println("Could not contact database.");

Assertions

try
{
Item returnedItem = cat.getItemByPrimaryKey(itemPk);
assert(returnedItem!=null);
}
catch (AssertionError e)
{
System.err.println("Assertion failed. "+e);
}

Java API log file
All PIMExceptions are logged to the logs/appsvr_yourWPCservername directory in your Product Master server installation directory.

User interface for Java IDE debugger support
If you are using an IDE with remote debugging capability, such as RSA (Rational® Software Architect), you can use your source Product Master Java API project (the project that contains the Java API interface) to step through or "breakpoint" the code in your web service when it runs (and examine variables).

a. Log in to the admin console of WebSphere Application Server.
b. Locate the Product Master Enterprise Application, and go to the Debugging Service.
c. Enter a debug port, for example, 7777 in the Debugging screen.
d. Load the RSA project that you used to develop the Java API application (the one that you imported the ccd_javaapi2.jar file to).
e. Create a debug configuration as "Remote Java Application" within your IDE project for the port number that you chose in WebSphere® Application Server. If necessary, change localhost to the IP address of your Product Master server.
f. Set an appropriate breakpoint in your code.
g. Start your debug profile in RSA.
h. Start your web service through the URL method or your Java client.

Extension points code
You can use your extension point code to debug your Java API code.

a. Start Product Master in debug mode. For example, $TOP/bin/go/start_local.sh --debug.
b. Create a Java project with your extension point source files.
c. Go to the Debug Configurations dialog box and create a new remote Java application debug configuration with an appropriate name. Provide the following values in the appropriate fields:
i. In the Connect tab, select your extension points implementation project as the project.
ii. Select Standard (Socket Attach) for the Connection Type.
iii. For the Host field, enter the hostname or IP address.
iv. In the Port field, specify the port that the service is running on. The type of port the service is running on depends on the type of extension point that you are trying to debug. For example, the import extension points are run as part of the scheduler; therefore, the port number should be the port on which the scheduler is running. The run value rule function is run as part of the application server, so the port number should be the one associated with the application server. You can create different debug configurations for different extension points as needed.
v. Click Apply to save the new configuration.
vi. Place a breakpoint in your extension point source code and start the debugger.
vii. Start the extension point.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Advanced programming with Java API


PIMCollection usage, multi-catalog batching support, and savepoints are some of the other topics to consider while programming with the Java™ API.

PIMCollection class: Collections of product entities


In the Java API, you use the class PIMCollection to deal with a set or array of IBM Product Master entities, such as a set of items or a list of user-defined log entries.
Attributes in Java API
For attribute handling, you have an object that points to a particular data value in an item or category that can be used to go to the data values in a tree-like manner.
Committing and backing out units of work
When you update IBM Product Master entities, such as items and user-defined logs, you can group a batch of updates together into a single transaction.
Multi-catalog batching for imports
The Java API supports multi-catalog batching for imports so that you can feed data into multiple catalogs.
Savepoints
Savepoints allow the partial rollback of a transaction, and they operate only within the scope of a transaction.
Custom Resource Bundle
The Java API enables you to fetch custom messages from your custom message XML file.
Using setExitValue API to route the entries in automated steps of a workflow
The following sample extension point code shows how the Java API setExitValue() method can be used to control the routing of entries in an automated workflow step. The Java API moveToNextStep() method cannot be used in such cases because these are automated steps and entries are not accessible through the user interface (so a user cannot select a particular exit value). The following extension point code can be used for non-automated workflow steps as well. The sample code shows how the exit values can be set conditionally.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

PIMCollection class: Collections of product entities


In the Java™ API, you use the class PIMCollection to deal with a set or array of IBM® Product Master entities, such as a set of items or a list of user-defined log entries.

The PIMCollection class is a read-only, immutable specialization of java.util.Collection that is optimized for handling Product Master Server objects such as catalogs,
categories, items, hierarchies, and specs.

This class implements a "lazy lookup", which keeps hold of objects by their internal ID only, and does not try to resolve the ID to a real object until the last possible
moment (such as when you try to retrieve the object from the collection).

The PIMCollection class applies to many entities where only a subset of these entities is required to be shown to users. A PIMCollection class provides a way of working
with a subset of a large set of entities.

The disadvantage of using the PIMCollection class is that in a large, rapidly changing Product Master system, the actual contents of the system might differ from what is shown by the PIMCollection class. For example, an object might get deleted after the collection was created.

The behavior of this class is different from the java.util.Collection class in the following ways:

pimCollection.iterator()
Returns a java.util.Iterator of entities in the collection that exist at the time the iterator is constructed. An entity that was deleted since the collection was
constructed is ignored by .next() and .hasNext(). In this way, you can get a reasonably up-to-date view on the PIMCollection when you construct a new
iterator(). You might experience fewer iterations than expected.
pimCollection.toArray() and pimCollection.size()
These methods always reflect the size of the collection at the time the PIMCollection was constructed. Any entities that were deleted appear as null at the appropriate array index. Your code must be prepared to handle unexpected null values.
pimCollection.toArray()
This method always returns an array of the same size (equal to pimCollection.size()); however, it re-retrieves all entities in the array each time it is called. This behavior has two consequences:

Subsequent calls to toArray() give a more up-to-date view of the data, but might differ from earlier calls if data has changed.
Because toArray() resolves every object in the PIMCollection, it is not recommended for general use on large collections. Performance is poor due to the expense of resolving every object. Use pimCollection.iterator() instead.
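For example, a minimal sketch of the recommended iterator-based traversal, assuming catalog is a Catalog object and that the raw PIMCollection returned by getAllItems() contains Item objects:

PIMCollection items = catalog.getAllItems();
for (Iterator it = items.iterator(); it.hasNext(); )
{
//Entities deleted since the collection was constructed are skipped
//by the iterator, so no null check is needed here.
Item item = (Item) it.next();
System.out.println(item);
}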

Methods that return PIMCollections


PIMCollection items = catalog.getAllItems();

PIMCollection categories = hierarchy.getAllCategories();

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Attributes in Java API


For attribute handling, you have an object that points to a particular data value in an item or category that can be used to go to the data values in a tree-like manner.

There are two types of attributes:

Attribute definition
The metadata for a single attribute in a spec. Each attribute definition can map to a number of attribute instances.
Attribute instance
An attribute instance represents a scripting entry node. Each attribute instance maps to exactly one attribute definition. Each attribute instance represents a
specific attribute or value in an entry. The entries with its attributes display in a tree-like structure. In an attribute instance navigation, the root node is a grouping of
nodes for each spec. Attribute instance has three types: VALUE, GROUP, and MULTI_OCCUR. You can use the AttributeInstance object to detect the type of
attribute instance.

Attribute instances are useful because they allow:

Code to traverse all values inside of an item, such as to convert the values to an XML document or to validate all values.
Restriction: Do not use attribute instances to set and get values. Call the set or get value methods directly, as shown in the sketch after this list.
User interface components to have pointers to values that are stable given that occurrences can be deleted. For example, if you have a string-based path that is called /MySpec/MultiGrouping#1/ChildGrouping#2/StringAttribute, it cannot be used as a stable pointer to get and set StringAttribute because the first occurrence of MultiGrouping (/MySpec/MultiGrouping#0) might be deleted. Attribute instances provide a more sophisticated pointer into the data.
Retrieval of information about inheritance on location attributes. For example, the source location of inherited data can be identified.
Reporting of validation errors. Java™ API results contain an attribute instance with the validation information, rather than a simple string path.
Setting and retrieving related items.
Examining attribute value changes between two versions of an item.
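A minimal sketch of the restriction above, assuming the attribute path MySpec/Name exists in the entry's spec; setAttributeValue() appears in the pre-processing samples earlier, and getAttributeValue() is assumed to be its read counterpart:

//Read and write values directly on the entry, not through AttributeInstance.
item.setAttributeValue("MySpec/Name", "Blue widget");
Object value = item.getAttributeValue("MySpec/Name");
System.out.println("Current value: " + value);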


IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Committing and backing out units of work


When you update IBM® Product Master entities, such as items and user-defined logs, you can group a batch of updates together into a single transaction.

About this task


Unlike the Script API, which uses a transaction by default, each data update in the Java™ API is completed in non-transactional mode unless otherwise specified.
Therefore, each single update (for example, adding to a category or deleting an attribute) is auto-committed as soon as it is run.
To group a number of updates together, you must explicitly open a transaction. The updates are not completed until the transaction is committed, and if the transaction is
rolled back, the updates are abandoned.

You can also complete a partial rollback within a transaction through savepoints. You set a savepoint to capture the state of a transaction at a particular point in time, and
as long as the transaction is active you can roll back to the savepoint and restore the transaction to the saved state. If you roll back to a savepoint, you can commit the
transaction, thus making permanent all the updates made up to the savepoint, or you can make further updates within the transaction and commit those updates as well.

Procedure
Use the following example:
Using a transaction block in the Java API
The following sample code demonstrates how to use a transaction block.

pimContext.startTransaction();
try
{
// perform updates here
// ...
pimContext.commit();
// commit transaction
}
catch (Exception e)
{
pimContext.rollback();
// an error occurred, rollback changes
}

Important: Do not start a transaction if you are already in one.

Safely starting a transaction
To improve code maintainability and error handling, you cannot start a transaction if you are already in one. You must explicitly deal with the exception or prevent it from occurring. You can make your startTransaction() call safe in either of the following ways:

Option 1: Handle the exception.

try
{
pimContext.startTransaction();
}
catch (PIMAlreadyInTransactionException e)
{
// At this point you could:
// - throw an error to give up
// - continue to join the current transaction
// - pimContext.rollback() to rollback existing transaction, then call startTransaction() again
// - pimContext.commit() to commit existing transaction, then call startTransaction() again
}

Option 2: Make the exception impossible.

if (!pimContext.inTransaction())
{
pimContext.startTransaction();
}
else
{
// At this point you could:
// - throw an error to give up
// - continue to join the current transaction
// - pimContext.rollback() to rollback existing transaction, then call startTransaction() again
// - pimContext.commit() to commit existing transaction, then call startTransaction() again
}

Using a savepoint to perform a partial rollback
The following code demonstrates the basic use of a savepoint. The idea is that you want to perform updates C and D atomically, but even if either update fails you still want to commit updates A and B.

pimContext.startTransaction();
try
{
// perform update A
// perform update B
String name = pimContext.setSavepoint();
try
{
// perform update C
// perform update D
}
catch (Exception e)
{
pimContext.rollbackToSavepoint(name);
}
pimContext.commit();
// commits the update of A and B. If C and D were
// both updated without exception then also commits those.
// Otherwise, updates to both C and D are not committed.
}
catch (Exception e)
{
pimContext.rollback();
// an error occurred, rollback all changes
}

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Multi-catalog batching for imports


The Java™ API supports multi-catalog batching for imports so that you can feed data into multiple catalogs.

With Java API and batching, you can:

Load items into multiple catalogs with support for batching.
Return the validation errors of the items that are saved in a batch.

The Batchable interface in the Java API provides methods that allow items to be saved as a batch. Following is sample Java API code for batching:

ctg1.startBatchProcessing();
ctg2.startBatchProcessing();

while( ! done )
{
//during processing loop,
...
if( reach_ctg1_flush_count ) ctg1.flushBatch();
...
if( reach_ctg2_flush_count ) ctg2.flushBatch();
}
ctg1.stopBatchProcessing();
ctg2.stopBatchProcessing();

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Savepoints
Savepoints allow the partial rollback of a transaction, and they operate only within the scope of a transaction.

As an example, think of a transaction as a sequence of operations over time, such as A to M, at the end of which (after M), the transaction is completed by either
committing all the operations A through M or rolling them all back. It is possible to set a savepoint after a particular operation, say C. As long as the transaction is not
complete, you can roll back to the savepoint. This feature preserves the operations up to and including C, and also nullifies the subsequent operations. It is possible to
define a series of savepoints in this manner and roll back to any one of them, which preserves the savepoint that is rolled back to but removes all subsequent savepoints.

If you roll back to a savepoint, it is possible to continue the transaction with more operations, and to set new savepoints and roll back to any defined savepoint. When the
transaction is completed, all savepoints are removed. In effect, setting a savepoint preserves the state of a transaction then (including the definition of the savepoint), and
rolling back to a savepoint restores the transaction to the corresponding state.

Savepoints are implemented in two methods on the Java™ API's Context interface: Context.setSavepoint() and Context.rollbackToSavepoint(name). The former sets a savepoint for the most recent operation (C, in the previous example) and returns a unique name as a String. The latter rolls back to the savepoint indicated by the given name. Both might be called only within an active transaction, and related calls must be within the same transaction. The precise specifications of these methods are included in the documentation of the Java API.
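A minimal sketch of the sequence described above, with the operations reduced to comments:

pimContext.startTransaction();
// perform operations A, B, C
String afterC = pimContext.setSavepoint(); // capture the state after C
// perform operations D through M
pimContext.rollbackToSavepoint(afterC); // nullifies D..M, preserves A..C
// optionally perform more operations and set new savepoints here
pimContext.commit(); // commits A..C; all savepoints are removed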


IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Custom Resource Bundle


The Java™ API enables you to fetch custom messages from your custom message XML file.

To use a custom message, add the message in the following format to the resource XML file stored in the $TOP/locales/xy_XY/custom_resource_bundle directory, where
xy is the language identifier and XY is a valid region identifier for the language. The message is picked up by the Java API com.ibm.pim.utils.MessageBundle automatically,
and is then available for your code. In the sample code, the resource XML file is in the $TOP/locales/en_US/custom_resource_bundle directory to provide the custom
resource bundle in English (United States) locale:

<?xml version="1.0" encoding="UTF-8"?>


<trigoResources locale="en_US" description="English (US)" version="1.0">
<message id="_home_markdown_jenkins_workspace_Transform_in_SSADN3_12.0.0_dev_apps_code_java_con_custresbundle_MESSAGE_ID01"><!
[CDATA[Hello, World!]]></message>
<message id="_home_markdown_jenkins_workspace_Transform_in_SSADN3_12.0.0_dev_apps_code_java_con_custresbundle_MESSAGE_ID02"><!
[CDATA[Dear {0}, Welcome to IBM Product Master]]></message>
</trigoResources>

You can access the custom message through the following sample code:

Context pimctx = PIMContextFactory.getContext("Admin","trinitron","trigo");


String message = pimctx.getMessageBundle().getMessage("CUSTOM.MESSAGE_ID02",new String [] {"Trinitron"});

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Using setExitValue API to route the entries in automated steps of a workflow

The following sample extension point code shows how the Java API setExitValue() method can be used to control the routing of entries in an automated workflow step. The Java API moveToNextStep() method cannot be used in such cases because these are automated steps and the entries are not accessible through the user interface (so a user cannot choose a particular exit value). The following extension point code can be used for non-automated workflow steps as well. The sample code shows how the exit values can be set conditionally.

public class SetExitValueSample implements WorkflowStepFunction


{
public void in(WorkflowStepFunctionArguments inArgs)
{
CollaborationStepTransitionConfiguration colConfig = inArgs.getTransitionConfiguration();
CollaborationStep currentStep = inArgs.getCollaborationStep();

PIMCollection<CollaborationItem> itemList = inArgs.getItems();


for (CollaborationItem collabItem : itemList)
{
boolean useExitValue1 = true; //Replace with code that decides which exit value to use
if(useExitValue1) {
//use the first exit value so that items use this path
colConfig.setExitValue(collabItem, currentStep.getWorkflowStep().createExitValue("DONE1"));
}
else {
//else, use the second exit value so that items use this path
colConfig.setExitValue(collabItem, currentStep.getWorkflowStep().createExitValue("DONE2"));
}
}
}

public void out(WorkflowStepFunctionArguments inArgs) {


// TODO Auto-generated method stub
}

public void timeout(WorkflowStepFunctionArguments inArgs) {


// TODO Auto-generated method stub
}
}

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Java API code best practices


Best practices for working with the Java™ API include reusing objects, catching and handling exceptions, and using JUnit to test the code.

Reuse objects when possible
Java APIs are wrappers over internal product code. In many cases, calling a Java API method can cause queries to the database. Therefore, to reduce the number of database queries, reuse objects when possible instead of creating a new object each time (see the sketch after this list).
Watch for specific product behaviors
Because the Java APIs are a wrapper on the core product function, the Java API typically inherits the same behavior as the product.
Use JUnit to test your Java API code
If you use JUnit, you can also run code coverage tools such as EMMA against your Java API code.
Use standard Java best practices for coding with the Java API
Guidelines about using specific data structures in Java are equally relevant for Java API use.
Catch and handle exceptions
Because several of the exceptions are runtime exceptions, your code needs to catch them and add handling code as needed. See the Javadoc for the exceptions that are thrown by each method.
Development and deployment of extension point classes
If you are using a single JAR file to deploy the extension point classes, it is a best practice to designate one person as the 'build' specialist who is responsible for building and deploying the entire JAR file and doing the necessary system restarts.
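
The following sketch, a minimal illustration rather than a definitive implementation, combines the first two practices above: the Catalog object is fetched once and reused across iterations, and runtime exceptions are caught per item. The catalog name, credentials, input list, and error reporting are illustrative.

Context ctx = PIMContextFactory.getContext("Admin", "password", "company");
Catalog catalog = ctx.getCatalogManager().getCatalog("MyCatalog"); // fetched once, reused below

for (String pk : primaryKeys) // illustrative input
{
    try
    {
        Item item = catalog.getItemByPrimaryKey(pk); // reuses the cached Catalog object
        // ... process the item ...
    }
    catch (RuntimeException e)
    {
        // Several Java API exceptions are runtime exceptions; add handling code as needed
        System.err.println("Failed to process " + pk + ": " + e.getMessage());
    }
}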

For more information, see the "Javadoc" section in the Reference.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Java API migration


You can migrate code that is written with Product Master Script API to use the Java™ APIs.

Migration from script to Java


If you are familiar with the Script API, you can use the following migration tables to see the associated Java API for each equivalent script operation. Migration
tables are provided for each component.
Sample Java code outside of Java API
You can use sample Java classes and methods from outside of the Java API for commonly used operations for which built-in support was available in the Script API. To help you migrate to the new Java API, the following sample code is provided as-is.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Migration from script to Java


If you are familiar with the Script API, you can use the following migration tables to see the associated Java™ API for each equivalent script operation. Migration tables are
provided for each component.

Attribute collection operations - script to Java migration


The migration tables list the script operations that map to Attribute Collection Java API methods.
Attribute operations - script to Java migration
The migration tables list the script operations that map to Attribute Java API methods.
Catalog operations - script to Java migration
The migration tables list the script operations that map to Catalog Java API methods.
Category operations - script to Java migration
The migration tables list the script operations that map to Category Java API methods.
Collaboration area operations - script to Java migration
The migration tables list the script operations that map to Collaboration area Java API methods.
Currency operations - script to Java migration
For the Currency operations, not all of the script operations from the Script API are implemented in the Java API. Alternative Java code is provided for those script
operations that are not implemented in the Java API.
Database script operations - script to Java migration
For the Database operations, not all of the script operations from the Script API are implemented in the Java API. Alternative Java code is provided for those script
operations that are not implemented in the Java API.
Date operations - script to Java migration
For the Date operations, not all of the script operations from the Script API are implemented in the Java API. Alternative Java code is provided for those script
operations that are not implemented in the Java API.
Distribution operations - script to Java migration
For the Distribution operations, not all of the script operations from the Script API are implemented in the Java API. Alternative Java code is provided for those
script operations that are not implemented in the Java API.
Environment operations - script to Java migration
The migration tables list the script operations that map to Environment Java API methods.
Excel operations - script to Java migration
For the Excel operations, not all of the script operations from the Script API are implemented in the Java API. Alternative Java code is provided for those script
operations that are not implemented in the Java API.
Hierarchy operations - script to Java migration
The migration tables list the script operations that map to Hierarchy Java API methods.
Item operations - script to Java migration
The migration tables list the script operations that map to Item Java API methods.

Job operations - script to Java migration
The migration tables list the script operations that map to Job Java API methods.
JMS operations - script to Java migration
For the JMS operations, not all of the script operations from the Script API are implemented in the Java API. Alternative Java code is provided for those script
operations that are not implemented in the Java API.
Locale operations - script to Java migration
For the Locale operations, not all of the script operations from the Script API are implemented in the Java API. Alternative Java code is provided for those script
operations that are not implemented in the Java API.
LookupTable operations - script to Java migration
The migration tables list the script operations that map to LookupTable Java API methods.
Math operations - script to Java migration
For the Math operations, not all of the script operations from the Script API are implemented in the Java API. Alternative Java code is provided for those script
operations that are not implemented in the Java API.
MQ operations - script to Java migration
For the MQ operations, not all of the script operations from the Script API are implemented in the Java API. Alternative Java code is provided for those script
operations that are not implemented in the Java API.
Number operations - script to Java migration
For the Number operations, not all of the script operations from the Script API are implemented in the Java API. Alternative Java code is provided for those script
operations that are not implemented in the Java API.
Organization operations - script to Java migration
The migration tables list the script operations that map to Organization Java API methods.
RE operations - script to Java migration
For the RE operations, not all of the script operations from the Script API are implemented in the Java API. Alternative Java code is provided for those script
operations that are not implemented in the Java API.
Reader operations - script to Java migration
For the Reader operations, not all of the script operations from the Script API are implemented in the Java API. Alternative Java code is provided for those script
operations that are not implemented in the Java API.
Search operations - script to Java migration
The migration tables list the script operations that map to Search Java API methods.
Selections operations - script to Java migration
The migration tables list the script operations that map to Selections Java API methods.
Spec operations - script to Java migration
The migration tables list the script operations that map to Spec Java API methods.
String manipulations operations - script to Java migration
For the String operations, not all of the script operations from the Script API are implemented in the Java API. Alternative Java code is provided for those script
operations that are not implemented in the Java API.
System-Utils operations - script to Java migration
The migration tables list the script operations that map to System-Utils Java API methods.
Timezone operations - script to Java migration
For the Timezone operations, not all of the script operations from the Script API are implemented in the Java API. Alternative Java code is provided for those script
operations that are not implemented in the Java API.
User defined log operations - script to Java migration
The migration tables list the script operations that map to User defined log Java API methods.
Workflow operations - script to Java migration
The migration tables list the script operations that map to Workflow Java API methods.
Writer operations - script to Java migration
For the Writer operations, not all of the script operations from the Script API are implemented in the Java API. Alternative Java code is provided for those script
operations that are not implemented in the Java API.
XML operations - script to Java migration
For the XML operations, not all of the script operations from the Script API are implemented in the Java API, but you can use a basic XML parser instead.
Zip operations - script to Java migration
For the Zip operations, not all of the script operations from the Script API are implemented in the Java API. Alternative Java code is provided for those script
operations that are not implemented in the Java API.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Attribute collection operations - script to Java migration


The migration tables list the script operations that map to Attribute Collection Java™ API methods.

Attribute Collection
These script operations can be mapped to the following Attribute Collection Java API methods.
Table 1. Script operations that map to the
Attribute Collection Java API methods
Script operation Java method
addAttributeToAttrGroup addAttribute
addLocalesToAttrGroup addLocaleRestriction
setLocaleRestrictions
addSpecToAttrGroup addAllAttributes
deleteAttrGroup delete
getAllAttributePathsFromAttrGroup getAttributes

getAttrGroupName getName
getLocalesOfAttrGroup getLocaleRestrictions
removeAttributeFromAttrGroup removeAttribute
removeSpecFromAttrGroup removeSpec

Attribute Collection Manager


These script operations can be mapped to the following Attribute Collection Manager Java API methods.
Table 2. Script operations that map to the
Attribute Collection Manager Java API methods
Script operation Java method
new AttrGroup createAttributeCollection
getAllAttrGroupsForAttribute getAttributeCollections
getAttrGroupByName getAttributeCollection

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Attribute operations - script to Java migration


The migration tables list the script operations that map to Attribute Java™ API methods.

AttributeChanges
These script operations can be mapped to the following AttributeChanges Java API methods.
Table 1. Script operations that map to the AttributeChanges Java API methods
Script operation Java method
getAddedAttributePathsNewEntry getNewlyAddedAttributes
getDeletedAttributePathsOldEntry getDeletedAttributes
ExtendedAttributeChanges.getDeletedLocationAttributes
getModifiedAttributePathsNewEntry getModifiedAttributesWithNewData
ExtendedAttributeChanges.getModifiedLocationAttributesWithNewData
getModifiedAttributePathsOldEntry getModifiedAttributesWithOldData
ExtendedAttributeChanges.getModifiedLocationAttributesWithOldData
getLocationsAddedAvailability ExtendedAttributeChanges.getLocationsAddedAsAvailable
getLocationsRemovedAvailability ExtendedAttributeChanges.getLocationsRemovedAsAvailable
getLocationsHavingChangedData ExtendedAttributeChanges.getLocationsHavingChangedData
getLocationsChangedToHaveData ExtendedAttributeChanges.getLocationsChangedToHaveData
getLocationsChangedToHaveNoData ExtendedAttributeChanges.getLocationsChangedToNotHaveData

Attribute Instance
These script operations can be mapped to the following Attribute Instance Java API methods.
Table 2. Script operations that map to the Attribute Instance Java
API methods
Script operation Java method
EntryNode.getEntryNodeExactPath getPath
EntryNode.getEntryNodeValue getValue
EntryNode.setEntryNodeValue setValue
EntryNode.populateNonPersistedForEntryNode setValueByExecutingNonPersistedAttributeRule
Item.clearAttribute setValue(null)
EntryNode.getPossibleEntryNodeValues getPossibleValues
EntryNode.setEntryNodeRelationshipValue setValue
EntryNode.setEntryNodeRelationshipValueUsingItem setValue
EntryNode.getEntryNodeChildren getChildren
EntryNode.getParent getParent
addAttributeOccurrence getOccurrenceIndex
deleteAttributeOccurrence canAddOccurrence
addOccurrence
removeOccurrence
EntryNode.getEntryNodePath AttributeDefinition.getPath()

AttributeOwner
These script operations can be mapped to the following AttributeOwner Java API methods.
Table 3. Script operations that map to the
AttributeOwner Java API methods
Script operation Java method
Entry.setEntryAttrib setAttributeValue
Entry.getEntryAttrib getAttributeValue
EntryNode.getEntryChangedData getChangesComparedTo
EntryNode.getEntryChangedDataSinceLastSave getChangesSinceLastSave

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Catalog operations - script to Java migration


The migration tables list the script operations that map to Catalog Java™ API methods.

Catalog
These script operations can be mapped to the following Catalog Java API methods.
Table 1. Script operations that map to the Catalog Java API methods
Script operation Java method
defineLocationSpecificData addLocationDataConfiguration
addSecondaryCategoryTree addSecondaryHierarchy
containsByPrimaryKey containsItem
insertNewVersion createVersion
deleteCatalog deleteAsynchronous
getCatalogAccessControlGroupName getAccessControlGroup
getCtgCategorySpecs getCategorySpecs
setContainerProperties getItemBuildScript
ctg.getCtgItemByPrimaryKey getItemByPrimaryKey
getItemSetForCatalog getItems
getItemCollectionByPK getItemsByPrimaryKeys
getCategoryCollectionByPK
getCtgItemByAttributeValue getItemsWithAttributeValue
defineLocationSpecificData getLocationDataConfigurations
getCatalogCategoryTrees getLocationHierarchies
getCtgName getName
setContainerProperties getPostSaveScript
getPostScript
getPreScript
getCtgSpec getSpec
getItemSetForUnassigned getUnassignedItems
getUserDefinedLog getUserDefinedLog
getCatalogVersion getVersionInfo
isEntryCheckedOutForPrimaryKey isItemCheckedOut
isOrdered isOrdered
linkCatalog link
saveCatalog save
setOrdered setOrdered
setContainerProperties setPostSaveScript
setPostScript
setPreScript
UserDefinedLog Container::newUserDefinedLog(String name, String description, Boolean isRunningLog) Catalog.createUserDefinedLog(String name, boolean runningLog)

CatalogManager
These script operations can be mapped to the following CatalogManager Java API methods.
Table 2. Script operations that map to
the CatalogManager Java API methods
Script operation Java method
new Catalog createCatalog
getCtgByName getCatalog
getCatalogVersionSummary getVersionInfo



IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Category operations - script to Java migration


The migration tables list the script operations that map to Category Java™ API methods.

Category
These script operations can be mapped to the following Category Java API methods.
Table 1. Script operations that map to the Category Java API methods
Script operation Java method
Boolean Category::addChildCategory(Category childCategory) addChild(Category childCategory)
void Category::addItemSecondarySpecToCategory(String sSpecName [, Catalog[] ctgs]) addItemSecondarySpec(SecondarySpec spec, boolean addToChildCategories, boolean addAcrossMapping); addItemSecondarySpec(SecondarySpec spec, Catalog[] ctgs, boolean addToChildCategories, boolean addAcrossMapping)
void Category::addSecondarySpecToCategory(String sSpecName [, Boolean bAddToPicture]) addSecondarySpec(SecondarySpec spec)
Category[] Category::getCategoryChildren([Boolean ordered, Catalog catalog, Boolean restrictToSubtreeWithItems]) getChildren(); getChildren(Catalog catalog, boolean ordered, boolean restrictToSubtreeWithItems)
String Category::getCategoryCode() getPrimaryKey()
Entry::getCheckedOutEntryColAreas() getCollaborationAreas()
CategorySet Category::getDescendentCategorySetForCategory([Boolean bReadonly]) getDescendents()
String[] Category::getFullPaths([String sDelimiter] [, boolean bWithRootName]) getFullPaths(String delimiter, boolean includeRootName)
Catalog::getItemsInCategory(Category cat, Boolean ordered) PIMCollection getItems(Catalog catalog); PIMCollection getItems(Catalog catalog, boolean ordered)
Spec[] Category::getItemSecondarySpecsForCategory([Catalog ctg]) getItemSecondarySpecs()
Integer[] Category::getCategoryLevels() getLevels()
Category Category::getCategoryParent([CategoryCache cat_cache]) getParent()
Category[] Category::getCategoryParents() getParents()
Spec[] Category::getSecondarySpecsForCategory() getSecondarySpecs()
Boolean Category::getCategoryHasChildren() hasChildren()
Entry::isEntryCheckedOut() isCheckedOut()
void Category::mapCategoryToOrganizations(Category[] categories [, boolean bAdd]) mapToOrganization(Organization)
void Category::removeChildCategory(String categoryName) removeChild(Category)
void Category::removeItemSecondarySpecFromCategory(String sSpecName) removeItemSecondarySpec(SecondarySpec spec)
void Category::removeSecondarySpecFromCategory(String sSpecName) removeSecondarySpec(SecondarySpec spec)
Category::mapCategoryToOrganizations(Category[] categories [, boolean bAdd]) Category::mapToOrganization(Organization org)

IBM Product Master 12.0 Fix Pack 8



Operating Systems: AIX, Linux, and Windows (Workbench only)

Collaboration area operations - script to Java migration


The migration tables list the script operations that map to Collaboration area Java™ API methods.

Catalog
These script operations can be mapped to the following Catalog Java API methods.
Table 1. Script operations that map to the Catalog Java API
methods
Script operation Java method
Container.isEntryCheckedOutForPrimaryKey isItemCheckedOut(String pk)

Category
These script operations can be mapped to the following Category Java API methods.
Table 2. Script operations that map to the Category Java API
methods
Script operation Java method
getCheckedOutEntryColAreas getCollaborationAreas()
Container.getCheckedOutEntryColAreasByPrimaryKey getCollaborationAreas
Entry.isEntryCheckedOut isCheckedOut

CategoryCollaborationArea
These script operations can be mapped to the following CategoryCollaborationArea Java API methods.
Table 3. Script operations that map to the
CategoryCollaborationArea Java API methods
Script operation Java method
getColAreaSrcContainer getSourceHierarchy
checkoutEntries checkout
checkoutEntry checkoutAndWaitForStatus
dropEntries drop
dropEntry
Container.getEntryByPrimaryKey getCheckedOutCategory(String primaryKey)
Container.getEntrySetForPrimaryKeys getCheckedOutCategories(List<String> primaryKeys)
getEntries getCheckedOutCategories
getCountOfEntriesInColArea getNumberOfCheckedOutCategories
publishEntriesToSrcContainer interimCheckin
addEntryIntoColArea addIntoToCollaborationArea
moveEntryToNextStep moveToNextStep
moveEntriesToNextStep

CollaborationArea
These script operations can be mapped to the following CollaborationArea Java API methods.
Table 4. Script operations that map to the CollaborationArea Java
API methods
Script operation Java method
getColAreaSrcContainer.getContainerType getType
getColAreaWorkflow getWorkflow
getColAreaAdminRoles getAdministrators
getColAreaAdminUsers
setColAreaAdminRoles setAdministrators
setColAreaAdminUsers
Container.disableContainerProcessingOptions getProcessingOptions
Container.setAttributeGroupsToProcess ProcessingOptions

Container.setContainerAttribute setPostScript
Container.setContainerProperties setPreScript
setPostSaveScript
setBuildScript
setUserDefinedAttributeCollection
getColAreaName getName
getWflStepsForRole getSteps(Performer)
getWflStepsForUser
isColAreaLocked isLocked
lockColArea lock
unlockColArea unlock
saveColArea save
setAccessControlGroup setAccessControlGroup

CollaborationAreaManager
These script operations can be mapped to the following CollaborationAreaManager Java API methods.
Table 5. Script operations that map to the
CollaborationAreaManager Java API methods
Script operation Java method
CollaborationArea() createItemCollaborationArea
createCategoryCollaborationArea
getColAreaNamesForUser getCollaborationAreas(Performer)
getColAreaNamesForRole
getColAreaByName getCollaborationArea

CollaborationCategory
These script operations can be mapped to the following CollaborationCategory Java API methods.
Table 6. Script operations that map to the
CollaborationCategory Java API methods
Script operation Java method
collabArea.getColAreaSrcContainer() getSourceCategory
srcContainer.getCategory(pk)

CollaborationItem
These script operations can be mapped to the following CollaborationItem Java API methods.
Table 7. Script operations that map to the
CollaborationItem Java API methods
Script operation Java method
collabArea.getColAreaSrcContainer() getSourceItem
srcContainer.getItem(pk)

CollaborationHistoryEvent
These script operations can be mapped to the following CollaborationHistoryEvent Java API methods.
Table 8. Script operations that map to the
CollaborationHistoryEvent Java API methods
Script operation Java method
getColAreaHistoryDate getDate
getColAreaHistoryEntryKey getPrimaryKey
getColAreaHistoryEventType getType
getColAreaHistoryEventAttribute getComment
getColAreaHistoryUser getUser
getColAreaHistoryStepPath getCollaborationStep

CollaborationStep
These script operations can be mapped to the following CollaborationStep Java API methods.
Table 9. Script operations that map to the CollaborationStep Java API
methods
Script operation Java method
getEntriesInStep getContents
getEntryInStep
getCountOfEntriesInColAreaStep getContents().size
getItemsInStepByAttributeValue getContentsByAttributeValue
getItemsInStepBySelection getContents(Selection)

getReservedEntriesInStep getReservedObjects
getUsernameForReservedEntryInStep getUserWhoReserved(CollaborationObject)
getStepEntryTimeout getTimeout
setStepEntryTimeout setTimeout
getWflStepItemsByAttributeValue getContentsByAttributeValue
releaseEntryInStep release
reserveEntryInStep reserve

Hierarchy
These script operations can be mapped to the following Hierarchy Java API methods.
Table 10. Script operations that map to the Hierarchy Java API
methods
Script operation Java method
Container.isEntryCheckedOutForPrimaryKey isCategoryCheckedOut(String pk)

Item
These script operations can be mapped to the following Item Java API methods.
Table 11. Script operations that map to the Item
Java API methods
Script operation Java method
getCheckedOutEntryColAreas getCollaborationAreas()
Container.getCheckedOutEntryColAreasByPrimaryKey getCollaborationAreas
Entry.isEntryCheckedOut isCheckedOut

ItemCollaborationArea
These script operations can be mapped to the following ItemCollaborationArea Java API methods.
Table 12. Script operations that map to the ItemCollaborationArea Java
API methods
Script operation Java method
moveEntriesToColArea moveItemsToOtherCollaborationArea
moveEntryToColArea moveItemToOtherCollaborationArea
getColAreaSrcContainer getSourceCatalog
checkoutEntries checkout
checkoutEntry checkoutAndWaitForStatus
dropEntries drop
dropEntry
Container.getEntryByPrimaryKey getCheckedOutItem(String primaryKey)
Container.getEntrySetForPrimaryKeys getCheckedOutItems(List<String> primaryKeys);
getCountOfEntriesInColArea getNumberOfCheckedOutItems
publishEntriesToSrcContainer interimCheckin
getEntries getCheckedOutItems()
addEntryIntoColArea addIntoToCollaborationArea
moveEntryToNextStep moveToNextStep
moveEntriesToNextStep

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Currency operations - script to Java migration


For the Currency operations, not all of the script operations from the Script API are implemented in the Java™ API. Alternative Java code is provided for those script
operations that are not implemented in the Java API.

Table 1. Script operations and alternative Java code for the Currency operations
Script operation Alternative Java code
getCurrencySymbolByCode() com.ibm.icu.util.Currency currency = getCurrency(currencyCode); return currency.getSymbol(Locale);
getCurrencyDescByCode() com.ibm.icu.util.Currency currency = getCurrency(currencyCode); return currency.getName(userLocale, Currency.LONG_NAME, new boolean[1]);
getCompanyCurrencies() Company.getCurrencies()
setCompanyCurrencies() Company.addCurrencies(List<Currency> currencies); Company.removeCurrencies(List<Currency> currencies)
getAllCurrencies() OrganizationManager.getAllCurrencies()

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Database script operations - script to Java migration


For the Database operations, not all of the script operations from the Script API are implemented in the Java™ API. Alternative Java code is provided for those script
operations that are not implemented in the Java API.

The following database script operations can be achieved by using standard JDBC connectivity, as shown in the sketch after this list.

getWPCDBContext()
getWPCDBConnection()
releaseWPCDBConnection(Connection conn)
openJDBCConnection()
releaseJDBCConnection()
loadJar()
Commit()
rollback()
executeQuery()
executeUpdate()
executeBatchUpdate()
next()
getColumn()
getColumnAt()
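
A minimal sketch of the equivalent JDBC flow follows; the driver URL, credentials, and SQL are placeholders, and production code needs fuller error handling.

import java.sql.*;

try (Connection conn = DriverManager.getConnection(
         "jdbc:db2://host:50000/PIMDB", "user", "password");      // placeholder URL and credentials
     PreparedStatement stmt = conn.prepareStatement(
         "SELECT COL1 FROM MY_TABLE WHERE COL2 = ?"))             // placeholder SQL
{
    conn.setAutoCommit(false);                                    // manual commit/rollback, as in the script API
    stmt.setString(1, "value");
    try (ResultSet rs = stmt.executeQuery())                      // replaces executeQuery()
    {
        while (rs.next())                                         // replaces next()
        {
            String col1 = rs.getString("COL1");                   // replaces getColumn()/getColumnAt()
        }
    }
    conn.commit();                                                // replaces Commit()
}
catch (SQLException e)
{
    // roll back and handle as needed; replaces rollback()
}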

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Date operations - script to Java migration


For the Date operations, not all of the script operations from the Script API are implemented in the Java™ API. Alternative Java code is provided for those script operations
that are not implemented in the Java API.

Table 1. Script operations and alternative Java code for the Date operations
Script operation Alternative Java code
today() new java.util.Date();
new Date() java.text.SimpleDateFormat sdfFormat = new java.text.SimpleDateFormat(sFormat, locale); sdfFormat.parse(sDate);
isDateBefore() java.util.Date.before(secondDate);
isDateAfter() java.util.Date.after(secondDate);
formatDate() new com.ibm.icu.text.SimpleDateFormat(pattern, locale).format(date);
parseDate() com.ibm.icu.text.SimpleDateFormat dateFormat = new SimpleDateFormat(pattern, locale); Date date = dateFormat.parse(stringDate, new ParsePosition(0));
addDate Calendar cal = java.util.Calendar.getInstance(); cal.setTime(date); cal.add(Calendar.YEAR, value);
getDateField java.util.Calendar.get();
setDateField java.util.Calendar.set();
getTime new Integer((int)(date.getTime()/1000));
getDateTimeInUserTimeZone User.getTimeZoneOffset After obtaining the time zone offset, convert the int value to hh:mm format and use TimeZone.getTimeZone("GMT+5:30"). Use a Calendar object to convert the time to the server's time zone.
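
For example, the parseDate and formatDate replacements combine as follows; the pattern, locale, and date string are illustrative.

import com.ibm.icu.text.SimpleDateFormat;
import java.text.ParsePosition;
import java.util.Date;
import java.util.Locale;

SimpleDateFormat parser = new SimpleDateFormat("yyyy-MM-dd", Locale.US);        // illustrative pattern and locale
Date date = parser.parse("2023-06-15", new ParsePosition(0));                   // replaces parseDate()
String formatted = new SimpleDateFormat("dd MMM yyyy", Locale.US).format(date); // replaces formatDate()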

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Distribution operations - script to Java migration


For the Distribution operations, not all of the script operations from the Script API are implemented in the Java™ API. Alternative Java code is provided for those script
operations that are not implemented in the Java API.

The following distribution script operations can be achieved by using external JARs.

getFtp
sendFtp



The following distribution script operations can be achieved by using the java.net package.

getFullHTTPResponse
getHTTPResponse
saveMultipartRequestData
sendHttp
sendHttpString
sendMultipartPost

The following distribution script operations can be achieved by using the javax.mail.* package, as shown in the sketch after this list.

sendEmail
sendHTMLEmail
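
A minimal sketch of a sendEmail replacement using the javax.mail API follows; the SMTP host, addresses, and message content are placeholders.

import java.util.Properties;
import javax.mail.*;
import javax.mail.internet.*;

Properties props = new Properties();
props.put("mail.smtp.host", "smtp.example.com");         // placeholder SMTP host
Session session = Session.getInstance(props);

Message message = new MimeMessage(session);
message.setFrom(new InternetAddress("pim@example.com")); // placeholder addresses
message.setRecipients(Message.RecipientType.TO, InternetAddress.parse("user@example.com"));
message.setSubject("Export complete");
message.setText("The catalog export finished successfully.");
Transport.send(message);                                 // replaces sendEmail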

The following distribution Java methods can be achieved by using the javax.servlet.* package.

setHttpServletResponseHeader
setHttpServletResponseStatus

RoutingManager
For the Distribution script operations, the following alternative Java code is provided.
Table 1. Script operations and alternative Java code for the Distribution operations
Script operation Alternative Java code
getDistributionByName getDistribution
createDataSource createDataSource
Distribution createDistribution
String createDataSource(String name, String type [, HashMap extraAttribs]) DataSource createDataSource(String name, String type)
Distribution new Distribution(String name, String type [, HashMap extraAttribs]) Distribution createDistribution(String name, String type)
getDistributionByName(String name) getDistribution (String name)

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Environment operations - script to Java migration


The migration tables list the script operations that map to Environment Java™ API methods.

EnvironmentExporter
These script operations can be mapped to the following EnvironmentExporter Java API methods.
Table 1. Script operations that map to the EnvironmentExporter Java API methods
Script operation Java method
new EnvObjectList() createExportList()
exportEnv(EnvObjectList envObjList, String sDocFilePath) export(ExportList exportList, String documentPath)

EnvironmentImporter
These script operations can be mapped to the following EnvironmentImporter Java API methods.
Table 2. Script operations that map to the EnvironmentImporter Java API
methods
Script operation Java method
importEnv(String sDocFilePath, [bFromFileSystem]) importEnvironment(Document document)
importEnv(String sDocFilePath, [bFromFileSystem]) importEnvironment(String documentPath)

ExportList
These script operations can be mapped to the following ExportList Java API methods.
Table 3. Script operations that map to the ExportList Java API methods
Script operation Java method
void EnvObjectList::addObjectByNameToExport(String sEntityName [, String sObjectType [, String sActionMode]]) addObject(Type type, Object object, ActionMode actionMode)
void EnvObjectList::setHierarchyMapToExport(String sourceHierarchy, String destHierarchy [, String sActionMode]) addObject(Type type, Object object, ActionMode actionMode)
void EnvObjectList::setCatalogByNameToExport(String sCatalog) addContent(Type type, Object object)
void EnvObjectList::setHierarchyByNameToExport(String sHierarchy) addContent(Type type, Object object)

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Excel operations - script to Java migration


For the Excel operations, not all of the script operations from the Script API are implemented in the Java™ API. Alternative Java code is provided for those script
operations that are not implemented in the Java API.

The Excel methods can be achieved by using the HSSF library, as shown in the following table and the sketch after it:


Table 1. Script operations and alternative Java code for the Excel operations
Script operation Alternative Java code
createExcelCell HSSFCell hssfCell = row.createCell(columnIndex);
createExcelCellStyle HSSFCellStyle hssfCellStyle = workBook.createCellStyle();
createExcelSheet HSSFSheet sheet = workBook.createSheet();
createFont HSSFFont hssfFont = workBook.createFont();
createRow HSSFRow hssfRow = sheet.createRow(rowIndex);
ExcelBook Load com.ibm.ccd.common.excel.impl.hssf.HssfIExcelBookFactory, then invoke createIExcelBook()
getCellObj HSSFRow.getCell(index)
getDateCellValue HSSFCell.getDateCellValue();
getDateFromDoubleValue HSSFDateUtil.getJavaDate(argument.doubleValue());
getExcelCell HSSFRow row = sheet.getRow(iRowIndex); HSSFCell cell = row.getCell((short)iColIndex); cell.getStringCellValue()
getExcelCellEncoding HSSFCell.getEncoding()
getExcelCellType HSSFCell.getCellType()
getExcelRow HSSFSheet.getRow(rowIndex.intValue());
getExcelSheet HSSFWorkbook.getSheetAt(int) or HSSFWorkbook.getSheet(String)
getExcelSheets HSSFWorkbook.getNumberOfSheets(); iterate through each of them by using HSSFWorkbook.getSheetAt(i);
getFirstCellNum HSSFRow.getFirstCellNum()
getFirstRowNum HSSFSheet.getFirstRowNum()
getLastCellNum HSSFRow.getLastCellNum()
getLastRowNum HSSFSheet.getLastRowNum()
getNbRows HSSFSheet.getPhysicalNumberOfRows()
getNumericCellValue HSSFCell.getNumericCellValue();
getStringCellValue HSSFCell.getStringCellValue();
saveToDocStore Upload by using the exposed docstore APIs
setAlignment HSSFCellStyle.setAlignment()
setBoldWeight HSSFFont.setBoldweight()
setBorderBottom HSSFCellStyle.setBorderBottom
setBorderLeft HSSFCellStyle.setBorderLeft
setBorderRight HSSFCellStyle.setBorderRight
setBorderTop HSSFCellStyle.setBorderTop
setBottomBorderColor HSSFCellStyle.setBottomBorderColor
setCellType HSSFCell.setCellType();
setColor HSSFFont.setColor(color)
setDataFormat HSSFCellStyle.setDataFormat()
setDateCellValue HSSFCell.setCellValue();
setExcelStyle HSSFCell.setCellStyle();
setFillBackgroundColor HSSFCellStyle.setFillBackgroundColor()
setFillForegroundColor HSSFCellStyle.setFillForegroundColor()
setFillPattern HSSFCellStyle.setFillPattern()
setFont HSSFCellStyle.setFont()
setFontHeight HSSFFont.setFontHeight()
setFontName HSSFFont.setFontName(sFontName);
setIndention HSSFCellStyle.setIndention()
setItalic HSSFFont.setItalic()
setLeftBorderColor HSSFCellStyle.setLeftBorderColor
setNumericCellValue HSSFCell.setCellValue()
setRightBorderColor HSSFCellStyle.setRightBorderColor
setStrikeout HSSFFont.setStrikeout()
setStringCellValue HSSFCell.setCellValue()
setTopBorderColor HSSFCellStyle.setTopBorderColor()
setUnderline HSSFFont.setUnderline()
setVerticalAlignment HSSFCellStyle.setVerticalAlignment()
setWrapText HSSFCellStyle.setWrapText()
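
The following sketch strings several of these HSSF calls together to create and populate a worksheet; the sheet name, cell contents, and file name are illustrative, and error handling is omitted.

import java.io.FileOutputStream;
import org.apache.poi.hssf.usermodel.*;

HSSFWorkbook workBook = new HSSFWorkbook();
HSSFSheet sheet = workBook.createSheet("Items");          // replaces createExcelSheet
HSSFRow row = sheet.createRow(0);                          // replaces createRow
HSSFCell cell = row.createCell(0);                         // replaces createExcelCell
cell.setCellValue("Item name");                            // replaces setStringCellValue

HSSFCellStyle style = workBook.createCellStyle();          // replaces createExcelCellStyle
HSSFFont font = workBook.createFont();                     // replaces createFont
style.setFont(font);                                       // replaces setFont
cell.setCellStyle(style);                                  // replaces setExcelStyle

FileOutputStream out = new FileOutputStream("items.xls");  // illustrative file name
workBook.write(out);
out.close();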

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Hierarchy operations - script to Java migration


The migration tables list the script operations that map to Hierarchy Java™ API methods.

Hierarchy
These script operations can be mapped to the following Hierarchy Java API methods.
Table 1. Script operations that map to the Hierarchy Java API methods
Script operation Java method
new Category(CategoryTree ctr, String path, [String delimiter], [String primaryKey]); Category CategoryTree::buildCategory(String path, [String delimiter], [String primaryKey]) createCategory(String parentCategoryPath, String delimiter, String pathValue); createCategory(String parentCategoryPath, String delimiter, String pathValue, String primaryKeyValue)
ValidationError[] deleteCategoryTree(CategoryTree ctr) deleteAsynchronous()
CategorySet CategoryTree::getCategorySet([Boolean bReadonly]) getCategories()
CategorySet CategoryTree::getCategorySetByAttributeValue(String attribPath, Object attribValue [, Boolean bReadOnly]) getCategoriesWithAttributeValue(String attributeInstancePath, Object attributeValue)
CategorySet CategoryTree::getCategorySetByItemSecondarySpec(String specName) getCategoriesByItemSecondarySpec(SecondarySpec spec)
CategorySet CategoryTree::getCategorySetByLevel(Integer level [, Boolean bReadOnly]) getCategoriesAtLevel(int level)
CategorySet CategoryTree::getCategorySetByFullNamePath(String[] fullNamePaths, …) getCategoriesByPaths(Collection<String> fullNamePaths); getCategoriesByPath(Collection<String> fullNamePaths, String delimiter)
CategorySet CategoryTree::getCategorySetByPrimaryKey(String primaryKey [, Boolean bReadOnly]) getCategoryByPrimaryKey()
CategorySet CategoryTree::getCategorySetByStandAloneSpec(String specName) getCategoriesByStandaloneSpec(Spec spec)
Category CategoryTree::getCategoryByPath(String sNamePath, String sDelim [, boolean bLight, boolean bReadOnly]) getCategoryByPath(String path); getCategoryByPath(String path, String delimiter)
getEntryByPrimaryKey getCategoryByPrimaryKey(String primaryKey)
Boolean Container::isEntryCheckedOutForPrimaryKey(String sPrimaryKey) isCategoryCheckedOut()
String CategoryTree::getCategoryTreeName() getName()
Spec CategoryTree::getCategoryTreeSpec() getPrimarySpec()
ValidationError[] CategoryTree::saveCategoryTree() save()
UserDefinedLog Container::newUserDefinedLog(String name, String description, Boolean isRunningLog) Hierarchy.createUserDefinedLog(String name, boolean runningLog)
Container::getUserDefinedLog(String name) Hierarchy.getUserDefinedLog(String name)

Hierarchy Manager
These script operations can be mapped to the following Hierarchy Manager Java API methods.
Table 2. Script operations that map to the Hierarchy Manager Java API methods
Script operation Java method
CategoryTree new CategoryTree(Spec spec, String name [, HashMap optionalArgs]) createHierarchy(PrimarySpec spec, String name); createHierarchy(PrimarySpec spec, String name, SpecNode pathAttribute); createHierarchy(PrimarySpec spec, String name, SpecNode pathAttribute, AccessControlGroup accessControlGroup, SpecNode displayAttribute); OrganizationManager::createOrganizationHierarchy(PrimarySpec spec, String name); OrganizationManager::createOrganizationHierarchy(PrimarySpec spec, String name, SpecNode pathAttribute, AccessControlGroup accessControlGroup, SpecNode displayAttribute)
String ::getDefaultLktHierarchyName() Company::getDefaultLookupTableHierarchy()
String ::getDefaultOrgHierarchyName() Company::getDefaultOrganizationHierarchy
CategoryTreeMap getCategoryTreeMap(CategoryTree ctr1, CategoryTree ctr2) getHierarchyMap(Hierarchy hierarchy1, Hierarchy hierarchy2)

HierarchyMap
These script operations can be mapped to the following HierarchyMap Java API methods.
Table 3. Script operations that map to the HierarchyMap Java API methods
Script operation Java method
void CategoryTreeMap::addCategoryTreeMapping(Category cat1, Category cat2) addMapping(Category sourceCategory, Category destinationCategory)
void CategoryTreeMap::removeCategoryTreeMapping(Category cat1, Category cat2) removeMapping(Category sourceCategory, Category destinationCategory)
saveCategoryTreeMap() save()

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Item operations - script to Java migration


The migration tables list the script operations that map to Item Java™ API methods.

Item
These script operations can be mapped to the following Item Java API methods.
Table 1. Script operations that map to the Item Java API methods
Script operation Java method
getEntryAttrib AttributeOwner.getAttributeValue
getCatalog getCatalog
getCtgItemCategories getCategories

getCtgItemCatSpecificAttribsList No method is available, but you can use the Java API to replicate this function. See Java API for the getCtgItemCatSpecificAttribsList script operation.
getEntryChangedData AttributeOwner.getChangesComparedTo
getEntryChangedDataSinceLastSave AttributeOwner.getChangesSinceLastSave
getDisplayValue getDisplayName
getLinkedItems getLinkedItems
getPrimaryKey getPrimaryKey
getItemUsingEntryRelationshipAttrib AttributeOwner.getAttributeValue
getSourceEntrySetForRelatedEntries getRelatedItems
getSaveResult getSaveResult
getItemStatus getSaveResult
getItemXMLRepresentation getXMLRepresentation
isItemAvailableInLocation ExtendedAttributeOwner.isAvailableInLocation
isEntryCheckedOut isCheckedOut
makeItemAvailableInLocation ExtendedAttributeOwner.makeAvailableInLocation
makeItemAvailableInLocations ExtendedAttributeOwner.makeAvailableInLocationRecursively
makeItemUnavailableInLocation ExtendedAttributeOwner.makeUnavailableInLocation
makeItemUnavailableInLocations ExtendedAttributeOwner.makeUnavailableInLocationRecursively
mapCtgItemToCategories mapToCategory
moveCtgItemToCategories moveToCategories
removeCtgItemFromCategory removeFromCategory
saveCtgItem save
setEntryAttrib AttributeOwner.setAttributeValue
setIgnoreCategorySpecificAttributes setCategorySpecificAttributeProcessing
setCtgItemPrimaryKey setPrimaryKey

LocationAttributeInstance
These script operations can be mapped to the following LocationAttributeInstance Java API methods.
Table 2. Script operations that map to the
LocationAttributeInstance Java API methods
Script operation Java method
EntryNode.getEntryNodeInheritedValue getInheritedValue()
EntryNode.hasInheritedValue hasInheritedValue()
EntryNode.hasNonInheritedValue hasNonInheritedValue()
EntryNode.getLocation getLocation()
Item.isInheriting isInheriting()

LocationDataConfiguration
These script operations can be mapped to the following LocationDataConfiguration Java API methods.
Table 3. Script operations that map to the
LocationDataConfiguration Java API methods
Script operation Java method
void Catalog::defineLocationSpecificData getCatalog()
(CategoryTree ctr, Spec spc, AttrGroup[]
inhAttrGrps)
void Catalog::defineLocationSpecificData getHierarchy()
(CategoryTree ctr, Spec spc, AttrGroup[]
inhAttrGrps)
void Catalog::defineLocationSpecificData getSpec()
(CategoryTree ctr, Spec spc, AttrGroup[]
inhAttrGrps)
void Catalog::defineLocationSpecificData delete()
(CategoryTree ctr, Spec spc, AttrGroup[]
inhAttrGrps)

ProcessingOptions
These script operations can be mapped to the following ProcessingOptions Java API methods.
Table 4. Script operations that map to the ProcessingOptions Java API methods
Script operation Java method
setContainerAttribute setAllProcessingOptions(Boolean setEnabled);

disableContainerProcessingOptions setCategoryLockingForItemSaveProcessing(Boolean setEnabled);
disableContainerProcessingOptions setDefaultValuesProcessing(Boolean setEnabled);
disableContainerProcessingOptions setDefaultValueRulesProcessing(Boolean setEnabled);
disableContainerProcessingOptions setEntryBuildScriptProcessing(Boolean setEnabled);
disableContainerProcessingOptions setCollaborationAreaLocksValidationProcessing(Boolean setEnabled);
disableContainerProcessingOptions setLockingProcessing
disableContainerProcessingOptions setMergeWithOldVersionProcessing(Boolean setEnabled);
disableContainerProcessingOptions setMinMaxLengthProcessing(Boolean setEnabled);
disableContainerProcessingOptions setMinMaxOccurrenceValidation(Boolean setEnabled);
disableContainerProcessingOptions setPatternValidationProcessing(Boolean setEnabled);
disableContainerProcessingOptions setPossibleValueProcessing(Boolean setEnabled);
disableContainerProcessingOptions setPostSaveScriptProcessing(Boolean setEnabled);
disableContainerProcessingOptions setPostScriptProcessing(Boolean setEnabled);
disableContainerProcessingOptions setPreScriptProcessing(Boolean setEnabled);
disableContainerProcessingOptions setSequencesProcessing(Boolean setEnabled);
disableContainerProcessingOptions setTypeValidationProcessing(Boolean setEnabled);
disableContainerProcessingOptions setUniqueValidationProcessing(Boolean setEnabled);
disableContainerProcessingOptions setValidationRulesProcessing(Boolean setEnabled);
disableContainerProcessingOptions setValueRulesProcessing(Boolean setEnabled);
disableContainerProcessingOptions resetProcessingOptions();
disableContainerProcessingOptions setLocaleRestrictionsForScripts

VersionInfo
These script operations can be mapped to the following VersionInfo Java API methods.
Table 5. Script operations that map to the
VersionInfo Java API methods
Script operation Java method
Date Version::getVersionDate() getVersionDate
String Version::getVersionName() getVersionName
String Version::getVersionType() getVersionType

Java API for the getCtgItemCatSpecificAttribsList script operation


You can use the following examples to retrieve the Category Specific Attributes for an Item. You can use any of the following examples according to the usage
scenario, or modify the implementations to suit your needs.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Java API for the getCtgItemCatSpecificAttribsList script operation


You can use the following examples to retrieve the Category Specific Attributes for an Item. You can use any of the following examples according to the usage scenario, or
modify the implementations to suit your needs.

Returns a list of Category Specific AttributeDefinitions relating to the passed Item


The following methods return a list of Category Specific AttributeDefinitions relating to the passed Item.

public ArrayList<AttributeDefinition> getCategorySpecificAttributeDefinitionsRecursive(Item i1)
{
    Collection<Spec> itemSpecs = i1.getSpecs();
    ArrayList<AttributeDefinition> attrDefList = new ArrayList<AttributeDefinition>();

    for (Spec spec : itemSpecs)
    {
        // Category-specific attributes come from the secondary (non-primary) specs
        if (!spec.getType().equals(Spec.Type.PRIMARY_SPEC))
        {
            AttributeDefinition rootDef = spec.getAttributeDefinition(spec.getName());
            processAttributeDefinitionRecursive(rootDef, attrDefList);
        }
    }
    return attrDefList;
}

public void processAttributeDefinitionRecursive(AttributeDefinition ad, ArrayList<AttributeDefinition> adList)
{
    if (ad.isLeaf())
    {
        adList.add(ad);
    }
    else
    {
        List<AttributeDefinition> children = ad.getChildren();
        for (AttributeDefinition child : children)
        {
            processAttributeDefinitionRecursive(child, adList);
        }
    }
}

Returns the list of Leaf Category Specific Attribute Instances when they exist
The following example returns the list of Leaf Category Specific Attribute Instances when they exist.

public ArrayList<AttributeInstance> getCategorySpecificAttributeInstances(Item itm)
{
    ArrayList<AttributeInstance> al = new ArrayList<AttributeInstance>();
    List<AttributeInstance> rootInstances = (List<AttributeInstance>) itm.getRootAttributeInstances();
    List<AttributeInstance> children;

    for (AttributeInstance instance : rootInstances)
    {
        // Skip the primary spec root attribute instance
        if (!instance.getAttributeDefinition().getSpec().getType().equals(Spec.Type.PRIMARY_SPEC))
        {
            children = (List<AttributeInstance>) instance.getChildren();
            for (AttributeInstance child : children)
            {
                processAttributeInstance(child, al);
            }
        }
    }
    return al;
}

public void processAttributeInstance(AttributeInstance ai, ArrayList<AttributeInstance> al)
{
    // Leaf (value) instances are collected; grouping instances are traversed
    if (ai.isValue())
    {
        al.add(ai);
    }
    else
    {
        List<AttributeInstance> children = (List<AttributeInstance>) ai.getChildren();
        for (AttributeInstance instance : children)
        {
            processAttributeInstance(instance, al);
        }
    }
}

Returns a list of root level category specific attribute instances for the Item
Example

For a grouping Group->grouping1-1,grouping1-2, the code returns the Group AttributeInstance. You can then traverse down the AttributeInstance as needed.



public ArrayList<AttributeInstance> getCategorySpecificAttributeInstances(Item itm)
{
    ArrayList<AttributeInstance> al = new ArrayList<AttributeInstance>();
    List<AttributeInstance> rootInstances = (List<AttributeInstance>) itm.getRootAttributeInstances();

    for (AttributeInstance instance : rootInstances)
    {
        // Skip the primary spec root attribute instance
        if (!instance.getAttributeDefinition().getSpec().getType().equals(Spec.Type.PRIMARY_SPEC))
        {
            al.addAll(instance.getChildren());
        }
    }
    return al;
}

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Job operations - script to Java migration


The migration tables list the script operations that map to Job Java™ API methods.

Job Manager
These script operations can be mapped to the following Job Manager Java API methods.
Table 1. Script operations that map to the Job Manager Java API methods
Script operation Java method
runJob createSchedule(Job)
String createImport(String sImportName, String sImportType, String sSourceName, String sFileSpecName, String sCatalogName, String sSpecMapName, String sCategoryTreeName, String sScriptName, String sACGName [, HashMap optionalArgs]) createCategoryImport
String createImport(…) createItemImport
String createExport(String marketSpecName, String catalogName, String specMapName, String exportScriptName, String syndicationName [, HashMap optionalArgs]) createExport
new Report(String reportName, String reportScriptName, Distribution dist) createReport(String reportName, Document reportScript, ScriptInputSpec …)
getReportByName(String reportName) getReport(String name)

Schedule
These script operations can be mapped to the following Schedule Java API methods.
Table 2. Script operations that map to the Schedule Java API
methods
Script operation Java method
queryJobStatus getLatestCompletedStatus()
queryJobCompletionPercentage getLatestRunCompletionPercentage()
stopJob stop()

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)



JMS operations - script to Java migration
For the JMS operations, not all of the script operations from the Script API are implemented in the Java™ API. Alternative Java code is provided for those script operations
that are not implemented in the Java API.

The following JMS methods can be achieved by using the com.ibm.mq.jms implementation; a combined sketch follows the table.
Table 1. Script operations and alternative Java code for the JMS operations
Script operation Alternative Java code
jmsCreateTextMsg javax.jms.TextMessage msg = QueueSession.createTextMessage(); msg.setText();
jmsDisconnect javax.jms.QueueSession.close(); QueueConnection.close()
jmsGetConnectionFactory javax.naming.Context.lookup()
jmsGetContext Hashtable env = new Hashtable(); env.put(Context.INITIAL_CONTEXT_FACTORY, jndiFactory); env.put(Context.PROVIDER_URL, url); return new InitialContext(env);
jmsGetMessageCorrelationID javax.jms.Message.getJMSCorrelationID()
jmsGetMessageID javax.jms.Message.getJMSMessageID()
jmsGetMessageProperties Message.getPropertyNames()
jmsGetMQConnectionFactory MQQueueConnectionFactory factory = new com.ibm.mq.jms.MQQueueConnectionFactory(); factory.setQueueManager(mqQueueManager); factory.setHostName(mqHostname); factory.setChannel(mqChannel); factory.setPort(mqPort);
jmsGetQueue Context.lookup()
jmsGetQueueByName Context.lookup() or QueueSession.createQueue();
jmsGetQueueConnection QueueConnectionFactory.createQueueConnection(user, passwd)
jmsGetQueueSession QueueSession qsession = QueueConnection.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
jmsGetTextFromMsg Message.getText() if instanceof TextMessage, else Message.toString()
jmsReceiveMsg QueueReceiver queueReceiver = QueueSession.createReceiver(queue); Message msg = queueReceiver.receive();
jmsSendMsg QueueSender queueSender = QueueSession.createSender(queue); queueSender.send(msg);
jmsSetMessageText Message.setText()
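
A minimal sketch that combines several of these calls to send a text message over IBM MQ through JMS follows; the connection properties and queue name are placeholders, and error handling is omitted.

import javax.jms.*;
import com.ibm.mq.jms.MQQueueConnectionFactory;

MQQueueConnectionFactory factory = new MQQueueConnectionFactory(); // replaces jmsGetMQConnectionFactory
factory.setQueueManager("QM1");                                    // placeholder connection details
factory.setHostName("mq.example.com");
factory.setChannel("SYSTEM.DEF.SVRCONN");
factory.setPort(1414);

QueueConnection connection = factory.createQueueConnection("user", "password"); // replaces jmsGetQueueConnection
QueueSession session = connection.createQueueSession(false, Session.AUTO_ACKNOWLEDGE); // replaces jmsGetQueueSession
Queue queue = session.createQueue("PIM.EXPORT.QUEUE");             // placeholder queue; replaces jmsGetQueueByName
QueueSender sender = session.createSender(queue);
TextMessage message = session.createTextMessage("payload");        // replaces jmsCreateTextMsg
sender.send(message);                                              // replaces jmsSendMsg
session.close();                                                   // replaces jmsDisconnect
connection.close();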

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Locale operations - script to Java migration


For the Locale operations, not all of the script operations from the Script API are implemented in the Java™ API. Alternative Java code is provided for those script
operations that are not implemented in the Java API.

Table 1. Script operations and alternative Java code for the Locale operations
Script operation Alternative Java code
new Locale() new java.util.Locale();
getLocalizedSpecNames() Retrieve all specs and invoke isLocalized() on each of them.
getLocaleCode() java.util.Locale.toString()
getLocaleDisplayName() java.util.Locale.getDisplayName();
addToCompanyLocales() Company.addLocales(List<Locale> locales)
removeFromCompanyLocales() Company.removeLocales(List<Locale> locales)
getUserLocale() User.getUserSettingValue(UserSetting.LOCALE)
getCompanyLocales () Company.getLocales()
getDefaultLocale Company.getDefaultLocale()
getDefaultACGName() Company.getDefaultAccessControlGroup()
getDefaultSpecName() Company.getDefaultSpec()
getDefaultOrgHierarchyName() Company.getDefaultOrganizationHierarchy()
getDefaultLktHierarchyName() Company.getDefaultLookupTableHierarchy()
getDefaultAttrCollectionName() Company.getDefaultAttributeCollection(Spec spec)

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)



LookupTable operations - script to Java migration
The migration tables list the script operations that map to LookupTable Java™ API methods.

LookupTable
These script operations can be mapped to the following LookupTable Java API methods.
Table 1. Script operations that map
to the LookupTable Java API
methods
Script operation Java method
addRow createEntry()
addRowByOrder createEntry()
put createEntry()
deleteLookupTable deleteAsynchronous()

LookupTableEntry
These script operations can be mapped to the following LookupTableEntry Java API methods.
Table 2. Script operations that
map to the LookupTableEntry
Java API methods
Script operation Java method
lookupValues getValues()

LookupTableManager
These script operations can be mapped to the following LookupTableManager Java API methods.
Table 3. Script operations that map to the
LookupTableManager Java API methods
Script operation Java method
getLkpByName getLookupTable(String lookupTableName)

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Math operations - script to Java migration


For the Math operations, not all of the script operations from the Script API are implemented in the Java™ API. Alternative Java code is provided for those script operations
that are not implemented in the Java API.

Table 1. Script operations and alternative Java code for the Math operations
Script
Alternative Java code
operation
max() new Double( Java.lang.Math.max(a.intValue(), b.intValue())
min() new Double( Java.lang.Math.min(a.intValue(), b.intValue())
rand() new
Integer(Math.round((float)Math.floor
(Math.random()*max.intValue()))
reformatDouble( com.ibm.icu.text .DecimalFormat dfFormat = new DecimalFormat(“decimalformate”); String sReformattedNumber =
) dfFormat.format(numberTobeFormatted);
toDouble() new Double(string.intValue());
toInteger() new Integer(string.intValue);
Note: Rounding and precision truncation conform to the Java IEEE 754 standard, which rounds to the nearest even digit when the binary representation of the decimal number contains fractional parts.
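
As an illustration, the reformatDouble() replacement from the table can be wrapped as shown below. The format pattern "#,##0.00" is only an example; substitute your own pattern.

// Sketch of the reformatDouble() replacement using com.ibm.icu.text.DecimalFormat.
import com.ibm.icu.text.DecimalFormat;

public class ReformatDoubleSample {
    public static String reformat(double numberTobeFormatted) {
        DecimalFormat dfFormat = new DecimalFormat("#,##0.00"); // example pattern
        return dfFormat.format(numberTobeFormatted);
    }

    public static void main(String[] args) {
        System.out.println(reformat(1234.5)); // prints 1,234.50
        System.out.println(Math.max(3, 7));   // max() replacement, prints 7
    }
}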

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

MQ operations - script to Java migration



For the MQ operations, not all of the script operations from the Script API are implemented in the Java™ API. Alternative Java code is provided for those script operations
that are not implemented in the Java API.

The following MQ methods can be achieved by using com.ibm.mq.* external jars.


Table 1. Script operations and alternative Java code for the MQ operations
Script operation Alternative Java code
mqDisconnect com.ibm.mq.MQQueueManager.disconnect();
mqGetMessageDiagnostics Printing various parameters of com.ibm.mq.MQMessage
mqGetMessageId MQMessage.messageId
mqGetQueueMgr new com.ibm.mq.MQQueueManager()
mqGetReceivedMsg com.ibm.mq.MQQueue.get();
mqGetReceivedMsgByMessageID com.ibm.mq.MQQueue.get();
mqGetResponseToMsg com.ibm.mq.MQQueue.get();
mqGetTextFromMsg For UTF: com.ibm.mq.MQMessage.readUTF(); Otherwise: com.ibm.mq.MQMessage.readStringOfByteLength(length);
mqGetXMLMessageContent XML parser
mqSendReply com.ibm.mq.MQQueue.put();
mqSendReplyWithStatus com.ibm.mq.MQQueue.put();
mqSendTextMsg For UTF: com.ibm.mq.MQMessage.writeUTF(); Otherwise: com.ibm.mq.MQMessage.writeString ();
mqSendTextMsgWithReply For UTF: com.ibm.mq.MQMessage.writeUTF(); Otherwise: com.ibm.mq.MQMessage.writeString (); with a replyQueue
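
The following sketch strings several of these calls together. Queue-manager and queue names are placeholders, and the open options shown (from com.ibm.mq.constants.CMQC) are typical values rather than requirements; adapt them to your queue definitions.

// Sketch only: send a UTF text message and read one back, per the table above.
import com.ibm.mq.MQMessage;
import com.ibm.mq.MQQueue;
import com.ibm.mq.MQQueueManager;
import com.ibm.mq.constants.CMQC;

public class MqMigrationSketch {
    public static void roundTrip(String text) throws Exception {
        MQQueueManager qmgr = new MQQueueManager("QMGR1"); // mqGetQueueMgr
        MQQueue queue = qmgr.accessQueue("DEV.QUEUE.1",
                CMQC.MQOO_OUTPUT | CMQC.MQOO_INPUT_AS_Q_DEF);
        MQMessage msg = new MQMessage();
        msg.writeUTF(text);                     // mqSendTextMsg (UTF)
        queue.put(msg);                         // send the message
        MQMessage received = new MQMessage();
        queue.get(received);                    // mqGetReceivedMsg
        System.out.println(received.readUTF()); // mqGetTextFromMsg
        queue.close();
        qmgr.disconnect();                      // mqDisconnect
    }
}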

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Number operations - script to Java migration


For the Number operations, not all of the script operations from the Script API are implemented in the Java™ API. Alternative Java code is provided for those script
operations that are not implemented in the Java API.

The following number methods can be achieved by using java.text.* package.


Table 1. Script operations and alternative Java code for the Number operations
Script operation Alternative Java code
formatNumber java.text.NumberFormat.getInstance(Locale).format(srcValue)
formatNumberByLocPrecision NumberFormat format = NumberFormat.getInstance(loc); format.setMinimumFractionDigits(precision); format.setMaximumFractionDigits(precision); format.format();
formatNumberByPrecision Same as formatNumberByLocPrecision, but getInstance() is called without the locale parameter.
parseDouble NumberFormat nft = NumberFormat.getInstance(Locale); ParsePosition pos = new ParsePosition(0); Number num = nft.parse(valueStr, pos); return new Double(num.doubleValue());
parseNumber NumberFormat format = NumberFormat.getInstance(locale); ParsePosition parsePosition = new ParsePosition(0); Number number = format.parse(srcValue.trim(), parsePosition);
Note: Rounding and precision truncation conform to the Java IEEE 754 standard, which rounds to the nearest even digit when the binary representation of the decimal number contains fractional parts.
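
For example, the formatNumberByLocPrecision and parseNumber replacements can be exercised as follows; the locale and precision are placeholders.

// Sketch only, using java.text classes from the table above.
import java.text.NumberFormat;
import java.text.ParsePosition;
import java.util.Locale;

public class NumberMigrationSketch {
    public static void main(String[] args) {
        Locale loc = Locale.GERMANY;
        NumberFormat format = NumberFormat.getInstance(loc);
        format.setMinimumFractionDigits(2); // formatNumberByLocPrecision
        format.setMaximumFractionDigits(2);
        System.out.println(format.format(1234.5)); // prints 1.234,50

        ParsePosition parsePosition = new ParsePosition(0); // parseNumber
        Number number = format.parse("1.234,50".trim(), parsePosition);
        System.out.println(number.doubleValue()); // prints 1234.5
    }
}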

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Organization operations - script to Java migration


The migration tables list the script operations that map to Organization Java™ API methods.

AccessControlGroup
These script operations can be mapped to the following AccessControlGroup Java API methods.
Table 1. Script operations that map to the AccessControlGroup Java API methods
Script operation Java method
Role::getAccessControlGroupPrivsForRole getPermissionList(Role role)
Role::setAccessControlGroupForRole grantPermissions(Role role, Set<Permission> permissions)
revokePermissions(Role role, Set<Permission> permissions)
String ACG::getAccessControlGroupName() getName



Company
These script operations can be mapped to the following Company Java API methods.
Table 2. Script operations that map to the Company Java API methods
Script operation Java method
addToCompanyLocales (Locale [] companyLocales) Company.addLocales(List<Locale> companyLocales)
setCompanyCurrencies(String[]) Company.addCurrencies(List<Currency> companyCurrencies)
removeFromCompanyLocales(Locale []companyLocales) Company.removeLocales(List<Locale> companyLocales)
Role[] getRoles() Company.getRoles()
User[] getAllUsers() Company.getUsers()
getDefaultACGName() Company.getDefaultAccessControlGroup()
getDefaultOrgHierarchyName() Company.getDefaultOrganizationHierarchy()
getDefaultAttrCollectionName() Company.getDefaultAttributeCollection(Spec spec)
getCompanyLocales() getLocales()
getCompanyCurrencies() getCurrencies()
getDefaultLocale() getDefaultLocale()
getDefaultSpecName() getDefaultSpec()
getDefaultLktHierarchyName() getDefaultLookupTableHierarchy()

Context
These script operations can be mapped to the following Context Java API methods.
Table 3. Script operations that map to the
Context Java API methods
Script operation Java method
getCurrentUserName() Context.getCurrentUser()

Organization
These script operations can be mapped to the following Organization Java API methods.
Table 4. Script operations that map to the Organization Java API methods
Script operation Java method
Boolean Category::addChildCategory(Category childCategory) addChild(Organization childOrganization)
Category[] Category::getCategoryChildren([Boolean ordered, Catalog catalog, Boolean restrictToSubtreeWithItems]) getChildren()
CategorySet Category::getDescendentCategorySetForCategory([Boolean bReadonly]) getDescendents()
String[] Category::getFullPaths([String sDelimiter] [, boolean bWithRootName]) getFullPaths(String delimiter, boolean includeRootName)
Integer[] Category::getCategoryLevels() getLevels()
Category Category::getCategoryParent([CategoryCache cat_cache]) getParent()
Category[] Category::getCategoryParents() getParents()
Boolean Category::getCategoryHasChildren() hasChildren()
void Category::removeChildCategory(String categoryName) removeChild(Organization)
Category[] Category::setCategoryAttrib AttributeOwner.setAttributeValue
Category::getCategoryOrganizations() Category::getOrganizationsMappedTo()
Item::getCtgItemOrganizations() Item::getOrganizationsMappedTo()

OrganizationHierarchy
These script operations can be mapped to the following OrganizationHierarchy Java API methods.
Table 5. Script operations that map to the OrganizationHierarchy Java API methods
Script operation Java method
new Category(CategoryTree ctr, String path, [String delimiter], [String primaryKey]) createOrganization(String parentOrganizationPath, String delimiter, String pathValue)
Category CategoryTree::buildCategory(String path, [String delimiter], [String primaryKey]) createOrganization(String parentOrganizationPath, String delimiter, String pathValue, String primaryKeyValue)
ValidationError[] deleteCategoryTree(CategoryTree ctr) deleteAsynchronous
CategorySet CategoryTree::getCategorySet([Boolean bReadonly]) getOrganizations()
CategorySet CategoryTree::getCategorySetByAttributeValue(String attribPath, Object attribValue [, Boolean bReadOnly]) getOrganizationsWithAttributeValue(String attributeInstancePath, Object attributeValue)
CategorySet CategoryTree::getCategorySetByLevel(Integer level [, Boolean bReadOnly]) getOrganizationsAtLevel(int level)
CategorySet CategoryTree::getCategorySetByFullNamePath(String[] fullNamePaths, String delimiter) getOrganizationsByPaths(Collection<String> fullNamePaths); getOrganizationsByPath(Collection<String> fullNamePaths, String delimiter)
CategorySet CategoryTree::getCategorySetByPrimaryKey(String primaryKey [, Boolean bReadOnly]) getOrganizationByPrimaryKey()
Category CategoryTree::getCategoryByPath(String sNamePath, String sDelim [, boolean bLight, boolean bReadOnly]) getOrganizationByPath(String path); getOrganizationByPath(String path, String delimiter)
getEntryByPrimaryKey getOrganizationByPrimaryKey(String primaryKey)
String CategoryTree::getCategoryTreeName() getName()
Spec CategoryTree::getCategoryTreeSpec() getPrimarySpec()
ValidationError[] CategoryTree::saveCategoryTree() save()

OrganizationManager
These script operations can be mapped to the following OrganizationManager Java API methods.
Table 6. Script operations that map to the OrganizationManager Java API methods
Script operation Java method
CategoryTree new CategoryTree(Spec spec, String name [, HashMap optionalArgs]) OrganizationManager::createOrganizationHierarchy(PrimarySpec spec, String name)
CategoryTree new CategoryTree(Spec spec, String name [, HashMap optionalArgs]) OrganizationManager::createOrganizationHierarchy(PrimarySpec spec, String name, SpecNode pathAttribute, AccessControlGroup accessControlGroup, SpecNode displayAttribute)
ACG createAccessControlGroup(String sACGName [, String sACGDesc]) createAccessControlGroup
Role createRole(String name, String desc) createRole
User createUser(String username, String firstname, String lastname, String email, Boolean enabled, String password, HashMap roles, Category organization [, Boolean encryptPassword]) createUser
ACG getAccessControlGroupByName(String sACGName) getAccessControlGroup
Role getRoleByName(String sRoleName) getRole

Role
These script operations can be mapped to the following Role Java API methods.
Table 7. Script operations that map to the Role Java API methods
Script operation Java method
Role::setLocalesForRole(String localesCSVString) Role.setLocales(List<Locale> locales)
Role::setContainerLocalesForRole(String localesCSVString) Role.setLocales(Catalog catalog, List<Locale> locales)
Role::setContainerLocalesForRole(String localesCSVString) Role.setLocales(Hierarchy hierarchy, List<Locale> locales)
Role::setAccessControlGroupForRole(String acgName, String[] privs) Role.grantPermissions(AccessControlGroup accessControlGroup, Set<Permission> permissions)
Role::setAccessControlGroupForRole(String acgName, String[] privs) Role.grantSystemWideAccessPrivileges(Set<SystemWideAccessPrivilege> privileges)
Role::setAccessControlGroupForRole(String acgName, String[] privs) Role.grantScreenPrivileges(Set<ScreenPrivilege> privileges)
Role::setAccessControlGroupForRole(String acgName, String[] privs) Role.revokeSystemWideAccessPrivileges(Set<SystemWideAccessPrivilege> privileges)
Role::setAccessControlGroupForRole(String acgName, String[] privs) Role.revokePermissions(AccessControlGroup accessControlGroup, Set<Permission> permissions)
Role::setAccessControlGroupForRole(String acgName, String[] privs) Role.revokeScreenPrivileges(Set<ScreenPrivilege> privileges)
String[] Role::getAccessControlGroupsForRole() getAccessControlGroups
getCtgAccessPrvByRole(String sRoleName) getAttributeCollectionForPrivilegeType (Catalog catalog, PrivilegeType type)
getCtgAccessPrvByRole(String sRoleName) getAttributeCollectionForPrivilegeTypes (Hierarchy hierarchy, Privilege type)
String Role::getRoleDescription() getDescription
String Role::getLocalesForRole() getLocales
String Role::getContainerLocalesForRole() getLocales(Catalog cat)
String Role::getContainerLocalesForRole() getLocales(Hierarchy hier)
String Role::getRoleName() getName
User[] Role::getUsersFromRole() getUsers
setCtgAccessPrv(String[] attrGroups, String[] permissions) setAttributeCollectionPrivilege(Catalog catalog, Privilege privilege, Collection<AttributeCollection> attrCollections)

User
These script operations can be mapped to the following User Java API methods.



Table 8. Script operations that map to the User Java API methods
Script operation Java method
getDateInputFormat() User.getUserSettingValue(UserSetting.DATETIMEINPUTFORMAT)
getDateOutputFormat() User.getUserSettingValue(UserSetting.DATETIMEOUTPUTFORMAT)
getUserLocale() User.getUserSettingValue(UserSetting.LOCALE)
getUserTimeZone() User.getUserSettingValue(UserSetting.TIMEZONE)
setDateInputFormat(String format) User.setUserSettingValue(UserSetting.DATETIMEINPUTFORMAT, String value)
setDateOutputFormat(String format) User.setUserSettingValue(UserSetting.DATETIMEOUTPUTFORMAT, String value)
setUserTimeZone(int offset) User.setUserSettingValue(UserSetting.TIMEZONE, String value)
String User::getUserAddress() getAddress
String User::getUserEmail() getEmail
boolean User::getUserEnabled() isEnabled
String User::getUserFax() getFax
String User::getUserFirstName() getFirstName
String User::getUserLastName() getLastName
boolean User::getLdapEnabled() isLdapEnabled
String[] User::getLdapEntryDn() getLdapEntryDn
String[] User::getLdapServerUrl() getLdapServerUrl
String User::getUsername() getUserName
Category[] User::getUserOrganizations() getOrganizations
String User::getUserPhone() getPhone
String[] User::getUserRoles() getRoles
String User::getUserTitle() getTitle
ValidationError[] save save
void User::setUserAddress(String str) setAddress
void User::setUserEmail(String str) setEmail
void User::setUserFax(String str) setFax
void User::setUserFirstName(String str) setFirstName
void User::setUserLastName(String str) setLastName
void User::setUserLdapEnabled(boolean TrueOrFalse) setLdapEnabled
getUser() getCurrentUser
void User::setUserPhone() setPhone
Boolean User::setUserRoles(Role[] roles) setRoles
void User::setUserTitle() setTitle
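
A typical use of these User mappings looks like the following sketch. Obtaining the Context and User follows the pattern shown in the Timezone operations topic; the user name and values are placeholders.

// Sketch only: read and update user settings through UserSetting keys.
Context ctx = PIMContextFactory.getCurrentContext();
User user = ctx.getOrganizationManager().getUser("Admin");
String locale = user.getUserSettingValue(UserSetting.LOCALE); // getUserLocale()
user.setUserSettingValue(UserSetting.DATETIMEINPUTFORMAT, "yyyy-MM-dd"); // setDateInputFormat()
user.setEmail("admin@example.com"); // setUserEmail -> setEmail
user.save(); // save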

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

RE operations - script to Java migration


For the RE operations, not all of the script operations from the Script API are implemented in the Java™ API. Alternative Java code is provided for those script operations
that are not implemented in the Java API.

The following RE methods can be achieved by using org.apache.regexp.


Table 1. Script operations and alternative Java code for the RE
operations
Script operation Alternative Java code
buildRE org.apache.regexp.RE re = new RE(pattern, matchFlags.intValue())
new RE org.apache.regexp.RE re = new RE(pattern, matchFlags.intValue())
match org.apache.regexp.RE.match(str)
substitute org.apache.regexp.RE.subst(substituteIn, substitution);
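
The following sketch shows the Jakarta Regexp calls from the table in context; the pattern and input are placeholders.

// Sketch only, using org.apache.regexp as named above.
import org.apache.regexp.RE;

public class RegexpMigrationSketch {
    public static void main(String[] args) {
        RE re = new RE("[0-9]+");                      // buildRE / new RE
        System.out.println(re.match("order 42"));      // match -> true
        System.out.println(re.subst("order 42", "#")); // substitute -> "order #"
    }
}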

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Reader operations - script to Java migration


For the Reader operations, not all of the script operations from the Script API are implemented in the Java™ API. Alternative Java code is provided for those script
operations that are not implemented in the Java API.

The following reader methods can be achieved by using native Java:

CSVParser
DelimParser

FixedWidthParser
forEachLine
forEachXMLNode
getCurrentLine
newCSVParser
newDelimParser
newFixedWidthParser
splitLine
nextLine
parseXMLNodeWithNameSpace
parseXMLNode
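
As a minimal native-Java sketch, the line-oriented operations (forEachLine, nextLine, splitLine) reduce to a BufferedReader loop; the quoting-aware parsers are covered by the CSVParser and DelimiterParser samples later in this section.

// Sketch only: read a delimited stream line by line and tokenize it.
import java.io.BufferedReader;
import java.io.StringReader;

public class ReaderMigrationSketch {
    public static void main(String[] args) throws Exception {
        BufferedReader reader = new BufferedReader(new StringReader("a,b,c\nd,e,f"));
        String line;
        while ((line = reader.readLine()) != null) { // forEachLine / nextLine
            String[] tokens = line.split(",");       // splitLine (no quote handling)
            System.out.println(tokens.length);
        }
        reader.close();
    }
}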

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Search operations - script to Java migration


The migration tables list the script operations that map to Search Java™ API methods.

Catalog
These script operations can be mapped to the following Catalog Java API methods.
Table 1. Script operations that map to the Catalog
Java API methods
Script operation Java method
Catalog.getListOfCtgViewNames getViews()
CategoryTree.getListOfCtrViewNames getViews()
Container.getCtgViewByName getView(String)
CtgView constructor createView(String)

Context
These script operations can be mapped to the following Context Java API methods.
Table 2. Script operations that map
to the Context Java API methods
Script operation Java method
createSearchQuery() createSearchQuery()

Hierarchy
These script operations can be mapped to the following Hierarchy Java API methods.
Table 3. Script operations that map to the Hierarchy Java API
methods
Script operation Java method
CategoryTree. getListOfCtrViewNames Hierarchy.getViews()
Container.getCtgViewByName Hierarchy.getView(String name)
CtgView constructor Hierarchy.createView(String viewName)

SearchQuery
These script operations can be mapped to the following SearchQuery Java API methods.
Table 4. Script operations that map
to the SearchQuery Java API
methods
Script operation Java method
SearchQuery::execute() execute()

SearchResultSet
These script operations can be mapped to the following SearchResultSet Java API methods.
Table 5. Script operations that map to the
SearchResultSet Java API methods
Script operation Java method
SearchResultSet::size() size()
SearchResultSet::next() next()
SearchResultSet::getInt() getInt()
SearchResultSet::getDouble() getDouble()
SearchResultSet::getLong() getLong()
SearchResultSet::getQuery() getSearchQuery()
SearchResultSet::getFloat() getFloat()
SearchResultSet::getString() getString()
SearchResultSet::getBoolean() getBoolean()
SearchResultSet::getDate() getDate()
SearchResultSet::getItem() getItem()
SearchResultSet::getCategory() getCategory()
SearchResultSet::getCatalog() getCatalog()
SearchResultSet::getHierarchy() getHierarchy()
SearchResultSet::getSpec() getSpec()

ScreenView
These script operations can be mapped to the following ScreenView Java API methods.
Table 6. Script operations that map to the ScreenView Java API methods
Script operation Java method
CtgView.getCtgViewAttrGroupsList getAttributeCollections
CtgView.getCtgViewAttribsList getAttributes
getViewableAttributes
getEditableAttributes
CtgView.setCtgView setEditableAttributeCollections
setViewableAttributeCollections
getView().save()
CtgView.getCtgViewPermission(String attributeCollectionName) getEditableAttributeCollections().contains(attrCol); getViewableAttributeCollections().contains(attrCol)
CtgView.getNewCtgTab addFilter()
CtgView.addCtgTab
CtgView.insertCtgTabAt addFilter(int pos)
CtgView.removeCtgTabAt ScreenView.setFilters(Collection<ScreenViewFilter> filters)
CtgView.getCtgViewType getScreenType
CtgView.getCtgTabByName getFilter
CtgView.getCtgTabs getFilters

ScreenViewFilter
These script operations can be mapped to the following ScreenViewFilter Java API methods.
Table 7. Script operations that map to the
ScreenViewFilter Java API methods
Script operation Java method
CtgTab.getCtgTabAttrGroupsList getAttributeCollections
CtgTab.getCtgTabName getName
CtgTab.setCtgTabRow[] setAttributeCollections
CtgTabRow.getTabRowPath AttributeDefinition.getPath

View
These script operations can be mapped to the following View Java API methods.
Table 8. Script operations that map to the
View Java API methods
Script operation Java method
deleteCtgView delete
CtgView.saveCtgView save
CtgView.setCtgView getScreenView(ScreenType)

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Selections operations - script to Java migration


The migration tables list the script operations that map to Selections Java™ API methods.



DynamicItemSelection
These script operations can be mapped to the following DynamicItemSelection Java API methods.
Table 1. Script operations that map to the
DynamicItemSelection Java API methods
Script operation Java method
ItemSet Selection::getItemSetForSelection() getItems()
Catalog Selection::getSelectionCatalog() getCatalog()
void DynamicSelection::setDynamicSelectionQueryString(String queryString) setQuery(String query)

DynamicSelection
These script operations can be mapped to the following DynamicSelection Java API methods.
Table 2. Script operations that map to the
DynamicSelection Java API methods
Script operation Java method
String DynamicSelection::getDynamicSelectionQueryString() getQuery()

Selection
These script operations can be mapped to the following Selection Java API methods.
Table 3. Script operations that map to the Selection Java API methods
Script operation Java method
String Selection::getSelectionName() getName()
String Selection::getSelectionAccessControlGroupName() getAccessControlGroup()
void Selection::setSelectionName(String name) setName(String name)
void Selection::setSelectionAccessControlGroupName(String acgName) setAccessControlGroup(AccessControlGroup acg)
void Selection::saveSelection() save()
boolean Selection::deleteSelection() delete()
Integer Selection::getSelectionItemCount() getCount()

SelectionManager
These script operations can be mapped to the following SelectionManager Java API methods.
Table 4. Script operations that map to the SelectionManager Java API methods
Script operation Java method
new BasicSelection(Catalog catalog, String name) createStaticItemSelection(Catalog catalog, Hierarchy hierarchy, String selectionName)
new DynamicSelection(String selectionName, String queryString) createDynamicItemSelection(String selectionName, String query)
String[] getSelectionNamesList(Catalog catalog) SelectionManager.getSelections(Catalog catalog)
Selection getSelectionByName(String sName) getSelection(String selectionName)

StaticItemSelection
These script operations can be mapped to the following StaticItemSelection Java API methods.
Table 5. Script operations that map to the StaticItemSelection Java API
methods
Script operation Java method
void Selection::addEntryToSelection(Entry entry) addItem (Item item)
Integer Selection::getSelectionHierarchyNodeCount() getCategoryCount()
HierarchyNodeSet Selection::getHierarchyNodeSetForSelection() getCategories()
Hierarchy Selection::getSelectionHierarchy() getHierarchy()
ItemSet Selection::getItemSetForSelection() getItems()
Catalog Selection::getSelectionCatalog() getCatalog()
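
Put together, the Selection mappings support a flow like the following sketch. The ctx.getSelectionManager() accessor is an assumption; getSelection(), getName(), getCount(), and save() are the methods named in the tables above, and the selection name is a placeholder.

// Sketch only.
Context ctx = PIMContextFactory.getCurrentContext();
SelectionManager selectionManager = ctx.getSelectionManager(); // assumption
Selection selection = selectionManager.getSelection("Spring Items"); // getSelectionByName
System.out.println(selection.getName());  // getSelectionName
System.out.println(selection.getCount()); // getSelectionItemCount
selection.save();                          // saveSelection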

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Spec operations - script to Java migration


The migration tables list the script operations that map to Spec Java™ API methods.

AttributeDefinition
These script operations can be mapped to the following AttributeDefinition Java API methods.
Table 1. Script operations that map to the AttributeDefinition Java
API methods
Script operation Java method
getNodeChildren() getChildren
getNodeName() getName
getNodeDisplayName([Locale locale]) getLocalizedNameforLocale
getNodePath() getPath
isNodeGrouping() isGrouping
getLocaleNode(Locale locale) getLocalized
getNodeLocale() getLocales
isNodeSpecRoot() isSpecRoot
Node::getNodeAttributeValue(String attributeName) String AttributeDefinitionProperty::getValue()
getNodeSpec() getSpec()

AttributeDefinitionProperty
These script operations can be mapped to the following AttributeDefinitionProperty Java API methods.
Table 2. Script operations that map to the AttributeDefinitionProperty Java API
methods
Script operation Java method
getNodeAttributeValue() getValue()
getNodeAttributeValues() getValues()
Node::setAttribute(String sAttributeName, String sValue [, Boolean dontReplace]) setValue
Node::setAttributes(String sAttributeName, HashMap sValues) setValues

LookupSpec
These script operations can be mapped to the following LookupSpec Java API methods.
Table 3. Script operations that map to the LookupSpec Java API
methods
Script operation Java method
Spec::getPrimaryKeyNode() PrimarySpec::getPrimaryKeyAttributeDefinition
Spec::getSpecPrimaryKeyAttributePath() Use the AttributeDefinition to get the path
Spec::setPrimaryKeyPath() setPrimaryKey

PrimarySpec
These script operations can be mapped to the following PrimarySpec Java API methods.
Table 4. Script operations that map to the PrimarySpec Java API
methods
Script operation Java method
Spec::getPrimaryKeyNode() PrimarySpec::getPrimaryKeyAttributeDefinition
Spec::getSpecPrimaryKeyAttributePath() Use the AttributeDefinition to get the path
Spec::setPrimaryKeyPath() setPrimaryKey

Spec
These script operations can be mapped to the following Spec Java API methods.
Table 5. Script operations that map to the Spec Java API methods
Script operation Java method
SpecNode new SpecNode(Spec spec, String path, Integer order) createAttributeDefinition
Node buildSpecNode(Spec spec, String path, Integer order) createAttributeDefinition
Spec::saveSpec() save()
Spec::addToSpecLocales() addLocale
Spec::replaceSpecLocales() replaceLocales
Spec::setLocalized() setLocalized

Spec::deleteSpec() delete()
Spec::removeNode() removeAttributeDefinition
Spec::removeFromSpecLocales removeLocale()
Spec::getSpecType getType()
Spec::isLocalized isLocalized()
Spec::getSpecAttribNames; Spec::getSpecAttribPaths; Spec::getSpecPrimaryKeyAttributePath; Spec::getSpecMultiOccurAttributePaths; Spec::getSpecSequenceAttributePaths; Spec::getSpecUniqueAttributePaths getAttributeDefinitions(), then retrieve the names or paths of the returned AttributeDefinitions
Spec::getSpecName getName
Spec::getSpecNodes() getAttributeDefinitions
Spec::getNodeByPath() getAttributeDefinition
Spec::getLocales() getLocales

SpecManager
These script operations can be mapped to the following SpecManager Java API methods.
Table 6. Script operations that map to the SpecManager Java API methods
Script operation Java method
new Spec(String specName, String specType [, String specFileType]) createSpec(String specName, SpecType aSpecType)
new Spec(String specName, String specType [, String specFileType]) createSpec(String specName, FileSpec.FileType fileType)
Spec getSpecByName(String name [, Boolean bImmutable]) getSpec
SpecMap getSpecMapByName([String name]) SpecMap getSpecMap(String specMapName);
SpecMap buildTestSpecMap(String mapName, String mapType, Object source, Object destination) SpecMap createSpecMap(String specMapName, SpecMap.Type specMapType, String sourceObjectName, String destinationObjectName);
new SpecMap(String mapName, String mapType, Object source, Object destination) SpecMap createSpecMap(String specMapName, SpecMap.Type specMapType, String sourceObjectName, String destinationObjectName);

SpecMap
These script operations can be mapped to the following SpecMap Java API methods.
Table 7. Script operations that map to the SpecMap Java API methods
Script operation Java method
Object SpecMap::getSpecMapSrcObject() String getSourceObjectName();
Object SpecMap::getSpecMapDstObject() String getDestinationObjectName();
SpecMap::saveSpecMap() save();
SpecMap::map(String sSrcPath, String sDstPath) SpecMapEntry addMapping(AttributeDefinition sourceAttributeDefinition, AttributeDefinition destinationAttributeDefinition);
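
The Spec mappings combine as in the following sketch. The ctx.getSpecManager() accessor is an assumption; getSpec(), getAttributeDefinition(), getPath(), and save() are the methods named in the tables above, and the spec and attribute names are placeholders.

// Sketch only.
Context ctx = PIMContextFactory.getCurrentContext();
SpecManager specManager = ctx.getSpecManager(); // assumption
Spec spec = specManager.getSpec("Product Spec"); // getSpecByName
AttributeDefinition def = spec.getAttributeDefinition("Product Spec/Name"); // getNodeByPath
System.out.println(def.getPath()); // getNodePath
spec.save(); // saveSpec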

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

String manipulations operations - script to Java migration


For the String operations, not all of the script operations from the Script API are implemented in the Java™ API. Alternative Java code is provided for those script
operations that are not implemented in the Java API.

The following string manipulation Java methods can be achieved by using native Java.

buildCSV
buildDelim
buildFixedWidth
checkString
escapeForCSV
escapeForHTML
escapeForJS
getNameFromPath
getParentPath
getRidOfRootName

Table 1. Script operations and alternative Java code for the String manipulation operations
Script operation Alternative Java code
checkDouble Double.parseDouble(sArg);
checkInt Integer.parseInt(sArg);

concat StringBuffer.append()
contains new Boolean(s.indexOf(sMatch) >= 0);
encodeUsingCharset new String(stringToEncode.getBytes(charset))
endsWith new Boolean(stringTobeMatched.endsWith(sMatch));
escapeWithHTMLEntities StringBuffer sbResult = new StringBuffer(); int ch = (int)stringtoEscape.charAt(i); if (ch < beg || ch > end) sbResult.append("&#" + ch + ";"); else sbResult.append(stringtoEscape.charAt(i));
indexOf stringToMatch.indexOf(sMatch)
isLowerCase Use Character.isLowerCase()
isUpperCase Use Character.isUpperCase()
isStringSingleByte int iUnicode = (int)c; return (iUnicode < 128) || (iUnicode >= 0xFF60 && iUnicode <= 0xFF9F);
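
A few of these replacements in running form, as a sketch using only core Java:

// Sketch only.
public class StringMigrationSketch {
    public static void main(String[] args) throws Exception {
        System.out.println(Double.parseDouble("3.14"));    // checkDouble
        System.out.println("abc".indexOf("b") >= 0);       // contains
        System.out.println("report.csv".endsWith(".csv")); // endsWith
        System.out.println(new String("text".getBytes("UTF-8"), "UTF-8")); // encodeUsingCharset
    }
}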

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

System-Utils operations - script to Java migration


The migration tables list the script operations that map to System-Utils Java™ API methods.

AdminHelper
These script operations can be mapped to the following AdminHelper Java API methods.
Table 1. Script operations that
map to the AdminHelper Java API
methods
Script operation Java method
flushScriptCache () flushScriptCache ()

PIMProgress
These script operations can be mapped to the following PIMProgress Java API methods.
Table 2. Script operations that map to the PIMProgress
Java API methods
Script operation Java method
void ::setScriptProgress(number percent) setProgress(int percent)

ScriptStatistics
These script operations can be mapped to the following ScriptStatistics Java API methods.
Table 3. Script operations that map to the ScriptStatistics Java API
methods
Script operation Java method
void ::setScriptStatsDeletedCnt(number count) setItemsDeletedCount(int count)

UIHelper
These script operations can be mapped to the following UIHelper Java API methods.
Table 4. Script operations that map to the UIHelper Java API methods
Script operation Java method
String getPageURL("ITEM_LIST", ICatalog ctg, ICategory cat, ICategoryTree ctr) getItemListURL(Catalog catalog, Category category)
String getPageURL("ITEM", ICatalog ctg, String sSku) getSingleEditURL(Item item)
String getPageURL("CATEGORY", ICategory category) getSingleEditURL(Category category)
String getPageURL("SEARCH", ICatalog ctg) getRichSearchURL(Catalog catalog)
String getPageURL("COLAREA_STEP", ICollaborationArea ca, IStepAtPath stepAtPath) getCollaborationStepURL(CollaborationStep collaborationStep)
String getPageURL("COLAREA_ENTRY", ICollaborationArea ca, IStepAtPath stepAtPath, IContainer container) getCollaborationEntryURL(CollaborationStep collaborationStep, Category category)
String getPageURL("COLAREA_ENTRY", ICollaborationArea ca, IStepAtPath stepAtPath, IContainer container) getCollaborationEntryURL(CollaborationStep collaborationStep, Item item)

WebService
These script operations can be mapped to the following WebService Java API methods.
Table 5. Script operations that map to the WebService Java API methods
Script operation Java method
String WebService::getName() getName()
void WebService::setName(String name) setName(String Name)
String WebService::getImplclass() getImplementationClassName()
void WebService::setImplclass(String implclass) setImplementationClassName(String implClassName)
String WebService::getDesc() getDescription()
void WebService::setDesc(String desc) setDescription(String description)
String WebService::getUrl() getURL()
String WebService::getWsdlUrl() getWsdlURL()
String WebService::getWsdlDocPath() getWsdlDocument()
void WebService::setWsdlDocPath(String wsdlDocPath) setWsdlDocument(Document wsdlDoc)
String WebService::getWsddDocPath() getWsddDocument()
void WebService::setWsddDocPath(String wsddDocPath) setWsddDocument(Document wsddDoc)
String WebService::getStyle() getStyle()
void WebService::setStyle(String style) setStyle(MessageStyle style)
String WebService::getImplScriptPath() getImplementationScript()
setImplScriptPath(String implScriptPath) setImplementationScript(Document implScript)
Boolean WebService::getStoreIncoming() isRequestStored()
void WebService::setStoreIncoming(Boolean storeIncoming) setStoreRequest(boolean storeRequest)
Boolean WebService::getStoreOutgoing() isResponseStored()
void WebService::setStoreOutgoing(Boolean storeOutgoing) setStoreResponse(boolean storeResponse)
Boolean WebService::isDeployed() isDeployed()
void WebService::setDeployed(Boolean deployed) setDeployed(boolean isDeployed)
Boolean WebService::isAuthRequired() isAuthenticationRequired()
void WebService::setAuthRequired(Boolean authRequired) setAuthenticationRequired(boolean isAuthenticationrequired)

WebServiceManager
These script operations can be mapped to the following WebServiceManager Java API methods.
Table 6. Script operations that map to the WebServiceManager Java API methods
Script operation Java method
WebService getWebServiceByName() getWebService(String webServiceName)
WebService createWebService() createWebServiceUsingJava(String webServiceName, String description, Document wsdlDoc, Document wsddDoc, MessageStyle messageStyle, String implementingClass, boolean storeIncoming, boolean storeOutgoing, boolean deployed, boolean authenticationRequired, boolean skipRequestValidation, boolean skipResponseValidation)
WebService createWebService() createWebService(String webServiceName, String description, Document wsdlDoc, MessageStyle messageStyle, Document implementationScript, boolean storeIncoming, boolean storeOutgoing, boolean deployed, boolean authenticationRequired, boolean skipRequestValidation, boolean skipResponseValidation)
WebService createWebService() createWebService(String webServiceName, String description, Document wsdlDoc, MessageStyle messageStyle, Document implementationScript)
WebService createWebService() createWebServiceUsingJava(String webServiceName, String description, Document wsdlDoc, Document wsddDoc, MessageStyle messageStyle, String implementingClass)

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Timezone operations - script to Java migration



For the Timezone operations, not all of the script operations from the Script API are implemented in the Java™ API. Alternative Java code is provided for those script
operations that are not implemented in the Java API.

Table 1. Script operations and alternative Java code for the Timezone operations
Script operation Alternative Java code
getUserTimeZoneDesc(); getUserTimeZone() TimeZone tz = TimeZone.getTimeZone("IST"); tz.getDisplayName(new Locale("de", "DE"));
getUserTimeZoneOffset() Context ctx = PIMContextFactory.getCurrentContext(); User usr = ctx.getOrganizationManager().getUser("Admin"); String timeZone = usr.getUserSettingValue(UserSetting.TIMEZONE); String offset = ctx.getTimezoneResolver().getOffsetFromEncodedForm(timeZone);
setUserTimeZone Context ctx = PIMContextFactory.getCurrentContext(); User usr = ctx.getOrganizationManager().getUser("Admin"); String encodedForm = ctx.getTimezoneResolver().getEncodedFormFromOffset(345); usr.setUserSettingValue(UserSetting.TIMEZONE, encodedForm); usr.save();
String parseTimeZoneToDBValue(String srcStr) getEncodedFormFromOffset(int offset)
Number getTimeZoneOffsetFromDBValue(String dbValue) getOffsetFromEncodedForm(String encodedTimezone)

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

User defined log operations - script to Java migration


The migration tables list the script operations that map to User defined log Java™ API methods.

CategoryUserDefinedLog
These script operations can be mapped to the following CategoryUserDefinedLog Java API methods.
Table 1. Script operations that map to the CategoryUserDefinedLog Java API methods
Script operation Java method
UserDefinedLog::userDefinedLogGetEntriesFor(Entry entry[, Entry category]) CategoryUserDefinedLog.getLogEntries(Category category)
newUserDefinedLogEntry(Date date, Container container, Entry entry, String log[, Entry category]) CategoryUserDefinedLog.createLogEntry(Category category, String entry)
UserDefinedLog::userDefinedLogAddEntry(Entry entry, [String log_message], [Entry category]) CategoryUserDefinedLog.createLogEntry(Category category, String entry)
UserDefinedLog::userDefinedLogDeleteEntriesFor(Entry entry [, Entry category]) CategoryUserDefinedLog.deleteEntries(Category category)
UserDefinedLog::userDefinedLogGetContainer() CategoryUserDefinedLog.getHierarchy()

ItemUserDefinedLog
These script operations can be mapped to the following ItemUserDefinedLog Java API methods.
Table 2. Script operations that map to the ItemUserDefinedLog Java API methods
Script operation Java method
UserDefinedLog::userDefinedLogGetEntriesFor(Entry entry[, Entry category]) ItemUserDefinedLog.getLogEntries(Item item); ItemUserDefinedLog.getLogEntries(Item item, Category category)
newUserDefinedLogEntry(Date date, Container container, Entry entry, String log[, Entry category]) ItemUserDefinedLog.createLogEntry(Item item, Category category, String entry)
UserDefinedLog::userDefinedLogAddEntry(Entry entry, [String log_message], [Entry category]) ItemUserDefinedLog.createLogEntry(Item item, String entry); ItemUserDefinedLog.createLogEntry(Item item, Category category, String entry)
UserDefinedLog::userDefinedLogDeleteEntriesFor(Entry entry [, Entry category]) ItemUserDefinedLog.deleteEntries(Item item); ItemUserDefinedLog.deleteEntries(Item item, Category category)
UserDefinedLog::userDefinedLogGetContainer() ItemUserDefinedLog.getCatalog()

UserDefinedLog
These script operations can be mapped to the following UserDefinedLog Java API methods.
Table 3. Script operations that map to the UserDefinedLog Java API methods
Script operation Java method
UserDefinedLog::userDefinedLogGetName() UserDefinedLog.getName()
UserDefinedLog::userDefinedLogDeleteEntriesFor(Entry entry [, Entry category]) UserDefinedLog.deleteAllEntries()
UserDefinedLog::userDefinedLogSetName(String name) UserDefinedLog.setName(String name)
UserDefinedLog::userDefinedLogSetDescription(String desc) UserDefinedLog.setDescription(String description)
UserDefinedLog::insertUserDefinedLog(); UserDefinedLog::saveUserDefinedLog() UserDefinedLog.save()
UserDefinedLog::userDefinedLogDelete() UserDefinedLog.delete()
UserDefinedLog::userDefinedLogGetDescription() UserDefinedLog.getDescription()
UserDefinedLog::dumpUserDefinedLog(Writer out, String delim, String outputType, String docTag, HashMap hmNodeTags) UserDefinedLog.accept(UserDefinedLogWriter udlWriter)

UserDefinedLogEntry
These script operations can be mapped to the following UserDefinedLogEntry Java API methods.
Table 4. Script operations that map to the
UserDefinedLogEntry Java API methods
Script operation Java method
UserDefinedLogEntry::userDefinedLogEntrySetValue(String log_message) setValue(String logValue)
UserDefinedLogEntry::userDefinedLogEntryGetValue() getValue()
UserDefinedLogEntry::userDefinedLogEntryGetDate() getDate()
UserDefinedLogEntry::userDefinedLogEntrySetDate(Date date) setDate()
UserDefinedLogEntry::userDefinedLogEntryGetTarget([Boolean containerIsCatalog]) getTargetItem()
UserDefinedLogEntry::userDefinedLogEntryGetTarget([Boolean containerIsCatalog]) getTargetCategory()

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Workflow operations - script to Java migration


The migration tables list the script operations that map to Workflow Java™ API methods.

Workflow
These script operations can be mapped to the following Workflow Java API methods.
Table 1. Script operations that map to the Workflow Java API
methods
Script operation Java method
saveWfl save
validate
setCategoryTreesForRecategorization setHierarchiesForRecategorization
setWlfName setName
getWflName getName
getWflDesc getDescription
setWflDesc setDescription
setWflAccessControlGroup setAccessControlGroup
getWflAccessControlGroup getAccessControlGroup
getWflContainerType isCategoryWorkflow
isItemWorkflow
getWflSteps getSteps
getWflStepPaths
getWflInitialStep getInitialStep
getWflSuccessStep getSuccessStep
getWflFailureStep getFailureStep
deleteWfl delete
createWflStep addStep
CreateNestedWflStep addNestedWorkflowStep
getWflStepByName getStep
getWflStepByName(“FIXIT”) getFixit

Workflow Manager
These script operations can be mapped to the following Workflow Manager Java API methods.
Table 2. Script operations that map to
the Workflow Manager Java API methods

Script operation Java method
Workflow(name,type) createItemWorkflow
createCategoryWorkflow
getWflByName() getWorkflow

WorkflowStep
These script operations can be mapped to the following WorkflowStep Java API methods.
Table 3. Script operations that map to the WorkflowStep Java API methods
Script operation Java method
getWflStepName getName
getWflStepDesc getDescription
setWflStepDesc setDescription
getWflStepType getType
getWflStepExitValues getExitValues
setWflStepExitValues getExitValue
createExitValue
hasExitValue
getNextWflStepsForExitValue getNextSteps
getWflStepReserveToEdit isReserveToEditEnabled
getWflStepAddEntries isImportAllowed
getWflStepCategorizeEntries isRecategorizeEnabled
getWflStepPerformerRoles getPerformers
getWflStepPerformerUsers
mapWflStepExitValueToNextStep setNextStep
setNextSteps
setWflStepPerformerRoles addPerformer
setWflStepPerformerUsers addPerformers
setWflStepReserveToEdit setReserveToEdit
setWflStepAddEntries setAllowImportIntoStep
setWflStepCategorizeEntries setRecategorize
getEditableAttributeGroups getEditableAttributeCollections
getViewableAttributeGroups getViewableAttributeCollections
getRequiredAttributeGroups getRequiredAttributeCollections
setEditableAttributeGroups setEditableAttributeCollections
setViewableAttributeGroups setViewableAttributeCollections
setRequiredAttributeGroups setRequiredAttributeCollections
getWflStepAttributeGroups getAttributeCollections
getLocationHierarchyNames getLocationHierarchiesWithCanModifyAvailabilitySettings
getModifyLocationHierarchyAvailability getCanModifyLocationHierarchyAvailability
setModifyLocationHierarchyAvailability setCanModifyLocationHierarchyAvailability
getWflStepEntryNotification getEntranceNotificationAddresses
getWflStepTimeoutNotification getTimeoutNotificationAddresses
setWflStepEntryNotification setEntranceNotificationAddresses
setWflStepTimeoutNotification setTimeoutNotificationAddresses
getWflStepTimeoutDate getTimeoutDate
setWflStepTimeoutDate setTimeoutDate
getWflStepTimeoutDuration getTimeoutDuration
setWflStepTimeoutDuration setTimeoutDuration
getWflStepScriptPath getDefaultStepScriptPath
setWflStepScriptPath getStepScript

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Writer operations - script to Java migration


For the Writer operations, not all of the script operations from the Script API are implemented in the Java™ API. Alternative Java code is provided for those script
operations that are not implemented in the Java API.

The following writer methods can be achieved by using a mix of java.io.* for file system and docstore APIs for writing to the docstore.
Table 1. Script operations and alternative Java code for the Writer operations
Script operation Alternative Java code
close DocstoreManager.createDocument(); Document.setContent(String); Document.setContent(InputStream);
createOtherOut new PrintWriter(String fileName, String csn);
print PrintWriter.print();

println PrintWriter.println();
save DocstoreManager.createDocument(); Document.setContent(String); Document.setContent(InputStream);
setOutputAttribute Document.setProperty(Property, String); Document.setProperty(String, String);
setOutputName DocstoreManager.createDocument();
write PrintWriter.write();
writeBinaryFile Document.copyTo(String documentPath);
writeDoc String content = Document.getContent(); Writer.write(content);
writeFile Document.writeContentToOutputStream(OutputStream);
new PrintWriter(OutputStream);
writeFileUsingReader char[] buffer = new char[1024]; int length = -1; while ((length = reader.read(buffer)) != -1) writer.write((char[])buffer, 0, length); reader.close();
writeln PrintWriter.println();
setOutputAttribute Document.setProperty(Property, String)
setOutputAttribute Document.setProperty(String, String)
setOutputAttribute PIMWriter.setProperty(String propertyName,String propertyValue)
setOutputName DocstoreManager.createDocument();
setOutputName PIMWriter.setName(String name)
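
For docstore output, the table's createDocument/setContent pairing looks like the following sketch. The ctx.getDocstoreManager() accessor and the createDocument(String path) signature are assumptions; setContent(String) is listed in the table, and the path and content are placeholders.

// Sketch only.
Context ctx = PIMContextFactory.getCurrentContext();
DocstoreManager docstoreManager = ctx.getDocstoreManager(); // assumption
Document doc = docstoreManager.createDocument("/samples/out.txt"); // signature assumed
doc.setContent("exported content"); // save / writeDoc equivalent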

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

XML operations - script to Java migration


For the XML operations, not all of the script operations from the Script API are implemented in the Java™ API, but you can use a basic XML parser instead.

The following XML methods can be achieved by using a basic XML parser.

getXMLNode
getXMLNodeName
getXMLNodePath
getXMLNodes
getXMLNodeValue
getXMLNodeValues
setXMLNodeValue
setXMLNodeValues
validateXML
xmlDocToString
XmlDocument
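
For example, getXMLNodeValue reduces to a few lines with the standard javax.xml DOM parser, as in this sketch:

// Sketch only: parse a document and read one node's text content.
import java.io.ByteArrayInputStream;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

public class XmlMigrationSketch {
    public static void main(String[] args) throws Exception {
        String xml = "<item><name>Widget</name></item>";
        Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes("UTF-8")));
        // getXMLNodeValue equivalent
        System.out.println(doc.getElementsByTagName("name").item(0).getTextContent());
    }
}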

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Zip operations - script to Java migration


For the Zip operations, not all of the script operations from the Script API are implemented in the Java™ API. Alternative Java code is provided for those script operations
that are not implemented in the Java API.

Table 1. Script operations and alternative Java code for the Zip
operations
Script operation Alternative Java code
unzip new java.util.zip.ZipInputStream(new FileInputStream(File));
zip new java.util.zip.ZipOutputStream(new FileOutputStream(File));
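
The unzip replacement in running form, as a sketch; the archive name is a placeholder.

// Sketch only: list the entries of a zip archive with java.util.zip.
import java.io.FileInputStream;
import java.util.zip.ZipEntry;
import java.util.zip.ZipInputStream;

public class ZipMigrationSketch {
    public static void main(String[] args) throws Exception {
        try (ZipInputStream zin = new ZipInputStream(new FileInputStream("archive.zip"))) {
            ZipEntry entry;
            while ((entry = zin.getNextEntry()) != null) {
                System.out.println(entry.getName()); // each entry in the archive
            }
        }
    }
}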

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Sample Java code outside of Java API


You can use sample Java™ classes and methods from outside of the Java API for commonly used operations for which built-in support was available in the Script API. To ease migration to the new Java API, the following sample code is provided as-is.

BidiUtils sample
The BidiUtils sample utility transforms bidiText based on the parameters that are passed in. This utility is used with the Imports and Exports. If the direction is IMPORT, the utility uses the BiDi attributes that are specified in the parameters to create a BiDiText and then transforms it to BiDiText with default attributes. If the direction is EXPORT, the utility creates a BiDiText by using the default attributes and then transforms it to BiDiText with the attributes that are specified in the parameters.
BuildCSV sample
The BuildCSV sample provides a way to convert an array of Strings into comma-separated values (CSV) format. It accepts an array of String tokens and returns a
String that has the tokens concatenated by commas. If the tokens contain embedded newline characters (\n), carriage-return characters (\r), or commas (,), the token is
enclosed in double quotation marks.
BuildDelim sample
The BuildDelim sample takes an array of tokens and uses the array to construct a Delim format string. The delimiter is passed in as an argument. A qualifier, which
is passed in as an argument, is used to enclose tokens that contain embedded delimiters, new line characters, or carriage return characters.
BuildFixedWidth sample
The BuildFixedWidth sample accepts an array of Strings and an array of Widths to generate a fixed-width format String. If the token contains newline characters (\n) or carriage-return characters (\r) within the width that is allotted to it, the token is enclosed in double quotation marks.
CSVParser sample
This sample parser parses a reader stream and tokenizes the stream based on a comma (`,`). The stream is broken down into lines, which are then tokenized. An
empty string token is returned if the parser finds two consecutive commas (`,`) or if the line starts or ends with a comma.
DelimiterParser sample
This sample parser parses a reader stream and tokenizes the stream based on the delimiter that is passed as one of the arguments. The stream is broken down into lines, which are then tokenized. An empty string token is returned if the parser finds two consecutive delimiters.
ExcelParser sample
The ExcelParser Java class provides a way to parse Microsoft Excel spreadsheets by using the APACHE POI HSSF class. This class returns the contents of an Excel
row in the form of a String array.
FixedWidthParser sample
This sample parser parses a reader stream and tokenizes the stream based on the width that is passed in as an array of type integer. The stream is broken down
into lines that are then tokenized based on the size of the width.
SampleTokenizer sample
The SampleTokenizer Java class converts a string into a token based on a delimiter. This class is different from the StringTokenizer. This class returns an empty
String if it encounters two instances of the delimiter together.
StringUtils sample
The following sample provides String utility methods for working with the Java API.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

BidiUtils sample
The BidiUtils sample utility transforms bidiText based on the parameters that are passed in. This utility is used with the Imports and Exports. If the direction is IMPORT,
the utility uses the BiDi attributes that are specified in the parameters to create a BiDiText and then transforms it to BiDiText with default attributes. If the direction is
EXPORT, the utility creates a BiDiText by using the default attributes and then transforms it to BiDiText with the attributes that are specified in the parameters.

package com.ibm.ccd.api.samplecode;

import com.ibm.bidiTools.bdlayout.BidiFlag;
import com.ibm.bidiTools.bdlayout.BidiFlagSet;
import com.ibm.bidiTools.bdlayout.BidiText;

public class BidiUtils {

public enum BidiTypeOfText


{
IMPLICIT,
VISUAL
}

public enum BidiOrientation


{
LTR,
RTL,
CONTEXTUAL_LTR,
CONTEXTUAL_RTL
}

public enum BidiNumShapes


{

NOMINAL,
NATIONAL,
CONTEXTUAL,
ANY

}
public enum BidiTextShapes
{
NOMINAL,
SHAPED,
INITIAL,
MIDDLE,
FINAL,
ISOLATED

}
/**
* If direction is "IMPORT", use the BiDi attributes specified in the parameters to create a BiDiText and then transform it to BiDiText with default attributes.
* If direction is "EXPORT", create a BiDiText using default attributes, then transform it to BiDiText with the attributes specified in the parameters.
*
* @param srcStr the string to be transformed
* @param operateType the value can be "IMPORT"|"EXPORT"
* @param typeOfText the value can be "IMPLICIT"|"VISUAL".
* @param orientation the value can be "LTR"|"RTL"|"CONTEXTUAL_LTR"|"CONTEXTUAL_RTL"
* @param swap the value can be "YES"|"NO"
* @param numShapes the value can be "NOMINAL"|"NATIONAL"|"CONTEXTUAL"|"ANY"
* @param textShapes the value can be "NOMINAL"|"SHAPED"|"INITIAL"|"MIDDLE"|"FINAL"|"ISOLATED"
* @return the transformed string
*/
public static String bidiTransform(String srcStr, String operateType, BidiTypeOfText typeOfText, BidiOrientation
orientation, Boolean swap, BidiNumShapes numShapes, BidiTextShapes textShapes) throws Exception {
if (srcStr == null) {
return null;
}
boolean needBidiSupport = needBidiSupport(numShapes, orientation, swap, textShapes, typeOfText);
if (needBidiSupport) {
if (typeOfText == null) {
typeOfText = BidiTypeOfText.IMPLICIT;
}
if (orientation == null) {
orientation = BidiOrientation.LTR;
}
if (swap == null) {
swap = Boolean.TRUE;
}
if (numShapes == null) {
numShapes = BidiNumShapes.NOMINAL;
}
if (textShapes == null) {
textShapes = BidiTextShapes.NOMINAL;
}
BidiFlagSet flag = new BidiFlagSet(getTypeOfText(typeOfText),
getOrientation(orientation),
getSwap(swap),
getNumShapes(numShapes),
getTextShapes(textShapes));
BidiFlagSet defaultFlag = new BidiFlagSet();

BidiFlagSet srcFlag = null, destFlag = null;


if ("IMPORT".equals(operateType)) {
srcFlag = flag;
destFlag = defaultFlag;
} else if ("EXPORT".equals(operateType)) {
srcFlag = defaultFlag;
destFlag = flag;
}

BidiText src = new BidiText(srcFlag, srcStr);


BidiText dest = src.transform(destFlag);
return dest.toString();
} else {
return srcStr;
}
}

public static boolean needBidiSupport(BidiNumShapes numShapes, BidiOrientation orientation, Boolean swap, BidiTextShapes
textShapes, BidiTypeOfText typeOfText) {
if (numShapes != null || orientation != null ||
    swap != null || textShapes != null || typeOfText != null) {
return true;
} else {
return false;
}
}

public static BidiFlag getTypeOfText(BidiTypeOfText typeOfText) throws Exception {


if (BidiTypeOfText.IMPLICIT == typeOfText) {
return BidiFlag.TYPE_IMPLICIT;
}
if (BidiTypeOfText.VISUAL == typeOfText) {
return BidiFlag.TYPE_VISUAL;
}

throw new Exception("wrong value of type of text '" + typeOfText + "'");

public static BidiFlag getOrientation(BidiOrientation orientation) throws Exception


{

if (BidiOrientation.LTR == orientation) {
return BidiFlag.ORIENTATION_LTR;
}
if (BidiOrientation.RTL == orientation) {
return BidiFlag.ORIENTATION_RTL;
}
if (BidiOrientation.CONTEXTUAL_LTR == orientation) {
return BidiFlag.ORIENTATION_CONTEXT_LTR;

}
if (BidiOrientation.CONTEXTUAL_RTL == orientation) {
return BidiFlag.ORIENTATION_CONTEXT_RTL;
}

throw new Exception("wrong value of orientation '" + orientation + "'");

public static BidiFlag getSwap(Boolean swap) {


if (swap)
return BidiFlag.SWAP_YES;
else
return BidiFlag.SWAP_NO;
}

public static BidiFlag getNumShapes(BidiNumShapes numShapes) throws Exception


{
if (BidiNumShapes.NATIONAL == numShapes) {
return BidiFlag.NUMERALS_NATIONAL;
}
if (BidiNumShapes.NOMINAL == numShapes) {
return BidiFlag.NUMERALS_NOMINAL;
}
if (BidiNumShapes.CONTEXTUAL == numShapes) {
return BidiFlag.NUMERALS_CONTEXTUAL;
}
if (BidiNumShapes.ANY == numShapes) {
return BidiFlag.NUMERALS_NATIONAL;
}
throw new Exception("wrong value of num shapes '" + numShapes + "'");

public static BidiFlag getTextShapes(BidiTextShapes textShapes) throws Exception


{
if (BidiTextShapes.FINAL == textShapes) {
return BidiFlag.TEXT_FINAL;
}
if (BidiTextShapes.INITIAL == textShapes) {
return BidiFlag.TEXT_INITIAL;
}
if (BidiTextShapes.MIDDLE == textShapes) {
return BidiFlag.TEXT_MIDDLE;
}
if (BidiTextShapes.NOMINAL == textShapes) {
return BidiFlag.TEXT_NOMINAL;
}
if (BidiTextShapes.SHAPED == textShapes) {
return BidiFlag.TEXT_SHAPED;
}
if (BidiTextShapes.ISOLATED == textShapes) {
return BidiFlag.TEXT_ISOLATED;
}
throw new Exception("wrong value of text shapes '" + textShapes + "'");

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

BuildCSV sample
The BuildCSV sample provides a way to convert an array of Strings into comma-separated values (CSV) format. It accepts an array of String tokens and returns a String
that has the tokens concatenated by commas. If the tokens contain embedded newline characters (\n), carriage-return characters (\r), or commas (,), the token is enclosed in double
quotation marks.

package com.ibm.ccd.api.samplecode.parser;

import com.ibm.ccd.api.samplecode.StringUtils;

public class BuildCSV


{

public static String build(String[] as)


{
StringBuffer sb = new StringBuffer();

for (int i = 0; i < as.length; i++)


{
String sChildValue = as[i];

if (i > 0)
{

sb.append(",");
}

if ((sChildValue.indexOf(",") >= 0) || (sChildValue.indexOf("\n") >= 0) || (sChildValue.indexOf("\r") >= 0)


|| (sChildValue.indexOf("\"") >= 0))
{
sb.append("\"");
sb.append(StringUtils.replaceString(sChildValue, "\"", "\"\""));
sb.append("\"");
}
else
{
sb.append(sChildValue);
}
}

return sb.toString();
}

public static void main(String args[])


{
String [] str = {"a","b","c","d"};
System.out.println(build(str));
    }
}

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

BuildDelim sample
The BuildDelim sample takes an array of tokens and uses the array to construct a Delim format string. The delimiter is passed in as an argument. A qualifier, which is
passed in as an argument, is used to enclose tokens that contain embedded delimiters, new line characters, or carriage return characters.

package com.ibm.ccd.api.samplecode.parser;

//common imports

import com.ibm.ccd.api.samplecode.StringUtils;

public class BuildDelim


{

public static String build(String sDelim, String sTextQualifier, String[] as)


{
StringBuffer sb = new StringBuffer();

for (int i = 0; i < as.length; i++)


{
String sChildValue = as[i];

if (i > 0)
{
sb.append(sDelim);
}

if ((sChildValue.indexOf(sDelim) >= 0) || (sChildValue.indexOf("\n") >= 0) ||


(sChildValue.indexOf("\r") >= 0))
{
sb.append(sTextQualifier);
sb.append(StringUtils.
replaceString(sChildValue, sTextQualifier, sTextQualifier + sTextQualifier));
sb.append(sTextQualifier);
}
else
{
sb.append(sChildValue);
}
}

return sb.toString();
}
public static void main(String args[])
{
String tokens [] = {"token1", "tok|en2", "token3"};
String sDelim = "|";
String sTextQualifier = "\"";
System.out.println(build(sDelim, sTextQualifier, tokens));
    }
}

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

BuildFixedWidth sample
The BuildFixedWidth sample accepts an array of Strings and an array of Widths to generate a fixed-width format String. If the token contains newline characters (\n) or carriage-return characters (\r) within the width that is allotted to it, the token is enclosed in double quotation marks.

package com.ibm.ccd.api.samplecode.parser;

import com.ibm.ccd.api.samplecode.StringUtils;

public class BuildFixedWidth
{
    public static String build(String[] aValues, int[] aWidths)
    {
        StringBuffer sb = new StringBuffer();

        for (int i = 0; i < aValues.length; i++)
        {
            String sValue = aValues[i];
            int nWidth = aWidths[i];
            String sPaddedValue = padStringToRight(sValue, nWidth, ' ');

            // Truncate the padded token to its fixed width.
            if (sPaddedValue.length() > nWidth)
            {
                sPaddedValue = sPaddedValue.substring(0, nWidth);
            }

            // Quote the token if it contains a newline or carriage return.
            if ((sPaddedValue.indexOf("\n") >= 0) || (sPaddedValue.indexOf("\r") >= 0))
            {
                sb.append("\"");
                sb.append(StringUtils.replaceString(sPaddedValue, "\"", "\"\""));
                sb.append("\"");
            }
            else
            {
                sb.append(sPaddedValue);
            }
        }

        return sb.toString();
    }

    private static String padStringToRight(String sArg, int len, char padChar)
    {
        StringBuffer buf = new StringBuffer(len);
        int arg_len = sArg.length();
        int needed_len = len - arg_len;

        buf.append(sArg);

        if (arg_len < len)
        {
            for (int i = 0; i < needed_len; ++i)
            {
                buf.append(padChar);
            }
        }
        return buf.toString();
    }

    public static void main(String args[])
    {
        String[] sValues = {"a", "bb", "ccc", "dddd"};
        int[] iWidth = {1, 1, 3, 4};
        System.out.println(build(sValues, iWidth));
    }
}

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

CSVParser sample
This sample parser parses a reader stream and tokenizes the stream based on a comma (,). The stream is broken down into lines, which are then tokenized. An empty
string token is returned if the parser finds two consecutive commas or if a line starts or ends with a comma.



package com.ibm.ccd.api.samplecode.parser;

import java.io.BufferedReader;
import java.io.StringReader;
import java.util.ArrayList;

public class CSVParser


{

String oneRes;
private String line = "";
private int nbLines = 0;
private BufferedReader reader;

public CSVParser(BufferedReader reader)


{
this.reader = reader;
}

public String[] splitLine() throws Exception


{
nbLines = 0;
ArrayList<String> al = new ArrayList<String>();
line = nextLine();
if (line == null)
return null;

nbLines = 1;
int pos = 0;

while (pos < line.length())


{
pos = findNextComma(pos);
al.add(oneRes);
pos++;
}

if (line.length() > 0 && line.charAt(line.length() - 1) == ',')


{
al.add("");
}

return (String[])al.toArray(com.ibm.ccd.common.util.Const.JAVA_LANG_STRING_EMPTY_ARRAY);
}

private int findNextComma(int p) throws Exception


{
char c;
int i;
oneRes = "";
c = line.charAt(p);

// empty field
if (c == ',')
{
oneRes = "";
return p;
}

// not escape char


if (c != '"')
{
i = line.indexOf(',', p);
if (i == -1)
i = line.length();
oneRes = line.substring(p, i);
return i;
}

// start with "


p++;

StringBuffer sb = new StringBuffer(200);


while (true)
{
c = readNextChar(p);
p++;

// not a "
if (c != '"')
{
sb.append(c);
continue;
}

// ", last char -> ok


if (p == line.length())
{
oneRes = sb.toString();
return p;
}

c = readNextChar(p);
p++;

// "" -> just print one
if (c == '"')
{
sb.append('"');
continue;
}

// ", -> return


if (c == ',')
{
oneRes = sb.toString();
return p - 1;
}

throw new Exception("Unexpected token found");


}
}

private char readNextChar(int p) throws Exception


{
if (p == line.length())
{
String newLine = reader.readLine();
if (newLine == null)
throw new Exception("Error occured while parsing");
line += "\n" + newLine;
nbLines++;
}
return line.charAt(p);
}

public String nextLine()


throws Exception
{
do
{
line = reader.readLine();
if (line == null)
return null;
}
while (line.trim().equals(""));
return line;
}

public static void main (String args[]) throws Exception


{
BufferedReader reader = null;
try
{
String doc = "a,a ab,c,d a\n" +
",1 a\n" +
"1, \n" +
"a,\n" +
"1," +
"\"v \"\"a v\"";

System.out.println("String to be parsed = " + doc);


reader = new BufferedReader(new StringReader(doc));
CSVParser parser = new CSVParser(reader);
String[] res;

ArrayList<String> tokens = new ArrayList<String>();

while ((res = parser.splitLine()) != null)


{
for (int i = 0; i < res.length; i++)
{
System.out.println("Token Found ["+res[i]+"] \n");
}
}
}
catch(Exception e )
{
e.printStackTrace();
}
finally
{
if (reader != null)
reader.close();
}
}
}

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

DelimiterParser sample



This sample parser parses a reader stream and tokenizes the stream based on the delimiter that is passed as one of the arguments. The stream is broken down into lines,
which are then tokenized. An empty string token is returned if the parser finds two consecutive delimiters.

package com.ibm.ccd.api.samplecode.parser;

import com.ibm.ccd.api.samplecode.SampleTokenizer;

import java.io.BufferedReader;
import java.io.StringReader;

public class DelimiterParser


{

private String delim;


private String line = "";
private int nbLines = 0;
private BufferedReader reader;

public DelimiterParser(BufferedReader reader, String delim)


{
this.reader = reader;
this.delim = delim;
}

public String[] splitLine() throws Exception


{
nbLines = 0;
line = nextLine();
if (line == null)
return null;
nbLines = 1;

SampleTokenizer st = new SampleTokenizer(line, delim.charAt(0));


int numTokens = st.countTokens();
String[] allTokens = new String[numTokens];

for (int i = 0; i < numTokens; i++)


allTokens[i] = st.nextToken();

return allTokens;
}

public String nextLine()


throws Exception
{
do
{
line = reader.readLine();
if (line == null)
return null;
}
while (line.trim().equals(""));
return line;
}

public static void main (String [] args)


{
BufferedReader reader = null;
try
{
String doc = "a,a ab,c,d a\n" +
",1 a\n" +
"1, \n" +
"a,\n" +
"1," +
"\"v \"\"a v\"";

System.out.println("String being parsed = [" + doc + "]");


reader = new BufferedReader(new StringReader(doc));
DelimiterParser parser = new DelimiterParser(reader,",");
String[] res;

while ((res = parser.splitLine()) != null)


{
for (int i = 0; i < res.length; i++)
{
System.out.println("token found ["+res[i]+"] \n");
}
}
}
catch(Exception e )
{
e.printStackTrace();
}
}
}

IBM Product Master 12.0 Fix Pack 8



Operating Systems: AIX, Linux, and Windows (Workbench only)

ExcelParser sample
The ExcelParser Java™ class provides a way to parse Microsoft Excel spreadsheets by using the APACHE POI HSSF class. This class returns the contents of an Excel row in
the form of a String array.

Note: Microsoft Excel 2007 file format (*.xlsx) is not supported by the APACHE POI jar.

package com.ibm.ccd.api.samplecode.parser;

// Java imports
import java.io.IOException;
import java.text.SimpleDateFormat;
import java.util.Date;

import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.InputStream;
import java.io.*;

// Apache POI - HSSF imports


import org.apache.poi.hssf.usermodel.HSSFCell;
import org.apache.poi.hssf.usermodel.HSSFDateUtil;
import org.apache.poi.hssf.usermodel.HSSFSheet;
import org.apache.poi.hssf.usermodel.HSSFRow;
import org.apache.poi.hssf.usermodel.HSSFWorkbook;
import org.apache.poi.poifs.filesystem.POIFSFileSystem;

public class ExcelParser {

HSSFSheet m_sheet;
int m_iNbRows;
int m_iCurrentRow = 0;
private static final String JAVA_TOSTRING =
"EEE MMM dd HH:mm:ss zzz yyyy";

public ExcelParser(HSSFSheet sheet)


{
m_sheet = sheet;
m_iNbRows = sheet.getPhysicalNumberOfRows();
}

/* Returns the contents of an Excel row in the form of a String array.
* @see com.ibm.ccd.common.parsing.Parser#splitLine()
*/
public String[] splitLine() throws Exception {
if (m_iCurrentRow == m_iNbRows)
return null;

HSSFRow row = m_sheet.getRow(m_iCurrentRow);


if(row == null)
{
return null;
}
else
{
int cellIndex = 0;
int noOfCells = row.getPhysicalNumberOfCells();
String[] values = new String[noOfCells];
short firstCellNum = row.getFirstCellNum();
short lastCellNum = row.getLastCellNum();

if (firstCellNum >=0 && lastCellNum >=0)


{
for(short iCurrent = firstCellNum; iCurrent <lastCellNum; iCurrent++)
{
HSSFCell cell = (HSSFCell)row.getCell(iCurrent);
if(cell == null)
{
values[iCurrent] = "";
cellIndex++;
continue;
}
else
{
switch(cell.getCellType())
{
case HSSFCell.CELL_TYPE_NUMERIC:
double value = cell.getNumericCellValue();
if(HSSFDateUtil.isCellDateFormatted(cell))

{
if(HSSFDateUtil.isValidExcelDate(value))
{
Date date = HSSFDateUtil.getJavaDate(value);
SimpleDateFormat dateFormat = new
SimpleDateFormat(JAVA_TOSTRING);
values[iCurrent] = dateFormat.format(date);
}
else
{
throw new Exception("Invalid Date value found at row number " +
row.getRowNum()+" and column number
"+cell.getCellNum());
}
}
else
{
values[iCurrent] = value + "";
}
break;

case HSSFCell.CELL_TYPE_STRING:
values[iCurrent] = cell.getStringCellValue();
break;

case HSSFCell.CELL_TYPE_BLANK:
values[iCurrent] = null;
break;

default:
values[iCurrent] = null;
}
}
}
}
m_iCurrentRow++;
return values;
}
}

public static void main(String args[])


{
HSSFWorkbook workBook = null;
File file = new File("/home/sprasad/austin_api/Book1.xls");
InputStream excelDocumentStream = null;
try
{
excelDocumentStream = new FileInputStream(file);
POIFSFileSystem fsPOI = new POIFSFileSystem(new BufferedInputStream(excelDocumentStream));
workBook = new HSSFWorkbook(fsPOI);
ExcelParser parser = new ExcelParser(workBook.getSheetAt(0));
String [] res;
while ((res = parser.splitLine()) != null)
{
for (int i = 0; i < res.length; i++)
{
System.out.println("Token Found [" + res[i] + "]");
}
}
excelDocumentStream.close();

}
catch(Exception e)
{
e.printStackTrace();
}
}
}

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

FixedWidthParser sample
This sample parser parses a reader stream and tokenizes the stream based on the widths that are passed in as an integer array. The stream is broken down into lines
that are then tokenized based on the widths.

package com.ibm.ccd.api.samplecode.parser;

//other imports

import java.io.BufferedReader;
import java.io.StringReader;
import java.io.IOException;
import java.util.Vector;

public class FixedWidthParser


{
public String line = "";
protected int nbLines = 0;
int[] tokenSizeArray;
private BufferedReader reader;

public FixedWidthParser(BufferedReader reader, int[] tokenSizeArray)


{
this.reader = reader;
this.tokenSizeArray = tokenSizeArray;
}

public String[] splitLine() throws IOException


{
nbLines = 0;
line = nextLine();
if (line == null)
return null;
nbLines = 1;

Vector v = new Vector();


int pointer = 0;
int end = line.length();
for (int i = 0; i < tokenSizeArray.length; i++)
{
if (pointer + tokenSizeArray[i] > end)
{
if (pointer < end)
v.addElement(line.substring(pointer));
else
v.addElement("");
}
else
{
v.addElement(line.substring(pointer, pointer + tokenSizeArray[i]));
}
pointer += tokenSizeArray[i];
}
return (String[])v.toArray(new String[0]);
}
public String nextLine()
throws IOException
{
do
{
line = reader.readLine();
if (line == null)
return null;
}
while (line.trim().equals(""));
return line;
}

public static void main (String args[])


{
BufferedReader reader = null;
try
{
String doc = "This is a test String which is being parsed by FixedWidthParser";

System.out.println("String being parsed = [" + doc + "]");


reader = new BufferedReader(new StringReader(doc));
int [] widths = {4, 3, 2, 5, 7, 6, 3, 6, 7, 3, 17};
FixedWidthParser parser = new FixedWidthParser(reader,widths);
String[] res;

while ((res = parser.splitLine()) != null)


{
for (int i = 0; i < res.length; i++)
{
System.out.println("token found ["+res[i]+"] \n");
}
}
}
catch(Exception e )
{
e.printStackTrace();
}
}
}

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)



SampleTokenizer sample
The SampleTokenizer Java™ class splits a string into tokens based on a delimiter. This class is different from StringTokenizer in that it returns an empty String if it
encounters two consecutive instances of the delimiter.

package com.ibm.ccd.api.samplecode;

import java.util.*;

public class SampleTokenizer


{

String string;
String delim;
int currentIndex;
int length;

/**
* Constructs a tokenizer for the string to be parsed
*
* @param
* string, the string to be parsed
* @param
* delim, the char delimiter
*/

public SampleTokenizer(String string, char delim)


{
this.delim = delim + "";
this.string = string;
this.currentIndex = 0;
this.length = string.length();
}

/**
* Constructs a tokenizer for the string to be parsed
*
* @param
* string, the string to be parsed
* @param
* delim, the string delimiter
*/
public SampleTokenizer(String string, String delim)
{
this.delim = delim;
this.string = string;
this.currentIndex = 0;
this.length = string.length();
}

public boolean hasMoreTokens()


{
return (currentIndex <= length);
}

public int countTokens()


{
int count = 1;
int tempIndex = this.currentIndex;

if (currentIndex >= length)


return 0;

while (((tempIndex = this.string.indexOf(delim, tempIndex)) != -1)


&& (tempIndex < this.length))
{
tempIndex = tempIndex + delim.length();
count++;
}

return count;
}

public String nextToken()


{
int tempIndex = this.string.indexOf(delim, currentIndex);
if (tempIndex == -1)
tempIndex = length;
String str = this.string.substring(currentIndex, tempIndex);
currentIndex = tempIndex + delim.length();
return str;
}

public Object nextElement()


{
return nextToken();
}

public static void main(String [] args)


{

String stringToBeParsed = "a,b,,c,d";
SampleTokenizer st = new SampleTokenizer(stringToBeParsed, ",");
while(st.hasMoreTokens())
{
System.out.println("Token = [" + st.nextToken() + "]");
}
}
}

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

StringUtils sample
The following sample provides String utility methods for working with the Java™ API.

The utility methods are:

replaceString
This function takes three arguments, the source string, the string to be matched match, and the string to be replaced replace. The function returns a new string that
results from replacing the first occurrence of match with replace in the source String.
escapeForJS
This function adds escape characters into the string, as necessary, so that the string can be used in JavaScript. For example, a \ character is converted to \\, \\ is
converted to \\\\, and \n is converted to \\n.
escapeForCSV
If the given string contains a comma (,), newline character (\n), CRLF character (\r), or double quotation marks, this function returns a string that has double
quotation marks at the beginning and end. Also, any embedded double quotation marks have another double quotation mark added to them. The string
abdc,asjdfh, "asdfdas" would become "abdc,asjdfh, ""asdfdas""".
escapeForHTML
When isAscii is true, this function searches the string for HTML special characters and replaces them with numeric character references of the form &#asciicode;.
When isAscii is false, the HTML special characters in the given string are escaped with named entities: & is converted to &amp;, < is converted to &lt;, > is
converted to &gt;, and " is converted to &quot; (the apostrophe is converted to &#39; in both cases). Runs of space characters are preserved by converting the
additional spaces to &nbsp;.
escapeWithHTMLEntities
This function takes a string and two integers, beg and end. The two integers define a character range. The characters outside of this range are converted to HTML
escape sequences. For example, if the range does not include the numeric representation of the letter A (65), then any A in the given string is converted to &#65;.

package com.ibm.ccd.api.samplecode;

import java.io.*;
import java.util.*;

/**
* Class StringUtils
*
*/
public class StringUtils
{

public static String replaceString(String s, String sMatch, String sReplace)


{
if (sReplace == null)
sReplace = "";

if (sMatch == null || "".equals(sMatch) || sMatch.equals(sReplace))


return s;

if (s == null || s.equals(""))
{
return "";
}

int i = 0;
int j = s.indexOf(sMatch);

if (j < 0)
{
return s;
}

StringBuffer sb = new StringBuffer(s.length());

while (true)
{
sb.append(s.substring(i, j));
sb.append(sReplace);

i = j + sMatch.length();
j = s.indexOf(sMatch, i);

if (j < 0)
{
sb.append(s.substring(i));
break;
}
}

return sb.toString();
}

public static String escapeDelimiter(String s, String sEscaper, String sDelimiter, String sBackendDelimiter)
{
if (s == null || "".equals(s))
return "";

StringBuffer sbResult = new StringBuffer();


for (int i = 0; i < s.length(); i++)
{
if (s.startsWith("" + sEscaper + sEscaper, i))
{
sbResult.append(sEscaper);
i++;
}
else if (s.startsWith("" + sEscaper + sDelimiter, i))
{
sbResult.append(sDelimiter);
i++;
}
else if (s.startsWith("" + sDelimiter, i))
{
sbResult.append(sBackendDelimiter);
}
else
{
sbResult.append(s.charAt(i));
}
}

return sbResult.toString();
}

public static String escapeForJS(String txt)


{
return escapeForJS(txt, false);
}

public static String escapeForJS(String txt, boolean useDosCRLF)


{
txt = StringUtils.replaceString(txt, "\\", "\\\\");

if (useDosCRLF)
{
txt = StringUtils.replaceString(txt, "\r", "");
txt = StringUtils.replaceString(txt, "\n", "\\r\\n");
}
else
{
txt = StringUtils.replaceString(txt, "\r", "");
}

txt = StringUtils.replaceString(txt, "\n", "\\n");


txt = StringUtils.replaceString(txt, "'", "\\'");
txt = StringUtils.replaceString(txt, "\"", "\\\"");
return txt;
}

public static String escapeForCSV(String sArg)


{
StringBuffer sb = new StringBuffer();
if (sArg != null)
{
if ((sArg.indexOf(",") >= 0) || (sArg.indexOf("\n") >= 0) || (sArg.indexOf("\r") >= 0)
|| (sArg.indexOf("\"") >= 0))
{
sb.append("\"");
sb.append(StringUtils.replaceString(sArg, "\"", "\"\""));
sb.append("\"");
}
else
{
sb.append(sArg);
}
}

return sb.toString();
}

public static String escapeForHTML(String sArg, boolean isAscii)


{

String s;
if (isAscii)
{
// Entity strings reconstructed here; the source rendering collapsed them.
// Replace HTML special characters with numeric character references.
s = replaceString(sArg, "&", "&#38;");
s = replaceString(s, "<", "&#60;");
s = replaceString(s, ">", "&#62;");
s = replaceString(s, "\"", "&#34;");
s = replaceString(s, "'", "&#39;");
}
else
{
// Replace HTML special characters with named entities.
s = replaceString(sArg, "&", "&amp;");
s = replaceString(s, "<", "&lt;");
s = replaceString(s, ">", "&gt;");
s = replaceString(s, "\"", "&quot;");
s = replaceString(s, "'", "&#39;");
}
return preserveSpaceRuns(s);
}

public static String preserveSpaceRuns(String s)


{
if (s.indexOf("  ") < 0)
// Quick check for no runs of spaces (two consecutive spaces)
return s;
else
{
int imax = s.length();
StringBuffer sb = new StringBuffer(imax + imax);
for (int i = 0; i < imax; i++)
{
char c = s.charAt(i);
sb.append(c);
if (c == ' ')
{
for (int j = i + 1; j < imax && s.charAt(j) == c; j++)
{
sb.append(" ");
i = j;
}
}
}
return sb.toString();
}
}

public static String escapeWithHTMLEntities(String s, int beg, int end)


{
StringBuffer sbResult = new StringBuffer();

for (int i = 0; i < s.length(); i++)


{
int ch = (int)s.charAt(i);
if (ch < beg || ch > end)
sbResult.append("&#" + ch + ";");
else
sbResult.append(s.charAt(i));
}

return sbResult.toString();
}
}
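
As a quick check of these utilities, a small driver class like the following sketch (assumed to sit in the same com.ibm.ccd.api.samplecode package; the class name is illustrative) exercises two of the methods. The expected output follows directly from the descriptions above.

package com.ibm.ccd.api.samplecode;

public class StringUtilsDemo
{
    public static void main(String[] args)
    {
        // Matches the escapeForCSV example in the description above;
        // prints: "abdc,asjdfh, ""asdfdas"""
        System.out.println(StringUtils.escapeForCSV("abdc,asjdfh, \"asdfdas\""));

        // replaceString replaces each occurrence of the match string;
        // prints: a+b+c
        System.out.println(StringUtils.replaceString("a-b-c", "-", "+"));
    }
}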

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Troubleshooting Java API


The following are some of the troubleshooting tips for developing or running the Java™ API-based code.

If your stand-alone application test program terminates without any error, it is likely that you do not have the database-related jars in your class path. Add
ojdbc5.jar or ojdbc6.jar, or db2jcc.jar and db2jcc_license_cu.jar, depending on the database type.
If you receive warnings like 'ERROR MESSAGE ID NOT FOUND ANYMORE', most likely you are using the wrong version of the Java API jar: an older version of
ccd_javaapi2.jar or ccd_javaapi.jar is being picked up.
If your application does not compile when you try to run it, make sure that the "Source folders" on the build path and the "Output folders" are configured
correctly.
Compile errors such as Cannot resolve symbol: PIMWebServicesContextFactory indicate that you might have an error on your class path and that the Java API
.jar file cannot be found. To correct the class path:
1. In the New Project Wizard or the Project Properties screen, select the Libraries tab.
2. Click Add External JARs.
3. Go to the ccd_javaapi2.jar and select it. Additionally, go to the jars folder and select the files in the directory.
4. Save the changes to your project. Any errors that are due to unresolved references to Java API classes should now disappear.
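
After the class path is corrected, a minimal stand-alone test like the following sketch can confirm connectivity. The factory class, the getContext() signature, and the cleanUp() call shown here follow common Java API usage but should be verified against the Javadocs for your release; the user name, password, and company code are placeholders.

// Minimal connectivity sketch; verify PIMContextFactory.getContext() and
// Context.cleanUp() against the Javadocs. The user name, password, and
// company code are placeholders.
import com.ibm.pim.context.Context;
import com.ibm.pim.context.PIMContextFactory;

public class ConnectionTest
{
    public static void main(String[] args)
    {
        Context ctx = null;
        try
        {
            ctx = PIMContextFactory.getContext("Admin", "password", "companyCode");
            System.out.println("Connected: " + ctx);
        }
        catch (Exception e)
        {
            // A silent exit before this point usually means that the
            // database-related jars are missing from the class path.
            e.printStackTrace();
        }
        finally
        {
            if (ctx != null)
            {
                ctx.cleanUp();
            }
        }
    }
}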



IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Resources
The following resources are available on Java™ API.

You can discuss IBM® Product Master and the Java API at IBM Developer.
Javadoc for all of the Java API interfaces is available in the product documentation: go to Reference and click Javadocs.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Script API
The Script API is similar to Java™ API. Script API operations extend the basic function of IBM® Product Master.

Important: New product functions might have programming interfaces that no longer support the scripting APIs. Therefore, ensure that you move from the scripting APIs to
the Java APIs.
With script operations you can clean, transform, validate, and calculate information to align with business rules and processes. This information can then be imported from
and exported to virtually any standard or custom file format, or used to complete mass updates on a catalog of information.

Product Master provides a library of scripting operations that you can view in the script sandbox. The script operations can be run either in the script sandbox or in a
command line.

To access the script sandbox, click Data Model Manager > Scripting > Script Sandbox.

To use the command line, use the following command.

$JAVA_RT com.ibm.ccd.common.interpreter.test.TestScript -scriptpath=your_script_path -companycode=your_company_name

Script operations use the following prototype definition:

Return type - Object - Method - (Parameters)

The Script sandbox has an Expression Builder that has following fields:

Script Constants
The script constants are predefined by Product Master. They are true, false, "", and null.
Script Implicit Variables
The script implicit variables are predefined by Product Master in some contexts. They are item, err, var, res, category, and feed_doc_path.
Script Operations
There are more than 1000 script operations.
Script Operators
The following script operators are supported:
+, -, *, /, ==, !=, <, <=, >, >=, &&, ||, !
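
For example, the following minimal script, which you can paste into the script sandbox, combines the constants and several of the operators; the variable names are illustrative.

// Minimal sandbox sketch that exercises script constants and operators.
var count = 3;
var label = null;
if (count >= 1 && count != 0)
{
   label = "count * 2 = " + (count * 2);
}
out.writeln(label);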

Types of scripts (extension points)


An extension point is where you run a script (Script API). You can replace a Script API with a one-line script that redirects to a Java class.
Query all lookup table names through scripts
You must use SQL queries to retrieve this information.
Delete a non-multi-occurring attribute group through scripts
Using a script for an item, you can delete a non-multi-occurrence attribute group.
Retrieve a select group of items through scripting
Using a script for an item, you can retrieve a select group of items. For example, only the items under a specific category or having a specific attribute value to be
displayed or exported.
Script expressions
Script expressions are small scripts that are attached to an attribute either in a spec or in a mapping.
Scriptlets
Scriptlets are small scripts that define business rules. All scripts can be edited and viewed in the Scripts Console.
Language constructs
Language constructs are the basic code of the IBM Product Master scripting language.
Guidance on transactions and exception handling
You can use the useTransaction() and startTransaction() operations to run scripting code inside a transaction. Running related code inside a transaction
provides you an ability to ensure atomic execution of the related code. You can also use transactions in long-running jobs such as import or report jobs for periodic
commit of the changes to the database.
Compiled scripts
You can run scripts in any of these modes: compiled_only, not_compiled, or compiled_if_available. The compiled_only mode applies more
restrictions on writing scripts than the other modes; however, it also makes the scripts run faster.
Predefined scripts
You can create predefined scripts for the users to use. Predefined scripts are simple scripts.



Scripting tips
You can use the IBM Product Master workbench to write your scripts to ensure accuracy of the script syntax.
JDBC script operations
You use the script API to drive the Java JDBC calls directly for script operations.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Types of scripts (extension points)


An extension point is where you run a script (Script API). You can replace a Script API with a one-line script that redirects to a Java™ class.

The full list of available script types can be found in the Scripts Console. To access the Scripts Console, click Data Model Manager > Scripting > Scripts Console. Although
various script types are available from the Scripts Console, they are all grouped in one of the following types:

Data manipulation, cleansing, file formatting, and expression mappings for imports and exports
Mass updates
Custom validation rules
Calculated values

Script extension points are the places where Product Master runs a user-defined script. The following table lists the available extension points:
Table 1. Extension points (Product Master name, script type, and description)

Report generation script (script type: Report script)
This script is run when the report is generated. Report scripts are used to create custom reports. When you create a report in Product Master, you use scripts to define the report output. The report script is used to define how the information is ordered and formatted.

Run value rule (script type: Value rule)
This script sets values for an attribute. You can also use this script to change the value of an attribute when accessed.

Default value rule (script type: Default value rule)
This script provides a default value for an attribute. Instead of giving a default value for an attribute, you can specify that a script generates the value.
Note: The primary key attribute does not support this rule.

Non-persisted attribute rule (script type: Non-persisted attribute rule)
This script generates a value for an attribute that is not stored in the database. This script runs when you view an item.
Note: This rule is triggered twice; the second invocation is triggered in the context of retrieving item view information, and the item attribute values cannot be accessed. Before proceeding, always first check that item.getPrimaryKey() is not an empty string.

String enumeration value rules (script type: String enumeration rule)
This script provides a dynamic list of possible values for an attribute. This script returns an array of strings for enumeration. A string enumeration rule, like a value rule or a validation rule, is used within a specification. A string enumeration rule can be used only with attributes of "String Enumeration type".

Validation rule script (script type: Validation rule)
This script determines whether an attribute is valid. This script runs when an attribute is validated. A validation rule, like a value rule, is used within a specification. A validation rule is used to validate an attribute-based value on a business rule. A validation rule must return a value of true or false. A value rule is created as a parameter of an attribute in a specification. A value rule calculates the value of the attribute to which it is attached. When an item is created or saved, the value rule is computed.

Workflow step script (script type: Workflow step)
This script affects the In, Out, and Timeout functions for workflow transitions. When you define a workflow step, you can define three scripts (1 script that has 3 functions). IN is run when items move into the step. OUT is run when items leave the step. TIMEOUT is run when items are in the step too long and therefore time out.

Post save script (script type: Post save script)
This script is the last script that is run during a save. Post save scripts are attached to a catalog and hierarchy and are run when an item or category is saved. The post save script operation happens after the record commits to the database.
The refresh data tool sets the post save script for the hierarchy. However, the post save script does not show up in the catalog Entry Categories hierarchy attributes user interface. Therefore, there is a limitation in that the post save script does not display in the drop-down box on the catalog Entry Categories hierarchy attributes page.

Distribution script (script type: Distribution script)
This script provides custom distribution so that you can send messages by your own means. Distribution scripts, for example, Ariba catalog Upload, FTP, HTTP POST, and email, are used to create a custom distribution that is not addressed by the built-in Product Master distributions.

Mutable lookup table loading script (script type: Lookup table import script)
This script runs during the import process. This script populates the lookup tables. Lookup table scripts are similar to aggregation scripts; they are used to populate the contents of a lookup table instead of a catalog. The navigation to access lookup table import scripts is the same as catalog import scripts.

Catalog to catalog export script (script type: Catalog to catalog export script)
This script is set in the Catalog Console from the Catalog main menu. Catalog export scripts are used to complete advanced, "on-the-fly" operations on data that is contained in the catalog before it is exported to an output file. Modifications that are made to the content through the scripting engine at the time of export are not applied to the catalog, but rather applied to the output file as a one-time content modification. All exports require the use of a script. Unlike the import, selecting a script during export cannot be skipped. However, for each new destination spec that you create, three default generated scripts are available to choose from: CSV, tab-delimited, and fixed-width.

Queue sync processing script (script type: Queue message processor)
This script is started by ccd_connectivity/invoke_queue.jsp.

Queue sync script (script type: Queue message processor)
This script is started by the queue manager JVM as a job.

Catalog generation script (script type: Catalog export script)
This script is started by the catalog export function. The catalog export script can complete a comparison of two catalog versions. For each item, the status between the two versions can be accessed. There are four possible types of status: Modified, Added, Deleted, Unmodified.

Import catalog "before" script (script type: Import script)
This script is started by the hierarchy and catalog import function. Import and export scripts are used to import data into and export data out of Product Master.

Run container script (script type: Pre-processing and post-processing script)
When you use Product Master, the order of execution of the container scripts is:
1. User creates or edits an entry (item or category).
2. Entries build script runs.
3. Sequences and enumerations are populated (only for new entries, primary key first), and non-persistent attribute rules run.
4. Entry is displayed in the user interface.
5. User initiates the Save function.
6. Pre-processing script runs.
7. Value rules run.
8. Post-processing catalog script runs.
9. Validation rules, enumerations, and non-persistent attribute rules run.
10. Entry is saved, committed, and persisted.
11. Post-save catalog script runs.
The validation rules are run after the post-processing script completes.

Run entry build script (script type: Entries build script)
This script is started by the entry build.
Note: The entry build script is run twice for every entry build.

Scripting sandbox (script type: Scripting sandbox UI)
This script is started by the PageContent script operation that is started from the sandbox.

Entry preview script (script type: Entry preview script)
This script is started by the entry preview function, as started through the JavaScript library. The entry preview script allows you to create a sample view of a current item set, which can be run from the data entry screens. For example, you can write a script to view how an item displays when you use an XML format.

WorkListHolder preview marked script (script type: Entry preview script)
This script is started by the entry preview script, as started through the mainline code.

getURL or getContent script (script type: Data entry more window script)
This script allows custom windows to be added to data entry screens that are set through the XML within etc/default/data_entry_properties.xml.

Container preview script (script type: Catalog preview script)
This script is started by the container (catalog) preview function, as started from the user interface. A catalog script is a sequence of operations that you specify to be run at the time of item creation and edit. This function provides another layer of function over the attribute-level operations available through the catalog specs.

Custom page script (script type: Custom tools)
This script is a custom Product Master JSP user function. You can create a custom tool, which is a script that outputs a web page and deals with request and response.

Secure invoker and invoker script (script type: (Secure) trigger script)
This script is started through an HTTP URL as an external interface to Product Master. Trigger scripts are created to avoid the need to populate the same script operations in multiple places. Trigger scripts are stored in the document store and can be called from another script function. Trigger scripts externally trigger events in Product Master, for example, imports and exports.

Report search result button script (script type: Rich search report script)
This script is started when you run reports.

Macro entry processor script (script type: Entry macro script)
This script is started from the Macro button in the user interface. A user can run an entry macro script within the data entry screens. For example, you can write a script to replace all strings with a value. This script triggers a post-processing script to save modifications, if any, to an item.
Note: Macro supports the asynchronous mode. Macro runs on the EntryProcessor parameters, and results are logged through the EntryProcessor status.

String Enumeration rule script


The String Enumeration rule script is attached to an attribute of String Enumeration type and is run under specified conditions. Its intended purpose is to allow the
set of enumerated values to be generated dynamically, as opposed to being defined as a static list in the attribute definition in the spec.
UI Refresh script
A UI refresh script can be defined for a container (catalog or hierarchy). The script operates with a Refresh button on the single-edit and multi-edit pages of the
interface.
Value rule script
You can write a value rule for each node of a multi-occurring attribute such that a value corresponding to each multi-occurring attribute is generated.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

String Enumeration rule script


The String Enumeration rule script is attached to an attribute of String Enumeration type and is run under specified conditions. Its intended purpose is to allow the set of
enumerated values to be generated dynamically, as opposed to being defined as a static list in the attribute definition in the spec.

The String Enumeration rule runs whenever the button that shows the drop-down list on the corresponding widget is clicked. The most current values of all attributes are
transferred to the server before the rule is run.

The most common scenario for using the String Enumeration rule script is for interdependent attributes, where the value of one attribute (which is of String Enumeration
type) depends on the values of one or more other attributes. The rule defines how to compute the choices that are displayed in the list based on those values.

Important: When you define primary keys in specs of type String, ensure that you limit the character length to 300.



In the following example, in a spec that is called spec1, when you edit an item, you see two String Enumeration lists. If you click the Country list, you receive a choice of two
values, UK or USA. If you select USA, then click the City list to receive a choice of three values, Washington, Omaha, or Seattle.

So spec1 contains these two attributes:

Country
String enumeration
Static fixed set of values {UK, USA}
City
String enumeration
Dynamic set of values that depend on your Country attribute

var val = entry.getEntryAttrib("spec1/Country");


res = [];
if (val == null || val == "") {
res.add("Please select a Country");
} else if (val == "UK") {
res.add("London");
res.add("Birmingham");
res.add("Manchester");
} else if (val == "USA") {
res.add("Washington");
res.add("Omaha");
res.add("Seattle");
}

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

UI Refresh script
A UI refresh script can be defined for a container (catalog or hierarchy). The script operates with a Refresh button on the single-edit and multi-edit pages of the interface.

After you click Refresh, the following steps occur:

1. On the client, all the changes that were made are taken and sent to the server.
2. On the server, for each entry in the set,
Retrieve the current copy of the item from the database.
Apply the Entry Build script.
Apply the pending changes that are transferred from the client.
Apply the Default Value rules and the Non-persisted Value rules.
Apply the UI Refresh script.
Add the updated entry to a set to be sent to the client.
3. On the server, return the set of updated entries to the client.
4. On the client, render the data on the screen.

For single-edit, the set of entries is the entry that is being edited. For multi-edit, the set of entries is those that are currently selected. The UI refresh script runs after all of
the other scripts; thus, all the attributes have their most current values.
In single-edit, the UI Refresh script is applied to the displayed entry when any attribute's value changed. In multi-edit, Refresh is disabled unless a row is selected,
and the UI Refresh script is applied only to the selected entries (rows) that contain attributes whose values changed, in descending row order.

Specifying the UI Refresh script


You can define a UI Refresh script for a container. To do so, you must upload the text of the script to the docstore, through a specific directory and a script name of your
choice, and define the script name to the container. For a catalog, the directory is /scripts/catalog and for a hierarchy it is /scripts/category_tree. To define the
script name to the container, use the script operation setContainerAttribute with the key UI_REFRESH_SCRIPT_NAME, and specify the name but not the directory,
as illustrated in the following example script (where the script name is ui_refresh.wpcs):

var container = getCtgByName("CATALOG");


var attrs = [];
attrs.add("ui_refresh.wpcs");
container.setContainerAttribute("UI_REFRESH_SCRIPT_NAME", attrs);
container.saveCatalog();

Refresh script contents


A UI refresh script can contain any script operations. The entry on which script operates is available through the implicit variable entry, and in general the set of implicit
variables is the same as that for a Post-save script.

If a UI refresh script is used to change the values of attributes in the entry, the order of those changes is completely under the control of the script's author. If there are
transitive dependencies (attribute A depends on attribute B, which depends on attribute C), the changes must be made in the correct order (in this example, C, then B,
followed by A).

Example script for a catalog that is defined with spec that has two String attributes A and B:

var val_B = entry.getEntryAttrib("spec/B");


entry.setEntryAttrib("spec/A", "The value of B is: " + val_B);

IBM Product Master 12.0 Fix Pack 8



Operating Systems: AIX, Linux, and Windows (Workbench only)

Value rule script


You can write a value rule for each node of a multi-occurring attribute such that a value corresponding to each multi-occurring attribute is generated.

By default, the same value is calculated for every node in a multi-occurring attribute according to the value rule. The following procedure describes how to avoid that and
write a value rule that calculates the value for each corresponding occurrence of the multi-occurring attribute.
Suppose a multi-occurring attribute (Price) has the following structure:

SpecParts
Price
PartName
Cost
Discount
Wholesaleprice

If "Price" occurs twice as follows,

Price(0)
Wheel
32
2
WholesalepriceA
Price(1)
Tyre
98
6
WholesalepriceB

Suppose that the following value rule script is written on Wholesaleprice:

cost = item.getCtgItemAttrib("SpecParts/Price/Cost") ;
discount = item.getCtgItemAttrib("SpecParts/Price/Discount");
res = cost*(100-discount)/100;

The value rule script repeats the same value of Wholesaleprice for every occurrence of "Price", which is incorrect because Wholesaleprice is derived from the
calculation of cost and discount. For every occurrence of "Price", the calculated value of Wholesaleprice must be different.

Alternatively, to route through the multi-attribute tree and retrieve and set values through EntryNodes, you might set the following script as a value rule on
Wholesaleprice:

pnode = entrynode.getEntryNodeParent();
wnode = pnode.getEntryNode("/Cost");
mnode = pnode.getEntryNode("/Discount");
cost = wnode.getEntryNodeValue("/Cost");
discount = mnode.getEntryNodeValue("/Discount");
// Compute the occurrence-specific value, as in the earlier rule
res = cost*(100-discount)/100;

This script returns the correct values for WholesalepriceA and WholesalepriceB, because each depends on the corresponding values of cost and discount for that occurrence.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Query all lookup table names through scripts


You must use SQL queries to retrieve this information.

The following script queries the names of all defined lookup tables from the product database tables and returns them in an array. You can use this code sample in the
script sandbox. The defined function getAllLookupTableNames also can be part of a trigger script.

// ######################################################
///@brief getAllLookupTableNames() - returns the name of all lookup tables in a hash map
///@return hashmap "hmAllLookupTables" containing strings with names of existing lookup tables
// ######################################################
function getAllLookupTableNames() {
var strCmpName = getCompanyCode();
var strSQLquery = "select CTG_NAME from TCTG_CTG_CATALOG where CTG_TYPE = 'LOOKUP_TABLE' and CTG_COMPANY_ID = (select
CMP_COMPANY_ID from TSEC_CMP_COMPANY where CMP_COMPANY_NAME = '" + strCmpName + "')";
var hmLookupTables = [];

var dbContext = getWPCDBContext();


var connection = dbContext.getWPCDBConnection();
var rsLookupTables = connection.executeQuery(strSQLquery);
connection.commit();
while(rsLookupTables.next()){
hmLookupTables.add(rsLookupTables.getColumnAt(1));
}
dbContext.releaseWPCDBConnection(connection);
return hmLookupTables;
}

// main script which calls function "getAllLookupTableNames()"

var i = 0;
var hmAllLookupTables = getAllLookupTableNames();
var iNumberOfLookupTables = hmAllLookupTables.size();

if( iNumberOfLookupTables == 0 ) {
out.writeln("No lookup tables have been defined in this company");
} else {
out.writeln(iNumberOfLookupTables + " lookup tables have been defined in this company:");
for( i = 0; i < iNumberOfLookupTables ; i++) {
out.writeln("Lookup table # " + i + ": \"" + hmAllLookupTables[i] + "\"");
}
}

The following sample output is displayed:

8 lookup tables have been defined in this company:


Lookup table # 0: "LDAP Properties"
Lookup table # 1: "Group"
Lookup table # 2: "Currency"
Lookup table # 3: "BrandName"
Lookup table # 4: "ItemType"
Lookup table # 5: "Language"
Lookup table # 6: "PriceCode"
Lookup table # 7: "CustomerStatus"

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Delete a non-multi-occurring attribute group through scripts


Using a script for an item, you can delete a non-multi-occurrence attribute group.

Since there are no multi-occurrences, you need to delete the only occurrence by referring to the occurrence number (#0). Instead of calling deleteEntryNode() on the
non-multi occurring attribute group, it should be called on the first occurrence node of that attribute group. For example, if the spec has the following definition for the
optional attribute group that is not multi-occurring:

Attribute_Group (Maximum Occurrence=1, Minimum Occurrence=0)


- Grp_Member1 (Maximum Occurrence=1, Minimum Occurrence=0)
- Grp_Member2 (Maximum Occurrence=1, Minimum Occurrence=0)
- Grp_Member3 (Maximum Occurrence=1, Minimum Occurrence=0)

The sample code to delete the attribute group is as follows:

var ctg = getCtgByName("Test_Catalog") ;


var entry = ctg.getEntryByPrimaryKey("C0001");

entry.getRootEntryNode().getEntryNode("Test_Catalog_Spec/Attribute_Group#0").deleteEntryNode();
entry.saveCtgItem();

Replace "Test_Catalog", "C0001", and "Test_Catalog_Spec/Attribute_Group" with your own catalog name, primary key, and catalog spec name/attribute group
name.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Retrieve a select group of items through scripting


Using a script for an item, you can retrieve a select group of items. For example, only the items under a specific category or having a specific attribute value to be displayed
or exported.

Use a selection to specify a set of items to display or export. Create a selection on the catalog from the Selections Console. Enter a selection criterion that associates the
selection with a specific category or a specific attribute value. Only the required items are then retrieved from the catalog. Use the selection (instead of the whole catalog) in the script.
The selection enables you to work with a select group of catalog items.
The following script operations are used with selections:

getHierarchyNodeSetForSelection - Return the hierarchy nodes in a selection as a HierarchyNodeSet


getItemSetForSelection - Return the items in a selection as an ItemSet
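
The following minimal sketch shows the general shape of this approach. The selection name, the getSelectionByName() lookup, and the ItemSet iteration idiom are assumptions for illustration only; check the script operation reference for the exact operations and signatures in your release.

// Minimal sketch; "MySelection", getSelectionByName(), and the next()
// iteration idiom are assumptions - verify against the script reference.
var sel = getSelectionByName("MySelection");
var itemSet = getItemSetForSelection(sel);
var item = itemSet.next();
while (item != null)
{
   out.writeln("Item: " + item.getPrimaryKey());
   item = itemSet.next();
}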

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Script expressions
Script expressions are small scripts that are attached to an attribute either in a spec or in a mapping.



The following list shows the types of script expressions:

String enumeration rules
Used to create a list of valid values for attributes of type string enumeration.
Value rules
Used to calculate the value of an attribute.
Validation rules
Used to validate that the value that is provided for a field is valid.
Mapping expressions
In an import or export, used to populate a catalog attribute (import map) or a destination attribute (export map).
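
For example, a value rule expression is a scriptlet whose result is assigned to the implicit variable res. A minimal sketch, reusing the Wholesaleprice calculation that appears in the Value rule script topic (the spec paths are assumptions for illustration):

// Minimal value rule expression; the spec paths are illustrative.
cost = item.getCtgItemAttrib("SpecParts/Price/Cost");
discount = item.getCtgItemAttrib("SpecParts/Price/Discount");
res = cost*(100-discount)/100;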

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Scriptlets
Scriptlets are small scripts that define business rules. All scripts can be edited and viewed in the Scripts Console.

To access the Scripts Console, click Data Model Manager > Scripting > Scripts Console.

When you write scriptlets, there is a limit on the number of characters. Thus, when you write large scriptlets, it is best to use function calls from libraries. Follow these
guidelines when you write scriptlets:

Scriptlets are created, modified, and saved in the Scriptlet Editor window when you work with:
Value and validation rules
String enumeration rules
Mapping expressions
Title and correct header are not necessary
Appropriate documentation should be used throughout a scriptlet
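
For example, a scriptlet can stay within the character limit by delegating to a library function; the #include directive and the lib_add() library function are described in Getting script by path, and the library path here is an assumption.

#include "/scripts/triggers/My Library"
// Keep the scriptlet body small by delegating to the library function.
res = lib_add(1, 2);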

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Language constructs
Language constructs are the basic code of the IBM® Product Master scripting language.

Functions and variables


Functions and variables refer to the variable that is being called out to in the script.
Looping and conditionals
Looping means that the same line of code is repeated. Conditionals means that you can have a line of code where a variable has a condition of whether it is true.
HashMaps and arrays
HashMaps and arrays are two types of language constructs.
Calling a method
Calling a method refers to a block of code that takes some parameters and returns a value.
Local variables
A local variable is a variable that is defined inside a function and with the word var. If no var exists, the global variable with the same name is used.
Global variables
A global variable is a variable that is defined in the main script.
Declaring variables and functions
To declare a variable, use var before the variable name.
Getting script by path
You can use this script operation to get a script from the docstore or in a file system. Most of the time, this script operation is used together with
getFunctionByName() and invoke() to start a function in a trigger script.
Void definition
Void is not used within scripts. You use void in the definition of the script operations in the reference information.
New definition
New is used as semantic in the script language. Some scripting operations require that a new keyword is used just before the script operation is called. Use of the
new keyword in other places in the script is invalid.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Functions and variables


Functions and variables refer to the variable that is being called out to in the script.

If you declare var c = 10; inside the function, then c inside the function has just that scope. Otherwise, the function gets any variable that is defined outside of it.

Variable scope
The following script is an example of a variable scope:

var c = 10;

function add(a, b)
{
// var c = 5;
c++;
out.writeln("c is " + c);
return a + b;
}

var sum = add(1,2);


out.writeln("Sum is " + sum + ", c = " + c);

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Looping and conditionals


Looping means that the same line of code is repeated. Conditionals means that you can have a line of code where a variable has a condition of whether it is true.

Looping and conditionals are a part of language constructs. For example,

var i;
for (i = 0; i < 10; i++)
{
out.writeln(i + ": hello world");
}

The line for (i = 0; i < 10; i++) is an example of looping. When looping, you need to ensure that a condition tells the loop when to end. For example,

var i = 1;
while ( i < 10)
{
out.writeln(i + "");
if (i == 3)
{
out.writeln("breaking");
break;
}
i++;
}
out.writeln("i at loop completion is: " + i );

The prior script uses a break to exit the loop. To skip the rest of the current iteration and go back to the beginning of the loop for the next iteration, use continue:

var i = 1;
while ( i < 10)
{
i++;
if (i == 3)
{
out.writeln("continuing (not printing i)");
continue;
}
out.writeln(i + "");
}

Another conditional language construct is the "else" statement. If something is conditional, you need to decide whether you want to run the next object. The "else" stands
for otherwise; meaning, if you are hungry, you eat dinner; otherwise, you watch television. If you do not have the "else" in the code, the script would mean that you eat
dinner and then watch television. The following is an example script of an if-else statement:

var hungry = true;

if (hungry)
{
out.writeln("Where's dinner?");
}
else
{
out.writeln("not hungry now!");
}

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

HashMaps and arrays


HashMaps and arrays are two types of language constructs.

A hashmap sets up the mapping of keys to values. For example,

var map = [];


map["a"] = "Apple";
map["b"] = "Boy";
out.writeln ("map = " + map);
out.writeln ("map[a] = " + map["a"] );
map.remove("a");
out.writeln ("map = " + map);

An array is a list of values. For example,

var array = [];


array.add("a", "b", "c");
out.writeln ("array = " + array);
out.writeln ("array[0] = " + array[0]);
array.remove(0);
out.writeln ("array = " + array);

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Calling a method
Calling a method refers to a block of code that takes some parameters and returns a value.

The following script shows two parameters, (a, b) being called from the code.

function add(a, b)
{
return a + b;
}

var sum = add(1,2);


out.writeln("Sum is " + sum);

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Local variables
A local variable is a variable that is defined inside a function and with the word var. If no var exists, the global variable with the same name is used.

In the following example, var c = 5; in the function local_var() is the local variable.

var c = 10; //this is the global variable

function global_var()
{
//this takes the global var, not local from local_var()
out.writeln("===inside function global_var(), c is " + c);
}

function local_var()
{
var c = 5; //this is the local variable
c++;
out.writeln("inside function local_var(), c is " + c);
global_var();
}

out.writeln("in main script, initial value of c = " + c);


local_var();
out.writeln("in main script, new value of c = " + c);

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Global variables
A global variable is a variable that is defined in the main script.

In the following example, var c=10; in the main script is a global variable.

var c = 10; //this is the global variable

function global_var()
{
//this takes the global var, not local from local_var()
out.writeln("===inside function global_var(), c is " + c);
}

function local_var()
{
var c = 5; //this is the local variable
c++;
out.writeln("inside function local_var(), c is " + c);
global_var();
}

out.writeln("in main script, initial value of c = " + c);


local_var();
out.writeln("in main script, new value of c = " + c);

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Declaring variables and functions


To declare a variable, use var before the variable name.

To declare a variable (globally or locally), use var in front of the variable name. IBM® Product Master variables can take any kind of supported objects such as string,
number, catalog, and collaboration area. However, you do not need to specify the object type when you declare a variable. The variable is assigned an object type when it
is assigned. Following are four examples of declaring variables:

//for number and String


var intVar = 5;
var strVar = "abc";

//for objects Catalog and CollaborationArea


var ctgVar = getCtgByName("My Catalog");
var caVar = getColAreaByName("My CA");

//for HashMap
var hmVar = [];
hmVar["CA"] = "California";
hmVar["NY"] = "New York";

//for an array
var arrayVar = [];
arrayVar[0] = "item1";
arrayVar[1] = "item2";
// or: arrayVar.add("item3");

To declare a function inside a script, use the word function before the function name, then list the parameters that are required for this function. For example, to declare
a function that is named add, which takes in two numbers and returns the sum:

function add(a, b)
{
return a + b;
}

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Getting script by path


You can use this script operation to get a script from the docstore or in a file system. Most of the time, this script operation is used together with getFunctionByName()
and invoke() to start a function in a trigger script.

The directive #include can also be used in an IBM® Product Master script to include another script. This directive must be the first line in the calling script. The included
script can be from the docstore or from the file system. In the file system, the script path must start with file://. You can use this directive instead of
getScriptByPath().

For example, the following trigger script is named My Library, in which a function called lib_add() is defined in the following script:

function lib_add(a, b)
{
return a + b;
}

In the script sandbox, you can start the function lib_add() by using the following sample script:

var MY_LIBRARY_PATH = "scripts/triggers/My Library"; //doc store path
//var MY_LIBRARY_PATH = "file://local/qa2/imports/scripts/myLibrary.wpcs"; //file system path
var MY_FUNCTION_NAME = "lib_add";
var script = getScriptByPath(MY_LIBRARY_PATH);
var func = script.getFunctionByName(MY_FUNCTION_NAME);
var result = func.invoke(1,2);
out.println("result: " + result);

In the script sandbox, you can start the function lib_add() by using #include as in the following sample script:

#include "/scripts/triggers/My Library"out.println("sum : " + lib_add(1,2));

If the script is in the file system at local/qa2/imports/scripts/myLibrary.wpcs, you can use the following sample script:

#include "file://local/qa2/imports/scripts/myLibrary.wpcs"out.println("sum : " + lib_add(1,2));

The output for these scripts is sum : 3.


Void definition
Void is not used within scripts. You use void in the definition of the script operations in the reference information.

Void can indicate one of the following:

The product that is returned as a result of calling the script operation should not be used in a control flow statement, or
There is no product that is returned by the call.

For example, a script operation that has void as its return type in the prototype information should not be called on the right side of an assignment or within a
conditional expression.


New definition
The new keyword is part of the script language semantics. Some script operations require that the new keyword is used just before the script operation is called. Use of
the new keyword in other places in the script is invalid.
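For example, the XmlDocument script operation, which also appears later in this documentation, requires the new keyword. A minimal sketch, assuming the xmlDoc variable holds an XML string:

var doc = new XmlDocument(xmlDoc); //new is required directly before the script operation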


Guidance on transactions and exception handling


You can use the useTransaction() and startTransaction() operations to run scripting code inside a transaction. Running related code inside a transaction
gives you the ability to ensure atomic execution of that code. You can also use transactions in long-running jobs, such as import or report jobs, to periodically commit
the changes to the database.

When you use transaction-related operations, take adequate care to ensure that the script code does not unintentionally disrupt an already active transaction.
Use of the useTransaction() script operation commits an already active transaction. Unless the intention is to commit the existing transaction, do not
start useTransaction() if a transaction exists.

Using startTransaction() results in a rollback of an already active transaction only if an error occurs. In such a case, the exception must be propagated to the owner
of the active transaction.

You can use the inTransaction() script operation to programmatically determine whether the code is running within an active transaction, to ensure that it does not
disrupt an existing transaction.

In addition, IBM® Product Master prevents disruption of the active transaction by scripts to ensure consistency of the Product Master operations that rely on those
transactions. For more information, see Limitations on using transactions within extension points.

useTransaction and startTransaction script operations


You use the useTransaction and startTransaction script operations to run operations within a transaction.
Nesting transactions
Although the syntax supports nested transactions, IBM Product Master does not support nesting of transactions.
Catching errors in a transaction
With the catchError script operation, you can trap the underlying issues and prevent an error that causes the entire script to fail.



Guidelines for exception handling in scripting
You can use the catchError() script operation to catch exceptions in the scripting code, and take the required action that is based on the exception. You can
choose to handle the exception programmatically or rethrow the exception to calling code.
Sample code for exception handling
The following code is a sample for exception handling to guide you on appropriate handling of exceptions.


useTransaction and startTransaction script operations


You use the useTransaction and startTransaction script operations to run operations within a transaction.

useTransaction
Runs the statements in a transaction. Rolls back if an error occurs. Commits any already open transaction before it starts a new one.
startTransaction
Runs the statements in a transaction. Rolls back if an error occurs. Does not start a new transaction if one is already open.

startTransaction is similar to useTransaction; however, if startTransaction discovers that it is already within a transaction, it does not interfere with the ongoing
transaction and merely runs the wrapped instructions. useTransaction, by contrast, commits the currently open transaction.

If an error occurs within a startTransaction block, it initiates its own rollback and passes the exception up. If the startTransaction is nested inside another
startTransaction, or even a useTransaction, the exception continues to chain upwards. Eventually the place where IBM® Product Master started the script is
aware that the script failed, and can take appropriate action.

In addition, you can use the inTransaction() script operation to determine whether the script code is running within a transaction.
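For example, the following is a minimal sketch (item1 and item2 are assumed to hold catalog items) that saves two items as one unit of work without disrupting any transaction that the caller owns:

startTransaction
{
//both saves commit together; if a transaction is already open,
//the saves simply run within that transaction
item1.saveCtgItem();
item2.saveCtgItem();
}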


Nesting transactions
Although the syntax supports nested transactions, IBM® Product Master does not support nesting of transactions.

The following code fragment might be interpreted as an attempt to save all four entities within the same transaction, and to use a nested transaction to attempt to save
the last two. The real interpretation of this code fragment is shown under "Product Master interpretation of the example".
Table 1. Example of a nested transaction

Nested transaction example:

useTransaction{
save entity to database
save entity to database
useTransaction{
save another entity to database
save another entity to database
}
}

Product Master interpretation of the example:

commit any existing work, and start a new transaction (tx#1){
save entity within tx#1
save entity within tx#1
commit tx#1, and start a new one (tx#2){
save entity within tx#2
save entity within tx#2
}(commit tx#2, create a new transaction (tx#3) to replace it)
}(commit tx#3, create a new transaction (tx#4) to replace it)
Although the effect is obvious within such a small script fragment, it becomes less obvious when you consider the use of scripting libraries where the following code can
be written:
Table 2. Example of attempting to nest a transaction through a library

Attempt to nest through a library:

useTransaction{
save entity to database
save entity to database
call scripting library function 'saveAnother2Entities'
}

Scripting library 'saveAnother2Entities':

function saveAnother2Entities{
//make sure we save the 2 entities in
//a transaction!
useTransaction{
save entity to database
save entity to database
}
}
This example is identical to the previous one, in that a call to useTransaction was made inside an existing useTransaction block, except the usage is now hidden.
Both approaches cause the original transaction to be prematurely committed. When you attempt to ensure that a section of script is within a transaction and you are
unsure if you are already in a transaction, use startTransaction instead.

With the addition of savepoint support to the Java™ API, it is possible to perform partial rollbacks within a transaction and achieve many of the wanted results of nested
transactions. This feature is intended for use from Java code with Java API, but because Java API methods can also be accessed from scripts by using reflection, the
capabilities are also available to the script developer. Consult the Java API documentation to learn how to use the savepoint methods and how to use reflection to call
them from scripts.




Catching errors in a transaction


With the catchError script operation, you can trap the underlying issues and prevent an error that causes the entire script to fail.

Do not use catchError without a matching throwError statement. Also, do not use startTransaction and catchError together. The rollback of the transaction occurs
only if the end brace of the useTransaction sees the exception. If the exception is caught and not rethrown in the block, then the rollback does not occur and there is
the potential of corrupting the database.

When a catchError block is used around a transaction operation, such as an entity save, errors during the save can go unhandled. You need to either
avoid including such operations within a catchError block, or be sure to rethrow the error after resolving it yourself. It might be simpler to refactor the script logic to
avoid including the operations within it.
Table 1. Example of a bad catchError usage

Example of a bad catchError usage:

useTransaction{
catchError(e){
saveCtgItem(...)
}
if(e!=null)
{
... handle error ..
}
}

Example of a throwError use:

useTransaction{
catchError(e){
saveCtgItem(...)
}
if(e!=null)
{
... handle error ..
throwError(e);
}
}
As an alternative, if you are controlling the transaction, you can change the order and enclose the useTransaction within the catchError. For example,

catchError(e){
useTransaction{
saveCtgItem(...)
}
}
if(e!=null)
{
... handle error ..
}

Transaction-sensitive operations
Five script operations in IBM® Product Master are a possible cause of disruption (for example, premature commits or rollbacks) to user transactions.
Workflow operations and transactional implications
The workflow engine within IBM Product Master is a separate JVM that communicates with the other parts of Product Master through workflow events that are
posted to the database.


Transaction-sensitive operations
Five script operations in IBM® Product Master are a possible cause of disruption (for example, premature commits or rollbacks) to user transactions.

Use care with the following script operations. Each script operation can affect transactional behavior and the ability to roll back.

deleteCatalog
deleteCategoryTree
deleteLookupTable
importEnv
saveCtgItem

Note: saveCtgItem is disruptive only when used within a batching import scenario.


Workflow operations and transactional implications


The workflow engine within IBM® Product Master is a separate JVM that communicates with the other parts of Product Master through workflow events that are posted to
the database.

Because the workflow engine is a separate JVM, transactions have the following implications:

A transaction can exist only within one JVM; thus, any work that is performed by the workflow engine cannot form part of a unit-of-work that spans another JVM.
The workflow engine actions an event only if it can see it. For the workflow engine to see an event, the event must be posted to the database. If the event posting
forms part of a transaction, the event is not seen until the unit-of-work is committed.

Certain workflow events are enabled to run "inline", meaning that they bypass the requirement to post to the database, and are processed on the current JVM within any
current transaction.


Guidelines for exception handling in scripting


You can use the catchError() script operation to catch exceptions in the scripting code, and take the required action that is based on the exception. You can choose to
handle the exception programmatically or rethrow the exception to calling code.

It is recommended that you handle the exceptions in the following manner:

If the exception message string indicates a failure from a business error that can be programmatically handled by the script implementation, then you can choose
to handle it.
If the exception is received from starting an IBM® Product Master operation, and you determine the root cause of the exception to be an SQLException, then you
must ensure that the active transaction is rolled-back.
If the scripting code that results in the exception is contained within a useTransaction() block, then the exception must be thrown beyond the closing brace
of useTransaction() to ensure rollback of the current transaction.

catchError(e, eObj)
{
useTransaction()
{
//your code
}
}
if (e != null)
{
//it is safe to handle the exception, since the transaction has been rolled-back
}

If the scripting code that results in the exception is contained within a startTransaction() block, then the exception must be rethrown to the calling code if
an active transaction existed when startTransaction() was started.

var existingTxn = inTransaction();


catchError(e, eObj)
{
startTransaction()
{
//your code
}
}
if (e != null)
{
if ((existingTxn == true) && isSQLException(eObj))
{
throwSQLException(eObj)
}
// otherwise it is safe to handle the exception since the transaction owned by us has been rolled-back
}

If the scripting code that results in the exception is not contained within a transaction block, then the exception must be rethrown to the calling code if an active
transaction existed when the code was started.

var existingTxn = inTransaction();


catchError(e, eObj)
{
//your code
}
if (e != null)
{
if ((existingTxn == true) && isSQLException(eObj))
{
throwSQLException(eObj)
}
}

Since the throwError script operation does not retain the root-cause exception, you can prefix a string to indicate to the calling code that it is an SQL error.

function throwSQLException(e)
{
throwError("SQL ERROR OCCURRED: " + e);
}

The method isSQLException(eObj) can be implemented by creating a piece of Java™ code that performs instanceof on the exception object to detect whether it is
an SQLException. Alternatively, the following code can be used to return the error message from the root-cause exception object.

var method = createJavaMethod("java.lang.Throwable", "getMessage");
var errorMessage = runJavaMethod(eObj, method);


The error message can be inspected for an identifying substring, such as "SQL Error", to determine whether the causative exception is an SQLException.


Sample code for exception handling


The following code is a sample for exception handling to guide you on appropriate handling of exceptions.

Sample scripting based import


Sample code for exception handling in a scripting based import script:

var parser = newCSVParser(in);


var bDone = false;
while(!bDone)
{
var attrs = parser.splitLine();
bDone = (null == attrs);
if (!bDone)
{
catchError(e, errObj)
{
// The user-written function getNewItemUsingAttrs()
// populates an item object from the attrs
var item = getNewItemUsingAttrs(attrs);
item.saveCtgItem();
}
if (errObj != null)
{
// The user-written function isCriticalError() uses Java to interrogate errObj
if (isCriticalError(e, errObj))
{
throwError("Critical error occurred in import job: " + e);
}
}
}
}

Sample scripting based report


Sample code for exception handling in a scripting based report script. This sample applies to a report script that is written to import items:

var bDone = false;


var numItemsInTxn = 0;
var numItemsPerTxn = 200;
while(!bDone)
{
catchError(e, errObj)
{
useTransaction
{
numItemsInTxn=0;
while (!bDone && numItemsInTxn < numItemsPerTxn)
{
// The user-written function getNextItemToSave()
// returns the next item object to persist
var item = getNextItemToSave();
bDone = (null == item);
if (!bDone)
{
item.saveCtgItem();
numItemsInTxn++;
}
}
}
}
if (errObj != null)
{
// The user-written function isCriticalError() uses Java to interrogate errObj
if (isCriticalError(e, errObj))
{
throwError("Critical error occurred in import job: " + e);
}
}
}

Compiled scripts
You can run scripts in any of these modes: compiled_only, not_compiled, or compiled_if_available. The compiled_only mode applies more restrictions on
writing scripts than the other modes; however, it also makes the scripts run faster.

The script compiler supports user-defined functions with a mismatch in the total number of arguments. For example, consider the following function:

functionA(arg1, arg2, arg3) {...}

If a script calls the function with only two arguments, IBM® Product Master passes a null for the third argument.

Settings for Product Master scripting mode


Depending on whether you want to use compiled or non-compiled scripts, you can set the compile mode in the IBM Product Master common.properties file.
Common script compilation errors
When you use compiled scripts, a script can be saved in the script console only if it compiles correctly. If there is an error, check the svc.out file in the appsvr logs
directory for the full Java output and error message.
Common runtime problems
The four main common runtime problems are: invalid number of arguments in a function call, invalid argument type, mismatched argument types in comparisons,
and XML parsing.
Resolve runtime problems
To resolve runtime errors on the application server, see the file svc.out in the appsvr log directory. You can examine the exception.log and default.log files to resolve
runtime problems.
Debugging scripts
You can use different tools to help you debug scripts.


Settings for Product Master scripting mode


Depending on whether you want to use compiled or non-compiled scripts, you can set the compile mode in the IBM® Product Master common.properties file.

Product Master scripting is set in $TOP/etc/default/common.properties by using:

script_execution_mode=script_mode

Where script_mode can be either compiled_only, compiled_if_available, or not_compiled.

compiled_only
The entire script is compiled before execution. The script is run only if the compilation passed. To ensure proper performance, use the compiled_only option.
compiled_if_available
The entire script is compiled first, and if the compilation fails, the script is run in the not_compiled mode.
Attention: The compiled_if_available mode adversely impacts performance. However, if this mode must be used, be aware that a compiled script
cannot start a function from a non-compiled script.
not_compiled
The script is run without being compiled as a whole before execution.
Important: To use this mode, set the following in the common.properties file:

script_execution_mode=not_compiled

Script mode can be overridden by including the following directive at the beginning of the script:

//script_execution_mode=script_mode

Attention: Do not use the directive //script_execution_mode=not_compiled at a script level due to significant performance degradation.
Script mode cannot be overridden if the mode is set to not_compiled in common.properties.
To check the current script mode, run the following method.

getScriptExecutionMode()
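For example, a minimal sketch of a script that overrides its own execution mode and then reports the active mode (the out writer is the standard script output):

//script_execution_mode=compiled_only
var mode = getScriptExecutionMode();
out.writeln("script execution mode: " + mode);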


Common script compilation errors


When you use compiled scripts, a script can be saved in the script console only if it compiles correctly. If there is an error, check the svc.out file in the appsvr logs directory
for the full Java™ output and error message.

The following examples show some common compilation errors.

Error: A break or return statement inside a forEach*Element() block does not compile due to an "unreachable code" error.

Incorrect script:

forEachItemSetElement(itemSet, item) {
return item;
}

Correct script:

forEachItemSetElement(itemSet, item) {
if (item != null) {
return item;
}
}

Error: If you return a value from a function, you need to return a value in every case. The incorrect script example does not return a
value if an exception happens in the catchError block.

Incorrect script:

function sample() {
var e = null;
catchError (e)
{
// do something...
return "a string";
}
if (e != null) {
reportError(...);
}
}

Correct script:

function sample() {
var e = null;
catchError (e)
{
// do something...
return "a string";
}
if (e != null)
{
reportError(...);
}
return null;
}
For major compilation issues, you can look at the generated Java files. The generated Java files are in a directory that is specified by the tmp_dir parameter in the
common.properties file. The Java file naming convention includes the script name and a generated sequence, for example, MyScript12345.java.

The full path of the script from the docstore is placed as a comment at the start of each generated Java file. If you map the docstore to the file system, you can
do a recursive grep to determine which Java file matches a script.


Common runtime problems


The four main common runtime problems are: invalid number of arguments in a function call, invalid argument type, mismatched argument types in comparisons, and XML
parsing.

Invalid number of arguments in a function call


Compiled IBM® Product Master scripts must call functions with the exact number of arguments that they take. You cannot rely on defaulting to null for non-specified
parameters.
Invalid argument type
The wrong type of argument is being passed to a function. For example, a HashMap argument is being passed to a function that requires a String. This problem also
happens if Product Master cannot infer the type correctly; you need to use the checkString() script operation.
Mismatched argument types in comparisons
If you do not have the same data type on both sides of a conditional operator such as ==, >, <, and <= then the expression evaluates to false. This evaluation does
not result in an error message, but the corresponding code will not be run. For example, the following script will not work:

var id = "12345";

var my_id = item.getEntryAttrib(path to some attribute that is a sequence) ;

if ( id == my_id) {

// statements that need to be executed but will not be

The solution in this case is to explicitly use:

var id = "12345" ;

var my_id = checkString(item.getEntryAttrib(//some attribute that is a sequence),) ;

if ( id == my_id) {

// statements to be executed

IBM Product Master 12.0.0 627


}

XML parsing
The following code seems to work in non-compiled mode and even in compiled mode when run from the sandbox:

new XmlDocument(xmlDoc) ;

forEachXmlNode("item") {
}

However, in compiled mode, if this code is used in a script library function that is started by multiple users, then the statements inside the forEachXmlNode script
operation block do not get run. There is no error message. The workaround is to use the following code:

var doc = new XmlDocument(xmlDoc);

var xmlNode;
forEachXmlNode(doc, "item", xmlNode) {
}


Resolve runtime problems


To resolve runtime errors on the application server, see the file svc.out in the appsvr log directory. You can examine the exception.log and default.log files to resolve
runtime problems.

Using the Java™ file naming convention you can identify which script has failed. The error message also identifies the line number in the generated Java file. To resolve the
problem, view the generated Java file and scroll to the line where the runtime error occurred. The generated Java code includes script code as comments every few lines.
For example, consider the following portion of code from a sample-generated Java file.

// function checkIfPartyPartyTypeExist(party, partyType)

public static Object ScriptFunction__checkIfPartyPartyTypeExist(HashMap hmContext,
Object party, Object partyType) throws Exception

// var bRet = false;

Object bRet = (java.lang.Boolean) Boolean.FALSE;

// var rootEntry = party.getRootEntryNode();

Object rootEntry = GenGetRootEntryNodeOperation.execute(hmContext, (IEntry) party);

// var entryNodes = rootEntry.getEntryNodes(getCatalogSpecName() + "/Party Types/Party Type Code");

Object entryNodes = GenGetEntryNodesOperation.execute(hmContext, (EntryNode) rootEntry,
(String) BinaryOperation.execute(BinaryOperation.PLUS,
ScriptFunction__getCatalogSpecName(hmContext),
"/Party Types/Party Type Code"));

// var entryNodesSize = entryNodes.size();

Object entryNodesSize = (java.lang.Integer) GenSizeOperation.execute(hmContext, (HashMap) entryNodes);

The comments (the lines that begin with //) are code from the corresponding script. This makes it easy to identify where failures occurred in the script.


Debugging scripts
You can use different tools to help you debug scripts.

When debugging scripts, use the following tools:

Use log4j for logging and debugging. The tool allows you to control which output levels are logged and where those outputs go. There are five output
levels: debug, fatal, error, warn, and info. The following functions are available:
Logger getLogger(String), Logger::loggerDebug, Logger::loggerInfo, Logger::loggerWarn, Logger::loggerError, and
Logger::loggerFatal
Note: All user log statements are prefixed with com.ibm.ccd.wpc_user_scripting.
Define routes in file $TOP/etc/default/log4j2.xml to route the logging and debugging messages to various log files.



For application-related issues, all exceptions are captured in the exception.log file in the appsvr logs directory ($TOP/logs/appsvr_server_name/exception.log).
For job schedule issues (when a scheduled job fails on run), all exceptions are captured in the exception.log file in the scheduler logs directory
($TOP/logs/scheduler_server_name/exception.log).
For workflow-related issues, all exceptions are captured in the exception.log file in the workflowengine logs directory
($TOP/logs/workflowengine_server_name/exception.log).
To get more logging by the system, in the <Logger></Logger> tag of $TOP/etc/default/log4j2.xml, set priority="debug".
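For example, a minimal sketch of log4j-based logging from a script (the logger name MyScript is illustrative; the statements appear in the logs under the com.ibm.ccd.wpc_user_scripting prefix):

var logger = getLogger("MyScript");
logger.loggerDebug("starting item processing");
logger.loggerError("failed to save item");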


Predefined scripts
You can create predefined scripts for users to use. Predefined scripts are simple scripts.

For example, the search-and-replace macro script is a predefined script that allows users to replace an old pattern in an attribute with a new one.

Export script example


The following script is an example of a typical export script. It was generated by IBM Product Master based on a destination file spec that was defined for export.


Export script example


The following script is an example of a typical export script. It was generated by IBM® Product Master based on a destination file spec that was defined for export.

var file = createOtherout("path1/path2/MyExport.csv");

forEachCtgItem("Training Catalog", item)
{
var primaryKey = item.getCtgItemPrimaryKey();
var description = item.getEntryAttrib("Training Catalog Spec/Description");
var name = item.getEntryAttrib("Training Catalog Spec/Name");

file.writeln(buildCSV(primaryKey, description, name));
}
file.save("path1/path2/MyExport.csv");

For all items selected for export (either all items or only the items that correspond to the optional item selection), the script example is read as:

1. Get the values for the attributes that are mapped in the mapping screen (and compute any necessary mapping expression).
2. Validate the resulting values against the optional constraints that were put within the destination specification (including any minimum lengths and required fields).
3. If validation does not fail, then output the specified fields in a CSV file.


Scripting tips
You can use the IBM® Product Master workbench to write your scripts to ensure accuracy of the script syntax.

Read the following scripting tips and best practices:

Understanding error messages, for example, the following script:

var i;
while ( i < 10)
{
out.writeln(i + ": hello world");
i++;
}

This script provides the following error:

Exception:Script execution failed (java.lang.NullPointerException)


Exception:java.lang.NullPointerException at
WPCScriptSandboxPath12077873685880.run(WPCScriptSandboxPath12077873685880.java:28)

This error means that a variable was not defined before it was used, and all variables need to be defined before they are used. The correct script is:

var i = 0;
while ( i < 10)
{

out.writeln(i + ": hello world");
i++;
}

Use the startTransaction function, which creates a unit of work if none is current, and runs the contained operations within the current unit of work.
Control the changes to scripts by using a configuration repository tool such as Rational® ClearCase®, CVS, or Perforce.
If you must search for elements inside an array, use a HashMap instead of an array.
Use the forEachHmElement() operation when looping through a Hashmap.
Use the getCategoryCache() operation to get all categories.
Use the correct character encoding set when you load a data file.
Avoid using sendEmail() to debug the code.
Use selections for extracting selective product information.
Test for null whenever you use a get method for an object, for example, getSpecByName(). Ensure that the object returned is not null before you do
anything with it. The following example illustrates the usage:

var spec = getSpecByName("My Spec");

if(spec == null)
{
out.writeln("My Spec does not exist");
}
else
{
// do your processing on the spec here
}

Use checkString() and checkInt() to test all input.


Use the MVC design paradigm when you build applications with trigger scripts.
Use EntryNode::throwValidationError() in pre-processing and post-processing scripts to mark errors on individual attributes.
Use script libraries in pre-processing and post-processing scripts (or the validation framework) to do validations.
When you write a script that dumps output to more than one writer (for example, to the screen and to a file), use a Boolean variable to control where the output is
directed (see the sketch after this list).
Do not use reports as a way to load data because the reports lock the items until the report is completed. You can use imports to load data.
Do not use reports to export data if the exports manipulate and save items (such as setting status flags on the items). Always use exports.
Clean up older versions to reduce the size of the database and to improve performance. When you use imports and exports, a new version of the catalog is created
after each import or export.
Clean the docstore regularly. An excessively populated docstore (document store with many documents, regardless of the document size) results in poor import
performance.
When you enable debug logging in the log4j2.xml file, check the ipm.log for additional information on unexpected transactional commits and rollbacks. This information
might be of use if transactional disruption is suspected as a cause.
Implicit variables, for example:
in: The default reader for an import. It refers to the data source that was chosen during the creation of the import.
catalog: The destination catalog object that was chosen during the import creation.
Use script libraries, like the validation framework, to reduce the amount of custom logic within your import. This enables reuse of import components by other
scripts.
Use the useTransaction script function on the import or report. This commits all of the statements within the useTransaction brackets to the database in one
single transaction. This is useful if you need to back out a series of changes if something fails.
There can be only one useTransaction in a script. Script calls to functions from other scripts can cause problems if nested useTransaction blocks are
created. The transaction rolls back to the state before the first useTransaction block.
For example, your item model is divided into three catalogs of attributes and you want to cancel all updates if one catalog entry fails. You would place the
three catalog updates within a useTransaction block.
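The following is a minimal sketch of the writer-switch tip from the preceding list (the bToScreen flag, file path, and writeOutput() helper are illustrative):

var bToScreen = false;
var file = createOtherout("path1/path2/debug.txt");
function writeOutput(msg)
{
if (bToScreen)
{
out.writeln(msg); //write to the screen
}
else
{
file.writeln(msg); //write to the file
}
}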


JDBC script operations


You use the script API to drive the Java™ JDBC calls directly for script operations.

JDBC script operations are to be used for basic JDBC access. For more robust, scalable JDBC connectivity, use the IBM® Product Master interface. For example, basic JDBC
access includes opening a connection, writing a single piece of SQL, and then closing the connection. The SQL statements are exposed through the Product Master script
API. If this type of JDBC processing is required, use the Java script operations.

Three components in Product Master are supported only by script operations and are not supported in the user interface: JDBC, WebSphere MQ, and JMS.


Query language
You can use the IBM® Product Master query language to easily write queries that retrieve complicated Product Master Server specific data from Product Master Server
systems. The query language adopts the syntax of the Structured Query Language (SQL).

With familiarity with SQL, you can write the query language queries easily. The query language defines a set of objects that simplify the complicated relationships in a
Product Master Server data model so that you can create a user-oriented search query without dealing with the technical details in the Product Master Server data model.



Query language syntax
This information describes the syntax of the query language.
Notational conventions
The query language uses the [] notation for member data, dot notation for object joins, () notation for functions, and single quotation marks for strings.
Objects and terminals
IBM Product Master data model consists of data objects and terminal data values. The core objects in Product Master model are item and category, which are
defined by spec object. A collection of item objects is a catalog. The hierarchy object defines a hierarchical form of a category collection.
Attributes
An attribute is a data unit in the IBM Product Master data model. It is also called a Product Master Server data attribute.
Functions
The query language has Date(), Path(), and Timestamp() functions.
Aggregate functions
Aggregate functions enable users to perform mathematical operations on data from within their queries. The IBM Product Master query language provides five
aggregate functions that enable you to perform quick mathematical operations on search attributes. The query language also provides the group by clause to group
results displayed by an aggregate function based on one or more search attributes.
Predicates
A predicate is a Boolean-valued expression that can have one of the two Boolean values: true or false.
Semantics of an object join
In the query language, one object can join with another object by using the dot notation. For example, item.category or category.spec.
Sample queries
Sample queries are provided for items in a catalog, items in a collaboration area, and categories in a hierarchy.


Query language syntax


This information describes the syntax of the query language.

WQL queries can have AND or OR operators between two conditions. The AND operator has a higher precedence than the OR operator. This means the AND logical operation
is evaluated before an OR operation.

Grouping of search predicates by using parentheses is not currently supported in the search user interface. Given this limitation, and the fact that the AND operator has a
higher precedence than the OR operator, to define a search of the form (A or B) and C you first need to do some transformations so that parentheses are not needed.
Using this example, the search specification is transformed to A and C or B and C. In this form, parentheses are not needed, and the search can therefore be defined by
using the search user interface.
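For example, a minimal sketch (the catalog and spec names follow the samples later in this documentation) of the transformed form A and C or B and C:

select item
from catalog('ctg01')
where item['spec01/name'] like 'na%' and item['spec01/age'] > 12
or item['spec01/name'] like 'ab%' and item['spec01/age'] > 12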

compound_query ::= query 'intersect' compound_query
                 | query 'union' compound_query
query ::= basic_query | range_query
basic_query ::= 'select' selectables
                'from' container
                ['inner join on' predicate]
                ['where' predicate]
                ['order by' orderables]
                ['group by' selectables]
container ::= 'catalog' '(' catalog_name ')'
            | 'hierarchy' '(' hierarchy_name ')'
            | 'collaboration_area' '(' collaboration_area_name ')'
range_query ::= 'select range' start_index 'to' end_index selectables_list
              | 'select first' count selectables_list
              | 'select last' count selectables_list
                'from' container
                ['where' predicate]
selectables ::= selectable
              | selectable ',' selectables
selectable ::= attribute
             | 'inner_join' '(' attribute ')'
             | 'max' '(' attribute ')'
             | 'min' '(' attribute ')'
             | 'sum' '(' attribute ')'
             | 'count' '(' attribute ')'
             | 'avg' '(' attribute ')'
orderables ::= orderable
             | orderable ',' orderable
orderable ::= attribute ['asc'|'desc']
attribute ::= leading_attr
            | attribute '.' named_attr
            | attribute spec_driven_attr
leading_attr ::= 'item' | 'category'
named_attr ::= 'item' | 'category' | 'pk' | 'name' | 'path'
spec_driven_attr ::= '[' attribute_path ']'
attribute_path ::= spec name '/' attribute name
predicate ::= expr
            | expr logic_op predicate
expr ::= attribute binary_op const_attr
       | 'upper' '(' attribute ')' binary_op const_attr
       | ['not'] 'exists' attribute
       | attribute 'is' ['not'] 'null'
       | search attribute op const attribute
       | named or indexed search attribute op named or indexed search attribute
       | search attribute ['not'] 'in' '(' WQL subquery ')'
       | search attribute 'in' 'SQL' '(' SQL subquery ')'
       | search attribute op WQL subquery
       | search attribute op 'SQL' '(' '"SQL subquery"' ')'
       | '[not] exists' WQL subquery
       | '[not] exists' 'SQL' '(' SQL subquery ')'
binary_op ::= '>' | '<' | '!=' | '<=' | '>=' | 'like' | '='
logic_op ::= 'and' | 'or'
constant_attr ::= single quoted string
                | number
                | 'path' '(' path_string ',' path_separator ')'
                | 'date' '(' date_string ',' date_format ')'


Notational conventions
The query language uses the [] notation for member data, dot notation for object joins, () notation for functions, and single quotation marks for strings.

Member data: [] notation


By adopting the [] notation from some script APIs, such as JavaScript, the query language can define member data in an object in a straightforward form.
Objects join: dot notation
Dot notation has two features: (1) it defines an object join relationship; and (2) it accesses pre-defined member data in the object.
Function: () notation
The query language takes the standard function notation to define a function by using '(' and ')'.
String: Single quotation mark
All string constants must be enclosed in single quotation marks.


Member data: [] notation


By adopting the [] notation from some script APIs, such as JavaScript, the query language can define member data in an object in a straightforward form.

The following examples include spec-driven attributes for an item, ['my spec/my attribute'] and ['my spec/my link'], that are defined by using the [] notation:

item['my spec/my attribute']
item['my spec/my link'].pk


Objects join: dot notation


Dot notation has two features: (1) it defines an object join relationship; and (2) it accesses pre-defined member data in the object.

In the following examples, 1 defines an object join relation between item and category, and 2 accesses the primary key data in the item object:

1. item.category
2. item.pk

For more information, see Semantics of an object join.



Function: () notation
The query language takes the standard function notation to define a function by using '(' and ')'.

In the following examples, 1 is a catalog function; 2 is a hierarchy function; 3 is a date function; and 4 is a path function:

1. catalog('my catalog')
2. hierarchy('my category tree')
3. date('12/12/2004', 'mm/dd/yyyy')
4. path('c#root#myhome', '#')


String: Single quotation mark


All string constants must be enclosed in single quotation marks.

In the following examples, 1 is a string constant; 2 is a spec-driven attribute with a string constant as the spec attribute path; and 3 is a catalog function with a string
constant as the catalog name:

1. 'xyz'
2. item['my spec/my attribute']
3. catalog('my catalog')


Objects and terminals


IBM® Product Master data model consists of data objects and terminal data values. The core objects in Product Master model are item and category, which are defined by
spec object. A collection of item objects is a catalog. The hierarchy object defines a hierarchical form of a category collection.

Objects
An object is a compound data unit in the Product Master data model. An object can have member data. For example, item is a data unit that encapsulates data that is
related to an item in a catalog; it can include member data, such as name, and catalog information.

The query language defines 11 objects:

item
category
location
parent
child
spec
step
catalog
hierarchy
log
logentry

Where category, location, parent, and child are the same type of object; the others have their own types.

Terminals
Terminal data is the minimal data unit in the Product Master data model. Terminal data is associated with an object. Terminal data cannot have member data. For example,
an item name is a terminal string.

The Product Master data model defines seventeen data types:

binary
bool
currency
image
image-url
integer
number
number-enumeration
password
sequence
string
string-enumeration
thumbnail-image

thumbnail-image-url
timestamp
timezone
url

Leading objects
A leading object is an object that appears in the beginning of a data expression. The IBM Product Master data model defines two leading objects: item and category.


Leading objects
A leading object is an object that appears in the beginning of a data expression. The IBM® Product Master data model defines two leading objects: item and category.

In the following examples, data expression 1 has a leading object item, and 2 has a leading object category.

1. item.catalog
2. category.hierarchy


Attributes
An attribute is a data unit in the IBM® Product Master data model. It is also called a Product Master Server data attribute.

All data in a Product Master Server system (for example, Product Master) can be called attribute data. An attribute can be an object or a terminal.

Named attributes
A named attribute is an atomic attribute that is defined in the query language. It can be either an object (for example, item) or a terminal (for example, pk).
Spec-driven attributes
A spec-driven attribute is a user-defined attribute. It is defined by a user data spec by using the following format: ['protocol:attribute path']
Compound attributes
A compound attribute is an attribute that consists of more than one atomic attribute through an object join or member terminals that use dot notation.
Constant attributes
A constant attribute defines a constant value.


Named attributes
A named attribute is an atomic attribute that is defined in the query language. It can be either an object (for example, item) or a terminal (for example, pk).

The query language defines 11 named object attributes:

item
category, location, parent, child
spec
step
catalog, hierarchy
log, logentry

The query language defines different sets of named terminal attributes for each of the object attributes or object types:

item: pk
category: pk, path, level
parent: pk, path, level
child: pk, path, level
location: pk, path, level
spec: name, type, attribute_path
step: path, reserved_by
catalog: name
hierarchy: name
logentry: log, message, timestamp

You can access named attributes in an object by using dot notation. In the following examples, the first one accesses the pk attribute in the item; the second one accesses
the level attribute in the category; and the third one accesses the catalog attribute in the item and then accesses the name attribute in the catalog.



1. item.pk
2. category.level
3. item.catalog.name


Spec-driven attributes
A spec-driven attribute is a user-defined attribute. It is defined by a user data spec by using the following format: ['protocol:attribute path']

Where 'protocol:' is optional; a node path can be the node name, or the node name with grouping information. For example:

1. ['Search Ctg Spec/string-RTS']
2. ['Search Ctg Spec/link-str']
3. ['Search Ctg Spec/relationship']
4. ['Search Ctg Spec/lookup']
5. ['Search Ctg Spec/grouping/string-RTS']

A spec-driven attribute can be an object or a terminal. In the following examples, 1, 2, and 3 are terminal attributes; 4 and 5 are object attributes.

If 'protocol:' is omitted, the spec-driven attribute is a global attribute by default (for example, 1-5). The query language supports three protocols:

global:
location:
location.override:

In the following examples, 1 is a global attribute, 2 is a location-specific attribute, and 3 is a location-specific attribute for the override value.

1. ['global:Search Ctg Spec/string']
2. ['location:Search Ctg Spec/string']
3. ['location.override:Search Ctg Spec/string']


Compound attributes
A compound attribute is an attribute that consists of more than one atomic attribute through an object join or member terminals that use dot notation.

leading_object{.intermediate object}.tailing_object
leading_object{.intermediate object}.member_terminal

Only two leading objects are defined in the query language. All objects in the IBM® Product Master data model can be used as intermediate and tailing objects in a join:

item: catalog, category, location, logentry, step


category: hierarchy, item, category, spec, step, parent, child, logentry
parent: hierarchy, item, category, spec, step, parent, child
child: hierarchy, item, category, spec, step, parent, child
location: hierarchy, item, category, spec, step, parent, child
spec: none
step: none
catalog: none
hierarchy: none
logentry: log


Constant attributes
A constant attribute defines a constant value.

For example, the following values are constant attributes.

'abc'
123
123.56



A constant attribute is used to create a predicate.


Functions
The query language has Date(), Path(), and Timestamp() functions.

date() function
The date() function is used to define a date constant that uses specific date format.
path() function
The path() function is used to define a complicated path string constant that uses specific path separator.
timestamp() function
The timestamp() function is used to define a timestamp constant with a specific timestamp format.


date() function
The date() function is used to define a date constant that uses specific date format.

The date() function uses the following syntax: date(dateStr, dateFormat)

Parameters
dateStr
The string of date. For example, 1990/12/11.
dateFormat
The date format string. For example, yyyy/mm/dd.


path() function
The path() function is used to define a complicated path string constant that uses specific path separator.

The path() function uses the following syntax: path(pathStr, pathDelim)

Parameters
pathStr
The string of path. For example, abc$edf$hij.
pathDelim
The delimiter used in the path string. For example, $.


timestamp() function
The timestamp() function is used to define a timestamp constant with a specific timestamp format.

The timestamp() function uses the following syntax: timestamp(timestampStr, timestampFormat)

Parameters



timestampStr
The string representation of timestamp. For example, 2008/11/21 01:12:33.
timestampFormat
The timestamp format string. For example, yyyy/MM/dd HH:mm:ss.


Aggregate functions
Aggregate functions enable users to perform mathematical operations on data from within their queries. The IBM® Product Master query language provides five aggregate
functions that enable you to perform quick mathematical operations on search attributes. The query language also provides the group by clause to group results displayed
by an aggregate function based on one or more search attributes.

The search attribute that is used by the avg(), max(), min(), and sum() functions must be a numeric type of indexed spec-driven attribute. Product Master has the
following types of numeric attributes: Currency, Integer, Number, Number enumeration, and Sequence. The search attribute that is used by the count() function must be
a named attribute or an indexed spec-driven attribute. The search attribute that is used by the group by clause must be a named attribute or an indexed spec-driven
attribute.

Avg() function
The avg() function is used to compute the average value of a search attribute.
Max() function
The max() function is used to find the maximum value of a search attribute.
Min() function
The min() function is used to find the minimum value of a search attribute.
Sum() function
The sum() function is used to compute the sum of values of a search attribute.
Count() function
The count() function is used to count the number of rows that are defined by a search attribute. The count (*) function is used to count all rows in the result set of
a query.
Important: To use this function with the asterisk (*), specify the where clause; otherwise, specify a column name instead of the asterisk, for example, count(column_name).
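For example, a minimal sketch (the catalog, spec, and attribute names follow the samples later in this documentation) that computes an average that is grouped by another attribute:

select item['spec01/name'], avg(item['spec01/age'])
from catalog('ctg01')
group by item['spec01/name']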


Predicates
A predicate is a Boolean-valued expression that can have one of the two Boolean values: true or false.

The query language has atomic predicates and compound predicates.

Atomic predicates
An atomic predicate is a relationship between two attributes.
Compound predicate
One predicate logically operating with another predicate creates a compound predicate.


Atomic predicates
An atomic predicate is a relationship between two attributes.

You can define an atomic predicate by using the following syntax:

attribute_1 binary_relational_operator attribute_2

The right attribute attribute_2 in an atomic predicate can be a compound or a leading object attribute. The left attribute attribute_1 can also be a compound attribute or a
leading object attribute.

The query language defines five binary relational operators:

=
>
>=
<
<=

The query language also defines four special unary relational operators:



is null
is not null
exists
not exists

By using unary relational operators, the query language can define an atomic predicate in the following special formats:

attribute is null
attribute is not null
exists attribute
not exists attribute

Where attribute must be an object attribute.

Subquery
The subquery is supported in an atomic predicate, in the following formats:

attribute_1 IN ( query_2 )
attribute_1 IN sql( 'sql_query_2' )

Where, query_2 is a regular WQL query and sql_query_2 is a SQL query string.

attribute_1 op query_2
attribute_1 op sql_query_2

Where, op can be one of the binary relational operators. The query_2 must not return a set of data if the operator is not 'IN'. The same applies to sql_query_2.
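For example, a minimal sketch (the catalog and spec names are illustrative) of an IN subquery:

select item
from catalog('ctg01')
where item['spec01/pk'] in (select item.pk from catalog('ctg02'))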


Compound predicate
One predicate logically operating with another predicate creates a compound predicate.

The following predicates are compound predicates:

An atomic predicate that operates with another atomic predicate: atomic_predicate binary_logic_operator atomic_predicate
An atomic predicate that operates with a compound predicate: atomic_predicate binary_logic_operator compound_predicate
A compound predicate that operates with an atomic predicate: compound_predicate binary_logic_operator atomic_predicate
An atomic predicate with a unary logical operator: unary_logic_operator atomic_predicate
A compound predicate with a unary logical operator: unary_logic_operator compound_predicate

The query language defines two binary logical operators:

and
or

The query language defines one unary logical operator:

not


Semantics of an object join


In the query language, one object can join with another object by using the dot notation. For example, item.category or category.spec.

The query language uses this convention to define the relationship between two objects in the IBM® Product Master data model. Only five of the eleven pre-defined
objects in the query language can join with other objects: item, category, location, parent, and child. The objects location, parent, and child are the same type of objects as
the object category is.

After joining with another object, the resulting object can join with other objects again if the object is of the item or category type. For example:

item.category
item.category.spec
item.category.item
item.category.item.category
item.category.item.category.item
category.parent.child.parent.child.parent
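For example, a minimal sketch (the catalog name and path value are illustrative) that uses an object join within a predicate:

select item
from catalog('ctg01')
where item.category.path like 'root%'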

Object: item
The item object in the query language can join with five objects.



item.catalog: Catalog where the item is located
item.category: Categories that the item is mapped to
item.location: Location that the item is mapped to
item.step: Steps in the workflow that the item entry is checked out to
item.logentry: Log entries related to the item

Object: category
The category object in the query language can join with eight objects.

category.hierarchy: Hierarchy or category-tree where the category is located
category.category: Categories that the category is mapped to (category-to-category mapping)
category.step: Steps in the workflow that the category entry is checked out to
category.item: Item that is mapped to the category (item-to-category mapping)
category.spec: Spec that is mapped to the category
category.parent: Parent categories of the category in the hierarchy
category.child: Child categories of the category in the hierarchy
category.logentry: Log entries related to the category


Sample queries
Sample queries are provided for items in a catalog, items in a collaboration area, and categories in a hierarchy.

You use the Select statement and the query language constructs to write a query. You can include the query in Java™ code, a script, or in a text box in the interface.

You can use a query in a script or Java code to perform a dynamic search. You can also use a query to define a dynamic selection. If you are using a script, you can use the
SearchQuery script operation to call a query. If you are using Java, you can use the SearchQuery Java class to call a query.

Sample queries for items in a catalog


These sample queries search for items in a catalog.
Sample queries for items in a collaboration area
These sample queries search for items in a collaboration area.
Sample queries for categories in hierarchies
These sample queries search for categories in hierarchies.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Sample queries for items in a catalog


These sample queries search for items in a catalog.

Sample 1
This sample query searches for all items in catalog ctg01 for which the value of the attribute pk starts with the characters item.

select item
from catalog('ctg01')
where item['spec01/pk'] like 'item%'

Sample 2
This sample query searches for all items in catalog ctg01 for which the value of the attribute pk is not empty.

select item
from catalog('ctg01')
where item['spec01/pk'] is not null

Sample 3
This sample query searches for all items in catalog ctg01 whose name attribute value starts with na and whose age attribute value is greater than 12.

select item
from catalog('ctg01')
where item['spec01/name'] like 'na%' and item['spec01/age'] > 12

Sample 4
This sample query searches for the primary keys of the items in catalog ctg01 whose name attribute value starts with na and whose age attribute value is greater than 12.

select item.pk
from catalog('ctg01')
where item['spec01/name'] like 'na%' and item['spec01/age'] > 12

Sample 5
This sample query searches for the names of the items in catalog ctg01 whose age attribute value is greater than 12.

select item['spec01/name']
from catalog('ctg01')
where item['spec01/age'] > 12

Sample 6
This sample query searches for the names of the items in catalog ctg01 whose city address is San Jose and whose street address is 555 Bailey Ave.

select item['spec01/name']
from catalog('ctg01')
where item['spec01/address/street#a1'] = '555 Bailey Ave'
and item['spec01/address/city#a1'] = 'San Jose'

In this sample, ['spec01/address'] is a multi-occurrence attribute. An item (for example, a person) can have multiple addresses, and each address has fields for city and
street. If a person has two addresses, such as 555 Bailey Ave/Cupertino and 666 Main Street/San Jose, then the fields for city and street each have two values. The #a1 in
the query binds both conditions to the same occurrence, so an item that matches only across different occurrences is not returned. If you do not use #a1 in the query, then
this item is returned with both values included; that is, ['spec01/address/street'] = {'555 Bailey Ave', '666 Main Street'} and ['spec01/address/city'] = {'San Jose', 'Cupertino'}.

Sample 7
This sample query searches for the description of all items in catalog grocery store catalog with a primary spec spec.

select item['spec/desc']
from catalog('grocery store catalog')

Sample 8
This sample query searches for the description of all items in the catalog grocery store catalog with a primary spec spec, where the price is greater than 10.

select item['spec/desc']
from catalog('grocery store catalog')
where item['spec/price'] > 10

Sample 9
This sample query searches for all items that are mapped to a category that uses a spec that has an attribute name that contains cookie.

select item
from catalog('grocery store catalog')
where item.category.spec.attribute_path like '%cookie%'

Sample 10
This sample query searches for the primary keys and the descriptions of all items in catalog grocery store catalog with a primary spec spec, where the price is greater than
10.

select item.pk, item['spec/desc']
from catalog('grocery store catalog')
where item['spec/price'] > 10

Sample 11
This sample query searches for the descriptions of all items in catalog grocery store catalog with a primary spec spec and the categories that the item is mapped to,
where the price is greater than 10.

select item['spec/desc'], item.category
from catalog('grocery store catalog')
where item['spec/price'] > 10

Sample 12
This sample query searches for the primary key of all items that cost more than $100 in any location in the location hierarchy grocery chain stores.

select item.pk
from catalog('ctg')
where item.location.hierarchy.name = 'grocery chain stores'
and item['location:loc spec/price'] > 100

Sample 13
This sample query searches a user-defined log by the primary key of an item to return user-defined log entries. An item might have multiple entries in a user-defined log.

select item.logentry
from catalog('BasicAPICatalog')
where item.pk = 'xyz'
and item.logentry.log.name = 'My UDL'

Sample 14
This sample query searches multiple user-defined logs by the primary key of an item. Multiple user-defined logs can be associated with a single container such as a
catalog or hierarchy. An item can have entries in multiple user-defined logs that are associated with a catalog.

select item.logentry
from catalog('BasicAPICatalog')
where item.pk = 'Item1'
and item.logentry.log.name in ('My UDL1', 'My UDL2', 'My UDL3')

Sample 15
This sample query searches a user-defined log by the logentry timestamp of an item.

select item.logentry
from catalog('BasicAPICatalog')
where
item.logentry.timestamp > timestamp('08/11/21 01:12:33', 'yy/MM/dd hh:mm:ss')
and item.logentry.log.name = 'My UDL'

Sample 16
This sample query searches a user-defined log for log messages by the primary key of an item.

select item.logentry.message
from catalog('BasicAPICatalog')
where item.pk = 'xyz'
and item.logentry.log.name = 'My UDL'

Sample 17
This sample query searches for the item timestamp in a user-defined log by the primary key of an item. This is useful for identifying the modification date and time of an
item.

select item.logentry.timestamp
from catalog('BasicAPICatalog')
where item.pk = 'xyz'
and item.logentry.log.name = 'My UDL'

Sample 18
This sample query searches for items based on the logentry timestamp.

select item
from catalog('BasicAPICatalog')
where
item.logentry.timestamp = timestamp('08/11/21 01:12:33', 'yy/MM/dd hh:mm:ss')
and item.logentry.log.name = 'My UDL'

Sample 19
This sample query searches for items where the logentry timestamp is within a specified range.

select item
from catalog('BasicAPICatalog')
where
item.logentry.timestamp >= timestamp('08/11/21 01:12:33', 'yy/MM/dd hh:mm:ss')
and
item.logentry.timestamp <= timestamp('08/11/25 01:12:33', 'yy/MM/dd hh:mm:ss')
and item.logentry.log.name = 'My UDL'

Sample 20
This sample query searches for log messages by the primary key of an item and the logentry timestamp.

select item.logentry.message
from catalog('BasicAPICatalog')
where item.pk = 'xyz'
and item.logentry.timestamp = timestamp('08/11/21 01:12:33', 'yy/MM/dd hh:mm:ss')
and item.logentry.log.name = 'My UDL'

Sample 21
This sample query displays the average value of the number-RTS attribute for the items in the Search Ctg catalog.

select avg(item['Search Ctg Spec/number-RTS'])
from catalog('Search Ctg')
where item.pk is not null

Sample 22
This sample query displays the maximum value of the attribute number-RTS from among the items in the Search Ctg catalog.

select max(item['Search Ctg Spec/number-RTS'])
from catalog('Search Ctg')
where item.pk is not null

Sample 23
This sample query displays the minimum value of the attribute number-RTS from among the items in the Search Ctg catalog.

select min(item['Search Ctg Spec/number-RTS'])
from catalog('Search Ctg')
where item.pk is not null

Sample 24
This sample query displays the sum of the values of the attribute number-RTS for the items in the Search Ctg catalog.

select sum(item['Search Ctg Spec/number-RTS'])
from catalog('Search Ctg')
where item.pk is not null

Sample 25
This sample query displays the number of rows that are defined by the attribute number-RTS for the items in the Search Ctg catalog.

select count(item['Search Ctg Spec/number-RTS'])
from catalog('Search Ctg')
where item.pk is not null

Sample 26
This sample query displays the number of items in the Search Ctg catalog.

select count(*)
from catalog('Search Ctg')
where item.pk is not null

Sample 27
This sample query displays the average value of the attribute number-RTS, grouped by the attribute string-RTS, in the Search Ctg catalog.

select item['Search Ctg Spec/string-RTS'],
avg(item['Search Ctg Spec/number-RTS'])
from catalog('Search Ctg')
where item.pk is not null
group by item['Search Ctg Spec/string-RTS']

Sample 28
This sample query displays the values of the number-RTS attribute over the items from rows 100 to 200 in the Search Ctg catalog.

select range 100 to 200 item['Search Ctg Spec/number-RTS']
from catalog('Search Ctg')
where item.pk is not null

Sample 29
This sample query displays the first 100 items in the Search Ctg catalog.

select first 100 item
from catalog('Search Ctg')
where item.pk is not null

Sample 30
This sample query displays the last 100 items in the Search Ctg catalog.

select last 100 item
from catalog('Search Ctg')
where item.pk is not null

Sample 31

This sample query demonstrates the use of a dynamic search attribute on the right side of a predicate.

select item
from catalog('Search Ctg')
where item['Search Ctg Spec/string'] = item['Search Ctg Spec/string2']

Sample 32
This sample query is a generic WQL subquery.

select item
from catalog('Search Ctg')
where item.pk in (select item.pk from catalog('Search Ctg2')
where item['Search Spec 2/string'] like 'item%')

Sample 33
This sample query demonstrates the use of an embedded SQL subquery with an 'in' clause.

select item
from catalog('Search Ctg')
where item.pk in SQL('select itm_primary_key from itm
where itm_container_id = 1001')

Sample 34
This sample query demonstrates the use of a WQL subquery with an 'in' clause.

select item
from catalog('Search Ctg')
where item.pk in (select item.pk from catalog('Search Ctg2')
where item['Search Spec 2/string'] like 'item%')

Sample 35
This sample query demonstrates a generic SQL subquery in a predicate.

select item
from catalog('Search Ctg')
where item.pk = SQL('select itm_primary_key from itm
where itm_container_id = 1001')

Sample 36
This sample query demonstrates the use of location attributes in a WQL query.

select item.location
from catalog('Search Ctg')
where item['location:Search Loc Spec/attr1'] = 'abc'

Sample 37
This sample query demonstrates the use of the version() function to return the version ID of an item based on the version name in a WQL query. You can only use the < and
> operators.

select item
from catalog('Search Ctg')
where item.version < version('version1')

Sample 38
This sample query demonstrates the use of Relationship, Link, or Timezone attributes with ALL or ANY condition.
Important: This sample query does not adhere to the standard WQL query for searching items for multi-occurrence attributes.
Relationship attribute with 'All' condition:

select item.pk
from catalog('test ctg')
where (
item['test spec/relation'].catalog.name = '10416 ctg'
and item['test spec/relation'].pk = '1'
)
intersect
select range 1 to 2 item.pk
from catalog('test ctg')
where (
item['test spec/relation'].catalog.name = '10416 ctg'
and item['test spec/relation'].pk = '2'
)

Relationship attribute with 'Any' condition:

select item.pk
from catalog('test ctg')
where (
item['test spec/relation'].catalog.name = '10416 ctg'
and item['test spec/relation'].pk = '1'
)
intersect
select range 1 to 2 item.pk
from catalog('test ctg')
where (
item['test spec/relation'].catalog.name = '10416 ctg'
and item['test spec/relation'].pk = '2'
)

Link attribute with 'All' condition:

select item.pk
from catalog('test ctg')
where (
item['test spec/link'].pk = '1'
)
intersect
select item.pk
from catalog('test ctg')
where (
item['test spec/link'].pk = '2'
)

Sample 39
This sample query demonstrates the use of a secondary attribute in the where clause of a WQL query.

select category.item.pk, category.item['Primary_Spec/Primary_key']
from hierarchy('Hierarchy_Name') where category.pk = 'Sample_PK' and
category.item['Secondary_Spec/Date'] >= date('Mon Dec 01 00:00:00 CEST 2019', 'EEE MMM dd HH:mm:ss zzz yyyy')
and category.item['Secondary_Spec/Date'] <= date('Mon Dec 15 23:59:59 CEST 2019', 'EEE MMM dd HH:mm:ss zzz yyyy')
and category.item['Primary_Spec/Primary_key'] is not null

Important: If there is more than one attribute in the select clause, then you need to explicitly mention the same attribute in the where clause by using the NOT NULL
constraint. This is a known limitation.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Sample queries for items in a collaboration area


These sample queries search for items in a collaboration area.

Sample 1
This sample query searches for all items in the ca01 item collaboration area that are in step mod and that have a pk attribute value that is not empty.

select item
from collaboration_area('ca01')
where item['spec01/pk'] is not null and item.step.path = 'mod'

Sample 2
This sample query searches for all items in the ca01 item collaboration area that are in step mod and that are reserved by the user Admin.

select item
from collaboration_area('ca01')
where item.step.reserved_by = 'Admin' and item.step.path = 'mod'

Sample 3
This sample query searches for the description of all items in the collaboration area grocery store catalog ca for an item catalog with a primary spec spec.

select item['spec/desc']
from collaboration_area('grocery store catalog ca')

Sample 4
This sample query searches for the description of all categories in the collaboration area grocery store hierarchy ca with a primary spec spec.

select category['spec/desc']
from collaboration_area('grocery store hierarchy ca')

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Sample queries for categories in hierarchies
These sample queries search for categories in hierarchies.

Sample 1
This sample query searches for all categories in hierarchy ctr01 that are not the root and for which the value of the attribute name starts with the characters na.

select category
from hierarchy('ctr01')
where category['spec01/name'] like 'na%' and category.pk != 'ctr01'

Sample 2
This sample query searches for all categories in hierarchy ctr01 that are not the root and that are mapped to a spec.

select category
from hierarchy('ctr01')
where exists category.spec and category.pk != 'ctr01'

Sample 3
This sample query searches for the primary keys of all the categories that are mapped to a spec that contains a root attribute color.

select category.pk
from hierarchy('grocery store item hierarchy')
where category.spec.attribute_path = 'color'

Sample 4
This sample query searches a user-defined log by the primary key of a category to return user log entries. A category can have multiple entries in a user-defined log.

select category.logentry
from hierarchy('BasicAPIHierarchy')
where category.pk = 'xyz'
and category.logentry.log.name = 'My UDL'

Sample 5
This sample query searches multiple user-defined logs by the primary key of a category. Multiple user-defined logs can be associated with a single container, such as a
catalog or hierarchy. A category can have entries in multiple user-defined logs that are associated with a hierarchy.

select category.logentry
from hierarchy('BasicAPIHierarchy')
where category.pk = 'Category1'
and category.logentry.log.name in ('My UDL1','My UDL2','My UDL3')

Sample 6
This sample query searches a user-defined log by the logentry timestamp of a category.

select category.logentry
from hierarchy('BasicAPIHierarchy')
where
category.logentry.timestamp > timestamp('2008/11/21 01:12:33', 'yyyy/MM/dd hh:mm:ss')
and category.logentry.log.name = 'My UDL'

Sample 7
This sample query searches a user-defined log for log messages by the primary key of a category.

select category.logentry.message
from hierarchy('BasicAPIHierarchy')
where category.pk = 'xyz'
and category.logentry.log.name = 'My UDL'

Sample 8
This sample query searches for the category timestamp by the primary key of a category. This is useful for identifying the modification date and time of a category.

select category.logentry.timestamp
from hierarchy('BasicAPIHierarchy')
where category.pk = 'xyz'
and category.logentry.log.name = 'My UDL'

Sample 9
This sample query searches for categories based on the logentry timestamp.

select category
from hierarchy('BasicAPIHierarchy')
where
category.logentry.timestamp = timestamp('2008/11/21 01:12:33', 'yyyy/MM/dd hh:mm:ss')
and category.logentry.log.name = 'My UDL'

Sample 10
This sample query searches for categories where the logentry timestamp is within a specified range.

select category
from hierarchy('BasicAPIHierarchy')
where
category.logentry.timestamp >= timestamp('2008/11/21 01:12:33', 'yyyy/MM/dd hh:mm:ss')
and category.logentry.log.name = 'My UDL'

Sample 11
This sample query searches for log messages by the primary key of a category and the logentry timestamp.

select category.logentry.message
from hierarchy('BasicAPIHierarchy')
where category.pk = 'xyz'
and
category.logentry.timestamp = timestamp('2008/11/21 01:12:33', 'yyyy/MM/dd hh:mm:ss')
and category.logentry.log.name = 'My UDL'

Sample 12
This sample query displays the average value of the number-RTS attribute for the categories in the Search Ctg Ctr hierarchy.

select avg(category['Search Ctg Spec/number-RTS'])
from hierarchy('Search Ctg Ctr')
where category.pk is not null

Sample 13
This sample query displays the maximum value of the attribute number-RTS from among the categories in the Search Ctg Ctr hierarchy.

select max(category['Search Ctg Spec/number-RTS'])
from hierarchy('Search Ctg Ctr')
where category.pk is not null

Sample 14
This sample query displays the minimum value of the attribute number-RTS from among the categories in the Search Ctg Ctr hierarchy.

select min(category['Search Ctg Spec/number-RTS'])
from hierarchy('Search Ctg Ctr')
where category.pk is not null

Sample 15
This sample query displays the sum of the values of the attribute number-RTS for the categories in the Search Ctg Ctr hierarchy.

select sum(category['Search Ctg Spec/number-RTS'])
from hierarchy('Search Ctg Ctr')
where category.pk is not null

Sample 16
This sample query displays the number of rows that are defined by the attribute number-RTS for the categories in the Search Ctg Ctr hierarchy.

select count(category['Search Ctg Spec/number-RTS'])
from hierarchy('Search Ctg Ctr')
where category.pk is not null

Sample 17
This sample query displays the number of categories in the Search Ctg Ctr hierarchy.

select count(*)
from hierarchy('Search Ctg Ctr')
where category.pk is not null

Sample 18
This sample query displays the average value of the attribute number-RTS, grouped by the attribute string-RTS, in the Search Ctg Ctr hierarchy.

select category['Search Ctg Spec/string-RTS'],
avg(category['Search Ctg Spec/number-RTS'])
from hierarchy('Search Ctg Ctr')
where category.pk is not null
group by category['Search Ctg Spec/string-RTS']

Sample 19
This sample query displays the values of the number-RTS attribute over the categories from rows 100 to 200 in the Search Ctg Ctr hierarchy.

select range 100 to 200 category['Search Ctg Spec/number-RTS']
from hierarchy('Search Ctg Ctr')
where category.pk is not null

Sample 20
This sample query displays the first 100 categories in the Search Ctg Ctr hierarchy.

select first 100 category
from hierarchy('Search Ctg Ctr')
where category.pk is not null

Sample 21
This sample query displays the last 100 categories in the Search Ctg Ctr hierarchy.

select last 100 category
from hierarchy('Search Ctg Ctr')
where category.pk is not null

Sample 22
This sample query demonstrates the use of a dynamic search attribute on the right side of a predicate.

select category
from hierarchy('Search Ctg Ctr')
where category['Search Ctg Spec/string'] = category['Search Ctg Spec/string2']

Sample 23
This sample query is a generic WQL subquery.

select category
from hierarchy('Search Ctg Ctr')
where category.pk in (select category.pk from hierarchy('Search Ctg Ctr2')
where category['Search Spec 2/string'] like 'category%')

Sample 24
This sample query demonstrates the use of an embedded SQL subquery with an 'in' clause.

select category
from hierarchy('Search Ctg Ctr')
where category.pk in SQL('select cat_primary_key from cat
where cat_container_id = 1001')

Sample 25
This sample query demonstrates the use of a WQL subquery with an 'in' clause.

select category
from hierarchy('Search Ctg Ctr')
where category.pk in (select category.pk from hierarchy('Search Ctg Ctr2')
where category['Search Spec 2/string'] like 'category%')

Sample 26
This sample query demonstrates a generic SQL subquery in a predicate.

select category
from hierarchy('Search Ctg Ctr')
where category.pk = SQL('select cat_primary_key from cat
where cat_container_id = 1001')

Sample 27
This sample query demonstrates the use of the version() function to return the version ID of a category based on the version name in a WQL query. You can only use the
< and > operators.

select category
from hierarchy('Search Ctg Ctr')
where category.version < version('version1')

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Creating web services


You can create a web service so that users can access Product Master Server system data from an external application. For example, you can create a web service to
search for items in a specific catalog.

Within IBM® Product Master, there is an Axis-based web service component. You can create web services and deploy them to the Product Master web application by using the UI,
scripts, or Java™ code.
Attention: Product Master is compliant with the Apache Axis 1.4 standard only and might not support systems using a different standard.
You can also use Product Master as a Java library to develop your own web service, package the server into your own enterprise system, and deploy the server to your own
enterprise application server. See Web services outside of Product Master. The following table provides a list of references that are related to web services.
Concept Link
Web services Wikipedia entry
Latest news from W3C
Apache Axis Various beginner guides and reference documents
User guide
Axis (De)Serializers Reference class list
WSDL2Java/Java2WSDL Tool references
WSDD Explanation and details
WSDL Official W3C specification and guide
SOAP Official W3C specification and guide
java.util.Date Information about using dates in web services

Web services within Product Master


IBM Product Master Java web services support allows hosting of simple web services without requiring complex setup. In addition, it allows experienced web
service developers to host more complex services by handling the deployment details themselves.
Web services outside of Product Master
A service outside of IBM Product Master is a custom-hosted web service. It is hosted on a server other than the Product Master. You can implement this web
service in Java by following the standard web service implementation steps. You can deploy a custom-hosted web service to IBM Product Master.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Web services within Product Master


IBM® Product Master Java™ web services support allows hosting of simple web services without requiring complex setup. In addition, it allows experienced web service
developers to host more complex services by handling the deployment details themselves.

Before you begin


Define a value for soap_company, soap_user, and product_center_url in the common.properties file. Incoming SOAP requests use this company and user to access the
database and run scripts.
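
For example, the entries might look like the following sketch; the company code, user name, host, and port are placeholders for your environment:

soap_company=MyCompany
soap_user=Admin
product_center_url=http://pim.example.com:7507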

About this task


A web service is a software component that is described in XML and Web Services Description Language (WSDL). In addition to WSDL, the web service description must
include the fully qualified Java implementation class (as included in your deployed user .jar) and can include custom WSDD (Web Services Deployment Descriptor). You
can access web services through standard network protocols, typically SOAP over HTTP.

The Web Services Console in Product Master provides the ability to create and manage web services. You write a WSDL document to define a service and an
implementation script to control how the service runs. To access the Web Services Console, click Collaboration Manager > Web Services > Web Service
Console. A web service that you create in Product Master is based on WSDL and works with the standard SOAP over HTTP protocol.

Procedure
1. Implement the web service. If your web service is not simple, create both WSDL and WSDD. For more information, see Planning for web services.
2. Write the service implementation code in Java (for example Java API) or scripting language. For more information, see Implementing a simple web service and
Implementing a complex web service.
3. Deploy the web service. For more information, see Deploying a web service.
4. Access the web service. For more information, see Access web services.

Document literal style web services


You need to create a web service to deploy a document literal style web service. Your web service needs to include a WSDL that defines the schema of the service
and an IBM Product Master trigger script to start when a request is encountered.
Planning for web services
You can create multiple web services to display various slices of data in your IBM Product Master server system to other applications and to facilitate loosely
coupled integrations and service-oriented architecture (SOA). Product Master provides some standard integration capabilities, for example Data Access Service
(DAS) to help you integrate with upstream and downstream systems easily.

Implementing a simple web service
You can deploy simple web services on IBM Product Master without any customization of WSDD (Web Services Deployment Descriptor) or WSDL. Simple web
services must use only simple types such as string and int.
Implementing a complex web service
You can implement a complex web service for complex operations such as deriving information from a Java API call.
Deploying a web service
You must deploy a web service after you implement it.
Debugging web services
You can find web service errors in various ways.
Access web services
You can start web services from a browser.
Examples of implementing and deploying web services
See the following examples of web services.

Related concepts
Planning for web services

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Document literal style web services


You need to create a web service to deploy a document literal style web service. Your web service needs to include a WSDL that defines the schema of the service and an
IBM® Product Master trigger script to start when a request is encountered.

When you save the web service, you need to select the web service that you want to deploy. Upon deployment, Product Master creates a URL for the web service where
you can access the deployed WSDL. The URL of the web service has the following form:

http://application-webserver:application-port-number/services/stored-webservice-name

If you append the "?wsdl" string to the end of the URL, you get the path to the stored WSDL for the web service.
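
For example, for a web service that is stored as StockQuoteService (the host name and port here are placeholders), the two URLs would be:

http://myhost:7507/services/StockQuoteService
http://myhost:7507/services/StockQuoteService?wsdl
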
A request for a document literal web service is enclosed in a SOAP envelope, and the body of the SOAP message includes the request document in its entirety. This
request document must be in correct XML form, and is passed to the Product Master web service handler as-is. A caller then creates this request with prior knowledge of
the format of the schema node of the stored WSDL for the web service that is being started.

The Product Master web service mechanism receives this request and validates its contents against the WSDL schema for document literal style requests. If the request
does not adhere to the WSDL schema, an AxisFault is thrown. Otherwise, Product Master eliminates the namespace references from the request body and passes the
modified request to the Product Master trigger script, which is stored at deployment time. The namespace removal is required because the Product Master script context
is not able to handle namespace-enabled XML documents. The Product Master trigger script takes the contents of the request and uses them as defined by the script
author. The script must output its results as a valid response to the incoming request. Therefore, the response is validated against the WSDL before returning the output.
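
As an illustration, a request for the document literal stock-quote service that is shown later in this section might arrive in a SOAP envelope such as the following sketch (the envelope prefix and formatting vary by client):

<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
<soapenv:Body>
<ibm:getStockQuote xmlns:ibm="http://ibm.com/wpc/test/stockQuote">
<ibm:ticker>IBM</ibm:ticker>
</ibm:getStockQuote>
</soapenv:Body>
</soapenv:Envelope>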

Known limitations
Namespace must be defined on schema node of WSDL
When you deploy document literal style web services, the namespace declaration must be defined locally on the schema node of the WSDL.
Newly created web services do not automatically deploy
If you receive an error when you attempt to start the newly created web service, allow write access to the Axis configuration file server-config.wsdd under the
public_html/WEB-INF directory.
The style of the web service cannot be changed
You cannot change the style of a web service that is deployed. For example, if you create a DOCUMENT-LITERAL service and deploy it, you cannot change the
service to be an RPC-ENCODED service.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Planning for web services


You can create multiple web services to display various slices of data in your IBM® Product Master server system to other applications and to facilitate loosely coupled
integrations and service-oriented architecture (SOA). Product Master provides some standard integration capabilities, for example Data Access Service (DAS) to help you
integrate with upstream and downstream systems easily.

Types of web services


You can host simple web services without needing complex setup. In addition, experienced web service developers can host more complex services by handling the
deployment details themselves.

Types of web services include simple and complex:

Simple web services

In simple web services, only simple types, such as String and Int, are sent and received as arguments and returns from methods. You need only to inform Product Master
of the Java™ class that provides the implementation for the web service. Product Master handles the optional generation of Web Services Description Language
(WSDL) and creation of Web Services Deployment Descriptor (WSDD) for the deployment of the service.
Complex web services
In complex web services, WSDD can be supplied to configure the ability to send and receive more than simple types. The WSDD must be authored and supplied to
Product Master in the WSDD entry field in the Web Service Console, through the Java API, or through the WebService::setWsddDocPath() method of the script
API. Product Master uses the WSDD that is supplied in this manner to deploy the defined service, rather than using built-in WSDD. To author your own WSDD, you
must have a good understanding of web services, the Java2WSDL, and WSDL2Java tools.
Important: If you are using the Java API, you should become familiar with creating, deploying, and starting web services. You should understand the following
concepts: web services, XML, WSDL, WSDD, SOAP, Axis, RPC/Encoded, Document/Literal, and JAX-RPC.

Web Service Description Language (WSDL)


For both scripts and Java-based web services, you must specify a WSDL. A WSDL definition describes in XML how to access the web service and what methods the web
service provides. The WSDL definition also serves as the input or output contract for messages between the web service and the client that starts that web service. You
can create a web service in either of these ways:

Create a web service to comply with a third-party WSDL that defines your web service.
Start with a functional design from which you code the Java and then derive the WSDL.

Web services Deployment Descriptor (WSDD)


WSDD is an XML definition that tells Axis how to construct the web service from your Java code. Part of this definition involves specifying, for each message element, the Java
method that it must call and the parameters that it must provide to obtain the formatted data for compliance with the WSDL. The WSDD provides the bridge between the Java
object and the message format that is specified in the WSDL. You can author your own WSDD.

Web services features


Product Master supports SOAP over HTTP web services by using Axis, an open source implementation of SOAP. You can host both RPC/encoded and document/literal
style web services within the Product Master application server. This allows external clients or applications to interact with Product Master exactly as those clients
require.

Each web service in Product Master has an associated script that does the following:

Decodes the incoming request parameters
Runs a query against the data model to fetch the appropriate information
Formulates a response message

Following are some of the features:

Web services support real-time integration and loosely coupled integration.


Web services support running of queries. You can run queries in Product Master by using the web services.
Note: Do not run queries directly on the Product Master databases because of the way data is stored.

Factors for web services


When you are implementing web services, you must consider and understand the following things:

Service-oriented architecture (SOA)


How you can implement web services to achieve SOA.
Advantages and disadvantages
Web services are good for real-time small data transfers (over HTTP) but might not be viable for large amounts of data.
Depending on the business requirement, a solution architect must determine what is the best mode of integration: web services, WebSphere® MQ, or flat files.
Overall timeline and budget impact
Though you can display any part of your Product Master Server system data model by using web services, you need to invest effort to construct and test the web
services.

Number and type of web services


Depends on the following factors:

Client readiness - Most of the clients who implement the Product Master Server system use a traditional system environment where the preferred integration
solution is flat files. As a solution architect, you must analyze and consider the time that is needed to implement a web service (by the design team), to
implement the necessary hooks (by the clients), and to complete the integration testing (by the design team).
Real-time versus batch requirements - Web services support real-time requirements. If you do not have real-time requirements, then you can fulfill
them by using alternative solutions that run in a batch, such as WebSphere MQ and flat files.

Web services granularity


As a solution architect, you might need to plan for these considerations:

Amount of data that is returned as a web service response. Do not recommend a web service solution that returns a huge volume of records.
Number of requests that are handled by a web service. You can decide between creating a parametrized web service such that one web service can cater to
multiple types of requests and creating individual web services for each request.

Related tasks
Web services within Product Master

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Implementing a simple web service


You can deploy simple web services on IBM® Product Master without any customization of WSDD (Web Services Deployment Descriptor) or WSDL. Simple web services
must use only simple types such as string and int.

About this task


You can create simple web services in addition to writing web services that access the Product Master Java™ API.

Procedure
Implement the web service. Use one of the following methods: Java or script.
Option Description
Java The following sample code creates a simple web service that returns the number of items in a catalog.

public class SimpleSampleWebService
{
public int getNumberOfItemsInCatalog(String catalogName)
{
Context ctx = null;
try
{
ctx = PIMWebServicesContextFactory.getContext();
CatalogManager catalog_manager = ctx.getCatalogManager();
Catalog catalog = catalog_manager.getCatalog(catalogName);
if (null == catalog)
{
return -1;
}
else
{
Collection items = catalog.getItems();
if (items == null)
{
return 0;
}
else
{
return items.size();
}
}
}
catch (Exception e)
{
e.printStackTrace();
}

return -2;
}
}

Script The following sample code creates a script-based web service.

// Search a given catalog for the items that have the given attribute
// with a value containing the given value string.
function getListOfMatchingItems(catalogName, attributePath, attributeValue)
{
var pkList = [];
catchError(e) {
var query = " select item.pk from catalog('" + catalogName + "')\n"
+ " where item['" + attributePath + "'] like '%" + attributeValue + "%'\n" ;
var qry = new SearchQuery(query);
var rs = qry.execute();
if (rs.size() > 0) {
var i = 0;
while (rs.next()) {
pkList[i] = rs.getString(1);
i = i+1;
}
} else {
pkList[0] = "Item not found";
}
}
if (e != null) {
pkList[0] = "Exception occurred. Item not found: " + e;
}

return pkList;
}

// parse the request document
var doc = new XmlDocument(soapMessage);

// get the request parameters
var ctgName = parseXMLNode("catalogName");
var attrPath = parseXMLNode("attributePath");
var attrValue = parseXMLNode("attributeValue");
var pks = getListOfMatchingItems(ctgName, attrPath, attrValue);
out.println("<getListOfMatchingItemsResponse xmlns=\"\">");
for(var i = 0; pks[i] != null; i=i+1)
{
out.println(" <getListOfMatchingItemsReturn>" + pks[i] + "</getListOfMatchingItemsReturn>");
}
out.println("</getListOfMatchingItemsResponse>");
The simple web service is implemented.

What to do next
Next, deploy the web service.

Sample implementation script and WSDL document


The following document literal web service returns a stock quotation for a ticker symbol. This limited example returns only a value for the "IBM" ticker; all other
arguments result in a SOAP fault.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Sample implementation script and WSDL document


The following document literal web service returns a stock quotation for a ticker symbol. This limited example returns only a value for the "IBM®" ticker; all other
arguments result in a SOAP fault.

The implementation script is as follows:

// parse the request document
var doc = new XmlDocument(soapMessage);

// get the ticker parameter
var ticker = parseXMLNode("ibm:ticker");

// we only give out ibm quotes around here...
if (ticker == "IBM") {
out.println("<ibm:getStockQuoteResponse
xmlns:ibm=\"http://ibm.com/wpc/test/stockQuote\">");
out.println("<ibm:response>123.45</ibm:response>");
out.println("</ibm:getStockQuoteResponse>");
}
else {
soapFaultMsg.print("Only quotes for IBM are supported");
}

Table 1. Sample scripts


Method Sample code

Document literal style:

<element name="getStockQuote">
<complexType>
<sequence>
<element name="ticker" type="xsd:string"/>
</sequence>
</complexType>
</element>
<element name="getStockQuoteResponse">
<complexType>
<sequence>
<element name="response" type="xsd:decimal"/>
</sequence>
</complexType>
</element>

Java™:

java.math.BigDecimal getStockQuote(String ticker);

WSDL:

<?xml version="1.0" encoding="UTF-8"?>
<definitions
xmlns="http://schemas.xmlsoap.org/wsdl/"
xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/"
xmlns:xs="http://www.w3.org/2001/XMLSchema"
xmlns:y="http://ibm.com/wpc/test/stockQuote"
targetNamespace="http://ibm.com/wpc/test/stockQuote">
<types>
<xs:schema targetNamespace="http://ibm.com/wpc/test/stockQuote" elementFormDefault="qualified">
<xs:element name="getStockQuote">
<xs:complexType>
<xs:sequence>
<xs:element name="ticker" type="xs:string" nillable="false"/>
</xs:sequence>
</xs:complexType>
</xs:element>
<xs:element name="getStockQuoteResponse">
<xs:complexType>
<xs:sequence>
<xs:element name="response" type="xs:decimal"/>
</xs:sequence>
</xs:complexType>
</xs:element>
</xs:schema>
</types>
<message name="getStockQuoteRequest">
<part name="parameters" element="y:getStockQuote"/>
</message>
<message name="getStockQuoteResponse">
<part name="parameters" element="y:getStockQuoteResponse"/>
</message>
<portType name="StockQuotePortType">
<operation name="getStockQuote">
<input message="y:getStockQuoteRequest"/>
<output message="y:getStockQuoteResponse"/>
</operation>
</portType>
<binding name="StockQuoteBinding" type="y:StockQuotePortType">
<soap:binding transport="http://schemas.xmlsoap.org/soap/http"/>
<operation name="getStockQuote">
<soap:operation soapAction=""/>
<input>
<soap:body use="literal"/>
</input>
<output>
<soap:body use="literal"/>
</output>
</operation>
</binding>
<service name="StockQuoteService">
<port name="StockQuotePort" binding="y:StockQuoteBinding">
<soap:address location="http://example.wpc.ibm.com/services/StockQuoteService"/>
</port>
</service>
</definitions>
The result of running this script is that IBM Product Master does the following things:

1. Receives a SOAP request from the Axis SOAP stack.
2. Validates the request message against the previous schema.
3. Starts the web service trigger script. The input variables are:

- operationName = "getStockQuote"
- message = "<getStockQuote>
<ticker>IBM</ticker>
</getStockQuote>"

The trigger script writes the response to the "out" writer:

- out = "<getStockQuoteResponse>
<response>83.76</response>
</getStockQuoteResponse>"

4. Validates the response against the previous schema.
5. Sends the entire SOAP response back to the client through the Axis SOAP stack.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Implementing a complex web service


You can implement a complex web service for complex operations such as deriving information from a Java™ API call.

Procedure
1. Write and compile the code for creating a complex web service. Use the Java API.
The following sample Java API creates a complex web service. It returns a WBSItem with a specified primary key from a catalog with the requested name. Because
WBSItem is a complex class, you must create a WSDD for correct deployment within IBM® Product Master.

Main Class:

public class ComplexSampleWebService
{
public WBSItem getItemFromCatalog(String catalogName, String primaryKey) throws Exception
{
Context ctx = null;
try
{
ctx = PIMWebServicesContextFactory.getContext();
CatalogManager catalog_manager = ctx.getCatalogManager();
Catalog catalog = catalog_manager.getCatalog(catalogName);
if (null == catalog)
{
throw new Exception("No Catalog found.");
}
else
{
Item item = catalog.getItemByPrimaryKey(primaryKey);
if (item == null)
{
throw new Exception("No matching Item found.");
}
else
{
WBSItem wbsItem = new WBSItem();
wbsItem.setItemPK(item.getPrimaryKey());
wbsItem.setItemDisplayName(item.getDisplayName());
wbsItem.setCatalogName(catalogName);
return wbsItem;
}
}
}
catch (Exception e)
{
e.printStackTrace();
throw new Exception(e.getMessage());
}
}
}

WBSItem Object Class:

public class WBSItem
{
private String itemPK = null;
private String itemDisplayName = null;
private String attributeValue = null;
private String catalogName = null;
private int price = 0;

public int getPrice()
{
return price;
}

public void setPrice(int price)
{
this.price = price;
}

public String getAttributeValue()
{
return attributeValue;
}

public void setAttributeValue(String attributeValue)
{
this.attributeValue = attributeValue;
}

public String getItemPK()
{
return itemPK;
}

public void setItemPK(String itemPK)
{
this.itemPK = itemPK;
}

public String getItemDisplayName()
{
return itemDisplayName;
}

public void setItemDisplayName(String itemDisplayName)
{
this.itemDisplayName = itemDisplayName;
}

public String getCatalogName()
{
return catalogName;
}

public void setCatalogName(String catalogName)
{
this.catalogName = catalogName;
}
}

The complex web service is created.


2. Prepare to deploy a complex web service to Product Master.
You need to generate the deploy.wsdd file for deployment. You can use Java2WSDL, and then WSDL2Java to create the sample deploy.wsdd file.

a. Run Java2WSDL on the Java API to generate WSDL.
You can use the WBSItem to generate the WSDD for Product Master.

java org.apache.axis.wsdl.Java2WSDL -o wp.wsdl -l "http://192.168.0.1:9080/services/ComplexSampleWebService"
-n "http://tests.ibm.com" com.ibm.tests.ComplexSampleWebService -y WRAPPED -u LITERAL

b. Run WSDL2Java on the generated WSDL.

java org.apache.axis.wsdl.WSDL2Java -o . -d Session -s -S true wp.wsdl

c. Edit the WSDD.

<ati:deployment xmlns="http://xml.apache.org/axis/wsdd/" xmlns:java="http://xml.apache.org/axis/wsdd/providers/java"
xmlns:ati="http://xml.apache.org/axis/wsdd/">

<ati:service name="ComplexSampleWebService" provider="java:WPCDocument" use="literal">
<ati:parameter name="wsdlTargetNamespace" value="http://tests.ibm.com"/>
<ati:parameter name="wsdlServiceElement" value="ComplexSampleWebServiceService"/>
<ati:parameter name="wsdlServicePort" value="ComplexSampleWebService"/>
<ati:parameter name="className" value="com.ibm.tests.ComplexSampleWebService"/>
<ati:parameter name="wsdlPortType" value="ComplexSampleWebService"/>
<ati:parameter name="allowedMethods" value="*"/>
<ati:parameter name="scope" value="Session"/>

<ati:typeMapping xmlns:ns="http://tests.ibm.com" qname="ns:WBSItem" type="java:com.ibm.tests.WBSItem"
serializer="org.apache.axis.encoding.ser.BeanSerializerFactory"
deserializer="org.apache.axis.encoding.ser.BeanDeserializerFactory" encodingStyle=""/>
</ati:service>
</ati:deployment>

What to do next
Next, deploy the web service.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Deploying a web service


You must deploy a web service after you implement it.

Before you begin


Implement the web service.

Procedure
Deploy the web service. Use one of the following methods: user interface, Java™, or script.
Option Description
User interface
a. Click Collaboration Manager > Web Services > New Web Service.
b. In the New Web Service page, provide all the details for the new web service.

See Example of using the UI to create a web service for details.

Java
Sample 1: The following sample Java code creates a script web service.

public void createScriptWebService()
{
Context ctx = PIMContextFactory.getCurrentContext();
DocstoreManager docstoreManager = ctx.getDocstoreManager();
Document wsdlDoc = docstoreManager.getDocument("archives/wsdl/ItemFinderService0");
Document implementationScript = docstoreManager.getDocument("scripts/wbs/ItemFinderService0");
WebServiceManager webServiceManager = ctx.getWebServiceManager();
webServiceManager.createWebService( "myScriptService", "This is a sample script service",
wsdlDoc, WebService.MessageStyle.DOCUMENT_LITERAL, implementationScript,
false, false, true, false, true, true );
}

Sample 2: The following sample Java code creates a Java web service.

public void createJavaWebService()
{
Context ctx = PIMContextFactory.getCurrentContext();
DocstoreManager docstoreManager = ctx.getDocstoreManager();
Document wsdlDoc = docstoreManager.getDocument("archives/wsdl/ItemFinderService0");
Document wsddDoc = docstoreManager.getDocument("archives/wsdd/ItemFinderService0");
String implClass = "com.ibm.pim.service.sample.ItemFinderServiceImpl";
WebServiceManager webServiceManager = ctx.getWebServiceManager();
webServiceManager.createWebServiceUsingJava( "myJavaService", "This is a sample Java service",
wsdlDoc, wsddDoc,
WebService.MessageStyle.DOCUMENT_LITERAL, implClass,
false, false, true, false, true, true );
}

Script
Sample 1: The following sample code creates a script-based web service.

var attr_Name = "ScriptWebService";
var attr_ImplClass="";
var attr_Desc = "Description for Test Webservice";
var attr_WSDLDocPath = "archives/wsdl/test0";
var attr_WSDDDocPath = "archives/wsdd/test0";
var attr_Protocol = "SOAP_HTTP";
var attr_Style = "RPC_ENCODED";
var attr_ImplScriptPath = "scripts/wbs/test0";
var b_StoreIncoming = true;
var b_StoreOutgoing = true;
var b_Deployed = true;
var b_AuthRequired = true;
var b_SkipRequestValidation=true;
var b_SkipResponseValidation=true;
var ws = createWebService(
attr_Name,
attr_ImplClass,
attr_Desc,
attr_WSDLDocPath,
attr_WSDDDocPath,
attr_Protocol,
attr_Style,
attr_ImplScriptPath,
b_StoreIncoming,
b_StoreOutgoing,
b_Deployed,
b_AuthRequired,
b_SkipRequestValidation,
b_SkipResponseValidation);
ws.saveWebService();
out.writeln(ws);

Sample 2: The following sample code creates a Java-based web service.

var attr_Name = "JavaWebService";
var attr_ImplClass="com.ibm.ccd.soap.common.SOAPWebService";
var attr_Desc = "Description for Test Webservice";
var attr_WSDLDocPath = "archives/wsdl/test0";
var attr_WSDDDocPath = "archives/wsdd/test0";
var attr_Protocol = "SOAP_HTTP";
var attr_Style = "RPC_ENCODED";
var attr_ImplScriptPath = "";
var b_StoreIncoming = true;
var b_StoreOutgoing = true;
var b_Deployed = true;
var b_AuthRequired = true;
var b_SkipRequestValidation=true;
var b_SkipResponseValidation=true;
var ws = createWebService(
attr_Name,
attr_ImplClass,
attr_Desc,
attr_WSDLDocPath,
attr_WSDDDocPath,
attr_Protocol,
attr_Style,
attr_ImplScriptPath,
b_StoreIncoming,
b_StoreOutgoing,
b_Deployed,
b_AuthRequired,
b_SkipRequestValidation,
b_SkipResponseValidation);
ws.saveWebService();
out.writeln(ws);
Note: To deploy a web service on WebLogic application servers, you must remove the value from the Listen Address field in the WebLogic Admin console. Otherwise,
deploying a web service on WebLogic fails because the SOAPWebService class that is used for the deployment of web services uses the Listen Address field.

Results
The web service is deployed.

What to do next
Next, access the web service.

Related tasks
Deploying a third party or custom user .jar file

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Debugging web services
You can find web service errors in various ways.

To debug your web service, use the following tips to find errors:

If you are trying to comply with an outside WSDL, use the IBM® Product Master WSDL auto-generation feature to see the WSDL of your web service. Modify the Java™
and WSDD to correct any errors.
Use Java2WSDL followed by WSDL2Java to get a basic WSDD. Compare this WSDD to your current WSDD to find any errors. For more information, see Example
of creating a web service that retrieves information from an item.
With simple web services, you can view the XML of the web service return message through a URL. For more information, see Examples of implementing and
deploying web services.
To analyze the XML packets traveling "over the wire", use a packet monitoring application, such as TCPMon or Ethereal.
Use try..catch blocks in your root web service method. Unless you configure SOAP faults and other error handling information through WSDD, it can be difficult
to determine where your exception went within your web service.

To debug a deployed web service and to determine which information you should collect, complete the following steps:

1. Click Collaboration Manager > Web Services > Web Service Console.
2. In the Web Service Console, select the web service in question by clicking its name. The Web Service Detail screen shows the WSDL information, the Style that is
used (mostly DOCUMENT_LITERAL for script-based web services), and the Implementation Script for the web service. The URL for the web service and the WSDL URL
can also be found on this screen.
3. Verify that the check box Deployed at the end of the screen is checked.
4. Verify the following settings in the common.properties file that is in $TOP/etc/default directory:
# SOAP username and company. This can be any valid username and
# company for which a login is available. SOAP services will run
# with the permissions of this company/user combination.
soap_company=<your company>
soap_user=<your user running the webservice>

# This defines the SOAP Envelope Schema URL.
# By default it points to "http://schemas.xmlsoap.org/soap/envelope/"
# This can be changed to another URL which hosts the SOAP Envelope schema v 1.1
soap_envelope_schema_url=http://schemas.xmlsoap.org/soap/envelope/

# The fully-qualified URL, including port number, of the web site
# where users should point their browsers to access this
# instance. It should NOT include a trailing "/" character. If this
# value is kept empty, it will be deduced from the appserver hostname
# and port. Do NOT leave this value empty for WebSphere.
# Example: product_center_url=http://myinstance.trigo.com:1234
product_center_url=http://stan.munich.de.ibm.com:7505
a. Copy the web service URL, found at the top of the Web Service Detail screen, into the address bar of your web browser.
Note: This step tests only to see that the web service can be found and is deployed at that location.
b. Test the WSDL URL, which can also be found near the top of the Web Service Detail screen, by copying it into the address bar of your web browser.
Note: This step tests only to see that the WSDL file can be found. If the browser shows the XML file, then a basic schema check is also done, which means
that the XML (WSDL) file is well-formed. This schema validation works successfully only if no unknown schema definitions are used.
5. Certify that the endpoint (location) of the web service shows the same host name and port as defined for the parameter product_center_url.
Note: The endpoint is the host name of the box where the service is deployed.
6. Use the following script from the sandbox of the Product Master instance that starts the web service. This assesses whether the communication also works
from a different computer or whether, for example, a firewall is blocking the communication. Ensure that you adapt the URL to the value that is set for
the product_center_url parameter in the common.properties file.
var sURL = "http://stan.munich.de.ibm.com:7505/services/GDS.WebService";
var hmRequestProperties = [];
var hmParameters = [];
var sRequestMethod = "GET";
var hmResponse = getFullHTTPResponse(sURL, hmRequestProperties, hmParameters, sRequestMethod);
forEachLine(hmResponse["RESPONSE_READER"], line){
out.writeln(line);
}
Note: In some cases, the information from the WSDD file might be overwritten. For example, after every new deployment of the product after the install_war.sh
script is run, the information in the server-config.wsdd WSDD file at $WAS_HOME/installedApps/<hostname>/<application.ear>/ccd.war/WEB-INF/ is
overwritten. If so, your test in the previous step fails. In this case, the web service needs to be saved again from the Web Services Console. This re-creates
the server-config.wsdd file, and the following XML information is found:

<service name="GDS.WebService" provider="java:TrigoRPC">
<parameter name="allowedMethods" value="invoke"/>
</service>

Where service name is the name of the web service and provider shows the type of the web service; TrigoRPC is a "complex" web service of type RPC, mostly
used when Java objects like Items are used instead of simple script objects like a primary key.
7. Ensure that the log appender for "soap" is set to debug level logging, which is the most verbose and granular logging mode. The log4j2.xml file is in the
$TOP/etc/default directory. Debug level logging ensures that complete stack traces are shown in the $TOP/logs/appsvr_<hostname>/ipm.log file. You can then
reproduce the problem and send the set of log files and all the information from the previous steps to Support for further analysis.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Access web services


You can start web services from a browser.

You can test a web service by starting the web service with a web browser. Access the following URL:

http://pim.host.address:pimport/services/ServiceName?method=getNumberOfItemsInCatalog&catalogName=TestCatalog

Where http://pim.host.address:pimport is the web address for accessing your IBM® Product Master system. The ServiceName variable is the name that you entered for the
web service in the Web Services Console, and TestCatalog is the catalog that you are querying.

If the catalog exists, the number of items in the catalog is returned as an XML message.
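The same quick test can be scripted in a small Java client. The following is a minimal sketch, assuming the service is deployed and reachable; the host, port, service name, and catalog name are placeholders that you must replace with your own values:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class WebServiceSmokeTest
{
    public static void main(String[] args) throws Exception
    {
        // Placeholder host, port, service, and catalog names
        URL url = new URL("http://pim.host.address:9080/services/ServiceName"
                + "?method=getNumberOfItemsInCatalog&catalogName=TestCatalog");
        HttpURLConnection connection = (HttpURLConnection) url.openConnection();
        connection.setRequestMethod("GET");

        // Print the XML message that the Axis servlet returns
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(connection.getInputStream())))
        {
            String line;
            while ((line = reader.readLine()) != null)
            {
                System.out.println(line);
            }
        }
        connection.disconnect();
    }
}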

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Examples of implementing and deploying web services


See the following examples of web services.

Example of using the UI to create a web service for a document literal type
This example shows how to create and run a web service for a document literal type.
Example of creating a web service to query the number of items in a catalog
This example shows how to retrieve a simple piece of information (in this case, the number of items in a catalog) from IBM Product Master by using the Java API.
Example of creating a web service to search for an item
This example shows how to complete a search in the Java API and illustrates how to return an array of items rather than a single value.
Example of creating a web service that uses a business object and complex logic
This example shows a more sizeable Java API application that uses a business object to provide backend function. This example also illustrates how to create your
own Java client to call your web service.
Example of creating a web service that retrieves information from an item
This example shows how to create a web service that retrieves information from an item, and returns a custom object to the web service. This example also
demonstrates the use of Java2WSDL and WSDL2Java in creating WSDD.
Example of handling namespaces when you create a web service
This example shows how to handle namespaces when you create a web service. Depending on how you structure your code and use the Java2WSDL and
WSDL2Java commands, you must ensure that a valid response is generated when the web service is deployed.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Example of using the UI to create a web service for a document literal type
This example shows how to create and run a web service for a document literal type.



Procedure
1. Define a value for soap_company, soap_user, and product_center_url in the common.properties file.
2. Click Collaboration Manager > Web Services > New Web Service. Specify the following values:
Web Service Name: DocumentWebServiceTest
Description: A test for document literal web service
Protocol: SOAP_HTTP
Style: DOCUMENT_LITERAL
URL: Is blank initially, but when the web service is saved it shows as:
http://hostname:9099/services/DocumentWebServiceTest
WSDL URL: Is blank initially, but when the web service is saved it shows as:
http://hostname:9099/services/DocumentWebServiceTest?wsdl
3. In the WSDL field, specify the WSDL XML content.
For example:

<?xml version="1.0" encoding="UTF-8"?>
<wsdl:definitions targetNamespace="http://samples.service.pim.ibm.com" xmlns="http://schemas.xmlsoap.org/wsdl/"
    xmlns:apachesoap="http://xml.apache.org/xml-soap" xmlns:impl="http://samples.service.pim.ibm.com"
    xmlns:intf="http://samples.service.pim.ibm.com"
    xmlns:soapenc="http://schemas.xmlsoap.org/soap/encoding/" xmlns:tns1="http://samples.service.pim.ibm.com"
    xmlns:wsdl="http://schemas.xmlsoap.org/wsdl/"
    xmlns:wsdlsoap="http://schemas.xmlsoap.org/wsdl/soap/" xmlns:xsd="http://www.w3.org/2001/XMLSchema">

  <wsdl:types>
    <schema targetNamespace="http://samples.service.pim.ibm.com" xmlns="http://www.w3.org/2001/XMLSchema">
      <element name="getListOfMatchingItems">
        <complexType>
          <sequence>
            <element name="catalogName" type="xsd:string"/>
            <element name="attributeName" type="xsd:string"/>
            <element name="attributeValue" type="xsd:string"/>
          </sequence>
        </complexType>
      </element>
      <element name="getListOfMatchingItemsResponse">
        <complexType>
          <sequence>
            <element maxOccurs="unbounded" minOccurs="0" name="getListOfMatchingItemsReturn" type="xsd:string"/>
          </sequence>
        </complexType>
      </element>
    </schema>
  </wsdl:types>

  <wsdl:message name="getListOfMatchingItemsResponse">
    <wsdl:part element="tns1:getListOfMatchingItemsResponse" name="parameters"/>
  </wsdl:message>
  <wsdl:message name="getListOfMatchingItemsRequest">
    <wsdl:part element="tns1:getListOfMatchingItems" name="parameters"/>
  </wsdl:message>

  <wsdl:portType name="ItemFinderService">
    <wsdl:operation name="getListOfMatchingItems" parameterOrder="">
      <wsdl:input message="impl:getListOfMatchingItemsRequest" name="getListOfMatchingItemsRequest"/>
      <wsdl:output message="impl:getListOfMatchingItemsResponse" name="getListOfMatchingItemsResponse"/>
    </wsdl:operation>
  </wsdl:portType>

  <wsdl:binding name="ItemFinderServiceSoapBinding" type="impl:ItemFinderService">
    <wsdlsoap:binding transport="http://schemas.xmlsoap.org/soap/http"/>
    <wsdl:operation name="getListOfMatchingItems">
      <wsdlsoap:operation soapAction=""/>
      <wsdl:input name="getListOfMatchingItemsRequest">
        <wsdlsoap:body namespace="http://samples.service.pim.ibm.com" use="literal"/>
      </wsdl:input>
      <wsdl:output name="getListOfMatchingItemsResponse">
        <wsdlsoap:body namespace="http://samples.service.pim.ibm.com" use="literal"/>
      </wsdl:output>
    </wsdl:operation>
  </wsdl:binding>

  <wsdl:service name="ItemFinderServiceService">
    <wsdl:port binding="impl:ItemFinderServiceSoapBinding" name="ItemFinderService">
      <wsdlsoap:address location="http://maobing3:9138/services/ItemFinderService"/>
    </wsdl:port>
  </wsdl:service>

</wsdl:definitions>

4. Select a web service implementation. In this example, select Script.


5. In the Implementation script field, specify the Script API script for the web service.
For example:

// Web service implementation script
function getListOfMatchingItems(catalogName, attributePath, attributeValue)
{
    var pkList = [];
    catchError(e) {
        var query = " select item.pk from catalog('" + catalogName + "')\n"
                  + " where item['" + attributePath + "'] like '%" + attributeValue + "%'\n";
        var qry = new SearchQuery(query);
        var rs = qry.execute();
        if (rs.size() > 0) {
            var i = 0;
            while (rs.next()) {
                pkList[i] = rs.getString(1);
                i = i + 1;
            }
        } else {
            pkList[0] = "Item not found";
        }
    }
    if (e != null) {
        pkList[0] = "Exception occurred. Item not found: " + e;
    }
    return pkList;
}

// Parse the request document
var doc = new XmlDocument(soapMessage);

// Get the request parameters
var ctgName = parseXMLNode("catalogName");
var attrPath = parseXMLNode("attributePath");
var attrValue = parseXMLNode("attributeValue");
var pks = getListOfMatchingItems(ctgName, attrPath, attrValue);

// Write the response document
out.println("<getListOfMatchingItemsResponse xmlns=\"\">");
for (var i = 0; pks[i] != null; i = i + 1)
{
    out.println("  <getListOfMatchingItemsReturn>" + pks[i] + "</getListOfMatchingItemsReturn>");
}
out.println("</getListOfMatchingItemsResponse>");

6. Select Store requests so that you can view the request history from the transaction console.
7. Select Store replies so that you can view the response history from the transaction console.
8. Select Deployed to deploy the web service. The web service is available to you only if you check this option.
9. Select Skip validations of SOAP requests and Skip validations of SOAP responses.
10. Start the web service by running a script that contains the appropriate code in the script sandbox.
The following sample code is provided to guide you on the correct method of starting web services:

var xmlStr = "<getListOfMatchingItems xmlns=\"http://samples.service.pim.ibm.com\">"
           + "<catalogName>nmg_ctg</catalogName>"
           + "<attributePath>nmg_primeSpec/pkey</attributePath>"
           + "<attributeValue>item10001</attributeValue>"
           + "</getListOfMatchingItems>";

var res = invokeSoapServerForDocLit("ws_url/services/DocumentWebServiceTest", xmlStr);
out.println("res: " + res);

Note: Replace ws_url with the server name and port number.
The following code is the expected output of running the script:

res: <ns2:getListOfMatchingItemsResponse xmlns="http://samples.service.pim.ibm.com"
     xmlns:ns2="http://samples.service.pim.ibm.com">
  <ns2:getListOfMatchingItemsReturn>item10001</ns2:getListOfMatchingItemsReturn>
</ns2:getListOfMatchingItemsResponse>
<multiRef id="id0" soapenc:root="0"
     soapenv:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/" xsi:type="soapenc:long"
     xmlns:soapenc="http://schemas.xmlsoap.org/soap/encoding/"
     xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
     xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">1276834296222752585</multiRef>

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Example of creating a web service to query the number of items in a catalog


This example shows how to retrieve a simple piece of information (in this case, the number of items in a catalog) from IBM® Product Master by using the Java™ API.

About this task


This example also illustrates the use of an unauthenticated web service and the use of the PIMContextFactory to obtain your initial PIMContext.

Procedure
1. Write the Java code. Here is an example of a simple web service that returns the number of items in a catalog:

// Import locations assume the Product Master Java API (com.ibm.pim.*) packages;
// adjust them to match your environment.
import java.util.Collection;

import com.ibm.pim.catalog.Catalog;
import com.ibm.pim.catalog.CatalogManager;
import com.ibm.pim.catalog.item.Item;
import com.ibm.pim.context.Context;
import com.ibm.pim.context.PIMContextFactory;

public class CatalogService
{
    public static final int CATALOG_NULL = -1;
    public static final int CATALOG_ITEMS_NULL = -2;
    public static final int EXCEPTION_CAUGHT = -3;

    public int getNumberOfItemsInCatalog(String catalogName)
    {
        Context context = null;
        int result;

        try
        {
            context = PIMContextFactory.getContext("Joe", "passw0rd", "Acme");

            CatalogManager catalogManager = context.getCatalogManager();
            Catalog catalog = catalogManager.getCatalog(catalogName);

            if (catalog == null)
            {
                return CATALOG_NULL;
            }
            else
            {
                Collection<Item> items = catalog.getItems();
                if (items == null)
                {
                    return CATALOG_ITEMS_NULL;
                }
                else
                {
                    result = items.size();
                }
            }
        }
        catch (Exception e)
        {
            e.printStackTrace();
            return EXCEPTION_CAUGHT;
        }

        return result;
    }
}

2. Deploy the user .jar file.


For more information, see Deploying a third party or custom user .jar file.

3. Register the web service in Product Master.


a. Access your Product Master instance and log in.
For example: http://yourWPCserver:yourWPCport/utils/enterLogin.jsp.
b. Click Collaboration Manager > Web Services > Web Service Console > New.
c. Provide the following values:
Web Service Name: Provide a name. For example, CatalogService.
WSDL: Type <definition/> to indicate that Product Master is to generate the WSDL for you.
Web Service Implementation: Select Java.
Java Implementation Class: Type the Java class of your web service. For the above example, you type com.acme.javawbs.CatalogService.
Deployed: Select this check box. Clearing this selection enables you to store web services in an inactive (unusable) state.
Authenticated: Do not select this check box.
Authenticated means that the web service expects a user name and password to be supplied. This is done through an Axis dialog if you call the web
service through the web browser; if you use a custom Java client, the user name and password must be included in the SOAP header. The correct
format for an Axis user name is User@Company, for example Joe@Acme (a client sketch appears at the end of this topic).

Unauthenticated means that the web service contacts Product Master with the user name and company that are specified in the soap_company and
soap_user fields in the $TOP/etc/default/common.properties file. In this scenario, the password is not checked. Because only the administrator should
have access to change files on the Product Master server, this does not constitute a security risk. To protect against unauthorized access,
administrators should ensure that the SOAP company and user that are specified for Product Master in the common.properties file are not an
Admin user.

Both fields can be left blank to disable unauthenticated access completely.

If the service cannot be authenticated, a SOAP fault is returned in the message body.

d. Click Save. If you get an error similar to: Unable to verify Java implementation class, and you have no typographical errors in your fully qualified
Java class name, then you did not successfully deploy your Java class through the user .jar mechanism. Return to Step 2 and check whether your user .jar
appears in your class path.
For example, run ps -ef | grep java and check the Java process for Product Master.
4. Test starting your web service. Your web service is now deployed and ready for use. You can develop a web service client to call your web service, but it is also
possible as a quick test to start the web service in a web browser. Type
http://YourWPCServer:YourWPCPort/services/ServiceName?method=getNumberOfItemsInCatalog&catalogName=YourCatalogName
Where YourWPCServer and YourWPCPort are the host and port for accessing your Product Master system, ServiceName is the name that you gave the web service when you
deployed it to Product Master, and YourCatalogName is the catalog that you are querying.

Results
This quick test invocation in a web browser can start only web service methods. The test does not handle namespaces correctly, so it can give a false
impression of errors even when the service works fine when accessed from a correct web services client.
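If you write a custom Java client for an authenticated service, the user name and password travel with the call. The following is a minimal sketch that uses the Axis 1.x dynamic Call interface; the endpoint URL, operation name, and credentials are illustrative only:

import javax.xml.namespace.QName;
import org.apache.axis.client.Call;
import org.apache.axis.client.Service;

public class AuthenticatedClientSketch
{
    public static void main(String[] args) throws Exception
    {
        Service service = new Service();
        Call call = (Call) service.createCall();

        // Placeholder endpoint; use the URL shown for your service in the Web Services Console
        call.setTargetEndpointAddress(
                new java.net.URL("http://yourWPCserver:yourWPCport/services/CatalogService"));
        call.setOperationName(new QName("getNumberOfItemsInCatalog"));

        // For an authenticated service, Axis supplies these credentials with the request;
        // the expected user name format is User@Company
        call.setUsername("Joe@Acme");
        call.setPassword("passw0rd");

        Object result = call.invoke(new Object[] { "TestCatalog" });
        System.out.println("Item count: " + result);
    }
}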

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Example of creating a web service to search for an item


This example shows how to complete a search in the Java™ API and illustrates how to return an array of items rather than a single value.



About this task
This example also shows how to add authentication to your web service, and how to use the PIMWebServicesContextFactory to obtain your initial PIMContext.

Procedure
1. Write the Java code. The Java API offers a JDBC-like search capability, which provides an equivalent to the JDBC statement and result set that most Java
programmers who use JDBC are familiar with. The following sample Java code demonstrates the use of the Java API to create a Java API search and process the
results.

public class SearchService
{

/**
* Search a catalog for a specific spec's attribute.
*
* @param catalog
* the catalog to search
* @param spec
* the spec to search
* @param attribute
* the attribute to search for
* @return the result set as an array of Strings
*/
public String[] search(String catalog, String spec, String attribute)
{
Context context = null;
SearchResultSet searchResultSet = null;
String[] searchResults = null;

try
{
// Obtain a PIM Context and a Search Manager
context = PIMWebServicesContextFactory.getContext();
}
catch (Exception e1)
{
e1.printStackTrace();
}

// Build a search string and obtain a Search query instance


String queryString = "select item ['" + spec + "/" + attribute + "'] from catalog('" + catalog + "')";
System.out.println("Query string built as : " + queryString);

SearchQuery query;
try
{
query = context.createSearchQuery(queryString);

// Execute the query


searchResultSet = query.execute();

// Process the Search result Set


if (searchResultSet != null && searchResultSet.size() > 0)
{
String currentResult = null;
searchResults = new String[searchResultSet.size()];
int resultsIndex = 0;

while (searchResultSet.next())
{
currentResult = searchResultSet.getString(1);
System.out.println("Result : " + resultsIndex + " is : " + currentResult);

// Add to the result set


searchResults[resultsIndex] = currentResult;
resultsIndex++;
}
}
else
{
// No results so return empty array
searchResults = new String[] {};
}
}
catch (PIMInternalException pie)
{
pie.printStackTrace();
}
catch (PIMAuthorizationException pae)
{
pae.printStackTrace();
}
catch (PIMSearchException pse)
{
pse.printStackTrace();
}

return searchResults;
    }
}

2. Deploy the user .jar file.
For more information, see Deploying a third party or custom user .jar file.

3. Register the web service in IBM® Product Master.


a. Access your Product Master instance and log in.
For example: http://yourWPCserver:yourWPCport/utils/enterLogin.jsp.
b. Click Collaboration Manager > Web Services > Web Service Console > New.
c. Provide the following values:
Web Service Name: Provide a name. For example, SearchService.
WSDL: Type <definition/> to indicate that Product Master should generate the WSDL for you.
Web Service Implementation: Select Java.
Java Implementation Class: Type the Java class of your web service. In the above example, you would type com.acme.javawbs.SearchService.
Deployed: Select this check box. Clearing this selection enables you to store web services in an inactive (unusable) state.
Authenticated: Select this check box so that the user name, company, and password that are supplied when the web service is started are validated.
d. Click Save. If you get an error similar to: Unable to verify Java implementation class, and you have no typographical errors in your fully qualified
Java class name, then you did not successfully deploy your Java class through the user .jar mechanism. Return to Step 2 and check whether your user .jar
appears in your class path.
For example, run ps -ef | grep java and check the Java process for Product Master.
4. Test starting your web service. After you retrieve a query, you can now use the execute() method to run the query and process the returned search result set:
http://YourWPCServer:YourWPCPort/services/ServiceName?method=search&spec=YourSpecName&attribute=YourAttributeName&catalogName=YourCatalogName
Where YourWPCServer and YourWPCPort are the host and port for accessing your Product Master system, ServiceName is the name that you gave the web service when you
deployed it to Product Master, and YourSpecName, YourAttributeName, and YourCatalogName are the spec, attribute, and catalog that you want to query.
Because you selected Authenticated, an Axis login dialog opens. Enter the appropriate user name and password, for example: User name: Joe@Acme Password:
passw0rd. The output in the browser window returns three items from the catalog.
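The query strings that createSearchQuery accepts follow the SQL-like form that is used throughout these examples. The following illustrative queries use hypothetical catalog, spec, and attribute names; the second form, with item.pk and a like filter, matches the style of the document literal script example earlier in this section:

select item['BookSpec/Title'] from catalog('Books')
select item.pk from catalog('Books') where item['BookSpec/Author'] like '%Smith%'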

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Example of creating a web service that uses a business object and complex logic
This example shows a more sizeable Java™ API application that uses a business object to provide backend function. This example also illustrates how to create your own
Java client to call your web service.

About this task


The function that is offered is to search all catalogs and return a list of items that meet a specified attribute=value criterion. This is a more realistic
example, involving a business object that the web service uses to look through all catalogs for any items with the specified attribute value.

Procedure
1. Write the Java code.
a. Write the code for the business object CatalogQuery. Here is a sample:

public class CatalogQuery
{

/**
* All catalogs are queried for any items with the attribute name/value pair
* provided.
*
* @return an array of strings containing either matching item names or the
* result of the query if no match was found
*
*/
public String[] getListOfItems(String attributeName, String attributeValue)
{

Collection<String> matchInfo = getItems(attributeName, attributeValue);


int size = matchInfo.size();

String[] list = new String[size];

Iterator<String> listIterator = matchInfo.iterator();

int i = 0;
while (listIterator.hasNext())
{
String string = (String) listIterator.next();
list[i++] = string;
}
return list;
}

/**
* getItems
*
* @return an ArrayList containing either matching item names or the result
* of the query if no match was found
*
*/
public Collection<String> getItems(String attributeName, String attributeValue)
{
Collection<String> result = new ArrayList<String>();

Collection<Catalog> wpcCatalogs = getCatalogs();


if (wpcCatalogs == null)
{
String noCatalogs = " There are no matching Items: No Catalogs found ";
result.add(noCatalogs);
return result;
}
else
{
Collection<String> matchingItemsOrResult = getItemsMatchingCriteria(wpcCatalogs, attributeName,
attributeValue);
return matchingItemsOrResult;
}
}

/**
* getCatalogs uses a sequence of Java API calls to access the catalogs in an
* IBM® Product Master instance from a web service
*
* @return a PIMCollection containing objects representing the catalogs
* found.
*
*/
private Collection<Catalog> getCatalogs()
{
Collection<Catalog> catalogs = null;
String errorMsg = null;
Context ctx = null;
try
{
ctx = PIMContextFactory.getContext("Admin", "trinitron", "BadWolf");

}
catch (Exception e)
{
errorMsg = "Error obtaining Context" + e.getMessage();
e.printStackTrace();
}
CatalogManager catalog_manager = null;
try
{
catalog_manager = ctx.getCatalogManager();
}
catch (Exception e)
{
errorMsg = "Error retrieving Catalog manager" + e.getMessage();
e.printStackTrace();
}
try
{
catalogs = catalog_manager.getCatalogs();
if (null == catalogs)
{
errorMsg = "No Catalogs found";
}
return catalogs;
}
catch (Exception e)
{
errorMsg = "Error retrieving Catalogs. " + e.getMessage();
e.printStackTrace();
}
String result = null;
if (null == errorMsg)
{
result = "Successfully completed Catalog interrogation";
}
else
{
result = errorMsg;
}
System.out.println(result);
return catalogs;
}

/**
* getItemsMatchingCriteria queries each catalog for items with a matching
* attribute name/value pair
*
* @return an ArrayList containing either matching item names or the result
* of the query if no match was found
*
*/
private Collection<String> getItemsMatchingCriteria(Collection<Catalog> wpcCatalogs, String attributeName, String
attributeValue)
{
// this Collection will be returned if the result is an error
Collection<String> result = new ArrayList<String>();

// a separate Collection to return if the result is good
Collection<String> listOfMatchingItemNames = new ArrayList<String>();

Iterator<Catalog> catalogIterator = wpcCatalogs.iterator();


// iterate through the Catalogs looking for items with the matching
// attributes
while (catalogIterator.hasNext())
{
Catalog wpcCatalog = catalogIterator.next();

if (wpcCatalog == null)
{
String catalogAltered = "System alterations have been made by another application: Catalog not found.";
result.add(catalogAltered);
return result;
}
else
{
try
{
String specName = wpcCatalog.getSpec().getName();

if (hasItems(wpcCatalog))
{
PIMCollection<Item> wpcItems = wpcCatalog.getItems();

// iterate through the items for matching attributes


Iterator<Item> itemsIterator = wpcItems.iterator();
while (itemsIterator.hasNext())
{
Item wpcItem = itemsIterator.next();
if (wpcItem == null)
{
String itemAltered = "System alterations have been made by another application: Item not found.";
result.add(itemAltered);
return result;
}
else if (attributeMatch(wpcItem, specName, attributeName, attributeValue))
{
String itemName = wpcItem.getDisplayName();
listOfMatchingItemNames.add(itemName);
}

}
}
}
catch (Exception e)
{
e.printStackTrace();
}
}
} // end of the Catalog iterator 'while'

// After we iterate through all the catalogs and build a list of matching items,
// we return the list of matching items.
// First, check whether there are any items in the Collection.
if (listOfMatchingItemNames.isEmpty())
{
String noMatches = "No items that matched the criteria were found. ";
result.add(noMatches);
return result;
}
else
{
return listOfMatchingItemNames;
}
}

/**
* hasItems determines whether a catalog contains items
*
* @return boolean
*
*/
private boolean hasItems(Catalog wpcCatalog)
{
try
{
PIMCollection<Item> items = wpcCatalog.getItems();
if (items == null)
{
return false;
}
else
{
return true;
}
}
catch (Exception e)
{
e.printStackTrace();
}
return false;



}

/**
* attributeMatch determines whether the item has a matching attribute and
* value
*
* @return boolean
*
*/
private boolean attributeMatch(Item wpcItem, String specName, String attributeName, String attributeValue)
{

// first establish whether the attribute exists for the item


try
{
String valueRetrieved = (String) wpcItem.getAttributeValue(specName + "/" + attributeName);
if (valueRetrieved == null)
{
// attribute is not there, so there will be no match
return false;
}
else
{
// check whether the attribute found has a value match
if (valueRetrieved.equals(attributeValue))
{
return true;
}
}
}
catch (Exception e)
{
e.printStackTrace();
}
return false;
}
}

b. Write the code for the web service, ItemFinderService, which uses the previous business object. Here is a sample:

/**
* This class is intended to be exposed as the Web service that drives
* CatalogQuery
*
* An attribute name/value pair is provided, and a list of items that contain
* this attribute name/value pair is returned.
*
* @return an array of string containing the primary keys (PKs) of items containing the
* attribute name/value pair.
*
*/
public class ItemFinderService
{

/**
* getListOfMatchingItems queries all catalogs for any items with the
* attribute name/value pair provided.
*
* @param String
* attributeName: search for this attribute name
* @param String
* attributeValue: if the attributeName exists, search for
* instances of that name with this value
*
* @return an array of strings containing either item names with matching
* attribute name/value or the result of the query if no match was
* found
*/
public String[] getListOfMatchingItems(String attributeName, String attributeValue)
{
CatalogQuery catalogQuery = new CatalogQuery();
String[] matchInfo = catalogQuery.getListOfItems(attributeName, attributeValue);
return matchInfo;
    }
}

2. Deploy the user .jar file.


For more information, see Deploying a third party or custom user .jar file.

3. Register the web service in Product Master.


a. Access your Product Master instance and log in.
For example: http://yourWPCserver:yourWPCport/utils/enterLogin.jsp.
b. Click Collaboration Manager > Web Services > Web Service Console > New.
c. Provide the following values:
Web Service Name: Provide a name. For example, ItemFinderService.
WSDL: Type:

<?xml version="1.0" encoding="UTF-8"?>


<wsdl:definitions targetNamespace="http://javawbs.acme.com"
xmlns="http://schemas.xmlsoap.org/wsdl/"
xmlns:apachesoap="http://xml.apache.org/xml-soap"
xmlns:impl="http://javawbs.acme.com"
xmlns:intf="http://javawbs.acme.com"
xmlns:soapenc="http://schemas.xmlsoap.org/soap/encoding/"
xmlns:tnsl="http://javawbs.acme.com"
xmlns:wsdl="http://schemas.xmlsoap.org/wsdl/"
xmlns:wsdlsoap="http://schemas.xmlsoap.org/wsdl/soap/"
xmlns:soap12="http://schemas.xmlsoap.org/wsdl/soap12/"
xmlns:xsd="http://www.w3.org/2001/XMLSchema">
<wsdl:types>
<schema targetNamespace="http://javawbs.acme.com"
xmlns="http://www.w3.org/2001/XMLSchema">
<element name="getListOfMatchingItems">
<complexType>
<sequence>
<element name="attributeName"
type="xsd:string"/>
<element name="attributeValue"
type="xsd:string"/>
</sequence>
</complexType>
</element>
<element name="getListOfMatchingItemsResponse">
<complexType>
<sequence>
<element maxOccurs="unbounded" minOccurs="0"
name="getListOfMatchingItemsReturn" type="xsd:string"/>
</sequence>
</complexType>
</element>
</schema>
</wsdl:types>
<wsdl:message name="getListOfMatchingItemsResponse">
<wsdl:part element="tnsl:getListOfMatchingItemsResponse"
name="parameters"/>
</wsdl:message>
<wsdl:message name="getListOfMatchingItemsRequest">
<wsdl:part element="tnsl:getListOfMatchingItems"
name="parameters"/>
</wsdl:message>
<wsdl:portType name="ItemFinderService">
<wsdl:operation name="getListOfMatchingItems">
<wsdl:input
message="impl:getListOfMatchingItemsRequest"
name="getListOfMatchingItemsRequest"/>
<wsdl:output
message="impl:getListOfMatchingItemsResponse"
name="getListOfMatchingItemsResponse"/>
</wsdl:operation>
</wsdl:portType>
<wsdl:binding name="ItemFinderServiceSoapBinding"
type="impl:ItemFinderService">
<wsdlsoap:binding
transport="http://schemas.xmlsoap.org/soap/http"/>
<soap12:binding
transport="http://schemas.xmlsoap.org/soap/http"/>
<wsdl:operation name="getListOfMatchingItems">
<wsdlsoap:operation
soapAction="getListOfMatchingItems"/>
<wsdl:input name="getListOfMatchingItemsRequest">
<wsdlsoap:body namespace="http://javawbs.acme.com"
use="literal"/>
</wsdl:input>
<wsdl:output name="getListOfMatchingItemsResponse">
<wsdlsoap:body namespace="http://javawbs.acme.com"
use="literal"/>
</wsdl:output>
</wsdl:operation>
</wsdl:binding>
<wsdl:service name="ItemFinderServiceService">
<wsdl:port binding="impl:ItemFinderServiceSoapBinding"
name="ItemFinderService_PORT">
<wsdlsoap:address
location="http://<your-WPC-server>:<your-WPC-port>/services/ItemFinderService"/>
</wsdl:port>
</wsdl:service>
</wsdl:definitions>

Web Service Implementation: Select Java.


Java Implementation Class: Type the Java class of your web service. For the previous example, type com.acme.javawbs.ItemFinderService.
WSDD: Type:

<deployment
    xmlns="http://xml.apache.org/axis/wsdd/"
    xmlns:java="http://xml.apache.org/axis/wsdd/providers/java">
  <!--Services from ItemFinderService WSDL service-->
  <service name="ItemFinderService" provider="java:WPCDocument" use="literal">
    <parameter name="wsdlTargetNamespace" value="http://javawbs.acme.com"/>
    <parameter name="wsdlServiceElement" value="ItemFinderService"/>
    <parameter name="wsdlServicePort" value="ItemFinderService"/>
    <parameter name="className" value="com.acme.javawbs.ItemFinderService"/>
    <parameter name="wsdlPortType" value="ItemFinderService"/>
    <operation name="getListOfMatchingItems"
        qname="operNS:getListOfMatchingItems" xmlns:operNS="http://javawbs.acme.com"
        returnQName="getListOfMatchingItemsReturn"
        returnType="rtns:string" xmlns:rtns="http://www.w3.org/2001/XMLSchema">
      <parameter name="attributeName" type="tns:string"
          xmlns:tns="http://www.w3.org/2001/XMLSchema"/>
      <parameter name="attributeValue" type="tns:string"
          xmlns:tns="http://www.w3.org/2001/XMLSchema"/>
    </operation>
    <parameter name="allowedMethods" value="getListOfMatchingItems"/>
    <parameter name="scope" value="Session"/>
  </service>
</deployment>

Deployed: Select this check box. Clearing this selection enables you to store web services in an inactive (unusable) state.
Authenticated: Do not select this check box.
d. Click Save. If you get an error similar to: Unable to verify Java implementation class, and you have no typographical errors in your fully qualified
Java class name, then you did not successfully deploy your Java class through the user .jar mechanism. Return to Step 2 and check whether your user .jar
appears in your class path.
For example, run ps -ef | grep java and check the Java process for Product Master.
4. Test starting your web service. Your web service is now deployed and ready for use. Type:
http://YourWPCServer:YourWPCPort/services/ServiceName?
method=getListOfMatchingItems&attributeName=YourAttributeName&attributeValue=YourAttributeValue
Where YourWPCServer and YourWPCPort are the host and port for accessing your Product Master system, ServiceName is the name that you gave the web service when you
deployed it to Product Master, YourAttributeName is the name of the attribute that you want to examine, and YourAttributeValue is the value of that attribute that
you want to look for.
The output in the browser window returns two matching items from any Product Master catalog that contains a field that is called "color" with a value of "red".

5. Create a client application to call the web service. The following instructions apply to Rational® Software Architect, but you can use different tools for writing SOAP
web service clients.
a. Open Rational Software Architect and open a new Java project. Call it ItemFinderClient.
b. Go to the URL that appeared in the WSDL URL field when you saved the ItemFinderService web service (or copy the WSDL from the WSDL text box). After you
obtain the WSDL, save it to a file.
For example, ItemFinderService.wsdl. Your client needs the web service WSDL definition to know how to interact with the web service.
c. Import the ItemFinderService.wsdl file into your Java project.
d. Ensure that you have the following capabilities that are enabled in your workspace preferences:
Web Service Developer
XML Developer
Advanced Java Platform, Enterprise Edition
e. Right-click the WSDL file and select Web Services > Generate Client.
f. Select the default options for the web service. Do not select Test web service because this runs a separate application server within Rational Software
Architect and will not work for Product Master.
g. Click Finish. You have now generated the files that provide the methods that you need to access the web service from your Java code.
h. Create a Java class that is called ItemFinderServiceClient.java with the following code:

public class ItemFinderServiceClient
{
public static void main(String args[]) throws RemoteException
{
// test the ItemFinderService Web Service
ItemFinderServiceProxy proxy = new ItemFinderServiceProxy();
ItemFinderService service = proxy.getItemFinderService();

// call the web service and return matching items


String[] returnedItems = service.getListOfMatchingItems("color","red");

// report number of matches


System.out.println("ItemFinderService.getItems("color",\"red\") returned "+returnedItems.length+"
matching items:");

// print details
for (int i=0; i < returnedItems.length; i++)
{
String itemString = (String) returnedItems[i];
int itemNo = i+1;
System.out.println("Item "+itemNo+": "+itemString);
}
}
}

i. Ensure that Product Master is started, your user .jar with the current version of the web service class is deployed, and the web service is registered and saved
as deployed in Product Master.
j. In Rational Software Architect, run the ItemFinderServiceClient class. If everything is set up correctly, you see the output from calling the web Service in the
Rational Software Architect console view.

ItemFinderService.getItems("color",
"red") returned 2 matching items:
Item 1: Item27
Item 2: Item43

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Example of creating a web service that retrieves information from an item



This example shows how to create a web service that retrieves information from an item, and returns a custom object to the web service. This example also demonstrates
the use of Java2WSDL and WSDL2Java in creating WSDD.

About this task


This example creates a web service that enables you to query a catalog with a primary key and get back a Java™ object instance that represents a catalog-specific item.
The catalog is for books, so items have a primary key attribute of ISBN and other attributes such as title and author. The web service allows users to query the catalog and
get an instance of a custom book class. Users can then query this instance in application code.

Procedure
1. Write the Java code. Here is a sample:
Main Class:

public class BookCatalogService
{
/**
* Queries the specified catalog for the book with the specified ISBN
* number.
*/
public Book getBookFromCatalog(String catalogName, String isbnNumber) throws Exception
{
Context context = null;
try
{
context = PIMWebServicesContextFactory.getContext();
CatalogManager catalogManager = context.getCatalogManager();
Catalog catalog = catalogManager.getCatalog(catalogName);

if (catalog == null)
{
throw new Exception("No Catalog found.");
}
else
{
Item item = catalog.getItemByPrimaryKey(isbnNumber);
if (item == null)
{
throw new Exception("No matching Item found.");
}
else
{
System.out.println("Item was " + item);
/*
* Now we have an item, we can extract the application
* specific details and populate our application object
* instance
*/
Book book = new Book();
book.setISBN(Integer.parseInt(item.getPrimaryKey()));

// The spec name is required to fully qualify the attribute name
String spec = "BookSpec";

book.setAuthor(item.getAttributeValue(spec + "/" + "Author").toString());
book.setTitle(item.getAttributeValue(spec + "/" + "Title").toString());
book.setDescription(item.getAttributeValue(spec + "/" + "Description").toString());
book.setPrice(Float.parseFloat(item.getAttributeValue(spec + "/" + "Price").toString()));
book.setStockLevel(Integer.parseInt(item.getAttributeValue(spec + "/" + "StockLevel").toString()));

return book;
}
}
}
catch (Exception e)
{
e.printStackTrace();
throw new Exception(e.getMessage());
}
    }
}

Book Object Class:

/**
 * Simple application wrapper class which represents an Item from a
 * catalog for books.
 */
public class Book
{
    private int ISBN;
    private String title;
    private String author;
    private String description;
    private float price;
    private int stockLevel;

    public String getAuthor()
    {
        return author;
    }

    public void setAuthor(String author)
    {
        this.author = author;
    }

    public String getDescription()
    {
        return description;
    }

    public void setDescription(String description)
    {
        this.description = description;
    }

    public int getISBN()
    {
        return ISBN;
    }

    public void setISBN(int isbn)
    {
        ISBN = isbn;
    }

    public float getPrice()
    {
        return price;
    }

    public void setPrice(float price)
    {
        this.price = price;
    }

    public int getStockLevel()
    {
        return stockLevel;
    }

    public void setStockLevel(int stockLevel)
    {
        this.stockLevel = stockLevel;
    }

    public String getTitle()
    {
        return title;
    }

    public void setTitle(String title)
    {
        this.title = title;
    }
}


2. Generate the WSDD for the web service.


This step shows how to use two Axis utilities: Java2WSDL and WSDL2Java to create WSDD for a web service. Java2WSDL creates WSDL that represents Axis's view
of the web service that is implemented by the specified class. WSDL2Java creates Java classes from WSDL to implement a web service that uses Axis. More
importantly, WSDL2Java is able to create a WSDD document that can be used to deploy that service to Axis. The generated WSDD contains the definitions for
serializers that Axis expects to use for the web service.

a. Create a temporary directory.


For example ws_tmp to use while you generate the WSDL and WSDD. Type mkdir ws_tmp.
b. Go to the temporary directory. Type cd ws_tmp.
c. Add the Axis .jar files to your CLASSPATH, so you can run the Axis tools.
Note: This code is for Windows. It will differ on other platforms. Also the directory for Axis might differ on your system.

set CLASSPATH=%CLASSPATH%;C:\Axis\lib\axis.jar;
C:\Axis\lib\jaxrpc.jar;C:\Axis\lib\commons-logging-1.0.4.jar;
C:\Axis\lib\commons-discovery-0.2.jar;C:\Axis\lib\wsdl4j-1.5.1.jar;
C:\Axis\lib\saaj.jar;

Note: There is no line break in this command. Replace C:\Axis\lib with the location of your Axis lib directory.
d. Run Java2WSDL to generate the WSDL that describes your web service. Type:

java org.apache.axis.wsdl.Java2WSDL -o BookCatalogService.wsdl
    -l "http://myWpcHost:9080/services/BookCatalogService"
    -n "http://javawbs.acme.com"
    com.acme.javawbs.BookCatalogService -y WRAPPED -u LITERAL

Do not include the line breaks. Where http://myWpcHost:9080/services/BookCatalogService is the URL for the service that you plan to deploy, in
the format:
http://wpcHostname:wpcPort/services/WebServiceName
wpcHostname is the name that you give when you deploy the web service in the console. -n specifies the namespace that is used to distinguish your WSDL from
other namespaces (inverting the package name is the convention, but any string that is unique to your installation works).
com.acme.javawbs.BookCatalogService is the fully qualified name of the Java class that implements the service. BookCatalogService.wsdl is the output file
name for the WSDL. -y WRAPPED -u LITERAL ensures that the generated WSDL is wrapped-document/literal based, which is the correct type for IBM®
Product Master document/literal deployments. For rpc/encoded services, use -y RPC -u ENCODED.
You get a WSDL document.

e. Run WSDL2Java to generate the WSDD file. Type:

java org.apache.axis.wsdl.WSDL2Java -o . -d Session -s -S true BookCatalogService.wsdl

BookCatalogService.wsdl is the output file name from the previous step. -S true can be changed to -S false to generate individual entries for each operation,
which can be needed for more complex services, such as those returning arrays of custom types.
After WSDL2Java runs, a directory tree that contains the generated Java appears in your temporary directory. Along with the Java files, there is an example
WSDD document for the service, called deploy.wsdd, which can be used as a template for your own WSDD file.

f. Edit the WSDD file for use with Product Master. Open the generated WSDD document: more com/acme/javawbs/deploy.wsdd, where
com/acme/javawbs is the package name of the web service implementation class. Make the following changes:
Change provider="java:RPC" to provider="java:WPCDocument" for document literal services; for RPC/Encoded based services, use
provider="java:WPCRPC". This change is required to ensure Product Master security. The Web Services Console does not save a service that is
based on a default Axis provider because these services do not validate Product Master user names for authenticated services.
Change <parameter name="className" value="..."> to <parameter name="className" value="com.acme.javawbs.BookCatalogService">, where the
value= parameter is the fully qualified class name of the implementation class. The output XML is a valid WSDD document that can be entered into
the WSDD field when you deploy the web service to Product Master.
g. Clean up your directories. The entire ws_tmp directory can be cleaned up and deleted after the deploy.wsdd and BookCatalogService.wsdl files are copied
elsewhere.
3. Deploy the user .jar file.
For more information, see Deploying a third party or custom user .jar file.

4. Register the web service in Product Master.


a. Access your Product Master instance and log in.
For example: http://yourWPCserver:yourWPCport/utils/enterLogin.jsp.
b. Click Collaboration Manager > Web Services > Web Service Console > New.
c. Provide the following values:
Web Service Name: Provide a name. For example, CatalogService.
WSDL: Copy and paste the entire contents of BookCatalogService.wsdl into the WSDL field.
Web Service Implementation: Select Java.
Java Implementation Class: Type the Java class of your web service. For the above example, you type com.acme.javawbs.BookCatalogService.
WSDD: Copy and paste the entire contents of your edited copy of deploy.wsdd into the WSDD field.
Deployed: Select this check box. Clearing this selection stores web services in an inactive (unusable) state.
Authenticated: Do not select this check box.



Authenticated means that the web service expects a user name and password to be supplied. This is done through an Axis dialog if you call the
web service through the web browser; if you use a custom Java client, the user name and password must be included in the SOAP header.
The correct format for an Axis user name is "User@Company", for example "Joe@Acme".

Unauthenticated means that the web service contacts Product Master with the user name and company that are specified in the soap_company and
soap_user fields in $TOP/etc/default/common.properties. In this scenario, the password is not checked. Because only the administrator should have
access to change files on the Product Master server, this does not constitute a security risk. To protect against unauthorized access, administrators
should ensure that the SOAP company and user that are specified for Product Master in common.properties are not an Admin user.

Both fields can be left blank to disable unauthenticated access completely.

If the service cannot be authenticated, a SOAP fault is returned in the message body.

d. Click Save. If you get an error similar to: Unable to verify Java implementation class, and you have no typographical errors in your fully qualified
Java class name, then you did not successfully deploy your Java class through the user .jar mechanism. Return to Step 3 and check whether your user .jar
appears in your class path.
For example, run ps -ef | grep java and check the Java process for Product Master. Your web service is now deployed.
5. Access the web service through the service URL, or generate your own Java application client.
For more information, see Example of creating a web service that uses a business object and complex logic.

You can now write a Java client application that starts your web service through the generated Java proxy, which enables you to write business logic easily. Here is
an example of some client code:

/**
* Test the BookCatalogService via a generated proxy.
*/
public class BookCatalogApplication {

    public static void main(String[] args)
    {
BookCatalogServiceProxy bookCatalogServiceProxy = new BookCatalogServiceProxy();
try {
Book book = bookCatalogServiceProxy.getBookFromCatalog("BookCatalog", "140035206");
System.out.println("Got the Book instance as : " + book);
if(book != null){
System.out.println("ISBN : " + book.getISBN());
System.out.println("Author : " + book.getAuthor());
System.out.println("Description : " + book.getDescription());
System.out.println("Price : " + book.getPrice());
System.out.println("StockLevel : " + book.getStockLevel());
}
} catch (RemoteException e) {
e.printStackTrace();
}
}
}

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Example of handling namespaces when you create a web service


This example shows how to handle namespaces when you create a web service. Depending on how you structure your code and use the Java2WSDL and
WSDL2Java commands, you must ensure that a valid response is generated when the web service is deployed.

About this task


In the Example of creating a web service that retrieves information from an item topic, you created a web service implementation class, BookCatalogService.java, which
returned a response object of class Book.java. If, in your code structure, BookCatalogService.java and Book.java belong to the same package, the steps that
are given in that example suffice. However, if they belong to different packages (for example, if BookCatalogService.java belongs to com.acme.service and Book.java
belongs to com.acme.service.to), the WSDL generated by the tool maps each package to a different namespace. In this case, you must include the Helper classes, which
can be optionally generated by the Java2WSDL or WSDL2Java command, in the .jar file that you deploy for the web service.

Procedure
1. Generate the WSDD for the web service.
a. Create a temporary directory to use while you generate the WSDL and WSDD. For example, type mkdir ws_tmp.
b. Go to the temporary directory. Type cd ws_tmp.
c. Add the Axis .jar files to your CLASSPATH so that you can run the Axis tools.

set CLASSPATH=%CLASSPATH%;C:\Axis\lib\axis.jar;
C:\Axis\lib\jaxrpc.jar;C:\Axis\lib\commons-logging-1.0.4.jar;
C:\Axis\lib\commons-discovery-0.2.jar;C:\Axis\lib\wsdl4j-1.5.1.jar;
C:\Axis\lib\saaj.jar;



Note: This example code is for Windows; it differs on other platforms. Also, the directory for Axis might differ on your system. There is no line break in this
command. Replace C:\Axis\lib with the location of your Axis lib directory.
d. Ensure that the directory you are using for Java2WSDL has all the compiled classes and that the folder structure is consistent with the package structure.
For example, com/acme/service/BookCatalogService.class and com/acme/service/to/Book.class.
e. Run Java2WSDL to generate the WSDL that describes your web service, WSDD, and Helper Class. Type the following command:

java org.apache.axis.wsdl.Java2WSDL -o BookCatalogService.wsdl
    -l "http://myWpcHost:9080/services/BookCatalogService"
    com.acme.service.BookCatalogService -y WRAPPED -u LITERAL -d
Do not include the line break. Where:


http://myWpcHost:9080/services/BookCatalogService is the URL for the service that you plan to deploy, in the format:

http://wpcHostname:wpcPort/services/WebServiceName

wpcHostname is the name that you give when you deploy the web service in the console.
com.acme.service.BookCatalogService is the fully qualified name of the Java™ class that implements the service.
BookCatalogService.wsdl is the output file name for the WSDL.
-y WRAPPED -u LITERAL ensures that the generated WSDL is wrapped-document/literal based, which is the correct type for IBM®
Product Master document/literal deployments. For RPC/Encoded services, use -y RPC -u ENCODED.
-d is for deployment. This parameter is for convenience, as it also generates the WSDD and Helper classes in the same step.
Note: The Helper classes can also be generated by using the -H option with the WSDL2Java command, but the -d option is more convenient.
A WSDL document, WSDD, and a Helper class are created. The WSDL is in the ws_tmp folder. The deploy.wsdd is in the /com/acme/service directory, and the
helper class for the response object, for example, Book_Helper.java, is in the /com/acme/service/to directory.
f. Edit the WSDD file for use with Product Master. Open the generated WSDD document com/acme/service/deploy.wsdd, where com/acme/service is the
package name of the web service implementation class. Make the following changes:
Change provider="java:RPC" to provider="java:WPCDocument" for document literal services; for RPC/Encoded based services, use
provider="java:WPCRPC". This change is required to ensure Product Master security. The Web Services Console does not save a service that is
based on a default Axis provider because these services do not validate Product Master user names for authenticated services.
Ensure that <parameter name="className" value="..."> has the form <parameter name="className" value="com.acme.service.BookCatalogService">,
where the value= parameter is the fully qualified class name of the implementation class. If not, modify it accordingly. The output XML is a valid
WSDD document that can be entered into the WSDD field when you deploy the web service to Product Master.
g. Copy the Book_Helper.java class to your web service project. Put it in the same folder as the Book.java class. Compile the project and export it as a .jar file.
h. Clean up your directories. The entire ws_tmp directory can be cleaned up and deleted after the deploy.wsdd, BookCatalogService.wsdl files are copied
elsewhere.
2. Deploy the user .jar file.
For more information, see Deploying a third party or custom user .jar file.

3. Register the web service in Product Master.


a. Access your Product Master instance and log in.
For example: http://yourWPCserver:yourWPCport/utils/enterLogin.jsp.
b. Click Collaboration Manager > Web Services > Web Service Console > New.
c. Provide the following values:
Web Service Name: Provide a name. For example, CatalogService.
WSDL: Copy and paste the entire contents of BookCatalogService.wsdl into the WSDL field.
Web Service Implementation: Select Java.
Java Implementation Class: Type the Java class of your web service. For the above example, you type com.acme.service.BookCatalogService.
WSDD: Copy and paste the entire contents of your edited copy of deploy.wsdd into the WSDD field.
Deployed: Select this check box. Clearing this selection stores the web services in an inactive (unusable) state.
Authenticated: Do not select this check box.
Authenticated means that the web service expects a user name and password to be supplied. This is done through an Axis dialog if you call
the web service through the web browser; if you use a custom Java client, the user name and password must be included in the
SOAP header. The correct format for an Axis user name is "User@Company", for example "Joe@Acme".

Unauthenticated means that the web service contacts Product Master with the user name and company that are specified in the soap_company and
soap_user fields in $TOP/etc/default/common.properties. In this scenario, the password is not checked. Because only the administrator should have access to
change files on the Product Master server, this does not constitute a security risk. To protect against unauthorized access, administrators should
ensure that the SOAP company and user that are specified for Product Master in common.properties are not an Admin user.

Both fields can be left blank to disable unauthenticated access completely.

If the service cannot be authenticated, a SOAP fault is returned in the message body.

d. Click Save. If you get an error similar to: Unable to verify Java implementation class, and you have no typographical errors in your fully qualified
Java class name, then you did not successfully deploy your Java class through the user .jar mechanism. Return to Step 2 and check whether your user .jar
appears in your class path.
For example, ps -ef | grep java and check the Java process for Product Master. Your web service is now deployed.
4. Access the web service through the service URL, or generate your own Java application client.
For more information, see Example of creating a web service that uses a business object and complex logic.

You can now write a Java client application that starts your web service through the generated Java proxy, which enables you to write business logic easily. Here is an
example of some client code:

/**
* Test the BookCatalogService via a generated proxy.
*/
public class BookCatalogApplication {

    public static void main(String[] args)
    {
BookCatalogServiceProxy bookCatalogServiceProxy = new BookCatalogServiceProxy();
try {
Book book = bookCatalogServiceProxy.getBookFromCatalog("BookCatalog", "140035206");
System.out.println("Got the Book instance as : " + book);



if(book != null){
System.out.println("ISBN : " + book.getISBN());
System.out.println("Author : " + book.getAuthor());
System.out.println("Description : " + book.getDescription());
System.out.println("Price : " + book.getPrice());
System.out.println("StockLevel : " + book.getStockLevel());
}
} catch (RemoteException e) {
e.printStackTrace();
}
}
}

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Web services outside of Product Master


A service outside of IBM® Product Master is a custom-hosted web service. It is hosted on a server other than the Product Master server. You can implement this web service in
Java™ by following the standard web service implementation steps. You can deploy a custom-hosted web service to IBM Product Master.

Procedure
1. Implement the web service.
To implement the web service by using Rational® Software Architect, see the Rational Software Architect product documentation for the detailed steps.

For an example of web services implementation, see the sample web services.

2. Deploy the web service.


For example, see Deploying a web service.
3. Access the web service.
For example, see Access web services.

Deploying a web service


You must deploy a web service after you implement it. This information describes how to deploy a web service on WebSphere Application Server.
Access web services
You can access the web service with the endpoint where your web service is running.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Deploying a web service


You must deploy a web service after you implement it. This information describes how to deploy a web service on WebSphere® Application Server.

Before you begin


You must already have a web service that is wrapped in an .ear file.

Procedure
1. Configure the application server.
a. Start the server: $WAS_HOME/bin/startServer.sh server_name.
b. Go to the Administration Console: http://server_name:9060/ibm/console. Replace the port number with the port number of your admin server.
c. Configure the JVM. You configure two things in the JVM: the class path and the custom properties.

Class path
Application servers > server_name > Process Definition > Java Virtual Machine:
/opt/WebSphere/AppServer/java/lib/tools.jar
/opt/db2inst9/sqllib/java/db2jcc.jar
/opt/db2inst9/sqllib/java/db2jcc_license_cu.jar
For Oracle: /opt/oracle/Ora11gHome/jdbc/lib/ojdbc5.jar
$TOP/jars/ccd_javaapi2.jar
$TOP/jars/ccd_svr.jar
$TOP/jars/mdm_cache.jar
$TOP/jars/ehcache-1.6.0.jar
$TOP/jars/axis_1.4.jar
$TOP/jars/jakarta-regexp-1.5.jar
$TOP/jars/log4j-1.2.15.jar
$TOP/jars/icu4j-4_2.jar
$TOP/jars/commons-dbcp2-2.1.1.jar
$TOP/jars/commons-pool2-2.4.2.jar
$TOP/jars/commons-collections-3.2.1.jar
$TOP/jars/commons-configuration-1.6.jar
$TOP/jars/commons-lang-2.4.jar
$TOP/jars/commons-io-1.4.jar
$TOP/jars/concurrent-utils-1.3.2.jar
$TOP/jars/Cup.jar
$TOP/jars/JSON4J_1.0.0.2.jar
$TOP/jars/xalan_2_7_5.jar
$TOP/jars/xercesImpl_4_4_6.jar
Custom properties
Application servers > server_name > Process Definition > Java Virtual Machine > Custom Properties:
CCD_ETC_DIR: The value of $TOP/etc
TOP: The value of $TOP
enableJava2Security: true
exit_if_config_file_not_found: no
java.security.policy: The value of $TOP/etc/default/java.policy
svc_name: The name of the application server, for example, appsvr_FROGGER

2. Deploy the sample services with WebSphere Application Server as a New Enterprise Application.
a. Provide the location of the .ear file.
b. Specify the context root or application name for the application, for example, /webservices.samples, and follow the steps of deployment.
c. Make sure that the deployed web service application is started.

What to do next
Next, access the web service.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Access web services


You can access the web service with the endpoint where your web service is running.

For example, to access the sample Scheduler web services, use the following endpoint, replacing the host name and port number with your own:

http://hostname:9083/webservices.samples/SchedulerService

You can access the endpoint with your client program or through the web services explorer in Rational® Software Architect.
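For a quick check from the command line, you can also post a SOAP request directly to the endpoint. The following sketch assumes that a request.xml file contains a valid SOAP envelope for one of the service operations; the file name is illustrative.

curl -X POST -H "Content-Type: text/xml;charset=UTF-8" -H 'SOAPAction: ""' --data @request.xml http://hostname:9083/webservices.samples/SchedulerService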

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Developing scripts and extension points with the script workbench


You can develop scripts and Java™ extension points with script workbench for IBM® Product Master.

Script workbench for Product Master is an Eclipse-based set of tools that help you create or update scripts. A typical IBM Product Master implementation relies on scripts
to tailor the particular implementation to fulfill your user's requirements.
You can use Java programs instead of scripts. You can run a Java program from a Product Master Java extension point. A wizard in the Script workbench for Product
Master helps you create a Java program to run from an extension point.

Overview of the script workbench


You use Script workbench for Product Master to create and edit script files, ASP files, and JSP-like files.
Installing the script workbench
Script workbench for IBM Product Master is a collection of extensions or plug-ins to IBM Rational® Software Architect version 8 or later.
Modify preferences and properties
You can use script workbench for IBM Product Master to modify and alter preferences, such as the task tags and use of function calls within scripts, and properties
that relate to script workbench for Product Master.
Develop scripts with the script workbench for Product Master
The script workbench for Product Master contains a script editor that you can use to help create or update a script. The script editor starts automatically on a
new script that is created with the script creation wizard. The script editor can be started on an existing script by double-clicking the file.
Develop Java extension points using the script workbench
There are various points within IBM Product Master where you can modify the behavior by running some user-defined business logic. In the past, the business
logic was provided by writing scripts; however, you can now write an extension point in Java.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)



Overview of the script workbench
You use Script workbench for Product Master to create and edit script files, ASP files, and JSP-like files.

The script files contain only the Product Master script commands. The ASP files and JSP-like files are a mixture of HTML and Product Master script commands and are
typically used to create custom web-based tools and user interfaces.

The script editor provides the following features:

A number of facilities to create or update scripts, including content-assistance on the script operations that are used in the script and on functions that are defined
in the script
Automatic checking of scripts and ASP files and JSP-like script files for errors and warnings
A server explorer view capability that enables you to work with the contents of document repositories (docstore) on remote Product Master servers
A script test facility that runs scripts on remote Product Master servers and returns information about the execution of the script

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Installing the script workbench


Script workbench for IBM® Product Master is a collection of extensions or plug-ins to IBM Rational® Software Architect version 8 or later.

Before you begin


Ensure that you have IBM Rational Software Architect version 8.0 or later installed. The alphaWorks® version of scripting workbench must be uninstalled before you install
the script workbench for Product Master.

Installing the script workbench into Rational Software Architect


You can install the script workbench into IBM Rational Software Architect for IBM Product Master.
Uninstalling the alphaWorks version
You need to uninstall the alphaWorks version from IBM Rational Software Architect.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Installing the script workbench into Rational Software Architect


You can install the script workbench into IBM® Rational® Software Architect for IBM Product Master.

Before you begin


If you previously installed the alphaWorks® version of script workbench for Product Master, you must uninstall it before you proceed with this installation. The alphaWorks
version and this version of Script workbench for Product Master cannot be installed together into the same instance of IBM Rational Software Architect.

Procedure
1. Click Help > Install New Software. The Installation window displays.
2. Click the What is already installed? link to check whether there is an existing Script workbench for Product Master.
3. Select the Installed Software tab and search for Script Workbench...IBM Product Master, where ... is the product version extension or base code. If there is an
existing Script workbench for Product Master:
a. Select Script Workbench...IBM Product Master and click Uninstall.
b. Click Finish. You might need to restart your system after the uninstallation is complete.
4. Install Script workbench for Product Master.
a. Back on the Installation window, click Add.
b. Find the workbench compressed file for your current product version in the workbench folder on the product CD or the unpacked archive that is downloaded
from the product site.
c. Click Archive and specify the location of the Script workbench for Product Master compressed file.
d. Clear the Group items by category check box.
e. Select the check boxes of the two Script workbench for Product Master files and click Next to install.
f. Select I Accept the terms in the license agreement, and click Finish.
g. If a warning message displays, click OK to install.
h. After the installation is complete, restart Rational Software Architect.
5. Verify the installed features:
For a fresh install, check for base feature and your current product version feature under installed features.
For an update install, installing your current product version while you have an earlier version of script workbench, check for an updated base feature, your
current product version feature, and features for previous versions.

IBM Product Master 12.0 Fix Pack 8



Operating Systems: AIX, Linux, and Windows (Workbench only)

Uninstalling the alphaWorks version


You need to uninstall the alphaWorks® version from IBM® Rational® Software Architect.

Procedure
1. Click Help > Install New Software. The Installation window displays.
2. Click the What is already installed? link to check whether there is an existing alphaWorks version of script workbench for IBM Product Master.
3. Scroll to locate Product Master script workbench. Select this entry.
4. Click Uninstall. An uninstallation confirmation window opens.
5. Click Finish.
6. When requested to do so, restart Rational Software Architect.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Modify preferences and properties


You can use script workbench for IBM® Product Master to modify and alter preferences, such as the task tags and use of function calls within scripts, and properties that
relate to script workbench for Product Master.

Setting project properties


When a file is saved, the script files are checked for errors. The selection of error conditions that are detected by the script editor is configurable with the project
properties.
Setting script workbench preferences
You can modify the preferences of the script editor, for example, customizing the color of script language elements.
Enabling task tags
You can flag tasks that need to be completed by using a scanner to identify the task markers. When the scanner finds a task marker, the task is flagged in the left-
margin of the editor pane and it also appears in the Tasks view on the script workbench for IBM Product Master.
Setting file associations
You can specify which file extensions you are using with your scripts. To ensure that your script editor is working with the same file suffixes, you can choose the file
association preferences.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Setting project properties


When a file is saved, the script files are checked for errors. The selection of error conditions that are detected by the script editor is configurable with the project
properties.

About this task


The project properties file settings cause a selection of files to be created with a .prefs file extension. These files are placed in a .settings folder in the root of your project.
These .prefs files can be checked in and out of a code control system and kept with the code throughout its lifecycle.
Note: Do not manually edit these files. Instead, use the project properties user interface to edit the files.
Only properties that are set to deviate from the default values are stored in these .prefs files.

Procedure
1. Right-click on your project, and select Properties > Script operations.
Important: Do not modify any of the Builders properties.
2. Select whether you would like the type of problem that is detected by the check to be Ignored, flagged as an Error, or flagged as a Warning in the Checks against the
IBM Product Master scripts box. The checks run on the various script operations that are used within the scripts and ensure that the usage of the script operations
matches the specified IBM® Product Master version.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Setting script workbench preferences


You can modify the preferences of the script editor, for example, customizing the color of script language elements.

About this task


The script editor scans the source file for tokens. According to the token type, the text displays in the specified color.

Procedure
1. Select Window > Preferences > IBM® Product Master.
2. Select Script Language > Editor under the + symbol to customize the color of script language elements.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Enabling task tags


You can flag tasks that need to be completed by using a scanner to identify the task markers. When the scanner finds a task marker, the task is flagged in the left-margin of
the editor pane and it also appears in the Tasks view on the script workbench for IBM® Product Master.

About this task


You can also work within an outline view, which supports nested function calls. The outline view provides an overall structural view of the file. The names of functions that
are defined in the script editor display in the outline view. Click a function name in the Outline pane. The script editor moves your cursor to the line in which the function is
defined within the script.

Procedure
1. Select Window > Preferences > IBM Product Master.
2. Select Script Language > Task tag under the + symbol.
3. Provide a name for Task tag field.
4. Select the Enable task tag processing check box. This check box ensures that the scanner identifies the task markers. Click Apply. Click OK.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Setting file associations


You can specify which file extensions you are using with your scripts. To ensure that your script editor is working with the same file suffixes, you can choose the file
association preferences.

Procedure
Set the file association. Use either the user interface or a file association.

UI:
a. Click Preferences > General > Editors.
b. Select File Associations.
c. Add the new suffix in the dialog box and then add the InfoSphere® editor in the box. Click OK.

File association:
a. Right-click the file whose extension you want to change.
b. Select Open with.
c. Select Other.
d. Select the IBM® Product Master editor from the list.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Develop scripts with the script workbench for Product Master


The script workbench for Product Master contains a script editor that you can use to help create or update a script. The script editor starts automatically on a new
script that is created with the script creation wizard. The script editor can be started on an existing script by double-clicking the file.

The script editor extends the basic editor functions that are provided by Rational® Software Architect.



The added functions help you edit Product Master scripts. Some of these functions are:

Color coding of keywords and sections of the code to help visually separate the various statements within the script.
Ability to easily comment out sections of code.
Double-click to highlight sections of code that are enclosed by brackets, braces, or quotation marks. This function is useful for ensuring that sections of code are
properly delineated.
Outline view to help search around the file.
Integrated parser and semantics checker.
ToDo comment support.
Errors in the construction of the script are flagged, enabling them to be removed at editing time.
Content assist functions to interactively provide help on script functions and functions that are defined within the script.

Opening the Product Master perspective


When you work in the IBM Product Master perspective, you have access to a set of views and actions that are frequently used when you edit and test the Product
Master script files.
Creating a project
You can create an IBM Product Master project and select the server version in which you want to work with the files.
Apply the Product Master project nature to an existing project
Project natures allow a plug-in to tag a project as a specific project. For example, the Java development tools use a Java nature to add Java-specific behavior to
projects.
Overview of the script editor
You use the script editor to create and edit the IBM Product Master scripts. The script editor includes a script builder and a semantics checker.
Creating a script
You can create a new IBM Product Master script or an ASP and JSP script with the script creation wizard.
Importing an existing script
You can import a file from an external file system into the script workbench for IBM Product Master environment.
Updating scripts from the server
You can view a conceptual representation of the grouping of scripts within a docstore in the Server Explorer view. Because this view is only a conceptual
representation, you cannot, for instance, copy or move a folder from one place to another. You can copy and drag files from the Server Explorer view.
Changing file extensions
You need to change your user properties if you want to change the file extensions of your scripts.
Specify comments in Javadoc format
The script editor supports some Javadoc tags. You can use Javadoc tags to display cleanly formatted hover help on parameters and variables and on calls to
functions defined locally in the script.
Debugging scripts
To help you debug scripts, you can customize the text color for functions, variables, comments, and common phrases within a script.
Run a script
You can run a script locally or from a remote server. You can then view the output from the console.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Opening the Product Master perspective


When you work in the IBM® Product Master perspective, you have access to a set of views and actions that are frequently used when you edit and test the Product
Master script files.

Procedure
1. Select Window > Open Perspective > Other.
2. Select the Product Master perspective.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Creating a project
You can create an IBM® Product Master project and select the server version in which you want to work with the files.

Before you begin


Switch to the Java™ perspective by clicking Window > Open Perspective. Select Java. A Java perspective opens.

About this task


You always write a Java class from the Java perspective rather than from the IBM Product Master perspective.

Procedure



1. Select File > New > Project from the menu bar. The project creation wizard opens.
2. Provide a name for the new project. Click Next.
3. Select the version of the Product Master server you want the files in the project to work with. The version selection controls a number of functions within the script
workbench for IBM Product Master, such as the script operation help and the content assist functions. Click Finish. A new project displays in the navigation in your
workspace.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Apply the Product Master project nature to an existing project


Project natures allow a plug-in to tag a project as a specific project. For example, the Java™ development tools use a Java nature to add Java-specific behavior to projects.

The different facets that are connected to an IBM® Product Master project are associated with the Product Master project nature. The project nature can be applied to a blank
or simple project, which results in a Product Master project. The project nature can also be applied to existing projects to use the script editor behavior on those projects.

For example, if you have a Java project that has .java files and compiled .class files, you can add the Product Master project nature to the Java project to maintain
.wpcs files within the same project and have access to the Product Master editor capabilities and file checking.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Overview of the script editor


You use the script editor to create and edit the IBM® Product Master scripts. The script editor includes a script builder and a semantics checker.

The script editor contains the following main features:

Content assist
You use the Ctrl+space key combination to use the content assist function of the editor. Use the Ctrl+space key combination when you type the name of a script
operation or a user-defined function to display a list of operations, functions, and documentation. After you select an item from the list, a template for the item
construct that the keyword uses is automatically provided.
Hover help
You can place the mouse over certain labels and fields to display more text for help.

On error markers
Errors that are found when the file was last saved are marked on both margins of the editor pane. Go to errors in the file by clicking the red marker in the right
margin. You can hover over the red marker on the left margin to view information about the problem detected.
On script operations
You can hover over a script operation that is found within a script to display documentation help for that script operation.
On a user function
You can hover over a user function to display a text area that contains documentation help for that user function.
On a user variable
You can hover over a variable to display a text area that contains documentation help for that variable.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Creating a script
You can create a new IBM® Product Master script or an ASP and JSP script with the script creation wizard.

Procedure
1. Select File > New > InfoSphere MDM Collaboration Server Script from the menu bar. The script creation wizard opens.
2. Provide a name for the new script. The file extension for a Product Master script file is .wpcs. The file extension for an ASP/JSP script is .wps. Click Finish.
3. Optional: You can group multiple scripts together under directories within a project. Select File > New > Other > Simple > Folder to create a folder under the project
and place the files in the directory.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Importing an existing script


You can import a file from an external file system into the script workbench for IBM® Product Master environment.

Procedure
1. Create an IBM Product Master project or use an existing Product Master project.
2. Right-click on the project folder in the navigator pane. Select Import > File System > Next and browse to the directory that contains the file that you want to import.
3. Select the check boxes that are provided for the file that you want to import. Ensure that the Into Folder field specifies the Product Master project.
4. Click Finish to import the file.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Updating scripts from the server


You can view a conceptual representation of the grouping of scripts within a docstore in the Server Explorer view. Because this view is only a conceptual representation,
you cannot, for instance, copy or move a folder from one place to another. You can copy and drag files from the Server Explorer view.

Procedure
1. Copy all of the files from the Server Explorer view and paste them to any folder on your system.
2. Modify the files on your system.
3. Copy the files from your system and paste them back into the Server Explorer view.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Changing file extensions


You need to change your user properties if you want to change the file extensions of your scripts.

About this task


By default, the script workbench for IBM® Product Master uses .wpcs and .script for IBM Product Master scripts and .wsp and .wpcp for JSP and ASP scripts.

Procedure
Change the file extension. Use any one of the following methods:

Changing the file extensions that are used by the semantic checker:
a. Click Window > Preferences.
b. Click IBM Product Master > Script Language > Script Checker preference.
Note: Ensure that you leave the default values on the list and add more extension file types.

Changing the file extensions that are used by the editor:
a. Click Window > Preferences.
b. Click Workbench > File Associations.
c. Add a file extension, for example, .script.
d. Associate the script workbench for Product Master with the .script file extension.

Changing the file extension that is used by the IBM Product Master Server Explorer view:
a. Click Window > Preferences.
b. Click IBM Product Master > Server Explorer.
c. Provide a single file extension in each field.

File extensions
The script workbench for IBM Product Master supports a .wpcs file extension for a Product Master script file and a .wsp file extension for JSP-style Product Master
script file.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

File extensions



The script workbench for IBM® Product Master supports a .wpcs file extension for a Product Master script file and a .wsp file extension for a JSP-style Product Master
script file.

You can modify the file extensions that the semantic checker uses in the user preferences list.

You can modify the file extensions that are used by the script editor. You can associate a program with a type of file based on the file extension; by doing so, you define
which file extensions open the script workbench for Product Master. If you double-click a file in the navigator view, the default editor for that file extension opens. If you
right-click the file, a menu with a list of available editors for that file extension displays.

You can modify the file extensions that are used by the Product Master server explorer view. In the server explorer view, you can go to the document store of a remote
Product Master server. You can drag-drop or copy-paste to and from this view into and out of the local project navigator.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Specify comments in Javadoc format


The script editor supports some Javadoc tags. You can use Javadoc tags to display cleanly formatted hover help on parameters and variables and on calls to functions
defined locally in the script.

You use Javadoc tags to:

Enhance the content assist function, which enables you to obtain help on the functions that are defined within the script
Enable the script editor to perform checks on the usage of the function
Enable other applications to render a version of the script for print or web page use

The script editor supports the following Javadoc tags:

@param
Documents a parameter declared in a function. Here is an example of the syntax: @param parameterName description. All parameters declared in a function
should be documented with this tag.
@return
Documents the information that is returned by a function. Here is an example of the syntax: @return description. There should be only one instance of the
@return tag per function declaration.
@throws
Documents the errors that a function can throw. Here is an example of the syntax: @throws description. Multiple @throws tags can be used per function
declaration.
@deprecated
Indicates that the function is deprecated. Here is an example of the syntax: @deprecated description. The description is optional, however, it indicates why
the function is deprecated and what should be used as an alternative.

Here is an example of a Javadoc tag that is used in a comment: // @deprecated This function is deprecated.
To use the script editor syntax checking feature, the comment should be directly above the definition of the function.
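For example, a comment block for a hypothetical function that takes a catalog name and returns a count of items might look like the following; place the block directly above the function definition (the names and descriptions are illustrative):

// @param catalogName the name of the catalog to search
// @param maxRows the maximum number of rows to return
// @return the number of items that were found in the catalog
// @throws an error if the named catalog does not exist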

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Debugging scripts
To help you debug scripts, you can customize the text color for functions, variables, comments, and common phrases within a script.

The selection of error conditions that are detected by the script editor is configurable from the project properties. You can set your project properties to check against the
script operations that are used in your scripts. See Setting project properties for information about setting your project properties to debug scripts.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Run a script
You can run a script locally or from a remote server. You can then view the output from the console.

After you configure the connection, you can start a script and view the output that is generated by that script. You can start only a script that is run in stand-alone mode.
You cannot start a script that is run by the server in a specific environment. For example, an entry preview script or an import or export script that is run from a scheduled
job has pre-defined input and cannot be started.

You can also use the server explorer view to copy files from the server. You can transfer only textual files. You cannot transfer binary files like Java™ class files.



Running a script on the server
You can run a script on a remote IBM® Product Master server.
Viewing the output in the console from running a script
When you run a script, either locally or on a remote server, you can view the output in the console.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Running a script on the server


You can run a script on a remote IBM® Product Master server.

About this task


You can use this same run configuration process to run different scripts by changing the script name in the wizard and clicking Run.

Procedure
1. Right-click on a file in the navigator view.
2. Select Run.... The run wizard opens.
a. Select the file that you want to run.
b. Select the server that you want to run the script on.
c. Specify a file to be used as input for the script, if needed.
3. Click Apply. Click Run.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Viewing the output in the console from running a script


When you run a script, either locally or on a remote server, you can view the output in the console.

About this task


Output from the script, which goes to the out output stream, appears in blue text in the console. Output from the script, which goes to the err output stream, appears in
red text in the console.

Procedure
1. Click Window > Show View.
2. Select Console under the Basic group of views. After the console view opens, the output from the script is written to it.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Develop Java extension points using the script workbench


There are various points within IBM® Product Master where you can modify the behavior by running some user-defined business logic. In the past, the business logic
was provided by writing scripts; however, you can now write an extension point in Java™.

Getting started with Java extension points


Ensure that you meet the following prerequisites before you get started with developing Java extension points.
Troubleshooting checklist for script workbench for Product Master
Troubleshooting is a systematic approach to solving a problem. The goal of troubleshooting is to determine why something does not work as expected and explain
how to resolve the problem.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Getting started with Java extension points


Ensure that you meet the following prerequisites before you get started with developing Java™ extension points.

Before you begin


Ensure that the wizard has access to the ccd_javaapi2.jar file. This file is distributed as part of the IBM® Product Master server package. You must copy the
ccd_javaapi2.jar and ccd_javaapi2doc.zip files from the server onto the computer where the script workbench for IBM Product Master is running.

Creating a Java project


You need to create a Java project to develop Java extension points.
Setting up the Java Extension Points wizard
The extension point wizard helps you prepare a Java-based extension point program.
Ensure the extension point class file is available on the server
You can ensure that the extension point class file is available on the server from the IBM Product Master.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Creating a Java project


You need to create a Java™ project to develop Java extension points.

Before you begin


Ensure that you switch to the Java perspective.

Procedure
1. Click File > New > Project. The New Project wizard opens.
2. Select Java Project for the type of project that you want to create. Click Next.
3. Provide a name for your project and click Next.
4. Select the Libraries tab and click Add JARs.... The JAR Selection window opens.
5. Go to where you copied the ccd_javaapi2.jar file and click Open.
6. Click Finish.

What to do next
Now, you can extract the ccd_javaapi2doc.zip file.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Setting up the Java Extension Points wizard


The extension point wizard helps you prepare a Java™-based extension point program.

Before you begin


Ensure that you switched to the Java perspective and created a new Java project.

About this task


As each extension point is specific to an invocation point within the Product Master system, the extension point program must be written to conform to the requirements
of the extension point. This means that the Java class must implement specific methods and interact with the server through a specific information argument bean. You
use the wizard to select which extension point you want to build a program for, and it generates the correct class skeleton with the required method stubs. The wizard
also supports the creation of a single Java class that can be run from several different extension points.

Procedure
1. Extract the ccd_javaapi2doc.zip file. This file contains the documentation for the methods and classes in the ccd_javaapi2.jar file.
2. Click + next to your Java project in the navigator.
3. Right-click on the ccd_javaapi2.jar file and select Properties.
4. Select the Javadoc property.
5. Select Javadoc URL and browse to the location of the extracted files from the ccd_javaapi2doc.zip file. Click OK.
6. Click File > New > Other > InfoSphere MDM Collaboration Server extension point wizard.
7. Provide values for the fields and click Finish.
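As a rough illustration of the shape of the code that the wizard produces, a skeleton might look like the following. The interface, bean, and method names here are placeholders only; the wizard generates the real names, which are defined in ccd_javaapi2.jar.

// Hypothetical skeleton only; the wizard generates the real interface
// name and method stubs for the extension point that you select.
public class WorkflowStepExtension implements SomeExtensionPointInterface {

    // The server interacts with the class through an information
    // argument bean that is specific to the invocation point.
    public void execute(SomeExtensionPointArguments inArgs) {
        // Add your user-defined business logic here.
    }
}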



IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Ensure the extension point class file is available on the server


You can ensure that the extension point class file is available on the server from IBM® Product Master.

The extension point class file is started through a redirection script file. This script file contains a special format:

//script_execution_mode=java_api="japi://wpc.javaapi.test.extensionpoints.WorkBenchExtensionTestImpl.class"

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Troubleshooting checklist for script workbench for Product Master


Troubleshooting is a systematic approach to solving a problem. The goal of troubleshooting is to determine why something does not work as expected and explain how to
resolve the problem.

Use the general troubleshooting checklist to resolve common issues in script workbench for IBM® Product Master. The following is a list that can help you to resolve
simple issues and identify issues with the script editor:

The Product Master server explorer view does not display any folders or files and the user cannot drag-drop files onto the view.
The server explorer view of the remote Product Master document store cannot create folders or enable you to drag-drop files onto the root of the document store.
Therefore, if the files or folders do not exist, nothing displays.

Issue:
Your document store is empty.
Workaround:
Go to the Product Master server and create a document in the document store. Then, go to the document store from the script editor.

When the user saves a script, they do not see errors. However, when they clean their project, errors are visible in the editor.
The Build Automatically option controls whether Eclipse tries to build files when they are saved or updated. If it is not turned on, then builds need to be kicked-off
manually using either the clean or rebuild menu items.

Issue:
Your script shows errors in your project view.
Workaround:
Click Project > Build Automatically to ensure that the Build Automatically option is set to true.

Errors are not appearing in the Problems tab.


If you have your filter set to ignore script editor errors, the errors do not display in the Problems tab.

Issue:
You are not seeing your errors display in the Problems tab.
Workaround:
In the Problems tab, click the drop-down arrow and select Filters. Select the check box for the Product Master errors to become visible.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Deploying a third party or custom user .jar file


To use third-party code or code that is available from custom JARs, those custom JARs must be deployed into the system.

Before you begin


If you are deploying custom user .jar files, before you can deploy them, you must write your Java™ code.

About this task


IBM® Product Master might need to use third-party JARs or custom JARs. These JARs can belong to one of the following categories:

Third-party JARs that are not included with the product.
The Java™ API JAR file, which is included with the product but not added to the .classpath by default.
Custom JAR files that contain user code that is developed by users of IBM Product Master. For example, these JARs might contain Web Services code or Java API
extension point implementation classes.

Procedure
1. If you are deploying a custom user .jar file, perform the following tasks on your development environment to produce the custom .jar file. If you are deploying a
third-party .jar, that .jar file should already be available and the following steps are not required.
a. Compile your Java file. This generates a compiled .class file, for example: SearchService.class.
b. Add your compiled .class file into a JAR file by typing:
jar -cvf /data/jars/AcmeServices.jar SearchService.class

AcmeServices.jar is the name of the .jar file.

c. Store the generated AcmeServices.jar in a directory on the IBM Product Master server.
2. On the Product Master server perform the following tasks:
a. Initialize your command-line environment.
b. Stop your Product Master server if it is running by using the following command:
$TOP/bin/go/stop_local.sh
3. Add the custom user JAR file. Use either of the following methods (see the example file after this procedure):
a. Copy the JAR file to the <install dir>/jars directory.
or
a. Add the paths, one per line, to the custom JAR file to bin/conf/classpath/jars-custom.txt.
b. Save the jars-custom.txt file.
The paths in the jars-custom.txt file can be absolute or relative to the <install dir> directory. If you use a relative path, the <install
dir> directory is prepended to the value in the jars-custom.txt file. For example, /opt/ssce/lib/myjar.jar is an absolute path. When the runtime class path is
assembled, the /opt/ssce/lib/myjar.jar path is added to the class path.
An example of a relative path is somedir_under_install_dir/mydir/myjar.jar. When the runtime class path is assembled, the <install
dir>/somedir_under_install_dir/mydir/myjar.jar path is added to the class path.

4. Run the following script to update the runtime class path:

configureEnv.sh

A message displays stating that a .jar file was added.
If the classpath parameter needs to reflect the latest custom JAR additions or deletions, ensure that you use the $TOP/bin/updateRtClasspath.sh shell script to
update the classpath parameter only in the env_settings.ini file without modifying other configuration files in the $TOP/etc/default directory. All Product Master
services start with the classpath parameter as defined in the $TOP/bin/conf/env_settings.ini file. For more information, see updateRtClasspath.sh script.

5. Redeploy the .war file using the following command:


$TOP/bin/yourAppServer/install_war.sh
yourAppServer is the name of your application server.
If you are running WebSphere®, as an alternative approach you can also add the .jar file directly from the WebSphere Application Server administrative console by
using the following path:

Application servers > servername > Java and Process Management > Process Definition > Java Virtual Machine > Classpath.

6. Restart Product Master using the following command:


./start_local.sh.
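
For reference, a jars-custom.txt file that uses both path styles from step 3 might contain the following two lines. The first line is an absolute path; the second is resolved relative to the <install dir> directory (both paths are the illustrative ones from step 3):

/opt/ssce/lib/myjar.jar
somedir_under_install_dir/mydir/myjar.jar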

Related concepts
Java API migration

Related tasks
Making the extension point class available

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Samples
IBM® Product Master provides several different samples that you can use to develop your Product Master Server solution.

JavaScript extensions samples


The JavaScript extensions samples provide the code that can be plugged into different JavaScript extension points to customize data entry screens in IBM Product
Master.
Web services samples
The web services samples provide a list of web services for the IBM Product Master.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)



JavaScript extensions samples
The JavaScript extensions samples provide the code that can be plugged into different JavaScript extension points to customize data entry screens in IBM® Product
Master.

The JavaScript extensions samples are available in the Samples\webapp\js_extensions\ directory on the product CD.

Customizing the right pane user interfaces


When you customize the right pane user interfaces, there are no specific hooks that are provided for any screens. Instead, a JavaScript file,
genericUIExtensions.js, is started if provided. Any custom code that you want to run when the screen is rendered must be added to this JavaScript file.
Ideally, you wait until the screen is rendered, and then use DOM APIs to query the specific DOM elements and add custom HTML elements to the screen.
Customizing the new data entry screens
The new data entry user interfaces can be customized using predefined JavaScript hooks, which start custom code that is provided in the JavaScript file. Separate
hooks are provided for the single-edit screen (newSEExtObj.initialize()) and the multi-edit screen (newMEExtObj.initialize()). These hooks are
started after the UI widgets are instantiated and are then available for customization.
Installing the JavaScript extensions sample
These steps show you how to install and run the JavaScript extensions sample code.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Customizing the right pane user interfaces


When you customize the right pane user interfaces, there are no specific hooks that are provided for any screens. Instead, a JavaScript file, genericUIExtensions.js,
is started if provided. Any custom code that you want to run when the screen is rendered must be added to this JavaScript file. Ideally, you wait until the
screen is rendered, and then use DOM APIs to query the specific DOM elements and add custom HTML elements to the screen.

Procedure
1. Copy the genericUIExtensions.js under $TOP\ui\samples\js_extensions\common\ as provided in the sample file
Samples\webapp\js_extensions\webapp.samples.js_extensions.zip on the product CD.
2. Add custom code to be run after the right pane screen that you are customizing is rendered.
Note: The genericUIExtensions.js JavaScript is started for the right pane UI. Therefore, be careful that you customize only the specific DOM elements present
in screens that need to be customized.
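
The following minimal sketch illustrates this pattern. The element ID is a hypothetical example; substitute a DOM element that exists only on the screen that you want to customize:

// Contents of genericUIExtensions.js (illustrative sketch).
// Guard on an element that is specific to the target screen, because
// this file is started for every right pane UI.
var target = document.getElementById("itemEditHeader"); // hypothetical ID
if (target) {
    var note = document.createElement("div");
    note.appendChild(document.createTextNode("Reminder: verify the GTIN before saving"));
    target.appendChild(note);
}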

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Customizing the new data entry screens


The new data entry user interfaces can be customized using predefined JavaScript hooks, which start custom code that is provided in the JavaScript file. Separate hooks
are provided for the single-edit screen (newSEExtObj.initialize()) and the multi-edit screen (newMEExtObj.initialize()). These hooks are started after the UI
widgets are instantiated and are then available for customization.

Procedure
1. Copy the dataEntryExtension.js file under $TOP\public_html\user\js\dataentry\ as provided in the sample file
Samples\webapp\js_extensions\webapp.samples.js_extensions.zip on the product CD.
2. Add your custom code to the newSEExtObj.initialize() and newMEExtObj.initialize() hooks.
For example, you can call an alert for single-edit UI as follows:

newSEExtObj.customFunction = function()
{
alert("Hello World");
};

3. Start this function from the newSEExtObj.initialize() hook:

newSEExtObj.initialize = function()
{
newSEExtObj.customFunction();
};

Results
The custom function invocation is wrapped in a try/catch block within the product code and any exceptions that are caught are logged to the Developer tools console.
These logs can be viewed using the Microsoft Internet Explorer Developer tools console.

IBM Product Master 12.0 Fix Pack 8



Operating Systems: AIX, Linux, and Windows (Workbench only)

Installing the JavaScript extensions sample


These steps show you how to install and run the JavaScript extensions sample code.

Procedure
1. Extract the Samples\webapp\js_extensions\webapp.samples.js_extensions.zip file on the product CD into the $TOP\public_html\user\js folder.

Results
The following folder structure should be present after installation:

$TOP\public_html\user\js\
$TOP\public_html\user\js\common
$TOP\public_html\user\js\common\genericUIExtensions.js
$TOP\public_html\user\js\common\helloWorld.html
$TOP\public_html\user\js\dataentry\
$TOP\public_html\user\js\dataentry\dataEntryExtension.js

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Web services samples


The web services samples provide a list of web services for the IBM® Product Master.

The following sample web services are provided in the Samples\webservices\ directory on the product CD:

Search web services

String search(String wql, Context wsContext)
This service returns an XML string representation of the result set for the search query, which is expressed in the WQL language.

Product web services

Item createItem(Item item, Context wsContext)
This service creates a new item from the data provided; you can populate the Item with as many attributes as the business requirement demands. As input, the
Item object must include the container name, primary key value, and display name.
Item updateItem(Item item, Context wsContext)
This service updates an existing item with the data provided. As input, the Item object must include the container name and primary key value.
Item getItem(Item item, Context wsContext)
This service retrieves an existing item from the system; the returned Item object contains either all of the attributes or only primaryKey and displayName. As input,
the Item object must include the container name and primary key value.
void deleteItem(Item item, Context wsContext)
This service deletes an existing item. As input, the Item object must include the container name and primary key value.

Scheduler web services

Report createReport(Report report, Context wsContext)


This service creates a Product Master Server Report job from the web service Report object that is passed to the service call.
To start this operation, the following arguments on the Report object must exist and be valid in Product Master Server:

reportScriptFileName, the path to the report script
inputSpecName, the name of the script input spec
distributionName, the distribution name to be associated with the Report

Schedule createSchedule(Schedule schedule, Context wsContext)


A Product Master Server Schedule is created for the Job associated with the schedule.
For this operation to be successful, the following parameters should be valid:

Job.name, valid job name existing in Product Master Server; currently, Product Master Server Job.name is the same as job description.
Schedule.type, type of the schedule to be created; can take one of the following values:
IMMEDIATE = 0
ONE_TIME = 1
MINUTE = 2
HOURLY = 3
DAILY = 4
WEEKLY = 5
MONTHLY = 6
YEARLY = 7
Schedule.startTime, if the schedule type is other than IMMEDIATE, a start time is required to indicate when the schedule is to be run.
Schedule.intervalMinutes, if the schedule type is MINUTE, intervalMinutes must be set. The schedule runs at the specified interval of minutes.

String getScheduleStatus(Schedule schedule, Context wsContext)



Currently, the Java™ API does not expose a schedule ID, so the schedule is fetched based on the schedule name and a start date. Also, the job that is associated
with the schedule should have a valid description. If the schedule completes running, the status of its completion is returned to the caller. If the schedule is
running, the percentage completion of the schedule is returned.
void stopSchedule(Schedule schedule, Context wsContext)
The schedule is fetched using the schedule name and a start date and the associated job description.
Job getJob(Job job, Context wsContext)
The corresponding Product Master Server job is fetched using the object description. The new Job object corresponding to the retrieved Product Master Server job
is returned to the caller.
Collection<Job> getAllJobs(Context wsContext)
This operation takes in Context and returns all jobs currently in the system.
Collection<Schedule> getAllSchedules(Job job, Context wsContext)
This service returns all associated schedules for the Job. The input takes in a web service Job, which needs to have a valid description.
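
As with the BookCatalog client shown earlier, you can call these operations through a generated Java™ proxy. The following sketch is illustrative only: the proxy class name follows the pattern of the generated BookCatalogServiceProxy, and the accessor names on the Context and Job beans are assumptions.

import java.rmi.RemoteException;
import java.util.Collection;

public class SchedulerClient {

    public static void main(String[] args) {
        // Proxy generated from SchedulerService.wsdl; the name is assumed
        // to follow the same pattern as BookCatalogServiceProxy.
        SchedulerServiceProxy proxy = new SchedulerServiceProxy();
        try {
            // Every operation requires a Context with valid credentials.
            Context wsContext = new Context();
            wsContext.setCompany("acme");   // hypothetical setter names
            wsContext.setUserName("admin");
            wsContext.setPassword("secret");

            // List all jobs that are currently in the system.
            Collection<Job> jobs = proxy.getAllJobs(wsContext);
            for (Job job : jobs) {
                System.out.println("Job: " + job.getName());
            }
        } catch (RemoteException e) {
            e.printStackTrace();
        }
    }
}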

Using web services samples


The following steps show you how to use the sample web services code. Rational® Software Architect can be used to develop and test the web services.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Using web services samples


The following steps show you how to use the sample web services code. Rational® Software Architect can be used to develop and test the web services.

Before you begin


Make sure that you are using Java™ 1.6. Rational Software Architect must be set up with a WebSphere® Application Server profile using the required class path and
environment variables.
Copy the entire installation directory to your development box.

Procedure
1. Import web services project into Rational Software Architect.
a. Extract the web services samples project.
b. In Rational Software Architect, enable and open Java Platform, Enterprise Edition perspective to be able to see the Servers window.
c. Right-click in the Enterprise Explorer navigation pane. Click Import > General > Import Existing Projects into Workspace. Click Next.
d. Choose Select root directory. Browse to the webservices/samples folder and click OK.
e. Click Finish.
2. Build the project.
a. Expand the WebServiceSamples project and locate the build.xml file.
b. Right click build.xml and select Run As > Ant Build. The build fails.
c. Right click build.xml and select Run As > External Tools Configuration.
d. In the External Tools Configuration Main tab, enter the following entry for Arguments: -Dmdmpim.home=<MDMPIM install directory>
e. Click OK.
The successful build creates a webservices.samples.war file in the folder webservices/samples/build.
3. Import WAR file.
a. Right click the blank space of Enterprise Explorer. Click Import > WAR file.
b. Click Browse to browse the WAR file webservices/samples/build/webservices.samples.war.
c. Click Finish.
Two projects webservices.samples and webservices.samplesEAR are imported to the workspace.
4. Deploy the web services using Rational Software Architect.
a. Go to the Servers tab.
b. Right click the server you want to deploy the web services in and click Add and Remove Projects.
c. Add the project webservices.samplesEAR.
d. Click Finish.
e. Start the server. Make sure that the server and application are both synchronized.
5. Test the web services.
a. In the project webservices.samples, locate WebContent/WEB-INF/wsdl/SchedulerService.wsdl
b. Right click SchedulerService.wsdl and select Web Service > Test with Web Services Explorer.
c. In web Services Explorer, add the endpoint by clicking the Add link next to the Endpoints table. Enter the following by replacing port 9083 with your HTTP
port. Make sure that your endpoint is pointing to the correct port as selected in your server profile.

http://localhost:9083/webservices.samples/SchedulerService

d. Select the new endpoint created and click Go.


e. You can now test the operations that are listed. For all of the operations, Context is a required input. Enter a valid company, password, and user name as
arguments for the Context object. Depending on the operation that you are testing, you might need to enter the arguments for the Job object and Schedule object.
You get an input validation error if required arguments are missing.
f. After all of the arguments are entered, click Go to start the web service.

Results
Return values are obtained from web services calls.



What to do next
Use this sample as an example to create your own web services using the Java APIs.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Developing the solution


Solution developers take the specifications from the solution architects and create the data model objects, workflows and collaboration areas, import, export, and report
jobs, selections and searches, and security objects. Solution developers can use Java™, script, or the user interface for these tasks.

Important: Code examples that are used in this documentation are meant to explain and show concepts. The examples are not meant to be working code that you can
use directly.

Creating data model objects


You can create data model objects such as specs, sub specs, hierarchies, catalogs, and lookup tables with IBM® Product Master. With GDS, you can upload or run a
script to load the data model object.
Creating objects for business processes
You create a workflow to manage the flow of work in a business process, and you create a collaboration area to hold the entries that are in the steps of the
workflow. You use workflows to define a set of business processes and rules, which are then applied to any IBM Product Master entries.
Enabling the business user workflow dashboard for business processes
The business user workflow dashboard is a custom tool. The dashboard provides users a categorized graphical summary of their current tasks to enable a better
view into what tasks they need to complete.
Installing the business user workflow dashboard
The business user workflow dashboard solution comes as a compressed ZIP file.
Integrating with upstream and downstream systems
To integrate your Product Master Server system with upstream and downstream systems, you need to create data sources, define import, export, or report jobs, and
create message queues. An upstream system is any system that sends data to the Product Master Server system. A downstream system is a system that receives
data from the Product Master Server system.
Creating selections and searches
You can create selections so that users can use saved search queries or results. You can create searches so that users can search catalogs, hierarchies, or
collaboration areas.
Creating the security model
You create a security model so that users can have different levels of privileges to objects in the Product Master Server solution.
Creating custom tools with the UI framework
You can extend the capabilities for the user interface in IBM Product Master by using the UI framework and the Java API for Product Master.
Customizing labels and icons for items and categories
A solution implementer can customize the labels and icons used for entities managed in a catalog or a hierarchy. By default, the label item and category
respectively are used by the user interface for these entities. A solution implementer can define one or more specifications of icons and labels to be used for a given
type of business entity managed in their catalogs or hierarchies. Each of these types of business entity specifications is a domain entity specification.
Enabling event logging using history manager
You can use the history manager of IBM Product Master to log object events through Java APIs. You can also disable event logging for a session for all objects, all
objects of a specific type, and specific events. Refer to the Javadoc for details.
Configuring the Entity Count Tool
You use the EntityReportingFunction extension point to add custom logic on how entities are designated and counted. Before you can run and view the entity count
report, the extension point implementation class needs to be uploaded to the system and the URL registered in the system.


Creating data model objects


You can create data model objects such as specs, sub specs, hierarchies, catalogs, and lookup tables with IBM® Product Master. With GDS, you can upload or run a script
to load the data model object.

A catalog is used for storing information about items. A hierarchy helps in categorizing item information that is stored in the catalogs.

For example, for abc company we can create a catalog named abc electronics items, a hierarchy named abc hierarchy electronics items, and categories named audio
items, video items, audio video items, multimedia items, imported items, exported items.

Specs help in defining the format in which you want the data to be stored, calculated, and managed in the catalogs within the PIM solution.

In the single edit screen:

Specs are rendered in the order specified in the View (or the Tab, in the case of a tabbed view).
Nodes within a spec are rendered in the order specified in the View (typically the same as the order in the spec definition, but a tab can be overridden to not use
spec ordering).
The Primary Key node is rendered only if the View or Tab explicitly includes it. This allows certain tabs to omit the primary spec and primary key node if the
solution requires it.
The Primary Key is no longer always rendered as the first node in the primary spec. Instead, it is rendered at the correct location within the primary spec, as
specified by the spec order (or the view or tab order).

A sub spec is a reusable spec which can be used as part of either a primary or a secondary spec, for example, to group together a set of attributes that always occur
together.

Lookup tables are useful for quick information retrieval and for storing small amounts of data.

Views provide a more efficient or task-specific view of items and create groups of attributes that are related to a specific data entry or data maintenance process. You can
create multiple views of the same catalog, and create views that are shared by multiple users.

You need to create the data model objects in the following sequence:

1. Create a spec
2. Associate a spec
3. Create a hierarchy
4. Create a catalog
5. Create a view

You can also create attribute collections, item relationships, lookup tables, and define location attributes.

An attribute collection is a group of item or category attributes that are associated with each other or behave the same way in a given context. Attribute collections are
used for workflow step validation and for catalog and hierarchy views.

Item-to-item relationships are relationships between items. For example, you might have a relationship between the items in a bundle that are sold together for
promotional purposes. Item-to-other-entity relationships can be between an item and any other entity, such as a supplier, a customer, or a location.

Location attributes store data that is specific to particular locations. For example, an item that is available for sale in California might require Prop 65 warning information.

Data modeling overview


A data model identifies the data, the data attributes, and the relationships or associations with other data. It provides a generalized, user-defined view of data that
represents the real business scenario and data.
Creating specs
You need to create a spec to specify the format in which you want the data to be stored, calculated, and managed within the Product Master Server solution.
Creating hierarchies
You can create a hierarchy to classify the items under hierarchies. A hierarchy consists of a set of categories. For example, you can create a hierarchy of books for
categorizing the books into different categories such as fiction, non-fiction, drama, poetry, technical books, and non-technical books. A category hierarchy is used
by catalogs to classify items, while an organization hierarchy is used to manage IBM Product Master users.
Creating catalogs
You can create the catalog for storing information about items. For example, you can create a catalog of books for storing information about the books.
Creating views
You can create views to provide a more efficient or task-specific view of items and to create groups of attributes that are related to a specific data entry or data
maintenance process. You can create multiple views of the same catalog, and create views that are shared by multiple users. For example, you can create a general
view to show all the attributes, a marketing view to show only marketing-related attributes, and a technical view to show only technical attributes.
Creating attribute collections
You can create attribute collections so that users can easily manage a large number of attributes. Attribute collections are a group of specs and attributes
that behave the same way in all contexts.
Defining location attributes for entries
You can define location data for entries so that users can use location data. For example, if the price of an item varies from location to location, the user can
maintain the item-specific price attribute and the location-specific price attribute.
Creating item relationships
You can create item relationships so that users are able to establish a relationship between two attributes from two catalogs. For example, you can establish a
relationship between product id and item code from two catalogs. One of the attributes must be a relationship type attribute.
Creating lookup tables
You can create a lookup table and save lookup values to the primary spec so that users can select the lookup values from a drop-down list. Lookup tables are useful
for quick information retrieval and for storing small amounts of data. You can update lookup tables without needing to do a mass update of all the items that use the
value that you need to update.
Attaching files to a staging area
A staging area is similar to a distribution, for example, an FTP site or an email in which a single directory in the document store is set aside for any files sent using
that distribution. Those files can then be accessed at will through a script or manually.
Defining multi-occurrence group labels
When working with multi-occurrence groups, you can select a child attribute for use as a multi-occurrence group label to provide more group occurrence
information in the single edit screen.


Data modeling overview


A data model identifies the data, the data attributes, and the relationships or associations with other data. It provides a generalized, user-defined view of data that
represents the real business scenario and data.

You need to create a data model to understand how to design your database and meet the data modeling requirements for your enterprise. You will use this data model to
structure and organize the data.

A data model consists of data objects and data values. The item and category objects are the core objects in the data model, which are defined by the spec
object. A collection of item objects is a catalog. The hierarchy object defines a hierarchical form of a collection of categories.

Data modeling is the process of creating a data model. When you create a data model, you define the data and its attributes and relationships with other data, and you
define constraints or limitations on the data. For example, you might create a data model for a product where the vendor attribute of the product item links to a vendor ID
in a vendor catalog.

To determine which components to model, you must have a good understanding of the Product Information Management (PIM) domain, IBM® Product Master, and client
requirements.

The data modeling factors include user interface (UI), workflows, and search.

User Interface
The user interface affects the data model for enabling the business processes. For example, if the multi-edit feature is required for the business, then you must
model the UI accordingly.
Workflows
The data model must support the workflow by providing an end-to-end business process with step-based views and user roles. You must test a prototype of the
typical business processes and perform a conceptual dry run to check whether the data model design is constraining the use of native workflows.

Search
The data model must facilitate searching. You must understand how users will search for product data before you create the data model. The data model must
support the searching and require little UI customization for users.
Note: All attributes of an item are stored in serialized form in the database as a BLOB and cannot be searched directly. Only the attributes that are marked as
indexed are also stored within a relational table to enable fast and easy search. Therefore, when you design the data model, make sure to mark as indexed only
those attributes that need to be searched on a regular basis. Avoid indexing all attributes, because doing so increases the disk space demand on the database
server side and thereby affects the performance of data access.
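The note above can be pictured as a split between one serialized blob and a small searchable subset. The following is a minimal, illustrative Java sketch of that split; the names are assumptions for the example, not the product's storage schema:

import java.util.HashMap;
import java.util.Map;
import java.util.Set;

// Illustrative only: all attributes go into one serialized blob, while the
// attributes marked as indexed are also written to a searchable relational table.
public final class ItemStorageSketch {

    public static Map<String, String> indexedSubset(Map<String, String> allAttributes,
                                                    Set<String> indexedAttributeNames) {
        Map<String, String> relationalRows = new HashMap<>();
        for (String name : indexedAttributeNames) {
            if (allAttributes.containsKey(name)) {
                relationalRows.put(name, allAttributes.get(name));
            }
        }
        return relationalRows; // only these values can be searched directly
    }
}

Every attribute added to the indexed set costs extra disk space and write time, which is why only regularly searched attributes should be indexed.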

To create a data model, you must consider the product attributes, foundation data, and product classifications.

Product attributes
Product attributes are a set of attributes that define a product.

Product attributes are typically grouped into a set of core and extension attributes. The core attributes are common for all the enterprise products, for example, the UPC
attribute. The extension attributes are specific to certain product types or categories, for example, the screen size attribute. Product Master also supports relationship
data, for example, cross-sell, up-sell, and other relationship data.

Product Master serves as a system of record for referential attributes.


Restrictions:

Handle the attributes that are transactional or volatile in nature outside of a Product Information Management (PIM) system by using the appropriate consuming
applications. For example, current price is handled by a pricing engine.
Do not model attributes whose values are derived by business logic from the external applications in your PIM system. You can keep such data in a PIM system as
read only, but if you do this, you need an update mechanism to keep the data in sync. In addition, keeping such data in a PIM system can add unnecessary load and
high availability requirements.

Foundation data
Foundation data encompasses any supporting entities and attribute values that are needed for defining a product. For example, foundation data includes a list of
suppliers, locations, product brands, and other information.

Product classifications
The product classifications define how the products are grouped together. You can group products together for a specific business purpose such as organizational
structure or for ease of navigation. A product can be categorized in multiple ways. For example, you can categorize the furniture product into kitchen furniture, drawing
room furniture, bed room furniture, and study room furniture.

Performance design considerations


Consideration for performance is important when designing a solution. IBM Product Master allows for complex data models and business logic. This flexibility
allows for the handling of advanced business rules. However, the availability of this flexibility can lead to simple solutions getting more complicated than necessary.
Product Master does not restrict the size or the number of objects that are used in the design of a solution, therefore, it is important to follow these performance
design considerations.
Product attributes
Product attributes are a set of attributes that define a product.
Location attributes in the data model
Location attributes store data that is specific to particular locations. For example, an item that is available for sale in California might require Prop 65 warning
information. Other states in America might not require the Prop 65 information, they might require some other information.
Product classifications in the data model
The product classifications process involves classifying or grouping products. You can classify products based on a set of common categories and custom
classification based on business requirements.
Item relationships
When you model relationships, you relate items or entities together under a certain relationship such as relating two items for up-sell, or relating an item to its
suppliers.


Performance design considerations


Consideration for performance is important when designing a solution. IBM® Product Master allows for complex data models and business logic. This flexibility allows for
the handling of advanced business rules. However, the availability of this flexibility can lead to simple solutions getting more complicated than necessary. Product Master
does not restrict the size or the number of objects that are used in the design of a solution, therefore, it is important to follow these performance design considerations.

Adhere to the following common guidelines to avoid major performance problems:

Keep the number of specs in the range of 10 - 100.

The number of specs can sometimes reach higher than 100, provided that the individual specs are small. Going beyond 1000 requires extreme care and
consideration because it dramatically increases the memory requirement on the system. A larger number of specs leads to larger, more complex views that increase
the memory footprint of individual users. This type of solution requires highly custom-tuned load balancers and garbage collection settings.
Keep the size and count of lookup tables to a minimum.
Lookup tables are cached in memory. Increasing the size or count of the lookup tables consumes more memory for the caching of the tables, thereby leaving less
memory for other operations. This in turn leads to frequent garbage collections and sluggish, unpredictable performance of the system. Also, a large number of
lookup table values in the single edit user interface can affect the rendering times of the user interface, because large amounts of lookup table data are sent across
for the creation of the drop-down menus.
Keep the number of steps in a workflow to less than 100.
Workflows can get complex in Product Master. They can be used to manage complex business rules. Typical workflows are less than 100 steps in size. Larger
workflows can result in multiple problems, ranging from an unmanageable user interface to very slow operations that manage and maintain the workflow.
Keep the number of related items for an item to less than 100.
If the number of related items for an item is more than 100, performance suffers. A higher volume of related items causes more extensive relationship-attribute
queries, which compounds the performance degradation.

Lay out the user interface in a manageable way.


As the complexity and scale of the data increase, so does the complexity of the screen. Laying out the user interface in a manageable way is critical
for user performance. Consider the following things to reduce the impact on the user interface's performance:

Manage the number of views and attribute collections that are used.
Collections and views are cached per user and are created the first time that a user accesses them after login. Having a large number results in high memory
usage per user and adversely affects the garbage collection activity.
Manage the number of items and attributes that get displayed at a time on the multi edit screen.
The larger the number of items and attributes, the slower the user interface screen performance. The user interface screen can take longer to refresh or load
because of the amount of data that it is trying to display.

Managing the amount of nested multi-occurrence information is also critical.


Within the Product Master user interface, each nested occurrence of information is rendered as non-trivial HTML code that the client's browser must interpret. If
many multi-occurrences (even just tens, but especially hundreds) display on a single screen, especially nested occurrences, rendering can be incredibly slow.
Server processing can complete within about 1 second, but the user response can sometimes take more than 60 seconds simply because the browser's rendering
takes so much time. Therefore, it is important to make appropriate decisions during the data modeling stage of the project to avoid this kind of complexity in the
user interface.


Product attributes
Product attributes are a set of attributes that define a product.

Product attributes are typically grouped into a set of core and extension attributes. The core attributes are common for all the enterprise products, for example, the UPC
attribute. The extension attributes are specific to certain product types or categories, for example, the screen size attribute. Product Master also supports relationship
data, for example, cross-sell, up-sell, and other relationship data.

Product Master serves as a system of record for referential attributes.


Restrictions:

Handle the attributes that are transactional or volatile in nature outside of a Product Master Server system by using the appropriate consuming applications. For
example, current price is handled by a pricing engine.
Do not model attributes whose values are derived by business logic from the external applications in your Product Master Server system. You can keep such data in
a Product Master Server system as read only, but if you do this, you need an update mechanism to keep the data in sync. In addition, keeping such data in a PIM
system can add unnecessary load and high availability requirements.

Core product attributes in the data model


Core attributes are those attributes that are common in all the products in an enterprise. For example, price, size, and color are core product attributes.
Extension attributes in the data model
Extension attributes are a category of attributes that represents a combination of attributes such as item-type attributes and other attributes.


Core product attributes in the data model


Core attributes are those attributes that are common in all the products in an enterprise. For example, price, size, and color are core product attributes.

Core product attributes can be:

Item identifiers
Item types
Common attributes

Item status attributes
Pricing and cost attributes
Image link attributes

Item identifiers
You use item identifiers to uniquely distinguish each item. You can create item identifiers based on industry standards such as GTIN or UPC, or you can create item
identifiers that are enterprise-specific, such as SKU, Corporate ID, and others. Generally, enterprises might have one or more item identifiers per item.
Item types
You can group the products in your enterprise into item types which are subsets that represent either different lines of business (for example, consumer
electronics, entertainment, or services) or varying product types (for example, pants, shirts, shoes).
Common attributes
Common attributes are attributes that are common to all item types in the enterprise. The common attributes form the Primary spec of the Item master catalog.
Pricing and cost attributes
The pricing and cost attributes are typically managed by the pricing and merchandising system. However, you might import certain attributes into the Product
Master Server system by external feeds.
Image link attributes
The image link attribute stores the images that are used for item definition.


Item identifiers
You use item identifiers to uniquely distinguish each item. You can create item identifiers based on industry standards such as GTIN or UPC, or you can create item
identifiers that are enterprise-specific, such as SKU, Corporate ID, and others. Generally, enterprises might have one or more item identifiers per item.

When you create a data model, you identify a primary identifier first and then group the rest of the identifiers as alternate identifiers.

Primary identifiers
A primary identifier is a single attribute that is assigned as the primary key.
You must specify one primary item identifier for the enterprise to facilitate data exchange and avoid duplicate resolutions. This strategy might require your enterprise to
create a uniform identifier, which can affect the traditional systems.

You can model an identifier as a single attribute or a combination of attributes. However, you must assign a single attribute to be the primary item identifier for defining the
items in IBM® Product Master. The primary identifier is best modeled as a sequence attribute. Product Master enforces the uniqueness of the primary item identifier and
keeps it indexed.

The sequence attributes are by default unique, but not chronological or incremental, because the system uses a sequence caching mechanism for optimum
performance. The sequence caching mechanism maintains and increments the sequence attribute values in memory, without the need to make a round trip to the
database every time.

The sequence batching depends on whether a particular sequence is marked as sequential or non-sequential. You can mark any sequence as sequential or non-
sequential. By default, created sequences are non-sequential; in other words, sequence batching is enabled for each sequence that you create.

You can change this behavior of sequence attributes by using simple scripting. For example,

sp = getSpecByName("Spec1", false); // look up the spec that contains the sequence attribute
nod = sp.getNodeByPath("Spec1/seqA"); // locate the sequence attribute node
nod.setAttribute("SEQUENCE_SEQUENTIAL","true"); // mark the sequence as sequential, disabling batching
sp.saveSpec(); // persist the change

The script marks the sequence attribute seqA in Spec1 as sequential. This means that every incremental value is retrieved from the database, that is, the sequence
batching is disabled for the sequence Spec1/seqA.

Remember: You must restart the server after running the script for the changes to fully take effect.
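To make the batching behavior concrete, the following is a minimal, hypothetical Java sketch of how a batched (non-sequential) sequence hands out values. It is a conceptual illustration only, not the Product Master implementation, and the batch size is an assumption:

import java.util.concurrent.atomic.AtomicLong;

// Conceptual sketch only: values are handed out from an in-memory block, and the
// database is contacted only when the block is exhausted.
public class BatchedSequence {
    private static final int BATCH_SIZE = 50; // assumed block size, for illustration
    private static final AtomicLong databaseCounter = new AtomicLong(1); // stand-in for the database sequence
    private long next;     // next value to hand out
    private long blockEnd; // first value beyond the cached block

    // Stand-in for a database round trip that reserves BATCH_SIZE values at once.
    private long reserveBlockFromDatabase() {
        return databaseCounter.getAndAdd(BATCH_SIZE);
    }

    public synchronized long nextValue() {
        if (next >= blockEnd) { // cached block exhausted: one round trip
            next = reserveBlockFromDatabase();
            blockEnd = next + BATCH_SIZE;
        }
        return next++; // no database access on this fast path
    }

    public static void main(String[] args) {
        BatchedSequence seq = new BatchedSequence();
        for (int i = 0; i < 5; i++) {
            System.out.println(seq.nextValue()); // 1, 2, 3, 4, 5
        }
    }
}

Two server nodes each caching their own block is what makes values unique but not necessarily chronological; marking the sequence as sequential effectively reduces the block size to 1, at the cost of a database round trip per value.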

Alternate identifiers
Alternate identifiers are alternative values that you can use to facilitate searching and identifying the products.
You can use standards-based identifiers such as GTIN and UPC to remember and refer to items. For example, you might need a 10-digit sequence as the primary identifier
(or primary key) to avoid duplications and other problems. But to help users search for and refer to items, you can also provide a more memorable alternate identifier,
which might be a supplier-model combination. You can use these guidelines when you model alternate identifiers:

Make sure that identifier attributes are marked as searchable, which causes IBM Product Master to create indexes on them for faster searching. If the identifier is a
combination of attributes, make sure that all of the attributes are marked as searchable.
Enforce integrity constraints. If the identifier is unique across all items, similar to the primary key, then mark the identifier as unique. For multi-attribute identifiers, or
for identifiers that are not unique, write custom validations to enforce uniqueness based on business logic. For details on implementing custom validations, see the
Validation Frameworks document that is listed in the Appendix.
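The following is a minimal Java sketch of the kind of uniqueness check such a custom validation performs for a multi-attribute identifier (here, an assumed supplier-model combination). The set of existing identifiers stands in for a search against the catalog's indexed attributes, and none of the names correspond to the actual validation framework API:

import java.util.HashSet;
import java.util.Set;

// Illustrative sketch of a multi-attribute uniqueness validation.
public class AlternateIdentifierValidator {

    // Placeholder for a lookup against the indexed supplier and model attributes.
    private final Set<String> existingIdentifiers = new HashSet<>();

    /** Returns an error message, or null when the combination is unique. */
    public String validate(String supplier, String model) {
        String identifier = supplier + "/" + model; // the combined alternate identifier
        if (!existingIdentifiers.add(identifier)) {
            return "Alternate identifier already in use: " + identifier;
        }
        return null;
    }

    public static void main(String[] args) {
        AlternateIdentifierValidator validator = new AlternateIdentifierValidator();
        System.out.println(validator.validate("ACME", "X100")); // null: unique
        System.out.println(validator.validate("ACME", "X100")); // duplicate: error message
    }
}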


Item types
You can group the products in your enterprise into item types which are subsets that represent either different lines of business (for example, consumer electronics,
entertainment, or services) or varying product types (for example, pants, shirts, shoes).

During requirements gathering, be sure to get a good understanding of all the items in an enterprise, regardless of when they will be made available in the solution. You
must identify the item types and subtypes that the enterprise needs in the solution. A list of item types and subtypes helps in extracting the attribute set that is
common to all items and the set of attributes that are applicable to items of a particular type.

An item type must be specific enough to identify an item no matter how the item is categorized. An item might be classified under various categories but the item type is
an attribute of the item and remains intact regardless of where the item is classified. For example, a shirt will always remain a shirt whether it appears under Casual Shirt,
Dress Shirt, or Holiday Deals. Similarly, a consumer electronics item will always remain a consumer electronics item whether it falls under MP3 players, Phones, or Special
Offers.

The following table illustrates the item attributes, how they are modeled, and their purpose.
Table 1. Item attribute
Item attribute | Data model | Purpose
Item type | Implemented as an item attribute | Helps to identify an item regardless of categorization, for example, clothing, electronics, and toys. Drives the attachment of secondary specs.
Item subtype | Implemented as an item attribute | Helps to further identify an item type and helps during search on item subtypes, for example, shirts within clothing, audio within electronics, and others.
Common attributes | Implemented as the primary spec | Common to all enterprise items.
Item-type attributes | Implemented with secondary specs | A separate spec for each item type. An item-type attribute is a type of extension attribute.


Common attributes
Common attributes are attributes that are common to all item types in the enterprise. The common attributes form the Primary spec of the Item master catalog.

You can model all item types within the same catalog. Typical attributes found in this set include: item identifier, description, item status, supplier, cost, and price
attributes.


Pricing and cost attributes


The pricing and cost attributes are typically managed by the pricing and merchandising system. However, you might import certain attributes into the Product Master
Server system by external feeds.

These attributes might not change frequently but serve as inputs to the pricing engine.

Examples of pricing and cost attributes are:

Initial price
Initial cost
Minimum advertised price
Vendor cost

Best practices
Treat IBM® Product Master as an enabler for the pricing system.
Do not store the current price of an item in a Product Master Server system. Users can view the current price in the pricing and merchandising systems. Maintaining
the value in the Product Master Server system and keeping it in sync requires unnecessary overhead and increases the Service Level Agreement (SLA) or performance
requirements.
Create the current price attribute as read-only and expose the interface to update these values from the pricing engine only if the client requires the current and
promotion price to be made visible in Product Master Server.
Product Master can show these prices and export them as needed. Schedule the data exchange to occur on a periodic basis.
Do not impose any real-time requirements.
Preferably, do not store the history of price and cost changes in Product Master.


Image link attributes
The image link attribute stores the images that are used for item definition.

The product and item images are important for businesses that operate eCommerce sites. Images enrich the client experience and help clients to make a purchasing
decision. For other applications, such as the Product Master Server system, the images are used for reference purposes. Different media, such as print and web, have
specific requirements for images that limit the color, size, resolution, and type of image.

Images are required by the retailers with an eCommerce or Print catalog system that consumes data from a Product Master Server system. The business requirements for
images include:

Rule definition
Rule definition involves size, color, resolution, and compression.
Creation
Image creation involves source image capture (high resolution).
Derivative generation
Derivative generation involves creation of images by transformation rules on the source image.
Image association
Image association involves assigning images to the products.

You must have business processes for generating images. For example, you must capture the image by using high resolution. An image server (such as an Adobe graphics
server) processes this source image to generate multiple images based on the defined set of business rules. Store the images in the image repository, and notify the
imaging team of the location of the generated images along with the metadata. When the images are ready, the Creative team accesses the images from the Product
Master Server system and creates an association among the images.

From a data modeling perspective, you need to ensure that each item contains a multi-occurrence of images with the following item-image attributes:

Image type
This attribute might be swatch, primary, alternate, room view, or other types.

Image URL
This attribute is a reference to the image location in the repository, preferably a thumbnail image that fits within the IBM® Product Master interface.
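A minimal, illustrative Java sketch of the multi-occurrence of item-image attributes described above; the type values and URLs are assumptions for the example:

import java.util.List;

// Illustrative only: each item carries a multi-occurrence of image links.
public record ImageLink(String imageType, String imageUrl) {

    public static void main(String[] args) {
        List<ImageLink> itemImages = List.of(
                new ImageLink("primary", "https://images.example.com/item-123/main.jpg"),
                new ImageLink("swatch", "https://images.example.com/item-123/swatch.jpg"));
        itemImages.forEach(image ->
                System.out.println(image.imageType() + ": " + image.imageUrl()));
    }
}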

Best practices
You need a business process for generating images. For example:

Capturing the image using high resolution.


Using an image server to process the images.
Storing the images in the image repository.
Notifying the imaging team of the location of the generated images along with the metadata.
Accessing the images from the Product Master Server system.
Creating an association among the images.


Extension attributes in the data model


Extension attributes are a category of attributes that represents a combination of attributes such as item-type attributes and other attributes.

The extension attributes include the following attributes:

Item-type attributes
Item-category attributes
Item-characteristic attributes

Item-type attributes
Item-type attributes are attributes that are applicable only to items of the same type. For example, collar size and sleeve size are attributes specific to shirts and waist and
length attributes are specific to pants.
You can model item-type attributes in one of two ways:

Option 1
You can attach a secondary spec by running a post-save script on certain item types. If you choose this approach, you can model item types and subtypes as a
hierarchical structure and attach item-category specs at the type and subtype level.
Option 2
Using a secondary hierarchy, you can create the categories that depict the item types. The item-type-specific attributes can be attached as item category specs.
When you set up the items, you can categorize the items in this hierarchy to obtain the additional set of attributes. This approach also provides a different way to
navigate the item assortment in the catalog. If you choose this option, you can attach the secondary spec directly, without needing to first define a separate
hierarchy.

When you define item types, do not model every variation as a different item type, for example, an MP3 player with audio compared to an MP3 player with audio and video.
These do not need to be separate item types; rather, model them as one item type, MP3, and define a set of attributes that indicate whether the player supports audio,
video, or both. You can also decrease the granularity by classifying the item into an audio item type and having an attribute to indicate whether it is an MP3 player. More
granular item types are more flexible, but the maintenance cost of the secondary specs is high.

Item-category attributes
The item-category attributes are used for extending an item's attribute set.
Item-category attributes are tied to an item's classification. For example, if an item is classified under Perishable goods, then an attribute for a best-before date is added.
With item-category attributes, you can dynamically extend an item's attribute set and avoid having one large primary spec for all items.

The data modeling of item-category attributes is similar to that of item-type attributes. You can model item-category attributes with a secondary hierarchy because the
classification hierarchy already exists. Each category within the hierarchy has its own item-category spec, which is assigned to the item upon classification.

Item-characteristic attributes
Item-characteristic attributes are based on a particular item characteristic and are used to extend an item's attribute set. These attributes are similar to item-type and
item-category attributes.
For example, if an item is marked as sellable then certain attributes such as sales channel might need to be added.

Refrain from modeling item-characteristic attributes, because they require a granular form of defining dynamic attributes. Instead, you can assign such attributes
to all items and move the requirements into data validations. For example, if an item is marked as sellable, then make sure that at least one sales channel is selected,
or return a validation error. In this case, the sales channel attribute is visible to all items regardless of whether they are sellable, but it needs to be populated only
for sellable items.
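The sellable rule from the previous paragraph can be expressed as a simple data validation. The following is a minimal Java sketch of that rule, with hypothetical field names; a real implementation would live in the product's validation framework:

import java.util.List;

// Illustrative sketch of the "sellable requires a sales channel" validation.
public final class SellableValidation {

    /** Returns a validation error, or null when the item is valid. */
    public static String validate(boolean sellable, List<String> salesChannels) {
        if (sellable && (salesChannels == null || salesChannels.isEmpty())) {
            return "A sellable item must have at least one sales channel selected.";
        }
        return null; // non-sellable items may leave the attribute empty
    }

    public static void main(String[] args) {
        System.out.println(validate(true, List.of()));      // error message
        System.out.println(validate(true, List.of("Web"))); // null: valid
        System.out.println(validate(false, List.of()));     // null: valid
    }
}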


Location attributes in the data model


Location attributes store data that is specific to particular locations. For example, an item that is available for sale in California might require Prop 65 warning information.
Other states in America might not require the Prop 65 information, they might require some other information.

The location attributes store location data that pertains to certain locations. A specific location has the same location attributes. For example, the location attribute named
Australia can have location-specific data pertaining to the Australian continent.

You can create location-specific attributes to handle different location-specific requirements. For example, for the Asia-pacific region, you can define an attribute named
Asia-pacific.

As a solution architect, you can determine only how many attributes and locations are required for the implementation. You cannot determine the total number of
attributes and locations that are supported by the product; if this limit needs to be set, it is set by the product engineering team. For example, you can have around 3,500
locations and 40 attributes per location for a single item.

If you add location attributes to product data, the data load requirements on IBM® Product Master increase significantly. It is important to get a better understanding
of whether the location attributes apply to a higher location grouping, such as a region, or are at a more granular level, such as a store. This understanding helps you to
determine whether the location use case can be handled by the Product Master location hierarchy. You can push location-level attributes into the InfoSphere® Advanced
Edition server, or you can keep them within their ERP or legacy systems.

Enhanced support for location-specific data


Support for thousands of separate locations and their associated product information is provided. This function is critical for companies that maintain product information
that is specific to a location, such as pricing, terms and conditions, and promotions.

Known issues and limitations


The following is the list of known issues and limitations for the Location data page.

Multi-occurrence attributes are not supported.


Sub-spec attributes are not supported.


Product classifications in the data model


The product classifications process involves classifying or grouping products. You can classify products based on a set of common categories and custom classification
based on business requirements.

Global Product Classification (GPC) is one of the forms of product classification. GPC is a set of common categories that provides a common language to buyers and sellers
for grouping products in the same manner globally. GPC is developed, owned, and used by the GS1 user community. GPC is mandatory for global data synchronization.

GPC works by defining a hierarchy that starts with industry sectors or segments. In GPC, a brick defines categories of similar products. You can define bricks by brick attributes.

The product classifications in the PIM system involve the following hierarchies:

Merchandising hierarchy
Merchandising hierarchies contain product categories. Each retailer has three merchandising hierarchies per banner:

Sellable merchandise hierarchy


Supplies hierarchy
Capital expenditure goods hierarchy

The sellable merchandise hierarchy is the primary Merchandising hierarchy and is typically the one maintained for a PIM repository. It is used to group sellable items into
product categories for merchandise planning, forecasting, procurement, allocation, and replenishment.

Location
A location is any legal, functional, or physical location within a business or organizational entity. It can be any point of sale for trade items. Locations generally include
corporate headquarters, divisional offices, stores, and distribution centers but might even include a brand. Each location can be represented by a Global Location Number
(GLN).
GLN is the EAN.UCC-mandated globally unique 13-digit identification number for a location (per the EAN International website). An example of a GLN: 7777777777777,
Albertsons Store #7122, 3132 Clement Street, San Francisco, CA 94121.
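As an aside on the GLN format, GS1 identifiers such as the GLN end in a mod-10 check digit that is computed from the preceding 12 digits with alternating weights of 1 and 3. The following Java sketch shows that standard computation; note that the sample GLN above is a documentation placeholder rather than a real, check-digit-valid number:

// Standard GS1 mod-10 check digit computation for a 13-digit number such as a GLN.
public final class GlnCheckDigit {

    /** Computes the check digit for the first 12 digits of a GLN. */
    public static int checkDigit(String first12Digits) {
        int sum = 0;
        for (int i = 0; i < 12; i++) {
            int digit = first12Digits.charAt(i) - '0';
            // Odd positions (1-indexed from the left) weigh 1; even positions weigh 3.
            sum += (i % 2 == 0) ? digit : digit * 3;
        }
        return (10 - (sum % 10)) % 10;
    }

    /** Validates a full 13-digit GLN against its trailing check digit. */
    public static boolean isValid(String gln) {
        return gln != null
                && gln.matches("\\d{13}")
                && checkDigit(gln.substring(0, 12)) == gln.charAt(12) - '0';
    }
}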

Location Hierarchy
Locations are organized in location hierarchies. There are three different location hierarchies:

Operations hierarchy: This is the primary location hierarchy and lists stores or warehouses by region. All PIM clients use this hierarchy for cost and price
management and financial reporting. All PIM clients at the retailer use this hierarchy.
Marketing hierarchy: This hierarchy lists stores or warehouses by membership in target markets. Marketing manager clients use it for promotions and promotional
circulars within regional target markets.
Geographic hierarchy: This hierarchy lists stores or warehouses by existence in cities, counties, and states. Finance manager uses the Geographic hierarchy for
managing regional and local tax rates. This hierarchy includes store departments because tax rates often vary by store department.

All three location hierarchies use the mapping of departments to stores. You can visualize the leaves of each one of these hierarchies as being associated to stores,
which contain the set of departments that are mapped to them. As mentioned previously, the SKUs are associated to the individual departments in individual stores.

You can set up the location hierarchies at a retailer in one of the two ways in relation to the different banners in the enterprise:

One set of location hierarchies per banner.


One set of location hierarchies for the enterprise with the banners as the first level of the location hierarchies.

Your choice depends on the way the organization is structured at the retailer's enterprise.

Vendor and supplier


A vendor is a seller of trade items to a retailer. A vendor is a WCC Party. Each vendor has one or more points of sale, shipment, or invoicing. Every vendor point of sale,
shipment, or invoicing can be represented by a GLN.
A supplier is the same as a vendor; the two terms are used interchangeably in this context.

There are multiple types of vendors:

Brand owner: establishes the global product information for a trade item for all information providers.
Information provider: provides the product information for a trade item to a retailer. As per the EAN.UCC, the information provider is the entity that provides the
global data synchronization network with master data. The data source owns this data. For a given item or party, the source of data is responsible for ensuring
permanent updates of the information.
Manufacturer: produces a trade item for a retailer either directly or through subcontractors. If a retailer acts as a manufacturer of products for sale in its stores, the
retailer is designated as a private label manufacturer.
Wholesaler or distributor: ships trade items to a retailer that are manufactured by another vendor. The wholesaler or distributor takes ownership of the trade
items.
Broker: ships trade items to a retailer that are manufactured by another vendor. The broker does not take ownership of the trade items.

The type of vendor is not important from a PIM data model point of view; the types are described here only to provide some context.

Vendor hierarchy
Vendor hierarchies allow you to define the vendor attributes and association of items to vendors.
For example:

Black and Decker Dust buster 7.2 volts, GTIN #12345678901234 is mapped to Vendor GLN 8888888888888, Black and Decker Baltimore Mfg. Plant.

There are four vendor hierarchies per retailer:

Supply chain
Factor relations
Offshore supply chain
EDI

The Supply chain hierarchy is typically the primary Vendor hierarchy. It defines the ordering, payment, and shipping information flows between a retailer and a vendor.

The Supply chain hierarchy is a location hierarchy. It has three levels:

Vendor corporate
Vendor manufacturing facility
Vendor distribution center

Trade items (not SKUs) are mapped to the Supply chain hierarchy to specify the following data:

Primary ordering location


Secondary ordering location
Primary shipping location
Secondary shipping location
Primary payment location
Secondary payment location

Example
For item 12345 (Black & Decker Dustbuster) in store 678 (Minneapolis):

The primary ordering location and primary shipping location is GLN 1234567890123 (also DUNS 12-345-6789), the Black and Decker Baltimore plant.
The primary payment location is Black and Decker HQ, GLN 5555666677777 (also DUNS 44-555-6666).
The secondary ordering, shipping, and payment location is GLN 223344556677889 (also DUNS 22-333-4444), the SuperValu Minneapolis Distribution Center.


Item relationships
When you model relationships, you relate items or entities together under a certain relationship such as relating two items for up-sell, or relating an item to its suppliers.

You might define a set of attributes on top of a relationship. For example, supplier cost can be an attribute on the item-supplier relationship because it signifies the
supplier cost of that particular item.

Volume considerations for three-way relationships


You must take note of the volume of the attribute relationship records. For example, take the case of one million items where an item can have a maximum of twenty
suppliers: you have a possibility of twenty million item-supplier records. If another dimension such as locations is added to the relationship, for example, fifty locations
per item, then the number increases to 1,000,000,000 item-supplier-location records.

This shows that the way that you model such relationships can make or break the implementation. You can reach volumes such as this when you model any relationship
that forms a three-way join of three attributes. The item-supplier-location relationship is a common example of a relationship that can create a high volume of records.
You must model relationships carefully. Because there is no single accurate approach, seek the advice of the IBM® Product Master services, engineering, and architecture
teams on whether the proposed model will be supported by Product Master and whether there are other ways of fulfilling the requirement.
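The arithmetic behind the example is worth making explicit; a few lines of Java show how quickly a third dimension multiplies the record count:

public final class RelationshipVolume {
    public static void main(String[] args) {
        long items = 1_000_000L;     // catalog size from the example
        long suppliersPerItem = 20L; // maximum suppliers per item
        long locationsPerItem = 50L; // locations added as a third dimension

        long twoWay = items * suppliersPerItem;                      // item-supplier records
        long threeWay = items * suppliersPerItem * locationsPerItem; // item-supplier-location records

        System.out.printf("item-supplier records:          %,d%n", twoWay);   // 20,000,000
        System.out.printf("item-supplier-location records: %,d%n", threeWay); // 1,000,000,000
    }
}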

Two-way relationships
You can model two-way relationships in various ways in Product Master:

With relationship attributes


With linked catalogs
With hierarchies
With multi-occurring attributes

The best approach for modeling a two-way relationship depends on the type of relationship and on whether it is an item-to-item relationship or an item-to-other-entity
relationship.

Item-to-item relationships
Item-to-item relationships are relationships between items. For example, you might have a relationship between the items in a bundle that are sold together for
promotional purposes.
Item-to-other-entity relationships
Item-to-other-entity relationships can be between an item and any other entity, such as a supplier, a client, or a location.


Item-to-item relationships
Item-to-item relationships are relationships between items. For example, you might have a relationship between the items in a bundle that are sold together for
promotional purposes.

In the example of items in a bundle, the individual items have separate product IDs. Because the items are bundled together, they are related to each other by the ID of
the bundle.

Product bundles
A product bundle is a collection of individual products that can be sold separately but are sold together for promotional purposes.
Item variations and differentiators
A product might have variants in terms of the size, color, and other attributes.
Cross-sell and up-sell products
Cross-sells and up-sells are industry-specific terms for relationships between a product and those products that a merchant suggests to buyers of that product.
Accessory products
An accessory is a product that complements the current item.
Substitute and replacement products
A substitute is a product that is comparable and similar in functionality to the current item. A replacement is a product that supersedes the current item.


Product bundles
A product bundle is a collection of individual products that can be sold separately but are sold together for promotional purposes.

For example, a digital camera Get More bundle might include a digital camera, a memory stick, and a camera case. All these items might be sold individually, but they can
also be sold together for a discount to entice the buyer to purchase three products, instead of just one of the products.

Any scenario that requires you to group products so that they are represented by a single entity can be considered a bundle. The names might vary across verticals and
serve different purposes; for example, promotions and functional groupings are other names for grouping products under a single entity. In the automotive industry, the
concept of kits represents a group of redesigned parts that are sent with an order. You can group such products and model them as bundles.

A bundle is a product itself. It has its own attributes, such as ID and description. In addition to the regular attributes, it has pointers to the individual products that it
contains, which creates an item-to-item relationship. The individual products themselves are items in the catalog and have their own IDs, descriptions, and other
attributes. You can categorize a bundle as you can any other product. You can also apply validation rules for combining certain sets of items.

From a modeling perspective, the bundle is defined as a product with a multi-occurrence of relationship links to other constituent products within the same catalog.
Depending on the type of bundle, homogeneous or heterogeneous, you can set validations that identify what types of products can be valid constituents of the bundle.
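A hedged Java sketch of that modeling decision (the class and field names are illustrative and do not correspond to the Product Master API): the bundle is itself an item whose extra attribute is a multi-occurrence of links to constituent items in the same catalog, with a validation hook for the allowed constituents:

import java.util.List;

// Illustrative model only; the names do not correspond to the Product Master API.
public record BundleItem(
        String id,                       // the bundle has its own identifier
        String description,              // ...and its own regular attributes
        List<String> constituentItemIds  // multi-occurrence of relationship links
) {
    /** Example validation: a bundle must contain at least two distinct constituents. */
    public boolean isValidBundle() {
        return constituentItemIds != null
                && constituentItemIds.size() >= 2
                && constituentItemIds.stream().distinct().count() == constituentItemIds.size();
    }
}

For instance, new BundleItem("GETMORE-01", "Digital camera Get More bundle", List.of("CAMERA-01", "MEMSTICK-02", "CASE-03")) models the camera bundle example above, with hypothetical item IDs.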


Item variations and differentiators


A product might have variants in terms of the size, color, and other attributes.

An enterprise might manufacture or sell items that are essentially the same but have some variation, such as color or size. For example, a blue iPod and a red iPod are
essentially the same product. To reduce the data entry effort and increase the accuracy of the product information, the enterprise typically groups these items into one
item family. This grouping allows reporting and analysis at the base item level. This concept is commonly referred to as item variations or differentiators.

The data model design for item variations depends on how much data can vary between the base item and the variant, and on the client requirements around searching
and assigning item identifiers.

Data modeling questions:

Item ID:
Does the base or variant item require an independent item identifier? For example, a SKU for iPod and a separate SKU for Blue iPod.
Search:
Should the base items or variants both be searchable?

Diff Val:
Can the variant have a different set of values, other than the distinguishing characteristics, compared to the base item? For example, can the variant have a separate
description, supplier ID, and characteristics?

The following table illustrates the modeling options based on a set of key requirements.
Table 1. Key requirements of modeling item relationships
Modeling key points | Base item: Item ID | Base item: Search | Variant: Item ID | Variant: Search | Variant: Diff val
Same catalog; separate items; relationship link between base and variant; variants cannot override base item attribute values | X | X | X | X |
Same catalog; base item has multi-occurring attributes for depicting variants | X | X | | | X
Separate catalog; variants in item master catalog; relationship link between base and variant | X | X | X | X |
Same catalog; separate items; relationship link between base and variant; variants can override base item attribute values | X | X | X | X | X
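The last row, where variants can override base item attribute values, implies a fallback lookup at read time: the variant's own value wins when present, and the base item's value applies otherwise. A minimal Java sketch of that resolution rule (the names are illustrative assumptions):

import java.util.Map;

// Illustrative only: variant attribute values override, base values are the fallback.
public final class VariantResolution {

    public static String resolve(Map<String, String> baseAttributes,
                                 Map<String, String> variantOverrides,
                                 String attributeName) {
        // The variant's own value takes precedence; otherwise fall back to the base item.
        return variantOverrides.getOrDefault(attributeName, baseAttributes.get(attributeName));
    }

    public static void main(String[] args) {
        Map<String, String> base = Map.of(
                "description", "iPod",
                "supplier", "Apple Computer, Inc.");
        Map<String, String> blueVariant = Map.of("description", "Blue iPod");

        System.out.println(resolve(base, blueVariant, "description")); // Blue iPod
        System.out.println(resolve(base, blueVariant, "supplier"));    // Apple Computer, Inc.
    }
}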


Cross-sell and up-sell products


Cross-sells and up-sells are industry-specific terms for relationships between a product and those products that a merchant suggests to buyers of that product.

A cross-sell is another product that is suggested to the buyer in addition to the current product that is being purchased. It might be in the same category as the current
product or in a completely different category. A cross-sell has no bearing on the price; it is based more on the relevance of the products being sold together. For example,
a merchant can suggest a high-resolution color printer when a digital camera is purchased, or a particular type of battery when a flashlight is purchased.

An up-sell is a relationship between a product and other products that are more expensive and belong to the same category. For example, a merchant can suggest a
larger screen television than the one the buyer is considering purchasing.

The data modeling requirement is to relate Product A to Product B and define this relationship to be meaningful to the business and be leveraged by other consuming
applications.

From a PIM perspective, these are static relationships and must be part of the item definition. The following is the recommended data modeling approach:

Scenario | Data modeling key points
< 10 related items | Use a multi-occurrence of item-to-item relationship attributes. Use a non-persistent attribute (NPA) on the relationship to show the item description.
> 10 related items | Use a custom tool to add, edit, and view the list of related items with their IDs, name or description, and a link to the item. You can construct similar relationships dynamically based on user behavior, sales, and other attributes.
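Cross-sells, up-sells, and the accessory, substitute, and replacement relationships in the following topics all reduce to the same shape: a typed link from one item to another. The following is a small, illustrative Java sketch of that shape (not a product API) that a custom tool could build on:

import java.util.List;

// Illustrative shape of typed item-to-item relationships; not a product API.
public final class ItemRelationships {

    public enum RelationType { CROSS_SELL, UP_SELL, ACCESSORY, SUBSTITUTE, REPLACEMENT }

    /** A typed link from one item to another, held as a multi-occurrence on the source item. */
    public record RelatedItem(String targetItemId, RelationType type, String displayName) {}

    public static void main(String[] args) {
        List<RelatedItem> digitalCameraLinks = List.of(
                new RelatedItem("PRINTER-01", RelationType.CROSS_SELL, "Color photo printer"),
                new RelatedItem("CASE-07", RelationType.ACCESSORY, "Camera case"));
        digitalCameraLinks.forEach(link ->
                System.out.println(link.type() + " -> " + link.displayName()));
    }
}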


Accessory products
An accessory is a product that complements the current item.

For example, a headset can be offered as an accessory with an MP3 player. An MP3 player might support one or multiple types of headsets, and all of the headsets are
considered accessories. An accessory is often less expensive than the main product.

This example is an item-to-item relationship because the accessory product and the main product are both items. The data modeling concepts are similar to those for
cross-sell and up-sell, where the main item has pointers to the related item (accessory).


Substitute and replacement products


A substitute is a product that is comparable and similar in functionality to the current item. A replacement is a product that supersedes the current item.

The concept of substitutes and replacements is widely used in different industries such as retail, automotive, and manufacturing. The terminology might change based on
industry standard or business definitions. For example, in the automotive industry the term interchange is used for substitute parts, and supersession is used for
replacement parts.

A substitute product can be used at point-of-sale to suggest alternates when an item is out of stock, and can also be used by the replenishment system for restocking
inventory if the maximum order quantity of the main item is reached.

A replacement product is assigned when the current item becomes temporarily or permanently unavailable. When a query for the purchase or replenishment of the
current item is received and the current item is not available, the replacement product is selected.

These examples are item-to-item relationships because the current product and the substitute and replacement products are all items. The data modeling concepts
are similar to those for cross-sell and up-sell, where the main item has pointers to the related item (substitute or replacement).


Item-to-other-entity relationships
Item-to-other-entity relationships can be between an item and any other entity, such as a supplier, a client, or a location.

Item-to-supplier relationships
In this relationship, an item is related to a supplier. A supplier is the entity that supplies the particular product to the enterprise.
Item-to-client relationships
In this relationship, an item is related to a client.
Item-to-location relationships
In this relationship, an item is related to single or multiple locations. Depending on the industry or business requirements, location can mean different things such
as warehouses, retail outlets, target markets, cities, or states.


Item-to-supplier relationships
In this relationship, an item is related to a supplier. A supplier is the entity that supplies the particular product to the enterprise.

The supplier entity is integral to the item's definition and can be defined in multiple ways: supplier, manufacturer, or distributor. Because the item-supplier relationship
is defined during item setup, the supplier entity must be in the data model so that the relationship can be established. You can model the supplier entity as a supplier
catalog or lookup table with a limited set of attributes, such as:

Supplier ID
Supplier name
Supplier status
Supplier contact

To update data, ensure that the data is read-only in the financial or merchandising application and that you can expose an interface in IBM® Product
Master. For example, an iPod is supplied by Apple Computer, Inc. Therefore, there is a relationship between the item (iPod) and the supplier (Apple Computer, Inc.).

On top of the item-supplier relationship, you can define a group of attributes such as supplier cost, supplier country, supplier packaging, and other attributes.

You can model the item-to-supplier relationship in one of two ways:

By modeling the supplier as a multi-occurrence attribute: You can model the supplier as a multi-occurrence attribute on the item and, within that multi-occurrence,
define a subset of attributes. This approach works well for a limited number of suppliers per item.
By modeling the supplier as a hierarchy: You can model the supplier as a hierarchy and map the item to a particular entity (category). This scenario works well for a
large set of suppliers per item, and is efficient in determining the list of items supplied by a particular supplier. However, this approach makes it difficult to
determine the suppliers for a given item from the user interface.

Use the following scenarios to determine the correct approach to modeling supplier data (a sketch of Scenario 1 follows the table):

Scenario 1: Fewer than 10 suppliers per item; item-supplier queries.
Data modeling main points:
Model the supplier as a catalog or a lookup table.
Limit the number of supplier attributes in Product Master.
Create an item-supplier relationship attribute on an item.
Create a multiple occurrence of the item-supplier relationship attribute.
Define an attribute subset under the item-supplier relationship.
Define an attribute subset to include the NPA attribute and to show the supplier name on the user interface.

Scenario 2: Fewer than 10 suppliers per item; item-supplier queries; supplier-item queries.
Data modeling main points:
Model the supplier as a hierarchy with each supplier entity as a category, and use the hierarchy to organize suppliers into groups.
Limit the number of supplier (category) attributes in Product Master.
Classify the item under a supplier to establish the item-supplier relationship.
Define the item-supplier attribute subset as an item hierarchy spec.

Scenario 3: More than 10 suppliers per item; item-supplier queries; supplier-item queries.
Data modeling main points:
Model the supplier as a catalog or a lookup table.
Create an item-supplier catalog or user-defined logs (UDLs) to improve performance.
Create a custom tool so that users can create an item-supplier relationship and enter values for the attributes.
Create a custom tool so that users can view the suppliers for a given item.
Create a custom tool so that users can view the items for a given supplier.
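
As a minimal sketch of Scenario 1, the following Java API code creates a supplier spec with the limited attribute set that is listed earlier in this topic. It assumes the same Java API that is used in the spec-creation samples later in this documentation; the spec name is illustrative.

Context ctx = PIMContextFactory.getContext(USERNAME, PASSWORD, COMPANY_NAME);
SpecManager manager = ctx.getSpecManager();

String specName = "Supplier Spec"; // illustrative name
PrimarySpec supplierSpec = (PrimarySpec) manager.createSpec(specName, Spec.Type.PRIMARY_SPEC);
supplierSpec.createAttributeDefinition(specName + "/Supplier ID", 0);
supplierSpec.createAttributeDefinition(specName + "/Supplier name", 1);
supplierSpec.createAttributeDefinition(specName + "/Supplier status", 2);
supplierSpec.createAttributeDefinition(specName + "/Supplier contact", 3);
supplierSpec.setPrimaryKey(specName + "/Supplier ID");
supplierSpec.save();

A catalog that is based on this spec then serves as the supplier entity, and the multi-occurring item-supplier relationship attribute on the item spec stores the Supplier ID of each related supplier.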

Item-to-client relationships
In this relationship, an item is related to a client.

You might need a client entity if you are a manufacturer or a distributor. You can assign product assortments to certain clients. For example, if you are a food distributor,
you might need to assign which clients or businesses will receive a particular item. For data modeling, the concept of the client entity is the same as the supplier entity.

Because you are likely to have more than ten clients per item, consider the modeling approach carefully. You can model the item-to-client relationship in one of two ways:

By modeling the client as a multi-occurrence attribute: You can model the client as a multi-occurrence attribute on the item and, within that multi-occurrence, define
a subset of attributes. This approach works well for a limited number of clients per item.
By modeling the client as a hierarchy: You can model the client as a hierarchy and map the item to a particular entity (category). This scenario works well for a large
set of clients per item, and is efficient in determining the list of items that are assigned to a particular client. However, this approach makes it difficult to determine the
clients for a given item from the user interface.

An item can have multiple features, and storing these features requires multiple attributes. For example, a printer can have feature 1: 1200 dpi resolution, and
feature 2: 15 ppm print speed. Features 1 and 2 are multi-occurrence attributes. To mark each multi-occurrence attribute (feature 1 and feature 2), you need an occurrence
variable (#abc), and you must mark both features with the same occurrence variable. When the client runs a search on a multi-occurrence attribute, values of
either feature 1 or feature 2 are returned for the printer item. You can design the rich search to match any or all occurrences of a multi-occurring attribute (feature).

Item-to-location relationships
In this relationship, an item is related to single or multiple locations. Depending on the industry or business requirements, location can mean different things such as
warehouses, retail outlets, target markets, cities, or states.

You can capture the relationship between an item and the location or locations to which the item belongs.

The item-to-location model is similar to the item-to-supplier model. Keep the location data read-only in Product Master and update it from a location master table
that resides outside of the PIM system.

Because an item is typically assigned to a large set of locations, often in the hundreds or thousands, you can optimize your data model to handle these relationships by
providing a location hierarchy in the data model. You can also define the item-location relationship for a subset of attributes.

Creating specs
You need to create a spec to specify the format in which you want the data to be stored, calculated, and managed within the Product Master Server solution.

The process of creating specs is similar for all spec types. This section covers the creation of primary specs, secondary specs, and sub specs.

Types of specs
Most typical solutions keep the number of specs at around 100. The number can reach into the hundreds as long as the individual specs remain small.
Creating primary specs
You can create primary specs to create a template or basic structure for creating the catalogs (items) and categories (hierarchies). For example, you would need a
primary spec to create a catalog of electronics items. Similarly, you would need a primary spec to create a hierarchy of electronic items such as computer
peripherals and home appliances.
Creating secondary specs
You can create secondary specs when you want to include attributes related to a specific category of a hierarchy and to define location attributes and
supplementary attributes associated with particular categories. For example, if you have a catalog of electronic home appliances, you might want to use a
secondary spec to add the named attribute screen width to all items under the TV category, or any other attribute specific to televisions.
Creating sub specs
A sub spec is a reusable spec that can be used as part of either a primary or a secondary spec. You can create sub specs to group together a set of attributes that
always occur together. For example, if you have a new attribute named value added tax (VAT) that must be added along with other tax-related attributes, you might
use a sub spec.
Attribute node modeling and naming guidelines for specs
The attribute node guidelines for specs include modeling guidelines, naming conventions for nodes, and conventions for abbreviating attribute and object names.
Validation rules for number, integer and currency attribute types
To better control the validation of number, currency and integer attribute types, you can use a set of facets.
Setting a date attribute type without creating a time stamp
By default, in IBM Product Master specs, date attribute types have an attached time stamp. For example, the date field value shows "3/2/2007 1:34 PM", but you
want to display just the date (3/2/2007) without the time (1:34 PM).

Internal representations of attribute types
All attributes are represented as Java™ types. See The Java Language Specification for details.
Specifying the length of attributes in specs
In general, keep the default value for the maximum length of an attribute in a spec, because space is not pre-allocated based on the maximum length. However,
there might be some situations when you want to adjust the value of the maximum length.
Specifying currency symbols
You can specify a currency symbol when you define a spec attribute of type Currency in the IBM Product Master environment.
Associating specs to objects
Associating the specs with objects involves using the specs when creating the objects. You can associate specs to catalogs and hierarchies so that you can create
items and categories. For example, you can create a spec named electronic items spec once. Then, you could create a catalog and hierarchy for electronic items by
using this spec. The spec functions as a template and you need not create the spec again.

Types of specs
Most typical solutions keep the number of specs at around 100. The number can reach into the hundreds as long as the individual specs remain small.

Going beyond 1000 specs requires extreme care and consideration because it dramatically increases the memory requirement on the system. A larger number of specs leads to
larger, more complex views that increase the memory footprint of individual users. This type of solution requires highly customized tuning of load balancers and garbage
collection settings.

In some cases, there are requirements to handle category-specific attributes, which might lead to thousands of specs. Ensure that you involve architects and engineers in
the design discussion. Consider alternative designs first; if no feasible alternative exists, analyze the potential implications.

The following specs are available: primary spec, secondary spec, sub spec, destination spec, file spec, lookup spec, and script input spec.

Primary spec
A primary spec is the data model that is used to define hierarchies and catalogs. A primary spec must have a primary key.
Secondary spec
A secondary spec is the data model that is used to define location attributes and supplementary attributes that are associated with particular categories. Secondary
specs do not have a primary key.
Sub spec
A sub spec is a reusable spec, which can be used as part of either a primary or a secondary spec, for example, to group a set of attributes that always occur
together.
Destination spec
A destination spec defines the structure of the data that is exported to a destination data file for a catalog or hierarchy export.
File spec
A file spec defines the structure of a data file from a data source to use for a catalog or hierarchy import.
Lookup spec
A lookup spec is the data model that is used to define a Lookup Table. A lookup spec must have a primary key.
Script input spec
A script input spec defines the structure of the parameters that are passed to a script API, especially a script that runs through the system user interface. For
example, a report script.

Note: For the Lookup table attribute type, the minimum and maximum length validators are not applicable because the value is a reference to an item ID; the lookup table
itself is a container that stores items.

Creating primary specs


You can create primary specs to create a template or basic structure for creating the catalogs (items) and categories (hierarchies). For example, you would need a primary
spec to create a catalog of electronics items. Similarly, you would need a primary spec to create a hierarchy of electronic items such as computer peripherals and home
appliances.

Before you begin


Ensure that you get from the solution architect the kinds of attributes that you want to include in the primary spec.
When defining primary keys in specs of type String, ensure that you limit the character length to 300. It is possible to enter primary keys with a String length of more than
300 characters; however, only the first 300 characters are used for uniqueness enforcement of an item within a catalog.

Procedure
Create a spec with any of the following methods: user interface, Java™ API, or script API. When you create a primary spec, you need to set a primary key.

User interface
a. Click Data Model Manager > Specs/Mappings > Specs Console. The Specs Console opens.
b. Click Primary.
c. Click the new spec icon in the Primary Spec row. Provide the required details for creating the spec.

Java API
The following sample code creates a primary spec.

Context ctx = PIMContextFactory.getContext(USERNAME, PASSWORD, COMPANY_NAME);
SpecManager manager = ctx.getSpecManager();

String specName = "New Primary Spec";
PrimarySpec pSpec = (PrimarySpec)manager.createSpec(specName, Spec.Type.PRIMARY_SPEC);
AttributeDefinition pkNode = pSpec.createAttributeDefinition(specName + "/PK", 0);
pSpec.setPrimaryKey(specName + "/PK");
pSpec.save();

Script API
The following sample script API creates a primary spec named My Primary Spec with three nodes.

var SPEC_NAME = "My Primary Spec";
var NODE_NAME1 = "pk";
var NODE_NAME2 = "str2";
var NODE_NAME3 = "int3";
var PRIMARY_NODE_PATH = SPEC_NAME + "/" + NODE_NAME1;

var priSpec = new Spec(SPEC_NAME, "PRIMARY_SPEC");
var rulenode1 = new SpecNode(priSpec, NODE_NAME1, 1);
rulenode1.setAttribute("TYPE", "STRING");
priSpec.setPrimaryKeyPath(PRIMARY_NODE_PATH);
var rulenode2 = new SpecNode(priSpec, NODE_NAME2, 2);
rulenode2.setAttribute("TYPE", "STRING");
rulenode2.setAttribute("MIN_OCCURRENCE", "1");
rulenode2.setAttribute("MAX_OCCURRENCE", "1");
rulenode2.setAttribute("INDEXED", "yes");
var rulenode3 = new SpecNode(priSpec, NODE_NAME3, 3);
rulenode3.setAttribute("TYPE", "INTEGER");
priSpec.saveSpec();
out.println("created spec: " + priSpec.getSpecName());

The primary spec is created. Users can view it in the Specs Console.

Creating secondary specs


You can create secondary specs when you want to include attributes related to a specific category of a hierarchy and to define location attributes and supplementary
attributes associated with particular categories. For example, if you have a catalog of electronic home appliances, you might want to use a secondary spec to add the
named attribute screen width to all items under the TV category, or any other attribute specific to televisions.

Before you begin


Ensure that you get from the solution architect the kinds of attributes that you want to include in the secondary spec.
Note: It is not possible to assign secondary specs to categories in a collaboration area for hierarchies.

Procedure
Create the secondary spec with any of the following methods: user interface, Java™ API, or script API. When you create a secondary spec, you do not specify a
primary key. In the secondary spec, specify the attributes that you could not include in the primary spec.

User interface
a. Click Data Model Manager > Specs/Mappings > Specs Console. The Specs Console opens.
b. Click Secondary Specs.
c. Click New and provide the required details for creating the spec.
d. Optional: Select the Add Specs to Children check box if you would like the spec to be added to the subcategories.
Selecting the Add Specs to Children check box affects only existing subcategories. New subcategories that you create after creating secondary
specs automatically inherit the secondary specs of the immediate parent category regardless of whether you select the Add Specs to Children check box.
e. Click Done.

Java API
The following sample Java API code creates a secondary spec.

Context ctx = PIMContextFactory.getContext(USERNAME, PASSWORD, COMPANY_NAME);
SpecManager manager = ctx.getSpecManager();

String specName = "New Secondary Spec";
Spec secSpec = manager.createSpec(specName, Spec.Type.SECONDARY_SPEC);
AttributeDefinition node1 = secSpec.createAttributeDefinition(specName + "/attr1", 0);
secSpec.save();

Script API
The following sample script API creates a secondary spec.

var SPEC_NAME = "My SecondarySpec";
var NODE_NAME1 = "str1";
var NODE_NAME2 = "int2";

var secSpec = new Spec(SPEC_NAME, "SECONDARY_SPEC");
var rulenode1 = new SpecNode(secSpec, NODE_NAME1, 1);
rulenode1.setAttribute("TYPE", "STRING");
var rulenode2 = new SpecNode(secSpec, NODE_NAME2, 2);
rulenode2.setAttribute("TYPE", "INTEGER");
secSpec.saveSpec();
out.println("created spec: " + secSpec.getSpecName());

The secondary spec is created. Users can view it in the Specs Console.

Creating sub specs


A sub spec is a reusable spec that can be used as part of either a primary or a secondary spec. You can create sub specs to group together a set of attributes that always
occur together. For example, if you have a new attribute named value added tax (VAT) that must be added along with other tax-related attributes, you might use a
sub spec.

Before you begin


Ensure that you get from the solution architect the types of new attributes that you want to accommodate in the primary and secondary specs.

Procedure
Create the sub spec by using the user interface or the Java™ API. If you add a new attribute to a sub spec, you need to add the sub spec again to
the primary spec to be able to see the new attribute in the primary spec.

User interface
a. Click Data Model Manager > Specs/Mappings > Specs Console. The Specs Console opens.
b. Click Sub-spec.
c. Click the new spec icon in the Sub Spec row. Provide the required details for creating the spec.

Java API
The following sample Java API creates a sub spec named New Sub Spec.

String subSpecName = "New Sub Spec";

Context m_ctx = PIMContextFactory.getContext(USERNAME, PASSWORD, COMPANY_NAME);
SpecManager manager = m_ctx.getSpecManager();
Spec subSpec = manager.createSpec(subSpecName, Spec.Type.SUB_SPEC);
AttributeDefinition node1 = subSpec.createAttributeDefinition(subSpecName + "/attr1", 0);
subSpec.save();

The sub spec is created. Users can view it in the Specs Console.

Attribute node modeling and naming guidelines for specs


The attribute node guidelines for specs include modeling guidelines, naming conventions for nodes, and conventions for abbreviating attribute and object names.

Attribute node modeling guidelines for specs


When modelling attribute nodes, use the following guidelines:

Implement sub specs if you have a large data model and require a data dictionary with reusable attribute definitions.
Model string enumerations as a lookup table if you need translation or security.
Implement lookup tables instead of string enumerations. Lookup tables do not store the chosen value; they store a reference to it. Lookup tables therefore allow you
to change values without doing mass updates on the objects that use them.
Define an attribute with a unique value only if it is absolutely necessary, because uniqueness enforcement impacts performance.
Do not mark all attributes as indexed in the specification, especially when large data volumes are stored in the PIM solution.

Attribute node naming guidelines for specs


Use the following guidelines when naming the spec attribute nodes:

Use only alphabetic characters in the attribute names even though certain special characters are permitted.
Do not include the spec name in the attribute names.

Abbreviations guidelines for objects and attributes


Use the following guidelines to create abbreviations for objects and attributes (a code sketch that applies these rules follows the list):

Do not abbreviate attribute or object names unless a name becomes difficult to read because it is too long; in that case, create a shorter standardized abbreviation.
Shorten the name by abbreviating each word of the data item name from right to left; abbreviate only what is required to facilitate readability.
Maintain and refer to a list of standard corporate abbreviations when the need for an abbreviation is identified.
If the abbreviation for the word does not exist in the list of standard corporate abbreviations, then use the following guidelines to create an abbreviation:
Do not abbreviate words less than six characters long.
Do not abbreviate acronyms and abbreviations further.
Include the first character of each word in the abbreviated name.
Delete vowels instead of consonants. However, do not delete the first letter of each word even if it is a vowel.
Delete one letter if an abbreviation results in double letters (for example, tt). This practice eliminates typos.
Drop the special characters and derive the abbreviation for the resulting word if a word has special characters. For example, the word in-transit is
abbreviated to intransit. Abbreviations of a single word will not contain hyphens or other special characters.
When you create a new abbreviation add the new abbreviation to the corporate list of standard abbreviations for future reference.
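
The following toy Java method sketches how these rules might be applied mechanically to a single word. It is illustrative only: it does not consult a corporate abbreviation list, and it does not work through a multi-word name from right to left.

// Toy sketch of the abbreviation guidelines above; illustrative only.
static String abbreviate(String word) {
    // Do not abbreviate words less than six characters long.
    if (word.length() < 6) {
        return word;
    }
    // Drop special characters, for example in-transit -> intransit.
    String w = word.replaceAll("[^A-Za-z]", "");
    // Keep the first character even if it is a vowel; delete other vowels.
    StringBuilder sb = new StringBuilder();
    sb.append(w.charAt(0));
    for (int i = 1; i < w.length(); i++) {
        if ("aeiouAEIOU".indexOf(w.charAt(i)) < 0) {
            sb.append(w.charAt(i));
        }
    }
    // Delete one letter if the abbreviation results in double letters (tt -> t).
    return sb.toString().replaceAll("(.)\\1", "$1");
}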

Validation rules for number, integer and currency attribute types


To better control the validation of number, currency and integer attribute types, you can use a set of facets.

The mandatory and optional facets are:

Total number of digits
Fraction digits
Maximum Value (exclusive)
Maximum Value (inclusive)
Minimum Value (exclusive)
Minimum Value (inclusive)

If the mandatory facets are not used, the attributes are validated as before. It is not recommended to introduce these new facets on existing specs.

Number and currency attribute types

The number and currency attribute types have the following mandatory facets:

Total number of digits
Fraction digits

The number and currency attribute types have the following optional facets (zero or more):

Maximum Value (exclusive)
Maximum Value (inclusive)
Minimum Value (exclusive)
Minimum Value (inclusive)

You must use both of the mandatory facets to be able to enforce the new validation rules for number and currency attribute types. If you use both of the mandatory
facets, Maxlength and Precision are not used for validation. In addition, you can use one or more of the optional facets.

Restriction: The optional facets are used for validation if and only if both of the mandatory facets are used.

Attributes of type integer

The integer attribute type has the following mandatory facet:

Total number of digits

The integer attribute type has the following optional facets:

Maximum Value (exclusive)
Maximum Value (inclusive)
Minimum Value (exclusive)
Minimum Value (inclusive)

You must use the mandatory facet to be able to enforce the new validation rules for integer attribute types. If you use the mandatory facet, Maxlength is not used for
validation. In addition, you can use one or more of the optional facets.

Restriction: The optional facets are used for validation if and only if the mandatory facet is used.
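
As a worked illustration of these rules, the following plain Java helper applies the facets to a candidate value. The helper name and signature are hypothetical; this is a sketch of the checks described above, not the product implementation.

import java.math.BigDecimal;

// Hypothetical helper that mirrors the facet rules; illustrative only.
static boolean isValid(BigDecimal value, int totalDigits, int fractionDigits,
                       BigDecimal minInclusive, BigDecimal maxInclusive) {
    if (value.scale() > fractionDigits) {
        return false; // more fraction digits than the Fraction digits facet allows
    }
    if (value.precision() > totalDigits) {
        return false; // more digits overall than the Total number of digits facet allows
    }
    if (minInclusive != null && value.compareTo(minInclusive) < 0) {
        return false; // below Minimum Value (inclusive)
    }
    if (maxInclusive != null && value.compareTo(maxInclusive) > 0) {
        return false; // above Maximum Value (inclusive)
    }
    return true;
}

For example, with Total number of digits = 5 and Fraction digits = 2, the value 123.45 passes, while 1234.56 fails because it has six digits in total.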

Setting a date attribute type without creating a time stamp


By default, in IBM® Product Master specs, date attribute types have an attached time stamp. For example, the date field value shows "3/2/2007 1:34 PM", but you want to
display just the date (3/2/2007) without the time (1:34 PM).

About this task


Changes to the Date Input Format and Date Output Format fields (Home > My Settings > General settings > User Select) apply globally and do not allow finer control of
individual date attribute types.
When you create a spec, follow these steps to set each date attribute type in Product Master to Date only, which eliminates the time stamp.

Procedure
1. For the attribute, in the Type list, select Date.
2. From the list, select Date only and then click the + sign.
3. In the details window, select the Date only checkbox, and click Save.

Results
Now when this attribute is displayed in the user interface or in search results, only the date (without the time stamp) is displayed.
Important:

The data is always stored with the time stamp. This specification affects the display of the data in the user interface only.
The date attribute format supports only a value range of 100 - 9999 years.
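
To illustrate the display-only nature of this setting, the following standard Java snippet (a generic illustration, not product code) formats a stored timestamped value as a date only:

java.util.Date stored = new java.util.Date();          // the stored value keeps its time stamp
java.text.DateFormat dateOnly = new java.text.SimpleDateFormat("M/d/yyyy");
System.out.println(dateOnly.format(stored));           // displays, for example, 3/2/2007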

Internal representations of attribute types


All attributes are represented as Java™ types. See The Java Language Specification for details.

The internal representations for Lookup table and Relationship are undefined. Most attributes are represented as strings (in Java, String), except the following:
Table 1. Internal representations
Attribute type          Internal representation
Integer                 Java int
Number                  Java double
Number enumeration      Java double
Currency                Java double
Flag                    Java boolean
Date                    Java Date
Sequence                Java int

Specifying the length of attributes in specs


In general, keep the default value for the maximum length of an attribute in a spec, because space is not pre-allocated based on the maximum length. However, there
might be some situations when you want to adjust the value of the maximum length.

Note: After you add an item or category, when you provide values for its attributes, string length is measured in bytes, not characters; characters can be
multi-byte. The maximum string length is 5000 single-byte English characters, or as few as 1332 characters when multi-byte characters are used.
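Because the length is measured in bytes, multi-byte characters consume more of the limit than single-byte English characters. The following standard Java snippet (a generic illustration) shows the difference:

String ascii = "abc";                 // 3 characters, 3 bytes in UTF-8
String accented = "caf\u00e9";        // 4 characters ("café"), 5 bytes in UTF-8
int byteLength = accented.getBytes(java.nio.charset.StandardCharsets.UTF_8).length; // 5
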
For example, if you have downstream applications that consume data from InfoSphere® MDM Server for PIM and there are limitations on the maximum size of data in one
or more of those applications, you might want to put restrictions on the amount of data that a user can enter by setting a smaller value on the maximum length of the
corresponding attribute as defined in InfoSphere MDM Server for PIM. The attribute types that you would likely need to customize the maximum length in this example are
Integer, Number, Password, Rich Text and String.

In the table below, "default maximum length" refers to any of the following:

The default value that is presented in the spec definition screen.

For script operations, it is not necessary to provide a value on the MAXLENGTH attribute. Therefore, you do not need to call Node::setAttribute on the
MAXLENGTH attribute.
For Java™ API, it is not necessary to provide a value on the MAXLENGTH attribute. Therefore, you do not need to call setValue method on the
AttributeDefinitionProperty for the attribute MAXLENGTH.

Table 1. Default maximum length of attributes in a spec

Binary: The maximum length allowed for a URL or a name of a local file.
Currency: See Validation rules for number, integer and currency attribute types for more information.
Date (Date only): Any positive number. Recommendation: keep the default maximum length.
External content reference: Any positive integer. Recommendation: keep the default maximum length.
Flag: Set it to be at least 5. Recommendation: keep the default maximum length.
Image: The maximum length allowed for a URL or a name of a local file.
Image URL: The maximum length allowed for a URL.
Integer: See Validation rules for number, integer and currency attribute types for more information.
Lookup table: Any positive integer. Recommendation: keep the default maximum length.
Number: See Validation rules for number, integer and currency attribute types for more information.
Number enumeration: The total number of digits, including the decimal separator, that can be entered for the attribute. If the "Total number of digits" and "Fraction digits" facets are used, the Maximum Length and Precision attributes are ignored. Recommendation: keep the default maximum length because the value is selected from a predefined list of number enumerations.
Password: The maximum length allowed for the text if the "Single Line String" facet is specified. Otherwise, control characters are counted in the total length. For example, the line termination control characters (\r\n for carriage return and line feed) are counted as 2 characters.
Relationship: Any positive integer. Recommendation: keep the default maximum length.
Rich text: The maximum length allowed for the rich text (markup tags are counted as characters that contribute to the maximum length as well).
Sequence: Any positive integer. Recommendation: keep the default maximum length.
String: The maximum length allowed for the text if the "Single Line String" facet is specified. Otherwise, control characters are counted in the total length. For example, the line termination control characters (\r\n for carriage return and line feed) are counted as 2 characters.
String enumeration: The maximum length allowed for the text if the "Single Line String" facet is specified. Otherwise, control characters are counted in the total length. Recommendation: keep the default maximum length because the value is selected from a predefined list of string enumerations.
Thumbnail image: The maximum length allowed for a URL or a name of a local file.
Thumbnail image URL: The maximum length allowed for a URL.
Time zone: Set it to be long enough to allow saving the internal string representation of a time zone. Recommendation: keep the default maximum length.
URL: The maximum length allowed for a URL.
The table above provides limits only on the number of characters that a user can enter when editing the attribute. Setting a maximum length on certain attribute types can
also imply a limit on the actual value. For example, a maximum length of 2 for an Integer limits the number that the user can enter to the range -99 to 99, inclusive.
However, the internal representation never changes regardless of the maximum length. For example, even when the maximum length is 2, the value is still held as a 32-bit
integer although a single byte would be sufficient; and when the maximum length is 20, the value is still held as a 32-bit integer, which means that the maximum length in
this case does not prevent the user from entering a number that is outside the valid range.

Specifying currency symbols


You can specify a currency symbol when you define a spec attribute of type Currency in the IBM® Product Master environment.

About this task


When you define a spec attribute of type Currency, a Currency Symbol list is enabled to specify the currency symbol to use. For example, the $ symbol. The Currency
Symbol list displays only one option, None, by default. This list is populated at the company level.

Procedure
1. Navigate to Data Model Manager > Security > Company attributes.
2. From the Available Currencies list, select the required options and then click the arrow to move the options to the Selected Currencies list.
3. Click Save.
The symbols of the currencies in the Selected Currencies list are displayed in the Currency Symbol list in the spec attribute definition page.
4. Select a suitable symbol.
5. Click Save.

Associating specs to objects
Associating the specs with objects involves using the specs when creating the objects. You can associate specs to catalogs and hierarchies so that you can create items
and categories. For example, you can create a spec named electronic items spec once. Then, you could create a catalog and hierarchy for electronic items by using this
spec. The spec functions as a template and you need not create the spec again.

Before you begin


Ensure that you know the types of specs that are available: primary spec, secondary spec, sub spec, destination spec, file spec, lookup spec, and script input spec. Also,
ensure that you know which spec can be associated with which object (for example, a catalog or a hierarchy).

About this task


To associate a spec, create an object by using the spec that you want to associate with the object. For example, create a catalog named computers by using the default
catalog spec.

Similarly, you can use the hierarchy spec while creating the hierarchy.

For example, to associate a spec to a hierarchy, click Product Manager > Hierarchies > New Hierarchy and select the spec that you want to associate when you create a
hierarchy.

Procedure
Associate a hierarchy with a spec by using any of the following methods: user interface, Java™ API, or script API. You need to specify the hierarchy name and select a
primary spec, a display attribute, a path attribute, a hierarchy type, and the access control group when you create the hierarchy. The display attribute is the spec node in
the primary spec whose value is displayed for each category when the hierarchy is shown in the left pane. Select the spec that you want to associate with your hierarchy.

User interface
a. Click Product Manager > Hierarchies > New Hierarchy. The Create Hierarchy page opens.
b. Provide the required details for creating the hierarchy.
c. Select the spec that you want to associate.
d. Click Save.

Java API
The following sample code creates a new hierarchy and associates the hierarchy with a primary spec.

Sample 1: Creating a category hierarchy and associating the hierarchy with a primary spec.

Context ctx = PIMContextFactory.getContext(USERNAME, PASSWORD, COMPANY_NAME);
HierarchyManager manager = ctx.getHierarchyManager();
AccessControlGroup acg = ctx.getOrganizationManager().getAccessControlGroup(ACG_NAME);
SpecManager specManager = ctx.getSpecManager();
PrimarySpec primarySpec = (PrimarySpec)specManager.getSpec(PRIMARYSPEC_NAME);
AttributeDefinition displayAttribute = primarySpec.getAttributeDefinition(DISPLAY_ATTRDEF_PATH);
AttributeDefinition pathAttribute = primarySpec.getAttributeDefinition(PATH_ATTRDEF_PATH);
Hierarchy hierarchy = manager.createHierarchy(primarySpec, "newHierarchy", pathAttribute, acg, displayAttribute);
hierarchy.save();

Sample 2: Creating an organization hierarchy and associating the hierarchy with a primary spec.

Context ctx = PIMContextFactory.getContext(USERNAME, PASSWORD, COMPANY_NAME);
OrganizationManager manager = ctx.getOrganizationManager();
AccessControlGroup acg = manager.getAccessControlGroup(ACG_NAME);
SpecManager specManager = ctx.getSpecManager();
PrimarySpec primarySpec = (PrimarySpec)specManager.getSpec(PRIMARYSPEC_NAME);
AttributeDefinition displayAttribute = primarySpec.getAttributeDefinition(DISPLAY_ATTRDEF_PATH);
AttributeDefinition pathAttribute = primarySpec.getAttributeDefinition(PATH_ATTRDEF_PATH);
OrganizationHierarchy hierarchy = manager.createOrganizationHierarchy(primarySpec, "newHierarchy", pathAttribute, acg, displayAttribute);
hierarchy.save();

Script API
The following sample script API creates a hierarchy with the name My Hier and associates the hierarchy with My Primary Spec as the primary spec.

var HIER_NAME = "My Hier";
var HIER_SPEC_NAME = "My Primary Spec";
var DISPLAY_NODE_NAME = "str2";

var hierSpec = getSpecByName(HIER_SPEC_NAME);
var dispAttrNode = hierSpec.getNodeByPath(HIER_SPEC_NAME + "/" + DISPLAY_NODE_NAME);
var hmOptArgs = [];
hmOptArgs["displayAttribute"] = dispAttrNode;
hmOptArgs["pathAttribute"] = dispAttrNode;

var hier = new CategoryTree(hierSpec, HIER_NAME, hmOptArgs);
var errs = hier.saveCategoryTree();
out.println("Created the hier: " + HIER_NAME + " ---- errs: " + checkString(errs, ""));

Users can view the newly created hierarchy that is associated with the primary spec in the Hierarchy Console by clicking Product Manager > Hierarchies > Hierarchy Console.

Creating hierarchies
You can create a hierarchy to classify items. A hierarchy consists of a set of categories. For example, you can create a hierarchy of books that categorizes the books into
different categories such as fiction, non-fiction, drama, poetry, technical books, and non-technical books. A category hierarchy is used by catalogs to classify items, while
an organization hierarchy is used to manage IBM® Product Master users.

Before you begin


Ensure that you have created the primary spec to be used for categories in this hierarchy. Also, ensure that you get from the solution architect the type of hierarchy
(category hierarchy or organization hierarchy) to create.

Procedure
Create the hierarchy with any of the following methods: user interface, Java™ API, or script API. You need to specify the hierarchy name and select a primary spec, a
display attribute, a path attribute, a hierarchy type, and the access control group when you create the hierarchy. The display attribute is the spec node in the primary spec
whose value is displayed for each category when the hierarchy is shown in the left pane.

User interface
a. Click Product Manager > Hierarchies > New Hierarchy. The Create Hierarchy page opens.
b. Provide the required details for creating the hierarchy.
c. Click Save.

Java API
The following sample code creates a hierarchy.

Sample 1: Creating a category hierarchy.

Context ctx = PIMContextFactory.getContext(USERNAME, PASSWORD, COMPANY_NAME);
HierarchyManager manager = ctx.getHierarchyManager();
AccessControlGroup acg = ctx.getOrganizationManager().getAccessControlGroup(ACG_NAME);
SpecManager specManager = ctx.getSpecManager();
PrimarySpec primarySpec = (PrimarySpec)specManager.getSpec(PRIMARYSPEC_NAME);
AttributeDefinition displayAttribute = primarySpec.getAttributeDefinition(DISPLAY_ATTRDEF_PATH);
AttributeDefinition pathAttribute = primarySpec.getAttributeDefinition(PATH_ATTRDEF_PATH);
Hierarchy hierarchy = manager.createHierarchy(primarySpec, "newHierarchy", pathAttribute, acg, displayAttribute);
hierarchy.save();

Sample 2: Creating an organization hierarchy.

Context ctx = PIMContextFactory.getContext(USERNAME, PASSWORD, COMPANY_NAME);
OrganizationManager manager = ctx.getOrganizationManager();
AccessControlGroup acg = manager.getAccessControlGroup(ACG_NAME);
SpecManager specManager = ctx.getSpecManager();
PrimarySpec primarySpec = (PrimarySpec)specManager.getSpec(PRIMARYSPEC_NAME);
AttributeDefinition displayAttribute = primarySpec.getAttributeDefinition(DISPLAY_ATTRDEF_PATH);
AttributeDefinition pathAttribute = primarySpec.getAttributeDefinition(PATH_ATTRDEF_PATH);
OrganizationHierarchy hierarchy = manager.createOrganizationHierarchy(primarySpec, "newHierarchy", pathAttribute, acg, displayAttribute);
hierarchy.save();

Script API
The following sample script API creates a hierarchy with the name My Hier, My Primary Spec as the primary spec, and str2 as the display and path attribute node.

var HIER_NAME = "My Hier";
var HIER_SPEC_NAME = "My Primary Spec";
var DISPLAY_NODE_NAME = "str2";

var hierSpec = getSpecByName(HIER_SPEC_NAME);
var dispAttrNode = hierSpec.getNodeByPath(HIER_SPEC_NAME + "/" + DISPLAY_NODE_NAME);
var hmOptArgs = [];
hmOptArgs["displayAttribute"] = dispAttrNode;
hmOptArgs["pathAttribute"] = dispAttrNode;

var hier = new CategoryTree(hierSpec, HIER_NAME, hmOptArgs);
var errs = hier.saveCategoryTree();
out.println("Created the hier: " + HIER_NAME + " ---- errs: " + checkString(errs, ""));

Users can view the newly created hierarchy in the Hierarchy Console by clicking Product Manager > Hierarchies > Hierarchy Console.

Creating catalogs
You can create the catalog for storing information about items. For example, you can create a catalog of books for storing information about the books.

Before you begin


Ensure that you get from the solution architect information about the type of catalog to create.
Note: Use a non-default ACG for the catalog instead of the default ACG, for better control in managing multiple permissions.

Procedure
Create a catalog by using any of the following methods: user interface, Java™ API, or script API. You need to specify the catalog name and the spec name when you create
the catalog.

User interface
a. Click Product Manager > Catalogs > New Catalog.
b. Click the new catalog icon.
c. Provide the required details for creating the catalog.
d. Click Save.

Java API
The following sample Java API code creates a catalog.

Context ctx = PIMContextFactory.getContext(USERNAME, PASSWORD, COMPANY_NAME);
HierarchyManager hierarchyManager = ctx.getHierarchyManager();
Hierarchy hierarchy = hierarchyManager.getHierarchy("BasicAPICategoryTree");
SpecManager specManager = ctx.getSpecManager();
PrimarySpec spec = (PrimarySpec)specManager.getSpec("BasicAPISpec");
CatalogManager catalogManager = ctx.getCatalogManager();
Catalog catalog = catalogManager.createCatalog(spec, "TestCatalog", hierarchy);
catalog.save();

Script API
The following sample script API creates a catalog.

var CTG_NAME = "My Catalog";
var CTG_SPEC_NAME = "My Primary Spec";
var HIER_NAME = "My Hier";
var DISPLAY_NODE_NAME = "str2";

var ctgSpec = getSpecByName(CTG_SPEC_NAME);
var hier = getCategoryTreeByName(HIER_NAME);
var dispAttrNode = ctgSpec.getNodeByPath(CTG_SPEC_NAME + "/" + DISPLAY_NODE_NAME);
var hmOptArgs = [];
hmOptArgs["displayAttribute"] = dispAttrNode;

var ctg = new Catalog(ctgSpec, CTG_NAME, hier, hmOptArgs);
ctg.saveCatalog();
out.println("Created the ctg: " + ctg.getCtgName());

The catalog is created. Users can view it in the Catalog Console.

Defining linked attributes


Ensure that you are familiar with the following characteristics of linked attributes.

Defining linked attributes


Ensure that you are familiar with the following characteristics of linked attributes.

About this task


Attributes that are specified as links have the following characteristics and follow these linking rules:

When defining linked attributes, you can choose another catalog as the target catalog, or the same catalog can be both the source and the target.
The attributes can be of any type and can be multi-occurring.
A link between a source catalog item and a target catalog item is established by using the primary key of the target catalog item.
Any indexed attribute of the target catalog can be selected as the display attribute (also known as the Destination Attribute Name on the Catalog Attributes screen)
of the link attribute. This can also be the primary key or the display attribute of the destination catalog. The attribute value of the target item is used for the display of
link attribute values.
Note: The display attribute of the target item is never stored as the value of the link attribute; it is used only for display purposes. If the value of the display attribute is
empty or not set, the system displays the primary key value of the target item as the value of the link attribute by default.
The link attribute and the primary key of the target catalog should be compatible. The following rules define compatibility (a sketch follows this list):
The link attribute and the destination catalog's primary key should have the same internal type. See Internal representations of attribute types for the
representation of the different attribute types in the system.
For all of these types, the length of the link attribute should be equal to or greater than the length of the destination catalog's primary key attribute.
Where relevant, the precision of the link attribute should be equal to or greater than the precision of the destination catalog's primary key attribute.
Any validation that is enforced by the system on the type of the link attribute is still enforced.
The target catalog of a link attribute gathers the items and objects from the cache if the target catalog is a cached catalog.
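
The compatibility rules can be summarized with the following hypothetical Java helper (illustrative only, not a product API):

// Hypothetical helper that mirrors the compatibility rules above.
static boolean isLinkCompatible(String linkInternalType, String pkInternalType,
                                int linkMaxLength, int pkMaxLength,
                                int linkPrecision, int pkPrecision) {
    if (!linkInternalType.equals(pkInternalType)) {
        return false; // the link attribute and the target primary key must share an internal type
    }
    if (linkMaxLength < pkMaxLength) {
        return false; // link length must be >= the target primary key length
    }
    if (linkPrecision < pkPrecision) {
        return false; // where relevant, link precision must be >= the target primary key precision
    }
    return true;
}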

Procedure
1. Create a primary spec with at least one attribute that is marked as Link.
2. Create a catalog, which is based on this primary spec, for example, Catalog1.
3. Create another catalog, Catalog2, which is going to serve as the target catalog. The Catalog2 primary key should be type compatible with the Link attribute of
Catalog1, as defined above.
4. Open the Catalog Attributes screen for Catalog1.
5. Select the link attribute from the Src Attribute Name drop-down menu.
6. Select Catalog2 from the Catalog drop-down menu as the target in the Link Attributes section.
7. Select any attribute from the Destination Attribute Name drop-down menu. The Destination Attribute Name drop-down menu contains the indexed attributes
defined in the primary spec of the Catalog2.

Creating views
You can create views to provide a more efficient or task-specific view of items, and to group attributes that are related to a specific data entry or data maintenance
process. You can create multiple views of the same catalog, and you can create views that are shared by multiple users. For example, a general view to see all attributes, a
marketing view to see only marketing-related attributes, and a technical view to see only technical attributes.

Before you begin
Ensure that you created the catalog or hierarchy for which you are creating the view. Also, ensure that the attribute collections are created and have the appropriate
attributes in them.

Procedure
1. Use any one of the following methods: user interface, Java™ API, or script API.
You can create a catalog view and a hierarchy view, and you can display a set of attributes in tabs.

To create a view, provide details of the container (catalog or hierarchy) for which the view is being created and the name of the view. Select the attribute
collections that you want to edit or view from the screens in the PIM system when you create the views.

User interface
To create a catalog view:
a. Click Product Manager > Catalogs > Catalog Console.
b. From the Catalog Console pane, select a catalog, and click Views.
c. In the Catalog View pane, provide a name for the view in the Catalog View Name field, and click Next.
d. Specify the attribute collections that you want to view or edit in the screens of the PIM system. You can apply the attribute collection to the following screens:
Edit Item - The catalog view applies if you open an item in the single-edit page.
Bulk Edit - The catalog view applies if you open an item in the multi-edit page.
Item List - Applies if you open an item from Product Manager > Selections.
Note: You must first create a selection on the catalog.
Important: The application currently does not support the Item Popup and Item Location attribute collections.
e. Click Save.
To create a hierarchy view:
a. Click Product Manager > Hierarchies > Hierarchy Console.
b. From the Hierarchy Console pane, select a hierarchy, and click Views.
c. In the Hierarchy View pane, provide a name for the view in the Hierarchy View Name field, and click Next.
d. Specify the attribute collections that you want to view or edit in the screens of the PIM system. You can apply the attribute collection to the following screens:
Category Edit - The hierarchy view applies if you open a category in the single-edit page.
Category Bulk Edit - The hierarchy view applies if you open a category in the multi-edit page.
e. Click Save.
To create a tab view:
You can create tabs for screens in both catalog views and hierarchy views. To display attribute collections in tabs:
a. Click Tab View > New.
b. In the tab detail pane, provide a name for the view in the Tab Name field.
c. Select the attribute collections to be displayed in the tab, and click Save.
Java API
Sample 1: The following sample Java API code creates a catalog view.

Context ctx = PIMContextFactory.getCurrentContext();
CatalogManager manager = ctx.getCatalogManager();
Catalog catalog = manager.getCatalog("Catalog");
View view = catalog.createView("View");
ScreenView screenView = view.getScreenView(ScreenType.ITEM_SINGLE_EDIT);
Collection<AttributeCollection> attrColls = new ArrayList<AttributeCollection>();
AttributeCollection attrColl1 = ctx.getAttributeCollectionManager().getAttributeCollection("AttributeCollection");
AttributeCollection attrColl2 = ctx.getAttributeCollectionManager().getAttributeCollection("AttributeCollection1");
attrColls.add(attrColl1);
attrColls.add(attrColl2);
screenView.setEditableAttributeCollections(attrColls);

List<AttributeCollection> attrColls1 = new ArrayList<AttributeCollection>();
attrColls1.add(attrColl1);
ScreenViewFilter filter = screenView.addFilter("Filter");
filter.setAttributeCollections(attrColls1);
view.save();

Sample 2: The following sample Java API code creates a hierarchy view.

Context ctx = PIMContextFactory.getCurrentContext();
HierarchyManager manager = ctx.getHierarchyManager();
Hierarchy hierarchy = manager.getHierarchy("Hierarchy");
View view = hierarchy.createView("View");
ScreenView screenView = view.getScreenView(ScreenType.CATEGORY_SINGLE_EDIT);
Collection<AttributeCollection> attrColls = new ArrayList<AttributeCollection>();
AttributeCollection attrColl = ctx.getAttributeCollectionManager().getAttributeCollection("AttributeCollection");
attrColls.add(attrColl);
screenView.setEditableAttributeCollections(attrColls);
view.save();

Script API
Sample 1: The following sample script API creates a catalog view.

var catalog = getCtgByName("Catalog");
var view = new CtgView(catalog, "CatalogView");

var perms = [];
perms[0] = "E";
perms[1] = "V";
var attrGroups = [];
attrGroups[0] = "AttrGroup";
attrGroups[1] = "AttrGroup1";

view = view.setCtgView("ITEM_EDIT", attrGroups, perms);
view.saveCtgView();

var attrGroup1 = getAttrGroupByName("AttrGroup");
var attrGroups1 = [];
attrGroups1[0] = attrGroup1;

var tab1 = view.getNewCtgTab("Filter1", attrGroups1);
view.addCtgTab(tab1);
view.saveCtgTabs();

Sample 2: The following sample script API creates a hierarchy view.

var hier = getCategoryTreeByName("Hierarchy");
var view = new CtgView(hier, "HierarchyView");

var perms = [];
perms[0] = "E";
perms[1] = "V";
var attrGroups = [];
attrGroups[0] = "AttrGroup";
attrGroups[1] = "AttrGroup1";

view = view.setCtgView("CATEGORY_EDIT", attrGroups, perms);
view.saveCtgView();

var attrGroup1 = getAttrGroupByName("AttrGroup");
var attrGroups1 = [];
attrGroups1[0] = attrGroup1;

var tab1 = view.getNewCtgTab("Filter1", attrGroups1);
view.addCtgTab(tab1);
view.saveCtgTabs();

Sample 3: The following sample script API creates a tab view inside a workflow.

//Workflow creation starts
var sWFLName = ("Test Workflow");
var sACGName = ("Default");
var oWFL = new Workflow(sWFLName, "CATALOG");
oWFL.setWflAccessControlGroup(sACGName);
var oWFLName1Step1 = oWFL.createWflStep("GENERAL", "Step-1");

var saWFLName1Step1_AG_R = [];
saWFLName1Step1_AG_R.add("TestCtgAttrColl");

var saWFLName1Step1_AG_E = [];
saWFLName1Step1_AG_E.add("TestCtgAttrColl1");

var saWFLName1Step1_AG_V = [];
saWFLName1Step1_AG_V.add("TestCtgAttrColl");

//Assigning attribute groups to the step
oWFLName1Step1.setRequiredAttributeGroups("ITEM_EDIT", saWFLName1Step1_AG_R);
oWFLName1Step1.setEditableAttributeGroups("ITEM_EDIT", saWFLName1Step1_AG_E);
oWFLName1Step1.setViewableAttributeGroups("ITEM_EDIT", saWFLName1Step1_AG_V);

var stepPerformers = [];
stepPerformers.add("Admin");
oWFLName1Step1.setWflStepPerformerUsers(stepPerformers);
oWFL.getWflInitialStep().mapWflStepExitValueToNextStep("SUCCESS", oWFLName1Step1);
oWFLName1Step1.mapWflStepExitValueToNextStep("DONE", "Success");
out.writeln(oWFL.saveWfl());
//Workflow creation ends

//Step tab creation starts
var oContView = oWFLName1Step1.getWflStepView("ITEM_EDIT");
out.writeln(oContView);
var attrGrpNames = oContView.getCtgViewAttrGroupsList();
out.writeln(attrGrpNames.size());
var tabAttrGrps = [];

for (i = 0; i < attrGrpNames.size(); i++)
{
    out.println("getAttrGroupByName(" + attrGrpNames[i] + ")");
    attrGrp = getAttrGroupByName(attrGrpNames[i]);
    out.writeln(attrGrp);
    tabAttrGrps[i] = attrGrp;
}

var tab = oContView.getNewCtgTab("Test", tabAttrGrps);
oContView.addCtgTab(tab);

var tab1 = oContView.getNewCtgTab("Test1", tabAttrGrps);
oContView.addCtgTab(tab1);

oContView.saveCtgTabs();
oContView.saveCtgView();

out.writeln(oWFL.saveWfl());

Important: To enable the tab view, first save the workflow and then proceed with creating the tab view.
The view is created. Users can create, delete, and modify views through the Catalog Console.
2. Verify the view. From the upper right, select the view from the list.

Creating attribute collections


You can create attribute collections so that users can easily manage a large number of attributes. Attribute collections are groups of specs and attributes that
behave the same way in all contexts.

Before you begin


Ensure that you know the types of attribute collections that the user would require.
Note: The maximum size of the attribute collection name is 21 characters.

About this task


Users can also model data in an organized and efficient manner by using attribute collections. When you fetch and save an item or category by using an attribute
collection, only the attributes that are selected for a view are retrieved and saved. In this way, attribute collections can improve performance.

Procedure
Create the attribute collection. Use any of the following methods: User interface, Java™ API, or script API.
Option Description
a. Click Data Model Manager > Attribute Collections > New Attribute Collection.
b. Provide a name and description for the attribute collection.
User interface c. Click Save.
Note: When you are modifying an attribute collection, for example adding an attribute or spec, or removing an attribute or spec, the changes will
be saved before you click Save.

The following sample Java API code creates an attribute collection.

String specName = "Sample Primary Spec";


String acName = "Sample Attribute Collection";
String acDesc = "This is a sample Attribute Collection containing attributes from Sample Primary Spec";

try
{
Spec spec = m_ctx.getSpecManager().getSpec(specName);
Java API
AttributeCollection acSample = m_mgr.createAttributeCollection(acName);
acSample.save();

// acSample needs to be saved before adding attributes acSample.addAllAttributes(spec);


}
catch ( Exception e)
{
e.printStackTrace();
}
Script API
The following sample script API creates an attribute collection.

// The spec whose attributes are to be added to the new attribute collection
var specName = "Sample Primary Spec";
var spec = getSpecByName( specName );

// The description for the new attribute collection
var acDesc = "This is a sample Attribute Collection containing attributes from Sample Primary Spec";
// The name for the new attribute collection
var acName = "Sample Attribute Collection";

var sampleAC = new AttrGroup( acName, "GENERAL", acDesc);

// boolean: false - static attribute collection; true - dynamic attribute collection
sampleAC.addSpecToAttrGroup(spec, false);
The attribute collection is created. Users can view the new attribute collection in the Attribute Collection console.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Defining location attributes for entries


You can define location data for entries so that users can work with location-specific values. For example, if the price of an item varies from location to location, the user can maintain both an
item-specific price attribute and a location-specific price attribute.

Before you begin


Ensure that you configure the catalog to support location attributes for specific hierarchies.
Add the property enable_location_data=true to the common.properties file to display the location-specific attributes tab, as shown in the sample entry after this list.
The catalog on which the location data needs to be defined should be empty.
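For example, after the change, the relevant entry in the common.properties file looks like the following (a minimal sketch; only the property named above is shown, the rest of the file is unchanged):

# Display the location-specific attributes tab
enable_location_data=true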

Procedure
Define the location data by using any of the following methods: user interface, Java™ API, or script API.

User interface

a. In the left pane, right-click in the blank area under Catalog module.
b. Click Catalog Attributes.
c. Click the Define Location Specific Attributes tab in the Catalog Detail screen.
d. Specify the Catalog, Hierarchy, ACG, and Secondary specification for defining the location-specific data.
e. Click Inheritance Attribute Groups link to add the Location specific attribute collection.
f. Select the Attribute collection for the location-specific attributes and click Save.
g. Add an item to the Catalog and click Save. The Location Data tab is enabled.
h. Right-click a category, click Make Available, and then click Refresh. The right pane displays the attributes of the location-specific spec.
i. In the right pane, click the drop-down list, and select Inherit/Override.
j. Add the values for the spec attributes and click Save.

The EntryExplorer displays the attributes of the secondary spec.


Java API



The following sample Java API code defines the location data.

Context ctx = PIMContextFactory.getContext(USERNAME, PASSWORD, COMPANY_NAME);

// Get a reference to the catalog

CatalogManager catalogManager = ctx.getCatalogManager();

Catalog catalog = catalogManager.getCatalog(CATALOG_NAME);

// Get a reference to the hierarchy to be used as the location hierarchy

HierarchyManager hierarchyManager = ctx.getHierarchyManager();

Hierarchy locationHierarchy = hierarchyManager.getHierarchy(LOCATION_HIERARCHY_NAME);

// Get a reference to the location spec to be used


SpecManager specManager = ctx.getSpecManager();

SecondarySpec locationSpec = (SecondarySpec) specManager.getSpec(LOCATION_SPEC_NAME);

// Add a location data configuration.

LocationDataConfiguration locationConfig = catalog.addLocationDataConfiguration(locationHierarchy, locationSpec);

// Optionally, identify the attributes that are inheritable

AttributeCollectionManager attributeCollectionManager = ctx.getAttributeCollectionManager();

AttributeCollection locationAttributesCollection =
attributeCollectionManager.getAttributeCollection(LOCATION_SPEC_ATTRIBUTES);

Script
The following sample script API defines the location data.

var MY_CTG_NAME = "My Catalog";


//catalog with a location hierarchy
var MY_LOC_HIER = "My Location Hierarchy";
//hierarchy with defined locations
var MY_LOC_SPEC = "My Location Spec";
//secondary spec for locations
var MY_LOC_AC = "My Location Attribute Collection";
//collection of the location spec

var ctg = getCtgByName(MY_CTG_NAME);


var hier = getCategoryTreeByName(MY_LOC_HIER);
var locSpec = getSpecByName(MY_LOC_SPEC);
var secSpecAC = getAttrGroupByName(MY_LOC_AC);

//array of inheritable attribute collections for location attributes

var inhAttrGrps=[];
inhAttrGrps.add(secSpecAC);

//define location attributes


ctg.defineLocationSpecificData(hier, locSpec, inhAttrGrps);
ctg.saveCatalog();

//optional: make a location available for an item to populate data


var MY_LOC_PATH = "My Location";
//category path under the location hierarchy above
var MY_LOC_ITEM_PK = "MyLocItem";
//primary key of an item with location data
var dataLoc = hier.getCategoryByPath(MY_LOC_PATH,"/");
var dataLocItem = ctg.getEntryByPrimaryKey(MY_LOC_ITEM_PK);
if (!dataLocItem.isItemAvailableInLocation(dataLoc)) {
   dataLocItem.makeItemAvailableInLocation(dataLoc, true);
   var errs = dataLocItem.saveCtgItem();
}

Results
The location-specific data is defined. Users can view it in the Define Location Specific Attributes tab of the Catalog Detail screen.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Creating item relationships


You can create item relationships so that users can establish a relationship between two attributes from two catalogs, for example, between a product ID in one catalog
and an item code in another. One of the attributes must be a relationship type attribute.

Before you begin


Ensure that your relationship attributes are editable in the view of your catalog.



Procedure
Create the item relationship by using the user interface. You need to create two items, one with a primary key attribute and the other with an attribute of type
relationship. You create a relationship between the two items by linking the primary key attribute of the one item with the relationship attribute of the other.

User interface
a. In the left pane, click the Default Organization folder under the Catalog module.
b. In the left pane, click the first catalog.
c. In the right pane, from the catalog drop-down list, select the second catalog. You create a relationship from the attribute of the first catalog to the attribute of the second catalog.
d. Click the ? icon next to the relationship attribute. A window opens.
e. Browse and select the item that you want to relate to.
f. Click Save.

Scripting API
The following sample code creates a relationship between two items.

// Sample code to set the relationship between item1 and item2

var MY_CATALOG = "My Catalog";
var MY_PRIMARY_KEY1 = "item1";
var MY_PRIMARY_KEY2 = "item2";
var MY_REL_ATTR_PATH = "My Catalog Spec/rel8"; //the Relationship-typed node

var ctg = getCtgByName(MY_CATALOG);

//set item1 to have a relationship with item2
var item1 = ctg.getEntryByPrimaryKey(MY_PRIMARY_KEY1);
item1.setCtgItemRelationshipAttrib(MY_REL_ATTR_PATH, ctg, MY_PRIMARY_KEY2);
errs = item1.saveCtgItem();
out.println("errs: " + checkString(errs,""));

//verify item1 is set correctly to item2
var retId = item1.getEntryAttrib(MY_REL_ATTR_PATH);
var expId = ctg.getEntryByPrimaryKey(MY_PRIMARY_KEY2).getEntryId();
assertEquals.invoke(toInteger(retId),toInteger(expId));
Java™ API
The following sample code creates a relationship between two items.

CatalogManager mgr = m_ctx.getCatalogManager();

Catalog ctg1 = mgr.getCatalog("Catalog1");
Catalog ctg2 = mgr.getCatalog("Catalog2");

Item item1 = ctg1.getItemByPrimaryKey("pk1");
Item item2 = ctg2.getItemByPrimaryKey("pk2");

AttributeInstance attrInst = item1.getAttributeInstance(PS1_RELATIONSHIP2);
attrInst.setValue(item2);

item1.save();
The relationship is created between two items.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Creating lookup tables


You can create a lookup table and save lookup values to the primary spec so that users can select the lookup values from a drop-down list. Lookup tables are useful for
quick information retrieval and for storing small amounts of data. You can update a value once in the lookup table instead of performing a mass update of all the items
that use that value.

Before you begin


Ensure that you have created the lookup spec which is required to create a lookup table.

About this task


Lookup tables can be used:

to store auxiliary data during aggregations,
to populate output fields during syndications,
as an attribute type in the primary spec, secondary spec, or subspec,
for quick information retrieval,
to store small amounts of data,
to create standard tables, for example units of measure (UOM), currencies, or countries,
to store values that can be assigned to IBM® Product Master attributes, including attributes of the Global Data Synchronization feature, and
to validate data contained in specific item or category fields. For example, you might need to create a lookup table to store long financial values for the foreign-exchange department of a bank.

Lookup tables can hold content for:

Standard tables like units of measure (UOM), currencies or countries.



Custom replacements tables (for example, BK = Black and BL = Blue).
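For example, a simple replacement lookup table for the color codes above might contain the following rows:

Code Value
BK   Black
BL   Blue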

Procedure
Create a lookup table by using any one of the following methods: user interface, Java™ API, or script API.
You need to specify attributes and values when you create the lookup table. To create a lookup table, provide details such as the lookup spec and the lookup table name.

User interface
a. Click Product Manager > Lookup Tables > Lookup Table Console.
b. Click the new icon.
c. From the Select Type drop-down list, select Single String Key and click Select.
d. Select the lookup table spec to use.
e. Provide a name and other required details for the lookup table.
f. Click Next.

Java API
The following sample Java API code creates a lookup table.

Context ctx = PIMContextFactory.getCurrentContext();

LookupTableManager lookupTableMgr = ctx.getLookupTableManager();
SpecManager specMgr = ctx.getSpecManager();

LookupSpec lkpSpec = (LookupSpec)specMgr.getSpec("LookupSpec");

LookupTable lookupTable = lookupTableMgr.createLookupTable(lkpSpec, "MyLookupTable");

lookupTable.save();
Script API
The following sample script API creates a lookup table.

var lookupSpec = getSpecByName("Default_LookUp_Spec");
var lookuptable = new LookupTable(lookupSpec, "TestLookupTable");
The lookup table is created. Users can create, delete, and modify attributes and values of the lookup tables from the Lookup Table Console.
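As a rough usage sketch, the following script assigns one of the lookup table's valid values to a lookup-typed item attribute. It reuses getCtgByName, getEntryByPrimaryKey, saveCtgItem, and checkString from the other examples in these topics, plus setEntryAttrib, the setter counterpart of the getEntryAttrib call used in the item-relationship example; the catalog, item, and attribute names are hypothetical.

// Assign a valid lookup value ("BK") to a hypothetical lookup-typed attribute
var ctg = getCtgByName("My Catalog");
var item = ctg.getEntryByPrimaryKey("item1");
item.setEntryAttrib("My Catalog Spec/Color", "BK");
var errs = item.saveCtgItem();
out.println("errs: " + checkString(errs,""));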

Modifying lookup tables by adding new attribute values


You can only assign values from a list of valid values to an IBM Product Master attribute. If you need to assign a new value to a lookup type attribute, or modify an
existing value, then you need to add the value to the list of valid values in the lookup table, or modify the existing value in the lookup table.
Modify lookup tables to contain values for Global Data Synchronization attributes
You can only assign values from a list of valid values to a Global Data Synchronization attribute. If you need to assign a new value to a Global Data Synchronization
attribute, or modify an existing value, then you need to add the value to the list of valid values in the lookup table, or modify the existing value in the lookup table.
Modeling considerations for lookup values
Lookup values are a set of predetermined valid values or a rule-based list of valid values for a particular attribute.
Modeling considerations for lookup tables (and cached catalogs)
Depending on your usage scenario, you need to consider the different modeling perspectives for lookup tables and cached catalogs. Lookup tables and cached
catalogs are both cached, meaning, the entries or items in them are cached for quick access.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Modifying lookup tables by adding new attribute values


You can only assign values from a list of valid values to an IBM® Product Master attribute. If you need to assign a new value to a lookup type attribute, or modify an existing
value, then you need to add the value to the list of valid values in the lookup table, or modify the existing value in the lookup table.

Procedure
1. Identify the attribute that you want to modify.
2. Identify the spec that contains the attribute.
3. Identify the lookup table associated with the attribute in the spec.
4. Click Product Manager > Lookup Tables > Lookup Table Console to navigate to the Lookup Table Console.
5. Click the name of the lookup table that you want to update.

6. Click the edit icon to update the lookup table to reflect the new acceptable values for the attribute, or click the add icon to add a single entry in the lookup table, or the multiple-add icon to add multiple entries.
7. Edit an existing value, or add new values.
8. Click the save icon in the single entry window, or in the multiple entry window if you add new entries.
Note: You must select the check box for each entry if you are adding multiple entries.
9. Save the lookup table.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Modify lookup tables to contain values for Global Data Synchronization attributes



You can only assign values from a list of valid values to a Global Data Synchronization attribute. If you need to assign a new value to a Global Data Synchronization
attribute, or modify an existing value, then you need to add the value to the list of valid values in the lookup table, or modify the existing value in the lookup table.

Procedure
1. Identify the data pool attribute that you need to modify.
2. Identify the corresponding Global Data Synchronization attribute for the data pool attribute.
The mapping between Global Data Synchronization and data pool attributes can be found in the ~/etc/messaging/xml/demand/wwrev6 directory.
3. Identify the IBM® Product Master backend spec for the Global Data Synchronization attribute.
For global attributes, the spec is Global_Attributes_Spec and for local attributes, the spec is Global_Local_Attribute_Spec.
4. Identify the lookup table for each Global Data Synchronization attribute in the spec.
5. Click Product Manager > Lookup Tables > Lookup Table Console to navigate to the Lookup Table Console.
6. Click the name of the lookup table that you want to update.

7. Click the edit icon to update the lookup table to reflect the new acceptable values for the attribute, or click the add icon to add a single entry in the lookup table, or the multiple-add icon to add multiple entries.
8. Edit or enter the values in the code, description, and agency fields.
9. Click the save icon in the single entry window, or in the multiple entry window if you add new entries.
Note: You must select the check box for each entry if you are adding multiple entries.
10. Save the lookup table.
11. Restart the Global Data Synchronization feature.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Modeling considerations for lookup values


Lookup values are a set of predetermined valid values or a rule-based list of valid values for a particular attribute.

Lookup values can be shown as a values list in the UI. For example, lookup values might provide a list of valid countries, suppliers, or brands. Use the following scenarios
to determine the correct approach to modelling lookup values:
Scenario 1: < 20 values; no search by primary key; single attribute value selection.
Data modeling main points: Use a string enumeration or a string enumeration rule.

Scenario 2: > 20 values; search by primary key; refer to other attribute values before you make a selection.
Data modeling main points: Use a lookup table. Create multiple attributes in a lookup spec. Use an NPA attribute to show lookup value descriptions on the UI.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Modeling considerations for lookup tables (and cached catalogs)


Depending on your usage scenario, you need to consider the different modeling perspectives for lookup tables and cached catalogs. Lookup tables and cached catalogs
are both cached, meaning, the entries or items in them are cached for quick access.

You use lookup tables to perform search and replace functions within an item or category. You can also use lookup tables to validate data contained in specific item or
category fields. A catalog is a container that is used to store items. You can select a catalog and mark it cached to cache the items contained within that catalog. When
items in a catalog are cached, they are read from the database into memory.
If your business object requires access control group based security, reports, exports, imports, workflows, or pre- or post-processing scripts for lookup tables, you must
use cached catalogs.

Ensure that the number of items in a lookup table or in a cached catalog is kept to a minimum for optimal performance. If the number of items goes over 1000, you might
need to turn off caching for that catalog. The acceptable number of items or entries depends on the number of attributes for each lookup table item or cached catalog item.

There is a maximum limit to the size of these caches, as defined in the mdm-cache-config.properties file, for example, by the max_lookups_in_cache cache memory
parameter. All the caches that use the mdm-cache-config.properties file share the same resource, that is, the amount of memory available to a single JVM.
For optimal performance, ensure that the values you specify for the number of catalogs or lookup tables to cache are within these limits.
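For example, the relevant entry in the mdm-cache-config.properties file might look like the following; the parameter name is taken from this file, but the value is only an illustrative assumption:

# Illustrative value only; tune to the memory available to the JVM
max_lookups_in_cache=50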

Lookup table count and size



It is easy to forget that lookup tables are cached in memory. Increasing the size or count of the lookup tables consumes more memory for caching the tables, leaving less
memory for other operations. This in turn leads to frequent garbage collections and sluggish, unpredictable system performance. Also, a large number of lookup table
values in the single edit user interface affects the rendering times of the user interface, because large amounts of lookup table data are sent across the network to create
the drop-down lists.
Therefore, if you use a lookup table with a large number of items (in the hundreds) only as a validation reference table, consider another approach, such as using a simple
catalog and performing the lookup against that catalog instead of incurring the overhead of a lookup table. Since version 9.0.0, a catalog can be marked as cached too.
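As a rough sketch of the catalog-as-reference-table approach, the following script fragment reuses getCtgByName, getEntryByPrimaryKey, and getEntryAttrib from the other examples in these topics; the catalog name, primary key, and attribute path are hypothetical.

// Use a simple cached catalog as a validation/reference table
var refCtg = getCtgByName("Color Codes Catalog");

// Look up the entry whose primary key is the code to translate
var refEntry = refCtg.getEntryByPrimaryKey("BK");
if (refEntry != null)
{
   // Read the description attribute, for example "Black"
   out.writeln(refEntry.getEntryAttrib("Color Codes Spec/Description"));
}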

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Attaching files to a staging area


A staging area is similar to a distribution, for example, an FTP site or an email, in which a single directory in the document store is set aside for any files that are sent by
using that distribution. Those files can then be accessed at any time, through a script or manually.

Procedure
1. Click Data Model Manager > Staging Area.
2. Click Create new Staging Area.
3. Provide a name for your staging area in the Staging Area Name field. Your staging area displays in the Staging Area Console.

What to do next
After you create a report or export, your staging area is listed under the Destination column, because when you run the job, the output files go to that staging area for
later retrieval. When you create an export, report, or anything else that uses a distribution, files are sent to or placed in a directory. During the creation of an export or
report, you can select where to put those files; a staging area is listed as an option. If you select to have those files sent to the staging area, after running the export or
report you can access the files in the Document Store. The files are placed in the following directory: {company
name}/staging_area/{Name of staging area}/{files in the staging area}.
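For example, for a hypothetical company named Acme with a staging area named MonthlyExports, a file export.csv that is produced by an export job is stored at:

Acme/staging_area/MonthlyExports/export.csv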

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Defining multi-occurrence group labels


When working with multi-occurrence groups, you can select a child attribute for use as a multi-occurrence group label to provide more group occurrence information in
the single edit screen.

On the multi-occurrence group details screen, select the type value Multi_occurrences group label. This label enables you to locate a particular group occurrence in the
user interface without expanding all of the group occurrences to look for the particular one.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Creating objects for business processes


You create a workflow to manage the flow of work in a business process, and you create a collaboration area to hold the entries that are in the steps of the workflow. You
use workflows to define a set of business processes and rules, which are then applied to any IBM® Product Master entries.

A business process is a set of tasks that a client performs to meet a business need or resolve a business issue. For example, a set of tasks for introducing a new product is
a business process.

Solution architects need to understand the business needs of a company so that they can define the business processes for the company and build them in a workflow.
There are various methodologies available to design the business processes for conducting a set of tasks, for example, outside-in and user modelling.

Business processes and collaboration in IBM Product Master can be created by using multiple methods, including native workflows, custom screens, portal screens, and
file uploads. These topics focus on the workflow part of Product Master collaboration.

Defining use cases


Use cases are built to refine a set of requirements based on a role or task. Instead of the traditional list of requirements that may not directly address the use of the
solution, use cases group common requirements based on the type of role or goal. Use cases define what the users or roles are doing in the solution, a business
process defines how they perform those functions.
Creating business processes
Creating a business process is the accumulation of the understanding of how the business wants to perform this process and the set of use cases that define what
they do in our system. A business process always has a defined starting and ending point, and may have multiple starting or ending points dependent on the
process.



Creating collaboration areas
Creating collaboration areas includes building, reserving, and releasing items within the collaboration area.
Creating workflows
Using workflows allows stricter control of the Product Master server processes.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Defining use cases


Use cases are built to refine a set of requirements based on a role or task. Instead of the traditional list of requirements that may not directly address the use of the
solution, use cases group common requirements based on the type of role or goal. Use cases define what the users or roles are doing in the solution, a business process
defines how they perform those functions.

About this task


A use case represents the list of tasks that actors can perform and is directly related to the requirements of the business process. Use cases are a recognition of the
requirements that the project must achieve. To document a use case, define the purpose and requirements, provide an introduction, and list the different actors or roles
for a given scenario.

Procedure
1. Identify and define the actors.
a. Identify all the key users of the system. The key users are the ones for whom the system is being built.
b. Identify all the other users of the system, including automated systems and users in support and managerial roles.
Example of the actors for a use case for introducing a new product: To introduce any new product, an electronics store requires someone in a marketing role
to write a description of the item, and someone from the photo department to provide a photo if one is not provided by the vendor. Therefore, the list of user roles includes
the vendor, the marketing role, and the photographer role, and each plays a part in the use case for introducing a new product. Here, the vendor submits the product
for review, the marketing role introduces the new product, and the photographer takes the photo.
The task that the vendor does is its own use case. The task that the marketing person does is a separate, related use case. The task that the photographer does is a
third, related use case.

2. Match the user requirements to a use case and document the role names and descriptions for the role names by using the use case template.
The purpose of matching requirements to use cases is to provide a basis of communication between the clients and the solution developers. The use cases help
build the structure and provide the information for the business process that the solution developer uses to create a workflow.

Example of defining requirements: There are three user roles: the vendor, the marketing role, and the photographer. The marketing role is in charge of introducing
the product. Users in this role need to write the product information and use the product to ensure that they understand what the product does. The marketing
person is dependent upon the other user roles. The photographer is in charge of taking pictures of the product for hardcopy or softcopy. The vendor is in charge of
submitting the product for review.

Example
The following template is a sample from an industry standard source and can be used to document use cases.

Use case name: An active verb phrase that describes a particular task.
Subject area: A use role or other grouping mechanism that can be used to group use cases.
Business event: A trigger that stimulates activity within the business. Many business events occur at the interface point between the business and one of the external entities with which it interacts. Business events must be observable.
Actors: The actor that initiates this use case and all users who participate in this use case.
Use case overview: A description of the overall scope and content of the use case.
Preconditions: Constraints that must be met for the use case to be taken by the solution developer and used to create a workflow. This might include a required sequencing of use cases. For example, one or more other use cases might need to be performed successfully for this use case to begin.
Termination outcome: A list of the successful and unsuccessful ways this use case might end. What are the possible ending results?
Condition affecting termination outcome: A list of the conditions under which the corresponding termination outcome occurs.
Use case description: A brief description of events for the most likely termination outcome. List the actions the actor does and how the system responds.
Use case associations: A list of other use cases that are associated with this use case.
Traceability to: A list of other related documents, models, and products that are associated with this use case.
Input summary: A brief summary that lists the data input by the actor.
Output summary: A brief summary that lists the data output by the system.
Usability index: A number based on how this use case ranked in terms of satisfaction, importance, and frequency.
Use case notes: Information that is not directly part of this use case but that the solution developer needs to be aware of while working on the workflow.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)



Creating business processes
Creating a business process is the accumulation of the understanding of how the business wants to perform this process and the set of use cases that define what they do
in our system. A business process always has a defined starting and ending point, and may have multiple starting or ending points dependent on the process.

About this task


Unlike use cases, business processes have a definite time flow and show the users how they should perform their functions. Every step in a process has a performer
and a well-defined task, whether the performer is performing an action or making a decision. In our example, the solution helps the client to identify the steps that the
buyer must go through to purchase inventory and enter it into the system.

Business process for a clothing store


One of the main goals of a clothing store is to buy merchandise. A clothing store's business process enables its buying organization to buy more effectively. The
store has a buyer with a budget to buy the merchandise to be sold in the store. The buyer purchases the merchandise from the suppliers and must specify what was
bought to track inventory.
This example shows how the solution architect must model the process of purchasing inventory and adding it to a tracking system as steps in a workflow.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Creating collaboration areas


Creating collaboration areas includes building, reserving, and releasing items within the collaboration area.

Building collaboration areas by role


Depending on a user's authorization, each user is presented with a list of workflows or business processes in the collaboration area.
Reserving and releasing items
You can reserve, reserve and edit, and unreserve items in a particular workflow step. You can also perform actions on the items, to ensure that the item moves to
the next step in the workflow.
Creating a collaboration area
A collaboration area is a runtime instance of a workflow. You create a collaboration area so that users can work with items or categories in a workflow.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Building collaboration areas by role


Depending on a user's authorization, each user is presented with a list of workflows or business processes in the collaboration area.

About this task


You can have as many different collaboration areas as you want and divide each collaboration area any way that you want. You can gather information from use cases to
help define which products need to be viewed in a collaboration area by which performers.
You can separate performers by their security role. If the company has a certain class of users and they want those users to view a particular set of items, you can control
that access by relating the performer to the security model. The security model controls the different roles of the users and specifies what collaboration areas and items
those users are able to view and work with.

Procedure
1. Identify which performers need to work with which products.
2. Define a collaboration area for every performer-to-product match.

Example
An example of building a collaboration area by role is a clothing retailer with separate men's and women's divisions. All men's clothing goes through the men's
collaboration area and the women's clothing goes through the women's collaboration area. The workflow steps follow this flow: start > enrich data > approve data > finish.
The enrich data and approve data steps are handled by different groups of users. We call the men's enrichers M-Enr and the women's enrichers W-Enr. For the approvers,
the men's are called M-App and the women's are called W-App. We add a user we can call by name into each group, for example, Bob is an M-Enr, Sally is a W-Enr, Mike is an
M-App, and Jill is a W-App.

For the setup of the steps and roles, we put M-Enr and W-Enr as performers of the enrich data step, which means both Bob and Sally are allowed to perform that step.
Similarly, M-App and W-App are performers of the approval step, meaning both Mike and Jill are allowed to approve things. It might seem that we are allowing all
enrichment users to enrich all items, but that is just the first part of setting up the workflow.



When we create the collaboration areas, we assign them to different access control groups (ACGs). We start by building a men's ACG called M-ACG and a
women's ACG called W-ACG. M-Enr and M-App both have permissions to access items in M-ACG. Similarly, W-Enr and W-App have permissions in W-ACG. By building a
men's collaboration area called M-ColArea based on our workflow, only Bob and Mike have access to the items in it; similarly, for the women's collaboration area
W-ColArea, only Jill and Sally can see items in it.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Reserving and releasing items


You can reserve, reserve and edit, and unreserve items in a particular workflow step. You can also perform actions on the items, to ensure that the item moves to the next
step in the workflow.

About this task


You use the collaboration area rich searches to search workflow steps and to subsequently update the items that are used in a particular step of a workflow. The results of
your query return workflow steps and items that are checked out, or reserved, to a particular collaboration area.
You need to designate which roles at which steps use the reserve and release functionality. During the setup of a step, there is a check box that needs to be
enabled. The role that is the performer for that step then becomes the role that is able to use the reserve and release functionality.

Procedure
1. Click Data Model Manager > Workflows > Workflow Console. A list of workflows is displayed.
2. Select a workflow to edit. The Workflow Edit screen opens.
3. Select a step from the workflow. The Step Edit screen opens.
4. Select an item within the step. The Item Edit screen opens.
5. Select the Reserve Status check box or the Release Status check box to reserve or release an item. After an item is reserved you can edit the item.
6. Optional: Search on items based on status. Select the Reserve Status check box to search workflow steps based on their reserve status. The types of reserve status
include, Reserved by yourself, Available, and Reserved by other.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Creating a collaboration area


A collaboration area is a runtime instance of a workflow. You create a collaboration area so that users can work with items or categories in a workflow.

About this task


A collaboration area associates a workflow to a specific catalog or hierarchy. The entries belonging to that catalog or hierarchy are checked out to the collaboration area,
where they will move through a series of steps corresponding to the workflow associated with the collaboration area.
You can check out entries into a collaboration area and work with the entries through the workflow steps in the collaboration area. The checked-out version of the entry is
initially populated with the attribute values from the entry in the source container. You can also import new entries directly into the collaboration area. When you run an
import, you initially populate the attributes with the values provided from the import source.
Important: Ensure that you open the collaboration area and click Refresh whenever attributes (for example, catalog scripts or link attributes) are modified on the
associated source catalog.

Procedure
Create a collaboration area.
Use one of the following methods to create a collaboration area:
User interface
a. Click Collaboration Manager > Collaboration Areas > New Collaboration Area.
b. Provide a name for your collaboration area.
c. Select the workflow to provide the design of this collaboration area, and select a container (either a catalog or hierarchy) from the Container field. If you are working with items, you need to select a catalog. If you are working with categories, you need to select a hierarchy.
d. Assign an administrator to the collaboration area by selecting from the list in the Administrators field. Administrators are assigned to collaboration areas to maintain access to the workflow, and to make changes if an item is not passing a particular step. The collaboration area administrator can move items that are stuck in a particular step and ensure that all entries are able to progress through the steps of the collaboration area.
e. Specify the access control group for the collaboration area in the Access Control Group (ACG) field. The ACG on a given collaboration area controls which users are allowed to check out items to this collaboration area.



Java™ API
The following sample code creates an item collaboration area.

Context ctx = PIMContextFactory.getCurrentContext();

// get an existing catalog
CatalogManager ctgManager = ctx.getCatalogManager();
Catalog catalog = ctgManager.getCatalog("test catalog");

// get an existing workflow
WorkflowManager wflMgr = ctx.getWorkflowManager();
Workflow workflow = wflMgr.getWorkflow("test catalog workflow");

CollaborationAreaManager manager = ctx.getCollaborationAreaManager();

String collaborationAreaName = "test item collaboration area";

// create a collaboration area using a workflow and a catalog
CollaborationArea collaborationArea = manager.createItemCollaborationArea(collaborationAreaName, workflow, catalog);

// get an existing access control group
OrganizationManager orgManager = ctx.getOrganizationManager();
AccessControlGroup accessControlGroup = orgManager.getAccessControlGroup("Default");

Collection<Performer> performers = new ArrayList<Performer>();

// get an existing user and add it to the performers list
User user = orgManager.getUser("userA");
performers.add(user);

// get an existing role and add it to the performers list
Role role = orgManager.getRole("roleA");
performers.add(role);

// set access control group
collaborationArea.setAccessControlGroup(accessControlGroup);
// set administrators/performers for the collaboration area
collaborationArea.setAdministrators(performers);
// then save the collaboration area
collaborationArea.save();

Java API
The following sample code creates a category collaboration area.

Context ctx = PIMContextFactory.getCurrentContext();

// get an existing hierarchy
HierarchyManager hierManager = ctx.getHierarchyManager();
Hierarchy hierarchy = hierManager.getHierarchy("test hierarchy");

// get an existing workflow
WorkflowManager wflMgr = ctx.getWorkflowManager();
Workflow workflow = wflMgr.getWorkflow("test hierarchy workflow");

CollaborationAreaManager manager = ctx.getCollaborationAreaManager();

String collaborationAreaName = "test category collaboration area";

// create a collaboration area using a workflow and a hierarchy
CollaborationArea collaborationArea = manager.createCategoryCollaborationArea(collaborationAreaName, workflow, hierarchy);

// get an existing access control group
OrganizationManager orgManager = ctx.getOrganizationManager();
AccessControlGroup accessControlGroup = orgManager.getAccessControlGroup("Default");

Collection<Performer> performers = new ArrayList<Performer>();

// get an existing user and add it to the performers list
User user = orgManager.getUser("userA");
performers.add(user);

// get an existing role and add it to the performers list
Role role = orgManager.getRole("roleA");
performers.add(role);

// set access control group
collaborationArea.setAccessControlGroup(accessControlGroup);
// set administrators/performers for the collaboration area
collaborationArea.setAdministrators(performers);
// then save the collaboration area
collaborationArea.save();



Script
The following sample script creates an item collaboration area.

var catalog = getCtgByName("test catalog");
var workflow = getWflByName("test workflow");
var collaborationAreaName = "test item collaboration area";
var collaborationArea = new CollaborationArea(collaborationAreaName, workflow, catalog);

var accessControlGroupName = "Default";
var users = [];
users[0] = "userA";
var roles = [];
roles[0] = "roleA";

collaborationArea.setColAreaAccessControlGroup(accessControlGroupName);
collaborationArea.setColAreaAdminUsers(users);
collaborationArea.setColAreaAdminRoles(roles);
collaborationArea.saveColArea();
Script
The following sample script creates a category collaboration area.

var hierarchy = getCategoryTreeByName("test hierarchy");
var workflow = getWflByName("test workflow");
var collaborationAreaName = "test category collaboration area";
var collaborationArea = new CollaborationArea(collaborationAreaName, workflow, hierarchy);

var accessControlGroupName = "Default";
var users = [];
users[0] = "userA";
var roles = [];
roles[0] = "roleA";

collaborationArea.setColAreaAccessControlGroup(accessControlGroupName);
collaborationArea.setColAreaAdminUsers(users);
collaborationArea.setColAreaAdminRoles(roles);
collaborationArea.saveColArea();

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Creating workflows
Using workflows allows stricter control of the Product Master server processes.

A workflow consists of a set of steps that make up a single use case or business process. A collaboration area associates a workflow to a specific catalog or hierarchy.

To apply a workflow to the entries in a given container, a collaboration area is created for that workflow and that container. Entries in that container (the source container)
can be checked-out to that collaboration area, which means that they move through the steps that are defined in the workflow for each business process to be performed.
At the end of the workflow, the changed entry can be checked-in, which means the attribute values are copied back into the source container.

The following three roles are involved in creating workflows and collaboration areas:

Solution developers
The solution developer is responsible for designing the representation of the company's business processes. Typically this is a business analyst.
Collaboration area user
The user who performs the business process on the items and categories. There can be many different roles and users for this area.
Collaboration area administrator
The collaboration area administrator is responsible for checking out items and categories and to maintain the items and categories in a collaboration area. The
collaboration area administrator resolves any issues during the Fixit step.
Important: The Fixit step is only visible to a user who is a collaboration area administrator.

Workflows
A workflow represents a business process in the PIM application. By creating collaboration areas based on this workflow, users of IBM® Product Master can perform their
business processes by moving the entries through workflow steps.

Collaboration areas
A collaboration area implements a business process in the PIM application. The collaboration area can be used to operate task lists, display status of entries, and can also
be used for auditing and reporting.

A collaboration area is the application of a specific workflow to a catalog or hierarchy. A collaboration area is empty when it is created even if the container has entries.
Entries that need to be processed according to the workflow can be checked out into the collaboration area. After an entry is checked out, the performer of each step of
the process can modify and enrich the attributes of the entry according to the workflow that was used to create the collaboration area.
Important: Ensure that you open the collaboration area and click Refresh whenever attributes (for example, catalog scripts or link attributes) are modified on the
associated source catalog.

Relationship between workflows and collaboration areas


You can associate a single catalog or hierarchy with multiple workflows so that you have multiple collaboration areas with different rules. Multiple workflows are useful
when the same set of items needs to be processed by more than one set of rules. For example, the rules might allow the description of an item to be changed by the



supplier, and the rules might require that the descriptions not exceed a certain length and be approved by someone within the business. However, the price of an item
might have stricter controls than the description does for who can update it, and updates to the price might need more approval steps.

Not all attributes of an entry might be modifiable within a collaboration area. The list of modifiable attributes is computed from the workflow definition that is
associated with the collaboration area. Therefore, if multiple collaboration areas have mutually disjoint lists of modifiable attributes, an entry can be
checked out to all of them at the same time.

In this example, you might have one collaboration area that modifies only price and another collaboration area that modifies the description. In this case, an item can be
checked out for price update and description update at the same time. In addition, the item can be checked out into any other collaboration area that does not allow for
the update of the price or of the description.

Workflow steps
A workflow consists of a series of steps. Only an Admin or a user assigned Admin privileges should make changes to a workflow definition.
Workflow validations
A workflow validation validates the attributes that are associated with an entry, the edits that were made to the entry, and the overall process to successfully
complete a step. You perform a validation to check if all the required attributes of that step are populated.
Persistence of attribute values
The value of an attribute in a workflow may be persisted when its entry moves from one step to another in the workflow, or when the entry is finally checked in to its
source container. Whether or not an attribute is persisted is controlled solely by whether it is set editable in the corresponding workflow step. Whether the attribute
is in fact editable in the workflow step is also controlled by whether it is set editable in its entry's spec. The combined effects of these settings are described here.
Workflows versus custom tools
You can use the existing IBM Product Master solution for introducing new products to the market. The existing solution is based on the user's workflow. The
workflow reduces the time to introduce a new product to the market.
CEH logging and debugging
Collaboration entry history (CEH) logging enables you to track how a particular item is traveling throughout a workflow. You can also use CEH logging as historical or
audit information after an item has exited the workflow.
Creating a workflow
To create a workflow, you create the steps for the workflow, define the routing order for the steps, assign performers to the steps, and specify validation rules for
each step. Additional custom business processes and validations could be specified using the IN and OUT scripts associated with the workflow steps.
Creating a nested workflow
A workflow is a collection of steps that are routed. Every workflow defines a business process. Each step has certain performers and actions associated with it. A
nested workflow is a workflow with a small subset of steps that is connected within another workflow.
Multiple workflow processes
You can have multiple workflow processes in Product Master to improve the performance and share the load.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Workflow steps
A workflow consists of a series of steps. Only an Admin or a user assigned Admin privileges should make changes to a workflow definition.

Each step in a workflow defines a particular part of a business process. For example, you might have a "Set Pricing Info" step to allow your pricing department to enter
prices for each item. You might have a "Product Detailing" step where the marketing department adds product descriptions and images.

The Workflow Console provides a list of the workflows that are defined in the system. When you click Create new workflow, two main tables display: a New Workflow table and a
Workflow Steps table. You define the routing of an entry in the Workflow Steps table. The Workflow Steps table displays a list of the steps that were created for this workflow;
however, these steps are not yet organized. You must define the order that the steps need to follow for an entry to be completed successfully.

The Initial step in a workflow is always the first step by default. Across from the Initial step is where you define what the first step is in your workflow. You continue this
process until you are done creating a relationship between all of your steps within your workflow. Routing of steps is similar to creating a path for an entry to complete all
the steps that you created in your workflow. The steps are connected together based on exit values.
Note: Although you start with the Initial step by default, you can Import into other steps instead of starting with a checkout. You can select the Allow Import Into Step
check box to do this.
There is a constraint when you set up an ItemEdit and an ItemLocation attribute in a workflow. Because location attributes are meant to be different from global attributes,
the two attributes cannot share the same attribute collection. They must have different attribute collections.

Each step consists of a number of components:

One or more performers. These are the users or roles who are allowed to interact with entries in this step. The collaboration area administrator is automatically a
performer for all steps and does not need to be specified. However, if a step needs to be performed by the administrator of the collaboration area only, you still
need to assign at least one performer.
A set of viewable attributes. These are the attributes that can be viewed for reference in this step, but not edited.
A set of editable attributes. These attributes can be edited in this step. They must be valid according to the attributes' spec for the entry to be allowed to
leave the step.
A set of required attributes. These attributes can be edited in this step. They must be valid according to the attributes' spec and are required to have a value, even if
the spec does not enforce it, for the entry to be allowed to leave the step.
One or more exit values. Each exit value represents a route out of the step which the performer can send one or more entries out on. Each exit value will have one or
more associated next steps, to which the entries will move if sent to that exit value.
Optionally, an IN function and an OUT function may be provided in script or Java™; this is user-provided code that is issued against an entry when it enters or leaves
the step.

When an entry is checked-out to a collaboration area, it moves to the first workflow step for processing by the performer of that step. Ultimately, it will reach a step with an
exit value of SUCCESS (which causes the entry to be checked in to the source container) or FAILURE (which results in the checked-out entry being abandoned and the
source container unaffected).
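As a rough sketch, the following script fragment reuses the workflow-creation calls shown at the start of these topics (getWflInitialStep, mapWflStepExitValueToNextStep, saveWfl); the oWfl and oEnrichStep variables are hypothetical stand-ins for a workflow under construction and one of its user steps.

// Route checked-out entries from the Initial step to an enrichment step ...
oWfl.getWflInitialStep().mapWflStepExitValueToNextStep("SUCCESS", oEnrichStep);

// ... and route the step's DONE exit value to the Success step, so that
// entries leaving it are checked in to the source container
oEnrichStep.mapWflStepExitValueToNextStep("DONE", "Success");
out.writeln(oWfl.saveWfl());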

Types of steps
The types of steps that can make up a workflow include base system steps, user steps, and automated steps.



Performers for steps
A performer is a role or user who performs the action supported by the step.
Step logic in a workflow
You design the workflow so that users can check out an entry and interact with the entry, then check the entry back in. You need to create a flow of steps (routing
between steps) that follows the business process.
Workflow step extension points
Each workflow step can be enhanced with entry processing logic written by the solution developer.
Exit values in workflow steps
In order for an entry to move to the next step within a workflow, that entry needs to be edited and then assigned an exit value.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Types of steps
The types of steps that can make up a workflow include base system steps, user steps, and automated steps.

Base system steps


These steps are created automatically for each workflow. Base system steps cannot be deleted or renamed. They can be partially edited (for example to allow imports into
the step or to add script functions). The base system steps are:

Initial
A workflow always starts with an Initial step. Only one instance of an Initial step exists per workflow. When you check out an entry, the entry will arrive in the Initial
step, where it will be rerouted according to the mapped next step.
Success
If entries reach the Success step in a workflow, the system attempts to check the entries into the catalog or hierarchy that is connected to the collaboration area for
the workflow.
Failure
If entries reach the Failure step in a workflow, the system drops the entries from the collaboration area.
Fixit
This step is used to repair entries which have timed out or were invalid when a performer tried to move them out of a step. The collaboration area administrator can
edit all checked out attributes, fix any errors, and check in the entry or abandon the entry by "dropping" it from the collaboration area.

User steps
These are steps where a performer can interact with the entry before selecting an exit value. The user steps are:

And approval
This step requires all of the performers to approve an entry before it moves to the next step on the APPROVE exit value. If one approver rejects the entry, it will
move out on the REJECT exit value. The collaboration area administrator may unilaterally approve or reject the entry.
Or approval
This step requires only one of the performers to approve a record before it moves to the next step on the APPROVE exit value. If one approver rejects the entry, it
will move out on the REJECT exit value. The collaboration area administrator may also approve or reject the entry.
Dispatch
Use this step to determine which next step is to be taken, by selecting an appropriate exit value. Typically, no editing is done in this step, which is achieved by leaving the
editable and required attribute collections blank.
General
This step modifies a set of entries. You can modify this step to perform any of the user steps. The Reserve To Edit feature is disabled by default. This step has no
limitations.
Modify
This step modifies a set of entries. You can modify this step to perform any of the user steps. The Reserve To Edit feature is enabled by default. This step has no
limitations.

Automated steps
These steps are automated and will perform logic and trigger an exit on an exit value without any performer interaction. The automated steps are:

Automated
This step automates a task. The logic of this step is captured in the IN() and OUT() functions of the script.
Wait
Use this step if you want entries to wait for a user or script to move them to the next step. This step can also be used for checking entries back into the source
container at a specific date. For example, if you want the entries to be merged with your source container only on November 15, you insert a wait step with a
deadline of November 15 before the Success step. User logic is captured in the TIMEOUT function of the script. When the deadline triggers, the entry will exit on the
TIMEOUT exit value.
Make Unique
This step removes every other copy of an entry in other branches of the workflow (usually after a split). This step ensures that an entry that reaches this step is in this
step and this step only. You can organize your workflows so that users can check out one attribute of an item into a collaboration area in one workflow (for example,
English short description), and check out another attribute of the same item into a different collaboration area in another workflow (for example, French short
description).
Merge
A merge step ensures all of the incoming steps are completed for that entry. This step merges several steps after a split. If x steps point to the merge step, then x
copies of the entry must go through the merge step before this entry can move to the next step. Use the condenser step to reduce the number of incoming steps.
Note: The Admin user can move items out of a merge step even when all of the inputs to the merge have not arrived.



Condenser
A condenser step needs only one of its incoming steps to be complete for the entry to trigger an exit. This step reduces the number of entries that point to a merge
step. You reduce the number of entries that point to a merge step by pointing several steps to the condenser.
Interim Checkout
This step reverses the changes that were done to attributes in this workflow. The values of these attributes are gathered from the main catalog when an entry enters
this state.
Interim Checkin
This step allows you to apply the changes to the entry in the collaboration area back to the source container, similar to checkin. But unlike checkin, it does not
remove the entry from the collaboration area. Instead, further processing can be done on the item in the collaboration area and it can be later checked-in.
Nested workflow
This step includes another valid workflow as a step, which must be specified when the step is added. A nested workflow is a workflow that is contained within
another workflow as a nested workflow step.
The Initial, Failure, and Success steps of the nested workflow are created as extra automated steps within the main workflow. All user steps and any other
automated steps are also added as extra steps to the main workflow. Entries entering the nested workflow step will enter the Initial step of the nested workflow.

The exit values for this step are the same as the termination exit values for the included nested workflow. An entry arriving at the Failure step of the nested workflow exits the nested workflow step on the FAILURE exit value. An entry arriving at the Success step of the nested workflow exits the nested workflow step on the SUCCESS exit value. If there is more than one copy of an entry in the nested workflow, a make unique operation is performed on exiting the nested workflow step. Any extra copies of an entry within the nested workflow are deleted; however, copies of the entry in other steps of the outer workflow are unaffected. If the nested workflow step has a timeout deadline, a make unique operation is performed and the entry is moved out on the TIMEOUT exit value for the nested workflow step, regardless of its current location within the nested workflow.

Workflows can be nested to multiple levels; however, a cyclic workflow (A nests B, which nests A) is not supported.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Performers for steps


A performer is a role or user who performs the action supported by the step.

Performers are the only roles or users who can access the step. It is possible to combine roles and users in any step. If a user is within a role and both the user and role are
mapped to a step, the user will be able to act on behalf of the role.

Ensure that you identify the different roles that you need to build for the PIM solution. You can control the access to the workflow through the Access Control screen to
determine which roles can view, edit, or delete this workflow.

If a step involves user interaction, the Exit Value is the text displayed on the button that moves to the step that is mapped to the Exit Value. If a step does not involve user
interaction, each outcome in the script within the step must map to an Exit Value.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Step logic in a workflow


You design the workflow so that users can check out an entry and interact with the entry, then check the entry back in. You need to create a flow of steps (routing between
steps) that follows the business process.

Checking-out an entry
When an entry is checked-out to the collaboration area, IBM® Product Master evaluates the workflow definition and identifies the total list of editable and required
attributes across the workflow.
These attributes are "locked" in the source container entry and it is these attributes that are checked-out to the collaboration area. This means that other attributes are
not checked-out, therefore, you can create another collaboration area and check out other attributes on the same entry to a different collaboration area; provided there is
no overlap on the editable and required attributes across the two workflows.

Note: You are not allowed to modify the source entry in the catalog if its attributes are checked out as either editable or required. However, to modify an already checked-out attribute, you must explicitly disable the Collaboration Area Validation locks container processing option. To disable the Collaboration Area Validation locks option through scripting, use the disableContainerProcessingOptions script operation. To disable it through the Java™ API, call the setCollaborationAreaLocksValidationProcessing method with the value false.
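A minimal Script API sketch of disabling this option follows; the catalog name is hypothetical, and the argument form of disableContainerProcessingOptions is an assumption to verify against the Script API reference:

var ctg = getCtgByName("MyCatalog"); // "MyCatalog" is a hypothetical catalog name
// The argument form below is an assumption; the intent is to disable the
// "Collaboration Area Validation locks" container processing option for this catalog.
ctg.disableContainerProcessingOptions("COLLABORATION_AREA_LOCKS_VALIDATION");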

Order of the checked-out entries in a collaboration area step


The sequence of the checked-out entries is the order in which they were checked-out from the source container.

Interacting with an entry in a step


After an entry arrives in a step, the code that the workflow designer wrote (the IN() function) is run. If the next step is a user step, the entry is visible in the user interface.
When you click the entry, that step's performer (as defined by the workflow designer) can then edit those attributes of the entry that are allowed to be edited in that step.

When this is done, the performer can select an exit value for that entry or a group of entries in that step. The OUT() function logic, if there is any, is performed, and the
entry or entries move out along the specified exit value, to whatever next step, or steps the workflow designer specified for that exit value. If the defined timeout for the
step expires before the user triggers an exit, the TIMEOUT() function is run and the entry leaves on the TIMEOUT exit value instead.

Exiting a step
When an entry exits a step, Product Master evaluates the definition of the workflow to determine the next step or steps for the chosen exit value.
Product Master moves the entry to the next step. If there is more than one next step, a split is performed, which means that the entry now exists in two steps at once. A typical scenario for this is that you want your pricing and detailing processes to run in parallel without one department waiting for the other. After a split, a Merge step is necessary to recombine the copies of the entry. Alternatively, all but one copy can be discarded by using a Make Unique step. For example, if three departments perform the same task on a workflow entry, whichever department completes the task for a given entry first causes it to be moved to a Make Unique step, which deletes the item from the other departments' work queues. The split is done by the workflow automatically, as opposed to a split step that needs to be created by the workflow designer.

Checking-in an entry
At the end of a workflow, one of the steps has an exit value connected to a system-provided step such as SUCCESS or FAILURE.
If an entry is moved to SUCCESS, the changed attributes are copied back to the source container entry, and those attributes are unlocked on the source container entry. If
an entry is moved to FAILURE, the checked-out entry is deleted and the attributes are unlocked.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Workflow step extension points


Each workflow step can be enhanced with entry processing logic written by the solution developer.

This takes the form of a script, which will either be written in Script API or Java™ (using the IBM® Product Master Java API). Each script has three functions (which can be
left empty):

IN
Runs as soon as an item or category enters the workflow step.
OUT
Runs when an item or category exits the workflow step.
TIMEOUT
Runs if the item or category exceeds the timeout period for the workflow step.

You create the workflow step script when you create or edit a step.
Note: To handle unhandled step script errors, write the IN, OUT, and TIMEOUT step scripts to have the following outermost structure:

catchError(e) {
   // step logic goes here
}
if (e != null) {
   // error handling logic goes here
}

This structure lets you distinguish and handle errors that are raised by the step logic. However, in the TIMEOUT step script, do not attempt to change the exit value.

Workflow engine - transactions and events


Any operation on an entry in a collaboration area (such as release, reserve, move to next step, check in, and check out) is considered a workflow event. Certain workflow events are permitted to run inline, meaning that the events bypass the requirement to post to the database and are processed on the current Java virtual machine (JVM) within a transaction.
Caching workflow data
You can use Object Query Cache (OQC) to intercept queries that are used for a workflow object's construction with cached queries, thus enhancing workflow event
processing. This mechanism reduces the database use for each event that is processed by caching the workflow definition data.
Asynchronous and synchronous processing for workflow events
When an item is checked out into a collaboration area, a checkout event is posted to the workflow engine.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Workflow engine - transactions and events


Any operation on an entry in a collaboration area (such as release, reserve, move to next step, check in, and check out) is considered a workflow event. Certain workflow events are permitted to run inline, meaning that the events bypass the requirement to post to the database and are processed on the current Java™ virtual machine (JVM) within a transaction.

These workflow events are "posted" to the IBM® Product Master database and processed by a separate part of Product Master called the workflow engine. The workflow
engine is a separate JVM. This means that while an event might have been "posted", it will not take effect until it is picked up and "processed" by the workflow engine. In
some cases, Product Master processes the event immediately on the application server rather than posting the event to the workflow engine.

You can view the events that have taken place to an entry in a collaboration area so far by looking at the Collaboration Area Entry History.

The workflow engine picks up and processes an event only if it can see it, which means that the event must have been saved (posted) to the database. A transaction can exist only within one JVM; any work that is performed by the workflow engine cannot form part of a unit of work that is contained on the Product Master main JVM (the application server). This means that if the event posting forms part of a transaction, the event is not seen until the unit of work is committed.

When you request a check in, check out, reserve, release, or move to a different step, you are posting an event to the workflow engine. If this posting is part of a
transaction, the requested workflow event does not happen until the transaction is committed. This is known as asynchronous processing of the workflow event. However,
in some cases, Product Master processes the event immediately on the application server rather than just posting the event for later processing. This is known as
synchronous or "inline" processing of the workflow event, and in this case, the event takes effect immediately. When writing workflow step extension point logic containing
workflow event functions, you need to consider whether each operation is synchronous or asynchronous, to ensure the correct behavior of your script. For more
information, see Asynchronous and synchronous processing for workflow events.

Important: For the greatest efficiency in workflow event processing, ensure that each workflow event contains an optimized item volume so that the overhead associated with workflow event processing is amortized across the item collection. Also, ensure that workflow events are not posted at a rate faster than the rate at which the workflow engine can process them.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Caching workflow data


You can use Object Query Cache (OQC) to intercept queries that are used for a workflow object's construction with cached queries, thus enhancing workflow event
processing. This mechanism reduces the database use for each event that is processed by caching the workflow definition data.

The cache size and the cache refresh interval are configured in the following common.properties options; an illustrative example follows the note below:

max_workflow_in_cache=<n>
Sets the limit for the cache size in the JVM.
max_workflow_cache_timeout_in_seconds=<n>
Sets the time interval in seconds for how often the JVM checks for workflow updates.

Note: If you are using a clustered environment, to synchronize the workflow data, set the value of max_workflow_in_cache and
max_workflow_cache_timeout_in_seconds properties to -1.
If set to -1, the cache is disabled. If set to 0, then the JVM always checks for workflow updates. Updates that are made on a given JVM are always immediately available
on that JVM.
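For example, the following illustrative values (assumptions, not tuning recommendations) cache up to 100 workflow definitions per JVM and re-check for workflow updates every 300 seconds:

max_workflow_in_cache=100
max_workflow_cache_timeout_in_seconds=300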

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Asynchronous and synchronous processing for workflow events


When an item is checked out into a collaboration area, a checkout event is posted to the workflow engine.

To request a synchronous checkout, pass the waitForStatus flag as true to either the checkOutEntry() or checkOutEntries() script operations, or invoke the checkoutAndWaitForStatus() Java™ API method.

If the checkout is synchronous and is called from a script or import, the event occurs inline; that is, the event is processed by the same thread rather than on the workflow engine. This process is fast because there is no background processing and no polling for status is required.
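As a sketch, a synchronous checkout from a step script might look like the following; the catalog, collaboration area, step path, and primary key are hypothetical, as are the retrieval operations, so treat this as an outline rather than verified API usage:

var ctg = getCtgByName("MyCatalog"); // hypothetical catalog name; retrieval operation assumed
var entry = ctg.getCtgItemByPrimaryKey("SKU-001"); // hypothetical key; retrieval operation assumed
var colArea = getColAreaByName("EnrichmentColArea"); // hypothetical collaboration area name
// waitForStatus=true requests synchronous, inline processing of the checkout event
var result = colArea.checkOutEntry(entry, "Initial", true);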

The workflow includes an extensive audit trail. Every attribute change at each workflow step for each collaboration area is stored in the database.

Asynchronous operations cannot be chained within the same transaction. For example, it is not possible to move an entry through more than one workflow step using the
moveEntriesToNextStep operation within a single transaction because each moveEntriesToNextStep call would require the transaction to be committed before it takes
effect.
Important: If there are scripts associated with an automated workflow step, then your scripts should not contain the moveEntriesToNextStep operation.
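For example, the following sketch posts a single move event and relies on the surrounding transaction commit for it to take effect; the collaboration area name, step path, and exit value are hypothetical, and only the moveEntriesToNextStep signature comes from Table 1 below:

var colArea = getColAreaByName("EnrichmentColArea"); // hypothetical name; retrieval operation assumed
// entrySet: an EntrySet of checked-out entries, obtained elsewhere.
// This posts one move event; the entries move only after the transaction commits,
// so a second moveEntriesToNextStep call in the same transaction would not see the new step.
colArea.moveEntriesToNextStep(entrySet, "Enrich", "DONE");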
Table 1. Script operations for asynchronous and synchronous processing

HashMap CollaborationArea::checkOutEntry(Entry entry [, String stepPath] [, boolean waitForStatus])
HashMap CollaborationArea::checkOutEntries(EntrySet entrySet [, String stepPath] [, boolean waitForStatus])
Synchronous or asynchronous: If waitForStatus is omitted or false, the effect is asynchronous; otherwise, the operation is performed synchronously.
Comments: Within a single transaction, if performed inline, the specified entry is checked out when the transaction is committed. If performed asynchronously, at commit the entry is in a non-checked-out state, but a pending checkout event is created that, when processed by the workflow engine, results in the entry being checked out.

Boolean CollaborationArea::releaseEntryInStep(Entry entry, String stepPath)
Synchronous or asynchronous: Synchronous.
Comments: Within a transaction, if performed inline, the entry is released when the transaction is committed.

Boolean CollaborationArea::reserveEntryInStep(IEntry entry, String stepPath [, String username])
Synchronous or asynchronous: Synchronous.
Comments: Within a transaction, if performed inline, the entry is reserved when the transaction is committed.

Category new Category(CategoryTree ctr, String path [, String delimiter] [, String primaryKey])
Category CategoryTree::buildCategory(String path [, String delimiter] [, String primaryKey])
Synchronous or asynchronous: Synchronous (when applicable).
Comments: When ctr is an instance of a collaboration area, and ctr contains a category that matches the specified category path, this script operation acts as a checkout of that category. Within a transaction, the checkout (when performed) is performed inline. When the transaction is committed, the category is in a checked-out state.

void CollaborationArea::dropEntries(EntrySet entrySet)
void CollaborationArea::dropEntry(Entry entry)
Synchronous or asynchronous: Asynchronous.
Comments: Within a transaction, the drop event is created. When the transaction is committed, the workflow engine processes the event, which causes the entry to be dropped.

HashMap CollaborationArea::moveEntriesToNextStep(EntrySet entrySet, String stepPath, String exitValue)
HashMap CollaborationArea::moveEntryToNextStep(Entry entry, String stepPath, String exitValue)
Synchronous or asynchronous: Asynchronous.
Comments: Within a transaction, the event to move the entry is created. When the transaction is committed, the workflow engine processes the event, which causes the entry to be moved.

Boolean CollaborationArea::moveEntriesToColArea(EntrySet entrySet, String destColAreaName)
Boolean CollaborationArea::moveEntryToColArea(Entry entry, String destColAreaName)
Synchronous or asynchronous: Asynchronous.
Comments: Within a transaction, the move event is created. When the transaction is committed, the workflow engine processes the event, which causes the entry to be moved.

Boolean CollaborationArea::publishEntriesToSrcContainer(EntrySet entrySet)
Synchronous or asynchronous: Asynchronous.
Comments: Within a transaction, the interim checkin event is created. When the transaction is committed, the workflow engine processes the event, which causes the entry to be copied into the source container.

Boolean CollaborationArea::addEntryIntoColArea(Entry entry, String stepPath)
Synchronous or asynchronous: Synchronous.
Comments: Within a transaction, the specified entry is added to the collaboration area. The addEntryIntoColArea script operation starts the BeginStep event for the entry that is added to the collaboration area. When the transaction is committed, the workflow engine processes the event.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Exit values in workflow steps


In order for an entry to move to the next step within a workflow, that entry needs to be edited and then assigned an exit value.

An exit value is assigned by the performer of a step after all of the processing on that entry that needs to be done in this step (such as editing some of its values) is
complete. The entry then moves to the next step or steps that correspond to that exit value in the workflow graph design.

DONE
The default exit value is Done. This default exit value shows that the action is completed and the entry can now move to the next step.
SUCCESS
An exit value of Success means that the entry completed the last step within the workflow and no other steps are necessary. You cannot add another step or have
an entry move to the next step if an exit value of Failure or Success is assigned.
FAILURE
An exit value of Failure means that the step cannot be completed and that something needs to be fixed before the user can save edits and move to the next step. You cannot add another step or have an entry move to the next step if an exit value of Failure or Success is assigned.
CUSTOM
You can create your own custom exit values in the Exit values field for a step. Type the exit value that you want to create in the blank field and click the + button.
The new exit value is added to the list of possible values for your step within a workflow. To remove a custom exit value, highlight the exit value in the list and click
the X button.
APPROVE
An exit value of Approve means that the step is complete. You can only approve a step in the Or Approval or And Approval steps.
REJECT
An exit value of Reject means that the step is not complete. You can only reject a step in the Or Approval or And Approval steps. If one approver rejects the entry,
it will move out on the REJECT exit value.

Therefore, an exit value determines which next step the entry goes to. You can have multiple next steps per exit value.
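In the Script API, these mappings are made explicit when the workflow is defined, as in this sketch; the workflow and step names are hypothetical, and the operations are the same ones that are used in the workflow-creation samples later in this section:

var workflow = new Workflow("Example Workflow", "CATALOG");
var review = workflow.createWflStep("MODIFY", "Review Step"); // hypothetical step name
// Each exit value routes the entry to one or more next steps
workflow.getWflInitialStep().mapWflStepExitValueToNextStep("SUCCESS", review);
review.mapWflStepExitValueToNextStep("DONE", "Success");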

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Workflow validations
A workflow validation validates the attributes that are associated with an entry, the edits that were made to the entry, and the overall process to successfully complete a
step. You perform a validation to check if all the required attributes of that step are populated.

There are two different types of validations:

Validations that are enforced by the IBM® Product Master application
Business logic-related validations that are coded in script. This type of validation is a form of processing that can be applied by using the step scripts.

You can validate data during two main steps: when uploading data through imports and during step transitions.

You can choose to validate data or apply custom validations through scripting. Custom validations can be run at step entry or exit; additional steps or processing must be planned for failed items. Using the out-of-the-box functionality, you can specify a required set of attributes for an item at a particular step. The item cannot continue successfully until all of those attributes have a valid value.

Most spec-level validations, such as field length, minimum and maximum, and required, are not applied between steps, but are applied at the workflow exit step. If you do not validate whether all of the spec validations will pass before reaching the Success step, the items are sent to Fixit.

In a workflow, performers often want to enrich their product information. To enrich the information of an entry, a performer can add valuable attributes such as financial
information, like the price of an entry, or the sales margin. These attributes are then associated with that particular entry and therefore, need to be validated.

If you specify one or more attributes in the Attributes to validate field when you create or edit a step in the workflow, then the specified attributes are validated when the performer finishes editing the entry. Attributes that are required and can be modified in a workflow must be added to a Required attribute collection on those steps where they can be modified, even if a rule is specified on a spec. Every entry must have a unique primary key. Only the attributes that are editable or required within a step are validated. If the attribute collection is set to be Required for the step, all of the attribute values in the attribute collection must be filled in when the item is in that step; otherwise, validation errors are thrown.

By default, the items and categories in a workflow step are saved as drafts in a collaboration area. In save-as-draft mode, even invalid assigned attribute values are saved, provided they are of a valid attribute type. This allows you to save an in-progress version of the item or category in a workflow step of a collaboration area, and then return to this workflow step later to continue processing the item or category. This save-as-draft behavior can be disabled by setting the value of the save_as_draft_enabled parameter to false in the common.properties file. Regardless of whether the save_as_draft_enabled parameter is set to true or false, the item or category can move out of the workflow step only after all of its attributes successfully pass validation.
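For example, to disable the save-as-draft behavior, set the following in the common.properties file:

save_as_draft_enabled=false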

You can use the following validations to ensure business process and data integrity within workflow:

Uniqueness
Validates against the Catalogs and Hierarchies view before submitting values from a workflow when the workflow exits.
Required
Validates the values in a workflow when the workflow exits.
Dynamic view
The Fixit attribute collection is dynamically defined when your workflow is saved. The Fixit step is given a union of all editable and required attribute collections
from the workflow (and any nested workflows) as the Fixit editable view.
Timeout
During the submit process of a given step with a timeout, the system validates any target steps that contain an equivalent set or superset of the editable attributes
on that same step with the timeout. For nested workflow steps that have timeouts, the target will be equivalent to or a superset of the attributes that are modifiable
within the nested workflow.

Values are validated based on attribute data types. Data type validation also checks for the mandatory and minimum occurrence criteria for all the attributes for a specific
entry.

Attributes with the following data types can be validated to ensure that the values correspond to the data type:

Number
Integer
Currency
Date
Period
URL
Image URL
Thumbnail Image URL
Image
Thumbnail Image

The data validation works in the following ways:

Data types
When the performer provides a value for an attribute, the validation framework validates the data against the corresponding data type. Any validation errors, are
reported back to the performer at the top of the screen.
Format
Some data types, such as Date and Period, are checked for the correct format and an error message displays with the expected date format.
Invalid values
The error messages provide information for the performer to resolve, such as the attribute display name, the invalid value entered, and the attribute data type. An
error message displays against each attribute for which an invalid value is entered.
Mandatory attributes
If no value is entered for a mandatory attribute, an error message displays.
Minimum occurrence
If an attribute requires a minimum number of occurrences and values do not exist for that number of occurrences, an error message displays.

Attributes with errors have tool-tips that display their respective error messages. If the Work with Item screen contains multiple tabs, the performer cannot switch to
another tab if the current tab has validation errors.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Persistence of attribute values


The value of an attribute in a workflow may be persisted when its entry moves from one step to another in the workflow, or when the entry is finally checked in to its source container. Whether an attribute is persisted is controlled solely by whether it is set editable in the corresponding workflow step. Whether the attribute is in fact editable in the workflow step is also controlled by whether it is set editable in its entry's spec. The combined effects of these settings are described here.

Specs
An attribute can be marked as editable in a spec (by checking the Editable check box, which is usually the default setting). If this check box is unchecked the attribute is
read-only in any editor, so the user cannot change its value directly. However, a non-editable attribute's value may be changed by a script or Java™ API operation. The
setting of the editable property in the spec has no effect on whether the attribute value is persisted in a workflow.

Workflow steps
When defining a workflow step, you can set the Attributes to validate by selecting one or more attribute collections. All the attributes in these collections are then
validated when an entry moves from one step to another in a collaboration area. The three settings for attribute values within a workflow step are:

Viewable
Editable
Required (which also implies Editable)

This may be done independently for three different content authoring screens: Edit, Bulk Edit and Item Popup (or Category Popup). This means, for example, that the
same attribute may be editable in Edit but not in Bulk Edit. These settings control whether an attribute is editable by the user (those marked Viewable may not be edited),
but they also control the persistence of the attribute. When an entry is moved to another step or is checked in to its source container from a collaboration area, only those
attributes that are Editable or Required are persisted. Attribute values marked as Viewable are not persisted.
If an entry is newly created in a workflow step (that is, it had not been checked out to that step or moved to that step from another step), all its attributes are persisted,
even those attributes marked as Viewable.

These settings affect whether users can edit the attribute values within the UI. For editing, each setting applies independently for the different content authoring screens.
However, for persistence the situation is different: if an attribute value is set as Required or Editable in any one of the content authoring screens, that setting defines that
the attribute value is persisted.

Persistence to source container


Only Editable and Required attributes are persisted when an entry is checked into its source container. (Such attributes are often referred to as "checked out" attributes.)
This is to ensure that if more than one user is working concurrently on the same entry in different collaboration areas these users do not overwrite one another's changes.
To allow such concurrent access, you must define the editable attribute collections for the entry in different workflows or steps such that these collections contain disjoint
sets of checked out attributes.
For example, say you want to allow the situation in which user A is working on item item1 in collabArea1, in which attributes a1 and a2 are checked out, while user B is
simultaneously working on item item1 in collabArea2, in which attributes a3 and a4 are checked out. You need to define a different workflow for each collaboration area,
so that the workflow for collabArea1 has a1 and a2 marked editable and a3 and a4 marked viewable, while the workflow for collabArea2 has attributes a3 and a4 marked
editable and a1 and a2 marked viewable.
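A sketch of how the two workflow steps could declare those disjoint editable collections in the Script API follows; the collection names are hypothetical, step1 and step2 stand for steps created with createWflStep as in the samples later in this section, and setEditableAttributeGroups is the operation used there:

// Workflow for collabArea1: only a1 and a2 are editable (checked out)
var editableA = [];
editableA[0] = "Collection_a1_a2"; // hypothetical collection containing a1 and a2
step1.setEditableAttributeGroups("ITEM_EDIT", editableA);

// Workflow for collabArea2: only a3 and a4 are editable (checked out)
var editableB = [];
editableB[0] = "Collection_a3_a4"; // hypothetical collection containing a3 and a4
step2.setEditableAttributeGroups("ITEM_EDIT", editableB);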

Another example is if an item was created in the collaboration area (and there is no source catalog item): all of the attributes from the collaboration area item are copied into the source catalog item. This can be an important difference, especially if not all of the item attributes were checked out to a collaboration area. If you automatically set values for attributes that are not checked out to the collaboration area (through value rules, default values, and so on), those values are still set. The item that is created in the source catalog during the checkin process might then contain undesired or unexpected attribute values, if not prevented by validation rules. Depending on the desired business logic, you have to implement the corresponding logic in your PostProcessing script.

Examples

Example of a common use case
Those attributes that the user is expected to be able to change should be marked Editable in the spec and be given a Required or Editable setting in the step. Those
that the user is not meant to change in the collaboration area step should also be marked Editable in the spec but be given a Viewable setting in the step.
Example with immutable attributes
Those attributes that no user (including an Admin) is expected to change by direct editing should be marked as non-editable in the spec. They are then non-editable
in any situation, and the values are not persisted from a workflow.
Example with mutable but non-editable attributes
Suppose there are attributes that you do not want the user to edit directly in a workflow but whose values you may want to allow to be changed in the workflow by a
script, for example, and persisted to the container. The way to do this is to mark such an attribute as non-editable in the spec but Editable (or Required) in the step.
If the Admin wanted to change the value outside the step, this could be done only by use of a script.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Workflows versus custom tools


You can use the existing IBM® Product Master solution for introducing new products to the market. The existing solution is based on user workflows. The workflow reduces the time that it takes to introduce a new product to the market.

When choosing to use an existing solution, you need to evaluate whether a workflow is necessary before the workflow is built. On occasion, simple custom tools can satisfy the requirements. To determine whether a workflow is actually required, map the new product introduction business process to the steps that are described by the business process and use cases. If there are substantially fewer steps and such steps can be easily represented by using simple custom tools, then custom tools are the preferred approach. The same holds true for complex workflows: if the workflow is substantially more complex, the amount of work to create a custom tool is less than that for a workflow, and the custom tool will be easier to use, then a custom tool is the preferred approach.
Important: Custom tools are not natively supported and are typically more difficult to maintain.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

CEH logging and debugging


Collaboration entry history (CEH) logging enables you to track how a particular item is traveling throughout a workflow. You can also use CEH logging as historical or audit
information after an item has exited the workflow.

CEH saves the history about how a particular item progressed through the workflow.

Example
Here is an example of how you can use CEH logging to track information about a particular item in a workflow. Let's say you are a large company and you have 1000 products. A vendor has told you about a new product that they want to sell through your company. You have a team of specialists who study the product and decide whether the product is worth selling in your stores. You also have a team of people who evaluate the product. If the evaluators accept the product, the performers have to understand the following:

how are they going to market it


what is the price of the product going to be
how is the product going to sell
what is the description of the product

A performer is usually assigned to each one of these actions. Now let's say there is a workflow that this product goes through to gather enrichment information. You want to know how long it takes for this new product to go through the workflow, or how it progressed through the workflow. For example, was it disapproved many times, so that many changes were needed for it to pass through the workflow successfully?

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Creating a workflow
To create a workflow, you create the steps for the workflow, define the routing order for the steps, assign performers to the steps, and specify validation rules for each
step. Additional custom business processes and validations could be specified using the IN and OUT scripts associated with the workflow steps.

Before you begin


Get the use cases and business process definitions from the solution architect. Ensure that you already identified all of the performers in the PIM system. You can find a
list of identified performers from use cases and business processes that the solution architect created. The business process also defines which performer is assigned to a
particular step. These users and roles must have been created in the User and Role Consoles before they can be used as performers.

About this task
A workflow consists of a set of steps that make up a single use case or business process. For example, you can create a workflow for the steps in the process of product
introduction and define those steps, the routing order, performers, and validation rules.

Procedure
Create a workflow by using the UI, the Java™ API, or the Script API.
Option Description
User interface See Creating a workflow in the UI.
Java See Sample Java code for creating a workflow.
Script See Sample script code for creating a workflow.

Creating a workflow in the UI


You can use the user interface to create the steps for the workflow, define the routing order for the steps, assign performers to the steps, and specify validation
rules for each step.
Sample Java code for creating a workflow
This sample Java code creates a workflow, creates the steps for the workflow, defines the routing order for the steps, assigns performers to the steps, and specifies
validation rules for each step. There are two samples; one for creating an item workflow and one for creating a category workflow.
Sample script code for creating a workflow
This sample script code creates a workflow, creates the steps for the workflow, defines the routing order for the steps, assigns performers to the steps, and
specifies validation rules for each step. There are two samples; one for creating an item workflow and one for creating a category workflow.
Representing steps in a workflow
You need to meet with the client to understand which steps represent the business process.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Creating a workflow in the UI


You can use the user interface to create the steps for the workflow, define the routing order for the steps, assign performers to the steps, and specify validation rules for
each step.

About this task


Only an Admin or a user assigned Admin privileges should make changes to a workflow definition.

Procedure
1. Create a new workflow in the Workflow Console.
a. Click Data Model Manager > Workflows > New Workflow.
b. Provide a name for the workflow.
c. Set the access control for the workflow. The access control determines which roles can view, edit, or delete this workflow.
d. Specify the container type:

Catalog type
Catalog type is for workflows that are used to create collaboration areas that are associated with a catalog. The workflows are created for specifying
business processes that are associated with processing items.
Hierarchy type
Hierarchy type is for workflows that are used to create collaboration areas that are associated with a hierarchy. The workflows are created for
specifying business processes that are associated with processing categories.

Now that you have provided the basic information for the workflow, a workflow definition console displays. This console shows a list of system steps, which have
been created for you (Initial, Success, Failure, and Fixit). These steps are required for all workflows. You can add additional steps, and then connect them together
in the correct order to form a chain from Initial to Success.
You can complete a workflow by mapping the Initial step directly to a Success step. However, if you have more than one step, the first step must be an Initial step
followed by another step where you choose the step type. This procedure assumes that you are creating more than just an Initial step.

2. Add a step in the workflow to represent each business process that you want. For each step that you add, repeat the following tasks:
a. Click Add Step and specify a name for the step for example, Specify Product Details.
b. Select the step type. See Types of steps.
c. Specify the performers for the step. In the Performers field, click the + button. A list of users and roles displays. Select any number of users or roles to assign
to this step. Use roles instead of users, as that reduces the maintenance overhead if there is a personnel change. Click the disk icon to save the selection.
Tip: For time-critical business processes, you can specify a timeout on the step. This means that if an entry remains in this step beyond the specified period,
a Timeout will occur and the entry will be moved out on the TIMEOUT exit value. Select Duration or Date to specify a timeout deadline.
d. Optional: Add user code to the step through the workflow step extension point. There are three functions, which can be coded:
IN (which will run on entrance to the step)
OUT (which will run on exiting of the step)
TIMEOUT (which will run when exiting due to timeout)
See Workflow step extension points. Adding code that includes a moveToNextStep operation to the IN function can cause the step to operate automatically. These scripts can be coded through the Script API or the Java™ API. For the Java API, you do not type your code directly in the step.
e. Optional: Specify settings for the step:

Reserve to edit
Reserve an item by checking the Reserve to edit check box. Selecting this check box means that before a performer can make any edits to the checked
out entry, they must reserve the entry. They cannot do this if another performer has reserved the entry already. Any performer who has reserved an
entry can release it to make it available for editing by another performer. If this check box is not selected, reserve and release are not required for that
step (and is not available to performers in that step).
Allow recategorization
Recategorize an item by checking the Allow recategorization check box. Selecting this check box means that for items or subcategories checked out to
a collaboration area and arriving in the step corresponding to this workflow step, the performer is able to move them to a different category. If the
check box is not selected, this will not be allowed for that step.
Allow import into step
Allow import into the step by checking the Allow import into step check box. If you select this check box, any performer who has access to this step can insert items into the workflow. This check box also inserts another starting point into your process flow. The more starting points that you have, the more places there are to insert new items or categories, and thus the less control you have over how new objects enter the workflow. Most imports import into the Initial step, so ensure that you edit the Initial step and enable this check box; otherwise, the import fails.

f. Specify any additional exit values for the step. Select the exit value. Some steps have predetermined exit values. If your step does not have a predetermined exit value, you must specify one. If the step does not involve user interaction, each outcome in the script within the step must map to an exit value. See Exit values in workflow steps.
g. Specify any validations for the step in the Attributes to validate field. See Workflow validations.
h. Optional: Specify Entrance notification email addresses, a comma-separated list of email addresses that are notified when an entry enters the step.
i. Optional: Specify Timeout notification email addresses, a comma-separated list of email addresses that are notified when an entry times out from this step.
If a performer does not edit the item within a specified amount of time, these addresses are notified. Select Duration or Date to specify a timeout deadline.
3. Repeat the previous step until you have added all of the steps that you need in the workflow.
4. Define the routing path between the steps. A workflow is valid for saving only if the process moves through a continuous chain of steps from the Initial step to either
Success, Failure, or Fixit without a break in the flow. Therefore, for a workflow to be valid, every step must have at least one next step and all paths from the Initial
step must be connected ultimately to a final step. See Defining routing.
5. Save the workflow.
Note: If you are using a clustered environment, to synchronize the workflow data, set the value of max_workflow_in_cache and
max_workflow_cache_timeout_in_seconds properties to -1.

Defining routing
When an entry is moved from one step to another through a workflow, that entry is following a path of logical steps called routing. Routing refers to the path that an
entry follows within steps of a workflow.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Defining routing
When an entry is moved from one step to another through a workflow, that entry is following a path of logical steps called routing. Routing refers to the path that an entry
follows within steps of a workflow.

Before you begin


To set up routing for an entry, you need to ensure that all the steps for the workflow are created.

Procedure
1. Click Data Model Manager > Workflows > New Workflow. The New Workflow and Workflow Steps tables display.
2. Create a customized step.
3. Across from the Initial step, click the + button to select the first step in your workflow.
Under the Name column, all of your workflow steps are listed. When you click the + button, a window displays the possible exit values and the steps in your
workflow.
4. Click the + button to identify the next step for every step that you created in your workflow.
5. Click the disk icon to save your routing of steps within the workflow.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Sample Java code for creating a workflow


This sample Java™ code creates a workflow, creates the steps for the workflow, defines the routing order for the steps, assigns performers to the steps, and specifies
validation rules for each step. There are two samples; one for creating an item workflow and one for creating a category workflow.

Sample Java code for creating an item workflow


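// Note: this sample assumes that the Product Master Java API classes (Context, PIMContextFactory,
// WorkflowManager, Workflow, WorkflowStep, OrganizationManager, Role, AttributeCollectionManager,
// AttributeCollection, ScreenType) are imported per the Java API reference, plus java.util.List
// and java.util.ArrayList; the credentials and company name below are sample values.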
Context ctx = PIMContextFactory.getContext("Admin","trinitron","junitdb");
WorkflowManager wmgr = ctx.getWorkflowManager();

//Creating an Item Workflow.
Workflow wfl = wmgr.createItemWorkflow("Item Workflow1", "This is a test");

//Add a General Type Workflow Step


WorkflowStep step1 = wfl.addStep("Step 1", "step 1 description", WorkflowStep.Type.GENERAL_STEP);

//Adding performers to the workflow step.


OrganizationManager omgr = ctx.getOrganizationManager();
Role r = omgr.getRole("japiRole");
step1.addPerformer(r);

WorkflowStep initialStep = wfl.getInitialStep();


WorkflowStep successStep = wfl.getSuccessStep();

//Connecting the steps


initialStep.setNextStep("Success", step1);
step1.setNextStep("DONE", successStep);
AttributeCollectionManager amgr = ctx.getAttributeCollectionManager();
AttributeCollection ac = amgr.getAttributeCollection("WorkflowStepCollection3");

List<AttributeCollection> list = new ArrayList<>();
list.add(ac);

step1.setEditableAttributeCollections(list, ScreenType.ITEM_SINGLE_EDIT);

wfl.save();

Sample Java code for creating a category workflow


Context ctx = PIMContextFactory.getContext("Admin","trinitron","junitdb");
WorkflowManager wmgr = ctx.getWorkflowManager();

//Creating a Category Workflow.


Workflow wfl = wmgr.createCategoryWorkflow("Category Workflow1", "This is a test");

//Add a General Type Workflow Step


WorkflowStep step1 = wfl.addStep("Step 1", "step 1 description", WorkflowStep.Type.GENERAL_STEP);

//Adding performers to the workflow step.


OrganizationManager omgr = ctx.getOrganizationManager();
Role r = omgr.getRole("japiRole");
step1.addPerformer(r);

WorkflowStep initialStep = wfl.getInitialStep();


WorkflowStep successStep = wfl.getSuccessStep();

//Connecting the steps


initialStep.setNextStep("Success", step1);
step1.setNextStep("DONE", successStep);
AttributeCollectionManager amgr = ctx.getAttributeCollectionManager();
AttributeCollection ac = amgr.getAttributeCollection("WorkflowStepCollection3");

List<AttributeCollection> list = new ArrayList<>();
list.add(ac);

step1.setEditableAttributeCollections(list, ScreenType.CATEGORY_SINGLE_EDIT);

wfl.save();

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Sample script code for creating a workflow


This sample script code creates a workflow, creates the steps for the workflow, defines the routing order for the steps, assigns performers to the steps, and specifies
validation rules for each step. There are two samples; one for creating an item workflow and one for creating a category workflow.

Sample script code for creating an item workflow


// create a catalog wfl: Initial --> Modify --> Success
///////////////////////////////////////////////////////////
var WFL_CATALOG1 = "wf-catalog1";
var WFL= "Workflow";
var WFL_COLLECTION1 = "wf collection1";

var adminUsers = [];


adminUsers[0] = "Admin";

workflow1 = new Workflow(WFL,"CATALOG");


workflow1.setWflAccessControlGroup("Default");

//create step
var attributeCollections = [];
attributeCollections[0] = WFL_COLLECTION1;

step1 = workflow1.createWflStep("MODIFY", "Modify Step");
step1.setWflStepPerformerUsers(adminUsers);
step1.setEditableAttributeGroups("ITEM_EDIT",attributeCollections);
step1.setEditableAttributeGroups("BULK_EDIT",attributeCollections);
step1.setWflStepReserveToEdit (true);
step1.setWflStepCategorizeEntries(true);

//mapping: Initial --> Modify --> Success


workflow1.getWflInitialStep().mapWflStepExitValueToNextStep("SUCCESS", step1);
step1.mapWflStepExitValueToNextStep("DONE", "Success");
assertTrue(workflow1.saveWfl());

Sample script code for creating a category workflow


// create a category wfl: Initial --> Modify --> Success
////////////////////////////////////////////////////////////
var WFL_CATALOG1 = "wf-catalog1";
var WFL= "Workflow";
var WFL_COLLECTION1 = "wf collection1";

var adminUsers = [];


adminUsers[0] = "Admin";

workflow1 = new Workflow(WFL,"CATEGORY_TREE");


workflow1.setWflAccessControlGroup("Default");

//create step
var attributeCollections = [];
attributeCollections[0] = WFL_COLLECTION1;

step1 = workflow1.createWflStep("MODIFY", "Modify Step");


step1.setWflStepPerformerUsers(adminUsers);
step1.setEditableAttributeGroups("ITEM_EDIT",attributeCollections);
step1.setEditableAttributeGroups("BULK_EDIT",attributeCollections);
step1.setWflStepReserveToEdit (true);
step1.setWflStepCategorizeEntries(true);

//mapping: Initial --> Modify --> Success


workflow1.getWflInitialStep().mapWflStepExitValueToNextStep("SUCCESS", step1);
step1.mapWflStepExitValueToNextStep("DONE", "Success");
assertTrue(workflow1.saveWfl());

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Representing steps in a workflow


You need to meet with the client to understand which steps represent the business process.

Before you begin


Ensure that you have a business process to work with.

Procedure
1. Discuss with the client the steps that are defined in the business process. The output from this discussion becomes the process flow. Similar to a use case, there is
one action, or task, per process flow. If there are multiple tasks, there will be multiple process flows.
2. Identify the steps that are needed to be considered for the workflow. You will be providing the solution developer with only the steps that the workflow must have.
3. Identify the performers for each step.
4. Determine how the client wants the collaboration area set up. How many different views are needed?

Steps that are not included in a workflow


Some steps belong in a Product Master Server solution, and some are done outside of the solution. You need workflow steps only for tasks that are done within a
Product Master Server solution; that is, tasks that generally include an edit to an item.
Modeling the process to steps in a workflow
You use the business process and use cases to define the steps in your workflow.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Steps that are not included in a workflow


Some steps belong in a Product Master Server solution, and some are done outside of the solution. You need workflow steps only for tasks that are done within a Product
Master Server solution; that is, tasks that generally include an edit to an item.

There are two different types of performers of a process: a human can perform a process, or a system or application can perform it. A process is either performed manually or can be automated. You need to identify which steps have a place in the Product Master Server solution and define those steps. Identify only steps where an edit of some type is performed on an item; the purpose of the steps in a Product Master Server solution workflow is to perform an edit of an item. If a process flow does not include an edit of an item, it most likely will not be included in a Product Master Server solution workflow.

Step that is not included in a workflow


In the example of the clothing store, the buyer meets with the suppliers to purchase merchandise for the store. The buyer and supplier hold discussions and make
agreements on which items are to be purchased. These steps are not covered by a workflow step. When the buyer enters the items that were purchased into the system,
there will be a workflow step.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Modeling the process to steps in a workflow


You use the business process and use cases to define the steps in your workflow.

Generally, a business process translates into one workflow for IBM® Product Master, although there are instances when a business process translates logically into several
overlapping or synchronous workflows. In addition, not every step in a business process translates into a step within a workflow, because the steps in a Product Master
workflow are designed to perform item or object maintenance functions, and a complete business process usually has external business functions, as well.

Workflow complexity and size


Workflows can get very complex in Product Master. They can be used to manage complex business rules. Typical workflows are fewer than 10 steps in size, but in some implementations, the number of steps can run into the hundreds or thousands. A workflow with a large number of steps can result in multiple problems, ranging from an unmanageable user interface to very slow operations to manage and maintain the workflow, so it should be carefully designed.

One potential approach is to break down a large workflow with many steps into smaller workflows with fewer steps, which can achieve better performance.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Creating a nested workflow


A workflow is a collection of steps that are routed. Every workflow defines a business process. Each step has certain performers and actions associated with it. A nested workflow is a workflow with a smaller subset of steps that is contained as a step within another workflow.

Before you begin


Ensure that you have two workflows already created. You will need to have an inner workflow and an outer workflow. The outer workflow will nest the inner workflow.

Procedure
1. Select the workflow that you want to add from the drop-down list. Click Add workflow. Your inner workflow displays in the Workflow steps section.
2. Select the inner workflow check box and route the inner workflow steps to their next steps. You need to route the outer workflow to your inner workflow so that the inner workflow shows up as a step.
3. Save your nested workflow.

Example
Here is an example of an inner workflow. Let's say that every time a company releases any kind of information to the public, the company needs to get legal approval, and the same approval steps are run each time. Those steps would make up an inner workflow. The outer workflow could then be about the type of announcement that needs to be made. The company needs to do the following:

1. Decide which type of announcement is going to be made:
a. Press releases
b. Product announcements
c. Financial information
2. Get legal approval:
a. Contact legal representatives
b. Fill out the appropriate forms

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Multiple workflow processes
You can have multiple workflow processes in Product Master to improve performance and share the load.

You need the following prerequisite software for configuring multiple workflow processes:

hazelcast-3.12.5.jar
serializer-2.7.1.jar
hazelcast.xml

You need to configure Multicast or TCP/IP.

By default, Hazelcast uses Multicast to discover other members that can form a cluster.

You can also use TCP/IP as an alternative to discover other members that can form a cluster.

You can run multiple workflow engine processes in your Product Master instance on the same server or multiple servers in a cluster.

Running multiple workflow engines on the same server for a Product Master instance
You can run multiple workflow engine processes on the same server where the Product Master instance is running. This is similar to running multiple instances of other processes, like Appserver or Scheduler. More than one workflow engine process runs on different ports, and the processes share the load of items that are moving through workflows or collaboration areas.
Running multiple workflow engine processes for a Product Master instance across a cluster
You can run multiple workflow engine processes across the cluster where the Product Master instance is running. This is similar to running multiple instances of other processes, like Appserver or Scheduler. More than one workflow engine process runs on different servers that are part of the cluster running the Product Master instance, and the processes share the load of items that are moving through workflows or collaboration areas in that instance.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Running multiple workflow engines on the same server for a Product Master
instance
You can run multiple workflow engine processes on the same server where the Product Master instance is running. This is similar to running multiple instances of other
processes, such as Appserver or Scheduler. Multiple workflow engine processes run on different ports and share the load of items that are moving through workflows or
collaboration areas.

Procedure
1. Shut down Product Master by using the following command.
$TOP/bin/go/abort_local.sh

2. Go to $TOP and extract the bundles.zip file. Locale files in the en_US folder are replaced by the new files.
Note: The directory structure is bundles/locales/en_US/system_resource_bundle, with the layout.xml file in the en_US directory and the other files in the
system_resource_bundle folder.
3. Go to $TOP/bin/conf/ and open env_settings.ini file.
4. Locate the [services] section and, depending on your setup, add new services with unique names, separated by commas.
Example
workflow=workflow01,workflow02

5. Run $TOP/bin/configEnv.sh script and then $TOP/bin/compat.sh script. Allow the script to overwrite any existing configuration files, if prompted.
6. Start the Product Master instance by using the following command.
$TOP/bin/go/start_local_rmlogs.sh

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Running multiple workflow engine processes for a Product Master instance across a
cluster
You can run multiple workflow engine processes across the cluster where the Product Master instance is running. This is similar to running multiple instances of other
processes, such as Appserver or Scheduler. Multiple workflow engine processes run on different servers that are part of the cluster running the Product Master instance
and share the load of items that are moving through workflows or collaboration areas in that instance.

Procedure
1. Shut down Product Master by using the following command.
$TOP/bin/go/abort_local.sh



2. Go to $TOP and extract the bundles.zip file. Locale files in the en_US folder are replaced by the new files.
Note: The directory structure is bundles/locales/en_US/system_resource_bundle, with the layout.xml file in the en_US directory and the other files in the
system_resource_bundle folder.
3. Go to the CCD_CONFIG_DIR folder and open the env_settings.ini file for each server in the cluster.
Important: By default, the env_settings.ini file is under the $TOP/bin/conf folder. You have an env_settings.ini file for each server in your cluster. These
env_settings.ini files need to be stored at a user-specified location that is set through the CCD_CONFIG_DIR environment variable.
4. Locate the [services] section and, depending on your setup, add new services with unique names, separated by commas.
Example
workflow=workflow01,workflow02

5. Copy the hazelcast.xml file to $TOP/bin folder.


6. Edit the hazelcast.xml file to set values for the multicast or TCP/IP section by adding the following:

<multicast enabled="false">
</multicast>
<tcp-ip enabled="true">
<member>machine1 ip</member>
<member>machine2 ip</member>
</tcp-ip>

Replace machine1 ip and machine2 ip with the IP addresses for your setup, and add entries for any other workstations in your cluster.

7. Run $TOP/bin/configEnv.sh script on each workstation.


8. Start the Product Master instance by using the following command on your primary workstation in the cluster.
$TOP/bin/go/start_local_rmlogs.sh

9. Run $TOP/bin/go/start_local.sh script on other workstations.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Enabling the business user workflow dashboard for business processes


The business user workflow dashboard is a custom tool. The dashboard provides users with a categorized graphical summary of their current tasks so that they can
better see which tasks they need to complete.

Overview of Business user workflow dashboard


A solution implementer can choose to use the dashboard tool as is, or enhance it further using the available source code to meet their specific solution requirements.

The dashboard provides a central view of the current tasks for a business user who is a performer in one or more steps in one or more workflows in Product Master.

Workflow Activity Summaries
The My Current Activity Totals view provides the total number of entries from all of the steps across all of the collaboration areas in which the given user is a
performer.

Workflow Overview Portal
The Workflow Overview Portal provides a list of collaboration areas in which the user is a performer, with the total count of entries that are awaiting processing, the
count of entries reserved to the given user, and the count of entries awaiting approval. It also provides the average duration for which entries have been awaiting
processing in their current step.

For the total count of entries awaiting processing, a further breakdown of entries by their status is available. The status of an entry is defined as follows:

Red
Critical - entries have been waiting in this step for more than 80% of the allocated time.
Yellow
Urgent - entries have been waiting in this step for between 50% and 80% of the allocated time.
Blue
Current - entries have been waiting in this step for between 1% and 50% of the allocated time.

The allocated time for an entry is derived from the timeout duration that is specified for the workflow step that the entry is in. For example, if a step has a timeout of 10
days and an entry has been waiting for 9 days (90% of the allocated time), the entry is shown as Red. For steps that do not have any specified timeout duration, all the
entries in those steps are considered to be Current.

Features of Business user workflow dashboard


The business user workflow dashboard offers the following features to the business user:

Complete graphical view of currently assigned entries and tasks, with priorities.
Ability to view pertinent details, such as the time that entries have spent awaiting processing.
A breakdown of entries that are assigned to the user or need approval, along with the status of all the entries in a workflow.
Ability to delegate, reserve, and release entries directly from the details pane without having to open an item.
Ability to easily launch entries in single-edit or multi-edit views.
Ability to search for items or categories across all collaboration areas.
Note: Only the collaboration areas where the user is a performer of at least one step are included in the search.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)



Installing the business user workflow dashboard
The business user workflow dashboard solution comes as a compressed ZIP file.

Before you begin


Ensure that you have installed Product Master.
Ensure that you provide access to the DashboardCustomTool to the user roles that you want to allow to use this tool. You can do this from the Role Console in
the user interface.

Procedure
1. Extract the dashboard.zip file into a convenient temporary directory. For example, $TOP/dashboard.
2. Issue the following command: cd $TOP/dashboard/bin
3. Change the permissions of all of the files to add the execute permission: chmod 755 *
4. Issue the following command: dashboard_install.sh --TOP=$TOP
5. To reconfigure the Product Master environment, issue the following command: ./configureEnv.sh
6. Restart Product Master.
7. Perform an Environment Import of $TOP/dashboard/customtools/DashboardModel.zip to set up the dashboard custom tool in a given company in your Product
Master instance.
8. Optional: To access the source code for the solution, extract the dashboard.src.zip file into a convenient temporary directory. It contains one file,
dashboard.extensions.src.zip, which holds the complete source for the solution and can be extracted into any convenient directory. You might see a Class not
found exception when starting the tool. To avoid this issue, add the JAR file to the class path, for example, by opening the WebSphere Application Server Admin
Console, going to the Java virtual machine settings, and adding the JAR file path to the class path.

What to do next
You can control the maximum number of search results that are returned by modifying the dashboard-config.xml file. If a value is not specified for the
<search_result_limit></search_result_limit> property, all of the results matching the search criteria are returned.

1. Open the dashboard-config.xml file in the $TOP/etc/default/dashboard/config directory.


2. Provide a value for the maximum number of search results to be returned. For example, to limit searches to 500 results:

<?xml version="1.0" encoding="UTF-8"?>
<dashboardconfig>
    <!-- Specify the max number of rows to return for dashboard search -->
    <search_result_limit>500</search_result_limit>
</dashboardconfig>

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Integrating with upstream and downstream systems


To integrate your Product Master Server system with upstream and downstream systems, you need to create data sources, define import, export, or report jobs, and
create message queues. An upstream system is any system that sends data to the Product Master Server system. A downstream system is a system that receives data
from the Product Master Server system.

You can load data into the Product Master Server system at regular intervals (weekly, daily, or hourly) from an upstream system.

An upstream system can be any system that publishes item information to the Product Master Server system; for example, a supplier system that updates item
information in the Product Master Server system. An example of a downstream system is a print catalog system that receives data from the Product Master Server
system to generate the item information for the published catalog.

Creating data sources


You need to create a data source so that users are able to retrieve data.
Creating and scheduling jobs
You can create and schedule jobs so that users can run these jobs to integrate the Product Master Server system with the upstream and downstream systems.



Creating scripts
The URL of the invoker script, which is named invoker.jsp, is composed of several components, including the Product Master application URL, the company code,
and the name of the script.
Creating message queues
You can use message queues in Product Master through the script API and Java API. Message queues are created externally.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Creating data sources


You need to create a data source so that users are able to retrieve data.

Before you begin


Ensure that you have approval authority.

About this task


A data source is an entity that defines how data is imported into IBM® Product Master.
External data can originate from various locations or databases and can be accessed in a number of different ways. For example, you can import data from a database, a
file on an FTP server, or your local file system. Each of these options requires specific configuration parameters to access the data. A data source encapsulates these
parameters and lets you manipulate them as a single named entity so that, after the source details are set up, the data source can be reused throughout the product.

Based on the business or data integration requirement, data that is fed into Product Master is considered a data source. The following considerations relate to data
source design:

The mechanism by which the data is fed into Product Master
The mapping perspective for where the data is inserted into Product Master
The type of data source: full data load or delta data load

Procedure
Use any one of the following methods to create a data source: user interface, Java™ API, or script API.

User interface
a. Click Collaboration Manager > Data Sources > Data Source Console.
b. Click New, and provide the required details for creating a data source.
c. Click Save.

Java API
Sample 1: The following sample Java API code creates a data source to upload data by using a Web browser.

import com.ibm.pim.context.Context;
import com.ibm.pim.context.PIMContextFactory;
import com.ibm.pim.utils.DataSource;
import com.ibm.pim.utils.RoutingManager;
import com.ibm.pim.common.exceptions.PIMInternalException;

public class WebUploadDataSource
{
    public static void main(String args[])
    {
        createDataSource();
    }

    public static void createDataSource()
    {
        try
        {
            Context context = PIMContextFactory.getContext("Admin", "trinitron", "MyCompany");
            RoutingManager manager = context.getRoutingManager();

            String sDSName = "webuploadDS";

            DataSource ds = manager.createDataSource(sDSName, DataSource.Type.PUSH_WWW);
            ds.save();
        }
        catch (PIMInternalException pe)
        {
            pe.printStackTrace();
        }
        catch (Exception e)
        {
            e.printStackTrace();
        }
    }
}

Sample 2: The following sample Java API code creates a data source to retrieve data by using FTP.

import com.ibm.pim.context.Context;
import com.ibm.pim.context.PIMContextFactory;
import com.ibm.pim.utils.DataSource;
import com.ibm.pim.utils.RoutingManager;
import com.ibm.pim.common.exceptions.PIMInternalException;

public class FTPRetrieveDataSource
{
    public static void main(String args[])
    {
        createFTPDataSource();
    }

    public static void createFTPDataSource()
    {
        try
        {
            Context context = PIMContextFactory.getContext("Admin", "trinitron", "MyCompany");
            RoutingManager manager = context.getRoutingManager();

            String sDSName = "FTPdataSource";

            // Create an FTP-type data source
            DataSource ds = manager.createDataSource(sDSName, DataSource.Type.PULL_FTP);

            // Set properties
            ds.setProperty(DataSource.Property.SERVER_ADDRESS, "ftp.server.com");
            ds.setProperty(DataSource.Property.SERVER_PORT, "21");
            ds.setProperty(DataSource.Property.USERNAME, "username");
            ds.setProperty(DataSource.Property.PASSWORD, "*****");
            ds.setProperty(DataSource.Property.FILENAME, "file");
            ds.setProperty(DataSource.Property.DIRECTORY, "/path/to/directory");

            // Save
            ds.save();
        }
        catch (PIMInternalException pe)
        {
            pe.printStackTrace();
        }
        catch (Exception e)
        {
            e.printStackTrace();
        }
    }
}
Script API
Sample 1: The following sample script API creates a data source to upload data by using a Web browser.

var res = createDataSource(
    "browser",    // Name of the source
    "PUSH_WWW");  // Type

Sample 2: The following sample script API creates a data source to retrieve data by using FTP.

var attr_Name = "ftp_source";
var extraAttribs = [];
extraAttribs["SERVER_ADDRESS"] = "9.10.84.139";
extraAttribs["SERVER_PORT"] = "80";
extraAttribs["USERNAME"] = "user";
extraAttribs["PASSWORD"] = "password";
extraAttribs["FILENAME"] = "getFtp.txt";
extraAttribs["DIRECTORY"] = "ftp_dir";
extraAttribs["DOC_STORE_PATH"] = "/archives/ftp_folder";
var res = createDataSource(
    attr_Name,     // Name of the source
    "PULL_FTP",    // Type
    extraAttribs); // Optional attributes

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Creating and scheduling jobs


You can create and schedule jobs so that users can run these jobs to integrate the Product Master Server system with the upstream and downstream systems.



Job design
The most common approach to data integration between an outside system and IBM® Product Master is the job.
Creating imports
You can create an import so that users can load data from an external source, such as a database or a document store, into a PIM system. You can create the import
to load the data one time, and then you can reuse the import to load the data again on demand or on a schedule.
Creating exports
You can create an export so that users can load data to an external destination, such as a print catalog or warehouse, from a PIM system. You can create the export
to load the data one time, and then you can reuse the export to load the data again on demand or on a schedule.
Creating reports
You can create reports and make them available for users to run.
Setting up schedules for jobs
You can set up schedules to run jobs such as report jobs, export jobs, and import jobs.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Job design
The most common approach to data integration between an outside system and IBM® Product Master is the job.

Import and report jobs can take in data from an upstream system. Export and report jobs can send data to a downstream system. The integration jobs define the
frequency of job runs, for example, periodic scheduled runs or on-demand runs.

Import job design


An import is an external data file, or job that you design to load data from an external source into the PIM system. You use imports not only during the initial data
migration process but also to keep the PIM system up-to-date on an ongoing basis. Generally, you design approximately four to five imports per implementation.
Export job design
An export job is an external file that you use to distribute or publish product information from a catalog or a hierarchy to downstream systems. You use exports not
only during the initial data migration process but also to keep the Product Master Server system in synchronization with downstream systems on an ongoing basis.
Report job design
A report is a job type. You use reports to update data into multiple catalogs.
Choosing between imports and reports
It is important to understand the customer's requirements before making the decision to go with a report or an import.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Import job design


An import is an external data file, or job that you design to load data from an external source into the PIM system. You use imports not only during the initial data migration
process but also to keep the PIM system up-to-date on an ongoing basis. Generally, you design approximately four to five imports per implementation.

An import job does not necessarily acquire an exclusive lock on the catalog or hierarchy on which it is defined. If the catalog or hierarchy is exclusively locked,
concurrent changes cannot be made to the contents of the catalog or hierarchy.
Whether the catalog or hierarchy is locked depends on the job type and semantics.

If a catalog or hierarchy is not exclusively locked by an import job, then the job may acquire locks on the entries in the catalog or hierarchy that it modifies. The duration for
which these locks are held on the entries is affected by:

whether the release_locks_early parameter is set to true or false
how many objects are contained within one transaction

When the release_locks_early parameter is set to false, all of the modified objects within a scheduled job remain locked until the job has finished. When it is set to true,
the locks are released at the completion of the transaction within which the objects were modified. If you need to guarantee a high level of concurrency while running
imports, set the release_locks_early parameter to true and ensure that the transactions are committed at short intervals.
Depending on what API you use for the scheduled job, the API can affect when transactions are committed. For example:

When using the Java™ API, you have explicit control on transaction boundaries by using the startTransaction() and commit() methods.
When you use the Script API, the aggregation_queue_size parameter in the common.properties file defines, by default, after how many modified objects the
transaction is committed.

When using the Java API in scheduled jobs, you can use the startBatchProcessing() and flushBatch() methods to define how many item updates are batched
(using JDBC batching) to minimize database update calls. Batching may help to improve the overall performance specifically in the cases where many attributes of an item
or category spec are marked as indexed and when there is a high latency for network calls between the scheduler service and the database server.
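
The following is a minimal sketch of explicit transaction and batching control in a scheduled job, assuming that startTransaction(), commit(), startBatchProcessing(),
and flushBatch() (the methods named above) are available from the Context object; verify the exact owning class in the Java API reference. The item-update logic
and the credentials are placeholders.

import com.ibm.pim.context.Context;
import com.ibm.pim.context.PIMContextFactory;

public class BatchedUpdateJob
{
    public static void main(String args[])
    {
        // Credentials and company name are illustrative.
        Context ctx = PIMContextFactory.getContext("Admin", "password", "MyCompany");
        try
        {
            ctx.startTransaction();     // explicit transaction boundary
            ctx.startBatchProcessing(); // batch item updates (JDBC batching)

            // ... create or update items here; commit at short intervals so that
            // entry locks are released early when release_locks_early=true ...

            ctx.flushBatch();           // push the batched updates to the database
            ctx.commit();               // commit the transaction and release entry locks
        }
        catch (Exception e)
        {
            e.printStackTrace();
        }
    }
}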

Import job design considerations


Ensure you are familiar with the following import job design considerations and best practices.
Types of imports
Ensure that you are familiar with the types of import jobs.
Common import scenarios
The following scenarios cover some highly-used use cases for import jobs.
Importing large volume of data
You can minimize the time taken to import a large volume of data by designing the imports to run in parallel. You can either create multiple imports with their own
import scripts or generate multiple import jobs using the same import script. In the latter case, the import script is responsible for generating multiple import jobs.
In either case, the import file can be divided into multiple chunks for each import job to use.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Import job design considerations


Ensure you are familiar with the following import job design considerations and best practices.

If the same kind of information is coming into the PIM system from multiple systems, make sure you use a single file format. If you use multiple file formats, you
need to create an import job for each file format that you use. In general, use an Enterprise Application Integration (EAI) tool between the upstream system and the
PIM system.
Ensure that all transformations of the data from upstream systems are completed outside of the PIM system. Use Extract Transform and Load (ETL) tools, such as
IBM® InfoSphere® DataStage®, to cleanse and harmonize the data before importing it into your PIM system.
Push validations outside of IBM Product Master whenever possible. Almost all Product Master implementations have an upstream system that feeds data into
Product Master. Whenever possible, provide valid, cleansed, and harmonized data. This allows the import job to be light on validations.
Parallelize import jobs to load large input data when possible. This requires that there is no dependency between the records that you receive in your imports. You
then would need to perform the following:
1. Break down the import data file into smaller chunks.
2. Create multiple import jobs (using the same import script), each using one of the generated smaller data files.
3. Run the jobs in parallel.
You might need to increase your hardware or memory requirements for your PIM system depending on the volume of data in your import jobs and the frequency
with which you run your import jobs.
Ensure that you define how you will handle errors that occur during import, including how errors will be logged, how users will be notified of errors, or whether a
retry mechanism will be included.
Product Master provides the capability to tag versions for containers such as catalogs and hierarchies. Imports create versions of the destination catalog each
time they are run, and the container is tagged with a version upon completion of an import. This enables difference-based exports and rollbacks. The import also
locks any container that it is affecting so that no concurrent changes are made. This prevents both users and other jobs from updating information in the
catalog while the import is running.

Every import that you create must consist of the following components:

File spec
A Product Master spec that represents the structure of the incoming data file. A file spec is required. An import script API is typically used for XML imports or
imports that require enhanced business logic, validations, or multi-occurring attributes.
For importing data by using an XML file specification, you must use the implicit variable feed_doc_path in the import script as a handle to the XML import job file.
XML import job files can be uploaded normally by using the import console native functionality, and they can be accessed in the import script through this implicit
variable, from which an XML parser can be used to extract the data.
Primary spec
The spec of the catalog or hierarchy for which the data is to be loaded.
Spec map
A graphical representation that provides:

A mapping of the file spec attributes to the catalog attributes


Validations across specifications

A spec map is required.


Catalog, hierarchy, or lookup table
An item import job is for a catalog or lookup table, and a hierarchy import job is for a hierarchy.
Data source
The source of the input file. For example, the document store, an FTP site, or uploading through a browser.
Import script API
A script API that inserts customized business and processing logic. The script API can integrate the import into a larger framework of functionality such as the
Validation Framework. Ninety percent of custom import development is done in the import script API.
You can use an existing script API, a generated script API, or a new script API. Generated script APIs can be used in simple imports where a file spec and spec map
exist and there are no special processing requirements. Generated script APIs use these components to dynamically generate the processing logic that is required
to import data.

Input parameters can be defined for an import job by associating an input parameter spec to the import script.

Best practices
Imports that you create with the user interface (the Import Console) without a script API are best suited for fixed file specifications from a delimited (CSV format) file that
are imported into a fixed catalog spec, with no multi-occurring attributes and few business validations. The spec maps can perform the attribute-level validations that are
identified within the specs, but the more complex business logic from a validation framework or trigger script APIs is absent. The more complex the data model,
validations, and business logic, the more complex the imports become. Use the script API or Java™ API for these types of customizations.

While planning and designing your import job, consider the following best practices:

Data transformations (Inbound)


For data transformation from the upstream system to the PIM system, ensure that all the data transformations are completed outside of the PIM system.
File formats
If the same kind of information is flowing in to the PIM system from multiple systems, then use a single file format. If you use multiple file formats, you need to
create as many imports as the number of file formats. Generally, you use an Enterprise Application Integration (EAI) layer between the upstream system and the
PIM system.
Type of input or job
Provide a delta import job even if the upstream system sends out snapshot jobs, by using ETL tools such as IBM InfoSphere DataStage.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Types of imports
Ensure that you are familiar with the types of import jobs.

There are four general types of imports. The two most common import types are Hierarchy and Item feeds.

Binary feed
For uploading binary files. The uploaded binary files will be accessed from import scripts to retrieve the required data for item or category import.
Hierarchy feed
For updating hierarchy structure and attributes. This feed is commonly used.
Item feed
General item feed for adding/changing items and attributes. This feed is most commonly used feed type.
Item to category map feed
If you do not want to change item attributes but want to update only the Item to Category linkages.
If you want to allow a mapping of categories, you need to first define a typical file spec with the following two attributes:

The item primary key
The category, which needs to be of type "Category". The type "Category" is a special type for a file spec. The delimiter for the category path also needs to be
specified. The delimiter is a one-character delimiter that separates the elements of the category path; for example, "/", as in "top/middle/bottom".

Depending on your selection, you also need to choose the semantics. Data Import Semantics are the methods for importing data into a container. They each have different
behaviors given the same input:

Update
Existing items are updated with new values. If they do not exist, they are created in the respective catalog. This is the most commonly used method. The
destination catalog is updated with all of the changes contained within an input file. Use this method when you need to update items in a catalog while leaving the
rest of the items intact, for example, for updates to an item that come from external systems that do not contain all items or attributes.
Replace
If the item already exists, the existing item is discarded and replaced with the item information contained in the import data file. The destination catalog is
completely cleared out and replaced with only what is included in the input file. For example, if you have a catalog with 100 items and you run an import with only 10
of the items, after the import is run there will be just 10 items in the catalog. This selection is useful when:

There is no manual entry in a catalog


You want to completely refresh the contents of a catalog
There are no linked catalogs (refresh can break existing links to/from other catalogs)

Delete
If the respective item is found, it is deleted from the catalog. The deletion affects each of the items listed in an input file and leaves all others intact. This is useful
when you want to remove a certain set of items from a catalog. For example, if you have a catalog with 100 items and you run an import with only 10 of the items,
after the import is run there will be 90 items in the catalog.

Depending on the job type and semantics, the catalog or hierarchy may require an exclusive lock on the container. The following list provides the locking details for all
combinations of job types and semantics:

Type
Item feed (ITM), Item to category map feed (ICM) and Category tree (hierarchy) feed (CTR)
Semantics
Update, Replace and Delete

The following table shows which catalogs and hierarchies require a master lock; otherwise, a slave lock is required.
Table 1. Catalogs and hierarchies that require master locks

Type of import and semantics         | Update    | Replace               | Delete
Item feed (ITM)                      | -         | Catalog               | -
Item to category map feed (ICM)      | Hierarchy | Catalog and Hierarchy | Hierarchy
Category tree (hierarchy) feed (CTR) | Hierarchy | Hierarchy             | Hierarchy
Note: If you are importing a category into a workflow hierarchy, the collaboration area is locked instead of a hierarchy.
The following types of import jobs are used at different points in the implementation. Decide whether to use an item, an item to category input, or any other input type.

Semantics of the import job


Indicate how the job handles the data. Decide whether to add an item, update attribute values, or replace an existing item with different attribute values.
Data sources
Decide whether to upload a data file with FTP, database, upload through a web browser (manual upload), or a document store as the data source.
File type
Choose a file type, for example, an XML, a comma-separated values (CSV) or a Microsoft Excel file (XLS).
Destination catalogs and/or hierarchies
If the import contains item data, select catalog. If the import contains category data, select hierarchy.
File specification
Determine whether to use the incoming file layout as the file specification. The file spec is used to specify the incoming file format.
File spec to catalog (or hierarchy) spec attribute mappings
Decide whether to use:

File spec to catalog: a predefined import spec as the basis for mapping the import data attributes to the catalog item attributes.
File spec to hierarchy: create a custom script or Java™ code to perform the mappings. More complex mappings typically require this option.

Frequency of execution
Consider the frequency of imports and the concurrent client activity. You can determine the hardware requirement by using this information.
Volume of data
Decide whether to run an hourly import or more or less frequently. You can determine the hardware requirement by using this information.
Transport protocols
Decide whether to use Messaging Queue, web services or other transport protocols, such as FTP or manual upload.
Error handling
Decide on a retry mechanism, error notification mechanism, error logging mechanism, or any other error handling mechanism to handle errors during import.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Common import scenarios


The following scenarios cover some highly-used use cases for import jobs.

Data migration and initial load imports


The data migration and initial load imports tend to be very large and consist of item data that comes from any number of sources. Most often, a single flat file containing all
items and attributes is loaded into IBM® Product Master. The initial load files are typically processed to optimize the data format for faster imports, such as sorting the file
so that multiple references to a single item are sequential in the file. Because these are one-time loads, human intervention is sometimes required to perform the
optimizations. You can use CSV because it has reduced file size and overhead compared to formats such as XML. The initial load input file spec should be based on the
data model. To reduce implementation work, this format could also be used for the export of all Product Master data, providing a single interface for import and export.
These imports are typically started manually.

Incremental batch imports (multi-item)


The incremental batch imports contain less data than an initial load but follow the same principle of a single file to update or add many items in batch. These are typically
scheduled to run at standard intervals (nightly or weekly), depending on business needs, and pick up files produced from other systems on a corresponding interval.

Transactional imports (single item)


The transactional imports are single record deltas that are received by Product Master and processed more frequently than the incremental batch imports. These typically
contain XML messages and are triggered by:

Web Services
An update is sent directly to Product Master through Web services and is processed in real-time. Web services are ideal for small request and response
transactions. They do not scale to support large file transfers.
Scheduled poll of message queues
A scheduled import or report will poll a message queue at a standard interval depending on business needs (15 minutes, 30 minutes, 1 hour). The job will pick up
messages from a queue and process them in the order received.

The transactional imports are typically written as a scheduled job that polls a message queue for new transactions. This can be done either as a report or as an import. The
common usage of XML in these types of transactional imports leads towards a more custom scripted approach rather than a straightforward Product Master import. You
will likely not use file specs and spec maps for this type of import.
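
Where a scheduled job polls a queue, the polling loop itself typically uses standard JMS. The following is a generic JMS sketch (standard javax.jms API, not a
Product Master-specific API); the connection factory and queue are assumed to be obtained externally, for example through JNDI.

import javax.jms.*;

public class QueuePoller
{
    // Drain all currently available messages from the queue, in the order received.
    public static void drainQueue(ConnectionFactory factory, Queue queue) throws JMSException
    {
        Connection conn = factory.createConnection();
        try
        {
            Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageConsumer consumer = session.createConsumer(queue);
            conn.start();
            Message msg;
            // receive(1000) waits up to one second; null means the queue is empty.
            while ((msg = consumer.receive(1000)) != null)
            {
                if (msg instanceof TextMessage)
                {
                    String xml = ((TextMessage) msg).getText();
                    // ... parse the XML message and update the corresponding item ...
                }
            }
        }
        finally
        {
            conn.close();
        }
    }
}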

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Importing large volume of data


You can minimize the time taken to import a large volume of data by designing the imports to run in parallel. You can either create multiple imports with their own import
scripts or generate multiple import jobs by using the same import script. In the latter case, the import script is responsible for generating multiple import jobs. In either
case, the import file can be divided into multiple chunks for each import job to use.
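
A minimal Script API sketch of the second approach, using the loadImport() call shown later in "Creating imports". It assumes that import jobs named
LargeVolumeImport0 and LargeVolumeImport1 have already been created, and the chunk file paths are illustrative.

var chunks = ["feeds/items_part0.csv", "feeds/items_part1.csv"];
for (var i = 0; i < chunks.length; i++)
{
    loadImport("LargeVolumeImport" + i, chunks[i]);
}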

When running multiple import jobs in parallel, ensure that these guidelines are followed:

1. There are enough scheduler threads to handle the import jobs. Ensure that you start multiple schedulers. You can specify the schedulers in the
{Install_dir}/bin/conf/env_settings.ini file under the [services] section. For example, if four schedulers are needed, define them in the env_settings.ini file as follows:
scheduler=scheduler01,scheduler02,scheduler03,scheduler04
The value for num_threads in the {Install_dir}/etc/default/common.properties file determines how many jobs a single scheduler can handle.
2. There are enough database connections available for the schedulers. The number of database connections can be specified in the
{Install_dir}/etc/default/common.properties file with the following properties (see the consolidated sketch after this list):
db_maxConnection_scheduler
db_minConnection_scheduler
db_maxConnection_scheduler_default
db_maxConnection_scheduler_system
db_maxConnection_scheduler_gdsmsg
3. There is enough memory to handle the import jobs running in parallel. Adjust the memory settings for the scheduler in the
{Install_dir}/bin/conf/service_mem_settings.ini file by modifying the following property:
SCHEDULER_MEMORY_FLAG=-Xmx1024m -Xms48m
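
For orientation, the following consolidated sketch shows the three files that the preceding guidelines touch. All values are illustrative only and must be tuned for your
hardware and workload.

# {Install_dir}/bin/conf/env_settings.ini
[services]
scheduler=scheduler01,scheduler02,scheduler03,scheduler04

# {Install_dir}/etc/default/common.properties
num_threads=4
db_maxConnection_scheduler=40
db_minConnection_scheduler=10

# {Install_dir}/bin/conf/service_mem_settings.ini
SCHEDULER_MEMORY_FLAG=-Xmx2048m -Xms48m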

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Export job design


An export job is an external file that you use to distribute or publish product information from a catalog or a hierarchy to downstream systems. You use exports not only
during the initial data migration process but also to keep the Product Master Server system in synchronization with downstream systems on an ongoing basis.

Types of export jobs


While planning the type of export, you must consider many factors:

Type of export jobs


Decide whether to use an item, an item to category output, or another output type.
File type
Choose a file type, for example, an XML, a comma separated values (CSV), a Microsoft Excel file (XLS), or another file type.
Destination catalogs and/or hierarchies
If the export contains item data, select catalog. If the export contains category data, select hierarchy.
File type for export
Choose a file type for the export, for example, if you want to use the catalog export script.
File spec to catalog (or hierarchy) spec attribute mappings
Decide whether to use:

File spec to catalog: a predefined spec map as the basis for mapping the catalog item attributes to the export file attributes.
File spec to hierarchy: create a custom script or Java™ code to perform the mappings. More complex mappings typically require this option.

Frequency of execution
Consider the frequency of exports and the concurrent client activity. You can determine the hardware requirement by using this information.
Volume of data
Decide whether to run an hourly export or more or less frequently. You can determine the hardware requirement by using this information.
Destination format
Decide whether to use e-mail or FTP to send the catalog.
Transport protocols
Decide whether to use Messaging Queue, Web services or other transport protocols, such as FTP or manual upload.
Error handling
Decide on a retry mechanism, error notification mechanism, error logging mechanism, or any other error handling mechanism to handle errors during export.

Design considerations
While designing the type of export, you must consider many factors:

Initial data migration export jobs


These export jobs export the entire data set from the Product Master Server system. You can process these export jobs to optimize the data format for faster
exports, such as sorting the file so that multiple references to a single item are sequential in the file. You can export data in CSV files instead of XML because less
disk space and less overhead is required for CSV files.

Best practices
While planning and designing the type of export, you must consider many factors:

File formats
If the same kind of information is flowing from the Product Master Server system to multiple systems, then use a single file format. If you use multiple file formats,
you need to create as many exports as the number of file formats. Generally, you use an Enterprise Application Integration (EAI) layer between the Product Master
Server system and the downstream system.
Data transformations (Outbound)
For data transformation from the Product Master Server system to the downstream system, you must choose a single file format that can be split in to multiple file
formats by using an EAI layer for the downstream system.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Report job design



A report is a job type. You use reports to update data into multiple catalogs.

A report job shares many characteristics with import and export jobs. The unique characteristics of a report job are that it can be set up with input parameters and that it
does not maintain data versioning, whereas import and export jobs mark a new data version on the container.

Types of reports
While planning the type of report, you must consider many factors:

For example, you can design a report to analyze online sales activity of office furniture for Europe.

Type of the report


A user-defined script defines the type of the report.
Name of the report
You can define valid characters and values that the user can provide as the report name.
A distribution method
You can design the distribution method for the report, such as e-mail, FTP, post, or XML to connect.

Design considerations
While designing the type of report, you must consider many factors:

Batch updates
Batch updates are supported in imports only; therefore, reports might not provide optimal performance compared to imports for updating and importing large
volumes of data.
Workflow
A workflow is a set of steps. For example, you can prepare reports for the item life cycle (item status, duration per step, number of items set up weekly, overdue
tasks, item assignments by role, and others).
Item master data
For example, you can prepare reports for item assortment (which products belong to category XYZ), items set up between a date range, items set up by a particular
user, item relationships, and others.
Import data analytics
A supplier scorecard is a typical example of import data analytics. Based on the report jobs from the supplier, this type of report includes statistics such as the
number of items with errors, missing main attributes, the number of times an item was sent to and from the supplier, and the turnaround time. Depending on the
business, some of these reports might be scheduled daily, weekly, monthly, or ad hoc. They might also be set up with an e-mail distribution list to circulate the
report to others after it runs.

Report-based imports
Reports are another way to load data from external sources into IBM Product Master. They are similar to imports and are typically run on a scheduled basis. They
are primarily designed to produce data reports but are commonly used as generic "jobs" to do a number of scheduled tasks. They are sometimes used to import
data.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Report-based imports
Reports are another way to load data from external sources into IBM® Product Master. They are similar to imports and are typically run on a scheduled basis. They are
primarily designed to produce data reports but are commonly used as generic "jobs" to do a number of scheduled tasks. They are sometimes used to import data.

Design considerations
The following list shows the considerations for creating a report-based import:

No catalog locking
A report can update an item without locking the destination catalog. This can be useful in situations where you have transactional updates coming in and do not
want to interrupt the day-to-day operation of the catalog.
Item locking
Reports lock the entire set of items that are imported for the entire duration of the report. Therefore, no further user activity is possible on the items that are
imported by the report. For example, if a workflow needs to update an item at the same time the report is run, the workflow and the report may conflict.
Loose design
Reports are basically a scheduled script and do not have much of the overhead that is associated with imports. There are no specs, spec maps, or data sources that
need to be created.
Loading data
Can load data into multiple catalogs or hierarchies.
No catalog versioning
Reports do not create versions by default. This is beneficial in cases where you want to run many imports throughout the day but do not want the overhead of
creating versions. For example, a transactional model where a message queue is polled every 15 minutes for item updates. A report can be used to poll the queue
and process items without having to create a new catalog version each time.
Frequent small updates
Best suited for frequent small updates that need to happen throughout the day.
Multiple edits
There are some negative effects to consider such as multiple edits to the same item at once.
Concurrent import jobs
For optimal performance, avoid running concurrent import jobs.



IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Choosing between imports and reports


It is important to understand the customer's requirements before making the decision to go with a report or an import.

The following table summarizes the information from the previous section. Using this table as your decision tree, you can evaluate all of the criteria of each approach to
make an informed decision based on a requirement.
Table 1. Choosing between imports and reports

Function                               | Import | Report
Catalog Locking                        | Yes    | No
Enhanced Performance                   | Yes    | No
Update Multiple Catalogs               | Yes    | Yes
Ease of Development for Simple Import  | Low    | Medium
Ease of Development for Complex Import | High   | High
Scheduled                              | Yes    | Yes
Versioning                             | Always | Optional (Scripted)
Infrequent Small Imports               | Best   | Good
Infrequent Large Imports               | Best   | Good
Frequent Small Imports                 | Poor   | Best
Frequent Large Imports                 | Poor   | Poor

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Creating imports
You can create an import so that users can load data from an external source, such as a database or a document store, into a PIM system. You can create the import to
load the data one time, and then you can reuse the import to load the data again on demand or on a schedule.

Before you begin


Ensure that you create a data source before you create an import.

About this task


When you create an import, you need to specify a name for it and other information. The data import semantics specify how the incoming file is to affect any existing items
in the destination catalog. The file specification specifies the type of data that is included in the import file. You can select a catalog, lookup table, or a collaboration area.
If you pick a collaboration area you need to specify the workflow step to add the import to.

Procedure
1. Create the import.
Use one of the following methods to create the import: user interface, Java™ API, script API.
User interface
a. Click Collaboration Manager > Imports > New Imports.
b. In the New Import wizard, complete each step to specify the necessary details for the import.
Note: In the Select data import type field, select one of the following:

Binary feed
This type of import is used to upload a zip file, which can contain any type of information. While creating the import, you can
specify where the zip file is to be uploaded in the docstore. When the import is run, the files in the zip package are extracted under
the ctg_files directory in the docstore. All of the catalogs that have a binary image or thumbnail image attribute can use the file
name from any of these files to populate the attributes.
Hierarchy feed
This type of feed is used to import categories from a file to a hierarchy.
Item feed
This type of feed is used to import items from a file to a catalog.
Item to category map feed
This type of feed is used to import item data from a file to a catalog and then map it to certain categories. For example, if each line
in a CSV file has category data and item data, that item is imported to that category. If the category does not exist, it is
created under the hierarchy related to the catalog.



Java API
Sample 1: The following sample code creates an item import.

Context ctx = PIMContextFactory.getContext(USERNAME, PASSWORD, COMPANY_NAME);

String importName = "JavaAPI Import Test_1";

DistributionManager distManager = ctx.getDistributionManager();
DataSource dataSource = distManager.getDataSource("browser");
FileSpec fileSpec = (FileSpec) ctx.getSpecManager().getSpec("BasicCtgImportFileSpec");
Catalog destCatalog = ctx.getCatalogManager().getCatalog("BasicCatalog");

Map<OptionalArguments, Object> optionalArgs = new HashMap<OptionalArguments, Object>();
optionalArgs.put(Import.OptionalArguments.IMPORT_SEMANTICS, ImportSemantics.UPDATE);
optionalArgs.put(Import.OptionalArguments.CHARSET, null);
optionalArgs.put(Import.OptionalArguments.APPROVER_USER, ctx.getCurrentUser());
optionalArgs.put(Import.OptionalArguments.ACCESSCONTROLGROUP,
    ctx.getOrganizationManager().getAccessControlGroup("Default"));
optionalArgs.put(Import.OptionalArguments.SPECMAP,
    ctx.getSpecManager().getSpecMap("BasicCtgImportSpecMap"));
optionalArgs.put(Import.OptionalArguments.DOCSTOREPATH_TO_SCRIPT, "");
optionalArgs.put(Import.OptionalArguments.FEEDFILE_PATH, "junitdb1/JobManager/test.csv");

JobManager jobManager = ctx.getJobManager();

ItemImport itemImportObj = jobManager.createItemImport(importName, dataSource, fileSpec,
    destCatalog, optionalArgs);

itemImportObj.save();
Sample 2: The following sample code creates a category import.

Context ctx = PIMContextFactory.getContext(USERNAME, PASSWORD, COMPANY_NAME);

String importName = "JavaAPI Import Test_2";

DistributionManager distManager = ctx.getDistributionManager();
DataSource dataSource = distManager.getDataSource("browser");
Hierarchy destHierarchy = ctx.getHierarchyManager().getHierarchy("BasicCategoryTree");
FileSpec fileSpec = (FileSpec) ctx.getSpecManager().getSpec(FILE_SPECNAME);

Map<OptionalArguments, Object> optionalArgs = new HashMap<OptionalArguments, Object>();
optionalArgs.put(Import.OptionalArguments.IMPORT_SEMANTICS, ImportSemantics.UPDATE);
optionalArgs.put(Import.OptionalArguments.CHARSET, null);
optionalArgs.put(Import.OptionalArguments.APPROVER_USER, ctx.getCurrentUser());
optionalArgs.put(Import.OptionalArguments.ACCESSCONTROLGROUP,
    ctx.getOrganizationManager().getAccessControlGroup("Default"));
optionalArgs.put(Import.OptionalArguments.SPECMAP,
    ctx.getSpecManager().getSpecMap("BasicCtgImportSpecMap"));
optionalArgs.put(Import.OptionalArguments.DOCSTOREPATH_TO_SCRIPT, "");
optionalArgs.put(Import.OptionalArguments.FEEDFILE_PATH, "junitdb1/JobManager/test.csv");

JobManager jobManager = ctx.getJobManager();

CategoryImport catImportObj = jobManager.createCategoryImport(importName, dataSource,
    fileSpec, destHierarchy, optionalArgs);

catImportObj.save();



Script API
The following sample script API creates an item import.

var attr_FeedName = "SchedulerTestBasicCtgImport";
var attr_FeedType = "ITM";
var attr_DataSrc = "browser";
var attr_FileSpec = "BasicCtgImportFileSpec";
var attr_Catalog = "BasicCatalog";
var attr_SpecMap = "BasicCtgImportSpecMap";
var attr_CtrTree = "BasicCategoryTree";
var attr_Script = "";
var attr_Acg = "Default";
var optionalArgs = [];
optionalArgs["sCharsetName"] = "Cp1252";
optionalArgs["bIsCollaborationArea"] = false;
optionalArgs["sWflStepPath"] = null;
optionalArgs["sParamsDocPath"] = null;
optionalArgs["sFeedSemantic"] = "U";

var success = createImport(attr_FeedName, attr_FeedType, attr_DataSrc, attr_FileSpec, attr_Catalog,
    attr_SpecMap, attr_CtrTree, attr_Script, attr_Acg, optionalArgs);

if ("Done" != success)
{
    out.writeln("ERROR: Creation of Feed '" + attr_FeedName + "' failed due to unspecified reasons");
}

// Creating a sample feed file.
createOtherOut("myWriter");
for (var i = 0; i < 5; i++)
{
    var sDocText = "Item" + i + ", " + i + ",1";
    myWriter.writeln(sDocText);
}
myWriter.close("junitdb/test.csv");
loadImport("SchedulerTestBasicCtgImport", "junitdb/test.csv");
2. Optional: Run the import in the Import Console or the Jobs Console to ensure that the data is included correctly. When the import is run, you can find the
catalog.out file in the archives/ctg/generated/uploaded/<importjobName>/<current timestamp>/ directory.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Creating exports
You can create an export so that users can load data to an external destination, such as a print catalog or warehouse, from a PIM system. You can create the export to
load the data one time, and then you can reuse the export to load the data again on demand or on a schedule.

About this task


When you create an export, you need to specify a name for the export and other information. The data export semantics specify how the outgoing file is to affect any
existing items in the destination catalog. The file specification specifies the type of data that is included in the export file. You can select a catalog, lookup table, or a
collaboration area. If you pick a collaboration area you need to specify which workflow step you want to add the export to.

Procedure
1. Create the export.
Use one of the following methods to create the export: user interface, Java™ API, or script API.
User interface
a. Click Collaboration Manager > Exports > New Exports.
b. In the New Export wizard, complete each step to specify the necessary details for the export.



Java API
The following sample code creates a catalog-based item export.

DestinationSpec destSpec = null;
SpecMap destMap = null;

Context ctx = PIMContextFactory.getContext(USERNAME, PASSWORD, COMPANY_NAME);

String exportName = "JavaAPI Export Test_1";

SpecManager specMgr = ctx.getSpecManager();

destSpec = (DestinationSpec) specMgr.getSpec("BasicAPIDestinationSpec");
destMap = specMgr.getSpecMap("SimpleSpecMap");

Catalog sourceCatalog = ctx.getCatalogManager().getCatalog("BasicAPICatalog");
JobManager jobManager = ctx.getJobManager();
Map optAgrs = buildOptionalArgsForExport();
optAgrs.put(Export.OptionalArguments.CONTENTDIFF_TYPE, Export.ContentDiffType.ALL);
optAgrs.put(Export.OptionalArguments.DOCSTOREPATH_TO_SCRIPT, "BasicAPIDestinationSpec - Generated export script (CSV)");

Export exportObj = jobManager.createExport(
    exportName,
    destSpec,
    sourceCatalog,
    destMap,
    Type.ALL,
    optAgrs);

exportObj.save();
Script API
The following sample script creates a catalog-based item export.

var attr_ExportName = "SchedulerTestBasicCtgExport";
var attr_Catalog = "BasicCatalog";
var attr_DestFileSpec = "BasicCtgExportDestFileSpec";
var attr_ExportScript = "myExportScript";
var attr_SpecMap = "BasicCtgExportSpecMap";
var hmOpArgs = [];
hmOpArgs["approverUserName"] = "user1";
hmOpArgs["charsetName"] = "Cp1252";
hmOpArgs["selectionName"] = "My Selection";
hmOpArgs["sParamsDocPath"] = "params/My Input Spec/set1";
var success = createExport(
    attr_DestFileSpec,  // Destination File Spec
    attr_Catalog,       // Catalog name
    attr_SpecMap,       // Export Spec Map
    attr_ExportScript,  // Export Script
    attr_ExportName,    // Export name
    hmOpArgs);          // Optional Parameters
out.writeln(success);
2. Optional: Run the export in the Export Console or the Jobs Console to ensure that the data is included correctly. When the export is run, you can find the
export.out file in the /exports/<current timestamp>/ directory.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Creating reports
You can create reports and make them available for users to run.

Procedure
1. Create the report. Use one of the following options: user interface, Java™ API, or script API.
User interface
a. Click Product Manager > Reports > New Reports.
b. From the Select Report Type list, select a report type and click Select.
c. Specify a report name and distribution method for the report and click Select.

758 IBM Product Master 12.0.0


Option Description
The following sample Java API code creates a report.

Context ctx = PIMContextFactory.getCurrentContext();


Distribution distribution = ctx.getDistributionManager().getDistribution("My Distribution");
ScriptInputSpec inputSpec = ctx.getSpecManager().getSpec("My Spec");
JobManager mgr = ctx.getJobManager();
Report report = mgr.createReport(
"Report 1", // report name
Java API defDoc, // Document object for script of java codes
inputSpec, // ScriptInputSpec for document input spec
distribution // Distribution object to define where to put the report
);

You can set the value for the parameters in the script input spec by using Report.setInputParameterValue(String parameterName, String value)
in Java API. Parameter values are stored as attributes of document in path params/<input spec name>/<report name>.
The following sample script API code creates a report.

var distribution = getDistributionByName("My Distribution");


var report = new Report(
Script API
"Report 1", // report name
"reportdef", // Document object for script of java codes
distribution // Distribution object to define where to put the report
);
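For example, a minimal sketch of setting an input parameter value on the report object that was created above. The parameter name "Month" is hypothetical and must be defined in the report's input spec.

// "Month" is a hypothetical parameter that is assumed to exist in the "My Spec" input spec.
report.setInputParameterValue("Month", "March");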
2. Specify a query embedded in Java API or in script API for the report.
Java API
The following Java API code searches the ctg catalog to find all the items for which the primary key is not null. This code returns all the items in
the catalog.

Context ctx = PIMContextFactory.getCurrentContext();
SearchQuery query = ctx.createSearchQuery("select item from catalog('ctg') where item.pk is not null");
SearchResultSet rs = query.execute();
while (rs.next())
{
    System.out.println("item: " + rs.getItem(1));
}

Script API
The following script API code searches the ctg catalog to find all the items for which the primary key is not null. This code returns all the items
in the catalog.

var query = new SearchQuery("select item from catalog('ctg') where item.pk is not null");
var rs = query.execute();
while (rs.next())
{
    out.writeln("item: " + rs.getItem(1));
}

If you want to use new script API code with a report, create a new report type. For the Java API, pass the script as the document object.
3. Run the report in the Report Console or the Jobs Console to ensure that the data is included correctly. When a report is run, you can find the report.out file in
the /reports/<current timestamp>/ directory.

Results
The report is now available in the Reports Console, where your business users can run it.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Setting up schedules for jobs


You can set up schedules to run jobs such as report jobs, export jobs, and import jobs.

Procedure
Set up a schedule. Use any one of the following methods: user interface, Java™ API, or script API.
User interface
a. Click Data Model Manager > Scheduler > Jobs Console.
b. Specify the schedule on which you want your job to run.

Java API
Sample 1: The following sample Java API sets up a schedule for a job. The Java API has only one method of setting up a schedule for any kind of job
(import, export, or report).

Context ctx = PIMContextFactory.getCurrentContext();
JobManager manager = ctx.getJobManager();
Job job = manager.getJob(jobName);
Date date = new Date(new Date().getTime() + 3600000L);
Schedule schedule = manager.createSchedule(job, "sample_schedule", Schedule.Type.DAILY, date);
// creates a schedule named "sample_schedule" of type DAILY for the job, starting at the given time "date"

Sample 2: The following sample Java API sets up an immediate schedule for a job by taking the name from the job description.

Schedule schedule = manager.createSchedule(job);
// creates an immediate schedule, taking the name from the job description

Sample 3: The following sample Java API sets up a job schedule of a given type, using the current time as the next running time.

Schedule schedule = manager.createSchedule(job, "sample_schedule", Schedule.Type.ONE_TIME);
// creates a schedule of the given type, using the current time as the next running time

Script API
Sample 1: The following sample script API starts a schedule for running a report job.

var attr_JobName = "My Report";
var attr_JobType = "REPORTEXE";
var scheduleID = runJob(attr_JobName, attr_JobType);
out.writeln(scheduleID);

Sample 2: The following sample script starts a schedule for running an export job.

var attr_Export = "SchedulerTestBasicCtgExport";
var s = startExportByName(attr_Export);
out.writeln(s);

Sample 3: The following sample script starts a schedule for running an import job.

var attr_Import = "SampleImport";
var attr_DocPath = "feed_files/file.csv";
var s = startAggregationByName(attr_Import, attr_DocPath);
out.writeln(s);

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Creating scripts
The URL of the invoker script, which is named invoker.jsp, is composed of several components, including the Product Master application URL, the company code, and the
name of the script.

An invoker script sends an HTTP response, with HTML content, to the client. The client is a web browser or a mail client, so an external client can start a particular
URL. You can also use IBM® Product Master scripts to create an HTML page that is displayed on the screen when the script is called, for example, when you want to access
a link that is embedded in an email. You can design the invoker script as an ASP or JSP script.

The following URL is used for the invoker script:

http://localhost:8080/utils/invoker.jsp?company_code=<enterYourCompanyCodeHere>&bUserOutput=<BOOLEANtrueORfalse>&invoking_user=<username>/<enterCompanyCodeHere>&script=<enterTriggerScriptNameHere>

When bUserOutput=true, the following information is displayed:

Client, server, and request protocols


Parameters that are passed
Script context
Script output

If bUserOutput=false, then only user-defined output is displayed.


Some clean-up operations, such as cleanUp() and dump(), are done in secure_invoker.jsp but not in invoker.jsp. Use the script, invoker.jsp or secure_invoker.jsp,
that supports the functions that you need.

You can run the secure invoker script in the same way as the invoker script.

Best practices
You must make minimal use of the invoker scripts.
You must allow only applications within the firewall to access the invoker script and prevent the invoker script from being accessed from the external IP.

Create a sample trigger script named helloWorld from the Scripts Console, and type the following code:

out.writeln("Hello World");

You can start the trigger script by calling the invoker.jsp script:



http://localhost:8080/utils/invoker.jsp?company_code=MyCompany
&bUserOutput=true&invoking_user=Admin/MyCompany&script=helloWorld

Running a trigger script


To run a trigger script from a browser, type the corresponding Web address (URL). The URL consists of the Product Master application URL with the company code
and the name of the script.
Using the parameters of invoker.jsp
To call invoker.jsp, you need to have the values of a few of the parameters and know the different scripts involved. In the simple example below, invoker.jsp
is called by two different scripts: a custom tool script and a trigger script.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Running a trigger script


To run a trigger script from a browser, type the corresponding Web address (URL). The URL consists of the Product Master application URL with the company code and the
name of the script.

Procedure
1. Create a simple catalog and a collaboration area.
2. Create at least one item in the catalog.
3. Create a trigger script that saves an item. For example,

var logPath = "InvokerTest.txt";


var log = createOtherOut(logPath);
log.writeln(getCurrentUserName());
log.writeln(getCompanyName());
var ctg = getCtgByName("IBM_GroceryCatalog");
var item=ctg.getCtgItemByPrimaryKey("22347");
item.setCtgItemAttrib("IBM_GrocerySpec/Description", "CheckOut");
item.saveCtgItem();
log.save(logPath);
log.close();

4. Create a catalog script that checks whether the Description attribute of the item is set to "CheckOut". If it is, the script checks out the item into a collaboration area. For example,

var logPath = "PostSaveTest.txt";


var log = createOtherOut(logPath);
log.writeln(getCurrentUserName());
log.writeln(getCompanyName());
var ctg = getCtgByName("IBM_GroceryCatalog");
var item=ctg.getCtgItemByPrimaryKey("22347");
if (item.getCtgItemAttrib("IBM_GrocerySpec/Description") == "CheckOut")
{
    var pks = [];
    pks.add("22347");
    var entrySet = ctg.getEntrySetForPrimaryKeys(pks, false);
    var colArea = getColAreaByName("IBM_ModifyItemCollaboration");
    var hm = colArea.checkOutEntries(entrySet, null, true);
}
log.save(logPath);
log.close();

5. Use invoker.jsp for running trigger scripts and use secure_invoker.jsp for running the secured trigger scripts. Call invoker.jsp from a browser to run the trigger script as
follows:

http://9.184.112.117:8080/utils/invoker.jsp?
company_code=mycompany&script=InvokerTest

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Using the parameters of invoker.jsp


To call invoker.jsp, you need to have the values of a few of the parameters and know the different scripts involved. In the simple example below, invoker.jsp is
called by two different scripts: a custom tool script and a trigger script.

Procedure
Create and save the invoker.jsp script.
Custom tool script
This script is a custom tool script. It invokes a trigger script and passes the parameters of the invoker.jsp through the URL.
a. Go to the Scripts Console.
b. Select Custom Tools from the drop-down list.
c. Click New.
d. Specify the following parameters:
 i. Select input parameters spec: None
 ii. Provide the custom tool script name.
 iii. Select type: ASP/JSP like
 iv. Add the following code in the scripts editor:

<HTML>
<%
user = getCurrentUserName();
%>
<SCRIPT language="JavaScript">
window.location=("/utils/invoker.jsp?company_code=trigo&bUserOutput=false&invoking_user=<%=user %>/trigo&script=browserTriggerScript.wsp");
</SCRIPT>
</HTML>

e. Click Save to save the script.
Note: The format of the invoking_user value in the URL is <user_name>/<company_code>. For example, if user_name=user1 and company_code=trigo,
the value in the URL is user1/trigo.



Trigger script
This script is a sample trigger script, browserTriggerScript.wsp, that is invoked from the custom tool script.
a. Go to the Scripts Console.
b. Select Trigger Script from the drop-down list.
c. Click New.
d. Specify the following parameters:
 i. Select input parameters spec: None
 ii. Provide the trigger script name, browserTriggerScript.wsp.
 iii. Select type: ASP/JSP like
 iv. Add the following code in the scripts editor:

<HTML>
<HEAD>
<title>Test Page</title>
</HEAD>
<BODY>
User = <%= request["invoking_user"] %>, and getCurrentUserName = <%= getCurrentUserName() %>
<%
// Verify that the user_name passed through the URL is correct by using the API call getCurrentUserName()
out.writeln("<p>BEGIN TEST<br>");
userName = getCurrentUserName();
companyName = getCompanyName();
out.writeln("User Name: " + userName + "<br>");
out.writeln("Company Name: " + companyName + "<br>");
out.writeln("DONE<br></p>");
%>
</BODY>
</HTML>

e. Click Save to save the script.
f. Run the custom tool script:
 i. Go to the Home main menu.
 ii. Select My Settings.
 iii. Choose the custom tool script from above for the Use a Custom Tool page as the start page field.
 iv. Click Save.
 v. Refresh the browser window. You should see the following result in the display:

User = user1/trigo, and getCurrentUserName = user1.

BEGIN TEST
User Name: user1
Company Name: trigo
DONE

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Creating message queues


You can use message queues in Product Master through script API and Java™ API. Message queues are created externally.

Message queue design


Queues serve as a gateway to handle inbound and outbound messaging with external sources and destinations. For example, you can send status messages to all the
required parties when you run import or export jobs.
A messaging queue is typically used for asynchronous communication. A messaging queue helps to provide flexible connectivity with external applications because it
abstracts the application-specific interfaces. The messaging queue also helps you to simplify the IBM® Product Master connectivity because you can publish one message
to the queue. This message is then consumed by multiple subscribing downstream applications. A queue enables setting up a message transmission protocol to tie the
external message source to and from the queue.

Types of queues
While you plan the type of queue, you must consider many factors:

Type of queues
You must be aware of the types of queues that you can design, such as JMS and MQ. Types of queues include:

JMS
An interface that is defined for messaging. Various providers supply the actual JMS implementation; typically, the application server provides JMS,
such as WebSphere® Application Server JMS.
MQ
MQ is the IBM middleware product that provides the enterprise integration channel.

Number of external sources or destinations for inbound and outbound messaging
You must consider the external sources or destinations for inbound and outbound messaging, such as Enterprise Application Integration (EAI) platforms and web
servers. Based on the number of sources and destinations for messaging, you can design the number of queues.
Type of scripts
You must consider the types of scripts that you can design for the queues.

Design considerations
While designing the type of queue, you must consider many factors:

Type of messaging protocol
You must consider the types of messaging protocols that you can design for the queues. A minimal, generic example of sending a message to a queue follows.
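For illustration, here is a minimal, generic JMS 1.1 sketch in Java for publishing a status message to a queue. This is not a Product Master-specific API; the JNDI names jms/StatusCF and jms/StatusQueue are assumptions for a connection factory and queue that are created externally, for example, in the application server.

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.naming.InitialContext;

public class QueueStatusSender {
    public static void main(String[] args) throws Exception {
        InitialContext jndi = new InitialContext();
        // Hypothetical JNDI names; the queue and connection factory are created externally.
        ConnectionFactory factory = (ConnectionFactory) jndi.lookup("jms/StatusCF");
        Queue queue = (Queue) jndi.lookup("jms/StatusQueue");
        Connection connection = factory.createConnection();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(queue);
            // Subscribing downstream applications consume this message asynchronously.
            producer.send(session.createTextMessage("Export job completed"));
        } finally {
            connection.close();
        }
    }
}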

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Creating selections and searches


You can create selections so that users can use saved search queries or results. You can create searches so that users can search catalogs, hierarchies, or collaboration
areas.

Searching involves finding information in the PIM system. You can customize and define the searching model to support different business user requirements.

You must determine what kind of information the business users will search for, what options they will choose to search for the information, and the frequency of the
search. You can design your search model accordingly. You can define saved searches so that business users can save and use the search query or criteria for future
searches. Similarly, you can help business users to save the last search results. You can also help the business users to keep track of the visited items in the multi-edit
screen by highlighting these visited items in a different color.

Note: Only indexed attributes are supported when using rich search from new business user interfaces.
Ensure that you know what kind of search queries and results might be used frequently by the users. You can use the saved searches to run the search queries or look at
the saved results on demand.

Selections help you to save the results of a search as a query so that you can run the search in the future without specifying the search query again. The selection name is displayed
under the selections module in the left pane. If you want to run the query, click the selection name in the left pane. The query runs by using the same query as the original
saved query, without displaying the Rich search page.

There are two types of selections:

Static selection
Stores two types of entries (1) a list of individual entries and (2) a list of categories that contain items. The items in the first list are static and the items in the
second list are dynamically generated.
Dynamic selection
Stores the query and returns a dynamically generated result set with a list of items or categories based on the query.

Catalog search enables you to search for an item in a catalog. Hierarchy search helps you to search for categories within the hierarchy. Collaboration area search helps you
to search for items in the item collaboration area or categories in category collaboration area.

Search design and best practices


Search requirements for the data model include determining the search criteria, search attributes, and attribute datatype and estimating the number of search
results.
Creating selections
You can create selections so that users can work with items from saved search results and real-time queries.
Creating searches
You can create searches so that users can search for items in catalogs, categories in hierarchies, and items or categories in collaboration areas.
Using Recent Searches
You can modify, delete, or run again the most recent rich search that you ran on items within a catalog in IBM® Product Master.
Exporting search results
You can export search results to a Microsoft Excel spreadsheet. By using the generate report function, you can write the rich search result that is presented on the
multiple edit page to an Excel spreadsheet.



IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Search design and best practices


Search requirements for the data model include determining the search criteria, search attributes, and attribute datatype and estimating the number of search results.

As a solution architect, you must design the data model to support any search requirements. You can determine what common day-to-day activities your business users
will perform and what kind of data the business users access frequently.

You can index the data (single attribute) that you expect the client will need to access frequently. Indexing helps the search engine find the data faster. You might not need
to index some other data (non-indexed attributes) which is not frequently accessed by the client. Only attributes marked as indexed get stored within a relational table
for direct searching.

Consider these search requirements when designing your data model:

Search criteria
By defining search attributes, you can specify the search criteria on which a business user can run the search, such as searching the entire catalog, within the
previous search result set, or searching in a pre-defined selection.
Saved searches
As solution architect, you can design and define dynamic selections and static selections as two types of saved searches. Users can use the saved searches feature
to save search queries and search results. Users can use these saved searches to run the search queries or look at the saved results on demand.

Dynamic selection
Dynamic selections are saved search queries that return the latest data in the system. Dynamic selections return only items as their search results. Dynamic
selections are used to specify search criteria.
Static selection
Static selections are saved search queries that return the specific items or categories that you selected. The search results remain the same every time you
run the search. Static selections are used to specify a set of items and categories.

Static selections are better for performance because they return a static set of items and categories whereas dynamic selections perform the search again to reflect
the recent changes to the data. When you save the search, you can see the saved search as a selection. You can access a particular saved search from the drop-
down list in the left pane.

You can design the search to enhance the search experience.

Avoid marking too many attributes as indexed. All indexed attributes are stored in the ITA table, so marking too many attributes as indexed unnecessarily increases the
size of that table.

Avoid setting the indexed property on two attributes that are always used together in searches; marking one of the two attributes as indexed is sufficient. Indexing both
causes poor performance for the join query on a large ITA table.

During a search, you can specify search criteria for the attributes, such as 'contains' a certain word or 'endsWith' a certain word. Marking those attributes as indexed
speeds up the search.

Create XML indexes in the database for attributes that are searched often. This increases performance significantly.

Business users can run a search in the background as a scheduled job if they expect that the search might take a long time to run. The following search features are
currently supported by search and can help you to model the data and search queries to meet your business users' needs:

Searching for exact words or phrases


Searching with wild cards
Searching with logical operators (AND, OR, NOT)
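For example, a sketch in the search query language that is used elsewhere in this section, combining two predicates with a logical operator; the catalog, spec, and attribute names are illustrative.

select item from catalog('Product')
where item['ProductSpec/Brand'] = 'Acme'
and item['ProductSpec/Status'] = 'Active'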

The following features are a part of the search design:

Search templates
You can define a template to restrict the search results to only display the attributes that you want to view. For example, you can define a search template that
allows business users to specify criteria for name and price attributes in a search. You have the option to save the value of an attribute as part of the template by
selecting the Save the current parameters as part of the template option. The template provides a specified set of search criteria that enables you to restrict the
search results.
Search for multi-occurrence attribute values
You can define a search for multi-occurring attributes. For example, you can search for a multi-occurrence attribute, such as an item that has multiple occurrences
of the same attribute within that item.

Search for co-occurrence of attribute values in a multi-occurrence group


The co-occurrence of an attribute value in a multi-occurrence group relies on the relationship between the values within a single attribute. You can design
the search for two attributes that are displayed in the same occurrence of the group that the two attributes belong to. For example, you can design a search
for two attributes, given name and family name, that are displayed in the same occurrence. When the business user runs a search, values of both
attributes, given name and family name, are returned. If you search for given name John and family name Doe, then the preceding attributes and values are
returned. However, if you search for given name David and family name Doe, then the preceding attributes and values are not returned.
As another example, suppose that you have an item spec with a multi-occurrence STRING attribute. You create several items and fill in values for this
multi-occurrence attribute. You now want to find the items in your catalog that do not contain a specified search string in any of the occurrences of this attribute.
The item rich search returns all items that have at least one attribute occurrence that differs from the entered search string, so the result set also
includes items that contain the entered search string.

The following list describes the expected product behavior:

1. item['spec/multi-occ'] = 'abc' returns true if one of the occurrence values equals 'abc'
2. item['spec/multi-occ'] != 'abc' returns true if at least one of the occurrence values is different from 'abc'



Multi-edit and single-edit view feature
You might need to design and provide this feature where the business user is able to edit a single attribute in the single edit tab or page and multiple attributes in
the Multi-edit tab or page. You can show the single edit page when a single item is selected for viewing or editing. You can show the multi edit page when multiple
items are selected for viewing or editing.
Track visited items in multi-edit screen
You might need to design and provide this feature where the business user is able to keep track of the visited items in the multi-edit screen. When the business user
performs a search, multiple items are returned in the multi edit screen. When the business user clicks an item link, it is saved in the browser history. This item is
highlighted as a visited item in the multi edit screen. The business user can clear the history of the visited links by cleaning the browser history. The
track_visited_entries option in common.properties allows you to make the feature available or unavailable for a given instance of IBM® Product Master.
Attribute indexing
As a general recommendation, index the attributes that are the only attributes involved in a search query. If the search query involves multiple predicates, ensure
that not all the attributes are indexed.
Search category restrictions
Business users can place restrictions on the categories that are included in the search. With category restrictions, users can control how the search is
performed, based on the category mapping of the items in the catalog. Users can specify whether the items must be searched in any of the selected categories or in all of
the selected categories identified in the category restrictions. Users can also specify, with the search sub-tree and search level options on a given search
restriction, whether the search scope is restricted to the items mapped to a particular set of categories or to any categories in their subtrees. The search
category restrictions are saved as part of a search template or a saved search.
Search location category restrictions
You can place restrictions on the location categories that the business user can include in the search.
Default results display order
You can define the display sort order of the default results by primary key or by any indexed attribute. Primary key is used by default.
Enhance left navigation pane for selections
You can create the selections so that users can use saved search queries and results. You can create searches so that users can search catalog, hierarchies, and
collaboration areas.

Designing your search model for searching for objects in your data model helps provide a usable PIM system.

You should consider these best practices when planning your search model:

Non-indexed attribute search


Non-indexed attribute search involves searching by using XML technology. If a search query involves non-indexed attributes, the search is performed on the XML representation
of the master data. To improve the search performance for queries that involve non-indexed attributes, create XML indexes in the database.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Creating selections
You can create selections so that users can work with items from saved search results and real-time queries.

Before you begin


You need to determine if you want to create a static selection or dynamic selection. When users run a static selection, they get two types of items (1) a static list of
individual items and (2) a dynamically generated list of items within the manually selected categories.
When users run a dynamic selection, they get a dynamically generated result set with a list of items or categories based on the query.

Creating static selections


You can create static selections so that users can use the saved search queries and results. When users choose a static selection, they get the same search results
every time they run the selection. Static selections are useful for storing a specific query result that you want to keep as it is. For example, you might want to create a
selection to store the sales results of cars for March.
Creating dynamic selections
You can create a dynamic selection. When you choose a dynamic selection, you receive an updated search result every time you run the selection.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Creating static selections


You can create static selections so that users can use the saved search queries and results. When users choose a static selection, they get the same search results every
time they run the selection. Static selections are useful for storing a specific query result that you want to keep as it is. For example, you might want to create a selection to
store the sales results of cars for March.

Before you begin


Ensure that you know what results that you want returned from the static selection.

Procedure
1. Create the selection. Use any one of the following methods: user interface, Java™ API, or script API.



User interface
a. Click Product Manager > Catalogs > Catalog Console.
b. Choose a catalog.
c. Click Rich Search.
d. Provide the required details.
e. From the Save search as field, select Static Selection.
f. Provide a name for the selection.
g. Provide the search criteria.
h. Click Search.

Java API
The following sample Java API code creates a static selection.

Context ctx = PIMContextFactory.getCurrentContext();
SelectionManager mgr = ctx.getSelectionManager();
Catalog ctg = ctx.getCatalogManager().getCatalog("my catalog");
Hierarchy ctr = ctx.getHierarchyManager().getHierarchy("my hierarchy");
StaticSelection sel = mgr.createStaticSelection(ctg, ctr, "my static selection");
sel.save();

Script API
The following sample script creates a static selection.

var ctg = getCtgByName("my catalog");
var sel = new BasicSelection(ctg, "my static selection");
var item = ctg.getCtgItemByPrimaryKey("pk1");
sel.addEntryToSelection(item);
sel.saveSelection();

Note: If the script API deprecates BasicSelection and a new StaticSelection script operation exists, change var sel = new BasicSelection to
var sel = new StaticSelection.
The selection is created; users can view it in the Selection console.
2. Optional: Run the static selection.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Creating dynamic selections


You can create a dynamic selection. When choosing a dynamic selection, you receive an updated search result every time you run the selection.

Before you begin


Ensure that you have the information about the type of selection that you would like to create.

About this task


Dynamic selections are useful for getting the updated search results. For example, you might want to create a dynamic selection to retrieve the updated sales results of
cars for the current month when you run the dynamic selection.

Procedure
1. Create the selection. Use any one of the following methods: user interface, Java™ API, or script API.
User interface
a. Click Product Manager > Catalogs > Catalog Console.
b. Choose a catalog.
c. Click Rich Search.
d. Provide the required details.
e. From the Save search as field, select Dynamic Selection.
f. Provide a name for the selection.
g. Provide the search criteria.
h. Click Search.
Note: You can edit items in dynamic selections by navigating to Product Manager > Selections > Selection Console and clicking the Preview
this selection icon. After you select an item that you want to edit in a dynamic selection, you must click Edit Selected at the top of the page instead
of Edit at the extreme right of a row. This enables you to save any changes that you make to the item.

Java API
The following sample Java API code creates a dynamic selection.

Context ctx = PIMContextFactory.getCurrentContext();
SelectionManager mgr = ctx.getSelectionManager();
DynamicSelection sel = mgr.createDynamicSelection("my dynamic selection", " select item.pk from catalog('my catalog') where item['myspec/myattribute'] = 'abc' ");
sel.save();

Script API
The following sample script creates a dynamic selection.

var sel = new DynamicSelection("my dynamic selection", " select item.pk from catalog('my catalog') where item['myspec/myattribute'] = 'abc' ");
sel.saveSelection();

The selection is created; users can view it in the Selection console.



2. Optional: Run the dynamic selection.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Creating searches
You can create searches so that users can search for items in catalogs, categories in hierarchies, and items or categories in collaboration areas.

Creating catalog searches


You can create catalog searches so that users can search for items in a catalog.
Creating hierarchy searches
You can create hierarchy searches so that users can search for categories in a hierarchy.
Creating collaboration area searches
You can create collaboration area searches so that users can search for items in the item workflows and category workflows in the collaboration area.

Related information
../../reference/properties/ref_trackvisitedentries.html
../../reference/properties/ref_searchignorecase.html

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Creating catalog searches


You can create catalog searches so that users can search for items in a catalog.

Before you begin


Before you can create a catalog search, the catalog must exist.

Procedure
Create the search. Use any one of the following methods: user interface, Java™ API, or script API.
User interface (rich search, simplified)
a. Click Product Manager > Catalogs > Catalog Console.
b. Choose a catalog.
c. Click Rich Search.
d. Provide the necessary search details.
e. Click Search.

Java API
The following sample Java API code creates a catalog search, runs it, and displays the results.

Context ctx = PIMContextFactory.getCurrentContext();
SearchQuery query = ctx.createSearchQuery(" select item.pk from catalog('my catalog') where item['myspec/myattribute'] = 'abc' ");
SearchResultSet rs = query.execute();
while (rs.next())
{
    System.out.println("Primary Key: " + rs.getString(1));
}

Script API
The following sample script creates a catalog search, runs it, and displays the results.

var query = new SearchQuery(" select item.pk from catalog('my catalog') where item['myspec/myattribute'] = 'abc' ");
var rs = query.execute();
while (rs.next())
{
    out.writeln("Primary Key: " + rs.getString(1));
}

Query language
The following code searches all the items in a catalog. The catalog is named Product, with a primary spec named spec. This query returns
all the items in the catalog.

select item from catalog('Product')

The search is created, and users can now view it in the left pane.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)



Creating hierarchy searches
You can create hierarchy searches so that users can search for categories in a hierarchy.

Before you begin


Before you can create a hierarchy search, the hierarchy must exist.

About this task


To enable a user to see all the categories in the hierarchy, the user must be assigned a special combination of privileges. For example, to view all the defined
categories, a role that has access only to hierarchies, but not to catalogs, must still have the "view item" access privilege for catalogs.

Procedure
1. Create the search. Use any one of the following methods: user interface, Java™ API, or script API.
User interface (rich search, simplified)
a. Click Product Manager > Hierarchies > Hierarchy Console.
b. Choose a hierarchy.
c. Click Rich Search.
d. Provide the necessary search details.
e. Click Search.

Java API
The following sample Java API code creates a hierarchy search, runs it, and displays the results.

Context ctx = PIMContextFactory.getCurrentContext();
SearchQuery query = ctx.createSearchQuery(" select category.pk from hierarchy('my hierarchy') where category['myspec/myattribute'] = 'abc' ");
SearchResultSet rs = query.execute();
while (rs.next())
{
    System.out.println("Primary Key: " + rs.getString(1));
}

Script API
The following sample script creates a hierarchy search, runs it, and displays the results.

var query = new SearchQuery(" select category.pk from hierarchy('my hierarchy') where category['myspec/myattribute'] = 'abc' ");
var rs = query.execute();
while (rs.next())
{
    out.writeln("Primary Key: " + rs.getString(1));
}

Query language
The following code searches all the categories in a hierarchy. The categories are searched by their mapped primary keys.

select category.pk
from hierarchy('grocery store item hierarchy')
where category.spec.attribute_path = 'color'

The search is created, and users can now view it in the left pane.
2. Optional: Run the search in the Rich search screen.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Creating collaboration area searches


You can create collaboration area searches so that users can search for items in the item workflows and category workflows in the collaboration area.

Before you begin


Before you can create a collaboration area search, the collaboration area must exist.

Procedure
1. Create the search. Use any one of the following methods: user interface, Java™ API, or script API.
User interface (rich search, simplified)
a. Click Collaboration Manager > Collaboration Areas > Collaboration Area Console.
b. Click a workflow step in a collaboration area to display all items in the Multi Edit tab.
c. Click the Rich Search tab to create a search.
d. Provide the necessary search details.
e. Click Search.

Java API
The following sample Java API code creates a collaboration area search, runs it, and displays the results.

Context ctx = PIMContextFactory.getCurrentContext();
SearchQuery query = ctx.createSearchQuery(" select item.pk from collaboration_area('my collaboration area') where item.step.reserved_by = 'xyz' ");
SearchResultSet rs = query.execute();
while (rs.next())
{
    System.out.println("Primary Key: " + rs.getString(1));
}

Script API
The following sample script searches a collaboration area, runs it, and displays the results.

var query = new SearchQuery(" select item.pk from collaboration_area('my collaboration area') where item.step.reserved_by = 'xyz' ");
var rs = query.execute();
while (rs.next())
{
    out.writeln("Primary Key: " + rs.getString(1));
}
The search is created, and users can now view it in the left pane.
2. Optional: Run the search in the Rich search screen.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Using Recent Searches


You can modify, delete, or run again the most recent rich search that you ran on items within a catalog in IBM® Product Master.

Before you begin


Before you select a Recent Searches option, you must run a search operation by specifying search criteria in the Item Rich Search screen and waiting for the search results.

Procedure
1. Click Refresh in the Search tab in the left pane.
An entry that is named Recent Search1 is displayed after Recent Searches in the left pane. The Recent Search1 entry points to your most recent search operation.
2. Right-click Recent Search1.
A menu is displayed that contains the Run, Modify, and Delete options.
3. Select the appropriate option:
Click Run to run the search operation again.
Click Modify to modify the most recent Rich Search criteria displayed in the right pane.
Click Delete to delete the search operation.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Exporting search results


You can export search results to a Microsoft Excel spreadsheet. By using the generate report function, you can write the rich search result that is presented on the multiple
edit page to an Excel spreadsheet.

Before you begin


Before you can export search results to Microsoft Excel, you must know all the viewable attributes of all the items that you would like to export.

About this task


The IBM® Product Master Excel Parser is supported through a third-party .jar file named poi-3.7-20101029.jar. This library supports the Microsoft Excel 2007 format, which
allows more than 255 columns per data sheet. You need to run the new Rich Search Result Report script by using the Default Rich Search Results Report Script (Excel
2007 - xlsx format) option to export items with more than 255 attributes.
Note: You can still export Rich Search Results to files in the Excel 2003 format by using the Default Rich Search Results Report Script option, but with this format the
report fails if items have more than 255 attributes.

Procedure
Export search results to an Excel spreadsheet.
User interface
a. Click Product Manager > Catalogs > Catalog Console.
b. Choose a catalog.
c. Click Rich Search.
d. Perform a rich search to return multiple items on the Multiple Edit pane.
e. Click Generate Report.
Note:
The product provides a default script with the name Default Rich Search Results Report Script that can be used to generate the Excel
reports. However, this is a generic report and should not be used in a production environment, where a large set of items might be
written to the reports. When run as a foreground task, the report can take considerable time to run, depending on the number of items
being written to the report, and the application waits until the report completes. For production usage, you must either customize the
default script or implement a new report script, depending on the usage scenario. Customization might involve modifying or optimizing
the out-of-the-box script or writing an entirely new script and selecting the new script when running the report. In addition, the
Background option can be used to run the report script as a background task so that the screen is released for accessing other parts of
the system.
Occasionally, reports are not generated when you click the Generate Report button. The cursor changes into an hourglass icon, and the
Microsoft Internet Explorer browser appears to time out. To correct this problem, disable the pop-up blocker in the
Microsoft Internet Explorer browser and then click the Generate Report button again. If a pop-up blocker plug-in is installed on the
browser and the pop-up blocker is disabled, the browser window displays the "Popups okay" message.
f. In the dialog box, from the Search result reporting script list, select a script.
g. Select Background to generate the report as a background task.
h. Click Done.
i. If Background is not selected, click the Click here to open or save the report link.
 i. In the File Download dialog box, click Save.
 ii. In the Save As dialog box, provide a file name for the report and click Save.
j. If Background is selected, the background task is submitted.
 i. Click Click here to check the status of the Job to see the status of the job, which takes you to the Job Schedule Status page.
 ii. After the job completes, under the Return Value column, click the Click here to open or save the report link to open or save the report, as
described in the previous step.

Restriction: The product can contain out-of-the-box default scripts. However, these are provided as samples and might not be production-ready. Customization of such
scripts can be required depending on the usage scenario. Customization might involve modifying or optimizing the out-of-the-box script or writing an entirely new script
and selecting the new script when running the particular functionality.
The report is generated and saved.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Creating the security model


You create a security model so that users can have different levels of privileges to objects in the Product Master Server solution.

About this task


You control security for your Product Master Server system by providing different access privileges to users based on the roles of the users.
You can define roles based on the tasks that are performed by any user or set of users of the Product Master Server system. For example, you can define roles to control a
user's privileges to catalog management. You can set the privileges to the role and not to the user.

For example, you can define an Admin role for an administrator.

Each role can be assigned to multiple users. For example, the basic role can be assigned to user 1, user 2, and user 3.
Important: For a working security model, set the value of the javaapi_security flag in the common.properties file to true. The javaapi_security flag controls the
secure mode for Java™ API invocations. By default, the value of the javaapi_security flag is true. You can disable the security by setting the value of the
javaapi_security flag to false.
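For example, the relevant line in the $TOP/etc/default/common.properties file looks as follows; true is the default value.

javaapi_security=true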

Procedure
1. Create roles such as Admin and Basic.
2. Create a user for each person who uses the Product Master Server system.
3. Create access control groups (ACGs) to group objects in ACGs so that you can provide access privileges for the objects to the users who belong to that ACG.
4. Map objects to an ACG to group the objects in ACGs so that the objects inherit the access privileges of the ACG. Provide access privileges for the objects by
providing the user with the access to the ACG.
5. Grant group access privileges to roles such as the create and delete privileges for the Admin role.
6. Grant system privileges to the roles such as the manager role so that the managers have privilege to modify other roles in the Product Master Server system.
7. Grant privileges to the user interface screens by role.
For example, grant the Admin role privileges to all of the administration screens.

Setting up a Single Sign-on (SSO) environment


To set up SSO in your environment, you must install and configure your application server to enable the SSO feature.
Configuring Product Master in the WebSphere Application Server
In order for SSO in IBM® Product Master to work, you must set the Product Master role to All Authenticated in WebSphere® Application Server.
Configuring SSO
For SSO to work in the Product Master, you must configure the SSO parameters in the Admin UI and Persona-based UI property files.



Configuring SAML SSO
IBM Product Master supports SAML 2.0 web single sign-on with Just In Time (JIT) provisioning for the Admin UI and Persona-based UI. Security
Assertion Markup Language (SAML) is an OASIS open standard for representing and exchanging user identity, authentication, and attribute information. JIT enables
more efficient integration of SAML to provide a seamless application login experience for users as it automates user account and group creation. SAML JIT now
does not need a local LDAP for user authentication and instead relies on SAML attributes that are received as claims in the SAML assertion to retrieve user
attributes and groups.
Configuring SPNEGO and Kerberos SSO
When a client application wants to authenticate to a remote server, but if the server or the client is not sure which authentication protocol the other supports, you
can use various SSO implementations.
Defining users
When you define users, consider which users need access to the Product Master Server system and which users can wait or have their needs addressed in other
ways.
Defining roles
A role is set of permissions that are shared amongst one or more users.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Setting up a Single Sign-on (SSO) environment


To set up SSO in your environment, you must install and configure your application server to enable the SSO feature.

SSO is an authentication process in which a user can access more than one system or application by entering a single user ID and password.

If you are an administrator, you need to specify the company that you want to access during login. Any other user is taken to their specific company.
Restriction: You can use only one company name. When you implement SSO in IBM® Product Master, every user must use the same default company name when they log
in.
You can use SSO with WebSphere® Application Server. To implement SSO for WebSphere Application Server, you need to install and configure WebSphere Application
Server and then enable SSO in your IBM Product Master database and configuration.

Ensure you are familiar with the following SSO prerequisites.

Product Master prerequisites


At a minimum, you need to have Product Master Version 12.0 using WebSphere Application Server Version 9.0.
WebSphere Application Server prerequisites

Verify that all of the servers are configured as part of the same DNS domain. The realm names on each system in the DNS domain are case-sensitive and
must match identically. For example, if the DNS domain is specified as mycompany.com, then SSO is effective with any Domino® server or WebSphere
Application Server on a host that is part of the mycompany.com domain, for example, a.mycompany.com and b.mycompany.com.
Verify that all servers share a registry. This registry can be either a supported Lightweight Directory Access Protocol (LDAP) directory server or, if SSO is
configured between two WebSphere Application Servers, a stand-alone custom registry.
Define all of the users in a single LDAP directory. Using multiple Domino directory assistance documents to access multiple directories is not supported.
Enable HTTP cookies in browsers because the authentication information that is generated by the server is transported to the browser in a cookie. The
cookie is used to propagate the authentication information for the user to other servers, exempting the user from entering the authentication information for
every request to a different server.
For more information on WebSphere Application Server prerequisites, see Single sign-on for authentication using LTPA cookies.

A third-party user registry

Ensure that you use a third-party user registry that is supported by WebSphere Application Server Version 8.5 and synchronized with the Product Master
internal registry.
All of the users that exist in the Product Master internal user registry must also exist in the external user registry with the same credentials, for example, the same
username and same password.
A Product Master default company in which every user who is eligible for SSO must have a unique username.

SSO supports the WebSphere Application Server. The following list describes how SSO works with the application server.

1. A request is made to IBM Product Master.


2. Authenticate your credentials to your application server.
3. Your credentials are verified.
4. Correct credentials are passed to your application server.
5. You are presented with a response.
6. Another request is made to Product Master.
7. You are presented with a final response.

When you install and configure an SSO server, you need to choose which application server you are going to use.

1. Choose and install the WebSphere Application Server. For more information, see the application server documentation.

Configuring LDAP and Product Master internal repositories synchronization


In order for SSO in IBM Product Master to work, you must configure the synchronization between repositories to ensure that you have security access to both the
LDAP repository and the Product Master internal repository.
Setting up WebSphere Application Server
To enable SSO for Product Master, you must perform the following configuration steps for enabling the server’s SSO features.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)



Configuring LDAP and Product Master internal repositories synchronization
In order for SSO in IBM® Product Master to work, you must configure the synchronization between repositories to ensure that you have security access to both the LDAP
repository and the Product Master internal repository.

About this task


In order for SSO to work, you must have a unique identity, for example, a unique username. When you log in to the LDAP repository with your credentials, your credentials
are verified against the LDAP registry. If a user exists in LDAP but not in the Product Master internal repository, a potential problem arises because the user might be able to
log in to the security realm and get access to PIM without being authorized. Also, if you are recognized by Product Master but are not recognized in LDAP, you cannot
log in to LDAP.

Procedure
1. Configure synchronization between the LDAP repository and Product Master internal repository.
2. Specify your user credentials by performing the following steps:
a. Access the Product Master login page.
b. Provide your user name, password, and company name. Click Submit. Your user credentials are verified against the Product Master registry. After verification,
your requested page is displayed.
Your credentials can be authenticated against any one of the following:
Local database
LDAP server
SSO support from the application server

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Setting up WebSphere Application Server


To enable SSO for Product Master, you must perform the following configuration steps for enabling the server’s SSO features.

Before you begin


Ensure that the LDAP repository is synchronized with the Product Master database.

Procedure
Proceed as follows to enable administrative security for the WebSphere® Application Server.

1. In a web browser, go to the URL for the WebSphere Admin Console.
2. Expand Security > Global security, and click Security Configuration Wizard.
3. Select Enable application security and Enable administrative security, and click Next.
4. Select Federated repositories, and click Next.
5. Enter the administrative credentials, and click Next.
6. Click Finish > Save.
7. Restart the WebSphere Application Server.

Enabling SSO in WebSphere Application Server


Enabling SSO needs to be completed before you can configure Product Master within your application server environment.
Configuring LTPA in WebSphere Application Server

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Enabling SSO in WebSphere Application Server


Enabling SSO needs to be completed before you can configure Product Master within your application server environment.

About this task


For more information about WebSphere® Application Server SSO settings, see: Single sign-on settings.

Procedure



1. In the WebSphere Application Server Administration Console, click Security > Global security.
2. In the Authentication window, select the LTPA radio button.
3. Click Single sign-on (SSO), under the Web and SIP Security section.
4. Select the minimum requirements for SSO enablement: select the Enable and Web inbound security attribute propagation options from the SSO menu.
5. Click OK.
6. Click Save in the message box that appears at the top of the WebSphere Application Server Administration Console.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Configuring LTPA in WebSphere Application Server

About this task


The proper configuration of the following settings is necessary when using multiple server instances and domains. For more information, see the Lightweight Third Party
Authentication in the WebSphere® Application Server Network Deployment documentation.

Procedure
1. In the WebSphere Application Server Administration Console, click Security > Global security.
2. In the Authentication window, click LTPA.
3. Restart the server.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Configuring Product Master in the WebSphere Application Server


In order for SSO in IBM® Product Master to work, you must set the Product Master role to All Authenticated in WebSphere® Application Server.

Procedure
1. Log in to the WebSphere Application Server administrative console.
2. Go to Applications > Application Types > WebSphere enterprise applications. The Enterprise Applications page opens.
3. In the Enterprise Applications page, click the <war_file_name> link. The Security role to user/group mapping page opens.
4. In the Security role to user/group mapping page, specify the following according to the WAR file, and click OK.

mdm-rest.war, ccd.war
a. Select the AllAuth role.
   i. Click Map Special Subjects.
   ii. Select All Authenticated in Application’s Realms.
b. Select the LoginUser role.
   i. Click Map Special Subjects.
   ii. Select Everyone.

mdm_ui.war
a. Select the AllAuth role.
   i. Click Map Special Subjects.
   ii. Select Everyone.
b. Select the LoginUser role.
   i. Click Map Special Subjects.
   ii. Select Everyone.
5. Edit the $TOP/bin/conf/env_settings.ini file.
a. Set admin_security to true and specify the application server administrator username and password.

[appserver.websphere70]
admin_security=true
[appserver]
username=
password=

b. Restart the Product Master services.


6. Restart the WebSphere Application Server administrative console and the application server on which Product Master is deployed.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Configuring SSO



For SSO to work in the Product Master, you must configure the SSO parameters in the Admin UI and Persona-based UI property files.

Before you begin


Ensure that all of the SSO users maintain a unique username and password in a default company.

Procedure
1. Admin UI
a. In the common.properties file that is located in the $TOP/etc/default folder, set the value of the enable_sso parameter to true and the value of the sso_company parameter to company_name.
2. Persona-based UI
a. In the config.json file that is located in the $TOP/mdmui/dynamic/mdmui folder, set the value of the enableSSO parameter to true.
b. Go to the $TOP/mdmui/bin folder and run the updateRtProperties.sh script.
3. Restart the application.
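For reference, the following is a minimal sketch of the settings from steps 1 and 2, using the parameter names above; substitute your own company name, and note that the surrounding structure of your config.json file will differ.

In common.properties:

enable_sso=true
sso_company=<company_name>

In config.json:

"enableSSO": "true"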

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Configuring SAML SSO

IBM® Product Master supports SAML 2.0 web single sign-on with Just In Time (JIT) provisioning for the Admin UI and Persona-based UI. Security Assertion Markup Language (SAML) is an OASIS open standard for representing and exchanging user identity, authentication, and attribute information. JIT enables more efficient integration of SAML and provides a seamless application login experience for users because it automates user account and group creation. With SAML JIT, a local LDAP is not needed for user authentication; instead, user attributes and groups are retrieved from the SAML attributes that are received as claims in the SAML assertion.

About this task


WebSphere® Application Server acts as a SAML service provider. A web user authenticates to a SAML identity provider, which produces a SAML assertion; the WebSphere Application Server SAML service provider consumes the assertion to establish a security context for the web user and grants access to the IBM Product Master Admin UI and Persona-based UI web applications. The Admin UI and Persona-based UI applications extract the SAML attributes that are received as claims in the SAML assertion to create users and roles in Product Master. It is important to set the SAML assertion attribute mappings on the SSO partners.
Note: You must have a valid role in Product Master to be able to log in to the application. Roles that are created as a result of the SAML login are created with default ACG permissions. It is the Administrator's responsibility to assign the correct role to the user or update the permissions in the roles. Newly created roles are not added to the $TOP/mdmui/dynamic/mdm-rest/mdmce-roles.json file. The user is assigned a basic role and is allowed to log in to the Persona-based UI. You can disable the role creation in the SSO Configuration lookup table. For more information, see Configuring SSO properties.

Procedure
1. Configure Product Master.
2. Enable the SAML Web browser SSO.
3. Configure SSO partners.
4. Enable SAML Service Provider Initiated (SP-Initiated) web SSO.
5. Configure SSO in the browser.

Configuring the IBM Product Master


Before configuring SAML SSO, complete the following tasks.
Enabling the SAML Web browser SSO
To enable the SAML Web browser SSO, complete the following tasks.
Configuring SSO partners
To configure SSO partners, complete the following tasks.
Enabling SAML Service Provider Initiated web SSO
To enable SAML Service Provider Initiated (SP-Initiated) web SSO (SSO), complete the following task.
Configuring SSO in the browser
To configure your browser to authenticate SSO, complete the following task in your browser.
Timeout behavior in the Persona UI
When SAML SSO is enabled, the session timeout for Product Master Persona-based UI is based on the following properties.
Known issues and limitations (SSO)
Certain product features assume that the system is deployed by using a centralized deployment model where services share the file system and product binaries.
With containerized deployments, services no longer have a common file system and are working in isolation.

Related concepts
Troubleshooting the SAML SSO issues

Related information
SAML single sign-on scenarios, features, and limitations
TroubleShoot: SAML Web SSO, WebSphere traditional



IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Configuring the IBM Product Master

Before configuring SAML SSO, complete the following tasks:

Configuring SSO properties
Enable administrative security for the WebSphere Application Server
Enable administrative security for the Product Master
Configure security role mapping
Enable SSO flags for the Product Master

Configuring SSO properties


To enable the SSO properties, proceed as follows.

1. Enable SSO authentication in the Login.wpcs file, which identifies the authentication mechanism. Set the wpcOnlyAuthentication flag in the Login.wpcs file to false if LDAP authentication is required.
a. Click Data Model Manager > Scripting > Scripts Console.
b. Select Login script from the drop-down list.
c. Click Edit for the Login.wpcs script.
d. Find and set the wpcOnlyAuthentication flag to false.
2. Populate SAML attributes in the SSO Configuration lookup table from Admin UI.
a. Import the mdm-env.zip file that is located in the $TOP/mdmui/env-export/mdm-env folder, if not already done.
b. Go to Product Manager > Lookup Table > Lookup Table Console.
c. Select the SSO Configuration lookup table and add a new entry.
d. Populate all the attributes as follows.

Id
The primary key of the lookup table entry; generated automatically.
SSO Type
SAMLv2.0
Create Role
Controls role creation after login to Product Master.
True: User roles are created if the roles do not exist.
False: User roles are not created, and the Administrator needs to manually create roles.
First Name Attribute
The user attribute that represents the given name in the SAML assertion, for example, http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name.
Last Name Attribute
The user attribute that represents the surname in the SAML assertion, for example, http://schemas.xmlsoap.org/ws/2005/05/identity/claims/surname.
Mail ID Attribute
The user attribute that represents the mail ID in the SAML assertion, for example, http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress.
Telephone Number Attribute
The user attribute that represents the telephone number in the SAML assertion, for example, http://schemas.xmlsoap.org/ws/2005/05/identity/claims/telephone.
Fax Number Attribute
The user attribute that represents the fax number in the SAML assertion, for example, http://schemas.xmlsoap.org/ws/2005/05/identity/claims/fax.
Postal Address Attribute
The user attribute that represents the postal address in the SAML assertion, for example, http://schemas.xmlsoap.org/ws/2005/05/identity/claims/address.
Title Attribute
The user attribute that represents the title in the SAML assertion, for example, http://schemas.xmlsoap.org/ws/2005/05/identity/claims/title.
Roles Attribute
The member-of attribute that represents the group in the SAML assertion, for example, http://schemas.xmlsoap.org/claims/Group.
Organization Attribute
The user attribute that represents the organization in the SAML assertion, for example, http://schemas.xmlsoap.org/claims/organization. This attribute is required only for the Vendor Persona users. The vendor user is created under the Vendor Organization Hierarchy based on the value of the organization attribute. Possible values are: Vendor1OU, ParentOU/Vendor1OU, and so on.
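For orientation, the following is a hypothetical fragment of a SAML assertion that shows how such claims typically arrive in an AttributeStatement; the attribute names and values depend entirely on your identity provider configuration.

<saml:AttributeStatement xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion">
  <saml:Attribute Name="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name">
    <saml:AttributeValue>John</saml:AttributeValue>
  </saml:Attribute>
  <saml:Attribute Name="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress">
    <saml:AttributeValue>john.smith@example.com</saml:AttributeValue>
  </saml:Attribute>
  <saml:Attribute Name="http://schemas.xmlsoap.org/claims/Group">
    <saml:AttributeValue>PM_Content_Author</saml:AttributeValue>
  </saml:Attribute>
</saml:AttributeStatement>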

Enable administrative security for the WebSphere Application Server


To enable administrative security for the WebSphere® Application Server, proceed as follows.

1. Log in to WebSphere Application Server Console.


2. Select Security and then click Global Security.



3. Click Security Configuration Wizard.
4. Select Enable application security and Enable administrative security, and then click Next.
5. Select Federated repositories, and then click Next.
6. Enter the administrative credentials, and then click Next.
7. Click Finish > Save.
8. Restart WebSphere Application Server.

Enable administrative security for the Product Master


To enable administrative security for the Product Master, proceed as follows.

1. Edit the $TOP/bin/conf/env_settings.ini file.


a. Set the value of the admin_security property to true.
[appserver.websphere70]
admin_security=true
b. Specify the credentials for the WebSphere Application Server.
[appserver]
username=
password=
2. Restart the Product Master services.

Configure security role mapping


To configure security role mapping, proceed as follows.

1. Log in to the WebSphere Application Server administrative console.


2. Go to Applications > Application Types > WebSphere enterprise applications. The Enterprise Applications page opens.
3. In the Enterprise Applications page, click the <war_file_name> link. The Security role to user/group mapping page opens.
4. In the Security role to user/group mapping page, specify the following according to the WAR file, and click OK.

ccd.war, mdm_ui.war
a. Select the AllAuth role.
   i. Click Map Special Subjects.
   ii. Select All Authenticated in Trusted Realms.
b. Select the LoginUser role.
   i. Click Map Special Subjects.
   ii. Select Everyone.

mdm_rest.war
a. Select the AllAuth role.
   i. Click Map Special Subjects.
   ii. Select All Authenticated in Trusted Realms.
b. Select the LoginUser role.
   i. Click Map Special Subjects.
   ii. Select All Authenticated in Trusted Realms.

5. Restart the WebSphere Application Server administrative console and the application server on which Product Master is deployed.

Enable SSO flags for the Product Master


Ensure that all the SSO users maintain a unique username and password in a default company.
To enable SSO flags for the Product Master, proceed as follows.

1. In the $TOP/etc/default/common.properties file,


a. Set the value of the enable_sso property to true.
b. Set the value of the sso_company property to <company_name>.
Example

# SSO authentication enabled


enable_sso=true
sso_company=<company_name>

2. In the $TOP/mdmui/dynamic/mdmui/config.json file, set the value of the enableSSO property to true.

"enableSSO": "true"

3. Run the updateRtProperties.sh script by using the following commands.

cd $TOP/mdmui/bin
./updateRtProperties.sh

4. Restart the services by using the following commands.

cd $TOP/bin/go
./stop_local.sh
./start_local.sh

What to do next
Enabling the SAML Web browser SSO

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)



Enabling the SAML Web browser SSO
To enable the SAML Web browser SSO, complete the following tasks.

Install the SAML Assertion Consumer Service (ACS).


Enable SAML Trust Association Interceptor (TAI).

Install the SAML ACS


Using the WebSphere® Application Server administrative console, install the ACS application (WebSphereSamlSP.ear) located in the $WAS_HOME/installableApps/ folder.

1. Log in to WebSphere Application Server console and click New Application.


2. Click New Enterprise Application.
3. Select WebSphereSamlSP.ear file from the local machine and click Next.
4. Select Server1 as the server on the Map modules to servers page.
Note: If Server1 is not running, start the server by using the following command.

$WAS_HOME/<profile-home>/bin/startServer.sh server1

5. Select Fast Path, click Next, and then click Finish.

Enable SAML TAI


1. Log in to WebSphere Application Server console.
2. Click Security > Global security.
3. Expand Web and SIP security and click Trust association.
4. Under the General Properties, select the Enable trust association checkbox and click Interceptors.
5. Under Custom properties, complete the custom property information.
6. Click New and enter the following custom property information.
Name: sso_1.sp.idMap
Value: idAssertion
7. Click OK.
8. Click New and enter com.ibm.ws.security.web.saml.ACSTrustAssociationInterceptor in the Interceptor class name field.
9. Go back to Security > Global security and click Custom properties.
10. Click New and define the following custom property information under General properties.
Name: com.ibm.websphere.security.DeferTAItoSSO
Value: com.ibm.ws.security.web.saml.ACSTrustAssociationInterceptor
11. Click New and define the following custom property information under General properties.
Name: com.ibm.websphere.security.InvokeTAIbeforeSSO
Value: com.ibm.ws.security.web.saml.ACSTrustAssociationInterceptor
12. Click OK.
13. Restart the WebSphere Application Server.

Note: The com.ibm.websphere.security.DeferTAItoSSO property was previously used in the default configuration of all installed servers. Now it is used only as part of the SAML configuration. Therefore, even if this property exists in your system configuration, you must change its value to com.ibm.ws.security.web.saml.ACSTrustAssociationInterceptor. Multiple values, separated by commas, cannot be specified for this property. It must be set to a single SAML TAI.
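If you script your security configuration, these global security custom properties can also be set through wsadmin. The following is a hedged sketch, assuming the -customProperties parameter of the AdminTask.setAdminActiveSecuritySettings command in traditional WebSphere Application Server; verify the syntax for your version before use.

AdminTask.setAdminActiveSecuritySettings('[-customProperties ["com.ibm.websphere.security.DeferTAItoSSO=com.ibm.ws.security.web.saml.ACSTrustAssociationInterceptor","com.ibm.websphere.security.InvokeTAIbeforeSSO=com.ibm.ws.security.web.saml.ACSTrustAssociationInterceptor"]]')
AdminConfig.save()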

What to do next
Configuring SSO partners.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Configuring SSO partners


To configure SSO partners, complete the following tasks.

Configure SAML attribute mappings on the single sign-on partner.


Add an identity provider by using metadata of the identity provider (IdP).
Add Identity Provider realms to the list of inbound trusted realms.
Add the Issuer in the RMI-IIOP security.
Export the Service Provider metadata file from wsadmin command-line utility.

Configure SAML attribute mappings on the single sign-on partner


The SAML subject identifies the authenticated user. Product Master SAML SSO requires that the single sign-on partner configure NameID as the SAML assertion subject.

The single sign-on partner should also define attribute mappings for group memberships.

Setting the NameID and Group mappings in the SAML assertion is mandatory for Product Master login. Other optional mappings can be defined for user attributes, for example, First Name, Last Name, Title, Email Address, Telephone, Fax, and Address.



If you want to enable Vendor users to log in with SAML SSO, the Organization mapping must be set in the SAML assertion. Ensure that the Organization attribute value matches a Vendor organization that is present in the Product Master Vendor Organization Hierarchy.

Add an identity provider by using metadata of the identity provider


1. Start the wsadmin command-line utility from the app_server_root/bin folder by using the following command.

>./wsadmin.sh -lang jython

2. At the wsadmin prompt, enter the following command.

AdminTask.importSAMLIdpMetadata('-idpMetadataFileName <IdPMetaDataFile> -idpId 1 -ssoId 1 -signingCertAlias <idpAlias>')

Where IdPMetaDataFile is the full path name of the IdP metadata file, and idpAlias is any alias name that you specify for the imported certificate. Use the <IdPMetaDataFile> that is exported from the Identity Provider.
Example

AdminTask.importSAMLIdpMetadata('-idpMetadataFileName /opt/metadata/federationmetadata.xml -idpId 1 -ssoId 1 -signingCertAlias adfs_cert')

3. Save the configuration by using the following command.

AdminConfig.save()

4. Exit the wsadmin command utility by using the following command.

quit

Add identity provider realms to the list of inbound trusted realms


1. Log in to the WebSphere® Application Server administrative console.
2. Click Global security.
3. Under User account repository, click Configure.
4. Click Trusted authentication realms - inbound.
5. Click Add External Realm.
6. Enter the external realm name.
7. As an example, add the following to the inbound trusted realms.
https://adfsserver.ipm.local/adfs/ls

http://adfsserver.ipm.local/adfs/services/trust
8. Click Apply.
9. Click OK and Save changes to the master configuration.
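If you prefer scripting, the same inbound realms can be added with the wsadmin AdminTask.addTrustedRealms command, which is available in traditional WebSphere Application Server; this is a sketch, so verify the parameter names for your version. Realm names in the list are separated by the | character.

AdminTask.addTrustedRealms('[-communicationType inbound -realmList https://adfsserver.ipm.local/adfs/ls|http://adfsserver.ipm.local/adfs/services/trust]')
AdminConfig.save()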

Add the Issuer in the RMI-IIOP security


1. Log in to the WebSphere Application Server administrative console.
2. Click Global security.
3. Under RMI/IIOP Security, click CSIv2 outbound communications.
4. Click Trusted authentication realms – outbound.
5. Click Add External Realm.
6. Enter IdP entityID URL.
Example
http://adfsserver.ipm.local/adfs/services/trust
You get the entityID URL in the federation metadata XML file.
7. Restart the WebSphere Application Server.

Export the Service Provider metadata file from wsadmin command-line utility
1. Start the wsadmin command-line utility from the app_server_root/bin folder by using the following command.

>./wsadmin.sh -lang jython

2. At the wsadmin prompt, enter the following command to export the Service Provider metadata.

AdminTask.exportSAMLSpMetadata('-spMetadataFileName /tmp/spdata.xml -ssoId 1')

This command creates the /tmp/spdata.xml metadata file.


3. The service provider metadata file can be consumed by the SSO partners.
Note: In a cluster deployment, extract the service provider metadata from only one node of the cluster.

What to do next
Enabling SAML Service Provider Initiated web SSO.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)



Enabling SAML Service Provider Initiated web SSO
To enable SAML Service Provider Initiated (SP-Initiated) web SSO (SSO), complete the following task.

1. Develop a SAML authentication request provider that implements the com.ibm.wsspi.security.web.saml.AuthnRequestProvider interface.
The com.ibm.wsspi.security.web.saml.AuthnRequestProvider class is found in the was_public.jar file in the $WAS_HOME/dev folder.
The com.ibm.ws.wssecurity.saml.common.util.UTC class that is used in this sample can be found in the com.ibm.wsfp.main.jar file that is located in
$WAS_HOME/plugins folder.
The HttpServletRequest class that is used in this sample can be found in the $WAS_HOME/lib/j2ee.jar folder.
The method getAuthnRequest(HttpServletRequest req, String errorMsg, String acsUrl, ArrayList<String> ssoUrls) must return a map that
includes four entries with the following keys.

AuthnRequestProvider.SSO_URL
The SAML identity provider's SSO URL.
AuthnRequestProvider.RELAY_STATE
The relayState as defined by the SAML Web Browser SSO profile.
AuthnRequestProvider.REQUEST_ID
The value for this key must match the ID attribute's value in the AuthnRequest message.
AuthnRequestProvider.AUTHN_REQUEST
A Base64 encoded AuthnRequest message as defined in the spec.

Sample authentication request for SAML SSO

package com.ibm.saml;

import java.security.SecureRandom;
import java.util.ArrayList;
import java.util.Base64;
import java.util.Date;
import java.util.HashMap;

import javax.servlet.http.HttpServletRequest;

import com.ibm.websphere.security.NotImplementedException;
import com.ibm.ws.wssecurity.saml.common.util.UTC;
import com.ibm.wsspi.security.web.saml.AuthnRequestProvider;

/**
 * SAML authentication request provider implementation.
 */
public class SAMLAuth implements AuthnRequestProvider {

    private static String printableChars = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz";

    public HashMap<String, String> getAuthnRequest(HttpServletRequest req, String errorMsg, String acsUrl,
            ArrayList<String> ssoUrls) throws NotImplementedException {

        System.out.println("Create SAML AuthnRequest \n Date :: " + new Date());

        HashMap<String, String> map = new HashMap<String, String>();

        // ADFS server URL
        String ssoUrl = "https://adfsserver.mdmce.local/adfs/ls";
        map.put(AuthnRequestProvider.SSO_URL, ssoUrl);
        System.out.println("ssoUrl:: " + ssoUrl);

        String reqURI = req.getRequestURI();
        System.out.println("RequestURI:: " + reqURI);
        System.out.println("ssoUrls:: " + ssoUrls);

        String scheme = req.getScheme();           // http
        String serverName = req.getServerName();   // hostname.com
        int serverPort = req.getServerPort();      // 80
        String contextPath = req.getContextPath(); // /mywebapp

        String relayState = generateRandom();

        // If the URL is the ACS URL, do not set the relayState parameter with the constructed URL
        if (!reqURI.contains("samlsps") && !reqURI.contains("error")) {
            StringBuilder url = new StringBuilder();
            url.append(scheme).append("://").append(serverName);
            if (serverPort != 80 && serverPort != 443) {
                url.append(":").append(serverPort);
            }
            url.append(contextPath);
            System.out.println("URL:: " + url.toString());
            relayState = url.toString();
        }

        map.put(AuthnRequestProvider.RELAY_STATE, relayState);
        System.out.println("RelayState:: " + relayState);

        String requestId = generateRandom();
        map.put(AuthnRequestProvider.REQUEST_ID, requestId);
        System.out.println("RequestId:: " + requestId);

        String acsURL = "https://<ipaddress>:<portnumber>/samlsps/ipm";
        String issuer = acsUrl;
        String destination = ssoUrl;
        System.out.println("acsURL:: " + acsURL);

        // Create the AuthnRequest. Possible authentication context classes:
        //   urn:federation:authentication:windows
        //   urn:oasis:names:tc:SAML:2.0:ac:classes:Password
        //   urn:oasis:names:tc:SAML:2.0:ac:classes:Kerberos
        String authnMessage = "<?xml version=\"1.0\" encoding=\"UTF-8\"?>"
                + "<samlp:AuthnRequest xmlns:samlp=\"urn:oasis:names:tc:SAML:2.0:protocol\" "
                + "ID=\"" + requestId + "\" Version=\"2.0\" "
                + "IssueInstant=\"" + UTC.format(new java.util.Date())
                + "\" ForceAuthn=\"false\" IsPassive=\"false\""
                + " ProtocolBinding=\"urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST\" "
                + "AssertionConsumerServiceURL=\"" + acsURL + "\" "
                + "Destination=\"" + destination + "\"> "
                + "<saml:Issuer xmlns:saml=\"urn:oasis:names:tc:SAML:2.0:assertion\">" + issuer
                + "</saml:Issuer> <samlp:NameIDPolicy"
                + " Format=\"urn:oasis:names:tc:SAML:2.0:nameid-format:transient\""
                + " SPNameQualifier=\"mysp\""
                + " AllowCreate=\"true\" /> <samlp:RequestedAuthnContext Comparison=\"exact\"> "
                + "<saml:AuthnContextClassRef xmlns:saml=\"urn:oasis:names:tc:SAML:2.0:assertion\">"
                + " urn:oasis:names:tc:SAML:2.0:ac:classes:Password </saml:AuthnContextClassRef>"
                + "</samlp:RequestedAuthnContext> </samlp:AuthnRequest>";

        System.out.println("Before encoding authnMessage :" + authnMessage);

        String encodedAuth = Base64.getEncoder().encodeToString(authnMessage.getBytes());
        System.out.println("After encoding authnMessage :" + encodedAuth);

        map.put(AuthnRequestProvider.AUTHN_REQUEST, encodedAuth);
        return map;
    }

    // Generates a random alphabetic string that is unique each time it is
    // invoked and cannot be easily predicted (unlike a counter)
    private String generateRandom() {
        byte[] seed = SecureRandom.getSeed(16);
        SecureRandom random = new SecureRandom(seed);
        int modder = printableChars.length();
        char[] result = new char[8];
        for (int i = 0; i < 8; i++) {
            int j = random.nextInt(modder);
            result[i] = printableChars.charAt(j);
        }
        return new String(result);
    }

    @Override
    public String getIdentityProviderOrErrorURL(HttpServletRequest arg0, String arg1, String arg2,
            ArrayList<String> arg3) throws NotImplementedException {
        return null;
    }
}

2. In the custom class, you can specify either urn:oasis:names:tc:SAML:2.0:ac:classes:Password as the authentication context for password authentication, or urn:federation:authentication:windows for Windows-based authentication.
3. Put a JAR file that contains your custom class in the $WAS_HOME/lib/ext folder.
4. Copy the com.ibm.wsfp.main.jar file to the $WAS_HOME/lib/ext folder. The location of this JAR file is $WAS_HOME/plugins.
Note: For multiple-node clusters, perform steps 3 and 4 for all the nodes in the cluster.
5. Configure the SAML web SSO TAI to use your AuthnRequest message.
a. Log in to WebSphere® Application Server console.
b. Click Security > Global security.
c. Expand Web and SIP security and click Trust association.
d. Click Interceptors.
e. Click com.ibm.ws.security.web.saml.ACSTrustAssociationInterceptor.
f. Under Custom properties, set the following property for the login error page.

Name: sso_<id>.sp.login.error.page
Value: Fully qualified name of AuthnRequestProvider implementation class

Example

com.ibm.saml.SAMLAuthRequest

Name: sso_1.sp.acsUrl
Value: https://<hostname>:<sslport>/samlsps/<any URI pattern string>

Where hostname is the hostname of the system where the WebSphere Application Server is installed and sslport is the web server SSL port number
(WC_defaulthost_secure).
Example

https://host.ipm.in:9443/samlsps/ipm

6. Set the following custom properties according to your configuration.


Name                                Value
sso_1.sp.acsUrl                     https://ipmserver.ipm.in:9443/samlsps/ipm
sso_1.sp.idMap                      idAssertion
sso_1.idp_1.EntityID                http://adfsserver.ipm.local/adfs/services/trust
sso_1.sp.preventReplayAttackScope   Server
replayAttackTimeWindow              600
sso_1.sp.preventReplayAttack        False
sso_1.sp.useRealm                   http://adfsserver.ipm.local/adfs/services/trust
sso_1.sp.preserveRequestState       False
sso_1.sp.filter                     request-url^=/|/mdm_ui
sso_1.sp.login.error.page           com.ibm.saml.SAMLAuthRequest
sso_1.idp_1.SingleSignOnUrl         https://adfsserver.ipm.local/adfs/ls/

Note:
The SingleSignOnUrl and EntityID properties are added automatically when you import the federation metadata file.
In a multi-node cluster deployment with a load balancer, the ACS URL must point to the load balancer URL and port.
7. Restart the WebSphere Application Server.

What to do next
Configuring SSO in the browser.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Configuring SSO in the browser


To configure your browser to authenticate SSO, complete the following task in your browser.

This configuration is required only if the Windows authentication method is configured for SAML in Enabling the SAML Web browser SSO.

Microsoft Internet Explorer


Chromium
Mozilla Firefox

Access the interfaces with the following URLs.

Admin UI
https://<ipmserver.ipm.com>:<port>
Persona-based UI
https://<ipmserver.ipm.com>:<port>/mdm_ui

Note: The context root for the Admin UI is "/" and for the Persona-based UI is "/mdm_ui". These are specified in the SAML properties and only these URL patterns are
intercepted by the SAML web SSO TAI.

Microsoft Internet Explorer


1. Open the Microsoft Internet Explorer browser.
2. Select Tools > Internet Options > Security tab.
a. Select Local intranet and click Sites to display the list of trusted sites.
b. Select the first two options:
i. Include all local (intranet) sites not listed in other zones.
ii. Include all sites that bypass the proxy server.
c. Click Advanced and add the URL of the Identity Provider to the list of trusted sites.
d. Click Custom level, and under User Authentication > Logon, select the Automatic logon with current username and password security setting.
e. On the Advanced tab, under the Security section, ensure that Enable Integrated Windows Authentication is selected.
f. Click OK and restart Microsoft Internet Explorer.
g. Similar steps apply for the Trusted sites zone.

Chromium
If you are using Chromium, it automatically picks up the SSO settings that are configured in Microsoft Internet Explorer.
To import bookmarks from Microsoft Internet Explorer, proceed as follows.

1. Open Chromium browser.


2. At the upper right, click More.
3. Select Bookmarks > Import Bookmarks and Settings.
4. Select the program that contains the bookmarks you would like to import.
5. Click Import and Done.

Mozilla Firefox
1. Open the Mozilla Firefox browser.
2. In the URL field, enter about:config, and press Enter.
3. Ignore the warning, and click I accept the risk!.
4. In the Search field, enter network.negotiate-auth.trusted-uris. This preference lists the trusted sites for Kerberos authentication.
5. Double-click network.negotiate-auth.trusted-uris.
6. In the Enter string value field, enter the Fully Qualified Domain Name (FQDN) of the host running the Product Master, and click OK.

Related concepts
Troubleshooting the SAML SSO issues



IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Timeout behavior in the Persona UI


When SAML SSO is enabled, the session timeout for Product Master Persona-based UI is based on the following properties.

Lightweight Third Party Authentication (LTPA) timeout


Hypertext Transfer Protocol (HTTP) session timeout
Session inactivity timeout

Additionally, this topic provides information about the following.

How do the Session and LTPA timeouts work
Best practices

LTPA timeout
When you log in to the Persona-based UI, you get an LTPA token from IBM® WebSphere® Liberty. This token is used to validate your access to the applications. The LTPA token cannot be extended or renewed, even for an active user session. As a result, your session ends after the LTPA timeout; you are logged out of the application and must provide login credentials again to get a new token. This fixed LTPA time is a security mechanism that prevents an unlimited user session, which is vulnerable to exploitation from unauthorized sources.

Because you can lose unsaved work when the LTPA timeout is reached, the LTPA timeout must be set to the longest time that your IT Security team allows according to your corporate compliance policies. The LTPA timeout is common for all applications. You can specify the LTPA timeout while you are installing the applications, and you can modify it after installation in IBM WebSphere Liberty.

You can change the LTPA timeout as follows.

Containerized deployment
The LTPA timeout setting must be added in the Persona UI pod configuration.
In IBM WebSphere Liberty, increase the LTPA cookie expiration timeout in the server.xml file in the /opt/ibm/wlp/usr/servers/ipm folder. The default timeout is 120 minutes.
Add the following tag to specify a timeout value of 8 hours.

<ltpa expiration="480"/>

Non-containerized deployment

1. Log in to the IBM WebSphere Application Server administrative console.
2. Click Security > Global Security > Authentication. By default, LTPA is selected.
3. Click the LTPA link.
4. Under LTPA timeout setting, change the LTPA timeout value, and click Apply. The default value is 480 minutes.
5. To save the change directly to the master configuration, click the Save link.

HTTP session timeout


The HTTP session timeout settings keep your application session active while you are actively working in the session. When you access the Persona-based UI, an HTTP
session is created. A session is timed out after a specified period of inactivity for better management of memory resources. Note that the active session still ends when the
LTPA timeout limit is reached.

You can change the HTTP session timeout as follows.

In the config.json file in the /opt/MDM/mdmui/dynamic/mdmui folder, increase the value of the timeoutTS property as follows to specify a timeout value of 4 hours. The default timeout is 30 minutes.

"timeoutTS": "14400"

Session inactivity timeout


With the session inactivity timeout countdown, you are alerted about the session timeout in advance, and sudden session termination is avoided. After the session times out, any task that is in progress is lost.

For example, suppose that the HTTP session timeout is set to 30 minutes, the session inactivity timeout is set to 25 minutes, and the inactivity timeout countdown is set to 5 minutes. With these settings, if a user session is inactive for 25 minutes, the application UI starts displaying a countdown of 5 minutes. A link is displayed that you can click to extend the session without logging out. If you do not click the link before the end of the countdown, you are logged out from the application. Note that the session still ends when the LTPA timeout limit is reached.

You can change the session inactivity timeout as follows.

In the config.json file in the /opt/MDM/mdmui/dynamic/mdmui folder, increase the value of the idleTS property as follows to specify a timeout value of 3.5 hours. The default timeout is 25 minutes.

"idleTS": "12600"

Note: The default LTPA timeout in the IBM WebSphere Liberty that hosts the application is 120 minutes. The default HTTP session timeout for the Persona-based UI is 30
minutes. Though the default values for the LTPA and HTTP session timeouts can be extended, consult your IT team to determine the appropriate timeout interval.
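Putting the two settings together, the following is a minimal sketch of the relevant fragment of config.json, using the example values above; the rest of the keys in your config.json stay as they are.

"timeoutTS": "14400",
"idleTS": "12600"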



How do the Session and LTPA timeouts work
You must set the LTPA timeout to a value greater than the Session timeout value. If a session for an application is idle for more than the Session timeout value and you click in the application, the application opens in the same window because the LTPA token is still active. However, when the session of any application is idle for a time greater than the LTPA timeout value and you click in an application, you are logged out of the application and must log in again to access the application.

Regardless of whether a user session was active or inactive, the LTPA session expires after the LTPA timeout (480 minutes by default) and no new session is established. You are logged out and must log in again to access the applications.

Note: If the browser tab in which the Persona-based UI is running is closed but the browser is still open, you can come back to the application without logging in again
until the LTPA timeout is reached.

Best practices
To avoid loss of data or other inconveniences, follow these recommendations.

Before you leave your application session idle, save any unsaved changes and log out of your session.
Be aware of the LTPA limit that is set by your organization. If you work continuously in a session, save your work and log off before your LTPA timeout limit is reached. You can log back in to the system to start a new session.
The Session Inactivity timeout for the Persona-based UI must not exceed the Session timeout.
The Session timeout must be less than the LTPA timeout.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Known issues and limitations (SSO)


Certain product features assume that the system is deployed by using a centralized deployment model where services share the file system and product binaries. With
containerized deployments, services no longer have a common file system and are working in isolation.

The following is the list of known issues and limitations.

After SAML SSO is enabled, the default Admin user, as well as any local user that does not exist on the Identity Provider, cannot log in to the Admin UI and the Persona-based UI applications.
Roles that are created by the SAML SSO login are granted default ACG permissions; the Administrator needs to set the correct permissions for such roles.
Any deletion of users or roles on the Identity Provider is not synced with the Product Master database as part of SAML SSO. This deletion should be handled by the customer as a routine administrative task.
User attributes for users who are created by the SAML SSO login cannot be edited from the Product Master Admin UI because the user edit screen is in read-only mode.
Although the Logout option is available on the Admin UI and Persona-based UI, when the user clicks the Logout option, the user is redirected to the main application URL, which triggers the SAML SSO flow again.
There are different session timeouts that users need to be aware of: each UI has its own configured timeout, and the SAML token has an expiry that is set by the Identity Provider. It is recommended to increase the session timeouts for both the application UIs and the SAML token for a better user experience when SAML SSO is enabled.
REST API access with SAML SSO through an external source (custom code) is not yet supported. This is a known limitation.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Configuring SPNEGO and Kerberos SSO


When a client application wants to authenticate to a remote server, but the server or the client is not sure which authentication protocol the other supports, you can use various SSO implementations.

About this task


IBM® WebSphere® Application Server provides Kerberos authentication and SSO features that enable interoperability and identity propagation with other applications
(such as .NET, Db2®, and others) that support the Kerberos authentication mechanism. With these features, you need to log in only once, and then you can access other
applications that support Kerberos authentication. SPNEGO is a standard protocol that is used to negotiate the authentication protocol that is used when a client
application wants to authenticate to a remote server.

The following links focus on a set of common scenarios that demonstrate how to use the Kerberos authentication mechanism with IBM WebSphere Application Server:

IBM WebSphere Application Server version 8.5.5 -
Setting up Kerberos as the authentication mechanism for WebSphere Application Server
IBM WebSphere Application Server version 9.0 -
Setting up Kerberos as the authentication mechanism for WebSphere Application Server

Configure security role mapping


1. Log in to the WebSphere Application Server administrative console.



2. Go to Applications > Application Types > WebSphere enterprise applications. The Enterprise Applications page opens.
3. In the Enterprise Applications page, click the <war_file_name> link. The Security role to user/group mapping page opens.
4. In the Security role to user/group mapping page, specify the following according to the WAR file, and click OK.

ccd.war, mdm_ui.war, mdm_rest.war
a. Select the AllAuth role.
   i. Click Map Special Subjects.
   ii. Select All Authenticated in Application’s Realms.
b. Select the LoginUser role.
   i. Click Map Special Subjects.
   ii. Select Everyone.

5. Restart the WebSphere Application Server administrative console and the application server on which Product Master is deployed.

Configure Mozilla Firefox


Proceed as follows to configure Mozilla Firefox.

1. Open the Mozilla Firefox browser.


2. In the URL field, enter about:config, and press Enter.
3. Ignore the warning, and click I accept the risk!.
4. In the Search field, enter network.negotiate-auth.trusted-uris. This preference lists the trusted sites for Kerberos authentication.
5. Double-click network.negotiate-auth.trusted-uris.
6. In the Enter string value field, enter the Fully Qualified Domain Name (FQDN) of the host that is running the Product Master application, and click OK.

Configure Microsoft Internet Explorer


Proceed as follows to configure Microsoft Internet Explorer.

1. Open the Microsoft Internet Explorer browser and select Tools > Internet Options > Security tab.
2. Select Trusted sites and click Sites to display the list of trusted sites.
3. Add the URL for your Persona-based application to enable auto login, and click Close.
Note: If required, select Require server verification (https:) for all sites in this zone.
4. Click Custom level and navigate to User Authentication > Logon.
5. Select Automatic logon with current user name and password, and click OK.
Important: Avoid accessing the Persona-based UI by using the Microsoft Internet Explorer 11 browser with SPNEGO enabled. Though the application appears to work, the browser does not send authentication tokens in the request headers and generates an undesired number of authentication tokens.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Defining users
When you define users, consider which users need access to the Product Master Server system and which users can wait or have their needs addressed in other ways.

Limit the number of users of the Product Master Server system. During the initial phases of implementing the Product Master Server system, provide access only to those who benefit the most from its features. In later phases, as the Product Master Server system hosts more data and business processes, you can add more users. The developers can then concentrate on answering the needs of a smaller group of users at a time, and the client can manage change efficiently.

You can place the users into two groups based on their level of activities in the Product Master Server system.

Users with higher interaction with Product Master Server


These are the users that have a reasonable amount of activity in the Product Master Server system. They might participate in the workflows and add, delete, and modify referential data in the Product Master Server system on a regular basis.
Users with lower interaction with Product Master Server
These users mostly query information with little or no editing authorization.

To avoid concurrency problems, try to limit the number of Users with lower interaction with Product Master Server that have direct access to the Product Master Server
system. Here are some suggestions for limiting the number of users:

If the users with lower interaction with Product Master Server log in to the PIM system to search for the same information on a regular basis, it is better to send a report to these users instead of giving them access to the Product Master Server system. For example, if a group of users is interested only in the products that went live in the last week, a report can be sent to this group instead of allowing the group to query this information in the Product Master Server system. This approach reduces the burden on the Product Master Server system. If there are 50 users that might search this information, instead of querying the Product Master Server system 50 times, you can send a report that runs at a relatively idle time.
If the information that the users with lower interaction with Product Master Server are interested in is also available in another module, such as the Merchandising system, redirect the users to the Merchandising system.
If a system requires an extensive number of users to query referential data, integrate with an external search engine, such as Endeca, to enhance the Product Master Server system's search and presentation capabilities. Endeca must be purchased separately and is independent of IBM® Product Master. Because Endeca has its own database, users with lower interaction with Product Master Server query Endeca's database instead of the Product Master Server system. To keep the external database updated, you can create an export from the Product Master Server system that runs regularly.

Creating users
You must consider the following things when you create a new user:

Tasks performed by the new user.



Type of access privilege required.

If you want to integrate the Product Master Server system with LDAP, then you can use LDAP for the user creation and authentication tasks.

Roles
To provide different access privileges to different users, you must define different roles preceded by the company name (company_role name). For example, you can
define roles such as Acme_Admin, Acme_basic, and Acme_content_author preceded by the imaginary company name, Acme:

Acme_Admin
You can design this role to have all the access privileges. Users in the admin role can control all the other roles in an organization. You cannot edit the
<companyname>_admin role.
Acme_Basic
You can design this role to have basic access privileges such as view access. You can edit the <companyname>_basic role.
Acme_Content_author
You can design this role to have basic access privileges such as add, edit, delete, and view access. You can edit this role.
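As an illustration, such roles can also be created programmatically with the createRole Java API call that is shown later in the Creating roles topic. The following is a sketch only; the role descriptions are illustrative.

Context ctx = PIMContextFactory.getCurrentContext();
OrganizationManager manager = ctx.getOrganizationManager();
// Create the three example roles for the imaginary company, Acme.
Role adminRole = manager.createRole("Acme_Admin", "All access privileges");
adminRole.save();
Role basicRole = manager.createRole("Acme_Basic", "Basic view access");
basicRole.save();
Role authorRole = manager.createRole("Acme_Content_author", "Add, edit, delete, and view access");
authorRole.save();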

Managing users
You can group the users in organizations. You can model the organizations as either external (customers or business partners) or internal entities that are separate
business units within the company. You can use organizations to separate all suppliers or business units. You can create an organization spec to track the business unit
information.
You can manage the users that you create but you cannot delete or recycle users:

After a user is created, you cannot delete the user.


Do not recycle user names. Although you can update role, locale, and personal settings assignments easily, the automatic audit trails of the Product Master Server
system will make tracing difficult. For example, a user with a user name Smith quits the company in January. In March, a new employee is hired and the same user
name Smith is given to this new user. It will be difficult to trace the changes that are made in the Product Master Server system by these two employees who were
using the same user name without their employment dates.

If you integrate the Product Master Server system with Lightweight Directory Access Protocol (LDAP), then you can handle all user maintenance with LDAP. Otherwise, you must create and manage the new users in the User Console in the Product Master Server system, or you can use a script to create users.
Note: You cannot delete a user from the application. Deletion is not advised because it could corrupt the internal audit trail. Instead, if you do not want a particular user to be part of the application, you can disable the user. Moreover, deletion of users should not be attempted directly from the database either, because it could potentially corrupt your data.

Mapping roles to users


Mapping roles to users is required so that the users inherit the privileges from the roles.
Some of the roles are defined by the business processes. For instance, for a New Product Introduction (NPI) process you can define roles for every department that is
involved in NPI.

Each user needs to be assigned a minimum of one role in order to have access to objects, but you can assign multiple roles to a single user. However, if two roles are in conflict with each other (for example, if they have different catalog access privileges), resolving that conflict takes time and typically results in poor page rendering performance.
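As a hedged illustration of assigning multiple roles at user-creation time, the following sketch reuses the Java API calls that are shown in the Creating users topic; the role names and user details are hypothetical.

Context ctx = PIMContextFactory.getCurrentContext();
OrganizationManager manager = ctx.getOrganizationManager();
// Assign two roles; the user inherits the privileges from each role.
List<Role> roles = new ArrayList<Role>();
roles.add(manager.getRole("Acme_Basic"));
roles.add(manager.getRole("Acme_Content_author"));
Organization organization = manager.getOrganizationHierarchy("Default Organization Hierarchy").getOrganizationByPath("Default Organization");
User user = manager.createUser("jsmith", "John", "Smith", "jsmith@acme.com", true, "password", roles, organization, true);
user.save();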

Creating users
You can create a user for each person and associate different roles to the user so that this user can have proper levels of access privileges to the Product Master
Server system.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Creating users
You can create a user for each person and associate different roles to the user so that this user can have proper levels of access privileges to the Product Master Server
system.

Before you begin


Ensure that you have approving authority. Also, ensure that the roles to be assigned to the user are created.

About this task


Defining the roles and authorization involves the following tasks:

Identifying roles.
Creating the roles and authentication.
Managing the roles.

Procedure
Use any one of the following methods to create the user.
Be sure to assign at least one role to the user at the time of user creation.



Option: User interface
a. Navigate to the Organization Hierarchy on the left pane and select the organization that you want to add the user to. Right-click and select Add User from the menu.
b. Select the roles that you want to associate with the user.
c. Provide details about the user such as the user name, first name, last name, email ID, and password.

Option: Java™ API
The following sample Java API code creates users.

Context ctx = PIMContextFactory.getCurrentContext();
OrganizationManager manager = ctx.getOrganizationManager();
Role role = manager.getRole("Admin");
List<Role> roles = new ArrayList<Role>();
roles.add(role);
Organization organization = manager.getOrganizationHierarchy("Default Organization Hierarchy").getOrganizationByPath("Default Organization");
User user = manager.createUser("Admin", "AAA", "BBB", "xx@abc.com", true, "password", roles, organization, true);
user.save();

Option: Script API
The following sample script API creates users.

var user = createUser("Admin", "AAA", "BBB", "xx@abc.com", true, "password", roles, organization, true);
A user is created. You can view the user in the User Console.

Revising passwords
You can revise the password of a user anytime.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Revising passwords
You can revise the password of a user anytime.

Before you begin


Ensure that you have the user created.

Procedure
1. Navigate to Default Organization Hierarchy > Default Organization in the Organization module of the left pane.
2. Select the user name. The detail of the user is displayed.
3. Modify the password.
4. Save the settings.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Defining roles
A role is a set of permissions that are shared among one or more users.

Identifying roles
You can identify the roles such as admin, basic, and temp in the Product Master Server system. You must identify the users tasks and requirements and then define
the roles.
Best practices

If you are using LDAP, ensure that the role name in the Product Master Server system matches the group name in the LDAP server.
Limit the number of roles of the Product Master Server system. Define only those roles that are required.

Roles creation and authentication


If you want to integrate the Product Master Server system with LDAP, then you can move the role creation and authentication tasks completely to LDAP.
Roles management
You can map the roles to the appropriate access control groups (ACGs).

Determining the roles that you need


To define roles for the Product Master Server system, first determine the roles that are needed, map those roles to users, and define the securities for the roles. A single user might have the expertise to perform the tasks in only one or two departments. For each department, you can create a separate category and define roles for managing items in these categories. For example, the Apparel and Electronics categories differ in the expertise that is needed to create and maintain the items that belong to them. You must define two groups of roles for creating and maintaining items that belong to these two categories.
The following roles might be involved in the NPI process. The roles on the left side are identical to the roles on the right side, except for the categories that they have authorization for.



Creator - Group A Creator - Group B
Content Specialist Base – Group A Content Specialist Base – Group B
Content Specialist Specialty – Group A Content Specialist Specialty – Group B
Content Specialist Imaging – Group A Content Specialist Imaging – Group B
Content Approver Base – Group A Content Approver Base – Group B
Content Approver Specialty – Group A Content Approver Specialty – Group B
Content Approver Imaging – Group A Content Approver Imaging – Group B
If items are partitioned, you will have fewer groups to define roles for.

In addition to the roles that are in charge of only a subset of items, some roles in the Product Master Server system deal with all the items regardless of their
categorizations. HazMat admin, tax and license admin, and SKU blocking admin are some examples of these roles.

Defining privileges for roles


Privileges are not set on the user, but on the roles that the user is assigned to. If you assign a user to multiple roles, the user inherits the privileges from each role.
Based on the requirements of different users, you can define access privileges at different levels, such as:

System-wide access privilege


This privilege is applicable for the complete Product Master Server system. For example, you can grant the screen view privilege, which is a system-wide access privilege, to a role so that users who are assigned to that role can view screens in the Product Master Server system. You can restrict access to various features. You cannot change the ACG for the system-wide access. You can restrict the user to certain functions that are not object based, such as the ability to modify specs or spec maps, the ability to work with scripts, work with scheduled jobs, security, or even access to certain menu options.
Page or screen access privilege
This privilege is applicable for a page or screen. For example, you can define access for a page when you need to provide access to only certain users.
Catalog access privilege
This privilege is applicable for single and multiple catalogs.
Hierarchy access privilege
This privilege is applicable for single and multiple hierarchies.
Locale access privilege
You can restrict access to one or more available locales. For example, members of a north American managers role might have access to English-US, English-
Canada, French-Canada and Spanish-Mexico, but not English-UK, French-France or Spanish-Spain locales.
Custom tools access privilege
You can restrict the access to certain custom tools for certain roles. For example, you can provide access to the admin role for some custom tools.

Access Control Groups (ACGs) and relevance to roles


Access Control Group (ACG) is a group of Access Control supported data model objects in Product Master Server system. You can restrict access to the role for each
associated ACG. You can restrict the user to view, list, create, edit, or delete certain objects in the Product Master Server system. When you are setting group specific
access for a role, you must choose at least one ACG.

Best practices
It is a good practice to give the roles the minimum access to the screens that users need to perform their duties. Access to a screen that users are not knowledgeable about might cause security breaches in addition to confusing the users. For example, if you provide access to the Catalog Console page to a user other than the Product Master Server Admin, the user can delete a catalog by mistake. When you configure screen permissions, be conservative and allow the minimum access possible.

Creating roles
You can create roles for each category of users of a Product Master Server system so that the right set of security permissions can be granted to these users to perform their work. For example, you can create a role that is named catalog manager for users that perform catalog management work, and associate with this role the correct set of permissions that the users need to accomplish their tasks.
Configuring GDS users and roles
There are two security requirements for Global Data Synchronization (GDS); functional security and data security.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Creating roles
You can create roles for each category of users of a Product Master Server system so that the right set of security permissions can be granted to these users to perform their work. For example, you can create a role that is named catalog manager for users that perform catalog management work, and associate with this role the correct set of permissions that the users need to accomplish their tasks.

Before you begin


1. Ensure that you have approving authority.
2. Ensure that you clear the cache when you modify the role of a user. To clear the cache, click Flush Cache.

About this task


A Product Master Server solution has different categories of users that use the system in different ways to accomplish their tasks.



Procedure
Use any one of the following methods to create the role.

Option: User interface
a. Click Data Model Manager > Security > Role Console to navigate to the Role Console.
b. Click the new icon.
c. Provide details about the role.
d. Click Save.
Note: Assign access privileges for the role when you are creating the role.

Option: Java™ API
The following sample Java API code creates roles.

Context ctx = PIMContextFactory.getCurrentContext();
OrganizationManager manager = ctx.getOrganizationManager();
Role role = manager.createRole("Admin", "Administrator");
role.save();

Option: Script API
The following sample script creates roles.

var role = createRole("Admin", "Administrator");
A role is created. You can view the role in the role console.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Configuring GDS users and roles


There are two security requirements for Global Data Synchronization (GDS): functional security and data security.

Functional security
Access to the functions and modules in Supply Side Global Data Synchronization or Demand Side Global Data Synchronization is managed through access control
groups (ACGs) and roles. You can define and configure GDS-specific roles and ACGs for specific users. You can also combine the Global Data Synchronization roles and
ACGs with Product Master users, giving them additional Global Data Synchronization-specific privileges.
Data security
Provides security for trade items, trade item links, trading partners, target markets, classification categories, and transactions. Data security is implemented by using
the Catalog Access Privileges and Selections features in Product Master.

Creating a GDS role


A role is a collection of tasks, services, and information for a user or a group of users. A role that you define determines the content that is visible and the navigation
structure within the solution. After defining a role, you can assign multiple users to the role, and a user can have multiple roles. Privileges can be assigned only to
a role; the users assigned to that role then inherit those privileges.
Creating a GDS user
After creating the roles, you can now create the users. The privileges that are assigned to a particular role are automatically inherited by the users of that role. You
can assign a user to multiple roles so that the user inherits the privileges from each role.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Creating a GDS role


A role is a collection of tasks, services, and information for a user or a group of users. A role that you define determines the content that is visible and the navigation
structure within the solution. After defining a role, you can assign multiple users to the role, and a user can have multiple roles. Privileges can be assigned only to a role;
the users assigned to that role then inherit those privileges.

Procedure
1. Click Data Model Manager > Security > Role Console.
2. Click New, and provide a name and description for the role.
3. Select default for ACG from the drop-down menu, and provide GDS-specific access privileges for this role. If you want to create a single user for both Product Master
and GDS roles, then also select the Product Master privileges.
4. Click Save to save the privileges for this role. A sample role that is provided by the GDS data model is GDS Administrators.
Note: The roles and ACGs that are set up in Product Master are now applied to Global Data Synchronization (GDS).

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Creating a GDS user



After creating the roles, you can now create the users. The privileges that are assigned to a particular role are automatically inherited by the users of that role. You can
assign a user to multiple roles so that the user inherits the privileges from each role.

Before you begin


You must have the roles defined that you want to associate with the user.

Procedure
1. In the left navigation pane, select Default Organization Hierarchy from the drop-down menu.
2. Click + to expand the Default Organization Hierarchy section.
3. Right-click and select Add User.
4. Provide your user values in the User Profile section.
5. Specify a password for the user.
6. Select a role for this user.
7. Click Save and refresh the left navigation pane.
8. Click View users. The User Console displays the newly created user.
9. Enable the new user.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Creating custom tools with the UI framework


You can extend the capabilities for the user interface in IBM® Product Master by using the UI framework and the Java™ API for Product Master.

Custom tools provide a custom user interface in the Product Master Server system. You can use custom tools to extend the user interface of the Product Master Server
system. For example, you can use a custom tool to present an item and a category together on one screen when the client wants to view them together.

The Product Master Server system comes with standard user interfaces to render the catalog items. The ready-to-use user interfaces address most of the common
scenarios but do not address client-specific scenarios. The custom tools help bridge the gap between the native Product Master Server system's functionality and
the client requirements. Most Product Master Server system implementations use custom tools to their advantage to enhance the client experience in the Product
Master Server system.

You typically create a custom user interface to address a business need that native Product Master screens are unable to address.
Tip: Before you create a custom interface, be sure that you define the use case. Identify the process that cannot be addressed by the native user interface. You need to
ensure that the custom user interface is built to satisfy specific needs and requirements rather than a particular solution or design.

Keep in mind that custom tools are 100 percent scripted, and it is your responsibility to integrate a custom tool with the native Product Master Server system
functionality programmatically.
Custom tools take significant time to develop, integrate, and test. Use custom tools only when you have ample time.
Custom tools must be implemented strategically at various points of the implementation to help address client requirements. Custom tools are not meant to
replace the interfaces of the Product Master Server system.

Architecture of the UI framework


The UI framework adopts the Model-View-Controller (MVC) architectural pattern and Web 2.0 UI styling for presenting the UI.
The architecture of the UI framework consists of the following:

Model
The model represents enterprise data and the business rules that govern access to and updates of this data. The model is a Java object, which directly makes the
calls to the IBM Product Master Java API layer.
View
The view renders the contents of a model. It accesses enterprise data through the model and specifies how that data should be presented. JavaServer Pages (JSP)
along with the Dojo toolkit are used for the presentation layer.
Controller
The controller converts interactions with the view into actions to be performed by the model. In a Web application, these interactions are HTTP GET and POST requests. The
actions that are performed by the model include activating business processes or changing the state of the model. Based on the user interactions and the outcome
of the model actions, the controller responds by selecting an appropriate view. The controller is a Java servlet.
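
The following minimal sketch illustrates this pattern in generic servlet terms. It is not the product's controller source; the servlet class, the hypothetical command class, and the hard-coded flow are assumptions for the example.

import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class SketchControllerServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        // Derive the flow path from the request, for example "homepage"
        // from a URL such as http://ServerURL:portNo/homepage.wpc.
        String path = request.getServletPath().replaceAll("^/|\\.wpc$", "");

        // In the real framework, the command class and method come from
        // flow-config.xml; this sketch hard-codes one flow for illustration.
        if ("homepage".equals(path)) {
            String outcome = new HypotheticalHomePageCommand().execute();
            // The controller selects the view (JSP) based on the string
            // that the command method returns.
            String view = "classic".equals(outcome)
                    ? "/ccd_workflow/collaboration_console.jsp"
                    : "/utils/homepage.jsp";
            request.getRequestDispatcher(view).forward(request, response);
        } else {
            response.sendError(HttpServletResponse.SC_NOT_FOUND);
        }
    }
}

// Hypothetical model-layer command; the real commands call the Product Master Java API.
class HypotheticalHomePageCommand {
    public String execute() {
        return "classic";
    }
}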

Flow configurations of the UI framework


You can use the flow-config.xml file to control the navigation of the UI framework. You can also use the following sample code for the error, login, and navigation control
and dispatch within the UI framework.
Configuring third-party modules for the left pane
You can have a single-instance module or a multiple-instance module.
Samples for implementing a custom interface with the UI framework
You can use the following samples for implementing a custom interface with the UI framework.
Scenarios for creating custom tools
You can follow the two scenarios to learn how to create custom tools in IBM Product Master.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)



Flow configurations of the UI framework
You can use the flow-config.xml file to control the navigation of the UI framework. You can also use the following sample code for the error, login, and navigation control and
dispatch within the UI framework.

The IBM® Product Master installation uses the flow-config.default.xml file to generate the flow-config.xml file.

New users
When you run the configureEnv.sh script, the flow-config.default.xml file generates the flow-config.xml file.
Existing users
If the flow-config.xml file does not exist, when you run the configureEnv.sh script, the flow-config.default.xml file generates the flow-config.xml file.
If the flow-config.xml file exists and there are some new updates in the flow-config.default.xml file, then when you run the configureEnv.sh script, you get a
prompt to confirm that you want to overwrite the existing flow-config.xml file.
If you choose to overwrite, you get the All user custom changes will be lost. Do you want to continue? warning prompt. If you choose
to continue, then a backup of the existing file is created and a new flow-config.xml file is created from the flow-config.default.xml file.
If you choose not to overwrite or continue, then you get the You will need to add updated changes manually warning message.
If there is any semantic error in the existing flow-config.xml file, then a corresponding error message is displayed and the process stops.

flow-config.xml file for the UI framework


The flow-config.xml file is the core configuration file for the framework. The flow-config.xml file needs to be present in the class path of your Web application.
Error and login for the UI framework
You can use the global-error and global-login tags in the UI framework.
Navigation control and dispatch for the UI framework
You can use the synchronous and asynchronous tags within the flow-config.xml to control navigation within the UI framework.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

flow-config.xml file for the UI framework


The flow-config.xml file is the core configuration file for the framework. The flow-config.xml file needs to be present in the class path of your Web application.

You can control the navigation by using the flow-config.xml file, which is available under the directory PRODUCT_INSTALLED_DIR/etc/default. The contents of the XML file are
governed by the flow-config.xsd file, which is located in the same directory as the flow-config.xml file. Here is the sample code from the flow-config.xml file:

<flow-config xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:noNamespaceSchemaLocation="flow-config.xsd">
<global-error page="/utils/error.jsp"/>
<global-login page="/utils/enterLogin.jsp"/>
<flow-mapping>
<flows>
<flow path="homepage" command="com.ibm.ccd.ui.homepage.HomePageCommand"
method="getHomePage">
<flow-dispatch name="classic" location="/ccd_workflow/collaboration_console.jsp"
dispatchType="forward" />
<flow-dispatch name="newhomepage" location="/utils/homepage.jsp"
dispatchType="forward"/>
<flow-dispatch name="customhomepage" location="/utils/custom_page.jsp"
dispatchType="forward"/>
</flow>

<flow path="classicornew" command="com.ibm.ccd.ui.util.SingleEditSwitchCommand"


method="">
<flow-dispatch name="classicitemedit" location="/ccd_content/item_data_entry.jsp"
dispatchType="forward" />
<flow-dispatch name="newitemedit" location="/ccd_content/new_single_edit_ui.jsp"
dispatchType="forward"/>
</flow>
</flows>
<async-flows>
<async-flow path="WorkflowQueryStore"
command="com.ibm.ccd.ui.worklist.WorklistPageCommand"
method="getCollaborationAreaEntries"/>
</async-flows>
</flow-mapping>
</flow-config>

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Error and login for the UI framework


You can use the global-error and global-login tags in the UI framework.

global-error



Denotes the error page that needs to be routed to in case of exceptions in the navigation.
global-login
Acts as a place holder for the login page.

The following sample code is an example that can be used whenever there is a session timeout. If the request is routed to the new IBM® Product Master UI framework,
then the framework takes care of the session management and routing to the login page.

<global-error page="/utils/error.jsp"/>
<global-login page="/utils/enterLogin.jsp"/>

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Navigation control and dispatch for the UI framework


You can use the synchronous and asynchronous tags within the flow-config.xml to control navigation within the UI framework.

Synchronous and page navigation


The page navigational aspect is controlled through the flow tag within the flow-config.xml file.
Asynchronous and Ajax calls for the UI framework
The async-flows tag contains all the calls for the asynchronous flow.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Synchronous and page navigation


The page navigational aspect is controlled through the flow tag within the flow-config.xml file.

path attribute
The path attribute within the flow tag represents the incoming request to the controller servlet. In the sample below, "homepage" is the incoming request; the
request URL generally looks like the following example, where ServerURL and portNo identify the location where the IBM® Product Master instance is
running.

http://ServerURL:portNo/homepage.wpc

command attribute
The command attribute of the flow tag is a place holder for the Java™ class that is invoked based on the incoming request; further details are under the
types of command section.
method attribute
The method attribute of the flow tag is a place holder for the method that needs to be executed on the command class. If the method attribute is left empty
(method=""), the framework assumes that the command has a default method named execute().
The flow-dispatch tag represents the resultant JSP that is dispatched based on the string returned from the named method (declared in the "method"
attribute) of the class (declared in the "command" attribute).

name attribute
The name attribute matches the string value that is returned by the command's method.
location attribute
The location attribute is the place holder for the URL, which can be a JSP (as shown below) or a URL of the framework, such as something.wpc.
dispatchType attribute
The dispatchType attribute of the flow tag represents the type of the dispatch; this can be "include", "forward", or "redirect".

include
Includes the content of a resource (servlet, JSP page, HTML file) in the response. This method enables programmatic server-side includes.
forward
Forwards a request from a servlet to another resource (servlet, JSP file, or HTML file) on the server.
redirect
Redirects the request to another resource; the client issues a new request to the target URL.

Note: All of the above dispatchType attribute values are equivalent to the servlet request-dispatching techniques. For more information, refer to the
RequestDispatcher class in the servlet specification.
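
As a generic servlet-API illustration of these three dispatch techniques (this helper class is hypothetical, not product code):

import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class DispatchTypeSketch {
    void dispatch(String dispatchType, String location,
                  HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        if ("include".equals(dispatchType)) {
            // Adds the target resource's output to the current response.
            request.getRequestDispatcher(location).include(request, response);
        } else if ("forward".equals(dispatchType)) {
            // Hands the request over to the target resource on the server.
            request.getRequestDispatcher(location).forward(request, response);
        } else if ("redirect".equals(dispatchType)) {
            // Tells the browser to issue a new request to the target URL.
            response.sendRedirect(location);
        }
    }
}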

There can be a number of flow tags within a flows tag and there can be a number of flow-dispatch tags within a flow tag. Here is sample code for the flows:

<flows>
<flow path="homepage" command="com.ibm.ccd.ui.homepage.HomePageCommand"
method="getHomePage">
<flow-dispatch name="classic" location="/ccd_workflow/collaboration_console.jsp"
dispatchType="forward" />
<flow-dispatch name="newhomepage" location="/utils/homepage.jsp"
dispatchType="forward"/>
<flow-dispatch name="customhomepage" location="/utils/custom_page.jsp"
dispatchType="forward"/>
</flow>
</flows>



IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Asynchronous and Ajax calls for the UI framework


The async-flows tag contains all the calls for the asynchronous flow.

Asynchronous calls are XML HTTP calls where the request is posted asynchronously to the server. That means that you and the browser can continue working on other
things while waiting for a response to come back.

path attribute
The path attribute of the async-flow tag contains the incoming URL.
command attribute
The command attribute contains the Java™ command that needs to be invoked, and the "method" attribute indicates the method within the class to be invoked.

Here is the sample code for the async-flow tag:

<async-flows>
<async-flow path="WorkflowQueryStore"
command="com.ibm.ccd.ui.worklist.WorklistPageCommand"
method="getCollaborationAreaEntries"/>
</async-flows>

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Configuring third-party modules for the left pane


You can have a single-instance module or a multiple-instance module.

The user_leftPane.conf.xml follows this syntax:

<leftPane>
List of left Pane modules with one module per line.
</leftPane>

where the List of left pane modules with one module per line consists of one or more module elements, each of which is a specialized XML element. Each
module must have a name attribute to indicate the module name. You need to specify the name of the main UI JavaScript file, which resides in the
$TOP/public_html/user/js/module directory. This name must be the same as the module name with the addition of the .js file extension.

Single-instance modules
For single-instance modules, only the name attribute is required in the module element.

Multiple-instance modules
For multiple-instance modules, the InstanceExplorerClass attribute is required. You use this attribute to provide the fully qualified class name of the custom implementation of
LeftPaneDataObjectFactory. For example,

<leftPane>
<module name="MyModule" /> <!-- single instance module -->
<module name="EmptyModule" InstanceExplorerClass="com.lnp.EmptyModuleDataObjectFactory" />
</leftPane>

Configuring a server-side extension module: sample


You can use this sample code to develop server-side third-party modules for the left pane. You must create the extension framework on the server side and for the UI.

1. Copy the following sample code and paste it into a Java™ file. For example, EmptyModuleDataObjectFactory.java.
2. Compile the Java file. Include the corresponding class file in the class path of the appsvr service, and then restart the service.

The following sample code illustrates how to use the enhanced Java API for the left pane extension framework. You must specify your specific string names where
empty2 and empty1 are mentioned. This custom code can also use other Java APIs to interact with server objects such as a catalog and hierarchy. The
EmptyModuleDataObjectFactory.java looks similar to the following code:

package com.lnp;

import com.ibm.pim.ui.leftpane.LeftPaneDataObjectFactory;
import com.ibm.pim.ui.leftpane.LeftPaneDataObject;

public class EmptyModuleDataObjectFactory implements LeftPaneDataObjectFactory {

public EmptyModuleDataObjectFactory(){}

public LeftPaneDataObject [] getDataObjects() {


String[] names = {"empty2", "empty1"};



LeftPaneDataObject[] result = new MyCustomLeftPaneDataObject[names.length];
for (int i = 0, count = names.length; i < count; i++) {
result[i] = new MyCustomLeftPaneDataObject(names[i]);
}
return result;
}
}

class MyCustomLeftPaneDataObject implements LeftPaneDataObject {

private String m_name;

public MyCustomLeftPaneDataObject() {}

public MyCustomLeftPaneDataObject(String name) {


this.m_name = name;
}

public String getName() {


return this.m_name;
}
}

Sample for configuring a client-side UI module


You can use this sample code to develop client-side UI third-party modules for the left pane.

1. Copy the sample code below and paste it into a JavaScript file. For example, EmptyModule.js.
2. Place the EmptyModule.js file into the $TOP/public_html/user/js/module folder.
Note: Ensure that the name of the JavaScript file matches the module name as defined in the $TOP/etc/default/user_leftPane.conf.xml file.

The JavaScript methods that are illustrated in this sample code are extension points. You need to specify your specific string names where EmptyModule and Module are
mentioned. The EmptyModule.js looks similar to the following:

/**
* @class
* @extends Module
*/
function EmptyModule()
{
EmptyModule.uber.constructor.call(this);
EmptyModule.instance = this;
}

// extends Module
Runtime.subclass(EmptyModule, Module);

EmptyModule.prototype.getTitle = function() {
return LeftPaneModuleLabels[UserDefinedLeftPaneModuleTypeEnum.EMPTYMODULE];
};

EmptyModule.prototype.renderContent = function () {

var userServer = "<html><body></body></html>";
return userServer;
};

EmptyModule.prototype.refresh = function() {
};

EmptyModule.prototype.showCtxMenu = function(event, elementId, moduleId) {};

Incorporating external tools for globalization


You can incorporate external tools for globalization of labels.

1. Open the $TOP/public_html/user/user_jsp_include.jsp file.


2. Provide your globalization labels for the module types.

The user_jsp_include.jsp file looks similar to the following:

dojo.require("dojo.colors");
dojo.requireLocalization("dojo", "colors");

var colors=dojo.i18n.getLocalization("dojo", "colors");


LeftPaneModuleLabels[UserDefinedLeftPaneModuleTypeEnum.EMPTYMODULE]= colors.aqua;

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Samples for implementing a custom interface with the UI framework


You can use the following samples for implementing a custom interface with the UI framework.



Sample for implementing a parameter-enabled interface
You might implement a parameter-enabled interface when you only want to work with the request parameter. For example, you might want to get the login details
and route the user to a landing page.
Sample for implementing a request-response-enabled interface
You might implement a request-response-enabled interface when you need both an HTTP request and a response.
Sample for implementing a simple command with no interface
You might implement a simple command with no interface if you simply want to route the user from one page to another.
Sample for implementing an asynchronous interface
You might want to implement an asynchronous interface when an asynchronous call must be made and the return content is either JSON, TEXT_PLAIN, or
TEXT_XML.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Sample for implementing a parameter-enabled interface


You might implement a parameter-enabled interface when you only want to work with the request parameter. For example, you might want to get the login details and
route the user to a landing page.

Implementing a parameter-enabled interface


Here is the sample code for when you get the login details and route the user to a landing page.

public interface ParameterEnabled {

public void setParamertMap(java.util.Map parameterMap);
}

Here is a sample implementation of the parameter-enabled interface:

public class TestParameterEnabledCommand implements ParameterEnabled {

public java.util.Map parameterMap;

public void setParamertMap(java.util.Map parameterMap){
this.parameterMap=parameterMap;
}

public String execute() throws java.lang.Exception {
//use the request parameters, then pass control back to some other JSP
//here, call your WPC Java API to work with WPC entities
return "somepage";
}
}

Here is the sample configuration in the flow-config.xml file:

<flow path="login" command="com.ibm.pim.extenstion.test.TestParameterEnabledCommand"


method="">
<flow-dispatch name="somepage" location="/homepage.jsp" dispatchType="forward" />
<flow-dispatch name="errorlogin" location="/utils/enterLogin.jsp"
dispatchType="forward"/>
</flow>

The request to the server would be a URL similar to the following: http://ServerURL:portNo/login.wpc. In the configuration file, each request is identified by the
path attribute of the flow tag, which in this example matches login; login correlates the request with the configuration and with the Java™ file.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Sample for implementing a request-response-enabled interface


You might implement a request-response-enabled interface when you need both an HTTP request and a response.

Implementing a request-response-enabled interface


Here is the sample code for when you need both the HTTP request and response.

public interface RequestResponseEnabled {

public void setRequest(HttpServletRequest request);

public void setResponse(HttpServletResponse response);
}

Here is the code to implement the sample class:

public class HomePageCommand implements RequestResponseEnabled {

HttpServletRequest request;
HttpServletResponse response;

public void setRequest(HttpServletRequest request) {
this.request=request;
}

public void setResponse(HttpServletResponse response) {
this.response=response;
}

public String getHomePage(){

//get the austin context and switch the screen
AustinContext ctx = (AustinContext)request.getSession().getAttribute("ctx");
AustinContext.setCurrentContext(ctx);

String start_page_url =
StringUtils.checkString(ctx.getSecurityContext().getUserSettingValue(UserSettingEnum.TRIGO_APPLICATIONS_AS_START_PAGE_ENUM.db,
Const.USER_SETTING_DEFAULT_INSTANCE+ ""), "");

if(Const.HOME_PAGE_CLASSIC.equalsIgnoreCase(start_page_url) || "".equalsIgnoreCase(start_page_url))
return "classic";
else if(Const.HOME_PAGE_NEW.equalsIgnoreCase(start_page_url))
return "newhomepage";
///utils/custom_page.jsp?script_id=802
String script_id = start_page_url.substring(start_page_url.lastIndexOf("=")+1);
//setting the script ID so that it can be routed to the correct custom script
request.setAttribute("script_id",script_id);
return "customhomepage";
}
}

Here is the sample configuration in the flow-config.xml file:

<flow path="homepage" command="com.ibm.pim.uitest.HomePageCommand" method="getHomePage">


<flow-dispatch name="classic" location="/ccd_workflow/collaboration_console.jsp"
dispatchType="forward" />
<flow-dispatch name="newhomepage" location="/utils/homepage.jsp"
dispatchType="forward"/>
<flow-dispatch name="customhomepage" location="/utils/custom_page.jsp"
dispatchType="forward"/>
</flow>

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Sample for implementing a simple command with no interface


You might implement a simple command with no interface if you simply want to route the user from one page to another.

Implementing a simple command with no interface


Here is the sample code for when you have a command that does not implement any interface. For example, when you route from one page to another page.

public class DefaultNavigator {

public String execute(){

System.out.println("DefaultNavigator.execute() Dependency injection is tested ");

return "success";
}
}
Here is the sample configuration in the flow-config.xml file:

<flow path="setViewItem" command="com.ibm.ccd.ui.util.DefaultNavigator" method="">


<flow-dispatch name="newViewOfItem" location="/ccd_content/new_single_edit_ui.jsp"
dispatchType="forward"/>
</flow>

Note: The DefaultNavigator class is prebuilt, and you can use it directly for routing.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Sample for implementing an asynchronous interface


You might want to implement an asynchronous interface when an asynchronous call must be made and the return content is either JSON, TEXT_PLAIN, or TEXT_XML.

Implementing an asynchronous interface


Here is the sample code for when an asynchronous call has to be made and the return content is either TEXT_XML, JSON, or TEXT_PLAIN.

public interface AsyncEnabled {

public static String TEXT_PLAIN="text/plain";

public static String TEXT_XML="text/xml";

public static String JSON="json";

/**
* The implementation class should set the response string so that
* the WPC UI infrastructure can process the AJAX calls.
* @return java.lang.String
*/
public String getResponseContent();

/**
* Possible selection of the content type;
* set either TEXT_PLAIN or TEXT_XML.
* @return
*/
public String getContentType();
}

Here is the sample code for the implementation class:

public class WorklistPageCommand implements AsyncEnabled
{

HttpServletRequest request; // set via RequestResponseEnabled when the command also implements that interface

String responseContent ="";

public String getContentType() {
return JSON;
}

public String getResponseContent() {
return responseContent;
}

public String getCollaborationAreaEntries() throws Exception {

AustinContext.setCurrentContext(
(AustinContext)request.getSession().getAttribute("ctx"));
Context ctx = PIMContextFactory.getCurrentContext();
CollaborationAreaManager manager = ctx.getCollaborationAreaManager();

JSONObject topJsonObj = new JSONObject();
topJsonObj.put("identifier","abbreviation");
JSONArray itemsArr = new JSONArray();
//Getting the collaboration areas from the PIM context
PIMCollection allCAs = manager.getAllCollaborationAreas();
CollaborationArea colArea = null;
if(allCAs!=null && allCAs.size() > 0){
for (Iterator i = allCAs.iterator(); i.hasNext(); ){
JSONObject itemJsonObj = new JSONObject();
colArea = (CollaborationArea) i.next();
String caName = colArea.getName();
itemJsonObj.put("abbreviation", caName);
itemJsonObj.put("label", caName);
itemJsonObj.put("name", caName);
itemsArr.add(itemJsonObj);
}
}
topJsonObj.put("items",itemsArr);
responseContent = topJsonObj.serialize(true);
return null;
}
}

Here is the sample configuration in the flow-config.xml file:

<async-flows>
<async-flow path="WorkflowQueryStore"
command="com.ibm.ccd.ui.worklist.WorklistPageCommand"
method="getCollaborationAreaEntries"/>
</async-flows>

Note: For all of the asynchronous calls, the command class must implement the AsyncEnabled interface, and the corresponding configuration should be <async-flows>
<async-flow ….
/></async-flows>.
After the operation is performed, the response content should be set on responseContent, which is an instance variable; the same variable is returned when the
framework calls the getResponseContent() method. The getContentType() method should return JSON, TEXT_PLAIN, or TEXT_XML. If the content type is not set,
TEXT_PLAIN is used by default.
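
As a minimal sketch of this contract (assuming the AsyncEnabled interface shown earlier in this topic; the class name and the ping() flow method are illustrative only):

public class MinimalAsyncCommand implements AsyncEnabled {

    private String responseContent = "";

    // Called by the framework after the flow method has run.
    public String getResponseContent() {
        return responseContent;
    }

    // TEXT_PLAIN is also the default that the framework assumes when no type is set.
    public String getContentType() {
        return TEXT_PLAIN;
    }

    // The method that would be named in the async-flow configuration;
    // asynchronous flow methods set responseContent and return null.
    public String ping() {
        responseContent = "pong";
        return null;
    }
}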

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Scenarios for creating custom tools


You can follow the two scenarios to learn how to create custom tools in IBM® Product Master.

Scenario 1: Creating a simple custom tool


This scenario explains how to create a simple custom tool called UIFrameworkTest. You can use the same procedure to deploy custom JSP files in the IBM Product
Master environment.
Scenario 2: Modifying a custom tool to display a collaboration area
This scenario adds a collaboration area to the base custom tool in scenario 1.

Related concepts
Samples for implementing a custom interface with the UI framework

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Scenario 1: Creating a simple custom tool


This scenario explains how to create a simple custom tool called UIFrameworkTest. You can use the same procedure to deploy custom JSP files in the IBM® Product
Master environment.

Before you begin


Before deploying your custom code (JSPs and Java command objects), you must make provision to register and deregister servlet threads.

Add isCustomTool="true" for custom flows, both synchronous and asynchronous, in the flow-config.xml file:

<flow path="testCommand"
command="test.TestCommand"
method="getAction"
isCustomTool="true"
>

This file is located in the $TOP/etc/default directory.


Add header and footer includes that wrap the custom JSP code in a try/catch/finally block:

Header
Place this include after any declarations and anything else that defines objects at the JSP class level, such as declarations that define methods and
member variables. The include file mentioned below opens a try block, which is closed by the footer include. This try block MUST include all the code
in the main method of the JSP. Avoid using static declarations that access the database.

<%@ include file="/js/utils/customToolHeader.jsp.include" %>

Footer
Place this include at the end of the file. The include file mentioned below closes the try block that is opened in the header include.

<%@ include file="/js/utils/customToolFooter.jsp.include" %>
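
Putting the two includes together, a minimal custom-tool JSP skeleton might look like the following sketch; the greet() method and the printed text are invented for this example:

<%!
    // Declarations that define methods and member variables belong here,
    // before the header include.
    private String greet() { return "Hello from a custom tool"; }
%>
<%@ include file="/js/utils/customToolHeader.jsp.include" %>
<%-- The main JSP code goes here, inside the try block that the header opened. --%>
<% out.println(greet()); %>
<%@ include file="/js/utils/customToolFooter.jsp.include" %>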

Procedure
1. Create the custom tool in the user interface.
a. Open the Scripts Console.
b. Select Custom Tools.
c. Click the new button.
d. Specify any input parameters and other settings. For this scenario, specify None for the input parameters. Provide a custom tool name, for example,
UIFrameworkTest, and select the ASP/JSP type.
e. In the scripts editor, insert the following script that generates the HTML that you need for the custom tool. For example:

Welcome to the new UI framework


<HR>
<a href="/newhomepage.wpc"> New Custom Page
</a>

f. Click Save.
2. Edit your flow-config.xml file.
Add the following piece of code under the flow tag:

<flow path="newhomepage" command="com.ibm.ccd.ui.util.DefaultNavigator" method="">


<flow-dispatch name="success" location="/user/welcome1.jsp" dispatchType="forward" />
</flow>

Although you can directly call the welcome1.jsp, this code manages the product session. If there is a timeout, then this code forwards the user to the login page.

3. Create a simple JSP page and save it as welcome1.jsp.

<BR>
<BR>
<HR>
<%
out.println("Welcome to the new UI extension framework ");
%>
<HR>

4. Add the custom code or JSP files in the $TOP/public_html/user directory.


5. Add Java code, if any.
a. Create a JAR file for the Java™ command utilities code that you wrote for the new extension.
b. Add the JAR file to the server.
Note: In this scenario, no Java command class is required because you are just forwarding the request. You will use DefaultNavigator.java, which is packaged
with your application.
6. Restart the Product Master application.
7. Change the settings in Product Master.
a. In the User Settings page, select the new custom tool as the new default starting page and click Save.
b. Reload the screen to see the new custom tool with the link "New Custom page". Click this link to open the new page, which is welcome1.jsp.
You see the message Welcome to the new UI extension framework.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Scenario 2: Modifying a custom tool to display a collaboration area


This scenario adds a collaboration area to the base custom tool in scenario 1.

Before you begin



Follow the steps in Scenario 1: Creating a simple custom tool.

Procedure
1. Modify the welcome1.jsp file as shown below:

<html>
<head>
<script type="text/javascript"
src="/js/dojo_toolkit/dojo/dojo.js"
djConfig="parseOnLoad: true"></script>
<script type="text/javascript">

function getCollabArea() {
// alert("testing ajax called");
dojo.xhrGet( {
// The following URL must match that used to test the server.
url: "/PIMCollabArea.wpc",
handleAs: "json",

timeout: 5000, // Time in milliseconds

// The LOAD function will be called on a successful response.


load: function(response, ioArgs) {

paintCollabAreas(response);
return response;

},

// The ERROR function will be called in an error case.


error: function(response, ioArgs) {
console.error("HTTP status code: ", ioArgs.xhr.status);
return response;
}
});

return true;
}

function paintCollabAreas(collabareajson){

var responseHTML = "";


if(collabareajson.WPCCollabAreas.length > 0){
for (i = 0; i < collabareajson.WPCCollabAreas.length; i++) {

responseHTML= responseHTML+collabareajson.WPCCollabAreas[i].name+"<br>";
}

dojo.byId("collabareas").innerHTML =responseHTML;
}else
{

dojo.byId("collabareas").innerHTML ="No collaboration area ";


}
}

function init() {
getCollabArea();
}

dojo.addOnLoad(init);

</script>
</head>
<body class="tundra">

<BR>
<BR>
<HR>
<%
out.println("Welcome to the new UI extension framework ");
%>
<HR>

<B>List of Collaboration area</B>

<div id="collabareas">
</div>

</body>

In the preceding code, dojo.addOnLoad(init) calls the init function, and the init function calls getCollabArea(). The getCollabArea() method uses dojo's
Ajax method to asynchronously request the collaboration area information. The paintCollabAreas() function uses the "collabareas" div to display the available collaboration
areas.

2. Add the command class to the JSP. Add the new Java™ command class PIMUIExtensionCommand.java as shown below:

public class PIMUIExtensionCommand implements AsyncEnabled,RequestResponseEnabled{

HttpServletRequest request;
HttpServletResponse response;

//send a JSON array for the collaboration areas
String responseContent ="";

public String getContentType() {
return JSON;
}

public String getResponseContent() {
return responseContent;
}

public void setRequest(HttpServletRequest request) {
this.request = request;
}

public void setResponse(HttpServletResponse response) {
this.response = response;
}

public String getCollaborationArea() throws Exception {

// austin context is the root wpc context within the WPC environment
AustinContext ctx = (AustinContext)request.getSession().getAttribute("ctx");
AustinContext.setCurrentContext(ctx);
//create the API context
Context apiContext = PIMContextFactory.getCurrentContext();

CollaborationAreaManager collbMgr =apiContext.getCollaborationAreaManager();

PIMCollection<CollaborationArea> coll = collbMgr.getAllCollaborationAreas();

JSONArray collaborationAreas = new JSONArray();

CollaborationArea collArea = null;
for(Iterator<CollaborationArea> it =coll.iterator();it.hasNext();)
{
collArea = it.next();
OrderedJSONObject obj = new OrderedJSONObject();

List acollabList = collArea.getNonEmptySteps();

if(acollabList.size() > 0 )
{
int countofitems =0;

obj.put("name",collArea.getName());
CollaborationStep collabStep = null;
for(Iterator<CollaborationStep> collabSteps =
acollabList.iterator();collabSteps.hasNext();){

collabStep = collabSteps.next();
countofitems+=collabStep.getContents().size();
}
obj.put("count",new Integer(countofitems));

}else {
obj.put("name",collArea.getName());
obj.put("count","");
}

collaborationAreas.add(obj);
}
JSONObject collabAreas = new JSONObject();
collabAreas.put("WPCCollabAreas",collaborationAreas);
//setting the response contents
responseContent = collabAreas.serialize(true);

//asynchronous calls always return null
return null;
}
}

In the preceding code, the getCollaborationArea() method uses Java API calls to get the collaboration areas and uses JSON to build the JSON array. The
responseContent is set with the collaboration areas in JSON format.

The JSON content looks like this:

{"WPCCollabAreas": [
{
"name": "cat-col 1",
"count": 1
},
{
"name": "hie-col 1",
"count": ""
}} }

3. Edit the flow-config.xml entry. Add the following entry under the async-flows tag:

<async-flow path="PIMCollabArea" command="com.ibm.pim.ui.test.PIMUIExtensionCommand" method="getCollaborationArea"/>

4. Update the classpath with the new class:



a. Compile the new Java file.
b. Create a new JAR and update the classpath of the application server instance with the new JAR.
c. Restart the application server.
5. Restart IBM Product Master. The new page with the collaboration area is displayed when you click the New Custom Page link.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Customizing labels and icons for items and categories


A solution implementer can customize the labels and icons used for entities managed in a catalog or a hierarchy. By default, the labels item and category, respectively,
are used by the user interface for these entities. A solution implementer can define one or more specifications of icons and labels to be used for a given type of business
entity managed in their catalogs or hierarchies. Each of these business entity specifications is a domain entity specification.

Domain entities are specified in an XML file. These domains can then be assigned to a catalog or hierarchy, whose entries indicate the assigned domain entities. A domain
specification assigned to a catalog or hierarchy applies to all of the items or categories in that container. A solution implementer can also choose to associate different
domains to different entries within the same container.

Domain entities can be assigned to an individual entry (item or a category) within a container (catalog or a hierarchy), in which case these will override those derived from
the container. Thus, multiple domain entity types can be associated to items or categories of the same catalog or the same hierarchy. Each domain entity also has icons
specified in the XML spec.

To customize labels and icons for multiple domain entities, perform the following steps:

1. Identify the business domains for the entities your solution will manage in the catalogs or hierarchies. For example, if you are managing a set of vendors in a
hierarchy, you might want to consider having a 'vendor' domain. Or, if you plan to manage bundles in a catalog, then you might want to consider having a 'bundle'
domain.
2. Define your domain entities in the XML file. See Defining domain entities in the XML spec for more information.
3. Generate the CSS files. See Generating a CSS file for the domain entities for more information.
4. Restart the application server.
5. Log into the Product Master user interface and select your type of domain entities. See Associating values to the domain entities using the user interface for more
information.

Defining domain entities in the XML spec


The XML spec file contains information on the domain entities that are available to be used with an entry in a container like a catalog or hierarchy. The XML spec file
is located at $TOP/etc/default/domains/multi_domain_entities_${locale}.xml, where ${locale} is the locale for which the domain entities are defined.
For example, the default XML spec provided is multi_domain_entities_en_US.xml, where en_US is the United States English locale.
Generating a CSS file for the domain entities
After you have defined the domain entities in the XML specification, the following setup is needed.
Associating values to the domain entities using the user interface
After you have restarted the application server, log into the Product Master user interface.
Error handling and limitations for domain entities
When Product Master encounters an error in locating the domain entity specification associated with a container or with specific entities within a container, it does
not respond with error messages in the user interface. When a particular domain entity ID is not found, the product uses the default domain labels for entries, for
example, items and categories, and the default icons used by the product for these entries, after logging the error and warning messages in the log files.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Defining domain entities in the XML spec


The XML spec file contains information on the domain entities that are available to be used with an entry in a container like a catalog or hierarchy. The XML spec file is
located at $TOP/etc/default/domains/multi_domain_entities_${locale}.xml, where ${locale} is the locale for which the domain entities are defined. For
example, the default XML spec provided is multi_domain_entities_en_US.xml, where en_US is the United States English locale.

To add domain entities for another locale (for example, French, fr_FR), all the domain entities that are defined here must be specified in another file,
multi_domain_entities_fr_FR.xml, with the corresponding French translated labels.

Currently, the application supports the following locales; their corresponding locale-specific domain entity XML spec files are given below:

en_US: English (US)
multi_domain_entities_en_US.xml
de_DE: German (Germany)
multi_domain_entities_de_DE.xml
el_GR: Greek (Greece)
multi_domain_entities_el_GR.xml
es_ES: Spanish (Spain)
multi_domain_entities_es_ES.xml
fr_FR: French (France)
multi_domain_entities_fr_FR.xml
it_IT: Italian (Italy)
multi_domain_entities_it_IT.xml
ja_JP: Japanese (Japan)
multi_domain_entities_ja_JP.xml
ko_KR: Korean (Korea)
multi_domain_entities_ko_KR.xml
pl_PL: Polish (Poland)
multi_domain_entities_pl_PL.xml
pt_BR: Portuguese (Brazil)
multi_domain_entities_pt_BR.xml
ru_RU: Russian (Russia)
multi_domain_entities_ru_RU.xml
tr_TR: Turkish (Turkey)
multi_domain_entities_tr_TR.xml
zh_CN: Chinese (Simplified)
multi_domain_entities_zh_CN.xml
zh_TW: Chinese (Traditional)
multi_domain_entities_zh_TW.xml

Sample XML spec


Provided below is a sample XML specification for defining domain entities.

<domain_configurations>
<company code="your_company_code">

<domain id="Employee" type="item">
<label>
<label_for_entity>employee</label_for_entity>
<label_for_entities>employees</label_for_entities>
<label_for_Entity>Employee</label_for_Entity>
<label_for_Entities>Employees</label_for_Entities>
<label_for_ENTITY>EMPLOYEE</label_for_ENTITY>
<label_for_ENTITIES>EMPLOYEES</label_for_ENTITIES>
</label>
<icon path="/images/entities/employee.png"/>
<empty_display_value_icon path="/images/entities/empty_display_value_employee.png"/>
</domain>
<domain id="Manager" type="item">
<label>
<label_for_entity>manager</label_for_entity>
<label_for_entities>managers</label_for_entities>
</label>
<icon path="/images/entities/manager.png"/>
<empty_display_value_icon path="/images/entities/empty_display_value_manager.png"/>
</domain>
<domain id="Geography" type="category">
<label>
<label_for_entity>region</label_for_entity>
<label_for_entities>regions</label_for_entities>
</label>
<icon path="/images/entities/geography.png"/>
<empty_display_value_icon path="/images/entities/empty_display_value_geography.png"/>
</domain>
<domain id="Department" type="category">
<label>
<label_for_entity>department</label_for_entity>
<label_for_entities>departments</label_for_entities>
</label>
<icon path="/images/entities/department.png"/>
<empty_display_value_icon path="/images/entities/empty_display_value_department.png"/>
</domain>
<domain id="Hardware" type="category">
<label>
<label_for_entity>machine</label_for_entity>
<label_for_entities>machines</label_for_entities>
</label>
<icon path="/images/entities/machine.png"/>
<empty_display_value_icon path="/images/entities/empty_display_value_machine.png"/>
</domain>
<domain id="Equipment" type="category">
<label>
<label_for_entity>equipment</label_for_entity>
<label_for_entities>equipments</label_for_entities>
</label>
<icon path="/images/entities/equipment.png"/>
<empty_display_value_icon path="/images/entities/empty_display_value_equipment.png"/>
</domain>
</company>
<company code="acme">
<domain id="Employee" type="item">
<label>
<label_for_entity>employee</label_for_entity>
<label_for_entities>employees</label_for_entities>
</label>
<icon path="/images/entities/employee.png"/>
<empty_display_value_icon path="/images/entities/empty_display_value_employee.png"/>
</domain>
<domain id="Manager" type="item">
<label>
<label_for_entity>manager</label_for_entity>
<label_for_entities>managers</label_for_entities>
</label>
<icon path="/images/entities/manager.png"/>
<empty_display_value_icon path="/images/entities/empty_display_value_employee.png"/>
</domain>
<domain id="Geography" type="category">
<label>
<label_for_entity>region</label_for_entity>
<label_for_entities>regions</label_for_entities>
</label>
<icon path="/images/entities/geography.png"/>
<empty_display_value_icon path="/images/entities/empty_display_value_geography.png"/>
</domain>
<domain id="Department" type="category">
<label>
<label_for_entity>department</label_for_entity>
<label_for_entities>departments</label_for_entities>
</label>
<icon path="/images/entities/department.png"/>
<empty_display_value_icon path="/images/entities/empty_display_value_department.png"/>
</domain>
<domain id="Hardware" type="category">
<label>
<label_for_entity>machine</label_for_entity>
<label_for_entities>machines</label_for_entities>
</label>
<comment>
Note: Below, both icon and empty_display_value_icon have paths referring to user supplied icons in location:
supplier_base_dir which has a default value of $TOP/public_html/suppliers/
specified in $TOP/etc/default/common.properties.
Another location could be: $TOP/public_html/user/domains/ directory.
</comment>
<icon path="/suppliers/machine.png"/>
<empty_display_value_icon path="/suppliers/empty_display_value_machine.png"/>
</domain>
<domain id="Equipment" type="category">
<label>
<label_for_entity>equipment</label_for_entity>
<label_for_entities>equipments</label_for_entities>
</label>
<comment>
Note: Below, both icon and empty_display_value_icon have paths referring to user supplied icons in location:
$TOP/public_html/user/domains/images/ directory.
Another location could be: in supplier_base_dir which has a default value of /public_html/suppliers/
specified in $TOP/etc/default/common.properties.
</comment>
<icon path="/user/domains/images/equipment.png"/>
<empty_display_value_icon path="/user/domains/images/empty_display_value_equipment.png"/>
</domain>
</company>
</domain_configurations>

For a company, for a given container (either a catalog or a hierarchy), you can define a domain entity to be used in an application instance. This must be done through the
container attributes in the application itself.

For each domain entity, you will need to specify:

the id (unique identifier), which should not have any characters belonging to the set: []{}:\\/\"'#@<>,*|!@$%^&()=+&.
the type of the domain entity (either "item" or "category")
the label, with different variations like:

label_for_entity
Singular label of a domain entity in all lower case text.
label_for_entities
Plural label of domain entities in all lower case text.

The rest of the labels are optional:

label_for_Entity
Singular label of a domain entity with the first letter in upper case text. If not specified, the value is derived from the value specified for label_for_entity.
label_for_Entities
Plural label of domain entities with the first letter in upper case text. If not specified, the value is derived from the value specified for label_for_entities.
label_for_ENTITY
Singular label of a domain entity in all upper case text. If not specified, the value is derived from the value specified for label_for_entity.
label_for_ENTITIES
Plural label of domain entities in all upper case text. If not specified, the value is derived from the value specified for label_for_entities.
icon (with a path attribute defined, /user/domains/images/icon.extension)
If not specified, the default icon used by the product for the specified domain type is used.
empty_display_value_icon (with a path attribute defined, /user/domains/images/empty_display_value_icon.extension)
If not specified, the default icon used by the product for the specified domain type is used.

Note: Both icon and empty_display_value_icon can have paths referring to user supplied icons in location: $TOP/public_html/user/domains/images/ directory.
Another location could be: in supplier_base_dir which has a default value of /public_html/suppliers/ specified in $TOP/etc/default/common.properties.
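
The derivation rules for the optional labels can be expressed as a short sketch (hypothetical helper code, not part of the product):

public class LabelFallbackSketch {

    // Derives an optional label variant from the required lower-case labels.
    static String deriveLabel(String variant, String entity, String entities) {
        switch (variant) {
            case "label_for_Entity":   return capitalize(entity);
            case "label_for_Entities": return capitalize(entities);
            case "label_for_ENTITY":   return entity.toUpperCase();
            case "label_for_ENTITIES": return entities.toUpperCase();
            default: throw new IllegalArgumentException(variant);
        }
    }

    static String capitalize(String s) {
        return s.isEmpty() ? s : Character.toUpperCase(s.charAt(0)) + s.substring(1);
    }

    public static void main(String[] args) {
        // "machine"/"machines", as in the Hardware domain of the sample spec.
        System.out.println(deriveLabel("label_for_Entity", "machine", "machines"));   // Machine
        System.out.println(deriveLabel("label_for_ENTITIES", "machine", "machines")); // MACHINES
    }
}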



IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Generating a CSS file for the domain entities


After you have defined the domain entities in the XML specification, the following setup is needed.

Run the bin\perllib\genDomainEntityIconClasses.pl command in a command window from the top directory where the application is set up. Running this command
generates a CSS file that includes the newly added domain entities for icon accessibility from the product. Output similar to the following is displayed:

c:\projects_rtc\trunk\ccd_src>bin\perllib\genDomainEntityIconClasses.pl
Reading fileName = multi_domain_entities_en_US.xml, with localeString = en_US
Now processing domain entities for company: trigo in locale: en_US...
dei = DomainEntityInfo (Department, entry, department, departments, Department, Departments, DEPARTMENT, DEPARTMENTS, /images/entities/department.png, /images/entities/empty_display_value_department.png)
dei = DomainEntityInfo (Hardware, entry, machine, machines, Machine, Machines, MACHINE, MACHINES, /images/entities/machine.png, /images/entities/empty_display_value_machine.png)
dei = DomainEntityInfo (Geography, entry, region, regions, Region, Regions, REGION, REGIONS, /images/entities/geography.png, /images/entities/empty_display_value_geography.png)
dei = DomainEntityInfo (Manager, entry, manager, managers, Manager, Managers, MANAGER, MANAGERS, /images/entities/manager.png, /images/entities/empty_display_value_employee.png)
dei = DomainEntityInfo (Equipment, entry, equipment, equipments, Equipment, Equipments, EQUIPMENT, EQUIPMENTS, /images/entities/equipment.png, /images/entities/empty_display_value_equipment.png)
dei = DomainEntityInfo (Employee, entry, employee, employees, Employee, Employees, EMPLOYEE, EMPLOYEES, /images/entities/employee.png, /images/entities/empty_display_value_employee.png)
Now processing domain entities for company: acme in locale: en_US...
dei = DomainEntityInfo (Department, entry, department, departments, Department, Departments, DEPARTMENT, DEPARTMENTS, /images/entities/department.png, /images/entities/empty_display_value_department.png)
dei = DomainEntityInfo (Hardware, entry, machine, machines, Machine, Machines, MACHINE, MACHINES, /images/entities/machine.png, /images/entities/empty_display_value_machine.png)
dei = DomainEntityInfo (Geography, entry, region, regions, Region, Regions, REGION, REGIONS, /images/entities/geography.png, /images/entities/empty_display_value_geography.png)
dei = DomainEntityInfo (Manager, entry, manager, managers, Manager, Managers, MANAGER, MANAGERS, /images/entities/manager.png, /images/entities/empty_display_value_employee.png)
dei = DomainEntityInfo (Equipment, entry, equipment, equipments, Equipment, Equipments, EQUIPMENT, EQUIPMENTS, /images/entities/equipment.png, /images/entities/empty_display_value_equipment.png)
dei = DomainEntityInfo (Employee, entry, employee, employees, Employee, Employees, EMPLOYEE, EMPLOYEES, /images/entities/employee.png, /images/entities/empty_display_value_employee.png)
Now processing domain entities for company: junitdb in locale: en_US...
dei = DomainEntityInfo (Department, entry, department, departments, Department, Departments, DEPARTMENT, DEPARTMENTS, /images/entities/department.png, /images/entities/empty_display_value_department.png)
dei = DomainEntityInfo (Hardware, entry, machine, machines, Machine, Machines, MACHINE, MACHINES, /images/entities/machine.png, /images/entities/empty_display_value_machine.png)
dei = DomainEntityInfo (Geography, entry, region, regions, Region, Regions, REGION, REGIONS, /images/entities/geography.png, /images/entities/empty_display_value_geography.png)
dei = DomainEntityInfo (Manager, entry, manager, managers, Manager, Managers, MANAGER, MANAGERS, /images/entities/manager.png, /images/entities/empty_display_value_employee.png)
dei = DomainEntityInfo (Equipment, entry, equipment, equipments, Equipment, Equipments, EQUIPMENT, EQUIPMENTS, /images/entities/equipment.png, /images/entities/empty_display_value_equipment.png)
dei = DomainEntityInfo (Employee, entry, employee, employees, Employee, Employees, EMPLOYEE, EMPLOYEES, /images/entities/employee.png, /images/entities/empty_display_value_employee.png)
Reading fileName = multi_domain_entities_fr_FR.xml, with localeString = fr_FR

After you run the bin\perllib\genDomainEntityIconClasses.pl command, restart the application.


Associating values to the domain entities using the user interface


After you restart the application server, log in to the Product Master user interface.

You can set the values for the domain entities for a catalog or hierarchy by using the Catalog Attributes or Hierarchy Attributes page, respectively.


You can associate a domain entity specification with a catalog or hierarchy by identifying it as the domain entity to be used with the item. This domain specification applies
to all of the entries in that container.

You can also associate a domain entity specification with only a specific set of items or categories in a catalog or hierarchy. You achieve this in a two-step process.

1. Identify a non-localized, indexed attribute from the primary spec of the container as the domain entity attribute field.
2. Ensure that the correct domain entity to be used for that item or category is set as the value of the domain attribute in that item or category.

Catalog attributes
Right-click a catalog in the left-pane navigation and select Catalog Attributes. If you need to specify the domain specification for all of the items in the catalog, in the Domain Entity to be used with items in the catalog field, select a value of your domain entity from the drop-down menu.
If you need to specify an item-specific domain specification, in the Catalog domain entity attribute field, pick a spec attribute that is of String type and indexed (searchable) from the drop-down menu. The actual domain entity value for an item is then set on the chosen domain entity attribute on the item.

Hierarchy attributes
Right-click a hierarchy in the left-pane navigation and select Hierarchy Attributes. If you need to specify the domain specification for all of the categories in the hierarchy, in the Domain Entity to be used with categories in the hierarchy field, select a value of your domain entity from the drop-down menu.
If you need to specify a category-specific domain specification, in the Hierarchy domain entity attribute field, pick a spec attribute that is of String type and indexed (searchable) from the drop-down menu. The actual domain entity value for a category is then set on the chosen domain entity attribute on the category.


Error handling and limitations for domain entities


When Product Master encounters an error in locating the domain entity specification that is associated with a container or with specific entries within a container, it does not
respond with error messages in the user interface. When a particular domain entity ID is not found, the product logs error and warning messages in the log files and then uses
the default domain labels (for example, items and categories) and the default icons for these entries.

The following modules do not support customized labels and icons:

Collaboration Area Explorer in the left navigation pane does not show domain-specific icons or labels for entries within workflow steps.
Location Explorer in the Location Data popup from data entry screens does not show domain-specific icons or labels for the location tree.
Organization hierarchies do not support domain-specific icons and labels.
Imports and exports do not support domain-specific icons or labels. This includes the consoles and the other screens that are associated with them.

The following features within the modules do not support multiple domains:

Environment import and export of a company does not support customized icons or labels.

Additionally, the following restrictions apply to the specification of the domain identifier:

The domain IDs must be unique across all types. Thus, a domain of item type cannot have the same domain ID as a domain of category type, and vice versa.
The domain IDs must use only ASCII characters, excluding these characters: []{}:\\/\"'#@<>,*|!@$%^&()=+

If you do not set the domain for a container, but you set the domain attribute to be used, then the following two things happen:

The container-level domain specification defaults to the Product Master default; for example, if the container is a catalog, then you see "item".
The item-level domain specification uses the domain that is specified through the domain attribute value on the item. If the domain attribute value on the item is empty, then it defaults to the default domain item.

When you browse hierarchies and catalogs, you use the On Demand Filter. All of the attributes on a hierarchy are saved with the current version; therefore, when you browse
an older version of a catalog, you get the older version of the hierarchy attributes.

The following scenarios provide examples of error handling when using domain entities.

Scenario 1: The domain spec file does not exist
An item has its domain entity attribute set to a nonexistent domain entity. This generates an error message in the default.log file.
Scenario 2: Missing XML tag specified for domain entity
The domain entity XML file specifies a domain entity with a missing required label.
Scenario 3: Wrong type for domain entity
The domain entity XML file specifies a domain entity with a wrong type.
Scenario 4: Duplicate domain entity specified
When a duplicate domain entity is specified, it generates a warning message in the default.log file about the duplicate domain entity and that the duplicate domain entity will be skipped.
Scenario 5: Invalid characters specified in domain entity ID
The domain entity XML file specifies a domain entity ID with invalid characters.
Scenario 6: Invalid empty label value specified for required labels on a domain entity
The domain entity XML file specifies a domain entity Employee with an empty value for the label_for_entities label.
Scenario 7: A domain entity has been defined without a company tag
If a domain entity XML file specifies a domain entity without company tags, it generates an error message in the default.log file.


Scenario 1: The domain spec file does not exist
An item has its domain entity attribute set to a nonexistent domain entity. This generates an error message in the default.log file.

If the domain spec file is missing, an error message is generated in the default.log file. The following error message is an example of the type of message that you
receive if the domain spec file does not exist.

2013-04-30 23:18:03,238 [jsp_8: getDomainEntityIdForContainer.wpc]
ERROR com.ibm.ccd.content.common.MultiDomainEntityUtils -
CWPCM0606E:There is an error when processing the domain entity spec file:
C:\projects_rtc\aruba\ccd_src\etc/default/domains/multi_domain_entities_en_US.xml.,
Exception:C:\projects_rtc\aruba\ccd_src\etc\default\domains\multi_domain_entities_en_US.xml (The system cannot find the file specified.)
java.io.FileNotFoundException: C:\projects_rtc\aruba\ccd_src\etc\default\domains\multi_domain_entities_en_US.xml (The system cannot find the file specified.)

Ensure that you set the domain entity attribute for the item to an existing domain entity.


Scenario 2: Missing XML tag specified for domain entity


The domain entity XML file specifies a domain entity with a missing required label in the following XML:

<domain id="Employee2"
type="item">
<label>
<label_for_entity>employee</label_for_entity>
</label>
<icon path="/images/entities/employee.png"/>
<empty_display_value_icon
path="/images/entities/empty_display_value_employee.png"/>
</domain>

This generates an error message in the default.log file.


The following error message is an example of the type of message you would receive if there was a missing XML tag for a domain entity:

2013-04-30 23:23:05,707 [jsp_9: getDomainEntityIdForContainer.wpc]
ERROR com.ibm.ccd.content.common.MultiDomainEntityUtils -
CWPCM0606E:There is an error when processing the domain entity spec file:
C:\projects_rtc\aruba\ccd_src\etc/default/domains/multi_domain_entities_en_US.xml.,
Exception:Error finding node: label_for_entities

Error finding node: label_for_entities

Ensure that you specify all required XML tags for the domain entities.


Scenario 3: Wrong type for domain entity


The domain entity XML file specifies a domain entity with wrong type in the following XML:

<domain id="Employee2"
type="xxyyzz">
<label>
<label_for_entity>employee</label_for_entity>
<label_for_entities>employees</label_for_entities>
</label>
<icon path="/images/entities/employee.png"/>
<empty_display_value_icon
path="/images/entities/empty_display_value_employee.png"/>
</domain>

This generates an error message in the default.log file.


The following error message is an example of the type of message you would receive if there was an invalid value for the type attribute:

2013-04-30 23:25:34,807 [jsp_9: getDomainEntityIdForContainer.wpc]
ERROR com.ibm.ccd.content.common.MultiDomainEntityUtils -
CWPCM0604E:Domain Entity 'Employee2' has an invalid domain type 'xxyyzz'..

Ensure that you specify a valid value for the type attribute.


Scenario 4: Duplicate domain entity specified


When a duplicate domain entity is specified, it generates a warning message in the default.log file about the duplicate domain entity and that the duplicate domain
entity will be skipped.

2013-04-30 23:31:53,528 [jsp_9: getDomainEntityIdForContainer.wpc]
ERROR com.ibm.ccd.content.common.MultiDomainEntityUtils -
CWPCM0608E:Domain Entity with id 'Employee' has already been defined.


Scenario 5: Invalid characters specified in domain entity ID


The domain entity XML file specifies a domain entity ID with invalid characters in the following XML:

<domain id="Employee*%"
type="item">
<label>
<label_for_entity>employee</label_for_entity>
<label_for_entities>employees</label_for_entities>
<label_for_Entity>Employee</label_for_Entity>
<label_for_Entities>Employees</label_for_Entities>
<label_for_ENTITY>EMPLOYEE</label_for_ENTITY>
<label_for_ENTITIES>EMPLOYEES</label_for_ENTITIES>
</label>
<icon path="/images/entities/employee.png"/>
<empty_display_value_icon path="/images/entities/empty_display_value_employee.png"/>
</domain>

This generates a warning message in the default.log file about the invalid characters that are present in the ID. The invalid characters are from the character set
'[]{}:\/"'#@<>,*|!@$%^&()=+&.'.

2013-04-30 23:33:32,995 [jsp_8: getDomainEntityIdForContainer.wpc]
ERROR com.ibm.ccd.content.common.MultiDomainEntityUtils -
CWPCM0608E:Domain Entity 'Employee*%' has invalid characters.
Provide a valid domain id with no characters from the character set '[]{}:\/"'#@<>,*|!@$%^&()=+&.'.


Scenario 6: Invalid empty label value specified for required labels on a domain entity
The following domain entity XML file specifies a domain entity Employee with an empty value for the label_for_entities label.

<domain id="Employee" type="item">
<label>
<label_for_entity>employee</label_for_entity>
<label_for_entities></label_for_entities>
<label_for_Entity>Employee</label_for_Entity>
<label_for_Entities>Employees</label_for_Entities>
<label_for_ENTITY>EMPLOYEE</label_for_ENTITY>
<label_for_ENTITIES>EMPLOYEES</label_for_ENTITIES>
</label>
<icon path="/images/entities/employee.png"/>
<empty_display_value_icon path="/images/entities/empty_display_value_employee.png"/>
</domain>

This generates an error message in the default.log file stating that a non-empty value must be provided for this label.

2013-04-30 23:36:48,279 [jsp_9: getDomainEntityIdForContainer.wpc]
ERROR com.ibm.ccd.content.common.MultiDomainEntityUtils -
CWPCM0607E:Domain Entity 'Employee1' has an invalid empty label value for 'label_for_entities'.


Scenario 7: A domain entity has been defined without a company tag

If a domain entity XML file specifies a domain entity without company tags, it generates an error message in the default.log file.

2013-04-30 23:48:59,766 [jsp_9: getDomainEntityIdForContainer.wpc]
ERROR com.ibm.ccd.content.common.MultiDomainEntityUtils -
CWPCM0609E:Domain Entities have been defined without a company.


Enabling event logging using history manager


You can use the history manager of IBM® Product Master to log object events through Java™ APIs. You can also disable event logging for a session for all objects, all
objects of a specific type, and specific events. Refer to the Javadoc for details.

Example
When integrating with upstream and downstream systems, you need to identify Product Master objects that have changed to be able to send delta changes or full
snapshots of the changed objects to the target systems.
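A minimal sketch of this delta-detection idea, using only the searchHistory() retrieval call that is shown later in this section (the timestamp values and variable names are assumptions for illustration):

Context context = PIMContextFactory.getCurrentContext();
HistoryManager manager = context.getHistoryManager();

// Assumed for illustration: the last successful export ran 24 hours ago.
Date lastExport = new Date(System.currentTimeMillis() - 24L * 60 * 60 * 1000);
Date now = new Date();

// Each returned entry is an XML document that conforms to history_schema.xsd
// and describes one logged event; use these entries to drive a delta export.
List<String> changedObjectEvents = manager.searchHistory(lastExport, now);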

Event logging
Event logging is a mechanism to log events for IBM Product Master objects. You need to use the history manager to log object events such as create and update to a
database, and to retrieve event information.
Log messages in the data model
The data model redirects certain messages to the log files. These log files help you to identify the problems in IBM Product Master and take any corrective action.
Configuring logging for supported events of an object
To configure the history manager to log events for supported Product Master Server objects, you need to define a subscription policy in the
history_subscriptions.xml file.
Subscription policies
A subscription policy contains all the information required to start logging events for an object.
Consuming logged history data
You need to use the searchHistory() method of the history manager Java API to retrieve history details of an object, a set of objects, or an object type. The retrieved
details contain information about objects, a set of objects, or an object type that has changed. This information is helpful for further operations. For example, you
might want to export object data only for objects that have changed after a specific timestamp. The schema for the logged history data is available in the
history_schema.xsd file in the <install directory>/etc/default directory. Refer to the history manager Javadoc for all the retrieval APIs.


Event logging
Event logging is a mechanism to log events for IBM® Product Master objects. You need to use the history manager to log object events such as create and update to a
database, and to retrieve event information.

Using the history manager, you can select which objects you want to log events for, which events you want to log, and define how much detail you want to log for each
event. You can use filters to select specific objects from within a certain type of objects. For example, you can select specific items from within the item objects.
You must determine what event information your users will need to retrieve. You can design your event logging by using an appropriate combination of objects, events,
event logging levels, and filters.

You can log events only for Product Master objects that are supported by the history manager. Before you can set up an object for event logging, you must verify that the
object is supported by the history manager by checking whether an object node for the object exists in the history_objects.xml file. This file is available in the
$TOP/etc/default directory.
You can log specific events for Product Master objects. However, the history manager might not support logging of all events for all objects. To check whether the history
manager supports a specific event for an object, check the object node for the object in the history_objects.xml file in the $TOP/etc/default directory.
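For example, the following minimal sketch (plain JAXP DOM parsing, not a product API; the path handling is an assumption for illustration) lists the events that are supported for each object by reading history_objects.xml. The <object> and <event> structure follows the example that is shown in the "Objects that support event logging" topic.

import java.io.File;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class ListSupportedEvents {
    public static void main(String[] args) throws Exception {
        // Assumed for illustration: the TOP environment variable points to the
        // installation directory.
        File file = new File(System.getenv("TOP"), "etc/default/history_objects.xml");
        Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder().parse(file);
        NodeList objects = doc.getElementsByTagName("object");
        for (int i = 0; i < objects.getLength(); i++) {
            Element object = (Element) objects.item(i);
            StringBuilder line = new StringBuilder(object.getAttribute("name")).append(":");
            NodeList events = object.getElementsByTagName("event");
            for (int j = 0; j < events.getLength(); j++) {
                line.append(" ").append(((Element) events.item(j)).getAttribute("type"));
            }
            System.out.println(line);
        }
    }
}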


Log messages in the data model


The data model redirects certain messages to the log files. These log files help you to identify the problems in IBM® Product Master and take any corrective action.

You can use user-defined logs (UDLs) to log specific kinds of Product Information Management (PIM) events that IBM Product Master does not log by default.

A user-defined log (UDL) is an object that can be associated with a hierarchy or a catalog for storing custom content related to categories and items.

UDLs are available under the Product Manager > Catalogs > Catalog Console menu. You must have an existing catalog to choose. Click Attrs. and then Logs. Then, click New,
enter a name in the Log name field, and click +.

The custom content for an individual item or category is stored in a user-defined log entry (UDLE). The custom content for a UDLE is in the form of a string, which can be
large. Do not store strings larger than fifteen to thirty characters in a UDLE.

Types of UDLs
Regular Log
You can store only one UDLE for each item or category. A new UDLE replaces an existing UDLE.
Running Log
All the UDLEs for an item or category are stored in the same order in which they are received.

Valid uses of UDLs


You can use UDLs as custom logs and for data staging.

Custom logs:
You can use UDLs to log specific kinds of PIM events that Product Master does not log by default. For example, you can use UDLs for audit-specific data updates.
Data staging:
You can use UDLs as a staging area. For example, you can copy data into a working storage area at one stage of an export process to do some additional
manipulation of the data before the export. You can use a UDL to hold the temporary data that you want to modify.

Incorrect uses of user-defined logs


Do not use UDLs as an alternative to item catalogs to boost performance. Never use UDLs for storing core item data. Item data must always be stored in catalogs. The
drawbacks to managing item data within UDLs include:

Limited ready-to-use access to PIM core functionality
No support for the Product Master native workflow
UDLs are completely outside the Product Master security model
No version management or control exists
Inheritance must be custom built
All core PIM capabilities that rely on UDLs fall outside of any support warranty
Cost of implementation, maintenance, and migration from one release to another is high because of the level of custom coding that is required


Configuring logging for supported events of an object


To configure the history manager to log events for supported Product Master Server objects, you need to define a subscription policy in the
history_subscriptions.xml file.

Before you begin


Before you can configure the history manager to log events for specific objects, you must verify that the events are supported for the objects. The list of supported events
is available in the history_objects.xml file. This file is available in the $TOP/etc/default directory.

Procedure
1. Open the history_subscriptions.xml file.
2. If a subscription node for the object exists in the file, add a <log> tag to the <subscription_levels> tag. If a subscription node for the object does not exist, create a
new node by using the <subscription> tag, and then create a <log> tag within a <subscription_levels> tag.
The history_subscriptions.xml file that is shipped with the product does not have any nodes by default. However, it contains descriptions and examples.
3. Add the event and history level on the <log> tag.
4. Save the file.

Example
Here is an example of configuring event logging for the REPORT object for two events. For each event, the level of logging is set to BASIC_EVENT_INFO.

<subscription object_type="REPORT">
<subscription_levels>
<log event_type="CREATE" history_level="BASIC_EVENT_INFO"/>
<log event_type="UPDATE" history_level="BASIC_EVENT_INFO"/>
</subscription_levels>
</subscription>

history_subscriptions.xml file
The history_subscriptions.xml file contains filter definitions and object subscription nodes for all the objects that are subscribed for event logging.
Using filters in the history manager
The history manager can use filters to enable you to specify objects for which changes are logged on subscribed events. This is important to limit the number of
objects of a specific object type whose history changes are logged, and leads to improved system performance as well as optimization of storage space.
Objects that support event logging
The history manager supports event logging for specific objects. The list of supported objects is available in the history_objects.xml file. You can configure

event logging for these objects by subscribing them.
Error Handling
The IBM Product Master history manager supports error handling.


history_subscriptions.xml file
The history_subscriptions.xml file contains filter definitions and object subscription nodes for all the objects that are subscribed for event logging.

Note: For versions later than IBM® Product Master 12.0 Fix Pack 8, set the value of the fetch_audit_lookupDataByID_date property in the $TOP/etc/default/common.properties
file to the date when applied. For more information, see common.properties file parameters.

Object subscription nodes


Object subscription is the mechanism of setting up an object for event logging by the history manager. You need to subscribe Product Master Server objects for which you
want to log events. Each object subscription node contains information about the object type, filter, events, and history level for each event.
Object subscription enables you to select the objects for which events should be logged. This is required to avoid the overhead of logging events for all objects. The object
must be one of the objects that are supported by the history manager.

This is an example of an object subscription node in the history_subscriptions.xml file. A regular expression filter is used in this subscription node.

<subscription object_type="ITEM">
<subscription_filter>regex filter</subscription_filter>
<filter_arguments><regexp>^nam.*[0-9]$</regexp></filter_arguments>
<subscription_levels>
<log event_type="CREATE" history_level="NONE"/>
<log event_type="UPDATE" history_level="BASIC_EVENT_INFO"/>
</subscription_levels>
</subscription>

<subscription object_type="object_type">
Indicates the object type that is subscribed. You can log events only for those PIM objects that are supported by the history manager.
<subscription_filter>
Indicates the type of filter to be applied.
filter_arguments
Indicates the argument of the applied filter. The arguments vary based on the filter that is used. A named list filter has a list of names as filter_arguments, and a regex
filter has a regular expression as filter_arguments. For a regex filter, the characters ^, *, and $ have the same meaning as in UNIX and Linux® environments.
<subscription_levels>
Indicates the events that need to be logged. There can be multiple events on this tag. In the example, the CREATE and UPDATE events are logged.
<log event_type="event_type" history_level="history_level">
Indicates the type of event and the level of history with which the changes need to be logged for that event. The <log> tag is mandatory for the CREATE, UPDATE,
and DELETE events. It is optional for the MAPPING and HIERARCHY events. If you define <log> tags for the MAPPING and HIERARCHY events, then the events are
logged with the defined history_level. If you do not define the MAPPING and HIERARCHY events, then the logging level of the parent event is used. A parent event is
one that causes another event to happen. For example, an UPDATE event causes a MAPPING event to happen.

The history manager enables you to log event details for the object events that are specified for the object in the history_objects.xml file in the $TOP/etc/default
directory.

The history manager enables you to log event details at six levels to improve the performance and efficiency of the system by logging only relevant data:

BASIC_EVENT_INFO
At this level, the history manager only logs basic information such as username, current timestamp, object ID, and object primary key.
PRE_EVENT_SNAPSHOT
At this level, the history manager logs a complete snapshot of the object before any changes were made.
POST_EVENT_SNAPSHOT
At this level, the history manager logs a complete snapshot of the object after the changes were made.
PRE_AND_POST_EVENT_SNAPSHOTS
At this level, the history manager logs a complete snapshot of the object both before and after any changes were made.
DELTA
At this level, the history manager identifies and logs any added, removed, and modified attributes, relationships, or hierarchical_structure.
NONE
At this level, the history manager does not log any changes.

Related reference
common.properties file parameters



Using filters in the history manager
The history manager can use filters to enable you to specify objects for which changes are logged on subscribed events. This is important to limit the number of objects of
a specific object type whose history changes are logged, and leads to improved system performance as well as optimization of storage space.

The history_subscriptions.xml file contains the filter definitions for logging events for Product Master objects. The history manager provides two types of filters:

Named List filter
Regular Expressions filter

To be able to use a named list filter or a regular expressions filter, you need to declare the filter name and filter implementation in the history_subscriptions.xml file.
Once declared, you can use the filter in subscription policies. The filter_definition tags for the default filters exist in the history_subscriptions.xml file.
Attention: Do not remove or modify the filter_definition tags for the named list filter and regular expressions filter in the history_subscriptions.xml file.
This example demonstrates the declaration of a filter_definition tag for a regular expression filter:

<subscription_filter_definition>
<filter_name>regex filter</filter_name>
<filter_implementation>com.ibm.ccd.api.history.RegularExpressionFilter
</filter_implementation>
</subscription_filter_definition>

subscription_filter_definition
The definition tags of the filter are mentioned on this tag.
filter_name
The name of the filter is mentioned on this tag.
filter_implementation
The fully qualified name of the Java™ class that implements this filter is mentioned on this tag.

The name that you specify on the <filter_name> tag is the name that you must specify on the <subscription_filter> tag for the subscription node. The order of precedence
for the filters that apply to a subscription policy is the following:

A subscription policy that uses a named list filter is evaluated first, irrespective of the order of the definition of the named list filter in the filters section.
Subscription policies that use other filters, including the regex filter, are evaluated in the order of filter definition in the history_subscriptions.xml file.
A subscription policy without any filter is evaluated last.
If two subscription policies for the same object type use the same filter, or do not use a filter, then the order of definition of those policies in
the history_subscriptions.xml file determines their relative precedence.

For example, if events are logged for an object with a filter, but subscription policies for different filters have different levels of logging defined, then the object is evaluated
first by using the NamedListFilter. Then, the object is evaluated by using the RegularExpressionFilter. The order of precedence of these filters is the order in which they
are defined in the filters section of the history_subscriptions.xml file.
Users can implement their own filters. A filter must implement the Filter interface, which provides two methods: setup() and apply().
This example demonstrates the implementation of a filter:

// Requires org.w3c.dom.Node and org.w3c.dom.NodeList, in addition to the
// Product Master Filter, PIMObject, and Report types.
public class ReportFilter implements Filter {

    private String distributionType = null;

    public String getDistributionType() {
        return distributionType;
    }

    public void setDistributionType(String distributionType) {
        this.distributionType = distributionType;
    }

    public void setup(Node policyArguments) {
        try {
            // Read the <distribution_type> value from the <filter_arguments> node.
            Node nArguments = findNode(policyArguments, "filter_arguments", true);
            Node nDistributionType = findNode(nArguments, "distribution_type", true);
            distributionType = nDistributionType.getFirstChild().getNodeValue();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    public boolean apply(PIMObject object) {
        Report report = null;
        try {
            report = (Report) object;
        } catch (ClassCastException e) {
            e.printStackTrace();
        }
        // Log the event only when the report distribution type matches the
        // configured distribution type.
        boolean returnValue = getDistributionType().equalsIgnoreCase(report.getDistribution().getType().toString());
        return returnValue;
    }

    public static Node findNode(Node node, String name, boolean required) throws Exception {
        Node result = null;
        if (null != node) {
            NodeList children = node.getChildNodes();
            for (int i = 0; i < children.getLength(); i++) {
                Node child = children.item(i);
                if (child.getNodeName().equals(name)) {
                    result = child;
                    break;
                }
            }
            if (result == null && required) {
                throw new Exception("node '" + name + "' missing");
            }
        }
        return result;
    }
}

After you implement the filter, you need to add the filter definition of this filter to the history_subscriptions.xml file. The following example shows how the filter
definition is added to the history_subscriptions.xml file.

<subscription_filter_definition>
<filter_name>report filter</filter_name>
<filter_implementation> com.abc.pim.history.ReportFilter </filter_implementation>
</subscription_filter_definition>

Note: The fully qualified name of the class depends on the actual location of the class. For this example, you can assume it to be com.abc.pim.history.ReportFilter.
You can create a custom filter for the report object based on distribution. The distribution can be any of the predefined Report.Type. You can capture changes that are
made to the report objects, which have a given distribution type such as email.
This is an example of using a report filter. There are only two <log> nodes because the REPORT object supports only the CREATE and DELETE events.

<subscription object_type="REPORT">
<subscription_filter>report filter</subscription_filter>
<filter_arguments>
<distribution_type>Email</distribution_type>
</filter_arguments>
<subscription_levels>
<log event_type="CREATE" history_level="PRE_AND_POST_EVENT_SNAPSHOTS"/>
<log event_type="DELETE" history_level="DELTA"/>
</subscription_levels>
</subscription>

If no subscription_filter is defined for a policy, then the policy applies to all objects of the given object_type, in this case the REPORT object.


Objects that support event logging


The history manager supports event logging for specific objects. The list of supported objects is available in the history_objects.xml file. You can configure event
logging for these objects by subscribing them.

The Product Master history manager supports these objects for event logging:

USER
ROLE
ITEM
CATEGORY
CATALOG
HIERARCHY
IMPORT
EXPORT
REPORT
DOCUMENT
COLLABORATION_ITEM

The history manager does not support all events for all objects. The events supported for an object are part of the object node in the history_objects.xml file.
This example displays the object node of the item object from the history_objects.xml file. The history manager supports four events for the item object.

<object name="ITEM">
<event type="CREATE"/>
<event type="UPDATE"/>
<event type="DELETE"/>
<event type="MAPPING"/>
</object>


Error Handling
The IBM® Product Master history manager supports error handling.

The history manager supports handling of two types of errors:

Validation errors - These errors are in the history_subscriptions.xml file. The server will not start if there are validation errors. These errors are displayed on
the console. Examples of validation errors include incorrect XML, incorrect tags, missing tags, missing attributes, and unsupported object types.

Runtime errors - These errors are detected when the server is running, and they are logged in the error log files. The subscription policies that have these errors are ignored.
Errors in implementing a custom filter are an example of this type of error.

Example
This is an example of a validation error message that is displayed on the console when the server is unable to read the history_subscriptions.xml file.

The XML file for history subscriptions is either missing or is not in valid XML format.
Verify that the history_subscription.xml exist under the install_dir/etc/default and has valid XML structure.


Subscription policies
A subscription policy contains all the information required to start logging events for an object.

The subscription policy defines the object type, filter, events, and level of event logging for each event. The subscription policy is contained within the subscription node in
the history_subscriptions.xml file.


Consuming logged history data


You need to use the searchHistory() method of the history manager Java™ API to retrieve history details of an object, a set of objects, or an object type. The retrieved
details contain information about objects, a set of objects, or an object type that has changed. This information is helpful for further operations. For example, you might
want to export object data only for objects that have changed after a specific timestamp. The schema for the logged history data is available in the history_schema.xsd
file in the <install directory>/etc/default directory. Refer to the history manager Javadoc for all the retrieval APIs.

This example displays the format of the output that is generated by the searchHistory() method.

<?xml version="1.0" encoding="UTF-8"?>
<history>
<basic_event_info>
<object_id>0</object_id>
<container_id>0</container_id>
<object_primary_key>object_primary_key</object_primary_key>
<object_type>object_type</object_type>
<event_type>event_type</event_type>
<log_level>log_level</log_level>
<timestamp>timestamp</timestamp>
<user_id>0</user_id>
</basic_event_info>
<attributes>
<delta>
<stand_alone>
<added>
<attribute>
<name>name</name>
<value>value</value>
</attribute>
</added>
<removed>
<attribute>
<name>name</name>
<value>value</value>
</attribute>
</removed>
<modified>
<attribute>
<name>name</name>
<old_value>old_value</old_value>
<new_value>new_value</new_value>
</attribute>
</modified>
</stand_alone>
<spec_driven>
<added>
<attribute>
<name>name</name>
<value>value</value>
</attribute>
</added>
<removed>
<attribute>
<name>name</name>
<value>value</value>
</attribute>
</removed>
<modified>
<attribute>
<name>name</name>
<old_value>old_value</old_value>
<new_value>new_value</new_value>
</attribute>
</modified>
</spec_driven>
</delta>
</attributes>
<mappings>
<delta>
<added>
<map>
<type>type</type>
<objects>
<object>
<name>name</name>
<id>0</id>
</object>
</objects>
</map>
</added>
<removed>
<map>
<type>type</type>
<objects>
<object>
<name>name</name>
<id>0</id>
</object>
</objects>
</map>
</removed>
</delta>
</mappings>
<hierarchical_structure>
<delta>
<added>
<hierarchy>
<type>type</type>
<parents>
<object>
<name>name</name>
<id>0</id>
</object>
</parents>
<children>
<object>
<name>name</name>
<id>0</id>
</object>
</children>
</hierarchy>
</added>
<removed>
<hierarchy>
<type>type</type>
<parents>
<object>
<name>name</name>
<id>0</id>
</object>
</parents>
<children>
<object>
<name>name</name>
<id>0</id>
</object>
</children>
</hierarchy>
</removed>
</delta>
</hierarchical_structure>
</history>

You need to use the searchHistory() method of the history manager API to retrieve history details.
Attention: Use this API with caution because it might return a large list of results, which might lead to problems such as out-of-memory errors.

Getting data per object type


Use the following sample Java code to retrieve the history details for an object type.

Context context = PIMContextFactory.getCurrentContext();
HistoryManager manager = context.getHistoryManager();
List<String> objectTypes = new ArrayList<String>();
objectTypes.add(Report.OBJECT_TYPE);
List<String> ls = manager.searchHistory(objectTypes,null,null,null);

In this example, the code returns all the history for objects of type Report.

Getting data by timestamps
Use the following sample Java code to retrieve the history details of all objects between two timestamps.

Context context = PIMContextFactory.getCurrentContext();
HistoryManager manager = context.getHistoryManager();
List<String> ls = manager.searchHistory(fromThisTimestamp, toThisTimestamp);

In this example, fromThisTimestamp and toThisTimestamp are the two Date objects that indicate the period for which you need to get event logging information.
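Because each returned entry is an XML string in the format shown earlier, you can post-process the results with any XML parser. The following minimal sketch (plain JAXP DOM parsing, not a product API) prints the basic_event_info fields from the ls list that is retrieved above; it requires javax.xml.parsers.DocumentBuilder and DocumentBuilderFactory, org.w3c.dom.Document and Element, org.xml.sax.InputSource, and java.io.StringReader.

DocumentBuilder builder = DocumentBuilderFactory.newInstance().newDocumentBuilder();
for (String xml : ls) {
    Document doc = builder.parse(new InputSource(new StringReader(xml)));
    // basic_event_info is always present, per the output format shown above.
    Element info = (Element) doc.getElementsByTagName("basic_event_info").item(0);
    String objectType = info.getElementsByTagName("object_type").item(0).getTextContent();
    String eventType = info.getElementsByTagName("event_type").item(0).getTextContent();
    String timestamp = info.getElementsByTagName("timestamp").item(0).getTextContent();
    System.out.println(objectType + " " + eventType + " at " + timestamp);
}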



Configuring the Entity Count Tool


You use the EntityReportingFunction extension point to add custom logic for how entities are designated and counted. Before you can run and view the entity count report,
the extension point implementation class needs to be uploaded to the system and its URL registered in the system.

About this task


The entity count tool enables you to count the number of entities in the system.
Entities are classes of data and associated attributes and functions that describe and support a business object (for example, product, agreement, item, asset, location,
partner), as recognized by the Program's Entity Count Tool.

Procedure
1. Implement the EntityReportingFunction extension point. See the example below.
2. Make the extension point implementation available to the system. Perform one of the following tasks:
Upload the compiled class to the docstore, or
Make the compiled class part of a .jar file and make the .jar file available to the system by using the custom .jar mechanism. For more information, see Deploying a
third party or custom user .jar file.
3. Register the location of the extension point class. Perform the following steps:
a. Click Data Model Manager > Security > Company Attributes.
b. In the Entity Count Report Extension Point Url field, under the General settings section, provide the japi URL to where the location of the extension point
class is.
For example,

japi:///uploaded_java_classes:com.ibm.ccd.api.extensionpoints.EntityReportingFunctionImpl_new.class

Example
When the entity report is run from the user interface, different methods of the extension point are started by the system. Required values for the report are computed
based on the user-returned values, and the report is persisted and assigned an entity report ID.

This code shows a sample implementation class of the EntityReportingFunction extension point and how it is used to provide information about the entities in the system
when you run a report.

package com.ibm.ccd.api.extensionpoints;

import java.util.Set;
import java.util.TreeSet;

import com.ibm.pim.catalog.Catalog;
import com.ibm.pim.common.PIMObject;
import com.ibm.pim.extensionpoints.EntityReportingFunction;

/**
* This is the user implementation of the system-supplied extension point interface: EntityReportingFunction.
* This class is compiled and uploaded to the docstore (or made available via the user .jar mechanism).
* The location of the class needs to be provided under the Company Attributes page before 'Run Report' can be invoked
* from the 'Entity Count Reports' screen.
*/
public class EntityReportingFunctionImpl_new implements EntityReportingFunction
{
/**
* Returns the names of the applicable containers in the system, for the given container type.
* Please refer to EntityReportingFunction interface for the supported container types.
* Note that the containers with the given name should exist in the system.
*/
public Set<String> getApplicableContainerNames(ContainerType cType)
{
TreeSet <String> applContainerNames = new TreeSet <String>();

if (cType.equals(ContainerType.CATALOG)) {
applContainerNames.add("Appl Catalog 1");
applContainerNames.add("Appl Catalog 2");
}
else if (cType.equals(ContainerType.HIERARCHY)) {
applContainerNames.add("Appl Hierarchy 1");
applContainerNames.add("Appl Hierarchy 2");
}
else if (cType.equals(ContainerType.LOOKUPTABLE)) {
applContainerNames.add("Appl Lookup Table 1");
applContainerNames.add("Appl Lookup Table 2");
applContainerNames.add("Appl Lookup Table 3");
}
else if (cType.equals(ContainerType.ORGANIZATIONHIERARCHY)) {
applContainerNames.add("Appl Organization Hierarchy 1");
}

return applContainerNames;
}

/**
* Returns the names of the non-applicable containers in the system, for the given container type.
* Please refer to EntityReportingFunction interface for the supported container types.
* Note that the containers with the given name should exist in the system.
*/
public Set<String> getNonApplicableContainerNames(ContainerType cType)
{
TreeSet <String> nonApplContainerNames = new TreeSet <String>();

if (cType.equals(ContainerType.CATALOG)) {
nonApplContainerNames.add("Non Appl Catalog 1");
nonApplContainerNames.add("Non Appl Catalog 2");
}
else if (cType.equals(ContainerType.HIERARCHY)) {
nonApplContainerNames.add("Non Appl Hierachy 1");
}
else if (cType.equals(ContainerType.LOOKUPTABLE)) {
nonApplContainerNames.add("Non Appl Lookup Table 1");
nonApplContainerNames.add("Non Appl Lookup Table 2");
}
else if (cType.equals(ContainerType.ORGANIZATIONHIERARCHY)) {
; //Assume all of the org hierarchies are applicable...
}

return nonApplContainerNames;
}

/**
* Returns the names of the entities in a given container.
* Please refer to EntityReportingFunction interface for the supported container types.
* Note that the entity names are purely user defined.
*/
public Set<String> getEntityTypes(PIMObject container, ContainerType cType)
{
TreeSet <String> entityTypes = new TreeSet <String>();

if (cType.equals(ContainerType.CATALOG))
{
//The code below handles two applicable catalogs
if (((Catalog)container).getName().equals("Appl Catalog 1")) {
entityTypes.add("Manufactured goods");
entityTypes.add("Assets and commodities");
}
else if (((Catalog)container).getName().equals("Appl Catalog 2"))
{
entityTypes.add("Locations");
entityTypes.add("Trading partners");
}
}
else if (cType.equals(ContainerType.HIERARCHY))
{
//The code below returns the same entity types for all applicable
//hierarchies
entityTypes.add("Products");
entityTypes.add("Sub Products");
}
else if (cType.equals(ContainerType.LOOKUPTABLE))
{
entityTypes.add("Company Codes");
entityTypes.add("Employee Departments");
}
else if (cType.equals(ContainerType.ORGANIZATIONHIERARCHY))
{
entityTypes.add("Department 1");
entityTypes.add("Department 2");
}

return entityTypes;
}

/**
* Returns the count of the entities of the given type in a given container.
* Please refer to EntityReportingFunction interface for the supported container types.
* Note that the entity names are purely user defined.
*/
public int getEntityCountByType(PIMObject container, ContainerType cType, String entityType)
{
int count = 0;
//Optionally, the specific container names can be checked below in-
//addition to the container type.
if (cType.equals(ContainerType.CATALOG)) {
if (entityType.equalsIgnoreCase("Manufactured goods"))
count = 100;
//The logic to compute the entity counts is user defined. For example, using
//Java APIs, code can compute the number of items under a given catalog.
//In this case, a hardcoded count is being returned.
else if (entityType.equalsIgnoreCase("Assets and commodities"))
count = 250;
else if (entityType.equalsIgnoreCase("Locations"))
count = 300;
else if (entityType.equalsIgnoreCase("Trading partners"))
count = 1000;
}
else if (cType.equals(ContainerType.HIERARCHY)) {
if (entityType.equalsIgnoreCase("Products"))
count = 10;
else if (entityType.equalsIgnoreCase("Sub Products"))
count = 25;
}
else if (cType.equals(ContainerType.LOOKUPTABLE)) {
if (entityType.equalsIgnoreCase("Company Codes"))
count = 30;
else if (entityType.equalsIgnoreCase("Employee Departments"))
count = 500;
}
else if (cType.equals(ContainerType.ORGANIZATIONHIERARCHY)) {
if (entityType.equalsIgnoreCase("Department 1"))
count = 99;
else if (entityType.equalsIgnoreCase("Department 2"))
count = 88;
}

return count;
}

/**
* Returns user-defined comments for a given container.
* Please refer to EntityReportingFunction interface for the supported container types.
*/
public String getComment(PIMObject container, ContainerType cType)
{
String comment = "";
//Only container type is checked and same comment returned for all container types.
if (cType.equals(ContainerType.CATALOG))
{
comment = "Attribute used as product identifier: [BasicProductSpec/name]";
}
else if (cType.equals(ContainerType.HIERARCHY))
{
comment = "All occurrences of attribute SKU have been counted";
}
else if (cType.equals(ContainerType.LOOKUPTABLE))
{
comment = "Generic comment for all lookup tables";
}
else if (cType.equals(ContainerType.ORGANIZATIONHIERARCHY))
{
comment = "Generic comment for all org hierarchies";
}

return comment;
}
}

What to do next
After you upload the extension point implementation class and register the URL in the system, you can run and view the entity count report.

1. Click System Administrator > Entity Count Reports.
2. Click Run report. A Run report dialog is displayed.
3. Choose whether you want to run the job in the background or immediately, and click OK. If you select the background option, the entity report runs on the scheduler as a
job instead of on the application server. This option provides a link so that you can check the status of the job. If you select to run immediately, the report runs and
loads the details in the right navigation pane.


Integrating with other products


IBM® Product Master includes a number of developer samples that you can use to help you develop your own Product Master solutions and customizations.

Integrating IBM Content Integrator


You need to configure IBM Content Integrator by using the content_management_system_config.xml and
content_management_system_properties.xml files.
Integrating IBM InfoSphere Physical Master Data Management
A solution can involve integration between Product Master and InfoSphere® Physical Master Data Management. The Solution Toolkit is part of a continuing
effort to reduce time to value by implementing industry-specific content and capabilities.
Integrating IBM InfoSphere Information Server
You can integrate IBM InfoSphere® Information Server with Product Master. This integration enables use of IBM InfoSphere Business Glossary and IBM InfoSphere
DataStage® with Product Master.
Integrating Operational Decision Manager
You can use Product Master with IBM Operational Decision Manager to allow viewing, creating, editing, and deleting Advanced Business Rules applicable to a
Product Master item or category for different types of business decisions through use of the Product Master user interface.
Integrating WebSphere Commerce
Advanced Catalog Management for WebSphere® Commerce is a solution asset from Product Master. This solution is a prebuilt configuration of Product Master,
ready for use by WebSphere Commerce users.
Integrating WebSphere MQ
You can use Product Master with WebSphere MQ to connect Product Master with enterprise applications to send and receive messages.
Integrating scheduler applications
You can integrate scheduler applications with Product Master so that the scheduler applications can schedule jobs to run from the command line.
Integrating connectors
You can configure various connectors with the IBM Product Master application.


Integrating IBM Content Integrator


You need to configure IBM® Content Integrator by using the content_management_system_config.xml and content_management_system_properties.xml
files.

The content_management_system_config.xml file contains values for several variables such as repo and user ID. For more information about configuring, see
Configuring the content management system.

You configure your login credentials in content_management_system_properties.xml file. For more information about the properties, see
content_management_system_properties.xml.

IBM Content Integrator works as the interface between Product Master and any content management system. Content management systems often store unstructured
data, such as product images, specification data sheets, warranty documents, and demonstrations, which provide relevant and necessary contexts for products. This
unstructured data or these product attributes are both called content. Ensure that you are familiar with the following key concepts.

content
Refers to any file, for example, an image, document, worksheet, or media file that is stored in the content management system repository.
document
Refers to any file that is not contained within the content management system repository.
URN (Uniform Resource Name)
The link to the content that is present in the content management system. The format of the link is vbr:/repositoryID/itemID/versionID/itemType (see the parsing sketch after this list).
Where:

repositoryID
The name of the connector.
itemId
The unique identifier of the content.
versionID
The version number of the content.
itemType
The class of content in the content management system.

Data maps
A map that is created by using the IBM Content Integrator admin console. You use a data map to search for content in multiple content management systems. This
map contains the mapping of attributes from different content management systems. After a data map is created in IBM Content Integrator, add the name of
the data map in the content_management_system_config.xml file.
Item class
An item class is created in the content management system. You use an item class when you add content to the content management system. The item class can
be created through the content management system administration console, where you can select the attributes to contain within that item class. The Add screen
contains attributes from within the item class. Any content that you want to add to the content management system is of this class and also contains attributes of
this class.
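As a simple illustration of the URN format only (plain string handling; the sample URN value and variable names are invented for illustration, and the CMSContentURN Java API that is described later in this section is the supported way to work with URNs), the following sketch splits a URN into its four parts:

// Assumed sample value; a real URN comes from the content management system.
String urn = "vbr:/DB2PAL/A1001001A09A08A04022F18370/1005/firstItemType1";
String[] parts = urn.substring("vbr:/".length()).split("/");
String repositoryId = parts[0]; // the name of the connector
String itemId = parts[1];       // the unique identifier of the content
String versionId = parts[2];    // the version number of the content
String itemType = parts[3];     // the class of content in the content management system
System.out.println(repositoryId + " " + itemId + " " + versionId + " " + itemType);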

You must install and configure IBM Content Integrator before you can perform the following tasks. You can use either the user interface or Java™ API features to perform
the tasks.

After you configure the content management system, you can:

Add content to a content management system
Search for content in a content management system
Associate content to an item
View content in a content management system

By integrating Product Master with IBM Content Integrator to access content management systems, you can better manage all attributes and data for your products.

The following image depicts the data flow between content management system repositories and Product Master.

Installing IBM Content Integrator for IBM Content Management


To install and configure IBM Content Integrator to work with IBM Product Master, you must install the IBM Content Management on the server-side computer.
Installing IBM Content Integrator for IBM FileNet P8 Platform
To install and configure IBM Content Integrator to work with IBM Product Master, you must install the IBM FileNet® P8 Platform on the server-side computer.
Configuring the content management system
You use the content_management_system_config.xml file to configure the content management system, and to define the connection that the IBM Content
Integrator uses to connect to the repository.
Configuring IBM Product Master to use IBM Content Integrator services
To manage your content in the content management system, you must configure IBM Product Master to use content management system services.
Managing content in a content management system
After you configure access to the content management system, you can search for content, add content, associate content to an item, or view content in a content
management system.
Content Management troubleshooting checklist
Use the content management troubleshooting checklist to resolve common integration issues with IBM Product Master.
Java API for the content management system
IBM Product Master provides three Java API interfaces, CMSInstance, CMSManager, and CMSContentURN, that you can use to manage content in the content
management system.


Installing IBM Content Integrator for IBM Content Management


To install and configure IBM® Content Integrator to work with IBM Product Master, you must install the IBM Content Management on the server-side computer.

Procedure
1. Install IBM Content Management on the server-side computer. For more information, see IBM Content Management.
2. Install IBM Content Integrator on the server-side computer by using the "Full Install" option. For more information, see IBM Content Integrator.
3. Install IBM Db2® Information Integrator on the server-side computer and on the computer that is running the Product Master instance.
4. Modify the config.sh file to set the IBMCMROOT system variable and to include the following .jar files. This file can be found in the <CI home>/bin folder of the
server-side computer.

IBMCMROOT=/opt/IBM/db2cmv8
export IBMCMROOT
VBR_CLASSPATH=$IBMCMROOT/lib/cmbsdk81.jar:$VBR_CLASSPATH
export VBR_CLASSPATH
VBR_CLASSPATH=$IBMCMROOT/lib/cmbview81.jar:$VBR_CLASSPATH
export VBR_CLASSPATH
VBR_CLASSPATH=$IBMCMROOT/lib/cmb81.jar:$VBR_CLASSPATH
export VBR_CLASSPATH
VBR_CLASSPATH=/opt/IBM/db2/V8.1/java/db2java.zip:$VBR_CLASSPATH
export VBR_CLASSPATH
VBR_ALLJARS=$IBMCMROOT/cmgmt:$VBR_CLASSPATH
export VBR_CLASSPATH

5. Ensure that Db2 is started by the Db2 instance owner on the server-side computer.
6. Ensure that the resource manager application is started on the server-side computer. For example,
/opt/IBM/WebSphere51/bin>./startServer.sh icmrm
7. Run the <CI home>/bin/RMIBridge.sh script in the server-side computer.
8. Install IBM Content Integrator on the computer that is running the Product Master instance by using the "Connector only install" option.
9. Modify the config.sh file as described in step 4.
10. Create a connector by running the <CI home>/bin/Admin.sh script on the computer that is running the Product Master instance. Running this script displays a GUI.
You can right-click the connector name to test the connection to IBM Content Management.
11. Create a data map by using the Admin.sh GUI, and selecting the appropriate item class that was created on IBM Content Management.
12. Create a subject on the computer that is running the Product Master instance by using the following command:

<CI home>/bin/run_sample.sh commandline.SSOAdminTool

13. Select Add repo credentials after the subject is created. For more information, see Configuring the single sign-on administration tool.

14. Run the <CI home>/bin/VeniceBridge.sh script in the computer that is running the Product Master instance to get the URL of the single sign-on (SSO) server.
15. Run the following command to create a folder:

./run_sample.sh commandline.CreateFolder vbr:/<name-of-connector>/-1//FOLDER <Repo-userid> <passwd> ICMDRFOLDERS pimdir

16. Modify the $TOP/bin/conf/env_settings.ini file.

# extensions
[extension cms]
enabled=yes
services=all
home=/opt/IBM/ContentIntegrator
jvm_opts=-Dvbr.home%EQUALS%/opt/IBM/ContentIntegrator

Where,
/opt/IBM/ContentIntegrator is the path of the CI installation home.
17. Set the enable_content_reference_check parameter to true in the $TOP/etc/default/common.properties file.
For example, enable_content_reference_check=true
18. Modify the content_management_system_config.xml file to set values for repo and user ID. Following is a sample code snippet.

<repositories>
<content_integration>
<username><![CDATA[pimsso]]></username>
<password><![CDATA[pimsso]]></password>
<data_map><![CDATA[Data Map 4]]></data_map>
<sso_url><![CDATA[rmi://localhost:1250/SSOServer]]></sso_url>
</content_integration>
<repository>
<repo_name><![CDATA[DB2PAL]]></repo_name>
<item_class><![CDATA[firstItemType1]]></item_class>
<folder_urn><![CDATA[vbr:/DB2PAL/ICMDRFOLDERS.A1001001A09A08A04022F18370.A09A08A04022F18370.1005//FOLDER]]>
</folder_urn>
</repository>
</repositories>

Where,

username
This parameter is the user name to log in to the Content Integrator. This parameter is the Subject name that you created during single sign-on configuration.
password
This parameter is the password to log in to the Content Integrator. This parameter is the Subject password that you created during single sign-on configuration.
data_map
This parameter is the data map name that you use to search for contents.
sso_url
This parameter is the URL of the server where the single-sign on server is running. You can run the VeniceBridgeServices.sh script to get the URL of the
single-sign on server. This script is in the <ci_home>/bin folder.
repo_name
This parameter is the name of the repository connector. This parameter is the name of the connector that you created by using the Admin.sh script. This
script is in the <ci_home>/bin folder.
item_class
This parameter is the item class name.
folder_urn
This parameter is the URN of the folder where content is added.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Installing IBM Content Integrator for IBM FileNet P8 Platform


To install and configure IBM® Content Integrator to work with IBM Product Master, you must install the IBM FileNet® P8 Platform on the server-side computer.

Procedure
1. Install IBM FileNet P8 Platform on the server-side computer. For more information, see IBM FileNet P8 Platform.
2. Install IBM Content Integrator on the server-side computer by using the "Full Install" option. For more information, see IBM Content Integrator.
3. Copy the AE_HOME\Workplace\WEB-INF\WcmApiConfig.properties file from the IBM FileNet P8 server to the IICE_HOME\lib folder (RMI bridge connector).
4. Edit the properties in the WcmApiConfig.properties file to point to the correct IBM FileNet P8 server.
5. Add the following line to the WcmApiConfig.properties file to specify the key for locating a UserTransaction reference during an IBM FileNet P8 JNDI lookup:
TxJndiKey=jta/usertransaction
6. Enable access to the AE_HOME\Workplace\WEB-INF\lib\javaapi.jar and Jace.jar files by copying them to the IICE_HOME\lib folder.
7. Set the WAS_HOME variable in the IICE_HOME\bin\RMIBridge.bat file. The WAS_HOME variable is typically in the folder on the same system where the RMI bridge
is running. For example,
SET WAS_HOME="C:\Program Files\IBM\WebSphere\AppServer"
8. Add the following properties to the Java™ command in the RMIBridge.bat file. Place these properties after the -Dvbr.home="%VBR_HOME%" ^ line.
-Djava.naming.factory.initial=%JNDI_CLIENT_FACTORY%
9. Ensure that %JNDI_CLIENT_FACTORY% is replaced by the JNDI initial context factory class.
For example, the WebSphere® Application Server typically uses com.ibm.websphere.naming.WsnInitialContextFactory to perform JNDI operations.
-Djava.naming.provider.url=%JNDI_CLIENT_PROVIDER%
10. Ensure that %JNDI_CLIENT_PROVIDER% is replaced by the URL of the JNDI naming provider that is hosting FileNet P8 server.

For example, iiop://hostname:2809
The host name is the host name or IP address of the IBM FileNet P8 system.
Port 2809 is the application server default.
Each of the following commands should be on one line:

-Djava.ext.dirs=%WAS_HOME%\java\jre\lib\ext;%WAS_HOME%
\java\jre\lib;%WAS_HOME%\classes;%WAS_HOME%\lib;
%WAS_HOME%\lib\ext

-Xbootclasspath/p:%WAS_HOME%\lib\ibmorb.jar;%WAS_HOME%
\profiles\default\properties

-Dcom.ibm.CORBA.ConfigURL=file:/%WAS_HOME%\profiles\
default\properties\sas.client.props

Add the WcmApiConfig.properties file:

-Dfilenet.wcmapiconfig="%VBR_HOME%\lib\WcmApiConfig.properties"
11. Use the Content Integrator administration tool to set the following connector properties:

Remote Server URL
cemp:iiop://hostname:2809/FileNet/Engine
Remote Server Upload URL
cemp:iiop://hostname:2809/FileNet/Engine
Remote Server Download URL
cemp:iiop://hostname:2809/FileNet/Engine

12. Create a connector by running the <CI home>/bin/Admin.sh script on the computer that is running the Product Master instance. Running this script displays a GUI.
You can right-click the connector name to test the connection to the IBM FileNet P8 Platform.
13. Create a data map by using the Admin.sh GUI, and selecting the appropriate item class that was created on the IBM FileNet P8 Platform.
14. Create a subject on the computer that is running the Product Master instance by using the following command:
<CI home>/bin/run_sample.sh commandline.SSOAdminTool
15. Select Add repo credentials after the subject is created. For more information about the usage of the SSOAdminTool, see Configuring the single sign-on
administration tool.
16. Run the VeniceBridge.sh script on the computer that is running the Product Master instance to get the URL of the single sign-on (SSO) server. This script is available
in the <CI Home>/bin folder.
17. Run the following command to create a folder:

./run_sample.sh commandline.CreateFolder vbr:/<name-of-connector>/-1//FOLDER <Repo-userid> <passwd> FOLDER pimdir

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Configuring the content management system


You use the content_management_system_config.xml file to configure the content management system, and to define the connection that the IBM® Content
Integrator uses to connect to the repository.

Before you begin


You must install and configure IBM Content Integrator before you configure the content management system. For more information about installing and configuring IBM
Content Integrator, see: IBM Content Integrator product documentation.

About this task


Configure the common attributes that enable you to log in to the content management system, and the repository information for each content management system repository that you want to connect to.

Procedure
1. Open the content_management_system_config.xml configuration file. The file is in the $TOP/etc/default/ directory.
2. Modify the entries on the <content_integration> tag to specify the user name, password, and URL. This information is required to connect to the IBM Content
Integrator single sign-on server.
3. Modify the entries on the <repository> tag to define the content management system that you are connecting to. You must specify the name of the repository
connector, the item class, and the URN of the folder where the content is added.
Note: You can add multiple <repository> tags to define connectors to multiple content management systems, as shown in the example after this procedure.
4. Save the content_management_system_config.xml configuration file.
5. Modify the env_settings.ini file to enable content management system integration as follows. This file is available in the $TOP/bin/conf directory.

# extensions
[extension cms]
enabled=yes
services=all
home=/opt/IBM/ContentIntegrator
jvm_opts=-Dvbr.home%EQUALS%/opt/IBM/ContentIntegrator

Where /opt/IBM/ContentIntegrator is the path of the Content Integrator installation home directory.
6. Set the parameter enable_content_reference_check to true in the common.properties file.

This file is available in the $TOP/etc/default directory.
7. Add values for the variables, such as the repository name and user name, in the content_management_system_config.xml file.
This file is available in the $TOP/etc/default directory.
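The following sketch shows how a completed file with two repositories might look. The values, including the repository names DB2PAL and DB2CM, the item classes, and the folder URNs, are illustrative only; use the values from your own connectors:

<repositories>
   <content_integration>
      <username><![CDATA[pimsso]]></username>
      <password><![CDATA[pimsso]]></password>
      <data_map><![CDATA[Data Map 4]]></data_map>
      <sso_url><![CDATA[rmi://localhost:1250/SSOServer]]></sso_url>
      <default_repo><![CDATA[DB2PAL]]></default_repo>
   </content_integration>
   <repository>
      <repo_name><![CDATA[DB2PAL]]></repo_name>
      <item_class><![CDATA[firstItemType1]]></item_class>
      <folder_urn><![CDATA[vbr:/DB2PAL/ICMDRFOLDERS.../FOLDER]]></folder_urn>
   </repository>
   <repository>
      <repo_name><![CDATA[DB2CM]]></repo_name>
      <item_class><![CDATA[secondItemType]]></item_class>
      <folder_urn><![CDATA[vbr:/DB2CM/ICMDRFOLDERS.../FOLDER]]></folder_urn>
   </repository>
</repositories>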

content_management_system_properties.xml File
You define the values in the content_management_system_properties.xml file to configure your content management system.

Related tasks
Configuring IBM Product Master to use IBM Content Integrator services

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

content_management_system_properties.xml File
You define the values in the content_management_system_properties.xml file to configure your content management system.

You cannot specify or configure a repository in the single sign-on server with the repository name that contains non-English characters, for example French. Also, you
should not use any special characters in the repository name when you create the connector in the IBM® Content Integrator, for example !@$%^&()=+. You need to
configure your login credentials in the content_management_system_properties.xml file. There are two sections in the content_management_system_properties.xml file.

Common attributes
The common attributes are included in the <content_integration> tag set and consist of:

username
The user ID to connect to the IBM Content Integrator. IBM Product Master logs in to the IBM Content Integrator by using the credentials that are configured in the
XML file.
password
The password to connect to the IBM Content Integrator. IBM Content Integrator logs in to all of the content management systems that are configured in the single
sign-on server.
data_map
The name of the data maps used to perform searches on the configured repositories.
sso_url
The URL of the server where the single sign-on server is running.
default_repo
The default content management system.

Specific content management system attributes


The specific content management system attributes are included in the <repository> tag set and consist of:

repo_name
The name of the repository connector.
item_class
The name of the item class for the contents.
folder_urn
The URN of the folder where the content is added.
build_script
The name of the build script.
runOnLoad
The attribute that controls whether the build script is run automatically or manually. Set to true if you want it to run automatically.

content_management_system_config.xml Configuration file


The XML file takes the following format:

<repositories>
   <content_integration>
      <username><![CDATA[]]></username>
      <password><![CDATA[]]></password>
      <data_map><![CDATA[]]></data_map>
      <sso_url><![CDATA[]]></sso_url>
      <default_repo><![CDATA[]]></default_repo>
   </content_integration>

   <repository>
      <repo_name><![CDATA[]]></repo_name>
      <item_class><![CDATA[]]></item_class>
      <folder_urn><![CDATA[vbr:/repository ID/item ID/verID/FOLDER]]></folder_urn>
      <build_script runOnLoad="true">pim script</build_script>
   </repository>
</repositories>

Where:

<content_integration>
A common entry.
<repository>
A content management system-specific entry.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Configuring IBM Product Master to use IBM Content Integrator services


To manage your content in the content management system, you must configure IBM® Product Master to use content management system services.

Procedure
1. Open the $TOP/bin/conf/env_settings.ini file.
2. Add or edit the following section in env_settings.ini:

[extension cms]
enabled=no
services=all
home=/opt/foo
jvm_opts=-Dvbr.home%EQUALS%<cms_home>

a. Set the enabled parameter to yes.
b. Set the value of the home parameter and <cms_home> in the jvm_opts parameter to the root of the content management system installation directory.
3. Save the env_settings.ini file.
4. Restart the Product Master services.
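For example, assuming that Content Integrator is installed in the /opt/IBM/ContentIntegrator directory (the same path that is used in the installation steps), the completed section might look like this:

[extension cms]
enabled=yes
services=all
home=/opt/IBM/ContentIntegrator
jvm_opts=-Dvbr.home%EQUALS%/opt/IBM/ContentIntegrator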

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Managing content in a content management system


After you configure access to the content management system, you can search for content, add content, associate content to an item, or view content in a content
management system.

About this task


To manage content in a content management system, you must create or update a primary or secondary spec to add or have an external content reference attribute type.

Procedure
1. Add an item. For more information, see Adding content to the content management system.
2. Search for a content in the content management system. For more information, see Searching the content management system.
3. Associate the content, for example a product image, as an attribute of the item. For more information, see Associating a content management system document to an item.
4. View the content in the content management system. For more information, see Viewing content in the content management system.

Adding content to the content management system


You can add content to the content management system by using the external content reference attribute Add Content screen. In order to select a particular
content to be associated with an IBM Product Master item or category you need to add content to the content management system.
Searching the content management system
You can search the content management system by using the pre-configured metadata in the data map. You can search on multiple configured content
management systems by using a data map. When you search the content management system, you are able to locate and select a particular content to be
associated with an IBM Product Master item or category.
Associating a content management system document to an item
When you add or update an item, you can associate a content management system document to an item after you locate the document in the content management
system. Unstructured content, for example, product images, specification data sheets, warranty documents, flash and video demonstrations provides relevant and
necessary context to products, and hence must be associated with products.
Viewing content in the content management system
You can view content in a content management system through the view feature for any external content reference type attribute. Viewing content in the content
management system enables you to see whether there is a valid content reference that is stored in any one attribute.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Adding content to the content management system

You can add content to the content management system by using the external content reference attribute Add Content screen. Before you can select particular content to associate with an IBM® Product Master item or category, you must add the content to the content management system.

Before you begin


1. Install and configure IBM Content Integrator.
2. Configure the content management system by using the configuration file.
3. Define a spec.
4. Add an external content reference type attribute to the spec.

If you want to perform a case-insensitive search, configure the system before you add content to it. Only content that you add to the content management system after you configure the custom properties can be found by a case-insensitive search.

Procedure
Add content to the content management system. Use either of the following methods: user interface or Java™ API.
Option: User interface
a. Open an item or category in the Single Edit screen by using one of the following ways:
i. Click an item or category in the pane navigation.
ii. Open an item to edit from the Item View screen.
iii. Select a step from the collaboration area.
b. Click + next to an external content reference attribute on the Single Edit screen.
Note: When you edit an item in a catalog and you click + for a date attribute, ensure that you enter a number that represents a time, for example, 1111111111.
c. Find the content file from the local file system.
d. Select a content management system that you want to add the content to.
e. Ensure that the values for the metadata attributes display on the content management system screen.
f. Click Upload to add the content and the metadata to the content management system repository under the folder that is configured for this repository.

Option: Java API
The following sample Java API code populates the metadata attributes and adds content to a repository by using the CMSManager and CMSInstance Java interfaces.
Note: The following two statements should not be called repeatedly in a Java API:

CMSManager CMSMgr = ctx.getCMSManager();
CMSInstance CMSInst = CMSMgr.getCMSInstance();

Otherwise, this might lead to OutOfMemory errors or an increase in the number of applications that are connecting to the Content Management System from collaborative MDM. The following sample code uses these statements correctly.

Context ctx = null;

try
{
    ctx = PIMContextFactory.getContext("Admin", "xxx", "MyCompany");

    CMSManager CMSMgr = ctx.getCMSManager();
    CMSInstance CMSInst = CMSMgr.getCMSInstance();

    String repositoryName = "DB2CM";
    String fileToBeAdded = Configuration.getValue("tmp_dir") + "/test.xml";
    String displayFileName = "test.xml";

    HashMap<String, String> metadataAttributesHM = new HashMap<String, String>();

    Map<String, String> metaProps = CMSInst.getMetaDataAttributes(repositoryName);

    Iterator<String> it = metaProps.keySet().iterator();

    while (it.hasNext())
    {
        // meta data attribute
        String property = (String) it.next();

        // meta data attribute value
        String value = "val1";

        metadataAttributesHM.put(property, value);
    }

    File file1 = new File(fileToBeAdded);

    // addContent returns the URN of the added content as a string
    String addedContentReference = CMSInst.addContent(repositoryName, file1, displayFileName, metadataAttributesHM);

    System.out.println(" The content reference of the added file is : \n" + addedContentReference);
}
catch (PIMAuthorizationException ae)
{
    // Expected a failure
    System.out.println("Authorization Failure");
    return;
}
catch (IllegalArgumentException iae)
{
    System.out.println(" Passed argument is null or empty ");
    return;
}
catch (PIMInternalException ie)
{
    System.out.println(" Internal Error ");
    return;
}
To provide metadata information programmatically through Java API, set the item-attribute values on the Add Content screen.
Note: If the selected content management system is set as your default system during configuration, the metadata attributes are already set when the
Add Content screen displays. For more information, see content_management_system_properties.xml File.
The following sample Java API code checks to see whether a specified URN is a valid one syntactically and also checks the URN's existence by using
the CMSContentURN Java interface.

Context ctx = null;

try
{
    ctx = PIMContextFactory.getContext("Admin", "xxx", "MyCompany");

    CMSManager CMSMgr = ctx.getCMSManager();
    CMSInstance CMSInst = CMSMgr.getCMSInstance();

    // login to all the repositories through single sign on
    CMSInst.logon();

    System.out.println("\n\n ------------ Displaying cases for validation of URN ------------ ");

    // sample for APIs on validation of URN

    // invalid URN
    String URN1 = "vbr:/firstItemType.A1001001A09B24B35145B46524.A09B24B35145B46524.1022/1/CONTENT";

    // valid but non-existing URN
    String URN2 = "vbr:/DB2PAL/firstItemType.A4B35145B46524.A09B24B35145B46524.1022/1/CONTENT";

    // valid and existing URN
    String URN3 = "vbr:/DB2PAL/firstItemType.A1001001A09B24B35145B46524.A09B24B35145B46524.1022/1/CONTENT";

    CMSContentURN urnObj1 = CMSInst.getCMSContentURN(URN1);
    System.out.println(" The URN1 " + URN1 + " is valid? : " + urnObj1.isValid());

    CMSContentURN urnObj2 = CMSInst.getCMSContentURN(URN2);
    System.out.println(" The URN2 " + URN2 + " is valid? : " + urnObj2.isValid());
    System.out.println(" The URN2 " + URN2 + " is existing? : " + urnObj2.isExisting());

    CMSContentURN urnObj3 = CMSInst.getCMSContentURN(URN3);
    System.out.println(" The URN3 " + URN3 + " is valid? : " + urnObj3.isValid());
    System.out.println(" The URN3 " + URN3 + " is existing? : " + urnObj3.isExisting());
}
catch (Throwable e)
{
    throw new PIMInternalException(e.getLocalizedMessage(), e);
}

The following sample Java API code retrieves an already uploaded document from its URN through the Java API. Assume that urn is the string URN for the content management system object that is stored within the content management system.

CMSContentURN urnObj = CMSInst.getCMSContentURN(urn);

/* content object holds the reference of that object in CMS */

InputStream content = CMSInst.getURNContentAsStream(urnObj);

/* one can use this stream to redirect the contents to a file */
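For example, the following minimal sketch redirects the retrieved stream to a local file. It assumes the content stream from the previous snippet, the java.io imports, and a surrounding try/catch that handles IOException; the target path /tmp/retrieved_content.bin is illustrative only:

// content is the InputStream returned by getURNContentAsStream above.
// Copy it to a local file in fixed-size chunks, then close both streams.
FileOutputStream out = new FileOutputStream("/tmp/retrieved_content.bin");
byte[] buf = new byte[1024];
for (int len; (len = content.read(buf)) != -1;)
{
    out.write(buf, 0, len);
}
out.close();
content.close();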

What to do next
Now you can associate a content management system document to an item.

Related tasks
Viewing content in the content management system
Associating a content management system document to an item
Searching the content management system

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Searching the content management system


You can search the content management system by using the pre-configured metadata in the data map. You can search on multiple configured content management
systems by using a data map. When you search the content management system, you are able to locate and select a particular content to be associated with an IBM®
Product Master item or category.

Before you begin
1. Install and configure IBM Content Integrator.
2. Configure the content management system by using the configuration file.
3. Define a spec.
4. Add an external content reference type attribute to the spec.

Procedure
Search the content management system. Use either of the following methods: user interface or Java™ API.
Option: User interface
a. Open an item or category in the Single Edit screen by using one of the following ways:
i. Click an item or category in the pane navigation.
ii. Open an item to edit from the Item View screen.
iii. Select a step from the collaboration area.
b. Click the search icon next to an external content reference attribute on the Single Edit screen. The Search screen opens.
c. Select at least one repository to perform your search on. Choose AND or OR search criteria.
d. Provide values for the search attributes, and click Search. The search results display in the same screen.
e. Select a search result to display the content properties, the content versions, all items that reference this content, or, if the content is an image type, a thumbnail preview. Click Associate Document to associate that content with the external content reference attribute of the item. You can also select a version in the Versions tab and click Associate Document to associate that version of the content with the external content reference attribute of the item.

Option: Java API
The following sample Java API code searches the content management system by using the CMSManager and CMSInstance Java interfaces.
Note: The following two statements should not be called repeatedly in a Java API:

CMSManager CMSMgr = ctx.getCMSManager();
CMSInstance CMSInst = CMSMgr.getCMSInstance();

Otherwise, this might lead to OutOfMemory errors or an increase in the number of applications that are connecting to the Content Management System from collaborative MDM. The following sample code uses these statements correctly.

Context ctx = null;
String property = null;
String value = null;

try
{
    ctx = PIMContextFactory.getContext("Admin", "xxx", "MyCompany");

    CMSManager CMSMgr = ctx.getCMSManager();
    CMSInstance CMSInst = CMSMgr.getCMSInstance();

    // get datamap
    String dataMapName = CMSMgr.getDataMapName();

    // get the repositories to be searched on
    String[] selectedRepositories = new String[] {"DB2PAL", "DB2CM"};

    // get datamap properties
    Map<String, String> dataMapProps = CMSInst.getDataMapProperties();

    // create hashmap for datamap properties and their values to be searched on
    HashMap<String, String> dataMapPropsValuesHM = new HashMap<String, String>();

    Iterator<String> it = dataMapProps.keySet().iterator();

    while (it.hasNext())
    {
        property = (String) it.next();

        if (property.equals("Name"))
        {
            // set the value for the property
            value = "Mith8";
            dataMapPropsValuesHM.put(property, value);
        }
        else if (property.equals("Item_ID"))
        {
            value = "A1001001A09A10B02729B61702";
            dataMapPropsValuesHM.put(property, value);
        }
        else
        {
            // no other search parameter
            // user can add more search parameters here with more 'else if' cases
        }
    }

    // creating the search query from the hashmap of data map
    // properties to be searched on and their values
    /**
     * In this sample, searchQuery will ultimately have the following string:
     * Name='Mith8' OR Item_Id=''
     * User can directly write a valid search query String and pass it to the search API.
     **/
    String searchQuery = null;

    Iterator<String> itr = dataMapPropsValuesHM.keySet().iterator();

    while (itr.hasNext())
    {
        property = (String) itr.next();
        value = (String) dataMapPropsValuesHM.get(property);

        if (!value.equals(""))
        {
            if (searchQuery != null)
            {
                // here '=' is the operator used. Any other valid operator
                // of Content Integrator can be used
                searchQuery = searchQuery + " operation " + property + "='" + value + "'";
            }
            else
            {
                searchQuery = property + "='" + value + "'";
            }
        }
    }

    // specify whether 'AND' or 'OR' operation to perform
    searchQuery = CMSUtils.replaceString(searchQuery, "operation", "OR");

    // search in selected repositories and get the resultset
    IResultSet irs = CMSInst.search(dataMapName, selectedRepositories, searchQuery);
}
catch (PIMAuthorizationException ae)
{
    // Expected a failure
    System.out.println("Authorization Failure");
    return;
}
catch (IllegalArgumentException iae)
{
    System.out.println(" Passed argument is null or empty ");
    return;
}
catch (PIMInternalException ie)
{
    System.out.println(" Internal Error ");
    return;
}

What to do next
Now you can associate a content management system document to an item and view content in the content management system.

Searching content management system using case insensitive searches


In IBM Product Master version 9.0 and earlier, you could search the content management system only with the exact case of the search keyword to retrieve a particular object. You no longer need to remember case-sensitive keywords.
Using a wildcard search string
One way to search without providing the full search string is to use a wildcard search.
Escape characters
Some characters in content integration server queries have special meanings.

Related tasks
Associating a content management system document to an item
Viewing content in the content management system

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Searching content management system using case insensitive searches


In IBM® Product Master version 9.0 and earlier, you could search the content management system only with the exact case of the search keyword to retrieve a particular object. You no longer need to remember case-sensitive keywords.

About this task


There are two different configurations that you need to perform before you can use case insensitive searches. You need to configure the content management system for
adding and searching for special characters. If you do not configure the content management system, you are able to perform only a case-sensitive search.

Configuring uploads to content management system for case insensitive searches


Through the content management system integration, you can add new objects by using the out-of-the-box upload screens. These upload screens are part of the External Content Reference attribute type, which can be added to a spec.
Configuring for performing case insensitive searches
Federated searches in the content management system are relatively disconnected from the upload concept and they depend on datamaps for searches across
repositories. If case insensitive search is chosen in the user interface, the element in the search predicate would be “switched” to the one described in the custom
property and the search key would be upper-cased.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Configuring uploads to content management system for case insensitive searches


Through the content management system integration, you can add new objects by using the out-of-the-box upload screens. These upload screens are part of the External Content Reference attribute type, which can be added to a spec.

About this task


You can add an element to the data model (which stores the uppercase values) and provide the mapping in the connector's custom property section. After you specify the mapping, the right-side element of the mapping is used for storing the uppercase values and is hidden from the user.

During an upload, the values for the left side of a custom map are converted to uppercase, assigned to the right-side element, and passed on for saving.

Consider an example where a content management system data model “book” has an author_name element. The data model “book” is also called an item class within
the content management system.

Procedure
1. Add an element to the model in the content management system server.
For example, the database stores the uppercase string for the author_name element. The name in this case is author_name_up.
Note: The element name here is only an example; the actual name is up to the data modeling administrator. However, provide a name that identifies the derived element as a Product Master reserved element so that it hints at the element that it is mapped to. For this example, the recommended name for holding the uppercase of author_name is UPPER_OF_AUTHOR_NAME_RESERVED. Ensure that none of the element names are similar, and be cautious when you map the derived elements with the Product Master reserved elements.
2. Add a database index to the author_name_up element for faster searches.
a. In the left pane navigation, expand the item class, and then expand the Database Indexes node.
b. Right-click the node and select Explore. The New Database Index window displays.
c. Select the new attribute in the Available Attributes section and click Add.
d. Provide an index name for your attribute index in the Name field.
For example: author_name_up_idx
e. Click OK.
3. Add the new property to the Custom Properties Editor and assign it a property value in the Content Integrator admin client.
a. Open the admin tool.
b. Expand the connectors node and double-click the relevant connector to open the Properties Editor on the right.
c. Click Custom Properties to open the Custom Properties Editor.
d. Click Add to add the custom properties.
e. Add the author_name property to the editor and assign it the author_name_up property value.
4. Save your changes and restart the Content Integrator so that the new changes take effect.
5. Start Product Master.
6. Open the Add content management system window, where only the left side of the key-value pair is visible.
a. Specify the file that you want to upload in the Source file field.
b. Provide a value for the author_name field.
For example, teSt1.
c. Click Upload. The file is uploaded to the content management system with both author_name and author_name_up completed (for example, author_name_up is stored as TEST1).

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Configuring for performing case insensitive searches


Federated searches in the content management system are relatively disconnected from the upload concept and they depend on datamaps for searches across
repositories. If case insensitive search is chosen in the user interface, the element in the search predicate would be “switched” to the one described in the custom
property and the search key would be upper-cased.

About this task


For a case insensitive search, the configured datamap would be provided with similar mappings called custom properties as described for adding content.

Procedure

1. Add a DataMap.
For example, SearchDM.
2. Add an element corresponding to author_name for the repository.
For example, AUTHOR_NAME_RESERVED.
Ensure that the datatype defined for these attributes is STRING.
Ensure that the AUTHOR_NAME_RESERVED is searchable and can be returned in a query.
Ensure that the proper “Supported Operators” are chosen from the screen.
3. Add an element corresponding to author_name_up for the repository.
For example, UPPER_OF_AUTHOR_NAME_RESERVED.
4. Add a custom map between the two elements.
5. Save the element and the configurations.
6. Restart the Content Integrator so that the changes are reflected.
7. Log in to Product Master, navigate to an item with a content management system type attribute, and open the search window.
8. Select the content management system attribute on the item data entry screen. The content management search screen displays, and the dm_author_name element is listed.
Note: Product Master adds two new options for case-insensitive searching in the search operator drop-downs, namely equal-ignorecase and like-ignorecase.
9. Provide values for the element, choose one of the ignore-case options from the drop-down, and search.
10. The ignore-case search is performed at the backend against the UPPER_OF_AUTHOR_NAME_RESERVED element and not the dm_author_name element.
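For example (illustrative, based on the mapping that is described above), a user search of

dm_author_name like-ignorecase 'teSt*'

is rewritten at the backend as:

UPPER_OF_AUTHOR_NAME_RESERVED LIKE 'TEST*'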

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Using a wildcard search string


One way to search without providing the full search string is to use a wildcard search.

About this task


Two types of wildcard are supported:

(*) - matches any number of characters
(?) - substitutes for any one character in the search keyword

Procedure
1. Select the content management system attribute on the item data entry screen.
2. For any attribute, select LIKE or LIKE IGNORECASE in the attribute Option drop-down.
3. Provide a wildcard text string.
For example, Dav*.
4. Click Search. This search displays any results that start with Dav or dav.
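The same wildcard search can also be issued through the Java API search call that is shown in Searching the content management system. The following is a minimal sketch that assumes the CMSInst, dataMapName, and selectedRepositories variables from that sample; the element name dm_author_name is illustrative only:

// Wildcard query: matches values that start with Dav, for example Dave or David.
String searchQuery = "dm_author_name LIKE 'Dav*'";
IResultSet irs = CMSInst.search(dataMapName, selectedRepositories, searchQuery);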

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Escape characters
Some characters in content integration server queries have special meanings.

For example, the asterisk (*) and question mark (?) are used in content integration server queries as multiple-character and single-character wild cards. To search for a
string or word that contains a literal special character, such as a literal asterisk or question mark, you must escape these characters in the query.
For example, to search for the literal value View*Star, you must escape the asterisk in the middle of the string. Use a backslash to escape such characters:

Property search
CompanyName LIKE 'View\*Sta?'
to get
'View*Star'
Full-text search
PHRASE = 'Did you ask why\?'
to get
'Did you ask why?'
PHRASE = 'View\*Star increases ? *'
to get
'View*Star increases a single'
If a backslash (\) needs to be included in a literal value, precede it with another backslash. Complex values can be composed by escaping characters that normally
have some other processing meaning:

PHRASE = 'Is (3 \* 4) \\ 2 = 6 \?' to get 'Is (3 * 4) \ 2 = 6 ?'

Characters that have special meaning in content integration server queries must be escaped if they are to be taken literally. These characters include:

Asterisk (*)
Question mark (?)
Backslash (\)

Single quotation mark (')

Table 1. How special characters are escaped to retrieve specific results

Query criteria string: String LIKE *
Results: String, Strong, Stronger, Strange, Stranger, Str*ng, Str?ng, Str'ng, Str_nger, Str?*r, Str*n*, Str\anger, Str\onger, Str\?nger, Str\ange, Str\?nger, Str??ng, Str??nge, Str??ng*, Str?nger, Str\\?nger, Str\?\?ng, Str\?\?nger, Tom's Cabin
Explanation: All results are returned.

Query criteria string: String LIKE 'Str?n*'
Results: String, Strong, Stronger, Strange, Stranger, Str*ng, Str?ng, Str'ng, Str_nger
Explanation: Replaces a single character for ?, and multiple characters for * at the end of the word.

Query criteria string: String LIKE 'Str?ng'
Results: String, Strong, Str*ng, Str?ng, Str'ng
Explanation: Single-character wildcard search.

Query criteria string: String LIKE 'Str?*r'
Results: Stronger, Stranger, Str?*r, Str_nger
Explanation: The results Strong and Strange are not returned because of the r at the end of the query string.

Query criteria string: String LIKE 'Str\*n*'
Results: Str*ng, Str*nger, Str*n*
Explanation: The character combination \* indicates that the first asterisk is literal and not a wildcard character.

Query criteria string: String = 'Str*ng'
Results: Str*ng
Explanation: Asterisk and question mark characters are meaningful only when the operator is LIKE.

Query criteria string: String LIKE 'Str\\?nger'
Results: Str\anger, Str\onger, Str\?nger
Explanation: The backslash character in the query string is a literal, while the ? character is a single-replacement wildcard character.

Query criteria string: String LIKE 'Str\\?n*'
Results: Str\ange, Str\anger, Str\onger, Str\?n*
Explanation: The backslash character is literal because it is escaped in the query string.

Query criteria string: String LIKE 'Str\?\?ng*'
Results: Str??ng, Str??nge, Str??ng*
Explanation: The values Str\?\?ng and Str\?\?nger are not found because the ? characters are both escaped with backslashes in the query string.

Query criteria string: String LIKE 'Tom's Cabin'
Results: InvalidQueryException
Explanation: Malformed query. The single quotation mark character within the word Tom's must be escaped in the query string.

Query criteria string: String LIKE 'Tom\'s Cabin'
Results: Tom's Cabin
Explanation: The string Tom\'s Cabin is not found because the backslash is used as an escape character in the query string. To find the literal string Tom\'s Cabin, both the backslash and the single quotation mark must be escaped in the query string: Tom\\\'s Cabin.
The single quotation mark and backslash characters must be escaped wherever they are used as literals in a content integration server query because these characters have a special meaning to IBM Content Integrator. If these characters are not escaped, an error might occur or the query might produce undesired results. For example, if you query for the string Tom's Cabin and you do not escape the single quotation mark, the query expression parser returns an exception and the query is not run. To search for Tom's Cabin, you would enter:

String = 'Tom\'s Cabin'

The string is converted by the connector into a form that is acceptable to the repository to produce results for the string Tom's Cabin. This might convert into Tom''s Cabin or Tom's Cabin depending on which repositories are searched. Client applications do not need to accommodate different repositories' accepted inputs because each corresponding connector handles the differences. Client applications can search seamlessly across the various repositories without regard to each repository's special characters or mechanism for escaping special characters.

Each repository has a repository profile that lists invalid character combinations. Invalid character combinations might be listed if the repository does not support
escaping wildcard characters, if there are characters that are special to the repository that cannot be escaped, or if there are any other characters that are illegal to the
repository. Never use these character combinations in a query on a repository with such restrictions.
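Remember that when you build such a query in Java code, each backslash in the query must itself be escaped in the Java string literal. The following is a minimal sketch that reuses the search variables from the earlier Java API samples:

// The query string Tom\'s Cabin: the backslash that escapes the single
// quotation mark must be doubled in Java source code.
String searchQuery = "CompanyName = 'Tom\\'s Cabin'";
IResultSet irs = CMSInst.search(dataMapName, selectedRepositories, searchQuery);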

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Associating a content management system document to an item


When you add or update an item, you can associate a content management system document to an item after you locate the document in the content management system. Unstructured content, for example, product images, specification data sheets, warranty documents, and Flash and video demonstrations, provides relevant and necessary context to products, and hence must be associated with products.

Before you begin


1. Install and configure IBM® Content Integrator.
2. Configure the content management system by using the configuration file.
3. Define a spec.
4. Add an external content reference type attribute to the spec.

Procedure
Associate a document to an item. Use any one of the following methods: search results, adding content or Java™ API.
Option: Search results
a. Search for the content in the content management system. For more information, see Searching the content management system.
b. Select the document from the search results table and click Associate Document.

Option: Adding content
a. Add the content to the content management system. For more information, see Adding content to the content management system.
b. Click Populate value to associate the reference of the content added to the item.

Option: Java API
The following sample Java API code associates a content reference to an item attribute after you add content.
Note: The following two statements should not be called repeatedly in a Java API:

CMSManager CMSMgr = ctx.getCMSManager();
CMSInstance CMSInst = CMSMgr.getCMSInstance();

Otherwise, this can lead to OutOfMemory errors or an increase in the number of applications that are connecting to the Content Management System from collaborative MDM. The following sample code uses these statements correctly.

Context ctx = null;

try
{
    ctx = PIMContextFactory.getContext("Admin", "xxx", "MyCompany");

    CMSManager CMSMgr = ctx.getCMSManager();
    CMSInstance CMSInst = CMSMgr.getCMSInstance();

    String repositoryName = "DB2CM";
    String fileToBeAdded = Configuration.getValue("tmp_dir") + "/test.xml";
    String displayFileName = "test.xml";

    CatalogManager mgr = ctx.getCatalogManager();
    Catalog ctg1 = mgr.getCatalog("Catalog1");
    Item item1 = ctg1.getItemByPrimaryKey("pk1");

    AttributeInstance attrInst = item1.getAttributeInstance(EXTERNAL_CONTENT_REFERENCE_ATTRIBUTE_PATH);

    HashMap<String, String> metadataAttributesHM = new HashMap<String, String>();

    Map<String, String> metaProps = CMSInst.getMetaDataAttributes(repositoryName);

    Iterator<String> it = metaProps.keySet().iterator();

    while (it.hasNext())
    {
        // meta data attribute
        String property = (String) it.next();

        // meta data attribute value
        String value = "val1";

        metadataAttributesHM.put(property, value);
    }

    File file1 = new File(fileToBeAdded);

    // Add the file to CMS
    String cmsURN = CMSInst.addContent(repositoryName, file1, displayFileName, metadataAttributesHM);

    // Associate this URN to the item's external content reference attribute
    attrInst.setValue(cmsURN);

    item1.save();
}
catch (PIMInternalException ie)
{
    System.out.println(" Internal Error ");
    return;
}

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Viewing content in the content management system


You can view content in a content management system through the view feature for any external content reference type attribute. Viewing content in the content
management system enables you to see whether there is a valid content reference that is stored in any one attribute.

Before you begin


1. Install and configure IBM® Content Integrator.
2. Configure the content management system by using the configuration file.
3. Define a spec.
4. Add an external content reference type attribute to the spec.

Procedure
View content in the content management system. Use either of the following methods: user interface or Java™ API.
Option: User interface
a. Open an item or category in the Single Edit screen by using one of the following ways:
i. Click an item or category in the pane navigation.
ii. Open an item to edit from the Item View screen.
iii. Select a step from the collaboration area.
b. Click the view icon next to an external content reference type attribute. If there is a valid content reference that is stored in the attribute, the content is displayed in a separate window.

Option: Java API
The following sample Java API code enables you to view a text content in the console by using the CMSManager and CMSInstance Java interfaces.

Context ctx = null;

try
{
    ctx = PIMContextFactory.getContext("Admin", "xxx", "MyCompany");

    CMSManager CMSMgr = ctx.getCMSManager();
    CMSInstance CMSInst = CMSMgr.getCMSInstance();

    // login to all the repositories through single sign on
    CMSInst.logon();

    InputStream fileinputstream = null;

    String urnOfContent =
        "vbr:/DB2PAL/firstItemType1.A1001001A09E13B41353D75338.A09E13B41353D75338.1028/1/CONTENT";

    // create CMSContentURN for a specific URN
    CMSContentURN urnObj = CMSInst.getCMSContentURN(urnOfContent);

    // get the inputstream of the contents of the CMS Content object
    fileinputstream = CMSInst.getURNContentAsStream(urnObj);

    // read the content in fixed-size chunks and write it to the console
    int bufferSize = 1024; // choose any reasonable buffer size
    byte buf[] = new byte[bufferSize];

    for (int len = -1; (len = fileinputstream.read(buf)) != -1;)
    {
        System.out.write(buf, 0, len);
    }

    // get the Content Integrator Content object
    Content content = CMSInst.getContentIntegratorContent(urnObj);

    // get the actual file name of the content
    String defaultFileName = content.getDefaultFileName();
    System.out.println("File name of the content retrieved or viewed is : " + defaultFileName);

    System.out.println("\n\n ------------ Viewing the content from CMS End------------ ");
}
catch (Throwable e)
{
    throw new PIMInternalException(e.getLocalizedMessage(), e);
}

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Content Management troubleshooting checklist


Use the content management troubleshooting checklist to resolve common integration issues with IBM® Product Master.

This is a list of possible resolutions that can help you to identify the source of the problem that is causing your content management integration issues.

Availability of server-side computer


Issue - The server-side computer is unavailable.
Resolution -

Ensure that the application server, server1, and icmrm, the resource manager hosted on server1 are running. You can run the serverStatus.sh script to
check whether the application server is running.
To check whether the resource manager is up and running, go to the WebSphere® Application Server Admin console and click Enterprise applications >
Installed Applications and check if icmrm is running.
Check that the DB2® database is running. You can run the db2start command at the command line to check the status of the database.
Run the RMIBridge.sh script.

Availability of client-side computer


Issue - The client-side computer is unavailable.
Resolution -

Ensure that the application server, server1, is running.
Confirm that "test connection" works for the particular connector by running the Admin.sh script.
Check whether more than one instance of VeniceBridge.sh is running. If so, kill all instances and restart VeniceBridge.sh.
Ensure that all values in the content_management_system_config.xml file are correct.
Check that the subject user ID and password work with the various options that are displayed when you run the following command:

<CI home>/bin/run_sample.sh commandline.SSOAdminTool

For example, you can check whether the user ID and password are valid by choosing option 3. This option tests for connectivity, validity of user ID and
password, and ensures that the subject created has the necessary credentials.

Visibility of referenced data

Issue - Referenced data is not displayed in the references tab when searching the content management system.
Resolution - Ensure that the value of the enable_content_reference_check property is set to true in the $TOP/etc/default/common.properties file.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Java API for the content management system


IBM® Product Master provides three Java™ API interfaces, CMSInstance, CMSManager, and CMSContentURN, that you can use to manage content in the content
management system.

When you run the CMSInstance, CMSManager, and CMSContentURN Java APIs, make sure that your JVM has the parameter vbr.home set like this:

vbr.home=cms_home

Where, cms_home is the installation directory of IBM Content Integrator.


For example, if you are running the APIs from a stand-alone Java program on Linux®, you can pass the vbr.home parameter directly to the JVM in this way:

java -Dvbr.home=/opt/IBM/ContentIntegrator <your-Java-class>

See IBM Javadoc Documentation to view all of the available information on the parameters, returns, exceptions, and syntax for all Java classes.

Sample CMSManager interface usage


Use the CMSManager interface to gather the data that is configured in the content_management_system_properties.xml file. To use these methods, you do not
need to implement single sign-on to content management systems.
This sample code shows how to retrieve the CMSManager object from the context and use it to retrieve a list of configured repositories.

Context ctx = null;


try
{
ctx = PIMContextFactory.getContext("Admin", "xxx", "MyCompany");
CMSManager CMSMgr = ctx.getCMSManager();
Collection<String> listOfCMS = CMSMgr.getRepositoryNames();
}
catch (PIMAuthorizationException ae)
{
// Expected a failure
System.out.println("Authorization Failure");
return;
}
catch(PIMInternalException ie)
{
System.out.println(" Internal Error ");
return;
}

Sample CMSInstance interface usage


Use the CMSInstance interface to perform tasks needed to integrate the content management system and validate any errors.
This sample code populates the metadata attributes and adds a content to repository.

Context ctx = null;


try
{
ctx = PIMContextFactory.getContext("Admin", "xxx", "MyCompany");
CMSManager CMSMgr = ctx.getCMSManager();
CMSInstance CMSInst = CMSMgr.getCMSInstance();
CMSInst.logon();
String repositoryName = "DB2CM";
String itemClassName = CMSMgr.getItemClassName(repositoryName);
String filename = Configuration.getValue("tmp_dir") + "/test.xml";
String folderUrn = CMSMgr.getFolderURN(repositoryName);

HashMap<String, String> metadataAttributesHM = new HashMap<String, String>();


Map<String, String> metaProps = CMSInst.getMetaDataAttributes(repositoryName);

Iterator<String> it = metaProps.keySet().iterator();

while (it.hasNext())
{
// meta data attribute
String property = (String) it.next();

// meta data attribute value


String value = "val1";

metadataAttributesHM.put(property, value);
}

File file1 = new File(filename);

String addedContentURN = CMSInst.addContent(repositoryName, file1, file1.getName(), metadataAttributesHM);
}
catch (PIMAuthorizationException ae)
{
// Expected a failure
System.out.println("Authorization Failure");
return;
}
catch(IllegalArgumentException iae)
{
System.out.println(" Passed argument is null or empty ");
return;
}

catch(PIMInternalException ie)
{
System.out.println(" Internal Error ");
return;
}

Sample CMSReadOnlyAttribs interface usage


Use the CMSReadOnlyAttribs interface to assign values to CMS attributes and make those attributes read-only.
This sample code assigns values to the CMS attributes and makes those attributes read-only.
Option: Java API
Type //script_execution_mode=java_api="japi:///javaapiclass:com.ibm.mdm.pim.CMSTest.class" to call this API.

package com.ibm.mdm.pim;

import com.ibm.pim.extensionpoints.CMSEntryBuildFunction;
import com.ibm.pim.extensionpoints.CMSEntryBuildFunctionArguments;

import java.util.HashMap;

public class CMSTest implements CMSEntryBuildFunction
{
    public void cmsEntryBuild(CMSEntryBuildFunctionArguments inArgs)
    {
        HashMap hmAttrs = (HashMap) inArgs.getCMSMetaData();
        hmAttrs.put("ExpiryDate", "08/11/2011 16:45:00");
        hmAttrs.put("MDM_DOC_AUTHOR", "isome");
        // use inArgs.getCMSReadOnlyAttribs();
        // for setting attributes to readonly
    }
}

Option: Script API
Type res["<name of the attribute>"] = <value>; where <name of the attribute> is the name of the CMS Item Class attribute, which shows up in the Add Content screen, and <value> is the value that you want to assign to that attribute. For example,

res["PIMItemnr"] = 1234; //assuming it takes numbers, or else enclose the value with double quotes
res["PIMTaal"] = "someText";
res["PIMDocType"] = "Text";

//Assigns value to CMS attribute. Provide the name of the attribute as seen in the UI
cmsMetadata["AuthorCode"] = "Ruth";

//Make CMS attributes read only. Provide the name of the attribute as seen in the UI
cmsReadOnlyAttribs[0] = "AuthorCode";
Sample CMSContentURN interface usage
Use the CMSContentURN interface to validate and check for the existence of a particular URN or external content management system document.
This code sample retrieves the IBM Content Integrator content object that refers to the given URN. It checks whether the content that is referenced by this URN object exists and, if so, returns the IBM Content Integrator content object.

Context ctx = null;


try
{
ctx = PIMContextFactory.getContext("Admin", "xxx", "MyCompany");
CMSManager CMSMgr = ctx.getCMSManager();
CMSInstance CMSInst = CMSMgr.getCMSInstance();
String urn = "vbr:/DB2PAL/firstItemType.A1001001A09B24B35145B46524.A09B24B35145B46524.1022/1/CONTENT";

CMSContentURN contURN = CMSInst.getCMSContentURN(urn);

Content CIContent = null;

if (contURN.isExisting())
{
    CIContent = CMSInst.getContentIntegratorContent(contURN);
}
}
catch (PIMAuthorizationException ae)
{
    // Expected a failure
    System.out.println("Authorization Failure");
    return;
}
catch (IllegalArgumentException iae)
{
    System.out.println(" Passed argument is null or empty ");
    return;
}
catch (PIMInternalException ie)
{
    System.out.println(" Internal Error ");
    return;
}

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Integrating IBM InfoSphere Physical Master Data Management


A solution can involve integration between Product Master and InfoSphere® Physical Master Data Management. The Solution Toolkit is part of a continuing effort to reduce time to value by implementing industry-specific content and capabilities.

Ensure that you become familiar with the Solution Toolkit: how to deploy, install, import, and configure it, and finally how to use it.

Overview of the Banking Solution


The Banking Solution is an application that you can use to control how you sell and manage your offers.
Banking Solution Sample
Ensure that you become familiar with what the sample Banking Solution is, and how to deploy, import, configure, and finally use the sample.
Developing with the Banking Solution Sample and Toolkit
The Banking Solution provides a solution template. Depending on your banking needs, you need to further build upon the banking solution template. A developer
uses the spec model editor in the workbench to enhance the data model. All data model changes must be made through the workbench.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Overview of the Banking Solution


The Banking Solution is an application that you can use to control how you sell and manage your offers.

Banks have traditionally tried to sell products to customers through a mass-marketing, one-size-fits-all approach.

Time-to-market cycles for new products typically range from 6 to 9 months and can involve the development of custom code, with inherent testing complexities and costs.

The result has been low acceptance of new offers, high costs for new product development, and low account-to-customer ratios. To address this problem, banks need the
following:

1. A detailed, up-to-the-second understanding of their customers.
2. The ability to offer products and product bundles that are tailored to the current needs and desires of the customer.
3. The capability to manage new product campaigns and make in-flight course corrections based on product acceptance.

List of artifacts
The following artifacts are included in the solution.

IBM® Product Master artifacts


Data model
Workflows
User interface
Sample data
Security configuration
Data validations and other code that enhances the user interface usability
Ability to publish SPEC changes
Ability to publish changes to hierarchies
Ability to publish changes to items
Integration with IBM Operational Decision Manager
IBM Operational Decision Manager artifacts
Integration with Product Master
InfoSphere MDM Advanced Edition artifacts
Ability to support products based on Product Master specs
Ability to receive changes for hierarchies and items
Business Process Management artifacts
Integration with the physical MDM
Sample next best offer user interface
Sample onboarding workflows

Roles and tasks


For most Banking Solution systems, the tasks that are accomplished in the solution are generally covered by users in the following main roles.

Accountant
As an accountant, you are responsible for ensuring that the correct general ledger (GL) codes are assigned. You control the GL account assignments in all of the
catalogs.
Administrator



As an administrator, you are responsible for any activities that are related to IT, for example, managing specs and attribute collections. The IT administrator is
responsible for the technical aspects of the system, including being the business user in charge of the Features Catalog content. As an administrator, you cannot
change content in the other catalogs.
Channel manager
As a channel manager, you are responsible for ensuring that the channel is ready to sell and service the offer. Your focus is primarily the offer catalog, and you might
have the ability to select or create channel-specific offers and approve copy that is used by the channels that you control.
Compliance
As a compliance checker, you are responsible for ensuring that the bank follows all of the government requirements. You manage any compliance-related data
primarily in the sellable items catalog, but also potentially in the offers catalog.
Copywriter
As a copywriter, you are responsible for creating all of the text, pictures, and videos for offers and promotions. You control all of the copywriting-related content that
is either held or referenced by the offer catalog.
Offer manager
As an offer manager, you are responsible for managing offers within a release. You will likely be in a marketing organization and your primary focus is the offer
catalog.
Product manager
As a product manager, you are responsible for managing one or more base products. You are likely within the operations team of a line of business, and you have
control over the content of the sellable items catalog.
Promotion manager
As a promotion manager, you are responsible for managing promotions within an offer. This might be the same individual as the offer manager or might be an
external sales person. Your primary focus will be to control the promotion-related data of the offers catalog.
Release manager
As a release manager, you are responsible for managing one or more combinations of offers that need to be managed together.
Risk credit
As a risk credit checker, you are responsible for ensuring that the risk implicit in an offer or promotion is properly accounted for. Therefore, generally, risk is
determined on the product level and focused in the sellable items catalog.
Super users
As a super user, you can perform every task outside of a workflow, except change attributes.
Tester
As a tester, you are responsible for testing all offers before they go into production. This is a business function that ensures that the offers work as designed and that
there are no unintended consequences of changes at the feature or product level, in both offers and existing contracts.

Rules for creating offers and managing sales processes


When you manage offers and customers, consider the following types of rules. A short code sketch after this list illustrates how such rules reduce to simple checks.

Terms and Conditions


Terms and conditions represent the proposed or actual contract with the customer. The offer contains the proposed version, which might or might not be
changeable as part of the onboarding process. The actual terms and conditions are contained in the agreement or contract.

Compliance
Compliance ensures that the appropriate government regulations are being met and that processes and procedures are in place to ensure ongoing
enforcement of the rules and regulations. For example:

A bank might not allow more than three withdrawals from a Savings Account in any given month.
An account might not be opened by a person under 18 years of age unless there is a joint account holder who is 18 years old or older.

Eligibility
Eligibility is defined by the rules that the bank wants to apply over and beyond compliance, such as:

A person must already have a checking account and a credit card.
The checking account must maintain an average daily balance over $5,000.
A person must not be a current customer of the bank.
The offer is valid only if it is accepted and applied for through a specific channel, such as the internet.

Pricing
While straightforward in concept, providing flexibility can be complicated. Pricing can be driven by one or more of the following:

Volumes, accumulated over various time horizons
Balances, also accumulated over various time horizons
Sales, service, or transaction initiation channels
Calculations of commissions and other fees that are payable to third parties

Marketing
Marketing focuses on who is told that the offer is available and how they will be told. Marketing considerations take into account what is known about the customer,
such as age, gender, and socioeconomic factors; the channel over which the interaction is occurring; and the location of the interaction and of the customer. It is
possible that consumers who are eligible for an offer might not be told about it, yet are still able to sign up for it if they do find out.
Risks
Specific customer risks can influence pricing of arrangements involving credit, but might also influence other fees, since this affects the ability of a customer to
move their business. Offers might also be structured so that only individuals with credit scores that are higher than a certain level might apply.
Events
Events are rules that are run based on some change in status or transaction. For example,

Birthdays
Current physical location
Transaction or balance volume changes
Changes in methods or frequency of interactions
Required update of financial or other documents that are required for compliance or risk assessment
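
The following minimal sketch shows how rules such as the compliance and eligibility examples above reduce to simple checks in code. The CustomerProfile class, its fields, and the thresholds are hypothetical illustrations invented for this sketch; in the solution itself, such rules are expressed through IBM Operational Decision Manager.

// Hypothetical illustration only; names and thresholds are invented for this sketch.
public class EligibilityRuleSketch {

    static class CustomerProfile {
        boolean hasCheckingAccount;
        boolean hasCreditCard;
        double averageDailyBalance;
        int withdrawalsThisMonth;
        String channel; // for example, "INTERNET"
    }

    // Compliance example: no more than three withdrawals from a savings account per month.
    static boolean withdrawalAllowed(CustomerProfile p) {
        return p.withdrawalsThisMonth < 3;
    }

    // Eligibility example: an existing checking account and credit card, an average
    // daily balance over $5,000, and application through the internet channel.
    static boolean isEligible(CustomerProfile p) {
        return p.hasCheckingAccount
                && p.hasCreditCard
                && p.averageDailyBalance > 5000.00
                && "INTERNET".equals(p.channel);
    }

    public static void main(String[] args) {
        CustomerProfile p = new CustomerProfile();
        p.hasCheckingAccount = true;
        p.hasCreditCard = true;
        p.averageDailyBalance = 7500.00;
        p.withdrawalsThisMonth = 2;
        p.channel = "INTERNET";
        System.out.println("Withdrawal allowed: " + withdrawalAllowed(p));
        System.out.println("Eligible for offer: " + isEligible(p));
    }
}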

Key concepts
The following key concepts must be understood in order to use the Banking Solution Sample.



IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Key concepts
The following key concepts must be understood in order to use the Banking Solution Sample.

Catalog
Catalogs are collaborative MDM containers that are used to store items. Collaborative MDM can have zero or more catalogs; however, an item can only belong to one
catalog.
Category
Categories are used to classify items. In the same way as a catalog contains items; a hierarchy contains categories.
Effective, Expiration, End Dates
Parts of the system can exist in one of three statuses:

Effective Date - What is the first date that it can be used in an offer and presented to a customer or prospect?
Expiration Date - What is the last date that the item can be included in an offer that is presented to a customer or prospect?
End Date - When must all contracts, agreements, or accounts that rely on this item be revised to exclude the item?

Feature
Features represent the basic building blocks of products and services that may be included in a specific offer. For example, a demand deposit account might include
the ability to write checks (for example, typical retail checking account) or not (typical commercial banking clearing account).
Features tend to represent things that can be charged for, and therefore be connected to specific rules on how the charges are calculated. Following the example on
the demand account for retail banking, there is a Feature for Clearing Checks which is typed to a set of fees that are needed to support the Check Clearing function.
This would include the ability to charge for each check, to clear checks when there are sufficient balances available, and the ability to charge if it is allowed to clear
checks that will cause the account to have a negative balance. Each of these would represent an individual feature.

Features can be associated with various conditions. For example,

Fee Condition - a rule that uses a transaction count and cost per transaction to calculate a fee (for example, cost to clear a check).
Rate Condition - a rule that uses an amount and a rate to calculate an amount (for example, credit card monthly interest charge or interest earned on a
savings account).
Limit Condition - a rule that says when something can be done. Continuing with some of the examples, one limit condition would be the amount of overdraft
allowed, or the number of withdrawals from a savings account per month. (See the sketch after this list for a concrete rendering of each condition type.)
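
As a concrete rendering of the three condition types, the following sketch computes each one in plain Java. The method names and sample values are illustrative only; they are not part of the solution's spec model.

// Illustrative sketch of the fee, rate, and limit condition types described above.
public class ConditionTypeSketch {

    // Fee Condition: transaction count multiplied by cost per transaction.
    static double fee(int transactionCount, double costPerTransaction) {
        return transactionCount * costPerTransaction;
    }

    // Rate Condition: an amount multiplied by a rate, for example monthly interest.
    static double rate(double amount, double interestRate) {
        return amount * interestRate;
    }

    // Limit Condition: whether an action is still allowed, for example
    // no more than the permitted number of withdrawals per month.
    static boolean withinLimit(int used, int limit) {
        return used < limit;
    }

    public static void main(String[] args) {
        System.out.println("Check clearing fee: " + fee(12, 0.25));        // 3.0
        System.out.println("Interest earned:    " + rate(5000.00, 0.015)); // 75.0
        System.out.println("Withdrawal allowed: " + withinLimit(3, 3));    // false
    }
}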

Feature group
Feature groups are used to make it easier to build sellable items and offers by bringing in more than one feature at a time. One of the uses of a feature group is to
build up a set of features that deliver a specific capability. For example, in order to be able to clear checks, the bank needs to be able to:

Set fees based on the number of checks that are presented for clearing
Decide when to accept the check based on a number of factors, including how many checks have already been accepted over a given period of time and
the balance of the account if the check is accepted
Charge for when a check is accepted that would cause an account balance to be less than required by the bank

For example, the feature group Wire Transfer Fees can include the following related fees:

Incoming Domestic Wire Transfer Fee


Incoming Foreign Wire Transfer Fee
Outgoing Domestic Wire Transfer Fee
Outgoing Foreign Wire Transfer Fee

Hierarchy
A hierarchy, sometimes referred to as category tree (particularly within collaborative MDM scripting), is a structured "tree" of categories.
Item
An item is a representation of a product or other object. Items can exist in a catalog only, where they can be categorized in one or more categories, or unassigned.
Items must be unique within a catalog and can exist in one catalog only. However, items can be referenced from another item or a category in one or more catalogs.
Offer
An offer is something that can be purchased by a prospect and represented by a single contract. Offers are created by:

selecting one or more sellable items


determining which sellable items are required or optional
deciding which of the optional features of the sellable items will be required, optional, not available
setting the pricing, eligibility and marketing rules

Prospect
A prospect is someone who is presented with an offer. This person may or may not have a relationship with the bank.
Sellable Item
A sellable item represents the smallest thing that the bank can sell by itself. This can represent a service (for example, the ability to go to a branch of the bank and
exchange one currency for another) or a product (for example, checking account or credit card). Sellable items have required features and may also have optional
features that can be included or excluded depending on the offer. Sellable items have neither pricing nor specific eligibility defined, but might have compliance
requirements set.
Version
Because contract changes might not affect all existing contracts, just future ones, a change that requires a major version increment also requires that a new item
be created. Existing offers always continue to refer to the item with the old major version number. Conversely, minor version number changes are used more in
the context of auditing and history. Changes that require only minor version number changes take effect on all offers that reference the item. Every change to
an item in any catalog is versioned in order to meet two business requirements:

1. Record what was presented to a customer at some point in time
2. Change the contractual terms of an offer

To meet these requirements, any change results in an increment of the version number. Version numbers can be broken into major and minor parts; for
example, in 1.2, 1 is the major version number and 2 the minor version number. Changes that apply to the second requirement cause the major version number to be
increased. All other changes result in a minor version number increment.
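
The major/minor rule can be captured in a few lines of code. The class below is a hypothetical sketch for illustration only; in the solution itself, versioning is applied automatically to item changes in the catalogs.

// Hypothetical sketch of the major.minor versioning rule described above.
public class ItemVersionSketch {
    private int major;
    private int minor;

    ItemVersionSketch(int major, int minor) {
        this.major = major;
        this.minor = minor;
    }

    // A change to the contractual terms of an offer: existing offers keep
    // referring to the old major version, so a new item must be created.
    void contractualChange() {
        major++;
        minor = 0;
    }

    // Any other change (auditing and history): takes effect on all offers
    // that reference the item.
    void minorChange() {
        minor++;
    }

    @Override
    public String toString() {
        return major + "." + minor;
    }

    public static void main(String[] args) {
        ItemVersionSketch v = new ItemVersionSketch(1, 2);
        v.minorChange();        // 1.3 - applies to all referencing offers
        v.contractualChange();  // 2.0 - existing offers still reference 1.x
        System.out.println(v);
    }
}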

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Banking Solution Sample


Ensure that you become familiar with what the sample Banking Solution is, and how to deploy, import, configure, and use the sample.

Deploying the Banking Solution Sample


You can deploy the Banking Solution Sample on the physical MDM or Product Master by performing the steps in the following sections.
Importing the Banking Solution sample
After you have installed and deployed the Banking Solution, you need to import the data model. This imports all of the specs, catalogs, hierarchies, attribute
collections, and workflow steps that are essential for the Banking Solution sample to work.
Managing product and offers
Using the Banking Solution Sample to manage your products and offers requires modifying the sample data with your specific content in the following order:

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Deploying the Banking Solution Sample


You can deploy the Banking Solution Sample on the physical MDM or Product Master by performing the steps in the following sections.

Procedure
To deploy the MDM synchronization toolkit solution, proceed as follows:

1. Extract the ae-integration-11.6.0.XXXX-XXXXXXXX-XXXXXX.XX.tar.gz file (banking asset) to a folder on your machine.


Example:
../banking
The following folders are created:
bin
config
data_model
data_samples
jars/banking-extended
mappings
2. Run the installer script in the bin folder:

cd ../banking/bin
chmod a+x mdms_install.sh
./mdms_install.sh


3. Update the environment configuration by running the configureEnv.sh script in the $TOP/bin folder:

$TOP/bin/configureEnv.sh

4. Install, configure, and run the hazelcast-3.8.1 component.


5. Configure the Group name, Group password, and Port number properties in the hazelcast.xml file in the ../hazelcast-3.8.1/bin folder.
Example:

<name>mdmce-hazelcast-instance</name>
<password>mdmce-hazelcast</password>
<port auto-increment="true" port-count="100">5702</port>

6. Update the common.properties file in the $TOP/etc/default folder for Hazelcast configuration:

hazelcast_group_name=mdmce-hazelcast-instance
hazelcast_password=mdmce-hazelcast
hazelcast_network_ip_address=XX.XX.XX.XX:5702

Note: The value of the Hazelcast properties should be the same as those in Step 5.
7. Update the mdm.properties in the $TOP/etc/default/mdm/config folder:
a. Specify the value of the mdm.ae.username property as mdmadmin.
b. Encrypt the AE password by running the following command:

$TOP/bin> $JAVA_RT com.ibm.mdm.integration.utils.EncryptionUtils <plain text password>

Output:



Encrypted Password for '<plain text password>': <encrypted password>

Note: If the $JAVA_RT environment variable is not set, perform the following steps:

i. Browse to the $TOP/bin/compat.sh file.
ii. Copy the JAVA_RT variable details that are defined in the file.
iii. Run the following export command:

export JAVA_RT='<copied content>'

c. Add the <encrypted password> to the mdm.properties file:

mdm.ae.password=<encrypted password>

d. Specify the endpoint URL for MDM AE.

mdm.ws.endpoint.url=http://example.server.customer.com:9080/MDMWSProvider/MDMService

e. Specify the default values for request queue name and response map properties.

mdm.ce.hz.idgenerator.name=mdmBankingIdGenerator
mdm.ce.banking.request.queue=eventNotifBankingQueue
mdm.ce.banking.response.map=eventNotifBankingResponseMap
mdm.ce.response.retry.count=5

8. Ensure that the following properties are set in the mdm.properties according to this sample:

# Product and Category Publish Auditing attributes


mdm.audit.last_successful_publish_by=Last Successfully Published/By
mdm.audit.last_successful_publish_on=Last Successfully Published/On
mdm.audit.last_publish_results_by=Last Publish Results/By
mdm.audit.last_publish_results_on=Last Publish Results/On
mdm.audit.last_publish_results_code=Last Publish Results/Publish Result Code
mdm.audit.last_publish_results_result=Last Publish Results/Publish Result
mdm.audit.last_modified_by=Modified/By
mdm.audit.last_modified_on=Modified/On

# Product harden attributes


mdm.product.attribute.id=ID
mdm.product.attribute.name=Name
mdm.product.attribute.description=Description
mdm.product.attribute.short_description=Description

# Category harden attributes


mdm.category.attribute.id=ID
mdm.category.attribute.name=Name

9. If you have made changes to the mdm.properties file, restart the application instance.
10. Go to the banking-extended folder and locate the ae-rest-client.jar file.
a. Go to MDM-AE_Installed_Path/KafkaProcessor/bin folder and locate the runKafkaBatchStreams.sh file.
b. Update the runKafkaBatchStreams.sh file to add the ae-rest-client.jar entry to the classpath:

CP=$CP:../lib/ae-rest-client.jar

c. Copy the ae-rest-client.jar file to the MDM-AE_Installed_Path/KafkaProcessor/lib folder.


d. Set the JAVA_HOME entry in the runKafkaBatchStreams.sh file.
11. Use the following steps to install and configure Apache Camel Transformer utility.
a. Open the WebSphere® Application Server administrative console.
b. Select Applications > Applications types > WebSphere enterprise applications > Install.
c. In the Path to the new application pane, select Local file system.
d. Browse to the camel-banking.war, and click Next.
The camel-banking.war file is in the ae-integration-11.6.0.XXXX-XXXXXXXX-XXXXXX.XX.tar.gz file, in the ../banking/jars/banking-extended/ folder.
e. In the How do you want to install application pane, select Fast Path, and click Next.
f. In the Select installation options pane, keep the Directory to install application field blank to automatically set the default path, and click Next.
g. In the Map modules to servers and Map virtual hosts for Web modules panes, click Next.
h. In the Map context root for Web modules pane, specify the value of the Context Root field as /camel-banking/ and click Next.
i. Review the installation summary, and click Finish.
12. Proceed as follows to update the application.properties.default file present in the installed location of the camel-banking.war file:
Example

/../IBM/WebSphere/AppServer/profiles/AppSrv01/installedApps/WASHOSTCell01/camel-banking_war.ear/camel-banking.war/WEB-INF/classes/application.properties.default

a. Create a copy of the application.properties.default file and name it application.properties.


b. Update the application.properties file for the Hazelcast and Kafka service configurations:

hazelcast_group_name=mdmce-hazelcast-instance
hazelcast_password=mdmce-hazelcast
hazelcast_network_ip_address=XX.XX.XX.XX:5702

Note: The value of the Hazelcast properties should be the same as those in Step 6.
13. Go to Websphere enterprise applications > camel-banking_war > Start.
14. Check and verify that there is no error in the deployed camel-banking_war log. The log file is in the installed path.
Example

/../IBM/WebSphere/AppServer/profiles/AppSrv01/logs/banking/application.log

If there are no errors, your application is ready for MDM-CE AE integration.



What to do next
If you are modifying the mdm.properties configuration file, ensure you follow these guidelines:

Table 1. Important sections of the mdm.properties file


Section: Transaction Mappings
Description: Lists the XSL stylesheet files that are used for generating the XML request to be sent to the physical MDM. For example, mappings/maintainCategory.xsl is used to generate the XML request for publishing categories.
Properties:
mdm.transaction.addSpec.mapping.filepath=mappings/addSpec.xsl
mdm.transaction.updateSpec.mapping.filepath=mappings/updateSpec.xsl
mdm.transaction.addSpecFormat.mapping.filepath=mappings/addSpecFormat.xsl
mdm.transaction.maintainMultipleProducts.mapping.filepath=mappings/maintainMultipleProducts.xsl
mdm.transaction.maintainCategory.mapping.filepath=mappings/maintainCategory.xsl

Section: Product Catalog
Description: Lists the main catalog that is in the collaborative MDM. In this case, the main catalog is called Offer Catalog. If you have a different catalog, rename the value.
Properties:
mdm.product.catalog.name=Offer Catalog

Section: Spec Registry Lookup Tables
Description: Lists the Spec Registry Lookup Table attributes. Ensure that the attribute names listed here match the actual attribute names of the Spec Registry Lookup Table.
Properties:
mdm.spec.registry=MDM Spec Registry
mdm.spec.registry.attribute.spec_name=MDM Spec Registry Lookup Spec/Spec Name
mdm.spec.registry.attribute.spec_id=MDM Spec Registry Lookup Spec/Spec ID
mdm.spec.registry.attribute.spec_namespace=MDM Spec Registry Lookup Spec/Spec Namespace
mdm.spec.registry.attribute.spec_last_publish_date=MDM Spec Registry Lookup Spec/Last Publish Date
mdm.spec.registry.attribute.spec_format_id=MDM Spec Registry Lookup Spec/Spec Format ID
mdm.spec.registry.attribute.spec_format_version=MDM Spec Registry Lookup Spec/Spec Format Version
mdm.spec.registry.attribute.spec_format_last_publish_date=MDM Spec Registry Lookup Spec/Spec Format Last Publish Date
mdm.spec.registry.attribute.publish_result_code=MDM Spec Registry Lookup Spec/Publish Result Code
mdm.spec.registry.attribute.publish_summary=MDM Spec Registry Lookup Spec/Publish Summary

Section: Hierarchy Registry Lookup Table
Description: Lists the Hierarchy Registry Lookup Table attributes. Ensure that the attribute names listed here match the actual attribute names of the Hierarchy Registry Lookup Table.
Properties:
mdm.hierarchy.registry=MDM Hierarchy Registry
mdm.hierarchy.registry.attribute.hierarchy_name=MDM Hierarchy Registry Lookup Spec/Hierarchy Name
mdm.hierarchy.registry.attribute.hierarchy_id=MDM Hierarchy Registry Lookup Spec/Hierarchy ID
mdm.hierarchy.registry.attribute.last_publish_date=MDM Hierarchy Registry Lookup Spec/Last Publish Date
mdm.hierarchy.registry.attribute.publish_result_code=MDM Hierarchy Registry Lookup Spec/Publish Result Code
mdm.hierarchy.registry.attribute.publish_summary=MDM Hierarchy Registry Lookup Spec/Publish Summary

Section: Catalog Registry Lookup Table
Description: Lists the Catalog Registry Lookup Table attributes. Ensure that the attribute names listed here match the actual attribute names of the Catalog Registry Lookup Table.
Properties:
mdm.catalog.registry=MDM Catalog Registry
mdm.catalog.registry.attribute.catalog_name=MDM Catalog Registry Lookup Spec/Catalog Name
mdm.catalog.registry.attribute.product_type_id=MDM Catalog Registry Lookup Spec/Product Type ID
mdm.catalog.registry.attribute.product_type_spec_name=MDM Catalog Registry Lookup Spec/Product Type Spec Name
mdm.catalog.registry.attribute.last_publish_date=MDM Catalog Registry Lookup Spec/Last Publish Date
mdm.catalog.registry.attribute.publish_result_code=MDM Catalog Registry Lookup Spec/Publish Result Code
mdm.catalog.registry.attribute.publish_summary=MDM Catalog Registry Lookup Spec/Publish Summary

Section: Product and Category Publish Auditing attributes
Description: Lists the names of the auditing attributes for Product and Category specs. The attribute names are shared among all specs. If you need to change the attribute name, ensure that the names match what is specified here.
Properties:
mdm.audit.last_successful_publish_by=Last Successfully Published/By
mdm.audit.last_successful_publish_on=Last Successfully Published/On
mdm.audit.last_publish_results_by=Last Publish Results/By
mdm.audit.last_publish_results_on=Last Publish Results/On
mdm.audit.last_publish_results_code=Last Publish Results/Publish Result Code
mdm.audit.last_publish_results_result=Last Publish Results/Publish Result
mdm.audit.last_modified_by=Modified/By
mdm.audit.last_modified_on=Modified/On

Section: Product harden attributes
Description: Lists the harden attributes for Product specs. Ensure that the attribute names listed here match the actual names in the spec.
Properties:
mdm.product.attribute.id=ID
mdm.product.attribute.name=Name
mdm.product.attribute.description=Description
mdm.product.attribute.short_description=Description

Section: Category harden attributes
Description: Lists the harden attributes for Category specs. Ensure that the attribute names listed here match the actual names in the spec.
Properties:
mdm.category.attribute.id=ID
mdm.category.attribute.name=Name

Section: AE System Configurations
Description: Lists the system configuration values for the physical MDM. Before you change any of the following values, consult your system administrator for the physical MDM to make sure that you have the correct values.
Properties:
mdm.ae.username=mdmadmin
mdm.ae.password=Abm3fMIyIwY=XT3Ef4Y+8cQfcyC5ol1Fol09xH+GPvHE
mdm.ae.hierarchy.type.code=3
mdm.ae.hierarchy.value.code=Finance
mdm.ae.categoryupdate.method.code=COMPLETE
mdm.ae.language.code=100
mdm.ae.admin.system.type=4
mdm.ae.admin.system.value=MDM
mdm.ae.ws.endpoint.url=http://mdmaix015.torolab.ibm.com:9080/MDMWSProvider/MDMService

Section: use this constant for now
Description: Lists the default start date to be used in all of the publishing requests. Before you change this value, consult your system administrator for the physical MDM to make sure that you have the correct value.
Properties:
mdm.ae.default.start.date=2013-01-01 00:00:00
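
Because mdm.properties is a standard Java properties file, integration code can read these values with java.util.Properties. A minimal sketch, assuming the file location used in the deployment steps above ($TOP/etc/default/mdm/config/mdm.properties); the class name is illustrative:

// Minimal sketch: reading one of the mapping properties listed above.
import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;

public class MdmPropertiesSketch {
    public static void main(String[] args) throws IOException {
        Properties props = new Properties();
        try (FileInputStream in = new FileInputStream(
                System.getenv("TOP") + "/etc/default/mdm/config/mdm.properties")) {
            props.load(in);
        }
        // Resolve the stylesheet that is used to publish categories.
        String xsl = props.getProperty(
                "mdm.transaction.maintainCategory.mapping.filepath");
        System.out.println("Category mapping stylesheet: " + xsl);
    }
}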

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Importing the Banking Solution sample


After you have installed and deployed the Banking Solution, you need to import the data model. This imports all of the specs, catalogs, hierarchies, attribute collections, and
workflow steps that are essential for the Banking Solution sample to work.

Procedure
1. Create a company, by running the create_cmp.sh file:

$TOP/bin/db/create_cmp.sh --code=bank

2. Start the instance, by running the start_local.sh file:

$TOP/bin/go/start_local.sh

3. Log in to Product Master as an Administrator by using the company that you created in Step 1.
4. Import the samples from the $BANKING_INSTALL_DIR/data_samples folder and models from the $BANKING_INSTALL_DIR/data_model folder.
5. Click System Administrator > Import Environment > sample.zip > Import.
Following are the files that you need to import as the sample.zip:
synchronization_toolkit_model.zip - Contains the definitions of the Lookup tables that are required to maintain the synchronization between the Product
Master objects and their counterparts in MDM AE.
banking_solution_model.zip
Note:
After successful import, repeat the import again to resolve a circular dependency between the Lookup tables and their specs.

If the first import fails to import all of the offers (items in offer catalog) and channels (categories in channel hierarchy), you need to import the ZIP file
again. To verify that the import completed successfully, you should see three items in the Offer Catalog (Basic Checking Account, Master-card Gold
and Small Business Receivables Loan), and the Channels hierarchy should have a number of categories including: ATM Machines, Branches, Browser,
and so on.

sample_product_catalog_model.zip
banking_qa_data.zip

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Managing product and offers


Using the Banking Solution Sample to manage your products and offers requires modifying the sample data with your specific content in the following order:

Managing features
Features represent the building blocks of products and services that a bank can provide to its customers. They are the core capabilities that the bank has to offer,
and they can also be viewed as rules and conditions that apply to a given product or service.
Managing sellable items
A sellable item is something to which services can be applied, for example, a credit card, a checking account, or a small business loan. In other words, a
sellable item is what a feature is applied to. Some banks would call this a product or an independent unit.
Managing offers
Offers are built from sellable items, which in turn are built from features. Offers represent the tailored products, services, or bundles that can be offered to
customers. An offer is divided into three facets:
Managing categories
Having a central product catalog is important. Without a central product catalog, product managers must dig deep into documentation of legacy systems, including
getting IT involved, to determine the key capabilities of existing products and the inherent product capability that is built into the systems.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Managing features
Features represent the building blocks of products and services that a bank can provide to its customers. They are the core capabilities that the bank has to offer, and they can
also be viewed as rules and conditions that apply to a given product or service.

A feature is something that you can count, price, or measure. Some examples of features include:

Access to make deposits


The ability to cash a check
The ability to make a deposit
The ability to charge interest

Features are managed in the Feature Catalog. In the Banking Solution Sample, you can define all of the possible features that can be associated with products and
services, and then link them to the appropriate items in the Sellable Items Catalog. They can also be applied to offers in the Offer Catalog when the features apply to the
entire offer, such as discounts or eligibility conditions.

In the Feature Catalog, you can create either an individual feature or a feature group. A feature group is a grouping of related individual features.

Creating condition types


Condition types represent the various rules that are run at various points in a product or offer lifecycle. All features in a category in this hierarchy result in that rule being
run with the defined values in the feature as well as potentially information from the Sellable Item, Offer, or Customer.

1. In the left navigation pane, select Condition Type Hierarchy from the menu, and click + to add the module.
2. In the Condition Type Hierarchy, right-click on the category into which the new condition is to be added, and select Check Out, Create New Condition.
3. Go to the Create New Condition Collaboration Area or your home screen and select the new condition from the appropriate workflow step.
4. Complete all of the requested information.
5. Click Save if work needs to stop before all of the content has been added.
6. Click Done when all of the content has been correctly entered.

Creating an individual feature


A feature can be created as a single rule or condition. The Feature Catalog uses the secondary Condition Types hierarchy to associate specs that apply to different
condition types, such as: fees, rates, and risk. Each individual feature must have a condition type, which provides more attributes to further define the feature.

1. In the left navigation pane, select Feature Catalog from the menu to add the module.
2. In the Feature Catalog, right-click on a category from the Feature Types hierarchy to which the new feature is to be added, and select Add Item. In the right pane,
the new Feature form displays.
3. To save the feature, you must provide a name for the feature.
4. For Type, select Individual Feature from the list.
5. For Condition Type, select the appropriate condition type for the feature.
6. Click Save so that the attributes that are associated with the selected condition type are displayed in the Details tab.
7. Provide information for the feature as required and click Save.

Updating a feature
1. In the Feature Catalog, select the feature to be updated.
2. Make updates to any of the tabs as required and click Save.

Removing a feature
A feature (that has not been associated with any sellable items or offers) can be deleted. After a feature has been linked to sellable items or offers, the deleted feature is
not automatically removed from the related items or offers.

1. In the Feature Catalog, select the feature to be deleted, and click Delete.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Managing sellable items



A sellable item is something to which services can be applied, for example, a credit card, a checking account, or a small business loan. In other words, a sellable
item is what a feature is applied to. Some banks would call this a product or an independent unit.

Sellable items are managed in the Sellable Item Catalog. In addition to basic description, sellable items are defined by associating one or more features or feature groups
from the Features Catalog. After a sellable item is defined, it can be linked to an offer in the Offer Catalog.

Creating a sellable item


1. In the left navigation pane, select Sellable Item Catalog from the menu to add the module.
2. In the Sellable Item Catalog, right-click on a category from the Products and Services hierarchy to which the new sellable item is to be added, and select Add Item.
In the right pane, the new sellable item form is displayed.
3. Provide details for the sellable item and click Save.

Updating a sellable item
1. In the Sellable Item Catalog, select the item to be updated.
2. Make updates to any of the tabs as required and click Save.

Removing a sellable item
A sellable item that has not been associated with any offers can be deleted. After a sellable item has been linked to offers, its deletion is not automatically
reflected in the related offers.

1. In the Sellable Item Catalog, select the item to be deleted, and click Delete.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Managing offers
Offers are built from sellable items, which in turn are built from features. Offers represent the tailored products, services, or bundles that can be offered to customers. An
offer is divided into three facets:

1. Feature: A feature is something that you can count, price, or measure. For example, access to make deposits is a feature. There are mandatory features and
optional features, so you can control which features are optional and which are mandatory. Features represent the building blocks, from a bank's perspective, when
providing services to customers, for example, the ability to cash a check, to make a deposit, to charge interest, and so on. Features are the core capabilities that the
bank must offer.
2. Sellable item: A sellable item is something to which services can be applied, for example, a credit card. In other words, a sellable item is what a feature is
applied to. Some banks would call this a product or an independent unit.
3. Offer: An offer takes one or more sellable items and defines exactly how they are to be sold. It includes the exact pricing, target marketing, target segments, and
locations. For example: what is the pricing, what are the terms and conditions for that sellable item, through what channels should it be sold, what copy text,
what geographies, what devices?

Offers are managed in the Offer Catalog. The offer attaches all of the constraints and refines the optional features that are available. You might have two different offers
that are being presented to two different groups with two different sets of optional features on the same core product such as a checking account.

Creating an offer
An offer takes one or more sellable items and defines exactly how the offer is to be sold. It can include information such as pricing, target market segments, and locations.

1. In the left navigation pane, select Offer Catalog from menu to add the module.
2. In the Offer Catalog, right-click on a category from the Offer Types hierarchy to which the new offer is to be added, and select Add Item.
3. To save the offer, you must provide a name for the offer.
4. Click the different tabs in the pane to view and supply offer details and click Save. A unique Offer ID is generated by the system.

Updating an offer
An offer can be updated when you need to modify information on the offer, such as changing target market segments, target locations, pricing, or fees on features.

1. In the Offer Catalog, select the offer to be updated.


2. Make updates to any of the tabs as required and click Save.

Removing an offer
If the offer has been published to the physical MDM, as indicated by the Last Successfully Published field, the offer remains in the physical MDM. The publish function
does not automatically delete the offer from the physical MDM.

1. In the Offer Catalog, select the offer to be deleted, and click Delete.

Assigning a channel to the offer


Sales channels define the ways in which an offer can be made available to target customers. In the sample banking solution, the various available sales channels are
defined by using the Channels hierarchy. Additionally, certain channels have additional attributes that can be applied to the offer. To access these attributes, the channels
must first be assigned to the offer.
Channels must be added through the Sale Channels grouping in the Marketing tab. The corresponding categories in the Channels hierarchy are added automatically to the
offer and appear in the Categories tab. The sample banking solution does not allow you to map a Channels category in the Categories tab, or use the Add Item function on
a Channels category from the left navigation pane.

1. Click the Marketing tab.


2. Click the Add More link on the grouping for Sales Channels.
3. Select a channel from the list, and click Save.

Removing a channel from the offer


Sales channels can be removed from an offer. Any additional item attributes associated with the offer will also be removed automatically.
Channels must be removed through the Sale Channels grouping in the Marketing tab rather than through the Categories tab to ensure that any additional attributes that
are associated with the channel are removed properly from the offer.

1. Click the Marketing tab.


2. Under Sales Channels, click the Delete the occurrence icon next to the channel to be removed, and click Save.
If the selected channel has additional attributes that apply to the offer, they disappear from the Marketing tab. The channel category is also removed as a mapped
category in the Categories tab.

Adding related offers for up-sell or cross-sell


One or more up-sell and cross-sell offers can be added to the offer. Since the procedures for both are similar, only the procedure to add an up-sell offer is listed here.

1. Click the Marketing tab.


2. Click the Add More link on the grouping for Offers to Upsell.
3. In the entry field that displays, click the Expanded View icon (which appears when you place the cursor in the entry field).
4. In the Offers to upsell dialog window, ensure Offer Catalog is selected for Catalog.
5. Select an offer from the Display list. Alternatively, you can use the search functions to search for the offer in the same dialog window. Click OK to add the selected
up-sell offer, and click Save.

Publishing an offer
Using the workflow feature in the collaborative MDM, an offer can be published to the physical MDM. That is, the offer details are captured and are added or updated in
the physical tables of the physical MDM. When you publish an offer, there might be additional dependent business objects that also need to be published to the physical
MDM. These can include specs, categories, and hierarchies. The Publish Offer function in the sample banking solution automatically determines all of the dependent
business objects that the offer requires for a successful publish, and publishes those objects to the physical MDM before publishing the offer.
Instead of having the Publish Offer workflow automatically detect and publish all of the dependent categories, hierarchies, and specs, these objects can also be published
separately before you publish the offer.

Limitations: The Relationship and Lookup table attribute types, which link to other business objects, are not supported by the MDM unified spec in the current release.
These attribute types are treated as strings when published to the physical MDM. In the sample banking solution data model, a few fields are defined with these
attribute types. As a result, only numeric IDs are published to the physical MDM instead of values that are meaningful to end users. Examples of attributes in the
Offer spec that have such attribute types:

Offered Items/Item
Offers to Cross-sell
Offers to Up-sell
Fees/Fee Feature
Rates/Rate Feature
Currency Type

Offers deleted from the collaborative MDM are not synchronized with the physical MDM through the publish offer function.

1. Select the offer from the left navigation pane, and click Checkout.
2. Select the Publish Offer workflow. Alternatively, you can access the Publish Offer workflow by right-clicking on the offer from the left navigation pane.
If the offer is successfully published, the offer is updated with the following information: Last Successfully Published and Last Publish Results.
If the publish fails, the workflow is in the Fixit step, and the relevant error message is listed in the Last Publish Results fields of the offer. If you encounter this
situation, resolve the errors and republish the offer.

Adding sellable items to the offer


An offer can include one or more sellable items from the Sellable Item Catalog. A sellable item is the product or service that underpins the offer.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Adding sellable items to the offer


An offer can include one or more sellable items from the Sellable Item Catalog. A sellable item is the product or service that underpins the offer.

Procedure



1. Click the Items tab.
2. In the attribute Offered Item, click Add More.
3. In the Item field, click the Expanded View icon (which appears when you place the cursor in the entry field).
4. In the Item dialog window, ensure Sellable Item Catalog is selected for Catalog.
5. Select a sellable item from the Display list. Alternatively, you can use the search functions to search for the sellable item in the same dialog window. Click OK after
the sellable item is selected.
6. In the Offer form, under Offered Item, update the Required field for the selected sellable item.
7. Save the offer by clicking Save.

What to do next
To confirm your changes, click the Features tab; you should see the selected sellable item listed under Offered Item, along with the features that are associated with the
sellable item from the Sellable Item Catalog.
For each of the features listed, you can update the Required field as appropriate for the offer. Click Save when done.

Adding features to the offer


Additional features such as fees, rates, discounts or waivers can be added to the offer. Fees and Rates can be added in the Pricing tab, while Discounts and waivers
can be added in the Promotions tab. Since the procedure for each is very similar, only the procedure to add Fees is described here.
Adding target market segments to the offer
An offer can have one or more targeted market segments. In the sample banking solution, the various market segments are defined using the Market Segments
hierarchy.
Adding target market locations to the offer
An offer can have one or more targeted market locations. In the sample banking solution, the various locations are defined using the Locations hierarchy.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Adding features to the offer


Additional features such as fees, rates, discounts or waivers can be added to the offer. Fees and Rates can be added in the Pricing tab, while Discounts and waivers can be
added in the Promotions tab. Since the procedure for each is very similar, only the procedure to add Fees is described here.

Procedure
1. Click the Pricing tab.
2. To add Fees, click Add More on the grouping for Fees.
3. In the Fee Feature field, click the Expanded View icon (which appears when you place the cursor in the entry field).
4. In the Fee Feature dialog window, ensure Feature Catalog is selected for Catalog.
5. Select a fee feature from the Display list. Alternatively, you can use the search functions to search for the feature in the same dialog window. Click OK after the
feature is selected.
6. Complete the fields related to the selected fee as appropriate.
For example, base fee, pricing, and tiers.
7. Save the offer by clicking Save.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Adding target market segments to the offer


An offer can have one or more targeted market segments. In the sample banking solution, the various market segments are defined using the Market Segments hierarchy.

About this task


This procedure adds the market segments to the offer as attributes only. The market segments are not added as mapped categories for the offer. If you want
to view the offer by using the Market Segments hierarchy view, you need to independently map the market segment categories from the Categories tab.

Procedure
1. Click the Marketing tab.
2. Click the Add More link on the grouping for Market Segments.
3. Select a market segment from the list.
4. Save the offer by clicking Save.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)



Adding target market locations to the offer
An offer can have one or more targeted market locations. In the sample banking solution, the various locations are defined using the Locations hierarchy.

About this task


This procedure adds the locations to the offer as attributes only. The locations are not added as mapped categories for the offer. If you want to view the offer
by using the Locations hierarchy view, you need to independently map the location categories from the Categories tab.

Procedure
1. Click the Marketing tab.
2. Click the Add More link on the grouping for Market.
3. Click the Add More link on the grouping for Location.
4. Select a location from the list.
5. Save the offer by clicking Save.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Managing categories
Having a central product catalog is important. Without a central product catalog, product managers must dig deep into documentation of legacy systems, including getting
IT involved, to determine the key capabilities of existing products and the inherent product capability that is built into the systems.

Hierarchies, and their categories, provide an organization structure to categorize items within product catalogs. In the Banking Solution Sample, all catalogs must be
defined with a primary hierarchy. Additional hierarchies can be created for use within one or more catalogs, allowing the user to view or organize catalog items through
different hierarchical structures. For example, in the Offer Catalog of the sample banking solution, the Offer Types hierarchy is the primary hierarchy that is defined for the
catalog. But the user can also view items under other hierarchies such as Channels hierarchy, Market Segments hierarchy and Locations hierarchy.

The following product documentation focuses primarily on how the hierarchies are being used in relation to the Banking Solution Sample.

Adding a category to an existing hierarchy


You can add new categories to the existing hierarchies in the sample banking solution: Offer Types, Channels, Market Segments and Locations.

1. From the left navigation pane, select the hierarchy from the menu to add the module.
2. Provide values to complete the category details.
3. Click Save to add the category to the hierarchy.

Updating a category
1. From the left navigation pane, select the hierarchy from the menu to add the module.
2. From the left navigation pane, select the category to update.
3. Update the category details and click Save.

Removing a category
You can delete a category (that has not yet been published). If the category has been published to the physical MDM (as indicated by the Last Successfully Published
fields), the category remains in the physical MDM. The publish function does not automatically delete the category from the physical MDM.

1. From the left navigation pane, select the hierarchy from the menu to add the module.
2. From the left navigation pane, select the category to be deleted.
3. Right-click on the category and select Delete, and click Save.

Moving a category to a different parent


You can change the hierarchical structure by moving a category from one parent to another.

1. From the left navigation pane, select the hierarchy from the menu to add the module.
2. From the left navigation pane, select the category to be moved.
3. Right-click on the category and select Cut.
4. Select the new parent for the category.
5. Right-click and select Paste.
6. In the dialog window that displays, confirm that you want to change the parent of the category by clicking OK. The changes should appear in the hierarchy tree.
Click the refresh icon if necessary.
7. Click Save to save the changes to the hierarchy.



Assigning a secondary spec to a category
Secondary specs can be associated with categories to define additional item attributes that an item can have if it is mapped to those categories. The Channels hierarchy in
the sample banking solution is set up to illustrate how secondary specs can be used to add item attributes to an offer based on the Channels categories that are assigned
to the offer.
The following procedure assumes that the secondary spec has already been created through the Spec Console.

1. Select a category to edit.


2. Click the Specs tab, and select Manage Specs > Add Specs.
3. In the dialog window, select Item Hierarchy Specs for the Type of Specs field.
4. In the search field, enter the name of the secondary spec to add to the category, and click Search.
5. In the search results, check the appropriate secondary spec, and click Select.
6. For Apply Specs to Items In, select the appropriate catalog or All Catalogs from the list, then click Done. The selected secondary specs should be displayed under
Item Specs.
7. Click Save to save the category updates.

Publishing a category
Using the workflow feature in the collaborative MDM, a category can be published to the physical MDM so that published offers can be viewed or accessed within the
physical MDM by using the same hierarchical structure as in Product Master. When a category is selected for publishing, the publish category function publishes the
selected category's parent and child categories to the physical MDM. If the category has secondary specs that are associated with it, those specs are automatically
published.
You can publish a category as many times as required, especially after there are updates to the hierarchy (for example, changes to parent or child categories, category
name, associated or disassociated secondary specs).

It is not necessary for users to independently publish categories. The Publish Offer workflow publishes all of the categories (and specs) required by the offer to the
physical MDM as well.

Limitations: Categories that are deleted from the collaborative MDM are not synchronized with the physical MDM through the publish category function.

A category cannot have multiple parents. Although the sample banking solution user interface does not prevent the user from assigning multiple parents to a category,
publishing a category with multiple parents results in errors.

1. Select the category from the left navigation pane. Click Checkout, then select the publish category workflow. Alternatively, right-click on the category from the left
navigation pane and select the appropriate category publish workflow. The available publish category workflows for selected hierarchies are: Publish Offer Type,
Publish Market Segment and Publish Channel.
2. If the category is successfully published, the category is updated with the following information: Last Successfully Published and Last Publish Results.
3. If the publish fails, the workflow is in the Fixit step, and the relevant error message is listed in the Last Publish Results field of the offer. If you encounter this
situation, resolve the error and republish the category.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Developing with the Banking Solution Sample and Toolkit


The Banking Solution provides a solution template. Depending on your banking needs, you need to further build upon the banking solution template. A developer uses the
spec model editor in the workbench to enhance the data model. All data model changes must be made through the workbench.

Prerequisites
The Banking Solution is only available in the IBM® Product Master.
The Banking Solution works by synchronizing data in the Product Master to data in the Physical Master Data Management. For the synchronizing to work, all of the data
needs to be stored in the Product Master.

Setting working environment


After the Solution Toolkit is installed and the Banking Solution data model and sample are deployed as described in the previous sections, the solution is ready to use.
Make sure that the user name and password, and the endpoint URL for the physical MDM, are correctly specified in the mdm.properties file. No further setup is required.

Extending the data model


The Banking Solution data model includes:

Catalogs
Hierarchies and categories
Specs
Association of specs to catalogs, hierarchies, and categories
Sample product data

Modifying objects
More catalogs, hierarchies, categories, and product data can be easily added as necessary. The included data models can also be modified as needed.
Managing product offers in the IBM Product Master and the physical MDM
To synchronize product offers in Product Master to the physical MDM, the following must take place:



1. All of the data needs to be stored in Product Master.
2. The data then is published to the physical MDM.
3. If Product Master and the physical MDM are unified for the spec module, store the spec data in the same place.

For more information about managing product offers, see Managing offers.

Data synchronization
Data synchronization consists of the following:

1. Synchronizing an item in Product Master to a product in the physical MDM.


2. Synchronizing a category in Product Master to a category in the physical MDM.
3. Synchronizing a spec in Product Master to a spec in the physical MDM.

Modifying the mapping


In the Banking Solution Sample, the following tasks take place:

1. The user authors offers, items, and features data in Product Master through the user interface.
2. The user publishes offers, items, and features to Advanced Edition in the format of Product Master XML data representation. This consists of the following:
a. Retrieve XML data from Product Master.
b. Send XML data from Product Master to the physical MDM through maintain-like transactions.
c. Map data in the physical MDM through maintain-like transactions to the BObj that can be consumed by the physical MDM lower-level transaction. This can be
done through the Adapter Service Interface (ASI).

Data mapping is defined in the following stylesheet (xsl) files:

For category: maintainCategory.xsl


For item: maintainMultipleProducts.xsl
For spec:
addSpec.xsl, for adding a new spec.
addSpecFormat.xsl, for adding and removing attributes from a spec that has been published before.
updateSpec.xsl, for updating attribute definitions of a spec that has been published before.

The construction of the BObj for each type of synchronization can be found in the corresponding XSL file. In most situations, additional BObjs cannot be generically and
efficiently constructed in the XSL file; in such cases, APIs are called to construct those additional BObjs. Therefore, modifying the mapping (which constructs the BObjs)
requires modifying the XSL file as well as the API for the corresponding synchronization.

Creating a collaborative MDM extension point to publish to the physical MDM


Perform the following steps to create a collaborative MDM extension point to publish to the physical MDM. For more details, see any of the workflows included in the
sample data model.

1. Define the workflow.


Note: One workflow needs to be defined for the hierarchy, catalog, and spec.
2. Configure the flow to the physical MDM instance.
Note: The flow should start with the Initial Step and end with the Publish Step, along with the steps that the user can choose in case of failure.
3. Specify the script, which invokes the extension point class to do the publishing to the physical MDM. For example,

//script_execution_mode=java_api="japi://com.ibm.mdm.banking.extensionpoints.CategoryPublishWorkflowStepFunction.class"

This script invokes the class, which publishes a category.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Integrating IBM InfoSphere Information Server


You can integrate IBM® InfoSphere® Information Server with Product Master. This integration enables use of IBM InfoSphere Business Glossary and IBM InfoSphere
DataStage® with Product Master.

Product Master provides Java™ APIs that can be used to export PIM metadata, such as specs and attribute collections, for use in IBM InfoSphere Information Server tools:
IBM InfoSphere Business Glossary and IBM InfoSphere DataStage. Data from IBM InfoSphere DataStage can also be imported into PIM using the tools and scripts
available within the product.

Overview of IBM InfoSphere Information Server integration


IBM Product Master (Product Master) provides scripts, documentation, and Java APIs that make integration with the IBM InfoSphere Information Server easier.
Product Master provides capabilities to export metadata such as specs and attribute collections to IBM InfoSphere Information Server. The capabilities are enabled in two
steps:

Metadata export
Export attribute collections in XSD format with Product Master APIs. This XSD can then be consumed in the IBM InfoSphere Information Server tool that is called the
IBM InfoSphere DataStage and IBM InfoSphere QualityStage® Designer. This is a manual step in IBM InfoSphere Information Server. When imported, the metadata is
stored in the IBM InfoSphere Information Server repository.
Glossary export



Export of Product Master spec attribute name details, with Product Master APIs, in an XML format that IBM InfoSphere Business Glossary recognizes. Consuming
this XML in IBM InfoSphere Business Glossary is a manual step.

Data from inbound streams such as classic applications and data feeds into IBM InfoSphere Information Server can be mapped to the metadata that was imported in IBM
InfoSphere Information Server. This mapping must be done manually in IBM InfoSphere Information Server. IBM InfoSphere DataStage can then be used to apply
transformation rules, and IBM InfoSphere QualityStage can further help with data cleansing.

The final transformed output of IBM InfoSphere DataStage can be consumed by Product Master with the product APIs. Product Master recognizes two data formats:
CSV files and database tables.

You can use the IBM InfoSphere DataStage and IBM InfoSphere QualityStage components to transform and cleanse incoming data that is loaded from upstream systems
into Product Master. You can also use the IBM InfoSphere DataStage and IBM InfoSphere QualityStage components to publish data of improved quality to downstream
systems.

Exporting for the business glossary


The glossary export capability exports IBM Product Master specification attribute name details in the business glossary format.
The following example is an interface method for glossary export.

/**
* This method takes a Collection of String representing the Spec Names as the
* input parameter and exports the metadata (spec) description in the
* Business Glossary format in an XML file and uploads the file to the
* docstore for each spec.
*
* @param specNames Collection of Spec names
* @throws PIMInternalException when internal error occurs
* @throws IllegalArgumentException when the Collection of SpecNames is null
*/
public void exportBusinessGlossary(Collection<String> specNames);

/**
* This method returns a Business Glossary XML as String for given Spec
* name.
*
* @param specName
* Spec name to be exported for Business Glossary
* @return String XML content as a string
* @throws PIMInternalException when internal error occurs
* @throws IllegalArgumentException when the specName is null
*/
public String getBusinessGlossary(String specName);
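The following is a minimal usage sketch of these methods, assuming a reachable Product Master instance, the Java API JAR on the class path, and that the methods are available on the IISIntegration object that is returned by the context (the login pattern matches the trigger script shown later in this section). The user name, password, company code, and spec names are placeholders; the IISIntegration package appears as com.ibm.pim.interfaces in one place and com.ibm.pim.utils in another in this section, so adjust the import to your release.

import java.util.Arrays;

import com.ibm.pim.context.Context;
import com.ibm.pim.context.PIMContextFactory;
import com.ibm.pim.utils.IISIntegration;

public class GlossaryExportSample {
    public static void main(String[] args) throws Exception {
        // Log in to the company; same call that the trigger script uses.
        Context context = PIMContextFactory.getContext("Admin", "password", "trigo");
        IISIntegration iis = context.getIISIntegration();

        // Upload a Business Glossary XML file to the docstore for each named spec.
        iis.exportBusinessGlossary(Arrays.asList("OfferSpec", "ItemSpec"));

        // Or fetch the glossary XML for a single spec as a string.
        String glossaryXml = iis.getBusinessGlossary("OfferSpec");
        System.out.println(glossaryXml);
    }
}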

Loading data from upstream systems


For more information, see Loading data from upstream systems.

Publishing data to downstream systems


For more information, see Publishing data to downstream systems.

Publishing data to downstream systems


You can use the IBM InfoSphere Information Server product, along with IBM Product Master, to transform and publish data to downstream systems.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Loading data from upstream systems


You can use the IBM® InfoSphere® Information Server product, along with IBM Product Master, to transform data from upstream systems.

Before you can load data from upstream systems, you must have a data model that is defined in Product Master. The model includes your data objects, data values,
specifications, and catalogs.

Defining the attribute collection


You can define an attribute collection in Product Master to match the data that you want to import from IBM InfoSphere DataStage®. You can then import this metadata
from Product Master into IBM InfoSphere DataStage to design and build your IBM InfoSphere DataStage jobs. Before you can export your data from external or input
systems to cleanse it using IBM InfoSphere DataStage, you must define the attribute collection that includes the attributes from one or more of your specifications.
Before you can define an attribute collection, you must create your IBM Product Master data model.

You can use the Java™ API, the Script API, or the user interface to define your attribute collection.

Exporting the attribute collection


To cleanse the product information data, you must export the attribute collections that are created in Product Master in a format that IBM InfoSphere DataStage can use.
Before you can export the attribute collection, you must:



Create your IBM Product Master data model.
Define your attribute collection.

You can use the Java classes and methods in the com.ibm.pim.interfaces.IISIntegration interface to use the export capability. The attribute collection information is
exported in the XSD format. Product Master metadata properties that cannot be mapped to XML schema format are represented as annotation tags.

Interface methods to perform metadata export are:

/**
* This method takes a Collection of String representing the Attribute
* Collection names as the input parameter and exports the metadata (of the
* Attribute Collection) in XML Schema format in an XSD file and uploads the
* file to the docstore for each Attribute Collection.
*
* @param attrCollNames
* Collection of AttributeCollection names
* @return List containing the Attribute Collection names which were not
* exported due to some error.
*
* @throws PIMInternalException when internal error occurs
* @throws IllegalArgumentException when the Collection of
* AttributeCollection Name is null
*/
public List<String> exportAttributeCollectionMetadata(Collection<String> attrCollNames);

/**
* This method returns Metadata information in a String having XML Schema
* format, for a given AttributeCollection name.
*
* @param attrCollName
* Attribute Collection name to be exported for Metadata
* Information
* @return String XML Schema content as a string
* @throws PIMInternalException when internal error occurs
* @throws IllegalArgumentException when the AttributeCollection name is null
*/
public String getAttributeCollectionMetadata(String attrCollName);
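A minimal usage sketch for these methods follows, under the same assumptions as the glossary example earlier in this section (placeholder credentials and names; adjust the IISIntegration import to your release).

import java.util.Arrays;
import java.util.List;

import com.ibm.pim.context.Context;
import com.ibm.pim.context.PIMContextFactory;
import com.ibm.pim.utils.IISIntegration;

public class AttributeCollectionExportSample {
    public static void main(String[] args) throws Exception {
        Context context = PIMContextFactory.getContext("Admin", "password", "trigo");
        IISIntegration iis = context.getIISIntegration();

        // Uploads one XSD file per attribute collection to the docstore; the
        // returned list names any collections that failed to export.
        List<String> failed =
            iis.exportAttributeCollectionMetadata(Arrays.asList("LoaderAttrColl"));
        if (!failed.isEmpty()) {
            System.err.println("Not exported: " + failed);
        }
    }
}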

The following table provides a mapping between node rules and XSD representation.

Table 1. Properties to XSD mapping


Node rule XSD representation
Max occurrence Schema element maxOccurs value
Min occurrence Schema element minOccurs value
Type Schema element type
Date only Schema element type (date or dateTime)
Default value Element default
Max length Schema simple type maxLength facet
Min length Schema simple type minLength facet
Pattern Schema simple type pattern facet
Precision Schema simple type fractionDigits facet
Number enumeration Schema simple type enumeration facet
String enumeration Schema simple type enumeration facet
The following table provides the mapping between metadata type and the XML schema.

Table 2. XSD representation of the metadata


Type XML schema
binary, image, thumbnail anyURI (URL)
currency decimal
date date or dateTime. If the Date Only attribute on the node is true, the type is date; otherwise, it is dateTime.
flag boolean
URL, image URL, thumbnail URL anyURI
integer integer
lookup table string (key into Lookup Table)
number decimal
number enumeration decimal (enumeration)
password string
relationship anyURI (item reference)
sequence integer
string string
string enumeration string (enumeration)
timezone string
group a grouping node is represented as a complex type with a sequence that contains all of the attributes within that group.
Annotation tags are used to represent some Product Master metadata properties that cannot be captured in XML schema format.

Table 3. Metadata properties represented as annotation tags


Node property Description
Type Original product type field.
Attribute name Original attribute name storage field.

Attribute path The product attribute path. This property can be used while you import data from IBM InfoSphere Information Server to the product.
Primary key Unique identifier for Item.
Lookup table Lookup table for lookup-type.
Unique Creates a requirement for the node to be unique in the catalog. If you attempt to enter a duplicate value, an error occurs. This option is only available in
primary specs.
Non-persistent Makes the attribute value non-persistent. The value is derived by using a script.
Localized Set attribute for localization. The company must be set with the wanted locales.
Link Defines a node as the "source attribute" or "foreign key" to the master catalog. Option only available in primary specs.
Indexed Product supports searching on this field.

Importing the schema into IBM InfoSphere Information Server


You must import your XSD file into IBM InfoSphere Information Server. Importing the XSD creates the table definitions in IBM InfoSphere DataStage that outline the
column names, data type, length, and other column properties of the data that you want IBM InfoSphere DataStage to process.
Before you can import the schema into IBM InfoSphere Information Server, you must:

Create your data model.


Define your attribute collection.
Write a trigger script in Product Master that uses the APIs to export the attribute collections created.
Sample script:

getContextMethodDefinition =
createJavaMethod("com.ibm.pim.context.PIMContextFactory","getContext","java.lang.String","java.lang.String","java.lang.Str
ing") ;
context = runJavaMethod(null, getContextMethodDefinition, user_name, password, company_name) ;

getIISIntegrationDefinition = createJavaMethod("com.ibm.pim.context.Context","getIISIntegration") ;
IISIntegration = runJavaMethod(context , getIISIntegrationDefinition) ;

getAttributeCollectionMetadataDefinition =
createJavaMethod("com.ibm.pim.utils.IISIntegration","getAttributeCollectionMetadata","java.lang.String") ;
xsdString = runJavaMethod(IISIntegration , getAttributeCollectionMetadataDefinition, attribute_collection_name) ;

xsdString = substring(xsdString,38);

out.write(xsdString);

Export your attribute collection as an XSD file.

Table definitions are flat lists of attributes. If you want to represent the complex structures in IBM Product Master specs, you must create multiple table definitions for
each grouping level within the hierarchical structure. You must run the import process once for each table definition that you want to create.
If the spec allows for multiple addresses, and if the source feeds multiple addresses, then you must have a table definition for the addresses. If the source has one
address, then the address attributes can be part of the main table definition that is imported from the XSD.

1. Open the IBM InfoSphere DataStage and IBM InfoSphere QualityStage® Designer.
2. Click Import > Table Definitions > XML Table Definitions.
3. In the Import window, click File > File from the Web. In the subsequent dialog, provide the URL to run the attribute collection export trigger script, which is
already created in Product Master.
URL format

http://server_name:port/utils/invoker.jsp?
company_code=company_code&invoking_user=user_name&bUserOutput=false&script=trigger_script_name

Sample URL

http://localhost:8080/utils/invoker.jsp?
company_code=trigo&invoking_user=Admin&bUserOutput=false&script=GetAttrCollectionMetadata.wpcs

Where,
GetAttrCollectionMetadata.wpcs is the trigger script that is created for exporting the attribute collection. This script fetches the attribute collection metadata
into IBM InfoSphere DataStage. Establish primary key relationships and save the table definition. A programmatic way to fetch this URL is sketched after this
procedure.
4. Repeat steps 2 and 3 for each table definition that you want to create.
5. Export the Product Master spec and save it as a file. From IBM InfoSphere Business Glossary, click the Manage Categories tab.
6. Browse to the file name of the spec that is saved and click Upload. Spec attributes are imported per business glossary definitions.

Building InfoSphere DataStage and InfoSphere QualityStage jobs


You can use the IBM InfoSphere DataStage and IBM InfoSphere QualityStage Designer to build jobs that cleanse, transform, and write data to a database table or a
comma-separated value (CSV) file. After the data is written, you can import the data into IBM Product Master.
Before you can build IBM InfoSphere DataStage and IBM InfoSphere QualityStage jobs, you must:

Identify the product data sources and ensure that you have access to the sources.
Ensure that you understand the content, structure, and initial quality of the product data sources. You can use the Information Analyzer component of the IBM
InfoSphere Information Server product to analyze the data sources.

You need to use the IBM InfoSphere DataStage Designer to create InfoSphere DataStage or InfoSphere QualityStage jobs.
Important: Ensure that the output file contains data in the correct pair format (column, column-value).

1. Open the InfoSphere DataStage Designer and create a job that runs one or more of the following tasks.
a. Compute a frequency distribution for:
Eliminating duplicate records.
Matching input records.



Determining the clerical pairs.
b. Validate field values.
c. Validate that a manufacturer in the source data is recognized by the product.
d. Correct misspellings.
e. Specify which part of the hierarchy an item should be placed into based on business rules.
f. Create the input file of cleansed master product data to load into the product.

Running InfoSphere DataStage and InfoSphere QualityStage jobs


You must run the IBM InfoSphere DataStage and IBM InfoSphere QualityStage job that cleanses, transforms, and delivers data to a staging server or staging table.
Before you can run IBM InfoSphere DataStage and IBM InfoSphere QualityStage jobs, you must:

Identify the product data sources and ensure that you have access to the sources.
Ensure that you understand the content, structure, and initial quality of the product data sources. You can use the Information Analyzer component of the IBM
InfoSphere Information Server product to analyze the data sources.
Build the IBM InfoSphere DataStage and IBM InfoSphere QualityStage jobs. For information, see Building InfoSphere DataStage and InfoSphere QualityStage jobs.

1. Define the jobs by using the graphical job sequencer and run the jobs.

Importing an InfoSphere DataStage output file


For more information, see Importing an InfoSphere DataStage output file.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Importing an InfoSphere DataStage output file


You can import your cleansed and transformed data from IBM® InfoSphere® DataStage® into IBM Product Master. The data loader program available in Product Master
reads the data and adds items to the appropriate categories and workflows.

Before you begin


Before you can import an output file, you must:

Build your IBM InfoSphere DataStage and IBM InfoSphere QualityStage® jobs. For more information about DataStage, see Building InfoSphere DataStage and
InfoSphere QualityStage jobs.
Run your IBM InfoSphere DataStage and IBM InfoSphere QualityStage jobs. For more information about DataStage, see Running InfoSphere DataStage and
InfoSphere QualityStage jobs.
Ensure that the data is written to a comma-separated values (CSV) file or database table.
Build a custom catalog to import items, or a collaboration area to check out items.

About this task


The data loader program can be used for the following activities:

Import items into multiple catalogs.


Categorize the imported items under multiple categories. You can specify the entire category path in the data file. If the category path is invalid, the item is
imported to the Unassigned category.
Enable or disable validation during item save.
Populate relationship attributes, link attributes, localized attributes, and multi-occurrence attributes.
Place all the items into a workflow and extend the workflow logic.
Support different action modes like create, update, and delete.
Support for delta load (update existing products).
Take advantage of the batching framework for better performance. The loader uses the batching APIs exposed by the Java™ API layer to allow the items to be
saved by using the batching framework.
Can be extended by the users.

You can create a report by using the Product Master graphical user interface or by using a shell script that is provided as part of the data loader.

Installation and execution of the data loader program by using the Product Master user interface

1. The source code of the data loader program is made available as a compressed file.
2. The data_loader.jar file is available in the $TOP/Generic_Data_Loader/data_loader_distribution/ directory. To use the data loader program, you must first add this
JAR file to the product class path. You can do this through either the User JAR mechanism or the dynamic docstore class-loader mechanism that is supported by
exit points.
3. Log in to the Product Master GUI with the user ID that has administrative privileges.
4. Create a spec. To do so, click Data Model Manager > Specs/Mappings > Specs Console. Create a spec with the following attributes:
Property file name
Data file name
Option
Job ID
Start record
Stop record



5. Click Product Manager > Reports > Reports Console.
6. Click New to create a new report.
7. Set the parameter values for the report:
a. Type the path of the property file. The path can be a docstore path or a file system path. In the case of the file system path, the prefix file:// must be used
before the path.
b. Type the path of the data file. This is similar to the property file path.
c. Specify the type of data file. Type either csv or table in the Loader_Report_Input_Spec/Option field. The value should be specified in lowercase.
d. Set the options Job_Id, Start_Record_Id, and Stop_Record_Id while you load data from the database.
8. Create a report script: //script_execution_mode=java_api="japi://com.ibm.ccd.loader.LoaderReportGenerateFunction.class"
You can use either of the shell scripts csv_loader_invoke_job.sh and table_loader_invoke_job.sh to create and run the report with the specified name.

Remember: The LoaderJobInvoker.class file must be added to the product class path by using either the User JAR mechanism or the dynamic docstore loader
mechanism.
9. The report script invokes the data loader program through the exit points:
//script_execution_mode=java_api="japi://com.ibm.ccd.loader.LoaderReportGenerateFunction.class"
Run the report to start the data loader program. The report imports items into the relevant catalog or checks out items into the appropriate collaboration area.

The messages are logged in the default.log file of the scheduler service. For any error or exception, you can view the exception.log file of the scheduler service.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

csv_loader_invoke_job.sh Script
Use the csv_loader_invoke_job.sh script to create a report job that uses the loader program as the report script. This job imports data from a comma-separated value
(CSV) file that contains the output from an IBM® InfoSphere® DataStage® and IBM InfoSphere QualityStage® job.

Purpose
The csv_loader_invoke_job.sh script imports data from a CSV formatted output file that is generated from IBM InfoSphere Information Server.

Syntax for the csv_loader_invoke_job.sh script


csv_loader_invoke_job.sh
--username = user --password = password

--jobtype = report --jobname = jobname


--company = company

--propertyFilePath = file:///<absolute_property_file_system_path>

--dataFilePath = file:///<absolute_data_file_system_path>

Arguments
Command arguments and their values have the following meanings:

--username
Indicates that you want to specify the name of the user for the company.
This argument is required.

user
User name to log in to the company.
--password
Indicates that you must supply a password to log in to the company.
This argument is required.

password
Password to be used to log in to IBM Product Master.
--company
Indicates that you want to load data for a specific company.
company
The name of the company that the data is for.
--jobtype
Indicates the type of job that you want to specify. Currently, the only value that is accepted is report, because the built-in mechanism for invoking the loader uses
reports.
report
Creates a report job. This is the only valid value that can be specified for the --jobtype argument.
--jobname
Indicates that you want to specify the name of the job.
jobname
The name of the job.
--propertyFilePath
Indicates that you want to specify the path of the property file.
file:///<absolute_property_file_system_path>



The path of the property file.
--dataFilePath
Indicates that you want to specify the path of the data file.
file:///<absolute_data_file_system_path>
The path of the data file.

Example: Loading data from a CSV file


In this example, data is loaded for the abc_company company from a CSV file.

csv_loader_invoke_job.sh --username=user --password=password --company=abc_company --jobtype=report --jobname=jobname
--propertyFilePath=file:///<absolute_property_file_system_path> --dataFilePath=file:///<absolute_data_file_system_path>

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Create report using table_loader_invoke_job.sh script


Use the table_loader_invoke_job.sh script to create a report job that uses the loader program as the report script. This job imports data from a database table that contains the
output from an IBM® InfoSphere® DataStage® and IBM InfoSphere QualityStage® job.

Purpose
The table_loader_invoke_job.sh script imports data from the database table.

Syntax for the table_loader_invoke_job.sh script


table_loader_invoke_job.sh --username = user --password = password --company = company

--jobtype = report --jobname = jobname --propertyFilePath = propertyfile

--jobid = jobid --startRecordid = startRecordid --stopRecordid = stopRecordid

Arguments
Command arguments and their values have the following meanings:

--username
Indicates that you want to specify the name of the administrative user for the company.
This argument is required.

user
User name to log in to the company.
--password
Indicates that you must supply a password to log in to the company.
This argument is required.

password
Password to be used to log in to IBM Product Master.
--company
Indicates that you want to load data for a specific company.
company
The name of the company that the data is for.
--jobtype
Indicates that you want to specify the type of job. Currently, the only value that is accepted is report, because the built-in mechanism for invoking the loader uses
reports.
report
Creates a report job. This is the only valid value that can be specified for the --jobtype argument.
--jobname
Indicates that you want to specify the name of the job.
jobname
The name of the job. This name is also used for the report that is created.
--propertyFilePath
Indicates that you want to specify the path of the property file.
file:///<absolute_property_file_system_path>
The path of the property file.
--jobid
Indicates that you want to specify the ID of the job.
jobid
The ID of the job. ID is an integer.
--startRecordid
Indicates that you want to specify the record ID of the first record to be loaded.



startRecordid
The ID of the first record to be loaded.
--stopRecordid
Indicates that you want to specify the record ID of the last record to be loaded.
stopRecordid
The ID of the last record to be loaded.

Example: Loading data from a database table


In this example, data is loaded for the abc_company company from a database table. Only records with an ID from 101 through 601 are loaded.

table_loader_invoke_job.sh --username=user1 --password=sfssj49h --company=abc_company --jobtype=report --jobname=job1
--propertyFilePath=file:///<absolute_property_file_system_path> --jobid=11 --startRecordid=101 --stopRecordid=601

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Sample InfoSphere DataStage and InfoSphere QualityStage CSV output file data
The sample data illustrates the output format for the CSV files that IBM® InfoSphere® DataStage® and IBM InfoSphere QualityStage® must produce. In order for IBM
Product Master to import the data, the output file must be in this format.

Table 1. Sample CSV output file data


JobID RecordID Name Value
13 123 Catalog Loader_Catalog
13 123 ActionMode Update
13 123 Loader_Primary_Spec/pk 10
13 123 Loader_Primary_Spec/display_name name
13 123 Loader_Primary_Spec/currency_attr 123
13 123 Loader_Primary_Spec/grouping/child1 child1value
13 123 Loader_Primary_Spec/grouping/child2 child2value
13 123 Loader_Primary_Spec/multioccur_attr#0 first occurrence
13 123 Loader_Primary_Spec/multioccur_attr#1 second occurrence
13 124 Catalog Loader_Catalog
13 124 ActionMode Update
13 124 Loader_Primary_Spec/pk 11
13 124 Loader_Primary_Spec/display_name 11name
13 124 Loader_Primary_Spec/relationship_attr Loader_Catalog|10;
13 125 Catalog Loader_Catalog
13 125 Category Loader_Hierarchy|cat1
13 125 ActionMode Update
13 125 Loader_Primary_Spec/pk 12
13 126 Catalog Loader_Catalog
13 126 ActionMode Update
13 126 Loader_Primary_Spec/pk 13
13 127 Catalog Loader_Catalog
13 127 Category Loader_Hierarchy|cat1
13 127 ActionMode Update
13 127 Loader_Primary_Spec/pk 14
13 128 Catalog Loader_Catalog
13 128 ActionMode Update
13 128 Loader_Primary_Spec/pk 15
13 129 Catalog Loader_Catalog
13 129 Category Loader_Hierarchy|cat1
13 129 ActionMode Update
13 129 Loader_Primary_Spec/pk 16
13 130 Catalog Loader_Catalog
13 130 ActionMode Update
13 130 Loader_Primary_Spec/pk 17
13 131 Catalog Loader_Catalog
13 131 ActionMode Update
13 131 Loader_Primary_Spec/pk 18
13 132 Catalog Loader_Catalog
13 132 ActionMode Update
13 132 Loader_Primary_Spec/pk 19

IBM Product Master 12.0 Fix Pack 8



Operating Systems: AIX, Linux, and Windows (Workbench only)

CSV columns and column values


The following documentation describes the CSV columns and the column values.

Important: The first line in the data file must be a header line (JobID, RecordID, Name, and Value). The header line is required.
Column and column values have the following meanings:

JobID

ID of the DataStage® job that created the data file. You can use the JobID to identify which records in the sample CSV output file table correspond to a
specific running of the IBM® InfoSphere® DataStage job.
The JobID must be unique for each job instance.

RecordID

RecordID identifies an item in the data file.


Rows within a job that have the same record ID correspond to a single item.
Rows for a single item must be placed contiguously in the file and not separated from one another.
IDs must be unique in a job.

Name

Identifies a row as corresponding to a category, catalog, attribute, or action mode, and specifies the action to take when you import data.
If the name refers to an attribute, then you must specify the full path for the attribute.
If the attribute is localized, then you must ensure that the path is appropriate for the locale. The path of the localized attribute in the data file must also have
the locale information like mySpec/name/en_US.
With relationship attributes, the related catalog and the primary key of the related item must be specified separated by a pipe delimiter.
The order of occurrence is important for the data to be imported correctly.

Value

When the "Name" is "Catalog" then "Value" gives the name of the catalog. This overrides the default catalog that is mentioned in the property file.
When the "Name" is "Category" then "Value" gives the entire path of the category that includes the hierarchy information. Hence you will be able to support
multiple hierarchies.
When the "Name" is "Action Mode" then "Value" can be any one of these "Create", "Update", or "Delete". "Action Mode" value is case-sensitive.
Note: "Create" mode is for creating a new item, which does not exist; "Delete" mode is for deleting an existing item, which is not checked out; and "Update"
mode is for updating an existing item, which is not currently checked out. In other cases, an error occurs.
For an attribute, "Value" holds the value of the attribute.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Data and status table format rules


The data table and the status table are a part of the IBM® Product Master schema. The data table (tldr_dat_data_table) and status table (tldr_stt_status_table) are added
to Product Master schema to store the data generated from IBM InfoSphere® Information Server and the status of the import respectively.

The tldr_dat_data_table table must be populated with the data from IBM InfoSphere Information Server; you must have a job in IBM InfoSphere Information Server to do
this. The tldr_stt_status_table table is populated with the status of the item import by the data loader program. Cleaning up these tables is beyond the scope of
the loader; you must clean them up regularly yourself.

The following rules and restrictions apply to the columns and values of these tables:
Table 1. Data table
Column name Column type Column length
JobID number 9.0
RecordID number 9.0
Name varchar2 256
Value varchar2 256
Table 2. Status table
Column name Column type Column length
JobID number 9.0
RecordID number 9.0
Status varchar2 256
Timestamp date
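To illustrate the row format that the data table expects, the following JDBC sketch inserts one name-value row that follows the column rules in Table 1. The JDBC URL, driver, and credentials are placeholders; in practice, an IBM InfoSphere Information Server job populates this table.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class DataTableInsertSample {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details for the Product Master schema.
        try (Connection con = DriverManager.getConnection(
                "jdbc:oracle:thin:@dbhost:1521:PIMDB", "pimuser", "password")) {
            String sql = "INSERT INTO tldr_dat_data_table "
                + "(JobID, RecordID, Name, Value) VALUES (?, ?, ?, ?)";
            try (PreparedStatement ps = con.prepareStatement(sql)) {
                ps.setInt(1, 13);                          // JobID: number (9,0)
                ps.setInt(2, 123);                         // RecordID: number (9,0)
                ps.setString(3, "Loader_Primary_Spec/pk"); // Name: varchar2(256)
                ps.setString(4, "10");                     // Value: varchar2(256)
                ps.executeUpdate();
            }
        }
    }
}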

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)



Property file information
The property file information illustrates the format that the property file uses to configure the import process. For IBM® Product Master to import the data, this
property file is required. The first line in the property file must be a header line (Property, Value). The properties in the property file are case-sensitive.

The property file is in the form of key-value pairs and contains the default values and the properties that are common to all the items in the data file. This file contains
properties such as the following:

Name of the default catalog. Items are imported into the catalog that is specified in the property file when the data file contains no catalog information for the item.
Name of the collaboration area. The Collaboration Area value can be the hard-coded collaboration area name or the fully qualified name of the implementation class
of the WorkflowIdentifier interface with the prefix "extn:".
Option to disable validation processing.
Size of the item checkout. This property defines the size of the collection of items to be checked out into the collaboration area.
Batch size of the items saved. The data loader program supports batching while saving the items, and ItemSaveBatchSize specifies the batch size.

Table 1. Sample property file information


Property Value
Catalog LoaderCatalog
CollaborationArea LoaderCollaborationArea
DisableProcessingOptions true
ItemCheckoutSize 100
ItemSaveBatchSize 100
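Put together, a property file that matches Table 1 might look like the following sketch. The comma separator is an assumption based on the required (Property, Value) header line; confirm the exact delimiter against a property file generated in your environment.

Property,Value
Catalog,LoaderCatalog
CollaborationArea,LoaderCollaborationArea
DisableProcessingOptions,true
ItemCheckoutSize,100
ItemSaveBatchSize,100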

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Publishing data to downstream systems


You can use the IBM® InfoSphere® Information Server product, along with IBM Product Master, to transform and publish data to downstream systems.

Procedure
1. Exporting the product data
2. Defining the attribute collection
3. Exporting the attribute collection
4. Importing the schema into IBM InfoSphere Information Server
5. Building InfoSphere DataStage and InfoSphere QualityStage jobs

Exporting the product data


You can use existing IBM Product Master features to export the product data into a CSV file that InfoSphere DataStage can read.
Defining the attribute collection
You can define an attribute collection to match the data that you want to import. The attribute collection must be exported before it can be imported. Before you
can import your data to cleanse it using IBM InfoSphere DataStage, you must define the attribute collection that includes the attributes from one or more of your
specifications.
Exporting the attribute collection
To cleanse the product information data, you must export the attribute collections that are created in Product Master in a format that IBM InfoSphere DataStage
can use.
Importing the schema into IBM InfoSphere Information Server
You must import your XSD file into IBM InfoSphere Information Server. Importing the XSD creates the table definitions in IBM InfoSphere DataStage that outline
the column names, data type, length, and other column properties of the data that you want IBM InfoSphere DataStage to process.
Building InfoSphere DataStage and InfoSphere QualityStage jobs
You can use the IBM InfoSphere DataStage and IBM InfoSphere QualityStage Designer to build jobs that cleanse, transform, and write data to a database table or a
comma-separated value (CSV) file. After the data is written, you can import the data into IBM Product Master.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Exporting the product data


You can use existing IBM® Product Master features to export the product data into a CSV file that InfoSphere® DataStage® can read.

About this task


You can write a script or generate an automated script to connect to the database. You can use either of the following methods from within Product Master to generate the script:



Procedure
1. Click Collaboration Manager > Export Console > Select Catalog Export Script.
This produces the CSV file.
2. Click Product Manager > Create/Edit Report.
This invokes the Report Console.
3. Use the Report Console to write a script.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Defining the attribute collection


You can define an attribute collection to match the data that you want to import. The attribute collection must be exported before it can be imported. Before you can
import your data to cleanse it using IBM® InfoSphere® DataStage®, you must define the attribute collection that includes the attributes from one or more of your
specifications.

Before you begin


Before you can define an attribute collection, you must create your IBM Product Master data model.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Exporting the attribute collection


To cleanse the product information data, you must export the attribute collections that are created in Product Master in a format that IBM® InfoSphere® DataStage® can
use.

Before you begin


Before you can export the attribute collection, you must:

Create your IBM Product Master data model.


Define your attribute collection. For more information about defining your attribute collection, see Defining the attribute collection.

About this task


You can use the Java™ classes and methods in the com.ibm.pim.interfaces.IISIntegration interface to use the export capability. The attribute collection information is
exported in the XSD format. Product Master metadata properties that cannot be mapped to XML schema format are represented as annotation tags.

Interface methods to perform metadata export are:

/**
* This method takes a Collection of String representing the Attribute
* Collection names as the input parameter and exports the metadata (of the
* Attribute Collection) in XML Schema format in an XSD file and uploads the
* file to the docstore for each Attribute Collection.
*
* @param attrCollNames
* Collection of AttributeCollection names
* @return List containing the Attribute Collection names which were not
* exported due to some error.
*
* @throws PIMInternalException when internal error occurs
* @throws IllegalArgumentException when the Collection of
* AttributeCollection Name is null
*/
public List<String> exportAttributeCollectionMetadata(Collection<String> attrCollNames);

/**
* This method returns Metadata information in a String having XML Schema
* format, for a given AttributeCollection name.
*
* @param attrCollName
* Attribute Collection name to be exported for Metadata
* Information
* @return String XML Schema content as a string
* @throws PIMInternalException when internal error occurs
* @throws IllegalArgumentException when the AttributeCollection name is null
*/
public String getAttributeCollectionMetadata(String attrCollName);

The following table provides a mapping between node rules and XSD representation.



Table 1. Properties to XSD mapping
Node rule XSD representation
Max occurrence Schema element maxOccurs value
Min occurrence Schema element minOccurs value
Type Schema element type
Date only Schema element type (date or dateTime)
Default value Element default
Max length Schema simple type maxLength facet
Min length Schema simple type minLength facet
Pattern Schema simple type pattern facet
Precision Schema simple type fractionDigits facet
Number enumeration Schema simple type enumeration facet
String enumeration Schema simple type enumeration facet
The following table provides the mapping between metadata type and the XML schema.

Table 2. XSD representation of the metadata


Type XML schema
binary, image, thumbnail anyURI (URL)
currency decimal
date date or dateTime. If the Date Only attribute on the node is true, the type is date; otherwise, it is dateTime.
flag boolean
URL, image URL, thumbnail URL anyURI
integer integer
lookup table string (key into Lookup Table)
number decimal
number enumeration decimal (enumeration)
password string
relationship anyURI (item reference)
sequence integer
string string
string enumeration string (enumeration)
timezone string
group a grouping node is represented as a complex type with a sequence that contains all of the attributes within that group.
Annotation tags are used to represent some Product Master metadata properties that cannot be captured in XML schema format.

Table 3. Metadata properties represented as annotation tags


Node property Description
Type Original product type field.
Attribute name Original attribute name storage field.
Attribute path The product attribute path. This property can be used while you import data from IBM InfoSphere Information Server to the product.
Primary key Unique identifier for Item.
Lookup table Lookup table for lookup-type.
Unique Creates a requirement for the node to be unique in the catalog. If you attempt to enter a duplicate value, an error occurs. This option is only available in
primary specs.
Non-persistent Makes the attribute value non-persistent. The value is derived by using a script.
Localized Set attribute for localization. The company must be set with the wanted locales.
Link Defines a node as the "source attribute" or "foreign key" to the master catalog. Option only available in primary specs.
Indexed Product supports searching on this field.

Procedure
1. Identify by name the attribute collections that you want to export.
2. Write a client application by using the Java API interfaces that are exposed with the IISIntegration interface, and then run the program to export the metadata
(a minimal client is sketched after this procedure). The Java program can run the same way as other Java API programs.
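A minimal client sketch for step 2 is shown below, under the same assumptions as the other Java API examples in this section (placeholder credentials and names; adjust the IISIntegration import to your release). It fetches the XML Schema for one attribute collection and saves it as the XSD file to be imported into IBM InfoSphere Information Server.

import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;

import com.ibm.pim.context.Context;
import com.ibm.pim.context.PIMContextFactory;
import com.ibm.pim.utils.IISIntegration;

public class ExportMetadataClient {
    public static void main(String[] args) throws Exception {
        Context context = PIMContextFactory.getContext("Admin", "password", "trigo");
        IISIntegration iis = context.getIISIntegration();

        // Fetch the XML Schema for the attribute collection and write the XSD file.
        String xsd = iis.getAttributeCollectionMetadata("LoaderAttrColl");
        Files.write(Paths.get("LoaderAttrColl.xsd"), xsd.getBytes(StandardCharsets.UTF_8));
    }
}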

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Importing the schema into IBM InfoSphere Information Server


You must import your XSD file into IBM® InfoSphere® Information Server. Importing the XSD creates the table definitions in IBM InfoSphere DataStage® that outline the
column names, data type, length, and other column properties of the data that you want IBM InfoSphere DataStage to process.

Before you begin


Before you can import the schema into IBM InfoSphere Information Server, you must:



Create your data model.
Define your attribute collection.
Write a trigger script in Product Master that uses the APIs to export the attribute collections created.
Sample script:

getContextMethodDefinition =
createJavaMethod("com.ibm.pim.context.PIMContextFactory","getContext","java.lang.String","java.lang.String","java.lang.Str
ing") ;
context = runJavaMethod(null, getContextMethodDefinition, user_name, password, company_name) ;

getIISIntegrationDefinition = createJavaMethod("com.ibm.pim.context.Context","getIISIntegration") ;
IISIntegration = runJavaMethod(context , getIISIntegrationDefinition) ;

getAttributeCollectionMetadataDefinition =
createJavaMethod("com.ibm.pim.utils.IISIntegration","getAttributeCollectionMetadata","java.lang.String") ;
xsdString = runJavaMethod(IISIntegration , getAttributeCollectionMetadataDefinition, attribute_collection_name) ;

xsdString = substring(xsdString,38);

out.write(xsdString);

Export your attribute collection as an XSD file.

About this task


Table definitions are flat lists of attributes. If you want to represent the complex structures in IBM Product Master specs, you must create multiple table definitions for
each grouping level within the hierarchical structure. You must run the import process once for each table definition that you want to create.
If the spec allows for multiple addresses, and if the source feeds multiple addresses, then you must have a table definition for the addresses. If the source has one
address, then the address attributes can be part of the main table definition that is imported from the XSD.

Procedure
1. Open the IBM InfoSphere DataStage and IBM InfoSphere QualityStage® Designer.
2. Click Import > Table Definitions > XML Table Definitions.
3. In the Import window, click File > File from the Web. In the subsequent dialog, provide the URL to run the attribute collection export trigger script, which is
already created in Product Master.
URL format: http://server_name:port/utils/invoker.jsp?
company_code=company_code&invoking_user=user_name&bUserOutput=false&script=trigger_script_name.

Sample URL: http://localhost:8080/utils/invoker.jsp?


company_code=trigo&invoking_user=Admin&bUserOutput=false&script=GetAttrCollectionMetadata.wpcs. Where,
GetAttrCollectionMetadata.wpcs is the trigger script that is created for exporting attribute collection. This script fetches the attribute collection metadata
into IBM InfoSphere DataStage. Establish primary key relationships and save the table definition.

4. Repeat steps 2 and 3 for each table definition that you want to create.
5. Export the Product Master spec and save it as a file. From IBM InfoSphere Business Glossary, click the Manage Categories tab. Browse to the file name of the spec
that is saved and click Upload. Spec attributes are imported per business glossary definitions.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Building InfoSphere DataStage and InfoSphere QualityStage jobs


You can use the IBM® InfoSphere® DataStage® and IBM InfoSphere QualityStage® Designer to build jobs that cleanse, transform, and write data to a database table or a
comma-separated value (CSV) file. After the data is written, you can import the data into IBM Product Master.

Before you begin


Before you can build IBM InfoSphere DataStage and IBM InfoSphere QualityStage jobs, you must:

Identify the product data sources and ensure that you have access to the sources.
Ensure that you understand the content, structure, and initial quality of the product data sources. You can use the Information Analyzer component of the IBM
InfoSphere Information Server product to analyze the data sources.

About this task


You need to use the IBM InfoSphere DataStage Designer to create InfoSphere DataStage or InfoSphere QualityStage jobs.
Important: Ensure that the output file contains data in the correct pair format (column, column-value).

Procedure
Open the InfoSphere DataStage Designer and create a job that runs one or more of the following tasks:

a. Compute a frequency distribution for:


Eliminating duplicate records.
Matching input records.



Determining the clerical pairs.
b. Validate field values.
c. Validate that a manufacturer in the source data is recognized by the product.
d. Correct misspellings.
e. Specify which part of the hierarchy an item should be placed into based on business rules.
f. Create the input file of cleansed master product data to load into the product.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Integrating Operational Decision Manager


You can use Product Master with IBM® Operational Decision Manager to view, create, edit, and delete Advanced Business Rules that are applicable to a Product
Master item or category for different types of business decisions through the Product Master user interface.

Introduction of Advanced Business Rules through IBM Operational Decision Manager


Advanced Business Rules through Operational Decision Manager integration enables business users to use a single interface for product creation, rules authoring,
and rules association. Advanced Business Rules helps drive product personalization, advanced bundling and configuration, complex offers, and more.
Installing and configuring the Advanced Business Rules solution
Ensure that you install the Advanced Business Rules solution and configure both Product Master and Operational Decision Manager to work with the Advanced
Business Rules solution.
Enabling Advanced Business Rules in your solution
The following topics provide information about tasks that the Solution Designer should perform.
Adding rules
A rule project can contain common rules and entity-specific rules. Common rules are shared by all items or categories that are associated with the rule project.
These cannot be created or edited through the Advanced Business Rules solution user interface.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Introduction of Advanced Business Rules through IBM Operational Decision Manager


Advanced Business Rules through Operational Decision Manager integration enables business users to use a single interface for product creation, rules authoring, and
rules association. Advanced Business Rules helps drive product personalization, advanced bundling and configuration, complex offers, and more.

The key modeling construct in Product Master for managing information is an item. A catalog contains a set of items, typically modeling a business level entity, such as a
product. Additionally, catalogs can associate with one or more hierarchies. Each hierarchy is an N-level deep tree that models a classification system. A category
represents a node in a hierarchy. An item can be mapped to one or more categories in the same hierarchy, and multiple categories in all the hierarchies that its catalog is
associated with.

Key concepts
The following key concepts must be understood for you to use Advanced Business Rules through Operational Decision Manager integration. Advanced Business Rules
through Operational Decision Manager integration supports specifications of different types of decisions for which a product in Product Master might associate with rules
in Operational Decision Manager.

Decision types
Each decision type represents a particular business level question that might be asked at the time of rule execution. For example,

Eligibility decision type


This decision type asks the question: "Can this product be offered to this customer".
Pricing decision type
This decision type asks the question: "How much discount can be offered to this customer for this product".

Decisions
A decision is a set of rules in Operational Decision Manager that is associated with an item or a category in Product Master for a decision type.
Rule projects
Items or categories in Product Master associate with a rule project in Operational Decision Manager for each decision type.
Rule Projects in Operational Decision Manager contain rules, rule templates, and other artifacts such as rule flows.

Advanced Business Rules requires a rule project to be used for only one decision type. Each rule project can initially contain a set of rules applicable to a set of
items or categories in Product Master for a decision type. This set of rules is considered “common rules” for all products that use the rule project and is not
changeable in the context of an individual item or category.

Additionally, each rule project contains a set of defined rule templates. The templates can be used by a content author in Product Master to create a new rule for
the item or category.

All rules in the rule projects that are used by items or categories in Product Master are required to have a custom property to store the entity identifier for the rule.
The name of this custom property must be provided in the wodm-config.xml configuration file.

The rule package that is identified in the decision type entry in the WODM Integration Lookup Table must exist in the rule projects, and is used when you create
new rules for the decision type.

The rule project can contain a rule flow that uses dynamic queries to select rules that are applicable to a specific item or category. The specific item or category
then uses a custom property on the rules.

Rules
Advanced Business Rules for Product Master allows working with both action rules and decision table type rules in Operational Decision Manager.
Advanced Business Rules support the notion of common rules for all entities (items or categories) in Product Master. Common rules apply to all entities that
associate to a rule project for a decision type. Common rules must be created through the Operational Decision Manager tools and be viewed only from the Product
Master user interface.

Entity-specific rules can be viewed, created, edited, and deleted through the Product Master user interface. These rules apply only to a specific item or category.

A rule project can contain only rules for Product Master items or categories from a single catalog or hierarchy.

New rules are created under the rule package whose name is identified in the lookup table entry for the decision type. This top-level rule package name exists in
the rule project that is being used. The rules for each product are placed under a different sub package. This sub package is automatically created if it does not
already exist. Placing entity-specific rules in individual sub packages provides them their own namespace, allowing a rule with the same rule name to exist for
different entities.
Note: If you select Greek or Turkish for the Product Master user interface, the header and buttons of the Operational Decision Manager Edit rule window display
information in Greek or Turkish. However, the rule conditions, tooltips, and number of lines to display are displayed in English because the Operational Decision
Manager server messages are not translated into these languages.
Branches
The Advanced Business Rules integration edits the item or category-specific rules on a development branch in the rule project. The branch is created in the rule
project through the Decision Center tools.
The name of the branch to be used for editing the item or category-specific rules must be specified in the $TOP/etc/default/wodm/wodm-config.xml configuration
file.

When you list rules for the item or category in a Product Master workflow step, Product Master works with the development branch.

When you list rules for the item or category in a Product Master catalog or hierarchy, Product Master displays the rules on the main branch.

The common rules are retrieved from the main branch irrespective of whether the item or category is in the catalog or the workflow.

Workflows
Advanced Business Rules integration allows authoring of rules only in a Product Master workflow.
The workflow that is used for enrichment of item or category information can be enhanced to also include steps for editing and approving rules for the item or
category. Extra steps are needed to enable a workflow to edit rules through the Advanced Business Rules.

Key concepts
The following key concepts must be understood for you to use Advanced Business Rules through Operational Decision Manager integration.
Roles and tasks
The solution provider can use the following roles and tasks as a means of organizing and setting access permissions.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Key concepts
The following key concepts must be understood for you to use Advanced Business Rules through Operational Decision Manager integration.

Decision types
Advanced Business Rules through Operational Decision Manager integration supports specifications of different types of decisions for which a product in the IBM®
Product Master might associate with rules in Operational Decision Manager. Each decision type represents a particular business level question that might be asked
at the time of rule execution. For example, the Eligibility decision type asks the question: "Can this product be offered to this customer?" The Pricing decision
type asks the question: "How much discount can be offered to this customer for this product?"
Decisions
A decision is a set of rules in Operational Decision Manager that is associated with an item or a category in the IBM Product Master for a decision type.
Rule projects
Items or categories in the IBM Product Master associate with a rule project in the Operational Decision Manager for each decision type. Rule Projects in the
Operational Decision Manager contain rules, rule templates, and other artifacts such as rule flows. Advanced Business Rules requires a rule project to be used for
only one decision type. Each rule project can initially contain a set of rules applicable to a set of items or categories in the IBM Product Master for a decision type.
This set of rules are considered “common rules” for all products that use the rule project and are not changeable in the context of an individual item or category.
Additionally, each rule project contains a set of defined rule templates. The templates can be used by a content author in the IBM Product Master to create a new
rule for the item or category. All rules in the rule projects that are used by items or categories in the IBM Product Master are required to have a custom property to
store the entity identifier for the rule. The name of this custom property must be provided in the wodm-config.xml configuration file. The rule package that is
identified in the decision type entry in the WODM Integration Lookup Table must: exist in the rule projects, and is used when you create new rules for the decision
type. The rule project can contain a rule flow that uses dynamic queries to select rules that are applicable to a specific item or category. The specific item or
category then uses a custom property on the rules.
Rules
Advanced Business Rules for the IBM Product Master allows working with both action rules and decision table type rules in Operational Decision Manager.
Advanced Business Rules support the notion of common rules for all entities (items or categories) in the IBM Product Master. Common rules apply to all entities
that associate to a rule project for a decision type. Common rules must be created through the Operational Decision Manager tools and be viewed only from the IBM
Product Master user interface. Entity-specific rules can be viewed, created, edited, and deleted through the IBM Product Master user interface. These rules apply
only to a specific item or category. A rule project can contain only rules for the IBM Product Master items or categories from a single catalog or hierarchy. New rules
are created under the rule package whose name is identified in the lookup table entry for the decision type. This top-level rule package name exists in the rule
project that is being used. The rules for each product are placed under a different sub package. This sub package is automatically created if it already does not

IBM Product Master 12.0.0 865


exist. Placing entity-specific rules in individual sub packages provides them their own namespace, allowing the rule with same rule name to exist for different
entities.
Note: If you select Greek or Turkish for the IBM Product Master user interface, the header and buttons of the Operational Decision Manager Edit rule window
display information in Greek or Turkish. However, the rule conditions, tooltips, and number of lines to display are displayed in English because the Operational
Decision Manager server messages are not translated into these languages.
Branches
The Advanced Business Rules integration edits the item or category-specific rules on a development branch in the rule project. The branch created the rule project
through the Decision Center tools. The name of the branch to be used for editing the item or category-specific rules must be specified in the
$TOP/etc/default/wodm/wodm-config.xml configuration file. When you list rules for the item or category in an IBM Product Master workflow step, the IBM Product
Master works with the development branch. When you list rules for the item or category in an IBM Product Master catalog or hierarchy, IBM Product Master displays
the rules on the main branch. The common rules are retrieved from the main branch irrespective of whether the item or category is in the catalog or the workflow.
Workflows
Advanced Business Rules integration allows authoring of rules only in an IBM Product Master. The workflow that is used for enrichment of item or category
information can be enhanced to also include steps for editing and approving rules for the item or category. Extra steps are needed to enable a workflow to edit rules
through the Advanced Business Rules.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Roles and tasks


The solution provider can use the following roles and tasks as a means of organizing and setting access permissions.

Information Technology (IT)

The IT user is responsible for working with Advanced Business Rules and Product Master to create the artifacts that are used by the solution and to perform other supporting actions.
The IT user creates one or more rule projects through features of Operational Decision Manager, such as the Rule Designer.

Content Author
The content author is responsible for working with Product Master to author the content of the business entities and to author and associate business rules for those entities. The business entities are stored as items (or categories).
Using Advanced Business Rules integration, a content author can:

Edit an existing item or category-specific rule.
Create an item or category-specific rule from a rule template that is defined in the associated rule project. These rules are associated with the entity identifier in Product Master (the item or category primary key) through a custom property on the rule.
Delete an item or category-specific rule.

Content Approver
A Content Approver is responsible for working with Product Master to review and approve any changes to the business entities modeled as item or a category. The
information also includes the rules that are associated with the item or category.

Solution Designer
A Solution Designer defines the metadata for a Product Master solution. For example, they define the catalogs, items, categories, and specs. A Solution Designer can also
implement code to extend the basic function of Product Master by using the Product Master scripting language or the Product Master Java™ API and extension points.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Installing and configuring the Advanced Business Rules solution


Ensure that you install the Advanced Business Rules solution and configure both Product Master and Operational Decision Manager to work with the Advanced Business
Rules solution.

Prerequisites
Ensure that you meet the following prerequisites before you install Advanced Business Rules.

Install IBM® Product Master.


Operational Decision Manager must be installed, but not necessarily on the same server as Product Master.
Ensure that you are using the latest Operational Decision Manager (ODM) version.
If Product Master and Advanced Business Rules are installed on different servers, then the Product Master server must be able to communicate with the Advanced
Business Rules server on the Advanced Business Rules port. The port number is 9080 by default.
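
If the servers are separate, you can verify that the port is reachable from the Product Master server before you continue. The following one-line check is illustrative: {YOUR-ODM-HOST} is a placeholder, and the decisioncenter context root is the Operational Decision Manager default, which might differ in your installation.

curl -I http://{YOUR-ODM-HOST}:9080/decisioncenter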



Installing on Product Master
After you confirm that you meet the prerequisites for installing Advanced Business Rules, perform the following steps to install Advanced Business Rules on Product
Master.
The installation script updates the following items:

The jars-custom.txt file under the $TOP/bin/conf/classpath folder.
The flow-config.xml file under the $TOP/etc/default folder.

The script also adds the wodm folder under the $TOP/etc/default folder.

1. Extract the wodm.zip file into a convenient temporary folder. For example, $TOP/wodm.
2. In the $TOP/wodm/documents/README file:
a. Follow steps 2 - 9 under the Installation Procedure section.
b. Follow the instructions under the Set Up Data Model section.
Note: To access the source code for the solution, extract the wodm.src.zip file into a temporary folder to extract the wodm.extensions.src.zip file. This file contains
the complete source for the solution and can be extracted in any convenient folder. For more information about the WODMAPI API, see the
src/com/ibm/ccd/solution/wodm/api/WODMAPI.java file.

Migrating Operational Decision Manager


If you have an existing installation of IBM Operational Decision Manager, update your existing installation. Follow these steps to migrate to the current version.

1. Extract the wodm.zip file into a convenient temporary folder. For example, $TOP/wodm.
2. Follow the steps in the $TOP/wodm/documents/README.

Configuring Product Master for Advanced Business Rules


After you install Advanced Business Rules onto Product Master, you then need to configure Product Master to fully integrate with and use Advanced Business Rules.
Specify the locales for the WODM Integration Lookup Table Spec.

1. Using Spec Console, open the WODM Integration Lookup Table Spec for editing.
2. Select the locales to use from the available locales. These locales identify the locales in which the decision type name is specified.

Configuring Operational Decision Manager


You need to configure Operational Decision Manager for working with Advanced Business Rules as follows:

1. Extend the rule model to contain an extra custom property that is used by Advanced Business Rules. You can choose any name for this custom property, for example, EntityId. This name must be specified as the value for the configuration that is set in the <entry_identifier_property> section of the wodm-config.xml file.
2. For each rule project that is associated with an item or category in Product Master, create a sub branch under the main branch. You can choose any name for this branch, for example, Dev. This name must be specified as the value for the configuration that is set in the <rule_development_branch> section of the wodm-config.xml file.
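
Putting both settings together, a minimal fragment of the configuration file could look like the following sketch. The two element names come from the steps above, and the example values (EntityId and Dev) are the ones suggested there; the enclosing root element is an assumption, so check your installed $TOP/etc/default/wodm/wodm-config.xml file for the exact structure.

<wodm-config>
    <!-- Name of the ODM custom rule property that stores the Product Master entity identifier -->
    <entry_identifier_property>EntityId</entry_identifier_property>
    <!-- Name of the branch that is used for editing item or category-specific rules -->
    <rule_development_branch>Dev</rule_development_branch>
</wodm-config>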

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Enabling Advanced Business Rules in your solution


The following topics provide information about tasks that the Solution Designer should perform.

Enabling the Advanced Business Rules user interface


The Manage Decisions tab in the Product Master single edit screen is enabled for the Product catalog and Sample Collaboration Area by updating the
$TOP/etc/default/data_entry_properties.xml file.
Updating your workflows
The rules are edited in the development branch in the associated rule project. The current copy of the rules for the item or category must be copied from the main
branch to the development branch at the start of the workflow.
Specifying decision types
The decision types are specified in the WODM Integration Lookup Table in Product Master.
Adding attributes to specs
For an item that contains associated decisions, you need to add an attribute to the spec for each decision type. If the decision applies to all items, the attribute
must be added to the primary spec. If the decision applies only when the item is in certain categories, the attribute must be added to a secondary spec, which must
be added to the corresponding categories.
Authenticate with Advanced Business Rules
Advanced Business Rules solution requires the users to authenticate with Operational Decision Manager at most one time during the entire Product Master session.
You are presented with the login screen when you are attempting to work with rules for the first time in that Product Master session.
Access control for rule viewing and authoring
Rules that are associated with a product can be viewed and edited in the context of the item authoring screens in Product Master. Specifically, the rules are
displayed in the Manage Decisions tab in the new business user interface single edit screen.
Additional customization required by your solution
The following sections describe extra customization your Solution Designer might need to perform depending on how your solution is structured.
Working with Advanced Business Rules sample model
The following topics provide information about how to work with the Advanced Business Rules sample model.

IBM Product Master 12.0 Fix Pack 8



Operating Systems: AIX, Linux, and Windows (Workbench only)

Enabling the Advanced Business Rules user interface


The Manage Decisions tab in the Product Master single edit screen is enabled for the Product catalog and Sample Collaboration Area by updating the
$TOP/etc/default/data_entry_properties.xml file.

The following entries need to be provided for each of the catalogs, hierarchies, and collaboration areas. You need to provide these entries only if any of the following situations apply to you:

You have catalogs with items that are associated with rules.
You have hierarchies with categories that are associated with rules.
You have collaboration areas that are associated with these catalogs and hierarchies and are used to manage rules.

<company code="{YOUR-COMPANY-CODE}">
<catalog name="{YOUR-CATALOG-NAME}">
<script>
<type>url</type>
<title>Manage Decisions</title>
<extra>height='100%' name="wodmCustomTabFrame"</extra>
<path>/scripts/triggers/wodmCustomTab</path>
<position>tab</position>
<refresh-on></refresh-on>
</script>
</catalog>

<hierarchy name="{YOUR-HIERARCHY-NAME}">
<script>
<type>url</type>
<title>Manage Decisions</title>
<extra>height='100%' name="wodmCustomTabFrame"</extra>
<path>/scripts/triggers/wodmCustomTab</path>
<position>tab</position>
<refresh-on></refresh-on>
</script>
</hierarchy>

<collaboration-area name="{YOUR-COLLABORATION-AREA}">
<script>
<type>url</type>
<title>Manage Decisions</title>
<extra>height='100%' name="wodmCustomTabFrame"
id="_home_markdown_jenkins_workspace_Transform_in_SSADN3_12.0.0_integrating_wodm_ref_enablingabrsol_wodmCustomTabFrame"</extra>
<path>/scripts/triggers/wodmCustomTab</path>
<position>tab</position>
<refresh-on></refresh-on>
</script>
</collaboration-area>
</company>

Where:

{YOUR-COMPANY-CODE} is your company code.
{YOUR-CATALOG-NAME} is the name of your catalog.
{YOUR-HIERARCHY-NAME} is the name of your hierarchy.
{YOUR-COLLABORATION-AREA} is the name of your collaboration area.

The script is implemented in the specified Java™ class. For source code, see the src/com/ibm/ccd/solution/wodm/WODMCustomTabDataEntryScript.java in the
wodm.extensions.src.zip file.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Updating your workflows


The rules are edited in the development branch in the associated rule project. The current copy of the rules for the item or category must be copied from the main branch
to the development branch at the start of the workflow.

Similarly, the final edited rules need to be merged from the development branch back to the main branch just before the item or category is checked back in to the
source catalog or hierarchy.

Advanced Business Rules integration provides implementations of a set of extension points that can be used as the IN script of automated steps that are created to
perform these tasks.

The com.ibm.ccd.solution.wodm.extensionpoints.WODMCopyFromMainToDevBranchExtPoint class can be used as the IN script extension point of an automated step
at the start of the workflow to copy rules from the main branch to the development branch.

The com.ibm.ccd.solution.wodm.extensionpoints.WODMMergeFromDevToMainExtPoint class can be used as the IN script extension point of an automated step at the
end of the workflow to merge rules from the development branch back to the main branch.



IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Specifying decision types


The decision types are specified in the WODM Integration Lookup Table in Product Master.

About this task


If a decision type is applicable to all (or most) of the products in the catalog, then the solution designer can associate the decision type with the primary spec of the
catalog.
If the decision type is applicable only to products in a particular category, then the solution designer can associate the decision type with an item secondary spec for that
category.

The solution designer uses the existing Product Master screens to add a decision type entry to the WODM Integration Lookup Table, to modify an existing entry, or to
remove an entry.

There can be multiple rule projects that are associated with an entry, which means multiple decision types are displayed in the Manage decisions tab.

Procedure
Identify the list of decision types that are used by the solution. For each decision type, the solution designer must decide on the following values:

a. Decision type Key.
The primary key.
b. Decision Type Name (localized).
The localized name of the decision type is displayed to the user in the Product Master user interface when you work with rules associated with the product.
Note: The default locale is en_US. When the localized Decision Type Name is not provided, the Decision type Key itself is used as the display name.
c. Rule Package Name.
The Rule Package Name identifies the package that Product Master expects to exist in all rule projects that are used for the decision type. Any new rules that are
created from the Product Master user interface for the decision type are created under this rule package.
d. Spec Name.
The solution designer updates the spec that is identified in the Spec Name attribute and adds an attribute to the spec at the path that is identified in the Rule Project
Attribute Path column for the decision type.
e. Rule Project Attribute Path.
The type of the Rule Project Attribute in the spec can be String or String enumeration, and the attribute ideally should be marked as hidden to prevent it from showing
in the Attributes tab of the single edit or multi-edit page in the Product Master user interface.
Note: By using a string enumeration type attribute with an enumeration rule, the solution can provide the ability for a business user to choose a rule project through
the Product Master user interface from a set of applicable rule projects for the item or category. In this model, however, the attribute might not be marked hidden,
and the solution needs to handle any cleanup actions that are needed when disassociating rule projects from an item or category.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Adding attributes to specs


For an item that contains associated decisions, you need to add an attribute to the spec for each decision type. If the decision applies to all items, the attribute must be
added to the primary spec. If the decision applies only when the item is in certain categories, the attribute must be added to a secondary spec, which must be added to
the corresponding categories.

About this task


For an item, you need to set the value of the decision attributes for the relevant decisions. The value in the attribute is the name of the corresponding rule project in
Advanced Business Rules.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Authenticate with Advanced Business Rules


Advanced Business Rules solution requires the users to authenticate with Operational Decision Manager at most one time during the entire Product Master session. You
are presented with the login screen when you are attempting to work with rules for the first time in that Product Master session.



About this task
The solution designer can choose to use shared credentials in Operational Decision Manager for certain actions through the Advanced Business Rules integration user
interface.

You can specify a shared user name and password for viewing and authoring rules. This option allows the Advanced Business Rules solution to provide the shared
Advanced Business Rules credentials (user name and password) and also to specify one of the following behaviors:

Never use the shared credentials for viewing or authoring rules.
Use the shared credentials only for viewing rules.
Use the shared credentials for both viewing and authoring rules.

Depending on the requirements of the Advanced Business Rules solution implementation, the appropriate level of credential sharing can be set up. For example,
specifying the shared credentials for viewing actions only can allow the Content Approver to view the rules for a product without logging in to Operational Decision
Manager. However, Content Authors are still required to authenticate as themselves before they modify any rules.

The shared credentials are specified through the dc_shared_user and dc_shared_password configuration options in the Advanced Business Rules configuration file
at $TOP/etc/default/wodm/config/wodm-config.xml.
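
As an illustration, the relevant portion of the configuration file could look like the following sketch. The dc_shared_user and dc_shared_password option names come from this documentation; the enclosing element and the placeholder values are assumptions, and the option that selects the level of credential sharing (never, viewing only, or viewing and authoring) is configured in the same file but is not shown here.

<wodm-config>
    <!-- Shared Decision Center credentials (placeholders; supply your own values) -->
    <dc_shared_user>{SHARED-ODM-USER}</dc_shared_user>
    <dc_shared_password>{SHARED-ODM-PASSWORD}</dc_shared_password>
</wodm-config>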

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Access control for rule viewing and authoring


Rules that are associated with a product can be viewed and edited in the context of the item authoring screens in Product Master. Specifically, the rules are displayed in
the Manage Decisions tab in the new business user interface single edit screen.

About this task


The decision types available to the item or category are determined by the Product Master specs that are implemented by the product and the decision types that are
associated with each of those specs in the WODM Integration Lookup Table. The access level for each decision type can take the following three forms:

inaccessible
viewable
editable

The Advanced Business Rules solution can control access to the rules in a decision type by controlling access to the rule project attribute that is identified in the WODM
Integration Lookup Table for that decision type. For example, including the Eligibility Decision attribute in the E+R (Editable and Required) attribute collection for an Edit
Eligibility workflow step makes the Eligibility rules editable for the item or category in that workflow step.

The solution designer can add the Compliance Decision attribute to only the Viewable attribute collection of this step, so that the rules that are associated with compliance are viewable only in this step.

Solution designers can use catalog access privileges or hierarchy access privileges to limit access to rules in the context of working with the product in the catalog or
hierarchy for a user role. This is done by defining privileges for different user roles on the rule project attribute for the decision type, for example Eligibility Decision.
Note: The Advanced Business Rules solution allows editing of rules only in the context of a workflow step. In the context of a catalog or a hierarchy, the rules are viewable
only or inaccessible depending on the container access privileges defined. If all decisions need to be inaccessible at the catalog or hierarchy level for all users, you can
remove the custom tab specification for the Manage Decisions tab for the catalog in the data_entry_properties.xml file.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Additional customization required by your solution


The following sections describe extra customization your Solution Designer might need to perform depending on how your solution is structured.

Associating decision types to rule projects


The Advanced Business Rules solution requires that the association between items or categories in Product Master and rule projects in Operational Decision Manager
be made by your solution code. Your solution must populate the decision type attributes in the item or category before the business user accesses the Manage
Decisions tab to work with rules for the item or category.
Copying entity-specific rules between entities
Your solution might require that all the entity-specific rules that are associated with an item or category be copied and immediately made available on another item or
category. This might be needed when a user creates a new item or category by cloning an existing one.
Deleting entity-specific rules
If your solution allows deletion of items or categories in Product Master that have entity-specific rules, then your solution might need to perform cleanup actions on
those rules in Operational Decision Manager. Similar cleanup might be wanted if new items or categories are dropped from a collaboration area after entity-specific
rules are created for those entities.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)



Associating decision types to rule projects
The Advanced Business Rules solution requires that the association between items or categories in Product Master and rule projects in Operational Decision Manager be
made by your solution code. Your solution must populate the decision type attributes in the item or category before the business user accesses the Manage Decisions tab
to work with rules for the item or category.

About this task


Your solution can use existing capabilities in Product Master to populate these decision type attributes when a new item or category is created. For example, you could
implement a post-processing extension point for the catalog or the hierarchy to populate these values when a new item or category is saved.
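
For illustration, a minimal sketch of such logic follows, using the sample-model attribute path and rule project name that appear later in this documentation. The Item.setValue() and save() calls assume the Product Master Java API (com.ibm.pim.catalog.item.Item); the class and method names here are hypothetical, and the extension-point wiring is omitted because it depends on your solution.

import com.ibm.pim.catalog.item.Item;

// Hypothetical helper, to be called from your catalog post-processing logic
// when a new item is saved.
public class DecisionTypeDefaults {

    public void populateDecisionAttributes(Item item) {
        // "Product catalog Spec/Eligibility Decision" is the sample-model
        // attribute path (assumed path form); the value names the ODM rule
        // project to associate with the item.
        item.setValue("Product catalog Spec/Eligibility Decision",
                "AllCustomerEligibility");
        item.save();
    }
}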

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Copying entity-specific rules between entities


Your solution might require that all the entity-specific rules that are associated with an item or category be copied and immediately made available on another item or category.
This might be needed when a user creates a new item or category by cloning an existing one.

The Advanced Business Rules solution provides a set of helper APIs in the com.ibm.ccd.solution.wodm.api.WODMAPI interface. A reference to the class that
implements this interface can be obtained through the static method getWODMAPIInstance() in the WODMInstanceHelper class.
The WODMAPI interface includes the methods copyItemSpecificRules() and copyCategorySpecificRules(), which allow your solution to copy the rules that are
associated with the source item or category in Operational Decision Manager and associate the copied rules with the target item or category.

These methods can be invoked from your solution code that runs in different contexts (such as the post-processing extension point for the catalog) or from a custom tool in
your solution that allows business users to clone existing items or categories.
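
The following sketch shows how such a call might look. The WODMAPI and WODMInstanceHelper names and the copyItemSpecificRules() method are taken from this documentation, but the package of WODMInstanceHelper and the parameter list are assumptions; check the src/com/ibm/ccd/solution/wodm/api/WODMAPI.java file for the exact signatures.

import com.ibm.ccd.solution.wodm.api.WODMAPI;
import com.ibm.ccd.solution.wodm.api.WODMInstanceHelper; // package assumed
import com.ibm.pim.catalog.item.Item;

public class CloneRulesHelper {

    // Illustrative: copy the entity-specific rules from a source item to its clone.
    public void copyRules(Item sourceItem, Item targetItem) {
        WODMAPI wodmApi = WODMInstanceHelper.getWODMAPIInstance();
        // Assumed parameter order: source entity first, then target entity.
        wodmApi.copyItemSpecificRules(sourceItem, targetItem);
    }
}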

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Deleting entity-specific rules


If your solution allows deletion of items or categories in Product Master that have entity-specific rules, then your solution might need to perform cleanup actions on those
rules in Operational Decision Manager. Similar cleanup might be wanted if new items or categories are dropped from a collaboration area after entity-specific rules are
created for those entities.

The Advanced Business Rules solution provides a set of helper APIs in the com.ibm.ccd.solution.wodm.api.WODMAPI interface. A reference to the class that
implements this interface can be obtained through the static method getWODMAPIInstance() in the WODMInstanceHelper class.
The WODMAPI interface includes a set of deleteRules() methods that allow your solution to delete all the rules that are associated with the item or category in
Operational Decision Manager.

These methods can be invoked from your solution code that runs in different contexts (such as the post-processing extension point for the catalog) or from a custom tool in
your solution that allows business users to delete existing items or categories.
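
As with copying, a cleanup call might look like the following sketch. The deleteRules() overloads are named in this documentation, but the parameter shown is an assumption, so consult the WODMAPI.java source for the exact signatures.

import com.ibm.ccd.solution.wodm.api.WODMAPI;
import com.ibm.ccd.solution.wodm.api.WODMInstanceHelper; // package assumed
import com.ibm.pim.catalog.item.Item;

public class RuleCleanupHelper {

    // Illustrative: remove the entity-specific rules for an item that is being
    // deleted, before the item itself is removed from Product Master.
    public void cleanUpRules(Item itemBeingDeleted) {
        WODMAPI wodmApi = WODMInstanceHelper.getWODMAPIInstance();
        wodmApi.deleteRules(itemBeingDeleted); // one of the deleteRules() overloads
    }
}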

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Working with Advanced Business Rules sample model


The following topics provide information about how to work with the Advanced Business Rules sample model.

Import the Advanced Business Rules solution sample data


There are two sets of sample data to import, one for Product Master and one for IBM® Operational Decision Manager.
Enabling the Advanced Business Rules user interface for the sample model
The Manage Decisions tab in the Product Master single edit screen needs to be enabled for the Product catalog and Sample Collaboration Area. Update the
$TOP/etc/default/data_entry_properties.xml file with the following entries:
Description of sample model
In order to take advantage of Operational Decision Manager integration, the Product Master model must contain certain features. These are illustrated in the
sample model, and explained here.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)



Import the Advanced Business Rules solution sample data
There are two sets of sample data to import, one for Product Master and one for IBM® Operational Decision Manager.

Before you begin


Ensure that you complete the Installing and configuring the Advanced Business Rules solution steps.

Procedure
1. Import the Product Master sample model data by importing the $TOP/wodm/test/SampleModel.zip file into Product Master.
a. Log in to Product Master with a user ID that has Administrator privileges.
b. Select System Administrator | Import Environment.
c. Browse to SampleModel.zip and click Import. A job is scheduled to create the data for the sample in Product Master; wait for this job to complete.
2. Import the Advanced Business Rules solution sample data by importing the $TOP/wodm/test/Sample_Project/mdmcs_wodm.zip file into
the Decision Center.
a. Log in to the Decision Center as rtsAdmin.
b. Select Configure.
c. Click Import Projects under Administration.
d. Browse to and select mdmcs_wodm.zip, and click OK. This file creates the project in the Decision Center.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Enabling the Advanced Business Rules user interface for the sample model
The Manage Decisions tab in the Product Master single edit screen needs to be enabled for the Product catalog and Sample Collaboration Area. Update the
$TOP/etc/default/data_entry_properties.xml file with the following entries:

<company code="{YOUR-COMPANY-CODE}">
<catalog name="Product Catalog">
<script>
<type>url</type>
<title>Manage Decisions</title>
<extra>height='100%' name="wodmCustomTabFrame"</extra>
<path>/scripts/triggers/wodmCustomTab</path>
<position>tab</position>
<refresh-on></refresh-on>
</script>
</catalog>

<collaboration-area name="Sample Collab Area">


<script>
<type>url</type>
<title>Manage Decisions</title>
<extra>height='100%' name="wodmCustomTabFrame"
id="_home_markdown_jenkins_workspace_Transform_in_SSADN3_12.0.0_integrating_wodm_ref_enablingabrui_wodmCustomTabFrame"</extra>
<path>/scripts/triggers/wodmCustomTab</path>
<position>tab</position>
<refresh-on></refresh-on>
</script>
</collaboration-area>
</company>

Where {YOUR-COMPANY-CODE} is your company code


The script is implemented in the specified Java™ class. For the source code, see the src/com/ibm/ccd/solution/wodm/WODMCustomTabDataEntryScript.java file in the
wodm.extensions.src.zip file.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Description of sample model


In order to take advantage of Operational Decision Manager integration, the Product Master model must contain certain features. These are illustrated in the sample
model, and explained here.

Added spec attributes


For every item associated with a rule, an extra attribute is required per decision type to define the name of the rule project corresponding to the item. The path of the
attribute is referenced in a lookup table; see the following table as an example. In the sample model, the extra attributes in the Product catalog Spec are as follows. After
the solution is set up, it is best to make these attributes hidden because they are of no interest to the user.



Table 1. Added spec attributes sample model

Attribute path           Type                      Example Rule Project
"Compliance Decision"    String, might be hidden   AllProductsCompliance
"Eligibility Decision"   String, might be hidden   AllCustomerEligibility

There is a similar attribute, "Pricing Decision", in the Credit Card Secondary Spec, which applies only to credit cards.

Lookup table
A lookup table entry is required to connect the decision type (and its name), the rule package name, and the special attribute paths. One such entry exists for each
decision type in the WODM Integration Lookup Table.
Table 2. Lookup table entry sample model

Lookup table attribute            Value
Decision Type (PK)                Customer Eligibility
Decision Type Name (localized)    Customer Eligibility
Rule Package Name                 Eligibility
Spec Name                         Product catalog Spec
Attribute Path (in spec)          Eligibility Decision

Similarly, there are lookup table entries for the Compliance and Pricing decision types.

Workflow
Rules can be edited only from a workflow and not directly from a source catalog. The workflow and corresponding attribute collections can be set up to allow users in
specific roles appropriate access to the rules. This is done in the sample model as follows.
Table 3. Workflow sample

Step               Attribute collections in the step ((E) = editable, (V) = viewable)
Initial            -
Edit Eligibility   Eligibility (E), Compliance (V)
Edit Compliance    Eligibility (V), Compliance (E)
Approve            Compliance (V)
Enrich Details     Details (E)
Success            -

Attribute collections:

"Eligibility": Product Catalog Spec: Minimum Age, Eligibility Decision
"Compliance": Product Catalog Spec: Regulatory Agency
"Details": Product Catalog Spec: Department, Discount


Where:

Edit Eligibility
The Edit Eligibility step is performed by the Content Author role, who can edit the Eligibility rules and view the Compliance rules.
Edit Compliance
The Edit Compliance step is also performed by the Content Author role, but only the Compliance rules can be edited.
Approve
The Approve step is performed by the Content Approver role, and only viewing of the rules is allowed.
Enrich Details
The Enrich Details step has nothing to do with rules, so the rules cannot be viewed or edited in this step.

Additionally, the Sample workflow contains two automated steps. The CopyFromMain step is at the start of the workflow and it copies the rules from the main branch to
the development branch. The MergeToMain step is at the end of the workflow and it merges the rules that are associated with the item or category from the development
branch back to the main branch.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Adding rules
A rule project can contain common rules and entity-specific rules. Common rules are shared by all items or categories that are associated with the rule project. These
cannot be created or edited through the Advanced Business Rules solution user interface.

Before you begin


The workflow step is set up to allow editing of the rules for the decision type.

About this task


Entity-specific rules apply to the specific item or category that is being actively worked on in Product Master user interface.



Procedure
1. Go to the Manage Decisions tab on the single edit screen for the item or category in the workflow step.
2. Click the decision type, or expand the decision type and click any existing rule under it.
3. Click Add to create a new rule.
4. Provide a name for your rule.
5. Select the rule template to use for creating the new rule.
6. Click Save.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Integrating WebSphere Commerce


Advanced Catalog Management for WebSphere® Commerce is a solution asset from Product Master. This solution is a prebuilt configuration of Product Master, ready for
use by WebSphere Commerce users.

The intent of the solution is to provide a richer set of catalog management functions to complement the functions currently available in WebSphere Commerce, for
example, workflows to approve a New Product Introduction.
Advanced Catalog Management for WebSphere Commerce is an accelerated solution with an out-of-the-box data model, business process workflows, and integration
components tailored for WebSphere Commerce. It includes advanced capabilities for managing eCommerce catalogs supported by WebSphere Commerce.

The integration of the asset with the corresponding WebSphere Commerce instance uses the native WebSphere Commerce data load functionality, which is based on batch
loads of data files.

Introduction of Advanced Catalog Management


IBM WebSphere Commerce provides a powerful customer interaction platform for cross-channel commerce. It can be used by companies of all sizes, from small
businesses to large enterprises, and for many different industries.
Installing and configuring Advanced Catalog Management
This installation section provides instructions to install the Advanced Catalog Management asset on the Product Master platform and the WebSphere Commerce
platform including the configuration of the provided default objects (for example, default master catalog, default views, and default workflows).
Enabling multiple languages
You can create attributes for each item in several languages in IBM Product Master and export these attributes to WebSphere® Commerce Server.
Managing catalog data
This asset manages the following three types of catalog objects for WebSphere Commerce.
Publishing catalog data
In this asset, you can use a common integration framework that can be extended for general use of data exports.
Creating custom logger
You can create a custom logger in the ACM environment.
Development environment setup
Ensure you are familiar with the following when setting up your development environment.
Debugging examples
Ensure you are familiar with the following debugging examples.
Extending Advanced Catalog Management
The Advanced Catalog Management solution provides the infrastructure for publishing data from Product Master to WebSphere Commerce Server.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Introduction of Advanced Catalog Management


IBM® WebSphere® Commerce provides a powerful customer interaction platform for cross-channel commerce. It can be used by companies of all sizes, from small
businesses to large enterprises, and for many different industries.

Advanced Catalog Management provides easy-to-use tools for business users to centrally manage a cross-channel strategy. Business users can create and manage
precision marketing campaigns, promotions, catalog, and merchandising across all sales channels. WebSphere Commerce is a single, unified platform which offers the
ability to do business directly with consumers, directly with businesses, and indirectly through channel partners (indirect business models).

Advanced Catalog Management for WebSphere Commerce is a Product Master solution accelerator. A Product Master solution implementation team can leverage the asset
to quickly implement a retail catalog management solution with customizations and extensions.

Advanced Catalog Management has a three-step data synchronization. The synchronization steps are:

1. Generating the data from Product Master.
2. Mapping the Product Master data to the WebSphere Commerce data.
3. Uploading the WebSphere Commerce data to the WebSphere Commerce Server.

Advanced Catalog Management complements the catalog management capability of WebSphere Commerce to help retailers efficiently manage the catalogs on their
eCommerce sites.

Product Master provides capabilities such as workflows to represent business processes, fine-grained access control and audit logs, category-level attributes and
inheritance, and the flexibility to model catalogs and hierarchies.



The Advanced Catalog Management solution provides a pre-configured environment for quick and easy implementation. The solution has a pre-configured data model in
Product Master that mirrors the data model of WebSphere Commerce, sample workflows (such as New Product Introduction and Manage Catalog Entry), and predefined
user roles with appropriate access privileges, as well as integration with WebSphere Commerce to export the catalog entries. These features can be extended or modified
according to an organization's requirements.

This asset includes a pre-built configuration of Product Master that is ready for integration with WebSphere Commerce. It includes a pre-defined business object and
business process model. With a common integration framework, it also provides a WebSphere Commerce Server extension for publishing catalog data remotely.

Solution architecture overview


Business users of IBM WebSphere Commerce system must manage three business objects.
Key concepts
The following key concepts must be understood in order to use Advanced Catalog Management for WebSphere Commerce Server.
Data model
IBM WebSphere Commerce catalog objects are all modeled in a Product Master catalog "Catalog Entry Repository".
Roles and tasks
For most Advanced Catalog Management systems, the tasks that are accomplished in the product are generally covered by users in the following main roles:
catalog managers, content editors and viewers, attribute dictionary managers, and global administrators. Depending on the role that you have, certain tasks might
not be available for you.
Integration approaches
Advanced Catalog Management maintains the master copy of catalog data in Product Master system and publishes the data to IBM WebSphere Commerce Server
when it is ready.
Managing business user privileges
After you have installed and configured Advanced Catalog Management, you can log into an Advanced Catalog Management company.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Solution architecture overview


Business users of IBM® WebSphere® Commerce system must manage three business objects.

The three business objects are:

Catalog Entries (including Products, SKUs, Bundles, and Kits)


Catalog Entry Categories (for example, Catalog Groups)
Catalog Entry Attributes (for example, Attribute Dictionary Attributes)

All or part of the catalog data can be used for a specific store.
In this asset, a catalog entry repository is designed to maintain all catalog entries. The entire repository is divided into four catalog entry types: product, SKU,
bundle, and kit. The business user can view the entire repository by catalog entry type.
Catalog entries can be organized into catalog groups. A catalog entry can belong to one or more catalog groups.

A catalog entry can be available to one or more stores. The business user can store the entire catalog entry repository in one catalog asset store and make subsets
available for each direct store, for example, an eSite store.

With workspace and business process management support from Product Master, this asset ships a set of sample business processes for collaborative content
authoring of catalog entries and catalog groups.

This asset provides out-of-the-box integration support for publishing the catalog data to the IBM WebSphere Commerce system seamlessly. A web-based user interface
dashboard is available for the designated business users to perform the exports and track the status.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Key concepts
The following key concepts must be understood in order to use Advanced Catalog Management for WebSphere® Commerce Server.

Attribute dictionary
In WebSphere Commerce, an attribute dictionary is a set of common attributes and attribute values. It is a feature that is used to manage a set of attributes and their
values that can be reused by multiple WebSphere Commerce catalog entries. If an attribute is changed in the WebSphere Commerce attribute dictionary, all
WebSphere Commerce catalog entries that share that attribute are updated.
Catalog entry repositories
A catalog entry repository is a container of catalog entries and is used for managing and sharing of catalog entries.
Catalog entry categories
Catalog entry categories is the primary hierarchy for the Catalog Entry Repository catalog. It represents the catalog group structure in WebSphere Commerce. The
categories in this hierarchy represent catalog groups in WebSphere Commerce.
Stores
Stores is one of the secondary hierarchies for the Catalog Entry Repository catalog. It represents the stores and their structures in WebSphere Commerce.
Workspaces
Workspace is not supported in this release. The default workspace is used.
DescriptionOverride attributes
You can set DescriptionOverride attributes for items in the IBM Product Master and export these attributes to WebSphere Commerce Server.



CalculationCode Attributes
You can set a Calculation Code for attributes in IBM Product Master.
Search Engine Optimization
You can set Search Engine Optimization (SEO) attributes for items and categories in Product Master and export these attributes to WebSphere Commerce.
Attribute groups
In WebSphere Commerce Server, attribute groups are used to group attributes so that they can be displayed together.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Attribute dictionary
In WebSphere® Commerce, an attribute dictionary is a set of common attributes and attribute values. It is a feature that is used to manage a set of attributes and their
values that can be reused by multiple WebSphere Commerce catalog entries. If an attribute is changed in the WebSphere Commerce attribute dictionary, all WebSphere
Commerce catalog entries that share that attribute are updated.

The solution includes a global, shared attribute dictionary, a central place where attribute definitions are managed with restricted functionality. The attribute dictionary
distinguishes the following types of attributes; its hierarchy has two predefined categories, Defining attributes and Descriptive attributes. Business users should not
overwrite these categories. New defining attributes and descriptive attributes should be created under these categories:

Defining attributes
A single attribute with a fixed range of values. Defining attributes are properties of products and SKUs in an online store, such as color or size for clothing.
Descriptive attributes
A set of attributes with any combination of value types. Descriptive attributes are properties of products and SKUs in an online store, such as care instructions for
clothing. You can add descriptive attributes to a product if you need to provide additional information to customers. For example, some pieces of clothing must be
dry cleaned. A descriptive attribute can specify the dry clean only condition.

When catalog entries are associated with a particular category under this hierarchy, they have the attributes of that category associated with them.
Changes to attribute definitions are allowed only if they amplify (but do not restrict) the specification. This restriction is necessary so that existing items that use the
attribute retain valid values. The attribute dictionary allows creating defining attributes and descriptive attributes, which are grouped according to the needs of modeling
product types (for example, a group of descriptive attributes for a product type “smart phone”).

The attribute dictionary is one of the secondary hierarchies for the Product Master Catalog Entry Repository catalog. You can also add the reusable attributes through the
secondary specs that are associated with the categories in the attribute dictionary hierarchy.

Attribute definitions that are not assigned to any entry are still exported to WebSphere Commerce with the rest of the catalog and schema.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Catalog entry repositories


A catalog entry repository is a container of catalog entries and is used for managing and sharing of catalog entries.

A catalog entry repository is associated with one WebSphere® Commerce store. In WebSphere Commerce, catalog entries are organized in master and sales catalogs.

Catalog entries in catalog entry repositories must be one of four types:

Product
SKU
Bundle
Kit

You can map entries in a catalog entry repository to the master catalog and sales catalogs under the catalog group hierarchy. See Catalog entry categories for more
information. You can also share catalog entries in a catalog entry repository between different stores. See Stores for more information.
By using catalog entry repositories, users can perform the following tasks:

Define categories
Create products and SKUs
Categorize SKUs
Bundle SKUs
Manage dynamic kits
Enrich and manage product content
Create merchandising associations
Create, restore, view, and delete versions of catalog objects (sales catalog, category, and catalog entries)

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Catalog entry categories


Catalog entry categories is the primary hierarchy for the Catalog Entry Repository catalog. It represents the catalog group structure in WebSphere® Commerce. The
categories in this hierarchy represent catalog groups in WebSphere Commerce.

About this task


In this structure, the Master catalog is a predefined category at the first level. Business users should not create catalog groups with the same names. All user-defined
categories should be defined under one of these two categories:

Master catalog
This represents the primary group which contains the catalog entries. There can be only one Master catalog.
Sales catalog
This represents the group for some particular sales purpose. There can be multiple Sales catalogs categories designed for different purposes.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Stores
Stores is one of the secondary hierarchies for the Catalog Entry Repository catalog. It represents the stores and their structures in WebSphere® Commerce.

About this task


In this hierarchical structure, business users can define shared catalog asset stores and direct sales stores.

Catalog Asset Store
A user can create a category in the Stores hierarchy and use it as a parent of one or more eSite stores.
Direct Store
A user can create one or more leaf nodes in the Stores hierarchy. The nodes must be first-level categories for Direct Stores.
eSite Store
A user can create one or more eSite stores as child stores of a Catalog Asset Store category.

A store has an ID attribute, which is used for exporting data to the store on the WebSphere Commerce side. Store category names must not have any of the following
special characters: []{}:\/"'#@<>,*.
When you create a new store, a secondary spec is generated. To associate that secondary spec with the store that you created, perform the following steps:

1. Create and save a store.
2. The saved store generates a secondary spec that is associated as the item spec for the store. The name of the generated spec is similar to the following format: <value of attribute 'Code'> Spec.
3. Open the newly created store category and attach the secondary spec with the name <value of attribute 'Code'> Spec.
4. Save the store category.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Workspaces
Workspace is not supported in this release. The default workspace is used.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

DescriptionOverride attributes
You can set DescriptionOverride attributes for items in the IBM® Product Master and export these attributes to WebSphere® Commerce Server.

The description elements for the same entry in WebSphere Commerce Server can differ by the store that the entry belongs to. While each entry has a default description, it
is possible to override this default description with descriptions that are specific to each store.

To support this functionality, you can set DescriptionOverride attributes for items in the IBM Product Master environment. When you export items with DescriptionOverride
attributes to WebSphere Commerce Server, the corresponding attributes are populated with the information that you entered in the DescriptionOverride attributes on
the Product Master Server side.

Implementing Description Override


To implement description override, you need to add additional attributes related to the description override to each store. When an entry is associated with a store,
the description attributes for that particular store can be set for the entry.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

CalculationCode Attributes
You can set a Calculation Code for attributes in IBM® Product Master.

Calculation code is a feature in WebSphere® Commerce Server that covers different types of codes that are related to shipping and tax. This feature is implemented in
Product Master by adding a new attribute to the primary specs of the catalog entry repository and the catalog entry categories hierarchy to support the calculation code.
This attribute is a lookup table type attribute. You can set this attribute by choosing the calculation code that you want from the lookup table. The code can be
associated with a catalog entry or a catalog group. When it is associated with a catalog group, it is associated by default with the entries under the group.

When exported to WebSphere Commerce Server, you can see the association in the CATENCALCD table for catalog entries and CATGPCALCD table for catalog groups.

The following types of codes are available in Product Master. You can define the codes of each of these types in the WebSphere Commerce Accelerator.

CouponCalculationCode
SurchargeCalculationCode
DiscountCalculationCode
SalesTaxCalculationCode
ShippingCalculationCode
ShippingTaxCalculationCode
ShippingAdjustmentCalculationCode

To list the codes that exist in WebSphere Commerce Server, you must add them to the Calculation Code lookup table in the Product Master instance. The lookup
table spec consists of the following attributes:

Name
This attribute is used for the code name. It is a string type attribute. The name must match the name of the code in WebSphere Commerce Server.
Type
This attribute is an enumeration that lists the seven types stated above. It must be set to the same type as in WebSphere Commerce Server.
Category
The codes can be part of code categories. The name of the category in WebSphere Commerce Server in which the code exists can be set in this string attribute. This
attribute is optional.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Search Engine Optimization


You can set Search Engine Optimization (SEO) attributes for items and categories in Product Master and export these attributes to WebSphere® Commerce.

SEO attributes are:

Image alt text
Meta description
Page title
URL keyword

These attributes are available by default for a catalog group or a catalog entry, under the Search Engine Optimization tab.
Note: Ensure that the following characters are not included in the URL keyword value: [_, ?, =, #, /, ., %, ].
The SEO functionality in the WebSphere Commerce environment improves the online search page ranking for your store by allowing users to create shorter SEO-friendly
URLs with meaningful keywords.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Attribute groups
In WebSphere Commerce Server, attribute groups are used to group attributes so that they can be displayed together.

In IBM® Product Master, attribute groups are modeled as categories that are created under the Catalog Attribute Groups hierarchy. This hierarchy is set as the primary
hierarchy for the Catalog Attributes catalog. The items in this catalog represent the user-defined attributes, and the attributes are added when a user creates attribute
dictionary attributes. Whenever an attribute is mapped to a category that represents an attribute group under the Catalog Attribute Groups hierarchy, it is considered to
be part of that attribute group.

When you run the ACM attribute dictionary export, both the attributes and the attribute groups are exported. The relationship between attribute groups and attributes is also
exported by using the data loader utility.

Note: Only those attribute groups that have attributes mapped to them are exported by using the data loader utility.



IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Data model
IBM® WebSphere® Commerce catalog objects are all modeled in the Product Master catalog "Catalog Entry Repository".

The data model can be interpreted as follows: the "Catalog Entry Repository" catalog contains all catalog entries, and each catalog entry can be mapped as follows:

Under all categories: a catalog entry can be mapped to a node (category) under the "Catalog Entry Categories" hierarchy.
Of four types (Product, SKU, Bundle, and Kit): a catalog entry can be mapped to a node (type) under the "Catalog Entry Types" hierarchy.
From all stores: a catalog entry can be mapped to a node (store) under the "Stores" hierarchy.
With or without user-defined attributes: a catalog entry can be mapped to a node (attribute group) under the "Attribute Dictionary" hierarchy.
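For example, a hypothetical catalog entry SHIRT-1001 could be mapped to a category such as Apparel under the "Catalog Entry Categories" hierarchy, to the type SKU under the "Catalog Entry Types" hierarchy, and to a store such as DefaultStore under the "Stores" hierarchy.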

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Roles and tasks



For most Advanced Catalog Management systems, the tasks that are accomplished in the product are generally covered by users in the following main roles: catalog
managers, content editors and viewers, attribute dictionary managers, and global administrators. Depending on the role that you have, certain tasks might not be available
to you.

Tasks for Content Viewers


Content viewers have a limited role. They can view only entries that are in collaboration area steps to which they are assigned as administrators; they cannot view entries in
general.
As a content viewer, you are responsible for the following tasks:

Viewing data (catalog entries and catalog entry categories) in collaboration areas, in workflow steps that have content viewers as administrators.
Searching for entries.

Tasks for Content Editors


Content editors can edit catalog objects (catalog entries or catalog groups). They can edit them only in collaboration areas, not in source catalogs.
As a content editor, you are responsible for the following tasks:

Adding content to newly created objects (catalog entries or groups) in creation collaboration areas.
Editing and enriching objects in workflow steps that are designated for content editors in the maintenance collaboration areas.

The content editor can also manage the relationship between an Advanced Catalog Management Product and the associated Advanced Catalog Management SKUs, for
example, navigating from one to the other and adding and deleting the relationship between them.

Tasks for Catalog Managers


Catalog managers are users who manage the stores, the master catalog, and the sales catalogs.
As a catalog manager, you are responsible for the following tasks:

Creating content for products and variants within the catalog
Create, retrieve, update, and delete (CRUD) operations on the sales catalog
Create, retrieve, update, and delete operations on master catalog categories
Create, retrieve, update, and delete operations on sales catalog categories
Initiating product creation
Reviewing and approving product information, including product, SKU, bundle, and kit
Second-level approval inside a workflow

The Catalog Manager can create Advanced Catalog Management catalog entries and define basic parameters for them (such as the type and the association to attributes in the
Advanced Catalog Management attribute dictionary). The Catalog Manager also reviews and approves edited Advanced Catalog Management catalog entries so that they
can be exported to the commerce system. The Content Editor does the enrichment of the attribute values (both core attributes and reusable attributes of the Advanced
Catalog Management attribute dictionary) and the assignment of Advanced Catalog Management catalog entries to one or several Advanced Catalog Management sales
catalogs.
The Catalog Manager creates and edits Advanced Catalog Management sales catalogs, including defining and updating the hierarchical structure.

The Catalog Manager creates Advanced Catalog Management stores and manages the Advanced Catalog Management master and sales catalogs in the store.

The Advanced Catalog Management asset offers basic workflow-controlled creation and editing of Advanced Catalog Management catalog entries with generic edit and
approval steps.



Tasks for Attribute Dictionary Managers
Attribute Dictionary Managers manage the content of the attribute dictionary that is modeled as a Product Master hierarchy.
As an attribute dictionary manager, you are responsible for the following tasks:

Create, retrieve, update, and delete (CRUD) operations on the attribute dictionary
Metadata management (for example, lookup values)
Creating descriptive attributes
Creating defining attributes
Updating attributes

The Attribute Dictionary Manager specifies and edits the reusable attribute definitions of the Advanced Catalog Management attribute dictionary, which are later exported
into the commerce system to become WebSphere® Commerce defining attributes and WebSphere Commerce descriptive attributes. The use case is restricted to assure
that the resulting updates are permitted in the WebSphere Commerce attribute dictionary. The Content Editor and Content Manager can only work with already created
attributes of the Advanced Catalog Management attribute dictionary.

Tasks for Global Administrators


As a global administrator, you are responsible for the following tasks:

Export management
Catalog management
Store management
Initial load

The Global Administrator can trigger or schedule a batch export to the commerce system. The export always includes content information (to be loaded into WebSphere
Commerce catalog entries and WebSphere Commerce catalogs) and metadata (to be loaded into the WebSphere Commerce attribute dictionary).

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Integration approaches
Advanced Catalog Management maintains the master copy of catalog data in the Product Master system and publishes the data to IBM® WebSphere® Commerce Server when
it is ready.

There are two approaches you can take when you integrate with IBM WebSphere Commerce Server.

Web service approach


For a small volume of data, Advanced Catalog Management uses a real-time web service call to publish data directly to the IBM WebSphere Commerce system. This approach
is used for catalog groups and attribute dictionary exports.

Batch load approach


For a large volume of data, Advanced Catalog Management starts an asynchronous batch load process. This approach is used for catalog entries. The batch process uses
WebSphere MQ for notifications and an FTP server for large data file transfers.



IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Managing business user privileges


After you install and configure Advanced Catalog Management, you can log in to an Advanced Catalog Management company.

You log in to an Advanced Catalog Management company based on your assigned user role. For example, the default users are:

Catalog Managers

Username: cm
Password: cm

Content Editors

Username: ce
Password: ce

Content Viewers

Username: cv
Password: cv

Attribute Dictionary Managers

Username: adm
Password: adm

Global Administrators

Username: ga
Password: ga



Managing the catalog access privileges
Managing the catalog access privileges consists of accessing all of the catalogs and hierarchies and editing source containers.
The catalog access privileges are as follows:

The Global Administrator (GA) role has administration access over all of the catalogs and hierarchies.
The Catalog Manager (CM) role can access all of the catalogs and hierarchies and edit the source containers but cannot add new entries to catalogs from the
navigation pane. That operation can be done only through the collaboration areas.

Managing the content access privileges


Managing the content access privileges consists of editing content within specified areas.
The content access privileges are as follows:

The Global Administrator (GA) role has administration access over all of the content.
The Catalog Manager (CM) role can edit content in source containers as well as collaboration areas wherever allowed by the workflow step privileges.
The Content Editor (CE) role can edit content wherever allowed in the workflow steps for the collaboration areas.

Managing the attribute dictionary privileges


The attribute dictionary hierarchy can be accessed by the Global Administrator (GA) and Attribute Dictionary Manager (ADM) roles.
Users in these roles can edit attribute categories, and add new attributes and secondary specs for the attribute categories.

Managing the export privileges


Exporting happens through the Custom Tools menu.
Only users in the Global Administrator (GA) role can access the export custom tool; the export privilege is therefore limited to that role.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Installing and configuring Advanced Catalog Management


This installation section provides instructions to install the Advanced Catalog Management asset on the Product Master platform and the WebSphere® Commerce platform
including the configuration of the provided default objects (for example, default master catalog, default views, and default workflows).

Installation of the WebSphere Commerce Advanced Catalog Management product requires the following basic steps:

Installation prerequisites for Product Master and WebSphere Commerce
Configuration checklist prior to installation
Installation of Advanced Catalog Management into Product Master and WebSphere Commerce

The following sections describe the product and how to install and configure it to your needs.

Installing WebSphere Commerce Server


This installation section provides instructions to install WebSphere Commerce Server.
Installing and configuring WebSphere MQ for WebSphere Commerce Server
Ensure you install and configure WebSphere MQ for WebSphere Commerce Server use.
Installing and configuring FTP server
Perform the following steps to install and configure the FTP server.
Installing and configuring Advanced Catalog Management on WebSphere Commerce Server
Advanced Catalog Management (ACM) uses IBM® Message Queue Server messages for data export from Product Master to WebSphere Commerce Server.
Installing and configuring Advanced Catalog Management on Product Master
This installation section provides instructions to install the Advanced Catalog Management asset on the Product Master platform and the WebSphere Commerce
platform including the configuration of the provided default objects (for example, default master catalog, default views, and default workflows).
Troubleshooting
The following troubleshooting tips are useful when diagnosing any install or configuration issues related to Advanced Catalog Management.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Installing WebSphere Commerce Server


This installation section provides instructions to install WebSphere® Commerce Server.

Installation of the WebSphere Commerce requires the following basic steps:

Install WebSphere Commerce Server bundle
Install DB2®
Install IBM® HTTP Server
Install WebSphere Application Server
Install WebSphere Commerce Server

The following sections describe the product and how to install and configure it to your needs.

Prerequisites checklist for WebSphere MQ


Before you can install Advanced Catalog Management, ensure that you have met all the hardware and software requirements, team requirements, and the
application server and database configuration requirements to run ACM with WebSphere MQ.
Installing WebSphere Commerce Server bundle
The WebSphere Commerce Server bundle installation includes the installation of the following components.
Installing DB2
If you already installed DB2 before running this step, the installation detects this and puts a check mark in the "is installed" box; you can then skip this step.
Installing Oracle
If you already installed Oracle before running this step, the installation detects this and puts a check mark in the "is installed" box; you can then skip this step.
Installing HTTP Server
If you already installed an HTTP server before running this step, the installation detects this and puts a check mark in the "is installed" box; you can then skip this step.
Installing WebSphere Application Server
If you already installed WebSphere Application Server before running this step, the installation detects this and puts a check mark in the "is installed" box; you can then skip this step.
Installing WebSphere Commerce Server
If you already installed WebSphere Commerce Server before running this step, the installation detects this and puts a check mark in the "is installed" box; you can then skip this step.
Install WebSphere Application Server fix pack and WebSphere Commerce Server feature pack
A minimum level of WebSphere Application Server fix pack and WebSphere Commerce Server feature pack is required by Product Master in order for it to work correctly.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Prerequisites checklist for WebSphere MQ


Before you can install Advanced Catalog Management, ensure that you have met all the hardware and software requirements, team requirements, and the application
server and database configuration requirements to run ACM with WebSphere® MQ.

Install WebSphere MQ. For more information, see Installing WebSphere MQ.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Installing WebSphere Commerce Server bundle


The WebSphere® Commerce Server bundle installation includes the installation of the following components.

About this task


IBM® DB2®
IBM HTTP Server
IBM WebSphere Application Server
IBM Web Server Plug-in
IBM WebSphere Commerce Server

Procedure
1. Go to the bin directory, and run the setup_linux command.
2. Specify the home directory for a non-root user, for example, /home/wcuser. If you do not have a non-root user ID to use, create one before proceeding with the next
step (see the example after this procedure); for example, a non-root user ID called wcuser. This non-root user ID is also used for configuring WebSphere
MQ later.
3. Click Next.
4. Choose Custom Installation. Select the following components to install:
WebSphere Commerce, which includes:
WebSphere Commerce Server, including WebSphere Application Server product
WebSphere Commerce payment, including WebSphere Application Server product
Remote WebSphere Commerce Management Utilities
WebSphere Commerce product documentation (required)
Supporting IBM Software, which includes:
IBM DB2 Universal Database, including:
IBM DB2 Universal Database Enterprise Server Edition



IBM DB2 Universal Database Administration Client (required)
IBM HTTP Server
WebSphere Application Server Web server plug-ins (required)
5. Click Next.
6. Specify the destination path where each component will be installed.
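If you need to create the non-root user ID before you start, the following is a minimal sketch; the user name wcuser and home directory come from the examples in this procedure:

# Create the non-root user with a home directory, then set its password
useradd -m -d /home/wcuser wcuser
passwd wcuser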

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Installing DB2
If you already installed DB2® before running this step, the installation detects this and puts a check mark in the "is installed" box; you can then skip this step.

Procedure
If you do not have DB2 installed, see Installing and setting up the database.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Installing Oracle
If you already installed Oracle before running this step, the installation detects this and puts a check mark in the "is installed" box; you can then skip this step.

Procedure
If you do not have Oracle installed, see Installing and setting up the database.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Installing HTTP Server


If you already installed an HTTP server before running this step, the installation detects this and puts a check mark in the "is installed" box; you can then skip this step.

Procedure
If you do not have an HTTP server installed, ensure that you install one.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Installing WebSphere Application Server


If you already installed WebSphere® Application Server before running this step, the installation detects this and puts a check mark in the "is installed" box; you can then skip this step.

Procedure
1. If you do not have WebSphere Application Server installed, see
Setting up the
2. To avoid confusion when you have more than one version of WebSphere Application Server installed, specify the version number in the destination path for the
WebSphere Application Server installation. Refer to env_setting.ini.default for the entire list of sections supported.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)



Installing WebSphere Commerce Server
If you already installed WebSphere® Commerce Server before running this step, the installation detects this and puts a check mark in the "is installed" box; you can then skip this step.

Procedure
1. If you do not have WebSphere Commerce Server installed, refer to Installing WebSphere Commerce Server for instructions.
2. To avoid confusion when you have more than one version of WebSphere Commerce Server installed, specify the version number in the destination path for the
WebSphere Commerce Server installation.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Install WebSphere Application Server fix pack and WebSphere Commerce Server
feature pack
A minimum level of WebSphere® Application Server fix pack and WebSphere Commerce Server feature pack is required by Product Master in order for it
to work correctly.

Installing WebSphere Application Server fix pack


The minimum required WebSphere Application Server fix pack level is fix pack 21. If you install a fix pack that is lower than the required level, you get an
error and must install the correct level.
Installing WebSphere Commerce Server feature pack
The minimum required WebSphere Commerce Server feature pack level is feature pack 6. If you install a feature pack that is lower than the required level, you
get an error and must install the correct level.
Enabling WebSphere Commerce Server feature pack
Ensure you enable the WebSphere Commerce Server feature pack before you create a WebSphere Commerce Server instance.
Creating a WebSphere Commerce Server instance
To create the WebSphere Commerce instance to be used by your Advanced Catalog Management application, you need to run the following step as a non-root user.
Publishing WebSphere Commerce Server stores
Before you can install a WebSphere Commerce Server store, first you need to start the HTTP server.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Installing WebSphere Application Server fix pack


The minimum required WebSphere® Application Server fix pack level is fix pack 21. If you install a fix pack that is lower than the required level, you get an error and
must install the correct level.

Procedure
1. Go to the directory where you downloaded fix pack 21, for example, /opt/software/WCS/update_installer_was7/UpdateInstaller.
2. Run the install_server.sh command. The introduction screen displays.
3. Specify the destination directory path for the fix pack. Click Next.
4. In the Product Selection screen, specify the installation location of WebSphere Application Server you want to update.
5. Select Install maintenance package in the next screen, and specify the directory which contains all the maintenance packages, for example, /opt/software/WCS.
6. Check the box next to the fix pack level that you want to install.
7. Take the default options in the following screen, then click Next to start the installation.
8. Click Finish when the installation has completed.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Installing WebSphere Commerce Server feature pack


The minimum required WebSphere® Commerce Server feature pack level is feature pack 6. If you install a feature pack that is lower than the required level, you get
an error and must install the correct level.



Procedure
1. Go to the directory where you have downloaded feature pack 6, for example, /opt/software/WCS/FeaturePack6.
2. Run the install_server.sh command. The introduction screen displays.
3. Accept the terms of the license agreement. Click Next.
4. Specify the destination directory path for the feature pack. Click Next.
5. Specify the directory path for WebSphere Commerce Server 7.0. Click Next.
6. Select Install maintenance package. Click Next.
7. Specify the directory that contains the list of maintenance packages. Click Next.
8. Check the box next to 7.0.0-WSWCServer-FP006.pak and click Next to start the installation.
9. Click Finish when the installation has completed.

What to do next
Continue to enable the feature pack.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Enabling WebSphere Commerce Server feature pack


Ensure you enable the WebSphere® Commerce Server feature pack before you create a WebSphere Commerce Server instance.

Procedure
1. Ensure that WebSphere Commerce Server is already started.
You can do this by running the serverStatus.sh server1 command from the WebSphere Commerce Server bin directory, for example,
/opt/IBM/WebSphere/AppServer70/profiles/AppSrv01/bin. If it is not started, you can start it by running the startServer.sh server1 command.
2. Run the following command to enable the feature pack:
config_ant.sh -buildfile WC_installdir/components/common/xml/enableFeature.xml
  -DinstanceName=instance_name
  -DfeatureName=foundation
  -DdbUserPassword=db_password [-DdbaPassword=dba_password]
  [-DSolrWASAdminUser=solr_wasadminuser]
  [-DSolrWASAdminPassword=solr_wasadminpassword]
  [-Dscchost=HostForScheduledJobs]
  [search_server_config] [-DsearchPort=searchPort]

Example
For example, the command looks similar to the following:
config_ant.sh -buildfile /opt/IBM/WebSphere/CommerceServer70/components/common/xml/enableFeature.xml -DinstanceName=mdmce_acm -DfeatureName=foundation -DdbUserPassword=<your_password>

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Creating a WebSphere Commerce Server instance


To create the WebSphere® Commerce instance to be used by your Advanced Catalog Management application, you need to run the following step as a non-root user.

About this task


In this example, use the wcuser ID that was previously created.

Procedure
1. Go to the bin directory of the WebSphere Commerce installation, for example, /opt/IBM/WebSphere/CommerceServer70/bin.
2. Run the config_server.sh command.
3. Open a new XTERM window, and go to the same bin directory of WebSphere Commerce installation, and run the config_client.sh command.
4. Enter configadmin in the User ID field. This is the ID that will be used for the instance configuration. After entering the authentication password, click OK.
5. Right-click on InstanceList, and click on the option to Create Instance.



6. Specify the commerce instance name to be created, the merchant key, and the Site Admin ID to use. Enter the correct information, make a note of the values, and click
Next.
7. Specify which database to use (or create a new one), for example, DB2®. You also need to specify the database name, the database administration ID to use,
and the password. Click Next.
8. Specify your database user name and password. Click Next.
9. Specify the cell name, node name, port number, as well as the JDBC driver information. Confirm the information provided is correct. Click Next.
10. Specify the languages you would like to use. Click Next.
11. Verify that the host name and port numbers are correct and click Next.
The instance creation process then starts. It can take about 1 hour to complete. When it is done, a completion message lists what was
created for the instance.
12. In the WebSphere Administration Console, select Environment > Virtual Hosts. Make sure that all of your virtual hosts are accessible from other servers.
For example, click the host name VH_mdm_acm_Admin. If it shows as accessible only from a particular host, click the host name and change that value to *.
Remember: Ensure you click Save for any changes that you make to the host name. Ensure you do this for all of the virtual hosts in the list.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Publishing WebSphere Commerce Server stores


Before you can install a WebSphere® Commerce Server store, first you need to start the HTTP server.

Procedure
1. Go to the bin directory of where the IBM® HTTP Server product is installed, for example, /opt/IBM/IBMIHS/bin.
2. Run the ./apachectl -k start -f /opt/IBM/WebSphere/CommerceServer70/instances/mdm_acm/httpconf/httpd.conf command.
3. To verify that the HTTP server started successfully, check the running processes by running the ps -ef | grep httpd command.
4. Log in to the Administration Console on the server where WebSphere Commerce Server is installed.
Use the non-root user ID that was previously created.
5. Select Site to work on and click OK.
6. Click Select, then click the Store Archive tab, and select Publish.
7. Select a sample composite store to create. Click Next.
8. Select a sample data and inventory model, then click Finish. This step can take up to 30 minutes to complete, depending on the speed of the machine you
are using.
9. Click Next.
Note: At any time while it is publishing, you can check the status by clicking the Store Archives tab and selecting Publish Status.
10. Click Details to see a more detailed status.
You can repeat this procedure to create another sample store. When you are done, you can check the stores that were created by clicking Publish Status.
11. Log in to the Management Center main page on the server where WebSphere Commerce Server is installed, using the same non-root user ID as before.
For example, https://9.55.177.114:8000/lobtools/cmc/ManagementCenterMain

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Installing and configuring WebSphere MQ for WebSphere Commerce Server


Ensure you install and configure WebSphere® MQ for WebSphere Commerce Server use.

Installing WebSphere MQ for WebSphere Commerce Server use


Perform the following steps to install WebSphere MQ for WebSphere Commerce Server use.
Configuring WebSphere MQ for WebSphere Commerce use
After you installed WebSphere MQ for WebSphere Commerce use, ensure that you configure WebSphere MQ.
Creating queue managers
Perform the following steps to create queue managers.
Configuring WebSphere Application Server for use with WebSphere MQ
Before you can enable WebSphere MQ to use with WebSphere Commerce Server, you first need to configure WebSphere Application Server for use with WebSphere
MQ.
Enabling WebSphere MQ for WebSphere Commerce Server
After you have configured WebSphere Application Server for use with WebSphere MQ, you need to enable WebSphere MQ and confirm that it was configured
properly.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Installing WebSphere MQ for WebSphere Commerce Server use



Perform the following steps to install WebSphere® MQ for WebSphere Commerce Server use.

Before you begin


Ensure that you created a user ID and group ID for WebSphere MQ. You can run the system-config-users command to create both a user ID and group ID. For example,
user ID: mqm and group ID: mqm.

Procedure
1. Go to the directory where you have downloaded the WebSphere MQ packages, for example, /opt/software/WCS/MQ.
2. Run the mqlicense.sh -text_only command to validate your licenses.
3. Press 1 to accept the licenses.
4. Install the runtime, client, and server packages for WebSphere MQ. To do that, go to the directory that contains the packages and run the rpm -ivh MQSeries* command.
5. Go back to the previous directory, and run the rpm -ivh MQSeries* command.
If you receive a dependency error, you need to install the dependency first. An example of a failed dependency looks similar to the following:
gsk7bas64 >= 7.0-4.27 is needed by MQSeriesKeyMan-7.0.1-3.x86_64
a. Install the dependency first. In this case, run the rpm -ivh gsk7bas* command.
b. Run the rpm -ivh MQSeries* command again.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Configuring WebSphere MQ for WebSphere Commerce use


After you installed WebSphere® MQ for WebSphere Commerce use, ensure that you configure WebSphere MQ.

Before you begin


Refer to the following topic for configuring WebSphere MQ for WebSphere Commerce: Configuring WebSphere Commerce to use WebSphere MQ

Procedure
1. Start the default WebSphere Application Server application server, for example, server1.
2. Open the WebSphere Application Server Administrative Console.
3. Log on to the WebSphere Application Server Administrative Console.
4. In the navigation tree, expand Environment and select WebSphere Variables.
5. Enter the MQ_INSTALL_ROOT value:
a. Click MQ_INSTALL_ROOT.
b. In the Value field, enter /opt/mqm.
c. Click OK.
d. Click Save in the Administrative Console task bar.
e. On the Save page, select Synchronize changes with node.
f. On the Save page, click Save.
6. Close the WebSphere Application Server Administrative Console.
a. Stop the default WebSphere Application Server application server, for example, server1: go to the /opt/IBM/WebSphere/AppServer/profiles/mdm_acm/bin directory and run the stopServer.sh server1 command.
b. Add the non-root user ID that was previously created to the mqm user group. You can do this by modifying the /etc/group file and adding the non-root user ID,
wcuser, to the line containing mqm as follows: mqm:x:505:wcuser.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Creating queue managers


Perform the following steps to create queue managers.

About this task


In this topic, you create the following queues.
Table 1. Queues
hostname.error
    The default error queue. This queue collects inbound messages that are in error.
hostname.inbound
    Used by the SendReceiveImmediate mode of the adapter for WebSphere® MQ. It is where the reply and response messages from the back-end system should go. WebSphere Commerce Server can optionally pick reply and response messages based on an outgoing request to a back-end system.
hostname.inboundp
    Any message that arrives at this queue is processed in a parallel manner.
hostname.inbounds
    Any message that arrives at this queue is processed in a serial manner, based on the message delivery sequence option of the queue.
hostname.outbound
    Used for WebSphere Commerce Server initiated outbound messages and reply messages from WebSphere Commerce Server.
where hostname is the TCP/IP name of the machine that runs WebSphere MQ.
Note: Take note of the names of the message queues that you identified or created. This information is used in later steps.

Procedure
1. Set up the name and configuration of the queue that you want to create:
a. Open the WebSphere MQ Explorer by changing to the /opt/mqm/bin directory and running the ./strmqcfg command.
b. Click Queue managers > New > Queue Manager.
c. Specify the Queue Manager name to use, for example, MDMQueueManager. Click Next.
d. Specify Use circular logging. Click Next.
e. Select the Start queue manager after it has been created check box. Click Next.
f. Select the Create listener configured for TCP/IP check box. Click Next.
g. Select the Autoreconnect check box.
h. Click Finish. You will be brought back to the main screen where you can start creating the queue managers.
2. Create the queues:
a. Click Queues > New > Local Queue.
b. Enter the name of the queue that you want to create. Repeat this process for all the queues listed in Table 1. When complete, you should see all 5 queues
displayed in the list.
3. Create a server connection channel:
a. On the Create a Server Channel window, provide a name for your server connection channel, for example MDM.CHANNEL.
b. Select an existing object from which to copy the attributes for the new object.
c. Click Next.
d. Click Finish.
4. To verify that WebSphere MQ is running, you can check the running processes by issuing the ps -ef | grep mq command.
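If you prefer the command line to the MQ Explorer GUI, the same objects can be defined with the standard WebSphere MQ control commands. The following is a minimal sketch, assuming the queue manager name MDMQueueManager, the channel name MDM.CHANNEL, and a host name of supvm05 (all taken from the examples in this documentation; adjust to your environment):

# Create and start the queue manager (circular logging is the default)
crtmqm MDMQueueManager
strmqm MDMQueueManager

# Define the five local queues, a TCP listener, and the server connection channel
runmqsc MDMQueueManager <<EOF
DEFINE QLOCAL('supvm05.error')
DEFINE QLOCAL('supvm05.inbound')
DEFINE QLOCAL('supvm05.inboundp')
DEFINE QLOCAL('supvm05.inbounds')
DEFINE QLOCAL('supvm05.outbound')
DEFINE LISTENER('MDM.LISTENER') TRPTYPE(TCP) PORT(1414) CONTROL(QMGR)
START LISTENER('MDM.LISTENER')
DEFINE CHANNEL('MDM.CHANNEL') CHLTYPE(SVRCONN) TRPTYPE(TCP)
END
EOF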

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Configuring WebSphere Application Server for use with WebSphere MQ


Before you can enable WebSphere® MQ to use with WebSphere Commerce Server, you first need to configure WebSphere Application Server for use with WebSphere MQ.

Procedure
1. Start WebSphere Commerce Server.
2. Open the WebSphere Application Server Administrative Console.
3. Log into the WebSphere Application Server Administrative Console.
4. Enable the ActivitySession service.
a. Expand Servers > Server Types. Click WebSphere Application Servers.
b. Click server1.
c. Under Container Settings, expand Business Process Services.
d. Click the ActivitySession service.
e. Select Enable service at server startup.
f. Click OK and save the configuration.
5. Determine the maximum number of connections allowed for the Adapter for WebSphere MQ.
a. In the WebSphere Application Server Administrative Console navigation tree, expand Applications > Application Types and select WebSphere enterprise
applications.
b. In the list of enterprise applications, click WC_instance_name where instance_name is the name of your WebSphere Commerce Server instance, for example,
WC_mdm_acm.
c. Under Modules, click Manage Modules.
d. In the list of connector modules, click Adapter for WebSphere MQ.
e. In the Additional Properties section on the Enablement-JCAJMSConnector.rar page, click Resource Adapter. The WC_instance_name.Adapter for WebSphere
MQ page displays. Use the node value WC_instance_name, where instance_name is the name of the WebSphere Commerce instance.
f. In the Additional Properties section of the WC_instance_name.Adapter for WebSphere MQ page, click J2C Connection Factories. The J2C Connection
Factories page displays.
g. In the list of J2C Connection Factories, click Enablement-JCAJMSConnector.rar. The Enablement-JCAJMSConnector.rar page displays.
h. In the Additional Properties section, on the javax.resource.cci.ConnectionFactory page, click Connection Pool Properties. The Connection Pools page
displays.
i. Take note of the value in the Max connections field.
j. Ensure that the Purge Policy is set to failingConnectionOnly.
k. Create a WebSphere MQ messaging provider queue connection factory: in the WebSphere Application Server administrative console navigation tree, expand Resources and select JMS > Queue connection factories.
l. Select the scope of the queue connection factory from the list. Use the node value WC_instance_name, where instance_name is the name of the
WebSphere Commerce instance. Create it under the scope of the Node=WC_instance_name_node, Server=server1 level. Click New.
m. Select WebSphere MQ messaging provider and click OK.
n. Configure basic attributes by providing the necessary values. Click Next.
o. Select Enter all the required information into this wizard and click Next.



p. Enter the name of the Queue Manager identified or created earlier, for example, MDMQueueManager, then click Next.
q. Enter the connection details. Select Bindings for the Transport field. Leave the rest of the fields blank. Click Next.
r. Click Test connection. Confirm the connection results; then click Next. View the summary; then click Finish.
s. Select the newly created JMSQueueConnectionFactory.
t. Under Advanced, clear Support distributed two phase commit protocol. Click Apply.
u. Under Additional properties, click Advanced properties.
v. Under Connection consumer, clear Retain messages, even if no matching consumer is available. Click OK.
w. Under Additional properties, click Connection pool. Adjust the maximum number of connections needed. This number must be 1 greater than the number
defined under the WC_instance_name enterprise application. Click OK and save the configuration.
x. Click Save in the administrative console task bar.
y. On the Save page, click Save.
6. Create WebSphere MQ messaging provider queue destinations.
a. In the WebSphere Application Server Administrative Console navigation tree, expand Resources > JMS > Queues.
b. Select the scope of the queue connection factory from the list. The node value should be WC_instance_name, where instance_name is the name of the
WebSphere Commerce instance. Click New.
c. Select WebSphere MQ messaging provider and click OK.
d. Create new queues for each inbound queue.
e. Create new queues for each outbound and error queue.
Note: Make sure to use the correct queue name for each queue.
f. For each outbound and error queue, click Apply.
i. Click Advanced properties.
ii. Clear the Append RFH version 2 headers to messages sent to this destination check box.
iii. Click OK.
g. Double check that the queues are correctly created and defined.
h. When you have created all of the JMS queues:
i. Click Save in the administrative console task bar.
ii. On the Save page, click Save.
7. Exit the WebSphere Application Server Administrative Console.
8. Stop the default WebSphere Application Server application server, for example, server1.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Enabling WebSphere MQ for WebSphere Commerce Server


After you have configured WebSphere® Application Server for use with WebSphere MQ, you need to enable WebSphere MQ and confirm that it was configured properly.

Procedure
1. Stop WebSphere Commerce Server.
2. Launch the Configuration Manager user interface.
3. Enter your Configuration Manager user ID and password.
4. Expand host_name > Commerce > Instance List > instance_name > Components > Listener for WebSphere MQ (Transport Adapter).
Where:
host_name is the short name of the machine running WebSphere Commerce Server.
instance_name is the name of the WebSphere Commerce Server instance.
5. Select the Enable check box.
6. Click Apply.
7. Exit the Configuration Manager user interface.
8. Start WebSphere Commerce Server.
9. Test your WebSphere MQ configuration.
a. Right-click hostname.inbounds and select Put Test Message from the pop-up menu.
b. In the test message window, enter the following text: <?xml test message>.

Results
WebSphere MQ is configured properly if the following occurs:

The test message is consumed from the serial inbound queue (hostname.inbounds).
An error message appears in the hostname.outbound queue.
The original message appears in the hostname.error queue.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Installing and configuring FTP server


Perform the following steps to install and configure the FTP server.



Prerequisites
You can choose to install whichever FTP server you prefer. For more information, see your FTP server documentation for installation and configuration instructions.
An FTP user login name, password, and directory name must be available before you configure Advanced Catalog Management and WebSphere® Commerce Server.

Installing
Perform the following steps to install an FTP server.

1. Go to the directory where you downloaded the FTP package, for example, cd /opt/software/ftp.
2. Run the yum install vsftpd.x86_64 command to install the FTP server.

Configuring
Perform the following steps to configure an FTP server.

1. Go to the home directory of the non-root user, for example, wcuser, and create an acm directory. For example:
cd /home/wcuser
mkdir acm
This directory is used to store the XML data that is created and used by Advanced Catalog Management.
2. Go to the /etc/init.d directory and start the FTP server. For example:
cd /etc/init.d
./vsftpd start
3. To verify that FTP started successfully, run the chkconfig | grep ftp command. You should see the following output:

vsftpd 0:off 1:off 2:on 3:on 4:on 5:on 6:off
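Depending on your distribution's defaults, you might also need to allow local user logins and uploads so that the wcuser account can transfer files. The following is a minimal sketch of /etc/vsftpd/vsftpd.conf settings; these are standard vsftpd options, but your defaults might already include them:

# Allow local system users (such as wcuser) to log in over FTP
local_enable=YES
# Allow FTP write commands so that export files can be uploaded
write_enable=YES

After you change the configuration, restart the FTP server, for example, by running ./vsftpd restart from the /etc/init.d directory.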

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Installing and configuring Advanced Catalog Management on WebSphere Commerce Server
Advanced Catalog Management (ACM) uses IBM® Message Queue Server messages for data export from Product Master to WebSphere® Commerce Server.

Before you begin


Before you can install Advanced Catalog Management, ensure that you have met all the hardware and software requirements, team requirements, and the
application server and database configuration requirements to run ACM in WebSphere Commerce.
Install WebSphere MQ. For more information, see Integrating WebSphere MQ.
Note: The WebSphere MQ and FTP servers are shared between WebSphere Commerce Server and Product Master; therefore, there is one instance for both
Product Master and WebSphere Commerce Server.
Configure WebSphere MQ for Advanced Catalog Management. For more information, see Configuring WebSphere Commerce to use IBM MQ.
Enable WebSphere MQ for WebSphere Commerce.
See System requirements for the installation requirements for DB2® and WebSphere Application Server.

Procedure
1. Obtain the ACM package.
a. Create an ACM installation folder called acm under the WebSphere Commerce installation folder.
b. Create three subdirectories under acm and call them config, logs, and reports.
c. Copy all of the files and subdirectories of the config folder found in the acm.wcs.zip file over to the newly created config folder.
2. Enable ACM at the WebSphere Commerce Server binary level. For more information, see Enabling ACM at the WebSphere Commerce Server binary level.
3. Modify the wc-dataload-env.xml file. For more information, see Modifying the wc-dataload-env.xml file.
4. Configure the wcs.properties file. For more information, see Configuring the wcs.properties file.
5. Enable ACM at the WebSphere Commerce Server instance level. For more information, see Enabling ACM at the WebSphere Commerce Server instance level.
6. Enable the MQ Listener. For more information, see Enable MQ Listener.

Enabling ACM in WebSphere Commerce Server


Perform the following steps to enable Advanced Catalog Management in WebSphere Commerce Server.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)



Enabling ACM in WebSphere Commerce Server
Perform the following steps to enable Advanced Catalog Management in WebSphere® Commerce Server.

Procedure
1. Unzip the acm.zip file into a temporary directory, for example, /home/wcuser/acm. Extract the following files: acm.mdm.zip and acm.wcs.zip.
2. Unzip the acm.wcs.zip file into a temporary directory, for example, /home/wcuser/acm/wcs.
3. Create an acm directory under the ACM_INSTALL_DIR directory, for example, /opt/IBM/WebSphere/CommerceServer70.
4. Create the following subdirectories under the $ACM_INSTALL_DIR/acm directory: config, logs, and reports.
5. Copy the content of the /home/wcuser/acm/wcs/config folder to the $ACM_INSTALL_DIR/acm/config directory by running the following commands:
cd /home/wcuser/acm/wcs/config
cp -R * /opt/IBM/WebSphere/CommerceServer70/acm/config
6. Modify the wc-dataload-env.xml file. Update the element values of _config:Database with the information found in the
/opt/IBM/WebSphere/CommerceServer70/instances/mdm_acm/xml directory for the <Database> element:
<Database>
  <DB CreateDB="true"
      DBAName="db2admin"
      DBAPwd="m2zFWZGIt2KzuRcT6YvptQ=="
      DBHost="maobing10.usca.ibm.com"
      DBMSName="DB2"
      DBNode=""
      DBServerPort="50001"
      DBUserID="DB2ADMIN"
      DBUserPwd="m2zFWZGIt2KzuRcT6YvptQ=="
      OraUserID=""
      OracleDataFile=""
      RemoteDB="false"
      RunDB2SG="true"
      ServiceName=""
      active="true"
      name="acm" />
</Database>
7. Copy the information to the appropriate attributes of _config:Database as follows:
<_config:Database type="db2" name="acm" user="db2admin" password="m2zFWZGIt2KzuRcT6YvptQ=="
  server="maobing10.usca.ibm.com" port="50001" schema="db2admin" />
8. Change the attribute values of _config:BusinessContext with the store name as follows:
<_config:BusinessContext storeIdentifier="AdvancedB2BDirect" catalogIdentifier="AdvancedB2BDirect"
  languageId="-1" currency="USD">

Enabling ACM at the WebSphere Commerce Server binary level


After you have obtained the Advanced Catalog Management (ACM) package, you need to enable ACM at the WebSphere Commerce Server binary level.
Enabling ACM at the WebSphere Commerce Server instance level
After you have configured the wcs.properties file, you can now enable Advanced Catalog Management (ACM) at the WebSphere Commerce Server instance level.
Modifying the wc-dataload-env.xml file
After you have enabled Advanced Catalog Management at the WebSphere Commerce Server binary level, you can now modify the wc-dataload-env.xml file. You
need to modify the wc-dataload-env.xml file to specify the database configuration parameter for the data load.
Configuring the wcs.properties file
After you have modified the wc-dataload-env.xml file, you can now configure the wcs.properties file.
Verifying the information in the wcs.properties file
After you have configured the wcs.properties file, you must verify that the information is accurate.
Copying the messages
After you have verified the information in the wcs.properties file, you need to copy the messages files to enable Advanced Catalog Management.
Modifying the wc-server.xml file
Update the messaging element in the wc-server.xml file which can be found in the
$WC_INSTALL_DIR/installedApps/WC_mdmdemo_cell/WC_mdmdemo.ear/xml/config folder.
Modifying the struts-config-ext.xml file
Make the following modifications to the struts-config-ext.xml file.
Enabling the jar libraries
After you have modified the struts-config-ext.xml file, you can enable the jar libraries.
Enable MQ Listener
The Listener for WebSphere MQ requires re-enablement every time when WebSphere Commerce Server is restarted.

IBM Product Master 12.0 Fix Pack 8



Operating Systems: AIX, Linux, and Windows (Workbench only)

Enabling ACM at the WebSphere Commerce Server binary level


After you have obtained the Advanced Catalog Management (ACM) package, you need to enable ACM at the WebSphere® Commerce Server binary level.

Procedure
1. Create an ACM installation directory called acm under the WebSphere Commerce installation directory.
2. Create three subdirectories under acm and call them config, logs, and reports.
3. Copy all of the files and subdirectories of the config folder found in the acm.wcs.zip file over to the newly created config directory.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Enabling ACM at the WebSphere Commerce Server instance level


After you have configured the wcs.properties file, you can now enable Advanced Catalog Management (ACM) at the WebSphere® Commerce Server instance level.

Procedure
Copy the following JAR files from the /home/wcuser/acm/wcs/jars/ directory to the
$WAS_INSTALL_DIR/profiles/Instance_Name/installedApps/WC_mdmdemo_cell/WC_mdmdemo.ear/ directory:

com.ibm.mdm.acm.dataload.jar
com.ibm.mdm.integration.utils.jar
FTPProtocol_2.01q.jar
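For example, the copy can be scripted as follows; the profile and cell names are the illustrative values used above and differ in your environment:

cd /home/wcuser/acm/wcs/jars
cp com.ibm.mdm.acm.dataload.jar \
   com.ibm.mdm.integration.utils.jar \
   FTPProtocol_2.01q.jar \
   $WAS_INSTALL_DIR/profiles/Instance_Name/installedApps/WC_mdmdemo_cell/WC_mdmdemo.ear/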

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Modifying the wc-dataload-env.xml file

After you have enabled Advanced Catalog Management at the WebSphere® Commerce Server binary level, you can now modify the wc-dataload-env.xml file. You need
to modify the wc-dataload-env.xml file to specify the database configuration parameter for the data load.

Before you begin


This installation uses DB2®; therefore, fill in the proper values for DB2 as shown below. Note that the password attribute must be assigned an encrypted password value.
You can find the encrypted database password in the $WCS_HOME/instances/[Instance Name]/xml directory.

Procedure
Type:
<_config:Database type="db2" name="wcs4acm" user="db2ins79"
  password="YzV88qD+a+FM0EZiaEGfsw==" server="localhost" port="50003" schema="db2ins79" />

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Configuring the wcs.properties file

After you have modified the wc-dataload-env.xml file, you can now configure the wcs.properties file.

Procedure
1. Fill in the MQ properties with actual values from your environment.
For example:
# MQ
wcs.mq.outbound.queue.name=supvm05.outbound
wcs.mq.queuemanager.name=MDMQueueManager
wcs.mq.channel.name=MDM.CHANNEL
wcs.mq.hostname=localhost
wcs.mq.username=mqm
wcs.mq.password=pass4temp
wcs.mq.port.num=1414
2. Fill in the secondary MQ properties with the actual values from your environment.
The secondary MQ setup is used for a High Availability (HA) configuration. If a secondary MQ is not being used, use the same values as the primary MQ properties
listed in Step 1.
For example:
# MQ multi-instance queue manager setup - secondary MQ settings
secondary.wcs.mq.queuemanager.name=MDMQueueManager
secondary.wcs.mq.channel.name=MDM.CHANNEL
secondary.wcs.mq.hostname=localhost
secondary.wcs.mq.username=mqm
secondary.wcs.mq.password=pass4temp
secondary.wcs.mq.port.num=1414
3. Fill in the FTP properties with the actual values from your environment.
For example:
# FTP
wcs.ftp.hostname=localhost
wcs.ftp.port=21
wcs.ftp.folder=/home/webcom79
wcs.ftp.username=webcom79
wcs.ftp.password=webcom79

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Verifying the information in the wcs.properties file


After you have configured the wcs.properties file, you must verify that the information is accurate.

Procedure
Verify that the wcs.properties file contains the following settings and that they are similar to the sample settings.
This file is located in the ACM_INSTALL_DIR/config directory.
# MQ
wcs.mq.outbound.queue.name=supvm05.outbound
wcs.mq.queuemanager.name=MDMQueueManager
wcs.mq.channel.name=MDM.CHANNEL
wcs.mq.hostname=9.55.177.114
wcs.mq.username=mqm
wcs.mq.password=passw0rd
wcs.mq.port.num=1414

# MQ multi-instance queue manager setup - secondary MQ settings
secondary.wcs.mq.queuemanager.name=MDMQueueManager
secondary.wcs.mq.channel.name=MDM.CHANNEL
secondary.wcs.mq.hostname=9.55.177.114
secondary.wcs.mq.username=mqm
secondary.wcs.mq.password=passw0rd
secondary.wcs.mq.port.num=1414

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Copying the messages


After you have verified the information in the wcs.properties file, you need to copy the messages files to enable Advanced Catalog Management.

Procedure
Copy the following files from the /home/wcuser/acm/wcs/xml/messaging/ directory to the
$WAS_INSTALL_DIR/profiles/Instance_Name/installedApps/WC_mdmdemo_cell/WC_mdmdemo.ear/xml/messaging/ directory:

MDMMessaging.dtd
user_template.xml
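For example; the profile and cell names are the illustrative values used above and differ in your environment:

cd /home/wcuser/acm/wcs/xml/messaging
cp MDMMessaging.dtd user_template.xml \
   $WAS_INSTALL_DIR/profiles/Instance_Name/installedApps/WC_mdmdemo_cell/WC_mdmdemo.ear/xml/messaging/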

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Modifying the wc-server.xml file


Update the messaging element in the wc-server.xml file which can be found in the $WC_INSTALL_DIR/installedApps/WC_mdmdemo_cell/WC_mdmdemo.ear/xml/config
folder.

About this task


The messaging element looks similar to the following:
<Messaging EcInboundMessageDtdFiles="..." .../>

Procedure
Append MDMMessaging.dtd to the value of the EcInboundMessageDtdFiles attribute as follows:
<Messaging EcInboundMessageDtdFiles="NCCommon.mod, NCCustomer_10.mod,
  Create_NC_Customer_10.dtd, Update_NC_Customer_10.dtd, Update_NC_OrderStatus_10.dtd,
  Update_NC_ProductInventory_10.dtd, Update_NC_ProductPrice_10.dtd,
  Create_WCS_Customer_20.dtd, Create_WCS_Customer_30.dtd,
  Update_WCS_Customer_20.dtd, Update_WCS_Customer_30.dtd,
  Update_WCS_OrderStatus_20.dtd, Update_WCS_ProductPrice_20.dtd,
  Inquire_WCS_PickPackListDetail_10.dtd, Create_WCS_PickBatch_10.dtd,
  Create_WCS_ExpectedInventoryRecord_10.dtd, Create_WCS_InventoryReceipt_10.dtd,
  Update_WCS_InventoryReceipt_10.dtd, Create_WCS_ShipmentConfirmation_10.dtd,
  Create_WCS_ShipmentConfirmation_20.dtd, Update_WCS_ProductInventory_20.dtd,
  Request_WCS_BE_ProductInventory_10.dtd, Update_WCS_OrderStatus_30.dtd,
  Update_WCS_PriceAndAvailability_10.dtd, Update_WCS_ShoppingCartTransfer_10.dtd,
  Update_WCS_BatchAvailability_10.dtd, MDMMessaging.dtd"

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Modifying the struts-config-ext.xml file


Make the following modifications to the struts-config-ext.xml file.

Procedure
1. Copy the contents of the struts-config-ext.xml.include file in /home/wcuser/codebases/acm-11.3.0-60/acm/acm.wcs/Stores/WebContent/WEB-INF/struts-config-ext.xml.include to
/opt/IBM/WebSphere/AppServer70/profiles/mdm_acm/installedApps/WC_mdmce_acm_cell/WC_mdm_acm.ear/Stores.war/WEB-INF/struts-config-ext.xml.
2. Insert the following element under the action-mappings element:
<action parameter="com.ibm.mdm.acm.dataload.commands.JMSLoaderInvokeCommand"
  path="/JMSLoaderInvoke" type="com.ibm.commerce.struts.BaseAction">
  <set-property property="https" value="0:1"/>
  <set-property property="authenticate" value="0:0"/>
</action>

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Enabling the jar libraries


After you have modified the struts-config-ext.xml file, you can enable the jar libraries.

Procedure
1. Include the following JAR files in the $WC_INSTALL_DIR/installedApps/WC_mdmdemo_cell/WC_mdmdemo.ear/META-INF/MANIFEST.MF file (see the sample manifest after this procedure):
com.ibm.mdm.acm.dataload.jar
com.ibm.mdm.integration.utils.jar
FTPProtocol_2.01q.jar
2. Restart the WebSphere® Commerce Server.
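For reference, the Class-Path attribute of the manifest is the standard mechanism for making JAR files visible to the enterprise application class path. The following is a minimal sketch that assumes the three JAR files sit at the root of the WC_mdmdemo.ear directory; an existing manifest contains additional entries that must be preserved:

Manifest-Version: 1.0
Class-Path: com.ibm.mdm.acm.dataload.jar com.ibm.mdm.integration.utils.jar FTPProtocol_2.01q.jar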

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Enable MQ Listener
The Listener for WebSphere® MQ requires re-enablement every time WebSphere Commerce Server is restarted.

Procedure
1. Log in to the Administration Console of WebSphere Commerce Server at https://hostname.com:8002/adminconsole with your WebSphere Commerce
Server admin user, for example, wcsadmin.
2. Select Site and click OK.
3. Click Configuration > Component Configuration.
4. Select Listener for WebSphere MQ (Transport adapter) from the box on the right side and click Add, then click OK.
5. Verify the MQ Listener is added to the box on the left side by selecting Configuration > Component Configuration again.
Note: If the MQ Listener is still not added, repeat the process until it is successfully added to the Selected Components.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Installing and configuring Advanced Catalog Management on Product Master


This installation section provides instructions to install the Advanced Catalog Management asset on the Product Master platform and the WebSphere® Commerce platform
including the configuration of the provided default objects (for example, default master catalog, default views, and default workflows).

Installation of the WebSphere Commerce Advanced Catalog Management product requires the following basic steps:

Installation prerequisites for Product Master and WebSphere Commerce:
Install WebSphere MQ.
Note: The WebSphere MQ and FTP servers are shared between WebSphere Commerce Server and Product Master; therefore, there is one instance for both
Product Master and WebSphere Commerce Server.
Install and configure Product Master; see Installing.
For DB2® and WebSphere Application Server installation requirements, see System requirements.
Configuration checklist before installation
Installation of Advanced Catalog Management into Product Master and WebSphere Commerce



The following sections describe the product and how to install and configure it to your needs.

Enabling an Advanced Catalog Management package


Enabling Advanced Catalog Management (ACM) in Product Master includes installing the required ACM files on the Product Master back end, configuring the properties
files, and importing the ACM data models into Product Master.

1. Obtain the ACM enablement package named acm.zip from the software CD under the /Samples/mdm directory.
2. Extract the acm.zip file. It contains another ZIP file named acm.mdm.zip, which is used for enabling ACM in Product Master.
3. Upload the acm.mdm.zip file to the Product Master server and extract it at a temporary location.

Installing Advanced Catalog Management on Product Master


After you have obtained the Advanced Catalog Management (ACM) package, you can now install ACM on Product Master.

1. Extract the acm.mdm.zip file into a convenient temporary directory, for example, $TOP/acm/mdm.
2. Go to the $TOP/acm/mdm/bin directory.
3. Change the permissions of all files to add the run permission, for example, chmod 755 *.
4. Issue the ./acm_install.sh --TOP=$TOP command.
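Taken together, the steps look like the following on the command line (a minimal sketch; it assumes acm.mdm.zip is in the current directory and $TOP is set to the Product Master installation root):
mkdir -p $TOP/acm/mdm
unzip acm.mdm.zip -d $TOP/acm/mdm    # step 1: extract to a temporary directory
cd $TOP/acm/mdm/bin                  # step 2
chmod 755 *                          # step 3: add the run permission
./acm_install.sh --TOP=$TOP          # step 4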

Configuring Advanced Catalog Management on Product Master


After you have installed Advanced Catalog Management (ACM) on Product Master, you can now configure ACM.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Configuring Advanced Catalog Management on Product Master


After you have installed Advanced Catalog Management (ACM) on Product Master, you can now configure ACM.

About this task


Only a single company per instance is supported. Multiple companies within a single instance are not supported with this accelerator. See the readme file for more
information about the steps of this task.

Procedure
1. Update the $TOP/bin/conf/env_settings.ini file to enable MQ. For example:
[mq]
enabled=yes
#home will default to /opt/mqm if not set
home=/opt/mqm
2. Issue the $TOP/bin/configureEnv.sh command to re-configure the Product Master environment.
3. Verify that the following entries are present in the $TOP/etc/default/flow-config.xml file:

<!--ACM Selective Export-->
<async-flow path="acmGetSelections"
    command="com.ibm.mdm.acm.customtools.commands.AcmSelectiveExportCommand"
    method="getSelections" isCustomTool="true"/>
<async-flow path="acmGetStores"
    command="com.ibm.mdm.acm.customtools.commands.AcmSelectiveExportCommand"
    method="getStores" isCustomTool="true"/>
<async-flow path="acmGetWorkspaces"
    command="com.ibm.mdm.acm.customtools.commands.AcmSelectiveExportCommand"
    method="getWorkspaces" isCustomTool="true"/>
<async-flow path="acmExport"
    command="com.ibm.mdm.acm.customtools.commands.AcmSelectiveExportCommand"
    method="export" isCustomTool="true"/>

4. Create a data model. Perform the following steps:


a. Log in to the target company in Product Master.
b. Import the acm.design.zip file, then log out and log back in using the same company.
c. After you log back in, run the ACM Refresh Data custom tool under the Custom Tools menu. When it completes, a report is displayed in the right pane.
5. Optional: For additional sample data and stores, perform the following steps:



a. Log in to the target company in Product Master.
b. Import data for one of the Direct Stores (acm.design.aurora_store.zip,
acm.design.brazil_store.zip, acm.design.elite_store.zip, acm.design.mayujoy_store.zip) or for the Catalog Asset Store with eSite
Stores (acm.design.multi_site_stores.zip).
6. Edit the $TOP/etc/default/acm/config/acm.properties file and change the following parameters:
# WCS account info
wcs.context.username=wcsadmin # WCS Admin username
wcs.context.password=password # WCS Admin password
wcs.ws.endpoint.url=http://WCS-HOSTNAME:8007/webapp/wcs/component/catalog/services/CatalogServices
wcs.ws.endpoint.content.url=http://WCS-HOSTNAME:8007/webapp/wcs/component/content/services/ContentServices

# WCS stores
wcs.store.name=Madisons # Store name found in WCS

# WCS master catalog
# WCS master catalog ID, this value can be found in WCS database tables
wcs.master.catalog.id=10001

# WCS attribute dictionary
# WCS attribute dictionary ID, which can be found in WCS database tables
wcs.attribute.dictionary.id=10001

# MQ
mdm.mq.outbound.queue.name=supvm05.inboundp
mdm.mq.inbound.queue.name=supvm05.outbound
mdm.mq.queuemanager.name=MDMQueueManager
mdm.mq.channel.name=MDM.CHANNEL
mdm.mq.hostname=MQ HOST NAME
mdm.mq.username=
mdm.mq.password=
mdm.mq.port=1414

# MQ multi-instance queue manager setup - secondary MQ settings
secondary.mdm.mq.queuemanager.name=MDMQueueManager
secondary.mdm.mq.channel.name=MDM.CHANNEL
secondary.mdm.mq.hostname=MQ HOST NAME
secondary.mdm.mq.username=
secondary.mdm.mq.password=
secondary.mdm.mq.port=1414

# FTP
mdm.ftp.hostname=FTP HOSTNAME # specify the host name of the ftp server
mdm.ftp.folder=/home/webcom79 # ftp folder
mdm.ftp.username=username # ftp user name
mdm.ftp.password=password # ftp user password
mdm.ftp.port=21 # ftp port number

Setting up stores, catalogs and attribute dictionary


The following sections provide details about setting up and managing stores, catalogs and attribute dictionary.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)



Setting up stores, catalogs and attribute dictionary
The following sections provide details about setting up and managing stores, catalogs and attribute dictionary.

Adding a Direct Store with Master Catalog and Attribute Dictionary


Advanced Catalog Management (ACM) supports management of Direct Store. Catalog Group and Catalog Entries can be added under a Direct Store and exported to
WebSphere® Commerce.
Adding a Catalog Asset Store with Master Catalog and Attribute Dictionary
Advanced Catalog Management (ACM) supports management of Catalog Asset Store. Catalog Group and Catalog Entries can be added under a Catalog Asset Store
and exported to WebSphere Commerce.
Adding an eSite Store to extend the Catalog Asset Store with Sales Catalog
Advanced Catalog Management (ACM) supports management of eSite Store. Description Attribute Override can be done using the eSite store.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Adding a Direct Store with Master Catalog and Attribute Dictionary


Advanced Catalog Management (ACM) supports management of Direct Store. Catalog Group and Catalog Entries can be added under a Direct Store and exported to
WebSphere® Commerce.

Before you begin


Ensure you meet the following prerequisites:

A new ACM company is set up on which the Direct Store can be created.
Lookup Table Attribute Dictionaries has the corresponding ID of the attribute dictionary for Direct Store in WebSphere Commerce.
Lookup Table Catalogs has the correct ID for the Master Catalog, corresponding to the catalog ID of the Direct Store on WebSphere Commerce side.

Procedure
1. Add the Stores hierarchy to the left navigation pane.
2. Under the Stores hierarchy, add a new category using the Add Category option.
a. For the new category, provide the name of the store and description.
b. For the Store ID field, provide the value of the corresponding Store ID in WebSphere Commerce.
c. Select the Store Type as Direct Store.
d. For Associated Catalogs, select the Master Catalog. This catalog should already be available in the new ACM setup.
e. To populate the value of Attribute Dictionary ID, select the Attribute Dictionary that is already provided in the lookup table Attribute Dictionaries. This should
correspond to the ID of the attribute dictionary associated with the Direct Store on WebSphere Commerce.
f. Click Save to save the new store.
After the new Store is saved, a new secondary Spec is generated, which needs to be associated with the Store. Use the Specs tab of the newly created Store to
associate the store with the generated Store Secondary spec.
3. Click Manage Specs to go to the Spec selection screen.
4. Search for the generated secondary Store Spec. The name of the secondary spec has the store name as the prefix. For example, if the Store name was given as
MayUJoy, the name of the corresponding generated spec is MayUJoy Spec.
5. Select the spec and click Done.
6. Once back in the store screen, click Save to save the store.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Adding a Catalog Asset Store with Master Catalog and Attribute Dictionary
Advanced Catalog Management (ACM) supports management of Catalog Asset Store. Catalog Group and Catalog Entries can be added under a Catalog Asset Store and
exported to WebSphere Commerce.

Before you begin


Ensure you meet the following prerequisites:

A new ACM company is set up on which the Catalog Asset Store can be created.
Lookup Table Attribute Dictionaries has the corresponding ID of the attribute dictionary for Catalog Asset Store in WebSphere Commerce.
Lookup Table Catalogs has the correct ID for the Master Catalog, corresponding to the catalog ID of the Catalog Asset Store on WebSphere Commerce side.

Procedure



1. Add the Stores hierarchy to the left navigation pane.
2. Under the Stores hierarchy, add a new category using the Add Category option.
a. For the new category, provide the name of the store and description.
b. For the Store ID field, provide the value of the corresponding Store ID in WebSphere Commerce.
c. Select the Store Type as Catalog Asset Store.
d. For Associated Catalogs, select Master Catalog. This catalog should already be available in the new ACM setup.
e. To populate the value of Attribute Dictionary ID, select the Attribute Dictionary that is already provided in the lookup table Attribute Dictionaries. This should
correspond to the ID of the attribute dictionary associated with the Catalog Asset Store on WebSphere Commerce.
f. Click Save to save the new store.
After the new Store is saved, a new secondary Spec is generated, which needs to be associated with the Store. Use the Specs tab of the newly created Store to
associate the store with the generated Store Secondary spec.
3. Click Manage Specs > Add Specs to go to the Spec selection screen.
4. Search for the generated secondary Store Spec. The name of the secondary spec has the store name as the prefix. For example, if the Store name was given as
Extended Sites Catalog Asset Store, the name of the corresponding generated spec is Extended Sites Catalog Asset Store Spec.
5. Select the spec and click Done.
6. Once back in the store screen, click Save to save the store.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Adding an eSite Store to extend the Catalog Asset Store with Sales Catalog
Advanced Catalog Management (ACM) supports management of eSite Store. Description Attribute Override can be done using the eSite store.

Before you begin


Ensure you meet the following prerequisites:

The eSite store is added under a preexisting Catalog Asset Store. Therefore, the Catalog Asset Store must be added prior to performing this step, using the steps
outlined earlier in the section Adding a Catalog Asset Store with Master Catalog and Attribute Dictionary.
The eSite Store must be associated with a Sales Catalog when it is created. Therefore, create the required Sales Catalog under the Master Catalog before
proceeding to the eSite Store creation.
Lookup Table Catalogs has the correct ID for the Sales Catalog, corresponding to the catalog ID of the Sales Catalog of the eSite Store on WebSphere® Commerce.

Procedure
1. Add the Stores hierarchy to the left navigation pane.
2. Under the Stores hierarchy, add a new category under the preexisting Catalog Asset Store.
a. For the new category, provide the name of the store and description.
b. For the Store ID field, provide the value of the corresponding store ID in WebSphere Commerce.
c. Select the Store Type as eSite Store.
d. For Associated Catalogs, select the corresponding Sales Catalog, which is already created.
e. eSite stores do not have associated attribute dictionaries. Therefore, leave this field blank.
Once the new Store is saved, two new secondary Specs are generated, which need to be associated with the Store. Use the Specs tab of the newly created Store to
associate the store with the generated Store Secondary specs. Because the eSite Store is created under an already existing Catalog Asset Store, it inherits the
secondary spec of the Catalog Asset Store.
3. Click Manage Specs > Add Specs to go to the Spec selection screen.
4. Search for the generated secondary Store Spec. The name of the secondary spec has the store name as the prefix. For example, if the Store name was given as
EliteESite, the name of the corresponding generated spec is EliteESite Spec.
5. Search for the generated Description Override secondary Store Spec. The name of the secondary spec has the store name as the prefix. For example, if the Store
name was given as EliteESite, the name of the corresponding generated spec is EliteESite Description Override Spec.
6. Select these specs and click Done.
7. Once back in the store screen, click Save to save the store.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Troubleshooting
The following troubleshooting tips are useful when diagnosing any install or configuration issues related to Advanced Catalog Management.

Instance creation failed


The instance creation failed, and in the error log file, you see a DB2® error with SQLCODE 1084.



Failure in enabling WebSphere Commerce Server
The WebSphere Commerce Server failed to be enabled.
WebSphere MQ call failed
WebSphere MQ call failed with completion code '2'.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Instance creation failed


The instance creation failed, and in the error log file, you see a DB2® error with SQLCODE 1084.

Symptoms
In the log file, you see an error message similar to the following:
/opt/IBM/WebSphere/CommerceServer70/instances/mdmce_acm/logs/createInstanceANT.err.log

Resolving the problem


Add the following lines to the /etc/sysctl.conf file:
kernel.shmmni=4096
kernel.shmmax=17179869184
kernel.shmall=8388608
#kernel.sem=<SEMMSL> <SEMMNS> <SEMOPM> <SEMMNI>
kernel.sem=250 256000 32 4096
kernel.msgmni=16384
kernel.msgmax=65536
kernel.msgmnb=65536
Then run the sysctl -p command to apply the settings.
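To confirm that the kernel picked up a value, you can also query an individual setting directly (standard sysctl usage). For example:
sysctl -p                 # reload the settings from /etc/sysctl.conf
sysctl kernel.shmmax      # print the active value of one setting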

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Failure in enabling WebSphere Commerce Server


The WebSphere® Commerce Server failed to be enabled.

Symptoms
In the log file, you see an error message similar to the following:
Failure in loading native library db2jcct2, java.lang.UnsatisfiedLinkError: db2jcct2
(/opt/ibm/db2/V9.5/lib32/libdb2jcct2.so: wrong ELF class: ELFCLASS32): ERROR-CODE=-4472, SQLSTATE=null
at com.ibm.db2.jcc.a.dd.a(dd.java:660)
at com.ibm.db2.jcc.a.dd.a(dd.java:60)
at com.ibm.db2.jcc.a.dd.a(dd.java:94)
at com.ibm.db2.jcc.t2.a.a(a.java:37)
at com.ibm.db2.jcc.t2.T2Configuration.<clinit>(T2Configuration.java:94)
at java.lang.J9VMInternals.initializeImpl(Native Method)
at java.lang.J9VMInternals.initialize(J9VMInternals.java:200)
at com.ibm.db2.jcc.a.o.a(o.java:26)

Resolving the problem


Make sure that the following environment variables point to the correct DB2® libraries.
export PATH=$PATH:/home/db2admin/sqllib/bin
export LD_LIBRARY_PATH=/home/db2admin/sqllib/lib32
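A quick way to check that the 32-bit driver library is present where LD_LIBRARY_PATH points (paths as in the exports above; adjust them for your DB2 instance owner's home directory):
echo $LD_LIBRARY_PATH
ls -l /home/db2admin/sqllib/lib32/libdb2jcct2.so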

IBM Product Master 12.0 Fix Pack 8



Operating Systems: AIX, Linux, and Windows (Workbench only)

WebSphere MQ call failed


WebSphere® MQ call failed with completion code '2'.

Symptoms
com.ibm.mq.MQException: JMSCMQ0001: WebSphere MQ call failed with compcode '2' ('MQCC_FAILED') reason '2035' ('MQRC_NOT_AUTHORIZED'). at
com.ibm.msg.client.wmq.common.internal.Reason.createException(Reason.java:204) ... (11 more) java.lang.NullPointerException

Resolving the problem


Connect by using a WebSphere MQ user that is set as the Channel MCA User ID.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Enabling multiple languages


You can create attributes for each item in several languages in IBM® Product Master and export these attributes to WebSphere® Commerce Server.

By default, ACM supports the following languages:

de_DE
en_US
es_ES
fr_FR
it_IT
ja_JP
ko_KR
pl_PL
pt_BR
ru_RU
zh_CN
zh_TW

Enabling predefined languages


You can enable a language for attributes in IBM Product Master that is available in WebSphere Commerce.
Enabling new languages
You can enable a new language for attributes in IBM Product Master that is not available in WebSphere Commerce. To be able to export attributes in the new
language to WebSphere Commerce Server, you must enable the language in WebSphere Commerce.
Attributes which support multiple languages by default
Not all of the attributes support multiple languages in Advanced Catalog Management. The following list is the set of attributes that support multiple languages:
Restrictions and limitations
There are a few restrictions and limitations to be aware of during multiple language enablement in Advanced Catalog Management (ACM).

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Enabling predefined languages


You can enable a language for attributes in IBM® Product Master that is available in WebSphere® Commerce.

About this task


See Choosing a locale for more information.

Procedure
1. Add the language that you want to enable to your instance from the company properties page in Product Master.
2. Localize the attributes that you want in the specs.
a. Edit the Spec corresponding to the object you want to localize. If localization is required for a catalog entry, edit the Spec named 'Catalog Entry Spec'. If localization
is required for a catalog group, edit the Spec named 'Catalog Entry Categories Spec'.
b. Add the desired locale under 'Selected Locales' from 'Available Locales'.



c. Open the required attribute and select the 'Localized' flag.
d. Save the Spec.
3. Update the required attribute collections so that the localized attributes show in the "TabbedView" used by the Single Edit or Multi Edit screen of a catalog group or
catalog entry.
For Catalog group, add the localized attributes to the following attribute collections:
For Description attributes, update the Manage Category attribute collection.
For SEO properties and SEO URL keyword, update the Category Search Engine Optimization attribute collection.
For Catalog entry, add the localized attributes to the following attribute collections:
For Description attributes, update the Manage Entry attribute collection.
For SEO properties and SEO URL keyword, update the Item Search Engine Optimization attribute collection.
4. Navigate to the $TOP/etc/default/acm/config directory.
5. Open the acm.properties file in a suitable editor.
6. Add the locale and mapped language ID to the list of locales in the acm.properties file (a sketch follows this procedure).
The mapped language ID is a number that must be unique, and the locale must follow the standard locale format of language_COUNTRY.
For example, pt_BR indicates the Portuguese (Brazil) locale.
7. Save the file.
8. Navigate to the $TOP/etc/default/acm/mappings directory.
9. If you are enabling a language for:
Catalog Group, open the CatalogGroup.xsl file in a suitable editor.
Catalog Entry, open the CatalogEntry.xsl file in a suitable editor.
10. To add a language to:
Description properties, add a Description section for the newly enabled language similar to the existing sections, and add the language ID to the id set in the
acm.properties file.
SEO Properties, add a SEOPROPERTIES section for the newly enabled language similar to the existing sections, and add the language ID to the id set in the
acm.properties file.
SEO URL property, add a SEOURL section for the newly enabled language similar to the existing sections, and add the language ID to the id set in the
acm.properties file.
11. Save the file.
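A minimal sketch of a locale entry (for illustration only; the key format and ID value shown here are hypothetical, so mirror the format of the locale entries that already exist in your acm.properties file and choose a language ID that is unique in your setup):
# hypothetical locale-to-language-ID mapping entry
pt_BR=101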

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Enabling new languages


You can enable a new language for attributes in IBM® Product Master that is not available in WebSphere® Commerce. To be able to export attributes in the new language to
WebSphere Commerce Server, you must enable the language in WebSphere Commerce.

Procedure
1. Add the language that you want to enable to your instance from the company properties page in IBM Product Master.
For more information, see Choosing a locale.
2. Localize the attributes that you want in the specs.
3. Navigate to the $TOP/etc/default/acm/config directory.
4. Open the acm.properties file in a suitable editor.
5. Add the locale and mapped language ID to the list of locales in the acm.properties file.
The mapped language ID is a number that must be unique, and the locale must follow the standard locale format of language_COUNTRY.
For example, pt_BR indicates the Portuguese (Brazil) locale.
6. Save the file.
7. Navigate to the $TOP/etc/default/acm/mappings directory.
8. If you are enabling a language for:
Catalog Group, open the CatalogGroup.xsl file in a suitable editor.
Catalog Entry, open the CatalogEntry.xsl file in a suitable editor.
9. To add a language to:
Description properties, add a Description section for the newly enabled language similar to the existing sections, and add the language ID to the id set in the
acm.properties file.
SEO Properties, add a SEOPROPERTIES section for the newly enabled language similar to the existing sections, and add the language ID to the id set in the
acm.properties file.
SEO URL property, add a SEOURL section for the newly enabled language similar to the existing sections, and add the language ID to the id set in the
acm.properties file.
10. Save the file.
11. Update the required attribute collections so that the localized attributes show in the "TabbedView" used by the Single or Multi Edit view of a catalog group or
catalog entry.
For Catalog group, add the localized attributes to the following attribute collections:
For Description attributes, update the Manage Category attribute collection.
For SEO properties and SEO URL keyword, update the Category Search Engine Optimization attribute collection.
For Catalog entry, add the localized attributes to the following attribute collections:
For Description attributes, update the Manage Entry attribute collection.
For SEO properties and SEO URL keyword, update the Item Search Engine Optimization attribute collection.
12. Add the new language to the LANGUAGE table on the WebSphere Commerce side and specify the language ID.
13. In WebSphere Commerce, follow the steps to add a language or currency to a store archive.



IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Attributes which support multiple languages by default


Not all of the attributes support multiple languages in Advanced Catalog Management. The following list is the set of attributes that support multiple languages:

About this task


The Catalog Group supports the following attributes to have multiple languages:

Description attributes:
Name
Short description
Long description
Keyword
Thumbnail
Full image
Search Engine Optimization (SEO) attributes:
URL keyword
Page title
Image alt text
Meta description

Catalog Entry supports the following attributes to have multiple languages:

Description attributes:
Name
Short description
Long description
Keyword
Display to customers
Thumbnail
Full image
Search Engine Optimization (SEO) attributes:
URL keyword
Page title
Image alt text
Meta description

In addition, user-defined attribute dictionary attributes also support language enablement.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Restrictions and limitations


There are a few restrictions and limitations to be aware of during multiple language enablement in Advanced Catalog Management (ACM).

Restrictions and limitations for Catalog entries and Catalog groups:


Run the custom tool Refresh Data prior to localization of Specs and attributes in a new ACM setup.
If Search Engine Optimization (SEO) attributes are to be localized, all of the SEO attributes have to be localized since they work as a single group on the WebSphere®
Commerce side.
Once values are set for SEO Properties, they cannot be removed. They can only be updated with new values.

Restrictions and limitations for Catalog groups:


If you want language enablement, the attribute Name has to be marked localized since it is a mandatory field in WebSphere Commerce for all locales.
If Description attributes are localized, all of the Description attributes have to be localized since they work as a single group on the WebSphere Commerce
side.
SEO URL keyword needs to be unique. If this attribute is localized, the localized SEO URL keyword attributes have to be marked unique manually by the user.
With collaborative MDM, SEO URL keyword cannot be enabled with multiple languages. The collaborative MDM application will throw an exception when such a
catalog group is saved with localized values.

Restrictions and limitations for Catalog entries:


While creating a catalog entry, SEO Property values for only one language can be supplied. If values for more than one language are supplied, an exception SEO
feature not enabled is raised on WebSphere Commerce. To get around this limitation, create the entry with a single language value for SEO attributes and
update the entry with multiple languages later on.
WebSphere Commerce can take only 64 characters in its code attribute for catalog entries. You need to ensure that values in different languages are 64 or fewer
characters in length. For example, the following 63 characters in double-byte languages count as 192 characters:
送り仮名送り仮仮名名送り仮名送り仮仮名名送り仮名送り仮仮名名送り仮名送り仮仮名名送り仮名送り仮仮名名送り仮名送り仮仮名名送り仮

Restrictions on creating a localized enumeration attribute in the attribute dictionary


When a new localized attribute of enumeration type is created, or an existing enumeration attribute is localized, all languages should have an equal number of values
for the enumeration. The values should have a 1-to-1 mapping between languages.
Restrictions on modifying an enumeration value of an attribute in a catalog entry which is localized
When assigning values to a localized enumeration type attribute associated to a catalog entry, each language of the attribute should be assigned the value of the
same enumeration position.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Restrictions on creating a localized enumeration attribute in the attribute dictionary


When a new localized attribute of enumeration type is created, or an existing enumeration attribute is localized, all languages should have an equal number of values for the
enumeration. The values should have a 1-to-1 mapping between languages.

For example, the string enumeration attribute distance is localized in two languages: English and French. The languages have the corresponding values in order:

Distance (English): 1 mile, 2 miles, 3 miles


Distance (French): 1.6 km, 3.2 km, 4.8 km

Note: There is a 1-to-1 mapping between enumeration values of both languages.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Restrictions on modifying an enumeration value of an attribute in a catalog entry which is localized


When assigning values to a localized enumeration type attribute associated with a catalog entry, each language of the attribute should be assigned the value of the same
enumeration position.

For example, the string enumeration attribute Distance, which is localized in two languages (English and French), has values in the following order:

Distance (English): 1 mile, 2 miles, 3 miles


Distance (French): 1.6 km, 3.2 km, 4.8 km
Select: (2 miles, 3.2 km)

The values selected for both languages correspond to each other.


When this catalog entry is exported from Product Master, the entry and the associated attribute show up in WebSphere® Commerce with the respective values for each
language. If the attribute is given values that correspond to different enumeration positions for different languages, and this catalog entry is exported, then for this
catalog entry the value at each selected enumeration position for each language shows up for that attribute in WebSphere Commerce.
For example, select (1 mile, 4.8 km) for the attribute. The values selected for the two languages do not correspond to each other.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Managing catalog data


This asset manages the following three types of catalog objects for WebSphere® Commerce.

Catalog Entry
A catalog entry is merchandise in an online catalog that includes a code, a name, a description, one or more offer prices, images, and other details. There are four
types of catalog entries: products, SKUs, bundles, and kits.
Catalog Entry Category (Catalog Group)
A category groups products and services offered by the store. You can create, find, list, change, and delete categories. You can classify products and SKUs under
different parent categories.
Catalog Entry Attribute
In addition to the default attributes (for example, Code, Name, and so on), catalog entries can include additional attributes defined in attribute dictionary. An
attribute dictionary is a set of attributes and their values that can be reused by multiple products. Changing an attribute in the attribute dictionary changes the
attribute for all products the attribute is applied to in the store catalog. By adding attributes to the attribute dictionary, you simplify the process for keeping attribute
names and values consistent across your site. Maintaining consistency with your attribute names and values improves store search results and product
comparisons.

Note: If you are using a clone operation on a catalog group or catalog entry to create new objects, the Status attribute copies the value present in the original
entry. However, because the cloned entry is not yet exported, you need to manually change the value to Open. On a successful export, the system updates this
attribute with the correct export status.

Managing attribute dictionary


The attribute dictionary hierarchy represents the attribute dictionary in WebSphere Commerce. The attribute dictionary can have multiple attributes and their type
is based on their usage.
Managing catalog groups
Catalogs are Product Master containers that are used to store items. Product Master can have zero or more catalogs; however, an item can only belong to one
catalog.
Managing catalog entries
Catalog entry maintenance is a collaborative process that requires multiple user roles to participate.
Creating calculation codes in WebSphere Commerce
To add calculation codes to catalog entries and catalog groups, you need to create calculation codes in the WebSphere® Commerce environment.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Managing attribute dictionary


The attribute dictionary hierarchy represents the attribute dictionary in WebSphere® Commerce. The attribute dictionary can have multiple attributes and their type is
based on their usage.

The attribute dictionary is modeled in the Attribute Dictionary hierarchy in Product Master, as shown in the following sections.
The following concepts are important when defining an attribute dictionary attribute:

Attribute Group
An Attribute Group represents a set of Product Master attributes combined together as a grouping. They are represented as a tree structure with a parent and
children node.
Attribute Category
An Attribute Category represents an attribute dictionary attribute in WebSphere Commerce. The details of this attribute are defined in the secondary spec
associated with this category.

Note: The details of the attribute are defined in the secondary spec associated with the category that represents the attribute dictionary attribute. Nodes that are removed
from the attribute dictionary secondary spec are not removed from the corresponding attributes on the WebSphere Commerce side. For example, if you remove an
attribute dictionary attribute from one of the secondary specs or remove the spec entirely from the attribute dictionary and export an item that was mapped to that
attribute dictionary category node, the removed attributes are not removed from the WebSphere Commerce side, however, new attributes are added.
The attributes defined in a spec will provide the actual names of the attributes and the types of these attributes will provide the attribute data type. The attribute type
required for usage in WebSphere Commerce, for example descriptive and defining, will come from the Attribute Category.
Note: For defining attributes, you need to define a list of values. On Product Master, this list of values must be of an Enumeration type. For descriptive attributes, you do
not need to define a list of predefined values. Advanced Catalog Management does not allow descriptive attributes with Enumeration type to be saved in Product Master.
Note: The category name is the category path in the attribute dictionary hierarchy, and the name of the spec that contains the attribute is used as the 'attribute
group' for the scope of the attribute. However, the 'attribute group' concept is not supported in the current release.
The attributes are divided into two types, descriptive and defining attributes.

Creating descriptive attributes


You can create descriptive attributes in the IBM® Product Master environment.
Creating predefined descriptive attributes
A descriptive attribute in Product Master of type Enumeration can be saved and exported to WebSphere Commerce using the Attribute Dictionary Export custom
tool. When descriptive attributes of Enumeration type are exported to WebSphere Commerce, they are published as Allowed value attributes.
Creating defining attributes
You can create defining attributes in the IBM Product Master environment.
Setting sequence values for descriptive and defining attributes
You can set sequence values for descriptive and defining attributes and export them to WebSphere Commerce Server where they can be displayed in an ascending
or descending order that is based on their sequence values.
Setting facets for attribute dictionary attributes
You can set facets for attribute dictionary attributes in IBM Product Master and export them to WebSphere Commerce when you export the attributes.
Creating attribute groups
You can create attribute groups in IBM Product Master and export them to WebSphere® Commerce.
Providing different values to code and name of Attribute Dictionary attributes
The code and name of an attribute dictionary attribute can have different values. You can provide different values to these attributes from ACM.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Creating descriptive attributes


You can create descriptive attributes in the IBM® Product Master environment.



About this task
The descriptive attribute represents the attributes that are used in WebSphere® Commerce as freeform attributes. These attributes can be of the following types: String,
Integer, Number, Date, Lookup table, String enum, Number enum, Thumbnail Image URL, and Rich text. You can mark these attributes with more than one occurrence by
modifying the Maximum Occurrence property to be greater than 1. These attributes are defined in the secondary specs that are associated with the attribute catalog
group.

Procedure
1. Create a secondary spec that contains one or more attributes.
2. Modify the Maximum Occurrence property to be greater than 1 for attributes that can have multiple values.
3. Add a subcategory under the catalog group Descriptive Attributes in the hierarchy Attribute Dictionary.
4. Name the new subcategory appropriately such that the name represents the set of descriptive attributes.
For example, Notebook Attributes.
5. Associate the newly created secondary spec as an Item Spec under the Specs tab.
6. Save the catalog group.

Results
Any catalog entry under the Catalog Entry Repository can be mapped to this newly created catalog group and the descriptive attributes from the catalog group are
available under the Descriptive Attributes tab for setting values.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Creating predefined descriptive attributes


A descriptive attribute in Product Master of type Enumeration can be saved and exported to WebSphere® Commerce using the Attribute Dictionary Export custom tool.
When descriptive attributes of Enumeration type are exported to WebSphere Commerce, they are published as Allowed value attributes.

Procedure
1. Add a category under Attribute Dictionary/Descriptive Attributes.
2. Create a secondary spec with an attribute of Enumeration type with the desired attribute values.
3. Click Save. This creates a corresponding item with the same name in the catalog Catalog Attributes, which has the value of the Allowed attribute set to
true.
4. Optional: Click Custom tools > Attribute Dictionary Export to publish to WebSphere Commerce.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Creating defining attributes


You can create defining attributes in the IBM® Product Master environment.

About this task


The defining attribute represents the attributes that are used in WebSphere® Commerce as attributes with predefined values. Defining attributes are single occurrence
attributes and can be of only the Enumeration type. These attributes are defined in the secondary specs that are associated with the attribute catalog group. The catalog
group represents an attribute dictionary attribute.

Procedure
1. Create a secondary spec that contains one or more attributes.
2. Add a subcategory under the catalog group Defining Attributes in the hierarchy Attribute Dictionary.
3. Name the new subcategory appropriately such that the name represents the set of defining attributes.
For example, Notebook Colors.
4. Associate the newly created secondary spec as an Item Spec under the Specs tab.
5. Save the catalog group.

Results
Any catalog entry under the Catalog Entry Repository can be mapped to this newly created catalog group and the defining attributes from the catalog group are available
under the Entity Specific Attributes tab for setting values.

IBM Product Master 12.0 Fix Pack 8



Operating Systems: AIX, Linux, and Windows (Workbench only)

Setting sequence values for descriptive and defining attributes


You can set sequence values for descriptive and defining attributes and export them to WebSphere Commerce Server where they can be displayed in an ascending or
descending order that is based on their sequence values.

Procedure
1. Create a category in Descriptive Attributes or Defining Attributes under the Attribute Dictionary hierarchy.
See Creating descriptive attributes and Creating defining attributes for more information. For each attribute that is created, an item is generated in the Catalog
Attributes catalog under the 'Unassigned' category. The item names are the same as the names of the corresponding attributes.
2. Select an item.
The item details are visible in the right pane.
3. Enter a number in the Sequence field.
4. Save the item.
5. Export the attributes to WebSphere Commerce Server by using the Attribute Dictionary Export.
You can display attributes in an ascending or descending order that is based on the sequence value in the WebSphere Commerce Server environment.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Setting facets for attribute dictionary attributes


You can set facets for attribute dictionary attributes in IBM® Product Master and export them to WebSphere® Commerce when you export the attributes.

When you save an Attribute Dictionary attribute, you can set the following facets of the corresponding catalog attribute item to either true or false:

Searchable
Comparable
Facetable
Merchandisable
Displayable

When you export the attribute dictionary attribute to WebSphere Commerce, these facets are carried over to WebSphere Commerce. If you do not set a value for the
facets, all facets are exported as false by default.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Creating attribute groups


You can create attribute groups in IBM® Product Master and export them to WebSphere® Commerce.

Procedure
1. Create a category in the Catalog Attribute Groups hierarchy.
This hierarchy is the primary hierarchy of Catalog Attributes catalog. The category that you create represents the attribute group.
2. Assign any items that you want from your unassigned category for catalog attributes to the category you created.
These items represent the attributes in the attribute group.
3. Ensure that all the items associated with the attribute group category are also associated with a category in the store hierarchy.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Providing different values to code and name of Attribute Dictionary attributes


The code and name of an attribute dictionary attribute can have different values. You can provide different values to these attributes from ACM.

About this task



By having two separate attributes for a given attribute dictionary attribute, one for the code and one for the name, you can provide and maintain separate values on the
WebSphere Commerce Server.

Procedure
1. Edit the spec named Catalog Attributes Spec.
2. Add a new String type attribute named Display Name with default properties.
3. Save the Spec.
4. Add a new attribute dictionary attribute under Attribute Dictionary hierarchy.
5. Open the entry for the recently added attribute under the catalog named Catalog Attributes and provide a value for the Display Name attribute and save it.
6. If there are attributes already present, populate the Display Name for these attributes as well.
7. Prior to export of attribute dictionary attributes using the custom tool ACM Attribute Dictionary Attribute Export, update the
AttributeDictionaryAttribute.xsl mapping file as shown in the sample.
8. On the WebSphere Commerce Server setup, edit the wc-dataload-object.xml configuration file and add the following lines:

<_config:DataLoader
    className="com.ibm.commerce.foundation.dataload.BusinessObjectLoader">
  <_config:ColumnExclusionList>
    <_config:table name="ATTRDESC" columns="NAME" />
  </_config:ColumnExclusionList>
  <_config:DataReader
      className="com.ibm.commerce.foundation.dataload.datareader.XmlReader">
    <_config:XmlHandler className="${dataload.object.xmlhandler}" />
  </_config:DataReader>
</_config:DataLoader>

What to do next
Exporting attribute dictionary attributes can now set different values for Code and Name for a given attribute dictionary attribute, based on the values provided by the user
on the ACM side.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Managing catalog groups


Catalogs are Product Master containers that are used to store items. Product Master can have zero or more catalogs; however, an item can only belong to one catalog.

A category groups products and services offered by the store. You can create, find, list, change, and delete categories. You can classify products and SKUs under different
parent categories.

Managing catalog groups in a workflow


For maintaining and editing the existing Catalog Group objects which are represented by the categories in the "Catalog Entry Categories" hierarchy, you need to use
the "Catalog Group Maintenance Collaboration Area".
Creating catalog groups in a workflow
To create a new Catalog Group, you can use the "Catalog Group Creation Collaboration Area". This collaboration area is associated with the "Catalog Group Creation
Workflow".
Creating catalog groups with SEO data
You must specify Search Engine Optimization (SEO) data when you create catalog groups. For a Catalog Group, the URL keyword is a mandatory attribute.
Adding calculation codes to catalog groups
You can attach calculation codes that exist in WebSphere® Commerce to catalog groups in Product Master and then export and publish those groups. You cannot
create calculation codes in the Product Master environment.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Managing catalog groups in a workflow


For maintaining and editing the existing Catalog Group objects which are represented by the categories in the "Catalog Entry Categories" hierarchy, you need to use the
"Catalog Group Maintenance Collaboration Area".

About this task


This collaboration area is associated with "Catalog Group Maintenance Workflow" which has the following steps:

Procedure
1. Edit Catalog Group.
The Catalog Group object from the Catalog Entry Categories hierarchy can be checked out to this step. This step can be performed by the Global Admin or Catalog
Manager. These users, in addition to the Content Editor, can edit the catalog group in a way similar to the 'Edit Content' step in the following topic, and then the group
moves to the review step:



2. Review.
The review step allows authorized users, for example the Content Editor, to review the checked-out catalog group; if there is anything wrong, it can be rejected
and sent back to the "Edit Catalog Group" step. Otherwise, it can be approved and sent ahead to the "Approve" step.

3. Approve.
The approve step allows users, for example the Content Editor, to perform a final check and approve the edited catalog group which will be checked back into the
"Catalog Entry Categories" hierarchy. If rejected, it will move back to the "Edit Catalog Group" step.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Creating catalog groups in a workflow


To create a new Catalog Group, you can use the "Catalog Group Creation Collaboration Area". This collaboration area is associated with the "Catalog Group Creation
Workflow".

About this task


The workflow contains the following steps:

Procedure
1. Create a catalog group.
In this step, you can create a new catalog group object. Click Add to add a new catalog group. Click Edit to edit the newly created catalog group, and save it. This
step can be performed by the Global Admin and Catalog Manager. From this step the group moves to the Edit Content step.

2. Edit Content
In this step, the same user interface is used as in the previous step. The users from the previous step, as well as the Content Editor, can edit the newly created group. The
attribute values associated with the newly created group can be edited in this step and saved. The group then moves to the Review step.

3. Review
The review step allows authorized users, such as the Catalog Manager, to review the newly created catalog group; if there is anything wrong, it can be rejected
and sent back to the "Edit Content" step. Otherwise, it can be approved and sent ahead to the "Approve" step.

4. Approve
In this step, the Catalog Manager performs a final check and approves the newly created catalog group which will be checked back into the "Catalog Entry
Categories" hierarchy. If rejected, the group moves back to the "Edit Content" step.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Creating catalog groups with SEO data


You must specify Search Engine Optimization (SEO) data when you create catalog groups. For a Catalog Group, the URL keyword is a mandatory attribute.

Before you begin


Before you can specify SEO data, you must ensure:

The URL keyword is unique across catalog groups. If you specify a value that is already used, the system throws a validation error while saving the catalog group.
The URL keyword value does not contain any of the following characters: [?, _, =, #, /, ., %].

See Enabling multiple languages - Restrictions and limitations for information about restrictions and limitations of using SEO data when multiple languages are enabled.

Procedure
1. Navigate to the Search Engine Optimization tab.
2. Enter details in the URL Keyword, Page title, Image alt text, and Meta description fields.
3. Click Save.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Adding calculation codes to catalog groups



You can attach calculation codes that exist in WebSphere® Commerce to catalog groups in Product Master and then export and publish those groups. You cannot create
calculation codes in the Product Master environment.

About this task


For catalog groups, the primary spec (Catalog Entry Categories Spec) has an attribute that is called Calculation code of Lookup table type. You can set this attribute
when you want to attach a calculation code to a catalog group.
To publish a calculation code, you need to perform the steps of the standard catalog group export or publishing procedure. This procedure takes care of the
calculation code if set on the group.

Procedure
1. Ensure that you have the required calculation code available in the Product Master environment.
For more information, see Creating calculation codes in WebSphere® Commerce.
2. Set the Calculation code attribute of the primary spec of the catalog group.
3. Publish the catalog group.
For more information, see Publishing catalog groups with CalculationCode.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Managing catalog entries


Catalog entry maintenance is a collaborative process that requires multiple user roles to participate.

About this task


You can use the out-of-box single edit or multi-edit screens to update the catalog entries. Each tab on the screen is detailed as follows:

The Manage Entity tab shows all of the core attributes.


The Descriptive Attributes tab shows instances and values of the attributes that are associated with the categories under which this catalog entry is classified.
These attributes can have free form values.
The Entity Specific Attributes tab shows the instances and values of the attributes, which are specific to the type of the entity, for example product, SKU, bundle, or
kit, that is open in the Single Edit screen.
Note: The SKU attributes are considered Defining Attributes and have predefined values.
The Merchandising Associations tab shows the instances and values of the attributes, which specify the association that this entry has with other catalog entries.
These associations can be with entries from same store or other stores.
The Associated Assets tab shows the instances and values of the attributes that refer to external assets associated with the entry such as external images, files,
and URLs.
The References tab shows the instances and values of attributes that have related activities (if any) that the entry refers to. For example, marketing activity
attributes.
The Description Override tab shows the description override spec for each eSite store the entry is listed under, with an option to add the description override
attributes.
The Search Engine Optimization tab shows the SEO properties and URL attribute for the catalog entry.
The Categories tab lists the categories that the entry is mapped to.

Procedure
On the left navigation pane, click the catalog entry for direct editing, or right-click any catalog entry to check the entry out into the Catalog Entry
Maintenance Collaboration Area for collaborative editing.

DescriptionOverride attributes
You can set DescriptionOverride attributes for items in the IBM® Product Master and export these attributes to WebSphere Commerce Server.
Managing catalog entry attributes
In IBM Product Master, specific attributes from catalog entry primary spec are pre-mapped to WebSphere Commerce attributes.
Managing catalog entries in a workflow
For further editing, catalog entries can be checked out to the "Catalog Entry Maintenance Collaboration Area" and edited and reviewed.
Creating catalog entries in a workflow
Catalog entries are stored in the "Catalog Entry Repository" in the Product Master data model. A new entry can be created in the catalog or using the "Catalog Entry
Creation Collaboration Area". You can create the entry in the "Create Catalog Entry" step and edit and review it in subsequent steps.
Catalog entry tabbed view
The catalog entry object in WebSphere Commerce Server is represented by the item object in Product Master. In the out of the box data model, the items/catalog
entries are stored in the catalog “Catalog Entry Repository”.
Searching catalog entries
Users can use the out-of-box search features to find catalog entries and catalog groups.
Generating SKUs
You can generate SKUs in the Advanced Catalog Management environment.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)



DescriptionOverride attributes
You can set DescriptionOverride attributes for items in the IBM® Product Master and export these attributes to WebSphere® Commerce Server.

The description elements for the same entry in WebSphere Commerce Server can differ by the store the entry belongs to. While each entry has a default description, it is
possible to override this default description with descriptions that are specific to each store.

To support this functionality, you can set DescriptionOverride attributes for items in the IBM Product Master environment. When you export items with DescriptionOverride
attributes to WebSphere Commerce Server, the corresponding attributes are populated with the information that you entered in the DescriptionOverride attributes on
the Product Master side.

Implementing Description Override


To implement description override, you need to add additional attributes related to the description override to each store. When an entry is associated with a store,
the description attributes for that particular store can be set for the entry.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Implementing Description Override


To implement description override, you need to add additional attributes related to the description override to each store. When an entry is associated with a store, the
description attributes for that particular store can be set for the entry.

Procedure
1. Create an eSite store under the Catalog Asset Store in the stores Hierarchy.
When the eSite store is created the store spec and the store description override specs are generated as secondary specs automatically.
2. Add the store spec and the store description override spec to the created eSite store.
3. When an entry is mapped to this eSite store, the Description Override tab of that entry shows the store description override spec with an option to add the
description override attributes.

Results
The attributes get added to the Description Override tab when any item is mapped to the store. Based on the store to which the description override is related, the
attributes can be exported to WebSphere® Commerce by using a separate request to the data loader for description override attributes.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Managing catalog entry attributes


In IBM® Product Master, specific attributes from catalog entry primary spec are pre-mapped to WebSphere® Commerce attributes.

Attributes from Product Master are mapped to attributes in WebSphere Commerce so that corresponding values are set on WebSphere Commerce when catalog entries
are exported from Product Master. In the out-of-box mapping, the following attributes from catalog entry primary spec are pre-mapped to WebSphere Commerce
attributes.

General attributes
1. Code
2. Display name
3. General Information/Name
4. General Information/Short description
5. General Information/Long description
6. General Information/Keyword
7. General Information/URL
8. General Information/Manufacturer
9. General Information/Manufacturer part number
10. General Information/Parent
11. General Information/Recurring order item
12. Publishing/For Purchase
13. Publishing/On Special
14. Publishing/Announcement Date
15. Publishing/Withdrawal Date
16. Display/Display To Customers
17. Display/Thumbnail
18. Display/Full image



Note: Other attributes that are added to the primary spec are handled as extended attributes.
Among the general attributes, the following attributes are required:

1. Code
2. Parent (Code) (if any)

Extended attributes
These attributes are not included in the asset; solution developers need to enable them by extending this asset.

Catalog Group specific attributes


These attributes are not included in the asset; solution developers need to enable them by extending this asset.

Attribute dictionary attributes


The "Attribute Dictionary" hierarchy represents the attribute dictionary in WebSphere Commerce. The attribute dictionary can have multiple attributes and their type is
based on their usage. The attributes are divided into two types:

Defining attributes
This category represents the attributes that are used in WebSphere Commerce as attributes with predefined values. These are represented by enumeration
attributes in Product Master. These attributes are defined in the secondary specs that are associated with the attribute category (the category that represents an attribute
dictionary attribute).
Descriptive attributes
This category represents the attributes that are used in WebSphere Commerce as free-form attributes. These can be any attribute type in Product Master,
except enumeration attributes, which are currently reserved for defining attributes. These attributes are defined in the secondary
specs that are associated with the attribute category.

The following concepts are important when defining an attribute dictionary attribute:

Attribute Group:
An attribute group represents a set of Product Master attributes that are combined into a grouping. Attribute groups are represented as a tree structure with parent
and child nodes.
Attribute Category:
Attribute Category represents an attribute dictionary attribute in WebSphere Commerce as explained. The details of this attribute are defined in the secondary spec
that is associated with this category.

As mentioned previously, the details of the attribute are defined in the secondary spec that is associated with the category that represents the attribute dictionary
attribute.
The attributes that are defined in this spec provide the actual attribute names, and their types provide the attribute data types. The attribute usage type that is
required in WebSphere Commerce (descriptive or defining) comes from the attribute category, as explained previously.
Note: For defining attributes, you need to define a list of values; therefore, from the Product Master side, the attribute must be an Enumeration type. For descriptive attributes, you do not
need to define a list of predefined values; however, you can use the Enumeration type from the Product Master side to edit values.
Note: The category name, the category path in the attribute dictionary hierarchy, and the name of the spec that contains the attribute are used as an "attribute group"
for the scope of the attribute.
Note: For the user-defined attributes, that is, descriptive or defining, if you are creating an attribute of type Integer or Number, you need to provide a default value.
This is necessary because when such an attribute is exported to WebSphere Commerce Server, it cannot be accepted by the WebSphere Commerce Server data loader as
a null value.
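For example, if you define a hypothetical defining or descriptive attribute named Package Weight of type Integer, give it a default value such as 0; otherwise, an item that leaves the attribute empty would be exported as a null value, which the data loader rejects. (The attribute name and value here are illustrative only.)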

Stores specific attributes


Store specific attributes are the attributes that are attached to a catalog entry when the entry is added to a store (mapped to a store category in Product Master). These
attributes are defined in the secondary specs, which are associated with the categories that represent stores in "Stores" hierarchy.
When a new category representing a store is created in the "Stores" hierarchy, a secondary spec is automatically created for the store where the name is given as <Store
category name> spec. This spec is constructed by using predefined sub specs for descriptive attributes, merchandising associations, references, and so on.

When a store category is created in the "Stores" hierarchy, go to the specs console. In the secondary specs tab, check for the new spec created with name <Store
category name> spec.

Open the newly created store category, add this newly created spec as a secondary item spec, and save the category. You can see which attributes are added to a store
spec. When an entry is mapped to a store category, they are added to the entry. The various attribute groupings are added to various tabs in the tabbed view for the entry.
For example:

Descriptive attributes > Descriptive attributes tab


Merchandising Associations > Merchandising associations tab
Associated assets > Associated assets tab
Offer price > Manage Entry tab
Remaining groupings for References and Marketing > References tab

These tabs can be seen in the single edit screen, and the attributes from the various subgroupings can be found on them. You can open the entries in the
single edit tab and edit the attributes that are editable.

Adding attribute dictionary-based attributes


You can add attribute dictionary-based attributes of defining or descriptive types.
Setting sequence values for descriptive and defining attributes in catalog entries
You can set sequence values for descriptive and defining attributes in catalog entries in IBM Product Master. After you export the entry to WebSphere® Commerce,
you can display the attributes in an ascending or descending order that is based on the sequence value in the WebSphere Commerce environment for that entry
under the respective tab.
Creating catalog entries with SEO data
You can specify Search Engine Optimization (SEO) data when you create catalog entries. Specifying SEO data is optional for a catalog entry.



Adding calculation codes to catalog entries
You can attach calculation codes that exist in WebSphere® Commerce to catalog entries in IBM Product Master and then export and publish those entries. You
cannot create calculation codes in the Product Master environment.
Creating catalog entries with Description Override in eSite Store
You can create catalog entries with Description Override in the eSite Store by exporting catalog entries from IBM Product Master to WebSphere® Commerce.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Adding attribute dictionary-based attributes


You can add attribute dictionary-based attributes of defining or descriptive types.

Procedure
1. Navigate to the Categories tab.
2. Select the hierarchy Attribute Dictionary under the Hierarchy drop-down.
3. Select the wanted category under this hierarchy that represents the attribute dictionary attribute and add it under mappings.
4. Save the catalog entry.
The selected attribute dictionary attribute is available under either the Descriptive Attributes tab or the Entity Specific attributes tab depending on the type of
attribute.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Setting sequence values for descriptive and defining attributes in catalog entries
You can set sequence values for descriptive and defining attributes in catalog entries in IBM® Product Master. After you export the entry to WebSphere® Commerce, you
can display the attributes in an ascending or descending order that is based on the sequence value in the WebSphere Commerce environment for that entry under the
respective tab.

Procedure
1. Map a descriptive or defining attribute to a catalog entry from the Categories tab.
2. Click Refresh.
3. Based on the type of attribute, go to the Descriptive or the Entity-specific tab.
4. Under the Descriptive or Defining Attribute Sequences, enter a number in the Sequence field.
5. Save the entry.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Creating catalog entries with SEO data


You can specify Search Engine Optimization (SEO) data when you create catalog entries. Specifying SEO data is optional for a catalog entry.

Before you begin


Before you can specify SEO data, you must ensure:

The URL keyword is unique across catalog entries. If you specify a value that is already used, the system throws a validation error while saving the catalog entry.
The URL keyword value does not contain any of the following characters: ?, _, =, #, /, ., %.
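For example, a keyword such as summer-dresses would be acceptable, whereas summer_dresses or summer/dresses would be rejected because they contain restricted characters. (These keyword values are illustrative only.)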

See Enabling multiple languages - Restrictions and limitations for information about restrictions and limitations of using SEO data when multiple languages are enabled.

Procedure
1. Navigate to the Search Engine Optimization tab.
2. Enter details in the URL Keyword, Page title, Image alt text, and Meta description fields.
3. Click Save.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)



Adding calculation codes to catalog entries
You can attach calculation codes that exist in WebSphere® Commerce to catalog entries in IBM® Product Master and then export and publish those entries. You cannot
create calculation codes in the Product Master environment.

About this task


For catalog entries, the primary spec (Catalog Entry Spec) has an attribute that is called Calculation code of Lookup table type. You can set this attribute when you
want to attach a calculation code to a catalog entry.
To publish a calculation code, you need to perform the steps of the standard catalog entry export or publishing procedure. This procedure takes care of the
calculation code if it is set on the entry.

Procedure
1. Ensure that you have the required calculation code available in the Product Master environment.
For more information, see Creating calculation codes in WebSphere® Commerce.
2. Set the Calculation code attribute of the primary spec of the catalog entry.
3. Publish the catalog entry.
For more information, see Publishing catalog entries.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Creating catalog entries with Description Override in eSite Store


You can create catalog entries with Description Override in the eSite Store by exporting catalog entries from IBM® Product Master to WebSphere® Commerce.

About this task


The relationship of the Master Catalog entry to the Sales Catalog is loaded when you export an item without any Description Override attributes. If you have Description
Override attributes then the relationship and the Description Override attributes are exported.

Procedure
1. Create a catalog group in the Master Catalog on the Product Master side.
2. Create a Sales Catalog.
3. Create a catalog group within the Sales Catalog.
4. Create a Sales Catalog on the WebSphere® Commerce side for the eSite store that you plan to use.
You can retrieve the catalog IDs from the STORECAT table in WebSphere® Commerce.
5. Update the lookup table named Catalogs on the Product Master side with the catalog IDs.
The Master Catalog ID must match the WebSphere® Commerce Master Catalog ID for the Catalog Asset Store. The Sales Catalog ID must match the corresponding
Sales Catalog ID on the WebSphere® Commerce side for the eSite Store. This is the Sales Catalog that must be associated with the eSite store.
6. Export catalog groups.
After exporting the catalog groups, you should have the same catalog group structure on both the Product Master and WebSphere® Commerce sides.
7. Create an entry, and map it to the Catalog Asset Store, eSite Store, catalog group under the Master Catalog and the catalog group under the Sales Catalog.
You can perform these actions from the Categories tab.
8. Add some attributes to the Description Override tab.
9. Create a catalog entry export selection where the entries from the catalog group can be exported.
You can choose either the catalog group in the Sales Catalog or in the Master Catalog.
10. Publish the catalog entries with DescriptionOverride to the eSite Store. For more information, see Publishing catalog entries with DescriptionOverride to eSite Store.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Managing catalog entries in a workflow


For further editing, catalog entries can be checked out to the "Catalog Entry Maintenance Collaboration Area" and edited and reviewed.

About this task


For creating new SKUs from existing product-type catalog entries, you can check out the entries to the "Generate SKU" collaboration area. The checkout automatically
generates the SKU for the checked-out product under the same catalog group.
Let's say that your collaboration area is associated with the "Generate SKU" workflow, which has a single automated step that creates a new SKU for the product-type
entry that is checked out. After it is created, the new SKU is checked back in to the "Catalog Entry Repository" under the same catalog entry category or group with the
type as SKU.
Note: Only a product type entry can be checked out to this collaboration area. The checkout can be performed by a Global Admin or Catalog Manager.
The steps for this collaboration area are as follows:

Procedure
1. Edit catalog entry.
This step is performed by the Global Admin or Catalog Manager. You can check out an existing catalog entry to the "Edit Catalog Entry" step. The Content editor can
edit the entry along with the Global Admin and Catalog Manager. After the catalog entry is edited with the required attributes, it can be saved. When it is saved, it
runs the post-save script to validate the edited entry based on the rules that are mentioned in the Mapping section. The entry moves to the "Review" step.

2. Review.
This step is performed by the Catalog Manager or Content viewer where the user reviews the edited entry and approves it. The entry moves to the "Approve" step. If
the entry is rejected, the entry moves back to the "Edit Catalog Entry" step.

3. Approve.
This step is performed by the Catalog Manager or Content viewer where the user performs a final check of the edited entry and approves it. The entry moves to the
"Success" step and is checked back into the Catalog entry repository. If the entry is rejected, the entry moves back to the "Edit Catalog Entry" step.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Creating catalog entries in a workflow


Catalog entries are stored in the "Catalog Entry Repository" in the Product Master data model. A new entry can be created in the catalog or using the "Catalog Entry
Creation Collaboration Area". You can create the entry in the "Create Catalog Entry" step and edit and review it in subsequent steps.

About this task


You can use the out-of-box single edit screen to create a new catalog entry.

Procedure
1. Create a catalog entry.
This step is performed by the Global Admin or Catalog Manager. Select the step and click Add to create a new catalog entry. Once the catalog entry is populated
with the required attributes, it can be saved. When it is saved, it runs the post save script to validate the newly created entry based on the rules mentioned in
Mapping. The entry then moves to the "Edit content" step.

2. Edit content.
This step is performed by the Content Editor in addition to the performers of the previous step. The attribute values associated with the entry can be edited on this
step and saved. When it is saved, it runs the post save script to validate the newly created entry based on the rules mentioned in Mapping. The entry then moves to
the "Review" step.

3. Review.
This step is performed by the Catalog Manager or Content viewer where the user reviews the newly created entry and approves it. The entry then moves to the
"Approve" step. If the entry is rejected, the entry moves back to the "Edit content" step.

4. Approve.
This step is performed by the Catalog Manager or Content viewer where the user performs a final check of the newly created entry and approves it. The entry then
moves to the "Success" step and is checked back into the Catalog entry repository. If the entry is rejected, the entry moves back to the "Edit content" step.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Catalog entry tabbed view


The catalog entry object in WebSphere® Commerce Server is represented by the item object in Product Master. In the out of the box data model, the items/catalog entries
are stored in the catalog “Catalog Entry Repository”.

About this task


The catalog entry object contains the following data:

Procedure
1. Required information
This includes the attributes that are mandatory for a catalog entry in WebSphere Commerce Server. These attributes are seen on the ‘Manage Entry’ tab in the
tabbed view for the entry and include the following:
a. Code



b. Display name
c. General Information/Name
d. General Information/Short description
e. General Information/Long description
f. General Information/Keyword
g. General Information/URL
h. General Information/Manufacturer
i. General Information/Manufacturer part number
j. General Information/Parent
2. Optional information
This includes predefined attributes from the WebSphere Commerce Server data model that can be added to catalog entries in Product Master. Some of these
attributes are already mapped in the out-of-box mapping for catalog entry.

3. Type specific data


Catalog entries can be of the following four types:
a. Product
b. SKU
c. Bundle
d. Kit
These four types are defined by the WebSphere Commerce Server data model and each is represented by a category of the same name in the ‘Catalog
Entry Types’ hierarchy in the Product Master data model. An entry can be associated with only one of these types, by mapping it to the corresponding category.
There are certain attributes associated with each of these types, which can be seen on the ‘Entity Specific Attributes’ tab in the tabbed view.

4. Store specific attributes


These attributes can be associated with a catalog entry by mapping the entry to a store category. This mapping associates the store-specific attributes that are described
in the Objects and classes section with the catalog entry. As explained in that section, the various groupings of store-specific attributes can be seen on different tabs.

5. User defined data through attribute dictionary attributes


These attributes can be associated with a catalog entry by mapping the entry to an attribute dictionary category that represents a user-defined attribute
(descriptive or defining), as described in the Publishing catalog data section. These attributes can be seen on the ‘Entity Specific Attributes’ tab after they are
associated with the entry.

6. Catalog group specific data


In the current data model, the only attribute that is associated with the catalog entry with reference to a catalog group is the parent attribute, which indicates the group
that the catalog entry belongs to. As explained in the Objects and classes section, users can add catalog group-specific attributes through additional secondary specs
that are associated with the categories that represent those groups.

All the types of attributes that are described in this section are associated with a catalog entry either through the primary spec for the catalog entry or by mapping the
entry to categories under various hierarchies. This kind of mapping has the following rules:
a. The catalog entry can be mapped to one and only one type in the ‘Catalog Entry Types’ hierarchy.
b. The catalog entry must be mapped to at least one catalog group that is defined in the ‘Catalog Entry Categories’ hierarchy.
c. The catalog entry must be mapped to at least one of the store categories to associate store-specific attributes with the catalog entry.
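For example, a hypothetical shirt entry could be mapped to the Product type in the ‘Catalog Entry Types’ hierarchy, to the Apparel catalog group in the ‘Catalog Entry Categories’ hierarchy, and to one store category that supplies its store-specific attributes.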

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Searching catalog entries


Users can use the out-of-box search features to find catalog entries and catalog groups.

Procedure
Choose which type of search you want to run.

Catalog Entry Search


In the navigation pane, within a catalog module, right-click and open Item Rich Search. You can create a search query using the Code attribute. This search query
searches for catalog entries under this particular catalog and store and can be saved as a template.
Attribute Dictionary Search
In the navigation pane, within a hierarchy module, right-click and open Hierarchy Rich Search. You can create a search query using the Path attribute. This search
query searches for categories under this particular catalog and store and can be saved as a template.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Generating SKUs
You can generate SKUs in the Advanced Catalog Management environment.

Generating SKUs with product associations


You can generate SKUs with product associations.



Generating SKUs without product associations
You can generate SKUs that do not have any associated or parent product. Typically, SKUs are product-level SKUs, but sometimes you need to create category-level SKUs
that are children of a category rather than children of a product.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Generating SKUs with product associations


You can generate SKUs with product associations.

Procedure
Right-click a given product and select Check out > Generate SKU.
A collaboration area with automated step Generate SKU is used to generate the corresponding SKU.
The generated SKU is displayed along with the product under the same parent catalog group. The code and name of the SKU are generated based on the parent product
code.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Generating SKUs without product associations


You can generate SKUs that do not have any associated or parent product. Typically, SKUs are product-level SKUs, but sometimes you need to create category-level SKUs
that are children of a category rather than children of a product.

About this task


Generating a SKU is similar to creating a product, except that the entry is mapped to the SKU type instead of to a product.

Procedure
1. Right-click a category in the left pane in the Catalog Entry repository.
2. Select Add Item.
3. Create an item.
4. In the categories tab, map it to the SKU category in the Catalog Entry Types hierarchy.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Creating calculation codes in WebSphere Commerce


To add calculation codes to catalog entries and catalog groups, you need to create calculation codes in the WebSphere® Commerce environment.

About this task


If you need to add calculation codes to catalog entries and catalog groups in the IBM® Product Master environment, you need to create the calculation codes in the
WebSphere® Commerce environment and represent them as lookup table entries in the Product Master environment.

Procedure
1. Create calculation codes for any of the following types in the Commerce Accelerator in WebSphere Commerce.
CouponCalculationCode
SurchargeCalculationCode
DiscountCalculationCode
SalesTaxCalculationCode
ShippingCalculationCode
ShippingTaxCalculationCode
ShippingAdjustmentCalculationCode
2. Navigate to the lookup table console in the Product Master data model.
3. Open the calculation code lookup table.
4. Create an entry in the calculation code lookup table for each calculation code that you want to use, and select the same value for the Type attribute for that code as
the one in WebSphere Commerce.



5. Save the lookup table entry.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Publishing catalog data


In this asset, you can use a common integration framework that can be extended for general data export use.

An integration transaction can use a data transformer to retrieve the Product Master objects and prepare the business object data for the data loader to publish to the
target system.

Publishing an attribute dictionary


The custom tool that is used for attribute dictionary attribute export is the ACM Attribute dictionary export, which can be invoked by the Global Administrator or the Catalog
Manager.
Publishing catalog entry categories (Catalog Group)
Ensure you are familiar with the following when publishing catalog entry categories or catalog groups.
Publishing catalog attributes
Publishing catalog entries
Use the master data management custom tool "Catalog Entry Selective Export" for exporting catalog entries.
Publishing attribute groups
To publish attribute groups, you need to export attribute groups to WebSphere Commerce from IBM® Product Master.
Refreshing data
To refresh data, you can use the Refresh Data custom tool. Refreshing data can only be performed by the global administrator after installation.
Objects and classes
Ensure you are familiar with the differences between master data management objects and business objects, and the different types of loaders.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Publishing an attribute dictionary


The custom tool that is used for attribute dictionary attribute export is the ACM Attribute dictionary export, which can be invoked by the Global Administrator or the Catalog
Manager.



Before you begin
Before you run this export, ensure that the lookup table named Attribute Dictionaries has an entry for the attribute dictionary and that the attribute dictionary is
associated with a store in the Stores hierarchy.

About this task


When a user runs this custom tool, ACM gathers all of the attributes that are defined under the Defining Attributes and Descriptive Attributes categories of the Attribute Dictionary
hierarchy whose status is Open. The tool then invokes the exporter code for all of these attributes, which transforms and loads the collected attributes into the attribute
dictionary. The status of each attribute can be checked under the corresponding entry in the catalog named Catalog Attributes.
Selective Export of Attribute Dictionary attributes

Users can selectively export attributes by setting the status to Open for the attribute dictionary attribute entry under the Catalog Attributes catalog. Using this approach,
selective facets of the attribute can be modified. Only attributes whose status is Open are exported to WebSphere® Commerce Server when you use the ACM Attribute
Dictionary Export custom tool.

Procedure
1. Open the lookup table named Attribute Dictionaries and ensure that an entry is present with a Name for the attribute dictionary and the corresponding Id of the
attribute dictionary from WebSphere Commerce Server.
2. Open the store from the Stores hierarchy and ensure that the given attribute dictionary is associated with the correct store.
3. Run the custom tool ACM Attribute dictionary export.
During the export, the tool uses the attribute dictionary ID from the lookup table and the store ID of the store mentioned above.

What to do next
After running the tool, open the entry for the attribute dictionary attribute under the Catalog Attributes catalog and check the Status field to see the status of export. For
the attributes which are successfully exported, the status is set to Exported and Target ID is assigned a value corresponding to the attribute dictionary attribute ID on
WebSphere Commerce Server.

Mapping
This mapping is meant for exporting all of the Attribute Dictionary Attributes defined in Product Master using the Attribute Dictionary Hierarchy to WebSphere
Commerce Server. The mapping is defined in the AttributeDictionaryAttribute.xsl file.
Sample XML data
An example of transformed sample data for attribute dictionary attribute export is as follows:
Export status
After running the tool, you can check the export status.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Mapping
This mapping is meant for exporting all of the Attribute Dictionary Attributes defined in Product Master using the Attribute Dictionary Hierarchy to WebSphere Commerce
Server. The mapping is defined in the AttributeDictionaryAttribute.xsl file.

The transformer object for attribute dictionary attribute transforms the attribute from Product Master to attribute for WebSphere Commerce Server.
The Advanced Catalog Management solution uses the catalog web services exposed by WebSphere Commerce Server to upload the transformed attribute dictionary
attribute to WebSphere Commerce Server.

Here is a sample mapping:

<!-- a simple data mapping: "Category"(string) to -->
<!-- "out:AttributeType"(AttributeTypeCodeType) -->
<xsl:if test="Category">
  <out:AttributeType>
    <!-- variables for custom code -->
    <xsl:variable name="Category" select="Category"/>
    <xsl:value-of select="XMLMappingUtils:getAttributeDictionaryAttributeType($Category)"/>
  </out:AttributeType>
</xsl:if>
The attribute type is based on whether the attribute is classified as descriptive or defining in the attribute dictionary hierarchy.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)



Sample XML data
An example of transformed sample data for attribute dictionary attribute export is as follows:

<_cat:AttributeDictionaryAttribute comparable="false" displayable="true"
    facetable="false" merchandisable="false" searchable="false">
<_cat:AttributeIdentifier>
<_wcf:ExternalIdentifier>
<_wcf:AttributeDictionaryIdentifier>
<_wcf:UniqueID>10001</_wcf:UniqueID>
</_wcf:AttributeDictionaryIdentifier>

<_wcf:Identifier>a1</_wcf:Identifier>

<_wcf:StoreIdentifier>
<_wcf:UniqueID>10001</_wcf:UniqueID>
</_wcf:StoreIdentifier>
</_wcf:ExternalIdentifier>
</_cat:AttributeIdentifier>
<_cat:AttributeType>AssignedValues</_cat:AttributeType>
<_cat:AttributeDataType>String</_cat:AttributeDataType>
<_cat:AttributeDescription language="-1">
<_cat:Name>a1</_cat:Name>
<_cat:Description></_cat:Description>
<_cat:ExtendedData name="Footnote"></_cat:ExtendedData>
<_cat:ExtendedData name="SecondaryDescription"></_cat:ExtendedData>
<_cat:ExtendedData name="DisplayGroupName"></_cat:ExtendedData>
<_cat:ExtendedData name="UnitOfMeasure"></_cat:ExtendedData>
</_cat:AttributeDescription>
</_cat:AttributeDictionaryAttribute>

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Export status
After running the tool, you can check the export status.

You must follow the steps in Publishing an attribute dictionary to see which attributes were exported to WebSphere® Commerce Server.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Publishing catalog entry categories (Catalog Group)


Ensure you are familiar with the following when publishing catalog entry categories or catalog groups.

Publishing a catalog group in a workflow


The method to export a catalog group in the Advanced Catalog Management solution is to use the Catalog Group Export Collaboration Area.
Mapping
The mapping used for catalog groups is CatalogGroup.xsl. This mapping defines the mapping from the attributes associated with a catalog group in Product Master
to the attributes associated with a catalog group on WebSphere Commerce.
XML data for SOAP request
Here is a sample of the transformed catalog group.
Export status
You can view the status of an export operation.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Publishing a catalog group in a workflow


The method to export a catalog group in the Advanced Catalog Management solution is to use the Catalog Group Export Collaboration Area.

About this task



The Catalog Group Export Collaboration area is associated with the Catalog group export workflow. The users and roles which are authorized to use this collaboration area
are the Global Admin and Catalog Manager.
The authorized users can check out the categories representing Catalog Groups from the Catalog Entry Categories hierarchy to the collaboration area.

Multiple sub catalog groups under a given parent catalog group can be exported simultaneously by using the multi-edit authoring screen. Select the multiple catalog groups to be
exported and use Checkout > Catalog Group Export Collaboration Area to export the selected catalog groups at the same time. When using this mode, make sure that the parent
catalog group is exported before the child catalog groups.

Procedure
1. Export Catalog Groups.
The checked-out groups go into this step. This automated step invokes the Catalog group export extension point, which starts the export process.

2. Review Completed Catalog Groups/Review Failed Catalog Groups.


Based on whether the export of the catalog group succeeded or failed, the group moves to one of these steps, where the user can review it and correct any problems
by using the Catalog Group Maintenance Collaboration Area.

Publishing catalog groups with SEO data


SEO data for a catalog group is exported along with other catalog group data, using the collaboration area "Catalog Group Export Collaboration Area".
Publishing catalog groups with CalculationCode
CalculationCode data for a catalog group is exported along with other catalog group data, using the collaboration area "Catalog Group Export Collaboration Area".

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Publishing catalog groups with SEO data


SEO data for a catalog group is exported along with other catalog group data, using the collaboration area "Catalog Group Export Collaboration Area".

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Publishing catalog groups with CalculationCode


CalculationCode data for a catalog group is exported along with other catalog group data, using the collaboration area "Catalog Group Export Collaboration Area".

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Mapping
The mapping used for catalog groups is CatalogGroup.xsl. This mapping defines the mapping from the attributes associated with a catalog group in Product Master to the
attributes associated with a catalog group on WebSphere Commerce.

The transformer object for catalog group transforms the catalog entry category from Product Master to catalog group for WebSphere® Commerce Server.
The Advanced Catalog Management solution uses the catalog web services exposed by WebSphere Commerce Server to upload the transformed catalog group to
WebSphere Commerce Server.

A sample mapping for catalog group attribute is as follows:

<out:CatalogGroupIdentifier>
  <out2:ExternalIdentifier>
    <!-- a simple data mapping:
         "Catalog_Entry_Categories_Spec/Code"(CodeType) to
         "out2:GroupIdentifier"(string) -->
    <xsl:element name="out2:GroupIdentifier">
      <xsl:value-of select="Catalog_Entry_Categories_Spec/Code"/>
    </xsl:element>
  </out2:ExternalIdentifier>
</out:CatalogGroupIdentifier>



IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

XML data for SOAP request


Here is a sample of the transformed catalog group.

<out:CatalogGroup
    xmlns:out="http://www.ibm.com/xmlns/prod/commerce/9/catalog"
    xmlns:xsd="http://www.w3.org/2001/XMLSchema"
    xmlns:out2="http://www.ibm.com/xmlns/prod/commerce/9/foundation"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    topCatalogGroup="true">
  <out:CatalogGroupIdentifier>
    <out2:ExternalIdentifier>
      <out2:GroupIdentifier>Apparel</out2:GroupIdentifier>
    </out2:ExternalIdentifier>
  </out:CatalogGroupIdentifier>
  <out:Description language="-1">
    <out:Name>Apparel</out:Name>
    <out:Thumbnail>http://localhost:7507/suppliers/VT_Living_Lite/images/catalog/apparel/Kids_clothing_sm.jpg</out:Thumbnail>
    <out:FullImage>http://localhost:7507/suppliers/VT_Living_Lite/images/catalog/apparel/en_US/Kids_clothing.jpg</out:FullImage>
    <out:ShortDescription>The latest styles for young kids.</out:ShortDescription>
    <out:LongDescription>The styles are all for kids and with
      the latest fashions.</out:LongDescription>
    <out:Keyword>kids wears, cloths, shirt, pants, tops,
      bottoms, dresses, dress</out:Keyword>
    <out:Attributes name="published">1</out:Attributes>
  </out:Description>
</out:CatalogGroup>

<out:CatalogGroup
    xmlns:out="http://www.ibm.com/xmlns/prod/commerce/9/catalog"
    xmlns:xsd="http://www.w3.org/2001/XMLSchema"
    xmlns:out2="http://www.ibm.com/xmlns/prod/commerce/9/foundation"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    topCatalogGroup="true">
  <out:CatalogGroupIdentifier>
    <out2:ExternalIdentifier>
      <out2:GroupIdentifier>Computers</out2:GroupIdentifier>
    </out2:ExternalIdentifier>
  </out:CatalogGroupIdentifier>
  <out:Description language="-1">
    <out:Name>Computers</out:Name>
    <out:ShortDescription/>
    <out:LongDescription/>
    <out:Keyword/>
    <out:Attributes name="published">1</out:Attributes>
  </out:Description>
</out:CatalogGroup>

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Export status
You can view the status of an export operation.

If the category or catalog group is exported correctly, the "Status" attribute shows "Exported" in the "Review Completed Catalog Group" step, and the catalog
group shows up in WebSphere Commerce Server.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Publishing catalog attributes


Publishing catalog entries with SEO data
SEO data for a catalog entry is exported along with other catalog entry data, using the Master Data Management custom tool "Catalog Entry Selective Export".
Publishing catalog entries with CalculationCode
To publish catalog entries with CalculationCode, you need to follow the standard mechanism of publishing catalog entries. For more information, see Publishing
catalog entries.
Publishing catalog entries with DescriptionOverride to eSite Store
You can publish catalog entries with DescriptionOverride to the eSite store in the WebSphere® Commerce environment.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Publishing catalog entries with SEO data


SEO data for a catalog entry is exported along with other catalog entry data, using the Master Data Management custom tool "Catalog Entry Selective Export".

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Publishing catalog entries with CalculationCode


To publish catalog entries with CalculationCode, you need to follow the standard mechanism of publishing catalog entries. For more information, see Publishing catalog
entries.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Publishing catalog entries with DescriptionOverride to eSite Store


You can publish catalog entries with DescriptionOverride to the eSite store in the WebSphere® Commerce environment.

Procedure
1. Ensure that the catalog entries you want to publish are created correctly. For more information, see Creating catalog entries with Description Override in eSite
Store.
2. Export the entry by using Catalog Asset Store.
You must export from Catalog Asset Store before you export it from eSite store. You can see the entry in the Catalog Asset Store master catalog group after a
successful export.
3. Export the entry by using eSite Store.

Results
You can see the same entry that is mapped in the Sales Catalog group in the eSite store along with the description override attributes specified after a successful export.
There is a small arrow icon that overlaps the entry icon. This icon indicates that the entry originally exists in the Master Catalog, and that it is not a stand-alone entry but
overrides the one in the Master Catalog.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Publishing catalog entries


Use the master data management custom tool "Catalog Entry Selective Export" for exporting catalog entries.

Procedure
1. Click Custom Tools > Catalog Entry Selective Export. A list of all of the predefined export selections displays.
2. Select the ACM Catalog Entry Selective Export menu item. The right pane is refreshed with a list of predefined export selections.
The Manage Selections link allows you to create new selections of entries to be exported or to edit existing ones. You are taken to the Selection Console to define
new selections or edit existing ones. The columns presented in the grid are:
Name – name of the selection
Type – type of the selection; either Dynamic or Static
Object Type – type of the business objects included in the selection, for example, Catalog Entries
3. Select one of the export selections to be exported. Click Export on the content pane to display the Export dialog. In the store drop-down list, select a store where
the entries are to be exported.



The Store drop-down list shows all the stores that are available for export. This drop-down is populated based on the categories present in the ‘Stores’ hierarchy.
The WCS-id attribute for the store category must be set to a non-negative value for that store to be added to the drop-down.
Note: Ensure that the store selected from the drop-down list is installed in the WebSphere® Commerce Server instance that is used for publishing the data. The store
names must match exactly.
4. Click Export to start the export. A dialog will be displayed to notify you that the export has started.
5. Click the click here link. Check the status of the export job in the displayed Scheduled Status window. The name of the export job is "ACM Data Load Report".
Important: If you wish to export a catalog entry that is mapped to an Attribute dictionary attribute (either Defining or Descriptive), then the mapped attribute must
exist as an item in the Catalog Attributes Catalog.

XSL syntax
The mapping used for catalog entry attributes is CatalogEntry.xsl. This mapping defines the mapping from the attributes associated with an IBM® Product
Master item to the attributes associated with WCSCatalogEntry.
Mapped attributes
The transformer for WebSphere Commerce Server catalog entry transforms the IBM Product Master item to WCSCatalogEntry. The ACM solution uses the data
loader feature in WebSphere Commerce Server to upload the business objects to WebSphere Commerce Server.
XML data for data load
Here is a sample of transformed XML data which can be exported to WebSphere Commerce using the data loader.
Data load report
Advanced Catalog Management stores the response file in the document store on Product Master.
Disassociating an attribute in a catalog entry
When a catalog entry that has an Attribute Dictionary attribute associated with a value is exported, the entry and the associated attribute with its value
show up in WebSphere Commerce. When the associated attribute is removed from the catalog entry in Product Master and the entry is re-exported to WebSphere
Commerce, the association in WebSphere Commerce is not removed. If the associated attribute value was removed, remove it from
WebSphere Commerce manually to keep the systems consistent.
Updating an enumeration value of an attribute in a catalog entry
When a catalog entry to which an Attribute Dictionary attribute of enumeration type has been associated is exported from Product Master, the entry and the
associated attribute show up in WebSphere Commerce. When a different value is selected for the attribute in that catalog entry, and the catalog entry is re-
exported, then both the values (from both the exports) show up in WebSphere Commerce.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

XSL syntax
The mapping used for catalog entry attributes is CatalogEntry.xsl. This mapping defines the mapping from the attributes associated with an IBM® Product Master item
to the attributes associated with WCSCatalogEntry.

For example:

<!-- a simple data mapping:
     "Catalog_Entry_Spec/General_information/Short_description"(string)
     to "out:ShortDescription"(string) -->
<xsl:if test="Catalog_Entry_Spec/General_information/Short_description">
  <xsl:element name="out:ShortDescription">
    <xsl:value-of select="Catalog_Entry_Spec/General_information/Short_description"/>
  </xsl:element>
</xsl:if>

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Mapped attributes
The transformer for WebSphere® Commerce Server catalog entry transforms the IBM® Product Master item to WCSCatalogEntry. The ACM solution uses the data loader
feature in WebSphere Commerce Server to upload the business objects to WebSphere Commerce Server.

The following entries are the required attributes for a WebSphere Commerce Server catalog entry that must be included in the mapping. The actual name used in the map, if
different from the attribute name used in the display, is shown in parentheses:

1. Code (PartNumber)
2. Parent (ParentCatalogEntryIdentifier): the code of the parent product of this catalog entry; it is required if the catalog entry is of SKU type.
ParentCatalogEntryIdentifier is a complex element, as shown in the sketch that follows.
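A minimal illustrative sketch of how such a mapping might populate the complex ParentCatalogEntryIdentifier element is shown here. The out2:ExternalIdentifier/out2:PartNumber child structure and the Catalog_Entry_Spec/General_information/Parent path are assumptions inferred from the other samples in this section, not the shipped mapping.

<!-- Illustrative sketch only: populating the complex -->
<!-- "out:ParentCatalogEntryIdentifier" element for a SKU-type entry -->
<xsl:if test="Catalog_Entry_Spec/General_information/Parent">
  <out:ParentCatalogEntryIdentifier>
    <out2:ExternalIdentifier>
      <out2:PartNumber>
        <xsl:value-of select="Catalog_Entry_Spec/General_information/Parent"/>
      </out2:PartNumber>
    </out2:ExternalIdentifier>
  </out:ParentCatalogEntryIdentifier>
</xsl:if>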



In addition, the following attributes are also in the out-of-box mapping:

1. Short description (ShortDescription)


2. Long description (LongDescription)
3. Keyword
4. URL
5. Manufacturer
6. Manufacturer part number (ManufacturerPartNumber)
7. Recurring order item (DISALLOW_REC_ORDER)
8. For purchase (Buyable)
9. On special (OnSpecial)
10. Announcement date (StartDate)
11. Withdrawal date (EndDate)
12. Display to customers (Published)
13. Thumbnail
14. Full image (FullImage)

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

XML data for data load


Here is a sample of transformed XML data which can be exported to WebSphere® Commerce using the data loader.

Note: When the data loader utility is used to load a noun-format XML file and the file contains double-byte characters, the entry gets uploaded to WebSphere Commerce and is seen in
the database, but the double-byte text shows up as question marks (????) in the database and the CMC UI. This error does not occur if the noun-format XML files that contain
double-byte characters are uploaded to WebSphere Commerce by using the web service. To avoid this issue, ensure that you download the applicable patch from WebSphere
Commerce.

<?xml version="1.0" encoding="UTF-8"?>
<CatalogEntries
    xmlns:out="http://www.ibm.com/xmlns/prod/commerce/9/catalog"
    xmlns:xsd="http://www.w3.org/2001/XMLSchema"
    xmlns:out2="http://www.ibm.com/xmlns/prod/commerce/9/foundation"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <out:CatalogEntry catalogEntryTypeCode="ProductBean">
    <out:CatalogEntryIdentifier>
      <out2:ExternalIdentifier ownerID="7000000000000000051">
        <out2:PartNumber>partNum1</out2:PartNumber>
      </out2:ExternalIdentifier>
    </out:CatalogEntryIdentifier>
    <out:Description>
      <out:Name>partNum1</out:Name>
      <out:Thumbnail/>
      <out:FullImage/>
      <out:ShortDescription>Random part</out:ShortDescription>
      <out:LongDescription>Random unit testing</out:LongDescription>
      <out:Keyword/>
    </out:Description>
    <out:CatalogEntryAttributes>
      <out:Attributes displaySequence="1.0" usage="Defining">
        <out:AttributeIdentifier>
          <out2:ExternalIdentifier>
            <out2:Identifier>swatcholor</out2:Identifier>
          </out2:ExternalIdentifier>
        </out:AttributeIdentifier>
        <out:Value identifier="Red"/>
      </out:Attributes>
    </out:CatalogEntryAttributes>
    <out:CatalogEntryAttributes>
      <out:Attributes displaySequence="4.0" usage="Descriptive">
        <out:AttributeIdentifier>
          <out2:ExternalIdentifier>
            <out2:Identifier>grinder</out2:Identifier>
          </out2:ExternalIdentifier>
        </out:AttributeIdentifier>
        <out:AttributeDataType>STRING</out:AttributeDataType>
        <out:ExtendedData name="UnitOfMeasure"/>
        <out:Value>electric</out:Value>
      </out:Attributes>
    </out:CatalogEntryAttributes>
  </out:CatalogEntry>
</CatalogEntries>

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Data load report



Advanced Catalog Management stores the response file in the document store on Product Master.

Go to Collaboration Manager > Document Store. Under the document store home directory, there is a folder that contains the data load reports organized by timestamp. By
default, the path of the folder is /acm. You can customize the path through the mdm.export.response.docstore.folder.path property in the acm.properties file.
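For example, adding the line mdm.export.response.docstore.folder.path=/acm/reports to the acm.properties file would relocate the generated reports; the /acm/reports value here is illustrative only.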
Based on the attributes that are present in a catalog entry, multiple data load reports are generated. If the catalog entries contain description override attributes, the
Description Override Export Status report is generated. If the catalog entries contain merchandising associations, a Catalog Entry Association Export Status report is
generated. If the export contains Kits or Bundles, the KitBundle Export Status report is generated.

The Data Load report has the following structure:

<loadreport>
  <loader name="{Loader Name}">
    {Report Summary}
    {Report Details}
  </loader>
</loadreport>
Report summary section has the following format:

<summary>
  <payload>{catalog Entry List}</payload>
  <committed>{number of catalog entries}</committed>
  <processed>{number of catalog entries}</processed>
  <rolledback>{number of catalog entries}</rolledback>
</summary>
Report detail has the following format:

<details>
  <CatalogEntry>
    <CatalogEntryIdentifier>
      <ExternalIdentifier>
        <Code>{catalog Entry Part Number}</Code>
      </ExternalIdentifier>
    </CatalogEntryIdentifier>
    <status>{Failed|Exported}</status>
  </CatalogEntry>
  ...
</details>
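For illustration, a filled-in report for a single successfully exported entry might look like the following; the loader name and counts are hypothetical, and partNum1 is the part number from the data load sample earlier in this section.

<loadreport>
  <loader name="CatalogEntryLoader">
    <summary>
      <payload>catalogEntries.xml</payload>
      <committed>1</committed>
      <processed>1</processed>
      <rolledback>0</rolledback>
    </summary>
    <details>
      <CatalogEntry>
        <CatalogEntryIdentifier>
          <ExternalIdentifier>
            <Code>partNum1</Code>
          </ExternalIdentifier>
        </CatalogEntryIdentifier>
        <status>Exported</status>
      </CatalogEntry>
    </details>
  </loader>
</loadreport>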

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Disassociating an attribute in a catalog entry


When a catalog entry that has an Attribute Dictionary attribute associated with a value is exported, the entry and the associated attribute with its value show up
in WebSphere® Commerce. When the associated attribute is removed from the catalog entry in Product Master and the entry is re-exported to WebSphere Commerce, the
association in WebSphere Commerce is not removed. If the associated attribute value was removed, remove it from WebSphere Commerce
manually to keep the systems consistent.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Updating an enumeration value of an attribute in a catalog entry


When a catalog entry to which an Attribute Dictionary attribute of enumeration type has been associated is exported from Product Master, the entry and the associated
attribute show up in WebSphere® Commerce. When a different value is selected for the attribute in that catalog entry, and the catalog entry is re-exported, then both the
values (from both the exports) show up in WebSphere Commerce.



IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Publishing attribute groups


To publish attribute groups, you need to export attribute groups to WebSphere® Commerce from IBM® Product Master.

Procedure
1. Ensure that the attribute groups you want to export are created correctly. For more information, see Creating Attribute Groups.
2. Export attribute groups using the same custom tool that is used to export the Attribute Dictionary. For more information, see Publishing an attribute dictionary.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Refreshing data
To refresh data, you can use the Refresh Data custom tool. Refreshing data can only be performed by the global administrator after installation.

Before you begin


Before you begin, perform the following steps:

1. Install ACM with ACM data model loaded.


2. Log out and log back in to the ACM instance.

About this task


The refresh data custom tool refreshes all of the categories in the installed Stores, Catalog Entry Categories, and Attribute Dictionary hierarchies. The global
administrator needs to run it only once, immediately after the data model is imported. Run the refresh data tool before you perform any language enablement
in the collaborative MDM.

Procedure
1. Click Menu > Custom Tools.
2. Select ACM Refresh Data.
Note: Only global administrators have access to the tool.
3. Launch the ACM Refresh Data custom tool. A report displays on the right pane.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Objects and classes


Ensure you are familiar with the differences between master data management objects and business objects, and the different types of loaders.

Objects
Master data management objects
A solution developer can directly use master data management objects to manage the data in the master data management system.



Business objects
A solution developer will need to create the business objects for managing the business data in the target system. Below is the current implementation for
WebSphere Commerce business objects.

Classes
Exporter classes
The exporter classes create the transaction objects for publishing different master data management objects, for example, catalog entries, catalog groups, attribute
dictionary attributes, and so on to WebSphere® Commerce.
There are exporter classes for each of the object types which initializes the loader and transformer objects for the export and publishing.

The export objects are initialized by the exporter classes, which inherit from the GenericCatalogObjectExport class, the root-level class that initiates
the export for any of the objects.

Exporter classes declare the transformer and loader classes for each object type.

Exporters gather the objects to be exported in XML format from master data management and invoke the transformers and loaders.

Transformer classes



There is a transformer class defined for each of the object types. These classes set the mapping files for each of the object types. These mapping files are copied
when a user installs Advanced Catalog Management. The paths for mapping are provided in the acm.properties file.
Transformers transform the XML-format objects from master data management to formats compatible with WebSphere Commerce by using the mapping files.

Loader classes
Loaders are defined by loader classes for each of the object types. These inherit from the AbstractBatchLoader or AbstractWebService loaders. Depending on
this inheritance, objects are loaded by using the data loader or web services.
Loaders load the objects transformed by the transformer classes to WebSphere Commerce by using either data load or web services, based on the object types and
quantity.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Creating custom logger


You can create a custom logger in the ACM environment.

About this task


ACM exports attribute dictionary attributes and catalog groups by using synchronous web services, and exports catalog entries by using the asynchronous data loader
approach.
You can log web service requests that contain the data that is being sent across to WebSphere Commerce Server by using a custom logger. Web service requests for
attribute dictionary attribute export are logged under the AppServer service logs, and catalog group export logs are created under the workflow service logs. You can view
data files for catalog entry export directly in the FTP area of WebSphere Commerce Server.

Procedure
1. Open the log4j2.xml file.
This file is located in the $TOP/etc/default directory.
2. Add a custom appender.
Consider the following code as an example. You must adjust the path to the logs folder.

<RollingFile name="CONTEXT" fileName="/opt/MDM/logs/${svc_name}/ipm.log" append="true"


filePattern="/opt/MDM/logs/${svc_name}/ipm-%d{MM-dd-yyyy}-%i.log">
<PatternLayout>
<Pattern>%d [%t] %-5p %c %x- %m%n</Pattern>
</PatternLayout>
<Policies>
<TimeBasedTriggeringPolicy />
<SizeBasedTriggeringPolicy size="10 MB" />
</Policies>
<DefaultRolloverStrategy max="2" />
</RollingFile>

3. Add a logging category.

<Logger name="com.ibm.ccd.cache" level="debug" additivity="false">


<AppenderRef ref="ACM" />
</Logger>

4. Save the file.

What to do next
Check to ensure that all of the logs that are produced by ACM are available in the acm.log file under the respective service logs folders.
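To confirm the wiring, a minimal log4j2 call against the logging category that is added in step 3 is sufficient. The following is a generic verification sketch, not ACM code:

import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

public class AppenderCheck {
    // Matches the Logger name that is configured in log4j2.xml
    private static final Logger LOG = LogManager.getLogger("com.ibm.ccd.cache");

    public static void main(String[] args) {
        LOG.debug("custom ACM appender smoke test"); // should appear in acm.log
    }
}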

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Development environment setup


Ensure you are familiar with the following when setting up your development environment.

Prerequisites
Ensure you meet the following minimum prerequisites before you begin developing the Advanced Catalog Management solution.
The following applications are required in order to do development with ACM code.

1. Ant
2. Eclipse/RSA (Rational® Software Architect) or Rational Application Developer with XML tools that are installed for working with maps and XSLT files.
3. Java SE Development Kit 1.6 (needs to be installed separately if users are using Eclipse and not RSA/RAD).

The source code can be acquired as follows.

1. Extract the acm.src.zip file. Under this package, users find the zipped archives of each of the source code projects.
2. Extract these projects into separate directories.

The following steps are needed to set up the source code in the IDE.

1. Open a new workspace in the IDE (Eclipse, RSA, or Rational Application Developer).
2. Open the Java™ or Java Platform, Enterprise Edition perspective.
3. Import all the source code projects from the directories that you created in the previous step.
4. After the projects are imported, users can edit the Java, mapping or model code or data by using the IDE.

Building the source code projects


Follow the steps to build the source code projects.

1. Open the edit configuration pane for the build.xml file by going to the acm.build project and right-clicking build.xml.
2. Go to Run As > Ant Build.
3. In the Edit configuration pane, set the -Dmdm.home argument, which is the root directory of the extracted enterprise build of Product Master. This provides a
reference to the Product Master JAR files that are required to build the project.
4. Right-click on the acm.default target and run it with ant to build the entire code.
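If you prefer the command line to the IDE run configuration, an equivalent Ant invocation looks like the following sketch; the mdm.home path is an assumption for your environment:

ant -buildfile acm.build/build.xml -Dmdm.home=/opt/IBM/ProductMaster acm.default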

Deploying
To deploy the Advanced Catalog Management asset, ensure that you have already built the source code projects.
The deploy target copies the updated files to the Advanced Catalog Management installation in the Product Master environment.

1. Run the acm.deploy.dev target.


You need to provide the TOP directory path of the Product Master installation.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Debugging examples
Ensure you are familiar with the following debugging examples.

After the projects are set up in the workspace, the Advanced Catalog Management code can be debugged by using the remote debugger that is attached to the Product
Master appserver and scheduler.
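A typical way to enable such remote debugging (a standard JDWP option, not an ACM-specific setting) is to start the service JVM with the following argument, where the port is arbitrary, and then attach the IDE debugger to that port:

-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=8000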

Case 1: Debugging transformer


The root-level class in the transformer module is BusinessObjectTransformer.java in the integration.common project.
The transformer classes for Catalog Entry, Catalog Group and Attribute Dictionary attributes are derived from the BusinessObjectTransformer.

You can add a breakpoint in the transform() method in BusinessObjectTransformer to debug the transformation of Product Master object to WebSphere®
Commerce Server object.

Case 2: Debugging loader


The root-level classes in the loader module are AbstractBatchLoader.java and AbstractWebServiceLoader.java in the integration.common project.
The loader classes for Catalog Entry, Catalog Group and Attribute Dictionary attributes are derived from the two classes respectively for loading the objects to WebSphere
Commerce Server depending on whether data load or web services are used.

You can add a breakpoint in the load() method in any of the two classes to debug the loading of the transformed objects to WebSphere Commerce Server by data load or
web services.

Case 3: Debugging exporter


The acm.extensions project has the exporter classes for the different type of commerce objects (for example, Catalog Entry, Catalog Group and Attribute Dictionary
attributes). These classes initialize the loader and transformer classes for the different type of objects. Exporter classes can be debugged by adding breakpoints in the
exporter classes in the com.ibm.mdm.acm.exporters package in the acm.extensions project.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Extending Advanced Catalog Management


The Advanced Catalog Management solution provides the infrastructure for publishing data from Product Master to WebSphere® Commerce Server.

For this purpose, the data from Product Master in XML format must be transformed to an XML format that is consumable by the WebSphere Commerce Server data loader
feature. This transformation is achieved by using XSL transformation, which is provided in the mapping files.
The out-of-the-box Advanced Catalog Management solution maps only a few attributes, including some mandatory attributes, as explained in Objects and
classes. If users need to map more attributes and publish them from Product Master to WebSphere Commerce Server, they need to extend the mapping and hence the
solution.

Extending the mapping


To extend the mapping, you need to add your own mappings to the mapping files as mentioned in Publishing catalog data.
The extension would include adding mapping for all the new attributes that users add in a similar pattern to the samples mentioned in Publishing catalog entries. The
specific pattern would depend upon the kind of attributes that are being added.

Extending the common integration framework


To add support for new business objects, developers need to extend the common integration framework for the given objects.
Developers need to create new exporters for the new objects and create new mappings for transforming the Product Master data to the new business objects.

With the new business object data, developers need to create a new business object loader to publish the data to the target system, for example WebSphere Commerce
Server.

Adding custom Java implementation


If the extension involves adding new Java™ code, it is recommended to introduce your own Java classes instead of adding new methods to any out-of-box Java class. This
prevents your custom implementation from being overwritten when you apply patches to Advanced Catalog Management. By introducing new classes to an existing package,
you can avoid changes in the build scripts that are provided out of the box.

Supporting extended attributes


Certain attributes from WebSphere Commerce Server are already part of the primary spec in Product Master and are directly mapped in the out-of-the-box
mapping for Catalog Entry export.
The mapping is a correspondence between the XML design that represents the primary spec for the Catalog Entry Repository in Product Master and the default XML design for
Catalog Entries in WebSphere Commerce Server.
If you want to extend this design to have more attributes and publish them from Product Master to WebSphere Commerce Server, you need to update the Product Master
model and the mapping that transforms them to the format that the data loader accepts.

To publish extended attributes, first add them to the Product Master model by adding the attributes to the primary spec of the catalog
entry repository.
The mapping files under the $TOP/etc/default/acm/mapping directory are used for the transformation from Product Master objects to WebSphere Commerce Server
objects. Each new attribute needs to be mapped; for this, the new mapping needs to be added to the CatalogEntry.xsl file.
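As an illustration, a new mapping is typically one more template in the CatalogEntry.xsl stylesheet. The element names in the following fragment (Warranty on the Product Master side and Attr on the data load side) are hypothetical and must be replaced with the names from your primary spec and from the WebSphere Commerce data load schema:

<!-- Hypothetical mapping fragment: copies the value of a new primary spec
     attribute into the outgoing data load document. -->
<xsl:template match="Warranty">
  <Attr name="warranty">
    <xsl:value-of select="."/>
  </Attr>
</xsl:template>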

Supporting the Catalog Group specific attributes


Catalog entries can be classified in different catalog groups in WebSphere Commerce Server. This is represented in Product Master by mapping the items in Catalog Entry
Repository catalog to the categories in Catalog Entry Categories Hierarchy.
Currently, the Advanced Catalog Management solution does not support the export or publishing of attributes specific to these categories. But users can extend the
solution to add these attributes.

The catalog group-specific attributes can be added by defining them in secondary specs and associating these secondary specs with the categories in the 'Catalog Entry
Categories' hierarchy, which represent these catalog groups.
The mapping for export can be extended by setting the solution up to update the mapping file for catalog entry export (CatalogEntry.xsl) automatically when items are
mapped to the categories and saved. Out of the box, when items are saved, the CatalogEntry.xsl file is extended with mappings for secondary spec attributes. This is done
by extension point code for post save of the catalog entry repository, which can be seen in the updateXSLMapping() method in CatalogEntryRepository.java in the
acm.extensions project. Similar code can be written to add mapping for the attributes from secondary specs; this code extension adds these catalog group-specific
attributes to the mapping when entries are saved under these groups.

Supporting grouping attributes


The out-of-the-box Advanced Catalog Management solution supports single and leaf-node attributes for catalog entry export and attribute dictionary attribute export. You can
extend the solution to support exporting attribute groupings.
In the current solution, there are no grouping attributes in the data model. Grouping attributes are natively supported in Product Master. They can be created in one of the
following two ways:

1. A developer can create grouping attribute in Product Master primary specs and extend the mapping to map this attribute to grouping attribute in WebSphere
Commerce Server.
2. A developer can create a grouping attribute by using the hierarchy structure in attribute dictionary to implement attribute grouping in WebSphere Commerce Server
and use the category with which secondary specs are associated as the attribute grouping. The grouping that is defined by the category contains all the attributes in
the secondary specs that are related to the category.

For both of the options mentioned above, the developer needs to extend the mapping to include the grouping attribute information.
If the developer uses the first option, the grouping information is provided by the grouping node in the Product Master spec.

If the developer uses the second option, the grouping information is provided by the category definition in the Attribute Dictionary.

Supporting the catalog entry in different languages


The out-of-the-box Advanced Catalog Management solution exports catalog entry attributes in the default language, English (US). You need to update the mapping to
support different languages.

In the current implementation, Advanced Catalog Management supports only the en_US locale.
Product Master supports multilingual attributes. To create multilingual attributes, enable localization in Product Master and add the required
languages. Then, you can create localized attributes with different values in different languages. When an attribute is localized, it is treated as a separate attribute for
each enabled language for transformation.

In the current mapping, the language ID defaults to -1 for en_US locale.


For transforming multilingual attributes, the developer needs to extend the mapping so that the same attribute in each language has its own mapping to an attribute in
WebSphere Commerce Server with a different language ID.

Supporting the catalog entry in different containers


Currently, the Product Master data model for Advanced Catalog Management has only one container for items or catalog entries, the 'Catalog Entry Repository'. To
support catalog entries in different containers, the solution needs to be extended by extending the Product Master model.
To support catalog entries in different containers, users need to create new containers in the data model. These containers need to use the same primary and
secondary hierarchies and the same primary spec as the Catalog Entry Repository.
Users can add catalog entries to these new containers.

The ‘Catalog Entry Repository’ container is the default container that is used for catalog entry export. In order to use different containers, users need to set the name of
the container by using the following property in the acm.properties file.
mdm.catalog.entry.repository.catalog.name=Catalog Entry Repository
The acm.properties file is located in the $TOP/etc/default/acm/config directory.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Integrating WebSphere MQ
You can use Product Master with WebSphere® MQ to connect Product Master with enterprise applications to send and receive messages.

Before you begin


For IBM® Product Master functions that have dependencies on WebSphere MQ to work, you need to update the env_settings.ini file. For more information, see Configuring
WebSphere MQ parameters.

About this task


The main benefit of integrating with WebSphere MQ is the enterprise-wide connectivity. Any enterprise application can create the messages for Product Master, and
WebSphere MQ can route these messages.

In some scenarios, an external business process must receive or update the Product Master data. Different organizations within the enterprise can take business decisions
that are based on this data. Therefore, integrating Product Master with a competent message-oriented middleware product such as WebSphere MQ is a good
communication option.

A message is defined as input in CSV, XML, or UDF format that is provided by an external source. These messages can be parsed before they are sent or after you receive
the messages by using the Script API or Java™ API of Product Master that is provided for interacting with WebSphere MQ.

Product Master supports two ways of sending messages by using WebSphere MQ:

WebSphere MQ native message format
These messages can be produced by any language that WebSphere MQ supports.
WebSphere MQ JMS
These messages can be produced only by the JMS-aware messaging APIs that WebSphere MQ provides. These APIs can be the JMS or XMS clients for WebSphere MQ.

IBM WebSphere MQ is bundled with Product Master.

Procedure
1. Create the required WebSphere MQ topology, including queue managers, queues, and channels.
If your WebSphere MQ topology requires the WebSphere MQ Server to be installed on a remote computer, (not on the same computer as Product Master), or if you
are using the Script APIs to integrate with WebSphere MQ, then you must correctly configure the WebSphere MQ for client connectivity and define the relevant
server connection (SVRCONN) channel and listeners.
Note: When you use WebSphere MQ with the Script API, WebSphere MQ client connectivity is used. The Script API provides a reduced set of capabilities and does
not support transactions.
2. When you connect to WebSphere MQ through the Script API, default configuration information must be provided.
a. Configure the following properties in the common.properties file for connecting and controlling the way messages are directed to WebSphere MQ.

mq_port=1414
mq_channel=WPC.SVRCONN
mq_hostname=wpc.ibm.com
mq_queuemanager=WPC_QMGR
mq_username=
mq_password=
mq_inbound_queue=WPC.IN.QUEUE
mq_outbound_queue=WPC.OUT.QUEUE
mq_queue_put_open_options=
mq_message_put_options=
mq_queue_get_open_options=
mq_message_get_options=
mq_use_utf=false
mq_charset=819

b. Set the JMS-specific default settings:

jms_provider=IBM WebSphere MQ
jms_receive_timeout=1000
jms_inbound_queue=WPC.IN.QUEUE
jms_outbound_queue=WPC.OUT.QUEUE

3. If you are developing the Product Master solution with the Java API, then direct access to the WebSphere MQ APIs must be used.
Both client mode and bindings mode connectivity can be used. However, the design of the WebSphere MQ deployment topology is the deciding factor.
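For orientation, the following minimal client-mode sketch uses the IBM MQ classes for Java with the connection values from the common.properties example in step 2; it sends one text message and is not taken from the product code:

import com.ibm.mq.MQMessage;
import com.ibm.mq.MQQueue;
import com.ibm.mq.MQQueueManager;
import com.ibm.mq.constants.CMQC;
import java.util.Hashtable;

public class MqSendSketch {
    public static void main(String[] args) throws Exception {
        // Client-mode connection properties (values from the configuration example)
        Hashtable<String, Object> props = new Hashtable<>();
        props.put(CMQC.HOST_NAME_PROPERTY, "wpc.ibm.com");
        props.put(CMQC.PORT_PROPERTY, 1414);
        props.put(CMQC.CHANNEL_PROPERTY, "WPC.SVRCONN");

        MQQueueManager qmgr = new MQQueueManager("WPC_QMGR", props);
        MQQueue queue = qmgr.accessQueue("WPC.OUT.QUEUE",
                CMQC.MQOO_OUTPUT | CMQC.MQOO_FAIL_IF_QUIESCING);

        MQMessage msg = new MQMessage();
        msg.writeString("<msg><item><sku>123456</sku></item></msg>");
        queue.put(msg);

        queue.close();
        qmgr.disconnect();
    }
}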

Example
Perform the following steps to configure WebSphere MQ on the computer where the WebSphere MQ server is installed:

1. Create and start WebSphere MQ queue manager WPC_QMGR

crtmqm WPC_QMGR
strmqm WPC_QMGR

2. Create the WebSphere MQ server-connection channel WPC.SVRCONN. Define the channel in the runmqsc command prompt that displays:

runmqsc WPC_QMGR
DEF CHL(WPC.SVRCONN) CHLTYPE(SVRCONN)

3. Create the WebSphere MQ local queues. You must create the queues in the runmqsc command prompt.

DEF QL(WPC.IN.QUEUE)
DEF QL(WPC.OUT.QUEUE)
DEF QL(MY.QUEUE)

4. Exit from runmqsc by running the end command.


5. Start the listener on queue manager WPC_QMGR to accept TCP/IP connections.

runmqlsr -m WPC_QMGR -t tcp -p 1415 & //The & symbol at the end makes
//the listener run in the background

The script that is provided is an example of the WebSphere MQ native message format that can be used as a basis to develop solution-specific integration scripts. As in this
example, you can override the default values from the Product Master configuration files.

//setup properties for MQ


var properties = [];
properties["mqHost"] = "test.ibm.com"; //assign the host name of the machine
// where queue manager is running
properties["mqPort"] = "1415"; //The port number of the listener through
// which we can connect to queue manager
properties["mqChannel"] = "WPC.SVRCONN"; //CHANNEL name through which

// we connect to queue manager


properties["mqManager"] = "WPC_QMGR"; //Queue manager name
properties["mqInqueue"] = "WPC.INPUT.QUEUE";
properties["mqOutqueue"] = "WPC.OUTPUT.QUEUE";
properties["myQueue"] = "MY.QUEUE";

MY_MQ_HOST = properties["mqHost"];
MY_MQ_PORT = properties["mqPort"];
MY_MQ_CHANNEL = properties["mqChannel"];
MY_MQ_MGR = properties["mqManager"];
MY_MQ_INBOUND_QUEUE = properties["mqInqueue"];
MY_MQ_OUTBOUND_QUEUE = properties["mqOutqueue"];
MY_MQ_QUEUE = properties["myQueue"]; //queue used by sendMessage() and receiveMsg() below

function sendMessage(Item)
{
//First get the Queue manager
qMgr = mqGetQueueMgr(MY_MQ_HOST, MY_MQ_PORT, MY_MQ_CHANNEL, MY_MQ_MGR);

//Doing a send message

mqSentMsg = qMgr.mqSendTextMsg(Item, MY_MQ_QUEUE, MQ_OPEN_OPTIONS, MQ_PUT_OPTIONS);

textMsg = mqGetTextFromMsg(mqSentMsg);

out.writeln("Sent Message :"+textMsg +"\n\n");


}

function createXMLMsg()
{
var text = ""
+ "<msg>"
+ "<item>"
+ "<sku>123456</sku>"
+ "<quantity>1</quantity>"
+ "<Price>99.99</Price>"
+ "</item>"
+ "</msg>";
return text;
}

function receiveMsg()
{

qMgr = mqGetQueueMgr(MY_MQ_HOST, MY_MQ_PORT, MY_MQ_CHANNEL, MY_MQ_MGR);

//Receive message from MY.QUEUE


mqMsg = qMgr.mqGetReceivedMsg(MY_MQ_QUEUE, MQ_OPEN_OPTIONS, MQ_GET_OPTIONS);

textMsg = mqGetTextFromMsg(mqMsg);
out.writeln("\n\nReceived Message : "+textMsg +"\n\n");
}

Msg = createXMLMsg();
sendMessage(Msg);
receiveMsg();

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Integrating scheduler applications


You can integrate scheduler applications with Product Master so that the scheduler applications can schedule jobs to run from the command line.

Product Master provides a template shell script that you modify to define jobs that you want to schedule to run in various scheduler applications.

To allow a scheduler application access to run jobs:

1. Identify the Product Master jobs that are scheduled by an external scheduler. For example, you might have several Product Master status reports that must be
produced as part of a monthly review. Or you might automate running a Product Master import as part of an external scheduled delta refresh into Product Master.
2. Enable access to the Product Master code.
To schedule a Product Master job, the scheduler must be authorized to the local Product Master code. In most cases, you must set up a scheduler-specific UNIX
user that has the same permissions and authorizations as the Product Master user. In addition, this user profile might need to set up the correct Product Master
environment.

3. Create the scripts. See Creating the scripts for external scheduling.
4. Run the job. You can run the job from a command line, or if you use Tivoli Workload Scheduler, you can use the Tivoli Workload Scheduler user interface to schedule
and run the job. For more information, see the following sections.

Running an external scheduler job from the command line


Before you can run an external scheduler job from the command-line interface:

Enable access to the product code.
Create the scripts for external scheduling. See Creating the scripts for external scheduling.

From the command line, run the shell script for the job that you want, either by name or by full path. In these examples, the run_job_feed1.sh script is run:

./run_job_feed1.sh
/usr/IBM/mdmpim/schedulejobs/run_job_feed1.sh
Running a job from the Tivoli Workload Scheduler interface
For more information, see Running a job from the Tivoli Workload Scheduler interface.

For information about how to work with jobs inside of Product Master, see Creating and scheduling jobs.

Creating the scripts for external scheduling


To integrate with an external scheduler, such as Tivoli Workload Scheduler, you must create a new product job-specific shell script.
Running a job from the Tivoli Workload Scheduler interface
You can use the Tivoli Workload Scheduler user interface to trigger a product import or export job. The user interface uses a shell script to define which job to run or
schedule.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Creating the scripts for external scheduling


To integrate with an external scheduler, such as Tivoli® Workload Scheduler, you must create a new product job-specific shell script.

About this task


You use the template shell script run_job_template.sh in the $TOP/bin/ folder in the environment where the scheduled jobs run. Each job requires its own shell script
that you modify from the run_job_template.sh template script. For example, if you have three jobs, you might create the following three scripts by modifying the
run_job_template.sh script:
Table 1. Example of jobs and associated shell scripts

Name of job   Associated shell script
Feed 1        run_job_feed1.sh
Feed 2        run_job_feed2.sh
DailyFeed3    run_job_dailyfeed3.sh

Procedure
1. Copy the run_job_template.sh shell script to another file.
For an example, see the previous table.
2. Modify the new shell script.
The file contains:

# Set the IBM Product Master installation directory here and uncomment the line
#export TOP=<Path to IBM Product Master installation home directory> # E.g. /usr/appinstalls/mdmpim60

# Set the job related variables below as needed, uncomment the lines and do not modify anything else after this
# CCD_JOB_NAME=<Job Name> # [Required]
# CCD_JOB_TYPE=<Job Type> # [Required, Valid values are import|export|report]
# CCD_COMPANY_CODE=<Company Code> # [Optional, Default Value is trigo]
# CCD_USERNAME=<User Name> # [Optional, Default Value is Admin]
# CCD_DEBUG=<Debug on or off> # [Optional, Default Value is off]

You might, for example, change the values for your job to something like the following example:

# Set the IBM Product Master installation directory here and uncomment the line
export TOP=/usr/IBM/mdmpim

# Set the job related variables below as needed, uncomment the lines and do not modify anything else after this
CCD_JOB_NAME=feed1 # [Required]
CCD_JOB_TYPE=import # [Required, Valid values are import|export|report]
CCD_COMPANY_CODE=test # [Optional, Default Value is trigo]
CCD_USERNAME=m # [Optional, Default Value is Admin]
CCD_DEBUG=on # [Optional, Default Value is off]

3. Ensure that the new shell script is executable.
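For example, assuming the script is in the current directory:

chmod +x run_job_feed1.sh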


4. Repeat this process until you have a script for each job that you plan to schedule with an external scheduler.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Running a job from the Tivoli Workload Scheduler interface


You can use the Tivoli® Workload Scheduler user interface to trigger a product import or export job. The user interface uses a shell script to define which job to run or
schedule.

Before you begin


Before you can run a job from the Tivoli Workload Scheduler interface:

Enable access to the product code.


Create the scripts for external scheduling. See Creating the scripts for external scheduling.
Ensure that the appropriate software is installed as follows:
The application server must have the following software installed: Tivoli Workload Scheduler 8.2 or later, Tivoli Management Framework 4.1 or later, and
Tivoli Management Framework Language support 4.1 or later.
The computer that you use to run or schedule the jobs must have Tivoli Management Framework 4.1 or later installed.

Procedure
1. Create a task to define the job.
Creating a task defines the host that is to be used to run the scheduled job and the path to the shell script file for the product job.
a. Open the Tivoli Desktop and, in the Management Environment fields, specify information for the application server that contains the software for the
scheduler feature.
b. Create a task library, then create a task in the task library. Give the task a name.
For example: Task for Feed 1.
c. Edit the task and select the role that is required to run the task.
d. Select the platform where the task is to be run and enter the information to specify the location of the shell script run_job_template.sh.
e. Save the edits to the new task.
After a task is created in the task library, it can be run manually or at a scheduled time. The task, when run, uses its defined host and the run_job_template.sh
script file to start a product import or export job.
2. Run or schedule the task.
Option Description
Run the task
From the task library, double-click the task. Specify the options that you want and click Execute.
Schedule the task
a. From the task library, drag the task and drop it on the scheduler.
b. Specify a label for the job.
c. Specify the settings on the Add Scheduled Job page as needed for your requirements. For example, you might schedule the job to run
indefinitely or you might schedule it to run at regular time intervals. To schedule a job so that it runs three times at an interval of every 60 minutes,
select Repeat the Job and type 3, then type 60 for the minutes.
Tip: In such a scenario, it is important to make sure that the current time matches the exact server time for the Time Zone that is set.
d. Specify any other settings, such as email addresses or groups to notify and logs to use.
e. Click Schedule Job.
3. Optional: Check the status of a job.
On the Desktop page, double-click Notices, then select a group and click Open.
Tip: You can also log in to the product interface to check the status of a job.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Integrating connectors
You can configure various connectors with the IBM® Product Master application.

The following connectors can be implemented with the IBM Product Master application:

Adobe InDesign connector


IBM Product Master Adobe InDesign connector supports direct catalog to print capability. The connector uses the inbuilt Digital Asset Management module in the
Product Master for asset management and comes bundled with sample data model for publication. You can now generate marketing content by using Adobe
InDesign with product specification from the Product Master.
Amazon Marketplace Web Service connector
IBM Product Master Amazon Marketplace Web Service (Amazon MWS) connector enables high levels of selling automation, which can help you grow and scale your
business in the Amazon stores. With the help of the Amazon marketplace add-on, you can connect to the Amazon marketplace store and synchronize the products. You
can export products from the IBM Product Master to the Amazon marketplace store in a seamless way without much configuration.

eBay merchant center connector


With the IBM Product Master eBay Commerce Network Merchant Center connector, you can create a product listing on the eBay marketplace by using the Product Master
platform. The integration allows you to easily manage thousands of products. You can connect to your eBay store by using multiple credentials, map categories and
product attributes, and accomplish much more by using the multichannel Product Information Management (PIM) solution integration. You can provide eBay
specifications, variations, refund policy, and shipping details for the exported products.

Google Merchant Center connector


The IBM Product Master Google Merchant Center (GMC) connector helps make your product information available to shoppers on Google. Everything about your
products is available to shoppers when they search on a Google property. Product Master provides integration with the Google Merchant Center (GMC). You can
manage your Google Shopping product listings from the Product Master. You can export complete product catalog information to Google Shopping, including
product categories and product variations. This works with simple and collection types of products.

JDE connector
IBM Product Master JDE connector is an upstream connector for importing data from the JD Edwards module into the Product Master operational catalogs. Item
Master and Supplier Master are the two components that are supported by the connector.

Magento2 connector
IBM Product Master Magento2 connector is a downstream connector for publishing the Product Master product items to the Magento2 e-commerce platform.
SAP connector
IBM Product Master SAP connector is an upstream connector for importing data from the SAP Material Management module into the Product Master operational
catalogs. Material and Vendor are the two modules that are supported by the connector.

The setup and configuration of any connector basically involves the following tasks:

Installing the connector


Updating properties file
Configuring IBM Product Master extension
Importing data models
Importing operational catalog data (JD Edwards connector)
Verifying end-to-end flow
Troubleshooting and common errors

Before you start any connector, update the spring.main.web-application-type=none attribute in the $TOP_CONNECTOR/connector/mdmce-<connector_name>-
connector/conf/application.properties file.
To manually stop any connector, use the following command:

kill -9 <PID>

To get the PID, use the following command:

ps -ef | grep <connector_name>

Adobe InDesign connector


You can configure Adobe InDesign connector with the Product Master application.
Amazon Marketplace Web Service connector
You can configure Amazon Marketplace Web Service (Amazon MWS) connector with the Product Master application.

eBay Commerce Network Merchant Center connector
You can configure eBay Commerce Network Merchant Center connector with the Product Master application.
Google Merchant Center connector
You can configure Google Merchant Center (GMC) connector with the Product Master application.
JD Edwards connector
You can configure JD Edwards (JDE) connector with the Product Master application.
Magento2 connector
You can configure Magento2 connector with the Product Master application.
SAP connector
You can configure SAP connector with the Product Master application.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Adobe InDesign connector


You can configure Adobe InDesign connector with the Product Master application.

With Adobe InDesign connector, Product Master supports direct catalog to print capability. Adobe InDesign connector leverages the inbuilt Digital Asset Management
module of the Product Master for asset management and comes bundled with sample data model for publication. Using the product specification from the Product Master,
you can generate marketing content with the Adobe InDesign.

Working with the Adobe InDesign connector


Adobe InDesign connector provides a simple "Publish To Adobe InDesign" publication workflow.

Publish To Adobe InDesign


After the items fulfill the readiness criteria for publication, check out the items to the "Publish To Adobe InDesign" collaboration area. Select items and click Publish
to send the items for publishing. On successful publication, items get checked in to the catalog with the value of the Publication Details attribute set to "Success". For new
items, the connector reads the XML file from the shared location, creates a consolidated request for publication, and sends the request to Adobe InDesign. For
existing item update requests, the connector reads the XML file and sends the requests one by one to Adobe InDesign.
Publish to endpoint
"Publish to endpoint" is a manual step that holds all the new item which are ready for the publication in the Adobe InDesign. The publication process creates a CSV
file for each item, these are consolidated and copied to a shared location along with assets. The connector comes bundled with a sample JSX script (Adobe
InDesign Script) which reads the CSV and assets from shared location. After the data is available on the shared location, run the JSX script from Adobe InDesign
tool to generate the print ready version of the INDD (InDesign Document) product specification template. On successful publication, items get checked-in to the
catalog with the value of the Publication Details attribute "Success".
Fix Failed Items
If there are any errors in processing, errors are set as comments and failed items move to "Fix Failed Items" step. The errors can be seen in the History tab on
single-edit page. Publication Status is set as "Failed", the Publication Date is set with the current date, and the Error attribute is set with the error message for such
items.
Rework Step
If the items are not ready for publishing, then click Rework to move the items to the "Rework Step", where they can be worked on and sent back to the catalog. Such
items can be checked out again for publishing.

Generating print-ready templates in Adobe InDesign


Proceed as follows to generate print-ready templates:

1. Start Adobe InDesign and browse to File > Open > Select the .indd template.
2. To label the script, browse to Adobe tool > Windows > Utilities > Script. Map the frames present in the template with the CSV headers to populate the data in the
respective frames.
3. Go to Script tab in the utility tool and double-click the sample AdobeInDesignDataPush.jsx script to run.

The template is generated with product data and images and is ready to be shared for print.

Installing the connector


Proceed as follows to install the connector:

1. Browse to the $TOP_CONNECTOR/connectors folder.


2. Locate and extract the ipm_connectors_12.0.0.X_YYYYDDMMZZZ.tar.gz base file.
3. Set the TOP_CONNECTOR environment variable to point to the extracted folder:
export TOP_CONNECTOR=/opt/connectors/mdmce-connectors
4. Enable the connector in the $TOP_CONNECTOR/conf/connector_settings.ini file.
5. Run the installer script.

cd $TOP_CONNECTOR
cd bin
./install.sh

6. Update the application.properties file located in the following folder:


$TOP_CONNECTOR/connectors/mdmce_adobe/conf

For more information, see Updating properties file.


7. Start the connector from the $TOP_CONNECTOR/connectors/bin folder:

./start_connectors.sh -all or ./start_connectors.sh adobe

Stop the connector through the following command:

./stop_connectors.sh -all or ./stop_connectors.sh adobe

8. Check status of the connector through the following command:

./status_connectors.sh -all or ./status_connectors.sh adobe

9. Copy the JSX sample script from $TOP_CONNECTOR/connectors/mdmce-adobe-connector/script/AdobeInDesignDataPush.jsx to the Adobe InDesign scripts folder, for example:
C:\Adobe\Adobe InDesign CC 2018\Scripts\Scripts Panel\
10. Update the following properties in the AdobeInDesignDataPush.jsx script.
theFolder - The folder where the connector pushes the data.
curFile - The folder that contains the current INDD file.
imagePath - The folder where the connector pushes the image data.
doneFolderPath - The folder where the script moves the CSV file.
imageDonePath - The folder where the script moves the image file.

Updating properties file


You need to update the following parameters in the application.properties file.

Application port configuration

server.port=<port>

MDM administrator configuration

mdm.company.user=Admin
mdm.company.password=<password>
mdm.company.company=<company>

MDM Server configuration

mdm.server.topDir=/opt/MDM116
mdm.server.etcDir=/opt/MDM116/etc
mdm.server.classPath=/opt/MDM116/jars/ccd_svr.jar

Catalog and spec map name

export.base.specmap.name=CEToAdobeMap
export.source.catalog.name=Variant Catalog

Database-related parameters

app.datasource.url=jdbc:db2://<hostname>:<port>/PIMDB
app.datasource.username=<username>
app.datasource.password=<password>

Hazelcast client and WebSphere® MQ configuration

hazelcast_group_name=mdmce-hazelcast-instance
hazelcast_password=mdmce-hazelcast
hazelcast_network_ip_address=localhost:5702

mq.groupName=mdmce-hazelcast-instance
mq.password=mdmce-hazelcast
mq.networkIpAddress=localhost:5702

FTP details

ftp.host=<hostname>
ftp.user=<username>

FTP Base64-encoded password

ftp.password=<password>

FTP port

ftp.port=22

FTP directory path or folder name

ftp.directory=<directory>
ftp.sourcelocation=/opt/connectors/mdmce-connectors/connectors/mdmce-adobe-connector/adobetemp/consolidated
ftp.tempfilelocation=/opt/connectors/mdmce-connectors/connectors/mdmce-adobe-connector/adobetemp

Get Asset content API configuration

asset.loginURL=http://<hostname>:<port>/api/v1/login
asset.logoutURL=http://<hostname>:<port>/api/v1/logout
asset.getAssetsURL=http://<hostname>:<port>/api/v1/dam/assets/content/bytes/?itemIds=

Importing data models


Proceed as follows to import base data model and data model:

1. Log in to the Admin UI.


2. Import the connectors_basemodel.zip file located in the following folder:

mdmce_connectors/datamodel

3. Import the connectors_adobe_datamodel.zip file located in the following folder:


mdmce_connectors/connectors/mdmce_adobe/datamodel

The data model mainly consists of a "Publish To Adobe InDesign" workflow, a sample "Variants Catalog", and a "spec map" for mapping the catalog attributes to the CSV file
headers, which are read by the Adobe InDesign JSX scripts. Images can be associated with items either through a Digital Asset attribute or simple Image URL or Thumbnail
Image URL attributes. The CSV generated by the connector for item data has @image as the header for digital assets and @imageHeaderURL as the header for
imageURL-type attributes. CSV headers and data are separated with a vertical bar ("|").

Sample CSV file

Name|Description|@image#0|@imageHeaderURL#0
TestItem|Mobile.jpg|@imageFromURL#20190130161140493


Installing and configuring Hazelcast


For more information, see Installing Hazelcast IMDG.

Configuring IBM Product Master extension


Add $TOP_CONNECTORS/libs/connectors-ext.jar to the Product Master class path by following these steps:

1. Add an entry for the JAR file in the $TOP/bin/classpath/jars-custom.txt file.
2. Run the following commands to update the runtime class path:

cd $TOP/bin
./updateRtClasspath.sh

You can either use image or image URL attributes for assets or enable Digital Asset Management feature for the Persona-based UI.

Troubleshooting
The logs for the connector are created in the $TOP_CONNECTOR/logs folder.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Amazon Marketplace Web Service connector


You can configure Amazon Marketplace Web Service (Amazon MWS) connector with the Product Master application.

Integration of Product Master with Amazon MWS enables a high level of selling automation, which can help you grow and scale your business in the Amazon stores. With the
help of this marketplace add-on, you can connect to the Amazon marketplace store and synchronize the products. You can also export products from the Product Master
to the Amazon marketplace store in a seamless way.

Working with the Amazon MWS connector


Amazon MWS connector provides a simple "Publish To Amazon MWS" publication workflow.

Publish To Amazon Marketplace


After the items fulfill the readiness criteria for publication, check out the items to the "Publish To Amazon Marketplace" collaboration area. While processing the
items for publishing, the item information is written in an XML request. Each batch has its own consolidated XML request that contains the item information. The XML
request is available at the location that is defined in the ftp.tempfilelocation property. These temporary requests are then consolidated and stored at the location
defined in the ftp.sourcelocation property. The file is picked up, copied to the .done folder, and sent to Amazon Marketplace for processing. On successful publication, items
get checked in to the catalog with the value of the Publication Details attribute set to "Success".
Publish to endpoint
"Publish to endpoint" is a manual step that holds all the new items that are ready for the publication in the Amazon Marketplace.
Fix Failed Items
If there are any errors in processing, errors are set as comments and failed items move to "Fix Failed Items" step. The errors can be seen in the History tab on
single-edit page. Publication Status is set as "Failed", the Publication Date is set with the current date, and the Error attribute is set with the error message for such
items.
Rework Step
If the items are not ready for publishing, then click Rework to move the items to the "Rework Step", where they can be worked on and sent back to the catalog. Such
items can be checked out again for publishing.

Installing the connector

Proceed as follows to install the connector:

1. Browse to the $TOP_CONNECTOR/connectors folder.


2. Locate and extract the ipm_connectors_12.0.0.X_YYYYDDMMZZZ.tar.gz base file.
3. Set the TOP_CONNECTOR environment variable to point to the extracted folder:
export TOP_CONNECTOR=/opt/connectors/mdmce-connectors
4. Enable the connector in the $TOP_CONNECTOR/conf/connector_settings.ini file.
5. Run the installer script.

cd $TOP_CONNECTOR
cd bin
./install.sh

6. Update the application.properties file located in the following folder:


$TOP_CONNECTOR/connectors/mdmce_amazon/conf

For more information, see Updating properties file.


7. Start the connector from the $TOP_CONNECTOR/connectors/bin folder:

./start_connectors.sh -all or ./start_connectors.sh amazon

Stop the connector through the following command:

./stop_connectors.sh -all or ./stop_connectors.sh amazon

8. Check status of the connector through the following command:

./status_connectors.sh -all or ./status_connectors.sh amazon

Updating properties file


You need to update the following parameters in the application.properties file.

Application port configuration

server.port=<port>

MDM administrator configuration

mdm.company.user=Admin
mdm.company.password=<password>
mdm.company.company=<company>

MDM Server configuration

mdm.server.topDir=/opt/MDM116
mdm.server.etcDir=/opt/MDM116/etc
mdm.server.classPath=/opt/MDM116/jars/ccd_svr.jar

Catalog and spec map name

export.base.specmap.name=VariantToMWSMap
export.base.imagemap.name=VariantToMWSImageMap
export.source.catalog.name=Variants
amazon.hierarchy.name=Amazon Categories
category.to.map.lookup.table.name=CategoryToSpecMap
lookup.attribute.path=Lookup Table Import Spec/Value

Database-related parameters

app.datasource.url=jdbc:db2://<hostname>:<port>/PIMDB
app.datasource.username=<username>
app.datasource.password=<password>

Hazelcast client and WebSphere MQ configuration

hazelcast_group_name=mdmce-hazelcast-instance
hazelcast_password=mdmce-hazelcast
hazelcast_network_ip_address=localhost:5702

mq.groupName=mdmce-hazelcast-instance
mq.password=mdmce-hazelcast
mq.networkIpAddress=localhost:5702

FTP details

ftp.host=<hostname>
ftp.user=<username>

FTP directory path or folder name

ftp.sourcelocation=/opt/connectors/mdmce-connectors/connectors/mdmce-amazon-connector/mwstemp/consolidated
ftp.imagelocation=/opt/connectors/mdmce-connectors/connectors/mdmce-amazon-connector/mwstemp/consolidatedimages
ftp.tempfilelocation=/opt/connectors/mdmce-connectors/connectors/mdmce-amazon-connector/mwstemp

Amazon MWS configuration

amazon.auth.accesskeyid=<access-id>
amazon.auth.secretaccesskey=<secret-access-key>
amazon.mws.appname=<app-name>
amazon.mws.appversion=<app-version>
amazon.mws.serviceurl=<service-url>
amazon.mws.merchantid=<merchant-id>
amazon.mws.sellerdevauthtoken=<auth-token>
amazon.mws.marketplace=<market-place-code>
amazon.mws.feedtype=_post_product_data_
amazon.mws.imagefeedtype=_post_product_image_data_
amazon.mws.purgeandreplace=false

Importing data models


Proceed as follows to import base data model and data model:

1. Log in to the Admin UI.


2. Import the connectors_basemodel.zip file located in the following folder:
mdmce_connectors/datamodel

3. Import the connectors_amazon_datamodel.zip file located in the following folder:


mdmce_connectors/connectors/mdmce_amazon/datamodel

Installing and configuring Hazelcast


For more information, see Installing Hazelcast IMDG.

Configuring IBM Product Master extension


Add $TOP_CONNECTORS/libs/connectors-ext.jar to the Product Master class path by following these steps:

1. Add an entry for the JAR file in the $TOP/bin/classpath/jars-custom.txt file.
2. Run the following commands to update the runtime class path:

cd $TOP/bin
./updateRtClasspath.sh

You can either use image or image URL attributes for assets or enable Digital Asset Management feature for the Persona-based UI.

Troubleshooting
The logs for the connector are created in the $TOP_CONNECTOR/logs folder.

Data-specific issues can be seen in the Amazon MWS responses. Amazon MWS is strict about using real UPCs, ISBNs, EANs, and so on; use real items, such as books or
phones, when creating data. Sellers have their own registered UPCs for their products. Check the XSD files that Amazon MWS provides for each of these categories to look
for valid values of the different attributes present in the different categories.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

eBay Commerce Network Merchant Center connector


You can configure eBay Commerce Network Merchant Center connector with the Product Master application.

With the eBay Commerce Network Merchant Center connector, you can create a product listing on the eBay marketplace by using the Product Master platform. The eBay-
Product Master integration allows you to easily manage thousands of products. You can connect to your eBay store by using multiple credentials, map categories and product
attributes, and do much more by using the multichannel PIM integration. You can also provide eBay specifications, variations, refund policy, and shipping details for exported
products.

Working with the eBay Commerce Network Merchant Center connector


eBay Commerce Network Merchant Center connector provides a simple "Publish To eBay Marketplace" publication workflow.

Publish To eBay Marketplace


After the items fulfill the readiness criteria for publication, check out the items to the "Publish To eBay Marketplace" collaboration area. Select items and click
Publish to send the items for publishing. On successful publication, items get checked in to the catalog with the value of the Publication Details attribute set to "Success".
For new items, the connector reads the XML file from the shared location, creates a consolidated request for publication, and sends the request to eBay
Marketplace. For existing item update requests, the connector reads the XML file and sends the requests one by one to the eBay Marketplace.
Distribute
"Distribute" step is an automated step that distributes the item in the two steps based on the eBay ID associated with the item.
Publish to endpoint
"Publish to endpoint" is a manual step that holds all the new items that are ready for the publication in the eBay Marketplace.
Revise at Endpoint
"Revise at Endpoint" is a manual step that holds all the items, which are already published and have an eBay ID. These items get published for any update process.
Fix Failed Items
If there are any errors in processing, errors are set as comments and failed items move to "Fix Failed Items" step. The errors can be seen in the History tab on
single-edit page. Publication Status is set as "Failed", the Publication Date is set with the current date, and the Error attribute is set with the error message for such
items.
Rework Step
If the items are not ready for publishing, then click Rework to move the items to the "Rework Step", where they can be worked on and sent back to the catalog. Such
items can be checked out again for publishing.

Installing the connector
Proceed as follows to install the connector:

1. Browse to the $TOP_CONNECTOR/connectors folder.


2. Locate and extract the ipm_connectors_12.0.0.X_YYYYDDMMZZZ.tar.gz base file.
3. Set the TOP_CONNECTOR environment variable to point to the extracted folder:
export TOP_CONNECTOR=/opt/connectors/mdmce-connectors
4. Enable the connector in the $TOP_CONNECTOR/conf/connector_settings.ini file.
5. Run the installer script.

cd $TOP_CONNECTOR
cd bin
./install.sh

6. Update the application.properties file located in the following folder:


$TOP_CONNECTOR/connectors/mdmce_ebay/conf

For more information, see Updating properties file.


7. Start the connector from the $TOP_CONNECTOR/connectors/bin folder:

./start_connectors.sh -all or ./start_connectors.sh ebay

Stop the connector through the following command:

./stop_connectors.sh -all or ./stop_connectors.sh ebay

8. Check status of the connector through the following command:

./status_connectors.sh -all or ./status_connectors.sh ebay

Updating properties file


You need to update the following parameters in the application.properties file.

Application port configuration

server.port=<port>

MDM administrator configuration

mdm.company.user=Admin
mdm.company.password=<password>
mdm.company.company=<company>

MDM Server configuration

mdm.server.topDir=/opt/MDM116
mdm.server.etcDir=/opt/MDM116/etc
mdm.server.classPath=/opt/MDM116/jars/ccd_svr.jar

Catalog and spec map name

export.base.specmap.name=VariantToEbayMap
export.source.catalog.name=Variants

Database-related parameters

app.datasource.url=jdbc:db2://<hostname>:<port>/PIMDB
app.datasource.username=<username>
app.datasource.password=<password>

Hazelcast client and WebSphere MQ configuration

hazelcast_group_name=mdmce-hazelcast-instance
hazelcast_password=mdmce-hazelcast
hazelcast_network_ip_address=localhost:5702

mq.groupName=mdmce-hazelcast-instance
mq.password=mdmce-hazelcast
mq.networkIpAddress=localhost:5702

FTP details

ftp.host=<hostname>
ftp.user=<username>

FTP directory path or folder name

ftp.sourcelocation=/opt/connectors/mdmce-connectors/connectors/mdmce-ebay-connector/ebaytemp/consolidated
ftp.imagelocation=/opt/connectors/mdmce-connectors/connectors/mdmce-ebay-connector/ebaytemp/consolidatedimages
ftp.tempfilelocation=/opt/connectors/mdmce-connectors/connectors/mdmce-ebay-connector/ebaytemp

eBay Commerce Network Merchant Center configuration

ebay.feed.filename=cetoebayexport.xml
ebay.revise.filename=cetoebayrevise.xml
ebay.authtoken=<authtoken>
ebay.appid=<application id>
ebay.devname=<dev name>
ebay.certname=<cert name>
ebay.accessurl=https://api.sandbox.ebay.com/ws/api.dll

Importing data models


Proceed as follows to import base data model and data model:

1. Log in to the Admin UI.


2. Import the connectors_basemodel.zip file located in the following folder:
mdmce_connectors/datamodel

3. Import the connectors_ebay_datamodel.zip file located in the following folder:


mdmce_connectors/connectors/mdmce_ebay/datamodel

Installing and configuring Hazelcast


For more information, see Installing Hazelcast IMDG.

Configuring IBM Product Master extension


Add $TOP_CONNECTORS/libs/connectors-ext.jar to the Product Master class path by following these steps:

1. Add an entry for the JAR file in the $TOP/bin/classpath/jars-custom.txt file.
2. Run the following commands to update the runtime class path:

cd $TOP/bin
./updateRtClasspath.sh

You can either use image or image URL attributes for assets or enable Digital Asset Management feature for the Persona-based UI.

Troubleshooting
The logs for the connector are created in the $TOP_CONNECTOR/logs folder.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Google Merchant Center connector


You can configure Google Merchant Center (GMC) connector with the Product Master application.

GMC helps you get your product information into Google and make it available to all the shoppers across Google. When customers search on a Google property, all your
product information is available to them. Product Master provides an out-of-the-box integration with the GMC. With the GMC-Product Master connector, you can directly
manage your Google Shopping product listings from the Product Master. You can export complete product catalog information to Google Shopping, including product
categories and product variations.

Working with the Google Merchant Center connector


Google Merchant Center connector provides a simple "Publish To Google Marketplace" publication workflow.

Publish To Google Marketplace


After the items fulfill the readiness criteria for publication, check out the items to the "Publish To Google Marketplace" collaboration area. While processing the
items for publishing, the item information is written in a temporary CSV request. Each batch has its own consolidated CSV request that contains the item information. The
CSV files are available at the location that is defined in the ftp.tempfilelocation property. These temporary files are then consolidated and stored at the location
defined in the ftp.sourcelocation property. The file is picked up, copied to the .done folder, and sent to Google Marketplace for processing. On successful publication, items
get checked in to the catalog with the value of the Publication Details attribute set to "Success".
Publish to endpoint
"Publish to endpoint" is a manual step that holds all the new items that are ready for the publication in the Google Marketplace.
Fix Failed Items
If there are any errors in processing, errors are set as comments and failed items move to "Fix Failed Items" step. The errors can be seen in the History tab on
single-edit page. Publication Status is set as "Failed", the Publication Date is set with the current date, and the Error attribute is set with the error message for such
items.
Rework Step
If the items are not ready for publishing, then click Rework to move the items to the "Rework Step", where they can be worked on and sent back to the catalog. Such
items can be checked out again for publishing.

Installing the connector


Proceed as follows to install the connector:

1. Browse to the $TOP_CONNECTOR/connectors folder.


2. Locate and extract the ipm_connectors_12.0.0.X_YYYYDDMMZZZ.tar.gz base file.
3. Set the TOP_CONNECTOR environment variable to point to the extracted folder:
export TOP_CONNECTOR=/opt/connectors/mdmce-connectors
4. Enable the connector in the $TOP_CONNECTOR/conf/connector_settings.ini file.

5. Run the installer script.

cd $TOP_CONNECTOR
cd bin
./install.sh

6. Update the application.properties file located in the following folder:


$TOP_CONNECTOR/connectors/mdmce_google/conf

For more information, see Updating properties file.


7. Start the connector from the $TOP_CONNECTOR/connectors/bin folder:

./start_connectors.sh -all or ./start_connectors.sh google

Stop the connector through the following command:

./stop_connectors.sh -all or ./stop_connectors.sh google

8. Check status of the connector through the following command:

./status_connectors.sh -all or ./status_connectors.sh google

Updating properties file


You need to update the following parameters in the application.properties file.

Application port configuration

server.port=<port>

MDM administrator configuration

mdm.company.user=Admin
mdm.company.password=<password>
mdm.company.company=<company>

MDM Server configuration

mdm.server.topDir=/opt/MDM116
mdm.server.etcDir=/opt/MDM116/etc
mdm.server.classPath=/opt/MDM116/jars/ccd_svr.jar

Catalog and spec map name

export.base.specmap.name=VariantToGMCmap
export.source.catalog.name=Variants

Database-related parameters

app.datasource.url=jdbc:db2://<hostname>:<port>/PIMDB
app.datasource.username=<username>
app.datasource.password=<password>

Hazelcast client and WebSphere MQ configuration

hazelcast_group_name=mdmce-hazelcast-instance
hazelcast_password=mdmce-hazelcast
hazelcast_network_ip_address=localhost:5702

mq.groupName=mdmce-hazelcast-instance
mq.password=mdmce-hazelcast
mq.networkIpAddress=localhost:5702

Hierarchy Name, Lookup Table Name, and Lookup Attribute Path

google.hierarchy.name=GMC Categories
category.to.map.lookup.table.name=GMC Category To Spec Map
lookup.attribute.path=GMC Category Mapping/Value

FTP details

ftp.host=<hostname>
ftp.user=<username>

FTP directory path or folder name

ftp.sourcelocation=/opt/connectors/mdmce-connectors/connectors/mdmce-google-connector/gmctemp/consolidated
ftp.imagelocation=/opt/connectors/mdmce-connectors/connectors/mdmce-google-connector/gmctemp/consolidatedimages
ftp.tempfilelocation=/opt/connectors/mdmce-connectors/connectors/mdmce-google-connector/gmctemp

Google Merchant Center configuration

google.auth.accesstoken=<access-token>
google.auth.refreshtoken=<refresh-token>
google.auth.clientid=<client-id>
google.auth.clientsecret=<client-secret>
google.auth.redirecturi=<redirect-uri>
google.auth.oauthtokenurl=<auth-token-url>
google.request.url=https://content.googleapis.com/content/v2
google.request.merchantid=<merchant-id>
google.feed.filename=cetogmcexport.csv

google.request.feedname=<sample-feed-name>
google.request.feedid=<feed-id>

Importing data models


Proceed as follows to import the base data model and the data model:

1. Log in to the Admin UI.
2. Import the connectors_basemodel.zip file located in the following folder:
mdmce_connectors/datamodel
3. Import the connectors_google_datamodel.zip file located in the following folder:
mdmce_connectors/connectors/mdmce_google/datamodel

Installing and configuring Hazelcast


For more information, see Installing Hazelcast IMDG.

Configuring IBM Product Master extension


Add $TOP_CONNECTOR/libs/connectors-ext.jar to the Product Master class path by following these steps:

1. Add an entry for the JAR file in the $TOP/bin/classpath/jars-custom.txt file, and then run the following command to update the runtime class path:

cd $TOP/bin
./updateRtClasspath.sh

You can either use image or image URL attributes for assets, or enable the Digital Asset Management feature for the Persona-based UI.

Troubleshooting
The logs for the connector are created in the $TOP_CONNECTOR/logs folder.

1. Refresh token failure
If the refresh token is not generated, use the Postman client to call the token POST API and generate the token manually.
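
As a sketch of that request with curl (assuming the standard OAuth2 authorization-code grant against the endpoint configured in google.auth.oauthtokenurl, typically https://oauth2.googleapis.com/token; the placeholders correspond to the google.auth.* properties, and <authorization-code> comes from the OAuth consent flow):

curl -X POST "<auth-token-url>" \
  -d "client_id=<client-id>" \
  -d "client_secret=<client-secret>" \
  -d "redirect_uri=<redirect-uri>" \
  -d "code=<authorization-code>" \
  -d "grant_type=authorization_code"

The JSON response contains the access_token and refresh_token values for the google.auth.accesstoken and google.auth.refreshtoken properties.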

2. Data specifics
Google Merchant Center connector needs data in the specific supported format; otherwise, it returns errors in responses.
Example
Attribute Type Description
Price Attribute Price should always be mentioned with currency details, for example, "500 USD".
Link Attribute Live URL, which is hosted and verified by Google.
Image link Live image link, which is hosted in the verified domain.
For more information, see Data specifics.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

JD Edwards connector
You can configure JD Edwards (JDE) connector with the Product Master application.

The JDE connector is an upstream connector that imports data from the JDE system into the Product Master. The JDE connector imports data into the Item Master catalog and the Supplier Master catalog.

Working with the JDE connector


Item Master Catalog has Commodity Class hierarchy and Business Unit hierarchy. Supplier Master Catalog has Location hierarchy and Business Unit hierarchy.

After CSV files are placed on the FTP or SFTP directory (configurable through the ftp.directory property in the application.properties file), start the JDE connector. The
connector starts reading and processing the CSV files from the configured location. The batch size is defined in the application.properties file.

If the batch size is not reached, the connector waits for 10 seconds before processing the CSV files. After completion of each batch, the batch status is added into the EBS and EXS tables (total number of records that are processed, total number of succeeded items, and total number of failed items). For failed items, an entry is added into the ERS table along with error details.

The JDE connector creates new items or updates the existing items, and also categorizes the items. When the JDE connector is run for the first time, it performs an initial import of categories from the JDE system into the Product Master. The connector populates the Commodity Class, Business Unit, and Location hierarchies. The JDE connector remains in wait mode, polling for availability of the CSV files. If the connector stops abruptly due to some error, you are logged out of the Product Master interface.

Installing the connector


Proceed as follows to install the connector:

1. Browse to the $TOP_CONNECTOR/connectors folder.
2. Locate and extract the ipm_connectors_12.0.0.X_YYYYDDMMZZZ.tar.gz base file.
3. Set the TOP_CONNECTOR environment variable to point to the extracted folder:
export TOP_CONNECTOR=/opt/connectors/mdmce-connectors
4. Enable the connector in the $TOP_CONNECTOR/conf/connector_settings.ini file.
5. Run the installer script.

cd $TOP_CONNECTOR
cd bin
./install.sh

6. Update the application.properties file located in the following folder:
$TOP_CONNECTOR/connectors/mdmce_jde/conf
For more information, see Updating properties file.

7. Start the connector from the $TOP_CONNECTOR/connectors/bin folder:

./start_connectors.sh jde

./start_connectors.sh -all

Stop the connector through the following command:

./stop_connectors.sh -all or ./stop_connectors.sh jde

8. Check status of the connector through the following command:

./status_connectors.sh -all or ./status_connectors.sh jde

Updating properties file


You need to update the following parameters in the application.properties file.

MDM administrator configuration

mdm.company.user=Admin
mdm.company.password=<password>
mdm.company.company=<company>

bulk.import.batchsize=2
bulk.process.completion.batchsize=10
bulk.category.import.batchsize=50
file.success.dir.path=/opt/connectors/idocs
file.error.dir.path=/opt/connectors/idocs/error

FTP configuration

ftp.url=protocol://<ipaddress>:<port>/dir
ftp.user=<user_name>
ftp.password=<base64-encoded password>
ftp.csv.dest.directory=<idocs directory>
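
For example, to produce the Base64-encoded value for ftp.password on Linux (a sketch; replace mypassword with the actual password):

echo -n 'mypassword' | base64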

Database connection

app.datasource.url=jdbc:db2://<ipaddress>:<port>/<dbname>
app.datasource.username=<user_name>
app.datasource.password=<password>
app.datasource.driver-class-name=<driver_class>

Importing data models


Proceed as follows to import the base data model and the data model:

1. Log in to the Admin UI.
2. Import the connectors_basemodel.zip file located in the following folder:
mdmce_connectors/datamodel
3. Import the connectors_jde_datamodel.zip file located in the following folder:
mdmce_connectors/connectors/mdmce_jde/datamodel

Importing operational catalog data


In the catalog to catalog export, you can import the data from the Operational catalog into the Product catalog.

Proceed as follows to create a catalog to catalog export job:

1. Log in to the Admin UI.


2. Go to Product Manager > Catalogs > Catalog to Catalog Export and click New Catalog to Catalog Export.
3. Select the Source catalog and select the group of items that you want to export.
4. Select all items or selection-based items and select the Destination catalog.
5. Add source to destination catalog mapping.
6. Select Skip to auto-generate the script or New to write your own script for catalog to catalog export.
7. Select the Approving Authority for Generated File Distribution.
8. Enter the name of the catalog to catalog export.

Troubleshooting
The logs for the connector are created in the $TOP_CONNECTOR/logs folder.

If the connector does not start processing, then verify the configuration parameters in the application.properties file.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Magento2 connector
You can configure Magento2 connector with the Product Master application.

Product Master - Magento2 connector is a downstream connector for publishing items to the Adobe Magento2 e-commerce platform.

Prerequisites
The Magento connector supports Magento version 2.4 with the following prerequisites.

You need to configure the following Lookup tables.

Connector Configuration Lookup Table


The Connector Configuration Lookup Table provides the mapping between a catalog and the Magento2 connector. Using the following format, configure the catalog for the Magento publication.
Attribute Name Description
Catalog Name The name of the catalog for which publication is required.
Connector The name of the connector, for example, "Magento".
Enable Enable connector for a selected catalog. Valid value can be True or False.
Hierarchy Name The name of the hierarchy.
Key Auto generated primary key of the entry.
Publish Category Enable or disable category publication. Valid value can be True or False.
Publish Format The type of product publication content. For Magento, the format is JSON.
Queue The name of the Hazelcast Queue where products are published.
Note: If you are configuring the Magento connector for multiple catalogs, then all the catalogs should have the same publishing queue.
Root Category The full path of the root category. Only categories inside the root category are published.
Note: If no categories are specified in the root category, then the item gets mapped to the default category of the Magento2.
Transformer Class The custom transformation class name. If empty, the default transformation is used.
For custom implementation, provide the class full package name.
Example
com.ibm.ipm.CustomImplementation
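
A hypothetical example entry (the values are illustrative; the queue name matches the default request queue in the application.properties file):
Catalog Name: Variants
Connector: Magento
Enable: True
Hierarchy Name: Magento Hierarchy
Publish Category: True
Publish Format: JSON
Queue: magento2_connector_item_publish_queue
Root Category: Magento Hierarchy/Apparel
Transformer Class: (empty; the default transformation is used)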
Connector Category To Spec Map Lookup Table
Connector Category To Spec Map Lookup Table is used to define Spec Map associated for a respective category. Using the following format, configure spec
map to a category for the Magento publication.
Attribute Name Description
Catalog Name The name of the catalog for which publication is required.
Category The full path of the category for which spec maps need to be defined.
Connector The Lookup table reference of the Connector Configuration Lookup Table.
Hierarchy Name The name of the hierarchy.
Key Auto generated primary key of the entry.
Spec Map A comma-separated list of the spec maps required to generate product JSON.

You need to configure a spec map. A spec map is used to prepare a product JSON that is then published at the Magento endpoint. It is used to specify mapping of
the Product Master item attributes with the Magento attributes for each category. For more information, see Spec Map Console.
Note:
You should map multi-occurrence attributes of the Product Master only to multi-occurrence attributes of Magento.
To add any custom attributes in Magento, you need to add simple fields in the custom attribute grouping attribute.
For example, if you have a color custom field where attribute_code=color, you can define a simple string attribute inside the Custom_attribute grouping node:
Custom_attribute – [grouping]
-color
To get started with the Magento connector publication, the Magento data model imports some spec maps, for example, VariantToMagentoMap, VariantToMagentoExtensionMap, and VariantToMagentoMediaMap. You can update an existing spec map or create a new spec map for the selected category and update the Connector Category To Spec Map Lookup Table.

Using the Admin UI, you can import the sample Magento data model (connectors_magento2_datamodel.zip) from the $TOP_CONNECTOR/connectors/mdmce-
magento2-connector/datamodel folder.

Also, import the connectors_basemodel.zip file that is located in the following folder:
$TOP_CONNECTOR/connectors

The Magento connector uses asynchronous bulk APIs; hence, RabbitMQ must be installed and running. For more information, see RabbitMQ.
Use the bin/magento queue:consumers:start async.operations.all command to start the consumer that handles asynchronous and bulk API messages.
Configure and enable Hazelcast.

Working with the Magento2 connector


Magento2 connector provides a simple way to publish an item to the Magento2 marketplace right from the catalog. Configure the connector by using the Lookup tables, and select any item from the respective catalog in the single-edit or multi-edit page. A Publish button is visible that lists the number of configured endpoints where the item needs to be published. Select Magento, and the item is sent for further processing of publication. During publication, the item flows through many processing stages, and the respective states are updated in the Publication Details of the item.
Status Description
REQUEST_INITIATED Item details are being processed. The product JSON file is prepared, and sent to the Magento connector for publication.
REQUEST_INPROGRESS Item details and product JSON file are being read by connector for further processing.
REQUEST_SUCCESSFUL Item successfully published at the Magento endpoint.
REQUEST_FAILED Item failed to publish at the Magento endpoint.
Tip: If your catalog items have images, update the value of the max_allowed_packet=100M property in the my.ini file (Windows) or my.cnf file (Linux®) in the Magento endpoint. By default, the Magento connector sends 10 items in a single batch to the Magento endpoint. If the sizes of the images are large, then calculate the size of the max_allowed_packet property depending on the bulk.export.batch.size property in the application.properties file and the number of images.
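
For example (illustrative numbers): with bulk.export.batch.size=10 and roughly 10 MB of image data per item, a single batch can approach 10 x 10 MB = 100 MB, so max_allowed_packet=100M would be an appropriate setting.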

Installing the connector


Proceed as follows to install the connector:

1. Browse to the $TOP_CONNECTOR/connectors folder.
2. Locate and extract the ipm12_connectors_12.0.0.X_YYYYDDMMZZZ.tar.gz base file.
3. Set the TOP_CONNECTOR environment variable to point to the extracted folder:
export TOP_CONNECTOR=/opt/connectors/mdmce-connectors
4. Enable the connector in the $TOP_CONNECTOR/conf/connector_settings.ini file.
5. Run the installer script.

cd $TOP_CONNECTOR
cd bin
./install.sh

6. Update the application.properties file located in the following folder:
$TOP_CONNECTOR/connectors/mdmce-magento2-connector/conf
For more information, see Updating properties file.

7. Start the connector from the $TOP_CONNECTOR/connectors/bin folder:

./start_connectors.sh magento2

./start_connectors.sh -all

Stop the connector through the following command:

./stop_connectors.sh -all or ./stop_connectors.sh magento2

8. Check status of the connector through the following command:

./status_connectors.sh -all or ./status_connectors.sh magento2

Message archive service starts and stops automatically depending on the connection status of the Magento connector. If needed, the message archive service can also be started manually. For more information, see Message archive service.

Updating properties file


You need to update the following parameters in the application.properties file.

MDM company configuration

mdm.company.user=<user_name>
mdm.company.password=<password>
mdm.company.company=<company>

MDM Server configuration

mdm.server.topDir=<$TOP>
mdm.server.etcDir=<$TOP>/etc

Database-related parameters

app.datasource.url=jdbc:db2://<hostname>:<port>/<databasename>
app.datasource.username=<username>
app.datasource.password=<password>

Hazelcast client configuration

mq.groupName=mdmce-hazelcast-instance
mq.networkIpAddress=<hazelcasthostaddress>:5702
mq.requestQueueUri=hazelcast-seda:magento2_connector_item_publish_queue?pollTimeout=1000&concurrentConsumers=5&transferExchange=false&transacted=false&hazelcastInstance=#hazelcastInstance
mq.bulkUUIDQueueUri=hazelcast-seda:magento2_connector_put_bulk_uuid_queue?pollTimeout=1000&concurrentConsumers=5&transferExchange=false&transacted=false&hazelcastInstance=#hazelcastInstance

Magento2 configuration

magento2.accessTokenUrl=http://<magentohost>/magento2/rest/V1/integration/admin/token
magento2.username=
magento2.password=
magento2.product.bulk.request.url=http://<magentohost>/magento2/rest/default/async/bulk/V1/products
magento2.product.bulk.request.stats.url=http://<magentohost>/magento2/rest/V1/bulk/%s/detailed-status
magento2.category.create.request.url=http://<magentohost>/magento2/rest/V1/categories?fields=id,parent_id,name
magento2.category.search.request.url=http://<magentohost>/magento2/rest/V1/categories/list

Installing and configuring Hazelcast
For more information, see Installing Hazelcast IMDG.

Troubleshooting
The logs for the connector are created in the $TOP_CONNECTOR/logs folder.

1. If there is a "Consumer is not authorized to access" error, then verify your Magento URL and credentials.
2. If there is an "Invalid product data" error on publishing an item, then verify the values of mandatory attributes set in the item.

Message archive service


Magento2 uses message archive service to save incoming and outgoing XML messages.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Message archive service


Magento2 uses message archive service to save incoming and outgoing XML messages.

All the connectors send messages to a queue. The message archive service fetches the messages from the queue and saves them to disk at the following location.

fileStoragePath/companyId/connectorName/dayOfMonth/fileName.extension

You can mount this folder to the docstore.
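
For example (an illustrative path, assuming app.fileStoragePath=/opt/connectors/archive), an XML message from the Magento2 connector for company ID 1000 that is saved on the 15th day of the month might be stored as:

/opt/connectors/archive/1000/magento2/15/message_1.xml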

Installing the connector


Proceed as follows to install the connector:

1. Browse to the $TOP_CONNECTOR/connectors folder.
2. Locate and extract the ipm12_connectors_12.0.0.X_YYYYDDMMZZZ.tar.gz base file.
3. Set the TOP_CONNECTOR environment variable to point to the extracted folder:
export TOP_CONNECTOR=/opt/connectors/mdmce-connectors
4. Enable the connector in the $TOP_CONNECTOR/conf/connector_settings.ini file.
5. Run the installer script.

cd $TOP_CONNECTOR
cd bin
./install.sh

6. Update the application.properties file located in the following folder:
$TOP_CONNECTOR/connectors/mdmce-message-archive-service/conf
For more information, see Updating properties file.

7. Start the connector from the $TOP_CONNECTOR/connectors/bin folder:

./start_connectors.sh message-archive-service

./start_connectors.sh -all

Stop the connector through the following command:

./stop_connectors.sh -all or ./stop_connectors.sh message-archive-service

8. Check status of the connector through the following command:

./status_connectors.sh -all or ./status_connectors.sh message-archive-service

Updating properties file


You need to update the following parameters in the application.properties file.

Hazelcast client configuration

hazelcast.groupName=mdmce-hazelcast-instance
hazelcast.networkIpAddress=<hazelcasthostaddress>:5702
hazelcast.itemQueueUri=hazelcast-seda:message_archive_app_queue?pollTimeout=1000&concurrentConsumers=5&transferExchange=false&transacted=false&hazelcastInstance=hazelcastInstance

File storage path on disk

app.fileStoragePath=

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

SAP connector
You can configure SAP connector with the Product Master application.

The Product Master-SAP connector is an upstream connector for importing data from the SAP Material Management module into the Product Master operational catalogs. The SAP upstream connector allows you to import material and vendor information for data enrichment and review before publication to downstream channels. This ensures quality and speeds time to market through data consolidation and governance that is enforced by Product Master. The SAP connector imports data into the Material catalog and Vendor catalog operational catalogs. The Material catalog has the Material hierarchy as primary and the Plant hierarchy as secondary hierarchy. The Vendor catalog has the Company hierarchy as primary and the Location hierarchy as secondary hierarchy.

Working with the SAP connector


The connector uses SAP Intermediate Document (IDOC) and Business Application Programming Interface (BAPI) for data transfer. SAP Enterprise Resource Planning (ERP) needs to be configured to generate IDOC files and place them in the FTP or SFTP directory (configurable through the ftp.directory property in the application.properties file). The connector reads the IDOCs from the FTP or SFTP location and processes them in batches. If the IDOCTYP field in the IDOC is MATMASXX (for example, MATMAS01 or MATMAS02), then the IDOC is processed as a Material Master IDOC. If the IDOCTYP field is CREMASXX, then the IDOC is processed as a Vendor IDOC.

The SAP connector creates new items (materials or vendors) in the Product Master and modifies existing items. The connector also categorizes them based on the information available in the IDOCs. After completion of each batch, all successfully parsed IDOCs for which items are created or updated in the Product Master are moved to the Success directory. If there is any error in parsing, creating, or updating items, then the IDOCs are moved to the Error directory. The SAP connector performs an initial import of categories from the SAP ERP system into the Product Master and populates the Plant, Material, Company, and Location hierarchies in the Product Master.
Note: The SAP connector supports only Material Master and Vendor type IDOCs.

Installing the connector


Proceed as follows to install the connector:

1. Browse to the $TOP_CONNECTOR/connectors folder.
2. Locate and extract the ipm_connectors_12.0.0.X_YYYYDDMMZZZ.tar.gz base file.
3. Set the TOP_CONNECTOR environment variable to point to the extracted folder:
export TOP_CONNECTOR=/opt/connectors/mdmce-connectors
4. Enable the connector in the $TOP_CONNECTOR/conf/connector_settings.ini file.
5. Run the installer script.

cd $TOP_CONNECTOR
cd bin
./install.sh

6. Update the application.properties file located in the following folder:
$TOP_CONNECTOR/connectors/mdmce_sap/conf
For more information, see Updating properties file.

7. Start the connector from the $TOP_CONNECTOR/connectors/bin folder:

./start_connectors.sh sap

./start_connectors.sh -all

Stop the connector through the following command:

./stop_connectors.sh -all or ./stop_connectors.sh sap

8. Check status of the connector through the following command:

./status_connectors.sh -all or ./status_connectors.sh sap

Updating properties file


You need to update the following parameters in the application.properties file.

MDM company configuration

mdm.company.user=<user_name>
mdm.company.password=<password>
mdm.company.company=<company>

JCO client configuration

jco.client.lang=en
jco.client.client=<client_id>
jco.client.passwd=<password>
jco.client.user=<user_name>
jco.client.sysnr=00
jco.client.ashost=<ip_address/host_name>
jco.client.destination=abap_as_without_pool
jco.passwordencrypted=false

bulk.import.batchsize=2
bulk.category.import.batchsize=50
file.success.dir.path=/opt/connectors/idocs
file.error.dir.path=/opt/connectors/idocs/error
file.idoc.source.directory=/opt/IODC/

FTP configuration

ftp.url=<ftp_url>
ftp.user=<user_name>
ftp.password=<base64-encoded password>
ftp.idoc.dest.directory=<idocs_directory>

Database-related parameters

app.datasource.url=jdbc:db2://<hostname>:<port>/PIMDB
app.datasource.username=<username>
app.datasource.password=<password>

Importing data models


Proceed as follows to import the base data model and the data model:

1. Log in to the Admin UI.
2. Import the connectors_basemodel.zip file located in the following folder:
mdmce_connectors/datamodel
3. Import the connectors_sap_datamodel.zip file located in the following folder:
mdmce_connectors/connectors/mdmce_sap/datamodel

Configuring JCo client


The SAP connector uses the JCo client to fetch categorization data from the SAP system. Proceed as follows to load the JCo client library:

1. Copy the libsapjco3.so file to a folder, for example, the /opt/sapjso3lib/ folder.
2. Add the following to the .bashrc or .bash_profile file:

export LD_LIBRARY_PATH=/opt/sapjso3lib

Note: If the library path is not set after editing the .bash_profile file, run source .bash_profile on the console.
3. Copy the sapjco3.jar file to the /opt/connectors/mdmce-connectors/libs folder.

Troubleshooting
The logs for the connector are created in the $TOP_CONNECTOR/logs folder.

If the connector does not start processing, then verify the configuration parameters in the application.properties file.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Reference
The reference information in the IBM® Product Master includes the configuration files, script operations, user interface panels, shell scripts, Javadoc, and REST APIs.

Javadoc
The following Javadoc documentation enables easy access to the API information:

Javadocs

Object Schema
Defines the XML representation of business objects supported in the product. All schema files are available under the $TOP/etc/default/object_schema folder of the core
product installation.

mdmspec01.XS - This file contains the formal definition of the XML representation for a spec object.
mdmentry01.XS - This file contains the formal definition of the XML representation for an entry object, including item, category, lookup table entry, collaboration item, and collaboration category.

REST APIs
The REST API layer is a set of APIs to access entities in the IBM Product Master, Version 12.0.
Integrations REST APIs
The Integrations REST APIs are a set of APIs that have the same functionality as the existing APIs. The main difference is that the input for these REST APIs is "names" instead of the traditional "IDs".
Configuration files
The configuration files contain your system configurations that you use to set up and customize Product Master.
Shell scripts
You can use scripts provided with IBM Product Master to perform many functions. These scripts are a powerful way to perform many functions quickly and
efficiently.
Script operations
Script operations extend the basic functionality of the IBM Product Master. You can use script operations to clean, transform, validate, and calculate information to
align with business rules and processes. This information can then be imported and exported to virtually any file standard and custom file format or used to perform
mass updates to a catalog of information.

Global Data Synchronization configuration files
The Global Data Synchronization configuration files contain the system configurations that you use to set up and customize Global Data Synchronization.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

REST APIs
The REST API layer is a set of APIs to access entities in the IBM® Product Master, Version 12.0.

REST APIs follow the security model based on the user roles and ACGs, and require the same set of access permissions as the Persona-based UI.

The documentation for each REST API contains information such as URLs, parameters, descriptions, sample input, and output data.

The following table lists the supported components:

Component REST API description


Authentication APIs for login and logout.
Collaboration Area APIs for operations on the collaboration areas entries like create, update entries, export, or import.
Catalog APIs for operations on the catalog items like create, update items, checkout, or export.
Hierarchy APIs for fetching hierarchy details.
Category APIs for operations on the categories like create, update, or move.
Search APIs for item and category search along with support for saved searches and templates.
Selection APIs to create and view items in a static selection.
Audit History APIs for fetching audit history details.
Digital Asset Management APIs for managing assets in the DAM repository and linking the assets to items.
Free Text Search APIs for supporting free text search on items and categories.
Scripts APIs for operations on the entry preview and trigger scripts.

REST API for the Product Master supports Basic authentication. You should use the login API to generate a JSON Web Token (JWT). The Authorization header should contain the Base64-encoded username:password. The X-Company header should contain a valid company.

Login API
Method: GET
URL: /api/v1/login
Request headers: Authorization: Basic <Base 64 encoded username:password>
X-Company: yourcompany
Response headers: The X-AuthToken response header contains a valid JWT token on successful login. You should use the JWT token that is obtained after a successful login as a Bearer Authorization token and specify a valid X-Company header when making API calls.
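
For illustration, a minimal login call with curl (host, port, credentials, and company are placeholders):

curl -i "https://<host>:<port>/api/v1/login" \
  -H "Authorization: Basic $(echo -n '<username>:<password>' | base64)" \
  -H "X-Company: <yourcompany>"

The JWT returned in the X-AuthToken response header is then passed on subsequent calls, for example:

curl "https://<host>:<port>/api/v1/<resource>" \
  -H "Authorization: Bearer <JWT>" \
  -H "X-Company: <yourcompany>"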

Logout API
Method: POST
URL: /api/login
Request headers: Authorization: Bearer <JWT obtained on successful login>
X-Company: yourcompany

Common Error codes


Status code Generic description
400 Bad Request or Invalid Input.
401 Unauthorized
403 Forbidden
500 Internal Server Error
For more detailed information on the REST APIs, see the IBM Product Master REST APIs swagger file.

REST API error codes


Following is the list of all the REST API error codes for the IBM Product Master.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

REST API error codes

Following is the list of all the REST API error codes for the IBM® Product Master.

Error Code Description


ER001 Enter valid credentials.
ER002 Enter a valid company name.
ER005 Session expired, log in to the application again.
ER006 Enter valid credentials.
ER007 Session expired, log in to the application again.
ER008 Logout failed, try again.
ER009 Logout operation failed.
ER010 An error occurred, try again.
ER011 An error occurred, try again.
ER012 User not found.
ER013 You are not authorized, contact your administrator for more details.
ER014 Enter valid username/password/company name combination to log in.
ER015 Failed to refresh the Auth Token, try again.
ER016 An error occurred, try again.
ER017 Your account is locked, contact your administrator.
ER018 New password cannot be same as old password, try again.
ER019 Password should be alphanumeric with at least 8 characters and contain at least one number, at least one lowercase letter, at least one uppercase
letter, and at least one special character without any white spaces or username.
ER020 The LDAP user is not authorized to change the password.
ER021 Failed to connect to the database, contact system administrator.
ER101 Catalog does not exist or you do not have the access rights.
ER102 Catalog does not exist or you do not have the required access rights.
ER103 Failed to get the hierarchy information for the catalog.
ER104 Failed to get the catalog view information.
ER105 Failed to get the catalog information mapped with Relationship type attribute.
ER106 The Catalog does not exist.
ER107 Failed to get catalog view.
ER108 Failed to get company locales.
ER201 Category does not exist or you do not have the required access rights.
ER202 Failed to search for the categories, try again.
ER301 Failed to get the Hierarchy.
ER302 Failed to get the hierarchy view information.
ER303 The Hierarchy does not exist.
ER304 Failed to get the predefined tab for category in hierarchy.
ER401 Collaboration area does not exist or you do not have the required access rights.
ER402 Collaboration step does not exist or you do not have the required access rights.
ER403 Collaboration step does not exist or you do not have the required access rights.
ER404 Collaboration step does not exist or you do not have the required access rights.
ER405 Collaboration step does not exist or you do not have the required access rights.
ER406 Failed to fetch required attributes for collaboration area step.
ER500 Failed to get Audit records.
ER501 Failed to update the item. The item does not exist or you do not have the required access rights.
ER502 Failed to move the items, try again.
ER503 Failed to release the item, try again.
ER504 Failed to reserve the item, try again.
ER505 Failed to get the item, try again.
ER506 Failed to get the predefined tab for the item, try again.
ER507 No relationship attribute present for the item
ER508 Failed to categorize item, try again later.
ER509 Failed to add the new item, try again.
ER510 You need to reserve an item before editing.
ER511 Item is already checked out.
ER512 Failed to check out the item, try again.
ER801 Failed to update the entry. The entry does not exist or you do not have the required access rights.
ER802 Failed to move the entry, try again.
ER803 Failed to release the entry, try again.
ER804 Failed to reserve the entry, try again.
ER805 Failed to get the entry, try again.
ER806 Failed to get the predefined tab for the entry, try again.
ER807 No relationship attribute present for the entry
ER808 Failed to categorize entry, try again later.
ER809 Failed to add the new entry, try again.
ER810 You need to reserve an entry before editing.
ER811 Entries are already checked out.
ER812 Failed to check out the entry, try again.
ER823 The Entry is already reserved.

ER824 The Entry is already released.
ER830 Audit: invalid from date.
ER831 Audit: invalid to date.
ER832 Audit: invalid XML in history record
ER833 The SAVE action might not be completed. Primary key is empty.
ER834 Failed to get checkout information for entry.
ER513 Failed to create the mass edit schedule, try again.
ER514 Failed to get the status of the mass edit schedule, try again.
ER515 Only one mass edit schedule can be processed at a time.
ER516 Mass edit report does not exist.
ER517 Specify a valid collaboration and workflow step, and try again.
ER518 Exceeded mass edit limit, try a different step with fewer items.
ER519 No mass edit schedule is running.
ER520 No mass edit schedules found
ER521 Failed to create mass-edit report.
ER522 Failed to get item check-out status.
ER523 The Item is already reserved.
ER524 The Item is already released.
ER525 Failed to validate total number of digits.
ER526 Failed to validate fraction digits.
ER527 Failed to validate minimum value (exclusive).
ER528 Failed to validate minimum value (inclusive).
ER529 Failed to validate maximum value (exclusive).
ER530 Failed to validate maximum value (inclusive)
ER531 Failed to update item.
ER532 Failed to delete the items from the catalog.
ER533 Failed to validate, minimum occurrence or maximum occurrence.
ER534 Item is not mapped to the {0} channel.
ER601 Failed to create the saved search, try again.
ER602 Invalid input.
ER603 Specified search name exists, save this search with a different name.
ER604 Saved search does not exist.
ER605 Failed to delete the saved search, try again.
ER606 Failed to retrieve the saved search, try again.
ER607 No saved searches found.
ER608 Invalid input.
ER609 Failed to retrieve the hierarchies, try again.
ER610 Failed to fetch the searchable attributes, try again.
ER611 Failed to retrieve the selection, try again.
ER612 No collaboration areas found.
ER613 No categories found.
ER614 No linked items found.
ER615 No related items found.
ER616 No history record found.
ER617 No items found.
ER618 No catalogs found.
ER619 No saved lists found.
ER620 No data available.
ER621 No saved templates found.
ER622 Selection does not exist.
ER623 No enumeration values exist.
ER624 Could not get enumeration values, try again later.
ER625 Could not generate a report, try again later.
ER626 Could not fetch report generation scripts, try again later.
ER627 No scripts found.
ER628 Failed to create a Saved List, try again.
ER629 Specified saved list exists, save this list with a different name.
ER630 Destination category or folder not found.
ER631 Failed to retrieve script result.
ER632 Failed to retrieve scripts.
ER633 Failed to get the search data.
ER635 Item does not exist.
ER636 Attribute with path {attributePath} does not exist.
ER637 Failed to create a rule, try again (or if applicable, contact your administrator)
ER639 Failed to fetch the rule list, try again (or if applicable, contact your administrator).
ER640 Failed to delete the rule {0}, try again (or if applicable, contact your administrator).
ER641 Failed to rename rule (or if applicable, contact your administrator)

ER642 Failed to set the rule status, try again (or if applicable, contact your administrator).
ER644 Specified rule name exists, save this rule with a different name.
ER645 Rule does not exist.
ER646 Rule edition failed.
ER648 Are you sure you want to rename the rule?
ER649 Failed to fetch rule (or if applicable, contact your administrator).
ER650 Rule Name is too long or contains invalid characters.
ER638 Secondary specs not found.
ER901 Failed to upload the file, try again.
ER902 The server encountered an error while uploading the file, contact your system administrator.
ER903 Specified file exists, save this file with a different name.
ER904 Failed to download the file, try again.
ER905 The file does not exist.
ER906 Failed to load the user information, contact system administrator.
ER907 Failed to load the user information, contact system administrator.
ER908 Your session has expired, log in to the application again.
ER909 Invalid input
ER910 Failed to update the user information, contact system administrator.
ER911 Company locale does not exist.
ER912 Failed to run script.
ER913 Invalid Job type. Possible values are Import, Export, and Report.
ER914 Failed to fetch {0} Job. Provide a valid Job name.
ER915 Failed to fetch the documents.
ER916 Directory does not exist.
ER917 Failed to fetch the document content.
ER918 Document does not exist.
ER919 Internal server error.
ER920 Failed to load script operations.
ER921 Script validation failed: {0}.
ER922 Script validation failed due to internal errors.
ER923 Unsupported file type
ER924 File name cannot contain "!@$%^&()".
ER701 Failed to locate the folder in the repository.
ER702 Failed to locate mapping for the parent category in the repository.
ER112 Folder not present in the repository
ER703 Failed to locate the folders in the repository.
ER704 Asset not present in the repository
ER705 Failed to locate the asset in the repository.
ER706 Failed to locate the assets in the repository.
ER707 Failed to create the asset in the repository.
ER708 Failed to create the folder in the repository.
ER709 Failed to delete the asset in the repository.
ER710 Failed to delete the folder in the repository.
ER711 Failed to move the asset in the repository.
ER712 Failed to upload the file to the repository.
ER713 Failed to retrieve the asset details from the repository.
ER714 FTP bulk upload failed, try again.
ER715 Failed to locate the asset metadata in the repository.
ER716 Failed to update the asset metadata in the repository.
ER717 Failed to search the asset, try again.
ER718 Asset does not exist.
ER719 Login credentials are not valid, contact your administrator.
ER720 Failed to create an asset in the application, contact your administrator.
ER721 Failed to create a folder in the application, contact your administrator.
ER722 Failed to retrieve asset count from the repository.
ER723 Failed to connect to the FTP server, try again.
ER724 Failed to connect to the FTP server, try again.
ER725 Failed to retrieve the folder details from the FTP server, try again.
ER726 Failed to log out of the FTP server, try again.
ER727 The specified directory does not exist, provide a valid input.
ER728 Server might not connect to the FTP, try again.
ER729 Failed to get the FTP folder count.
ER730 Unsupported FTP type; only FTP or SFTP is supported.
ER731 Failed to get the asset details from the FTP server, try again.
ER732 Failed to get the digital asset hierarchy from the repository.
ER733 No jobs found
ER734 Failed to rename the folder in the repository.

ER735 Failed to rename the category in the repository.
ER736 Failed to extract the assets from the repository.
ER737 Failed to upload the assets to the repository.
ER738 Specified category name exists, save this category with a different name.
ER739 Failed to schedule the job.
ER740 Failed to stop the schedule.
ER741 No schedule found
ER743 No ID attribute found, set an ID attribute, and try again.
ER744 Failed to update the user details in the repository.
ER745 Failed to update user details in the repository.
ER746 Failed to update user details because the user is already present in the repository.
ER742 User not found in the repository
ER747 Failed to create a category in the repository.
ER748 Failed to locate an FTP bulk upload sample report.
ER749 Failed to create an FTP bulk upload report.
ER750 Item not found
ER751 Failed to upload linkage file.
ER752 Failed to create file selector for the FTP.
ER753 Linkage upload sample report not found
ER754 No name attribute found, set the name attribute, and try again.
ER755 The new password cannot be the same as any previously used passwords.
ER756 Enter valid search type and try again.
ER757 Failed to rename the folder because the folder is already present in the DAM repository.
ER758 Failed to retrieve jobs.
ER759 You are not authorized in the repository, contact your administrator for more details.
ER760 Failed to search metadata.
ER761 User role not specified, contact your administrator.
ER762 Failed to convert the image to Base64 equivalent string.
ER763 The uploaded image size exceeds the recommended size of 50 MB
ER764 Configuration file does not exist, contact your administrator.
ER765 Failed to delink the assets from the item.
ER766 No users found
ER767 Failed to link asset with item.
ER768 Type or name exists, try with different type or name.
ER769 Digital Asset Catalog does not exist.
ER850 Failed to insert page activity for user {0}. Either the page name is incorrect or the Admin restricted capturing user activity. Contact system administrator.
ER1000 Failed to fetch search response, contact system administrator.
ER1001 Enter a valid page number {0} or page size {1}.
ER1002 Failed to schedule job for the specified company, try again.
ER1003 Failed to retrieve schedule job status, try again.
ER1004 You can view 10,000 search results only for any keyword.
ER1005 Invalid input.
ER1006 Failed to get the sample report.
ER1007 Failed to create schedule for the Full Indexer Report.
ER1008 Failed to save the schedule details.
ER1009 Failed to fetch response, contact system administrator.
ER1010 The request is missing the required header values {0} or is incorrect.
ER1011 Failed to get Free text search configuration properties.
ER1012 Could not save user credentials in the downstream service, contact your system administrator.
ER1013 Cannot fetch data since a Full Indexing Schedule is pending, try again later.
ER1014 Schedule time is older than current time.
ER1015 Enter valid search term and try again.
ER1050 Failed to fetch possible suspects since Suspect Duplication Processing is disabled, contact your system administrator.
ER1051 Failed to fetch possible suspects since Free Text Search is disabled, contact your system administrator.
ER1101 Failed to get data model summary.
ER1102 Failed to get contributors for collaboration area.
ER1103 Failed to get items by age in collaboration area.
ER1104 Failed to get workflow.
ER1105 Failed to get ACGS.
ER1106 Failed to get roles.
ER1107 Failed to get users.
ER1108 Failed to get lookup tables.
ER1109 Dashboard Urls creation failed.
ER1110 Role creation failed.
ER1111 Roles JSON parsing failed.
ER1112 Failed to get catalog items by category.
ER1113 Failed to get performers for step.

ER1114 Failed to get steps.
ER1115 Failed to get Remote/Request Host and Address.
ER1116 Completeness attribute name is not defined in the common.properties file.
ER1117 Completeness lookup table name is not defined in the common.properties file.
ER1118 Completeness attribute {0} is not defined in spec {1}.
ER1119 Container ID is not valid.
ER1120 Entry for ID {0} not found.
ER1121 Attribute collection for Completeness not defined in lookup table {0} for container {1}
ER1122 One or more age split values is invalid.
ER1123 One or more completeness split values is invalid.
ER1201 Chat bot details cannot be empty.
ER1202 Chat bot details not found.
ER1203 Chat bot details are invalid.
ER1204 Chat bot Assistant API Key is invalid.
ER1205 Chat bot Assistant Url is invalid.
ER1206 Chat bot Assistant ID is invalid.
ER1207 Invalid Session.
ER1208 Unable to connect to server.
ER1209 Failed to get company attribute details.
ER1301 Failed to get Spec.
ER1302 Failed to get Spec count.
ER1303 Cannot delete Spec {0} due to dependencies, remove the dependencies, and then try again.
ER1304 Unable to retrieve spec. The spec does not exist or may have been deleted.
ER1305 Failed to delete Spec.
ER1306 No Lookup Table found.
ER1307 Failed to get Lookup Tables.
ER1308 Failed to create or update spec.
ER1309 Failed to get attribute collection.
ER1310 Failed to get system-wide access.
ER1311 Failed to export spec.
ER1312 Failed to import spec.
ER1313 Invalid spec type.
ER1314 Value must be a positive integer number.
ER1316 Spec name cannot be more than {0} characters.
ER1317 Duplicates are not allowed {0}.
ER1401 Display attribute not found.
ER1501 Failed to validate GLN(Global Location Number).
ER1502 Failed to create partner.
ER1503 Trading Partner with the provided GLN exists.
ER1504 Failed to validate GTIN(Global Trade Item Number).
ER1505 Item with the provided GTIN exists.
ER1506 Failed to create item.
ER1507 Failed to register item.
ER1508 Failed to get transaction.
ER1509 Failed to get trading partner.
ER1510 Failed to add link.
ER1511 Failed to delete link.
ER1512 Register or publish transactions are not allowed when the item is checked out.
ER1513 Failed to retrieve message details.
ER1514 Check digit of the GTIN is incorrect.
ER1515 The GLN supplied is not a valid Information Provider GLN in 1SYNC item management.
ER1516 isOrderableUnit is required.
ER1517 NetWeight value must be greater than zero.
ER1518 If grossWeight is not empty, then the value shall be greater than or equal to 0.
ER1519 GrossWeight must be greater than or equal to NetContent.
ER1520 GrossWeight must be greater than or equal to NetWeight.
ER1521 When Composition Width, Composition Depth, and Hi are all supplied, the mathematical product of the three attributes must equal the totalQuantityOfNextLowerLevelTradeItem (pack).
ER1522 Qty of Next Level Item/Pack cannot be less than 1.
ER1523 The mathematical product of attributes, Number of Complete Layers Count in the Item/GTIN Pallet HI, and Number of Items in a Complete
Layer/GTIN Pallet TI must be equal to the Qty of Next Level Item/Pack when isTradeItemPackedIrregularly is FALSE.
ER1524 Validation for HI and TI failed.
ER1525 alternateItemIdentification/agency and alternateItemIdentification/id are used in pairs. If one is populated, then the other is required.
ER1526 Required field missing (Depth).
ER1527 Validation for weight failed.
ER1528 (Error Code:532) ProductName must not equal the value of DescriptiveSize for TM Sweden.
ER1529 If isTradeItemADespatchUnit equals true, grossWeight must be populated with a value greater than zero.

ER1530 StackingFactor is mandatory for Despatch Units in target market Sweden.
ER1531 If isTradeItemADespatchUnit equals true and isTradeItemNonPhysical equals false or is not used, then grossWeight shall be greater than 0.
ER1532 This is required if Product Size Code attribute is populated.
ER1533 This is required if Product Size Code Maintenance Agency attribute is populated.
ER1534 Net Content attribute is required if the Consumer Indicator is true.
ER1535 This attribute is mandatory for all data recipients. The last occurrence of this attribute must be populated and its corresponding data recipient GLN
must be empty.
ER1536 On first population, endAvailabilityDate shall be later than or equal to today.
ER1537 Start and End availability dates do not match.
ER1538 This is required if Trade Item is Orderable.
ER1539 Value is greater than 35 characters.
ER1540 If Unit of Measure equals MILLIMETER, then its associated value must not have a decimal position.
ER1541 If Unit of Measure does not equal MILLIMETER, then its associated value must not have more than three decimal positions.
ER1542 Width must be less than or equal to depth.
ER1543 discontinuedDateTime shall not be older than effectiveDateTime minus six months.
ER1544 GTIN Name and Language Code are Interdependent Attributes.
ER1545 At least one GTIN Name is required.
ER1546 If targetMarketCountryCode equals (AU, NZ or HU) and nonGTINPalletHi is greater than zero then nonGTINPalletTi shall be greater than zero.
ER1547 One instance must be in Swedish for TM Sweden.
ER1548 ProductName must not be empty for TM Germany and Ireland.
ER1549 If Suggested Retail Price Effective End Date is populated, then the following attributes should also be populated:
Suggested Retail Price
Suggested Retail Price Currency
Suggested Retail Price Basis Per Unit
Suggested Retail Price Basis Per Unit UOM
Suggested Retail Price Effective Start Date

ER1550 This is required if any of the following attributes is populated:
List Price
List Price Currency
List Price Basis Per Unit
List Price Basis Per Unit UOM
List Price Effective Start Date

ER1551 Invalid UOM for target market Sweden.


ER1552 If Pricing On Product is set to true, then Suggested Retail Price and corresponding dependent attributes must be populated.
ER1553 If this attribute is populated, then the following attributes should also be populated:
List Price
List Price Currency
List Price Basis Per Unit
List Price Basis Per Unit UOM
List Price Effective Start Date

ER1554 This is required if any of the following attributes is populated:
Color
Color Code Maintenance Agency
Product Color Description

ER1555 Do not populate this attribute if the Product Type is PL (PALLET) or MX (MOD PALLET (MIXED)).
ER1556 The height of the item cannot exceed 2600 mm.
ER1557 Base Unit Indicator must be populated for an EACH product type.
ER1558 For Sweden TM, packagingMaterialCode and packagingMaterialCompositionQuantity are mandatory when the base unit indicator is true. Other alternatives for this rule are:
The item has packagingTermsAndCondition=12 (package recycling fee paid).
The item has packagingMarkedReturnable=true.
The item has packagingTypeCode=not packed / unpacked (NE).

ER1559 Peg Horizontal, Peg Vertical and Peg Hole Number attributes are Interdependent.
ER1560 The Unit of Measure for Peg Horizontal and Peg Vertical should be consistent for each Trade Item.
ER1561 Functional Name cannot contain the same text as TradeItem Size Description or BrandName if TradeItem Consumer Unit Indicator is true.
ER1562 NetContent must be populated.
ER1563 Only Last occurrence of the Data Recipient GLN can hold Empty Value.
ER1564 Do not populate this attribute, if the Product Type is PL (PALLET) or MX (MOD PALLET (MIXED)).
ER1565 PercentageOfAlcoholByVolume must be populated.
ER1566 Minimum Buying Quantity cannot be greater than Maximum Buying Quantity.
ER1567 Ordering Unit Indicator must be true.
ER1568 Minimum Order Quantity cannot be greater than Maximum Order Quantity.
ER1569 GPC Brick attributes can be populated for Consumer Unit only.
ER1570 GPC Brick attributes are Interdependent.
ER1571 GS1 Context attributes can be populated for Consumer Unit only.
ER1572 GS1 Context Characteristics Measurement attributes(UOM/Value) missing.
ER1573 GS1 Context Characteristic Code cannot be empty.
ER1574 GS1 Context Attributes Characteristics Value/Characteristics Measurement/Characteristics Description cannot be empty.
ER1575 Orientation Type and Orientation Preference Sequence are Interdependent.
ER1576 Duplicate value must not be populated in more than one instance of Orientation Preference Sequence.
ER1577 Nesting Type and Nesting Direction are Interdependent.
ER1578 Electrical Usage Class Mode Code and Maximum Electrical Usage Class Mode UOM/Value are Interdependent attributes.
ER1579 Value must be greater than zero.
ER1580 There must be at most one iteration of shortDescription per language code.
ER1581 There must be at most one iteration of invoiceName per language code.
ER1582 There must be at most one iteration of color/description per language code.
ER1583 There must be at most one iteration of priceComparisonMeasurement per measurementUnitCode.
ER1584 There must be at most one iteration of additional description per language code.
ER1585 There must be at most one iteration of netContent per Unit Of Measure.
ER1586 There shall be at most one iteration of maximumFeedingAmount per measurementUnitCode.
ER1587 There shall be at most one iteration of minimumFeedingAmount per measurementUnitCode.
ER1588 If minimumWeightOfAnimalBeingFed is not empty, then it shall not exceed one iteration per measurementUnitCode.
ER1589 If feedingAmountBasisDescription is not empty, then it shall not exceed one iteration per language code.
ER1590 There shall be at most one iteration of animalNutrientQuantityContained per measurementUnitCode.
ER1591 There shall be at most one iteration of animalNutrientQuantityContainedBasis per measurementUnitCode.
ER1592 There shall be at most one iteration of tradeItemLicenseTitle per language code.
ER1593 There shall be at most one iteration of feedLifeStage per language code.
ER1594 There shall be at most one iteration of recommendedFrequencyOfFeeding per language code.
ER1595 There shall be at most one iteration of suggestedRetailPrice per measurementUnitCode.
ER1596 There shall be at most one iteration of tradeItemFinishDescription per language code.
ER1597 There shall be at most one iteration of targetConsumerAge per language code.
ER1598 There shall be at most one iteration of childNutritionQualifiedValue per measurementUnitCode.
ER1599 There shall be at most one iteration of childNutritionValue per measurementUnitCode.
ER1600 There shall be at most one iteration of nonCreditableGrainDescription per language code.
ER1601 There shall be at most one iteration of nonCreditableGrainAmount per measurementUnitCode.
ER1602 There shall be at most one iteration of nonCreditableGrainDescription per language code.
ER1603 There shall be at most one iteration of totalCreditableIngredientTypeAmount per measurementUnitCode.
ER1604 If targetMarketCountryCode equals United States and ChildNutritionQualifier or ChildNutritionLabel is used, then childNutritionQualifiedValue, childNutritionValue, childNutritionLabelStatement, and childNutritionProductIdentification shall be used.
ER1605 The attributes doesTradeItemMeetWholeGrainRichCriteria and creditableGrainGroupCode shall be used for Target Market United States when
doesTradeItemContainNonCreditableGrains is used.
ER1606 The attribute nonCreditableGrainAmount shall be used for Target Market United States when the attribute doesTradeItemContainNonCreditableGrains is TRUE.
ER1607 The attributes doesTradeItemContainNonCreditableGrains and creditableGrainGroupCode shall be used for Target Market United States when
doesTradeItemMeetWholeGrainRichCriteria is used.
ER1608 The attributes doesTradeItemContainNonCreditableGrains and doesTradeItemMeetWholeGrainRichCriteria shall be used for Target Market United
States when creditableGrainGroupCode is used.
ER1609 The attribute creditableAmount is missing. For target market (United States) creditableAmount shall be used when creditableIngredientDescription is
used.
ER1610 The attribute totalVegetableSubgroupAmount is missing. For target market (United States) totalVegetableSubgroupAmount shall be used when
vegetableSubgroupcode is used.
ER1611 For target market (United States), creditableIngredientDescription shall be used if creditableIngredientTypeCode is used.
ER1612 For target market (United States), both creditableIngredientTypeCode and totalCreditableIngredientTypeAmount shall be used at least once if class
ProductFormulationStatement is used.
ER1613 If targetMarketCountryCode equals (United States) and productFormulationStatementRegulatoryBodyCode is used, then
totalPortionWeightAsPurchased shall be used.
ER1614 When Is Item Available For Special Order is true, then Special Order Quantity Minimum, Special Order Quantity Multiple, and Special Order Quantity
Lead Time are required.
ER1615 Invalid script content.
ER1616 Drained weight must be less than or equal to its netWeight.
ER1617 Drained weight must be less than or equal to its grossWeight.
ER1618 The BrandName must not be empty.
ER1619 This is a mandatory field.
ER1620 Invalid Unit of Measurement.
ER1621 Enter a valid value.
ER1622 Product Description is mandatory for all business units for target market France.
ER1623 If targetMarketCountryCode equals United States and productFormulationStatementRegulatoryBodyCode is used, then totalPortionWeightAsPurchased shall be used.
ER1624 Failed to register item.
ER1625 There shall be at most one iteration of ingredientName per language code.
ER1626 There shall be at most one iteration of servingSuggestion per language code.
ER1627 There shall be at most one iteration of dailyValueIntakeReference per language code.

ER1628 There shall be at most one iteration of servingSizeDescription per language code.
ER1629 There shall be at most one iteration of precautions per language code.
ER1630 There shall be at most one iteration of preparationInstructions per language code.
ER1631 There shall be at most one iteration of maximumOptimumConsumptionTemperature per Unit Of Measure.
ER1632 There shall be at most one iteration of minimumOptimumConsumptionTemperature per Unit Of Measure.
ER1633 There shall be at most one iteration of minimumFishMeatPoultryContent per Unit Of Measure.
ER1634 There shall be at most one iteration of physiochemicalCharacteristicValue per Unit Of Measure.
ER1635 There shall be at most one iteration of quantityContained per Unit Of Measure.
ER1636 There shall be at most one iteration of cheeseMaturationPeriodDescription per language code.
ER1637 There shall be at most one iteration of contentDescription per language code.
ER1638 There shall be at most one iteration of enumerationValueDefinition per language code.
ER1639 There shall be at most one iteration of nonfoodIngredientStatement/statement per language code.
ER1640 There shall be at most one iteration of ingredientDefinition per language code.
ER1641 There shall be at most one iteration of organismMaximumValue per Unit Of Measure.
ER1642 There shall be at most one iteration of organismReferenceValue per Unit Of Measure.
ER1643 There shall be at most one iteration of organismWarningValue per Unit Of Measure.
ER1644 There shall be at most one iteration of dietTypeDescription per language code.
ER1645 There shall be at most one iteration of physioChemicalCharacteristicValue per Unit Of Measure.
ER1646 There shall be at most one iteration of fileFormatDescription per Unit Of Measure.
ER1647 There shall be at most one iteration of nutrientBasisQuantityDescription per language code.
ER1648 There shall be at most one iteration of ingredientStatement per language code.
ER1649 The attributes isWoodAComponentOfThisItem, isItemAvailableForSpecialOrder, and isItemSubjectToUSPatent are used in pairs. If one is populated, the others are required.
ER1650 Invalid operation found for processing links.
ER1651 Provide a valid input.
ER1652 Could not process link, contact system administrator.
ER1653 Requested resource not found, provide a valid input.
ER1654 Failed to associate selected product type, select valid product type.
ER1655 Resource already exists, provide a valid input.
ER1656 Failed to publish item.
ER1657 If netWeight is not empty, then unitOfMeasureCode must be from same measuring system across whole hierarchy.
ER1658 The Net Weight of the parent GTIN is less than that of the child GTIN.
ER1659 If height is not empty, then unitOfMeasureCode must be from same measuring system across whole hierarchy.
ER1660 If width is not empty, then unitOfMeasureCode must be from same measuring system across whole hierarchy.
ER1661 If depth is not empty, then unitOfMeasureCode must be from same measuring system across whole hierarchy.
ER1662 If grossWeight is not empty, then unitOfMeasureCode must be from same measuring system across whole hierarchy.
ER1663 If drainedWeight is not empty, then unitOfMeasureCode must be from same measuring system across whole hierarchy.
ER1664 If the hierarchy is published, the sum of the children items weight cannot exceed the value of the parent item.
ER1665 If productType is equal to MIXED_MODULE and parent item exists, then parent item productType must equal TRANSPORT_LOAD, PALLET, or
MIXED_MODULE.
ER1666 If productType is equal to MIXED_MODULE, then child item productType must not equal TRANSPORT_LOAD.
ER1667 If productType is equal to DISPLAY_SHIPPER, then parent item productType must not equal BASE_UNIT_OR_EACH or PACK_OR_INNER_PACK.
ER1668 If productType is equal to DISPLAY_SHIPPER, then child item productType must not equal TRANSPORT_LOAD, MIXED_MODULE or PALLET.
ER1669 If productTypeCode is equal to CASE, then child item productTypeCode must not equal TRANSPORT_LOAD, MIXED_MODULE or PALLET.
ER1670 If productTypeCode is equal to PACK_OR_INNERPACK, then child item productTypeCode must not equal TRANSPORT_LOAD, PALLET, MIXED_MODULE, DISPLAY_SHIPPER, or CASE.
ER1671 If packagingInformation or packagingWeight is not empty, then unitOfMeasureCode must be from same measuring system across whole hierarchy.
ER1672 The discontinuedDate is required if the item has a child and the lastShipDate is populated on the child item.
ER1673 If lastShipDate is populated on the lowest level of the hierarchy, discontinuedDate of the parent items must be less than or equal to lastShipDate of
the base item.
ER1674 Failed to get the response count.
ER1675 Failed to get the item response transaction detail.
ER1676 The EACH GTIN cannot have children.
ER1677 Ordering Unit Indicator must be true for at least one GTIN in the hierarchy.
ER1678 Base unit indicator must be true only for the lowest level GTIN in the hierarchy.
ER1679 If the trade item has a child item, the value of isTradeItemABaseUnit must be false.
ER1680 Within each hierarchy one or more trade items must have isTradeItemAnInvoiceUnit equals true.
ER1681 If productType is TRANSPORT_LOAD, and the item has a parent, then the productType of the parent item can be TRANSPORT_LOAD only.
ER1682 Target Market needs to be populated for one of the GTINs in this item hierarchy, and that Target Market needs to match the Target Market that is assigned to the other GTINs in this item hierarchy.
ER1683 If productType is PALLET, and the item has a parent, then the productType of the parent item can only be TRANSPORT_LOAD or PALLET.

ER1684 If productType is PALLET, and the item has children, then the productType of the children can never be TRANSPORT_LOAD.
ER1685 If productType is CASE, and the item has a parent, then the productType of the parent item can never be BASE_UNIT_OR_EACH or PACK_OR_INNER_PACK.
ER1686 If productType is Pack, and the item has a parent, then the productType of the parent item can never be Each.
ER1687 A parent GTIN cannot be linked as a child to any of its child items in the hierarchy.
ER1688 Hierarchy ready for publishing cannot contain items in draft status.
ER1689 The parent item must have the same Cancel Date as the child.
ER1690 Quantity of the Next Level Item must be 1 on each GTIN, and the sum of the quantities of direct child links must equal the quantity of the Next Level
Item of the Parent Item.
ER1691 The isWoodAComponentOfThisItem, isItemAvailableForSpecialOrder, isItemSubjectToUSPatent, isSecurityTagPresent, and the optional attribute isItemAvailableForDirectToConsumerDelivery are dependent and must co-exist if isTradeItemAConsumerUnit is true.
ER1692 The grossWeight of the GTIN must be greater than 96% of the sum of that GTIN's packagingWeight plus the sum of the grossWeight of all next lower-level child items (whether or not packagingWeight is populated in the child items).
ER1693 If color/agency is populated, then color/code must not be empty.
ER1694 If cataloguePrice/price is populated, then cataloguePrice/basisPerUnit must not be empty.
ER1695 If priceComparisonContentType is not empty, then priceComparisonMeasurement must not be empty.
ER1696 If suggestedRetail/price is populated, then suggestedRetail/basisPerUnit must not be empty.
ER1698 Product Yield Value must be provided if Product Yield Type Code is provided.
ER1699 Product Yield Type Code must be provided if Product Yield Value is provided.
ER1700 If isIngredientGeneric is not empty, then ingredientStrength must not be empty.
ER1701 If nutritionalClaimNutrientElementCode is not empty, then nutritionalClaimTypeCode must not be empty.
ER1702 If there is more than one iteration of ingredientSequence, then ingredientSequence and ingredientName must not be empty.
ER1703 If targetMarketCountryCode equals (AU, NZ or HU) and nonGTINPalletHi is greater than zero then nonGTINPalletTi shall be greater than zero.
ER1704 If nutrientTypeCode is used, then at least quantityContained or dailyValueIntakePercent shall be used.
ER1705 If gpcAttributeTypeCode is used, then gpcAttributeValueCode shall be used.
ER1706 If fatPercentageInDryMatter is not empty, then value must be greater than or equal to 0 and less than or equal to 100.00.
ER1707 Missing an all-or-none dependent attribute (nonGTINPalletHi or nonGTINPalletTi) that is required in the segment. The nonGTINPalletHi, nonGTINPalletTi, and numberOfItemsPerPallet attributes are dependent.
ER2000 Missing Lookup Table configurations, contact your administrator.
ER2001 Failed to generate rendition, try again.
ER2002 Cannot generate rendition for already generated assets.
ER2003 Missing channel name or resolution in the Lookup Table.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Integrations REST APIs


The Integrations REST APIs are a set of APIs that have the same functionality as the existing APIs. The main difference is that the input for these REST APIs is "names" instead of the traditional "IDs".

Product Master supports the following types of Integrations REST APIs.

Table 1. Integrations REST APIs - Catalog

Request Method | Description
POST | Add a new item in the catalog.
PUT | Update an item in the catalog.
DELETE | Delete an item from the catalog.
GET | Fetch all the items based on the catalog name.
GET | Fetch an item based on the catalog name and the primary key of the item.
GET | Get spec schema for a catalog.
POST | Load bulk items to a queue to import to the catalog.
For more information, see Integrations REST APIs - Catalog.
Table 2. Integrations REST APIs - Collaboration Area

Request Method | Description
PUT | Performs update operation on the entries in the collaboration area, based on the step name and collaboration name.
POST | Performs add operation on the entries in the collaboration area step, based on the step name and collaboration name.
For more information, see Integrations REST APIs - Collaboration Area.
Table 3. Integrations REST APIs - Hierarchy

Request Method | Description
POST | Add a new category.
PUT | Add or update a category.
DELETE | Delete the categories.
GET | Fetch all the categories based on the hierarchy's name.
GET | Fetch a category based on the category primary key and hierarchy name.
GET | Fetch all the subcategories based on the category primary key and hierarchy name.
For more information, see Integrations REST APIs - Hierarchy.
Table 4. Integrations REST APIs - Lookup table

Request Method | Description
POST | Add a new entry in the Lookup table.
PUT | Update an existing entry in the Lookup table.
DELETE | Delete an entry from the Lookup table.
For more information, see Integrations REST APIs - Lookup Table.

Table 5. Integrations REST APIs - Search

Request Method | Description
POST | Search for an item in the catalog based on the search criteria.
For more information, see Integrations REST APIs - Search.

Common attributes
Multi-occurrence attributes: A complete path should be provided in the "path" attribute.
1. If the attribute path is valid and the multi-occurrence instance is not present, a new multi-occurrence instance is added, and the value is set.
2. If the attribute instance is present and the value that is supplied is empty, the multi-occurrence instance is deleted.

Request body

{
"entryInfoList": [
{
"primaryKey": "6001",
"attributeList": [
{
"path": "Producr_Hierarchy_Spec/multi#1",
"value": "1th occur"
},
{
"path": "Product_Hierarchy_Spec/multi#2",
"value": ""
}
]
}
]
}

Response body

{
"entries": [
{
"primaryKey": "6000",
"message": "Successfully updated [6000] category",
"errors": []
}
]
}

Grouping attributes:
If a correct full path is provided, the existing value gets updated.
If the attribute "path" is not present, but the path that is given is valid, the attribute instance is added, and the value is set.

Request body

{
"entryInfoList": [
{
"primaryKey": "6001",
"attributeList": [
{
"path": "Product_Hierarchy_Spec/Grouping/Attri1",
"value": "Grouping Attri1 value"
}
]
}
]
}

Response body

{
"entries": [
{
"primaryKey": "6000",
"message": "Successfully updated [6000] category",
"errors": []
}
]
}

Lookup attributes: The value should be in the following format:


<Primary key of the Lookup table entry>

Request body

{
"entryInfoList": [
{
"primaryKey": "6001",
"attributeList": [
{
"path": "Product_Hierarchy_Spec/lookup_attr",
"value": "4"
}
]
}
]
}

Response body

{
"entries": [
{
"primaryKey": "6000",
"message": "Successfully updated [6000] category",
"errors": []
}
]
}

Linked attributes: To update or add a linked attribute, pass the primary key of the attribute.

Request body

{
"entryInfoList": [
{
"primaryKey": "6001",
"attributeList": [
{
"path": "Sample_Hierarchy_Spec/linkedAttr",
"value": "2"
}
]
}
]
}

Response body

{
"entries": [
{
"primaryKey": "6000",
"message": "Successfully updated [6000] category",
"errors": []
}
]
}

Date attributes: The value should be in the following format.


yyyy-MM-dd HH:mm:ss
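
For example, a minimal Python sketch (an illustration only; it assumes the pattern above maps to the equivalent strftime directives) that produces a value in this format:

from datetime import datetime

# "yyyy-MM-dd HH:mm:ss" corresponds to "%Y-%m-%d %H:%M:%S" in Python.
value = datetime(2020, 10, 28, 12, 0, 0).strftime("%Y-%m-%d %H:%M:%S")
print(value)  # prints: 2020-10-28 12:00:00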

Request body

{
"entryInfoList": [
{
"primaryKey": "6001",
"attributeList": [
{
"path": "Sample_Hierarchy_Spec/Date",
"value": "2020-10-28 12:00:00"
}

]
}
]
}

Response body

{
"entries": [
{
"primaryKey": "6000",
"message": "Successfully updated [6000] category",
"errors": []
}
]
}

Relationship attributes: The value should be in the following format.


<Catalog name> >> <Primary Key of the related Item>

Request body

{
"entryInfoList": [
{
"primaryKey": "6001",
"attributeList": [
{
"path": "Product_Hierarchy_Spec/RelationShipAttr",
"value": "Product Catalog>>1"
}
]
}
]
}

Response body

{
"entries": [
{
"primaryKey": "6000",
"message": "Successfully updated [6000] category",
"errors": []
}
]
}

For more detailed information on the REST APIs, see the IBM Product Master Integrations REST APIs swagger file.

Integrations REST APIs - Catalog


Following are some sample Integrations REST APIs for the Catalog.
Integrations REST APIs - Hierarchy
Following are some sample Integrations REST APIs for the Hierarchy.
Integrations REST APIs - Collaboration Area
Following are some sample Integrations REST APIs for the Collaboration Area.
Integrations REST APIs - Lookup Table
Following are some sample Integrations REST APIs for the Lookup Table.
Integrations REST APIs - Search
Following are some sample Integrations REST APIs for the Search.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Integrations REST APIs - Catalog


Following are some sample Integrations REST APIs for the Catalog.

POST: Add new item in the catalog.

Request body

{
"entryInfoList": [
{
"primaryKey": "Samsung",
"attributeList": [

{
"attributePath": "Product_Catalog_Spec/Item_Name",
"value": "Samsung"
}
],
"categoryList": [
{
"categoryFullPath": "Product Hierarchy/Electronics",
"action": "ADD"
}
]
}
]
}

Response body

{
"entries": [
{
"primaryKey": "Samsung",
"message": "Successfully created item",
"errors": []
}
]
}
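
For illustration, the following minimal Python sketch sends the preceding request with the requests library. The /catalogs/<catalogName>/items path and the basic-authentication credentials are assumptions for this sketch; verify the exact route and authentication scheme in the IBM Product Master Integrations REST APIs swagger file.

import requests

# Placeholder host and credentials; replace with your environment's values.
BASE_URL = "https://productmaster.somehost.com/api/v1/integrations"

payload = {
    "entryInfoList": [
        {
            "primaryKey": "Samsung",
            "attributeList": [
                {"attributePath": "Product_Catalog_Spec/Item_Name", "value": "Samsung"}
            ],
            "categoryList": [
                {"categoryFullPath": "Product Hierarchy/Electronics", "action": "ADD"}
            ],
        }
    ]
}

# The items sub-path is assumed; check the swagger file for the exact route.
response = requests.post(
    f"{BASE_URL}/catalogs/Product Catalog/items",
    json=payload,
    auth=("username", "password"),  # assumed basic authentication
    timeout=30,
)
response.raise_for_status()
for entry in response.json().get("entries", []):
    print(entry["primaryKey"], entry["message"])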

PUT: Update the existing item in the catalog.

Request body

{
"entryInfoList": [
{
"primaryKey": "Samsung",
"attributeList": [
{
"attributePath": "Product_Catalog_Spec/Item_Name",
"value": "Samsung"
}
],
"categoryList": [
{
"categoryFullPath": "Product Hierarchy/Mobile",
"action": "ADD"
},
{
"categoryFullPath": "Product Hierarchy/Electronics",
"action": "REMOVE"
}
]
}
]
}

Response body

{
"entries": [
{
"primaryKey": "Samsung",
"message": "Successfully updated item",
"errors": []
}
]
}

DELETE: Delete the existing item from the catalog.

Request body

{
"pks": ["Samsung",” item pk 2”]
}

Response body

{
"message": "Successfully deleted 1 item(s)"
}

GET: Fetch all the items based on the catalog name.

Response body

{
"primaryKeyNodeName": "performance_cat/pk ",
"displayNodeName": "performance_cat/pk ",
"entryInfoList": [
{
"primaryKey": "1",
"entryData": {
"performance_spec/lookup": "Lookup_spec_table»2»",
"performance_spec/date": "2020-04-17T00:00:00+0530",
"performance_spec/user": "test123",
"Approval Secondary Spec/Approval Details": [
{
"Approval Secondary Spec/Approval Details/Comments": null,
"Approval Secondary Spec/Approval Details/User": null,
"Approval Secondary Spec/Approval Details/Modified Date": null
}
],
"performance_spec/Number Enum": null,
"performance_spec/pk": "1"
},
"status": "CHECKEDOUT",
"entryCompletenessInfoList": null,
"parentCategories": [],
"collaborationIds": null,
"entryCollabInfoList": [
{
"entryId": 62003,
"collabId": 6402,
"collabName": "performance_collab",
"stepInfoList": [
{
"stepId": 6205,
"stepName": "Step 2",
"performer": true
}
],
"attributeList": null
}
]
},
{
"primaryKey": "201",
"entryData": {
"performance_spec/lookup": null,
"performance_spec/date": "2020-04-15T00:00:00+0530
"performance_spec/user": "User1",
"performance_spec/pk": "201"
},
"status": "CHECKEDOUT",
"entryCompletenessInfoList": null,
"parentCategories": [],
"collaborationIds": null,
"entryCollabInfoList": [
{
"entryId": 62203,
"collabId": 6402,
"collabName": "performance_collab",
"stepInfoList": [
{
"stepId": 6207,
"stepName": "Step 1",
"performer": true
}
],
"attributeList": null
}
]
}
],
"containerName": "performance_cat",
"totalCount": 2
}

GET: Fetch an item based on the catalog name and the primary key of the item.

Response body

{
"primaryKeyNodeName": "performance_spec/pk",
"displayNodeName": "performance_spec/pk",
"entryInfoList": [
{
"primaryKey": "201",
"entryData": {
"performance_spec/lookup": null,
"performance_spec/lookup_multi": [],
"performance_spec/Status": null,
"performance_spec/Timezone": null,
"performance_spec/date": "2020-04-15T00:00:00+0530",
"performance_spec/Currency": null,

"performance_spec/Number": null,
"performance_spec/user": "User1",
"performance_spec/Linked": null,
"performance_spec/Relationship": null,
"Approval Secondary Spec/Approval Details": [],
"performance_spec/Number Enum": null,
"performance_spec/pk": "201"
},
"status": "CHECKEDOUT",
"entryCompletenessInfoList": null,
"parentCategories": [
"performance_hier/1"
],
"collaborationIds": null,
"entryCollabInfoList": null
}
],
"containerName": "performance_cat",
"checkedOutAttributes": [
"User",
"Approval Details",
"-3",
"Comments",
"Modified Date"
]
}

GET: Get spec schema for a catalog.


URL - https://productmaster.somehost.com/api/v1/integrations/catalogs/<catalogName>/schema?category=<categoryName>

Request body

NA

Response body

{
"$schema": "https://json-schema.org/draft/2020-12/schema",
"$id": "https://productmaster.com/itemSchema.json",
"title": "Catalog Spec Definition",
"description": "Attributes from primary & secondary specs for a catalog",
"type": "object",
"properties": {
"Products Spec": {
"type": "object",
"properties": {
"Product Id": {
"type": "integer"
},
"Product Name": {
"type": "string"
},
"Product Manual": {
"type": "string"
},
"Modified Date": {
"type": "string"
},
"Modified Date Timezone": {
"type": "string"
},
"Availablity": {
"type": "boolean"
},
"Product Image": {
"type": "string"
},
"Product Image URL": {
"type": "string"
},
"Product Thumbnail Image": {
"type": "string"
},
"Product Thumbnail Image URL": {
"type": "string"
},
"Vendor Lookup": {
"type": "string"
},
"Product Additional Details": {
"type": "string"
},
"Discount": {
"type": "number"
},
"Related Product": {
"type": "string"
},
"Price": {
"type": "number"

},
"Product Description": {
"type": "object",
"properties": {
"Short Description": {
"type": "string"
},
"Long Description": {
"type": "string"
}
}
}
}
},
"Mobiles Spec": {
"type": "object",
"properties": {
"Variants": {
"type": "array",
"items": {
"type": "object",
"properties": {
"Internal Storage in GB": {
"type": "integer"
},
"Available Colours": {
"type": "array",
"items": {
"type": "string"
},
"minItems": 0,
"maxItems": 3
}
}
},
"minItems": 0,
"maxItems": 3
}
}
},
"Clothes Spec": {
"type": "object",
"properties": {
"Available Sizes": {
"type": "array",
"items": {
"type": "string"
},
"minItems": 0,
"maxItems": 4
}
}
}
},
"required": [
"Product Id",
"Product Name"
]
}
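
As a usage sketch, the returned schema can be used to validate an item payload on the client side before you call the add or update APIs. This example uses the third-party jsonschema package; the host, catalog name, and credentials are placeholders.

import requests
from jsonschema import ValidationError, validate

SCHEMA_URL = ("https://productmaster.somehost.com/api/v1/integrations"
              "/catalogs/Products Catalog/schema")

# Fetch the spec schema for the catalog (endpoint shown above).
schema = requests.get(SCHEMA_URL, auth=("username", "password"), timeout=30).json()

item = {
    "Products Spec": {
        "Product Id": 100,
        "Product Name": "Apple iPhone 13 Pro",
    }
}

try:
    validate(instance=item, schema=schema)  # raises ValidationError on violations
    print("Item conforms to the catalog spec schema")
except ValidationError as err:
    print("Invalid item:", err.message)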

POST: Load bulk items to a queue to import to the catalog.


URL - https://productmaster.somehost.com/api/v1/integrations/bulkload

Request body

Form-data
itemData:
{
"containerName": "Products Catalog",
"items": [
{
"identifierAttributeName": "Products Spec/Product Name",
"identifierAttributeValue": "Apple iPhone 13 Pro",
"source": "Channel-1",
"attributes": {
"Products Primary Spec": {
"Product Name": "Apple iPhone 13 Pro",
"Product Manual": "Apple iPhone 13 Pro - User Guide.pdf",
"Modified Date": "2022-05-26T00:00:00+0530",
"Modified Date Timezone": "(GMT+05:30) Calcutta, Chennai, Mumbai, New Delhi",
"Availablity": true,
"Product Image": "Apple iPhone 13 Pro.jpeg",
"Product Image URL": "https://www.image.com/Apple iPhone 13 Pro.jpeg",
"Product Thumbnail Image": "Apple iPhone 13 Pro Thumbnail.jpeg",
"Product Thumbnail Image URL": "https://www.image.com/Apple iPhone 13 Pro Thumbnail.jpeg",
"Vendor Lookup": "Vendor Lookup>>1",
"Product Additional Details": "https: //www.apple.com/in/iphone-13-pro/",
"Discount": 5.00,
"Related Product": "Products Catalog>>100",
"Price": 130000,
"Product Description": {
"Short Description": "Apple iPhone 13 Pro was launched on September 2021",

"Long Description": "The phone comes with a 6.7 inch touchscreen display with a resolution of
1284 x 2778 pixels at a pixel density of 458 pixels per inch (ppi). It iOS 15 and is powered by a 4352mAh non-
removable battery."
}
},
"Mobiles Spec": {
"Variants": [
{
"Internal Storage in GB": 128,
"Available Colours": [
"White",
"Pink"
]
},
{
"Internal Storage in GB": 256,
"Available Colours": [
"Grey",
"Gold",
"White"
]
},
{
"Internal Storage in GB": 512,
"Available Colours": [
"Silver",
"Purple"
]
}
]
}
},
"mappings": [
"Product Hierarchy/Mobiles"
]
},
{
"identifierAttributeName": "Products Spec/Product Name",
"identifierAttributeValue": "Janasya Women's Red Checkered Cotton Top",
"source": "Channel2",
"attributes": {
"Products Primary Spec": {
"Product Name": "Janasya Women's Red Checkered Cotton Top",
"Availablity": true,
"Product Image": "Janasya Women's Red Checkered Cotton Top.jpeg",
"Product Image URL": "www.image.com/Janasya Women's Red Checkered Cotton Top.jpeg",
"Product Thumbnail Image": "Janasya Women's Red Checkered Cotton Top Thumbnail.jpeg",
"Product Thumbnail Image URL": "www.image.com/Janasya Women's Red Checkered Cotton Top
Thumbnail.jpeg"
},
"Clothes Spec": {
"Available Sizes": [
"XS",
"S",
"M",
"L"
]
}
},
"mappings": [
"Product Hierarchy/Clothes"
]
}
]
}

Response body

Success

{
"message": "Item(s) published successfully"
}

Failure

{
"errorCode": "ER1801",
"errorMessage": "Failed to publish item request, try again"
}
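
Because the bulk load endpoint takes form data with an itemData part (as shown in the request body above), a client sends the JSON as a form field rather than as a raw request body. A minimal Python sketch, with the host and credentials as placeholders:

import json
import requests

BULKLOAD_URL = "https://productmaster.somehost.com/api/v1/integrations/bulkload"

item_data = {
    "containerName": "Products Catalog",
    "items": [],  # populate as in the request body above
}

# Send the payload as the "itemData" form-data field.
response = requests.post(
    BULKLOAD_URL,
    data={"itemData": json.dumps(item_data)},
    auth=("username", "password"),  # assumed basic authentication
    timeout=60,
)
print(response.status_code, response.json())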

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Integrations REST APIs - Hierarchy

Following are some sample Integrations REST APIs for the Hierarchy.

PUT: Add or update a category.

Request body

{
"entryInfoList": [
{
"primaryKey": "Mobile and Accessories",
"attributeList ": [
{
"path": "Product_Hierarchy_Spec/Category_Name",
"value": "Cell phone and Accessories"
}
],
"categoryList": [
{
"categoryFullPath": "Product Hierarchy/Electronics",
"action": "REMOVE"
},
{
"categoryFullPath": "Product Hierarchy/Infrastructure",
"action": "ADD"
}
],
"specInfoList": [
{
"specName": "ElectronicsSecondaryspec",
"type": "ITEM",
"actionType": "REMOVE",
"addAcrossMaping": true,
"addToChildren": true,
"catalogNames": [
"Product Catelog"
]
}
]
},
{
"primaryKey": "Cloud Computing Services1",
"attributeList": [
{
"attributePath": "Product_Hierarchy_Spec/Category_Name",
"value": "IT Accessories"
}
],
"categoryList": [
{
"categoryFullPath": "Product Hierarchy/Electronics",
"action": "ADD"
}
],
"specInfoList": [
{
"specName": "Product_Hierarchy_Data_spec",
"type": "STANDALONE",
"actionType": "ADD"
}
]
}
]
}

Response body

{
"entries": [
{
"primaryKey": " Mobile and Accessories ",
"message": " Successfully updated [Mobile and Accessories] category",
"errors": []
},
{
"primaryKey": " Cloud Computing Services1",
"message": " Successfully created [Cloud Computing Services1] category",
"errors": []
}
]
}

POST: Add a category.

Request body

{
"entryInfoList": [
{
"primaryKey": "Cloud Computing Services1",
"attributeList": [

{
"attributePath": "Product_Hierarchy_Spec/Category_Name",
"value": "IT Accessories"
}
],
"categoryList": [
{
"categoryFullPath": "Product Hierarchy/Electronics",
"action": "ADD"
}
],
"specInfoList": [
{
"specName": "Product_Hierarchy_Data_spec",
"type": "STANDALONE",
"actionType": "ADD"
}
]
}
]
}

Response body

{
"entries": [
{
"primaryKey": " Cloud Computing Services1",
"message": " Successfully created [Cloud Computing Services1] category",
"errors": []
}
]
}

DELETE: Delete the categories from the hierarchy.

Request body

{
"pks": [
"Electronics",
"Books",
"Cloth"
]
}

Response body

{
"successCount": 2,
"failureCount": 1,
"failingPrimaryKeys": [
{
"categoryPk": " Electronic ",
"reason": " You cannot delete category [Electronic] since it is checked out into a
collaboration area."
}
]
}
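
Because the delete response reports partial failures through successCount, failureCount, and failingPrimaryKeys, a client should inspect those fields instead of relying on the HTTP status alone. A small Python sketch of that handling; the response shape is taken from the example above:

def report_category_deletes(response_body: dict) -> None:
    # Log the outcome of a bulk category delete, including partial failures.
    print(f"Deleted {response_body['successCount']} categories, "
          f"{response_body['failureCount']} failed")
    for failure in response_body.get("failingPrimaryKeys", []):
        print(f"  {failure['categoryPk']}: {failure['reason']}")

report_category_deletes({
    "successCount": 2,
    "failureCount": 1,
    "failingPrimaryKeys": [
        {
            "categoryPk": "Electronics",
            "reason": "You cannot delete category [Electronics] since it is "
                      "checked out into a collaboration area.",
        }
    ],
})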

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Integrations REST APIs - Collaboration Area


Following are some sample Integrations REST APIs for the Collaboration Area.

POST: Perform add operation on entries (item or category) in the collaboration area, based on the step name and collaboration name.

Request body

{
"entryInfoList": [
{
"primaryKey": "1002",
"attributeList": [
{
"path": "productSpec/productName",
"value": "Test"
}
],

"categoryList": [
{
"categoryFullPath": "Product Hierarchy/Parent Category"
}
]
}
]
}

Response body

{
"entries": [
{
"primaryKey": "1002",
"message": "Successfully added item",
"errors": []
}
]
}

PUT: Update the existing item entry in the collaboration area step based on the collaboration area and step name.

Request body

{
"entryInfoList": [
{
"primaryKey": "Bucket",
"attributeList ": [
{
"path": "Product_Catalog_Spec/Item_Name",
"value": "Bucket"
}
],
"categoryList": [
{
"categoryFullPath": "Product Hierarchy/Home Accessories",
"action": "ADD"
}
]
}
]
}

Response body

{
"entries": [
{
"primaryKey": " Bucket ",
"message": " Successfully updated item ",
"errors": []
}
]
}

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Integrations REST APIs - Lookup Table


Following are some sample Integrations REST APIs for the Lookup Table.

POST: Create new entries in the Lookup table.

Request body

{
"entryInfoList": [
{
"attributeList": [
{
"path": "Connector Lookup Spec/Key",
"value": "GMC"
},
{
"path": "Connector Lookup Spec/Value",
"value": "GMC"
}
]
}
]
}

Response body

{
"entries": [
{
"primaryKey": "GMC",
"message": "Entry added successfully",
"errors": []
}
]
}

PUT: Update the existing entry in the Lookup table.

Request body

{
"entryInfoList": [
{
"primaryKey":"GMC",
"attributeList": [
{
"path": "Connector Lookup Spec/Key",
"value": "GMC1"
},
{
"path": "Connector Lookup Spec/Value",
"value": "GMC1"
}
]
}
]
}

Response body

{
    "entries": [
        {
            "primaryKey": "GMC1",
            "message": "Entry updated successfully",
            "errors": []
        }
    ]
}

DELETE: Delete the existing entries from the Lookup table.


Pass the entry primary keys in the query parameter.
http://localhost:8080/mdm_rest/api/v1/integrations/lookupTables/Connector Lookup Table/entries?entryPKeys=GMC,GMC6

Response body

{
"message": "Successfully deleted 0 out of 2 items.",
"failedEntryPKeys": [
"GMC",
"GMC6"
]
}
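
Because the entry keys are passed in the entryPKeys query parameter (see the URL above), a client can let its HTTP library handle the URL encoding, including the spaces in the Lookup table name. A Python sketch with placeholder credentials:

import requests

URL = ("http://localhost:8080/mdm_rest/api/v1/integrations"
       "/lookupTables/Connector Lookup Table/entries")

# requests percent-encodes the spaces in the path and the query parameter.
response = requests.delete(
    URL,
    params={"entryPKeys": "GMC,GMC6"},
    auth=("username", "password"),  # assumed basic authentication
    timeout=30,
)
body = response.json()
print(body["message"])
for pk in body.get("failedEntryPKeys", []):
    print("Failed to delete:", pk)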

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Integrations REST APIs - Search


Following are some sample Integrations REST APIs for the Search.

POST: Search for an item in the catalog based on the search criteria.

Request body

{
"categoryRestriction": "ANY",
"searchCriteria": [
{
"attributeName": "Product ID",
"operator": "EQUAL",
"attributePath": "Operator Spec/Product ID",
"condition": "AND",
"scope": "any",
"value": [
"Samsung S11"
],
"negate": false
}
],
"categories": [ "Product Details/Electronics" ],
"catalogName": "Product Details Catalog"
}

Response body

{
"primaryKeyNodeName": "Operator Spec/Product ID",
"displayNodeName": "Operator Spec/Product ID",
"entryInfoList": [
{
"primaryKey": "Samsung S11",
"entryData": {
"Operator Spec/number enum": [],
"Operator Spec/Rich Text": [],
"Operator Spec/String Enum Multi": [],
"Operator Spec/Product Summary": "quad camera 64mp",
"Operator Spec/number": [],
"Operator Spec/Product ID": "Samsung S11",
"Operator Spec/Lkp multi": [],
"Operator Spec/integer": [],
"Operator Spec/URL": [],
"Operator Spec/Rln": [],
"Operator Spec/Cost": null,
"Operator Spec/Color Variants": null,
},
"status": "CHECKEDOUT",
"entryCompletenessInfoList": [],
"entryCollabInfoList": [
{
"collabName": "Product Details Collab",
"stepInfoList": [
{
"stepName": "INITIAL",

976 IBM Product Master 12.0.0


"performer": true
}
]
}
],
"parentCategories": [
"Product Details/Electronics"
]
}
],
"containerName": "Product Details Catalog",
"totalCount": 1
}
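
The following Python sketch issues this search and walks the matching entries. The /search endpoint path and the credentials are assumptions for this sketch; confirm the exact route in the swagger file.

import requests

SEARCH_URL = "https://productmaster.somehost.com/api/v1/integrations/search"  # assumed route

criteria = {
    "categoryRestriction": "ANY",
    "searchCriteria": [
        {
            "attributeName": "Product ID",
            "operator": "EQUAL",
            "attributePath": "Operator Spec/Product ID",
            "condition": "AND",
            "scope": "any",
            "value": ["Samsung S11"],
            "negate": False,
        }
    ],
    "categories": ["Product Details/Electronics"],
    "catalogName": "Product Details Catalog",
}

response = requests.post(SEARCH_URL, json=criteria,
                         auth=("username", "password"), timeout=30)
result = response.json()
print(f"{result['totalCount']} item(s) found in {result['containerName']}")
for entry in result["entryInfoList"]:
    print(entry["primaryKey"], entry["status"])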

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Configuration files
The configuration files contain your system configurations that you use to set up and customize Product Master.

admin_properties.xml file parameters


The admin_properties.xml file defines the different hosts that are part of a clustered environment.
application.properties (Indexer) file parameters
Defines the Indexer application.properties file parameters.
application.properties (pim collector) file parameters
Defines the pim collector application.properties file parameters.
common.properties file parameters
Defines the parameters of the common.properties file.
config.json file parameters
Defines the config.json file parameters.
dam.properties file parameters
Defines the dam.properties file parameters.
damConfig.properties file parameters
Defines the damConfig.properties file parameters.
dashboards_config.ini properties file parameters
Defines the dashboards_config.ini properties parameters.
data_entry_properties.xml file parameters
The data_entry_properties.xml file defines the HTML properties and script that provide content for an additional catalog and hierarchy elements that are displayed
in the Single Edit pane.
db.xml file parameters
The db.xml file contains all of the database-related parameters and is located in the $TOP/etc/default directory.
docstore_mount.xml file parameters
The docstore_mount.xml file defines the location of your OS file system mount points and how to handle incoming data in the Document Store.
env_settings.ini file parameters
Defines the env_settings.ini file parameters.
gdsConfig.properties file parameters
Defines the gdsConfig.properties file parameters.
mass-edit.properties file parameters
Defines the mass-edit.properties file parameters.
mdm-cache-config.properties parameters
The cache property files contain the cache memory parameters of Product Master.
mdm-cache-config.xml.template file parameters
You should not update the mdm-cache-config.xml.template file unless you are advised by IBM Support to do so. However, you might need to adjust the
multicast settings depending on your network configuration.
mdmce-roles.json.default file parameters
Defines the parameters of the mdmce-roles.json.default file.
ml_configuration file parameters
Defines the ml_configuration file parameters.
restConfig.properties file parameters
Defines the restConfig.properties file parameters.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

admin_properties.xml file parameters


The admin_properties.xml file defines the different hosts that are part of a clustered environment.

Updating
You can edit the install_dir/etc/default/admin_properties.xml file. Add the host name of each node to this file.

Values
Use localhost if there is only one host in the cluster; otherwise, use host names for each workstation in the cluster. Valid host names without the domain name are allowed; IP addresses cannot be used.

File location
The admin_properties.xml file is located in $TOP/etc/default directory.

Syntax
<admin>
    <cluster>
        <host name="machine_host_name"/>
    </cluster>
</admin>

Example
This is an example of the admin_properties.xml file. In the following example, localhost is specified to indicate that there is only one host in the cluster.

<admin>
    <cluster>
        <host name="localhost"/>
    </cluster>
</admin>
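
For a clustered environment, add one <host> element per node. For example, a hypothetical two-node cluster (the host names are placeholders):

<admin>
    <cluster>
        <host name="pimnode1"/>
        <host name="pimnode2"/>
    </cluster>
</admin>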

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

application.properties (Indexer) file parameters


Defines the Indexer application.properties file parameters.

Important: Starting with IBM® Product Master 12.0 Fix Pack 8, Elasticsearch is replaced with OpenSearch because of a change in the licensing strategy (no longer open source) by Elastic NV. Therefore, you need to move to OpenSearch 2.4.1 or later and run full indexing. For more information, see Installing OpenSearch (Fix Pack 8 and later).
Property | Default value | Description
app.datasource.driver-class-name | com.ibm.db2.jcc.DB2Driver | Specifies the database driver class name.
app.datasource.password | pimdbpw | Specifies the database password.
app.datasource.url | jdbc:db2://xx.xxx.xx.xx:50000/pimdb | Specifies the database connection details.
app.datasource.username | pimdbusr | Specifies the database username.
camel.springboot.name | indexer | Specifies the instance name.
es.clusterName | es-cluster | (Fix Pack 8 and later) Specifies the OpenSearch cluster name. (Fix Pack 7 and earlier) Specifies the Elasticsearch cluster name.
es.isPasswordEncrypted | true/false | (Fix Pack 8 and later) Specifies whether the OpenSearch password is encrypted. (Fix Pack 7 and earlier) Specifies whether the Elasticsearch password is encrypted.
es.isSslEnabled | true/false | (Fix Pack 8 and later) Specifies whether the OpenSearch SSL is enabled. (Fix Pack 7 and earlier) Specifies whether the Elasticsearch SSL is enabled.
es.password | | (Fix Pack 8 and later) Specifies the OpenSearch password. (Fix Pack 7 and earlier) Specifies the Elasticsearch password.
es.serverIp | <protocol>://<ip_address_or_hostname>:<http_port> | (Fix Pack 8 and later) Specifies the OpenSearch host address. (Fix Pack 7 and earlier) Specifies the Elasticsearch host address.
es.totalFieldLimit | 10000 | (Fix Pack 8 and later) Specifies the maximum OpenSearch field index limit. (Fix Pack 7 and earlier) Specifies the maximum Elasticsearch field index limit.
es.userName | | (Fix Pack 8 and later) Specifies the OpenSearch username. (Fix Pack 7 and earlier) Specifies the Elasticsearch username.
instance.backOffMultiplier | 5 | Specifies the instance back off multiplier.
instance.dbPasswordEncrypted | false | Specifies whether the database password is encrypted.
instance.enableTracing | false | Specifies whether instance tracing is enabled.
instance.retryCount | 5 | Specifies the instance retry count.
logging.level.org.hibernate | ERROR | Specifies the Hibernate log level.
logging.level.org.springframework.web | ERROR | Specifies the Spring web log level.
logging.level.root | INFO | Specifies the root logging level.
management.endpoint.shutdown.enabled | true | Specifies whether the shutdown of the Spring Boot management endpoint is enabled.
management.endpoints.web.exposure.include | * (All endpoints) | Specifies the ID of the Spring Boot management endpoints that are displayed.
management.health.OpenSearch.enabled | false | Specifies whether the OpenSearch health of the Spring Boot management endpoint is enabled.
mq.groupName | mdmce-hazelcast-instance | Specifies the Hazelcast group name.
mq.indexItemDeleteQueueUri | | Specifies the Hazelcast item delete queue URI.
mq.indexQueueUri | | Specifies the Hazelcast index queue URI.
mq.networkIpAddress | <ip_address>:<http_port> | Specifies the Hazelcast host address. Default value: 127.0.0.1:5701.
server.port | 9096 | Specifies the instance port.
server.tomcat.additional-tld-skip-patterns | *.jar | Specifies the skip Tag Library Descriptors (tld) scan.
spring.datasource.testWhileIdle | true | Specifies whether the test database connection should be active if idle.
spring.datasource.validationQuery | SELECT 1 | Specifies the query to be run to test the connection.
spring.jpa.hibernate.ddl-auto | none | Specifies the Hibernate feature that controls the behavior.
spring.jpa.hibernate.naming.implicit-strategy | org.hibernate.boot.model.naming.ImplicitNamingStrategyLegacyHbmImpl | Specifies the Hibernate implicit strategy.
spring.jpa.hibernate.naming.physical-strategy | org.springframework.boot.orm.jpa.hibernate.SpringPhysicalNamingStrategy | Specifies the Hibernate physical strategy.
spring.jpa.properties.hibernate.dialect | org.hibernate.dialect.DB2Dialect | Specifies the database Hibernate dialect class name.
spring.jpa.show-sql | false | Specifies whether the SQL statements are printed in the console.
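
Putting the connection-related parameters together, the following is a minimal application.properties sketch for an Indexer instance that points at an OpenSearch cluster (Fix Pack 8 and later). All host names, ports, and credentials are placeholders; the property names are the ones listed in the table above.

app.datasource.driver-class-name=com.ibm.db2.jcc.DB2Driver
app.datasource.url=jdbc:db2://dbhost.example.com:50000/pimdb
app.datasource.username=pimdbusr
app.datasource.password=pimdbpw
camel.springboot.name=indexer
es.serverIp=https://oshost.example.com:9200
es.clusterName=es-cluster
es.userName=osadmin
es.password=ospassword
es.isSslEnabled=true
es.isPasswordEncrypted=false
mq.groupName=mdmce-hazelcast-instance
mq.networkIpAddress=127.0.0.1:5701
server.port=9096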

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

application.properties (pim collector) file parameters


Defines the pim collector application.properties file parameters.

Property | Default value | Description
app.datasource.driver-class-name | com.ibm.db2.jcc.DB2Driver | Specifies the database driver class name.
app.datasource.password | pimdbpw | Specifies the database password.
app.datasource.url | jdbc:db2://xx.xxx.xx.xx:50000/pimdb | Specifies the database connection details. Note: For IBM® Db2® Secure Sockets Layer (SSL) setup, you need to append "sslConnection=true" to the URL. Example: jdbc:db2://xx.xxx.xx.xx:50000/pimdb:sslConnection=true
app.datasource.username | pimdbusr | Specifies the database username.
camel.springboot.name | pim-collector | Specifies the instance name.
instance.batchSize | 25 | Specifies the instance batch size.
instance.completionTimeout | 1000 | Specifies the instance item aggregation completion timeout.
instance.dbPasswordEncrypted | false | Specifies whether the database password is encrypted.
instance.deleteIndexOnSchedule | | Specifies whether the index is deleted when a new full index job is scheduled.
instance.deleteIndexTimeout | | Specifies the instance delete index timeout.
instance.enableTracing | false | Specifies whether instance tracing is enabled.
instance.failureItemRedeliveryDelay | 10000 | Specifies the instance failure redelivery delay.
instance.failureItemRetryCount | 5 | Specifies the instance failure retry count.
instance.pageSize | 1000 | Specifies the instance page size.
instance.retryCount | 5 | Specifies the instance retry count.
logging.level.org.hibernate | ERROR | Specifies the Hibernate log level.
logging.level.org.springframework.web | ERROR | Specifies the Spring web log level.
logging.level.root | INFO | Specifies the root logging level.
management.endpoint.shutdown.enabled | true | Specifies whether the shutdown of the Spring Boot management endpoint is enabled.
management.endpoints.web.exposure.include | * (All endpoints) | Specifies the ID of the Spring Boot management endpoints that are displayed.
mdm.ccdClasspath | | Specifies the CCD SVR jar location.
mdm.contextRecycleTimeInMins | 30 | Specifies the context recycle time.
mdm.etcDir | | Specifies the ETC directory location.
mdm.topDir | | Specifies the TOP directory location.
mq.batchQueuePutUri | | Specifies the Hazelcast batch queue put URI.
mq.batchQueueUri | | Specifies the Hazelcast batch queue URI.
mq.eventNotifQueueUri | | Specifies the Hazelcast event notification queue URI.
mq.groupName | mdmce-hazelcast-instance | Specifies the Hazelcast group name.
mq.indexItemDeleteQueueUri | | Specifies the Hazelcast item delete queue URI.
mq.indexQueueUri | | Specifies the Hazelcast index queue URI.
mq.itemQueuePutUri | hazelcast-queue:default?hazelcastInstance=#hazelcastInstance | Specifies the Hazelcast item queue put URI.
mq.itemQueueUri | | Specifies the Hazelcast item queue URI.
mq.networkIpAddress | <ip_address>:<http_port> | Specifies the Hazelcast host address.
mq.retryQueuePutUri | hazelcast-queue:retryQueue?hazelcastInstance=#hazelcastInstance | Specifies the Hazelcast retry queue URI.
server.port | 9095 | Specifies the instance port.
server.tomcat.additional-tld-skip-patterns | *.jar | Specifies the skip Tag Library Descriptors (tld) scan.
spring.datasource.testWhileIdle | true | Specifies whether the test database connection should be active if idle.
spring.datasource.validationQuery | SELECT 1 | Specifies the query to be run to test the connection.
spring.jpa.hibernate.ddl-auto | none | Specifies the Hibernate feature that controls the behavior.
spring.jpa.hibernate.naming.implicit-strategy | org.hibernate.boot.model.naming.ImplicitNamingStrategyLegacyHbmImpl | Specifies the Hibernate implicit strategy.
spring.jpa.hibernate.naming.physical-strategy | org.springframework.boot.orm.jpa.hibernate.SpringPhysicalNamingStrategy | Specifies the Hibernate physical strategy.
spring.jpa.properties.hibernate.dialect | org.hibernate.dialect.DB2Dialect | Specifies the database Hibernate dialect class name.
spring.jpa.show-sql | false | Specifies whether the SQL statements are printed in the console.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

common.properties file parameters


Defines the parameters of the common.properties file.

Property | Description

Batch processing parameters
aggregation_queue_size | Specifies the maximum number of objects that you can retrieve from the database, the updated items that are kept in memory during import, and the items or categories in a batch for rollback. By default, the value is 100.
index_regeneration_batch_size | Specifies the maximum number of updated items that are kept in the cache during an import. When the specified value is reached during an import, all items are then committed to the database, and the cache is reset. By default, the value is 100.

Container script execution parameter
enable_scripts_during_import | Specifies whether container scripts get run during import of the catalog or hierarchy content. By default, the value is false.

Completeness parameters
dq_completeness_attribute_name | Specifies the required attribute in the Primary Spec. Applicable to the Data Quality Completeness feature.
dq_completeness_lookup_table_name | Specifies the name of the required lookup table. Applicable to the Data Quality Completeness feature.

Connector parameters
adobe_connector_queue | Specifies the Hazelcast queue name for the Adobe connector.
amazon_connector_queue | Specifies the Hazelcast queue name for the Amazon connector.
amazon_connector_image_queue | Specifies the Hazelcast image queue name for the Amazon connector.
ebay_connector_queue | Specifies the Hazelcast queue name for the eBay connector.
ebay_connector_revise_queue | Specifies the Hazelcast revise queue name for the eBay connector.
google_connector_queue | Specifies the Hazelcast queue name for the Google connector.
magento2_connector_queue | Specifies the Hazelcast queue name for the Magento2 connector.

Console parameters: Specify the number of objects that display in each console. A valid value is an integer.
fire_event_processor_events | Specifies whether system events are logged and whether application alerts are generated when necessary.
table_rows_per_page_alerts_display | Specifies the number of objects that can be displayed in the Alerts Subscription Console. By default, the value is 20.
table_rows_per_page_item_set | Specifies the number of objects that can be displayed in the single-edit page. By default, the value is 20.
table_rows_per_page_lookup_table | Specifies the number of objects that can be displayed in the Lookup Table Console. By default, the value is 20.
table_rows_per_page_multi_edit | Specifies the number of objects that can be displayed in the multi-edit page. By default, the value is 20.
table_rows_per_page_scripts_console | Specifies the number of objects that can be displayed in the Script Console. By default, the value is 20.
table_rows_per_page_specs_console | Specifies the number of objects that can be displayed in the Specs Console. By default, the value is 20.

Data entry worklist parameters: Specify the maximum number of entries that can be added through the multi-edit page and how the entries are sorted.
worklist_initial_size_limit | Specifies the maximum number of entries out of the whole work set that can be added by the bulk Add Item option. By default, the value is 5000.
worklist_initial_size_limit_with_sort_all_enabled | Specifies the number of items to load in the multi-edit page when you do not configure the setting in the user interface, on Home > My Settings. By default, the value is 500.

Database connection parameters: Define the connection to the database.
db_class_name | Specifies the fully qualified class name of the driver.
db_maxConnTime | Specifies the maximum number of days that a database connection object can sit idle in a connection pool before being eligible for removal. A valid value is an integer.
db_password | Specifies the password to connect to the database.
db_retrySleep | Specifies the sleep time between database connection retries.
db_tablespaces | Specifies separate database table spaces for tables and indexes. Possible value can be either true or false. By default, the value is false.
db_type | Specifies the type of the database. Possible value can be either Db2 or oracle.
db_url | Specifies the JDBC URL that is used to connect to the database. Note: For an IBM Product Master installation with the Oracle RAC database, replace the colon ":" before the service name with a forward slash "/". For example, jdbc:oracle:thin:@IP address:port number/MDMDEV
db_userName | Specifies the username to connect to the database.
max_fetch_value | Specifies the maximum number of Item objects to both retrieve from the database and keep in memory for export into the ItemSet.java file. By default, the value is 100.
Database service parameters: Specify the six JVM service modules that configure the database: Application server (appsvr), Scheduler (scheduler), Workflow engine (workflowengine), Administration (admin), Event processor (eventprocessor), and the Queue manager (queuemanager). The six JVM services each have a set of four connection parameters that define the maximum number of connections per database-related component of the service. Each connection parameter includes the module_name of its corresponding JVM service. A valid value is an integer.

db_maxConnection_module_name | Specifies the maximum number of the database connections allowable for the service module.
db_minConnection_module_name | Specifies the minimum number of the database connections that the service module maintains in the connection pool since startup of the service.
db_maxConnection_module_name_default | Specifies the maximum number of the database connections allowable for the content processing operations of the service module.
db_maxConnection_module_name_system | Specifies the number of system connections that the module can have in the connection pool. These connections are used for system-specific operations.

Database parameters for the default module: Specify the number of connections for the Java™ utility programs that require access to the database. A valid value is an integer.
db_maxConnection | By default, the value is 8.
db_maxConnection_default | By default, the value is 4.
db_maxConnection_system | By default, the value is 4.
db_minConnection | By default, the value is 2.

Database parameters for the Admin module: Specify the maximum number of connections that are used to start and stop modules on the remote computers. A valid value is an integer.
db_maxConnection_admin | By default, the value is 5.
db_maxConnection_admin_default | By default, the value is 4.
db_maxConnection_admin_system | By default, the value is 1.
db_minConnection_admin | By default, the value is 2.

Database parameters for the Application server module: Specify the maximum number of connections that are used to do most of the nonscheduled work, when you are actively using the application. A valid value is an integer.
db_maxConnection_appsvr | By default, the value is 30.
db_maxConnection_appsvr_default | By default, the value is 26.
db_maxConnection_appsvr_gdsmsg | By default, the value is 30.
db_maxConnection_appsvr_system | By default, the value is 4.
db_minConnection_appsvr | By default, the value is 10.

Database parameters for the Event processor module: Specify the maximum number of connections that are used to dispatch events between all the modules. A valid value is an integer.
db_maxConnection_eventprocessor | By default, the value is 6.
db_maxConnection_eventprocessor_default | By default, the value is 2.
db_maxConnection_eventprocessor_system | By default, the value is 4.
db_minConnection_eventprocessor | By default, the value is 4.

Database parameters for the GDS messaging module
db_maxConnection_messaging_default |
db_maxConnection_messaging_gdsmsg | By default, the value is 30.
db_maxConnection_messaging_system |

Database parameters for the Scheduler module: Specify the maximum number of connections that are used to schedule jobs. A valid value is an integer.
db_maxConnection_scheduler | By default, the value is 40.
db_maxConnection_scheduler_default | By default, the value is 36.
db_maxConnection_scheduler_gdsmsg | By default, the value is 30.
db_maxConnection_scheduler_system | By default, the value is 4.
db_minConnection_scheduler | By default, the value is 10.

Database parameters for the Queue manager module: Specify the maximum number of connections that send documents externally. A valid value is an integer.
db_maxConnection_queuemanager | By default, the value is 12.
db_maxConnection_queuemanager_default | By default, the value is 6.
db_maxConnection_queuemanager_system | By default, the value is 6.
db_minConnection_queuemanager | By default, the value is 4.

Database parameters for the Workflow engine module: Specify the maximum number of connections that are used to move the entries in the collaboration steps and monitor for alerts and workflow escalations. A valid value is an integer.
db_maxConnection_workflowengine | By default, the value is 40.
db_minConnection_workflowengine | By default, the value is 10.
db_maxConnection_workflowengine_default | By default, the value is 36.
db_maxConnection_workflowengine_system | By default, the value is 4.
db_maxConnection_workflowengine_gdsmsg | By default, the value is 30.

Data profiling parameters: Specify the tools for performance monitoring and troubleshooting. The profiling components provide profiling data for every page and executable file. A valid value is an integer.
cexplorer_max_ Applicable to the editors in the IBM Product Master that displays category type nodes and use the common explorer as embedded component, such
categories_per_ as category mapper. It controls the maximum limits of the category nodes that are rendered per page under a parent node. You can refer to it as page
page size limits for categories when such nodes are of homogeneous nature. If the value of this parameter is set to "0", it signifies the system limit of
"214748364" and the editor page displays all the categories there are to be displayed on one page. By default, the value is 30.
cexplorer_max_i Applicable to the editors in the IBM Product Master that displays item type nodes and use the common explorer as embedded component. It controls
tems_per_page the maximum limits of the item nodes that are rendered per page under a parent node. You can refer to it as page size limits for data types items when
such nodes are of homogeneous nature. If the value of this parameter is set to "0", it signifies the system limit of "214748364" and the editor page
displays all the items there are to be displayed on one page. By default, the value is 10.
db_perf_dump Specifies the number of page hits that trigger the counter to reset. By default, the value is 100.
enable_memory Specifies the memory monitoring subsystem. Possible value can be either true or false. By default, the value is false.
monitor
Email manager Specify the built-in support for sending emails to external systems. You must configure the email manager with the server name and an email ID
parameters before you can send emails.
from_address Specifies the sender email address that your system uses when sending emails.
smtp_address Specifies the SMTP server name that is used for sending email. The server that you name must support SMTP on the default port.
support_email_ Specifies the support email address link in the Help menu of the user interface. This email address must be populated with the customer's internal
address support email address and not the IBM support email address.
Entity lock
parameters
inactive_jvm_m Specifies the time interval in milliseconds, that the daemon threads use to check for invalid or inactive JVMs. By default, the value is 300000.
onitor_interval
pulse_update_in Specifies the time interval in milliseconds to use as the heartbeat of each JVM. By default, the value is 60000.
terval
release_locks_e Specifies whether to release Product Master object locks after the termination of a transaction that modifies an item. Possible value can be either true
arly or false. By default, the value is false.
Entry processor parameters: Specify your system's worker threads and the maximum number of jobs that are allowed in the queue.
evp_poll_time: Specifies the duration of time, in milliseconds, that the event processor polls for events and generates alerts if necessary. By default, the value is 5000.
max_jobs_entry_processor: Specifies the maximum number of jobs that can be queued for the worker thread to work on. By default, the value is 8.
Important: The maximum number of worker threads should always be less than half of the value of the db_maxConnection_appsvr_default parameter.
max_threads_entry_processor: Specifies the maximum number of worker threads available in a pool for performing background entry processing, such as running a macro. By default, the value is 64.
num_xml_processor_background_threads: Specifies the number of background processor threads per parent worker thread to convert data into XML for saves. By default, the value is 2.
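As a sketch of the sizing rule in the Important note above (assuming, for illustration only, that db_maxConnection_appsvr_default is set to 40 elsewhere in the file):

# Keep worker threads below half of db_maxConnection_appsvr_default (here, below 20)
max_threads_entry_processor=16
max_jobs_entry_processor=8
evp_poll_time=5000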
Hazelcast client parameters
appsvr_type: Specifies the type of application server.
hazelcast_group_name: Specifies the Hazelcast group name.
hazelcast_network_ip_address: Specifies the Hazelcast network IP address in the <host:port> format.
pim_event_queue: Specifies the Hazelcast queue name that Product Master uses to push PIM events to Elasticsearch.
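For example, a minimal client configuration sketch (the host, group, and queue names are placeholders, not shipped defaults):

hazelcast_group_name=pim_cluster
hazelcast_network_ip_address=hzhost.example.com:5701
pim_event_queue=pim_events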
GDS parameters
enable_gds: Specifies whether the GDS feature is enabled. Possible value can be either true or false. By default, the value is false.
history_subscriptions.xml file parameters
fetch_audit_lookupDataByID_date: Specifies whether to display previous records as is or to get the Lookup values by address. Before the change, all XMLs in the Object history table have the Lookup table address for the Lookup attribute, which eliminates the logic for getting the attribute value based on the address for the Lookup.
Localization parameters: Specify your locale-related settings.
bidis
charset_name: Specifies the UTF character encoding name. By default, the value is UTF-8.
CAUTION: Using a character set value other than UTF-8 might cause incorrect display of characters in the GUI.
charset_value: Specifies the UTF character encoding value. By default, the value is UTF-8.
CAUTION: Using a character set value other than UTF-8 might cause incorrect display of characters in the GUI.
default_charset_name: Specifies the default character set name that is used while creating exports and imports. By default, the value is Default (Cp1252 - ANSI Latin 1).
default_charset_value: Specifies the default character set value that is used while creating exports and imports. By default, the value is Cp1252.
default_locale: Specifies the default locale that is displayed in the user interface when no specific locale is specified. By default, the value is en_US.
GB18030_enabled: Specifies whether the Chinese National Standard GB 18030-2000 font family is rendered in the user interface. Possible value can be either on or off.
Location data parameters: Specify how your system handles location data.

enable_location_data: Specifies whether the location data feature is enabled. Possible value can be either true or false. By default, the value is true.
populate_location_default_values: Specifies whether your system automatically populates location data with default values while simultaneously saving the items. Possible value can be either true or false. By default, the value is false.
Log file parameters
enable_client_ip_username_logging: Specifies whether the user IP address and user name are logged in all the log files. Possible value can be either true or false. By default, the value is false.
log_dir: Specifies the root directory where all the logs are generated by the system. This parameter takes a valid path as its value.
log_format: Specifies the format in which logs are generated by the WPC Logger. Valid values are:
  PatternLayout: Log messages in the log4j pattern layout format.
  CBELayout: Log messages in the Common Base Event (CBE) format.
  CBELayout_PatternLayout: Log messages in both the CBE layout and the pattern layout.
By default, the value is PatternLayout.
Lookup table parameters
max_inactive_interval: Specifies the time, in seconds, between client requests before an HTTP session expires. By default, the value is 1800.
max_lookup_dropdown: Specifies the threshold value that the control menu uses when displaying the Lookup Table keys in the single-edit and multi-edit pages.
IBM® MQ (message queue) parameters: Specify how your system handles inbound and outbound messaging with external sources or destinations, including EAI platforms and web servers.
mq_channel
mq_charset: Specifies the IBM MQ message's character set. By default, the value is 819.
mq_hostname: Specifies the host name or IP address of the server that hosts IBM MQ.
mq_inbound_queue: Specifies the inbound queue name of the IBM MQ.
mq_outbound_queue: Specifies the outbound queue name of the IBM MQ.
mq_password: Specifies the password for obtaining a JMS QueueConnection.
mq_port: Specifies the port number of the IBM MQ.
mq_queuemanager: Specifies the queue manager name of the IBM MQ.
mq_use_utf: Specifies whether messages in the message queue are read in the UTF format.
mq_username: Specifies the user name for obtaining a JMS QueueConnection.
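For example, a minimal connection sketch (host, queue, and credential values are placeholders; SYSTEM.DEF.SVRCONN is shown only as an illustrative channel name):

mq_hostname=mqhost.example.com
mq_port=1414
mq_channel=SYSTEM.DEF.SVRCONN
mq_queuemanager=PIMQM
mq_inbound_queue=PIM.INBOUND
mq_outbound_queue=PIM.OUTBOUND
mq_username=pimuser
mq_password=secret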
Item bulk import parameters
data_import_queue: Specifies the queue for the item bulk load.
Keyboard shortcut parameters: Specify the keyboard shortcuts that you can use within the single-edit and multi-edit pages.
use_alt_in_shortcuts: Specifies whether the Alt key on your keyboard triggers shortcuts. Possible value can be either true or false. By default, the value is true.
use_ctrl_in_shortcuts: Specifies whether the Ctrl key on your keyboard triggers shortcuts. Possible value can be either true or false. By default, the value is true.
use_shift_in_shortcuts: Specifies whether the Shift key on your keyboard triggers shortcuts. Possible value can be either true or false. By default, the value is false.
Machine learning parameters
ml_attributes_prediction_api: Specifies the prediction API URL for attributes.
ml_attributes_training_api: Specifies the training API URL for attributes.
ml_categorization_prediction_api: Specifies the prediction API URL for categorization.
ml_categorization_training_api: Specifies the training API URL for categorization.
ml_controller_port: Specifies the controller port number for the ML services.
ml_feedback_attribute_name: Specifies a Boolean attribute that must be present in any of the catalog specs for ML-based categorization feedback.
ml_hostname: Specifies the host name or IP address of the server that hosts the ML APIs.
ml_protocol: Specifies the protocol for the ML APIs; possible values are "http" or "https".

ml_services_status_api: Specifies the Status API URL for services.
ml_standardization_prediction_api: Specifies the prediction API URL for standardization.
ml_standardization_training_api: Specifies the training API URL for standardization.
ml_suggested_categories_attribute_name: Specifies a multi-occurrence grouping attribute with two string attributes (Confidence Score and Name). The attribute must be present in one of the catalog specs that are required for categorization.
Memory parameters
excel_records_in_memory_limit: Specifies the maximum number of records to be kept in memory by POI when generating Microsoft Excel files by using Export Job and Generate Report. By default, the value is 1000.
MongoDB parameters
mongo_database: Specifies the MongoDB database name.
mongo_hostname: Specifies the host name of the MongoDB server.
mongo_password: Specifies the MongoDB user password.
mongo_port: Specifies the MongoDB port number.
mongo_username: Specifies the MongoDB username.
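For example, a minimal connection sketch (all values are placeholders for your own deployment):

mongo_hostname=mongo.example.com
mongo_port=27017
mongo_database=pimdb
mongo_username=pimuser
mongo_password=secret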
Mount manager parameters: Configure IBM Product Master for external processes to add, update, or read files from a file system that is mounted in the Document Store.
enable_mountmgr: Specifies whether the mount manager maps a server file system to the Document Store. Possible value can be either true or false. By default, the value is false.
ftp_root_dir: Specifies the FTP directory that is used by the mount manager. By default, the value is /public_html/suppliers/.
Important: The path of the directory must begin with the public_html directory and end with a forward slash "/".
gzip_blobs: Specifies whether BLOB files that are stored in the Document Store must be compressed during storage. When you set the parameter value to true, you must also set the compress parameter in the docstore_mount.xml to yes before BLOB files are compressed.
mountmgr_daemon_sleep_time: Specifies the amount of time, in milliseconds, before the mount manager is set to sleep. By default, the value is 120000.
supplier_base_dir: Specifies the individual company directories that you map from the file system to the Document Store. By default, the value is /u01/ftp/company/.
Important: The path of the FTP root directory must end with a forward slash "/".
Password parameters
enable_password_expiry: Specifies whether the password expiry is enabled. Possible value can be either true or false. By default, the value is false.
Vendor users (Persona-based UI): Vendor users have an enforced password expiry check, so this property is not applicable to the Vendor users of the Persona-based UI.
enable_user_lockout: Specifies whether the user account gets locked out due to invalid inputs. The maximum_password_attempts property specifies the number of invalid inputs allowed. All locked-out users need to contact the system administrator. Possible value can be either true or false. By default, the value is false.
System Administrator user: This property is not applicable to the system administrator user.
Vendor users (Persona-based UI): Vendor users have an enforced password user lockout, so this property is not applicable to the Vendor users of the Persona-based UI.
force_strong_password_for_users: Specifies whether the password strength is checked by using the password_strength_criteria property. Possible value can be either true or false. By default, the value is false.
Vendor users (Persona-based UI): Vendor users have an enforced password strength check, so this property is not applicable to the Vendor users of the Persona-based UI.
maximum_password_age_for_users: Specifies the maximum password age, in days, for all the users. By default, the value is 365.
maximum_password_age_for_vendor: Specifies the maximum password age, in days, for the Vendor user. By default, the value is 90.
Important: Not applicable to the Admin UI.
maximum_password_attempts: Specifies the maximum invalid inputs that are allowed before a user account locks out. By default, the value is 3.
password_strength_criteria: Specifies the password strength criteria. For more information, see Password criteria.
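For example, a stricter policy sketch (illustrative values; the shipped defaults differ, as noted above):

enable_password_expiry=true
maximum_password_age_for_users=180
maximum_password_attempts=5
enable_user_lockout=true
force_strong_password_for_users=true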
Performance parameters: Specify the processing time of the memory cache and the operational batch sizes.
enable_XML_indexing: Enables the XML indexing for nonindexed attribute search so that it can be used for performance. Possible value can be either true or false. By default, the value is false.
get_immutable_specs: Specifies the retrieval of immutable specs in scripts and defines the default behavior of the getCtgSpec and getCatalogSpec script operations for retrieving immutable specs. Possible value can be either true or false. By default, the value is false.

number_of_consumer_threads_for_catalog_content_import: Specifies the number of consumer threads that need to be started for the catalog content import.
profiling_info_collection_depth: Specifies the profile depth of nested function calls when profiling in the Profiling page. It also measures how deep the call stack is for the methods being checked. By default, the value is -1.
Attention: First, consider using the profiler integration feature rather than this profiling option. If you choose to use this option on production environments, enable profiling only when you are actively debugging performance problems. Using this option with a high collection depth can noticeably reduce overall performance because the volume of collected profiling information increases the consumption of table space.
profiling_info_unique_db_names: Specifies whether the query names of the statements that are run while you are profiling your system are added to the profiling information. Possible value can be either true or false. By default, the value is true.
profiling_query_delay_threshold: Specifies the maximum time, in milliseconds, that a query can take without getting logged as a warning. By default, the value is 250.
Important: For production environments, the recommended parameter value is 1000 milliseconds.
profiling_scheduled_jobs: Specifies profiling for scheduled jobs with or without debugging logs. By default, the value is 30.
queue_size_for_catalog_content_import: Specifies the size of the blocking queue where the generated item objects are added and then consumed by the consumer threads for the catalog content import.
use_script_includes: If true, scripts are cached with their includes expanded, meaning that if included files are changed, the server needs to be restarted to clear the script cache and pick up the changes.
Poor Obfuscation Implementation (POI) parameters
poi_min_inflate_ratio: Specifies the limit of the ratio of the compressed file size to the size of the expanded data during import. The property is used while importing entries into the collaboration area. The default value is 0.01. The value of the property must be of the double data type only (for example, 0.0). To avoid a zip bomb detection error while performing an import, minimize the value of this property.
Resembles the min_inflate_ratio property of the org.apache.poi.openxml4j.util.ZipSecureFile class in the poi-ooxml-4.1.1.jar. The min_inflate_ratio property is the minimum limit for the ratio between deflated and inflated bytes to detect a zip bomb.
Profiler port parameters: Specify the default port numbers that your profiling agent uses when profiling your services.
jprofiler_adapter_class: Specifies the JProfiler adapter class name.
profiler_port_admin: Specifies the port number at which the profiler agent that is attached to the admin service listens by default, if a value is not provided to the port parameter of the pimprof.sh script. By default, the value is 7001.
profiler_port_appsvr: Specifies the port number at which the profiler agent that is attached to the appsvr service listens by default, if a value is not provided to the port parameter of the pimprof.sh script. By default, the value is 7006.
profiler_port_event: Specifies the port number at which the profiler agent that is attached to the eventprocessor service listens by default, if a value is not provided to the port parameter of the pimprof.sh script. By default, the value is 7002.
profiler_port_queuemgr: Specifies the port number at which the profiler agent that is attached to the queuemanager service listens by default, if a value is not provided to the port parameter of the pimprof.sh script. By default, the value is 7003.
profiler_port_scheduler: Specifies the port number at which the profiler agent that is attached to the scheduler service listens by default, if a value is not provided to the port parameter of the pimprof.sh script. By default, the value is 7005.
profiler_port_workflow: Specifies the port number at which the profiler agent that is attached to the workflowengine service listens by default, if a value is not provided to the port parameter of the pimprof.sh script. By default, the value is 7004.
yourkit_adapter_class: Specifies the YourKit adapter class name.
Scheduler parameters: Specify how scheduled jobs are handled.
master_poll_time: Specifies the time, in milliseconds, of the wait between successive queries to the database if there are no schedules to run. By default, the value is 5000.
num_threads: Specifies the maximum number of worker threads available for the scheduler to use. By default, the value is 8.
reset_schedule_when_enabled: Specifies whether to reset a schedule that is not available so that it is available when the schedule runs again. Possible value can be either true or false. By default, the value is false.
sch_poll_time: Specifies the amount of time, in milliseconds, that the Scheduler waits while checking for free threads. By default, the value is 30000.
scheduler_nap_log_factor: Specifies the size of the factor that is used to calculate the time the Scheduler waits after starting a job and before looking for another schedule to run. By default, the value is 10000.
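For example, a tuning sketch for a host with spare CPU capacity (illustrative values, not recommendations):

num_threads=16
master_poll_time=5000
sch_poll_time=30000
reset_schedule_when_enabled=true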
Scripting engine parameters: Specify certain functions of the scripting engine and how your system handles scripts.

script_compiler_options: Specifies how scripts are compiled into class files. You can pass options, including any valid javac options, to the compiler while the scripts are compiled. By default, the value is nowarn.
  nowarn: Disable warning messages.
  verbose: Generate a verbose compiler status message that includes information about each class loaded and each source file compiled.
  deprecation: Display source locations where deprecated APIs are used (shorthand for Xlint:deprecation).
  Xlint: Makes available the recommended warnings.
  Xlint:warning: Makes available or not available the specified warning, where warning can be a combination of the following:
    all: All of the warning options.
    deprecation: Display source locations where deprecated APIs are used.
    unchecked: Give more detail for unchecked conversion warnings that are mandated by the Java Language Specification.
    fallthrough: Check switch blocks for fall-through cases and provide a warning message for any that are found.
    path: Warn about nonexistent path (class path, source path, and others) directories.
    serial: Warn about missing serialVersionUID definitions on serializable classes.
    finally: Warn about finally clauses that cannot end normally.
  For multiple warnings, separate each warning with a comma.
script_execution_mode: Specifies how scripts are run by choosing between an interpreted mode and a compiled execution mode. In the compiled mode, the script is transformed into Java code, then compiled into a Java class and loaded into memory for faster execution. By default, the value is compiled_if_available.
  not_compiled: Scripts are run in interpreted mode.
  compiled_if_available: Scripts are run in compiled mode only if the script does not rely on dynamic scoping and can be compiled.
  compiled_only: Scripts are compiled, but if compilation is not possible because of either script operations that are not compilable or coding errors, the script execution fails.
script_restrict_locales: Specifies whether scripts must apply user locale restrictions. Possible value can be either true or false. By default, the value is true.
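For example, a sketch that forces compiled execution with extra compiler diagnostics (illustrative settings; keep the defaults nowarn and compiled_if_available unless you have a reason to change them):

script_execution_mode=compiled_only
script_compiler_options=deprecation
script_restrict_locales=true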
SDP automation parameters
restapi_base_url: Specifies the REST API URL for the SDP Automation feature.
Search parameters: Specify the search function in IBM Product Master.
delete_rollback_batch_size: Specifies the maximum batch size of the items or categories during a rollback. By default, the value is 100.
enable_content_reference_check: Specifies the display of all the items and categories that reference a particular CMS content reference. Possible value can be either true or false. By default, the value is false.
enable_de_search: Enables or disables the Free Text Search that uses the data explorer search engine. By default, this property is disabled. If the property is enabled, a search text box and a Search field are always displayed on the right of the Menu bar.
flex_search_attribute_collection_name: Specifies the attribute collection to be used to filter the results that are returned from Watson™ Search. By default, this property has no attribute collection name. If a name is given, Watson Search uses this attribute collection to filter which attributes can be viewed.
enable_filtering_enum_select_ignorecase: Specifies whether the filtering select search ignores case for string enumeration attributes. By default, this property is set to false. If the property is enabled, the filtering select search is case-insensitive.
rich_search_default_view_indexed_only: Set the value to true to filter non-indexed attributes from the rich search page default view. This setting does not affect custom templates.
rich_search_indexed_only: Set the value to true to populate the indexed attributes of the selected catalog in the Relationship and Linked editors. Set the value to false to populate all attributes. For more information, see Relationship and Linked editors. Also loads attributes for the Add criterion in the Search page.
search_ignorecase: Specifies whether searches are case-sensitive when you use the Rich search or Lookup Table search for string type attributes. This parameter affects only searches on string attributes by using the Rich search in the user interface. Searching by using the Query Language is not affected. Possible value can be either yes or no. By default, the value is no.
Security parameters

enable_referer_check: Specifies the Referer check in the header to prevent cross-site request forgery. If set to false, the Referer check is disabled in the request header. Possible value can be either true or false. By default, the value is true.
enable_xframe_header: Specifies whether the X-Frame-Options header is added to the HTTP response to prevent clickjacking. If set to false, the X-Frame-Options header is not added to the response header. If this flag is not present, the value defaults to true.
javaapi_security: Specifies whether secure mode is disabled for any Java API invocation. Both extension point URIs japi:// and japis:// run in the insecure mode when set to false, where no user permission authorization is performed. If this flag is not present, the value defaults to true.
mdmcs_csrf_token_name: Specifies the multipart request validation to prevent Cross-Site Request Forgery (CSRF). The value of this parameter and the value of the org.owasp.csrfguard.TokenName property in the $TOP/etc/default/csrf/Owasp.CsrfGuard.properties file must be the same. By default, the value is MDMCS_CSRFTOKEN.
xframe_header_option: Specifies the X-Frame-Options value in the HTTP response to prevent clickjacking. The valid value is SAMEORIGIN or ALLOW-FROM uri, where "uri" is the URL from which the page can be framed. If no value is specified, the default value is SAMEORIGIN.
Single sign-on parameters
enable_sso: Specifies whether the single sign-on feature is enabled. Possible value can be either true or false. By default, the value is false.
sso_company: Specifies the company that you are associated with. By default, the value is trigo.
Simple Mail Transfer Protocol (SMTP) parameters
from_address: Specifies the SMTP sender address for the email bean.
smtp_additional_props: Specifies additional SMTP properties as semicolon-separated key-value pairs. For example, smtp_additional_props = mail.smtp.timeout=10;mail.smtp.sasl.enable=true
smtp_address: Specifies the SMTP address for the email bean.
smtp_authentication: Specifies whether authentication for the SMTP is to be enabled. Possible value is true or false. The default value is false.
smtp_encrypt_password: Specifies whether to skip smtp_password in this file and instead provide it as a parameter to the configureEnv.sh file. To encrypt the password, set the value to yes and remove the value of the password parameter.
smtp_password: Specifies the SMTP password.
smtp_port: Specifies the SMTP port number. The default value is 25.
smtp_username: Specifies the SMTP username.
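For example, an authenticated-relay sketch (host and credentials are placeholders; the additional-properties syntax follows the smtp_additional_props example above):

smtp_address=smtp.example.com
smtp_port=587
smtp_authentication=true
smtp_username=pim-notify@example.com
smtp_password=secret
from_address=pim-notify@example.com
smtp_additional_props=mail.smtp.timeout=10;mail.smtp.sasl.enable=true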
Spell check service parameters: Specify the Wintertree Sentry Spelling Checker Engine plug-in for spell checking of text in the single-edit and multi-edit pages.
spell_check: Specifies whether the spell checker feature is enabled. Possible value can be either true or false. By default, the value is false.
spell_check_vendor: Specifies the spell checking independent software vendor of the Wintertree Sentry Spelling Checker Engine.
spell_check_vendor_class: Specifies the spell check vendor class and spell check plug-in for the Wintertree Sentry Spelling Checker Engine.
spell_default_locale: Specifies the default dictionary language for the spell checker. By default, the value is en_US.
spell_license: Specifies the license key for the Wintertree Sentry Spelling Checker Engine.
Role parameters
admin, solution_developer, full_admin_role_name: Specify the names of the Full Admin, admin, and Solution Developer roles.
Vendor parameters
owner_approval_workflow: Specifies the name of the workflow that is used by owners to approve or reject items that are added by vendors.
vendor_collab_table: Specifies the name of the lookup table that shows the mapping of vendors to their associated collaboration areas. By default, the value is Vendor Collab Table.
vendor_organization_hierarchy: Specifies the name of the organization hierarchy that is used for the Vendor Persona-based UI. By default, the value is Vendor Organization Hierarchy.
vendor_product_workflow: Specifies the name of the workflow that is used by vendors to work on their items. By default, the value is Product Enrichment Workflow.
vendor_role_name: Specifies the name of the Vendor role. By default, the value is Vendor.
Workflow engine parameters: Specify how the workflow engine handles worker threads and wait times.
wfe_db_event_poll_time: Specifies the amount of time, in milliseconds, that the workflow engine thread waits before polling for events to process. By default, the value is 5000.
wfe_event_status_poll_max_attempts: Specifies the default maximum number of polling tries for busy-wait emulation of synchronous processing of workflow events.
wfe_num_threads: Specifies the maximum number of worker threads that are available in the Workflow thread pool for the Workflow engine. By default, the value is 8.

wfl_engine_poll_time: Specifies the amount of time, in milliseconds, that the workflow database engine waits after querying the database for any new workflow events. By default, the value is 1000.
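For example, a busier-system sketch (illustrative values; raising thread counts also raises database connection demand, so size the Workflow engine pools above accordingly):

wfe_num_threads=16
wfe_db_event_poll_time=5000
wfl_engine_poll_time=1000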
Usability parameters: Specify the capabilities and the appearance of various modules and consoles in the user interface.
allow_multiple_files_with_same_name: Specifies how to handle multiple files with the same file names that are uploaded for binary, image, and thumbnail attributes from either the single-edit or multi-edit pages. Possible value can be either true or false. By default, the value is false.
allow_users_to_modify_own_username: Specifies whether you can modify your own username through the My Profile page. Possible value can be either true or false. By default, the value is true.
appsvr_port: Specifies the port on which the application server listens for incoming requests.
bulk_edit_limit: Specifies the number of items that can be edited at one time in the Rich Edit screen. Once you reach the limit, you are prompted to save your unsaved changes. You cannot continue to edit items until you save your unsaved changes; however, items that you have already edited can still be edited. A valid value is an integer.
category_item_count_disabled: Specifies whether the user interface calculates and displays the count of items that are mapped within categories for the navigation page, static selection page, and link attribute pop-up window. Possible value can be either true or false. By default, the value is false.
category_path_separator: Specifies the category path separator for the UI, Java API, and scripting. By default, the value is "/".
debug_use_long_widget_names: Specifies whether to display the full widget name during debugging. Possible value can be either true or false. By default, the value is false.
Important: The parameter value must be set to false in production environments.
default_company_code: Specifies a default company to automatically populate the Company field on the Login page.
display_attributes_in_rule_editor: Specifies whether the entry attribute lists are displayed in the dynamic selection editor page. Possible value can be either true or false. By default, the value is false.
display_flag_attributes_as_dropdown_list: Specifies whether to use a drop-down menu or a checkbox for attributes of type Flag in the Single Edit and Multiple Edit pages. Possible value can be either true or false. By default, the value is true.
display_loading_screen: Specifies whether to show a loading status page when navigating between pages, to prevent you from navigating elsewhere until the destination page completely loads. Possible value can be either true or false. By default, the value is false.
enable_custom_logout: Specifies a custom logout redirection from the log out action in the IBM Product Master user interface. Possible value can be either true or false. By default, the value is true.
enable_medit_lock_prefetch: Allows you to activate the prefetch of lock information for items in the multi-edit page. Possible value can be either true or false. By default, the value is true.
enable_session_id_change_after_login: Addresses the security issues that are caused by a session ID remaining the same across a session. If set to true, the session ID changes dynamically across Product Master user interface pages after a successful login. If set to false, the session ID remains constant for a session. Possible value can be either true or false. By default, the value is true.
enable_subspecs: Specifies the subspecs function, which is used for spec inheritance and various data modeling scenarios. Possible value can be either yes or no. By default, the value is no.
enable_version_check: Specifies whether to perform release version validation between IBM Product Master and the compressed file that you import with the import environment. Possible value can be either true or false. By default, the value is false.
entrypreview_refresh_entries_post_run: Specifies whether the entry edit page refreshes after an entry preview pop-up window is closed. Possible value can be either true or false. By default, the value is false.
exception_visible: Specifies whether thrown exceptions are displayed to the user when an internal server error occurs. This parameter is valid only when you are logged in to the "trigo" company. A valid value is a Boolean.
exec_prefix: Specifies how Product Master runs operating system level commands. For UNIX, this parameter value must be blank. For Microsoft Windows, the parameter must be set to bash.
flow_control_location: Specifies the location of the Flow-config.xml file, where the user interface navigation can be customized; it is used in the UI layer of IBM Product Master.
include_unassigned_when_search_by_category: Specifies the default behavior of the classic rich search UI screen. If set to true, the uncategorized items (the items in the Unassigned folder) that match the search criteria are included in the search result set. If set to false, the uncategorized items are excluded from the search. For backward compatibility, leave the value of this property as true.
job_status_refresh: Specifies the time interval, in seconds, for how frequently the system updates the status of the Schedule Status page. By default, the value is 30.
leave_single_edit_popup_silently: Specifies the pop-up window that appears when items are opened in the single edit pop-up window and you attempt to switch the tab without saving the changes that are made in the current tab. Possible value can be either true or false. By default, the value is false.
leftnav_max_categories: Specifies the maximum number of category children that are displayed under a given parent category for the Catalog and Hierarchy modules in the navigation page. By default, the value is 0.
leftnav_max_items: Specifies the maximum number of items that are displayed under a given parent category for catalog modules in the navigation page. You can click Next to "page through" the sets of elements. For example, if you had 100 items in a category and you set the display to 10 at a time, there would be 10 pages of items, and you would use the pagination feature to page through 10 items at a time. By default, the value is 100.
leftnav_max_search_results: Specifies the maximum number of search results that are returned in the search feature of the Catalog and Hierarchy modules in the navigation page. By default, the value is 500.
max_attrgroup_timeout: Specifies the maximum lifetime, in minutes, of the cache entry for an attribute group in the attribute group cache. Setting the value to 0 effectively disables the cache (an attribute group is fetched from the database every time the attribute is needed). By default, the value is 30 minutes.
max_file_size_for_import_in_gb: Specifies the maximum size of the zip file (in GB) that is allowed to be uploaded for environment import.
memorymonitor_interval: Specifies the fixed interval of time, in milliseconds, that defines when the memory monitoring subsystem collects information about the memory usage. By default, the value is 50000.

must_save_before_paging_entries: Specifies whether a warning is issued if you navigate away from an unsaved entry in the single-edit or multi-edit pages. Possible value can be either true or false. By default, the value is true.
must_save_before_switching_single_multi_edit: Specifies whether a warning is displayed if you navigate between the single-edit or multi-edit pages and unsaved entries exist. Possible value can be either true or false. By default, the value is true.
nonindexed_search_like_indexed: Specifies the search behavior for non-indexed multi-occurring attributes. Possible value can be either true or false. By default, the value is false.
save_as_draft_enabled: Specifies whether the "save-as-draft" feature is enabled. When enabled, you can save collaboration area entries even with invalid values. Entries cannot move to the next step if there are validation errors in the required set of attributes.
track_visited_entries: Specifies whether to highlight the items that you visit in the multi-edit page. Possible value can be either true or false. By default, the value is false.
trim_entry_attribute_values: Specifies whether the entry attribute values are trimmed. For example, "xyz " is saved as "xyz".
version_info: Specifies any additional version information that you want to be displayed in the dialog box that is located in the user interface at Help > About Current PageID. Specify dev to indicate that your instance of IBM Product Master is in a development phase and specify test to indicate that your instance is in a testing phase.
wait_max_time: Specifies the maximum amount of time, in milliseconds, that a thread can wait on a locked critical section, after which an exception is thrown due to timeout. By default, the value is 60000.
wait_poll_time: Specifies the maximum amount of time, in milliseconds, that a waiting thread polls to check whether the critical section is free; if it is free, the thread locks the critical section for its use. By default, the value is 1000.
xml_node_name_space_equiv: Specifies the default character that is used by the Java API method. By default, the value is an underscore "_".
Web service connection parameters
enable_webservice_session: Specifies whether the web service session is enabled. Possible value can be either true or false.
Web service authentication parameters: Specify the credentials to create a web service context.
soap_company: Specifies your company credentials so that SOAP services can run with the permissions of your company.
soap_envelope_schema_url: Specifies the URL of the schema for the SOAP envelope and the port number. By default, the value is http://schemas.xmlsoap.org/soap/envelope/.
soap_user: Specifies your user credentials so that SOAP services can run with your user permissions.
webservice_session_timeout: Specifies the length of time, in seconds, of inactivity that causes the web service session to time out and invalidate. By default, the value is 300.
Web service parameters
cache_workflow_per_event_processing: Specifies the flag to turn on and off the caching of the workflow definition object for event processing. This flag is mainly for performance testing purposes to evaluate this short-term cache, and it should be left true or removed from this file.
product_center_url: Specifies the fully qualified URL, including the port number, of the website where you should point your browsers to access your IBM Product Master instance.
Queue manager parameters: Specify the queue wait times and the maximum number of jobs and worker threads.
queue_manager_threads: Specifies the number of threads that are used to simultaneously distribute the messages from the queue. By default, the value is 3.
queuemanager_max_jobs: Specifies the maximum number of jobs that can be queued while waiting for the worker thread to complete. By default, the value is 1000.
queuemanager_num_threads: Specifies the maximum number of worker threads available in the Queue Manager thread pool. By default, the value is 10.
queuemanager_poll_time: Specifies the time, in milliseconds, that the Queue manager waits before checking for any new messages that need processing. By default, the value is 5000.
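For example, a throughput-oriented sketch (illustrative values; higher thread counts increase database connection usage):

queue_manager_threads=5
queuemanager_num_threads=16
queuemanager_max_jobs=2000
queuemanager_poll_time=5000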
Internal parameters (do not edit)
data_file_ext: For FTP operations. The file name ends with the .FILE file extension and is used only for IBM Application System/400® (AS/400) transfers.
dojo_build: Specifies the declaration of variables in scripts. When this parameter is available, you must use the var keyword in your scripts to declare any variables. Possible value can be either true or false. By default, the value is true.
locale_xml_top: Specifies the directory that contains the per-locale XML files for language support at run time. By default, the value is /locales/. The language support directory must be relative to the $TOP directory and must end with a forward slash "/".
Important: If no locale is specified in an XML file of this directory, you must enter your locale in the default_locale parameter to set your default locale.
production_code: Specifies whether to declare variables as final while generating code for the generated classes.
tmp_dir: Specifies the directory where the various subsystems temporarily keep their resources before processing them. By default, the value is /tmp/.
Note: The directory that you specify as the parameter value must end with the forward slash "/" character.
tomcat_ajp13: Specifies the Tomcat Apache JServ Protocol (AJP) port number. The Apache JServ Protocol is used for communication between Tomcat and the Apache web server. It is primarily used as a reverse proxy to communicate with application servers.
tomcat_shutdown: Specifies the port number for the shutdown service on Tomcat.

trace: Specifies whether method entry and exit points need to be written into a trace file. By default, the value is off.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Editing common.properties file


The common.properties configuration file contains the parameters of Product Master where you can define the core functions and appearance of the user interface.

About this task


Whenever your system is started, the common.properties file is read from the $TOP/etc/default directory to initialize each property.

You can view or edit the common.properties file to modify the default property values or to define values based on usage and performance preferences. Most parameters
have a generic default value that is preset in the common.properties file, but you can specify custom values to fit your needs.

Note:

Any changes to the common.properties file require you to restart your system before the changes are applied.
Make sure that the end of line character (^M) is not added in the original common.properties file while editing this file in the Windows text editor. If the end of line
character is added, the file becomes unusable when accessed through Linux® workstations.

Procedure
1. Open the common.properties file by using a text editor.
The file is located in the $TOP/etc/default directory.
2. Change the value of any parameter.
For example, you might change the max_inactive_interval parameter from the standard value of 1800 seconds to a larger or smaller value depending on when you
want your session to timeout. Changing the value to 2000 results in the configuration property max_inactive_interval=2000.
3. Save your changes.
4. Restart Product Master.
You must restart Product Master for the configuration changes to take effect.

Results
From the user interface, you can view the common.properties file to determine how your parameters are defined and what values were loaded during system startup.

1. Click System Administrator > Properties.


2. Scroll through the unordered list of parameters to find what you are looking for. To help you locate and define certain parameters in the common.properties file, a
few parameters are displayed with brief descriptions.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

config.json file parameters


Defines the config.json file parameters.

allowTypeAhead (default: false): Specifies whether you can select a category by using type ahead in a drop-down list or category selector dialog box. Used in the item search and rules engine pages.
appFullVersion (default: empty): Specifies the current full version of the application that is displayed in the About page.
appVersion (default: empty): Specifies the current version of the application that is displayed in the About page.
assetGridPageSize (default: 50): Specifies the page size that is used for the grids of the single-edit page (category) and DAM assets.
assetNameLimit (default: 100): Specifies the asset name character limit on the DAM single-edit page (asset header).
assetSearchGridPageSize (default: 50): Specifies the page size for the DAM search grid.
attrSearchType (default: Name): Specifies the parameter that is used to show the attribute name or path in the Attribute Search list on the single-edit page.
attributeCollectionTreeNodeCount (default: 50): Specifies the number of nodes to be loaded in the Search specs and attributes and Selected specs and attributes sections (Attribute collection console).
auditHistoryGridPageSize (default: 50): Specifies the page size for the Audit history grid in the Data visualization.
baseUrl (default: empty): Specifies the server path that is used for making the API calls.
categoryCharsLimit (default: 80): Specifies the character limit for the category name on the Explorer page.
categoryTreePageSize (default: 10): Specifies the maximum number of nodes that are loaded for a Category tree.

collabAreaSummaryGridPageSize (default: 50): Specifies the page size for the Collaboration Area summary / Work list summary grid in the Data visualization.
collabGridPageSize (default: 50): Specifies the page size that is used for the Collaboration Area grid.
collabNameCharLimit (default: 17): Specifies the character limit for a collaboration name after which ellipsis is displayed on the Home page.
commentsCharactersLimit (default: 500): Specifies the character limit for the single-edit page Comments tab.
companyName (default: Multi-channel Retailer): Specifies the company name.
customScriptBaseUrl (default: empty): Specifies the URL that is used to load the custom pages (Admin UI pages).
dataSanityGridPageSize (default: 50): Specifies the page size for the Data Sanity grid in the Data visualization.
defaultCompany (default: empty): Specifies the default company name in the Login page.
defaultGridPageSize (default: 50): Specifies the default grid page size if the page size is not specified.
editors.linked.pagination.pageSize (default: 50): Specifies the page size for the Search page.
editors.lookup.pagination.pageSize (default: 50): Specifies the page size that is used for the Lookup Attribute dialog in the Search page.
editors.relationship.pagination.pageSize (default: 50): Specifies the page size that is used for the Relationship operator dialog in the Search page.
enableSSO (default: false): Specifies the constant that is used to enable the Single sign-on (SSO).
explorerGridPageSize (default: 50): Specifies the page size that is used for the Explorer page and Custom page.
ftpGridPageSize (default: 50): Specifies the grid page size for the FTP DAM page.
ftsPageSize (default: 50): Specifies the page size for the FTS page (Elasticsearch).
headerDataTitleLimit (default: 40): Specifies the header data character limit after which ellipsis is displayed on the Item page.
headerIdLimit (default: 80): Specifies the header ID character limit after which ellipsis is displayed on the Item header page.
idleTS (default: 1500): Specifies the idle timeout of 25 minutes.
ImageDefaultZoom (default: 200): Specifies the default image zoom size that is used for the Asset preview in the DAM.
ImageMaxZoom (default: 400): Specifies the maximum image zoom value that is used for the Asset preview in the DAM.
ImageMinZoom (default: 25): Specifies the minimum image zoom value that is used for the Asset preview in the DAM.
itemDescription (default: 300): Specifies the character limit for the item description on the single-edit page after which ellipsis is displayed.
jobInterval (default: 600000): Specifies the interval value after which an API call is made to fetch the job status.
lookupTable.editorMode (default: dialog): Specifies the Editor mode option to be used for the Lookup attribute (dropdown or dialog).
lookupTable.maxItemsCount (default: 5000): Specifies the maximum items count for the Lookup table attribute.
lookupTable.minTypeAheadLength (default: 3): Specifies the minimum type ahead length for the Lookup table attribute.
categorySelector.allowTypeAhead (default: true): Specifies whether you can select a category by using type ahead in the category selector dialog box.
categorySelector.maxItemsCount (default: 5000): Specifies the maximum items count for the categorySelector dialog box.
categorySelector.minTypeAheadLength (default: 3): Specifies the minimum type ahead length for the categorySelector dialog box.
multiEditExitValueRefresh (default: 2000): Specifies the constant value that is used as an interval to refresh the grid when an item is moved from one step to another.
multiEditMaxTabCount (default: 8): Specifies the maximum number of tabs in the multi-edit page.
multiOccGrpPaginationView (default: true): Specifies whether you can see the multi-occurrence groups page-wise. The pagination is configured through the Admin UI.
myNotesMaxCharLimit (default: 2500): Specifies the maximum character limit for the content in the Show more panels > My notes section.
myShortCutsMaxCount (default: 10): Specifies the number of URLs that can be added as shortcuts in the Show more panels > My shortcuts section.
newSavedButtonCharLimit (default: 15): Specifies the character limit for the label of a new Save Search on the Home page if there are no saved searches.
oldUICustomLogoutUrl (default: /custom/NewUICustomLogout.wpc): Specifies the logout URL of the Admin UI.
productDescription (default: IBM® Product Master): Specifies the product name in the application.
publishSpecOnWKC (default: false): Specifies whether to enable publishing to the IBM Watson® Knowledge Catalog if the value is "true".
realtionshipGridPageSize (default: 50): Specifies the page size that is used for the Relationship dialog.
replaceJobInfoCharLimit (default: 15): Specifies the character limit that is used for different labels on the Job pane.
ruleEngine.attributeDisplayFormat (default: path): Specifies the display name or localized name for the attributes list in the Rules console page.
rulesConsoleGridPageSize (default: 50): Specifies the page size for the Rules console page.
searchGridPageSize (default: 50): Specifies the page size that is used for the Search page.
searchNameCharLimit (default: 30): Specifies the character limit for a search name after which ellipsis is displayed on the Home page.
showCardsCount (default: 4): Specifies the count of cards that are shown on the Home page for a collaboration or category.
showGridImgThumbnail (default: false): Specifies whether to show a thumbnail for an image. By default, the value is false.
showReservedByUser (default: false): Specifies the Boolean value to be checked to set the format of the Reserved by column.
singleEditGridPageSize (default: 25000): Specifies the page size for server-side pagination in the single-edit page (History, Related Items, and Linked Items tabs).
singleEditMaxTabCount (default: 8): Specifies the maximum number of tabs that are displayed in a single-edit page.
singleEditTabsStorageSize (default: 25): Specifies the maximum storage size for the single-edit page tabs.
sortOrder (default: asc): Specifies the default sort order value to be used while making API calls.

stepNameCharLimit (default: 30): Specifies the character limit for a step name after which ellipsis is displayed.
tabNameCharLimit (default: 32): Specifies the character limit for a tab name after which ellipsis is displayed.
theme (default: default): Specify the value as the name that you gave to the custom.css file.
thumbnailDimension (default: 196): Specifies the thumbnail dimension that is used for the DAM renditions.
timeoutTS (default: 1800): Specifies the application timeout period of 30 minutes.
uploadFileCountLimit (default: 100): Specifies the maximum number of files that can be uploaded in the DAM.
widgetsAttributeDisplayFormat (default: name): Specifies whether to display the attribute name or the attribute name and path (<attribute name> | <path name>) in the Group by field.
widgetsGridPageSize (default: 50): Specifies the page size that is used in the Key Metrics section on the Home page.
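For example, a minimal sketch of the file (illustrative values; the nesting of the lookupTable and categorySelector keys is assumed from the dotted names above, and the other keys are shown flat):

{
  "defaultCompany": "acme",
  "defaultGridPageSize": 50,
  "searchGridPageSize": 100,
  "lookupTable": {
    "editorMode": "dialog",
    "maxItemsCount": 5000,
    "minTypeAheadLength": 3
  },
  "timeoutTS": 1800
}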

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

dam.properties file parameters


Defines the dam.properties file parameters.

asset_created_attribute_path (default: Digital Asset Primary Spec/Creation Date): Specifies the path of the Creation Date attribute in the spec.
asset_id_attribute_path (default: Digital Asset Primary Spec/ID): Specifies the path of the ID attribute in the spec.
asset_modified_attribute_path (default: Digital Asset Primary Spec/Last Modified Date): Specifies the path of the Last Modified Date attribute in the spec.
asset_name_atttribute_path (default: Digital Asset Primary Spec/Name): Specifies the path of the Name attribute in the spec.
asset_renditions_attribute_path (default: Digital Asset Primary Spec/Renditions): Specifies the path of the Renditions attribute in the spec.
asset_size_attribute_path (default: Digital Asset Primary Spec/Size): Specifies the path of the Size attribute in the spec.
asset_thumbnail_attribute_path (default: Digital Asset Primary Spec/Thumbnail): Specifies the path of the Thumbnail attribute in the spec.
asset_type_attribute_path (default: Digital Asset Primary Spec/Type): Specifies the path of the Type attribute in the spec.
blob.min.size (default: 4096): Specifies the minimum size to store a file in the Blobstore folder.
blob.store.dir (default: $TOP/mdmui/blobstore): Specifies the path for the Blobstore folder.
bulkupload_asset_count (default: assetCount): Specifies the asset count in the bulk upload.
bulkupload_batch_size (default: 100): Specifies the batch size for the bulk upload.
bulkupload_connect_timeout (default: 10000): Specifies the timeout parameter for the bulk upload.
bulkupload_consumer_thread_count (default: 2): Specifies the consumer thread count for the bulk upload.
bulkupload_queue_poll_timeout (default: 10): Specifies the timeout parameter for the queue poll in the bulk upload.
bulkupload_thread_pool_size (default: 3): Specifies the thread pool size for the bulk upload.
bulkupload_thread_pool_timeout (default: 60): Specifies the timeout parameter for the thread pool in the bulk upload.
category_id_attribute_path (default: Digital Hierarchy Primary Spec/ID): Specifies the path of the ID attribute in the hierarchy spec.
category_name_attribute_path (default: Digital Hierarchy Primary Spec/Name): Specifies the path of the Name attribute in the hierarchy spec.
channel_code_attribute_path (default: Channel Hierarchy Specification/Channel Code): Specifies the path of the Channel Code in the Channel spec.
channel_hierarchy_name (default: Channel Hierarchy): Specifies the Channel hierarchy name.
channel_name_attribute_path (default: Channel Hierarchy Specification/Channel Name): Specifies the path of the Channel Name in the Channel spec.
dam_catalog_name (default: Digital Asset Catalog): Specifies the catalog name for the DAM.
dam_configuration_lookup (default: DAM Renditions Configuration Lookup): Specifies the name of the configuration Lookup table for the renditions.
dam_hierarchy_name (default: Digital Asset Hierarchy): Specifies the primary hierarchy name for the DAM.
dam_linkagefile_header_collab (default: ASSET_path): Specifies the DAM Linkage file headers for the collaboration area.
dam_linkagefile_header_ctg (default: ASSET_path): Specifies the DAM Linkage file headers for the catalog.
digital_assets_attribute_name (default: Digital Assets): Specifies the name of the attribute for the digital asset products relation.
excluded_file_types (default: exe): Specifies the exclusion list of file types for the asset upload.
extractMetaData (default: true): Specifies whether metadata extraction is enabled.
ftp_file_path_separator (default: ##FS##): Specifies the separator for the FTP file paths list to pass to the Job.
linkage_upload_ftpurl (default: ftpUrl.): Specifies the FTP URL for the linkage upload.
linkage_upload_linkagefilepath (default: linkageFilepath.): Specifies the linkage file path for the linkage upload.
linkage_upload_password (default: pwd.): Specifies the password for the linkage upload.
linkage_upload_portnumber (default: portNumber.): Specifies the port number for the FTP linkage upload.
linkage_upload_username (default: userName): Specifies the username for the linkage upload.
localbulkupload_directory_path_param (default: directorypath): Specifies the directory path parameter for the local bulk upload.
localbulkupload_linkage_file_path_param (default: linkageFilepath): Specifies the linkage file path parameter for the local bulk upload.

localbulkupload_repo_parent_folder_id (default: /): Specifies the parent folder ID for the local bulk upload.
mdmce_shared_location: Specifies the shared location path.
metadata_keys_to_discard (default: File): Specifies the metadata to be discarded.
mongo.database (default: admin): Specifies the authentication database.
mongo.password: Specifies the password that is used to enable access control for the MongoDB authentication.
mongo.username: Specifies the username that is used to enable access control for the MongoDB authentication.
mongoDb.port (default: 27017): Specifies the port number for the MongoDB.
mongoDb.url (default: localhost): Specifies the URL for the MongoDB.
renditionupload_assetcount (default: Asset Count): Specifies the Asset Count parameter in the rendition upload spec.
renditionupload_directorypath (default: Directory path): Specifies the Directory path parameter in the rendition upload spec.
renditionupload_itemid (default: Item Id): Specifies the Item Id parameter in the rendition upload spec.
repository.provider (default: com.ibm.mdm.dam.impl.providers.JackrabbitDARepoImpl): Specifies the full path of the class file for the repository.
repository.workspace (default: default): Specifies the workspace for the repository.
thumbnail_size (default: 120): Specifies the thumbnail height and width, in pixels, for the display on the Digital Assets tab in the single-edit page.
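For example, a minimal override sketch (the MongoDB host is a placeholder; the remaining values restate the shipped defaults listed above):

dam_catalog_name=Digital Asset Catalog
dam_hierarchy_name=Digital Asset Hierarchy
blob.store.dir=$TOP/mdmui/blobstore
blob.min.size=4096
mongoDb.url=mongo.example.com
mongoDb.port=27017
thumbnail_size=120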

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

damConfig.properties file parameters


Defines the damConfig.properties file parameters.

Property Default Value Description


asset_lower_size_limit 5242880 Specifies the minimum size for an asset.
asset_upper_size_limit 10485760 Specifies the maximum size for an asset.
bulk_upload_ftp_ BULK_UPLOAD_FTP_ Specifies the FTP bulk local upload prefix.
bulk_upload_local_ BULK_UPLOAD_LOCAL_ Specifies the Bulk local upload prefix.
channel_code_attribute_path Channel Hierarchy Specification/Channel Specifies the path of Channel Code in channel spec.
Code
channel_hierarchy_name Channel Hierarchy Specifies the channel hierarchy name.
channel_name_attribute_path Channel Hierarchy Specification/Channel Specifies the path of Channel Name in the channel spec.
Name
dam_bulkuploadspec_assetCount FTP_LINKAGE_ Specifies the path of asset count attribute in the local bulk upload spec.
dam_bulkuploadspec_directorypath Local Bulk Upload Spec/directoryPath Specifies the path of directoryPath attribute in the local bulk upload spec.
dam_bulkuploadspec_linkagefilepath Local Bulk Upload Spec/linkageFilePath Specifies the path of linkageFilePath attribute in the local bulk upload spec.
dam_bulkuploadspec_parentfolderid Local Bulk Upload Spec/parentFolderId Specifies the path of parentFolderId attribute in local bulk upload spec.
dam_catalog_linkagefile_header Specifies the headers for the linkage files for the catalog container type.
dam_collab_linkagefile_header Specifies the headers for the linkage files for the collaboration container type.
Note: Old name of this property was dam_linkagefile_header.
dam_ftp_bulk_upload_report_name DAM Bulk Upload Specifies the Report job name for the FTP bulk upload.
dam_ftpbulkupload_assetCount Bulk Upload Input/assetCount Specifies the path of the assetCount attribute in the DAM FTP bulk upload spec.
dam_ftpbulkupload_directorypath Bulk Upload Input/directoryPath Specifies the path of the directoryPath attribute in the DAM FTP bulk upload spec.
dam_ftpbulkupload_fileList Bulk Upload Input/fileList Specifies the path of the fileList attribute in the DAM FTP bulk upload spec.
dam_ftpbulkupload_ftpurl Bulk Upload Input/ftpUrl Specifies the path of the ftpUrl attribute in the DAM FTP bulk upload spec.
dam_ftpbulkupload_linkagefilepath Bulk Upload Input/linkageFilePath Specifies the path of the linkageFilePath attribute in the DAM FTP bulk upload spec.
dam_ftpbulkupload_password Bulk Upload Input/pwd Specifies the path of the pwd attribute in the DAM FTP bulk upload spec.
dam_ftpbulkupload_port_no Bulk Upload Input/portNumber Specifies the path of the portNumber attribute in the DAM FTP bulk upload spec.
dam_ftpbulkupload_username Bulk Upload Input/username Specifies the path of the username attribute in the DAM FTP bulk upload spec.
dam_getassets_url /api/v1/dam/assets?##GET Specifies the API URL to get assets.
dam_linkage_upload_ftpurl Linkage Upload Input/ftpUrl Specifies the path of the ftpUrl attribute in the DAM Linkage upload spec.
dam_linkage_upload_linkagefilepath Linkage Upload Input/linkageFilePath Specifies the path of the linkageFilePath attribute in DAM Linkage upload spec.
dam_linkage_upload_password Linkage Upload Input/pwd Specifies the path of the pwd attribute in the DAM Linkage upload spec.
dam_linkage_upload_port_no Linkage Upload Input/portNumber Specifies the path of portNumber attribute in the DAM Linkage upload spec.
dam_linkage_upload_report_name DAM Linkage Upload Specifies the Report job name for the DAM Linkage upload.
dam_linkage_upload_username Linkage Upload Input/username Specifies the path of the username attribute in the DAM Linkage upload spec.
dam_local_bulk_upload DAM Local Bulk Upload Specifies the Report job name for the local bulk upload.
dam_password BtynV/TVvqU= l5LWojTOiVIIg607C9rQ75eS1qI0zolS Specifies the encrypted password for the admin user.
dam_rendition_upload Generate Renditions Specifies the Report job name to generate renditions.
dam_renditionspec_assetcount Rendition Upload Spec/Asset Count Specifies the path of Asset Count in rendition upload spec.

dam_renditionspec_directorypath Rendition Upload Spec/Directory Path Specifies the path of the Directory Path in the rendition upload spec.
dam_renditionspec_itemid Rendition Upload Spec/Item Id Specifies the path of Item Id in the rendition upload spec.
dam_username admin Specifies the default username for the DAM.
digital_assets_attribute_name Digital Assets Specifies the Delink attribute name.
ftp_file_path_separator ##FS## Specifies the separator for FTP file paths list to pass to the Job.
ftp_linkage_file_dir dam/linkage_files/ Specifies the directory name for the directory to store linkage files.
ftp_linkage_file_prefix FTP_LINKAGE_ Specifies the FTP linkage file prefix.
host-url /mdm_rest Specifies the host URL for the DAM APIs.
job_name_prefix BULK_UPLOAD_ Specifies the prefix that all bulk upload jobs should begin with for job status.
jobs_count 5 Specifies the number of jobs.
linkage_file_directory_prefix LINKAGE_FILES_ Specifies the linkage file directory to store the linkage files.
local_linkage_file_prefix LOCAL_LINKAGE_ Specifies the local linkage file prefix.
mdmce_asset_primaryspec Digital Asset Primary Spec Specifies the Primary spec name for the DAM.
mdmce_asset_primaryspec_createdOn Digital Asset Primary Spec/Creation Date Specifies the path of Creation Date attribute in spec.
mdmce_asset_primaryspec_id Digital Asset Primary Spec/ID Specifies the path of Id attribute in spec.
mdmce_asset_primaryspec_name Digital Asset Primary Spec/Name Specifies the path of the name attribute in spec.
mdmce_asset_primaryspec_relaltedFrom Digital Asset Primary Spec/Related From Specifies the path of the related from attribute in spec.
mdmce_asset_primaryspec_renditions Digital Asset Primary Spec/Renditions Specifies the path of the Renditions attribute in spec.
mdmce_asset_primaryspec_size Digital Asset Primary Spec/Size Specifies the path of the size attribute in spec.
mdmce_asset_primaryspec_thumbnail Digital Asset Primary Spec/Thumbnail Specifies the path of the Thumbnail attribute in spec.
mdmce_asset_primaryspec_type Digital Asset Primary Spec/Type Specifies the path of the type attribute in spec.
mdmce_digital_asset_hierarchy Digital Asset Hierarchy Specifies the primary hierarchy name for the DAM.
mdmce_digital_catalog Digital Asset Catalog Specifies the catalog name for the DAM.
mdmce_primaryspec_id Digital Hierarchy Primary Spec/ID Specifies the path of the Id attribute in spec.
mdmce_primaryspec_name Digital Hierarchy Primary Spec/Name Specifies the path of the name attribute in spec.
mdmce_shared_location Specifies the shared location path.
prefix_ftp_bulk_upload BULK_UPLOAD_FTP Specifies the prefix that all FTP bulk upload jobs should begin with for job status.
prefix_linkage_upload LINKAGE_FILE Specifies the prefix that all linkage jobs should begin with for job status.
prefix_local_bulk_upload BULK_UPLOAD_LOCAL Specifies the prefix that all local bulk upload jobs should begin with for job status.
prefix_rendition_upload RENDITION_LOCAL Specifies the prefix that all rendition upload jobs should begin with for job status.
publication_channel_attribute_name Publication Details/Publication Channel Specifies the path of the publication channel attribute in spec.
publication_status_attribute_name Publication Details/Publication Status Specifies the path of the publication status attribute in spec.
recent_jobs_from 7 Specifies that jobs run in the last seven days are checked.
repo_provider com Specifies the DAM repository provider.
search_catalog catalog Specifies the search type name for the catalog search.
search_repo repository Specifies the search type name for the repository search.
search_scope ENTIRE_CATALOG Specifies the scope for search.
sftp_linkage_file_prefix SFTP_LINKAGE_ Specifies the SFTP linkage file prefix.
size_attribute_name Size Specifies the attribute name for size.
sort_attr_path Digital Asset Primary Spec/Name Specifies the path of the attribute for sorting.
sort_type ASCENDING Specifies the sorting order.
status_sucess Success Specifies the status for success.
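
Example
The following is a minimal sketch of a damConfig.properties fragment. The property names and values are taken from the defaults in the preceding table, and the key=value syntax assumes the standard Java properties format:

asset_lower_size_limit=5242880
asset_upper_size_limit=10485760
thumbnail_size=120
mdmce_digital_catalog=Digital Asset Catalog
search_scope=ENTIRE_CATALOG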

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

dashboards_config.ini properties file parameters


Defines the dashboards_config.ini properties parameters.

Property Default Value Description


app_name dashboards Specifies the name of the dashboard.
authType tokenbased Specifies the REST API authorization type.
cellName Specifies WebSphere® Application Server cell name.
context_root /dashboards Specifies context root use for the dashboard.
derbyDatabaseName prddb Specifies the name of the Derby database.
derbyLocation /opt/MDM/mdmui/dashboards/tnpmoed/ Specifies the installation location of the Derby database.
derbyPassword Specifies the password of the Derby database.
derbyPort 1535 Specifies the port number of the Derby database.
derbyServerIP Specifies the IP address of the server on which the Derby database should be created.

derbyUser gaiandb Specifies the username of the Derby database.
driverjarpath /opt/MDM/mdmui/dashboards/tnpmoed/derby/lib Specifies the location of the Derby database driver JAR.
ibm_java_path /opt/IBM/WebSphere/AppServer/java/8.0 Specifies IBM® Java SE Development Kit 1.8 path.
nodeName Specifies the name of the WebSphere Application Server node.
printSQL false Specifies whether to print log statements in a log file for the Derby database SQL queries.
profile AppSrv01 Specifies the profile name of the WebSphere Application Server.
restHost Specifies the hostname or the IP address for the REST API.
restPort 7507 Specifies the port number for the REST API.
restRequestType GET Specifies the request type for the REST API.
restScheme http Specifies the scheme for the REST API.
restUri /mdm_rest/api/v1/ Specifies the base URI for the REST API.
server MDMCE Specifies the name of the WebSphere Application Server.
virtualhost MDMCE_Vhost Specifies the name of the virtual host for deployment.
war_file_name oed-1.4.0.0.war Specifies the name of the WAR file for the Dashboard.
war_file_path /opt/MDM/mdmui/dashboards/tnpmoed/prereq Specifies the location of the WAR file for the Dashboard.
WebSphere_location /opt/IBM/WebSphere/AppServer Specifies the location of the WebSphere Application Server.
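
Example
The following sketch shows how a few of these parameters might appear in the dashboards_config.ini file; the values are the defaults from the preceding table and are illustrative only:

app_name = dashboards
context_root = /dashboards
derbyDatabaseName = prddb
derbyPort = 1535
restPort = 7507
restScheme = http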

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

data_entry_properties.xml file parameters


The data_entry_properties.xml file defines the HTML properties and scripts that provide content for additional catalog and hierarchy elements that are displayed in the
Single Edit pane.

Values
For each set of <script></script> tags, you can specify:

type
There are two types of scripts you can specify in the <type></type> tags:

url
Note: The getURL(entry) function is deprecated.
content
Specifies a script whose getContent(entry) function returns HTML content. This HTML content is then used for displaying a div on the Single Edit pane
directly after the Common Attributes section.

title
In the <title></title> tags, specify the title of the div element.
path
In the <path></path> tags, specify the script directory and file path: scripts/triggers/script_Name, where script_Name is the name of the script.
extra
Optionally, through the <extra></extra> tags, you can pass extra HTML markup to format your div elements.

File location
The data_entry_properties.xml file is located in the $TOP/etc/default directory.

Single edit and multi-edit with rich search simplified user interface support
For the Single edit and multi-edit with rich search simplified screen, the data_entry_properties.xml file provides additional options to configure the behavior and display of
the frames. In addition to the required parameters mentioned earlier, you can also specify the following optional parameters for each script:

<position></position> tags
Specify the position of the div element for a script. The position can be tab or top. If this tag is not present for a script, the default position for the div element is top.
<display-state></display-state> tags
Specify the initial display state of a div element for a script, which is configured to be displayed at the top position. The display state can be expanded or collapsed.
If this tag is not present for a script, the initial display state for the div element is expanded.
<refresh-on></refresh-on> tags
Specify zero or more <action></action> tags. Each action tag represents a single action on the single edit screen on which the script should be run again and the
content of the div element refreshed. If the refresh-on tag for a script does not contain any action tags, the script will not be run after the initial load on any action.
If the refresh-on tag is not present for a script, the script is executed again and the content of the div element is refreshed on all valid actions for the container.
<action></action> tag
Specify a valid action on the Single Edit screen. The valid values of action tags for different container types are:

Catalog
save, refresh, revert, categorize
Catalog Collaboration Area
save, refresh, revert, categorize

Hierarchy
save, refresh, revert, modify_spec_mapping
Hierarchy Collaboration Area
save, refresh, revert, modify_spec_mapping

In addition to the <catalog></catalog> and <hierarchy></hierarchy> tags, the use of <collaboration-area></collaboration-area> tags is also
supported in the single edit and multi-edit with rich search simplified user interface.

Example

<company code="trigo">
<catalog name="catalog1">
<script>
<type>content</type>
<title>Sample Script 2</title>
<path>/scripts/triggers/sampleScript2</path>
<position>tab</position>
<refresh-on></refresh-on>
</script>
</catalog>
<hierarchy name="hierarchy1">
<script>
<type>content</type>
<title>Sample Script 6</title>
<path>/scripts/triggers/sampleScript6</path>
<position>tab</position>
<refresh-on></refresh-on>
</script>
</hierarchy>
<collaboration-area name="catalogCA1">
<script>
<type>content</type>
<title>Sample Script 7</title>
<path>/scripts/triggers/sampleScript7</path>
<position>tab</position>
</script>
</collaboration-area>
</company>

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

db.xml file parameters


The db.xml file contains all of the database-related parameters and is located in the $TOP/etc/default directory.

If you set the encrypt_password parameter to yes in the env_settings.ini file, the db_password_encrypted element contains the encrypted database password. If you set
the encrypt_password parameter to no, the db_password_plain element contains the database password in plain text format.
The elements in the db.xml file replace the following properties that were previously stored in the common.properties file:

db_userName
db_password
db_url
db_class_name

Note: Ensure that these properties are removed from the common.properties file.
You can encrypt the database password. The encrypt_password parameter controls whether an encrypted database password should be used.
Note: With the encrypt_password parameter set to yes, the database-related scripts require the dbpassword argument to be specified when launching the scripts. If the
dbpassword argument is not specified, you are prompted to enter the password.
Parameter Description Value Example
db_class_name Defines the fully qualified class name of the database driver that you use to connect to the database. Class name: Db2® - com.ibm.db2.jcc.DB2Driver; Oracle - oracle.jdbc.driver.OracleDriver db_class_name=oracle.jdbc.driver.OracleDriver
db_password Defines the password to log in to the database. Database password db_password=open
db_url Defines the JDBC URL that is used to connect to the database. JDBC URL db_url=jdbc:oracle:thin:@host_name:1521:instance_name
db_userName Defines a username to log in to the database. Database username db_userName=DB_User
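
Example
The following sketch shows how these elements might appear in the db.xml file. The enclosing root element and exact layout can differ in your installation, so treat this as an illustration of the elements described above rather than the definitive file format:

<db_userName>DB_User</db_userName>
<db_password_plain>open</db_password_plain>
<db_url>jdbc:oracle:thin:@host_name:1521:instance_name</db_url>
<db_class_name>oracle.jdbc.driver.OracleDriver</db_class_name>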

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

docstore_mount.xml file parameters
The docstore_mount.xml file defines the location of your OS file system mount points and how to handle incoming data in the Document Store.

Values
You can mount the file system directories ftp and public_html to the Document Store.

The ftp and public_html directories are defined in the docstore_mount.xml configuration file to provide the location of your OS file system mount points.

File location
The docstore_mount.xml file is located in the $TOP/etc directory.

Syntax
The $supplier_base_dir/ file system directory, file_Directory, is acquired from the supplier_base_dir parameter in the common.properties file. The $supplier_ftp_dir/ real
file path directory, real_File_Path_Directory, is constructed with the following string:

ftp_root_dir + "/" + current_company_code/

Where ftp_root_dir is a parameter in the common.properties file, whose value is appended to the current company code to form the real file path directory:
real_File_Path_Directory.

You can specify the inbound attribute as yes or no. The inbound attribute must be set to yes for the Document Store to flag a directory in the file system as inbound,
and to handle incoming data. When a directory is flagged as inbound, any new files that are copied to the inbound directory are automatically displayed in the Document Store.

<mnts>
 <mnt doc_path="file_Directory" real_path="real_File_Path_Directory" inbound="yes_OR_no"/>
</mnts>

Example
In this example, the ftp system directory is mounted to $supplier_ftp_dir/ and public_html is mounted to $supplier_base_dir/.

<?xml version="1.0"?>
<mnts>
 <mnt doc_path="/public_html/" real_path="$supplier_base_dir/" inbound="yes"/>
 <mnt doc_path="/ftp/" real_path="$supplier_ftp_dir/" inbound="yes"/>
</mnts>

Related concepts
Managing document store

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

env_settings.ini file parameters


Defines the env_settings.ini file parameters.

Property Description
[db.db2] section
cmdline_client_ddl_error_filter ORA-01418: |ERROR at line |ORA-00942: |select|SELECT|\*|Failed to execute:
[dam] section
enable Specifies whether Digital Asset Management is to be enabled. Possible value is yes or no. The default value is no.
[freetext-search] section
Important: Starting with IBM® Product Master 12.0 Fix Pack 8, Elasticsearch is replaced with OpenSearch because of a change in licensing strategy (no longer open
source) by Elastic NV. You therefore need to move to OpenSearch 2.4.1 or later and run full indexing. For more information, see Installing OpenSearch
(Fix Pack 8 and later).
elastic_authentication
(Fix Pack 8 and later) Specifies whether the authentication for the OpenSearch is to be enabled.
(Fix Pack 7 and earlier) Specifies whether the authentication for the Elasticsearch is to be enabled.

Possible value is true or false. The default value is false.

elastic_cluster_name
(Fix Pack 8 and later) Specifies the name of the OpenSearch cluster.
(Fix Pack 7 and earlier) Specifies the name of the Elasticsearch cluster.

The default value is es-cluster.

elastic_encrypt_password Specifies whether to skip elastic_password in this file, and instead provide it as a parameter to the configureEnv.sh file. To encrypt the password, set the value to yes and remove the value of the password parameter.
elastic_password
(Fix Pack 8 and later) Specifies the plain text password for the OpenSearch username, if the value of the elastic_authentication is true.
(Fix Pack 7 and earlier) Specifies the plain text password for the Elasticsearch username, if the value of the elastic_authentication is true.

elastic_server_hostname
(Fix Pack 8 and later) Specifies the hostname of the OpenSearch Server with the port number.
(Fix Pack 7 and earlier) Specifies the hostname of the Elasticsearch Server with the port number.

Example
"elastic_server_hostname=https://abc.com:9200" or "elastic_server_hostname=http://abc.com:9200"
elastic_username
(Fix Pack 8 and later) Specifies the OpenSearch username, if the value of the elastic_authentication is true.
(Fix Pack 7 and earlier) Specifies the Elasticsearch username, if the value of the elastic_authentication is true.

enable Specifies whether Free Text Search is to be enabled. Possible value can be yes or no. The default value is no.
indexer_port Specifies the port number for the Indexer application. Change the port number if port 9096 is not available.
pimcollector_port Specifies the port number for the pim-collector application. Change the port number if port 9095 is not available.
[hazelcast] section
hazelcast_server_IpAddress Specifies the IP address of the Hazelcast Server, for example, "hazelcast_server_IpAddress=10.200.10.232".
hazelcast_server_port Specifies the port number of the Hazelcast Server, for example, "hazelcast_server_port=5702".
[mdmrest-app-war] section
enable Specifies whether REST APIs are to be enabled. Possible value is yes or no. The default value is no.
[mdmui-app-war] section
enable Specifies whether Persona-based UI is to be enabled. Possible value is yes or no. The default value is no.
[mlservice] section
enable Specifies whether the ML services are enabled. Possible value is "yes" or "no". The default value is "no".
ml_service_protocol Specifies the protocol for the ML service.
Specifies the hostname where the ML services are deployed.
ml_controller_port Specifies the port number to start the controller service.
ml_attributes_port Specifies the port number to start the attributes service.
ml_categorization_port Specifies the port number to start the categorization service.
ml_standardization_port Specifies the port number to start the standardization service.
[mongo] section
mongo_database Specifies the MongoDB database name.
mongo_hostname Specifies the hostname of MongoDB server.
mongo_password Specifies the MongoDB user password.
mongo_port Specifies the MongoDB port number. The default value is 27017.
mongo_username Specifies the MongoDB username.
mongodb_encrypt_password Specifies whether to skip mongo_password in this file, and instead provide it as a parameter to the configureEnv.sh file. To encrypt the password, set the value to yes and remove the value of the password parameter.
[sdp] section
restapi_base_url Specifies the REST API URL for the SDP Automation feature.
[smtp] section
from_address Specifies the SMTP sender address for the email bean.
smtp_additional_props Specifies additional SMTP properties as semicolon-separated key-value pairs. For example, smtp_additional_props = mail.smtp.timeout=10;mail.smtp.sasl.enable=true
smtp_address Specifies the SMTP address for the email bean.
smtp_authentication Specifies whether the authentication for the SMTP is to be enabled. Possible value is true or false. The default value is false.
smtp_encrypt_password Specifies whether to skip smtp_password in this file, and instead provide it as a parameter to the configureEnv.sh file. To encrypt the password, set the value to yes and remove the value of the password parameter.
smtp_password Specifies the SMTP password.
smtp_port Specifies the SMTP port number. The default value is 25.
smtp_username Specifies the SMTP username.
[sso] section
enable_sso Specifies whether SSO is to be enabled. Possible value is true or false. The default value is false.
sso_company Specifies the company name when SSO is enabled.
[vendor-portal] section
enable Specifies whether Vendor portal is to be enabled. Possible value is yes or no. The default value is no.
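
Example
The following fragment sketches how a few of these sections might look in the env_settings.ini file; the hostname and values are placeholders, and only a subset of the sections is shown:

[freetext-search]
enable = yes
elastic_server_hostname = https://abc.com:9200
elastic_authentication = true

[mongo]
mongo_hostname = localhost
mongo_port = 27017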

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

gdsConfig.properties file parameters


Defines the gdsConfig.properties file parameters.

Property Default Value Description


allowedStatusToPublishItem Registered Specifies the allowed status of an item to submit for publication.
allowedStatusToRegisterAddItem Unregistered Specifies the allowed status of an item to submit for registration.
allowedStatusToRegisterModifiedItem Modified Specifies the allowed status of an item to submit for modification.
defaultEndIndex 25 Specifies the end index to filter the records for the reports.
defaultStartIndex 0 Specifies the start index to filter the records for the reports.
gdsItemStatus Global_Attributes_Spec/GDSItemStatus Specifies the path attribute for the current status of the item.
gdsItemValidationStatus Global_Attributes_Spec/ItemStatus Specifies the path attribute for the current validation status of the item.
glnIdentifierSpecPath GLN_Spec/GLN Identifier Specifies the path attribute name for the global location number (GLN) identifier.
glnItemTypeSpecPath GLN_Spec/Item Type Specifies the path attribute for the item type.
glnLookupTableName GLN_Type_LookUp Specifies the name of the GLN lookup table.
glnNumberSpecPath GLN_Spec/Global Location Number Specifies the path attribute name for the GLN.
glnTradingPartnerCountrySpecPath GLN_Spec/Trading Partner Country Specifies the path attribute name for the trading partner country.
glnTradingPartnerNameSpecPath GLN_Spec/Trading Partner Name Specifies the path attribute for the Trading partner name.
globalClassificationCatCodeSpecPath Global_Attributes_Spec/globalClassificationCategory/code Specifies the path attribute for the Global Classification Category Code.
globalClassificationCatDescSpecPath Global_Attributes_Spec/globalClassificationCategory/description Specifies the path attribute for the Global Classification Category Description.
globalGTINNameTextSpecPath Global_Attributes_Spec/GTINName/text Specifies the path attribute for the global trade item number (GTIN) name.
globalInformationProviderSpecPath Global_Attributes_Spec/InformationProvider Specifies the path attribute for the information provider.
globalInternalClassificationCatCodeSpecPath Global_Attributes_Spec/globalClassificationCategory/description Specifies the path attribute for the Internal Classification Category Code.
globalInternalClassificationCatDescSpecPath Global_Attributes_Spec/InternalClassificationDesc Specifies the path attribute for the Internal Classification Category Description.
globalProductTypeSpecPath Global_Attributes_Spec/ProductType Specifies the path attribute for the product type.
globalTargetMarketSpecPath Global_Attributes_Spec/TargetMarket Specifies the path attribute for the target market.
globalTradeItemNumberSpecPath Global_Attributes_Spec/GlobalTradeItemNumber Specifies the path attribute for the global trade item number.
internalHierarchy Internal_Hierarchy Specifies the name of the internal hierarchy.
itemEnrichmentCollborationAreaName Item Enrichment Specifies the name of the item enrichment collaboration area for GDS.
marketGrpCatalogName Market Group Catalog Specifies the name of the GDS market group catalog that has the market group-related products.
productCatalogName GDS Product Catalog Specifies the name of the GDS product catalog that has the GDS-related products.
publicationInformationAttPath Global_Local_Attributes_Spec/PublicationInformation Specifies the publication information attribute identifier.
publishedGLNsPath Global_Attributes_Spec/publishedGLNs Specifies the path attribute for the published GLN.
publishedTypeAttName publishedType Specifies the published type attribute identifier.
publshedGlnAttName publishedGLNs Specifies the published GLN attribute identifier.
targetMarketCategoryName TA Specifies the name of the root category for the Target market hierarchy.
targetMarketHierarchyName Target_Market_Hierarchy Specifies the name of the target market hierarchy.
tradingPartnerCatalogName GDS Product Catalog Specifies the name of the GDS trading partner catalog that has all the trading partner-related products.
tradingPartnerHierarchyName Trading_Partner_Hierarchy Specifies the name of the trading partner hierarchy.
validationSucessStatus Validation Successful Specifies the successful validation status of an item.
worflowStepName Modify Item Specifies the name of the workflow step for GDS products.
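
Example
A minimal sketch of a gdsConfig.properties fragment, using the defaults from the preceding table; the key=value syntax assumes the standard Java properties format:

allowedStatusToPublishItem=Registered
gdsItemStatus=Global_Attributes_Spec/GDSItemStatus
productCatalogName=GDS Product Catalog
targetMarketHierarchyName=Target_Market_Hierarchy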

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

mass-edit.properties file parameters


Defines the mass-edit.properties file parameters.

Property Default Value Description
attribute_path attributePath Specifies the name of the attribute for the attribute path.
batch_size 1000 Specifies the batch size.
collab_id collabId Specifies the name of the attribute for the collaboration area identifier.
filter_attr_separator ##filter## Specifies the separator for the filter parameters.
filter_params filterParams Specifies the name of the attribute for the filter parameters.
mass_edit_report_name MASS_EDIT_REPORT Specifies the name of the report.
new_value newValue Specifies the name of the attribute for new value.
old_value oldValue Specifies the name of the attribute for old value.
step_id stepId Specifies the name of the attribute for the step identifier.
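
Example
A sketch of a mass-edit.properties fragment with the defaults from the preceding table; the key=value syntax assumes the standard Java properties format:

batch_size=1000
mass_edit_report_name=MASS_EDIT_REPORT
filter_attr_separator=##filter##
attribute_path=attributePath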

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

mdm-cache-config.properties parameters
The cache property files contain the cache memory parameters of Product Master.

The mdm-cache-config.properties parameters define the cache properties of Product Master. All settings for maxElementsInMemory refer to the maximum number of an
object type that is cached in memory per JVM instance.

When you modify object cache settings, thoroughly examine the JVM memory settings of the potentially affected services to ensure that they are adequate, because the
size of the object cache is generally proportional to the amount of memory that is required. If an excessively large cache exceeds the available memory, performance
degrades severely; conversely, an inadequately sized cache also results in poor performance because of excessive CPU and disk utilization.

You can determine the size of the object cache that you require by checking the cache hit percentage in the user interface, on the pane under System
Administrator > Performance Info > Caches. Select the appropriate object in the drop-down menu to view its cache information. The percentages that are displayed are
calculated based on usage, so give the system reasonable time before referencing the cache hit percentage; the calculated percentages resemble real
results only after your system has been running under a normal production load for an extended duration.

Note: When resetting the cache size, you must restart the server for the changes to take effect.

Product Master built-in cache


Product Master provides built-in cache for a list of object types. It is important to understand the scope of the cache, which has implications on the life span and memory
consumption on the server. If a cache is maintained in Application Context, it is global to the server instance (each Product Master service); if a cache is contained within
User Session, it is specific to one user in one session, and has different copies of cache objects for different user sessions.
The global cache in Application Context keeps one copy and therefore uses less memory, but has a life span of the entire service process until the service is shut down. On
the other hand, the cache in User Session keeps one copy for each user session and therefore can use more memory when there are many user sessions. The cache in a
user session has the life span of the session; for example, when a user logs out, the session is terminated and the user session caches are released.

The specific cache objects in either the global cache or the user session cache still observe the rules of the object count limit and the timeout limit; when either limit is
triggered, some cached objects are removed from the cache container, and the corresponding memory is reclaimed.

With the introduction of distributed cache (ehcache), most of the objects are cached globally. Multiple service processes can access the same copy of cached objects,
which resolves the potential issue of stale cache. This enhancement leads to better use of memory by removing the User Session based cache; the cached objects can
also survive even if one service process is shut down.

To help you determine the effectiveness of each cached object type, the Product Master user interface displays the number of cached objects and the counts of cache
hits and cache misses. Analyze this information to determine whether the cache settings are appropriate for your implementation and user scenarios.

Solution level cache


Product Master built-in caches usually hold definition-type objects. If some objects are frequently accessed in a specific scenario, but that object type does not have
built-in cache capability, a solution-level cache might help. A typical example is a category or hierarchy cache, as in the following script:

function getCategoryFromCache(hmCategoryCache, hierarchy, sCategoryPK) {
    // Fetch the category from the hierarchy only on a cache miss.
    if (!hmCategoryCache.containsKey(sCategoryPK)) {
        hmCategoryCache[sCategoryPK] = hierarchy.getEntryByPrimaryKey(sCategoryPK);
    }
    return hmCategoryCache[sCategoryPK];
}

var hmCategoryCache = [];
var hierarchy = getCategoryTreeByName("My Hierarchy Name");

// ...

// Any time you want to retrieve a category (for example, within a loop where you are
// processing an item import), use this instead...
var category = getCategoryFromCache(hmCategoryCache, hierarchy, sCategoryPK);
item.mapCtgItemToCategory(category); // (for example)

all.useMulticast parameter
The all.useMulticast parameter defines whether the Product Master uses multicast for caching.

accessCache.maxElementsInMemory parameter
The accessCache.maxElementsInMemory parameter defines the maximum number of Role access objects that can be cached.
attrGroupCache.maxElementsInMemory parameter
The attrGroupCache.maxElementsInMemory parameter defines the maximum number of attribute groups that can be cached in the attribute group cache.
attrGroupCache.timeToLiveSeconds parameter
The attrGroupCache.timeToLiveSeconds parameter defines the amount of time that a cached attrGroup definition may be out of sync with changes to the attr
groups or specs.
catalogCache parameter
The catalogCache.maxElementsInMemory parameter defines the maximum number of catalogs that can be cached in memory per JVM instance.
catalogDefinitionCache.maxElementsInMemory parameter
The catalogDefinitionCache.maxElementsInMemory parameter defines the number of catalog definitions cached. The value of the
catalogDefinitionCache.maxElementsInMemory parameter is equal to the sum of total number of catalogs, lookup tables, and collaboration areas. The size of the
cache is small and the cache holds the definition of the catalog.
ctgViewCache.maxElementsInMemory
The ctgViewCache.maxElementsInMemory parameter defines the maximum number of container view objects that can be cached in memory per JVM instance.
lookupCache.maxElementsInMemory parameter
The lookupCache.maxElementsInMemory parameter defines the maximum number of Lookup Tables that can be cached in memory per JVM instance.
roleCache.maxElementsInMemory parameter
The roleCache.maxElementsInMemory parameter defines the maximum number of user roles that can be cached.
scriptCache.maxElementsInMemory parameter
The scriptCache.maxElementsInMemory parameter defines the maximum number of both Document Store scripts and spec scripts that can be cached.
specCache__KEY_START_VERSION_TO_VALUE.maxElementsInMemory parameter
The specCache__KEY_START_VERSION_TO_VALUE.maxElementsInMemory parameter defines the maximum number of spec definitions that can be cached into
memory.
workflowCache.maxElementsInMemory parameter
The workflowCache.maxElementsInMemory parameter defines the maximum number of workflows that can be cached.
wsdlCache.maxElementsInMemory parameter
The wsdlCache.maxElementsInMemory parameter defines the maximum number of WSDL objects that can be cached.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

all.useMulticast parameter
The all.useMulticast parameter defines whether the Product Master uses multicast for caching.

Parameter values
Value
true/false
Default value
true

Set the value to false if multicast is not supported (Product Master Cloud or Docker release) or if you do not want to use multicast. In that case, Remote Method Invocation
(RMI) transport is used for cache synchronization over multiple nodes in a cluster; this is the same RMI transport that the Admin service uses to
communicate with the other services.

Example
In this example, the all.useMulticast parameter is set to true.

all.useMulticast=true

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

accessCache.maxElementsInMemory parameter
The accessCache.maxElementsInMemory parameter defines the maximum number of Role access objects that can be cached.

Parameter values
Value
Integer
Default value
500
Recommended value range
500 - 1000

Set the cache size according to your system requirements by viewing the cache hit ratio for Access objects in the user interface. Make adjustments to your setting until the
hit percentage is over 95%.

Example
In this example, the maximum number of Access objects is 500.

accessCache.maxElementsInMemory=500

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

attrGroupCache.maxElementsInMemory parameter
The attrGroupCache.maxElementsInMemory parameter defines the maximum number of attribute groups that can be cached in the attribute group cache.

Parameter values
Value
Integer
Default value
100
Recommended value
100

During startup, if your system finds that the parameter value is missing or invalid, the parameter value will automatically revert to the system default value of 100.

Example
In this example, the attrGroupCache is set to 100.

attrGroupCache.maxElementsInMemory=100

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

attrGroupCache.timeToLiveSeconds parameter
The attrGroupCache.timeToLiveSeconds parameter defines the amount of time that a cached attrGroup definition may be out of sync with changes to the attr groups or
specs.

Parameter values
Value
Integer
Default value
300
Recommended value
300

During startup, if your system finds that the parameter value is missing or invalid, the parameter value will automatically revert to the system default value of 300.

Example
In this example, the wait time is 300 seconds.

attrGroupCache.timeToLiveSeconds=300

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

catalogCache parameter
The catalogCache.maxElementsInMemory parameter defines the maximum number of catalogs that can be cached in memory per JVM instance.

Parameter values
Value
Integer
Default value
50
Recommended value
100

If the number of cached catalogs in the system equals the catalogCache value and a catalog selected for caching is requested by the Catalog Manager, then the least
frequently used cached catalog is cleared from the cache to allow for the requested cached catalog to be processed.

The catalogCache holds the number of cached catalogs and not the number of cached items, and thus having large cached catalogs where each catalog has multiple items
can cause memory issues.

Example
In this example, the maximum number of catalogs in the cache is 50.

catalogCache.maxElementsInMemory=50

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

catalogDefinitionCache.maxElementsInMemory parameter
The catalogDefinitionCache.maxElementsInMemory parameter defines the number of catalog definitions cached. The value of the
catalogDefinitionCache.maxElementsInMemory parameter is equal to the sum of total number of catalogs, lookup tables, and collaboration areas. The size of the cache is
small and the cache holds the definition of the catalog.

Parameter values
Value
Integer
Default value
100
Recommended value range
100

Example
In this example, the number of catalog definitions that are cached is 100.

catalogDefinitionCache.maxElementsInMemory=100

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

ctgViewCache.maxElementsInMemory
The ctgViewCache.maxElementsInMemory parameter defines the maximum number of container view objects that can be cached in memory per JVM instance.

Parameter values
Value
Integer
Default value
100
Recommended value
100

During startup, if your system finds that the parameter value is missing or invalid, the parameter value will automatically revert to the system default value of 100.

Container view objects generally have a small footprint, so the value should be set to contain most of your views in the cache.

Example
In this example, the maximum number of Container View objects is 100.

ctgViewCache.maxElementsInMemory=100

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

lookupCache.maxElementsInMemory parameter
The lookupCache.maxElementsInMemory parameter defines the maximum number of Lookup Tables that can be cached in memory per JVM instance.

Parameter values
Value
Integer
Default value
100
Recommended value
100

During startup, if your system finds that the parameter value is missing or invalid, the parameter value will automatically revert to the system default value of 100.

Set the cache size according to your system requirements, by viewing the cache hit ratio for Lookup Tables in the user interface. Make adjustments to your setting until the
hit percentage is over 95%.

Recommendation: To avoid running out of memory, if a solution has large Lookup Tables with more than 1000 entries, keep the cache size small regardless
of your hit percentage.

Example
In this example, the maximum number of Lookup Tables is 100.

lookupCache.maxElementsInMemory=100

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

roleCache.maxElementsInMemory parameter
The roleCache.maxElementsInMemory parameter defines the maximum number of user roles that can be cached.

Parameter values
Value
Integer
Default value
50
Internal value
50
Recommended value range
50 - 100

Set the cache size according to your system requirements, by viewing the cache hit ratio for user roles in the user interface. Make adjustments to your setting until the
hit percentage is over 95%.

Example
In this example, the maximum number of user roles is 50.

roleCache.maxElementsInMemory=50

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

scriptCache.maxElementsInMemory parameter
The scriptCache.maxElementsInMemory parameter defines the maximum number of both Document Store scripts and spec scripts that can be cached.

Parameter values
Value
Integer
Default value
200
Recommended value
200

To configure this parameter, view the cache hit ratio for specs in the user interface, under System Administrator > Performance Info > Caches. Increase this parameter's
value until the cache hit percentage reaches over 95%.

Example
In this example, the maximum number of Document Store scripts and spec scripts is 200.

scriptCache.maxElementsInMemory=200

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

specCache__KEY_START_VERSION_TO_VALUE.maxElementsInMemory parameter
The specCache__KEY_START_VERSION_TO_VALUE.maxElementsInMemory parameter defines the maximum number of spec definitions that can be cached into memory.

Parameter values
Value
Integer
Default value
100

During startup, if your system finds that the parameter value is missing or invalid, the parameter value will automatically revert to the system default value of 100.

Set the cache size according to your system requirements, by viewing the cache hit ratio for Specs in the user interface. Make adjustments to your setting until the hit
percentage is over 95%.

Example
In this example, the maximum number of Spec definitions is 100.

specCache__KEY_START_VERSION_TO_VALUE.maxElementsInMemory=100

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

workflowCache.maxElementsInMemory parameter
The workflowCache.maxElementsInMemory parameter defines the maximum number of workflows that can be cached.

Parameter values
Value
Integer
Default value
250
Recommended value
250

Example
In this example, the maximum number of workflows is 300.

workflowCache.maxElementsInMemory=300

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

wsdlCache.maxElementsInMemory parameter
The wsdlCache.maxElementsInMemory parameter defines the maximum number of WSDL objects that can be cached.

A WSDL (Web Services Description Language) document is an XML document that describes a Web service. A WSDL includes the location of the Web service and the
operations (or methods) that the service exposes.

To enhance the response time for your Web services and to avoid loading your WSDLs each time a request comes in, WSDLs are cached in the JVM level cache. The lookup
is done by the Web service name, which acts as the key for the lookup.

When the cache reaches the threshold specified by the wsdlCache.maxElementsInMemory parameter, the WSDL document with the oldest date of last use is replaced
with any new WSDLs.

Parameter values
Value
Integer
Default value
50
Recommended value
50

To configure this parameter for optimal performance, look at your cache hit ratio for WSDL in System Administrator > Performance Info > Caches. Increase the wsdlCache
parameter value until the hit ratio percentage reaches over 95%.

Example
In this example, the maximum number of WSDL objects is 50.

wsdlCache.maxElementsInMemory=50

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

mdm-cache-config.xml.template file parameters


You should not update the mdm-cache-config.xml.template file unless you are advised by IBM Support to do so. However, you might need to adjust the multicast
settings depending on your network configuration.

Values
The multicast settings are configured by the installer. However, you might need to update them after the installation is complete.

There are four properties that can be set in the cacheManagerPeerProviderFactory node:

peerDiscovery
This is a mandatory property. Specify "automatic" unless your network prohibits multicast. If that is the case, consult the ehcache documentation on manual
configuration of RMI communication.
multicastGroupAddress
This is a mandatory property. Specify a valid multicast group address in the range of 239.XXX.XXX.XXX. You must consult with your system administrator before you
specify a multicast group address.
multicastGroupPort
This is a mandatory property. Specify a dedicated port for the multicast traffic. The default is 4446 and it is recommended that this not be changed to avoid
collisions with other applications.
timeToLive
This setting configures how far multicast packets propagate past routers. 0 is restricted to the same host, 1 is restricted to the same subnet, 32 is restricted to the
same site, 64 is restricted to the same region, 128 is restricted to the same continent, 255 is unrestricted. The recommended value is either 0 or 1.

When changes are added to the mdm-cache-config.xml.template file and the configureEnv.sh script is run, the changes are then reflected in the mdm-ehcache-config.xml
file.

File location
The mdm-cache-config.xml.template file is located in $TOP/etc/default/ directory.

Example
This is an example of the mdm-cache-config.xml.template file:

<cacheManagerPeerProviderFactory
class="net.sf.ehcache.distribution.RMICacheManagerPeerProviderFactory"
properties="peerDiscovery=automatic,
multicastGroupAddress=239.10.10.10,

multicastGroupPort=4446,
timeToLive=1"/>

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

mdmce-roles.json.default file parameters


Defines the parameters of the mdmce-roles.json.default file.

Each role has designated access to a fixed list of pages (components). Index indicates the position of a component in the left navigation menu of the home page.

The types of roles are: Full Admin, Admin, Basic, Catalog Manager, Digital Asset Manager, GDS Supply Editor, Merchandise Manager, Solution Developer, and Vendor.

Table 1. Modules in the mdmce-roles.json.default file

name: Home
components:
 Collaboration Area Cards - Full Admin, Admin, Basic, Catalog Manager, Category Manager, Merchandise Manager, Solution Developer
 Your Tasks
 Data Model Summary
 Product Completeness
 Recently Added Products
 Recently Modified Products
 Recently Added Assets - Full Admin, Admin
 Recently Modified Assets
 Top Products Listing By Group1 - Full Admin, Admin, Basic, Catalog Manager, Merchandise Manager, Solution Developer, Vendor
 Top Products Listing By Group2
 Top Products Listing By Group3
 My Notes
 My Shortcuts

name: Data Management - Digital Asset Manager
subMenu name: Digital Asset Management
components:
 DAM Menu Page

name: Explore
components:
 Item Explorer - Full Admin, Admin, Catalog Manager, Category Manager, Content Editor, Digital Asset Manager, GDS Supply Editor, Merchandise Manager, Solution Developer
 Category Explorer - Full Admin, Admin, Catalog Manager, Category Manager, Digital Asset Manager, GDS Supply Editor, Merchandise Manager, Solution Developer

name: Search
components:
 Item Search - Full Admin, Admin, Catalog Manager, Content Editor, GDS Supply Editor, Merchandise Manager, Solution Developer
 Digital Assets - Vendor
 Category Search - Full Admin, Admin, Catalog Manager, Solution Developer, Vendor
 Transactions - GDS Supply Editor

name: Data Model Manager
subMenu name:
 Spec Console - Full Admin, Admin, Solution Developer
 Attribute Collection Console - Full Admin, Admin, Solution Developer
 Rules Console - Full Admin, Admin, Solution Developer
 Lookup Table Console - Full Admin, Admin, Basic, Catalog Manager, Content Editor, GDS Supply Editor, Solution Developer, Vendor
 File Explorer - Full Admin, Admin, Basic, Catalog Manager, Content Editor, GDS Supply Editor, Solution Developer, Vendor

name: Dashboards
sequentialMenu name:
 Audit History - Full Admin, Admin, Catalog Manager
 DAM Summary - Full Admin, Admin, Merchandise Manager, Catalog Manager
 Data Sanity - Full Admin, Admin, Catalog Manager
 User Summary - Full Admin, Admin
 Vendor Summary - Full Admin, Admin, Catalog Manager, Solution Developer
 Worklist Summary - Full Admin, Admin, Catalog Manager, Content Editor, Solution Developer

name: Global Data Synchronization
components:
 Manage Items - GDS Supply Editor
 Synchronization Reports - GDS Supply Editor

name: Custom Tools - Full Admin, Admin, Catalog Manager, Solution Developer

name: Settings
components:
 Personalization Settings - Full Admin, Admin, Basic, Catalog Manager, Content Editor, Merchandise Manager, Solution Developer, Vendor
 FTS Settings - Full Admin, Admin, Solution Developer
 DAM Settings - Full Admin, Admin, Catalog Manager, Merchandise Manager, Solution Developer
 Change Password - Full Admin, Admin, Content Editor, Merchandise Manager, Solution Developer
 Chatbot Settings - Full Admin, Admin, Solution Developer
 Home page custom titles
 Theme
 Polling Time
 Error Panel Settings - Full Admin, Admin
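
Example
Because the file is JSON, a module entry might be structured as in the following sketch. The exact key names and nesting in your installation can differ, so treat this only as an illustration of how a module maps components to roles and menu positions:

{
  "name": "Home",
  "components": [
    { "index": 0, "name": "Collaboration Area Cards", "roles": ["Full Admin", "Admin", "Basic"] },
    { "index": 1, "name": "Your Tasks" }
  ]
}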

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

ml_configuration file parameters


Defines the ml_configuration file parameters.

Property Default Value Description


embedding_dim 100 Specifies the smallest dimension that is required to embed an object.
epoch 40 Specifies the number of passes of the entire training data set that the machine learning algorithm must complete.
hidden_layer_dim 200 Specifies the number of neurons in the hidden layer.
host Depending upon the scenario, can specify the hostname or the IP address of the MongoDB server.
learning_rate 0.1 Specifies the amount that the weights are updated during training.
name dam_g Specifies the name of the MongoDB database.
number_of_neighbors 4 Specifies the number of the neighbors to consider while constructing metadata (context pairs) for a given input.
password Specifies the password for the MongoDB database.
port 5000 Depending upon the scenario, can specify the port number for any of the following: the controller for the machine learning services, the MongoDB database, the attributes service, the categorization service, or the standardization service.
test_split_size 0.30 Specifies the size of the data that must be split as the test data set.
threshold 10 Specifies the threshold that is used when replacing a word that has a typographical error in the input.
username mluser Specifies the MongoDB username.
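
Example
Assuming the file uses key=value pairs (verify the format in your installation), a fragment with the defaults from the preceding table might look like this:

embedding_dim=100
epoch=40
hidden_layer_dim=200
learning_rate=0.1
port=5000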

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

restConfig.properties file parameters


Defines the restConfig.properties file parameters.

Property Default Value Description


access_control_allow_origin GET, POST, DELETE, PUT Specifies the access control allowed methods for cross-origin access.

access_control_expose_headers authorization, x-company, x-authtoken, content-type, content-language, content-length, cache-control, date, expires, last-modified, pragma, x-powered-by, set-cookie, enctype, x-ftpauthorization, content-disposition, x-ftsuser Specifies the access control allowed headers for cross-origin access.
ageLowSplit, ageMediumSplit, ageHighSplit 0-15, 15-30, 30-60, >60, in the past Specifies the Dashboard counts by age. To be effective, specify all three properties, such that low is greater than 0, and low is less than medium, which is less than high.
attribute_path Mass edit spec/attributePath Specifies the attribute path for the attribute.
cache_control no-cache, no-store, must-revalidate Specifies the cache control.
cache_expires 0 Specifies the cache expiry.
cache_pragma no-cache Specifies the cache pragma.
catalog-hierarchies-url /api/v1/catalogs/_catalogId_/hierarchies Specifies the catalog hierarchy URL for the GET service.
category-items-url Specifies the category items URL for the GET services.
category_max_limit 50 Specifies the maximum number of category items.
collab_filter_key Specifies the collaboration area filter to be set on the catalog attributes as a custom parameter.
collab_filter_method_call Specifies the method to be launched from the collaboration area filter implementation class.
collab_id Mass edit spec/collabId Specifies the attribute path for the collaboration identifier.
collab-items-url Specifies the collaboration entries URL for the GET, PUT, and POST services.
companyId FreeTextSearch/companyId Specifies the company ID for the Free text search report script.
completenessLowSplit 25
completenessMediumSplit 50
completenessHighSplit 75
Specify all three properties for the Item Completeness calculation in the Dashboard feature. The default ranges are 0-25, 25-50, 50-75, and 75-100. The value of the completenessLowSplit property should be greater than 0 and less than the completenessMediumSplit property. The value of the completenessHighSplit property should be less than 100.

cpd_cluster_host_url Specifies the host URL of the deployed instance on the IBM Cloud Pak® for Data.
cpd_username Specifies the username to connect to the IBM Watson® Knowledge Catalog to publish specs.
damEnabled NO Specifies whether the DAM feature is enabled.
dashboardsEnabled true Specifies whether the Dashboards feature is enabled.
docstore_download_size_mb_limit 500 Specifies the limit, in MB, for the download file size in the File Explorer page.
docstore_excluded_file_types bat,bin,cmd,com,cpl,exe,gadget,inf1,ins,inx,isu,job,jse,lnk,msc,msi,msp,mst,paf,pif,ps1,reg,rgs,scr,sct,shb,shs,u3p,vb,vbe,vbs,vbscript,ws,wsf,wsh,coff,xcoff,sh,shell,action,app,command,csh,ex_,ipa,ksh,osx,prg,run Specifies the file formats that are not supported by the File Explorer.
docstore_max_upload_size 100 Specifies the limit for the upload file size in the File Explorer page.
displayDescriptionAttributeName null Specifies the attribute for displaying the description in the Single-edit page.
displayThumbnailAttributeName null Specifies the attribute for displaying the thumbnail in the Single-edit page.
elastic_search_service_uri null Specifies the URI for the Elasticsearch service.
epochTime FreeTextSearch/epochTimeInStr Specifies the epoch time for the Free text search report script.
excluded_file_types exe.pif.application.gadget.msi.msp.com.scr.hta.cpl.msc.jar.bat.cmd.vb.vbs.vbe.js.jse.ws.wsf Specifies the file extensions to exclude during upload.
exportimport_reportname Specifies the attribute path for the report name.
exportimport_scriptname IBM MDMPIM Export Import Script Specifies the attribute path for the script name.
exportimportspec_attrs Specifies the attribute path for the attributes.
exportimportspec_collabname Specifies the attribute path for the collaboration area name.
exportimportspec_failurecount Specifies the attribute path for the failure count.

exportimportspec_itemids Specifies the attribute path for the item identifier.
exportimportspec_stepname Specifies the attribute path for the step name.
exportimportspec_totalcount Specifies the attribute path for the total count.
exportimportspec_xlfilename Specifies the attribute path for the XL file name.
filter_attr_separator ##filter## Specifies the separator for the filter attribute.
filter_params Mass edit spec/filterParams Specifies the attribute path for the filter params.
fts_auth_password null Specifies the password for the Free text search.
fts_auth_username null Specifies the username for the Free text search.
fts_enable_fuzzy_search false Specifies whether fuzzy search is enabled in the Free text search.
fts_indexer_report_name IBM MDMPIM Full Data Indexer Report Specifies the Free text search report name.
fts_is_auth_required false Specifies whether the Free text search authentication is required.
fts_is_password_encrypted false Specifies whether the password is encrypted for the Free text search.
fts_recent_jobs_from 7 Specifies the recent jobs for the Free text search within the specified number of days.
fts_report_threshold Specifies the threshold value that is used for fetching records from the Elasticsearch during the report generation.
fts_should_match_expression Specifies the expression that is used for the approximate matching criteria for the Free text search attribute-based search.
gdsProductCatalog Specifies whether the source container that is associated with the collaboration area is the GDS Product Catalog.
hierarchies- Specifies the hierarchies categories URL for the GET service.
categories-url
high-deadline- 10 Specifies the high deadline percentage for the Dashboard
percentage feature.
host-url /mdm_rest Specifies host URL for the mm-rest service.
item_cache_limi 1000 Specifies the REST item cache limit.
t
jobs_count 5 Specifies the number of jobs.
mass_edit_limit 10000 Specifies the limit for the mass edit report.
mass_edit_repo MASS_EDIT_REPORT Specifies name for the mass edit report.
rt_name
maxCount 15 Specifies the search type-ahead pagination parameter for the
max count.
maxFileSize 50 Specifies the image size for the single-edit page and the DAM
item response in the megabyte.
mdm_version 11 Specifies the version of the IBM® Product Master.
mdmce_asset_r Renditions Specifies the asset rendition attribute name.
enditions_attrib
ute_name
mdmce_digital_ Digital Asset Catalog Specifies the catalog name of the digital asset.
catalog
mdmce_shared_ null Specifies the name for the shared location.
location
medium- 30 Specifies the medium deadline percentage for the Dashboard
deadline- feature.
percentage
new_value Mass edit spec/newValue Specifies the attribute path for the new value.
old_value Mass edit spec/oldValue Specifies the attribute path for the old value.
pim_event_recei null Specifies the URL for PIM event receiver
ver_app_url
pimServiceUrl Specifies the pim service URL for the Free text search report
script.
prefix_import IMPORT_ITEMS Specifies that the Import jobs should begin with the specified
prefix.
recent_jobs_fro 7 Specifies the recent jobs in the specified days.
m
recent_schedule 7 Specifies the recent job schedules in the specified days.
s_from
report_job_file_ REPORT_INP_SPEC/fileName Specifies the file name of the report job.
name

1012 IBM Product Master 12.0.0


Property Default Value Description
roleFileJSONPat XXXX Specifies the Persona-based UIrole details.
h
saved-search- Specifies the Saved Search items URL for the GET.POST and the
items-url DELETE services.
scroll_active_pe Specifies how long the Elasticsearch context should be kept
riod_in_mins alive during the report generation.
selection-items- Specifies the selection items URL for GET and the POST
url services.
setExpirationTi 240 Specifies the time when the token expires.
meMinutesInTh
eFuture
setNotBeforeMi 2 Specifies the time before which the token is not yet valid.
nutesInThePast
spec_max_limit 500 Specifies the maximum limit for the Spec creation.
spec_max_node 500 Specifies the maximum node limit in Spec.
_count
spec_max_occu 50 Specifies the maximum occurrence limit for a node.
rrence_limit
spec_nested_gr 8 Specifies the nested levels of the grouping limit for a node.
ouping_limit
step_id Mass edit spec/stepId Specifies the attribute path for the step identifier.
tradingPartnerC Specifies whether the source container that is associated with
atalog the collaboration area are Trading Partner Catalog.
vendor_organiza Vendor Organization Hierarchy Specifies the name of the Vendor organization hierarchy.
tion_hierarchy
vendor_role_na Vendor Specifies the role name for a Vendor.
me
version 1 Specifies the version information.
wkc_auth_api_k The generated API Key for authentication.
ey
wkc_ipm_catalo The name of the IBM Watson Knowledge Catalog where the
g_name assets are created. The catalog name needs to be specified as a
configuration.
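
These entries are plain key=value properties. As a quick orientation, the following is a minimal sketch of how a few of them might appear in the Persona-based UI properties file; the exact file name and location depend on your installation, and the values shown are simply the defaults from the table above.

# Hypothetical excerpt from the Persona-based UI properties file (file name assumed)
completenessLowSplit=25
completenessMediumSplit=50
completenessHighSplit=75
dashboardsEnabled=true
docstore_max_upload_size=100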

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Shell scripts
You can use the scripts that are provided with IBM® Product Master to perform many functions quickly and efficiently.

Ensure that you do not use the following keywords in your scripts. If any of the following keywords are used, your script might fail to compile. These keywords are reserved
in the Product Master scripting language:

var
function
if
else
while
for
new
return
break
continue
true
false
null
toInteger
toDouble
toString
script_execution_mode=not_compiled
script_execution_mode=compiled_if_available
script_execution_mode=compiled_only
script_execution_mode=java_api

The following words are reserved as implicit variable names:

item
err
val
res
category
feed_doc_path
cmsInstance
cmsMetadata
cmsReadOnlyAttribs

All script operation names are reserved as well.
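
If you want an automated scan before compiling, a minimal shell sketch such as the following can flag reserved names that appear in your custom scripts; the script directory path is an assumption, and hits need manual review because many reserved words also occur in legitimate contexts:

# Hypothetical pre-check: list whole-word occurrences of reserved names in custom scripts
# (the $TOP/custom_scripts path is an assumption; point it at your script directory)
for word in var function toInteger toDouble toString item err val res category; do
    grep -rnw "$word" "$TOP/custom_scripts" 2>/dev/null
done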

abort_local.sh script
Use the abort_local.sh script to abort IBM Product Master and all services from the command line.
analyze_schema.sh script
Use the analyze_schema.sh script to generate statistics of your Oracle and DB2® databases.
checkForCompileError.sh script
Use the checkForCompileError.sh script for automated checking of your scripts for compilation errors in the IBM Product Master environment.
cleanup_cmp.sh script
Running the cleanup_cmp.sh script on an existing company in a Product Master instance removes the company object from the Product Master database, along with all
information in the company, such as items and catalogs, in the IBM Product Master environment.
configureEnv.sh script
Use the configureEnv.sh script to configure your installation of IBM Product Master. Running this script validates the env_settings.ini file and notifies you
of any errors, generates build.properties and common.properties files, and generates the configuration for Product Master services.
copy_files_for_appsvr.sh script
Use the copy_files_for_appsvr.sh script to copy files for the IBM Product Master appsvr service. This script is located in the $TOP/bin directory.
create_appsvr.sh script
Use the create_appsvr.sh script to deploy a second appsvr service on the same server for vertical clustering in the IBM Product Master environment.
create_cmp.sh script
Use the create_cmp.sh script to create a company for production deployment or a test company that you can use to test your installation and initial login to IBM
Product Master services.
create_pimdb.sh script
Use the create_pimdb.sh script to create an IBM Product Master database in a production environment.
create_pimdevdb.sh script
Use the create_pimdevdb.sh script to create an IBM Product Master database for use in a test or development environment.
create_schema.sh script
Use the create_schema.sh script to create the schema for your database.
create_vhost.sh script
Use the create_vhost.sh to create a virtual host to run a second appsvr service on the same server for vertical clustering in the IBM Product Master
environment.
db2_export.sh script
Use the db2_export.sh script, which can be found in the $TOP/src/db/scripts/backup directory, to export the IBM Product Master schema. Running this script
generates the SQL scripts required to create tables, indexes and sequences with their current values, and stores them in a tar file in the back up directory.
db2_fast_import.sh script
Use the db2_fast_import.sh script to import the contents of the tar file generated by running the db2_export.sh script into the database for a new IBM
Product Master environment. You can use this script to import the schema into the same or a different database under the same schema name or a different
schema name.
delete_old_versions script
You can use the delete_old_versions.sh shell script to delete old versions of objects that are no longer needed. Deleting old object versions restores the database
storage space and therefore improves database performance.
drop_schema.sh script
Use the drop_schema.sh script to delete the IBM Product Master schema. Before you issue the drop_schema.sh script, ensure that you stop Product Master.
envexpimpXMLValidator.sh script
Use the envexpimpXMLValidator.sh script to validate all the XML files contained in the environment files generated by the exportCompanyAsZip.sh script. The
XML files are validated against the .xsd files of IBM Product Master.
estimate_old_version script
You can use the estimate_old_version.sh shell script to get a size estimate for the number of old object versions in various Product Master database tables.
exportCompanyAsZip.sh script
Use the exportCompanyAsZip.sh script to export your company data model from the command line. You can import this data model in an IBM Product Master
target instance.
get_ccd_version.sh script
Use the get_ccd_version.sh script to get the version of the IBM Product Master ccd.
get_params.sh script
Use the get_params.sh script to parse the argument list into named variables in the IBM Product Master environment.
get_service_status.sh script
Use the get_service_status.sh script to get the status of individual IBM Product Master services.
importCompanyFromZip.sh script
Use the importCompanyFromZip.sh script to import your IBM Product Master data model from the command line. The data model is stored in a ZIP file created
by running the exportCompanyAsZip.sh script.
importRelationshipsAsCompany.sh script
Use the importRelationshipsAsCompany.sh script to import your IBM Product Master data model.
indexRegenerator.sh script
Use the indexRegenerator.sh script to regenerate indexes for entities such as items and categories. If you change the index option for an attribute of the spec
that is associated with the item or category after creating some other items or categories, then you must run this script to regenerate indexes. Regenerating indexes
ensures that you get correct search results based on the changed attribute. You can use only one run_options combination at a time. However, you can use zero or
more tuning_options together.
install_war.sh script
Use the install_war.sh script to replace the existing WebSphere® Application Server components when you deploy a second appsvr service in the IBM Product
Master environment.
load.sh script
Use the load.sh script to load the contents of a folder into a company. The format of the content should be the same as the format of the content that is created by
extracting the file that is created by running the exportCompanyAsZip.sh script.
pimprof.sh script
Use the pimprof.sh script to start and stop services, and to capture memory snapshots in the IBM® Product Master environment.
pimSupport.sh script
Use the pimSupport.sh script for collecting diagnostic information if you encounter any issues with IBM Product Master and need to share data with the technical
support team.
rename_cmp.sh script
Use the rename_cmp.sh script to rename a company code and company name in the IBM Product Master environment. All users must log in with the new
company code after this script is run.
rmi_status.sh script
IBM Product Master runs by using six key services, each of which runs as a separate process on its own Java virtual machine (JVM). The rmi_status.sh script can be
run to print a list of all the IBM Product Master services that are currently running. In a non-clustered environment, there are six services. In a clustered
environment, one service can have multiple processes and multiple entries.
run_job_template.sh script
Use the run_job_template.sh script to create scripts to integrate with an external scheduler, such as Tivoli® Workload Scheduler, in the IBM Product Master
environment. The run_job_template.sh script is in the $TOP/bin directory. Ensure that you modify the values in this script and then issue the command to run
the job.
start_local.sh script
Use the start_local.sh script to start IBM Product Master and all services.
start_local_rmlogs.sh script
Use the start_local_rmlogs.sh script to start IBM Product Master services and remove local logs. The start_local_rmlogs.sh script does the same job as
the start_local.sh script, with the addition of removing local logs.
start_rmi_and_appsvr.sh script
Use the start_rmi_and_appsvr.sh script to start IBM Product Master RMI registry and appserver service.
stop_local.sh script
Use the stop_local.sh script to stop IBM Product Master services from the command line.
svc_control.sh script
Use the svc_control.sh script to control individual IBM Product Master services.
test_db.sh script
Use the test_db.sh script to verify the connectivity between IBM Product Master and databases, and to check for JDBC and native client connections.
updateRtClasspath.sh script
There are two ways to update the classpath parameter in the env_settings.ini file.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

abort_local.sh script
Use the abort_local.sh script to abort IBM® Product Master and all services from the command line.

Syntax
abort_local.sh [--wsadminUsername=<was_admin_user>
--wsadminPwd=<was_admin_password>][--profile]
[--debug][--rmlogs][--help]

Parameters
wsadminUsername
This is the WebSphere® Application Server administrative user name. This is required if admin_security=true is specified but the username and password are not
provided in the env_settings.ini file. You can specify the wsadminUsername and wsadminPwd arguments in the command to override the values provided in the
env_settings.ini. This argument is applicable only if svc_name is of type appsvr.
wsadminPwd
This is the password of WebSphere Application Server administrative user. This is required if admin_security=true is specified but the username and password are
not provided in the env_settings.ini. You can specify wsadminUsername and wsadminPwd arguments in the command to override the values provided in the
env_settings.ini file. This argument is applicable only if svc_name is of type appsvr.

Example
In this example, the abort_local.sh script is run to abort Product Master and all services:

$<install directory>/bin/go/abort_local.sh

Sample output
$ abort_local.sh
Aborting service admin_WPCAPP1...
Aborting service eventprocessor_WPCAPP1...
Aborting service workflowengine_WPCAPP1...
Aborting service queuemanager_WPCAPP1...
Aborting service appsvr_WPCAPP1...
Aborting service scheduler_WPCAPP1...
Aborting service rmi_WPCAPP1...

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

analyze_schema.sh script
Use the analyze_schema.sh script to generate statistics of your Oracle and DB2® databases.

Syntax
analyze_schema.sh

Parameters
None
This command prompts for the database password and uses the current user ID as the schema name to analyze.

Example
In this example, the analyze_schema.sh script is run.

$<install directory>/bin/db/analyze_schema.sh

Sample output
============================================================
Analyzing schema 'schema1'
This may take several minutes. Do not kill the session.
============================================================

Note: If the schema to analyze is not the same as the current user ID, the output is empty.
No other details are displayed when the script runs without errors. Therefore, to check the time when statistics were last updated in Db2 and Oracle, see the following:

Check update statistics time in DB2
Check update statistics time in Oracle
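
For example, assuming that you can connect as the schema owner (the database name, user, and password below are placeholders), the last statistics collection time can be read from the standard database catalog views:

# Db2: statistics timestamp per table for the current schema
db2 connect to pimdb
db2 "SELECT TABNAME, STATS_TIME FROM SYSCAT.TABLES WHERE TABSCHEMA = CURRENT SCHEMA"

# Oracle: last analyzed time per table for the connected user
sqlplus schema1/password <<'EOF'
SELECT table_name, last_analyzed FROM user_tables;
EOF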

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

checkForCompileError.sh script
Use the checkForCompileError.sh script for automated checking of your scripts for compilation errors in the IBM® Product Master environment.

Syntax
checkForCompileError.sh --use_docstore=yes|no --company_code=code
--script_dir=script_directory --logfile=logfile_path

Parameters
use_docstore
Specify whether the docstore needs to be used (yes or no).
company_code
Specify the company code.
script_dir
Specify the full path to the script directory.
logfile
Specify the full path to the log file.

Example
In this example, the checkForCompileError.sh script is run for a company named test, with /bin as the script directory and /bin/logs as the log file, without
using the docstore.

$<install directory>/bin/checkForCompileError.sh --use_docstore=no --company_code=test --script_dir=/bin --logfile=/bin/logs

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

cleanup_cmp.sh script
Running the cleanup_cmp.sh script on an existing company in Product Master instance removes the company object itself from the Product Master database. Along with
the company object itself, the script also removes all information in a company such as items and catalogs in the IBM® Product Master environment.

Note: As a best practice, either run the cleanup_cmp.sh script after you stop the Product Master services or clear the Product Master cache if the services are already
running.

Syntax
cleanup_cmp.sh --code=company_code

Parameters
code
Specifies the company code.

Example
In this example, the cleanup_cmp.sh script is run with the parameter code = test. The company "test" is removed from the database.

$<install dir>/bin/db/cleanup_cmp.sh --code=test
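
Following the best-practice note above, a typical sequence is to stop the services first and then remove the company. This is a sketch that assumes the stop_local.sh script is in the bin/go directory, as with the other service scripts in this section:

# Stop all Product Master services, then remove the company named "test"
$<install dir>/bin/go/stop_local.sh
$<install dir>/bin/db/cleanup_cmp.sh --code=test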

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

configureEnv.sh script
Use the configureEnv.sh script to configure your installation of IBM® Product Master. Running this script validates the env_settings.ini file and notifies you of any
errors, generates build.properties and common.properties files, and generates the configuration for Product Master services.

Syntax
configureEnv.sh [--overwrite]

Parameters
overwrite
Overwrites files without prompting.

Note: If you specify Y or -ov to indicate file overwrite, a backup of the file is created and existing property values are not altered. New or missing properties are added from
the common.properties.default file to the common.properties file. If the common.properties file does not exist, a new file is created by using the
common.properties.default file. If you want the file to contain all the default values, manually delete the common.properties file first.
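
For example, if you want the common.properties file to be rebuilt with all the default values, a minimal sketch (keeping a backup first) might look like this:

# Back up and delete common.properties so configureEnv.sh regenerates it from the defaults
cp $TOP/etc/default/common.properties $TOP/etc/default/common.properties.bak
rm $TOP/etc/default/common.properties
$TOP/bin/configureEnv.sh --overwrite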

Example
In this example, the configureEnv.sh script is run with the overwrite parameter to overwrite the existing build.properties and common.properties files.

$<install directory>/bin/configureEnv.sh --overwrite

Important: Starting with IBM Product Master Fix Pack 7, the JAR entries for the following extensions (earlier present in the jars-custom.txt file) are automatically
copied to the jars-persona-internal.txt file, which is located in the $TOP/bin/conf/classpath folder:

ftsExtensions
mlExtensions
sdpExtensions
vendorExtensions
dam

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

copy_files_for_appsvr.sh script
Use the copy_files_for_appsvr.sh script to copy files for the IBM® Product Master appsvr service. This script is located in the $TOP/bin directory.

Syntax
copy_files_for_appsvr.sh

Parameters
None

Example
In this example, the copy_files_for_appsvr.sh script is run:

$<install directory>/bin/copy_files_for_appsvr.sh

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

create_appsvr.sh script
Use the create_appsvr.sh script to deploy a second appsvr service on the same server for vertical clustering in the IBM® Product Master environment.

Syntax
create_appsvr.sh

Parameters
None

Example
In this example, the create_appsvr.sh script is run:

$<install directory>/bin/Websphere/create_appsvr.sh

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

create_cmp.sh script
Use the create_cmp.sh script to create a company for production deployment or a test company that you can use to test your installation and initial login to IBM®
Product Master services.

Syntax
create_cmp.sh --code=company_code [--name=company_name] [--silent] [--verbose]

Parameters
code
Specifies the company code. This parameter is mandatory and must be unique.
name
Specifies the name of the company. This parameter does not need to be unique. If it is not provided, it is set to the company code.
silent
Suppresses confirmation messages.
verbose
Displays messages related to company details and the current debugging output.

Usage
This script is stored in the <install dir>/bin/db directory. Running the script creates a log file named create_cmp.log in the <install dir>/logs directory.
Note: You must not run the create_cmp.sh shell script multiple times in parallel. If more than one instance is running at the same time, all of the instances will fail.

Example
In this example, a company with the name test_company is created:

$<install dir>/bin/db/create_cmp.sh --code=test --name=test_company --verbose

Output
Running this script in the verbose mode displays the name and code of the company being created, the company locale, and activities performed by the script such as
loading other scripts. On successful completion, the script displays the following message on the screen:

DONE

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

create_pimdb.sh script
Use the create_pimdb.sh script to create a IBM® Product Master database in a production environment.

Syntax
create_pimdb.sh --db=dbname --instowner=instance_owner --cpath=container_path

Important: Ensure that you run this script from the Db2® server by using the Db2 instance ID. Do not run the script from the application server.

Parameters
db
Specify the name of the database.
instowner
Specify the name of the instance owner.
cpath
Specify the path of the container.

Example
In this example, the create_pimdb.sh script is run to create a database with the name pimdb for the instance owner db2inst1 in the container path /u01:

$<install directory>/bin/db_creation/create_pimdb.sh --db=pimdb --instowner=db2inst1 --cpath=/u01

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

create_pimdevdb.sh script
Use the create_pimdevdb.sh script to create an IBM® Product Master database for use in a test or development environment.

Syntax
create_pimdevdb.sh --db=dbname --instowner=instanceowner --cpath=containerpath

Important: Ensure that you run this script from the Db2® server by using the Db2 instance ID. Do not run the script from the application server.

Parameters
db
Specify the name of the database.
instowner
Specify the name of the instance owner.
cpath
Specify the path of the container.

Example
In this example, the create_pimdevdb.sh script is run to create a database with the name pimdb for the instance owner db2inst1 in the container path /u01:

$<install directory>/bin/db_creation/create_pimdevdb.sh --db=pimdb --instowner=db2inst1 --cpath=/u01

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

create_schema.sh script
Use the create_schema.sh script to create the schema for your database.

CAUTION:
If executed on an existing database, this script erases all the data.

Syntax
create_schema.sh [--tablespace=tablespace_name] [--silent]

Parameters
silent
Suppresses the confirmation messages.
tablespace
Specifies a table space name mapping file that shows the customized table space names for the required table spaces. If the tablespace parameter is not
specified, all tables and indexes use the predefined table space names: on the first run these are the default names; on later runs they are the names that were
used for previous create schema operations.

Usage
The script is stored in the <install dir>/bin/db directory. Running the script creates a log file that is named schema.log in the <install dir>/logs directory.

Example
In this example, the create_schema.sh script is run with the parameter tablespace = test_tablespace to create a new database schema.

$<install dir>/bin/db/create_schema.sh --tablespace=test_tablespace

Output
Running the create_schema.sh script displays the name of the database user, the client login command, and the JDBC details.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

create_vhost.sh script
Use the create_vhost.sh to create a virtual host to run a second appsvr service on the same server for vertical clustering in the IBM® Product Master environment.

Syntax
create_vhost.sh

Parameters
None

Example
In this example, the create_vhost.sh script is run:

$<install directory>/bin/WebSphere/create_vhost.sh

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

db2_export.sh script
Use the db2_export.sh script, which can be found in the $TOP/src/db/scripts/backup directory, to export the IBM® Product Master schema. Running this script
generates the SQL scripts required to create tables, indexes and sequences with their current values, and stores them in a tar file in the back up directory.

Prerequisites
Before you run the db2_export.sh script, you must:

Shut down the Product Master application connected to the database schema.
Make sure that the backup directory exists on local disks on the database server and does not contain DB2 data files.
Make sure that the owner of the backup directory is the DB2 instance owner on the database server.
Make sure that you log in to the database server as the DB2 instance owner to copy and run the db2_export.sh script. You must not run the script from the
application server.
Make sure that the db2_export.sh script is saved in the UNIX format if you copy it from a Windows-based computer to the database server. If the file is not in the
UNIX format, the system might introduce end-of-line characters that cause the script to fail; see the sketch after this list.
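
To satisfy the UNIX-format requirement in the last item, you can normalize the line endings after copying the script. This sketch assumes the dos2unix utility is available, with a portable tr fallback:

# Convert CRLF (Windows) line endings to LF (UNIX) before running the script
dos2unix db2_export.sh
# Fallback if dos2unix is not installed:
tr -d '\r' < db2_export.sh > db2_export.tmp && mv db2_export.tmp db2_export.sh && chmod +x db2_export.sh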

Syntax
db2_export.sh --db=databasename --dbuser=exportuser --dbpassword=exportuserpassword --backupdir=bkpdirectory --bkpfile=bkpfilename

Parameters
db
Specify the name of the database from where you want to export the schema.
dbuser
Specify the name of the Product Master database schema user to export. It is db_userName in the common.properties file.
dbpassword
Specify the password of the Product Master database schema user to export. It is db_password in the common.properties file.
backupdir
Specify the name of the directory with the full path where you want to export the database schema. The tar file is created in this directory.
bkpfile
Specify the name of the backup tar file.

Example
In this example, the db2_export.sh script is run to export the wpcdb database schema to the july10bkp file in the /u01/backup directory for the user wpc1.

db2_export.sh --db=wpcdb --dbuser=wpc1 --dbpassword=passwd --backupdir=/u01/backup --bkpfile=july10bkp

Sample output
Running this script displays the date of the last revision of the script and the DB2 connection information.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

db2_fast_import.sh script
Use the db2_fast_import.sh script to import the contents of the tar file generated by running the db2_export.sh script into the database for a new IBM® Product
Master environment. You can use this script to import the schema into the same or a different database under the same schema name or a different schema name.

Prerequisites
Before you can run the db2_fast_import.sh script, you must:

Make sure that there are no errors at the time of export. If there are any errors, you must rectify the errors, run the db2_export.sh script again and then try the
import.
Make sure that there is enough space in the tablespaces before importing the database schema. You can use the db2 list tablespaces show detail
command (shown after this list) to check the available space in the tablespaces, and add more space if required.
Make sure that the db2_fast_import.sh script is saved in the UNIX format if you copy it from a Windows-based computer to the database server. If it is not in
the UNIX format, the system might introduce end-of-line characters that will cause the script to fail.
Run this script only when you log in as the DB2® database administrator.
Optional: Create a new empty database user if you want to import to a new database user.
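
For example, the space check mentioned in the list above can be run as follows before starting the import; the database name pimdb is a placeholder:

# Connect as the Db2 instance owner and review table space usage
db2 connect to pimdb
db2 "list tablespaces show detail"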

CAUTION:
If you import into an existing schema with tables, all existing tables will be dropped and data will be lost.

Syntax
db2_fast_import.sh --db=databasename --dbuser=touser --dbpassword=touserpassword --backupdir=bkpdirectory

Parameters
db
Specify the name of the database into which you are importing the schema. It can be the same database from which you exported the schema.
dbuser
Specify the name of the database user into which you want to import the schema.
dbpassword
Specify the password of the target database user.
backupdir
Specify the name of the directory that contains the expanded tar file.

Example
In this example, the db2_fast_import.sh script is run to import the expanded database schema in the /u01/backup/july10/july10bkp directory into the pimdb database
under the trigodev user.

db2_fast_import.sh --db=pimdb --dbuser=trigodev --dbpassword=trigodev --backupdir=/u01/backup/july10/july10bkp

Output
Running this script displays the date of the last revision of the script, the DB2 connection information, and the following progress information:

DROP TABLES
CREATE TABLES
RUNSTATS
LOADING
REMOVE CHECK-PENDING STATUS
CREATE SEQUENCES
Start time <date time>
End time <date time>
CHECK .log FILES FOR SUCCESS

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

delete_old_versions script
You can use the delete_old_versions.sh shell script to delete old versions of objects that are no longer needed. Deleting old object versions restores the database
storage space and therefore improves database performance.

Syntax
./delete_old_versions.sh
--code=Company_code
--end_date=YYYY-MM-DD.HH24:MI:SS
[--ctg=Catalog_ID]
[--start_date=YYYY-MM-DD.HH24:MI:SS]
[--confirm_db=no]

Parameters
Mandatory Parameters

code
Company code that is used to log in to Product Master.
end_date
Old objects that were created before this date are deleted.

Optional Parameters

ctg
Catalog ID that should be checked for old object versions. This ID can be retrieved by using the following SQL query: select * from ctg
start_date
Old objects that were created after this date and until the end_date are deleted.
confirm_db
Confirmation flag to proceed with script execution with or without confirmation prompt. If no is specified, the script does not prompt for database user
confirmation.

Table 1. Actions performed by various combinations of parameters

--code=<Company_code> --end_date=<YYYY-MM-DD.HH24:MI:SS>
Deletes old version objects that exist in the company code and that were created before the end_date.

--code=<Company_code> --end_date=<YYYY-MM-DD.HH24:MI:SS> --start_date=<YYYY-MM-DD.HH24:MI:SS>
Deletes old version objects that exist in the company code and that were created between the start_date and the end_date.
Note: Collaboration Area History deletion considers only the code and end_date parameters.

--code=<Company_code> --end_date=<YYYY-MM-DD.HH24:MI:SS> --ctg=<Catalog_ID>
Deletes old version objects that exist in the company code and catalog, and that were created before the end_date.
Note: Collaboration Area History deletion considers only the code and end_date parameters.

--code=<Company_code> --end_date=<YYYY-MM-DD.HH24:MI:SS> --ctg=<Catalog_ID> --start_date=<YYYY-MM-DD.HH24:MI:SS>
Deletes old version objects that exist in the company code and catalog, and that were created between the start_date and the end_date.
Note: Collaboration Area History deletion considers only the code and end_date parameters.

Example
./delete_old_versions.sh --code=ibm --end_date=2009-01-31.18:30:30 --ctg=2063

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

drop_schema.sh script
Use the drop_schema.sh script to delete the IBM® Product Master schema. Before you issue the drop_schema.sh script, ensure that you stop Product Master.

CAUTION:
The drop_schema.sh script needs to be used with caution. This script results in removing all of the Product Master objects.

Syntax
drop_schema.sh

Parameters
None

Example
In this example, the drop_schema.sh script is run.

$<install directory>/bin/db/drop_schema.sh

Output
Running the drop_schema.sh script displays the name of the database user, the client login command, and the JDBC details.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

envexpimpXMLValidator.sh script
Use the envexpimpXMLValidator.sh script to validate all the XML files contained in the environment files generated by the exportCompanyAsZip.sh script. The XML
files are validated against the .xsd files of IBM® Product Master.

Syntax
envexpimpXMLValidator.sh

Parameters
None

Example
In this example, the envexpimpXMLValidator.sh script is run:

$<install directory>/bin/envexpimpXMLValidator.sh

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

estimate_old_version script
You can use the estimate_old_version.sh shell script to get a size estimate for the number of old object versions in various Product Master database tables.

Syntax
./estimate_old_version.sh
--code=Company_code
--end_date=YYYY-MM-DD.HH24:MI:SS
[--ctg=Catalog_ID]
[--start_date=YYYY-MM-DD.HH24:MI:SS]
[--confirm_db=no]

Parameters
Mandatory Parameters

code
Company code that is used to log in to Product Master.
end_date
Old objects that are created before this date are estimated.

Optional Parameters

ctg
Catalog ID that should be checked for old object versions. This ID can be retrieved by using the following SQL query: select * from ctg
start_date
Old objects that are created after this date and until the end_date are estimated.
confirm_db
Confirmation flag to proceed with script execution with or without confirmation prompt. If no is specified, the script does not prompt for database user
confirmation.

Table 1. Actions performed by various combinations of parameters

--code=<Company_code> --end_date=<YYYY-MM-DD.HH24:MI:SS>
Provides an estimate of old version objects that exist in the company code and that were created before the end_date.

--code=<Company_code> --end_date=<YYYY-MM-DD.HH24:MI:SS> --start_date=<YYYY-MM-DD.HH24:MI:SS>
Provides an estimate of old version objects that exist in the company code and that were created between the start_date and the end_date.

--code=<Company_code> --end_date=<YYYY-MM-DD.HH24:MI:SS> --ctg=<Catalog_ID>
Provides an estimate of old version objects that exist in the company code and catalog, and that were created before the end_date.

--code=<Company_code> --end_date=<YYYY-MM-DD.HH24:MI:SS> --ctg=<Catalog_ID> --start_date=<YYYY-MM-DD.HH24:MI:SS>
Provides an estimate of old version objects that exist in the company code and catalog, and that were created between the start_date and the end_date.

Example
./estimate_old_version.sh --code=ibm --end_date=2009-01-31.18:30:30 --ctg=2063
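
Because estimate_old_version.sh and delete_old_versions.sh accept the same filters, a common workflow is to estimate first and, if the numbers look reasonable, delete with identical parameters; the company code, date, and catalog ID below are the same placeholders as in the examples:

# 1. Estimate how many old object versions match the filters
./estimate_old_version.sh --code=ibm --end_date=2009-01-31.18:30:30 --ctg=2063
# 2. Delete the same set, skipping the interactive confirmation prompt
./delete_old_versions.sh --code=ibm --end_date=2009-01-31.18:30:30 --ctg=2063 --confirm_db=no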

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

exportCompanyAsZip.sh script
Use the exportCompanyAsZip.sh script to export your company data model from the command line. You can import this data model in an IBM® Product Master target
instance.

Syntax
exportCompanyAsZip.sh --company_code=code --script_path=script_path

Parameters
company_code
Specify the company code.
script_path
Specify the directory where your export script is stored.

Example
In this example, the exportCompanyAsZip.sh script is run with the parameter company_code=test and script_path=/bin/scripts:

$<install directory>/bin/exportCompanyAsZip.sh --company_code=test --script_path=/bin/scripts

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

get_ccd_version.sh script
Use the get_ccd_version.sh script to get the version of the IBM® Product Master ccd.

Syntax
get_ccd_version.sh

Parameters
None

Example
In this example, the get_ccd_version.sh script is run:

$<install directory>/bin/get_ccd_version.sh

Output
Running the get_ccd_version.sh script will display the ccd version number.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

get_params.sh script
Use the get_params.sh script to parse the argument list into named variables in the IBM® Product Master environment.

Syntax
get_params.sh --variable_1=value_1 --variable_2=value_2 [--debug=yes]

Parameters
variable_1
Sets _CCD_VARIABLE_1 to value_1.
variable_2
Sets _CCD_VARIABLE_2 to value_2.
debug
Displays debugging information if set to yes.

Example
In this example, the get_params.sh script is run with the parameters confirm_db = yes and name = acme:

$<install directory>/bin/get_params.sh --confirm_db=yes --name=acme

This would create and set two variables, _CCD_CONFIRM_DB=yes, and _CCD_NAME=acme.
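
Because the _CCD_* variables must be visible to the calling shell, a wrapper would typically source get_params.sh rather than run it in a subshell. A minimal sketch, assuming that dot-space sourcing is the intended usage:

# Source get_params.sh so the _CCD_* variables are set in the current shell
. $<install directory>/bin/get_params.sh --confirm_db=yes --name=acme
echo "confirm_db=${_CCD_CONFIRM_DB}"
echo "name=${_CCD_NAME}"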

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

get_service_status.sh script
Use the get_service_status.sh script to get the status of individual IBM® Product Master services.

Syntax
get_service_status.sh --svc_name=service name [--help]

Parameters
svc_name
Specify the full name of the service, for example: scheduler_MYHOST
help
Specify this option to request help information.

Example
In this example, the get_service_status.sh script is run to get the status of the scheduler service:

$<install directory>/bin/go/get_service_status.sh --svc_name=scheduler_MYHOST

Output
Running the get_service_status.sh script will display the status of the service whose name is mentioned after the script name at the prompt.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

importCompanyFromZip.sh script
Use the importCompanyFromZip.sh script to import your IBM® Product Master data model from the command line. The data model is stored in a ZIP file created by
running the exportCompanyAsZip.sh script.

Syntax
importCompanyFromZip.sh --company_code=code --zipfile_path=zipfile_path

Parameters
company_code
Specify the company code.
zipfile_path
Specify the fully qualified path of the directory where your ZIP file is stored.

Example
In this example, the importCompanyFromZip.sh script is run with the parameter company_code=test and zipfile_path=/bin/zipfiles:

$<install directory>/bin/importCompanyFromZip.sh --company_code=test --zipfile_path=/bin/zipfiles

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

importRelationshipsAsCompany.sh script
Use the importRelationshipsAsCompany.sh script to import your IBM® Product Master data model.

Syntax
importRelationshipsAsCompany.sh --company_code=code
--zipfile_path=zipfile_path [--temp_folder=temporary_folder]

Parameters
company_code
Specify the company code.
zipfile_path
Specify the fully qualified path of the directory where your .zip file is stored.
temp_folder
Specify the fully qualified path of the temporary directory.

Example
In this example, the importRelationshipsAsCompany.sh script is run for a company with test as the company_code. The .zip file is stored in the /bin/zipfiles directory and
/bin/temp is the temporary directory.

$<install directory>/bin/importRelationshipsAsCompany.sh --company_code=test --zipfile_path=/bin/zipfiles --temp_folder=/bin/temp

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

indexRegenerator.sh script
Use the indexRegenerator.sh script to regenerate indexes for entities such as items and categories. If you change the index option for an attribute of the spec that is
associated with the item or category after creating some other items or categories, then you must run this script to regenerate indexes. Regenerating indexes ensures that
you get correct search results based on the changed attribute. You can use only one run_options combination at a time. However, you can use zero or more tuning_options
together.

Note: Before running the indexRegenerator.sh script, stop the IBM® Product Master instance.
Note: The time that it takes to complete the script depends on the number of items in the instance.

Syntax
indexRegenerator.sh --company=company_name run_options [tuning_options]

Parameters
company
Specify the company name.
run_options
Specify one of the following combinations:

Full container:
--catalog=CATALOG_NAME
--hierarchy=HIERARCHY_NAME

Container subset:
--items=FULL_PATH_CSV_FILE (two columns: PK, CATALOG_NAME)
--categories=FULL_PATH_CSV_FILE (two columns: PK, HIERARCHY_NAME)

Generate PK files for multiple systems (numFiles, default is 1):
--catalog=NAME --items=FULL_PATH_TO_DESIRED_FILE --numFiles=NUMBER_FILES
--hierarchy=NAME --categories=FULL_PATH_TO_DESIRED_FILE --numFiles=NUMBER_FILES

tuning_options
Specify one from this list:

--nodePaths=FULL_PATH_TO_NODES_SEPARATED_BY_COLONS. Choose this option to improve speed by specifying node paths. The default is all paths. For
example, --nodePaths="SpecName/Node1:SpecName/Node2"
--lockContainer=[YES|NO] Choose this option to improve speed by locking the container. This option has the disadvantage of locking out the other users. The
default is YES.
--threads=NUMBER_THREADS. Choose this option to improve speed by using more than one thread, but ensure that enough DB connections exist. The
default is 1.

--encoding=file_encoding
Sets the file encoding value for the indexRegenerator.sh script command. The default is ISO8859_1. The value can be a different encoding standard, for
example, UTF-8 for double-byte character environments. Many scripts use the value of this parameter to set the -Dfile.encoding parameter.

Note:

1. You must enclose parameters that contain spaces and special characters within escaped quotation marks (\"). You must escape special characters with a backslash
(\). If you specify more than one file, the file number is placed before the file extension. For example, items.csv becomes items-1.csv.
2. You cannot use the --catalog and --hierarchy arguments, or the --catalog and --categories arguments, together. If you use the --catalog and --items arguments
together, only PK files are generated and no indexes are regenerated.
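
For the container-subset options, the CSV files are plain two-column lists (the PK, then the CATALOG_NAME or HIERARCHY_NAME). A minimal hypothetical items.csv, with placeholder primary keys and the catalog name from the example below, might look like this:

1001,catalog1
1002,catalog1
1003,catalog1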

Example
In this example, the indexRegenerator.sh script is run:

$<install directory>/bin/indexRegenerator.sh --company=acme --catalog=catalog1 --nodePaths=FULL_PATH_TO_NODES_SEPARATED_BY_COLONS --lockContainer=NO

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

install_war.sh script
Use the install_war.sh script to replace the existing WebSphere® Application Server components when you deploy a second appsvr service in the IBM® Product
Master environment.

Syntax
install_war.sh

Parameters
None

Example
In this example, the install_war.sh script is run:

$<install_directory>/bin/websphere/install_war.sh

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

load.sh script
Use the load.sh script to load the contents of a folder into a company. The format of the content should be the same as the format of the content that is created by
extracting the file that is created by running the exportCompanyAsZip.sh script.

Syntax
load.sh --company_code=code --folder_path=directory path

Parameters
company_code
Specify the company code.
folder_path
Specify the full folder path.

Example
In this example, the load.sh script is run with the parameter company_code = test and folder_path = /data:

$<install directory>/bin/load.sh --company_code=test --folder_path=/data

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

pimprof.sh script
Use the pimprof.sh script to start and stop services, and to capture memory snapshots in the IBM® Product Master environment.

Syntax
pimprof.sh --cmd=command
--mode=cpu|obj
--svc_name=unique_Service_Name
--file=filename
--opts=" "

Parameters
--cmd=command
Specify the action to be performed on the service. The allowed actions are start and stop for a service, and capture_mem for capturing a memory snapshot.
--mode=cpu|obj
Specify the profiling mode:

cpu
The CPU profiling mode.
obj
The object allocation tracing mode.

--svc_name=unique_Service_Name
Specify the unique service name of the service that you specified for the --serviceType parameter. If the unique_Service_Name is not specified, the service that is
specified as default is enabled for profiling.

serviceName
The unique service name of the service.

--file=filename
Specify the name of the file with the full path to save the memory snapshot. If the file name and directory are not specified, the memory snapshot is saved in the
default $TOP/profiler/snapshots directory. fileName is the fully qualified directory and file name of the memory snapshot that is saved when either the profiling
completes or you issue the --stop command. If you do not specify the --file parameter, the file name defaults are:

For YourKit: prof_type_serviceType_serviceName_timeStamp.snapshot
For JProfiler: prof_type_serviceType_serviceName_timeStamp.jps

where:

prof_type
The profiling mode, either cpu or obj.
serviceType
One of six services.
timeStamp
The date and time that the file is created.

--opts=" "
Specify this option to request help information.

pimprof.sh enable profiling


Run the following command to enable profiling:

pimprof.sh --enable
--profilerName=yourkit|jprofiler
--serviceType=admin|appsvr|event|queuemgr|scheduler|workflow
--serviceName=unique_Service_Name
--port=port_#

--enable
Enables profiling.
--profilerName=yourkit|jprofiler

yourkit
The YourKit profiling agent.
jprofiler
The JProfiler profiling agent.

--serviceType=admin|appsvr|event|queuemgr|scheduler|workflow

admin
The admin service.
appsvr
The application server service.
event
The event processor service.
queuemgr
The queue manager service.
scheduler
The scheduler service.
workflow
The workflow engine service.

--serviceName=unique_Service_Name
The unique service name of the service you specified for the --serviceType parameter. If the unique_Service_Name is not specified, the service that is specified as
default is enabled for profiling.
--port=port_#
The unique port number of the service that you specified for the --serviceName parameter. If you do not specify the port_#, the port numbers that are specified in the
common.properties file for the following parameters are used:

profiler_port_admin
profiler_port_appsvr
profiler_port_event
profiler_port_queuemgr
profiler_port_scheduler
profiler_port_workflow

YourKit parameters for starting profiling
You can run these additional parameters for YourKit:

--cpuCapture=sampling|tracing|j2ee --allocMode=adaptive|all

The following parameters are defined:

sampling
Records CPU times with adequate profiling detail and a low performance overhead impact. This is the default CPU capture parameter.
tracing
Records CPU times as well as method invocation counts. This mode decreases performance; therefore, it should be used only when
necessary.
j2ee
Offers high-level profiling of JDBC, JSP or Servlets, and JNDI calls. This mode adds overhead; therefore, it should be used only when necessary.
adaptive
Adaptively skips some object allocation tracing while keeping CPU processing overhead at a moderate level. This is the default allocation mode parameter.
all
Traces each object allocation.

JProfiler parameters for starting profiling


You can run these additional parameters for JProfiler:

--reset=true|false --saveOnExit=true|false

The following parameters are defined:

--reset=true|false
Discards all past CPU profiling data.
--saveOnExit=true|false
Saves a snapshot of all of your profiling data when profiling ends. This is a default parameter.

Capture memory snapshot parameters


You can run this script to capture the memory snapshot file:

pimprof.sh --cmd=capture_mem
--svc_name=unique_Service_Name
--file=fileName

The following parameters are defined:

--cmd=capture_mem
Captures the memory snapshot.
--svc_name=unique_Service_Name
The unique service name of the service that you specified for the --serviceType parameter. If the unique_Service_Name is not specified, the service that is
specified as default is enabled for profiling.
--file=fileName
If the file name and directory are not specified, the memory snapshot is saved in the default $TOP/profiler/snapshots directory. fileName is the fully qualified
directory and file name of the memory snapshot that is saved when either the profiling completes or you issue the --stop command. If you do not specify the --file
parameter, the file name defaults are:

For YourKit: MEM_serviceType_serviceName_timeStamp.snapshot
For JProfiler: MEM_serviceType_serviceName_timeStamp.jps

where:

serviceType
One of six services.
serviceName
The unique service name of the service.
timeStamp
The date and time that the file is created.

Save memory snapshot parameters


You can run this script to save the memory snapshot:

pimprof.sh --cmd=save
--svc_name=unique_Service_Name
--file=fileName
--snapshot_mode=without_heap|with_heap

The following parameters are defined:

--cmd=save
Saves intermediate profiling data.
--svc_name=unique_Service_Name
The unique service name of the service that you specified for the --serviceType parameter. If the unique_Service_Name is not specified, the service that is
specified as default is enabled for profiling.
--file=fileName
If the file name and directory are not specified, the memory snapshot is saved in the default $TOP/profiler/snapshots directory. fileName is the fully qualified
directory and file name of the memory snapshot that is saved when either the profiling completes or you issue the --stop command. If you do not specify the --file
parameter, the file name defaults are:

For YourKit: PROFILER_serviceType_serviceName_timeStamp.snapshot
For JProfiler: PROFILER_serviceType_serviceName_timeStamp.jps

where:

serviceType
One of six services.
serviceName
The unique service name of the service.
timeStamp
The date and time that the file is created.

--snapshot_mode=without_heap|with_heap
A memory snapshot includes all classes loaded by the JVM, the existing objects, and references between the objects.

without_heap
Captures the memory snapshot without the heap memory dump. This is the default parameter.
with_heap
Captures the memory snapshot with the heap memory dump.

YourKit parameters for saving memory snapshot


You can run these additional parameters for YourKit:

--hprof=true|false

The following parameters are defined:

--hprof=true|false
A memory snapshot in a hprof format.

false
Captures the snapshot in the YourKit format. This is a default parameter.
true
Captures the snapshot in hprof format so that only heap dumps are created and no other YourKit profiling data is captured.

Example
In this example, the pimprof.sh script is run with the parameters cmd = capture_mem, mode = cpu, svc_name = admin and file = /store/pimprof_data:

$<install directory>/bin/pimprof.sh
--cmd=capture_mem
--mode=cpu
--svc_name=admin
--file=/store/pimprof_data

The pimprof.sh shell script should be run from within the same shell by using "dot space" notation, as shown in the sketch at the end of this topic. See the following examples:
In this example, the YourKit profiling agent is enabled to profile the CPU for the admin service:

$TOP/bin/pimprof.sh --enable --serviceType=scheduler --profilerName=yourkit


$TOP/bin/pimprof.sh --start cpu --serviceType=admin

In this example, the scheduler service, which has the unique service name myscheduler, is stopped. The directory and file name where the profiling data is saved is
specified as /opt/snapshots/mysch.snapshot:

$TOP/bin/pimprof.sh --stop obj --serviceType=scheduler --serviceName=myscheduler --file=/opt/snapshots/mysch.snapshot

In this example, a memory snapshot is captured for the scheduler service and saved in the default directory with default file name:

$TOP/bin/pimprof.sh --capturemem --serviceType=scheduler

Profiling will remain enabled until you individually disable the feature for each service.
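
For completeness, the "dot space" invocation that is mentioned above sources the script into the current shell; applied to the first example, it looks like this:

# Run pimprof.sh within the current shell by sourcing it ("dot space")
. $TOP/bin/pimprof.sh --enable --serviceType=scheduler --profilerName=yourkit
. $TOP/bin/pimprof.sh --start cpu --serviceType=admin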

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

pimSupport.sh script
Use the pimSupport.sh script for collecting diagnostic information if you encounter any issues with IBM® Product Master and need to share data with the technical
support team.

If you cannot resolve your issue with the troubleshooting and support documentation, you can use the diagnostic tool or contact IBM Software Support for help.

The pimSupport.sh script provides many diagnostic collection options. You can view the options by running pimSupport.sh --help.
When opening a service request with IBM Software Support, provide the output of the following command:

pimSupport.sh -b -l all -p xxxx.yyy.zzz

Running the script with these options collects the most likely needed basic environment configuration and system status information along with Product Master and
application server log files. This information helps to accelerate problem investigation.
The technical support team might request you to run the script with other options to help with problem diagnosis.

Syntax
pimSupport.sh --help --version --outputdirectory=path_to_store_the_result
--pmrnumber=<xxxxx.bbb.ccc>
--logtracking=start|stop --monitor=start|stop
--dumpprocess=<service name | PID> [--count --interval]
--basic --code=<company_code> --keeplog=yes|no
--collectlogs=all|allPIM|AS|appsvr|scheduler|workflowengine|queuemanager|admin|eventprocessor
--fromtime=from_time --totime=to_time
--code=<company_code> --colarea=<collaboration_area_name>
--catalog=<catalog_name>

Note: You can start each functionality by using the short or the verbose option. When data is collected, the output automatically shows the file name and location of the
generated archive on the screen.

Parameters
Parameters for general options

-h --help
Print a usage message.
-v --version
Print the version number of pimsupport.sh.
-o --outputdirectory=<path_to_store_the_result>
Specify the directory where the final output file is located. If not specified, the default directory is the tmp directory as defined in the
$TOP/etc/default/common.properties file.
-p --pmrnumber=<xxxxx.bbb.ccc>
Generated archives are prefixed with the PMR number for proper identification and storage on the ECuRep FTP server.

Parameters for information collection options

-t --logtracking=start | stop
This parameter is used to start or stop logging to a different directory with the log level=debug and maxBackupIndex=5.
-m --monitor=start | stop
This parameter is used to start or stop a background job, which continuously monitors system information, such as vmstat and iostat. The output is
temporarily written under the $TOP/logs/diagnostic/system_status directory. To fetch the monitoring result, use the option: --collectmonitorlogs=yes
-d --dumpprocess=<service name | PID> [--count --interval]
This parameter is used to dump process information for a process with the specified ID "PID" or "service name", usually for a process with high processor
usage. This option, without any other option provides a list of available services along with the PIDs. The --count and --interval are optional parameters
that determine the number of times and intervals between the memory dumps generation. The default values are 5 and 30 seconds.
Note: Some of the parameters do not work in clustered environments.
Note: After you run the pimSupport.sh --dumpprocess command, the proc_info directory is no longer left under the $TOP/logs/diagnostic directory, so you do
not need to delete it manually. The contents of the proc_info directory are included in the generated tar file and can be found under the highcpulogs directory.
-b --basic
This parameter is used to collect basic environment configuration and system status information.
--code=<company_code>
This is an optional parameter. If specified, this parameter collects the Product Master entity count for that company.
--keeplog=yes | no
This is an optional parameter. If specified, this parameter keeps all temporary outputs that are used for the final report. If not specified, it is equivalent to
--keeplog=no. This option can help with debugging if there is any issue with the final health report.
-l --collectlogs=all | allPIM | AS | appsvr | scheduler | workflowengine | queuemanager | admin | eventprocessor
all specifies that all logs are collected from the $TOP/logs directory.
allPIM specifies that all of the log files are collected for Product Master.
AS specifies that the log files for WebSphere® are collected.
appsvr specifies that all logs are collected from the application server service.
scheduler specifies that all logs are collected from the scheduler service.
workflowengine specifies that all logs are collected from the workflow engine service.
queuemanager specifies that all logs are collected from the queue manager service.
admin specifies that all logs are collected from the admin service.
eventprocessor specifies that all logs are collected from the event processor service.
This parameter is used to collect the log files that are generated by the product. If not specified, no logs are collected. You can specify multiple services by separating
each service with the | character and enclosing the whole list in quotation marks, for example, "appsvr|scheduler".
In addition to specifying this parameter to collect WebSphere Application Server logs and log files of certain Product Master services, you can also specify
the fromtime or totime options to collect only some portions of the log files by applying a time-based filtering mechanism.
Note: The time-based filters are available only for the Product Master logs. These filters do not apply to the WebSphere Application Server logs.
These options are used to select log entries of a particular time range.

--fromtime
Optionally specify this option to set the time from which the log entries in the log files are collected. The default value is January 01, 1970, 00:00.
Date time string expressed in the format of:

MM_dd_yyyy__HH_mm

where:

MM stands for the two-digit month
dd stands for the two-digit day
yyyy stands for the four-digit year
HH stands for the two-digit hour
mm stands for the two-digit minute

When this option is specified, the svc.out, svc.err, and svc.pid files of all of the specified services are also collected.
--totime
Optionally specify this option to specify the time to which the log entries in the log file are collected. The default value is the time when the program is
started.
Date time string expressed in the format of:

MM_dd_yyyy__HH_mm

where:

MM stands for the two-digit month
dd stands for the two-digit day
yyyy stands for the four-digit year
HH stands for the two-digit hour
mm stands for the two-digit minute

Note:

These options are used to select the log entries of a particular time range. They must be used with the collectlogs option.
If fromtime and totime are both specified, log entries that were generated within the specified time period are collected.
If fromtime but not totime is specified, then log entries that were generated since the fromtime until the time when the program is started are
collected.
If totime but not fromtime is specified, then log entries that were generated before the totime are collected.
If fromtime and totime are not specified, all of the log entries are collected. Only one option value can be provided at a time, and that option value
must not contain the | character.

To collect the Db2® logs and other information from the Db2 server, run the db2support command on the Db2 server.

db2support <OUTPUT_DIR_PATH> -f -d <DB_SERVICE_NAME> -m -c -u <DB_USER_NAME> -p <DB_USER_PWD> /dev/null

Examples
The following examples show various pimSupport.sh parameters, and their usage:

To collect basic environment configuration and system status information, along with all Product Master and application server log files, into an archive and prefix the
archive name with the PMR number:

pimSupport.sh -b -l all -p xxxx.yyy.zzz

To start system monitoring for continuously measuring the system status, such as memory, I/O, and processor use:

pimSupport.sh --monitor=start
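
To stop the background monitoring job and then fetch the collected monitoring results, commands like the following might be used (the exact combination of options is illustrative; see the --monitor parameter description above):

pimSupport.sh --monitor=stop
pimSupport.sh --collectmonitorlogs=yes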

To collect log files selectively. For example, the following command collects application server service log entries that are dated from June 23,
2010 00:18 to June 28, 2010 23:18:

pimSupport.sh --collectlogs="appsvr" --fromtime="06_23_2010__00_18" --totime="06_28_2010__23_18"

The following example collects logs for both the application server and the scheduler services:

pimSupport.sh -l "appsvr|scheduler"

To gather log tracking information while reproducing a scenario, use this option to start logging:

pimSupport.sh --logtracking=start

When finished with the scenario, stop log tracking and specify the PMR number and some identifier to name the generated archive:

pimSupport.sh -t stop -p 11111.222.333-repro1

To generate a health check report, run the following command. The --keeplog=yes option also keeps all the intermediate output files that were used to construct the
final report. The final report is well formatted and concise; the intermediate files can provide more details about the system or application information:

pimSupport.sh -b --code=<comp_code> --keeplog=yes
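
To list the available services with their PIDs, and then dump process information for a service that shows high processor usage, commands like the following might be used (the service name, count, and interval values here are illustrative):

pimSupport.sh --dumpprocess
pimSupport.sh --dumpprocess=scheduler --count=3 --interval=60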

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

rename_cmp.sh script
Use the rename_cmp.sh script to rename a company code and company name in the IBM® Product Master environment. All users must log in with the new company code
after this script is run.

Syntax
rename_cmp.sh --FromCode=source_company_code --ToCode=target_company_code
[--ToName=target_company_name] [--dbpassword=database_password]
[--silent] [--verbose]

Parameters
FromCode
Specifies the existing company code.
ToCode
Specifies the new company code.
ToName
Optional parameter that specifies the new company name.
dbpassword
Optional parameter that specifies the database password if the database password is stored in the encrypted format.
silent
Optional parameter that specifies the suppression of database confirmation.
verbose
Optional parameter that displays detailed messages on the console.

Usage
This script is stored in the <install dir>/bin/db directory. Running the script creates a log file named rename_cmp.log in the <install dir>/logs directory.
Note: You must run the rename_cmp.sh shell script only when your system is down. Also, you must not run the rename_cmp.sh shell script multiple times in parallel. If
more than one instance is running at the same time, all of the instances fail.

Example
In this example, the rename_cmp.sh script is run with the parameters FromCode=testcompany, ToCode=newcompany, and ToName="New Corporation Inc." to rename
the company named testcompany.

$<install dir>/bin/db/rename_cmp.sh --FromCode=testcompany --ToCode=newcompany --ToName="New Corporation Inc."
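
For scripted or unattended use, you can suppress the database confirmation prompt and print detailed progress messages. The company codes in the following sketch are placeholders:

$<install dir>/bin/db/rename_cmp.sh --FromCode=oldco --ToCode=newco --silent --verbose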

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

rmi_status.sh script
IBM® Product Master runs by using six key services, each of which runs as a separate process on its own Java virtual machine (JVM). You can run the rmi_status.sh script
to print a list of all of the IBM Product Master services that are currently running. In a non-clustered environment, there are six services. In a clustered environment, one
service can have multiple processes and multiple entries.

Syntax
rmi_status.sh

Parameters
None

Example
In this example, the rmi_status.sh shell script is run:

$TOP/bin/go/rmi_status.sh

Output
Running this script displays the names of the services that are bound to the RMI registry. Each service name is followed by the name of the host computer.
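The output might look similar to the following. The host name myhost is a placeholder, and the exact service names can vary by configuration:

admin_myhost
appsvr_myhost
eventprocessor_myhost
queuemanager_myhost
scheduler_myhost
workflowengine_myhost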

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

run_job_template.sh script

Use the run_job_template.sh script to create scripts that integrate with an external scheduler, such as Tivoli® Workload Scheduler, in the IBM® Product Master
environment. The run_job_template.sh script is in the $TOP/bin directory. Ensure that you modify the values in this script and then issue the command to run the job.

Syntax
run_job_template.sh

Parameters
None

Example
In this example, the run_job_template.sh script is used to create a script:

# Set the IBM Product Master installation directory here and uncomment the line
#export TOP=<Path to IBM Product Master installation home directory>
# E.g. /usr/appinstalls/mdmpim60

# Set the job related variables below as needed, uncomment the lines and
# do not modify anything else after this
# CCD_JOB_NAME=<Job Name> # [Required]
# CCD_JOB_TYPE=<Job Type> # [Required, Valid values are import|export|report]
# CCD_COMPANY_CODE=<Company Code> # [Optional, Default Value is trigo]
# CCD_USERNAME=<User Name> # [Optional, Default Value is Admin]
# CCD_DEBUG=<Debug on or off> # [Optional, Default Value is off]

You might, for example, change the values for your job to something like the following example:

# Set the IBM Product Master installation directory here and uncomment the line
export TOP=/usr/IBM/mdmpim

# Set the job related variables below as needed, uncomment the lines and
# do not modify anything else after this
CCD_JOB_NAME=feed1 # [Required]
CCD_JOB_TYPE=import # [Required, Valid values are import|export|report]
CCD_COMPANY_CODE=test # [Optional, Default Value is trigo]
CCD_USERNAME=m # [Optional, Default Value is Admin]
CCD_DEBUG=on # [Optional, Default Value is off]
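
After the variables are set, the external scheduler only needs to invoke the script. For example, a hypothetical cron entry that runs the feed1 import job nightly at 2:00 AM might look like the following (the log file path is a placeholder):

0 2 * * * /usr/IBM/mdmpim/bin/run_job_template.sh >> /usr/IBM/mdmpim/logs/feed1_cron.log 2>&1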

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

start_local.sh script
Use the start_local.sh script to start IBM® Product Master and all services.

Syntax
start_local.sh [--wsadminUsername=<was_admin_user>
--wsadminPwd=<was_admin_password>][--profile]
[--debug][--rmlogs][--help]

Parameters
wsadminUsername
This is the WebSphere® Application Server administrative user name. This is required if admin_security=true is specified but the username and password are not
provided in the env_settings.ini file. You can specify the wsadminUsername and wsadminPwd arguments in the command to override the values provided in the
env_settings.ini. This argument is applicable only if svc_name is of type appsvr.
wsadminPwd
This is the password of the WebSphere Application Server administrative user. This is required if admin_security=true is specified but the username and password
are not provided in the env_settings.ini file. You can specify the wsadminUsername and wsadminPwd arguments in the command to override the values provided in
the env_settings.ini. This argument is applicable only if svc_name is of type appsvr.
svc_name
Specifies the short or full name of a service. This option can be specified multiple times, and the value all can be used to start all of the services that are defined in the env_settings.ini file.
profile
Enables profiling for the services.
debug
This parameter starts one or more services in the debug mode.
rmlogs
Deletes all files under the logs directory.
help
Displays help information.

Example

In this example, the start_local.sh script is run to start Product Master and all services:

$<install directory>/bin/go/start_local.sh

In this example, the start_local.sh script is run not only to start all services, but also to show the configuration properties and the application server name.

$TOP/bin/go/start_local.sh --action=show_config --svc_name=appsvr_SUPAIX01
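
If WebSphere Application Server administrative security is enabled and the credentials are not stored in the env_settings.ini file, you can pass them on the command line. The user name and password in the following sketch are placeholders:

$TOP/bin/go/start_local.sh --wsadminUsername=wasadmin --wsadminPwd=was_password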

Output
Running this script displays the name of each service and its process ID (PID) as it starts.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

start_local_rmlogs.sh script
Use the start_local_rmlogs.sh script to start IBM® Product Master services and remove local logs. The start_local_rmlogs.sh script does the same job as the
start_local.sh script, with the addition of removing the logs.

Syntax
start_local_rmlogs.sh

Parameters
None

Example
In this example, the start_local_rmlogs.sh script is run:

$<install directory>/bin/go/start_local_rmlogs.sh

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

start_rmi_and_appsvr.sh script
Use the start_rmi_and_appsvr.sh script to start the IBM® Product Master RMI registry and the application server (appsvr) service.

Syntax
start_rmi_and_appsvr.sh

Parameters
None

Example
In this example, the start_rmi_and_appsvr.sh script is run:

$<install directory>/bin/go/start_rmi_and_appsvr.sh

Output
Running the start_rmi_and_appsvr.sh script displays the name and the PID of the service that is started.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

stop_local.sh script

Use the stop_local.sh script to stop IBM® Product Master services from the command line.

Syntax
stop_local.sh [--wsadminUsername=<was_admin_user>
--wsadminPwd=<was_admin_password>][--profile]
[--debug][--rmlogs][--help]

Parameters
wsadminUsername
This is the WebSphere® Application Server administrative user name. This is required if admin_security=true is specified but username and password are not
provided in the env_settings.ini. You can specify wsadminUsername and wsadminPwd arguments in the command to override the values provided in the
env_settings.ini file. This argument is applicable only if svc_name is of type appsvr.
wsadminPwd
This is the password of WebSphere Application Server administrative user. This is required if admin_security=true is specified but username and password are not
provided in the env_settings.ini file. You can specify wsadminUsername and wsadminPwd arguments in the command to override the values provided in the
env_settings.ini file. This argument is applicable only if svc_name is of type appsvr.

Example
In this example, the stop_local.sh script is run to stop Product Master services:

$<install directory>/bin/go/stop_local.sh

Running the stop_local.sh script displays the name of each service as it is stopped.
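
If WebSphere Application Server administrative security is enabled and the credentials are not stored in the env_settings.ini file, you can pass them on the command line, as with start_local.sh. The credentials in the following sketch are placeholders:

$<install directory>/bin/go/stop_local.sh --wsadminUsername=wasadmin --wsadminPwd=was_password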

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

svc_control.sh script
Use the svc_control.sh script to control individual IBM® Product Master services.

Syntax
svc_control.sh --action=start|stop|abort|status|long_status|show_config|list
[--svc_name=short name of service]
[--wsadminUsername=<was_admin_user> --wsadminPwd=<was_admin_password>]
[--names=<list_of_service_names>] [--profile] [--debug] [--rmlogs] [--help]

Parameters
action
Specifies one option from among start, stop, abort, status, long_status, show_config, and list options.

The start option is used to start the services that are specified by the --svc_name option.
The stop option is used to stop the services that are specified by the --svc_name option.
The abort option is used to abort (kill) the services that are specified by the --svc_name option.
The status option is used to get status (PID) of the service specified by the --svc_name option.
The long_status is used to get the extended (HTML) status of the service specified by the --svc_name option.
The show_config option is used to show the configuration of the service that is specified by the --svc_name option.
The list option is used to print a list of the full names of all defined services.

svc_name
This parameter is the short name of a service, for example, admin, workflowengine, queuemanager, appsvr, or scheduler. If svc_name=all, then the action is
applied to all services.
Note: The svc_control.sh script fails if the input for svc_name is the full name of the service.
wsadminUsername
This parameter is the WebSphere Application Server administrative user name. This is required if admin_security=true is specified, but the user name and password
are not provided in the env_settings.ini file. You can specify the wsadminUsername and wsadminPwd arguments in the command to override the values that are
provided in the env_settings.ini file. This argument is applicable only if svc_name is of type appsvr and action is start, stop, abort, or status.
wsadminPwd
This parameter is the password of the WebSphere Application Server administrative user. This is required if admin_security=true is specified, but the user name and
password are not provided in the env_settings.ini file. You can specify the wsadminUsername and wsadminPwd arguments in the command to override the values
that are provided in the env_settings.ini file. This argument is applicable only if svc_name is of type appsvr and action is start, stop, abort, or status.
names
The value needs to be a comma-separated list of short service names. This option can be specified only if svc_name is appsvr or scheduler. This option allows
specific services to be acted on by name.
profile
Enables profiling for the services.
debug
Start services in debug mode.
rmlogs
Deletes all files under the logs directory.
help
Displays help information.

Example
The following example shows how to stop a single service.

./svc_control.sh --action=stop --svc_name=workflowengine
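
The other actions follow the same pattern. For example, to print the full names of all defined services and then query the status of all of them (these invocations are illustrative):

./svc_control.sh --action=list
./svc_control.sh --action=status --svc_name=all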

Output
Running the svc_control.sh script displays the types and full names of services that are affected by the options that you have entered at the command prompt.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

test_db.sh script
Use the test_db.sh script to verify the connectivity between IBM® Product Master and databases, and to check for JDBC and native client connections.

Syntax
test_db.sh

Parameters
None

Example
In this example, the test_db.sh script is run.

$<install directory>/bin/test_db.sh

Output
Output similar to the following displays:

bash-3.00$ ./test_db.sh
====================================================
database user name: waldb7
database user passwd: *****
Client login command: CONNECT TO wal97db USER waldb7 USING *****
JDBC driver type: 4
JDBC URL: jdbc:db2://walnut.svl.ibm.com:60004/wal97db
====================================================

Connecting using the database client SUCCESS

Connecting using JDBC SUCCESS

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

updateRtClasspath.sh script
There are two ways to update the classpath parameter in the env_settings.ini file.

You can use the updateRtClasspath.sh script to update the classpath parameter in the env_settings.ini file without regenerating the common.properties and
other product configuration files. All IBM® Product Master services are started with the classpath as defined in the env_settings.ini file.
This script can be used when a .jar file is added or removed from the jar_dir directory or the jars-custom.txt file is edited. This script is in the $TOP/bin/ directory.

Syntax
updateRtClasspath.sh
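
For example, after you add a .jar file to the JAR directory or edit the jars-custom.txt file, a typical sequence might look like the following. The JAR file name and directory are placeholders:

cp /tmp/my-custom-lib.jar <jar_dir>/
$TOP/bin/updateRtClasspath.sh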

You can also update the classpath parameter in the env_settings.ini file from the $TOP/bin/configureEnv.sh script. The configureEnv.sh script overwrites the
common.properties file. It also overwrites the following files in the $TOP/etc/default directory:

db.xml
mdm-cache-config.properties

Important: Starting with IBM Product Master 12.0 Fix Pack 7, the JAR entries for the following extensions (previously present in the jars-custom.txt file) are automatically
copied to the jars-persona-internal.txt file, which is located in the $TOP/bin/conf/classpath folder.

ftsExtensions
mlExtensions
sdpExtensions
vendorExtensions
dam

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Script operations
Script operations extend the basic functionality of IBM® Product Master. You can use script operations to clean, transform, validate, and calculate information to align
with business rules and processes. This information can then be imported from, and exported to, virtually any standard or custom file format, or used to perform mass
updates to a catalog of information.

The following IBM Product Master script operations are listed alphabetically for your reference.

add script operation


Adds one or more elements to an array. The argument to add can be an expression as well as a list of objects.
addAllObjectsToExport script operation
Specifies that the entities of a specific object type are to be exported.
addAttributeToAttrGroup script operation
Adds an attribute to the attribute collection.
addCategoryTreeMapping script operation
Add a map between the two given categories.
addChildCategory script operation
Adds Category as a child of this category.
addCtgFile script operation
Add a file in the suppliers directory into the zip archive.
addCtgItemToSelection script operation
Adds the item to the static selection, but does nothing for a dynamic selection.
addCtgTab script operation
Add a tabbed view to an existing catalog view. The tab is added to the end of the list of tabs already defined for the container ctg view.
addDate script operation
Add the integer value given to the field specified. Allowed field values are: YEAR, MONTH, DATE, HOUR, MINUTE.
addEntryIntoColArea script operation
Adds the entry to the given stepPath of the collaboration area. Returns a boolean depending on whether the entry was successfully added to the collaboration area.
This script op returns true only if the entry does not exist in the collaboration area or in the source container. When waitForStatus is false, and when this script op
returns true, a BeginStepEvent is scheduled to run asynchronously on the workflow engine. This BeginStepEvent will run some time later after the user's script
transaction successfully completes. If waitForStatus is false, always return null. If waitForStatus is true, then this script op only returns when the separate workflow
engine has processed that BeginStepEvent. The default value of waitForStatus is false. This script op returns false, if the entry already exists in the source container,
in which case the user should use checkOutEntry() to put the entry into the collaboration area. This script op returns false if the entry is a category.
addEntryToSelection script operation
Add the entry to the static selection - the entry can be an item or a hierarchy node (does nothing for dynamic selection).
addItemSecondarySpecToCategory script operation
Associates a secondary item spec to the given Category - if ctgs are passed, only those catalogs are affected by the spec.
addLdapAttribute script operation
Adds a LdapAttribute Object to this LdapEntry.
addLdapEntry script operation
Adds a new LDAP Entry to the given LDAP Entry Set.
addLdapObjectclass script operation
Adds a LdapObjectclass Object to this LdapEntry.
addLocalesToAttrGroup script operation
Adds the locales to the Attribute Collection.
addLocalizedNodeToAttrGroup script operation
Associates the given localized node with this attribute collection.
addObjectByNameToExport script operation
Sets the entity to be exported by specifying the entity name as an argument.
addRow script operation
Add a row to this lookup table - with value(s) for the given key. If the key exists, update the key with the new value(s). If the key does not exist, add a new row.
Returns true if the add or update was successful. Otherwise, returns false.
addRowByOrder script operation
Add a new row to this lookup table - with value(s) sValue/asValues for the key sKey.
addSecondaryCategoryTree script operation
Add the category tree as a secondary category tree.
addSecondarySpecToCategory script operation
Adds a secondary spec to the given category. The optional parameters allow the Spec to be associated with the category without building out the EntryNode
structure, which is useful to improve performance on imports.
addSpecToAttrGroup script operation
Associates all the nodes of the spec with this attribute collection. If the bDynamic flag is set to true then any additional nodes added to the spec, after the spec has
been associated to the Attribute Collection, will become part of the Attribute Collection. If the bDynamic flag is set to false then only the nodes that are currently
part of the spec will be added to the Attribute Collection.
addSubSpec script operation
Adds an entire SubSpec.
addToCompanyLocales script operation
Adds the given locales to the list of locales that are defined for the company. If this script operation is invoked within a database transaction, the script operation
will no longer commit that transaction during its execution. It will also throw an exception if a database problem occurs, where previously it might not have done so.
This is to ensure that a rollback can be carried out safely by the caller. If you were relying on this script operation's transactional effects you will need to adjust your
script. See the product documentation for more information about transactions and scripts.
addToSpecLocales script operation
Adds the given locales to the list of locales that are defined for the spec.
addWorkEntry script operation
Insert a WorkEntry into the WorkEntryList at the specified index.
assertEquals script operation
Throws a ScriptAssertException unless the values are equal.
assertFalse script operation
Throws a ScriptAssertException unless the condition is false.
assertNotNull script operation
Throws a ScriptAssertException unless the value is NOT null.
assertNull script operation
Throws a ScriptAssertException unless the value is null.
assertTrue script operation
Throws a ScriptAssertException unless the condition is true.
associateCategoryCacheToItemSet script operation
Associates the CategoryCache to the ItemSet so that when items are fetched, the corresponding categories are also fetched in bulk.
authenticateWPCUser script operation
Provides authentication for an IBM Product Master user. The optional parameter bEncodedPassword indicates whether the password is being passed already encoded.
The default is false.
beginPerf script operation
Starts performance logging and timing the block of script to the next corresponding endPerf script op.
bidiTransform script operation
If direction is "IMPORT", using the BiDi attributes specified in the parameters to create a BiDiText and then transform it to BiDiText with default attributes. If
direction is "EXPORT", create a BiDiText using default attribute then transform it to BiDiText with attributes specified in the parameters. typeOfText can be :
"IMPLICIT", "VISUAL". orientation can be : "LTR", "RTL", "CONTEXTUAL_LTR", "CONTEXTUAL_RTL". swap can be : "YES", "NO". numShapes can be : "NOMINAL",
"NATIONAL", "CONTEXTUAL", "ANY". textShapes can be : "NOMINAL", "SHAPED", "INITIAL", "MIDDLE", "FINAL", "ISOLATED". default value is:
typeOfText:"IMPLICIT" orientation:"LTR" swap:"YES" numShapes:NOMINAL textShapes:NOMINAL
break-continue script operation
To break/continue from a loop
buildCategory script operation
Returns a new category object when given the complete path of the new category and the delimiter that separates the categories in the path. If the delimiter is not
specified, it defaults to '/' (except if a filespec is used during an import). If the primary key is not specified, then it should either be automatically set via a sequence
or value rule, or it should be set after creation. If used in workflows and the category path already exists in the source category tree, the category will be checked
out. If this script operation is invoked within a database transaction, the script operation will no longer commit that transaction during its execution. It will also
throw an exception if a database problem occurs, where previously it might not have done so. This is to ensure that a rollback can be carried out safely by the caller.
If you were relying on this script operation's transactional effects you will need to adjust your script. See the product documentation for more information about
transactions and scripts.
buildCSV script operation
Takes a variable number of arguments, and returns a string with the arguments concatenated in csv format.
buildCtgItem script operation
This script operation is deprecated.
buildDelim script operation
Takes a variable number of arguments, and returns a string with the arguments concatenated in delim format, using the qualifier to enclose strings that contain the
delimiter.
buildFixedWidth script operation
Takes a variable number of arguments, and returns a string with the arguments concatenated in fixed width format.
buildRE script operation
Returns a regular expression corresponding to the given pattern. Match flags are 0=caseSensitive, 1=ignoreCase, 2=matchMultiline (new lines match as ^ and $),
4=matchSingleLine (treat multiple lines as one line). Flags are additive.
buildSpec script operation
Returns a spec object given the name and the type of the spec. The optional parameter specFileType is only applicable to the spec of type FILE_SPEC. The optional
parameter specFileType specifies the data file type of the file spec.
buildSpecNode script operation
Returns a new node object of a spec with the given path and node order. Please make sure to use a spec that has been obtained using the new Spec() or buildSpec
operation
buildSpecNodeName script operation
Returns the parsed name that was passed in so that it can be used as a spec node name (spec node names accept only letters and digits; other characters are converted
to an underscore _).
buildTestCatalogData script operation
Create a document at sDocStorePath for the file specification fileSpec with nbRows of random data, with the primary key starting at firstSku. If this script operation
is invoked within a database transaction, the script operation will no longer commit that transaction during its execution. It will also throw an exception if a
database problem occurs, where previously it might not have done so. This is to ensure that a rollback can be carried out safely by the caller. If you were relying on
this script operation's transactional effects you will need to adjust your script. See the product documentation for more information about transactions and scripts.
buildTestSpec script operation
Returns a new spec object with the specified name, type and number of fields in the spec.
buildTestSpecMap script operation
Returns a new spec map on the specified map type between the source and the destination - first delete existing map if there is one.
buildWidget script operation
Creates a widget of the given type and of the given name.
catchError script operation
Analogous to a try-catch in Java™, all statements are executed and errMsg (and errObj, if passed) are set to null in the absence of errors. If an error occurs, errMsg
is set to a descriptive string, and errObj (if passed) is set to a Java Throwable object representing the root cause.
checkDouble script operation
If the input string is null or empty, the default value is returned. Otherwise, the original value parsed as a Double is returned.
checkInt script operation
If the input string is null or empty, the default value is returned. Otherwise the original value parsed as an Integer is returned.
checkOutEntries script operation
Checks out the entry into the collaboration area. If stepPath is not specified the entry will be checked-out into the Initial step. If waitForStatus is true, the checkout
will take place immediately and the status will be returned. Otherwise the checkout will not take place immediately, instead a message will be posted to perform
the operation after the current transaction is committed. Returns a HashMap of entry primary key to the status of the checkout (or null if waitforStatus is false).
Checkout status could be one of the following: CHECKOUT_SUCCESSFUL, CHECKOUT_FAILED, ALREADY_CHECKED_OUT, ENTRY_LOCKED and
ATTRIBUTE_LOCKED. ATTRIBUTE_LOCKED indicates one or more attributes editable or required for that collaboration area are checked out to another
collaboration area. waitForStatus is false by default. If this script operation is invoked within a database transaction, the script operation will no longer commit that
transaction during its execution. It will also throw an exception if a database problem occurs, where previously it might not have done so. This is to ensure that a
rollback can be carried out safely by the caller. If you were relying on this script operation's transactional effects you will need to adjust your script. See the product
documentation for more information about transactions and scripts.
checkOutEntry script operation
Checks-out the entry into the collaboration area. If stepPath is not specified the entry will be checked-out into the Initial step. The event id is returned. If
waitForStatus is false, always return null. If waitForStatus is true, then this operation returns when the separate workflow engine has processed the event. Default
is false. Returns a HashMap of entry primary key to the status of the checkout. Checkout status could be one of the following: CHECKOUT_SUCCESSFUL and
ATTRIBUTE_LOCKED. If any attributes editable or required for that collaboration area are locked in some other collaboration area, then the status of
ATTRIBUTE_LOCKED is returned for that entry primary key. If this script operation is invoked within a database transaction, the script operation will no longer
commit that transaction during its execution. It will also throw an exception if a database problem occurs, where previously it might not have done so. This is to
ensure that a rollback can be carried out safely by the caller. If you were relying on this script operation's transactional effects you will need to adjust your script.
See the product documentation for more information about transactions and scripts.
checkString script operation
If the input string is null or empty, the default value is returned; otherwise, the original value is returned. The input string is trimmed of all leading and trailing
spaces, unless a value of false is specified for the optional TRIM parameter.
cloneItem script operation
Create and return a clone of this item.
cloneUser script operation
Clones an existing user's information into a new user. Password field is required. The optional roles and organization fields, when specified, override the roles
and/or organization of the existing user. The user running this operation must have the authority to create new users.
close script operation
Close this writer and, optionally, save the contents of the writer at the given location.
closeZipArchive script operation
Use to close a zip archive and upload to the docstore for future distributions. By default, the archive is deleted after the distribution, unless 'deleteAfterDistribution'
is false.
commit script operation
Commit a transaction using the DB connection.
concat script operation
Takes a variable number of arguments, and returns a string with the arguments concatenated in the order given.
contains script operation
Tests if this string contains an occurrence of the match substring.
containsByPrimaryKey script operation
Returns true if the catalog or item set contains an item with the given primary key.
containsKey script operation
Returns true if key exists.
containsUsingLookupTable script operation
Return true if and only if the string contains at least one of the keys from the lookup table.
containsValue script operation
Returns true if value exists.
copyDoc script operation
Copy this document to the specified sPath in the docstore. If the path ends with a '/' it is assumed that the doc needs to be copied to the specified directory with its
current name.
copySearchItemData script operation
Copy item search data to search selection where the item was retrieved from a search result set. Use the optional append argument if you want to add data to
existing data.
copySearchItemLocationTreeData script operation
Copy item search data to the given search selection where the item was retrieved from a search result set. Data is added for locations for the given location tree.
Use the optional append argument if you want to add data to any existing data.
copyUserToOrganizations script operation
Copy the user to multiple organizations.
createAccessControlGroup script operation
Creates an access control group object with the specified acg name and an optional acg description.
createDataSource script operation
Creates a data source of the specified type with the given name. If the specified name already exists, the data source is not created. If this script operation is
invoked within a database transaction, the script operation does not commit that transaction during its execution. If a database problem occurs, ensure that a
rollback can be carried out safely by the caller. For more information about transactions and scripts, see the information center.
createExcelCell script operation
Returns an ExcelCell at the requested index within the ExcelRow.
createExcelCellStyle script operation
Returns a cell style associated with this ExcelBook. A style contains characteristics of a cell over and above the value, such as the font and the fillPattern. A style is
applied to a cell using ExcelCell::setExcelStyle.
createExcelSheet script operation
Creates a sheet from the workbook. If a sheet name is supplied then the sheet is created with this name.
createExport script operation
Creates the Export with the given parameters. An optional parameter "charsetName", which may be set in the "optionalArgs" parameter, describes the file encoding of
the export. Otherwise, Cp1252 is chosen as the default file encoding.
createFont script operation
Returns an ExcelCellFont associated with this ExcelBook; the ExcelFont set methods should be used to set up the font as required. This font can then be used as the
input parameter to ExcelCellStyle::setFont(font). The ExcelCellStyle can then be set on a cell using ExcelCell::setExcelStyle(cellStyle).
createImport script operation
Creates the Feed with given parameters. An optional argument "sCharsetName", which may be defined in the optionalArgs HashMap, describes the file encoding of
the feed. Otherwise, Cp1252 is chosen as the default file encoding. Also, optional parameters to describe if the current container is a collaboration area, and the
step path of the workflow step in to which the feed is to be done, could be specified.
createJavaArray script operation
Create an array of type typeName. The number of dims specified indicates the number of dimensions that the array will be created with. The value of these
numbers indicates the number of elements in that dimension. e.g., supplying 1 and 4 as the dims would indicate that a two dimensional array will be created; the
first dimension containing 1 element, the second containing 4 elements. If an array of primitives is to be created, supply the type as the Java primitive keyword
such as "int" or "boolean". If the type is a class name, it should be fully qualified and should not be an interface.
createJavaConstructor script operation
From the description of the Java constructor, a lookup into the specified class is performed to locate a constructor which matches the search criteria.
createJavaMethod script operation
Create a java.reflect.Method Object by reflection using a className, methodName and optional types. className and methodName should not be null. The
className should be fully qualified. The className may be a fully qualified interface name. If the method you wish to target contains primitive arguments, those
arguments should be supplied with the Java primitive keyword such as "int","boolean".The className should not be primitive classes (i.e. Class literal names such
as int.class or int.TYPE). In order to pass an array type, use [] for one-dimensional arrays and multiple []s for multiple-dimension arrays. For example, to target a
two-dimensional array of ints, pass "int[][]"; to target a one-dimensional array of Strings, pass "java.lang.String[]".
createLDIFFile script operation
Creates an LDIF formatted file based upon an input LDAP entry set. The filename is a docstore reference.
createNestedWflStep script operation
Adds a nested workflow step to the workflow.
createOtherOut script operation
Returns a new writer with the given name and an optional charset value. Close the writer when you are done using it.
createQueue script operation
Creates a new queue with the given parameters. If this script operation is invoked within a database transaction, the script operation will no longer commit that
transaction during its execution. It will also throw an exception if a database problem occurs, where previously it might not have done so. This is to ensure that a
rollback can be carried out safely by the caller. If you were relying on this script operation's transactional effects you will need to adjust your script. See the product
documentation for more information about transactions and scripts.
createRole script operation
Creates a role object with the specified rolename and an optional role description.
createRow script operation
Returns an ExcelRow at the requested index.
createUser script operation
Creates a user with the specified parameters. Enabled, Password, Roles, and organization parameters are required. encryptPassword exists for the purpose of
migrating environments so that encrypted passwords exported from one environment can be loaded into another environment without encrypting them again and
that there is no possibility of knowing what the password was. EnableLdap marks the user as LDAP enabled and allows the provision of extra LDAP parameters, the
LDAP name attribute and the LDAP Server URL. The user running this operation must have the authority to create new users.
createWebService script operation
Creates a new web service with the given parameters. To save and deploy the web service (if DEPLOYED is true), call saveWebService(). NAME is the name of the
service. IMPLCLASS is the Java class for Java based web services or "" for script based ones, DESC is the description of the service. WSDLDOCPATH is the doc path
at which the WSDL is stored. WSDDDOCPATH is the doc path at which the WSDD is stored. PROTOCOL is the protocol. Currently, "SOAP_HTTP" is the only supported
protocol. STYLE is the message style. Currently, RPC_ENCODED and DOCUMENT_LITERAL are supported. IMPLSCRIPTPATH is the doc path of the service
implementation script. It is the caller's responsibility to ensure that WSDLDOCPATH, WSDDDOCPATH, and IMPLSCRIPTPATH do not cause the documents for any
other web service to be overwritten. STOREINCOMING determines whether incoming requests are stored. STOREOUTGOING determines whether outgoing request
are stored. DEPLOYED determines whether the service will be deployed. AUTH_REQUIRED determines whether a username, company name, and password are
required to invoke this web service. SKIPREQUESTVALIDATION determines whether the inbound SOAP message is validated against WSDL schema.
SKIPRESPONSEVALIDATION determines whether the outbound SOAP message is validated against WSDL schema. If a web service with the name of NAME already
exists, throws an AustinException.
createWflStep script operation
Adds a new step to the workflow if a step with the given name does not exist.
decodeUsingCharset script operation
Returns a string by decoding the string using the named charset.
defineLocationSpecificData script operation
Sets up location specific data for a catalog. CTR is the category tree that contains the locations. SPC is the spec of the locations. INHATTRBRPS is an array of
attribute groups containing the inheritable attributes.
deleteAttrGroup script operation
Deletes the given attribute collection.
deleteCatalog script operation
Delete the catalog ctg.
deleteCategory script operation
Deletes this category from its category tree. An exception is thrown if this script op is called from categories in collaboration areas or categories in a hierarchy that
are currently checked out.
deleteCategoryTree script operation
Delete the category tree ctr. Returns Validation Error array if any validation errors occurred. Null if successful.
deleteCtgItem script operation
Delete the catalog item itm in the source catalog. An exception is thrown if this script op is called from items in collaboration areas or items in a source catalog that
are currently checked out.
deleteCtgView script operation
Delete a catalog view. If this script operation is invoked within a database transaction, the script operation will no longer commit that transaction during its
execution. It will also throw an exception if a database problem occurs, where previously it might not have done so. This is to ensure that a rollback can be carried
out safely by the caller. If you were relying on this script operation's transactional effects you will need to adjust your script. See the product documentation for
more information about transactions and scripts.
deleteDoc script operation
Delete this document from the docstore.
deleteEntryNode script operation
Remove this entry node from the Entry. This operation works only on multi-occurrence attributes; an error is thrown if it is used on an entry node that is not a
multi-occurrence entry node.
deleteLookupTable script operation
Delete the lookup table.
deleteSearchTemplate script operation
Delete this search template.
deleteSelection script operation
Delete the selection. Return true if the deletion occurred, false if selection was in use.
deleteSpec script operation
Delete this spec.
deleteWebService script operation
Deletes the Web Service in the database and undeploys it.
deleteWfl script operation
Delete a workflow. It throws an exception if the workflow cannot be deleted, for example, if it is used by any collaboration area.
disableBatchProcessingForItems script operation
Sets up the import to not process items in bulk. This used to be achieved in earlier releases by setting up an import on a catalog different from the one the user
wanted to import into.
disableContainerProcessingOptions script operation
Disable the specified processing options.
displayCtgItemAttrib script operation
Returns the HTML string for displaying the item attribute value specified by the attribute path.
disableEntryValidation script operation
Disables collaboration entry validation for a given workflow step and collaboration area entry. The Entry in question needs to be an entry in a collaboration area, for
example, not a source container entry, otherwise the operation is a no-op. The Entry in question needs to currently be in the step specified by stepAtPath, otherwise
the operation is a no-op. When those conditions are met, the effect of the call with argument disableValidation set to true, is to permit the entry to leave the step
without being validated. The effect of a call with argument disableValidation set to false, is to restore that permission, for example, restore the default situation that
the entry must pass validation in order to be permitted to leave the step. When the entry leaves the step, or is forcibly removed from it, this "setting" or "flag" on it
will be cleared. If the script op succeeds in execution, for example, the flag is set to either true or false, it returns true. If this invocation is a no-op, it returns false.
This script operation returns null if CollaborationArea, stepAtPath or collabAreaEntry is null.
displayEntryAttrib script operation
Returns the HTML string for displaying the entry attribute specified by the attribute path.
dropEntries script operation
Posts a message to drop the entries in the entrySet from the collaboration area and to unlock the attributes which were locked in the source container for those
entries. The drop will not take place until after the current transaction has committed.
dropEntry script operation
Posts a message to drop the entry from the collaboration area and to unlock the attributes which were locked in the source container for that entry. The drop will
not take place until after the current transaction has committed.
dumpContext script operation
Return the script context in a string (and dumps it to the logger l if specified)
dumpSystemLog script operation
Return the last nLines of the system log sName.
dumpUserDefinedLog script operation
Dump all log entries from the user defined log to the Writer provided in no specific order.
encodeUsingCharset script operation
Encodes the string using the named charset.
endPerf script operation
Ends the current performance logging and timing.
endsWith script operation
Tests if the string ends with an occurrence of the match substring.
escapeForCSV script operation
If the given string contains a comma, a newline character (\n), a carriage return character (\r), or double quotation marks, the returned string has double quotation
marks at the beginning and end, and any embedded double quotation marks have another double quotation mark added to them. The string - abdc,asjdfh, "asdfdas" - would become "abdc,asjdfh, ""asdfdas"""
escapeForHTML script operation
By default, isAscii is true. When isAscii is false, the HTML-type control characters in the given string are escaped as follows: "<" is converted to "&#60", ">" is
converted to "&#62", the double quotation mark (") is converted to "&#34", "'" is converted to "&#39", and "\" is converted to "&#92". Continuous space characters
are converted to "&nbsp" regardless of whether isAscii is true or false.
escapeForJS script operation
Adds escape characters into the string, as necessary, so that the string can be used in JavaScript. For instance, a "\" character is converted to "\\", "\\" is
converted to "\\\\", "\n" is converted to "\\n", and so on.
escapeWithHTMLEntities script operation
The two integers define a character range. The characters outside of this range are converted to HTML escape sequences. For instance, if the range does not include
the numeric representation of the letter A (65), then any A's in the given string are converted to &#65;
execute script operation
Execute the search query.
executeBatchUpdate script operation
Executes a prepared statement for a batch update using the connection object. The Object[][] is a HashMap of HashMaps, each indexed by integer, whose value is
the replacement for a '?' in the prepared statement for a given batch.
executeInBackground script operation
Execute the search query in background and save result as a selection.
executeQuery script operation
Execute the SQL query using the Connection object. Returns the ResultSet.
executeUpdate script operation
Execute the update using the Connection object. Returns the number of rows inserted, updated, or deleted.
exportCatalog script operation
Use to syndicate a catalog using mktplaceSpec and specMap.
exportEnv script operation
Exports the IBM Product Master objects specified in envObjList at the specified DocStore path or file system path.
exportXML script operation
Exports the spec in XML format.
exportXSD script operation
Exports a spec to a String in a XML Schema Definition(XSD) format.
flushScriptCache script operation
Flushes the script cache on the local JVM. While this is normally done automatically, this script operation is provided in case there are any techniques that would
cause the scripts to update in docstore, without properly updating the cache. This method may also be used to test the caching behavior of scripts.
for script operation
Equivalent to doing init-statement; while(cond) {t-statements; each-statement;}
forEachCategorySetElement script operation
Executes the statements for each Category(oCategory) in the categorySet
forEachCtgItem script operation
Executes the statements for each item in the catalog called sCatalogName, if specified; otherwise, the catalog from the current context is used.
forEachDocument script operation
Executes the statements for each document (used in distribution scripts). If the optional docs_list parameter is provided, however, the statements are executed for
each element of docs_list
forEachEntrySetElement script operation
Executes the statements for each (oEntry) in the entrySet
forEachHmElement script operation
Executes the statements for each (oKey, oValue) map in hm.
forEachItemSetElement script operation
Executes the statements for each item in the ItemSet. The oItem variable is set to each item in the set as the script operation iterates through the item set.
forEachLine script operation
Executes the statements for each line that is read from.
forEachUserDefinedLogEntry script operation
Executes the statements for each group of log entries in the given UserDefinedLog or, if Entry e is defined, each log entry for that specific Entry. If
bReturnMultipleLogEntries is false, the array of log entries will contain only the first (oldest) log in chronological order. This is only a valid option if Entry e is not
defined. If bReturnMultipleLogEntries is true, all logs are populated in the array in ascending chronological order for a given Entry (oldest first). By default,
bReturnMultipleLogEntries is true.
forEachXMLNode script operation
Executes the statements for each XML node that has the relative path xPath. Paths in the block are relative to xPath. If the node variable is passed in as an
argument, it is populated with the XMLNode that is being operated on in each iteration of forEachXMLNode. If the rootNode is specified, the path is relative to the
path of rootNode. When only one node is specified, it is treated as XMLNode and not rootNode. The rootNode argument is valid only when the XMLNode argument is
passed.
formatDate script operation
Use to format a date in a human readable format. The sNewFormat string is a pattern whose format is identical to the format used by Java. Locale is optional,
default is the UI locale.
formatNumber script operation
Convert a Number to a String in the given format or a format based on the locale. If numberFormat is null the default format of the locale is used. If locale is null,
the default locale of the current user is used.
formatNumberByLocPrecision script operation
Returns a string of the given number formatted according to the given locale and precision.
formatNumberByPrecision script operation
Returns a string of the given number formatted with the given precision.
getAccessControlGroupByName script operation
Returns an access control group object for the specified acg name.
getAccessControlGroupName script operation
Gets the name of the access control group.
getAccessControlGroupPrivsForRole script operation
Gets the access control group privileges for the given access control group and the given role. The return parameter is an array of privileges (which are defined in
the format: Catalog__list, Selection__list, SelectionMembers__view_items etc.).
getAccessControlGroupsForRole script operation
Gets the access control groups for the given role.
getAddedAttributePathsNewEntry script operation
Returns the paths of all attributes in given location that (1) are not present in the old entry and (2) are present in the new entry from which this EntryChangedData
object was created. If the given location is not specified or is null, then the comparison is done for global attributes.
getAllAttrGroupsForAttribute script operation
Returns an array of attribute collections where the given attribute path is included. Returns null if the given attribute path is not included in any attribute collection.
getAllAttributePathsFromAttrGroup script operation
Returns all the attribute paths associated with this attribute collection.
getAllCurrencies script operation
This operation returns all currency codes that IBM Product Master supports.
getAllUsers script operation
Returns all users.
getAllWflNames script operation
Returns a list of all workflow names.
getAribaAttribute script operation
Gets the Ariba constant attribute names. Valid attribute names are PAYLOADID, TIMESTAMP, SHAREDSECRET, AUSTINDUNS
getAttrGroupByName script operation
Returns the attribute collection with the given name.
getAttrGroupName script operation
Returns the name of this attribute collection.
getAttrGroupType script operation
Returns the type of this attribute collection.
getAttributeGroupsToProcess script operation
Return the list of attribute collections, if any have been specified, which restrict processing a retrieval of attributes from the database. If a null is returned, it means
that retrieval and processing is not being restricted and all attributes are being processed.
getAvailableLocations script operation
Returns CategorySet of locations in the item that are either a match for the given location or are in the given CategoryTree.
getBoolean script operation
Get the value of the designated column in the current row of this SearchResultSet as a boolean.
getCatalog script operation
If Object is an item then the Catalog that the item belongs to is returned. If Object is a SearchResultSet then the value at the given column index in the current row
is returned as a catalog.
getCatalogAccessControlGroupName script operation
Returns the Access Control Group for this catalog.
getCatalogAttribute script operation
Returns a list of values for the attribute.
getCatalogAttributes script operation
Returns a HashMap mapping attributes to their respective values. The attributes returned are "SCRIPT_NAME", "PRE_SCRIPT_NAME",
"POST_SAVE_SCRIPT_NAME", "ENTRY_BUILD_SCRIPT", "DISPLAY_ATTRIBUTE", "USER_DEFINED_CORE_ATTRIBUTE_GROUP" and
"SCRIPT_RESTRICT_LOCALES".
getCatalogCategoryTrees script operation
Returns an array of the category trees defined for this catalog.
getCatalogId script operation
Returns the id of this catalog.
getCatalogItemCountInVersion script operation
Returns the number of items in the specified version of this catalog.
getCatalogNamesList script operation
Return the list of names of available catalogs filtered by catalog privileges LIST (list catalog), VIEW_ITEMS (view items in catalog), MODIFY_ITEMS (modify items in
catalog). By default the catalog names for the catalogs with LIST privilege access are returned.
getCatalogNameToExport script operation
Returns the last value set using the setCatalogByNameToExport script operation.
getCatalogSpec script operation
Returns the spec of this catalog. If the optional boolean bGetImmutableSpec is set to true, an immutable spec is retrieved (you cannot modify the spec, but it is faster to retrieve). By default, you get a mutable spec.
getCatalogVersion script operation
Returns the current version of this catalog.
getCatalogVersionSummary script operation
Returns an array with the versions of this catalog, most recent first.
getCatalogsByAttributeValue script operation
Returns all catalogs that have an attribute with the given value.
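For example, a minimal sketch of basic catalog introspection; the catalog name "Master Catalog", the method-style invocation, and out.writeln() are illustrative assumptions:
    var aNames = getCatalogNamesList();            // catalogs with LIST privilege, by default
    var ctg = getCtgByName("Master Catalog");      // hypothetical catalog name
    out.writeln(ctg.getCtgName() + " (id " + ctg.getCatalogId() + ", version " + ctg.getCatalogVersion() + ")");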
getCategorizedItemCountInVersion script operation
Returns the number of items categorized in the specified category tree for the specified version of this catalog.
getCategory script operation
Get the value of the designated column in the current row of this SearchResultSet as a Category.
getCategoryAttrib script operation
Returns the value of the attribute sAttribPath (spec_name/attribute_name) of this category, but only when the attribute is included in the View or Attribute Collection; otherwise it returns null.
getCategoryByPath script operation
Returns the category with a full name path equivalent to sNamePath. sNamePath is expected to be delimited by sDelim. sNamePath should not contain the name of
the root category, since we are already restricted to a specific category tree. If bLight is true, not all data for the category is retrieved. If bReadOnly is true, a read
only copy of the category is retrieved - bReadOnly should be used in exports, for example.
getCategoryByPathNoCfp script operation
Returns the category with a full name path equivalent to sNamePath. sNamePath is expected to be delimited by sDelim. sNamePath should not contain the name of
the root category, since we are already restricted to a specific category tree. If bLight is true, not all data for the category is retrieved. If bReadOnly is true, a read
only copy of the category is retrieved - bReadOnly should be used in exports, for example.
getCategoryCache script operation
Returns a CategoryCache for this CategoryTree. The cache is empty if get_all_categories is false, and the size is the given size or 100, whichever is greater. If get_all_categories is true, the cache contains all the categories for the given category tree and the size arguments are ignored. The size of the cache in the latter case is the greater of the number of categories in the tree or 100.
getCategoryChildren script operation
Returns the categories immediately below this category. If the optional Boolean 'ordered' is set to true, the operation returns the ordered children of this category when the catalog (the default catalog, if none is specified) is set up to use ordering. If the optional 'restrictToSubtreeWithItems' is set to true, only categories that have items in their subtrees are returned.
getCategoryChildrenUsingCache script operation
Returns this category's children, making use of the cache if one is provided.
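A hedged sketch of cache-assisted category traversal; the hierarchy name, the category path, and the getCategoryCache argument order are assumptions:
    var ctr = getCategoryTreeByName("Products Hierarchy");        // hypothetical hierarchy name
    var cache = ctr.getCategoryCache(true);                       // pre-load all categories; size is ignored
    var cat = ctr.getCategoryByPath("Electronics/Phones", "/");   // path without the root category name
    var aChildren = cat.getCategoryChildrenUsingCache(cache);     // reuse the cache while traversing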
getCategoryCode script operation
Returns the code of this category.
getCategoryHasChildren script operation
Returns true if the category has any children.
getCategoryLevels script operation
Returns the levels of this category in an array of Integers.
getCategoryOrganizations script operation
Return all the organizations that the given category is mapped to.
getCategoryParent script operation
Returns this category's parent. If there are multiple parents, only the first one is returned.
getCategoryParents script operation
Returns the parent categories of the given Category.
getCategoryParentsUsingCache script operation
Returns this category's parents making use of the cache if provided.
getCategorySet script operation
Returns a CategorySet for this CategoryTree.
getCategorySetByAttributeValue script operation
Returns a CategorySet with all categories in the category tree that have the given AttribPath and AttribValue. Use "" or null AttribValue for searching EMPTY values. An exception is thrown if the attribPath is for a non-runtime-searchable attribute.
getCategorySetByFullNamePath script operation
Returns a CategorySet of the categories in the category tree from the given full name paths. Do not include the category tree name in the full name paths.
getCategorySetByItemSecondarySpec script operation
Returns a CategorySet that is a subset of the categories of this tree having the specified spec in their item secondary spec list.
getCategorySetByLevel script operation
Returns a CategorySet of the categories in the category tree at a particular level.
getCategorySetByPrimaryKey script operation
Returns a CategorySet with the categories in the category tree which match the given primary key.
getCategorySetByStandAloneSpec script operation
Returns a CategorySet that is a subset of the categories of this tree having the specified spec in their stand alone spec list.
getCategorySetSize script operation
Returns the number of categories in a category set.
getCategoryTree script operation
Returns the category tree object this category belongs to. Use getCategoryTreeByName() to get the category tree being used for an aggregation/syndication.
getCategoryTreeByName script operation
Returns the category tree object with the corresponding name. If name is not provided, return the category tree being used for the aggregation/syndication.
getCategoryTreeMap script operation
Returns the category tree map between the two given category trees.
getCategoryTreeName script operation
Returns the name of this categoryTree.
getCategoryTreeNamesList script operation
Returns the list of names of available category trees, filtered by the category tree privileges LIST (list category tree), VIEW_ITEMS (view items in category tree), and MODIFY_CATEGORY_ATTRIBUTES (modify category attributes in category tree). By default, the names of the category trees with LIST privilege access are returned.
getCategoryTreeSpec script operation
Returns the spec of this category tree.
getCellObj script operation
Returns the Excel Obj at the given column index for further investigation.
getCheckedOutEntryColAreas script operation
Returns a list of collaboration area names in which the entry is checked out. Returns an empty list if the entry is not checked out.
getCheckedOutEntryColAreasByPrimaryKey script operation
Returns a list of collaboration area names in which the entry for the given primary key is checked out. Returns an empty list if the entry is not checked out.
getColAreaAdminRoles script operation
Returns the admin role names for the collaboration area.
getColAreaAdminUsers script operation
Returns the admin user names for the collaboration area.
getColAreaByName script operation
Returns a collaboration area object with the given name if it exists; otherwise, returns null. By default, useCache is true and caching is set up for collaboration areas. If the collaboration area exists but is not in the cache and useCache is true, the collaboration area is cached.
getColAreaContainer script operation
Returns the collaboration area as a container.
getColAreaEntryHistory script operation
Returns the entire history of the entry in the given collaboration area.
getColAreaHistoryByTimePeriod script operation
Returns the entire history of the given collaboration area for the time period defined by beginDate and endDate.
getColAreaHistoryDate script operation
Returns the date of the given collaboration area history event.
getColAreaHistoryEntryKey script operation
Returns the primary key from the entry from the given collaboration area history event.
getColAreaHistoryEventAttribute script operation
Returns the attribute value for the given collaboration area history event type attribute name. attrName could be one of the following: COMMENT, EXIT_VALUE,
ENTRY_DIFFERENCES
getColAreaHistoryEventType script operation
Returns the event type for the given collaboration area history event. Event types could be one of the following: CHECKOUT, CHECKIN, ENTERSTEP, LEAVESTEP,
SAVEENTRY, DROP, TIMEOUT.
getColAreaHistoryStepPath script operation
Returns the step path for the given collaboration area history event.
getColAreaHistoryUser script operation
Returns the username from the given collaboration area history event.
getColAreaHistoryWorkflow script operation
Returns the workflow name from the given collaboration area history event.
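Putting the history operations together, a hedged sketch; the collaboration area name, the availability of 'entry' in scope, the method-style invocation, and the array iteration with size() are all assumptions:
    var ca = getColAreaByName("Item Enrichment");          // hypothetical collaboration area name
    var aHistory = ca.getColAreaEntryHistory(entry);       // 'entry' assumed available in scope
    var i = 0;
    while (i < aHistory.size()) {                          // iteration style assumed
        var ev = aHistory[i];
        out.writeln(ev.getColAreaHistoryEventType() + " by " + ev.getColAreaHistoryUser() + " on " + ev.getColAreaHistoryDate());
        i = i + 1;
    }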
getColAreaId script operation
Returns the internal Id for the Collaboration Area.
getColAreaName script operation
Returns the name of the collaboration area.
getColAreaNames script operation
Returns all of the Collaboration Area Names for the current Company.
getColAreaNamesForRole script operation
Returns a list of collaboration area names that may be worked with by a particular role.
getColAreaNamesForUser script operation
Returns a list of collaboration area names that the current user can work with.
getColAreaSrcContainer script operation
Returns the source container which this collaboration area is tied to. This can be a catalog or a hierarchy.
getColAreaStepHistory script operation
Return the entire history of the workflow step in the given collaboration area.
getColAreaWorkflow script operation
Returns the workflow which this collaboration area is tied to.
getColumn script operation
Get the entry for the current result at column colName. Returns an object of type Integer, String, or Date (depending on the data type of the column).
getColumnAt script operation
Get the entry for the current result at column position. Returns an object of type Integer, String, or Date (depending on the data type of the column).
getCompanyCode script operation
Returns the company code of the current company.
getCompanyCurrencies script operation
This operation returns the currency codes selected in the company attribute.
getCompanyLocales script operation
Returns the locales that are used within the current company.
getCompanyName script operation
Returns the name of the current company.
getContainerId script operation
Returns the id of this container.
getContainerLocalesForRole script operation
Gets the locales from the given container for the given role.
getContainerType script operation
Returns the type of the container. Types can be one of the following: CATALOG, CATEGORY_TREE.
getCountOfEntriesInColArea script operation
Returns the number of entries currently in ALL the workflow steps of the collaboration area.
getCountOfEntriesInColAreaStep script operation
Returns the number of entries currently in the given workflow step of the collaboration area.
getCtgAccessPrvByRole script operation
Returns the catalog access privilege for the catalog and role. Returns catalog access privilege with full access if none was found.
getCtgAccessPrvPermission script operation
Returns the permission [E-editable|V-viewable] for the given attribute collection in the current catalog access privilege.
getCtgByName script operation
Returns the catalog object with the corresponding name. If no name is provided, returns the default catalog (if defined).
getCtgCategorySpecs script operation
Returns the category specs for this catalog.
getCtgFileDiffStatus script operation
Returns true or false to indicate whether or not the file was modified between the two versions selected for differences syndication.
getCtgFileExists script operation
Returns true or false to indicate whether the physical file really exists.
getCtgItemAllCategories script operation
This script operation is deprecated.
getCtgItemAtOldVersion script operation
Returns the old version of the item in the differences syndication.
getCtgItemAttrib script operation
Returns the value of the attribute sAttribPath (spec_name/attribute_name) of this item, but only when the attribute is included in the View or Attribute Collection; otherwise it returns null.
getCtgItemAttribByPk script operation
Returns the value of the attribute sAttribPath (spec_name/attribute_name) of the item with the given primary key.
getCtgItemAttribNamesList script operation
Returns an array of String containing the attribute names of all the attributes of this item (an optional parameter allows category-specific attributes to be excluded; true by default).
getCtgItemAttribsForKeys script operation
Gets the attributes for an item based upon the passed Object[] (declared: var aAttribs = [];) of attribute keys (paths). The resultant values are loaded into the value pairs of the aAttribs mapping. By specifying the delimiter parameter, in addition to populating the aAttribs mapping, the operation returns a CSV string representation of the retrieved values separated by the delimiter character.
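For example, a minimal sketch of the aAttribs usage described above; the attribute paths are hypothetical and 'item' is assumed to have been retrieved already:
    // 'item' is assumed to have been retrieved already, for example with getCtgItemByPrimaryKey
    var aAttribs = [];
    aAttribs["Product Spec/Name"] = "";                        // hypothetical attribute paths
    aAttribs["Product Spec/Price"] = "";
    var sCsv = item.getCtgItemAttribsForKeys(aAttribs, "|");   // also fills aAttribs with the values
    out.writeln(sCsv);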
getCtgItemAttribsList script operation
Returns an array of String containing the paths (spec_name/attribute_name) of all the attributes of this item.
getCtgItemAttributeNewValue script operation
This script operation is deprecated.
getCtgItemAttributeOldValue script operation
This script operation is deprecated.
getCtgItemByAttributeValue script operation
Returns an ItemSet of items from the catalog that have the provided value for the attribute. Use "" or null value for searching EMPTY values. An exception is thrown if the attribute does not exist or it is not runtime searchable.
getCtgItemByPrimaryKey script operation
Returns the item from the catalog with the given primary key - this method cannot be used to retrieve newly created items that have not been saved yet.
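A minimal sketch; the catalog name, primary key, and attribute path are hypothetical:
    var ctg = getCtgByName("Master Catalog");                      // hypothetical catalog name
    var item = ctg.getCtgItemByPrimaryKey("SKU-0001");             // hypothetical primary key; saved items only
    if (item != null) {
        out.writeln(item.getCtgItemAttrib("Product Spec/Name"));   // hypothetical attribute path
    }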
getCtgItemCatSpecificAttribsList script operation
Returns an array of String containing the paths (spec_name/attribute_name) of all the category specific attributes of this item.
getCtgItemCategories script operation
Returns the categories this item is mapped to. If catTreeName is given, returns the categories within that ctr only (the default category tree is used if no category tree is passed). An optional CategoryCache can also be passed in catCache.
getCtgItemCategoryPaths script operation
Returns an array of delimited strings of the category paths this item belongs to. If ctr is given, returns the paths of the categories within that ctr only.
getCtgItemCategoryPathsForPrimaryKey script operation
Returns an array of delimited strings of the category paths for the item with sPrimaryKey in Catalog. If ctr is given, returns the paths of the categories within that ctr only.
getCtgItemDiffStatus script operation
For content difference syndications, returns this item's difference status (A, M, D, U).
getCtgItemId script operation
Returns this item's ID.
getCtgItemIdByPrimaryKey script operation
Returns an item id from an item selected by its primary key.
getCtgItemLocationAttribsForKeys script operation
Gets the attributes for an item based upon the passed location category and an Object[] (declared: var aAttribs = [];) of attribute keys (paths). The resultant values are loaded into the value pairs of the mapping. If the value for a key is unset, it defaults to a blank string. If the key does not correspond to an attribute, a null is entered instead.
getCtgItemMappedAttrib script operation
Returns the value of the item's attribute mapped to or from the given attribute path (mapped_spec_name/attribute_name).
getCtgItemMappedAttribs script operation
Returns a HashMap with the mapped attributes values from the given item, indexed by their path (mapped_spec_name/attribute_name).
getCtgItemMappedAttribsList script operation
Returns an array of String containing the paths (mapped_spec_name/attribute_name) of all the mapped attributes of this item.
getCtgName script operation
Returns the name of this catalog.
getCtgItemOrganizations script operation
Returns all the organizations the given item is mapped to.
getCtgItemPrimaryKey script operation
Returns this item's primary key value.
getCtgSpec script operation
Returns the spec of this catalog. If the optional boolean bGetImmutableSpec is set to true, an immutable spec is retrieved (you cannot modify the spec, but it is faster to retrieve). If bGetImmutableSpec is set to false, a mutable spec is retrieved. If bGetImmutableSpec is not specified, the type of spec retrieved is governed by the get_immutable_specs parameter in the common.properties file: if this parameter is set to true (the setting as the file is delivered), an immutable spec is retrieved; if it is set to false (or is removed), a mutable spec is retrieved.
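For illustration, a hedged sketch that retrieves an immutable spec and inspects a node ('ctg' is assumed to be a catalog object; the path and node attribute name are hypothetical; getNodeByPath and getNodeAttributeValue are described later in this list):
    var spec = ctg.getCtgSpec(true);                         // true = immutable: faster, but read-only
    var node = spec.getNodeByPath("Product Spec/Name");      // hypothetical node path
    if (node != null) {
        out.writeln(node.getNodeAttributeValue("TYPE"));     // node attribute names such as TYPE, MAXLENGTH
    }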
getCtgTabAttrGroupsList script operation
Returns an ordered list of attribute collections for the catalog view tab.
getCtgTabByName script operation
Returns the tabbed view object with the given name.
getCtgTabName script operation
Returns the name of the tabbed view.
getCtgTabRow script operation
Returns the set of rows from a tabbed view.
getCtgTabs script operation
Returns an ordered array of container tab objects for the particular container view.
getCtgViewAttrGroupsList script operation
Returns an ordered list of attribute collections for the catalog view.
getCtgViewAttribsList script operation
Returns a list of ordered attribute paths for the catalog view.
getCtgViewByName script operation
Returns the view with the corresponding name. If no name is specified, returns the default view. Use '[System Default]' to refer to the default view. The viewType can be 'ITEM_LIST', 'ITEM_POPUP', 'ITEM_LOCATION', 'BULK_EDIT', 'ITEM_EDIT', 'CATEGORY_EDIT', 'CATEGORY_BULK_EDIT', 'CATEGORY_RICH_SEARCH', or 'ITEM_RICH_SEARCH'. By default, ITEM_EDIT/CATEGORY_EDIT is used. If the view is not found, null is returned.
getCtgViewPermission script operation
Returns the permission [E-editable|V-viewable] for the attribute collection name in the current view.
getCtgViewType script operation
Returns the type of the given view as a string.
getCurrencyDescByCode script operation
This operation returns a currency description for the given currency code.
getCurrencySymbolByCode script operation
Returns the currency symbol for the given currency code; for example, for the input "USD" the returned symbol is "$".
getCurrentCtgViewName script operation
Returns the name of the current catalog view (only in Data Entry scripts). Returns an empty string outside of the Data Entry scripts.
getCurrentLine script operation
Returns the current line.
getCurrentLocation script operation
Returns the category that identifies the current location if the script is running in the context of a location.
getCurrentUserName script operation
Returns the name of the current user.
getCustomMessage script operation
Given the message ID (and locale), returns the description of the message.
getDate script operation
Get the value of the designated column in the current row of this SearchResultSet as a date.
getDateCellValue script operation
Returns the value of this date cell as a date. Use this function only if it has been determined, for example by using String ExcelCell::getExcelCellType(), that the cell is a date type.
getDateField script operation
Get the value of the field specified. Allowed field values are: YEAR, MONTH, DATE, HOUR_OF_DAY, MINUTE, and SECOND.
getDateFromDoubleValue script operation
Creates a Date Object from a given Double value.
getDateInputFormat script operation
Returns the date input format set in my setting.
getDateOutputFormat script operation
Returns the date output format set in my setting.
getDateTimeInUserTimeZone script operation
Returns a Date object containing the time in the user's time zone. This may be different from the time in the server's time zone.
getDefaultACGName script operation
Returns the name of the default ACG in the current company.
getDefaultAttrCollectionName script operation
Returns the name of the default attribute collection in the current company.
getDefaultCatalogName script operation
Returns the name of the catalog being used for an aggregation or syndication.
getDefaultCategoryTreeName script operation
This script operation is deprecated.
getDefaultCharset script operation
Returns the default charset of the current company.
getDefaultCtgViewName script operation
Returns name of the default catalog view.
getDefaultCtrViewName script operation
Returns name of the default category tree view.
getDefaultLktHierarchyName script operation
Returns the name of the default lookup table hierarchy in the current company.
getDefaultLocale script operation
Returns the default locale of the current company.
getDefaultOrgHierarchyName script operation
Returns the name of the default organization hierarchy in the current company.
getDefaultSpecDispNameAttribute script operation
Returns the display name attribute of the default spec in the current company.
getDefaultSpecName script operation
Returns the name of the default spec in the current company.
getDefaultSpecNameAttribute script operation
Returns the name attribute of the default spec in the current company.
getDefaultSpecMapName script operation
Returns the name of the spec map that is being used for an aggregation or syndication.
getDefaultSpecPathAttribute script operation
Returns the path attribute of the default spec in the current company.
getDeletedAttributePathsOldEntry script operation
Returns the paths of all the attributes in the given location that (1) are not present in the new entry and (2) are present in the old entry from which this
EntryChangedData object was created. If the given location is not specified or is null, then the comparison is done for global attributes.
getDesc script operation
Returns the description of this Web Service.
getDescendentCategorySetForCategory script operation
Returns a CategorySet consisting of all the descendents of this category.
getDestinationCatalog script operation
Returns the destination catalog for catalog to catalog exports.
getDestinationEntrySetForRelatedEntries script operation
Returns the EntrySet with all the entries this entry is related to. If a filter container is specified, only the entries held in the given container are returned.
getDistributionByName script operation
Gets the distribution with the specified name, if one exists; otherwise, returns null.
getDisplayValue script operation
Returns the display value of this entry. If there is no valid display value, this function will return the primary key value.
getDocAttribute script operation
Return the attribute value from this document for the given attribute name.
getDocAttributes script operation
Return the attributes of this document.
getDocByPath script operation
Returns the document with path sPath. If forceSync is set to true, the sync is done immediately, even before the normal sync by the mount manager.
getDocContentAsString script operation
Returns the content of this document as a string. WARNING: the entire content of the document, however big, is returned in a string, so make sure that this operation is not used in a situation where the content of the document is too big for the amount of memory available to the process in which the operation is running.
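A cautious sketch following the warning above; the docstore path is hypothetical, and getDocLength(true) is used to check the size in bytes before reading:
    var doc = getDocByPath("/public_html/samples/notes.txt");   // hypothetical docstore path
    if (doc != null) {
        if (doc.getDocLength(true) < 1000000) {                 // size in bytes: guard before loading
            var sContent = doc.getDocContentAsString();         // whole document is read into memory
        }
    }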
getDocLastModifiedTimeStamp script operation
Returns the date/time this document was last modified.
getDocLength script operation
Returns the length of the document in kilobytes. If bBytes is true, the value is returned in bytes instead of kilobytes. This is useful when querying the size of smaller files.
getDocListByPaths script operation
Returns the document at each specified path.
getDocPath script operation
Return the path to the given document.
getDocStoreDirectoriesInDirectory script operation
Return the list of paths of directories under the given directory.
getDocStoreFilesInDirectory script operation
Returns the list of the paths of the documents under the given directory.
getDocStoreSubtreeList script operation
Return the list of documents under the given path.
getDouble script operation
Get the value of the designated column in the current row of this SearchResultSet as a double.
getDynamicSelectionQueryString script operation
Returns the query string for this dynamic selection.
getEditableAttributeGroups script operation
Gets the editable attribute groups of a workflow step. The result is an array of attribute collection names. The optional parameter subViewType can be 'ITEM_LOCATION', 'BULK_EDIT', 'ITEM_EDIT', 'CATEGORY_EDIT', or 'CATEGORY_BULK_EDIT'. The optional parameter locationHierarchyName is required when the subViewType is 'ITEM_LOCATION'.
getEntries script operation
Returns the entry set for the entries currently in the collaboration area.
getEntriesInStep script operation
Returns the entry set for the entries currently in the workflow step of the collaboration area. The format of the stepPath is Stepname.
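For example, a minimal sketch; the collaboration area name "Item Enrichment", the step name "Review", and the method-style invocation are hypothetical:
    var ca = getColAreaByName("Item Enrichment");                            // hypothetical name
    out.writeln("In step: " + ca.getCountOfEntriesInColAreaStep("Review"));  // count only
    var entrySet = ca.getEntriesInStep("Review");                            // fetch the entries
    out.writeln("Fetched: " + entrySet.getEntrySetSize());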
getEntriesInfoXMLInStep script operation
Returns an XML representation of workflow step entries for the given step name. The dateFormat is used for formatting the date values. The attribute information of
attributes present in the hmAttrPaths are included in the XML.
getEntry script operation
Returns the Entry for the given EntryNode.
getEntryAttrib script operation
Returns the value of the attribute sAttribPath (spec_name/attribute_name) of this entry, but only when the attribute is included in the View or Attribute Collection; otherwise it returns null.
getEntryAttribValues script operation
Returns the values of the multi-value attribute given by sAttribPath (spec_name/attribute_name) of this entry.
getEntryAttribs script operation
Returns a HashMap that maps the paths (spec_name/attribute_name) of attributes to their respective values. The attributes returned are limited to the attribute groups specified by setAttributeGroupsToProcess(). This was a performance optimization in previous releases, but it is no longer required because the whole entry is now always fetched from the database; previously, each attribute in the item was fetched with a separate database call. Use the script operations getEntryAttribsList() or getCtgItemAttribsForKeys() instead.
getEntryAttribsList script operation
Returns an array of String containing the paths (spec_name/attribute_name) of all the attributes of this entry.
getEntryByPrimaryKey script operation
Gets the entry given the primary key.
getEntryChangedData script operation
Returns an EntryChangedData object encapsulating the changes in data and locations between two entries at the time at which the EntryChangedData object is created. Once created, the EntryChangedData object is not updated with any subsequent changes to the entries. Note, this script operation is very CPU intensive on large items (items with many locations and many attributes); see the script operation getEntryChangedDataSinceLastSave.
getEntryChangedDataSinceLastSave script operation
Returns an EntryChangedData object encapsulating the changes in data and locations between this entry and the last saved entry (including save as draft). Note, this script operation should be much faster than getEntryChangedData(oldEntry, newEntry).
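A hedged sketch of inspecting changed data after a save; it assumes 'entry' is in scope (for example, in a post-save script) and omits the location argument so the comparison covers global attributes:
    var changed = entry.getEntryChangedDataSinceLastSave();
    var aModified = changed.getModifiedAttributePathsNewEntry();   // no location given: global attributes
    var aAdded = changed.getAddedAttributePathsNewEntry();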
getEntryContainer script operation
Gets the holding container for this Entry. Could be a catalog or category tree. Use getContainerType to determine which one.
getEntryFromWorkEntry script operation
Get the Entry held by this WorkEntry.
getEntryId script operation
Returns this entry's id.
getEntryInStep script operation
Returns one entry that is currently in the workflow step of the collaboration area, if there is at least one. If there is more than one entry currently in the step, then it
is undetermined which particular one will be returned by a call to this operation. The format of the stepPath is Stepname.
getEntryMergeState script operation
Returns the number of copies of an entry that have already reached a merge step. Returns zero if stepPath does not correspond to a merge step, or if the entry was
not found in the step.
getEntryNode script operation
Returns the entry node with the given path relative to the given entry node. If the path is not already built, NULL is returned. Use the Entry::setEntryAttrib script operation to create a path that might not exist.
getEntryNodeChildren script operation
Return the children of this entry node.
getEntryNodeExactPath script operation
Returns the exact path of this entry node - the following is always true: rootNode.getEntryNode(entryNode.getEntryNodeExactPath()) == entryNode
getEntryNodeInheritedValue script operation
If this entry node inherits its value, return the value. Otherwise, return null.
getEntryNodeParent script operation
Return the parent of this entry node. If it is the root node, NULL is returned.
getEntryNodePath script operation
Returns the Spec Node path of this entry node, NOT the relative path of this attr.
getEntryNodeType script operation
Return "V" = value, "G" = Grouping or top level of spec directory, "M" = Multi-directory (contains multiple occurrences of values or groupings)
getEntryNodeValue script operation
Return the value of this entry node.
getEntryNodes script operation
Return the entry nodes matching the path sPath.
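For illustration, a minimal sketch of entry node traversal; 'entry' is assumed from the script context and the attribute path is hypothetical (getRootEntryNode is described later in this list):
    var rootNode = entry.getRootEntryNode();                 // root of the entry's node tree
    var node = rootNode.getEntryNode("Product Spec/Name");   // hypothetical path; null if not yet built
    if (node != null) {
        out.writeln(node.getEntryNodeType() + " = " + node.getEntryNodeValue());
    }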
getEntryPosition script operation
Allows users to get the position of a child Entry within a parent category. This will only work in an ordered catalog. Returns the position (if it works) or null (if it fails).
getEntryRelatedItemInfo script operation
Returns an array of length 2 that contains: [0]=Related Item's Catalog's Name, [1]=Related Item's Primary Key, for the related item represented by the given
internal unique item ID, at the browsing version of the catalog of the given entry.
getEntryRelationshipAttrib script operation
Given a relationship attribute path, returns an array of length 2 containing: [0]=Related Item's Catalog's Name, [1]=Related Item's Primary Key, for the related item.
If the attribute, sAttribPath, does not exist or the attribute, sAttribPath, does exist but does not have a value then null is returned. If the attribute, sAttribPath, does
exist and does have a value but it is not of the type "relationship" then a LocalizedException exception is thrown.
getEntrySaveResult script operation
Returns the result of the last save of this entry. Returns one of the following strings {ADDED,DELETED,MODIFIED,UNKNOWN}.
getEntrySetForPrimaryKeys script operation
Returns an EntrySet of the entries in this container for the given primary keys. Set bOptimize to true if you do not plan on changing the entries. The entry set is then
optimized, however, the entry sets items do not keep track of changed attributes.
getEntrySetSize script operation
Returns the number of entries in an entry set.
getEntryStatus script operation
Returns the status of the entry. This script operation returns a valid result when invoked directly after an item save, for example in a Post Save script. When used in other scripts without the context of an item save, for example in a workflow script, this script function returns "null".
getEntryXMLRepresentation script operation
Returns the XML representation of this entry specific to the given spec. The XML representation can be used as input to the WebSphere® Portal Server integration
and is understood by the WebSphere Portal Server XML parser. The optional parameter "addNameSpace" allows the user to specify that the XML is returned in a
format that can immediately be read in using the setEntryAttributesFromXMLRepresentation function. The default of false for the "addNameSpace" parameter allows the user to add a prefix to, or append to, the XML at the correct location as required. The date format is the pattern by which dates should be converted. A Simple
Date Format is normally used. If "aLocales" is specified only values in the given locales are mapped into XML. If AttrGroup[] is specified only attributes belonging to
one of the AttrGroup's are mapped into XML.
getErrorsForLocation script operation
Returns the validation errors for the current location. There is at least one validation error.
getExactAttributePath script operation
Returns the exact attribute path of the entry node that is associated with the validation error.
getExcelCell script operation
Returns the value of the cell at given row and column indexes as a String value.
getExcelCellEncoding script operation
Returns the encoding of the cell.
getExcelCellType script operation
Returns the type of this cell. Values can be NUMERIC, STRING, DATE, BLANK, UNKNOWN.
getExcelRow script operation
Returns the row at the specified index. Note: Rows are zero based.
getExcelSheet script operation
Returns a sheet from the workbook based on the arguments passed. If iSheetNumber is passed, then the sheet having the index specified by this argument is
returned. If sSheetName is passed, then the sheet is retrieved by the name.
getExcelSheets script operation
Returns a hashmap of Excel sheets within the workbook. The hashmap is indexed by the sheet name.
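A hedged sketch of reading a sheet; 'book' is an Excel workbook object assumed to have been obtained elsewhere, and the row/column argument order of getExcelCell is an assumption:
    var sheet = book.getExcelSheet(0);                // first sheet, by zero-based index
    var iRow = sheet.getFirstRowNum();
    while (iRow <= sheet.getLastRowNum()) {
        out.writeln(sheet.getExcelCell(iRow, 0));     // first column of each row, as a String
        iRow = iRow + 1;
    }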
getExitValue script operation
Returns the exit value, if set, of an entry in a workflow step. Can be called in an OUT() or TIMEOUT() step script function. Can also be called in pre-processing, post-processing, or post-save scripts. If the entry's exitValue is not set, this script operation returns null. This script operation returns a wrong exit value if called from a previous step where more than one exit value is defined. If multiple exit values are set for that particular step, calling the getExitValue() script operation from IN() returns an arbitrary value out of its multiple exit values.
getExportItemSets script operation
Returns an array of ItemSets being exported.
getExportItemsCount script operation
Returns the number of items being exported.
getFirstCellNum script operation
Returns the index of the first physical cell in this row. Note that columns are zero based.
getFirstRowNum script operation
Returns the index of the first physical row.
getFlatEntryNodes script operation
Returns an array of EntryNodes for this entry.
getFlatEntryNodesOf script operation
Returns an array of all the entry nodes under the given entry node, down one level. For instance, if the given node has two children, A and B, and A and B also have children, only the entry nodes A and B are returned, not their children.
getFlatPrimaryEntryNodes script operation
Returns an array of entry nodes that are defined in the primary spec of this entry.
getFlatSecondaryEntryNodes script operation
Returns an array of entry nodes that are defined in the secondary specs of this entry.
getFloat script operation
Get the value of the designated column in the current row of this SearchResultSet as a float.
getFtp script operation
Retrieves a file from a system using FTP.
getFullHTTPResponse script operation
Returns a HashMap (with RESPONSE_READER and RESPONSE_HEADER_FIELDS) for the response to posting parameters or a specific type of document, as specified by the doc and sContentType parameters, against the server at the specified URL. The response is optionally stored into a document in the docstore at the path specified by the sDocStorePath parameter. Close the response reader when you are done using it.
getFullPaths script operation
Returns the full path names of this Category, using the sDelimiter as the delimiter if provided. The full path returned includes the root category name if
bWithRootName is true.
getFunctionByName script operation
Builds the function object for the function sFunctionName in this script object.
getGlobalErrors script operation
Returns the validation errors for the global attributes. Returns an empty array if no such errors exist.
getHierarchy script operation
Get the value of the designated column in the current row of this SearchResultSet as a CategoryTree.
getHierarchyNameToExport script operation
Returns the last value set using the setHierarchyByNameToExport script operation.
getHierarchyNodeSetForSelection script operation
Returns the hierarchy nodes in the selection as a HierarchyNodeSet.
getHierarchyType script operation
Returns the type of the hierarchy. Types can be one of the following: CATEGORY_TREE, ORGANIZATION_TREE, COLLABORATION_AREA.
getHrefForDocPath script operation
Return an absolute path for the document with path sDocPath. This can be used in an HTML reference to provide a link to the document.
getHTTPResponse script operation
Returns a reader for the response for posting parameters against the server at the specified URL. Use the hmRequestProperties parameter to send specific header
information. Close the reader when you are done using it.
getImplScriptPath script operation
Returns the docstore path where the implementation script for this Web Service is stored.
getImplclass script operation
Returns the fully qualified name of the implementation class of this Web Service.
getIndexesOfEntriesHavingState script operation
Get the current indexes of the worklist entries having a particular state.
getInt script operation
Get the value of the designated column in the current row of this SearchResultSet as an int.
getItem script operation
Get the value of the designated column in the current row of this SearchResultSet as an Item.
getItemBySku script operation
This script operation is deprecated.
getItemLocationAttrib script operation
Gets a location attribute value from the given item.
getItemPrimaryKeysForCategory script operation
Returns an array of Strings containing the primary keys of the items in this category. If the optional Boolean 'ordered' is set to true, the operation returns the ordered children of this category when the catalog is set up to use ordering.
getItemRootEntryNodeForLocation script operation
Returns the root EntryNode for this item at the given location.
getItemRootEntryNodesHavingLocationData script operation
Returns a list of EntryNodes, each is a root entryNode per location that has data defined.
getItemSecondarySpecsForCategory script operation
Returns the item secondary specs associated with this category.
getItemSetForCatalog script operation
Returns an ItemSet of the items in this catalog.
getItemSetForCategory script operation
Returns an ItemSet of the items in this category. If the optional Boolean 'ordered' is set to true, the operation returns the ordered children of this category when the catalog is set up to use ordering.
getItemSetForPrimaryKeys script operation
Returns an ItemSet of the items in this catalog for the given primary keys. Set bOptimize to true if you do not plan on changing the items; the item set is then optimized, but these items do not keep track of changed attributes.
getItemSetForSelection script operation
Return the items in the selection as an ItemSet.
getItemSetForUnassigned script operation
Returns an ItemSet of the items in this catalog which are not assigned to any of the categories of given category tree.
getItemSetSize script operation
Returns the number of items in an item set.
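For example, a hedged sketch of working with an item set; the array literal, the read-only flag, and the getNextItem() iterator (which is not listed in this section) are assumptions:
    var aKeys = ["SKU-0001", "SKU-0002"];                  // hypothetical primary keys
    var set = ctg.getItemSetForPrimaryKeys(aKeys, true);   // 'ctg' assumed; true = read-only optimization
    out.writeln("Items found: " + set.getItemSetSize());
    var item = set.getNextItem();                          // assumed iterator, not listed in this section
    while (item != null) {
        out.writeln(item.getCtgItemPrimaryKey());
        item = set.getNextItem();
    }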
getItemsInCategory script operation
Returns an array of the items in this category. If the optional Boolean 'ordered' is set to true, the operation returns the ordered children of this category when the catalog is set up to use ordering.
getItemSubset script operation
Returns an ItemSet which is a subset cloned from the supplied ItemSet, restricted by start and, optionally, end index positions. A start point of -1 is interpreted as 0. If the end index is omitted, all items are retrieved from the start point.
getItemsInStepByAttribute script operation
Returns the set of items in the given workflow step which contain the given attribute path/value pair. By default, isAscending is taken as true, startIndex as 0, and endIndex as the total number of resultant items. categoryPKList is an optional argument containing a comma-separated list of category primary keys, for example: '1','3','7'. It is used for filtering items.
getItemsInStepBySelection script operation
Returns the set of entries in the given workflow step that match the given selection. The results are sorted (by default, isAscending is taken as true), and startIndex and endIndex specify which part of the set of results is to be returned. The hashmap returned has two entries: the first has the key "ITEMSET" and contains a hashmap of the selected entries; the second is keyed by "ITEMCOUNT" and contains the number of entries that were selected.
getItemStatus script operation
Returns the status of an item.
getItemUsingEntryRelationshipAttrib script operation
Returns the related item object for the given relationship attribute path. An exception is thrown if the attribute does not exist or it is not of type relationship.
getItemXMLRepresentation script operation
Returns the XML representation of this item which is specific to the given spec. This representation can be consumed by the XML parser in the WebSphere Portal Server portion of the IBM Product Master and WebSphere Portal Server integration.
getKeysFromValues script operation
Reverse lookup of keys using values from the lookup table. The values can either be Paths in the Spec or the column number of the lookup table starting from 0 and
not including the Key column.
getLastCellNum script operation
Returns the index of the last physical cell in this row.
getLastRowNum script operation
Returns the index of the last physical row.
getLdapAttributeType script operation
Retrieves the attribute type or the name of an LdapAttribute Object.
getLdapAttributeValue script operation
Retrieves the attribute value of an LdapAttribute Object.
getLdapAttributes script operation
Retrieves the LdapAttribute Objects for this LdapEntry.
getLdapDistinguishedName script operation
Retrieves the distinguished name for an LdapEntry as an LdapAttribute Object.
getLdapEntries script operation
Retrieves the LdapEntry Objects from the LDAP entry set.
getLdapEntryDn script operation
Returns the distinguished name field associated with an LDAP authenticated User.
getLdapObjectclass script operation
Retrieves the name of a LdapObjectclass Object.
getLdapObjectclasses script operation
Retrieves the LdapObjectclass Objects for this LdapEntry.
getLdapOperation script operation
Retrieves the LdapOperation detail strings for this LdapEntry.
getLdapServerUrl script operation
Returns the URL of the server providing this user's LDAP authentication.
getLinkedItemAttributeValueForNode script operation
Returns the attribute value of the linked item associated with the specified node.
getLinkedItemForNode script operation
Returns the linked item associated with the specified node.
getLinkedItems script operation
Returns a HashMap for each item linked to this item's primary key. Keys in the HashMap include "item_key","item_id","catalog_id", and "catalog_name".
getListOfCtgViewNames script operation
Returns an array of view names available for this catalog. An entry with '[System Default]' is always included as the first entry.
getLkpByName script operation
Returns the lookup table object with the corresponding name. By default the lookup table is read-only, but can be made mutable by setting the isReadOnly
parameter to false.
getLkpId script operation
Return the ID of this lookup table.
getLkpKeys script operation
Return the keys of this lookup table.
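A minimal sketch; the lookup table name "Country Codes" is hypothetical:
    var lkp = getLkpByName("Country Codes");    // read-only by default; pass isReadOnly=false for a mutable table
    out.writeln("Lookup id: " + lkp.getLkpId());
    var aKeys = lkp.getLkpKeys();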
getLocaleCode script operation
Returns the 5 letter code (2 letter language code + underscore + 2 letter country/region code) for the given locale.
getLocaleDisplayName script operation
Returns a description of the locale suitable for display.
getLocaleNode script operation
Returns the localized node for the supplied locale.
getLocales script operation
Returns the locales associated with the spec.
getLocalesForRole script operation
Gets the locales that this role has access to for all containers.
getLocalesOfAttrGroup script operation
Returns the locales of the Attribute Collection as a single String of comma-separated values.
getLocalizedSpecNames script operation
Returns all the specs that are localized.
getLocation script operation
If the caller object is the entry node, return the location (category) or null if a global entry node; if the caller object is a SearchResultSet, return the value of the
given column in the current row of this SearchResultSet object as a Location (Category).
getLocationErrors script operation
Returns the locations errors for locations having validation errors. Will return an empty array if no such errors exists.
getLocationForErrors script operation
Returns the category associated with the current location errors.
getLocationsHavingData script operation
Returns the category set of locations for which this entry has location specific attributes defined under the specified location or category tree.
getLocationHierarchyNames script operation
Returns the list of location hierarchy names defined in the given workflow step. The optional parameter canModifyAvailability filters the list of location hierarchy
names based on the 'modify location hierarchy availability' flag. If not specified, no filtering takes place.
getLocationPathForInheritedValue script operation
Returns the path for the location from which this EntryNode inherits, or null if it does not inherit its value. The given delimiter is used.
getLocationsAddedAvailability script operation
Returns all the locations in the given category tree that are recorded as added in the EntryChangedData object. See the getEntryChangedData and getEntryChangedDataSinceLastSave script operations for information on the contents of the EntryChangedData object.
getLocationsChangedToHaveData script operation
Returns all the locations in the given category tree that: (1) are available in both the old entry and the new entry from which this EntryChangedData object was created, and (2) contain no data in the old entry but do contain data in the new entry. Note, Override is considered as having data, and Inherit is considered as not having data. See the getEntryChangedData and getEntryChangedDataSinceLastSave script operations for information on the contents of the EntryChangedData object.
getLocationsChangedToHaveNoData script operation
Returns all the locations in the given category tree that: (1) are available in both the old entry and the new entry from which this EntryChangedData object was created, and (2) contain no data in the new entry but do contain data in the old entry. Note, Override is considered as having data, and Inherit is considered as not having data. See the getEntryChangedData and getEntryChangedDataSinceLastSave script operations for information on the contents of the EntryChangedData object.
getLocationsHavingChangedData script operation
Returns all the locations in the given category tree that: (1) are available in both the old entry and the new entry from which this EntryChangedData object was created, and (2) have at least one attribute path for which the old and new entries contain different values. Note that this operation returns a superset of all locations returned by the getLocationsChangedToHaveData and getLocationsChangedToHaveNoData script operations.
getLocationsRemovedAvailability script operation
Returns all the locations in the given category tree that are recorded as removed in the EntryChangedData object. See the getEntryChangedData and getEntryChangedDataSinceLastSave script operations for information on the contents of the EntryChangedData object.
getLogger script operation
Returns a logger (loggers are in the system log directory with the given name).
getLoginString script operation
Returns the URL string needed to log in automatically to the given URL as the current user. If you are an admin, you can generate a login string for another user by passing the username as an extra parameter. Note that the URL should not include the server name/port and should start with '/'. If an error occurs, a null string is returned.
getLong script operation
Get the value of the designated column in the current row of this SearchResultSet as a long.
getMappedCategories script operation
Returns the categories from the given category tree, if any, to which this category is mapped.
getMarkedEntries script operation
Return an entry set containing the marked entries in this work entry list with indexes between start and end.
getMemorySummary script operation
Invokes the garbage collector, sleeps for 5 seconds and then returns a string summarizing memory usage.
getMessageFromQueue script operation
The nth oldest message is returned where the index argument defines what n is. For example, getMessageFromQueue("Queue1", 2) would return the second oldest
message from the queue with name "Queue1".
getModifiedAttributePathsNewEntry script operation
Returns the paths of all attributes in the given location that (1) are present in both the old entry and the new entry from which this EntryChangedData object was
created, and (2) contain different data in the old and new entries. It is possible for an attribute to have different attribute paths across the old entry and the new
entry, for example because a multi-occurrence sibling has been deleted. In this case, we return the attribute path for the new entry. If LOCATION is not specified or
is null, then the comparison is done for global attributes.
getModifiedAttributePathsOldEntry script operation
Returns the paths of all attributes in the given location that (1) are present in both the old entry and the new entry from which this EntryChangedData object was
created, and (2) contain different data in the old and new entries. It is possible for the same attribute to have different attribute paths across the old entry and the
new entry, for example because a multi-occurrence sibling has been deleted. In this case, we return the attribute path for the old entry. If the location is not
specified or is null, then the comparison is done for global attributes.
getModifyLocationHierarchyAvailability script operation
Returns the 'modify location hierarchy availability' flag for a given location hierarchy in the given workflow step. The optional parameter locationHierarchyName is
required when the subViewType is 'ITEM_LOCATION'.
getMsgAppResponse script operation
Initiates a request to get a response to a message.
getMsgAppResponseDoc script operation
Returns the Doc object containing the response to a message.
getMsgAttachments script operation
Returns a HashMap of attachment names to attachments for the given message.
getMsgByMsgId script operation
Returns the message object with the specified message id.
getMsgDoc script operation
Returns the Doc object for the message.
getMsgId script operation
Returns the generated unique ID for the message.
getMsgProtocolResponseDoc script operation
Returns the Doc object for the message.
getMsgQueue script operation
Returns the MsgQueue object for the message.
getMsgQueueName script operation
Returns the name of this message queue.
getName script operation
Returns the name of this Web Service.
getNameFromPath script operation
If str contains "/" returns the substring of str after the last "/" char exclusively, otherwise returns the original string. If a delimiter is specified then this is used rather
than '/'. If the delimiter is more than one character only the first character is taken as the delimiter.
getNbColumns script operation
Returns the number of physical columns in this sheet.
getNbRows script operation
Returns the number of physical rows in this sheet.
getNewCtgTab script operation
Builds a new container tabbed view object with the given name and returns it. The tabbed view needs to be added to a catalog view in order to save it.
getNextWflStepsForExitValue script operation
Returns the names of the next steps for a particular exitValue of a WorkflowStep.
getNodeAttributeValue script operation
Returns the value of this node's attribute, for example, MAXLENGTH, MAX_OCCURRENCE, MIN_OCCURRENCE, HELP_URL, TYPE, and so on.
getNodeAttributeValues script operation
Returns the values of this node's attributes in a HashMap, for example, MAXLENGTH, MIN_OCCURRENCE, STRING_ENUMERATION, and so on.
getNodeByPath script operation
Returns the node object for the given path in this spec.
getNodeChildren script operation
Returns the children for the node.
getNodeDisplayName script operation
Returns the display name of a locale node. Optionally, if the node is the parent of the locale nodes, pass in the locale for a particular locale node display name. If it is not valid for the node to have a display name, null is returned.
getNodeFromEntryNode script operation
Returns the Node object for this entry node.
getNodeId script operation
Returns the internal ID of this node that corresponds to the value stored in the database.
getNodeLocale script operation
Returns the locale object for this node if it is a locale specific node.
getNodeLookupTableName script operation
Returns the name of the Lookup Table associated with this node, if one exists.
getNodeName script operation
Returns the name of this node.
getNodePath script operation
Returns the path of this node.
getNodeSpec script operation
Returns the spec object for this node.
getNumericCellValue script operation
Returns the value of this numeric cell as a double value. Use this function only if it has been determined, for example by using String ExcelCell::getExcelCellType(), that the cell is a numeric type.
getOriginalEntry script operation
Returns the last saved entry as present in the database. If the entry is in a workflow step, the last saved draft entry is returned. If the entry is new or deleted, this
operation returns null.
getOriginalItem script operation
Returns the version of the item before modification.
getPageLayoutByName script operation
Returns the page layout object with the corresponding name.
getPageURL script operation
Return the URL for the requested page. The required objects are defined by the page itself which is limited to the following choices: "ITEM_LIST", "ITEM",
"SEARCH", "COLAREA_STEP", "COLAREA_ENTRY".
getParentPath script operation
If the given string contains "/" returns the substring of str up to the last "/" char exclusively, otherwise returns the empty string.
getPathValue script operation
Returns the path attribute value of this category. Note, this is not the full path.
getWPCDBConnection script operation
Gets a non-autoCommit SQL connection using the DB context.
getWPCDBContext script operation
Gets the database context object from the current austinContext.
getPipeDelimitedCSVRepresentation script operation
Returns a CSV representation of this entry with fields that are name value pairs separated by the pipe character. All attribute values have the exact path of the
attribute with occurrence numbers as the name. All category paths have CATEGORY or PATH as the name for items and categories respectively.
getPossibleEntryNodeValues script operation
Returns the possible values of a string enumeration, number enumeration, timezone, or lookup table entry node. For other types of entry nodes, an empty array is returned.
getPrimaryCategoryTree script operation
Returns the primary category tree of this catalog.
getPrimaryKey script operation
Returns the primary key value of this entry.
getPrimaryKeyNode script operation
Returns the primary-key node of this primary spec. If this is not a primary spec, returns null.
getProductCenterURL script operation
Returns the property trigo_web_url defined in common.properties, which holds the fully-qualified URL, including port number, of the web site where users should point their browsers to access this instance of Product Center.
getProtocol script operation
Returns the protocol for this Web Service.
getReportByName script operation
Returns the report with the specified name if one exists, and null otherwise.
getRequiredAttributeGroups script operation
Gets the required attribute groups of a workflow step. The result is an array of attribute collection names. The optional parameter subViewType can be
'ITEM_LOCATION', 'BULK_EDIT', 'ITEM_EDIT', 'CATEGORY_EDIT', or 'CATEGORY_BULK_EDIT'. The optional parameter locationHierarchyName is required when the
subViewType is 'ITEM_LOCATION'.
getReservedEntriesInStep script operation
Returns the set of entries that are reserved entries currently in the workflow step of the collaboration area. The format of the stepPath is Stepname.
getRidOfRootName script operation
If str contains '/', the string is split at the first '/' character and the substring following it is returned.
getRoleByName script operation
Returns a Role object for the specified role.
getRoleDescription script operation
Return the description of the role.
getRoleName script operation
Return the name of the role.
getRoles script operation
Returns all roles for the current company.
getRolesForCompany script operation
Returns all roles of the given company.
getRootEntryNode script operation
Return the root entry node for this entry.
getSearchTemplateByName script operation
Returns the search template with the given name, or null if no such template exists.
getSearchTemplateName script operation
Return the name of this search template.
getSecondarySpecsForCategory script operation
Returns the secondary specs that define this category's attributes.
getSelectionAccessControlGroupName script operation
Returns the Access Control Group name of this selection.
getSelectionByName script operation
Return the named selection.
getSelectionCatalog script operation
Returns the selection's catalog.
getSelectionHierarchy script operation
Returns the selection's hierarchy.
getSelectionHierarchyNodeCount script operation
Returns the number of hierarchy nodes in a selection. Returns 0 for dynamic selections.
getSelectionItemCount script operation
Returns the number of items in a selection.
getSelectionName script operation
Returns the name of the selection.
getSelectionNamesList script operation
Returns the list of names of available selections for the catalog.
getSequenceByName script operation
Returns the named sequence object. The name of the sequence contains these parts, in this order, separated by underscores: catalog name or category tree name,
the string CTG or CATTREE, and the path of the node that the sequence is defined for.
getSequenceCurrentValue script operation
Returns the current value of the sequence.
getSequenceNextValue script operation
Returns the next value of the sequence.
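For example, a minimal sketch of reading a sequence; the catalog name, node path, and resulting sequence name are hypothetical, and the surrounding script syntax is illustrative:
  // Sequence defined for the SKU node of a "Products" catalog; the name
  // follows the catalogName_CTG_nodePath scheme described above.
  var seq = getSequenceByName("Products_CTG_Products Spec/SKU");
  out.println("current value: " + seq.getSequenceCurrentValue());
  out.println("next value: " + seq.getSequenceNextValue());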
getScriptByPath script operation
Build the script object for the script stored at sScriptPath.
getScriptContextValue script operation
Returns the value of the variable named sVariableName. Use this script API method to retrieve the values of variables set with the collaborative MDM script API setScriptContextValue() or the collaborative MDM Java API method setCustomParameter(). This method returns an object, and user code is responsible for proper handling of the returned value. setScriptContextValue() and setCustomParameter() place values in the script context, and this method retrieves them from the script context. This method is useful when values need to be transferred between script API and Java API extension-point code. Currently, this script operation and the getCustom methods can be used only with class arguments that have access to the item; examples of such class arguments are the script sandbox or PostItemSaveFunctionArguments.
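A minimal sketch of carrying a value across extension points with these operations; the variable name and value are hypothetical, and the syntax is illustrative:
  // In one extension point, place a value into the script context.
  setScriptContextValue("enrichmentStatus", "PENDING");
  // In a later extension point (for example, a post-save script), retrieve
  // the object; user code must handle the returned value appropriately.
  var status = getScriptContextValue("enrichmentStatus");
  if (status != null)
  {
    out.println("Status carried across extension points: " + status);
  }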
getScriptExecutionMode script operation
Return the current script execution mode.
getSourceCatalog script operation
Returns the source catalog for catalog to catalog exports.
getSourceEntrySetForRelatedEntries script operation
Returns an EntrySet of all the entries that have an attribute related to this entry. If filterContainer is specified only the entries held in that container are returned.
getSpec script operation
Get the value of the designated column in the current row of this SearchResultSet as a Spec.
getSpecAttribNames script operation
Returns the names of each attribute (node) specified in the spec.
getSpecAttribPaths script operation
Returns the paths of each attribute (node) specified in the spec.
getSpecByName script operation
Returns the spec object with the corresponding name. By default, a mutable spec is returned. If an immutable spec is needed, specify the optional boolean parameter bImmutable as true. Note that only mutable specs can be modified.
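For example, with a hypothetical spec name:
  // Default call returns a mutable spec that can be modified.
  var spec = getSpecByName("Product Primary Spec");
  // Pass true for the optional bImmutable parameter to get a read-only spec.
  var readOnlySpec = getSpecByName("Product Primary Spec", true);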
getSpecMapByName script operation
Returns the specmap object with the corresponding name.
getSpecMapDstObject script operation
Returns the destination object of this spec map.
getSpecMapSrcObject script operation
Returns the source object of this specmap.
getSpecMultiOccurAttributePaths script operation
Returns the multi occurrence attribute paths for this spec.
getSpecName script operation
Returns the name of this spec.
getSpecNameList script operation
Returns the names of the specs that match the given filters. Valid filters: ("PATTERN", String); ("CONTAINER", Container object) returns only specs attached to the container; ("SPECTYPE", comma-separated list from {"PRIMARY_SPEC", "SECONDARY_SPEC", "LOOKUPTABLE_SPEC", "FILE_SPEC"}); ("LOCALIZED", String {YES, NO}) returns only localized or only non-localized specs.
getSpecNodes script operation
Returns a map of node paths to node objects for this spec.
getSpecPrimaryKeyAttributePath script operation
Returns the primary key attribute path for this spec. Returns null if the path is not valid for this spec.
getSpecSequenceAttributePaths script operation
Returns the sequence attribute paths for this spec.
getSpecType script operation
Returns the type of this spec.
getSpecUniqueAttributePaths script operation
Returns the unique attribute paths for this spec.
getStepEntryTimeout script operation
Gets the timeout value on an entry. For the operation to succeed, the entry must be in the specified collaboration area and in the specified step. If these conditions are not met, nothing is done (no timeout set); no exception is thrown, and there is no change to the workflow.
getStepsForEntry script operation
Returns all the steps that the entry is currently in for the given collaboration area. The return value is a string array containing the step paths. The entry should be retrieved by using the collaboration area as the container.
getStoreIncoming script operation
Returns whether incoming messages for this Web Service are stored.
getStoreOutgoing script operation
Returns whether outgoing messages for this Web Service are stored.
getString script operation
Get the value of the designated column in the current row of this SearchResultSet as a string.
getStringCellValue script operation
Returns the value of this text cell as a String. Use this function only if the cell is known to be a string type, for example as determined by String ExcelCell::getExcelCellType().
getStringValueForClassMember script operation
This operation uses Java reflection mechanisms to return the value of the specified static member for the named class as a string.
getStyle script operation
Returns the style for this Web Service.
getSystemDefaultEncoding script operation
Returns the value of the system's default encoding.
getSystemMessageById script operation
Given a message ID (and locale), returns the description of the message.
getSystemMessageByName script operation
Given the message name (and locale), returns the description of the message.
getTabRowPath script operation
Returns the attribute path for this tabbed view row.
getTime script operation
Returns the number of seconds since January 1, 1970, 00:00:00 GMT represented by this Date object.
getTimerElapsedTime script operation
Return the time elapsed between start and stop.
getTimeZoneDesc script operation
Gets the description of the time zone that is offset by the given number of minutes.
getTimeZoneOffsetFromDBValue script operation
Get time zone from the db value and return the offset from GMT in minutes.
getTypeToExport script operation
Returns the last object type set with setTypeToExport.
getTypesToExport script operation
Returns all the object types that were set using the setTypeToExport script operation.
getUrl script operation
Returns the URL for this Web Service.
getUserAddress script operation
Return the User's Address.
getUserByUsername script operation
Returns the User object for the given User Name and sCmpCode. If sCmpCode is not given, company code is taken from the current context of script execution.
getUserCompanyCode script operation
Return the User's Company Code.
getUserCompanyName script operation
Return the User's Company Name.
getUserDefinedLog script operation
Returns the user defined log object having the given name, for this container.
getUserEmail script operation
Return the User's Email Address.
getUserEnabled script operation
Returns whether the user is enabled.
getUserFax script operation
Return the User's Fax Number.
getUserFirstName script operation
Return the User's First Name.
getUserLastName script operation
Return the User's Last Name.
getUserLdapEnabled script operation
Returns whether the user is an LDAP user.
getUserLocale script operation
Returns the locale that is selected by the user for browsing content.
getUserName script operation
Return the User Name.
getUsernameForReservedEntryInStep script operation
Returns the username of the user who reserved the entry in the workflow step in the collaboration area.
getUserOrganizations script operation
Return the User's Organizations.
getUserPhone script operation
Return the User's Phone Number.
getUserRoles script operation
Return the User's Roles.
getUserTimeZoneDesc script operation
Get the description of the user's current time zone setting in the language associated with the user.
getUserTimeZoneOffset script operation
Get the user's time zone setting, offset from GMT, in minutes.
getUserTitle script operation
Return the User's Title.
getUsers script operation
Returns all Users for the current company.
getUsersFromRole script operation
Returns all users with the given role.
getValidationErrorEntryNode script operation
Return the EntryNode associated with this ValidationError.
getValidationErrorMsg script operation
Return the error message associated with this ValidationError.
getValidationErrorType script operation
Return the type associated with this ValidationError.
getVersionDate script operation
Returns the date of this version.
getVersionName script operation
Returns the name of this version.
getVersionType script operation
Returns the type of this version.
getViewableAttributeGroups script operation
Gets the viewable attribute groups of a workflow step. The result is an array of attribute collection names. The optional parameter subViewType can be 'ITEM_LOCATION', 'BULK_EDIT', 'ITEM_EDIT', 'CATEGORY_EDIT', or 'CATEGORY_BULK_EDIT'. The optional parameter locationHierarchyName is required when the subViewType is 'ITEM_LOCATION'.
getWebServiceByName script operation
Returns the web service with the given name. If there is no such web service, returns null.
getWflAccessControlGroup script operation
Returns access control group name of the workflow.
getWflByName script operation
Returns the workflow if found otherwise null.
getWflContainerType script operation
Returns the workflow container type. The type can be either 'CATALOG' or 'CATEGORY_TREE'.
getWflDesc script operation
Returns the workflow description.
getWflFailureStep script operation
Returns the failure step of the workflow.
getWflInitialStep script operation
Returns the initial step of the workflow.
getWflName script operation
Returns the workflow name.
getWflStepAddEntries script operation
Returns value of 'allow import into step' flag.
getWflStepAttributeGroups script operation
Returns an array of editable and required attribute group names for the workflow step.
getWflStepByName script operation
Returns the named step of the workflow, or null if it does not exist.
getWflStepCategorizeEntries script operation
Returns value of 'allow recategorization' flag.
getWflStepDefaultScriptPath script operation
Gets the default path of the workflow script for the step: scripts/workflow/(workflow name)/(step name).
getWflStepDesc script operation
Returns the workflow step description.
getWflStepEntryNotification script operation
Gets the notification e-mails that are sent when the item enters the step.
getWflStepExitValues script operation
Retrieve the exit values of the WorkflowStep.
getWflStepName script operation
Returns the workflow step name.
getWflStepPaths script operation
Returns the paths for all the steps of the workflow.
getWflStepPerformerRoles script operation
Returns the list of user roles for the workflow step.
getWflStepPerformerUsers script operation
Returns the list of user names for the workflow step.
getWflStepReserveToEdit script operation
Returns the reserve for edit flag for a workflow step.
getWflSteps script operation
Returns the list of all the steps in the workflow.
getWflStepScriptPath script operation
Gets the path of the workflow script for the step. If no script is defined, returns null.
getWflStepsForRole script operation
Returns the workflow step paths, along with the number of entries in each step, for which the role has access.
getWflStepsForUser script operation
Returns workflow step paths, along with the number of entries in each step, to which the current user has access.
getWflStepTimeoutDate script operation
Gets the timeout date for the workflow step. If no timeout date was set, a null is returned.
getWflStepTimeoutDuration script operation
Gets the timeout duration for the workflow step. Returns an integer in seconds. If no timeout duration was set, 0 is returned.
getWflStepTimeoutNotification script operation
Gets the notification e-mails that are sent when the step times out.
getWflStepType script operation
Returns the workflow step type.
getWflStepView script operation
Returns a ctg view with a given subViewType for the given workflow step. The parameter subViewType can be 'ITEM_LOCATION', 'BULK_EDIT', 'ITEM_EDIT',
'CATEGORY_EDIT', or 'CATEGORY_BULK_EDIT'. The optional parameter locationHierarchyName is required when the subViewType is 'ITEM_LOCATION'.
getWflStepViews script operation
Returns an array of all the step views for the workflow step.
getWflStepsXML script operation
Returns an XML representation of workflow steps accessible by the given role name. If the role name is not provided the XML representation of workflow steps
accessible by the current user is returned.
getWflStepsXMLByAttrValue script operation
Returns an XML representation of the workflow steps that operate on the given attribute path/value. If the role name is not provided an XML representation of the
workflow steps accessible by the current user is returned.
getWflSuccessStep script operation
Returns the success step of the workflow.
getWidgetProperty script operation
Return the required property from this widget.
getWorkEntryAt script operation
Get the WorkEntry for the specified index in the WorkEntryList.
getWorkEntryListSize script operation
Gets the size of this work entry list.
getWorkEntryState script operation
Get the current state of this WorkEntry.
getWsddDocPath script operation
Returns the docstore path where the WSDD for this Web Service is stored.
getWsdlDocPath script operation
Returns the docstore path where the WSDL for this Web Service is stored.
getWsdlUrl script operation
Returns the WSDL URL for this Web Service.
getXMLNode script operation
Returns the xmlnode selected by sPath relative to this node.
getXMLNodeName script operation
Returns the name of the current XMLNode.
getXMLNodePath script operation
Returns the path of the current XMLNode. This path is not an XPath - it is the concatenation of all the names of the parent XMLNode's path, /, and the name of this
XMLNode.
getXMLNodeValue script operation
Returns the value of the current XMLNode.
getXMLNodeValues script operation
Returns the values of the XMLNode given by path.
getXMLNodes script operation
Returns the XMLNodes selected by sPath relative to this node.
getXMLRepresentation script operation
Returns an XML representation of this entry with the structure Nodes/Node/Name, Nodes/Node/Value, Paths/Path.
hasAccessToPrivilegeForEntry script operation
Checks if the entry has access to the given privilege.
hasCtgListPermission script operation
Returns true if the current user has permission to list this catalog, false otherwise.
hasCtrListPermission script operation
Returns true if the current user has permission to list this category tree, false otherwise.
hasInheritedValue script operation
Returns TRUE if this entry node WOULD inherit some non-null value if set to do so.
hasNonInheritedValue script operation
Returns TRUE if there is a non-null non-inherited value. The presence or absence of inherited values makes no difference.
if-else script operation
If cond is true, t-statements are executed; otherwise f-statements are executed.
importEnv script operation
Imports the content of the archive in the docstore at sDocFilePath into this company; returns the log as a string.
importXML script operation
Imports an XML file to a spec.
importXSD script operation
Imports an XML Schema Definition file (.XSD) to a spec, using the given parameters.
indexOf script operation
Returns the index within this string of the first occurrence of the specified match substring.
initializeKeyValueMapping script operation
Takes the values of the given array and builds a hashmap with the array values as the keys.
insertCtgTabAt script operation
Adds the container tabbed view object to the catalog view at the index position (zero-based). If the index is invalid, the tab is added to the end of the list.
insertNewVersion script operation
Adds a version called sName on this container.
insertUserDefinedLog script operation
Persist the new user defined log object to the database.
intersectValues script operation
Return the set-intersection of hm1, hm2, ... (only values are considered). At least two hashmaps should be entered.
inTransaction script operation
Returns true if executing within a transaction, otherwise returns false.
invalidate script operation
Invalidates this widget.
invocationCacheClear script operation
Clear the "invocation" cache.
invocationCacheGet script operation
Returns the value associated with the given key from the invocation cache, or null, if no value is found for that key. Semantics of this operation are the same as a
Java Map "get" operation. The Invocation Cache is local to a thread and has a scope that corresponds to either the life span of an HTTP request for application
server code, the length of a job for scheduler code, or the life of a workflow event for workflow code. Be careful not to run out of memory when using this cache in
the context of a long running scheduler job. You can choose to call invocationCacheClear or invocationCacheRemove to remove objects from the cache that are no
longer needed to reduce its memory footprint.
invocationCachePut script operation
Stores a value in the invocation cache for the given key. If a value is already present in the cache for the given key, the old value is overwritten and returned; otherwise null is returned. Semantics of this operation are the same as a Java Map "put" operation.
invocationCacheRemove script operation
Removes a value from the "invocation" cache with the given key.
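A sketch of the typical get/put/remove pattern; the cache key and the cached object are hypothetical:
  // Reuse an expensive lookup within the same request, job, or workflow event.
  var spec = invocationCacheGet("cached-spec");
  if (spec == null)
  {
    spec = getSpecByName("Product Primary Spec");
    invocationCachePut("cached-spec", spec);
  }
  // In a long-running job, release entries that are no longer needed.
  invocationCacheRemove("cached-spec");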
invoke script operation
Invokes this function object with the arguments arg1, arg2, and so on.
invokeSoapServer script operation
Invoke a soap server. Returns the return value of the SOAP operation call.
invokeSoapServerForDocLit script operation
Invoke a soap server for Document-Literal based web services.
isAuthRequired script operation
Returns whether this Web Service requires authentication.
isBinary script operation
Indicates if the attribute represents a binary value encoded as a BASE64 string.
isColAreaLocked script operation
Checks whether the collaboration area is locked and returns true or false to indicate the state of any locking.
isCtgItemMappedToCategories script operation
Returns true if the item is mapped to categories. If the optional argument ctr is given, returns true if the item is mapped to a category in ctr.
isDateAfter script operation
Returns true if and only if this date is after otherDate.
isDateBefore script operation
Returns true if and only if this date is before otherDate.
isDefined script operation
Return true if the value of the designated column in the current row of this SearchResultSet object is defined; otherwise, return false.
isDeployed script operation
Returns whether this Web Service is deployed.
isEntryAnItem script operation
Indicates whether an entry is an item.
isEntryCheckedOut script operation
Returns true if the entry is checked out into a collaboration area; otherwise it returns false.
isEntryCheckedOutForPrimaryKey script operation
Returns true if the entry for the given primary key is checked out into a collaboration area; otherwise it returns false.
isEntryNew script operation
Returns TRUE if this entry is NEW, FALSE if it is existing.
isEntryReserved script operation
Checks if an entry is reserved in the given collaboration area.
isEntryReservedInStep script operation
Checks if an entry is reserved in a workflow step for a given collaboration area. Returns true or false to indicate the state of any reservation.
isExternal script operation
Indicates if the attribute is a reference to an external file. For example a reference to a JPEG image file.
isInheriting script operation
Returns true if the item inherits at a location for sAttribPath. The attribute contains a value that is unset and supports inheritance. Note that no check is made that there is a value to inherit.
isItemAvailableInLocation script operation
Returns true if the item is mapped to the given location in the specified category tree.
isLocalized script operation
Returns a boolean to indicate whether or not a spec is localized.
isLowerCase script operation
Checks if all the characters in this string are lower case using the rules of the default locale.
isNodeEditable script operation
Returns true if the node is editable. Returns false otherwise.
isNodeGrouping script operation
Returns true if the node is a grouping node, false otherwise.
isNodeIndexed script operation
Returns true if this node is indexed.
isNodeNonPersisted script operation
Returns true if the node is a non-persisted node, false otherwise.
isNodeSpecRoot script operation
Returns true if the node is a spec root node, false otherwise.
isOrdered script operation
Returns the value of catalog's "Use Ordering" attribute.
isStringSingleByte script operation
For SHIFT_JIS encoding, this returns true if the string is made of single byte characters only. False is returned otherwise.
isUpperCase script operation
Checks if all the characters in this string are upper case using the rules of the default locale.
isUserDefinedLogNew script operation
Check if the user defined log has been saved in the database.
isWorkEntryMarked script operation
Check to see if the work entry is marked.
isWorkEntryMarkedNew script operation
Checks if the WorkEntry is marked as new.
javaArrayFromScriptArray script operation
Transforms the provided scriptArray into a Java array holding the same elements in the same order. If scriptArray is null, returns null. The user must provide the type (or subtype) of the array's elements. The types can be primitive Java data types (int, char, byte, float, boolean, long, double, short) or any valid Java class (for example, java.lang.String or java.lang.Integer). Type can also be a multidimensional array of elements (primitive or non-primitive) with the brackets intact (for example, int[] or java.lang.Integer[][]). A fully qualified name must be provided whenever a class is used as the type.
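For example, with illustrative script-array syntax:
  // Convert a script array into a java.lang.String[] for use by Java code,
  // for example a method created with createJavaMethod.
  var scriptArr = ["red", "green", "blue"];
  var javaArr = javaArrayFromScriptArray(scriptArr, "java.lang.String");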
jmsCreateTextMsg script operation
Creates a new JMS TextMessage using QueueSession information with the text provided.
jmsDisconnect script operation
Disconnects from the given queue manager.
jmsGetConnectionFactory script operation
Creates and returns a JMS connection factory with the specified context.
jmsGetContext script operation
Creates a JMS context.
jmsGetMQConnectionFactory script operation
Creates and returns a JMS connection factory for communicating with MQ queues. Note that you do not need a Context to get an MQ connection factory whereas
you need a Context for connecting to other JMS queues.
jmsGetMessageCorrelationID script operation
Returns a string containing Correlation Id for the JMS message.
jmsGetMessageID script operation
Returns a string containing the JMS message id.
jmsGetMessageProperties script operation
Returns a hashmap from string property names to string values for those properties.
jmsGetQueue script operation
Returns a javax.jms.Queue object from the given QueueSession. NAME identifies the queue in a vendor-specific format.
jmsGetQueueByName script operation
Returns a javax.jms.Queue object from the given JNDI Name and Context.
jmsGetQueueConnection script operation
Returns a JMS queue connection from the given connection factory, using the supplied username and password. If no username is specified, the default user and password from common.properties are used if they exist; otherwise it attempts to connect as the user running IBM Product Master.
jmsGetQueueSession script operation
Returns a JMS queue session from the given connection factory.
jmsGetTextFromMsg script operation
Returns a string containing the entire content of a JMS message, including headers.
jmsReceiveMsg script operation
Receives a JMS Message. Times out after TIMEOUT milliseconds. If INBOUNDQUEUE is not null, looks on that queue. If ctx is provided, INBOUNDQUEUE is
assumed to be a JNDI name; otherwise INBOUNDQUEUE is assumed to be a queue name in vendor-specific format. If INBOUNDQUEUE is null, and
MESSAGETORECEIVEREPLYFOR is not null, looks on the queue defined in the "Reply-To" field of MESSAGETORECEIVEREPLYFOR. If INBOUNDQUEUE is null and
MESSAGETORECEIVEREPLYFOR is null, throws an AustinException. We now know which queue will be used. If MESSAGESELECTOR and
MESSAGETORECEIVEREPLYFOR are both null, selects the first message from that queue. Otherwise selects the first message from the queue (if any) fulfilling all of
the conditions defined by MESSAGESELECTOR and MESSAGETORECEIVEREPLYFOR. If MESSAGETORECEIVEREPLYFOR is not null, rejects any message not having a
correlation ID equal to MESSAGETORECEIVEREPLYFOR's message ID. If MESSAGESELECTOR is not null, rejects any message not fulfilling the condition defined in
messageSelector. If no appropriate message is found, returns null.
jmsReceiveMsgFromQueue script operation
Receives a JMS Message. Times out after TIMEOUT milliseconds. If INBOUNDQUEUE is not null, looks on that queue. If INBOUNDQUEUE is null, and
MESSAGETORECEIVEREPLYFOR is not null, looks on the queue defined in the "Reply-To" field of MESSAGETORECEIVEREPLYFOR. If INBOUNDQUEUE is null and
MESSAGETORECEIVEREPLYFOR is null, throws an AustinException. We now know which queue will be used. If MESSAGESELECTOR and
MESSAGETORECEIVEREPLYFOR are both null, selects the first message from that queue. Otherwise selects the first message from the queue (if any) fulfilling all of
the conditions defined by MESSAGESELECTOR and MESSAGETORECEIVEREPLYFOR. If MESSAGETORECEIVEREPLYFOR is not null, rejects any message not having a
correlation ID equal to MESSAGETORECEIVEREPLYFOR's message ID. If MESSAGESELECTOR is not null, rejects any message not fulfilling the condition defined in
messageSelector. If no appropriate message is found, returns null.
jmsSendMsg script operation
Sends message MSG and returns MSG or null. The message is sent to the queue specified by OUTBOUNDQUEUE, unless OUTBOUNDQUEUE is null. If ctx is
provided, OUTBOUNDQUEUE is assumed to be a JNDI name. If ctx is not provided, OUTBOUNDQUEUE is assumed to be a queue name in vendor-specific format. If
OUTBOUNDQUEUE is null, MSG is sent to the reply-to queue of MESSAGETOREPLYTO, if MESSAGETOREPLYTO is provided. If OUTBOUNDQUEUE is null and
MESSAGETOREPLYTO is not provided, throws an AustinException. If MESSAGETOREPLYTO is provided, the message id is read from it. PROPERTIES is a map from
string keys to string values. There is one special (non-JMS) key: "TRIGO_INCOMING_REPLY_QUEUE". "TRIGO_INCOMING_REPLY_QUEUE" indicates the queue
name to which an external application should send replies to this message. If ctx is provided, the value of "TRIGO_INCOMING_REPLY_QUEUE" is assumed to be a
JNDI name; otherwise it is assumed to be a queue name in vendor-specific format.
jmsSendMsgToQueue script operation
Sends message MSG and returns MSG or null. The message is sent to the queue specified by OUTBOUNDQUEUE, unless OUTBOUNDQUEUE is null. If
OUTBOUNDQUEUE is null, MSG is sent to the reply-to queue of MESSAGETOREPLYTO, if MESSAGETOREPLYTO is provided. If OUTBOUNDQUEUE is null and
MESSAGETOREPLYTO is not provided, throws an AustinException. If MESSAGETOREPLYTO is provided, the message id is read from it. PROPERTIES is a map from
string keys to string values with a single key value that is acted on. This special (non-JMS) key is "TRIGO_INCOMING_REPLY_QUEUE" whose value is a
javax.jms.Queue object to which an external application should send replies to this message.
jmsSetMessageText script operation
Sets the provided text for the JMS TextMessage. Only JMS TextMessage type is supported.
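A hedged sketch of a send-and-wait-for-reply flow built from the JMS operations above. The host, queue, and credential values are hypothetical, and the argument lists shown are assumptions rather than authoritative signatures; consult each operation's reference entry for the exact parameters:
  // Build the connection factory, connection, session, and queue.
  var factory = jmsGetMQConnectionFactory("mqhost", 1414, "QM1", "CHANNEL1");
  var connection = jmsGetQueueConnection(factory, "jmsuser", "jmspassword");
  var session = jmsGetQueueSession(connection);
  var queue = jmsGetQueue(session, "REQUEST.QUEUE");
  // Create and send a text message, then wait up to 30 seconds for the
  // reply that is correlated to it.
  var msg = jmsCreateTextMsg(session, "<request>42</request>");
  jmsSendMsg(msg, queue, null, null);
  var reply = jmsReceiveMsg(30000, null, null, msg);
  if (reply != null)
  {
    out.println(jmsGetTextFromMsg(reply));
  }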
keyForValue script operation
Returns the key of the field containing the given value.
lastIndexOf script operation
Returns the index within this string of the rightmost occurrence of the specified match substring.
length script operation
Returns the length of this string.
linkCatalog script operation
Links a catalog to another using source and destination attributes. The dstAttribute is optional.
listTransactions script operation
List the recorded transactions in order of date (undocumented, for internal use only).
loadCatalog script operation
Deprecated. Loads data from the specified File Spec and Spec Map, into the catalog or category tree upon which this operation is called. The feedType must be
"itm" for item to catalog feeds, "ctr" for category to category tree feeds, and "icm" for item to category mapping feeds.
loadImport script operation
Loads the file from the given DocStore path into the given Import.
loadJar script operation
loadJar dynamically adds the jar file of name jarName to the SystemClassLoader. This allows subsequent script operations (such as createJavaMethod) to use those
class files within the jar file. The jarName is specified as the fully qualified file name of the jar on the server. The operation returns false if the file cannot be
accessed. The operation returns true if the dynamic load was successful. If two loadJar calls are issued with the same fully qualified jarName and the first was
successful, then the second call will return true and will not add the jar file again.
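For example, with a hypothetical jar path:
  // Add a jar to the system class loader at run time; subsequent operations
  // such as createJavaMethod can then resolve classes packaged in it.
  if (loadJar("/opt/custom/lib/pricing-utils.jar"))
  {
    out.println("Jar loaded; classes are now available");
  }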
loadSpecFromXML script operation
Creates the spec defined in specXML. The spec can be loaded into different companies.
locationHasData script operation
Returns true if the location has data.
lockColArea script operation
Locks the Collaboration Area so that no more entries can be checked out into it. Returns true or false depending on whether the lock was successfully applied or
not.
logActionableMessage script operation
Logs a message in the alerts console.
logDebug script operation
Logs the debug message in the debug log that is available from the schedule profile details screens. Use with caution because the debug log is maintained in
memory.
logError script operation
Logs the given message as an error with the corresponding item id to the location specified in the context.
loggerDebug script operation
Writes the given data to this logger if the log level in log.xml is set to debug.
loggerError script operation
Writes the given data to this logger if the log level in log.xml is set to error.
loggerFatal script operation
Writes the given data to this logger if the log level in log.xml is set to fatal.
loggerInfo script operation
Writes the given data to this logger if the log level in log.xml is set to info.
loggerWarn script operation
Writes the given data to this logger if the log level in log.xml is set to warn.
logWarning script operation
Logs the warning message with the corresponding item id to the location specified in the context.
lookup script operation
Returns the sSecKey-th value for sKey in the lookup table sLookupTableName or lkp. When you use lookup(String sLookupTableName, ...), the lookup object is retrieved from the cache, which does not contain lookup entries newly added in the current context. Use lookup(LookupTable lkp, ...) to get all of the entries added in the current context, because this form reads from the database on each call.
lookupValues script operation
Returns the values for the given key in the lookup table.
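For example; the table name and key are hypothetical, and the argument order follows the descriptions above:
  // Cached form: entries newly added in the current context may be missing.
  var value = lookup("Country Codes", "US");
  // All values stored under the key.
  var values = lookupValues("Country Codes", "US");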
map script operation
Add a mapping from sSrcPath to sDstPath to this spec map.
mapCategoryToOrganizations script operation
Maps the category to all the organizations provided. If bAdd is true, the new organizations are added to the old mappings; otherwise the old mappings are overwritten to become the new set of organizations.
makeItemAvailableInLocation script operation
Makes this item available in a given location. Available means that an item can have location data for the given location. If bRecursive is true, the item is also made available in all descendant locations.
makeItemAvailableInLocations script operation
Makes this item available in the given locations. Available means that an item can have location data for the given location. If bRecursive is true, the item is also made available in all descendant locations.
makeItemUnavailableInLocation script operation
Makes this item unavailable in a given location. If bRecursive is true, the item is also made unavailable in all descendant locations.
makeItemUnavailableInLocations script operation
Makes this item unavailable in the given locations. If bRecursive is true, the item is also made unavailable in all descendant locations.
mapCtgItemToCategory script operation
Map this item to this category. If optional boolean addToPicture is false, the secondary spec will not be associated and cannot be set; useful for performance. If
optional boolean validateCategory is true and the categories hierarchy does not have the VALIDATION_RULES option disabled, the mapping will only occur if
the category passes validation. Validation is false by default.
mapCtgItemToOrganizations script operation
Maps the item to all the organizations provided. If bAdd is true, the new organizations are added to the old mappings; otherwise the old mappings are overwritten to become the new set of organizations.
mapWflStepExitValueToNextStep script operation
Maps the exit value of the WorkflowStep to the nextStep. The nextStep can be a step name, a WorkflowStep, an array of step names, or an array of WorkflowSteps.
markWorkEntryDirty script operation
Mark this WorkEntry as being dirty.
match script operation
Return the contents of the parenthesized sub-expressions after a successful match.
max script operation
Return the maximum value of the two numbers.
mergeValues script operation
Return the set-union of hm1, hm2, ... (only values are considered). At least two hashmaps should be entered.
min script operation
Return the minimum value of the two numbers.
moveCtgItemToCategories script operation
Moves the item from its existing categories to a new set of categories. If bAdd is true, the category mappings are added instead.
moveDoc script operation
Move this document to the specified sPath in the docstore. If the path ends with a '/' it is assumed that the doc needs to be moved to the specified directory with
the same doc name as the source.
moveEntriesToColArea script operation
Applies to items only. The moveEntriesToColArea() script operation checks to ensure that the set of Editable+Required nodes in the destination collaboration area
workflow is a superset of the Editable+Required nodes in the source collaboration area workflow.
moveEntriesToNextStep script operation
Posts a request to move the entries in the entrySet from the specified stepPath, to the next step for the given exitValue. Returns a map of Entry primary key to String
of validation errors (which can be zero-length). Only reserved (if reservations are needed) and valid entries are moved. The move takes place, at the earliest, only
after the current transaction has committed. Entries which have passed the reservation test, are saved before being tested for validity.
moveEntryToColArea script operation
Applies to items only. Moves the entrySet of entries in the collaboration area to another collaboration area. destColAreaName specifies the name of the destination collaboration area, into whose initial step the entries are checked in. This operation is asynchronous, which means a message is posted to complete the move at some time after the current transaction is committed. Returns a boolean depending on whether the message to move the entry was successfully posted. Returns false if the entry was a category.
moveEntryToNextStep script operation
Posts a request to move the entry from the specified stepPath to the next step for the given exitValue. Returns a map of Entry primary key to String of validation
errors (which may be zero-length). Only a reserved (if a reservation is required for it) and valid entry will be moved. The move will take place, at the earliest, only
after the current transaction has committed. Entries which have passed the reservation test, are saved before being tested for validity.
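A sketch of posting a move request; the step path and exit value are hypothetical, and the argument order is an assumption:
  // entry: an entry previously retrieved using the collaboration area
  // as its container. The move happens only after the transaction commits.
  var errors = moveEntryToNextStep(entry, "Enrich", "APPROVE");
  // A non-empty string against the entry's primary key in the returned
  // map indicates validation errors that prevented the move.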
moveCursor script operation
Change cursor position, where 0 <= position < size(). So if size() = 100, you can set the position to 0, 1, ..., 98, 99. The return value is true if the cursor was moved
(note that you will have to call next() to fetch the row), or false if the cursor could not be moved due to an incorrect position.
moveUserToOrganization script operation
Move user from source to the destination organization.
mqDisconnect script operation
Disconnects from the given queue manager.
mqGetMessageDiagnostics script operation
Returns a string containing diagnostic information about the given message.
mqGetMessageId script operation
Returns the ID of the given message as a String containing a hexadecimal number.
mqGetQueueMgr script operation
Creates and returns a new MQ queue manager with the given properties.
mqGetReceivedMsg script operation
Receives a message from queueName. Returns the message, as a MQMessage, or null.
mqGetReceivedMsgByMessageID script operation
Finds the message in the given queue with given message ID. The ID is passed in as a String containing a hexadecimal number. Returns null if there is no such
message in the given queue.
mqGetResponseToMsg script operation
Gets the response to the given message from the given queue.
mqGetTextFromMsg script operation
Returns a string containing the entire content of a MQMessage, including headers.
mqGetXMLMessageContent script operation
Discards any garbage at the beginning of the input string to get a XML document. More precisely, behaves as follows: If the input string is of the form A + B, where B
is a valid XML document and A is any (possibly empty) string, this operation returns B. Otherwise, returns null.
mqSendReply script operation
Sends a reply to the given message, without indicating success or failure.
mqSendReplyWithStatus script operation
Sends a reply to the given message, setting the feedback field to indicate the given status. Status must be one of the following values (in upper or lower case):
"SUCCESS", "FAIL", "VALCHANGE", "VALDUPES", "MULTIPLE_HITS", "FAIL_RETRIEVE_BY_CONTENT", "BO_DOES_NOT_EXIST", "UNABLE_TO_LOGIN",
"APP_RESPONSE_TIMEOUT", "NONE".
mqSendTextMsg script operation
Sends the given message over the given queue. Returns the MQMessage.
mqSendTextMsgWithReply script operation
Sends a message over the given MQ queue and receive a reply from the reply queue.
new$AttrGroup script operation
Returns a new attribute collection with the given name, type and description. Type can only be GENERAL.
new$BasicSelection script operation
This script operation is deprecated.
new$Catalog script operation
Returns a new catalog with the given spec and name. Pass optional args in the map with these keys "displayAttribute" (path of node), "accessControlGroup" (pass
the ACG object), "isLookupTable" (default is false--set to true to create a lookup table and the Default Lookup Table Hierarchy is used as the category tree). If the
displayAttribute is not set, the pk attribute is used.
new$Category script operation
Returns a new category object when given the complete path of the new category and the delimiter that separates the categories in the path. If the delimiter is not
specified, it defaults to '/' (except if a filespec is used during an import). If the primary key is not specified, then it should either be automatically set via a sequence
or value rule, or it should be set after creation. If used in workflows and the category path already exists in the source category tree, the category will be checked
out. If this script operation is invoked within a database transaction, the script operation will no longer commit that transaction during its execution. It will also
throw an exception if a database problem occurs, where previously it might not have done so. This is to ensure that a rollback can be carried out safely by the caller.
If you were relying on this script operation's transactional effects you will need to adjust your script. See the product documentation for more information about
transactions and scripts.
new$CategoryTree script operation
Returns a new category tree with the given primary spec and name. Pass optional args in the map with these keys "useInheritance" (default is false),
"displayAttribute" (Node object), "pathAttribute" (Node object), "accessControlGroup" (pass the ACG object), "isOrganizationTree" (default is false--set to true to
create an organization tree). If the pathAttribute is not set, the primary key will be used. If the displayAttribute is not set, the pathAttribute is used.
new$CollaborationArea script operation
Create a new collaboration area with the given name, wfl and srcContainer.
new$CSVParser script operation
Returns a Comma Separated Value Parser for the given buffered reader input.
new$CtgAccessPrv script operation
Builds a new container access privilege object.
new$CtgItem script operation
Returns a new item object. The argument can be a catalog name or a catalog object. Passing a catalog object allows settings such as attribute collections and process settings to be propagated to new items built with this operation. If no catalog name or object is provided, the default catalog from the current script context is used. Set bRunEntryBuildScript or bBuildNonPersisted to false to disable the default behavior of this script operation to run the entry build script or build the non-persisted attributes, respectively, for this new item.
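A minimal sketch of building and saving an item. The catalog name and attribute path are hypothetical, and getCtgByName, setEntryAttrib, and saveCtgItem are companion operations assumed from elsewhere in this reference:
  var catalog = getCtgByName("Products");
  var item = new$CtgItem(catalog);
  item.setEntryAttrib("Products Spec/SKU", "SKU-0001");
  item.saveCtgItem();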
new$CtgTabRow script operation
Builds a new container tabbed view row object for the node specified by the path.
new$CtgView script operation
Builds a new catalog view.
new$Date script operation
Builds a Date object from a String given a date format. If the locale is supplied, that locale is used to apply the format; otherwise the default_locale from the common.properties file is used.
new$DelimParser script operation
Returns a new parser that parses based on the given delimiter.
new$Distribution script operation
Creates a distribution with the given name, type, and any additional attributes. If this script operation is invoked within a database transaction, the script operation
does not commit that transaction during its execution. If a database problem occurs, it throws an exception. These changes in behavior of this script operation
ensure that a rollback can be carried out safely by the caller. For more information about transactions and scripts, see the product documentation.
new$DynamicSelection script operation
Returns a dynamic selection named selectionName and corresponding to the query queryString.
new$EnvObjectList script operation
Returns a container for the IBM Product Master objects to be exported.
new$ExcelBook script operation
This creates a new ExcelBook. ExcelBooks can be used in two ways: to read an existing ExcelBook or to create a new ExcelBook. Updating an existing ExcelBook is not supported. All the other Excel objects can be obtained either directly from this ExcelBook or indirectly from the objects obtained from this ExcelBook (apart from the ExcelParser object, which has its own constructor). When the script operation is used within an import job, an existing ExcelBook is read from the docstore (the default import document or the one that is specified in the docToRead parameter). When an existing ExcelBook is read, it should not be updated, as any changes are not written back to the docstore. To create an ExcelBook that will be updated, call this script operation outside of an import job, without the docToRead parameter; this creates an ExcelBook in memory, which can then be saved to the docstore using the ExcelBook::saveToDocStore script operation.
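A sketch of reading a cell defensively, following the guidance under getNumericCellValue and getStringCellValue. How the ExcelCell object is obtained from the book is omitted, and the "NUMERIC" type token is an assumption:
  // Check the cell type before choosing the accessor.
  if (cell.getExcelCellType() == "NUMERIC")
  {
    var n = cell.getNumericCellValue();   // double value
  }
  else
  {
    var s = cell.getStringCellValue();    // string value
  }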
new$ExcelParser script operation
Returns an excel parser to parse the given spreadsheet.
new$FixedWidthParser script operation
Returns a new fixed width parser given the buffered reader.
new$LdapAttribute script operation
Creates a new LDAP Attribute. Optional parameters: isBinary indicates a BASE64-encoded binary representation, default false; isExternal indicates an external file reference, default false.
new$LdapEntry script operation
Create a new LDAP Entry.
new$LdapEntrySet script operation
Create a new LDAP Entry Set.
new$LdapObjectclass script operation
Create a new LDAP objectclass object.
new$Locale script operation
Returns a locale with the language and country/region (two letter codes) combination specified. Throws an exception if the combination is not supported.
new$LookupTable script operation
Returns a new lookuptable with the given spec and name.
new$PageLayout script operation
Returns a new page layout with the given name.
new$RE script operation
Returns a regular expression corresponding to the given pattern. Optional match flags are 0=caseSensitive, 1=ignoreCase, 2=matchMultiline (new lines match as ^ and $), 4=matchSingleLine (treat multiple lines as one line). Flags are additive.
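For example; the receiver of the match operation is an assumption:
  // Flag 1 = ignoreCase, so "ITEM-42" and "item-42" both match.
  var re = new$RE("^item-([0-9]+)$", 1);
  var groups = "ITEM-42".match(re);   // contents of the parenthesized group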
new$Reader script operation
Returns the buffered reader for the document specified by the path. Close the reader when you are done using it.
new$Report script operation
Return a new report object.
new$SearchQuery script operation
Create a search query.
new$SearchSelection script operation
Return an empty search selection.
new$SearchTemplate script operation
Return a new search template with the given name, container, and the set of attribute group names. Also, search templates in a collaboration area step can be
defined by providing optional parameters colAreaName and stepPath.
new$Spec script operation
Returns a spec object given the name and the type of the spec. The optional parameter specFileType applies only to specs of type FILE_SPEC and specifies the data file type of the file spec.
new$SpecLookupTableNode script operation
Returns a new node created in the spec according to the path and order, with the specified lookup table attached.
new$SpecMap script operation
Returns a new spec map of the specified map type between the source and the destination; any existing map is deleted first.
new$SpecNode script operation
Returns a new node created in the spec according to the path and order.
new$StaticSelection script operation
Returns an empty static selection (Selection) on the catalog.
new$SystemDB script operation
Returns an object that represents the status of the current database.
new$Timer script operation
Create and start a new timer.
newUserDefinedLog script operation
Returns a new user defined log object for this container with the given name and description. Throws an exception if a log with the same name already exists for the container.
newUserDefinedLogEntry script operation
Returns a new user defined log entry object for the specified entry, which is either an item or a category (with date/timestamp and log). If the category is also provided, the logs are only associated with that category.
new$WorkEntry script operation
Creates a workentry for a given entry.
new$WorkEntryList script operation
Create a new work entry list from a catalog or a selection.
new$Workflow script operation
Create a new workflow of the given container type and with the given name. Container type can be one of the following: CATALOG, CATEGORY_TREE
new$XmlDocument script operation
Creates an XmlDocument from a docstore Doc instance or an XML string literal.
new$ZipArchive script operation
Returns a new zip archive with the given file name.
newCSVParser script operation
Returns a Comma Separated Value Parser for the given buffered reader input.
newDelimParser script operation
Returns a parser that parses based on the delimiter provided.
newFixedWidthParser script operation
Returns a fixed width parser for the given buffered reader.
next script operation
Move the ResultSet iterator to the next result. Returns false if it has iterated past the last result.
next script operation
Move to next record. Return false if it reaches the end of result set; otherwise, return true.
nextLine script operation
Returns the next line from the reader.
openJDBCConnection script operation
Gets an SQL connection (with autoCommit on) using JDBC drivers.
parseCSV script operation
Returns an array of each token, as parsed by the CSV parser. If a field number is provided, just the corresponding token substring is returned. A NullPointerException is thrown if the string to be parsed is null.
parseDate script operation
Parses a String value into a Date object. The format string is a pattern whose format is identical to the format used by Java. The locale is optional; the default value is the UI locale.
parseDelim script operation
Returns an array of each token, as parsed by the Delim parser. If a field number is provided, just the corresponding token substring is returned.
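A sketch of tokenizing delimited input; the docstore path is hypothetical, and invoking parseCSV on the parser object is an assumption:
  var reader = new$Reader("/feeds/products.csv");   // close when finished
  var parser = new$CSVParser(reader);
  var fields = parser.parseCSV();   // array of tokens for the next line;
                                    // parseCSV(n) returns only field n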
parseDouble script operation
Parse the string based on the given locale and convert to a double.
parseFixedWidth script operation
Returns the corresponding token substring between the two indexes.
parseLDIFFile script operation
Reads an LDIF file and instantiates an LDAP entry set based on it. The filename is a system reference.
parseNumber script operation
Converts a String to a Number according to the given numberFormat and locale. If locale is null, the locale of the current user is used. If numberFormat is null, the default format of the locale is used. The numberFormat string is a pattern whose format is identical to the number format used by Java.
parseTimeZoneToDBValue script operation
Parses the string to a time zone, then returns the db value.
parseXMLNode script operation
Returns the value that is given by the sXMLSubPath XPath in the current XML document.
parseXMLNodeWithNameSpace script operation
Returns the value given by the sXMLSubPath XPath in the current XML document. For the XPath value, you can specify either a namespace URI qualified path or a
literal path.
populateAllNonPersisted script operation
Execute non-persisted script for all entrynodes in the entry that do not have a value.
populateNonPersistedForEntryNode script operation
Executes the non-persisted script for this entry node. Returns true if the script completed successfully, false otherwise.
populateSecurityContext script operation
Returns the context for the given user by assigning the access privileges for the roles passed in roleNames. It has no effect on the current user's context. If ldapContext is present, a handle to the context is set in the returned context. If the user is not already present within the product, a new user is created in the specified organization, or otherwise in the default organization of the default organization hierarchy.
previewEntryAttrib script operation
Returns the preview string for displaying the entry attribute specified by attribute path. Returns "" if sAttribPath refers to a nonexistent attribute or to a nonexistent multi-occurrence instance.
print script operation
Writes o as a string into this writer.
printXML script operation
Writes an XML tag with the text value sValue, the tag name sTagName and the attributes sAttributes.
println script operation
Writes o as a string and appends a new line to it into this writer.
publishEntriesToSrcContainer script operation
Posts a message to publish the current attribute values for each entry in the entrySet in the collaboration area back to the source catalog or category tree, leaving
those entries which are able to be published out to the source container unchanged and undisturbed in the collaboration area. Entries which cannot, for whatever
reason, be published out to the source container will be moved to the Fixit step. This is also known as an interim checkin. The publish will occur after the current
transaction completes.
pullPropertyFromWidget script operation
The value of sDestProperty on this widget always reflects the value of sSrcWidgetProperty on oSrcWidget; oSrcWidget is either a widget or a property of this widget that holds a widget.
pushPropertyToWidget script operation
Push the given property into the given widget.
put script operation
Put a new row in the given lookup table.
qmgrGetMsgQueueByName script operation
Returns the queue if present in the system.
queryJobCompletionPercentage script operation
Queries the completion percentage of the specified job. Returns the percent complete as an Integer if the job is currently running, null otherwise.
queryJobStatus script operation
Queries whether the specified job is currently running.
rand script operation
Returns a random integer that is between 0 and max.
reformatDate script operation
Takes a date string formatted according to the pattern indicated by currentDateFormat and returns a new string formatted according to the newDateFormat provided. If currentDateFormat is null, the default format for the locale is used. If currentLocale or newLocale is null, the locale in the user setting is used. If newDateFormat is null, the standard default pattern "EEE MMM dd HH:mm:ss zzz yyyy" is used.
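For example; the values are hypothetical, the patterns use Java's date-format syntax, and the argument order shown is an assumption:
  // Re-pattern a US-style date string as ISO 8601.
  var iso = reformatDate("03/15/2024", "MM/dd/yyyy", "yyyy-MM-dd");
  out.println(iso);   // 2024-03-15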
reformatDouble script operation
Returns a new String representing the number, reformatted to fit the criteria specified by minDigitsBeforeDecPoint and maxDigitsAfterDecPoint.
renderWidget script operation
Render the widget.
releaseEntryInStep script operation
Unlock an entry in a workflow step for a given collaboration area. This operation runs synchronously. If this script operation is invoked within a database
transaction, the script operation will no longer commit that transaction during its execution. It will also throw an exception if a database problem occurs, where
previously it might not have done so. This is to ensure that a rollback can be carried out safely by the caller. If you were relying on this script operation's
transactional effects you will need to adjust your script. See the product documentation for more information about transactions and scripts.
releaseJDBCConnection script operation
Roll back and release an SQL connection retrieved using JDBC.
releaseWPCDBConnection script operation
Roll back and release a connection retrieved using the DB context.
remove script operation
Removes the element at the given index (0 offset) from the array.
removeAttributeFromAttrGroup script operation
Removes the attribute from the attribute collection.
removeCategoryTreeMapping script operation
Remove a map between the two given categories.
removeChildCategory script operation
The removeChildCategory script operation removes the given child category from the category's children. The removal is allowed only if the given child category has at least one parent. Ensure that you provide the category name as the parameter.
removeCtgItemFromCategory script operation
Remove mapping from this item to this category if the mapping exists.
removeCtgTabAt script operation
Remove the tabbed view, at the index position (zero-based), from the catalog view.
removeFromCompanyLocales script operation
Removes the given locales from the list of locales that are defined for the company. This will also remove the given locales from any specs that are localized using
them. If this script operation is invoked within a database transaction, the script operation will no longer commit that transaction during its execution. It will also
throw an exception if a database problem occurs, where previously it might not have done so. This is to ensure that a rollback can be carried out safely by the caller.
If you were relying on this script operation's transactional effects you will need to adjust your script. See the product documentation for more information about
transactions and scripts.
removeFromSpecLocales script operation
Removes the given locales from the list of locales that are defined for the spec.
removeHTML script operation
Returns a new string resulting from removing all HTML tags from the original string.
removeItemSecondarySpecFromCategory script operation
Disassociates a secondary item spec from this Category.
removeLocalesFromAttrGroup script operation
Removes the locales from the Attribute Collection.
removeLocationSpecificData script operation
Removes location specific data for a catalog. CTR is the category tree that contains the locations.
removeNode script operation
Removes a node from a spec.
removeSecondarySpecFromCategory script operation
Disassociates a secondary spec that defines this category's attributes.
removeSpecFromAttrGroup script operation
Disassociates all the nodes of the spec from this attribute collection.
removeWorkEntry script operation
Removes the WorkEntry at the specified index from the WorkEntryList.
removeUserFromOrganization script operation
Removes the mapping of the user from the specified organization.
renderHorizontalBars script operation
Return an HTML table to display horizontal bars; anHeights[i] should have the length of the i-th bar and asLabels[i] the tooltip for the i-th bar.
renderVerticalBars script operation
Return an HTML table to display vertical bars; anHeights[i] should have the length of the i-th column and asLabels[i] the tooltip for the i-th column.
reorderEntry script operation
Allows users to adjust the ordering of a child Entry within a parent category in catalog ctg. Entry child is moved before (bInsertBefore=true) or after
(bInsertBefore=false) the position (zero is the first element) specified. Returns the ordered entry id (if it works) or null (if it fails). This method should not be used in
conjunction with a transaction. The Boolean flag is optional and if not specified defaults to true.
replace script operation
Returns a new string resulting from replacing all occurrences of the match substring in the given string with the replacement substring.
replaceCharsNotInDecRangeWithHex script operation
Converts the characters in the string that are not in the given numeric range to printable hex. Each character in the source string not within the given numeric range is converted to a hex character representation using the provided encoding. Each converted character is preceded by the qualifier. The characters within the decimal range are copied into the output string unchanged.
replaceCompanyLocales script operation
Sets the given locales for the company. Removes any existing locales. This will also remove any locales removed as a result of this operation from any specs that are localized using them. For example: the current locales are en_US and fr_FR. Calling replaceCompanyLocales({en_US,de_DE}) will result in: (1) fr_FR is removed from the company. (2) The company locales are set to en_US and de_DE. (3) Any specs localized with fr_FR will have fr_FR removed from them. If this script operation is invoked within a database transaction, the script operation will no longer commit that transaction during its execution. It will also throw an exception if a database problem occurs, where previously it might not have done so. This is to ensure that a rollback can be carried out safely by the caller. If you were relying on this script operation's transactional effects you will need to adjust your script. See the product documentation for more information about transactions and scripts.
replaceSpecLocales script operation
Sets the given locales for the spec. Removes any existing locales.
replaceString script operation
Returns a new string resulting from replacing the first occurrence of the match substring, in this string, with the replacement substring.
replaceUsingLookupTable script operation
Returns a string in which any token matching a key in the lookup table is replaced by the value of that lookup table entry. Tokens are delimited by one or more spaces, and replacement is done token by token. Avoid situations in which a value used in a replacement matches another key in the lookup table, as the result string returned by replaceUsingLookupTable becomes unpredictable.
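A minimal sketch, assuming a lookup table named "Abbreviations" that maps the token "qty" to "quantity"; both the table name and the invocation style on the lookup table object are illustrative assumptions:

var lkp = getLookupTableByName("Abbreviations");
// Each whitespace-delimited token that matches a key is replaced by its value.
var sResult = lkp.replaceUsingLookupTable("order qty per box");
out.writeln(sResult); // expected: "order quantity per box"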
reportAllTableIndexes script operation
Reports all the tables and their indexes.
reportChangedIndexes script operation
Reports the list of indexes that have not been updated.
reportExtraIndexes script operation
Reports the list of indexes that are extra in the current database that should not be there.
reportIndexStatistics script operation
Reports all the indexes and their current statistics and whether or not they need to be rebuilt. Warning: This script operation should not be used on a live system: using it during normal IBM Product Master operations will have a detrimental effect on performance.
reportMissingIndexes script operation
Reports the list of indexes that are missing in the current database that should be there.
reserveEntryInStep script operation
Reserve an entry in a workflow step for a given collaboration area. This operation runs synchronously. If this script operation is invoked within a database
transaction, the script operation will no longer commit that transaction during its execution. It will also throw an exception if a database problem occurs, where
previously it might not have done so. This is to ensure that a rollback can be carried out safely by the caller. If you were relying on this script operation's
transactional effects you will need to adjust your script. See the product documentation for more information about transactions and scripts.
reset script operation
Reset cursor position to first position. Similar to calling moveCursor(0).
resetContainerLocalesForRole script operation
Deletes the locales that are associated with this container for the given role. The default list of locales for this role are the ones that then apply.
resizeString script operation
Use to increase the size of a string to the finalLength by applying the appropriate padding to the left or right of the string with the given padChar.
return script operation
Exits from a function with the option of returning some data to the caller.
rollback script operation
Roll back a transaction using the DB connection.
runJavaConstructor script operation
A constructor created by using the createConstructor operation is executed with the specified parameters.
runJavaMethod script operation
Run the supplied Method (which was created using a previous createJavaMethod call). If the Method is not static, then it is invoked on the supplied object using the
supplied parameters. For static methods the supplied obj is ignored. The supplied Method should not be null. For instance methods the supplied object should not
be null. The number of supplied parameters should match the number and types associated with the Method. If the Method parameters contain primitives, then
these parameters should not be supplied as null Objects and should be supplied as the appropriate wrapper primitive object (such as instances of
java.lang.Integer).
runJob script operation
Queues up the job for running immediately on the scheduler. Returns the scheduleID for the job. Job type is one of "CTGTODB", "DBTOMKT", "REPORTEXE", or
"CATALOGTOCATALOGEXPORT". Use CTGTODB for imports, DBTOMKT for exports, REPORTEXE for reports, and CATALOGTOCATALOGEXPORT for catalog exports.
runQuery script operation
Runs the given DB query string against the default DB.
runScript script operation
Run the script represented by the given script object.
runStepJob script operation
The workflow step is run for each of the entries in the entrySet in the collaboration area. If the job completes successfully, the "JOB_SUCCESSFUL" exit value will be
set on all the entries in entrySet and a LeaveStep event is posted. If the job completes with an error, the "JOB_FAILED" exit value will be set on all the entries in the
entrySet and a LeaveStep event posted. If the elements of entrySet are not in stepPath (but are still in the Collaboration Area) nothing is done: the LeaveStepEvent
posted when the job finishes has no effect. If one or more of the entries in the entrySet are no longer in the Collaboration Area they will be excluded from the
entrySet of the posted LeaveStepEvent. If no entries in the entrySet are still in the CollaborationArea, no LeaveStepEvent is posted.
save script operation
Creates a Doc object with the content in the Writer and saves it in the specified documentPath.
saveCatalog script operation
Saves this catalog. This is used to save new attributes that have been set on the catalog.
saveCategoryTree script operation
Saves the category tree. DO NOT USE in AGGREGATION if you are in an item-to-category feed or a category tree feed. The category tree you are aggregating to gets saved automatically at the end of an aggregation. However, if you modify another category tree as a side effect, call this script operation to capture the changes you made. The script operation returns a validation error array if any validation errors occurred; otherwise, it returns null on success.
saveCategoryTreeMap script operation
Save this category tree map.
saveColArea script operation
Saves the collaboration area.
saveCtgAccessPrv script operation
Saves the current catalog access privilege to the database.
saveCtgItem script operation
Saves the item and returns the EntryValidationErrors object. Use operations EntryValidationErrors::getGlobalErrors() and EntryValidationErrors::getLocationErrors()
to get the validation errors that may have prevented the save.
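A minimal sketch (the item variable is illustrative; whether a clean save returns null or an empty errors object is an assumption to verify in your release):

var oErrors = item.saveCtgItem();
if (oErrors != null)
{
   // Inspect the global validation errors that may have prevented the save.
   var aGlobalErrors = oErrors.getGlobalErrors();
   out.writeln(size(aGlobalErrors));
}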
saveCtgTabs script operation
Save the container tabbed view objects that are new or have been modified in the container view.
saveCtgView script operation
Saves the catalog view to the database.
saveMarkedEntries script operation
Save the set of marked entries for this work entry list, with indexes between start and end, for entries in the step specified by path in the collaboration area colArea, with the given comment.
saveMultipartRequestData script operation
Saves the documents sent through a multipart post in the docstore at the following location: /archives/multipart/uploaded/saveDir/. If a character set is given, that
is used. Otherwise, the default_charset_value as specified in common.properties is used. If this script operation is invoked within a database transaction, the script
operation does not commit that transaction during its execution. If a database problem occurs, it throws an exception. These changes in behavior of this script
operation ensure that a rollback can be carried out safely by the caller. For more information about transactions and scripts, see the product documentation.
savePageLayout script operation
Saves the current page layout.
saveSelection script operation
Save the static or dynamic selection to the database.
saveSpec script operation
Save this spec to the database.
saveSpecMap script operation
Save this spec map to the database.
saveToDocStore script operation
Saves an updated ExcelBook to the document store. If used in an export script with no operands, the Excel file is written into the standard export directory with the name CATALOG.XLS. If run with no operands outside of an export, this script operation fails with an exception. When a docStorePath argument is supplied, it is the absolute path, including the file name, where the Excel book is written in the doc store. When the overWriteFlag is set to true, any existing Excel book at the supplied path is overwritten. If the overWriteFlag is set to false and an Excel book exists at the docstore path, an exception is thrown. If the overWriteFlag is not supplied, it defaults to false. It is recommended that you do not specify a docstore path in export scripts, as subsequent runs of the export will attempt to write to the same file in the doc store (which will only succeed if the overwrite flag is set to true).
saveUser script operation
Save the User's Profile. Returns null if the save was successful; otherwise returns an array of ValidationError objects.
saveWebService script operation
Saves the Web Service in the DB. If deployment settings have changed, they take effect upon saving. If this script operation is invoked within a database
transaction, the script operation will no longer commit that transaction during its execution. It will also throw an exception if a database problem occurs, where
previously it might not have done so. This is to ensure that a rollback can be carried out safely by the caller. If you were relying on this script operation's
transactional effects you will need to adjust your script. See the product documentation for more information about transactions and scripts.
saveWfl script operation
Saves the workflow. Returns true or false depending on whether the workflow was successfully saved or not.
saveUserDefinedLog script operation
Update the persisted user defined log object in the database.
scriptArrayFromJavaArray script operation
Transforms a one-dimensional Java array into a script array holding the same elements in the same order. If OneDimensionalJavaArray is not a one-dimensional array or is null, then null is returned. One-dimensional arrays of primitives can be supplied as the parameter.
sendEmail script operation
Sends an asynchronous email message. The emailTos parameter is a list of email addresses separated with the semicolon character (;).
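A minimal sketch (the parameter order of recipients, subject, and body is an assumption, and the addresses are illustrative):

// Semicolon-separated recipient list, per the description above.
sendEmail("buyer@example.com;pm@example.com", "Import complete", "The nightly import finished successfully.");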
sendFtp script operation
Sends a file using FTP.
sendHTMLEmail script operation
Sends an asynchronous email message but allows HTML anchor tags in the message body. The emailTos, ccList, bccList, parameters are a list of email addresses
separated with the semicolon character (;).
sendHttp script operation
Send a GET or POST action to a URL sending the data in the document.
sendHttpString script operation
Sends a GET or POST to a URL sending data contained in a string.
sendMsg script operation
Sends the message.
sendMultipartPost script operation
Sends one or more documents of any content type, or a set of parameters, using a multipart post against the server at the specified URL. Use the hmRequestProperties parameter to send specific header information. The hmDocs parameter is a list of ['document path', 'document content type'] pairs for documents of a particular content type (for example, text/plain or image/gif). Also returns a HashMap (with RESPONSE_READER and RESPONSE_HEADER_FIELDS) for the response.
setAccessControlGroupForRole script operation
Sets an access control group with the given set of privileges for the role. The parameter privs is an array of privileges (picked from strings in the format: Catalog__list, Selection__list, SelectionMembers__view_items, and so on). Note that page privileges such as PAGE_OBJ_CTG_CONSOLE__view and PAGE_OBJ_CAT_CREATE__view are stored only in the "Default" ACG.
setAccessControlGroupForRoleMigration script operation
Script operation for migrating the old privilege names to the new ones. It is exactly the same as setAccessControlGroupForRole except that it has a mapping of old privilege names to new ones.
setAccessPrv script operation
Sets the access privileges for the given group. The permission is either V, E, or null. If the permission is null, the path is removed from the access privilege.
setActionModeToExport script operation
Sets the default action mode for objects to be exported. The value specified in this method can be overridden by specifying the action mode in either the
addAllObjectsToExport() or addObjectByNameToExport() methods.
setAlignment script operation
Set the cell style alignment to that supplied. The valid alignments are ALIGN_GENERAL, ALIGN_LEFT, ALIGN_CENTER, ALIGN_RIGHT, ALIGN_FILL,
ALIGN_JUSTIFY, ALIGN_CENTER_SELECTION.
setAllAccessControlGroupForRole script operation
Sets the access control group with all privileges except for the ones in privExclusions.
setAttribute script operation
Set an attribute of a node or a spec. Please consult the documentation for allowable values of sAttributeName. Common values are MAX_OCCURRENCE,
MIN_OCCURRENCE, TYPE, DEFAULT_VALUE. If the optional third parameter "dontReplace" is supplied, and is true, or we are dealing with a node rather than a spec,
sValue is added to any existing values for this attribute rather than replacing them.
setAttributes script operation
Set an attribute of a node or a spec to a set of values contained in the sValues HashMap. Any existing values are deleted before the new values are added. Please consult the documentation for allowable values of sAttributeName.
setAttributeGroupsToProcess script operation
Only retrieve attributes that belong to one of the attribute collections specified in the list of attribute groups for the given container. This operation was designed to work with getEntryAttrib(). It was a performance optimization in previous releases but is no longer required, as the whole entry is now always fetched from the database; previously, each attribute in the item was fetched with a separate database call. Use the script operations getEntryAttribsList() or getCtgItemAttribsForKeys() instead.
setAuthRequired script operation
Sets whether this Web Service requires authentication. The setting will take effect upon saving.
setBoldWeight script operation
Set the cell font bold weight.
setBorderBottom script operation
Set the Bottom Border to the supplied border.
setBorderLeft script operation
Set the left Border to the supplied border.
setBorderRight script operation
Set the Right Border to the supplied border.
setBorderTop script operation
Set the Top Border to the supplied border.
setBottomBorderColor script operation
Set the cell style bottom border color to that supplied.
setBypassApproval script operation
If an approval workflow is set up, use this to bypass the approval process.
setCatalogAccessControlGroupName script operation
Sets the Access Control Group to the given name for this catalog.
setCatalogByNameToExport script operation
Specifies the catalog whose contents are to be exported.
setCategoryAttrib script operation
Sets the attribute sAttribPath (spec_name/attribute_name) of this category to sValue.
setCategoryCacheFetchSize script operation
Sets the category cache fetch size (that is, the number of categories fetched in bulk each time). This is only applicable if the category cache is associated with an ItemSet.
setCategoryTreesForRecategorization script operation
Sets the category trees that will be modified by this workflow. If no category trees are set, all of the category trees associated with the source container will be modified by this workflow.
setCellType script operation
Set the cell type. Valid values are BLANK, NUMERIC, STRING. Be aware that the NUMERIC type will change to a DATE type of cell if the value of the cell is a Date.
setColAreaAccessControlGroup script operation
Sets the Access Control Group to the given name for this collaboration area.
setColAreaAdminRoles script operation
Sets the admin roles for the collaboration area.
setColAreaAdminUsers script operation
Sets the admin users for the collaboration area.
setColor script operation
Set the cell fonts color to that supplied.
setCompanyCurrencies script operation
This operation sets the list of currency codes for the company database.
setContainerAttribute script operation
Sets the value of the container attribute to the array of values.
setContainerLocalesForRole script operation
Sets the locales that are allowed for the container for the given role.
setContainerProperties script operation
The properties specified in the PROPERTIES hashmap are set for the container. Enforcement of locale restrictions on script operations is based on the value of
"SCRIPT_RESTRICT_LOCALES". "SCRIPT_NAME" is now deprecated and "POST_SCRIPT_NAME" should be used in its place.
setCtgAccessPrv script operation
Returns a catalog access privilege object with the permissions set according to the attribute collections. Permissions are [V|E].
setCtgItemAttrib script operation
Sets the attribute sAttribPath (spec_name/attribute_name) of this item to sValue. Returns true if it was set successfully. Returns false if the operation failed to set the value, or if the old and new values are the same.
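A minimal sketch (the spec path and value are illustrative):

// Returns true only if the value changed and was set successfully.
var bSet = item.setCtgItemAttrib("Product Spec/Short Description", "Stainless steel water bottle");
if (bSet)
{
   item.saveCtgItem(); // persist the change; see saveCtgItem above
}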
setCtgItemMappedAttrib script operation
Sets the attribute mapped to/from sAttribMappedPath (mapped_spec_name/attribute_name) of this item to given value.
setCtgItemMappedAttribs script operation
Set the attributes of this item: hmPathValue should contain (path_y, value_x) pairs; the item attribute path_x receives value_x if path_y is mapped to path_x in the spec map. If no spec map is specified, the spec map of the import is used.
setCtgItemPrimaryKey script operation
Sets this item's primary key value.
setCtgItemRelationshipAttrib script operation
Sets the attribute sAttribPath (spec_name/attribute_path) of type RELATIONSHIP of this item to the related item represented by the given catalog and primary key.
setCtgItemRelationshipAttribUsingItem script operation
Sets the attribute sAttribPath (spec_name/attribute_path) of type RELATIONSHIP of this item to the given related item.
setCtgTabRow script operation
Sets the tabbed view to contain a new set of attribute groups.
setCtgView script operation
Sets the container view object with the given name/catalog and returns it.
setDateCellValue script operation
Set a Date as the cell value.
setDateField script operation
Return a Date equal to the input Date, except that the specified field is set to the given value. Allowed field values are: YEAR, MONTH, DATE, HOUR_OF_DAY, MINUTE, SECOND.
setDataFormat script operation
Set the cell style Data Format to that supplied.
setDateInputFormat script operation
Set the Date input format.
setDateOutputFormat script operation
Set the Date output format.
setDefaultCtgView script operation
Sets the given catalog view as the default catalog view.
setDefaultCtrView script operation
Sets the given category tree view as the default category tree view.
setDeployed script operation
Sets whether this Web Service is deployed. The setting will take effect upon saving.
setDesc script operation
Sets the description of the given Web Service.
setDocAttribute script operation
Set the attribute in the document with the given name to the given value.
setDynamicSelectionQueryString script operation
Sets the query string for this dynamic selection.
setEditableAttributeGroups script operation
Sets the editable attribute groups for the workflow step for a given subViewType. The parameter subViewType can be 'ITEM_LOCATION', 'BULK_EDIT', 'ITEM_EDIT', 'CATEGORY_EDIT', or 'CATEGORY_BULK_EDIT'. The optional parameter locationHierarchyName is required when the subViewType is 'ITEM_LOCATION'. The WorkflowStep cannot be of type "SUCCESS", as it is hardwired that an Entry must validate against its Container Spec in order to leave the Success step.
setEntryAttrib script operation
Sets, in this entry, the attribute at the given path (spec_name/attribute_name) to the given value.
setEntryAttribValues script operation
Sets the values of the multi-value attribute at the given attribute path (spec_name/attribute_name), in this entry.
setEntryNode script operation
Returns the entry node with the given path relative to the given entry node. If the path is not already built, NULL is returned. Use the Entry::setEntryAttrib script operation to create a path that might not exist.
setEntryNodeRelationshipValue script operation
Set the value of the given entry node of type RELATIONSHIP to the related item identified by the given catalog and primary key.
setEntryNodeRelationshipValueUsingItem script operation
Set the value of the given entry node of type RELATIONSHIP to the given related item.
setEntryNodeValue script operation
Set the value of this entry node and return 1 if the value was set, 0 if nothing changed, and -1 if there was a type conversion error.
setEntryRelationshipAttrib script operation
Sets the attribute at the given path (spec_name/attribute_path) of type RELATIONSHIP on this entry to the related item represented by the given catalog and primary key.
setEntryRelationshipAttribUsingItem script operation
Sets the attribute at the given path (spec_name/attribute_path) of type RELATIONSHIP on this entry to the given item.
setEntryStatus script operation
Sets the status of the entry.
setExcelStyle script operation
Set the cell style for this cell.
setExitValue script operation
Set the exit value of an entry in a workflow step. Assumed to be called from an IN(), OUT(), or TIMEOUT() step script function.
setFillBackgroundColor script operation
Set the cell style fill Background color to that supplied.
setFillForegroundColor script operation
Set the cell style fill foreground color to that supplied.
setFillPattern script operation
Set the cell style fill pattern to that supplied.
setFont script operation
Set the cell style font to that supplied.
setFontHeight script operation
Set the cell font height.
setFontName script operation
Set the font name in the ExcelCellFont. The font name is accepted if it is a non-null String. The font names that are valid are those that are installed on the Windows system on which the spreadsheet is opened.
setHierarchyByNameToExport script operation
Specifies the hierarchy whose contents are to be exported.
setHierarchyMapToExport script operation
Specifies the source and destination hierarchies whose mappings need to be exported.
setHttpServletResponseHeader script operation
Sets the name value pairs specified in the HashMap into the header for the current HttpServletResponse.
setHttpServletResponseStatus script operation
Sets the status (one of the following valid values: 'SC_ACCEPTED', 'SC_OK', 'SC_CONTINUE', 'SC_PARTIAL_CONTENT', 'SC_CREATED', 'SC_SWITCHING_PROTOCOLS', 'SC_NO_CONTENT') for the current HttpServletResponse.
setIgnoreCategorySpecificAttributes script operation
Set whether or not category-specific attributes should be processed for the item.
setImplScriptPath script operation
Sets the docstore path of the implementation script for this Web Service.
setImplclass script operation
Sets the fully qualified name of the implementation class of the given Web Service.
setIndention script operation
Set the cell indent. The indent amount is the number of character positions to indent by.
setInheriting script operation
By default, or if the flag is true, set an item's location attribute to an unset value, which means that the attribute will inherit at this location.
setItalic script operation
Set the cell font to italic by passing true in, or non-italic by passing false in.
setItemAttributesFromXMLRepresentation script operation
Updates this item based upon an XML representation that is created by the XML parser in the WebSphere Portal Server portion of the IBM Product Master and WebSphere Portal Server integration.
setItemCategoryMapToExport script operation
Specifies the catalog and hierarchy whose item-category mappings need to be exported.
setItemLocationAttrib script operation
Sets the attribute sAttribPath (spec_name/attribute_name) of this item for the given location to sValue.
setItemLocationData script operation
Add item search data to a search selection. Data is added for locations for the given location tree as an array of full category paths. Use the given delimiter to delimit
the path elements and set rootIncluded to true if path includes the category tree root name. Use the optional append argument if you want to add to existing data.
setItemSetFetchLinkedItems script operation
Sets the item set to fetch or not fetch master linked items.
setItemSetFetchSize script operation
Sets the item set fetch size (that is, the number of items fetched in bulk each time).
setLdapDistinguishedName script operation
Sets the single distinguished name for an LdapEntry as an LdapAttribute Object.
setLdapEntryDn script operation
Sets the distinguished name field associated with an LDAP authenticated User.
setLdapOperation script operation
Adds an LdapOperation object to an LdapEntry.
setLdapServerUrl script operation
Sets the URL of the server providing this user's LDAP authentication.
setLeftBorderColor script operation
Set the cell style left border color to that supplied.
setLocalesForRole script operation
Sets the locales that this role has access to, for all containers.
setLocalized script operation
Sets the localized property of a spec.
setModifyLocationHierarchyAvailability script operation
Sets the 'modify location hierarchy availability' flag for a given location hierarchy in the given workflow step.
setMsgDoc script operation
Sets the Doc object for the message.
setName script operation
Sets the name of the given Web Service.
setNodeEditable script operation
Sets the node to be editable or non-editable according to the boolean argument.
setNodeIndexed script operation
Sets the node to be indexed or not.
setNodeName script operation
Renames the node in the given spec with the given path to newNodeName. Throws an exception if any of the following conditions is true: (i) there is no node at the given path in the given spec; (ii) there is a node at the given path in the given spec, but it is not a leaf node; (iii) there is a node at the given path in the given spec, but it is a primary key; (iv) there is already a node at the given path with newNodeName; (v) newNodeName is not a valid name.
setNumericCellValue script operation
Set a number as the cell value.
setOrdered script operation
Changes the catalog to allow ordering or not. Returns a flag on whether the update is successful or not.
setOutputAttribute script operation
Set an attribute of this writer - which becomes an attribute of the document this writer is flushed into, if any.
setOutputName script operation
Set the name of this writer - which becomes the name of the document this writer is flushed into, if any.
setPrimaryKey script operation
Sets the primary key value of this entry.
setPrimaryKeyPath script operation
Sets the primaryKeyPath of this spec to the given path. Throws an AustinException under any of the following conditions: 1. The spec is not a primary spec or
lookup spec. 2. The path does not exist in the spec. 3. The path refers to a node included from a SubSpec, and the node does not have minimum occurrence and
maximum occurrence both set to 1.
setProtocol script operation
Sets the protocol of the given Web Service.
setRequiredAttributeGroups script operation
Sets the required attribute groups for the workflow step for a given subViewType. The optional parameter locationHierarchyName is required when the subViewType is 'ITEM_LOCATION'. The WorkflowStep cannot be of type "SUCCESS", as it is hardwired that an Entry must validate against its Container Spec in order to leave the Success step.
setRightBorderColor script operation
Set the cell style right border color to that supplied.
setScriptContextValue script operation
Set the value of the variable named sVariableName. This is a way of defining a variable within a script, but it must be used with caution. There are already a number of implicit system-defined variables, and this script op should not be used to redefine any of these implicit variables. If an implicit variable is redefined, the results may be unpredictable. Note that there are a number of implicit variables whose names start with a $ sign. This script op must not be used to define any variables whose name starts with a $ sign; again, the results may be unpredictable if a variable whose name starts with a dollar sign is defined. The following is a list of the implicit variables (not including those whose name starts with a $ sign): all_itemset_fetch_linked_item, all_itemset_readonly, attribute_group, bypass_approval_workflow, catalog, category, category_tree, colArea, collaboration_area, container, destination_attribute, entry, entrynode, entrySet, err, err_lines, http_request, in, inputs, invoking_user, item, job, lkpTable, location, location_tree, locationRootEntryNode, logger, lookup_table, message, msg_attachments, multi_request, node, organization, organization_type, original_doc_folder, out, outs, page, page_layout, queueid, request, res, run_rule_per_occurence, save_event, sequence, soapFaultCode, soapFaultMsg, soapIncomingAttachments, soapMessage, SoapOperationName, soapOutgoingAttachments, soapParams, spec, spec_map, special_outs, specmap_script_dest_attrib, step, stepPath, this, top, val, workflow, workIndex, workList, wrn.
setScriptProgress script operation
Sets the percentage completed value in the context of a running script.
setScriptStatsDeletedCnt script operation
Sets the count of items deleted in the context of a running script.
setSelectionAccessControlGroupName script operation
Sets the Access Control Group name for the selection.
setSelectionHierarchy script operation
Sets the selection's hierarchy. Only applicable to static selections.
setSelectionName script operation
Sets the name of this selection.
setSequenceValueForMigration script operation
This operation is only there for migration of environments. Do not use for any other purpose.
setStepEntryTimeout script operation
Sets the timeout value on an entry to the given time. For the operation to succeed, the entry must be in the specified collaboration area and in the specified step. If these assumptions are not true, nothing is done (no timeout is set). There is no exception thrown and there is no change to the workflow.
setStoreIncoming script operation
Sets whether or not messages coming into the Web Service are to be stored.
setStoreOutgoing script operation
Sets whether this Web Service should store outgoing messages.
setStrikeout script operation
Set the cell style text to strikeout by passing true in, or non-strikeout by passing false in.
setStringCellValue script operation
Set a String as the cell value.
setStyle script operation
Sets the style of the given Web Service.
setTopBorderColor script operation
Set the cell style top border color to that supplied.
setTypeToExport script operation
Specifies the object type to be exported.
setUnderline script operation
Set the cell font underline.
setUserAddress script operation
Set the User's Address.
setUserEmail script operation
Set the User's Email Address.
setUserFax script operation
Set the User's Fax Number.
setUserFirstName script operation
Set the User's First Name.
setUserLastName script operation
Set the User's Last Name.
setUserLdapEnabled script operation
Sets the user as an LDAP user.
setUsername script operation
Sets the name of the current user.
setUserPhone script operation
Set the User's Phone Number.
setUserRoles script operation
Set the roles for the user. The user must have user creation capability to be able to use this operation.
setUserTimeZone script operation
Change the user's time zone setting with the offset value, from GMT, in minutes.
setUserTitle script operation
Set the User's Title.
setVerticalAlignment script operation
Set the cell style vertical alignment to that supplied.
setViewableAttributeGroups script operation
Sets the viewable attribute groups for the workflow step for a given subViewType. The parameter subViewType can be 'ITEM_LOCATION', 'BULK_EDIT',
'ITEM_EDIT', 'CATEGORY_EDIT', or 'CATEGORY_BULK_EDIT'. The optional parameter locationHierarchyName is required when the subViewType is
'ITEM_LOCATION'. WorflowStep cannot be of type "SUCCESS", as it is hardwired that an Entry must validate against its Container Spec in order to leave the Success
step.
setWflAccessControlGroup script operation
Sets access control group name of the workflow.
setWflDesc script operation
Sets the workflow description.
setWflName script operation
Sets the workflow name.
setWflStepAddEntries script operation
Sets value of 'allow import into step' flag.
setWflStepCategorizeEntries script operation
Sets value of 'allow recategorization' flag.
setWflStepDesc script operation
Sets the description for the workflow step.
setWflStepEntryNotification script operation
Sets up the notification emails that are sent when the item gets into the step. Email addresses must be separated by semicolons.
setWflStepExitValues script operation
Sets the exit values for the workflow step.
setWflStepPerformerRoles script operation
Sets the user roles for the workflow step.
setWflStepPerformerUsers script operation
Sets the users for the workflow step.
setWflStepReserveToEdit script operation
Sets the reserve for edit flag for a workflow step.
setWflStepScriptPath script operation
Sets up the workflow script path for this step. If no argument is passed, the default location is used (script/(workflow name)/(step name)). Note that this operation
does not check that the script is already loaded (it allows you to load the script later if needed).
setWflStepTimeoutDate script operation
Sets up the timeout date for the workflow step. This is a date which, when reached, causes the workflow step to time out.
setWflStepTimeoutDuration script operation
Sets the timeout duration for the workflow step. The duration is in seconds and is the period that an item may remain in a workflow step before being timed out.
setWflStepTimeoutNotification script operation
Sets up the notification emails that are sent when the step times out. Email addresses must be separated by semicolons.
setWidgetProperty script operation
Set the given property of this widget to the given value.
setWorkEntryMarked script operation
Marks/unmarks the given WorkEntry.
syncWorkEntryAt script operation
Sync the work entry at the specified index with its database copy.
setWrapText script operation
Set the cell style text to wrapping by passing true in, or non-wrapping by passing false in.
setWsddDocPath script operation
Sets the docstore path of the WSDD document for this Web Service. The caller must ensure that this does not overwrite the WSDD for any other service.
setWsdlDocPath script operation
Sets the docstore path of the WSDL document for this Web Service. The caller must ensure that this does not overwrite the WSDL for any other service.
setXMLNodeValue script operation
Sets the value of the xmlNode given by sPath. Creates the node if it doesn't exist.
setXMLNodeValues script operation
Sets the values of the xmlNode given by sPath. Creates the node if it doesn't exist.
size script operation
Returns the size of an Object of type array, HashMap or SearchResultSet.
size script operation
Return the number of records in the result set.
sleep script operation
Sleeps for the given number of milliseconds.
sort script operation
Sorts the array.
sortItemSet script operation
Sorts the ItemSet for performance.
splitLine script operation
Returns an array of tokens that is obtained by breaking the line by using the given parser, which can be a CSV parser, delimiter parser, or a fixed-width parser.
startAggregationByName script operation
Run the feed called sName on the file sDocPath. If this script operation is invoked within a database transaction, the script operation will no longer commit that
transaction during its execution. It will also throw an exception if a database problem occurs, where previously it might not have done so. This is to ensure that a
rollback can be carried out safely by the caller. If you were relying on this script operation's transactional effects you will need to adjust your script. See the product
documentation for more information about transactions and scripts.
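A minimal sketch (the feed name and docstore path are illustrative):

// Run the feed named "Nightly Item Feed" against a file already in the docstore.
startAggregationByName("Nightly Item Feed", "/public/feeds/nightly_items.csv");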
startBatchProcessingForUserDefinedLog script operation
Set up batch processing for the given User Defined Log. This operation is to be used mainly during import/mass update jobs.
startExportByName script operation
Runs the export identified by the given name.
startTimer script operation
Start the given timer.
startTransaction script operation
Executes the statements in a transaction; a rollback takes place if an error occurs. Does nothing if a transaction is already in progress.
stopBatchProcessingForUserDefinedLog script operation
Stop batch processing for the given User Defined Log. This operation is to be used mainly during import/mass update jobs.
stopTimer script operation
Stop the given timer.
startsWith script operation
Tests if this string begins with an occurrence of the match substring.
stopJob script operation
Stops the specified job if it is currently running.
stripOutNonASCII script operation
Returns a new string resulting from removing all non-ASCII characters in this string.
substitute script operation
Substitutes a string, using the regular expression, in another string. This method works like the Perl function of the same name. Given a regular expression of "a*b", a String to substituteIn of "aaaabfooaaabgarplyaaabwackyb", and the substitution String "-", the resulting String returned by substitute would be "-foo-garply-wacky-". Returns: the string substituteIn with zero or more occurrences of the current regular expression replaced with the substitution String (if this regular expression object doesn't match at any position, the original String is returned unchanged).
substring script operation
Returns a new string that is a substring of this string. The beginIndex is inclusive but endIndex is not.
throwError script operation
Use to throw a Java-like exception. This operation is usually used in conjunction with the catchError operation.
throwValidationError script operation
Sets up a validation error that shows up in the validation errors in the end user interface and in the list of errors returned when an entry is saved in a script.
ERRORTYPE should be one of "UNIQUENESS", "VALIDATION_RULE", "PATTERN", "MIN_OCCURENCE", "LENGTH".
today script operation
Returns the current date and time.
toDouble script operation
Parses str as a Double.
toInteger script operation
Parses str as an Integer.
toLowerCase script operation
Converts all of the characters in this string to lower case using the rules of the default locale.
toString script operation
This method may not render all of the information in an object to the string, so it cannot be relied upon as a serialization mechanism for anything except the most primitive of objects.
toTitleCase script operation
Converts the first alphabetic characters of each string of characters to upper case. A string of characters is delimited by any non-alphabetic character.
toUpperCase script operation
Converts all of the characters in this string to upper case using the rules of the default locale.
trim script operation
Removes white space from both ends of the string.
unescapeHTMLEntities script operation
Translates all characters escaped with HTML character codes to the corresponding characters, for example, out.writeln(unescapeHTMLEntities("&apos;a a&apos;")); note that &apos; is not translated.
unlockColArea script operation
Unlocks the Collaboration Area so that entries can be checked out into it again. Returns true or false depending on whether the unlock was successful or not.
unzip script operation
Decompresses the zip file located in the file system at the file system path given by srcPath, into the directory given by dstPath.
urlEncode script operation
Translates the given string into URL-encoded format. Blanks between characters are converted to "+" signs and special characters are encoded; for example, a ',' is encoded as '%2C'. This is known as the x-www-form-urlencoded format.
useTransaction script operation
Executes the statements in a transaction; a rollback takes place if an error occurs. If a transaction is already in progress, that transaction is committed and a new transaction is started.
userDefinedLogAddEntry script operation
Add an entry to the user defined log. If a message is specified, it is set on the UserDefinedLogEntry. If a category is provided, the log entries are restricted to that category.
userDefinedLogDelete script operation
Remove the user defined log object from the database. This action also drops all entries in the log.
userDefinedLogDeleteEntriesFor script operation
Delete all log entries for an entry from the user defined log.
userDefinedLogDeleteEntry script operation
Delete a particular entry from the user defined log.
userDefinedLogEntryGetDate script operation
Get the date of the user defined log entry.
userDefinedLogEntryGetTarget script operation
Get the entry object of the user defined log entry. If CONTAINERISCATALOG is true or is left unspecified, the entry must be in a catalog. If CONTAINERISCATALOG is
false, the entry must be in a hierarchy.
userDefinedLogEntryGetValue script operation
Get the value of the user defined log entry.
userDefinedLogEntrySetDate script operation
Set the date of the user defined log entry.
userDefinedLogEntrySetValue script operation
Set the value of the user defined log entry.
userDefinedLogGetContainer script operation
Get the container that is logged by the user defined log.
userDefinedLogGetDescription script operation
Get the description of the user defined log.
userDefinedLogGetEntriesFor script operation
Get all log entries for an entry from the user defined log. The category can be provided in order to get only the logs associated with that category.
userDefinedLogGetName script operation
Get the name of the user defined log.
userDefinedLogIsRunningLog script operation
Returns whether this user defined log is a running-log.
userDefinedLogSetDescription script operation
Set the description of the user defined log. NOTE: You need to call insertUserDefinedLog/saveUserDefinedLog to persist this change.
userDefinedLogSetName script operation
Set the name of the user defined log. NOTE: You need to call insertUserDefinedLog/saveUserDefinedLog to persist this change.
validateMappedAttribs script operation
Validate a set of attribute values indexed by their mapped path against the destination spec.
validateUser script operation
Validates the User based on Username, User Password, and User Company Code.
validateXML script operation
Validates an XmlDocument from a docstore Doc instance. Returns "Success" if it is a valid XML Document. Returns "Document not found" if the XML Document is not found in the DocStore. Returns "Document is empty" if the XML Document is empty. Returns "Fatal Parsing Error" concatenated with the error description for a non-XML Document. Returns "Error" for any other error.
while script operation
Repeats the execution of a set of instructions as long as a condition is true.
write script operation
Writes o as a string into this writer.
writeBinaryFile script operation
Pipes the docstore file represented by sOrigFilePath into a new Doc of name sDestFileName in the directory of the current transaction instance.
writeDoc script operation
Appends doc as a string into this writer.
writeFile script operation
Pipes the docstore file named by sFilePath into this writer.
writeFileUsingOut script operation
Pipes w into this writer. Close the writer when you are done using it.
writeFileUsingReader script operation
Pipes r into this writer.
writeln script operation
Writes o as a string, followed by a new line, into this writer.
xmlDocToString script operation
Returns the value of the current XMLNode.
zip script operation
Compresses the files under the directory given by srcPath and creates zip format file at the path given by dstPath.
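A minimal sketch of the zip and unzip pair (the file system paths are illustrative):

// Compress a directory into an archive, then expand it elsewhere.
zip("/tmp/export_files", "/tmp/export_files.zip");
unzip("/tmp/export_files.zip", "/tmp/restored_files");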

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Global Data Synchronization configuration files


The Global Data Synchronization configuration files contain the system configurations that you use to set up and customize Global Data Synchronization.

The required parameters are configured in the env_settings.ini file, which is used to populate the Global Data Synchronization feature properties files at system setup. The
configuration files for global data synchronization are then generated and usually you do not need to edit them.

For advanced features, you can edit the gds.properties, gds_system.properties, and properties.xml files to customize the functionality and performance of Global Data
Synchronization.

Important: You need to manually copy the $TOP/etc/default/gdsSupplySideTradeItemFunctions.properties.default file and rename it to gdsSupplySideTradeItemFunctions.properties.

Editing the GDS configuration files


You set the required parameters during system setup in the env_settings.ini file, which is used to populate the Global Data Synchronization feature properties files.
For advanced features, you can edit the gds.properties and gds_system.properties files to customize the functionality and performance of Global Data
Synchronization.
Viewing system properties
You can view system properties from the Admin UI.
gds.properties file
The gds.properties configuration file contains the parameters of the Global Data Synchronization feature, where you can customize the features and the user
interface.
gds_system.properties file
The gds_system.properties configuration file contains the parameters of Global Data Synchronization where you can define the core functions and appearance of
the user interface.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Editing the GDS configuration files


You set the required parameters during system setup in the env_settings.ini file, which is used to populate the Global Data Synchronization feature properties files. For
advanced features, you can edit the gds.properties and gds_system.properties files to customize the functionality and performance of Global Data Synchronization.

About this task


Most parameters have a generic default value that is preset in the gds.properties and gds_system.properties files, but you can specify custom values to fit your needs.

Procedure
1. Open the file to be edited.
2. Update the configuration values (see the example after this procedure).
3. Close the file.
4. Stop the Product Master and Global Data Synchronization services.
5. Restart the services.
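For example, to show records for the past 30 days instead of the default 15, you could set the daysInPast parameter (described later in this section) in $TOP/etc/default/gds.properties before restarting the services:

daysInPast=30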

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

Viewing system properties
You can view system properties from the Admin UI.

About this task


You can view the following types of system properties:

common.properties
gds.properties

Procedure
1. From the menu bar, select Administration > View System Properties.
2. Click one of the properties files to view.
The file opens for viewing. You cannot edit the displayed file from the Administration Console.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

gds.properties file
The gds.properties configuration file contains the parameters of the Global Data Synchronization feature, where you can customize the features and the user interface.

When the system is started, the gds.properties file is read from the $TOP/etc/default directory to initialize each property. Because properties are loaded during startup, any changes to the gds.properties file require you to restart your system before your changes are applied.

Each property is made up of a parameter and parameter value. For most parameters, a suggested default value is preset in the gds.properties file. With the gds.properties
file, you can modify each parameter value based on performance recommendations or usage preferences.
Important: For many of the system required parameters, Global Data Synchronization reverts to a system default value that might differ from the value you previously
specified in the gds.properties file when a parameter value is found to be one of the following:

Invalid
Missing
Incorrectly commented out

The following list defines each type of parameter value:

Default value
A suggested generic value that is preset in the gds.properties file template that comes bundled with the product.
User defined value
A value that you defined, which replaces the default value in the gds.properties file.
Recommended value
A recommended value for a parameter in the gds.properties file that can be appropriate to your situation regarding performance and function.

Note: Some parameters require you to set user-defined values as a preproduction step to optimize your configuration. Any changes to the gds.properties file require you to restart your system before the changes are applied.

gds.properties parameters
The gds.properties parameters define the properties of the Global Data Synchronization feature. Each property is made up of a parameter and parameter values and is documented by category.

daysInPast parameter
The daysInPast parameter specifies that records for the past n days from the current date are displayed, where n is the user input.
userIdInXML parameter
The userIdInXML parameter specifies the User ID in the XML format.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

daysInPast parameter
The daysInPast parameter specifies that records for the past n days from the current date are displayed, where n is the user input.

Parameter values
Value
Number of days
Default value
15

Example
In this example, the parameter displays records for the past 15 days from the current date.

daysInPast=15

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

userIdInXML parameter
The userIdInXML parameter specifies the User ID in the XML format.

Parameter values
Value
The User ID in the XML format.

Example
userIdInXML=XX.XXXX.XXXXXXX

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

gds_system.properties file
The gds_system.properties configuration file contains the parameters of Global Data Synchronization where you can define the core functions and appearance of the user
interface.

Whenever your system is started, the gds_system.properties file is read from the $TOP/etc/default directory to initialize each property. Because properties are loaded during startup, any changes to the gds_system.properties file require you to restart your system before your changes are applied.

Each property is made up of a parameter and parameter value. For most parameters, a suggested default value is preset in the gds_system.properties file. With the
gds_system.properties file, you can modify each parameter value based on performance recommendations or usage preferences.
Important: For many of the system required parameters, Global Data Synchronization reverts to a system default value that might differ from the value you previously specified in the gds_system.properties file when a parameter value is found to be one of the following:

Invalid
Missing
Incorrectly commented out

The following list defines each type of parameter value:

Default value
A suggested generic value that is preset in the gds_system.properties file template that comes bundled with the product.
User defined value
A value that you defined, which replaces the default value in the gds_system.properties file.
Recommended value
A recommended value for a parameter in the gds_system.properties file that can be appropriate to your situation regarding performance and function.

Note: Some parameters require you to set user-defined values as a preproduction step to optimize your configuration. Any changes to the gds_system.properties file require you to restart your system before the changes are applied.

gds_system.properties parameters
The gds_system.properties parameters define the system-level properties of Global Data Synchronization. Each property is made up of parameters and parameter values and is documented by category.

Validation scripts path parameters


The Validation scripts path parameters specify the scripts that are used to perform compliance checks.
Important: Do NOT change the value of these properties.

TRADE_ITEM_VALIDATION_SCRIPT_PATH parameter
The TRADE_ITEM_VALIDATION_SCRIPT_PATH parameter specifies the location of the script used for the compliance check of the trade item.

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

TRADE_ITEM_VALIDATION_SCRIPT_PATH parameter
The TRADE_ITEM_VALIDATION_SCRIPT_PATH parameter specifies the location of the script used for the compliance check of the trade item.

Important: Do NOT change the value of this property.

Parameter values
Value
<Path of the trade item validation script>
Default value
None

Example
In this example, the location of the script is scripts/distribution/Trade Item Validation.

TRADE_ITEM_VALIDATION_SCRIPT_PATH=scripts/distribution/Trade Item Validation

IBM Product Master 12.0 Fix Pack 8

Operating Systems: AIX, Linux, and Windows (Workbench only)

IBM Product Master on Cloud


IBM® Product Master on Cloud for Bluemix® offers the rich features of an on-premises Product Master deployment without the cost, complexity, and risk of managing your own infrastructure. Product Master on Cloud provides preinstalled Product Master configurations for production and development environments in an IBM SoftLayer® cloud-hosted environment.

After you request a service and receive your welcome letter with the service details, see the following specifications.

The following three plans are available for Product Master on Cloud:

IBM Product Master on Cloud Small
IBM Product Master on Cloud Medium
IBM Product Master on Cloud Large

Production instance
The hardware configuration for a production instance is as follows:

System Type: Virtual Server
Backup Server: IBM Spectrum Protect
Cores per node: 16
RAM (GB) per node: 64
Disk storage (GB) with 10 IOPS/GB: 2 TB (Small), 3 TB (Medium), 4 TB (Large)

                                   Small                      Medium                     Large
Plan                               Application  Database      Application  Database      Application  Database
                                   Server       Tier          Server       Tier          Server       Tier
Number of nodes                    3            2             3            2             3            2
Cores per node                     4            8             4            16            4            16
RAM (GB) per node                  8            32            8            64            8            64
Number of Worker nodes             2                          2                          4
Cores per Worker node              8                          16                         16
RAM (GB) per Worker node           16                         32                         32
File Storage (GB) with 5 IOPS/GB   300          600           500          600           500          750

Non-production instance
Product Master on Cloud plans also provide an additional non-production IBM Product Master instance for Development, Staging, and QA.

The hardware configuration is as follows:

System Type: Virtual Server

                                   Small (Staging)        Medium (Staging)       Large (Development, QA)  Large (Staging)
Plan                               App Server  DB Tier    App Server  DB Tier    App Server  DB Tier      App Server  DB Tier
Number of nodes                    1           1          1           1          1           1            3           2
Cores per node                     4           4          4           8          4           16           4           16
RAM (GB) per node                  8           16         8           32         8           64           8           64
Number of Worker nodes             2                      2                      2                        4
Cores per Worker node              8                      8                      16                       16
RAM (GB) per Worker node           16                     16                     32                       32
File Storage (GB) with 5 IOPS/GB   300         300        300         300        300         300          300         600