
Tivoli Service Automation Manager

Version 7.2.4.4
Installation and Administration Guide
SC34-2657-08


Note
Before using this information and the product it supports, read the information in "Notices" on page 525.
Edition notice
This edition applies to IBM Tivoli Service Automation Manager Version 7 Release 2 Modification Level 4 Fix Pack 4
(program number 5724W78), available as a licensed program product, and to all subsequent releases and
modifications until otherwise indicated in new editions.
This edition replaces SC34-2657-07 and any previous editions.
IBM Tivoli Service Automation Manager is also a key part of the software delivered with IBM CloudBurst, an
integrated hardware and software appliance offering.
Order publications through your IBM representative or the IBM branch office serving your area. Publications are
not stocked at the addresses given below.
Address comments on this publication to:
IBM Systems and Technology Group
Systems Software Development
Rajiv Gandhi Infotech Park Phase 2
Plot No. PL-3, MIDC, Hinjewadi,
Village limit of Marunji,
Pune, Maharashtra
India - 411057
Make sure to include the following in your comment or note:
- Title and order number of this book
- Page number or topic related to your comment
When you send information to IBM, you grant IBM a non-exclusive right to use or distribute the information in any
way it believes appropriate without incurring any obligation to you.
Copyright IBM Corporation 2008, 2012.
US Government Users Restricted Rights - Use, duplication or disclosure restricted by GSA ADP Schedule Contract
with IBM Corp.
Contents
Tables . . . . . . . . . . . . . . . xi
Preface . . . . . . . . . . . . . . xiii
Who should read this information . . . . . . xiii
Useful links . . . . . . . . . . . . . . xiii
Support information . . . . . . . . . . . xiv
Getting technical training . . . . . . . . xiv
Searching knowledge bases . . . . . . . . xiv
Searching the Internet . . . . . . . . xiv
Using IBM Support Assistant . . . . . . xv
Finding product fixes . . . . . . . . . xv
Getting email notification of product fixes . . xv
Contacting IBM Software Support . . . . . xvi
Setting up a software maintenance contract xvi
Determine the business impact . . . . . xvii
Describe the problem and gather background
information . . . . . . . . . . . . xvii
Submit the problem to IBM Software
Support . . . . . . . . . . . . . xvii
Chapter 1. Tivoli Service Automation
Manager overview . . . . . . . . . . 1
Product components . . . . . . . . . . . . 2
Tivoli Service Automation Manager Installation
Launchpad . . . . . . . . . . . . . . 2
Self-Service Virtual Server Management . . . . 2
User interfaces. . . . . . . . . . . . . 3
Applications in the administrative user interface . 3
Service Definitions application . . . . . . 4
Service Deployment Instances application . . 4
Resource Allocation applications. . . . . . 4
Monitoring Definition applications for
WebSphere Cluster service. . . . . . . . 4
Situation Analysis application . . . . . . 5
Cloud Server Pool Administration application 5
Cloud Storage Pool Administration application 5
Cloud Network Administration application . . 5
Cloud Customer Administration application . . 6
Cloud Maintenance Administration application 7
Service Topology application . . . . . . . 7
Service Update Package application. . . . . 8
Service Topology Node applications . . . . 8
IT Topology Work Orders application . . . . 8
Auxiliary applications . . . . . . . . . 9
WebSphere Cluster Service. . . . . . . . . 9
Service topology node attributes . . . . . 13
Performance monitoring support for the
WebSphere Cluster service . . . . . . . 16
Service structure. . . . . . . . . . . . . 17
Service provider support . . . . . . . . . . 18
Reporting function . . . . . . . . . . . . 20
Tivoli Service Automation Manager report types
and content . . . . . . . . . . . . . 20
Tivoli Usage and Accounting Manager reporting
function . . . . . . . . . . . . . . 21
Additional software installation on the provisioned
servers . . . . . . . . . . . . . . . . 21
VMware additional disk functionality . . . . . 22
VMware clone server functionality . . . . . . 22
Managing POWER LPAR provisioning with
VMControl . . . . . . . . . . . . . . 23
Image management. . . . . . . . . . . . 23
Workload Deployer overview . . . . . . . . 24
Maintenance mode overview . . . . . . . . 25
Chapter 2. Installing and upgrading
Tivoli Service Automation Manager . . 27
Overview of the Tivoli Service Automation Manager
installation process . . . . . . . . . . . . 27
Planning for Tivoli Service Automation Manager . . 28
Hardware and operating system requirements for
Tivoli Service Automation Manager . . . . . 28
Software requirements for Tivoli Service
Automation Manager . . . . . . . . . . 36
Web browser settings . . . . . . . . . 39
Requirements for Self-Service Virtual Server
Management . . . . . . . . . . . . . 40
Requirements for the System z environment . . 40
Providing the installation source files . . . . . 42
Restrictions and limitations . . . . . . . . 44
Installing Tivoli Service Automation Manager . . . 45
Full Installation of Tivoli Service Automation
Manager 7.2.4.4 . . . . . . . . . . . . 45
Preparing the environment for installation . . . 48
Preparing an AIX management server . . . 48
Verifying settings required for installing the
middleware on an AIX management server. 49
Verifying the settings required to install
Tivoli Provisioning Manager on an AIX
management server. . . . . . . . . 50
Packages required on an AIX management
server . . . . . . . . . . . . . 55
Preparing a Linux management server . . . 58
Verifying the settings required to install the
middleware on a Linux management server 59
Verifying the settings required to install
Tivoli Provisioning Manager on a Linux
management server. . . . . . . . . 60
Packages required on a Linux management
server . . . . . . . . . . . . . 64
Preparing a Windows administrative server . 66
Preparing a Linux administrative server . . . 66
Preparing an AIX administrative server . . . 67
Starting the launchpad and performing the
preinstallation steps . . . . . . . . . . 67
Installing Tivoli Service Automation Manager
and its prerequisite software. . . . . . . . 68
Installation defaults for Tivoli Service
Automation Manager . . . . . . . . . 69
Installing the Tivoli Service Automation
Manager license . . . . . . . . . . . 70
Installing the middleware . . . . . . . 71
Installing base services . . . . . . . . 72
Installing the Tivoli Provisioning Manager
core components . . . . . . . . . . 73
Installing the Tivoli Provisioning Manager
Web components . . . . . . . . . . 74
Installing Tivoli Service Request Manager . . 75
Installing the Advanced Workflow
Components . . . . . . . . . . . . 77
Installing Tivoli Provisioning Manager 7.2.1
Interim Fix 5 core components . . . . . . 78
Installing Tivoli Provisioning Manager 7.2.1
Interim Fix 5 Web components . . . . . . 78
Installing the Tivoli Service Automation
Manager applications . . . . . . . . . 79
Installing additional configuration files . . . 80
Installing the automation packages for Tivoli
Service Automation Manager . . . . . . 81
Post-installation steps . . . . . . . . . . 82
Installing optional software . . . . . . . . 83
Verifying the integrated installation . . . . . 84
Upgrading Tivoli Service Automation Manager from
7.2.2.1, 7.2.2.2, 7.2.3, 7.2.4, 7.2.4.1, 7.2.4.2, 7.2.4.3 to
7.2.4.4 . . . . . . . . . . . . . . . . 85
Starting the launchpad and performing the
preinstallation steps . . . . . . . . . . 85
Uninstalling the additional disk extension for
VMware . . . . . . . . . . . . . . 86
Installing the product . . . . . . . . . . 88
Installing the Advanced Workflow Components 88
Installing Tivoli Provisioning Manager 7.2.1
Interim Fix 5 core components . . . . . . . 89
Installing Tivoli Provisioning Manager 7.2.1
Interim Fix 5 Web components . . . . . . . 89
Performing the post-installation steps. . . . . 90
Finishing the upgrade . . . . . . . . . . 90
Mandatory data migration when upgrading from
Tivoli Service Automation Manager 7.2.2.1 to
7.2.4.4 . . . . . . . . . . . . . . . 91
Upgrading the service instance to the new
revision. . . . . . . . . . . . . . 92
Upgrading network . . . . . . . . . 92
Defining an IP address selection rule . . . 92
Running IP address discovery for
operational servers . . . . . . . . . 93
Migrating restored VMware servers . . . . 94
Migrating VMware Additional Disks . . . . 95
Mandatory data migration when upgrading from
Tivoli Service Automation Manager 7.2.2.2 to
7.2.4.4 . . . . . . . . . . . . . . . 95
Upgrading the service instance to the new
revision. . . . . . . . . . . . . . 96
Mandatory data migration when upgrading from
Tivoli Service Automation Manager 7.2.3 to
7.2.4.4 . . . . . . . . . . . . . . . 97
Mandatory data migration when upgrading from
Tivoli Service Automation Manager 7.2.4 to
7.2.4.4 . . . . . . . . . . . . . . . 98
Mandatory data migration when upgrading from
Tivoli Service Automation Manager 7.2.4.1 to
7.2.4.4 . . . . . . . . . . . . . . . 99
Mandatory data migration when upgrading
from Tivoli Service Automation Manager 7.2.4.2
to 7.2.4.4 . . . . . . . . . . . . . . 100
Mandatory data migration when upgrading
from Tivoli Service Automation Manager 7.2.4.3
to 7.2.4.4 . . . . . . . . . . . . . . 101
Enabling Tivoli Monitoring agent. . . . . . 102
Configuring IBM HTTP Server . . . . . . . 103
Configure the web server to handle HTTP
requests . . . . . . . . . . . . . . 103
Configure the web server to handle HTTPS
requests . . . . . . . . . . . . . . 104
Optional system settings after installation . . . . 106
Disabling Tivoli Provisioning Manager Software
Distribution Infrastructure . . . . . . . . 106
Additional tools and system health options . . . 106
Tools for the Management Server. . . . . . 106
Tools for the Administrative Server . . . . . 106
Tools for the Administrative and Management
Server . . . . . . . . . . . . . . . 107
Uninstalling . . . . . . . . . . . . . . 107
Uninstalling Tivoli Service Automation Manager
components . . . . . . . . . . . . . 107
Uninstalling Tivoli Provisioning Manager core
components . . . . . . . . . . . . . 108
Uninstalling base services other runtime
services. . . . . . . . . . . . . . . 108
Uninstalling middleware . . . . . . . . 109
Chapter 3. Configuring Tivoli Service
Automation Manager . . . . . . . . 111
Overview of the configuration process . . . . . 111
Planning your configuration . . . . . . . . 112
Planning the hypervisor configuration . . . . 113
New capabilities of VMware vSphere 5 . . . 114
Planning the network configuration . . . . . 116
Network templates . . . . . . . . . 116
Customer-specific network configuration . . 117
Network annotation of deployable images 118
Network configuration . . . . . . . . 118
Network configuration strategies . . . . 118
Management network structure overview 120
The multi-NIC networking model on System
p . . . . . . . . . . . . . . . 124
VIO support. . . . . . . . . . . 126
VIO Shared Ethernet Adapter
management . . . . . . . . . . 137
Virtual switch template properties for
System p . . . . . . . . . . . . 138
Planning for System p configuration. . . 140
Troubleshooting . . . . . . . . . 140
Definitions for DCM objects . . . . . . 141
Network planning for VMControl . . . . 144
Sample OVF descriptor file for AIX using
NIM . . . . . . . . . . . . . 145
Sample OVF descriptor file for AIX using
SCS . . . . . . . . . . . . . 148
Preparing the provisioning back ends . . . . . 151
Configuring the z/VM environment for Tivoli
Service Automation Manager . . . . . . . 151
Introduction to the z/VM environment . . . 151
Setting up z/VM for Linux provisioning . . 152
Modifying the z/VM System
Configuration file . . . . . . . . . 152
System DASD . . . . . . . . . . 153
z/VM network . . . . . . . . . . 153
Modifying system features . . . . . . 154
Updating the z/VM TCP/IP configuration 155
Network devices . . . . . . . . . 155
Home address and gateway . . . . . 156
Autolog statement . . . . . . . . . 156
Port statement . . . . . . . . . . 156
Defining the external IP address to the
TCP/IP stack . . . . . . . . . . 156
Setting up Directory Maintenance Facility for
z/VM (DirMaint) . . . . . . . . . . 157
Configuring the CONFIGxx DATADVH
file . . . . . . . . . . . . . . 157
Allocation groups . . . . . . . . . 158
Creating the MAPSRV and MAPAUTH IDs 158
Setting up Linux on System z . . . . . . 160
Defining virtual machines for Linux on
System z . . . . . . . . . . . . 160
Setting up a Linux on System z master
system . . . . . . . . . . . . 162
Verifying your configuration . . . . . 163
Using RACF with z/VM . . . . . . . 163
Permitting access to system resources . . 163
Configuring z/VM networking . . . . 164
Configuring DIRMAINT/DATAMOVE 164
Configuring VSMSERVE. . . . . . . 164
Configuring the KVM environment for Tivoli
Service Automation Manager . . . . . . . 166
Configuring the Xen environment for Tivoli
Service Automation Manager . . . . . . . 168
Preparing for an automatic Xen host install 169
Creating the Post Install Script file . . . . 170
Setting up Xen . . . . . . . . . . . 171
Configuring cloud server pools . . . . . . . 173
Configuring cloud server pools manually . . . 174
Manually configuring cloud server pools for
VMware . . . . . . . . . . . . . 174
Manually configuring cloud server pools for
KVM . . . . . . . . . . . . . . 177
Manually configuring cloud server pools for
System p . . . . . . . . . . . . . 179
Manually configuring cloud server pools for
Power Blades . . . . . . . . . . . 182
Manually configuring cloud server pools for
VMControl . . . . . . . . . . . . 185
Enhancing VMControl discovery
performance. . . . . . . . . . . 188
Enabling or Disabling NPIV Support via
VMControl . . . . . . . . . . . 188
Configuring cloud server pools for zVM . . 189
zVM cloud server pool configuration . . 189
Configuring zVM cloud server pools in
the Cloud Server Pool Administration
application . . . . . . . . . . . 190
Using Data Center Model (DCM) files to
configure cloud server pools . . . . . . . 191
Data Center Model (DCM) object templates 191
Customizing hypervisor independent Data
Center Model (DCM) items. . . . . . 193
Customizing hypervisor dependent Data
Center Model (DCM) items. . . . . . 197
Importing Data Center Model (DCM) object
templates. . . . . . . . . . . . . 202
Customizing cloud pool objects . . . . . 202
Customizing a Tivoli Service Automation
Manager cloud pool for VMware . . . . 204
Customizing a Tivoli Service Automation
Manager cloud pool for PowerVM . . . 206
Customizing a Tivoli Service Automation
Manager cloud pool for KVM . . . . . 212
Customizing a Tivoli Service Automation
Manager cloud pool for IBM Systems
Director VMControl . . . . . . . . 215
Configuring PowerVM SAN disks for VIOS
support using MPIO . . . . . . . . . . 216
Planning for SAN storage . . . . . . . 216
Enabling VIOS support . . . . . . . . 217
Configuring cloud networks . . . . . . . . 217
Setting up the network related Data Center
Model (DCM) objects. . . . . . . . . . 218
Defining network segment usage values . . . 219
Network segment usage values . . . . . 219
Relating an image to a hypervisor-specific
network configuration . . . . . . . . 220
Defining a specific network configuration for
a class of images . . . . . . . . . . 220
Distinguishing between network interfaces of
the same type . . . . . . . . . . . 220
Creating a network template . . . . . . . 221
Creating a network template using the Cloud
Network Administration application. . . . . 222
Creating a customer and assigning a network
template . . . . . . . . . . . . . . 223
Configuring distributed virtual switches . . . 223
Enabling IPv6 addressing support . . . . . 225
IPv6 addressing support. . . . . . . . 225
Enabling customer network for IPv6 auto
configuration . . . . . . . . . . . 227
Enabling IPv6 for the provisioning images 227
Enabling the IPv6 support on the
management server . . . . . . . . . 228
Disabling the IPv6 support on the
management server . . . . . . . . . 228
IP address selection rules . . . . . . . 229
Defining an IP address selection rule . . . 230
IPv6 properties and their layout . . . . . 230
Turning on VIO Shared Ethernet Adapter
management . . . . . . . . . . . . 231
Turning off VIO Shared Ethernet Adapter
management . . . . . . . . . . . . 232
Configuring cloud storage pools . . . . . . . 232
Configuring cloud storage resources. . . . . 234
Setting up purging options for storage disks on
System p . . . . . . . . . . . . . . 235
Purging LPARs . . . . . . . . . . . . 236
Changing the default purging configuration . . 237
Saving and restoring projects with storage
resources . . . . . . . . . . . . . . 238
Configuring VMware additional disk feature . . . 238
Configuring cloud storage pools . . . . . . 238
Support for single datastore . . . . . . . 240
Support for Storage vMotion . . . . . . . 240
Configuring the service provider and customer
features . . . . . . . . . . . . . . . 241
Assigning resources to the default customer . . 241
Activating the default customer PMRDPCUST 242
Configuring the interface to Workload Deployer 243
Configuring the managed environment to use the
WebSphere Cluster Service . . . . . . . . . 244
Configuring the DCM to use the WebSphere
Cluster Service . . . . . . . . . . . . 245
Configuring and running discovery on a Tivoli
Provisioning Manager server . . . . . . . 247
Defining configuration items for the WebSphere
Cluster Service . . . . . . . . . . . . 248
Processor Pool planning . . . . . . . . . . 249
Processor pool configuration in the backend . . 250
Processor Pool configuration in the Cloud
Server Pool Administration Application . . . 250
Service request parameters for processor pool
support . . . . . . . . . . . . . . 250
Advanced configuration settings . . . . . . . 251
Reducing the run time of provisioning requests 251
Overcommitting resources on VMware
hypervisor . . . . . . . . . . . . . 252
Overcommitting CPU . . . . . . . . 252
Overcommitting memory . . . . . . . 253
Overcommitting storage . . . . . . . . 254
VMware storage resiliency . . . . . . 255
Integrating Tivoli Service Automation Manager
with other Tivoli products . . . . . . . . . 255
Integrating Tivoli Monitoring . . . . . . . 255
Configuring the provisioning of the
monitoring agent . . . . . . . . . . 256
Preparing the monitoring agent installable 256
Defining the Tivoli Monitoring agent
software definition in Tivoli Provisioning
Manager . . . . . . . . . . . . 257
Enabling the monitoring agent installation
on restored images . . . . . . . . 261
Synchronizing data between Tivoli
Monitoring and Tivoli Service Automation
Manager . . . . . . . . . . . . 261
Configuring monitoring for the WebSphere
Cluster service . . . . . . . . . . . 263
Setting up predefined Tivoli Service
Automation Manager events for
monitoring . . . . . . . . . . . 263
Enabling the SSH command end point to
retrieve IBM Tivoli Monitoring
configuration information . . . . . . 264
Triggering the Tivoli Service Automation
Manager event monitoring application . . 265
Integrating Tivoli Usage and Accounting
Manager . . . . . . . . . . . . . . 266
CSR files . . . . . . . . . . . . . 266
Configuring Tivoli Service Automation
Manager for Tivoli Usage and Accounting
Manager . . . . . . . . . . . . . 267
Configuring for RXA connections between
Tivoli Service Automation Manager and
Tivoli Usage and Accounting Manager . . 268
Enabling table auditing for Tivoli Usage
and Accounting Manager data collection . 268
Enabling CSR file generation . . . . . 269
Defining the directory for CSR file
generation . . . . . . . . . . . 269
Enabling logs for metering . . . . . . 270
Configuring Tivoli Usage and Accounting
Manager to process CSR files . . . . . . 270
Configuring the Tivoli Usage and
Accounting Manager job file to retrieve
CSR files from Tivoli Service Automation
Manager . . . . . . . . . . . . 271
Configuring the Tivoli Usage and
Accounting Manager job file to process
CSR files from Tivoli Service Automation
Manager . . . . . . . . . . . . 272
Metering for additional disks . . . . . . 276
Metering for additional disks that are
created during earlier versions of Tivoli
Service Automation Manager . . . . . 276
Integrating with Tivoli Change and
Configuration Management Database (CCMDB) . 276
Configuration artifacts for integrating with
Tivoli Change and Configuration
Management Database . . . . . . . . 277
Configuring workflow integration . . . . 278
Configuring a Service Request Manager
offering for processing . . . . . . . . 279
Extensions to the Change Management
application . . . . . . . . . . . . 280
Tivoli Change and Configuration
Management Database Configuration Items . 281
The CreateOrUpdateAuthorizedCI
operation. . . . . . . . . . . . 281
The LinkAuthorizedCI operation . . . . 283
The UnlinkAuthorizedCI operation . . . 283
Deriving an operation specific to a
configuration item. . . . . . . . . 284
Using the configuration item operations 284
Configuring security against XSS . . . . . . 285
Chapter 4. Administering Tivoli
Service Automation Manager. . . . . 287
Logging on to the Tivoli Service Automation
Manager administrative interface . . . . . . . 287
Working with the service automation reports . . . 287
Configuring the reporting function . . . . . 288
Generating request pages . . . . . . . 288
Enabling table auditing . . . . . . . . 288
Authorizing users to access reports . . . . 289
Generating, viewing, and scheduling reports 290
Working with usage and accounting reports . . . 291
Project account and the account code structure 291
Generating Tivoli Usage and Accounting
Manager reports . . . . . . . . . . . 292
Managing cloud networks . . . . . . . . . 292
Network template management . . . . . . 293
Importing network related artifacts . . . . 293
Creating a network template . . . . . . 294
Viewing network segments . . . . . . . 295
Adding a network segment. . . . . . . 295
Deleting a network segment . . . . . . 296
Viewing subnetworks . . . . . . . . 296
Adding a subnetwork . . . . . . . . 296
Deleting a subnetwork . . . . . . . . 297
Viewing virtual switch templates . . . . . 297
Adding a virtual switch template. . . . . 298
Deleting a virtual switch template . . . . 298
Changing the status of a network template 299
Network configuration instance management 299
Viewing network configuration instances . . 300
Importing a customer network configuration
instance . . . . . . . . . . . . . 301
Exporting a customer network configuration
instance . . . . . . . . . . . . . 301
Viewing project network configuration
instances . . . . . . . . . . . . . 301
Importing project network configuration
instance . . . . . . . . . . . . . 302
Exporting a project network configuration
instance . . . . . . . . . . . . . 302
Viewing customers . . . . . . . . . . 303
Configuring overlapping subnetwork . . . . 303
Validating DCM subnetwork for overlapping
IP . . . . . . . . . . . . . . . 303
Managing virtual server resources . . . . . . 304
Increasing the maximum memory settings for
System p . . . . . . . . . . . . . . 304
Moving servers to another host for System p
cloud server pools. . . . . . . . . . . 305
Creating an administrative role for VMware
users . . . . . . . . . . . . . . . 305
Assigning provisioning workflows for VMware
additional disk feature . . . . . . . . . 306
Assigning provisioning workflow to an ESX
server . . . . . . . . . . . . . . 306
Assigning provisioning workflows to
multiple ESX servers . . . . . . . . . 307
Managing server images. . . . . . . . . . 307
Creating operating system image templates . . 307
Creating operating system image templates
for KVM . . . . . . . . . . . . . 307
Preparing a Windows image . . . . . 308
Preparing a Linux image . . . . . . 309
Creating operating system image templates
for VMware . . . . . . . . . . . . 312
Preparing a Windows image . . . . . 312
Preparing a Linux image . . . . . . 318
Creating operating system image templates
for PowerVM . . . . . . . . . . . 324
Preparing an AIX image . . . . . . . 324
Discovering single virtual server image
templates. . . . . . . . . . . . . . 324
Preparing OS image templates for Tivoli Service
Automation Manager. . . . . . . . . . 325
Adding a new image from VMControl . . . . 326
Migrating a VMControl image from 2.3 level
to 2.4.1 or 2.4.2 level . . . . . . . . . 326
Deleting a server image . . . . . . . . . 327
Storing server images . . . . . . . . . 327
Registering single master images to different
VMware clusters and server resource pools . . 327
Multiplying VMware template metadata . . . 328
Enabling the restore across project offerings . . 329
Controlling user access . . . . . . . . . . 330
Security in the administrative user interface . . 330
Security management in the self-service user
interface . . . . . . . . . . . . . . 333
Security groups in the self-service user
interface . . . . . . . . . . . . . 334
Data segregation for service providers . . . . 336
Administering customers and their resources . . . 338
Assigning resources to a customer . . . . . 338
Assigning resources to all customers . . . 340
Returning customer resources . . . . . . . 340
Creating customer templates . . . . . . . 341
Defining quotas and limits . . . . . . . . 342
Activating and deactivating quotas and limits 343
Managing the maintenance mode. . . . . . . 344
Maintenance log . . . . . . . . . . . 344
Maintenance log attributes . . . . . . . 344
Maintenance mode statuses. . . . . . . . 345
Activating the maintenance mode . . . . . 346
Deactivating the maintenance mode . . . . . 346
Forcing the maintenance mode . . . . . . 347
Maintenance mode extensibility . . . . . . 347
Managing request approval, delegation, and
notification . . . . . . . . . . . . . . 348
Communication templates for email notification 348
Managing communication templates. . . . 350
Enabling or disabling automatic approval of
requests . . . . . . . . . . . . . . 350
Enabling or disabling delegation of approval
requests . . . . . . . . . . . . . . 351
Adding new software modules . . . . . . . 352
Starting and stopping the middleware . . . . . 353
Starting the management server . . . . . . 353
Stopping the management server . . . . . . 354
Controlling the middleware with a script . . . 355
Backing up the database. . . . . . . . . . 357
Changing default passwords for Tivoli Service
Automation Manager. . . . . . . . . . . 358
Change the passwords for IBM Tivoli Directory
Server . . . . . . . . . . . . . . . 358
Change the idsccmdb user password . . . 358
Change idsccmdb password in Tivoli
Directory Server . . . . . . . . . . 358
Changing the passwords for Tivoli Provisioning
Manager user IDs . . . . . . . . . . . 359
Change password for wasadmin . . . . . 360
Change wasadmin password in Tivoli
Directory Server . . . . . . . . . . 360
Verify Tivoli Provisioning Manager and
WebSphere . . . . . . . . . . . . 361
Change the maxadmin user password . . . 361
Change the Maximo user password . . . . 362
Change maximo.properties . . . . . . . 363
Update properties.jar . . . . . . . . . 365
Verify password change . . . . . . . . 366
Change OS user ID passwords . . . . . 367
Change the cloud administrator
(PMRDPCAUSR) password. . . . . . . . 367
Updating Tivoli Provisioning Manager certificate in
the Java truststore . . . . . . . . . . . . 368
Using virtual servers for SAP landscapes . . . . 369
Using the Admin Mode . . . . . . . . . . 372
Chapter 5. Reliability, availability, and
serviceability functions . . . . . . . 373
REST API reference for RAS . . . . . . . . 374
Delete DCM virtual servers on given host
platform . . . . . . . . . . . . . . 374
List virtual servers on given host platform . . 375
List virtual servers on given host platform and
in DCM . . . . . . . . . . . . . . 376
Force cleanup of service deployment instance 377
List service deployment instance backend
resources . . . . . . . . . . . . . . 378
List service deployment instance data model
inconsistencies . . . . . . . . . . . . 381
Validation checks . . . . . . . . . . 382
TOPO_VS_DCM validation checks . . . . . 383
TOPO_VS_BC validation checks . . . . . 383
BC_VS_DCM validation checks . . . . . 384
Provide service request current status . . . . 385
Force cleanup of service deployment instance
and back end . . . . . . . . . . . . 385
BIRT reports. . . . . . . . . . . . . . 387
Virtual server synchronization report . . . . . 388
Service deployment instance inconsistencies report 388
Ticket related objects report . . . . . . . . 389
Best practices for using reliability, availability, and
serviceability functions . . . . . . . . . . 390
Rolling back the resource workflow
modification process . . . . . . . . . . 390
Removing a host platform . . . . . . . . 391
Tracking VM Provisioning request by DCM
objects. . . . . . . . . . . . . . . 391
Checking the service request status . . . . . 391
Unlocking hanging requests . . . . . . . 392
Insufficient resources on the Tivoli Service
Automation Manager self-service user interface . 392
Removing phantom servers. . . . . . . 392
Freeing resources locked to deprecated or
inconsistent projects . . . . . . . . . 393
Chapter 6. REST API reference . . . . 395
Structure of the REST URLs used for the 'query'
request . . . . . . . . . . . . . . . 395
Tivoli Process Automation engine base object
queries . . . . . . . . . . . . . . . 397
Get domain definitions (MAXDOMAIN) . . . 397
Get list of installed and enabled languages
(LANGUAGE) . . . . . . . . . . . . 399
Get list of person groups (PERSONGROUP) . . 400
Get person group details (PERSONGROUPDET) 400
Get list of users (MAXUSER) . . . . . . . 401
Get list of user details (MAXUSERDET) . . . 402
Get list of security groups (MAXGROUP) . . . 404
Get security group users (GROUPUSER) . . . 405
Get Images (IMGLIB). . . . . . . . . . 406
Get system property values (MAXPROPVALUE) 407
Tivoli Service Automation Manager based or
modified object queries . . . . . . . . . . 408
Get project-related data
(PMZHBR1_PMRDPPRJVIEW) . . . . . . 408
Get server-related data
(PMZHBR1_PMRDPSRVVIEW) . . . . . . 411
Get storage-related data
(PMZHBR1_PMRDPSTGVIEW) . . . . . . 412
Get the current defined offerings
(PMRDPOFFVIEW) . . . . . . . . . . 414
Get person to group mapping
(PERSONGROUPTEAM) . . . . . . . . 415
(DEPRECATED) Get person to group mapping
details (PERSGRPTMDET) . . . . . . . . 416
Get person groups with person not yet in team
(PMZHBR1_PERSNTINTEAM) . . . . . . 417
Information to calculate user request frequency
(PMZHBR1_FREQUENTREQ) . . . . . . . 418
Network configuration API . . . . . . . . 418
Get network template . . . . . . . . 419
Get matching network segments . . . . . 420
Get network segments matching the
customer configuration . . . . . . . 420
Get network segments matching the
project configuration . . . . . . . . 421
Constructor for the network API client . . . 422
The Get Network configuration method . . . 423
The Set Network configuration method . . . 423
Network schema . . . . . . . . . . 424
Complex type NetworkConfiguration . . 424
Complex type DNS . . . . . . . . 425
Complex type Gateway . . . . . . . 426
Complex type NetworkSegment . . . . 427
Complex type Reference. . . . . . . 430
Complex type Route . . . . . . . . 430
Complex type Routes. . . . . . . . 431
Complex type Subnet. . . . . . . . 431
Complex type VlanId. . . . . . . . 432
Complex type VrfId . . . . . . . . 433
Simple type Type . . . . . . . . . 433
Simple type Version . . . . . . . . 434
JAXB access classes . . . . . . . . . 434
The ANY attributes available in the schema 435
Tivoli Service Request Manager based object
queries . . . . . . . . . . . . . . . 436
Get list of service requests (SR) . . . . . . 436
Get service request details (SRDET) . . . . . 438
Get shopping cart (CART) . . . . . . . . 439
Get user details (MAXUSERDET). . . . . . 440
Get offering information (OFFERING) . . . . 442
Get offering details (OFFERINGDET) . . . . 443
Create service request (SRCREATE) . . . . . 444
Create a shopping cart (CARDCREATE) . . . 447
Create interfaces via REST . . . . . . . . . 448
REST POST-style requests - 'create', 'update',
and 'delete' . . . . . . . . . . . . . 448
viii Tivoli Service Automation Manager V7.2.4.4 Installation and Administration Guide
Create catalog requests (PMSCCR) . . . . . 449
Interfaces via Web services (PMRDPBCAPI) . . . 464
pmrdpbcapigetAvailableCapacityData Method 465
pmrdpbcapigetAvailablePoolList Method . . . 467
pmrdpbcapigetAvailableCapacityForPools
Method . . . . . . . . . . . . . . 469
pmrdpbcapigetBCRealSysResMonitoringData
Method . . . . . . . . . . . . . . 470
pmrdpbcapiReassignWfAssignment Method . . 472
pmrdpbcapiChangeWfAssignment Method . . 473
Additional relationships . . . . . . . . . . 473
GETDOMAIN Relationship. . . . . . . . 474
PMZHB_SRSPECCLASS Relationship . . . . 474
Additional filter domains . . . . . . . . . 474
(DEPRECATED) PMZHBT_CLUSERROLE Filter
Domain . . . . . . . . . . . . . . 474
PMZHBT_CLOUDUSER Filter Domain. . . . 475
(DEPRECATED) PMZHBT_CLUSERROLE Filter
Domain . . . . . . . . . . . . . . 475
PMZHBT_LOGGEDONUSR Filter Domain . . 475
PMZHBT_MODCLUSER Filter Domain . . . 476
PMZHBT_REASSIGNLST Filter Domain . . . 477
PMZHBT_SRAPPRLIST Filter Domain . . . . 477
PMZHBT_SRUSRLIST Filter Domain . . . . 478
PMZHBT_SRVPRJLUSER Filter Domain . . . 479
PMZHBT_USERSTEAM Filter Domain . . . . 479
PMZHBT_USERTEAMS Filter Domain . . . . 480
(DEPRECATED) PMZHBT_USRROLELIST Filter
Domain . . . . . . . . . . . . . . 480
PMZHBT_IMGNOSRV Filter Domain . . . . 481
PMZHBT_SWMODULE Filter Domain . . . . 481
PMZHBT_ILMSTRIMG Filter Domain . . . . 482
REST API troubleshooting . . . . . . . . . 482
Common omissions and user errors . . . . . 482
Debugging REST requests with loggers. . . . 483
Chapter 7. Troubleshooting and error
recovery . . . . . . . . . . . . . 485
Trace logging . . . . . . . . . . . . . 485
Common problems and solutions. . . . . . . 486
Defining Cloud.WINDOWS_ADMIN_USER . . . 488
Troubleshooting image registration . . . . . . 488
Troubleshooting installation problems . . . . . 489
Unable to log on to the administrative user
interface after Tivoli Provisioning Manager
installation . . . . . . . . . . . . . 489
Service Request Manager fix pack installation
terminates with error . . . . . . . . . . 490
Release Process Manager installation fails . . . 490
OutOfMemoryError when installing automation
packages . . . . . . . . . . . . . . 491
The TPMfOSd_VersionDiscovery workflow fails
during the Configure Cloud Management
Components post-installation step . . . . . 491
Mail check enabled for tioadmin user is causing
issues during Tivoli Service Automation
Manager installation . . . . . . . . . . 492
SOAP exception error while running
startApplication.py . . . . . . . . . . 492
Errors in the configuration of cloud
management component . . . . . . . . 493
Missing display of tags during installation . . 493
Troubleshooting upgrade and migration . . . . 493
Creating a project from a saved image fails . . 493
Upgrade fails with
VMWare_MigrateDCM_7_3_0_0 workflow errors 493
UpdateDB fails with error messages . . . . . 494
Modifying Cloud Server Pool fails . . . . . 495
Problems in upgrading Tivoli Service
Automation Manager 7.2.2.1 . . . . . . . 495
Provisioning problems . . . . . . . . . . 495
Investigating provisioning failures . . . . . 495
Cleaning up after a provisioning failure . . . 497
Provisioning fails on System p. . . . . . . 497
Virtual server on System p is not created . . . 498
Recovering data model from System p Live
Partition Mobility . . . . . . . . . . . 498
Additional software not available on AIX . . . 499
DB2 installation on Windows fails . . . . . 500
Provisioning fails with HWADDR line in
/etc/sysconfig/network-scripts/ifcfg-eth0. . . 500
Provisioning a VMware project with increased
memory size fails . . . . . . . . . . . 501
Provisioning VMware fails with timeout error 501
Provisioning fails when using Xen image . . . 502
Provisioning fails because of wrong
configuration of default gateways . . . . . 502
Excluding ESX server that is under maintenance
in VMWare . . . . . . . . . . . . . 503
Available resources are not listed properly. . . 503
Modifying disk size fails on Red Hat Enterprise
Linux server. . . . . . . . . . . . . 503
Resources not updated when creating a project 504
Resource pool is not available . . . . . . . 504
Setting automatic logon for Windows . . . . 504
Setting a specific timezone on a Windows
endpoint . . . . . . . . . . . . . . 505
Allowing automatic replacement of existing
servers . . . . . . . . . . . . . . 506
Cleaning up after unexpected stoppage of the
Provisioning Manager engines. . . . . . . . 506
The integrity checker tool functions . . . . . . 506
Manually changing the status of a service request 507
Troubleshooting when using IBM Systems Director
VMControl . . . . . . . . . . . . . . 507
Creating an image template for VMControl fails 507
VMC_FormatFileSystem workflow failure . . . 507
Deployment of more than 10 servers fails . . . 508
Removing hardware resources via VMControl 508
Recovering after unexpected hardware removal
or failure . . . . . . . . . . . . . . 509
Removing an orphan virtual server . . . . . 509
Server cannot be removed in the self-service
interface . . . . . . . . . . . . . 510
Correct physical CPU information is not
reflected on HMC . . . . . . . . . . . 510
Fixing Image Validation Errors . . . . . . 510
Deployment of a virtual appliance with
dedicated cpu or memory values in the OVF
fails . . . . . . . . . . . . . . . 511
VMControl certificate host name does not match 512
Troubleshooting errors in VMControl NPIV
project with additional disk . . . . . . . 513
Troubleshooting when using VMware . . . . . 513
Troubleshooting VMware additional disk feature 514
"Invalid Binding" displayed for data store
names . . . . . . . . . . . . . . . 514
Using the same data store for a server pool and
a storage pool . . . . . . . . . . . . 515
Additional disks for scheduled future projects 515
Error messages . . . . . . . . . . . . 515
VMware Additional Disks fails in non-US . . . 516
VirtualCenter discovery does not load the servers 516
CDSException messages issued for inactive CDS
application . . . . . . . . . . . . . . 517
Configuring extensibility for Workload Deployer
fails . . . . . . . . . . . . . . . . 517
Errors in http_plugin.log files . . . . . . . . 518
Extending an offering fails . . . . . . . . . 519
Troubleshooting additional disk extension for
Power LPAR . . . . . . . . . . . . . 519
Deleting saved images after the owning customer
was deleted . . . . . . . . . . . . . . 520
Errors in the Simple SRM UI display . . . . . 521
Guest OS customization of Windows 8 and
Windows Server 2012 does not complete . . . . 521
Appendix. Accessibility features . . . 523
Notices . . . . . . . . . . . . . . 525
Glossary . . . . . . . . . . . . . 527
A . . . . . . . . . . . . . . . . . 527
C . . . . . . . . . . . . . . . . . 528
E . . . . . . . . . . . . . . . . . 528
H . . . . . . . . . . . . . . . . . 528
J. . . . . . . . . . . . . . . . . . 528
M . . . . . . . . . . . . . . . . . 529
P . . . . . . . . . . . . . . . . . 529
R . . . . . . . . . . . . . . . . . 530
S . . . . . . . . . . . . . . . . . 530
T . . . . . . . . . . . . . . . . . 531
V . . . . . . . . . . . . . . . . . 531
W . . . . . . . . . . . . . . . . . 531
X . . . . . . . . . . . . . . . . . 532
Trademarks and Service Marks. . . . 533
Index . . . . . . . . . . . . . . . 535
Privacy policy considerations . . . . 539
Tables
1. Documentation links for Tivoli Service
Automation Manager component products . . xiii
2. Uniqueness scopes for topology node
attributes . . . . . . . . . . . . . 14
3. Uniqueness specifications for attribute names 15
4. Uniqueness resolution rule . . . . . . . 15
5. Summary of Tivoli Service Automation
Manager Reports and Content . . . . . . 20
6. Management server hardware and operating
system requirements . . . . . . . . . 29
7. Minimum free space requirements for the
management system (AIX) . . . . . . . 30
8. Minimum free space requirements for the
management system (Linux) . . . . . . . 30
9. Administrative server requirements . . . . 31
10. Managed-environment server requirements 32
11. Summary of software in the Tivoli Service
Automation Manager environment . . . . . 36
12. Installation steps and servers on which the
corresponding installation files must be
available . . . . . . . . . . . . . 43
13. Installation scenarios . . . . . . . . . 45
14. Required packages for Linux management
servers . . . . . . . . . . . . . . 64
15. Default and your values for properties . . . 69
16. NIM vs SCS differences . . . . . . . . 144
17. The KVM server and KVM image server
settings worksheet . . . . . . . . . . 168
18. The Management Subnetwork customization: 193
19. The Customer Subnetwork customization: 194
20. . . . . . . . . . . . . . . . . 195
21. The 11_Cloud_NetworkSettings_VMware.xml
file customization . . . . . . . . . . 197
22. Systemp_SwitchTemplate customization 198
23. The 13_Cloud_NetworkSettings_zVM.xml file
customization . . . . . . . . . . . 199
24. The 23_1_Cloud_zLinuxImage_SLES10_zVM.xml
file customization . . . . . . . . . . 199
25. The 23_2_Cloud_Vswitches_zVM.xml file
customization . . . . . . . . . . . 200
26. System z Pool customization . . . . . . 200
27. Customization of the mapserve server section
in the System z Pool object . . . . . . . 201
28. The 14_Cloud_NetworkSettings_KVM.xml file
customization: . . . . . . . . . . . 202
29. Customization of the VIOS settings for the
discovered DCM objects and the System p
VIOS configuration for the CEC server objects 210
30. Sample Data Center Model import files. 218
31. Sample network templates and network
template schema. . . . . . . . . . . 221
32. Classification attributes for configuration
items to be used by the WebSphere Cluster
Service. . . . . . . . . . . . . . 248
33. Account code structure . . . . . . . . 273
34. Account code structure . . . . . . . . 291
35. The minimum rights of a VMware
administrative user . . . . . . . . . 306
36. . . . . . . . . . . . . . . . . 318
37. Roles and groups provided by Tivoli Service
Automation Manager . . . . . . . . . 331
38. Access to requests depending on the security
group . . . . . . . . . . . . . . 335
39. Types of quotas and limits . . . . . . . 342
40. Maintenance log attributes . . . . . . . 344
41. Escalations deactivated during the
maintenance mode. . . . . . . . . . 345
42. Tivoli Service Automation Manager
communication templates . . . . . . . 348
43. Roles for email notifications. . . . . . . 349
44. Management section of the input parameter
file . . . . . . . . . . . . . . . 370
45. Instance profile section of the input
parameter file . . . . . . . . . . . 371
46. ERROR_CODE values . . . . . . . . 382
47. ERROR_TYPE values . . . . . . . . . 382
48. Object structure definition for
MBS_MAXDOMAIN . . . . . . . . . 398
49. Object structure definition for
MBS_LANGUAGE. . . . . . . . . . 399
50. Object structure definition for
MBS_PERSONGROUP . . . . . . . . 400
51. Object structure definition for
MBS_PERSONGROUPDET . . . . . . . 401
52. Object structure definition for
MBS_MAXUSER . . . . . . . . . . 402
53. Object structure definition for
MBS_MAXUSERDET . . . . . . . . . 402
54. Object structure definition for
MBS_MAXGROUP. . . . . . . . . . 404
55. Object structure definition for
MBS_GROUPUSER . . . . . . . . . 405
56. Object structure definition for MBS_IMGLIB 406
57. Object structure definition for MBS_
MAXPROPVALUE . . . . . . . . . . 407
58. Object structure definition for
PMZHBR1_PMRDPPRJVIEW . . . . . . 409
59. Object structure definition for
PMZHBR1_PMRDPSRVVIEW . . . . . . 411
60. Object structure definition for
PMZHBR1_PMRDPSTGVIEW . . . . . . 413
61. Object structure definition for
PMZHBR1_PMRDPOFFVIEW . . . . . . 414
62. Object structure definition for
PMZHBR1_PERSGRPTM . . . . . . . 415
63. Object structure definition for
PMZHBR1_PERSGRPTMDET . . . . . . 416
64. Object structure definition for
PMZHBR1_PERSNTINTEAM . . . . . . 417
65. Object structure definition for
PMZHBR1_PERSNTINTEAM . . . . . . 418
66. Object structure definition for SRM_SR 437
67. Object structure definition for SRM_SRDET 438
68. Object structure definition for SRM_CART 439
69. Object structure definition for
SRM_MAXUSERDET . . . . . . . . . 440
70. Object structure definition for
SRM_OFFERING . . . . . . . . . . 442
71. Object structure definition for
SRM_OFFERINGDET . . . . . . . . . 443
72. Object structure definition for
SRM_SRCREATE . . . . . . . . . . 444
73. Object structure definition for
SRM_CARDCREATE . . . . . . . . . 447
74. Problems and solutions for common problems 486
75. Timezones and their numeric values . . . . 505
Preface
This publication documents how to install and administer Tivoli Service
Automation Manager.
Who should read this information
This information is intended for:
v System and database administrators who are responsible for implementing Tivoli
Service Automation Manager or IBM CloudBurst
v Service managers responsible for defining specific service offerings based on the
service definitions supplied with Tivoli Service Automation Manager
v Service instance managers and technicians responsible for monitoring and
administering existing IT landscapes represented by service instances
v Service operators responsible for operations management of landscapes that are
deployed or in the process of being deployed
What's new in this release
This release of Tivoli Service Automation Manager offers a set of new features and
enhancements to the existing functions.
v Overlapping IP Address environment for VMControl Hypervisor
v Quotas in VMControl
v Virtual machine cloning for additional disk
v Folders in VMware
v Save / Restore images with additional disk (VMware)
v Metering for additional disk (VMware)
v NPIV support for VMControl 2.4.2 and 2.4.3.1
v Performance improvement and display for Resource KPI
Useful links
Tivoli Service Automation Manager is a component product. Use the following
topic to find more information about the related products and the requirements
that must be met for them.
Table 1. Documentation links for Tivoli Service Automation Manager component products
Product name Documentation URL
IBM Tivoli Provisioning Manager, version
7.2.1
http://pic.dhe.ibm.com/infocenter/tivihelp/
v45r1/topic/com.ibm.tivoli.tpm.doc/
welcome/ic-homepage.html
IBM Tivoli Service Request Manager, version
7.2.0.1
http://publib.boulder.ibm.com/infocenter/
tivihelp/v32r1/index.jsp?topic=
%2Fcom.ibm.srm.doc%2Fsrm_welcome.htm
Maximo Base Services http://publib.boulder.ibm.com/infocenter/
tivihelp/v3r1/index.jsp?topic=
%2Fcom.ibm.mam.doc_7.1
%2Fmam_welcome.htm
Table 1. Documentation links for Tivoli Service Automation Manager component
products (continued)
Product name Documentation URL
IBM Tivoli Directory Server, version 6.3 http://publib.boulder.ibm.com/infocenter/
tivihelp/v2r1/index.jsp?topic=
%2Fcom.ibm.IBMDS.doc%2Fwelcome.htm
DB2, version 9.5 http://publib.boulder.ibm.com/infocenter/
db2luw/v9r5/topic/com.ibm.db2.luw.doc/
welcome.html
IBM WebSphere Application Server
Network Deployment, version 6.1
http://publib.boulder.ibm.com/infocenter/
wasinfo/v6r1/topic/
com.ibm.websphere.base.doc/info/aes/ae/
welcome_base61.html
IBM Systems Director VMControl, versions
2.3.1, 2.4.1.1, 2.4.2, 2.4.3.1
http://publib.boulder.ibm.com/infocenter/
director/v6r2x/index.jsp?topic=/
com.ibm.director.vim.helps.doc/
fsd0_vim_main.html
Support information
You can find support information for IBM products from a variety of sources.
v Getting technical training
v Searching knowledge bases
v Contacting IBM Software Support on page xvi
Getting technical training
Information about Tivoli technical training courses is available online.
Go to http://www.ibm.com/software/tivoli/education/.
Searching knowledge bases
If you have a problem with Tivoli Service Automation Manager, search through
one of the many available knowledge bases.
You can begin with the IT Service Management Knowledge Center.
Searching the Internet
If you cannot find an answer to your question in the IT Service Management
information center, search the Internet for the latest, most complete information to
help you resolve your problem.
To search multiple Internet resources, go to the IBM Tivoli Support website. From
there, you can search a number of resources, including:
v IBM technotes
v IBM downloads
v IBM Redbooks
If you still cannot find a solution to the problem, you can search forums and
newsgroups on the Internet for the latest information to help you resolve it.
Using IBM Support Assistant
At no additional cost, you can install on any workstation the IBM Support
Assistant, a stand-alone application. You can then enhance the application by
installing product-specific plug-in modules for the IBM products that you use.
The IBM Support Assistant helps you gather support information when you need
to open a problem management record (PMR), which you can then use to track the
problem. The product-specific plug-in modules provide you with the following
resources:
v Support links
v Education links
v Ability to submit problem management reports
For more information, see the IBM Support Assistant Web site at
http://www-01.ibm.com/software/support/isa/.
Finding product fixes
A product fix might be available from the IBM Software Support website.
About this task
Check the website to determine which fixes are available for your product.
Procedure
1. Find the Tivoli Service Automation Manager product at http://www.ibm.com/
software/tivoli/products/.
2. Click the Support Pages link for the product.
3. Click Fixes for a list of fixes for your product.
4. Click the name of a fix to read the description and download the fix.
Getting email notification of product fixes
You can get notifications about fixes and other news about IBM products.
Procedure
1. From the support page for any IBM product, click My support in the
upper-right corner of the page.
2. Optional: If you have not registered, click Register in the upper-right corner of
the support page to set up your user ID and password.
3. Sign in to My support.
4. On the My support page, click Edit profiles in the left navigation pane, and
scroll to Select Mail Preferences. Select a product family and check the
appropriate boxes for the type of information you want.
5. Click Submit.
6. For email notification for other products, repeat steps 4 and 5.
Contacting IBM Software Support
You can contact IBM Software Support if you have an active IBM software
maintenance contract and if you are authorized to submit problems to IBM.
About this task
Before you contact IBM Software Support, follow these steps:
Procedure
1. Set up a software maintenance contract.
2. Determine the business impact of your problem.
3. Describe your problem and gather background information.
What to do next
Then see Submit the problem to IBM Software Support on page xvii for
information on contacting IBM Software Support.
Setting up a software maintenance contract
To be able to submit a problem to IBM, you need to have a software maintenance
contract. The type of contract that you need depends on the type of product you
have.
Procedure
v For IBM distributed software products (including, but not limited to, Tivoli,
Lotus, and Rational products, as well as IBM DB2 and IBM WebSphere
products that run on Microsoft Windows or UNIX operating systems), enroll in
IBM Passport Advantage:
Enrolling online: Go to the Passport Advantage Web page at
http://www.ibm.com/software/lotus/passportadvantage/, click How to
enroll, and follow the instructions.
Enrolling by Telephone: For the telephone number for your country, go to
the IBM Software Support Handbook webpage at http://
www14.software.ibm.com/webapp/set2/sas/f/handbook/contacts.html and
click Contacts.
v For IBM eServer software products, you can purchase a software maintenance
agreement by working directly with an IBM marketing representative or an IBM
Business Partner. For more information about support for eServer software
products, go to the IBM Technical support advantage webpage at
http://www.ibm.com/servers/eserver/techsupport.html.
What to do next
If you are not sure which type of software maintenance contract you need, call
1-800-IBMSERV (1-800-426-7378) in the United States. For a list of support
telephone numbers for your location, go to the Software Support Handbook page
at http://www14.software.ibm.com/webapp/set2/sas/f/handbook/contacts.html.
Determine the business impact
When you report a problem to IBM, you are asked to supply a severity level. In
order to provide this information, understand and assess the business impact of
the problem you are reporting.
Severity 1
Critical business impact: You are unable to use the program, resulting
in a critical impact on operations. This condition requires an
immediate solution.
Severity 2
Significant business impact: The program is usable but is severely
limited.
Severity 3
Some business impact: The program is usable with less significant
features (not critical to operations) unavailable.
Severity 4
Minimal business impact: The problem causes little impact on
operations, or a reasonable circumvention to the problem has been
implemented.
Describe the problem and gather background information
When explaining a problem to IBM, it is helpful to be as specific as possible.
Include all relevant background information so that IBM Software Support
specialists can help you solve the problem efficiently.
To save time, know the answers to these questions:
v What software versions were you running when the problem occurred?
v Do you have logs, traces, and messages that are related to the problem
symptoms? IBM Software Support is likely to ask for this information.
v Can the problem be recreated? If so, what steps led to the failure?
v Have any changes been made to the system? For example, hardware, operating
system, networking software, and so on.
v Are you currently using a workaround for this problem? If so, be prepared to
explain it when you report the problem.
Submit the problem to IBM Software Support
You can submit the problem to IBM Software Support online or by telephone.
Online
Go to the IBM Software Support Web site at http://www.ibm.com/
software/support/probsub.html. Enter your information into the
appropriate problem submission tool.
By Telephone
For the telephone number to call in your country, go to the contacts page
of the IBM Software Support Handbook at http://
www14.software.ibm.com/webapp/set2/sas/f/handbook/contacts.html.
If the problem that you submit is for a software defect or for missing or inaccurate
documentation, IBM Software Support creates an Authorized Program Analysis
Report (APAR). The APAR describes the problem in detail. If a workaround is
possible, IBM Software Support provides one for you to implement until the APAR
is resolved and a fix is delivered.
Chapter 1. Tivoli Service Automation Manager overview
Tivoli Service Automation Manager supports the automated provisioning,
management, and deprovisioning of cloud resources, which comprise hardware
servers, networks, operating systems, middleware, and application-level software.
Several virtualization environments (hypervisors) are supported in the process of
individual virtual server provisioning.
Tivoli Service Automation Manager also provides management support for services
consisting of a specific set of middleware, in combination with AIX

, and Linux
(System x

and System z

). IBM provides many automation and best-practice


service definition templates, including specialized job plans or workflows. Tivoli
Service Automation Manager makes use of the entire spectrum of Tivoli process
automation engine tools.
Tivoli Service Automation Manager helps you define and automate services that
are lifecycle oriented, for example, a service to establish and administer an IT
server network for a limited period of time, to satisfy increased demand for
processing capacity or to serve as a test environment. Predefined service definitions
determine the overall framework for the services. The actual service instances are
requested using these service definitions. The Self-Service Virtual Server
Management environment is used by cloud users to request provisioning and to
manage the virtual environments.
Tivoli Service Automation Manager uses IBM Service Management and the Tivoli
process automation engine as an integration platform. IBM Service Management is
an approach designed to automate and simplify the management of business
services. It concentrates on four areas:
v Technology integration and standards
v Improved collaboration among IT people spread across organizational silos
v Best-practices based process modules for automated process execution
v Sharing of business-critical IT information to improve decision making
Tivoli Service Automation Manager offers the following standard service
environments and definitions:
Self-Service Virtual Server Management environment
In this service environment, which is a collection of service offerings, users
request the provisioning of projects comprising virtual servers. The service
environment has an intuitive self-service user interface with Web 2.0
technology for enhanced interactive feedback. An administrator function is
also provided.
The basic service structure described in Service structure on page 17
supports this component, except for the interface to Tivoli Monitoring,
which has a different implementation in the Self-Service Virtual Server
Provisioning context.
WebSphere Cluster Service
This optional, separately priced service provisions and manages a
WebSphere cluster.
Product components
This section provides an overview of the components of Tivoli Service
Automation Manager.
Tivoli Service Automation Manager Installation Launchpad
The Tivoli Service Automation Manager Installation Launchpad guides the user
through the installation process.
For details, see Chapter 2, Installing and upgrading Tivoli Service Automation
Manager, on page 27.
Self-Service Virtual Server Management
Tivoli Service Automation Manager provides support for user-initiated
provisioning and management of virtual servers on System x, System p, or
System z. The product also supports the IBM CloudBurst and IBM Workload
Deployer products, which are based on System x hardware. The self-service
environment is supported by the self-service user interface. The Self-Service Virtual
Server Management function addresses a long-standing need for efficient
management of self-service deployment of virtual servers and associated software.
Using a set of simple, point-and-click tools, the user can select a software stack and
have the software automatically installed or uninstalled in a virtual host that is
automatically provisioned.
These tools integrate with Tivoli Service Request Manager to provide a
self-service portal for reserving, provisioning, recycling, and modifying virtual
servers, and working with server images, in the following platform environments,
which are a part of a virtualized non-production lab (VNPL):
v VMware on System x (also used in the IBM CloudBurst and IBM Workload
Deployer products)
v Xen on System x
v KVM on System x
v LPARs on Power Systems
v z/VM guests on System z
v WebSphere CloudBurst Appliance
This function ensures the integrity of fulfillment operations that involve a wide
range of resource actions:
v Creating virtual servers as part of a new deployment project or adding virtual
servers to an existing project, with optional scheduling for implementation at
some future time
v For each virtual server created, installing a software image that includes an
operating system and other applications that are associated with the image.
v Installing additional software on the provisioned virtual machines
v Deleting a virtual server when it is no longer needed, freeing up resources for
other servers
v Saving virtual server images and restoring the servers to their previous state
v Saving and restoring images of servers within the project.
v Deleting individual servers.
v Canceling a project and deleting all of the associated virtual servers.
v Starting, stopping, and restarting virtual servers.
v Resetting the administrator password on a virtual server.
v Adding, removing, and modifying users and user teams.
You use these capabilities to achieve incremental value by adopting a self-service
virtual server provisioning process, growing and adapting the process at your own
pace, and adding task automation to further reduce labor costs around defined
provisioning needs.
Before users in the data center can create and provision virtual servers,
administrators perform a set of setup tasks, including configuring the integration,
setting up the virtualization environments managed by the various hypervisors,
and running a Tivoli Provisioning Manager discovery to find servers and images
across the data center.
After this initial setup has been completed, the administrator associates the virtual
server offerings with Tivoli Provisioning Manager virtual server templates. The
Image Library is used as the source for software images to be used in provisioning
the virtual servers.
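Beyond the user interfaces, these provisioning functions can also be driven programmatically through the REST object structures listed in the contents of this guide (for example, the SR query for service requests). The following Python sketch shows how such a query might be assembled; it is illustrative only. The host name, context root, credentials, and query parameters shown here are placeholder assumptions, not values shipped with the product, so verify them against the object structure descriptions later in this guide before use.

```python
import base64

# Illustrative sketch only: HOST and CONTEXT are placeholders for your own
# management-server host and REST context root, not product defaults.
HOST = "tsam.example.com"
CONTEXT = "maximo"

def build_sr_query_url(status=None, max_items=10):
    """Assemble a query URL against the SR object structure (illustrative).

    The _maxItems and STATUS query parameters are assumptions to be checked
    against the SR object structure definition in this guide.
    """
    url = "https://%s/%s/rest/os/SR?_maxItems=%d" % (HOST, CONTEXT, max_items)
    if status is not None:
        url += "&STATUS=" + status
    return url

def basic_auth_header(user, password):
    """Build an HTTP Basic authentication header for the request."""
    token = base64.b64encode(("%s:%s" % (user, password)).encode("utf-8"))
    return {"Authorization": "Basic " + token.decode("ascii")}

print(build_sr_query_url(status="QUEUED"))
```

The returned URL and header dictionary can then be passed to any HTTP client; no request is sent by the sketch itself.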
User interfaces
Tivoli Service Automation Manager provides two options for user interaction: a
self-service user interface and an administrative user interface.
The administrative user interface is the standard interface within the Tivoli
process automation engine framework (formerly known as Maximo). It is intended
primarily for service and system administrators who perform the installation,
upgrade, configuration and administration tasks.
The self-service user interface is tailored to users of self-service offerings and
administrators. It is based on the Web 2.0 standard, which enables
context-sensitive, real-time display updating based on the current entry or
selection made by the user. The result is faster access to the necessary
information without having to go through a sequence of clicks, dialogs, and
panels.
Applications in the administrative user interface
Many of the components of Tivoli Service Automation Manager are implemented
in the form of applications within the administrative user interface. This section
describes the available applications and their functions.
Note: For advanced search operations in the administrative user interface of
Tivoli Service Automation Manager, you cannot use check boxes for search
filtering.
Service Definitions application
You use the Service Definitions application to create or modify a service definition
and to instantiate services based on that definition. You can also customize the
service definition delivered with Tivoli Service Automation Manager to suit your
requirements more precisely.
For services that are instantiated (that is, represented with actual hardware) based
on an approved service definition, the Service Definitions application triggers the
instantiation workflow. Self-Service Virtual Server Management, however, employs
an interactive process for instantiation that is carried out using the Service Request
Manager.
Service Deployment Instances application
You use the Service Deployment Instances application to manage existing
deployments (physical implementations) of a service.
Resource Allocation applications
You use the Resource Allocation applications to document and review
requirements and to allocate configuration items (CIs) that fulfill those
requirements.
There are two purposes for allocating resources.
v Document requirements
Users can insert requirement items to specify the requirements the CI has to
fulfill. These items are classified using the standard Tivoli process automation
engine classification mechanism, which means that the user can reuse predefined
forms.
v Allocate configuration items (CIs)
Once the requirements are set and approved, a CI has to be allocated that
matches those requirements. The user can review and change the set
requirements. Using this information, they can select a CI that matches.
Automated filtering helps the user find the appropriate CI.
The Resource Allocation applications include:
v Resource Allocation Record
v Resource Allocation for Service Deployment Instance
Monitoring Definition applications for WebSphere Cluster service
If you need to modify the service definition delivered with Tivoli Service
Automation Manager to match your requirements for performance monitoring, you
can change the monitoring definition using the set of Monitoring Definition
applications.
Note: This section pertains only to performance monitoring within the scope of
the Tivoli Service Automation Manager WebSphere Cluster service. It does not
describe displaying resource usage data collected by Tivoli Monitoring agents that
are installed on servers provisioned using the Self-Service Virtual Server
Management component.
Performance monitoring is supported in combination with IBM Tivoli Monitoring
on distributed platforms (Linux on System x, System p, and System z, and AIX).
The Monitoring Definition applications refer to the monitoring environment being
used with respect to any given service definition and service deployment instance.
4 Tivoli Service Automation Manager V7.2.4.4 Installation and Administration Guide
IBM Tivoli Monitoring is no longer offered as an installation option for Tivoli
Service Automation Manager; a separately installed Tivoli Monitoring
environment must be used instead.
For details on setting up IBM Tivoli Monitoring, see the IBM Tivoli Monitoring
and OMEGAMON XE documentation.
There are three monitoring-related applications:
v Monitoring Definition
v Monitoring Definition Instantiation
v Monitoring Definition Instances
The handling of performance-related situations detected by the monitoring agents
in a deployed landscape is provided by the Situation Analysis application.
Situation Analysis application
You use the Situation Analysis application to review the context of the service
deployment instance in which a performance-related event occurred, including the
topology of the service instance, the CIs involved, and the monitoring definitions.
For an overview of the situation analysis function, see Performance monitoring
support for the WebSphere Cluster service on page 16.
Note: This information pertains only to performance monitoring within the scope
of the Tivoli Service Automation Manager WebSphere Cluster service. It is not
related to displaying resource usage data collected by Tivoli Monitoring agents
that are installed on servers provisioned using the Self-Service Virtual Server
Management component.
Cloud Server Pool Administration application
Use this application to define and configure cloud server pools after the
installation of Tivoli Service Automation Manager.
This application helps you define and configure a cloud server pool for each back
end on which you want to provision servers.
Cloud Storage Pool Administration application
Use this application to create and configure cloud storage pools.
Cloud storage pools provide a flexible way to add storage to your
provisioned servers. They are a collection of storage resources for additional disks
that can be assigned to a newly created image if you need more storage space than
just the boot disk.
Cloud Network Administration application
This application is used to perform cloud configurations and to manage them in a
production environment.
The Cloud Network Administration application is one of the applications that are
used during the Tivoli Service Automation Manager configuration process. Use it
after performing the configuration of the backend resources and of the storage
pool. The network configurations handled by this application include:
v Importing network Data Center Model (DCM) objects and parameters that
describe subnetworks and virtual switches
v Importing network templates that contain predefined network configuration
parameters
After performing these initial configurations and setting up a production
environment, Tivoli Service Automation Manager configurations change over time.
The Cloud Network Administration application helps you manage these changes
by handling the following tasks:
v Importing a new revision of a network template
v Changing the status of a network template
v Deleting a network template
v Listing network templates
v Showing details of a network template
Cloud Customer Administration application
The Cloud Customer Administration application provides the administrator with
an overview of the allocated resources of a specific customer. The application is
organized into multiple tabs. These tabs show the resources, requests, and
reservations of the selected customer. The main purpose of the Cloud Customer
Administration application is to maintain customers individually. The
administrator can also perform configuration tasks for individual customers and
access other applications that serve this purpose.
Tasks that are available within the Cloud Customer Administration application
include:
v Viewing customer-specific information (users, teams, submitted requests,
projects, saved images, reserved resources, and quotas)
v Creating and removing customer templates and customers
v Assigning resources to customers and returning resources
v Adding, deleting, activating, and deactivating quotas and limits
The application supports three types of customers:
Cloud Template (type=CT)
This type is used as a template for new customers. The cloud template is
created and modified in the administrative user interface. Then it becomes
available in the self-service interface as part of the Create Customer
request. A template customer is not operational, which means that no user
can belong to a customer of type CT.
Cloud Customer (type=CC)
This customer type can be created in the self-service user interface and
then configured or modified in the administrative user interface. If
customer templates are configured, you can use them to create new
customers of type CC. A cloud customer is operational. Users assigned to it
can request and use resources. One user can be assigned to one customer
only, and that user can never be migrated between customers. When a user
is assigned to a customer, that user can access and request resources that
are assigned to that customer.
Service Provider (type=SP)
A predefined global customer that is delivered with Tivoli Service
Automation Manager 7.2.2, with a default name PMRDPCUST. If no other
customers are defined, all users belong to this default customer. New
customers of type SP cannot be created.
The application is useful if the customer reports a problem with the cloud. The
administrative user can view and find the resources related to that customer. For
example, if the customer reports that a service request on a project failed, the
administrator can easily navigate to the list of all service requests and projects
related to this customer. From this application, the administrator can navigate to
the service request application to get further details about the specific service
request.
For specific information about the features of each tab that is available in this
application, click the ? Help button within the application.
Cloud Maintenance Administration application
Administrators use the Cloud Maintenance Administration application to start and
monitor the maintenance mode for Tivoli Service Automation Manager.
The Cloud Maintenance Administration application includes the following tabs:
List
This tab contains a list of all maintenance logs that were created in Tivoli Service
Automation Manager. The inactive logs are read-only and cannot be modified.
Maintenance Mode Log
This tab contains the details of the maintenance log.
Status This section contains the basic information about the maintenance mode. It
includes the following:
v The maintenance log ID, description, and requester name.
v The start and end time of the maintenance window that affects the users.
v Internal status that informs the administrator about the state of the
maintenance mode.
v The actual start and end date of the maintenance log, that is, the time
during which the log is active.
Log The log window, which is updated with log entries while the status is active.
Details
The details section graphically shows the number of service requests in the
different statuses. It also contains separate tabs with tables that the
administrator can use to quickly view the service requests in different
states, all inbox assignments, and logged-on users.
Service Topology application
The Service Topology application is the main application for viewing and editing
service topology templates and instances. If you need to modify the service
definition delivered with Tivoli Service Automation Manager to exactly match your
requirements, you can change the service topology using this application.
The Service Topology application operates mainly on the topology and topology
node data model objects. You use it to view and edit the following:
v Name and description of a topology
v Nodes of a topology
v Basic relationships between nodes (parent-child relationships)
v Monitoring agent assignments for nodes
v Resource allocation records for nodes
Service Update Package application
You use this application to manage existing Service Update Packages and deploy
them on one or more Service Definitions or Service Deployment Instances.
Service Topology Node applications
You use these applications to work with topology node objects. These applications
provide more sophisticated and detailed ways for browsing and editing topology
node objects than are available with the Service Topology application.
The applications in this category include:
v Service Topology Node
v Service Topology Node Operation
v Node Attribute Uniqueness Violations
v Co-Location Editor
v Topology Customization
In addition to the node viewing capabilities provided by the Service Topology
application, these applications make it possible to change relationships between
nodes. For one topology node, you can display all related nodes (direct parent or
child, or otherwise related nodes) in a list. A child node can be selected and set
as the main object for the application in order to inspect it. In this way, you can
walk through and inspect complete hierarchies of topology nodes. You can also
define custom relationships between nodes when the relationships cannot be
modeled using standard parent-child relationships.
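The browse-and-edit model described above (parent-child links plus custom relationships, with the ability to walk through complete node hierarchies) can be sketched with a minimal data structure. The class and method names below are hypothetical illustrations and are not part of the product object model:

```python
class TopologyNode:
    """Minimal sketch of a topology node with parent-child and custom relationships."""

    def __init__(self, name):
        self.name = name
        self.parent = None
        self.children = []
        self.custom_relations = []  # (relation_name, other_node) pairs

    def add_child(self, node):
        # Standard parent-child relationship
        node.parent = self
        self.children.append(node)

    def relate(self, relation_name, other):
        # Custom relationship, for links that cannot be modeled as parent-child
        self.custom_relations.append((relation_name, other))

    def walk(self):
        # Depth-first traversal: inspect the complete hierarchy below this node
        yield self
        for child in self.children:
            yield from child.walk()

# Hypothetical example hierarchy
cell = TopologyNode("WebSphereCell")
dmgr = TopologyNode("DeploymentManager")
node1 = TopologyNode("ManagedNode1")
cell.add_child(dmgr)
cell.add_child(node1)
node1.relate("managed-by", dmgr)
```

Walking `cell` visits the cell node first, then each child subtree in order, which mirrors how a user inspects a hierarchy by selecting a child node as the main object.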
IT Topology Work Orders application
The IT Topology Work Orders application displays the status of an IT topology
work order. This application can be accessed either in the Work Orders tab of the
Service Deployment Instances application or when the error handling workflow
creates an assignment due to an error during processing.
An IT topology work order is used as a frame of reference during management
plan processing. It contains the state of the work order and references to the input
and output definitions for work order operation. When the management plan is
being executed, there is one work order that contains the state of the overall
management plan. This work order is referred to as the top IT topology work order.
When any operation from a management plan starts, another IT topology work
order, the operation work order, is created for the operation and its state. Workflows
that run within the context of an operation focus on the corresponding operation
work order.
Many Tivoli Service Automation Manager operations use a common main
workflow for tasks, called the error handling workflow, which calls the
operation-specific implementation workflow. If the implementation workflow
returns an error by ending via the negative-action connection line, the error
handling workflow provides the operator with an assignment entitled 'A
Workflow Failed'. The operation work order is opened in the IT Topology Work Orders
operator can then route the workflow assignment and take one of the following
actions:
v Continue by ignoring the failed workflow and processing the remaining steps.
This option can be useful if the failed task was already completed manually by
the operator or if it can be omitted. Tivoli Service Automation Manager
continues processing by starting the next operation.
v Continue by re-executing the failed workflow. This option may be useful if
errors occur due to temporary problems (such as a temporary network outage).
Tivoli Service Automation Manager restarts the same operation with the current
input values.
v Continue by ignoring the failed workflow and canceling all remaining steps.
This option can be used if continuing the management plan is no longer
practical.
Note: Tivoli Service Automation Manager does not perform a cleanup in this
case.
Canceling means that all remaining job plans and tasks are still executed, but
each job task is considered completed when it is encountered, meaning that the
corresponding workflow is skipped. If an initial management plan is canceled by
the operator, the service deployment instance state is set to 'Canceled'. The user
has to manually clean up all the changes that occurred up to this assignment.
This includes topology changes, such as in the case of a WebSphere Add Server
workflow. It also includes changes made to the managed environment or Tivoli
Provisioning Manager.
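The three operator choices above amount to a simple dispatch on the assignment outcome. The following sketch is illustrative only; the function name and data shapes are hypothetical and do not correspond to the product workflow engine:

```python
def handle_failed_operation(action, failed_operation, remaining_operations):
    """Sketch of the three operator choices for a failed operation workflow."""
    if action == "ignore":
        # Skip the failed workflow; processing continues with the next operation.
        return list(remaining_operations)
    if action == "retry":
        # Re-execute the same operation with its current input values.
        return [failed_operation] + list(remaining_operations)
    if action == "cancel":
        # Remaining workflows are skipped; no automatic cleanup is performed.
        return []
    raise ValueError("unknown action: " + action)
```

For example, choosing `"retry"` puts the failed operation back at the head of the queue, while `"cancel"` empties it, matching the "no cleanup" caveat in the note above.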
You can also use the IT Topology Work Orders application to suspend and resume
work orders and send notifications to users about issues related to the current
management plan.
Auxiliary applications
Several auxiliary applications support special-purpose editing or process control
functions:
v Service Automation Manager
v Management Plan Input Parameters
v Management Plan Target Selection
v Service Topology Customization Editor
WebSphere Cluster Service
The WebSphere Cluster Service allows you to provision the IBM WebSphere
Application Server Network Deployment V6.1 product and manage its life cycle.
This section is relevant only if you have purchased and installed the Tivoli Service
Automation Manager for WebSphere Application Server chargeable component. See
Installing optional software on page 83 for instructions on installing this service.
It can be installed either directly after the main Tivoli Service Automation Manager
installation or at a later time.
Before you begin to use the WebSphere Cluster Service, complete the steps
pertaining to it under Configuring the managed environment to use the
WebSphere Cluster Service on page 244, including defining the configuration
items (CIs) for the associated host platforms.
The WebSphere Cluster Service provisions a WebSphere cluster consisting of the
following elements:
v WebSphere Cell
v WebSphere Network Deployment (ND) Manager
v WebSphere Network Deployment managed node
v WebSphere Application Server instance
v WebSphere Cluster
v HTTP Server node
This service comprises the following components:
v WebSphere Cluster Service definition, comprising best-practice WebSphere
topology templates and management plans
v Updated automation package (.tcdriver file), which enables Tivoli Provisioning
Manager to provision WebSphere server instances together with Tivoli Service
Automation Manager. Also included are sample XML files to configure Tivoli
Provisioning Manager to use the WebSphere Cluster service
v Scripts that perform changes in the WebSphere Application Server when they are
executed from Tivoli Service Automation Manager using the Script Adapter
The management plans include internal plans for creating a WebSphere cell or
adding a managed node to an existing cell. Plans for basic management tasks in
the life cycle of the service, such as starting and stopping individual components,
are also provided.
The WebSphere Cluster Service integrates tightly with basic Tivoli Service
Automation Manager concepts and applications, using the existing Service
Definitions and Service Topology applications and processes such as Approval.
The following WebSphere components are part of each service deployment
instance. These components are modeled as individual topology nodes:
v Each instance contains and manages a single WebSphere cluster.
v The management instance for a cell, the Deployment Manager, can be
provisioned and is used heavily.
v There is at least one managed node that can be provisioned and managed.
v Each managed node contains a single application server instance, which is also
called a cluster member.
v Provisioning and management of an IBM HTTP server instance are also
supported.
WebSphere topology
In the following sections, the types of topology nodes used in the WebSphere
cluster service are listed. Each of these node types is defined by a Service Request
Manager classification that declares the attributes. Operations for topology nodes
are summarized for each type of node as applicable.
WebSphere Cell
A WebSphere ND Deployment Manager node that manages the cell and all
of its components, one or more WebSphere ND managed nodes hosting
application server instances, and a number of IBM HTTP Server nodes
representing the Web tier of the environment. Furthermore, a cell can
define multiple WebSphere clusters that group similar application server
instances.
Deployment Manager
A special type of WebSphere node for controlling a WebSphere cell and all
of its components. WebSphere components within a cell are configured
using the Deployment Manager node. The following operations are
supported:
v Installing WebSphere ND on a server and creating a Deployment
Manager profile (and thus a Deployment Manager node). This implicitly
defines a WebSphere cell that is managed by the new Deployment
Manager
v Defining a new WebSphere cluster within a cell. The cluster is initially
created without members
v Creating a new cluster member (application server instance) for an
existing cluster on an existing WebSphere managed node
v Starting the deployment manager. The deployment manager has to be
started for most other operations to succeed. This is done during initial
provisioning.
v Stopping the deployment manager
v Starting a complete cluster, that is all members within a cluster
v Stopping a complete cluster, that is all members within a cluster
v Starting a managed node and all cluster members that are managed by
this node
v Stopping a managed node and all cluster members that are managed by
this node. This might be required if the server this node runs on needs
maintenance
v Starting the application server instance
v Stopping the application server instance
v Generating the plug-in configuration for the web servers configured for
a cell and propagating it to the web servers. The web server then routes
web traffic based on the new configuration, for example handling load
for newly deployed applications or routing to newly created cluster
members
v If you use Tivoli Monitoring in conjunction with Tivoli Service
Automation Manager, install a Tivoli Monitoring agent on the server this
instance runs on
v If you use Tivoli Monitoring in conjunction with Tivoli Service
Automation Manager, uninstall a Tivoli Monitoring agent from the
server this instance runs on
v Enabling administrative security on this managed node. You can set a
user name and password for the administrative console of the
deployment manager, for example
v Incorporating a managed node in the cell that is managed by this
Deployment Manager. In this way, the Deployment Manager can
introduce changes to the given node
v Removing the given managed node from this cell
v Copying the HTTP Server configuration script that was created by the
HTTP Server installation to the Deployment Manager, so that it can be
invoked to make the HTTP Server known to the Deployment Manager
v Uninstalling WebSphere from a server
Managed Node <n>
A managed node within a WebSphere cell. The node is
managed/controlled by the cell Deployment Manager and does not have
its own management interface (such as an administration console).
Managed nodes host application server instances for deploying J2EE
applications. The following operations are supported:
v Installing WebSphere ND on a server and creating a managed node
profile (and thus a WebSphere managed node).
v Installing a Tivoli Monitoring agent on the server this instance runs on.
This is only applicable if you use Tivoli Monitoring in conjunction with
Tivoli Service Automation Manager.
v Uninstalling a Tivoli Monitoring agent from the server this instance runs
on. This is only applicable if you use Tivoli Monitoring in conjunction
with Tivoli Service Automation Manager.
WebSphere Application Server instance
A WebSphere Application Server instance is an implementation of the
WebSphere Application Server. It is a container for hosting J2EE
applications and corresponds to one instance of a Java Virtual Machine
on the respective node. Application server instances can be hosted on
managed or stand-alone WebSphere nodes; they cannot be hosted by
Deployment Manager nodes, that is, they cannot be defined within
Deployment Manager profiles.
WebSphere Cluster
A WebSphere cluster is a logical grouping defined within a WebSphere cell
for managing similar application server instances, that is, those of equal
configuration. The typical reasons for clustering are load balancing and
high availability.
IBM HTTP Server node
An IBM HTTP Server is the web server product of the WebSphere family.
In a clustered environment, instances of HTTP Server form the web or
front-end tier of a J2EE application-hosting landscape and distribute HTTP
traffic among members of a cluster of WebSphere Application Servers.
Supported operations are as follows:
v Installing and configuring IBM HTTP Server on a server.
v Starting this HTTP Server instance.
v Installing a Tivoli Monitoring agent on the server this instance runs on.
This is only applicable if you use Tivoli Monitoring in conjunction with
Tivoli Service Automation Manager.
v Uninstalling a Tivoli Monitoring agent from the server this instance runs
on. This is only applicable if you use Tivoli Monitoring in conjunction
with Tivoli Service Automation Manager.
DBMS server
A database management server. This is used in the optional Tivoli
Monitoring integration, such that events reported by this database server
can be correlated to a service deployment instance.
Database instance
A database instance managed by a DBMS server. This is used in the
optional Tivoli Monitoring integration, such that events detected by this
database can be correlated to a service deployment instance.
Management plans
The following processes can be performed on a deployed instance of the
WebSphere Cluster Service:
Add Server
Adds a WebSphere managed node to a cell and creates a member that is
added to an existing cluster
Remove Server
Removes an existing member from a cluster and deprovisions its
associated WebSphere managed node from a cell
Start WebSphere Deployment Manager
Starts the Deployment Manager for this service deployment instance
Stop WebSphere Deployment Manager
Stops the Deployment Manager for this service deployment instance
Start WebSphere Node
Starts a complete WebSphere managed node, including all application
server instances hosted by the node
Stop WebSphere Node
Stops a complete WebSphere managed node, including all application
server instances hosted by the node
Start WebSphere Cluster Member
Starts a specific application server instance on a managed node
Stop WebSphere Cluster Member
Stops a specific application server instance on a managed node
Start WebSphere Cluster
Starts a complete WebSphere cluster, that is, all of the members of the
cluster
Stop WebSphere Cluster
Stops a complete WebSphere cluster, that is, all of the members of the
cluster
Service topology node attributes
The service topology is the set of individual hardware and software components
that constitute the defined or instantiated service. Each element of the topology is
referred to as a node. The following section describes a topology using the
WebSphere Cluster Service as a sample framework.
In the WebSphere Cluster Service, the main logical component of the environment
to be managed is a WebSphere Cell. Consequently, the top node in the hierarchy is
a WebSphere Cell node. A WebSphere Cell contains an IBM HTTP Server, a
Deployment Manager, and one or more managed nodes. A cell can also define
multiple clusters. All of these nodes are direct children of the WebSphere Cell node.
A WAS ND managed node hosts WebSphere Application Server Instances. Thus,
the managed node has a child designated as "AppSrv Instance".
All nodes that require hardware resources for deployment have associated resource
allocation templates that define what the resource must look like.
To support the configuration of more than one topology node on a physical server,
nodes that are eligible for co-location can be assigned to a co-location group. That
group is then assigned to the physical resource. Nodes not assigned to a group are
classified as stand-alone nodes. Nodes in a co-location group can later be
selectively deleted from the group, or an entire group can be dissolved; in
either case, the affected nodes are returned to stand-alone status.
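The co-location grouping behavior described above can be illustrated with a small sketch. The function and data names are hypothetical, not part of the product:

```python
def remove_from_group(assignments, node):
    """Selectively delete one node from its co-location group (stand-alone = None)."""
    return {**assignments, node: None}

def dissolve_group(assignments, group):
    """Dissolve a whole group: every member returns to stand-alone status."""
    return {n: (None if g == group else g) for n, g in assignments.items()}

# Hypothetical assignments: two co-located nodes and one stand-alone node
assignments = {"AppSrv1": "grp-A", "AppSrv2": "grp-A", "HTTP1": None}
```

Deleting a single node leaves the rest of the group intact, while dissolving the group returns all of its members to stand-alone status at once.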
The WebSphere Cell contains one or more managed nodes. Each node has three
special attributes:
v Minimum cardinality
v Maximum cardinality
v Maximum cardinality unlimited
These attributes define how many times a node can occur in an instance of
the topology.
Other attributes indicate whether the node participates in performance monitoring
and whether it needs a hardware resource to run, for example.
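How these three cardinality attributes might be evaluated can be sketched as follows (hypothetical function, not product code):

```python
def cardinality_ok(count, minimum, maximum, maximum_unlimited=False):
    """Check whether `count` occurrences of a node in a topology instance
    satisfy the node's cardinality attributes."""
    if count < minimum:
        return False  # fewer occurrences than the minimum cardinality allows
    if maximum_unlimited:
        return True   # 'Maximum cardinality unlimited' overrides the maximum
    return count <= maximum
```

For a managed node with minimum cardinality 1 and maximum cardinality 3, two occurrences would be valid; zero or four would not, unless the unlimited flag is set.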
Since selected nodes in a service topology can occur multiple times, a mechanism
is required to ensure that the attributes that describe these nodes are unique across
the topology or even across all services.
Node attributes must allow for the definition of a uniqueness scope in order to
prevent conflicts with attributes of other nodes in a topology. For example, all
nodes within a topology should have a unique name. WebSphere Application
Server nodes that are deployed on the same host must also have unique,
non-conflicting port number assignments. A simple resolution of conflicts is
provided through placeholders in character-string-type attributes that are replaced
with numbers that are automatically incremented at instantiation time. These
numbers are replaced and incremented on a per-topology basis.
A more sophisticated and granular handling of attributes (both string and numeric)
is necessary, for example, for port number assignments. In general, automated
resolution of conflicts is done based on numeric variable substitution according to
user-defined rules (see below). With numeric attributes, the entire attribute is
subject to substitution. With string attributes, placeholders for such numeric
variables are placed in the string. It is possible to insert multiple placeholders into
one string, and each of these is replaced. If placeholders in one string are identical,
the same value is inserted for each occurrence in a string. The complete identifier
of such a placeholder variable comprises the fully qualified name (including node
classification ID) of the attribute (see also Attribute Name Handling below) and the
name of the placeholder variable. For numeric attributes, the fully-qualified name
of the attribute itself identifies the replacement variable.
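The per-topology placeholder substitution described above can be illustrated as follows. The `{name}` placeholder syntax, counter handling, and function names here are hypothetical; the actual placeholder format is defined by the product:

```python
import re

def substitute_placeholders(template, counters):
    """Replace each named placeholder with an automatically incremented number.
    Identical placeholders within one string receive the same value; the
    counters dictionary carries the per-topology state between calls."""
    values = {}

    def repl(match):
        name = match.group(1)
        if name not in values:
            # First occurrence in this string: increment the per-name counter
            counters[name] = counters.get(name, 0) + 1
            values[name] = counters[name]
        return str(values[name])

    return re.sub(r"\{(\w+)\}", repl, template)

counters = {}
first = substitute_placeholders("node{N}-port{N}", counters)
second = substitute_placeholders("node{N}-port{N}", counters)
```

The first instantiation yields the same number for both occurrences of `{N}` in the string, and the next instantiation increments the counter, mirroring the per-topology replacement described above.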
Uniqueness scope: A uniqueness_scope parameter can be set for any specification
attribute to define which type of uniqueness is to be enforced for that
attribute of a node within a topology. The following uniqueness scopes can be
set:
Table 2. Uniqueness scopes for topology node attributes
Uniqueness scope Meaning
None Uniqueness not enforced (default scope)
Global Uniqueness is enforced across all topology nodes known to the
system
Service Definition Uniqueness is enforced across all topologies of service deployment
instances derived from one service definition
Topology Uniqueness is enforced within the topology. The value of that
attribute of a node must be unique within the entire topology,
meaning that no other node in the same topology can have the same
value for that attribute
Host Uniqueness is enforced on a per-host basis. Nodes deployed on the
same host must not define the same value for an attribute
Group Uniqueness is enforced in a user-defined group; once defined, a
group can be referenced by attributes of all nodes, and uniqueness
is enforced in the scope of the selected group
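As an illustration of the Topology scope from the table above, the following sketch (hypothetical names, not product code) finds attribute values that are shared by more than one node within a topology:

```python
def topology_scope_violations(nodes, attribute):
    """Return the values of `attribute` that appear on more than one node
    in a topology, together with the offending node names.
    Sketch of the 'Topology' uniqueness scope only."""
    seen = {}
    for node_name, attrs in nodes.items():
        value = attrs.get(attribute)
        if value is None:
            continue
        seen.setdefault(value, []).append(node_name)
    # Only values used by two or more nodes violate topology-scope uniqueness
    return {v: names for v, names in seen.items() if len(names) > 1}

# Hypothetical topology: two nodes share the same NAME value
nodes = {
    "NodeA": {"NAME": "srv1"},
    "NodeB": {"NAME": "srv1"},
    "NodeC": {"NAME": "srv2"},
}
```

The other scopes differ only in how the set of compared nodes is chosen (all nodes, one service definition, one host, or one user-defined group).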
Attribute name handling: The user can define how the name of an attribute is
treated with respect to uniqueness, that is how the uniqueness is enforced between
the attributes of one node and the attributes of other nodes. The following types of
attributes handling can be set:
Table 3. Uniqueness specifications for attribute names
Attribute name
uniqueness Meaning
Exact Only values of identically named attributes are considered for
uniqueness handling (default)
Wildcard An expression containing wildcard characters (%) can be specified to
define a range of attributes that are considered for uniqueness
handling. The Wildcard specification can be used not only for
selecting a wider range of actual attribute names, but also to select
attributes from multiple classifications. For example, if attributes
PORT_ONE and PORT_TWO are to be correlated during uniqueness
handling (to prevent port assignment conflicts), the wildcard
expression PORT_% could be used
Alias group Arbitrarily named attributes can be assigned to user-defined alias
groups. All attributes that belong to the same alias group are then
considered for uniqueness handling. This allows for correlating
attributes, for example, of different node classifications, that would
be correlatable by name. For example, an attribute of a WebSphere
node named SOAP_PORT (a TCP/IP port) and an attribute
BOOTSTRAP_ADDRESS (also a TCP/IP port) of an Application
Server instance could be assigned to an alias group
NETWORK_PORTS with host scope in order to keep allocated
TCP/IP ports unique on a host.
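The three matching modes in Table 3 can be illustrated with a short sketch. The function names and data structures below are hypothetical, not part of the Tivoli Service Automation Manager API; the behavior follows the descriptions above:

```python
import re

# Illustrative sketch (not product API) of the three attribute-name
# uniqueness modes in Table 3: Exact, Wildcard, and Alias group.

def wildcard_regex(pattern):
    # '%' matches any sequence of characters, as in the PORT_% example.
    parts = (re.escape(p) for p in pattern.split("%"))
    return re.compile("^" + ".*".join(parts) + "$")

def names_match(mode, name_a, name_b, pattern=None, alias_groups=None):
    """Decide whether two attribute names are correlated for uniqueness handling."""
    if mode == "exact":        # default: only identically named attributes compare
        return name_a == name_b
    if mode == "wildcard":     # both names must fall under the wildcard expression
        rx = wildcard_regex(pattern)
        return bool(rx.match(name_a)) and bool(rx.match(name_b))
    if mode == "alias":        # some alias group must contain both names
        return any(name_a in g and name_b in g for g in alias_groups.values())
    raise ValueError("unknown mode: " + mode)
```

For example, with the wildcard expression PORT_%, the names PORT_ONE and PORT_TWO are treated as correlated, and an alias group NETWORK_PORTS containing SOAP_PORT and BOOTSTRAP_ADDRESS correlates those two differently named attributes.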
Uniqueness resolution rules: Within the user-defined uniqueness scope of an
attribute, variables are substituted according to a set of rules that define how
substitution values are calculated. Resolution is always performed on the basis of a
numeric calculation. There is currently one rule defined:
Table 4. Uniqueness resolution rule
Increment
Increments the substitution variable by a user-defined step value
(default 1), starting at a certain base value (default 1). A unique
value is calculated by obtaining the highest value for an existing
attribute and incrementing it to assign the value to the new
attribute. The value of a numeric attribute defines the base value
and overrides any such value specified in the rule.
Numeric placeholders in character strings are treated the same way as numeric
attributes except that the base value is always the one defined in the rule.
Identically named placeholders within one string are assigned the same value. For
text-only attributes without placeholders, the uniqueness requirements can be
defined, but no automatic uniqueness resolution is performed.
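The Increment rule in Table 4 can be sketched as follows. The function name is illustrative, not product API, and the sketch assumes a single uniqueness scope with numeric values:

```python
# Hedged sketch of the Increment resolution rule from Table 4: obtain the
# highest value already assigned among correlated attributes in the scope
# and step it by a user-defined increment.

def resolve_unique_value(existing_values, base=1, step=1):
    """Return the next unique numeric value within a uniqueness scope."""
    numeric = [int(v) for v in existing_values]
    if not numeric:
        return base              # first attribute in the scope gets the base value
    return max(numeric) + step   # otherwise increment the current maximum

# Example: if three nodes on one host already use ports 9080-9082, the
# next node's port attribute resolves to 9083 with the default step of 1.
```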
Chapter 1. Product overview 15
Tivoli Service Automation Manager detects violations of the uniqueness rules for
attributes and displays the affected parts of the topology in the Node Attribute
Uniqueness Violations panel.
Performance monitoring support for the WebSphere Cluster
service
You can also use the Tivoli Service Automation Manager to install performance
monitoring software (agents) on the servers it provisions. This software detects
situations that cause degraded performance. Tivoli Service Automation Manager
reacts to such situations by presenting the user with options to analyze and resolve
the problems based on procedures that have been accepted as recommended
practices.
Note: This section applies only to the Tivoli Service Automation Manager
WebSphere Cluster service. It does not apply to the Tivoli Monitoring based
resource usage collection functions offered with the Self-Service Virtual Server
Provisioning component.
Each Tivoli Service Automation Manager service definition can have an associated
monitoring definition. These definitions provide the framework for implementing
the monitoring agents and controlling which types of events they react to.
If you need to modify the service definition delivered with Tivoli Service
Automation Manager to suit your requirements for performance monitoring, you
can change it using the set of Monitoring Definition applications.
Performance monitoring is supported in combination with IBM Tivoli Monitoring
on distributed platforms (Linux on System x, System p, and System z, and AIX).
The Monitoring Definition applications refer to the monitoring environment being
used with respect to any given service definition and service deployment instance.
IBM Tivoli Monitoring must be installed separately. It is no longer offered as an
installation option for Tivoli Service Automation Manager.
For details on setting up IBM Tivoli Monitoring, see the documentation for this
product (IBM Tivoli Monitoring and OMEGAMON XE documentation).
The Situation Analysis application handles the situations detected by the
monitoring agents in a deployed landscape.
When a monitoring agent detects a situation, it triggers the execution of a
workflow within Tivoli Service Automation Manager. The Situation Analysis
application, which is part of this workflow, is called and assigned to the user with
the role corresponding to the domain that applies to the agent and situation. For
example, if the scenario is part of the domain AIX or DB2, the application
execution within the workflow is assigned to the respective AIX administrator or
DB2 administrator role. A customer can also add a new domain by enhancing the
situation analysis workflow, adding new roles and adding new values to the
domain. An assignment to analyze the event appears in the inbox of the
corresponding user within the Start Center panel of the Tivoli process automation
engine user interface. When the user clicks on this assignment, the application to
analyze the event opens.
The following categories of information are shown:
v Information about the situation, including:
The system on which the situation occurred
The domain the situation is related to (WAS, DB2, AIX, LINUX)
The name of the situation
v One or more service deployment instances that are candidates for the situation
indicated.
You can then browse the entire context of the instance within which the situation
occurred, including the topology, the CIs involved, and the monitoring
definitions. Deployment instances are only shown when the affected system
belongs to that instance and the situation was defined for one of the agent types
deployed on the system for that instance. Each instance includes a link to the
Service Deployment Instances application so that you can browse all information
and details related to that instance.
v When a service deployment instance is selected, the recommended practices (also
called good practices) for the situation in the context of this instance.
You use analysis practices to analyze a situation and management practices to
resolve a situation. These practices can refer to actions in the Select Action menu
to launch another management tool (such as the TEP console) in order to
analyze or resolve a situation. Analysis practices are displayed as plain text. A
management practice can be either a management action described in text form
and intended to be executed manually by the user, or a Tivoli Service
Automation Manager management plan that can be executed automatically by
the system.
The list of practices is sorted by the probability that the practice will resolve the
event. The number of times the practice was chosen to resolve an event of this
type is used as the sorting criterion. That is, the practice chosen most often and
considered the best approach by an administrator (by clicking on the Feedback
Good Practice button) is displayed as the first element in the list.
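The ordering rule described above can be sketched as follows. The data model is hypothetical; the product tracks the feedback counter internally:

```python
# Sketch of the practice ordering rule: practices are ranked by how often
# administrators confirmed them with the "Feedback Good Practice" button,
# so the most frequently confirmed practice appears first in the list.

def rank_practices(practices):
    """Sort (name, feedback_count) pairs, most-confirmed first."""
    return sorted(practices, key=lambda p: p[1], reverse=True)
```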
Depending on the analysis of the event, you can choose among the following
actions to resolve the problem:
v Close the event if it is only informational and no action is required.
v Go to the Incident application.
v Start a management action from the set of management practices to resolve the
event.
For more information, see the applicable task descriptions and "Working with
performance monitoring functions" in the User's Guide.
Service structure
Services offered by Tivoli Service Automation Manager are organized as follows:
Service definition
Basic set of rules for creating a specific IT landscape. Tivoli Service
Automation Manager delivers preconfigured service definitions that can be
used as templates for customizing the definition to meet your specific
needs.
Service deployment instance
Actual IT landscape described by the service definition.
Service topology
A set of hardware servers and software representing the IT landscape.
Monitoring definition
A framework for activating monitoring software to detect possible
performance problems in a deployed landscape.
Management plan
Predefined modification of the provisioned landscape.
Job plan
A tool that implements management plans in terms of software.
Workflows
Preconfigured sequence of steps that perform a specific function. For
example, there are workflows to instantiate (deploy) a service, make
management changes once the instance has been deployed, and perform
error handling.
Work order
A tool that provides the framework for executing management plans.
Resources
Actual hardware and software that can be used in constructing the
landscape.
Service provider support
The new service provider feature allows you to create clouds that can be used by
multiple customers. Two types of resources are available within a cloud: single
customer objects that are assigned to customers individually, and multi-customer
objects that can be shared among customers. The underlying idea of the solution is
data segregation, which is used to associate resources with one or more customers.
Resources can be used more efficiently and customers are able to support multiple
internal organizations.
A customer level is introduced in the self-service user interface. Each customer is
associated with a specific group of teams and users. Users can be assigned to one
or more customers and teams. Users can view only the information and resources
that are related to their associated customers.
Customers and their resources are managed in the administrative user interface,
with the Cloud Customer Administration application. The application is organized
into tabs. The tabs support the following actions:
v Viewing the list of customers, their teams and users, requests, resources,
reservations, quotas, and limits
v Creating customer templates and modifying existing customers
v Assigning and returning customer resources
v Setting quotas and limits on customer resources
In Tivoli Service Automation Manager, there is a clear distinction between objects
that are assigned to only one customer and resources that can be shared by many
or even all customers within a cloud. The Tivoli Process Automation engine service
provider functionality segregates the data for this purpose. Multi-customer
resources are associated manually. Association tasks related to those resources are
performed in the Cloud Customer Administration application. Single customer
resources are assigned automatically to the customer for which the requesting user
is working. For example, if a person who is assigned to a specific customer
submits a request for a new team, the fulfillment flow associates the team with the
customer automatically.
Resources that can be shared by customers are:
v Cloud server pools
v Cloud network templates
v Cloud storage pools
v Master images
v Software products
v Cloud customer users
Resources that can be assigned only to individual customers are:
v Teams
v Users
v Service deployment instances, also called projects
v Saved server images
The multi-customer support in Tivoli Service Automation Manager introduces new
tasks related to the data segregation system. This system must ensure that users
see only the data assigned to them.
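The data segregation rule can be expressed as a minimal sketch: a user sees a resource only if it is associated with at least one of the user's customers. The structures below are illustrative; in the product, this filtering is implemented by the Tivoli process automation engine service provider functionality:

```python
# Minimal sketch of data segregation: each resource carries the set of
# customers it is associated with (one customer for single-customer
# objects, several for shared multi-customer objects).

def visible_resources(user_customers, resources):
    """resources: list of (name, set_of_associated_customers) pairs."""
    wanted = set(user_customers)
    # A resource is visible if it shares at least one customer with the user.
    return [name for name, customers in resources if customers & wanted]
```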
Four new roles are introduced in the self-service user interface: cloud customer
administrator, approver, multi-customer user (valid only for Cloud customer level
policy), and security administrator. Some significant changes were also made to the
authorities of the existing user roles. The main reason for the overall reorganization
is the introduction of the cloud customer level.
Users in the Cloud customer administrator role have authorities similar to the
authorities of a cloud administrator, but if they are on Cloud Customer Level
Policy, they are dedicated to individual customers. They have no authority to
register or unregister images.
Another role that is strictly connected with the cloud customer level is the
approver role. Approvers can check the status of projects, and see the service
requests associated with their customers. They are also authorized to approve
those service requests. Cloud administrators and cloud customer administrators
can also approve requests.
Important: When a cloud administrator creates another cloud administrator, both
the Membership and Grant options must be selected for the new user, because the
newly created cloud administrator loses the Grant option when the password is
changed.
The multi-customer user role has authority to administer multiple customers. Only
cloud administrators can create this role.
The security administrator can create, manage, and delete users.
User roles are called security groups starting with version 7.2.2, and a user can be a
member of multiple security groups.
You can add new customized security groups. For example, you can define a
separate security group that is authorized to use VMware specific offerings. You
can also reuse the existing roles and group management from LDAP. These
procedures are described in detail in the Tivoli Service Automation Manager
Extensions Guide.
Related information
For more information about creating and removing customers in the self-service
user interface, see Managing customers in the Tivoli Service Automation Manager
User's Guide.
For more information about the features of the Cloud Customer Administration
application, see Cloud Customer Administration application on page 6.
Reporting function
Reports show information related to service definitions and deployments.
Tivoli Service Automation Manager report types and content
The following types of reports are available with Tivoli Service Automation
Manager. The type of information shown in each report differs depending on the
service.
Table 5. Summary of Tivoli Service Automation Manager Reports and Content
All Service Definitions (Service Definitions application)
Report name: AllSD<service>.rptdesign
Selection parameters: none
Content: List of all service definitions and instances.
Service Definition Summary (Service Definitions application)
Report name: SDSummary<service>.rptdesign
Selection parameters: Status, Owner, From Date, To Date
Content: Basic service definition information and list of service
definitions and deployments.
Service Definition Details (Service Definitions application)
Report name: SDDetail<service>.rptdesign
Selection parameters: Service Name
Content: Service-dependent content with identifying information,
status, topology, monitoring and resource assignments, and service
definition history.
Service Deployment Summary (Service Deployment Instances application)
Report name: SISummary<service>.rptdesign
Selection parameters: Status, Owner, Service Definition Name, From
Date, To Date
Content: Service-dependent graphics and lists showing CPU and memory
levels, servers by owner, servers by platform architecture,
monitoring agents by type, and instances.
Service Deployment Details (Service Deployment Instances application)
Report name: SIDetail<service>.rptdesign
Selection parameters: Service Name
Content: Service-dependent content including graphics showing CPU,
memory, servers by architecture, monitoring agents, and service
history.
Co-Located Nodes (Service Deployment Instances application)
Report name: coLocation<service>.rptdesign
Selection parameters: Host Name
Content: Information concerning components of the service instance
that are located on the same physical server.
Service Deployment History Summary (Service Deployment Instances application)
Report name: SISummaryHistory<service>.rptdesign
Selection parameters: From Date, To Date
Content: Service-dependent graphics showing CPU, memory, and servers
(over time and for all deployments).
To see the complete list of reports available in your environment:
1. Log in to the administrative interface.
2. Click Go To > Administration > Reporting > Report Administration.
3. In the Application field, type PMZHB and press Enter. The list of all available
reports is displayed.
For details on generating reports, see Working with the service automation
reports on page 287. For post-installation setup tasks required to activate the
Reporting function, see Configuring the reporting function on page 288.
Tivoli Usage and Accounting Manager reporting function
As an option, Tivoli Service Automation Manager can use the Tivoli Usage and
Accounting Manager metering function. Tivoli Usage and Accounting Manager
helps you improve IT cost management. You use it to analyze your costs and track,
allocate, and invoice based on actual resource use by department, user, and many
other criteria.
Tivoli Service Automation Manager instantiates and manages service instances. It is
able to track creation, modification, and deletion of a service instance itself, as well
as the capacity assigned to it. This information can be periodically extracted and
transformed into a so-called CSR (Common Server Resource) file, which can then
be retrieved by Tivoli Usage and Accounting Manager to generate reports.
Tivoli Usage and Accounting Manager helps measure service usage data. In the
Tivoli Service Automation Manager context, this data is the amount of resources
(CPUs and memory) allocated or assigned to servers provisioned by the
Self-Service Virtual Server Provisioning component over time. Servers belong to
projects and projects belong to the service. Each deployment has a user. The user
can own deployments coming from different services, request new deployments, or
request changes to the existing deployments. For accounting purposes,
organizational information must be defined for each user, that is, at the minimum,
a department identification must be assigned to each user identification. For each
project, you can distinguish between the requester of the project and the
organization being charged with the project. You can define organizational
information for a project by adding an (optional) project account code for the team
that is using the project. Reports can then be generated based on the team and its
users. It is also important that no two deployment instances are created with the
same name, because this will result in inaccurate usage and accounting data.
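The extract-and-transform step described above can be sketched in simplified form. The exact CSR layout is defined by Tivoli Usage and Accounting Manager and is not reproduced here; the record structure, field order, and resource codes below are assumptions made purely for illustration:

```python
# Hypothetical sketch of producing a simplified comma-separated usage
# record: identifiers (who used the capacity) followed by resource
# quantities (what was used). Consult the Tivoli Usage and Accounting
# Manager documentation for the authoritative CSR file format.

def build_usage_record(project, user, date, cpus, memory_mb):
    identifiers = [("Project", project), ("User", user)]
    resources = [("CPU", cpus), ("MEMMB", memory_mb)]   # assumed codes
    fields = ["TSAM", date, date]                       # feed name, period
    fields.append(str(len(identifiers)))
    for name, value in identifiers:
        fields += [name, str(value)]
    fields.append(str(len(resources)))
    for code, qty in resources:
        fields += [code, str(qty)]
    return ",".join(fields)
```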
Before you can start using this function, you need to configure both Tivoli Service
Automation Manager and Tivoli Usage and Accounting Manager. For more
information, see Integrating Tivoli Usage and Accounting Manager on page 266.
To find out how to employ the collected data in the form of reports, see Working
with usage and accounting reports on page 291.
Additional software installation on the provisioned servers
With Tivoli Service Automation Manager, you can install one or more software
modules when you create a virtual machine, or you can install them on virtual
machines that were already provisioned.
You can use one of the product components, Tivoli Provisioning Manager, to define
software installation templates and then install that software on the virtual servers.
After the software product definitions are created in the administrative interface,
the self-service interface users can install the software modules on virtual
machines.
Before you install any software modules on a provisioned machine, create software
product definitions using the administrative user interface. See this topic for more
information on creating software product definitions.
In the self-service interface, you can install software modules by submitting three
types of requests:
Create Project with server type
You can select software to be installed on the servers that are to be
provisioned in a project. All servers have the same software installed.
Add server type
You can select software to be installed on a new server that is added to a
project.
Install Additional Software on a Server
You can install software modules on a server that is already provisioned.
You can select multiple software items and install them in a specified order. You
can also specify additional configuration parameters for the software items to be
installed.
VMware additional disk functionality
The VMware Additional Disk functionality enables you to provision additional
local storage when provisioning VMware virtual machines using the self-service
user interface.
This feature enables cloud and storage administrators to create cloud storage pools,
which map VMware datastores to customers. You can control whether an
additional storage is thin provisioned, and you can limit the storage of an
individual customer using the standard Tivoli Service Automation Manager quota
mechanisms.
The extension also enables team administrators or other users to request additional
storage from the storage pools during the virtual machine (VM) request process,
either when creating a new project or while adding servers to an existing project.
The lifecycle of these disks is equivalent to the lifecycle of the VM. This means that
if the VM is removed, the additional storage is also removed. Similarly, if a backup
image is made of a server, the additional disks will be part of that image and will
be restored when that image is restored.
Disks can be provisioned both for Windows and Linux VMs and can be formatted
and mounted in the VM automatically with a variety of formats.
Note: Only the first SCSI adapter is used for disk provisioning.
VMware clone server functionality
You can use the VMware clone server feature to clone a VMware server that is
provisioned by using Tivoli Service Automation Manager self-service UI. Internally,
after a direct disk clone, the cloned server gets a new IP address in the same IP
pool as defined for the source server. All other configurations or personalizations
of the existing server are left unchanged.
Note: If you use the ITM monitoring agent, then its configuration is not
personalized for the cloned machine by Tivoli Service Automation Manager.
Manual post configuration steps are required for the ITM setup to work properly
in the new VMware clone.
Managing POWER LPAR provisioning with VMControl
This topic provides reference information about managing POWER LPAR virtual
servers using IBM Systems Director VMControl.
VMControl quick reference
VMControl is a plug-in of IBM Systems Director. Tivoli Service Automation
Manager supports IBM Systems Director VMControl 2.3.1, 2.4.1.1, 2.4.2, 2.4.3.1
Enterprise Edition.
VMControl is an alternative way to provide POWER LPAR virtualization and
image management functions.
For more information about VMControl, see the section on VMControl in the IBM
Systems Director Information Center.
Setting up the environment
Before a Tivoli Service Automation Manager administrator can manage POWER
LPAR computers using VMControl, the environment needs to be configured. For
more information about the procedure, see Configuring cloud server pools on page
173.
Managing provisioning
All self-service requests, apart from modifying disk size and saving/restoring
virtual servers, are available for POWER LPAR provisioning via VMControl.
Recovering from errors
For troubleshooting when using VMControl, see Troubleshooting when using IBM
Systems Director VMControl on page 507.
Image management
Software and server images can be maintained in the Tivoli Provisioning Manager
Image Library for selection at provisioning time. New server image templates can
be created and imported to the library. Once the images are in the library, they
must be registered so that they can be used to provision new virtual servers. A
snapshot-like image of an entire provisioned server can also be saved and restored
in the current project, so that the server can be initialized at the state represented
by the image.
The Tivoli Service Automation Manager User's Guide describes the image-related tasks
that are available to administrators and users working with the self-service user
interface.
For other administrative tasks, see Managing server images on page 307.
Workload Deployer overview
IBM Workload Deployer is an optional, separately paid appliance that can be
integrated with Tivoli Service Automation Manager.
Note: The only supported version of IBM Workload Deployer is 3.0.
Workload Deployer offers a comprehensive set of patterns and capabilities that
focus on addressing WebSphere workloads. It enables creating, deploying,
monitoring, and managing service construction and delivery within Tivoli Service
Automation Manager. The appliance is based on special virtual images, such as the
WebSphere Application Server Hypervisor Edition, and allows creating patterns that
represent the target application environment. These patterns include the
infrastructure nodes of the application and the necessary configuration for the
environment. By using Workload Deployer, you can deploy them into your private
cloud. In this way, the appliance provides management and monitoring capabilities
that give you necessary controls over your running application environments.
The integration with Tivoli Service Automation Manager
The functions of Workload Deployer and Tivoli Service Automation Manager are
complementary in nature. Workload Deployer allows users to create, deploy, and
manage customized environments that are both based on the WebSphere
application and located in a private cloud. Tivoli Service Automation Manager
provides the tools to perform the standardization and the automation in the cloud
environment, thereby enabling rapid provisioning for a wide breadth of workloads.
The integration provides:
v the service delivery and management capability of Tivoli Service Automation
Manager
v the Workload Deployer patterns and capabilities
v a unified interface from which users can deploy and manage their cloud-based
environments
Tivoli Service Automation Manager enables creating and deploying application
environments based on any kind of software. Workload Deployer offers capabilities
for application environments based on IBM software, thereby simplifying
installation, configuration, integration, and orchestration scripting procedures for
those workloads.
The integration of the two applications can be made according to different
scenarios. Depending on what services you want to deliver via the cloud, the most
common scenarios are:
v When there is a need for unified management of private clouds that include
WebSphere.
v When you need to add request workflow capabilities to Workload Deployer.
After the integration, Tivoli Service Automation Manager becomes the primary
management device for your private cloud. The capabilities of both the products
remain the same. Tivoli Service Automation Manager enables the provisioning and
management of a wide array of cloud-based services including operating systems,
application platforms, and end-user applications. The integration of Workload
Deployer enhances the time to value for delivering WebSphere environments
regardless of what other services you deliver through Tivoli Service Automation
Manager. Moreover, integrating the products allows you to unify management of
your cloud services using the self-service user interface. Workload Deployer
delivers its patterns, namely rapid provisioning, consistent configurations, and
inherent product knowledge for WebSphere workloads without the need to switch
between multiple service management portals.
Maintenance mode overview
You can use maintenance mode to stop processing requests and restrict user access
during a maintenance window.
The administrator can schedule the maintenance mode by using the Cloud
Maintenance Administration application in the administrative user interface. The
system goes through the following statuses:
1. ONLINE
2. REQUEST_QUIESCE
3. QUIESCING
4. QUIESCED
5. REQUEST_ONLINE
6. PENDING_ONLINE
7. ONLINE
The statuses are described in detail in Maintenance mode statuses on page 345.
The system notifies the self-service interface users when the maintenance window
is scheduled. During the maintenance window, no requests or approvals can be
processed, and users can only view the user interface in read-only mode.
In order to quiesce the Tivoli Service Automation Manager management server, the
framework stops the major escalations that are used to process the service requests
in the various states. When the system enters QUIESCING status, some escalations
are deactivated. When the system enters PENDING_ONLINE status, the
escalations are activated again. When all escalations are reactivated, the system
status changes to ONLINE and users can submit requests again.
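The status cycle listed above is a simple linear state machine. The helper below is a sketch, not product code; the status names and their order are taken from the list above:

```python
# Sketch of the maintenance-mode status cycle: ONLINE through QUIESCED
# and back to ONLINE, in the documented order.

MAINTENANCE_CYCLE = [
    "ONLINE", "REQUEST_QUIESCE", "QUIESCING", "QUIESCED",
    "REQUEST_ONLINE", "PENDING_ONLINE", "ONLINE",
]

def next_status(current):
    """Return the status that follows `current` in the maintenance cycle."""
    i = MAINTENANCE_CYCLE.index(current)   # first occurrence of the status
    return MAINTENANCE_CYCLE[i + 1] if i + 1 < len(MAINTENANCE_CYCLE) else None
```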
For more information about managing maintenance mode, see Managing the
maintenance mode on page 344.
Chapter 2. Installing and upgrading Tivoli Service Automation
Manager
This section describes how to install and upgrade Tivoli Service Automation
Manager and its prerequisite software.
Overview of the Tivoli Service Automation Manager installation
process
During installation, you use the Tivoli Service Automation Manager Installation
Launchpad to check system prerequisites and then install the prerequisite software
and the Tivoli Service Automation Manager product itself. The principal
prerequisite for Tivoli Service Automation Manager, Tivoli Provisioning Manager,
includes the necessary middleware and base services for the shared product
environment. Tivoli Service Automation Manager itself is packaged as a set of
process management products (PMPs).
Important: Due to the complexity of the installation process, in particular with
respect to the middleware and base services, read the Tivoli Service Automation
Manager and Tivoli Provisioning Manager installation documentation before using
the Launchpad to install components.
Note: The Launchpad includes links and references to external documentation at
the appropriate points in the process. Become familiar with the Launchpad by
starting it on one or more of the prospective administrative or management system
servers.
Because Tivoli Provisioning Manager is a major component in the installation
process, you should refer also to the Tivoli Provisioning Manager Installation Guide
for installation requirements and details. However, use the Tivoli Service Automation
Manager Installation Launchpad, and not the corresponding Tivoli Provisioning
Manager Launchpad, to start the verification scripts and individual installers for
the middleware, base services, and individual products.
The information shown in the Launchpad is appropriate to the local operating
system. Information that does not apply to that operating system (for example, the
use of Windows as a management server) will either not appear in the Launchpad
or related links will not be active.
The Installation Launchpad is started on each server in the management system on
which software is to be installed. The Installation Launchpad is also started on the
administrative server to install certain platform-independent components on the
management system. Software related to the deployment of the components within
the management system is installed on the administrative server itself. You
therefore use the administrative server to install product upgrades and applications
Copyright IBM Corp. 2008, 2012 27
on the management system. The administrative server is not needed for normal
operation, but it is essential for providing service. If you lose the administrative
server, you will no longer be able to maintain the management server. Also, if you
make a backup of your management system, you must back up the state of the
administrative server that exactly matches it. Otherwise you will not be able to
apply any further PMP or any service upgrade to your management system.
The Installation Launchpad drives the installation by checking that prerequisites
have been met. It also calls installers for the prerequisite middleware and base
services, and the Process Solution Installer for the applications: Tivoli Provisioning
Manager, Service Request Manager, and the Service Automation Manager itself.
Optionally, you can also install Tivoli Service Automation Manager for the
WebSphere Application Server component.
Tivoli Service Automation Manager 7.2.4.4 supports two installation modes:
v Full installation
v Upgrade of a 7.2.2.1, 7.2.2.2, 7.2.3, 7.2.4, 7.2.4.1, 7.2.4.2, or 7.2.4.3 installation to
version 7.2.4.4
Both these installation modes are available from the Installation Launchpad that is
available in the Tivoli Service Automation Manager 7.2.4.4 upgrade package.
Important: Before starting the installation or upgrade, visit the Installation
Planning section of the Installation Launchpad. Select the correct Server Role and
Installation Type there and provide any other necessary information, such as
package paths.
Depending on which installation type you select, only the relevant subset of
installation steps is visible and accessible in the Installation Launchpad sections.
You must perform all the steps available in the Installation Launchpad in sequence
in order to successfully install or upgrade the product.
Planning for Tivoli Service Automation Manager
Before you start installing the product, review hardware, software, and other
requirements.
Hardware and operating system requirements for Tivoli
Service Automation Manager
The management and administrative servers are subject to certain requirements to
ensure proper installation and subsequent operation in the Tivoli Service
Automation Manager environment.
Overview of the management system topology
Tivoli Service Automation Manager and its prerequisite software are installed on
one to three servers, referred to in the Tivoli Service Automation Manager context
as the management system.
The three logical components (or software servers) of the management system are:
v provisioning server (also referred to within Tivoli Provisioning Manager as the
application server and in Tivoli Service Automation Manager as the management
server)
v database server
28 Tivoli Service Automation Manager V7.2.4.4 Installation and Administration Guide
v directory server
Refer to the Tivoli Provisioning Manager Installation Guide and the Tivoli Service
Automation Manager Installation Launchpad for illustrations of possible
topologies.
Note: The term management server is a Tivoli Service Automation Manager-specific
term that designates the hardware server accommodating at least the Tivoli
Provisioning Manager provisioning server and the Tivoli Service Automation
Manager applications. In the Tivoli Service Automation Manager context, the terms
management server and provisioning server are synonymous.
Each of these software servers can be installed (or might have been preinstalled)
on a separate hardware server (which can also be a virtual server), or they can be
co-located. In the
single-server configuration, all three components are installed on the same
hardware server.
An administrative server (commonly referred to in the Tivoli Provisioning Manager
context as the administrative workstation) is also needed to install selected Tivoli
Service Automation Manager related software on the management system. The
administrative server is later used to manage upgrades to the management server.
Note: If the prospective provisioning server of the management system also meets
the hardware and software requirements for an administrative server, the two
functions can be co-located. In other words, the same hardware server can be used
for both the provisioning server and the administrative server. Otherwise, a
separate administrative server is required.
Important: While installing the product using the launchpad, before specifying
system requirements of your servers, decide whether to use your current system as
a management server or an administrative server, or both. You can do that by
checking the appropriate box at the end of the Overview of the management system
topology section. A number of installation options are offered depending on your
selection.
Requirements for the management system servers
See the Tivoli Provisioning Manager Installation Guide for details and individual
space requirements for the middleware.
The following table lists the hardware platforms and operating systems that are
supported in the Tivoli Service Automation Manager management system
environment. The management servers running Tivoli Service Automation
Manager must meet the minimum requirements for processors, memory, disk
space, and swap space prescribed here or in the supplementary support
information (technotes) for Tivoli Provisioning Manager and Service Automation
Manager.
Table 6. Management server hardware and operating system requirements

System p
v IBM AIX 7.1, 64-bit
v IBM AIX 6.1 Technology Level 3, 64-bit
v IBM AIX 5.3 Technology Level 10, 64-bit
Chapter 2. Installing and upgrading 29
System x
v Red Hat Enterprise Linux Advanced Server 6 Update 3, x86 64-bit
v Red Hat Enterprise Linux Advanced Server 6 Update 2, x86 64-bit
v Red Hat Enterprise Linux Advanced Server 6 Update 1, x86 64-bit
v Red Hat Enterprise Linux Advanced Server 5 Update 6, x86 64-bit
v Red Hat Enterprise Linux Advanced Server 5 Update 5, x86 64-bit
v Red Hat Enterprise Linux Advanced Server 5 Update 4, x86 64-bit
v SUSE Linux Enterprise Server 11 Service Pack 2, x86 64-bit
v SUSE Linux Enterprise Server 11 Service Pack 1, x86 64-bit
v SUSE Linux Enterprise Server 11, x86 64-bit
v SUSE Linux Enterprise Server 10 Service Pack 3, x86 64-bit

System z
v Red Hat Enterprise Linux Advanced Server 6 Update 1 (IBM System z, 64-bit)
v Red Hat Enterprise Linux Advanced Server 5 Update 5 (IBM System z, 64-bit)
v Red Hat Enterprise Linux Advanced Server 5 Update 4 (IBM System z, 64-bit)
v SUSE Linux Enterprise Server 11 (IBM System z, 64-bit)
v SUSE Linux Enterprise Server 10 Service Pack 3 (IBM System z, 64-bit)

Note: If you use POWER7, AIX 6.1 TL4 SP2 is required.
Note: The following table lists the minimum free-space requirements for key
system directories during installation. These figures apply to a single-server
configuration. Refer to the Tivoli Provisioning Manager installation documentation
for space data concerning the individual software components that apply in a
multiserver configuration.
Table 7. Minimum free space requirements for the management system (AIX)
Directory or category Space required (single-server configuration)
/ (root) 9 GB
/home 20 GB
/opt 15 GB
/usr 9 GB
/tmp 10 GB
/var/tmp 8 GB
Recommended free space (approximate) 71 GB
Installation images 20 GB
Table 8. Minimum free space requirements for the management system (Linux)
Directory or data category Space required (single-server configuration)
/ (root) 9 GB
/home 20 GB
/opt 20 GB
/usr 5 GB
/tmp 5 GB
/var/tmp 8 GB
Recommended free space (approximate) 67 GB
Installation images 20 GB
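These minimums can be verified before installation with a short script. The sketch below uses the Table 8 (Linux) thresholds listed above and standard df output; adjust the directory list and figures for the AIX values in Table 7 as needed.

```shell
# Check key directories against the Table 8 (Linux) free-space minimums.
# Thresholds are in GB.
check_space() {
  dir=$1; need_gb=$2
  # -P forces one POSIX-format line per file system; column 4 is available KB
  avail_kb=$(df -Pk "$dir" 2>/dev/null | awk 'NR==2 {print $4}')
  avail_kb=${avail_kb:-0}
  avail_gb=$((avail_kb / 1024 / 1024))
  if [ "$avail_gb" -ge "$need_gb" ]; then
    echo "OK   $dir: ${avail_gb} GB free (need ${need_gb} GB)"
  else
    echo "LOW  $dir: ${avail_gb} GB free (need ${need_gb} GB)"
  fi
}

check_space /        9
check_space /home    20
check_space /opt     20
check_space /usr     5
check_space /tmp     5
check_space /var/tmp 8
```

Run the script as root on the prospective management server before starting the launchpad; any LOW line indicates a file system that must be extended first.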
A minimum of 8 GB of memory (RAM) is required. For adequate run-time
performance, 2 or more CPUs and 10 GB of memory (RAM) are recommended.
The minimum required memory for non-English systems is 10 GB.
Command line prompt: The command-line prompt for the root user must consist of
one of the following marks followed by a space: #, >, or $.
Example: To set the prompt to start with #, log in as root and add or change the
export PS1 line in file .profile (AIX) or .bashrc (Linux) to:
export PS1="# "
Log out and then in again for the change to take effect.
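As a quick sanity check, the prompt requirement can be tested from a script. This is only a sketch: PS1 is set to an example value here, whereas in a live root session the variable is already set by the profile.

```shell
# Verify that the prompt starts with '#', '>', or '$' followed by a space,
# as required for the root user during installation
PS1='# '   # example value; remove this line when checking a real session
case "$PS1" in
  '# '*|'> '*|'$ '*) echo "prompt OK" ;;
  *)                 echo "prompt NOT acceptable: adjust PS1 as shown above" ;;
esac
```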
Requirements for the administrative server
See the Tivoli Provisioning Manager Installation Guide for details.
The administrative server must meet the following minimum requirements:
Table 9. Administrative server requirements

System p
v IBM AIX 7.1, 64-bit
v IBM AIX 6.1 Technology Level 3, 64-bit

System x
v Windows Server 2003 Service Pack 2 (Standard, Enterprise, or Datacenter),
32- or 64-bit
v Windows Server 2008 R2
v SUSE Linux Enterprise Server 11, x86 64-bit
v SUSE Linux Enterprise Server 10 Service Pack 3, 64-bit
v Red Hat Enterprise Linux Advanced Server 6 Update 3, x86 64-bit
v Red Hat Enterprise Linux Advanced Server 6 Update 2, x86 64-bit
v Red Hat Enterprise Linux Advanced Server 6 Update 1, x86 64-bit
v Red Hat Enterprise Linux Advanced Server 5 Update 5, x86 64-bit
v Red Hat Enterprise Linux Advanced Server 5 Update 4, x86 64-bit
Note: Red Hat Enterprise Linux Advanced Server 6 Update 2 and Update 1 are
supported only when IBM Tivoli Directory Server is installed on a remote server.
Note: The software packages needed depend on the platform of the administrative
server. When using a Windows administrative server, software packaged for
Windows is required. When using a Linux administrative server, software
packaged for Linux is required. Software packaged for one platform cannot be used
on the other.
Disk space:
Recommended minimum free space is 40 GB, with an additional 10 GB for
installation images. 12 GB of the required 40 GB must be reserved for the
following directories:
v Windows: C:\IBM\SMP
v Linux and AIX: /opt/IBM/SMP
Important: If the administrative server is on the same system as the management
server, the space requirement for /opt/IBM/SMP must be added to the space
requirement for /opt on the management server.
Memory:
A minimum of 2 GB of memory (RAM) and 4 GB of swap space is required.
Requirements for the managed environment
The following requirements apply to the hardware and software involved in the
provisioning and management of virtual servers by Tivoli Service Automation
Manager:
Table 10. Managed-environment server requirements

Note: If not otherwise noted, for System x, both 32- and 64-bit architectures are
supported. For System p and System z, only the 64-bit architecture is supported.

System x

Hypervisors: VMware ESX 3.5; VMware ESXi 3.5
Guest operating systems:
v SUSE Linux Enterprise Server 10
v SUSE Linux Enterprise Server 11
v Red Hat Enterprise Linux 5.4
v Red Hat Enterprise Linux 5.5
v Microsoft Windows 2003
v Microsoft Windows 2008
Remarks: The following minimum build levels are needed to support Windows
2008: ESXi 3.5.0 Build 153875 and vCenter Server 2.5.0 Build 147633.

Hypervisors: VMware vSphere 4.0; VMware ESX 4.0 and ESXi 4.0; VMware
vSphere 4.1; VMware ESXi 4.1
Guest operating systems:
v SUSE Linux Enterprise Server 10
v SUSE Linux Enterprise Server 11
v Red Hat Enterprise Linux 5.4
v Red Hat Enterprise Linux 5.5
v Microsoft Windows 2003
v Microsoft Windows 2008
v Microsoft Windows 2008 R2
Remarks: The following minimum build levels are needed to support Windows
2008: vCenter Server 2.5.0 Build 147633 and VMware ESX 4.0 (hypervisor).

Hypervisors: VMware vSphere 5.0 and 5.0U1; VMware ESXi 5.0 and 5.0U1;
VMware vSphere 5.1 and 5.1U1; VMware ESXi 5.1 and 5.1U1
Guest operating systems:
v SUSE Linux Enterprise Server 11, 11 SP1, 11 SP2
v SUSE Linux Enterprise Server 10, 10 SP1, 10 SP2, 10 SP3, 10 SP4
v Red Hat Enterprise Linux 5.4, 5.5, 5.6, 5.7, 5.8
v Red Hat Enterprise Linux 6, 6.1, 6.2, 6.3
v Microsoft Windows 2003
v Microsoft Windows 2008
v Microsoft Windows 2008 R2
v Microsoft Windows 7
v Microsoft Windows 8
v Microsoft Windows 2012
Remarks: Toleration mode only.

Hypervisor: KVM on Red Hat Enterprise Linux 5.4
Guest operating systems:
v Red Hat Enterprise Linux 5.4
v Red Hat Enterprise Linux 5.5
v SUSE Linux Enterprise Server 11, 11 SP1, 11 SP2
v Microsoft Windows 2008 (64-bit only)
v Microsoft Windows 2008 R2 (64-bit only)
Remarks: RHEV-H 5.4 is not supported.

Hypervisor: KVM on Red Hat Enterprise Linux 6.2
Guest operating systems:
v Red Hat Enterprise Linux 6.2
v Red Hat Enterprise Linux 6.3

Hypervisor: Xen 3.0.3 on Red Hat Enterprise Linux 5.3
Guest operating systems:
v Red Hat Enterprise Linux 5.4
v SUSE Linux Enterprise Server 10.2
v CentOS 5.3

System p

Hypervisors: Power 6; Power 7
Guest operating systems:
v AIX 5.3 TL 12
v AIX 6.1 TL 4
v AIX 6.1 TL 7
v AIX 7.1

Hypervisor: Power 7 Blade
Guest operating systems:
v AIX 6.1 TL 4
v AIX 6.1 TL 7
v AIX 7.1
Remarks: Power 5 and Power 6 Blades are not supported.

Hypervisor: PowerVM via VMControl 2.4.2 via IBM Systems Director (Power 6
and Power 7)
Guest operating systems:
v AIX 6.1 TL 4
v AIX 6.1 TL 7

Hypervisor: PowerVM via VMControl 2.4.1.1, 2.4.2, and 2.4.3.1
Guest operating systems:
v AIX 6.1 TL 4
v AIX 6.1 TL 7
v AIX 7.1

System z

Hypervisor: z/VM 5.4
Guest operating systems:
v SUSE Linux Enterprise Server 10
v SUSE Linux Enterprise Server 11
v Red Hat Enterprise Linux 5.4
v Red Hat Enterprise Linux 5.5
Remarks: Including DirMaint. RACF is optional.
Software requirements for Tivoli Service Automation Manager
Tivoli Service Automation Manager depends on other software to provide services
to complete requests.
Important: Version numbers in the column Version (new installation) apply only
to a new installation of Tivoli Service Automation Manager 7.2.4. If Tivoli
Service Automation Manager is updated from version 7.2.2.1 or 7.2.2.2, the
column Version (updated) applies.
See also the Installation Guide for Tivoli Provisioning Manager in its knowledge
center.
The following table summarizes the software components in a Tivoli Service
Automation Manager environment.
Table 11. Summary of software in the Tivoli Service Automation Manager
environment

Tivoli Service Automation Manager
v Tivoli Service Automation Manager (Base): 7.2.4 (new installation); 7.2.4
(updated).
v Tivoli Service Automation Manager Installation Launchpad: 7.2.4 (new
installation); 7.2.4 (updated).
v Tivoli Service Automation Manager for WebSphere Application Server
(optional): 7.2.4 (new installation); 7.2.4 (updated).

Tivoli Service Request Manager for Service Providers
v Tivoli Service Request Manager: 7.2.0 with Fix Pack 1 (7.2.0.1) for both new
and updated installations. A previous installation of Service Request Manager
without Tivoli Provisioning Manager can be used. In this case, the middleware
and base services were installed with Service Request Manager.

Tivoli Provisioning Manager
v Tivoli Provisioning Manager: 7.2.1 (new installation); 7.2.1 (updated). A
previous installation of Tivoli Provisioning Manager can be used. In this case,
the middleware and base services were installed with Tivoli Provisioning
Manager.
v Tivoli Provisioning Manager: Interim Fix 3 (new installation); Interim Fix 3
(updated).

Advanced Workflow Components
v Advanced Workflow Components: 7.3.0.3 (new installation); 7.3.0.3 (updated).
Delivered in the Tivoli Service Automation Manager package.

IBM Tivoli Monitoring (optional)
v IBM Tivoli Monitoring (optional): 6.2.1 or 6.2.2 for both new and updated
installations. Must be installed separately.
v IBM Tivoli Monitoring for AIX (optional): 6.2.1 or 6.2.2 for both new and
updated installations. Must be installed separately.

Base Services
v Base services: 7.1.1.9 (new installation); 7.1.1.8 (updated). This component is
included with Tivoli Provisioning Manager.

Middleware (included with Tivoli Provisioning Manager)
v Tivoli Directory Server (LDAP): 6.3.0.0 (new installation); 6.1.0.10 or 6.2.0.2
(updated). When you are using a new installation of Tivoli Service Automation
Manager, use version 6.1.0.10 on AIX 5.3.
v DB2: 9.7 FP4 (new installation); 9.5 FP3a (updated). To install DB2, you need
at least 512 MB RAM. Note: Tivoli Service Automation Manager version 7.2.4.3
supports the upgrade to DB2 9.7 Fix Pack 4.
v WebSphere Application Server Network Deployment: 6.1.0.37 (new
installation); 6.1.0.23 (updated). When you are using a new installation of
Tivoli Service Automation Manager, use version 6.1.0.29 on SUSE Linux 11.
Note: Tivoli Service Automation Manager version 7.2.4.3 supports the migration
to WebSphere Application Server 7 Fix Pack 27.
v IBM HTTP Server: 6.1.0.37 (new installation); 6.1.0.23 (updated). When you
are using a new installation of Tivoli Service Automation Manager, use version
6.1.0.29 on SUSE Linux 11.
v Self-service user interface.

External Software
v Tivoli Usage and Accounting Manager: 7.1 or higher for both new and updated
installations. Note: This product is not installed with Tivoli Service Automation
Manager. Tivoli Service Automation Manager provides an interface to an
external Usage and Accounting Manager server.
v Web browser for Installation Launchpad: as supported by the Common
Launchpad initiative, for both new and updated installations. The browser must
be installed on the server on which the launchpad is started to install the
software. Refer to the Release Notes, Restrictions and limitations on page 44,
the Tivoli Provisioning Manager Installation Guide, and other update
information for browser restrictions or differences between the browser
products.
v Web browser for self-service user interface: For new installations, Microsoft
Internet Explorer 8 and 9; Mozilla Firefox 10 ESR (Extended Support Release);
Mozilla Firefox 10 or higher. For updated installations, Microsoft Internet
Explorer 8 and 9; Mozilla Firefox 10 ESR, 17 ESR, and 24 ESR; Mozilla Firefox
10 or higher. The browser can be installed on any system with access to the
application server of the management system. Note: Microsoft Silverlight must
be installed if Internet Explorer 7 is used. For more information, see the User's
Guide. Refer to the Release Notes, Restrictions and limitations on page 44,
the Tivoli Provisioning Manager Installation Guide, and other update
information for browser restrictions or differences between the browser
products.
v Web browser for administrative user interface: The browser can be installed on
any system with access to the application server of the management system.
Refer to the Release Notes, Restrictions and limitations on page 44, the
Tivoli Provisioning Manager Installation Guide, and other update information
for browser restrictions or differences between the browser products.
v Tivoli Remote Execution and Access: Required for secure communication
within the network, which includes the administrative server and management
system. It is also employed for communication between Tivoli Service
Automation Manager and Usage and Accounting Manager. This software does
not require separate installation.
Tivoli Provisioning Manager
This product provides support for provisioning services within Tivoli Service
Automation Manager. The Tivoli Provisioning Manager installation media also
include the base services and middleware components, if they were not already
installed.
Note: Be sure to study the documentation for the applicable Tivoli Provisioning
Manager release regarding requirements placed on other components in the
Provisioning Manager environment. For example, for Tivoli Provisioning Manager
the command-line prompt for the root user must end with #, $, or >.
Base services
The Tivoli Provisioning Manager product includes base services, which are shared
services required for provisioning, process automation, and service management
functions. These services are also used by other components, such as Tivoli Service
Automation Manager and Tivoli Service Request Manager. "Base services" is a
short name for Tivoli's process automation engine.
Middleware components
In the Tivoli Service Automation Manager environment, the Tivoli Provisioning
Manager product serves as the source for the middleware in support of the
provisioning and process automation functions if Service Request Manager is not
already installed.
Tivoli Service Request Manager
Tivoli Service Request Manager provides the framework for Tivoli Service
Automation Manager that enables the user to define service offerings and issue
requests for virtual server provisioning and management.
IBM Tivoli Monitoring
This optional product provides server monitoring services for the Tivoli Service
Automation Manager-managed environment. IBM Tivoli Monitoring comprises:
Tivoli Enterprise Monitoring Server
Collects information from and controls monitoring agents that collect
availability data and alerts from operating systems and applications.
Tivoli Enterprise Portal Server
Analyzes and pre-formats data collected from the monitoring agents and
then routes the data to clients for presentation in views.
Tivoli Enterprise Portal
Presents data from monitoring agents in graphical and tabular views. The
monitoring agents supported are:
v Linux OS agent
v UNIX OS agent
v AIX Premium agent
Note: Tivoli Monitoring is implemented differently and for different purposes in
the WebSphere Cluster service and Self-Service Virtual Server Management
component.
Tivoli Remote Execution and Access
Tivoli Service Automation Manager uses this product to contact other systems and
run commands and scripts on the remote system.
Managed environments
v System z (z/VM Linux)
v System x (VMware)
v System x (Xen)
v System x (KVM)
v System p (PowerVM)
v System p (PowerVM via VMControl)
Packages
See the individual package requirements in the sections on preparing the
management servers.
Web browser settings
Tivoli Service Automation Manager supports a number of web browsers for the
management of the self-service and the administrative user interfaces.
Make sure that you meet the browser requirements for Tivoli Service Automation
Manager:
v Use one of the supported browsers:
Microsoft Internet Explorer 8, 9, and 10
Mozilla Firefox 17 ESR, 18 ESR, 24 ESR (Extended Support Release)
Note: Tivoli Service Automation Manager supports only ESR versions of
Mozilla Firefox.
Google Chrome 27 (27.0.1453.94 m)
v Install Microsoft Silverlight when using Internet Explorer.
v Enable native XMLHTTP support when using Internet Explorer. In the browser
toolbar, click Tools > Internet Options > Advanced and select the Enable native
XMLHTTP support check box.
v Enable JavaScript, CSS, SSL, and cookies.
v Set your browser resolution to a minimum of 1024x768.
Refer to the Release Notes, Restrictions and limitations, and other update
information for browser restrictions or differences between the browser products.
Requirements for Self-Service Virtual Server Management
This section describes special requirements for the Self-Service Virtual Server
Management functionality.
v For VMware:
Tivoli Service Automation Manager requires a vCenter Server 2.5 U4 (also
known as Virtual Center) implementation. Tivoli Service Automation Manager
employs workflows that require VMware templates, which are available only
in the Virtual Center. Certain minimum build levels of the VMware software
apply if you are using Windows Server 2008 as a managed-environment
operating system.
On each VMware template guest operating system that is used for
provisioning by Tivoli Service Automation Manager, VMware Tools must be
installed.
For Red Hat Enterprise Linux (RHEL) VMware templates, the
resize2fs executable file version 1.39 or higher must be installed to be able
to use the disk resize function. This level check is performed by the
Cloud_Linux_Online_Configure_Disks workflow. If the provisioned server
has a resize2fs version that is lower than 1.39, the workflow copies the file
from the repository to that server. If the file is not in the repository (it is not
by default), you will not be able to modify disk size on that server.
v For System z:
See Requirements for the System z environment for information related to
provisioning virtual servers (as z/VM guests) on System z.
For more information, see Chapter 2, Installing and upgrading Tivoli Service
Automation Manager, on page 27.
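The resize2fs level check described above can be reproduced by hand. The sketch below compares a version string against the 1.39 minimum; reading the string from the resize2fs binary itself is shown only in a comment, since the binary is present only on the provisioned server.

```shell
# Compare a resize2fs version string against the required 1.39 minimum.
# On the provisioned server the version string can be read with:
#   resize2fs 2>&1 | awk 'NR==1 {print $2}'
meets_min() {
  major=${1%%.*}                 # part before the first dot
  rest=${1#*.}
  minor=${rest%%.*}              # part between the first and second dots
  [ "$major" -gt 1 ] || { [ "$major" -eq 1 ] && [ "$minor" -ge 39 ]; }
}

for v in 1.38 1.39 1.41.12; do
  if meets_min "$v"; then
    echo "$v: disk resize supported"
  else
    echo "$v: too old, the workflow copies resize2fs from the repository"
  fi
done
```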
Requirements for the System z environment
This section describes special requirements for System z environment.
Restriction:
v Setting up the System z master provisioning environment is part of the Tivoli
Service Automation Manager deployment phase. This environment must be
operational before Self-Service Virtual Server Provisioning can be used to
provision Linux-based System z servers. For detailed instructions on configuring
z/VM, creating the Linux master images, and networking options, refer to
Configuring the z/VM environment for Tivoli Service Automation Manager on
page 151.
See also Requirements for Self-Service Virtual Server Management for specific
requirements related to Self-Service Virtual Server Management.
v Linux is the only operating system available for provisioning of z/VM guests
via the Self-Service Virtual Server Management offerings.
See also Restrictions and limitations on page 44.
Software requirements for the z/VM host platform:
v z/VM 5.4
v DirMaint
v RACF (optional)
v For MAPSERVE: SUSE Linux Enterprise Server 10 SP2
System z virtual servers:
A System z virtual server is a z/VM guest with virtual direct access storage
devices (DASD), also referred to as disks, and network interfaces. This server is
represented in the Tivoli Provisioning Manager DCM and is associated with a
z/VM virtual guest name.
Virtual servers are created during Tivoli Service Automation Manager provisioning
by Provisioning Manager using the Tivoli Service Automation Manager Cloud
Management Subsystem. Before this process is started, a master System z
provisioning environment must be established, comprising:
v z/VM host platform(s)
v Linux on System z master images
v Boot server
v Network
v Virtual server templates
v DASD resources
v NIC resources
Each z/VM host platform has the following components:
v MAPSERVE guest ID on which the Tivoli Service Automation Manager
IBM-System-z-MAPSERVE package runs. This package includes an RPC client
that communicates with the RPC server (VSMSERVE)
v VSMSERVE guest ID (RPC server)
v DIRMAINT guest ID
v Linux master images
v Linux guests provisioned by Tivoli Service Automation Manager
In order to provision System z software on a z/VM host platform, the Tivoli
Service Automation Manager management server must communicate with the host
platform on the management network. During service instantiation, the OS
administrator selects a host platform from a list of predefined platforms. The
provisioning process then creates a new z/VM virtual guest on the selected host
platform and clones a software image on the DASD attached to the newly
provisioned guest. Performance requirements imposed by the application workload
must be considered when selecting a host platform for provisioning.
The Tivoli Service Automation Manager and other system administrators are
responsible for setting up a host platform, MAPSERVE and VSMSERVE guest IDs,
MASTER images, Vswitches, management, and a customer network and the
associated components.
Once the MAPSERVE guest ID is configured, the MAPSERVE RPM that is
packaged with the Tivoli Service Automation Manager product must be installed
on the MAPSERVE guest.
For further information see Configuring the z/VM environment for Tivoli Service
Automation Manager on page 151.
Providing the installation source files
This topic describes how to prepare the installation source files for Tivoli Service
Automation Manager and its prerequisites on various operating system platforms.
You can install the software directly from the product CD or DVD or download it
from Passport Advantage at http://www.ibm.com/software/howtobuy/
passportadvantage/pao_customers.htm. Fix packs can be downloaded from Fix
Central at http://www.ibm.com/support/fixcentral/. You specify the location of
the CD or DVD or the source directory for all required packages in the Tivoli
Service Automation Manager launchpad. Any software package must be made
available on the appropriate management or administrative server on which the
launchpad is started for that particular step in the process.
Note:
v The sizes of some of the files to be downloaded to the prospective management
server could exceed limits imposed by the operating system for that server. For
example, the default maximum file size for AIX is 1 GB.
v Ensure that any packages or programs that are required to extract downloaded
files have been installed beforehand. See the packages section for that particular
management server operating system.
v The path names for the downloaded files must contain only alphanumeric
characters or an underscore. Spaces or plus signs, for example, are not
permitted.
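The path-name restriction can be checked with a simple pattern match before the launchpad is started. The sketch below accepts alphanumerics, underscores, and the / separator; the example paths are illustrative only, not required locations.

```shell
# Reject download paths containing characters other than alphanumerics,
# underscore, or the / separator (spaces and plus signs are not permitted)
valid_path() {
  case "$1" in
    *[!A-Za-z0-9_/]*) return 1 ;;   # any other character is rejected
    *)                return 0 ;;
  esac
}

valid_path /opt/IBM_media/tsam_7244 && echo "accepted: /opt/IBM_media/tsam_7244"
valid_path "/opt/install images"    || echo "rejected: path contains a space"
valid_path "/opt/media+fixes"       || echo "rejected: path contains a plus sign"
```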
The Tivoli Service Automation Manager Installation Launchpad starts one or more
times on a combination of servers, depending on the scenario. In each case, the
server must have access to the source files that pertain to that environment.
Software installed from the administrative server is installed primarily on the
provisioning server of the management system (which is ultimately the Tivoli Service
Automation Manager management server). Some software deployment information
is recorded on the administrative server for subsequent component installations.
Therefore, this same administrative server must be used for any such installations.
However, the administrative server is not required for normal operation.
The table shows which steps apply to which server and which source package is
used on that server:
Table 12. Installation steps and servers on which the corresponding installation
files must be available

v Middleware. Source: Tivoli Provisioning Manager 7.2 Middleware. Files must be
available on the provisioning server (Tivoli Service Automation Manager
management server), the database server, and the directory server.
v Base services. Source: Tivoli Provisioning Manager 7.2.1 Installation. Files must
be available on the administrative server.
v Tivoli Provisioning Manager core components. Source: Tivoli Provisioning
Manager 7.2.1 core components. Files must be available on the administrative
server.
v Tivoli Provisioning Manager 7.2.1 iFix 5 core components. Source: Tivoli
Provisioning Manager 7.2.1 iFix 5 core components. Files must be available on
the administrative server.
v Tivoli Provisioning Manager Web components. Source: Tivoli Provisioning
Manager 7.2.1 Installation. Files must be available on the administrative server.
v Tivoli Provisioning Manager 7.2.1 iFix 5 Web components. Source: Tivoli
Provisioning Manager 7.2.1 iFix 5 Web components. Files must be available on
the administrative server.
v Advanced Workflow Components. Source: Advanced Workflow Components.
Files must be available on the administrative server.
v Service Request Manager. Source: Tivoli Service Request Manager for Service
Providers 7.2. Files must be available on the administrative server.
v Tivoli Service Automation Manager. Source: Tivoli Service Automation Manager
7.2.4.4 base. Files must be available on the administrative server and the
provisioning server.
v Tivoli Service Automation Manager for WebSphere Application Server. Source:
Tivoli Service Automation Manager for WebSphere Application Server 7.2.4.4.
Files must be available on the administrative server.
In the Tivoli Service Automation Manager Installation Launchpad, navigate to
Installation Planning > Installation Files to specify the locations where you
placed all the required installation packages. You must unpack the source packages
or have the DVDs available:
v For Tivoli Provisioning Manager base product:
  UNIX: TPM_V721_Install_Unix.tar, TPM_V721_CoreComp_1of2_Unix.tar, and
  TPM_V721_CoreComp_2of2_Unix.tar.
  Windows: TPM_V721_Install_Win.zip.
v For middleware:
  TPM_V720_Midlwr_<os>.tar, where <os> is the OS of your management server:
  AIXPPC64, LinuxX64, or zLinux64.
v For Tivoli Provisioning Manager for Images: CI0D2ML.tar (optional component).
v For Tivoli Provisioning Manager 7.2.1 iFix 4:
  7.2.1.0-TIV-TPM-Multi-IF00004.zip
  Windows: 7.2.1.0-TIV-Components-Windows-IF00004.zip
  Linux: 7.2.1.0-TIV-Components-Linux-IF00004.tar
  Linux on System z: 7.2.1.0-TIV-Components-zLinux-IF00004.tar
  AIX: 7.2.1.0-TIV-Components-AIXPPC64-IF00004.zip
  Note: Unpack the -Components- packages into the tpm_core subdirectory in
  the same location where the -Multi- part was unpacked.
v For Service Request Manager for Service Providers: Either the location on the
  DVD or where you unpacked TSRMfSP_V720.tar.
v For Service Request Manager Fix Pack: 7.2.0.1-TIV-SRM-FP0001.zip.
Note: The Tivoli Service Automation Manager DVD is required to start the
Installation Launchpad on all systems.
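The unpacking rule in the Note above can be sketched as follows. This is an illustrative shell sketch using a scratch directory; the unzip and tar invocations are commented out and show where the real packages from this list would be extracted (the staging path is an example, not a product default).

```shell
# Illustrative sketch of the layout described above: the -Multi- package is
# unpacked into a staging directory and the platform -Components- package into
# its tpm_core subdirectory. Paths are examples only.
STAGE=$(mktemp -d)              # stands in for your real staging directory
mkdir -p "$STAGE/tpm_core"
# unzip 7.2.1.0-TIV-TPM-Multi-IF00004.zip -d "$STAGE"
# tar -xf 7.2.1.0-TIV-Components-Linux-IF00004.tar -C "$STAGE/tpm_core"
[ -d "$STAGE/tpm_core" ] && echo "tpm_core subdirectory in place"
```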
Restrictions and limitations
The following restrictions and limitations apply to Tivoli Service Automation
Manager:
v System z virtual server provisioning is done strictly in a z/VM environment.
Tivoli Service Automation Manager does not provision native LPARs on System
z. Operating systems are provisioned only on z/VM virtual guests, not native
LPARs. Self-Service Virtual Server Provisioning requires Linux as the z/VM
guest operating system.
v The maximum length of a service deployment instance name (and hence also a
deployment project name within the Self-Service Virtual Server Management
component) is 30 characters.
v Software resource template names for Tivoli Service Automation Manager DCM
entries cannot contain a forward slash character ("/"). This character is used
internally as a separator.
v See Self-Service Virtual Server Management on page 2 for platform restrictions
that apply to Self-Service Virtual Server Management.
v Use Windows platforms as managed-environment operating systems.
v The Backup and Restore functions in the self-service user interface (also referred
to as Save Image and Restore Image) are not currently available for servers in
Xen or z/VM managed environments.
44 Tivoli Service Automation Manager V7.2.4.4 Installation and Administration Guide
v Tivoli Service Automation Manager assumes that a disk on the z/VM hypervisor
(DASD 3390 Model 9) has exactly 10017 cylinders. This means that the minidisk
pool must be configured with disk volumes that can hold at least 10017
cylinders.
v If you immediately remove both servers from a project that contains only two
servers, the first request is accepted but the second results in an error,
because a project cannot exist without servers. This is a race condition: the
first request is still queued when the second is submitted, so Tivoli Service
Automation Manager cannot recognize that the second request should not be
accepted.
v The time zones of the database server and the WebSphere server must be
consistent if they run on different nodes. Otherwise, service requests fail.
v Tivoli Service Automation Manager does not support configurations in which a
Virtual Center object, such as a Cluster or Data Center, is defined inside a
subfolder.
v Tivoli Service Automation Manager does not support discovery of existing
virtual machines that were provisioned out of band. Virtual machines that are
to be managed by Tivoli Service Automation Manager must be provisioned through
Tivoli Service Automation Manager, for example by using the self-service user
interface or the API.
v Use separate data stores for provisioning virtual servers and for storing
saved images and image templates. The space that saved images occupy is not
considered in the available storage capacity calculation during resource
checking.
Installing Tivoli Service Automation Manager
Refer to this section for instructions on how to install a new instance of Tivoli
Service Automation Manager.
Full Installation of Tivoli Service Automation Manager 7.2.4.4
Tivoli Service Automation Manager supports three installation scenarios. Each of
these scenarios can be divided into three phases.
Three basic installation scenarios are supported:
v New installation in which no software has been previously installed.
v Installation using an existing Tivoli Provisioning Manager 7.2.1, in which only
Service Request Manager and Tivoli Service Automation Manager are installed.
v Installation using an existing Service Request Manager 7.2.0.1, in which only
Tivoli Provisioning Manager and Tivoli Service Automation Manager are
installed.
The following table summarizes the process steps involved in each of these
scenarios:
Table 13. Installation scenarios

                                                             Existing Tivoli   Existing Service
                                              New            Provisioning      Request Manager
Installation segment                          installation   Manager 7.2       7.2.0.1
Install middleware                                 X
Install base services                              X
Install Tivoli Provisioning Manager Core           X                                 X
Install Tivoli Provisioning Manager Web            X                                 X
Install Tivoli Provisioning Manager for            X                                 X
  Images
Install Service Request Manager 7.2                X               X
Install Service Request Manager 7.2                X               X                 X
  Hot fix 7
Install Service Request Manager Runbook            X               X                 X
  Automation (RBA) 7.3.0.4
Install Tivoli Service Automation Manager          X               X                 X
  Applications
Install Tivoli Service Automation Manager          X               X                 X
  enablement keys
Install additional configuration files             X               X                 X
Install automation packages                        X               X                 X
Install Tivoli Service Automation Manager          X               X                 X
  for WebSphere Application Server (optional)
Installation process flow:
Tivoli Service Automation Manager is installed in three basic phases:
1. Planning and pre-installation phase:
a. Verifying the overall installation prerequisites
b. Downloading selected software (or using an installation CD or DVD) to the
administrative and management servers. See Providing the installation
source files on page 42
c. Defining whether the current server will be used as administrative server or
management server. See Preparing the environment for installation on
page 48.
d. Preparing the management and administrative servers for the installation.
See
v Preparing an AIX management server on page 48
v Preparing a Linux management server on page 58
v Preparing a Windows administrative server on page 66
v Preparing a Linux administrative server on page 66
You also use the Installation Launchpad to run scripts that verify that
prerequisite packages have been installed and other system environment
settings have been performed to allow proper installation.
See Preparing the environment for installation on page 48.
2. Installation phase:
a. In accordance with the selected management system topology, installing the
software on each applicable server in the management system by starting
the Tivoli Service Automation Manager Installation Launchpad on that
server, and installing selected platform-independent software components
by starting the Launchpad on the administrative server.
This process is divided into installation segments, in which each segment is
performed by starting the Launchpad on either the administrative server or
one of the management system servers. The order in which the Launchpad is
started, and the server on which each step must be performed, is determined by
the software design. The
segments involve some switching between the administrative and
management servers. See Installing Tivoli Service Automation Manager
and its prerequisite software on page 68.
In addition to the base Tivoli Service Automation Manager software and its
prerequisites, certain optional components can be installed depending on
environment.
Note: When the installation steps are performed on the administrative
server, the software is installed primarily on the management system.
Although the administrative server is not required in the runtime
environment, some software and control information is retained on the
administrative server for later use.
See Installing Tivoli Service Automation Manager and its prerequisite
software on page 68.
3. Post-installation phase:
a. Setting up the basic interfaces between Tivoli Service Automation Manager,
Tivoli Provisioning Manager, Service Request Manager, and Tivoli Process
Automation engine
b. Verifying that the basic Tivoli Process Automation engine environment is
operational.
c. Configuring the managed environments for the Tivoli Service Automation
Manager services that apply to your installation and making this
information available to Tivoli Provisioning Manager
d. Configuring the Tivoli Service Automation Manager interfaces to external
products
See Post-installation steps on page 82 and the configuration chapter for the
applicable virtualization environment.
Preparing the environment for installation
Perform these steps before starting the installation.
Procedure
1. Review the topology descriptions and hardware requirements in Planning for
Tivoli Service Automation Manager on page 28. Read the Tivoli Provisioning
Manager Installation Guide and the supplementary release notes for Tivoli
Provisioning Manager and Service Automation Manager. Tivoli Service
Automation Manager supports a subset of the Provisioning Manager
environments.
2. Define whether you want to use the current system as a management server, an
administrative server, or both. Different installation options on the product
installation page are offered depending on your selection.
Note: The check boxes are enabled or disabled depending on the installed
operating system. For example, you can use a Windows system only as an
administrative system and a System z system as a management system. If the
current operating system is not supported for a selection, the corresponding
check box is disabled.
3. Ensure that the installation source files for Tivoli Service Automation Manager,
Tivoli Provisioning Manager, and Service Request Manager are available on a
CD or DVD or on the system on which the launchpad is started for each
individual step (see Providing the installation source files on page 42).
4. Prepare the management system servers.
Note: When you run the hostname command on the management server, ensure
that the output contains the fully qualified domain name as configured on your
DNS server and is the same after reboot (that is, mytsamserver.mydomain.com).
5. Prepare the administrative server.
What to do next
Each of these steps is described in more detail in the sections that follow. When
you complete these steps, proceed to Installing Tivoli Service Automation
Manager and its prerequisite software on page 68.
Preparing an AIX management server
Note: See the Tivoli Provisioning Manager Installation Guide for details.
Make sure that you have sufficient disk space for the installation. See Hardware
and operating system requirements for Tivoli Service Automation Manager on
page 28.
Important: Make sure that the date and time on the management server is set
correctly, so that the Tivoli Service Automation Manager scheduler performs a
reservation for the same time frame that the user reserves a project.
Verifying settings required for installing the middleware on an AIX
management server:
Note: The Installation Launchpad can start a script that verifies the following
settings prior to starting the installation. If the Launchpad reports a discrepancy,
resolve the problem before continuing.
Refer to the Tivoli Provisioning Manager Installation Guide and release notes for
Tivoli Provisioning Manager and Tivoli Service Automation Manager for details
and updated information.
User limits:
The number of files must be at least 8192. Other limit categories must be set to
unlimited.
For the Installation Launchpad script to be able to verify settings prior to starting
the installation, the AIX limits file must look like this:
default:
fsize = -1
core = 2097151
cpu = -1
data = 262144
rss = 65536
stack = 65536
nofiles = 2000
ctginst1:
fsize = -1
cpu = -1
data = -1
rss = -1
stack = -1
stack_hard = -1
root:
fsize = -1
cpu = -1
data = -1
rss = -1
stack = -1
stack_hard = -1
nofiles = -1
Set the maximum number of processes per user to 2048 or higher:
1. Run the following command:
smitty chgsys
2. Ensure that the value of Maximum number of PROCESSES allowed per user is set
to 2048 or higher.
Enable asynchronous I/O
Note: This step is required only for AIX 5.3. In AIX 6.1, it is enabled by default.
Execute the following commands to enable asynchronous I/O:
chdev -l aio0 -P -a autoconfig=available
mkdev -l aio0
lsdev -F status -t aio
lsdev -F status -t aio must return the following:
root@myserver ~# lsdev -F status -t aio
Available
If this is not successful, use the smitty utility:
smitty chgaio
and
smitty aio (Configure Defined Asynchronous I/O)
to change the I/O settings.
Note: On AIX 6.1 TL 4, the command export JAVA_COMPILER=none must be run
before starting the middleware installer.
Verification
The Installation Launchpad can start a verification script that checks the
requirements for the middleware. Resolve any discrepancies noted and rerun the
script until no errors are reported.
1. On the provisioning server of the management system, start the Tivoli Service
Automation Manager Installation Launchpad as described in Starting the
launchpad and performing the preinstallation steps on page 67.
2. Navigate to Pre-Installation Steps > Requirements for installing the
middleware and click the link.
3. In each case, the script reports any errors found. Resolve any problems
reported.
4. Run the script again until no errors are found.
Verifying the settings required to install Tivoli Provisioning Manager on an AIX
management server:
Use this task to verify that the settings on the AIX management server are correct.
Note: The Installation Launchpad provides for invoking a script to verify the
following settings prior to starting the installation. If the Launchpad reports a
discrepancy, resolve the problem before continuing.
Refer to the Tivoli Provisioning Manager Installation Guide and release notes for
Provisioning Manager and Service Automation Manager for details and updated
information.
Checking the root password:
Ensure that the root password for the management server does not contain any
special characters such as "&". An "&" in the root password can cause the
installation process to fail.
Checking the root shell prompt:
The last non-blank character of the command line prompt for the root user must be
$, #, or > on Tivoli Provisioning Manager. Edit the '.profile' file in root's home
directory, adding or changing the line: export PS1="# " (a hash mark followed by a
space). See Hardware and operating system requirements for Tivoli Service
Automation Manager on page 28.
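The prompt requirement can be checked mechanically. The following sketch is illustrative; the prompt_ok helper is ours, not part of the product, and it simply tests whether a prompt string ends in $, #, or > once trailing spaces are ignored.

```shell
# Check that a prompt string ends (ignoring trailing whitespace) in $, #, or >,
# as the installer requires for the root user. Helper and samples are examples.
prompt_ok() {
  p=$(printf '%s' "$1" | sed 's/[[:space:]]*$//')
  case "$p" in
    *'$'|*'#'|*'>') echo yes ;;
    *)              echo no ;;
  esac
}
prompt_ok '# '           # yes
prompt_ok '[root@x ~]$ ' # yes
prompt_ok 'root: '       # no
```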
50 Tivoli Service Automation Manager V7.2.4.4 Installation and Administration Guide
Updating the login script:
The default login shell on AIX is the Korn shell (ksh), and the default login script
is the file .profile in the home directory of the user name. Even if the login shell
is changed, the profile file may still be the login script. Ensure that you know
which login script is used for the root user on your management server so that you
apply the required changes to the correct script.
Verifying operating system requirements:
Ensure that you are using a supported operating system at the correct version level
(for details, refer to Planning for Tivoli Service Automation Manager on page
28).
Verify the following information:
1. Check the operating system version with the following command:
oslevel -r
Returned value for AIX 5.3 TL10:
5300-10
Returned value for AIX 6.1 TL3:
6100-03
2. Verify that the computer has a 64-bit CPU with the command:
prtconf -c
The output for a 64-bit CPU is:
CPU Type: 64-bit
Increasing file system size:
If you find that a file system is too small for Tivoli Service Automation Manager,
use the smitty utility to increase the size:
1. Log on as root.
2. Execute smitty storage
3. Select File Systems.
4. Select Add / Change / Show / Delete File Systems.
5. Select Enhanced Journaled File Systems
6. Select Change / Show Characteristics of an Enhanced Journaled File System.
7. Select the file system name from the list and press Enter.
8. Enter the new size in the Number of units field. The size is expressed as a
number of 512 byte units.
9. Press Enter and then F10 to exit smitty.
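Because the Number of units field is expressed in 512-byte units, a target size must be converted before it is entered. A small sketch of that arithmetic follows; the gb_to_blocks helper and the 16 GB figure are our own examples, not product values.

```shell
# The smitty "Number of units" field for an Enhanced JFS is in 512-byte units;
# convert a target size in GB to that unit count.
gb_to_blocks() {
  echo $(( $1 * 1024 * 1024 * 1024 / 512 ))
}
gb_to_blocks 16   # -> 33554432 units for a 16 GB file system
```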
Increasing AIX file size limit and number of descriptors
Edit the file /etc/security/limits and complete these required steps, which are
essential to correct installation. In the example, ctginst1 is used as the name of the
database instance user.
1. Edit the /etc/security/limits file by opening it in a text editor.
2. Locate the section for the ctginst1 and root user, and then make changes to the
parameters below using the values listed:
ctginst1:
fsize = -1
cpu = -1
data = -1
rss = -1
stack = -1
stack_hard = -1
root:
fsize = -1
cpu = -1
data = -1
rss = -1
stack = -1
stack_hard = -1
nofiles = 8192
A value of -1 indicates that there is no limit.
Note: Always add a blank line between the root section and the sections before
and after it.
3. Save the file and exit.
Important: You must reboot the system for these changes to take effect.
Increasing AIX paging space:
Increase the paging space to a minimum of 4 GB, but preferably to the total
amount of physical memory. To determine the size of the physical memory, enter:
root@myserver # bootinfo -r
The following sample output shows that the amount of physical memory (in units
of 1 KB) is 8 GB:
8388608
To list the available paging space, enter:
root@myserver # lsps -a
Page Space Physical Volume Volume Group Size %Used Active Auto Type
hd6 hdisk1 rootvg 4096MB 1 yes yes lv
To add paging space:
root@myserver # chps -s 32 hd6
which adds 32 logical partitions to the paging space. The default logical partition
size is 128 MB. Add 8 logical partitions for each GB of paging space.
root@myserver # lsps -a
Page Space Physical Volume Volume Group Size %Used Active Auto Type
hd6 hdisk1 rootvg 8192MB 1 yes yes lv
The paging space is now 8 GB.
The number of logical partitions that need to be added as paging space depends
on the size of a logical partition. To determine the size of a logical partition, run
the following command:
lslv hd6
Calculate the number of logical partitions that need to be added to reach the
desired amount of paging space based on the previous numbers: (available
physical memory - current amount of paging space) / (size of logical partition).
To add more logical partitions, use the following command:
chps -s xx yyy
where xx is the number of logical partitions to add and yyy identifies the logical
volume.
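The calculation above can be sketched as shell arithmetic. The lp_to_add helper is our own, and the figures simply reproduce the chps -s 32 example from this section.

```shell
# Logical partitions to add = (target paging MB - current paging MB)
# / logical partition size in MB. Helper name and figures are illustrative.
lp_to_add() {
  echo $(( ($1 - $2) / $3 ))
}
lp_to_add 8192 4096 128   # -> 32, matching 'chps -s 32 hd6' above
```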
Changing umask for root
Note: This value is only valid during installation. When installation is complete,
reset it to its original value. If necessary, record the existing value by entering
'umask' with no arguments.
Execute the command:
chuser umask=0022 root
Alternatively, you can set the umask with smitty:
To change the umask setting to 0022 using SMIT on AIX, perform the following
steps:
1. Log on as root.
2. Run the following command from a shell prompt:
smitty user
3. Select Change/Show Characteristics of a User.
4. Enter root in the User NAME field.
5. Press Enter.
6. Scroll down and find File creation UMASK and change the value to 0022.
7. Press Enter to save the changes.
8. Exit smitty interface.
9. Log off and log back in for the changes to take effect.
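The effect of the 0022 umask can be demonstrated in a scratch directory. The sketch below uses GNU stat syntax; on AIX, inspect the resulting modes with ls -ld instead.

```shell
# With umask 0022, new files get mode 644 and new directories mode 755.
d=$(mktemp -d)
(
  umask 0022
  touch "$d/file"
  mkdir "$d/dir"
)
stat -c '%a' "$d/file"   # -> 644
stat -c '%a' "$d/dir"    # -> 755
```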
Setting the home for root user
usermod -d /home/root root
Ensure that the value for Maximum number of PROCESSES is set to 2048 or higher
by using the command smitty chgsys.
Ensure fully qualified (long) host name.
The hostname command must return the fully qualified host name, not just the
short host name.
The following example shows the use of an incorrect host name:
root@myserver ~# hostname
myserver
If only the short host name is returned, set the host name using smitty hostname
or the following commands:
root@myserver ~# chdev -l inet0 -a hostname=myserver.mycompany.com
inet0 changed
The following example shows the use of a correct host name:
root@myserver ~# hostname
myserver.mycompany.com
root@myserver ~# /usr/sbin/hostid `hostname`
The DNS server should also return the same long host name as returned by the
hostname command:
root@myserver ~# nslookup `hostname`
Server: ips-boeb-a.mycompany.com
Address: 9.152.120.241
Name: myserver.mycompany.com
Address: 9.152.26.115
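A quick way to test whether a name is fully qualified is to check for a domain separator. The check_fqdn helper below is our own illustration, not a product tool; the host names are the examples used above.

```shell
# A fully qualified host name contains at least one dot (host.domain).
check_fqdn() {
  case "$1" in
    *.*) echo "fully qualified: $1" ;;
    *)   echo "short name only: $1" ;;
  esac
}
check_fqdn "myserver.mycompany.com"   # fully qualified
check_fqdn "myserver"                 # short name only
```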
Modifying /etc/hosts
If you are using the file /etc/hosts to resolve IP addresses, the file must be
configured correctly.
The file must include:
v The IP address, fully-qualified domain name, and host name of this management
server as the first entry.
v The IP address 127.0.0.1, with loopback as the domain name and localhost as
the host name.
The following example shows settings for a computer with the host name myserver.
# Internet Address Hostname # Comments
9.152.21.28 myserver.mycompany.com myserver
127.0.0.1 loopback localhost # loopback (lo0) name/address
Note: AIX installations differentiate between the IP address for the localhost host
name and the actual host name of the computer. Ensure that your /etc/hosts file
includes the static IP address for both localhost and the actual host name of the
computer.
Important: You must first define the real IP address and then the local host IP
address (127.0.0.1).
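The ordering rule can be checked with a short script. The sample file reproduces the example above; the awk check is our own sketch, so adapt it to your actual host name before relying on it.

```shell
# The first non-comment entry of /etc/hosts must be the real address of the
# server, not 127.0.0.1. Sample contents mirror the example above.
f=$(mktemp)
cat > "$f" <<'EOF'
# Internet Address Hostname # Comments
9.152.21.28 myserver.mycompany.com myserver
127.0.0.1 loopback localhost # loopback (lo0) name/address
EOF
first=$(awk 'NF && $1 !~ /^#/ {print $1; exit}' "$f")
[ "$first" != "127.0.0.1" ] && echo "order ok: first entry is $first"
```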
Permit SSH root login:
Ensure that the PermitRootLogin option is enabled (uncommented and set to yes)
in the /etc/ssh/sshd_config file.
Setting file permissions for /tmp and /var/tmp
Ensure that the file permissions are 1777:
chmod 1777 /tmp
chmod 1777 /var/tmp
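Mode 1777 (world-writable plus the sticky bit) can be verified as shown below, demonstrated on a scratch directory with GNU stat; on AIX, ls -ld shows the equivalent drwxrwxrwt.

```shell
# Mode 1777 = rwx for all users plus the sticky bit, as required for /tmp
# and /var/tmp. Demonstrated on a scratch directory.
d=$(mktemp -d)
chmod 1777 "$d"
stat -c '%a' "$d"   # -> 1777
```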
Enabling the WebSphere Application Server SOAP port
Ensure that the WebSphere SOAP port (default 8879) can be reached from the
administrative server by switching off the firewall on the provisioning server.
Verification
The Installation Launchpad provides for invoking a verification script to check the
previously described operating system settings. Resolve any discrepancies noted
and rerun the script until no errors are reported.
1. On the provisioning server of the management system (the Tivoli Service
Automation Manager management server), start the Tivoli Service Automation
Manager Installation Launchpad as described in Starting the launchpad and
performing the preinstallation steps on page 67.
2. To check the operating system prerequisites, navigate to Pre-Installation Steps
> Requirements for installing Tivoli Provisioning Manager and click the link.
3. In each case, the script reports any errors found. Resolve the problems
reported.
4. Run the script again until no errors are reported.
Packages required on an AIX management server:
You must install certain packages and software components before installing Tivoli
Service Automation Manager.
Note: The Installation Launchpad provides for invoking a script to verify package
installation before starting the installation. If the Launchpad reports a discrepancy,
resolve the problem before continuing.
For details and updated information, see Tivoli Provisioning Manager Installation
Guide and the release notes for this product and Tivoli Service Automation
Manager.
The following packages and other software must be installed:
v bash 3
If you use bash 4, downgrade to bash 3.
v bash-doc
v bos.loc.iso.en_US
v cairo 1.0.2-6
v curl
v expect (5.42 or higher) (expect.base for AIX 6.1)
v GNU tar (1.14 or higher) (in addition to UNIX tar)
v glib
v gtk2 2.8.3 or higher
v openssh (5.0.0.5301 or higher)
v openssl
v perl 5.8.2
v procmail (available in the Linux Toolbox for AIX)
v tcl (tcl.base for AIX 6.1)
v tk (tk.base for AIX 6.1)
v unzip
v wget 1.9
v xlC.rte version 9
v xlC.rte.9.0.0.1
v xlC.aix61.rte.10.0.0.2
v X11.base
v X11.apps
v X11.adt
v zip
v xterm (the path is /usr/bin/X11/xterm)
Note: For more information about the packages, see the Tivoli Provisioning
Manager documentation (Tivoli Provisioning Manager knowledge center >
Preinstallation tasks > Step 5: Verify component requirements > Required packages
(UNIX and Linux)).
You can obtain packages from the AIX toolbox download site:
http://www.ibm.com/systems/p/os/aix/linux/toolbox/download.html. In
addition, certain packages must be accessible using specific paths.
Note: The unzip version offered at this download site cannot handle .zip files
larger than 2 GB. Use another program to extract files larger than 2 GB.
Web browsers
To learn about supported browsers, refer to Web browser settings on page 39.
Installing bash-doc
root@myserver / # rpm -Uvh <path_to_updates>/bash-doc-3.0-1.aix5.1.ppc.rpm
bash-doc-3.0-1
Installing expect 5.4x.x
If expect is not installed, install it using the following command:
root@myserver # rpm -Uvh <path_to_updates>/expect-5.42.1-3.aix5.1.ppc.rpm
expect-5.42.1-3
If there are problems with dependencies, the command will generate the following
result:
root@myserver # rpm -Uvh <path_to_updates>/expect-5.42.1-3.aix5.1.ppc.rpm
error: failed dependencies:
libtcl8.4.so is needed by expect-5.42.1-3
libtk8.4.so is needed by expect-5.42.1-3
If there is a problem with dependencies, also update the dependencies TCL and TK
and reissue the command:
root@myserver / # rpm -Uvh --force --nodeps
<path_to_updates>/tcl-8.4.7-3.aix5.1.ppc.rpm
tcl ##################################################
root@myserver / # rpm -Uvh --force --nodeps
<path_to_updates>tk-8.4.7-3.aix5.1.ppc.rpm
tk ##################################################
(now reissue the command)
root@myserver / # rpm -Uvh
<path_to_updates>expect-5.42.1-3.aix5.1.ppc.rpm
expect ##################################################
Installing the GNU tar utility
The native UNIX tar utility does not support long file names. Ensure that the latest
GNU version of tar (gtar) is installed so that installation files can be extracted. You
can use the following commands to check that gtar is installed and which version
it is:
which gtar
gtar --version
If the GNU tar package is not installed, follow these steps:
1. Download the GNU tar package from http://www.ibm.com/systems/p/os/
aix/linux/toolbox/download.html.
2. Ensure that the GNU tar is in the PATH variable of /etc/environment.
Note: The native UNIX tar utility must also be available.
Checking the xlC* packages
To check if the C/C++ runtime library is installed, use lslpp -l xlC*.
root@myserver ~# lslpp -l xlC*
Fileset Level State Description
----------------------------------------------------------------------------
Path: /usr/lib/objrepos
xlC.aix50.rte 9.0.0.1 COMMITTED XL C/C++ Runtime for AIX 5.x
xlC.cpp 9.0.0.0 COMMITTED C for AIX Preprocessor
xlC.msg.EN_US.cpp 9.0.0.0 COMMITTED C for AIX Preprocessor
Messages--U.S. English UTF
xlC.msg.EN_US.rte 9.0.0.0 COMMITTED C Set ++ Runtime
Messages--U.S. English UTF
xlC.rte 9.0.0.1 COMMITTED XL C/C++ Runtime
xlC.aix50.rte and xlC.rte must be available and must have a version > 6.0.
Note: For AIX 6.1, the package name is xlC.aix61.rte
Package path requirements
Some utilities are required by Tivoli Provisioning Manager and must be available
from the following locations:
v bash: /bin/bash
v expect, tar, gzip: /usr/bin
If they are not installed in these locations, create a symbolic link from those
directories to the actual location.
1. Check whether bash is available at /bin/bash:
root@myserver ~# ls -l /bin/bash
lrwxrwxrwx 1 root system 27 Sep 4 15:25 /bin/bash@ -> ../../opt/freeware/bin/bash*
root@myserver ~# echo $?
0
2. Ensure that the "shells" line in /etc/security/login.cfg points to /bin/bash
and not /usr/bin/bash. If both /usr/bin/bash and /bin/bash are in the shells
line, remove /usr/bin/bash from it.
3. Define /bin/bash as the default shell for user root:
a. Enter smitty user
b. Select Change / Show Characteristics of a User
c. Enter the user name root
d. Ensure that Initial PROGRAM is /bin/bash.
4. Check that expect and gzip are accessible from /usr/bin:
root@myserver # ls -l /usr/bin/expect
lrwxrwxrwx 1 root system 29 Sep 4 15:25 /usr/bin/expect@ -> ../../opt/freeware/bin/expect*
root@myserver # echo $?
0
root@myserver # ls -l /usr/bin/gzip
lrwxrwxrwx 1 root system 27 Sep 4 15:25 /usr/bin/gzip@ -> ../../opt/freeware/bin/gzip*
root@myserver # echo $?
0
5. Verify that GNU tar is in the environment path. Ensure that the PATH variable
contains both the native UNIX tar and GNU tar paths, and that the native
UNIX tar path is defined before the GNU tar path. For example:
PATH=/usr/bin:/opt/freeware/bin/:/etc:/usr/sbin:
/usr/ucb:/usr/bin/X11:/sbin:/usr/java14/jre/bin:/usr/java14/bin
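Creating such a link can be sketched in a scratch tree. The stub expect script and the mktemp root below are illustrative only; on the real server the link target is the actual install location, for example /opt/freeware/bin/expect, and the link lives in /usr/bin.

```shell
# Make a utility reachable from a required path by linking to its real
# location, mirroring the /usr/bin/expect -> ../../opt/freeware/bin/expect
# example above. Demonstrated in a scratch tree.
root=$(mktemp -d)
mkdir -p "$root/opt/freeware/bin" "$root/usr/bin"
printf '#!/bin/sh\n' > "$root/opt/freeware/bin/expect"
chmod +x "$root/opt/freeware/bin/expect"
ln -s ../../opt/freeware/bin/expect "$root/usr/bin/expect"
readlink "$root/usr/bin/expect"   # -> ../../opt/freeware/bin/expect
```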
Verifying that the required software packages are installed
You can check the installation of individual packages with the lslpp command. For
example, to check the 'expect' package, enter lslpp -L expect*.
To verify that all packages are installed, run a script from the Launchpad:
1. On the provisioning server of the management system (the Tivoli Service
Automation Manager management server), start the Tivoli Service Automation
Manager Installation Launchpad as described in Starting the launchpad and
performing the preinstallation steps on page 67.
2. Navigate to Pre-Installation Steps > Pre-requisite packages for Tivoli
Provisioning Manager
3. Click the link to run the verification script for the required packages. Error
messages are issued if any discrepancies are noted.
4. Install any package reported as missing or at an incorrect level.
5. Run the script again until all errors have been resolved.
Note: The script can only make general decisions about whether a required
package is installed. It cannot always determine the version of a package.
Successful execution of the script does not necessarily mean that the version of an
existing package is the correct one. The user must verify that the correct package
has been installed.
Preparing a Linux management server
Note: Unless otherwise noted, this section applies generally to Linux on System x
and Linux on System z and to the Red Hat and SUSE distributions. Differences are
noted where applicable.
Refer to the Tivoli Provisioning Manager Installation Guide and the release notes for
Tivoli Provisioning Manager and Tivoli Service Automation Manager for details.
Make sure that you have sufficient disk space for the installation. See Hardware
and operating system requirements for Tivoli Service Automation Manager on
page 28.
Important:
v Make sure that the date and time on the management server is set correctly, so
that the Tivoli Service Automation Manager scheduler performs a reservation for
the same time frame that the user reserves a project.
v Make sure that the correct umask value is set up. The umask value can be set for
all users either in /etc/bashrc file or in /etc/profile file. By default, most
Linux distributions set it to 0022 (022) or 0002 (002). To change the umask value:
1. Open the /etc/profile or /.bashrc file.
2. Enter: # vi /etc/profile or $ vi /.bashrc
3. To set a new umask value, add or modify the following line: umask 022
4. Log out of the shell.
Verifying the settings required to install the middleware on a Linux
management server:
This section describes operating system settings that are required to install the
middleware on a Linux management server.
Note: The Installation Launchpad provides for invoking a script to verify the
following settings before starting the installation. If the Launchpad reports a
discrepancy, resolve the problem before continuing.
Refer to the Tivoli Provisioning Manager Installation Guide and the release notes for
Tivoli Provisioning Manager and Tivoli Service Automation Manager for details
and updated information.
Set LD_LIBRARY_PATH for zLinux: Set the LD_LIBRARY_PATH variable:
LD_LIBRARY_PATH=/usr/lib
Modify kernel parameters for DB2: You can find the kernel parameters using the
appropriate commands, for example:
Free memory
# free -o -b
total used free shared buffers cached
Mem: 10432819200 10371768320 61050880 0 77840384 9825697792
Swap: 21476163584 0 21476163584
The following values must be set:
v kernel.shmmax=physical memory in bytes
v kernel.shmall=twice the memory in bytes as required by DB2
v kernel.msgmax=65536
v kernel.msgmnb=65536
For more details on these settings refer to the DB2 documentation.
After setting the kernel parameters, run the following command, so that the new
settings take effect:
/sbin/sysctl -p
Set user limits: To set the user limits:
1. Log on as root.
2. Edit the file /etc/security/limits.conf. When a limit is not specifically
configured for a user, the default value is used. The default is typically
unlimited for all the required limits except stack size.
a. Remove any existing entries for ctginst1 so that the default limits are
assigned. Ensure that the default limits are set to unlimited.
b. To change the stack size to the maximum value for ctginst1 and root, add
the following entries to the file:
ctginst1 - stack unlimited
root - stack unlimited
c. To set the limit for open file descriptors to 65536 or higher for the root user,
add the following entry to the file:
root - nofile 65536
3. For SUSE Linux Enterprise Server 11, add the following values to /etc/profile:
Chapter 2. Installing and upgrading 59
ulimit -v unlimited
ulimit -m unlimited
4. Log out as root and log back in for the changes to take effect.
5. If DB2 is already installed, restart the database instance by running the
following commands:
su - dasusr1 -c "db2admin start"
su - ctginst1 -c "db2start"
6. Verify the system resource limit settings by running the following command:
ulimit -a
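A quick way to check the open-file-descriptor limit for the current shell against the 65536 minimum from step 2c is sketched below:

```shell
# Compare the current nofile limit against the required minimum of 65536.
nofile=$(ulimit -n)
if [ "$nofile" = "unlimited" ] || [ "$nofile" -ge 65536 ]; then
  echo "nofile limit OK ($nofile)"
else
  echo "nofile limit too low ($nofile); update /etc/security/limits.conf"
fi
```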
Verify the settings: The Installation Launchpad provides for invoking a verification
script to check the requirements for installing the middleware.
1. On the provisioning server of the management system, invoke the Installation
Launchpad as described in Starting the launchpad and performing the
preinstallation steps on page 67
2. To check the middleware prerequisites, navigate to Pre-Installation Steps >
Requirements for installing the middleware and click the link.
3. Resolve any discrepancies noted and rerun the script checks until no errors are
reported.
Verifying the settings required to install Tivoli Provisioning Manager on a Linux
management server:
This section describes certain operating system settings needed to install Tivoli
Provisioning Manager.
Note: The Installation Launchpad can start scripts that verify the following
settings prior to starting the installation. If the Launchpad reports a discrepancy,
resolve the problem before continuing.
Refer to the Tivoli Provisioning Manager Installation Guide and the Tivoli
Provisioning Manager and Tivoli Service Automation Manager release notes for
details and updated information.
Check the root password:
Ensure that the root password for the management server does not contain any
special characters such as "&". An "&" in the root password can cause the
installation to fail.
Check the root shell prompt:
The last non-blank character of the command line prompt for the root user must be
$, #, or > for Tivoli Provisioning Manager. A suggested prompt is a hash mark
followed by a space. To set the prompt, log on as the root user and add or change
the "export PS1" line in file .profile (AIX) or .bashrc (Linux) to:
export PS1="# "
Log off as root and log back in for these changes to take effect.
Verify operating system requirements
v Red Hat Enterprise Linux distribution
Verify that you have the required release installed:
cat /etc/redhat-release
v SUSE Linux Enterprise Server distribution
Verify that you have the required release installed:
cat /etc/SuSE-release
v libstdc++.so.5
Verify that libstdc++.so.5 is installed.
The middleware installation program requires the libstdc++.so.5 system library to
be present on a Linux system.
[root@myserver ~]# rpm -qal | grep -e "/libstdc++.so.5$"
/usr/lib/libstdc++.so.5
[root@myserver ~]# ls -l /usr/lib/libstdc++.so.5
lrwxrwxrwx 1 root root 18 May 15 2008
/usr/lib/libstdc++.so.5 -> libstdc++.so.5.0.7
For SUSE Linux Enterprise Server 10 Service Pack 3, ensure that you have both
the 64-bit and the 32-bit versions of libstdc++.so.5 installed. Run the following
commands to verify that:
ls -la /usr/lib/libstdc++.so.5
ls -la /usr/lib64/libstdc++.so.5
v Kernel version
Version 2.6 of the kernel is required. Run the following command to verify the
kernel version:
uname -r
The output must begin with "2.6". For example:
[root@myserver ~]# uname -r
2.6.9-67.ELsmp
Ensure fully qualified (long) host name
On the management server, the host name must be fully qualified. If it is not, run
the following command as root to set it to a fully qualified name (replacing
myserver.mycompany.com accordingly):
hostname myserver.mycompany.com
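The check can be scripted; in the sketch below, is_fqdn is a hypothetical helper name, and the test simply looks for a dot in the host name:

```shell
# is_fqdn NAME: succeed if NAME contains at least one dot.
is_fqdn() { case "$1" in *.*) return 0 ;; *) return 1 ;; esac; }

if is_fqdn "$(hostname)"; then
  echo "host name is fully qualified"
else
  echo "set it with: hostname myserver.mycompany.com"
fi
```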
Modify /etc/hosts
If you are using the file /etc/hosts to resolve hostnames to IP addresses, the file
must be configured correctly. Even if you are not using /etc/hosts to resolve
hostnames, it is recommended that your /etc/hosts file be configured as follows to
avoid errors reported by the pre-installation check utility.
The file must include:
v The IP address, fully-qualified domain name, and host name of this management
server as the first entry.
v The IP address 127.0.0.1, with loopback as the domain name and localhost as
the host name
The following example shows settings for a computer with the host name myserver.
# Internet Address Hostname # Comments
9.152.27.78 myserver.mycompany.com myserver
127.0.0.1 loopback localhost # loopback (lo0) name/address
Important: You must first define the real IP address and then the local host IP
address (127.0.0.1).
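The ordering rule can be checked mechanically. In the sketch below, check_hosts_order is a hypothetical helper that warns when the loopback address appears before the real IP address:

```shell
# check_hosts_order FILE: warn if the first non-comment entry is the loopback.
check_hosts_order() {
  first=$(grep -Ev '^[[:space:]]*(#|$)' "$1" | head -n 1 | awk '{print $1}')
  if [ "$first" = "127.0.0.1" ]; then
    echo "loopback is first -- move the real IP entry above it"
  else
    echo "first entry is $first (OK)"
  fi
}

check_hosts_order /etc/hosts
```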
Disable /tmp cleanup (tmpwatch and anacron)
By default, the tmpwatch script runs daily and removes files in /tmp that have not
been accessed in 10 days. The anacron scheduler also runs scripts in
/etc/cron.daily when the computer is booted. By default, the Tivoli Provisioning
Manager installer is installed under /tmp.
Before you start Tivoli Provisioning Manager installation, ensure that the
automated cleanup of /tmp is disabled for the duration of the installation. Check
/etc/crontab and /etc/anacrontab to determine whether scripts in /etc/cron.daily
are called. If so, either delete or relocate the tmpwatch (RHEL) or
suse.de-clean-tmp (SUSE) file, or comment out the applicable entries in the file:
v (Red Hat) Edit the /etc/cron.daily/tmpwatch script and comment out the lines
that perform the cleanup of /tmp.
v (SUSE) Edit file /etc/cron.daily/suse.de-clean-tmp and comment out the
following lines:
# cleanup_tmp ${MAX_DAYS_IN_TMP:-0} ${TMP_DIRS_TO_CLEAR:-/tmp}
# cleanup_tmp ${MAX_DAYS_IN_LONG_TMP:-0} ${LONG_TMP_DIRS_TO_CLEAR}
Example:
#/usr/sbin/tmpwatch -x /tmp/.X11-unix -x /tmp/.XIM-unix -x /tmp/.font-unix
-x /tmp/.ICE-unix -x /tmp/.Test-unix 240 /tmp
#/usr/sbin/tmpwatch 720 /var/tmp
#for d in /var/{cache/man,catman}/{cat?,X11R6/cat?,local/cat?}; do
# if [ -d "$d" ]; then
# /usr/sbin/tmpwatch -f 720 $d
# fi
#done
Note: The verification script reports an error if the file exists. If you retain the files
but comment out the entries that are applicable to /tmp cleanup, you can ignore
the script error message.
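The presence of the cleanup scripts can be detected with a short sketch; check_tmp_cleanup is a hypothetical helper name:

```shell
# check_tmp_cleanup DIR: report any known /tmp cleanup scripts found in DIR.
check_tmp_cleanup() {
  for name in tmpwatch suse.de-clean-tmp; do
    if [ -e "$1/$name" ]; then
      echo "found $1/$name -- disable its /tmp cleanup before installing"
    fi
  done
}

check_tmp_cleanup /etc/cron.daily
```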
Start xinetd
The xinetd daemon must be running. Check whether it is running:
# ps -ef | grep xinetd | grep -v grep
root 4928 1 0 14:23 ? 00:00:00 xinetd -stayalive -pidfile /var/run/xinetd.pid
If necessary, start it with the command:
# /etc/init.d/xinetd start
If it does not start, try to do the following:
[root@myserver ~]# rcxinetd restart
Shutting down xinetd: done
Starting INET services. (xinetd) failed
If it still does not start, enable at least one xinetd service, for example echo.
To do this, edit /etc/xinetd.d/echo and set the property disable to no:
# default: off
# description: An echo server. This is the tcp version.
service echo
{
type = INTERNAL
id = echo-stream
socket_type = stream
protocol = tcp
user = root
wait = no
disable = no
}
Start xinetd again:
[root@myserver ~]# /etc/init.d/xinetd start
Starting INET services. (xinetd) done
Verify that the xinetd process exists:
[root@myserver ~]# ps -ef | grep xinetd | grep -v grep
root 17262 1 0 13:52 ? 00:00:00 /usr/sbin/xinetd
Permit SSH root login:
By default, SSH root login is permitted. Ensure that the PermitRootLogin option
is enabled in the /etc/ssh/sshd_config file: either uncomment it and set it to
yes, or leave it commented out to use the default.
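One way to check the effective setting is sketched below; check_permit_root is a hypothetical helper, and it treats an explicit "PermitRootLogin no" as the only blocking case, matching the description above:

```shell
# check_permit_root FILE: report whether root login over SSH is blocked.
check_permit_root() {
  if grep -Eq '^[[:space:]]*PermitRootLogin[[:space:]]+no' "$1"; then
    echo "root login disabled -- set PermitRootLogin yes in $1"
  else
    echo "root login permitted"
  fi
}

if [ -r /etc/ssh/sshd_config ]; then
  check_permit_root /etc/ssh/sshd_config
fi
```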
Set file permissions for /tmp and /var/tmp
Ensure that the file permissions are 1777:
chmod 1777 /tmp
chmod 1777 /var/tmp
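After running the commands, the modes can be confirmed with stat (GNU coreutils syntax); both directories should report 1777:

```shell
# Print the octal mode of each temporary directory; both should be 1777.
for d in /tmp /var/tmp; do
  echo "$d mode is $(stat -c %a "$d") (expected 1777)"
done
```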
Enable the WebSphere Application Server SOAP port
Ensure that the WebSphere SOAP port (default 8879) can be reached from the
administrative server. To do so, switch the firewall on the management server off,
for example:
chkconfig --level 2345 iptables off
service iptables save
service iptables stop
Swap space:
Swap space should be at least twice the amount of memory. Run this command to
check the swap space and memory size:
# free -o -b
total used free shared buffers cached
Mem: 10432819200 10371768320 61050880 0 77840384 9825697792
Swap: 21476163584 0 21476163584
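The "twice the memory" rule can be checked directly from /proc/meminfo; the sketch below compares the two totals in kilobytes:

```shell
# Compare SwapTotal with 2 x MemTotal (both in kB, from /proc/meminfo).
mem=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
swap=$(awk '/^SwapTotal:/ {print $2}' /proc/meminfo)
if [ "$swap" -ge $(( mem * 2 )) ]; then
  echo "swap OK ($swap kB >= 2 x $mem kB)"
else
  echo "swap too small ($swap kB < 2 x $mem kB)"
fi
```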
Verification:
The Installation Launchpad can start a verification script that checks the above
settings. Resolve any discrepancies noted and rerun the script until no errors are
reported.
1. On the provisioning server of the management system, start the Tivoli Service
Automation Manager Installation Launchpad as described in Starting the
launchpad and performing the preinstallation steps on page 67
2. Navigate to Pre-Installation Steps > Requirements for installing Tivoli
Provisioning Manager and click the link.
3. Resolve any discrepancies noted and run the script checks again until no errors
are reported.
Packages required on a Linux management server:
This section identifies the packages that are needed on a Linux management server.
Some of the packages must be accessible using a specific path. A script can be run
from the Installation Launchpad to verify that all packages have been installed.
Note: The information about the versions of the packages is available in the Tivoli
Provisioning Manager documentation. For more details, see Tivoli Provisioning
Manager knowledge center > Preinstallation tasks > Step 5: Verify component
requirements > Required packages (UNIX and Linux). The following three packages are
specific for Tivoli Service Automation Manager: libXaw, procmail, xterm.
Table 14. Required packages for Linux management servers
Package Remarks
compat-db-4.1.25-9 RHEL only. For RHEL 5, the 32-bit and the
64-bit versions are required.
compat-libstdc++ No longer available. Use libstdc++ for 32 and 64
bit.
curl
expect 5.42 or later
ftp
gtk
ksh RHEL 6.x only
libaio
libstdc++
libXaw 32bit-version RHEL 5.5 only
libXmu Not required for Linux on System x
libXp Linux on System z only
libXpm RHEL only
ncftp SLES only
openssh
perl
procmail
rpm-build-4.3.3-22 RHEL only
tcl
telnet
tk
wget
xorg-x11-xfs RHEL only
xorg-x11-font-utils RHEL only
xterm For Linux, the required path for xterm is
/usr/bin/xterm.
Verify that all required software packages are installed:
1. On the provisioning server of the management system, start the Tivoli Service
Automation Manager Installation Launchpad as described in Starting the
launchpad and performing the preinstallation steps on page 67.
2. Navigate to Pre-Installation Steps > Prerequisite packages for Tivoli
Provisioning Manager Click the link to run the verification script for required
packages, which indicates any packages that are missing or at the wrong level.
Error messages are issued if any discrepancies are noted.
3. Install any package reported as missing or at an incorrect level.
4. Run the script again until no discrepancies are reported.
You can also run the rpm -ivh command to install any packages that are missing
or that are at an incorrect level. For example:
# pwd
/media/SUSE-Linux-Enterprise-Server_001/suse/x86_64
# ls | grep expect
expect-5.43.0-16.4.164.x86_64.rpm
# rpm -ivh expect-5.43.0-16.4.164.x86_64.rpm
Preparing... ########################################### [100%]
1:expect ########################################### [100%]
#
Note: The script can only make general decisions about whether a required
package is installed. Successful execution of the script does not mean that the
version of an existing package is the correct one. The user must verify that the
correct version is installed.
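A loop over the package names in Table 14 can automate the same check. In the sketch below, check_packages is a hypothetical helper, and the package subset shown must be adjusted for your distribution:

```shell
# check_packages QUERY_CMD PKG...: run QUERY_CMD for each package and
# report which are installed and which are missing.
check_packages() {
  cmd=$1; shift
  for pkg in "$@"; do
    if $cmd "$pkg" >/dev/null 2>&1; then
      echo "$pkg: installed"
    else
      echo "$pkg: MISSING"
    fi
  done
}

# On an RPM-based system, rpm -q is the natural query command.
if command -v rpm >/dev/null 2>&1; then
  check_packages "rpm -q" curl expect ftp libaio openssh perl procmail wget xterm
fi
```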
Package path requirements
Some packages must be available from certain locations:
v bash: /bin/bash
v expect, tar, gzip: /usr/bin
If they are not installed in these locations, create a symbolic link from those
directories to the actual location.
myserver:~ # cd /usr/bin
myserver:/usr/bin # ls -l tar
~/bin/ls: tar: No such file or directory
myserver:/usr/bin # ln -s /bin/tar tar
myserver:/usr/bin # ls -l tar
lrwxrwxrwx 1 root root 8 Oct 30 17:48 tar -> /bin/tar
myserver:/usr/bin # ls -l expect
-rwxr-xr-x 1 root root 11145 Jul 1 2004 expect
myserver:/usr/bin # ls -l gzip
lrwxrwxrwx 1 root root 9 Jun 12 16:58 gzip -> /bin/gzip
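The link creation can be scripted. In the sketch below, ensure_link is a hypothetical helper that creates the symbolic link only when the tool is missing from the target directory; run it as root:

```shell
# ensure_link TOOL SRCDIR DSTDIR: link DSTDIR/TOOL -> SRCDIR/TOOL if absent.
ensure_link() {
  if [ ! -e "$3/$1" ] && [ -e "$2/$1" ]; then
    ln -s "$2/$1" "$3/$1"
    echo "created $3/$1 -> $2/$1"
  fi
}

# On most systems these are no-ops because the tools already exist in /usr/bin.
for tool in expect tar gzip; do
  ensure_link "$tool" /bin /usr/bin
done
```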
Preparing a Windows administrative server
An administrative server is needed to install components of Tivoli Provisioning
Manager, Service Automation Manager, and Service Request Manager. The
administrative server used to install the base services must also be used to install
other components that require an administrative server.
A Windows administrative server is also required if the management server cannot
also serve as the administrative server.
Note: If the installation is not performed using the product CD or DVD, the
installation files for the steps that require starting the launchpad on the
administrative server must be downloaded to the administrative server before
beginning installation.
Setting up the host name (computer name):
1. Open a Windows command prompt.
2. Enter hostname and press Enter.
3. If the host name of the computer is reported in the <localhost>.<localdomain>
format, it is configured correctly.
Setting up the hosts file:
If your environment does not use a name server to resolve host names to IP
addresses, but rather relies on the /etc/hosts file for name resolution, you must
enable local host name resolution of the administrative server and target
management server host names on the administrative server. To do so, edit the
C:\Windows\system32\drivers\etc\hosts file and add entries for both the
administrative and target management servers. Ensure that the loopback entry
exists and that it comes after the administrative server entry. Be sure to press Enter
after the last line of the file.
Example:
192.168.1.100 tsam-admin-server.mycompany.com tsam-admin-server
127.0.0.1 loopback host.domain localhost
192.168.1.101 tsam-mgmt-server.mycompany.com tsam-mgmt-server
Preparing a Linux administrative server
Components packaged for a Linux-based installation require a Linux
administrative server. A Linux administrative server can be co-located with a Linux
management server if the hardware and software requirements are met.
Important: Before installing the Tivoli Service Automation Manager administrative
server on Linux, you must install the md5sum package.
In general, the requirements for setting up a Linux administrative server are
analogous to those for a Windows server. See Preparing a Windows
administrative server on page 66.
Note: For Red Hat Enterprise Linux version 5.5 you need to install an additional
library libXaw 32bit-version that is required for the Launchpad.
Preparing an AIX administrative server
Components packaged for an AIX-based installation require an AIX administrative
server. An AIX administrative server can be co-located with an AIX management
server if the hardware and software requirements are met.
Important: Before installing the Tivoli Service Automation Manager administrative
server on AIX, you must install the md5sum package.
In general, the requirements for setting up an AIX administrative server are
analogous to those for a Windows server. See Preparing a Windows
administrative server on page 66
Note: The AIX level is not automatically detected when using Firefox 3.5. Ensure
that the AIX level you are using is supported.
Starting the launchpad and performing the preinstallation
steps
Start the Installation Launchpad on the administrative server and on the
management server to begin the full installation of Tivoli Service Automation
Manager.
Procedure
1. Start the Installation Launchpad on the management server and the
administrative server:
a. Log on as root (Linux or AIX) or from an account with system
administration privileges (Windows).
b. If you are using DVDs, insert the Tivoli Service Automation Manager Base
DVD to run the Installation Launchpad.
c. Run the Tivoli Service Automation Manager Installation Launchpad:
v Windows: from ./TSAMBASE724/launchpad.exe.
Restriction: launchpad64.exe does not work on 64-bit Windows 2003.
Use launchpad.exe (the 32-bit version) instead.
Note: On newer Windows versions, run launchpad.exe with the option
Run As Administrator.
v Linux or AIX: by running ./TSAMBASE724/launchpad.sh.
Note: Under Linux or AIX, ensure that launchpad.sh is executable. If it is
not, set the required access rights for it, for example:
chmod -R 777 ./TSAMBASE724/launchpad.*
d. When the launchpad is started, you can select the language from the list in
the upper right corner of the main window.
2. Read the information in the Installation Overview and Installation Planning
sections.
3. Go to the Installation Planning page and in the System Definition section
specify the server roles.
4. Select Full installation of 7.2.4.4 as the installation type for both servers.
5. In the Installation Files section, specify the locations of the dependent product
packages.
Installing Tivoli Service Automation Manager and its
prerequisite software
Installing Tivoli Service Automation Manager involves running the Tivoli Service
Automation Manager Installation Launchpad on the administrative and
management system servers to install the software on the management system. The
sequence of these invocations and the respective server involved is dictated by the
software and the installation scenario. This involves some switching between the
two environments. Some software related to the deployment process is installed on
the administrative server. However, this server is not required for normal operation
when installation has been completed.
Before you begin
Consult the Release Notes for Tivoli Service Automation Manager and the
associated products for up-to-date information.
The installation process employs the Tivoli Service Automation Manager
Installation Launchpad to install Tivoli Provisioning Manager, Service Request
Manager, and Tivoli Service Automation Manager. In the Tivoli Provisioning
Manager documentation, the process is referred to as a custom installation.
Special icons are provided in the Launchpad to indicate the server (administrative
or management) on which a particular step must be performed.
1. Ensure that you have selected the function of the current server. You can do
that on the Installation Planning page in the Launchpad.
2. Ensure that you have specified and verified the locations of all installation
source files in the Installation Planning page in the Launchpad.
3. Ensure that the pre-installation steps have been completed. See Preparing the
environment for installation on page 48.
4. Important: Create backup copies of the management and administrative
servers. If the installation fails, you will have to restore to these backup images.
Note: Throughout the installation and configuration processes, there may be
recommendations to back up the administrative and management servers.
While backing up is not required, it is strongly recommended. If a backup
image is not taken and the installation fails, you may have to start again with a
clean operating system image. Perform the operation on both the
administrative and management servers. If the administrative and management
servers are virtual machines, you can use the VM image backup capability
provided by your virtualization software. For example, a VMware virtual
machine can also be backed up by taking a snapshot of the VM image. An AIX
management server running in an LPAR can be backed up to a NIM server. For
servers installed on physical hosts, however, commercial backup software such
as Norton Ghost can be used. UNIX servers can be backed up and restored
using the dd utility. For details, consult the system administration guide for
your operating system.
5. As recommended in the Tivoli Provisioning Manager Installation Guide,
consider disabling antivirus software on all involved servers for the duration of
the installation.
Installation defaults for Tivoli Service Automation Manager
To avoid conflicts within your enterprise, review the defaults the installation
process uses to determine if you should perform an advanced install and set new
values.
Whether the default values that Tivoli Service Automation Manager uses conflict
with systems already in place or do not adhere to standards in your enterprise is a
key consideration when deciding between a simple or advanced install. To help
with that decision, Table 15 lists the defaults used by the installer. Use this table to
record your values as an aid when you run the installer.
Table 15. Default and your values for properties
Installation Panel Property Default Value Your value
Remote Access
Configuration
User ID root
DB2 Configuration Installation Directory Linux, AIX:
/opt/IBM/db2/v9.7
Port Linux Red Hat:
50000
Linux SUSE (SLES 10 and 11):
50001
AIX:
50000
Note: On SUSE Linux, the
middleware installer (MWI) initially
offers the port 50000, but this port is
typically already in use. If this is the
case, use port 50001 instead.
Instance Userid db2inst1
DAS User ID dasusr1
Fence User ID db2fenc1
Admin Group db2grp1
CCMDB User ID ctginst1
Config Instance
Username
ctginst1
IBM Tivoli Directory Server
Configuration
Installation Directory Linux, AIX:
/opt/IBM/ldap/V6.3
Administration Port 3538
Secure
Administration Port
3539
Instance Name idsccmdb
Database Name security
Database Port 3708
LDAP Port 389
Secure LDAP Port 636
LDAP Username cn=root
LDAP Suffix o=organization,c=country
LDAP Organization ou=orgunit
WebSphere Configuration Installation Directory Linux:
/opt/IBM/WebSphere/AppServer
AIX:
/usr/IBM/WebSphere/AppServer
User ID wasadmin
IBM HTTP Server
Configuration
Installation Directory Linux
/opt/IBM/HTTPServer
AIX
/usr/IBM/HTTPServer
Administration Port 8008
HTTP Port 80
Base Services Base services
location
/opt/IBM/SMP
Tivoli middleware
workspace location
Linux:
~root/ibm/tivoli/mwi/workspace
Port 50005
Database Name maxdb71
Database user ID maximo
Cluster name MAXIMOCLUSTER
User base entry u=users,o=organization,c=country
Group base entry u=groups,o=organization,c=country
Integration Adapter JMS
Configuration
JMS DataSource
Name
intjmsds
Tivoli Provisioning
Manager
For details, refer to the Tivoli Provisioning Manager Installation Guide.
Installing the Tivoli Service Automation Manager license
To activate the installation links in the Installation Launchpad you must first install
the license. The license must be installed on each server on which the Launchpad
is started. Acceptance of the license agreement in each case is recorded.
Before you begin
Server: Any
Procedure
1. Start the Tivoli Service Automation Manager Installation Launchpad as
described in Starting the launchpad and performing the preinstallation steps
on page 67.
2. Navigate to License Agreement.
3. Click on the link to install the license.
4. Read and accept the license agreement. When the license has been installed, the
Launchpad panel refreshes and the links become active.
Installing the middleware
You can use the Tivoli Service Automation Manager Installation Launchpad to
install the middleware. Omit this step if the middleware has already been installed
for an existing product.
If Service Request Manager or Tivoli Provisioning Manager has already been
installed, the associated middleware is also present.
Before you begin
Important:
1. If you are installing the middleware on SUSE Linux Enterprise Server 11, see
the instructions in Installing the middleware on Solaris 10 SPARC and SUSE Linux
Enterprise Server 11 in the Tivoli Provisioning Manager knowledge center.
2. While installing WebSphere Application Server 6.1 (build -
WAS-ND_LinuxIA64_Custom_v61023_ISC7106.tar.gz.) on SLES 11 x86_64 bit, a
message warning that your operating system failed a prerequisite check may
appear. Ignore this message and install WebSphere Application Server 6.1 and
Fix Pack 29.
3. You must use the Tivoli Service Automation Manager Installation Launchpad
and not the Tivoli Provisioning Manager Launchpad. Once the middleware
installer has started, however, refer to the corresponding section in the Tivoli
Provisioning Manager information center for detailed instructions.
4. If you encounter an error during the installation of the WebSphere plug-in,
perform the workaround: edit the file /opt/.ibm/.nif/.nifregistry and
remove the entry "6.1.0-WS-IHS-LinuxX64-FP0000029".
Server: Each server in the management system
The distribution of the middleware in the management system depends on the
number of physical servers. The middleware installer can install a selected
combination of the three software servers (provisioning/application, database,
directory) on the current physical server each time it is started. The default
installation is performed on one server and all three options can be selected at
once.
About this task
Procedure
1. To install the middleware on a single server, start the Tivoli Service Automation
Manager Installation Launchpad on that server and click Product Installation >
Foundation Software > Install the middleware.
Note: To distribute the middleware on up to three servers (provisioning server,
database server, and directory server), you can invoke the middleware installer
several times, once on each server. Repeat the single server installation
procedure on each server. Depending on the middleware you want to distribute
on a given server, check the appropriate box and complete the installation.
2. Click the link to verify the middleware installation prerequisites. Select the
check box and press Enter.
3. Specify the location of the Tivoli Provisioning Manager installation DVD or the
directory in which you unpacked the installation package
TPM_V721_Install_Win.tar or TPM_V721_Install_Unix.tar.
4. Select the language.
5. To install all middleware on one server, check the boxes for all parts of the
middleware (DB2, Directory Server, WebSphere).
6. Complete the installation.
What to do next
1. Optional: You can now configure the script for controlling the middleware. This
script simplifies middleware operations such as starting and stopping. It must
be configured prior to use. See Controlling the middleware with a script on
page 355.
2. Start the middleware for verification, for example, by using the script from the
previous step.
Note: Do not continue until all problems or discrepancies have been resolved.
3. Perform a backup of the management system servers.
Installing base services
Base services are components that are shared by multiple products in the process
automation environment.
Note: Omit this step if the base services were already installed with an existing
product. For example, if Tivoli Service Request Manager or Tivoli Provisioning
Manager is already installed, the base services are also present.
Before you begin
v The AIX level is not automatically detected when using Firefox 3.5. Ensure
that the AIX level of your administrative server is supported.
v Ensure that the middleware is started as described in Starting the management
server on page 353.
Server: Administrative server
The procedure must be run on the administrative server. This same server later
becomes the mandatory administrative server for subsequent component
installation.
Procedure
1. On the administrative server, start the Tivoli Service Automation Manager
Installation Launchpad as described in Starting the launchpad and performing
the preinstallation steps on page 67
2. Navigate to Product Installation > Foundation Software > Install the Base
Services and required components.
3. Click the link to verify the base services installation prerequisites. When you
are done with the verification, select the check box and click Back to the
product installation page.
4. Click the link to install the base services. Refer to the Tivoli Provisioning Manager
Installation Guide for detailed instructions.
5. When the Process Solution Installer is started, an overview table is displayed
showing which version of base services is to be installed. Make sure that in the
Target version field, the version number for base services is 7.1.1.9. If any other
version is displayed, then you have selected the wrong base services installer
package. Make sure that you select the base services package that is packaged
with Tivoli Provisioning Manager 7.2.1.
What to do next
1. Verify that DBHEAP settings for DB2 are set to 6000 or more. Run the
command:
su - ctginst1 -c "db2 get database configuration for maxdb71" | grep DBHEAP
If the settings are not correct:
a. Connect to the database:
db2 connect to <database_name> user <database_user> using
<database_password>
where:
v <database_name> is maxdb71 by default
v Windows: <database_user> is db2admin by default
v UNIX: <database_user> is ctginst1 by default
b. Update the database configuration:
db2 update db cfg using dbheap 6000
c. Reset the database connection:
db2 connect reset
2. Verify the installation by logging on to the administrative user interface at
https://<IP address of the administrative server>:9443/maximo/ui/login. Use user
maxadmin and the password specified during Base Services installation. If no
errors are reported, the installation is successful.
Note: Do not continue until all problems or discrepancies have been resolved.
Important: You can perform a full system backup after every successful
installation step. The absolute minimum that should be backed up is the base
services home directory (at least 9 GB of free space required), as described in Tivoli
Provisioning Manager documentation (Installing the base services section).
Proceed to Installing the Tivoli Provisioning Manager core components
Installing the Tivoli Provisioning Manager core components
Use the Tivoli Service Automation Manager installation launchpad to install the
Tivoli Provisioning Manager core components.
Before you begin
Make sure that the middleware and base services are started as described in
Starting the management server on page 353.
Server: Management server
This procedure must be run on the provisioning server of the management system.
Procedure
1. On the provisioning server of the management system, start the Tivoli Service
Automation Manager Installation Launchpad as described in Starting the
launchpad and performing the preinstallation steps on page 67.
Chapter 2. Installing and upgrading
2. Navigate to Product Installation > Tivoli Provisioning Manager > Install the
Tivoli Provisioning Manager core components.
3. Click the link to verify the core components installation prerequisites. When
this has been done, check the box and return.
4. Click the link to install the core components.
5. The core component installer launches. Refer to the Provisioning Manager
documentation for detailed instructions.
Note:
v Ensure that you select the option to install Tivoli Provisioning Manager for
OS Deployment.
v When installing Tivoli Provisioning Manager for OS Deployment, the host
name of the target system must be written in lower case. Otherwise, the
installation fails.
v If you do not want to change the host name or if it is not possible, install
Tivoli Provisioning Manager for OS Deployment manually, after the core
components are installed.
v If errors occur during the installation, you may need to repeat the previous
step in the installation process, and then continue.
6. When the core component installation is finished, go back to the Tivoli Service
Automation Manager launchpad.
7. Click the link to install Tivoli Provisioning Manager for Images components.
8. Enter the required parameters and click the link to start the installation script.
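The lower-case host name requirement noted in step 5 can be verified before starting the Tivoli Provisioning Manager for OS Deployment installation. A minimal sketch; the helper name is illustrative:

```shell
#!/bin/sh
# Sketch: verify that the host name contains no upper-case letters, since
# the Tivoli Provisioning Manager for OS Deployment installation fails
# when the target system's host name is not fully lower case.

hostname_is_lowercase() {
  case "$1" in
    *[A-Z]*) return 1 ;;   # contains upper case: rename the host or install manually
    *)       return 0 ;;
  esac
}

# Example: check the local host name before starting the installer
if hostname_is_lowercase "$(hostname 2>/dev/null || echo localhost)"; then
  echo "host name OK"
else
  echo "host name contains upper-case letters; see step 5" >&2
fi
```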
What to do next
1. Back up the management-system servers.
2. Proceed to Installing the Tivoli Provisioning Manager web components.
Installing the Tivoli Provisioning Manager Web components
Use the Tivoli Service Automation Manager installation launchpad to install the
Tivoli Provisioning Manager Web components.
Before you begin
Make sure that the middleware and base services are started as described in
Starting the management server on page 353.
Server: Administrative server
This procedure must be run on the same administrative server that was used for the
base services installation.
Procedure
1. On the administrative server, start the Tivoli Service Automation Manager
installation launchpad as described in Starting the launchpad and performing
the preinstallation steps on page 67.
2. Navigate to Product Installation > Tivoli Provisioning Manager > Install the
Tivoli Provisioning Manager Web components.
3. Click the link to verify the installation prerequisites. When this has been done,
check the box and return.
4. Click the link to install the Web components.
5. The Tivoli Provisioning Manager Web Component installer is launched. Refer to
the Provisioning Manager documentation for detailed instructions. Follow the
instructions in the installer.
6. Wait until the Web component installer has finished.
What to do next
1. Perform the post-installation steps for Tivoli Provisioning Manager:
a. In the launchpad, go to Product Installation > Tivoli Provisioning
Manager > Post-installation tasks for Tivoli Provisioning Manager.
b. Click Perform the Tivoli Provisioning Manager post-installation tasks.
c. Follow the steps described in the panel displayed.
2. After installing the Web and Core components of Tivoli Provisioning Manager
Interim Fix 5, download and install 7.2.1.0.5-TPM-0003LA. For more
information about the steps to install, see 7.2.4-TIV-TSAM-FP0003.README.
3. On completion, log on to the administrative user interface to ensure that it is
operational. For more details, see Logging on to the Tivoli Service Automation
Manager administrative interface.
Note: Do not continue until all problems or discrepancies have been resolved.
Important: It is recommended that you perform a full system backup
(management server and administrative server) after the installation is finished.
Depending on the installation scenario you are following:
v If Tivoli Service Request Manager was not previously installed, proceed to
Installing Tivoli Service Request Manager.
v If Tivoli Service Request Manager is installed, proceed to the installation of
Advanced Workflow Components. For more information, see Installing the
Advanced Workflow Components on page 77.
Installing Tivoli Service Request Manager
Service Request Manager 7.2.4 is installed from the Service Request Manager DVD.
The file for Fix Pack 2 is provided on Fix Central at http://www.ibm.com/
support/fixcentral/.
Before you begin
Make sure that the server has been rebooted before you start this installation step so that the environment variable CTG_CCMDB_HOME is available to the installer.
Make sure that the middleware and base services are started as described in Starting the management server on page 353. Make sure that the Tivoli Provisioning Manager deployment engine is stopped. If it is running, stop it by running the following command as tioadmin on the management server:
$TIO_HOME/tools/tio.sh stop -t
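The stop can be made conditional on the engine actually running. A hedged sketch, assuming that tioStatus.sh (the status script mentioned elsewhere in this guide) exits with zero when the engine is up, and that $TIO_HOME is defined in the tioadmin environment; verify both assumptions in your installation:

```shell
#!/bin/sh
# Sketch: stop the provisioning deployment engine only if it is running.
# Run as root on the management server. Assumes tioadmin's environment
# defines $TIO_HOME and that tioStatus.sh exits zero when the engine is up
# (an assumption; check the script's actual behavior in your environment).

stop_deployment_engine() {
  if su - tioadmin -c '$TIO_HOME/tools/tioStatus.sh' >/dev/null 2>&1; then
    su - tioadmin -c '$TIO_HOME/tools/tio.sh stop -t'
  else
    echo "deployment engine is not running; nothing to stop"
  fi
}
```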
Server: Administrative server
This procedure must be run on the same administrative server used for the base
services installation.
Procedure
1. Install Tivoli Service Request Manager V7.2:
a. On the administrative server, start the Tivoli Service Automation Manager
Installation Launchpad as described in Starting the launchpad and
performing the preinstallation steps on page 67
b. Select Product Installation > Tivoli Service Request Manager.
c. Click the link to verify the Service Request Manager installation
prerequisites. When finished, check the box and return to the Launchpad.
d. Click the link to install the Tivoli Service Request Manager applications. See
the product documentation for details.
Note: In the Tivoli Service Automation Manager environment, the
WebSphere RXA user ID is tioadmin.
e. When prompted for feature selection, select all check boxes under Service Request Manager 7.2. The check boxes for any language support features that you want are optional.
Important: Selecting a root check box does not automatically select its child check boxes; select them individually.
f. On completion, log on to the administrative user interface to ensure that it is
operational. For more details, see Logging on to the Tivoli Service
Automation Manager administrative interface.
Note: Back up the administrative and the management servers so that you
can preserve the system environment for fix pack installation.
2. Install the Tivoli Service Request Manager Fix Pack:
a. If you performed a backup and rebooted your systems, make sure that you
started the middleware as described in Starting and stopping the
middleware on page 353.
b. As the root user, stop the WebSphere MXServer process on the management
server:
<was_home_dir>/bin/stopServer.sh MXServer -username <wasadmin_user> -password <wasadmin_password>
where by default:
v <was_home_dir> is /opt/IBM/WebSphere/AppServer/
v <wasadmin_user> is wasadmin
c. After unpacking the installation programs from the Tivoli Service Request Manager Fix Pack, make sure that the programs have execute permissions.
d. Review the readme file for Service Request Manager V7.2 Fix Pack 1.
e. On the administrative server, start the Tivoli Service Automation Manager
Installation Launchpad as described in Starting the launchpad and
performing the preinstallation steps on page 67.
f. Select Product Installation > Tivoli Service Request Manager.
g. Click the link to install Tivoli Service Request Manager Fix Pack.
h. Follow the instructions in the installer.
Important:
In some rare situations, you might receive the following message:
CTGIN2252W: Cannot connect to base services web application
This warning can reflect a real problem or a timing problem. To rule out an
actual error, log on to the administrative user interface when the installation
program finishes. If you are able to log on, the message can be ignored. If it
is unsuccessful, closer investigation is needed.
If you are installing a language other than English, and the fix pack installer
reports that it is unable to log on to the administrative user interface,
perform the workaround procedure described in Unable to log on to the
administrative user interface after Tivoli Provisioning Manager installation
on page 489.
3. Install the Service Request Manager Limited Availability Fix following the
instructions in the installer.
4. From the Launchpad, click Install Common Process Components for Service
Providers and follow the instructions in the installer.
Note: To improve installation time, in the package options panel, you can
check the Defer Application Redeployment and the Defer Update of the
Maximo Database boxes.
5. From the Launchpad, click Install Service Provider Enablement Components
and follow the instructions in the installer.
Note: To improve installation time, in the package options panel, you can
check the Defer Application Redeployment and the Defer Update of the
Maximo Database boxes.
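The manual preparation in step 2 (stopping MXServer and making the unpacked installers executable) can be sketched as follows; the stop_mxserver_cmd and prepare_fixpack helper names and the WAS_PASSWORD variable are assumptions for illustration:

```shell
#!/bin/sh
# Sketch of the preparation in step 2: stop MXServer and give the unpacked
# fix pack installation programs execute permission. Defaults are those
# named in step 2b; adjust paths and credentials for your environment.

# Build the stopServer invocation for a given WebSphere home and credentials.
stop_mxserver_cmd() {
  echo "$1/bin/stopServer.sh MXServer -username $2 -password $3"
}

prepare_fixpack() {
  washome=${1:-/opt/IBM/WebSphere/AppServer}
  fixpack_dir=$2
  # step 2b: stop MXServer on the management server
  sh -c "$(stop_mxserver_cmd "$washome" wasadmin "$WAS_PASSWORD")"
  # step 2c: give the unpacked installation programs execute permission
  chmod +x "$fixpack_dir"/*
}
```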
What to do next
Important: Perform a full system backup (management server and administrative
server) after the installation is finished.
1. Log on to the administrative user interface to ensure that it is operational. For
more details, see Logging on to the Tivoli Service Automation Manager
administrative interface.
2. Proceed to Installing the Advanced Workflow Components.
Installing the Advanced Workflow Components
After Service Request Manager is installed, proceed to install the Advanced
Workflow Components.
Before you begin
v Ensure that the middleware is running. You can check its status by running the tioStatus.sh or tioStatus.cmd command.
v Ensure that the Tivoli Provisioning Manager server is stopped. If it is running, stop it by running tio.sh stop or stop tpm.
About this task
Perform this installation on the same administrative server on which you installed
Service Request Manager.
Procedure
1. On the launchpad, select Product Installation > Advanced Workflow
Components.
2. Click the link to install the components and follow the instructions that appear.
Installing Tivoli Provisioning Manager 7.2.1 Interim Fix 5 core
components
Install the Tivoli Provisioning Manager 7.2.1 Interim Fix 5 core components using
the Tivoli Service Automation Manager launchpad.
Before you begin
Make sure that the middleware and base services are started as described in Starting the management server on page 353. Make sure that the Tivoli Provisioning Manager deployment engine is stopped. If it is running, stop it by running the following command as tioadmin on the management server:
$TIO_HOME/tools/tio.sh stop -t
Server: Management server
Procedure
1. In the Launchpad, select Product Installation > Tivoli Provisioning Manager
7.2.1 iFix 5.
2. Click the link to install Tivoli Provisioning Manager 7.2.1 iFix 5 core
components.
3. Enter the required parameters and click the link to start the installation script.
What to do next
Proceed to Installing Tivoli Provisioning Manager 7.2.1 Interim Fix 5 Web
components.
Installing Tivoli Provisioning Manager 7.2.1 Interim Fix 5 Web
components
Install the Tivoli Provisioning Manager 7.2.1 Interim Fix 5 Web components using
the launchpad.
Before you begin
Make sure that the middleware and base services are started as described in Starting the management server on page 353. Make sure that the Tivoli Provisioning Manager deployment engine is stopped. If it is running, stop it by running the following command as tioadmin on the management server:
$TIO_HOME/tools/tio.sh stop -t
Server: Administrative server
Procedure
1. In the Installation Launchpad, select Product Installation > Tivoli Provisioning
Manager 7.2.1 iFix 5.
2. Click the link to install the Web components and follow the instructions in the
installer.
What to do next
1. Perform the post-installation steps for Tivoli Provisioning Manager:
a. On the launchpad, go to Product Installation > Tivoli Provisioning
Manager 7.2.1 iFix 5 > Post-installation tasks for Tivoli Provisioning
Manager 7.2.1 iFix 5.
b. Click Perform the Tivoli Provisioning Manager 7.2.1 iFix 5
post-installation tasks.
c. Follow the steps described in the panel displayed.
Important: Perform a full system backup of both the management and the
administrative servers after the installation is finished.
2. After installing the Web and Core components of Tivoli Provisioning Manager
Interim Fix 5, download and install 7.2.1.0.5-TPM-0003LA. For more
information about the steps to install, see 7.2.4-TIV-TSAM-FP0003.README.
3. Proceed to Installing the Tivoli Service Automation Manager applications.
Installing the Tivoli Service Automation Manager applications
This section describes how to install the Tivoli Service Automation Manager base
component.
Before you begin
1. Make sure that the middleware and base services are started as described in Starting the management server on page 353. Make sure that the Tivoli Provisioning Manager deployment engine is stopped. If it is running, stop it by running the following command as tioadmin on the management server:
$TIO_HOME/tools/tio.sh stop -t
Server: Administrative server
This procedure must be run on the same administrative server that was used for
the base services installation.
Procedure
1. On the administrative server, start the Tivoli Service Automation Manager
Installation Launchpad as described in Starting the launchpad and performing
the preinstallation steps on page 67.
2. Navigate to Product Installation > Tivoli Service Automation Manager >
Install the Tivoli Service Automation Manager applications.
3. Click Verify the Tivoli Service Automation Manager installation
prerequisites. When the prerequisites are verified, check the box and return.
4. Click Run script to perform miscellaneous install actions.
5. Click Modify Tivoli's process automation engine REST deployment
descriptor and click the link to run the script that performs the listed steps for
you.
6. Click Install Tivoli Service Automation Manager applications to install the
applications and follow the instructions in the installer. Ensure that there are no
errors during the installation process.
If you started the launchpad from the location where you unpacked the product package or from the Tivoli Service Automation Manager DVD, the location of the application package is found automatically. If the package is not found automatically, specify the location of the Tivoli Service Automation Manager DVD or of the directory where you unpacked the product package.
7. Click Install Tivoli Service Automation Manager enablement keys to install
the enablement keys and follow the instructions provided. Ensure that there are
no errors during the installation process.
8. On completion, log on to the administrative user interface to ensure that it is
operational. For more details, see Logging on to the Tivoli Service Automation
Manager administrative interface.
Note:
If you want to install the optional Tivoli Service Automation Manager for
WebSphere Application Server component, you can do so in a separate
procedure either immediately following the Tivoli Service Automation Manager
Base installation or at a later time.
What to do next
Important: It is recommended that you perform a full system backup
(management server and administrative server) after the upgrade completes.
Proceed with Installing additional configuration files.
Installing additional configuration files
This procedure extracts the Cloud Management Subsystem configuration files onto the management server, into the /etc/cloud directory.
Before you begin
Make sure that the middleware and base services are started as described in Starting the management server on page 353. Make sure that the Tivoli Provisioning Manager deployment engine is stopped. If it is running, stop it by running the following command as tioadmin on the management server:
$TIO_HOME/tools/tio.sh stop -t
Server: Management server
Procedure
1. On the management server, invoke the Tivoli Service Automation Manager
Installation Launchpad as described in Starting the launchpad and performing
the preinstallation steps on page 67.
2. Navigate to Product Installation > Tivoli Service Automation Manager >
Install additional configuration files.
3. Click the link to install the additional configuration files.
4. Enter the required parameters and click the link to start the installation script.
What to do next
In a multiserver topology, it is advisable to copy the /etc/cloud directory to each additional server in the management system, or to perform this launchpad step on each server. Some later setup steps require selected files to be accessible on the database or directory servers.
Proceed with Installing the automation packages for Tivoli Service Automation
Manager on page 81.
Installing the automation packages for Tivoli Service Automation
Manager
You can use the installation launchpad to install automation packages for Tivoli
Service Automation Manager.
Before you begin
1. Make sure that the middleware and base services are started as described in Starting the management server on page 353. Make sure that the Tivoli Provisioning Manager deployment engine is stopped. If it is running, stop it by running the following command as tioadmin on the management server:
$TIO_HOME/tools/tio.sh stop -t
2. Make sure that the following services are running:
v On AIX
/etc/rc.d/init.d/dbgw start
/etc/rc.d/init.d/rembo start
/etc/rc.d/init.d/rbagent start
v On Linux:
/etc/init.d/dbgw start
/etc/init.d/rembo start
/etc/init.d/rbagent start
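The service start commands above differ only in the init-script directory, which the following sketch captures; the init_dir and start_services helper names are illustrative:

```shell
#!/bin/sh
# Sketch: start the dbgw, rembo, and rbagent services using the init-script
# directory appropriate for the platform (aix or linux), as listed above.

init_dir() {
  case "$1" in
    aix) echo /etc/rc.d/init.d ;;
    *)   echo /etc/init.d ;;
  esac
}

start_services() {
  dir=$(init_dir "$1")
  for svc in dbgw rembo rbagent; do
    "$dir/$svc" start
  done
}
```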
Server: Provisioning server of the management system (Tivoli Service Automation
Manager management server)
Procedure
1. On the Tivoli Service Automation Manager management server, start the Tivoli
Service Automation Manager Installation Launchpad as described in Starting
the launchpad and performing the preinstallation steps on page 67.
2. Navigate to Product Installation > Tivoli Service Automation Manager >
Install the automation packages for Tivoli Service Automation Manager.
3. Click the link to install the automation packages.
Note: Do not continue with subsequent installation or post-installation steps
until all problems or discrepancies have been resolved.
What to do next
Important: It is recommended that you perform a full system backup
(management server and administrative server) after the upgrade completes.
Proceed either with Installing optional software on page 83 or with
Post-installation steps on page 82.
Post-installation steps
Perform the post-installation steps in the installation launchpad to integrate Tivoli
Service Automation Manager and its interfaces with other solution components.
Before you begin
1. Make sure that the middleware and base services are started as described in Starting the management server on page 353. Make sure that the Tivoli Provisioning Manager deployment engine is stopped. If it is running, stop it by running the following command as tioadmin on the management server:
$TIO_HOME/tools/tio.sh stop -t
2. Make sure that the following services are running:
v On AIX
/etc/rc.d/init.d/dbgw start
/etc/rc.d/init.d/rembo start
/etc/rc.d/init.d/rbagent start
v On Linux:
/etc/init.d/dbgw start
/etc/init.d/rembo start
/etc/init.d/rbagent start
Procedure
1. On the launchpad, select Post-Installations Steps > Specify Installation
Parameters Required for Component Integration.
On this page, you enter all necessary input parameters that are needed
throughout the rest of the post-installation. These parameters are used as input
for various scripts that are started in the background. Also, an automated
validation capability is available that verifies most user IDs and passwords that
are entered.
Important: Due to technical limitations, not all IDs and passwords can be validated. You must verify the maxadmin ID and password manually, and ensure that exactly the same password is entered in the launchpad.
Verify the parameters and mark the check box.
2. Click Configure Tivoli Process Automation Engine Global Properties.
A script is run to update some global properties in the Maximo database and
set some properties for the endpoints PMRDPRBC and PMZHBWSCR. Mark the check
box if the script returns no errors.
3. Before setting up Tivoli Service Request Manager, you must remove certain database triggers. To do so, perform the following steps:
a. Log on to the Tivoli Service Automation Manager management server as the DB2 administrative user, for example ctginst1.
b. Change folder to /etc/cloud/install/DB.
c. Run the command db2 connect to maxdb71 to connect to the database.
d. Run the command db2 set current schema maximo to switch to Maximo
schema.
e. Run the following command:
db2 -tvf DropProjectTriggers.sql
4. On the launchpad, click Set up Tivoli Service Request Manager.
Perform a set of manual steps to set up Service Request Manager. See the
details in the launchpad, and at: Tivoli Service Request Manager 7.2 knowledge
center > Installing > Service Request Manager post installation tasks > Initial data
configuration. Verify the parameters and mark the check box.
Note: The default insert site defines the site a service request is for. A user can
create a service request for a site they cannot access. If the default insert site is
changed for a user, the site authorization list in the following security groups
must be updated: PMRDPCA, PMRDPCM, PMRDPTA, PMRDPTU.
5. After setting up Tivoli Service Request Manager, you must re-create the previously removed database triggers. To do so, perform the following steps:
a. Log on to the Tivoli Service Automation Manager management server as the DB2 administrative user, for example ctginst1.
b. Change folder to /etc/cloud/install/DB.
c. Run the command db2 connect to maxdb71 to connect to the database.
d. Run the command db2 set current schema maximo to switch to Maximo
schema.
e. Run the following command:
db2 -tvf CreateProjectTriggers.sql
6. Click Configure the Cloud Management Components.
A script is run to perform a set of activities. See the more detailed list in the
launchpad. Verify the parameters and mark the check box.
7. Click Set up the Self-Service Environment.
Perform the manual steps to complete this task as described on the launchpad.
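The trigger handling in steps 3 and 5 follows one pattern, sketched below; the run_trigger_script helper name is illustrative, and the paths and names are the defaults from those steps:

```shell
#!/bin/sh
# Sketch of steps 3 and 5: run one of the project-trigger SQL scripts against
# the Maximo schema. Run as the DB2 administrative user (ctginst1 by default);
# the script files live in /etc/cloud/install/DB as described above.

run_trigger_script() {
  # $1 is DropProjectTriggers.sql (step 3) or CreateProjectTriggers.sql (step 5)
  cd /etc/cloud/install/DB || return 1
  db2 connect to maxdb71 &&
    db2 set current schema maximo &&
    db2 -tvf "$1"
  rc=$?
  db2 connect reset
  return $rc
}
```

Call run_trigger_script DropProjectTriggers.sql before the Service Request Manager setup and run_trigger_script CreateProjectTriggers.sql afterward.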
What to do next
When finished, verify the integrated solution by performing connectivity tests
between the various components. For more information, see Verifying the
integrated installation on page 84.
Perform additional steps to configure IBM HTTP server.
Installing optional software
You can install Tivoli Service Automation Manager for WebSphere Application
Server as an optional, separately paid service provided for Tivoli Service
Automation Manager. This option is only available if a full installation of Tivoli
Service Automation Manager is performed. It is not available in the upgrade
scenarios.
Note: Run this step after Tivoli Service Automation Manager Base has been
installed (either immediately or at a later time).
Before you begin
1. Clear the browser cache.
2. Make sure that the middleware and base services are started as described in Starting the management server on page 353. Make sure that the Tivoli Provisioning Manager deployment engine is stopped. If it is running, stop it by running the following command as tioadmin on the management server:
$TIO_HOME/tools/tio.sh stop -t
Procedure
1. On the administrative server, start the Tivoli Service Automation Manager
Installation Launchpad as described in Starting the launchpad and performing
the preinstallation steps on page 67
2. Navigate to Optional Software > Install Tivoli Service Automation Manager
for WebSphere Application Server
3. Click the link to install Tivoli Service Automation Manager for WebSphere
Application Server. Refer to the Tivoli Provisioning Manager installation
documentation for details.
Note: Do not continue until all problems or discrepancies have been resolved.
What to do next
Perform the Post-installation steps on page 82 if you have not done it before.
Configure the component (see Configuring the managed environment to use the
WebSphere Cluster Service on page 244).
Verifying the integrated installation
You can verify the integrated installation by performing simple connectivity tests
to the WebSphere Application Server console, Tivoli Service Automation Manager
administrative and self-service user interfaces, and the Tivoli Provisioning Manager
user interface.
Before you begin
Make sure all middleware including the Tivoli Provisioning Manager deployment
engine is started as described in Starting and stopping the middleware on page
353.
Procedure
1. Log on to the WebSphere Application Server console for the MXServer.
a. Start the browser:
https://<management_server_hostname>:9043/admin
b. Log on as wasadmin and enter the password.
c. Log out and close the browser.
2. Log on to the administrative user interface as maxadmin.
a. Start the browser:
https://<management_server_hostname>:9443/maximo
b. Log on as maxadmin and enter the password. Use the password that you
set during the installation.
c. Log out and close the browser.
3. Log on to the administrative user interface as PMSCADMUSR. Use this user to
modify the service offerings or catalogs in Tivoli Service Request Manager.
a. Start the browser:
https://<management_server_hostname>:9443/maximo
b. Log in as PMSCADMUSR and enter the password that was set during the Base Services installation. The default password is maxadmin unless it was changed after the installation.
c. Log out and close the browser.
4. Log on to the self-service user interface as PMRDPCAUSR. This user ID is assigned the Tivoli Service Automation Manager Cloud Administrator role and is created automatically during product installation. Log on to the self-service user interface for the first time with this user.
a. Start the browser:
https://<management_server_hostname>:9443/SimpleSRM/
b. Log on as PMRDPCAUSR and enter the password. The password set
during installation is maxadmin.
c. Log out and close the browser.
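The same reachability checks can be run non-interactively with curl; a hedged sketch (the url_status and check_consoles helper names are illustrative, and -k skips certificate verification, which self-signed installations typically require):

```shell
#!/bin/sh
# Sketch: non-interactive reachability checks for the consoles verified above.
# Pass the management server host name; each URL should return an HTTP status
# (e.g. 200 or a redirect) rather than a connection error.

url_status() {
  curl -k -s -o /dev/null -w '%{http_code}' "$1"
}

check_consoles() {
  host=$1
  for url in \
      "https://$host:9043/admin" \
      "https://$host:9443/maximo" \
      "https://$host:9443/SimpleSRM/"; do
    echo "$url -> $(url_status "$url")"
  done
}
```

This only confirms that the servers answer; the interactive log-on tests above remain the authoritative verification.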
Upgrading Tivoli Service Automation Manager from 7.2.2.1, 7.2.2.2,
7.2.3, 7.2.4, 7.2.4.1, 7.2.4.2, 7.2.4.3 to 7.2.4.4
Perform the tasks described in this scenario to upgrade Tivoli Service Automation Manager from version 7.2.2.1, 7.2.2.2, 7.2.3, 7.2.4, 7.2.4.1, 7.2.4.2, or 7.2.4.3 to version 7.2.4.4.
About this task
To perform this upgrade scenario, you must have Tivoli Service Automation
Manager 7.2.2.1, 7.2.2.2, 7.2.3, 7.2.4, 7.2.4.1, 7.2.4.2, or 7.2.4.3 installed and
configured.
Starting the launchpad and performing the preinstallation
steps
The first step of the upgrade from version 7.2.2.1, 7.2.2.2, 7.2.3, 7.2.4, 7.2.4.1, 7.2.4.2, or 7.2.4.3 to 7.2.4.4 is the same for the administrative server and for the management server, and you can perform it at the same time on both servers.
Procedure
1. Start the Installation Launchpad on the management server and the
administrative server:
a. Log on as root (Linux or AIX) or from an account with system
administration privileges (Windows).
b. If you are using DVDs, insert the Tivoli Service Automation Manager Base
DVD to run the Installation Launchpad.
c. Run the Tivoli Service Automation Manager Installation Launchpad:
v Windows: from .\TSAMBASE7243\launchpad.exe.
v Linux or AIX: by running ./TSAMBASE7243/launchpad.sh.
Note: Under Linux or AIX, ensure that launchpad.sh is executable. If it is not, set the required access rights for it, for example:
chmod -R 777 ./TSAMBASE7243/launchpad.*
d. When the launchpad is started, you can select the language from the list in
the upper right corner of the main window.
2. Read the information in the Installation Overview and Installation Planning
sections.
3. Go to the Installation Planning page and in the System Definition section
specify the server roles.
4. Select the installation type that matches your current version: Upgrade current 7.2.2.1 installation to 7.2.4.4, Upgrade current 7.2.2.2 installation to 7.2.4.4, Upgrade current 7.2.3 installation to 7.2.4.4, Upgrade current 7.2.4 installation to 7.2.4.4, Upgrade current 7.2.4.1 installation to 7.2.4.4, Upgrade current 7.2.4.2 installation to 7.2.4.4, or Upgrade current 7.2.4.3 installation to 7.2.4.4.
5. In the Installation Files section, specify the locations of the dependent product
packages.
What to do next
Proceed to Installing the product on page 88.
Uninstalling the additional disk extension for VMware
When you upgrade to Tivoli Service Automation Manager 7.2.4.4 from previous
versions, you must uninstall the additional disk extension from VMware.
About this task
You must uninstall the additional disk extension for VMware because its user interface code is now merged into SimpleSRM.war in Tivoli Service Automation Manager. Therefore, custom_web.war must be cleaned up to remove the code specific to the extension.
Procedure
1. Create a backup of the original custom_web.war file, which is located on the Tivoli Service Automation Manager admin server at /opt/IBM/SMP/maximo/applications/SimpleSRM/custom_web.war.
2. Remove the files listed below from custom_web.war:
Location and files:
js\custom\tsam\api\requests
v AddDiskWizard_API.js
js\custom\tsam\dijit\nls
v CLAD_uiStringTable.js
js\custom\tsam\dijit\request\templates
v PMRDP_0201A_72_additional.html
js\custom\tsam\dijit\request
v ModifyStoragePanels.js
v PMCLAD_0172A_722.js
v PMCLAD_0173A_722.js
v PMCLAD_0174A_722.js
v PMRDP_0201A_72.js
v PMRDP_0211A_72.js
v PMRDP_0246A_72.js
v PMRDP_0248A_72.js
v PMRDP_0249A_72.js
js\custom\tsam\dijit\templates
v AdditionalDiskPanel.html
v AdditionalDiskPanelForDelete.html
v AdditionalDiskPanelForModify.html
v AdditionalDiskSummary.html
v ModifyStorageSummaryPane.html
v TSAMProjectSelectionPane.html
v TSAMServerSelectionPane.html
js\custom\tsam\dijit\themes\images
v ac22_restoreMaximoDefault.gif
v ac22_undo_24.png
v description_close_tab.png
v description_open_tab.png
js\custom\tsam\dijit\themes
v CLAD_SimpleSrmApp_Tundra.css
v CLAD_Tundra.css
v Custom_SimpleSrmApp_Tundra.css
v Custom_Tundra.css
v login.css
js\custom\tsam\dijit
v AdditionalDiskCapacityValidator.js
v AdditionalDiskGrid.js
v AdditionalDiskPanel.js
v AdditionalDiskPanelForDelete.js
v AdditionalDiskPanelForModify.js
v AdditionalDiskSummary.js
v CLADCSSLoader.js
v ModifyStorageSummaryPane.js
v TSAMProjectSelectionPane.js
v TSAMServerSelectionPane.js
js/custom/tsam/dojo/data
v AdditionalDiskStatusHandler.js
v cladQuery.js
v StorageDataParser.js
3. To stop MXServer and IBM Tivoli Provisioning Manager, log on to Tivoli
Service Automation Manager Management Server as tioadmin and run the
command: $TIO_HOME/tools/tio.sh stop.
4. On the admin server, as a root or administrator user, rebuild the Maximo ear file by running the appropriate command:
For a Windows admin server: C:\IBM\SMP\maximo\deployment\buildmaximoear.cmd
For a Linux admin server: /opt/IBM/SMP/maximo/deployment/buildmaximoear.sh
5. Log on to the IBM Tivoli Provisioning Manager server as user tioadmin and restart MXServer: $TIO_HOME/tools/tio.sh start.
6. As a root/administrator user, deploy the Maximo ear file from Admin Server:
%CTG_CCMDB_HOME%/jacl/solutions/DeployApplication.bat wasadmin
<wasadminpwd> MAXIMO WASNodeName mxename %Maximo_HOME%/deployment/
default/maximo.ear WASVirtualHost WASWebServerName.
For Windows admin server: c:\ibm\smp\jacl\solutions\
DeployApplication.bat wasadmin <wasadminpwd> MAXIMO ctgNode01 MXServer
C:\IBM\SMP\maximo\deployment\default\maximo.ear maximo_host webserver1
For Linux admin server: /opt/IBM/SMP/jacl/solutions/DeployApplication.sh
wasadmin <password> MAXIMO ctgNode01 MXServer /opt/IBM/SMP/maximo/
deployment/default/maximo.ear maximo_host webserver1
This action may take 15-20 minutes to complete.
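On a Linux administrative server, steps 4-6 above can be collected into a small helper that composes the commands before you run them. This is a sketch only: the /opt/IBM/SMP root and the WebSphere names (ctgNode01, MXServer, maximo_host, webserver1) are the defaults shown in this guide and must be verified against your installation, and the password value is a placeholder.

```shell
#!/bin/sh
# Sketch: compose the Maximo EAR rebuild and redeploy commands for a Linux
# admin server. Nothing is executed here; the commands are only printed so
# that the paths and WebSphere object names can be reviewed first.
SMP_HOME=${SMP_HOME:-/opt/IBM/SMP}   # default install root from this guide
WAS_USER=wasadmin
WAS_PWD='<wasadminpwd>'              # placeholder - supply at run time

BUILD_CMD="$SMP_HOME/maximo/deployment/buildmaximoear.sh"
DEPLOY_CMD="$SMP_HOME/jacl/solutions/DeployApplication.sh $WAS_USER $WAS_PWD \
MAXIMO ctgNode01 MXServer $SMP_HOME/maximo/deployment/default/maximo.ear \
maximo_host webserver1"

echo "1) rebuild the EAR:  $BUILD_CMD"
echo "2) redeploy the EAR: $DEPLOY_CMD"
```

Review the printed commands, then run them in order as described in steps 4-6.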
Installing the product
Install the product license and upgrade Tivoli Service Automation Manager to
7.2.4.4 using the launchpad.
About this task
Note: Pay attention to the icons that are displayed next to the steps in the
Installation Launchpad. They inform you whether the step must be performed on
the administrative server or on the management server.
Procedure
1. Go to the Preparation for Upgrade section of the launchpad and click Prepare
the system for upgrade.
2. Follow the instructions and perform all the preparation and backup steps.
3. In the License Agreement section, click Install the license agreement and
follow the steps in the installer that opens.
4. Install the Advanced Workflow components.
5. Install the Tivoli Service Automation Manager upgrade package.
What to do next
Proceed to Performing the post-installation steps on page 90.
Installing the Advanced Workflow Components
After Service Request Manager is installed, proceed to install the Advanced
Workflow Components.
Before you begin
v Ensure that the middleware is running. You can check its status by running the
tioStatus.sh or tioStatus.cmd command.
v Ensure that the Tivoli Provisioning Manager server is stopped. If it is running,
stop it with the tio.sh stop command.
About this task
Perform this installation on the same administrative server on which you installed
Service Request Manager.
Procedure
1. On the launchpad, select Product Installation > Advanced Workflow
Components.
2. Click the link to install the components and follow the instructions that appear.
Installing Tivoli Provisioning Manager 7.2.1 Interim Fix 5 core
components
Install the Tivoli Provisioning Manager 7.2.1 Interim Fix 5 core components using
the Tivoli Service Automation Manager launchpad.
Before you begin
Make sure that the middleware and base services are started as described in
Starting the management server on page 353. Make sure that the Tivoli Provisioning
Manager deployment engine is stopped. If it is running, stop it by running the
following command as tioadmin on the management server:
$TIO_HOME/tools/tio.sh stop -t
Server: Management server
Procedure
1. In the Launchpad, select Product Installation > Tivoli Provisioning Manager
7.2.1 iFix 5.
2. Click the link to install Tivoli Provisioning Manager 7.2.1 iFix 5 core
components.
3. Enter the required parameters and click the link to start the installation script.
What to do next
Proceed to Installing Tivoli Provisioning Manager 7.2.1 Interim Fix 5 Web
components on page 78.
Installing Tivoli Provisioning Manager 7.2.1 Interim Fix 5 Web
components
Install the Tivoli Provisioning Manager 7.2.1 Interim Fix 5 Web components using
the launchpad.
Before you begin
Make sure that the middleware and base services are started as described in
Starting the management server on page 353. Make sure that the Tivoli Provisioning
Manager deployment engine is stopped. If it is running, stop it by running the
following command as tioadmin on the management server:
$TIO_HOME/tools/tio.sh stop -t
Server: Administrative server
Procedure
1. In the Installation Launchpad, select Product Installation > Tivoli Provisioning
Manager 7.2.1 iFix 5.
2. Click the link to install the Web components and follow the instructions in the
installer.
What to do next
1. Perform the post-installation steps for Tivoli Provisioning Manager:
a. On the launchpad, go to Product Installation > Tivoli Provisioning
Manager 7.2.1 iFix 5 > Post-installation tasks for Tivoli Provisioning
Manager 7.2.1 iFix 5.
b. Click Perform the Tivoli Provisioning Manager 7.2.1 iFix 5
post-installation tasks.
c. Follow the steps described in the panel displayed.
Important: Perform a full system backup of both the management and the
administrative servers after the installation is finished.
2. After installing the Web and Core components of Tivoli Provisioning Manager
Interim Fix 5, download and install 7.2.1.0.5-TPM-0003LA. For more
information about the steps to install, see 7.2.4-TIV-TSAM-FP0003.README.
3. Proceed to Installing the Tivoli Service Automation Manager applications on
page 79.
Performing the post-installation steps
Complete these steps to integrate Tivoli Service Automation Manager with other
solution components.
Before you begin
Before starting the post-installation steps, you must stop the Tivoli Provisioning
Manager deployment engine, if it is not already stopped. Use the following
command to stop the engine:
./tio.sh stop -tpm
Procedure
1. In the Post-Installation Steps of the launchpad, perform all the steps that are
available for this installation scenario. The actions that are unavailable for this
server role or for this installation scenario are greyed out.
2. Click Specify Installation Parameters Required for Component Integration
and enter the required information.
3. Click Configure the Cloud Management Components and follow the
instructions to perform the listed actions.
What to do next
Proceed to Upgrading Tivoli Service Automation Manager from 7.2.2.1, 7.2.2.2,
7.2.3, 7.2.4, 7.2.4.1, 7.2.4.2, 7.2.4.3 to 7.2.4.4 on page 85.
Finishing the upgrade
Perform these steps to finish the upgrade of Tivoli Service Automation Manager to
version 7.2.4.4.
Before you begin
Make sure that all services are running according to the Tivoli Service Automation
Manager documentation before starting the provisioning.
Procedure
1. Log on to the Tivoli Service Automation Manager self-service user interface as
cloud administrator.
2. Log on to the administrative user interface as user maxadmin.
3. Create a provisioning request at the 7.2.4.4 level with an image that is
registered at the 7.2.2.1 or 7.2.2.2 level. Verify that the request completes
successfully.
What to do next
Perform the tasks described in Mandatory data migration when upgrading from
Tivoli Service Automation Manager 7.2.2.1 to 7.2.4.4.
Mandatory data migration when upgrading from Tivoli Service
Automation Manager 7.2.2.1 to 7.2.4.4
After upgrading Tivoli Service Automation Manager from fix pack 7.2.2.1 to
version 7.2.4.4, you must perform a set of data migration steps.
Upgrading existing Cloud Server Pools and Cloud Storage Pools - After you
upgrade Tivoli Service Automation Manager to version 7.2.4.4, you must upgrade
the existing Cloud Server Pools and Cloud Storage Pools. Tivoli Service
Automation Manager 7.2.4.4 saves the provisioning data stores for VMware
hypervisors in a table; in previous versions, the SAN Storage Pool Name was
used as an input field. After you upgrade a Tivoli Service Automation Manager
system to 7.2.4.4, existing VMware Cloud Server Pools stay intact until they are
disabled. To re-enable a Cloud Server Pool, select the provisioning data stores
again manually in the Additional Resources tab. Tivoli Service Automation
Manager 7.2.4.4 also adds resource consumption bars for cloud server pools in
the Cloud Server Pool Administration application; these bars give a quick
overview of the current consumption of the backend resources. To view the
resource consumption bars for Cloud Server Pools that were created before the
upgrade:
1. Go to Service Automation > Configuration > Cloud Server Pool
Administration.
2. Disable the Cloud Server Pool.
3. Go to Additional Resources tab and associate the Cloud Server Pool.
4. Click Save.
5. Validate and enable Cloud Server Pool.
To re-enable the Cloud Storage Pools, select provisioning data stores again
manually in the Associated Datastores tab:
1. Go to Service Automation > Configuration > Cloud Storage Pool
Administration.
2. Disable the Cloud Storage Pool.
3. Go to Associated Cloud Server Pool tab and associate the Cloud Server Pool.
4. Go to Associated Datastore tab and add the datastore, which existed before the
upgrade.
5. Click Save.
6. Validate and enable Cloud Storage Pool.
Optional Migration of VMware Cloud Pools for vSphere 5 features
If you want support for more than 8 virtual CPUs on vSphere 5 backends for
existing VMware cloud pools, you must run the virtual center host discovery
again.
Upgrading service instances, upgrading network, and migrating VMware servers
After upgrading Tivoli Service Automation Manager from fix pack 7.2.2.1 to
version 7.2.4.4, you must upgrade the existing service instances:
Upgrading the service instance to the new revision
Upgrade the existing service instance to revision 8 that is shipped with Tivoli
Service Automation Manager 7.2.4.4.
Procedure
1. Log on to the administrative user interface as maxadmin.
2. Click Go To > Service Automation > Service Update Packages.
3. In the List tab, filter for RDPVS Revision 8.
4. Select the Service Instance Deployments tab.
5. Click Deploy on Service Deployment Instances. The Deploy Service Update
Package on Service Deployment Instances window is displayed.
6. Select the service deployment instances that you want to upgrade with the
content of the service update package. Select all operational service instances.
Note:
a. You can select only those service deployment instances that are in
Operational or Maintenance status.
b. After applying the upgrade package to update the service instances to
revision 8, upgraded service instances are still displayed with the old
revision, for example version 7. In this way, it is possible to identify the
service definition revision with which an instance was created.
7. Click Deploy on Service Deployment Instances to start the deployment for
the selected service deployment instances.
8. When prompted about the execution of the management plan after deploying
the update package, click Yes.
9. In the toolbar menu, click Refresh.
10. Verify that the status for each service update package deployment is Applied.
What to do next
Proceed to Defining an IP address selection rule.
Upgrading network
After upgrading Tivoli Service Automation Manager from 7.2.2.1 to 7.2.4.4, you
must perform some additional migration steps related to network.
Defining an IP address selection rule:
Define which IP addresses are displayed in the notification emails sent after
successful provisioning.
About this task
For more information about how to define IP address selection rules, see IP
address selection rules on page 229.
You can either define an IP address selection rule on the system level or on the
customer level.
Procedure
v Configuration on the system level is performed by modifying the
PMRDP.Net.IPUpdateRule system property. Set or update this property using the
System Properties application in the administrative user interface:
1. Log on to the administrative user interface as maxadmin.
2. Click Go To > System Configuration > Platform Configuration > System
Properties.
3. Filter for the PMRDP.Net.IPUpdateRule property. This property contains an IP
address selection rule.
4. Click the property to edit it.
v The customer level is configured by modifying the IPUpdateRule property in the
network template. The property is optional and contains an IP address
selection rule. If it is set, it overrides the system-level property. Set this
property in the network template XML file. All customers that use the same
network template are subject to the same IP address selection rule.
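The effect of a selection rule can be pictured as choosing one of a server's addresses by a preferred prefix. The following is a purely illustrative sketch: the actual rule syntax is described in IP address selection rules on page 229, and the prefix and address values below are invented for the example.

```shell
#!/bin/sh
# Illustrative only: select the address to report by matching a preferred
# subnet prefix first, falling back to the first known address otherwise.
# The prefix and addresses are example values, not product defaults.
PREFERRED_PREFIX="10.1."
ADDRS="192.168.0.15 10.1.7.22 10.2.3.4"

selected=""
for a in $ADDRS; do
  case "$a" in
    "$PREFERRED_PREFIX"*) selected="$a"; break ;;
  esac
done
# Fall back to the first address when nothing matches the preferred prefix
[ -n "$selected" ] || selected="${ADDRS%% *}"
echo "$selected"   # 10.1.7.22 for the example values above
```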
Running IP address discovery for operational servers:
Run the discovery of IP addresses to verify and update the topology properties of
the operational servers and the deployed images.
Before you begin
Before performing this task, configure the IP selection rules according to your
requirements. See, IP address selection rules on page 229 and Defining an IP
address selection rule on page 92.
About this task
See IPv6 properties and their layout on page 230 for more information about
IPv6 properties and about a configuration property that you can set for IPv6
discovery.
Procedure
1. Log on to the administrative user interface as cloud network administrator.
2. Click Go To > Service Automation > Configuration > Cloud Network
Administration.
3. From the Select Action menu, select Run IP Address Discovery.
4. In the window that opens, click OK.
5. Wait for the discovery to complete.
Note: The time of the discovery process depends on the number of operational
servers in the environment. You can click the action again to see its status.
Several statuses are possible:
v Network Discovery is in progress. It is not possible to submit a new
discovery.
v Network Discovery has been submitted. It is not possible to submit a new
discovery.
v Network Discovery succeeded for all servers. It is possible to submit a new
discovery.
v Network Discovery failed for one or more servers. See SystemOut.log for
details about why the discovery failed. It is possible to submit a new discovery.
v Network Discovery timed out for one or more servers. For some servers, the
discovery took longer than the process waited. See SystemOut.log for details
about why the discovery timed out. It is possible to submit a new discovery.
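When a discovery fails or times out, the reasons end up in SystemOut.log. A minimal way to pull the relevant entries is sketched below; the sample log lines and message wording are invented for the illustration, and the real log lives under your WebSphere profile's logs directory.

```shell
#!/bin/sh
# Sketch: filter discovery failures out of SystemOut.log. The sample log is
# created here only to make the example self-contained; point grep at the
# real SystemOut.log on the management server instead.
cat > SystemOut.log <<'EOF'
[10/1/12 10:00:01] ServletWrap I   Initialization complete.
[10/1/12 10:05:12] NetworkDisc E   IP address discovery failed for server vm0042
[10/1/12 10:05:13] NetworkDisc E   IP address discovery timed out for server vm0043
EOF

# Keep only discovery entries that report a failure or a timeout
grep -i "discovery" SystemOut.log | grep -iE "fail|timed out"
```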
Results
The IP address properties in the server topology are updated.
Migrating restored VMware servers
It is possible that the MAC addresses of a restored VMware server do not match to
the values of the servers in the virtual center. It can occur on all VMware
provisioned servers, which were previously restored from a saved images. You
must run the following procedure one time after you upgrade to 7.2.4.4 and before
running a virtual center discovery in the Cloud Server Pool Administration.
Before you begin
Tivoli Service Automation Manager provides a workflow that lists the VMware
provisioned servers that have saved images associated with them, together with
the MAC addresses of their network interface cards. The workflow name is
VMware_DisplayRestoredServerCandidates.wkf. After you finish the procedure, the
data in the DCM and the data in the virtual center are in sync.
Note: Failure to complete this procedure can corrupt the network interface card
data of a provisioned server. It can result in a situation where the IP addresses
of existing servers are no longer blocked and can be reused.
About this task
Follow this procedure to check or correct the MAC addresses of the restored
servers:
Procedure
1. Log on to the administrative UI.
2. Go to Administration > Provisioning > Provisioning Workflows.
3. Find the VMware_DisplayRestoredServerCandidates.wkf workflow on the list.
4. To run the workflow, click Run next to the workflow in the list, and then click
Run again in the dialog box.
5. In the pop-up windows that follow, click No.
6. The workflow can take some time to complete, depending on the number of
resource pools and VMware provisioned servers in the environment.
7. To check whether the workflow is completed, go to Administration >
Provisioning > Provisioning Workflow Status and click Refresh.
8. After the workflow is completed, compare the MAC address of each server
listed in the workflow execution with the data for that provisioned server in
the virtual center.
9. For all servers where the MAC addresses do not match, correct the MAC
address in the Provisioning Computers application:
a. Go to IT Infrastructure > Provisioning Inventory > Provisioning
Computers.
b. Find each server in the list.
c. Open the server.
d. Click the Hardware tab.
e. In the lower table, click the NIC tab. A list of all the NICs of the server is
shown with their MAC addresses.
f. Expand the details for each NIC and correct the MAC address with the
value from the virtual center for this server.
g. Click Save in the top toolbar.
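Comparing the workflow's MAC list with an export from the virtual center can be scripted. The sketch below normalizes both lists (lowercase, colon separators) and prints the addresses that exist in the DCM but not in the virtual center; the two input files are hypothetical exports, one MAC address per line, created inline so the example is self-contained.

```shell
#!/bin/sh
# Sketch: find MAC addresses recorded in the DCM that do not match the
# virtual center. dcm_macs.txt and vcenter_macs.txt are hypothetical
# exports; replace the inline samples with your real data.
normalize() {
  tr 'A-F' 'a-f' | tr -d ' \t' | sed 's/-/:/g' | sort -u
}

cat > dcm_macs.txt <<'EOF'
00:50:56:AB:CD:01
00-50-56-ab-cd-02
EOF
cat > vcenter_macs.txt <<'EOF'
00:50:56:ab:cd:01
00:50:56:ab:cd:03
EOF

normalize < dcm_macs.txt     > dcm.norm
normalize < vcenter_macs.txt > vc.norm
# Addresses only in the DCM list are the ones that need correction:
comm -23 dcm.norm vc.norm
```

Each address printed identifies a NIC whose MAC must be corrected in step 9 above.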
Migrating VMware Additional Disks
If you have additional disks that were created for VMware servers in earlier
versions of Tivoli Service Automation Manager, you must run the migration for
VMware additional disks.
Procedure
1. Log on to the administrative user interface as Admin user.
2. Click Go To > Service Automation > Configuration > Cloud Storage Pool
Administration.
3. From the Select Action menu, select Update VMware Additional Disks.
4. Click OK and wait for the migration to complete.
Results
The Disk Name property (the same as the mount point) is added to the storage
topology, the Virtual Server Template, and the Resource Allocation.
Mandatory data migration when upgrading from Tivoli Service
Automation Manager 7.2.2.2 to 7.2.4.4
When you upgrade Tivoli Service Automation Manager from fix pack 7.2.2.2 to
version 7.2.4.4, you must perform a set of data migration steps.
Upgrading existing Cloud Server Pools and Cloud Storage Pools - After you
upgrade Tivoli Service Automation Manager to version 7.2.4.4, you must upgrade
the existing Cloud Server Pools and Cloud Storage Pools. Tivoli Service
Automation Manager 7.2.4.4 saves the provisioning data stores for VMware
hypervisors in a table; in previous versions, the SAN Storage Pool Name was
used as an input field. After you upgrade a Tivoli Service Automation Manager
system to 7.2.4.4, existing VMware Cloud Server Pools stay intact until they are
disabled. To re-enable a Cloud Server Pool, select the provisioning data stores
again manually in the Additional Resources tab. Tivoli Service Automation
Manager 7.2.4.4 also adds resource consumption bars for cloud server pools in
the Cloud Server Pool Administration application; these bars give a quick
overview of the current consumption of the backend resources. To view the
resource consumption bars for Cloud Server Pools that were created before the
upgrade:
1. Go to Service Automation > Configuration > Cloud Server Pool
Administration.
2. Disable the Cloud Server Pool.
3. Go to Additional Resources tab and associate the Cloud Server Pool.
4. Click Save.
5. Validate and enable Cloud Server Pool.
To re-enable the Cloud Storage Pools, select provisioning data stores again
manually in the Associated Datastores tab:
1. Go to Service Automation > Configuration > Cloud Storage Pool
Administration.
2. Disable the Cloud Storage Pool.
3. Go to Associated Cloud Server Pool tab and associate the Cloud Server Pool.
4. Go to Associated Datastore tab and add the datastore, which existed before the
upgrade.
5. Click Save.
6. Validate and enable Cloud Storage Pool.
Optional Migration of VMware Cloud Pools for vSphere 5 features
If you want support for more than 8 virtual CPUs on vSphere 5 backends for
existing VMware cloud pools, you must run the virtual center host discovery
again.
Upgrading service instances
After upgrading Tivoli Service Automation Manager from fix pack 7.2.2.2 to
version 7.2.4.4, you must upgrade the existing service instances:
Upgrading the service instance to the new revision
Upgrade the existing service instance to revision 8 that is shipped with Tivoli
Service Automation Manager 7.2.4.4.
Procedure
1. Log on to the administrative user interface as maxadmin.
2. Click Go To > Service Automation > Service Update Packages.
3. In the List tab, filter for RDPVS Revision 8.
4. Select the Service Instance Deployments tab.
5. Click Deploy on Service Deployment Instances. The Deploy Service Update
Package on Service Deployment Instances window is displayed.
6. Select the service deployment instances that you want to upgrade with the
content of the service update package. Select all operational service instances.
Note:
a. You can select only those service deployment instances that are in
Operational or Maintenance status.
b. After applying the upgrade package to update the service instances to
revision 8, upgraded service instances are still displayed with the old
revision, for example version 7. In this way, it is possible to identify the
service definition revision with which an instance was created.
7. Click Deploy on Service Deployment Instances to start the deployment for
the selected service deployment instances.
8. When prompted about the execution of the management plan after deploying
the update package, click Yes.
9. In the toolbar menu, click Refresh.
10. Verify that the status for each service update package deployment is Applied.
What to do next
Proceed to Defining an IP address selection rule.
Mandatory data migration when upgrading from Tivoli Service
Automation Manager 7.2.3 to 7.2.4.4
When you upgrade Tivoli Service Automation Manager from fix pack 7.2.3 to
version 7.2.4.4, you must perform a set of data migration steps.
Migration of VMC image from Tivoli Service Automation Manager 7.2.3 to
7.2.4.4
To use the VMC NIM images that were created with Tivoli Service Automation
Manager 7.2.3 in Tivoli Service Automation Manager 7.2.4.4:
Just before the migration:
1. Unregister all registered VMC images.
Just after the migration:
1. Run the discovery on all VMC pools.
2. If validation errors are detected, fix them (see: Fixing Image Validation Errors
on page 510) and rerun the discovery.
3. Register the images again.
Upgrading existing Cloud Server Pools and Cloud Storage Pools - After you
upgrade Tivoli Service Automation Manager to version 7.2.4.4, you must upgrade
the existing Cloud Server Pools and Cloud Storage Pools. Tivoli Service
Automation Manager 7.2.4.4 saves the provisioning data stores for VMware
hypervisors in a table; in previous versions, the SAN Storage Pool Name was
used as an input field. After you upgrade a Tivoli Service Automation Manager
system to 7.2.4.4, existing VMware Cloud Server Pools stay intact until they are
disabled. To re-enable a Cloud Server Pool, select the provisioning data stores
again manually in the Additional Resources tab. Tivoli Service Automation
Manager 7.2.4.4 also adds resource consumption bars for cloud server pools in
the Cloud Server Pool Administration application; these bars give a quick
overview of the current consumption of the backend resources. To view the
resource consumption bars for Cloud Server Pools that were created before the
upgrade:
1. Go to Service Automation > Configuration > Cloud Server Pool
Administration.
2. Disable the Cloud Server Pool.
3. Go to Additional Resources tab and associate the Cloud Server Pool.
4. Click Save.
5. Validate and enable Cloud Server Pool.
To re-enable the Cloud Storage Pools, select provisioning data stores again
manually in the Associated Datastores tab:
1. Go to Service Automation > Configuration > Cloud Storage Pool
Administration.
2. Disable the Cloud Storage Pool.
3. Go to Associated Cloud Server Pool tab and associate the Cloud Server Pool.
4. Go to Associated Datastore tab and add the datastore, which existed before the
upgrade.
5. Click Save.
6. Validate and enable Cloud Storage Pool.
Optional Migration of VMware Cloud Pools for vSphere 5 features
If you want support for more than 8 virtual CPUs on vSphere 5 backends for
existing VMware cloud pools, you must run the virtual center host discovery
again.
Mandatory data migration when upgrading from Tivoli Service
Automation Manager 7.2.4 to 7.2.4.4
When you upgrade Tivoli Service Automation Manager from fix pack 7.2.4 to
version 7.2.4.4, you must perform a set of data migration steps.
Migration of VMC image from Tivoli Service Automation Manager 7.2.4 to
7.2.4.4
To use the VMC NIM images that were created with Tivoli Service Automation
Manager 7.2.4 in Tivoli Service Automation Manager 7.2.4.4:
Just before the migration:
1. Unregister all registered VMC images.
Just after the migration:
1. Run the discovery on all VMC pools.
2. If validation errors are detected, fix them (see: Fixing Image Validation Errors
on page 510) and rerun the discovery.
3. Register the images again.
Upgrading existing Cloud Server Pools and Cloud Storage Pools - After you
upgrade Tivoli Service Automation Manager to version 7.2.4.4, you must upgrade
the existing Cloud Server Pools and Cloud Storage Pools. Tivoli Service
Automation Manager 7.2.4.4 saves the provisioning data stores for VMware
hypervisors in a table; in previous versions, the SAN Storage Pool Name was
used as an input field. After you upgrade a Tivoli Service Automation Manager
system to 7.2.4.4, existing VMware Cloud Server Pools stay intact until they are
disabled. To re-enable a Cloud Server Pool, select the provisioning data stores
again manually in the Additional Resources tab. Tivoli Service Automation
Manager 7.2.4.4 also adds resource consumption bars for cloud server pools in
the Cloud Server Pool Administration application; these bars give a quick
overview of the current consumption of the backend resources. To view the
resource consumption bars for Cloud Server Pools that were created before the
upgrade:
1. Go to Service Automation > Configuration > Cloud Server Pool
Administration.
2. Disable the Cloud Server Pool.
3. Go to Additional Resources tab and associate the Cloud Server Pool.
4. Click Save.
5. Validate and enable Cloud Server Pool.
To re-enable the Cloud Storage Pools, select provisioning data stores again
manually in the Associated Datastores tab:
1. Go to Service Automation > Configuration > Cloud Storage Pool
Administration.
2. Disable the Cloud Storage Pool.
3. Go to Associated Cloud Server Pool tab and associate the Cloud Server Pool.
4. Go to Associated Datastore tab and add the datastore, which existed before the
upgrade.
5. Click Save.
6. Validate and enable Cloud Storage Pool.
Optional Migration of VMware Cloud Pools for vSphere 5 features
If you want support for more than 8 virtual CPUs on vSphere 5 backends for
existing VMware cloud pools, you must run the virtual center host discovery
again.
Mandatory data migration when upgrading from Tivoli Service
Automation Manager 7.2.4.1 to 7.2.4.4
When you upgrade Tivoli Service Automation Manager from fix pack 7.2.4.1 to
version 7.2.4.4, you must perform a set of data migration steps.
Migration of VMC image from Tivoli Service Automation Manager 7.2.4.1 to
7.2.4.4
To use the VMC NIM images that were created with Tivoli Service Automation
Manager 7.2.4.1 in Tivoli Service Automation Manager 7.2.4.4:
Just before the migration:
1. Unregister all registered VMC images.
Just after the migration:
1. Run the discovery on all VMC pools.
2. If validation errors are detected, fix them (see: Fixing Image Validation Errors
on page 510) and rerun the discovery.
3. Register the images again.
Upgrading existing Cloud Server Pools and Cloud Storage Pools - After you
upgrade Tivoli Service Automation Manager to version 7.2.4.4, you must upgrade
the existing Cloud Server Pools and Cloud Storage Pools. Tivoli Service
Automation Manager 7.2.4.4 saves the provisioning data stores for VMware
hypervisors in a table; in previous versions, the SAN Storage Pool Name was
used as an input field. After you upgrade a Tivoli Service Automation Manager
system to 7.2.4.4, existing VMware Cloud Server Pools stay intact until they are
disabled. To re-enable a Cloud Server Pool, select the provisioning data stores
again manually in the Additional Resources tab. Tivoli Service Automation
Manager 7.2.4.4 also adds resource consumption bars for cloud server pools in
the Cloud Server Pool Administration application; these bars give a quick
overview of the current consumption of the backend resources. To view the
resource consumption bars for Cloud Server Pools that were created before the
upgrade:
1. Go to Service Automation > Configuration > Cloud Server Pool
Administration.
2. Disable the Cloud Server Pool.
3. Go to Additional Resources tab and associate the Cloud Server Pool.
4. Click Save.
5. Validate and enable Cloud Server Pool.
To re-enable the Cloud Storage Pools, select provisioning data stores again
manually in the Associated Datastores tab:
1. Go to Service Automation > Configuration > Cloud Storage Pool
Administration.
2. Disable the Cloud Storage Pool.
3. Go to Associated Cloud Server Pool tab and associate the Cloud Server Pool.
4. Go to Associated Datastore tab and add the datastore, which existed before the
upgrade.
5. Click Save.
6. Validate and enable Cloud Storage Pool.
Optional Migration of VMware Cloud Pools for vSphere 5 features
If you want support for more than 8 virtual CPUs on vSphere 5 backends for
existing VMware cloud pools, you must run the virtual center host discovery
again.
Mandatory data migration when upgrading from Tivoli Service
Automation Manager 7.2.4.2 to 7.2.4.4
When you upgrade Tivoli Service Automation Manager from fix pack 7.2.4.2 to
version 7.2.4.4, you must perform a set of data migration steps.
Migration of VMC image from Tivoli Service Automation Manager 7.2.4.2 to
7.2.4.4
To use the VMC NIM images that were created with Tivoli Service Automation
Manager 7.2.4.2 in Tivoli Service Automation Manager 7.2.4.4:
Just before the migration:
1. Unregister all registered VMC images.
Just after the migration:
1. Run the discovery on all VMC pools.
2. If validation errors are detected, fix them (see: Fixing Image Validation Errors
on page 510) and rerun the discovery.
3. Register the images again.
Upgrading existing Cloud Server Pools and Cloud Storage Pools - After you
upgrade Tivoli Service Automation Manager to version 7.2.4.4, you must upgrade
the existing Cloud Server Pools and Cloud Storage Pools. Tivoli Service
Automation Manager 7.2.4.4 saves the provisioning data stores for VMware
hypervisors in a table; in previous versions, the SAN Storage Pool Name was
used as an input field. After you upgrade a Tivoli Service Automation Manager
system to 7.2.4.4, existing VMware Cloud Server Pools stay intact until they are
disabled. To re-enable a Cloud Server Pool, select the provisioning data stores
again manually in the Additional Resources tab. Tivoli Service Automation
Manager 7.2.4.4 also adds resource consumption bars for cloud server pools in
the Cloud Server Pool Administration application; these bars give a quick
overview of the current consumption of the backend resources. To view the
resource consumption bars for Cloud Server Pools that were created before the
upgrade:
1. Go to Service Automation > Configuration > Cloud Server Pool
Administration.
2. Disable the Cloud Server Pool.
3. Go to Additional Resources tab and associate the Cloud Server Pool.
4. Click Save.
5. Validate and enable Cloud Server Pool.
To re-enable the Cloud Storage Pools, select provisioning data stores again
manually in the Associated Datastores tab:
1. Go to Service Automation > Configuration > Cloud Storage Pool
Administration.
2. Disable the Cloud Storage Pool.
3. Go to Associated Cloud Server Pool tab and associate the Cloud Server Pool.
4. Go to Associated Datastore tab and add the datastore, which existed before the
upgrade.
5. Click Save.
6. Validate and enable Cloud Storage Pool.
Optional Migration of VMware Cloud Pools for vSphere 5 features
If you want support for more than 8 virtual CPUs on vSphere 5 backends for
existing VMware cloud pools, you must run the virtual center host discovery
again.
Mandatory data migration when upgrading from Tivoli Service
Automation Manager 7.2.4.3 to 7.2.4.4
When you upgrade Tivoli Service Automation Manager from fix pack 7.2.4.3 to
version 7.2.4.4, you must perform a set of data migration steps.
Migration of VMC image from Tivoli Service Automation Manager 7.2.4.3 to
7.2.4.4
To use the VMC NIM images that were created with Tivoli Service Automation
Manager 7.2.4.3 in Tivoli Service Automation Manager 7.2.4.4:
Just before the migration:
1. Unregister all registered VMC images.
Just after the migration:
1. Run the discovery on all VMC pools.
2. If validation errors are detected, fix them (see: Fixing Image Validation Errors
on page 510) and rerun the discovery.
3. Register the images again.
Upgrading existing Cloud Server Pools and Cloud Storage Pools - After you
upgrade Tivoli Service Automation Manager to version 7.2.4.4, you must upgrade
the existing Cloud Server Pools and Cloud Storage Pools. Tivoli Service
Automation Manager 7.2.4.4 saves the provisioning data stores for VMware
hypervisors in a table. In the previous versions, the SAN Storage Pool Name was
used as an input field. After you upgrade a Tivoli Service Automation Manager
system to 7.2.4.4, existing VMware Cloud Server Pools stay intact until they are
disabled. To re-enable the Cloud Server Pools, select provisioning data stores again
manually in the Additional Resources tab. Tivoli Service Automation Manager
7.2.4.4 adds resource consumption bars for cloud server pools in the Cloud Server
Pool Administration application. These bars allow a quick overview of the current
consumption of the back-end resources. To view the resource consumption bars for
Cloud Server Pools that were created before the upgrade:
1. Go to Service Automation > Configuration > Cloud Server Pool
Administration.
Chapter 2. Installing and upgrading 101
2. Disable the Cloud Server Pool.
3. Go to the Additional Resources tab and associate the provisioning data stores.
4. Click Save.
5. Validate and enable Cloud Server Pool.
To re-enable the Cloud Storage Pools, select provisioning data stores again
manually in the Associated Datastores tab:
1. Go to Service Automation > Configuration > Cloud Storage Pool
Administration.
2. Disable the Cloud Storage Pool.
3. Go to the Associated Cloud Server Pool tab and associate the Cloud Server Pool.
4. Go to the Associated Datastore tab and add the data store that existed before
the upgrade.
5. Click Save.
6. Validate and enable Cloud Storage Pool.
Optional Migration of VMware Cloud Pools for vSphere 5 features
If you want support for more than 8 virtual CPUs for vSphere 5 back ends
for existing VMware cloud pools, you must run the virtual center host discovery
again.
Enabling Tivoli Monitoring agent
You must enable the Tivoli Monitoring agent if it is not enabled after the upgrade.
About this task
Enable monitoring agent installation during provisioning for each of the following
offerings:
v PMRDP_0211A_72 (Add VMware Servers)
v PMRDP_0212A_72 (Add POWER LPAR Servers)
v PMRDP_0213A_72 (Add Xen Servers)
v PMRDP_0214A_72 (Add z/VM Linux Servers)
v PMRDP_0215A_72 (Add KVM Servers)
v PMRDP_0201A_72 (Create Project with VMware Servers)
v PMRDP_0202A_72 (Create Project with POWER LPAR Servers)
v PMRDP_0203A_72 (Create Project with Xen Servers)
v PMRDP_0204A_72 (Create Project with z/VM Linux Servers)
v PMRDP_0205A_72 (Create Project with KVM Servers)
v PMRDP_0206A_72 (Create POWER LPAR Servers via IBM Systems Director
VMControl)
v PMRDP_0216A_72 (Add POWER LPAR Servers via IBM Systems Director
VMControl)
Procedure
1. Log on to the administrative user interface as PMSCADMUSR.
2. Click Go To > Service Request Manager Catalog > Offerings.
3. Click the offering to open it.
4. Click Change Status and change the status of the offering to Pending.
5. Switch to the Specifications tab.
6. In the Presentation section, select the Mandatory check box and clear the
Hidden check box for the offering attribute
PMRDPCLSWS_MONITORING.
7. Change the status of this offering to Active.
Configuring IBM HTTP Server
Configure the IBM HTTP Server after installing Tivoli Service Automation Manager
and its components.
Configure the web server to handle HTTP requests
Procedure
1. Specify port 80 for maximo_host:
a. Log on to the WebSphere Application Server administrative console
(https://<server>:9043/ibm/console).
b. Click Environment > Virtual Hosts.
c. Select maximo_host.
d. Under Additional Properties, click Host Aliases.
e. Verify that port 80 is shown in the list. If not, add it.
2. Edit httpd.conf:
a. In the WebSphere administrative console, click Servers > Web servers.
b. Click webserver1 in the list and click Edit to edit httpd.conf.
c. If not already done, uncomment (or add, if missing) the following lines
at the end of the file:
LoadModule was_ap20_module /opt/IBM/HTTPServer/Plugins/bin/32bits/mod_was_ap20_http.so
WebSpherePluginConfig /opt/IBM/HTTPServer/Plugins/config/webserver1/plugin-cfg.xml
d. Click OK to save httpd.conf.
3. Generate the plug-in configuration:
a. In the WebSphere administrative console, click Servers > Web servers.
b. Mark the check box next to webserver1 and then select Generate Plug-in.
c. Mark the check box next to webserver1 again and then select Propagate
Plug-in.
4. Restart WebSphere Application Server:
a. Open the command window.
b. Run the following commands as user tioadmin:
$TIO_HOME/tools/tio.sh stop <wasadmin> <password>
$TIO_HOME/tools/tio.sh start <wasadmin> <password>
5. Restart the HTTP server:
a. Open the command window.
b. Run the following commands as user root:
/opt/IBM/HTTPServer/bin/apachectl stop
/opt/IBM/HTTPServer/bin/apachectl start
6. Log on to the administrative user interface to ensure that it is operational. Use
the address http://<server>/maximo to access the interface through the HTTP
server. For more details, see Logging on to the Tivoli Service Automation
Manager administrative interface.
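The restart sequence in steps 4 and 5 can be combined into a small helper script. This is a sketch, not part of the product: the command list is taken from the steps above, and the user switching and credential placeholders must be adjusted for your installation.

```shell
#!/bin/sh
# Sketch: run the documented restart commands in order, aborting on the
# first failure so that a failed stop is not followed by a start.
run_steps() {
  for cmd in "$@"; do
    echo "running: $cmd"
    $cmd || { echo "failed: $cmd" >&2; return 1; }
  done
}

# Example invocation (tio.sh as user tioadmin, apachectl as root):
# run_steps "$TIO_HOME/tools/tio.sh stop <wasadmin> <password>" \
#           "$TIO_HOME/tools/tio.sh start <wasadmin> <password>" \
#           "/opt/IBM/HTTPServer/bin/apachectl stop" \
#           "/opt/IBM/HTTPServer/bin/apachectl start"
```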
Configure the web server to handle HTTPS requests
Procedure
1. Enable SSL for the web server:
a. In the WebSphere administrative console (https://<server>:9043/ibm/
console/), click Servers > Web Servers.
b. Click webserver1 and select Web Server Virtual Hosts. Click New in the
window that opens.
c. In the window, select Security enabled virtual host and click Next.
d. Enter the following parameters:
v webserver1 as Key store file name
v $(WEB_INSTALL_ROOT)/conf as the Target key store directory
v WebAS as Key store password
v selfSigned as Certificate alias
e. Click Next.
f. Enter the following parameters:
v * as IP Address
v 443 as the Port
Note: Make sure that port 443 is available on your system. If not, use 444 or
any other available port for this step and for all subsequent steps.
g. Click Next and then click Finish.
h. Click Save.
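As the note in step 1.f indicates, the chosen SSL port must be free. You can check this from the command line with a small sketch like the following (it uses ss where available and falls back to netstat):

```shell
#!/bin/sh
# Sketch: return success (0) if something is already listening on the
# given TCP port, so the caller knows to pick another port.
port_in_use() {
  port="$1"
  if command -v ss >/dev/null 2>&1; then
    ss -ltn 2>/dev/null | grep -q ":$port "
  else
    netstat -ltn 2>/dev/null | grep -q ":$port "
  fi
}

if port_in_use 443; then
  echo "port 443 is busy - use 444 or another free port"
fi
```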
2. Export the AppServer public certificate for later import into the WebSphere Key
Store:
a. In the WebSphere administrative console (https://<server>:9043/ibm/
console/), click Security > SSL certificate and key management.
b. Under Configuration settings, click Manage endpoint security
configurations.
c. Under Outbound, expand ctgCell01 > nodes > ctgNode01 > servers and
click MXServer.
d. Click Key stores and certificates.
e. Click TpmKeyStore.
f. Click Personal Certificates.
g. Select the check box for tpmuicert and click Extract.
h. Enter a certificate file name, for example /tmp/key.cert and click OK.
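Before you import the certificate in step 4, you can verify that the extracted file contains the expected certificate. This sketch assumes that the extract produced a Base64-encoded (PEM) file and that openssl is available on the workstation:

```shell
#!/bin/sh
# Sketch: print the subject and validity dates of an extracted
# certificate so you can confirm it is the expected one.
show_cert() {
  openssl x509 -in "$1" -noout -subject -dates
}

# Example, using the file name from step 2.h:
# show_cert /tmp/key.cert
```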
3. Correct the settings in the plugin-cfg.xml file to support SSL:
a. In the WebSphere administrative console (https://<server>:9043/ibm/
console/), click Servers > Web Servers.
b. Click webserver1 and select Plug-in properties.
c. In the section Repository copy of Web server plug-in files, change the
plug-in key store file name to webserver1.kdb.
d. In the section Web server copy of Web server plug-in files, change the
plug-in key store directory and file name to
/opt/IBM/HTTPServer/conf/webserver1.kdb.
e. Click OK.
f. Click Save.
4. Import the Appserver certificate:
a. In the WebSphere administrative console (https://<server>:9043/ibm/
console/), click Servers > Web Servers.
b. Click webserver1 and select Plug-in properties.
c. In the section Repository copy of Web server plug-in files, click Manage
keys and certificates.
d. Click Signer Certificates.
e. Click Add.
f. Enter TSAM for Alias.
g. Enter the file name of the previously exported AppServerCertificate, for
example /tmp/key.cert.
h. Click OK.
i. Click Save.
5. Define the SSL port for maximo_host:
a. In the WebSphere administrative console (https://<server>:9043/ibm/
console), click Environment > Virtual Hosts.
b. Click maximo_host.
c. Under Additional Properties, click Host Aliases.
d. Verify that the port that you want to use (for example 443) is displayed in
the list. If not, add it.
6. Generate the plug-in configuration:
a. In the WebSphere administrative console (https://<server>:9043/ibm/
console), click Servers > Web Servers.
b. Mark the check box next to webserver1 and then select Generate Plug-in.
c. Mark the check box next to webserver1 again and then select Propagate
Plug-in.
7. Restart WebSphere Application Server:
a. Open the command window.
b. Run the following commands as user tioadmin:
$TIO_HOME/tools/tio.sh stop <wasadmin> <password>
$TIO_HOME/tools/tio.sh start <wasadmin> <password>
8. Restart the HTTP Server:
a. Open the command window.
b. Run the following commands as user root:
/opt/IBM/HTTPServer/bin/apachectl stop
/opt/IBM/HTTPServer/bin/apachectl start
9. Verify the configuration by going to https://<server>/SimpleSRM/.
Note: If you specified a port other than the default SSL port 443, make sure
to include that port number in the URL. If you receive the error message
Error 403: AuthorizationFailed, clear all cookies and try again.
Optional system settings after installation
Learn about the additional settings that you can optionally modify in your
environment.
Disabling Tivoli Provisioning Manager Software Distribution
Infrastructure
About this task
During Tivoli Provisioning Manager installation, Software Distribution
Infrastructure is also installed and enabled. This component is not required for
Tivoli Service Automation Manager, and you can disable it to avoid loading its
server-side code for performance reasons.
Procedure
1. Edit the TPM.javaopt file:
v Windows: C:\Program Files\IBM\tivoli\tpm\lwi\conf\overrides
v UNIX: /opt/IBM/tivoli/tpm/lwi/conf/overrides
2. At the end of the file, add the line:
-DSdiTaskInfrastructure.disabled=true
3. Save the file.
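The edit can also be scripted so that repeated runs do not duplicate the option line. The following sketch uses the UNIX path from step 1; the helper name is illustrative:

```shell
#!/bin/sh
# Sketch: append the SDI-disable option to TPM.javaopt only if it is
# not already present, so the script can safely be run repeatedly.
disable_sdi() {
  conf="$1"   # e.g. /opt/IBM/tivoli/tpm/lwi/conf/overrides/TPM.javaopt
  opt='-DSdiTaskInfrastructure.disabled=true'
  grep -qF -- "$opt" "$conf" 2>/dev/null || echo "$opt" >> "$conf"
}
```

Run `disable_sdi /opt/IBM/tivoli/tpm/lwi/conf/overrides/TPM.javaopt` and restart the server afterward.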
Additional tools and system health options
This section provides tools that you can run during and after the installation on
the management and administrative servers.
Tools for the Management Server
You can run the tools that are provided in this section on the Management Server.
Use these tools to:
v Check if Management Server files are in sync
v Synchronize the JAR files on Management Server
v Start, stop, check the status of, and restart the Tivoli Service Automation
Manager software stack, including middleware
v Start and stop IBM Tivoli Provisioning Manager TIO-Script
v Start and stop the WebSphere Maximo Server (MXServer)
v Correct Classloader Settings of Maximo Application in WebSphere.
Tools for the Administrative Server
You can run the tools that are provided in this section on the Administrative
Server.
Use these tools to:
v Build the Maximo.ear
v Deploy the Maximo.ear
v Update the database.
Tools for the Administrative and Management Server
You must run the steps that are listed here whenever you install additional
software or code after the Tivoli Service Automation Manager installation:
v Deploy the Maximo.ear
v Correct the Classloader Settings of Maximo Application in WebSphere
v Synchronize JAR files on Management Server.
v Restart the machine.
The tool also detects the version of Tivoli Service Automation Manager that is
installed on the workstation.
Uninstalling
Uninstalling Tivoli Service Automation Manager requires you to remove its
components in a sequential manner: external, core, base services, and middleware.
About this task
Important: Do not use any other uninstall methods, such as the Add/Remove
Programs panel in Windows.
Uninstalling Tivoli Service Automation Manager components
Remove all solution components that are independent of the Tivoli Process
Automation engine.
Before you begin
You must have the following operating system privileges:
v Windows: Administrator
v UNIX and Linux: root
Procedure
1. Remove the license installer on both the admin workstation and on the
management server:
v Windows: Go to %APPDATA%\IBM\tsam and run
Uninstall_IBM_Tivoli_Service_Automation_Manager_License.exe
v UNIX and Linux: Go to /opt/IBM/tsam/ and run
Uninstall_IBM_Tivoli_Service_Automation_Manager_License
2. Remove the directories where Tivoli Service Automation Manager was installed:
v Windows: %APPDATA%\IBM\tsam and %PROGRAMFILES%\IBM\tsam
v UNIX and Linux: /opt/IBM/tsam
a. Log on to the administrative workstation:
v Windows: as Administrator
v UNIX and Linux: as root
b. Remove the directory:
v Windows: %APPDATA%\IBM\tsam and %PROGRAMFILES%\IBM\tsam
v UNIX and Linux: /opt/IBM/tsam
c. Log on to the management server as root.
d. Remove the directories /opt/IBM/tsam and /etc/cloud.
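The removal steps on the management server can be scripted. This sketch is illustrative; run it as root, and only after you confirm that no other product uses these directories:

```shell
#!/bin/sh
# Sketch: remove leftover installation directories, skipping any that
# are already gone so the script never fails on a partial cleanup.
remove_dirs() {
  for d in "$@"; do
    if [ -d "$d" ]; then
      rm -rf "$d"
      echo "removed $d"
    fi
  done
}

# On the management server, as root:
# remove_dirs /opt/IBM/tsam /etc/cloud
```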
What to do next
Proceed to Uninstalling Tivoli Provisioning Manager core components.
Uninstalling Tivoli Provisioning Manager core components
If you no longer require it for other software solutions, uninstall Tivoli
Provisioning Manager core components and perform cleanup tasks.
Procedure
1. Follow the procedure in the Tivoli Provisioning Manager 7.2.1 knowledge center:
Installing version 7.2.1 > Uninstallation tasks > Uninstalling core components.
2. Follow the procedure in the Tivoli Provisioning Manager 7.2.1 knowledge center:
Installing version 7.2.1 > Uninstallation tasks > Removing remaining items.
What to do next
Proceed to Uninstalling base services and other runtime services.
Uninstalling base services and other runtime services
Remove base services, Tivoli Service Request Manager, Tivoli Provisioning
Manager Web components, and the Tivoli Service Automation Manager Process
Manager package.
Before you begin
Important: Do not perform this task if you plan to continue using any of the
services it removes.
You must have the following operating system privileges:
v Windows: Administrator
v UNIX and Linux: root
Procedure
1. Log on to the administrative workstation.
2. Run a command to uninstall the base services:
v Windows: <Maximo_HOME>\_uninstall\uninstall.exe
Note: The default location for Maximo_HOME is c:\ibm\SMP
v UNIX and Linux: <Maximo_HOME>/_uninstall/uninstall
3. Remove the Maximo_HOME directory.
What to do next
Proceed to Uninstalling middleware on page 109.
Uninstalling middleware
Remove remaining middleware to complete the uninstall of Tivoli Service
Automation Manager.
Before you begin
Important: Do not perform this task if any other products installed in your
infrastructure use the middleware or its parts.
You must have the following operating system privileges:
v Windows: Administrator
v UNIX and Linux: root
Procedure
1. Log on to the administrative workstation.
2. Start the Tivoli Service Automation Manager launchpad.
3. In the launchpad navigation pane, select Product Installation / Foundation
Software.
4. Click Install the Middleware to start the middleware installer.
5. Select a language for the installation and click OK.
6. From the Welcome panel, click Next.
7. Accept the license agreement and click Next.
8. From the Choose Workspace panel, specify the workspace directory containing
the currently deployed plan, and then click Next. The default location for the
workspace is the last workspace location specified. By default:
v Windows: c:\ibm\tivoli\mwi\workspaces
v UNIX and Linux: /ibm/tivoli/mwi/workspace
9. From the Select Operation panel, select Undeploy the plan, and then click
Next.
10. From the Undeployment Preview panel, click Next to undeploy the plan.
11. From the Successful Undeployment panel, click Cancel to exit the middleware
installer.
12. Reboot the system.
Chapter 3. Configuring Tivoli Service Automation Manager
Depending on the features that you plan to
use, some configuration is required after the
product is installed.
Overview of the configuration process
After successful installation of Tivoli Service Automation Manager, you must
configure it to be able to use it for server provisioning or for storage.
Configuration of Tivoli Service Automation Manager involves four main tasks. For
each of these tasks, there is a Tivoli Service Automation Manager application that
helps you to perform the configuration process.
Note: You can access all of these applications by logging on to the administrative
user interface and clicking Go To > Service Automation > Configuration.
It is necessary to define and configure cloud server pools to provision servers on
VMware, PowerVM, KVM, VMControl, and z/VM back ends.
Planning your configuration
In the following sections you can find information about the supported hypervisor
and network configurations.
Figure 1. Tivoli Service Automation Manager configuration overview
Planning the hypervisor configuration
This section describes supported configuration scenarios and the key environment
configuration aspects that need to be considered when configuring Tivoli Service
Automation Manager.
Hardware configurations
Network configurations
v Multi NIC support
The multi NIC mode assumes that multiple networks can be
connected to the provisioned virtual machine.
One network can be marked as the management network.
One network can be marked as the source of host name
resolution for the virtual machine.
The resource pool defines the number of network interfaces
that are connected to the provisioned virtual machine.
For each network defined in the resource pool there is a pool
of subnets.
Each subnet defines the Layer 3 network configuration
including the VLAN ID. In addition it contains a link to a
virtual switch template which defines the Layer 2
configuration of the network on the hypervisor layer.
The virtual switch template is a hypervisor platform specific
configuration of the Layer 2 network configuration and
contains all the required parameters to define the connection
between the virtual NIC on the virtual machine to the
physical NIC on the hypervisor platform.
There is one set of DNS configuration which is defined on the
resource pool level.
The multi NIC networking model supports a set of end-user
extensibility points that allow you to customize certain
behavior of the out-of-the-box capabilities.
Note: The subnetwork configuration and definition in Tivoli
Service Automation Manager is common to all hypervisors. All
subsequent hypervisor configurations must be configured for the
same subnetwork.
Storage configurations
v Local storage dedicated to a blade server or LPAR hardware and
shared among the virtual machines on the blade server.
v Globally shared storage through a Storage Area Network (SAN).
v Configuration of storage must be done for the virtual machine
disks.
v Configuration of storage must be done for hosting the images.
v Configuration of storage must be done for hosting backup
images.
v Other general considerations:
Virtual machine images might contain only one disk or
partition. The single partition and the file system it contains,
are resized (made larger) to the size requested for the virtual
machines during the provisioning process.
Chapter 3. Configuring 113
For Linux-based guest operating systems, a swap partition is
automatically created and configured for the size requested.
Software Configurations
Guest operating systems
v Windows
v AIX
v Linux for System x
v Linux for system z
Operating system image template requirements
v The OS images need to be created and configured according to
specific requirements.
Supported hypervisors
v VMware
v IBM Systems Director VMControl
v PowerVM
v Kernel-based Virtual Machine (KVM)
v Xen
v z/VM
v Number of hypervisors:
Adding additional but similar hypervisors to be managed by
Tivoli Service Automation Manager
Adding additional but heterogeneous hypervisors to be
managed by Tivoli Service Automation Manager.
v Number of resource pools per hypervisor.
Note: This documentation describes a basic first time configuration
for each supported hypervisor. Currently, it does not cover
configuring additional hypervisors of the same or different types.
Other restrictions and limitations
v Data stores for provisioning and data stores for save/restore
images should not be the same
v The host platform should be used as a dedicated platform for
Tivoli Service Automation Manager and not for out-of-band
operations
New capabilities of VMware vSphere 5
VMware vSphere 5 includes new capabilities, among them support for virtual
machines with up to 32 virtual processors to run larger CPU-intensive workloads
and Storage VMotion. Storage VMotion provides an interface for live migration to
run virtual machine disk files from one storage location to another, with no service
disruption.
Support for virtual machine capability to allow up to 32 virtual
processors
Tivoli Service Automation Manager version 7.2.4 supports virtual machines with
up to 32 virtual processors for homogeneous VMware vSphere 5 environments.
The Virtual Center and each of the ESX servers of the VMware cluster that are
used for the resource pool must be version 5 or later.
When you run the Virtual Center discovery for VMware Cloud Server pools, as
described in section Manually configuring cloud server pools for VMware on
page 174, Tivoli Service Automation Manager version 7.2.4 discovers the version
of the configured VMware vSphere 5 environment and stores the following
properties on the Spare Pool (Resource Pool) and Virtual Center DCM objects:
v PMRDP.Hypervisor.Manager.Version - the VMware Virtual Center version, for
example 5.0.0
v PMRDP.Hypervisor.MinVersion - the minimum ESX server version in the VMware
cluster, for example 5.0.0
v PMRDP.Hypervisor.MinPCPU - the minimum number of physical processors in the
VMware cluster, defined by the ESX server in the cluster that has the smallest
number of physical processors available
When you submit requests in the self-service user interface to provision virtual
servers, these properties are used to determine the number of virtual processors
resources available in the Server Details tab.
Note: The maximum number of resources that is available for provisioning is also
limited by the pool limits that are defined for the cloud server pool. You can check
the limits for your pool in the Cloud Server Pool application. Go to Cloud Server
Pool Administration > Cloud Server Pool Details. In the Resource Configuration
section, click the Provisioning Parameter Settings tab. For support of up to 32
virtual processors, adapt the values for Maximum Number of Virtual CPUs and
Maximum Tenths of Physical CPU accordingly.
Support for Storage VMotion
When you run the Storage VMotion manually by using the VMware vSphere user
interface, it is supported by Tivoli Service Automation Manager. After a virtual
machine disk file is migrated from one storage location to another, you must rerun
the Virtual Center discovery as described in section Manually configuring cloud
server pools for VMware on page 174 or run the server discovery for the
migrated server. To start the server discovery,
v select the corresponding pool from the Cloud Server Pool application
v select the Virtual Center Configuration tab in the Resource Configuration
section
v select Detailed Menu - Select Value to set the correct Resource Host in the
Discover and Synchronize Managed Server section
v click the Save Cloud Server Pool button from the toolbar
v click the Server Discovery button to start the discovery.
During discovery, the DCM data is adapted in accordance with the migration on
the VMware vSphere back end, so that Tivoli Service Automation Manager can still
manage the migrated virtual machine.
Restriction: When you run the Storage VMotion from the VMware vSphere user
interface, you must select the destination storage for the virtual machine disk files.
You can select the hard disk 1 virtual machine disk file for the root disk, and
further disk files if the virtual machine contains multiple disks. When you select the
destination storage, the following restrictions apply:
1. For the hard disk 1 virtual machine disk file, you can select only a data store that is
defined in the list of associated data stores configured in the Cloud Server Pool
Administration application of Tivoli Service Automation Manager.
2. For additional hard disk virtual machine disk files, you can select only a data
store that is defined in the list of associated data stores configured in the Cloud
Storage Pool Administration application of Tivoli Service Automation Manager.
Support for VMFS data stores larger than 2TB
Tivoli Service Automation Manager version 7.2.4 supports data stores with up to
64 TB disk size for VMFS-5 data stores. VMFS-5 is a new version of vSphere VMFS
that is introduced by VMware vSphere 5.0.
Remember: The maximum size of a VMDK on VMFS-5 is still 2 TB.
Planning the network configuration
Before you start configuring your network, refer to this section for information
about supported configurations.
Network templates
A network template defines the network resources that are available to a customer.
A network template is an XML file that is built and verified against a schema
provided by Tivoli Service Automation Manager
(PMZHB_NetworkConfiguration.xsd). The XSD files are a part of the Tivoli Service
Automation Manager distribution and are located in install/files/DCM/
NetworkConfigurations. In this folder, you can also find some sample XML
network templates that can be used for single NIC and dual NIC configurations.
The resources that can be assigned to customers are grouped into network
segments.
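Before a custom network template is used, you can validate it against the shipped schema, for example with xmllint from libxml2. This is a sketch; the file names are examples, and xmllint may need to be installed first:

```shell
#!/bin/sh
# Sketch: validate a network template XML file against the schema that
# ships in install/files/DCM/NetworkConfigurations.
validate_template() {
  schema="$1"     # e.g. PMZHB_NetworkConfiguration.xsd
  template="$2"   # your network template XML file
  xmllint --noout --schema "$schema" "$template"
}
```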
Network segments
Network segments are groups of network interfaces that are able to communicate
with one another. These network interfaces are configured based on the subnet and
DNS definitions of the network segment.
Network segments have a type and a usage. The network segment type is a value
predefined by Tivoli Service Automation Manager that cannot be changed by the
user. The following values are available:
v Management - This value marks this network segment as the management
network for the provisioned virtual machine.
v Customer - This value marks this network segment as the customer network for
the provisioned virtual machine.
v Storage - This value marks this network segment as the storage network for the
provisioned virtual machine.
v Backup-Restore - This value marks this network segment as the Backup-Restore
network for the provisioned virtual machine.
Out of these four network segment types, the management network segment has a
special role as it is used by Tivoli Service Automation Manager to provision and
manage the virtual machine. The other values help to better distinguish the role
that the network segments play for the virtual machines.
Network segment usage
In addition to these network segment type values that are predefined by Tivoli
Service Automation Manager, there is one value that is fully definable by the
customer. The value is called network segment usage. Users of Tivoli Service
Automation Manager can use this value to more specifically define the purpose of
the network segment. For example, the value can be used to:
v restrict a network segment to a specific resource pool
v restrict a network segment to a specific deployable image
The network segment usage values are stored in the administrative domain
PMZHBNETSEGUSAGE. See this topic for more information about network segment
usage values and how to set them.
Customer-specific network configuration
Tivoli Service Automation Manager offers an option to set a different network
configuration for each customer in the system.
When you create a customer, you must select a single network template. During
the process of creating and configuring a customer, the network template is used to
create the active network configuration of a customer. The following scenarios are
possible:
v A set of customers share a single network configuration.
v Each customer has a different network configuration.
The network configuration provided by default with Tivoli Service Automation
Manager has the following capabilities:
v Each network interface gets an IP address from a pool of subnets defined in its
associated network segment. Multiple subnet definitions in a network segment
are treated as a large IP address pool by Tivoli Service Automation Manager.
Important: Tivoli Service Automation Manager does not support subnetworks
with overlapping IP address ranges, even if the subnetwork definitions are
separated by blocked IP address ranges. With overlapping ranges, the same IP
address can be assigned to more than one virtual machine. As a result, an
already provisioned virtual machine can be destroyed during the provisioning of
another virtual machine with the same IP address.
v Exactly one network interface must be configured to be the management
network interface.
v Exactly one network interface must have the host name resolve attribute that is
used to derive the host name for the virtual machine. The DNS configuration of
the selected segment is used to customize the DNS setup of the virtual machine.
The network interface properties come from the deployable image and are entered
during the Register Image offering.
The network segments with the DNS and subnet configuration come from the
customer to which the logged-on user belongs.
Network annotation of deployable images
You can annotate an image with network requirements that it must meet to be
deployed.
To add network requirements to the deployable images, use the Register Image
offering that is available in the self-service user interface. You can use this offering
to define the number of network interfaces the image will get during deployment.
You can specify the following attributes for each network interface:
v Type of the network interface. The same values as for network segment types
can be used here.
v Usage of the network interface. The same values as for network segment usage
can be used here.
v The hostname resolve attribute of the network interface. The attribute defines if
it is used to derive the virtual machine host name from the IP address that the
interface gets during provisioning of the virtual machine.
These attributes are used during the Create Project and Add Server offerings to
find the suitable network segment to which the virtual machine is able to connect.
The following rules apply when finding the suitable network segments:
v The network segments come from the network configuration of the customer to
which the logged-on user belongs (this user triggers the provisioning request).
v Only the network segments that match the network interface type are displayed
and can be selected.
v If a network interface has usage values defined, only these network segments
that have at least one of the image usage values defined are shown. For
example, if an image has the usage definitions A and B defined, and there is a
network segment with usage A and a network segment with usage C, only the
network with usage value A is displayed and can be selected.
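The selection rule above is effectively a set intersection: a network segment is offered only if it shares at least one usage value with the image. The following sketch illustrates the rule; the function name is illustrative, not product code:

```shell
#!/bin/sh
# Sketch: succeed (exit 0) if the image usage list and the segment usage
# list share at least one value - the condition under which the segment
# is displayed for selection.
usages_match() {
  image_usages="$1"     # space-separated, e.g. "A B"
  segment_usages="$2"   # space-separated, e.g. "A C"
  for i in $image_usages; do
    for s in $segment_usages; do
      [ "$i" = "$s" ] && return 0
    done
  done
  return 1
}
```

With the example from the text, `usages_match "A B" "A C"` succeeds because usage A is shared, while `usages_match "A B" "C"` fails, so that segment would not be offered.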
The self-service user interface offers a dropdown box for each network interface
present in the deployable image. In each of these dropdown boxes, the network
segments are displayed in which this image can be deployed. If one of the boxes is
empty, the image cannot be deployed. In such a case, either the image
registration or the network configuration of the customer is incompatible. The
evaluation is performed when the deployable image is selected during the Create
Project or Add Server offerings in the self-service user interface wizard.
Network configuration
This topic lists issues you need to consider when planning the deployment and
configuration of your hypervisor environment.
Network configuration strategies:
The multi-NIC network support is the only available networking mode, and it
includes the previously supported network configuration strategies.
Note: In earlier versions, Tivoli Service Automation Manager supported
three key network configuration strategies:
v A single management subnetwork hosting the Tivoli Service Automation
Manager components, the hypervisor components, and the provisioned virtual
machines.
118 Tivoli Service Automation Manager V7.2.4.4 Installation and Administration Guide
v A separate management subnetwork hosting the Tivoli Service Automation
Manager components and the hypervisor components and a separate
subnetwork hosting the provisioned virtual machines, referred to as the
customer subnetwork.
v Multi-NIC network support, where it is possible to have a set of network
interfaces on the provisioned virtual machine that can be individually
configured. Additionally, support for network isolation via VLAN IDs is
available.
Currently, the only supported networking mode is the multi-NIC networking
support. The first two strategies (single management subnetwork and separate
management subnetwork) are no longer supported.
The multi-NIC network support is defined and selected in Tivoli Service
Automation Manager by setting the global provisioning property
PMRDP.Net.MultiNicSupport to true in the Data Center Model (DCM).
Important: This strategy is automatically enabled by the Cloud Server Pool
Administration application. Make sure that none of the following global properties
are defined in the global provisioning properties:
v Cloud.VRF_MODE
v Cloud.ENABLE_VLAN_SUPPORT
v Cloud.DUAL_NIC_MODE
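A minimal sketch of such a check, assuming the global provisioning properties are available as a plain dictionary (the `check_globals` helper is hypothetical):

```python
# Illustrative check: fail fast if any of the deprecated global provisioning
# properties listed above is still defined. The dictionary input stands in
# for the DCM global provisioning properties; this is not a product API.
DEPRECATED = ("Cloud.VRF_MODE", "Cloud.ENABLE_VLAN_SUPPORT", "Cloud.DUAL_NIC_MODE")

def check_globals(global_properties):
    conflicts = [p for p in DEPRECATED if p in global_properties]
    if conflicts:
        raise ValueError("remove deprecated properties: " + ", ".join(conflicts))
    return True
```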
Note: For information about multi-NIC networking model on System p, see The
multi-NIC networking model on System p on page 124.
Hostname naming convention
The default naming convention for host names is a prefix (maximum three
characters) followed by the IP address octets, each left-padded with zeros to three
digits. For example, IP address "1.2.3.4" with prefix "VM" yields VM001002003004.
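A minimal sketch of this convention, assuming a hypothetical `make_hostname` helper:

```python
# Illustrative sketch of the default naming convention: a prefix of at most
# three characters followed by the IP address octets, each left-padded with
# zeros to three digits. make_hostname is not a product API.
def make_hostname(prefix, ip_address):
    octets = ip_address.split(".")
    return prefix[:3] + "".join(o.zfill(3) for o in octets)
```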
When using the multi-NIC networking support, the PMRDP.Net.HostnamePrefix
property of the subnetwork defines the host name prefix that is used to generate
the virtual machine host name if the reverse DNS lookup on the Tivoli Service
Automation Manager server is not successful. The check box Use
this interface for host name resolution in the Register Image offering defines
which interface is used to resolve the hostname.
DNS configuration of the provisioned image
Follow these steps to configure DNS in Tivoli Service Automation Manager:
1. Define the network interface that is used for host name resolution by
selecting the hostnameresolve flag when registering the image.
2. Select a network segment for each network interface of the image. The
segments contain the configuration for a network interface including DNS.
3. During provisioning, the DNS configuration of the segment chosen for the
network interface which has the hostnameresolve flag set is used.
Note: If you require a DNS setup for your provisioned virtual machine, ensure
that each network segment that is used for a network interface with the
hostnameresolve flag set has a DNS configuration. This is required for the
provisioned virtual machine to have a DNS configuration.
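The selection logic described in these steps can be sketched as follows; the `dns_config_for_vm` helper and the dictionary layout are assumptions for illustration, not product APIs:

```python
# Illustrative sketch: the DNS configuration applied to the virtual machine
# comes from the segment chosen for the interface whose hostnameresolve
# flag is set. All names here are illustrative.
def dns_config_for_vm(interfaces):
    """interfaces: list of dicts with 'hostnameresolve' and 'segment' keys;
    each segment dict may carry a 'dns' entry."""
    for nic in interfaces:
        if nic["hostnameresolve"]:
            return nic["segment"].get("dns")  # None means no DNS configured
    return None
```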
Management network structure overview:
The management network coordinates the workflow between the Tivoli Service
Automation Manager Server and external network environments.
[Figure: Tivoli Service Automation Manager network overview. The Tivoli Service Automation Manager network, containing the Tivoli Service Automation Manager server, boot servers, file repositories, and hypervisors, connects to the management VLANs of Resource Pool 1 and Resource Pool 2 and, through them, to the virtual machines provisioned for each resource pool.]
The diagram shows an overview of the Tivoli Service Automation Manager
network. It is required that the external network environment ensures that the
management network can communicate with all management VLANs for the
various Resource Pools. Tivoli Service Automation Manager does not ensure or
provision the external network infrastructure. Networking support covers the
network management on the supported hypervisors for configuring the virtual
switches.
One of the network interfaces must be defined as the management network. When
you define the management network, you must obey the following rules:
v A virtual machine can be assigned to only one management network.
v If the management network requires a gateway or a static route to communicate
with the Tivoli Service Automation Manager node, the management network
must be the first network interface of a virtual machine.
v For VMControl and System p the management network must be the first
network and the host name resolve flag must be set on the management
interface.
The external network setup must ensure that it is possible to communicate
between the following elements through the interface:
v Tivoli Service Automation Manager Management Node
v Boot Servers
v Hypervisors
Network communication is initiated from the Tivoli Service Automation Manager
components.
When planning the structure of your management network, review the following
overviews, which correspond to the groups of components shown in the
Tivoli Service Automation Manager network diagram:
v Customer network planning overview
v Hypervisor and virtual machine overview
Servers permitted to reside on the Tivoli Service Automation Manager
management network
The following management servers require network contact with the endpoint
virtual machine and with Tivoli Service Automation Manager. It is required that
the external network configuration ensures that these servers can communicate
with the management networks defined in the resource pools and the virtual
machines which are provisioned through their management network interface.
Tivoli Service Automation Manager does not provision or manage this connectivity.
The external network setup must ensure this connectivity. The following servers
are permitted on the management network:
v Tivoli Provisioning Manager server
v NIM server
v IBM Tivoli Monitoring server
v Hardware Management Console is not required, but recommended
v Any file repository (for example, NFS server) containing software to be installed
on virtual machines, for example, the IBM Tivoli Monitoring agent. This is not
required for file repositories that store only virtual machine images.
v Virtual I/O server (VIO)
v PowerVM CEC
Customer network planning overview:
Private or shared customer networks can be related to each of the resource pools
through provisioned virtual machines.
[Figure: Customer network overview. The Tivoli Service Automation Manager network connects to the management VLANs of Resource Pool 1 and Resource Pool 2. The provisioned virtual machines for each resource pool connect to private customer networks (Private Customer network 1 and 2) and to Shared Customer network 1.]
The diagram shows the lower part of the management network, including the
customer network structures. For a comprehensive view of the network structure,
see Management network structure overview on page 120.
Each resource pool can have multiple customer networks. They are defined in the
same way as the management network: PMRDP.Net.SubnetPool_n =
subnet1,subnet2,... where n=0,1,2,... Each such property defines a pool of
subnetworks from which the IP configuration is derived and for which a network
interface is created on the provisioned virtual machine.
The DNS properties are specified on the resource pool level.
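Parsing properties of this form can be sketched as follows; the `subnet_pools` helper and the dictionary input are assumptions for illustration:

```python
# Illustrative parser for properties of the form
# PMRDP.Net.SubnetPool_n = subnet1,subnet2,...  (n = 0, 1, 2, ...).
# The property dictionary and helper name are assumptions, not product APIs.
def subnet_pools(properties):
    """Collect the subnet pools, keyed by their index n."""
    pools = {}
    prefix = "PMRDP.Net.SubnetPool_"
    for key, value in properties.items():
        if key.startswith(prefix):
            n = int(key[len(prefix):])
            pools[n] = [s.strip() for s in value.split(",") if s.strip()]
    return pools
```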
Hypervisor and virtual machine overview:
You can configure the hypervisor and the virtual machines by defining a number
of DCM objects.
[Figure: Hypervisor and virtual machine configuration structure. The Resource Pool/Subnet DCM object provides the predefined network configuration: the network interface configuration, the number of interfaces, and the subnets. The Virtual Switch Template DCM object provides the network card and VLAN configuration on the hypervisor level. The network interfaces (NI 1, NI 2) of a virtual machine connect through port groups (PG) on the virtual switches of the hypervisor and the Ethernet adapters (E 1, E 2) of the hardware box to the VLANs of the external Ethernet switch.]
The diagram shows the hypervisor and virtual machine configuration structure.
The hypervisor is directly related to the management network structure (see
Management network structure overview on page 120).
The following DCM objects are used to configure the network support on the
hypervisor and virtual machine level:
v Spare Pool (Resource Pool) - defines the number of network interfaces for the
created virtual machine, the DNS configuration, and the subnets
v Subnetwork (Subnet) - defines the network interface configuration (IP address,
netmask, gateway, VLAN ID etc.)
v Switch (Virtual Switch Template) - defines the hypervisor virtual switch
configuration used for the virtual network adapter connected to the virtual
machine
The external network hardware must be pre-configured by the network
administrator before the first provisioning run. For the DCM objects definition
types, see Definitions for DCM objects on page 141.
Management of the virtual switches for the supported hypervisors is covered by
Tivoli Service Automation Manager. This support varies depending on the
hypervisor type:
v VMware - virtual switches must exist and only the port groups are managed
v KVM and Xen - network bridges on the hypervisor are configured
v z/VM - virtual switches must exist and only the connection between the
Ethernet adapters of the virtual machine and the virtual switches is configured
v PowerVM - depending on the network configuration at the user's side, Shared
Ethernet Adapters (SEAs) are created and configured by Tivoli Service
Automation Manager or must be predefined.
The multi-NIC networking model on System p
The multi-NIC networking model is also supported on System p.
The support for the new networking model is similar to the support for other
hypervisors:
v Configuration per resource pool
v Network isolation on the hypervisor level (VIO, VIO set)
v Shared Ethernet adapter failover support in the dual VIO scenario
v EtherChannel support for shared Ethernet adapters
v Management of shared Ethernet adapters on VIO sets
v The configuration of VIOs and shared Ethernet adapters with the use of virtual
switch template and properties on the hosting platform (CEC)
Starting with version 7.2.1.1, Tivoli Service Automation Manager also supports the
following solutions:
v Dual VIO scenarios
v Multiple VIO sets per CEC
v Separate VIOs for networking
v Multiple network interfaces in LPAR
Assumptions
v VIOs and CECs are preinstalled and configured so that they can be accessed
from the HMC and via network.
v VIOs on the CECs are discovered by Tivoli Provisioning Manager and exist in
the data center model.
v VIOs in a VIO set share the same configuration settings. The main focus is on
the mapping of the Ethernet adapters to the external network.
v All Ethernet adapters that transport tagged traffic must be connected to the
external switch through trunk ports.
v If the EtherChannel (link aggregation) is used, the external switches and the
virtual switch template must be configured accordingly.
v The Use this interface for host name resolution flag is only supported on the
management network. Make sure that during the registering of the image, the
Use this interface for host name resolution flag is selected only on the
management network interface.
Network topology
[Figure: Network topology. Two pSeries CECs host VIO Set 1 (VIO1 and VIO2) and VIO Set 2. On each VIO, a Shared Ethernet Adapter (SEA) is built on a physical adapter (ent0) and virtual adapters (ent1, ent3, ent4), with a control channel between the paired VIOs. LPAR1 and LPAR2 connect through virtual adapters (ent0) to VLAN 1 and VLAN 2, and the SEAs connect through trunk switch ports (SP) to External Ethernet Switch 1.]
Legend:
v SP - trunk switch port
VIO support:
Learn more about the VIO, dual VIO, and multiple VIO set support available within
the management network.
[Figure: VIO network configuration. The network interfaces (NI 1, NI 2) of an LPAR map to Ethernet adapters (E 1, E 2) that connect through the VIO sets on the CEC. VIO sets are defined globally (VIOS.SET) or per network (VIOS.SET.NET). On each VIO, a Shared Ethernet Adapter (SEA), optionally built on an EtherChannel, combines a virtual Ethernet adapter for untagged traffic (PU), virtual Ethernet adapters with VLAN IDs (P), and the Ethernet adapters in the hardware box (A).]
Legend:
v NI - network interface inside the operating system
v E - Ethernet adapter on the virtual machine
v PU - virtual Ethernet adapter for untagged traffic
v P - virtual Ethernet adapter with VLAN ID
v A - Ethernet adapter in HW BOX
VIO support
The following assumptions are made for the shared Ethernet adapter management:
v A shared Ethernet adapter can be created:
– on top of a single Ethernet adapter
– on top of a single EtherChannel that groups multiple adapters together
v When a shared Ethernet adapter is created, it consists of the following elements:
– a single virtual Ethernet adapter used for untagged traffic
(PMRDP.Net.UntaggedTrafficPVID) (PU)
– a single virtual Ethernet adapter used for the control session (Ctrl)
(PMRDP.Net.SEACtrlSessionVIandId)
v Additional VLANs are handled through additional Ethernet adapters connected
to the shared Ethernet adapter (P) (subnet parameters
PMRDP.Net.VLAN_PVID, PMRDP.Net.VLANID). The adapter is created with
the use of the PV ID and VLAN ID. One adapter per VLAN is created. As a
result, a shared Ethernet adapter can support 15 VLANs maximum. Additionally
a VLAN device is created for each additional VLAN.
v An EtherChannel is created if more than one Ethernet adapter is specified in the
virtual machine switch template. The PMRDP.Net.LinkTypeIEEE8023AD
property specifies the type of EtherChannel that is created.
v If a shared Ethernet adapter and the adapter for the VLAN already exist, no
action is taken. If a shared Ethernet adapter with a PV ID as specified by the
PMRDP.Net.UntaggedTrafficPVID property and with the physical adapters as
specified in the virtual switch template already exists on the VIO, it is used
instead of creating a new shared Ethernet adapter.
v The PMRDP.Net.VIOPairAlias property specifies the VIO pair that the shared
Ethernet adapter is created on.
[Figure: A Shared Ethernet Adapter (SEA) built on an EtherChannel that groups physical Ethernet adapters (A), with a virtual Ethernet adapter for untagged traffic (PU), a virtual Ethernet adapter with a VLAN ID (P), and a control session adapter (Ctrl).]
Dual VIO and multiple VIO set support
v Dual VIO is supported by defining a VIO set. To define the VIO set, use the
properties on the CEC. There are two ways to define them:
1. VIOS.SET defines a global VIO set for storage and networking
2. VIOS.SET.NET defines a VIO set for networking
Each property is an array of attributes in the following form:
Symbolic Name assigns a symbolic name to a VIO set
VIO DCM Name of the first VIO in the set
VIO DCM Name of the second VIO in the set. It is optional and, if not
defined, results in a single VIO setup
Example: TEST1;v01;v02
v VIOS.SET takes precedence over VIOS.SET.NET
v There can be multiple definitions per CEC; the property is an array.
v VIO sets are referenced from the virtual switch template according to their
symbolic name.
v If a VIO set is defined, it must also exist on all the CECs in the resource pool.
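Parsing one such entry can be sketched as follows, with `parse_vio_set` as a hypothetical helper:

```python
# Illustrative parser for one VIOS.SET / VIOS.SET.NET entry of the form
# "SymbolicName;FirstVIO[;SecondVIO]" (e.g. "TEST1;v01;v02"). The second
# VIO is optional; its absence means a single-VIO setup. Not a product API.
def parse_vio_set(entry):
    parts = entry.split(";")
    if len(parts) not in (2, 3):
        raise ValueError("expected 'name;vio1[;vio2]', got: " + entry)
    name, vio1 = parts[0], parts[1]
    vio2 = parts[2] if len(parts) == 3 else None
    return {"name": name, "vios": [v for v in (vio1, vio2) if v]}
```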
Important: Dual VIO configuration is only supported for HMC/CEC-based Power
hypervisors. IVM/PowerBlade hypervisors do not support dual VIO configuration
in general.
Migration
The following migrations are not supported:
v from a single VIO to a dual VIO scenario
v from a single VIO to a multiple VIO Set scenario
Minimal VIO setup for System p:
A number of minimal VIO setup scenarios are possible within the management
network.
General considerations
Keep in mind the following information as it is valid for all scenarios:
v The VLAN ID of a virtual network adapter in the hardware management
console (HMC) must match the PMRDP.Net.VLAN_PVID value in the subnet
definition.
v Additional VLANs of a virtual network adapter on the HMC must match the
PMRDP.Net.VLANID value in the subnet definition.
v Dual VIOS configuration is only supported for HMC/CEC-based POWER
hypervisors. IVM/PBlade hypervisors do not support this kind of configuration
in general.
Tivoli Service Automation Manager automates the VIO network configuration
based on the subnet and virtual switch template customization. The workflow
accesses the VIOs either directly through the network or using the HMC.
Therefore, network accessibility to the VIO and the HMC is required from the
Tivoli Service Automation Manager server node. Start the VIO network stack
before running the Tivoli Service Automation Manager discovery to make the VIO
accessible from Tivoli Provisioning Manager.
The network automation creates:
v A separate virtual network adapter for the untagged traffic for each SEA.
v A separate virtual network adapter for each VLAN that is routed using a SEA.
A SEA can have up to 16 virtual network adapters assigned to it. Therefore, a
maximum of 15 VLANs can be served by a SEA managed by Tivoli Service
Automation Manager.
The PMRDP.Net.VLAN_PVID property of a subnet must not be equal to the
PMRDP.Net.UntaggedTrafficPVID value of the referenced System p virtual switch
template.
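These two constraints can be sketched as a validation check; `validate_sea` and its arguments are illustrative, not part of the product:

```python
# Illustrative validation of two constraints from this section: a SEA
# managed by Tivoli Service Automation Manager can serve at most 15 VLANs
# (up to 16 virtual adapters, one reserved for untagged traffic), and a
# subnet's PMRDP.Net.VLAN_PVID must differ from the virtual switch
# template's PMRDP.Net.UntaggedTrafficPVID.
MAX_VLANS_PER_SEA = 15

def validate_sea(vlan_pvids, untagged_pvid):
    if len(vlan_pvids) > MAX_VLANS_PER_SEA:
        raise ValueError("a SEA can serve at most 15 VLANs")
    if untagged_pvid in vlan_pvids:
        raise ValueError("VLAN_PVID must differ from UntaggedTrafficPVID")
    return True
```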
Preparing a minimal network setup of the VIO is necessary for Tivoli Service
Automation Manager to perform the automation. All other network configurations
are done through these automation workflows, resource pool definitions, subnet
definitions, and virtual switch definitions.
The SEAs and adapters are created during the provisioning of the virtual machines
on the affected VIOs.
Minimal VIO setup scenarios
The following scenarios are possible while configuring the VIO for System p on the
management network:
v Minimal VIO setup with a single adapter
v Minimal VIO setup with EtherChannels
v Minimal VIO setup with a single adapter and with a separate System p
management network
Minimal VIO setup with a single adapter:
You can choose one of two scenarios of minimal VIO setup with a single adapter
for your management network.
Minimal VIO setup with a single adapter and with VLAN ID on the
management network
The scenario assumes that the Management network uses tagged network
traffic and has a separate VLAN ID.
Note: The customer network is optional but shown for the purpose of the
scenario.
The minimum setup has the following components:
v A management network that is accessible from the Tivoli Service
Automation Manager management and NIM server.
v The eth0 and the associated Shared Ethernet Adapter (SEA 1). It is
possible to have an EtherChannel with more than one adapter if it is
necessary.
v The control channel (CTRL) for SEA 1. It is required for a dual VIO
setup.
v The adapter for untagged traffic VLAN (PU).
v The Virtual Ethernet adapter (VE) on VIO1 and VIO2 that are connected
to the management VLAN. The adapters are using the VLAN ID of the
management network adapter (PA) and therefore are connected to the
management network by the hypervisor. Make sure that the network
stack is started on this network adapter so that the VIO is reachable.
v The PT adapter that is created using the PVID (PMRDP.Net.VLAN_PVID)
and VLAN ID (PMRDP.Net.VLANID) of the management subnet network.
v The network interfaces of each VIO ( NI VIO1, NI VIO2) that provide
each VIO with network connectivity. These are the IP interfaces that are
displayed on the DCM representation of the VIO as the management
interfaces.
Minimal VIO setup with a single adapter and with no VLAN ID on the
management network
[Figure: Minimal VIO setup with a single adapter on a System p CEC. On VIO1, the physical adapter A1 (eth0) carries Shared Ethernet Adapter (SEA) 1 with the untagged traffic adapter (PU) and the control channel (Ctrl), and adapter A2 (eth1) serves the customer network. Virtual Ethernet adapters (VE) and network interfaces (NI VIO1, NI VIO2) connect VIO1 and VIO2 to the management network.]
The scenario assumes that the Management network uses untagged
network traffic. The tagging can be done by, for example, a switch.
Note: The customer network is optional but shown for the purpose of the
scenario.
The minimum setup has the following components:
v A management network that is accessible from the Tivoli Service
Automation Manager management and the NIM Server.
v The eth0 and the associated Shared Ethernet Adapter (SEA 1). It is also
possible to have an EtherChannel with more than one adapter if it is
necessary.
v The control channel (CTRL) for SEA 1. It is required for a dual VIO
setup.
v The adapter for untagged traffic VLAN (PU).
v The Virtual Ethernet adapter (VE) on VIO1 and VIO2 connected to the
management VLAN. The adapters use the untagged network traffic
PVID from the PU adapter and therefore are connected to the
management network by the hypervisor.
v The network interfaces of each VIO (NI VIO1, NI VIO2) that provide
each VIO with network connectivity. These are the IP interfaces which
are displayed on the DCM representation of the VIO as the management
interfaces. The Tivoli Service Automation Manager management network
can reach the VIOS using these IP addresses. The second VIO (VIO2)
contains only the setup for the management network interface (VE + NI
VIO2).
In a dual VIO scenario, the mapping of the network adapters (A1 and A2)
and the resulting device names (eth0 and eth1) on the VIOS must be equal.
This means that the management network must be mapped to the Ethernet
adapter (A1) as device eth0 on both VIOS in the given example. The
customer network is mapped to Ethernet adapter (A2) as device eth1 on
each VIOS.
After initial discovery, the DCM objects for the two VIOs must be updated
using the PMRDP.Net.TrunkPriority property, for example, 1 for VIO1 and
2 for VIO2. Each VIO on the Central Electronics Complex (CEC) must
have a unique value.
The CEC is also displayed as a DCM object after discovery. The following
properties must be set on this DCM object:
v VIOS.SET or VIOS.SET.NET. Use these two properties to define the VIO
sets on the CEC. In the given example it contains the following values:
VIOS.SET VIOSET1;VIO1;VIO2 where VIOSET1 is the symbolic name of the
VIOSET and VIO1 and VIO2 are the DCM names of the two discovered
VIOs.
Save the following values for later usage:
v The VLAN ID of the control session VLAN of SEA 1.
v The PVID of the PU.
v The DCM name of VIO1 and VIO2.
v The device name of the Ethernet Adapters (A) connected to both VIOS.
v The EtherChannel type (if it is used).
Minimal VIO setup with EtherChannels:
You can choose one of two scenarios of minimal VIO setup with EtherChannels for
your management network.
Minimal VIO setup with EtherChannels and with no VLAN ID on the
management network
[Figure: Minimal VIO setup with EtherChannels on a System p CEC. On VIO1, the physical adapters A1 (eth0) and A3 (eth2) are grouped into an EtherChannel that carries Shared Ethernet Adapter (SEA) 1 with the untagged traffic adapter (PU) and the control channel (Ctrl), while adapters A2 (eth1) and A4 (eth3) serve the customer network. Virtual Ethernet adapters (VE) and network interfaces (NI VIO1, NI VIO2) connect VIO1 and VIO2 to the management network.]
The scenario assumes that the Management network uses untagged
network traffic. The tagging can be done by, for example, a switch.
Note: The customer network is optional but shown for the purpose of this
scenario.
The minimum setup has the following components:
v A management network that is accessible from the Tivoli Service
Automation Manager management and NIM Server.
v The eth0 and the associated Shared Ethernet Adapter (SEA 1). It is
possible to have an EtherChannel with more than one adapter if it is
necessary.
v The control channel (CTRL) for SEA 1. It is required for a dual VIO
setup.
v The adapter for untagged traffic VLAN (PU).
v The VE ( Virtual Ethernet adapter) on VIO1 and VIO2 that are connected
to the management VLAN. The adapters are using the untagged
network traffic PVID from the PU adapter and therefore are connected
to the management network by the hypervisor.
v The network interfaces of each VIO ( NI VIO1, NI VIO2) that provide
each VIO with network connectivity. These are the IP interfaces that are
displayed on the Data Center Model (DCM) representation of the VIO as
the management interfaces. Tivoli Service Automation Manager
management network uses these IP addresses to reach the VIOs. The
second VIO (VIO2) contains only the setup of a management network
interface (VE + NI VIO2).
In a dual VIO scenario, the mapping of the network adapters (A1, A2, A3,
and A4) and the resulting device names (eth0, eth1, eth2, and eth3) on the
VIOs must be equal. This means that the management network must be
mapped to Ethernet adapter (A1 and A3) as device eth0, eth2 on both VIOs
in the given example. The customer network is mapped to Ethernet
adapter (A2, A4) as device (eth1, eth3) on each VIO.
The devices (eth0, eth2) are each grouped together using an
EtherChannel.
After initial discovery, the DCM objects for the two VIOs must be updated
using the PMRDP.Net.TrunkPriority property, for example, 1 for VIO1 and
2 for VIO2. Each VIO on the Central Electronics Complex (CEC) must have
a unique value.
CEC is also displayed as a DCM object after discovery. The following
properties must be set on this DCM object:
v VIOS.SET or VIOS.SET.NET. Use these two properties to define the VIO
sets on CEC. In the given example it contains the following values:
VIOS.SET VIOSET1;VIO1;VIO2 where VIOSET1 is the symbolic name of the
VIOSET and VIO1 and VIO2 are the DCM names of the two discovered
VIOS.
Save the following values for later usage:
v The VLAN ID of the control session VLAN of SEA 1.
v The PVID of the PU.
v The DCM name of VIO1 and VIO2.
v The device name of the Ethernet Adapters (A) connected to both VIOS.
v The EtherChannel type (if it is used).
Minimal VIO setup with EtherChannels and with VLAN ID on the management
network
[Figure: Minimal VIO setup with EtherChannels and with a VLAN ID on the management network. On a System p CEC, the physical adapters A1 (eth0) and A3 (eth2) of VIO1 are grouped into an EtherChannel that carries Shared Ethernet Adapter (SEA) 1 with the untagged traffic adapter (PU), the control channel (Ctrl), the PT adapter, and a VLAN device, while adapters A2 (eth1) and A4 (eth3) serve the customer network. Virtual Ethernet adapters (VE) and network interfaces (NI VIO1, NI VIO2) connect VIO1 and VIO2 to the management network.]
The scenario assumes that the management network uses tagged network
traffic and has a separate VLAN ID.
Note: The customer network is optional but shown for the purpose of the
scenario.
The minimum setup has the following components:
v A management network that is accessible from the Tivoli Service
Automation Manager management and NIM Server.
v The eth0 and the associated Shared Ethernet Adapter (SEA 1). It is
possible to have an EtherChannel with more than one adapter if it is
necessary.
v The control channel (CTRL) for SEA 1. It is required for a dual VIO
setup.
v The adapter for untagged traffic VLAN (PU).
v The Virtual Ethernet adapter (VE) on VIO1 and VIO2 that are connected
to the management VLAN. The adapters use the VLAN ID of the
management network adapter (PA) and therefore are connected to the
management network by the hypervisor. Make sure that the network
stack is started on this network adapter so that the VIO is reachable.
v The PT adapter that is created using the PVID (PMRDP.Net.VLAN_PVID)
and VLAN ID (PMRDP.Net.VLANID) of the management subnet network.
v The network interfaces of each VIO (NI VIO1, NI VIO2) that provide
each VIO with network connectivity. These are the IP interfaces which
are displayed on the DCM representation of the VIO as the management
interfaces.
A VLAN device must be created for the VLAN ID (PMRDP.Net.VLAN_PVID)
on the SEA.
The second VIO (VIO2) contains only the setup for the management
network interface (VE + NI VIO2).
In a dual VIO scenario, the mapping of the network adapters (A1, A2, A3,
and A4) and the resulting device names (eth0, eth1, eth2, and eth3) on the
VIOS must be equal. This means that the management network must be
mapped to the Ethernet adapter (A1 and A3) as device eth0, eth2 on both
VIOS in the given example. The customer network is mapped to the
Ethernet adapter (A2, A4) as device (eth1, eth3) on each VIO.
The devices (eth0, eth2) are each grouped together using an EtherChannel.
After initial discovery, the DCM objects for the two VIOS must be updated
using the property PMRDP.Net.TrunkPriority, for example 1 for VIO1 and 2
for VIO2. Each VIO on the Central Electronics Complex (CEC) must have a
unique value.
CEC is also displayed as a DCM object after discovery. The following
properties have to be set on this DCM Object:
v VIOS.SET or VIOS.SET.NET. Use these two properties to define the VIO
sets on the CEC. In the given example, it contains the following values:
VIOS.SET VIOSET1;VIO1;VIO2 where VIOSET1 is the symbolic name of the
VIOSET and VIO1 and VIO2 are the DCM names of the two discovered
VIOS.
Save the following values for later usage:
v The VLAN ID of the control session VLAN of SEA 1.
v The PVID of the PU.
v The DCM name of VIO1 and VIO2.
v The device name of the Ethernet Adapters (A) connected to both VIOS.
v The EtherChannel type (if it is used).
v The VLAN ID and PVID of the PT adapter.
Minimal VIO setup with a single adapter and with separate System p management
network:
Learn more about components, properties, and values involved in this setup.
[Figure: Minimal VIO setup with a single adapter and with a separate System p management network. On a System p CEC, the physical adapter A1 (eth0) of each VIO carries a Shared Ethernet Adapter (SEA 1 on VIO1, SEA 2 on VIO2) with a combined SP/PU adapter, a control channel (Ctrl), and a VLAN device. Virtual Ethernet adapters (VE) and network interfaces (NI VIO1, NI VIO2) connect the VIOs to the separate System p management network, alongside the management network and the customer network.]
The scenario assumes that the management network uses tagged network
traffic and has a separate VLAN ID.
Note: The customer network is optional but shown for the purpose of the scenario.
The minimum setup has the following components:
v A management network that is accessible from the Tivoli Service Automation
Manager management and NIM Server.
v The eth0 and the associated Shared Ethernet Adapter (SEA 1). It is possible to
have an EtherChannel with more than one adapter if it is necessary.
v The control channel (CTRL) for SEA 1. It is required for a dual VIO setup.
v The Virtual Ethernet adapter (VE) on VIO1 and VIO2 that are connected to the
management VLAN. The adapters use the VLAN ID of the management
network adapter (PA) and therefore are connected to the management network
by the hypervisor. Make sure that the network stack is started on this network
adapter so that the VIO is reachable.
v The SP/PU adapter that must be created using the untagged traffic PVID and
the VLAN ID of the System p management network as the additional VLAN ID
on the adapter.
v The network interfaces of each VIO ( NI VIO1, NI VIO2) that provide each VIO
with network connectivity. These are the IP interfaces which show up on the
DCM representation of the VIO as the management interfaces.
A virtual VLAN device must be created for the VLAN ID of the System p
management network on the SEA.
The second VIO must be set up in the same way as VIO1 for high availability.
In a dual VIO scenario, the mapping of the network adapters (A1 and A2) and the
resulting device names (eth0 and eth1) on the VIOs must be equal. This means
that the management network must be mapped to the Ethernet adapter (A1) as
device eth0 on both VIOs in the given example. The customer network is mapped
to the Ethernet adapter (A2) as device eth1 on each VIO.
Important: The System p management network is not known to Tivoli Service
Automation Manager. Therefore, the setup of this network and its interfaces,
including high availability, must be performed manually.
After the initial discovery, the DCM objects for the two VIOs must be updated
with the PMRDP.Net.TrunkPriority property, for example, 1 for VIO1 and 2
for VIO2. Each VIO on the CEC must have a unique value.
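The uniqueness rule can be checked with a short script. The following Python sketch is illustrative only; the function name and data structure are assumptions, not part of the product. It validates that every VIO on a CEC has a distinct trunk priority in the 0-15 range described later in this section (0 means no external network access, so uniqueness does not apply to it):

```python
def check_trunk_priorities(priorities_by_vio):
    """Validate PMRDP.Net.TrunkPriority values for the VIOs of one CEC.

    priorities_by_vio: dict mapping VIO name -> trunk priority (int).
    """
    seen = {}
    for vio, priority in priorities_by_vio.items():
        if not 0 <= priority <= 15:
            raise ValueError("trunk priority out of range for %s: %d" % (vio, priority))
        if priority == 0:
            continue  # is_trunk=0: no external network access, uniqueness not relevant
        if priority in seen:
            raise ValueError("duplicate trunk priority %d on %s and %s"
                             % (priority, seen[priority], vio))
        seen[priority] = vio
    return True
```

For the example above, `check_trunk_priorities({"VIO1": 1, "VIO2": 2})` passes, while giving both VIOs the same priority raises an error.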
The CEC also shows up as a DCM object after discovery. The following properties
must be set on this DCM object:
v VIOS.SET or VIOS.SET.NET. Use these two properties to define the VIO sets on
the CEC. In the given example, VIOS.SET contains the value
VIOSET1;VIO1;VIO2, where VIOSET1 is the symbolic name of the VIO set and
VIO1 and VIO2 are the DCM names of the two discovered VIOs.
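A VIO set entry can be parsed mechanically. This Python sketch is an illustration, not product code; it splits an entry such as VIOSET1;VIO1;VIO2 into its parts, assuming the semicolon-separated format shown in the example (the secondary VIO may be omitted in single-VIO setups):

```python
def parse_vio_set(entry):
    """Split one VIOS.SET entry into (alias, primary VIO, secondary VIO or None)."""
    parts = [part.strip() for part in entry.split(";")]
    if len(parts) < 2:
        raise ValueError("expected at least 'alias;primaryVIO': %r" % entry)
    alias, primary = parts[0], parts[1]
    # Single-VIO configurations omit the secondary VIO.
    secondary = parts[2] if len(parts) > 2 else None
    return alias, primary, secondary
```

For example, `parse_vio_set("VIOSET1;VIO1;VIO2")` returns `("VIOSET1", "VIO1", "VIO2")`.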
Save the following values for later use:
v the VLAN ID of the control session VLAN of SEA1
v the PVID of the PU
v the DCM names of VIO1 and VIO2
v the device names of the Ethernet adapters (A) connected to both VIOs
v the EtherChannel type (if one is used)
VIO Shared Ethernet Adapter management:
In IBM Tivoli Service Automation Manager version 7.2.3, the administrator can
choose whether to run System p VIO with Shared Ethernet Adapter (SEA)
management or without it.
VIO Shared Ethernet Adapter (SEA) management is turned on by default.
Note: Read the assumptions documented in the System p VIO management
chapter in VIO support on page 126.
If your current scenario is not compatible with these assumptions, you can turn
off VIO Shared Ethernet Adapter (SEA) management. In this case, you must
ensure that:
Chapter 3. Configuring 137
v All the VIOs in all System p resource pools are configured correctly to provide
the required network connectivity. Tivoli Service Automation Manager
configures only the provisioned LPAR, based on the parameters defined in the
chosen subnetwork. The System p virtual switch template is no longer used.
v If you define or modify a subnetwork, the changes to all affected CECs and their
VIOs must be made manually by the System p administrators. Failure to do so
causes provisioning to fail or creates LPARs that do not have network
connectivity.
v Dual VIO configuration for the network must be implemented manually by the
System p administrators.
Note: This decision is system-wide and is not made on a per-resource-pool basis.
Switching between the two modes might require major manual configuration
changes in Tivoli Service Automation Manager and on all affected System p
provisioners and LPARs. Switching might not be possible, depending on the
implemented network scenario.
Modifying Subnetwork Properties
If VIO Shared Ethernet Adapter management is turned off, the following property
must be defined in each subnetwork used for System p provisioning:
PMRDP.Net.VLAN_PVID.
To set the vNIC in the Tivoli Provisioning Manager virtual server template, use
the vio.vlan_ids property.
For supported parameter values, go to the Tivoli Provisioning Manager knowledge
center > Documentation for pSeries-Server and search for the vio.vlan_ids property.
You must make sure that the chosen value for PMRDP.Net.VLAN_PVID works with
your VIO configuration. Otherwise, provisioning errors might occur and the
resulting LPARs might not be able to communicate correctly with the external
network.
Virtual switch template properties for System p:
Lists virtual switch parameters and properties, and describes their functions.
A virtual switch template definition corresponds to a Shared Ethernet Adapter
(SEA) on a VIO server. The following list provides information about the virtual
switch template properties for System p:
v PMRDP.Net.Cloud set to true indicates that this switch is used for Tivoli Service
Automation Manager
v PMRDP.Net.SwitchType set to Virtual Switch Template specifies the usage type
v PMRDP.Net.SEACtrlSessionVlanId specifies the ID of the shared Ethernet adapter
control session VLAN
v PMRDP.Net.LinkTypeIEEE8023AD defines the link aggregation type used. The value
may be either true (IEEE) or false (Cisco)
v PMRDP.Net.UntaggedTrafficPVID specifies the VLAN ID for untagged traffic for
this shared Ethernet adapter on System p
v PMRDP.Net.VIOPairAlias is the alias name that specifies the VIO Set used on the
System p platform. This value must be resolvable on the set of System p CECs
that are defined in the resource pool.
v The connection to physical network adapters is modeled by interface card
entries:
- interface card name, for example, entx, where x starts with 0
- card type: serial
If more than one adapter is specified, an EtherChannel adapter is created by
Tivoli Service Automation Manager System p networking support to group
the adapters together. In this case, the PMRDP.Net.LinkTypeIEEE8023AD
property must be set. Depending on the PMRDP.Net.LinkTypeIEEE8023AD
property, you must configure the external switches for this link aggregation.
This configuration is an external administrative task.
Additional parameters
on the subnet for System p
The PMRDP.Net.VLAN_PVID property is specific to System p. It defines the
PVID that is used for the virtual adapter attached to the shared Ethernet
adapter if PMRDP.Net.VLANID is set. If the PMRDP.Net.VLANID property is not
specified, you can omit the PMRDP.Net.VLAN_PVID property.
on the VIO definition
The PMRDP.Net.TrunkPriority parameter specifies the trunk priority to be
used for the virtual Ethernet adapter. The possible values are:
v 0 means is_trunk=0, and there is no access to the external network
v values from 1 to 15 specify the trunk priority. The trunk priority
must be unique across the VIOS of the CEC.
on the System p CEC
v pSeries CEC settings (Provisioning Computer System)
VIOS.SET is a property array that stores the VIO pairs available for
provisioning. This array specifies VIO pairs that are shared between
storage and network. Each entry of the property array consists of
semicolon-separated values in the following form: alias name;primary
VIO;secondary VIO, for example, Hugo;V1;V2
- Alias name: the name of the VIO pair used in the virtual switch
template to select the VIO that is used for provisioning. Do not
include semicolons in the alias name because the semicolon is the delimiter.
- First VIO: the name of the first VIO for networking
- Second VIO: the name of the secondary VIO for networking. You
can omit it if only a single VIO is required.
VIOS.SET.NET is a property array that stores the VIO pairs available
for provisioning. This array specifies VIO pairs for networking only.
Each entry of the property array consists of semicolon-separated values
in the following form: alias name;primary VIO;secondary VIO, for
example, Otto;V3;V4
- Alias name: the name of the VIO pair used in the virtual switch
template to select the VIO that is used for provisioning.
- First VIO: the name of the first VIO for networking
- Second VIO: the name of the secondary VIO for networking. You
can omit it if only a single VIO is required.
Note: The VIOS.SET parameter takes precedence over VIOS.SET.NET.
v VIO sets must exist on all CECs that belong to the resource pool.
v The virtual switch template references the alias name of VIOS.SET through
the PMRDP.Net.VIOPairAlias property.
Planning for System p configuration:
Learn how to retrieve the parameters from System p Central Electronics Complex
(CEC) and VIOS.
About this task
You can retrieve the following parameters from the System p CEC and VIO:
v The trunk values used for your VIOS.
v The IDs of the control session VLAN used on CECs.
Procedure
1. Make sure that the VIOS are discovered.
2. Check whether there are enough free slots on the VIOS and whether the
property values for slot management are correct. These property values are:
LPAR.available_slots, LPAR.max_virtual_slots, LPAR.next_virtual_slot.
3. Define the VIO sets (VIOS.SET or VIOS.SET.NET) on the CEC.
What to do next
Check the external network setup:
v How the Ethernet adapters are mapped to VIOS.
v If EtherChannels are used.
v Which EtherChannel type is used.
v If the Ethernet adapters are connected to the switch through trunk ports.
v Which VLAN IDs are used.
Troubleshooting:
Lists the conditions that must be met to configure your network correctly.
Make sure that the following conditions are met:
v Matching values are used for the control session VLAN of the SEAs, and the
trunk priority for the VIOs is set correctly. Failure to do so can result in a
network outage if more than one VIO serves the System p internal network. In
this case, a MAC table inconsistency can occur and the associated VLAN is not
accessible.
v VIO sets are defined for all CECs that belong to the resource pool.
v VIOs that are in a VIO set share the same configuration settings and have the
same mappings of the Ethernet adapters.
v If EtherChannels are used, configure them consistently on the VIO and on the
external switch.
v VIOs are accessible through the network so that you can issue commands to
them. On each VIO, a network interface must be configured and active after
startup.
v The subnets that are used for management match the NIM server definitions.
Otherwise, the NIM provisioning fails. Each LPAR must be accessible from the
NIM server through its management network. You must define the
provisioning image on the NIM server (mksysb backup and SPOT) correctly.
v The capacity of the LPAR is high enough for it to boot before the provisioning
timeout occurs (for example, 30 minutes).
v The communication between the NIM server, HMC, System p, and VIO is
possible.
v The LPAR management interface can communicate with the NIM server and the
Tivoli Service Automation Manager server.
v The DNS server connected to the Tivoli Service Automation Manager node can
resolve the given IP address of the LPAR to the fully qualified domain name
(FQDN), and the short name part of the FQDN can be resolved back to the
same IP address. Both resolutions are performed during provisioning. If this is
not the case, the LPAR cannot be provisioned because it has no management
interface.
v The current implementation supports host name resolution only on the
management network. For successful provisioning, you must select the
Use this interface for host name resolution flag only on the management
network when registering the image. The following name resolutions must be
possible for the management IP address: IP address to fully qualified domain
name (FQDN), and short name part of the FQDN to IP address. Both must be
possible on the Tivoli Service Automation Manager node and on the NIM
server. If these preconditions are not met, provisioning results in errors.
v A provisioned virtual machine does not get a DNS configuration even if a DNS
definition exists in the network segment. To solve this problem, see DNS
configuration of the provisioned image on page 119.
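The forward and reverse resolution requirement above can be verified with a small script run on both the Tivoli Service Automation Manager node and the NIM server. The following Python sketch uses the standard socket module; check_name_resolution is an illustrative helper, not a product tool:

```python
import socket

def short_name(fqdn):
    """Return the short host name part of a fully qualified domain name."""
    return fqdn.split(".", 1)[0]

def check_name_resolution(ip_address):
    """Verify IP -> FQDN, then short name -> the same IP, as provisioning does."""
    # Reverse lookup: IP address to FQDN.
    fqdn, _aliases, _addresses = socket.gethostbyaddr(ip_address)
    # Forward lookup: short name part of the FQDN back to an IP address.
    resolved = socket.gethostbyname(short_name(fqdn))
    return resolved == ip_address
```

Run `check_name_resolution` with the management IP address planned for the LPAR; a False result indicates that provisioning would fail to configure the management interface.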
Definitions for DCM objects
Lists the definitions for DCM objects involved in the network support
configuration.
The following DCM objects are responsible for the network support configuration:
v Resource Pool
v Subnet
v Switch
Definitions on the Resource Pool DCM object
The Resource Pool defines general settings, such as:
v VMware network adapter type configuration. VMware supports multiple
network adapter types. With Tivoli Service Automation Manager, you can
set this type on a per-resource-pool basis.
PMRDP.Net.VMware.NicAdapterType - this property can contain one of the
adapter types supported by VMware (E1000, VMXNET, VMXNET2,
VMXNET3, flexible)
Note:
Currently, Tivoli Service Automation Manager supports five types of network
adapters for VMware (E1000, VMXNET, VMXNET2, VMXNET3, and flexible).
When you create additional network adapters during provisioning, the following
scenario applies:
If there is no network adapter on the image, the network adapter type comes
from the PMRDP.Net.VMware.NicAdapterType property of the resource pool. If the
image already contains network adapters, Tivoli Service Automation Manager
looks at the type of the first adapter and compares it with the supported network
adapter types.
The following situations can happen:
v If it is a VMXNET2 or VMXNET3 adapter, the additional adapter is VMXNET2
or VMXNET3, respectively.
v If it is an E1000 adapter, the additional adapter is E1000.
v In all other cases, a flexible type of network adapter is added.
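The selection logic above can be summarized in a few lines. This Python sketch is illustrative; the function and its arguments are assumptions for the example, not Tivoli Service Automation Manager internals:

```python
SUPPORTED_TYPES = ("E1000", "VMXNET", "VMXNET2", "VMXNET3", "flexible")

def pick_additional_adapter_type(existing_types, pool_default):
    """Choose the type of an additional VMware network adapter.

    existing_types: adapter types already on the image, in order.
    pool_default: value of PMRDP.Net.VMware.NicAdapterType on the resource pool.
    """
    if not existing_types:
        # No adapter on the image: use the resource pool property.
        if pool_default not in SUPPORTED_TYPES:
            raise ValueError("unsupported adapter type: %s" % pool_default)
        return pool_default
    first = existing_types[0]
    if first in ("VMXNET2", "VMXNET3"):
        return first          # additional adapter keeps the VMXNET2/VMXNET3 type
    if first == "E1000":
        return "E1000"
    return "flexible"         # all other cases
```

For example, an image whose first adapter is E1000 gets another E1000 adapter, while an image with an unrecognized first adapter gets a flexible adapter.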
Definitions on the subnetwork DCM object
The subnet defines the network configuration on the operating system level:
v PMRDP.Net.Cloud - set to true to indicate usage by Tivoli Service Automation
Manager
v PMRDP.Net.VLANID - VLAN ID for the subnet
v PMRDP.Net.VLAN_PVID - PVID used for the virtual adapter attached to the SEA if
PMRDP.Net.VLANID is set. If no PMRDP.Net.VLANID property is specified, the
property can be omitted
v PMRDP.Net.Broadcast - broadcast mask for subnet
v PMRDP.Net.Gateway - gateway IP address for the subnet
v PMRDP.Net.DomainName - domain name used for hostname generation if DNS
reverse resolution does not work
v PMRDP.Net.HostnamePrefix - host name prefix used to generate the virtual
machine host name if DNS and domain name are not specified. The algorithm
creates a name in the form <prefix><IP Address>, for example, VM010010010010,
where the IP address is 10.10.10.10 and the prefix is VM.
v Default route parameters. These parameters are supported only if
PMRDP.Net.Gateway is specified:
PMRPD.Net.DefaultRoute.Destination - default route for the network interface
PMRPD.Net.DefaultRoute.NetMask - default route for the network interface
PMRPD.Net.DefaultRoute.Metric - default route for the network interface
v Virtual switch templates per hypervisor. Multiple of these entries can exist,
for example, if more than one hypervisor supports a subnet:
PMRDP.Net.VMware.SwitchTemplate - name of the virtual switch template for
the VMware hypervisor
PMRDP.Net.KVM.SwitchTemplate - name of the virtual switch template for the
KVM hypervisor
PMRDP.Net.Systemp.SwitchTemplate - name of the virtual switch template for
the PowerVM hypervisor
PMRDP.Net.Systemz.SwitchTemplate - name of the virtual switch template for
the System z hypervisor
PMRDP.Net.Xen.SwitchTemplate - name of the virtual switch template for the
Xen hypervisor
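The host name generation algorithm described for PMRDP.Net.HostnamePrefix (prefix plus the zero-padded octets of the IP address) can be sketched as follows. This is an illustrative reimplementation, not the product code:

```python
def generate_hostname(prefix, ip_address):
    """Build <prefix><IP Address>, each IPv4 octet zero-padded to three digits."""
    octets = ip_address.split(".")
    if len(octets) != 4:
        raise ValueError("expected an IPv4 address: %r" % ip_address)
    return prefix + "".join("%03d" % int(octet) for octet in octets)
```

Using the example above, `generate_hostname("VM", "10.10.10.10")` returns `VM010010010010`.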
Definitions on the virtual switch DCM object
The virtual switch template is based on the DCM object switch. It is used as a
template to customize the virtual switches on the hypervisor level.
Functions
The virtual switch defines the network adapters on the hypervisor that
are bound to the virtual switch. This is done by configuring interface cards
on the switch; one entry of type serial is defined per adapter.
v For KVM and Xen, a single adapter can be defined that is connected to
the network bridge and on which the VLAN device is configured.
v For PowerVM, multiple adapters are supported if an EtherChannel is
used, or a single adapter if no EtherChannel is available.
v For System z and VMware, this definition is not used.
Tivoli Service Automation Manager assumes that the hypervisors are
connected through trunk ports to the external switches. VLAN tagging is
done by the virtual switches of the supported hypervisors. It is the
responsibility of the external network setup to:
v predefine Virtual LANs on the external switches
v set up the external switches, routers, or firewalls
v configure the external switch ports
v ensure communication between the Tivoli Service Automation Manager
management network and the management VLANs for the different
resource pools
Properties
This template contains hypervisor-specific properties.
v VMware related properties:
PMRDP.Net.VSwitchName - the VMware virtual switch name where the
port groups are created
PMRDP.Net.BridgePrefixName - prefix name that is used to create the
port group name in VMware: <prefix>+ 4-digit VLAN ID
PMRDP.Net.HostGroupId - specifies the DCM ID of the VMware virtual
center
v System p related properties:
PMRDP.Net.VIOPairAlias - alias name that specifies the VIOS pair
used on the System p platform. This value must be resolvable on each
of the PowerVM CECs that are defined in the resource pool.
PMRDP.Net.SEACtrlSessionVlanId - shared Ethernet adapter control
session VLAN ID
Note: This parameter is not required for the Power Blade scenario.
PMRDP.Net.UntaggedTrafficPVID - VLAN ID that allows untagged
network traffic to pass through
PMRDP.Net.LinkTypeIEEE8023AD - defines link aggregation type used -
true (IEEE) or false (Cisco)
There must be one entry in the interface card list that specifies the
network adapter name that the shared Ethernet adapter binds to during
creation.
v System z related properties:
PMRDP.Net.zVM_NetworkPortDeviceNumber - NIC device number which
is used during virtual machine provisioning
PMRDP.Net.zVM_NetworkDeviceRange - device allocation range. This
parameter is used during virtual machine provisioning.
v KVM-related properties:
PMRDP.Net.BridgePrefixName - prefix name used to create the network
bridge name: <prefix>+ 4-digit VLAN ID
There must be one entry in the interface card list which specifies the
network adapter name the bridge binds to during creation.
v Xen related properties:
PMRDP.Net.BridgePrefixName - prefix name used to create the network
bridge name: <prefix>+ 4-digit VLAN ID
There must be one entry in the interface card list which specifies the
network adapter name the bridge binds to during creation.
v Global parameters required for all hypervisors:
PMRDP.Net.Cloud - set to true to indicate usage by Tivoli Service
Automation Manager
PMRDP.Net.SwitchType - set to Virtual Switch Template to indicate
that this is a virtual switch template
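For VMware, KVM, and Xen, the generated port group or bridge name is PMRDP.Net.BridgePrefixName followed by the 4-digit VLAN ID. A minimal sketch of this naming scheme (illustrative only; the prefix value is an example, not a required setting):

```python
def bridge_name(prefix, vlan_id):
    """Return <prefix> + 4-digit VLAN ID, for example, pg0042 for prefix 'pg'."""
    if not 0 <= vlan_id <= 4094:
        raise ValueError("invalid VLAN ID: %d" % vlan_id)
    return "%s%04d" % (prefix, vlan_id)
```

For example, `bridge_name("pg", 42)` returns `pg0042`, which would be the VMware port group name or the KVM/Xen network bridge name for VLAN 42.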
Network planning for VMControl
Network configuration for a VMControl environment is specific and requires
careful planning.
Before you start planning your VMControl network setup, collect detailed
information for network properties such as the VLAN ID or the gateway. Then,
configure the subnetworks and network segments. You can use the sample OVF
descriptor files to complete this setup process.
Network Installation Manager (NIM) versus Storage Copy Services (SCS)
You must understand the differences between NIM-based images and SCS-based
images before you configure either of them. This knowledge helps you decide
which kind of image is more suitable for a particular virtual machine. The table
below lists some essential differences.
Table 16. NIM vs SCS differences

Disk
NIM: Root disk only
SCS: Root disk and additional disks

Image for the virtual machine
NIM: Uses an mksysb image
SCS: Captures the running virtual machine

Capturing an image
NIM: The NIM image resides on the NIM master. Therefore, live capture is
possible.
SCS: SCS uses a FlashCopy to capture an image. Therefore, it forces the virtual
machine to shut down.

VIOS
NIM: Uses a single VIOS for the network part.
SCS: Uses multiple VIOS, for example, a VIOS for storage management and a
VIOS for networking.
Note: Live capture is not possible with SCS images. SCS uses bootable AIX images
only.
Sample OVF descriptor file for AIX using NIM:
You can copy and modify this sample OVF descriptor file to create your own
descriptor file in the network configuration process.
<?xml version="1.0" encoding="UTF-8"?>
<ovf:Envelope xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:ovf="http://schemas.dmtf.org/ovf/envelope/1"
xmlns="http://schemas.dmtf.org/ovf/envelope/1"
xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema
/2/CIM_ResourceAllocationSettingData"
xmlns:vim="http://www.ibm.com/xmlns/ovf/extension/vim/2"
xmlns:vimphyp="http://www.ibm.com/xmlns/ovf/extension/vim/2/phyp/3"
xmlns:vimphyprasd="http://www.ibm.com/xmlns/ovf/extension/vim/2/phyp/3/rasd"
xmlns:vimrasd="http://www.ibm.com/xmlns/ovf/extension/vim/2/rasd"
xmlns:vssd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/
CIM_VirtualSystemSettingData"
xsi:schemaLocation="http://www.ibm.com/xmlns/ovf/extension/vim/2
ibm-vim2_2.1.0.xsd
http://schemas.dmtf.org/ovf/envelope/1 dsp8023_1.0.0.xsd
http://www.ibm.com/xmlns/ovf/extension/vim/2/phyp/3/rasd
ibm-vim2-phyp3-rasd_2.1.0.xsd
http://www.ibm.com/xmlns/ovf/extension/vim/2/rasd ibm-vim2-rasd_2.1.0.xsd"
xml:lang="en-US">
<ovf:References><ovf:File ovf:href="file:///export/nim/appliances/
dcae3965-baa4-421b-be9d-2b7566ee5c37/image1.mksysb"
ovf:id="file1" ovf:size="2877235200"/>
</ovf:References>
<ovf:DiskSection>
<ovf:Info>Disk Section</ovf:Info>
<ovf:Disk ovf:capacity="8053063680"
ovf:capacityAllocationUnits="byte"
ovf:diskId="disk1" ovf:fileRef="file1"
ovf:format="http://www.ibm.com/xmlns/ovf/diskformat/power.aix.mksysb"
ovf:populatedSize="5949756416"/>
</ovf:DiskSection>
<ovf:NetworkSection>
<ovf:Info>Network Section</ovf:Info>
<ovf:Network ovf:name="Network 1">
<ovf:Description>Captured from virtual server MyServer
connected to MyNetwork on host MyHost
</ovf:Description>
</ovf:Network>
</ovf:NetworkSection>
<ovf:VirtualSystem ovf:id="vs0">
<ovf:Info>This section describes a virtual system
to be created when deploying the package</ovf:Info>
<ovf:InstallSection>
<ovf:Info>This section provides information about the
first time boot of the virtual system.
Its presence indicates that the virtual system needs to
be booted after deployment,
to run first-boot customization.</ovf:Info>
</ovf:InstallSection>
<ovf:VirtualHardwareSection
ovf:transport="http://www.ibm.com/xmlns/ovf/transport/
filesystem/etc/ovf-transport">
<ovf:Info>This section describes the virtual hardware
requirements on the target virtual system
</ovf:Info>
<ovf:System>
<vssd:ElementName>VirtualSystem</vssd:ElementName>
<vssd:InstanceID>VirtualSystem</vssd:InstanceID>
<vssd:VirtualSystemType>IBM:POWER:AIXLINUX</vssd:VirtualSystemType>
</ovf:System>
<ovf:Item>
<rasd:AllocationUnits>percent</rasd:AllocationUnits>
<rasd:Caption>Processor Allocation</rasd:Caption>
<rasd:ConsumerVisibility>3</rasd:ConsumerVisibility>
<rasd:Description>Processor Allocation</rasd:Description>
<rasd:ElementName>Allocation of 1 virtual processors, 0.2
processing units.</rasd:ElementName>
<rasd:InstanceID>1</rasd:InstanceID>
<rasd:Limit>20</rasd:Limit>
<rasd:Reservation>10</rasd:Reservation>
<rasd:ResourceType>3</rasd:ResourceType>
<rasd:VirtualQuantity>1</rasd:VirtualQuantity>
<rasd:Weight>128</rasd:Weight>
<vimphyprasd:VirtualLimit>1</vimphyprasd:VirtualLimit>
<vimphyprasd:VirtualReservation>1</vimphyprasd:VirtualReservation>
<vimphyprasd:Quantity>20</vimphyprasd:Quantity>
<vimphyprasd:ShareMode>uncap</vimphyprasd:ShareMode>
</ovf:Item>
<ovf:Item>
<rasd:AllocationUnits>byte * 2^10</rasd:AllocationUnits>
<rasd:Caption>Memory Allocation</rasd:Caption>
<rasd:ConsumerVisibility>2</rasd:ConsumerVisibility>
<rasd:Description>Memory Allocation</rasd:Description>
<rasd:ElementName>Allocation of 512 MB of dedicated memory.</rasd:ElementName>
<rasd:InstanceID>2</rasd:InstanceID>
<rasd:ResourceType>4</rasd:ResourceType>
<rasd:VirtualQuantity>524288</rasd:VirtualQuantity>
<vimphyprasd:VirtualLimit>524288</vimphyprasd:VirtualLimit>
<vimphyprasd:VirtualReservation>524288</vimphyprasd:VirtualReservation>
</ovf:Item>
<ovf:Item>
<rasd:Caption>Generic disk controller</rasd:Caption>
<rasd:ElementName>Generic disk controller</rasd:ElementName>
<rasd:InstanceID>3</rasd:InstanceID>
<rasd:ResourceType>20</rasd:ResourceType>
</ovf:Item>
<ovf:Item>
<rasd:AddressOnParent>0</rasd:AddressOnParent>
<rasd:Caption>Virtual Disk Allocation</rasd:Caption>
<rasd:Description></rasd:Description>
<rasd:ElementName>Virtual Disk Allocation</rasd:ElementName>
<rasd:HostResource>ovf:/disk/disk1</rasd:HostResource>
<rasd:InstanceID>4</rasd:InstanceID>
<rasd:Parent>3</rasd:Parent>
<rasd:ResourceType>31</rasd:ResourceType>
</ovf:Item>
<ovf:Item>
<rasd:Caption>Ethernet Adapter Allocation</rasd:Caption>
<rasd:Connection>Network 1</rasd:Connection>
<rasd:Description>Network 1</rasd:Description>
<rasd:ElementName>Network adapter 1 on Network 1</rasd:ElementName>
<rasd:InstanceID>5</rasd:InstanceID>
<rasd:ResourceType>10</rasd:ResourceType>
<rasd:VirtualQuantity>1</rasd:VirtualQuantity>
</ovf:Item>
</ovf:VirtualHardwareSection>
<ovf:ProductSection ovf:class="com.ibm.ovf.vim.2">
<ovf:Info>This product section provides information
about the entire package.</ovf:Info>
<ovf:Product>MyVirtualApplianceName</ovf:Product>
<vim:Description></vim:Description>
</ovf:ProductSection>
<ovf:ProductSection ovf:class="com.ibm.ovf.vmcontrol.system">
<ovf:Info>General System Product Section</ovf:Info>
</ovf:ProductSection>
<ovf:ProductSection ovf:class="com.ibm.ovf.vmcontrol.
system.networking">
<ovf:Info>System Level Networking</ovf:Info>
<ovf:Property ovf:key="hostname" ovf:type="string"
ovf:userConfigurable="true">
<ovf:Label>Short host name for the system.</ovf:Label>
<ovf:Description>Short host name for the system.</ovf:Description>
</ovf:Property>
<ovf:Property ovf:key="domainname" ovf:type="string"
ovf:userConfigurable="true" ovf:value="">
<ovf:Label>DNS domain name for the system.</ovf:Label>
<ovf:Description>DNS domain name for the system.</ovf:Description>
</ovf:Property>
<ovf:Property ovf:key="dnsIPaddresses" ovf:type="string"
ovf:userConfigurable="true" ovf:value="">
<ovf:Label>IP addresses of DNS servers for system.</ovf:Label>
<ovf:Description>IP addresses of DNS servers for system.</ovf:Description>
</ovf:Property>
<ovf:Property ovf:key="ipv4defaultgateway" ovf:type="string"
ovf:userConfigurable="true" ovf:value="">
<ovf:Label>Default IPv4 gateway.</ovf:Label>
<ovf:Description>Default IPv4 gateway.</ovf:Description>
</ovf:Property>
</ovf:ProductSection>
<ovf:ProductSection ovf:class="com.ibm.ovf.vmcontrol.
adapter.networking" ovf:instance="5">
<ovf:Info>Network adapter configuration for Network
adapter 1 on Network 1</ovf:Info>
<ovf:Category>Internet Protocol Version 4</ovf:Category>
<ovf:Property ovf:key="ipv4addresses" ovf:type="string"
ovf:userConfigurable="true">
<ovf:Label>Static IP address for the network adapter
&amp;quot;Network adapter 1 on Network 1&amp;quot;.
</ovf:Label>
<ovf:Description>Static IP address for the network adapter
&amp;quot;Network adapter 1 on Network 1&amp;quot;.
</ovf:Description>
</ovf:Property>
<ovf:Property ovf:key="ipv4netmasks" ovf:type="string"
ovf:userConfigurable="true" ovf:value="">
<ovf:Label>Static network mask for network adapter
&amp;quot;Network adapter 1 on Network 1&amp;quot;.
</ovf:Label>
<ovf:Description>Static network mask for network
adapter &amp;quot;Network adapter 1 on Network 1&amp;quot;.
</ovf:Description>
</ovf:Property>
<ovf:Category>Deployment use</ovf:Category>
<ovf:Property ovf:key="order" ovf:type="uint32"
ovf:userConfigurable="false" ovf:value="0">
<ovf:Label>The adapter order for network adapter
&amp;quot;Network adapter 1 on Network 1&amp;quot;.
</ovf:Label>
<ovf:Description>The adapter order for network
adapter &amp;quot;Network adapter 1 on Network 1&amp;quot;.
</ovf:Description>
</ovf:Property>
</ovf:ProductSection>
<ovf:OperatingSystemSection ovf:id="9" ovf:version="7">
<ovf:Info>AIX 7 Guest Operating System</ovf:Info>
<ovf:Description>IBM AIX 7</ovf:Description>
</ovf:OperatingSystemSection>
<ovf:ProductSection ovf:class="com.ibm.ovf.vim.2.nim.6"
ovf:instance="1">
<ovf:Info>NIM-specific settings</ovf:Info>
<ovf:Product>AIX</ovf:Product>
<ovf:Vendor>IBM</ovf:Vendor>
<ovf:FullVersion>1.0</ovf:FullVersion>
<ovf:Category>NIM-specific settings</ovf:Category>
<ovf:Property ovf:key="nim.Resource" ovf:type="string"
ovf:userConfigurable="true">
<ovf:Label>NIM Resource or Resource Group</ovf:Label>
<ovf:Description>Specify the name of an existing NIM
Resource or NIM Resource Group to allocate during
the deployment.Any defined NIM Resource Group, or
Resource of class &amp;quot;resources&amp;quot;
can be specified, except:mksysb, spot, lpp_source,
ovf_vm, master</ovf:Description>
</ovf:Property>
</ovf:ProductSection>
</ovf:VirtualSystem>
</ovf:Envelope>
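When you adapt a descriptor such as the sample above, it can be useful to list which properties are user-configurable. The following Python sketch uses only the standard library and is a convenience illustration, not part of the product tooling:

```python
import xml.etree.ElementTree as ET

# Namespace of the OVF envelope, as declared in the sample descriptor.
OVF_NS = "http://schemas.dmtf.org/ovf/envelope/1"

def user_configurable_keys(ovf_xml):
    """Return the ovf:key of every ovf:Property marked userConfigurable."""
    root = ET.fromstring(ovf_xml)
    keys = []
    for prop in root.iter("{%s}Property" % OVF_NS):
        if prop.get("{%s}userConfigurable" % OVF_NS) == "true":
            keys.append(prop.get("{%s}key" % OVF_NS))
    return keys
```

Applied to the NIM sample above, this would report keys such as hostname, domainname, dnsIPaddresses, ipv4defaultgateway, ipv4addresses, ipv4netmasks, and nim.Resource.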
Sample OVF descriptor file for AIX using SCS:
You can copy and modify this sample OVF descriptor file to create your own
descriptor file in the network configuration process.
<?xml version="1.0" encoding="UTF-8"?>
<ovf:Envelope xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:ovf="http://schemas.dmtf.org/ovf/envelope/1"
xmlns="http://schemas.dmtf.org/ovf/envelope/1"
xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/
1/cim-schema/2/CIM_ResourceAllocationSettingData"
xmlns:vim="http://www.ibm.com/xmlns/ovf/extension/vim/2"
xmlns:vimphyp="http://www.ibm.com/xmlns/ovf/extension/vim/2/phyp/3"
xmlns:vimphyprasd="http://www.ibm.com/xmlns/ovf/extension/vim/2/phyp/3/rasd"
xmlns:vimrasd="http://www.ibm.com/xmlns/ovf/extension/vim/2/rasd"
xmlns:vssd="http://schemas.dmtf.org/wbem/wscim/1/
cim-schema/2/CIM_VirtualSystemSettingData"
xsi:schemaLocation="http://www.ibm.com/xmlns/ovf/
extension/vim/2 ibm-vim2_2.1.6.xsd
http://schemas.dmtf.org/ovf/envelope/1 dsp8023_1.6.0.xsd
http://www.ibm.com/xmlns/ovf/extension/vim/2/phyp/3/rasd
ibm-vim2-phyp3-rasd_2.1.6.xsd
http://www.ibm.com/xmlns/ovf/extension/vim/2/rasd
ibm-vim2-rasd_2.1.6.xsd" xml:lang="en-US">
<ovf:References>
<ovf:File ovf:href="imageFile" ovf:id="file1"
ovf:size="1048576"/>
</ovf:References>
<ovf:DiskSection>
<ovf:Info>Disk Section</ovf:Info>
<ovf:Disk ovf:capacity="1048576"
ovf:capacityAllocationUnits="byte"
ovf:diskId="disk1" ovf:fileRef="file1"
ovf:format="http://www.ibm.com/xmlns/ovf/diskformat/power.raw"
ovf:populatedSize="1048576"/>
</ovf:DiskSection>
<ovf:NetworkSection>
<ovf:Info>Network Section</ovf:Info>
<ovf:Network ovf:name="Network 1">
<ovf:Description>Network 1 description</ovf:Description>
</ovf:Network>
</ovf:NetworkSection>
<ovf:VirtualSystem ovf:id="vs0">
<ovf:Info>This section describes a virtual system to
be created when deploying the package</ovf:Info>
<ovf:InstallSection>
<ovf:Info>This section provides information about the
first time boot of the virtual system.
Its presence indicates that the virtual system
needs to be booted after deployment,
to run first-boot customization.</ovf:Info>
</ovf:InstallSection>
<ovf:VirtualHardwareSection ovf:transport="iso">
<ovf:Info>This section describes the virtual hardware
requirements on the target virtual system</ovf:Info>
<ovf:System>
<vssd:ElementName>VirtualSystem</vssd:ElementName>
<vssd:InstanceID>VirtualSystem</vssd:InstanceID>
<vssd:VirtualSystemType>IBM:POWER:AIXLINUX</vssd:VirtualSystemType>
</ovf:System>
<ovf:Item>
<rasd:AllocationUnits>percent</rasd:AllocationUnits>
<rasd:Caption>Processor Allocation</rasd:Caption>
<rasd:ConsumerVisibility>3</rasd:ConsumerVisibility>
<rasd:Description>Processor Allocation</rasd:Description>
<rasd:ElementName>Allocation of 1 virtual processors,
0.2 processing units.</rasd:ElementName>
<rasd:InstanceID>1</rasd:InstanceID>
<rasd:Limit>20</rasd:Limit>
<rasd:Reservation>20</rasd:Reservation>
<rasd:ResourceType>3</rasd:ResourceType>
<rasd:VirtualQuantity>1</rasd:VirtualQuantity>
<vimphyprasd:VirtualLimit>1</vimphyprasd:VirtualLimit>
<vimphyprasd:VirtualReservation>1</vimphyprasd:VirtualReservation>
<vimphyprasd:Quantity>20</vimphyprasd:Quantity>
<vimphyprasd:ShareMode>cap</vimphyprasd:ShareMode>
</ovf:Item>
<ovf:Item>
<rasd:AllocationUnits>byte * 2^10</rasd:AllocationUnits>
<rasd:Caption>Memory Allocation</rasd:Caption>
<rasd:ConsumerVisibility>2</rasd:ConsumerVisibility>
<rasd:Description>Memory Allocation</rasd:Description>
<rasd:ElementName>Allocation of 512 MB of dedicated memory.</rasd:ElementName>
<rasd:InstanceID>2</rasd:InstanceID>
<rasd:ResourceType>4</rasd:ResourceType>
<rasd:VirtualQuantity>524288</rasd:VirtualQuantity>
<vimphyprasd:VirtualLimit>524288</vimphyprasd:VirtualLimit>
<vimphyprasd:VirtualReservation>524288</vimphyprasd:VirtualReservation>
</ovf:Item>
<ovf:Item>
<rasd:Caption>Generic disk controller</rasd:Caption>
<rasd:ElementName>Generic disk controller</rasd:ElementName>
<rasd:InstanceID>3</rasd:InstanceID>
<rasd:ResourceType>20</rasd:ResourceType>
</ovf:Item>
<ovf:Item>
<rasd:AddressOnParent>0</rasd:AddressOnParent>
<rasd:Caption>Virtual Disk Allocation</rasd:Caption>
<rasd:Description>The disk description.</rasd:Description>
<rasd:ElementName>Virtual Disk Allocation</rasd:ElementName>
<rasd:HostResource>ovf:/disk/disk1</rasd:HostResource>
<rasd:InstanceID>4</rasd:InstanceID>
<rasd:Parent>3</rasd:Parent>
<rasd:ResourceType>32</rasd:ResourceType>
</ovf:Item>
<ovf:Item>
<rasd:Caption>Ethernet Adapter Allocation</rasd:Caption>
<rasd:Connection>Network 1</rasd:Connection>
<rasd:Description>Network 1</rasd:Description>
<rasd:ElementName>Network adapter 1 on Network 1</rasd:ElementName>
<rasd:InstanceID>5</rasd:InstanceID>
<rasd:ResourceType>10</rasd:ResourceType>
<rasd:VirtualQuantity>1</rasd:VirtualQuantity>
</ovf:Item>
</ovf:VirtualHardwareSection>
<ovf:ProductSection ovf:class="com.ibm.ovf.vim.2">
<ovf:Info>This product section provides information
about the entire package.</ovf:Info>
<ovf:Product></ovf:Product>
<vim:Description></vim:Description>
</ovf:ProductSection>
<ovf:ProductSection ovf:class="com.ibm.ovf.vmcontrol.system">
<ovf:Info>General System Product Section</ovf:Info>
<ovf:Property ovf:key="timezone" ovf:type="string"
ovf:userConfigurable="true">
<ovf:Label>Time zone setting for the virtual system</ovf:Label>
<ovf:Description>Time zone setting for the virtual system</ovf:Description>
</ovf:Property>
</ovf:ProductSection>
<ovf:ProductSection ovf:class="com.ibm.ovf.vmcontrol.system.networking">
<ovf:Info>System Level Networking</ovf:Info>
<ovf:Property ovf:key="hostname" ovf:type="string"
ovf:userConfigurable="true">
<ovf:Label>Short hostname for the system.</ovf:Label>
<ovf:Description>Short hostname for the system.</ovf:Description>
</ovf:Property>
<ovf:Property ovf:key="domainname" ovf:type="string"
ovf:userConfigurable="true">
<ovf:Label>DNS domain name for the system.</ovf:Label>
<ovf:Description>DNS domain name for the system.</ovf:Description>
</ovf:Property>
<ovf:Property ovf:key="dnsIPaddresses" ovf:type="string"
ovf:userConfigurable="true">
<ovf:Label>IP addresses of DNS servers for system.</ovf:Label>
<ovf:Description>IP addresses of DNS servers for system.</ovf:Description>
</ovf:Property><ovf:Property ovf:key="ipv4defaultgateway"
ovf:type="string" ovf:userConfigurable="true">
<ovf:Label>The default IPv4 gateway.</ovf:Label>
<ovf:Description>The default IPv4 gateway.</ovf:Description>
</ovf:Property>
</ovf:ProductSection>
<ovf:ProductSection ovf:class="com.ibm.ovf.vmcontrol.adapter.networking"
ovf:instance="5">
<ovf:Info>Network adapter configuration for Network adapter
1 on Network 1</ovf:Info>
<ovf:Category>Internet Protocol Version 4</ovf:Category>
<ovf:Property ovf:key="ipv4addresses" ovf:type="string"
ovf:userConfigurable="true">
<ovf:Label>Static IP address for the network adapter
&quot;Network adapter 1 on Network 1&quot;.</ovf:Label>
<ovf:Description>Static IP address for the network adapter
&quot;Network adapter 1 on Network 1&quot;.</ovf:Description>
</ovf:Property>
<ovf:Property ovf:key="ipv4netmasks" ovf:type="string"
ovf:userConfigurable="true">
<ovf:Label>Static network mask for network adapter
&quot;Network adapter 1 on Network 1&quot;.</ovf:Label>
<ovf:Description>Static network mask for network adapter
&quot;Network adapter 1 on Network 1&quot;.</ovf:Description>
</ovf:Property>
<ovf:Category>Internet Protocol Version 6</ovf:Category>
<ovf:Property ovf:key="ipv6addresses" ovf:type="string"
ovf:userConfigurable="true">
<ovf:Label>Static IP address for the network adapter
&quot;Network adapter 1 on Network 1&quot;.</ovf:Label>
<ovf:Description>Static IP address for the network adapter
&quot;Network adapter 1 on Network 1&quot;.</ovf:Description>
</ovf:Property>
<ovf:Property ovf:key="ipv6gateways" ovf:type="string"
ovf:userConfigurable="true">
<ovf:Label>Static default gateway for network adapter
&quot;Network adapter 1 on Network 1&quot;.</ovf:Label>
<ovf:Description>Static default gateway for network
adapter &quot;Network adapter 1 on Network 1&quot;.</ovf:Description>
</ovf:Property>
<ovf:Property ovf:key="useipv6autoconf" ovf:type="boolean"
ovf:userConfigurable="true" ovf:value="false">
<ovf:Label>Use IPv6 stateless address autoconfiguration
for network adapter &quot;Network adapter
1 on Network 1&quot;.</ovf:Label>
<ovf:Description>Use IPv6 stateless address
autoconfiguration for network adapter
&quot;Network adapter 1 on Network 1&quot;.</ovf:Description>
</ovf:Property>
<ovf:Category>Deployment use</ovf:Category>
<ovf:Property ovf:key="order" ovf:type="uint32"
ovf:userConfigurable="false" ovf:value="0">
<ovf:Label>The adapter order for network adapter
&quot;Network adapter 1 on Network 1&quot;.</ovf:Label>
<ovf:Description>The adapter order for network adapter
&quot;Network adapter 1 on Network 1&quot;.</ovf:Description>
</ovf:Property>
<ovf:Property ovf:key="slotnumber" ovf:type="string"
ovf:userConfigurable="false">
<ovf:Label>The slot number for network adapter
&quot;Network adapter 1 on Network 1&quot;.</ovf:Label>
<ovf:Description>The slot number for network
adapter &quot;Network adapter 1 on Network 1&quot;.</ovf:Description>
</ovf:Property>
</ovf:ProductSection>
<ovf:OperatingSystemSection ovf:id="9" ovf:version="7">
<ovf:Info>AIX 7 Guest Operating System</ovf:Info>
<ovf:Description>IBM AIX 7</ovf:Description>
</ovf:OperatingSystemSection>
</ovf:VirtualSystem>
</ovf:Envelope>
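In the listing above, the memory item's rasd:AllocationUnits value of byte * 2^10 means that the rasd:VirtualQuantity is expressed in KiB, so the value 524288 corresponds to the 512 MB named in the ElementName. The conversion can be sketched as follows (an illustrative helper only; the function names are ours, not part of the OVF tooling):

```python
# Sketch: convert an OVF rasd:VirtualQuantity expressed in
# "byte * 2^10" (KiB) allocation units to megabytes and back.
# Illustrative only; not part of Tivoli Service Automation Manager.

KIB_PER_MB = 1024  # number of "byte * 2^10" units in one MB


def kib_units_to_mb(virtual_quantity: int) -> int:
    """Convert a VirtualQuantity in KiB units to whole MB."""
    return virtual_quantity // KIB_PER_MB


def mb_to_kib_units(mb: int) -> int:
    """Convert MB to the KiB VirtualQuantity used in the OVF."""
    return mb * KIB_PER_MB


# The OVF above allocates 512 MB of dedicated memory:
assert mb_to_kib_units(512) == 524288
assert kib_units_to_mb(524288) == 512
```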
Preparing the provisioning back ends
If you are using z/VM, KVM, or Xen, set up the environment so that Tivoli
Service Automation Manager can provision and manage virtual servers with them.
Configuring the z/VM environment for Tivoli Service
Automation Manager
This section provides information on how to set up the z/VM environment so that
Tivoli Service Automation Manager can provision and manage virtual servers with
the z/VM hypervisor.
Introduction to the z/VM environment
This section describes the required z/VM setup for provisioning of z/VM Linux
guests by Tivoli Service Automation Manager.
The following outlines the basic steps in this process:
1. Plan for the environment.
2. Define CP_Owned and User_Volume DASD.
3. Set up the z/VM network and VSWITCH.
4. Update the TCP/IP configuration.
5. Set up DIRM.
6. Enable VSMSERVE. For details, see z/VM V6R1.0 Knowledge Center > Application
Programming > z/VM V6R1 Systems Management Application Programming.
7. Create the Linux prototype.
8. Install the Linux master system.
9. Enable personalization.
Systems that have RACF enabled require additional setup by the security
administrator.
For more information about z/VM, and Linux on System z, see the z/VM
publication Getting Started with Linux on System z .
Planning the z/VM environment
The following information needs to be collected before starting the configuration:
v What DASD volumes will be used to contain the Linux systems?
Minidisks for master systems = Minidisk_volume
Minidisk pool for provisioned systems = Minidisk_Pool_Volume
v What devices are available for network connectivity?
OSA device for Vswitch = VSwitch_OSA_Addr
OSA device for IP = IP_OSA_Device_Addr
v What IP addresses are available for use by new Linux systems?
Provisioned IP address pool = IP_Pool
v What is the z/VM IP address?
z/VM IP address = External IP_Addr
Required software
v z/VM 5.4 with DIRM enabled
v z/VM TCP/IP for network connectivity
v SUSE Linux Enterprise Server 10 SP2 (or Red Hat Enterprise Linux 5 update 4)
Linux on System z z/VM guest for the MAPSRV ID
v Master image:
SUSE Linux Enterprise Server 10 Linux on System z
SUSE Linux Enterprise Server 11 Linux on System z
Red Hat Enterprise Linux 5 update 4 Linux on System z
Setting up z/VM for Linux provisioning
This task describes how to configure z/VM to provision Linux guests.
Modifying the z/VM System Configuration file:
About this task
Directions for updating the SYSTEM CONFIG file can be found in z/VM CP
Planning and Administration. Refer to this publication if you have doubts about
what you are changing. Errors in this file can have serious repercussions on the
usability of the system. See z/VM V6R1.0 Knowledge Center > Planning and
Administration > z/VM V6R1 CP Planning and Administration > Contents (exploded
view).
The SYSTEM CONFIG file contains the primary system definitions used when CP
is booted (IPLed). All of the information needed to configure CP statically comes
from this file. The SYSTEM CONFIG file resides in the same location as the
bootable CP kernel. Update SYSTEM CONFIG from the maint ID. To access the
SYSTEM CONFIG file, perform the following steps to release the primary parm
disk:
Procedure
1. Release Parm disk from CP:
cprelease a
2. Access the primary parm disk (MAINT's CF1):
link * cf1 cf1 mr
access cf1 z
3. Edit the SYSTEM CONFIG file:
xedit system config z
System DASD:
About this task
The User_Volume_List is a list of DASD volumes that CP should automatically
attach to the system for user mini-disk definitions. Because all minidisks are
managed by CP, all volumes that house minidisks must be attached to the z/VM
system. Update the User_Volume_List section to list all DASD that are used for
mini-disks across all master, guest, and DASD pools.
/**********************************************************************/
/* User_Volume_List */
/* Multiple labels can be stacked on one statement. */
/**********************************************************************/
/* USER_VOLUME_LIST USRP01 USRP02 USRP03 USRP04 */
User_Volume_List Minidisk volumes /*DASD for Minidisks*/
User_Volume_List Minidisk volumes /*DASD for Pools*/
z/VM network:
About this task
Add the VSWITCH definitions. Network definitions for provisioned systems must
be in the main SYSTEM CONFIG file. Using embedded files for network
definitions is not supported.
VSWITCH
The VSWITCH is set up in trunk mode to allow guests to be provisioned on
unique VLANs. [OSA_Addr] is the OSA address for the VSwitch that will be
attached to provisioned systems.
DEFINE VSWITCH zVM_LAN_Name RDEV OSA_Addr CON CONTR DTCVSW1
Modifying system features:
About this task
The FEATURES statement in SYSTEM CONFIG allows you to modify attributes
associated with the running system at IPL time.
Allow passwords on commands:
About this task
The Passwords_on_Cmds feature tells CP which commands allow passwords. Edit
the Features statement to enable passwords as follows:
Passwords_on_Cmds,
Autolog yes ,
Link yes ,
Logon yes
Enable system shutdown:
About this task
The Disconnect_timeout feature controls whether and when a virtual machine is
logged off after it has been forced to disconnect. You will turn this feature off, so
that any virtual machine that has been forced to disconnect will not be logged off.
Add the Disconnect_timeout off option to the Features statement.
The ShutdownTime and Signal ShutdownTime features enable a virtual machine to
register with CP to receive a shutdown signal when z/VM is shutting down. CP
waits to shut itself down until the time interval (in seconds) is exceeded, or until
all of the virtual machines enabled for the signal shutdown have reported a
successful shutdown. Linux distributions support this function, which allows Linux
to shut down cleanly before z/VM shuts down.
Add the following lines after the Features statement.
Set,
ShutdownTime 30 ,
Signal ShutdownTime 500
Custom user classes:
About this task
The following commands need to be added to allow the MAPSRV Linux system to
manage and update the virtual network devices.
/********************************************************************/
/* IBM DI PRIVCLASS SETUP */
/********************************************************************/
MODIFY CMD SET SUBC VSWITCH IBMCLASS B PRIVCLASS BT
MODIFY CMD QUERY SUBC * IBMCLASS B PRIVCLASS BT
MODIFY CMD IND IBMCLASS E PRIVCLASS ET
MODIFY CMD QUERY SUBC * IBMCLASS G PRIVCLASS GT
MODIFY CMD LINK IBMCLASS G PRIVCLASS GT
Restore z/VM Parm disk:
About this task
Perform the following steps to put the primary parm disk back online:
Procedure
1. Release the parm disk from your session:
release z
2. Relink the primary parm disk (MAINT's CF1) in read-only mode:
link * cf1 cf1 rr
3. Have CP access the parm disk again:
cpaccess maint cf1 a sr
Updating the z/VM TCP/IP configuration
This task describes how to configure TCP/IP for z/VM Linux provisioning.
About this task
The TCP/IP stack on z/VM needs to be set up to allow communication between
the MAPSRV guest and the VSMSERVE server in the TCP/IP stack. It is also
necessary to set up the external TCP/IP communications. To do this:
Procedure
1. Add network devices, home address, routing information, port definitions and
autolog definitions.
2. Log on to the tcpmaint id and edit PROFILE TCPIP on the 198 disk. You will
need to know the device addresses and port names for the Vswitch, and the
IP/Gateway addresses for the IP network. This information will be defined by
the VM system programmer.
Network devices:
About this task
The following example presents a sample network device definition for the TCP/IP
stack.
DEVICE OSA_DEV_Name OSD OSA_Device_Addr PORTNAME OSA_Portname AUTORESTART
LINK ETH0 QDIOETHERNET OSA_DEV_Name
Add the following start statements at the end of the profile:
START OSA_DEV_Name
Home address and gateway:
About this task
The next step is to add the IP address for the stack and the routing statements.
HOME
External_IP_Addr Subnetmask ETH0
GATEWAY
; Network Subnet First Link MTU
; Address Mask Hop Name Size
; ------------- --------------- --------------- ---------------- -----
Network_Address Subnetmask = ETH0 1492
DEFAULTNET Gateway_Address ETH0 1492
Only the external OSA connection (eth0) needs a route.
Autolog statement:
About this task
The following servers need to be present in the AUTOLOG.
AUTOLOG
FTPSERVE 0 ; FTP SERVER
PORTMAP 0 ; PORTMAP SERVER
VSMSERVE 0 ; VM SMAPI SERVER
ENDAUTOLOG
Port statement:
About this task
The following ports need to be defined to the TCP/IP stack:
PORT
20 TCP FTPSERVE NOAUTOLOG ; FTP Server
21 TCP FTPSERVE ; FTP Server
23 TCP INTCLIEN ; TELNET Server
111 TCP PORTMAP ; Portmap Server
111 UDP PORTMAP ; Portmap Server
172.16.0.1 1023 TCP VSMSERVE ; VM SMAPI SERVER
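Each PORT entry reserves a port for a specific virtual machine; the VSMSERVE entry is additionally qualified by an IP address. As a rough offline sanity check, a block in this format can be parsed into tuples before it is filed. The parser below is a hypothetical illustration, not a z/VM utility:

```python
# Sketch: parse z/VM TCP/IP PORT statement entries into tuples.
# Text after ";" is treated as a comment and dropped.
# Hypothetical helper for illustration, not an IBM-supplied tool.

def parse_port_block(text: str):
    entries = []
    for line in text.splitlines():
        line = line.split(";")[0].strip()  # strip trailing comment
        if not line or line.upper() == "PORT":
            continue  # skip blanks and the PORT keyword itself
        entries.append(tuple(line.split()))
    return entries


block = """PORT
20 TCP FTPSERVE NOAUTOLOG ; FTP Server
21 TCP FTPSERVE ; FTP Server
111 UDP PORTMAP ; Portmap Server
172.16.0.1 1023 TCP VSMSERVE ; VM SMAPI SERVER
"""
entries = parse_port_block(block)
assert ("21", "TCP", "FTPSERVE") in entries
assert entries[-1][0] == "172.16.0.1"  # IP-qualified VSMSERVE entry
```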
Defining the external IP address to the TCP/IP stack:
About this task
This section defines how to add the external IP OSA to the z/VM TCP/IP stack.
The OSA addresses for external IP connectivity are assigned by the VM system
programmer.
Procedure
1. Edit the SYSTEM DTCPARMS file on the tcpmaint 198 disk. This disk is
normally accessed as file mode D:
file SYSTEM DTCPARMS D
2. Xedit the file and add the following statement:
:NICK.TCPIP :TYPE.SERVER :CLASS.STACK
:ATTACH.OSA Addresses for IP
3. Save the file and exit.
Setting up Directory Maintenance Facility for z/VM (DirMaint)
This task describes how to configure DirMaint.
About this task
DIRM needs to be enabled as part of the z/VM setup. All DIRM commands issued
must complete with RC=0. Multiple CONFIG* DATADVH files are allowed, and
they are searched in reverse alphabetical order. Be sure to select the
correct configuration file. In the examples below, CONFIGxx is
the system configuration file.
Configuring the CONFIGxx DATADVH file:
About this task
Three statements need to be added to CONFIGxx DATADVH.
Procedure
1. Use DIRM commands to get and replace the file:
dirm send configxx datadvh
Note: The command will return the file. Receive it with the replace command.
2. Add the three statements:
ALLOW_ASUSER_NOPASS_FROM= VSMSERVE *
ASYNCHRONOUS_UPDATE_NOTIFICATION_EXIT.UDP= DVHXNE EXEC
USE_RACF= YES ALL
3. Verify that the following parameters are set correctly:
RUNMODE= OPERATIONAL
ONLINE= IMMED
DASD_ALLOCATE= EXACT_FF
DATAMOVE_MACHINE= DATAMOVE * *
DVHDXD_FLASHCOPY_BEHAVIOR= 2
DVHDXD_FLASHCOPY_COMPLETION_WAIT= 0 0
MAXIMUM_UNASSIGNED_WORKUNITS= 100
4. Save the file and exit.
5. Replace the file and make it active:
dirm file configxx datadvh
dirm rldcode
dirm rlddata
dirm rldextn
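Statements in CONFIGxx DATADVH follow a simple KEYWORD= value layout, so a quick offline check that the required statements are present before you file the configuration back can be sketched as follows (the helper is illustrative, not an IBM-supplied tool):

```python
# Sketch: parse "KEYWORD= value" statements from a CONFIGxx DATADVH
# fragment into a dictionary, for sanity checking before filing.
# Illustrative helper only, not part of DirMaint.

def parse_datadvh(text: str) -> dict:
    settings = {}
    for line in text.splitlines():
        if "=" in line:
            key, _, value = line.partition("=")
            settings[key.strip()] = value.strip()
    return settings


sample = """ALLOW_ASUSER_NOPASS_FROM= VSMSERVE *
RUNMODE= OPERATIONAL
ONLINE= IMMED
"""
cfg = parse_datadvh(sample)
assert cfg["RUNMODE"] == "OPERATIONAL"
assert "ALLOW_ASUSER_NOPASS_FROM" in cfg
```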
Allocation groups:
About this task
Configure DirMaint to automatically allocate minidisks from a predefined pool of
volumes. These pools are defined in the EXTENT CONTROL file:
Procedure
1. Retrieve the EXTENT CONTROL file from DirMaint.
DIRM SEND EXTENT CONTROL
RECEIVE 138 = = A
2. Edit the EXTENT CONTROL file to include the available DASD to be used as a
DASD pool:
* ********************************************************************
:REGIONS.
*RegionId VolSer RegStart RegEnd Dev-Type Comments
000001 Minidisk Pool Volume 0001 3338 3390-03
000002 Minidisk Pool Volume 0001 3338 3390-03
000003 Minidisk Pool Volume 0001 3338 3390-03
:END.
:GROUPS.
*GroupName RegionList
POOL0 (ALLOCATE ROTATING)
POOL0 000001 000002 000003
:END.
3. Send the EXTENT CONTROL file back to DirMaint, and make it active:
DIRM FILE EXTENT CONTROL A
DIRM RLDEXTN
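In the file above, the :REGIONS. section maps region IDs to DASD volumes, and the :GROUPS. section collects those region IDs into allocation pools. Extracting the defined region IDs for review can be sketched as follows (a hypothetical illustration; the authoritative file format is documented in the DirMaint publications):

```python
# Sketch: extract region IDs from the :REGIONS. section of an
# EXTENT CONTROL file. Comment lines start with "*".
# Illustrative only, not a DirMaint utility.

def region_ids(text: str):
    ids, in_regions = [], False
    for line in text.splitlines():
        token = line.strip()
        if token == ":REGIONS.":
            in_regions = True
            continue
        if token == ":END.":
            in_regions = False
            continue
        if in_regions and token and not token.startswith("*"):
            ids.append(token.split()[0])  # first column is the region ID
    return ids


sample = """:REGIONS.
*RegionId VolSer RegStart RegEnd Dev-Type
000001 POOL01 0001 3338 3390-03
000002 POOL02 0001 3338 3390-03
:END.
"""
assert region_ids(sample) == ["000001", "000002"]
```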
Creating the MAPSRV and MAPAUTH IDs
This task describes how to configure the MAPSRV and MAPAUTH IDs.
About this task
The MAPSRV ID serves as the Tivoli Provisioning Manager boot server.
MAPAUTH is the class A userid that issues restricted system commands on behalf
of Tivoli Provisioning Manager.
Use the following directory entries to create the MAPSRV and MAPAUTH IDs.
MAPSRV DIRECT A
USER MAPSRV PASSW0RD 512M 1G GT
INCLUDE IBMDFLT
IPL 150
MACHINE ESA
OPTION LNKNOPAS LANG AMENG
*
DEDICATE 0150 [Addr_of Dedicated DASD_for_Linux_System]
*
NICDEF Vswitch_OSA_Addr TYPE QDIO LAN SYSTEM Vswitch_Name
NICDEF MAPLAN_OSA_Addr TYPE QDIO LAN SYSTEM MAPLAN
*
MDISK 0191 3390 Starting cylinder 10 Minidisk volume MR
MDISK 0151 3390 Starting cylinder 200 Minidisk volume MR
MDISK 0192 3390 Starting cylinder 50 Minidisk volume
MAPAUTH DIRECT A
USER MAPAUTH PASSW0RD 32M 32M ABCDEFG
INCLUDE IBMDFLT
MDISK 0191 3390 Starting_cylinder 10 Minidisk volume MR
After the two files have been created, they need to be added to the z/VM user
directory:
dirm add mapsrv
dirm add mapauth
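A directory entry of this form is plain text whose first line follows the USER statement layout (user ID, password, initial storage, maximum storage, privilege classes). Assembling such an entry for review before filing it with DIRM can be sketched as follows (an illustration only; the field values shown are placeholders):

```python
# Sketch: build a minimal z/VM USER directory entry as text.
# The field layout follows the MAPAUTH example above; this generator
# is illustrative, not an IBM-supplied tool.

def user_entry(userid, password, storage, maxstorage, classes, extra=()):
    lines = [f"USER {userid} {password} {storage} {maxstorage} {classes}",
             "INCLUDE IBMDFLT"]
    lines.extend(extra)  # MDISK, NICDEF, and similar statements
    return "\n".join(lines)


entry = user_entry("MAPAUTH", "PASSW0RD", "32M", "32M", "ABCDEFG",
                   ["MDISK 0191 3390 100 10 VOL001 MR"])
assert entry.startswith("USER MAPAUTH PASSW0RD 32M 32M ABCDEFG")
assert "INCLUDE IBMDFLT" in entry
```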
Installing Linux on the MAPSRV ID
Information on installing Linux on System z can be found in Getting Started with
Linux on System z.
Install SLES 10 or RHEL 5.4 on the mapsrv ID and verify that the vmcp command
works.
mapsrv:~ # vmcp q time
TIME IS 06:37:02 EST THURSDAY 04/10/08
CONNECT= 99:59:59 VIRTCPU= 017:11.47 TOTCPU= 023:44.53
If the command fails, edit /etc/sysconfig/kernel and add vmcp to the
MODULES_LOADED_ON_BOOT section:
MODULES_LOADED_ON_BOOT="vmcp"
Copy IBM-System-z.MAPSRV-7.2.1-1.s390x.rpm from the following location:
/opt/IBM/tivoli/tpm/repository/IBM-System-z on the Tivoli Service Automation
Manager management server to the mapsrv id and install it with the rpm
command:
mapsrv:~ # rpm -ivh IBM-System-z.MAPSRV-<version number>.s390x.rpm
Reboot the mapsrv id.
Granting MAPAUTH the authority to issue DIRM commands:
Issue the following commands to DIRM to grant the authorizations:
dirm for all authfor mapauth cmdlevel 140a cmdset adghmops
dirm for all authfor mapauth cmdlevel 150a cmdset adghmops
Authorizing MAPAUTH to issue SMAPI commands
VSMSERVE is the machine that runs the z/VM SMAPI. When setting up SYSTEM
CONFIG, MAPLAN is created. Since the SMAPI is an RPC interface, the traffic is
secured by allowing only the MAPSRV and TCPIP systems to connect to it.
To authorize MAPAUTH to issue the SMAPI commands:
1. Log on to vsmserve:
#cp ipl cms
2. When prompted, enter any characters to stop the IPL and return to CMS mode.
3. Xedit the VSMSERVE AUTHLIST file and add MAPAUTH:
DO.NOT.REMOVE DO.NOT.REMOVE
MAINT ALL
VSMSERVE ALL
MAPAUTH ALL
4. Save the file, then log off VSMSERVE.
Setting up Linux on System z
This task describes how to configure Linux on System z to provision Linux guests.
About this task
This section describes how to create Linux on System z master systems.
Defining virtual machines for Linux on System z:
About this task
Creating Linux directory files:
About this task
A directory prototype will allow all provisioned guests to share common
statements. Multiple directory prototypes can be created to support as many
configurations as are needed.
Procedure
1. Create a file called LNXPROTO PROTODIR A:
USER LNXPROTO PASSW0RD 512M 512M G
CPU 00
CPU 01
CLASS G
STORAGE 512M
MAXSTORAGE 2047M
IPL 500
MACHINE ESA 2
OPTION MAINTCCW TODENABLE
CONSOLE 0009 3215 T
*
2. Use the DirMaint FILE command to tell DirMaint to keep a copy of this
prototype for future use:
dirm file LNXPROTO protodir
Note: The zVM_Prototype variable in the Tivoli Provisioning Manager virtual
server template (see "Customizing virtual server templates") must be set to the
appropriate protodir file.
3. Create a default Linux directory entry called LINDFLT DIRECT A:
PROFILE LINDFLT
CLASS G
STORAGE 512M
MAXSTORAGE 2047M
IPL 500
IUCV ALLOW
MACHINE ESA
OPTION QUICKDSP
CONSOLE 0009 3215 T
NICDEF Vswitch_OSA_Addr TYPE QDIO LAN SYSTEM zVM_LAN_Name
SPOOL 000C 2540 READER *
SPOOL 000D 2540 PUNCH A
SPOOL 000E 1403 A
LINK MAINT 0190 0190 RR
LINK MAINT 019D 019D RR
LINK MAINT 019E 019E RR
LINK TCPMAINT 0592 0592 RR
Creating Linux master systems:
About this task
The Linux master system serves as a template for provisioned Linux systems. The
Linux master can be built using VM minidisks or a full-pack minidisk. The master
system must have a single disk containing the Linux root and boot partitions. If
GNU/Linux LVM is used for the root partition, then any number of disks may be
added to the guest dynamically during provisioning. The boot partition cannot be
part of a logical volume.
Procedure
1. Log on to maint and create a Linux master directory entry called SL10MSTR
DIRECT A:
Sample for master with VM minidisk
USER SL10MSTR PASSW0RD 1024M 1024M G 64
INCLUDE IBMDFLT
CPU 0 NODEDICATE
CPU 1 NODEDICATE
IPL CMS
MACHINE ESA 4
OPTION QUICKDSP APPLMON
*
NICDEF Vswitch_OSA_Addr TYPE QDIO LAN SYSTEM zVM_LAN_Name
*
MDISK 0193 3390 Start_Cylinder End_Cylinder Volume_Name
You must specify a read password for Linux on System z mini-disks.
Note: For a full-pack minidisk, the DEVNO keyword must be used on the
MDISK statement.
2. Use DIRM to create a z/VM guest for SL10MSTR:
dirm add sl10mstr
Setting up a Linux on System z master system:
Installing Linux on SL10MSTR:
About this task
Install Linux on System z on the SL10MSTR user ID. See Getting Started with Linux
on System z for more information on installing Linux on System z.
Enabling personalization:
About this task
After installing the master system, make changes to the Linux system to enable
personalization:
Procedure
1. Regardless of which Linux distribution you use, make sure that the python
and python-xml packages are installed.
2. Copy IBM-System-z.MASTER-<version number>.s390x.rpm from the following
location: /opt/IBM/tivoli/tpm/repository/IBM-System-z on Tivoli Service
Automation Manager management server into the Linux master ID and install
it with the following command:
sl10mstr:~ # rpm -ivh IBM-System-z.MASTER-<version number>.s390x.rpm
3. Check the mount definitions in the file /etc/fstab. Mounts should happen by
label, not by device or ID, which can change in the copied image.
(Optional) Disabling the boot menu at IPL:
About this task
This procedure eliminates the 10-second wait for the boot menu during the IPL.
Procedure
1. Edit the :menu section of /etc/zipl.conf and change prompt = 1 to prompt
= 0.
2. Save the change and run the zipl command.
Disabling parallel boot option on SUSE Linux Enterprise Server 11:
About this task
On SUSE Linux Enterprise Server 11, ensure that the parallel boot option is
disabled. To disable this option:
Procedure
1. Edit the file /etc/sysconfig/boot.
2. Edit the following parameter as shown:
RUN_PARALLEL="no"
3. Save the file.
Verifying your configuration:
About this task
To verify the new configuration:
Procedure
1. Log in to mapsrv.
2. Change to directory /opt/ibm/ztsam/workflow and run the following command:
../bin/uhubrpcclient <z/VM IP address> <SM API port number> mapauth
<password> 12345 imagequery SL10MSTR
The command should return the z/VM directory entry of SL10MSTR.
Using RACF with z/VM
This task describes how to work with z/VM in a RACF environment.
About this task
If the RACF Security Server for z/VM is installed, additional setup is required for
Tivoli Service Automation Manager. The system RACF Administrator will need to
configure RACF to allow Tivoli Service Automation Manager to access system
resources. The setup in the previous sections must be completed before configuring
RACF.
The steps are as follows:
1. Permit access to system resources.
2. Configure z/VM networking.
3. Grant RACF Admin authority to DirMaint and Datamove.
4. Configure VSMSERVE.
Permitting access to system resources:
Procedure
1. Configure access to the maint, dirmaint, operator, and ftpserve system readers:
rac permit maint class(vmrdr) id(datamove) acc(update)
rac permit dirmaint class(vmrdr) id(vsmserve) ac(update)
rac permit operator class(vmrdr) id(tcpip) acc(update)
rac permit ftpserve class(vmrdr) id(ftpserve) acc(control)
2. Allow VSMSERVE to access the z/VM parm disk:
rac setropts generic(vmmdisk)
rac permit maint.cf1 acc(alter) id(vsmserve)
rac permit maint.cf2 acc(alter) id(vsmserve)
Configuring z/VM networking:
Procedure
1. Define RACF resources for Vswitches. In the RACF commands below,
zVM_LAN_Name is the name of the Vswitch defined on the system.
RAC RDEFINE VMLAN SYSTEM.[zVM_LAN_Name] UACC(NONE)
2. If the system implements a VLAN, define a RACF resource for the VLAN. The
VLAN number must be 4 digits, with leading zeroes if necessary.
RAC RDEFINE VMLAN SYSTEM.[zVM_LAN_Name].[VLAN] UACC(NONE)
3. Reset VMLAN definitions:
RAC PERMIT SYSTEM.[zVM_LAN_Name] CLASS(VMLAN) RESET(ALL)
4. Allow update access to Maint and Dtcvsw1:
RAC PERMIT SYSTEM.[zVM_LAN_Name] CLASS(VMLAN) ID(MAINT) ACCESS(UPDATE)
RAC PERMIT SYSTEM.[zVM_LAN_Name] CLASS(VMLAN) ID(DTCVSW1) ACCESS(UPDATE)
5. Allow Mapsrv and Tcpip to connect to vswitch. If a VLAN is defined, it must
be 4 digits.
RAC PERMIT SYSTEM.[zVM_LAN_Name] CLASS(VMLAN) ID(MAPSRV) ACCESS(UPDATE)
RAC PERMIT SYSTEM.[zVM_LAN_Name].[VLAN] CLASS(VMLAN) ID(MAPSRV) ACCESS(UPDATE)
6. Activate the VMLAN class:
RAC SETROPTS CLASSACT(VMLAN)
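Steps 2 and 5 above require the VLAN qualifier in the VMLAN profile name to be exactly 4 digits, with leading zeroes. The padding rule can be sketched as follows (an illustrative helper, not a RACF command):

```python
# Sketch: format a VLAN number as the 4-digit, zero-padded string
# used in RACF VMLAN profile names (e.g. SYSTEM.MYLAN.0012).
# Illustrative only; the 1-4094 bound is the usual VLAN ID range.

def racf_vlan(vlan: int) -> str:
    if not 1 <= vlan <= 4094:
        raise ValueError("VLAN ID must be between 1 and 4094")
    return f"{vlan:04d}"


assert racf_vlan(12) == "0012"
assert racf_vlan(1234) == "1234"
```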
Configuring DIRMAINT/DATAMOVE:
Procedure
Give Dirmaint and Datamove RACF Admin authority:
rac alu dirmaint special
rac alu datamove operations
Configuring VSMSERVE:
Procedure
1. Allow VSMSERVE to perform password validation:
RAC RDEFINE VMCMD DIAG0A0.VALIDATE UACC(NONE)
RAC PERMIT DIAG0A0.VALIDATE CLASS(VMCMD) ID(VSMSERVE) ACCESS(READ)
RAC SETROPTS CLASSACT(VMCMD)
2. Allow VSMSERVE to connect to the RACF service machine:
RAC RDEFINE FACILITY ICHCONN UACC(NONE)
RAC PERMIT ICHCONN CLASS(FACILITY) ID(VSMSERVE) ACCESS(UPDATE)
RAC SETROPTS CLASSACT(FACILITY)
3. If protection for DIAG0A0 is not currently active, activate it by issuing:
RALTER VMXEVENT EVENTS1 DELMEM(DIAG0A0/NOCTL)
SETEVENT REFRESH EVENTS1
Note: DIAG0A0 is active by default. However, this setting can be changed in
the currently active VMXEVENT profile by issuing:
RDEFINE VMXEVENT EVENTS1 ADDMEM(DIAG0A0/NOCTL)
Updating VSMSERVE DTCPARMS:
Procedure
1. Log on to VSMSERVE.
2. IPL CMS: enter any characters to bypass the IPL function.
3. Xedit vsmserve dtcparms and verify that the following statements are coded:
:Nick.VSMSERVE :Type.server :Class.VSMAPI
:ESM_ENABLE.YES
:PARMS.-E
:Owner.VSMSERVE
:Exit.VSMEXIT
:Nick.VSMAPI :Type.class
:Name.Virtual System Management API server
:Command.DMSVSMAS
:Runtime.C
:Diskwarn.YES
:ESM_Validate.RPIVAL
:ESM_Racroute.RPIUCMS
Updating DIRMAINT:
Procedure
1. From the maint ID, retrieve CONFIGXX DATADVH and add definitions for
RACF. See Setting up Directory Maintenance Facility for z/VM (DirMaint)
on page 157 for more information on the CONFIGXX DATADVH file. The
DIRMAINT-RACF relationship present in the default DIRMAINT configuration
files and the modifications shown here must be preserved so that DIRMAINT
can perform the appropriate ADDUSER/DELUSER RDEFINE/RDELETE
commands when the CP directory entries for provisioned servers are created
and deleted. You can extend these configuration files and the DVHXPN user
exits as long as the ability to perform these operations is preserved.
dirm send configxx datadvh
2. Add the following lines to the file:
POSIX_CHANGE_NOTIFICATION_EXIT= DVHXPESM EXEC
LOGONBY_CHANGE_NOTIFICATION_EXIT= DVHXLB EXEC
USER_CHANGE_NOTIFICATION_EXIT= DVHXUN EXEC
DASD_OWNERSHIP_NOTIFICATION_EXIT= DVHXDN EXEC
PASSWORD_CHANGE_NOTIFICATION_EXIT= DVHXPN EXEC
RACF_ADDUSER_DEFAULTS= UACC(NONE)
RACF_RDEFINE_VMMDISK_DEFAULTS= UACC(NONE) AUDIT(FAILURES(READ))
RACF_RDEFINE_VMPOSIX_POSIXOPT.QUERYDB= UACC(READ)
RACF_RDEFINE_VMPOSIX_POSIXOPT.SETIDS= UACC(NONE)
RACF_RDEFINE_SURROGAT_DEFAULTS= UACC(NONE) AUDIT(FAILURES(READ))
RACF_RDEFINE_VMBATCH_DEFAULTS= UACC(NONE) AUDIT(FAILURES(READ))
RACF_RDEFINE_VMRDR_DEFAULTS= UACC(NONE) AUDIT(FAILURES(READ))
RACF_VMBATCH_DEFAULT_MACHINES= BATCH1 BATCH2
TREAT_RAC_RC.4= 0 | 4 | 30
3. Save the file by entering:
dirm file configxx datadvh
4. Xedit vsmserve dtcparms and verify that the following statements are coded:
:Nick.VSMSERVE :Type.server :Class.VSMAPI
:ESM_ENABLE.YES
:PARMS.-E
:Owner.VSMSERVE
:Exit.VSMEXIT
:Nick.VSMAPI :Type.class
:Name.Virtual System Management API server
:Command.DMSVSMAS
:Runtime.C
:Diskwarn.YES
:ESM_Validate.RPIVAL
:ESM_Racroute.RPIUCMS
Configuring the KVM environment for Tivoli Service
Automation Manager
This section provides information on how to set up the KVM environment so that
Tivoli Service Automation Manager can provision and manage virtual servers with
the KVM hypervisor.
Before you begin
These setup steps are intended for a skilled KVM administrator. Before you
configure Tivoli Service Automation Manager for KVM, there are some preparatory
steps that must be performed on the KVM environment. This task
describes the high-level steps for setting up the KVM hypervisor and the KVM
image server to meet the Tivoli Service Automation Manager requirements.
Customize the KVM environment as described in this section and collect the
settings as they will be required as input for the subsequent configuration task. If
you already have a KVM server and KVM image server installed, ensure that their
settings match the requirements described. To facilitate the task, use Table 17
on page 168 to record your settings.
About this task
Procedure
1. Install and configure the KVM hypervisor on a host. This is the server that will
provision the virtual machines.
a. Install Linux on a server and ensure that the SSH and NFS servers are
installed and active.
b. Mount a partition on the server at /var and ensure that it has as much
space available as possible, because it will be used for provisioning servers.
c. Record the password for the root user ID in the worksheet or in a protected
source.
d. Record the IP address, the subnet mask, and the MAC address of the
KVM server in the worksheet.
e. When configuring the networks on the KVM hypervisor host, if you are
configuring separate networks on the host, one for the management
network and one for the customer network, you must configure the first
network card on the host ("eth0") to be on the management network (by
default, 10.160.0.0/255.255.0.0) and the second network card ("eth1") to be
on the customer network (by default, 10.180.0.0/255.255.0.0). If configuring
only one network for both, they should both be on the first network card
("eth0").
166 Tivoli Service Automation Manager V7.2.4.4 Installation and Administration Guide
f. If the KVM host is configured with a non-static IP address, obtain the
range of IP addresses that the DHCP server gives out and record it in the
worksheet. Tivoli Service Automation Manager needs this range so that, when it
discovers the host on which the KVM hypervisor is running, it can assign the
host a static IP address within Tivoli Provisioning Manager. You will set
this range in a later configuration step for Tivoli Service Automation
Manager. If you assign a static IP address to your host, ensure that it is
outside the range defined for the management network.
g. Increase the maximum number of loop devices to be used per provisioning
to 64, then reboot the hypervisor. Edit /etc/modprobe.conf and add the
following line:
options loop max_loop=64
Then edit /etc/udev/makedev.d/50-udev.nodes and add the device node entries
loop9 through loop63.
Note: For large KVM hosts it may be necessary to increase the maximum
number of loop devices up to the kernel limit (currently 256).
h. In order to provision Windows 2008 R2 or Windows 2008 R2 SP1 Enterprise
Edition, an extra package is required: ntfs-3g. Install this package on the
KVM host platform.
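The loop-device change in step 1g can be scripted. The sketch below writes to demo copies under /tmp so it is safe to dry-run; on a real KVM host the target files are /etc/modprobe.conf and /etc/udev/makedev.d/50-udev.nodes, and a reboot is still required afterwards.

```shell
# Sketch of step 1g: raise the loop-device limit to 64.
# Demo copies under /tmp are used here; on a real host, edit
# /etc/modprobe.conf and /etc/udev/makedev.d/50-udev.nodes instead.
conf=/tmp/modprobe.conf.demo
nodes=/tmp/50-udev.nodes.demo
rm -f "$conf" "$nodes"

# modprobe option that raises the kernel loop-device limit
echo "options loop max_loop=64" >> "$conf"

# udev node entries loop9 .. loop63 (loop0-loop8 exist by default)
for i in $(seq 9 63); do
  echo "loop$i" >> "$nodes"
done

wc -l < "$nodes"   # prints 55 (entries loop9 .. loop63)
```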
2. Install and configure the KVM image server and NFS server on a host. This is
the server on which the operating system image templates are stored.
This server must belong to the management subnetwork that you will
configure in the subsequent section Customizing a Tivoli Service Automation
Manager cloud pool for KVM on page 212.
a. Install Red Hat Linux version 5.4 on a physical server and ensure that the
KVM image server and the NFS server are installed, as well as the KVM
hypervisor packages.
b. Create a /repository directory, if not already present.
c. Configure the NFS server to export that directory. This is the directory
that is mounted to access the images to use during the KVM provisioning
of virtual machines.
d. Create a subdirectory called /kvmimages in the /repository directory. This
will be the directory where the KVM images should be stored for use.
e. Record the password for the root user ID in the worksheet or in a protected
source.
f. Record the IP address of the KVM image server in the worksheet.
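Steps 2b through 2d above amount to a few shell commands. The sketch below uses a scratch directory so it can be dry-run safely; on the real image server the directory is /repository, the export line goes into /etc/exports, and the export options shown are a typical choice, not mandated by this guide.

```shell
# Sketch of steps 2b-2d: create the repository layout and an NFS export line.
# A scratch path is used for a safe dry run; on the image server use
# REPO=/repository, append the line to /etc/exports, and run exportfs -ra.
REPO=${REPO:-/tmp/repository.demo}
rm -rf "$REPO"
mkdir -p "$REPO/kvmimages"

# Typical NFS export line for the repository (options are an assumption)
echo "$REPO *(rw,sync,no_root_squash)" > "$REPO/exports.line"
cat "$REPO/exports.line"
```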
3. Customize the KVM environment for the KVM image backups.
a. Select the server that you will use for KVM image backups. This server
must have enough space for storing KVM image backups. This server must
belong to the management subnetwork that you will configure in the
subsequent section Customizing a Tivoli Service Automation Manager
cloud pool for KVM on page 212.
b. Record the IP address of that server in the worksheet.
c. Record the password for the root user ID in the worksheet or in a protected
source.
d. Create a directory that can be used for image backups.
e. Record the path of that kvm_backup_directory in the worksheet below.
4. Create one or more OS image templates that can be used by Tivoli Service
Automation Manager and store them on the KVM Image server in the directory
/repository/kvmimages. Special requirements must be met for these templates
so that they can be used by Tivoli Service Automation Manager. Refer to
Preparing a Linux image on page 309 for more details about creating these
templates.
Results
Table 17. The KVM server and KVM image server settings worksheet

Your KVM setting       Description                                        Your value
DHCP server IP ranges  Range of IP addresses that the DHCP server uses
                       when assigning dynamic IP addresses to hosts.
IP address             IP address of the KVM hypervisor server host
Subnetwork mask        Subnetwork mask of the KVM hypervisor server host
MAC address            MAC address of the KVM hypervisor server host
Password               Password for the root user of the KVM hypervisor
                       server host. You may want to record the password
                       in a protected source.
IP address             IP address of the KVM image server host
Password               Password for the root user of the KVM image
                       server host. You may want to record the password
                       in a protected source.
IP address             IP address of the host used for KVM image backups
Password               Password for the root user of the host used for
                       KVM image backups. You may want to record the
                       password in a protected source.
kvm_backup_directory   The name of the directory used for backup,
                       located on the host used for KVM image backups
Configuring the Xen environment for Tivoli Service
Automation Manager
This section provides information about the prerequisites required for setting up
Xen so that Tivoli Service Automation Manager can provision and manage virtual
servers with Xen.
Preparing for an automatic Xen host install
If DHCP is available, the Xen hosts can be installed automatically by booting
the target server from the network.
About this task
Remember that the only version of Xen supported by Tivoli Service Automation
Manager is Xen 3.0.3 on Red Hat Enterprise Linux 5.3. Newer versions of Xen are
not supported.
Procedure
1. Create a configuration file called /etc/dhcpd.conf that contains the following
code:
# DHCP Server Configuration file.
# see /usr/share/doc/dhcp*/dhcpd.conf.sample
#
ddns-update-style interim;
ignore client-updates;
option space gpxe;
option gpxe-encap-opts code 175 = encapsulate gpxe;
option gpxe.bus-id code 177 = string;
option iscsi-initiator-iqn code 203 = string;
option gpxe.bios-drive code 189 = unsigned integer 8;
option gpxe.keep-san code 8 = unsigned integer 8;
subnet 10.160.0.0 netmask 255.255.0.0 {
    #option domain-name "";
    #option domain-name-servers 10.160.0.1;
    option routers 10.160.0.1;
    option subnet-mask 255.255.0.0;
    range dynamic-bootp 10.160.250.0 10.160.254.0;
    default-lease-time 86400;
    max-lease-time 86400;
    option vendor-class-identifier "PXEClient"; #version 3
    # Vendor Class setup for PXE
    option vendor-encapsulated-options 01:04:00:00:00:00:ff;
    next-server 10.160.0.64;
    filename "pxelinux.0";
    host test {
        hardware ethernet 00:14:5E:6D:86:66;
        next-server 10.160.0.64;
        filename "";
        option iscsi-initiator-iqn "iqn.1994-05.com.ibm:00145e6d8660";
        option root-path "iscsi:10.160.0.25::::iqn.1986-03.com.ibm:sn.101184607";
        # option gpxe.bios-drive 130;
        option gpxe.keep-san 1;
        if not exists gpxe.bus-id {
            filename "gpxe/undionly.kpxe";
        }
    }
}
a. Locate subnet 10.160.0.0 netmask 255.255.0.0 and replace it to match
your own subnet.
b. Locate range dynamic-bootp 10.160.250.0 10.160.254.0 and replace the
IP range to match your own dynamic IP range.
c. Locate next-server 10.160.0.64 and replace this IP address with the IP
address of your tftpboot server; in this example, it is the same as the DHCP
server.
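Substeps a through c can also be applied with sed against a working copy of the file. In the sketch below, /tmp/dhcpd.conf.demo stands in for /etc/dhcpd.conf, and the 192.0.2.x addresses are documentation placeholders; substitute your own subnet, dynamic range, and tftpboot server address.

```shell
# Sketch of substeps a-c: patch site-specific values into a working copy.
# /tmp/dhcpd.conf.demo stands in for /etc/dhcpd.conf; the 192.0.2.x
# values are placeholders, not real addresses.
cfg=/tmp/dhcpd.conf.demo
cat > "$cfg" <<'EOF'
subnet 10.160.0.0 netmask 255.255.0.0 {
range dynamic-bootp 10.160.250.0 10.160.254.0;
next-server 10.160.0.64;
}
EOF

# a) subnet, b) dynamic range, c) tftpboot server
sed -i \
  -e 's/10\.160\.0\.0 netmask 255\.255\.0\.0/192.0.2.0 netmask 255.255.255.0/' \
  -e 's/10\.160\.250\.0 10\.160\.254\.0/192.0.2.100 192.0.2.200/' \
  -e 's/10\.160\.0\.64/192.0.2.10/' \
  "$cfg"
grep next-server "$cfg"
```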
2. Start DHCP.
a. Log on as root.
b. Run the service dhcpd start command.
3. Ensure that DHCP is running:
a. Log on as root.
b. Type service dhcpd status. You should see:
dhcpd (pid ####) is running...
4. Log on to the NFS server as root and create a /repository directory with the
following subdirectories:
v xen-host-platform - including the OS installation images for kick start
installation
v xenimages - including the Xen images for Xen provisioning
v kvmimages - including the KVM images for KVM provisioning
v itm621 - including the Tivoli Monitoring v6.2.1 installation images for the
monitoring agent middleware provisioning
v windows - including the Windows sysprep files for Windows image creation
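The directory layout listed above can be created in one short loop. The sketch below targets a scratch root so it can be dry-run anywhere; on the NFS server, run it with REPO=/repository.

```shell
# Sketch of step 4: create the /repository subdirectories on the NFS server.
# A scratch root keeps this safe to dry-run; use REPO=/repository on the server.
REPO=${REPO:-/tmp/repository.xen.demo}
rm -rf "$REPO"
for d in xen-host-platform xenimages kvmimages itm621 windows; do
  mkdir -p "$REPO/$d"
done
ls "$REPO"
```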
Creating the Post Install Script file
About this task
Create the CloudPostInstall script to complete the process of setting up Xen. When
copying the code to the file you are creating, pay special attention to the following
values and change them accordingly:
v Replace the value 10.160.0.130 with the IP address of your Tivoli Provisioning
Manager server.
v The last two values of the # chkconfig: 2345 99 01 line set the priorities to
start and stop the script. You may need different values for your
environment, for example 85 15.
Procedure
1. Log on to the NFS server.
2. Create the CloudPostInstall directory using the following command: > mkdir -p
/repository/xen-host-platform/rhel5.3-64/CloudPostInstall.
3. Copy the following lines into /repository/xen-host-platform/rhel5.3-64/
CloudPostInstall/CloudPostInstall:
#!/bin/bash
#
# CloudPostInstall   Cloud Post Install script
#
# chkconfig: 2345 99 01
# description: script should be started in levels 2,3,4, and 5, start priority should be 99, \
#              and stop priority should be 01.

# source function library
. /etc/init.d/functions

TPM_URL="http://10.160.0.130:9080"
RETVAL=0

start() {
    echo -n $"Running Cloud Post Install scripts..."

    # set authorized_keys
    mkdir -p /root/.ssh
    rm -f /root/.ssh/authorized_keys
    wget -O /root/.ssh/authorized_keys "$TPM_URL/TpmListener?getpublickey=true"
    chmod 644 /root/.ssh/authorized_keys

    # get the interface that has an IP address
    dev=`ifconfig | grep -B 2 "inet addr:" | head -n 1 | cut -f 1 -d" "`
    IPADDR=`ifconfig $dev | grep Mask | awk '{print $2}' | cut -f2 -d:`
    MACADDR=`ifconfig $dev | grep HWaddr | awk '{print $5}'`
    MASK=`ifconfig $dev | grep Mask | awk '{print $4}' | cut -f2 -d:`
    wget "$TPM_URL/TpmListener/CallbackServlet?ip=$IPADDR&mac=$MACADDR&initmask=$MASK&poolname=Xen Cloud Pool"

    # the script removes itself so it only runs on the first boot
    chkconfig CloudPostInstall off
    chkconfig --del CloudPostInstall
    rm /etc/init.d/CloudPostInstall
    [ $RETVAL -eq 0 ] && touch /var/lock/subsys/CloudPostInstall
}

stop() {
    echo -n $"not implemented"
    RETVAL=$?
    echo
    [ $RETVAL -eq 0 ] && touch /var/lock/subsys/CloudPostInstall
}

case "$1" in
    start)
        start
        ;;
    stop)
        stop
        ;;
    restart|reload)
        stop
        start
        RETVAL=$?
        ;;
    *)
        echo $"Usage: $0 {start|stop|restart}"
        exit 1
esac

exit $RETVAL
Results
The CloudPostInstall script is ready.
Setting up Xen
About this task
If you have a DHCP server available in your environment, you can boot your
target server from the network and install the operating system with kick
start automatically. After the OS installation, the post-installation steps in
the kick start file call the CloudPostInstall script to prepare your rhel5.3
system and reboot. After the reboot, the installation process calls the
RP.CreateBareMetalHost workflow to finish the Tivoli Provisioning Manager DCM
setup. After this Xen host has been created as a Provisioning Computer object,
you can proceed to step 2, Check the status of the RP.CreateBareMetalHost
workflow, to continue your setup.
If you do not have a DHCP server available in your environment, follow this
procedure to manually prepare your Xen host, and then proceed to step 2, Check
the status of the RP.CreateBareMetalHost workflow.
Procedure
1. Install rhel5.3 manually.
a. Use the Linux text install.
b. Type in the license number.
c. Select disabled for the firewall.
d. Select disabled for selinux.
e. Allocate all disks to VolGroup00.
f. Allocate 5 GB to LogVol00 as the / directory, and define fstype as ext3.
g. Allocate 1 GB to LogVol01 as swap.
h. Make sure to select the virtualization package.
2. Verify the rhel5.3 system you just installed. Make sure that the following
RPM packages are installed on your system; otherwise, install them.
v ntp-4.2.2p1-9.el5.x86_64.rpm
v xen-libs-3.0.3-80.el5.i386.rpm
v xen-libs-3.0.3-80.el5.x86_64.rpm
v compat-libstdc++-33-3.2.3-61
v vnc-server-4.1.2-14.el5
3. Edit the /boot/grub/menu.lst file and change the value of dom0_mem to
512M.
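Step 3 can be done with a one-line sed edit. The sketch below edits a demo copy; on the Xen host the real file is /boot/grub/menu.lst, and the sample kernel line is only an illustration of where dom0_mem appears.

```shell
# Sketch of step 3: set dom0_mem=512M in the grub config.
# A demo copy stands in for /boot/grub/menu.lst; the kernel line is a sample.
f=/tmp/menu.lst.demo
echo "kernel /xen.gz-2.6.18 dom0_mem=256M" > "$f"
sed -i 's/dom0_mem=[0-9]*[MG]/dom0_mem=512M/' "$f"
cat "$f"   # prints: kernel /xen.gz-2.6.18 dom0_mem=512M
```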
4. Run the CloudPostInstall script.
a. Log on as root to the Xen host which you just installed.
b. Mount the NFS export, install the CloudPostInstall script as an init script,
and run it by rebooting the system:
v mkdir /repository (create the /repository directory, if it does not exist yet)
v mount <NFS_server_IP>:/repository /repository
v cd /repository/xen-host-platform/rhel5.3-64/CloudPostInstall (replace
rhel5.3-64 to point to your install image)
v cp CloudPostInstall /etc/init.d/ (copy the CloudPostInstall script to the
/etc/init.d directory)
v chmod 777 /etc/init.d/CloudPostInstall
v chkconfig --add CloudPostInstall
v chkconfig CloudPostInstall on
v chkconfig sendmail off
v echo "blacklist tg3" >> /etc/modprobe.d/blacklist
v reboot
Results
The CloudPostInstall script will start the RP.CreateBareMetalHost Tivoli
Provisioning Manager workflow.
Configuring cloud server pools
Use the Tivoli Service Automation Manager Cloud Server Pool Administration
application to create and configure cloud server pools.
About this task
A cloud server pool is the central object that Tivoli Service Automation Manager
uses to define cloud environments. It contains references to all Data Center Model
(DCM) resources needed for the pool (including hypervisor manager, host
platform, file repositories, and resource pools). Furthermore, it defines reservation
parameters used during the provisioning of cloud projects. For each hypervisor
used for server provisioning, one cloud server pool must be created and properly
configured. The creation and configuration of such pools is facilitated by the Cloud
Server Pool Administration application.
Note: The image repository that you configure must have enough space to save
images at any time. Tivoli Service Automation Manager does not check whether
there is enough space available. If no space is available when saving an image, an
inbox assignment is created to manually recover from the situation. You must then
manually cancel the save request and all following steps. Then, add additional disk
space to the image repository at the back end by using hypervisor-specific
administration tasks.
You can choose one of the two methods to create a cloud server pool:
v Create DCM XML files, load the vrpool.properties file, perform the discovery
and then validate and enable the cloud server pool.
v Perform all the configuration steps manually using the Cloud Server Pool
Administration application.
The following is the general procedure for cloud server pool creation:
Procedure
1. Create all objects required for the pool, such as a computer object for the
hypervisor host, a file repository object for the virtual machine images, and
network definitions. These are the objects that fundamentally define the
hypervisor on which servers are to be provisioned.
2. Define user-specific reservation parameters.
3. Create a cloud server pool object.
4. Associate all the created objects with the cloud server pool object.
5. Perform all the required steps (for example, hypervisor discovery).
Attention: When configuring a VMware cloud server pool, do not run any
Tivoli Provisioning Manager VMware discovery workflows to discover virtual
machines or images. To start a Tivoli Service Automation Manager discovery
workflow, use the Cloud Server Pool Administration application. Running a
Tivoli Provisioning Manager VMware discovery is incompatible with Tivoli
Service Automation Manager, destroys the Tivoli Provisioning Manager
relationships, and can lead to serious Tivoli Service Automation Manager
malfunction.
6. Validate and enable the cloud server pool.
Configuring cloud server pools manually
You can use the Cloud Server Pool Administration application to manually
perform all the configuration steps one after another.
About this task
This documentation suggests an order of configuration steps in which cloud
server pools and cloud storage pools are configured before the network
configuration is performed. Server and repository DCM objects that are
created during cloud server pool configuration in the Cloud Server Pool
Administration application require associated subnetwork DCM objects to be
operable. These subnetworks are created automatically if no matching ones
exist in DCM that are compatible with the specified network address of the
server or repository. It is important that the subnetworks of these server or
repository objects do not overlap with the management or customer
subnetworks that are created as part of the network configuration. Such a
configuration is not supported and leads to errors.
To avoid potential problems, you can import the network DCM objects or
perform the network configuration before configuring cloud server and cloud
storage pools. You can also avoid this problem by using DCM objects instead of
the "Quick Define" functions to configure cloud server pools.
This section contains topics that describe the procedure for manually creating a
new cloud server pool for all supported hypervisors. The steps are performed in
the Cloud Server Pool Administration application.
Procedure
1. Log on to the administrative user interface.
2. Click Go to > Service Automation > Configuration > Cloud Server Pool
Administration.
Note: Configuring a cloud server pool for different hypervisors involves
different steps. When you change the hypervisor type in the Hypervisor Type
field, the tabs in the Resource Configuration section are also changed.
Manually configuring cloud server pools for VMware
Perform these steps to manually configure a VMware cloud server pool in the
Cloud Server Pool Administration application.
Before you begin
Collect the following information about the VirtualCenter:
v The host name, IP address, and network mask
v User name and password
v VirtualCenter host certificate
Additionally, collect the following information:
v The name of the data center, for example "CloudDC"
v The name of the data store or data stores attached to the ESX hosts
Note: You can create a cluster under a folder. However, sub-levels are not allowed.
Ensure that the cluster name is unique.
Procedure
1. In the Cloud Server Pool Details tab, click the New Cloud Server Pool icon.
Specify a name for your new cloud server pool, for example "VMware Cloud
Pool 1". Set the hypervisor type to VMware.
2. In the Resource Configuration section, in the Resource Pool Configuration
tab, click Define New Resource Pool.
a. In the Define New VMware Resource Pool window, enter a name for the
new resource pool, for example "VMware Resource Pool 1". Optionally,
enter the NIC adapter type in this window.
b. Click Save.
3. Open the Virtual Center Configuration tab.
a. Click Define New Virtual Center.
b. Enter the following details of the virtual center:
v Hypervisor Manager - the virtual center that manages the VMware ESX
host.
v Data Center Path - the data center path as it is seen in the VMware
virtual center.
v Cluster names - name of the clusters in the data center.
Remember: Though the cluster names field is optional, enter the cluster
names if you have more than one cluster in the same data center and
these clusters are used or planned for use by other server pools.
Otherwise, after discovery, all the ESX hosts present in these clusters are
removed from other server pools and are added to the server pool from
which discovery is run. If required, you have to add these ESX hosts
back to their respective resource pools manually.
Note: Click Host Discovery if you want to discover a single managed
server.
c. Click Save.
d. Click Virtual Center Discovery. If you have problems, see
Troubleshooting when using VMware on page 513.
Attention: Creating multiple resource pools with the same virtual center
is not supported. If created, changing the saved image repository for one
of the resource pools may not be allowed and may also cause other issues.
Attention: Do not run any Tivoli Provisioning Manager VMware
discovery workflows to discover virtual machines or images. To start a
Tivoli Service Automation Manager discovery workflow, click Virtual
Center Discovery in the Cloud Server Pool Administration application.
Running a Tivoli Provisioning Manager VMware discovery is incompatible
with Tivoli Service Automation Manager, deletes the Tivoli Provisioning
Manager relationships, and can lead to serious Tivoli Service Automation
Manager malfunction.
Remember: First, you must import the SSL certificate of the virtual center
into the WebSphere certificate store. Otherwise, the discovery workflow
fails with a "No certificate found" error.
e. Click Save.
f. Click Refresh to see the current discovery workflow state.
Note: "Success" indicates that this step is finished. If the workflow fails,
you can open the Provisioning Workflow application and start the
workflow manually. You can also open the Provisioning Workflow Status
application to analyze the failed workflow.
After the discovery, the ESX host or hosts are added to the resource pool.
You can click the Delete icon on the right to remove hosts from the
resource pool.
Note: Running parallel host platform discoveries is not supported by Tivoli
Provisioning Manager.
4. You do not need to run the image discovery in the Image Template Discovery
tab now. Virtual server images are discovered during the virtual center
discovery. Click Image Discovery in this tab only if you want to discover new
images that were added at a later time.
5. Open the Additional Resources tab.
a. Specify the name of the data store on which you want Tivoli Service
Automation Manager to create provisioned virtual servers. You must
assign the data stores which you select in the Additional Resources and
Save/Restore Settings tabs to all ESX host platforms. The Cloud Server
Pool Administration application verifies that this requirement is met.
Consequently, in multi-cluster VMware cloud pool configurations, you can
select only SAN storage that is connected to all ESX hosts of all
specified clusters. You can select local storage disks only in single-cluster
and single-ESX-host configurations.
b. Click Save.
6. Open the Save / Restore Settings tab.
a. Select the data store that you want to use for saved virtual server images.
Note:
v Data stores which are used to store saved images must be uniquely
assigned to the cluster which is associated with the current cloud server
pool. The data store must not be shared across clusters.
v The data repositories are discovered during the virtual center discovery.
Therefore, wait until the discovery workflow status changes to "Success"
before trying to select the data store in this field.
v Data stores which are used to store saved images must be attached to all
ESX hosts of the cluster which is associated with the current cloud
server pool.
v It is highly recommended to use separate data stores for provisioning of
virtual servers and for storing saved images. The space occupied by
saved images is not considered for available storage capacity calculation
during resource checking.
b. Click Save.
7. Review the values in the Provisioning Parameters tab and adjust them to
your needs.
8. If you want to assign this cloud server pool to all customers:
a. Switch the main tab from Cloud Server Pool Details to Customers.
b. Mark the box Assigned to all customers?.
Such a cloud server pool is shared by all customers, and it is not necessary
to assign it to individual customers in the Cloud Customer Administration
application.
9. Click Validate and Enable Cloud Server Pool.
Note: After validation, all input fields become read-only. Discoveries are not
possible until the cloud server pool is disabled again.
10. If the following error message is displayed during the validation: CTJZH2064E
- The global provisioning property Cloud.RSAFile is not configured
properly. Current value: __TIO_HOME_/keys/TSAM/identity, then the
Cloud.RSAFile global provisioning property is generated but not customized.
Perform the following steps to customize it:
a. Log on to the administrative user interface and click Go to >
Administration > Provisioning > Provisioning Global Settings.
b. Open the Variables tab and enter Cloud.RSAFile in the field.
c. Replace the string __TIO_HOME_ with the fully qualified path to the
directory of the Tivoli Provisioning Manager management server, for
example /opt/IBM/tivoli/tpm.
d. Click Save.
Related tasks:
VirtualCenter discovery does not load the servers on page 516
Problem: The VirtualCenter discovery does not load the discovered ESX Servers
into the ESX Cloud Pool.
Manually configuring cloud server pools for KVM
Perform these steps to manually configure a KVM cloud server pool in the Cloud
Server Pool Administration application.
Before you begin
Collect the following information about the KVM Image Server:
v IP address and network mask
v user name and password for SSH / SCP access
v the path on the image server where the master images are stored
v the path on the image server where the instance images are to be stored
v the path on the image server where the saved / backup images are to be stored
Additionally, collect the following information about the KVM host or hosts:
v IP address
v network mask
v MAC address
Procedure
1. In the Cloud Server Pool Details tab, click the New Cloud Server Pool icon.
Specify a name for your new cloud server pool, for example "KVM Cloud
Pool 1". Set the hypervisor type to KVM.
2. In the Resource Configuration section, in the Resource Pool Configuration
tab, click the Define New Resource Pool button.
a. In the Define New KVM Resource Pool window that appears, enter a
name for the new resource pool, for example "KVM Resource Pool 1".
b. Click Save.
3. Open the Boot Server Configuration and Image Discovery tab.
a. Click Define New KVM Image Server to define the KVM image server in
DCM.
b. Specify the name of the server, its IP address, the network mask, and the
SSH and SCP credentials.
c. Click Save.
d. Specify the directory name where the master images are located on the
KVM image server.
e. Click Install Boot Server and wait until the workflow is completed
successfully. Click Refresh to see the current workflow state.
f. Click KVM Image Discovery to discover images. Wait until the discovery
workflow is finished.
4. Open the Additional Resources tab.
a. Specify the name of the repository, for example "KVM File Repository", as
well as the IP address and network mask of the KVM image server that is
to be used.
b. Click Save.
5. Open the Save / Restore Settings tab.
a. Click Define New Saved Image Repository.
b. Specify the name of the repository, for example "KVM Saved Image
Repository".
c. Select the server and path where to store the saved images, for example
/repository/kvmsavedimages.
Note: You can select the KVM image server which was created previously
but you can also select any other server as long as it has an associated
boot server in DCM.
d. Click Save.
e. Click Define New Instance Image Repository.
f. Specify the name of the repository, for example "KVM Instance Image
Repository".
g. Click Save.
Note: If you change the mount point for the KVM image server, you must
update the KVM instance image repository mount point accordingly. To
perform this step, access the Tivoli Provisioning Manager application for
managing image repositories by clicking Go To > IT Infrastructure > Image
Library > Image Repositories.
6. Before creating a KVM host, create the Cloud.RSAFile global provisioning
property:
a. Log on to the administrative user interface and click Go to >
Administration > Provisioning > Provisioning Global Settings.
b. Open the Variables tab.
c. Add the variable Cloud.RSAFile with the value /opt/IBM/tivoli/tpm/keys/
TSAM/identity.
d. Delete KVM Host entry from Provisioning Computers.
e. Click Save.
7. Open the Host Platform Configuration tab.
a. Specify the parameters that are necessary to create a KVM host: IP address
of the KVM host, the network mask, MAC address, and root password.
b. Click Create KVM Host.
Note: The host platform is created with the specified parameters and it is
attached to the resource pool. You can verify this in the Resources
Overview section. Try to connect to the KVM host at the specified address
using SSH.
8. Review the values in the Provisioning Parameters tab and adjust them to
your needs.
9. If you want to assign this cloud server pool to all customers, do the following:
a. Switch the main tab from Cloud Server Pool Details to Customers.
b. Mark the box Assigned to all customers?.
Such a cloud server pool is shared by all customers, and it is not necessary
to assign it to individual customers in the Cloud Customer Administration
application.
10. Click Validate and Enable Cloud Server Pool.
Note: After validation, all input fields are greyed out and become read-only.
Discoveries are not possible until the cloud server pool is disabled again.
Manually configuring cloud server pools for System p
Perform these steps to manually configure a System p cloud server pool in the
Cloud Server Pool Administration application.
Before you begin
Collect the following information about the Hardware Management Console
(HMC) server:
v host name, IP address, network mask
v hscroot password
v CEC name to be used for this cloud server pool
Collect the following information about the host platform and VIOS configuration:
v CEC Host Platform Name
v Name of the Physical Ethernet Device, for example "eth0"
v the IP address and host name of the VIO server
Collect the following information about the NIM server:
v host name, IP address, network mask, gateway address, root password
v path on the NIM boot server where the saved / backup images are to be stored,
for example /export/nim/images/backups.
Procedure
1. In the Cloud Server Pool Details tab, click the New Cloud Server Pool icon.
Specify a name for your new cloud server pool, for example System p Cloud
Pool 1. Set the hypervisor type to LPAR.
2. In the Resource Configuration section, in the Resource Pool Configuration
tab, click the Define New Resource Pool button.
a. In the Define New PowerVM Resource Pool window that appears, enter a
name for the new resource pool, for example System p Resource Pool 1.
b. Click Save.
3. Open the HMC / IVM Discovery tab.
a. Click Define New HMC Server in order to define HMC in DCM.
b. Specify the server name, IP address, network mask, and the credentials for
the HSCROOT access.
Note: Server name must be valid and must be resolvable by DNS. Run a
DNS lookup on the Tivoli Service Automation Manager management
server if you are not sure if a name is valid. You can specify an IP address
as a server name as well.
c. Click Save.
d. Specify the Central Electronic Complex (CEC) name to be discovered. This
parameter is optional. If you leave this field blank, all CECs managed by
this HMC are discovered, which normally takes a long time.
e. Click Save.
f. Click HMC Discovery and wait for the workflow to complete successfully.
Restriction: Do not discover Power Blades and HMC computers within the
same cloud server pool. Such configuration is not supported.
4. Open the Host Platform and VIOS Configuration tab.
a. Select the SAN Storage / Multiple VIOS Mode? check box if you want to
use SAN storage in the MPIO mode. This setting is the default for a System p
cloud use case. Selecting this check box also unlocks the Storage Discovery
tab. If you select this check box and configure the cloud server pool for SAN
storage, the check box is disabled. After that, it is not possible to switch
to the LVM storage mode for this cloud server pool, even if the cloud server
pool is disabled.
b. Enter the CEC and VIOS parameters and run the Configure CEC
workflow. The CEC host platform is now attached to the resource pool.
You can verify this in the Resources Overview section. Try to connect to
the host and VIO server at the specified address using SSH. This step must
be repeated for all hosts that are to be added to the resource pool.
c. Optionally, you can assign a new device model to the host platform or
platforms attached to the resource pool. To use the Tivoli Service
Automation Manager LPAR storage extension, select the Cloud pSeries
HostPlatform Storage device model and click Assign Device Model. The
new device model is listed in the Resources Overview section.
d. Configure the VIOS Sets for each host platform using the table in the
VIOS Configuration section below the Assign Device Model to
Hostplatforms section. For each host platform, open the VIOS section by
clicking the little arrow icon next to its name. Click Add VIOS Set and
select a VIOS set type. Each host platform can define at most one
VIOS.SET, at most one VIOS.SET.SAN, and at most one VIOS.SET.NET.
e. Click New VIOS Set Entry and add the alias for the VIOS set.
Important: This alias must match the alias specified in the network
configuration (System p Switch Template definition).
f. Click Add VIOS Server to add a VIOS server or servers to this VIOS set
and specify the trunk priority if needed.
g. Review the VIOS set entries and click Save.
5. Open the NIM Discovery tab.
Tip: The current NIM discovery does not distinguish between a saved image
and a master image, which means that it puts master images and saved
images into the same NIM repository. To avoid this situation, it is useful to
keep master images and saved images separate. This can be done with an
additional NIM Server and NIM Boot Server pair, so that one NIM Server
and NIM Boot Server pair handles master images, and the other handles
saved images.
a. Click Define New NIM Server.
b. In the window that appears specify the DNS resolvable host name or IP
address as NIM Server Name. Specify the IP address, network mask, and
the SSH user credentials for the NIM server.
c. Click OK.
d. Click Define New Boot Server.
e. Specify the server name, for example NIMBootServer, the IP address,
network mask, and gateway address for this boot server.
f. Click NIM Discovery and wait until the workflow is finished successfully.
Note:
v During the creation of the NIM boot server, a -BootServer suffix is
appended to the server name that you specified. This ensures that a unique
NIM boot server object is created even if the NIM server and the NIM boot
server have the same name. A unique name is required by Tivoli
Provisioning Manager.
v If the NIM server is also used to export file systems via NFS, ensure that
the export does not interfere with the NIM server setup for the restore
and backup functionality. For more information, refer to the respective
AIX NIM documentation. An incorrect setup might cause problems with
provisioning or backing up LPAR servers.
g. Click Save.
6. Open the Save / Restore Settings tab.
a. Click Define New PowerVM Repository.
b. Specify the name, for example System p Repository. Select the previously
created NIM boot server and specify the location of the saved images on
the NIM server.
c. Click Save.
7. In the Storage Discovery tab, click Storage Pool Discovery. This tab is only
available if the LPAR storage extension is installed.
a. To distinguish between disks used for the OS installation and disks used
for additional storage, execute a storage discovery on the preallocated
disks. This distinction is based on the disk sizes. The discovery also
automatically maintains the storage volume usage state.
b. If there is no significant difference in size between the volumes used for
OS installation and the additional storage volumes, execute a storage
volume import.
Note: The storage volume import adds only new storage volumes to
the storage pools defined for the server pool with the given usage state.
Existing storage volumes are updated with respect to VIOS.SET properties
only; no storage volume usage state is modified. Values such as volume
identifier, size, and state are not automatically reviewed, so you are
responsible for setting them correctly.
The storage pool naming rule for OS installation disks (rootVG) is: Server
resource pool name - Storage Pool. The storage pool naming rule for the
additional disk storage pool is: Server resource pool name - Data Disk
Stg Pool. The storage subsystem is identical for all server resource pools:
name cloud-storage-subsystem with an ANSI T10 ID of 3258. The DCM import
file containing the storage volumes must follow the Tivoli Provisioning
Manager xmlimport.dtd. Refer to the sample XML file
52_Cloud_Storage_Volumes_Systemp.xml described in chapter Data Center
Model (DCM) object templates on page 191.
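The naming rules above can be expressed as a simple derivation; the pool name used here is an example:

```python
def storage_pool_names(server_pool: str) -> dict:
    """Derive the storage pool names for a server resource pool,
    following the naming rules described above."""
    return {
        # OS installation disks (rootVG)
        "rootvg": f"{server_pool} - Storage Pool",
        # additional disk storage
        "data": f"{server_pool} - Data Disk Stg Pool",
    }

names = storage_pool_names("System p Resource Pool 1")
print(names["rootvg"])  # System p Resource Pool 1 - Storage Pool
print(names["data"])    # System p Resource Pool 1 - Data Disk Stg Pool
```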
8. Review the values in the Provisioning Parameters tab and adjust them to
your needs.
9. If you want to assign this cloud server pool to all customers, do the following:
a. Switch the main tab from Cloud Server Pool Details to Customers.
b. Mark the box Assigned to all customers?.
Such a cloud server pool is shared by all customers, and it is not necessary
to assign it to individual customers in the Cloud Customer Administration
application.
10. Click Validate and Enable Cloud Server Pool.
Note: After validation, all input fields are grayed out and become read-only.
Discoveries are not possible until the cloud server pool is disabled again.
11. If the following error message is displayed during the validation: CTJZH2064E
- The global provisioning property Cloud.RSAFile is not configured
properly. Current value: __TIO_HOME_/keys/TSAM/identity, then the
Cloud.RSAFile global provisioning property is generated but not customized.
Perform the following steps to customize it:
a. Log on to the administrative user interface and click Go to >
Administration > Provisioning > Provisioning Global Settings.
b. Open the Variables tab and in the field enter Cloud.RSAFile.
c. Replace the string __TIO_HOME_ with the fully qualified path to the
directory of the Tivoli Provisioning Manager management server, for
example /opt/IBM/tivoli/tpm.
d. Click Save.
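The customization performed in step 11c amounts to a plain string substitution; for clarity (the /opt/IBM/tivoli/tpm path is the example from the text):

```python
# Value reported by the CTJZH2064E message, placeholder still present:
current = "__TIO_HOME_/keys/TSAM/identity"

# Fully qualified Tivoli Provisioning Manager installation directory:
tio_home = "/opt/IBM/tivoli/tpm"

fixed = current.replace("__TIO_HOME_", tio_home)
print(fixed)  # /opt/IBM/tivoli/tpm/keys/TSAM/identity
```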
Manually configuring cloud server pools for Power Blades
Perform these steps to manually configure a System p cloud server pool with
Power Blade support. This configuration is identical to the configuration for
System p, apart from the settings in the HMC / IVM Discovery tab.
Before you begin
Collect the following information about the Hardware Management Console
(HMC) server:
v host name, IP address, network mask
v hscroot password
v CEC name to be used for this cloud server pool
Collect the following information about the host platform and VIOS configuration:
v CEC Host Platform Name
v Name of the Physical Ethernet Device, for example "eth0"
v the IP address and host name of the VIO server
Collect the following information about the NIM server:
v host name, IP address, network mask, gateway address, root password
v path on the NIM boot server where the saved / backup images are to be stored,
for example /export/nim/images/backups.
Procedure
1. In the Cloud Server Pool Details tab, click the New Cloud Server Pool icon.
Specify a name for your new cloud server pool, for example System p Cloud
Pool 1. Set the hypervisor type to LPAR.
2. In the Resource Configuration section, in the Resource Pool Configuration
tab, click the Define New Resource Pool button.
a. In the Define New PowerVM Resource Pool window that appears, enter a
name for the new resource pool, for example System p Resource Pool 1.
b. Click Save.
3. Open the HMC / IVM Discovery tab.
a. Select the IVM Mode? check box to use the IVM hypervisor. The HMC
Discovery section changes to the IVM Discovery section. After you enable
the cloud server pool, the IVM Mode? check box is grayed out and cannot
be changed, even if the cloud server pool is disabled again.
b. Click Define and Add Power Blade Server and specify all the required
parameters in the window that appears to create the computer object in
DCM. This computer is then added to the end of the list of computers in
the Power Blade Computers input field.
c. Click Select Hosts to be Discovered to select one or multiple computers
that are already defined in DCM. These computers are added as a
comma-separated list to the Power Blade Computers field and replace
the content of this field.
d. Click Power Blade Discovery to start the discovery for all blade
computers specified in the Power Blade Computers field.
Restriction: Do not discover Power Blades and HMC computers within the
same cloud server pool. Such configuration is not supported.
4. Open the Host Platform and VIOS Configuration tab.
a. Select the SAN Storage / Multiple VIOS Mode? check box if you want to
use SAN storage in MPIO mode. This setting is the default for a System p
cloud use case. Selecting this check box also unlocks the Storage Discovery
tab. If you select this check box and configure the cloud server pool for SAN
storage, the check box is disabled. After that, it is not possible to switch to
LVM storage mode for this cloud server pool, even if the cloud server pool
is disabled.
b. Enter the CEC and VIOS parameters and run the Configure CEC
workflow. The CEC host platform is now attached to the resource pool.
You can verify this in the Resources Overview section. Try to connect to
the host and VIO server at the specified address using SSH. This step must
be repeated for all hosts that are to be added to the resource pool.
c. Optionally, you can assign a new device model to the host platform or
platforms attached to the resource pool. To use the Tivoli Service
Automation Manager LPAR storage extension, select the Cloud pSeries
HostPlatform Storage device model and click Assign Device Model. The
new device model is listed in the Resources Overview section.
d. Configure the VIOS Sets for each host platform using the table in the
VIOS Configuration section below the Assign Device Model to
Hostplatforms section. For each host platform, open the VIOS section by
clicking the little arrow icon next to its name. Click Add VIOS Set and
select a VIOS set type. Each host platform can define at most one
VIOS.SET, at most one VIOS.SET.SAN, and at most one VIOS.SET.NET.
e. Click New VIOS Set Entry and add the alias for the VIOS set.
Important: This alias must match the alias specified in the network
configuration (System p Switch Template definition).
f. Click Add VIOS Server to add a VIOS server or servers to this VIOS set
and specify the trunk priority if needed.
g. Review the VIOS set entries and click Save.
5. Open the NIM Discovery tab.
a. Click Define New NIM Server.
b. In the window that appears specify the DNS resolvable host name or IP
address as NIM Server Name. Specify the IP address, network mask, and
the SSH user credentials for the NIM server.
c. Click OK.
d. Click Define New Boot Server.
e. Specify the server name, for example NIMBootServer, the IP address,
network mask, and gateway address for this boot server.
f. Click NIM Discovery and wait until the workflow is finished successfully.
Note: During the creation of the NIM boot server, a -BootServer suffix is
appended to the server name that you specified. This ensures that a
unique NIM boot server object is created even if the NIM server and the
NIM boot server have the same name. A unique name is required by Tivoli
Provisioning Manager.
g. Click Save.
6. Open the Save / Restore Settings tab.
a. Click Define New PowerVM Repository.
b. Specify the name, for example System p Repository. Select the previously
created NIM boot server and specify the location of the saved images on
the NIM server.
c. Click Save.
7. In the Storage Discovery tab, click Storage Pool Discovery. This tab is only
available if the LPAR storage extension is installed.
8. Review the values in the Provisioning Parameters tab and adjust them to
your needs.
9. If you want to assign this cloud server pool to all customers, do the following:
a. Switch the main tab from Cloud Server Pool Details to Customers.
b. Mark the box Assigned to all customers?.
Such a cloud server pool is shared by all customers, and it is not necessary
to assign it to individual customers in the Cloud Customer Administration
application.
10. Click Validate and Enable Cloud Server Pool.
Note: After validation, all input fields are grayed out and become read-only.
Discoveries are not possible until the cloud server pool is disabled again.
11. If the following error message is displayed during the validation: CTJZH2064E
- The global provisioning property Cloud.RSAFile is not configured
properly. Current value: __TIO_HOME_/keys/TSAM/identity, then the
Cloud.RSAFile global provisioning property is generated but not customized.
Perform the following steps to customize it:
a. Log on to the administrative user interface and click Go to >
Administration > Provisioning > Provisioning Global Settings.
b. Open the Variables tab and in the field enter Cloud.RSAFile.
c. Replace the string __TIO_HOME_ with the fully qualified path to the
directory of the Tivoli Provisioning Manager management server, for
example /opt/IBM/tivoli/tpm.
d. Click Save.
Manually configuring cloud server pools for VMControl
Perform these steps to manually configure a VMControl cloud server pool in the
Cloud Server Pool Administration application.
Before you begin
Collect the following information about the VMControl hypervisor:
v the trust certificate of the VMControl hypervisor
v the hostname, IP address, netmask, and credentials that are required to access
the VMControl hypervisor
v the name of the post installation script to be run on the provisioned server
("postInstallScript" by default)
Note: For supported network configuration scenarios, see Network configuration
on page 118.
Remember: To create additional disks on a virtual server that is provisioned
using IBM Systems Director, make sure that the following prerequisites are met:
v For VMControl 2.4.2: IBM Systems Director VMControl 2.4.2 LA Build 58 is
installed. Contact IBM Support to obtain this LA build.
v For VMControl 2.4.3.1: Configure IBM Systems Director VMControl 2.4.3.1
with a server system pool and a storage system pool.
For more information about how to create a server system pool and a storage
system pool, see IBM Systems Director VMControl 2.4.2 or 2.4.3.1 documentation.
Procedure
1. In the Cloud Server Pool Details tab, click the New Cloud Server Pool icon.
Specify a name for your new cloud server pool, for example "VMControl
Cloud Pool 1". Set the hypervisor type to PowerHMC. Click Save.
2. In the Resource Configuration section, in the Resource Pool Configuration
tab, define your resource pool.
v Select the TPM Resource Pool.
v Define a new resource pool by clicking the Define New Resource Pool
button.
a. In the Define New VMC Resource Pool window that opens, enter a
name for the new resource pool, for example "VMControl Resource Pool
1". Optionally, provide the name of the post installation script to be run
on provisioned systems.
b. Click OK.
3. Click Save to save the newly created VMControl server pool.
Note: The field Associated Hypervisor Manager Version shows the version
as stored in the resource pool variables section after the back-end has been
discovered.
4. In the Hypervisor Manager tab, define your hypervisor manager.
v Select an existing Hypervisor Manager.
v To define a new Hypervisor Manager, click Define New VMC Server. In
the window that opens, specify:
a. the name of the VMC server to be created
b. the IP address
c. the network mask
d. the credentials of the VMC hypervisor manager
Click OK.
Important: If no matching subnetwork exists in DCM, a new one is created
automatically. This is problematic because the IP address ranges of the
automatically created subnetwork overlap with those that are loaded later
in the Cloud Network Configuration application. Problems can occur
especially if the management network and the network of the managed VMs
are the same or overlap. To avoid such a situation, load the appropriate
network DCM objects (with the Cloud Network Configuration application)
before you create a cloud server pool.
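Whether two subnetworks overlap can be checked before you create the pool; a quick sketch with Python's ipaddress module (the addresses are examples only):

```python
import ipaddress

# Example values; substitute your management and managed-VM subnetworks.
management = ipaddress.ip_network("10.1.0.0/24")
customer = ipaddress.ip_network("10.1.0.128/25")

if management.overlaps(customer):
    print("subnetworks overlap - load the network DCM objects first")
else:
    print("no overlap")
```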
a. Click Save.
b. Click VMControl Inventory Discovery to discover the VMControl
back-end.
Note: Import the SSL certificate of the VMControl hypervisor into the
WebSphere certificate store. Otherwise, the discovery workflow fails with a
"No certificate found" error.
Note: To enhance the performance of VMControl discovery, see
Enhancing VMControl discovery performance on page 188.
c. Click Save.
d. Click Refresh to check the current state of the discovery workflow.
Note: "Success" indicates that this step is finished. If the workflow fails,
you can open the Provisioning Workflow application and start the
workflow manually. You can also open the Provisioning Workflow Status
application to analyze the failed workflow.
After the discovery, the VMControl inventory host platform computer or
computers are created in DCM. They are not automatically added to the
selected resource pool and must be added manually.
Note: Run the discovery every time you provide a new VMC image at the
back end.
If the discovery workflow fails, the failure may be caused by images that
failed verification. To solve this problem, do one of the following:
a. Check the discovery workflow for failed images. To fix the image
validation errors, follow the instructions in the troubleshooting section
Fixing Image Validation Errors on page 510.
b. Run the discovery again; if no newly discovered images fail validation,
the discovery succeeds.
c. Enable the cloud pool even if the discovery failed.
Note: Images that failed validation cannot be registered or used for
provisioning until the validation failure is fixed.
5. In the VMC Server System Pool Selection tab, select a VMC Server System
Pool object. Click Save.
Note: The VMC Hypervisor Manager must be successfully defined before this
step can be executed.
Note: The only VMC Server System Pools that can be selected are the pools
that are:
a. associated with the selected VMC Hypervisor Manager
b. not empty (that is, they have host platforms assigned).
6. In the Provisioning Parameters tab, define the maximum resources for
provisioning. Optionally, define a provisioning script in the Custom
Provisioning Script field. Click Save.
7. If you want to assign this cloud server pool to all customers:
a. Switch the main tab from Cloud Server Pool Details to Customers.
b. Mark the box Assigned to all customers?.
c. Click Save.
Such a cloud server pool is shared by all customers, and it is not necessary
to assign it to individual customers in the Cloud Customer Administration
application.
8. In the Cloud Server Pool Details tab, click Validate and Enable Cloud
Server Pool. Click Save.
Note: After validation, all input fields are grayed out and become read-only.
Discoveries are not possible until the cloud server pool is disabled again.
9. If the following error message is displayed during the validation: CTJZH2064E
- The global provisioning property Cloud.RSAFile is not configured
properly. Current value: __TIO_HOME_/keys/TSAM/identity, then the
Cloud.RSAFile global provisioning property is generated but not customized.
Perform the following steps to customize it:
a. Log on to the administrative user interface and click Go to >
Administration > Provisioning > Provisioning Global Settings.
b. Open the Variables tab and in the field enter Cloud.RSAFile.
c. Replace the string __TIO_HOME_ with the fully qualified path to the
directory of the Tivoli Provisioning Manager management server, for
example /opt/IBM/tivoli/tpm.
d. Click Save.
10. The following are additional steps to discover storage pools when you want to
create additional disks on virtual servers:
a. Log on to the Tivoli Service Automation Manager Admin application.
b. Go to Service Automation > Configuration > Cloud Storage Pool
Administration.
c. In the Select Actions list, select Discover VMControl Storage Pools.
d. Click the Refresh icon to discover the storage pool.
Enhancing VMControl discovery performance:
You can enhance the performance of the VMControl discovery.
Before you begin
In Tivoli Service Automation Manager 7.2.4.4, do not run a discovery if both of
the following conditions exist:
v Two versions of VMControl (VMControl 2.4.2 or later and VMControl 2.4.1.1)
are managed by Tivoli Service Automation Manager.
v Projects were created using VMControl 2.4.2 or later.
Important: If you still run the discovery on VMControl 2.4.1.1, it corrupts the
DCM objects for the servers that were created using VMControl 2.4.2 or later.
Procedure
1. Before you create a Cloud Server Pool for VMControl, go to Discovery >
Provisioning Discovery > Discovery Configurations.
2. Enter the text VMC to filter the discovery configurations.
3. Click VMControl Discovery.
4. Set the Maximum Discovered Simultaneously value to a greater number, for
example 20, and save the changes.
Note: By default, the Maximum Discovered Simultaneously value is set to 2,
which slows down the VMControl discovery. Setting this value to a greater
number resolves the discovery performance issue.
5. After the successful creation of Cloud Server Pool for VMControl, go to
Discovery > Provisioning Discovery > Discovery Configurations.
6. Enter the text VMC to filter the discovery configurations.
7. Click <VMControl Host Name> VMControl discover.
8. Set the Maximum Discovered Simultaneously value to a greater number and
save the changes.
Important: The changes must be made both before and after the creation of
cloud server pool for VMControl.
Enabling or Disabling NPIV Support via VMControl:
Cloud resource pools for VMControl must be configured based on the storage
connection type: NPIV or vSCSI. This procedure describes how to enable or
disable a cloud resource pool that is based on the NPIV storage connection type.
Before you begin
VMControl Discovery must be run to check whether NPIV capability is available
for a resource pool. If such a capability is discovered, the cloud resource pools are
set to use the NPIV storage connection type by default.
Note: To avoid conflicts on the nature of storage connection, it is important to
maintain resource type as either NPIV or vSCSI.
Note: Do not submit concurrent requests for virtual server creation on NPIV
enabled resource pool for VMControl Hypervisor.
Procedure
1. Go to Administration > Provisioning > Resource pool.
2. Select the resource pool for which NPIV must be disabled.
3. In the Variables tab, set the value of NPIV_Enabled from yes to no. Likewise,
change the value from no to yes to enable it.
Configuring cloud server pools for zVM
Learn to create and configure a zVM cloud server pool using DCM import XML
files and the Cloud Server Administration application.
zVM cloud server pool configuration:
Read the following information before you proceed to configure a zVM cloud
server pool.
Almost all hypervisor types support two configuration modes: manual
configuration using "Quick Define" operations and configuration with DCM import
XML files. However, the configuration of a zVM cloud server pool can be
performed using DCM import files only. Therefore, the configuration of a zVM
cloud server pool consists of the following steps:
1. Customizing the DCM import files that are delivered with the Tivoli Service
Automation Manager media:
v 10_Cloud_Global_NetworkSettings.xml
v 13_Cloud_NetworkSettings_zVM.xml
v 23_0_Cloud_Bootserver_zVM.xml
v 23_1_Cloud_zLinuxImage_zVM.xml
v 23_2_Cloud_Vswitches_zVM.xml
v 33_Cloud_Pool_zVM.xml
v 42_Cloud_ITM_Agent_Linux.xml (if IBM Tivoli Monitoring is required)
2. Loading the customized DCM import files.
3. Creating a cloud server pool for zVM using the Cloud Server Pool
Administration application.
4. Running the zVM discovery and enabling the cloud server pool.
Collect the following information before you start configuring a cloud server pool
for zVM:
v Network settings (required during network configuration):
The management and customer subnetworks in
10_Cloud_Global_NetworkSettings.xml
The virtual switch definitions in 13_Cloud_NetworkSettings_zVM.xml
v Mapserve zLinux host:
Name of the mapserve host (usually named "mapsrv") that is defined in
23_0_Cloud_Bootserver_zVM.xml in the Hostplatform property value
v zLinux images:
Configuration parameters of the zLinux master images that are defined in
23_1_Cloud_zLinuxImage_zVM.xml
v Virtual switch to host platform mapping:
Chapter 3. Configuring 189
The switch names and their mapping to host platforms are defined in
23_2_Cloud_Vswitches_zVM.xml
v System z hypervisors (defined in 33_Cloud_Pool_zVM.xml)
IP address, network mask of the hypervisor. SSH, and PING credentials for
the hypervisor
IP address, network mask, and SMAPI credentials for the vsmserve zVM
machine which processes the dirmaint commands
The following host platform configuration settings:
- cpu.family = "s390"
- cpu.size
- cpu.type="64-bit"
- memory.size
- disk name and size
Configuring zVM cloud server pools in the Cloud Server Pool Administration
application:
Perform these steps to manually configure a zVM cloud server pool in the Cloud
Server Pool Administration application.
Before you begin
See zVM cloud server pool configuration on page 189.
About this task
Procedure
1. In the Cloud Server Pool Overview tab, click Import DCM Objects and
import the prepared DCM XML files.
2. In the Cloud Server Pool Details tab, click the New Cloud Server Pool icon.
Specify a name for your new cloud server pool, for example "zVM Cloud Pool".
Set the hypervisor type to zVM.
3. In the Resource Configuration section, in the Resource Pool Configuration
tab, select the resource pool that was defined in, and loaded with, the DCM
import file 33_Cloud_Pool_zVM.xml.
4. Click Save to save the newly created zVM cloud server pool. Verify that the
host platform is listed as a server in the Resource Pool overview.
5. Open the zVM Host Discovery tab.
a. Click Select Value next to the Hypervisor Manager field and select the
hypervisor computer loaded with the DCM import file
33_Cloud_Pool_zVM.xml.
b. Click Save.
c. Click zVM Discovery to run the zVM host discovery. Click Refresh to
see the updated workflow status.
6. Review the values in the Provisioning Parameters tab and adjust them to your
needs.
7. If you want to assign this cloud server pool to all customers, do the following:
a. Switch the main tab from Cloud Server Pool Details to Customers.
b. Mark the box Assigned to all customers?.
Such a cloud server pool is shared by all customers, and it is not necessary
to assign it to individual customers in the Cloud Customer Administration
application.
8. Click Validate and Enable Cloud Server Pool.
Note: After validation, all input fields are grayed out and become read-only.
Discoveries are not possible until the cloud server pool is disabled again.
9. If the following error message is displayed during the validation: CTJZH2064E -
The global provisioning property Cloud.RSAFile is not configured
properly. Current value: __TIO_HOME_/keys/TSAM/identity, then the
Cloud.RSAFile global provisioning property is generated but not customized.
Perform the following steps to customize it:
a. Log on to the administrative user interface and click Go to >
Administration > Provisioning > Provisioning Global Settings.
b. Open the Variables tab and in the field enter Cloud.RSAFile.
c. Replace the string __TIO_HOME_ with the fully qualified path to the directory
of the Tivoli Provisioning Manager management server, for example
/opt/IBM/tivoli/tpm.
d. Click Save.
Using Data Center Model (DCM) files to configure cloud server
pools
You can use the DCM XML files to easily create or configure a cloud pool.
Data Center Model (DCM) object templates
DCM global provisioning parameters and DCM objects are defined in XML files
and then imported into Tivoli Service Automation Manager.
A set of Data Center Model (DCM) object templates is delivered with the Tivoli
Service Automation Manager installation DVD and copied to the management
server during the installation. All of these XML and properties files must be
copied and adapted to your environment. The default location is
/etc/cloud/install/DCM. Customize these files by adding the IP addresses, host
names, and credentials specific to your environment.
Note: Before you can customize Data Center Model object templates, you must
have:
v maxadmin rights.
v access to the Tivoli Service Automation Manager installation DVD.
v read-access for all files to be imported.
The templates are structured as follows:
Important: The templates are prefixed with numbers that reflect the order in
which they need to be imported.
CAUTION: Multiple imports of any DCM template are not supported and can
lead to errors.
Template Description
10_Cloud_Global_NetworkSettings.xml
Global file containing all network DCM objects.
11_Cloud_NetworkSettings_VMware.xml
Hypervisor network configuration for VMware. Virtual Switch Templates for
the Management and Customer networks.
12_Cloud_NetworkSettings_Systemp.xml
Hypervisor network configuration for System p. Virtual Switch Templates for
the Management and Customer networks.
13_Cloud_NetworkSettings_zVM.xml
Hypervisor network configuration for z/VM. Virtual Switch Templates for
the Management and Customer networks.
14_Cloud_NetworkSettings_KVM.xml
Hypervisor network configuration for KVM. Virtual Switch Templates for
the Management and Customer networks.
15_Cloud_NetworkSettings_XEN.xml
Hypervisor network configuration for XEN. Virtual Switch Templates for
the Management and Customer networks.
23_0_Cloud_Bootserver_zVM.xml
Boot Server configuration for z/VM.
23_1_Cloud_zLinuxImage_SLES10_zVM.xml
Image definition of SLES10 for z/VM.
23_2_Cloud_VSwitches_zVM.xml
VSwitch definition for z/VM.
33_Cloud_Pool_zVM.xml
Cloud Pool definition for z/VM.
52_Cloud_Storage_Volumes_Systemp.xml
Sample storage volume definition to be used for Power MPIO SAN disk
storage pools.
IBM Tivoli Monitoring Templates:
41_Cloud_ITM_Agent_Windows.xml
42_Cloud_ITM_Agent_Linux.xml
43_Cloud_ITM_Agent_AIX.xml
44_Cloud_ITM_Agent_AIX_Linux_Windows.xml
These templates contain software module definitions for the Tivoli
Monitoring Agent, required to install the monitoring agent on the
provisioned virtual machine.
Note: Before any Tivoli Monitoring Agent DCM object templates can be
imported, the following data must be customized: network-interface name,
IP address, netmask, password-credentials.
In general, the customization of DCM object templates includes two
configuration steps. Start by customizing
10_Cloud_Global_NetworkSettings.xml, which is used regardless of the
hypervisor type; then copy and configure the DCM objects that are specific to
your hypervisor type. Each of these steps is described in more detail in the
sections that follow.
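Before importing a customized file, it is worth confirming that your edits left the XML well-formed. A minimal sketch; the fragment below is illustrative only and does not reproduce the actual xmlimport.dtd structure:

```python
import xml.dom.minidom

# Stand-in for a customized DCM import file (illustrative element names):
customized = """<?xml version="1.0"?>
<datacenter>
  <subnetwork name="management" ipaddress="10.1.0.0" netmask="255.255.255.0"/>
</datacenter>"""

try:
    doc = xml.dom.minidom.parseString(customized)
    print("well-formed, root element:", doc.documentElement.tagName)
except Exception as exc:
    print("broken XML:", exc)
```

Catching malformed XML this way is quicker than waiting for a failed import; validation against the DTD itself is still performed by the import.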
Customizing hypervisor independent Data Center Model (DCM) items:
Before you can configure your cloud pool objects, you need to customize and
import a couple of DCM items. Start with
10_Cloud_Global_NetworkSettings.xml, which is used regardless of the
hypervisor type.
Procedure
1. Copy the following files from the /install/files/DCM directory on the Tivoli
Service Automation Manager DVD to a local location:
v 10_Cloud_Global_NetworkSettings.xml
v 00_Cloud_Global_Properties.xml
2. Customize the 10_Cloud_Global_NetworkSettings.xml file:
v Customize the Management Subnetwork:
Table 18. The Management Subnetwork customization:
Property Name Description
ipaddress Update the IP Address according to your
environment.
netmask Update the netmask according to your
environment.
blocked-range Update the blocked IP address range
according to your environment.
PMRDP.Net.Gateway Update the gateway IP address according to
your environment, or omit this property if
no gateway is used.
PMRDP.Net.Broadcast Update the broadcast IP address according
to your environment.
PMRDP.Net.VLANID Update the VLAN ID according to your
environment, or omit this property if no
VLAN is required.
PMRDP.Net.VLAN_PVID Update the PVID according to your
environment, or omit this property if no
VLAN is required. The value must exist if
PMRDP.Net.VLANID is defined. It must
specify a unique PVID on your System p
CEC.
PMRDP.Net.DefaultRoute.Destination Specifies the destination of a single static
route for your subnet. If this property is set,
the following properties must also be
defined: PMRDP.Net.Gateway,
PMRDP.Net.DefaultRoute.NetMask,
PMRDP.Net.DefaultRoute.Metric, and
PMRDP.Net.DefaultRoute.Destination.
PMRDP.Net.DefaultRoute.NetMask Specifies the netmask of a single static route
for your subnet. If this property is set, the
following properties must also be defined:
PMRDP.Net.Gateway,
PMRDP.Net.DefaultRoute.Metric, and
PMRDP.Net.DefaultRoute.Destination.
Chapter 3. Configuring 193
PMRDP.Net.DefaultRoute.Metric Specifies the metric of a single static route
for your subnet. If this property is set, the
following properties must also be defined:
PMRDP.Net.Gateway,
PMRDP.Net.DefaultRoute.NetMask, and
PMRDP.Net.DefaultRoute.Destination.
PMRDP.Net.DomainName Specifies the domain name for the
hostname. It can be omitted if not used.
PMRDP.Net.HostnamePrefix Specifies the hostname prefix of the
generated hostname for the LPAR.
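As an illustration, a subnetwork entry in 10_Cloud_Global_NetworkSettings.xml follows the pattern sketched below. The element and attribute names and all values here are hypothetical placeholders; use the structure of the shipped template as the authoritative reference and replace only the values described in Table 18:

```xml
<!-- Hypothetical sketch of a management subnetwork definition.
     All element names and values below are placeholders; copy the
     structure from the shipped template and substitute your own
     site-specific values. -->
<subnetwork name="Management Subnetwork" ipaddress="10.1.2.0" netmask="255.255.255.0">
  <!-- Addresses in this range are never handed out to provisioned servers -->
  <blocked-range from="10.1.2.1" to="10.1.2.20" />
  <property component="KANAHA" name="PMRDP.Net.Gateway" value="10.1.2.1" />
  <property component="KANAHA" name="PMRDP.Net.Broadcast" value="10.1.2.255" />
  <!-- Omit the two VLAN properties if no VLAN is required -->
  <property component="KANAHA" name="PMRDP.Net.VLANID" value="100" />
  <property component="KANAHA" name="PMRDP.Net.VLAN_PVID" value="101" />
  <property component="KANAHA" name="PMRDP.Net.DomainName" value="example.com" />
  <property component="KANAHA" name="PMRDP.Net.HostnamePrefix" value="vm" />
</subnetwork>
```

The same pattern applies to the customer subnetwork described in the next step.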
v Customize the Customer Subnetwork:
Table 19. The Customer Subnetwork customization:
Property Name Description
ipaddress Update the IP Address according to your
environment.
netmask Update the netmask according to your
environment.
blocked-range Update the blocked IP address range
according to your environment.
PMRDP.Net.Gateway Update the gateway IP address according to
your environment, or omit this property if
no gateway is used.
PMRDP.Net.Broadcast Update the broadcast IP address according
to your environment.
PMRDP.Net.VLANID Update the VLAN ID according to your
environment, or omit this property if no
VLAN is required.
PMRDP.Net.VLAN_PVID Update the PVID according to your
environment, or omit this property if no
VLAN is required. The value must exist if
PMRDP.Net.VLANID is defined. It must
specify a unique PVID on your System p
CEC.
PMRDP.Net.DefaultRoute.Destination Specifies the destination of a single static
route for your subnet. If this property is set,
the following properties must also be
defined: PMRDP.Net.Gateway,
PMRDP.Net.DefaultRoute.NetMask,
PMRDP.Net.DefaultRoute.Metric, and
PMRDP.Net.DefaultRoute.Destination.
PMRDP.Net.DefaultRoute.NetMask Specifies the netmask of a single static route
for your subnet. If this property is set, the
following properties must also be defined:
PMRDP.Net.Gateway,
PMRDP.Net.DefaultRoute.Metric, and
PMRDP.Net.DefaultRoute.Destination.
PMRDP.Net.DefaultRoute.Metric Specifies the metric of a single static route
for your subnet. If this property is set, the
following properties must also be defined:
PMRDP.Net.Gateway,
PMRDP.Net.DefaultRoute.NetMask, and
PMRDP.Net.DefaultRoute.Destination.
PMRDP.Net.DomainName Specifies the domain name for the
hostname. It can be omitted if not used.
PMRDP.Net.HostnamePrefix Specifies the hostname prefix of the
generated hostname for the LPAR.
3. Customize the 00_Cloud_Global_Properties.xml file:
Table 20.
Property Name Description
Cloud.FAILURE_RETRY_COUNT Determines how often a retry action is performed for
failed provisioning, deprovisioning and modifying
operations. The system automatically retries the failed
operation if Cloud.FAILURE_RETRY_COUNT is set to a
value greater than the default value of 1.
VMware
Placement of provisioning requests on the ESX
hosts is done not by Tivoli Service Automation
Manager but by the vCenter. VMware provides
the VMware DRS feature, which is not included
in the standard license. It is responsible for:
v Automatic assignment of VMs to hosts.
v Placement of VMs when they are powered on.
v Handling the allocation of resources.
It is strongly recommended to enable VMware
DRS and set the automation level to at least
Partially automated.
Note: Additional information can be found in this
technote.
Cloud.MAX_CONCURRENT_SERVER_OPERATIONS Limits the number of active provisioning operations.
Provisioning requests that exceed the number set for the
Cloud.MAX_CONCURRENT_SERVER_OPERATIONS
parameter wait for the earlier ones to complete. The
throttle prevents the cloud infrastructure and its
components from being overloaded. The
parameter is environment-specific and depends on the
capability of the infrastructure where the cloud is
deployed. Because of the strong infrastructure
dependency, the following guidelines must be treated as
general tips only:
VMWare
v The recommended value for Virtual Center
3.5 is 35.
v The recommended value for vSphere 4.0 and
higher is 60.
In general, the following values work well for
most environments. If necessary, adjust them
according to your needs.
VMControl
Set the parameter to 5.
z/VM Set the parameter to the number of the
configured hypervisors.
KVM The recommended value is the maximum
number of concurrent provisioning requests that
can be supported by the image repository and
the network. Additionally, the
Cloud.HOST_TRANSACTION_NUMBER_LIMIT
must be used to prevent individual hypervisors
from overload.
Cloud.HOST_TRANSACTION_NUMBER_LIMIT Limits the number of active provisioning operations that
are performed by individual hypervisors and prevents
them from being overloaded. This parameter is
environment-specific and depends on the capability of
the infrastructure where the cloud is deployed. Because
of the strong infrastructure dependency, the following
guidelines must be treated as general tips only:
VMware
Set the parameter to -1 to deactivate the
throttle.
VMControl
Set the parameter to 5.
z/VM Set the parameter to 10.
KVM Set the parameter to 3 to limit the impact that
provisioning operations have on running virtual
machines. On large hosts, values up to 10 can
work well and maximize provisioning
concurrency; however, such high values affect
running virtual machines.
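For orientation, the throttle settings above are set as name/value property pairs in 00_Cloud_Global_Properties.xml. The fragment below is a schematic sketch only: the values shown are the KVM guidelines from Table 20 (the concurrent-operations value is a placeholder, since the table gives no fixed number for KVM), and the exact enclosing elements are those of the shipped template:

```xml
<!-- Hypothetical sketch: global throttle properties, using the KVM
     guidelines from Table 20. Adjust all values to your environment. -->
<!-- 1 is the default; values greater than 1 enable automatic retry -->
<property component="KANAHA" name="Cloud.FAILURE_RETRY_COUNT" value="1" />
<!-- Placeholder: size this to what your image repository and network support -->
<property component="KANAHA" name="Cloud.MAX_CONCURRENT_SERVER_OPERATIONS" value="10" />
<!-- Per-hypervisor limit; 3 is the KVM guideline -->
<property component="KANAHA" name="Cloud.HOST_TRANSACTION_NUMBER_LIMIT" value="3" />
```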
Customizing hypervisor-dependent Data Center Model (DCM) items:
After you have customized the hypervisor-independent Data Center Model
objects, you need to configure the DCM objects that are specific to your
hypervisor type. Follow the instruction that applies to your hypervisor type.
v VMware
v PowerVM
v z/VM
v KVM
Customizing Data Center Model (DCM) items for VMware:
Customize the DCM templates prior to importing them to create a VMware
hypervisor cloud pool.
Procedure
1. Copy the 11_Cloud_NetworkSettings_VMware.xml file from the
/install/files/DCM directory on the Tivoli Service Automation Manager DVD
to a local location.
2. Customize the 11_Cloud_NetworkSettings_VMware.xml file:
Table 21. The 11_Cloud_NetworkSettings_VMware.xml file customization
Property Name Description
PMRDP.Net.VSwitchName Specify the name of the virtual switch on
which the port group is created, for
example, vSwitch0 or vSwitch1.
Controlling concurrent requests for VMware Virtual Center server pools:
You can specify the value for maximum parallel server provisioning for a virtual
center. This setting allows you to control the number of servers that are
provisioned at one time.
About this task
If the property Cloud.MAX_CONCURRENT_SERVER_OPERATIONS is not specified as a
virtual center variable, the global variable value is used. Setting this variable
allows you to control the number of server provisioning processes running at one
time.
Procedure
1. Click Go To > Service Automation > Configuration > Cloud Server Pool
Administration.
2. Select the VMware server pool and take note of the value for virtualcenterid.
3. Open the Start Center, and in the Data Model Object Finder section, type the
virtualcenterid value in the field Object ID, then press Enter.
4. In the search results, click the virtualcenter object of type Computer to view its
details.
5. In the Variables tab, click New Row to add a variable.
6. In the Variable field, type Cloud.MAX_CONCURRENT_SERVER_OPERATIONS and
provide the value.
7. Click Save.
Results
Parallel server provisioning is now controlled for the selected virtual center server
pool by the value specified for the Cloud.MAX_CONCURRENT_SERVER_OPERATIONS
property.
Customizing Data Center Model (DCM) items for PowerVM:
Customize the DCM templates before importing them to create a PowerVM
hypervisor cloud pool.
Procedure
1. Copy the 12_Cloud_NetworkSettings_Systemp.xml file from the
/install/files/DCM directory on the Tivoli Service Automation Manager DVD
to a local location.
2. Customize the 12_Cloud_NetworkSettings_Systemp.xml file:
v customize Systemp_SwitchTemplate:
Table 22. Systemp_SwitchTemplate customization
Property Name Description
PMRDP.Net.LinkTypeIEEE8023AD Specifies the type of EtherChannel. If true, it
is the IEEE type. If false, it is the CISCO
type. The value is required if more than one
Ethernet adapter is configured for the SEA.
See the ic name= property below.
PMRDP.Net.SEACtrlSessionVlanId The shared Ethernet adapter (SEA) requires
a control session in dual VIO mode. This
value specifies the VLAN ID used for that
control session. The value must be unique
on the CEC and must match all other SEAs
on the CEC if they function as servers to the
same internal network.
Note: Failing to configure this can lead to
network outages due to an internal loop.
PMRDP.Net.UntaggedTrafficPVID Specifies the PVID for the adapter to be
used for untagged traffic. This PVID must
be unique on the CEC. All the untagged
network traffic goes to this adapter.
PMRDP.Net.VIOPairAlias Specifies the VIO Set which should be used.
The VIO Set is referenced by the symbolic
name and must exist on all CECs which are
part of the resource pool.
ic name= Defines the device name on the VIO on
which the shared Ethernet adapter or
EtherChannel is created. There can be
multiple entries, in which case an
EtherChannel is created. The
PMRDP.Net.LinkTypeIEEE8023AD property is
required in case of multiple entries.
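To illustrate the multiple-entry case from the ic name= row: when two adapters are listed, an EtherChannel is created and the link type must be set. The sketch below is hypothetical; the property and ic element names are modeled on the rows of Table 22, and the exact nesting follows the shipped Systemp_SwitchTemplate:

```xml
<!-- Hypothetical sketch: two physical adapters aggregated into an
     EtherChannel behind the SEA. Because there are multiple ic
     entries, PMRDP.Net.LinkTypeIEEE8023AD is required. -->
<property component="KANAHA" name="PMRDP.Net.LinkTypeIEEE8023AD" value="true" />
<property component="KANAHA" name="PMRDP.Net.SEACtrlSessionVlanId" value="4094" />
<property component="KANAHA" name="PMRDP.Net.UntaggedTrafficPVID" value="4000" />
<property component="KANAHA" name="PMRDP.Net.VIOPairAlias" value="vioset1" />
<ic name="ent4" />
<ic name="ent5" />
```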
Customizing Data Center Model (DCM) items for z/VM:
Customize the Data Center Model (DCM) templates before importing them to
create a z/VM hypervisor cloud pool.
Procedure
1. Copy the following files from the /install/files/DCM directory on the Tivoli
Service Automation Manager DVD to a local location:
v 13_Cloud_NetworkSettings_zVM.xml
v 23_0_Cloud_Bootserver_zVM.xml
v 23_1_Cloud_zLinuxImage_SLES10_zVM.xml
v 23_2_Cloud_Vswitches_zVM.xml
v 33_Cloud_Pool_zVM.xml
2. Customize the 13_Cloud_NetworkSettings_zVM.xml file:
Table 23. The 13_Cloud_NetworkSettings_zVM.xml file customization
Property Name Description
PMRDP.Net.VSwitchName Specifies the name of the virtual switch
on which the port group should be
created.
PMRDP.Net.zVM_NetworkPortDeviceNumber Defines the template port device number.
This is the template value for all virtual
servers which are created in turn.
PMRDP.Net.zVM_NetworkDeviceRange Defines the port device range and is set
to 3 by default.
vlan name and vlan-number For each VLAN used by z/VM the
corresponding Tivoli Provisioning
Manager VLAN needs to be defined. The
vlan-number denotes the actual physical
VLAN tag.
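For each physical z/VM VLAN, the file therefore contains one VLAN definition of the form sketched below. The element name, VLAN name, and number are hypothetical placeholders; follow the shipped template for the exact markup:

```xml
<!-- Hypothetical sketch: one Tivoli Provisioning Manager VLAN entry
     per physical z/VM VLAN. vlan-number is the physical VLAN tag. -->
<vlan name="zvm-vlan-100" vlan-number="100" />
```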
3. Customize the 23_1_Cloud_zLinuxImage_SLES10_zVM.xml file:
Note: When changing the XML attribute value on any of the following XML
elements, be aware of the links between the Virtual Server Template,
Software Stack, Image, and Master Image. All these elements are linked by
name. If you change one of the following values, change all linked values
consistently before importing the file into the DCM.
Table 24. The 23_1_Cloud_zLinuxImage_SLES10_zVM.xml file customization
Property Name Description
zVM_Password The initial password for the provisioned z/VM
guest.
Note: Password must not contain any special
characters.
zLinux_RootPassword The initial password for the provisioned Linux
root access.
Note: Password must not contain any special
characters.
Minidisk image definition section:
zVM_Prototype Name of the z/VM prototype to be used for
new z/VM guests, as defined during the
z/VM backend setup phase.
zVM_DiskOwnerId Name of the guest that owns the master
image disk.
zVM_CloneDisks A space-separated list of disk device
numbers to be cloned from the master image
when provisioning a new guest.
zVM_SystemDisk The device number of the disk that contains
the Linux boot/root partition.
4. Customize the 23_2_Cloud_Vswitches_zVM.xml file:
Note: This file contains definitions of existing z/VM virtual switches. For each
virtual switch template defined in the 13_Cloud_NetworkSettings_zVM.xml there
must be one related object in this file.
Table 25. The 23_2_Cloud_Vswitches_zVM.xml file customization
Property Name Description
switch name The name of the virtual switch object that must
match the PMRDP.Net.VSwitchName in the virtual
switch template.
host-platform property value The name of the mapserve server object that
must match the mapserve names in
23_0_Cloud_Bootserver_zVM.xml and
33_Cloud_Pool_zVM.xml.
5. Customize the 33_Cloud_Pool_zVM.xml file:
v Customize System z Pool:
Note: The settings with the "PMRDP.Net" prefix can also be configured in the
Network Settings tab of the Cloud Pool Administration application.
Table 26. System z Pool customization
Property Name Description
PMRDP.Net.SubnetPool_0 Defines the first network adapter on the host and
lists the existing subnetwork names.
PMRDP.Net.SubnetPool_1 Defines the second network adapter on the host
and lists the existing subnetwork names. It is
omitted if a single network is sufficient. There
can be multiple entries in the form
PMRDP.Net.SubnetPool_x, where x is a number
from 0 to the max number of network adapters
supported by the hypervisor.
Note: Having more than one entry with the same
value is not supported.
PMRDP.Net.ManagementNIC Defines the management network and contains
the number from the end of the
PMRDP.Net.SubnetPool_x property.
PMRDP.Net.HostNameResolveNIC Defines the network that is used to generate
the host name of the LPAR by reverse DNS
resolution on the Tivoli Service Automation
Manager management node. It contains the
number from the end of the
PMRDP.Net.SubnetPool_x property.
PMRDP.Net.DNSServers Defines the list of name servers. Contains a
comma-separated list of IP addresses.
PMRDP.Net.DNSSuffix Defines the domain name server suffixes.
Contains a comma-separated list of DNS suffixes.
v Customize the mapserve server section in the System z Pool object:
Table 27. Customization of the mapserve server section in the System z Pool object
Property Name Description
NIC object(s) Defines the number of NICs on mapserve that
limits the number of NICs on a provisioned
server. Set the management flag on the Nic0
entry.
ipaddress and netmask in the
network-interface section
Defines the IP address and netmask of the
mapserve.
username and password in the PING and
zVM SSH service access point (sap)
definitions
Defines the user name and password needed to
access the mapserve (Linux root user) for the
respective protocols.
username and password in the SMAPI
service access point (sap) definition
Defines the user name and password needed to
use the system management API of z/VM.
vmserve-address and vmserve-port Defines the IP address and port of the z/VM
host itself.
CPU, disk.size and memory.size Specifications of CPU, disk, and memory
resources available for server provisioning.
Note: Those values do not need to be set
manually, as they are updated during the z/VM
resource discovery.
Note: The 23_0_Cloud_Bootserver_zVM.xml file contains a template of a
BootServer Tivoli Provisioning Manager object and does not need to be
customized.
Customizing Data Center Model (DCM) items for KVM:
Customize the DCM templates before importing them to create a KVM hypervisor
cloud pool.
Procedure
1. Copy the 14_Cloud_NetworkSettings_KVM.xml file from the /install/files/DCM
directory on the Tivoli Service Automation Manager DVD to a local location.
2. Customize the 14_Cloud_NetworkSettings_KVM.xml file:
Table 28. The 14_Cloud_NetworkSettings_KVM.xml file customization:
Property Name Description
Adapter name (ic name=) Specify the name of the network adapter for
the management and customer network
templates.
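The customization amounts to setting the adapter name on the ic element of each network template. The sketch below is hypothetical: the adapter name eth0 and the enclosing network-interface element are placeholders, and the exact markup is that of the shipped template:

```xml
<!-- Hypothetical sketch: the network adapter name inside a
     management or customer network template for KVM. -->
<network-interface name="Management Network">
  <ic name="eth0" />
</network-interface>
```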
Importing Data Center Model (DCM) object templates
After customizing the appropriate DCM object templates for the hypervisor you
plan to configure, import them into Tivoli Provisioning Manager.
Before you begin
You must have:
v maxadmin rights.
v customized the DCM object templates appropriate for the hypervisors you want
to configure.
v read-access for all files to be imported.
Procedure
1. Enter the Cloud Pool Administration application by clicking: Go To > Service
Automation > Configuration > Cloud Server Pool Administration.
2. Click the Import DCM Objects... button.
3. Browse for and select all the DCM import XML files you customized.
Important: Import the templates according to their prefix number, from the
lowest to the highest.
What to do next
You can now proceed to configuring the cloud server pool.
Customizing cloud pool objects
As newly installed systems have no cloud pools present, you need to use the Tivoli
Service Automation Manager Cloud Server Pool Administration application to
create and configure cloud pool objects for them.
About this task
You can create a completely new cloud pool or use a preconfigured one. In both
cases you need to customize it to your needs using the Cloud Server Pool
Administration application. Work through all the sections in the Cloud Pool Details
tab, then validate/enable the cloud pool to make it available for provisioning
actions. The customization steps differ depending on hypervisor type, but there is
one general procedure that is common for all of them:
Procedure
1. To open the Cloud Server Pool Administration application, log on to the
administrative user interface and click Go To > Service Automation >
Configuration > Cloud Server Pool Administration.
2. Create a new cloud pool, or select a pool that was imported earlier from the
Cloud Pool Overview tab.
3. Type the parameters in the Pool Configuration Parameters section. If the pool
was imported from the properties file, many of the parameters are already
entered.
4. Create and configure host platforms (dependent on hypervisor type).
5. Discover the hypervisor manager to examine the back end and create
appropriate objects in the Data Center Model.
6. Discover image templates that are to be used for the provisioning of servers.
7. Create Image Library Resources (only for VMware and LPAR hypervisor
types).
8. Configure network parameters of the pool.
9. Validate and enable the pool.
Note: In Tivoli Service Automation Manager 7.2.0, a vrpool.properties file,
located in the /etc/cloud directory on the manager server, was used for the
administration of cloud server pools. In versions 7.2.1 and 7.2.2, it is used
only for importing preconfigured cloud server pools and in the migration
processes during the upgrade from version 7.2.0 to 7.2.1 or to 7.2.2.
Remember: If you have installed the IBM Tivoli Monitoring server in your
environment, you can configure Tivoli Service Automation Manager to include
the monitoring agent installation when it provisions virtual machines using
VMware. The configuration steps in the subsequent sections do not include
the configuration required for including the monitoring agent. This
configuration can be performed after completing the configuration of Tivoli
Service Automation Manager for VMware by following the configuration steps
described in Configuring the provisioning of the monitoring agent on page
256.
10. If the following error message is displayed during the validation: CTJZH2064E
- The global provisioning property Cloud.RSAFile is not configured
properly. Current value: __TIO_HOME_/keys/TSAM/identity, then the
Cloud.RSAFile global provisioning property is generated but not customized.
Perform the following steps to customize it:
a. Log on to the administrative user interface and click Go to >
Administration > Provisioning > Provisioning Global Settings.
b. Open the Variables tab and enter Cloud.RSAFile in the filter field.
c. Replace the string __TIO_HOME_ with the fully qualified path to the
directory of the Tivoli Provisioning Manager management server, for
example /opt/IBM/tivoli/tpm.
d. Click Save.
What to do next
To finish the customization, follow the procedures that apply to your hypervisor
type:
v VMware
v PowerVM
v KVM
v PowerVM with VMControl
Customizing a Tivoli Service Automation Manager cloud pool for VMware:
A cloud pool needs to be configured prior to using it for provisioning virtual
servers for VMware hypervisor.
Before you begin
It is required to have:
v the maxadmin role.
v the following data center model (DCM) import templates customized:
10_Cloud_Global_NetworkSettings.xml, 11_Cloud_NetworkSettings_VMware.xml.
See Data Center Model (DCM) object templates on page 191 for more
information.
Procedure
1. Add the VMware server trust certificates to WebSphere Application Server. The
VirtualCenter host certificate must be imported into the certificate store of the
Tivoli Service Automation Manager management server.
a. Stop the Tivoli Provisioning Manager:
su - tioadmin
$TIO_HOME/tools/tio.sh stop <wasadmin> <wasadmin password>
b. Copy the following certificates to the Tivoli Provisioning Manager server,
for example, to the path /tmp/SSL/.
ESX server
/etc/vmware/ssl/rui.crt or rui.cer
vCenter server
v Windows 2003: C:\Documents and Settings\All
Users\Application Data\VMware\VMware
VirtualCenter\SSL\*.crt or *.cer
v Windows 2008: C:\ProgramData\VMware\VMware
VirtualCenter\SSL
c. Import the certificates:
1) Log on to the management server as root and set up the $JAVA_HOME
variable as follows:
# export JAVA_HOME=/opt/IBM/WebSphere/AppServer/java/
2) Update the PATH environment variable and ensure that the keytool
command is from the $JAVA_HOME/jre/bin directory.
# export PATH=$JAVA_HOME/jre/bin:$PATH
3) Change directories.
# cd $JAVA_HOME/jre/lib/security
4) Run the following command and, when prompted, answer "yes" to the
question "Trust this certificate?":
keytool -import -v -trustcacerts -alias any-alias-name -file location-of-the-cert
-storepass changeit -keystore cacerts
where any-alias-name can be the host name of the vCenter or the ESX
server. For example,
keytool -import -v -trustcacerts -alias tpm711 -file /tmp/SSL/rui.crt -storepass
changeit -keystore cacerts
d. As tioadmin, start the Tivoli Provisioning Manager:
su - tioadmin
$TIO_HOME/tools/tio.sh start <wasadmin> <wasadmin password>
The certificate is now imported and communication with the VirtualCenter
is possible.
2. Enter the Cloud Pool Administration application by clicking: Go To > Service
Automation > Configuration > Cloud Server Pool Administration.
3. Import the customized data center model (DCM) templates:
a. Click Import DCM Objects....
b. Browse for and select the DCM import xml files you customized:
10_Cloud_Global_NetworkSettings.xml,
11_Cloud_NetworkSettings_VMware.xml.
Attention: When configuring a VMware cloud server pool, do not run any
Tivoli Provisioning Manager VMware discovery workflows to discover
virtual machines or images. To start a Tivoli Service Automation Manager
discovery workflow, use the Cloud Server Pool Administration application.
Running a Tivoli Provisioning Manager VMware discovery is incompatible
with Tivoli Service Automation Manager, destroys the Tivoli Provisioning
Manager relationships, and can lead to serious Tivoli Service Automation
Manager malfunction.
Important: The templates must be imported according to their prefix
number, from the lowest to the highest.
4. Create a new VMware Cloud Server Pool and execute all steps as described in
chapter Manually configuring cloud server pools for VMware on page 174.
Defining a new VMware virtual center:
After you have successfully defined a VMware resource pool, use the Cloud Server
Pool Administration application to define a new VMware virtual center.
About this task
It is necessary to perform this task in order to configure a new VMware cloud
pool.
Procedure
1. To open the Cloud Server Pool Administration application, log on to the
administrative user interface and click Go To > Service Automation > Cloud
Server Pool Administration.
2. Open the Virtual Center Configuration tab.
3. Click Define New Virtual Center.
4. Enter the necessary information about the Virtual Center Properties, Virtual
Center IP Details, and Virtual Center Login Details in the fields and click OK.
Results
A message is displayed that informs you about successful creation of the new
virtual center. The object name of the newly created virtual center is displayed in
the Hypervisor Manager field in the Cloud Server Pool Administration application.
Customizing a Tivoli Service Automation Manager cloud pool for PowerVM:
A cloud pool needs to be configured before using it for provisioning virtual servers
for PowerVM hypervisor. You must decide whether to use local partitionable disk
from single VIOS or external SAN storage with the dual VIOS capability.
Before you begin
v This task can only be performed with the maxadmin role.
v Verify that you customized the following data center model (DCM) import
templates: 10_Cloud_Global_NetworkSettings.xml,
12_Cloud_NetworkSettings_Systemp.xml. See Data Center Model (DCM) object
templates on page 191 for more information.
v By default, Tivoli Service Automation Manager resource reservation utilizes
system memory up to 85% on System p back end hypervisor. The remaining
15% is the amount of memory reserved for the hypervisor firmware. For more
information see Manually adjusting memory allocated to firmware.
Procedure
1. Click Go To > Service Automation > Cloud Pool Administration. The Cloud
Pool Administration application opens.
2. Import the customized data center model (DCM) templates:
a. Click Import DCM Objects....
b. Browse for and select the DCM import xml files you customized:
10_Cloud_Global_NetworkSettings.xml,
12_Cloud_NetworkSettings_Systemp.xml.
Important: Import the templates according to their prefix number, from the
lowest to the highest.
3. Create a new System p cloud server pool and perform all steps as described in
chapter Manually configuring cloud server pools for System p on page 179.
Manually adjusting memory allocated to firmware:
For each particular CEC, you can manually adjust the amount of memory that is
reserved for firmware.
About this task
By default, Tivoli Service Automation Manager resource reservation uses system
memory up to 85% on the System p back-end hypervisor. The remaining 15% is
the amount of memory reserved for the hypervisor firmware. The 85% value is
rounded down to the nearest multiple of 256 MB.
You can change this default setting manually by specifying the absolute amount of
memory to be reserved for firmware for each particular CEC. The specified amount
of memory is then considered to be allocated as hypervisor firmware memory and
is subtracted from the total amount of system memory. The remaining memory is
rounded down to the next multiple of 256 MB.
Note: Tivoli Service Automation Manager does not read the size of the allocated
firmware memory from the back end. Since this size is subject to change and it is
influenced by the way LPARs are defined with respect to the definition and usage
of the various virtual resources, the setting must be reviewed regularly.
Procedure
For each CEC for which you want to change the memory setting, add the
PMRDP.Power.Firmware.MemoryMB variable. The value of this variable must be an
integer, specified in MB.
To switch off the default handling that reserves 15% of memory for the
hypervisor firmware, set the PMRDP.Power.Firmware.MemoryMB parameter to a
value of 0. If the parameter is a negative integer, the default behavior of
subtracting 15% is used.
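A worked example of the rounding rule, using a hypothetical CEC size (the 32768 MB figure is an illustration, not a value from this guide):

```latex
% Hypothetical CEC with 32768 MB of installed memory.
% Explicit reservation, PMRDP.Power.Firmware.MemoryMB = 2048:
\left\lfloor \tfrac{32768 - 2048}{256} \right\rfloor \times 256 = 30720 \text{ MB usable}
% Default 15% reservation (parameter unset or negative):
\left\lfloor \tfrac{32768 \times 0.85}{256} \right\rfloor \times 256 = 27648 \text{ MB usable}
```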
Defining a new PowerVM resource pool:
Use the Cloud Server Pool Administration application to define a new resource
pool for the PowerVM hypervisor.
Procedure
1. To open the Cloud Server Pool Administration application, log on to the
administrative user interface and click Go To > Service Automation > Cloud
Server Pool Administration.
2. In the Cloud Server Pool Details tab, create a new cloud server pool of the
LPAR type.
3. Click Define New Resource Pool. A Define New Power VM Resource Pool
window is displayed.
4. Enter the name of the new resource pool.
5. Click OK.
Results
A message is displayed that informs you about successful creation of the new
resource pool. The name of the newly created resource pool is displayed in the
Resource Pool Name field in the Cloud Server Pool Administration application.
What to do next
Define a new HMC server, as described in the following task.
Defining a new HMC server:
After you have successfully created a resource pool, use the Cloud Server Pool
Administration application to define a new Hardware Management Console
(HMC) server.
Procedure
1. To open the Cloud Server Pool Administration application, log on to the
administrative user interface and click Go To > Service Automation > Cloud
Server Pool Administration.
2. Open the HMC Discovery tab.
3. Click Define New HMC Server. A Define New HMC Server window is
displayed.
4. Enter the required information about HMC Server Properties, HMC Server IP
Details, and HMC Server SSH Login Details in the fields.
5. Click OK.
Results
A message is displayed that informs you about successful creation of the new
HMC server. The name of the newly created HMC server is displayed in the HMC
Computer Name field in the Cloud Server Pool Administration application.
Note:
v Pay attention to the discovery timeout value. The value depends on how many
System p Central Electronic Complexes (CECs) are attached to the configured
HMC. Specify the name of the System p CEC you want to discover in the CEC
Name for Discovery (optional) field to reduce the time of the discovery.
v CECs that belong to one resource pool are not required to be managed by one
particular HMC. If more than one HMC is required to manage the CECs within
one resource pool, the HMC discovery has to be invoked multiple times on
respective HMCs. Then all CECs are discovered and are known in Tivoli
Provisioning Manager. Each CEC has a property specifying the name of the
HMC by which it is managed. Fail-over scenarios in which one CEC is managed
by more than one HMC are not supported by Tivoli Service Automation
Manager.
What to do next
Define a new NIM server, as described in the following task.
Defining a new NIM server:
After you have successfully defined a new HMC server, use the Cloud Server
Pool Administration application to define a new Network Installation Management
(NIM) server.
Procedure
1. To open the Cloud Server Pool Administration application, log on to the
administrative user interface and click Go To > Service Automation > Cloud
Server Pool Administration.
2. Open the NIM Discovery tab.
3. Click Define New NIM Server. A Define New NIM Server window is
displayed.
4. Enter the required information about NIM Server Properties, NIM Server IP
Details, and NIM Server SSH Login Details in the fields.
5. Click OK.
Results
A message is displayed that informs you about successful creation of the new NIM
server. The name of the newly created NIM server is displayed in the Image
Repository Host field in the Cloud Server Pool Administration application.
What to do next
Define a new boot server, as described in the next task.
208 Tivoli Service Automation Manager V7.2.4.4 Installation and Administration Guide
Defining a new boot server:
After you have successfully defined a new NIM server, use the Cloud Server
Administration application to define a new boot server.
Procedure
1. To open the Cloud Server Pool Administration application, log on to the
administrative user interface and click Go To > Service Automation > Cloud
Server Pool Administration.
2. Open the NIM Discovery tab.
3. Click Define New Boot Server. A Define New Boot Server window is
displayed.
4. Enter the required information about Boot Server Properties and Boot Server
IP Details in the fields.
5. Click OK.
Results
A message is displayed that informs you about successful creation of the new boot
server. The name of the newly created boot server is displayed in the NIM Boot
Server Name field in the Cloud Server Pool Administration application.
Customizing the PowerVM VIOS network configuration for the CEC:
As part of the cloud pool customization for PowerVM, you need to customize the
VIOS network for the Central Electronic Complex (CEC).
About this task
Depending on the network configuration type used in your environment, perform
one of the tasks described below.
Default network configuration:
For the default network configuration type with multiple NIC support enabled
(global variable PMRDP.Net.MultiNicSupport is set), follow the instructions in this
task.
Procedure
1. Set the VIO server pair used for network virtualization management on all
CECs assigned to the cloud pool. For every CEC:
a. Click Go To > IT Infrastructure > Provisioning Inventory > Provisioning
Computers.
b. Find the computer representing the CEC, and click it to open the
configuration details.
c. Select the Variables tab and click New Row to add a new variable.
d. In Variable, type VIOS.SET.NET or VIOS.SET, depending on whether the VIO
server pair is to be used only for network virtualization management, or
both network and storage virtualization management.
e. Select the Is Array? check box.
f. Click Add New Value to add one or more lists of VIO servers, containing
semicolon-separated values for: pair alias name; first VIOS name; second
VIOS name. For more information, see Table 29 on page 210.
2. Set trunk priorities for network virtualization VIO servers. For each network
VIO server:
a. Click Go To > IT Infrastructure > Provisioning Inventory > Provisioning
Computers.
b. Find the computer representing the VIOS, and click it to open configuration
details.
c. Select the Variables tab and click New Row to add a new property.
d. In Variable, type PMRDP.Net.TrunkPriority.
e. Set the value to the trunk priority of the VIO server. This is a numeric
value starting with 0 (for example: 0, 1, 2, ...). For more information
about trunk priority, see Table 29.
Example
Table 29. Customization of the VIOS settings for the discovered DCM objects and the
System p VIOS configuration for the CEC server objects
Property Name Description
VIOS.SET The default array property that enables
defining VIOS Sets on a CEC. It defines a
VIOS Set used by Storage and Networking.
Each entry in the array is composed of 3
values which are separated by a semicolon
(;) :
v The first value is a symbolic name.
v The second value is the DCM name of the
first VIOS in the VIOS Set.
v The third value is the DCM name of the
second VIOS in the VIOS Set. Optional for
a single VIOS configuration.
Example: test1;VIOS1;VIOS2.
All defined VIOS Set symbolic names must
be identical and resolvable on all CECs in
the resource pool.
VIOS.SET.NET An array property that enables defining
multiple VIOS Sets on a CEC. It defines a
VIOS Set used exclusively by Networking.
Each entry in the array is composed of 3
values which are separated by a semicolon
(;) :
v The first value is a symbolic name.
v The second value is the DCM name of the
first VIOS in the VIOS Set.
v The third value is the DCM name of the
second VIOS in the VIOS Set. Optional for
a single VIOS configuration.
Example: testnet1;VIOS3;VIOS4.
All defined VIOS Set symbolic names must
be identical and resolvable on all CECs in
the resource pool.
Table 29. Customization of the VIOS settings for the discovered DCM objects and the
System p VIOS configuration for the CEC server objects (continued)
Property Name Description
VIOS.SET.SAN An array property that enables defining
multiple VIOS Sets on a CEC. It defines a
VIOS Set used exclusively by storage. Each
entry in the array is composed of 3 values
which are separated by a semicolon (;) :
v The first value is a symbolic name.
v The second value is the DCM name of the
first VIOS in the VIOS Set.
v The third value is the DCM name of the
second VIOS in the VIOS Set. Optional for
a single VIOS configuration.
Example: testsan1;VIOS_5;VIOS_6.
All defined VIOS Set symbolic names must
be resolvable on all CECs in the resource
pool.
PMRDP.Net.TrunkPriority Defines the trunk priority used when virtual
Ethernet devices are created. This value
must be unique across the VIOS on a CEC.
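The VIOS Set entry format from Table 29 (a symbolic alias plus one or two DCM VIOS names, separated by semicolons) and the rule that alias names must resolve on every CEC in the pool can be sketched in a few lines. This is an illustrative sketch only; the function names and data shapes are assumptions, not part of the product:

```python
def parse_vios_set(entry):
    """Split a VIOS Set entry of the form 'alias;VIOS1[;VIOS2]'.

    Returns (alias, first_vios, second_vios_or_None). The second VIOS
    is optional for a single-VIOS configuration, as noted in Table 29.
    """
    parts = [p.strip() for p in entry.split(";")]
    if len(parts) not in (2, 3) or not all(parts[:2]):
        raise ValueError("entry must be 'alias;VIOS1' or 'alias;VIOS1;VIOS2': %r" % entry)
    alias, first = parts[0], parts[1]
    second = parts[2] if len(parts) == 3 and parts[2] else None
    return alias, first, second

def check_pool_aliases(cec_to_entries):
    """Report, per CEC, the VIOS Set aliases that other CECs in the pool
    define but this CEC does not (all aliases must resolve everywhere)."""
    alias_sets = {cec: {parse_vios_set(e)[0] for e in entries}
                  for cec, entries in cec_to_entries.items()}
    all_aliases = set().union(*alias_sets.values())
    return {cec: all_aliases - aliases
            for cec, aliases in alias_sets.items()
            if all_aliases - aliases}

# Example value from Table 29:
print(parse_vios_set("test1;VIOS1;VIOS2"))  # ('test1', 'VIOS1', 'VIOS2')
```

An empty result from the alias check means every symbolic name is defined on every CEC, which is what the validation of the resource pool expects.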
Legacy network configuration:
For legacy network configuration, with all pool components connected to one
subnetwork, follow the instructions in this task.
About this task
Add the variable Cloud.Subnetwork to the Resource Pool to connect the Resource
Pool with the management network VLAN ID:
Procedure
1. Click Go To > Service Automation > Cloud Pool Administration.
2. Filter for the PowerVM cloud pool, and open it.
3. To open the resource pool configuration, in Resource Pool Name, open the
detail menu, and select Go To Resource Pools.
4. Open the Variables tab.
5. Click New Row to add a new variable.
6. In Variable, type Cloud.Subnetwork.
7. In Value, type the name of the Tivoli Provisioning Manager subnetwork that
corresponds to the management subnetwork.
Customizing a Tivoli Service Automation Manager cloud pool for KVM:
A cloud pool needs to be configured prior to using it for provisioning virtual
servers for KVM hypervisor.
Before you begin
You must have:
v maxadmin role.
v customized the following data center model (DCM) import templates:
00_Cloud_Global_Properties.xml, 10_Cloud_Global_NetworkSettings.xml,
14_Cloud_NetworkSettings_KVM.xml, 34_Cloud_Pool_KVM.xml. See Data Center
Model (DCM) object templates on page 191 for more information.
Procedure
1. Open the Cloud Server Pool Administration application by clicking Go To >
Service Automation > Configuration > Cloud Server Pool Administration.
2. Import the customized data center model (DCM) templates:
a. Click the Import DCM Objects... button.
b. Browse for and select the DCM import xml files you customized:
00_Cloud_Global_Properties.xml, 10_Cloud_Global_NetworkSettings.xml,
14_Cloud_NetworkSettings_KVM.xml, 34_Cloud_Pool_KVM.xml
Important: The templates must be imported according to their prefix
number, from the lowest to the highest.
3. Create a new cloud pool for hypervisor type KVM:
a. Click New Cloud Server Pool in the task bar. The Cloud Server Pool
Details tab will display.
b. Enter a Cloud Server Pool Name.
c. Select a Hypervisor Type of KVM.
d. Select a Resource Pool name in the Resource Pool Configuration tab. If you
imported the default resource pool using the customized DCM files for
KVM, the name is 'Bare Metal KVM Pool'.
Note: Because the imported cloud server pools already have many parameters
set, it is recommended to use them instead of creating completely new ones. If
you used the vrpool.properties to import the preconfigured KVM cloud
pool, select it from the Cloud Pool Overview tab of the Cloud Server Pool
Administration application.
e. Click the Save button to save your changes.
4. Install the KVM boot server in the Boot Server Configuration and Image
Discovery tab:
a. Select the Target Computer by opening the detail menu and choosing
'Select Value'.
b. Enter the Image Directory in which the new created server images will be
stored.
c. Click the Install Boot Server button. Refresh until the discovery finishes
and a value is displayed in Status.
5. In the same tab, discover KVM images using the KVM Image Discovery
section:
a. Select an Image Repository Host by opening the detail menu and choosing
'Select Value'.
b. Click the KVM Image Discovery button. Refresh until the discovery
finishes and a value is displayed in Status.
6. Optionally, define the file repository in the Additional Resources tab.
7. Configure the Save/Restore settings:
a. Define the maximum number of virtual server images that can be created,
or whether an unlimited number of images is allowed.
b. Specify whether the IP address which was assigned to the virtual server
during provisioning should be blocked for the saved image and not used
for new provisioning requests.
c. Click the Define New Saved Image Repository button to create a new
Saved Image Repository object. On the KVM Saved Image Repository
panel, enter the name of the new repository, the computer that hosts the
repository, and the path to which images will be saved.
d. Click the Define New Instance Image Repository button to create a new
Instance Image Repository object. On the KVM Instance Image Repository
panel, enter the name of the new repository.
8. Create a KVM host using the Host Platform Configuration tab:
a. Enter the IP address, MAC address, Network Mask, and Password for the
KVM host. Save the changes.
b. Click the Create KVM Host button. Refresh until the discovery finishes
and a value is displayed in Status.
9. Validate and enable the cloud server pool:
a. Select the Cloud Server Pool Details tab.
b. Under the Cloud Server Pool Name and Type section, click the Validate
and Enable Cloud Server Pool button. The system verifies that the
necessary configuration parameters were set and that all workflows were
run and completed successfully.
c. Enable the cloud server pool by clicking the Save button.
10. Review the object references in the Resources Overview section and the
variables in the Resource Pool tab.
Note: The object references and variables are set during the 'Validate and
Enable' action. Changes that you make in the cloud server pool configuration
(for example, select a different Resource Pool, Hypervisor Manager, Image
Repository Host, File Repository, or change the network configuration) will
not become active until you click the Validate and Enable Cloud Server Pool
button after the changes are made.
Defining a new KVM resource pool:
Use the Cloud Server Pool Administration application to define a new resource
pool for the KVM hypervisor.
Procedure
1. To open the Cloud Server Pool Administration application, log on to the
administrative user interface and click Go To > Service Automation > Cloud
Server Pool Administration.
2. In the Cloud Server Pool Details tab, create a new cloud server pool of the
KVM type.
3. Click Define New Resource Pool. A Define New KVM Resource Pool window
is displayed.
4. Enter the name of the new resource pool in the Resource Pool Name field and
specify the path of the VM image in the VM Images Location field.
5. Click OK.
Results
A message is displayed that informs you about successful creation of the new
resource pool. The name of the newly created resource pool is displayed in the
Resource Pool Name field in the Cloud Server Pool Administration application.
What to do next
Define a new KVM image server, as described in the next task.
Defining a new KVM image server:
Use the Cloud Server Pool Administration application to define a new image
server for the KVM hypervisor.
Procedure
1. To open the Cloud Server Pool Administration application, log on to the
administrative user interface and click Go To > Service Automation > Cloud
Server Pool Administration.
2. Open the Image Template Discovery tab.
3. Click Define New KVM Image Server. A Define New KVM Image Server
window is displayed.
4. Enter the required information about KVM Image Server Properties, KVM
Image Server IP Details, KVM Image Server SSH Login Details, and KVM
Image Server SCP Login Details in the fields.
5. Click OK.
Results
A message is displayed that informs you about successful creation of the KVM
image server. The name of the newly created image server is displayed in the
Image Repository Host field in the Cloud Server Pool Administration application.
What to do next
Define a new KVM file repository, as described in the next task.
Defining a new KVM file repository:
After you have successfully defined a KVM image server, use the Cloud Server
Pool Administration application to define a new file repository for the KVM
hypervisor.
Procedure
1. To open the Cloud Server Pool Administration application, log on to the
administrative user interface and click Go To > Service Automation > Cloud
Server Pool Administration.
2. Open the Additional Resources tab.
3. Click Define New File Repository. A Define New KVM File Repository
window is displayed.
4. Enter the required information about KVM File Repository Details and KVM
File Repository IP Details in the fields.
5. Click OK.
Results
A message is displayed that informs you about successful creation of the file
repository. The name of the newly created file repository is displayed in the File
Repository Name field in the Cloud Server Pool Administration application.
Customizing a Tivoli Service Automation Manager cloud pool for IBM Systems
Director VMControl:
Before you can use a cloud pool to provision virtual servers for the IBM Systems
Director VMControl hypervisor, the cloud pool must be configured.
Before you begin
To configure the cloud pool, you must have the maxadmin role. Also, customize
the following data center model (DCM) import template:
10_Cloud_Global_NetworkSettings.xml. See Data Center Model (DCM) object
templates on page 191 for more information.
About this task
You use this task to add the VMControl server certificate to WebSphere Application
Server.
Procedure
1. Log on as tioadmin and run tio.sh stop to stop Tivoli Provisioning Manager.
2. Copy the keystore file to the Tivoli Provisioning Manager server, for example to
the path /tmp/SSL/.
VMControl server:
/opt/ibm/director/lwi/security/keystore/ibmjsse2.jks
3. To import the certificates:
a. Log on to the management server as root and set up the $JAVA_HOME
variable as follows:
# export JAVA_HOME=/WAS_HOME/IBM/WebSphere/AppServer/java/
b. Update the PATH environment variable so that it includes the
$JAVA_HOME/jre/bin directory to ensure that the proper keytool command is
run.
# export PATH=$JAVA_HOME/jre/bin:$PATH
c. Change directories to:
# cd $JAVA_HOME/jre/lib/security
d. Run the following commands, and when prompted, answer "yes" to the
question "Trust this certificate?"
keytool -export -alias lwiks -storepass ibmpassw0rd -file client.cer \
  -keystore <location of ibmjsse2.jks in TPM server>
keytool -import -v -trustcacerts -alias lwiks -file client.cer \
  -storepass changeit -keystore cacerts
4. As tioadmin, run tio.sh start to start Tivoli Provisioning Manager:
cd $TIO_HOME/tools
./tio.sh start
The certificate is now imported and communication with VMControl is possible.
5. Create a new VMControl Cloud Server Pool and execute all steps as described
in chapter Manually configuring cloud server pools for VMControl on page
185.
Configuring PowerVM SAN disks for VIOS support using MPIO
Both single and dual Virtual I/O server (VIOS) environments can be configured on
System p.
About this task
Note: System p Blade environment has one integrated virtual management console
(IVM) for each blade which is a part of a single VIO server. Multi path I/O (MPIO)
is only possible on Fibre Channel adapter cards.
Important: If you want to use SAN storage in a System p environment, you must
install the same version of the SDDPCM I/O driver on all involved VIO servers.
Tivoli Service Automation Manager supports only PVID identification of the
configured disks, so the storage discovery automatically generates a new PVID
on all disks that do not already have one. For existing LPARs, this might
interfere with other applications that manage the PVIDs of the disks
themselves and might lead to data loss. It is therefore recommended to move
all LPARs from the CEC and unmap all SAN disks from the involved VIO servers.
You can skip this step if you know that the PVIDs are not managed by any
external application installed on the existing LPARs. Install SDDPCM driver
version 2.5 or later. This applies regardless of whether SAN Volume Controller
is used to map the SAN storage volumes as disks. The interface to query
storage capabilities is not standardized, and even the returned values differ.
Therefore, the disks to be used by Tivoli Service Automation Manager must be
connected to the VIO server by using the SDDPCM driver.
Planning for SAN storage
By configuring pre-allocated SAN disks you can connect to a virtual I/O server
(VIOS) and be storage vendor independent.
You need to define the size of the pre-allocated disks, because the root
volume group (rootVG) is created on a single SAN disk. The size must
encompass the operating system itself, swap space, and additional space for
typical applications to be located in the rootVG. Check your AIX master image
sizes and requirements to determine disk size requirements.
If you are planning to use multiple CECs in one cloud server pool, make sure that
all disks you are planning to use from your storage pool are mapped to each VIO
on each CEC. In other words, all VIO servers used in a particular server pool must
be able to use the same set of SAN disks which in turn means that when
configuring the storage subsystem, each SAN disk that is intended to be used by
Tivoli Service Automation Manager must be mapped to all VIO servers in the
server pool. The reason is that the Tivoli Service Automation Manager LPAR
placement algorithm does not consider disk mappings. The VIO servers are
grouped into VIOS sets that are used to define IO peers. Each VIOS set can consist
of one or more VIO servers. Ensure that you have defined
reservation_policy=no_reserve when backing the SAN disks to the VIO servers.
Neither a host platform (the pFrame or System p CEC), nor any of the mapped
SAN disks can be shared between different Tivoli Service Automation Manager
resource pools. The storage volume identifiers are unique and cannot exist in
different storage pools.
Note:
1. When planning storage, keep in mind that the disk size for storage from MPIO
mapped SAN disks cannot be modified later.
2. For SAN disk attachments that can be reached via different access paths (multi
path IO), Tivoli Service Automation Manager configures each access path to a
particular SAN disk with a default value 1. If a particular SAN environment
requires a different setup, the local administrator must ensure proper
configuration, for example by use of AIX first boot scripts.
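The pool-wide mapping requirement described in this planning section (every SAN disk intended for the pool must be mapped to every VIO server in the pool, because the LPAR placement algorithm does not consider disk mappings) can be checked mechanically before enabling the pool. A minimal sketch, assuming a simple inventory mapping rather than any product API:

```python
def unmapped_disks(vios_to_disks):
    """Given {vios_name: set_of_visible_san_disk_ids}, report which disks
    are not visible from every VIO server. A non-empty result means the
    pool-wide mapping requirement described above is violated."""
    all_disks = set().union(*vios_to_disks.values())
    return {vios: sorted(all_disks - seen)
            for vios, seen in vios_to_disks.items()
            if all_disks - seen}

# Hypothetical inventory gathered from the storage subsystem:
inventory = {
    "vios_a1": {"disk1", "disk2", "disk3"},
    "vios_a2": {"disk1", "disk2", "disk3"},
    "vios_b1": {"disk1", "disk2"},          # disk3 not mapped here
}
print(unmapped_disks(inventory))  # {'vios_b1': ['disk3']}
```

An empty result confirms that all VIO servers in the pool can use the same set of SAN disks, as the placement algorithm requires.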
Enabling VIOS support
Set the variable PMRDP.Systemp.dual.VIOS.support to true in the Cloud Server
Pool Administration application to enable VIOS support.
Before you begin
All pre-system setup steps should be performed.
Note: Some configurations described here may have already been set through the
import of pre-configured DCM XML files or the Cloud Pool Administration
application.
Procedure
1. Click: Go To > Service Automation > Configuration > Cloud Server Pool
Administration.
2. Select the System p cloud server pool for which you want to enable VIOS
support.
3. Switch to the Host Platform and VIOS Configuration tab.
4. Select the SAN Storage / Multiple VIOS Mode check box.
What to do next
Configure the VIOS sets for storage mapping.
Configuring cloud networks
Configure your network by performing this sequence of tasks.
Before you begin
Familiarize yourself with the Planning the network configuration on page 116
section of this documentation and make sure that you know which network
scenario you want to implement.
Important: For VMware, if you make any changes in the virtual center, you must
run the Tivoli Service Automation Manager virtual center discovery again so that
all the networking changes can happen in Tivoli Provisioning Manager Data
Center Model. If you change the switch names, you must also update the switch
names in the virtual switch templates in which the old switch names were used.
Setting up the network related Data Center Model (DCM)
objects
Learn how to create two types of DCM objects.
Procedure
Create two types of DCM objects necessary for network configuration:
v Subnetwork: Defines Layer 3 configuration of a network interface within the
deployed operating system.
v Switch: Defines Layer 2 connectivity of the network adapter within a virtual
machine to the hypervisor virtual switches.
A set of sample DCM import files for hypervisors and resources is provided with
Tivoli Service Automation Manager:
Table 30. Sample Data Center Model import files.
For a subnetwork
10_Cloud_Global_NetworkSettings.xml
Defines subnetworks for the management and customer
network.
For a switch
11_Cloud_NetworkSettings_VMware.xml
Virtual switch template for VMware.
12_Cloud_NetworkSettings_Systemp.xml
Virtual switch template for System p.
13_Cloud_NetworkSettings_zVM.xml
Virtual switch template for z/VM.
14_Cloud_NetworkSettings_KVM.xml
Virtual switch template for KVM.
15_Cloud_NetworkSettings_XEN.xml
Virtual switch template for XEN.
Important: These DCM files are just templates. Modify them for your environment
before you import them.
Related concepts:
Definitions for DCM objects on page 141
Lists the definitions for DCM objects involved in the network support
configuration.
Defining network segment usage values
By defining network segment usage values, you can map network segments onto
resource pools and specific images, and distinguish between network interfaces of
the same kind.
About this task
Network segment usage values are customer-definable attributes. The following list
contains the scenarios that you can realize:
v Mapping network segments onto resource pools.
v Mapping network segments onto specific images.
v Distinguishing between network interfaces of the same kind.
Procedure
1. Log on to the administrative user interface.
2. Select Go To > System configuration > Platform configuration > Domains to
access the application that helps you to create and modify network segment
usage values.
3. Modify the PMZHBNETSEGUSAGE domain and save the changes.
What to do next
The following section contains example scenarios that show how you can use
network segment usage values.
Network segment usage values
Network segment usage values can be modified and added using the Domains
application, by modifying the PMZHBNETSEGUSAGE domain.
Values entered in this domain are displayed in the self-service user interface after
saving and performing a refresh.
Note: You must enter network segment usage values before you import the
network template that is using them.
The values are used in:
v Network segment definition
v Image registration
With network segment usage values, you can further restrict the results of the
image to network configuration resolution during the Create Project and Add
Server requests. The following algorithm is used during network configuration:
v For each network interface defined on the deployable image, the algorithm
checks whether network segments of the same type are available. If they are
available, a network segment is displayed on the list of network segments
possible for the given network interface of the image.
v If a network usage value is defined on the image, only those network segments
are displayed that have at least one of the usage values of the image.
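The resolution algorithm above amounts to a two-stage filter: first by network segment type, then by shared usage values. The following sketch assumes simple dictionary and set shapes for segments and image usage values; these shapes are illustrative, not the product data model:

```python
def eligible_segments(interface_type, image_usage_values, segments):
    """Return the names of the network segments offered for one image interface.

    segments: list of dicts like {"name": ..., "type": ..., "usage": set_of_values}.
    First keep segments matching the interface type; then, if the image
    defines usage values, keep only segments sharing at least one of them.
    """
    result = [s for s in segments if s["type"] == interface_type]
    if image_usage_values:
        result = [s for s in result if s["usage"] & image_usage_values]
    return [s["name"] for s in result]

# Hypothetical segment definitions using the ESXPool / Windows examples:
segments = [
    {"name": "mgmt1", "type": "Management", "usage": set()},
    {"name": "cust-esx", "type": "Customer", "usage": {"ESXPool"}},
    {"name": "cust-win", "type": "Customer", "usage": {"Windows"}},
]
print(eligible_segments("Customer", {"ESXPool"}, segments))  # ['cust-esx']
```

With no usage values on the image, all segments of the matching type are offered; with usage values, the list narrows, which is what the scenarios below rely on.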
Three kinds of scenarios can be realized on the basis of the algorithm. Choose
which scenario you want to implement.
Relating an image to a hypervisor-specific network configuration
Perform these steps to map an image onto a hypervisor-specific network
configuration resource pool.
Procedure
1. Define a usage value and name it after the resource pool that you want to use.
An example of such a value is ESXPool.
2. During image registration, select the usage value you have defined for each
required network interface.
3. In the network template, define a set of network segments that contain the
Usage tag with your chosen value. When the image is deployed, only those
network segments are shown in the self-service user interface that have the
defined usage tag.
Defining a specific network configuration for a class of images
Follow these steps to map a network configuration to a set of images.
Procedure
1. Define a usage value and name it after the operating system type of the image.
An example of such a value is Windows.
2. During image registration, select the usage value you have defined for each
required network interface.
3. In the network template, define a set of network segments that contain the
Usage tag with your chosen value. When the image is deployed, only those
network segments are shown in the self-service user interface that contain the
defined usage tag.
Distinguishing between network interfaces of the same type
Follow these steps to map two customer networks onto different network
segments. This scenario is helpful if you have multiple customer networks that
have some special meaning in your setup.
Procedure
1. Define two usage values, for example FrontendNetwork and BackendNetwork.
2. Select the FrontendNetwork value for the first customer network interface.
3. Select the BackendNetwork value for the second customer network interface.
4. In the network template, define the network segments that contain the Usage
tag specified with your chosen value. When the image is deployed, only those
network segments are shown in the self-service user interface that contain the
defined usage tag. With this approach, you can distinguish between multiple
network segments that have the same type (for example Customer) but have
different meanings in your setup.
Results
When the virtual machine is provisioned, both customer network interfaces have
different network configurations.
Creating a network template
Learn how to create network templates so that you can assign network resources
to customers.
About this task
Note: The following instructions describe the configuration of network templates
only. The network template is instantiated upon onboarding of a new customer.
During customer onboarding, a copy of the network template data is created, and
assigned to the customer as "customer network instance". This configuration is
used during subsequent "Create project" requests for this customer. Changing the
template does not affect the network instances. To modify the network
configuration instances, see Network configuration instance management on
page 299.
A network template is the main network artifact that enables you to assign
network resources to customers. It is an XML document that conforms to the Tivoli
Service Automation Manager schema.
A set of sample network templates and a network template schema are available
on the installation media in the following location: \install\files\
NetworkConfigurations:
Table 31. Sample network templates and network template schema.
XML schema file Sample network templates
PMZHB_NetworkConfiguration.xsd
This file validates network
templates.
PMZHB_SingleNicNetworkTemplate.xml
This network template defines a single Network Interface Card
(NIC) scenario with one management network. It uses the DCM
definitions provided with the product.
PMZHB_DualNicNetworkTemplate.xml
This network template defines a dual NIC scenario with one
management network and one customer network. It uses the DCM
definitions provided with the product.
Both examples are bound to the subnetwork definitions provided in the DCM
network samples. The templates have sample data for the DNS configuration.
Modify the DNS section and update the DNS configuration before you import the
network template.
Remember: Keep in mind the following rules when creating a network
configuration XML.
v A network configuration XML must be encoded in UTF-8 (Unicode).
v Network segment names must be unique within a network template.
v Subnetwork names must be unique within a network segment.
v If network segment usage values are used, they must be registered in the
PMZHBNETSEGUSAGE domain.
v If DCM references are specified, check that each referenced DCM object exists.
Remember to check the following references:
- Subnet maps to the DCM subnetwork
- VLAN maps to the DCM VLAN
- Gateway and DNS map to the provisioning computer system (server)
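Two of the uniqueness rules above (network segment names unique within a template, subnetwork names unique within a segment) can be checked mechanically before importing a template. In the following sketch, the element and attribute names (NetworkSegment, Subnetwork, name) are hypothetical placeholders; the authoritative names are defined by the PMZHB_NetworkConfiguration.xsd schema:

```python
import xml.etree.ElementTree as ET
from collections import Counter

def check_unique_names(xml_text):
    """Check the uniqueness rules listed above on a network configuration XML.

    The element/attribute names used here are illustrative only; adapt them
    to the actual tags defined by PMZHB_NetworkConfiguration.xsd."""
    root = ET.fromstring(xml_text)
    problems = []
    seg_names = [s.get("name") for s in root.iter("NetworkSegment")]
    problems += ["duplicate segment: " + n
                 for n, c in Counter(seg_names).items() if c > 1]
    for seg in root.iter("NetworkSegment"):
        sub_names = [s.get("name") for s in seg.iter("Subnetwork")]
        problems += ["duplicate subnetwork in %s: %s" % (seg.get("name"), n)
                     for n, c in Counter(sub_names).items() if c > 1]
    return problems

doc = """<NetworkConfiguration>
  <NetworkSegment name="mgmt"><Subnetwork name="s1"/><Subnetwork name="s1"/></NetworkSegment>
</NetworkConfiguration>"""
print(check_unique_names(doc))  # ['duplicate subnetwork in mgmt: s1']
```

An empty result means the template satisfies these two rules; the remaining rules (UTF-8 encoding, registered usage values, existing DCM references) still need to be checked against the environment.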
To create a network template:
Procedure
1. Create a network configuration section.
2. Define at least one network segment that contains one subnetwork definition.
3. Define a DNS configuration with one server IP address and one domain entry.
An example of such a configuration is PMZHB_SingleNicNetworkTemplate.xml.
Note: Network template schemas contain additional data types and entities
that are available for extensibility scenarios. It is not necessary to set them for a
standard Tivoli Service Automation Manager network configuration.
The following sections of a network segment are optional:
v Define a DNS configuration with one server IP address and with one domain
entry
v Additional subnetwork definitions
If you use network segment usage values in the network template XML file,
enter them in the domain (PMZHBNETSEGUSAGE) before you import the network
template. Otherwise, an error message is displayed saying that the network
segment usage values could not be found.
Related concepts:
Network templates on page 116
A network template defines the network resources that are available to a customer.
Network schema on page 424
The network schema is an xsd file that defines the XML tags and the values
supported for these tags that can appear in network configuration XML files.
Creating a network template using the Cloud Network
Administration application
You can use the Cloud Network Administration application to create and modify
network templates.
Procedure
1. Log on to the administrative user interface.
2. Click Go to > Service Automation > Configuration > Cloud Network
Administration.
3. Click New Network Configuration on the toolbar. A new network
configuration template is created.
4. In the Network Details tab, you must first specify the name of the new
network template. After you specify the name, the rest of the configuration
options become available in the Network Template Customization section.
Note: After you specify the network template name, the Name field becomes
read-only and cannot be changed.
5. Specify at least one network segment of type Management.
6. Associate at least one subnetwork with the management network.
7. Click Save.
Note: Depending on your needs, you can also:
v Click New Network Segment to add additional network segments to your
network template.
v Click New Subnetwork Reference to add a subnetwork reference to one of
the network segments.
v Click the trash bin icon to delete the network segment or the subnetwork
reference located in the same row.
Creating a customer and assigning a network template
Create a customer using the Create Customer offering.
About this task
The Create Customer offering is the only time during the customer on-boarding
process when you can specify a network template for the customer.
When running this offering, you can use a menu to select a single active network
template and assign it to the customer that is created. A network template must be
selected at this time.
During the Create Customer workflow, a customer network instance is created.
The Cloud Network Administration application shows these instances and helps
you to modify them for an individual customer.
Perform the following steps to create a new customer:
Procedure
1. Log on to the administrative user interface.
2. Click Go to > Service Automation > Configuration > Cloud Customer
Administration to open the Cloud Customer Administration application.
3. Click New Customer on the toolbar and specify the required information.
Configuring distributed virtual switches
Use the distributed virtual switch feature available in VMware vSphere 4 to create
port groups and connect the virtual machines to them during the provisioning
process.
Before you begin
VMware distributed virtual switches (vDS) provide additional features over the
earlier virtual standard switches, for example private VLANs. In some cases it is
also easier to use and maintain distributed virtual switches than virtual standard
switches.
After you provide the correct input in the configuration DCM files, Tivoli Service
Automation Manager creates a port group based on the configuration present in
the associated switch templates. Before setting up this kind of configuration, make
sure that:
v The VMware cluster used for the resource pool is configured and compatible
with the distributed virtual switches feature.
v The VMware backend environment is configured accordingly for the
distributed virtual switches.
v The VMware switch template objects in the administrative user interface, for
example VMSwitchTemplate1, are updated according to the new distributed
virtual switch parameters, or the 11_Cloud_NetworkSettings_VMware.xml
DCM XML file is edited for the same purpose. For more information about the
file, see Customizing Data Center Model (DCM) items for VMware on page
197.
v The names used for the virtual standard switches are not reused in the
distributed virtual switches. Use different bridge name prefix values. To prevent
this kind of configuration error, a suffix is added to the generated distributed
virtual switch name, in addition to the prefix.
Note: Tivoli Service Automation Manager uses the VMware distributed switch API
to configure the Cisco Nexus 1000v virtual switch. Due to this fact, only a subset of
the Cisco Nexus 1000v features is available. For more details, see the virtual switch
configuration properties.
The configuration can be performed in two different ways.
Procedure
v Modify the virtual switch template object already created and configured for
VMware in the administrative user interface.
1. Log on to the administrative user interface.
2. Click Go To > IT Infrastructure > Provisioning Inventory > Switches.
3. Click the Variables tab.
4. Add the new distributed virtual switches parameter with the correct values,
as mentioned in the 11_Cloud_NetworkSettings_VMware.xml DCM XML file.
5. Click Save Switch.
v Edit the 11_Cloud_NetworkSettings_VMware.xml DCM XML file.
A new section of properties is added in the file:
<!-- Distributed Virtual Switch parameters -->
<!-- This parameter is set to 1 in case of using Distributed Virtual Switch -->
<!-- <property component="KANAHA" name="PMRDP.Net.IsVDS" value="1"/> -->
<!--The following properties specify the Suffix value for the port group name.
To properly distinguish it from standard switches, the default is "-DVS". Set it to NOSUFFIX in case it is not required (not recommended) -->
<!--<property component="KANAHA" name="PMRDP.Net.BridgeSuffixName" value="-DVS"/> -->
<!-- This parameter is set to 1 in case of using Cisco Nexus 1000v Virtual Switch -->
<!-- <property component="KANAHA" name="PMRDP.Net.IsNexus" value="1"/> -->
<!-- This parameter needs to be set to 1 in case of using a distributed virtual switch and a private VLAN -->
<!-- <property component="KANAHA" name="PMRDP.Net.IsPVLAN" value="1"/> -->
<!-- This parameter specifies the number of ports to be created on the new PortGroup (optional). Default is 128 -->
<!-- <property component="KANAHA" name="PMRDP.Net.VDS.NoOfPorts" value="128"/> -->
<!-- Teaming and Failover Policy (Optional). If you want to set custom teaming and failover policies on the port groups
like failback and ActiveUplinks, StandbyUplinks, UnusedUplinks, then set the below respective properties. Failback must be set
(default: Yes) and all three uplinks must be set or none at all (by default all uplinks are added to ActiveUplinks).
Note: Users can choose to set only the PMRDP.Net.TeamingFailoverPolicy parameter to set these four attributes.
If this is set, the other four are not required and they are ignored. -->
<!-- This parameter specifies the Failback property on the newly created Port group, set 1 for Yes (default) and 0 for No -->
<!-- <property component="KANAHA" name="PMRDP.Net.VDS.Failback" value="1"/> -->
<!-- This parameter specifies the ActiveUplinks on the newly created Portgroup. For more than one uplink use colon ":" as delimiter, e.g. dvUplink1:dvUplink2 -->
<!-- <property component="KANAHA" name="PMRDP.Net.VDS.ActiveUplinks" value="dvUplink1"/> -->
<!-- This parameter specifies the StandbyUplinks on the newly created Portgroup. For more than one uplink use colon ":" as delimiter, e.g. dvUplink1:dvUplink2 -->
<!-- <property component="KANAHA" name="PMRDP.Net.VDS.StandbyUplinks" value="dvUplink2"/> -->
<!-- This parameter specifies the UnusedUplinks on the newly created Portgroup. For more than one uplink use colon ":" as delimiter, e.g. dvUplink1:dvUplink2 -->
<!-- <property component="KANAHA" name="PMRDP.Net.VDS.UnusedUplinks" value="dvUplink3:dvUplink4"/> -->
<!-- This parameter specifies the Teaming and Failover properties for the newly created Port group. -->
<!-- <property component="KANAHA" name="PMRDP.Net.VDS.TeamingFailoverPolicy" value="0,dvUplink1,dvUplink2,dvUplink3:dvUplink4"/> -->
1. According to your needs, enable the properties by removing the comment
tags <!-- and --> where necessary.
2. Save the file.
3. Import the file during the configuration procedure. You can do it in two
ways:
v Using the Cloud Network Administration application (recommended)
v Using the Cloud Server Pool Administration application
When you run the Create project request or the Add server request, the
virtual machines are provisioned successfully and connected to port groups
created on the distributed virtual switch. If the port group on the
distributed virtual switch is not present, a new port group is created for the
virtual machine to connect to.
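For example, a configuration that provisions port groups on a distributed virtual switch with a private VLAN and a combined teaming and failover policy might uncomment the properties as follows. The values shown are illustrative assumptions for one environment, not defaults to copy verbatim:

```xml
<!-- Distributed virtual switch is in use in this environment -->
<property component="KANAHA" name="PMRDP.Net.IsVDS" value="1"/>
<!-- Keep the default "-DVS" suffix to distinguish generated port group names -->
<property component="KANAHA" name="PMRDP.Net.BridgeSuffixName" value="-DVS"/>
<!-- A private VLAN is configured on the switch -->
<property component="KANAHA" name="PMRDP.Net.IsPVLAN" value="1"/>
<!-- One combined policy: failback flag, active, standby, unused uplinks -->
<property component="KANAHA" name="PMRDP.Net.VDS.TeamingFailoverPolicy"
          value="0,dvUplink1,dvUplink2,dvUplink3:dvUplink4"/>
```

Because PMRDP.Net.VDS.TeamingFailoverPolicy is set here, the four individual failback and uplink properties are left commented out and would be ignored anyway.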
Enabling IPv6 addressing support
Configure your network environment for IPv6 dual stack support.
Procedure
1. Select the customer networks that are to be enabled for IPv6 support and
upgrade them to support IPv6.
2. Select the provisioning images that are to have IPv6 support and enable them
for IPv6 auto configuration.
3. Run provisioning requests with the IPv6 enabled images to verify that the auto
configuration is successful and that the IPv6 network stacks are configured
correctly. Delete these test images afterward.
4. Enable the Tivoli Service Automation Manager management server for IPv6.
IPv6 addressing support
Tivoli Service Automation Manager supports the IPv6 address format, which
offers several benefits over the standard IPv4 format.
The following set of assumptions applies to the IPv6 configuration:
v The management network of the provisioned servers is IPv4 only.
v The customer network is either based on IPv4 or on IPv4 / IPv6 dual stack.
v Images used for provisioning are enabled for IPv6 and have the IPv6 network
stack installed and configured to be ready for auto configuration.
v The customer network environment is enabled for IPv6 auto configuration and
contains the necessary network components for auto configuration (router
advertisement daemon, DHCPv6, and DNS capable of handling IPv6).
v The IPv6 environment is set up and maintained. Tivoli Service Automation
Manager does not provision or configure the IPv6 network environment.
v Tivoli Service Automation Manager runs a discovery after server provisioning to
get the IPv6 addresses of the provisioned virtual machine. The addresses are
then stored in the topology.
v The notification for the Create Project and the Add server offerings shows the
IPv4 and IPv6 addresses of the provisioned virtual machine.
v Tivoli Service Automation Manager provides a configuration property that
allows controlling which IP addresses are shown in the notifications.
v The administrative user interface and the self-service user interface are accessible
through both IPv4 and IPv6 if IPv6 is enabled for the Tivoli Service Automation
Manager management server.
The IPv6 network environment contains the following components:
v Tivoli Service Automation Manager management server with IPv6 dual stack
and statically configured IPv4 and IPv6 network stack.
v Tivoli Service Automation Manager management server enabled for IPv6. See
Enabling the IPv6 support on the management server on page 228.
v Tivoli Service Automation Manager management network running with IPv4
with access to the core provisioning components, such as hypervisors and
support servers.
v Virtual machine management networks that are running with IPv4 to manage
the provisioned virtual machines. There might be multiple such networks,
depending on the network configuration.
v Auto configuration components configured and running in the customer
network, such as the router advertisement daemon, Dynamic Host Configuration
Protocol for IPv6 (DHCPv6), and DNS.
Figure 2. An example of Tivoli Service Automation Manager network setup for IPv6 auto configuration. The diagram shows the Tivoli Service Automation Manager node with a dual-stack (IPv4/IPv6) management server providing access to the administrative and self-service user interfaces; the IPv4 Tivoli Service Automation Manager management network and the IPv4 virtual machine management network connecting the hypervisor (ESX backend) and the provisioned virtual machines; and the IPv4/IPv6 customer network containing the router advertisement daemon, DNS (v4/v6), and DHCPv6.
Enabling customer network for IPv6 auto configuration
Enabling the selected customer networks for IPv6 support is the initial step in the
IPv6 network configuration.
Before you begin
Familiarize yourself with the basic assumptions of IPv6 addressing support in
Tivoli Service Automation Manager. See IPv6 addressing support on page 225.
Procedure
1. Verify if the following physical components of your network support IPv6:
v Routers
v Firewalls
v Gateways
v Load Balancers
v Switches
2. Decide which prefixes and IPv6 ranges are to be used in the customer network
segment.
3. Set up the router advertisement daemon in the customer network segment and
configure it according to your scenario. You must at least set up the prefix to be
used in the customer network segment. You can do it either by using a
dedicated server or by enabling the corresponding function on a switch or
router that supports IPv6.
4. You can use the Dynamic Host Configuration Protocol for IPv6 (DHCPv6),
which provides additional IPv6 network configuration capabilities, such as
DNS or NTP, beyond the functionality available through the router
advertisement daemon. If you choose to use DHCPv6, configure it according to
your scenario.
5. Set up DNS and gateways for the customer network segment to support IPv6.
6. Configure firewalls and load balancers for IPv6 according to your scenario.
7. Test the whole setup to ensure that you have a working IPv6 auto
configuration environment.
What to do next
Proceed to Enabling IPv6 for the provisioning images.
Enabling IPv6 for the provisioning images
Enable the provisioning images for IPv6 auto configuration so that later you are
able to access the provisioned server using IPv6.
Before you begin
Make sure that the target customer network environment is enabled for IPv6 auto
configuration. See Enabling customer network for IPv6 auto configuration.
Procedure
1. To enable IPv6 on your operating system, follow the steps provided in the
Managing network resources > IPv6 addressing > Enabling IPv6 in the operating
system section of the Tivoli Provisioning Manager Knowledge Center.
2. Run a provisioning test to see if the IPv6 stack is configured correctly.
Results
Newly provisioned servers contain a configured IPv6 network stack if they are
provisioned to the network environment that supports the IPv6 auto configuration.
What to do next
Proceed to Enabling the IPv6 support on the management server.
Enabling the IPv6 support on the management server
Learn how to activate the IPv6 features of the Tivoli Service Automation Manager
management server.
Procedure
1. Follow the steps described in the Tivoli Provisioning Manager 7.2.1 > Managing
network resources > IPv6 addressing > Enabling IPv6 for the provisioning server
section of the Knowledge Center to activate IPv6 support for Tivoli Provisioning
Manager.
2. Restart Tivoli Provisioning Manager and Tivoli Service Automation Manager to
activate the changes.
3. Apply the service upgrade package to the existing projects to upgrade the
service definition to revision 6. See Upgrading the service instance to the new
revision on page 92. By doing so, you add the required properties to the
server node in the topology. Perform this step only if you have existing projects
in your environment.
Results
If the auto configuration setup is performed successfully, newly provisioned virtual
machines can have IPv6 addresses assigned to them. The IPv4 and IPv6 addresses
show up in the notification email sent after the successful provisioning of a server.
See IP address selection rules on page 229 and Defining an IP address selection
rule on page 92 to learn how to configure which IP addresses show up in the
notification.
Disabling the IPv6 support on the management server
You can manually disable the IPv6 support for the Tivoli Service Automation
Manager management server.
Procedure
1. Open the following configuration file in a text editor:
v UNIX: $TIO_HOME/config/system.properties
v Windows: %TIO_HOME%\config\system.properties
2. In the file, change the value of the ipv6-enabled parameter to false.
3. Save your changes to the file.
4. Remove the IPv6 address from the network stack.
5. Restart Tivoli Provisioning Manager and Tivoli Service Automation Manager to
activate the changes.
Results
The IPv6 features of the Tivoli Service Automation Manager management server
are now disabled and the self-service user interface is only accessible over IPv4.
IP address selection rules
IP selection rules define which IP addresses show up in the notification emails sent
after a successful provisioning.
A set of configuration properties is provided with Tivoli Service Automation
Manager. You can use them to control which IP addresses show up in the
notification emails. There are two such properties:
v One on the system level.
v One on the customer level.
The IP address selection rule is a comma-separated list of values. You can either
use the NetworkSegmentType or the NetworkSegmentType.NetworkSegmentUsage
definition in an IP address selection rule.
Possible values for NetworkSegmentType are:
v Management - to show IP addresses from management network
v Customer - to show IP addresses from customer network
v Storage - to show IP addresses from storage network
v BackupRestore - to show IP addresses from backup-restore network
If you use NetworkSegmentType in the rule, all IP addresses from network segments
that are of the same type show up in the IP address list.
You can further restrict the IP address list by adding network segment usages to
the network segment type. The notation is:
NetworkSegmentType.NetworkSegmentUsage
This notation restricts the IP address list to the network segments that have the
same network segment type and the specified network segment usage.
The rule is applied to the server after it is provisioned and affects the IPv4 and
IPv6 addresses associated with the server.
Restriction: Do not combine NetworkSegmentType definitions with
NetworkSegmentType.NetworkSegmentUsage definitions. If you do so,
NetworkSegmentType is considered a priority and as a result, all IP addresses
matching the network segment type are displayed. Example:
Customer, Customer.Usage1
If you define such a rule, all IP addresses are displayed from network segments of
type Customer. The restriction to Customer.Usage1 does not work, as everything of
type Customer is already included.
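The rule evaluation described above can be sketched in a few lines of Python. This is an illustrative model only: the function names and data shapes are assumptions for the example, not the product's actual implementation.

```python
# Illustrative sketch of IP address selection rule matching.
# Rule syntax follows the document: a comma-separated list of
# "NetworkSegmentType" or "NetworkSegmentType.NetworkSegmentUsage" entries.

def parse_rule(rule):
    """Split a rule string into (type, usage) pairs; usage is None for bare types."""
    entries = []
    for item in rule.split(","):
        item = item.strip()
        if "." in item:
            seg_type, usage = item.split(".", 1)
            entries.append((seg_type, usage))
        else:
            entries.append((item, None))
    return entries

def matches(rule, seg_type, seg_usage):
    """Return True if an address on a segment of the given type/usage is selected.

    A bare NetworkSegmentType entry matches every segment of that type,
    so a more specific Type.Usage entry for the same type is effectively
    redundant -- the restriction documented above.
    """
    for entry_type, entry_usage in parse_rule(rule):
        if entry_type == seg_type and entry_usage is None:
            return True  # bare type matches all usages of that type
        if entry_type == seg_type and entry_usage == seg_usage:
            return True
    return False

# The documented example: "Customer, Customer.Usage1" selects every
# Customer segment, because the bare "Customer" entry already matches.
print(matches("Customer, Customer.Usage1", "Customer", "Usage2"))  # True
print(matches("Customer.Usage1", "Customer", "Usage2"))            # False
```

Running the sketch on the documented rule shows why the Customer.Usage1 restriction has no effect once the bare Customer entry is present.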
To define an IP address selection rule on the system level or on the customer level,
see Defining an IP address selection rule on page 92.
Defining an IP address selection rule
Define which IP addresses are displayed in the notification emails sent after
successful provisioning.
About this task
For more information about how to define IP address selection rules, see IP
address selection rules on page 229.
You can either define an IP address selection rule on the system level or on the
customer level.
Procedure
v Configuration on the system level is performed by modifying the
PMRDP.Net.IPUpdateRule system property. Set or update this property using the
System Properties application in the administrative user interface:
1. Log on to the administrative user interface as maxadmin.
2. Click Go To > System Configuration > Platform Configuration > System
Properties.
3. Filter for the PMRDP.Net.IPUpdateRule property. This property contains an IP
address selection rule.
4. Click the property to edit it.
v The customer level is configured by modifying the IPUpdateRule property in the
network template. The property is optional and it contains an IP address
selection rule. If it is set, it overwrites the system level property. Set this
property in the network template XML file. All customers that have the same
network template are subject to the same IP address selection rule.
IPv6 properties and their layout
IPv6 properties are the properties that contain the list of all IPv6 addresses based
on the IP address selection rule set in the server topology node. The properties
have a specific layout.
There are two properties in the server topology node that contain IPv4 and IPv6
addresses of the server. The properties have the following names:
PMZHB_NETWORK_INTERFACE_IP_ADDRESS
This property contains the IPv4 addresses of the server in a
comma-separated list.
PMRDP_IPV6_ADDRESSES
This property contains the IPv6 addresses of the server in a
comma-separated list.
Note: IPv6 link-local addresses are not displayed in the list.
To be populated in the properties, the IP addresses of a network interface must
match the IP selection rule active for the current customer. See IP address
selection rules on page 229.
Configuration property for the IP address discovery
There is one system property that you can configure when running the IP address
discovery. For more information about running the discovery, see Running IP
address discovery for operational servers on page 93.
PMRDP.Net.DiscoveryMaxParallel
Use this property to define how many parallel discovery tasks are allowed
to run. The default value is 10. If you have a large environment and a
powerful Tivoli Service Automation Manager node, you can increase this
value to accelerate the discovery process.
Layout of IP address properties
The IP address properties for IPv4 and IPv6 are populated based on the IP address
selection rule. The sequence of the IP addresses is also determined by this rule.
For example, for the following rule:
IPAddressSelectionRule(Customer,Management),
Server (Nic1,Management,10.10.10.1) ,(Nic2,Customer,10.10.10.2, 2001:0DB8::1)
v The IPv4 property contains: 10.10.10.2,10.10.10.1.
v The IPv6 property contains: 2001:0DB8::1,
v The IPv6 property ends with a comma because the management network
interface card (NIC) does not have an IPv6 address.
Remember the following two rules:
1. If an interface type or usage has no IPv6 address, a comma is inserted. For
example, for the following rule:
IPAddressSelectionRule(Management,Customer)
Server (Nic1,Management,10.10.10.1) ,(Nic2,Customer,10.10.10.2, 2001:0DB8::1)
v The IPv4 property contains: 10.10.10.1,10.10.10.2.
v The IPv6 property contains: , 2001:0DB8::1.
v The IPv6 property in this example starts with a comma because the
management interface has no IPv6 address.
2. If an interface contains multiple IPv6 addresses, a semicolon separates the IPv6
addresses of the same NIC. For example, for the following rule:
IPAddressSelectionRule(Management,Customer)
Server (Nic1,Management,10.10.10.1) ,(Nic2,Customer,10.10.10.2, 2001:0DB8::1, 2001:0DB8::2)
v The IPv4 property contains: 10.10.10.1,10.10.10.2.
v The IPv6 property contains: , 2001:0DB8::1; 2001:0DB8::2.
v The IPv6 property starts with a comma because the management NIC has no
IPv6 address. The two IPv6 addresses of the customer NIC are separated with
a semicolon.
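The layout rules above can be sketched as a small Python function. The NIC tuples and the function name are assumptions made for this example; the sketch only models the comma and semicolon placement the document describes.

```python
# Illustrative sketch of how the IPv4 and IPv6 property strings are laid out.
# Each NIC is modeled as (usage, ipv4_address, [ipv6_addresses]).

def build_ip_properties(rule_order, nics):
    """Order NICs by the selection rule and join their addresses.

    An empty IPv6 list still contributes a slot, so commas mark NICs
    without an IPv6 address; multiple IPv6 addresses on one NIC are
    joined with semicolons.
    """
    ordered = sorted(nics, key=lambda nic: rule_order.index(nic[0]))
    ipv4 = ",".join(nic[1] for nic in ordered)
    ipv6 = ",".join(";".join(nic[2]) for nic in ordered)
    return ipv4, ipv6

nics = [
    ("Management", "10.10.10.1", []),
    ("Customer", "10.10.10.2", ["2001:0DB8::1", "2001:0DB8::2"]),
]
ipv4, ipv6 = build_ip_properties(["Management", "Customer"], nics)
print(ipv4)  # 10.10.10.1,10.10.10.2
print(ipv6)  # ,2001:0DB8::1;2001:0DB8::2
```

With the rule order reversed to (Customer, Management), the IPv6 string instead ends with a comma, matching the first example above.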
Turning on VIO Shared Ethernet Adapter management
You can turn on the VIO Shared Ethernet Adapter management.
About this task
VIO Shared Ethernet Adapter Management is controlled by a Tivoli Provisioning
Manager Global Provisioning property: PMRDP.Net.VIONotManaged. If this property
exists and has a value True, VIO Shared Ethernet Adapter Management is turned
off.
To turn the management on, delete the property using the Tivoli Provisioning
Manager Global Provisioning property application.
Procedure
1. Go to Administration > Provisioning > Provisioning Global Settings.
2. Click the Variables tab.
3. Use search to find the PMRDP.Net.VIONotManaged property.
4. If it exists, delete it by clicking the delete icon.
5. Click Save.
Turning off VIO Shared Ethernet Adapter management
You can turn off the VIO Shared Ethernet Adapter management.
About this task
VIO Shared Ethernet Adapter Management is controlled by a Tivoli Provisioning
Manager Global Provisioning property: PMRDP.Net.VIONotManaged. If this property
does not exist or does not have a value True, VIO Shared Ethernet Adapter
Management is turned on.
To turn the management off, create the property with the Tivoli Provisioning
Manager Global Provisioning property application.
Procedure
1. Go to Administration > Provisioning > Provisioning Global Settings.
2. Click the Variables tab.
3. Click New Row.
4. In the dialog box, enter the following values:
a. Variable: PMRDP.Net.VIONotManaged.
b. Component: Entire System.
c. Value: True.
d. Is array: not checked.
5. Click Save.
Configuring cloud storage pools
After the cloud server pools are defined, use the Cloud Storage Pool
Administration application to create new cloud storage pools.
About this task
A Tivoli Service Automation Manager cloud storage pool is a collection of storage
resources for additional disks. It is associated with a Tivoli Provisioning Manager
storage pool.
When creating a project, you can select a server image and set only the size of the
local disk. Cloud storage pools are a flexible solution to add additional storage to
your provisioned servers. They can be customized for different environments
individually. When adding additional storage to your servers, you have two main
options:
Add additional disks (mapped storage)
In this type of cloud storage pool, an additional disk or disks are created
on the virtual server in the form of logical units (LUNs). They are
managed by hypervisors, they are associated with the virtual server, and
their lifecycle depends on the server. Such disks can be added during
provisioning of virtual servers.
Note: Adding additional storage in the form of an additional disk in a cloud
storage pool is only supported for the System p hypervisor (native HMC
managed Power LPAR). Other additional storage extensions are available
for VMware.
Add additional file systems (mounted storage)
In this type of cloud storage pool, additional file systems are created that
are independent of the virtual server. One or more file systems can be
shared between several virtual servers. The file system is mounted to the
virtual server at the time of provisioning.
Configuring custom settings
This custom setting is used only by Power LPAR related offerings. When
you define the storage sizes, the storage quota values are not always
exact numbers. Custom storage sizes must be an even multiple of the existing
storage volume size. If all volumes have 15 GB of storage, then 15, 30, 60, 90,
and 120 are valid values that can be composed without leaving unused space.
Note:
Perform this task only if you want to set up custom storage sizes which do
not match existing storage pool volume sizes.
If the configured sizes cannot be composed by the implemented algorithm,
or do not fit the storage volume sizes, the LPAR might have more real
storage mapped than requested, while the quota is increased by the
requested value only.
You can select a configured storage size that does not match the storage
volume sizes. If the size values are not an even multiple of the provided
storage volumes, you risk having more storage assigned than the requested
value. One piece of storage can be composed from multiple storage
volumes from the SAN.
Note: Multiple storage volumes need multiple mapping operations, which
take processing time and can generate errors.
Multiple storage volumes can be purged in parallel when asynchronous
purging is configured. However, purging too many disks in parallel creates
process swap overhead. It can result in an increase of processing time
compared to a single disk with identical storage size.
To configure custom storage values, consider the required storage sizes for
the planned application. Take the existing or defined storage volume sizes
and calculate sizes that match evenly. Type the calculated sizes into the
input field, then save and enable the storage pool.
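The even-multiple rule can be sketched as a quick check. The helper names below are hypothetical, introduced only to illustrate the calculation; they are not part of the product:

```python
# Illustrative check that custom storage sizes compose evenly from the
# available storage volume size, as recommended above.

def valid_custom_sizes(sizes_gb, volume_size_gb):
    """Return the sizes that are an even multiple of the volume size."""
    return [s for s in sizes_gb if s % volume_size_gb == 0]

def volumes_needed(size_gb, volume_size_gb):
    """Number of volumes that must be mapped; rounds up, so a size that
    is not an even multiple maps more real storage than the quota."""
    return -(-size_gb // volume_size_gb)  # ceiling division

# With 15 GB volumes, the documented values compose without unused space:
print(valid_custom_sizes([15, 30, 60, 90, 120], 15))  # [15, 30, 60, 90, 120]
# A 40 GB request would map 3 volumes (45 GB of real storage) while the
# quota is increased by 40 GB only:
print(volumes_needed(40, 15))  # 3
```

The second call shows the mismatch the text warns about: a 40 GB custom size against 15 GB volumes maps 45 GB of real storage for a 40 GB quota.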
Related reference:
Troubleshooting additional disk extension for Power LPAR on page 519
The additional disk extension for Power LPAR has three main error situations. You
can use the following guidelines to get familiar with the error cases and to know
how to handle the errors when they occur. Error causes can be analyzed while
handling the inbox assignment of the service request and looking into the different
logs available. If the error cause is a workflow execution exception, look into the
related workflow execution log for more details.
Configuring cloud storage resources
Create and configure new cloud storage pools to enable additional storage on your
provisioned servers.
Before you begin
Log on to the administrative user interface. Open the Cloud Storage
Administration application by clicking Go to > Service Automation >
Configuration > Cloud Storage Pool Administration.
Procedure
1. Click New Cloud Storage Pool on the toolbar.
2. Provide the required parameters for the new cloud storage pool:
v Name
v Description (optional)
v Storage manager type (LPAR is the only type supported by default. Other
storage manager types are available as extensions.)
v Storage type
Use the Select Value icon to display a list of available storage types and
storage extension types. The two available storage types are:
v Mapped Additional Disks
v Mounted Disk Resources
Note: The default storage type for LPAR is Mapped Additional Disks.
Mounted storage is supported by storage extensions.
3. In the Associated TPM Storage Pools section, click New Assignment to create a
storage pool table entry. Select one Tivoli Provisioning Manager storage
allocation pool object.
Note: If there are no storage allocation pools listed in the table, most probably
storage discovery was not run. To run storage discovery, open the Cloud Server
Pool Administration application, select an available System p cloud pool, and
run the discovery from the Storage Discovery tab.
4. Switch to the Security Settings tab and specify the required information there.
For more information, see Setting up purging options for storage disks on
System p on page 235.
5. Click Validate and Enable Cloud Storage Pool.
Note: After the cloud storage pool is enabled, the configuration input fields
become greyed out and read-only. You must disable the cloud storage pool
before performing any configuration tasks again.
Results
The cloud storage pool is created and ready to be assigned to a customer.
Setting up purging options for storage disks on System p
There are several ways in which you can purge the storage disks that you do not
use any more.
About this task
When you create a new cloud storage pool in the Cloud Storage Pool
Administration application, there are three possible choices for purging the disks to
make them available for next usage.
Procedure
1. Log on to the administrative interface and click Go to > Service Automation >
Configuration > Cloud Storage Pool Administration.
2. Switch to the Cloud Storage Pool Details section. In the Security Settings tab,
you have the following disk purging options available:
v No purging - Files in the additional disk are removed using the operating
system functionality (rm -rf <mount point>) and the disk is returned to the
storage pool immediately after the end of this operation.
v Synchronous purging - Data on the additional disk is removed first and
afterwards it is overwritten one time using dd if=/dev/zero
of=/dev/rhdisk1 bs=1m.
v Asynchronous purging - The same procedure is run as in the case of No
purging and the disk is unmapped immediately afterward; however, it
remains in the IN_USE state. A purge workflow then purges the data as in
the case of the Synchronous purging option and the storage volume state is
set to AVAILABLE. With asynchronous purging, you must define the
LPAR to which all the data disks are mapped so that the storage volume that
is to be purged can be identified. This option removes the time constraint
from the delete, remove, and cancel processes, which otherwise involve
time-consuming purge processing.
Note: A dedicated disk purging host must be selected if the disk purging
mode is set to Asynchronous. This host performs the purge operation after
the project is cancelled and the storage is about to be returned. In the
Synchronous mode, the provisioned server itself purges the disk.
Note: Set Cloud pSeries HostPlatform Storage as the host platform device
model in the Cloud Server Pool Administration application. This device
model contains the definition of the additional capabilities in the form of the
CloudStorage tcdriver (an automation package) that consists of workflows
that handle operations such as adding and removing storage, creating
volume groups, creating file systems, and purging disks.
3. Specify a timeout in minutes per GB to be erased.
4. Specify the storage identification type in the box. The selection defines how the
storage resources are identified. They are identified by the Unique Device
Identifier (the default option), by the IEEE Volume Name, or by the Physical
Volume Identifier (PVID).
Important:
Chapter 3. Configuring 235
If an error is detected during purging and you choose the Ignore or Skip
option, the disk remains in the IN_USE state and is not available for any
other project. If the purging process fails because of an error, clean the
storage volume manually and set it to the AVAILABLE state.
If an error is detected during asynchronous purging, the storage volume state
is set to UNKNOWN. An administrator must then manually check the backing
state of this storage volume and manually change the storage volume state.
Related reference:
Troubleshooting additional disk extension for Power LPAR on page 519
The additional disk extension for Power LPAR has three main error situations. You
can use the following guidelines to get familiar with the error cases and to know
how to handle the errors when they occur. Error causes can be analyzed while
handling the inbox assignment of the service request and looking into the different
logs available. If the error cause is a workflow execution exception, look into the
related workflow execution log for more details.
Purging LPARs
A purging LPAR is a virtual server that makes the purging process more efficient.
Purging of storage disks is a time-consuming task because it involves overwriting
the disk content to make it invalid. To shift the purging effort from the servers in a
resource pool, configure a server outside of the configured resource pool to run the
asynchronous purging tasks. Such a server is called a purging LPAR. It is built
similarly to a VIO server and requires the following components:
v A dedicated Fibre Channel adapter to enable SAN disk mapping.
v An equivalent AIX level.
v A storage driver (SDDPCM) of the same level as installed on the server pool
VIOS version. Use SDDPCM driver version 2.5 or greater.
Do not use the standard AIX commands to find the disk device identifier,
because the values returned on different LPARs depend on the storage driver
and the storage subsystem.
For IBM storage systems, for example DS8000, the Unique Device Identifier is
used in this implementation. Other storage systems might use the IEEE Volume
Name or the Physical Volume Identifier (PVID) to identify unique storage
devices.
Note: In the storage pools created by the storage pool discovery, the
physical volume identifier (PVID) is used to discover the disk for MPIO. In
that case, however, only VIO servers are used to identify the disks that are
to be mapped to the client LPAR. If disks are mapped directly to a client
LPAR, as is the case with the purging server, the PVID cannot be used to
identify a unique disk device.
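As an illustration only, the following sketch shows how a Unique Device Identifier could be extracted from lsattr-style disk attribute output on AIX. The sample line mimics the output format, and the identifier value is invented for the example:

```shell
# Sample output line in the style of `lsattr -El hdisk1 -a unique_id`
# (the identifier value below is invented for this example):
SAMPLE='unique_id 33213600507680EXAMPLEfcp Unique device identifier False'

# The identifier is the second whitespace-separated field:
UDID=$(echo "$SAMPLE" | awk '{print $2}')
echo "$UDID"
```

On a real system the attribute name and output layout depend on the storage driver and subsystem, which is exactly why the implementation identifies devices through a workflow rather than a fixed command.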
For performance reasons, each storage pool can have an individual purging LPAR
configured.
Considerations for asynchronous purging
Purging of storage actually involves overwriting the whole disk with invalid
content in order to make it impossible to reconstruct the file system content. In the
additional storage for System p extension, a simple solution is used to achieve this
result. A command is run that overwrites the whole disk with zeros. This
method makes it difficult to reconstruct the file system content but cannot
guarantee full security.
Purging is a time-consuming process. For asynchronous purging, the processor and
memory resources are given back quickly but the additional storage resources
remain in the IN_USE state until the operation is finished. If a large amount
of disk space is to be purged, it might be necessary to set a higher timeout
value for the process.
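Because the timeout is specified in minutes per GB, the effective limit grows with disk size. A rough sketch with example values (the multiplication is an assumption about how the effective limit scales):

```shell
# Example values only: 2 minutes allowed per GB, a 50 GB additional disk.
MIN_PER_GB=2
DISK_GB=50

# Assumed scaling: the effective purge timeout is per-GB value times size.
TIMEOUT=$((MIN_PER_GB * DISK_GB))
echo "purge timeout: $TIMEOUT minutes"   # prints: purge timeout: 100 minutes
```

For very large disks, raise the per-GB value rather than accepting purge failures that leave volumes in the IN_USE state.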
To avoid the purging effort, you can set up the operating system to encrypt
data on additional storage and set the purging option to No purging. Such
encryption reduces the I/O throughput to some extent, but the time required
for purging is saved and the storage resource can be reused quickly.
As a System p user who uses the additional disk feature, you must judge all
aspects like resource consumption, time, and level of security and decide which
purging option is best in your case.
Changing the default purging configuration
Perform this task if you want to set up a purging process other than the default
one.
About this task
Using a dedicated LPAR for the purging process is not always the optimal
solution; it depends on the storage subsystem in use. Different storage
subsystems might be able to purge storage space more efficiently, which is
why you might want to override the implementation of the
Hostplatform.FindStorageDevice and Hostplatform.PurgeStorage workflows in the
CloudStorage automation package.
The storage device can also offer better methods for the purging algorithm.
Therefore, the implementation that identifies the storage device is modeled
as a logical device operation (LDO) implementation and can be changed for
customer-specific reasons. The name of the logical device operation is
Hostplatform.FindStorageDevice. The way in which different storage subsystems
and different drivers are identified can be changed depending on which
subsystems and storage drivers are used. For multipath I/O (MPIO) devices, a
key is required to identify the disk device.
Procedure
1. Create a new Tivoli Provisioning Manager automation package that depends on
the CloudStorage automation package (tcdriver).
2. Create your own implementation for Hostplatform.FindStorageDevice. Check
whether the PMRDP.SAN.disk.identifier.attribute property needs a different
identifier attribute.
3. Create your own implementation for HostPlatform.PurgeStorage.
4. Install your automation package.
5. Use the Cloud Server Pool Administration Application to associate the new
device model to all host platforms in your server pool.
Saving and restoring projects with storage resources
Remember about the following points when saving or restoring projects with
additional storage attached to them.
v During the Create Server Image workflow, only the boot disk data is saved. The
additional data is not saved.
v Similarly, during the Restore Server from Image workflow, only the boot disk
data is restored. The additional data is not restored as it was not saved.
v During the Create Project from Saved Image and Add Server from Saved
Image workflows, only the boot disk data is restored. A new LUN is then
attached to the virtual machine for additional disks that were attached to it
during Save. For example, if a virtual machine had two additional disks of 20
GB and 30 GB attached to it when it was saved, then when the machine is
recreated during the operations, two empty 20 GB and 30 GB LUNs are attached
to it.
Configuring VMware additional disk feature
Configure your cloud environment to be able to use the VMware additional disk
feature.
Configuring cloud storage pools
Create and configure new cloud storage pools to enable additional storage on your
provisioned servers. After the cloud server pools are defined, use the Cloud
Storage Pool Administration application to create new cloud storage pools.
About this task
A Tivoli Service Automation Manager cloud storage pool is a collection of storage
resources for additional disks. It is associated with a Tivoli Provisioning Manager
storage pool.
Procedure
1. Log on to the administrative user interface with the cloud administrator user
name and password.
2. Click GoTo > Service Automation > Configuration > Cloud Storage Pool
Administration.
3. Click the New Cloud Storage Pool toolbar button.
4. Provide the required parameters for the new cloud storage pool:
v Name
v Description (optional)
v Storage manager (to select the Storage Manager, click the lookup icon and
select VMware).
Note: The VMware storage manager must be selected for the VMware additional
disk feature to be available. When the storage manager is VMware, the default
storage type is set to Mapped.
Note: In this type of cloud storage pool, additional disks are created on
the virtual server in the form of SCSI (Small Computer System Interface)
targets. They are managed by hypervisors. They are associated with the
virtual server, and their lifecycle depends on the server. Such disks can be
added during provisioning of virtual servers.
v Select the Cloud Server Pool for which the new cloud storage pool will be
used.
v Select data stores for additional disk operations.
Restriction: If the option Single Datastore Provisioning is chosen for the
selected Cloud Server Pool, then the datastore is preselected and cannot be
changed.
v Select whether the additional disks should be created as thin or thick images.
v Click Save.
v Click Validate and Enable.
5. In the Associated Datastores section, select one or more datastores to
associate with the new cloud storage pool. The Associated Datastores section
contains information about the VMware type storage pools.
a. Click the Select Datastores button and the Select Datastores panel is
displayed.
b. Select the check box associated with the required datastore from the list
and click OK.
6. In the Pool Settings tab section, the Thin Provisioning check box is selected
by default.
Note: Thin provisioning is a key component of vStorage that allows
over-allocation of storage capacity for increased storage utilization,
enhanced application uptime, and simplified storage capacity management. Thin
provisioning gives you higher utilization by allowing you to allocate more
storage capacity than you have actually purchased. If thin provisioning is
not selected, thick provisioning is set as the allocation type. This type of
disk pre-allocates and dedicates a user-defined amount of space for a virtual
machine's disk operations.
Note: When you use support for a single datastore (additional disks of a
server are placed on the same datastore as its root disk) with the VMware
additional disk feature, select this setting in accordance with the
definition of the VMware server template that is used for server provisioning
of the associated cloud server resource pool.
7. Optional: Click the Customers tab and select the Assigned to all
customers? check box. For more information about making the resource
available to all customers, see Assigning resources to a customer on page 338.
8. Click the save menu icon.
9. Click the Cloud Storage Pool Details tab and click the Validate and Enable
Cloud Storage Pool button.
10. A system message is displayed, indicating that the cloud pool validation
completed successfully. Click OK to this message.
Note: After the cloud storage pool is enabled, the configuration input fields
become disabled and read-only. You must disable the cloud storage pool
before you perform any configuration tasks again.
Results
The VMware cloud storage pool is now created. The cloud storage pool must be
associated with a customer and assigned a quota by using the Cloud Customer
Administration application.
Support for single datastore
You can use the Cloud Server Pool Administration application in the
administrative user interface to define that additional disks of a server are placed
on the same datastore as its root disk.
Procedure
1. In the administrative user interface, click Go To > Service Automation >
Configuration > Cloud Server Pool Administration.
2. Select a VMware Cloud Server Pool.
3. Disable the VMware Cloud Server Pool.
4. In the Resource Configuration section, select the Provisioning Parameter
Settings tab.
5. Mark the check box Place Disks on Single Storage and save your changes.
6. Validate and enable the VMware Cloud Server Pool.
Support for Storage vMotion
Storage vMotion provides an interface for live migration of virtual machine
disk files. Starting with Tivoli Service Automation Manager 7.2.4, you can
run Storage vMotion manually by using the VMware vSphere user interface.
About this task
After the virtual machine disk files are moved from one storage location to
another, you must perform some manual steps to update the data center model
(DCM):
Procedure
Rerun the Virtual Center Host Discovery or the Server Discovery for a single
server from the Cloud Server Pool Administration application.
Restriction: When you run the Storage vMotion from the VMware vSphere user
interface, you must select the destination storage for the virtual machine disk files.
Select Hard disk 1, which is the root disk of the virtual machine, and select
further disk files if the virtual machine contains multiple disks. The
following restrictions
apply when you select the destination storage:
v For Hard disk 1 of virtual machine disk file, you can select only a datastore that
is defined in the list of associated data stores. These associated data stores are
configured in the Cloud Server Pool Administration application.
v For additional hard disk in virtual machine disk files, you can select only a
datastore that is defined in the list of associated data stores. These associated
data stores are configured in the Cloud Storage Pool Administration application.
Results
On successful completion, the DCM data is updated in accordance with the
migration performed on the VMware vSphere back end. Tivoli Service Automation
Manager continues to manage the migrated virtual machine and its additional
disks.
Configuring the service provider and customer features
Even if you are not planning to use the service provider functionality, you need to
perform some basic configuration steps to make the default customer operational.
Assigning resources to the default customer
You need to assign at least the cloud server pool and a master image to the default
customer PMRDPCUST to make this customer operational.
Before you begin
Ensure that the following resources have been configured correctly:
v Server pools are configured in the Cloud Server Pool Administration application.
v Network configuration is defined in the Cloud Network Administration
application.
v Master images are discovered in the Image Library.
v Optional: Additional storage resources are configured in the Cloud Storage Pool
Administration application.
v Optional: Software products are configured.
Only configured resources can be assigned to the customer. The following
procedure describes how to assign a resource to a particular customer. For more
information on making the resource available to all customers, see Assigning
resources to all customers on page 340. Network resources cannot be shared
between customers.
Procedure
1. Click Go To > Service Automation > Configuration > Cloud Customer
Administration.
2. Select a customer.
3. In the Associated Resources section, select the tab for the resource type that you
want to assign to the customer.
4. Click the Assign Resource Type button under the table.
5. In the dialog that opens, select the specific resources to be assigned to the
customer.
6. Click Assign Resource Type.
Important: If you are planning to assign OS-dependent software modules to
the customer, you also need to assign the software stack corresponding to the
OS master image, and ensure that it has the required properties defined. After
assigning the master image and the software module, perform the following
steps:
a. In the Cloud Customer Administration Application click Detail Menu >
Go To Master Images.
b. Next to the software stack click Detail Menu > Go To Software Stacks.
c. In the Variables tab, ensure that the software stack has the following
variables defined:
v exposetotivsam=1
v swType=OS
If they are not present, click New Row and add them.
d. Click Return to return to the Cloud Customer Administration
Application.
e. In the Software Modules tab, assign the software stack corresponding to
the image to the customer.
f. Save the changes.
What to do next
When the resources are assigned to the default customer, you need to activate this
customer in the self-service user interface. Proceed to Activating the default
customer PMRDPCUST.
Activating the default customer PMRDPCUST
Even if you are not planning to use the service provider functionalities, at least one
customer, the default PMRDPCUST must be activated. If you do not activate this
customer, you are not able to access any offerings in the self-service user interface.
Before you begin
Ensure that the following resources have been configured correctly:
v Server pools are configured in the Cloud Server Pool Administration application.
v Network configuration has been defined in the Cloud Network Administration
application.
v Master images have been discovered in the Image Library.
v Server resources and master images have been assigned to the PMRDPCUST
customer.
About this task
The first time you log in to the self-service user interface, you see only
one available request: Create Customer. Use this request to activate the
already existing default customer PMRDPCUST.
Procedure
1. Log on to the self-service user interface as PMRDPCAUSR.
2. In the Home panel, click Request a New Service > Virtual Server
Management > Manage Customers > Create Customer.
3. Specify the network template.
4. Click OK to submit the request.
5. Refresh the browser.
Results
You should now be able to see all offerings. You can register the images and then
create new projects.
Configuring the interface to Workload Deployer
Learn how to configure IBM Workload Deployer to enable its use within Tivoli
Service Automation Manager and provision projects based on Workload Deployer
patterns.
Procedure
1. Define the Workload Deployer as a provisioning computer in Tivoli
Provisioning Manager:
a. Select Go To > IT Infrastructure > Provisioning Inventory > Provisioning
Computers.
b. Click the Add Computer icon, or select Add Computer in the Select Action
menu.
c. For Computer, specify the Workload Deployer host name, for example
wcahostname.ibm.com. Click Save.
2. Create a network interface for the Workload Deployer:
a. Click Hardware tab.
b. Click New NIC Resource.
c. Click Network Interface tab.
d. Click New Network Interface.
e. Specify the Network Interface name, for example WCA Network Interface.
f. Specify the IP address of this interface, which must be the IP address of the
Workload Deployer host name.
g. Select the Management check box. A message box opens and asks whether
changes are to be saved. Click Yes.
3. Specify credentials for surrogate user authentication at the Workload Deployer:
a. Click the Credentials tab.
b. Click Add Credentials.
c. Select New Service Access Point.
d. Specify the Service Access Point name: WCA HTTPS.
e. In the Protocol Type list, select Network protocol IP.
f. In the Application Protocol list, select HTTP Secure Access.
g. Select Port if the default port 443 is not sufficient.
h. Click New Password Credential.
i. Specify Search Key as master.
j. Specify User Name. This user must be defined on the Workload Deployer
with administrator privileges.
k. Specify and confirm password. Click Save.
l. Select the Default Credential check box and click Save.
4. In the Variables tab, add the variable X-Cloudburst-API-Version and set
its value to 2.0.0.2.
5. Discover the Workload Deployer hardware. Workload Deployer patterns are
deployed into cloud groups, which are pools of homogeneous servers hosting
the virtual machines that belong to such a virtual system. Currently, a pool can
consist of one or more ESX or ESXi hypervisors.
A cloud group discovery is required to make Tivoli Provisioning Manager
aware of the cloud groups defined on Workload Deployer. At the same time, all
patterns on Workload Deployer are discovered. Images, software stacks,
software resource templates, and virtual server template elements are created
for them in Tivoli Provisioning Manager. Finally, patterns and their members
are also registered with the Image Library.
Repeat the discovery for cloud groups and patterns for any given Workload
Deployer if they have been changed. The changes can occur when, for example,
new hardware is added to the cloud group or a pattern is added or changed.
Discovery is accomplished as follows:
a. Select Go To > Discovery > Provisioning Discovery > Discovery
Configurations.
b. Filter for WebSphere CloudBurst Appliance Discovery and select it.
c. Click Run Discovery.
d. Select Computers.
e. Locate and select one or more entries representing Workload Deployer.
Click OK, and then Submit.
The discovery might take some time to run.
6. Select Go To > IT Infrastructure > Image Library > Image Repositories.
7. Open wcahostname.ibm.com and add the Repository Location:
a. In the Repository Locations tab, click the New Repository Location button.
b. In the New Repository Location window type:
v For Directory, specify: WCA Directory
v For Computer, select wcahostname.ibm.com (the name of the computer
representing the Workload Deployer)
c. Click OK and Save.
8. Select Go To > IT Infrastructure > Provisioning Inventory > Virtual Server
Template.
9. Connect all Virtual Server Templates with Software Templates:
a. Click the template to open it.
b. In the Virtual Server Template tab that is displayed, click the Select Value
button next to the Software Stack field.
c. In the window that opens, select the software stack with the name that
corresponds to the name of the template. Click Save.
Repeat these steps for all Virtual Server Templates.
Results
Upon completion, new or modified patterns can be selected for deployment. You
can do this by using the self-service user interface (creating a project based on a
Workload Deployer pattern).
Configuring the managed environment to use the WebSphere Cluster
Service
Learn how to configure the Tivoli Service Automation Manager managed
environment so that it can use the WebSphere Cluster service.
Before you begin
Note: This task is relevant only if you have purchased and installed the Tivoli
Service Automation Manager for WebSphere Application Server product.
The managed environment can be one of the following:
v a Linux on System z system.
v an AIX (System p) system.
Procedure
1. Check that you fulfill the latest requirements for using the WebSphere Cluster
service. You can find this information at Network Deployment (Distributed
operating systems), Version 6.1 knowledge center > Installing your application serving
environment > Distributed operating systems > Preparing the operating system for
product installation.
2. Verify that you have completed the tasks described in Configuring and
running discovery on a Tivoli Provisioning Manager server on page 247.
3. Check that the time difference between the management server and the
managed environment nodes is set to less than 15 minutes. This is a
requirement for using the WebSphere Application Server. To modify the time on
an AIX server, use the command smitty date.
4. Verify that the root user of the management server can open a Secure Shell
(SSH) connection to the managed environments without any prompts. If
prompts are issued, you have to add the public SSH keys of the management
server to the authorized keys of the managed environments. To do so:
a. Copy id_rsa.pub file to the managed environment.
b. Extend the authorized_keys file by the new copy of id_rsa.pub file.
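The two sub-steps above amount to appending the management server's public key to the authorized keys on the managed node. A minimal sketch (temporary files stand in for the copied id_rsa.pub and the node's ~/.ssh/authorized_keys; the key text is an example):

```shell
PUB=$(mktemp)    # stand-in for the copied id_rsa.pub
AUTH=$(mktemp)   # stand-in for ~/.ssh/authorized_keys on the managed node
echo "ssh-rsa AAAAB3...example root@mgmt-server" > "$PUB"

# Append the key only if it is not already authorized (idempotent):
grep -qxF "$(cat "$PUB")" "$AUTH" || cat "$PUB" >> "$AUTH"

KEYS=$(grep -c "mgmt-server" "$AUTH")
echo "$KEYS"     # prints 1
```

Because the append is guarded by the grep check, repeating the step does not duplicate the key, which keeps the authorized_keys file clean across reconfigurations.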
Configuring the DCM to use the WebSphere Cluster Service
The DCM is an abstract representation of physical and logical components
managed by Tivoli Provisioning Manager. This task describes how to configure the
Data Center Model (DCM) for Tivoli Provisioning Manager. After a successful
installation, configure DCM for Tivoli Provisioning Manager before you provision a
landscape that uses WebSphere Application Server.
Before you begin
Important: This task is relevant only if you have purchased and installed the
Tivoli Service Automation Manager for WebSphere Application Server product.
v Ensure that the Tivoli Provisioning Manager is running before you run the
command to import data.
About this task
To prepare a WebSphere Application Server service, you must configure the DCM.
The WebSphere Application Server provisioning workflow in Tivoli Provisioning
Manager uses an installable file from a file repository server. Tivoli Service
Automation Manager provides template XML files that you can customize. You
then have to import these customized files into DCM by using an XML import tool
(xmlimport.sh) provided in Tivoli Provisioning Manager.
Note: There must be only one WebSphere 6.1 software module in the DCM. By
default, the WebSphere Application Server 6.1 software module is installed. Delete
this module to proceed with the procedure.
Procedure
1. Depending on the managed environment, ensure that the following WAS and
HTTP Server installable files are available on the Tivoli Provisioning Manager
server:
v For System z:
WebSphere Application Server Network Deployment V6.1 Linux on
zSeries, 64-bit support (C88TKML).
WebSphere Application Server Network Deployment V6.1 Supplements for
Linux on zSeries, 64-bit Support (C88TQML).
v For AIX:
WebSphere Application Server Network Deployment V6.1 for AIX 5L V5,
64-bit Support (C88TJML).
WebSphere Application Server Network Deployment V6.1 Supplements for
AIX 5L V5, 64-bit Support (C88TMML).
2. Switch to the Tivoli Provisioning Manager user tioadmin. Run the command:
su - tioadmin
3. Install the IBM-Http-Server.tcdriver file:
a. Obtain the file from OPAL (Open Process Automation Library) at
http://www-01.ibm.com/software/brandcatalog/portal/opal/
results?catalog.searchTerms=&catalog.c=Tivoli+Provisioning+Manager.
b. Copy the file to $TIO_HOME/drivers.
c. Install the file by running the command:
$TIO_HOME/tools/tc-driver-manager.sh installDriver IBM-Http-Server
For detailed information on using the tc-driver-manager command, refer to
the Information Center for the Tivoli Provisioning Manager.
4. Install IBM-TSAM.tcdriver. Skip this section if you have already installed the
IBM-TSAM.tcdriver. To install IBM-TSAM.tcdriver:
a. Locate the IBM-TSAM.tcdriver file at /opt/IBM/TSAM/tc-driver/IBM-TSAM on
the Tivoli Service Automation Manager management server.
b. Copy the file to $TIO_HOME/drivers.
c. Run the following command:
$TIO_HOME/tools/tc-driver-manager.sh installDriver IBM-TSAM
For detailed information on using the tc-driver-manager command, refer to
the Tivoli Provisioning Manager information center.
5. Customize the file WAS_repository_Template.xml according to your setup:
a. Rename the sample file in $TIO_HOME/repository/IBM-TSAM/samples/
DCMTemplates/WAS_repository_Template.xml.
b. Assign a name for the file repository server.
c. Assign the IP address and netmask of the file repository server.
d. Assign the credentials requested to access the repository server.
6. Customize the file WAS_HTTP_software_module.xml according to your setup:
a. Rename the sample file in $TIO_HOME/repository/IBM-TSAM/samples/
DCMTemplates/WAS_HTTP_software_module.xml.
b. Assign the name of the file repository server.
c. Assign the name and path of each software module where the installable
can be found.
Note: The specification of a software module for Linux on System z (intended
for a virtual server provisioned in the self-service user interface) must include
the following software requirement entry:
<software-requirement name="platform.architecture" type="HARDWARE" enforcement="MANDATORY" hosting="false" accept-non-existing="true">
<software-requirement-value value="390" />
<software-requirement-value value="s390" />
</software-requirement>
Ensure that the corresponding host platform server specification includes:
<resource name="Platform" resource-type="platform" managed="true" partitionable="false">
<property component="KANAHA" name="platform.architecture" value="390" />
</resource>
7. Import the customized XML files by running the command:
$TIO_HOME/tools/xmlimport.sh file:filename, where filename is the name of
your XML file. For detailed information about using the xmlimport command,
see the information center for Tivoli Provisioning Manager.
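Assuming both template files from steps 5 and 6 were customized, the imports in step 7 can be scripted as follows (a sketch: the TIO_HOME default path is an example, and with DRY_RUN set the commands are only printed, not executed):

```shell
# Example install path; on your system TIO_HOME is set by the TPM install.
TIO_HOME=${TIO_HOME:-/opt/IBM/tivoli/tpm}
DRY_RUN=1        # set to empty to actually execute the imports

# Print the command when DRY_RUN is set, otherwise run it.
run() { if [ -n "$DRY_RUN" ]; then echo "$*"; else "$@"; fi; }

for f in WAS_repository_Template.xml WAS_HTTP_software_module.xml; do
  run "$TIO_HOME/tools/xmlimport.sh" "file:$f"
done
```

The dry-run guard lets you review the exact xmlimport.sh invocations before touching the DCM.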
8. When logged on to Tivoli Provisioning Manager, delete the software module
IBM WebSphere Application Server 6.1:
a. Click Go To > IT Infrastructure > Software Catalog > Software Products.
b. Search for the IBM WebSphere Application Server 6.1 software module and
delete it.
9. Restart the Tivoli Provisioning Manager server.
Configuring and running discovery on a Tivoli Provisioning
Manager server
Learn how to configure and run discovery on a Tivoli Provisioning Manager
server. This task makes the managed environment available to Tivoli Provisioning
Manager.
Procedure
1. Run the Initial Discovery:
a. Log on to the administrative user interface.
b. Click Go To > Discovery > Provisioning Discovery > Discovery
Configurations.
c. Filter for and open Initial Discovery.
d. Click New Row on the Computers tab and add the host name of the server
that you want to discover. Alternatively, click New Row on the IP Address
Ranges tab and add the IP address ranges of your managed servers.
e. Click New Row on the Credentials tab and add your credentials to access
the managed servers.
f. Click Run Discovery.
g. Confirm that you want to run this discovery by clicking Submit. The flow
takes you to the Provisioning Task Tracking application, where you can
monitor the success of the discovery. Wait for completion.
2. After the Initial Discovery has successfully run, run the Tivoli Provisioning
Manager Inventory Discovery:
a. Log on to the administrative user interface.
b. Click Go To > Discovery > Provisioning Discovery > Discovery
Configurations.
c. Filter for and open Tivoli Provisioning Manager Inventory Discovery.
d. Check both Hardware and Software boxes, and select Use software
signatures.
e. Click Run Discovery. A pop-up window appears.
f. Confirm that you want to run this discovery by clicking Submit. The flow
takes you to the Provisioning Task Tracking application, where you can
monitor the success of the discovery. Wait for completion.
g. Click Go To > IT Infrastructure > Provisioning Inventory > Provisioning
Computer.
h. Select the discovered computer and check the details tab to view the details
(such as hardware resources) for the computer that is the target of the
discovery.
Defining configuration items for the WebSphere Cluster
Service
This section describes how to define configuration items to be used by the
WebSphere Cluster Service.
About this task
The WebSphere Cluster service uses authorized configuration items (CIs) in the
process automation database to identify the servers that are to accommodate the
WebSphere ND environment. These CIs have attributes that are typically found in
the SYS.COMPUTERSYSTEM classification, but another classification name can be
assigned.
Note: This function was previously done automatically by the Tivoli Service
Automation Manager Data Loader for Tivoli Provisioning Manager, which is no
longer provided with Tivoli Service Automation Manager.
The following table lists the required classification attributes for servers:
Table 32. Classification attributes for configuration items to be used by the
WebSphere Cluster Service

COMPUTERSYSTEM_MEMORYSIZE
    Total RAM memory size for the server. This attribute is used for
    filtering in the resource allocation records and can be adapted there.
COMPUTERSYSTEM_FQDN
    Fully qualified domain name of the server. This attribute is used in the
    Script Adapter to establish SSH connections to the server.
COMPUTERSYSTEM_NAME
    Name of the server as defined in Tivoli Provisioning Manager. This
    attribute is used to get the DCM ID of the server to invoke Tivoli
    Provisioning Manager workflows.
COMPUTERSYSTEM_ARCHITECTURE
    Name of the architecture for the server. This attribute is used for
    filtering in the resource allocation records and can be adapted there.
Configuration items must be generated manually after the initial Tivoli
Provisioning Manager discovery process (which enters the server into the DCM)
and whenever the DCM has been updated. Use the Configuration Items application
of the administrative user interface to define the CIs:
Procedure
1. Log on to the administrative user interface.
2. Click Go To > IT Infrastructure > Configuration Items.
248 Tivoli Service Automation Manager V7.2.4.4 Installation and Administration Guide
3. Click the New CI button.
4. Enter a name for the configuration item.
5. Enter the classification IT\SYS.COMPUTERSYSTEM in the classification field.
6. In the Alphanumeric Value field of the corresponding attribute in the
specifications table, enter a value for at least one of the required attributes
listed in Table 32 on page 248.
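Before saving, you can sanity-check that a CI specification carries at least one of the attributes from Table 32. A minimal Python sketch (the dictionary layout and sample values are hypothetical; the attribute names come from the table):

```python
# Required classification attributes for WebSphere Cluster Service CIs (Table 32).
REQUIRED_ATTRIBUTES = {
    "COMPUTERSYSTEM_MEMORYSIZE",
    "COMPUTERSYSTEM_FQDN",
    "COMPUTERSYSTEM_NAME",
    "COMPUTERSYSTEM_ARCHITECTURE",
}

def has_required_attribute(ci_spec):
    """Return True if the CI specification carries at least one required attribute."""
    return bool(REQUIRED_ATTRIBUTES & set(ci_spec))

# Hypothetical CI specification as entered in the Configuration Items application.
ci = {
    "COMPUTERSYSTEM_FQDN": "server01.example.com",
    "COMPUTERSYSTEM_MEMORYSIZE": "16384",
}
print(has_required_attribute(ci))  # True
```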
Processor Pool planning
System p provides a capability called processor pools for Power 6 and the
Standard Edition of PowerVM. This feature is primarily used for license
management and for further partitioning of a CEC.
It covers the following aspects:
v Cloud Server Pool Administration Application is enhanced to enable processor
pool support for a Cloud Pool of type LPAR.
v Processor Pool Discovery is triggered from the Cloud Server Pool Administration
application.
v Provisioning of LPARs into a specified processor pool.
v Placement and resource checks during provisioning are enhanced to check if the
LPAR can be provisioned on the specified CEC within the specified processor
pool.
v New VSRPARM properties included in the Service Request for processor pool.
The processor pool support has the following assumptions:
v At least a Power 6 CEC with PowerVM Enterprise edition is required.
v The LPAR on the processor pool is independent of its power state.
v There is no support for the processor pool reserved processor unit attribute.
v There is no web 2.0 support for processor pools.
v The deployment is done only with SAN Storage.
v The processor pool support is enabled on a per cloud pool base.
v The resource check is done on Tivoli Provisioning Manager deployment level.
Processor pools are not accepted in reservations, nor are they treated as
sub-pools during reservation.
v There is only a basic discovery support. The CECs in the Cloud Pool are
updated with processor pool information.
v There is a VSRPARM parameter to use the processor pool through REST API
only.
v There is a new discovery extension point available, which is callable from the
Cloud Server Pool Administration Application (CSPA) to enable customer
extensions for discovery. The extension point contains an out-of-the-box
implementation that performs a CEC-level discovery.
v The processor pools must be defined on all CECs within the cloud pool. If not
done, it results in deployment errors.
v Modify server resources is not supported for LPARs that are in a processor
pool.
v No processor pools support from the web 2.0 UI.
Processor pool configuration in the backend
The processor pools must be set up on the CECs within a cloud pool before you
run the discovery of the processor pool. Setup of the processor pools on the CECs
must be done through HMC. The processor pools must be defined on all the CECs
within a cloud pool.
Processor Pool configuration in the Cloud Server Pool
Administration Application
Enablement of Processor Pool support is done within the Cloud Server Pool
Administration Application on a per cloud pool base. You can enable Processor
Pool support in the Host Platform and VIOS Configuration tab only for SAN
storage. You can trigger the processor pool discovery from the Processor Pool
Settings tab. All CECs must be added to the cloud pool before the processor pool
discovery is run.
Procedure
1. Go to Cloud Server Pool Administration > Cloud Server Pool Details tab.
2. In the Resource Configuration section, go to Host Platform and VIOS
Configuration tab.
3. In the Hostplatform Features section, select SAN Storage/Multiple VIOS
Mode.
4. Select Enable Processor Pools? When you enable the processor pools, the
Processor Pool Settings tab is available.
Note: In the Processor Pool Settings tab, the value of Processor Pool
Discovery Workflow Name is preset to "pSeries_DiscoverProcessorPools".
5. Click Processor Pool Discovery to start the asynchronous workflow.
6. Wait for the process to complete.
Important: If you change the processor pool definitions on a CEC, you must run
the processor pool discovery again to reflect the changes in the data center
model (DCM). If an HMC discovery is executed for a cloud pool that has
processor pools enabled, you must run the processor pool discovery before you
enable the cloud pool.
Service request parameters for processor pool support
A new property is introduced to specify the processor pool that the LPAR must be
provisioned to: PMRDP.SystemP.CPU.Shared_proc_pool_name.
The value of this property must be set to the name of the processor pool. The
srctype of this property must be VST. The property must be stored in the
VSRPARM section of the Service Request.
VSRPARM properties to set are as follows:
v SRCATTRNAME = PMRDP.SystemP.CPU.Shared_proc_pool_name
v SRCTYPE = VST
v SRCTOKEN = 0
v SRCATTRVAL = <name of the processor pool>
VST properties
The following properties in the VSRPARM must be set if
PMRDP.SystemP.CPU.Shared_proc_pool_name is set and the cloud pool is enabled for
Processor Pool support:
v SRCATTRNAME = PMRDP.SystemP.CPU.Sharing_mode
v SRCTYPE = VST
v SRCTOKEN = 0
v SRCATTRVAL = Constant string, either cap or uncap. If not set,
uncap is the default.
v SRCATTRNAME = PMRDP.SystemP.CPU.Uncap_weight
v SRCTYPE = VST
v SRCTOKEN = 0
v SRCATTRVAL = Integer with the uncap weight (0-255). If not set, the default
is 128.
Example form to set VSRPARM:
<!-- Example for PMRDPVSRPARM values created and filled during SR creation -->
<!-- sets the PMRDP.SystemP.CPU.Shared_proc_pool_name
variable in the VST to the value mypool -->
VSRPARM.1.DESCRIPTION
<input name="PMRDPVSRPARM.1.DESCRIPTION"
value="Processor Pool Name" type="text"><p/>
VSRPARM.1.SRCTOKEN
<input name="PMRDPVSRPARM.1.SRCTOKEN"
value="0" type="text"><p/>
VSRPARM.1.SRCTYPE
<input name="PMRDPVSRPARM.1.SRCTYPE" value="VST" type="text"><p/>
VSRPARM.1.SRCATTRNAME
<input name="PMRDPVSRPARM.1.SRCATTRNAME"
value="PMRDP.SystemP.CPU.Shared_proc_pool_name" type="text"><p/>
VSRPARM.1.SRCATTRVAL
<input name="PMRDPVSRPARM.1.SRCATTRVAL" value="mypool" type="text"><p/>
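The same VSRPARM entries can also be assembled programmatically before the Service Request is submitted through the REST API. A minimal Python sketch — the list-of-dictionaries layout is an assumption; the attribute names, SRCTYPE, and SRCTOKEN values come from the lists above:

```python
def vsrparm_entry(name, value, token="0", srctype="VST"):
    """Build one VSRPARM entry for the Service Request."""
    return {
        "SRCATTRNAME": name,
        "SRCTYPE": srctype,
        "SRCTOKEN": token,
        "SRCATTRVAL": value,
    }

# Processor pool placement plus the related VST properties.
vsrparms = [
    vsrparm_entry("PMRDP.SystemP.CPU.Shared_proc_pool_name", "mypool"),
    vsrparm_entry("PMRDP.SystemP.CPU.Sharing_mode", "uncap"),  # cap or uncap; uncap is the default
    vsrparm_entry("PMRDP.SystemP.CPU.Uncap_weight", "128"),    # 0-255; 128 is the default
]
```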
Advanced configuration settings
Learn about the optional and additional configuration settings that you can set in
your environment.
Reducing the run time of provisioning requests
You can reduce the run time of your provisioning requests to prevent them from
staying in the 'Queued' status for a long time.
About this task
Tivoli Service Automation Manager uses an algorithm to place virtual machines on
the defined host platforms in a resource pool. The algorithm helps to find an
optimal placement in context of the resources available in the resource pool. All
open provisioning requests take part in this optimization process. This includes
future reservations and active provisioning requests still waiting to be fulfilled.
Normally the process runs in a short time frame, but in rare situations it might
take considerably longer. Such situations depend on the resources that are still
available in the resource pool and on the kind of requests submitted. If such
behavior occurs, you can reduce the run time of the algorithm:
Procedure
1. Add the PMRDP.Fitting.Spray.Basic.MaxDepth parameter as a variable to the
Tivoli Provisioning Manager spare pool. If this parameter is not set, the
default value of 4 is used.
2. Set the value of this parameter to less than 4 to reduce the run time of the
placement. Do not increase the value to more than 4.
Results
Setting the value to less than 4 reduces the placement time, but the resulting
placement is not as optimal as with the original value.
Overcommitting resources on VMware hypervisor
The VMware hypervisor provides the capability to allocate more CPU, memory,
and storage resources to the virtual machines than physically available on the
hypervisor machines. This functionality is supported in Tivoli Service Automation
Manager with overcommit factors. Physical resources on the VMware cluster are
multiplied by the overcommit factor, and the reservation system permits
allocating more resources to the virtual machines than are physically available.
About this task
Overcommitting resources allows for more cost-effective usage of the cloud at the
cost of lower performance on the virtual machines.
For more information about the capabilities and strategies for overcommitting
resources on VMware, refer to the VMware Resource Management Guide available at
http://www.vmware.com/pdf/vi3_301_201_resource_mgmt.pdf.
Note: Changes to the overcommit factors take effect only after the Virtual
Center Discovery is run in the Cloud Server Pool Application, and only newly
provisioned virtual machines are affected. All existing virtual machines continue
to consume the resources as configured at the time of the last Modify Resources
request or, if no such request was submitted, at the time of the last
provisioning operation.
Overcommitting CPU
You can configure the system to assign more CPU to a VMware virtual machine
than is physically available on the hypervisor.
Procedure
1. Log on to the administrative user interface as user maxadmin.
2. Click Go To > Service Automation > Configuration > Cloud Server Pool
Administration.
3. Select the cloud pool that you want to configure.
4. In the Cloud Server Pool Details tab, ensure that the Resource Pool
Configuration sub-tab is open.
5. Verify that a resource pool is configured in the Resource Pool Name field. If it
is not configured, follow the steps described in Manually configuring cloud
server pools for VMware.
6. If the resource pool is configured, click Detail Menu > Go To Resource
Pools.
7. Open the Variables tab and click New Row.
8. In the Variable field, type: Cloud.overcommitfactor.cpu.
9. In the Value field, enter the required value for the overcommit factor.
10. When all overcommit factors are set up, run a virtual center discovery.
Results
If the Cloud.overcommitfactor.cpu property is not set, a default value of 1.0 is
assumed. When you define this property, the actual CPU value discovered during
the VMware virtual center discovery is multiplied by the overcommit factor to
provide the number of physical CPUs that are available for Tivoli Service
Automation Manager virtual machines.
Overcommitting memory
You can configure the system to assign more memory to a VMware virtual
machine than is physically available on the hypervisor.
About this task
If the memory overcommit factor is not set, a default value of 1.0 is assumed. If
the memory reservation overcommit factor is not set, a default value of 0.75 is
assumed. When you increase the memory overcommit factor, you must at the same
time decrease the memory reservation overcommit factor, so that the physical
memory reservation on VMware does not limit provisioning.
Tivoli Service Automation Manager virtual machine provisioning fails if the actual
hardware in the hypervisor is not able to satisfy the request, that is, if the reserved
physical memory for all started virtual machines exceeds 100%. The default
memory reservation factor of 0.75 takes the overhead incurred by the hypervisor
into account. Tivoli Service Automation Manager does not provide a means to
monitor or warn the user about exceeding physical memory. The administrator
must implement an external monitoring infrastructure to avoid such a situation,
and define a process to increase the hardware resources in such situations.
Procedure
1. Log on to the administrative user interface as user maxadmin.
2. Click Go To > Service Automation > Configuration > Cloud Server Pool
Administration.
3. Select the cloud pool that you want to configure.
4. In the Cloud Server Pool Details tab, ensure that the Resource Pool
Configuration sub-tab is open.
5. Verify that a resource pool is configured in the Resource Pool Name field. If it
is not configured, follow the steps described in Manually configuring cloud
server pools for VMware.
6. If the resource pool is configured, click Detail Menu > Go To Resource
Pools.
7. Open the Variables tab and click New Row.
8. In the Variable field, type: Cloud.overcommitfactor.memory.
9. In the Value field, enter the required value for the overcommit factor.
10. Click New Row.
11. In the Variable field, type: Cloud.overcommitfactor.memoryreservation.
12. In the Value field, enter the required value for the overcommit factor.
13. When all overcommit factors are set up, run a virtual center discovery.
Results
When you define the Cloud.overcommitfactor.memory property, the actual
discovered hypervisor memory value is multiplied by the memory overcommit
factor to get the amount of memory that is available for Tivoli Service Automation
Manager virtual machine reservation.
When you define the Cloud.overcommitfactor.memoryreservation property, the
memory reservation overcommit factor is multiplied by the requested memory size
for a virtual machine to define the memory reservation value for the VMware
hypervisor. This value is the guaranteed amount of physical memory that VMware
reserves for the virtual machine.
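The interplay of the two memory factors can be expressed as simple arithmetic. A minimal Python sketch with illustrative values (the 1.0 and 0.75 defaults come from the text above; function names are hypothetical):

```python
def memory_capacity(physical_mb, overcommit=1.0):
    """Memory available for reservation = discovered physical memory * overcommit factor."""
    return physical_mb * overcommit

def memory_reservation(requested_mb, reservation_factor=0.75):
    """Guaranteed physical memory that VMware reserves for a virtual machine."""
    return requested_mb * reservation_factor

# 64 GB hypervisor with Cloud.overcommitfactor.memory = 1.5:
print(memory_capacity(65536, 1.5))   # 98304.0 MB available for reservation
# A VM requesting 4 GB with the default reservation factor:
print(memory_reservation(4096))      # 3072.0 MB physically reserved
```

Raising the memory overcommit factor while lowering the reservation factor keeps the sum of physical reservations from blocking provisioning, as described above.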
Overcommitting storage
You can configure the system to allocate more storage to the VMware virtual
machine than is physically available on the hypervisor.
Before you begin
Note:
v When you use this functionality, you must manually monitor your storage
consumption to prevent system failures.
v The storage allocation can only be overcommitted when you are using thin
provisioned virtual machines.
Tivoli Service Automation Manager can thin provision virtual machines if the
master image is configured to use thin provisioned disks.
If you use this functionality, you must manually monitor your resources. If the
thin-provisioned storage exceeds the available disk space, any new provisioning
request fails. If any existing virtual machines allocate additional storage by
writing new data to their file systems, those virtual machines stop and can no
longer be accessed.
Procedure
1. Log on to the administrative user interface as user maxadmin.
2. Click Go To > Administration > Provisioning > Provisioning Global Settings.
3. Open the Variables tab and click New Row.
4. In the Variable field, type: Cloud.overcommitfactor.disk.
5. In the Value field, enter the required value for the overcommit factor.
6. When all overcommit factors are set up, run a virtual center discovery.
Results
If the Cloud.overcommitfactor.disk property is not set, a default value of 0.90 is
assumed. The value is smaller than 1 to account for overhead that is used for
virtual machine metadata and memory swap files. The actual discovered data store
size value will be multiplied by the overcommit factor to define the amount of
storage that is available for Tivoli Service Automation Manager virtual machine
reservation.
VMware storage resiliency:
Tivoli Service Automation Manager 7.2.4 validates the current capabilities of the
selected data store when provisioning is run, and corrects the selected data store
when necessary. This function compensates for out-of-band operations performed
in the VMware storage environment that are unknown to Tivoli Service
Automation Manager and that affect both its internal data model and its
allocation tracking.
When this function is activated, the following formula is used to validate disk
placement, regardless of whether thin-disk or thick-disk provisioning is
implemented:
datastore capacity × Cloud.overcommitfactor.disk >= nominal VM space + Tivoli Service
Automation Manager current provisionings
Note: Nominal VM space is the virtual machine's regular disk space as allocated
during thick-disk provisioning.
If you use thin-disk provisioning, set the Cloud.overcommitfactor.disk
parameter according to the type of provisioning performed.
The VMware storage resiliency function is disabled by default. To enable it, define
the Tivoli Provisioning Manager global variable:
Cloud.ENABLE_VMWARE_STORAGE_RESILIENCY=true
Restriction: The VMware storage resiliency function might interfere with high
performance requirements. If you require high throughput to create a large
number of VMs quickly, switch off this function.
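The validation formula above can be sketched as a boolean check, assuming capacities in gigabytes (function and parameter names are illustrative):

```python
def placement_is_valid(datastore_capacity_gb, overcommit_disk,
                       nominal_vm_space_gb, current_provisionings_gb):
    """Validate disk placement: capacity * Cloud.overcommitfactor.disk must cover
    the nominal VM space plus what Tivoli Service Automation Manager has already
    provisioned on the data store."""
    return (datastore_capacity_gb * overcommit_disk
            >= nominal_vm_space_gb + current_provisionings_gb)

# 1000 GB data store, default Cloud.overcommitfactor.disk of 0.90,
# 60 GB new VM, 800 GB already tracked as provisioned:
print(placement_is_valid(1000, 0.90, 60, 800))   # True  (900 >= 860)
print(placement_is_valid(1000, 0.90, 150, 800))  # False (900 < 950)
```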
Integrating Tivoli Service Automation Manager with other Tivoli
products
This chapter discusses configuration tasks applying to the interface of Tivoli
Service Automation Manager with other Tivoli products outside the scope of TPAE.
Integrating Tivoli Monitoring
This section discusses configuration tasks applying to the interface of Tivoli Service
Automation Manager with Tivoli Monitoring.
Before you begin
There are two distinct and independent applications for Tivoli Monitoring within
Tivoli Service Automation Manager:
v Self-Service Virtual Server Management component: A Tivoli Monitoring agent
can be installed when a virtual server is provisioned. This agent then reports
resource consumption for the server, such as CPU and memory, in the
self-service user interface.
v WebSphere Cluster service: Definition of performance-related events to be
monitored and triggering respective event monitoring applications to assist the
user in resolving the problems noted.
Configuring the provisioning of the monitoring agent
Perform this configuration task to enable Tivoli Service Automation Manager to
deploy the Tivoli Monitoring agent on the provisioned virtual machines.
Before you begin
Ensure you meet the following requirements:
v The Tivoli Monitoring server version 6.2.2, or at minimum 6.2.1, is installed and
running. See the Tivoli Monitoring knowledge center for instructions on installing
the server for your OS and platform.
v You have the following information readily available from the Tivoli Monitoring
server:
The IP address of the server as well as its IP port number (the default is
typically 1918).
The User ID and password of the administrator managing the server.
v In Tivoli Service Automation Manager 7.2.4.4, a 32-bit Tivoli Monitoring agent
must be installed on a 32-bit operating system, and a 64-bit Tivoli Monitoring
agent must be installed on a 64-bit operating system. If there is a mismatch
between the agent and the operating system, an error occurs: "update
windows registry for TEMS server details".
Preparing the monitoring agent installable:
This task describes how to make the IBM Tivoli Monitoring agent installable
accessible in Tivoli Service Automation Manager.
Procedure
Obtain a copy of the Tivoli Monitoring agent installable for UNIX and Windows
from the IBM Tivoli Monitoring media.
v To prepare the monitoring agent installable for UNIX operating systems:
1. Locate the NFS server and ensure that it belongs to the same subnetwork as
the provisioned virtual machines.
2. Record the IP address and subnetwork mask of this server. (They must
match the settings on the DCM file repository object in the DCM that
represents this NFS server.)
3. Copy the Tivoli Monitoring agent installable tar file to a directory on that
NFS server.
4. Create a directory called, for example, /repository on the NFS server, and
export it. (This name should match the root-path property in the DCM file
repository object that represents this NFS server).
5. Create a subdirectory called itm621 and untar the Tivoli Monitoring agent
installable into that directory.
Important: After you untar the monitoring agent installable, you need to
make a change to the install.sh file inside /repository/itm621:
a. Run the following command: cp install.sh install.sh.original.
b. Edit the install.sh and replace while getopts
":h:j:d:o:p:acqv:g(forceCDgskit)" opt with while getopts
":h:j:d:o:p:acqv:g" opt.
c. Save the changes to the file.
v To prepare a Tivoli Service Automation Manager agent installable for Windows
Operating Systems:
1. Locate the Samba server and ensure that it belongs to the same subnetwork
as the provisioned virtual machines.
2. Record the IP address and subnetwork mask of this server. (They must
match the settings on the DCM file repository object in the DCM that
represents this Samba server.)
3. Copy the agent installable tar file to a directory on that Samba server.
4. Create a directory called, for example, repository on the Samba server, and
export it with that name. (The export name should match the root-path
property in the DCM file repository object that represents this Samba server.)
5. Create a subdirectory named itm621/WINDOWS, and unpack the monitoring
agent installable into that directory.
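The install.sh edit described in the UNIX preparation above can also be scripted; a minimal Python sketch (the path is illustrative, the two getopts strings are taken verbatim from the step):

```python
import shutil

# getopts option strings from the install.sh edit described above
OLD_GETOPTS = '":h:j:d:o:p:acqv:g(forceCDgskit)"'
NEW_GETOPTS = '":h:j:d:o:p:acqv:g"'

def patch_getopts(text):
    """Drop the (forceCDgskit) suffix from the getopts option string."""
    return text.replace(OLD_GETOPTS, NEW_GETOPTS)

def patch_install_sh(path):
    """Back up install.sh (cp install.sh install.sh.original) and patch it in place."""
    shutil.copy(path, path + ".original")
    with open(path) as f:
        content = f.read()
    with open(path, "w") as f:
        f.write(patch_getopts(content))

# patch_install_sh("/repository/itm621/install.sh")
```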
Defining the Tivoli Monitoring agent software definition in Tivoli Provisioning
Manager:
This task describes how to add an IBM Tivoli Monitoring agent software definition
customized for Tivoli Service Automation Manager to the Tivoli Provisioning
Manager software catalog. It also describes the minimal configuration steps
required so that Tivoli Service Automation Manager can provision the monitoring
agent.
Before you begin
Tivoli Service Automation Manager requires a software definition that represents
the monitoring agent with version 6.2.1. Tivoli Service Automation Manager also
requires special customization of the software definition so that it can be deployed
on virtual machines. This procedure describes how to customize and create the
monitoring agent software definition using the Cloud Pool Administration
application and by customizing a DCM XML file provided by the Tivoli Service
Automation Manager installation. The DCM XML files are located in the
/etc/cloud/install/DCM directory on the Service Automation Manager server.
Important: If other monitoring agent definitions already exist in the software
catalog with the name IBM Tivoli Monitoring Agent but do not match the
customization described in this section, then they must be either deleted or
renamed in Tivoli Provisioning Manager. You can view the list of software
definitions by clicking Go To > IT Infrastructure > Software Catalog > Software
Products.
Procedure
1. Depending on whether you are configuring the monitoring agent for one or
more operating systems (AIX, Linux, Windows), choose the appropriate DCM
XML file to edit:
v Windows: 41_Cloud_ITM_Agent_Windows.xml
v Linux: 42_Cloud_ITM_Agent_Linux.xml
v AIX: 43_Cloud_ITM_Agent_AIX.xml
v Multiple OSs: 44_Cloud_ITM_Agent_AIX_Linux_Windows.xml
2. Verify that you have the proper file repositories defined in Tivoli Provisioning
Manager and that they reflect the previously mentioned NFS and Samba
servers.
a. To view the defined file repositories, log on to the administrative user
interface as the maxadmin user and list them by clicking Go To > IT
Infrastructure > Provisioning Inventory > File Repositories
b. If you have already configured Tivoli Service Automation Manager for
VMware or PowerVM, file repositories are already defined in the database
with the following default names:
v Cloud File Repository for the NFS server
v Cloud Windows File Repository for the Samba server
3. Depending on whether the repositories are already defined in Tivoli
Provisioning Manager or not, perform the appropriate step.
File repository is already defined
Update the IP address and subnetwork mask in the network interface
settings for each repository in Tivoli Provisioning Manager to match
the corresponding NFS and Samba servers you prepared previously.
File repository is not defined
a. In the chosen DCM XML files, customize the section below
according to your environment:
<!--For ITM and other installables package repository. -->
<file-repository name="CloudFileRepository" is-device-model="Cloud File Repository" root-path="/export/backups/repository">
<nic name="NIC0" failed="false" managed="false" management="false" netboot-enabled="false">
<network-interface name="eth0" failed="false" managed="false" management="true" dynamic-ipaddress="false" ipaddress="0.0.0.0"
netmask="0.0.0.0" address-space="DEFAULT" allocation="none"/>
</nic>
</file-repository>
<!--For ITM and other installables package repository. -->
<file-repository name="Cloud Windows File Repository" is-device-model="Cloud File Repository" root-path="repository">
<nic name="NIC0" failed="false" managed="false" management="false" netboot-enabled="false">
<network-interface name="eth0" failed="false" managed="false" management="true" dynamic-ipaddress="false" ipaddress="0.0.0.0"
netmask="0.0.0.0" address-space="DEFAULT" allocation="none"/>
</nic>
<sap name="SMB" protocol-type="unknown" app-protocol="UNKNOWN" context="NOCONTEXT" port="139" auth-compulsory="true" role="host">
<credentials search-key="default" is-default="true">
<password-credentials username="Administrator" password="xxxxxx" is-encrypted="false" />
</credentials>
</sap>
</file-repository>
b. In the file-repository section with name CloudFileRepository,
set the ipaddress, netmask, and root-path to match those of the
NFS server.
c. In the Cloud Windows File Repository section, set the ipaddress,
netmask, root-path, username, and password to match those of
the Samba server.
d. Tivoli Provisioning Manager requires that subnetworks already
exist in the DCM for any IP addresses used in the network
interfaces of defined objects. If the IP addresses used in the
previous steps belong to an existing subnetwork in Tivoli
Provisioning Manager, proceed to the next step. Otherwise,
define a subnetwork, either in the administrative user interface
or by using the following XML section. Add the section to the
previously chosen DCM files directly above the file repositories
and customize it for the required subnetwork.
<subnetwork address-space="DEFAULT" name="Subnet for NFS and SMB server"
ipaddress="10.100.0.0" netmask="255.255.255.0">
<property component="KANAHA" name="broadcast" value="10.100.0.255" />
<property component="KANAHA" name="gateway" value="10.100.0.1" />
<property component="KANAHA" name="vm-mgmt-route" value="true" />
</subnetwork>
e. Save the file.
4. As indicated at the beginning of this task, ensure that there are no
software definitions in Tivoli Provisioning Manager with the same name as
the software definition described above, specifically IBM Tivoli Monitoring
Agent; otherwise, the next step might fail.
5. Customize the Tivoli Monitoring agent software definition and define it in
Tivoli Provisioning Manager:
a. In the section of type software-resource-template, with name = "ITM
Installation Template - <OS_type>", set the value for the parameters
host and port to match the IP address and port number of the Tivoli
Monitoring server.
b. Save the file.
6. Import the DCM XML file using the Cloud Server Pool Administration
application: Go To > Service Automation > Configuration > Cloud Server
Pool Administration
7. Update the monitoring agent software definition with the Tivoli Provisioning
Manager object ID of all the installable software definitions in Tivoli
Provisioning Manager that represent the operating systems on which the
monitoring agent will be installed.
a. Log on to the administrative user interface as the user maxadmin.
b. Use the Object Finder to look up the object ID of all the software
definitions corresponding to the operating systems with which you want
the monitoring agent to be deployed. Repeat these steps for each software
definition.
1) Click Go To > IT Infrastructure > Image Library > Master Images,
and select the image that you registered.
2) In the Master Image tab, take note of the Guest Operating System and
Software Stack names.
3) From the Start Center, in the Data model object finder portlet, filter
for the name of the Guest Operating System object.
4) Take note of the associated Software Definition object ID.
5) Filter for the name of the Software Stack object.
6) Take note of the associated Software Stack and Image object IDs
c. Click Go To > IT Infrastructure > Software Catalog > Software Products
and select the monitoring agent software definition customized for Tivoli
Service Automation Manager.
d. Click the Variables tab
e. In the Variables section, click the variable named dependencies_1.
f. In the Value field, enter the object IDs (software definition, software stack,
image) determined in the previous step. Separate each value by a comma.
g. Define one more variable named exposetotivsam, with value 1.
h. Click Save.
8. Add the monitoring agent software definition to the software stack associated
with the resource pool that represents the virtual environment for which you
have configured Tivoli Service Automation Manager.
a. Click Go To > IT Infrastructure > Software Catalog > Software Stacks.
b. Click on the appropriate software stack from the list. For example, if you
want the agent to be deployed into a VMware environment configured
with defaults, choose the software stack called ESXPoolStack. For
PowerVM (if supported), choose the default software stack
ODSDS_PPC_host_stack.
c. From the Select Action menu, click Add Stack Entry.
d. Search for and select the IBM Tivoli Monitoring Agent from the list.
e. Click Submit.
f. Save the changes.
9. Configure the Tivoli Monitoring server endpoint PMRDPITM:
a. Click Go To > Integration > Endpoints
b. Search for and select the PMRDPITM endpoint.
c. Set USERNAME and PASSWORD values with the credentials required to
access your Tivoli Monitoring server.
d. Set the value of REQUESTURL to: http://<ITM_Server_Hostname>:1920///
cms/soap
e. Set the value of SOAPTIMEOUT in seconds. Default value is 3600.
f. Click Save.
Note: If you change the endpoint URL in an operational Tivoli Service
Automation Manager environment, you must restart the management server so
that the change takes effect in the administrative user interface.
10. Enable monitoring agent installation during provisioning for each of the
following offerings:
v PMRDP_0211A_72 (Add VMware Servers)
v PMRDP_0212A_72 (Add POWER LPAR Servers)
v PMRDP_0213A_72 (Add Xen Servers)
v PMRDP_0214A_72 (Add z/VM Linux Servers)
v PMRDP_0215A_72 (Add KVM Servers)
v PMRDP_0201A_72 (Create Project with VMware Servers)
v PMRDP_0202A_72 (Create Project with POWER LPAR Servers)
v PMRDP_0203A_72 (Create Project with Xen Servers)
v PMRDP_0204A_72 (Create Project with z/VM Linux Servers)
v PMRDP_0205A_72 (Create Project with KVM Servers)
v PMRDP_0206A_72 (Create POWER LPAR Servers via IBM Systems Director
VMControl)
v PMRDP_0216A_72 (Add POWER LPAR Servers via IBM Systems Director
VMControl)
a. Log on to the administrative user interface as PMSCADMUSR.
b. Click Go To > Service Request Manager Catalog > Offerings
c. Click on the offering to open it.
d. Click the Change Status button and change the status of the offering to
Pending.
e. Switch to the Specifications tab.
f. In the Presentation section, select the Mandatory check box and clear
the Hidden check box that correspond to the offering attribute
PMRDPCLSWS_MONITORING.
g. Change the status of this offering to Active.
260 Tivoli Service Automation Manager V7.2.4.4 Installation and Administration Guide
Enabling the monitoring agent installation on restored images:
By default, if a server image was saved, and that server had a monitoring agent
installed on it, you can no longer use the monitoring feature when you restore the
image. If you want the monitoring agent to work on the restored images, you need
to enable the Monitoring agent to be installed check box for the Create Project
from Saved Image and the Add Server from Saved Image offerings.
Procedure
1. Log on to the administrative user interface as PMSCADMUSR.
2. Click Go To > Service Request Manager Catalog > Offerings.
3. Filter for the Create Project from Saved Image offering and open it.
4. From the Select Action menu, select Change Status.
5. Change the status of the offering to PENDING and click OK.
6. Click the Specification tab.
7. In the Presentation subsection of the Default Presentation Details, filter for
PMRDPCLSWS_MONITORING.
8. Deselect the Hidden check box and select the Mandatory box.
9. Change the status of the offering to ACTIVE.
10. Click Save.
11. Filter for the Add Server from Saved Image offering, open it, and repeat
steps 4 to 10.
Results
You can now check the Monitoring agent to be installed check box in the Create
Project from Saved Image and Add Server from Saved Image requests.
Synchronizing data between Tivoli Monitoring and Tivoli Service Automation
Manager:
You can set the frequency at which data is monitored and synchronized between
Tivoli Monitoring and Tivoli Service Automation Manager. After you install and
configure Tivoli Monitoring, you must activate the cron task that initiates the
data refresh. Activate the ITMSyncCron cron task and configure its refresh
frequency. The synchronized data provides the values shown in the Manage Project
Details and Server Details panels.
Procedure
1. Click Go To > System Configuration > Platform Configuration > Cron Task Setup.
2. On the Cron Task tab, search for ITMSyncCron.
3. Select the Active? check box to activate monitoring. By default, the cron task
is inactive.
4. Optional: Change the value of Schedule or the time interval at which the
monitoring activity takes place. The recommended interval is not less than 15
minutes for 300 servers.
Important: Do not set a shorter interval when managing many virtual machines,
because it degrades performance.
5. Save the record.
6. If you modify the resources of a virtual machine, restart the virtual machine.
Important: Recent changes made to the virtual machine are refreshed in the
UI only after the next run of the cron task.
7. Required: Stop and start the MXServer whenever you change the cron job
schedule:
su - tioadmin
./tio.sh stop wasadmin <wasadmin_password>
./tio.sh start wasadmin <wasadmin_password>
Note: To view the details about the virtual machine usage values, see Viewing
virtual machine usage values.
Viewing virtual machine usage values:
You can view the details of values that are shown in Manage server and other
Tivoli Service Automation Manager UI panels. Alternatively, you can get the values
of CPU, Memory, and Disk from Tivoli Monitoring.
Procedure
View the details of values that are shown in Manage server and other Tivoli
Service Automation Manager UI panels:
1. Log in to Simple SRM.
2. Go to Manage Servers.
Note: In the Resources tab, the capacity values are displayed. The values are
same as the values with which the virtual machine was created. In many UI
panels of Tivoli Service Automation Manager, you can find the following
values:
v CPU(%) - average CPU usage. CPU corresponds to Cpu_Averages in Tivoli
Monitoring.
v Memory(%) - percentage of memory used relative to the total memory size
of the virtual machine. Memory corresponds to Free_Memory in Tivoli
Monitoring.
v Disk(%) - percentage of disk space used relative to the total disk size of
the virtual machine. Disk usage corresponds to Space_Available in Tivoli
Monitoring.
View the values of CPU, Memory, and Disk in Tivoli Monitoring:
3. Log in to Tivoli Monitoring Agent.
4. Run SOAP calls to query for available space. Example:
<CT_Get><userid>sysadmin</userid><password></password><object>ManagedSystem</object><target>ManagedSystemName</target></CT_Get>
Note: Within the Object tag, you can query for Space_Available (summation of
all the partitions), Free_Memory, Cpu_Averages.
5. Click Make SOAP Request.
Remember: The Tivoli Monitoring server and memory commands show disk usage
per file system in the virtual machine. The Disk(%) values are calculated based
on the total disk size that is allocated to the virtual machine, so there can be
some differences between the values. The templates also determine the number of
file systems or drives a virtual machine can have.
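The CT_Get query above can also be sent from a command line instead of the Make SOAP Request page. The following is a sketch under assumptions: the host name, user ID, and target system name are placeholders, and the URL follows the REQUESTURL format used for the PMRDPITM endpoint.

```shell
# Sketch: send the CT_Get SOAP query to the hub monitoring server with curl.
# ITM_HOST and the target system name are placeholders for your environment.
ITM_HOST="${ITM_HOST:-itm.example.com}"
SOAP_URL="http://${ITM_HOST}:1920///cms/soap"
BODY='<CT_Get><userid>sysadmin</userid><password></password><object>ManagedSystem</object><target>ManagedSystemName</target></CT_Get>'
echo "POST ${SOAP_URL}"
echo "$BODY"
# Uncomment to send the request (needs network access to the monitoring server):
# curl -s -X POST -H "Content-Type: text/xml" --data "$BODY" "$SOAP_URL"
```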
Configuring monitoring for the WebSphere Cluster service
About this task
The following sections describe tasks related to setting up the function to monitor
performance on deployed service instances of the WebSphere Cluster service.
Setting up predefined Tivoli Service Automation Manager events for
monitoring:
This task describes how you set up the Tivoli Service Automation Manager events
for monitoring, when using WebSphere Cluster service.
Before you begin
Note: This task is relevant only if you have purchased and installed the Tivoli
Service Automation Manager for WebSphere Application Server and IBM Tivoli
Monitoring products.
About this task
Before monitoring agents are deployed as part of creating a new service
instance, and before scenario-based monitoring can take place, the events defined
for the various scenarios must be set up. The setup must be done on a Linux or
AIX system where an IBM Tivoli Monitoring server (the hub monitoring server) is
installed. By default, the hub monitoring server is assumed to be installed on the
local host. Alternatively, the events can be defined on a hub monitoring server
running on another host. In either case, the hub monitoring server must be
configured to use SOAP services.
Procedure
1. Locate the script ctjzhCreateSit.sh, which is installed on the Tivoli Service
Automation Manager management server system in install_dir/bin. The install_dir
defaults to /opt/IBM/TSAM.
2. Run install_dir/bin/ctjzhCreateSit.sh to set up the predefined events.
Syntax:
ctjzhCreateSit.sh [-u username] [-p password] [-s server]
Meaning:
-u username
Specifies the user to authenticate at the monitoring server. If no
username is passed, the default user sysadmin is assumed without any
password.
-p password
Specifies the password of the user to authenticate at the monitoring
server.
-s server
server is the name of a system where a hub monitoring server is
running. By default, it is assumed that the hub monitoring server is
running on the local host.
Example
The following example creates the product-defined events and defines them at the
monitoring server running on the local host. To authenticate, the user id tstadmin
is provided together with the valid password: ctjzhCreateSit.sh -u tstadmin -p
<password>
Deleting events:
About this task
Individual events can be deleted using the script ctjzhDeleteSit.sh. Like
ctjzhCreateSit.sh, it is installed on the Tivoli Service Automation Manager
management server system in install_dir/bin, which defaults to
/opt/IBM/TSAM/bin. A list of all situations defined by the product is provided in
the ctjzhSituations file also located in install_dir/bin.
Syntax:
ctjzhDeleteSit.sh [-u username] [-p password] [-s server]
Meaning:
-u username
Specifies the user to authenticate at the monitoring server. If no username is
passed, the default user sysadmin is assumed without any password.
-p password
Specifies the password of the user to authenticate at the monitoring server.
-s server
server is the name of a system where a hub monitoring server is running.
By default, it is assumed that the hub monitoring server is running on the
local host.
Example
The following example deletes the situation pmzhb_zos_pageds_not_oper currently
being defined at the monitoring server running on the local host. To authenticate,
the user ID tstadmin is provided, together with the valid password:
ctjzhDeleteSit.sh -u tstadmin -p <password> pmzhb_zos_pageds_not_oper
The following example deletes all situations listed in file ctjzhSituations (or any
other file of choice):
for name in `cat ctjzhSituations`; do ctjzhDeleteSit.sh -u tstadmin -p <password> $name; done
Enabling the SSH command end point to retrieve IBM Tivoli Monitoring
configuration information:
This task describes how to configure the end point when common performance
monitoring attributes are collected from local definition files.
About this task
When a monitoring definition is created the first time and the hub Tivoli Enterprise
Management Server (TEMS) is installed on the management server, common
performance monitoring attributes can be collected from the local definition files. To
enable this function, the endpoint PMZHBPCFG has to be configured. To configure
the endpoint, you must:
Procedure
1. Click Go To, Integration, and End Points.
2. Select end point PMZHBPCFG.
3. Check the setting of the property CMDWORKDIR. You are not required to change
this property unless you installed Tivoli Service Automation Manager in a
non-default location. If you did, specify your own installation path followed by
/bin.
4. Enter details for the property USERNAME. The specified user must have
permission to read and execute scripts from the Tivoli Service Automation
Manager installation directory, and also have permission to read IBM Tivoli
Monitoring configuration files. By default, Tivoli Service Automation Manager
installs these components using the root user name.
5. Enter details for the property PASSWORD. This is the password for the user
that was specified in the previous step.
6. Save your settings by clicking the Save symbol. Alternatively, you can press
Ctrl-Alt-S.
Once the end point PMZHBPCFG has been defined, you can click the Get
Configuration button on the Monitoring Definition panel to retrieve the settings
of IBM Tivoli Monitoring directly (instead of having to enter them manually).
Triggering the Tivoli Service Automation Manager event monitoring
application:
You need to configure a SOAP service to trigger the Tivoli Service Automation
Manager event monitoring application.
About this task
Whenever a predefined event has been detected by a performance monitor, the
Tivoli Service Automation Manager Situation Analysis application is triggered via
IBM Tivoli Monitoring reflex automation. The reflex automation command sends
correlation information to Tivoli Service Automation Manager using a SOAP
service.
Note: In order to be able to switch from the Tivoli Service Automation Manager
application to the Incident application via the Route Workflow button, the
performance security group to which the user belongs must have access rights for
the Incident application.
Procedure
1. Edit the install_dir/bin/.ctjzhtrigger file. Specify the host name and the
port number of the SOAP service listener inside Tivoli Service Automation
Manager. The .ctjzhtrigger file defines the Tivoli Service Automation
Manager host that is to be triggered, and also the Tivoli's process automation
engine user credentials that are used if a predefined event becomes true.
2. Specify a valid Tivoli's process automation engine user ID and password
required to authorize the SOAP service caller. The following example settings
trigger Tivoli Service Automation Manager on host
your.management.server.name on port 9080 for the user maxadmin.
#
# Settings for Maximo SOAP service
#
hostname=your.management.server.name
portnum=9080
maxusername=maxadmin
maxuserpassword=maxadmin
3. Create and deploy a Web service that initiates an event analysis workflow. This
should occur whenever a trigger is sent via the IBM Tivoli Monitoring reflex
automation, using ctjzhtrigger.sh. Complete the following steps:
a. Set the system property mxe.int.webappurl in Tivoli's process automation
engine:
1) Click Go To > System Configuration > Platform Configuration >
System Properties.
2) Type mxe.int.webappurl in the Property Name filter field.
3) Replace localhost by specifying <hostname>:<port> of the Tivoli Service
Automation Manager Server hosting the SOAP service listener.
4) Click Save.
5) Select the global property mxe.int.webappurl again.
6) Click Live Refresh.
Now both the Global Value and the Current Value of the
mxe.int.webappurl property should contain the specified host name and
port.
b. Click Go To > System Configuration > Platform Configuration > Web
Services Library.
c. On the Select Action menu, select Create Web Service > Create WS from
Standard Service.
d. In the Create Web Service from a Standard Service Definition window, select
source name PMZHBPWO and click Create.
e. Select the PMZHBPWO Web service.
f. On the Select Action menu, select Deploy Web Service and click OK.
Integrating Tivoli Usage and Accounting Manager
Before you employ the usage and accounting function you need to configure both
Tivoli Service Automation Manager and Tivoli Usage and Accounting Manager to
work together.
Note: In Tivoli Service Automation Manager, a metering framework is used to
generate metering data of virtual server resources. The metering framework is also
the default method to generate the CSR files for integrating with Tivoli Usage and
Accounting Manager. For more details about the metering framework, see The
metering framework section in the Tivoli Service Automation Manager Extensions Guide.
CSR files
Tivoli Service Automation Manager provides service usage data to Tivoli Usage
and Accounting Manager by periodically generating an appropriate CSR (Common
Server Resource) file. This file contains information about virtual servers and
their capacity with respect to the number of CPUs and the amount of memory
assigned to the virtual server during the lifetime of the service using them.
The extraction of service usage data from the auditing tables is typically performed
once a day.
The CSR file contains the following list of identifiers and resources:
v Server_Hostname - host name of the server
v Server_Platform - for example, VMware, LPAR, Xen, KVM
v Deployment_Instance - the name of the deployment instance (project) the server
is allocated to
v Service_Definition - the name of the service definition that is used to create the
deployment instance
v Deployment_Owner - the user that requested the deployment instance
v Account_Code - the identifier consists of three values:
<CUSTOMER> unique customer short name
<PROJECTACCOUNT> the project account value that is specified in the Create
Team and Modify Team requests.
Note: Metering data is collected only for projects that have active customers.
<PERSONGROUP> unique team identifier
The CSR file contains five metrics for chargeback:
v SERVHRS - time (in hours) the server is allocated to a given deployment project
v CPUHRS - product of the time (in hours) and number of CPUs assigned to the
server
v MEMMBHRS - product of the time (in hours) and amount of memory (in MB)
assigned to the server
v STGGBHRS - product of the time (in hours) and amount of storage (in GB)
assigned as root disk to the server
v ADDSTGGBHRS - product of the time (in hours) and amount of storage (in GB)
assigned as Additional Disk to the server
The name of the generated CSR file is derived from the name of the service
definition, the revision of the definition and its timestamp:
<service_definition>_<revision>_<timestamp>.txt.
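For illustration, the naming scheme can be sketched as follows. The service definition name, revision, and timestamp shown are hypothetical example values, and the exact timestamp format is an assumption; the real file name uses the values from your installation at generation time.

```shell
# Sketch of the CSR file naming scheme <service_definition>_<revision>_<timestamp>.txt.
# All three values below are hypothetical examples.
service_definition="RDPVS"
revision="1"
timestamp="20120301120000"
csr_file="${service_definition}_${revision}_${timestamp}.txt"
echo "$csr_file"
```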
The CSR files remain on the Tivoli Service Automation Manager management
server after being retrieved by Tivoli Usage and Accounting Manager. The
transferred files must be archived or deleted by the administrator.
Configuring Tivoli Service Automation Manager for Tivoli Usage
and Accounting Manager
Configure Tivoli Service Automation Manager so that it generates appropriate CSR
files needed for Tivoli Usage and Accounting Manager.
Before you begin
You need to have Tivoli Service Automation Manager Administrator privileges to
perform this task.
Procedure
1. Configure Tivoli Service Automation Manager server to enable RXA
connections. See Configuring for RXA connections between Tivoli Service
Automation Manager and Tivoli Usage and Accounting Manager on page 268.
2. Enable metering and accounting in the administrative user interface:
a. Enable auditing of changes to service deployment instances. The generation
of the CSR file uses the same database configuration changes as the
reporting function. If reporting is already enabled, nothing else needs to
be configured; if not, refer to Enabling table auditing on page 288 and
perform the steps as described.
b. Ensure that a charge-back department is identified for users requesting
deployment instances. This is done by adding project account information
to the team definition of the user requesting the project. Once the project
account information is defined for the team, the user creating the project
must select the respective team name in the panel. See the User's Guide for
details.
c. Activate the recurring generation of metering and accounting data for Tivoli
Service Automation Manager. See Enabling CSR file generation on page
269.
d. Define the location of the CSR file. See Defining the directory for CSR file
generation on page 269.
Configuring for RXA connections between Tivoli Service Automation Manager
and Tivoli Usage and Accounting Manager:
Tivoli Usage and Accounting Manager uses RXA to connect to the Tivoli Service
Automation Manager server to retrieve the generated CSR files. This task
comprises the definition of the user ID and the pass key. If this task is omitted,
CSR file transfer to the Tivoli Usage and Accounting Manager server is not
possible.
Procedure
1. Log on as root on the Tivoli Service Automation Manager server and enter the
following commands:
cd .ssh
ssh-keygen -t rsa -f TSAM_id_rsa
cat TSAM_id_rsa.pub >>authorized_keys
scp TSAM_id_rsa.pub root@<tuam_server>:/root/.ssh/TUAM_id_rsa.pub
2. Log in as root on the Tivoli Usage and Accounting Manager server and enter
the following commands:
cd .ssh
cat TUAM_id_rsa.pub >>authorized_keys
3. Ensure that the file permissions are set to 700 for the .ssh directory and
to 600 for the authorized_keys file.
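The permission settings that OpenSSH expects can be applied as in the following sketch. SSH_DIR is a placeholder so the sketch is safe to try in any working directory; on the actual servers the directory is /root/.ssh.

```shell
# Sketch: apply the permissions OpenSSH requires for key-based login.
# SSH_DIR is a placeholder; on the real servers it is /root/.ssh.
SSH_DIR="${SSH_DIR:-./dot-ssh-demo}"
mkdir -p "$SSH_DIR"
touch "$SSH_DIR/authorized_keys"
chmod 700 "$SSH_DIR"                  # directory: owner-only access
chmod 600 "$SSH_DIR/authorized_keys"  # key file: owner read/write only
stat -c '%a %n' "$SSH_DIR" "$SSH_DIR/authorized_keys"
```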
Enabling table auditing for Tivoli Usage and Accounting Manager data
collection:
You need to enable auditing in tables to permit collection of data for the Tivoli
Service Automation Manager interface.
Note: You do not need to enable table auditing if you are using the metering
framework introduced with Tivoli Service Automation Manager version 7.2.2.1.
Before you begin
The TPAe platform allows for auditing of database changes. Tivoli Service
Automation Manager uses this feature in conjunction with the generation of
reports. The generation of the CSR file uses the same configuration changes to the
database as the reporting function (see Working with the service automation
reports on page 287). If reporting is already enabled, nothing else has to be
configured, if not, the same steps as for report configuration have to be performed.
Enabling CSR file generation:
You need to activate the escalation that enables generation of the CSR files for the
Tivoli Usage and Accounting Manager interface.
Before you begin
The PMZHBCRDPMD escalation is used to trigger the extraction of data. An
escalation defines a schedule, an object as the target, and a condition related to the
target object. The default schedule for the escalation is once a day at 1 a.m. to
extract usage and accounting data for the previous day.
The target of the escalation is RDPVS service definition. A service-specific action is
invoked that retrieves the data from the database and generates the respective CSR
file.
Procedure
1. In the administrative user interface, click Go To > System Configuration >
Platform Configuration > Escalations.
2. Enter the escalation name PMZHBCRDPMD.
3. From the Select Action menu, select Activate/Deactivate Escalation.
4. If you are using the metering framework introduced with Tivoli Service
Automation Manager 7.2.2.1, you might need to configure two additional
properties for the escalation:
pmzhb.rb.metering.enabled = {true,false}
When set to true, this property enables the escalation to generate the
metering records based on the new metering framework. When set to
false, the audit tables are used to generate the metering data. Default
value is true.
pmzhb.rb.metering.overwrite.startdate = {yyyyMMdd}
The property can be used to overwrite the start point of the meter
generation, where yyyy is the year, MM is the month and dd is the day of the
month. The escalation generates a file for each day starting from the date
specified, until the current day. The value must be reset manually;
otherwise, generation always starts from the specified date. This flag works only
for the new metering framework, that is, when pmzhb.rb.metering.enabled
= true.
To modify the properties, click Go To > System Configuration > Platform
Configuration > System Properties.
Defining the directory for CSR file generation:
This section describes the procedure for defining the directory in which the CSR
files for IBM Tivoli Usage and Accounting Manager are generated.
Before you begin
The directory in which CSR files will be generated can be defined by the customer
as a TPAE system property. The default directory is /var/IBM/TSAM/metering. A
trailing slash is optional.
Procedure
1. Ensure that the directory to be specified exists. Tivoli Service Automation
Manager does not create this directory (even the default directory)
automatically. If the directory does not exist at runtime, an error message is
issued.
2. Ensure that the directory has the necessary permissions. Change the ownership
(chown) of the directory (for example /var/IBM/TSAM/metering) to the uid
and gid of tioadmin.
3. From the Start Center of the administrative user interface, select Go To >
System Configuration > Platform Configuration > System Properties.
4. Enter the property name (pmzhb.csr.dir).
5. Open the property and set the value to the desired directory (or retain the
default value).
6. Save the changes.
7. Select the property from the list and select Live Refresh from the action bar.
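Steps 1 and 2 can be sketched as shell commands. A relative directory is used here so the sketch is safe to try anywhere; in production, use the directory configured in pmzhb.csr.dir (by default /var/IBM/TSAM/metering) and run the chown command as root.

```shell
# Sketch: create the CSR directory and give ownership to tioadmin.
# CSR_DIR is a stand-in; in production use the value of pmzhb.csr.dir.
CSR_DIR="${CSR_DIR:-./metering-demo}"
mkdir -p "$CSR_DIR"   # Tivoli Service Automation Manager does not create it for you
chown tioadmin:tioadmin "$CSR_DIR" 2>/dev/null \
  || echo "chown skipped (requires root and the tioadmin user)"
ls -ld "$CSR_DIR"
```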
Enabling logs for metering:
Optionally, if you want to capture the metering logs, enable logging. On the
Logging page, set the log level for PMZHB to Debug and apply the settings.
Procedure
Set the log level for PMZHB to debug:
1. In the Maximo start center, click Go To > System Configuration > Platform
Configuration > Logging.
2. Search for PMZHB and change the Log Level to Debug.
3. Select Apply Settings in Select Action list.
4. In the System Message about applying the selected settings, click OK.
Configuring Tivoli Usage and Accounting Manager to process
CSR files
After you have enabled metering in Tivoli Service Automation Manager, configure
Tivoli Usage and Accounting Manager server to retrieve and process the CSR file.
Before you begin
You need to have Tivoli Usage and Accounting Manager Administrator privileges
to perform this task.
Procedure
1. Define the account code structure:
a. In the Tivoli Integrated Portal, select Administration > Usage and
Accounting Manager > Account Code Structure. For more details on the
account code, see Project account and the account code structure on page
291.
2. Associate the account code structure with the Tivoli Service Automation
Manager user that is entitled to view the reports:
a. Click Identity and Access > Usage and Accounting > User Groups.
b. Select the appropriate user group in your environment.
c. Edit the group to add the newly defined account code structure.
3. Verify that the user is able to view reports:
a. Click Identity and Access > Usage and Accounting > Users.
b. Click the selected user and verify that the Common Reporting checkbox is
marked.
4. Define Tivoli Service Automation Manager rate group and rate codes:
a. In Tivoli Integrated Portal, select Financial > Usage and Accounting >
Rates.
b. Define rates and rate codes for the provided resources: SERVHRS,
CPUHRS, MEMMBHRS, STGGBHRS, and ADDSTGGBHRS. The descriptions specified
are the descriptions that you will see in the reports.
c. Create a rate group called TivSAM and associate the new rates with this
group.
5. Customize the sample job file to retrieve the CSR file. See Configuring the
Tivoli Usage and Accounting Manager job file to retrieve CSR files from Tivoli
Service Automation Manager for more information.
6. Customize the sample job file to process the retrieved CSR file. See
Configuring the Tivoli Usage and Accounting Manager job file to process CSR
files from Tivoli Service Automation Manager on page 272 for more
information.
Configuring the Tivoli Usage and Accounting Manager job file to retrieve CSR
files from Tivoli Service Automation Manager:
This section describes the procedure for configuring a job file to periodically
retrieve the CSR files from Tivoli Service Automation Manager.
Before you begin
IBM Tivoli Usage and Accounting Manager provides sample jobs to transfer CSR
files between servers. The following values are required to customize the sample
job file to transfer the Tivoli Service Automation Manager CSR file to the Tivoli
Usage and Accounting Manager server:
v Host name of the Tivoli Service Automation Manager management server
v Path to User ID/passkeys generated as part of RXA (see Configuring for RXA
connections between Tivoli Service Automation Manager and Tivoli Usage and
Accounting Manager on page 268
v Path to the CSR file on the Tivoli Service Automation Manager server as
described in Defining the directory for CSR file generation on page 269.
v Target location of the transferred CSR file on the Tivoli Usage and Accounting
Manager server
The sample must be adapted to:
v Read the transferred file.
v Process the file, that is, add an additional account code that corresponds to the
account code structure defined in Tivoli Usage and Accounting Manager.
v Load the processed data into the Tivoli Usage and Accounting Manager
database.
To transfer the CSR file from Tivoli Service Automation Manager management
server, run a job file on the Tivoli Usage and Accounting Manager server. The
sample job file SampleFileTransferSSH_withPW.xml can be used to create a job file
to transfer the file via SSH to a UNIX or Linux server. Job execution creates a log
file on the Tivoli Usage and Accounting Manager server in the log files
directory. This log file must be examined in case of errors.
The Tivoli Usage and Accounting Manager administrator starts the file transfer by
invoking the startJobRunner shell script or .bat file, depending on the platform of
the Tivoli Usage and Accounting Manager server, with the xml file and the
appropriate date specification. CSR files will not be deleted automatically on the
Tivoli Service Automation Manager server after retrieval.
Important: If you upgrade from earlier versions of Tivoli Service Automation
Manager, run this file transfer again. The additional disk metering is enabled in
7.2.4.4 and the CSR file format is updated. For the job file to understand the new
CSR format, you must run this file transfer again. In case you must initiate
metering for additional disks that were added in earlier versions of Tivoli Service
Automation Manager, see Metering for additional disks that are created during
earlier versions of Tivoli Service Automation Manager on page 276.
A customized job file might appear as follows:
<Process id="SSHFileTransfer" description="Transfer remote files to this machine via ssh"
joblogShowStepOutput="true"
joblogShowStepParameters="true"
active="true">
<Steps stopOnStepFailure="true">
<Step id="FileTransferViaSSH" description="Transfer remote files to this machine via ssh"
type="Process"
programName="FileTransfer"
programType="java"
active="true">
<Parameters>
<Parameter type="ssh"/>
<Parameter serverName="<management server host name>"/>
<Parameter userId="root"/>
<Parameter userPassword="userPassword"/>
<!-- If you generated the keys using a passphrase, the userPassword parameter
has to be set to the passphrase. If you left the passphrase empty,
it has to be set to a non-empty string -->
<Parameter KeyStoreFileName="/root/.ssh/TSAM_id_rsa"/>
<Parameter from="ssh:///var/IBM/TSAM/metering/RDP_%LogEnd_Date%.txt"
to="file://%CollectorLogs%/RDP_%LogEnd_Date%.txt"
action="Copy"
overwrite="true"/>
</Parameters>
</Step>
</Steps>
</Process>
Configuring the Tivoli Usage and Accounting Manager job file to process CSR
files from Tivoli Service Automation Manager:
This section describes the procedure for configuring a job file to process CSR files
received from Tivoli Service Automation Manager.
Before you begin
The Tivoli Usage and Accounting Manager administrator initiates the process by
invoking the startJobRunner shell script or .bat file, depending on the platform of
the Tivoli Usage and Accounting Manager server, with the xml file and the
appropriate date specification. CSR files will not be deleted automatically on the
Tivoli Service Automation Manager server after retrieval.
The account code structure is:
Table 33. Account code structure

Identifier            Length in characters   Description
Account_Code          40                     Consists of three values:
                                             <CUSTOMER> (12) - unique customer short name
                                             <PROJECTACCOUNT> (20) - the project account value
                                             that is specified in the Create Team and Modify
                                             Team requests
                                             <PERSONGROUP> (8) - unique team identifier
Deployment_Owner      30                     The user that requested the project.
Service_Definition    4                      First four characters of the service definition
                                             name, used to identify the service in a report.
Deployment_Instance   30                     The name of the project the server is associated
                                             to.
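The 40-character account code is a concatenation of the three fixed-width fields above. The following sketch uses hypothetical values; padding the fields with blanks is an assumption about how the fixed widths are filled.

```shell
# Sketch: assemble the 40-character Account_Code from fixed-width fields.
# <CUSTOMER> (12) + <PROJECTACCOUNT> (20) + <PERSONGROUP> (8); values are hypothetical.
customer="ACME"
projectaccount="DEPT4711"
persongroup="TEAM1"
account_code=$(printf '%-12s%-20s%-8s' "$customer" "$projectaccount" "$persongroup")
printf '[%s]\n' "$account_code"
echo "length: ${#account_code}"
```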
The Tivoli Usage and Accounting Manager Integrator is used to generate an
account code based on the values of the identifiers in the CSR file. The following
illustrates the job file. Invocation of this file is similar to the file transfer job file.
Job execution creates a log file on the Tivoli Usage and Accounting Manager server
in the log files directory. This file must be examined if errors occur.
Sample XML file for processing Tivoli Service Automation Manager CSR files
on Tivoli Usage and Accounting Manager:
<Jobs xmlns="http://www.ibm.com/TUAMJobs.xsd">
  <Job id="TSAM" description="Daily collection" active="true" joblogShowStepParameters="true" joblogShowStepOutput="true" processPriorityClass="Low" smtpSendJobLog="false" smtpServer="mail.YourCo
    <Process id="TSAM" description="Process TSAM Usage and Configuration CSR file" active="true">
      <Steps stopOnStepFailure="true">
        <Step id="Integrator_Account_Code_Generation" description="Add Account Code Information" type="Process" programName="integrator" programType="java" active="true">
          <Integrator>
            <Input name="CSRInput" active="true">
              <Files>
                <File name="%LogDate_End%.txt" />
              </Files>
            </Input>
            <Stage name="CreateIdentifierFromIdentifiers" active="true">
              <Identifiers>
                <Identifier name="Account_Code">
                  <FromIdentifiers>
                    <FromIdentifier name="Chargeback_Department" offset="1" length="15" />
                    <FromIdentifier name="Deployment_Owner" offset="1" length="30" />
                    <FromIdentifier name="Service_Definition" offset="1" length="5" />
                    <FromIdentifier name="Deployment_Instance" offset="1" length="30" />
                  </FromIdentifiers>
                </Identifier>
              </Identifiers>
              <Parameters>
                <Parameter keepLength="true" />
                <Parameter modifyIfExists="true" />
              </Parameters>
            </Stage>
            <Stage name="RenameResourceFromIdentifier" active="true">
              <Identifiers>
                <Identifier name="Server_Platform" />
              </Identifiers>
              <Resources>
                <Resource name="CPUHRS" />
              </Resources>
              <Parameters>
                <Parameter dropIdentifier="false" />
                <Parameter renameType="prefix" />
              </Parameters>
            </Stage>
            <Stage name="RenameResourceFromIdentifier" active="true">
              <Identifiers>
                <Identifier name="Server_Platform" />
              </Identifiers>
              <Resources>
                <Resource name="SERVHRS" />
              </Resources>
              <Parameters>
                <Parameter dropIdentifier="false" />
                <Parameter renameType="prefix" />
              </Parameters>
            </Stage>
            <Stage name="RenameResourceFromIdentifier" active="true">
              <Identifiers>
                <Identifier name="Server_Platform" />
              </Identifiers>
              <Resources>
                <Resource name="MEMMBHRS" />
              </Resources>
              <Parameters>
                <Parameter dropIdentifier="false" />
                <Parameter renameType="prefix" />
              </Parameters>
            </Stage>
            <Stage name="RenameResourceFromIdentifier" active="true">
              <Identifiers>
                <Identifier name="Server_Platform" />
              </Identifiers>
              <Resources>
                <Resource name="STGGBHRS" />
              </Resources>
              <Parameters>
                <Parameter dropIdentifier="false" />
                <Parameter renameType="prefix" />
              </Parameters>
            </Stage>
            <Stage name="RenameResourceFromIdentifier" active="true">
              <Identifiers>
                <Identifier name="Server_Platform" />
              </Identifiers>
              <Resources>
                <Resource name="ADDSTGGBHRS" />
              </Resources>
              <Parameters>
                <Parameter dropIdentifier="false" />
                <Parameter renameType="prefix" />
              </Parameters>
            </Stage>
            <!-- Rename since limit of 8 characters applies to rate codes -->
            <Stage name="RenameFields" active="true">
              <Fields>
                <Field name="LPARCPUHRS" newName="PCPUHR" />
                <Field name="VMWARECPUHRS" newName="XCPUHR" />
                <Field name="KVMCPUHRS" newName="KCPUHR" />
                <Field name="LPARSERVHRS" newName="PSERVHR" />
                <Field name="VMWARESERVHRS" newName="XSERVHR" />
                <Field name="KVMSERVHRS" newName="KSERVHR" />
                <Field name="LPARMEMMBHRS" newName="PMEMMBHR" />
                <Field name="VMWAREMEMMBHRS" newName="XMEMMBHR" />
                <Field name="KVMMEMMBHRS" newName="KMEMMBHR" />
                <Field name="LPARSTGGBHRS" newName="PSTGGBHR" />
                <Field name="VMWARESTGGBHRS" newName="XSTGGBHR" />
                <Field name="KVMSTGGBHRS" newName="KSTGGBHR" />
                <Field name="LPARADDSTGGBHRS" newName="PASTGBHR" />
                <Field name="VMWAREADDSTGGBHRS" newName="XASTGBHR" />
                <Field name="KVMADDSTGGBHRS" newName="KASTGBHR" />
              </Fields>
            </Stage>
            <Stage name="CSROutput" active="true">
              <Files>
                <File name="%ProcessFolder%/AcctCSR.txt" />
              </Files>
            </Stage>
          </Integrator>
        </Step>
        <Step id="Process" description="Standard Processing for TSAM" type="Process" programName="Bill" programType="java" active="true">
          <Bill>
            <Parameters />
          </Bill>
        </Step>
        <Step id="DatabaseLoad" description="Database Load for TSAM" type="Process" programName="DBLoad" programType="java" active="true">
          <DBLoad>
            <Parameters />
          </DBLoad>
        </Step>
        <Step id="Cleanup" description="Cleanup TSAM CSR Processing" type="Process" programName="Cleanup" programType="java" active="false">
          <Parameters>
            <Parameter DaysToRetainFiles="45" />
            <Parameter cleanSubfolders="true" />
          </Parameters>
        </Step>
      </Steps>
    </Process>
  </Job>
</Jobs>
Metering for additional disks
Metering for additional disks is included in Tivoli Service Automation Manager 7.2.4.4. You do not have to perform any separate steps to enable metering for additional disks; the general steps that enable metering also enable it for additional disks. Metering is also extended to additional disks that were created in earlier versions of Tivoli Service Automation Manager.
Metering for additional disks that were created in earlier versions of Tivoli Service Automation Manager:
To use the metering feature for additional disks that were created in earlier versions of Tivoli Service Automation Manager, run an escalation after you upgrade to Tivoli Service Automation Manager 7.2.4.4. The escalation is a one-time activity that enables the collection of additional disk data into the metering table.
Procedure
1. In the Maximo start center, click Go To > System Configuration > Platform Configuration > Escalations.
2. In the Description field, search for the PMZHBMETEXADDDSK escalation. For example, enter Meter in Description and press Enter.
3. Go to the Escalations tab.
4. In the Select Action list, select Activate/Deactivate Escalation.
Remember: This is a one-time activity.
Attention: Before you activate this escalation, verify that you have enabled metering.
Integrating with Tivoli Change and Configuration Management
Database (CCMDB)
You can integrate Tivoli Service Automation Manager processing with change and
configuration management to apply governance of these processes that is aligned
with Information Technology Infrastructure Library (ITIL).
About this task
Tivoli Service Automation Manager operations make changes to the managed data center, such as deploying new virtual servers or installing software. You can integrate this processing with ITIL-aligned governance processes such as change management or configuration management. In this way, you can apply the ITIL-aligned governance that is implemented in these processes, for example the assessment or scheduling of changes made by Tivoli Service Automation Manager, while still using the benefits of Tivoli Service Automation Manager for automating management actions.
The integration is recommended for environments that already use ITIL-aligned processes implemented with Tivoli Change and Configuration Management Database.
Configuration artifacts for integrating with Tivoli Change and
Configuration Management Database
Learn about configuration artifacts shipped with Change and Configuration
Management Database Integration Process Management Product that are used to
integrate Tivoli Service Automation Manager operations with the change
management process.
Create Standard Change from SR Action (PMZHBCCSTCA action)
This action is used to define ticket templates, or, if the service request is processed
with a workflow, it serves as a Tivoli Process Automation engine workflow action,
configured on the edge of the workflow.
PMZHBCCSTCA is an action that creates a request for change (RFC) from a service
request, then associates that change with the Tivoli Service Automation Manager
service instance and management plan so that the management plan is processed
in context of the change management process.
Create Standard Change from SR (PMZHBCCSTC job plan)
PMZHBCCSTC is a pre-configured activity job plan used to define ticket templates. It
starts the PMZHBCCSTCA action.
TSAM Standard Change (PMZHBCSCHG ticket template)
Ticket templates are a new concept introduced with Service Request Manager 7.2.
They provide users with default ways of processing service requests and tickets.
They are also used to define Service Request Manager offerings. This means that after
a user selects an offering and a service request is created, a specific ticket template
starts to process the request in a default way.
PMZHBCSCHG is a pre-configured ticket template that can be assigned to a service
request or used for a Service Request Manager offering. The template defines how
a request for change (RFC) is created from that service request.
TSAM Standard Change (PMZHBCSCHG change template)
Change templates are a new concept introduced with the Change Management
Process Management Product of CCMDB. A change template is a part of a job
plan, and is used to predefine a list of reviews and approvals that are required for
changes that correspond to this specific change template.
Within Tivoli Service Automation Manager, it is possible to define template job
plans as a part of single management plans of a service definition. You can use
such a template job plan to point to a change template, which in turn associates a
list of required reviews and approvals with the management plan. When the
management plan is processed during change management, the necessary review
and approval steps are triggered automatically.
Tivoli Service Automation Manager includes a sample PMZHBCSCHG change template
that can be used as the basis for defining new templates.
A change template assigned to a Tivoli Service Automation Manager management
plan is an empty job plan with the change template-specific parts filled out, but
with no tasks. It defines properties specific to the change process, and can be
found in the job plans application.
Note: You can only customize new change templates when the Default WO Class
property is set to CHANGE. Then the Change Template tab is displayed, and you can
enter information necessary for the management plan to which a change template
is to be assigned.
Start SR Activity (PMZHBSSRACT escalation)
In a typical situation, after a service request is created and associated with a ticket
template, the service desk staff manually starts the process for creating a request
for change. This process can also be executed automatically by an escalation, which
is part of the CCMDB Integration Process Management Product.
PMZHBSSRACT is a default escalation that can be used to execute service requests
processing. The execution process depends on the service request classification that
is encoded in the Condition WHERE clause of the escalation. The condition must
be adapted to the type of the service request to be processed.
Note: Ensure that the escalation is active in the system.
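For example, a condition of the following shape restricts the escalation to service requests with a particular classification. This is an illustrative sketch only: the classification ID is a placeholder, and the table and column names follow common Tivoli process automation engine conventions; adapt all of them to your environment.

```sql
-- Illustrative only: process new service requests of one classification.
-- 'MYOFFERINGCLASS' is a placeholder classification ID.
status = 'NEW'
and classstructureid in (
    select classstructureid
    from classstructure
    where classificationid = 'MYOFFERINGCLASS'
)
```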
TSAM Asynchronous Request Correlation Escalation (PMZHBCREQCORA
escalation)
The Tivoli Service Automation Manager management plan preparation workflows
can be executed as part of the change process. In such a situation, the actual
change process does not start until the Tivoli Service Automation Manager
management plan preparation workflows have completed. The change process is
then resumed asynchronously with the PMZHBCREQCORA escalation.
Note: Ensure that the escalation is active in the system.
Configuring workflow integration
This section describes how you can integrate Tivoli Service Automation Manager
processing with Change Management workflows of the Change and Configuration
Management Database.
About this task
The execution of Tivoli Service Automation Manager management plan preparation
workflows can be included in the change management workflows as subprocesses
in order to achieve an end-to-end integrated flow. The workflows are run when
you start the management plans in order to perform preparations that are specific
to Tivoli Service Automation Manager.
Workflow integration is not configured automatically when you install the Change and Configuration Management Database integration Process Management Product as part of the Tivoli Service Automation Manager installation, because automatic configuration could overwrite any customizations that might have been made to the change workflows. Therefore, you must perform the configuration manually.
It is recommended to include the execution of the Tivoli Service Automation
Manager management plan preparation workflows in the Accept and Categorize
(PMCHGACCAT) subprocess of the IT Infrastructure Library change process
(PMCHGITIL). The results calculated by Tivoli Service Automation Manager are then
available in the next phases of the change process, such as the assessment phase.
The following procedure assumes that the Change Management content Process
Management Product is installed and that workflows contained in that Process
Management Product exist with default names.
Procedure
1. Click Go To > System Configuration > Platform Configuration > Workflow
designer.
2. Deactivate the PMCHGITIL change workflow to make sure that any other
modifications are active later.
3. Open the Accept and Categorize workflow (PMCHGACCAT) and use the
toolbar to create a new revision.
4. In the new revision of the PMCHGACCAT workflow, delete the last edge that goes
from VALIDATE to STOP2 (this edge does not contain any action).
5. Insert a new subprocess node between VALIDATE and STOP2 and connect it
to them using a positive input edge and a positive and negative output edge.
6. Name that subprocess node, for example TSAMPREP. If the required
information is available at run time, the node starts the PMZHBCPRCH subprocess
that starts the Tivoli Service Automation Manager management plan
preparation workflow.
7. Enable and activate the modified PMCHGACCAT workflow.
8. Reactivate the PMCHGITIL overall change process.
Results
The subprocess started by the subprocess node is the PMZHBCPRCH workflow, which
is shipped with the Change and Configuration Management Database integration
Process Management Product. At run time, this workflow checks whether all
information that Tivoli Service Automation Manager requires to start its
management plan preparation workflow is available for the current change. If the
information is available, it starts the workflow. Make sure that the PMZHBCPRCH
workflow is enabled and active.
Note: For the integration of Tivoli Service Automation Manager and the Change
Process, a modification has also been made to the default management plan
preparation workflows PMZHBSIWWA (Service Instantiation Workflow) and
PMZHBSIMOD (Service Modification Workflow). Make sure that the latest revision of
those workflows is enabled and activated.
Configuring a Service Request Manager offering for processing
Perform this task to integrate the Tivoli Service Automation Manager processing
with an ITIL-aligned governance.
About this task
Associate a ticket template to an offering to use the default behavior for opening a
request for change (RFC) and associating it to a Tivoli Service Automation
Manager management plan whenever you select the offering and create a service
request.
Procedure
1. Define a new offering and select Service Request in the Offering Type field.
2. Specify the name of the ticket template in the Ticket Template attribute.
Default ticket template name is PMZHBCSCHG.
Some specification attributes must be set for the offering to allow you to create
the RFC and associate the information required by this RFC. You must define
those attributes as part of classification PMZHBCSCOFF, which can be used as base
for deriving custom subclassifications that can then inherit these attributes.
3. Select the Specifications tab. In the Service Definition ID and Service
Definition Revision fields, specify the attributes that point to the service
definition to which the offering refers.
4. In the Management Plan ID field, specify the attribute that points to the
management plan that is triggered when you submit a request for the offering.
Note: These three attributes are normally hidden from end users and the
values that are defined are fixed values for the offering configuration.
Specify the Service Instance ID attribute only when you modify an existing
service instance. The attribute must be set during the request time, for example
by using the lookup mechanism in the offering dialog.
The two remaining attributes, Service Instance Name and Service Instance
Description are required for requests used to create new service instances.
These attributes are typically specified by the requester of the offering.
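In summary, an offering configured for this integration uses the following specification attributes, which come from classification PMZHBCSCOFF or a custom subclassification derived from it:

```text
Service Definition ID          fixed value; points to the service definition
Service Definition Revision    fixed value; revision of the service definition
Management Plan ID             fixed value; management plan triggered on submit
Service Instance ID            set at request time; only when modifying an existing instance
Service Instance Name          entered by the requester; for new service instances
Service Instance Description   entered by the requester; for new service instances
```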
Extensions to the Change Management application
The Change and Configuration Management Database integration Process Management Product introduces several extensions to the Change Management application in the Tivoli Service Automation Manager administrative user interface.
When a change is associated with a Tivoli Service Automation Manager service
instance and management plan, a new section called Related Service Automation
Items is displayed in the Changes application on the Related Records tab. This
section includes the name and description of the service instance associated with
the current change and the management plan that is run in the context of that
change. It also includes a link you can use to open the Service Deployment
Instances application.
After accepting a request for change (RFC) and routing the workflow by clicking
the Change button on the toolbar, the Tivoli Service Automation Manager
management plan preparation workflow is run for the associated management
plan. Depending on the service definition and the management plan, this
preparation workflow can involve manual tasks.
At the end of the workflow, the standard change process continues using all the
information generated during the Tivoli Service Automation Manager management
plan preparation workflow.
You can switch to the Schedule tab to view and schedule all tasks that are
generated based on the associated management plan template.
Switch to the Workplan Map tab to view a graphical representation of these tasks
and their dependencies.
Based on the service topology information contained in the associated service
deployment instance, Tivoli Service Automation Manager can also calculate the
configuration items (CIs) affected by the change. A graphical representation of this
information is displayed on the Impact tab.
Tivoli Change and Configuration Management Database
Configuration Items
This section describes the role of configuration items that are stored in CCMDB and that reflect changes made to the managed IT infrastructure during the execution of Tivoli Service Automation Manager management plans.
You can use the CCMDB Integration Process Management Product of Tivoli Service
Automation Manager to define additional tasks that automatically update the
configuration items during execution of management plans. The Service Topology
Node Operations application contains templates that can be used for this purpose.
These tasks update the authorized configuration items, not the actual configuration items, which are updated automatically during discovery. The procedure for creating authorized configuration items and performing other operations from the Service Topology Node Operations application is optional, which means that already existing management plans can be executed without any updates to the configuration items.
The CCMDB Integration Process Management Product of Tivoli Service
Automation Manager provides three template operations that can be used to define
management plan tasks for creating and updating configuration items and the
relationships between those items. Two of these are ready-to-use operations, and
one is a real template that must be customized. Do not use it as is. Instead, derive
the custom operations you need. Detailed descriptions of all three operations and
examples of how to use them are provided in the following sections.
The CreateOrUpdateAuthorizedCI operation:
Use this operation to create a new authorized configuration item (CI) using a task
in a management plan or to perform updates to an already existing configuration
item.
The CreateOrUpdateAuthorizedCI operation is a template operation. Do not use it
without any modifications. It only defines a generic set of input parameters that
are required for all configuration item types; it does not define parameters specific
for a configuration item type. Use this operation to derive custom operations for
specific configuration item types.
Parameters
This operation defines four input parameters that are required for all configuration
item types and that must be present in any derived operations. The parameters are:
CINUM
This parameter is the identifier of the configuration item that is to be created
or updated. If an authorized configuration item with the same identifier exists
in Change and Configuration Management Database, this operation performs
an update to the attributes or the status of that configuration item. If no such
configuration item is found, the operation creates a new one. A unique ID is
then generated by appending a number to the identifier passed via the CINUM
parameter. The resulting identifier is returned as the output
parameter CINUM of the operation. For example, if a new configuration item is
created and a value "MYCI" is provided as input, a possible resulting identifier
is "MYCI~2001".
ClassificationID
This parameter is the ID of the classification (configuration item type) used for
the new configuration item. You can find the configuration item classifications
in the Classifications application. An example of a valid configuration item
classification is APP.SOFTWAREINSTALLATION.
Description
This parameter is a description to assign to the configuration item in case of
creating a new one.
Status
This parameter is the status that is set for the new configuration item. This
status is set both for new configuration items and for the existing ones. You
can use this operation to update the status of existing configuration items.
Note: The internal value for configuration item status must be provided as input
value for that parameter, and not the potentially translated, external value. You can
check the internal configuration item status value by checking the domain
CISTATUS in the Domains application. By default, this domain contains the
following values: NOT READY, OPERATING, and DECOMMISSIONED.
Custom operations derived from the base operation must also define the input
parameters specific to the configuration item type according to the following
naming conventions:
CIATTR_...
Input parameters that start with the prefix CIATTR_ are used to set primary
attributes of the affected configuration item, that is, the attributes of the
configuration item object, not the specification attributes.
CISPEC_...
Input parameters that start with the prefix CISPEC_ can be used for setting
specification attributes of the affected configuration item. You can check the set
of available specification attributes for a specific configuration item type
(classification) by looking up the respective configuration item classification in
the Classifications application.
The operation also defines one output parameter, CINUM, which contains the
final generated configuration item identifier for newly created configuration items
(see the description of the CINUM input parameter). Store this output parameter in
an attribute of the service topology node associated with the configuration item so
that it can be found later. For example, if a service topology node is associated
with a configuration item, attribute PRIMARY_CI can be defined for that node. The
identifier of the configuration item created with the CreateOrUpdateAuthorizedCI
operation (the output parameter CINUM) can then be mapped to that PRIMARY_CI
attribute.
See "Deriving an operation specific to a configuration item" for an example of deriving an operation specific to a configuration item.
The LinkAuthorizedCI operation:
Use this operation to create a relationship between two existing configuration items
(CIs).
You can use the LinkAuthorizedCI operation as is. It is not necessary to derive a
custom operation from it because no special input parameters are required for
different kinds of configuration item relationships.
Parameters
The following input parameters are defined by the LinkAuthorizedCI operation:
SourceCI
The identifier of the source configuration item of the relationship to be created.
TargetCI
The identifier of the target configuration item of the relationship to be created.
Relationship
The name or type of the relationship to be created.
When using this operation in a management plan, make sure that the relationship
type that is created between the two configuration items is valid. You can find the
valid relationship types for specific types of source and target configuration items
in the Relationships application. Open it by selecting Go to > IT Infrastructure.
For example, a valid relationship for a source configuration item of the Software
Installation type and a target configuration item of the Operating System type is
InstalledOn.
See "Using the configuration item operations" for an example procedure of using the configuration item operations.
The UnlinkAuthorizedCI operation:
Use this operation to delete an existing relationship between two configuration
items (CIs).
You can use the UnlinkAuthorizedCI operation as is. It is not necessary to derive a
custom operation from it because no special input parameters are required for
different kinds of configuration item relationships.
Parameters
The following input parameters are defined by the UnlinkAuthorizedCI operation:
SourceCI
The identifier of the source configuration item of the relationship to be deleted.
TargetCI
The identifier of the target configuration item of the relationship to be deleted.
Relationship
The name or type of the relationship to be deleted.
Note: If the relationship does not exist at run time, the operation completes
without performing any action. No errors are reported in such case.
Deriving an operation specific to a configuration item:
The CreateOrUpdateAuthorizedCI operation is a generic template and you can
derive custom operations for creating or updating configuration items of specific
types from it.
About this task
This example task describes how to create an operation for creating or updating a
software installation configuration item (classification APP.SOFTWAREINSTALLATION).
By performing this operation a number of attributes can be set or updated. The
attributes are:
v Install Location (SOFTWAREINSTALLATION_INSTALLEDLOCATION)
v Manufacturer Name (SOFTWAREINSTALLATION_MANUFACTURERNAME)
v Product Name (SOFTWAREINSTALLATION_PRODUCTNAME)
v Release (SOFTWAREINSTALLATION_RELEASE)
v Version String (SOFTWAREINSTALLATION_VERSIONSTRING)
To define such an operation follow the steps below:
Procedure
1. Open the original CreateOrUpdateAuthorizedCI operation in the Service
Topology Node Operations application and create a duplicate of the operation
by using the Action menu.
2. Enter a new Operation ID and Owner ID in the respective fields.
3. Define the new input parameters specific for the configuration item type in
order to set and update the specification attributes.
Remember: Define the input parameter names for setting the configuration
item specification attributes using the prefix CISPEC_ followed by the name of
the specification attribute. For example, the input parameter for setting a
specification attribute SOFTWAREINSTALLATION_PRODUCTNAME, is
CISPEC_SOFTWAREINSTALLATION_PRODUCTNAME.
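Putting the convention together, the derived operation would expose a parameter set like the following. The operation name is hypothetical; the inherited parameters and the CISPEC_ names follow from the descriptions above:

```text
Operation ID: CreateOrUpdateSoftwareInstallationCI   (hypothetical name)

Input parameters (inherited from CreateOrUpdateAuthorizedCI):
  CINUM, ClassificationID, Description, Status

Input parameters (specific to APP.SOFTWAREINSTALLATION):
  CISPEC_SOFTWAREINSTALLATION_INSTALLEDLOCATION
  CISPEC_SOFTWAREINSTALLATION_MANUFACTURERNAME
  CISPEC_SOFTWAREINSTALLATION_PRODUCTNAME
  CISPEC_SOFTWAREINSTALLATION_RELEASE
  CISPEC_SOFTWAREINSTALLATION_VERSIONSTRING

Output parameter:
  CINUM   (final generated configuration item identifier)
```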
Using the configuration item operations:
In this example, a software installation configuration item for a WebSphere
Deployment Manager node that gets installed on a server is created using both the
LinkAuthorizedCI operation and the CreateOrUpdateAuthorizedCI operation.
About this task
This software installation configuration item is then linked to the computer system
configuration item on which it is installed. To perform this task, you must insert
two tasks into the management plan after the task that actually installs the
WebSphere Deployment Manager component. One of these tasks is required for
creating the software installation configuration item
(CreateOrUpdateSoftwareInstallationCI derived from
CreateOrUpdateAuthorizedCI), and the other for linking it to the computer system
configuration item (LinkAuthorizedCI).
Procedure
1. Open the Edit Management Task window to view the management task
details. The input parameter mappings are displayed in this window.
2. Set the ClassificationID parameter to the constant value
APP.SOFTWAREINSTALLATION.
3. Use the PMZHB_WAS_NODE_NAME attribute of the associated topology node to set
the CINUM input parameter.
4. Map the CINUM output parameter to the PMZHB_PRIMARY_CI attribute. The final
identifier of the created configuration item is then stored in that node attribute
for later reference.
Note: The attributes used in steps 3 and 4 are examples created for the purpose
of this documentation.
5. Define the rest of the input parameters (such as release, version string, or
manufacturer name) using constant values. In this example, the
LinkAuthorizedCI task establishes an INSTALON relationship between the newly
created software installation configuration item and the computer system
configuration item on which the WebSphere Deployment Manager is installed.
6. Open the Edit Management Task window to view the management task
details. The parameter mappings are displayed in this window.
7. Set the constant value INSTALON as the Relationship parameter. This value is
the name of a valid relationship that can be created between a software
installation configuration item and a computer system configuration item.
Note: This is just an example. Always check which relationship types are
valid for the system on which a specific service definition is used with a task
definition.
8. Use the value stored in the PMZHB_PRIMARY_CI attribute of the deployment
manager node to define the SourceCI parameter.
9. Map the TargetCI parameter to the identifier of the configuration item that
represents the computer system on which the deployment manager is installed.
Configuring security against XSS
As a protection against XSS, external URL references, and SQL injection, you cannot enter certain special characters in fields. The default allowed set is "^a-zA-Z0-9\s\/\.\\@~#$&+\"\'!_-". However, you can configure which special characters are allowed for a field. Characters outside the defined list result in an error message from the server.
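The effect of this validation can be illustrated with the default pattern. The following sketch re-creates the check in Python for illustration only; the actual validation is performed server side by the configured field class:

```python
import re

# The default pattern matches any character that is NOT in the allowed set
# (letters, digits, whitespace, and the listed special characters).
DISALLOWED = re.compile(r"[^a-zA-Z0-9\s/.\\@~#$&+\"'!,_-]")

def is_valid(value: str) -> bool:
    """Return True if the value contains only allowed characters."""
    return DISALLOWED.search(value) is None

print(is_valid("My server_1"))                # True
print(is_valid("<script>alert(1)</script>"))  # False: '<', '>', '(' are rejected
```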
Procedure
1. Click Go To > System Configuration > Platform Configuration > Database Configuration.
2. From the Select Action list, select Manage Admin Mode.
3. In Turn Admin Mode ON, turn Admin mode on and click OK in the system message.
4. In Database Configuration, go back to the List tab and filter for the SR object.
5. On the Attributes tab, filter for DESCRIPTION.
6. Select DESCRIPTION and expand the row. The details of DESCRIPTION are displayed.
7. In the Class field, enter com.ibm.ism.pmzhb.sc.app.FldTicketDescriptionValidation.
8. Save the record and go back to the List tab.
9. Verify that the status of the SR object is To Be Changed.
10. From the Select Action list, select Apply Configuration Changes.
11. In Non-Structural Database Configuration, click Start Configuring the Database.
12. After the database is successfully configured, from the Select Action list, select Manage Admin Mode.
13. In Turn Admin Mode OFF, click Turn Admin Mode OFF.
14. Optional: In System Properties, you can set the system property pmrdp.regex to manually specify the regular expression that defines the allowed characters. Example: [^a-zA-Z0-9\s\/!,_\\-]. The default value that is used when this property is not set is [^a-zA-Z0-9\s\/\.\\@~#$&+\"\'!,_-].
a. Open the Maximo UI and Go To > System Configuration > Platform
Configuration > System Properties.
b. Click New Row for Global Properties and enter the following details:
v Property Name - pmrdp.regex
v Description - this property defines the regular expression that is used to
validate input.
v Global Value - for example, [^a-zA-Z0-9\s\/\.\\@~#$&+\"\'!,_-].
v Select the following options - Global Only, Online Changes Allowed,
Live Refresh.
c. Click Save.
d. Search for pmrdp.regex in the property name filter.
e. Select the property and click Live Refresh in the Application panel.
f. Click OK.
15. You can also set the property pmrdp.regex.assetattributes to manually
specify a comma-separated list of attributes on which this regex validation
is enforced. Example: PMRDPCLCVS_PROJECTNAME,PMRDPCLCVS_DESCRIPTION, and
other attributes. By default, the validation applies to all attributes in
Tivoli Service Automation Manager that have the prefix "PMRDP".
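As a quick illustration of how the pmrdp.regex pattern works (the negated character class matches any disallowed character, so a match means the input is invalid), the following sketch applies the default value with Python's re module. The actual validation runs server-side in the Java class configured in step 7; the helper function names here are illustrative, not part of the product.

```python
import re

# Default value of pmrdp.regex from this guide. The negated character
# class matches any character that is NOT in the allowed set, so a
# match means the input is invalid.
DEFAULT_REGEX = r"[^a-zA-Z0-9\s/\.\\@~#$&+\"\'!,_-]"

def is_valid_description(text, pattern=DEFAULT_REGEX):
    """Illustrative helper: True if no disallowed character is found."""
    return re.search(pattern, text) is None

def validated_attributes(prop_value, all_attrs):
    """pmrdp.regex.assetattributes is a comma-separated list; when it
    is not set, validation applies to every attribute with the
    PMRDP prefix."""
    if prop_value:
        return [a.strip() for a in prop_value.split(",")]
    return [a for a in all_attrs if a.startswith("PMRDP")]

print(is_valid_description("Web server #2, dev/test"))  # True
print(is_valid_description("rm -rf <dir>;"))            # False
```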
Chapter 4. Administering Tivoli Service Automation Manager
This chapter discusses various administrative tasks for Tivoli Service
Automation Manager. These tasks can apply on a recurring basis, possibly also
as initial configuration steps after installation.
Logging on to the Tivoli Service Automation Manager administrative
interface
The administrative user interface is used to modify the configuration of the Tivoli
Service Automation Manager components.
Procedure
1. In a supported browser, open the following URL:
https://<management_server_hostname>:9443/maximo
After IBM HTTP server is configured, the administrative interface is also
accessible using this address:
http://<management_server_hostname>/maximo
2. Log on with the credentials that you have been assigned.
Default administrative credentials:
Username: maxadmin
Password: maxadmin
Related tasks:
Configuring IBM HTTP Server on page 103
Configure the IBM HTTP Server after installing Tivoli Service Automation Manager
and its components.
Working with the service automation reports
This section describes the procedures for generating and viewing reports within
Tivoli Service Automation Manager.
About this task
Note: Reports can be produced only if the appropriate data is collected by the
system, and they can be accessed only if the user has been granted such
authorization. See Configuring the reporting function on page 288.
Configuring the reporting function
Before users can view the Tivoli Service Automation Manager reports, the
administrator must configure them and authorize users to view them.
Generating request pages
To use the Tivoli Service Automation Manager reports, you must generate request
pages once. Users specify selection parameters using the request pages.
Procedure
1. Click Go To > Administration > Reporting > Report Administration.
2. In the Application field, enter PMZHB and press Enter to list all Tivoli Service
Automation Manager reports.
3. Click Generate Request Pages on the right side at the bottom of the table. The
process might take several minutes.
Enabling table auditing
You need to activate the auditing of database tables. The data collected by the
audit is used to produce reports within Tivoli Service Automation Manager that
show the service infrastructure or the progression of services over time.
About this task
Although other setup operations are performed automatically by Tivoli Service
Automation Manager, this task must be performed manually after installation.
All Tivoli Service Automation Manager reports with "History" in the name require
auditing for two tables. If you are going to use these reports, you must
configure audit tables. If you are not going to use these reports, you do not
need to configure table auditing. In this case, delete these reports from the
system so that they cannot be accessed. It is possible to re-import these reports from the
management server at a later time if required.
Procedure
1. Click Go To > System Configuration > Platform Configuration > Database
Configuration.
2. In the Object field, enter PMZHBWTN to find the Service Topology Node object
and open the Objects tab.
3. Select the Audit Enabled? check box.
a. Open the Attributes tab.
b. For each of the fields listed below, enable auditing by opening the property
with the twistie and selecting the option Audit Enabled?:
v CLASSSTRUCTUREID
v INSTANCE_STATE
v IS_TEMPLATE
v NAME
v PARENT_NODE_ID
v RESOURCE_ALLOCATION_RECORD_ID
v TOPOLOGY_ID
c. Save your changes.
d. Return to the List tab.
4. In the Object field, enter PMZHBWTNSPEC and select the Audit Enabled? check
box.
a. Open the Attributes tab.
b. For each of the fields below, enable auditing by opening the property
with the twistie and selecting the option Audit Enabled?:
v ALNVALUE
v ASSETATTRID
v NUMVALUE
v REFOBJECTID
c. Save the changes.
d. Return to the List tab.
5. In the Select Actions menu, select Apply Configuration Changes and click
OK.
Note: Changes must be applied in admin mode if there is already data present
in the PMZHBWTN table.
Authorizing users to access reports
The maxadmin user can authorize other users in a group to generate and access
reports.
Procedure
1. Log on to the administrative interface as user maxadmin.
2. Click Go To > Security > Security Groups.
3. Select the user group to be authorized.
4. Switch to the Applications tab.
5. Perform the steps for each of the following applications:
v Service Definitions
v Service Deployment Instances
a. Open the application.
b. Ensure that the user group has at least read access to the application.
c. Activate Grant Access? for the option Run Reports.
6. Click Go To > Administration > Reporting > Report Administration.
7. Select Action > Set Application Security.
8. Filter the application table by entering PMZHB in the application row.
9. Click the New Row button to add access rights for the security group to the
Service Definitions and Service Deployment Instances applications. In the
Detail section of the dialog, check the BIRT Reports? option.
10. Click OK.
11. Apply the changes by restarting the WebSphere Application Server.
Alternatively, you can switch Admin Mode off and then on again to refresh
the system cache.
What to do next
Note: If security exceptions persist as a result of making these changes, restart the
WebSphere Application Server.
Generating, viewing, and scheduling reports
Authorized users can generate the reports and view them immediately, or schedule
them in the future.
About this task
For authorized users, the entry Run Reports is presented in the Select Action
menu of the panels for the Service Definitions and Service Deployment Instances
applications. This menu item opens a window showing all previously imported
reports associated with the current application.
Procedure
1. Log on to the administrative interface as a user who has privileges to access
reports.
2. On the home page, next to the Go To menu, click Reports > Service
Automation, and select the type of reports that you want to access.
The Reports window is displayed. The On Demand Reports tab lists all
currently defined report types, and the Scheduling Status tab lists information
for previously scheduled reports.
v To run a specific report immediately:
a. In the On Demand Reports tab, select the report type. The Request Page
window for the indicated report type opens, where you can specify
additional parameters.
b. Specify any required parameters to limit the output. The date parameters
refer to the date that the service definition or deployment instance was
last changed.
c. Select Immediate to run the report without scheduling.
d. Click Submit.
The report is generated in HTML format and presented online in the BIRT
Viewer application.
v To schedule a report to run in the future, either once or on a recurring basis:
a. In the On Demand Reports tab, select the report type that you want to
schedule. The associated Request Page window is displayed.
b. Specify any required parameters to limit the output. The date parameters
refer to the date the service definition or deployment instance was last
changed.
c. To generate the report once, in the Schedule section, select At this time,
and click the Select Date icon to select the date and time.
d. To generate the report on a periodic basis, in the Schedule section, select
Recurring and click the Select Value icon to select a schedule or time
interval.
e. In the Email section, enter one or more validated email addresses, and
specify the output format.
f. If required, enter a subject and comments. If the Subject entry is omitted,
the report description is used as the subject.
g. Click Submit.
Tip: For authorized users, the entry Run Reports is also present in the Select
Action menu of the panels for the Service Definitions and Service Deployment
Instances applications. This menu item opens a window that shows all
previously imported reports that are associated with the current application.
Working with usage and accounting reports
If you have integrated your Tivoli Service Automation Manager installation with
Tivoli Usage and Accounting Manager, you can generate reports on usage of the
services for accounting purposes.
Note: Ensure that you have configured both Tivoli Service Automation Manager
and Tivoli Usage and Accounting Manager, as described in Integrating Tivoli
Usage and Accounting Manager on page 266.
Project account and the account code structure
If you are planning to use the Tivoli Usage and Accounting Manager reporting
function, each team in the self-service user interface must be assigned a project
account. The project account value is part of the account code structure that is
defined in Tivoli Usage and Accounting Manager.
The account code structure reflects the chargeback hierarchy for the organization.
Tivoli Usage and Accounting Manager uses an account code to identify entities for
billing and reporting. This code determines how Usage and Accounting Manager
interprets and reports input data.
The account code consists of the following elements:
Table 34. Account code structure

Identifier            Length in characters   Description
Account_Code          40                     Consists of three values:
                                             <CUSTOMER> (12 characters): unique
                                             customer short name
                                             <PROJECTACCOUNT> (20 characters):
                                             the project account value that is
                                             specified in the Create Team and
                                             Modify Team requests
                                             <PERSONGROUP> (8 characters):
                                             unique team identifier
Deployment_Owner      30                     The user that requested the
                                             project.
Service_Definition    4                      First four characters of the
                                             service definition name, used to
                                             identify the service in a report.
Deployment_Instance   30                     The name of the project that the
                                             server is associated with.
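The fixed-width layout of the account code (12 + 20 + 8 = 40 characters) can be sketched as follows. The function name and space-padding behavior are assumptions for illustration only; the actual composition is performed by Tivoli Usage and Accounting Manager.

```python
# Illustrative sketch: compose a 40-character account code from its
# three fixed-width fields. Space-padding is an assumption; the real
# composition is done by Tivoli Usage and Accounting Manager.
FIELD_WIDTHS = (("customer", 12), ("project_account", 20), ("person_group", 8))

def build_account_code(customer, project_account, person_group):
    values = {"customer": customer,
              "project_account": project_account,
              "person_group": person_group}
    parts = []
    for name, width in FIELD_WIDTHS:
        value = values[name]
        if len(value) > width:
            raise ValueError(f"{name} exceeds {width} characters: {value!r}")
        parts.append(value.ljust(width))  # pad each field to its fixed width
    return "".join(parts)

code = build_account_code("ACME", "WEBSHOP-DEV", "TEAM01")
print(len(code))  # 40
```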
Generating Tivoli Usage and Accounting Manager reports
You can use the Tivoli Usage and Accounting Web Reporting application to
generate reports on service usage.
About this task
The Tivoli Usage and Accounting Manager Web Reporting application provides
comprehensive cost accounting, chargeback, and resource reporting in a
browser-based environment. You can generate reports on service usage based on
the information from the CSR files that are retrieved from the Tivoli Service
Automation Manager server. You can save, copy text from, and print reports. In
addition, many reports include multi-level drill-down capabilities that enable you
to view detailed resource usage and cost information.
Note: In Tivoli Service Automation Manager 7.2.4.4, metering support is extended
for the usage of additional disks, so an additional column Tivoli Service
Automation Manager Additional Storage GB Hour is displayed in the report.
Procedure
1. Start Web Reporting from a supported browser by typing http://host_name in
the address bar, where host_name is the server name or IP address of the server
that is running Web Reporting.
2. From the menu bar, select Reports > Run Reports, then select report type.
3. In the window that opens, select parameters for the report and click OK. The
report is generated.
What to do next
For more information about working with reports, see IBM SmartCloud Cost
Management > Version 7.1.2 > Administering web Reporting > Working with reports in
the IBM Tivoli Usage and Accounting Manager knowledge center.
Managing cloud networks
Use the Cloud Network Administration application to manage your cloud
networks or to configure network DCM objects.
About this task
Perform the following steps to open the application:
Procedure
1. Log on to the administrative user interface.
2. Click Go to > Service Automation > Configuration > Cloud Network
Administration.
Network template management
Network templates are used during the Create Customer offering to define the
initial network setup that the new customer gets.
Network templates can be created by:
v Importing the sample network template files provided with the product. For
more information, see Importing network-related artifacts.
v Creating a network template within the cloud network administration
application. The related tasks are:
Creating a network template
Adding a network segment
Adding a subnetwork
Adding a virtual switch template
Deleting a network segment
Deleting a subnetwork
Deleting a virtual switch template
Network templates are visible in the self-service user interface as soon as their
status is set to active. The related task is:
v Changing the status of a network template
A network template can be modified at any time. It is used only during the Create
Customer offering in the self-service user interface to create a customer network
configuration instance.
Note: Modifying a network template only affects newly created customers.
Importing network related artifacts
Use the Cloud Network Administration application to import Data Center Model
(DCM) definitions and network templates.
Before you begin
To access the Cloud Network Administration application, log on to the
administrative user interface and click Go To > Service Automation >
Configuration > Cloud Network Administration.
About this task
To import the network configuration:
Procedure
1. Click Import Network DCM Objects, browse for the XML file that you want to
import, and click Import. If any problems appear during the import, a dialog
box informs you about it. After the DCM definitions are imported, they become
available in Tivoli Service Automation Manager.
2. Click Import Network Template xml and browse for a network template XML
file to import a network template. In the window that appears, specify the
name and description for the network template.
Note: At this point, Tivoli Service Automation Manager checks whether:
v The template conforms to the network template schema.
v The DCM resources referenced from the network template already exist.
They are referenced by their name so there can be only one resource with the
specified name for the given type in the DCM database.
v The network segment names are unique within the network template.
v The subnet names are unique within a network segment.
v The network segment usage values exist in the PMZHBNETSEGUSAGE domain.
3. After the network template is imported, press Enter in the Name input field to
see the imported template.
4. Set the status of the network template to Active.
Creating a network template
You can use the Cloud Network Administration application to create a network
configuration template.
Procedure
1. Log on to the administrative user interface.
2. Select Go To > Service Automation > Configuration > Cloud Network
Administration.
3. Click the New Network Template button in the application menu. A new
empty network configuration template is created.
4. Specify a name and click Save. The network configuration template editor
at the bottom of the page is enabled.
5. In the network configuration template editor:
a. Define network segments.
b. Associate subnetworks with network segments.
c. Define virtual switch settings for each subnetwork.
Note: For each network configuration template, at least one network segment of
type "Management" must be defined. Each network segment must have at least
one subnetwork associated with it. This subnetwork must have a reference to
an existing DCM subnetwork object that specifies the network parameters
(network address, netmask, gateway, blocked IP ranges). At least one virtual
switch template must be defined for each subnetwork.
Results
A new network template is created. You can use the Cloud Network
Administration application to edit network configuration templates in the
built-in graphical editor. Changes to network templates affect
only new customers. Editing a network configuration comprises:
v Adding and removing network segments
v Modifying network segments (for example, change the network segment type)
v Adding subnetworks to segments
v Removing subnetworks from segments
v Editing virtual switch template settings for subnetworks
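The structural rules from the note (at least one Management segment; each segment has at least one subnetwork; each subnetwork has at least one virtual switch template) can be sketched with a similarly simplified representation. This is an illustration only, not the actual DCM object model.

```python
# Illustrative structural check: each segment is a dict with "name",
# "type", and a list of subnetwork dicts carrying "name" and
# "switch_templates". (Not the real DCM model.)
def check_structure(segments):
    """Return a list of error messages; an empty list means OK."""
    errors = []
    # At least one segment of type "Management" must be defined.
    if not any(s["type"] == "Management" for s in segments):
        errors.append("at least one segment of type 'Management' is required")
    for s in segments:
        # Each segment must have at least one subnetwork.
        if not s["subnets"]:
            errors.append(f"segment {s['name']!r} has no subnetwork")
        for sub in s["subnets"]:
            # Each subnetwork needs at least one virtual switch template.
            if not sub["switch_templates"]:
                errors.append(
                    f"subnetwork {sub['name']!r} has no virtual switch template")
    return errors

template = [{"name": "mgmt", "type": "Management",
             "subnets": [{"name": "mgmt-a", "switch_templates": ["vsw1"]}]}]
print(check_structure(template))  # []
```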
Viewing network segments
You can use the Cloud Network Administration application to view and then
modify network segments.
About this task
Perform the following steps to view a network segment:
Procedure
1. Open the application:
a. Log on to the administrative user interface.
b. Click Go to > Service Automation > Configuration > Cloud Network
Administration.
2. In the Network Administration tab, open the Network Templates list to list all
network templates.
3. Select a network template. The Network Details tab is displayed.
Results
In the Network Segment section of the Network Details tab, all the network
segments are listed. The following information is displayed for each network
segment:
v Network segment name
v Segment type
v Description
v Segment usage
You can now perform a variety of tasks on the network segments.
Adding a network segment
You can use the Cloud Network Administration application to add new network
segments.
Procedure
1. Open the application:
a. Log on to the administrative user interface.
b. Click Go to > Service Automation > Configuration > Cloud Network
Administration.
2. In the Network Administration tab, select a network template. The Network
Details tab is displayed with a list of all network segments.
3. Click New Network Segment.
4. Specify the name and type of the new network segment.
5. In the DNS Settings tab, specify the required information in the fields.
6. In the Subnetwork Settings tab, fill in the required fields and select an
associated subnetwork data object.
7. Click Save.
Deleting a network segment
You can use the Cloud Network Administration application to remove a network
segment from a network configuration template.
Procedure
1. Log on to the administrative user interface.
2. Select Go To > Service Automation > Configuration > Cloud Network
Administration.
3. In the Network Administration tab, select a network template. The Network
Details tab is displayed with a list of all network segments defined for this
template.
4. Click the Mark Row for Delete icon next to the network segment that you
want to remove.
5. Click Save.
Viewing subnetworks
You can use the Cloud Network Administration application to view a list of
existing subnetworks.
Procedure
1. Open the application:
a. Log on to the administrative user interface.
b. Click Go to > Service Automation > Configuration > Cloud Network
Administration.
2. In the Network Administration tab, select a network template. The Network
Details tab is displayed with a list of all network segments.
3. Click the View Details icon next to the name of a network segment to display
its details.
4. Switch to the Subnetwork Settings tab.
Results
The following information about the network segment is listed:
v Subnetwork title
v Subnetwork description
v Associated subnetwork data object
v IP address
v Subnetwork mask
Adding a subnetwork
You can use the Cloud Network Administration application to add a new
subnetwork to one of the existing network segments.
Procedure
1. Open the application:
a. Log on to the administrative user interface.
b. Click Go to > Service Automation > Configuration > Cloud Network
Administration.
2. In the Network Administration tab, select a network template. The Network
Details tab is displayed with a list of all network segments.
3. Click the View Details icon next to the name of a network segment to display
its details.
4. Switch to the Subnetwork Settings tab.
5. Click New Subnetwork Reference.
6. Specify the subnetwork title and the associated subnetwork data object.
7. Click Save.
Deleting a subnetwork
You can use the Cloud Network Administration application to remove a
subnetwork that is assigned to network segments.
Procedure
1. Log on to the administrative user interface.
2. Select Go To > Service Automation > Configuration > Cloud Network
Administration.
3. In the Network Administration tab, select a network template. The Network
Details tab is displayed with a list of all network segments that are defined for
this template.
4. Click the View Details icon next to the name of a network segment to display
its details.
5. Open the Subnetwork Settings tab.
6. Click the Mark Row for Delete icon next to the subnetwork that you want to
remove.
7. Click Save.
Viewing virtual switch templates
You can use the Cloud Network Administration application to view a list of virtual
switch templates assigned to a subnetwork.
Procedure
1. Open the application:
a. Log on to the administrative user interface.
b. Click Go to > Service Automation > Configuration > Cloud Network
Administration.
2. In the Network Administration tab, select a network template. The Network
Details tab is displayed with a list of all network segments.
3. Click the View Details icon next to the name of a network segment to display
its details.
4. Switch to the Subnetwork Settings tab.
5. Click the View Details icon in the Virtual Switch Templates column.
Results
A new section is displayed with tabs that include information about virtual switch
templates for the supported hypervisors.
Adding a virtual switch template
You can use the Cloud Network Administration application to add a virtual switch
template to a subnetwork.
Procedure
1. Open the application:
a. Log on to the administrative user interface.
b. Click Go to > Service Automation > Configuration > Cloud Network
Administration.
2. In the Network Administration tab, select a network template. The Network
Details tab is displayed with a list of all network segments.
3. Click the View Details icon next to the name of a network segment to display
its details.
4. Switch to the Subnetwork Settings tab.
5. Click the View Details icon in the Virtual Switch Templates column. A new
section is displayed with tabs that include information about virtual switch
templates for the supported hypervisors.
6. Click New Virtual Switch Template Reference.
7. Specify the required information in the fields.
8. Click Save.
Deleting a virtual switch template
You can use the Cloud Network Administration application to delete a virtual
switch template.
Procedure
1. Open the application:
a. Log on to the administrative user interface.
b. Click Go to > Service Automation > Configuration > Cloud Network
Administration.
2. In the Network Administration tab, select a network template. The Network
Details tab is displayed with a list of all network segments.
3. Click the View Details icon next to the name of a network segment to display
its details.
4. Switch to the Subnetwork Settings tab.
5. Click the View Details icon in the Virtual Switch Templates column. A new
section is displayed with tabs that include information about virtual switch
templates for the supported hypervisors.
6. Click the Mark Row for Delete icon next to the virtual switch template that
you want to delete.
7. Click Save.
Changing the status of a network template
You can set the status of an imported network template to ACTIVE, DRAFT, or
INACTIVE.
About this task
After a network template is imported, it is in the DRAFT status. A network
template with this status is not visible in the self-service user interface during the
customer on-boarding process in the network template box. If you want a network
template to be displayed in the self-service user interface, set its status to ACTIVE.
If you want to remove a network template from the selection box and indicate that
the template is no longer valid, set the status to INACTIVE.
Procedure
1. To access the Cloud Network Administration application, log on to the
administrative user interface and click Go To > Service Automation >
Configuration > Cloud Network Administration.
2. Select a previously imported network template.
3. In the Cloud Network Template Status section, click the icon next to the Status
field.
4. Select the new status.
5. Click Save on the toolbar to save this change.
Note: Changes of the status of a network template are reflected in the
self-service user interface in the following way:
v After you import a network template, it is in status DRAFT and is not
displayed in the dropdown box for network templates in the Create
Customer window of the self-service user interface.
v After you set the status to ACTIVE, the network template is displayed in the
dropdown box for network templates in the Create Customer window of the
self-service user interface.
v After you set the status to INACTIVE, the network template is not displayed
in the dropdown box for network templates in the Create Customer window
of the self-service user interface.
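The visibility rules in the note above reduce to a single condition. The status values ACTIVE, DRAFT, and INACTIVE are from this guide; the helper function is illustrative only.

```python
# Illustrative: a network template appears in the drop-down box of the
# Create Customer window of the self-service user interface only when
# its status is ACTIVE; DRAFT and INACTIVE templates are hidden.
def shown_in_create_customer(status):
    return status == "ACTIVE"

for s in ("DRAFT", "ACTIVE", "INACTIVE"):
    print(s, shown_in_create_customer(s))
```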
Network configuration instance management
The network configuration instances are created during the Create Customer or
Create Project offerings.
During the customer creation process, a network template is assigned to the
customer. This assignment creates an instance (copy) of the selected network
template that is used in subsequent provisioning operations related to the customer. Subsequent
changes to the network configuration template are not promoted to the instances.
The network configuration instances are of two types:
Customer network configuration instance
Customer network configuration instances are created during the Create
Customer offering from the selected network template. They can be modified
in the Instances tab of the network template in the Cloud Network
Administration application. There is no editor for these instances. Changes
are made by exporting the instance, modifying the XML file, and importing it
again. For more information, see:
v Importing a customer network configuration instance
v Exporting a customer network configuration instance
Note: Modifications only affect the selected instance and recently created
projects.
Project network configuration instance
Project network configuration instances are created during the Create
Project offering from the customer network configuration instance. The
customer status of the logged-on users defines which customer network
configuration instance is used by default. Project network
configuration instances can be modified in the Cloud Network
Administration application by selecting the network template and then
moving to the Instances tab. For each customer a list of project network
configuration instances is displayed. There is no editor for these instances.
Changes are made by exporting the instance, modifying the XML file, and
importing it again. For more information, see:
v Importing a project network configuration instance
v Exporting a project network configuration instance
Note: Modifications only affect the selected instance and servers recently
added to the related project.
Viewing network configuration instances
You can use the Cloud Network Administration application to view a list of
network instances related to a network template.
Procedure
1. Open the application:
a. Log on to the administrative user interface.
b. Click Go to > Service Automation > Configuration > Cloud Network
Administration.
2. In the Network Administration tab, open the Network Templates list to list all
network templates.
3. Select a template and open the Network Instances tab.
Results
A list of all network configuration instances related to this network template is
displayed. The following information is provided about each network instance:
v Name and Description
v Associated Customer
v Usage Type: Customer
v Status of the network instance
Importing a customer network configuration instance
You can use the Cloud Network Administration application to import or update a
network configuration instance.
Procedure
1. Open the application:
a. Log on to the administrative user interface.
b. Click Go to > Service Automation > Configuration > Cloud Network
Administration.
2. Select a network template and switch to the Network Instances tab. The
Network Instances list contains information about all customer network
configuration instances associated with this network template.
3. Click the Import new Network Instance icon located in the same row as the
network configuration instance that you want to update.
4. Select the network instance XML file and click Import. The current version of
the customer network configuration instance is replaced by the imported one.
Exporting a customer network configuration instance
You can use the Cloud Network Administration application to export a network
configuration instance.
Procedure
1. Open the application:
a. Log on to the administrative user interface.
b. Click Go to > Service Automation > Configuration > Cloud Network
Administration.
2. Select a network template and switch to the Network Instances tab. The
Network Instances list contains information about all customer network
configuration instances associated with this network template.
3. Click the Export Network Instance icon located in the same row as the
network configuration instance that you want to export.
4. Copy and save the content of the dialog window to the required location.
Viewing project network configuration instances
You can use the Cloud Network Administration application to view a list of project
network configurations related to a network instance.
Procedure
1. Open the application:
a. Log on to the administrative user interface.
b. Click Go to > Service Automation > Configuration > Cloud Network
Administration.
2. Select a network template and switch to the Network Instances tab. The
Network Instances list contains information about all customer network
configuration instances associated with this network template.
3. Click the View Details icon next to the name of a network instance to view its
details.
Results
A list is displayed that shows all project network configuration instances available
for the selected customer network configuration instance. The following
information about each project network configuration is provided:
v Name and description
v Associated customer
v Service deployment instance
v Usage type: project
v Status of the project network configuration
You can navigate to the related service deployment instance and customer.
Importing a project network configuration instance
You can use the Cloud Network Administration application to update a project
network configuration instance.
Procedure
1. Log on to the administrative user interface.
2. Select Go To > Service Automation > Configuration > Cloud Network
Administration.
3. Select a network template and click the Network Instances tab. The network
instances list contains information about all customer network configuration
instances associated with the network template.
4. Click the View Details icon next to the name of a customer network
configuration instance to display the list of the project network configuration
instances.
5. Click the Import New Project Configuration icon located in the same row as
the project network configuration instance that you want to update.
6. Select the project network configuration XML file and click Import. The current
version of the project network configuration instance is replaced by the
imported one.
Exporting a project network configuration instance
You can use the Cloud Network Administration application to export a project
network configuration instance to an XML file.
Procedure
1. Open the application:
a. Log on to the administrative user interface.
b. Click Go to > Service Automation > Configuration > Cloud Network
Administration.
2. Select a network template and switch to the Network Instances tab. The
Network Instances list contains information about all customer network
configuration instances associated with this network template.
3. Click the View Details icon next to the name of a network instance to view its
details. A list is displayed that shows all project network configuration
instances available for the selected customer network configuration instance.
You can navigate to the related service deployment instance and customer.
4. Click the Export Project Configuration icon located in the same row as the
description of the project network configuration that you want to export.
5. Copy and save the content of the dialog window to the required location.
302 Tivoli Service Automation Manager V7.2.4.4 Installation and Administration Guide
Viewing customers
You can use the Cloud Network Administration application to view a list of
customers assigned to a network template.
Procedure
1. Open the application:
a. Log on to the administrative user interface.
b. Click Go to > Service Automation > Configuration > Cloud Network
Administration.
2. In the Network Administration tab, select a network template.
3. Switch to the Customers tab.
Results
A list of all customers associated with this network template is displayed in the
Customers section.
Configuring overlapping subnetworks
You can define or import subnetworks that have the same IP address range and
the same subnet mask. However, when you define such networks, ensure that the
subnetwork names and VLANs are unique. Overlapping IP addresses are
supported only for the VMControl hypervisor.
Validating DCM subnetworks for overlapping IP ranges
During the import of a DCM subnetwork definition template, you can enable
validation of the subnetwork for overlapping IP ranges. This task describes the
configuration options that control validation during the import of a subnetwork
template.
About this task
The validation is configured with the PMRDP.Net.SubnetworksValidation
system property. Possible values of the property are:
v Not set: Strict validation is active. At the default validation level, only the start
IP address and the netmask are validated.
v Relaxed: Blocked ranges in the subnetwork definitions are considered.
v Disabled: Subnetwork definitions are not validated. Set the property to Disabled
if you want to use overlapping IP ranges for subnetworks.
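Two subnetwork definitions overlap when the address ranges derived from their start IP addresses and netmasks intersect. The following shell sketch illustrates the range check that strict validation performs; it is an illustration only, not the product's validation code:

```shell
#!/bin/sh
# Sketch: decide whether two IPv4 subnetworks overlap, given their
# start address and prefix length (illustrative only).

ip2int() {
  # split a dotted quad into octets and pack them into one integer
  old_ifs=$IFS; IFS=.
  set -- $1
  IFS=$old_ifs
  echo $(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 ))
}

overlap() {
  # overlap START_A PREFIX_A START_B PREFIX_B -> prints yes or no
  sa=$(ip2int "$1"); ea=$(( sa + (1 << (32 - $2)) - 1 ))
  sb=$(ip2int "$3"); eb=$(( sb + (1 << (32 - $4)) - 1 ))
  if [ "$sa" -le "$eb" ] && [ "$sb" -le "$ea" ]; then echo yes; else echo no; fi
}

overlap 192.168.1.0 24 192.168.1.128 25   # prints yes
overlap 192.168.1.0 24 192.168.2.0 24     # prints no
```

With the property set to Disabled, subnetwork pairs for which this check would report yes are accepted during import.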
Procedure
1. Log on to the administrative user interface.
2. Select Go To > System Configuration > Platform Configuration > System
Properties.
3. Click New Row and add the PMRDP.Net.SubnetworksValidation property.
4. In the Global Value field, enter the required value for the property, for
example Disabled.
5. Save the changes.
6. Click the Live Refresh icon. A window opens.
7. Select the values that are to be updated. The new global value is automatically
set as the current value.
Note: If you want to revert to the default setting, perform the following steps:
a. Log on to the administrative user interface.
b. Select Go To > System Configuration > Platform Configuration > System
Properties.
c. Delete the PMRDP.Net.SubnetworksValidation property.
d. Save the changes.
Managing virtual server resources
You can administer virtual servers for various host platforms.
Increasing the maximum memory settings for System p
You can modify the value for the maximum memory that is available for LPAR
virtual servers.
About this task
The following procedure does not change:
v Future reservations - The values to create an LPAR are taken from the virtual
server template that has been created for the specific Create Project request.
v Saved images - The LPAR settings are stored in a virtual server template and
cannot be modified.
Procedure
1. Schedule an outage for all virtual LPAR servers which were previously
deployed with the old maximum values.
2. Log on to the administrative user interface as user maxadmin.
3. Select Go To > Service Automation > Configuration > Cloud Server Pool
Administration.
4. Filter for the System p cloud pool for which you want to modify memory
settings.
5. Click Disable to disable the cloud server pool.
6. Change the values in the Provisioning Parameters tab as needed. For
example, specify the maximum memory in MB.
Note: During provisioning, if resources on a host platform are insufficient, a
resource check is performed on all host platforms that are available in the
same pool. If resources are available, they are allocated and the data
structures are updated accordingly.
7. Log on to HMC.
8. Modify the max.memory value in the profile settings for each existing LPAR for
which you want to increase memory.
9. Stop the LPAR servers for which the settings changed and then start them.
10. Using the Cloud Server Pool Administration application, run CEC discovery.
This process updates the maximum values in the data center model for the
provisioned LPARs.
11. Enable and validate the cloud pool.
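Steps 7 to 9 can also be scripted from the HMC command line with lssyscfg and chsyscfg. The sketch below is a dry run that only prints the commands it would issue; the managed-system name, LPAR name, profile name, and the hscroot@hmc login are placeholders, not values from your environment:

```shell
#!/bin/sh
# Dry-run sketch: print the HMC commands that would raise max_mem
# for one LPAR profile. All names below are placeholders.
MANAGED="Server-9117-MMA-SN0000000"
LPAR="myLPAR"
PROFILE="default"
NEW_MAX_MEM=16384   # MB

hmc_cmd() {
  # echo instead of running the command on the HMC, so the sketch
  # is side-effect free; replace echo with ssh for real use
  echo "ssh hscroot@hmc $*"
}

# list the current profile memory settings
hmc_cmd "lssyscfg -r prof -m $MANAGED --filter lpar_names=$LPAR -F name,min_mem,desired_mem,max_mem"
# raise the profile maximum memory
hmc_cmd "chsyscfg -r prof -m $MANAGED -i name=$PROFILE,lpar_name=$LPAR,max_mem=$NEW_MAX_MEM"
```

The LPAR must still be stopped and restarted afterwards, as described in step 9, because profile changes take effect only on activation.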
Results
v Every max.memory field for each virtual server is updated in the data model.
v The memory for each LPAR virtual server can now be dynamically increased in
the self-service user interface.
v If resources are allocated from a different host platform of the same pool, then
the data structures are updated after allocation.
Moving servers to another host for System p cloud server
pools
The Live Partition Mobility feature is implemented for System p cloud server
pools only. The Server Management tab in the Cloud Server Pool Administration
application lists all servers that run on a host and are associated with the current
cloud server pool. Click Move Server To Another Host to trigger the Live Partition
Mobility operation for a selected server. The HMC automatically selects the
default settings for Live Partition Mobility.
Procedure
1. Go to the Cloud Server Pool Administration application.
2. Open the Server Management tab. A list of all servers that run on a host and
are associated with the current cloud server pool is displayed.
3. Click Move Server To Another Host. You cannot move a server if any of the
following conditions apply:
v The server is a VIO server. VIO servers must not be moved to other hosts.
v The pool is enabled. Disable the pool first to avoid the complications and
inconsistencies that might be caused by a parallel provisioning operation.
A Live Partition Mobility operation is triggered for the selected server and the
Move Server Operation window is displayed. The Move Server Operation
window lists all possible targets for the movement of the selected server.
4. In the Move Server Operation window, select the target host where you want
to move the server instance.
Important:
If an LPAR is configured for a particular shared processor pool, the target
server must provide the same shared processor pool with sufficient resources.
There is no resource check performed on the list of target servers that are
displayed. Insufficient resources on the target CEC causes the operation to fail.
5. Click Move Server.
Creating an administrative role for VMware users
To access and administer the VMware system, users must log on to the vCenter
server and they must have administrative permissions. You can create a
customized VMware user role with the permissions that make it possible to
perform administrative tasks. Then, you can associate the new role with a specific
user or group of users.
Before you begin
You must be logged on as a user with Administrator rights.
Procedure
1. In the vSphere Client Home page, click Administration > Roles > Add roles.
2. In the Add New Role panel:
a. Type the Name for the new role.
b. Select at least the following privileges:
Table 35. The minimum rights of a VMware administrative user (privilege
name: options to be selected)
v Data store: Allocate space; Browse DataStore
v Distributed Virtual Port Group: Create; Modify; Delete
v Network: Assign Network; Configure; Move Network
v Resources: Assign virtual machine to resource pool
v Virtual Machine: Select all permissions in this group.
c. Click OK.
Assigning provisioning workflows for VMware additional disk
feature
If you are using the VMware additional disk feature, each ESX server must be
assigned the workflow for provisioning additional disks. This workflow is called
vmware_HostPlatform_AddStorage2 and is assigned to the existing servers
automatically when the feature is installed. When additional ESX servers are
discovered after product installation, you must assign this workflow manually.
For additional information about provisioning workflows, see the Developing
automation > Provisioning workflows and automation packages section in the
Tivoli Provisioning Manager v7.2.0.2 knowledge center.
Assigning provisioning workflow to an ESX server
When a device model other than the default "Cloud VMware VirtualCenter Host"
is associated with ESX hosts, you must add the workflow for provisioning
additional disks to the ESX server manually. The default device model is
extended automatically during discovery.
Procedure
1. In the administrative user interface, click Go To > IT Infrastructure >
Provisioning Inventory > Provisioning Computers.
2. In the Computer field, filter for the required ESX server.
3. Once selected, go to the Workflows tab.
4. Click Assign provisioning workflow to assign the workflow for provisioning
additional disks.
5. Select vmware_HostPlatform_AddStorage2.
6. Click Save.
Results
The new workflow for provisioning additional disks is now assigned to the ESX
server.
Assigning provisioning workflows to multiple ESX servers
If multiple ESX servers are discovered after product installation, you can run a
script that assigns the workflow for provisioning additional disks to every
discovered ESX server.
Procedure
1. Connect to the Maximo database using the DB2 client. For example, on a Linux
system connect by running the following command:
source ~ctginst1/sqllib/db2profile
db2 connect to maxdb71 user maximo
2. Run an SQL script to assign the workflow to any new discovered ESX servers:
db2 -f /opt/IBM/SMP/maximo/tools/maximo/en/cl_ad_pmp/assign_workflow.sql
3. Close the database connection using the following command:
db2 disconnect all
Results
The vmware_HostPlatform_AddStorage2 workflow for provisioning additional disks
is now assigned to any discovered ESX servers. If you navigate to the Provisioning
Computers application and search for any newly discovered ESX server, you can
confirm that the workflow for provisioning additional disks has been assigned to
this ESX server.
Managing server images
This section describes the procedures for administering server and software
images.
Creating operating system image templates
Before templates can be used by Tivoli Service Automation Manager to fulfill a
service request, there are special requirements that must be met.
Creating operating system image templates for KVM
By building one or more KVM templates, you can deploy multiple virtual machine
images in your environment.
The documentation includes information about creating operating system image
templates for the following operating systems:
v Windows Server 2003
v Windows Server 2008
v Windows Server 2008 R2
v Windows Server 2008 R2 SP1
v Red Hat Enterprise Linux
v SUSE Linux Enterprise Server
Preparing a Windows image:
Configure the Windows image templates that you want to use for provisioning the
target virtual machines.
Before you begin
If you create Windows 2008 R2 64-bit images with a Cygwin version later than
1.5, you must use the preinstalled Cygwin method. The post-install Cygwin
method is not supported in this case. For more information, see Installing
Cygwin on page 315.
Note: For the Windows 7 64-bit images, Cygwin version 1.7.9-1 is not supported.
Windows 2008 64-bit R2 is the only certified Windows flavor for Cygwin 1.7.9-1.
Important: Only the 32-bit version of Cygwin is supported.
About this task
This procedure applies to all supported Windows platforms. If a step is needed
only for specific platforms, that is indicated in parentheses.
Important:
v Use only virtual images without snapshots. Existing snapshots must be
integrated before you convert the virtual image to a template.
v Only virtual images with one hard disk configured can be deployed.
Procedure
1. Create a new virtual machine. Install a clean version of one of the following
operating systems:
v Windows 2003
v Windows 2008
v Windows 2008 R2
v Windows 2008 R2 SP1
Note: For Windows 2003, extract the Sysprep tools from deploy.cab on the OS
installation disc to C:\Sysprep.
2. Install the Windows operating system on the virtual machine with the
minimum requirements for disk space and memory. During provisioning,
Tivoli Service Automation Manager extends the virtual machine disk partition
to the requested size.
Note: (Windows 2008) Specify at least 512 MB of RAM and 16-GB disk space.
3. Disable the firewall.
4. After the installation is completed, prepare a Cygwin installation file with the
name cloud_cygwin_install.zip:
a. Go to http://www.cygwin.com/ and download the Cygwin (version 1.7.1)
installable files to the \TEMP\CYGWIN directory. During the setup process,
when installing the packages for the first time, setup.exe does not install
every package. Only the minimal base packages from the Cygwin
distribution are installed by default. Click Categories and then Packages in
the setup.exe package installation screen to control what is installed or
updated. To install every Cygwin package, select the Default field next to
All category. For Windows 2008, use all packages available.
b. When the download is completed, you can find a folder named after the
URL of the mirror chosen during the installation in the \TEMP\CYGWIN
directory. Open that folder.
c. Copy the contents of that folder back into the \TEMP\CYGWIN directory and
delete the folder.
d. Place the following post-configuration scripts into the \TEMP directory:
postInstallAfter.bat, postInstallBefore.bat, shutdown.bat
e. Create the cloud_cygwin_install.zip compressed file with the contents of
the \TEMP\CYGWIN directory.
5. Copy cloud_cygwin_install.zip to the Windows virtual machine under the
root directory C:\.
6. Extract cloud_cygwin_install.zip on the Windows virtual machine. The
C:\TEMP directory now contains a CYGWIN folder.
7. Copy the file cloudPostinstallWindows.zip, from /opt/IBM/tsam/files/ on
the Tivoli Service Automation Manager management server, to the Windows
virtual machine under the root directory C:\ and extract it there.
8. Ensure that the contents of the \TEMP directory and the \TEMP\CYGWIN directory
reflect the contents and structure of the compressed file.
9. Use the scripts in the KVM directory for the Windows image template
configuration.
10. Add the cloud_disable_autologon script, and the cloud_win2008R2.bat script
for Windows 2008 or the cloud_win2003.bat script for Windows 2003, to the
C:\temp\scripts\ folder. You can find these scripts in the
cloudPostinstallWindows.zip package.
11. Modify the files cloud_setup2008.bat and cloud_win2008.reg for Windows
2008, or cloud_setup2003.bat and cloud_win2003.reg for Windows 2003.
Change the password to match the administrator password.
12. Run script C:\temp\scripts\cloud_setup2008.bat for Windows 2008 or
C:\temp\scripts\cloud_setup2003.bat for Windows 2003.
13. Power off the Windows template and copy the disk file and the XML
configuration file to the KVM repository for discovery.
14. Discover the Windows template and register the image through the user
interface.
15. For Windows 2008 or Windows 2008 R2, replace the instcygw-local.bat file
with the file provided.
Preparing a Linux image:
Configure the Linux image templates that are used for provisioning the target
virtual machines.
Before you begin
v Only one hard disk can be configured per image.
v No logical volumes should be included on the hard disk.
v Partition one must be the boot partition (/boot) and partition two the root
partition (/).
v Root (/) partition should be mountable externally to allow for configuring
network settings. Avoid creating a swap partition to reduce provisioning time.
v Image size should be reduced as much as possible before making it available for
provisioning. Resize the file system to the size of data contained in the image
and resize the image hard disk to the size of the file system.
v The size of the root partition of the Linux image can be increased to the amount
specified by the user during provisioning. The tool resize2fs is used to modify
the size of the root partition, and the minimum required version is 1.39. SUSE
Linux Enterprise Server 10 SP2 ships with version 1.38, so a manual upgrade of
the tool in the image is required. To check the version of the tool, run the
following command on your image: # resize2fs.
v The file system on the root partition must be ext3 or ext4.
v For SUSE, the bootloader must be installed in the Master Boot Record (MBR)
and not in the partitions.
v The following UNIX tools must be available in the Linux installation to enable
disk size modification in the Modify Server Resources request:
resize2fs, version 1.39 or higher
parted
fdisk
sfdisk
grep
sed
awk
tail
cut
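A quick way to verify these prerequisites inside the image is a small shell check. This is a sketch to run in the image before converting it to a template; it only reports what is missing from the PATH:

```shell
#!/bin/sh
# Report any required tools that are missing from PATH
check_tools() {
  missing=""
  for t in "$@"; do
    command -v "$t" >/dev/null 2>&1 || missing="$missing $t"
  done
  if [ -z "$missing" ]; then
    echo "all tools present"
  else
    echo "missing:$missing"
  fi
}

check_tools resize2fs parted fdisk sfdisk grep sed awk tail cut

# resize2fs prints its version when run without arguments;
# check that it reports 1.39 or higher
command -v resize2fs >/dev/null 2>&1 && resize2fs 2>&1 | head -n 1
```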
Procedure
1. Create a new virtual machine using virt-manager.
2. Install the Linux Operating System on the virtual machine (Red Hat or SUSE)
with the minimum requirements. During the provisioning, Tivoli Service
Automation Manager extends the virtual machine disk/file system to the
requested size.
Note: Do not create templates with Volume groups during OS installation to
avoid provisioning failures.
a. Remove the SWAP partition, if any.
b. Use the minimum amount of disk space and memory size required. For
example, 5 GB of disk space and 1024 MB of memory.
3. After the operating system is installed, perform the following steps:
a. Stop the local firewall and set it to manual start.
SUSE
1) Select YaST > Security and Users > Firewall.
2) In the Service Start section, set the firewall startup to manual
and stop the firewall.
Red Hat
Disable the firewall and Security Enhanced Linux.
b. Remove persistent network interfaces.
SUSE
1) cd /etc/udev/rules.d
2) Edit the file 30-net_persistent_names.rules and remove all the
network entries, leaving only the comments.
Note: When the image is built, these entries reappear after each
reboot. Ensure that you repeat this step every time the virtual
machine reboots before converting the image to a template.
Red Hat
1) cd /etc/sysconfig/network-scripts
2) Remove all files matching the token ifcfg-eth*.
c. Remove all ssh keys in the root home directory, if any.
d. Remove all the ssh host keys, if any: cd /etc/ssh; rm ssh_host*
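The Red Hat cleanup in steps 3b to 3d can be collected into one script. The sketch below operates on a root-directory prefix so it can be tried safely before being run against / as the last step before shutdown; it is an illustration of the steps above, not product code, and the SUSE udev rules file must still be edited by hand:

```shell
#!/bin/sh
# Remove host-specific state before converting the image to a template.
# $1 is the root prefix: "/" for a live image, or a scratch directory
# when trying the script out.
clean_image() {
  root="${1%/}"
  # Red Hat: remove persistent network interface definitions
  rm -f "$root"/etc/sysconfig/network-scripts/ifcfg-eth*
  # remove the ssh keys in the root home directory
  rm -rf "$root"/root/.ssh
  # remove the ssh host keys
  rm -f "$root"/etc/ssh/ssh_host_*
}
```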
SUSE Linux Enterprise Server 11 image - PolicyKit settings for
authorization-free shutdown:
The default PolicyKit settings interfere with the Stop Server and Restart Server
use cases on the provisioned virtual server. Perform the steps in this section to
change the default settings.
Procedure
1. Log in to the virtual machine.
2. Open a terminal and run the following command:
polkit-gnome-authorization
3. In the left pane of the window that is displayed, navigate to
org.freedesktop.consolekit.system and click Stop the System.
4. Edit the Implicit Authorization for Anyone, Console and Active Console to
Yes.
5. Click Modify. Provide root authentication.
6. Close the window.
Red Hat Enterprise Linux 5.4 image:
Obtain a Red Hat 5.4 image and place it in a corresponding directory in the KVM
image server.
About this task
To provision virtual machines with KVM in Tivoli Service Automation Manager,
the image templates must be stored on the KVM image server.
Procedure
1. On the KVM image server, in the /repository/kvmimages directory, create a
directory, for example rhel for Red Hat Enterprise Linux.
2. Inside the rhel directory, create another one, rhel54, where you will store the
image.
3. Name the image disk file with the same name as the directory, for example
rhel54.img, and place it in the newly created directory.
4. Name the XML configuration file with the same name as the directory, for
example rhel54.xml, and place it in the newly created directory.
Important: Ensure that the .xml and .img files are placed in the correct
directory, that is in the second directory below /repository/kvmimages.
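The naming convention in steps 1 to 4 can be verified with a short shell check before running discovery. This is a sketch under the assumption that the directory name must match the .img and .xml file names:

```shell
#!/bin/sh
# Check that an image directory follows the <name>/<name>.img + <name>.xml rule
check_kvm_image_dir() {
  dir="${1%/}"
  name=$(basename "$dir")
  if [ -f "$dir/$name.img" ] && [ -f "$dir/$name.xml" ]; then
    echo "layout OK: $name"
  else
    echo "layout BAD: expected $name.img and $name.xml in $dir"
  fi
}

# example: check_kvm_image_dir /repository/kvmimages/rhel/rhel54
```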
What to do next
Now, you have to prepare the created template so that it can be used by Tivoli
Service Automation Manager to fulfill a service request. See Preparing OS image
templates for Tivoli Service Automation Manager on page 325.
Creating operating system image templates for VMware
By building one or more VMware templates, you can deploy multiple virtual
machine images in your environment.
The documentation includes information about creating operating system image
templates for the following operating systems:
v Windows Server 2008
v Windows Server 2008 R2
v Red Hat Enterprise Linux
v SUSE Linux Enterprise Server
Important: Drive letter Z: must be free within the Windows template and available
for Tivoli Provisioning Manager to mount installation sources (for example, for the
ITM agent installation). If Z: is not free, the installation fails because this drive
letter is always used to mount the installation source.
Note:
v To work with VMware templates, you must have installed the VMware Tools
package.
v Do not create or use templates that have a MAC address set manually. In Tivoli
Provisioning Manager, each MAC address must be unique, and MAC addresses
created manually for VMware virtual servers are not supported. When copying a
template with manually set MAC address to another ESX or VirtualCenter,
ensure that you edit the manual MAC address to a new unique one or set it to
automatic.
v Even when automatic MAC generation is set, the same MAC address might be
generated for two templates. Ensure that the MAC address is unique for each
template as this is required for Tivoli Provisioning Manager.
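One way to spot duplicate template MAC addresses is to collect them into a list (for example, extracted from the template .vmx files) and look for repeats. This shell sketch assumes one MAC address per line of input:

```shell
#!/bin/sh
# Print any MAC address that appears more than once on stdin.
# Lowercase first so that case differences do not hide duplicates.
dup_macs() {
  tr 'A-F' 'a-f' | sort | uniq -d
}

printf '00:50:56:aa:bb:01\n00:50:56:AA:BB:01\n00:50:56:aa:bb:02\n' | dup_macs
# prints 00:50:56:aa:bb:01
```

Any address printed must be changed to a new unique value, or set to automatic generation, before the templates are used.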
Preparing a Windows image:
Configure the Windows image templates that you want to use for provisioning the
target virtual machines.
Before you begin
If you create Windows 2008 R2 64-bit images with a Cygwin version later than
1.5, you must use the preinstalled Cygwin method. The post-install Cygwin
method is not supported in this case. For more information, see Installing
Cygwin on page 315.
Important: Only 32 bit version of Cygwin is supported.
Note: For the Windows 7 64-bit images, Cygwin version 1.7.9-1 is not supported.
Windows 2008 64-bit R2 is the only certified Windows flavor for Cygwin 1.7.9-1.
Note: For Windows 7, change the power-saving settings. Go to Control Panel >
Hardware > Power Options > Change when the computer sleeps. Set Put the
computer to sleep to Never. This prevents the VM from going into suspended
mode after you add a new disk.
About this task
This procedure applies to all supported Windows platforms. If a step is needed
only for specific platforms, that is indicated in parentheses. For special
platform-dependent requirements, including minimum build levels of the ESXi and
vCenter Server components, see Planning for Tivoli Service Automation Manager
on page 28
Important:
v Use only virtual images without snapshots. Existing snapshots have to be
integrated before you convert the virtual image to a template.
v Only virtual images with one hard disk configured can be deployed.
Procedure
1. From the VMware Infrastructure Client, click Inventory and select Hosts and
Clusters.
2. Select the ESX server from the data center cluster to be managed by Tivoli
Service Automation Manager.
3. Create a new virtual machine.
4. Install the Windows operating system on the virtual machine with the
minimum requirements for disk space and memory. During provisioning,
Tivoli Service Automation Manager extends the virtual machine disk partition
to the requested size.
Note: (2008) Specify at least 512 MB of RAM and 16-GB disk space.
5. Install the Microsoft Sysprep tools on your vCenter Server machine. Microsoft
includes the Sysprep tool set on the installation CDs for Windows. It also
distributes Sysprep from the Microsoft website. To customize Windows, you
must install the Sysprep tools either from your installation disc, or from the
Microsoft download package.
For detailed information about this topic, visit the VMware Knowledge Base:
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US
&cmd=displayKC&externalId=1005593.
Note: The Sysprep version is specific to the operating system:
Windows Server 2003 x64
v Location: C:\Documents and Settings\All Users\Application
Data\VMware\VMware VirtualCenter\sysprep\srv2003-64
v Download: http://www.microsoft.com/downloads/
details.aspx?familyid=C2684C95-6864-4091-BC9A-52AEC5491AF7
&displaylang=en
v Instructions: Extract the contents of the EXE and then extract the
file SP2QFE\deploy.cab. Copy these files to the srv2003-64 folder.
6. Install VMware Tools on the Windows virtual machine image.
a. Open the console on the newly created virtual machine.
b. Select VM > Guest > Install/Update VMware tools.
7. After the installation completes, prepare a Cygwin installation file with the
name cloud_cygwin_install.zip.
a. Go to http://www.cygwin.com and download the latest Cygwin
installable files to the \TEMP\CYGWIN directory with at least the following
packages: alternatives, ash, base-files, base-passwd, bash, bzip2, coreutils,
crypt, csih, cygrunsrv, cygutils, cygwin, cygwin-doc, db, diffutils,
editrights, expat, findutils, gawk, gdbm, gettext, grep, groff, gzip, less,
libiconv, login, man, minires, ncurses, openssh, openssl, pcre, perl, popt,
readline, rebase, run, sed, setup.exe, tar, tcp_wrappers, termcap, terminfo,
texinfo, tzcode, unzip, which, zip, zlib. For Windows 2008, use all
packages available.
b. When the Cygwin download completes, change to the \TEMP\CYGWIN
directory. There should be a directory named after the URL of the mirror
chosen during the installation. Change to that directory.
c. Copy the contents of the URL directory back into the \TEMP\CYGWIN
directory.
d. Delete the URL directory from the \TEMP\CYGWIN directory.
e. Place the post configuration scripts into the \TEMP directory:
postInstallAfter.bat, postInstallBefore.bat, shutdown.bat,
postInstallAfterCloning.bat, cloud_cygwin_ssh_config_cloning.sh.
f. Create the cloud_cygwin_install.zip compressed file with the contents of
the\TEMP\CYGWIN directory.
Note: You can also opt for a preinstalled Cygwin scenario. For more
information, see Installing Cygwin on page 315.
8. Copy cloud_cygwin_install.zip to the Windows virtual machine under the
root directory (C:\)
9. Extract cloud_cygwin_install.zip on the Windows virtual machine. The
C:\TEMP directory should now contain a CYGWIN folder.
10. Copy the file, cloudPostinstallWindows.zip, located in /opt/IBM/tsam/files/
on the Tivoli Service Automation Manager management server to the
Windows virtual machine under the root directory (C:\).
11. Extract the file C:\cloudPostinstallWindows.zip on the Windows virtual
machine.
12. Ensure the contents of the \TEMP directory and the TEMP\CYGWIN directory
reflect the contents and structure in the compressed file.
13. (Vista, 2008) Disable the firewall: Control Panel > Administrative Tools >
Services
Important: The save/restore function can save running images on VMware.
However, when the saved image is restored, the image is booted. For
Windows 2003 and 2008, the Windows shutdown event tracker detects this
situation and prompts at the root console for the shutdown reason. Because the
root console is typically not accessible to users, this feature must be disabled.
Detailed instructions are available at: http://support.microsoft.com/kb/293814
14. Shut down and power off the virtual machine.
15. From the VMware Infrastructure Client, right-click the virtual machine and
select Template > Convert to Template.
16. Follow the wizard instructions and save the template to the image data store.
Results
You created a Windows template.
Important: If a VMware template is already registered in Tivoli Service
Automation Manager and has to be modified, do not clone the virtual machine,
because then you will not be able to use the modified VMware template. Instead,
complete the following procedure:
1. Use VirtualCenter to convert the template into a virtual machine.
2. Start the VM and modify it as required.
3. Convert the VM back into a template.
What to do next
Now, you have to prepare the created template so that it can be used by Tivoli
Service Automation Manager to fulfill a service request. See Preparing OS image
templates for Tivoli Service Automation Manager on page 325.
Note: Do not use VMware Infrastructure Client to provision a virtual machine
from the created template. Virtual machines should be provisioned by the Tivoli
Service Automation Manager self-service user interface only. See Creating a project
and adding virtual servers in the product Users Guide for more details.
Tip: If the time zone for the provisioned servers is not set correctly, as a
workaround, you can set the sysprep.timezone attribute for the server template. It
is described in the Tivoli Provisioning Manager information center. (See Table 4:
Hidden Parameters for VMware in User Guide > Configuring virtual servers > Managing
virtual servers and host platform servers using VMware > Part 3: Creating a virtual
server > Creating virtual servers.)
Installing Cygwin:
You can install Cygwin directly from the Internet or from a local directory.
Before you begin
If you create a Windows 2008 R2 template with preinstalled Cygwin version
1.7.9-1, refer to Creating Windows 2008 R2 template when using preinstalled
Cygwin 1.7.9-1 on page 316.
Important: Only 32 bit version of Cygwin is supported.
Procedure
1. Make sure that the \TEMP\CYGWIN directory on your computer contains only the
following files:
v cloud_cygwin_ssh_config.sh
v configure_ssh.sh
v instcygw-local.bat
Delete all other files.
Note: If the .sh files in the TEMP folder are opened in Notepad on a Windows
server, they need a format conversion from DOS to UNIX line endings. The
dos2unix command, available in Linux and Cygwin, makes the files compatible
again.
2. Go to http://www.cygwin.com.
3. Click setup.exe and download the file.
4. Run setup.exe from your computer.
5. Select a download source:
Install from Local Directory
Select this option if you have the files locally. Otherwise, choose Install
from Internet.
Install from Internet
Select this option if you do not have the files locally. The downloaded
files will be saved for future use.
Click Next and follow the installation wizard.
6. In the Select Packages window, select the packages and the respective
installation options. Click Next to complete the installation. Once the
installation is complete you can see a Cygwin icon on the desktop.
Tip: In case of an error while selecting the packages, choose Reinstall.
7. Remove the Cygwin packages or keep them for future use.
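The DOS-to-UNIX conversion mentioned in the note for step 1 can also be done without the dos2unix tool: stripping carriage returns with tr is equivalent for these scripts. A minimal sketch:

```shell
#!/bin/sh
# Remove Windows CR characters so a .sh file runs under Cygwin's shell
to_unix() {
  tr -d '\r' < "$1" > "$1.unix" && mv "$1.unix" "$1"
}

# example: to_unix configure_ssh.sh
```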
What to do next
Important: In Windows 2003 64-bit, perform the following steps to allow Cygwin
to use applications from system32:
1. Remove the CD/DVD drive.
2. Implement the following hotfix: http://support.microsoft.com/kb/942589 .
Important: When you are asked to permanently add the
%SystemRoot%\sysnative or ;%WinDir%\sysnative to your path, follow the
instructions found here: http://technet.microsoft.com/en-us/library/cc736637.
Complete the procedure described in Preparing a Windows image on page 312,
starting from step 13.
Creating Windows 2008 R2 template when using preinstalled Cygwin 1.7.9-1:
You must perform additional steps when creating a Windows 2008 R2 template
with preinstalled Cygwin version 1.7.9-1.
Procedure
1. Create a virtual machine and install Windows 2008 R2 64-bit operating system
on it.
2. Disable the firewall.
3. Install the VMware tools on the Windows virtual machine image:
a. Open the console on the newly created virtual machine.
b. Select VM > Guest > Install/Update VMware tools.
4. Modify the Recycle Bin properties:
a. Select Don't move files to the Recycle Bin. Remove files immediately
when deleted.
b. Clear the Display delete confirmation dialog check box.
5. Create a temp folder on drive C:\ and copy the following files into it.
From /opt/IBM/tsam/files/cloudPostinstallWindows.zip/PreInstalledCygwin/temp:
v cloud_cygwin_ssh_config_preinstalled.sh
v postInstallAfter.bat
From the /opt/IBM/tsam/files/cloudPostinstallWindows.zip/temp directory:
316 Tivoli Service Automation Manager V7.2.4.4 Installation and Administration Guide
v shutdown.bat
v postInstallBefore.bat
v postInstallAfterCloning.bat
v cloud_cygwin_ssh_config_cloning.sh
6. Install Cygwin in the Virtual Machine as the Administrator user.
7. After the Cygwin installation completes, copy cloud_cygwin_ssh_config.sh to
the C:\cygwin\bin directory. The cloud_cygwin_ssh_config.sh file is included
in the /opt/IBM/tsam/files/cloudPostinstallWindows.zip package, which must
be owned by Administrator. When you extract this package, two folders are
created: PreInstalledCygwin and temp. The cloud_cygwin_ssh_config.sh file is
located in the temp/cygwin folder.
8. Convert the virtual machine to a template, and run the image template
discovery to use the template.
Tip: If the following error message is displayed: "The Recycle Bin on C:\ is
corrupted. Do you want to empty the Recycle Bin for this drive?", follow the
instructions provided in Common problems and solutions on page 486.
VMware Windows provisioning with changed built-in administrator account name:
Non-standard built-in administrator account name for Windows system
provisioning and management.
Tivoli Service Automation Manager provisions VMware Windows guests with
non-standard administrator accounts. When a Windows image that is in use is
registered, Tivoli Service Automation Manager renames the built-in administrator
account to the name specified in the Administrative User Account field of the
self-service UI. The renamed account retains the management rights of the built-in
administrator account, and it is subsequently used to manage the system.
Note: No existing account in the image may already have the name that is given
during the registration of the Windows image.
Ensure that the WMIC command-line interface for Windows Management
Instrumentation (WMI) is available in your master image; this is the case for a
default installation. After provisioning, all management functions are called
through ssh, which is provided by Cygwin.
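The following sketch illustrates the shape of such a management call; the account
and host names are placeholders, and the command string is only assembled here,
not executed:

```shell
# Placeholder values: the real account name is the one entered in the
# Administrative User Account field, and the host is the provisioned guest.
ADMIN=CloudAdmin
GUEST=win-guest.example.com

# A management function travels over ssh to the Cygwin sshd on the guest
# and runs a WMIC query there, for example:
CHECK_WMIC="ssh $ADMIN@$GUEST 'wmic os get Caption'"
echo "$CHECK_WMIC"
```

If this command fails interactively against a prepared image, WMIC or the
Cygwin ssh setup is not correctly in place.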
Attention: Only Windows templates that have a built-in Administrator account are
supported.
In combination with Cygwin, a built-in account has extended account
permissions. The SID of the built-in Administrator account ends with -500. If this
is not the case, use the unchanged account.
Tip: Custom Tivoli Provisioning Manager workflows do not need any
customization, because all communication and execution, such as
Device.ExecuteCommand, use the service access point that is configured for RSA
authentication with the changed administration account name.
Preparing a Linux image:
Configure the Linux image templates that are used for provisioning the target
virtual machines.
Before you begin
Important:
v Requirements for the virtual machine:
Only virtual images without snapshots may be used. Existing snapshots must
be integrated before you convert the virtual image to a template.
Only one hard disk (vmdk) can be configured per image.
v Requirements for the guest partition:
No logical volumes (lvm) should be included on the hard disk.
The root (/) partition must be the last partition. A swap partition after the
root partition is tolerated.
Root (/) partition should be mountable externally to allow guest
configuration.
The file system on the root (/) partition must be ext3 or ext4.
v For SUSE, the bootloader must be installed in the Master Boot Record (MBR)
and not in a partition.
v Performance:
Reduce the image size as much as possible before making it available for
provisioning. Resize the file system to the size of data contained in the image
and resize the image hard disk to the size of the file system.
Avoid including a swap partition in the template to reduce provisioning time.
Swap space is created dynamically during provisioning, if requested.
v Requirements for the guest operating system:
Depending on the Linux version, the following components must be present
before the installation:
-
Table 36. Required libstdc++ and libgcc packages per agent architecture

Architecture                                  libstdc++               libgcc
li6243/li6246 32bit agent for Linux           libstdc++-2.96-98       N/A
  Intel kernel 2.4 (RHEL3, SLES8)
li6263/li6266 32bit agent for Linux           libstdc++-3.3.3-43.41   libgcc-4.1-4.1.2_20070115-0.2
  Intel kernel 2.6 (RHEL4, RHEL5,
  SLES9, SLES10)
lx8266 64bit agent for Linux x64              libstdc++-3.4.4-2       libgcc-3.4.4-2
  kernel 2.6
lia266 64bit agent for Linux IA64             libstdc++-3.2.2-23      libgcc-3.2.2-23
  kernel 2.6
lpp266 64bit agent for Linux PPC              libstdc++-3.3.3-43.41   libgcc-3.3.3-43.41
  kernel 2.6
ls3243 31bit agent for zLinux                 libstdc++-3.2.2-54      libgcc-3.2.2-54
  kernel 2.4 (RHEL3, SLES8)
ls3246 64bit agent for zLinux                 libstdc++-3.2.2-54      libgcc-3.2.2-54
  kernel 2.4 (RHEL3, SLES8)
ls3263 31bit agent for zLinux                 libstdc++-3.3.3-43.34   libgcc-3.3.3-43.34
  kernel 2.6 (RHEL4, RHEL5, SLES9, SLES10)
ls3266 64bit agent for zLinux                 libstdc++-3.3.3-43.34   libgcc-3.3.3-43.34
  kernel 2.6 (RHEL4, RHEL5, SLES9, SLES10)
The following UNIX tools must be available in the Linux installation:
- resize2fs version 1.39 or higher or ext2online on older Linux versions
- parted
- fdisk
- sfdisk
- grep
- sed
- awk
- tail
- cut
- bc
For Red Hat Enterprise Linux 6 x64 and SUSE Enterprise Linux 11 x64 in
combination with VMware 5, you must install:
- The following 32-bit and 64-bit compat-libstdc++ libraries:
v compat-libstdc++-296.2.96.i386
v compat-libgcc-296.2.96.i386
v compat-libstdc++-33.3.2.3.x86_64
v compat-libstdc++-33.3.2.3.i386
v libXpm-3.5.5-3
- Parted version 1.8.1, which is available at http://ftp.gnu.org/gnu/parted/,
onto the virtual machine, logging in as root.
Then, proceed to create a template as for any other Red Hat Enterprise Linux
x64 or SUSE Enterprise Linux x64 version.
These tools are used during configuration of the guest.
To install DB2 on a provisioned Red Hat Enterprise Linux 6.2 virtual machine,
the libstdc++.so.6 RPM package is required.
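The presence of the UNIX tools listed above can be checked quickly inside the
guest before the image is converted to a template; a minimal sketch:

```shell
# Collect any tools from the required list that are not on the PATH;
# run inside the guest as root before converting it to a template.
MISSING=""
for tool in resize2fs parted fdisk sfdisk grep sed awk tail cut bc; do
  command -v "$tool" >/dev/null 2>&1 || MISSING="$MISSING $tool"
done
echo "missing tools:${MISSING:- none}"
```

Install any reported packages before you continue with the procedure.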
Procedure
1. From the VMware Infrastructure Client, click Inventory and select Hosts and
Clusters.
2. Select the ESX server from the data center cluster to be managed by Tivoli
Service Automation Manager.
3. Create a new virtual machine.
Note: When you create CentOS templates that are supported by Tivoli Service
Automation Manager by using VMware versions lower than 4.1, ensure that you
select Red Hat Enterprise Linux as the guest OS type.
4. Install the Linux Operating System on the virtual machine (Red Hat or SUSE)
with the minimum requirements. During the provisioning, Tivoli Service
Automation Manager extends the virtual machine disk/file system to the
requested size.
Note: Do not create templates with Volume groups during OS installation to
avoid provisioning failures.
a. Remove the SWAP partition, if any.
b. Use the minimum amount of disk space and memory size required. For
example, 5 GB of disk space and 1024 MB of memory.
c. Ensure that the file system on the root partition is ext3 or ext4.
5. After the operating system is installed, perform the following steps:
a. Stop the local firewall and set it to manual start.
SUSE
1) Select YaST > Security and Users > Firewall.
2) In the Service Start section, set the firewall startup to manual
and stop the firewall.
Red Hat
1) As root, disable the firewall by typing the following
commands:
service iptables save
service iptables stop
chkconfig iptables off
2) Disable Security Enhanced Linux:
a) Edit the /etc/selinux/config file.
b) Change the SELINUX line to be as follows:
SELINUX=disabled
b. Ensure that the bootloader is installed in the Master Boot Record.
SUSE
1) Select YaST > System > Boot Loader.
2) Click the Boot Loader Installation tab and go to the Boot
Loader Location section.
3) Clear the Boot from Boot Partition check box.
4) Select the Boot from Master Boot Record check box.
c. Remove persistent network interfaces.
SUSE
1) Type: cd /etc/udev/rules.d
2) Edit the file 30-net_persistent_names.rules (for SUSE Linux
Enterprise Server 11, edit 70-persistent-net.rules) and
remove all the network entries, leave only comments.
Note: When the image is built, these entries reappear after
each reboot. Ensure that you repeat this step every time the
virtual machine reboots before converting the image to a
template.
Red Hat
1) Type: cd /etc/sysconfig/network-scripts
2) Remove all files matching the token ifcfg-eth*.
3) For Red Hat Enterprise Linux 5.4 and CentOS 5.4 templates,
edit the /etc/rc.local file in the template and add a line:
service network restart
at the end of this file.
d. Remove all ssh keys in the root home directory, if any.
e. Remove all the ssh host keys, if any: cd /etc/ssh; rm ssh_host*
6. For SUSE Linux Enterprise Server 11, verify that the red prompt is disabled on
the terminal. Open the terminal, and if the prompt appears in color red,
follow the steps below, otherwise ignore:
a. Edit /etc/bash.bashrc and comment the PS1="\[$_bred\]$PS1\[$_sgr0\]"
line as shown in the example:
if test "$UID" -eq 0 -a -t ; then
_bred="$(path tput bold 2> /dev/null; path tput setaf 1 2> /dev/null)"
_sgr0="$(path tput sgr0 2> /dev/null)"
# PS1="\[$_bred\]$PS1\[$_sgr0\]"
unset _bred _sgr0
fi
7. Install VMware tools.
a. Open the console of the newly created virtual machine.
b. From the console for the virtual machine, select VM > Guest >
Install/Update VMware tools.
c. Open a terminal and access the VMware tools by typing the following
commands:
mkdir -p /media/cdrom
mount /dev/cdrom /media/cdrom
ls -l /media/cdrom
You see the VMware tools software.
d. Copy VMware tools software to a temporary directory and unpack it by
typing the following commands:
cd
mkdir temp_vmwtools
cd temp_vmwtools
cp /media/cdrom/vmware-tools-distrib.tar.gz ./
gunzip vmware-tools-distrib.tar.gz
tar -xvf vmware-tools-distrib.tar
e. Install the VMware tools software:
1) Type the following commands:
cd vmware-tools-distrib
./vmware-install.pl
2) Select all default options, including the option to configure VMware
tools.
3) Wait for the installation to complete.
f. Tidy up the temporary directory by typing the following commands:
cd ../..
rm -rf <temporary directory>
g. Click VM > Guest and check the menu options.
h. (Optional) If you have such an option within your menu, click VM >
Guest > End Install VMware tools.
i. Close the terminal.
8. Once again verify the requirements described in Before you begin.
9. Shut down the guest operating system and power off the virtual machine.
10. From the VMware Infrastructure Client, right-click the virtual machine and
select Template > Convert to Template.
11. Follow the wizard instructions and save the template to the image data store.
Note: For SUSE Linux Enterprise Server 11 with vSphere versions lower than
4.1 and 4.0 Update 2, when you convert the virtual machine to a template, you
must change the OS type of the virtual machine to SUSE Linux Enterprise
Server 10 64-bit by using Edit settings, and then convert it to a template.
What to do next
Now, you have to prepare the created template so that it can be used by Tivoli
Service Automation Manager to fulfill a service request. See Preparing OS image
templates for Tivoli Service Automation Manager on page 325.
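The cleanup in steps 5c through 5e for a Red Hat image can be consolidated into
a short script. The sketch below runs against a staging copy ($ROOT) so that it
can be tried safely; inside the real guest you would set ROOT to an empty string
and run it as root:

```shell
# Build a staging copy of the relevant guest directories with sample files.
ROOT=./stage
mkdir -p "$ROOT/etc/sysconfig/network-scripts" "$ROOT/etc/ssh" "$ROOT/root/.ssh"
touch "$ROOT/etc/sysconfig/network-scripts/ifcfg-eth0" \
      "$ROOT/etc/ssh/ssh_host_rsa_key" "$ROOT/root/.ssh/id_rsa"

# Step 5c: remove persistent network interface definitions.
rm -f "$ROOT"/etc/sysconfig/network-scripts/ifcfg-eth*
# Step 5d: remove ssh keys in the root home directory.
rm -f "$ROOT"/root/.ssh/id_* "$ROOT"/root/.ssh/authorized_keys
# Step 5e: remove the ssh host keys.
rm -f "$ROOT"/etc/ssh/ssh_host_*
```

Remember that the ifcfg-eth* files reappear after each reboot, so run the
cleanup again immediately before converting the image to a template.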
Partition and file system requirements for the disk resize functionality:
Tivoli Service Automation Manager offers a functionality to resize a Linux virtual
machine if certain requirements are met.
When the resize is performed, three things happen:
1. The container of the virtual machine on the hypervisor is extended.
2. The partition table is modified to include the additional space.
3. The root (/) file system is resized so that additional space is available for the
user.
For the second and third points, certain assumptions are made regarding the
partition layout and the file system. These assumptions function as requirements
that must be met for the default disk resize functionality to work. However,
sometimes different partition layouts have to be supported, logical volumes have
to be included, or different file systems have to be used. In such cases, you can
customize the disk resize implementation by overwriting it with your own
implementation.
The standard disk resize implementation does not support logical volumes. It also
requires a specific layout of partitions as described in Preparing a Linux image
on page 318. You can avoid these limitations by adding custom-built scripts to the
image for handling disk resize. These are the locations for the scripts:
v /opt/IBM/tsam/config/configure_disks
v /opt/IBM/tsam/config/configure_disks_after_reboot (optional)
If the scripts exist in these locations and are executable files, they override the
standard implementation for disk resize.
Both scripts are called with one parameter swap_size. The
configure_disks_after_reboot script is optional. If it exists, the operating system
is rebooted before the script is called. Otherwise, the operating system is not
rebooted after configure_disks exits. Therefore, configure_disks_after_reboot
is only called if configure_disks exists. Both scripts communicate success by
exiting with return code 0. Otherwise, the underlying management action is
stopped with an error.
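A skeleton for such a custom script can look as follows. This is only a sketch:
it is written to the current directory here, and the resize logic itself is a
placeholder, not the product implementation; in the image, the script belongs at
/opt/IBM/tsam/config/configure_disks:

```shell
# Write a minimal configure_disks skeleton that honors the calling contract:
# one swap_size parameter in, exit code 0 for success.
cat > configure_disks <<'EOF'
#!/bin/sh
swap_size="$1"

# ... custom partition-table and file-system resize logic goes here ...
echo "resizing root file system, swap_size=${swap_size}"

# Exit 0 to signal success; any other code stops the management action.
exit 0
EOF
chmod +x configure_disks
./configure_disks 512
```

An optional configure_disks_after_reboot script follows the same contract and
runs after the operating system is rebooted.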
VMware Linux provisioning with sudoer account instead of the standard root account:
The non-standard built-in "root" account name for Linux system provisioning and
management with VMware.
The Linux system that is used for the image or template must work with sudo
instead of root. The built-in root account exists, but it is normally disabled for
external access, such as ssh.
When the image is prepared, do the following to restrict ssh access for the
built-in root account:
1. Add or change the following line in /etc/ssh/sshd_config: PermitRootLogin no
2. Restart the sshd service: /etc/init.d/sshd restart
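Step 1 can be automated in the image. The sketch below works on a local copy of
sshd_config so that it can be tried safely; in the image itself, apply the same
edit to /etc/ssh/sshd_config and restart sshd afterwards:

```shell
# Work on a demo copy with sample content instead of /etc/ssh/sshd_config.
CFG=sshd_config.demo
printf 'Port 22\nPermitRootLogin yes\n' > "$CFG"

# Replace an existing PermitRootLogin line, or append one if it is absent.
if grep -q '^PermitRootLogin' "$CFG"; then
  sed -i 's/^PermitRootLogin.*/PermitRootLogin no/' "$CFG"
else
  echo 'PermitRootLogin no' >> "$CFG"
fi
grep '^PermitRootLogin' "$CFG"
```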
Tivoli Service Automation Manager works with RSA credentials, because the user
of the client system can change passwords. Therefore, the sudoer must be
configured for the following tasks:
v Administrative execution without a password
v Add or change the following line in the /etc/sudoers file in the Linux
template: "<user-account-name> ALL=(ALL) NOPASSWD: ALL"
The newly created user account with sudoer capability must be able to access all
executable commands that are available at the Linux system. You can customize
the exported PATH to enable this access.
During the image registration in the self-service UI, overwrite the default value
root in the Administrative User Account field with the alternative admin account
name that is prepared in a used image.
Tip: Customize the custom workflows, because during system creation, the RSA
authentication service access point is configured so that it is used together with
sudo command.
For simple commands or shell script execution through Device.ExecuteCommand, no
change is required, as the service access point handles the sudo usage during
execution.
If the command is more complex, it might be necessary to wrap the command
with bash -c "complex command chain", where internal quotation marks (") must
be escaped.
When you use a scriptlet (a workflow inline script definition), you must place
sudo in front of every command that requires it. For an example, see the
workflow Cloud_Linux_Online_Configure_Disks.wkf. There is no automatic
processing of sudo, even if the service access point contains the sudoers
configuration.
Attention: Some Linux distributions, such as Red Hat Enterprise Linux, do not
allow sudo operations without an interactive login. To enable remote login and
sudo operation, comment out the Defaults requiretty line in the /etc/sudoers file.
For example, a complex command chain with escaped quotation marks:
bash -c "rm -f /etc/sysconfig/network/routes ;
echo \"0.0.0.0 9.152.136.1 0.0.0.0
$(ifconfig -a | egrep 'HWaddr 00:50:56:00:43:56'
| gawk '{ print $1 }')\" >>
/etc/sysconfig/network/routes ;
echo \"default 9.152.136.1 - - \"
>> /etc/sysconfig/network/routes"
Creating operating system image templates for PowerVM
The following sections describe the steps to configure the operating system image
templates for PowerVM.
Preparing an AIX image:
Configure the AIX image templates that will be used for provisioning the target
virtual machines.
About this task
AIX images for provisioning must fulfill the requirements outlined in the
"Deploying operating systems" and "Deploying software" topics in the Tivoli
Provisioning Manager Provisioning User Guide. Provide your AIX administrator
with any special requirements you have for the images.
Procedure
1. The AIX administrator captures the required AIX operating system image.
2. The AIX administrator provides the root password of the captured OS image to
the Tivoli Service Automation Manager administrator. This password is
required during a Tivoli Service Automation Manager configuration step for the
images to be used by Tivoli Service Automation Manager.
What to do next
Now, you have to prepare the created template so that it can be used by Tivoli
Service Automation Manager to fulfill a service request. See Preparing OS image
templates for Tivoli Service Automation Manager on page 325.
Discovering single virtual server image templates
To save time, you can limit the scope of discovery of images that are managed by
the VirtualCenter.
About this task
Restriction: This option is available only for VMware Cloud Pools.
Earlier implementations of the image template discovery discovered and
reconciled all images that were managed by the VirtualCenter; if your back-end is
large, this discovery can take a long time. You can now limit the scope of the
discovery to one or more clusters. For data validity and consistency reasons, do
not reconcile existing registered images.
Procedure
1. Deactivate the current cloud pool.
2. In the Cloud Service Pool Administration application, click the Image
Template Discovery tab and select Discover Selected Image Templates?.
3. In the Image Template Names field, specify the name of the virtual server
image that is to be discovered or reconciled. You can specify more than one
image template name at the same time by using a comma-separated list.
4. Click Discover Selected Images.
5. To see the discovery workflow ID, name, and status that are stored in the Most
recent image discovery section, click Refresh.
Preparing OS image templates for Tivoli Service Automation
Manager
You can prepare the OS image templates that you created, so that they can be used
by Tivoli Service Automation Manager to fulfill a service request.
Procedure
1. Use the Tivoli Provisioning Manager to discover the created operating system
templates from the hypervisor environment and add them to the data center
model:
a. Log on to the Tivoli Service Automation Manager administrative user
interface.
b. Go to Service Automation > Configuration > Cloud Pool Administration.
c. Click the Cloud Pool Details tab.
d. Scroll down to find the discovery-related sections.
e. Depending on the type of your hypervisor, use one of the following
discovery procedures:
v VMware: Discover image templates for VMware.
v PowerVM: Run the NIM discovery.
v KVM: Discover KVM images.
2. Register the image to the image library. See the Managing Image Library section
in the Tivoli Service Automation Manager User's Guide.
3. You must assign the image template, either to all customers or to selected
customers.
To assign the image template to all customers:
a. Go to IT Infrastructure > Image Library > Master Images.
b. Select an image and click the Customers tab.
c. Select the Assigned to all customers check box.
d. Click Save.
To assign the image only to selected customers:
a. Go to Service Automation > Configuration > Cloud Customer
Administration.
b. Select the customers to which you want to make the virtual server image
available.
c. Click the Customer details tab and then click IL Master Images tab in the
Associated Resources section.
d. To select customers for whom the image is available, click Assign Master
Images.
e. Click Save.
Adding a new image from VMControl
If you have added a new image in IBM Systems Director VMControl, you need to
synchronize the Tivoli Provisioning Manager image repository before the image
can be used to provision virtual machines in the self-service user interface.
About this task
Note: When you import a virtual appliance in VMControl that is to be used by
Tivoli Service Automation Manager for automated provisioning, do not set the
hostname property in the corresponding OVF description file.
To register an image added via VMControl:
Procedure
1. Log on to the administrative interface and synchronize the repository:
a. Click Go To > IT infrastructure > Image Library > Image Repositories.
b. Select the repository that contains the required image (virtual appliance),
and click Synchronize Repository to bring any new images into the Tivoli
Provisioning Manager data model.
2. Log on to the self-service user interface:
a. Click Request a New Service > Virtual Server Management > Manage
Image Library > Register VM Image via IBM Systems Director
VMControl.
b. Select the required image and click OK to register it.
Results
The image can now be used to create new virtual machines.
Migrating a VMControl image from 2.3 level to 2.4.1 or 2.4.2 level
VMControl 2.4.1 and 2.4.2 provides backward compatibility for VMControl 2.3
resources.
About this task
VMControl 2.4.1 and 2.4.2 support three types of images:
v NIM 23 - Network Installation Manager for version 2.3
v NIM 24 - Network Installation Manager for version 2.4
v SCS - Storage Copy Services
A NIM 23 is simply an image that is supported by VMControl 2.3.
Procedure
1. Deploy a VMControl 2.3 image in VMControl 2.4.1 or 2.4.2.
2. Capture the provisioned LPAR to NIM 24 or SCS repository.
3. Optional: If you want to create an SCS image:
a. Deploy the NIM 23.
b. Install the activation engine.
c. Stop the image using the activation engine.
d. Capture the image to the SCS image repository.
Results
After this migration, the virtual images are available for provisioning using
VMControl 2.4.1 or 2.4.2.
Deleting a server image
When an image is no longer needed, you need to remove it from both the
administrative and the self-service user interfaces. Moreover, any future
reservations related to this image must be canceled.
About this task
To remove an image:
Procedure
1. Log on to the self-service user interface.
2. To unregister the image, click Request a New Service > Virtual Server
Management > Manage Image Library > Unregister Image.
3. On the hypervisor user interface, delete the image.
4. On the administrative user interface, run the discovery.
Results
The image is deleted from Tivoli Provisioning Manager Image Library and can no
longer be used to create new servers.
Storing server images
Use separate data stores for storing images that serve different functions.
Use separate data stores for storing:
1. Provisioned images
2. Save/restore images
3. Image templates and non-Tivoli Service Automation Manager managed
resources.
Note: Do not store resources that are not managed by Tivoli Service Automation
Manager on the data stores that are used for provisioning and save/restore. If you
do, and the data stores become full, you run into resource allocation problems.
Registering single master images to different VMware clusters
and server resource pools
You can register a single VMware template to different server resource pools.
Before you begin
You must have access to the virtual center by using a VMware Infrastructure Client
software.
Procedure
1. By using an NFS server, mount the master image datastore to the ESX server of
the VMware cluster. Initially, the NFS server has at least one dedicated network
card where one interface is configured and exported. Configure a set of
interfaces to be used as interface to mount the NFS master image datastore. To
configure it:
v Use different network cards, where each network card has one single
interface that is configured and added to the export list.
v Configure network interface's alias interfaces on a single network card by
running ip a add 10.102.2.50/16 broadcast 10.102.255.255 dev eth1. You
can list the configuration by using ip a s.
With parallel provisioning, the NFS-mounted datastore results in poor
performance. To avoid processing timeouts, keep the number of concurrently
provisioned systems low. Because of the different image sizes and overall
network traffic limitations, you must determine the number by testing. Each
ESX server connects to the master image store by using its own IP address and
NFS mount. This configuration can also be performed for an NFS-mounted
master image datastore; in that case, you do not have to use different IP
addresses.
2. Register the image to the ESX server, that is, to the vCenter library. Use
different names for each image. Consider a naming convention, for example,
use an image name stem and add a sequence number when you register the
image to the cluster ESX servers (for example: SLES11_1, SLES11_2).
3. Run the template discovery. You have now registered a single master image
template source to multiple VM clusters. You have also registered the copies
with different names in the vCenter library, which resulted in differently
named image library master image entries. You can associate them with
different customers and therefore with different server resource pools.
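The alias-interface configuration in step 1 can be scripted. The sketch below
only prints the ip commands instead of executing them, so it is safe to run
anywhere; the addresses and device name are examples:

```shell
# Build one "ip a add" command per alias address on a single network card.
dev=eth1
cmds=""
for addr in 10.102.2.50 10.102.2.51 10.102.2.52; do
  cmds="${cmds}ip a add $addr/16 broadcast 10.102.255.255 dev $dev
"
done
printf '%s' "$cmds"
```

On the NFS server, run the printed commands as root, then verify the result
with ip a s.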
What to do next
The disadvantage is the bottleneck in the network traffic capabilities. To avoid it,
follow the instructions described in Multiplying VMware template metadata.
Multiplying VMware template metadata
About this task
You cannot manipulate the connection to the datastore content by using different
IP addresses. Therefore, you cannot apply the FlashCopy feature to avoid the
performance bottleneck that is caused by the network and the NFS-mounted
master image datastore. In SAN storage, you cannot use a command shell or
access the file system in a similar way to copy and edit the image metadata.
To multiply the VMware template metadata:
Procedure
1. Download the vmtx file from the single image template by using the VMware
Infrastructure Client.
2. Update the file name and the display name with identical names. Use a naming
convention to differentiate between the source and the copy, for example,
SLES11 and SLES11_C1. The names must be unique within a vCenter. If one of
the master images is deleted, its copies can no longer be used. Therefore, use
the master image or template datastore as read-only to avoid deleting it by
accident.
3. Upload the updated vmtx file to the directory from which you downloaded it.
4. Add a new virtual machine template to the inventory of ESX server in different
cluster as compared to the registration of the source master image or template.
5. Run the discovery. The discovery lists the new master images in image library
master images. You can assign them to individual customers or server resource
pools.
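The metadata copy in steps 1 to 3 can be sketched on the command line. The
.vmtx content below is a hypothetical minimal descriptor created only for the
sketch; the key syntax ("displayName = ...") should be verified against your
actual downloaded file:

```shell
# Stand-in for a .vmtx descriptor downloaded in step 1.
printf 'config.version = "8"\ndisplayName = "SLES11"\n' > SLES11.vmtx

# Copy the metadata and give the copy a unique display name, following the
# naming convention from step 2 (source SLES11, copy SLES11_C1).
cp SLES11.vmtx SLES11_C1.vmtx
sed -i 's/^displayName = .*/displayName = "SLES11_C1"/' SLES11_C1.vmtx
grep displayName SLES11_C1.vmtx
```

Upload the edited copy as described in step 3 and register it in the other
cluster.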
Enabling the restore across project offerings
Learn about the assumptions that have to be met and about the procedure of
enabling the restore across project offerings.
Before you begin
Any project can have an individual network configuration. The restore across
project offerings can be performed successfully only if the source and the target
network configurations are equal. For this reason, Tivoli Service Automation
Manager ships with the restore across project offerings disabled. As long as you
do not use the network extensibility function for a network configuration at the
individual project level, you can enable the restore across project offerings. To
successfully enable the restore across project offerings, the following requirements
must be met:
v The network extensibility to create project level network configuration is not
being used.
v The network configuration of the saved image is equal to the project the saved
image should be deployed in.
If the above mentioned assumptions are valid for your environment, you can
enable the following restore across project offerings:
v Create Project from saved Image
v Add Server from saved Image
Note: If the assumptions are no longer valid and you have enabled these
offerings, disable the restore across project offerings again.
Procedure
1. Log on to the administrative user interface as PMSCADMUSR or as a user who has
the same privileges.
2. Open the offerings application by selecting Go To > Service Request Manager
Catalog > Offerings.
3. Locate the PMRDP_0248A_72 (Add Server from Saved Image) offering.
4. Change the status of the offering from pending to active. Save the changes.
5. Locate the PMRDP_0249A_72 (Create Project from Saved Image) offering.
6. Change the status of the offering from pending to active. Save the changes.
Results
The restore across project offerings are available in the Web 2.0 UI as offerings.
What to do next
If you want to disable one or both of the restore across project offerings, repeat the
procedure and change the status of each of the offerings from active to pending.
Controlling user access
This section provides information about the security policy, user roles, and data
segregation within Tivoli Service Automation Manager.
Security in the administrative user interface
Learn about the security concepts and roles in Tivoli Service Automation Manager.
The discussed functions are performed with the administrative user interface.
Security information is stored in two places in the Maximo environment:
v In the WebSphere Application Server, with one or more configured LDAP
servers
v In the Maximo database
Therefore, some of the security-related items (for example, security groups and
users) need to be configured in two places. However, tools are provided that
enable you to keep the definitions in LDAP and the Maximo database in sync.
Security concepts
The Maximo platform offers a highly sophisticated security framework. Security
can be categorized as follows:
Authentication
Implemented via WebSphere Application Server security and LDAP.
Authentication checks whether a user is known to the system and whether
the credentials that are provided are valid for database access.
Authorization
Handles user access rights for certain entities. Access rights are defined
using security groups. Once a user is authenticated, the authorization checks
what the user is permitted to do. Authorization is provided by
membership in one or more security groups. All security groups a user
belongs to make up the security profile of a user. Access rights are checked
by the system against the user's security profile. Security groups can
restrict access to various areas, including applications, sites, data
restrictions, start centers, and so on. Security groups can be configured to
provide access to specific catalogs and can also control the offerings that
are restricted within these catalogs.
Roles and work assignments
Whenever an interactive task is triggered by a workflow, the task is
assigned to one or more persons. Roles control the assignment of these
tasks. Roles are used as part of communication templates, escalations,
service level agreements, or workflow processes. When a role is used
within a process, the database management software determines which
users to route the process to based on information within the role record.
Roles do not serve any function related to security authorizations. The
security groups associated with a user ID determine the security
authorization of that user.
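The authorization concept above — a user's security profile is the union of the authorizations of all security groups the user belongs to — can be sketched as follows. The group-to-authorization mapping shown here is illustrative only; the actual authorizations are configured per security group in the product:

```python
# Illustrative sketch of the security-profile concept: a user's effective
# authorizations are the union of what each of the user's security groups
# grants. The application names here are examples, not the product's
# actual configuration.

GROUP_AUTHORIZATIONS = {
    "PMZHBSADM": {"ServiceDefinitions", "ServiceInstances", "ResourceAllocation"},
    "PMZHBSSDD": {"ServiceDefinitions"},
    "PMZHBSSIO": {"ServiceInstances"},
}

def security_profile(user_groups):
    """Return the union of authorizations over all of the user's groups."""
    profile = set()
    for group in user_groups:
        profile |= GROUP_AUTHORIZATIONS.get(group, set())
    return profile

def is_authorized(user_groups, application):
    """Access is granted if any group in the profile allows the application."""
    return application in security_profile(user_groups)
```

Note that, as the text states, roles play no part in this check; only group membership does.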
Security elements provided by Tivoli Service Automation Manager
v To configure security in the Maximo environment, you must configure security
groups, roles, users, persons, and person groups.
v Most of these activities are customer specific, and therefore need to be
configured at your site.
330 Tivoli Service Automation Manager V7.2.4.4 Installation and Administration Guide
v Tivoli Service Automation Manager provides a set of predefined groups and
roles to which your users can be mapped.
Here is an overview of the roles and security groups that are provided by Tivoli
Service Automation Manager.
Table 37. Roles and groups provided by Tivoli Service Automation Manager

Role                                      Person Group    Role            Security Group (must
                                          (8 chars)       (10 chars)      be created in LDAP)
Service Administrator                     TSAMSADM        PMZHBSADM       PMZHBSADM
Service Definition Designer               TSAMSSDD        PMZHBSSDD       PMZHBSSDD
Service Definition Manager                TSAMSSDM        PMZHBSSDM       PMZHBSSDM
Service Deployment Instance Manager       TSAMSSIM        PMZHBSSIM       PMZHBSSIM
Service Deployment Instance Operator      TSAMSSIO        PMZHBSSIO       PMZHBSSIO
Service Resource Allocation Manager       TSAMSRAM        PMZHBSRAM       PMZHBSRAM

WebSphere Cluster Service
Performance - AIX Administrators          TSAMPAXA        PMZHBPAIXA      PMZHBPAIXA
Performance - WAS Administrators          TSAMPWAS        PMZHBPWASA      PMZHBPWASA
Performance - Linux Administrators        TSAMPLXA        PMZHBPLNXA      PMZHBPLNXA
Performance Monitoring Administrator      TSAMPPMA        PMZHBPPMA       PMZHBPPMA

Self-Service Virtual Server Provisioning component
Tivoli Service Automation Manager         Not applicable  PMSCADMUSR      PMSCMADMUSR
Administrator (for Self-Service
Virtual Server Provisioning)
Note:
v A corresponding role exists for each security group. Therefore, a user who is
assigned a task can easily also be configured to have authorization to accomplish
the task.
v The PMZHBTSAMR security group is a special security group that provides
read authorization for all Tivoli Service Automation Manager applications. The
intent of this group is to simplify a configuration where, for example, a z/VM
administrator can gain read access to all other Tivoli Service Automation
Manager-related applications.
v The PMZHBTSAMR security group also configures the Tivoli Service
Automation Manager Start Center.
v The security groups only contain authorizations to Tivoli Service Automation
Manager applications. If a user in a certain role requires access to other
applications, this must be configured by the local Maximo security administrator.
v The shipped configuration of each role includes a person group. All of these
person groups contain only MAXADMIN as a person entry. Therefore, in the
default configuration, all tasks are assigned to MAXADMIN. This can be
re-configured with standard security tooling.
v For details on how to manage security (for example, creating roles, persons,
person groups, and users), refer to the documentation that is supplied with the
CCMDB product (although Tivoli Service Automation Manager does not actually
use it).
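The naming pattern in Table 37 — an 8-character person group and a role ID of up to 10 characters paired with each security group — can be captured in a simple lookup. This is a sketch of the shipped mapping for the service roles, not a product API:

```python
# Mapping of pre-configured service roles to their person group, role ID,
# and security group, as listed in Table 37. By convention, person groups
# are 8 characters and role IDs are at most 10 characters.

ROLE_MAPPING = {
    "Service Administrator":                ("TSAMSADM", "PMZHBSADM", "PMZHBSADM"),
    "Service Definition Designer":          ("TSAMSSDD", "PMZHBSSDD", "PMZHBSSDD"),
    "Service Definition Manager":           ("TSAMSSDM", "PMZHBSSDM", "PMZHBSSDM"),
    "Service Deployment Instance Manager":  ("TSAMSSIM", "PMZHBSSIM", "PMZHBSSIM"),
    "Service Deployment Instance Operator": ("TSAMSSIO", "PMZHBSSIO", "PMZHBSSIO"),
    "Service Resource Allocation Manager":  ("TSAMSRAM", "PMZHBSRAM", "PMZHBSRAM"),
}

def lookup(role_name):
    """Return the shipped person group, role ID, and security group for a role."""
    person_group, role_id, security_group = ROLE_MAPPING[role_name]
    assert len(person_group) == 8 and len(role_id) <= 10
    return {"person_group": person_group, "role": role_id,
            "security_group": security_group}
```

As the notes above point out, the role ID and security group share the same name, which makes it straightforward to give a task assignee the matching authorization.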
Pre-Configured Roles provided for Tivoli Service Automation Manager:
Service Definition Manager
The Service Definition Manager role is designated for users who are
responsible for triggering and tracking new service definitions. This role is
also responsible for assigning the tasks for new definitions to designers,
and to approve designed service definitions for instantiation.
Service Definition Designer
The Service Definition Designer role is designated for users who are
responsible for creating new service definitions when instructed to do so
by a service manager.
Service Deployment Instance Operator
The Service Deployment Instance Operator is designated for users who are
responsible for instantiation of designed and approved service definitions.
These users operate service instances, such as starting a job plan.
For self-service offerings, users assigned to this group are authorized to
troubleshoot unexpected problems that arise with the automated
fulfillment of service requests submitted by members of the PMRDPUSR
(Offering Catalog user) group.
Service Deployment Instance Manager
The Service Deployment Instance Manager role is designated for users who
are responsible for controlling and approving the execution of service
instances.
Service Resource Allocation Manager
The Service Resource Allocation Manager role is designated for users who
manage the allocation of resources (CIs) to service instances. For example,
servers that are represented in CIs can be allocated to a specific instance.
Service Administrator
The Service Administrator role is a 'super user' for all aspects of service
definitions and instances. This user has the authority to exercise all the
functions of a Service Deployment Instance Manager, Service Deployment
Instance Operator, Service Definition Designer, and Service Definition
Manager.
For self-service offerings, the administrator performs a series of tasks that
set up the applications to enable service requesters to initiate and manage
the requests. This administrator also manages the offering catalog.
Performance: AIX Administrators
Users in this role are responsible for all kinds of performance and problem
determination tasks related to AIX systems.
Performance: WAS Administrators
Users in this role are responsible for all kinds of performance and problem
determination tasks related to WebSphere systems.
Performance: Linux Administrators
Users in this role are responsible for all kinds of performance and problem
determination tasks related to Linux systems.
Performance Monitoring Administrator
Users in this role are responsible for all kinds of problem determination
tasks. They have the authority to perform all of the tasks that are defined
for the performance and problem determination roles.
Tivoli Service Automation Manager Administrator (for Self-Service Virtual
Server Provisioning)
Users in this role perform various administrative tasks that are available
only in the administrative interface, such as managing the offering catalog
for the Self-Service Virtual Server Provisioning component.
Security management in the self-service user interface
In the self-service user interface, the administrator defines user access and rights
by customer and team assignment, policy level, and security groups.
A user is assigned to one customer, either the default global customer or
another customer specified by the requester. Optionally, a user is assigned to
one or more teams.
A user can also be assigned to multiple customers, but only the cloud
administrator can make that assignment.
A new security step is also introduced in the Create User and Modify User
requests, where the requester specifies the privileges and permissions of the new
user. Permissions include the rights a user has to access data in the cloud, which
depend on the user's policy level, and the rights that a user can grant to other users by
means of the grant option. Privileges are the requests a user can submit, and they
are managed by assignment to security groups. For each user, the following
settings can be specified:
v Customer
Administrators always create users in the context of the customer for which they
are working when they request that a user be created. The customer cannot be
changed later. If you have the required privileges, you can select a customer in
the title bar of the main panel before submitting any request.
v Team
Team assignment is not obligatory, but to be able to use a project, the user must
be a member of the team that has access to this project. A user can be a member
of more than one team, but all of the teams must belong to the same customer.
v Policy level
Policy levels define which cloud objects or resources the user can access. Two
policy levels are predefined:
Cloud level
Users on this level can access the information and resources available in
the cloud, regardless of customer limits. They are assigned to a default
global customer called PMRDPCUST.
Customer level
Users on this level are always assigned to one customer only; however,
the cloud administrator can assign the user to multiple customers by
using the multi-customer option.
v Security groups
Each user can be assigned to one or more of seven predefined security groups. Each group
defines which requests the user can submit, and what information they can
access. Groups can be combined. When creating a user, you can also specify the
grant option for each security group. When the grant option is checked for a
security group, the user is allowed to create new users and assign them to that
security group. Security groups are described in detail in Security groups in the
self-service user interface.
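The constraints in the settings above — the customer is fixed at creation time, every team must belong to the user's customer, and cloud-level users belong to the global customer — can be sketched as a validation routine. The function and data shapes are illustrative only, not a Tivoli Service Automation Manager API:

```python
# Illustrative validation of the self-service user settings described above.

POLICY_LEVELS = {"cloud", "customer"}
GLOBAL_CUSTOMER = "PMRDPCUST"  # default global customer for cloud-level users

def validate_user(customer, teams, policy_level, team_customers):
    """team_customers maps each team name to its owning customer."""
    if policy_level not in POLICY_LEVELS:
        raise ValueError("unknown policy level")
    if policy_level == "cloud" and customer != GLOBAL_CUSTOMER:
        raise ValueError("cloud-level users belong to the global customer")
    # Team assignment is optional, but every team must belong to the
    # user's customer.
    for team in teams:
        if team_customers.get(team) != customer:
            raise ValueError(f"team {team} belongs to another customer")
    return True
```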
To create users for the self-service interface and add them to the teams, submit the
appropriate requests in this interface. See the Managing Users section in the Tivoli
Service Automation Manager User's Guide.
In addition to the individual settings for user accounts, the administrator can also
enable approval of self-service requests. Cloud administrators, cloud customer
administrators, and cloud approvers have permissions to approve the requests. If a
new request for a customer is submitted, all the approvers assigned to this
customer are notified. Any of them can grant approval. The approval process is
disabled by default and all requests are auto-approved. For more information
about enabling the approval function, see Enabling or disabling automatic
approval of requests on page 350.
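The approval behavior described above can be sketched as follows: when approval is enabled, all approvers assigned to the request's customer are notified and any one of them can approve; when it is disabled (the default), requests are approved automatically. The data structures here are illustrative:

```python
# Sketch of the request-approval flow described above.
# Approver lists and the enablement flag are illustrative only.

def submit_request(customer, approvers_by_customer, approval_enabled):
    """Return (status, notified_approvers) for a newly submitted request."""
    if not approval_enabled:
        # Default behavior: the approval process is disabled and
        # every request is approved automatically.
        return "approved", []
    notified = approvers_by_customer.get(customer, [])
    # The request stays pending until any one notified approver grants it.
    return "pending", notified
```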
Security groups in the self-service user interface
User authorizations for the Tivoli Service Automation Manager self-service
interface are managed with security groups. Group membership determines which
requests the user can access. Seven predefined security groups are available in the
self-service user interface.
A user can belong to more than one group. When you create a user, not only do
you select security groups, you also select the grant option for each group that the
user belongs to. When the grant option is enabled, the user can create new users
within the security group.
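The grant option described above can be expressed as a simple check: a user can place a new user into a security group only if the creating user holds that group with the grant option enabled. A sketch with illustrative group names:

```python
# Illustrative check for the grant option: membership in a security group
# alone is not enough to assign that group to a new user; the grant flag
# must be set for the creating user.

def can_assign_group(creator_groups, group):
    """creator_groups maps group name -> grant flag (True/False)."""
    return creator_groups.get(group, False)

def create_user(creator_groups, requested_groups):
    """Create a user record with the requested groups, or refuse."""
    denied = [g for g in requested_groups
              if not can_assign_group(creator_groups, g)]
    if denied:
        raise PermissionError(f"no grant option for: {denied}")
    return {"groups": list(requested_groups)}
```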
The groups for the self-service user interface include:
Cloud administrator
Users in this role are the administrators of the cloud. They can be created
only on the cloud policy level, and are always assigned to the PMRDPCUST
global customer. They can perform the following tasks:
v Create customers based on customer templates and delete them
v Modify their own information
v Register and unregister software images
v Allow resource allocations and changes within the whole cloud
v Check the