
Policy Control Server

PCS
PCS 6.3_3.0

PCS6.3-RMS-Installation and Configuration Guide
Release No

PCS 6.3_3.0

Document Version

6.0

Date

18 August 2014

Status

IUS

Copyright REDKNEE 2014. All Rights Reserved.

Author:
Mr. Judson Jebaraj D, BSO VIPT RD IMS PCS Pltf PCS 2 IN

Translator:

In addition to the authors named on the cover page the following persons have
collaborated on this document:

P. BalaSubramanian, BSO VIPT RD IMS PCS Pltf PCS 2 IN
Prem Kumar MN, BSO VIPT RD IMS PCS Pltf PCS 2 IN

The document comprises 163 pages.


This issue was created on 8/18/2014 4:21:00 PM
This document was edited with MS Word 2003 SP1


Table of Contents
Table of Contents _______________________________________________________ 3
0 General Information ___________________________________________________ 8
0.1 History of Change ________________________________________________________ 8
0.2 References ______________________________________________________________ 8
0.3 Glossary and Abbreviations ______________________________________________ 10
0.4 Keyword/Descriptor _____________________________________________________ 12
0.5 Figures ________________________________________________________________ 13
0.6 Tables_________________________________________________________________ 13

1 Introduction_________________________________________________________ 14
1.1 Target group ___________________________________________________________ 14
1.2 TSP Installation overview ________________________________________________ 14
1.3 Network environment for a PCS installation _________________________________ 15

2 Installation Overview _________________________________________________ 17


2.1 General Installation Overview ____________________________________________ 17
2.2 PCS Solaris single node Installation overview ________________________________ 17
2.3 PCS Solaris cluster Installation overview ___________________________________ 18
2.4 Hardware Requirements _________________________________________________ 19
2.4.1 PCS install server ____________________________________________________________ 19
2.4.2 PCS hosts __________________________________________________________________ 19
2.4.3 NTP server _________________________________________________________________ 20
2.4.4 Administration Console _______________________________________________________ 20
2.4.5 Terminal Concentrator, Terminal Server __________________________________________ 20

2.5 Software List ___________________________________________________________ 20


2.5.1 Install server________________________________________________________________ 21
2.5.2 PCS hosts __________________________________________________________________ 22
2.5.3 PCSmgr ___________________________________________________________________ 24

2.6 Interface and IP address assignment for a PCS host __________________________ 24


2.6.1 Traffic separation principle ____________________________________________________ 24
2.6.2 Example for Sun Netra T5220 Single Node Cabling _________________________________ 25
2.6.3 Example for Sun Netra T5220 Cluster with ST2540 Cabling __________________________ 25

2.7 Prepare TPD ___________________________________________________________ 25

3 Installation Environment Preparation ____________________________________ 26


3.1 Install Server ___________________________________________________________ 26
3.1.1 Download PCS installation files from the release area _______________________________ 26
3.1.2 Unpack the TSP medium (*.cpio) _______________________________________________ 26
3.1.3 Install the 3rd party utility packages ______________________________________________ 28
3.1.3.1 Install OpenSSH _________________________________________________________ 28
3.1.3.2 Install Perl______________________________________________________________ 28
3.1.3.3 Install JDK1.5 & JDK1.6 __________________________________________________ 29


3.1.4 Install the TSP medium on the install server _______________________________________ 29


3.1.5 Install PCSEP package ________________________________________________________ 30
3.1.5.1 Installation _____________________________________________________________ 31
3.1.5.2 Verification _____________________________________________________________ 32
3.1.6 Copy PCSmain & PCSmgr package to the install server ______________________________ 32
3.1.7 Install PCS Host Specific Package_______________________________________________ 32
3.1.7.1 Check whether the PCSEP package has been installed ___________________________ 33
3.1.7.2 Install PCS Host Specific Package ___________________________________________ 33
3.1.7.3 Verification _____________________________________________________________ 33

3.2 Configuration of the AutoInstall for a PCS host ______________________________ 34


3.2.1 Copying Customization package ________________________________________________ 34
3.2.2 Run the script PCS_inject.sh ___________________________________________________ 34
3.2.3 Execute prepTspInstall.pl _____________________________________________________ 35
3.2.4 Additional steps _____________________________________________________________ 38
3.2.4.1 Configure-tftp-nfs.sh _____________________________________________________ 38
3.2.4.2 Update the script AiRaidStorageTek.pm ______________________________________ 39
3.2.5 Verification of the prepTspInstall.pl actions _______________________________________ 39
3.2.6 FAQ and Troubleshooting Information ___________________________________________ 40
3.2.6.1 Unpack TSP medium ERROR ______________________________________________ 41
3.2.6.2 setup TSP install server ERROR ____________________________________________ 41
3.2.6.3 Preparation of Automated installation ERROR _________________________________ 41
3.2.6.4 SMAWtsphc error _______________________________________________________ 41

3.3 Password details ________________________________________________________ 41

4 PCS Installation on Single Node ________________________________________ 42


4.1 Pre-conditions __________________________________________________________ 42
4.2 Firmware ______________________________________________________________ 43
4.2.1 Sun Netra T5220 ____________________________________________________________ 43
4.2.2 Sun Netra T2000 ____________________________________________________________ 43

4.3 OBP setting (ok prompt) _________________________________________________ 43


4.4 Start Installation ________________________________________________________ 45
4.4.1 The Result of Automated Installation ____________________________________________ 45

4.5 Post Installation steps ____________________________________________________ 46


4.5.1 Establish communication with @Commander ______________________________________ 46
4.5.2 Add/delete route for PCS ______________________________________________________ 47
4.5.3 Alarm: File Permission Monitor found discrepancies. Look at
/var/opt/INTPaghar/run/hids/fileperm.log for more details ________________________________ 48
4.5.4 Alarm: Rhosts Monitor found discrepancies. Look at
/var/opt/INTPaghar/run/hids/rhostsAuth.log for more details ______________________________ 49
4.5.5 Backup & Restore Configuration________________________________________________ 49
4.5.6 B&R required additional manual step ____________________________________________ 49
4.5.7 Unix services Management ____________________________________________________ 49
4.5.8 TSP User Management _______________________________________________________ 50
4.5.9 Apply following patches (DV6307 & DV5733) ____________________________________ 50
4.5.10 Password expired ___________________________________________________________ 52
4.5.11 TSP GUI using https ________________________________________________________ 52

4.6 Verification ____________________________________________________________ 52


4.6.1 Check if the all process is running _______________________________________________ 52
4.6.2 Using TSP GUI _____________________________________________________________ 53
4.6.3 Check the hostname __________________________________________________________ 55
4.6.4 IP addresses ________________________________________________________________ 55


4.6.5 NTP configuration ___________________________________________________________ 57


4.6.6 UNIX Services ______________________________________________________________ 57

4.7 FAQ and Troubleshooting Information _____________________________________ 57


4.7.1 Installation does NOT start ____________________________________________________ 58
4.7.2 Client is installed from the wrong install server ____________________________________ 58
4.7.3 Failure in tspFIRSU.sh script ___________________________________________________ 59
4.7.4 The PCS host did NOT boot up automatically after Solaris has been installed. ____________ 60

5 PCS Installation on Cluster node ________________________________________ 61


5.1 Pre-conditions __________________________________________________________ 61
5.2 ST2540 storage IP configuration ___________________________________________ 61
5.3 OBP Setting (ok prompt) _________________________________________________ 61
5.4 Firmware upgrade ______________________________________________________ 64
5.5 Start Installation ________________________________________________________ 64
5.5.1 The result of Automated Installation _____________________________________________ 65
5.5.2 Installation log ______________________________________________________________ 66
5.5.3 Duration of Automated Installation ______________________________________________ 66

5.6 Post Installation steps ____________________________________________________ 66


5.6.1 Establish communication with @Commander ______________________________________ 66
5.6.2 Add/delete route for PCS ______________________________________________________ 66
5.6.3 Backup & Restore Configuration ________________________________________________ 66
5.6.4 Password Authentication for Plugin Filetransfer ____________________________________ 66
5.6.5 Alarm: File Permission Monitor found discrepancies. Look at
/var/opt/INTPaghar/run/hids/fileperm.log for more details ________________________________ 67
5.6.6 Alarm: Rhosts Monitor found discrepancies. Look at
/var/opt/INTPaghar/run/hids/rhostsAuth.log for more details ______________________________ 67
5.6.7 Service /system/webconsole:console in maintenance state ____________________________ 67
5.6.8 Apply following patches (DV6307 & DV5733) ____________________________________ 70
5.6.9 Check the file /rtp_environ.txt __________________________________________________ 70
5.6.10 Correct the file CfrAll_BoundResources.tcn ______________________________________ 70
5.6.11 PAM: Authentication failed for rtp99 from clusternode*-priv ________________________ 71
5.6.12 Re-dimensioning context value ________________________________________ 72
5.6.13 Apply the following bin and lib ________________________________________________ 72
5.6.13.1 HotFix fallback script ____________________________________________________ 78
5.6.14 Password expired ___________________________________________________________ 84
5.6.15 TSP GUI using https ________________________________________________________ 84

5.7 Verification ____________________________________________________________ 84


5.7.1 PCS processes ______________________________________________________________ 84
5.7.2 Using TSP GUI _____________________________________________________________ 84
5.7.3 Hostname __________________________________________________________________ 87
5.7.4 NTP configuration ___________________________________________________________ 88
5.7.5 Cluster Status _______________________________________________________________ 88
5.7.6 IP Address _________________________________________________________________ 94
5.7.7 UNIX Services ______________________________________________________________ 98

5.8 Troubleshooting Information ____________________________________________ 105


5.8.1 Installation does NOT start ___________________________________________________ 105
5.8.2 Client is installed from the wrong install server ___________________________________ 105
5.8.3 After the Scratch Installation RTP services are not up ______________________________ 105
5.8.4 End of the Installation one node become panic ____________________________________ 107
5.8.5 Installation failure in script configureRaid.sh _____________________________________ 108
5.8.6 Installation failure in script configureRaid.sh _____________________________________ 109


5.8.7 Restrict names to 30 characters and try again _____________________________________ 112


5.8.7.1 Check you have done the steps explained in section 3.2.4.1 Configure-tftp-nfs.sh _____ 112
5.8.8 Failure in tspFIRSU.sh script __________________________________________________ 113
5.8.9 Reconfiguration step 12 was forced to return _____________________________________ 113
5.8.10 Failure in tspStart.sh script __________________________________________________ 114
5.8.11 prtdiag failure _____________________________________________________________ 116

6 PCS Manager Installation (Single and Cluster) ___________________________ 118


6.1 Requirements for PCS Manager __________________________________________ 118
6.2 Automatic Installation of PCS Manager ___________________________________ 118
6.2.1 Verification of Installation ____________________________________________________ 118
6.2.2 Configuration for Client-Server communicating via Proxy ___________________________ 119
6.2.2.1 Proxy Configuration for Browser ___________________________________________ 119
6.2.2.2 Automatic Proxy Configuration for Java Web Start _____________________________ 119

6.3 Start of PCSmgr as JWS Application ______________________________________ 122


6.4 Verification of Interworking of PCSmgr with the PCS _______________________ 123
6.5 Starting of PCSmgr via @Com ___________________________________________ 124
6.6 Reconfiguration of IP-Address for PCSmgr ________________________________ 126
6.7 FAQ and Troubleshooting Information ____________________________________ 126
6.7.1 Is it possible to run the older and the current PCSmgr release concurrently at the same client?
_____________________________________________________________________________ 126
6.7.2 Starting of PCSmgr leads to Error message _______________________________________ 126
6.7.3 Starting of PCSmgr leads to Error message Page cannot be found (HTTP 404) _________ 126
6.7.4 Configuration parameters are modified by the editor, but after saving it, the modifications are
not visible. ____________________________________________________________________ 126
6.7.5 New projects have been generated by the PCSmgr. But an inconsistency message appeared _ 126
6.7.6 Java Web Start Download Error occurs __________________________________________ 126
6.7.7 JNLP Cache Size Warning____________________________________________________ 127
6.7.8 After Update of PCSmgr the old Version appears at the Client ________________________ 129

7 Software Upgrade of a PCS host _______________________________________ 132


7.1 PCS upgrade via SUF___________________________________________________ 132
7.1.1 Upgrade to PCS6.3 _________________________________________________________ 132
7.1.1.1 Prepare install server ____________________________________________________ 133
7.1.1.1.1 Install PCSEP package _______________________________________________ 133
7.1.1.1.2 Prepare PCS upgrade RSU unit _________________________________________ 133
7.1.1.1.3 Prepare the control.xml file ____________________________________________ 134
7.1.1.1.4 Preparing datainit file ________________________________________________ 137
7.1.1.1.5 Upgrade SUF Software in PCS hosts ____________________________________ 139
7.1.1.1.6 Preparing Cluster Description file _______________________________________ 140
7.1.1.1.7 Start RSU for PCS upgrade ____________________________________________ 142
7.1.2 Post RSU steps _____________________________________________________________ 148
7.1.2.1 Disabling the pacct service ________________________________________________ 148
7.1.2.2 Start the crash.d service __________________________________________________ 148
7.1.2.3 Password Authentication for Plugin Filetransfer _______________________________ 148
7.1.2.4 PAM: Authentication failed for rtp99 from clusternode*-priv _____________________ 148
7.1.2.5 TSP GUI using https _____________________________________________________ 149
7.1.3 Post fallback steps __________________________________________________________ 149
7.1.3.1 OBP setting____________________________________________________________ 149

7.2 Trouble Shooting on RSU _______________________________________________ 151


7.2.1 ERROR: Control server failed to start ___________________________________________ 151


7.2.2 Error: Node XXX.XXX.XXX.XXX has wrong SW-Version _________________________ 152


7.2.3 StopRTP failed _____________________________________________________________ 152
7.2.4 INTPaains_plugin_wrapper_head.sh error _______________________________________ 153
7.2.5 Datainit_RSU.xml file not well-formed__________________________________________ 153
7.2.6 Before fallback follow below steps _____________________________________________ 154

8 Annex_____________________________________________________________ 162
8.1 Screen trace for Install Server Cleanup ____________________________________ 162
8.2 Screen trace for Media extraction_________________________________________ 162
8.3 Screen trace for Media installation ________________________________________ 162
8.4 Screen trace of install PCSEP package_____________________________________ 162
8.5 Screen trace of PCS Cluster _____________________________________________ 162
8.6 Screen trace of PCS Single Node FE_______________________________________ 162
8.6.1 Sun Netra T5220 single Node _________________________________________ 162

8.7 Screen trace of PCS and PCSmgr RSU on Cluster ___________________________ 163
8.7.1 Upgrade to PCS6.3 _________________________________________________________ 163
8.7.2 Upgrade Fallback ___________________________________________________________ 163

8.8 Sample datainit file for RSU _____________________________________________ 163


0 General Information
0.1 History of Change
Version - Date - Description
1.0 - 20-Nov-2013 - Initial Version of PCS 6.3_2.0
2.0 - 26-Nov-2013 - Added additional comment in 7.1.1.1.7 as per Jira PCSSYVE-7233.
3.0 - 18-Dec-2013 - Changed section 7.2.6 title.
4.0 - 17-Jun-2014 - Added additional comment in sec 4.5.1.
5.0 - 01-Aug-2014 - Initial Version of PCS 6.3_3.0.
6.0 - 18-Aug-2014 - Added new sections 4.5.10 & 5.6.14 and 4.5.11 & 5.6.15 as per Jira CST-405. Changed password from upper case to lower case in sec 3.3.

0.2 References
Important Documents:
You should download and read the following documents from the ShareNet document area as background information before installing PCS 5000 for Solaris:
PCS Internal Documents:
https://sharenet-ims.inside.nokiasiemensnetworks.com/Guest/Open/450314831
1. [Hardware description guide]
2. [Backup & Restore and FSR]
3. [Install Server Configuration Guide]
4. [How to Generate Host Specific Package]
5. [Technical Project Description (TPD)]

https://sharenet-ims.inside.nokiasiemensnetworks.com/Guest/Open/433597572

6. [Operating PCS-5000]
7. [User manual]
8. [PCM User guide]



TSP Documents:
https://sharenet-ims.inside.nokiasiemensnetworks.com/livelink/livelink?func=ll&objId=411275963&objAction=Browse&viewType=1
1. [TSP_InstallServer]
TSP7000 Automated First Installation (Install server) (First, Initial Installation, Solaris
10),
Andrea Geyer
P30309-A4983-A001-XX-7618
2. [TSP_SUF_UG]
TSP7000: Software Update Framework (SUF) User's Guide
P30309-A4344-A000-xx-7618
3. [TSP_ICG-OEM-SF]
TSP7000 Installation & Configuration Guide - OEM Software (Sun Fire, TSP V9.0)
P30309-A5952-T090-xx-7618
4. [TSP_AdminGuide]
TSP7000 Administration Guide
P30309-A7391-T092-xx-7618
5. [TSP_RTP-ICG]
TSP7000 RTP Software Installation & Configuration Guide
P30309-A7018-T091-xx-7618
https://sharenet-ims.inside.nokiasiemensnetworks.com/livelink/livelink?func=ll&objId=411281483&objAction=Browse&viewType=1
6. [TSP_SCG-SF]
TSP7000 Standard Configuration Guide (SunFire, TSP 9.0base)
P30309-A6151-T090-xx-7618
7. [TSP_PCG-EX-SF]
TSP7000 Platform Configuration Guide for Example Project (SunFire, TSP 9.0base)
P30309-A6152-T090-xx-7618



External Documents:
These documents can be consulted when further background information is required.
1. [Sun_T5220-InstGuide]
Netra T5220 Server Installation Guide
http://dlc.sun.com/pdf/820-3009-14/820-3009-14.pdf
2. [Sun_T5220-AdmGuide]
Netra T5220 Server Administration Guide
http://dlc.sun.com/pdf/820-3010-11/820-3010-11.pdf
3. [Sun_T5220-ProductNotes]
Netra T5220 Server Product Notes
http://dlc.sun.com/pdf/820-3014-15/820-3014-15.pdf
4. [Sun_ Solaris10-InstGuide-BasicInst]
Solaris 10 5/08 Installation Guide: Basic Installations
http://docs.sun.com/app/docs/doc/820-4039
5. [Sun_ Solaris10-InstGuide-LiveUpgrade]
Solaris 10 8/08 Installation Guide: Solaris Live Upgrade and Upgrade Planning
http://docs.sun.com/app/docs/doc/820-4041/

6. [Sun_ Solaris10-InstGuide-NetInstall]
Solaris 10 5/08 Installation Guide: Network-Based Installations
http://docs.sun.com/app/docs/doc/820-4040/

7. [Sun_ Solaris10-HWPLATFORMGUIDE]
Solaris 10 Sun Hardware Platform Guide
http://docs.sun.com/app/docs/doc/817-6337?l=de
8. [Sun_ Solaris10-InstGuide-JumpStart]
Solaris 10 5/08 Installation Guide: Custom JumpStart and Advanced Installations
http://docs.sun.com/app/docs/doc/820-4042/

0.3 Glossary and Abbreviations


AF - Application Function
CAF - Common Application Framework
NE3S - Nokia Enhanced SNMP Solution Suite
NAC - NetAct for Core
O2ML - Operation and Maintenance (O&M) Modeling Language
B&R - Backup and Restore
CMTS - Cable Modem Termination System
ALOM - Advanced Lights Out Manager
FIRSU - First Installation RSU (SUF)
GGSN - Gateway GPRS Support Node
Go - Interface between PCS and 3G R5 GGSN
Gq - Interface between PCS and AF
GUI - Graphical User Interface
Gx - Interface between PCS and 3G PreR7 GGSN
HW - Hardware
ICG - Installation and Configuration Guide
IS - Install Server. A host where the TSP install server application is available
I&C - Installation and Configuration
JDK - Java Developer's Kit
JRE - Java Runtime Environment
JWS - Java Web Start
LAN - Local Area Network
NE - Network Element
OBP - Open Boot Prompt
OS - Operating System
OSU - Online Software Update
PCRF - Policy and Charging Rule Function
PCM - Policy and Configuration Management
PCS - Policy Control Server, the Nokia Siemens Networks product. PCS provides different functions (PCRF, PCMM Policy Server, SPDF)
PDF - Policy Decision Function
PCMM - PacketCable Multimedia Specification
PKTMM2 - Interface between PCS and CMTS
PKTMM3 - Interface between PCS and AM
PKTMM4 - Interface between PCS and RKS
PCSmgr - PCS Policy and Configuration Management System. It consists of an agent integrated with the PCS and an external manager application
RKS - Record Keeping Server
RSU - Rolling Software Update
RTP - Resilient Telco Platform
SCG - Standard Configuration Guide
SMU - Split Mode Upgrade
SPDF - Service-based Policy Decision Function
Sun - Sun Microsystems Corporation
SUF - Software Update Framework
SVM - Solaris Volume Manager
SW - Software
TPD - Technical Project Data
TSP - Telecommunication Service Platform
PCS_version
Table 0-1 Glossary and Abbreviations

0.4 Keyword/Descriptor
Keyword - Description (Example)

TSP7000 - TSP7000 is the middleware used by PCS. The TSP package used for PCS installation consists of the operating system, OEM SW and the TSP middleware including the Install Server application.
CAF - CAF was introduced because NetAct only talks to CAF.
Install server - A host where the TSP install server application is running and one or more PCS install images are available. The install server can be used for initial installation of PCS and for PCS software upgrades. (Example: a Sun server or workstation)
Install client - PCS host being installed (cluster or single node). (Example: a Sun server)
AutoInstall - Automated installation.
<iserverRoot> - The base directory where the InstallServer package is installed. (Example: /export/home/iserver)
Client directory - The directory <iserverRoot>/autoinstall/clients/<name of 1st node>. (Example: /export/home/iserver/autoinstall/clients/turtle)
Config directory - <iserverRoot>/autoinstall/<TspBaseAPS>/<SF|PW>/config
Scripts directory - <iserverRoot>/autoinstall/<TspBaseAPS>/<SF|PW>/scripts
Project directory - <iserverRoot>/projects (Example: /export/home/iserver/projects/)
Install directory - <iserverRoot>/install (Example: /export/home/iserver/install/)
I&C scenario - The I&C process for the specific HW and SW.
PCSEP package - The PCS package which contains installation scripts used for PCS automatic installation through the TSP install server. This package will be installed on the install server.
PCSmain package - PCS package which contains all binaries, configuration files and shared libraries for the Policy Server. The package will be installed on the PCS host.
PCS host specific package - PCS package which contains scripts providing PCS host configuration information. This package will be installed on the install server.
Expect - A tool for automating interactive applications, which is used for PCS automatic configuration.
TSP medium - A cpio file containing the TSP 7000 software including the Operating System, OEM packages and the TSP 7000 install server software. (Example: for Solaris it is a cpio archive generated by the mediabuild process)
PCS host - The machine running as Policy Server.
Smc - Solaris Management Console.
PCS Manager - The PCS Manager (PCSmgr) is the application which is used to change the configuration parameters of the PCS at run time (e.g. AFs, CMTS, policy allowed peers, timers, etc.). More information on the PCSmgr is described in the PCS User Manual [PCS_UMN].
Network element - A machine running software for a specific function. (Example: a Policy Server)
Logic interface - Logical interfaces provide PCS functions (e.g. interfaces to other network elements such as application servers, or PCS operation). Logical interfaces are mapped to physical interfaces, i.e. Ethernet ports, during the PCS installation. (Example: PktMM2 interface or TSP B&R interface)

Table 0-2 Keywords

0.5 Figures
Figure 1-1 basic PCS network topology example .......................................................... 15
Figure 4-1 Opening page for TSP GUI ............................................................ 53
Figure 4-2 TSP GUI shows after login ........................................................................... 54
Figure 4-3 Process status in TSP GUI ........................................................................... 55
Figure 5-1 Opening page for TSP GUI ............................................................ 85
Figure 5-2 TSP GUI shows after login ........................................................................... 86
Figure 5-3 Process status in TSP GUI ........................................................................... 87
Figure 6-1 Proxy Server Configuration in Local Area Network (LAN) Settings ............. 119
Figure 6-2 Java Application Cache Viewer .................................................................. 120
Figure 6-3 Java Control Panel ..................................................................................... 121
Figure 6-4 Network Settings for Using Proxy Server.................................................... 121
Figure 6-5: Project View of Policy and Configuration Management after installation .... 123
Figure 6-6: Java Web Start Opening Window .............................................................. 125
Figure 6-7: Project View of Policy and Configuration Management ............................. 125
Figure 8: JNLP Cache Size Warning ......................................................... 127

0.6 Tables
Table 0-1 Glossary and Abbreviations ........................................................................... 12
Table 0-2 Keywords ...................................................................................................... 13
Table 2-1 Installation Overview ..................................................................................... 17
Table 2-2 PCS with TSP on Solaris Single Node installation overview .......................... 18
Table 2-3 PCS with TSP on Solaris Cluster installation overview .................................. 18
Table 2-4 Sun Netra T5220 HW spec ............................................................................ 20
Table 2-5 PCS installation media .................................................................................. 21
Table 2-6 Install server SW list ...................................................................................... 22
Table 2-7 PCS with TSP on Solaris single node SW list ................................................ 22
Table 2-8 PCS with TSP on Solaris cluster nodes SW list ................................................ 23
Table 2-9 PCSmgr SW list............................................................................................. 24
Table 2-10 Network Design ........................................................................................... 24



1 Introduction
PCS 5000 is a REDKNEE product that implements a Policy Control Server for different types
of access networks. The PCS software includes different features (e.g. CPCS, PCRF,
etc.) as described in the associated release notes for the specific PCS version. The
desired functionality of PCS for a customer is selected during the installation and can also
be changed later. This installation guide is valid for PCS6.3. For detailed information
about PCS functionality and operation please refer to the PCS Technical Description and
the PCS User Manual.
This document is part of the PCS service documentation which explains how to
customize PCS (e.g. configure Policy Server) and how to perform trouble shooting.

1.1 Target group


This installation and configuration guide is intended for REDKNEE service or test
representatives who will install a released version of the Policy Control Server and all
related software. PCS is a TSP 7000 based network element, so PCS inherits the typical
installation procedure for a TSP 7000 based network element using an external Install
Server.
After following the instructions in this manual, a PCS host with default configuration will be
installed and ready for operation.
The reader is expected to have detailed knowledge of the Sun Solaris 10 operating system
and of TSP 7000 install server operation. It is also assumed that the HW installation has
been performed. For Sun Netra T2000 and T5220 these steps are explained in
[Hardware description guide].
It is recommended that you read this manual before starting the installation and collect all
information that you need during the installation (IP addresses, passwords).

1.2 TSP Installation overview


This section gives a very brief overview about the installation for a product that is based
on the TSP 7000 middleware. First Installation (also called Scratch installation) refers to
the installation of a new PCS host or a complete re-installation where all previous data is
lost.
A TSP Solaris product such as PCS is always installed from an install server that hosts all
necessary software. The TSP install server internally uses the Solaris jumpstart to install
the operating system on the target system. The PCS host to be (re-) installed sends an
installation request to the install server and then a new installation from scratch is
performed. For further information on the TSP 7000 install server operation refer to
[TSP_InstallServer].
Note: All information on the PCS host is lost during the first-time installation or
re-installation, so take a backup of all required information before performing a re-installation.



1.3 Network environment for a PCS installation


For different PCS deployment scenarios the network environment of the customer
network is different (e.g. Interface to Application Servers and Access Network Control
Nodes like GGSN or CMTS). More details how to connect PCS physical interfaces to a
customer network can be found in [Hardware description guide]. In general it depends on
the customer environment and the required PCS functionality, but it has no influence on
the installation procedure. The following figure shows the basic network topology for
installing PCS with TSP on Solaris Cluster/Single node. The diagram does not show how
PCS will be connected to a customer network.

Figure 1-1 basic PCS network topology example

The PCS host (typically a Sun Netra T2000/T5220) will be installed via the LAN
connection to the Install Server. For the PCS installation it is sufficient to connect one
physical IP interface of PCS to the install network. If possible select the IP addresses of
the PCS host according to the IP addresses required in the customer network. For
customer deployments, the PCS is usually configured such that different physical
interfaces are used for security and performance reasons.
The PCSmgr (here shown on the PCS host as PCM) is installed together with the
PCSmain on the PCS host. The PCSmgr is used to change policies and most
configuration information after PCS is installed.
The Install Server is used for First Installation and update of the PCS host and the PCS
Manager. The install server runs the install server application.
Note: PCS and Install Server Admin LAN have to be located in the same subnet.
The Terminal Concentrator provides telnet access to the serial management port of the
PCS host; this allows booting and network installation of the system. Alternatively the PCS
host's net management port can be connected to the Ethernet switch directly.



The Admin Console can be used for PCS administration operations. The install server or
any other host can act as Administration Console.
In addition to the components mentioned before a NTP and a DNS server can optionally
be supported by the PCS system:
NTP server, provides time synchronization service, optional but highly recommended
DNS server provides name resolving service, optional.
At the end of the installation procedure the PCS system is operational with a default
functionality and configuration.
For additional configurations after the installation please see the PCS User Manual, the
PCS Service Manual and the PCS Manager User Guide. The User Manual and PCS
Manager User Guide contain procedures usually performed by the PCS customer. The
Service Manual provides mainly additional information about configuration of PCS for
OAM and Error Procedures executed by service representatives.



2 Installation Overview
Before starting an installation, confirm that the hardware is supported by the current PCS
release based on the PCS release notes. Make sure that the HW is prepared as
described in the PCS [Hardware description guide] (e.g. including necessary firmware
upgrades) and you have all necessary passwords for your environment available.

2.1 General Installation Overview


Step
1

Action
Get Release Note

Hardware Installation
Overview

Software Installation
Overview

Guide

Network Planning

Fill out and submit TPD


according to your I&C
scenario.

Hardware setup

Check the hardware specification and


confirm if your hardware is qualified to
install PCS
Check the software specification and
determine the installation method. Download
the PCS installation medium
Make network planning according to the
planned PCS usage and determine the
required IP addresses for different PCS
interfaces, and etc.
Fill out the TPD based on the hardware,
software configuration and network
planning. Prepare your own host specific
package according to the [How to
Generate Host Specific Package].
Before the software installation the HW must
be installed and connected properly.

TSP, CAF, Solaris single


node installation
TSP, CAF, Solaris cluster
node installation

The first time preparation of PCS Install


server, PCS host and PCSmgr. This is the
configuration for all official PCS users and
customers.

8
Software Update
Table 2-1 Installation Overview

Section
N/A
2.4

Software Upgrade PCSmain and PCSmgr

2.5
2.6

2.7

N/A
4
5
7

Note: This document contains troubleshooting hints for each installation type at the end
of chapters 4 to 5 respectively. Please consult those parts if a problem occurs during the
installation. In case the information there does NOT solve the problem, please report the
problem via the PCS5000 Jira system
(https://jira.inside.nokiasiemensnetworks.com/browse/PCSSYVE).

2.2 PCS Solaris single node Installation overview

Step 1 (Install Server): Download installation files from Release area - Chapter 3.1.1 - M
Step 2 (Install Server): TSP medium preparation - Chapters 3.1.2-3.1.4 - M
Step 3 (Install Server): PCSEP package preparation - Chapter 3.1.5 - M
Step 4 (Install Server): PCSmain and PCSmgr package preparation - Chapter 3.1.6 - M
Step 5 (Install Server): PCS host specific package preparation - Chapter 3.1.7 - M
Step 6 (Install Server): Automated installation preparation - Chapter 3.2 - M
Step 7 (PCS): Check prerequisites - Chapter 4.1 - M
Step 8 (PCS): Firmware - Chapter 4.2 - M
Step 9 (PCS): Go to OBP - Chapter 4.3 - M
Step 10 (PCS): Start Installation - Chapter 4.4 - M
Step 11 (PCS): Post Installation steps - Chapter 4.5 - M
Step 12 (PCS): Verification - Chapter 4.6 - M
Step 13 (PCS): FAQ and troubleshooting - Chapter 4.7 - M

Table 2-2 PCS with TSP on Solaris Single Node installation overview

M = mandatory, O = Optional

2.3 PCS Solaris cluster Installation overview


Step 1 (Install Server): Download installation files from Release area - Chapter 3.1.1 - M
Step 2 (Install Server): TSP medium preparation - Chapters 3.1.2-3.1.4 - M
Step 3 (Install Server): PCSEP package preparation - Chapter 3.1.5 - M
Step 4 (Install Server): PCSmain and PCSmgr package preparation - Chapter 3.1.6 - M
Step 5 (Install Server): PCS host specific package preparation - Chapter 3.1.7 - M
Step 6 (Install Server): Automated installation preparation - Chapter 3.2 - M
Step 7 (PCS): Check prerequisites - Chapter 5.1 - M
Step 8 (PCS): Go to OBP - Chapter 5.3 - M
Step 9 (PCS): Firmware - Chapter 5.4 - M
Step 10 (PCS): Start Installation - Chapter 5.5 - M
Step 11 (PCS): Post Installation steps - Chapter 5.6 - M
Step 12 (PCS): Verification - Chapter 5.7 - M
Step 13 (PCS): Troubleshooting information - Chapter 5.8 - M

Table 2-3 PCS with TSP on Solaris Cluster installation overview

M = mandatory, O = Optional


2.4 Hardware Requirements


2.4.1 PCS install server
The X86 (Netra X4270) based system, which can be used as a TSP install server, is
installed automatically. For more information about automated installations refer to
[Install Server Configuration Guide]. The following requirements must be fulfilled by the
system in order to be used as install server:
1. Free disk space > 60 GB (more if the install server is to be used for many PCS
hosts); this space is required for the media file (*.cpio), the extracted media and the media
installation.
2. One dedicated Ethernet interface used for installation.
3. One dedicated Ethernet interface used for administration.
Refer to [Hardware description guide] for more information about install server hardware
details.
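A quick way to confirm these requirements from a shell on the install server is sketched below; the file system to check depends on where the install server data is placed, so the commands are only an example:

root@iserver# df -kh            (the partition that will hold the media needs more than 60 GB free)
root@iserver# ifconfig -a       (verify that the installation and administration interfaces are configured)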

2.4.2 PCS hosts


The standard hardware platform supported for PCS 5000 First Installation is the Sun Netra
T5220 (8 core), either as a stand-alone PCS cluster or DA cluster with ST2540 storage, or
as a Sun Netra T5220 (8 core) single node FE. Other hardware (Sun Netra T2000 (8 core),
single node and cluster configurations) is only for trials and not for commercial purposes.
For more details refer to the release notes. Deviations of the HW (in particular disk size)
may prevent a successful installation.
Sun Netra T5220
HW-Unit - Quantity / Quality - Comment
CPU - 1x 1.2 GHz - 1x UltraSPARC-T2 processor with 8 cores / 64 threads
Main Memory - 64 GB - 8 GB per core; 16 DIMM slots with 4 GB (FB) each
Cache - 4 MB - L2 (integrated); 16 KB instruction / 8 KB data cache
Internal Disk Drive - 4x 146 GB - 10k RPM, SAS (2.5 inch)
Ethernet Ports - 4x 10/100/1000BT - onboard, on 2 controllers (RJ-45); 1x 10/100BT - dedicated management port (RJ-45, NET MGT)
Serial Ports - 1x ttya - DB-9; 1x SC - RJ-45 (SER MGT)
PCI - 2x PCI-X 64 bit, 133 MHz (one of half length (slot 3), full height otherwise; PCI-X is converted to PCIe 4-lane); 1x PCIe x8 (full length, full height, on x16 connector (slot 5)); 1x PCIe x8 (low profile, half length, on x8 connector (slot 2)); 2x PCIe x4 (low profile, half length, on x8 connectors (slots 0 and 1))
DVD-ROM - T5220 systems with 4 disks cannot have a DVD drive
USB Ports
Form factor - 2U, depth 500 mm


Table 2-4 Sun Netra T5220 HW spec

Refer to [Hardware description guide] for more information.

2.4.3 NTP server


An NTP server is needed to synchronize the clocks of the computers over the network. It is
highly recommended to have access to an NTP server from the install server; otherwise
problems may arise during the installation.
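If an NTP server is available, its reachability from the install server can be checked with a query-only request before the installation starts; <ntp_server_ip> below is a placeholder for the actual server address:

root@iserver# ntpdate -q <ntp_server_ip>     (query only, does not change the local clock)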

2.4.4 Administration Console


This is the console for local administration purposes. Only administrators have access. At
least one console is required per site. It is connected to the TSP-Admin LAN. The Admin
Console may be co-located with the installation server. But any other server may also be
used.

2.4.5 Terminal Concentrator, Terminal Server


The serial management ports of the PCS-5000 servers are connected to the Terminal
Concentrator / Terminal Server; this provides remote maintenance and allows the initial
installation of blank machines. The serial interface is also required to get access to the
Open Boot PROM.
Alternatively the PCS host's net management port can be connected to the Ethernet
switch directly; in that case no terminal server is required.

2.5 Software List


This chapter gives an overview of the software components which are required during
installation of a PCS system. All necessary software for an installation is provided in
the PCS release area (TSP medium (including OS), PCS package, PCSmgr package,
PCSEP package, etc.).
Setting up an install server and installing the PCS software are executed mostly
automatically.
Install server
The setup of an install server requires a Solaris host (Install Server NE) that is
pre-installed with Solaris 10. This step is not described in this manual and the Solaris 10
operating system is not provided by the PCS team.
The TSP 7000 install server software is included in the TSP image/medium that is
provided for the PCS host installation. Setup of the install server software is described in
the [Install Server Configuration Guide] document.
PCS and PCS Manager
The PCS and PCS Manager installation method depends on the operating system of the
PCS host:


Configuration - Media - Install method
TSP, CAF, Solaris single node - TSP medium, PCS packages - Install Server
TSP, CAF, Solaris cluster - TSP medium, PCS packages - Install Server
Table 2-5 PCS installation media

2.5.1 Install server


This section lists all software that needs to be installed on an install server as a
pre-requisite to install one or more PCS host(s) (single node or cluster). If the install
server is used for more than one PCS instance, one set of PCS packages is required per
PCS host. For more information refer to [Install Server Configuration Guide].
Name
OS

Solaris 10

Note

Depend

N/A

Reference
Included in TSP
medium provided in
PCS release area.
Included in TSP
medium provided in
PCS release area.
Included in TSP
medium provided in
PCS release area.
Included in TSP
medium provided in
PCS release area.
Included in TSP
medium provided in
PCS release area.

The Common Application Framework


(CAF) provides a component
framework (called
CFRAME), platform components and
Operating and Maintenance (OAM).

N/A

Included in TSP
medium provided in
PCS release area.

PCSmain application package

N/A

PCS policy control server manager


Contains the installation scripts and
3rd party packages.

N/A

As part for TSP media.


N/A

TSP

Install Server
package

install server package (SMAWtspis)


example project package
(SMAWtspsx)

N/A

SUF package

SUF packages (SMAWsufuc and


SMAWsufut)

N/A

FSR boot
image

FSR package

N/A

SMAWtspfs

OEM wrapper
package

CAF
CAF1.3
PCS main
package
PCSmgr
PCS

PCSEP
Host Specific
package
Customization
package

Contains the customization scripts


for the specific PCS host installation.
Contains interface names & IPs
address details which is required for
CAF.

TSP IS

Available in PCS
release area.
Available in PCS
release area.
Available in PCS
release area.

TSP IS

Provided in PCS
release area.

N/A

Perl

SMAWrtppl

N/A

JDK

SMAWjdk15 + SMAWjdk16

N/A

SMAWossh

N/A

Other

OpenSSH

Pre-requisite for every installation.
Included in TSP medium provided in PCS release area.
Included in TSP medium provided in PCS release area.
Included in TSP medium provided in PCS release area.

M
M
M
M

M
M

M
M

Table 2-6 Install server SW list

2.5.2 PCS hosts


This section lists all software needed for the PCS host, based on different I&C scenarios.
All this software will be installed and run on the PCS host.

PCS with TSP on Solaris single node

The section describes the software components that will be installed on Solaris single
node from the install server in more detail. These include Operating System (Solaris 10),
TSP packages (basic packages, OEM packages and other packages), PCS packages
(PCSmain & PCS manager) and some other packages on PCS host. These packages
are contained in the PCS release provided for a customer and automatically installed
during the installation process.

OS

TSP

Name

Note

Depend

Solaris

Include in TSP Media.

N/A

M*

Basic Package

Include TSP base package and all TSP packages


(e.g. TSP Statistics Manager, TSP Context
Manager and TSP Configuration Tools)

N/A

N/A
N/A

M
M

N/A

N/A
N/A

M
M

TSP&JDK
TSP&JDK
N/A
N/A
TCL
libgcc

M
M
M*
M

Expect

OEM Package
SUF Package

Signalware is NOT included in the TSP medium for


PCS installation OEM package includes DB, JDK,
and Apache and so on.
It is used for software update or upgrade
Include all packages of ICCM Agent

ICCM Package

CAF
Backup&Restore
PCS

PCSmain
PCSmgr
Libgcc
TCL
Expect

Other
NTP Package

The Common Application Framework (CAF)


provides a component framework (called
CFRAME), platform components and Operating
and Maintenance (OAM).

includes SMClibgcc
GCC and TCL won't be installed if Expect is not
selected
The package includes SUNWntpr and SUNWntpu.
The automatic configuration is done by script via
Expect

IPSec Library
Package
IPsec Library packages includes SMAWipsec
Table 2-7 PCS with TSP on Solaris single node SW list


* M = Mandatory, O = Optional

PCS with TSP on Solaris cluster nodes

The section describes the software components that will be installed on Solaris cluster
from the install server in more detail. These include Operating System (Solaris 10 +
SunCluster), TSP packages (basic packages, OEM packages and other packages), PCS
packages (PCSmain & PCS manager) and some other packages on PCS host. These
packages are contained in the PCS release provided for a customer and automatically
installed during the installation process.

OS

Name
Solaris
Sun Cluster

Basic Package

TSP

OEM Package
SUF Package

Note
Include in TSP Media.
Include in TSP Media.
Include TSP base package and all TSP
packages (e.g. TSP Statistics Manager, TSP
Context Manager and TSP Configuration
Tools)
the OEM packages consist:
Signalware is NOT included in the TSP
medium for PCS installation OEM package
includes DB, JDK, Apache and so on.
It is used for software update or upgrade
Include all packages of ICCM Agent

ICCM Package

CAF
Backup&Restor
e
PCSmain

The Common Application Framework (CAF)


provides a component framework (called
CFRAME), platform components and
Operating and Maintenance (OAM).

PCS
PCSmgr
Libgcc
TCL
Expect
Other

includes SMClibgcc
GCC and TCL won't be installed if Expect is
not selected
The package includes SUNWntpr and
SUNWntpu.
The automatic configuration is done by script
via Expect

NTP Package
IPSec Library
Package
IPsec Library packages includes SMAWipsec
Table 2-8 PCS with TSP on Solaris cluster nodes SW list

Depend
N/A

N/A

N/A
N/A

M
M

N/A

N/A

N/A
TSP&JDK
TSP&JDK&OE
M
N/A
N/A
TCL
libgcc

M
M

Expect

M
M
M
M

M = Mandatory, O = Optional


2.5.3 PCSmgr
The PCS policy management application runs on the PCS host as a TSP process.
Name - Note - Depend - Reference
Policy Management: PCSmgr - Available at PCS release area - N/A - M
System package: Zip tools - Uzip - N/A - M
Other: JDK - Update 9 - N/A - M
Table 2-9 PCSmgr SW list

2.6 Interface and IP address assignment for a PCS host


This section gives an overview of how to assign the IP addresses required in the customer
network to the PCS and how to assign logical PCS interfaces to the physical Ethernet
ports of the PCS hardware. The assignment of logical to physical interfaces and the IP
addresses are specified in the TPD according to the requirements at the customer site.
The PCS software can support any type of mapping, e.g. all logical interfaces using the
same physical Ethernet interface, but in a commercial configuration a separate LAN
should be used for each network.

2.6.1 Traffic separation principle


Different traffic (administration, Core, Backup&Restore, PC LAN1 (IMS Lan 1) & PC
LAN2 (PS Lan 2)) should be assigned to different subnets for performance and
security reasons. Table 2-10 Network Design shows, as an example, the recommended
networks for PCS acting as a PacketCable Multimedia Policy Server. For other
configurations the approach is similar (see [Hardware description guide] for more
examples). If the number of logical interfaces exceeds the number of physical interfaces,
several logical interfaces can share one physical port.
Network Name - Usage in VoC - Redundancy
Core-LAN - Also called an OAM LAN, public LAN, for all services and client database connections - Redundant
B&R-LAN - Only used for backup & restoration of PCS hosts - Redundant
Admin-LAN - Private LAN for system installation and administration - No redundancy
PC LAN1 (IMS Lan) - Used for traffic between PCS-5000 and CMTS (pkt-mm2 interface), but also for traffic between PCS-5000 and RKS (pkt-mm4 interface) - Redundant
PC LAN2 (PS Lan) - Used for traffic between PCS-5000 and AM (pkt-mm3 interface) - Redundant

Table 2-10 Network Design



Most logical interfaces can be configured as redundant. In this case each Interface is
configured on 2 different Ethernet ports. Normally the default interface is used with
automatic failover to the second port in case of a failure.

2.6.2 Example for Sun Netra T5220 Single Node Cabling


Refer to [Hardware description guide].

2.6.3 Example for Sun Netra T5220 Cluster with ST2540 Cabling
Refer to [Hardware description guide].

2.7 Prepare TPD


To derive a TPD for an individual PCS host from the TPD form, please proceed as
follows:

Download the latest version from the download area.

Rename the copy of the TPD to PCS_6.3_[EPX]-TPD_<customer>_<hostname>.xls,
e.g. PCS_6.13-TPD_MUCDev_obelix.xls. The [EPX] is the Enhancement Package; if
you want to install a maintenance release, mention the maintenance release version.
The <customer> must be the same as given in the "main" sheet of the TPD; the
<hostname> must be the first part (base name) of the fully qualified host name.

Fill out the copy. In case of any questions on how to fill out the TPD, please contact
PCS service person to get an example TPD.

Send it to the PCS service person, who will arrange that the appropriate PCS SW is
provided in the PCS release area. For experienced PCS installers, instructions are
available in [How to Generate Host Specific Package].



3 Installation Environment Preparation


3.1 Install Server
It is assumed that an install server has been prepared as described in [Install Server
Configuration Guide]; if not, refer to that guide and prepare the install server.
The following section describes all steps that are required to configure the install server
for a PCS host automated installation. This part cannot serve as a TSP install server
tutorial.

3.1.1 Download PCS installation files from the release area


Download all PCS software from the release area to your install server. The directory is
the customer specific version, which is provided along with the Software by the PCS
service person. Check that the Release notes and all software parts are available on the
install server.

Refer to the Release notes for available PCS features, additional instructions for
the installation, required TSP version, restrictions etc.
A PCS release consists of the following parts:
A TSP medium, the file name is like PCS_FI_*.cpio.
PCSmain package, the file name is like PCSmain_PAT_<PCS_version>.pkg.gz.
PCSmgr package, the file name is like PCSmgr_<PCS_version>.pkg.gz.
PCSEP package, the file name is like PCSEP_PA_<PCS_version>.pkg.gz.
PCS host specific package, the file name is like
PCSHturtle_PA_<PCS_version>.pkg.gz; "turtle" is the PCS host name. This package will
be delivered by the service person based on the given TPD.
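As a quick sanity check after the download, the release parts can be listed on the install server; <release_download_dir> below is a placeholder for the customer specific download directory:

root@iserver# ls -l <release_download_dir>

The listing should show the TSP medium (PCS_FI_*.cpio or its split parts), the PCSmain, PCSmgr and PCSEP packages and the host specific PCSH<hostname>_PA_<PCS_version>.pkg.gz package.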

3.1.2 Unpack the TSP medium (*.cpio)


Download the media from the download area (the location is given in the release notes);
you can download the media in parts (each part is ~4 GB and the last part will be <= 4 MB).
Note: Downloading and extracting this media requires an additional 40 GB of free space
on the install server.
If you are downloading the media in parts, then after all parts have been downloaded
successfully, unzip and combine them using the commands below.
If media is for SPARC based install server
bash-2.05# gunzip PCS_FI_SPARCIS_TSPVAR927500_130225_105830_a1.gz
bash-2.05# cat PCS_FI_SPARCIS_TSPVAR927500_130225_105830_a* >
PCS_FI_SPARCIS_TSPVAR927500_130225_105830.cpio

If media is for X86 based install server




bash-2.05# gunzip PCS_FI_X86IS_TSPVAR927500_130126_151019_a1.gz


bash-2.05# cat PCS_FI_X86IS_TSPVAR927500_130126_151019_a* >
PCS_FI_X86IS_TSPVAR927500_130126_151019.cpio

After combining the parts, verify the check sum of the resulting cpio file.

If the media is for a SPARC based install server:

bash-3.00$ cksum PCS_FI_SPARCIS_TSPVAR927500_130225_105830.cpio
4141556959 22469015040 PCS_FI_SPARCIS_TSPVAR927500_130225_105830.cpio
bash-3.00$

If the media is for an X86 based install server:

bash-3.00# cksum PCS_FI_X86IS_TSPVAR927500_130126_151019.cpio
2175084489 22468996608 PCS_FI_X86IS_TSPVAR927500_130126_151019.cpio

Note: You can delete the media parts once the check sum is verified; it will save disk space.
The TSP medium contains the Operating System, the TSP middleware and OEM
packages (e.g. Oracle DB, OAM agents) the Install Server application and the TSP
Software Update Framework (SUF).

It is assumed that the TSP medium is located in a directory <medium_root> on a partition
with enough free space on the install server.
The .cpio image can then be unpacked to this directory using the following commands:

root@iserver# cd <medium_root>
root@iserver# cpio -imdv < <TSP_medium>.cpio

A screen trace for unpacking the media is available in section 8.2 Screen trace for Media
extraction.
[Note]: The '<' is important to read the input from the cpio file.
[Note]: The cpio file is NO longer required afterwards, so it can be deleted to save disk space.
Duration: ~20 mins.
Afterwards a directory <medium_root>/<TSP_medium> will exist that contains the
content of the *.cpio file. Several variants of TSP media can be extracted under the same
medium_root, if desired.
If the cpio-operation fails perform the following checks:
1. Run df -kh to check if the disk space has been sufficient.
2. Check if *.cpio was not corrupted during the download by comparing the file size
of your copy and check sum of cpio file metioned above.
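For example, a quick check of the available space and of the combined file (the device
name, sizes and mount point below are illustrative, not taken from this release):

root@iserver# df -kh <medium_root>
Filesystem             size   used  avail capacity  Mounted on
/dev/dsk/c1t0d0s7      130G    55G    74G    43%    /export/home
root@iserver# cksum <TSP_medium>.cpio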


3.1.3 Install the 3rd party utility packages


Note: This section is mandatory only for a newly prepared (from-scratch installation)
SPARC (Netra T2000) based install server. This section is not required if you are using an
existing install server and/or if the install server is X86 (X4270) based and was prepared
by automated installation as explained in [Install Server Configuration Guide].
The TSP install server requires a few additional software packages that are also provided
on the TSP medium extracted before. Those packages need to be installed only once, i.e.
the following steps in 3.1.3.1, 3.1.3.2 and 3.1.3.3 can be skipped when adding a TSP
medium to an existing install server.
Check whether these packages are already available on the install server using the
pkginfo -l command (see the example below); if they are not available, follow the steps
explained in sections 3.1.3.1, 3.1.3.2 and 3.1.3.3.
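A minimal sketch of the check (the package names are the ones used in the sub-sections
below; the loop form is just a convenience):

root@iserver# for p in SMAWossh SMAWrtppl SMAWjdk15 SMAWjdk16; do pkginfo -l $p | grep -i 'PKGINST\|STATUS'; done

For a missing package, pkginfo typically reports that no information was found, and the
corresponding sub-section below must be executed.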
Note: Never unzip the packages under <medium_root> (Original location) since part of
the information may be necessary in later steps of setting up the install server for a PCS
host!

3.1.3.1 Install OpenSSH


The OpenSSH package required by the install server software is named SMAWossh.
Follow the steps below for the installation of OpenSSH.
Before installing the package, move the existing sshd configuration file aside:
root@iserver# mv /etc/ssh/sshd_config /etc/ssh/sshd_config_bkp

Find the package in the extracted TSP image.


root@iserver# cd <medium_root>/<TSP_medium>
root@iserver# find . -name "SMAWossh*"

Copy this package to /tmp/ as shown in the example below. The path of the OpenSSH package
may differ on your install server. Unzip the package file if it is packed.
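A minimal sketch of the copy and unzip steps (the path under the medium and the .gz
suffix are assumptions; use the exact path reported by the find command above):

root@iserver# cp ./oem/.packages/.../SMAWossh.pkg.gz /tmp/
root@iserver# cd /tmp
root@iserver# gunzip SMAWossh.pkg.gz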
Install this package
# pkgadd -d SMAWossh.pkg

The duration of this action is about 1~2 minutes.

3.1.3.2 Install Perl


Perl is required by the install server application for the execution of the script
prepTspInstall.pl. The package SMAWrtppl provides the required Perl package.
The Perl package can be found on the extracted TSP medium.

root@iserver# cd <medium_root>/<TSP_medium>
root@iserver# find . -name "SMAWrtppl*"

Copy this package to /tmp/. The path of the Perl package may differ on your install
server. Unzip the package file if it is packed.
Install this package - if not already installed - with pkgadd -d:
# pkgadd -d SMAWrtppl.pkg

The duration of this action is about 1~2 minutes.

3.1.3.3 Install JDK1.5 & JDK1.6


Note: The jdk15 package is mandatory only for an install server that was prepared
manually (SPARC based).
JDK 1.5 and 1.6 are required by the install server application (script prepTspInstall.pl).
The required jdk package can be found on the medium.
root@iserver# cd <medium root>/<TSP_medium>
root@iserver# find . -name "*SMAWjdk*"
./oem/.packages/TSP/VSG9/210/00/INTPagjav/solaris/java/1.6.0.13/SMAWjdk16.tar.gz
./oem/.packages/TSP/VSG9/210/00/INTPagjav/solaris/java/1.5.0.18/SMAWjdk15.tar.gz

Copy these packages to the /tmp/ directory, unpack them and install them one by one, for example as sketched below.
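A minimal sketch of the copy and unpack steps for one of the archives (the source path is
the one reported by the find command above):

root@iserver# cp ./oem/.packages/TSP/VSG9/210/00/INTPagjav/solaris/java/1.5.0.18/SMAWjdk15.tar.gz /tmp/
root@iserver# cd /tmp
root@iserver# gunzip SMAWjdk15.tar.gz
root@iserver# tar xf SMAWjdk15.tar
# repeat the same for SMAWjdk16.tar.gz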
Install these packages in the following order:
# pkgadd -d . SMAWjdk15
# pkgadd -d . SMAWjdk16

The duration of this action is about ~5 minutes.

3.1.4 Install the TSP medium on the install server


The installation of the TSP medium on the install server is performed with the script
setupInstallServer.sh. This script basically performs the following tasks:
Extract the install server package (SMAWtspsx) from the extracted TSP medium
and install it on the install server. This package is also called the AutoInstall
scripts.
Convert the OEM software from the TSP image into the right format and location for
the AutoInstall scripts during the PCS installation.
Add a TSP sample project to the install server repository.


The install server data structures will be created under the directory
<iserverRoot>. The location of this directory is asked for when the script
setupInstallServer.sh runs. The default location is /export/home/iserver.

Note: This step adds new versions of the OEM software (wrappers) contained in the
TSP image to the install server repository without deleting the old versions.
Now go to the root directory of the extracted TSP medium and execute
setupInstallServer.sh that is located there.
root@iserver# cd <medium_root>/<TSP_medium>
root@iserver# ./setupInstallServer.sh

A complete screen trace is available in section 8.3 Screen trace for Media installation.
The script will ask some questions. Answer with y or accept the default values.
Duration: ~30 minutes.
If the setupInstallServer.sh script reports any ERRORs, make sure that the Perl
and jdk15/jdk16 packages are available (jdk15 is required only for a manually prepared
SPARC based install server, not for an X4270 install server prepared automatically),
that there is enough disk space on the install server, and consult the log file of
setupInstallServer.sh.
These checks are shown below:
1) Run pkginfo -l SMAWrtppl to check whether SMAWrtppl is installed.
2) Run pkginfo -l SMAWjdk15 to check whether SMAWjdk15 is installed only for
SPARC.
3) Run df -kh to check the disk space is sufficient.
4) Review the log file /tmp/setupInstallServer.<pid>.log.
After a successful execution of the setupInstallServer.sh the SMAWtspis and the
SMAWtspsx packages will be installed on the install server.
This can be verified as shown below:
1) Run pkginfo | grep SMAWtspis to check whether TSP install server package
is installed.
2) Run pkginfo | grep SMAWtspsx to check whether TSP install server example
project package is installed.
3) The directory <iserverRoot>/ exists and is not empty. This is the default location
where the so-called OEM wrappers are placed.
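A minimal verification sketch (the grep patterns come from the list above; the exact
output lines depend on the release):

root@iserver# pkginfo | grep SMAWtspis
root@iserver# pkginfo | grep SMAWtspsx
root@iserver# ls <iserverRoot>
# expect a non-empty listing, e.g. directories such as autoinstall, install, jumpstart, projects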

3.1.5 Install PCSEP package


The PCSEP package contains PCS specific scripts that have to be executed during the
PCS First Installation and configuration parameters that are common to all PCS
instances.

3.1.5.1 Installation
If a PCSEP package is already installed on the install server, remove the previous
version using pkgrm.
Example:
# pkgrm PCSEP

Download the PCSEP package from the release area and copy it to the install server.
Unzip the PCSEP package file that you want to install and use the pkgadd command to
install the PCSEP package.
Example:
# gunzip PCSEP_PA_063.pkg.gz
# pkgadd -d PCSEP_PA_063.pkg
# pkginfo -l PCSEP
   PKGINST:  PCSEP
      NAME:  PCS Extern Project
  CATEGORY:  application
      ARCH:  sparc,i386
   VERSION:  0631181
   BASEDIR:  /export/home/iserver
    VENDOR:  Redknee Technologies
      DESC:  PCS Extern Project:Storage PCS Files for TSP7000, PCS6.3_0.3 P7 Correction Release-1.
    PSTAMP:  blrmpcsdev120130828174252
  INSTDATE:  Sep 19 2013 17:34
    STATUS:  completely installed
     FILES:      319 installed pathnames
                   3 shared pathnames
                  95 directories
                 205 executables
              146322 blocks used (approx)

Answer with yes or press Enter for the questions during the installation.
Duration: ~2 minutes.


3.1.5.2 Verification
Please verify the installation of the PCSEP package by:
Way 1: pkginfo -l PCSEP
Way 2: ls -l /<iserverRoot>/projects/PCS/<PCS_version>/
Below is the example output.
bash-3.00# ls -l /export/home/iserver/projects/PCS/0631181/
total 8
drwxr-xr-x   2 root     bin          512 Sep 29 10:44 clients
drwxr-xr-x   4 root     bin          512 Sep 29 10:44 SF
drwxr-xr-x   3 root     bin          512 Sep 29 10:44 storage
drwxr-xr-x   2 root     bin          512 Sep 29 10:44 xml
bash-3.00#

Note: Only one instance of the PCSEP package per PCS version is required. Only one
PCS version (cluster or single node) can be installed at the same time, but any number of
PCS hosts (cluster or single node) of that same version can be installed at the same time.

3.1.6 Copy PCSmain & PCSmgr package to the install server


Download the PCSmain and PCSmgr packages and copy them to the directory
/export/home/iserver/install/PCS/PCS6.3 on the install server, for example as sketched below.
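A minimal sketch of the copy step (the source location /var/tmp is an assumption; use
wherever you downloaded the packages to):

bash-3.00# mkdir -p /export/home/iserver/install/PCS/PCS6.3
bash-3.00# cp /var/tmp/PCSmain_PAT_063.pkg.gz /var/tmp/PCSmgr_063.pkg.gz /export/home/iserver/install/PCS/PCS6.3/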
Note: The PCS package file names (PCSmain, PCSmgr and PCSEP) must not contain
square brackets ([ ]); they should look like the names below.
The directory content should be similar to the example below:
bash-3.00# cd /export/home/iserver/install/PCS/PCS6.3/
bash-3.00# ls -l
total 693440
-rw-r--r--   1 root     root     74338659 May  2 12:56 PCSmain_PAT_063.pkg.gz
-rw-r--r--   1 root     root     24263498 May  2 12:56 PCSmgr_063.pkg.gz
bash-3.00#

Duration: ~1 min.
The <iserverRoot>/install directory also contains other software that is transferred to the
PCS host during the installation process.

3.1.7 Install PCS Host Specific Package


The PCS host specific package contains the individual parameters for a specific PCS host
(e.g. IP addresses, hostnames, etc., as per the TPD). This step must be repeated for each
PCS host of the same version on the install server.

3.1.7.1 Check whether the PCSEP package has been installed


Use pkginfo | grep PCSEP to check whether the PCSEP package has been installed. If
NOT, please go to section 3.1.5.
Use pkginfo -l PCSEP to check the installed PCSEP version; it should be the same as
the PCS host specific package version.

3.1.7.2 Install PCS Host Specific Package


The PCS host specific package is generated based on the TPD that was sent to the service
person. Unzip the PCS host specific package file that you want to install. Use pkgadd -d
PCSH<hostname>_PA_<PCS_version>.pkg to install the PCS host specific package for
PCS host <hostname>, where <PCS_version> is the version number (the same as the
PCSEP package version) and <hostname> is the name of node 1 as given in the TPD.
Example:
# gunzip PCSHpcs13a_PA_063.pkg.gz
# pkgadd -d PCSHpcs13a_PA_063.pkg

Answer with yes or press Enter for the questions during the installation.
Duration: ~2 minutes.

3.1.7.3 Verification
Please verify this by:
Way 1: pkginfo -l PCSH<hostname>
Way 2: ls -l /<iserverRoot>/projects/PCS/<PCSEPVers>/clients/<host>*
After the installation of the host specific PCS package there will be 4 files per PCS host in
the clients directory as shown below. The file name starts with the host name that will be
assigned to the PCS hosts. The example below shows the host specific files for host
pcs13a:
# ls -l /export/home/iserver/projects/PCS/0631181/clients/
-rw-r--r--   1 root     bin        10643 Aug 18 12:37 pcs13a_Pcs.parm
-rw-r--r--   1 root     bin         1684 Aug 18 12:37 pcs13a_PCS_ServiceConfigParams.xml
-rw-r--r--   1 root     bin         5023 Aug 18 12:37 pcs13a_PCS_BasicConfigParams.xml
-rw-r--r--   1 root     bin         7534 Aug 18 18:03 pcs13a_aiParameter.sh

Note: For each PCS host to be installed, a separate instance of the host specific package
is required.


Note: Please compare the *_aiParameter.sh file with the information provided in the TPD.
If any values are incorrect, a manual update is possible as a workaround to continue the
installation.

3.2 Configuration of the AutoInstall for a PCS host


After adding all components for a PCS host to the install server as described above, the
host specific AutoInstall configuration must be completed on the install server. This is
performed by the prepTspInstall.pl script.
The script performs the following main tasks:
Announce the PCS host to the Solaris jumpstart service (by using Solaris
add_install_client internally).
Create the aiParameter.sh for the client. This script will be copied to the PCS
host after the Solaris installation is complete. It provides input for the subsequent
TSP7000 installation on the PCS host.
Copy packages (e.g. TSP packages) from the TSP medium to the <clientdir>/TSP
directory under <iserverRoot>, where <clientdir> is the host name of the
PCS host to be installed.

3.2.1 Copying Customization package


Note: Refer to [How to Generate Host Specific Package], chapter 6, for the customization
package generation.
Copy the customization package to any location on the install server. Do not extract the
customization package; its file name should end with tar.gz and it should be readable by
everyone.
Example:
# cd /var/tmp
# ls -l /export/home/jeba/PCScust13a.tar.gz
-r--r--r--   1 root     root       14547 Oct  1 08:45 /export/home/jeba/PCScust13a.tar.gz

3.2.2 Run the script PCS_inject.sh


Run the script PCS_inject.sh on the install server; the script will inject the customization
package into the media. This script is located in the /export/home/iserver/install/LiveRSU/
directory.
Example:
# cd /export/home/iserver/install/LiveRSU/
# ./PCS_inject.sh


################################################################################
###                                                                          ###
### Extracted Media Location should be an absolute path                      ###
### Example: If the media is extracted under "/export/home/", then your      ###
### input will be "/export/home/PCS_FI_XXXXXX_XXXXXX", where                 ###
### "PCS_FI_XXXXX_XXXXX" is the extracted media.                             ###
###                                                                          ###
################################################################################

Enter the Extracted Media Location:
/export/home/iserver/PCS_FI_X86IS_TSPVAR927500_130126_151019
--- Enter the extracted media location

Enter the Customization Package Location with customization package name:
/export/home/jeba/PCScust13a.tar.gz
--- Enter the customization package location; the package name should end with tar.gz

Script PCS_inject.sh is completed 06_10_11_14_55_02

3.2.3 Execute prepTspInstall.pl


The prepTspInstall.pl script is located in the directory
<iserverRoot>/autoinstall/<BaseAps>/bin.
Example:
bash-3.00# pwd
/export/home/iserver/autoinstall/TSPVAR927500/bin/

Execute ./prepTspInstall.pl flash -m <medium_root> -f <host_specific_file>
A sample for host pcs13a when the TSP medium is extracted to <medium_root> and
<iserver_root> is in /export/home/iserver is shown below:
root@iserver# cd /export/home/iserver/autoinstall/TSPVAR927500/bin/
root@iserver# ./prepTspInstall.pl flash -m
/media/PCS_FI_X86IS_TSPVAR927500_130126_151019 -f
/export/home/iserver/projects/PCS/0638480/clients/pcs13a_aiParameter.sh
InstallServer BASEDIR: /export/home/iserver

Reading my package data - one moment please ... ok


+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
++            p r e p a r e   T s p   I n s t a l l a t i o n             ++
++                                                                         ++
++ This program prepares the installation of a TSP7000:                    ++
++  - collect data needed for the installation                             ++
++  - announce client to install server (for jumpstart)                    ++
++  - create finish script, conf. file and sysidcfg file (for jumpstart)   ++
++                                                                         ++
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
++                                                                         ++
++ Your input is temporarily saved in file:                                ++
++   --> /tmp/aiParameter.3422                                             ++
++                                                                         ++
++ If this program is interrupted this file can be used as input file      ++
++ when executing this program again (option -f).                          ++
++ (Manually delete incorrect entries before)                              ++
++ It is deleted after successfull completion of this program.             ++
++                                                                         ++
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
++                                                                         ++
++ Further files of interest:                                              ++
++  - check results (e.g for patches) : <clientdir>/checklist.log          ++
++                                                                         ++
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Do you want to continue (y/n) ? [yes] -> yes
--- Say yes to continue

----- Tsp -----

Note: The BaseAPS name (= Tsp version) is contained in the directory of this
script (./prepTspInstall.pl).
This BaseAPS name is needed to find the correct installation and
configuration scripts.

The selected BaseAPS resp. Tsp version:
-> Solaris: TSPVAR927500
Is this correct (y/n) ? [yes] -> yes
--- Say yes if your TSP version is right

-------------------------------  Media data  -------------------------------
-- Medium   : OEM, TSP and Application data
-- Base path: /media/PCS_FI_X86IS_TSPVAR927500_130126_151019


-- Multiple modules available on medium, select module

Please select one from the following list:
  1  PCS_FI_SPARCIS_TSPVAR927500.InstallServer.SMU_flash_cluster
  2  PCS_FI_SPARCIS_TSPVAR927500.PCS_Cluster.gen
  3  PCS_FI_SPARCIS_TSPVAR927500.InstallServer.SMU_flash_Single_Node
  4  PCS_FI_SPARCIS_TSPVAR927500.InstallServer.gen
  5  PCS_FI_SPARCIS_TSPVAR927500.PCS_SingleNode.gen
! Please select (1..5)
-> 2
--- Select your option:
    for a cluster select 2 (PCS_FI_SPARCIS_TSPVAR927500.PCS_Cluster.gen),
    for a single node select 5 (PCS_FI_SPARCIS_TSPVAR927500.PCS_SingleNode.gen)

Module selected: PCS_FI_SPARCIS_TSPVAR927500.PCS_Cluster.gen
WARNING: OEM descriptionfile for INTPasref does not exist

INFO: File cluster.cfg not available -> all data will be requested.

INFO --- The install server and the install client(s) MUST be in the same subnet.

If your install server serves more than one interface (your install server
is in more than one lan) then you must specify the correct interface data.

Please specify the hostname and IP address of the install server on the
installation lan.
This may depend on whether you are installing via core or admin lan.

If data is not provided via input file or if data is provided but is not
valid a selection is output showing the data of all configured interfaces.
Data for each line:
<num> - hostname   (ip - netmask - interface)

Default if not provided via input file:
InstallServerName=<hostname>
InstallServerIP=<corresponding IP>.


Installation server data:
-------------------------
Hostname : pcsis3-admin
IP       : 192.168.22.17

Is this correct (y/n) ? [yes] -> y
--- Say yes if your install server data is right

** InstallServerName=pcsis3-admin
** InstallServerIP=192.168.22.17
----- Some output is truncated ---------

During the execution, follow the screen trace and answer the questions asked by the
prepTspInstall.pl script; you have to provide some input as shown in the host specific
document.
Duration: 2~3 minutes.
If you face any errors in this process, provide the following files to the service person for
further analysis:
1. The log files /tmp/prepTspInstall<PID>.log and /tmp/aiParameter.<PID>, where <PID> is
the process ID.
2. The TSP medium file name used to set up the TSP install server.

3.2.4 Additional steps


3.2.4.1 Configure-tftp-nfs.sh
Note: This step is mandatory only if the install server is X86 (X4270) based, and it is
required only when the X86 install server is prepared for the first time by a from-scratch
installation.
This tool is used to enable and disable the tftp and nfs services. Log in as user root on the
install server and invoke the tool with:
root@iserver# /opt/INTPaghar/secureTool/configure-tftp-nfs.sh
The following menu will appear:

Configuration of TFTP and NFS


1. Turn on TFTP
2. Turn off TFTP
3. Turn on NFS
4. Turn off NFS
5. Help
6. Quit


Please Enter your choice :


Select option 1 and 3 to enable TFTP and NFS, and then option 6 to quit the menu.

3.2.4.2 Update the script AiRaidStorageTek.pm


Note: This step is only necessary when ST2540 storage is used in a cluster; it is not
required for a single node.
On the install server go to the directory /export/home/iserver/install/LiveRSU/ and run the
script PCS_AiRaidStorageTek.sh; this script asks for the PCS first node hostname as
input. The script updates the file
/export/home/iserver/autoinstall/clients/<first node hostname>/storage/perl/AiRaidStorageTek.pm
Example:
bash-3.00# pwd
/export/home/iserver/install/LiveRSU/
bash-3.00# ./PCS_AiRaidStorageTek.sh
Enter PCS cluster first node hostname:
pcs21a

---- Enter the PCS cluster first node hostname

Changes are done for the script AiRaidStorageTek.pm

After the above script has run successfully, verify the difference as shown below. Here
pcs21a is the PCS cluster first node hostname.
root@pcsis5> cd /export/home/iserver/autoinstall/clients/pcs21a/storage/perl
root@pcsis5> diff AiRaidStorageTek.pm_bkp AiRaidStorageTek.pm
598c598
< my $fullname=$obj->{param}->{"NodeName[1]"}."-".$sname."-".time();    --- before the change
---
> my $fullname=$obj->{param}->{"NodeName[1]"}."-".$sname;               --- after the change
root@pcsis5>

3.2.5 Verification of the prepTspInstall.pl actions


Perform the following steps on the install server:

1. Check if the PCS host has been added to /etc/hosts. An entry for the PCS host must
exist, like in the following example.

# cat /etc/hosts
192.168.70.141   pcs13a-admin

2. Check if the PCS host has been added to /etc/ethers. An entry for the PCS host must
exist, like in the following example.

# cat /etc/ethers
0:14:4f:3c:6c:6c   pcs13a-admin

3. Check /etc/bootparams. An entry for the PCS host must exist, like in the following
example.

# cat /etc/bootparams
pcs13a-admin root=eagle:/export/home/iserver/solaris/10_0606/Solaris_10/Tools/Boot
install=eagle:/export/home/iserver/solaris/10_0606 boottype=:in
sysid_config=eagle:/export/home/iserver/jumpstart/pcs13a
install_config=eagle:/export/home/iserver/jumpstart/pcs13a
rootopts=:rsize=8192

4. Check if the aiParameters.sh file has been created in the directory
<iserverRoot>/autoinstall/clients/<hostname>. A sample for host pcs13a is shown below:

# ls -l /export/home/iserver/autoinstall/clients/pcs13a
total 30
-rwxr-xr-x   5 root     root        2560 Dec 12 06:59 aiParameters.sh

5. Check that the directory <iserverRoot>/jumpstart/<hostname> exists, like in the
following example.

# ls -l /export/home/iserver/jumpstart/
total 10
drwxr-xr-x   2 root     root         512 Dec 12 06:57 pcs13a

3.2.6 FAQ and Troubleshooting Information


3.2.6.1 Unpack TSP medium ERROR


Please check that there is enough disk space; keep more than 60GB of free space
available at the unpack directory.
Please check whether the medium file is corrupt by checking its file size and checksum.
If necessary, re-download the medium and MAKE SURE to use ftp binary mode.

3.2.6.2 setup TSP install server ERROR


Please check if there is enough disk space.
Please check whether the required 3rd party software has been installed.

3.2.6.3 Preparation of Automated installation ERROR


Please provide the log file (the latest one of /tmp/prepTspInstall*.log) to service person.

3.2.6.4 SMAWtsphc error


If the media installation throws the following error on an X86 (X4270) install server, then
follow sections 3.7, 3.7.1 and 3.7.2 in [Install Server Configuration Guide].
====================== Installing SMAWtsphc ===========================
------- Some output is truncated -------
All rights reserved
checkinstall: The processor name sparc is expected (i386 was found).
pkgadd: ERROR: request script did not complete successfully
Installation of <SMAWtsphc> failed.
No changes were made to the system.
ERROR: No partial information for "SMAWtsphc*" was found
ERROR: Installation of SMAWtsphc failed
Please check the error messages of pkgadd.
==== Logfile: /tmp/setupInstallServer.20110217161743.log

3.3 Password details

root password       yt_xk39b    Before successful installation
root password       s!em3ns     After successful installation
oracle password     yt_xk39b    Password for the unix user oracle
FMftp password      s!em3ns     Password to be set for the FMftp account (currently unused)
PMftp password      s!em3ns     Password to be set for the PMftp account (for ftp copying performance counter files from the PCS host)
traceftp password   s!em3ns     Password to be set for the traceftp account (for ftp copying trace files from the PCS host)
rtp99 password      s!em3ns     Password to be set for the rtp99 account (for TSP)
superad password    PcsQos!7    Password for the TSP superad account (for TSP GUI and CLI); this account can be used by an operator
sysad password      QosPbn3!    Password for the TSP sysad account (for TSP GUI and CLI); this account can be used by an operator
sysop1 password     QosPbn3!    Password for the TSP sysop1 account (for TSP GUI and CLI); this account can be used by an operator
sysop2 password     QosPbn3!    Password for the TSP sysop2 account (for TSP GUI and CLI); this account can be used by an operator
sysop3 password     QosPbn3!    Password for the TSP sysop3 account (for TSP GUI and CLI); this account can be used by an operator
sysop4 password     QosPbn3!    Password for the TSP sysop4 account (for TSP GUI and CLI); this account can be used by an operator
sysop5 password     QosPbn3!    Password for the TSP sysop5 account (for TSP GUI and CLI); this account can be used by an operator

4 PCS Installation on Single Node


After the install server has been prepared as shown before, the actual PCS installation
can start.
Note: All data on the PCS host will be destroyed during this process, so please back up
all data that you want to retain. Before successful installation the root user password is
yt_xk39b; after successful installation the root user password is s!em3ns.
Please change the password after the successful installation.
Note: The hardware must be in the standard configuration, otherwise the installation will
fail. For more information on the hardware standard, refer to the PCS 6.3 release notes
and the [Hardware description guide].

4.1 Pre-conditions


The PCS hardware has been set up, the host is powered on, and the Ethernet port
with the MAC address configured on the install server is connected to the
network.
The PCS host and the install server are in the same subnet.
The PCS host is NOT announced on any other install or boot server in the same LAN
(log on to each install server in the subnet and check in /etc/bootparams and
/etc/ethers that the client(s) are not announced; see the example after this list).
Update the firmware of the PCS host. For more details please refer to section 4.2.
Have access to the system console of the PCS host.
All interfaces should be connected to the network.
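A minimal sketch of the check on another install server (the hostname 'otheris' and the
client name 'pcs13a' are illustrative):

root@otheris# grep pcs13a /etc/bootparams /etc/ethers
# no output means the client is not announced on this server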

4.2 Firmware
4.2.1 Sun Netra T5220
Check the current firmware version for Sun Netra T5220 hardware, it should be 7.4.2.a.
This is the TSP recommended firmware version for PCS 6.3. If not, check the [Hardware
description guide] to upgrade the firmware.

4.2.2 Sun Netra T2000


Check the current firmware version for Sun Netra T2000 hardware, it should be 6.7.12.
This is the TSP recommended firmware version for PCS 6.3. If not, check the [Hardware
description guide] to upgrade the firmware.

4.3 OBP setting (ok prompt)


How to get there:
You need access to the system console of the PCS host to start the installation. How this
access is achieved may differ from platform to platform.
When you have access to the system console and Solaris is running type
root@client # init 0

This shuts down the system and you will see a switch to the OBP
Set the following variables in the OBP. The following variables MUST be set:

ok set-defaults
ok setenv local-mac-address? false
local-mac-address? = false
ok setenv auto-boot? false
auto-boot? = false
ok setenv use-nvramrc? true
use-nvramrc? = true
ok reset-all

After modification, the OBP setting would be:


ok printenv
Variable Name           Value                          Default Value
ttya-rts-dtr-off        false                          false
ttya-ignore-cd          true                           true
keyboard-layout         US-English
reboot-command
security-mode           none                           No default
security-password                                      No default
security-#badlogins                                    No default
verbosity               min                            min
pci-mem64?              true                           true
diag-switch?            false                          false
local-mac-address?      false                          true
fcode-debug?            false                          false
scsi-initiator-id
oem-logo                                               No default
oem-logo?               false                          false
oem-banner                                             No default
oem-banner?             false                          false
ansi-terminal?          true                           true
screen-#columns         80                             80
screen-#rows            34                             34
ttya-mode               9600,8,n,1,-                   9600,8,n,1,-
output-device           virtual-console                virtual-console
input-device            virtual-console                virtual-console
auto-boot-on-error?     false                          false
load-base               16384                          16384
auto-boot?              false                          true
network-boot-arguments
boot-command            boot                           boot
boot-file
boot-device             /pci@0/pci@0/pci@2/scsi@ ...   disk net
multipath-boot?         false                          false
boot-device-index
use-nvramrc?            true                           false
nvramrc
error-reset-recovery    boot                           boot

Note: The first installation with hardening enables an OBP password. If you make any
change or perform any activity at OBP level, you will be asked for the OBP password.
The OBP password is PCS123.

4.4 Start Installation


Start the automatic installation from the OBP of the install client (PCS host). This starts a
network installation from the install server that has been prepared.
Note: If any issue occurs during the first installation, first refer to section 4.7 FAQ and
Troubleshooting Information; if no solution is found there, send the following information
to the support person:

Version of PCS
Hardware type
Number of processor cores
Memory(RAM) size on Node
Installation console logs ( from boot net to failure place)
Send the TspExplorer file (/dump/TspExplorer/TspExplorer.*.tar.gz)
File from both nodes (/var/adm/message*)

ok boot <net> -v install

The -v option will show the boot host your client is using.
<net> is the physical path of the admin LAN interface; the physical path of the admin
LAN may vary depending on your hardware.
Refer to the [Hardware description guide], which explains how to find the admin LAN
physical path.
Example:
ok boot /pci@0/pci@0/pci@8/pci@0/pci@8/network@0,3 -v install

4.4.1 The Result of Automated Installation


When the automated installation has finished successfully, the screen output should look
like:

TSPIS: --- Configure TCP Wrappers
TSPIS: === TCP Wrappers enabled successfully
TSPIS: ##########################################
TSPIS: ##                                      ##
TSPIS: ##      Installation completed !!!      ##
TSPIS: ##                                      ##
TSPIS: ##########################################
TSPIS: ##      check all logfiles for          ##
TSPIS: ##      success                         ##
TSPIS: ##########################################

Note: After hardening, the OBP password is enabled; the password is PCS123. Do not
delete the directory and scripts in /tspinst/ and /tspinst/scripts on the PCS host; these
will be used for feature upgrades.
If there is an error message, go to section 4.7 FAQ and Troubleshooting Information for
troubleshooting information.
A complete log of an installation is shown in Annex 8.6.1 Sun Netra T5220 single Node.
The overall automated installation, from boot net -v install until finished installation and
configuration, is expected to take about ~6 hours.

4.5 Post Installation steps


4.5.1 Establish communication with @Commander
Note: Only a standard single node/cluster can be added to the @Commander server.
For communication with @Commander it is necessary to distribute @Commander's public
key to the PCS host(s). The public key must be placed in a specific file called
'authorized_keys' in the home directory of the account <rtp99> used for the shell access.
At installation time of the @Commander server, the private/public key pair was created under
the directory /ti_var/TI/ftp/pub/ in files called KeyFile.pub.
Transfer the public key file /ti_var/TI/ftp/pub/KeyFile.pub from the @vantage Commander
server to the PCS host(s). This can be done either by FTP or by any other possible way
(e-mail, etc).

Example, from @Commander Server


scp /ti_var/TI/ftp/pub/KeyFile.pub rtp99@<PCSnode>:/tmp/

Assume public key is now copied to PCS host(s) in /tmp,


Now log in as user <rtp99> on the PCS host(s) and create a directory .ssh in its home
directory:
log in as user <rtp99>
mkdir -p .ssh
Insert the contents of the public key file into the file ~<rtp99>/.ssh/authorized_keys.rtp99.
Example:
bash-3.00$ pwd
/export/home/rtp99
bash-3.00$ cat /tmp/KeyFile.pub >> .ssh/authorized_keys.rtp99

Example output from PCS node.


bash-3.00$ pwd
/export/home/rtp99/.ssh
bash-3.00$ ls -lrt
total 4
-rw-r-----   1 rtp99    dba         1199 Feb 18 18:01 authorized_keys.rtp99
bash-3.00$

In case of a PCS cluster the above steps have to be done on each node of the cluster.

When creating a PCS NE in the @Commander server:
select Product Line: IMS,
NE Type: CFX-5000_PCS_PCM,
Symbolic Name: any name,
Common Host Name: CORE_VIRTUAL_HN from the TPD or the aiParameter.sh file, and
Cluster Nodes: host name of the PCS node; if it is a cluster, enter both nodes'
hostnames, one hostname per line.

4.5.2 Add/delete route for PCS


If a peer NE (e.g. CSCF, GGSN or BGF) is in a sub-network that is not directly reachable,
the corresponding routes should be added.
Inquire routes:
You can inquire the existing routes with netstat -r; for more detailed information see
man netstat.
[kangaroo/root] netstat -r

Routing Table: IPv4
  Destination           Gateway            Flags  Ref    Use  Interface
-------------------- -------------------- ----- ----- ------ ---------
140.231.197.224      140.231.197.225       UG      1      0
140.231.197.224      kangaroo              U       1      0  ce0
140.231.197.224      kangaroo-backup       U       1      0  ce1
139.24.201.192       kangaroo-admin        U       1      0  e1000g0
192.168.70.0         PCS-pktmm3            U       1      0  ce2
192.168.70.0         PCS-pktmm2            U       1      0  ce3
224.0.0.0            kangaroo              U       1      0  ce0
default              139.24.201.193        UG     49     28
localhost            localhost             UH      1  81921  lo0

Add route:
You can add a new route with route -p add <destination> <gateway>; for more detailed
information see man route. The -p option makes the change to the network route tables
persistent across system restarts; <destination> is the destination host/sub-network and
<gateway> is the next-hop intermediary through which packets should be routed.
In the following example, 140.231.197.224 -netmask 255.255.255.224 is the destination
sub-network and 140.231.197.225 is the IP address of the gateway.
[kangaroo/root] route -p add 140.231.197.224 -netmask 255.255.255.224 140.231.197.225
add net 140.231.197.224: gateway 140.231.197.225
add persistent net 140.231.197.224: gateway 140.231.197.225: entry exists

You can verify the result with netstat -r.

Delete route:
You can delete a route with route -p delete <destination> <gateway>:
[kangaroo/root] route -p delete 140.231.197.224 -netmask 255.255.255.240 140.231.197.225
delete net 140.231.197.224: gateway 140.231.197.225

You can verify the result with netstat -r.

4.5.3 Alarm: File Permission Monitor found discrepancies. Look at
/var/opt/INTPaghar/run/hids/fileperm.log for more details

Follow the procedure below to disable this alarm.
Log in to the TSP web page (https://<node IP address or FQDN>:8099, user=superad,
password=get it from the responsible party).
First expand the Configuration Management option and select Configuration Parameters.
Then in the drop-down menu, expand the Rtp option and then expand the Aud option.


Then expand the PermissionMonitor option, click on enable and then click on Modify. A
popup window will show New Value RtpTrue. Change it to RtpFalse and select Ok.

4.5.4 Alarm: Rhosts Monitor found discrepancies. Look at
/var/opt/INTPaghar/run/hids/rhostsAuth.log for more details

Follow the procedure below to disable this alarm.
Log in to the TSP web page (https://<node IP address or FQDN>:8099, user=superad,
password=get it from the responsible party).
First expand the Configuration Management option and select Configuration Parameters.
Then in the drop-down menu, expand the Rtp option and then expand the Aud option.
Then expand the RhostsMonitor option, click on enable and then click on Modify. A popup
window will show New Value RtpTrue. Change it to RtpFalse and select Ok.

4.5.5 Backup & Restore Configuration


N/A

4.5.6 B&R required additional manual step


N/A

4.5.7 Unix services Management


You can manage all UNIX services as root by the following means:
Check a service: run svcs <service> as root.
Enable a service: run svcadm enable <service> as root.
Restart a service: run svcadm restart <service> as root.
For example, the PCS host is hardened during the AutoInstall procedure so that you can
NOT access the PCS host by telnet. If you want to allow telnet from outside, perform the
following steps to enable the telnet service:
Log in to the PCS host as root
Please log in to the PCS host as rtp99 by ssh and then switch user to root.


Enable the telnet service
Run svcadm enable telnet as root.
Check the telnet service
Run svcs | grep telnet as root.
Then log out of the PCS host and try to re-login by telnet, for example as sketched below.
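A minimal sketch of the whole sequence (the host prompt and the svcs timestamp are
illustrative):

root@pcshost# svcadm enable telnet
root@pcshost# svcs | grep telnet
online         12:34:56 svc:/network/telnet:default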

4.5.8 TSP User Management


The User Management is part of the Security Management subtree in the application
control area, using the TSP CLI (Command Line Interface) or TSP GUI (Graphical User
Interface). Using this tool you can do the following:
Change user passwords
Get information about users and active management sessions
Modify user information
Create users

For more information refer to [Operating PCS-5000], chapter Security Management, to
[How to Generate Host Specific Package], and to the [Technical Project Description (TPD)]:
https://sharenet-ims.inside.nokiasiemensnetworks.com/Guest/Open/433597572

4.5.9 Apply following patches (DV6307 & DV5733)


Download the file DV6307.tar and DV5733.tar file NOLS and copy the same to PCS
nodes (both nodes) in /var/tmp directory. Then execute the following steps in both nodes
as root user.
Follow the below steps to apply the patch DV5733.tar
root@pcs21a> pwd
/var/tmp
root@pcs21a> ls -l DV*
-rw-r-----   1 root     root      220160 May  6 12:32 DV5733.tar
-rw-r-----   1 root     root     1354752 Apr 25 14:38 DV6307.tar
root@pcs21a> tar xvf DV5733.tar
x /tmp/DV5733, 0 bytes, 0 tape blocks
x /tmp/DV5733/RtpStaMan, 217804 bytes, 426 tape blocks
root@pcs21a> cp -p /opt/SMAW/SMAWrtp/bin/RtpStaMan /opt/SMAW/SMAWrtp/bin/RtpStaMan_bkp
root@pcs21a> ls -l /opt/SMAW/SMAWrtp/bin/RtpStaMan_bkp
-r-xr-xr-x   1 bin      bin       219136 Mar 19  2012 /opt/SMAW/SMAWrtp/bin/RtpStaMan_bkp
root@pcs21a> cp /tmp/DV5733/RtpStaMan /opt/SMAW/SMAWrtp/bin/
root@pcs21a> ls -l /opt/SMAW/SMAWrtp/bin/RtpStaMan
-r-xr-xr-x   1 bin      bin       217804 May  7 11:58 /opt/SMAW/SMAWrtp/bin/RtpStaMan
root@pcs21a>

Follow the steps below to apply the patch DV6307.tar:

root@pcs21a> pwd
/var/tmp
root@pcs21a> ls -l DV*
-rw-r-----   1 root     root      220160 May  6 12:32 DV5733.tar
-rw-r-----   1 root     root     1354752 Apr 25 14:38 DV6307.tar
root@pcs21a> tar xvf DV6307.tar
x /tmp/DV6307/libRtpEvent.so, 1353008 bytes, 2643 tape blocks
root@pcs21a> cp -p /opt/SMAW/SMAWrtp/lib64/libRtpEvent.so /opt/SMAW/SMAWrtp/lib64/libRtpEvent.so_orig
root@pcs21a> ls -l /opt/SMAW/SMAWrtp/lib64/libRtpEvent.so_orig
-r--r--r--   1 bin      bin      1353496 Mar 19  2012 /opt/SMAW/SMAWrtp/lib64/libRtpEvent.so_orig
root@pcs21a> cp /tmp/DV6307/libRtpEvent.so /opt/SMAW/SMAWrtp/lib64/libRtpEvent.so
root@pcs21a> ls -l /opt/SMAW/SMAWrtp/lib64/libRtpEvent.so
-r--r--r--   1 bin      bin      1353008 May  9 13:21 /opt/SMAW/SMAWrtp/lib64/libRtpEvent.so
root@pcs21a>

Perform the above steps on both nodes.
Then reboot the nodes using the command init 6 (first reboot node 1; once node 1 is up
and running, then reboot node 2).

4.5.10 Password expired


NOTE: Execute the steps below on both PCS nodes if it is a cluster.
Log in to the PCS node as root user, then reset the password using the command passwd
<user ID>, where <user ID> can be oracle, FMftp, etc. Find the default password details
in the chapter Password details.
Example:
passwd FMftp
passwd PMftp
If any user ID's password should be made permanent, refer to [PCS 6.3_3.0-RMS-Security
Configuration Guide], section 2.12.

4.5.11 TSP GUI using https


The TSP GUI should work with https (https://<IP_Address>:8099, where IP_Address is
the PCS host Core IP address). If it does not, follow the procedure below to enable https
for the TSP GUI.
Execute the following commands as rtp99 on the PCS node:
/opt/SMAW/SMAWrtp/bin/RtpApache stop
/opt/SMAW/SMAWrtp/bin/RtpApache start
execRTPenv RtpHttpsTool.pl tomcat
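A quick way to confirm that the GUI port is listening again (the port number comes from
the URL above; the prompt is illustrative):

root@pcshost# netstat -an | grep 8099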

4.6 Verification
The following section provides a brief check of whether the PCS system is working
properly after the installation.
Verify the installation and configuration as follows:

4.6.1 Check if all processes are running


On the PCS host switch user to rtp99 and run execRTPenv status1.
You can see whether all components (including PCS and PCS manager) are running or not.
NOTE: When you switch user to rtp99, you MUST use su - rtp99 to make sure that
the profile of rtp99 is sourced.
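A minimal sketch of the check (the prompts are illustrative; the full status1 output is not
reproduced here):

root@pcs14a> su - rtp99
bash-3.00$ execRTPenv status1
# all listed components should show a running status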


4.6.2 Using TSP GUI


Also check the process status using the TSP GUI: open https://<IP_Address>:8099, where
IP_Address is the PCS host Core IP address; you will see the following screen.

Figure 4-1 Opening page for TSP GUI

Click the Login button and log in with user id superad; a new window like the one below
will open.


Figure 4-2 TSP GUI shown after login

Expand System Management on the left-hand side and click Process & Node; the
right-hand side will look like the screen shot above. On the right-hand side you can
expand each cluster node to verify the process status, as in the screen shot below.


Figure 4-3 Process status in TSP GUI

All processes should be in running status.

4.6.3 Check the hostname


Check the hostname by uname -n, the expected result: the hostname for Core Lan
interface will be shown. Check the hostname binding to the Core interface by cat
/etc/hostname.<Core interface> and make sure the binding is correct.
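A minimal sketch (the interface name nxge3 is taken from the ifconfig example in the next
section; the file contents depend on your TPD and are not shown here):

root@pcs14a> uname -n
pcs14a
root@pcs14a> cat /etc/hostname.nxge3
# the file should reference the Core LAN hostname (pcs14a in this example)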

4.6.4 IP addresses
Check the IP addresses that are assigned to the Ethernet interfaces with ifconfig -a.
The command should show the values that have been specified in the TPD. The following
is an example:
root@pcs14a> ifconfig -a
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        inet 127.0.0.1 netmask ff000000
nxge3: flags=9040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER> mtu 1500 index 14
        inet 10.58.128.9 netmask ffffff00 broadcast 10.58.128.255
        groupname core                                  ----- IPMP group for Core LAN, Active interface
        ether 0:21:28:bf:1c:7d
nxge3:1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 14
        inet 10.58.128.11 netmask ffffff00 broadcast 10.58.128.255      ---- Core LAN IP address
nxge3:2: flags=1040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4> mtu 1500 index 14
        inet 10.58.128.13 netmask ffffff00 broadcast 10.58.128.255      ---- Core LAN HIP IP address
nxge3:3: flags=1040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4> mtu 1500 index 14
        inet 10.58.128.12 netmask ffffff00 broadcast 10.58.128.255      ---- Core LAN virtual IP address
e1000g1: flags=9040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER> mtu 1500 index 3
        inet 10.255.9.67 netmask ffffff00 broadcast 10.255.9.255
        groupname ims                                   ---- IPMP group for PC LAN1 or IMS LAN, Active interface
        ether 0:21:28:11:af:3d
e1000g1:1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 3
        inet 10.255.9.16 netmask ffffff00 broadcast 10.255.9.255        ---- PC LAN1 or IMS LAN IP address
e1000g1:2: flags=1040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4> mtu 1500 index 3
        inet 10.255.9.32 netmask ffffff00 broadcast 10.255.9.255        ---- PC LAN1 or IMS LAN virtual IP address
e1000g2: flags=9040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER> mtu 1500 index 4
        inet 192.168.23.67 netmask ffffff00 broadcast 192.168.23.255
        groupname backup                                ---- IPMP group for Backup LAN, Active interface
        ether 0:21:28:11:af:3e
e1000g2:1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 4
        inet 192.168.23.16 netmask ffffff00 broadcast 192.168.23.255    ---- Backup LAN IP address
e1000g2:2: flags=1040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4> mtu 1500 index 4
        inet 192.168.23.32 netmask ffffff00 broadcast 192.168.23.255    ---- Backup LAN virtual IP address
nxge0: flags=69040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER,STANDBY,INACTIVE> mtu 1500 index 5
        inet 10.255.8.68 netmask ffffff00 broadcast 10.255.8.255
        groupname core                                  ---- IPMP group for Core LAN, Secondary interface
        ether 0:21:28:37:6:6
nxge1: flags=69040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER,STANDBY,INACTIVE> mtu 1500 index 6
        inet 10.255.9.68 netmask ffffff00 broadcast 10.255.9.255
        groupname ims                                   ---- IPMP group for PC LAN1 or IMS LAN, Secondary interface
        ether 0:21:28:37:6:7
nxge2: flags=9040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER> mtu 1500 index 7
        inet 10.255.10.67 netmask ffffff00 broadcast 10.255.10.255
        groupname ps                                    ---- IPMP group for PC LAN2 or PS LAN, Active interface
        ether 0:21:28:37:6:8
nxge2:1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 7
        inet 10.255.10.16 netmask ffffff00 broadcast 10.255.10.255      ---- PC LAN2 or PS LAN IP address
nxge2:2: flags=1040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4> mtu 1500 index 7
        inet 10.255.10.32 netmask ffffff00 broadcast 10.255.10.255      ---- PC LAN2 or PS LAN virtual IP address
nxge3: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 8
        inet 192.168.22.16 netmask ffffff00 broadcast 192.168.22.255    --- Admin LAN interface
        ether 0:21:28:37:6:9
nxge5: flags=69040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER,STANDBY,INACTIVE> mtu 1500 index 9
        inet 192.168.23.68 netmask ffffff00 broadcast 192.168.23.255
        groupname backup                                ---- IPMP group for Backup LAN, Secondary interface
        ether 0:21:28:35:9e:87
nxge6: flags=69040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER,STANDBY,INACTIVE> mtu 1500 index 10
        inet 10.255.10.68 netmask ffffff00 broadcast 10.255.10.255
        groupname ps                                    ---- IPMP group for PC LAN2 or PS LAN, Secondary interface
        ether 0:21:28:35:9e:88
root@pcs14a>

Note: The above interface names and IP addresses may differ on your system.

4.6.5 NTP configuration


Check if NTP is working. NTP is activated if set in the TPD. This can be checked with the
command xntpdc:

root@pcs12> xntpdc -p
remote           local            st  poll  reach    delay     offset      disp
================================================================================
*pcsis2          10.255.6.146         128   377    0.00113   0.000036   0.00006
root@pcs12>

4.6.6 UNIX Services


Check running services using svcs command
Not Available

4.7 FAQ and Troubleshooting Information


If any of the previously described actions fails and you cannot find the solution for the
problem, please gather all relevant information for the issue (detailed information about
what is needed is given below) and contact the service person.

4.7.1 Installation does NOT start


The installation does NOT start and shows one of the following errors:
a) ERROR messages like Link Down
b) ERROR messages like panic[cpu*]
c) ERROR messages like Time out get RARP messages
For a), make sure that the network interface used for installation on the client is the
same as the one specified in the PCS TPD. Check whether the network cable of the
installation interface is connected.
For b), check whether the TSP medium on the install server is outdated.
For c), check the MAC address of the NIC which the PCS host is using for the
communication with the install server. It must be the same as the one in /etc/ethers on
the install server; see the example below.
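A minimal cross-check (the MAC address below is the one used in the earlier /etc/ethers
example; yours will differ):

# on the PCS host, the OBP banner shows the Ethernet address:
ok banner
# on the install server, it must match the client entry:
root@iserver# grep -i 0:14:4f:3c:6c:6c /etc/ethers
0:14:4f:3c:6c:6c        pcs13a-admin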

4.7.2 Client is installed from the wrong install server


This happens only when the PCS host is announced on another install server in the
same network.
If the installation screen trace shows that the client is using the wrong install server, you
must remove the client from that install server.
In the following screen trace the PCS host boots from 'wrongis', which is NOT the
intended install server.

Using RARP/BOOTPARAMS...
Internet address is: 141.29.96.6
hostname: cesar
Found 141.29.96.19 @ 0:3:ba:fd:ca:b5
root server: wrongis (141.29.96.19)
root directory:
/export/home/iserver/solaris/10_0606/Solaris_10/Tools/Boot
Size: 0x78a00+0x1cf01+0x319fb Bytes

Login to the wrong install server as root


root@wrongis # cd <solarisdir>/Solaris_X/Tools
# for each node of the cluster do
root@wrongis # ./rm_install_client <clientname>
# and remove client data from /etc/ethers!

And remove the PCS host data from /etc/ethers and /etc/bootparams of the wrong
install server manually.

4.7.3 Failure in tspFIRSU.sh script


During the first installation the tspFIRSU.sh script may fail with the following messages:
tspFIRSU.sh: ERROR - FIRSU failed.
21:59:45: tspFIRSU.sh: ACTION FIRSU has failed and needs to be finished manually!

Check the error and refer to manual TSP7000 Software Update Framework
for continuation.
Logfiles are found on the Install Server in
/var/opt/SMAWsuf/Controller/FirstInstallation and
/var/opt/SMAWsuf/Controller/<clustername>

Remember: mirror disks are not used during first installation.

Do not forget to run /tspinst/scripts/controlInstall.sh after FIRSU


has finished successfully in order to continue installation!

If the above failure is observed during the first installation, wait for 10 minutes after the
failure, then log in to the node as root user and check that all the rtp processes are up
and running. If all the processes are up and running, check the file /tspinst/tspFIRSU.out;
it will show output like the one below.
root@pcs36> su - rtp99
bash-3.00$ execRTPenv status1 -e
--- checks that the rtp processes are up and running; this command shows only failed processes

root@pcs36>
root@pcs36> cat /tspinst/tspFIRSU.out
Script: tspFIRSU.sh started 07/08/11 21:33:15

Execution time: 07/08/11 21:33:15


/tspinst/scripts/tspFIRSU.pl
Reading /tspinst/scripts/aiParameter.sh ... ok
07/08/11 21:33:15 FIRSU successfully started


07/08/11 21:33:15 Waiting for SUF to be installed


07/08/11 21:33:45 New Status: FirstInstaller: start

07/08/11 21:34:45 New Status: SufControl: start

07/08/11 21:59:45 Error while doing FIRSU: ERROR17: SufControl failed


Check on Install Server:
/var/opt/SMAWsuf/Controller/FirstInstallation/startFIRSU.l2-pcs-fe.17491.log
/var/opt/SMAWsuf/Controller/FirstInstallation/SufControl.l2-pcs-fe.17491.stdout
Also check SUF logfiles on Install Server
To reenter SUF FIRSU, run as sufuser:/opt/SMAW/SMAWsuf/bin/SufControl -r -d
/var/opt/SMAWsuf/Controller/FirstInstallation/datainit.l2-pcs-fe.xml FIRSU l2-pcs-fe

End time: 07/08/11 21:59:45


exit code = 1
Script end status: partly executed
Script: tspFIRSU.sh ended 07/08/11 21:59:45

Log in to the install server as the sufuser user, then execute the command marked in bold
in the above output (use the command shown in your system's output). This command
execution may ask you to retry; select the retry option once or twice.
bash-3.00$ /opt/SMAW/SMAWsuf/bin/SufControl -r -d /var/opt/SMAWsuf/Controller/FirstInstallation/datainit.l2-pcs-fe.xml FIRSU l2-pcs-fe

Note: In the above output the file name datainit.l2-pcs-fe.xml and the hostname l2-pcs-fe
will differ on your system.

4.7.4 The PCS host did NOT boot up automatically after Solaris has
been installed.
Please reboot the system manually.
This problem occurs if the OBP parameters are not set as shown in section 4.3.


5 PCS Installation on Cluster node


Note: All data on the PCS hosts will be destroyed during this process, so please back up
all data that you want to retain. Before successful installation the root user password is
yt_xk39b; after successful installation the root user password is s!em3ns.
Change the root password after the successful installation.
Note: The hardware must be in the standard configuration, otherwise the installation will
fail. For more information on the hardware standard, refer to the PCS 6.3 release notes
and the [Hardware description guide].

5.1 Pre-conditions

All pre-conditions for a PCS single node also apply to the cluster.

Any external hardware (e.g. external storage) must be configured manually; for more
information refer to the [Hardware description guide].

5.2 ST2540 storage IP configuration


Refer to the PCS 6.3 [Hardware description guide] to set up the IP address of the ST2540 storage.

5.3 OBP Setting (ok prompt)


The following variables MUST be set on both nodes from the OK prompt:

ok set-defaults
ok setenv local-mac-address? false
local-mac-address? = false
ok setenv auto-boot? false
auto-boot? = false
ok setenv use-nvramrc? true
use-nvramrc? = true
ok reset-all

After modification, the OBP setting would be:


ok printenv
Variable Name           Value                          Default Value
ttya-rts-dtr-off        false                          false
ttya-ignore-cd          true                           true
keyboard-layout         US-English
reboot-command
security-mode           none                           No default
security-password                                      No default
security-#badlogins                                    No default
verbosity               min                            min
pci-mem64?              true                           true
diag-switch?            false                          false
local-mac-address?      false                          true
fcode-debug?            false                          false
scsi-initiator-id
oem-logo                                               No default
oem-logo?               false                          false
oem-banner                                             No default
oem-banner?             false                          false
ansi-terminal?          true                           true
screen-#columns         80                             80
screen-#rows            34                             34
ttya-mode               9600,8,n,1,-                   9600,8,n,1,-
output-device           virtual-console                virtual-console
input-device            virtual-console                virtual-console
auto-boot-on-error?     false                          false
load-base               16384                          16384
auto-boot?              false                          true
network-boot-arguments
boot-command            boot                           boot
boot-file
boot-device             /pci@0/pci@0/pci@2/scsi@ ...   disk net
multipath-boot?         false                          false
boot-device-index
use-nvramrc?            true                           false
nvramrc
error-reset-recovery    boot                           boot

Note: The steps below are only necessary if you are installing a T2000 cluster with a
shared SCSI storage StorEdge SE3320.


On the 2nd node, set scsi-initiator-id to 6 prior to the automated installation.

ok setenv scsi-initiator-id 6
scsi-initiator-id = 6
ok reset-all
ok probe-scsi-all

The output of probe-scsi-all is similar to the following:

...
/pci@780/pci@0/pci@8/pci@0/scsi@1,1          <= this is the path to one JBOD storage
Target 0
  Unit 0   Disk   SEAGATE ST314670LSUN146G045A   286739329 Blocks, 140009 MB
Target 1
  Unit 0   Disk   SEAGATE ST314670LSUN146G045A   286739329 Blocks, 140009 MB
...
Target f
  Unit 0   Processor   SUN   StorEdge 3320 D1180
/pci@780/pci@0/pci@8/pci@0/scsi@1            <= this is the path to the other JBOD storage
Target 0
  Unit 0   Disk   FUJITSU MAW3147NCSUN146G1703   286739329 Blocks, 140009 MB
Target 1
  Unit 0   Disk   FUJITSU MAW3147NCSUN146G1703   286739329 Blocks, 140009 MB
...

The paths to the JBOD storages can be deduced from the output above:
/pci@780/pci@0/pci@8/pci@0/scsi@1,1
/pci@780/pci@0/pci@8/pci@0/scsi@1

The above SCSI paths may differ on your machine.


ok nvedit                                       #opens the nvramrc editor
\ begin scsi-init-section
probe-all
cd /pci@780/pci@0/pci@8/pci@0/scsi@1,1          #SCSI path 1
6 " scsi-initiator-id" integer-property
device-end
cd /pci@780/pci@0/pci@8/pci@0/scsi@1            #SCSI path 2
6 " scsi-initiator-id" integer-property
device-end
install-console
banner
\ end scsi-init-section                         #use CTRL+C to exit from the editor
ok nvstore                                      #to save the nvramrc changes
ok reset-all

After modification, the OBP setting for the 2nd host would be:
ok printenv
Variable Name           Value            Default Value
--------------- OUTPUT is TRUNCATED ---------------
scsi-initiator-id       6

ok printenv nvramrc
nvramrc=\ begin scsi-init-section
probe-all
cd /pci@780/pci@0/pci@8/pci@0/scsi@1
6 " scsi-initiator-id" integer-property
device-end
cd /pci@780/pci@0/pci@8/pci@0/scsi@1,1
6 " scsi-initiator-id" integer-property
device-end
install-console
banner
\ end scsi-init-section

Note: The first installation with hardening enables the OBP password. If you make any
changes at OBP level, you will be prompted for the OBP password. The OBP password is
PCS123.

5.4 Firmware upgrade


Refer to section 4.2 Firmware.

5.5 Start Installation


Start the automatic installation from the OBP of the install client (PCS host). This starts a network installation from the install server that was prepared earlier.
Note: If any issue occurs during the first installation, first refer to section 5.8 Troubleshooting Information; if no solution is found there, send the following information to the support contact:

- Version of PCS
- Hardware type
- Number of processor cores
- Memory (RAM) size on node 1
- Memory (RAM) size on node 2
- Installation console logs from both nodes (from the boot net command up to the point of failure)
- The TspExplorer file (/dump/TspExplorer/TspExplorer.*.tar.gz)
- The file /var/adm/message* from both nodes

ok boot <net> -v install

The -v option makes the boot verbose, so you can see (among other things) which boot server your client is using.
<net> is the physical path of the admin LAN interface; this physical path may vary depending on your hardware.
Refer to the [Hardware description guide], which explains how to find the admin LAN physical path.
Example:
ok boot /pci@0/pci@0/pci@8/pci@0/pci@8/network@0,3 -v install

IMPORTANT: Make sure that you execute the boot command on the 1st host first and wait for about 5-10 minutes before executing the same command on the 2nd host. The installation will fail if this order is not followed!
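If you are unsure which device path corresponds to the admin LAN interface, the OBP itself can list the candidates (a sketch only; the [Hardware description guide] remains the authoritative reference):

ok devalias             \ lists device aliases such as net, net1, ...
ok show-nets            \ lists the available network devices with their full physical paths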

5.5.1 The result of Automated Installation


When the cluster automated installation is finished successfully, the screen output of two
hosts should be:
TSPIS: --- Configure TCP Wrappers
TSPIS: === TCP Wrappers enabled successfully
TSPIS: ##########################################
TSPIS: ##                                      ##
TSPIS: ##      Installation completed !!!      ##
TSPIS: ##                                      ##
TSPIS: ##########################################
TSPIS: ##    check all logfiles for success    ##
TSPIS: ##########################################

If there is an error message, go to section 5.8 FAQ and Troubleshooting Information.
Note: After hardening, the OBP password is enabled; the password is PCS123. Do not delete the directories and scripts /tspinst/ and /tspinst/scripts on the PCS hosts; these will be used for the feature upgrade.


5.5.2 Installation log


A complete log of an installation is shown in Annex 8.5 Screen trace of PCS Cluster.

5.5.3 Duration of Automated Installation


The overall automated installation, from boot net -v install until installation and configuration are finished, is expected to take about 8-9 hours.

5.6 Post Installation steps


5.6.1 Establish communication with @Commander
Refer to section 4.5.1 Establish communication with @Commander.

5.6.2 Add/delete route for PCS


Refer to section 4.5.2 Add/delete route for PCS.

5.6.3 Backup & Restore Configuration


Refer to [Backup & Restore and FSR] and to chapter 3.

5.6.4 Password Authentication for Plugin Filetransfer


Note: This section is required only if the PCS nodes were hardened during the first installation/upgrade and you want to enable plug-in file transfer over the SAI interface.
If the plug-in is enabled and you also want to transfer the plug-in files over the SAI interface, apply the following settings on both PCS nodes.
Edit the file /etc/ssh/sshd_config and set PasswordAuthentication to yes.
Example:
# cat /etc/ssh/sshd_config
----- Some output is truncated ---------
PasswordAuthentication yes
----- Some output is truncated ---------

Restart the ssh service on both the nodes.



# svcadm restart svc:/network/ssh:default
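To verify that the change is active, a quick sketch (run it on both nodes):

# grep '^PasswordAuthentication' /etc/ssh/sshd_config     # should print: PasswordAuthentication yes
# svcs svc:/network/ssh:default                           # the service should be back online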

5.6.5 Alarm: File Permission Monitor found discrepancies. Look at /var/opt/INTPaghar/run/hids/fileperm.log for more details

Refer to section 4.5.3 Alarm: File Permission Monitor found discrepancies. Look at /var/opt/INTPaghar/run/hids/fileperm.log for more details.

5.6.6 Alarm: Rhosts Monitor found discrepancies. Look at /var/opt/INTPaghar/run/hids/rhostsAuth.log for more details

Refer to section 4.5.4 Alarm: Rhosts Monitor found discrepancies. Look at /var/opt/INTPaghar/run/hids/rhostsAuth.log for more details.

5.6.7 Service /system/webconsole:console in maintenance state


The Solaris service /system/webconsole:console may be in maintenance state after the FI, as shown below.
root@pcs21b> svcs -l /system/webconsole
fmri         svc:/system/webconsole:console
name         java web console
enabled      true
state        maintenance
next_state   none
state_time   Mon May 06 21:24:56 2013
logfile      /var/svc/log/system-webconsole:console.log
restarter    svc:/system/svc/restarter:default
contract_id
dependency   require_all/none svc:/milestone/network (online)
dependency   require_all/refresh svc:/milestone/name-services (online)
dependency   require_all/none svc:/system/filesystem/local (online)
dependency   optional_all/none svc:/system/filesystem/autofs (disabled) svc:/network/nfs/client (disabled)
dependency   require_all/none svc:/system/system-log (online)
root@pcs21b>

Now, as the root user, edit the files /var/webconsole/domains/console/conf/server.xml and /var/webconsole/domains/console/conf/console.xml and remove the lines containing the word ciphers; each file has two such lines.
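If you prefer to script this edit, a minimal sketch (it assumes each ciphers attribute sits on a single physical line, as in the grep output below, and it keeps a backup of both files first):

# cp /var/webconsole/domains/console/conf/server.xml /var/webconsole/domains/console/conf/server.xml.bak
# cp /var/webconsole/domains/console/conf/console.xml /var/webconsole/domains/console/conf/console.xml.bak
# sed '/ciphers=/d' /var/webconsole/domains/console/conf/server.xml.bak > /var/webconsole/domains/console/conf/server.xml
# sed '/ciphers=/d' /var/webconsole/domains/console/conf/console.xml.bak > /var/webconsole/domains/console/conf/console.xml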

Before deleting:
root@pcs21b> grep cipher /var/webconsole/domains/console/conf/server.xml
ciphers="SSL_DHE_DSS_WITH_RC4_128_SHA,SSL_RSA_WITH_RC4_128_MD5,SSL_RSA_WITH_RC4_128_SHA,T
LS_KRB5_WITH_RC4_128_MD5,TLS_KRB5_WITH_RC4_128_SHA,TLS_ECDH_ECDSA_WITH_RC4_128_SHA,TLS_EC
DH_RSA_WITH_RC4_128_SHA,TLS_ECDHE_ECDSA_WITH_RC4_128_SHA,TLS_ECDHE_RSA_WITH_RC4_128_SHA,S
SL_DH_DSS_WITH_3DES_EDE_CBC_SHA,SSL_DH_RSA_WITH_3DES_EDE_CBC_SHA,SSL_DHE_DSS_WITH_3DES_ED
E_CBC_SHA,SSL_DHE_RSA_WITH_3DES_EDE_CBC_SHA,SSL_RSA_WITH_3DES_EDE_CBC_SHA,SSL_RSA_FIPS_WI
TH_3DES_EDE_CBC_SHA,TLS_KRB5_WITH_3DES_EDE_CBC_MD5,TLS_KRB5_WITH_3DES_EDE_CBC_SHA,TLS_ECD
H_ECDSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDH_RSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_ECDSA_WITH_3DE
S_EDE_CBC_SHA,TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,TLS_DHE_DSS_WITH_AES_128_CBC_SHA,TLS_DH
E_DSS_WITH_AES_256_CBC_SHA,TLS_DHE_RSA_WITH_AES_128_CBC_SHA,TLS_DHE_RSA_WITH_AES_256_CBC_
SHA,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_ECDH_ECDSA_WITH_AES_128
_CBC_SHA,TLS_ECDH_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDH_RSA_WITH_AES_128_CBC_SHA,TLS_ECDH_R
SA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256
_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA"
ciphers="SSL_DHE_DSS_WITH_RC4_128_SHA,SSL_RSA_WITH_RC4_128_MD5,SSL_RSA_WITH_RC4_128_SHA,T
LS_KRB5_WITH_RC4_128_MD5,TLS_KRB5_WITH_RC4_128_SHA,TLS_ECDH_ECDSA_WITH_RC4_128_SHA,TLS_EC
DH_RSA_WITH_RC4_128_SHA,TLS_ECDHE_ECDSA_WITH_RC4_128_SHA,TLS_ECDHE_RSA_WITH_RC4_128_SHA,S
SL_DH_DSS_WITH_3DES_EDE_CBC_SHA,SSL_DH_RSA_WITH_3DES_EDE_CBC_SHA,SSL_DHE_DSS_WITH_3DES_ED
E_CBC_SHA,SSL_DHE_RSA_WITH_3DES_EDE_CBC_SHA,SSL_RSA_WITH_3DES_EDE_CBC_SHA,SSL_RSA_FIPS_WI
TH_3DES_EDE_CBC_SHA,TLS_KRB5_WITH_3DES_EDE_CBC_MD5,TLS_KRB5_WITH_3DES_EDE_CBC_SHA,TLS_ECD
H_ECDSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDH_RSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_ECDSA_WITH_3DE
S_EDE_CBC_SHA,TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,TLS_DHE_DSS_WITH_AES_128_CBC_SHA,TLS_DH
E_DSS_WITH_AES_256_CBC_SHA,TLS_DHE_RSA_WITH_AES_128_CBC_SHA,TLS_DHE_RSA_WITH_AES_256_CBC_
SHA,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_ECDH_ECDSA_WITH_AES_128
_CBC_SHA,TLS_ECDH_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDH_RSA_WITH_AES_128_CBC_SHA,TLS_ECDH_R
SA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256
_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA"

root@pcs21b> grep cipher /var/webconsole/domains/console/conf/console.xml


ciphers="SSL_DHE_DSS_WITH_RC4_128_SHA,SSL_RSA_WITH_RC4_128_MD5,SSL_RSA_WITH_RC4_128_SHA,T
LS_KRB5_WITH_RC4_128_MD5,TLS_KRB5_WITH_RC4_128_SHA,TLS_ECDH_ECDSA_WITH_RC4_128_SHA,TLS_EC
DH_RSA_WITH_RC4_128_SHA,TLS_ECDHE_ECDSA_WITH_RC4_128_SHA,TLS_ECDHE_RSA_WITH_RC4_128_SHA,S
SL_DH_DSS_WITH_3DES_EDE_CBC_SHA,SSL_DH_RSA_WITH_3DES_EDE_CBC_SHA,SSL_DHE_DSS_WITH_3DES_ED
E_CBC_SHA,SSL_DHE_RSA_WITH_3DES_EDE_CBC_SHA,SSL_RSA_WITH_3DES_EDE_CBC_SHA,SSL_RSA_FIPS_WI
TH_3DES_EDE_CBC_SHA,TLS_KRB5_WITH_3DES_EDE_CBC_MD5,TLS_KRB5_WITH_3DES_EDE_CBC_SHA,TLS_ECD
H_ECDSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDH_RSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_ECDSA_WITH_3DE
S_EDE_CBC_SHA,TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,TLS_DHE_DSS_WITH_AES_128_CBC_SHA,TLS_DH
E_DSS_WITH_AES_256_CBC_SHA,TLS_DHE_RSA_WITH_AES_128_CBC_SHA,TLS_DHE_RSA_WITH_AES_256_CBC_
SHA,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_ECDH_ECDSA_WITH_AES_128
_CBC_SHA,TLS_ECDH_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDH_RSA_WITH_AES_128_CBC_SHA,TLS_ECDH_R
SA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256
_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA"
ciphers="SSL_DHE_DSS_WITH_RC4_128_SHA,SSL_RSA_WITH_RC4_128_MD5,SSL_RSA_WITH_RC4_128_SHA,T
LS_KRB5_WITH_RC4_128_MD5,TLS_KRB5_WITH_RC4_128_SHA,TLS_ECDH_ECDSA_WITH_RC4_128_SHA,TLS_EC
DH_RSA_WITH_RC4_128_SHA,TLS_ECDHE_ECDSA_WITH_RC4_128_SHA,TLS_ECDHE_RSA_WITH_RC4_128_SHA,S
SL_DH_DSS_WITH_3DES_EDE_CBC_SHA,SSL_DH_RSA_WITH_3DES_EDE_CBC_SHA,SSL_DHE_DSS_WITH_3DES_ED
E_CBC_SHA,SSL_DHE_RSA_WITH_3DES_EDE_CBC_SHA,SSL_RSA_WITH_3DES_EDE_CBC_SHA,SSL_RSA_FIPS_WI
TH_3DES_EDE_CBC_SHA,TLS_KRB5_WITH_3DES_EDE_CBC_MD5,TLS_KRB5_WITH_3DES_EDE_CBC_SHA,TLS_ECD
H_ECDSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDH_RSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_ECDSA_WITH_3DE
S_EDE_CBC_SHA,TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,TLS_DHE_DSS_WITH_AES_128_CBC_SHA,TLS_DH
E_DSS_WITH_AES_256_CBC_SHA,TLS_DHE_RSA_WITH_AES_128_CBC_SHA,TLS_DHE_RSA_WITH_AES_256_CBC_
SHA,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_ECDH_ECDSA_WITH_AES_128
_CBC_SHA,TLS_ECDH_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDH_RSA_WITH_AES_128_CBC_SHA,TLS_ECDH_R
SA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256
_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA"

root@pcs21b>

After deleting:

root@pcs21b> grep cipher /var/webconsole/domains/console/conf/server.xml


root@pcs21b> grep cipher /var/webconsole/domains/console/conf/console.xml

Then clear the maintenance state of this service.


root@pcs21b> svcadm clear /system/webconsole
root@pcs21b> svcadm enable /system/webconsole
root@pcs21b> svcs -l /system/webconsole
fmri         svc:/system/webconsole:console
name         java web console
enabled      true
state        online
next_state   none
state_time   Tue May 07 13:03:53 2013
logfile      /var/svc/log/system-webconsole:console.log
restarter    svc:/system/svc/restarter:default
contract_id  170
dependency   require_all/none svc:/milestone/network (online)
dependency   require_all/refresh svc:/milestone/name-services (online)
dependency   require_all/none svc:/system/filesystem/local (online)
dependency   optional_all/none svc:/system/filesystem/autofs (disabled) svc:/network/nfs/client (disabled)
dependency   require_all/none svc:/system/system-log (online)
root@pcs21b>
Stop and start the webconsole service; the steps are shown below.
root@pcs21b> /usr/sbin/smcwebserver stop
Shutting down Oracle Java(TM) Web Console Version 3.1 ...
The console is stopped
root@pcs21b> svccfg -s svc:/system/webconsole setprop options/tcp_listen = true
root@pcs21b> /usr/sbin/smcwebserver start
Starting Oracle Java(TM) Web Console Version 3.1 ...
The console is running
root@pcs21b>

Do the above steps on both PCS nodes.


5.6.8 Apply following patches (DV6307 & DV5733)


Refer to section 4.5.9 Apply following patches (DV6307 & DV5733).

5.6.9 Check the file /rtp_environ.txt


Check whether the content of the file /rtp_environ.txt has $PATH in the second line from the end; if not, add it as shown in the example below.

Example of /rtp_environ.txt without $PATH in the second line from the end:


root@pcs25a> tail -2 /rtp_environ.txt
PATH=/bin:/sbin:/usr/sbin:/usr/bin:/opt/bin:/opt/SMAW/SMAWrtp/bin:/expo
rt/home/oracle/products/10.2.0/bin:/opt/SMAW/SMAWjdk/1.6/bin
export PATH
root@pcs25a>

Example of /rtp_environ.txt with $PATH in the second line from the end (this is the desired content):


root@pcs21a> tail -2 /rtp_environ.txt
PATH=$PATH:/bin:/sbin:/usr/sbin:/usr/bin:/opt/bin:/opt/SMAW/SMAWrtp/bin
:/export/home/oracle/products/10.2.0/bin:/opt/SMAW/SMAWjdk/1.6/bin
export PATH
root@pcs21a>

Do these steps on both PCS nodes.
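A quick sketch to check whether the $PATH prefix is already present (run it before and after the edit):

# grep -c 'PATH=\$PATH' /rtp_environ.txt     # prints 1 (or more) if the prefix is present, 0 if it is missing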

5.6.10 Correct the file CfrAll_BoundResources.tcn


Apply the following workaround steps as part of the post-installation:

1. Take a backup of the CfrAll_BoundResources.tcn file in the /opt/SMAW/SMAWrtp/cust_conf and /export/home/rtp99/99/cust_conf/ folders on both nodes.
2. Edit the CfrAll_BoundResources.tcn file in both folders on both nodes and remove the D from the line shown below (a scripted sketch follows step 4).

From: Before the change


APC_NE3S_DFMAgent::EXT.NE3S_DOMAgentTomcatAlias::0

To: After the change


APC_NE3S_DFMAgent::EXT.NE3S_OMAgentTomcatAlias::0


3. Save the file.
4. Restart the RtpComCRHdl01 process on both nodes from TSP.
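A minimal scripted sketch of steps 1 and 2 for one of the folders (repeat it for the second folder and on both nodes; the sed expression simply removes the extra D named above):

# cd /opt/SMAW/SMAWrtp/cust_conf
# cp -p CfrAll_BoundResources.tcn CfrAll_BoundResources.tcn.bak                                       # step 1: backup
# sed 's/NE3S_DOMAgentTomcatAlias/NE3S_OMAgentTomcatAlias/' CfrAll_BoundResources.tcn.bak > CfrAll_BoundResources.tcn   # step 2: edit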

5.6.11 PAM: Authentication failed for rtp99 from clusternode*-priv


After the FI, an upgrade, a reboot of both nodes, or an RTP restart, you may see the following errors in /var/adm/messages or on the console:
Jun 27 14:46:48 pcs40b sshd[24016]: [ID 800047 auth.error] error: PAM:
Authentication failed for rtp99 from clusternode1-priv
Jun 27 14:46:52 pcs40b sshd[24071]: [ID 800047 auth.error] error: PAM:
Authentication failed for rtp99 from clusternode1-priv
Jun 27 14:46:52 pcs40b sshd[24071]: [ID 800047 auth.error] error: PAM:
Authentication failed for rtp99 from clusternode1-priv
Jun 27 14:46:52 pcs40b sshd[24071]: [ID 800047 auth.error] error: PAM:
Authentication failed for rtp99 from clusternode1-priv
Jun 27 14:46:55 pcs40b sshd[24212]: [ID 800047 auth.error] error: PAM:
Authentication failed for rtp99 from clusternode1-priv
Jun 27 14:46:55 pcs40b sshd[24212]: [ID 800047 auth.error] error: PAM:
Authentication failed for rtp99 from clusternode1-priv
Jun 27 14:46:56 pcs40b sshd[24212]: [ID 800047 auth.error] error: PAM:
Authentication failed for rtp99 from clusternode1-priv
Jun 27 14:46:59 pcs40b sshd[24249]: [ID 800047 auth.error] error: PAM:
Authentication failed for rtp99 from clusternode1-priv
Jun 27 14:46:59 pcs40b sshd[24249]: [ID 800047 auth.error] error: PAM:
Authentication failed for rtp99 from clusternode1-priv
Jun 27 14:46:59 pcs40b sshd[24249]: [ID 800047 auth.error] error: PAM:
Authentication failed for rtp99 from clusternode1-priv
Jun 27 16:07:51 pcs40b sshd[17614]: [ID 800047 auth.error] error: PAM:
Authentication failed for rtp99 from clusternode1-priv

To solve this problem you must restart the processes related to rtp communication
(RtpSendToNode and RtpRecvFromNode) on the affected node as root user.
RtpRecvFromNode01_02.class0
RtpSendToNode01_02.class0
RtpRecvFromNode01_02.class1
RtpSendToNode01_02.class1
RtpRecvFromNode01_02.class2
RtpSendToNode01_02.class2
RtpRecvFromNode01_02.class3
RtpSendToNode01_02.class3
RtpRecvFromNode01_02.class4
RtpSendToNode01_02.class4
RtpRecvFromNode01_02.class5
RtpSendToNode01_02.class5
RtpRecvFromNode01_02.class6

The short command would be:


# ps -ef | grep -i rtp | grep -i node | grep -i class | awk {'print $2'} | xargs -I {} kill {}
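An alternative sketch using pgrep/pkill (it assumes the process names follow the Rtp*Node*class pattern listed above; review the matches before killing them):

# pgrep -fl 'Rtp(SendTo|RecvFrom)Node.*class'      # list the processes that would be terminated
# pkill -f 'Rtp(SendTo|RecvFrom)Node.*class'       # then terminate them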

5.6.12 Re-dimensioning the context value

By default, a PCS cluster is installed with a context (session) capacity of 6.39 million. If you want to reduce this value, follow the steps below as the root user.
# cp -p /opt/SMAW/SMAWrtp/cust_conf/Pcs.parm /opt/SMAW/SMAWrtp/cust_conf/Pcs.parm_red
# sed '/Rtp\/Ctx\/[0-9]\{3\}\// s/= [0-9]\{7\}/= 'NEW_VALUE'/g'
/opt/SMAW/SMAWrtp/cust_conf/Pcs.parm > /tmp/Pcs.parm
# cp /tmp/Pcs.parm /opt/SMAW/SMAWrtp/cust_conf/Pcs.parm
# chmod 644 /opt/SMAW/SMAWrtp/cust_conf/Pcs.parm
# chown rtp99:dba /opt/SMAW/SMAWrtp/cust_conf/Pcs.parm
Where 'NEW_VALUE' is the reduced context value (a worked sketch follows below).
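For illustration, a sketch with a hypothetical reduced value of 1000000 (substitute your own target), including a quick spot-check of the result before copying the file back:

# sed '/Rtp\/Ctx\/[0-9]\{3\}\// s/= [0-9]\{7\}/= 1000000/g' /opt/SMAW/SMAWrtp/cust_conf/Pcs.parm > /tmp/Pcs.parm
# grep 'Rtp/Ctx/' /tmp/Pcs.parm | head -3          # the values should now read 1000000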
Do the above steps on both nodes. Then log in to the nodes as the rtp99 user and run:
stopRTP on node 1
stopRTP on node 2
deleteRTP on node 1
createRTP on node 1
startRTP on node 1
startRTP on node 2

5.6.13 Apply the following bin and lib


Note: This section is mandatory only after a PCS6.3_0.3 first installation.
Follow the steps below to update the BIN/LIB packages on an RMS cluster.
Step 1: Download the PCS.tar.gz and pcs_hotfix.sh files from the download area (refer to the release notes for the download information).
Step 2: Copy the PCS.tar.gz and pcs_hotfix.sh files to the /var directory on both PCS nodes 1 & 2 and give full permissions to both files.
root@pcs21a> cd /var
root@pcs21a> chmod 777 PCS.tar.gz pcs_hotfix.sh
root@pcs21a> ls -lrt
-rwxrwxrwx   1 root     root     83943173 Oct  9 11:00 PCS.tar.gz
-rwxrwxrwx   1 root     root         1627 Oct  9 11:30 pcs_hotfix.sh

root@pcs21b> cd /var
root@pcs21b> chmod 777 PCS.tar.gz pcs_hotfix.sh
root@pcs21b> ls -lrt
-rwxrwxrwx   1 root     root     83943173 Oct  9 11:00 PCS.tar.gz
-rwxrwxrwx   1 root     root         1627 Oct  9 11:30 pcs_hotfix.sh
Step 3: Now bring RTP down on both PCS nodes using the stopRTP command as the rtp99 user.
Note: Execute it on NODE-1 first; once all processes on NODE-1 are completely down, execute it on NODE-2.
Example:
NODE-1:
bash-3.00$ stopRTP
Writing log to /export/home/rtp99/99/log/stopRTP.19566.4711
stopping SuperNodeManager
[/opt/SMAW/SMAWrtp/bin/RtpNodeCheck] Return success
sending USR2 to pid: 4055
.............................................................................................................................................
....................................
SuperNodeManager stopped
bash-3.00$
NODE-2:
bash-3.00$ stopRTP
Writing log to /export/home/rtp99/99/log/stopRTP.20673.4711
stopping SuperNodeManager
[/opt/SMAW/SMAWrtp/bin/RtpNodeCheck] Return success
sending USR2 to pid: 3912
.............................................................................................................................................
......................................
SuperNodeManager stopped
bash-3.00$
Step 4: Now update the new BIN & LIB packages using the pcs_hotfix.sh script. Please follow the steps given below.
Before starting the HF BIN and LIB installation, create a "logs" directory under /export/home/rtp99/ as the root user (create it on both nodes):
root@pcs21a> pwd
/export/home/rtp99
root@pcs21a> mkdir logs
root@pcs21a> chmod 777 logs
root@pcs21a> chown rtp99:dba logs
root@pcs21a> ls -lrt

Step 4.1: Go to the /var directory on PCS node-1 as the root user.

Step 4.2: Execute the command below.
root@pcs21a> cd /var
root@pcs21a> ls -lrt
root@pcs21a> ./pcs_hotfix.sh
BACKUP IS SUCCESFUL. Backup is in /opt/Backup_revert_131009133204
PLEASE VERIFY IF THE BACKUP IS SUCCESFUL AND THEN PROCEED WITH NEXT STEP...
Installation logs available at /export/home/rtp99/logs/pcs63HFInstall_131009133204.log
To continue with HF installation type - 1

Please cross-check that the backup of the current BIN/LIB files was created under the /opt/Backup_revert_131009133204 path. If the backup was created, press "1" and Enter to replace the BIN/LIB packages and apply the HF.
root@pcs21a> cd /opt
root@pcs21a> ls -lrt
drwxr-x---   4 root     root         512 Oct  9 13:32 Backup_revert_131009133204

Step 4.3: After entering 1 to proceed with the HF installation, logs like the following are shown.
To continue with HF installation type - 1
1
INPUT = 1
HF INSTALLATION ONGOING
x PCS, 0 bytes, 0 tape blocks
x PCS/bin, 0 bytes, 0 tape blocks
x PCS/bin/updatesql, 815 bytes, 2 tape blocks
x PCS/bin/pcsLdapDispatcher, 1125112 bytes, 2198 tape blocks
x PCS/bin/pcsDiaDispatcher, 1194112 bytes, 2333 tape blocks
x PCS/bin/pcsConfig, 1198192 bytes, 2341 tape blocks
x PCS/bin/crash.d, 5043 bytes, 10 tape blocks
x PCS/bin/pcsGenericPluginInterface, 1063376 bytes, 2077 tape blocks
x PCS/bin/PCSpluginDatausage.sh, 893 bytes, 2 tape blocks
x PCS/bin/RtpListenInfo.pl, 16195 bytes, 32 tape blocks
x PCS/bin/PCSSubTraceDataAnalyzer.sh, 3668 bytes, 8 tape blocks
x PCS/bin/RtpHttpsTool.pl, 110876 bytes, 217 tape blocks
x PCS/bin/pcsCollectLogs.sh, 2980 bytes, 6 tape blocks
x PCS/bin/pcsTcpAdapter, 1206600 bytes, 2357 tape blocks
x PCS/bin/pcsNotificationDispatcher, 1073216 bytes, 2097 tape blocks
x PCS/bin/pcsCore, 1325944 bytes, 2590 tape blocks
x PCS/bin/pcsStatisticsManager, 1059016 bytes, 2069 tape blocks
x PCS/bin/PCSHealthCheck.sh, 17370 bytes, 34 tape blocks
x PCS/bin/pcsContextApp, 1023488 bytes, 1999 tape blocks
x PCS/bin/pcsOneNdsSoapDispatcher, 1064904 bytes, 2080 tape blocks
x PCS/bin/pcsGenericNotificationDispatcher, 1099048 bytes, 2147 tape blocks
x PCS/bin/pcsSessionContextManager, 1107112 bytes, 2163 tape blocks
x PCS/bin/pcsIaDispatcher, 1117624 bytes, 2183 tape blocks

x PCS/lib, 0 bytes, 0 tape blocks


x PCS/lib/libpcsPRE.so, 193144 bytes, 378 tape blocks
x PCS/lib/libclntsh.so.10.1, 51258248 bytes, 100114 tape blocks
x PCS/lib/libclntsh.so.11.1, 51258248 bytes, 100114 tape blocks
x PCS/lib/libxercesc.so, 56088584 bytes, 109549 tape blocks
x PCS/lib/libsmpp.so, 523512 bytes, 1023 tape blocks
x PCS/lib/libpcsTransport.so, 525376 bytes, 1027 tape blocks
x PCS/lib/libUMSsys.so, 146192 bytes, 286 tape blocks
x PCS/lib/pluginInterface.jar, 7657 bytes, 15 tape blocks
x PCS/lib/libpcsGenericNotificationDispatcher.so, 197952 bytes, 387 tape blocks
x PCS/lib/pdfpre.jar, 9072 bytes, 18 tape blocks
x PCS/lib/libpcsCore.so, 9365824 bytes, 18293 tape blocks
x PCS/lib/libpcsGenericPlugin.so, 328504 bytes, 642 tape blocks
x PCS/lib/sunxacml.jar, 189980 bytes, 372 tape blocks
x PCS/lib/libpcsConfig.so, 434592 bytes, 849 tape blocks
x PCS/lib/libpdfcppdia.so, 4168744 bytes, 8143 tape blocks
x PCS/lib/GenericNotificationPlugin.jar, 4603 bytes, 9 tape blocks
x PCS/lib/samples.jar, 6805 bytes, 14 tape blocks
x PCS/lib/libpcsCommon.so, 16120584 bytes, 31486 tape blocks
x PCS/lib/libpcsTcpAdapter.so, 159552 bytes, 312 tape blocks
x PCS/lib/libpcsStatisticsManager.so, 183352 bytes, 359 tape blocks
x PCS/lib/libpcsNotificationDispatcher.so, 216896 bytes, 424 tape blocks
x PCS/lib/libboost.so, 128552 bytes, 252 tape blocks
x PCS/lib/libpcsMegacoDispatcher.so, 1538840 bytes, 3006 tape blocks
x PCS/lib/libpcsOneNdsSoapDispatcher.so, 399400 bytes, 781 tape blocks
x PCS/lib/libpcsLdapDispatcher.so, 922336 bytes, 1802 tape blocks
x PCS/lib/libclntsh.so, 51258248 bytes, 100114 tape blocks
x PCS/lib/libpcsExpat.so, 483240 bytes, 944 tape blocks
x PCS/lib/libnnz11.so, 8070520 bytes, 15763 tape blocks
x PCS/lib/libpcsSessionContextManager.so, 585184 bytes, 1143 tape blocks
x PCS/lib/libpcsDiaDispatcher.so, 1169648 bytes, 2285 tape blocks
x PCS/lib/libpcsContextApp.so, 111424 bytes, 218 tape blocks
x PCS/lib/libpcsSPR.so, 422720 bytes, 826 tape blocks
PCS 6.3HF Install - Completed
root@pcs21a>
Step 4.4: After completion of the HF installation, cross-check that the BIN & LIB files are updated.
root@pcs21a> cd /opt/PCS/bin
root@pcs21a> ls -lrt
total 29278
-rwxrwxrwx   1 rtp99    dba        17370 Oct  9 13:36 PCSHealthCheck.sh
-rwxrwxrwx   1 rtp99    dba         3668 Oct  9 13:36 PCSSubTraceDataAnalyzer.sh
-rwxrwxrwx   1 rtp99    dba          893 Oct  9 13:36 PCSpluginDatausage.sh
-rwxrwxrwx   1 rtp99    dba       110876 Oct  9 13:36 RtpHttpsTool.pl
-rwxrwxrwx   1 rtp99    dba        16195 Oct  9 13:36 RtpListenInfo.pl
-rwxrwxrwx   1 rtp99    dba         5043 Oct  9 13:36 crash.d
-rwxrwxrwx   1 rtp99    dba         2980 Oct  9 13:36 pcsCollectLogs.sh
-rwxrwxrwx   1 rtp99    dba      1198192 Oct  9 13:36 pcsConfig
-rwxrwxrwx   1 rtp99    dba      1023488 Oct  9 13:36 pcsContextApp
-rwxrwxrwx   1 rtp99    dba      1325944 Oct  9 13:36 pcsCore
-rwxrwxrwx   1 rtp99    dba      1194112 Oct  9 13:36 pcsDiaDispatcher
-rwxrwxrwx   1 rtp99    dba      1099048 Oct  9 13:36 pcsGenericNotificationDispatcher
-rwxrwxrwx   1 rtp99    dba      1063376 Oct  9 13:36 pcsGenericPluginInterface
-rwxrwxrwx   1 rtp99    dba      1117624 Oct  9 13:36 pcsIaDispatcher
-rwxrwxrwx   1 rtp99    dba      1125112 Oct  9 13:36 pcsLdapDispatcher
-rwxrwxrwx   1 rtp99    dba      1073216 Oct  9 13:36 pcsNotificationDispatcher
-rwxrwxrwx   1 rtp99    dba      1064904 Oct  9 13:36 pcsOneNdsSoapDispatcher
-rwxrwxrwx   1 rtp99    dba      1107112 Oct  9 13:36 pcsSessionContextManager
-rwxrwxrwx   1 rtp99    dba      1059016 Oct  9 13:36 pcsStatisticsManager
-rwxrwxrwx   1 rtp99    dba      1206600 Oct  9 13:36 pcsTcpAdapter
-rwxrwxrwx   1 rtp99    dba          815 Oct  9 13:36 updatesql
root@pcs21a>

root@pcs21a> cd /opt/PCS/lib
root@pcs21a> ls -lrt
-rwxrwxrwx   1 rtp99    dba         4603 Oct  9 13:36 GenericNotificationPlugin.jar.0631181_130828
-rwxrwxrwx   1 rtp99    dba       146192 Oct  9 13:36 libUMSsys.so.0631181_130828
-rwxrwxrwx   1 rtp99    dba       128552 Oct  9 13:36 libboost.so.0631181_130828
-rwxrwxrwx   1 rtp99    dba     51258248 Oct  9 13:36 libclntsh.so.0631181_130828
-rwxrwxrwx   1 rtp99    dba     51258248 Oct  9 13:36 libclntsh.so.10.1.0631181_130828
-rwxrwxrwx   1 rtp99    dba     51258248 Oct  9 13:36 libclntsh.so.11.1.0631181_130828
-rwxrwxrwx   1 rtp99    dba      8070520 Oct  9 13:36 libnnz11.so.0631181_130828
-rwxrwxrwx   1 rtp99    dba     16120584 Oct  9 13:36 libpcsCommon.so.0631181_130828
-rwxrwxrwx   1 rtp99    dba       434592 Oct  9 13:36 libpcsConfig.so.0631181_130828
-rwxrwxrwx   1 rtp99    dba       111424 Oct  9 13:36 libpcsContextApp.so.0631181_130828
-rwxrwxrwx   1 rtp99    dba      9365824 Oct  9 13:36 libpcsCore.so.0631181_130828
-rwxrwxrwx   1 rtp99    dba      1169648 Oct  9 13:36 libpcsDiaDispatcher.so.0631181_130828
-rwxrwxrwx   1 rtp99    dba       483240 Oct  9 13:36 libpcsExpat.so.0631181_130828
-rwxrwxrwx   1 rtp99    dba       197952 Oct  9 13:36 libpcsGenericNotificationDispatcher.so.0631181_130828
-rwxrwxrwx   1 rtp99    dba       328504 Oct  9 13:36 libpcsGenericPlugin.so.0631181_130828
-rwxrwxrwx   1 rtp99    dba       922336 Oct  9 13:36 libpcsLdapDispatcher.so.0631181_130828
-rwxrwxrwx   1 rtp99    dba      1538840 Oct  9 13:36 libpcsMegacoDispatcher.so.0631181_130828
-rwxrwxrwx   1 rtp99    dba       216896 Oct  9 13:36 libpcsNotificationDispatcher.so.0631181_130828
-rwxrwxrwx   1 rtp99    dba       399400 Oct  9 13:36 libpcsOneNdsSoapDispatcher.so.0631181_130828
-rwxrwxrwx   1 rtp99    dba       193144 Oct  9 13:36 libpcsPRE.so.0631181_130828
-rwxrwxrwx   1 rtp99    dba       422720 Oct  9 13:36 libpcsSPR.so.0631181_130828
-rwxrwxrwx   1 rtp99    dba       585184 Oct  9 13:36 libpcsSessionContextManager.so.0631181_130828
-rwxrwxrwx   1 rtp99    dba       183352 Oct  9 13:36 libpcsStatisticsManager.so.0631181_130828
-rwxrwxrwx   1 rtp99    dba       159552 Oct  9 13:36 libpcsTcpAdapter.so.0631181_130828
-rwxrwxrwx   1 rtp99    dba       525376 Oct  9 13:36 libpcsTransport.so.0631181_130828
-rwxrwxrwx   1 rtp99    dba      4168744 Oct  9 13:36 libpdfcppdia.so.0631181_130828
-rwxrwxrwx   1 rtp99    dba       523512 Oct  9 13:36 libsmpp.so.0631181_130828
-rwxrwxrwx   1 rtp99    dba     56088584 Oct  9 13:36 libxercesc.so.0631181_130828
-rwxrwxrwx   1 rtp99    dba         9072 Oct  9 13:36 pdfpre.jar.0631181_130828
-rwxrwxrwx   1 rtp99    dba         7657 Oct  9 13:36 pluginInterface.jar.0631181_130828
-rwxrwxrwx   1 rtp99    dba         6805 Oct  9 13:36 samples.jar.0631181_130828
-rwxrwxrwx   1 rtp99    dba       189980 Oct  9 13:36 sunxacml.jar.0631181_130828
root@pcs21a>
Step 4.5: Please cross-check that the HF installation log file has been generated.
root@pcs21a> cd /export/home/rtp99/lo
local.cshrc
local.login
local.profile logs/
root@pcs21a> cd /export/home/rtp99/logs
root@pcs21a> ls -lrt
total 2
-rw-r-----   1 root     root         239 Oct  9 13:36 pcs63HFInstall_131009133204.log
root@pcs21a>
Step 5: At this point the HF installation on NODE-1 is finished.
Step 6: Repeat the procedure from step 3 to complete the HF installation on NODE-2 as well.
Step 7: Once the HF installation is completed on both NODE-1 and NODE-2, follow the procedure below to start the RTP processes.

Step 7.1: Start the RTP processes on NODE-2 first. Wait until NODE-2 is completely up with all processes before starting RTP on NODE-1.
$ bash
bash-3.00$ nohup startRTP &
[1] 9622
bash-3.00$ Sending output to nohup.out
Step 7.2: Now start the RTP processes on NODE-1 in the same way.

Step 8: After both nodes have come up, cross-check that all processes and applications are working fine, as sketched below.
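A minimal sketch of that cross-check, using the status command described in section 5.7.1 (run it on each node):

$ su - rtp99                # switch to the rtp99 user so that its profile is sourced
$ execRTPenv status1        # all components, including PCS and PCS Manager, should be shown as running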
Step 9: If any issue is faced during the HF installation, send the HF installation log file from /export/home/rtp99/logs for further verification.

5.6.13.1 HotFix fallback script


If you want to fall back from the hotfix applied in section 5.6.13 Apply the following bin and lib, follow the procedure below.
Note: Check the disk space availability on the root (/) file system of the PCS nodes; it should be > 1 GB. If less than 1 GB is free, delete any old HF backups from the /opt folder (a quick check is sketched below).
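A quick sketch of the disk space check:

# df -h /                   # the avail column for / should show more than 1 GB free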
Step 1: When the PCS6.3.0.3+HF is applied on an existing PCS6.3.0.3 installation, the changed bin and lib files are backed up in the "/opt/Backup_revert_$ID" folder.
Example: NODE-1:
root@pcs21a> cd /opt
root@pcs21a> ls -lrt
drwxr-x---   4 root     root         512 Oct  9 13:32 Backup_revert_131009133204

Example: NODE-2:
root@pcs21b> cd /opt
root@pcs21b> ls -lrt
drwxr-x---   4 root     root         512 Oct 14 16:56 Backup_revert_131009134458

If the directory /export/home/rtp99/logs/ does not exist, create it with the steps below (create it on both nodes) as the root user.
root@pcs50a> cd /export/home/rtp99/
root@pcs50a> ls -lrt
total 4
drwxr-xr-x. 4 root dba 4096 Oct 10 13:29 99
root@pcs50a> mkdir logs
root@pcs50a> chmod 777 logs
root@pcs50a> chown rtp99:dba logs

root@pcs50a>
root@pcs50a> ls -lrt
Step 2: Download the script pcs_hotfix_fallback.sh.gz from the download area (refer to the release notes for the download information), unzip it, and copy the script "pcs_hotfix_fallback.sh" to "/opt/Backup_revert_$ID" on both PCS nodes.
EXAMPLE: NODE-1:
root@pcs21a> cd /opt
root@pcs21a> ls -lrt
drwxr-x---   4 root     root         512 Oct 14 16:56 Backup_revert_131009133204
root@pcs21a> cd Backup_revert_131009133204
root@pcs21a> ls -lrt
total 10
drwxr-x---   2 root     root        2560 Oct  9 13:32 lib
drwxr-x---   2 root     root        1024 Oct  9 13:32 bin
-rwxrwxrwx   1 root     root         359 Oct 14 15:41 pcs_hotfix_fallback.sh
root@pcs21a>

EXAMPLE: NODE-2:
root@pcs21b> cd /opt
root@pcs21b> ls -lrt
drwxr-x---   4 root     root         512 Oct 14 16:56 Backup_revert_131009134458
root@pcs21b> cd Backup_revert_131009134458
root@pcs21b> ls -lrt
total 10
drwxr-x---   2 root     root        2560 Oct  9 13:45 lib
drwxr-x---   2 root     root        1024 Oct  9 13:45 bin
-rwxrwxrwx   1 root     root         359 Oct 14 15:41 pcs_hotfix_fallback.sh
root@pcs21b>
Step 3: Now bring RTP down on both nodes using the stopRTP command (as the rtp99 user).
Note: Execute it on NODE-1 first; once all processes on NODE-1 are completely down, execute it on NODE-2.
Example:NODE-1:
bash-3.00$ stopRTP
Writing log to /export/home/rtp99/99/log/stopRTP.19566.4711
stopping SuperNodeManager
[/opt/SMAW/SMAWrtp/bin/RtpNodeCheck] Return success
sending USR2 to pid: 4055
.............................................................................................................................................
....................................
SuperNodeManager stopped
bash-3.00$
Example:NODE-2:
bash-3.00$ stopRTP
Writing log to /export/home/rtp99/99/log/stopRTP.20673.4711

stopping SuperNodeManager
[/opt/SMAW/SMAWrtp/bin/RtpNodeCheck] Return success
sending USR2 to pid: 3912
.............................................................................................................................................
......................................
SuperNodeManager stopped
bash-3.00$

Step 4: From the same path, execute ./pcs_hotfix_fallback.sh as the root user.
Example: NODE-1:
root@pcs21a> cd /opt/Backup_revert_131009133204
root@pcs21a> ls -lrt
total 10
drwxr-x---   2 root     root        2560 Oct  9 13:32 lib
drwxr-x---   2 root     root        1024 Oct  9 13:32 bin
-rwxrwxrwx   1 root     root         359 Oct 14 15:41 pcs_hotfix_fallback.sh
root@pcs21a>
root@pcs21a> ./pcs_hotfix_fallback.sh
root@pcs21a>
Step 4.1: Cross-check the files; the backup files will have been removed from the backup folder:
---------------------------------------------------------------------
root@pcs21a> cd /opt/Backup_revert_131009133204
root@pcs21a> ls -lrt
total 2
-rwxrwxrwx   1 root     root         359 Oct 14 15:41 pcs_hotfix_fallback.sh
root@pcs21a>
Step 4.2: Cross-check whether the fallback was done properly:
--------------------------------------------------
root@pcs21a> cd /opt/PCS
root@pcs21a> ls -lrt
total 18
drwxrwxrwx 3 rtp99
dba
4608 Sep 20 14:52 conf
drwxrwxrwx 2 rtp99
dba
2560 Oct 14 17:04 lib
drwxrwxrwx 2 rtp99
dba
1024 Oct 14 17:04 bin
root@pcs21a> cd lib
root@pcs21a> ls -lrt
total 1004960
-rwxrwxrwx 1 rtp99
dba
4603 Oct 9 13:32 GenericNotificationPlugin.jar
-rwxrwxrwx 1 rtp99
dba
4603 Oct 9 13:32
GenericNotificationPlugin.jar.0631181_130828
-rwxrwxrwx 1 rtp99
dba
146192 Oct 9 13:32 libUMSsys.so
-rwxrwxrwx 1 rtp99
dba
146192 Oct 9 13:32
libUMSsys.so.0631181_130828
-rwxrwxrwx 1 rtp99
dba
128552 Oct 9 13:32 libboost.so
-rwxrwxrwx 1 rtp99
dba
128552 Oct 9 13:32
libboost.so.0631181_130828

-rwxrwxrwx 1 rtp99
dba
51258248 Oct 9 13:32 libclntsh.so
-rwxrwxrwx 1 rtp99
dba
51258248 Oct 9 13:32
libclntsh.so.0631181_130828
-rwxrwxrwx 1 rtp99
dba
51258248 Oct 9 13:32 libclntsh.so.10.1
-rwxrwxrwx 1 rtp99
dba
51258248 Oct 9 13:32
libclntsh.so.10.1.0631181_130828
-rwxrwxrwx 1 rtp99
dba
51258248 Oct 9 13:32 libclntsh.so.11.1
-rwxrwxrwx 1 rtp99
dba
51258248 Oct 9 13:32
libclntsh.so.11.1.0631181_130828
-rwxrwxrwx 1 rtp99
dba
323716 Oct 9 13:32 libexpat.so.0
-rwxrwxrwx 1 rtp99
dba
323716 Oct 9 13:32
libexpat.so.0.0631181_130828
-rwxrwxrwx 1 rtp99
dba
8070520 Oct 9 13:32 libnnz11.so
-rwxrwxrwx 1 rtp99
dba
8070520 Oct 9 13:32
libnnz11.so.0631181_130828
-rwxrwxrwx 1 rtp99
dba
16094920 Oct 9 13:32 libpcsCommon.so
-rwxrwxrwx 1 rtp99
dba
16094920 Oct 9 13:32
libpcsCommon.so.0631181_130828
-rwxrwxrwx 1 rtp99
dba
434528 Oct 9 13:32 libpcsConfig.so
-rwxrwxrwx 1 rtp99
dba
434528 Oct 9 13:32
libpcsConfig.so.0631181_130828
-rwxrwxrwx 1 rtp99
dba
111384 Oct 9 13:32 libpcsContextApp.so
-rwxrwxrwx 1 rtp99
dba
111384 Oct 9 13:32
libpcsContextApp.so.0631181_130828
-rwxrwxrwx 1 rtp99
dba
9360824 Oct 9 13:32 libpcsCore.so
-rwxrwxrwx 1 rtp99
dba
9360824 Oct 9 13:32
libpcsCore.so.0631181_130828
-rwxrwxrwx 1 rtp99
dba
1169376 Oct 9 13:32 libpcsDiaDispatcher.so
-rwxrwxrwx 1 rtp99
dba
1169376 Oct 9 13:32
libpcsDiaDispatcher.so.0631181_130828
-rwxrwxrwx 1 rtp99
dba
483240 Oct 9 13:32 libpcsExpat.so
-rwxrwxrwx 1 rtp99
dba
483240 Oct 9 13:32
libpcsExpat.so.0631181_130828
-rwxrwxrwx 1 rtp99
dba
197928 Oct 9 13:32
libpcsGenericNotificationDispatcher.so
-rwxrwxrwx 1 rtp99
dba
197928 Oct 9 13:32
libpcsGenericNotificationDispatcher.so.0631181_130828
-rwxrwxrwx 1 rtp99
dba
328472 Oct 9 13:32 libpcsGenericPlugin.so
-rwxrwxrwx 1 rtp99
dba
328472 Oct 9 13:32
libpcsGenericPlugin.so.0631181_130828
-rwxrwxrwx 1 rtp99
dba
922232 Oct 9 13:32 libpcsLdapDispatcher.so
-rwxrwxrwx 1 rtp99
dba
922232 Oct 9 13:32
libpcsLdapDispatcher.so.0631181_130828
-rwxrwxrwx 1 rtp99
dba
1538728 Oct 9 13:32 libpcsMegacoDispatcher.so
-rwxrwxrwx 1 rtp99
dba
1538728 Oct 9 13:32
libpcsMegacoDispatcher.so.0631181_130828
-rwxrwxrwx 1 rtp99
dba
216872 Oct 9 13:32
libpcsNotificationDispatcher.so
-rwxrwxrwx 1 rtp99
dba
216872 Oct 9 13:32
libpcsNotificationDispatcher.so.0631181_130828


-rwxrwxrwx 1 rtp99
dba
399384 Oct 9 13:32
libpcsOneNdsSoapDispatcher.so
-rwxrwxrwx 1 rtp99
dba
399384 Oct 9 13:32
libpcsOneNdsSoapDispatcher.so.0631181_130828
-rwxrwxrwx 1 rtp99
dba
193072 Oct 9 13:32 libpcsPRE.so
-rwxrwxrwx 1 rtp99
dba
193072 Oct 9 13:32
libpcsPRE.so.0631181_130828
-rwxrwxrwx 1 rtp99
dba
422664 Oct 9 13:32 libpcsSPR.so
-rwxrwxrwx 1 rtp99
dba
422664 Oct 9 13:32
libpcsSPR.so.0631181_130828
-rwxrwxrwx 1 rtp99
dba
584928 Oct 9 13:32
libpcsSessionContextManager.so
-rwxrwxrwx 1 rtp99
dba
584928 Oct 9 13:32
libpcsSessionContextManager.so.0631181_130828
-rwxrwxrwx 1 rtp99
dba
183336 Oct 9 13:32 libpcsStatisticsManager.so
-rwxrwxrwx 1 rtp99
dba
183336 Oct 9 13:32
libpcsStatisticsManager.so.0631181_130828
-rwxrwxrwx 1 rtp99
dba
159512 Oct 9 13:32 libpcsTcpAdapter.so
-rwxrwxrwx 1 rtp99
dba
159512 Oct 9 13:32
libpcsTcpAdapter.so.0631181_130828
-rwxrwxrwx 1 rtp99
dba
525032 Oct 9 13:32 libpcsTransport.so
-rwxrwxrwx 1 rtp99
dba
525032 Oct 9 13:32
libpcsTransport.so.0631181_130828
-rwxrwxrwx 1 rtp99
dba
4168248 Oct 9 13:32 libpdfcppdia.so
-rwxrwxrwx 1 rtp99
dba
4168248 Oct 9 13:32
libpdfcppdia.so.0631181_130828
-rwxrwxrwx 1 rtp99
dba
523512 Oct 9 13:32 libsmpp.so
-rwxrwxrwx 1 rtp99
dba
523512 Oct 9 13:32
libsmpp.so.0631181_130828
-rwxrwxrwx 1 rtp99
dba
56088584 Oct 9 13:32 libxercesc.so
-rwxrwxrwx 1 rtp99
dba
56088584 Oct 9 13:32
libxercesc.so.0631181_130828
-rwxrwxrwx 1 rtp99
dba
9072 Oct 9 13:32 pdfpre.jar
-rwxrwxrwx 1 rtp99
dba
9072 Oct 9 13:32 pdfpre.jar.0631181_130828
-rwxrwxrwx 1 rtp99
dba
7657 Oct 9 13:32 pluginInterface.jar
-rwxrwxrwx 1 rtp99
dba
7657 Oct 9 13:32
pluginInterface.jar.0631181_130828
-rwxrwxrwx 1 rtp99
dba
6805 Oct 9 13:32 samples.jar
-rwxrwxrwx 1 rtp99
dba
6805 Oct 9 13:32
samples.jar.0631181_130828
-rwxrwxrwx 1 rtp99
dba
189980 Oct 9 13:32 sunxacml.jar
-rwxrwxrwx 1 rtp99
dba
189980 Oct 9 13:32
sunxacml.jar.0631181_130828
-rwxrwxrwx 1 rtp99
dba
21743 Oct 9 13:32 NSRPlugin.jar
root@pcs21a>

root@pcs21a> cd /opt/PCS/bin
root@pcs21a> ls -lrt
total 29278
-rwxrwxrwx 1 rtp99
dba
17370 Oct 9 13:32 PCSHealthCheck.sh
-rwxrwxrwx 1 rtp99
dba
PCSSubTraceDataAnalyzer.sh
-rwxrwxrwx 1 rtp99
dba
-rwxrwxrwx 1 rtp99
dba
-rwxrwxrwx 1 rtp99
dba
-rwxrwxrwx 1 rtp99
dba
-rwxrwxrwx 1 rtp99
dba
-rwxrwxrwx 1 rtp99
dba
-rwxrwxrwx 1 rtp99
dba
-rwxrwxrwx 1 rtp99
dba
-rwxrwxrwx 1 rtp99
dba
-rwxrwxrwx 1 rtp99
dba
pcsGenericNotificationDispatcher
-rwxrwxrwx 1 rtp99
dba
-rwxrwxrwx 1 rtp99
dba
-rwxrwxrwx 1 rtp99
dba
-rwxrwxrwx 1 rtp99
dba
-rwxrwxrwx 1 rtp99
dba
-rwxrwxrwx 1 rtp99
dba
-rwxrwxrwx 1 rtp99
dba
-rwxrwxrwx 1 rtp99
dba
-rwxrwxrwx 1 rtp99
dba
root@pcs21a>



3668 Oct 9 13:32
893 Oct
110876 Oct
16195 Oct
5043 Oct
2980 Oct
1198944 Oct
1023448 Oct
1325904 Oct
1194104 Oct
1099040 Oct

9 13:32 PCSpluginDatausage.sh
9 13:32 RtpHttpsTool.pl
9 13:32 RtpListenInfo.pl
9 13:32 crash.d
9 13:32 pcsCollectLogs.sh
9 13:32 pcsConfig
9 13:32 pcsContextApp
9 13:32 pcsCore
9 13:32 pcsDiaDispatcher
9 13:32

1063368 Oct
1117608 Oct
1125104 Oct
1073208 Oct
1064896 Oct
1107096 Oct
1059000 Oct
1206584 Oct
815 Oct

9 13:32 pcsGenericPluginInterface
9 13:32 pcsIaDispatcher
9 13:32 pcsLdapDispatcher
9 13:32 pcsNotificationDispatcher
9 13:32 pcsOneNdsSoapDispatcher
9 13:32 pcsSessionContextManager
9 13:32 pcsStatisticsManager
9 13:32 pcsTcpAdapter
9 13:32 updatesql

Example: NODE-2:
root@pcs21b> cd /opt/Backup_revert_131009134458
root@pcs21b> ls -lrt
total 10
drwxr-x---   2 root     root        2560 Oct  9 13:45 lib
drwxr-x---   2 root     root        1024 Oct  9 13:45 bin
-rwxrwxrwx   1 root     root         359 Oct 14 15:41 pcs_hotfix_fallback.sh
root@pcs21b>
Step 5: Repeat step 4 to carry out the fallback activity on NODE-2.
Step 6: Then start the stopped PCS processes with nohup startRTP as the rtp99 user, on PCS NODE-2 first; once NODE-2 is up with all processes, start NODE-1.
Example: NODE-2::
bash-3.00$ nohup startRTP &
[1] 20501
bash-3.00$ Sending output to nohup.out
Example: NODE-1::
$ bash
bash-3.00$ nohup startRTP &
[1] 29485
bash-3.00$ Sending output to nohup.out


Step 7: If you encounter any difficulty during the fallback procedure, collect the log file from the path below and share it with the support contact.
Example: NODE-1:
root@pcs21a> cd /export/home/rtp99/logs/
root@pcs21a> ls -lrt
total 2
-rw-r-----   1 root     root          62 Oct 14 17:04 pcs63HFFallBack.log
root@pcs21a>

Example: NODE-2:
root@pcs21b> cd /export/home/rtp99/logs
root@pcs21b> ls -lrt
total 4
-rw-r-----   1 root     root         239 Oct  9 13:46 pcs63HFInstall_131009134458.log
-rw-r-----   1 root     root          62 Oct 14 17:15 pcs63HFFallBack.log
root@pcs21b>

5.6.14 Password expired


Refer to the section Password expired.

5.6.15 TSP GUI using https


Refer to the section TSP GUI using https.
If it is a cluster, do the above steps on both PCS nodes.

5.7 Verification
5.7.1 PCS processes
On the PCS host, switch user to rtp99 and run execRTPenv status1. You should see that all components (including PCS and PCS Manager) are running.
NOTE: When you switch user to rtp99, you MUST use su - rtp99 so that the profile of the rtp99 user is sourced.

5.7.2 Using TSP GUI


To check the process status using the TSP GUI, open https://<IP_Address>:8099, where IP_Address is the PCS host Core IP address. You will see the following screen.


Figure 5-1 Opening page for TSP GUI

Click the Login button and log in with user id superad; a new window like the one below will open.


Figure 5-2 TSP GUI after login

Expand System Management on the left-hand side and click Process & Node; the right-hand side will look like the screenshot above. On the right-hand side you can expand each cluster node to verify the process status, as in the screenshot below.


Figure 5-3 Process status in TSP GUI


All the processes should be in running state.

5.7.3 Hostname
Check the hostname with uname -n on both hosts; the expected result is that the hostname of the Core LAN interface is shown. Check the hostname binding of the Core interface with cat /etc/hostname.<Core interface> and make sure the binding is correct.
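A minimal sketch of this check (e1000g0 is used here as the Core LAN primary interface, as in the ifconfig example in section 5.7.6; substitute the interface of your own hardware):

root@pcs21a> uname -n
pcs21a
root@pcs21a> cat /etc/hostname.e1000g0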


5.7.4 NTP configuration


Check whether NTP is working. NTP is activated if it was set in the TPD. The check can be performed with the command xntpdc:
root@pcs15a> xntpdc -p
     remote           local      st poll reach  delay   offset    disp
=======================================================================
=LOCAL(0)        127.0.0.1          64  377 0.00000  0.000000 0.01001
+clusternode2-pr 172.16.4.1      8 1024  376 0.00093 -0.000339 0.01472
*pcsis1          10.255.8.18     7 1024  377 0.00255  0.000503 0.00076
root@pcs15a>
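In addition, you can check that the NTP service itself is online (a sketch assuming the standard Solaris 10 service name):

# svcs svc:/network/ntp:default      # the state should be online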

5.7.5 Cluster Status


Running the command scstat displays the following information:
- Cluster Nodes
- Cluster Transport Paths
- Quorum information
- Device Groups, where you can find the Master/Primary and Standby/Secondary node information
- and more
Example:
root@pcs21a> scstat
------------------------------------------------------------------

-- Cluster Nodes --

                    Node name           Status
                    ---------           ------
  Cluster node:     pcs21a              Online
  Cluster node:     pcs21b              Online

------------------------------------------------------------------

-- Cluster Transport Paths --

                    Endpoint            Endpoint            Status
                    --------            --------            ------
  Transport path:   pcs21a:nxge4        pcs21b:nxge4        Path online
  Transport path:   pcs21a:e1000g3      pcs21b:e1000g3      Path online

------------------------------------------------------------------

-- Quorum Summary --

  Quorum votes possible:
  Quorum votes needed:
  Quorum votes present:

-- Quorum Votes by Node --

                    Node Name           Present Possible Status
                    ---------           ------- -------- ------
  Node votes:       pcs21a                               Online
  Node votes:       pcs21b                               Online

-- Quorum Votes by Device --

                    Device Name         Present Possible Status
                    -----------         ------- -------- ------
  Device votes:     /dev/did/rdsk/d13s2 1                Online

------------------------------------------------------------------

-- Device Group Servers --

                         Device Group        Primary             Secondary
                         ------------        -------             ---------
  Device group servers:  cfsdg               pcs21a              pcs21b
  Device group servers:  arch1dg             pcs21a              pcs21b
  Device group servers:  arch2dg             pcs21b              pcs21a
  Device group servers:  tickdg              pcs21a              pcs21b

-- Device Group Status --

                         Device Group        Status
                         ------------        ------
  Device group status:   cfsdg               Online
  Device group status:   arch1dg             Online
  Device group status:   arch2dg             Online
  Device group status:   tickdg              Online

-- Multi-owner Device Groups --

                             Device Group        Online Status
                             ------------        -------------
  Multi-owner device group:  oradg               pcs21a,pcs21b
  Multi-owner device group:  redo1dg             pcs21a,pcs21b
  Multi-owner device group:  redo2dg             pcs21a,pcs21b

------------------------------------------------------------------

-- Resource Groups and Resources --

              Group Name                 Resources
              ----------                 ---------
  Resources:  rac-framework-rg           rac_framework rac_udlm rac_svm
  Resources:  G_HIP4_010255008147        HIP4_010255008147
  Resources:  G_standard_services_1      standard_services_1
  Resources:  G_WEBSEC_1                 WEBSEC_1
  Resources:  G_WEBGUI_1                 WEBGUI_1
  Resources:  G_NE3S_pooledInvoker_1     NE3S_pooledInvoker_1
  Resources:  G_NE3S_namingServiceRmi_1  NE3S_namingServiceRmi_1
  Resources:  G_NE3S_namingService_1     NE3S_namingService_1
  Resources:  G_NE3S_jrmpInvoker_1       NE3S_jrmpInvoker_1
  Resources:  G_NE3S_dynClassloader_1    NE3S_dynClassloader_1
  Resources:  G_NE3S_HTTPS_1             NE3S_HTTPS_1
  Resources:  G_NE3S_HTTP_1              NE3S_HTTP_1
  Resources:  G_FTP_1                    FTP_1
  Resources:  G_CORBA-NAMING_1           CORBA-NAMING_1

-- Resource Groups --

          Group Name                 Node Name   State     Suspended
          ----------                 ---------   -----     ---------
  Group:  rac-framework-rg           pcs21a      Online    No
  Group:  rac-framework-rg           pcs21b      Online    No
  Group:  G_HIP4_010255008147        pcs21a      Online    No
  Group:  G_HIP4_010255008147        pcs21b      Offline   No
  Group:  G_standard_services_1      pcs21a      Online    No
  Group:  G_standard_services_1      pcs21b      Online    No
  Group:  G_WEBSEC_1                 pcs21a      Online    No
  Group:  G_WEBSEC_1                 pcs21b      Online    No
  Group:  G_WEBGUI_1                 pcs21a      Online    No
  Group:  G_WEBGUI_1                 pcs21b      Online    No
  Group:  G_NE3S_pooledInvoker_1     pcs21a      Online    No
  Group:  G_NE3S_pooledInvoker_1     pcs21b      Online    No
  Group:  G_NE3S_namingServiceRmi_1  pcs21a      Online    No
  Group:  G_NE3S_namingServiceRmi_1  pcs21b      Online    No
  Group:  G_NE3S_namingService_1     pcs21a      Online    No
  Group:  G_NE3S_namingService_1     pcs21b      Online    No
  Group:  G_NE3S_jrmpInvoker_1       pcs21a      Online    No
  Group:  G_NE3S_jrmpInvoker_1       pcs21b      Online    No
  Group:  G_NE3S_dynClassloader_1    pcs21a      Online    No
  Group:  G_NE3S_dynClassloader_1    pcs21b      Online    No
  Group:  G_NE3S_HTTPS_1             pcs21a      Online    No
  Group:  G_NE3S_HTTPS_1             pcs21b      Online    No
  Group:  G_NE3S_HTTP_1              pcs21a      Online    No
  Group:  G_NE3S_HTTP_1              pcs21b      Online    No
  Group:  G_FTP_1                    pcs21a      Online    No
  Group:  G_FTP_1                    pcs21b      Online    No
  Group:  G_CORBA-NAMING_1           pcs21a      Online    No
  Group:  G_CORBA-NAMING_1           pcs21b      Online    No

-- Resources --

             Resource Name              Node Name   State     Status Message
             -------------              ---------   -----     --------------
  Resource:  rac_framework              pcs21a      Online    Online
  Resource:  rac_framework              pcs21b      Online    Online
  Resource:  rac_udlm                   pcs21a      Online    Online
  Resource:  rac_udlm                   pcs21b      Online    Online
  Resource:  rac_svm                    pcs21a      Online    Online
  Resource:  rac_svm                    pcs21b      Online    Online
  Resource:  HIP4_010255008147          pcs21a      Online    Online - SharedAddress online.
  Resource:  HIP4_010255008147          pcs21b      Offline   Offline
  Resource:  standard_services_1        pcs21a      Online    Online
  Resource:  standard_services_1        pcs21b      Online    Online
  Resource:  WEBSEC_1                   pcs21a      Online    Online
  Resource:  WEBSEC_1                   pcs21b      Online    Online
  Resource:  WEBGUI_1                   pcs21a      Online    Online
  Resource:  WEBGUI_1                   pcs21b      Online    Online
  Resource:  NE3S_pooledInvoker_1       pcs21a      Online    Online
  Resource:  NE3S_pooledInvoker_1       pcs21b      Online    Online
  Resource:  NE3S_namingServiceRmi_1    pcs21a      Online    Online
  Resource:  NE3S_namingServiceRmi_1    pcs21b      Online    Online
  Resource:  NE3S_namingService_1       pcs21a      Online    Online
  Resource:  NE3S_namingService_1       pcs21b      Online    Online
  Resource:  NE3S_jrmpInvoker_1         pcs21a      Online    Online
  Resource:  NE3S_jrmpInvoker_1         pcs21b      Online    Online
  Resource:  NE3S_dynClassloader_1      pcs21a      Online    Online
  Resource:  NE3S_dynClassloader_1      pcs21b      Online    Online
  Resource:  NE3S_HTTPS_1               pcs21a      Online    Online
  Resource:  NE3S_HTTPS_1               pcs21b      Online    Online
  Resource:  NE3S_HTTP_1                pcs21a      Online    Online
  Resource:  NE3S_HTTP_1                pcs21b      Online    Online
  Resource:  FTP_1                      pcs21a      Online    Online
  Resource:  FTP_1                      pcs21b      Online    Online
  Resource:  CORBA-NAMING_1             pcs21a      Online    Online
  Resource:  CORBA-NAMING_1             pcs21b      Online    Online

------------------------------------------------------------------

-- IPMP Groups --

               Node Name   Group    Status    Adapter   Status
               ---------   -----    ------    -------   ------
  IPMP Group:  pcs21a      ps       Online    nxge6     Standby
  IPMP Group:  pcs21a      ps       Online    nxge2     Online
  IPMP Group:  pcs21a      backup   Online    nxge5     Standby
  IPMP Group:  pcs21a      backup   Online    e1000g2   Online
  IPMP Group:  pcs21a      ims      Online    nxge1     Standby
  IPMP Group:  pcs21a      ims      Online    e1000g1   Online
  IPMP Group:  pcs21a      core     Online    nxge0     Standby
  IPMP Group:  pcs21a      core     Online    e1000g0   Online
  IPMP Group:  pcs21b      ps       Online    nxge6     Standby
  IPMP Group:  pcs21b      ps       Online    nxge2     Online
  IPMP Group:  pcs21b      backup   Online    nxge5     Standby
  IPMP Group:  pcs21b      backup   Online    e1000g2   Online
  IPMP Group:  pcs21b      ims      Online    nxge1     Standby
  IPMP Group:  pcs21b      ims      Online    e1000g1   Online
  IPMP Group:  pcs21b      core     Online    nxge0     Standby
  IPMP Group:  pcs21b      core     Online    e1000g0   Online

-- IPMP Groups in Zones --

               Zone Name   Group    Status    Adapter   Status
               ---------   -----    ------    -------   ------

------------------------------------------------------------------
root@pcs21a>

Note: The above output may differ on your system, for example in the hostnames.


5.7.6 IP Address
To find which IP address is used for which communication, refer to the respective TPD that was used for the Host Specific package creation.
Below is the output from the cluster primary node:
root@pcs21a> ifconfig -a
lo0: flags=20010008c9<UP,LOOPBACK,RUNNING,NOARP,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
inet 127.0.0.1 netmask ff000000
lo0:1: flags=20010408c9<UP,LOOPBACK,RUNNING,NOARP,MULTICAST,DEPRECATED,IPv4,VIRTUAL> mtu
8232 index 1
inet 10.10.15.2 netmask ffffff00
e1000g0: flags=9040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER> mtu
1500 index 2
inet 10.255.8.113 netmask ffffff00 broadcast 10.255.8.255
test IP address
groupname core

Core LAN primary interface IPMP

Core LAN IPMP group name core

ether 0:21:28:6d:94:8e
e1000g0:1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
inet 10.255.8.111 netmask ffffff00 broadcast 10.255.8.255

Core LAN primary node IP

e1000g0:2: flags=1001040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,FIXEDMTU>
mtu 1500 index 2
inet 10.255.8.147 netmask ffffff00 broadcast 10.255.8.255

Core LAN HIP IP Address

e1000g0:3: flags=1040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4> mtu 1500 index 2


inet 10.255.8.117 netmask ffffff00 broadcast 10.255.8.255

Core LAN virtual IP Address

e1000g1: flags=9040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER> mtu


1500 index 3
inet 10.255.9.113 netmask ffffff00 broadcast 10.255.9.255
test IP address
groupname ims

PC LAN 1 primary interface IPMP

PC LAN 1 IPMP group name ims

ether 0:21:28:6d:94:8f
e1000g1:1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 3
inet 10.255.9.111 netmask ffffff00 broadcast 10.255.9.255

PC LAN 1 primary node IP

e1000g1:2: flags=1040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4> mtu 1500 index 3


inet 10.255.9.117 netmask ffffff00 broadcast 10.255.9.255

PC LAN 1 virtual IP address

e1000g2: flags=9040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER> mtu


1500 index 4
inet 192.168.23.113 netmask ffffff00 broadcast 192.168.23.255
interface IPMP test IP address
groupname backup

Backup LAN primary

Backup LAN IPMP group name backup

ether 0:21:28:6d:94:90
e1000g2:1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 4
inet 192.168.23.111 netmask ffffff00 broadcast 192.168.23.255
address

Backup LAN primary node IP

e1000g2:2: flags=1040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4> mtu 1500 index 4


inet 192.168.23.117 netmask ffffff00 broadcast 192.168.23.255


address

Backup LAN virtual IP

e1000g3: flags=1008843<UP,BROADCAST,RUNNING,MULTICAST,PRIVATE,IPv4> mtu 8168 index 12


inet 172.16.0.129 netmask ffffff80 broadcast 172.16.0.255
ether 0:21:28:6d:94:91
nxge0:
flags=69040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER,STANDBY,INACT
IVE> mtu 1500 index 5
inet 10.255.8.114 netmask ffffff00 broadcast 10.255.8.255
IPMP test IP address
groupname core

Core LAN secondary interface

Core LAN IPMP group name core

ether 0:21:28:72:96:fa
nxge1:
flags=69040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER,STANDBY,INACT
IVE> mtu 1500 index 6
inet 10.255.9.114 netmask ffffff00 broadcast 10.255.9.255
IPMP test IP address
groupname ims

PC LAN 1 secondary interface

PC LAN 1 IPMP group name ims

ether 0:21:28:72:96:fb
nxge2: flags=9040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER> mtu
1500 index 7
inet 10.255.10.113 netmask ffffff00 broadcast 10.255.10.255
interface IPMP test IP address

PC LAN 2 primary

groupname ps PC LAN 1 IPMP group name ps


ether 0:21:28:72:96:fc
nxge2:1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 7
inet 10.255.10.111 netmask ffffff00 broadcast 10.255.10.255 PC LAN 2 primary node IP
nxge2:2: flags=1040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4> mtu 1500 index 7
inet 10.255.10.117 netmask ffffff00 broadcast 10.255.10.255 PC LAN 2 virtual IP address
nxge3: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 8
inet 192.168.22.111 netmask ffffff00 broadcast 192.168.22.255

Admin LAN Primary node IP

ether 0:21:28:72:96:fd
nxge4: flags=1008843<UP,BROADCAST,RUNNING,MULTICAST,PRIVATE,IPv4> mtu 9194 index 11
inet 172.16.1.1 netmask ffffff80 broadcast 172.16.1.127
ether 0:21:28:72:9d:ba
nxge5:
flags=69040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER,STANDBY,INACT
IVE> mtu 1500 index 9
inet 192.168.23.114 netmask ffffff00 broadcast 192.168.23.255
interface IPMP test IP address
groupname backup

Backup LAN secondary

Backup LAN IPMP group name backup

ether 0:21:28:72:9d:bb
nxge6:
flags=69040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER,STANDBY,INACT
IVE> mtu 1500 index 10
inet 10.255.10.114 netmask ffffff00 broadcast 10.255.10.255
interface IPMP test IP address

PC LAN 2 secondary
groupname ps


PC LAN 1 IPMP group name ps

ether 0:21:28:72:9d:bc
clprivnet0: flags=1009843<UP,BROADCAST,RUNNING,MULTICAST,MULTI_BCAST,PRIVATE,IPv4> mtu
8168 index 13
inet 172.16.4.1 netmask fffffe00 broadcast 172.16.5.255
ether 0:0:0:0:0:1
lo0: flags=2002000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv6,VIRTUAL> mtu 8252 index 1
inet6 ::1/128
nxge3: flags=2004841<UP,RUNNING,MULTICAST,DHCP,IPv6> mtu 1500 index 8
inet6 fe80::221:28ff:fe72:96fd/10
ether 0:21:28:72:96:fd
nxge4: flags=200c841<UP,RUNNING,MULTICAST,DHCP,PRIVATE,IPv6> mtu 9194 index 11
inet6 fe80::221:28ff:fe72:9dba/10
ether 0:21:28:72:9d:ba
e1000g3: flags=200c841<UP,RUNNING,MULTICAST,DHCP,PRIVATE,IPv6> mtu 8168 index 12
inet6 fe80::221:28ff:fe6d:9491/10
ether 0:21:28:6d:94:91
root@pcs21a>

Below is the output from the cluster secondary node:


root@pcs21b> ifconfig -a
lo0: flags=20010008c9<UP,LOOPBACK,RUNNING,NOARP,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
inet 127.0.0.1 netmask ff000000
lo0:1: flags=20010088c9<UP,LOOPBACK,RUNNING,NOARP,MULTICAST,PRIVATE,IPv4,VIRTUAL> mtu
8232 index 1
inet 10.255.8.147 netmask ffffffff

Core LAN HIP address

lo0:2: flags=20010408c9<UP,LOOPBACK,RUNNING,NOARP,MULTICAST,DEPRECATED,IPv4,VIRTUAL> mtu


8232 index 1
inet 10.10.15.3 netmask ffffff00
e1000g0: flags=9040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER> mtu
1500 index 2
inet 10.255.8.115 netmask ffffff00 broadcast 10.255.8.255
IPMP test IP address
groupname core

Core LAN primary interface

Core LAN IPMP group name core

ether 0:21:28:6d:97:9a
e1000g0:1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
inet 10.255.8.112 netmask ffffff00 broadcast 10.255.8.255

Core LAN secondary node IP

e1000g1: flags=9040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER> mtu


1500 index 3
inet 10.255.9.115 netmask ffffff00 broadcast 10.255.9.255
test IP address
groupname ims

PC LAN 1 primary interface IPMP

PC LAN 1 IPMP group name ims

ether 0:21:28:6d:97:9b

REDKNEE

for internal use only

96 of 163

Siemens AG

6 PCS Manager Installation

e1000g1:1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 3


inet 10.255.9.112 netmask ffffff00 broadcast 10.255.9.255
e1000g2: flags=9040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER> mtu
1500 index 4
inet 192.168.23.115 netmask ffffff00 broadcast 192.168.23.255
primary interface IPMP test IP address
groupname backup

Backup LAN

255

Backup LAN

IPMP group name backup

ether 0:21:28:6d:97:9c
e1000g2:1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 4
inet 192.168.23.112 netmask ffffff00 broadcast 192.168.23.255
IP address

Backup LAN secondary node

e1000g3: flags=1008843<UP,BROADCAST,RUNNING,MULTICAST,PRIVATE,IPv4> mtu 8168 index 12


inet 172.16.0.130 netmask ffffff80 broadcast 172.16.0.255
ether 0:21:28:6d:97:9d
nxge0:
flags=69040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER,STANDBY,INACT
IVE> mtu 1500 index 5
inet 10.255.8.116 netmask ffffff00 broadcast 10.255.8.255
IPMP test IP address
groupname core

Core LAN secondary interface

Core LAN IPMP group name core

ether 0:21:28:72:ac:7a
nxge1:
flags=69040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER,STANDBY,INACT
IVE> mtu 1500 index 6
inet 10.255.9.116 netmask ffffff00 broadcast 10.255.9.255 PC LAN 1 secondary interface IPMP
test IP address
groupname ims

PC LAN 1 IPMP group name ims

ether 0:21:28:72:ac:7b
nxge2: flags=9040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER> mtu
1500 index 7
inet 10.255.10.115 netmask ffffff00 broadcast 10.255.10.255
interface IPMP test IP address
groupname ps

PC LAN 2 primary

PC LAN 2 IPMP group name ps

ether 0:21:28:72:ac:7c
nxge2:1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 7
inet 10.255.10.112 netmask ffffff00 broadcast 10.255.10.255
IP address

PC LAN 2 secondary node

nxge3: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 8


inet 192.168.22.112 netmask ffffff00 broadcast 192.168.22.255 Admin LAN secondary node IP
ether 0:21:28:72:ac:7d
nxge4: flags=1008843<UP,BROADCAST,RUNNING,MULTICAST,PRIVATE,IPv4> mtu 9194 index 11
inet 172.16.1.2 netmask ffffff80 broadcast 172.16.1.127
ether 0:21:28:72:b3:5a
nxge5:
flags=69040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER,STANDBY,INACT
IVE> mtu 1500 index 9

REDKNEE

for internal use only

97 of 163

Siemens AG

6 PCS Manager Installation

inet 192.168.23.116 netmask ffffff00 broadcast 192.168.23.255


interface IPMP test IP address
groupname backup

Backup LAN

secondary

Backup LAN IPMP group name backup

ether 0:21:28:72:b3:5b
nxge6:
flags=69040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER,STANDBY,INACT
IVE> mtu 1500 index 10
inet 10.255.10.116 netmask ffffff00 broadcast 10.255.10.255
interface IPMP test IP address
groupname ps

PC LAN 2 secondary

PC LAN 2 IPMP group name ps

ether 0:21:28:72:b3:5c
clprivnet0: flags=1009843<UP,BROADCAST,RUNNING,MULTICAST,MULTI_BCAST,PRIVATE,IPv4> mtu
8168 index 13
inet 172.16.4.2 netmask fffffe00 broadcast 172.16.5.255
ether 0:0:0:0:0:2
lo0: flags=2002000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv6,VIRTUAL> mtu 8252 index 1
inet6 ::1/128
nxge3: flags=2004841<UP,RUNNING,MULTICAST,DHCP,IPv6> mtu 1500 index 8
inet6 fe80::221:28ff:fe72:ac7d/10
ether 0:21:28:72:ac:7d
nxge4: flags=200c841<UP,RUNNING,MULTICAST,DHCP,PRIVATE,IPv6> mtu 9194 index 11
inet6 fe80::221:28ff:fe72:b35a/10
ether 0:21:28:72:b3:5a
e1000g3: flags=200c841<UP,RUNNING,MULTICAST,DHCP,PRIVATE,IPv6> mtu 8168 index 12
inet6 fe80::221:28ff:fe6d:979d/10
ether 0:21:28:6d:97:9d
root@pcs21b>

5.7.7 UNIX Services


Check the running services using the svcs -a command:
root@pcs21a> svcs -a
STATE          STIME    FMRI
legacy_run     May_31   lrc:/etc/rc2_d/S00set-tmp-permissions
legacy_run     May_31   lrc:/etc/rc2_d/S07set-tmp-permissions
legacy_run     May_31   lrc:/etc/rc2_d/S100OsHistscript
legacy_run     May_31   lrc:/etc/rc2_d/S10lu
legacy_run     May_31   lrc:/etc/rc2_d/S20sysetup
legacy_run     May_31   lrc:/etc/rc2_d/S29saip
legacy_run     May_31   lrc:/etc/rc2_d/S40llc2
legacy_run     May_31   lrc:/etc/rc2_d/S42ncakmod
legacy_run     May_31   lrc:/etc/rc2_d/S70ephemports
legacy_run     May_31   lrc:/etc/rc2_d/S70nddconfig
legacy_run     May_31   lrc:/etc/rc2_d/S72sc_update_hosts
legacy_run     May_31   lrc:/etc/rc2_d/S72sc_update_ntp
legacy_run     May_31   lrc:/etc/rc2_d/S73cachefs_daemon
legacy_run     May_31   lrc:/etc/rc2_d/S73swapadd
legacy_run     May_31   lrc:/etc/rc2_d/S74xntpd_cluster
legacy_run     May_31   lrc:/etc/rc2_d/S76tsp_routes
legacy_run     May_31   lrc:/etc/rc2_d/S77scpostconfig
legacy_run     May_31   lrc:/etc/rc2_d/S81dodatadm_udaplt
legacy_run     May_31   lrc:/etc/rc2_d/S89bdconfig
legacy_run     May_31   lrc:/etc/rc2_d/S91afbinit
legacy_run     May_31   lrc:/etc/rc2_d/S91gfbinit
legacy_run     May_31   lrc:/etc/rc2_d/S91ifbinit
legacy_run     May_31   lrc:/etc/rc2_d/S91jfbinit
legacy_run     May_31   lrc:/etc/rc2_d/S91kfbinit
legacy_run     May_31   lrc:/etc/rc2_d/S91zuluinit
legacy_run     May_31   lrc:/etc/rc2_d/S94ncalogd
legacy_run     May_31   lrc:/etc/rc2_d/S95SUNWmd_binddevs
legacy_run     May_31   lrc:/etc/rc2_d/S95networker
legacy_run     May_31   lrc:/etc/rc2_d/S98adviprouting
legacy_run     May_31   lrc:/etc/rc2_d/S98deallocate
legacy_run     May_31   lrc:/etc/rc2_d/S99netwatch_init
legacy_run     May_31   lrc:/etc/rc3_d/S16boot_server
legacy_run     May_31   lrc:/etc/rc3_d/S50apache
legacy_run     May_31   lrc:/etc/rc3_d/S52imq
legacy_run     May_31   lrc:/etc/rc3_d/S90iccm
legacy_run     May_31   lrc:/etc/rc3_d/S91initgchb_resd
legacy_run     May_31   lrc:/etc/rc3_d/S96init_crs
legacy_run     May_31   lrc:/etc/rc3_d/S97crash
legacy_run     May_31   lrc:/etc/rc3_d/S99initmtu
legacy_run     May_31   lrc:/etc/rc3_d/S99rtp
disabled       May_31   svc:/system/device/mpxio-upgrade:default
disabled       May_31   svc:/network/ipfilter:default
disabled       May_31   svc:/network/ipsec/ike:default
disabled       May_31   svc:/network/ipsec/manual-key:default
disabled       May_31   svc:/network/rpc/keyserv:default
disabled       May_31   svc:/network/rpc/nisplus:default
disabled       May_31   svc:/network/nis/client:default
disabled       May_31   svc:/network/inetd-upgrade:default
disabled       May_31   svc:/network/dns/client:default
disabled       May_31   svc:/network/ldap/client:default
disabled       May_31   svc:/network/nfs/client:default
disabled       May_31   svc:/system/filesystem/autofs:default
disabled       May_31   svc:/application/print/server:default
disabled       May_31   svc:/system/auditd:default
disabled       May_31   svc:/system/patch-finish:delete
disabled       May_31   svc:/system/power:default
disabled       May_31   svc:/system/rcap:default
disabled       May_31   svc:/application/management/snmpdx:default
disabled       May_31   svc:/application/management/dmi:default
disabled       May_31   svc:/network/rpc/bootparams:default
disabled       May_31   svc:/network/samba:default
disabled       May_31   svc:/network/nfs/server:default
disabled       May_31   svc:/network/winbind:default
disabled       May_31   svc:/network/wins:default
disabled       May_31   svc:/network/rarp:default
disabled       May_31   svc:/network/dhcp-server:default
disabled       May_31   svc:/application/graphical-login/cde-login:default
disabled       May_31   svc:/application/management/wbem:default
disabled       May_31   svc:/application/cde-printinfo:default
disabled       May_31   svc:/application/print/ipp-listener:default
disabled       May_31   svc:/application/print/ppd-cache-update:default
disabled       May_31   svc:/application/database/postgresql:version_81
disabled       May_31   svc:/application/database/postgresql:version_82
disabled       May_31   svc:/application/database/postgresql:version_82_64bit
disabled       May_31   svc:/network/dns/server:default
disabled       May_31   svc:/network/routing/legacy-routing:ipv4
disabled       May_31   svc:/network/routing/legacy-routing:ipv6
disabled       May_31   svc:/network/routing/rdisc:default
disabled       May_31   svc:/network/ipv6-forwarding:default
disabled       May_31   svc:/network/routing/ripng:default
disabled       May_31   svc:/network/routing/zebra:quagga
disabled       May_31   svc:/network/routing/ripng:quagga
disabled       May_31   svc:/network/routing/route:default
disabled       May_31   svc:/network/ipv4-forwarding:default
disabled       May_31   svc:/network/routing/rip:quagga
disabled       May_31   svc:/network/routing/ospf:quagga
disabled       May_31   svc:/network/routing/ospf6:quagga
disabled       May_31   svc:/network/routing/bgp:quagga
disabled       May_31   svc:/network/security/kadmin:default
disabled       May_31   svc:/network/security/krb5kdc:default
disabled       May_31   svc:/network/tnd:default
disabled       May_31   svc:/network/ipmievd:default
disabled       May_31   svc:/network/http:apache2
disabled       May_31   svc:/network/apocd/udp:default
disabled       May_31   svc:/network/slp:default
disabled       May_31   svc:/platform/sun4u/sckmd:default
disabled       May_31   svc:/platform/sun4u/dcs:default
disabled       May_31   svc:/platform/sun4u/oplhpd:default
disabled       May_31   svc:/platform/sun4u/efdaemon:default
disabled       May_31   svc:/ldoms/vntsd:default
disabled       May_31   svc:/system/consadm:default
disabled       May_31   svc:/system/labeld:default
disabled       May_31   svc:/system/tsol-zones:default
disabled       May_31   svc:/system/sar:default
disabled       May_31   svc:/system/iscsitgt:default
disabled       May_31   svc:/system/pools/dynamic:default
disabled       May_31   svc:/system/cluster/rgm:default
disabled       May_31   svc:/application/management/common-agent-container-1:default
disabled       May_31   svc:/system/cluster/sc_restarter:default
disabled       May_31   svc:/system/cluster/sc_ifconfig_proxy:default
disabled       May_31   svc:/application/hadb-ma:default
disabled       May_31   svc:/system/cluster/sc_ng_zones:default
disabled       May_31   svc:/system/cluster/sckeysync:default
disabled       May_31   svc:/system/cluster/quorumserver:default
disabled       May_31   svc:/system/hotplug:default
disabled       May_31   svc:/network/rpc/gss:default
disabled       May_31   svc:/application/x11/xfs:default
disabled       May_31   svc:/application/x11/xvnc-inetd:default
disabled       May_31   svc:/application/font/stfsloader:default
disabled       May_31   svc:/network/rpc/rstat:default
disabled       May_31   svc:/application/print/rfc1179:default
disabled       May_31   svc:/network/rpc/cde-calendar-manager:default
disabled       May_31   svc:/network/rpc/cde-ttdbserver:tcp
disabled       May_31   svc:/network/rpc/ocfserv:default
disabled       May_31   svc:/ldoms/agents:default
disabled       May_31   svc:/network/rpc/smserver:default
disabled       May_31   svc:/network/rpc/rex:default
disabled       May_31   svc:/network/rpc/rusers:default
disabled       May_31   svc:/network/rpc/spray:default
disabled       May_31   svc:/network/rpc/wall:default
disabled       May_31   svc:/network/cde-spc:default
disabled       May_31   svc:/network/tname:default
disabled       May_31   svc:/network/security/ktkt_warn:default
disabled       May_31   svc:/network/security/krb5_prop:default
disabled       May_31   svc:/network/telnet:default
disabled       May_31   svc:/network/nfs/rquota:default
disabled       May_31   svc:/network/swat:default
disabled       May_31   svc:/network/uucp:default
disabled       May_31   svc:/network/chargen:dgram
disabled       May_31   svc:/network/chargen:stream
disabled       May_31   svc:/network/daytime:dgram
disabled       May_31   svc:/network/daytime:stream
disabled       May_31   svc:/network/discard:dgram
disabled       May_31   svc:/network/discard:stream
disabled       May_31   svc:/network/echo:dgram
disabled       May_31   svc:/network/echo:stream
disabled       May_31   svc:/network/time:dgram
disabled       May_31   svc:/network/time:stream
disabled       May_31   svc:/network/ftp:default
disabled       May_31   svc:/network/comsat:default
disabled       May_31   svc:/network/finger:default
disabled       May_31   svc:/network/login:eklogin
disabled       May_31   svc:/network/login:klogin
disabled       May_31   svc:/network/rexec:default
disabled       May_31   svc:/network/shell:kshell
disabled       May_31   svc:/network/talk:default
disabled       May_31   svc:/network/rpc-100235_1/rpc_ticotsord:default
disabled       May_31   svc:/network/tftp/udp:default
online         May_31   svc:/system/svc/restarter:default
online         May_31   svc:/network/pfil:default
online         May_31   svc:/network/loopback:default
online         May_31   svc:/network/tnctl:default
online         May_31   svc:/system/installupdates:default
online         May_31   svc:/milestone/name-services:default
online         May_31   svc:/network/physical:default
online         May_31   svc:/system/identity:node
online         May_31   svc:/application/management/sufd-control:default
online         May_31   svc:/system/metainit:default
online         May_31   svc:/system/filesystem/root:default
online         May_31   svc:/system/scheduler:default
online         May_31   svc:/system/boot-archive:default
online         May_31   svc:/system/filesystem/usr:default
online         May_31   svc:/system/keymap:default
online         May_31   svc:/system/cluster/cl_boot_check:default
online         May_31   svc:/system/cluster/scmountdev:default
online         May_31   svc:/system/device/local:default
online         May_31   svc:/system/filesystem/minimal:default
online         May_31   svc:/system/rmtmpfiles:default
online         May_31   svc:/platform/sun4v/drd:default
online         May_31   svc:/system/identity:domain
online         May_31   svc:/system/name-service-cache:default
online         May_31   svc:/system/coreadm:default
online         May_31   svc:/system/sysevent:default
online         May_31   svc:/system/pkgserv:default
online         May_31   svc:/system/device/fc-fabric:default
online         May_31   svc:/milestone/devices:default
online         May_31   svc:/system/picl:default
online         May_31   svc:/system/cryptosvc:default
online         May_31   svc:/network/ipsec/ipsecalgs:default
online         May_31   svc:/network/ipsec/policy:default
online         May_31   svc:/milestone/network:default
online         May_31   svc:/system/manifest-import:default
online         May_31   svc:/network/multipath:cluster
online         May_31   svc:/milestone/single-user:default
online         May_31   svc:/system/pools:default
online         May_31   svc:/network/initial:default
online         May_31   svc:/platform/sun4v/efdaemon:default
online         May_31   svc:/system/resource-mgmt:default
online         May_31   svc:/network/service:default
online         May_31   svc:/network/iscsi/initiator:default
online         May_31   svc:/system/cluster/scslm:default
online         May_31   svc:/system/filesystem/local:default
online         May_31   svc:/system/cron:default
online         May_31   svc:/network/shares/group:default
online         May_31   svc:/system/sysidtool:net
online         May_31   svc:/system/boot-archive-update:default
online         May_31   svc:/system/dumpadm:default
online         May_31   svc:/application/font/fc-cache:default
online         May_31   svc:/network/routing-setup:default
online         May_31   svc:/system/cluster/loaddid:default
online         May_31   svc:/network/rpc/bind:default
online         May_31   svc:/system/sysidtool:system
online         May_31   svc:/network/nfs/status:default
online         May_31   svc:/network/nfs/mapid:default
online         May_31   svc:/network/nfs/cbd:default
online         May_31   svc:/milestone/sysconfig:default
online         May_31   svc:/system/postrun:default
online         May_31   svc:/network/nfs/nlockmgr:default
online         May_31   svc:/network/inetd:default
online         May_31   svc:/system/sac:default
online         May_31   svc:/system/utmp:default
online         May_31   svc:/system/system-log:default
online         May_31   svc:/system/console-login:default
online         May_31   svc:/network/ssh:default
online         May_31   svc:/network/smtp:sendmail
online         May_31   svc:/network/sendmail-client:default
online         May_31   svc:/application/management/seaport:default
online         May_31   svc:/application/management/sma:default
online         May_31   svc:/system/fmservice:default
online         May_31   svc:/network/rpc/meta:default
online         May_31   svc:/network/rpc/mdcomm:default
online         May_31   svc:/network/rpc/metamed:default
online         May_31   svc:/network/rpc/metamh:default
online         May_31   svc:/network/login:rlogin
online         May_31   svc:/network/shell:default
online         May_31   svc:/network/stdiscover:default
online         May_31   svc:/network/stlisten:default
online         May_31   svc:/network/rpc/scadmd:default
online         May_31   svc:/network/rpc/scrcmd:default
online         May_31   svc:/network/sccheckd:default
online         May_31   svc:/network/rpc/metacld:default
online         May_31   svc:/system/fmd:default
online         May_31   svc:/network/routing/ndp:default
online         May_31   svc:/system/cluster/bootcluster:default
online         May_31   svc:/system/cluster/sc_failfast:default
online         May_31   svc:/system/cluster/scvxinstall:default
online         May_31   svc:/system/cluster/initdid:default
online         May_31   svc:/network/tsp_clprivnet_mtu:default
online         May_31   svc:/system/cluster/cl_execd:default
online         May_31   svc:/system/cluster/sc_pmmd:default
online         May_31   svc:/system/cluster/clexecd:default
online         May_31   svc:/system/cluster/zc_cmd_log_replay:default
online         May_31   svc:/system/cluster/sc_rtreg_server:default
online         May_31   svc:/system/cluster/sc_ifconfig_server:default
online         May_31   svc:/system/mdmonitor:default
online         May_31   svc:/network/ntp:default
online         May_31   svc:/system/cluster/globaldevices:default
online         May_31   svc:/system/cluster/gdevsync:default
online         May_31   svc:/milestone/multi-user:default
online         May_31   svc:/system/boot-config:default
online         May_31   svc:/application/stosreg:default
online         May_31   svc:/system/cluster/cl-svc-enable:default
online         May_31   svc:/system/cluster/sc_zones:default
online         May_31   svc:/system/cluster/scdpm:default
online         May_31   svc:/system/cluster/scqdm:default
online         May_31   svc:/system/cluster/cl-event:default
online         May_31   svc:/system/cluster/rpc-pmf:default
online         May_31   svc:/system/cluster/pnm:default
online         May_31   svc:/system/cluster/ql_upgrade:default
online         May_31   svc:/system/cluster/cl-ccra:default
online         May_31   svc:/system/cluster/sc_zc_member:default
online         May_31   svc:/system/cluster/scprivipd:default
online         May_31   svc:/system/cluster/cznetd:default
online         May_31   svc:/system/cluster/cl-eventlog:default
online         May_31   svc:/system/cluster/rpc-fed:default
online         May_31   svc:/system/cluster/sc_pnm_proxy_server:default
online         May_31   svc:/system/cluster/rgm-starter:default
online         May_31   svc:/system/cluster/cl-svc-cluster-milestone:default
online         May_31   svc:/system/cluster/sc_svtag:default
online         May_31   svc:/system/cluster/sc_syncsa_server:default
online         May_31   svc:/system/cluster/scslmclean:default
online         May_31   svc:/system/cluster/mountgfs:default
online         May_31   svc:/system/cluster/clusterdata:default
online         May_31   svc:/system/cluster/ql_rgm:default
online         May_31   svc:/milestone/multi-user-server:default
online         May_31   svc:/application/management/sufd:default
online         May_31   svc:/system/zones:default
online         May_31   svc:/system/basicreg:default
offline        May_31   svc:/system/filesystem/volfs:default
offline        May_31   svc:/system/cluster/scsymon-srv:default
maintenance    May_31   svc:/system/webconsole:console
root@pcs21a>

Only two services, svc:/system/cluster/scsymon-srv:default and svc:/system/filesystem/volfs:default, should be in the offline state.
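
As a quick cross-check (a minimal sketch using standard Solaris svcs options, not a step of the original procedure), the services that are not running can be listed directly:

root@pcs21a> svcs -a | egrep 'offline|maintenance'
offline        May_31   svc:/system/filesystem/volfs:default
offline        May_31   svc:/system/cluster/scsymon-srv:default
maintenance    May_31   svc:/system/webconsole:console

Any offline entry other than the two services named above should be investigated.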

5.8 Troubleshooting Information


5.8.1 Installation does NOT start
Refer to section 4.7.1, Installation does NOT start.

5.8.2 Client is installed from the wrong install server


Refer to section 4.7.2, Client is installed from the wrong install server.

5.8.3 After the Scratch Installation RTP services are not up


Log in to any one of the nodes in the cluster as the root user and run the scstat -g command; it shows output similar to the following.
root@pcsce9> scstat -g

-- Resource Groups and Resources --

            Group Name                Resources
            ----------                ---------
 Resources: rac-framework-rg          rac_framework rac_udlm rac_svm

-- Resource Groups --

            Group Name                Node Name    State            Suspended
            ----------                ---------    -----            ---------
     Group: rac-framework-rg          pcsce8       Online           No
     Group: rac-framework-rg          pcsce11      Online faulted   No

-- Resources --

            Resource Name             Node Name    State            Status Message
            -------------             ---------    -----            --------------
  Resource: rac_framework             pcsce8       Online           Online
  Resource: rac_framework             pcsce11      Start failed     Faulted - Error in previous reconfiguration.
  Resource: rac_udlm                  pcsce8       Online           Online
  Resource: rac_udlm                  pcsce9       Online           Online
  Resource: rac_svm                   pcsce8       Online           Online
  Resource: rac_svm                   pcsce9       Online           Online

Solution:
Bring both nodes down to the OK prompt using init 0. Then boot Node 1 first, wait 5-10 minutes, and boot Node 2; the RTP services come up automatically.
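
A sketch of the corresponding console sequence (the prompts and node names are illustrative only; use your actual node consoles):

root@<node1>> init 0        <- bring Node 1 down to the OK prompt
root@<node2>> init 0        <- bring Node 2 down to the OK prompt
ok boot                     <- boot Node 1 first, then wait 5-10 minutes
ok boot                     <- then boot Node 2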


5.8.4 At the end of the installation one node panics

At the end of the installation, one of the nodes may panic with a message similar to the one shown below.
TSPIS: --- Configure TCP Wrappers
TSPIS: === TCP Wrappers enabled successfully
TSPIS: ##########################################
TSPIS: ##                                      ##
TSPIS: ##      Installation completed !!!      ##
TSPIS: ##                                      ##
TSPIS: ##########################################
TSPIS: ## Please check all logfiles for        ##
TSPIS: ## success                              ##
TSPIS: ##########################################
Apr 15 07:27:41 pcs15a ID[SUNWudlm.udlm]: Unix DLM version (2) and SUN Unix DLM library version
(1): compatible.

Apr 15 07:27:47 pcs15a rpc.metad: Terminated

Apr 15 07:27:48 pcs15a metaclust: Could not suspend rpc.mdcommd for set redo2dg

Apr 15 07:27:48 pcs15a metaclust: exiting with 1

Apr 15 07:27:48 pcs15a rpc.metamedd: Terminated

Apr 15 07:27:48 pcs15a SUNWscucm.ucmm_reconf: svm exited with error 1 in step cmmstart

Apr 15 07:27:48 pcs15a Cluster.OPS.UCMMD: prog <ucmm_reconf> failed on step <cmmstart> retcode
<1>

Apr 15 07:27:48 pcs15a Cluster.OPS.UCMMD: ucm_callback for start_trans generated exception 0

Apr 15 07:27:51 pcs15a Cluster.OPS.UCMMD: cm_callback_impl abort_trans: exiting

cm_getcluststate: rpc request timed out

Notifying cluster that this node is panicking

panic[cpu0]/thread=2a100047ca0: Failfast: Aborting zone "global" (zone ID 0) because "ucmmd" died 30 seconds ago.

000002a1000473f0
cl_runtime:__1cZsc_syslog_msg_log_no_args6Fpviipkc0_nZsc_syslog_msg_status_enum__+30
(6007a50f000, 3, 0, 46, 2a1000475f0, 7070f561)


%l0-3: 0000000001834b08 0000000000010000 0000000000000000 0000000000000000


%l4-7: 000002a10007ff30 0000000000000001 0000000000000000 0000000000000001
000002a1000474a0
cl_runtime:__1cCosNsc_syslog_msgDlog6MiipkcE_nZsc_syslog_msg_status_enum__+1c
(6006fa5cd70, 3, 0, 7070f561, 3001a0bd28a, 70400)
%l0-3: 000003001a0bd180 0000000000070400 000000000007070a 0000000000070400
%l4-7: 000000007070f552 000000007070f000 000000007070e000 0000000000000001
000002a100047550 cl_haci:__1cHff_implNunit_timedout6M_v_+70 (3001a0bd218, 18f1400, 1876548,
3001a0bd180, 180c000, 1)
%l0-3: 000000007b285924 0000000000000000 0000000000000001 0000000000000000
%l4-7: 0000000000000000 0000000000000001 0000000000000001 0000000000000016
000002a100047600 cl_haci:___const_seg_900007001+4260 (3001a0bd180, 2a100047ca0,
3001a0bd310, 0, 0, 0)
%l0-3: 0000000000000000 0000000000000007 0000000000000000 0029a00a7f1c26bf
%l4-7: 0000000001894000 000007000a010200 000000000180c2d8 00000000010deec4
000002a1000476b0 cl_haci:__1cQff_callout_tableTper_tick_processing6F_v_+e4 (6534,
3001a0bd310, 7070e7a8, 7070e6a8, 7070e748, 7070e690)
%l0-3: 000000007070e848 00000000000000a0 0000000000000014 0000000000006534
%l4-7: 000000007b284a64 0000000000000000 0000000000000080 000002a1000477d0
000002a100047770
cl_haci:__1cNff_admin_implWsc_per_tick_processing6Mn0AQcallout_caller_t__v_+94
(60062d8ab40, 0, 40, 60062d8aac0, 23b7d6b016bcf1, 1312d00)
%l0-3: 000000000185b894 0000000000000029 0000000000000000 0000000000000000
%l4-7: 000000000000453d 000000000000453c 0000000000000000 0000000000000040
000002a100047820 genunix:clock+474 (0, 3b9ac9ff, 0, 0, 7b286ac8, f611)
%l0-3: 0000000000006533 000000000189b800 00000000018f2000 0000000001876548
%l4-7: 0000000000006534 00000000018f1400 0000000000006533 000000003b9aca00
000002a100047920 genunix:cyclic_softint+a4 (600608445a8, 300014ca2d8, 1, 60060844540,
3000356c08c, 300014ca288)
%l0-3: 000003000356c060 00000600608445a0 000000000000000a 0000000000000003
%l4-7: 000003000356c080 00000000010cfa3c 0000000000000000 0000000000006533
000002a1000479e0 unix:cbe_level10+8 (0, 0, 180c000, 2a100047d78, 1, 1011cb8)
%l0-3: 0000000001834b08 0000000000010000 0000000000000000 0000000000000000
%l4-7: 000002a10007ff30 0000000000000001 0000000000000000 0000000000000001

syncing file systems... 17 14 done


dumping to /dev/md/dsk/d20, offset 13740146688, content: kernel

Solution:
Bring both nodes down to the OK prompt using init 0. Then boot Node 1 first, wait 5-10 minutes, and boot Node 2; the RTP services come up automatically.

5.8.5 Installation failure in script configureRaid.sh


If the first installation fails with the following error message, check the physical connectivity from the management host to the array. If the connectivity is good, log in to the node as the root user and reboot it with the command init 6. After the reboot the installation continues automatically.
This is CAM 6.2.0.13
compatibility with CAM 5.x: n
compatibility with CAM 6.x: y
compatibility with CAM 6.1: n
Start working on 10.255.6.164 in action FI.
Registering storage ST2540_2 with CAM.
Storage pcs15a-ST2540_2-1246898559 (10.255.6.164) found, modifying to
pcs15a-ST2540_2-1246945783
FATAL_ERROR - Can't find device 10.255.6.164. [E312]
Fatal error at AiRaidStorageTek /tspinst/scripts/storage/perl/AiRaidStorageTek.pm 525:
Can't find device 10.255.6.164. [E312]
TSPIS: FATAL_ERROR - Script configureRaid.sh failed

TSPIS: Trying to collect error logs using TspExplorer as TSP Installation has failed...
TSPIS: === Starting TspExplorer: /opt/SMAW/SMAWrtpdg/bin/TspExplorer.pl -noplid -all -scn FI
-local -dir /tspinst/TspExplorer -comp.

5.8.6 Installation failure in script configureRaid.sh


If the first installation fails with the following error message, apply the given solutions one by one.
System/NVSRAM: All FRUs at baseline
Name                        Model   Current            Baseline
pcs15a-ST2540_2-1249750592  2540    N1932-670843-001   N1932-670843-001
-----------------------------------------------
Firmware update/verification on pcs15a-ST2540_2-1249918284 will be done NOW!
THIS WILL TAKE SOME MINUTES! DO NOT INTERRUPT!!
-----------------------------------------------

Firmware update finished!


Deleting all alert notifications.
Unmapping volume arch1 on pcs15a-ST2540_2-1249918284.
Unmapping volume arch2 on pcs15a-ST2540_2-1249918284.
Unmapping volume cfs on pcs15a-ST2540_2-1249918284.
Unmapping volume ora on pcs15a-ST2540_2-1249918284.
Unmapping volume redo1 on pcs15a-ST2540_2-1249918284.
Unmapping volume redo2 on pcs15a-ST2540_2-1249918284.
Unmapping volume tick on pcs15a-ST2540_2-1249918284.
Deleting volume arch1 on pcs15a-ST2540_2-1249918284.
Deleting volume arch2 on pcs15a-ST2540_2-1249918284.


Deleting volume cfs on pcs15a-ST2540_2-1249918284.


Deleting volume ora on pcs15a-ST2540_2-1249918284.
Deleting volume redo1 on pcs15a-ST2540_2-1249918284.
Deleting volume redo2 on pcs15a-ST2540_2-1249918284.
Deleting volume tick on pcs15a-ST2540_2-1249918284.
Deleting pool orapool on pcs15a-ST2540_2-1249918284.
Deleting pool redopool on pcs15a-ST2540_2-1249918284.
Deleting profile oraprofile on pcs15a-ST2540_2-1249918284.
Deleting profile redoprofile on pcs15a-ST2540_2-1249918284.
Creating profile oraprofile
Creating profile redoprofile
Creating pool orapool
Creating pool redopool
Start working on volumelist 1, pool orapool
Creating volume cfs
Waiting for volume cfs ...
Mapping volume cfs Lun 0
Fatal error at AiRaidStorageTek /tspinst/scripts/storage/perl/AiRaidStorageTek.pm 426:
cfs: The array could not be contacted. Check the physical connectivity from the management
host to the array. If the connectivity is good, try unregistering and re-registering this array.
:E547
Command /opt/SUNWsesscs/cli/bin/sscs map -a pcs15a-ST2540_2-1249918284 -l 0 volume cfs failed
at E547.FATAL_ERROR - cfs: The array could not be contacted. Check the physical connectivity
from the management host to the array. If the connectivity is good, try unregistering and
re-registering this array.
:E547
TSPIS: FATAL_ERROR - Script configureRaid.sh failed

TSPIS: Trying to collect error logs using TspExplorer as TSP Installation has failed...
TSPIS: === Starting TspExplorer: /opt/SMAW/SMAWrtpdg/bin/TspExplorer.pl -noplid -all -scn FI
-local -dir /tspinst/TspExplorer -comp.
#########################################################################################

TspExplorer Output (D)irectory (F)ile
=====================================
D : /tspinst/TspExplorer
D : /tspinst/TspExplorer/old
D : /tspinst/TspExplorer/TspExplorer.pcs15a.2009-08-10-21-17-48
D : /tspinst/TspExplorer/TspExplorer.pcs15a.2009-08-10-21-17-48/<Component>
F : /tspinst/TspExplorer/TspExplorer.pcs15a.2009-08-10-21-17-48/log/TspExplorer.log
D : /tspinst/TspExplorer/TspExplorer.pcs15a.2009-08-10-21-17-48/output
F : /tspinst/TspExplorer/TspExplorer.pcs15a.2009-08-10-21-17-48/output/TspExplorer.summary
F : /tspinst/TspExplorer/TspExplorer.pcs15a.2009-08-10-21-17-48/output/TspExplorer.summary.xml


F : /tspinst/TspExplorer/TspExplorer.pcs15a.2009-08-10-21-17-48/output/TspExplorer.checksum
F : /tspinst/TspExplorer/TspExplorer.pcs15a.2009-08-10-21-17-48/output/TspExplorer.content
F : /tspinst/TspExplorer/TspExplorer.pcs15a.2009-08-10-21-17-48/output/TspExplorer.prot
F : /tspinst/TspExplorer/TspExplorer.pcs15a.2009-08-10-21-17-48/output/TspExplorer.err
F : /tspinst/TspExplorer/TspExplorer.pcs15a.2009-08-10-21-17-48/output/TspExplorer.stderr
TSPIS: === TspExplorer executed successfully.
TSPIS: === Check Error Logs collected in /tspinst/TspExplorer to diagnose/fix the error

Solution 1:
Check the physical connectivity from the management host to the array. If the connectivity is good, log in to the node as the root user and reboot it with the command init 6. After the reboot the installation continues automatically.
Solution 2:
Try unregistering the arrays. First find the array names using the command below from any one of the PCS nodes.
root@pcs15a> /opt/SUNWsefms/bin/lsscs list array      -- this command lists the storage arrays
Array: pcs15a-ST2540_2-1268685321                     -- Storage Array 2
Array: pcs15a-ST2540_1-1268683934                     -- Storage Array 1

After finding the storage array names, use them to unregister the arrays.
root@pcs15a> /opt/SUNWsefms/bin/lsscs unregister storage-system pcs15a-ST2540_1-1268683934   -- this will unregister the first array
root@pcs15a> /opt/SUNWsefms/bin/lsscs unregister storage-system pcs15a-ST2540_2-1268685321   -- this will unregister the second array

Then start /tspinst/scripts/controlInstall.sh once again; it will continue the installation.
Solution 3:
Execute the following commands. First find the array names using the command below from any one of the PCS nodes.
root@pcs15a> /opt/SUNWsefms/bin/lsscs list array      -- this command lists the storage arrays
Array: pcs15a-ST2540_2-1268685321                     -- Storage Array 2
Array: pcs15a-ST2540_1-1268683934                     -- Storage Array 1

After finding the storage array names, use them to reset the arrays.
root@pcs15a> /opt/SUNWsefms/bin/lsscs reset array pcs15a-ST2540_2-1268685321   -- this resets storage array 2
root@pcs15a> /opt/SUNWsefms/bin/lsscs reset array pcs15a-ST2540_1-1268683934   -- this resets storage array 1

Then start /tspinst/scripts/controlInstall.sh once again; it will continue the installation.


Solution 4:
If none of the above solutions work, bring both nodes down to the OK prompt, then physically power off both nodes and both storages. Power on both storages first, then power on both nodes and boot them with the boot command from the OK prompt; this will continue the installation.

5.8.7 Restrict names to 30 characters and try again


If the first installation fails in the script configureRaid.sh with the following error message:
16:49:09: Registering storage ST2540_1 with CAM.
16:49:20: Storage ST2540_1 (10.104.41.206) found, modifying to
opioppcs301-ST2540_1-1268840949
16:49:22: We wait a short moment until resource may come back.
16:50:05: FATAL_ERROR - Unknown return result One or more of the specified options is too long.
Restrict names to 30 characters and try again.
. [E412]
Script end status: no system changes done
Script: configureRaid.sh ended 03/17/10 16:50:05

Solution 1:

5.8.7.1 Check that you have done the steps explained in section 3.2.4.1 Configure-tftp-nfs.sh
Note: This step is mandatory only if the install server is X86 (X4270), and it is required only when the X86 install server is prepared for the first time by scratch installation.
This tool is used to enable and disable the tftp and nfs services. Log in to the install server as the root user and invoke the tool with
root@iserver# /opt/INTPaghar/secureTool/configure-tftp-nfs.sh
The following menu will appear:

Configuration of TFTP and NFS


1. Turn on TFTP
2. Turn off TFTP
3. Turn on NFS
4. Turn off NFS
5. Help
6. Quit
Please Enter your choice :
Select option 1 and 3 to enable TFTP and NFS, and then option 6 to quit the menu.


Update the script AiRaidStorageTek.pm on the install server if this has not been done yet. Also correct the same file /tspinst/scripts/storage/perl/AiRaidStorageTek.pm on the PCS hosts (both nodes), and then continue the installation with the command /tspinst/scripts/controlInstall.sh.
Solution 2:
If solution 1 is already in place and the installation still fails with the above messages, check whether the PCS host names (both nodes) exceed 20 characters. If so, re-create the TPD with host names of at most 20 characters and re-generate the Host Specific Package. Then follow all the steps from section 3.1.7 Install PCS Host Specific Package and re-start the installation with the corrected host names (from the boot net command).
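
A quick way to check the host name length on each node (a sketch; any equivalent command will do):

root@pcs15a> hostname | awk '{ print length }'

If the printed value is greater than 20, shorten the host name as described above before restarting the installation.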

5.8.8 Failure in tspFIRSU.sh script


Refer to section 4.7.3, Failure in tspFIRSU.sh script.

5.8.9 Reconfiguration step 12 was forced to return


If at the end of the FI the cluster is not up and is stuck with the following messages:
NOTICE: CMM: Node pcs21a (nodeid = 1) with votecount = 1 added.
NOTICE: CMM: Node pcs21b (nodeid = 2) with votecount = 1 added.
NOTICE: CMM: Quorum device 1 (/dev/did/rdsk/d18s2) added; votecount = 1,
bitmask of nodes with configured paths = 0x3.
NOTICE: clcomm: Adapter nxge4 constructed
NOTICE: nxge4: xcvr addr:0x0d - link is up 1000 Mbps full duplex
NOTICE: clcomm: Adapter e1000g3 constructed
NOTICE: CMM: Node pcs21a: attempting to join cluster.
NOTICE: CMM: Node pcs21b (nodeid: 2, incarnation #: 1316805978) has become
reachable.
NOTICE: clcomm: Path pcs21a:nxge4 - pcs21b:nxge4 online
NOTICE: CMM: Cluster has reached quorum.
NOTICE: CMM: Node pcs21a (nodeid = 1) is up; new incarnation number =
1316840665.
NOTICE: CMM: Node pcs21b (nodeid = 2) is up; new incarnation number =
1316805978.
NOTICE: CMM: Cluster members: pcs21a pcs21b.
NOTICE: clcomm: Path pcs21a:e1000g3 - pcs21b:e1000g3 online
WARNING: CMM: Reconfiguration step 12 was forced to return.
NOTICE: CMM: Cluster members: pcs21a pcs21b

Then use the ALOM, go to the system console (sc>), and execute the command powercycle -f on both nodes. Once you get the OK prompt, run the boot command on both nodes; this completes the installation.


5.8.10 Failure in tspStart.sh script


If the tspStart.sh script fails during the first installation with the following messages:
root@pcs19a> tail /tspinst/tspStart1.log
------------Some outputs are truncated------------
02:10:20:  - not all processes started yet ... waiting
02:12:24:  - not all processes started yet ... waiting
02:14:28:  - not all processes started yet ... waiting
02:16:32:  - not all processes started yet ... waiting
02:18:36:  - not all processes started yet ... waiting
02:20:40:  - not all processes started yet ... waiting
02:22:44: ERROR - Waited 1 hour for all TSP processes to start
02:22:44: ERROR - but there are still not running processes.
Script end status: partly executed
Script: tspStart.sh ended 03/15/10 02:22:56
root@pcs19a>

root@pcs19a> tail /tspinst/tspStart.out
---------------Some outputs are truncated----------------
CE_02   0 IMS_P_PCS_GX01_CE2_Backup RTP_NM_NOTRUNNING
CE_02   0 IMS_P_PCS_RX01_CE2_Backup RTP_NM_NOTRUNNING
CE_02   0 IMS_P_PCS_GY01_CE2_Backup RTP_NM_NOTRUNNING
CE_02   0 IMS_P_PCS_SH01_CE2_Backup RTP_NM_NOTRUNNING
CE_02   0 IMS_P_PCS_RA01_CE2_Backup RTP_NM_NOTRUNNING
CE_02   0 IMS_P_PCS_IA01_CE2_Backup RTP_NM_NOTRUNNING
End time: 03/15/10 02:22:56
Script end status: partly executed
Script: tspStart.sh ended 03/15/10 02:22:56
root@pcs19a>

Solution 1:
Check the file /global/PCSData/aaaconfig.xml. If it looks like the listing below, the closing tag </AaaConfig> is missing from the file.
root@pcs19a> cat /global/PCSData/aaaconfig.xml
<?xml version="1.0" encoding="UTF-8"?>
<!--  -->
------------Some outputs are truncated--------------
<AaaConfig>
<DictionaryFile>/global/PCSData/dictionary.xml</DictionaryFile>
<ApplicationFile>/global/PCSData/application.xml</ApplicationFile>


<SecretsFile>/global/PCSData/secrets.xml</SecretsFile>
root@pcs19a>

Add the closing tag at the end of the file. After the correction the file should look like below.
root@pcs19a> cat /global/PCSData/aaaconfig.xml
<?xml version="1.0" encoding="UTF-8"?>
<!--  -->
------------Some outputs are truncated--------------
<AaaConfig>
<DictionaryFile>/global/PCSData/dictionary.xml</DictionaryFile>
<ApplicationFile>/global/PCSData/application.xml</ApplicationFile>
<SecretsFile>/global/PCSData/secrets.xml</SecretsFile>
</AaaConfig>        <---- Add the closing tag here
root@pcs19a>
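
Instead of editing the file by hand, the closing tag can also be appended with a single command (a sketch, equivalent to the manual edit described above):

root@pcs19a> echo '</AaaConfig>' >> /global/PCSData/aaaconfig.xml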

Then continue the installation using the command /tspinst/scripts/controlInstall.sh on both nodes.
Solution 2:
If the size of the file /global/PCSData/aaaconfig.xml is zero (0), perform the following steps from any one of the nodes.
root@pcs19a> ls -l /global/PCSData/aaaconfig.xml
-rwxr--r--   1 rtp99   dba   0 Jan 11 21:47 /global/PCSData/aaaconfig.xml
root@pcs19a> cp /opt/PCS/conf/aaaconfig.xml /global/PCSData/
root@pcs19a> echo "" >> /global/PCSData/aaaconfig.xml
root@pcs19a> sed 's/opt\/PCS\/conf/global\/PCSData/g' /global/PCSData/aaaconfig.xml > /tmp/aaaconfig.xml
root@pcs19a> cp -f /tmp/aaaconfig.xml /global/PCSData/aaaconfig.xml

After performing the above steps, the file content should look like below.
root@pcs19a> cat /global/PCSData/aaaconfig.xml
<?xml version="1.0" encoding="UTF-8"?>
<!--  -->
------------Some outputs are truncated--------------
<AaaConfig>


<DictionaryFile>/global/PCSData/dictionary.xml</DictionaryFile>
<ApplicationFile>/global/PCSData/application.xml</ApplicationFile>
<SecretsFile>/global/PCSData/secrets.xml</SecretsFile>
</AaaConfig>
root@pcs19a>


Then continue the installation using the command /tspinst/scripts/controlInstall.sh on both nodes.

5.8.11 prtdiag failure


There is a rare chance that prtdiag hangs during the first installation; currently this has been observed in two places.
Place 1: ConfigureRTPxml
During this configuration step, the first installation might fail with an error message similar to the following.
Node1:
TSPIS: Waiting for: pcs07b ... SyncFile:/tspinst/tmp/tspInstall_rtp_install_ok-2
TSPIS: SyncFile /tspinst/tmp/tspInstall_rtp_install_ok-2 found, pcs07b finished
TSPIS: --- Install TSP (packages) done ---
TSPIS: Starting script: tspConfigure
TSPIS: --- Configure TSP ---
TSPIS: ConfigureRTPxml ...

Node2:
TSPIS: Waiting for: pcs07a ... SyncFile:/tspinst/tmp/tspConfigure_configureRTP_ok-1
TSPIS: Waiting for: pcs07a ... SyncFile:/tspinst/tmp/tspConfigure_configureRTP_ok-1
TSPIS: Waiting for: pcs07a ... SyncFile:/tspinst/tmp/tspConfigure_configureRTP_ok-1
TSPIS: ERROR - Failed to receive /tspinst/tmp/tspConfigure_configureRTP_ok-1.
TSPIS: ERROR - I tried 1080 times.
TSPIS: ACTION - Check partner node(s).
TSPIS: FATAL_ERROR - Script tspConfigure.sh failed

Solution:
Log in to both PCS nodes as the root user, execute the following commands to find out on which node the prtdiag process is running, and then kill the prtdiag process.
Below is an example. In the output above, Node 1 hung in the ConfigureRTPxml script; find the process ID of the ConfigureRTPxml script, then find the prtdiag process ID using the ptree command.
root@pcs07a> ps -ef | grep ConfigureRTPxml
root 31180 26673  0 Sep25 ?  00:00:12 ConfigureRTPxml           <- 31180 is the process ID of ConfigureRTPxml
root@pcs07a> ptree -a 31180
1059  /usr/bin/tee -a /dev/console
1060  /usr/bin/ksh /tspinst/scripts/controlInstall.sh
  29080 /usr/bin/ksh
    29100 /usr/local/bin/perl -w
      9050  sh -c /usr/platform/`uname -i`/sbin/prtdiag -v|grep OBP
        9051  grep OBP
        9052  /usr/platform/SUNW,Netra-T5220/sbin/prtdiag -v    <- 9052 is the process ID of prtdiag
root@pcs07a> kill -9 9052                                       <- kill this (prtdiag) process ID
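
If no other prtdiag instance needs to be preserved on the node, the lookup and the kill can also be combined in a single command (a sketch using the standard Solaris pkill utility, equivalent to the pgrep/kill -9 steps above):

root@pcs07a> pkill -9 prtdiag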

Place 2: tspExplorer.sh
During this step, the first installation might fail with an error message similar to the following.
12:55:14: FATAL_ERROR - Script tspExplorer.sh failed

Solution:
Log in to both PCS nodes as the root user, execute the following command to find out on which node the prtdiag process is running, and then kill the prtdiag process.
ptree -a `pgrep tspExplorer.sh`

Below is an example. In the output above, Node 1 hung in the tspExplorer.sh script; find the process ID of the tspExplorer.sh script, then find the prtdiag process ID using the ptree command.
root@pcs07a> ptree -a `pgrep tspExplorer.sh`
1     /sbin/init
1059  /usr/bin/tee -a /dev/console
1060  /usr/bin/ksh /tspinst/scripts/controlInstall.sh
  29080 /usr/bin/ksh /tspinst/scripts/tspExplorer.sh
    29100 /usr/local/bin/perl -w /opt/SMAW/SMAWrtp/bin/TspExplorer.pl -all
      9050  sh -c /usr/platform/`uname -i`/sbin/prtdiag -v|grep OBP
        9051  grep OBP
        9052  /usr/platform/SUNW,Netra-T5220/sbin/prtdiag -v    <- 9052 is the process ID of prtdiag
root@pcs07a> kill -9 9052                                       <- kill this (prtdiag) process ID

6 PCS Manager Installation (Single and Cluster)


The PCS Manager (PCSmgr) can be used to change the parameters of the PCS at run time (e.g. AFs, CMTS, policy allowed peers, timers, etc.). More information on the PCSmgr is given in the PCS User Manual and the PCS PCM User Guide.
The PCS Manager is a client-server Java application where the PCS Manager Client is downloaded to the client host via Java Web Start (JWS application). The PCS Manager requires TSP and runs on Solaris as a TSP process. The PCS Manager Client side is platform independent.
The installation procedure is the automatic installation procedure via the install server, where the PCS package (PCSmain) together with the PCS Manager package (PCSmgr) is installed on a PCS host (see chapters 4 and 5).
In a @Com environment it can be initialized and used in the context of the NE PCS_PCM.
Section 6.2 below describes the automatic installation procedure.
Hint for using the PCS Manager: Please ensure that each PCS host is managed by exactly one dedicated PCS Manager server.

6.1 Requirements for PCS Manager


The PCS Manager has to be installed on the PCS host, which runs TSP on the Solaris OS.

6.2 Automatic Installation of PCS Manager


The automatic installation of the PCS Manager is only available for a PCS host that is listed in the TPD. It is executed during the automatic installation of the PCS host, along with all other software parts of the PCS. For the description of the automatic installation on a single node see chapter 4, and for the automatic installation on a cluster see chapter 5.

6.2.1 Verification of Installation


The verification of the installation is done by checking whether the PCS Manager process and the TSP web server process are running.
Verify that the PCS Manager process is running with the command
ps -aef | grep pcsmgr
Verify that the TSP web server is running with the command
ps -aef | grep httpd
The TSP web server is located at
/opt/SMAW/SMAWrtpa2/bin/httpd
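
Both checks can also be run so that grep does not match its own command line (a minimal sketch; the bracket in the pattern only prevents the grep process itself from being listed):

ps -aef | grep '[p]csmgr'
ps -aef | grep '[h]ttpd'

If either command prints no line, the corresponding process is not running and the installation has to be checked.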


6.2.2 Configuration for Client-Server Communication via Proxy

The PCS Manager client is downloaded to the client host via a Java Web Start call from a browser (e.g. Internet Explorer).
The PCS Manager Client communicates with the PCS Manager Server via SOAP over HTTP.
The proxy configuration for the browser must be done manually (see 6.2.2.1).
The proxy configuration for the PCS Manager Client (Java Web Start) is done automatically when the configuration of the proxy settings for the Java Runtime Environment (JRE) is available (see 6.2.2.2).

6.2.2.1 Proxy Configuration for Browser

The proxy configuration for the browser is done in the browser's configuration menus.
For example, the proxy configuration for Internet Explorer (IE) is done in 6 steps:
1. Launch Internet Explorer
2. Select the Tools menu item
3. Select Internet Options
4. Select the Connections tab
5. Click the LAN Settings button
6. Include the IP address and port of the proxy (see figure)

Figure 6-1 Proxy Server Configuration in Local Area Network (LAN) Settings

6.2.2.2 Automatic Proxy Configuration for Java Web Start

The automatic proxy configuration for Java Web Start requires configuring the proxy settings for the Java Runtime Environment (JRE) (see also http://www.java.com/en/download/help/proxy_setup.xml).

The configuration of the proxy settings for the Java Runtime Environment (JRE) is done via the Java Web Start application javaws (javaws.exe on Windows). It can be launched in one of the following ways:
1. Open the javaws executable from the local Java directory. It can usually be found in one of the following directories:
   On Windows: C:\Program Files\Java\[java version]\bin or C:\Program Files\Java\[java version]\javaws
   On Linux or Solaris: /usr/java/[java version]/bin or /usr/java/[java version]/jre/javaws
2. From the control panel:
   On Windows: Start -> Settings -> Control Panel -> Java
   On Linux or Solaris: launch the control panel from the menu or from the ControlPanel executable file located at [Java installation directory]/bin/ControlPanel
3. From the command prompt:
   On Windows: Start -> Settings -> CMD, then type javaws
   On Linux or Solaris: type javaws in a shell window
Once the Java Web Start Application Cache Viewer is open, open the menu Edit and select Preferences.

Figure 6-2 Java Application Cache Viewer


The Java Control Panel is opened.

Figure 6-3 Java Control Panel


Select Network Settings and select the entry Use proxy server.

Figure 6-4 Network Settings for Using Proxy Server



Make sure that the "Use Browser Settings" checkbox is not checked. Configure the "Java Plug-in Control Panel" "Proxies" tab using your web browser's proxy settings by checking Use proxy server and adding the address and the port of the used proxy in the fields Address and Port.
Select OK to close the Network Settings window and OK again to close the Java Control Panel. The Java Web Start Application Cache Viewer can now be closed, too.

6.3 Start of PCSmgr as JWS Application


Note: Install JDK 1.5 Update 10 on the PCM client node to access the PCS Manager GUI.
1. Open a browser and point it to https://<servername or IP address>/PCSPCM/PCSPCM.jnlp
   For example: https://192.168.70.160/PCSPCM/PCSPCM.jnlp
2. This should open the Java Web Start window followed by the PCSmgr.
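
Alternatively, the JNLP file can be launched directly with Java Web Start from a command prompt (a sketch; it assumes that javaws from the installed JDK/JRE is on the PATH):

javaws https://192.168.70.160/PCSPCM/PCSPCM.jnlp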


Figure 6-5: Project View of Policy and Configuration Management after installation

6.4 Verification of Interworking of PCSmgr with the PCS


The correct working of the PCSmgr can be verified by configuring a PCS. For the verification the following steps have to be executed:
1. Announce a PCS to the PCSmgr: Click the tab PCS Hosts, move the mouse pointer to some place within the tab window, right-click to activate the context menu and select the option New. Enter the following details: "IP Address"; "PCM internal PCS Name"; "Description"; "Port Number" (default 8088).
   Hint: The PCS should be up and running.
2. Create a PCS Group: Click the tab PCS Groups, move the mouse pointer to some place within the tab window, right-click to activate the context menu and select the option New. Enter the following details: PCS Group Name, Description; assign PCSs to the group.
3. Create a project: Click the tab Projects, move the mouse pointer to some place within the tab window, right-click to activate the context menu and select the option New. Enter the following details: Project Name, Description.
4. Assign the project to a PCS group: Select the specific PCS project to be assigned, right-click to activate the context menu and select the option Assign. The list of PCS groups is displayed; select the appropriate group from the list.
5. Validate a project: Select the specific PCS project to be validated and click the option Validate from the context menu. This checks whether all mandatory files were created for the project.
6. Activate a project: Select the specific PCS project to be activated, right-click to activate the context menu and select the option Activate. If the enforcement is successful, a popup box gives you a positive result; otherwise an error message is shown.
7. You can add, delete or modify the PCS parameters (AF settings, CMTS settings, QoS settings) via the XML files contained in the project created just now.

6.5 Starting of PCSmgr via @Com


The PCSmgr can be accessed via the @Com management application, called OAM
control center.
In the @com environment, the PCS Manager communicates via Proxy. Therefore it is
required, that the proxy is configured for the used brower and for the PCS Manager (see
also 6.2.2).
For using the PCSmgr, for @Com a netelement must be set up.
The type of the net element must be PCSPCM, the IP address is the IP address of PCS
host, where the PCSmgr is installed and should be used.
The PCSmgr is called via @Com by:
1. select the element icon of the net element PCSPCM
2. call context menu with right mouse click
3. click at Local Administration
Alternatively:
1. select the element icon of the net element PCSPCM
2. open the main menu
3. click Configuration and select Local Administration
At the @com client a window should be opened and the PCSmgr application should
come up as shown below.


Figure 6-6: Java Web Start Opening Window

Figure 6-7: Project View of Policy and Configuration Management


6.6 Reconfiguration of IP Address for PCSmgr

In case the IP address of the PCSmgr host needs to be changed (e.g. because of changes in the network initiated by the network administration), the corresponding entries in the configuration file of the TSP web server and in the jnlp file need to be changed.
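
To locate the entries that still contain the old IP address, a recursive search can be used (a sketch; the search root /opt/SMAW/SMAWrtpa2 is only an assumption based on the TSP web server location given in 6.2.1 and may differ on your installation):

find /opt/SMAW/SMAWrtpa2 -type f \( -name '*.jnlp' -o -name '*.conf' \) -exec grep -l '<old IP address>' {} \;

Replace <old IP address> with the address being changed; each file listed by the command has to be adapted to the new address.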

6.7 FAQ and Troubleshooting Information


6.7.1 Is it possible to run the older and the current PCSmgr release concurrently at the same client?
Yes, it is possible; they run in different environments. However, it is not recommended: to avoid conflicts please do not run them concurrently. Both instances need the same system variables, which are set by the two releases with different values, e.g. variables that contain installation paths and links to the Java libraries.

6.7.2 Starting of PCSmgr leads to Error message


After starting of the PCSmgr the following error message may be displayed:
Exception in thread "main" java.lang.NoClassDefFoundError:
com/siemens/dxa/xacmleditor/XacmlEditor

For Windows, Linux and Solaris: Please check whether the path to the Java installation and the required system variables (JAVA_HOME and PCM_HOME) are set correctly.
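
A quick way to check the variables (a sketch):

On Linux or Solaris:   echo $JAVA_HOME ; echo $PCM_HOME
On Windows:            echo %JAVA_HOME% && echo %PCM_HOME%

Both variables must point to existing directories of the Java and PCM installations.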

6.7.3 Starting of PCSmgr leads to error message "Page cannot be found" (HTTP 404)

Please check whether the called URL is the correct one.
If it is: Please check whether the TSP web server (the server in the URL) is running.

6.7.4 Configuration parameters are modified by the editor, but after saving, the modifications are not visible

Under Solaris, this happens if the repository directory and its sub-directories and files do not have full write access rights.

6.7.5 New projects have been generated by the PCSmgr, but an inconsistency message appeared

Please check whether the disk is full.

6.7.6 Java Web Start Download Error occurs

After calling the PCSmgr client via JWS (browser), a Java Web Start download error occurs and an error message is displayed (Java IO exception in combination with HTTP response code 502).


Please check whether a proxy is required for the communication between client and server.

6.7.7 JNLP Cache Size Warning

After calling the PCSmgr client via JWS (browser), a Java Web Start error occurs and the JNLP Cache Size Warning is displayed (see figure).

Figure 6-8: JNLP Cache Size Warning



The PCSmgr GUI cannot be loaded.

The cache size for Java Web Start needs to be configured with a higher limit of at least 15 MB. The configuration is done via the Java Control Panel at the client host.
On Windows clients the Java Control Panel is called via the system control panel by selecting Java.
On Linux or Solaris client hosts the Java Control Panel is located in the JRE installation directory in the directory javaws and is called like a script.
For example, if the JRE installation directory is /usr/java/jre1.5.0_06/:
cd /usr/java/jre1.5.0_06/javaws
./javaws
This calls the Java Control Panel (see figure).

Click on Settings and change the amount of disk space to use to at least 15 MB or to unlimited.


Close the window by clicking OK and close the Java Control Panel by clicking OK, too.

6.7.8 After an update of the PCSmgr the old version appears at the client

After an update of the PCSmgr the former version appears at the client.
To force the updated PCSmgr version to be loaded, please clear the Java Web Start cache. This is done via the Java Application Cache Viewer, which can be called via the Java Control Panel (see 6.7.7).
Click on Java.


Select User:

Mark the PCS-5000 entry and the concerning library entries and click Remove.


Close the Java Application Cache Viewer and call the PCSmgr again via the browser. The PCSmgr and the related libraries are now loaded.


7 Software Upgrade of a PCS host

Upgrade means installing a new version of the PCS (PCSmain and PCSmgr) packages. The procedures here explain only the PCSmain and PCS Manager package upgrades. This section does not cover TSP and other upgrades; check the release notes for the respective TSP upgrade document.
PCS upgrades are supported only for cluster installations, not for single node.
When a new PCS version is released you can select one of the following ways to get the new PCS system:
1. Re-setup the PCS install server with the newly released TSP medium and PCS images, then re-install the PCS hosts from the install server.
2. If only the PCS images are to be upgraded, use the Software Upgrade Framework (SUF), which is explained here.
Note: Manual upgrades are strongly discouraged and not supported. Do not make any PCS configuration changes or create new projects during the upgrade; changes made during the upgrade lead to inconsistencies in the data migration.

7.1 PCS upgrade via SUF


When new PCS release is available you can upgrade PCS using Software Upgrade
Framework (SUF). First you must prepare a Software Update Server (SUS, co-located
with Install Server) for this upgrade. Then start the update as sufuser on SUS4. At
present Rolling Software Update (RSU) is supported (cluster).

RSU is used to upgrade PCS (PCSmain, PCSmgr at the same time).

To choose the appropriate upgrade method, please have a look at the particular PCS
Release Notes.
Note: Now RSU is supported to upgrade both PCSmain and PCSmgr at the same time.
Dont abort the RSU during the upgrade using Ctrl+C or closing putty seesion. If you had
any problem during the RSU, wait till request of for fall back.

7.1.1 Upgrade to PCS6.3


Note: Refere the release notes for the respective version upgrade guide and section.
Note: Hardware should be in standard configuration, if not installation will fail. More
information on hardware standard, refer PCS6.3 release notes and Hardware Description
guide.

You can switch from root to sufuser by su sufuser.

REDKNEE

for internal use only

132 of 163

REDKNEE

8 Annex

7.1.1.1 Prepare install server


Considering the Install Server must installed with TSP media which has the same as PCS
hosts. Preparing install server and installing TSP media refer the section 3.1.
Perform the following task as root user in Install Server.

7.1.1.1.1 Install PCSEP package


The PCSEP package contains the specific PCS upgrade scripts which have been
executed during the RSU.
If PCSEP package is installed already remove the previous version using the pkgrm
command.
Example:
# pkgrm PCSEP

Now download the PCSEP package from doc-access to install server, Unzip PCSEP
package, and then install it in Install Server using pkgadd commad.
Example:
# gunzip PCSEP_PA_063.pkg.gz
# pkgadd d PCSEP_PA_063.pkg

Answer with yes or Enter for questions during the installation.


Duration of this action is ~2 minute.
A screen trace of a sample execution of the package installation is shown in Annex 8.4

7.1.1.1.2 Prepare PCS upgrade RSU unit


Download the PCSmain and PCSmgr package from Doc-Access to Install Server in
/export/home/iserver/install/PCS/PCS6.3/ location.
Note: PCS packages (PCSmain, PCSmgr and PCSEP) name should not contain the
open and close braces ([]), should be like below.
Example:
bash-3.00# pwd
/export/home/iserver/install/PCS/PCS6.3/
bash-3.00# ls -l
total 211378
-rw-r--r--

REDKNEE

1 root

root

83882887 Aug

8 08:31 PCSmain_PAT_063.pkg.gz

for internal use only

133 of 163

REDKNEE
-rw-r--r--

8 Annex
1 root

root

24260209 Aug

8 08:32 PCSmgr_063.pkg.gz

bash-3.00#

7.1.1.1.3 Prepare the control.xml file


Note: This section is mandatory only when upgrading from PCS6.1/PCS6.1EP1 version,
if upgrading from any other PCS version, then ignore this section.
Login to any one of the PCS node (Which is going to be upgraded from
PCS6.1/PCS6.1EP1)
as
root
user
and
download
the
file
/opt/SMAW/INTP/descriptors/PCSXXXX_platform_cframe_CD.xml to desktop.
Where XXXX is the name which is given during the customization package generation,
remember/note this name.
Example:
root@pcs25a> ls -l /opt/SMAW/INTP/descriptors/PCS*_platform_cframe_CD.xml
lrwxrwxrwx
1 root
other
65 Jul 17 06:48
/opt/SMAW/INTP/descriptors/PCScust25_platform_cframe_CD.xml ->
../install/PCScust25/descriptors/PCScust25_platform_cframe_CD.xml
--- In this example cust25 is the name which is given during the customization package generation
root@pcs25a>

Now edit this file which is downloaded to desktop as per the below comments and save
the file.

-------------------Some outputs are truncated ---------------<ComponentStructure>


<Filegroup FilegroupName="Jar_Files">
<File Name="custPCScust25_Install.jar" Directory="/vobs/ims_pcscol_caf/Build/"/>
</Filegroup>

---- Remove the above the 3 lines


</ComponentStructure>
<IPServiceDescriptions>
<IPServiceDescription Name="FTP" IPServiceType="scalable_sticky">
<Port Name="FTP_21" PortNumber="21" Protocol="tcp"/>
<Port Name="FTP_22" PortNumber="22" Protocol="tcp"/>

--- Add the above line


</IPServiceDescription>
<IPServiceDescription Name="standard_services" IPServiceType="scalable_sticky">

REDKNEE

for internal use only

134 of 163

REDKNEE

8 Annex

<Port Name="9099" PortNumber="9099" Protocol="tcp"/>


<Port Name="8099" PortNumber="8099" Protocol="tcp"/>
<Port Name="23" PortNumber="23" Protocol="tcp"/>
<Port Name="513" PortNumber="513" Protocol="tcp"/>
<Port Name="1100" PortNumber="1100" Protocol="tcp"/>
<Port Name="1101" PortNumber="1101" Protocol="tcp"/>
<Port Name="4800" PortNumber="4800" Protocol="tcp"/>
</IPServiceDescription>
</IPServiceDescriptions>
</ComponentDescription>

Note: During the below customization package generation use the same name which
you found in above ls l command output. (Example: In above example cust25 is the
name given during the customization package generation)
Once above changes are done, then generate the customization package, refer the [How
to Generate Host Specific Package] document in section 6.2 for customization package
generation.

Then copy the Customization(Which is generated in above steps) to Install Server in


Location /export/home/iserver/install/PCS/PCS6.3/.
Example:
bash-3.00# pwd
/export/home/iserver/install/PCS/PCS6.3/
bash-3.00# chmod 644 *
bash-3.00# ls -l
total 211378
-rw-r--r--

1 root

root

83882887 Aug

8 08:31 PCSmain_PAT_063.pkg.gz

-rw-r--r--

1 root

root

24260209 Aug

8 08:32 PCSmgr_063.pkg.gz

-rw-r--r--

1 root

root

21918 Jul

5 11:25 PCScust25.tar.gz

bash-3.00#

Then copy the file /export/home/iserver/install/LiveRSU/Live/pkg_cust_control.xml in


Install Server to /export/home/iserver/install/LiveRSU/Live/control.xml.
Example:
bash-3.00# cd /export/home/iserver/install/LiveRSU/Live/
bash-3.00# cp p control.xml control.xml_bkp
bash-3.00# cp p pkg_cust_control.xml control.xml
bash-3.00# ls l
total 6
drwxr-xr-x

REDKNEE

3 root

bin

4 Jul 18 11:56 ScriptUnit

for internal use only

135 of 163

REDKNEE

8 Annex

-rw-r--r--

1 root

bin

960 May 30 17:23 control.xml

-rw-r--r--

1 root

bin

632 May 30 17:23 control.xml_bkp

-rw-r--r--

1 root

bin

960 May 30 17:23 pkg_cust_control.xml

bash-3.00#

Now edit the control.xml file as per below comments.


bash-3.00# cd /export/home/iserver/install/LiveRSU/Live/
bash-3.00# vi control.xml

-------- Some outputs are truncated ------<pkgadd name='PCSmgr' admin="SINGLE">


<iserver product='PCS' type='install' version='PCS6.3'>
<source compression='gzip' type='pkg'>PCSmgr_063.pkg.gz</source>
</iserver>
</pkgadd>
-------- Some outputs are truncated --------- Replace the XXXX with customization package name (which is given during the
customization package build).
<pkgadd name='PCSXXXX' admin="SINGLE">
<iserver product='PCS' type='install' version='PCS6.3'>
<source compression='gzip' type='tar'> PCSXXXX.tar.gz</source>
</iserver>
</pkgadd>
</unit>

After the change file control.xml output look like below.


bash-3.00# cd /export/home/iserver/install/LiveRSU/Live/
bash-3.00# cat control.xml
-------- Some outputs are truncated ------<pkgadd name='PCSmgr' admin="SINGLE">
<iserver product='PCS' type='install' version='PCS6.3'>
<source compression='gzip' type='pkg'>PCSmgr_063.pkg.gz</source>
</iserver>
</pkgadd>
<pkgadd name='PCScust25' admin="SINGLE">
<iserver product='PCS' type='install' version='PCS6.3'>
<source compression='gzip' type='tar'> PCScust25.tar.gz</source>
</iserver>
</pkgadd>
</unit>

REDKNEE

for internal use only

136 of 163

REDKNEE

8 Annex

7.1.1.1.4 Preparing datainit file


Default datainit file comes with PCSEP package and it is available in following location
/export/home/iserver/install/LiveRSU of the install server and file name called
datainit_R_D_RSU.xml.
bash-3.00# pwd
/export/home/iserver/install/LiveRSU
bash-3.00#
bash-3.00# ls -lrt
total 26
-rw-r--r--

1 root

bin

4641 Aug

7 21:35 PCS_preparation.xml

-rw-r--r--

1 root

bin

2850 Aug

7 21:35 PCS_preparation_PCSmain_PCSmgr.xml

-rw-r--r--

1 root

bin

676 Aug

7 21:35 OEM_install.sh

-rw-r--r--

1 root

bin

818 Aug

7 21:35 datainit_RSU.xml

-rw-r--r--

1 root

bin

817 Oct

9 12:32 datainit_R_D_RSU.xml

drwxr-xr-x

2 root

bin

512 Aug 10 10:11 Live

drwxr-xr-x

3 root

bin

512 Aug 10 10:11 NonLive

drwxr-xr-x

3 root

bin

512 Aug 10 10:11 ScriptUnit

Copy this file to directory /export/home/sufuser and set the following permissions
bash-3.00#

cp datainit_R_D_RSU.xml /export/home/sufuser

bash-3.00# cd

/export/home/sufuser

bash-3.00# chown sufuser:other datainit_R_D_RSU.xml


bash-3.00# ls -l datainit_R_D_RSU.xml
-rw-r--r--

1 sufuser

other

770 Sep 10 15:34 datainit_R_D_RSU.xml

bash-3.00#

Edit the file content as shown in the below example.


Example: Comments are added in red color.
<?xml version="1.0" standalone="yes"?>
<datainit>
<uidata id="check">
<entry name="allownounits" value="no" />
<entry name="waitaftercheck" value="no" />
</uidata>
<uidata id="install">
<entry name="globalnfs" value="yes" />
</uidata>
<uidata id='localMirrors'>
<entry name='enable' value='yes'/>
</uidata>

REDKNEE

for internal use only

137 of 163

REDKNEE

8 Annex

<uidata id="installunitdata">
<list ident="units">
<item>
<entry name="name" value="RSU" />
<entry name="path" value="/export/home/iserver/install/LiveRSU" />
<entry name="host" value="Install Server Admin LAN IP Address" /> Replace here Install
</item>
</list>

Server Admin Lan IP Address, IP address should be in double code (),


example: value= 192.168.22.17

</uidata>
<uidata id="scriptunitdata">
<list ident="units">
<item>
<entry name="name" value="ScriptUnit" />
<entry name="path" value="/export/home/iserver/install/LiveRSU/RSU" />
<entry name="host" value="Install Server Admin LAN IP Address" /> Replace here Install
</item>

Server Admin Lan IP Address, IP address should be in double code (),

</list>

example: value= 192.168.22.17

</uidata>
</datainit>

Sample file available in Annex 8.8 Sample datainit file for RSU
Check the file /export/home/iserver/install/LiveRSU/Live/control.xml whether it
contains the right PCSmain and PCSmgr package names as copied in section 7.1.1.1.3,
if not please correct package names in this file.
Example:
bash-3.00# cd /export/home/iserver/install/LiveRSU/Live
bash-3.00# ls -l
total 4
-rw-r--r--

1 root

bin

641 Nov

8 04:02 control.xml

bash-3.00# cat control.xml


<?xml version="1.0"?>
<!DOCTYPE unit SYSTEM "unit.dtd">
<unit>
<supports installable='live' type='Live'/>
<pkgadd name='PCSmain' admin="SINGLE">
<iserver product='PCS' type='install' version='PCS6.3'>
<source compression='gzip' type='pkg'>PCSmain_PAT_063.pkg.gz</source>
</iserver>
</pkgadd>
<pkgadd name='PCSmgr' admin="SINGLE">
<iserver product='PCS' type='install' version='PCS6.3'>

REDKNEE

for internal use only

138 of 163

REDKNEE

8 Annex

<source compression='gzip' type='pkg'>PCSmgr_063.pkg.gz</source>


</iserver>
</pkgadd>
-------- Some outputs are truncated ------</unit>
bash-3.00#

7.1.1.1.5 Upgrade SUF Software in PCS hosts


Note: This step is not required if your using the same Install Server and same TSP
version which is used for First Installation/Upgrade.
Add hostname and IP address of core LAN and admin LAN (both nodes) in /etc/inet/hosts
file of Install Server.
bash-3.00# cat /etc/inet/hosts
-------OUTPUT TRUNCATED------180.144.133.191 pcs10a

Core Lan Node1 hostname & IP address

180.144.133.201 pcs10b

Core Lan Node2 hostname & IP address

192.168.22.191 pcs10a-admin Node 1 admin Lan IP and hostname(Admin Lan


hostname should be Node1CoreLanHostname-admin), example: Core Lan
Hostname is pcs10a, then admin Lan hostname is pcs10a-admin.
192.168.22.201 pcs10b-admin Node 2 admin Lan IP and hostname(Admin Lan
hostname should be Node2CoreLanHostname-admin), example: Core Lan
Hostname is pcs10b, then admin Lan hostname is pcs10b-admin.
bash-3.00#

Login to the Install Server as sufuser


SufInstaller i e <Node_1>
Node_1 is PCS primary node name or First node name, below output from Netra T2000 Install
Server.
bash-3.00$ su - sufuser
bash-3.00$ SufInstaller -i -e pcs5a
Writing log to
/var/opt/SMAWsuf/Controller/SufInstaller.8/SufInstaller.log
No install directory given. Using default directory
*** If using ssh, be prepared to enter the passwords asap. ***
*** sshd has a timeout while waiting for the password.

***

** Starting update for node pcs5a using interface pcs5a


** Getting local version

REDKNEE

for internal use only

139 of 163

REDKNEE

8 Annex

Local APS: TSPVAI920900


pkgadd SMAWsufut

APS: TSPVAI920900

pkgadd SMAWsufus

APS: TSPVAI920900

Continue (y/n)? y
--- Say Y, after this it may ask node 1 root password,
provide the root password, password wont visible.
Installing node pcs5a using interface pcs5a
Warning: Permanently added 'pcs5a,10.255.6.3' (RSA) to the list of known
hosts.
Node pcs5a updated
Accessing pcs5a using interface pcs5a is already possible without password
** Starting update for node pcs5b using interface 10.255.7.4
** Getting local version
Local APS: TSPVAI920900
pkgadd SMAWsufut

APS: TSPVAI920900

pkgadd SMAWsufus

APS: TSPVAI920900

Continue (y/n)? y
--- Say Y, after this it may ask node 2 root password,
provide the root password, password wont visible.
Installing node pcs5b using interface 10.255.7.4
Warning: Permanently added '10.255.7.4' (RSA) to the list of known hosts.
Node pcs5b updated
Accessing pcs5b using interface 10.255.7.4 is already possible without
password
SufInstaller successfully finished
bash-3.00$

7.1.1.1.6 Preparing Cluster Description file


Note: This is not required if your using the same Install Server which is used for First
Installation.
Login to any one of the PCS host as root user and go the directory
/var/opt/SMAWsuf/Server/data. Copy the file cluster.xml to install server in directory
/export/home/sufuser and set the following permission.
bash-3.00# pwd
/export/home/sufuser
bash-3.00# chown sufuser:other cluster.xml
bash-3.00# ls -l cluster.xml

REDKNEE

for internal use only

140 of 163

REDKNEE
-rw-r--r--

8 Annex
1 sufuser

other

770 Sep 10 15:34 cluster.xml

bash-3.00#

Now edit the cluster description file in Install Server and change the admin Lan IP
address and host name of Install Server as given in example below and save the file.
Example: Comments in Blue color
bash-3.00# cd /export/home/sufuser
bash-3.00# cat cluster.xml
<?xml version='1.0'?>
<!DOCTYPE cluster SYSTEM 'cluster.dtd'>
<cluster id='pcsce10_pcsce11' topology='TwoNode'
system='solaris' project='PCSP_SUNFIRE_090' column='PCS_Cluster' module='gen' tsp='yes'>
<installserver clusternode='no' iserverpath='/export/home/iserver'>
<if nodename='pcsis3-admin' ipaddr='192.168.22.168'/>
</installserver>

Replace the current Install

Server admin lan hostname & IP Address

<node id='1'>
<if nodename='pcs10a' ipaddr='192.168.22.191' rmethod='ssh'
</node>
<node id='2'>
<if nodename='pcs10b' ipaddr='192.168.22.201' rmethod='ssh'/>

</node>
<mirroring>
<local nodeid='1' device='/dev/md/dsk/d0'/>
<local nodeid='1' device='/dev/md/dsk/d3'/>
<local nodeid='1' device='/dev/md/dsk/d6'/>
<local nodeid='1' device='/dev/md/dsk/d12'/>
<local nodeid='2' device='/dev/md/dsk/d100'/>
<local nodeid='2' device='/dev/md/dsk/d103'/>
<local nodeid='2' device='/dev/md/dsk/d106'/>
<local nodeid='2' device='/dev/md/dsk/d112'/>
<shared type='name' name='ADVFRW' split='no'/>
<shared type='name' name='T001' split='no'/>
</mirroring>
</cluster>
bash-3.00#

Now set the new Cluster Description to PCS hosts with following command as sufuser
user.
SufAdmin f setClusterDescription <Node_1> <new cluster description file name>
Example:

REDKNEE

for internal use only

141 of 163

REDKNEE

bash-3.00#

8 Annex

su - sufuser

bash-3.00# SufAdmin f setClusterDescription pcs10a /export/home/sufuser/cluster.xml


Cluster Description on pcs10a set successfully
Cluster Description on pcs10b set successfully
bash-3.00$

7.1.1.1.7 Start RSU for PCS upgrade


Before starting the RSU, follow all the above steps. Now login to the Install Server as
sufuser and execute the following command.
SufControl -d datainit_R_D_RSU.xml

RSU

<Node_1 name>

<Node_1 name> is a primary node name


Note: If any issue during the RSU upgrade, first refer the section 7.2 Trouble Shooting on
RSU, if solution not present then send the following information to Support person.

Hardware type
Number of core processor
Memory size on Node 1
Memory size on Node 2
Putty log from Install Server (from Sufcontrol command to failure place)
Log from Install Server (/var/opt/SMAWsuf/Controller/<Node1_Node2>)
Log from PCS nodes, both nodes (/var/opt/SMAWsuf/Server/*sufd.log and
/tmp/updateRTP/*)

File from both nodes (/var/adm/message*)


From PCS version to PCS version

Below table gives you the user interaction time period and possible Fallback places.
User
interaction
1

REDKNEE

state

Time
(hh:mm:ss)
Checks install unit, script unit, local & shared
disk status

Starting RSU

0:00:00

Node 1 is active now.


Stops RTP on Node 1, Node 2 become
active, here approximately less than 60 secs
downtime.

Stop RTP on Node 1

0:06:00

Node 2 is become active now.


Starts upgrade on Node 1, also migrates
Pcs.parm and PCS projects.

Upgrade Node 1

0:12:00

Starts RTP

0:18:00

Node 2 is active.
After the start RTP, Node 1 become active
with new software, here there is less than 20
secs down time.

for internal use only

142 of 163

REDKNEE

8 Annex

Node 1 is active now.


Fallback is possible; there is less than 60
secs down time.

Upgrade completed on
Node 1.

0:22:00

After completing Node 1 upgrade, if youre


trying to use the PCM GUI, then first exit or
close the existing PCM GUI, re-login into the
PCM GUI again.

Start upgrading Node 2

0:23:00

Node 1 is active.

Stop RTP on Node 2

0:25:00

Node 1 is active.

Upgrade Node 2

0:30:00

Node 1 is active.

Start RTP on Node 2

0:35:00

Node 1 is active.
Node 2 upgrade also completed now.
Node 1 is active.

Upgrade completed on
Node 2.

Continue the upgrade

0:40:00

0:41:00

First Node2 reboots with fallback disk and


Node1 continues to serve the calls with
upgraded SW version, then Node1 reboots
with fallback disk(here downtime is 80 second
for switchover from Node1 to node2) and
node2 starts serving the calls with Old SW.
Continue the upgrade.
Node 1 is active.
First Node2 reboots with fallback disk and
Node1 continues to serve the calls with
upgraded SW version, then Node1 reboots
with fallback disk(here downtime is 80 second
for switchover from Node1 to node2) and
node2 starts serving the calls with Old SW.
Upgrade is in commit state here.
Node 1 is active.
First Node2 reboots with fallback disk and
Node1 continues to serve the calls with
upgraded SW version, then Node1 reboots
with fallback disk(here downtime is 80 second
for switchover from Node1 to node2) and
node2 starts serving the calls with Old SW.

Commit the changes

0:43:00

After commit fallback not possible, only FSR


is possible.

Upgrade completed

0:58:00

Upgrade completed here.

REDKNEE

for internal use only

143 of 163

REDKNEE

8 Annex

This is not Mandatory: Use the screen command to reconnect or re-establish the
upgrade session if upgrade putty session is closed. First login as sufuser in Install
Server, then run the command screen S <Some Name>, then start the upgrade. If
upgrade putty session closed unexpectedly, then restablish the putty session using
command screen x <Some Name>, where Some Name should be same as which is
used already. After start using the screen command for upgrade dont press Ctrl+C, if
you press Ctrl+C, your upgrade session will be vanished and then you cant re-establish
the upgrade session again using screen command.
Example:
[sufuser@tspatca101 ~]$ screen S PCS-Upgrade
[sufuser@tspatca101 ~]$ SufControl -d datainit_R_D_RSU.xml RSU tspatca103
SufControl session start: UT=RSU Cluster=tspatca103_tspatca104
Checking SUF SW version
Node id=10.200.203.3: version=TSPYRI971200
Node id=10.200.203.4: version=TSPYRI971200
OK

If putty session closed, then open new putty and re-establish the closed connection using
below command.
[sufuser@tspatca101 ~]$ screen x PCS-Upgrade
[sufuser@tspatca101 ~]$ SufControl -d datainit_R_D_RSU.xml RSU tspatca103
SufControl session start: UT=RSU Cluster=tspatca103_tspatca104
Checking SUF SW version
Node id=10.200.203.3: version=TSPYRI971200

Bold:
Red:
Blue:

youre input. stands for <Enter>


Important comments. Make decision based on your own environment.
Comments.

Note: Some user interactions are needed during the upgrade process, so please do NOT
leave the upgrade server until the upgrade is complete, for users input please refer to the
below screen trace.
Example:
bash-3.00# su - sufuser
bash-3.00$ SufControl -d datainit_RSU.xml RSU pcs15a

SufControl session start: UT=RSU Cluster=pcs15a_pcs15b


Checking SUF SW version
Node id=192.168.22.157: version=TSPVAI920900
Node id=192.168.22.160: version=TSPVAI920900
OK
Writing log to /var/opt/SMAWsuf/Controller/pcs15a_pcs15b/SufControl_101.log

REDKNEE

for internal use only

144 of 163

REDKNEE

8 Annex

Control server at port 55560 started with pid=17004


Waiting for control server to get ready
.Appending log to /var/opt/SMAWsuf/Controller/pcs15a_pcs15b/SufDetector_101.log
........................

- 2009-08-09 22:18:42 - Entering state startSUF on pcs15a (id=1) 22:18:50 - Action startSUF on pcs15a (id=1) returns OK (0)

--------------Some output Truncated--------------

2009-08-09 22:27:41 - Entering state execScriptUnitInstall on pcs15a (id=1) 22:27:43 - Action execScript on pcs15a (id=1) returns FALLBACK (1)

ScriptUnit(0019) scriptexec 'RSU_ServiceConfig.sh'


scriptexec 'RSU_ServiceConfig.sh' failed with non-fatal exit code [1] and output:

Fallback requested by execScriptUnitInstall ***

[F]allback
[R]etry
[C]ontinue(execscriptunitinstall)
[A]bort
Select code >>F
If you get above error during the Node-1 upgrade, then say [C]ontinue to continue the upgrade.

- 2009-08-10 14:41:26 - Entering state uiWaitAfterInstall on pcs15a (id=1)

--------------Some output Truncated--------------

Intentional wait state: Wait after install


Installation on current node finished
[C]ontinue
[F]allback
[A]bort
Select code >>C

- Say C to continue the upgrade the Node 2.

Node 1 upgrade completed. Fallback is possible, there is less than 80 sec downtime.

After completing Node 1 upgrade, if youre trying to use the PCM GUI, then
first exit or close the existing PCM GUI, re-login into the PCM GUI again.
- 2009-08-10 14:42:56 - Entering state execScriptUnitBegin on pcs15b (id=2) 14:42:56 - Action execScript on pcs15b (id=2) returns REENTRY (0)

--------------Some output Truncated--------------

REDKNEE

for internal use only

145 of 163

REDKNEE

8 Annex

2009-08-10 14:57:41 - Entering state execScriptUnitInstall on pcs15b (id=1) 14:57:43 - Action execScript on pcs15b (id=1) returns FALLBACK (1)

ScriptUnit(0019) scriptexec 'RSU_ServiceConfig.sh'


scriptexec 'RSU_ServiceConfig.sh' failed with non-fatal exit code [1] and output:

Fallback requested by execScriptUnitInstall ***

[F]allback
[R]etry
[C]ontinue(execscriptunitinstall)
[A]bort
Select code >>F
If you get above error during the Node-2 upgrade, then say [C]ontinue to continue the upgrade.

- 2009-08-10 14:41:26 - Entering state uiWaitAfterInstall on pcs15a (id=1)

--------------Some output Truncated--------------

- 2009-08-10 15:04:46 - Entering state uiContinueWithSwitch on pcs15b (id=2) -

Intentional wait state: Wait before switch


Installation on current node finished
[C]ontinue
[R]etry_check
[F]allback
[A]bort
Select code >>C

---Node 2 upgrade also completed now.

Fallback is possible, there is less than 80 sec downtime.

- 2009-08-10 15:16:58 - Entering state execScriptUnitComplete on pcs15a (id=1) 15:16:58 - Action execScript on pcs15a (id=1) returns REENTRY (0)

--------------Some output Truncated--------------

- 2009-08-10 15:17:50 - Entering state uiContinueWithCommit on pcs15b (id=2) -

Intentional wait state: Wait before cleanup


[C]ommit
[R]etry_check

REDKNEE

for internal use only

146 of 163

REDKNEE

8 Annex

[F]allback
[A]bort
Select code >> C

------ Commit the upgrade.

Fallback is possible, there is less than 80 sec downtime. If Commit here, fallback not possible,
only FSR is possible.

- 2009-08-10 15:24:32 - Entering state rtpUpdCmdExecCleanup on pcs15a (id=1) 15:24:33 - Action RtpUpdCmdExec on pcs15a (id=1) returns OK (0)

--------------Some output Truncated--------------

- 2009-08-10 15:27:30 - Entering state AH_uiAskHardeningRequired on pcs15a (id=1) -

Do you want to perform automatic OS Hardening?


[Y]es
[N]o
Select code >> N

-------Say Y if cluster is already harden, else Say N

- 2009-08-10 15:28:49 - Entering state AH_checkIfSmu on pcs15b (id=2) 15:28:49 - Action getFlag on pcs15b (id=2) returns NO (0)

--------------Some output Truncated--------------

- 2009-08-10 15:28:53 - Entering state ELM_uiAskEnableMirrors on pcs15a (id=1)

- 2009-08-10 15:28:56 - Entering state ELM_checkLocalMirrorsEnable on pcs15a (id=1) 15:29:10 - Action checkLocalMirrors on pcs15a (id=1) returns OK (0)

--------------Some output Truncated--------------

- 2009-08-10 15:30:27 - Entering state finishSUF on pcs15b (id=2) 15:30:27 - Action finishSUF on pcs15b (id=2) returns OK (0)

Finished OK

Terminating Control Server (pid=17004)


Control Server terminated with 0
SufControl session exit: 0
bash-3.00$

Note: After the successful upgrade, disk synchronization will take ~1h:30m.

REDKNEE

for internal use only

147 of 163

REDKNEE

8 Annex

Sample screen trace avail I Annex 8.7 Screen trace of PCS and PCSmgr RSU on
Cluster.

7.1.2 Post RSU steps


7.1.2.1 Disabling the pacct service
Note: This step is mandeteroy if hardening is enabled during the upgrade (RSU).
This service will generate the pacct file in directory /var/adm/, to avoid this, after the
upgrade do the following step to stop this service permanently.
-

Become superuser and edit the adm crontab file and delete the entries for the
ckpacct, runacct, and monacct programs
# crontab -e adm

Edit the root crontab file and delete the entries for the dodisk program.
# crontab e

Remove the startup script for Run Level 3.


# unlink /etc/rc3.d/S22acct

Remove the stop script for Run Level 0.


# unlink /etc/rc0.d/K22acct

Stop the accounting program.


# /etc/init.d/acct stop

7.1.2.2 Start the crash.d service


After the RSU start the crash.d service manually, use the below command to start this
service.
/etc/rc3.d/S97crash start
Example
bash-3.00$ /etc/rc3.d/S97crash start

7.1.2.3 Password Authentication for Plugin Filetransfer


Refer the section 5.6.4

7.1.2.4 PAM: Authentication failed for rtp99 from clusternode*-priv


Refer the section 5.6.11.

REDKNEE

for internal use only

148 of 163

REDKNEE

8 Annex

7.1.2.5 TSP GUI using https


Refer the section TSP GUI using https
If it is cluster do the above steps on both PCS nodes.

7.1.3 Post fallback steps


Note: Below sections are mandatory only if fallback is enabled during the upgrade. If
fallback is enabled after the Node 1 upgrade, then do the below steps only on Node 1, if
fallback enabled after both Node upgrade, then do the below steps in both nodes.

7.1.3.1 OBP setting


Note: This step is mandatory only if hardening is enabled during the First Installation.
Login to the PCS node as root user and do the following steps. PCS node reboot is not required.
root@pcs10b> pwd

Change to the directory /opt/INTPaghar/secureTool

/opt/INTPaghar/secureTool
root@pcs10b> ./sectool.sh

Run the sectool.sh script

-------------------------------------------------WELCOME TO SECURITY CONFIGURATION TOOL


--------------------------------------------------

1. Solaris 5.10 Hardening

2. Solaris 5.10 Unhardening

3. OEM Hardening

4. OEM Unhardening

5. FileSystem Hardening

6. FileSystem Unhardening

7. Solaris 5.10 Hardening Audit

8. OBP Password Change

9. QUIT

REDKNEE

for internal use only

149 of 163

REDKNEE
Enter your choice: 8

8 Annex
Seclect 8 ,

******************************************
Not in secure mode
******************************************
******************************************
Setting the security mode
******************************************
EEPROM Password should satisfy the following criteria:

1) Password should contain atleast one alphabetical character.


2) Password Minimum length :6

Maximum length :8

Changing PROM password:


New password: PCS123

Type your OBP password as

Retype new password: PCS123

Re-type OBP password

Be very careful, if you are using your own OBP password, if you forget your OBP password, Sun
people only can able to reset OBP password.
Changing PROM password:
New password:
Retype new password:
******************************************
SUCCESS: EEPROM Security password set now
******************************************
Press Enter to continue

Press enter

-------------------------------------------------WELCOME TO SECURITY CONFIGURATION TOOL


--------------------------------------------------

1. Solaris 5.10 Hardening

2. Solaris 5.10 Unhardening

3. OEM Hardening

4. OEM Unhardening

5. FileSystem Hardening

6. FileSystem Unhardening

REDKNEE

for internal use only

150 of 163

REDKNEE

8 Annex

7. Solaris 5.10 Hardening Audit

8. OBP Password Change

9. QUIT

Enter your choice: 9

Say 9

root@pcs10b> exit

Say exit

7.2 Trouble Shooting on RSU


7.2.1 ERROR: Control server failed to start
During the RSU, if you are getting the following error message.
bash-3.00$ SufControl -d datainit.pcs5a.xml RSU pcs5a
SufControl session start: UT=RSU Cluster=pcs5a_pcs5b
Checking SUF SW version
Node id=192.168.22.3: version=TSPVAI920900
Node id=192.168.22.4: version=TSPVAI920900
OK
Writing log to /var/opt/SMAWsuf/Controller/pcs5a_pcs5b/SufControl_101.log
Control server at port 55556 started with pid=11719
Waiting for control server to get ready
.Appending log to /var/opt/SMAWsuf/Controller/pcs5a_pcs5b/SufDetector_101.log
..........ERROR: Control server failed to start
bash-3.00$

In PCS host, comment the below line Banner /etc/issue in file /etc/ssh/sshd_config,
after the comment, file looks like below.
---------------------Some output Truncated ------------------------------------------X11Forwarding yes
#Banner /etc/issue

---- Commented line

Protocol 2
---------------------Some output Truncated -------------------------------------------

Then restart the svc:/network/ssh:default.

REDKNEE

for internal use only

151 of 163

REDKNEE

8 Annex

Example:
root@pcs5a> svcadm restart svc:/network/ssh:default

Note: Do the steps on both nodes if it is cluster.

7.2.2 Error: Node XXX.XXX.XXX.XXX has wrong SW-Version


Install Server and PCS hosts has different version of SUF during the RSU upgrade. When RSU
starts, following error message will be displayed and stops RSU.
bash-3.00$ SufControl -d datainit.pcs5a.xml RSU pcs5a
Checking SUF SW version
Error: Node 192.168.22.191 has wrong SW-Version.
Update node 192.168.22.191 using SufInstaller

Then follow the steps mentioned in section 7.1.1.1.5 Upgrade SUF Software in PCS
hosts.

7.2.3 StopRTP failed


During the RSU if stopRTP is failed with following message, login to the particular PCS
host as root user and delete the directory /tmp/.ORACLE_99_stop_error' and [R]etry to
continue the installation.
- 2009-10-02 12:31:25 - Entering state stopRTP on pcs14b (id=2) 12:35:36 - Action stopRTP on pcs14b (id=2) returns ERROR (1)
Actor

: http://localhost:55555/

Message: Died in server method stopRTP() (Package: RTP, Line: 2339)


Detail : Command /opt/SMAW/SMAWrtp/bin/stopRTP -p 99 -L 1 failed with exitcode 6: Writing
log to /export/home/rtp99/99/log/stopRTP.16282.4711
stopping SuperNodeManager
[/opt/SMAW/SMAWrtp/bin/RtpNodeCheck] Return success
sending USR1 to pid: 3070
-----------------------------------------------------------------------------------------------------------------------------------------SuperNodeManager stopped
ERROR: Subsystem 'ORACLE' did not finish properly or in time.
Please check the concerned subsystem. If subsystem meanwhile stopped properly,
delete the subdirectory '/tmp/.ORACLE_99_stop_error' before restarting.

ERROR: Failure while checking subsystems


log written to /export/home/rtp99/99/log/stopRTP.16282.4711

*** Fallback requested by stopRTP ***

REDKNEE

for internal use only

152 of 163

REDKNEE

8 Annex

[F]allback
[R]etry
[C]ontinue(was_uiallowed)
[A]bort

Select code >>

7.2.4 INTPaains_plugin_wrapper_head.sh error


If this error occurs while doing the upgrade, check if all the RTP process is up and
running, if running please go with the [R]etry option and Continue the upgrade.
If [R]etry option also failed, check the RTP process are up and running and continue the
upgrade.
This issue is seen when few major, critical alarms are raised from past 24 hours, due that
this error occurred.
## ERROR in INTPaains_plugin_wrapper_head.sh:
--ERROR---- 999.1-- plugin_wrapper: errors detected during execution of script
/opt/SMAW/INTP/postchecks/postconditioncheck.sh in /opt/SMAW/INTP/postchecks, see
/dump/ApplUpdDiag/ADV_CE1_RSU_postcheck.log
-------------------------------------------------OK: 0

WARNINGS: 0

ERRORS: 1

*** Fallback requested by execPostcheckPlugins ***


[F]allback
[R]etry
[C]ontinue(execlegacypostcheckplugins)
[A]bort

Select code >>

7.2.5 Datainit_RSU.xml file not well-formed


If datainit_RSU.xml file not updated properly, you will get the following problem. Check the
file in Install Server /export/home/sufuser/datainit_RSU.xml and also compare with
available example 8.8 Sample datainit file for RSU. Correct the file and re-start the
upgrade.
bash-3.00$

SufControl -d datainit_RSU.xml

RSU pcs14a

SufControl session start: UT=RSU Cluster=pcs14a_pcs14b


Checking SUF SW version
Node id=192.168.22.69: version=TSPVAI920900
Node id=192.168.22.70: version=TSPVAI920900

REDKNEE

for internal use only

153 of 163

REDKNEE

8 Annex

OK
Writing log to /var/opt/SMAWsuf/Controller/pcs14a_pcs14b/SufControl_100.log

not well-formed (invalid token) at line 15, column 30, byte 449 at
/opt/SMAW/SMAWrtppl/5.8.0_1-202/lib/perl5/site_perl/5.8.0/sun4-solaris/XML/Parser.pm line
185

7.2.6 Before fallback follow below steps


Note: If want to initiate fallback after the node1 upgrade, then execute below steps on Node1.
If you want to fallback after both nodes upgrade, then execute the below steps on Node1 first and
then Node2.
Download ContextClearingScript_RMS.zip from download area(Refer the release notes for
download location) and copy this file to Node2 (dont copy this file to Node1) and verify the
checksum as mentioned in release notes, extract the files and then set the permission as below.
root@pcs21b> pwd
/export/home/rtp99
root@pcs21b> cksum ContextClearingScript_RMS.zip
1178877558 90077 ContextClearingScript_RMS.zip
root@pcs21b>
root@pcs21b> unzip ContextClearingScript_RMS.zip
Archive:

ContextClearingScript_RMS.zip

creating: ContextClearingScript_RMS/
inflating: ContextClearingScript_RMS/operate_pcs_solaris_ipsl_batch.exp
inflating: ContextClearingScript_RMS/pcsContextAppRem_sol
inflating: ContextClearingScript_RMS/stopDispatcherProcess.param
root@pcs21b> cd ContextClearingScript_RMS/
root@pcs21b>

pwd

/export/home/rtp99/ContextClearingScript_RMS
root@pcs21b> chmod 777 operate_pcs_solaris_ipsl_batch.exp stopDispatcherProcess.param
pcsContextAppRem_sol

Step1: Login as rtp99 user in Node2 and execute the below command to stop the dispatcher
process.

./operate_pcs_solaris_ipsl_batch.exp
stopDispatcherProcess.param

stop

<testbed-hostname>

superad

<password>

Where <testbed-hostname> is a Node2_hostname


Where

<password> is TSP user superad password.

Note: In case any error thrown while stopping the process need not be considered, because those
process may not be available in PCS4.2MR*/PC5.1MR*/PCS6.1MR*.

REDKNEE

for internal use only

154 of 163

REDKNEE

8 Annex

Example:
root@pcs21b> su - rtp99
-bash-4.1$ id
uid=4711(rtp99) gid=1521(dba) groups=1521(dba),1522(ticgroup)
context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
-bash-4.1$ pwd
/export/home/rtp99/ContextClearingScript_RMS
-bash-4.1$ ./operate_pcs_solaris_ipsl_batch.exp stop
stopDispatcherProcess.param

pcs21b superad

PcsQos!7

006D

spawn /usr/bin/clear
spawn echo entering TSP CLI
spawn /opt/SMAW/SMAWrtp/bin/RtpAdmCli -x

RTP Command Line Interface


Copyright (C) Nokia Siemens Networks 2007
All Rights Reserved

(This process is running as "RtpAdmCli01")


Login: superad
Password:
Management session established for user "superad" on "tspatca212"

SOFTWARE UPDATE IN PROGRESS !! SOFTWARE UPDATE IN PROGRESS !! SOFTWARE


UPDATE IN PROGRE
SS !!

A software update has been started on node tspatca210 via SUF.


Depending on the update type your application could be interrupted
if the update process reaches the node where your application
or application server process is running.

SOFTWARE UPDATE IN PROGRESS !! SOFTWARE UPDATE IN PROGRESS !! SOFTWARE


UPDATE IN PROGRE
SS !!

Activating expert mode.

REDKNEE

for internal use only

155 of 163

REDKNEE

8 Annex

Use the command 'help' to see all available commands or


'help "REGULAR EXPR."' to see all commands that match the given
regular expression.
Use 'help command' to get help for the specific command.
Use 'exit' to exit.
Use 'menu' for menu mode.
Use 'file' "FILENAME" to execute commands.
Use 'batch' "FILENAME" to execute commands without manual interaction.
Use 'showCbOutput' for callback output.
Use 'waitForCb' for continuous callback output.
Use 'changePw' for password change.
Use 'newLogin' to change login.
Use 'openlog' "FILENAME" to write output to a log file.
Use 'closelog' to close a previously opened logfile.
Use 'sleep' "millisecs" to pause.
CLI>batch "stopDispatcherProcess.param"
Executing command: procStopProcess "IMS_P_PCS_GX01_CE2_Backup"

CLI>exit
-bash-4.1$

Go to TSP GUI, verify whether following processes are stoped on node2 if applicable, if stoped
proceed with Step2.
IMS_P_PCS_GX01_CE2_Backup
IMS_P_PCS_GQ01_CE2_Backup
IMS_P_PCS_GY01_CE2_Backup
IMS_P_PCS_E401_CE2_Backup
IMS_P_PCS_RA01_CE2_Backup
IMS_P_PCS_RX01_CE2_Backup
IMS_P_PCS_ONENDSSOAP01_CE1_Backup

Step2: Execute the bleow command as rtp99 user on Node2 to clear all the context values.

execRTPenv ./pcsContextAppRem_sol -t 151 -t 158 -t 160


-t 168 -t 169 -dA

-t 161 -t 162

-t 163

-t 165 -t 167

Example:
root@pcs21b> su - rtp99
-bash-4.1$ id
uid=4711(rtp99) gid=1521(dba) groups=1521(dba),1522(ticgroup)
context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
-bash-4.1$ pwd
/export/home/rtp99/ContextClearingScript_RMS

REDKNEE

for internal use only

156 of 163

REDKNEE

8 Annex

-bash-4.1$ execRTPenv ./pcsContextAppRem_sol -t 151 -t 158 -t 160 -t 161 -t 162 -t 163 -t


165 -t 167

-t 168

-t 169

-dA

Command to start ContextAppRem is


./pcsContextAppRem_lin -t 151 -t 162 -t 163 -t 165 -t 160 -dA
Input command ac = 12
The number of ContextTypes are 0Context tpye to add is 151
Adding Context Type :151
The number of ContextTypes are 1Context tpye to add is 162
Adding Context Type :162
The number of ContextTypes are 2Context tpye to add is 163
Adding Context Type :163
The number of ContextTypes are 3Context tpye to add is 165
Adding Context Type :165
The number of ContextTypes are 4Context tpye to add is 160
Adding Context Type :160
Context Delete All Enabled
Successfull in initialization of Context Type
NOTE : Manual delete of Context will result in invalid Counters related to Context 151
RtpCtxNamesSetup Success exists = 1 desc_cnt = 1000 desc_used = 0
delete all failed for contextType 151
NOTE : Manual delete of Context will result in invalid Counters related to Context 162
RtpCtxNamesSetup Success exists = 1 desc_cnt = 2000 desc_used = 0
delete all failed for contextType 162
NOTE : Manual delete of Context will result in invalid Counters related to Context 163
RtpCtxNamesSetup Success exists = 1 desc_cnt = 4290000 desc_used = 0
delete all failed for contextType 163
NOTE : Manual delete of Context will result in invalid Counters related to Context 165
RtpCtxNamesSetup Success exists = 1 desc_cnt = 4290000 desc_used = 0
delete all failed for contextType 165
NOTE : Manual delete of Context will result in invalid Counters related to Context 160
RtpCtxNamesSetup Success exists = 1 desc_cnt = 3000 desc_used = 1
RtpCtxNamesData Success :: done = 1
next_curDesc = 0
valid = 1
-bash-4.1$

Now verify whether all the context values are cleared using below command as rtp99 user on
Node2, all context should be zero(0).
execRTPenv RtpCtxMiniTool64 L
Example:
CTXTYPE
USED DESC
SHM MAX SHM
"051" 51
"052" 52
"151" [unknown]
"159" [unknown]
"160" [unknown]
"161" [unknown]
"162" [unknown]
"163" [unknown]
"164" [unknown]
"165" [unknown]
"167" [unknown]

REDKNEE

7
31
0
0
0
0
0
0
0
0
0

LIM DESC

MAX DESC

USED SHM

BLK SZ LIM

100000
7
7
1024 100000
7
100000
36
31
1024 100000
36
1000
0
0
1024
1000
0
1000
0
0
1024
1000
0
4000
1
0
5120
4000
1
4400
23
0
4096 40960
184
2000
1
0
1024
2000
1
2250000
90691
0
6144 2250000
90691
110000
0
0
2048 110000
0
2250000
90674
0
512 2250000
90674
2250000
90656
0
2048 2250000
90656

for internal use only

157 of 163

REDKNEE
"168" [unknown]
"169" [unknown]

8 Annex
0
0

100
4000000

60

0
0

1024
100
65
256 4000000

Manual delete of Context will lead to inconsistent counters and requires manual reset,
follow below steps to reset the counter value.
Step1:
Open browser with https://<IP_Address:8099>, where IP_Address is PCS node2 Core IP
Address, you will see the following screen.

REDKNEE

for internal use only

158 of 163

REDKNEE

8 Annex

Step2:
Click Login button, login with user id superad, will open the new window like below

Figure 7-1 TSP GUI shows after login

REDKNEE

for internal use only

159 of 163

REDKNEE

8 Annex

Step3:
In left side panel click Performance Management, then click Statics Counters in left
side panel, it is marked in red color. This will open show all the counter names, values,
etc.

Now type the counter name in right top corner which you want to reset the value as it is
marked in blue color in above figure, example counter name should be typed in below
format and press enter key.
Format:
%Counter name%
Example:
%TCurrentlyExisting%

REDKNEE

for internal use only

160 of 163

REDKNEE

8 Annex

Step4:
It will filter and show particular counter as it is shown in below figure, then click the reset
button as shown in below figure.

Step5:
After the reset, particular counter value will become zero(0).

Repeat these steps 3 to 5 for the counters TCurrentlyExisting, GxBearersCurrExis,


GxSessCurrExis, RxSessCurrExis, SubsCurrentlyExis and ActiveContexts.
REDKNEE

for internal use only

161 of 163

REDKNEE

8 Annex

Then go and say to [C]continue to proceed the upgrade.

8 Annex
8.1 Screen trace for Install Server Cleanup
https://sharenet-ims.inside.nokiasiemensnetworks.com/livelink/livelink/418939078/ISCle
an-Screen-Trace.txt?func=doc.Fetch&nodeid=418939078&vernum=1&viewType=1

8.2 Screen trace for Media extraction


https://sharenet-ims.inside.nokiasiemensnetworks.com/Guest/Open/D488813562

8.3 Screen trace for Media installation


https://sharenet-ims.inside.nokiasiemensnetworks.com/Guest/Open/D488812760

8.4 Screen trace of install PCSEP package


Not Available

8.5 Screen trace of PCS Cluster


Node 1:
https://sharenet-ims.inside.nokiasiemensnetworks.com/Guest/Open/D494884254
Node 2:
https://sharenet-ims.inside.nokiasiemensnetworks.com/Guest/Open/D494874063

8.6 Screen trace of PCS Single Node FE


8.6.1 Sun Netra T5220 sinlge Node
Not Available

REDKNEE

for internal use only

162 of 163

REDKNEE

8 Annex

8.7 Screen trace of PCS and PCSmgr RSU on Cluster


8.7.1 Upgrade to PCS6.3
Not available.

8.7.2 Upgrade Fallback


Not Available

8.8 Sample datainit file for RSU


Not Available

REDKNEE

for internal use only

163 of 163