A1300 CMC 10
ADMINISTRATION GUIDE
OPERATOR GUIDE
Mnemonic NM2ADM
All rights reserved. Passing on and copying of this document, use and
communication of its contents not permitted without written authorization
from Alcatel-Lucent.
Contents
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
1 Introduction to technical concepts and principles of the document . . . . . . . . . . . . . . . . . . . . . 15
1.1 Command conventions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
1.2 Role of the administrator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
1.3 Host name . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
1.4 System management processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
1.5 Supported NMC2 hardware configurations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
1.6 Services in distributed architectures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
1.6.1 DSS-HMS architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
1.7 Server shutdown and restart . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
1.7.1 Shutting down and rebooting a server . . . . . . . . . . . . . . . . . . . . . . . . . . 20
1.7.2 Shutting down a server completely . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
1.7.3 Restarting a server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
1.7.4 Shutting down or restarting a server gracefully . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
1.8 Maintenance tools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
2 Online user documentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
2.1 Installing the ADES documentary collection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
2.2 Removing the ADES documentary collection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
3 General NMC2 tasks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
3.1 Accessing the TMN-OSs window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
3.2 Managing an NMC2 instance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
3.2.1 Starting permanent processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
3.2.2 Stopping permanent processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
3.2.3 Launching the system configuration tool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
3.2.4 Viewing information about an NMC2 instance . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
3.2.5 Defining the maximum number of operators logged in . . . . . . . . . . . . 28
3.3 Setting a personal printer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
3.4 Synchronizing systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
3.5 How to change the system date and time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
4 Managing users . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
4.1 Accessing the operator administration application . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
4.2 Defining your own “User password policy” . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
4.2.1 Password Aging Policy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
4.2.2 Password Format Policy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
4.3 Creating a user . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
4.4 Password definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
4.5 Updating a user . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
4.6 Deleting a user . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
4.7 Changing the password of a user . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
4.8 Assigning a new password in case of loss . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
4.9 Activating an account with expired password . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
4.9.1 Procedure for PARISC servers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
4.9.2 Procedure for ITANIUM servers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
5 Managing sessions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
5.1 Accessing the session management application . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
5.2 Customizing the workstations area . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
5.3 Broadcasting a message . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
5.4 Viewing good logins . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
5.5 Viewing bad logins . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
5.6 Locking the workstation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
5.6.1 Manual lock . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
5.6.2 Automatic lock . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
Appendix C: How to access the MP interface for ITANIUM servers . . . . . . . . . . . . . . . . . . 189
C.1 How to configure iLO 2 to MP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 190
C.2 How to access the MP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 191
Abbreviations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 193
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 195
Figures
Figure 1: Example of the system response to a command . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
Figure 2: Maintenance directory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
Figure 3: Device Name of source media . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
Figure 4: Collection type . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
Figure 5: TMN-OSs Management window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
Figure 6: System information window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
Figure 7: Maximum operators management window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
Figure 8: Set personal printer window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
Figure 9: Operators Administration window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
Figure 10: Warning of double launch of Operators Admin window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
Figure 11: System Management window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
Figure 12: System Management window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
Figure 13: Auditing and Security Attributes Configuration Tool window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
Figure 14: Auditing and Security Attributes Configuration Tool window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
Figure 15: Auditing and Security Attributes Configuration Tool window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
Figure 16: Create User dialog box . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
Figure 17: Update a user dialog box . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
Figure 18: Delete current user dialog box . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
Figure 19: Change Your Password dialog box . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
Figure 20: EFI main window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
Figure 21: EFI booting prompt . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
Figure 22: Session Management window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
Figure 23: Workstations sub-menu . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
Figure 24: Broadcast Message dialog box . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
Figure 25: Example of a Broadcast Message window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
Figure 26: Good Logins dialog box . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
Figure 27: Lock Screen dialog box . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
Figure 28: Lock options dialog box . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
Figure 29: Unlock Screen dialog box . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
Figure 30: Force Logout dialog box . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
Figure 31: Process Monitoring Control main window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
Figure 32: Menu and toolbar area of the PMC tool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
Figure 33: Work area part of the PMC main window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
Figure 34: Node health status . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
Figure 35: Managed NMC2 instance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
Figure 36: Example of a PMC group list . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
Figure 37: Example of a PMC agent list . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
Figure 38: Information on a PMC agent . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
Figure 39: Select agent trace file window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
Figure 80: Q3ES: Log FTAM Parameters dialog box: General Panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
Figure 81: Q3ES: Log FTAM Parameters dialog box: Parameters Panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
Figure 82: Q3ES: Log FTAM Browsing window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
Figure 83: Q3ES: Log FTAM Information window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137
Figure 84: Q3ES: Log Management window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
Figure 85: Q3ES Informations window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140
Figure 86: Q3ES: EFD Management window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140
Figure 87: BDH Administration window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144
Figure 88: Example of a BDH: OSS Management main window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
Figure 89: BDH: OSS Management window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146
Figure 90: BDH: OSS Declaration dialog box (OSS Panel) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
Figure 91: BDH: OSS Declaration dialog box (USER Panel) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148
Figure 92: BDH: Bulk Management window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149
Figure 93: BDH: Bulk Declaration dialog box (Bulk Panel) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
Figure 94: BDH: Bulk Declaration dialog box (Files Panel) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152
Figure 95: BDH: File Filter dialog box . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154
Figure 96: BDH: File Management window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154
Figure 97: BDH: Bulk Management window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156
Figure 98: Open Connection dialog box . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162
Figure 99: Connection Settings dialog box . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163
Figure 100: NE Os1300nmc creation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169
Figure 101: NE Os1300nmc creation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169
Figure 102: SMH Home tab . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180
Figure 103: Cooling status . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 181
Figure 104: Cooling events . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 181
Figure 105: Disks status . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 182
Figure 106: Mark telnet . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 184
Figure 107: “mozex preferences” window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 184
Figure 108: MP main menu . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 190
Figure 109: MP configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 191
Figure 110: Connect to MP interface window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 191
Figure 111: Commands list of the MP interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 192
Tables
Table 1: Services in distributed architectures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
Table 2: Levels of dependencies among groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
Table 3: Functional states . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
Table 4: PMC icons . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
Table 5: PMC command line parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
Table 6: Origin of the available log types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
Table 7: Information traced . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
Table 8: Secure Shell Commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164
Table 9: Alarms troubleshooting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172
Preface
Purpose This document describes the tasks to be done by the system administrator to
manage and monitor the Alcatel-Lucent 1300 CMC E10 processes.
This document concerns the Alcatel-Lucent 1300 CMC E10 from rel. 1.3.3.
Audience This document is intended for the Alcatel-Lucent 1300 CMC E10 administrator.
It is recommended to have some knowledge of the UNIX environment.
This document does not replace training on Alcatel-Lucent 1300 CMC
E10 administration.
Chapter 15 describes the access to the FTP, telnet and rlogin services.
Chapter 16 describes the supervision of the NMC2 system.
Chapter 17 describes the configuration of Mozilla for telnet (mozex plugin).
Appendix A gives the procedure to access the web console.
Appendix B gives the procedure to extend partitions manually.
Appendix C gives the procedure to configure and access the MP interface for
ITANIUM servers.
Abbreviations gives the meaning of the acronyms and abbreviations used in this
document.
Index gives another way to get information from a word or a concept.
Typographical conventions The following typographical conventions are used in this guide:
text in italics: document titles, window names and field values
text in bold-italics: menu items, buttons and field names
text in bold: important information, headings of lists and paragraphs that are
not numbered
Terminological conventions The following terminological conventions are used in this guide:
NMC2 is often used for short instead of Alcatel-Lucent 1300 CMC E10.
<NMS_INSTANCE_DIR> is a term which represents the product root directory:
/usr/Systems/NMC2_1
There is no way for the administrator to modify the host name, except by doing a
new installation.
Each of the processes that constitute the NMC2 treats a particular functionality of
the system. Three types of processes can be identified:
General processes
They correspond to the functional parts of the operation. They control the
data processing and the data persistency. Some of these processes are started
automatically on launching of the system. They run independently of the user
session and are controlled through PMC process management features (Local
Registration File) and by the NMC2 application manager PMC.
PMC manages single processes, groups of processes or subsystems. A group
of processes is a set of dependent processes. If a process of the group is
launched then all the other processes of this group are also launched. If it is
stopped, the group stops. A subsystem is also a group of processes but they
are independent. When one of them crashes, the others are not affected.
Presentation processes
They control the Human Machine Interface (HMI) of an application. The
presentation processes are started at user login, on explicit request by the user
or as a result of navigation between different functions of the system. The
processes that can be started by a user depend on his/her access rights.
These processes have a lifespan contained in a user session.
The failure of a presentation process is reported to the operators using that
particular presentation process. Generally, the failure of a presentation process
results in the disappearance of the graphical user interface associated with it.
Presentation processes are not supervised by PMC. They cannot be restarted
automatically. In the case of a failure, they have to be restarted manually.
Note: When a presentation process fails it is not always stopped. The user has to check
if it is still running and, if necessary, kill it.
Monitoring processes
They are the processes that monitor the general processes composing the
NMC2. They detect the failed processes and report the failure to the NMC2
system administrator via PMC USM.
The allocation of the general processes on the servers is recorded in a PMC
configuration file that can be edited. This file contains information on the different
processes. It is not recommended to modify this file.
Workplaces Workplaces (WP) represent PCs intended for operators (OWP) and administrators
(AWP). A WP can display the graphical views of the USM that is running on the
servers.
An OWP is a PC with a UNIX emulation (Exceed and/or Metaframe and/or
GoGlobal).
An AWP is a PC with a UNIX emulation (Exceed and/or Web Console and/or MP
interface and/or GoGlobal).
Connecting the Workplaces An OWP is always installed with the IP address of the HMS on which it depends.
In DSS-HMS configuration, an AWP must be installed with the IP address of the
ENMS.
Specific ENMS Services Some services are available only from the ENMS. They are listed hereafter:
printer configuration (see section 11.1)
X25 configuration (see section 11.2)
Q3ES administration (see chapter 12)
NSAP address configuration (refer to the Alcatel-Lucent 1300 CMC E10 Map
Management operator guide, NM2MAPGOP)
Server Management In order to know the status of the servers, the administrator must declare, and
then supervise, them by means of the map management application, from the AWP.
In the case of a high availability configuration, the servers must be declared with
their real IP address (not the virtual address of the cluster).
The management of the system processes is done by using the Process Monitoring
Control (PMC) application, which enables the supervision of each NMC2 server. It
can be used from any WP by the administrator, but only if the ENMS is running.
Operators who have access rights to UNIX use a normal UNIX command shell
(called command shell in this document), where commands are typed.
All the messages forwarded to the console are saved in log files (e.g. syslog.log).
They can be visualized by the administrator, by using a text editor.
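Besides a text editor, such log files can be scanned with standard UNIX text tools. A minimal sketch, using a hypothetical sample file and message format (the real syslog.log entries and their location depend on the platform):

```shell
# Create a small sample log (hypothetical format) and filter it.
printf '%s\n' \
  'Jan 01 00:00:01 enms vmunix: startup complete' \
  'Jan 01 00:00:02 enms vmunix: ERROR disk c0t0d0' \
  > /tmp/syslog.sample

# Keep only error messages, as an administrator might when scanning a log.
grep ERROR /tmp/syslog.sample
```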
The following table summarizes the available services on each server in different
architectures. The optional applications are Traffic management, Trouble
Ticketing and Software management.
SSS
ENMS: all CMC E10 functions (CMC E10 core and optional applications)
HMS: all CMC E10 functions except SWM. The number of HMS servers can
be 1 or 2.
Profile declaration
In this architecture, a new security profile must be declared from the ENMS via
SEC USM.
User declaration
A new user must be declared from the ENMS via SMF USM (see chapter 4.3).
Note: While declaring users in SMF USM, the administrator must pay attention to which
server the user will be authorized to log in.
Changing password
To change his/her password in this architecture, the user must change it from the
HMS server via the Global Actions menus (see chapter 4.7).
After a crash, pay attention to the messages indicating a disk corruption. If fsck is
required, that means the integrity of the NMC2 application can no longer be
assured. In such a case it is recommended to perform a full system restore.
In all these cases, after a server reboot, the third party software products are
restarted automatically.
The NMC2 processes have to be relaunched manually by using the Process
Monitoring function.
All the stages in the relaunching of the server processes are displayed in trace files
that store all the events occurring in the process of the server.
6. Three collection types are offered: NEDOC, OSDOC or NIL. Choose the
collection type you are going to install.
Enter the collection type you are going to install or NIL: OSDOC (for example)
There can be only one collection of type "OSDOC" installed. The collection is
copied from the CD-ROM to the disks.
7. If you choose "NEDOC", a list of NE Types and Releases is displayed. This
list corresponds to the list of NE types and Releases PNM is configured with.
Choose the neType/neRelease associated with the collection you are installing.
This procedure also covers the special case when a documentary collection
gathers the documentation of several NE types and releases, meaning you have to
launch the remove_collect script only once.
The window is divided horizontally into two areas: the OSs area and the message
area. By default only the OSs area is shown.
OSs area The OSs area shows the NMC2 instances. The instance can have the following
colors:
green: all permanent processes of this instance are running
red: instance is running partially (at least 1 process is down)
blue: instance is stopped (all processes are stopped)
yellow: instance is in an indeterminate state
Message area By default the Message area is not shown. To view the message area, go to menu
View -> Message area -> Show.
The message area shows the results of performed actions. Each message is
stamped with the date and time (year/month/day hour:minutes:seconds).
Processes can also be launched from a script (e.g. launchSMFwithNAV) in a
command shell. The scripts are gathered into the
<NMS_INSTANCE_DIR>/script directory.
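Assuming the product root directory given in the terminological conventions, launching such a script from a command shell could look like the sketch below; the script name is taken from the example above, so adjust it to your installation:

```shell
# Product root directory, as defined in the terminological conventions.
NMS_INSTANCE_DIR=/usr/Systems/NMC2_1

# Full path of a script gathered in the instance script directory.
SCRIPT="$NMS_INSTANCE_DIR/script/launchSMFwithNAV"
echo "$SCRIPT"
# On a real NMC2 server you would launch it in the background: "$SCRIPT" &
```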
For more information on the TMN OSs window, refer to the Alcatel-Lucent 1300
CMC E10 Getting Started operator guide, NM2GSGOP.
2. In the New value field, enter the new value for the maximum number of
users that can be logged simultaneously. To enter the default value, you can
also click the default button.
3. Click Apply to validate your selection.
4. Click Close to exit.
MM : month
DD : day
HH : hour
mm : minutes
AA : year
If you get the following question, type yes:
date: do you really want to run time backwards? [yes/no]
Example:
…,sys,root # date 0601010008
date: do you really want to run time backwards? [yes/no] yes
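The argument passed to date is simply the concatenation of the fields listed above; a minimal sketch, using the field values from the example (the month field is an assumption drawn from the example, since the legend abbreviations are two-character pairs):

```shell
# Build the date argument from its fields (values from the example above).
MM=06   # month (assumed field, by analogy with the example)
DD=01   # day
HH=01   # hour
mm=00   # minutes
AA=08   # year
ARG="${MM}${DD}${HH}${mm}${AA}"
echo "$ARG"   # 0601010008
```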
4 Managing users
This chapter describes:
how to access the operator management tool (see section 4.1)
how to create a user (see section 4.3)
how to update a user (see section 4.5)
how to delete a user (see section 4.6)
how to change the password of a user (see section 4.7)
Note: System users must be defined before being used in the Security Management.
User management does not cover the users of Trouble Ticketing, PEX and PC SW.
You can also change the appearance of the Workstation area (see section 5.2).
User management icons The meaning of the icons specific to the user management is given, as follows,
(from left to right in the icon bar):
Create User,
Update User,
Delete User,
In order to ensure user data consistency, when this application is already open
and you try to launch it again, the following error message is displayed:
Double-click on "System Security Policies"; the "Auditing and Security
Attributes Configuration Tool" window is displayed:
Each NMC2 administrator can create other user profiles (see NM2SECGOP for
more details), and can create other user accounts.
To create a user, proceed as follows:
1. In the User Management window either:
go to menu User -> Create or
In the current release, clicking the More on Profiles... push button has
no effect.
Any user must be present at least on the ENMS server, so do not remove the ENMS
server from the Present on Ws list.
Tick the Continue in the case of problems button if you
want the creation to continue whatever happens. If the creation of the
user fails on one server, it will still be performed on all the others.
5. Click either Apply to create the new user or Close to close the window
without making any changes.
Trusted-UNIX is configured The following rules are added to the SMF rules:
The password must have a minimum length of 7 characters, with at least 2
alphabetic characters, at least 1 numeric character and at least 1 special
character.
The special characters must be different from blank and #.
Each user is invited to change his/her password after 90 days.
The creation of new users whose passwords are not compliant with the SMF rules
is rejected immediately.
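As an illustration, the SMF password rules above can be checked with a small POSIX shell function. This is only a sketch: the `check_smf_password` name is ours, not part of the product, and the product enforces these rules itself.

```shell
#!/bin/sh
# Check a candidate password against the SMF rules listed above:
# minimum length 7, at least 2 alphabetic characters, at least 1 numeric
# character, at least 1 special character (neither alphanumeric, blank nor #).
check_smf_password() {
  pw=$1
  [ ${#pw} -ge 7 ] || return 1
  alpha=$(printf '%s' "$pw" | tr -cd '[:alpha:]' | wc -c)
  [ "$alpha" -ge 2 ] || return 1
  num=$(printf '%s' "$pw" | tr -cd '[:digit:]' | wc -c)
  [ "$num" -ge 1 ] || return 1
  # strip alphanumerics, blanks and '#'; anything left counts as special
  special=$(printf '%s' "$pw" | tr -d '[:alnum:] #' | wc -c)
  [ "$special" -ge 1 ] || return 1
  return 0
}
```

Note that '#' is explicitly excluded, so a password whose only special character is '#' fails the check, as stated in the rules.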
The creation of new users whose passwords are not compliant with the Trusted-
UNIX rules is not rejected immediately. During the creation, the following messages
appear: "passwd of user not updated to some workstations ..." and "default configured
password is invalid. It's also rejected. Password could be updated by adminis-
trator". Finally, the user is created and appears in the operator list, but his/her
login is refused with the message "login incorrect. Please, try again". In such a
situation, you must delete the user login and create it again with another password
that respects the Trusted-UNIX rules, or perform a user update (as described in
the following chapter).
This window is the same as the Create user window. The contents of all the
areas can be updated (refer to section 4.3), except the User Login field. The
only way to update it is to delete the user account and create a new one with
a new login name.
Updating a user profile does not modify the ’security profile’ information in the
front panel (refer to the Alcatel-Lucent 1300 CMC E10 Getting Started operator
guide - NM2GSGOP).
Before deleting a user, you must first delete all the resources (alarm aging, etc.)
belonging to this user.
Case of root When the root password is lost, the solution is specific to the operating system.
Follow the procedure provided by the underlying platform for this situation.
# fsck -y
# mount -a
Execute the command: /usr/lbin/modprpw -l -k root
Execute the command: /usr/lbin/modprpw -v root (in order to refresh other
passwords).
Execute the command: init 4 in order to start the server in multi-user mode.
In the EFI window select the Primary device as boot device (this is the default
choice).
You will get the EFI auto boot window:
Press any key in order to stop the auto booting
After getting the HPUX> prompt, issue the command "hpux -is"; this will
boot the system into single-user mode.
Enter the following commands:
# fsck -y
# mount -a
Execute the command: /usr/lbin/modprpw -l -k root
Execute the command: /usr/lbin/modprpw -v root (in order to refresh other
passwords).
Execute the command: init 4 in order to start the server in multi-user mode.
5 Managing sessions
This chapter describes:
how to access the session management tools (see section 5.1)
how to customize the workstations area (see section 5.2)
how to broadcast a message (see section 5.3)
how to view the good logins (see section 5.4)
how to view the bad logins (see section 5.5)
how to lock the workstation (see section 5.6)
how to unlock the workstation (see section 5.7)
how to force the user logout (see section 5.8)
The Workstations area shows all OS workstations and remote terminals that are
connected. OS servers are labeled with their name (e.g. ornmos56) while remote
terminals are labeled with the OS workstation name, “@” and their network ad-
dress (e.g. ornmos56@139.54.93.253:0).
The Sessions area shows all the sessions on all the servers, with the following
attributes:
Login: user login name
From: OS server name or network address from which the user is logged
Login Date: date and time of the login start
Terminal: terminal type (console for OS server or dtremote for remote termi-
nals)
Session management pull-down menus The menu bar of the Session Management window contains 3 session
management-specific menus:
Workstation:
to broadcast a message (see section 5.3)
to view the good logins (see section 5.4)
to view the bad logins (see section 5.5)
to lock the workstation (see section 5.6)
to unlock the workstation (see section 5.7)
You can also change the appearance of the Workstation area (see section 5.2).
Session management icons The meaning of the icons specific to the session management is as follows (from
left to right in the icon bar):
Workstation -> Show Good Logins
Workstation -> Show Bad Logins
2. The Logins area shows all successful logins on the selected servers, with the
following attributes:
Workstation: server name
Login: user login name
From: station from which the user is logged
Terminal: terminal name
Login Date: date and time of the login start
Logout Date: date and time of the logout
Status: true or false
An operator cannot lock his/her own workstation; attempting to do so generates
an error message.
5. To dismiss the dialog box, click OK. To cancel the operation, click Cancel.
Note: Some privileged users cannot be forced to logout. The list of privileged users (e.g.
axadmin, root) is defined in a configuration file at the integration time.
To force the logout of a user, proceed as follows:
1. In the Session Management window, select a server in the Workstations area.
2. In the Sessions area, select a user login.
3. Go to menu:
Users -> Force Logout
The following dialog box opens (see figure 30):
This dialog box lists all the processes launched by the selected user and
running on the selected presentation server.
4. Click either Ok to confirm or Cancel to cancel the operation.
Level Group
5 Group Z
4 Group Y
3 Group X
2 System RM
1 Database
Thus, starting Group X (level 3) requires that the groups System RM (level 2)
and Database (level 1) are running. If they are not running, PMC starts them au-
tomatically. Stopping Group X requires that Group Y (level 4) and Group Z
(level 5) be stopped. If they are running, PMC stops them first automatically.
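The level rule above can be sketched as a small shell helper. This is illustrative only: group names are written with underscores here, and `groups_to_start` is not a PMC command.

```shell
#!/bin/sh
# Levels as documented: 1 Database, 2 System RM, 3 Group X, 4 Group Y,
# 5 Group Z. Starting a group requires all lower-level groups to be running;
# stopping it requires all higher-level groups to be stopped first.
groups_to_start() {   # print the groups of level 1..$1, bottom-up
  level=$1
  awk -v max="$level" '$1 <= max { print $2 }' <<EOF
1 Database
2 System_RM
3 Group_X
4 Group_Y
5 Group_Z
EOF
}
```

For example, `groups_to_start 3` prints Database, System_RM and Group_X in the order in which PMC would have to start them.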
6.3 Security
Only the users who have administrator rights can access the PMC tool.
Icon Description
Shows an information window that contains static data about the selected item.
The same information is shown in the Info Area of the PMC window. When no
data is available for an item, the warning "IM not running. No info available for
host [name of the host]" is displayed.
Node health status The following figure 34 shows the node health status part of the status area.
Node CPU charge. Two thresholds, warning and critical, can be set.
The icon color is either:
green (charge is smaller than the warning threshold),
yellow (charge is between the warning and critical thresholds) or
red (charge is higher than the critical threshold).
A yellow triangle replaces the Node CPU icon when no information is available
for the machine.
A yellow triangle replaces the Node file system icon when no information is avail-
able for the machine.
Managed NMC2 instance The following figure shows the managed NMC2 instance.
NMC2 instance state. When you select the NMC2 instance state icon,
the information area displays:
the node name
load averages
total number of processes and the number of sleeping/running/zombie
processes
memory occupation: real memory (kbytes), virtual memory (kbytes) and
free memory (kbytes)
file system occupation: the total space (Mbytes), the available space
(Mbytes) and the usage (%) of each directory.
Control state. When the control is not running, the eye has a red cross
on it. When this icon shows a yellow triangle with an exclamation mark inside,
the PMC server side is not running. The machine state icon is also affected by
this situation, showing red.
Group list The group list consists of one icon and a label for each defined group. The label
shows the name supplied in the PMC server side configuration. See figure 36 for
an example of a group list.
You can select groups one by one by clicking on a group. To select more than one
group, keep the CTRL key pressed and click on the groups you want to select. To
select the groups between the first item and last item selected, keep the SHIFT key
pressed and click on the first group you want to select, then on the last group. All
groups between these ones will be included in the selection.
Agent list The agent list consists of one icon and one label for each defined agent. The
label shows the name supplied in the PMC server-side configuration. For an
example of an agent list, see figure 37.
Child list The agents can have subsidiary agents as child objects. The child objects act the
same way as the agents.
Command Description
startup_group <group_name> Starts the named group and all the groups required by its
dependencies.
shutdown_group <group_name> Stops the named group and all the groups required by its
dependencies.
When the control state is ON, a warning message "Control is active" will be dis-
played. To start the item, click Yes. To cancel the starting of the item, click No.
When the control state is ON, a warning message "Control is active" will be displayed.
a) To stop the process control, click Yes:
the control icon is removed from the work area. The information
area shows that the state of the control function is OFF.
b) To cancel the stopping of the process control, click No .
1. Go to menu:
Actions -> Synchronize
2. A confirmation window about the reload of the configuration appears.
a) To execute the command, click Yes.
b) To cancel the execution of the command, click No.
3. If you clicked Yes, the hierarchical views of the NMC2 instance are closed
and the work area only displays the node icons and a yellow NMC2 instance
icon which indicates an unknown functional state and that the PMC IM is not
running. The NMC2 instance will be opened again automatically.
The basic information on an agent is also available in the information area of the
PMC main window (where it is dynamically updated).
If no trace file is available for an agent, an error message “No trace available for
this item” appears.
4. A window containing the selected trace file opens (see figure 40).
This trace file view is dynamic and when opened, it displays the latest 100
lines. It can display up to 8000 bytes of data. When the file size exceeds the
maximum value, the older bytes are discarded.
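This keep-only-the-newest-bytes behaviour is what `tail -c` provides; below is a minimal sketch of the 8000-byte partial view. The `partial_view` helper is illustrative, not a product command.

```shell
#!/bin/sh
# Emulate the dynamic trace view: show only the newest MAX_BYTES of a file,
# so the oldest bytes are dropped when the file grows beyond the limit.
MAX_BYTES=8000
partial_view() {
  tail -c "$MAX_BYTES" "$1"
}
```

For a trace file larger than 8000 bytes, `partial_view file.trace` therefore outputs exactly the last 8000 bytes, ending with the most recent lines.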
5. To view all the lines of the trace file in a static view, click Total.
6. In this example an information message appears saying that only 2 MB of
the file can be shown (see figure 41). To dismiss the window, click OK.
Figure 41: Trace file too big window
You can now view all the lines by using the scroll bar of the window.
7. To view the dynamic partial view of the maximum number of bytes, click
Start.
8. To stop the updating of the dynamic partial view, click Stop.
9. To close the window, click Close.
The PMC2 log file view is dynamic, and when opened, it displays the latest
100 lines. In dynamic view, it can display up to 8000 bytes of data. When
the file size exceeds the maximum value, the older bytes are discarded.
3. To view all the lines of the log file in a static view, click Total.
4. In this example an information message appears saying that only 2 MB of
the file can be shown (see figure 43). To dismiss the window, click OK.
Figure 43: Log file size too big window
You can now view all the lines by using the scroll bar of the window.
5. To view the dynamic partial view of the maximum number of bytes, click
Start.
6. To stop the updating of the dynamic partial view, click Stop. You can then
view the log file again in a static view.
7. To close the window, click Close.
3. To set the threshold for either CPU or Disk, click on the corresponding panel.
4. To set the warning threshold, place the mouse pointer on the warning thresh-
old button and drag it to the desired value.
The warning threshold value must be smaller than the critical threshold value.
5. To set the critical threshold, place the mouse pointer on the critical threshold
button and drag it to the desired value.
6. To confirm the thresholds set, click Apply.
7. To close the window, click Close.
Data Replication replicates the backup data files on the stand-by host, and it is
possible to restore them after a switchover.
There are no restrictions on the Data Replication status during a backup action.
To restore a generic kind of data, the following procedure must be followed:
a) If needed, set the switchover mode to Manual
b) If needed, stop the Data Replication
c) Perform the Restore procedure, taking into account the following advice:
Historical list
Q3ES data
archives
Note: You may want to schedule your own scripts and/or macro-commands. In that
case, you can use crontab files under UNIX (see section 8.7) or the NECTAR OS
Calendar (see section 8.8).
This section describes:
how to access the scheduler management tool, see section 8.1
how to create a plan, see section 8.2
how to edit a plan, see section 8.3
how to validate a plan, see section 8.4
how to stop a plan, see section 8.5
how to delete a plan, see section 8.6
how to use crontab files, see section 8.7
how to use the NECTAR OS calendar, see section 8.8
The Plan list area displays all the servers. Double-click on a server icon to show
or hide the associated plans.
The Plan details area gives the parameters of the plan that is selected in the Plan
list area.
After logging in, you have full access (modify/delete) to the periodic or punctual
actions you previously created, but not to other users' plans.
Scheduler window pull-down menu The menu bar of the Scheduler window contains one Scheduler-specific menu
(Plan) enabling you to:
create a new plan
edit a plan
validate a plan
stop a plan
delete a plan
You can also change the appearance of the Workstation area (see section 5.2).
Scheduler management icons The meaning of the icons specific to the scheduler management is as follows
(from left to right in the icon bar):
create new scheduler plan icon , task also available from menu Plan
-> New plan
edit a scheduler plan icon , task also available from menu Plan ->
Edit plan
validate scheduler plan icon , task also available from menu Plan ->
Valid plan
stop scheduler plan icon , task also available from menu Plan -> Stop
plan
delete a scheduler plan icon , task also available from menu Plan ->
Delete plan
2. In the Plan name area, enter a name in the text entry field.
3. In the Plan parameters area:
Use the Host name option button to select the server on which the
plan is to be executed.
If the execution is to be periodic, tick the Repeat delay check box, then
fill in the Day(s), Hour(s), Minute(s) and Second(s) fields.
To suspend the execution during the weekend, tick the Not on week-end
check box.
The state of the new plan is inactive (this is indicated by a blue cross). To activate
the execution of the plan, you must validate it first (see section 8.4).
2. Modify any parameter except the Plan name and the Host name,
which are greyed out.
3. Click either Apply to validate or Close to cancel the operation and close
the dialog box.
Note: If the plan was active at the time of the modification, it is automatically stopped.
2. In the Plan list, a green arrow appears on the left of the selected plan name,
as shown in figure 49.
2. In the Plan list, a blue cross appears on the left of the selected plan name,
as shown in figure 50.
Note: A command launched by the scheduler can only be stopped by using UNIX com-
mands (kill) from a terminal window.
Each field of the command is separated by a space. Each field (v, w, x, y, z) may
contain several values separated by commas. The * character means all the values.
Javascript macro-command files To directly execute a javascript macro-command file (essai.mjs for instance,
located in the /users/name directory), use the following script:
<NMS_INSTANCE_DIR>/switchmml/script/mmlmacroinline -f
/users/name/essai.mjs
After execution, the result of the macro-command is sent by the cron process in
the user’s mailbox.
* 1,2,3 * 5 * <NMS_INSTANCE_DIR>/switchmml/script/mmlmacroinline
-f /users/name/command.mjs: launches the command.mjs
macro-command every minute from 1:00 to 3:59 every day of May.
* * * * 2 <NMS_INSTANCE_DIR>/switchmml/script/mmlmacroinline
-f /users/name/command.mjs: launches the command.mjs
macro-command every Tuesday.
* * 5 * * <NMS_INSTANCE_DIR>/switchmml/script/mmlmacroinline
-f /users/name/command.mjs: launches the command.mjs
macro-command on the 5th day of each month.
Note: When you plan to launch commands during the night, do not trigger all of them
at midnight, in order to prevent an overload.
More information about You can get more information about the crontab commands by entering:
crontab man crontab
(*) only the name of the main log file is given, but it is also true for the overflow
logs that have the same name and a ".log.old" extension.
(**) for more information about the ftp.log format, refer to the HP-UX documen-
tation.
All the log files are located on the server where they have been generated.
Life cycle of logs Figure 52 presents the life cycle of a log file.
The IM and USM create current log files. Each log file has a predefined size.
When a log file exceeds its predefined size, it is renamed and becomes an
overflow log file. These files have the same name as the original log plus
the extension ".old" (e.g. lss.log and lss.log.old). Log types for text logs are
"Command Log", "System Log" and "LogEngineTrivial". The logging continues
with the current log file.
When the administrator is informed that an overflow log file has been created,
he/she has to move this file to an area on the master server: the overflow log
file becomes an archived log file. Archived log files are compressed using
gzip.
Current log and overflow log files are stored in <NMS_INSTANCE_DIR>/maintenance/log
Archived log files are stored on the master server in <NMS_INSTANCE_DIR>/maintenance/log
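The rename-on-overflow step of this life cycle can be sketched in shell. The `rotate_log` helper and the size limit are illustrative; the product performs this step internally, and the archive step (moving the .old file to the master server and compressing it with gzip) is done later by the administrator.

```shell
#!/bin/sh
# Sketch of the life cycle described above: when a current log exceeds its
# predefined size, rename it to <name>.old (the overflow log) and let logging
# continue in a fresh, empty current log.
rotate_log() {        # rotate_log <logfile> <max_bytes>
  file=$1 max=$2
  size=$(wc -c < "$file")
  if [ "$size" -gt "$max" ]; then
    mv "$file" "$file.old"   # e.g. lss.log -> lss.log.old
    : > "$file"              # logging continues with the current log
  fi
}
```

A file at or below the limit is left untouched; only an oversized file is rotated.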
Archived logs An archived log is a log in a compressed format, stored in a specific area on the
backup server. It is possible to archive several logs into the same archive file. Only
logs that are archived may be saved using the SMF backup, or deleted.
Capacity of log browser The capacity of the log browser is limited. In the current release it cannot display
more than one thousand records. If the log is bigger, a warning message indicates
that the display has been truncated.
Distributed architectures This application and its services are available on each server in all distributed
architectures. However, the list of the logs and the list of available work-
stations depend on the selected instance.
Log management icons The meaning of the icons specific to log management operations is as follows
(from left to right in the icon bar):
View log file information icon , task also available from the menu Log
File -> Info
Archive log file icon , task also available from the menu Log File ->
Archive
Update log file icon , task also available from the menu Log File ->
Update
Window description The Log Files area shows all the existing logs, with the following attributes:
Workstation: server name
Type: log type (system, command, security or other logs)
Name: user friendly log name, different from the log file path-name
Archived: indicates whether the log is archived (archived) or not (original)
The Log Records (no filter applied) area shows the log records contained in the
log file(s) selected on the left.
When no log file is selected in the Log Files area, the Log Records (no filter applied)
area is empty as in figure 53.
When at least one log file is selected in the Log Files area, the SMF Log Manage-
ment window appears as in figure 54.
Note: Each time a change occurs in the file selection, the Log Records area is emptied
to ensure that its content is consistent with the file selection.
By default the log records are sorted from the most recent one to the oldest one
and only the header of each log record is displayed. The header format depends
on the type of log:
Command Log:
<date><time>:<server name>:<usm><process identifier>:Operator
name= <user login name>:<Command><command status>
SMF Log:
<date><time> <hostname>:NMC2>/<functional domain>:<user login
name> <text>
Other Log:
The header has no specific format, i.e. in general text format (ASCII). It is
possible to edit any text file (ASCII format) with the SMF log browser if the file
is in the logging area. In particular, SMF can be used to edit and delete (after
archiving) copies of ELM logs.
tmn
configuration
start stop
maintenance
plugin
Log management pull-down menus The menu bar of the Log Management window contains 2 log management-specific
menus:
Log File: to manage the log files (see section 9.3)
Log Record: to manage the log record display (see section 9.4)
Log management icons The meaning of the icons specific to the log management is as follows (from
left to right in the icon bar):
Log File -> Info
Log File -> Archive
Log Record -> From Update
Note: The log file content is erased when the log is archived.
The Archive is a file containing compressed logs; the save area contains what is
saved or restored when using the SMF backup or the log domain.
3. Repeat the same operation for each selected log file, unless you clicked
the All or Cancel push button the previous time.
3. In the Date Filter area, indicate the period of time for which records must be
retrieved.
4. If you want to refine your search, in the Attribute Filter area, define other
filtering criteria to be applied to log records:
Click the Attribute option-button to select an attribute from a list.
Click the Operator option-button to select the kind of comparison
to be performed between the previously selected attribute and a value.
The Value part enables you to select a value from a list (if there is an
option-button) or to enter a value (if there is only an entry field).
The Filter Items subarea displays all the attribute filter criteria defined so far
by the user.
To manage these filter items, you can:
click Add to add an item just after defining an attribute criterion
click Remove to delete an item from the Filter Items list
click the AND radio-button if all the items of the Filter Items list must be
verified to retrieve a log record
click the OR radio-button if at least one of the items of the Filter Items list
must be verified to retrieve a log record
Note: When a filter is applied to log records, the Log Records ( no filter applied ) area
of the SMF Log Management window becomes the Log Records ( filter applied )
area, indicating precisely what is displayed in this area.
To deactivate the filter To deactivate the filter, open it and click the Reset button to remove the previous
filtering criteria. Then click Apply.
To update the Log Records area of the Log Management window
1. Go to menu:
Log Record -> Update
2. The Log Records area will be updated.
To display only the header of the log records (display by default), go to menu:
Log Record -> Headers Only
The Log Records area becomes as in figure 54 again.
Note: It is also possible to expand (or shrink) a log record by clicking on the + (or -)
sign on the left of the header.
10 Maintenance
From the OS Management window, you can access maintenance tasks by going
to menu:
Administration Tool -> Maintenance
The Maintenance submenu contains the following items:
Trace Management: to edit the trace levels, visualize and reset the trace
files (see section 10.1)
OS Snapshot Management/Failure Management: to manage
the OS snapshots (see section 10.2)
Cleanup Management: to manage the files to be deleted (see section
10.3)
Corba script management: to access the MML CORBA Script appli-
cations (see section 10.4)
Level Information
6 debugging
7+ specific
When enabled, dataflow traces (level 3) are issued, along with process events
(level 2), in dedicated trace files with the extension .dataflow.
The trace management user interface is detailed in section 10.1.1.
Trace management window The Workstations area shows all the available servers.
description The Trace Files area shows all the existing traces on the selected server (when no
server is selected, this area is empty), with the following attributes:
Name: trace file name
Size (byte): trace file size (in bytes)
Trace management pull-down menus The menu bar of the Trace File Management window contains 2 trace
management-specific menus:
Processes: to edit the process trace levels (see section 10.1.2)
Trace-File: to manage the trace files (see section 10.1.3)
You can also change the appearance of the Workstation area (see section 5.2).
In this window, each line shows the trace levels of one process. Each level
can be switched on (tick in the small square) or off (empty square). If a level
is switched on, the traces of this level will be written in the trace file. If a level
is switched off, no trace is written for this level.
2. Edit the trace level configuration:
To switch on (or off) a level, click on the corresponding square to tick (or
untick) it.
To switch on all the levels of a process, click All in the corresponding
line.
To switch off all the levels of a process, click Clear in the corresponding
line.
Increasing the trace level also increases the processing load of the server. Thus,
change the trace level only if the operator needs it.
4. Click Ok to launch the search. The first occurrence appears in reverse video.
5. Click either Ok again to go to the next occurrence, or Cancel to stop the
search and close the dialog box.
This dialog box displays online all the traces that are currently executed.
2. Click C ancel to close the dialog box.
Mode: ’Manual’ means that the OS snapshot has been made on user request.
’Automatic’ means that the failure of a process has triggered the OS snapshot
Failed Process: indicates the failed process (in manual mode, this column is
empty)
Access: ’Locked’ means that the OS snapshot is protected against possible
deletion by a user. ’Unlocked’ means that the OS snapshot is not protected
Size (KBs): OS snapshot size (in kilobytes)
OS snapshot management pull-down menus The menu bar of the OS Snapshot Management window contains 3 OS snapshot
management-specific menus:
Workstation: to perform an OS snapshot (see section 10.2.2)
OS-Snapshot: to manage the OS snapshots (see section 10.2.3)
Options: to set the maximum number of OS snapshots (see section 10.2.4)
You can also change the appearance of the Workstation area (see section 5.2).
OS snapshot management icons The icons in the toolbar correspond to the OS snapshot menus, as follows (from
left to right in the icon bar):
2. In the New Maximum field, enter the new maximum number of OS
snapshots the OS snapshot server will be able to store.
3. Click either Ok to validate the possible modifications, or Cancel to close
the dialog box without change.
Before cleaning the Event Forwarding Discriminator (EFD) definition files, you must
stop the OS. For more information on EFD, refer to section 12.7.
Cleanup management window description The Domain list area is a table in which each line corresponds to a server. The files
to be cleaned are classified in domains:
Backup:
directories created for the backup and restore operations, located on
the master station only, under <NMS_INSTANCE_DIR>/BackupArea,
<NMS_INSTANCE_DIR>/maintenance/backup and
<NMS_INSTANCE_DIR>/maintenance/restore
Log :
files concerning the PM-logfile, the .old log files, and all the .log files under
save, located in <NMS_INSTANCE_DIR>/maintenance/log on a server
Trace:
all the .trace, .err and .dataflow files in the NMC2 run-time
tree of a server
Failure:
snapshot files written on failure. Concerns the directories located in
<NMS_INSTANCE_DIR>/maintenance/failure on a server
:
files dumped on process crashes. Concerns all the files on a station: dumps
of NMC2 processes within <NMS_INSTANCE_DIR>/maintenance/, and files
resulting from UNIX panic errors within /var/adm/crash
NE Doc :
NE documentation release
EFD :
concerns all the Event Forwarding Discriminator definition files. After removal,
the administrator has to restart the system. Only available in command line.
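For the Trace domain, for instance, the cleanup amounts to a `find` over the run-time tree. Below is a sketch: the `clean_trace_domain` helper is illustrative, and the Cleanup tool's actual command may differ.

```shell
#!/bin/sh
# Sketch: remove the trace-domain files (.trace, .err and .dataflow) under
# a given run-time tree, as the Cleanup tool does for the Trace domain.
clean_trace_domain() {   # clean_trace_domain <runtime_tree>
  find "$1" -type f \
    \( -name '*.trace' -o -name '*.err' -o -name '*.dataflow' \) \
    -exec rm -f {} +
}
```

Files with other extensions (logs, snapshots, etc.) are untouched; they belong to other domains.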
Cleanup management pull-down menus The menu bar of the Cleanup window contains one cleanup-specific menu
(Cleanup) enabling you to:
get details about a domain to be cleaned (see section 10.3.2)
perform the cleanup (see section 10.3.3)
Cleanup management icons The meaning of the icons specific to the cleanup management is as follows
(from left to right in the icon bar):
Clean files from a domain, task also available from the menu Cleanup
-> Clean
2. Go to menu:
Cleanup -> Clean
Launch the SMH tool; its main window is then displayed. Select the
“Configure Printers or Plotters” tab:
3. Click OK to confirm.
4. In the dialog box that opens, fill in the Printer Name and Printer
Model/Interface fields then click OK.
5. In the Printer and Plotters window, if the status of the Print Spooler is STOPPED,
start it by using the menu path:
Actions -> Start Print Spooler
It is strictly forbidden to launch the X25 interface from SMH. If, by mistake, the
administrator accesses the X25 menu from SMH, all the XOT configuration is re-
moved, and the server tries to communicate with the NE sites through the hardware
X25 board (which is no longer available in vNMC2). The problem can be solved
only by a full restoration of the vNMC2 server.
The X25 board and the XOT front router are replaced by the HP XOT software.
The X25 pseudo-interface is automatically configured when the VM is first
installed. It is necessary to specify the equivalent of the front XOT router in the
XOT software configuration file /etc/x25/x121_to_ip_map
(pairs of X25 destination address and IP address of the XOT router that knows the next router):
root@nmc2ve01:1/317:/$ cat /etc/x25/x121_to_ip_map
…
9406402 135.243.16.117 #TIM02 1
9406502 135.243.16.117 #TIM02 2
The rest of the routing is provided by the IP layer.
12 Q3ES administration
This chapter describes:
the Q3ES subsystem (see section 12.1)
the Q3ES user interface (see section 12.2)
how to declare, modify and delete an OSS (see section 12.3)
how to manage log FTAM (File Transfer Access and Management) functions
(see section 12.4)
how to view logs and delete an EFD (event forwarding discriminator) (see sec-
tion 12.5)
how to access information on Q3ES processes and addresses (see section
12.6)
how to send alarms and observation elements through the EFD user interface
(see section 12.7)
Note: The Q3ES administration concerns only OCB and Alcatel-Lucent 8300-based
NEs.
Note: The use of Q3ES for the bulk data in files (Q3ES File Transfer Access and Man-
agement, FTAM) is exclusive with the BDH interface described in chapter 13.
Q3ES external interface Q3ES uses both the CMISE and FTAM services, as follows:
The CMISE services offered by Q3ES are those of a Q3 agent in charge of the
Object Model (MOC). It contains:
the Network Topology (NEs and Routes) managed by the OSS
standardized support object classes, namely EFD (Event Forwarding Dis-
criminator), Log and Log Record. These objects can be managed from the
interface, with restrictions for some event types
The events handled by Q3IC and Q3ES partly overlap (Unsolicited and Bulk
Data Messages, and Alarms); others are added ("file available" events and
Network Topology).
The FTAM file transfer can be started:
by a client of Q3ES (i.e. the external OSS is the file transfer initiator) having
created EFDs for ’file available’ events of some expected families of bulk
data (see the basic interface defined later). In that case, Q3ES is ’FTAM
server’.
by Q3ES (i.e. Q3ES is the file transfer initiator) for clients not provided with
CMISE capabilities (see the reduced interface defined later). In that case,
Q3ES is ’FTAM client’.
Q3ES component The Q3ES component has to carry exchanges with external OSSs through two
different types of interfaces, according to the external OSS capabilities:
the Basic interface, through which both entities communicate via CMISE
messages, and via FTAM transfers for file transmission. In that case, the distant
OS is always the initiator of the FTAM session.
the Reduced interface, through which both entities can only transfer files via
FTAM, without any CMISE capabilities. In that case, as Q3ES has no way
to warn the distant OS of the file availability, it decides on its own to establish
an FTAM session to transfer the available file. The decision can be made
in two ways:
either automatically, thanks to a static configuration file
or manually, upon local operator decision through the Q3ES USM
Q3ES behaves as a manager towards COLLIM (which provides the Bulk Data),
ASIM (the current Alarms), PNMIM (the EMLIM instances list) and the EMLIM
instances (the Network Topology, NE objects and Unsolicited events).
The Q3ES component can be composed of several sub-components:
a broadcast handler
a Q3ES agent
a collection manager
an FTAM server and an FTAM client for the file transmission to external OS
Q3ES-USM cannot dialog with several Q3ES instances at the same time; the
communication with several Q3ES instances is sequential: to dialog with one
Q3ES, Q3ES-USM waits until the dialog with another is terminated. Before
sending a request to a Q3ES, Q3ES-USM has to unregister then register the agent.
Note: The Q3ES processes receive all the alarms; Q3ES sends to the OSS only the
alarms matching the EFD filter.
The seven push buttons in the middle of the window enable you to access the
main Q3ES applications. They are as follows:
The Exit push button enables you to quit the Q3ES administration user interface.
The Help push button opens the contextual online help for the window.
The Q3ES main windows are composed of five areas, from top to bottom:
the window title
the menu bar
the list area
at the bottom left, the information field
at the bottom right, the number field
Window title The window title indicates the Q3ES application name (OSS Management in the
example above).
Menu bar The menu bar contains four pull-down menus. The following pull-down menus
are common to all the Q3ES main windows:
View: contains five items:
OSS Administration: to open the OSS Management main window
(see section 12.3)
Log FTAM Management: to open the Log FTAM Management main
window (see section 12.4)
Log Management: to open the Log Management main window (see
section 12.5)
Q3ES Informations: to open the Q3ES Informations main window
(see section 12.6)
EFD Management: to open the EFD Management main window (see
section 12.7)
Close: to close the current Q3ES main window and go back to the Q3ES
Administration window
Exit: to quit the Q3ES administration user interface
The fourth pull-down menu is specific to each Q3ES main window. It is presented
in the corresponding paragraphs hereafter.
List area This area contains the lists of the objects managed in the Q3ES application. The
column headers are presented in the corresponding paragraphs.
Information field This field indicates the last operation performed from the window.
Number field This field indicates the number of items in the list area.
The window areas are described in section 12.2.2. Only specific items are pre-
sented here.
List area The list area includes two columns:
OSS Name: OSS user label used by the four other applications
AE Title: application address
OSS management The menu bar of the Q3ES: OSS Management window contains the common
pull-down menu menus mentioned in section 12.2.2 and one specific menu (OSS) enabling you
to:
declare an OSS (see section 12.3.1)
modify an OSS (see section 12.3.2)
delete an OSS (see section 12.3.3)
2. In the OSS area, enter the OSS user label (8 characters or less).
Note: Q3ES can manage a maximum number of external OSSs. A new OSS declara-
tion may be refused by Q3ES-USM if this number is reached. This number is a
configuration parameter.
3. In the Application Entity Title area, enter the external application parameters:
In the Application Title field, enter the application name, which can only be
composed of digits and dots. It cannot start nor end with a dot,
and two dots must be separated by at least one digit: "123.2.4.24" is
a possible value but "12.3." is not, nor is "12..3".
In the Application Entity Qualifier (AEQ) field, enter a number: "1234" is a
possible value but "X12345" is not.
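The format rules above can be captured in a short check. This is an illustrative sketch only; the helper names are invented and not part of the product:

```shell
# Illustrative check of the Application Title and AEQ formats described
# above; "valid_app_title" and "valid_aeq" are hypothetical helper names.
valid_app_title() {
  # digits separated by dots; no leading/trailing dot, no empty component
  printf '%s\n' "$1" | grep -Eq '^[0-9]+(\.[0-9]+)*$'
}
valid_aeq() {
  # the Application Entity Qualifier must be a plain number
  printf '%s\n' "$1" | grep -Eq '^[0-9]+$'
}

valid_app_title "123.2.4.24" && echo accepted
valid_app_title "12.3."      || echo rejected   # trailing dot
valid_app_title "12..3"      || echo rejected   # empty component
valid_aeq "1234"             && echo accepted
valid_aeq "X12345"           || echo rejected   # not a plain number
```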
4. Click on the OSS Address 1 panel to enter the main address of the OSS.
The dialog box becomes as in figure 78.
Figure 78: Q3ES: OSS Parameters dialog box: OSS Address 1 Panel
This panel enables the user to enter the address elements for the different classes used:
In the Presentation Address area, fill in:
the Presentation Selector field: 16 characters maximum
the Session Selector field: 16 characters maximum
the Transport Selector field: 32 characters maximum
Note: Selectors can also be entered in hexadecimal format. For example, to enter the
hexadecimal value 22, type 22'h or 22'H in the selector field.
In the Network Service Access Point area, select the address type by click-
ing:
either on the X121 radio button: in that case, enter a number of 14
digits maximum.
or on the ISO 8348 (RFC 1006) radio button: in that case the IP ad-
dress (= RFC 1006) must be preceded with the prefix. For example,
if the IP address is "123.56.2.245", enter the address as follows:
in the first field: 123
in the second field: 56
in the third field: 2
in the fourth field: 245
The panel that appears is exactly the same as the OSS Address 1 panel. It
is used for the external OSS secondary address.
6. Click either Ok to validate or Close to cancel the operation and close the
dialog box.
Note: The AE Title modification is not performed by Q3ES-USM if there are remaining
resources (Log, EFD) associated with this OSS.
Note: The OSS deletion is not performed by Q3ES-USM if there are remaining resources
(Log, EFD) associated with this OSS. The verification that the deletion is possible
is performed by the agent part.
Note: If there are problems communicating with at least one Q3ES process, a
"Communication problem" message is displayed in the information field.
The window areas are described in section 12.2.2. Only specific items are pre-
sented here.
List area The list area includes six columns:
Log Id: log FTAM identifier
Filter type: data flow type
NEs List: indicates the list of the NEs concerned by the log FTAM
OSS Name: name of the OSS for which the log FTAM is created (declared
at FTAM creation). It can differ from one log to another.
Number Rec.: number of records
Tr. type: type of transfer
Log FTAM management The menu bar of the Q3ES: Log FTAM Management window contains the com-
pull-down menu mon menus mentioned in section 12.2.2 and one specific menu (Log FTAM)
enabling the user to:
create a log FTAM (see section 12.4.1)
delete a log FTAM (see section 12.4.2)
browse a log FTAM (see section 12.4.3)
get information about a log FTAM (see section 12.4.6)
change the transfer type of a log FTAM (see section 12.4.7)
Figure 80: Q3ES: Log FTAM Parameters dialog box: General Panel
2. In the OSS area, use the Name option button to select an OSS. The AE
Title field, displaying the corresponding external application name, is read-
only.
3. In the Filter area, use the Filter type option button to select the bulk data
flow type.
4. In the NEs area, the All NEs value appears by default in the Selected NEs
list (an empty NE list is forbidden).
To add an NE to the list, select it in the Proposed NEs list, then click
the right arrow button.
To remove an NE from the list, select it in the Selected NEs list, then
click the left arrow button.
Figure 81: Q3ES: Log FTAM Parameters dialog box: Parameters Panel
Once created, an FTAM log is not automatically deleted when the corresponding
NE is deleted. You have to delete such an FTAM log (see section 12.4.2).
The window areas are described in section 12.2.2. Only specific items are pre-
sented here.
List area The list area includes four columns:
Record Id: record identifier
Event Time:
File Name:
Transfer State: indicates whether the record has been transferred or is
to be transferred.
Log FTAM browsing The menu bar of the Q3ES: Log FTAM Browsing window contains the common
pull-down menu menus mentioned in section 12.2.2 and one specific menu (Record) enabling
the user to:
delete a record (see section 12.4.4)
send a record (see section 12.4.5)
The window areas are described in section 12.2.2. Only specific items are pre-
sented here.
List area The list area includes six columns whose content is detailed in section 12.4.1.
Log FTAM information The menu bar of the Q3ES: Log FTAM Information window contains two of the
pull-down menu common menus mentioned in section 12.2.2.
Note: If there are problems communicating with at least one Q3ES process, a
"Communication problem" message is displayed in the information field.
The window areas are described in section 12.2.2. Only specific items are pre-
sented here:
List area The list area includes five columns:
Log Id: log identifier (it is unique). Since there are several Q3ES instances,
Q3ES-USM displays the Log Ids as follows:
<Q3ESid><.><Log identifier> (example: OS00_1.7).
The logs are classified according to the Q3ES instance: first the logs in Q3ES
number 1, then the logs in Q3ES number 2, and so on.
Filter type: indicates the type of bulk data involved in the log
NEs List: indicates the list of the NEs concerned by the log. The value may
be All NEs.
OSS Name: indicates the originator of the log
Number Rec.: number of non-cleared alarms of all the NEs managed by
the Q3ES instance
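The classification described above (Q3ES instance first, then log identifier) can be reproduced with a standard sort on the two parts of the Log Id; the Log Id values below are invented examples:

```shell
# Sort Log Ids of the form <Q3ESid>.<Log identifier> the way the list is
# classified: by Q3ES instance first, then by numeric log identifier.
printf '%s\n' OS00_2.3 OS00_1.10 OS00_1.7 OS00_2.1 |
  sort -t. -k1,1 -k2,2n
# prints OS00_1.7, OS00_1.10, OS00_2.1, OS00_2.3 (one per line)
```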
Log management The menu bar of the Q3ES: Log Management window contains the common
pull-down menu menus mentioned in section 12.2.2 and one specific menu (Log) enabling you
to delete a log (see section 12.5.2).
Deleting a log is available only for debug purposes. It must only be used in critical
situations.
Note: If there are problems retrieving the NE list from any Q3ES process, a "communi-
cation problems" message is displayed in the NE list field.
There are two ways of accessing the Q3ES information:
either from the Q3ES Administration window, by clicking on the Q3ES In-
formations push button
or from another Q3ES main window, by using the menu path:
View -> Q3ES Informations
Note: If there are problems communicating with one Q3ES process, a pop-up displays
the following message: "Warning: The displayed information may be incom-
plete. At least one Q3ES process is unreachable". If all the Q3ES processes are
unreachable, the message is "Warning: All the Q3ES processes are unreach-
able".
The window areas are described in section 12.2.2. Only specific items are pre-
sented here.
List area The list area includes eight columns:
EFD Id: EFD identifier (it is unique). Since there are several Q3ES instances,
Q3ES-USM displays the EFD Ids as follows:
<Q3ESid><.><EFD identifier> (example: OS00_1.7). The EFDs are classi-
fied according to the Q3ES instance: first the EFDs in Q3ES number 1, then the
EFDs in Q3ES number 2, and so on.
Admin. State
Filter Type: indicates whether the EFD concerns the Bulk Data, the Alarms, the
Unsolicited, the State Changes or the Configuration Updates
NEs List: indicates the list of the NEs concerned by the EFD. The value may
be All NEs. In the case of a State Change or Configuration Update EFD, this
piece of information is not meaningful, so nothing is displayed in this column.
Filter characteristics: indicates:
the alarm severity for an Alarms EFD
the application list for an Unsolicited EFD
the MOC list for a State Change or Configuration Update EFD
Those values may be All NEs. In the case of a Bulk Data EFD, this piece of
information is not meaningful, so nothing is displayed in this column.
OSS Name: indicates the main destination of the EFD
OSS active name: indicates the active destination of the EFD
OSS Backup name: indicates the backup destination of the EFD
EFD management The menu bar of the Q3ES: EFD Management window contains the common
pull-down menu menus mentioned in section 12.2.2 and one specific menu (EFD) enabling you
to delete an EFD (see section 12.7.2).
Deleting an EFD is available only for debug purposes. It must be used only in
critical situations.
13 BDH management
This section describes:
what BDH management is
the BDH user interface
how to declare, modify or delete an OSS in BDH
how to create, modify, delete a Bulk Data Collection
how to manage BDH files
Note: The use of this interface is exclusive with the bulk data mode of Q3ES described
in chapter 12.
The bulk data sent by the different NEs in supervised state are stored in a disk
partition called repository. The bulk data repository is the file structure gathering
all the bulk data files. All the bulk data can be put in files.
The repository is organized according to the system type (HLR, OCB, OS, etc.),
the system name and the bulk data type (via the file naming). From the OSS, the
repository is read-only.
It is the responsibility of the OSS to perform the FTP operations to transfer the file
from the repository. There are no mechanisms in the NMC2 to ensure the transfer
integrity. The OSS may retransfer the file if it detects incomplete or corrupted data.
The NMC2 does not guarantee the data in the case of a server switchover, a connec-
tion breakdown with the NE, or when the NE supervision is stopped.
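As an illustration of the OSS-side responsibility described above, a client might wrap its transfer in a verify-and-retry loop. This is a sketch only: "cp" stands in for the real FTP "get", and the non-empty check stands in for whatever integrity test the OSS actually applies (record count, checksum, and so on).

```shell
# Sketch of OSS-side retry logic; the NMC2 side does not guarantee
# transfer integrity, so the client verifies and retransfers on failure.
fetch_with_retry() {
  src=$1 dst=$2 tries=3
  while [ "$tries" -gt 0 ]; do
    cp "$src" "$dst" 2>/dev/null      # real client: ftp get from the repository
    if [ -s "$dst" ]; then            # placeholder for the real integrity check
      return 0
    fi
    tries=$((tries - 1))
  done
  return 1                            # report failure after three attempts
}

src=$(mktemp); echo "bulk data" > "$src"
fetch_with_retry "$src" "$src.copy" && echo "transfer verified"
```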
The list of NEs is not provided by this interface but by the OS-OS topology inter-
face. No indication on the NE releases is provided by the BDH interface.
The BDH management is only available for an administrator.
The three buttons in the middle of the window enable you to access the main BDH
management functions. They are as follows:
OSS Management: to manage the OSSs (see section 13.3)
Bulk Management: to manage the bulk data (see section 13.4)
File Management: to manage the BDH files in the repository (see section
13.5)
The Exit push button enables you to quit the BDH management user interface.
The Help push button opens the contextual online help for the window.
The BDH management main windows are composed of five areas, from top to
bottom:
the window title
the menu bar
the list area
at the bottom left, the information field
at the bottom right, the number field
Window title The window title indicates the BDH management domain (OSS Management in
the example above).
Menu bar The menu bar contains four pull-down menus. The following pull-down menus
are common to all the BDH management main windows:
View: contains five items:
OSS Management: to open the OSS Management main window (see
section 13.3)
Bulk Management: to open the Bulk Management main window (see
section 13.4)
File Management: to open the File Management main window (see
section 13.5)
Close: to close the current BDH management main window and go back
to the BDH Administration window
Exit: to quit the BDH management user interface
The fourth pull-down menu is specific to each BDH management main window.
It is presented in the corresponding paragraphs hereafter.
List area This area contains the lists of the objects managed in the BDH management do-
main. The column headers are presented in the corresponding paragraphs.
Information field This field indicates the last operation performed from the window.
Number field This field indicates the number of items in the list area.
The window areas are described in section 13.2.2. Only specific items are pre-
sented here.
List area The list area includes three columns:
OSS Name: OSS user label
OSS Address: DNS or IP address
User Name: user login
OSS management The menu bar of the BDH: OSS Management window contains the common menus
pull-down menu mentioned in section 13.2.2 and one specific menu (OSS) enabling you to:
declare an OSS (see section 13.3.1)
2. In the OSS name area, enter the OSS user label (up to 8 characters).
3. In the OSS addressing area, enter the DNS or IP address (4 fields for integer
values from 0 to 255).
4. Click on the USER panel.
The dialog box becomes as in figure 91.
Note: The User identity field must not be already used by another application or external
OS.
6. Click either Ok to validate or Close to cancel the operation and close the
dialog box.
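As a sketch of the addressing constraint in step 3 above (four fields, each an integer from 0 to 255), a field could be checked as follows; the helper name is invented for this illustration:

```shell
# Hypothetical check mirroring the dialog constraint: each of the four
# address fields must be an integer from 0 to 255.
valid_octet() {
  case $1 in
    ''|*[!0-9]*) return 1 ;;          # reject empty or non-numeric input
  esac
  [ "$1" -le 255 ]                    # digits only, so >= 0 already holds
}

valid_octet 245 && echo accepted
valid_octet 256 || echo rejected
valid_octet 2a  || echo rejected
```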
Declaring an OSS via BDH enables any NMC2 user previously created via SMF to
connect to the NMC2 from the external OS by using FTP. However, only the home
directory of the 'BDH' user (created during the OSS declaration) is directly in the
BDH repository.
Note: If there are problems communicating with at least one Q3ES process, a
"Communication problem" message is displayed in the information field.
The window areas are described in section 13.2.2. Only specific items are pre-
sented here.
List area The list area includes five columns:
Bulk Data Type
System list: list of the concerned systems (all by default). For a given bulk
data type, a list of possible systems is provided by means of an option button.
Lifetime: lifetime (in days) of the bulk data file in the repository
Constitution mode: there are four modes defining how a BDH file is
closed and moved to the repository:
per dispatch: only for performance data types of the A8300 NEs
size: when a configurable file size is reached
close timer: at the expiration of a configurable timer
close daily: every day at a specified time
Bulk management The menu bar of the BDH: Bulk Management window contains the common menus
pull-down menu mentioned in section 13.2.2 and one specific menu (Bulk) enabling you to:
create a new bulk data collection (see section 13.4.1)
modify the parameters of a bulk data collection (see section 13.4.2)
delete a bulk data collection (see section 13.4.3)
2. In the Bulk type area, select the bulk data type by using the option button.
3. In the NEs area, select the systems:
To add a system, select it in the Proposed systems list, then click the
right arrow button.
To remove a system, select it in the Selected systems list, then click
the left arrow button.
5. In the File life time area, enter a number of days (7 days maximum).
6. In the File constitution mode area, click on one radio button to select the
mode, then:
if the File size mode has been chosen, enter the size in the size field
(30.72 kbytes maximum).
if the Close timer mode has been chosen, enter the expiration time
in the delay field (1440 min maximum, i.e. 1 day).
if the Close time mode has been chosen, enter the hours in the HH,
MM and SS fields (accepted values are from 0 to 23 for HH, and from
0 to 59 for MM and SS).
7. Click either Ok to validate or Close to cancel the operation and close the
dialog box.
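The field limits above can be illustrated with a small bounds check for the close daily fields (HH 0-23, MM and SS 0-59); the helper name is invented for this sketch, and the input is assumed to be numeric:

```shell
# Illustrative bounds check matching the dialog limits (HH 0-23,
# MM and SS 0-59); assumes the three arguments are plain numbers.
valid_close_time() {
  hh=$1 mm=$2 ss=$3
  [ "$hh" -ge 0 ] && [ "$hh" -le 23 ] &&
  [ "$mm" -ge 0 ] && [ "$mm" -le 59 ] &&
  [ "$ss" -ge 0 ] && [ "$ss" -le 59 ]
}

valid_close_time 23 59 59 && echo accepted
valid_close_time 24 0 0   || echo rejected
```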
Once created, a bulk data collection is not automatically deleted when the cor-
responding NE is deleted. You have to delete such a bulk data collection (see
section 13.4.3).
In both cases, the following dialog box (see figure 95) opens:
This dialog box enables you to define a filter to be applied to the BDH files dis-
played in the BDH: File Management window (see figure 96). You can select three
main criteria by means of check buttons and option buttons.
After selecting the filtering criteria, click either Ok to validate them or Close to
close the dialog box without change.
The BDH: File Management window (see figure 96) opens.
The window areas are described in section 13.2.2. Only specific items are pre-
sented here.
List area The list area includes four columns:
File name: BDH file name
File management The menu bar of the BDH: File Management window contains the common menus
pull-down menu mentioned in section 13.2.2 and one specific menu (File) enabling you to:
delete a BDH file (see section 13.5.3)
export a BDH file (see section 13.5.4)
import a BDH file (see section 13.5.5)
In the case of an import action, a list is built from the DAT content. The window is
similar to the window used to display the list of files from the repository, but the file
menu is replaced by the single action Import, which triggers the copy of the file
from the DAT to the repository.
Note: The exported BDH file is stored on the DAT just after the last record.
2. In this window, use the Import pull-down menu to trigger the copy of the
BDH files from the DAT to the repository.
NMC2 can support several clients (external OSs/applications), and therefore this
pair of parameters is required for each external OS/application.
Note that two client applications located on the same external OS will have the same
IP address. They need to have different passwords. The IP address and password are
declared in the configuration file registryTable.xml.
This file is located in the directory <NMS_INSTANCE_DIR>/NMSI/instance/de-
fault/nmsi/1/conf.
The registryTable.xml file is parsed each time an external OS connection request
is processed, so it is possible to declare a new client even while the serverOS is
running.
Here is an example extracted from this file. In this example, three clients are declared
with the following values:
IPaddress1 and password1
IPaddress1 and password2
IPaddress2 and password3
The first two clients are hosted by the same external machine with IPaddress1.
They have different passwords.
<!-- PARAMETERS FOR THE SERVEROS APPLICATION -->
<client WS="IPaddress1"
KEY="password1"></client>
<client WS="IPaddress1"
KEY="password2"></client>
<client WS="IPaddress2"
KEY="password3"></client>
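For illustration, the declarations in such a file can be inspected with basic text tools; the sketch below writes the example content to a temporary file so it is self-contained (the real parser is the NMSI application itself, not these commands):

```shell
# Write the example client declarations to a temporary copy.
reg=$(mktemp)
cat > "$reg" <<'EOF'
<client WS="IPaddress1" KEY="password1"></client>
<client WS="IPaddress1" KEY="password2"></client>
<client WS="IPaddress2" KEY="password3"></client>
EOF

# List the declared client hosts, duplicates collapsed: two hosts carry
# the three client declarations.
sed -n 's/.*WS="\([^"]*\)".*/\1/p' "$reg" | sort -u
# prints IPaddress1 and IPaddress2, one per line
```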
Note: To enable the file transfer between NMC2 and the external applications, see the
chapter 15.1.
3703A312E300000000000000001000000000000003000010
000000000076E6D6F73353700000FAA0000000000184465
6661756C744E616D6553657276657250726F63657373
Note: One easy way to retrieve the server IOR is to use FTP to retrieve the files indicated
above. This approach implies having an FTP account on the NMC2 machine.
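The tail of the hex dump above is plain ASCII; decoding it shows the server name embedded in the IOR. A small portable decoder (plain awk, so no tool such as xxd is assumed):

```shell
# Convert a string of hex pairs to the ASCII text it encodes.
hex_to_ascii() {
  echo "$1" | awk '
    BEGIN { for (i = 0; i < 256; i++) tab[sprintf("%02x", i)] = sprintf("%c", i) }
    {
      s = tolower($0)
      for (i = 1; i <= length(s); i += 2) printf "%s", tab[substr(s, i, 2)]
    }'
}

hex_to_ascii 44656661756c744e616d6553657276657250726f63657373
# prints DefaultNameServerProcess
```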
15 OS security
When the OS security package is installed on the NMC2, any FTP, telnet or rlogin
command performed from any UNIX machine to an NMC2 machine fails. The
corresponding messages are as follows:
FTP case:
421 Service not available, remote server has closed connection
telnet case
Trying... telnet: Unable to connect to remote host: Connection refused
rlogin case (the IP address of the NMC2 machine is for example
abc.def.ghi.jkl):
Wait for login exit: ..
Then define the allowed users. Verify that the /var/adm/inetd.sec file contains the line
If this line is present in the file, add the IP address of the new authorized machines.
Otherwise, add the line with the IP address of the authorized machines. If you want
to forbid all machines, delete the service line. Then save the file.
For a given service, only the last line beginning with the service name is
taken into account. So, if you have to modify a line for a given service, check
that it is the last line of the file beginning with the service name.
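Because only the last line for a service is taken into account, the rule actually in effect can be extracted as follows; the file content below is an invented example written to a temporary file:

```shell
# Build an example inetd.sec-style file with two "ftp" lines.
sec=$(mktemp)
cat > "$sec" <<'EOF'
ftp allow 10.0.0.1
telnet deny *
ftp allow 10.0.0.1 10.0.0.2
EOF

# Print the last line for a given service -- the one inetd applies.
effective_rule() {
  awk -v svc="$1" '$1 == svc { line = $0 } END { if (line != "") print line }' "$2"
}

effective_rule ftp "$sec"
# prints: ftp allow 10.0.0.1 10.0.0.2
```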
2. In the Server Name field, enter the name of the server to which you want
to connect.
3. Fill in the User Name and Password fields.
4. Click Settings. The following dialog box (see figure 99) opens:
5. In the Server Port field, replace the displayed value (21, which corresponds
to the standard FTP server) with 2001 (for the proFTP server).
6. Click OK.
7. In the Open Connection dialog box, click Open.
Launching ProFTP by using You can also activate the ProFTP service by launching the script:
a script <NMS_INSTANCE_DIR>/tools/user/script/UpdateOSS.pl <argument>
where <argument> is composed of the following parameters:
old IP address
new IP address
old user
new user
password
a digit (0 to add, 1 to modify, 2 to remove)
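A hedged illustration of the expected argument order; all values below are placeholders, and the wrapper function is an invented sanity check, not part of the delivered script:

```shell
# Expected order (placeholders):
#   UpdateOSS.pl <old IP> <new IP> <old user> <new user> <password> <0|1|2>
check_updateoss_args() {
  [ $# -eq 6 ] || return 1            # exactly six parameters expected
  case $6 in
    0|1|2) return 0 ;;                # 0 add, 1 modify, 2 remove
    *)     return 1 ;;
  esac
}

check_updateoss_args 10.1.1.1 10.1.1.2 olduser newuser secret 0 && echo "arguments ok"
```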
The standard services (ftp, telnet and rlogin) are not secured and must be
forbidden to make the use of the Secure Shell services effective. To deactivate
them, refer to chapter 15.1.
As the product provides this secure shell, Nokia is not responsible for problems
that may occur due to the use of the non-secured services.
The Secure Shell package is a set of commands that provide secured remote login,
file transfer, and remote command execution.
It ensures the data integrity and provides secure tunneling features and port for-
warding.
The following table lists the main Secure Shell commands and provides a brief
description of each.
Command Description
More information on this package can be found on the HP web site (the name of
the package contains the string HP-UX Secure Shell).
How to know if the Secure By executing the command swlist | grep "HP-UX Secure Shell" as
Shell package is installed root.
Security configuration ended with Bastille warning. Check the log file:
/SCINSTALL/security/log/scsecurity.log
The special characters blank and # are allowed in the password after
Trusted-UX activation on the server.
16 NMC2 supervision
An alarm agent is installed which stores all local machine-specific events (SNMP
traps) generated by the NMC2 monitors. This agent keeps its own Active Problem
Table (APT).
16.2.2 Os1300nmc NE
The Os1300nmc NE is an EMP-based network element used for the NMC2 super-
vision. It provides the mapping rules needed by EMP to convert the SNMP traps
generated by the nmc_SOS monitors into Alarm Surveillance-specific alarms.
Alarm synchronization for the Os1300nmc NE is performed when aligning-up the
NE, automatically (every 10 minutes) or manually from Alarm Surveillance. This is
realized using SNMP GET operations on the APT of the alarm agent implemented
in NMA nmc_SOS.
When declaring an Os1300nmc NE in Topology Manager, the “Operations Sys-
tem” Family and “1300 NMC2” Type must be used:
When setting the NE address and port, port 7161 must be chosen:
For NMC2 software supervision of its own server (self-supervision), the declara-
tion and supervision of the corresponding Os1300nmc NE is the only necessary
step to perform.
The supervision of the local NMC2 server from distant NMC2 server(s) is possible but
requires manual configuration in order for the corresponding traps to be forwarded
in real time. A trapsink <IP_address> directive must be added in the local
/etc/snmp/snmpd.conf file. The IP address to be used is the IP address
of the NMC2 server where the supervision is performed (the NE Os1300nmc
created with the local IP address). This mechanism must be applied for the
supervision of HA stand-by servers and HMS servers.
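The trapsink addition can be made idempotent so that repeated runs do not duplicate the directive; this is a sketch, not product tooling, and it operates on a temporary copy rather than the real configuration file:

```shell
# Append the trapsink directive only if it is not already present.
add_trapsink() {
  conf=$1 ip=$2
  grep -q "^trapsink $ip" "$conf" 2>/dev/null || echo "trapsink $ip" >> "$conf"
}

conf=$(mktemp)
add_trapsink "$conf" 10.2.3.4
add_trapsink "$conf" 10.2.3.4       # second call is a no-op
cat "$conf"
# prints a single line: trapsink 10.2.3.4
```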
RRN = 75 HA problem
Subreason 01: HA file replication problem.
Subreason 02: HA data replication problem.
Subreason 03: HA switchover problem.
Subreason 04: HA connectivity with the stand-by node problem.
RRN = 79 Too many users are logged into the system and further logins are dis-
abled.
Note: The paths containing the <NMS_INSTANCE_DIR> keyword present in this chap-
ter relate to the instance and server from where the alarm originates, not to the
instance and server where the respective Os1300nmc NE is supervised.
5 - file /var/adm/syslog/syslog.log exceeds maximum size <maxsize>
To solve the problem of a growing syslog file, restart syslogd. This is done by
logging in as root and launching:
# /sbin/init.d/syslogd stop
# /sbin/init.d/syslogd start
After that, the oversized syslog file is moved to OLDsyslog.log and a new sys-
log.log is created. Pay attention that the old OLDsyslog.log is deleted.
Nevertheless, it should be analysed why the log file grew so large.
5 - file /var/adm/wtmp exceeds maximum size <maxsize>
The /var/adm/wtmp file logs all logins, rlogins and telnet sessions.
Use the following command to convert the content of the wtmp file into a readable
format:
# /usr/sbin/acct/fwtmp < /var/adm/wtmp >/tmp/wtmp_out
Open the /tmp/wtmp_out file with a text editor and verify whether corrupted entries
(wrong structure of the log) are the cause of the growing log.
Clear the /var/adm/wtmp file using the command:
# cat /dev/null >/var/adm/wtmp
5 - file /var/adm/btmp exceeds maximum size <maxsize>
This file contains the logs of bad login attempts.
Run as user root the following command to check the content of the btmp file:
# lastb -f /var/adm/btmp
Clear the /var/adm/btmp file using the following command, as user root:
# cat /dev/null >/var/adm/btmp
5 - file /var/adm/sulog exceeds maximum size <maxsize>
This file contains the logs of "su" command attempts.
Log in as user root and check the content of the /var/adm/sulog file.
Rename the /var/adm/sulog file using the command:
# mv /var/adm/sulog /var/adm/sulog_old
The original file will be recreated by the next attempted "su" command.
30 - Core process nfsd does not run.
Verify that NFS_CORE is set to "1" in the /etc/rc.config.d/nfsconf file.
Check if the rpcbind and nfs4srvkd processes are running.
Try to stop and start the NFS server.
30 - Core process rpc.mountd does not run.
Verify that NFS_CORE is set to "1" in the /etc/rc.config.d/nfsconf file.
Check if rpc.mountd has ports assigned (look for mountd service entries),
using the command:
# rpcinfo -p
Try to stop and start the NFS server.
30 - Core process nfslogkd does not run.
Verify whether the nfslogkd daemon has logged errors in the /var/adm/syslog/syslog.log file.
Execute as user root the following:
On the NMC2 server:
# /sbin/init.d/nfs.client stop
On the host where the NFS server is running, stop the NFS server.
On the NMC2 server:
# /sbin/init.d/nfs.core stop
# /sbin/init.d/nfs.core start
# /sbin/init.d/nfs.client start
On the host where the NFS server is running, start the NFS server.
30 - Core process rpcbind does not run.
Check if rpcbind has ports assigned (look for mountd service entries), using
the command:
# rpcinfo -p
Execute as user root the following:
On the NMC2 server:
# /sbin/init.d/nfs.client stop
On the host where the NFS server is running, stop the NFS server.
On the NMC2 server:
# /sbin/init.d/nfs.core stop
# /sbin/init.d/nfs.core start
# /sbin/init.d/nfs.client start
On the host where the NFS server is running, start the NFS server.
30 - Core processes rpc.statd or rpc.lockd do not run.
Verify that NFS_CORE is set to "1" in the /etc/rc.config.d/nfsconf file.
Check if the rpcbind process is running.
Enter as user root the following commands to kill rpc.statd and rpc.lockd:
# /usr/bin/ps -ef | grep rpc.statd
# kill <PID>
# /usr/bin/ps -ef | grep rpc.lockd
# kill <PID>
Enter the following commands to restart rpc.statd and rpc.lockd:
# /usr/sbin/rpc.statd
# /usr/sbin/rpc.lockd
Enter the following commands to verify that rpc.statd, rpc.lockd and nfsd are all
running and responding to RPC requests:
# /usr/bin/rpcinfo -u <NFS_server_host> status
# /usr/bin/rpcinfo -u <NFS_server_host> nlockmgr
# /usr/bin/rpcinfo -u <NFS_server_host> nfs
# /usr/bin/rpcinfo -u <NFS_client_host> status
# /usr/bin/rpcinfo -u <NFS_client_host> nlockmgr
# /usr/bin/rpcinfo -u <NFS_client_host> nfs
30 - Core process rpc.ttdbserver does not run.
Verify whether the rpc.ttdbserver service has logged errors in the /var/adm/syslog/syslog.log file.
Enter as user root the following command to restart the rpc.ttdbserver service:
# /usr/sbin/inetd -c
30 - Core process sshd does not run.
Verify whether the sshd daemon has logged errors in the /var/adm/syslog/syslog.log file.
Enter as user root the following:
# /sbin/init.d/secsh stop
# /sbin/init.d/secsh start
30 - Core process swagentd does not run.
Verify whether the swagentd daemon has logged errors in the /var/adm/sw/swagentd.log file.
Check for invalid or duplicate entries in the /etc/hosts file.
Verify the content of the /etc/resolv.conf file, if configured.
Execute as user root the following:
# /sbin/init.d/swagentd stop
# /sbin/init.d/swagentd start
30 - Core process smhstartd does not run.
Log in as user root and check the logs in the /var/opt/hpsmh/logs directory.
Execute as user root the following:
# /opt/hpsmh/lbin/hpsmh stop
# /opt/hpsmh/lbin/hpsmh start
30 - Core process cron does not run.
Check the /var/adm/syslog/syslog.log, /etc/rc.log and /var/adm/cron/log files
for errors.
Display the crontab file by executing as user root the following:
# crontab -l
Try to start the cron process by running as user root the command:
# /sbin/init.d/cron start
72 - The NFS server is down or not reachable. Home directories cannot be used
- NFS badcalls statistics problem.
- NFS client <id> hangs.
Check the connection with the host where the NFS server is running.
Check the /var/adm/syslog/syslog.log file for errors.
Do as root user on the NMC2 Master and Presentation:
# /sbin/init.d/nfs.client stop
Connect as root user on the host where the NFS server is running and restart
the NFS server.
Do as root user on the NMC2 Master and Presentation:
# /sbin/init.d/nfs.client start
74 - Kerberos connection to host <hostname> is broken.
To solve the problem, check the Kerberos connection to the missing host. Possible causes are different local times on the connecting hosts, or temporary network problems in general.
77 - Process '<PID>' ('<name>') exceeds CPU usage upper threshold.
The monitor that supervises the CPU usage of the running processes periodically checks whether CPU usage exceeds a threshold for any of the processes. A process (e.g. firefox) must be found running over the CPU threshold three times in a row to produce this internal event.
Because high CPU usage can sometimes occur even under normal operation, the operator should assess the situation:
- Check the process information with the 'top' command:
CPU TTY PID USERNAME PRI NI SIZE RES STATE TIME %WCPU %CPU COMMAND
2 ? 26036 alvc3776 237 20 2080K 316K run 1:06 90.25 90.09 sh
- To get more details about the process name, use the following command:
# ps -efax | grep -i <PID>
alvc3776 26036 26029 232 12:23:01 ? 1:11 /bin/sh /opt/firefox/bin/firefox -CreateProfile firefox_profile_alvc3776_slsm6i
- Monitor the process for some time. If its CPU usage does not come down, kill the process with the following command:
# kill -9 <PID>
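The three-in-a-row rule described above can be sketched as a simple counter over successive CPU samples. The threshold value and the sample percentages below are illustrative, not the monitor's configured values.

```shell
# Sketch: raise the event only after three consecutive samples above the
# threshold, mirroring the monitor's debounce rule. Samples are example %CPU values.
threshold=85
streak=0
event=no
for sample in 40 90 91 92 35; do
    if [ "$sample" -gt "$threshold" ]; then
        streak=$((streak+1))
    else
        streak=0          # any sample at or below the threshold resets the count
    fi
    if [ "$streak" -ge 3 ]; then
        event=yes
    fi
done
echo "event raised: $event"
```

With the sample values above, the three consecutive readings 90, 91 and 92 trigger the event; a single spike would not.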
88 - Filesystem <filesystem> occupation is <occupation>.
Check the directory structure of the filesystem for old logs, archives, oversized traces or core files.
Extend the filesystem if needed. Contact Nokia for support.
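The occupation check can be automated against df-style output. The sketch below parses inline sample lines rather than live `df` output, and the 90% limit is an illustrative threshold, not a value from this guide.

```shell
# Sketch: list filesystems whose occupation exceeds a limit.
# The two df-style lines are sample data; in practice, feed real df output
# through the same awk filter. Column 4 is the occupation, column 5 the mount point.
full=$(printf '%s\n' \
    '/dev/vg00/lvol3  1048576  524288  50% /' \
    '/dev/vg00/lvol8  2097152 1992294  95% /var' |
    awk -v limit=90 '{ sub(/%/, "", $4); if ($4 + 0 > limit) print $5 }')
echo "over threshold: ${full:-none}"
```

With the sample data, only /var exceeds the 90% limit and is reported.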
The IP address to use is the address of the HP-UX server that hosts the NMC2
virtual machine. The fully qualified name of that server can also be used.
From the Cooling status window, the Events window for the Cooling resources can be accessed.
For hardware status regarding disks, select the Disks menu from Home -> Storage.
or:
# hpvmstatus
# hpvmconsole -P guestname
Once the previous steps are done, the iLO 2 MP login prompt appears. Log in using the default iLO 2 MP user name and password (Admin/Admin). The MP Main Menu screen appears:
Enter CM and press Enter. The command menu prompt appears. To configure the IP address, host name, subnet mask, and gateway address, enter LC.
Abbreviations
EFI Extensible Firmware Interface: boot manager for Integrity servers. (During the HP-UX Integrity initial boot, the EFI boot manager loads \efi\hpux\hpux.efi, which reads the \efi\hpux\auto file to determine which kernel and mode to boot. You can interrupt the HP-UX kernel loader when prompted and type the HP-UX kernel and mode to boot from.)
ENMS Element and Network Management Server
IM Information Manager
IP Internet Protocol
ITANIUM Integrity, Itanium and ia64 are the terms used for the newer HP servers
NE Network Element
NMC2 Network Management Center, 2nd generation
NMS Network Management Server
NSAP Network Service Access Point
PC Personal Computer
PID Process ID
PMC Process Monitoring Control
WP WorkPlace
Index

A
Administrator
  role . . . 16
  workplace . . . 18
Agent list . . . 66
Agents . . . 71
Application process restart . . . 21
Applicative logs . . . 93

B
Backup . . . 16
  Cleanup . . . 115
  OSS backup destination . . . 131
  OSS backup name . . . 141
BDH file
  deleting . . . 155
  exporting to DAT . . . 155
  importing from DAT . . . 156
BDH file management . . . 153
BDH management . . . 143
BDH management user interface . . . 144
BDH repository . . . 143
Broadcasting a message . . . 53
Bulk data collection . . . 149
  creating . . . 150
  deleting . . . 153
  modifying . . . 153

C
Child list . . . 66
Cleanup . . . 16, 114
Command conventions . . . 15
Command logs . . . 93
Control state . . . 65
CORBA SNM API . . . 157
Crontab files . . . 91

D
Domain files . . . 116

F
Forcing user logout . . . 57
FTP flow control server . . . 162
Functional
  groups . . . 59
  states . . . 60

G
General processes . . . 17
Group dependencies . . . 60
Group list . . . 65

H
Host name . . . 16

I
IOR . . . 158
Items/groups
  starting . . . 68
  stopping . . . 69

L
Log files
  archiving . . . 99
  deleting . . . 99
  viewing . . . 98
Log FTAM . . . 132
  browsing . . . 136
  creating . . . 133
  deleting . . . 136
Log FTAM record
  deleting . . . 137
  sending . . . 137
Log management . . . 93, 138
Log records . . . 100
  changing display mode . . . 102
  filtering . . . 100
  sorting . . . 102
  viewing . . . 102
Logins
  bad . . . 55
  good . . . 54

W
Workstation
  customizing . . . 53
  locking . . . 56
  unlocking . . . 57

X
X.733 . . . 159