
E-XMS

Software Installation and Upgrade Guide

Release 5.7.1
July 2017
Revision C
Copyright

Notice
Information in this guide is subject to change without notice. Companies, names, and data used in
examples herein are fictitious unless otherwise noted. No part of this guide may be reproduced or
transmitted in any form by means electronic or mechanical, for any purpose, without express written
permission of Empirix Inc.

Trademarks
The following are trademarks and service marks, or registered trademarks and service marks, of
Empirix Inc. or its subsidiaries, in the U.S. and other jurisdictions: Empirix, the Empirix logo, OneSight,
Hammer On-Call, Voice Watch, IntelliSight, Hammer xCentrix, Hammer XMS, Hammer XMS Active,
Hammer G5, Hammer Call Analyzer, Hammer NetEm, Hammer TDM, Hammer FX-TDM and
Hammer DEX are trademarks or registered trademarks of Empirix Inc. in the U.S. and other
jurisdictions.
All other names are used for identification purposes only and are trademarks or registered trademarks
of their respective companies.

Copyright © 2017 by Empirix, Inc.


600 Technology Park Drive, Suite 100
Billerica, MA 01821
United States
All Rights Reserved
Visit us at http://www.empirix.com



Third Party Trademarks

Sun Microsystems, Inc.


This application is Java Powered, containing tested and compatible Java virtual machine software
from Sun Microsystems, Inc.

Java and all Java based trademarks and logos are trademarks of Sun Microsystems, Inc. in the U.S.
and other countries.

Apache Tomcat
Licensed under the Apache License, Version 2.0; you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is
distributed on an “AS IS” basis, without warranties or conditions of any kind, either express or implied.
See the License for the specific language governing permissions and limitations under the License.

Wireshark
Wireshark and the “fin” logo are registered trademarks of the Wireshark Foundation.

WebRTC
Copyright (c) 2011, The WebRTC project authors. All rights reserved.

JIDE Software, Inc.


Copyright (c) 2002 - 2013 JIDE Software, Inc., all rights reserved.

OpenAM
OpenAM product is copyright © 2010-2014 by ForgeRock.

Contents

Chapter 1 Overview of the Installation and Upgrade Process


E-XMS 5.7.1 Installation Requirements.................................................... 1-1
Supported SLES OS Upgrade Paths ................................................... 1-2
First-Time Installation Prerequisites ......................................................... 1-2
Upgrade Prerequisites.............................................................................. 1-4
Planning Your Upgrade ............................................................................ 1-4
Upgrading from E-XMS 5.5 to 5.7.1 ..................................................... 1-4
Upgrading from E-XMS 5.1 - 5.4 to 5.7.1 (no re-imaging).................... 1-5
Upgrading to E-XMS 5.7.1 for ROS at SLES 11.0 or Filesystem XFS . 1-6
Vertica Backup and Restore Time Estimates ....................................... 1-7
Configure Fully Qualified Domain Names (FQDN) on NWV and ROS Servers ........ 1-8
Create an E-XMS YUM Software Repository ........................................... 1-9
Create a Third-Party Software Repository.............................................. 1-10
Add SLES or Red Hat Software Repository Before Installation or Upgrade ........... 1-11
Third-Party Software Installation ............................................................ 1-12
Installing Wireshark on NWV and ROS .............................................. 1-12

Chapter 2 Installing E-XMS Software


Configuration 1: Standalone ROS with MSP Probes................................ 2-2
Configuration 2: Fully-Distributed DMS .................................................... 2-3
Configuration 3: Redundant NWV with a ROS and MSP Probes............. 2-4
Configuration 4: NWV with HA-ROS Application Nodes and ROS DB Cluster ........ 2-5
Installing a Standalone NWV Server .................................................... 2-6
Installing a Redundant NWV Server..................................................... 2-9
Installing the SMM Component .......................................................... 2-14
Installing a DMS RDB or ROS DB Cluster ......................................... 2-16
Installing a DMS (Aggregator/Proxy) Server ...................................... 2-19
Installing a MSP Probe ....................................................................... 2-22
Installing RAN Vision (Optional) ......................................................... 2-26
Installing HA-ROS Application Nodes ................................................ 2-31
Installing High Availability MSP 6000 Probes for the Voice Engine ... 2-38
Installing a ROS Server ...................................................................... 2-40

Chapter 3 Upgrading Vertica on the ROS and DB Nodes
Prerequisite Steps..................................................................................... 3-1
Backing Up and Restoring the Vertica Database...................................... 3-2
Backing Up the Vertica Database (Full)................................................ 3-2
Backing Up User-Defined Directories and Files ................................... 3-3
Restoring the Vertica Database (Full)................................................... 3-3
Backing Up and Restoring the ROS Configuration and MySQL ............... 3-4
Backing Up the ROS Configuration and MySQL .................................. 3-4
Restoring the ROS Configuration and MySQL ..................................... 3-6
Configuring NFS for Backup/Restore........................................................ 3-6
NFS Server (SLES 11.3/SLES 11.0) .................................................... 3-6
NFS Client ............................................................................................ 3-9
Upgrading Vertica Database 6.1, 7.0 or 7.1 to 7.2.................................. 3-11
Post-Migration Configuration .................................................................. 3-12
Post-installation Check ........................................................................... 3-12

Chapter 4 Upgrading E-XMS Components


Upgrading to E-XMS 5.7.1 from E-XMS 5.1 up to 5.4 Using the Migration Utility ...... 4-1
Upgrade Prerequisites .......................................................................... 4-2
Installing the exms Migration Utility ...................................................... 4-5
Special Server Installation Requirements............................................. 4-6
Upgrading to E-XMS 5.7.1 from E-XMS 5.5 or 5.7 ................................... 4-8
Upgrade Prerequisites .......................................................................... 4-8
Upgrading an MSP Probe..................................................................... 4-9
Upgrading a ROS ............................................................................... 4-10
Upgrading an Active and Standby HA-ROS node .............................. 4-10
Upgrading a NWV or Redundant NWV............................................... 4-13
Upgrading an SMM............................................................................. 4-14
Upgrading a DMS (Aggregator/Proxy)................................................ 4-14
Upgrading a DMS or ROS RDB or RDB Cluster ................................ 4-14
Upgrading RAN Vision........................................................................ 4-14
Manually Verifying Successful Upgrade Completion .............................. 4-15

Chapter 5 Post-Installation Requirements


Third-Party Software Installation............................................................... 5-1
HA Proxy............................................................................................... 5-1

Chapter 6 Upgrading the Network Wide View Web Portal


OpenAM Agent Installer............................................................................ 6-2
Installing OpenAM Agent Installer ........................................................ 6-2
Interoperability Between IntelliSight and E-XMS .................................... 6-11

Chapter 7 Configuration for Using HTML5 DiagnostiX When No DMS is Present
Required Configuration for Voice Protocols on NWV or Standalone ROS
when using HTML5 DiagnostiX ................................................................ 7-1

Appendix A E-XMS Network TCP Port Assignments


Network Port Assignment Table ............................................................. A-3

Appendix B VM Configuration and Set Up


VM Configuration...................................................................................... B-1
Adding a Virtual Machine Port Group ................................................... B-1
VM Monitoring ........................................................................................ B-11
Adding a Virtual Network Interface ..................................................... B-11

Appendix C Troubleshooting
Common Errors and Resolutions ............................................................. C-1
Install Missing Packages ...................................................................... C-1
Resolve Installation Package Errors .................................................... C-2
Remove the MySQL Lock During an Upgrade ..................................... C-3
Update the Kernel Package for MSP Probes Based on Red Hat 6.8... C-5
Vertica Upgrade Script Errors 6.1.3 to 7.2.3 ........................................ C-5

Appendix D Installing and Upgrading SLES OS


Setup for SLES 11.3 Auto-installer........................................................... D-1
Installing SLES 11.3 OS with Auto-installer.............................................. D-3
Enabling Java Content in the Browser ............................................... D-13
Updating the Intel Server 5023 BIOS ..................................................... D-15
Upgrading SLES11.3 to SLES 11.4........................................................ D-18
Conventions ....................................................................................... D-18
Pre-requisites ..................................................................................... D-19
Configuring the Server(s) Hosting the SLES 11.4 Repository............ D-20
Adding SLES 11.4 Repository to the Server ...................................... D-21
Performing the Distribution Upgrade to SLES 11.4 ............................ D-24

Appendix E Configuring Dynamic Linksets for GTP/GTPv2 Traffic


Configuring Probes from the Control GUI................................................. E-1
Configuring Linkset Parameters ............................................................... E-3

CHAPTER 1 Overview of the Installation
and Upgrade Process

This chapter provides software application installation and upgrade prerequisites.

E-XMS 5.7.1 Installation Requirements


Before you begin installing E-XMS 5.7.1, and before an upgrade to E-XMS
5.7.1 can be considered complete, the following requirements must be
met:
 SLES 11.4 must be installed on all SLES-based servers, including MSP probes
 Vertica 7.2.3-11 must be installed on DMS database and ROS servers
 The ext3 filesystem must be installed on Database and ROS servers
 Server models in your environment must be supported; see "SLES 11.3/11.4
Compatibility Matrix," page 1-2
 Software-only installations must meet specifications; see Empirix Recommended
Specifications for Customer Provided Hardware
NOTE: E-XMS 5.7 is the final release with Ubuntu and PPC support.
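A quick way to spot-check several of these requirements on a server before you
begin (a minimal sketch; the Vertica package name and the data mount point
shown here are only examples and vary by deployment):
# cat /etc/SuSE-release        # confirm the SLES release and service pack
# rpm -qa | grep -i vertica    # confirm Vertica 7.2.3-11 is installed
# df -T /home/hammer           # confirm the data filesystem type is ext3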


Supported SLES OS Upgrade Paths


The following table shows the SLES 11.3/11.4 compatibility matrix. The
table indicates whether SLES 11.3 or 11.4 is supported for a specific
server type and the possible upgrade path to that version of SLES.

TABLE 1-1. SLES 11.3/11.4 Compatibility Matrix

Server       Server Board             SLES 11.3   SLES 11.3                   SLES 11.4
Short Name   (dmidecode |             Supported   Autoinstaller [1]           Supported
             grep "Product Name")
5023         S5000PAL                 Y           xms_hpos_1.45.iso [2][3]    N [4]
5423         S5520UR                  Y           xms_hpos_1.45.iso [3]       N [5]
SuperMicro   X8DTH                    Y           xms_hcos_2.54.iso [6]       Y [7]
Ivy Bridge   S2600GZ                  Y           N/A [8]                     Y [7]

1. Autoinstallation can be performed remotely via IPMI or on-site with physical media such as a DVD burned
from the autoinstaller ISO image. If an internal DVD drive is not present on the server, an external USB
DVD drive can be used.
2. The 5023 requires BIOS version 11.0. Run utility check_5023_bios.sh to confirm whether the server is
ready to be upgraded to SLES 11.3 with the corresponding autoinstaller. To upgrade the BIOS, on-site
presence is needed to boot from a USB stick and to run the BIOS upgrade.
3. The autoinstaller must be used because third-party RAID drivers need to be loaded for the hard drives to
be recognized correctly as part of a RAID.
4. Third-party RAID drivers are not available for the 5023 server for SLES 11.4.
5. Third-party RAID drivers have not been qualified for the 5423 server for SLES 11.4.
6. The autoinstaller must be used to set up the partitions correctly.
7. To upgrade to SLES 11.4, a separate procedure must be followed to perform a distribution upgrade from
SLES 11.3 to 11.4 (refer to Appendix D, Installing and Upgrading SLES OS).
8. Ivy Bridge servers have already been installed with SLES 11.3, however some servers have the kernel
(3.0.76-0.11) and packages that are provided by the original SLES 11.3 installer instead of the newer kernel
(3.0.101-0.47.67) and packages that have been patched to fix various security vulnerabilities and the
leap second issue.
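To determine which table row applies to a given server, run the dmidecode
command referenced in the table header, for example:
# dmidecode | grep "Product Name"
Product Name: S2600GZ
(Sample output only; this value would match the Ivy Bridge row.)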

First-Time Installation Prerequisites


If you are performing a first-time installation of the E-XMS software, make
sure the following requirements are met:
 You have obtained the servers that will become the E-XMS servers in
your environment. The specifications for the servers are described in
the Empirix Hardware Requirements Guide.


 You have installed the OS (SLES or Red Hat) on the target E-XMS
servers described in the Empirix Operating System Installation and
Configuration Guide.
You must be running SLES 11.4 after the installation. You can either
upgrade before or after the E-XMS software installation, depending on
whether you have the SLES 11.4 iso image to use for the installation. If
not, then upgrade from SLES 11.3 to 11.4 after the installation. See
Appendix D, Installing and Upgrading SLES OS.
NOTE: After the OS installation, some common tasks that may need
to be performed include configuring IP addresses, changing the host-
name, and setting the date, time, and time zone.
IMPORTANT: If you are using Red Hat, make sure the packages 'readline.i686'
and 'nscd' are not installed on your system. To uninstall the
packages, enter the following commands:
# yum remove readline.i686
# yum remove nscd
 You have the fully-qualified domain names, IP addresses, and root
account passwords for all of the NWV and ROS nodes. See “Configure
Fully Qualified Domain Names (FQDN) on NWV and ROS Servers,”
page 1-8.
 The IP addresses and fully qualified domain names for the E-XMS
NWV and ROS servers are defined in the /etc/hosts file on each node
as described in “Configure Fully Qualified Domain Names (FQDN) on
NWV and ROS Servers,” page 1-8.
 You have created an E-XMS YUM software repository where you can
download and copy the required software bundles to the E-XMS serv-
ers. See “Create an E-XMS YUM Software Repository,” page 1-9.
 You have created a third-party repository where you can download
and copy required software applications to the E-XMS servers. Refer
to the following sections:
 “Create a Third-Party Software Repository,” page 1-10.
 “Add SLES or Red Hat Software Repository Before Installation or
Upgrade,” page 1-11
 “Installing Wireshark on NWV and ROS,” page 1-12
 TCP ports must be allowed through any firewalls in the network. See
Appendix A, E-XMS Network TCP Port Assignments.


Upgrade Prerequisites
If you are performing an upgrade to the E-XMS software, make sure the
following requirements are met:
 You have the fully-qualified domain names, IP addresses, and root
account passwords for the NWV and ROS servers.
 The IP addresses and fully qualified domain names for all the E-XMS
NWV and ROS servers are defined in the /etc/hosts file on each node
as described in “Configure Fully Qualified Domain Names (FQDN) on
NWV and ROS Servers,” page 1-8.
 You have installed Wireshark for 5.7.1.
 The E-XMS system is working properly. In particular, the Redundant
NWV and HA-ROS replication are working with no errors on either
side.
 The ext3 filesystem for Vertica must be installed on the Database and ROS
servers.
 SLES 11.3 or 11.4 must be installed on all SLES-based servers. If a server is
on an earlier version, you will need to re-image it to SLES 11.3 first (see
Appendix D, Installing and Upgrading SLES OS).

Planning Your Upgrade


This section provides details for the following upgrades:
 E-XMS 5.5 to 5.7.1
 E-XMS 5.1 through 5.4 to 5.7.1 (no re-imaging)
 To E-XMS 5.7.1 for a ROS at SLES 11.0 or with the XFS filesystem
Also included are Vertica backup and restore time estimates.

Upgrading from E-XMS 5.5 to 5.7.1


The following table provides the steps required to perform an upgrade
from E-XMS 5.5 to E-XMS 5.7.1.


NOTE: If for some reason one of your 5.5 ROS servers still has the XFS
filesystem, a re-image to SLES 11.3 using the autoinstaller to get to the
EXT3 filesystem will be required for the Vertica upgrade.

TABLE 1-2. Upgrading 5.5 to 5.7.1

Step  Task                                            E-XMS System Status  Approximate Duration
1     Prerequisites: get necessary files in place     Up                   Depends on site
2     Upgrade Vertica 7.2.3 on the HA-ROS                                  30 mins
      database cluster
3     Upgrade Vertica 7.2.3 on DMS database                                30 mins
      cluster
4     Upgrade from E-XMS 5.5 to 5.7.1                                      15 to 30 mins per server
5     Post Upgrade checks                             Up                   30 mins depending on the size
                                                                           of deployment
6     Upgrade OS to SLES 11.4                         Down                 15 mins
7     Post Reimage checks                             Up                   30 mins depending on the size
                                                                           of deployment

Upgrading from E-XMS 5.1 - 5.4 to 5.7.1 (no re-imaging)


The following table provides the steps required to perform an upgrade
(with no re-imaging) from E-XMS 5.1 - 5.4 to E-XMS 5.7.1 when the ROS
is already running SLES 11.3 with the EXT3 filesystem type required for
Vertica 7.x.

TABLE 1-3. Upgrading 5.1 - 5.4 to 5.7.1

Step  Task                                            E-XMS System Status  Approximate Duration
1     Prerequisites: get necessary files in place     Up                   Depends on site
2     Backup Vertica and MySQL on ROS                 Optional             5 to 22 hrs depending on the
      See "Backing Up and Restoring the                                    amount of Vertica data to be
      Vertica Database," page 3-2.                                         preserved. See "Vertica Backup
                                                                           and Restore Time Estimates,"
                                                                           page 1-7.
3     Upgrade Vertica on the ROS                      Down                 30 mins
4     Migrate to E-XMS 5.7.1                          Down                 15 to 30 mins per server
5     Post Upgrade checks                             Up                   30 mins depending on the size
                                                                           of deployment
6     Upgrade SLES 11.3 to 11.4                       Down                 15 mins per server
7     Post Reimage checks                             Up                   30 mins depending on the size
                                                                           of deployment

Upgrading to E-XMS 5.7.1 for ROS at SLES 11.0 or Filesystem XFS


The following table provides the steps required to perform an upgrade to
E-XMS 5.7.1 when you need to re-image the server to SLES 11.3 and
Filesystem EXT3.

TABLE 1-4. Upgrading 5.7.1 ROS at SLES 11.0 or XFS

Step  Task                                            E-XMS System Status  Approximate Duration
1     Prerequisites: get necessary files in place     Up                   Depends on site
2     Configure NFS Client for Backup                                      10 mins
3     Full backup of the Vertica database             Up                   5 to 22 hrs depending on the
      See "Backing Up and Restoring the                                    amount of Vertica data to be
      Vertica Database," page 3-2.                                         preserved. See the Vertica Backup
                                                                           and Restore estimate calculations
                                                                           in the next section.
4     Shutdown the ROS processes                      Down                 5 mins
5     Incremental backup of Vertica and full          Down                 1-3 hours depending on days
      backup of MySQL and the E-XMS                                        since the full backup. See the
      configuration                                                        Vertica Incremental Backup
      See "Backing Up and Restoring the                                    estimate calculation below.
      Vertica Database," page 3-2.
6     Install SLES 11.3, configure with YaST          Down                 45 mins
      See Appendix D, Installing and Upgrading
      SLES OS.
7     Configure NFS Client for Restore                                     10 mins
8     Install E-XMS 5.1 to 5.4                        Down                 30 mins
9     Restore the MySQL database and                  Down                 2 mins
      configuration backups
      See "Backing Up and Restoring the
      Vertica Database," page 3-2.
10    Start E-XMS                                     Down/Up              5 mins
11    Full restore of Vertica database                Up                   6.5 to 23.5 hrs depending on the
      See "Backing Up and Restoring the                                    amount of Vertica data to be
      Vertica Database," page 3-2.                                         preserved. See the Vertica Backup
                                                                           and Restore estimate calculations
                                                                           in the next section.
12    Post Restore checks                             Up                   30 mins depending on the size
                                                                           of deployment
13    Shutdown E-XMS 5.1 to 5.4                       Down                 5 mins
14    Upgrade Vertica to 7.2.3                        Down                 15 to 45 mins
15    Upgrade/Migrate to E-XMS 5.7.1                  Down                 15 to 30 mins depending on the
                                                                           server type
16    Start E-XMS 5.7.1                               Down/Up              5 mins
17    Post Upgrade checks                             Up                   30 mins depending on deployment
18    Upgrade NWV Web Portal                                               5 mins
19    Upgrade SLES 11.3 to 11.4                       Down                 15 mins per server
20    Post SLES 11.4 Upgrade Checks                   Up                   15 to 30 mins

Vertica Backup and Restore Time Estimates


The amount of time that the backup and restore procedures will take
depends on:
 The available network bandwidth between the server being backed up
and the server hosting the backup storage.
 How busy Vertica is under normal conditions. The baseline Vertica
Database activity could contribute to the backup or restore procedures
taking longer.
 The amount of Vertica data being backed up.
NOTE: This procedure was tested using NFS Client software on the
server to be backed up.


All calculations for time in this section are based on the fact that the net-
work is fast enough to support a transfer rate of 100Mbps, but the limiting
factor is really the rate at which data can be pulled from Vertica. If the
actual throughput that is achievable between servers is less than this,
then the calculations should be adjusted accordingly. Similarly, if the avail-
able bandwidth is > 100 Mbps, the calculations can be adjusted accord-
ingly, but keep in mind that the rate at which the data can be pulled from or
pushed to Vertica could be the limiting factor.
It is recommended to test a full backup to determine the actual time
required based on the conditions at a particular deployment.
NOTE: For information on backing up the Vertica database, see “Backing
Up and Restoring the Vertica Database,” page 3-2.

Full Backup Time Estimate


Let's assume that, as in the running example, <Vertica-DB-Size> is
981.656597781926 GB and corresponds to 28 days of data. If this amount
of data is backed up, assuming that data can be written at a rate of
100Mbps, <Vertica-DB-Full-Backup-Time> would be 21.8 hours. If the
retention were reduced to 14 days, <Vertica-DB-Size> would only be 491
GB and <Vertica-DB-Full-Backup-Time> would be 11 hours. If instead, the
retention were reduced to 7 days, <Vertica-DB-Size> would only be 245
GB and <Vertica-DB-Full-Backup-Time> would be 5.5 hours.
It is recommended to preserve 1 week or less of historical data. The more
data preserved, the longer the restore time.
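As a worked example of how these figures are derived (simple arithmetic on the
numbers above, using decimal gigabytes and ignoring protocol overhead):
981.66 GB x 8 bits/byte = approximately 7,853 gigabits
7,853 gigabits / 100 Mbps = approximately 78,500 seconds, or about 21.8 hours
Halving or quartering <Vertica-DB-Size> scales the time down proportionally,
which gives the 11 hour and 5.5 hour figures for 14 and 7 days of retention.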

Incremental Backup Time Estimate


The time to complete the incremental backup will depend on how much
time has elapsed since the full backup has completed. If approximately 1
day has elapsed since the full backup, then <Vertica-DB-Incremental-
Backup-Time> would be approximately 1 hour (1/7 * 5.5).
It is recommended to not perform the full backup more than 1 day in
advance of the incremental backup and the actual upgrade.

Full Restore Time Estimate


<Vertica-DB-Full-Restore-Time> is approximately the sum of <Vertica-
DB-Full-Backup-Time> and <Vertica-DB-Incremental-Backup-Time>.
Based on the running example, allow 6.5 to 12 hours, depending on
whether the data retention is 7 or 14 days.

Configure Fully Qualified Domain Names (FQDN) on NWV and ROS Servers
Each 5.x NWV and ROS server requires a fully qualified domain name
(FQDN). The hostname must match the leading part of the FQDN value.

Also, ports 8080/8443 must be open to each client computer, in addition to
ports 80/443.
The following information is required:
 E-XMS Network Wide system IP addresses and root passwords are
required.
 When installing an NWV or ROS, an FQDN is required. All users will
access E-XMS 5.x using the FQDN of the web server and not the IP
address. In addition, the hostname of the NWV or ROS server must be
changed to match the leading part of the server's FQDN value or the
installation will fail. For example, if the FQDN is exms5.company.com,
then the hostname of the NWV or ROS must be exms5 before starting
the installation.
In summary:
1. Create an FQDN for the NWV or ROS. If there is a Redundant NWV,
a unique FQDN is required for both NWVs. The two FQDNs must also
be in the same domain, for example:
nwv1.dept.company.com and nwv2.dept.company.com
Everything after the hostname part of the FQDN must be identical for
the Redundant NWV replication to work. In this example, the
dept.company.com portion is the same on both NWV servers.
2. Validate that the FQDN is resolvable and pingable from an E-XMS
user's computer.
3. Change the NWV or ROS hostname to match the leading part of the
FQDN. The Linux hostname command can be used, e.g. “hostname
exms5”.
4. Add this entry to the server's /etc/hosts file:
<server ip> <fqdn> <hostname>
5. Validate that the server can resolve the FQDN and the hostname
using the Linux ping command.
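For example, using the exms5 / exms5.company.com values from the earlier
example (the IP address 10.0.0.10 is only a placeholder):
# hostname exms5
# echo "10.0.0.10 exms5.company.com exms5" >> /etc/hosts
# ping -c 1 exms5.company.com
# ping -c 1 exms5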

Create an E-XMS YUM Software Repository


Create an E-XMS software repository to manage E-XMS software ver-
sions locally or remotely. The repository will contain the RPM package
files that can be accessed locally or remotely. This repository can simplify
the management of software versions for installing and upgrading E-XMS
servers and components. Once a repository has been created, command-
line package management tools such as zypper (SLES) and yum (Red
Hat) can be used to install or upgrade the software more easily.


This version of E-XMS supports the use of a local software repository. The
installation tarball includes a commodity script called "install-repo.sh". The
script automatically updates the repository configuration in SLES or Red
Hat to include a local E-XMS repository.
1. Copy the E-XMS software bundle to the target E-XMS node. Save it in
a temporary location, for example:
/home/E-XMS_SW
2. If there is an existing "repo" directory present in share, remove it so
that there are no conflicts with files and packages:
# rm -rf /home/hammer/share/<version>
3. Create a directory where the software repository can reside and into which
tarballs can be extracted:
# mkdir -p /home/hammer/share/<version>
4. Extract the E-XMS tarball (<exms-version-target.gz>) to the directory
you created in step 3 (takes about 1 minute):
# tar -xzvf <the tarball file name> -C /home/hammer/share/<version>
For example:
# tar -xzvf exms-5.7.1-bxxx.tar.gz -C /home/hammer/share/5.7.1
5. Change to the directory where the install repository script is located
and run the install-repo.sh script. The script assumes that the current
working directory is the location of the repository (i.e. /home/hammer/share/<version>/repo):
# cd /home/hammer/share/<version>/repo
# ./install-repo.sh
(Takes seconds.)
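To confirm that the repository was added, you can list the configured
repositories (the repository name shown depends on what install-repo.sh created):
SLES
# zypper lr
Red Hat
# yum repolist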

Create a Third-Party Software Repository


This version of E-XMS requires some additional third-party packages for
the E-XMS installation that may not already be installed on the system.
These third-party rpms must also be added to the repository before start-
ing the E-XMS installation.
1. Download the third-party SLES tarball:
thirdparty-sles-repo-5.7.1.tar.gz
2. Extract the files to the following location using the command (takes seconds):
# tar -xzvf thirdparty-sles-repo-5.7.1.tar.gz -C /home/hammer/share/<version>
3. Change to the directory where the install repository script is located
and run the script. The script assumes that the current working directory
is the location of the repository (i.e. /home/hammer/share/<version>/thirdparty-sles-repo):
# cd /home/hammer/share/<version>/thirdparty-sles-repo/
# ./install-extra-repo.sh
(Takes seconds.)


NOTE: This repository does not include third-party Wireshark software.
See “Installing Wireshark on NWV and ROS,” page 1-12.

Add SLES or Red Hat Software Repository Before Installation or Upgrade
The Empirix software components require specific packages to be
installed. Zypper or YUM will automatically check for these software
dependencies during installation or upgrade. Any missing packages will
be automatically downloaded and installed or upgraded; this will only work
if the corresponding SLES or Red Hat software repository containing
these packages is configured properly and accessible during installation
or upgrade. If SLES or Red Hat is not properly registered or there is no
network connection, a local repository must be created and added or the
packages must be manually downloaded and installed or upgraded before
proceeding with the installation or upgrade of E-XMS software compo-
nents.
Example:
Retrieving package info2html-2.0-104.18.noarch (1/
69), 17.0 KiB (39.0 KiB unpacked)
File './suse/noarch/info2html-2.0-
104.18.noarch.rpm' not found on medium 'cd:///
?devices=/dev/sr0'
Please insert medium [SUSE-Linux-Enterprise-Server-
11-SP3 11.3.3-1.138] #1 and type 'y' to continue or
'n' to cancel the operation. [yes/no] (no):
In the example, the SLES repository called "SUSE-Linux-Enterprise-
Server-11-SP3 11.3.3-1.138" was configured with the URL "cd:///
?devices=/dev/sr0", but was not reachable at installation or upgrade time.


Third-Party Software Installation

Installing Wireshark on NWV and ROS


The Wireshark network analyzer is required on the NWV and ROS.
IMPORTANT: The recommended wireshark version for E-XMS 5.7.1-b256
and later is 3.0.0-5. For E-XMS 5.7 builds earlier than 239 you can still use
wireshark version 3.0.0-4.
To install the Wireshark rpm, perform the following steps:
1. Download and install Wireshark 3.0 from the SourceForge location for
version 5.7:
a. Go to http://sourceforge.net/projects/ensure-forge/files/

b. Click the Version 5.7 folder and refer to the README.txt file for
instructions on downloading the wireshark rpm.
c. Compare the rpm checksum with the contents of md5 file. Both
files need to be in the same directory when using the “-c” option of
md5sum:
$ md5sum -c hxms-wireshark-3.0.0-5.x86_64.md5
hxms-wireshark-3.0.0-5.x86_64.rpm: OK
Alternatively, the contents of the .md5 file can be displayed and
manually compared against the output of the md5sum command:
$ md5sum hxms-wireshark-3.0.0-5.x86_64.rpm
cf01c6195a1c278ca4cfad2cf3592138
hxms-wireshark-3.0.0-5.x86_64.rpm
$ cat hxms-wireshark-3.0.0-5.x86_64.md5
cf01c6195a1c278ca4cfad2cf3592138
hxms-wireshark-3.0.0-5.x86_64.rpm
d. Install/upgrade the new Wireshark library rpm on the NWV and
ROS before installing or upgrading to E-XMS 5.7.1.
$ rpm -Uvh --force
hxms-wireshark-3.0.0-5.x86_64.rpm
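Optionally, confirm the installed version afterwards (the package name
hxms-wireshark is taken from the rpm file name above):
$ rpm -q hxms-wireshark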

CHAPTER 2 Installing E-XMS Software

This chapter describes how to install E-XMS software for the following
sample configurations:
 “Configuration 1: Standalone ROS with MSP Probes,” page 2-2
 “Configuration 2: Fully-Distributed DMS,” page 2-3
 “Configuration 3: Redundant NWV with a ROS and MSP Probes,”
page 2-4
 “Configuration 4: NWV with HA-ROS Application Nodes and ROS DB
Cluster,” page 2-5
Each E-XMS server and component installation consists of:
 Installing E-XMS rpm packages and their dependencies (other
required rpm packages for the component to be able to run).
 Running a setup script to configure the component. For example,
setup-ros.sh creates the MySQL and Vertica schemas and the Apache
Tomcat/Tomee and OpenAM SSO configurations.
IMPORTANT: Make sure you have completed the prerequisites for a first-
time E-XMS server installation in Chapter 1, Overview of the Installation
and Upgrade Process.


Configuration 1: Standalone ROS with MSP Probes


The following image shows an example of a configuration with an E-XMS
ROS and two MSP probes.

Refer to the software installation sections for the following components to
configure a ROS with two MSP probes:
 “Installing a ROS Server,” page 2-40
 “Installing a MSP Probe,” page 2-22


Configuration 2: Fully-Distributed DMS


The following image shows an example of a configuration of a fully-distrib-
uted DMS with two MSP probes.

Refer to the software installation sections for the following components to
configure a fully-distributed DMS configuration:
 “Installing a Standalone NWV Server,” page 2-6
 “Installing the SMM Component,” page 2-14
 “Installing a DMS RDB or ROS DB Cluster,” page 2-16
 “Installing a DMS (Aggregator/Proxy) Server,” page 2-19
 “Installing a ROS Server,” page 2-40
 “Installing a MSP Probe,” page 2-22


Configuration 3: Redundant NWV with a ROS and MSP Probes


The following image shows an example of a configuration with an E-XMS
NWV with a ROS and two MSP probes.

Refer to the software installation sections for the following components to
configure an NWV and ROS with two MSP probes:
 “Installing a Standalone NWV Server,” page 2-6
 “Installing a Redundant NWV Server,” page 2-9
 “Installing a ROS Server,” page 2-40
 “Installing a MSP Probe,” page 2-22
 “Installing RAN Vision (Optional),” page 2-26


Configuration 4: NWV with HA-ROS Application Nodes and ROS DB Cluster

The following image shows an example of a Network Wide View configuration
with two HA-ROS Application Nodes and a ROS DB Cluster.

Refer to the software installation sections for the following components to
configure an Active and Standby HA-ROS Application Node with a ROS DB
Cluster:
 “Installing a Standalone NWV Server,” page 2-6
 “Installing HA-ROS Application Nodes,” page 2-31
 “Installing a DMS RDB or ROS DB Cluster,” page 2-16


Installing a Standalone NWV Server


The NWV server provides a single user interface with visibility to informa-
tion correlated and collected by one or more ROSs.

Prerequisites
Create a Persistent SSH Session
Create a persistent SSH session to monitor the progress of the installation
procedure. This creates a virtual session on the system that allows you to
reconnect to the session if you lose connectivity during the
installation.
1. Log in to the server using an SSH client as ‘hammer’.

2. Switch to root user:

su -
3. Open the screen session:
screen -S <screen session name> /bin/bash
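If the SSH connection is lost, the screen session keeps running on the server.
You can list and reattach to it with the standard GNU screen commands, for example:
# screen -ls
# screen -r <screen session name>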
IMPORTANT: Make sure you have completed the prerequisites for a first-
time E-XMS server installation in Chapter 1, Overview of the Installation
and Upgrade Process.

Start the NWV Server Software Installation


Log in to the NWV server and install the RPM:
SLES
# zypper install exms-nwv
(This takes about 5 minutes.)
Red Hat
# yum install exms-nwv
NOTE: All dependencies and conflicts are automatically checked during
the installation and the required packages are downloaded and installed.
See “Common Errors and Resolutions,” page C-1 to resolve any errors.
IMPORTANT: If your environment has DMS components, you must install
the SMM component.

Run the NWV Server Configuration Script


The NWV server configuration script tests the hardware characteristics
and validates the configuration.
1. Run the following configuration script:

# /home/hammer/hmcommon/bin/setup-nwv.sh
(This takes about 5 seconds.)

If the system does not meet the minimum requirements, a warning is
displayed.
2. Select Yes to proceed with the setup or select No for details about the
invalid configuration.
IMPORTANT: If you ignore the minimum requirements warning, the sys-
tem likely will not perform acceptably and Empirix cannot stand behind
the performance of the system. Please contact Empirix Technical Sup-
port for help to resolve this important warning.

If you select Yes, the NWV configuration window displays.


3. Configure the NWV parameters. (This step takes about 5 minutes.)

Enter the following NWV parameters:


 NWV Address—the IP address of the server
 Domain Name—the fully qualified domain name of the server, for
example: exms.ps.empirix.com
 SSL Enabled for Web Server—to enable SSL support on the web
server, set to 1


 The values for a secondary NWV should be left empty.


4. Configure the NTP server parameters.

Enter the following NTP parameters:


 NTP Enabled—NTP must be enabled on all servers; set to 1
 NTP Primary Address—the IP address of the primary NTP server
 NTP Secondary Address—the IP address of the secondary NTP
server

Complete the NWV Server Installation


To complete the NWV installation, ensure the license file is available, then
restart the processes.
1. Copy the license.xml file to the following location:

/home/hammer/hmcommon/license.xml
2. On both the primary and secondary NWV servers, restart hammer services:
# /etc/init.d/hmmonitor restart
3. On the secondary NWV server, restart Tomcat and TomEE services:

# /etc/init.d/tomee stop
# /etc/init.d/tomcat7 stop
# /etc/init.d/tomcat7 start
# /etc/init.d/tomee start
The NWV server installation is completed. Now install the next component.

Configuring Voice Protocols Using HTML5 DiagnostiX


For information on configuring voice protocols using HTML5 DiagnostiX
user interface, see Chapter 7, Configuration for Using HTML5 Diagnos-
tiX When No DMS is Present.

Install the Next Component


If your configuration requires a redundant NWV server, continue to
“Installing a Redundant NWV Server,” page 2-9.
If your configuration does not require a redundant NWV server, continue
to “Installing the SMM Component,” page 2-14.
If your configuration includes HA-ROS application nodes, continue to
“Installing HA-ROS Application Nodes,” page 2-31
IMPORTANT: If your environment has DMS components or if you want to
use the HTML5 DiagnostiX user interface you must install the SMM com-
ponent.

Installing a Redundant NWV Server

Prerequisites
IMPORTANT: Make sure you have completed the prerequisites for a first-
time E-XMS server installation in Chapter 1, Overview of the Installation
and Upgrade Process.
The primary NWV server must be installed and running. After the second-
ary NWV server is installed and running, rerun the setup-nwv.sh script on
the primary NWV to enable MySQL and OpenAM replication.

Start the Secondary NWV Server Software Installation


NOTE: If you have a redundant/secondary NWV configuration, DMS serv-
ers can point to ONLY one SMM. There is no redundant option for post-
gres data.
Log in to the secondary NWV server and install the RPM:
SLES
# zypper install exms-nwv
Red Hat
# yum install exms-nwv
NOTE: All dependencies and conflicts are automatically checked during
the installation and the required packages are downloaded and installed.
See “Common Errors and Resolutions,” page C-1 to resolve any errors.


Run the NWV Server Configuration Script


The NWV server configuration script tests the hardware characteristics
and validates the configuration.
1. Run the following configuration script on the primary NWV server:

# /home/hammer/hmcommon/bin/setup-nwv.sh
If the system does not meet the minimum requirements the warning
below displays.
2. Select Yes to proceed with the setup or select No for details about the
invalid configuration.
IMPORTANT: If you ignore the minimum requirements warning, the sys-
tem likely will not perform acceptably and Empirix cannot stand behind
the performance of the system. Please contact Empirix Technical Sup-
port for help to resolve this important warning.

If you select Yes, the NWV configuration window displays.


3. Configure the primary NWV server parameters to enable MySQL and
OpenAM Replication.
NOTE: The secondary NWV must be up and running and have passed
all post checks.

Enter the following NWV server parameters.


 NWV IP Address—IP address of the secondary server


 Domain name—fully qualified domain name of the secondary server
 SSL Enabled for Web Server—to enable SSL support on the web
server, set to 1
 Enable Redundancy—set to 0 on the primary NWV server
 NWV Hammer Password—Password for the Hammer user for the
primary NWV
 NWV Root Password—Password for the root user for the primary
NWV
 NWV Primary Address—IP address of the primary NWV where you
are currently running the setup-nwv.sh script
 NWV Primary Domain Name—Fully qualified domain name of the
primary NWV
4. Configure the NTP server parameters.

Enter the following NTP parameters:


 NTP Enabled—NTP must be enabled on all servers; set to 1
 NTP Primary Address—the IP address of the primary NTP server
 NTP Secondary Address—the IP address of the secondary NTP
server
After NTP is configured, the script will complete the setup of the NWV. The
license.xml file must be available in /home/hammer/hmcommon.
IMPORTANT: If DMS components have been installed, remember to install
SMM; see “Installing the SMM Component,” page 2-14.


Run the NWV Server Configuration Script


The NWV server configuration script tests the hardware characteristics
and validates the configuration.
1. Run the following configuration script on the secondary NWV server:

# /home/hammer/hmcommon/bin/setup-nwv.sh
If the system does not meet the minimum requirements the following
warning displays.
2. Select Yes to proceed with the setup or select No for details about the
invalid configuration.
IMPORTANT: If you ignore the minimum requirements warning, the sys-
tem likely will not perform acceptably and Empirix cannot stand behind
the performance of the system. Please contact Empirix Technical Sup-
port for help to resolve this important warning.

If you select Yes, the NWV configuration window displays.


3. Configure the NWV parameters.

Enter the following NWV server parameters.


 NWV IP Address—IP address of the secondary server


 Domain name—fully qualified domain name of the secondary server
 SSL Enabled for Web Server—to enable SSL support on the web
server, set to 1 (This setting must be the same on the primary and
secondary NWV.)
 Enable Redundancy—set to 1 to enable redundancy on the sec-
ondary NWV
 NWV Hammer Password—Password for the Hammer user for the
primary NWV
 NWV Root Password—Password for the root user for the primary
NWV
 NWV Primary Address—IP address of the primary NWV
 NWV Primary Domain Name—Fully qualified domain name of the
primary NWV
4. Configure the NTP server parameters.

Enter the following NTP parameters:


 NTP Enabled—NTP must be enabled on all servers; set to 1
 NTP Primary Address—the IP address of the primary NTP server
 NTP Secondary Address—the IP address of the secondary NTP
server
IMPORTANT: If DMS components are installed, do not install the SMM on
the secondary NWV.


Complete the Secondary NWV Server Installation


To complete the NWV installation on both the primary and secondary
NWV servers, ensure the license file is available, then restart the pro-
cesses.
1. Copy the license.xml file to the following location:

/home/hammer/hmcommon/license.xml
2. Restart all processes in the following order:

a. hmmonitor

b. Tomcat7

c. TomEE

Failure to restart the processes in this order may cause licensed features to
not work properly.
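For example, using the init scripts shown earlier in this chapter, the restart
sequence on each NWV server would look like the following (a sketch; adjust
the script names if your deployment differs):
# /etc/init.d/hmmonitor restart
# /etc/init.d/tomcat7 restart
# /etc/init.d/tomee restart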

Install the Next Component


Continue to “Installing the SMM Component,” page 2-14.
IMPORTANT: If you are installing DMS components or if you plan to use
the HTML5 DiagnostiX user interface you must install the SMM on the Pri-
mary NWV server.

Installing the SMM Component


The SMM is required on an NWV and standalone ROS when there is a
DMS server in your configuration or you plan to use the HTML5 Diagnos-
tiX user interface.
NOTE: If you have a redundant NWV configuration, DMS servers can
point to ONLY one SMM component. There is no redundant option for
Postgres data.

Start the SMM Component Installation


Log in to the SMM and install the RPM:
SLES
# zypper install exms-smm
(This takes about 1 minute.)
Red Hat
# yum install exms-smm
NOTE: All dependencies and conflicts are automatically checked during
the installation and the required packages are downloaded and installed.
See “Common Errors and Resolutions,” page C-1 to resolve any errors.


Run the SMM Component Configuration Script


The SMM component configuration script tests the hardware characteris-
tics and validates the configuration.
1. Run the following configuration script:

# /home/hammer/hmcommon/bin/setup-smm.sh
(This takes about 5 minutes.)
If the system does not meet the minimum requirements a warning is
displayed.
2. Select Yes to proceed with the SMM configuration or select No for
details about the invalid configuration.
IMPORTANT: If you ignore the minimum requirements warning, the sys-
tem likely will not perform acceptably and Empirix cannot stand behind
the performance of the system. Please contact Empirix Technical Sup-
port for help to resolve this important warning.

3. Configure the SMM parameters.

Enter the following SMM parameters:


 SMM Address—the NWV or Standalone ROS IP address


 RDB Address—the RDB IP address. If there is no DMS, set this value to
the IP address of this server.

Complete the SMM Component Installation


To complete the SMM installation, ensure the license file is available, then
restart the processes.
1. Copy the license.xml file to the following location:

/home/hammer/hmcommon/license.xml
2. Restart all processes in the following order:

a. hmmonitor

b. Tomcat7

c. TomEE

Failure to restart the processes in this order may cause licensed features to
not work properly.
The SMM component installation is completed. Now install the next
component.

Install the Next Component


Continue to “Installing a DMS RDB or ROS DB Cluster,” page 2-16.

Installing a DMS RDB or ROS DB Cluster


A single DMS RDB or an ROS DB cluster can be used in a configuration.
An RDB cluster consists of three nodes. Each DMS RDB or ROS DB node
in a cluster is installed and configured before the entire cluster is initial-
ized.
Before installing a DMS RDB or ROS DB cluster:
 Make sure all DMS RDB or ROS DB cluster nodes have the same root
password.
 If the hostnames of the DMS RDB or ROS DB cluster nodes are
passed to the init_db script, the hostnames and IP addresses of all
nodes must be added to the /etc/hosts file on each of the nodes.
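For example, if the three cluster nodes are named rdb1, rdb2, and rdb3
(hypothetical names and addresses), each node's /etc/hosts would contain
entries such as:
10.0.0.21  rdb1
10.0.0.22  rdb2
10.0.0.23  rdb3
The init_db.sh script could then later be given the hostnames rdb1,rdb2,rdb3.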

Prerequisites
IMPORTANT: Make sure you have completed the prerequisites for a first-
time E-XMS server installation in Chapter 1, Overview of the Installation
and Upgrade Process.

Start the DMS RDB or ROS DB Installation


Log in to the DMS RDB or ROS DB node and install the RPM:


SLES
# zypper install exms-rdb
(This takes about 3 minutes.)
Red Hat
# yum install exms-rdb
NOTE: All dependencies and conflicts are automatically checked during
the installation and the required packages are downloaded and installed.
See “Common Errors and Resolutions,” page C-1 to resolve any errors.

Run the DMS RDB or ROS DB Configuration Script


The DMS RDB and ROS DB node configuration script tests the hardware
characteristics and validates the configuration.
1. Run the following configuration script on each of the DMS RDB or
ROS DB nodes:
# /home/hammer/hmcommon/bin/setup-rdb.sh
If the system does not meet the minimum requirements the warning
below displays.
2. Select Yes to proceed with the setup or select No for details about the
invalid configuration.
IMPORTANT: If you ignore the minimum requirements warning, the sys-
tem likely will not perform acceptably and Empirix cannot stand behind
the performance of the system. Please contact Empirix Technical Sup-
port for help to resolve this important warning.

If you selected Yes, the node configuration displays.


3. Configure DMS RDB or ROS DB node parameters.


Enter the following DMS RDB or ROS DB node parameter:


DMS RDB Node
 SMM Address—the IP address of the System Management Mod-
ule (SMM) used to configure the Data Management System (DMS)
topology for gathering the status of all configured network ele-
ments.
ROS DB Node
 SMM Address—the IP address of the NWV or the standalone
ROS.
4. Configure NTP server parameters.

Enter the following NTP parameters:


 NTP Enabled—NTP must be enabled on all servers; set to 1


 NTP Primary Address—the IP address of the primary NTP server
 NTP Secondary Address—the IP address of the secondary NTP
server

Initialize an DMS RDB or ROS DB Cluster


1. Run the following command on only one of the DMS RDB or ROS DB
nodes (this takes about 3 minutes):
/home/hammer/hmrdb/scripts/init_db.sh host1,host2,host3
Use a comma separated list of IP addresses or hostnames (recommended)
as shown in the example above.
2. Enter the password: 1256

3. In the DMS RDB or ROS DB node name field, just press return.

NOTE: A cluster can consist of one or more nodes. In the example
above, three nodes are used to initialize the cluster.

Complete the DMS RDB or ROS DB Node Installation


To complete the DMS RDB or ROS DB node installation, ensure the
license file is available, then restart the processes.
1. Copy the license.xml file to the following location:

/home/hammer/hmcommon/license.xml
2. Restart all processes in the following order:

a. hmmonitor

b. Tomcat7

c. TomEE

Failure to restart the processes in this order may cause licensed features to
not work properly.
The DMS RDB or ROS DB node installation is completed. Now install the
next component.

Install the Next Component


Continue to “Installing a DMS (Aggregator/Proxy) Server,” page 2-19.

Installing a DMS (Aggregator/Proxy) Server


NOTE: If you have a redundant NWV configuration, DMS servers can
point to ONLY one SMM. There is no redundant option for postgres data.


Prerequisites
IMPORTANT: Make sure you have completed the prerequisites for a first-
time E-XMS server installation in Chapter 1, Overview of the Installation
and Upgrade Process.

Start the DMS Server Installation


Log in to the DMS and install the RPM:
SLES
# zypper install exms-dms
(This takes about 2 minutes.)
Red Hat
# yum install exms-dms
NOTE: All dependencies and conflicts are automatically checked during
the installation and the required packages are downloaded and installed.
See “Common Errors and Resolutions,” page C-1 to resolve any errors.

Run the DMS Server Configuration Script


The DMS server configuration script tests the hardware characteristics
and validates the configuration.
1. Run the following configuration script:

# /home/hammer/hmcommon/bin/setup-dms.sh
If the system does not meet the minimum requirements the warning
below displays.
2. Select Yes to proceed with the setup or select No for details about the
invalid configuration.
IMPORTANT: If you ignore the minimum requirements warning, the sys-
tem likely will not perform acceptably and Empirix cannot stand behind
the performance of the system. Please contact Empirix Technical Sup-
port for help to resolve this important warning.

If you selected Yes, the DMS server configuration displays.


3. Configure the DMS server parameters. (Takes about 1 minute.)

Enter the following DMS server parameter:


 SMM Address—the IP address of the System Management Mod-
ule (SMM) used to configure the Data Management System (DMS)
topology for gathering the status of all configured network elements
4. Configure the NTP server parameters.

Enter the following NTP server parameters:


 NTP Enabled—NTP must be enabled on all servers; set to 1
 NTP Primary Address—the IP address of the primary NTP server
 NTP Secondary Address—the IP address of the secondary NTP
server


Complete the DMS Server Installation


To complete the DMS server installation, ensure the license file is avail-
able, then restart the processes.
1. Copy the license.xml file to the following location:

/home/hammer/hmcommon/license.xml
2. Restart all processes in the following order:

a. hmmonitor

b. Tomcat7

c. TomEE

Failure to restart the processes in this order may cause licensed features to
not work properly.
The DMS server installation is completed. Now install the next component.

Install the Next Component


Continue to “Installing a MSP Probe,” page 2-22.

Installing a MSP Probe

Prerequisites
IMPORTANT: Make sure you have completed the prerequisites for a first-
time E-XMS server installation in Chapter 1, Overview of the Installation
and Upgrade Process.

Start the MSP Probe Installation


Log in to the MSP probe and install the RPM:
SLES
# zypper install exms-probe
Red Hat
# yum install exms-probe
NOTE: All dependencies and conflicts are automatically checked during
the installation and the required packages are downloaded and installed.
See “Common Errors and Resolutions,” page C-1 to resolve any errors.

Run the MSP Probe Configuration Script


The MSP probe configuration script tests the hardware characteristics and
validates the configuration.


1. Run the following configuration script on the MSP probe:

# /home/hammer/hmcommon/bin/setup-probe.sh
If the system does not meet the minimum requirements the warning
below displays.
2. Select Yes to proceed with the setup or select No for details about the
invalid configuration.
IMPORTANT: If you ignore the minimum requirements warning, the sys-
tem likely will not perform acceptably and Empirix cannot stand behind
the performance of the system. Please contact Empirix Technical Sup-
port for help to resolve this important warning.

If you selected Yes, the Probe configuration displays.


3. Configure the MSP probe parameters.

Enter the following Probe parameters:


 Probe ID—a unique identifier that must match the probe id speci-
fied for this probe during the ROS setup
 ROS Address—the IP address of the ROS system


 XPA Enabled—to enable XPA on this probe, set to 1


 DMS Enabled—to enable DMS on this probe, set to 1
4. Configure NTP.

Enter the following NTP parameters:


 NTP Enabled—NTP must be enabled on all servers; set to 1
 NTP Primary Address—the IP address of the primary NTP server
 NTP Secondary Address—the IP address of the secondary NTP
server
5. For software-only probes without Empirix-supplied capture cards, configure internal NICs that will be used as capture devices (except eth0).
NOTE: A capture device selected here is not available as an Ethernet
device to the operating system. Interfaces for the hardware accelerator
boards can only be configured from the Control GUI.
After all interfaces are configured, the system will be installed. The
license.xml file must be available in /home/hammer/hmcommon.
If the NICs support DPDK, the setup script will prompt for the DPDK
Acquisition Interfaces:


If the kernel hugepages setting has been modified, which is required


for DPDK, then you will be prompted to reboot:
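For reference, the current hugepages configuration can be checked with standard Linux commands before rebooting (not specific to E-XMS):
# grep -i huge /proc/meminfo
# cat /proc/cmdline
If the kernel command line already contains the hugepages options requested by the setup script, a further reboot may not be needed.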

If DPDK cannot be used on the NICs and the libpcap acquisition


method is available, then the setup script will prompt for the libpcap
acquisition interfaces:


Complete the MSP Probe Installation


To complete the MSP probe installation, ensure the license file is avail-
able, then restart the processes.
1. Copy the license.xml file to the following location:

/home/hammer/hmcommon/license.xml
2. Once the probe installation is completed, hmmonitor is restarted.

Failure to restart processes in this order may cause licensed features to not work properly.
The MSP probe installation is completed. The NWV server installation
for the mobile application and data configuration is now completed.

Installing RAN Vision (Optional)


RAN Vision is an optional component in the NWV configuration.

Prerequisites
IMPORTANT: Make sure you have completed the prerequisites for a first-
time E-XMS server installation in Chapter 1, Overview of the Installation
and Upgrade Process.

Start the RAN Vision Software Installation


1. The RAN Vision CSMS component must be installed on an NWV sys-
tem.
2. The RAN Vision CSMS component requires a third party application,
Custadia. Download the Custadia rpm from the SourceForge website:
http://sourceforge.net/projects/custadia
Install the Custadia RPM:
rpm --install custadia-<version>.x86_64.rpm
3. After installing the OS (SLES or Red Hat) and adding the E-XMS YUM
repository, start the RAN Vision CSMS installation:
SLES
# zypper install ranvision-csms
Red Hat
# yum install ranvision-csms
NOTE: All dependencies and conflicts are automatically checked
during the installation and the required packages are downloaded and
installed. See “Common Errors and Resolutions,” page C-1 to resolve
any errors.


4. Once installed, run the following configuration script. It requires no input, as it re-uses configuration from the existing E-XMS components on the system; however, it will take several minutes to complete:
/home/hammer/hmcommon/bin/setup-ranvision-csms.sh
Installing the RAN Vision LSS-KPIGen Component
1. LSS-KPIGen is installed on a standalone server, and is installed in the
same way as other systems. The KPIGen license file is required in the
/home/hammer/hmcommon directory.
2. After installing the OS (SLES or Red Hat) and adding the E-XMS YUM
repository, start the RAN Vision LSS-KPIGen installation:
SLES
# zypper install ranvision-lsskpigen
Red Hat
# yum install ranvision-lsskpigen
NOTE: All dependencies and conflicts are automatically checked
during the installation and the required packages are downloaded and
installed. See “Common Errors and Resolutions,” page C-1 to resolve
any errors.
3. Once installed, run the following configuration script:

/home/hammer/hmcommon/bin/setup-lsskpigen.sh
4. The interactive configuration script prompts you to configure the fol-
lowing parameters:
a. SMM System IP - the IP address of the SMM server which is used
to provide configuration
b. KPIGen System ID - a numeric identifier unique to this KPIGen in
the system. This must match the KPIGen System ID set in the
DMS Administration console for this KPIGen system.


5. The interactive configuration script prompts you to configure the fol-


lowing NTP server parameters.
a. NTP Enabled—NTP must be enabled on all servers; set to 1

b. NTP Primary Address—the IP address of the primary NTP server

c. NTP Secondary Address—the IP address of the secondary NTP


server


6. After NTP is configured, the script will complete the setup of the LSS-
KPIGen. The license.xml file must be available in /home/hammer/
hmcommon.
Installing RAN Vision LSS-RANMon on an MSP Probe
1. LSS-RANMon is installed on an E-XMS Probe. The exms-probe soft-
ware package is required by LSS-RANMon and must be installed prior
to installation of LSS-RANMon.
2. After installing the OS (SLES or Red Hat) and adding the E-XMS YUM
repository, start the RAN Vision LSS-RANMon installation:
SLES
# zypper install ranvision-lssranmon
Red Hat
# yum install ranvision-lssranmon
NOTE: All dependencies and conflicts are automatically checked
during the installation and the required packages are downloaded and
installed. See “Common Errors and Resolutions,” page C-1 to resolve
any errors.
3. RANMon makes use of existing configuration settings from the E-XMS
probe software and therefore requires no additional setup.
Installing RAN Vision TraceReader on an MSP Probe


1. TraceReader is installed on an E-XMS Probe. The exms-probe and


ranvision-lssranmon software packages are required by TraceReader
and must be installed prior to installation of TraceReader.
2. After installing the OS (SLES or Red Hat) and adding the E-XMS YUM
repository, start the RAN Vision TraceReader installation:
SLES
# zypper install ranvision-tracereader
Red Hat
# yum install ranvision-tracereader
NOTE: All dependencies and conflicts are automatically checked
during the installation and the required packages are downloaded and
installed. See “Common Errors and Resolutions,” page C-1 to resolve
any errors.
3. TraceReader requires manual configuration of the traceport sources,
which is beyond the scope of this document. Please refer to the RAN
Vision TraceReader Configuration Manual to enable traceport feeds
from the network equipment.

Complete the RAN Vision Software Installation


To complete the RAN Vision installation, ensure the license file is available, then restart the processes.
1. Copy the license.xml file to the following location:

/home/hammer/hmcommon/license.xml
2. Restart all processes in the following order:

a. hmmonitor

b. Tomcat7

c. TomEE

Failure to restart processes in this order may cause licensed features to not work properly.
The RAN Vision installation is completed.


Installing HA-ROS Application Nodes


HA-ROS application nodes must be configured with a NWV server.
IMPORTANT: When using the HTML5 user interface Diagnostics feature
for voice protocols in E-XMS release 5.7.1, you must install and configure
the SMM on the NWV, even if it is not a DMS system.

Start the HA-ROS Installation


Log in to the HA-ROS application node and install the RPM:
SLES
# zypper install exms-ros
Red Hat
# yum install exms-ros

Run the HA-ROS Application Node Configuration Script


The HA-ROS configuration script tests the hardware characteristics and
validates the configuration.
1. Run the following configuration script on the HA-ROS:

NOTE: All dependencies and conflicts are automatically checked


during the installation and the required packages are downloaded and
installed. See “Common Errors and Resolutions,” page C-1 to resolve
any errors.
2. The HA-ROS installation requires you to run the following configura-
tion script:
# /home/hammer/hmcommon/bin/setup-ros.sh
If the system does not meet the minimum requirements the following
warning displays. Please check with Empirix technical support before
continuing.
3. Select Yes to proceed with the setup or select No for details about the
invalid configuration.
IMPORTANT: If you ignore the minimum requirements warning, the sys-
tem likely will not perform acceptably and Empirix cannot stand behind
the performance of the system. Please contact Empirix Technical Sup-
port for help to resolve this important warning.


If you selected Yes, the HA-ROS Configuration displays.


4. Configure the HA-ROS Application Node parameters.

Enter the following HA-ROS parameters:


 ROS ID—a unique numeric id used to access the Region
 ROS Address—the virtual IP address of the first ROS used to
access the Region
 NWV Address—the IP address of the NWV system
 Vertica Addresses—a comma separated list of IP addresses used
to connect to the ROS DB Cluster.
 XPA Enabled—to enable XPA on the first HA-ROS Application
Node, set to 1
 Domain Name—the fully qualified domain name of the first HA-
ROS Application Node
 System Name—the name of the first HA-ROS Application Node
 Number of Probes—the number of probes to configure during the
setup phase. If left to 0, the setup script will not prompt for probe
configuration.
 SSL enabled for web server—to enable SSL support, set to 1


 Enable HA—to enable HA support on the first HA-ROS Application


Node, set to 1. When HA is set to 1 (enabled), the HA UCARP con-
figuration displays
5. The interactive configuration script prompts you to configure the HA
UCARP parameters.

Enter the following HA UCARP parameters:


 ROS Address—the physical IP address of the first HA-ROS Appli-
cation Node, which is this node.
 ROS Other Address—the physical IP address of the second (peer)
HA-ROS Application Node.
 ROS Hammer Password—the password for the Hammer user on
both the first and second HA-ROS Application Nodes.
 ROS Root Password—the password for the root user on both the
first and second HA-ROS Application Nodes.
 UCARP VHID—the numeric UCARP id that must be unique in your
local network and must be the same on the first HA-ROS Applica-
tion Node and the second HA-ROS Application Node.
 UCARP Interface—the Ethernet interface used for the first HA-
ROS Application Node.
 UCARP Password—the password shared by the first and second
HA-ROS Application Nodes.
 Enable MYSQL Replication—set to 1 to enable MYSQL replication.
IMPORTANT: The first HA-ROS Application Node (this HA-ROS Applica-
tion Node) must have “Enable MYSQL Replication” set to 0. The sec-
ond HA-ROS Application Node (installed below) must have “Enable
MYSQL Replication” set to 1.


Complete the HA-ROS Server Installation


To complete the HA-ROS server installation on both the active and
standby HA-ROS servers, ensure the license file is available, then restart
the processes.
1. Copy the license.xml file to the following location:

/home/hammer/hmcommon/license.xml
2. Restart all processes in the following order:

a. hmmonitor

b. Tomcat7

c. TomEE

Failure to restart processes in this order may cause licensed features to not work properly.
The HA-ROS active server installation is completed. Now install the next
component.

Install the Next Component


If you have a Standby HA-ROS, continue to “Start the Standby HA-ROS
Installation,” page 2-34
If you do not have a Standby HA-ROS in your configuration, continue to
“Installing the SMM Component,” page 2-14.

Start the Standby HA-ROS Installation


Log in to the Standby HA-ROS application node and install the RPM:
SLES
# zypper install exms-ros
Red Hat
# yum install exms-ros
NOTE: All dependencies and conflicts are automatically checked during
the installation and the required packages are downloaded and installed.
See “Common Errors and Resolutions,” page C-1 to resolve any errors.

Run the HA-ROS Application Node Configuration Script


The HA-ROS configuration script tests the hardware characteristics and
validates the configuration.
1. Run the following configuration script on the secondary HA-ROS:

# /home/hammer/hmcommon/bin/setup-ros.sh


If the system does not meet the minimum requirements the following
warning displays.
2. Select Yes to proceed with the setup or select No for details about the
invalid configuration.
IMPORTANT: If you ignore the minimum requirements warning, the sys-
tem likely will not perform acceptably and Empirix cannot stand behind
the performance of the system. Please contact Empirix Technical Sup-
port for help to resolve this important warning.

If you selected Yes, the ROS Configuration displays.


3. Configure the secondary ROS parameters.

Enter the following ROS parameters:


 ROS ID—the ROS ID of the first HA-ROS Application Node
 ROS Address—the virtual IP address used to access the Region
(the ROS address of the first HA-ROS Application Node)
 NWV Address—the IP address of the NWV system
 Vertica Addresses—a comma separated list of IP addresses used
to connect to the ROS DB Cluster.
 XPA Enabled—to enable XPA on the second HA-ROS Application
Node, set to 1


 Domain Name—the fully qualified domain name of the second HA-


ROS Application Node
 System Name—the name of the second HA-ROS Application Node
 Number of Probes—the number of probes connected to the second HA-ROS Application Node. This number must match the number of probes connected on the first HA-ROS Application Node.
 SSL enabled for web server—to enable SSL support, set to 1.
 Enable HA—to enable HA support, set to 1. When HA is set to 1
(enabled), the HA UCARP configuration displays.
4. The interactive configuration script prompts you to configure the HA
UCARP parameters.

Enter the following HA UCARP parameters:


 ROS Address—the physical IP address of the first HA-ROS Appli-
cation Node.
 ROS Other Address—the physical IP address of the second HA-
ROS Application Node.
 ROS Hammer Password—the password for the Hammer user on
both the first and second HA-ROS Application Nodes.
 ROS Root Password—the password for the root user on both the
first and second HA-ROS Application Nodes.
 UCARP VHID—the numeric UCARP id that must be unique in your
local network and must be the same on the first HA-ROS Applica-
tion Node and the second HA-ROS Application Node.
 UCARP Interface—the Ethernet interface used for the second HA-
ROS Application Node


 UCARP Password—the password shared by the first and second


HA-ROS Application Node. This must be the same password
assigned to the first HA-ROS Application Node.
 Enable MYSQL Replication—enables MYSQL replication
IMPORTANT: The first HA-ROS Application Node (installed first) must
be set to 0. The secondary HA-ROS Application Node (this HA-ROS
Application Node) must be set to 1.
Both the active and standby HA-ROS nodes must be installed and running
for MySQL replication to function.
5. Check UCARP status on both HA-ROS nodes by issuing the following
command:
/etc/init.d/ucarpd status
6. The first HA-ROS node should display Active and the second HA-ROS
node should be Standby. If this is not true, execute the following com-
mands on the HA-ROS node that you want to be the standby node:
/home/hammer/hmcommon/etc/vip-down.sh
7. Confirm UCARP status on both HA-ROS nodes by issuing the follow-
ing command:
/etc/init.d/ucarpd status
One server should return "Active" while the other should return "Standby". It may take a few moments for UCARP arbitration to process and for one server to become Active.
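As an additional check, you can confirm which node currently holds the virtual IP with a standard command (the interface name and address below are placeholders; use the UCARP interface and virtual IP configured above):
# ip addr show eth0 | grep <virtual-IP-address>
The virtual IP should appear only on the node that reports Active.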

Complete the HA-ROS Server Installation


To complete the HA-ROS server installation on both the primary and sec-
ondary HA-ROS servers, ensure the license file is available, then restart
the processes.
1. Copy the license.xml file to the following location:

/home/hammer/hmcommon/license.xml
2. Restart all processes on the active HA-ROS (not the standby HA-
ROS) in the following order:
a. hmmonitor

b. Tomcat7

c. TomEE

Failure to restart processes in this order may cause licensed features to not work properly.
The HA-ROS server installation is completed. Now install the next compo-
nent.


Install the Next Component


Continue to the section “Installing a DMS RDB or ROS DB Cluster,”
page 2-16.

Installing High Availability MSP 6000 Probes for the Voice Engine
High availability functionality for redundant MSP 6000 probes is only sup-
ported on the voice engine. This procedure provides the steps to install
and configure high availability redundant MSP 6000 probes.
NOTE: This procedure can also be used for MSP 1500 probes.

Prerequisites
The following hardware is required:
 2 MSP 6000 Probes
 1 ROS; 1 Virtual IP address

Start the HA MSP 6000 Probe Installation


Log in to both HA MSP 6000 probes and install the RPM:
SLES
# zypper install exms-probe
Red Hat
# yum install exms-probe

Configure the HA MSP 6000 Probes


1. Run the following configuration script on both HA MSP 6000 probes.

# /home/hammer/hmcommon/bin/setup-probe.sh
IMPORTANT: Both HA MSP 6000 probes must have the same probe ID.

2. Stop the hammer hmmonitor services:

# /etc/init.d/hmmonitor stop
3. Make sure the following files are present on both MSP 6000 probes in
the directory /home/hammer/hmcommon/etc:
 vip-post-down.sh
 vip-post-up.sh
 vip-pre-down.sh
 vip-pre-up.sh


4. Configure the UCARP parameters on both MSP 6000 probes. Choose


one MSP 6000 probe to be the source probe and the other to be the
peer probe.
a. Create the UCARP configuration file:

# /home/hammer/hmcommon/etc/ucarp-config
b. Configure the following parameters:

hm_ha_interface:eth0
hm_ha_srcip:X.X.X.X
hm_ha_peer_ip:Y.Y.Y.Y
hm_ha_virtualip:Z.Z.Z.Z
hm_ha_vhid:NNN
hm_ha_password:secret
Where:
X.X.X.X: local ip address of eth0
Y.Y.Y.Y: ip address of the other probe in the pair
Z.Z.Z.Z: virtual ip shared between the 2 probes
(this is the address to be used in the ROS)
NNN: virtual id, it must be unique in the network (typically, this is the probeID). A filled-in example of this file is shown after this procedure.
5. Set up a pre-shared key between both MSP 6000 probes:

cd /home/hammer/hmcommon/bin
./single_ssh_key.sh hammer Y.Y.Y.Y
6. Remove hmmonitor services:

insserv -r hmmonitor
NOTE: The command to remove hmmonitor services is not executed
on RHEL 6.8.
7. Enable UCARP services to manage requests for the virtual ip:

chkconfig ucarpd on
8. Start UCARP services on both MSP 6000 probes:

service ucarpd start


9. Start the hammer hmmonitor services on both MSP 6000 probes:

# /etc/init.d/hmmonitor start
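For illustration only, a filled-in ucarp-config example using placeholder values (your interface, addresses, VHID, and password will differ):
hm_ha_interface:eth0
hm_ha_srcip:192.168.10.11
hm_ha_peer_ip:192.168.10.12
hm_ha_virtualip:192.168.10.20
hm_ha_vhid:101
hm_ha_password:secret
In this example, 192.168.10.20 is the virtual address that would be configured as the probe address in the ROS.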


Installing a ROS Server


The ROS server is used for voice-only and DMS configurations.
Special Consideration
When using the HTML5 user interface Diagnostics feature for voice proto-
cols in E-XMS release 5.7.1 on a Standalone ROS, you must install and
configure the SMM on the ROS, even if it is not a DMS system. For
more information refer to “Configuration for Using HTML5 Diagnostix
When No DMS is Present,” page B-1

Start the ROS Server Installation


Log in to the ROS server and install the RPM:
SLES
# zypper install exms-ros
Red Hat
# yum install exms-ros

Run the ROS Server Configuration Script


The ROS configuration script tests the hardware characteristics and vali-
dates the configuration.
1. Run the following configuration script on the ROS:

NOTE: All dependencies and conflicts are automatically checked during


the installation and the required packages are downloaded and installed.
See “Common Errors and Resolutions,” page C-1 to resolve any errors.
2. The ROS installation requires you to run the following configuration
script:
# /home/hammer/hmcommon/bin/setup-ros.sh
If the system does not meet the minimum requirements the following
warning displays. Please check with Empirix technical support before
continuing.
3. Select Yes to proceed with the setup or select No for details about the
invalid configuration.
IMPORTANT: If you ignore the minimum requirements warning, the sys-
tem likely will not perform acceptably and Empirix cannot stand behind
the performance of the system. Please contact Empirix Technical Sup-
port for help to resolve this important warning.


If you selected Yes, the ROS configuration displays.


4. Configure the ROS parameters.

Enter the following ROS parameters:


 ROS ID—a unique identifier for the ROS (also known as the
region_id, unique for all ROSs under the NWV).
 ROS Address—the IP address of the ROS system’s webserver;
used by the NWV to access the system
 NWV Address—the IP address of the NWV system; leave empty if
the system is a Standalone ROS
 Vertica Addresses—a comma separated list of IP addresses used
to connect to the ROS DB Cluster. If there are no IP addresses
listed, ROS is a standalone system.
 XPA Enabled—to enable XPA on the ROS, set to 1 when any
MSPs under this ROS have XPA licensed.
 Domain Name—the fully qualified domain name of the ROS
 System Name—the name of the Region


 Number of Probes—the number of probes to configure during the


setup phase. If left to 0, the setup script will not prompt for probe
configuration.
 SSL enabled for web server—to enable SSL support, set to 1
5. Configure NTP.

Enter the following NTP parameters:


 NTP Enabled—NTP must be enabled on all servers; set to 1
 NTP Primary Address—the IP address of the primary NTP server
 NTP Secondary Address—the IP address of the secondary NTP
server
6. Configure the Number of Probes specified in the ROS configuration.

The interactive configuration script prompts you to configure the num-


ber of probes previously specified.


Configure all MSP probes for this ROS:


 Probe ID of Remote System—a unique identifier for the probe (must be greater than 1). If there is a DMS, this value must be unique across all other MSPs; otherwise, it must be unique within the region.
 IP Address of Remote System—the IP address for the probe
 Name of Remote System—the name of the probe
 Probe Type MSP 5000—if MSP 5000, set to 1; otherwise set to 0
 MSP 5000 runs a Data Engine—to enable the MSP 5000 to run the
data engine on one of the blades, set to 1. Generally, if the probe is
monitoring RTP then set this value to 1.
 Probe runs MRP—to enable the Media Report Component, set to 1
After all probes are configured, the script will proceed with setting up the
ROS. The license.xml file must be available in /home/hammer/hmcom-
mon.
IMPORTANT: If DMS components have been installed and this is a Stand-
alone ROS, remember to install SMM; see “Installing the SMM Compo-
nent,” page 2-14.

Configuring Voice Protocols Using HTML5 DiagnostiX


For information on configuring voice protocols using HTML5 DiagnostiX
user interface, see Chapter 7, Configuration for Using HTML5 Diagnos-
tiX When No DMS is Present.

Complete the ROS Server Installation


To complete the ROS server installation, ensure the license file is available, then restart the processes.


1. Copy the license.xml file to the following location:

/home/hammer/hmcommon/license.xml
2. Restart all processes in the following order:

a. hmmonitor

b. Tomcat7

c. TomEE

NOTE: Tomcat7 is not used for the ROS when under a NWV server
but must be running.
Failure to restart processes in this order may cause licensed features to not work properly.
The ROS server installation is completed. Now install the next component.

Install the Next Component


Continue to “Installing a MSP Probe,” page 2-22

CHAPTER 3 Upgrading Vertica on the ROS
and DB Nodes

This chapter provides instructions to upgrade:


 From Vertica 6.1 to Vertica 7.2.3 — on ROS (not HA-ROS) and DB
nodes
 From Vertica 7.x to 7.2.3 — on HA-ROS, RDB, and ROS DB Cluster
This chapter also provides instructions for backing up and restoring the
Vertica, MySQL, and other E-XMS configuration data on the ROS and
NWV as part of the process of reimaging those servers to SLES 11.3.
These upgrade procedures must be followed in the order presented,
and must be successfully completed before upgrading the ROS
server and the HA-ROS server to E-XMS 5.7.1.

Prerequisite Steps
1. Make sure all the Vertica nodes are up and running.

2. Restart the database to make sure it is in a healthy state.

3. Check the size of the catalog with "du -h /data/spinning/catalog". After the upgrade the catalog will be four times larger. Make sure you have enough disk space on all the nodes (a quick check is sketched after this list).
4. Shut down the cluster database and make sure all nodes are down by
executing as dbadmin user:
/opt/vertica/bin/admintools -t stop_db -d rdb -p ipx-
one

5. Configure NFS for Backup/Restore (see “Backing Up and Restoring


the Vertica Database,” page 3-2 below.)
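A quick way to compare the catalog size against the available space, assuming the catalog path shown in step 3 and that the catalog resides on the /data filesystem:
# du -sh /data/spinning/catalog
# df -h /data/spinning
The available space reported by df should comfortably exceed four times the catalog size reported by du.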




Backing Up and Restoring the Vertica Database

Backing Up the Vertica Database (Full)


The full Vertica Database backup will be performed first while the region is
still up and running.
NOTE: Because the range of data to be backed up is selected at the
moment that the backup script is run and the region will still be up and run-
ning during the full backup, a subsequent incremental backup will need to
be performed while the ROS processes are down to backup any addi-
tional data outside of this selected range. It is recommended to perform
the full backup no more than 1 day in advance of the actual upgrade.
1. Make sure a clean directory is available for the migration utilities'
download and extraction and change to that directory.
# rm -rf /home/hammer/share/migration-utilities

# mkdir /home/hammer/share/migration-utilities

# cd /home/hammer/share/migration-utilities

2. Download the following files into /home/hammer/share/migration-utili-


ties:
 migration_utilities_<datetime_hour>_pm.tar.gz
(for example, v20170124_10pm)
 nfs-kernel-server-1.2.3-18.38.43.1.x86_64.rpm
NOTE: The migration utilities tarball is available for download from
https://ftp.empirix.com/XMS/XMS-Software/customer-migration-sles-
exms/
3. Optionally create the file listed below and populate it with additional
directories and files that need to be backed up. Refer to “Backing Up
User-Defined Directories and Files,” page 3-3 for details on the file for-
mat. Refer to “Backing Up and Restoring the ROS Configuration and
MySQL,” page 3-4 for a list of directories and files that will automati-
cally be backed up by the configuration backup script.
/home/hammer/share/migration-utilities/config_<ROS-IP-address>
4. Extract the migration utilities, which include the backup, restore, and
migration scripts.
# tar xzvf migration_utilities_v20170124_10pm.tar.gz

5. The NFS server should have been previously set up. Now configure
the NFS client on each NWV server. See “Configuring NFS for
Backup/Restore,” page 3-6.
6. Purge Vertica data older than <num-days-of-data-to-retain> days old
and drop excess projections before backing up the Vertica Database.


For guidelines on selecting an appropriate value for <num-days-of-


data-to-retain>, see “Planning Your Upgrade,” page 1-4.
# ./xmsPurgeVertica.sh <num-days-of-data-to-retain>

# ./xmsDropProjections.sh

7. Start the full backup of the Vertica Database and let it run in the back-
ground until it completes; this could take from 5 to 22 hours, depend-
ing on the amount of data, see “Planning Your Upgrade,” page 1-4.
# ./backup_vert.sh 1

8. The progress of the Vertica Database backup can be monitored from


another SSH session by checking whether there are any active ses-
sions running.
If any of the sessions listed include .gz files, the Vertica backup is still
in progress.
# /opt/vertica/bin/vsql -U vert_admin -w 17ac29a3a8 -c
"select current_statement from sessions;"

While the backup is running, the listed statements reference .gz files under the backup directory; the output is similar to the example shown in “Restoring the Vertica Database (Full)” below.
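Another rough progress indicator is to watch the backup files accumulate under the mounted backup directory (the path below assumes the NFS client mount point used in this chapter; the exact subdirectory layout may vary):
# ls -lh /work/exms-db-backups/vertica/gz
# df -H /work/exms-db-backups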

Backing Up User-Defined Directories and Files


Additional directories and files can be specified in a config file that is
passed to the backup_mystuff.sh script. For example, if you create the
config file config_mystuff with the following content:
/var/spool/cron/tabs

/home/hammer/pso

Pass the name of the config file to the backup script:


./backup_mystuff.sh config_mystuff

Each file or directory to be backed up must appear on its own line in the
config file. The absolute path must be used for each entry (i.e. start with a
'/').
The config file will be automatically backed up.
When the script "restore_mystuff.sh" is called, the config file (if one is
specified) will be restored along with all of the directories and files that
were previously backed up.

Restoring the Vertica Database (Full)


The restore of the Vertica Database will take 6.5 to 23.5 hours to com-
plete, depending on the size of the Database that was backed up and the


speed of the network. To determine how much time the restore should take, see “Planning Your Upgrade,” page 1-4. The region will be fully
operational (except for access to all historical data, which will be gradually
restored over time) during the restore.
1. Start the restore process for the Vertica Database and let it run in the
background until completion. This will restore the full and incremental
backups that were previously performed.
# cd /home/hammer/share/migration-utilities

# ./restore_vert.sh

2. The progress of the Vertica Database restore can be monitored from


another SSH session by checking whether there are any active ses-
sions running.
If any of the sessions listed include .gz files, the Vertica backup is still
in progress. In the example, the restore is still in progress.
# /opt/vertica/bin/vsql -U vert_admin -w 17ac29a3a8 -c
"select current_statement from sessions;"

current_statement

------------------------------------------------------
------------------------------------------------------
------------------------------------------------------
---------------

COPY hammer_monitor_vert.misc_protocol FROM LOCAL '/


work/exms-db-backups/vertica/gz/hammer_monitor_vert/
02/hammer_monitor_vert.misc_protocol.gz' GZIP DELIM-
ITER E'\033' direct;

select current_statement from sessions;

(10 rows)

Backing Up and Restoring the ROS Configuration and MySQL

Backing Up the ROS Configuration and MySQL


An incremental backup must be performed to cover additional data that
has been stored in the Vertica Database since the full backup was started.
The ROS configuration and MySQL need to be backed up after the incre-
mental backup to make sure the Vertica Database data is consistent with
the MySQL data.


1. Stop E-XMS processes on the ROS.

# service hmmonitor stop

2. Run the incremental backup of the Vertica Database until it completes (see “Vertica Backup and Restore Time Estimates,” page 1-7 to determine how much time the incremental backup should take).
# ./backup_vert.sh 2

3. The progress of the Vertica Database backup can be monitored from


another SSH session by checking whether there are any active ses-
sions running.
If any of the sessions listed include .gz files, the Vertica backup is still
in progress.
# /opt/vertica/bin/vsql -U vert_admin -w 17ac29a3a8 -c
"select current_statement from sessions;"

4. Back up the E-XMS configuration files and MySQL tables.

# ./backup_conf_mysql.sh

5. Back up additional directories/files defined in config_<ROS-IP-


address>. If this file was not created/copied during step 1 (all data to
be backed up has already been backed up in the previous step), then
this step can be skipped.
# ./backup_mystuff.sh config_<ROS-IP-address>

There is a template file, config_mystuff_template.txt, in that folder; it lists important network configuration files and locations. It is recommended that you manually record the critical network configuration information for the network interfaces so that it can be used during the YaST/YaST2 setup after a fresh SLES 11.3 installation to reconnect the ROS/NWV to the network. The information includes:
a. IP address, netmask and MAC address of the Ethernet cards, e.g.
at /etc/sysconfig/network/ifcfg-*
b. DNS name at /etc/resolv.conf

c. Default Gateway at /etc/sysconfig/network/routes

d. Domain name at /etc/hosts

e. NTP server at /etc/ntp.conf

f. /etc/sysconfig/network/routes


It is recommended that you back up those files manually but not restore them directly, because SLES 11.3 differs from SLES 11.0. Use YaST/YaST2 to set up the network instead.

Restoring the ROS Configuration and MySQL


1. Restore the E-XMS configuration files and MySQL after making sure
that MySQL is up and running.
# cd /home/hammer/share/migration-utilities

# service mysql start

Engine DB server started.

Full DB server started.

# ./restore_conf_mysql.sh

2. Restore additional directories/files that were defined in the file


config_<ROS-IP-address>. If this file was not created in “Backing Up
the ROS Configuration and MySQL,” page 3-4 (all data to be restored
has already been restored in the previous step), then skip this step.
# ./restore_mystuff.sh

Configuring NFS for Backup/Restore


The following instructions assume that NFS-mounted directories are the
only viable method for backup/restore.

NFS Server (SLES 11.3/SLES 11.0)


The instructions in this section assume that the NFS server is running
SLES 11.3/SLES 11.0; an NFS server can be setup on other operating
systems, but the instructions will vary. There must not be a firewall
between the NFS server configured in this section and the server config-
ured as the NFS client (where the backup will be performed).

Installing and Configuring a Server


Due to the destructive nature of the SLES 11.3 installation procedure for a
server (i.e. the hard drives will be re-formatted and all existing data will be
lost), all data that is to be preserved must be backed up to a remote server
via an NFS-mounted directory; data may also be backed up even if the
SLES 11.3 installation procedure will not be performed on the server.
A server on the network with sufficient storage to contain the backups
must be chosen and an NFS server must be configured. When choosing a
server to host the backup storage, consider the following:


 Throughput available between the server holding the backup data and
the server that will be backed up/restored.
 Whether there is a firewall in between the servers.
 Whether the free backup storage is likely to decrease due to other
data being written.
 Whether there's already a lot of disk activity on the server that will be
holding the backup storage.
 Only one backup operation is recommended for each server hosting
the backup storage. In theory, multiple backups/restores can be per-
formed to/from the server hosting the backup storage, but keep in
mind that if insufficient storage has been set aside, this could cause
one or more backups to fail. Another consideration in this use case is
whether bandwidth requirements from multiple backups could slow
down the backup operations.
1. Check whether the NFS server package has already been installed. If the command returns a value, then jump to step 3. Otherwise, proceed with step 2.
# rpm -qa | grep -i nfs-kernel-server

nfs-kernel-server-1.2.3-18.38.43.1

For SuSE 11.0 machine:


# rpm -qa | grep -i nfs-kernel-server

nfs-kernel-server-1.1.3-18.17

2. Install the NFS server package, which is available under the RPM sub-
directory where the migration utilities were extracted.
For a SuSE 11.3 machine:
# cd /home/hammer/share/migration-utilities

# tar xzvf migration_utilities_v20161102_4pm.tar.gz

# rpm -ivh RPM/nfs-kernel-server-1.2.3-


18.38.43.1.x86_64.rpm

For SuSE 11.0 machine


# rpm -ivh RPM/nfs-kernel-server-1.1.3-
18.17.x86_64.rpm

3. Create a base directory for the backups. This will be exported via NFS
so that it can be remotely mounted. It MUST be world-readable so that
multiple user accounts can write to/read from the directory.
# mkdir -p /home/hammer/NFSCloud

# chmod -R 777 /home/hammer/NFSCloud


# ls -ld /home/hammer/NFSCloud

drwxrwxrwx 2 root root 4096 Nov 7 10:24 /home/hammer/


NFSCloud

4. Configure the NFS server using YAST.

# yast nfs_server add mountpoint=/home/hammer/NFSCloud


hosts=* options="fsid=0,rw,no_root_squash,sync,no_sub-
tree_check" verbose

Ready

Initializing

Finishing

# yast nfs_server start

# yast nfs_server summary

NFS server is enabled

NFS Exports

* /home/hammer/NFSCloud

NFSv4 support is enabled.

The NFSv4 domain for idmapping is localdomain.

NFS Security using GSS is disabled.

5. Check that the NFS server is running.
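For example (the nfsserver service name is the SLES default; adjust for other distributions):
# service nfsserver status
# showmount -e localhost
The exported /home/hammer/NFSCloud directory should appear in the showmount output.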

Configuring a New Subdirectory for the Server Backup


For each server that is to be backed up, a new sub-directory should be
created with the IP address of the server. In the following instructions/out-
put, replace <NFS-client-IP-address> with the actual IP address of the
server where the backup/restore will be performed (e.g. 192.168.0.3).
# mkdir -p /home/hammer/NFSCloud/<nfs-client-IP-
address>

# chmod -R 777 /home/hammer/NFSCloud/<nfs-client-IP-


address>


# ls -ld /home/hammer/NFSCloud/<nfs-client-IP-address>

drwxrwxrwx 2 root root 4096 Nov 7 10:24 /home/hammer/


NFSCloud/<nfs-client-IP-address>

Stopping the Server and Removing the Exported Directory


Once all backup and restore activity has been completed and the backed
up data is no longer needed, remove the exported directory from the NFS
server, stop the server, and delete the directory.
# yast nfs_server delete mountpoint=/home/hammer/NFS-
Cloud verbose

# yast nfs_server stop

# yast nfs_server summary

NFS server is disabled

NFS Exports

Not configured yet.

NFSv4 support is enabled.

The NFSv4 domain for idmapping is localdomain.

NFS Security using GSS is disabled.

# rm -rf /home/hammer/NFSCloud

NFS Client
The server that will be installed with SLES 11.3 will need to mount the
exported remote directory:
 Before the SLES 11.3 installation procedure to backup data.
 After the SLES 11.3 installation procedure and after the re-installation
of the original E-XMS release to restore data.

Mounting Remote Directory for Backup and Restore


1. Create local mount directory and make sure it is world-readable so that
multiple user accounts can write to/read from the directory.
# mkdir -p /work/exms-db-backups

# chmod 777 /work/exms-db-backups

# ls -ld /work/exms-db-backups

drwxrwxrwx 2 root root 4096 Nov 7 10:24 /work/exms-db-


backups


2. Mount the appropriate remote sub-directory on the NFS server to a


local directory so that data can be backed up and restored. In the fol-
lowing instructions, replace <NFS-server-IP-address> with the IP address of the remote server that was configured in “Installing and Configuring a Server,” page 3-6, and <NFS-client-IP-address> with the IP address of the server where the backup/restore will be performed; make sure that the steps in “Configuring a New Subdirectory for the Server Backup,” page 3-8 have also been performed.
# mount -o rw,hard,intr,tcp <nfs-server-IP-address>:/
home/hammer/NFSCloud/<nfs-client-IP-address> /work/
exms-db-backups

3. Check whether the remote directory is mounted on the local server. In


the following example, the NFS server's IP address is 192.168.0.1 and
the NFS client's (local server's) IP address is 192.168.0.3.
# df -H /work/exms-db-backups

Filesystem                                                            Size Used Avail Use% Mounted on
<nfs-server-IP-address>:/home/hammer/NFSCloud/<nfs-client-IP-address> 1T   0    1T    0%   /work/exms-db-backups

4. Check whether vert_admin user (if it exists) can write to the directory.

# su vert_admin

> touch /work/exms-db-backups/myfile1

> rm /work/exms-db-backups/myfile1

> exit

If you get a "Permission denied" error, you need to make sure the direc-
tory is world readable by running as root:
# chmod -R 777 /work/exms-db-backups

If you get the error "su: user vert_admin does not exist", then you can
safely ignore the error and continue, as this server (usually an NWV) does
not have this user defined.
5. Check whether the mysql user can write to the directory.

# su mysql

> touch /work/exms-db-backups/myfile2

> rm /work/exms-db-backups/myfile2

> exit


Cleaning Up the Backup Directory


The following instructions explain how to delete all previously backed up
data including some temporary data that may have been generated during
a previous backup. This assumes that the /work/exms-db-backups has
already been configured/mounted properly, as described in “Mounting
Remote Directory for Backup and Restore,” page 3-9.
From the migration utilities directory on the local server, run the following
commands:
# cd /home/hammer/share/migration-utilities

# ./backup_vert_clear.sh

# rm -rf /work/exms-db-backups/*

Unmounting the Remote Directory


1. Unmount the remote directory from the NFS client configuration and confirm that the directory is no longer mounted.
# umount /work/exms-db-backups

# mount | grep NFSCloud

2. If the mount is still showing up, do a lazy, forceful unmount.

# mount | grep NFSCloud

<nfs-server-IP-address>:/home/hammer/NFSCloud/<nfs-client-IP-
address> on /work/exms-db-backups type nfs
(rw,hard,intr,tcp,addr=<nfs-server-IP-address>)
# umount -lf /work/exms-db-backups

Upgrading Vertica Database 6.1, 7.0 or 7.1 to 7.2


Perform the following steps to upgrade the Vertica database from 6.1, 7.0, or 7.1 to 7.2.3 before upgrading to E-XMS 5.7.1.
If there are script errors during the upgrade process, see “Vertica Upgrade
Script Errors 6.1.3 to 7.2.3,” page C-5.
The following Vertica upgrade scripts are available for a ROS and DB
cluster node:
 vertica_upgrade.sh
 vertica_cluster_upgrade.sh
The default location used for the rpms is:


/home/hammer/hxms/x86_64

You can also put the rpms in an alternate location and when prompted by
the script for missing rpms, type this alternate location.
NOTE: Empirix recommends that you back up your Vertica database before upgrading.
1. Ensure the dialog rpm is installed. If not, install the rpm from the thirdparty-sles-repo-5.7.1 repository:
zypper install dialog

NOTE: For information on installing the dialog rpm, see “Create a


Third-Party Software Repository,” page 1-10
2. Unzip the migration utilities tar file and copy the Vertica RPMs located
in the ‘RPM’ folder.
3. Execute the vertica_upgrade.sh or vertica_cluster_upgrade.sh script.

NOTE: The Vertica script must be run on every ROS. In a DB cluster


the Vertica script must be run on only one of the nodes.
4. If starting from 6.1, enter the local server’s IP address when prompted.
This should only occur when Vertica is running on the ROS.

Post-Migration Configuration
If the ROS is under the NWV, log in to the NWV UI to check the Region ID value, and use that value to change the region_id value in the /home/hammer/hmcommon/common-config file on the ROS server. Restart the hammer services on the ROS.
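A quick way to view the current value before editing, assuming common-config is a plain-text file containing a region_id entry as described above:
# grep -i region_id /home/hammer/hmcommon/common-config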
For a Standalone ROS, it is recommended that you verify that the DB record matches the region_id value in the common-config file by typing the following command on the ROS:
# mysql -uroot -A xms_national -e "select region_id,
url from region;"

If the region_id is not the same, change it in the common-config file and
restart the hammer services on the ROS.

Post-installation Check
Make sure the data is correctly loaded in the database and that the ORI
interface can extract the information.
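As a minimal spot check, assuming the same vsql credentials used earlier in this chapter and using one of the tables shown in the restore example output, you can confirm that rows are present:
# /opt/vertica/bin/vsql -U vert_admin -w 17ac29a3a8 -c "select count(*) from hammer_monitor_vert.misc_protocol;"
A non-zero count indicates that data has been loaded; your table names and credentials may differ.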


NOTE: Contact Empirix Tech Support if you have a large database with sensitive information and you've never performed this installation procedure.
NOTE: This installation procedure can potentially take a long time,
depending on the size of the database. Make sure you have an appropri-
ate length of time to perform this installation procedure.

CHAPTER 4 Upgrading E-XMS
Components

This chapter provides upgrade instructions for the following E-XMS 5.7.1
components on SLES or Red Hat:
 “Upgrading an MSP Probe,” page 4-9
 “Upgrading a ROS,” page 4-10
 “Upgrading an Active and Standby HA-ROS node,” page 4-10
 “Upgrading a NWV or Redundant NWV,” page 4-13
 “Upgrading an SMM,” page 4-14
 “Upgrading a DMS (Aggregator/Proxy),” page 4-14
 “Upgrading a DMS or ROS RDB or RDB Cluster,” page 4-14
 “Upgrading RAN Vision,” page 4-14
When upgrading from a legacy installation on SLES, see “Upgrading to E-
XMS 5.7.1 from E-XMS 5.1 up to 5.4 Using the Migration Utility,” page 4-1
When upgrading an MSP 5000 or Ubuntu MSP probe, the old legacy
upgrade method is still used. Please refer to the instructions in the sec-
tion, “Upgrading an MSP Probe,” page 4-9.
NOTE: For information on configuring traffic monitoring on a probe run-
ning GTP and GTPv2 traffic, see Appendix E, Configuring Dynamic Link-
sets for GTP/GTPv2 Traffic.

Upgrading to E-XMS 5.7.1 from E-XMS 5.1 up to 5.4 Using the


Migration Utility
If your E-XMS system is already running 5.5, See “Upgrading to E-XMS
5.7.1 from E-XMS 5.5 or 5.7,” page 4-8.
If you are upgrading to E-XMS release 5.7.1 from E-XMS release 5.1 up
to 5.4 (not a zypper or yum installation), use the exms migration utility.
The exms migration utility includes tar files and third-party repo packages
and must be installed on all physical servers, which may have one or
more components installed.


The exms migration utility only needs to be run once to convert the system from the old legacy installation to the new zypper update method.
NOTE: The HA-ROS DB nodes, MSP 5000 or Ubuntu probes do not
require the exms migration utility to be installed or run.

Upgrade Prerequisites
Before you begin upgrading to E-XMS 5.7.1, perform the following tasks.
1. Ensure the system is healthy:

 If you have a redundant NWV or HA-ROS, ensure that the MySQL


replication is working without any errors.
 Make sure that Tomcat7 is running and healthy on every ROS. Even if the ROS is not being used as a region under an NWV, Tomcat7 must be working for the exms migration to work.
 Make sure that every NWV and ROS has an FQDN assigned and that the FQDN and hostname values are in the /etc/hosts file.
2. Create a persistent screen session:

a. Log in to the server using an SSH client as ‘hammer’.

b. Switch to root user: su -

c. Create a screen session to monitor the progress of the installation


procedure. This will create a persistent session on the system that
will allow you to reconnect to the session if you lose connectivity
during the installation (re-attaching a dropped session is sketched after this list). To open a screen session, run the command:
screen /bin/bash
3. Add the SLES 11.3 software repository to each server to allow access
to all the RPMS. If any RPMs are missing, the process will stop and
the server will be in an unknown state. See “Common Errors and Res-
olutions,” page C-1.
4. Create the E-XMS YUM repository, described in “Create an E-XMS
YUM Software Repository,” page 2-12.
5. The exms migration script requires the tarballs and third-party files to
be available in the local repository:
 thirdparty-sles-repo-<Target-E-XMS-Release>.tar.gz
 exms-<Target-E-XMS-Version>.tar.gz
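If the SSH connection drops, the screen session created in step 2 can be re-attached with the standard screen commands:
# screen -ls
# screen -r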


Updating XPA Replication


Update the XPA replication to sync the two HA-ROS nodes before you
begin upgrading an HA-ROS. The procedure to update XPA replication
depends on whether MySQL is currently running on your system.
If MySQL replication is not running on your system or if there is an
error, perform the following steps:
1. Check to see if MySQL replication is running. See “To verify MySQL
database replication on an NWV or Redundant NWV,” page 4-13.
2. Compare the file size of the MySQL folder on both the Active and
Standby HA-ROS. If the folder sizes of the HA-ROS nodes are differ-
ent then there is an error. The MySQL folder size must be the same.
To check the file size of the MySQL folder:
cd /home/hammer/mysql
du -Sh
3. Use the node with the larger file size. Typically, this is the Active HA-
ROS node. Run the mysqldump command to save the XPA database.
mysqldump xpa > /tmp/xpa.sql
4. Copy the files from the first node (typically, the Active HA-ROS) to the
second node (typically, the Standby HA-ROS).
scp /tmp/xpa.sql root@<ip address of the second
node>:/tmp/
5. Export the XPA database to the second node and run the following
command:
mysql xpa < /tmp/xpa.sql
If MySQL replication is running on your system, perform the following steps:
1. Stop MySQL on both the Active and Standby HA-ROS nodes:

# service mysql stop


2. Back up the following files on both HA-ROS nodes:

# cp /etc/my_full.cnf /etc/my_full.cnf-orig
3. Edit the /etc/my_full.cnf file so that it includes the following lines on both HA-ROS nodes:

binlog-do-db=hammer_monitor
binlog-do-db=regmon
binlog-do-db=sys_health_stats
binlog-do-db=xpa
replicate-do-db=hammer_monitor


replicate-do-db=regmon
replicate-do-db=sys_health_stats
replicate-do-db=xpa
4. Start MySQL on both the Active and Standby HA-ROS nodes:

# service mysql start

Run the Update Command


Run the update command on both HA-ROS nodes, one after the other. Do not attempt to run the update command on both nodes simultaneously.
SLES
# zypper update exms-ros
Red Hat
# yum update exms-ros
There are no questions in the script requiring a response.

Verifying MySQL database replication is running between


HA-ROS application nodes
Verify that MySQL database replication is running between both HA-ROS
nodes before and after running the update exms-ros command.
To verify MySQL database replication between two HA-ROS application
nodes, follow these steps:
1. As root, connect to the MySQL server:

# mysql -A hammer_monitor
2. Execute the following command:

mysql> show slave status\G


3. Ensure the following lines appear in the output:
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Ensure there are no replication errors shown and that the
‘Seconds_Behind_Master’ value is close to 0.
NOTE: Show variables like ‘server_id’\G is used to determine primary and
secondary designation:
3 = active server
4 = standby server
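The query referred to in the note above can be run at the same MySQL prompt:
mysql> show variables like 'server_id'\G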


NOTE: The active and standby state will change as failover occurs.

IMPORTANT: Download and install the correct Wireshark 3.0 version from
SourceForge for the E-XMS build version you plan to install. See “Installing
Wireshark on NWV and ROS,” page 1-12.
IMPORTANT: Upgrade Vertica to version 7.2.3 before you run the migra-
tion utility. You should do this immediately before you run the migrate-
xms.sh script because the 5.1 to 5.4 software releases are not qualified to
work with Vertica 7.2.3.
See Chapter 3, Upgrading Vertica on the ROS and DB Nodes.

Installing the exms Migration Utility


To install and run the exms migration utility on each server, follow these
steps:
NOTE: For HA-ROS application nodes and redundant NWVs, see “Spe-
cial Server Installation Requirements,” page 4-6.
1. Install the exms migration script in the /home/hammer/exms-migrate
directory.
# zypper install exms-migrate
2. Run the exms migration script.

NOTE: Once the migration procedure has begun, do not interrupt its
execution. If the script does not complete, please contact Empirix Tech-
nical Support for assistance.
/home/hammer/exms-migrate/bin/migrate-xms.sh
The migration script will perform the following tasks:
 Backup configuration and data
 Remove old software
 Install new E-XMS 5.7.1 software
 Restore data and configuration

WARNING! After the migrate-xms.sh script has completed, the E-XMS 5.7.1 software
is installed. Do not run the zypper update or zypper install commands.
3. Upgrade SLES 11.3 to 11.4. This is now required for all 5.7.1 upgrades
for all SLES-based servers such as MSP, ROS, NWV, DMS.


Special Server Installation Requirements

For HA-ROS Application Nodes


Before you begin migration of the HA-ROS application nodes, you must verify the assigned (original) role of each node. To verify and, if necessary, correct the roles, follow the steps below.
Verifying HA-ROS active and standby roles before migration
1. Stop the standby HA-ROS.

/etc/init.d/ucarpd stop
2. Check the role status on the standby HA-ROS.

/etc/init.d/ucarpd status
The status should be ‘Standby’
3. Check the role status on the active HA-ROS.

/etc/init.d/ucarpd status
The status should be ‘Active’. If the status is not ‘Active’ restart this
HA-ROS.
/etc/init.d/ucarpd restart
4. Start the standby HA-ROS.

/etc/init.d/ucarpd start
5. After migrating each of the HA-ROS application nodes, run the setup
command on the original Active node:
/home/hammer/hmcommon/bin/setup-ros.sh
Set “Enable MySQL Replication” to 1 to complete the migration of the
HA-ROS application nodes.
6. Check that MySQL and file replication is working on the Active and
Standby servers using the MySQL SHOW SLAVE STATUS com-
mands.

For Redundant NWVs


1. On the Primary NWV server, disable the MySQL and OpenAM replication before you run the migrate-xms.sh script to upgrade the NWV to E-XMS 5.7.1 from E-XMS release 5.1 to 5.4. This breaks the connection between the Primary and Secondary MySQL servers.
nw_redundantdb_undo.sh <maint user> <maint password>
<primary IP>
Where:
<maint user> is the database login user id 'nwadmin'


<maint password> is the database login password 'NW_Honcho'


<primary IP> is the Primary server IP address. The IP address must be
explicit - do not use 'localhost'
2. On the Secondary NWV server, change the directory to the following
and run the following command to disable OpenAM replication:
cd /home/tomcat/openam-ssoadmintools/
./stopOpenAMReplication.sh [SERVER_FQDN]
Where:
SERVER_FQDN is the domain name of the primary NWV.
3. Install and run the migrate-xms.sh script on both the Primary and
Secondary NWVs. See instructions in the section “Installing the exms
Migration Utility,” page 4-5.
With the redundant servers not replicating to each other, the migration
script can be run sequentially or at the same time on each NWV.
NOTE: It is also recommended to make a backup of the OpenAM user configuration in case there are any serious issues with the migration or with re-enabling replication. Use the following commands to dump the OpenAM user configuration and confirm that the dump is complete.
Execute the following command as the root user:
/home/tomcat/openam/opends/bin/export-ldif --port 4444 --hostname `hostname -f` \
  --bindDN "cn=Directory Manager" --bindPassword Hmr_Admin --backendID userRoot \
  --includeBranch ou=people,dc=openam,dc=forgerock,dc=org \
  --includeBranch ou=groups,dc=openam,dc=forgerock,dc=org \
  --ldifFile /home/tomcat/exported_users.ldif --start 0 --trustAll
The above command is ASYNCHRONOUS and will give an output
similar to:
Export task 20160601100932364 scheduled to start Jun
1, 2016 10:09:32 AM CEST
To verify when the export is complete run the following command:
/home/tomcat/openam/opends/bin/manage-tasks --hostname `hostname -f` --port 4444 \
  --bindDN "cn=Directory Manager" --bindPassword Hmr_Admin --trustAll --no-prompt


When complete it will give an output like:


ID Type Status
-------------------------------------------------
20160601100932364 Export Completed successfully
4. On the Primary NWV server, run the setup command:

/home/hammer/hmcommon/bin/setup-nwv.sh
Set “Enable Redundancy” to 1 to complete the migration of the redun-
dant NWVs and reestablish the bidirectional MySQL and OpenAM
data replication between the two nodes.
5. Check that MySQL and OpenAM replication is working correctly
between the primary and secondary NWV using the show slave status
commands and the OpenAM console.
6. Install the SMM, if not already installed. The SMM is always required on the NWV or Standalone ROS in order to use the new HTML5 DiagnostiX feature introduced in E-XMS 5.7.1.
NOTE: If the SMM was already installed on your NWV or Standalone ROS prior to the upgrade, then the SMM was also migrated as part of running the migrate-xms.sh script.
7. Upgrade the NWV Web Portal, if it was installed before the upgrade.
NOTE: The NWV Web Portal is required on every NWV and Standalone ROS as of E-XMS 5.1, so installing it is highly recommended.

Upgrading to E-XMS 5.7.1 from E-XMS 5.5 or 5.7


Use this procedure if you are upgrading to E-XMS release 5.7.1 from E-XMS release 5.5 or 5.7. This upgrade procedure must be run on all physical servers, each of which may have one or more components installed.

Upgrade Prerequisites
Before you begin upgrading to E-XMS 5.7.1, follow these steps:
1. Ensure the system is healthy:

 If you have a redundant NWV or HA-ROS, ensure that the MySQL and OpenAM replication is working without any errors.


 Make sure that Tomcat7 is running and healthy on every ROS. Even if the ROS is not being used as a region under an NWV, Tomcat must be working for the update to work.
 Make sure that the FQDN and hostname values of every NWV and ROS are in the /etc/hosts file.
2. Create a persistent screen session:

a. Log in to the server using an SSH client as ‘hammer’.

b. Switch to root user: su -

c. Create a screen session to monitor the progress of the installation procedure. This creates a persistent session to the system that allows you to reconnect to the session if you lose connectivity during the installation (see the reattach example after this list). To open a screen session, run the command:
screen /bin/bash
3. Make sure that the SLES or Red Hat software repository is added
properly. See “Common Errors and Resolutions,” page C-1.
4. Create the E-XMS YUM repository, described in “Create an E-XMS
YUM Software Repository,” page 2-12.
5. Remove the MySQL lock when upgrading, described in “Remove the
MySQL Lock During an Upgrade,” page C-3.
6. Upgrade Wireshark to 3.0.0.5.

7. Upgrade SLES 11.3 to 11.4 after the E-XMS upgrade is successful. This is now required for all E-XMS 5.7.1 upgrades on all SLES-based servers such as the MSP, ROS, NWV, and DMS.
IMPORTANT: Upgrade Vertica to version 7.2.3 before you begin upgrading
to E-XMS 5.7.1.
See Chapter 3, Upgrading Vertica on the ROS and DB Nodes.
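If you lose SSH connectivity at any point during the following upgrade steps, log in again as ‘hammer’, switch to root, and reattach to the persistent screen session created in step 2 (standard screen usage):
# screen -ls
# screen -r
If only one detached session exists, screen -r reattaches to it directly; otherwise pass the session name reported by screen -ls.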

Upgrading an MSP Probe


Upgrade an MSP probe running SLES or Red Hat.
To upgrade the MSP Probe, start the update:
SLES
# zypper update exms-probe
Red Hat
# yum update exms-probe
There are no questions in the script requiring a response.
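As an optional sanity check after the update completes, you can confirm the installed package version with rpm (available on both SLES and Red Hat):
# rpm -q exms-probe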


Upgrading MSP 5000 or Ubuntu Probe


1. Copy the installation upgrade tar.gz file to the probe location used for upgrades. Create the directory, if not already created. For example:
mkdir /home/hammer/share/release_<n>
chmod oug+w /home/hammer/share/release_<n>
Where: release_<n> is the release number (for example,
release_5_7_1).
2. Unzip and untar the MSP 5000 upgrade tarball. A subdirectory where
you will run the upgrade script is created during this process.
tar -xzvf hxms-<n>-Bantu-or-Ubuntu-ppc.tar.gz
Where <n> is the release and build number. For example: tar -xzvf hxms-5.7.1-b56-Bantu-ppc.tar.gz
3. Go to the hxms directory in the current location and start the upgrade:

cd hxms
./upgrade.sh -u

Upgrading a ROS
Upgrade a ROS after Vertica has been upgraded to 7.2.3.
SLES
# zypper update exms-ros
Red Hat
# yum update exms-ros
There are no questions in the script requiring a response.
NOTE: If this is a standalone ROS, you must also upgrade the SMM. See
“Upgrading an SMM,” page 4-14.

Upgrading an Active and Standby HA-ROS node


The HA-ROS upgrade procedure is different from the standard ROS upgrade procedure. The HA-ROS system is composed of two ROSs, three database cluster nodes, an NWV, and MSP probes. The upgrades must be performed in the following sequence:
1. DB cluster nodes

2. HA-ROS nodes

3. NWV

4. MSP probes


Upgrade the HA-ROS application nodes after upgrading the DB cluster nodes to Vertica 7.2.3. For instructions to upgrade the Vertica binary version, see Chapter 3, Upgrading Vertica on the ROS and DB Nodes. You must also configure XPA replication before you begin upgrading a primary and secondary HA-ROS.

Updating XPA Replication


Update the XPA replication to sync the two HA-ROS nodes before you begin upgrading an HA-ROS. The procedure to update XPA replication depends on whether MySQL replication is currently running on your system.
If MySQL replication is not running on your system or if there is an
error, perform the following steps:
1. Check to see if MySQL replication is running. See “To verify MySQL
database replication on an NWV or Redundant NWV,” page 4-13.
2. Compare the file size of the MySQL folder on both the Active and
Standby HA-ROS. If the folder sizes of the HA-ROS nodes are differ-
ent then there is an error. The MySQL folder size must be the same.
To check the file size of the MySQL folder:
cd /home/hammer/mysql
du -Sh
3. Use the node with the larger file size. Typically, this is the Active HA-
ROS node. Run the mysqldump command to save the XPA database.
mysqldump xpa > /tmp/xpa.sql
4. Copy the files from the first node (typically, the Active HA-ROS) to the
second node (typically, the Standby HA-ROS).
scp /tmp/xpa.sql root@<ip address of the second
node>:/tmp/
5. On the second node, import the XPA database by running the following command:
mysql xpa < /tmp/xpa.sql
If MySQL replication is running on your system, perform the following
steps
1. Stop MySQL on both the Active and Standby HA-ROS nodes:

# service mysql stop


2. Back up the following file on both HA-ROS nodes:

# cp /etc/my_full.cnf /etc/my_full.cnf-orig
3. Edit the file on both HA-ROS nodes so that it contains the following lines (add any that are missing):
binlog-do-db=hammer_monitor


binlog-do-db=regmon
binlog-do-db=sys_health_stats

binlog-do-db=xpa
replicate-do-db=hammer_monitor
replicate-do-db=regmon
replicate-do-db=sys_health_stats

replicate-do-db=xpa
4. Start MySQL on both the Active and Standby HA-ROS nodes:

# service mysql start
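As an optional check before restarting MySQL, you can confirm that the new entries are present in the file on both nodes:
# grep -E "binlog-do-db|replicate-do-db" /etc/my_full.cnf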

Run the Update Command


Run the update command on both HA-ROS nodes, one after the other. Do not attempt to run the update command on both nodes simultaneously.
SLES
# zypper update exms-ros
Red Hat
# yum update exms-ros
There are no questions in the script requiring a response.

Verifying MySQL database replication is running between HA-ROS application nodes
Verify that MySQL database replication is running between both HA-ROS
nodes before and after running the update exms-ros command.
To verify MySQL database replication between two HA-ROS application
nodes, follow these steps:
1. As root, connect to the MySQL server:

# mysql -A hammer_monitor
2. Execute the following command:

# show slave status\G


3. Ensure the following output displays among the number of lines dis-
played in the output:
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Ensure there are no replication errors shown and that the
‘Seconds_Behind_Master’ value is close to 0.


NOTE: Show variables like ‘server_id’\G is used to determine the primary and secondary designation:
3 = active server
4 = standby server
NOTE: The active and standby state will change as failover occurs.
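For example, the designation check and typical output look like the following (the Value is 3 on the active server and 4 on the standby):
# mysql -A hammer_monitor
# show variables like 'server_id'\G
*************************** 1. row ***************************
Variable_name: server_id
        Value: 3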

Upgrading a NWV or Redundant NWV


To upgrade a NWV or Redundant NWV, start the update:
SLES
# zypper update exms-nwv
Red Hat
# yum update exms-nwv
There are no questions in the script requiring a response.
Notes:
 SMM must always be updated.
 Replication does not need to be undone before the update.
 If you have NWV web portal, see Chapter 6, Upgrading the Network
Wide View Web Portal.

To verify MySQL database replication on an NWV or Redundant NWV
The following steps must be performed on the NWV or Redundant NWV.
1. As root, connect to the MySQL server:

# mysql -A xms_national
2. Execute the following command:

# show slave status\G


3. Ensure the following output displays among the number of lines dis-
played in the output:
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Ensure there are no replication errors shown and that the
‘Seconds_Behind_Master’ value is close to 0.
NOTE: Show variables like ‘server_id’\G is used to determine primary and
secondary NWV designation:
3 = primary


4 = secondary

Upgrading an SMM
To upgrade an SMM for NWV or a standalone ROS, start the update:
SLES
# zypper update exms-smm
Red Hat
# yum update exms-smm
There are no questions in the script requiring a response.

Upgrading a DMS (Aggregator/Proxy)


To upgrade a DMS, start the update:
SLES
# zypper update exms-dms
Red Hat
# yum update exms-dms
There are no questions in the script requiring a response.

Upgrading a DMS or ROS RDB or RDB Cluster


To upgrade an RDB cluster or ROS DB cluster, start the update on each
RDB or RDB cluster node:
SLES
# zypper update exms-rdb
Red Hat
# yum update exms-rdb
There are no questions in the script requiring a response.

Upgrading RAN Vision


This section provides upgrade information for the following RAN Vision
components:
 CSMS components on NWV
 LSS-KPIGen components
 LSS-RANMon on an MSP Probe


 TraceReader on an MSP Probe

Upgrading RAN Vision CSMS Components on an NWV


To upgrade the RAN Vision for CSMS components on an NWV, start the
update:
SLES
# zypper update ranvision-csms
Red Hat
# yum update ranvision-csms

Upgrading RAN Vision LSS-KPIGen Components


To upgrade RAN Vision for LSS-KPIGen components, start the update:
SLES
# zypper update ranvision-lsskpigen
Red Hat
# yum update ranvision-lsskpigen

Upgrading RAN Vision LSS-RANMon on an MSP Probe


To upgrade RAN Vision for LSS-RANMon on an MSP probe, start the
update:
SLES
# zypper update ranvision-lssranmon
Red Hat
# yum update ranvision-lssranmon

Upgrading RAN Vision TraceReader on an MSP Probe


To upgrade RAN Vision for TraceReader on an MSP probe, start the
update:
SLES
# zypper update ranvision-tracereader
Red Hat
# yum update ranvision-tracereader

Manually Verifying Successful Upgrade Completion


1. Log in to the NWV or Standalone ROS as admin.


a. Click the E-XMS link on the Web Portal screen if installed.

b. Start the Java Diagnostics GUI from the E-XMS landing screen.

2. Check “Help About” to verify the NWV or Standalone ROS E-XMS software release build version.
3. From the Admin menu, for each Region check the “Version & License”
to verify the E-XMS software version on each E-XMS system.
4. Check “System Health” to verify that the status of all regions is green, that they are connected, and that the probes under each region are healthy.
5. Execute the “Protocol Search” in all regions and verify the call results.

6. Execute the “Call Detail Summary Report” for all regions to ensure sta-
tistics are available.
7. If the customer is expecting to use the new HTML5 DiagnostiX search,
please refer to Chapter 7, Configuration for Using HTML5 Diagnos-
tiX When No DMS is Present for instructions on additional configura-
tion required and how to verify it is working.

CHAPTER 5 Post-Installation
Requirements

Third-Party Software Installation

HA Proxy
High-Availability Proxy is a third-party component used by the NWV and
Standalone ROS (with SMM) to communicate with a DMS Report Data-
base (RDB) running Vertica.
NOTE: You must install E-XMS software on the NWV or Standalone ROS
before installing HA Proxy.
Because the HA Proxy software license does not allow it to be distributed with the E-XMS software, you must download HA Proxy and install it on your E-XMS system.
E-XMS has been tested with HA Proxy version 1.4.21 for i586 Linux with stripped symbols. This is the recommended version and can be downloaded from the following location:
http://www.haproxy.org/download/1.4/bin/haproxy-1.4.21-pcre-40kses-linux-i586.stripped

Download HA Proxy
1. Download HA Proxy:

a. Login as root.

b. Make sure the current directory is: /root

c. If the system has Internet access, run the following command:

wget http://www.haproxy.org/download/1.4/bin/haproxy-1.4.21-pcre-40kses-linux-i586.stripped
d. If the system does not have Internet access or you must use a dif-
ferent version, the file must be transferred to the /root directory.

2. Install the HA Proxy software:

a. Login as root.

Post-Installation Requirements 5-1


Third-Party Software Installation

b. Run the following commands, substituting the download filename for <HAProxy binary>:

chmod 550 /root/<HAProxy binary>
chown hammer:hammer /root/<HAProxy binary>
mv /root/<HAProxy binary> /home/hammer/hmcommon/bin/
ln -s /home/hammer/hmcommon/bin/<HAProxy binary> /home/hammer/hmcommon/bin/haproxy
3. Now perform the installation/update. At the end of the installation/update, perform the following operations:

a. Enable automatic startup of the haproxy service:

/sbin/chkconfig haproxy on

b. Check that the haproxy service is set to automatic startup:

/sbin/chkconfig --list haproxy

c. Verify the following output, which confirms that haproxy is enabled for automatic startup:

haproxy 0:off 1:off 2:off 3:on 4:off 5:on 6:off
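As an optional check (not part of the original procedure), confirm that the binary is in place and that the haproxy symbolic link resolves:
ls -l /home/hammer/hmcommon/bin/haproxy
/home/hammer/hmcommon/bin/haproxy -v
The -v option prints the HA Proxy version, which should report 1.4.21.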

CHAPTER 6 Upgrading the Network Wide
View Web Portal

This chapter provides instructions for upgrading the Network Wide View
Web Portal on a NWV or on a standalone ROS.

Prerequisite
E-XMS 5.x must be correctly configured on your server. The OpenAM
server must be up and running.
To test OpenAM in a browser, go to:
https://<fqdn>:8443/OpenAM

If you can reach this website, OpenAM and Tomcat7 are running.

Procedure
1. Copy nwv-portal-installer-<release>.rpm into the /home/hammer/share folder of the E-XMS host for the NWV Portal.
2. Log in to the server as root and change to /home/hammer/share:

cd /home/hammer/share

3. If you have previously installed the NWV-portal rpm, you must remove
it using the command:
rpm -e nwv-portal-installer-<old rpm>

4. Install the NWV-portal rpm (this takes a few seconds):

rpm -U nwv-portal-installer-<version>.rpm

A message reporting installation instructions is shown if everything is correct.
5. Change to the location /home/hammer/hmportal/bin

cd /home/hammer/hmportal/bin

6. Launch the portal installation for the E-XMS script:

./install_over_nwv.sh <current_host_fqdn>

<current_host_fqdn>: The fully qualified domain name of the current host (for example, ros1.perform.empirix.com). The domain name must have at least 2 dots (.example.com). The mapping between the FQDN of the host and its IP address must be resolvable via DNS, and the FQDN must also be defined in the local /etc/hosts file.
7. Open a browser and type https://<current_host_fqdn>

The NWV portal homepage is shown, after a successful login. An icon


with E-XMS should be present and when clicked will bring you to the
E-XMS applications landing page.
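If the portal homepage does not load, confirm that the FQDN requirement from step 6 is met on the host (a quick check, using the example FQDN ros1.perform.empirix.com):
hostname -f
grep ros1.perform.empirix.com /etc/hosts
nslookup ros1.perform.empirix.com
The FQDN must appear in /etc/hosts and must also resolve via DNS.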

OpenAM Agent Installer


The OpenAM Agent installer requires the OpenAM rpm to be installed on
the system being used as the web portal.

Installing OpenAM Agent Installer

Prerequisite
The application host must be correctly configured in standalone mode, with a web application server (e.g. TomEE or Apache) present.

Install the OpenAM agent


1. Copy openam-agent-installer-<release>-x86_64.rpm to the application host.


2. Log in to the host machine as root.

3. Change to the folder containing openam-agent-installer-<release>-x86_64.rpm:

cd <path_to_folder_containing_rpm>

4. Install the rpm using the following command:

rpm -U openam-agent-installer-<release>-x86_64.rpm

If the installation is successful, installation procedure notes are shown.


5. Set up the agent for the application:

cd /home/hammer/hmportal_client/bin

6. Launch the installer:

./install_agent_on_app.sh <current_host_fqdn> <openam_host_fqdn> <tomee_installation_folder> <tomee_username> <app_type>

Where:
<current_host_fqdn> is the fully qualified domain of server hosting the
application
<openam_host_fqdn> is the fully qualified domain of server hosting
OpenAM web application
<tomee_installation_folder> is the place where TomEE is installed
(CATALINA_HOME folder).
<tomee_username> is the user that runs the TomEE server on this host (check the ownership of jsvc if in doubt)
<app_type> is one of the values among the following:
 exms - to add another ROS/NWV instance
 is - to provide support for Intellisight
 isran - uses apache instead of TomEE. The agent configuration
procedure is not currently automated by the script.
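For example, a hypothetical invocation for an E-XMS application host (the FQDNs, TomEE folder, and TomEE user shown here are placeholders only, not values taken from your system):
./install_agent_on_app.sh app1.example.com nwv1.example.com /usr/share/tomee tomcat exms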

General Post-Installation Notes


The installation script checks whether all requirements for the agent are met. In some cases the agent filter is not set in the applications. The installation script then asks you to edit the WEB-INF/web.xml file and add the XML required to inject the agent filter. The webapps that require manual editing, and the filter snippet to add, are shown. Below is the filter that must be present:
<filter>
    <filter-name>Agent</filter-name>
    <display-name>Agent</display-name>
    <description>OpenAM Policy Agent Filter</description>
    <filter-class>com.sun.identity.agents.filter.AmAgentFilter</filter-class>
</filter>

<filter-mapping>
    <filter-name>Agent</filter-name>
    <url-pattern>/*</url-pattern>
    <dispatcher>REQUEST</dispatcher>
    <dispatcher>INCLUDE</dispatcher>
    <dispatcher>FORWARD</dispatcher>
    <dispatcher>ERROR</dispatcher>
</filter-mapping>

Once the filter is in place, restart the TomEE server:


/etc/init.d/tomee restart

IntelliSight Post-Installation Notes


To use IntelliSight with OpenAM, manually edit the server.xml file, located at:
/usr/share/tomee/conf/server.xml

Make the following edits in the server.xml file:


 Uncomment the Tomee single sign-on valve
 Restore the Lockout realm, if not present
 Inside the lockout realm, uncomment the database realm
 Inside the lockout realm, enable the OpenAM realm
After editing the server.xml file, the file should look similar to the following
file, substituting @APP_FQDN@ with the application FQDN name:
<?xml version='1.0' encoding='utf-8'?>
<Server port="8005" shutdown="SHUTDOWN">
  <!-- TomEE plugin for Tomcat -->
  <Listener className="org.apache.tomee.catalina.ServerListener" />
  <!-- APR library loader. Documentation at /docs/apr.html -->
  <Listener className="org.apache.catalina.core.AprLifecycleListener" SSLEngine="on" />
  <!-- Initialize Jasper prior to webapps are loaded. Documentation at /docs/jasper-howto.html -->
  <Listener className="org.apache.catalina.core.JasperListener" />
  <!-- Prevent memory leaks due to use of particular java/javax APIs -->
  <Listener className="org.apache.catalina.core.JreMemoryLeakPreventionListener" />
  <Listener className="org.apache.catalina.mbeans.GlobalResourcesLifecycleListener" />
  <Listener className="org.apache.catalina.core.ThreadLocalLeakPreventionListener" />

  <!-- Global JNDI resources
       Documentation at /docs/jndi-resources-howto.html
  -->
  <GlobalNamingResources>
    <!-- Editable user database that can also be used by
         UserDatabaseRealm to authenticate users
    -->
    <Resource name="UserDatabase" auth="Container"
              type="org.apache.catalina.UserDatabase"
              description="User database that can be updated and saved"
              factory="org.apache.catalina.users.MemoryUserDatabaseFactory"
              pathname="conf/tomcat-users.xml" />
  </GlobalNamingResources>

  <!-- A "Service" is a collection of one or more "Connectors" that share
       a single "Container" Note: A "Service" is not itself a "Container",
       so you may not define subcomponents such as "Valves" at this level.
       Documentation at /docs/config/service.html
  -->
  <Service name="Catalina">
    <!-- A "Connector" represents an endpoint by which requests are received
         and responses are returned. Documentation at :
         Java HTTP Connector: /docs/config/http.html (blocking & non-blocking)
         Java AJP Connector: /docs/config/ajp.html
         APR (HTTP/AJP) Connector: /docs/apr.html
         Define a non-SSL HTTP/1.1 Connector on port 8080
    -->
    <Connector port="80" protocol="HTTP/1.1"
               connectionTimeout="20000"
               redirectPort="443" />

    <!-- A "Connector" using the shared thread pool -->
    <!--
    <Connector executor="tomcatThreadPool"
               port="8080" protocol="HTTP/1.1"
               connectionTimeout="20000"
               redirectPort="8443" />
    -->

    <!-- Define a SSL HTTP/1.1 Connector on port 8443
         This connector uses the JSSE configuration, when using APR, the
         connector should be using the OpenSSL style configuration
         described in the APR documentation -->
    <Connector port="443" SSLEnabled="true"
               maxThreads="150" scheme="https" secure="true"
               clientAuth="false"
               protocol="org.apache.coyote.http11.Http11NioProtocol"
               keystoreFile="/usr/share/tomee/conf/default.keystore"
               keystorePass="secret"
               keyAlias="default"/>

    <!-- An Engine represents the entry point (within Catalina) that processes
         every request. The Engine implementation for Tomcat stand alone
         analyzes the HTTP headers included with the request, and passes them
         on to the appropriate Host (virtual host).
         Documentation at /docs/config/engine.html -->

    <!-- You should set jvmRoute to support load-balancing via AJP ie :
         <Engine name="Catalina" defaultHost="localhost" jvmRoute="jvm1">
    -->
    <Engine name="Catalina" defaultHost="@APP_FQDN@">

      <!-- For clustering, please take a look at documentation at:
           /docs/cluster-howto.html (simple how to)
           /docs/config/cluster.html (reference documentation) -->
      <!--
      <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/>
      -->

      <!-- Use the LockOutRealm to prevent attempts to guess user passwords
           via a brute-force attack -->
      <Realm className="org.apache.catalina.realm.LockOutRealm">
        <Realm className="com.sun.identity.agents.tomcat.v6.AmTomcatRealm" debug="99"/>
        <!--
        <Realm className="org.apache.catalina.realm.UserDatabaseRealm"
               resourceName="UserDatabase"/>
        -->
      </Realm>

      <Host name="@APP_FQDN@" appBase="webapps"
            unpackWARs="true" autoDeploy="true">
        <Valve className="org.apache.catalina.authenticator.SingleSignOn" />
        <Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"
               prefix="localhost_access_log." suffix=".txt"
               pattern="%h %l %u %t &quot;%r&quot; %s %b" />
      </Host>
    </Engine>
  </Service>
</Server>
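After saving the server.xml changes, restart the TomEE server so that they take effect (the same command used after adding the agent filter above):
/etc/init.d/tomee restart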


Add Users to IntelliSight


The web portal installation scripts create only IntelliSight root users. To
add other users perform the following steps:
1. Connect to the web portal as a root user, log in to IntelliSight and click
User Management.

2. Click Add Account.

3. Set the user's information and assign the user to a group. Do not select the force-password-change option.

4. Log out, then log in to the portal as ‘amAdmin’:

https://FQDN_PORTAL:8443/openam


5. Go to Access Control > /(Top Level Realm) > Subjects > User, then
click New.

6. Add the new user information. The ID must correspond to the ‘Name’
set in IntelliSight. When done entering new user information, click OK.

7. Click the user name in the list, then select the group. Add groups
accordingly. Select Roles from the list box on the left side and click
Add to move them to the list box on the right side. When you are done,
click Save.


Interoperability Between IntelliSight and E-XMS


You must create a user that can operate on both IntelliSight and E-XMS.
Create the new user in E-XMS
1. On E-XMS, log in to the web portal as the ‘admin’ user, then open the
Diagnostics and Voice Reports UI. Download the jnlp application and
select Admin > Account Management. In the left pane, click Users and
expand the list.

2. Enter the new user info in the form. Select the Role to be ‘Power User’.
Note the name of the user account.


3. Click Save. The user is saved in E-XMS and OpenAM.

Create the same new user in IntelliSight


1. Create a new user in IntelliSight using the same name used to create a
new user in E-XMS. See “Add Users to IntelliSight,” page 6-9.
2. In OpenAM, log in as ‘amAdmin’ and configure the correct rights for the user in the IntelliSight role.
IMPORTANT: root and admin user accounts are meant for system adminis-
tration and have not been configured for IS Microstrategy visualization.

CHAPTER 7 Configuration for Using
HTML5 DiagnostiX When No
DMS is Present
On customer systems using the HTML5 user interface DiagnostiX feature for voice protocols, perform the steps below to configure the database views the first time you upgrade to 5.7.1. Once done, this does not have to be repeated.

Prerequisite
E-XMS Release 5.7.1 requires the installation and setup of the SMM on the NWV or Standalone ROS, even if it is not a DMS system. For installation steps, see Chapter 2, Installing E-XMS Software.
After installation or upgrade to E-XMS 5.7.1, copy the script 'CreateROSVerticaViewsForProtocols.py' (if not already there) into the following directory of each active ROS:
/home/hammer/hmcommon/bin/scripts/python/projects/ros/src/ros

/home/hammer/hmsasps/bin/saspsmultistage.sh

Required Configuration for Voice Protocols on NWV or Standalone ROS when using HTML5 DiagnostiX
To retrieve data from the ROS Database, you must edit the csv configura-
tion file to include the list of protocols involved and the IP address of the
ROS Database cluster. The configuration file must be named
‘ros_servers.csv’ and placed in the folder:
/home/hammer/hmipxflex/etc
An example of the file is shown below.


Each line consists of 8 fields:


1. ROS Database ID

2. ROS Database IP address. The master node IP address is expected in


case of a cluster.
3. Module name assigned to the protocol.

4. Constant tag that must be set to ROSDB.

5. Constant tag that must be set to DMS-PROXY.

6. Must be filled as field 1.

7. Must be filled as field 2.

8. ROS Database identifier name.

Each field must be delimited by a comma "," and refer to a single protocol.
The example configures all the 13 voice protocols accessible through the
ROS Database:
 SIP
 ISUP (to configure both ISUP and BICC protocols)
 RANAP
 BSSAP
 CAMEL
 INAP
 H248
 DIAMETER (to configure the DIAMETER+ protocol)
 S102


 DNS (to configure the ENUM protocol)


 MGCP
 RTP
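The original example file is not reproduced here; as an illustration only, a single ros_servers.csv line for the SIP protocol following the 8-field layout above might look like the following (the ID, IP address, and identifier name are hypothetical):
1,10.10.10.50,SIP,ROSDB,DMS-PROXY,1,10.10.10.50,ROSDB1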
Additionally, run the following command:
/home/hammer/hmsasps/bin/sasps/bin/sasps_column_maker -V <verticaIP>

Where:
<verticaIP> is the IP address of any working node of the ROS Vertica DB.
NOTE: This must be run on the NWV at the end of the installation/
upgrade and each time the DB configuration changes (e.g. when a proto-
col filter is enabled by the Java UI or any other change in the DB).

APPENDIX A E-XMS Network TCP Port
Assignments

Because E-XMS System and E-XMS Network Wide System components accommodate widely varying monitoring requirements, it is important to understand which TCP ports are used for communications between those components, across networks, firewalls, and domains.
The following figure shows port assignments for an E-XMS Network Wide
system with one region. The Port Assignment table lists all ports used by
E-XMS and associated processes.



FIGURE A-1. Network TCP Port Assignments


Network Port Assignment Table

TABLE A-1. Port Assignments

Diagram
Notation Device A Device B Port(s) Direction Comments

1 User Client Network 80, 443, 8080, A>B Client connects to NWV via
Wide View 8443 HTTP and/or HTTPS

OneSight Network 80, 443, 8080, A>B OneSight connects to NWV


Server Wide View 8443 via HTTP and/or HTTPS

2 User Client MSP Probe 5913 A<>B Required for some System
Administration of the MSP
probes (except MSP 5000)

3 Network Regional 22 A<>B SSH/Secure FTP


Wide View OS
80, 443, 5123, A>B Web and Reporting Applica-
8080, 8443 tions communication, OAM

5443 A>B Vertica

UDP 162 B>A SNMP trap forwarding

4 Network Redundant 3306, 1689, A<>B Redundant database and


Wide View Network 4444, 8080, OpenAM communication
Wide View 8443, 50389,
50899, 58989


5 Regional MSP Probe 22 A<>B SSH/Secure FTP


OS
TCP 6452 A<>B IP sec is needed if encryption
is required

5107, 5123, A>B Empirix proprietary ports


5124, 5129,
5130, 5135,
5155, 5177,
5178

5201 B>A Port needed on MSP probes


(except MSP 5000) when
configured for monitoring
mobile traffic

5136 - slot 1 A>B XPA Data-only mode on MSP


5317 - slot 2 5000
5318 - slot 3
5319 - slot 4
5320 - slot 5
5321 - slot 6
5322 - slot 7
5323 - slot 8
5324 - slot 9

3306, 3307, B>A DB ports and Empirix propri-


5103, 5104, etary ports
5108, 5109,
5123, 5155,
5174, 5180,
5191

UDP 162 B>A SNMP trap forwarding

6 MSP Probe MSP Probe 5112, 5158 A<>B Only for probe to probe peer
dispatching

5220 A<>B Inter-probe handover support

7 ISDM Regional 22 A>B SSH/Secure FTP


OS

8 DMS ISDM 22 A>B SSH/Secure FTP


Aggregator

9 OneSight CC 22 A>B SSH/Secure FTP


Server


12 OneSight OneSight 445 A<>B MS-DS SMB file sharing


Server Data
Collector 135 A<>B MS EPMAP End Point Map-
per (SQL Debugger)

139 A<>B NetBIOS Session Service

5007 A>B Port used to listen on from a


communication channel

5008 A>B JMS Server port

7001 A>B SNMP Agent port

5006 B>A Standard port for the Probe


agent

5100 B>A Standard port for the JMX


Proxy

13 User Client OneSight 8080 A>B HTTP connection


Server
8443 A>B HTTPS connection

80, 443 A>B Client connects to OneSight


Server via HTTP and/or
HTTPS

14 OneSight ISDM 8080 A>B Web Services API for drill-


Server down

15 ISDM OneSight 6050 A>B ISDM to push KPI configura-


Data tion and KPI values to One-
Collector Sight Data Collector

16 ISDB ISDM 22 A>B Inter-process communication


via SSH/Secure FTP

17 ISDV ISDB 22 A>B Inter-process communication


via SSH/Secure FTP

5433 A>B JDBC, ODBC to Vertica

10000, A>B JMX Access for DB stats


10000,... Each ISDM app takes a
10006 sequential port starting at
10000 on each node

19 User Client ISDM 8080 A>B Web based UI (HTTP)

8443 A>B Web based UI (HTTPS)


20 MSP Probe DMS Proxy 22 A<>B Inter-process communication


via SSH/Secure FTP

21 MSP Probe Regional 22 A<>B Inter-process communication


OS via SSH/Secure FTP

22 NWV/ DMS Proxy 22 A<>B Inter-process communication


SMM 5422 (FTP) via SSH/Secure FTP

UDP 123 B>A Port needs to be opened to


send any configured SNMP
trap to the SMM

5432 B>A Postgres configuration


access

4242 B>A Caching

23 NWV/ DMS 22 A<>B Inter-process communication


SMM Aggregator via SSH/Secure FTP

UDP 123 B>A Port needs to be opened to


send any configured SNMP
trap to the SMM

5432 B>A Postgres configuration


access

24 DMS Proxy DMS 22 A<>B Inter-process communication


Reports DB via SSH/Secure FTP

7263 A>B Filecopy

25 DMS DMS 22 A<>B Inter-process communication


Aggregator Reports DB via SSH/Secure FTP

7263 A>B Filecopy

26 DMS Proxy DMS 22 A<>B Inter-process communication


Aggregator via SSH/Secure FTP

4242 A>B Caching

27 NWV/ DMS 5433 A>B ORI Accelerated Reports


SMM Reports DB

28 NTP All Empirix UDP 123 B>A Communication with NTP


Server Systems Server for synchronization


29 Any SNMP Network UDP port 161 A>B Port needs to be open to
Requester Wide View, request any MIB entry
Regional
OS,
MSP Probe,
IDMC,
OneSight,

DMS 164 A>B

30 Any SNMP Network UDP port 162 B>A Port needs to be open to
Trap Wide View, send any configured SNMP
Receiver Regional trap to an NMS
OS,
MSP Probe,
IDMC,
OneSight

DMS 163 B>A

31 NWV/ RAN Vision TCP 5432 A<>B KPI/CDR


SMM KPIGen PostgreSQL
IP sec is needed if encryption
is required

TCP 6451 A<>B KPIGen Admin


IP sec is needed if encryption
is required

TCP 22 A<>B SFTP, SSH

32 RAN Vision DMS Proxy TCP 22 A<>B CDR


KPIGen

33 OneSight OneSight 1433 A>B DB Communication


Server DB Server

34 OneSight Mail Server 25 A>B For email of reports


Server

APPENDIX B VM Configuration and Set Up

This chapter provides the steps necessary to configure and monitor soft-
ware-only probes on Virtual Machines (VM) running VMware and KVM
hypervisors.

VM Configuration
To monitor traffic running on an existing Virtual Machine (VM) connected
to a virtual switch in VMware, you need to set up a new Virtual Machine
Port Group and configure it for Promiscuous Mode. When promiscuous
mode is enabled on a virtual adapter, all traffic flowing through the virtual
switch, including local traffic between virtual machines and remote traffic
originating from outside the virtual host, is sent to the promiscuous virtual
adapter.

Adding a Virtual Machine Port Group


Use the vSphere Client to add a new Virtual Machine Port Group. If you
do not already have the vSphere Client, you must download it from the
website before beginning this procedure.
1. Log in to the vSphere client and select the desired server in the left
pane and select the Configuration tab.
2. In the Hardware section, select Networking to display the current VM's
set up on the VM server and the virtual switches they are connected
to.
3. Click Properties to add a new Virtual Machine Port Group.


4. In the Properties dialog, select the virtual switch with the VM whose
traffic needs to be monitored by the Software Only probe. For exam-
ple, the screen capture below shows vSwitch2.
5. Click Add to display the Add Network Wizard.


6. In the Connections Type section, select Virtual Machine and click


Next.

7. In the Port Group Properties section, specify a Network Label, for


example Local2. Leave VLAN ID (Optional) selection as None.
8. Click Next to display the Summary section.


9. Click Finish to add the Virtual Machine Port Group with label Local2.

10. Click Close. Continue to the next section to set the new Virtual
Machine Port Group to Promiscuous Mode.


11. Select the new port group from the list in the left pane. Notice the Promiscuous Mode is set to Reject.

12. Click Edit to display the Properties dialog.

13. Select the Security tab, check Promiscuous Mode to change the mode
dropdown to Accept.
14. Click OK. The Promiscuous Mode is now set to Accept.


15. Click Close.

Add the new Port Group in the vSphere Client as a network adapter of the VM that will run the Software Only probe.
16. Select the desired VM in the left pane; in this example, ‘ProbeSO’.


17. Right-click the VM ‘ProbeSO’ to display the VM Properties dialog.

18. Select the Hardware tab and click Add to display Device Types.

19. Select Ethernet Adapter and click Next to display Network Types. The
Adapter Type: E1000 displays by default.


20. In the Network Connection section, select ‘Local2’ from the dropdown.

21. Click Next to review changes.

The selected options display for verification.


22. If options are correct, click Finish.


23. To change the adapter type, click Back.

24. In the Adapter Type section, select a different adapter type from the
dropdown, for example: VMXNET3.
25. Click Next to review your changes.

26. Click Finish to add the network adapter.


The Virtual Machine Properties dialog shows the adapter in the process of being added.

27. Click OK to add the new adapter.

28. Right-click the new adapter and select Edit to verify the settings and
ensure the adapter was added. When verified, click OK.


VM Monitoring
Monitoring traffic on a KVM system without using Open vSwitch requires you to turn off MAC address learning for the bridge created by KVM for the physical Ethernet port where the traffic is being monitored. To turn off MAC address learning, type:
brctl setageing br1 0
This sets the ageing for MAC address learning to 0 for bridge br1 so that it never learns any MAC address and floods each frame out of all of its active ports, except the port where the frame was received.
To turn on MAC address learning, type
brctl setageing br1 100
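To confirm the current ageing value on a bridge (an optional check using the standard bridge-utils tools):
brctl showstp br1 | grep -i ageing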

Adding a Virtual Network Interface


To add a virtual network interface for the corresponding bridge, perform
the following steps.
1. On the VM dialog, select Details.

2. Click Add Hardware to display the Add New Virtual Hardware dialog.


3. From the Host Device dropdown, select the corresponding bridge, in


this case ‘Bridge br1’.
4. Select Device Model ‘virtio’.

5. Click Finish.

6. Configure the hardware emulator to monitor this interface in the hwemul-config file:
/home/hammer/hmhwemul/etc/hwemul-config
Add the configuration parameter:
hm_hwemul_pcap_device_list
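The exact assignment syntax depends on your existing hwemul-config file; a hypothetical entry naming the new monitored interface (the interface name eth1 and the assignment format shown here are assumptions, not values from the product documentation) might look like:
hm_hwemul_pcap_device_list=eth1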

APPENDIX C Troubleshooting

Common Errors and Resolutions


There are some common errors that may occur during installation or upgrade; refer to the following sections for more information.
 “Install Missing Packages,” page C-1
 “Resolve Installation Package Errors,” page C-2
 “Remove the MySQL Lock During an Upgrade,” page C-3
 “Update the Kernel Package for MSP Probes Based on Red Hat 6.8,”
page C-5
 “Vertica Upgrade Script Errors 6.1.3 to 7.2.3,” page C-5

Install Missing Packages


The Empirix software components require specific packages to be
installed. Zypper or YUM will automatically check for these software
dependencies during installation or upgrade. There are two methods you
can use to install missing packages:
 Method 1— Manually install the missing packages.
If SLES or Red Hat is not properly registered or there is no network
connection, a local repository must be created and added or the pack-
ages must be manually downloaded and installed or upgraded before
proceeding with the installation or upgrade of E-XMS software compo-
nents.
 Method 2—Configure the repositories (SLES or Red Hat) so that the
missing packages can be installed automatically.
Any missing packages will be automatically downloaded and installed
or upgraded; this will only work if the corresponding SLES or Red Hat
software repository containing these packages is configured properly
and accessible during installation or upgrade.
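For example, on SLES you can register an installation ISO as a local zypper repository so that missing packages can be resolved from it (a sketch only; adjust the ISO path and repository alias for your system):
# zypper ar "iso:/?iso=/root/SLES-11-SP4-DVD-x86_64-GM-DVD1.iso" SLES-11-SP4-ISO
# zypper refresh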

Method 1 - Manually installing missing packages


Migrating a Server from E-XMS 5.4 to 5.5 or Later - Additional SLES
Packages Required


During the migration to E-XMS 5.5, you are prompted to remove NTP/
NSCD packages that conflict with E-XMS. You must remove these pack-
ages to proceed. Select “Solution 1” to remove the conflicting packages.
If the system is registered to a SUSE server and there is no Internet
access, a connection error displays. You can ignore this error or disable
the repository using the following command:
zypper mr --disable “reponame”

Method 2 - Automatically installing missing packages by


configuring repositories
Add SLES or Red Hat Software Repository Before Installation or
Upgrade
Example:
Retrieving package info2html-2.0-104.18.noarch (1/
69), 17.0 KiB (39.0 KiB unpacked)
File './suse/noarch/info2html-2.0-
104.18.noarch.rpm' not found on medium 'cd:///
?devices=/dev/sr0'
Please insert medium [SUSE-Linux-Enterprise-Server-
11-SP3 11.3.3-1.138] #1 and type 'y' to continue or
'n' to cancel the operation. [yes/no] (no):
In the example, the SLES repository called "SUSE-Linux-Enterprise-
Server-11-SP3 11.3.3-1.138" was configured with the URL "cd:///
?devices=/dev/sr0", but was not reachable at installation or upgrade time.

Resolve Installation Package Errors


If zypper (SLES) detects any package conflicts or dependency issues it
will display a menu of possible resolutions and prompt for a choice. If yum
(Red Hat) detects any package conflicts or dependency issues it will dis-
play an error and exit; the conflict must be manually resolved.
Resolving Issues within Installation
SLES 11.3/11.4
In the following example, the package exms-ros is being installed and
there is a dependency issue between the exms-common and nscd pack-
ages.
Problem: exms-ros-5.7.0-320.x86_64 requires exms-
common = 5.7.0-320, but this requirement cannot be
provided
uninstallable providers: exms-common-5.7.0-
320.x86_64[exms-common]


Solution 1: deinstallation of nscd-2.11.3-17.54.1.x86_64
Solution 2: do not install exms-ros-5.7.0-320.x86_64
Solution 3: break exms-ros-5.7.0-320.x86_64 by
ignoring some of its dependencies

Choose from above solutions by number or cancel [1/2/3/c] (c):
To resolve this within the installation of exms-ros, nscd must be removed
by selecting Solution 1 (entering ‘1’ at the prompt above). Solution 2 is
incorrect because exms-ros must be installed. Solution 3 is incorrect
because the breaking of dependencies will cause the ROS component to
not function properly. If ‘c’ is selected, this will cancel the installation.
Red Hat 6.8
As mentioned previously, if an issue is detected by yum, the issue must be
resolved manually. You can disable nscd or you can remove it. If you
remove nscd, LDAP will also be removed.
To remove nscd, run the following command:
# yum remove nscd
Manually resolving issues outside of installation
SLES 11.3/11.4
In the following example, the IBM version of Java (java-1_7_1-ibm-
1.7.1_sr3.0-1.1) must be removed before installing exms-ros. Run the fol-
lowing command:
# zypper remove java
Red Hat 6.8
In the following example, the package exms-probe is being installed and
there is a conflict detected with the ntp package.
Error: exms-probe conflicts with ntp-4.2.6p5-
5.el6_7.4.x86_64
To resolve this, you must manually remove the conflicting package using
the following command:
# yum remove ntp

Remove the MySQL Lock During an Upgrade


When upgrading, if the mysql lock is present, it will prevent the upgrade from succeeding. In the following example, multiple versions of the mysql client are available, and the best option is the solution that removes the mysql lock and installs the most recent version of libmysqlclient.
Please choose Solution 4 when prompted.
Problem: exms-ros-5.7.0-345.x86_64 requires exms-
mysql >= 1.1, but this requirement cannot be
provided
uninstallable providers: exms-mysql-1.1-
0.x86_64[exms-sles]
Solution 1: remove lock to allow installation of
libmysqlclient_r15-5.0.96-0.6.1.x86_64[thirdparty-
sles-3.0.101-0.47.67-default]
Solution 2: remove lock to allow installation of
libmysqlclient_r15-5.0.96-
0.8.8.1.x86_64[thirdparty-sles-3.0.101-0.47.67-
default]
Solution 3: remove lock to allow installation of
libmysqlclient_r15-5.0.96-0.6.1.x86_64[ranvision-
sles]
Solution 4: remove lock to allow installation of
libmysqlclient_r15-5.0.96-0.8.8.1.x86_64[ranvision-
sles]
Solution 5: remove lock to allow installation of
libmysqlclient_r15-5.0.96-0.6.1.x86_64[SUSE-Linux-
Enterprise-Server-11-SP3 11.3.3-1.138]
Solution 6: do not install exms-ros-5.7.0-345.x86_64
Solution 7: break exms-ros-5.7.0-345.x86_64 by
ignoring some of its dependencies

Choose from above solutions by number or cancel [1/2/3/4/5/6/7/c] (c): 2

Please select 2; never choose 5, 6 or 7 during the migration process.


Update the Kernel Package for MSP Probes Based on Red Hat 6.8
When installing E-XMS on a Red Hat 6.8 based probe using the yum com-
mand ‘yum install exms-probe’ the installation exits with the following
error message:
Kernel development package for currently running
kernel version. Please install the kernel development
for kernel version 2.6.32-642.el6.x86_64 and run this
script again
To resolve this issue, you must manually update the kernel development package to match the currently running kernel version by executing the following command:
yum update
Now reboot your system.
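To confirm that the kernel development package now matches the running kernel (an optional check):
# uname -r
# rpm -q kernel-devel
The kernel-devel version reported by rpm should match the kernel version reported by uname -r.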

Vertica Upgrade Script Errors 6.1.3 to 7.2.3


IMPORTANT: You must upgrade Vertica on the NWV or ROS before you
perform an upgrade of E-XMS 5.7.1.
During the Vertica upgrade process the script may stop if there are admin-
tools processes running. You should locate and stop the processes and
then kill the processes if they are not critical.
1. Locate where the admintools processes are running:

ps aux | grep -i admintool


2. Stop the admintool processes using either of the following commands:

a. /etc/init.d/process_launcher stop

b. /etc/init.d/hmmonitor stop

3. Kill the admintools processes if they are not performing a critical func-
tion on the Vertica database:
kill pid
Where pid is the process id.

APPENDIX D Installing and Upgrading
SLES OS

This chapter provides procedures to install SLES 11.3 using the Empirix auto-installer ISO and to upgrade SLES 11.3 to SLES 11.4 using the SLES 11.4 distribution ISO. See “Performing the Distribution Upgrade to SLES 11.4,” page D-24.

Setup for SLES 11.3 Auto-installer


There are two methods to set up a server installation through the auto-
installer:
Physical access to the server to be reimaged
1. Create a bootable DVD from the auto-installer iso. If the server to be
reimaged to SLES 11.3 does not have a DVD drive, attach an external
DVD drive to the server.
2. Insert the DVD in the DVD drive.

Virtual access through the IPMI of the server to be reimaged


1. Install the NFS server that will host the auto-installer ISO.

a. Check whether the NFS server package is already installed using the command shown below. If the command returns a value, continue to step 2, ‘Set up the NFS server’. If the command does not return a value, continue to step b, ‘Install the NFS server package’.
# rpm -qa | grep -i nfs-kernel-server
nfs-kernel-server-1.2.3-18.38.43.1
b. Install the NFS server package.

# zypper install nfs-kernel-server


2. Set up the NFS server.

a. Create a base directory for the auto-installer ISO images. This will
be exported via NFS to be remotely mounted/accessed for the
auto-installer installation.
# mkdir -p /home/hammer/share/autoinstaller


# chmod a+rx /home/hammer/share/autoinstaller

b. Download the required auto-installer ISOs to /home/hammer/share/autoinstaller and make sure the folder is world-readable. Replace <autoinstaller.iso> with the required auto-installer from the list. (A sketch of exporting this directory via NFS appears after this procedure.)
#chmod a+r /home/hammer/share/autoinstaller/<autoinstaller.iso>
3. Set up the NFS client. The NFS client is your Windows PC (e.g. VPN
client computer) or laptop (e.g. if on-site at customer) where you will
access the IPMI web browser from.
a. Open the Windows feature ‘Turn Windows features on or off’ and
scroll to ‘Services for NFS’, then check ‘Client for NFS’. Click OK
and reboot the system.

b. VPN into the local network of the NFS server, if you are not already
connected.
c. Open ‘Computer’ (Windows 7) or ‘This PC’ (Windows 10), right-
click the icon and select ‘Map network drive...’.

d. In the ‘Map Network Drive’ window, in the ‘Folder’ field enter:


<NFS_server_IP>:/home/hammer/share/autoinstaller
Replace <NFS_server_IP> with the IP address of the NFS server where the autoinstaller.iso is stored, then click Finish.

The mapped network drive is now available in ‘This PC’ Network Loca-
tions.

e. The autoinstaller ISO is now ready to be attached to the virtual


DVD drive. Log in to the IPMI webpage of the server to be reim-
aged.
f. Continue to the section below “Installing SLES 11.3 OS with Auto-
installer,” page D-3 for Supermicro and Intel server instructions to
attach the auto-installer ISO to the virtual DVD drive within the
IPMI.
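A sketch of exporting the auto-installer directory on the SLES NFS server (the export options and the nfsserver service name are typical SLES 11 defaults; adjust them for your environment):
# echo "/home/hammer/share/autoinstaller *(ro,root_squash,sync,no_subtree_check)" >> /etc/exports
# /etc/init.d/nfsserver restart
# exportfs -v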

Installing SLES 11.3 OS with Auto-installer


Follow the steps below to attach the auto-installer ISO to the virtual DVD
drive within the IPMI of the Supermicro or Intel server.
Supermicro Server
a. Select the Remote Control tab on the main menu and then click
‘Launch Console’.


The ‘jviewer.jnlp’ file downloads.


b. Open the ‘jviewer.jnlp’ file and click Continue.

c. If a message displays similar to the one shown below, see


“Enabling Java Content in the Browser,” page D-13 and then
launch the ‘jviewer.jnlp’ file again. If there is no message displayed,
continue to step d below.

d. In the Security Warning dialog, check “I accept the risk...” and click
Run.


The console opens.


e. Select the Media tab > Virtual Media Wizard.

The Virtual Media window opens.


f. In the CD Media section, browse to the location of the auto-installer
iso image and click Open.

g. Click Connect CD/DVD and then click Close. Continue to step 4


‘Reboot the device and wait for the options...’.


Intel Server
a. Select the Remote Control tab on the main menu and then click
‘Launch Console’.

The ‘jviewer.jnlp’ file downloads.


b. Open the ‘jviewer.jnlp’ file and click Continue.

c. If a message similar to the one shown below displays, see “Enabling Java Content in the Browser,” page D-13, and then launch the ‘jviewer.jnlp’ file again. If there is no message displayed, continue to step d, ‘In the Security Warning dialog...’.

d. In the Security Warning dialog, check ‘I accept the risk...’ and click
Run.

The console opens.


e. Select the Device tab and then check Redirect ISO.

f. In the Open window browse to the location of the auto-installer iso


image file and open it. Continue to step 4 to reboot the device.
IMPORTANT: During the virtual IPMI installation make sure the net-
work connection is not lost and the IPMI console window is not
closed. If either of these scenarios occur, the connection to the auto-
installer will be lost and the installation will stop. The NFS server and
client should have fast network connectivity to ensure the installa-
tion process goes quickly and smoothly.


4. Reboot the device and wait for the options screen that looks similar to
the image shown below.

5. Now select the required option from the list. The installation starts.

TABLE D-1. Auto-installer Server Options

auto-installer Server
Filename Installer Option Server Model Description E-XMS Model
db_7.10 E-XMS HCOS Ivy Bridge High HC-ROS
performance
database
DMS DB Ivy Bridge High RDB
performance
database
mid_range_9.8 E-XMS HPOS Ivy Bridge Mid-size HP-ROS
database
low_end_10.7 E-XMS ROS and Ivy Bridge Entry-level 1MSP-ROS
NWV database
Ivy Bridge Entry-level NWV
database
xms_hcos_2.54 Installation Supermicro HCOS HC-ROS

Installation Supermicro HCOS NWV

xms_hpos_1.45 Installation Old Intel servers 5023, 5423 HP-ROS

Installation Old Intel servers 5023, 5423 NWV

IMPORTANT: For information on supported SLES OS upgrade paths for each server, see Table 1-1, page 1-2.

6. Once the kernel is loaded, the installation system is loaded.

7. The System Probing screen displays.

8. Prepare the system for the automated installation.


9. The installation begins formatting and mounting the required partitions.

10. The installation continues by installing the packages.


11. The system reboots.

12. After reboot, the options screen appears again with the option ‘Boot
from Hard Disk’ selected.

13. The package installation completes.


14. The installation continues by configuring the system according to the autoinstall settings.

15. The installation continues by writing the system configuration.


16. After the installation completes, the login screen displays.

Enabling Java Content in the Browser


With the release of Java 7 Update 45 and higher, web applications that do not have signed vendor jar files will not load Java properly without adding the base website to the Exception Site List.
When an application is blocked by security settings, follow the steps below:
1. Have the complete webserver address of the IPMI available.


2. Open the application ‘Configure Java’ and select the Security tab. Check ‘Enable Java Content in the Browser’ and click ‘Edit Site List...’.

3. In the Exception Site List dialog, click Add to add the webserver
address of the IPMI, then click OK.


Updating the Intel Server 5023 BIOS


1. Run the following script to verify whether the server is an Intel 5023
server and the current BIOS version needs to be upgraded.
check_5023_bios.sh
2. If the Intel 5023 server BIOS version needs to be updated, prepare an MSDOS bootable USB stick and then copy the BIOS version 11-96 files to the USB stick. The zip file <<s5000pal_dos_pkg_96_64_47.zip>> includes the following 2 files:
• <<S5000 BIOS 96 Settings Rev 2.pdf>> --- new BIOS parameter settings
• <<s5000pal_dos_pkg_96_64_47.zip>> --- BIOS firmware 11-96
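As a sketch for staging the files, assuming the zip file is in the current directory and the USB stick is mounted at /mnt/usb (a hypothetical mount point; substitute your own):
# unzip s5000pal_dos_pkg_96_64_47.zip -d /mnt/usb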


Figure: USB MSDOS bootable stick and BIOS firmware 11-96 combined

NOTE: COMMAND.COM, IO.SYS, and MSDOS.SYS are the system files for the MSDOS bootable USB stick.
3. Plug the USB stick into the Intel 5023 server and reboot the system.

4. Type ‘BIOS96’ at the prompt and press Enter:

C:\ BIOS96
The BIOS update begins. The update will take about 3 minutes to
complete.


WARNING! Do not interrupt the BIOS update.


5. Reboot the system.

6. Press <F2> to access the BIOS.

7. Verify the BIOS version number and set up the BIOS parameters based on the <<S5000 BIOS 96 Settings Rev 2.pdf>> file included in the zip file mentioned in step 2.


8. Save the BIOS configuration and exit.

Upgrading SLES 11.3 to SLES 11.4


To upgrade SLES 11.3 to 11.4, you must purchase a SLES license for each server.
• Purchase a one-year maintenance and upgrade subscription for SUSE. You can purchase this subscription from Empirix using part number SLES-MAINT-1YR, or you may purchase the subscription directly from SUSE.
• Once you have purchased the subscription and registered your activation code at the SUSE web site, you can download the SUSE 11.4 distribution.

Conventions
In this section, commands to run appear after the prompt. Unless explicitly indicated otherwise, run the commands as the root user, indicated by the '#' prompt:
#


For example, the 'ls' command is shown followed by its output:
# ls
check_process_running.pl passwd.exp
To execute the command, be sure NOT to copy/paste the leading '#', or the command will not be executed.
When running a series of commands as a specific user, the following con-
vention will be used. In this example, the current user is root and the user
is changed to the mysql user:
# su mysql
> ls
check_process_running.pl passwd.exp
> exit
#
Some commands and output must take into consideration IP addresses,
build numbers, and customer-specific values. Placeholders for these val-
ues will be indicated with the convention <replace this with actual value>.
The highlighted expression, within and including the angle brackets (< >),
must be substituted with the actual value. For example, 5.7.1-b<build-
number> would become 5.7.1-b206 if the build-number for the release
being installed is 206.

Pre-requisites
• Only E-XMS 5.7 (or later) is supported on SLES 11.4.
• SLES 11.4 is NOT supported on the 5023/5423 servers; these servers can only be installed with SLES 11.3 because they require additional third-party drivers that do not work with, or have not been qualified for, SLES 11.4.
• SLES 11.4 installation ISO (SLES-11-SP4-DVD-x86_64-GM-DVD1.iso) and a valid SLES license.
• A local repository/server to host the SLES 11.4 installation ISO. The preferred method is to set up one or more central servers to host the installation ISO via NFS. If NFS is not desired as a means to access the installation ISO, the ISO may be copied to each server and that local ISO can be configured as an installation repository.
  • The preference is to use servers on the network to host the SLES 11.4 installation ISO and to configure NFS to export the ISO image. These servers must not be behind any firewalls.
  • If the NWV server or another server is not readily network-accessible by all systems, the recommendation is to use the ROS for each region as a local repository for all of the systems under that region (ROS + probes).
  • nfs-kernel-server-1.2.3-18.38.43.1.x86_64.rpm is required if an NFS server package is not already installed on the server(s) that will host the installation ISO.

Configuring the Server(s) Hosting the SLES 11.4 Repository


After the following configuration steps are performed on one or more servers, the SLES 11.4 installation ISO (DVD1) will be accessible from these servers via NFS. The instructions in this section assume that the NFS server is running SLES 11.3/11.4; an NFS server can be set up on other operating systems, but the instructions will vary. There must not be a firewall between the NFS server configured in this section and the server that will be updated from SLES 11.3 to 11.4.
1. Create a screen session (with a useful name such as "upgrade") after
logging in as hammer and switching to the root user using "su"; if
you've logged in as root, just create a screen session.
> su -
Password:
# screen -S upgrade
<screen will clear>
#

Installing the NFS Server


2. Check whether the NFS server package is already installed. If the command returns a value, skip to "Configuring the NFS Server"; otherwise, continue to the next step to install the NFS server package.
# rpm -qa | grep -i nfs-kernel-server
nfs-kernel-server-1.2.3-18.38.43.1
3. Install the NFS server package:

# rpm -ivh nfs-kernel-server-1.2.3-18.38.43.1.x86_64.rpm

Configuring the NFS Server


4. Create a base directory for SLES 11.4-related ISO images and reposi-
tories. This will be exported via NFS so that it can be remotely
mounted/accessed when upgrading servers from SLES 11.3 to 11.4.
# mkdir -p /home/hammer/share/SLES-11.4
# chmod a+rx /home/hammer/share/SLES-11.4


5. Download SLES-11-SP4-DVD-x86_64-GM-DVD1.iso to /home/hammer/share/SLES-11.4 and make sure it is world-readable.
# chmod a+r /home/hammer/share/SLES-11.4/SLES-11-
SP4-DVD-x86_64-GM-DVD1.iso
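If the ISO was downloaded on another host, one way to stage it in this directory is with scp; <user>, <download-host>, and <path-to-iso> are placeholders:
# scp <user>@<download-host>:<path-to-iso>/SLES-11-SP4-DVD-x86_64-GM-DVD1.iso /home/hammer/share/SLES-11.4/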
6. Configure the NFS server using YaST.

# yast nfs_server add mountpoint=/home/hammer/share/SLES-11.4 hosts=* options="ro,root_squash,sync,no_subtree_check" verbose
Ready
Initializing
Finishing
# yast nfs_server start
# yast nfs_server summary
NFS server is enabled
NFS Exports
* /home/hammer/share/SLES-11.4
NFSv4 support is enabled.
The NFSv4 domain for idmapping is localdomain.
NFS Security using GSS is disabled.
7. Check that the NFS server is running.
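A minimal sanity-check sketch, assuming the standard SLES init script name 'nfsserver' and that the nfs-client package providing 'showmount' is installed:
# /etc/init.d/nfsserver status
# showmount -e localhost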

Adding SLES 11.4 Repository to the Server


To upgrade each server from SLES 11.3 to 11.4, the zypper Update Management command "dist-upgrade" (also known as "dup") is invoked after the SLES 11.4 repository has been added to the server. Although the SLES 11.4 repository can be accessed as a local ISO file, the recommended method is to configure one or more servers to host the repository (see "Configuring the Server(s) Hosting the SLES 11.4 Repository") and then add the ISO as a repository that will be accessed via NFS.

Remote Repository (Recommended)


8. Add the SLES 11.4 installation ISO as a software repository to the local server. The ISO image will be accessed via NFS. Replace <Repo-Server-IP-Address> with the IP address of the server that was set up as an NFS server to host the ISO (see "Configuring the Server(s) Hosting the SLES 11.4 Repository").
# zypper ar "iso:/?iso=SLES-11-SP4-DVD-x86_64-GM-DVD1.iso&url=nfs://<Repo-Server-IP-Address>/home/hammer/share/SLES-11.4" "SLES 11.4"
Adding repository 'SLES 11.4' [done]
Repository 'SLES 11.4' successfully added
Enabled: Yes
Autorefresh: No
GPG check: Yes
URI: iso:///?iso=SLES-11-SP4-DVD-x86_64-GM-
DVD1.iso&url=nfs://10.93.101.12/home/hammer/share/
SLES-11.4
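Optionally, confirm that the repository alias "SLES 11.4" now appears in the repository list and is enabled:
# zypper lr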
9. Make sure the repositories have been refreshed.

# zypper refresh
Retrieving repository 'SLES 11.4' metadata [done]
Building repository 'SLES 11.4' cache [done]
Repository 'exms-common' is up to date.
Repository 'exms-sles' is up to date.
Repository 'exms-wireshark' is up to date.
Retrieving repository 'ranvision-sles' metadata
[done]
Retrieving repository 'thirdparty-sles-3.0.101-
0.47.67-default' metadata [done]
All repositories have been refreshed.

Local Repository (Alternative to Remote Repository)


In cases where it is more desirable for the ISO image to be accessed locally, an NFS server does not need to be set up. Instead, the ISO image needs to be copied to the local server and the repository added with a different syntax.
1. Create a base directory for SLES 11.4-related ISO images and reposi-
tories.
# mkdir -p /home/hammer/share/SLES-11.4
# chmod a+rx /home/hammer/share/SLES-11.4
2. Download SLES-11-SP4-DVD-x86_64-GM-DVD1.iso to /home/hammer/share/SLES-11.4 and make sure it is world-readable.
# chmod a+r /home/hammer/share/SLES-11.4/SLES-11-
SP4-DVD-x86_64-GM-DVD1.iso
3. Add the SLES 11.4 installation ISO as a software repository to the
local server.
# zypper ar "iso:/?iso=/home/hammer/share/SLES-11.4/
SLES-11-SP4-DVD-x86_64-GM-DVD1.iso" "SLES 11.4"
Adding repository 'SLES 11.4' [done]
Repository 'SLES 11.4' successfully added
Enabled: Yes
Autorefresh: No
GPG check: Yes
URI: iso:///?iso=/home/hammer/share/SLES-11.4/SLES-
11-SP4-DVD-x86_64-GM-DVD1.iso
4. Make sure the repositories have been refreshed.

# zypper refresh
Retrieving repository 'SLES 11.4' metadata [done]
Building repository 'SLES 11.4' cache [done]
Repository 'exms-common' is up to date.
Repository 'exms-sles' is up to date.
Repository 'exms-wireshark' is up to date.
Retrieving repository 'ranvision-sles' metadata
[done]
Retrieving repository 'thirdparty-sles-3.0.101-
0.47.67-default' metadata [done]
All repositories have been refreshed.


Disable the SuSE 11.3 Repository


To prevent the SuSE 11.3 repository from interfering with the upgrade, you can disable it on the system.
1. List all the repositories on the system.

# zypper lr
2. Disable the SuSE 11.3 repository.

# zypper mr -d "<SuSE 11 SP3 repository name>"

For example:
# zypper modifyrepo -d "SUSE-Linux-Enterprise-Server-11-SP3 11.3.3-1.138"
3. Make sure the repositories have been refreshed.

# zypper refresh

Performing the Distribution Upgrade to SLES 11.4


1. Create a screen session (with a useful name such as "upgrade") after
logging in as hammer and switching to the root user using "su"; if
you've logged in as root, just create a screen session.
> su -
Password:
# screen -S upgrade

<screen will clear>

#
2. Perform the distribution upgrade that will update all relevant packages
to upgrade the server from SLES 11.3 to 11.4. The distribution upgrade
will need to be run a couple of times to ensure that all packages have
been upgraded. Confirm that the SuSE release has officially been
updated to 11.4. Commands and user input have been highlighted in
green below to make them more visible.
# zypper dup -r "SLES 11.4" -l
153 packages to upgrade, 60 to downgrade, 5 new, 5 to remove.
Overall download size: 287.1 MiB. After the operation, additional 24.3 MiB will be used.
Continue? [y/n/? shows all options] (y): <Enter>
<output truncated>


There are some running programs that use files deleted by recent upgrade. You may wish to restart some of them. Run 'zypper ps' to list these programs.
# zypper dup -r "SLES 11.4" -l
Loading repository data...
Reading installed packages...
Computing distribution upgrade...
The following items are locked and will not be
changed by any action:
Available:
adaptec-firmware atmel-firmware brocade-firmware
icom-firmware ipw-firmware
libqt4-sql-mysql libqt4-sql-sqlite mpt-firmware
pcmciautils sendmail sles-tuning_en-pdf
yast2-qt yast2-qt-pkg
The following NEW package is going to be installed:
openssh-askpass
1 new package to install.
Overall download size: 23.0 KiB. After the
operation, additional 46.0 KiB will be used.
Continue? [y/n/? shows all options] (y): <Enter>
Retrieving package openssh-askpass-1.2.4.1-
1.46.x86_64 (1/1), 23.0 KiB (46.0 KiB unpacked)
Installing: openssh-askpass-1.2.4.1-1.46 [done]
# cat /etc/SuSE-release
SUSE Linux Enterprise Server 11 (x86_64)
VERSION = 11
PATCHLEVEL = 4
3. To complete the upgrade, a reboot will need to be performed.

# reboot
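After the reboot, you can optionally re-check the release level (PATCHLEVEL should read 4, as shown above) and the running kernel version:
# cat /etc/SuSE-release
# uname -r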

APPENDIX E Configuring Dynamic Linksets for GTP/GTPv2 Traffic
This appendix explains how to configure traffic monitoring on a probe that supports a mixture of GTP and GTPv2 traffic on the same linkset.
This linkset mechanism supports a traffic configuration that associates the user plane to the control plane for the following LTE interfaces:
• Gn: GTPv1 CP and UP
• S4: GTPv2 CP and UP
• S11/S12: GTPv2 CP on S11 and S1-U (S1-U must be associated to the S11 CP)
• S5/S8: GTPv2 CP and UP
NOTE: No VLAN ID, IP address ranges, or SFP can be used to distinguish interfaces.

Configuring Probes from the Control GUI


To configure dynamic linksets on a probe, you must remove the test.xml file and reconfigure the probe from the Control GUI.
1. From the Command Line Interface, stop the IPX GUID:

/etc/init.d/ipxguid stop
2. Stop the hmmonitor:

/etc/init.d/hmmonitor stop
3. Manually remove the test.xml file.

4. Edit the parameters.ini file and set:

ip_up_thread_enabled=1
For information about available parameters, refer to the table in "Configuring Linkset Parameters," page E-3.
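A sketch of checking and changing the value from the CLI, assuming the parameters.ini location given under "Configuring Linkset Parameters" and that the parameter already exists on its own line in the file; you can also simply edit the file with a text editor:
# grep ip_up_thread_enabled /home/hammer/hmcommon/etc/parameters.ini
# sed -i 's/^ip_up_thread_enabled=.*/ip_up_thread_enabled=1/' /home/hammer/hmcommon/etc/parameters.ini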
5. From the CLI, start the IPX GUID:

/etc/init.d/ipxguid start


6. Configure all the requested modules.

7. Enable PCAP recording (it is not enabled by default).

a. Go to the Probe GUI.

b. Select Data Recording.

c. Enable Packets, Header and Payload.

8. From the CLI, start hmmonitor:

/etc/init.d/hmmonitor start
When enabled, all the user plane modules over the tunnel in the test.xml file are set as "_up".
Example
From the probe Control GUI, enable:
• Control Plane: Gn, S5S8
• User Plane: http, ftp
In test.xml you will see the following xml nodes:
<module id="http_up" name="http"> -----> HTTP over the generic tunnel
<module id="ftp_up" name="ftp"> -----> FTP over the generic tunnel
If disabled (ip_up_thread_enabled=0), the xml nodes are not removed from the test.xml file.
To disable the feature and use the probe without the generic thread (ip_up), follow these steps from the CLI:
1. Disable the thread in the parameters.ini file (ip_up_thread_enabled=0).

2. Stop the IPX GUID:

/etc/init.d/ipxguid stop
3. Manually remove the test.xml file:


rm /home/hammer/hmipxprobe/etc/test.xml
4. Start the IPX GUID:

/etc/init.d/ipxguid start
5. Configure the probe using the Control GUI.

Configuring Linkset Parameters


To configure linkset parameters, edit the parameters.ini file in the following location:
/home/hammer/hmcommon/etc/parameters.ini

Parameter                           Description                                                     Default Value
ip_up_thread_enabled                Enables the ip_up threads and the automatic association of     0
                                    UP to CP without linksets. To associate the UP to CP, the
                                    transport info based mechanism is used.
                                    IMPORTANT: When this feature is enabled, the linkset
                                    mechanism is supported.
multithread_GTPv2                   Enables the use of a dedicated thread for the                  0
                                    GTPv2Dispatcher module. If enabled, the GTPv2Dispatcher
                                    uses the first free "ip" thread starting from the last
                                    index. Example: when there are 8 "ip" threads under the
                                    tunnel, the GTPv2Dispatcher uses the thread with id=7.
                                    Also available with ip_up_thread_enabled disabled.
multithread_S11                     The number of threads used by the S11 module.                  1
                                    Also available with ip_up_thread_enabled disabled.
multithread_S2b                     The number of threads used by the S2b module.                  1
                                    Also available with ip_up_thread_enabled disabled.
multithread_S3S10                   The number of threads used by the S3S10 module.                1
                                    Also available with ip_up_thread_enabled disabled.
multithread_S4S12                   The number of threads used by the S4S12 module.                1
                                    Also available with ip_up_thread_enabled disabled.
multithread_S5S8                    The number of threads used by the S5S8 module.                 1
                                    Also available with ip_up_thread_enabled disabled.
gtp_by_update_only_with_user_info   If set to 1, a GTP ASDR is opened by update only if user       0
                                    info (IMSI, MSISDN) is present.
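For illustration only, a fragment of parameters.ini using the name=value syntax shown earlier might look like the following; the parameter names and defaults come from the table above, while the rest of the file and any section headers are omitted and may differ on your system:
ip_up_thread_enabled=1
multithread_GTPv2=0
multithread_S11=1
multithread_S5S8=1
gtp_by_update_only_with_user_info=0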
