
Migrating an Integrity HP-UX 11i v3 Instance to New Hardware

August 2011

Technical white paper

Contents
Introduction
Overview of the DRD method
Assumptions for the DRD method
Related Information
Summary of Steps for the DRD method
Use the DRD clone to migrate to new hardware
    Step 1: Create a DRD clone of the source system
    Step 2: Modify file system sizes on the clone if needed
    Step 3: Identify and install additional target software on the clone
    Step 4: Determine additional kernel content that is needed on the target
    Step 5: Build a kernel on the clone suitable for the target
    Step 6 (Optional): Adjust target kernel tunables if needed
    Step 7: Set the system identity on the clone for boot on the target
    Step 8: Mark the clone LUN for identification from the target EFI
    Step 9: Move scenario: Disable the source system
    Step 10: Move storage
    Step 11: Boot the clone on the target system
    Step 12: Test the target system
    Step 13: If not successful, revise and repeat
Overview of the Ignite-UX Recovery Method
Assumptions for the Ignite-UX Recovery Method
Related Information
Detailed Migration Steps for the Ignite-UX Recovery Method
    Step 1: Install the latest IGNITE bundle on the Ignite-UX server
    Step 2: Make a recovery archive of the source system
    Step 3: Copy source client config to the target config
    Step 4: Clone scenario: Give the target read access to source recovery archive
    Step 5: Remove or modify the network information for the target
    Step 6: Change a recovery-related variable in control_cfg
    Step 7: Create depots with additional software for the target
    Step 8: Ensure iux_postload scripts from the recovery archive run correctly
    Step 9: Create config files for the new depots
    Step 10: Add the new config files to the target CINDEX
    Step 11: Modify file system sizes if needed
    Step 12: Check configuration syntax
    Step 13: Move scenario: Disable the source system
    Step 14: Deploy the configuration to the target system
    Step 15: Boot and test the target system
    Step 16: Return to the source, if the target is not as desired
Call to action

Introduction
Customers often find that they want to move an existing instance of an HP-UX installation to new
hardware. The change may be to a new computer model or to a system of the same model with
additional I/O or networking capacity. The move may provide greater compute capacity, more
expansion for the future, lower power and cooling costs, and better use of data center real estate.

In such cases, the prospect of “setting the new system up from scratch” can be daunting, since it may
involve identifying and restoring all customizations made to the system. It is often preferable to move
the previous system boot disk, either literally, or through a mechanism such as moving a DRD clone of
the system, or deploying an Ignite-UX recovery image of the system, to the new hardware.

This paper provides complete descriptions of two techniques that can be used to migrate a pre-
existing HP-UX 11i v3 system to a new computer. Which technique is preferable depends on your
circumstances. The following table lists some common migration situations along with the
recommended method for each:

If this is your situation                    Consider using this method

• If you are moving the entire system – data as well as boot disks:
  Move the boot disk or DRD clone, along with all data. Device files are
  preserved, and very little additional setup is needed.

• If you do not have an Ignite-UX server set up already:
  Move the boot disk or a DRD clone.

• If you have many boot disks on a given system or hard partition, making it
  challenging to identify a particular boot disk:
  Deploy an Ignite-UX recovery image.

• If your root disk is managed by VxVM:
  Deploy an Ignite-UX recovery image. DRD rehosting does not support VxVM
  roots.

• If you need to clone or move an instance of HP-UX 11i v1:
  Consult “Successful System Cloning using Ignite-UX”, located at
  http://www.hp.com/go/ignite-ux-docs, to see if this is possible. DRD is not
  available on HP-UX 11i v1.

• If you only use internal boot disks that are not hot-pluggable:
  Deploy an Ignite-UX recovery image.

• If you are moving a “physical system” to a vPar:
  If the physical source system does not support vPars, deploy an Ignite-UX
  recovery image plus an additional depot of vPars software. If the physical
  system already has vPars installed, move the boot disk or DRD clone, or
  deploy an Ignite-UX recovery image.

• If you are moving a “physical system” to a VM:
  Create the guest with hpvmcreate. Prepare the DRD clone, or the Ignite-UX
  recovery image, being sure to add the module hpvmdynmem and install the
  additional VM guest depot from the VM host. See also “Using Ignite-UX with
  Integrity Virtual Machines” located at http://www.hp.com/go/hpux-hpvm-docs.

• If you want to move an entire set of vPars to a next-generation server:
  Set up the vPars on the new hardware from the iLO, then either move DRD
  clones or deploy Ignite-UX recovery images.

• If you are moving a VM guest from one VM host to a new VM host:
  Use online or offline VM migration.

• If you are moving from PA to IPF:
  Install the new system from depots.

Overview of the DRD method


The overall approach used here is to create a DRD clone on an older “source” system, modify it to
support the new “target” system model, and move it to the new target system. The recommended steps
detail the upgrades and changes that must be accomplished on the source system, as well as those
that must be defined for automatic consumption during boot of the target system.

Note:
A system administrator may choose to move the actual boot
disk from the source system to the target. To do this, the
changes that follow should be applied to the boot disk
rather than to the DRD clone. However, this makes it
somewhat more difficult to revert to the original hardware if
issues are encountered. For this reason, the procedure
below is described for a DRD clone.

Assumptions for the DRD method


• DynRootDisk is installed on the source system and on the DRD clone of the source system at
release B.11.31.A.3.6 or above. This release, supplied with the September 2010 media,
includes support for kernel management on the clone using “drd runcmd mk_kernel”, “drd
runcmd kcmodule” and “drd runcmd kconfig” (as well as the previously supported “drd
runcmd kctune”).
• The release of HP-UX on the source system is 11iv3.
• The root group is managed by LVM.

Related Information
Additional information regarding Dynamic Root Disk can be obtained from the DRD documentation
web site located at: http://www.hp.com/go/drd-docs. The documents located on this site include the
following:

• Dynamic Root Disk Administrator’s Guide


• Dynamic Root disk Quick Start & Best Practices
• Exploring DRD Rehosting in HP-UX 11iv2 and 11iv3

Summary of Steps for the DRD method


In some cases the goal is to move a pre-existing HP-UX instance to new hardware; in other cases, the
goal is to deploy a very similar system (same HP-UX release, same patches, etc.) with a different
network identity (hostname, MAC and IP addresses, etc.). The difference between the steps needed
for these two scenarios is small, so both scenarios are covered here. To distinguish between these
similar scenarios, the first is called the move scenario, and the second is called the clone scenario.
The following steps are used in both scenarios:

1. Create a DRD clone of the source system on storage that can be moved to the target.
2. Modify the file system sizes on the clone, if needed.
3. Identify and install additional target software on the clone.
4. Determine additional kernel content that is needed on the target.
5. Build a kernel on the clone suitable for the target.
6. Optional: Adjust target kernel tunables, if needed.
7. Set the system identity on the clone for boot on the target.
8. Mark the clone LUN for identification from the target EFI.
9. Move scenario: Disable the source system.
10. Move storage.
11. Boot the clone on the target system.
12. Test the target system.
13. If the target does not satisfy expectations, repeat the process.

Use the DRD clone to migrate to new hardware


Step 1: Create a DRD clone of the source system
The DRD clone must be created on storage that can be moved to the target (such as a SAN LUN or
hot-pluggable SAS disk) and must be supported as a boot device on the target.

To ensure that the latest technology and enhancements are used for creating the DRD clone,
download the most recent release of DRD from http://www.hp.com/go/drd.

Step 2: Modify file system sizes on the clone if needed


Use /usr/bin/bdf on the source system to determine how fully allocated the source file systems
are. If more free space is desirable, refer to the white paper Dynamic Root Disk: Quick Start and Best
Practices, available at http://www.hp.com/go/drd-docs for information about resizing clone file
systems.
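As a quick check, output in bdf's column format can be filtered for nearly full file systems. The sketch below runs the filter against sample data in a here-document; on a live system you would instead pipe /usr/bin/bdf into the same awk command, and the 90% threshold is an arbitrary example value.

```shell
# Flag file systems more than 90% used from bdf-style output.
# Sample data is supplied via a here-document; on a live system,
# replace the here-document with: /usr/bin/bdf | awk '...'
awk 'NR > 1 { sub(/%/, "", $5); if ($5 + 0 > 90) print $6, $5 "%" }' > bdf_full.out <<'EOF'
Filesystem          kbytes    used   avail %used Mounted on
/dev/vg00/lvol3    1048576  943718  104858   92% /
/dev/vg00/lvol4    2097152  419430 1677722   20% /tmp
EOF
cat bdf_full.out
```

Any file system listed by the filter is a candidate for resizing on the clone before the migration.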

Step 3: Identify and install additional target software on the clone


The target system may need additional software, particularly if the system differs greatly from the
source. To ensure that all needed kernel modules, as well as any other required software, are
available, consult the Errata document for the target system. The Errata document can be found by a
web search of the model and the string “Errata”. There may be both an HP-UX Errata site and a
System Errata site for a given model.

Create one or more depots containing all needed software. It is probably more convenient to create
the depots so that any single depot does not contain multiple versions of the same software.

For example, if the target system is a BL860C i2 system, the Errata document available in May, 2010
indicates that HP-UX 11iv3 OE Update Release for March 2010 and additional software listed in the
document are required. For migration to a BL860C i2 system, it is probably most convenient to
create or use an existing depot containing the Operating Environment (Data Center, Virtual Server,
High Availability, or Base) from March 2010 that you intend to install, and create one or more
additional depots with the remaining software documented in the Errata and applicable to your
environment. Note that software that implements firmware changes cannot be installed on the clone.
Firmware-modifying software must be installed on the target system after the drd rehosting procedure
is complete.

If desired, you can use the swcopy command to copy software from various source depots to one or
more directory depots. Note that if you are copying a serial (tar format) depot, you need to issue the

swcopy command from the system containing the serial depot. See swcopy (1M) for more
information.

Once the target software has been included in accessible depots, install it to the clone. If one of the
depots contains an entire new Operating Environment, the first installation should be run using “drd
runcmd update-ux” with the Operating Environment depot as a source:

drd runcmd update-ux -s <OE_depot_location>

Other depots can be installed after the update using drd runcmd swinstall as follows:

drd runcmd swinstall -s <directory_depot_location> <software_selection>

(Note that the command drd runcmd does not support serial depots.)

Step 4: Determine additional kernel content that is needed on the target
Once all the required software is installed on the clone of the source system, a kernel can be created
that contains all drivers that are needed on the target. The following procedure can be used to
identify drivers needed in the kernel for the target. This step makes minimal use of Ignite-UX, but does
not require that an Ignite-UX server be available.

1. Log in to the console of the target system with an X Windows interface that supports cut and paste.
2. Initiate an install of HP-UX from media by booting from an installation DVD. Alternatively, initiate
the installation from an Ignite-UX server that can be accessed across the network from the target
system. In either case, the version of Ignite-UX used must support the target hardware. The
simplest way to ensure that this is the case is to use the latest version of Ignite-UX, or use install
media supplied for the target hardware.
3. From the install menu on the target, select Run an Expert Recovery Shell.
4. If you are using install media, select n to the prompt to start networking; otherwise select y.
5. Select “l” to load a file.
6. Enter the list by issuing the command:
/usr/bin/sort /usr/bin/rm /sbin/date /usr/lbin/sysadm/create_sysfile
7. Confirm the list, then press <return> to continue, and choose x to exit to a shell.
8. Issue the command:
# /usr/lbin/sysadm/create_sysfile /RAMFS1/system_new_hw
# cat /RAMFS1/system_new_hw
9. Cut and paste the contents of /RAMFS1/system_new_hw to a convenient location on the
source system (or its clone), such as /stand/system_new_hw.

Step 5: Build a kernel on the clone suitable for the target


1. On the source system, mount the clone:
drd mount
2. In a convenient location on the source system, such as
/usr/local/bin/merge_system_files, create the script shown in Figure 1, which is used
to merge the system file needed for the new hardware to the system file on the clone of the source
system.
3. Merge the system file for the new hardware to the system file for the source system clone:
/usr/local/bin/merge_system_files /stand/system_new_hw
4. Build the kernel on the source system clone to include drivers for the target hardware:
drd runcmd mk_kernel -o /stand/vmunix

#!/usr/bin/sh
#
# merge_system_files - Merges the system file from the new hardware
#                      into the system file on the source clone.
# $1 - system file from the new hardware, to be merged into the
#      system file on the DRD clone.
#
typeset -i module_found
system_new_hw=$1
system_merged=/var/opt/drd/mnts/sysimage_001/stand/system

cp -p ${system_merged} ${system_merged}.save

cat ${system_new_hw} |
while read module_keyword module_name module_state
do
    module_found=0
    # Skip obsolete drivers.
    case $module_name in
        "fcd_fcp" | "fcd_vbus" | "usb_ms_scsi" | "sasd_vbus" )
            continue
            ;;
        *)
            if [[ ${module_keyword} = "module" ]]
            then
                grep ${module_name} ${system_merged} |
                while read mod_keyword mod_name rest
                do
                    if [[ ${mod_keyword} = "module" ]]
                    then
                        if [[ ${module_name} = ${mod_name} ]]
                        then
                            module_found=1
                        fi
                    fi
                done
                if [[ ${module_found} -eq 0 ]]
                then
                    echo "Adding module ${module_name} ..."
                    echo "${module_keyword} ${module_name} ${module_state}" >> \
                        ${system_merged}
                fi
            fi
            ;;
    esac
done

Figure 1 /usr/local/bin/merge_system_files
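The merge logic in Figure 1 can be rehearsed off-line with scratch files before it is applied to a real clone. The sketch below is a minimal, portable restatement of the same idea; the file names and module names (igelan, fclp) are placeholders, not real /stand/system content.

```shell
# Scratch rehearsal of the Figure 1 merge: append any "module" line from
# the new-hardware system file that is absent from the clone's system file.
new=./system_new_hw merged=./system_clone
printf 'module igelan best\nmodule fclp best\n' > "$new"
printf 'module igelan best\n' > "$merged"
while read kw name state; do
    [ "$kw" = module ] || continue
    # Only add the module if it is not already present in the merged file.
    grep "^module ${name} " "$merged" > /dev/null ||
        echo "module ${name} ${state}" >> "$merged"
done < "$new"
cat "$merged"
```

After the rehearsal, the merged scratch file contains the original igelan entry once, plus the fclp entry added from the new-hardware file.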

Step 6 (Optional): Adjust target kernel tunables if needed


1. To determine kernel tunables on the source clone, issue the command:
drd runcmd kctune

2. To change tunable settings which are known to be different for the target, issue the command:
drd runcmd kctune <tunable_name>=<tunable_value>

Step 7: Set the system identity on the clone for boot on the target
This step is ordinarily needed for both the “move” and “clone” scenarios, since the MAC address(es)
on the target will differ from those on the source. (The exception to this is the case where your MAC
addresses have been virtualized through Virtual Connect, and you are moving the VC profile to the
new system.)

To set the identity of the target (hostname, mapping of IP addresses to network interfaces, language,
time zone, etc.) perform the following while logged onto the source system as root:

1. Create a sysinfo file modeled on the template supplied in


/etc/opt/drd/default_sysinfo_file. This file contains hostname, IP addresses or DHCP
information, and other customizing information.
If you prefer to wait until the target system boots to supply this information, leave the parameter
SYSINFO_INTERACTIVE set to ALWAYS. Otherwise, comment out this variable and set the
values for other variables in the sysinfo file.

Additional information regarding the content and syntax of the sysinfo file is available in the
sysinfo(4) manpage, packaged in PHCO_39064 or any superseding patch.

A sample sysinfo file, including the required parameter SYSINFO_HOSTNAME, appears below.

SYSINFO_HOSTNAME=myhost
SYSINFO_DHCP_ENABLE[0]=0
SYSINFO_MAC_ADDRESS[0]=0x0017A451E718
SYSINFO_IP_ADDRESS[0]=192.2.3.4
SYSINFO_SUBNET_MASK[0]=255.255.255.0
SYSINFO_ROUTE_GATEWAY=192.2.3.75
SYSINFO_ROUTE_DESTINATION[0]=default
SYSINFO_ROUTE_COUNT[0]=1
SYSINFO_DNS_DOMAIN=ours
SYSINFO_DNS_SERVER=192.2.3.50

2. Issue the command:


drd rehost -f <sysinfo_file_location>

For additional information about the drd rehost command, see the chapter “Rehosting and
unrehosting systems” in the Dynamic Root Disk Administrator’s Guide, available at
http://www.hp.com/go/drd-docs, and in the drd-rehost(1M) manpage.
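In the clone scenario, the same sysinfo skeleton is stamped with a different identity for each target. The following sketch writes a minimal file from shell variables; the hostname, MAC, and IP values are placeholders taken from the sample above, and the file is written to the current directory rather than to a path used by drd rehost.

```shell
# Generate a minimal sysinfo file for drd rehost from shell variables.
# SYSINFO_HOSTNAME is the only required parameter; all values here are
# placeholders to be replaced per target.
HOST=myhost IP=192.2.3.4 MASK=255.255.255.0 MAC=0x0017A451E718
cat > "sysinfo.$HOST" <<EOF
SYSINFO_HOSTNAME=$HOST
SYSINFO_DHCP_ENABLE[0]=0
SYSINFO_MAC_ADDRESS[0]=$MAC
SYSINFO_IP_ADDRESS[0]=$IP
SYSINFO_SUBNET_MASK[0]=$MASK
EOF
grep '^SYSINFO_HOSTNAME=' "sysinfo.$HOST"
```

The generated file can then be passed to drd rehost -f, one file per target identity.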

Step 8: Mark the clone LUN for identification from the target EFI
It can be challenging to identify the clone LUN after it is moved to the target system. Since the LUN is
partitioned, it is displayed with “fs” entries. However, if multiple partitioned disks are visible from
the EFI menus of the target, an extra “marker” can help to identify the LUN.

To create the marker, issue the following on the source system:

# touch /tmp/move_to_new_hw
# efi_mkdir -d <dsf of EFI partition of clone> EFI/HPUX/DRD
# efi_cp -d <dsf of EFI partition of clone> /tmp/move_to_new_hw \
EFI/HPUX/DRD/move_to_new_hw

Step 9: Move scenario: Disable the source system


If you are moving the system (keeping the same hostname and network identity) to a location on the
same network, shutdown the source system or remove it from the network.

Step 10: Move storage
Use the interface to the SAN storage management (or move a hot-pluggable disk) to move the clone
from the source system to the target. Depending on your system setup and whether you are executing
a “move” or “clone” scenario, you may also want to move additional non-boot storage to the clone.

Step 11: Boot the clone on the target system


By default, the EFI screens do not display SAN LUNs unless they are already known to be
boot disks. The first time the moved disk is booted, the SAN scan must be enabled. The steps
required to do this vary somewhat by system model and firmware revision, but the process is similar
to the following steps, executed from the EFI shell on the target system.

1. Issue the command:


Shell> drivers -b
and identify the Fibre Channel driver.
2. Issue the command:
Shell> drvcfg <driver number>
and identify the controller number of the driver.
3. To display the drvcfg menu, issue the command
Shell> drvcfg -s <driver number> <controller number>
4. Select Option 4: Edit Boot Settings
5. Select Option 6: EFI Variable EFIFCScanLevel
6. Enter y to create the variable.
7. Enter 1 to set the value of the EFIFCScanLevel.
8. Enter 0 to go to the Previous Menu.
9. Enter 12 to quit.
10. To rescan the devices, issue the following commands:
Shell> reconnect -r
Shell> map -r

After the SAN scan is enabled, identify the boot disk, using the marker created earlier if needed.
To find the marker, perform the following additional steps:

1. Issue the command:


Shell> fs0:
2. If no error is issued, issue the command:
Shell> cd EFI\HPUX\DRD
3. To look for the marker, issue the command:
Shell> dir

If the marker is not found on fs0, choose fs1: and continue until the marker is found.
You can then run \EFI\HPUX\hpux.efi on the disk containing the marker to boot the system.

Step 12: Test the target system


Moving a disk from one machine to another can lead to configuration problems if careful planning
is not employed before the move. For example, the fstab file may contain entries for volumes that
do not exist on the new system. After moving the disk, make sure the fstab, lvmtab, lvmtab_p,
and lvmpvg files reflect volumes and volume groups that are actually present on the new system.
You may need to present additional LUNs to the target system and import volume groups using the
vgimport(1M) command. Also, persistent dump device entries may no longer reflect appropriate dump
devices on the new system. Use crashconf(1M) to reconfigure dump devices if needed.
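A quick check for stale fstab entries after the move is to report any entry whose device special file does not exist. The sketch below runs against a sample fstab written to the current directory; on the migrated system, you would point FSTAB at /etc/fstab instead.

```shell
# Report fstab entries whose device special file is missing: a quick
# post-move sanity check. A sample fstab is used here; on the migrated
# system, set FSTAB=/etc/fstab.
FSTAB=./fstab.sample
cat > "$FSTAB" <<'EOF'
# comment lines and blanks are skipped
/dev/null        /      vxfs  delaylog  0 1
/dev/vg99/lvol1  /data  vxfs  delaylog  0 2
EOF
while read dev mnt rest; do
    case $dev in ''|\#*) continue ;; esac
    [ -e "$dev" ] || echo "missing device: $dev ($mnt)"
done < "$FSTAB" > fstab_missing.out
cat fstab_missing.out
```

Each device reported either needs its LUN presented to the target (followed by vgimport) or its fstab entry removed.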

You may also want to run /usr/sbin/setboot to set the primary bootpath to the current boot disk
as determined by vgdisplay. On the next reboot, you can speed up subsequent boots by using the
steps above to reset EFIFCScanLevel back to 0.

Step 13: If not successful, revise and repeat.


If the target system does not work as expected, the target can be shut down or removed from the
network and the source system rebooted. Additional software or kernel content can be installed and
the procedure repeated.

Overview of the Ignite-UX Recovery Method


The overall approach for the Ignite-UX recovery method is to deploy an Ignite-UX recovery archive of
the HP-UX instance running on older hardware, called the source system, to the new system, called the
target. The recommended steps detail the upgrades and changes that must be accomplished on the
source system prior to creating the recovery archive, on the Ignite-UX server after the recovery archive
is created, and during boot of the target system.

In some cases the goal is to move a pre-existing HP-UX instance to new hardware; in other cases, the
goal is to deploy a very similar system (same HP-UX release, same patches, etc.) with a different
network identity (hostname, MAC and IP addresses, etc.). The difference between the steps needed
for these two scenarios is small, so both scenarios are covered here. To distinguish between these
similar scenarios, the first is called the move scenario, and the second is called the clone scenario.

The following migration steps are described in more detail in the remainder of this whitepaper:

1. Install the latest IGNITE bundle on the Ignite-UX server.


2. Make a recovery archive of the source system.
3. Copy the source client config to the target config.
4. Clone scenario: Give the target read access to the source recovery archive.
5. Remove or modify the network information for the target.
6. Change a recovery-related variable in control_cfg.
7. Create depots with additional software for the target.
8. Ensure iux_postload scripts from the recovery archive run correctly.
9. Create config files for the depots.
10. Add the new config files to the target CINDEX.
11. Modify file system sizes if needed.
12. Check configuration syntax.
13. Move scenario: Disable the source system.
14. Deploy the configuration to the target system.
15. Boot and test the target system.
16. If the target is not as desired, return to the source system.

Assumptions for the Ignite-UX Recovery Method


The following assumptions are made in using this method for the migration of a system to new
hardware:

• The HP-UX major release installed on the source system is 11iv3 or above. The use of the
“match” specification when installing an Operating Environment requires 11iv3.
• The target system will run on the same major release running on the source system. The
potential installation of the Operating Environment depot after the restore of the recovery
image requires that the recovery image and Operating Environment depot be the same major
release of HP-UX.
For example, if the source is running 11i v3, the target must support 11i v3 as well. The target may
need a newer revision of 11i v3, such as the March 2010 release, but not a new major release of HP-UX.

Related Information
Additional information regarding Ignite-UX can be obtained from the Ignite-UX documentation web
site, http://www.hp.com/go/ignite-ux-docs. This includes the following:

• Ignite-UX Administration Guide (March 2010, B3921-90006)


• Successful System Cloning using Ignite-UX
• Successful System Recovery using Ignite-UX
• Installing and Updating Ignite-UX

The white paper “Using Ignite-UX with Integrity Virtual Machines” provides additional information
regarding the use of Ignite-UX to set up Integrity Virtual Machines. It is available at the following:
http://www.hp.com/go/hpux-hpvm-docs

Detailed Migration Steps for the Ignite-UX Recovery Method


Step 1: Install the latest IGNITE bundle on the Ignite-UX server

This step is particularly important if the target system includes recently released hardware, such as a
new system model or new I/O or networking interfaces. In these cases, you need the latest Ignite-UX
install kernel to boot the system.

The most recent IGNITE bundle can be obtained from the Operating Environment media that provides
support for the new hardware, or it can be downloaded from http://www.hp.com/go/ignite-ux.

The whitepaper “Successful System Cloning using Ignite-UX”, available from
http://www.hp.com/go/ignite-ux-docs, contains additional information, including details for
determining the minimum release of Ignite-UX needed for specific hardware. The simplest approach
is to use the latest available release of Ignite-UX.

Step 2: Make a recovery archive of the source system


A recovery archive of the source system, to be deployed on the target system, can be initiated on the
source system or on the Ignite-UX server, and it can be run from the command line or from the
ignite user interface on the Ignite-UX server.

To create the recovery archive using the ignite user interface on the Ignite-UX server,
1. Run ignite.
2. Click the source system to highlight it. (If the source system is not displayed, choose
Actions->Add New Client for Recovery.)
3. Choose Actions->Create Network Recovery Archive and follow the prompts in the
wizard, specifying that the entire root volume group is to be included in the archive.
4. The Actions->Client Status choice will show the progress of the archive creation.

Alternatively, you can initiate a recovery from the source system with the following command:

/opt/ignite/bin/make_net_recovery -s <hostname of Ignite-UX server> -A

For example, if the Ignite-UX server hostname is ignsvr, the command would be:

/opt/ignite/bin/make_net_recovery -s ignsvr -A

For more information about creating recovery archives, see the “Recovery” chapter of the Ignite-UX
Administration Guide for HP-UX 11i, available at http://www.hp.com/go/ignite-ux-docs.

Step 3: Copy source client config to the target config


To make the following changes, log in to the Ignite-UX server as root. Note that these changes are
needed for both the “move” and “clone” scenarios, since the new system will have a different MAC
address from the source.

1. Determine the MAC address for the target system that will be used to connect to the Ignite-UX
server.
2. Create the MAC address directory in /var/opt/ignite/clients:
# cd /var/opt/ignite/clients
# su bin
$ umask u=rwx,g=rx,o=rx
$ mkdir <target MAC address>
3. If you are moving the system (as opposed to cloning), remove the symlink from target hostname to
its “old” MAC address:
# rm <target hostname>
4. Create a symlink from the target hostname to the directory just created:
$ ln -s <target MAC address> <target hostname>
5. Copy the CINDEX file and recovery directory from the source client directory:
$ cd <source hostname>
$ find CINDEX recovery | cpio -pdvma ../<target hostname>
6. Identify the “cfg” clause that is set to TRUE in the /var/opt/ignite/clients/<target
hostname>/CINDEX file. The subdirectory of the recovery directory containing
system_cfg, control_cfg, and archive_cfg has a <recovery-date-time> name in
the format “yyyy-mm-dd,hh:mm”. Ordinarily, the directory
/var/opt/ignite/clients/<target hostname>/recovery/latest is a symlink to the
directory /var/opt/ignite/clients/<target hostname>/recovery/<recovery-
date-time>. If this is not the case on your Ignite-UX server, you need to replace references to
the directory /var/opt/ignite/clients/<target hostname>/recovery/latest in
the directions that follow with the directory /var/opt/ignite/clients/<target
hostname>/recovery/<recovery-date-time>.

For example, if the source system is srcsys, the target system is tgtsys, and the MAC address for the
target system is 0x001560042B1, the sequence of commands would be as follows:

# cd /var/opt/ignite/clients
# su bin
$ umask u=rwx,g=rx,o=rx
$ mkdir 0x001560042B1
$ ln -s 0x001560042B1 tgtsys
$ cd srcsys
$ find CINDEX recovery | cpio -pdvma ../tgtsys

For the example, check whether


$ ll /var/opt/ignite/clients/tgtsys/recovery/latest
matches the cfg set to TRUE in /var/opt/ignite/clients/tgtsys/CINDEX.

Step 4: Clone scenario: Give the target read access to source
recovery archive
In the move scenario, the source and target systems have the same hostname, so the target system
already has network access to the recovery archive.

If the Ignite-UX server is running 11i v3 or later, edit the /etc/dfs/dfstab file to allow access to
both the source and target clients as follows:

1. Open the dfstab file:


# vi /etc/dfs/dfstab
2. Once open, append the following to the -o option list on the line for the source system:
,ro=<target hostname>

For example, if the source hostname is srcsys, and the target hostname tgtsys, change the line

share -F nfs -o anon=2,rw=srcsys \


/var/opt/ignite/recovery/archives/srcsys
to

share -F nfs -o anon=2,rw=srcsys,ro=tgtsys \


/var/opt/ignite/recovery/archives/srcsys

3. Re-share the exported file systems:
# shareall -F nfs

See the manpages dfstab(4) and share_nfs(1M) for more information.
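The dfstab change can also be applied with sed rather than an interactive editor. This is an illustrative sketch: the temporary file stands in for /etc/dfs/dfstab, and srcsys/tgtsys are the example hostnames used above. Review the .new file before moving it into place.

```shell
# Temporary stand-in for /etc/dfs/dfstab with the source system's share line
DFSTAB=$(mktemp)
cat > "${DFSTAB}" <<'EOF'
share -F nfs -o anon=2,rw=srcsys /var/opt/ignite/recovery/archives/srcsys
EOF

# Append ,ro=tgtsys to the -o option list on the srcsys archive line
sed -e '/archives\/srcsys/ s/rw=srcsys/rw=srcsys,ro=tgtsys/' \
    "${DFSTAB}" > "${DFSTAB}.new"
cat "${DFSTAB}.new"
```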

If the Ignite-UX server is running a release prior to 11i v3, edit the /etc/exports file to allow
access to both the source and target clients:

1. Open the exports file:


# vi /etc/exports
2. Once open, append the following to the access option on the source client's line:
:<target hostname>

For example, if the source hostname is srcsys, and the target hostname tgtsys, change the line

/var/opt/ignite/recovery/archives/srcsys -anon=2,access=srcsys

to

/var/opt/ignite/recovery/archives/srcsys -anon=2,access=srcsys:tgtsys

3. Re-export the file systems:
# exportfs -av

See exports(4) for more information.

Step 5: Remove or modify the network information for the target


In the clone scenario, the hostname, IP addresses, subnet masks, and other network information for
the target system will differ from that of the source system. Even in the move scenario, when the
hostname is retained, the IP address(es), subnet mask(s), and other information will probably need to
be modified.

A simple approach to setting up the networking on the target system is to remove or comment out the
network configuration information stored with the recovery archive, and supply the network identity
when the target system is deployed, either by specifying it directly on Ignite-UX menus, or by choosing
on the menus to supply the information when the system is first booted.

To remove the pre-existing network information, edit /var/opt/ignite/clients/<target
hostname>/recovery/latest/system_cfg to remove the _hp_custom_sys stanza.
Alternatively, the stanza may be commented out by inserting “#” in column 1 of the lines it contains.

Here is a sample of the _hp_custom_sys stanza that should be commented out:

#
# System/Networking Parameters
#
#_hp_custom_sys+={"HP-UX save_config custom sys"}
#init _hp_custom_sys="HP-UX save_config custom sys"
#_hp_custom_sys visible_if false
#(_hp_custom_sys=="HP-UX save_config custom sys") {
# final system_name="<source hostname>"
# final ip_addr["<source NIC hw path>"]="<source address>"
# final netmask["<source NIC hw path>"]="<source mask in hex>"
# final broadcast_addr["<source NIC hw path>"]="<broadcast>"
# init _hp_default_final_lan_dev="<source NIC hw path>"
# final route_destination[0]="default"
# final route_gateway[0]="<source gateway>"
# final route_count[0]=1
# final nis_domain="udl"
# final wait_for_nis_server=TRUE
# final dns_domain="<DNS domain>"
# final dns_nameserver[0]="<IP address of DNS server>"
# is_net_info_temporary=FALSE
#} # end "HP-UX save_config custom sys"

Prior to deploying the target system, determine the network configuration information needed for it.
This is the same information that is needed to cold install the target system from a depot, including
whether DHCP is used to manage the interfaces, IP addresses (if DHCP is not used), subnet masks,
gateways, and optional NIS and DNS servers.

If you prefer to modify the information in the system_cfg file itself, and have multiple network
interfaces on the target system, you may need to identify the hardware path for each NIC prior to
editing system_cfg. See instl_adm(4) for further information about the syntax of the
networking parameters in the system_cfg file.
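Commenting out the stanza can also be done with a sed address range rather than line by line in an editor. The sketch below runs against a trimmed, illustrative copy of the stanza; in practice apply the same sed command to a backup-protected copy of .../recovery/latest/system_cfg.

```shell
# Trimmed, illustrative _hp_custom_sys stanza; the real one lives in
# /var/opt/ignite/clients/<target hostname>/recovery/latest/system_cfg
SYSTEM_CFG=$(mktemp)
cat > "${SYSTEM_CFG}" <<'EOF'
_hp_custom_sys+={"HP-UX save_config custom sys"}
init _hp_custom_sys="HP-UX save_config custom sys"
_hp_custom_sys visible_if false
(_hp_custom_sys=="HP-UX save_config custom sys") {
final system_name="srcsys"
} # end "HP-UX save_config custom sys"
EOF

# Insert "#" in column 1 from the first stanza line through its closing brace
sed -e '/^_hp_custom_sys+=/,/^} # end "HP-UX save_config custom sys"/ s/^/#/' \
    "${SYSTEM_CFG}" > "${SYSTEM_CFG}.new"
```

The address range assumes the stanza begins with the _hp_custom_sys+= line and ends with the closing-brace comment, as in the sample above.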

Step 6: Change a recovery-related variable in control_cfg


The config file /var/opt/ignite/clients/<target
hostname>/recovery/latest/control_cfg contains information specific to deploying a
recovery archive to the same system. In both the move and the clone scenarios, the target is a
different system.

If the target system is a different model from the source system, then the setting of _HP_CLONING
does not need to be changed. However, if the target system is the same model but is configured with
different peripheral devices, you may want to ensure that the kernel is re-built on the target system by
modifying the setting of _HP_CLONING. To do this, edit the file
/var/opt/ignite/clients/<target hostname>/recovery/latest/control_cfg on the
Ignite-UX server and make the following change:
Change:

(MODEL == "<source system model>")
{ init _HP_CLONING = "FALSE" }
else
{ init _HP_CLONING = "TRUE" }

To:

init _HP_CLONING = "TRUE"
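If you prefer to script the edit, the conditional clause can be rewritten with awk. This is a sketch only: the model name is a placeholder, the temporary file stands in for .../recovery/latest/control_cfg, and the awk logic assumes the four-line clause layout shown above.

```shell
# Stand-in for /var/opt/ignite/clients/<target hostname>/recovery/latest/control_cfg
CONTROL_CFG=$(mktemp)
cat > "${CONTROL_CFG}" <<'EOF'
(MODEL == "srcmodel")
{ init _HP_CLONING = "FALSE" }
else
{ init _HP_CLONING = "TRUE" }
EOF

# Drop the conditional clause and emit an unconditional init in its place
awk '
/\(MODEL ==/ { inblock = 1 }
inblock {
    if (/else/) seen_else = 1
    if (seen_else && /}/) {
        # closing brace of the else branch: emit the replacement line
        print "init _HP_CLONING = \"TRUE\""
        inblock = 0; seen_else = 0
    }
    next
}
{ print }
' "${CONTROL_CFG}" > "${CONTROL_CFG}.new"
cat "${CONTROL_CFG}.new"
```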

Step 7: Create depots with additional software for the target


The target system may need additional software, particularly if the system differs greatly from the
source. To ensure that all needed kernel modules, as well as any other required software, are
available, consult the Errata document for the target system. The Errata document can be found by
searching the web for the model and the string “Errata”. There may be both an HP-UX Errata site
and a System Errata site for a given model.

Create one or more depots containing all needed software. It is probably most convenient to create
the depots so that any single depot does not contain multiple versions of the same software.

For example, if the target system is a BL860C i2 system, the Errata document available in May, 2010
indicates that HP-UX 11iv3 OE Update Release for March 2010 and additional software listed in the
document are required. For migration to a BL860C i2 system, it is probably most convenient to
create or use an existing depot containing the Operating Environment (Data Center, Virtual Server,
High Availability, or Base) from March 2010 that you intend to install, and create one or more
additional depots with the remaining software documented in the Errata and applicable to your
environment. Often, software that implements firmware changes is packaged so that it excludes itself
during an Ignite-UX recovery. Delay installing firmware changes until after you have deployed the
recovery image and other Errata software.

If desired, you can use the swcopy command to copy software from various source depots to one or
more directory depots. Note that if you are copying a serial (tar format) depot, you need to issue the
swcopy command from the system containing the serial depot. See swcopy(1M) for more
information.

Step 8: Ensure iux_postload scripts from the recovery archive run
correctly

Currently, Ignite-UX determines the list of all iux_postload scripts after the recovery archive is installed.
However, the iux_postload scripts are not actually run until after the depots are also loaded. This is
not the correct processing for migration to new hardware. In this case, the scripts need to run before
additional kernel software is installed, which may replace products with newer revisions, thus
changing or removing iux_postload scripts.

To ensure that the iux_postload scripts are run at the right time, and that Ignite-UX executes a
harmless script later (at the “wrong time”), create the following two scripts, each with owner bin:bin
and permission 755: /var/opt/ignite/scripts/run_iux_postloads, as shown in Figure 2,
and /var/opt/ignite/scripts/restore_iux_postloads, as shown in Figure 3.

#!/bin/sh

IPD_DIR=/var/adm/sw/products
IUX_SCRIPT_NAME=iux_postload

/usr/bin/find ${IPD_DIR} -name ${IUX_SCRIPT_NAME} |
while read script_path
do
echo " Running ${script_path} .... "
${script_path}
# Need to leave a harmless script named iux_postload
# for IUX to run later.
/usr/bin/mv ${script_path} ${script_path}.save
echo "/sbin/true" > ${script_path}
echo "# To be removed after migration" >> ${script_path}
/usr/bin/chmod 744 ${script_path}
done
exit 0

Figure 2 /var/opt/ignite/scripts/run_iux_postloads

#!/bin/sh

IPD_DIR=/var/adm/sw/products
IUX_SCRIPT_NAME=iux_postload

/usr/bin/find ${IPD_DIR} -name ${IUX_SCRIPT_NAME}.save |
while read saved_path
do
script_path=`echo ${saved_path} | \
sed -e 's/iux_postload.save/iux_postload/'`
if [[ -e ${script_path} ]]
then
# The iux_postload exists. It may be the one we created,
# or it may have been delivered by a new revision of the product.
# Only in the first case should we restore it to the version
# we saved, so look for identifying comment.
grep -q "To be removed after migration" ${script_path}
if [[ $? -eq 0 ]]
then
echo " Restoring ${script_path} .... "
/usr/bin/mv ${saved_path} ${script_path}
#else Didn't find identifying string.
# Subsequent release must have delivered new iux_postload.
# Don't touch.
fi
#else Didn't find the script at all.
# Subsequent release must have removed it. Don't do anything.
fi
# Remove saved_path, which may or may not exist.
/usr/bin/rm -f ${saved_path}
done
exit 0

Figure 3 /var/opt/ignite/scripts/restore_iux_postloads

Create a config file, /var/opt/ignite/clients/<target
hostname>/run_iux_postloads_cfg, as shown in Figure 4, that runs the scripts listed in
Figures 2 and 3.

sw_source "KernelFixup" {
source_format = CMD
load_order = 1
}
init sw_sel "Run KernelFixup" {
description = "Run iux_postloads from archive"
sw_source="KernelFixup"
sw_category="KernelFixupCategory"
post_load_script = "/var/opt/ignite/scripts/run_iux_postloads"
post_config_script =
"/var/opt/ignite/scripts/restore_iux_postloads"
} = TRUE

Figure 4 /var/opt/ignite/clients/<target hostname>/run_iux_postloads_cfg

Step 9: Create config files for the new depots


The config files for the depots should not be created with “make_config” for three reasons:
1. The software impacts created will be added to the software impacts from the recovery
archive, making it appear that twice as much space is needed as is really required.
2. Category tags and load order settings that prevent installation of both the recovery archive
and the extra software may be created.
3. The selection of software in the depot that matches software present in the archive can be
automated in a manually created config file.

Instead, create /var/opt/ignite/clients/<target hostname>/updateOE_cfg with the
following contents for the OE depot as shown in Figure 5, where you have filled in the hostname of
the SD depot server, the location of the depot containing the OE, and the OE name.

Here <OE name> is the name of the OE which you plan to use on the target system. The OE name is
a bundle name that begins with HPUX11i and ends with OE.
As of March 2010, the OE names are HPUX11i-BOE, HPUX11i-DC-OE, HPUX11i-HA-OE, and
HPUX11i-VSE-OE.

sw_source "core" {
description = "HP-UX Core Software"
source_format = SD
sd_server = "<IP address of depot server>"
sd_depot_dir = "<absolute directory path of SD depot dir>"
source_type = "NET"
load_order = 2
}

init sw_sel "OE Update" {
description = "Update of the OE"
sw_source = "core"
sw_category = "Updates"
sd_software_list = "<OE name>/%match"
} = TRUE

Figure 5 /var/opt/ignite/clients/<target hostname>/updateOE_cfg

The specification of “%match” installs software and patches in the OE that match software that was
included in the recovery archive.

The specification of “load_order=2” ensures that the depot is processed after the recovery archive
and the execution of the iux_postload scripts.

You can use any value for sw_category other than “HPUXEnvironments”, which has special meaning
and is already used for the recovery archive.

For each additional depot, create /var/opt/ignite/clients/<target hostname>/errata_cfg<n>
as shown in Figure 6, where you have filled in the hostname of the SD depot server and the location
of the depot with the Errata contents:

sw_source "Errata" {
description = "HP-UX Errata Software"
source_format = SD
sd_server = "<IP address of depot server>"
sd_depot_dir = "<absolute directory path of SD depot dir>"
source_type = "NET"
load_order = 3
}

init sw_sel "Errata_selection" {
description = "Additional software for model xxxx"
sw_source = "Errata"
sw_category = "Additional"
sd_software_list = "additional_sw1 additional_sw2 …"
} = TRUE

Figure 6 /var/opt/ignite/clients/<target hostname>/errata_cfg<n>

For the sd_software_list, list the actual bundles, products, or patches that have been included in
the errata documentation. If you want to install all the software in the depot, you can specify a
selection of “*”.

The specification “load_order=3” ensures that the depot is processed after the OE depot.

Step 10: Add the new config files to the target CINDEX
1. If multiple cfg clauses appear in /var/opt/ignite/clients/<target
hostname>/CINDEX, choose the one set equal to TRUE to be <cfg_name> in the commands
below.
2. Use manage_index to add the new config files to cfg:
# /opt/ignite/bin/manage_index -a \
-f /var/opt/ignite/clients/<target hostname>/run_iux_postloads_cfg \
-c "<cfg name>" -v -i /var/opt/ignite/clients/<target hostname>/CINDEX
# /opt/ignite/bin/manage_index -a \
-f /var/opt/ignite/clients/<target hostname>/updateOE_cfg \
-c "<cfg name>" -v -i /var/opt/ignite/clients/<target hostname>/CINDEX
# /opt/ignite/bin/manage_index -a \
-f /var/opt/ignite/clients/<target hostname>/errata_cfg<n> \
-c "<cfg name>" -v -i /var/opt/ignite/clients/<target hostname>/CINDEX

If additional depots are used, use manage_index to add the config file corresponding to each
depot.
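With several config files, the repeated manage_index invocations can be generated in a loop. The sketch below only echoes the commands so they can be reviewed first; the client path, cfg clause name, and config file names are illustrative. Note that the shell quoting around a multi-word cfg name is lost in the echoed form and must be restored when running the commands for real.

```shell
# Illustrative values; substitute your own client directory and cfg name
CLIENT=/var/opt/ignite/clients/tgtsys
CFG_NAME="HP-UX B.11.31 Default"

# Build one manage_index command per config file; remove "echo" to execute
cmds=$(
for cfg_file in run_iux_postloads_cfg updateOE_cfg errata_cfg1 errata_cfg2
do
    echo /opt/ignite/bin/manage_index -a \
         -f "${CLIENT}/${cfg_file}" \
         -c "${CFG_NAME}" -v -i "${CLIENT}/CINDEX"
done
)
printf '%s\n' "${cmds}"
```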

Step 11: Modify file system sizes if needed
Ordinarily, the update using the errata depot will not substantially increase the required file system
sizes. Rather than try to predict exactly the file system sizes needed, it is more convenient to ensure
that all file systems have some “growing space”.

Use /usr/bin/bdf on the source system to check for file systems that are close to full and need
additional space. You may also want to compare the sizes on the source system to the minimum sizes
recommended by HP for a given release. These can be found in
/opt/ignite/data/Rel*/config.
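File systems short of growing space can be spotted by filtering the bdf output with awk. The here-document below is sample data standing in for real bdf output on the source system, and the 90% threshold is an arbitrary illustrative choice; note that real bdf may wrap long device names onto a second line, which this simple sketch does not handle.

```shell
# Sample data standing in for: /usr/bin/bdf on the source system
bdf_sample() {
cat <<'EOF'
Filesystem          kbytes    used   avail %used Mounted on
/dev/vg00/lvol3    1048576  524288  524288   50% /
/dev/vg00/lvol4    2097152 1992294  104858   95% /var
/dev/vg00/lvol7    4194304 2936013 1258291   70% /usr
EOF
}

# Skip the header; report any file system at or above 90% used
full=$(bdf_sample | awk 'NR > 1 { pct = $5; sub(/%/, "", pct); if (pct + 0 >= 90) print $6, $5 }')
echo "${full}"
```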

If needed, increase the logical volume sizes in /var/opt/ignite/clients/<target
hostname>/recovery/latest/system_cfg.

Step 12: Check configuration syntax


From the directory /var/opt/ignite/clients/<target hostname>, check the syntax of all
configuration information with the command:

/opt/ignite/bin/instl_adm -T -i CINDEX

Step 13: Move scenario: Disable the source system


If you are moving the system (keeping the same hostname and network identity) to a location on the
same network, shut down the source system or remove it from the network. This step is not needed for
the clone scenario.

Step 14: Deploy the configuration to the target system


A convenient way to contact the Ignite-UX server to deploy the target system is to use the lanboot
command together with a Direct Boot profile created using the dbprofile command on the EFI shell
of the target.

1. Boot the target system to the EFI boot menus and quit to the EFI shell.
2. Use the following command to create a dbprofile called “newserver”:

dbprofile -dn newserver \
-sip <Ignite-UX server IP address> \
-cip <target IP address reachable from the Ignite-UX server> \
-gip <gateway IP address to connect target to other networks> \
-m <network mask for target's subnet> \
-b "/opt/ignite/boot/nbp.efi"

See Direct Boot Profiles for Itanium-Based Systems in the chapter, “Booting and Installing HP-UX
From the Server Using the Client Console” in the Ignite-UX Administration Guide for HP-UX 11i,
available at http://www.hp.com/go/ignite-ux-docs for more information on the dbprofile
command.

3. Use the lanboot command to boot from the Ignite-UX server

lanboot select -dn newserver

A list of LAN devices is displayed. Choose the device that has network connectivity to the Ignite-
UX server. Since a Direct Boot Profile is being used, the Ignite-UX server does not need to be on
the same subnet as the target.

From the Ignite-UX installation screens, choose the configuration that was created in the previous
steps. If the system is being cloned, specify the correct configuration (hostname, IP address, etc.)
that is used for the target system.

Step 15: Boot and test the target system


Determine whether additional data volumes need to be made available to the target. For example,
additional SAN LUNs may need to be presented to the target and vgimported on the target system.
Additional testing can be used to ensure that applications run correctly on the new hardware.

Step 16: Return to the source, if the target is not as desired


If the target system does not work as expected, the target can be shut down or removed from the
network. If a move scenario was used, the source system can be rebooted. If a clone scenario was
being executed, the depots used can be modified and the process repeated.

Call to action
HP welcomes your input. Please give us comments about this white paper, or suggestions for related
documentation, through our technical documentation feedback website:
http://www.hp.com/bizsupport/feedback/ww/webfeedback.html

© 2011 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice. The
only warranties for HP products and services are set forth in the express warranty statements accompanying such products and
services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or
editorial errors or omissions contained herein.

5900-1078, August 2011

