
Dynamic Root Disk: Quick Start & Best Practices

August 2010
Technical white paper

Table of Contents
Introduction
Quick Start
    Installing DRD
Creating a Clone
    Overview
    Important Considerations
    Choosing a Target Disk
    Checking the Size of the Target Disk
    Performing the Clone Operation
    Viewing and Accessing the Clone
    Modifying the Clone
    Activating & Booting the Clone
    After the Clone is Booted
Best Practices
    Best Practice 1 (BP1): Basic Maintenance - Patching
        BP1: Overview of steps
        BP1: Additional Considerations
    Best Practice 2 (BP2): Basic Maintenance – Patching & Security Bulletin Management with DRD and Software Assistant (SWA)
        BP2: Overview of steps
        BP2: Additional Considerations
    Best Practice 3 (BP3): Basic Maintenance – Updating Within Versions of HP-UX 11i v3
        BP3: Overview of Steps
        BP3: Additional Considerations
    Best Practice 4 (BP4): Basic Maintenance – Using DRD to Assist an Update from HP-UX 11i v2 to HP-UX 11i v3
        BP4: Overview of Steps
        BP4: Additional Considerations
    Best Practice 5 (BP5): Basic Recovery
        BP5: Overview of Steps
        BP5: Additional Considerations
    Best Practice 6 (BP6): Basic Provisioning
        BP6: Overview of Steps
        BP6: Additional Considerations
Special Considerations for All Best Practices
    Specific Details Regarding Clone Creation
Mirroring
Using DRD to Expand LVM Logical Volumes and File Systems
    Extending File Systems Other Than /stand or /
    Extending the /stand or / File System
Viewing Log Files
    Maintaining the Integrity of System Logs
DRD sync
DRD Activate and Deactivate Commands
Delayed Activation/Boot of the Clone
For More Information
Call to Action


Introduction

Dynamic Root Disk (DRD) provides customers the ability to clone an HP-UX system image to an inactive disk, and then:

• Perform system maintenance on the clone while their HP-UX 11i system is online.
• Quickly reboot during off-hours—after the desired changes have been made—significantly reducing system downtime.
• Utilize the clone for system recovery, if needed.
• Perform an OE Update on the clone from an older version of HP-UX 11i v3 to HP-UX 11i v3 update 4 or later.
• Automatically synchronize the active image and the clone, eliminating the need to manually update files on the clone.
• Re-host the clone on another system for testing or provisioning purposes—only on VMs or blades running HP-UX 11i v3 with LVM root volume groups, or on VMs running HP-UX 11i v2 with LVM root volume groups. See the Exploring DRD Rehosting whitepaper for more details: http://bizsupport1.austin.hp.com/bc/docs/support/SupportManual/c01920363/c01920363.pdf.

DRD is supported on systems—including hard partitions (nPars), virtual partitions (vPars), and Integrity Virtual Machines—running the following operating systems:

• HP-UX 11i v2 (11.23) September 2004 or later
• HP-UX 11i v3 (11.31)

This white paper provides an overview of Dynamic Root Disk (DRD) and is divided into 3 major parts:

Quick Start – this section provides an overview of how to install DRD and how to create a clone.

Best Practices – this section provides advice on how to utilize DRD to perform basic tasks such as maintenance, updates, recovery and provisioning.

Special Considerations for All Best Practices – this section provides detailed information about processes you might want to use in many of the best practice scenarios.

Quick Start

Installing DRD

The Dynamic Root Disk (DRD) product is contained in the DynRootDisk bundle and, together with any dependencies, can be installed from the Operating Environment or Application Software media for HP-UX 11i v2 or 11i v3. Alternatively, DRD and any dependencies can be downloaded from the DRD website: http://www.hp.com/go/DRD. Note that the website will always have the most up-to-date version of DRD.

To determine definitively if your installation of DRD will require a reboot, preview the swinstall installation and check if any kernel patches are included in the selection at the end of swinstall's analysis phase.
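For example, a preview-mode installation from a depot (the depot location shown is illustrative) looks like this:

# /usr/sbin/swinstall -p -s depot_svr:/var/depots/apps DynRootDisk

If no kernel patches appear among the selections during the analysis phase, the installation should not require a reboot.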

Creating a Clone

Overview

All uses of DRD begin with creating a clone of the root volume group. The drd clone command creates a bootable disk containing a copy of the LVM volume group or VxVM disk group containing the root file system ("/"). In this white paper, "root group" refers to the LVM volume group or VxVM disk group that contains the root ("/") file system, and the term "logical volume" refers to an LVM logical volume or a VxVM volume. After discussing clone creation, we will present best practices a system administrator can follow to make software maintenance tasks easier.

Important Considerations

• The source of the drd clone command—the volume group that is copied—is the LVM volume group or VxVM disk group containing the root ("/") file system.
• The drd clone operation clones the root group. It is not appropriate for systems where the HP-UX operating system resides in multiple volume groups. (For example, if / resides in vg00 but /var resides in vg01, then the system is not appropriate for DRD.)
• For LVM-based systems, the root disk group may be spread across multiple disks, and may be mirrored to additional disks.
• For VxVM-based systems, all the volumes in the root disk group must reside on a single disk. If the root disk group is mirrored, all volumes in the root disk group must contain the same number of mirrors. For example, a configuration where all volumes except swapvol are mirrored is not supported.
• The target of the drd clone operation must be a single disk, not currently in use by other applications, and large enough to hold a copy of each logical volume in the root group being cloned. However, you can use the -x mirror_disk option to mirror the clone to another disk.
• The disk needs to be as big as the allocated space, not the used space, for each logical volume. For example, if the logical volume containing /var has been allocated 5 GB but is only 70% full, you will still need 5 GB for the /var logical volume in the cloned group.
• An appropriate target disk should be writeable by the system.

Choosing a Target Disk

The target disk must be specified as a block device file. Important: It is the system administrator's responsibility to determine which disks are not currently in use and may therefore be used for a clone. Please see the Dynamic Root Disk Administrator's Guide, Chapter 2, for more details on tools and utilities that can be used to determine an appropriate target disk. It can be found at: http://bizsupport1.austin.hp.com/bc/docs/support/SupportManual/c01918754/c01918754.pdf.

Checking the Size of the Target Disk

A simple mechanism for determining if you have chosen a sufficiently large disk is to run a preview of the drd clone command:

# /opt/drd/bin/drd clone -p -v -t /dev/dsk/cxtxdx   (legacy device file)
# /opt/drd/bin/drd clone -p -v -t /dev/disk/diskx   (agile device file)

(For further information about options available with the drd clone command, see the manpage drd-clone(1M).) The preview operation includes the disk space analysis needed to determine if the target disk is sufficiently large. If you prefer to investigate disk sizes before previewing the clone, the diskinfo, vgdisplay, and vxprint commands might be useful.
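For instance, on an LVM system you might compare the capacity of a candidate target disk against the space allocated in the root group (device names are illustrative):

# /usr/sbin/diskinfo /dev/rdsk/c1t4d0
# /usr/sbin/vgdisplay vg00

diskinfo reports the target disk's capacity; in the vgdisplay output, the PE Size multiplied by the Alloc PE value approximates the allocated space that must fit on the target disk.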

Running applications which modify data on other volumes is not an issue for clone validity. you might want to choose a location for the clone so that writing the clone data does not compete for performance with writes of application data. HP recommends designating a period of time for creating the clone when the root group is not undergoing many changes. However. see the manpage drd-clone(1M). See drd-clone(1M) for options available with this command.) The preview operation includes the disk space analysis needed to determine if the target disk is sufficiently large. For HP-UX 11iv3. If you prefer to investigate disk sizes before previewing the clone.TXT not present 04/08/09 13:00:57 MDT None None /dev/dsk/c1t3d0 AUTO file present. and to view many other details about the clone. vgdisplay. the device file must refer to the entire disk. Viewing and Accessing the Clone To determine whether or not a clone has been created. not the HP-UX partition on an Integrity system. and vxprint commands might be useful. Boot loader present SYSINFO. the disk that was used for the clone. use the drd status command: # /opt/drd/bin/drd status ======= 04/13/09 11:49:43 MDT BEGIN Displaying DRD Clone Image Information (user=root) (jobid=mesa) * * * * * * * * * * * Clone Disk: Clone EFI Partition: Clone Rehost Status: Clone Creation Date: Clone Mirror Disk: Mirror EFI Partition: Original Disk: Original EFI Partition: Original Rehost Status: Booted Disk: Activated Disk: /dev/dsk/c1t4d0 AUTO file present. Boot loader present SYSINFO. In either case. Performing the Clone Operation The clone operation is initiated with the drd clone command: # /opt/drd/bin/drd clone -v -t /dev/dsk/cxtxdx Where /dev/dsk/cxtxdx is a block device special file for the target disk you have chosen. the diskinfo.Checking the Size of the Target Disk A simple mechanism for determining if you have chosen a sufficiently large disk is to run a preview of the drd clone command: # /opt/drd/bin/drd clone -p -v -t /dev/dsk/cxtxdx (legacy device file) # /opt/drd/bin/drd clone -p -v -t /dev/disk/diskx (agile device file) (For further information about options available with the drd clone command. either a legacy or persistent block device special file may be specified.TXT not present Original Disk (/dev/disk/disk10) Original Disk (/dev/disk/disk10) END Displaying DRD Clone Image ======= 04/13/09 11:49:52 MDT Information 5 . when the clone was created.

On an LVM-based system:

• The clone that has been created is not visible (when executing commands such as bdf or vgdisplay) at the completion of the clone operation. This is because the file systems on the clone are unmounted and the clone volume group is exported at completion of the drd clone command. See the information below regarding drd mount to view the clone.
• When the clone is booted, the root group is the same as the original root group that was cloned.

On a VxVM-based system:

• The cloned disk group will be displayed in the output of commands such as vxdisk, vxprint, and vxstat. A VxVM clone is not deported by the drd clone command because a deported group cannot be booted.
• When the clone is booted, the root group is the same as the clone group name when it was visible on the original system.

The clone group name is formed from the name of the root group:

• If the root group name is "vgnn", the clone group is "drdnn".
• If the root group is "rootdg", the clone group is "drd_rootdg"; if the root group is "drd_rootdg", the clone group is "rootdg".
• More generally, if the root group does not have the form "vgnn", the clone group name is formed by prefixing the root group name with "drd_". (Thus, the name of the root group changes when the clone is booted.)

If a system administrator wants to check the contents of particular files, the clone can be mounted by executing the command:

# /opt/drd/bin/drd mount

and subsequently unmounted by executing the command:

# /opt/drd/bin/drd umount

If a system administrator needs to verify the software contents of the clone, the following commands can be executed:

# /opt/drd/bin/drd runcmd swlist
# /opt/drd/bin/drd runcmd swverify

When software is installed in a drd runcmd session, its configuration scripts are postponed until the image is booted. As a result, the state attribute of a fileset is installed rather than configured. When the clone is booted, the swlist command shows the states as configured.

If a system administrator wants to run both of these commands, but eliminate the overhead of mounting and unmounting the clone for each command, the following sequence is preferable:

# /opt/drd/bin/drd mount
# /opt/drd/bin/drd runcmd swlist
# /opt/drd/bin/drd runcmd swverify
# /opt/drd/bin/drd umount

See drd-runcmd(1M), drd-mount(1M), and drd-umount(1M) for options available on these commands.

When drd runcmd finds the file systems in the clone already mounted, they will not be unmounted—nor, on an LVM-based system, will the volume group be vgexported—at the completion of the drd runcmd operation.

Modifying the Clone

The drd runcmd operation is used to run commands that modify the inactive system image. There are two fundamental requirements for a command run by the drd runcmd operation:

• The command must not affect the currently booted system. In particular, it must not start or stop daemons, make dynamic kernel changes, or in any way affect the process space of the booted system.
• The changes the command makes to the inactive system image must be fully functional when the image is booted. For example, if a patch installs a new daemon, it is usually necessary that the daemon be started automatically when the image is booted.

A command, such as swinstall, that satisfies the two fundamental requirements above is designated as DRD-safe. Similarly, a package whose control scripts behave correctly when executed under drd runcmd is designated as DRD-safe. For release 3.3 and later of DRD, the commands certified to be DRD-safe are swinstall(1M), swremove(1M), swlist(1M), swmodify(1M), swverify(1M), swjob(1M), update-ux(1M), kctune(1M), and view(1). In addition, there are restrictions on the options that can be used on the sw* and update-ux commands. These restrictions are documented in the manpage drd-runcmd(1M). For more information on DRD safety, please see the BP1: Additional Considerations section later in this white paper.

Activating & Booting the Clone

When the clone is ready for deployment, the drd activate command can be used to set the inactive system image as the primary boot path for the next system boot:

# /opt/drd/bin/drd activate

If desired, the alternate boot path and the High Availability (HA) alternate boot path may also be changed by using the options -x alternate_bootdisk and -x HA_alternate_bootdisk on the drd activate command. The value of the autoboot flag, set by the command setboot -b, is not affected by the drd activate command.

The drd activate command does not toggle the boot path between the booted system and the inactive image; it always changes the primary boot path to the inactive system image. (This is the clone until the clone is booted—then the original system becomes inactive.) This makes the result of the drd activate command predictable, even if it is issued multiple times by multiple system administrators.

The option -x reboot can be set to true on the drd activate command if an immediate shutdown and reboot is desired:

# /opt/drd/bin/drd activate -x reboot=true
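For example, to activate the clone, keep the current boot disk as the alternate boot path, and reboot immediately (the device file shown is illustrative):

# /opt/drd/bin/drd activate -x alternate_bootdisk=/dev/disk/disk1 -x reboot=true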

As a best practice, consider creating a shutdown script that runs drd sync so that files changed on the original image after the clone was created will be propagated to the clone when it is activated and booted (see the "DRD sync" section below for more details); a sketch of such a script follows this paragraph.

If you need to restore the booted system image to be the primary boot path—that is, "undo" the drd activate command—the drd deactivate command can be used.
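A minimal sketch of such a script, assuming the standard HP-UX rc framework (the script name, kill-link name, and sequence number are illustrative assumptions; see drd-sync(1M) for the options appropriate to your environment):

#!/sbin/sh
# /sbin/init.d/drd_sync
# Propagate files changed on the booted image since cloning to the clone.
case "$1" in
stop_msg)
        echo "Synchronizing the DRD clone"
        ;;
stop)
        /opt/drd/bin/drd sync
        ;;
*)
        exit 0
        ;;
esac
exit 0

The script would then be linked so that it runs during shutdown, for example:

# ln -s /sbin/init.d/drd_sync /sbin/rc1.d/K005drd_sync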

After the Clone is Booted

After the clone is booted, the original system becomes an offline system image. The mount point of the root file system of the original system is /var/opt/drd/mnts/sysimage_000, and it may be mounted by executing the drd mount command. When the clone is booted, a swlist of all software on the system should show all patches as configured. Any configure errors are documented in the usual SD location: /var/adm/sw/swconfig.log. If application testing shows the newly booted clone to be unacceptable, the system administrator can use drd activate -x reboot=true to return to the original system.

Best Practices

The Best Practices below describe the high-level steps required to complete a task, with supporting information provided to add clarity. This allows the reader to get a complete picture of the steps required to perform a task. Each Best Practice begins with the creation of a clone, which was discussed in detail in the previous section, followed by Additional Considerations to add more detailed information to tasks when needed. In the Best Practices that follow, references to "inactive system image" or "inactive image" refer to the copy of the root volume group that is not currently booted. It might be a clone, or, when the clone is booted, it might be the original system image. Note that after a disk created by the drd clone command is booted, the original system is now inactive, and it can be manipulated using the drd runcmd, drd mount, drd umount and drd activate commands.

Best Practice 1 (BP1): Basic Maintenance - Patching

The most common benefit of a clone created by a drd clone command is to minimize the downtime needed to perform proactive maintenance, and this best practice will focus on applying patches. For this scenario, our setup is as follows:

Original image: /dev/disk/disk1
Clone disk: /dev/disk/disk5
Patches to apply: Quality Pack patch bundle plus PHCO_77777
Depot with Quality Pack and PHCO_77777: depot_svr:/var/depots/1131
Objective: Perform maintenance on my server while minimizing downtime.

BP1: Overview of steps

1. Create the clone: drd clone -t /dev/disk/disk5
2. Use drd status to view the clone: drd status
3. Install the QPK and PHCO_77777: drd runcmd swinstall -s depot_svr:/var/depots/1131 -x patch_match_target=true
4. Ensure the patch and QPK are installed: drd runcmd swlist QPKBASE QPKAPPS PHCO_77777
5. Create a shutdown script that runs drd sync so that files changed on the original image after the clone was created will be propagated to the clone (see the "DRD sync" section below for more details)
6. Activate and boot the clone: drd activate -x reboot=true

BP1: Additional Considerations

DRD-Safe: Overview

Any patch or product that is installed on a DRD clone must be DRD-safe. That is, the patch/product must not impact the running system. One of the values of DRD is that once a DRD clone is created and booted, the original image (with no changes) acts as a backup and can be reactivated at any time if the clone does not operate as expected.

• The DRD toolset will not process products or patches that are DRD-unsafe. Any products or patches that are not DRD-safe will not be installed during a DRD session.
• For HP-UX 11i v3, patches and all products in the Operating Environments are checked prior to release for DRD safety. Any patches that are not DRD-safe either have the is_drd_safe flag set to false or are listed in the drd_unsafe_patch_list file; see the next section for details about this file.
• For HP-UX 11i v2, only patches are checked prior to release to ensure they are DRD-safe. Almost all of the HP-UX 11i v2 patches are DRD-safe; any patch that is not safe is placed on the DRD Unsafe Patch List. Most HP-UX 11i v2 products are not safe, and thus cannot be managed with DRD. Please see the section below, "DRD-Safe: Updating the drd_unsafe_patch_list file," for information on how to make sure this list is current on your system.
• For products, the is_drd_safe attribute is used to indicate whether a product is DRD-safe. If this attribute is not set or is missing, the product is considered to be DRD-unsafe.
• Note that firmware patches are not DRD-safe and will be automatically excluded from any attempt to install or remove them from an inactive image. If a firmware patch was loaded on the clone once it was booted, this new firmware will be present if the original image is booted.

Important: Any patch that has been written for a specific site, has not been through DRD-safe certification, or that has a tag including the UNOF string falls outside these checks. These types of patches should not be applied—by executing drd runcmd—without thorough discussion with the patch provider.
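When in doubt about a specific patch or product, the attribute can be inspected directly with swlist; this is a hedged example, and the patch ID shown is illustrative:

# /usr/sbin/swlist -l fileset -a is_drd_safe PHCO_77777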

DRD-Safe: Updating the drd_unsafe_patch_list file

For products and HP-UX 11i v3 patches, the DRD toolset uses the is_drd_safe attribute to determine safety. Most new patches are DRD-safe; that is, they can be installed by the drd runcmd command to an inactive image without affecting the booted system. The few patches that are not DRD-safe set the fileset attribute is_drd_safe to false; for example, firmware patches fall into this category.

The /etc/opt/drd/drd_unsafe_patch_list file is delivered as a volatile file containing a list of DRD-unsafe patches delivered without the attribute is_drd_safe set to false. This file is updated when any new patch is determined to have the is_drd_safe flag incorrectly set to true. In the rare event that a patch is released with the is_drd_safe attribute incorrectly set to true, that patch will be added to the drd_unsafe_patch_list file. Because DRD uses the copy of the drd_unsafe_patch_list file on the inactive system image, if there is a clone, the copy on that image must also be updated.

Follow this procedure to determine if your drd_unsafe_patch_list file needs to be updated:

1. View the file at ftp://ftp.itrc.hp.com/export/DRD/drd_unsafe_patch_list. (You can open this URL in a Web browser.) Determine the "last updated" date listed in the file and make a note of it.
2. On your active system image, view the "last updated" date listed in the /etc/opt/drd/drd_unsafe_patch_list file and make a note of it.
3. Compare the dates from Step 1 and Step 2. If the "last updated" date listed in the Step 1 file is the same as the Step 2 file, then you do not need to update the drd_unsafe_patch_list file and you are finished with this procedure. If the "last updated" date listed in the Step 1 file is later than the date listed in the Step 2 file, then your installed drd_unsafe_patch_list file is out of date and you need to continue to Step 4 (in this procedure) to update the file.
4. If your installed drd_unsafe_patch_list file is out of date and needs to be updated (to handle potential new DRD-unsafe patches), update the drd_unsafe_patch_list file as follows:

   A. Download the ftp://ftp.itrc.hp.com/export/DRD/drd_unsafe_patch_list file to /etc/opt/drd/drd_unsafe_patch_list on your active system image. If you have not created a clone, then you are done with updating the drd_unsafe_patch_list file and have finished this procedure. If you have created a clone, continue with Step B.
   B. Mount the inactive system image:
      # /opt/drd/bin/drd mount
   C. Copy the drd_unsafe_patch_list file to the inactive system image:
      # /usr/bin/cp /etc/opt/drd/drd_unsafe_patch_list \
        /var/opt/drd/mnts/sysimage_00*/etc/opt/drd/drd_unsafe_patch_list

Note: If you download the file to a Windows system and then copy it to an HP-UX system, DOS-type carriage-return line feeds might be inserted into the file. To eliminate these characters, use the dos2ux(1) command.
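For example, a hedged sketch of cleaning the downloaded file in place (the temporary file name is arbitrary):

# /usr/bin/dos2ux /etc/opt/drd/drd_unsafe_patch_list > /var/tmp/dupl.clean
# /usr/bin/mv /var/tmp/dupl.clean /etc/opt/drd/drd_unsafe_patch_list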

DRD-Safe: Identifying Patches that will not be Installed in a drd runcmd Session

A preview operation can be used to help a system administrator determine if any patches from a desired patch selection will not install in a drd runcmd session. For example, to see if any DRD-unsafe patches will be rejected when installing from a patch depot, execute the following preview command:

# /opt/drd/bin/drd runcmd swinstall -p -x patch_match_target=true \
  -s depot_svr:/var/depots/mydepot

Each unsafe selection will be rejected with an appropriate message, and the remaining selections will be listed. When the actual DRD installation is run, use the option "-x patch_match_target=true".

Planning the Installation of DRD-Unsafe Patches

Examine the list of DRD-unsafe patches determined in the section above. For each patch, apply the following logic:

1. Does the patch apply to a product that is not used and for which no use is planned? If so, no action is necessary.
2. Has the patch been superseded by a DRD-safe patch? If so, check the superseding patch's rating and delivery date in the patch database at the HP IT Resource Center: http://www.itrc.hp.com. If you are satisfied with the quality measures for the superseding patch, swcopy it into your patch depot.
3. Does the patch NOT require a kernel build or reboot? If so, one option is to apply it to the booted system before creating the clone.
4. If the patch is needed on your system, has not been superseded by a DRD-safe patch, and requires a kernel build or reboot, the patch must be applied to the inactive system image after the image is booted. Add the patch to a list of patches (or a selection file of patches) that you plan to install as soon as the inactive system image is booted.

Installing from Serial Depots

HP does not support executing the drd runcmd to install from a serial depot source. Attempts to do so result in the following error:

ERROR: The source you have chosen is a tape device located on the booted system. Installing from this source is not supported under drd runcmd. To install software from this tape device, first copy the software to a non-tape device using swcopy.

To copy software from a serial depot to a directory depot, use the following command:

# /usr/sbin/swcopy -s /path/to/serial.depot SoftwareSelections @ \
  /path/to/non-serial/depot

If the software you are trying to copy does not have its dependencies in the depot, you should add the -x enforce_dependencies=false option to the swcopy command.

Verifying Operations

Verify scripts have always been required to refrain from making changes either to the file system or to the process space of the system where a swverify operation is run. For this reason, swverify operations are always DRD-safe, even for patches and products that are not DRD-safe for sw* and update-ux operations. The same does not apply to swverify operations with the -F option, which causes "fix" scripts to be run. Use of the -F option on a swverify command is not supported under drd runcmd. If you attempt to run the swverify operation with the -F option under drd runcmd, DRD exits with a failure return code without running any fix scripts.
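For example, a full verification of the inactive image can be run through drd runcmd at any time:

# /opt/drd/bin/drd runcmd swverify \*

This relies only on the DRD-safe verify behavior described above (no -F option).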

Patch Planning and Alternative Patch Deployment Schemes

The offline patching scenario presents a straightforward progression from clone to boot in a fairly short time period. However, a system administrator might benefit from some time and feedback to craft the exact selection of patches to be applied to an inactive system image that is eventually booted. The drd clone and drd runcmd operations can be useful in helping a system administrator to identify an appropriate patch selection. For example, a system administrator might create a clone, identify a patch selection, and use drd runcmd to apply it to the clone to test it before final deployment, and then investigate any unexpected messages. These messages could include a need to acquire or extend license keys, identify conflicts with previously installed site-specific patches, or just be notes about automatically selected patch filesets. The swlist and swverify commands can be run against the clone. If short test periods are available before the actual patch deployment, the system administrator can boot the clone and perform application testing. In addition, the system administrator could compare the patch selection with that of a reference system. The final patch selection is then applied with drd runcmd.

If downtime is scarce, the system administrator could create a clone daily and apply the "known good" patch selection to it. When an application outage was needed for other reasons, the clone could be deployed as part of bringing the system up. With this mechanism, problems are identified before the entire community is exposed to the changes.

If the clone was created some time before deployment, one of the following two options could be utilized to make sure the clone is up to date before activating and booting it. To determine the best method, first run the drd sync command in preview mode:

# /opt/drd/bin/drd sync -p

• Using drd sync: Examine the /var/opt/drd/files_to_be_copied_by_drd_sync file. If the file is not large, use drd sync to propagate the changes to the clone (see the "DRD sync" section below for more details).
• Re-creating the clone: If the /var/opt/drd/files_to_be_copied_by_drd_sync file is large, consider running drd clone to re-create the clone. This will ensure current copies of all configuration files (for example: users, passwords, printer configurations) are included in the clone.

Regardless of the option used to bring the clone up to date, you can then activate and boot the clone.

Best Practice 2 (BP2): Basic Maintenance – Patching & Security Bulletin Management with DRD and Software Assistant (SWA)

As in Best Practice 1, this scenario will once again demonstrate how to use DRD to reduce downtime during system maintenance. However, in this scenario we will be utilizing Software Assistant (SWA) to identify any required patches. SWA will identify missing patches/patch bundles, patches with warnings, conflicts with previously installed site-specific patches, and fixes for published security issues. For this scenario, our setup is as follows:

Assumption: clone is already created and you are booted on the active image
Patches to apply: SWA will figure this out for you
Depot with patches: /var/depots/1131swa
Version of SWA: C.02.26 or later
Objective: Perform maintenance on my server, including the identification and repair of security issues, while still reducing downtime.

BP2: Overview of steps

1. Use drd status to view the clone: drd status
2. Mount the clone: drd mount
3. Determine what patches are needed by creating an SWA report: swa report -s /var/opt/drd/mnts/sysimage_001
4. Download the patches identified by SWA into a depot: swa get -t /var/depots/1131swa
5. Review any special installation instructions documented in /var/depots/1131swa/readBeforeInstall.txt
6. Install everything in the 1131swa depot: drd runcmd swinstall -s /var/depots/1131swa -x patch_match_target=true
7. Ensure the patches are installed: drd runcmd view /var/adm/sw/swagent.log
8. Unmount the clone: drd umount
9. Create a shutdown script that runs drd sync so that files changed on the original image after the clone was created will be propagated to the clone (see the "DRD sync" section below for more details)
10. Activate and boot the clone: drd activate -x reboot=true

BP2: Additional Considerations

Using Software Assistant (SWA)

In this best practice, we combine the features of SWA and DRD to create a solution that helps reduce the time required to identify needed patches, including the identification and repair of security issues. The default functionality of SWA is used in this scenario. For information on how to customize networking and the SWA analyzers utilized, see the SWA Web page: http://www.hp.com/go/swa.

Note that the SWA report created in Step 3 will identify two basic categories of requirements:

• Products, patches and manual actions that need to be applied in order to address known security issues
• Patches and patch bundles that are missing, warned, or site-specific

All required patches and patch bundles identified in either category listed above will be downloaded to the depot 1131swa in Step 4. In order to address the other issues identified in the SWA report, you might need to take one or more of the following steps:

• If products need to be updated to address security issues, you will need to download and install those products, with the required product updates added to the 1131swa depot so that all products and patches may be installed at the same time and with a single reboot. Note that this step can be taken after Step 4 above.
• If manual actions need to be taken in order to address security issues, these actions will need to be addressed.
• Other patch issues may be identified and addressed, such as warned patches in recommended bundles, missing patch dependencies, or site-specific patches. Patch installation might require special attention.

For more information on SWA, go to http://www.hp.com/go/swa. For more information on patching, see the Patch Management User Guide for HP-UX 11.x systems at http://bizsupport1.austin.hp.com/bc/docs/support/SupportManual/c01919407/c01919407.pdf.

Best Practice 3 (BP3): Basic Maintenance – Updating Within Versions of HP-UX 11i v3

This best practice focuses on how to reduce downtime while performing an update from an older version of HP-UX 11i v3 to a newer release of HP-UX 11i v3 (update 4 or later). For this scenario, our setup is as follows:

Original image: /dev/disk/disk1 (with HP-UX 11i v3, update 2 installed)
Clone disk: /dev/disk/disk5
What to apply: HP-UX 11i v3 Update 4, Virtual Server OE
Depot with OE: depot_svr:/var/depots/1131_VSE-OE
Version of DRD: B.11.31.A.3.3 or later
Objective: Perform update on my server while minimizing downtime.

BP3: Overview of Steps

1. Create the clone: drd clone -t /dev/disk/disk5
2. Use drd status to view the clone: drd status
3. Install HP-UX 11i v3 Update 4, Virtual Server OE: drd runcmd update-ux -s depot_svr:/var/depots/1131_VSE-OE HPUX11i-VSE-OE
4. Ensure that the OE is installed: drd runcmd swlist
5. Create a shutdown script that runs drd sync so that files changed on the original image after the clone was created will be propagated to the clone (see the "DRD sync" section below for more details)
6. Activate and boot the clone: drd activate -x reboot=true
7. After updating, you can check the following logs: /var/adm/sw/swagent.log and /var/opt/swm/swm.log

BP3: Additional Considerations

• DRD update functionality is only supported from DRD version 3.3.x and later, which is shipped with HP-UX 11i v3 OE Update 4 and later.
• When performing an update, LVM file systems often need to be expanded. In order to determine if any file systems need to be expanded, you might want to run update-ux in preview mode on either your active disk or the clone. You can then expand the file systems on the clone prior to performing the update. See the Using DRD to Expand LVM Logical Volumes and File Systems section, later in this white paper, for detailed information.

Best Practice 4 (BP4): Basic Maintenance – Using DRD to Assist an Update from HP-UX 11i v2 to HP-UX 11i v3

Though a DRD clone cannot be updated from HP-UX 11i v2 to HP-UX 11i v3, DRD can be used to assist such an update by providing a mechanism to adjust file system sizing prior to performing an update, and providing a quick backup mechanism if you wish to restore the system to HP-UX 11i v2 for any reason.

For this scenario, our setup is as follows:

Original image: /dev/dsk/c0t0d0 (with HP-UX 11i v2 installed)
Clone disk: /dev/dsk/c1t0d0
What to apply: HP-UX 11i v3 Update 4, Virtual Server OE
Depot with patches: depot_svr:/var/depots/1131_VSE-OE
Version of DRD: B.11.31.A.3.3 or later
Objective: Utilize DRD to help adjust file system sizes and provide a quick backup mechanism when performing an HP-UX 11i v2 to v3 update.

BP4: Overview of Steps

1. Create the clone: drd clone -t /dev/dsk/c1t0d0
2. Use drd status to view the clone: drd status
3. Run update-ux in preview mode on the active disk: update-ux -p -s depot_svr:/var/depots/1131_VSE-OE HPUX11i-VSE-OE
4. Adjust file system sizes on the clone as needed (see BP4: Additional Considerations below for more information)
5. Create a shutdown script that runs drd sync so that files changed on the original image after the clone was created will be propagated to the clone (see the "DRD sync" section below for more details)
6. Activate and boot the clone, setting the alternate boot disk to the HP-UX 11i v2 disk: drd activate -x alternate_bootdisk=/dev/dsk/c0t0d0 -x reboot=true
7. Update the active image to HP-UX 11i v3, Virtual Server OE: update-ux -s depot_svr:/var/depots/1131_VSE-OE HPUX11i-VSE-OE (note that there will be a reboot executed at this time)
8. Ensure that the software is installed properly: swverify \*
9. Verify all software has been updated to the HP-UX 11i v3 version: swlist
10. Ensure the integrity of your updated system by checking the following log files: /var/adm/sw/update-ux.log and /var/opt/swm/swm.log

BP4: Additional Considerations

Recommended Actions Prior to Update

Prior to updating a system to HP-UX 11i v3, you need to ensure the system being updated supports HP-UX 11i v3, including the required firmware, storage devices, 3rd-party applications, etc. HP has created a single website that contains all the information necessary to make sure a system is ready to update to HP-UX 11i v3 at http://www.hp.com/go/tov3. It is highly recommended that you follow the steps on this Web page prior to updating a system to HP-UX 11i v3.

Once a system has been checked to ensure it is ready to update to HP-UX 11i v3, perform the following two actions to complete your preparation work:

1. Run "swverify \*" and take action if there are any problems with existing packages or patches.
2. Optional: Check for obsolete software on your system and delete software that is no longer used. This will reduce time needed for update and prevent any problems with obsolete software.

At the same time, verify that System Fault Manager (SFM) and Event Monitoring Service (EMS) are installed and configured properly.
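One way to check for these is with swlist; the bundle name patterns below are assumptions that vary by OE and release, so adjust them to your system:

# /usr/sbin/swlist | grep -i -e SysFaultMgmt -e EMS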

Adjusting File System Sizes on the Clone

In Step 3 above, the output of the update-ux -p command will identify file systems that need to be expanded, and these changes can be made on the clone prior to activating it and performing the actual update. You can find detailed information on how to expand file systems, including /stand, later in this whitepaper's Using DRD to Expand LVM Logical Volumes and File Systems section.

Special Considerations with Different OS Versions on the Active and Inactive Images

After Step 6 is executed, both disks are still running HP-UX 11i v2. After Step 7 is executed, the active disk is running HP-UX 11i v3, and the inactive disk is running HP-UX 11i v2. Whenever you are in a situation where the active and inactive disks are not running the same major release of HP-UX, you need to be aware of some limitations:

• If an HP-UX 11i v2 disk is booted and HP-UX 11i v3 is on the inactive clone, you should not use any sw* commands with drd runcmd.
• If an HP-UX 11i v3 disk is booted and HP-UX 11i v2 is on the inactive clone, you can run drd runcmd swlist or drd runcmd swverify; however, you cannot run any other sw* commands.

Best Practice 5 (BP5): Basic Recovery

A key benefit of the DRD toolset is that you can use it for basic system recovery. Disk mirroring provides robust protection against hardware failures, but it also automatically updates the mirror image with all software and file system updates. Therefore, while mirroring provides excellent up-to-date protection from hardware failures, if a software installation caused an undesirable system state, or a critical networking configuration file is accidentally deleted, DRD provides a better mechanism for quickly returning to the system's previous state. For this scenario, our setup is as follows:

Original image: /dev/disk/disk1
Clone disk: /dev/disk/disk5
Change to be made: Modify semaphore tunables prior to updating an application
Objective: Utilize a DRD clone as a quick recovery mechanism for the root volume group if needed.

BP5: Overview of Steps

1. Create the clone: drd clone -t /dev/disk/disk5
2. Use drd status to view the clone: drd status
3. Modify semaphore tunables in preparation for updating an application (an illustrative invocation follows this list)
4. Create a shutdown script that runs drd sync so that files changed on the original image after the clone was created will be propagated to the clone (see the "DRD sync" section below for more details)
5. While making the tunable changes, the system is left in an undesirable state, so we need to activate and boot the clone, which contains the original image
6. drd activate -x reboot=true
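For Step 3, a sketch of the change on the booted system might look like the following; the tunables and values are assumptions for this example only, not recommendations:

# /usr/sbin/kctune semmni=2048 semmsl=512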

BP5: Additional Considerations

Ongoing Recovery Failsafe

The approach described in this recovery best practice can be used on an ongoing basis by creating a clone regularly—daily or weekly—depending on the volatility of the system. One option is to match the timing of DRD cloning with the regularity of system changes. For example, if critical non-reboot patches are identified bi-weekly, the clone could be made just prior to application of the critical patches. Alternatively, if there is a time period when availability is particularly critical, the clone could be created frequently to ensure speedy recovery from unknown problems. Note that any file system changes (for example, application data updates) made from the point in time the clone was created until the time it is booted need to be recovered separately. Booting the clone is considerably faster than recovering the system from a network recovery image.

Best Practice 6 (BP6): Basic Provisioning

Each of the previous best practices described creating a clone, then using that clone on the same system used to create it. In this scenario, we will demonstrate how a clone can be created on one system then booted on a different system—a task referred to as rehosting the clone. With the ability to rehost a clone, you can quickly and easily provision new systems. For this scenario, our setup is as follows:

Original image: /dev/disk/disk71 (initially allocated to VM2 as /dev/disk/disk1)
Clone disk: /dev/disk/disk75 (initially allocated to VM2 as /dev/disk/disk5)
Initial setup: a VM host with 2 VM guests: VM1 and VM2
Need to add: a third VM guest, VM3, running the same version of HP-UX as VM2
Objective: Utilize DRD rehosting to quickly provision VM3
Assumptions: VM host and all VM guests are running HP-UX 11i v3 with patches PHCO_36525 and PHCO_39064 loaded

BP6: Overview of Steps

1. From the VM2 guest:
   a. Create the clone: drd clone -t /dev/disk/disk5
   b. Use drd status to view the clone: drd status
   c. Create the system info file with VM3's personality: cp /etc/opt/drd/default_sysinfo_file /var/opt/drd/tmp/drdivm3_sysinfo, then edit the copy (see the "BP6: Additional Considerations" section and the illustrative entries following this list for more information)
   d. Copy the system info file to the EFI partition of the clone disk: drd rehost -f /var/opt/drd/tmp/drdivm3_sysinfo
2. From the VM host, create VM3 with just a network interface:
   a. hpvmcreate -P drdivm3 -c 1 -r 2G -a network:avio_lan::vswitch:myvswtch
   b. hpvmstatus -P drdivm3
3. From the VM host, move the clone disk from VM2 to VM3:
   a. hpvmstatus -d -P drdivm2
   b. hpvmmodify -P drdivm2 -d disk:avio_stor:0.1.1:disk:/dev/rdisk/disk75
   c. hpvmmodify -P drdivm3 -a disk:avio_stor:0.1.1:disk:/dev/rdisk/disk75
   Note: You might see messages indicating a restart is required due to devices being busy; these can safely be ignored, as they only pertain to the original system.
4. From the VM host, boot VM3:
   a. hpvmstart -P drdivm3
   b. hpvmconsole -P drdivm3
   c. From the EFI shell, enter the following:
      1. fs0:
      2. cd EFI\HPUX
      3. hpux.efi
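For Step 1c, the edited system info file gives the clone its new identity. The entries below are only a hedged illustration—the exact keywords and required fields are documented in the comments of /etc/opt/drd/default_sysinfo_file and in the DRD rehosting whitepaper, and the values shown are assumptions:

SYSINFO_HOSTNAME=drdivm3
SYSINFO_DHCP_ENABLE[0]=0
SYSINFO_IP_ADDRESS[0]=192.0.2.30
SYSINFO_SUBNET_MASK[0]=255.255.255.0
SYSINFO_ROUTE_GATEWAY[0]=192.0.2.1
SYSINFO_ROUTE_DESTINATION[0]=default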

BP6: Additional Considerations

The steps listed above give a very basic overview of rehosting. For more details, please see the whitepaper, "Exploring DRD Rehosting on HP-UX 11i v2 and v3." Note that DRD rehosting is only supported between LVM-managed blades and virtual machines (VMs) for HP-UX 11i v3, and is only supported on LVM-managed VMs for HP-UX 11i v2.

Special Considerations for All Best Practices

Specific Details Regarding Clone Creation

Prior to creating a clone, ensure that your original disk is bootable. The drd clone operation performs the following items:

• Creates an Extensible Firmware Interface (EFI) partition on HP-UX Integrity systems
• Creates boot records
• Creates a new LVM volume group or VxVM disk group, and a volume in the new group for each volume in the root volume group. The volume management type of the clone matches that of the root group.
• Configures swap and dump volumes
• Copies the contents of each file system in the root volume group to the corresponding file system in the new group
• Modifies particular files on the clone that identify the disk on which the volume group resides
• For LVM-based systems, modifies volume group metadata on the clone so that the volume group name is the same as the original root volume group when the clone is booted

Mirroring

System administrators frequently use MirrorDisk/UX to create a redundant copy of an HP-UX system as protection against hardware failures. DRD provides a means of protecting against software failures, and combining the use of DRD and MirrorDisk/UX can provide many benefits. Please see the Dynamic Root Disk and MirrorDisk/UX whitepaper, at http://bizsupport1.austin.hp.com/bc/docs/support/SupportManual/c01920361/c01920361.pdf, for various strategies that can be used to combine the benefits of DRD clones and LVM mirrors.
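For example, using the -x mirror_disk option described in the Creating a Clone section, a clone and its mirror can be created in a single operation (device files are illustrative):

# /opt/drd/bin/drd clone -v -t /dev/disk/disk5 -x mirror_disk=/dev/disk/disk6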

Using DRD to Expand LVM Logical Volumes and File Systems

Extending File Systems Other Than /stand or /

One of the difficulties in expanding file systems in the root volume group of an LVM-based system is that the file systems are always busy. Because the entire inactive image created by a drd clone command is not in use, the system administrator has an opportunity to expand file systems on the inactive image. The following steps work for file systems other than the boot (/stand) file system.

1. After creating the clone, mount it by executing the command:

# /opt/drd/bin/drd mount

2. Choose the file system on the clone to expand. For this example, we are using /opt: the logical volume is /dev/drd00/lvol6, mounted at /var/opt/drd/mnts/sysimage_001/opt. Execute the following commands to expand /opt:

# /usr/sbin/umount /dev/drd00/lvol6
# /usr/sbin/lvextend -l 999 /dev/drd00/lvol6
# /usr/sbin/extendfs -F vxfs /dev/drd00/rlvol6
# /usr/sbin/mount /dev/drd00/lvol6 \
  /var/opt/drd/mnts/sysimage_001/opt

The logical volume is increased to 999 extents, and the size of the vxfs file system is increased to match.

3. Run bdf to check that the /var/opt/drd/mnts/sysimage_001/opt file system now has the desired size; an example invocation follows this list.
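For example, using the mount point from the steps above:

# /usr/bin/bdf /var/opt/drd/mnts/sysimage_001/opt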

Extending the /stand or / File System

Note: Extending the root ("/") file system is very similar to extending the /stand file system. For brevity, we simply refer to the /stand file system in this section.

Overview

It can be a challenging task for a system administrator to increase the size of the /stand file system, even in a Logical Volume Manager (LVM) environment. Because it is used by the boot loader, /stand must be the first file system on a physical disk and it must be contiguous and non-relocatable. Extents added to /stand must therefore come from the second logical volume on the disk, which, in a "standard" configuration, is usually the swap area (and, fortuitously, is also contiguous). Typically, it takes one reboot to free up swap space by moving to a new swap logical volume and then another reboot to switch to the larger /stand, and a boot to LVM maintenance mode is often needed to complete the size change.

This section describes how a system administrator can use an inactive system image created by a drd clone command to expand /stand with a single reboot. The most typical use of Dynamic Root Disk (DRD) is to patch an inactive system image. If desired, the administrator can use the reboot required to boot the patched image to also resize the /stand file system, thus accomplishing both tasks with a single reboot. Because the solution described below is a one-time change, care should be taken to make /stand sufficiently large for some period in the future.

Notes

• The following steps assume that the system has a "standard" configuration with lvol2 used for swap. After the change below has been made, the assumption is no longer valid.
• The procedure described below restricts all changes to the inactive system image. As a failsafe (in case, for example, you enter a command with vg00 instead of drd00), you should make sure you have a current recovery image of your system.

Procedure

1. Create a DRD clone. For further information on creating a clone, see the Creating a Clone section of this document, the manpage drd-clone(1M), or the Dynamic Root Disk Administrator's Guide, which can be found at:
http://bizsupport1.austin.hp.com/bc/docs/support/SupportManual/c01918754/c01918754.pdf

2. Mount the clone by executing the following command:

# /opt/drd/bin/drd mount

This command imports the cloned disk as the volume group drd00.

3. Execute the following command to see the current logical volumes used for swap, dump and boot. You should see lvol2 being used for swap on both the booted system and the clone.

On 11iv2 Integrity:

/usr/sbin/lvlnboot -v
Boot Definitions for Volume Group /dev/vg00:
Physical Volumes belonging in Root Volume Group:
        /dev/dsk/c3t15d0 (0/0/2/1.15.0) -- Boot Disk
Boot: lvol1     on:   /dev/dsk/c3t15d0
Root: lvol3     on:   /dev/dsk/c3t15d0
Swap: lvol2     on:   /dev/dsk/c3t15d0
Dump: lvol2     on:   /dev/dsk/c3t15d0, 0

Boot Definitions for Volume Group /dev/drd00:
Physical Volumes belonging in Root Volume Group:
        /dev/dsk/c1t15d0 (0/0/1/1.15.0) -- Boot Disk
Boot: lvol1     on:   /dev/dsk/c1t15d0
Root: lvol3     on:   /dev/dsk/c1t15d0
Swap: lvol2     on:   /dev/dsk/c1t15d0
Dump: lvol2     on:   /dev/dsk/c1t15d0, 0

On 11iv3:

/usr/sbin/lvlnboot -v
Boot Definitions for Volume Group /dev/vg00:
Physical Volumes belonging in Root Volume Group:
        /dev/disk/disk6_p2 -- Boot Disk
Boot: lvol1     on:   /dev/disk/disk6_p2
Root: lvol3     on:   /dev/disk/disk6_p2
Swap: lvol2     on:   /dev/disk/disk6_p2
Dump: lvol2     on:   /dev/disk/disk6_p2, 0

Boot Definitions for Volume Group /dev/drd00:
Physical Volumes belonging in Root Volume Group:
        /dev/disk/disk7_p2 -- Boot Disk
Boot: lvol1     on:   /dev/disk/disk7_p2
Root: lvol3     on:   /dev/disk/disk7_p2
Swap: lvol2     on:   /dev/disk/disk7_p2
Dump: lvol2     on:   /dev/disk/disk7_p2, 0

4. Execute the following command and note the size of both lvol1 (/stand) and lvol2 (swap), the physical extent (PE) size, and the number of free extents on the disk:

# /usr/sbin/vgdisplay -v | more
--- Volume groups ---
VG Name                     /dev/vg00
VG Write Access             read/write
VG Status                   available
Max LV                      255
Cur LV                      9
Open LV                     9
Max PV                      16
Cur PV                      1
Act PV                      1
Max PE per PV               4350
VGDA                        2
PE Size (Mbytes)            4
Total PE                    4340
Alloc PE                    3452
Free PE                     888
Total PVG                   0
Total Spare PVs             0
Total Spare PVs in use      0

--- Logical volumes ---
LV Name                     /dev/vg00/lvol1
LV Status                   available/syncd
LV Size (Mbytes)            300
Current LE                  75
Allocated PE                75
Used PV                     1

LV Name                     /dev/vg00/lvol2
LV Status                   available/syncd
LV Size (Mbytes)            1024
Current LE                  256
Allocated PE                256
Used PV                     1

--- Physical volumes ---
PV Name                     /dev/dsk/c1t15d0
PV Status                   available
Total PE                    4340
Free PE                     888
Autoswitch                  On

(Only the entries relevant to this procedure are shown.)

5. Based on the sizes above, decide how much space you want to allocate to the new swap logical volume, and how much space currently in lvol2 you want to assign to /stand. As a worked example with the illustrative values above: the PE size is 4 MB, so a 2 GB swap volume consumes 2048 / 4 = 512 of the 888 free extents, and expanding lvol1 from 75 to 150 extents grows /stand from 300 MB to 600 MB.

6. Create a new logical volume for swap. The extents for swap must be contiguous and non-relocatable. For example, to assign 2 GB to a new logical volume to be used for swap, you would use the following command:

# /usr/sbin/lvcreate -L 2048 -C y -r n -n swap drd00

7. Remove the old dump device from drd00:

# /usr/sbin/lvrmboot -v -d lvol2 /dev/drd00

8. Create the new dump device:

# /usr/sbin/lvlnboot -d /dev/drd00/swap

9. Remove the old swap device:

# /usr/sbin/lvrmboot -s /dev/drd00

10. Add the new swap device:

# /usr/sbin/lvlnboot -s /dev/drd00/swap

11. Verify the changes:

# /usr/sbin/lvlnboot -v

12. Remove /dev/drd00/lvol2:

# /usr/sbin/lvremove -f /dev/drd00/lvol2

13. Using the values you determined in Step 5 above, extend /dev/drd00/lvol1 — the volume where /stand is mounted. For example, the following command expands /dev/drd00/lvol1 to 150 extents:

# /usr/sbin/lvextend -l 150 /dev/drd00/lvol1

14. Unmount /dev/drd00/lvol1 so that the file system can be extended:

# /usr/sbin/umount /dev/drd00/lvol1

15. Use extendfs to extend the /stand file system on the inactive system image. Note that the character device file must be specified, and that the argument of -F is the file system type:

# /usr/sbin/extendfs -F hfs /dev/drd00/rlvol1

16. Re-mount /dev/drd00/lvol1:

# /usr/sbin/mount /dev/drd00/lvol1 /var/opt/drd/mnts/sysimage_001/stand

17. Make the newly expanded logical volume a boot volume:

# /usr/sbin/lvlnboot -b lvol1 /dev/drd00

19. The old lvol2 must be removed from the device directory of the inactive system image, because the device directory on the inactive system image reflects devices as they will appear when the image is booted. In this example, the file to be removed is part of the volume group vg00 on the inactive image:
# /usr/bin/rm -f /var/opt/drd/mnts/sysimage_001/dev/vg00/lvol2

20. Create a new character device and a new block device on the inactive system image so that the new swap volume is recognized when it is mounted. The high order byte of the minor number used should match that of the group file /var/opt/drd/mnts/sysimage_001/dev/vg00/group. The lower order bytes should match those of the file /dev/drd00/swap. In this example, we find:
# /usr/bin/ll /var/opt/drd/mnts/sysimage_001/dev/vg00/group
crw-r-----   1 root   sys   64 0x020000 Aug  7 19:26 /var/opt/drd/mnts/sysimage_001/dev/vg00/group
# /usr/bin/ll /dev/drd00/swap
brw-r-----   1 root   sys   64 0x02000a Aug  7 20:23 /dev/drd00/swap
So we use the commands:
# /usr/sbin/mknod /var/opt/drd/mnts/sysimage_001/dev/vg00/rswap c 64 0x02000a
# /usr/sbin/mknod /var/opt/drd/mnts/sysimage_001/dev/vg00/swap b 64 0x02000a
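If you prefer not to transcribe the minor number by hand, it can be captured with a short pipeline. This is a convenience sketch, not part of the documented procedure; it assumes the POSIX shell and the paths used in Step 20:

# MINOR=$(/usr/bin/ll /dev/drd00/swap | /usr/bin/awk '{print $6}')
# /usr/sbin/mknod /var/opt/drd/mnts/sysimage_001/dev/vg00/rswap c 64 $MINOR
# /usr/sbin/mknod /var/opt/drd/mnts/sysimage_001/dev/vg00/swap b 64 $MINOR

The major number (64 in this example) and the minor number are the fifth and sixth fields of the ll listing. As described in Step 20, verify that the high order byte of the captured minor number matches that of the group file before running mknod.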

21. Re-create the file /var/opt/drd/mapfiles/drd00mapfile:
# /usr/sbin/vgexport -p -m /var/opt/drd/mapfiles/drd00mapfile /dev/drd00
(This is the only change made to the booted system. It is made so that if the clone is unmounted and re-mounted before booting to it, the drd00 volume group will be properly imported. Even though the vgexport fails because the volume group is active, the mapfile is still updated.)

22. Check to see that the boot and swap areas on the clone are as expected:

On 11iv2 Integrity:
# /usr/sbin/lvlnboot -v
Boot Definitions for Volume Group /dev/vg00:
Physical Volumes belonging in Root Volume Group:
     /dev/dsk/c3t15d0 (0/0/2/1.15.0) -- Boot Disk
Boot: lvol1     on:     /dev/dsk/c3t15d0
Root: lvol3     on:     /dev/dsk/c3t15d0
Swap: lvol2     on:     /dev/dsk/c3t15d0
Dump: lvol2     on:     /dev/dsk/c3t15d0, 0

Boot Definitions for Volume Group /dev/drd00:
Physical Volumes belonging in Root Volume Group:
     /dev/dsk/c1t15d0 (0/0/1/1.15.0) -- Boot Disk
Boot: lvol1     on:     /dev/dsk/c1t15d0
Root: lvol3     on:     /dev/dsk/c1t15d0
Swap: swap      on:     /dev/dsk/c1t15d0
Dump: swap      on:     /dev/dsk/c1t15d0, 0

On 11iv3:
# /usr/sbin/lvlnboot -v
Boot Definitions for Volume Group /dev/vg00:
Physical Volumes belonging in Root Volume Group:
     /dev/disk/disk6_p2 -- Boot Disk
Boot: lvol1     on:     /dev/disk/disk6_p2
Root: lvol3     on:     /dev/disk/disk6_p2
Swap: lvol2     on:     /dev/disk/disk6_p2
Dump: lvol2     on:     /dev/disk/disk6_p2, 0

Boot Definitions for Volume Group /dev/drd00:
Physical Volumes belonging in Root Volume Group:
     /dev/disk/disk7_p2 -- Boot Disk
Boot: lvol1     on:     /dev/disk/disk7_p2
Root: lvol3     on:     /dev/disk/disk7_p2
Swap: swap      on:     /dev/disk/disk7_p2
Dump: swap      on:     /dev/disk/disk7_p2, 0

23. Set the clone to be the primary boot disk and boot to it:
# /opt/drd/bin/drd activate -x autoreboot=true

HP recommends that you create a shutdown script that runs drd sync, so that files changed on the original image after the clone was created are propagated to the clone before the reboot. See the "DRD sync" section below for more details.
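A minimal sketch of such a shutdown script follows. It is illustrative only: the script name, message, and kill-link sequence number are assumptions, and the Dynamic Root Disk Administrator's Guide contains a supported sample.

#!/sbin/sh
# /sbin/init.d/drd_sync (hypothetical name)
# Propagate files changed on the booted image since the clone was created.
case "$1" in
stop_msg)
     echo "Synchronizing the DRD clone"
     ;;
stop)
     /opt/drd/bin/drd sync
     ;;
*)
     ;;
esac
exit 0

To run during shutdown, the script would be linked as a kill script, for example:
# /usr/bin/ln -s /sbin/init.d/drd_sync /sbin/rc1.d/K005drd_sync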

Viewing Log Files
When you use drd runcmd to run commands that modify the inactive system image, logging occurs in several places that correspond to the locations at which the processes were executed. Because DRD runs on the booted system, a DRD log is created on the active image. Any sw* command that you run on an inactive image appends to the sw* logs on the inactive image. For example, the command:
# /opt/drd/bin/drd runcmd swinstall -s depotserver:/patch_depot PHKL_9999
results in new messages in each of the following log files:
• In /var/opt/drd/drd.log (the original log file, located on the booted system)
• In the copy of /var/adm/sw/swinstall.log on the clone
• In the copy of /var/adm/sw/swagent.log on the clone

Because drd.log is located in the /var file system, it is copied during the clone operation to the /var file system on the clone. However, the record of the clone operation in the clone's log is truncated at the message indicating that file systems are being copied, because the clone's file systems must be unmounted before the final banner message of the operation is written to the log. The log on the booted system will be complete, ending with the final banner message. The next message in the clone's log is issued by the next DRD command run on the clone itself, after it is booted.

If the image was created by a clone, then the initial copies of the logs were copied by the clone operation. New records might have been appended to the logs by subsequent drd runcmd sw* operations or by sw* commands run after the image was booted. As a result, the sw* logs for a given image produce a complete picture of all software operations on that image.

Note the following important factors about log files created by a drd runcmd operation:
• The drd log (/var/opt/drd/drd.log) contains entries that pertain only to the command that was run, including the return code and/or error messages. This log does not contain any log messages from the sw* command itself. Note that drd runcmd view /var/opt/drd/drd.log does not provide a view of drd runcmd commands that were run recently on the booted system, so it is not particularly useful.
• The sw* log file that results from a drd runcmd operation is always on the inactive system image (that is, the clone) and is not appended to the original logs on the booted system (that is, the active system image).

Logs can be viewed on the inactive system image by executing drd runcmd view. The log paths are relative to the mount point. For example, to view the swagent log on the clone, execute the following command:
# /opt/drd/bin/drd runcmd view /var/adm/sw/swagent.log

You can also view logs directly by mounting the inactive system image with the drd mount command. The mount point of the root file system of an inactive image created by the drd clone command is /var/opt/drd/mnts/sysimage_001. After DRD is used to create, activate, and boot a clone, the mount point of the original image (which is now inactive) is /var/opt/drd/mnts/sysimage_000. The swagent log therefore resides at /var/opt/drd/mnts/sysimage_001/var/adm/sw/swagent.log or /var/opt/drd/mnts/sysimage_000/var/adm/sw/swagent.log, respectively.

Note: The drd runcmd view command provides a mechanism for browsing logs, but not for annotating them. If you want to modify log files, you need to mount the inactive system image and edit the logs using their full pathnames on the booted system. For example, to annotate the swagent log on the inactive system image, you would use the following commands:
# /opt/drd/bin/drd mount
# /usr/bin/echo "Swagent log after quality pack application using drd runcmd, June 6, 2007" >> \
/var/opt/drd/mnts/sysimage_001/var/adm/sw/swagent.log

The following example compares operations and logs on the booted system and the inactive system image. To swverify the booted system and view the swverify log, execute the following commands:
# /usr/sbin/swverify \*
# /usr/bin/view /var/adm/sw/swverify.log

To swverify the inactive system image and view the swverify log, execute the following commands:
# /opt/drd/bin/drd runcmd swverify \*
# /opt/drd/bin/drd runcmd view /var/adm/sw/swverify.log
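Along the same lines, you can confirm where a patch actually landed. This brief check is not from the original paper; it reuses the placeholder patch PHKL_9999 from the swinstall example above:

# /opt/drd/bin/drd runcmd swlist -l patch PHKL_9999
(lists the patch as recorded in the inactive image's Installed Products Database)
# /usr/sbin/swlist -l patch PHKL_9999
(run against the booted system, this should report that the patch is not installed there)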

Maintaining the Integrity of System Logs
If system logs are being collected and audited for regulatory, intrusion detection, or forensics purposes, it is desirable to avoid gaps in the logs between the creation and boot of the clone. To address this need, the drd sync command was introduced in March 2010. It is recommended that the drd sync command be incorporated into a shutdown script that runs prior to activating and booting the clone, so that any files changed on the original image after the clone was created will be propagated to the clone. See the "DRD sync" section below for more information.

Delayed Activation/Boot of the Clone
HP recommends that system administrators clone, patch, and boot in a fairly short time cycle. If a long period of time has passed since the clone was created, it is recommended that the clone be recreated, rather than using drd sync to copy files from the original image to the clone. If it has been a few days since the clone was created, you can use the drd sync command to determine how many files have changed on the original image and would need to be propagated to the clone. If this number is large, it may be advisable to re-create the clone.

DRD sync
With the March 2010 release of DRD, version A.3.5.186, the drd sync command is now supported to automatically synchronize the active image and the clone, eliminating the need to manually update files on the clone. For detailed information regarding drd sync, please see Chapter 5 of the Dynamic Root Disk Administrator's Guide at http://bizsupport1.austin.hp.com/bc/docs/support/SupportManual/c01918754/c01918754.pdf. The information presented there will help you understand how to use drd sync and when it is recommended that drd clone be used to recreate a clone instead of using drd sync, and it even includes a sample drd sync system shutdown script.

DRD Activate and Deactivate Commands
The commands drd activate and drd deactivate enable an administrator to choose the image to be booted the next time the system is re-started: an image is said to be activated if it will be booted. A drd activate command activates the inactive image; a drd deactivate command activates the booted image. An administrator can use drd activate and drd deactivate to implement various maintenance schemes, such as setting a DRD clone as an alternate boot disk or activating a mirrored DRD clone. For further information on these commands, please see the Using Dynamic Root Disk Activate and Deactivate whitepaper at http://bizsupport1.austin.hp.com/bc/docs/support/SupportManual/c01920455/c01920455.pdf.
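For example, a typical sequence might look like the following. This is not from the original paper; -p is the standard DRD preview option, which reports what a command would do without making any change:

# /opt/drd/bin/drd activate -p
(preview: reports which disk would be set as the primary boot path)
# /opt/drd/bin/drd activate
(sets the inactive image as the primary boot disk for the next reboot)
# /opt/drd/bin/drd deactivate
(reverses the decision, so that the currently booted image boots again)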

For More Information
To read more about Dynamic Root Disk, go to www.hp.com/go/drd.

Call to Action
HP welcomes your input. Please give us comments about this white paper, or suggestions for LVM or related documentation, through our technical documentation feedback website:
http://docs.hp.com/en/feedback.html

© 2010 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.

5900-0594, August 2010