Version: A.3.13
HP-UX 11i v2 and HP-UX 11i v3
Revision History
766141-001 A.3.13
HP-UX 11i v2 March 2014
HP secure development lifecycle
Starting with the HP-UX 11i v3 March 2013 update release, the HP secure development lifecycle provides
the ability to authenticate HP-UX software. Software delivered through this release has been digitally
signed using HP's private key, so you can verify the authenticity of the software before installing
the products delivered through this release.
To verify the software signatures in a signed depot, the following products must be installed on your
system:
• B.11.31.1303 or later version of SD (Software Distributor).
• A.01.02.00 or later version of HP-UX Whitelisting (WhiteListInf).
To verify the signatures, run: /usr/sbin/swsign -v -s <depot_path>.
For more information, see Software Distributor documentation at http://www.hp.com/go/sd-docs.
NOTE: Ignite-UX software delivered with the HP-UX 11i v3 March 2014 release or later supports
verification of the software signatures in a signed depot or on media during cold installation. For more
information, see the Ignite-UX documentation at http://www.hp.com/go/ignite-ux-docs.
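As a quick check, the verification command above can be run against a depot. This is a sketch: the depot path is an example placeholder, and the product names passed to swlist follow those used elsewhere in this document.

```shell
# Confirm the prerequisite products are installed (versions must meet
# the minimums listed above).
/usr/sbin/swlist -l product SW-DIST WhiteListInf

# Verify the signatures in a signed depot (example depot path).
/usr/sbin/swsign -v -s /var/depots/signed_depot
```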
1 Overview
DRD (Dynamic Root Disk) is an HP-UX system administration toolset used to clone
an HP-UX system image to an inactive disk for software update, maintenance, recovery, and
rehosting. DRD is available for download from Software Depot. System
administrators use DRD to manage system images on HP PA-RISC and Itanium®-based
systems. DRD complements other parts of your total HP solution by
reducing system downtime while installing and updating patches and other software.
DRD is supported on HP-UX 11i v2 September 2004 and all subsequent
releases of HP-UX 11i v2, and on all HP-UX 11i v3 releases. DRD
supports LVM- or VxVM-managed root volumes.
Product features
The following features are supported on the most recent release of DRD (Dynamic Root Disk).
Features labeled "new" were added in the most recent release; new functionality in previous
releases is described in the next chapter, which details each release.
The most recent release of DRD is supported on HP-UX 11i v2 and HP-UX 11i
v3 with an LVM or a VxVM volume manager. It provides the following functionality:
• Hot maintenance capability. The DRD (Dynamic Root Disk) tools can be used to create a clone
of the running system, apply patches to the clone, and boot the clone as the running system.
On 11i v3, products as well as patches can be installed and managed.
• Hot recovery capability. The DRD (Dynamic Root Disk) tools can be used to create a clone
and boot it if the running system fails.
• SD command support. The DRD (Dynamic Root Disk) tools provide a mechanism for running
SD commands such as swinstall, swremove, swverify, swmodify, swlist, and
swjob on the clone.
• Clone accessibility. The clone can be mounted on the running system so that its files can be
viewed or modified.
• Mirror compatibility. The DRD (Dynamic Root Disk) operation will not affect any mirror already
created on the running system. DRD (Dynamic Root Disk) can be used to create a mirror of
the clone during the cloning operation.
• Command line interface. The DRD (Dynamic Root Disk) tools are run from the command line.
• The drd deactivate command. This restores the primary boot path to the currently
active (running) system image.
• A rich set of commands for activating and deactivating the inactive system image. This
determines what volume will be used as the root on the next system boot. This includes options
to set the alternate boot disk and the High Availability alternate boot disk.
• Support for volumes created under root volume group/root disk group with LVM volume group
versions 1.0 and 2.2, 4.1 VxVM, 5.0 VxVM, 5.0.1 VxVM, and 5.1SP1 VxVM. These root
volumes can have any name (not just vg00).
• The drd status command. This allows the user to easily view clone information on the
system. The command specifies the following: which disk the clone resides on; when the clone
was created; the location of the clone's mirror (if one exists); and the original disk that was
copied to create the clone. It also specifies the state of the boot partition on the clone, mirror,
and original disks, as well as which disk is booted and which is activated (the disk that will
be booted from on the next reboot).
• Rehost the clone to another system. This feature allows users to create a clone, which can
optionally be modified, then boot that clone on another system. Rehosting can be used to
quickly and efficiently provision new systems, and to simplify the setup of test systems. Rehosting
is supported on HP-UX 11i v3 and v2 Integrity systems with LVM roots, for rehosting from a
blade to another blade (v3 only) or a VM to another VM (v2 and v3).
• The ability to perform an OE update from an older version of 11i v3 to HP-UX
11i v3 update 4 or later. You can now update your OE level on the clone while your
original system remains up and running. Once the update on the clone is done, you
can boot the clone and keep your original image as a backup.
• Improved performance and accuracy of the DRD (Dynamic Root Disk) algorithm that
determines whether a disk to be used as a clone is already in use.
• The drd sync command, which compares and synchronizes the clone with the original OS disk.
After creating a clone but prior to activating it, you can use drd sync to compare the original
image to the clone, identifying and optionally applying any changes made to the original
image that need to be propagated to the clone; examples are changes to password files, log
files, and so on. drd sync has been enhanced to:
◦ Allow users to exclude files and directories, in addition to those described in drd
sync(1M), from the sync operation.
◦ Support frequently changing files, such as logs of automated processes.
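The hot-maintenance features above combine into a typical patching session. The sketch below is illustrative only: the target disk device, depot path, and patch name are placeholders, and the drd clone options follow the pattern shown in the defect examples later in this document.

```shell
# Create a clone of the running root volume on a spare disk
# (example target device; overwrite=true allows reuse of the disk).
/opt/drd/bin/drd clone -x overwrite=true -t /dev/dsk/c3t5d0

# Install software onto the inactive clone with an SD command
# (example depot path and placeholder patch name).
/opt/drd/bin/drd runcmd swinstall -s /var/depots/patches PHCO_12345

# Make the clone the boot target for the next reboot, then reboot.
/opt/drd/bin/drd activate
/usr/sbin/shutdown -r -y 0
```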
2 Supported DRD (Dynamic Root Disk) releases
The first release of DRD (Dynamic Root Disk), version A.1.0, was posted on the web in January
2007. For each release in the last two years, this chapter lists the new features and defect fixes.
Note that all releases are cumulative, and that all releases of DRD are
compatible with previous versions unless otherwise noted.
Versions of DRD are supported for at least two years; versions not listed in
this section of the latest release notes are no longer supported. While it is
recommended to use the latest version of DRD possible, since each release contains additional
quality improvements, it is even more strongly recommended to use a version of
DRD that is listed as supported in the most recent version of this document.
Defect ID Description
QXCR1001212797 Title: drd can fail with out of memory errors if too many files in vg00
Severity: Medium
Problem and details: With this fix, DRD (Dynamic Root Disk) no longer fails with an
"out of memory" error when many files are present in vg00; DRD now
supports approximately 2.5 million files in vg00. The -x exclude_list option must
be used to prevent syncing of directories that contain a large number of files.
Resolution: This issue is fixed in this release.
NOTE: For more information on out of memory issues, see “Out of memory issues”
(page 19).
Defect ID Description
QXCR1001186489 Title: drd activate selects incorrect (standby) HW path for setboot command
Severity: Serious
Problem and details: Prior to this fix, the drd activate command set
a standby hardware path as the primary boot path, resulting in a boot failure. With
this fix, drd activate no longer selects a standby hardware path.
Resolution: This issue is fixed in this release.
Defect ID Description
QXCR1001080540 Title: DRD (Dynamic Root Disk) clone appears to hang when root group contains
snapshot volumes
Severity: Serious
Problem and details: Prior to this fix, a drd clone operation might hang when
cloning a system that contained LVM snapshot volumes. With this fix, the drd clone
operation issues an error when attempting to clone a system with LVM snapshot
volumes.
Resolution: This issue is fixed in this release.
QXCR1001129462 Title: drd clone does not copy disk*_p1:/efi/hpux/vparconfig.efi when cloning
vPars
Severity: Medium
Problem and details: Prior to this fix, DRD (Dynamic Root Disk) did not copy the
EFI/HPUX/vparconfig.efi file during a clone when the file was present in the EFI/HPUX
directory. With this fix, the EFI/HPUX/vparconfig.efi file is copied during
a clone when the file is present in the EFI/HPUX directory.
Resolution: This issue is fixed in this release.
QXCR1001168773 Title: drd clone fails at frecover if /etc/lvmconf is moved & symbolic linked
Severity: Medium
Problem and details: Prior to this fix, drd clone failed during the copy of file
systems when /etc/lvmconf had been moved and a symbolic link put in its place.
With this fix, drd clone no longer fails when /etc/lvmconf is moved and a symbolic
link is put in its place.
Resolution: This issue is fixed in this release.
Defect ID Description
QXCR1000988687 Title: EFI content on drd clone is generic rather than a copy of booted EFI
Severity: Serious
Problem and details: Prior to this fix, the EFI AUTO file on the clone was generic rather
than a copy of that which was on the original system. With the fix, the EFI AUTO file
is copied from the original system to the clone and mirror, if applicable.
Resolution: This issue is fixed in this release.
QXCR1001078844 Title: Some DRD error messages refer to files erased before job completes
Severity: Serious
Problem and details: Prior to the fix, DRD (Dynamic Root Disk) deleted the
temporary configuration files in /var/opt/drd/tmp when the DRD session
completed. With the fix, in the event of an error, DRD copies the contents of /var/
opt/drd/tmp to /var/opt/drd/save for use in diagnosing the error.
Resolution: This issue is fixed in this release.
QXCR1001106819 Title: DRD (Dynamic Root Disk) activate and deactivate do not work correctly on cell
based systems (vPars)
Severity: Critical
Problem and details: Prior to this fix, drd activate and drd deactivate
attempted to set the High Availability boot path on vPar systems that do not support
the High Availability boot path. With this fix, drd activate and drd deactivate
no longer attempt to set the High Availability boot path on such systems.
Resolution: This issue is fixed in this release.
QXCR1001113284 Title: drd sync fails due to file with invalid character in the file name
Severity: Serious
Problem and details: Prior to this fix, drd sync could not tolerate file names
that contained unprintable control characters. With this fix, drd sync handles
unprintable control characters as well as characters in the extended ASCII code set.
Resolution: This issue is fixed in this release.
Defect ID Description
QXCR1001026491 Title: pvmove of root disk causes many DRD (Dynamic Root Disk) problems
Severity: Serious
Problem and details: Prior to this fix, if boot disks were moved by using pvmove or
by creating and breaking mirrors without a reboot, drd clone was failing with the
following errors:
"/var/opt/drd/mnts/sysimage_000".
Unmount the volume with umount(1M), then re-run drd.
Resolution: This issue is fixed in this release.
QXCR1001038220 Title: drd runcmd update-ux -i fails with Terminal user interface failed message
Severity: Serious
Problem and details: The use of the -i (interactive) option with drd runcmd
update-ux was never meant to be supported. DRD now checks for use of this option and
issues an error message. The DRD (Dynamic Root Disk) manpage reflects this change.
Resolution: This issue is fixed in this release.
QXCR1001074042 Title: umount at end of drd clone attempt fails if PHCO_40522 installed
Severity: Medium
Problem and details: Prior to this fix, when PHCO_40522 was installed on an HP-UX
11i v2 (11.23) system, the clone operation was failing with an unmount error. Now
clone works as expected.
Resolution: This issue is fixed in this release.
Defect ID Description
QXCR1001056449 Title: drd sync is not working for directories and files underneath
Severity: Serious
Problem and details: This fix resolves the issue where drd sync does not copy new
files and directories to the clone even though they are listed in
/var/opt/drd/sync/files_to_be_copied_by_drd_sync.
Resolution: This issue is fixed in this release.
QXCR1001051734 Title: DRD (Dynamic Root Disk) needs to differentiate WARNINGS from ERRORs issued
by list_expander
Severity: Critical
Problem and details: Prior to the fix for QXCR1001051734, drd sync treated all
non-zero return codes from list_expander (the utility that creates the initial list of
files to be synchronized) as warnings. In some cases, this resulted in an empty list of
files to be synchronized, but just a warning return code from "drd sync." After this
fix, the return code from list_expander is always respected, and an error return
code will cause "drd sync" to fail.
Resolution: This issue is fixed in this release.
QXCR1001050440 Title: non-empty /etc/lvmpvg file will cause drd clone to abort
Severity: Serious
Problem and details: Prior to the fix for QXCR1001050440, the existence of
/etc/lvmpvg caused drd clone to fail. After this fix, drd clone succeeds.
Resolution: This issue is fixed in this release.
QXCR1001041894 Title: files or directories with " char will cause drd sync to fail
Severity: Serious
Problem and details: Prior to the fix for QXCR1001041894, drd sync incorrectly
recorded files with special characters in the DRD registry. This caused the registry to
be unreadable, making subsequent "drd sync" operations fail. After this fix, files
containing special characters are handled correctly.
Resolution: This issue is fixed in this release.
DRD versions A.3.6 and A.3.7 contain fixes for the following defects:
QXCR1000894322 Title: File system ACLs are not preserved on clone mount points
Severity: Serious
Problem and details: ACLs that are set on LVM volume mount points in the root group
are now copied to the clone, provided the mount point has a journaled file system.
Resolution: This issue is fixed in this release.
QXCR1000933863 Title: ^C during DRD (Dynamic Root Disk) causes the daemons and mounts to get left
behind on the active sys
Severity: Serious
Problem and details: Interrupting drd runcmd with a signal (SIGINT, SIGQUIT, SIGHUP,
or SIGTERM) no longer leaves extraneous fsdaemon and swagentd processes running.
Resolution: This issue is fixed in this release.
QXCR1000801373 Title: drd clone of VxVM root fails if a root volume ends with name of another
volume
Severity: Critical
Problem and details: This was a defect in VxVM that affected DRD. Before this issue
was addressed, the drd clone command failed on a VxVM-managed system with
a root group volume name ending in another root group volume name.
For example, the root volume group names varoptvol and optvol caused the drd
clone command to fail.
The following HP-UX patches for VxVM, or appropriate superseding patches, resolve
this issue:
• VxVM 5.0 on HP-UX 11i v3 (11.31): PHCO_40294
• VxVM 5.0 on HP-UX 11i v2 (11.23): PHCO_38570
• VxVM 4.1 on HP-UX 11i v3 (11.31): PHCO_38654
• VxVM 4.1 on HP-UX 11i v2 (11.23): PHCO_38464
Resolution: This issue is fixed in this release.
QXCR1000950254 Title: Disks missing device files trigger stack trace in drd clone
Severity: Serious
Problem and details: With this fix, the drd clone command succeeds when a device file
is missing for a disk other than the clone target.
Resolution: This issue is fixed in this release.
QXCR1000961233 Title: Cloning fully allocated LVM systems can fail if group parameters are non-default
Severity: Serious
Problem and details: Cloning a fully allocated LVM disk to an identically sized target
can fail due to space issues if the LVM parameters for extent size, max physical
volumes, or max logical volumes are set to non-default values, for example:
# /opt/drd/bin/drd clone -v -x overwrite=true -t /dev/dsk/c3t5d0
(jobid=lccns166)
QXCR1000964506 Title: LVM panic booting 11.23 disk that was drd mounted on 11.31 system
Severity: Critical
Problem and details: When an 11.23 system is cloned and then updated to 11.31,
the clone can now be mounted from the updated 11.31 system and remain bootable.
Resolution: This issue is fixed in this release.
QXCR1000983266 Title: Msg from drd runcmd update-ux says "swmoeupdate" must be "swm
oeupdate"
Severity: Serious
Problem and details: During drd runcmd update-ux -s /depot, the following
message lacked a space between /opt/swm/bin/swm and oeupdate: * Executing
command: /opt/swm/bin/swmoeupdate -s /depot.
The DRD (Dynamic Root Disk) message is fixed to be: * Executing command:
/opt/swm/bin/swm oeupdate -s /depot.
Resolution: This issue is fixed in this release.
QXCR1000984246 Title: The tmpauth.save directory is recursively created on multiple drd runcmds
Severity: Serious
Problem and details: Prior to the fix for QXCR1000984246, drd clone could fail if
the booted system was previously an inactive image that was modified or listed
repeatedly with drd runcmd. On an LVM-managed root, the clone produces
messages similar to these:
QXCR1000985424 Title: drd B.1123.A.3.4.91: drd clone always fails if we specify mirror_disk
Severity: Serious
Problem and details: The release of A.3.4 introduced a regression that caused creation
of mirrored clones to fail. This regression is fixed in A.3.5.
Resolution: This issue is fixed in this release.
• HP-UX 11i v3: SW-DIST (Software Distributor) version B.11.31.0709 or greater. When
installing DRD (Dynamic Root Disk) from an OE, an AR, or Software Depot, these
dependencies are installed if your system does not already have them, so no special
action is required on your part.
DRD (Dynamic Root Disk) has patch requirements in addition to those listed above. For up-to-date
information on which patches are required and how to acquire them along with their dependencies,
see the DRD (Dynamic Root Disk) Downloads & Patches web page.
nl
If your root volume group is VxVM-managed, there is a good chance you will need to install a VxVM
patch in order for DRD (Dynamic Root Disk) to operate correctly.
For more information about HP Dynamic Root Disk, see http://www.hp.com/go/drd-docs.
For more information related to HP-UX technical documentation, see http://www.hp.com/go/hpux-core-docs.
Clone features
The drd clone command supports the following configurations:
• Clone target must be a single physical disk (with optional second disk for mirroring) or SAN
LUN. If an LVM root volume is spread across multiple disks, it can still be cloned, but the clone
will be on a single physical disk. A VxVM root disk group may reside on several disks, but
each disk must be an exact mirror of every other disk. The clone of a VxVM root disk group
will reside on a single physical disk.
• Root volume must be LVM (DRD [Dynamic Root Disk] versions A.1.0, A.1.1, and A.2.0); root
volume can be LVM or VxVM (DRD versions A.3.0 or later).
• Prior to DRD version A.3.0, the root volume group must be named vg00; the drd
clone command clones only the contents of vg00, regardless of other volume groups that
exist (DRD versions A.1.0, A.1.1, and A.2.0). The root volume group
may have any name when using DRD version A.3.0 or later.
• Due to system calls that DRD (Dynamic Root Disk) (and many other HP-UX applications)
depends on, DRD expects legacy Device Special Files (DSFs) to be present and
the legacy naming model to be enabled. Therefore, HP suggests that only partial migration to
persistent DSFs be performed, as detailed in the HP-UX 11i v3 Persistent DSF Migration Guide
at HPSC.
• On VxVM configurations, DRD (Dynamic Root Disk) expects OS Native Naming (osn). Enclosure
Based Naming (ebn) must be turned off.
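Both requirements above can be checked from the command line. The vxddladm command is taken from the EBN workaround later in this document and applies only to VxVM-managed roots; using ioscan -m dsf as a quick way to confirm that legacy DSFs exist is an assumption of this sketch.

```shell
# List persistent-to-legacy DSF mappings; legacy DSFs should appear
# alongside the persistent names if the legacy naming model is enabled.
/usr/sbin/ioscan -m dsf

# On VxVM-managed roots, confirm the naming scheme is OS Native (osn),
# not Enclosure Based (ebn).
/usr/sbin/vxddladm get namingscheme
```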
Use of SD TUI/GUI
When using the SD TUI/GUI with drd runcmd, you may see messages about building a kernel
or rebooting the system. These messages are not accurate; under no circumstances will using SD
under drd runcmd lead to a reboot, nor will SD under drd runcmd lead to a kernel build on
the running system.
Installation requirements
DRD (Dynamic Root Disk) depends on other patches and software.
For more information, see the "Required patches and software" (page 17) section.
Workaround
Versions of DRD (Dynamic Root Disk) prior to A.3.13 support approximately 1 million files;
DRD version A.3.13 supports approximately 2.5 million files in the root volume
group. The number of files that can be cloned or synchronized varies with each
environment (for example, with the length of path names), so the operation might fail even
if the number of files is less than 2.5 million.
If there are more files in the root volume group than DRD supports, the
directories containing large numbers of files must be excluded using the -x exclude_list
option (for drd sync only). For drd clone and drd sync to work properly, move files from
the root volume group to another volume group so that the number of files in the root
volume group is within the maximum file limit supported by DRD.
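The exclusion described above might look like the following. The directory shown is a placeholder, and the exact value syntax for the exclude_list extended option, as well as the use of -p for a preview pass, should be confirmed against drd-sync(1M).

```shell
# Preview a sync, excluding a directory that holds a very large number
# of files (placeholder path; -p previews without copying anything).
/opt/drd/bin/drd sync -p -x exclude_list=/var/opt/bigdata

# If the preview looks correct, run the sync for real.
/opt/drd/bin/drd sync -x exclude_list=/var/opt/bigdata
```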
Workaround
To use a directory depot on the active system image, you must install the March 2010
or later versions of the DRD (Dynamic Root Disk), SWM, and SW-DIST products from an OE depot.
This must be done before the clone is created, so that the new DRD, SWM, and
SW-DIST are on the active system and on the clone.
The following assumes the OE depot is at /var/depots/1003-DCOE. This must be executed as
root:
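The command itself does not appear in this copy of the document. By analogy with the media-depot workaround that follows, it would likely take this form (a reconstruction under that assumption, not verbatim from the original):

```shell
# Install the newer DRD, SWM, and SW-DIST from the OE depot without
# pulling in the rest of the OE (depot path as stated above).
swinstall -x autoselect_dependencies=false -s /var/depots/1003-DCOE \
    DRD SWM SW-DIST
```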
Workaround
To use a media depot to do a DRD (Dynamic Root Disk) update, you must first install the
September 2010 or later versions of the DRD, SWM, and SW-DIST products from
the media. This must be done before the clone is created, so that the new DRD,
SWM, and SW-DIST are on the active system and on the clone.
To install these products, execute a command like the following. This assumes that the first DVD of
the September 2010 or later release is mounted at /SD_CDROM.
swinstall -x autoselect_dependencies=false -s /SD_CDROM \
    DRD SWM SW-DIST PHCO_36525
Now you can create the clone, then perform an OE update from the media on the clone.
Rehosting issues
Issue
DRD (Dynamic Root Disk) status reports error on rehosted boot disk
After rehosting a clone disk using drd rehost and booting that disk, running DRD (Dynamic Root
Disk) commands may result in the following error. The new rehosted machine may not recognize
the disks that were on the previous system and that are now specified in the /var/opt/drd/
registry/registry.xml file.
Workaround
To clear out the outdated information on your new system, simply remove the registry file,
which DRD (Dynamic Root Disk) uses to hold pertinent clone and disk information. Once a
system is rehosted, this information no longer applies. To remove the registry file:
# rm /var/opt/drd/registry/registry.xml
Status: A fix is planned for a future release.
Workaround
If the version of LVM 2.x on the system is from September 2008 or newer, and DRD (Dynamic
Root Disk) B.11.31.A.3.4 is also installed, DRD (Dynamic Root Disk) recognizes usage and
formatting by an LVM 2.x volume group on the system. In this case, DRD (Dynamic Root Disk) will
not overwrite a disk in use by LVM 1.0, or LVM 2.x.
If you have DRD (Dynamic Root Disk) B.11.31.A.3.4 installed and a version of LVM 2.x released
prior to September 2008, DRD (Dynamic Root Disk) recognizes when a disk is formatted, but will
overwrite it if the user specifies overwrite=true on the command line—even if the disk is in use.
DRD (Dynamic Root Disk) cannot determine usage in this case.
If the versions of DRD (Dynamic Root Disk) and LVM are not updated on the system, it is the
system administrator's responsibility to identify a disk that can be overwritten by DRD, as is
the case for other disk usages (swap, dump, database, and so on).
Status: More of a limitation than a problem.
Issue
limitation on nsswitch.conf entries during drd runcmd
The drd runcmd command does not support the following nsswitch.conf file entries on the clone
while managing software. If the file contains them, the drd runcmd command will fail.
passwd: compat
group: compat
hosts: nis [NOTFOUND=return] files
You might see the following error message during the execution of drd runcmd if your
nsswitch.conf file contains the "hosts: nis" entry:
Workaround
Because DRD (Dynamic Root Disk) does not need NIS to be running during swinstall, swremove,
or update, you can move the nsswitch.conf file on the clone to a temporary location. After you
are done modifying the clone, you can move it back.
# drd mount
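The mount command above is the first step; a full session might look like the following. The clone mount point sysimage_001 and the patch name are assumptions for illustration (the actual mount point is reported by drd mount).

```shell
# Mount the inactive clone image on the running system.
/opt/drd/bin/drd mount

# Move nsswitch.conf on the clone aside (example mount point).
mv /var/opt/drd/mnts/sysimage_001/etc/nsswitch.conf /var/tmp/nsswitch.conf.drd

# Manage software on the clone, e.g. install a patch (placeholder name).
/opt/drd/bin/drd runcmd swinstall -s /var/depots/patches PHCO_12345

# Restore nsswitch.conf on the clone, then unmount it.
mv /var/tmp/nsswitch.conf.drd /var/opt/drd/mnts/sysimage_001/etc/nsswitch.conf
/opt/drd/bin/drd umount
```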
Cloning problems
Issue
occasional failure on fully allocated file systems
In some cases where a file system is nearly full, drd clone fails with the following frecover
messages:
Workaround
To resolve, free up some space in the nearly full source file system.
Status: Alternative copy mechanisms are being investigated. (QXCR1000963803)
Issue
failure cloning, sconf* line 7: syntax error
DRD (Dynamic Root Disk) can abort during a clone attempt if instances or disk devices have been
renumbered, and multiple agile device files exist for the same device. The following error can
occur:
======= 01/07/11 14:09:58 MST BEGIN Clone System Image Preview (user=root)
(jobid=iuxonx1)
======= 01/07/11 14:11:45 MST END Clone System Image Preview failed with 1
error. (user=root) (jobid=iuxonx1)
Workaround
When this issue is encountered during a clone or clone preview, and ioscan shows multiple agile
device files for the same device, remove the stale device files in /dev/disk and /dev/rdisk
using rmsf, or rm if they are busy. On LVM systems, you may need to rebuild lvmtab using vgscan
-a if lvlnboot -v reports a stale device.
Issue
Dedicated dump devices not set up correctly on clone
After booting the clone, dmesg can show errors if dump devices were set up using crashconf on
the original system:
...
Persistent dump device list:
64:0x2
...
Workaround
This issue can be worked around by switching modes from config_crashconf_mode to
config_deprecated_mode using crashconf -o on the original system, prior to cloning. Any
dedicated dump devices will also need to be added to fstab prior to cloning.
Status: A fix will be available in a future DRD (Dynamic Root Disk) release. (QXCR1001101295)
======= 10/01/12 12:16:09 IST BEGIN Clone System Image (user=root) (jobid=iuxonx6)
* Reading Current System Information
ERROR: System information retrieval fails.
- Gathering system configuration information fails with the following error(s):
- The command "/opt/drd/lbin/drd_save_config" fails with the return code 2. The error message from the command
is
"drd_save_config - running in drd context
drd_save_config: Error - disk not in the volume/disk group"
* Reading Current System Information failed with 1 error.
* DRD operation failed, contents of /var/opt/drd/tmp copied to /var/opt/drd/save.
Workaround
There is no resolution to this case. DRD (Dynamic Root Disk) does not support whole-disk layout.
Workaround
There is no resolution to this case. DRD (Dynamic Root Disk) does not support cloning of VxVM
volumes with special characters in their names.
Workaround
You must change the naming scheme set on the system to the OSN (Operating System Naming)
scheme if it was set to the EBN scheme. See the following:
1. Verify whether the naming scheme is set to the EBN scheme.
/usr/sbin/vxddladm get namingscheme
2. Change the naming scheme to OSN, if it was set to EBN, and run the drd clone operation.
/usr/sbin/vxddladm set namingscheme=osn mode=new