
HP Enterprise Backup Solution design guide

Abstract
This guide describes how to use the HP Enterprise Backup Solution (EBS) to design a data protection solution.

HP Part Number: 5697-7309
Published: June 2012
Edition: Eleventh

Copyright 2012 Hewlett Packard. Confidential computer software. Valid license from HP required for possession, use or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor's standard commercial license. The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.

Contents
1 Overview.......................................................................5
    Supported components.........................................................6
    Supported topologies.........................................................6
        Direct-attach parallel SCSI..............................................6
        Serial-attach SCSI (SAS).................................................6
        Point-to-point...........................................................6
        Switched fabric..........................................................6
    Platform and operating system support........................................6
    Use of native backup programs and commands...................................7
2 Zoning.........................................................................8
    Zone recommendations.........................................................8
3 Configuration and operating system details.....................................9
    Nearline configuration information...........................................9
    SAN utilities that may disrupt Nearline connectivity.........................9
    HP-UX.......................................................................10
        Initial requirement (HP-UX 11.23 on IA-64 and PA-RISC)..................10
        Initial requirement (HP-UX 11.31 on IA-64 and PA-RISC)..................11
        HP-UX 11.31 can experience poor I/O performance on VxFS file systems
        due to memory blocking during high system memory usage..................12
            Poor I/O performance resolution.....................................12
        HP-UX 11.23: Disabling rewind-on-close devices with st_san_safe.........13
        Configuring the SAN.....................................................13
            Final host configurations...........................................13
            Installation checklist..............................................14
    Windows Server and Windows Storage Server...................................14
        Configuring the SAN.....................................................15
            Installing the HBA device driver (Windows Server 2008/2003).........15
            Storport considerations.............................................15
            Installation checklist..............................................15
        Windows 2003 known issues...............................................15
            Target and LUN shifting.............................................15
            Interop issues with Microsoft Windows persistent binding
            for tape LUNs.......................................................16
            Tape drive polling..................................................16
            Library slot count/Max Scatter Gather List issue....................17
            Tape.sys block size issue...........................................17
            Emulex SCSIport driver issue........................................17
            FC interface controller device driver issue.........................18
            "Not Enough Server Storage is Available to Process this Command"
            network issue.......................................................18
            Updating the HP Insight Management Agents for Microsoft Windows
            using the ProLiant Support Pack Version 7.70 (or later).............18
        NAS and ProLiant Storage Server devices using Microsoft Windows
        Storage Server 2003.....................................................19
            Known issues with NAS and ProLiant Storage Servers..................19
    Tru64 UNIX..................................................................19
        Backup software patch...................................................20
        Configuring the SAN.....................................................20
        Confirming mapped components............................................21
        Installed and configured host bus adapters..............................21
        Visible target devices..................................................21
        Configuring switch zoning...............................................21
        Installation checklist..................................................21
    Red Hat and SUSE Linux......................................................22
        Operating system notes..................................................22
        Installing HBA drivers and tools........................................22
        Additional SG device files..............................................23
        Installation checklist..................................................23
        Linux known issues......................................................24
            Rewind commands being issued by rebooted Linux hosts................24
            Tape devices not discovered and configured across server reboots....24
            Sparse files causing long backup times with some
            backup applications.................................................25
            LUNs shifting after reboot..........................................25
    Oracle Solaris..............................................................25
        Configuring the SAN.....................................................25
        Oracle Solaris native driver configuration..............................26
        Troubleshooting with the cfgadm utility.................................27
        QLogic driver configuration for QLA2340 and QLA2342.....................27
        Emulex driver configuration for LP10000 and LP10000DC...................28
        Configuring Oracle Servers for tape devices on SAN......................30
        Configuring switch zoning...............................................31
        Installation checklist..................................................31
    IBM AIX.....................................................................32
        Configuring the SAN.....................................................32
        IBM 6228, 6239, 5716, or 5759 HBA configuration.........................32
        Configuring switch zoning...............................................34
        Installation checklist..................................................34
        Installing backup software and patches..................................34
4 Backup and recovery of Virtual Machines......................................35
    HP EBS VMware backup and recovery strategy..................................35
    HP Integrity Virtual Machines (Integrity VM)................................36
    HP EBS Hyper-V backup and recovery strategy.................................36

1 Overview
The HP Enterprise Backup Solution (EBS) is an integration of data protection and archiving software with industry-standard hardware, providing a complete enterprise-class solution. HP has joined with leading software companies to provide software solutions that support the backup and restore of homogeneous and heterogeneous operating systems in a shared storage environment. The data protection solutions of EBS software partners incorporate database protection, storage management agents, and options for highly specialized networking environments.

Data protection and archiving software focuses on using an automated LTO Ultrium tape library, Virtual Tape, and/or NAS backup technologies. The EBS combines the functionality and management of a Storage Area Network (SAN), data protection software, and scaling tools to integrate tape and disk storage subsystems in the same SAN environment. Enterprise data protection can be accomplished with different target devices in various configurations, using a variety of transport methods such as the corporate communication network, a server's SCSI/SAS bus, FCoE (Fibre Channel over Ethernet), or a Fibre Channel infrastructure.

EBS typically uses a storage area network that provides dedicated bandwidth independent of the local area network (LAN). This independence allows single or multiple backup or restore jobs to run without placing data protection traffic on the corporate network. Depending on the data protection/archiving software used, submitted jobs run locally on the backup server to which the job was submitted. Data, however, is sent over the SAN to the backup/archive target, such as a tape library, rather than over the LAN. This achieves greater speed and reduces network traffic. Jobs and devices can be managed and viewed from the primary server or from any server or client connected within the EBS that has the supported data protection software installed. All servers within the EBS server group can display the same devices.

To implement an Enterprise Backup Solution:
1. Consult the HP Enterprise Backup Solutions Compatibility Matrix available at: http://www.hp.com/go/ebs.
2. Consult this EBS design guide for the EBS hardware configurations currently supported and for how to efficiently provide shared tape library backup in a heterogeneous SAN environment.
3. Install and configure the backup application or backup software. Recommendations for individual backup applications and software may be found in separate implementation guides.

For more information about EBS, go to http://www.hp.com/go/ebs.

Supported components
For complete EBS configuration support information, refer to the HP Enterprise Backup Solutions Compatibility Matrix located at: http://www.hp.com/go/ebs. You can also refer to the specific software vendor's documentation.

Supported topologies
The following topologies are supported:

Direct-attach parallel SCSI


Direct-attach SCSI (DAS) is still a common form of attachment to tape drives. Direct-attach SCSI allows a server to communicate directly to the given target device over a SCSI cable. Parallel SCSI devices are not supported in a multi-hosting configuration and are not covered in this document.

Serial-attach SCSI (SAS)


The Serial-attach SCSI (SAS) interface is the successor technology to the parallel SCSI interface, designed to bridge the gap in performance, scalability, and affordability. SAS combines high-end features from Fibre Channel (such as multi-initiator support and full-duplex communication) and the physical interface leveraged from SATA (for better compatibility and investment protection) with the performance, reliability, and ease-of-use of traditional SCSI technology. SAS devices are supported in limited multi-hosting configurations (for example, HP blade systems) and are not covered in this document.

Point-to-point
Point-to-point, or direct-attach fibre (DAF), connections are direct Fibre Channel connections made between two nodes, such as a server and an attached tape library. This configuration requires no switch to implement. It is very similar to the SCSI bus model, in that the storage devices are dedicated to a server. Private loop is also available and is often used by default for a DAF link.

Switched fabric
A switched fabric topology is a network topology in which network nodes connect with each other through one or more network switches.

NOTE: SAS switches are also implemented, typically on a smaller scale, as an option within HP blade servers.

NOTE: HP Virtual Connect FlexFabric switches are used as an option in C-series blade enclosures to support FCoE connectivity to the switched fabric. For standalone servers, Converged Network Adapters are used to connect to the fabric through FCoE fabric switches.

Platform and operating system support


Library sharing in a heterogeneous environment is supported. All platforms may be connected through one or more switches to a tape library. The switches do not need to be separated by operating system type, nor do they have to be configured with separate zones for each operating system. The host server needs to detect all of the tape and robotic devices intended to be used; shared access to tape drives is handled by the backup application software running on each host. While some operating systems found in enterprise data centers may not be supported on the storage network by some data protection applications, these servers can still be backed up as clients over the LAN in a supported configuration.


Use of native backup programs and commands


A limited number of backup programs and commands that are native to a particular operating system are verified for basic functionality with SCSI direct-attached tape drives and autoloaders only. Tape libraries and virtual library systems are not tested. These programs and commands are limited in their ability to handle complicated backups and restores in multi-host storage area networks (SANs), and they are not guaranteed to provide robust error handling or performance throughput. Using these programs or commands in a user-developed script is not recommended with tape libraries in an Enterprise Backup Solution shared storage environment. Refer to the HP Enterprise Backup Solutions Compatibility Matrix at http://www.hp.com/go/ebs for a list of tested and supported applications that are specifically designed for backup and restore operations. The following table shows the native utilities tested on each operating system (OS):

Table 1 Supported native utilities

Utility                          HP-UX   Linux   Windows
Tape drive commands
  Tar                            Yes     Yes     No
  DD (dump)                      Yes     Yes     No
  Pax                            Yes     Yes     No
  Mt                             Yes     Yes     No
  Make tape recovery (BFT)*      Yes     No      No
  NT backup                      No      No      Yes
Library and autochanger commands
  Mc                             Yes     No      No
  Mtx                            No      Yes     No
  RSM                            No      No      Yes

* Make Tape Recovery: refer to the Ignite-UX Administration Guide for HP-UX 11i for details of platform requirements and interface type support.
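As a quick illustration of the tape-drive commands in Table 1, the sketch below exercises tar on a Linux or HP-UX host. An ordinary file stands in for the tape so the example is self-contained; the tape device paths shown in the comments (/dev/st0, /dev/nst0) are common Linux conventions and are assumptions, not taken from this guide.

```shell
# Minimal native-utility backup sketch using tar, with an ordinary file
# standing in for a tape device. On a real Linux host the archive target
# would be a tape device file such as /dev/st0 (no-rewind: /dev/nst0).
mkdir -p /tmp/ebs_demo
echo "sample payroll data" > /tmp/ebs_demo/file1.txt

# Write the archive (tape equivalent: tar -cvf /dev/st0 -C /tmp ebs_demo)
tar -cf /tmp/ebs_demo.tar -C /tmp ebs_demo

# List the archive contents to verify the backup
# (tape equivalent: mt -f /dev/nst0 rewind && tar -tvf /dev/st0)
tar -tf /tmp/ebs_demo.tar
```

As the table and surrounding text note, such scripts are suitable only for direct-attached drives; in a shared-storage EBS environment, use a supported backup application instead.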


2 Zoning
Zone recommendations
Due to the complexities of multi-hosting tape devices on SANs, it is best to use zoning tools to keep the backup/restore environment simple and less susceptible to the effects of changing or problematic SANs. Zoning provides a way for servers, disk arrays, and tape controllers to see only the hosts and targets they need to use. The benefits of zoning include, but are not limited to:
• The potential to greatly reduce target and LUN shifting
• Reducing stress on backup devices caused by polling agents
• Reducing the time it takes to debug and resolve anomalies in the backup/restore environment
• Reducing the potential for conflict with untested third-party products

Zoning may not always be required for configurations that are already small or simple. Typically, the bigger the SAN, the more zoning is needed. HP recommends the following for determining how and when to use zoning:
• Use host-centric zoning. Host-centric zoning is implemented by creating a specific zone for each server or host by World Wide Port Name, and adding only those storage elements to be utilized by that host. Host-centric zoning prevents a server from detecting any other devices or servers on the SAN, and it simplifies the device discovery process.
• Disk and tape on the same HBA is supported. For larger SAN environments, it is recommended to also add storage-centric zones, created as overlapping zones with disk and backup targets separated, as shown in Figure 1.
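The host-centric scheme described above can be scripted so that the zoning commands are generated and reviewed before being applied on the switch. The sketch below emits Brocade Fabric OS-style zonecreate/cfgadd/cfgenable commands; all WWPNs, the host name, and the configuration name are hypothetical placeholders to be replaced with values from your own fabric.

```shell
# Generate host-centric zoning commands (Brocade Fabric OS style) for review.
# All WWPNs and names below are hypothetical placeholders.
HOST_NAME="backupsrv1"
HOST_WWPN="10:00:00:00:c9:12:34:56"
TAPE_WWPNS="50:01:10:a0:00:8b:4e:2a 50:01:10:a0:00:8b:4e:2b"

# Build the zone member list: the host WWPN plus its tape targets.
MEMBERS="$HOST_WWPN"
for wwpn in $TAPE_WWPNS; do
    MEMBERS="$MEMBERS; $wwpn"
done

ZONE_CMDS=$(printf 'zonecreate "%s_tape", "%s"\ncfgadd "ebs_cfg", "%s_tape"\ncfgenable "ebs_cfg"\n' \
    "$HOST_NAME" "$MEMBERS" "$HOST_NAME")
echo "$ZONE_CMDS"
```

Generating one such zone per host keeps each server's view of the SAN limited to exactly the tape targets it needs.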

Figure 1 Zoning disk and tape

NOTE: Overlapping zones are supported.

NOTE: For HP StoreOnce Backup Systems, zoning by World Wide Port Name is recommended, rather than by port ID or port number.


3 Configuration and operating system details


Nearline configuration information
The Enterprise Backup Solution (EBS) supports several operating systems (OSs). This section provides an overview of configuring the EBS with the following OSs:
• HP-UX
• Microsoft Windows
• NAS or ProLiant Storage Server devices using Microsoft Windows Storage Server 2003 and 2008
• Linux
• Oracle Solaris
• IBM AIX

SAN utilities that may disrupt Nearline connectivity


Some software products commonly found in SAN environments can interfere with the normal functioning of backup and restore operations. These applications include system management agents, monitoring software, and a wide range of tape drive and system configuration utilities. A list of known applications and the operating systems on which they are found is shown below. This list is not intended to be exhaustive.

Windows (all versions):
• RSM polling (this must always be disabled for tape and library devices)
• QLogic QConvergeConsole (QCC)/SANsurfer (HBA configuration utilities)
• Emulex OneCommand Manager (OCM)/HBAnyware (HBA configuration utilities)
• Brocade Host Connectivity Manager (HCM) (HBA configuration utility)
• HP Systems Insight Manager (management agents)
• HP Library & Tape Tools (tape utilities)

Linux/UNIX:
• mt commands (Linux/UNIX)
• mtx commands (Linux)
• Oracle Explorer (Solaris)
• EMS tape polling and the media changer applet (HP-UX)

When run concurrently with backup or restore operations, these applications, utilities, and commands have been shown to interfere with components in the data path. Some specific recommendations are:
• SCSI Reserve and Release: If your backup application supports the use of SCSI reserve and release, enable and use it. Reserve and release can prevent unwanted applications or commands from taking control of a device.
• SAN zoning: EBS recommends host-based SAN switch zoning by WWPN. When zoning is employed, these applications are much less likely to interfere with tape device operation.


HP-UX
The configuration process for HP-UX involves:
• Upgrading essential EBS hardware components to meet the minimum firmware and device driver requirements.
• Installing the minimum patch level support. Go to the following website to obtain the necessary patches: http://www.hp.com/support

NOTE: See the HP Enterprise Backup Solutions Compatibility Matrix for all current and required hardware, software, firmware, and device driver versions, including Hardware Enablement Kits and Quality Packs, on the HP website: http://www.hp.com/go/ebs.

NOTE: See the installation checklist at the end of this section to ensure all of the hardware and software is correctly installed and configured in the SAN.

Initial requirement (HP-UX 11.23 on IA-64 and PA-RISC)


HP currently supports HP-UX 11.23 in an EBS environment using an HP AB465A, A9782A, A9784A, A5158A, A6795A, A6826A, AB378A/AB378B, AB379A/AB379B, AD193A, AD194A, AD300A, AD299A, AD355A, or QMH2462 FC HBA. Contact HP or your HP reseller for information on how to acquire these cards. The following OS software bundle contains the drivers for the A5158A and A6795A adapters:
• FibrChanl-00 B.11.23.0803 HP-UX (B.11.23 IA PA) and all patches the bundle requires per bundle installation instructions

The following OS software bundles contain the drivers for the A6826A, A9782A, A9784A, AB378A, AB379A, AB465A, AD193A, AD194A, AD300A, and QMH2462 adapters:
• FibrChanl-01 B.11.23.0802 HP-UX (B.11.23 IA PA) and all patches the bundle requires per bundle installation instructions
• FibrChanl-02 B.11.23.0712 HP-UX (B.11.23 IA) and all patches the bundle requires per bundle installation instructions

The following OS software bundles contain the drivers for the AD299A and AD355A adapters:

Patches and installation instructions are provided at the HP-UX support website: http://h20566.www2.hp.com/portal/site/hpsc

NOTE: Accessing this site requires registration.

After the hardware is installed, proceed with the following steps:

NOTE: QMH2462 adapter support will not be listed using the swlist utility; however, the current FibrChanl-01 bundle does support the adapter.

1. The drivers stape, sctl, and schgr must all be installed in the kernel. To see if these drivers are installed, enter the following command:
# /usr/sbin/kcmodule schgr sctl stape
The following example shows output from kcmodule where the stape driver is not installed:
Module     State     Cause
schgr      static    explicit
sctl       static    depend
stape      unused
estape     static    best

If one or more of the above drivers is in the unused state, they must be installed in the kernel. If they are all installed (static state), proceed to the next section, Configuring the SAN.
2. Use kcmodule to install modules in the kernel. For example, to install the stape module:
# /usr/sbin/kcmodule stape=static
Enter Yes to back up the current kernel configuration file and initiate the new kernel build.
3. Reboot the server to activate the new kernel.
# cd /
# /usr/sbin/shutdown -r now
The HP-UX 11i v2 Quality Pack (QPK1123) December 2010 (B.11.23.1012.086a) and Hardware Enablement Pack (HWEnable11i) December 2010 (B.11.23.1012.085a) contain the required software bundles. These patches and installation instructions are provided at the HP website: http://h20566.www2.hp.com/portal/site/hpsc

NOTE: Accessing this site requires registration.
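The driver check in step 1 can be automated. The sketch below filters kcmodule-style output and reports any module that is not in the static state; the kcmodule output is hard-coded as a sample so the logic can be followed on any host, and on a real HP-UX system the function body would instead run /usr/sbin/kcmodule schgr sctl stape estape.

```shell
# Flag kernel modules that are not statically built into the kernel.
# Sample kcmodule output is embedded for illustration only.
kcmodule_output() {
cat <<'EOF'
Module     State    Cause
schgr      static   explicit
sctl       static   depend
stape      unused
estape     static   best
EOF
}

# Skip the header line; print the name of any module whose State is not static.
MISSING=$(kcmodule_output | awk 'NR > 1 && $2 != "static" { print $1 }')
if [ -n "$MISSING" ]; then
    echo "Install with 'kcmodule <name>=static', then reboot: $MISSING"
else
    echo "All required tape drivers are static in the kernel."
fi
```

With the sample output above, the filter reports stape as the module that still needs to be installed, matching the manual check described in the text.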

Initial requirement (HP-UX 11.31 on IA-64 and PA-RISC)


HP currently supports HP-UX 11.31 in an EBS environment using an HP AB465A, A9782A, A9784A, A6795A, A6826A, AB378A/AB378B, AB379A/AB379B, AD193A, AD194A, AD300A, AD299A, AD355A, AD221A, AD222A, AD393A, QMH2462, or LPe1105 FC HBA. Contact HP or your HP reseller for information on how to acquire these cards. The following OS software bundle is required for the A6795A and A5158A adapters:
• FibrChanl-00 B.11.31.1003
The following OS software bundles are required for the AB465A, A9782A, A9784A, A6826A, AB378A/AB378B, AB379A/AB379B, AD193A, AD194A, and AD300A adapters:
• FibrChanl-01 B.11.31.1203
• FibrChanl-03 B.11.31.1203
• FibrChanl-04 B.11.31.1203

The following OS software bundle is required for the AD299A, AD355A, AD221A, AD222A, AD393A, AH402A, AH403A, 403621-B21, and 456972-B21 adapters:
• FibrChanl-02 B.11.31.0809 HP-UX (B.11.31 IA PA) and all patches the bundle requires per bundle installation instructions

Patches and installation instructions are provided at the HP-UX support website: http://h20566.www2.hp.com/portal/site/hpsc

After the hardware is installed, proceed with the following steps:
1. Note that LPe1105 adapter support will not be listed using the swlist utility; however, the current FibrChanl-01 and FibrChanl-02 bundles do support the adapters.
2. The drivers stape, sctl, and schgr must all be installed in the kernel. To see if these drivers are installed, enter the following command:
# /usr/sbin/kcmodule sctl schgr eschgr stape estape


The following example shows output from kcmodule where the stape driver is not installed:
Module     State     Cause
sctl       static    best
schgr      static    best
eschgr     static    best
stape      unused
estape     static    best

If one or more of the above drivers is in the unused state, they must be installed in the kernel. If they are all installed (static state), proceed to the next section, Configuring the SAN.
3. Use kcmodule to install modules in the kernel. For example, to install the stape module:
# /usr/sbin/kcmodule stape=static
Enter Yes to back up the current kernel configuration file and initiate the new kernel build.
4. Reboot the server to activate the new kernel.
# cd /
# /usr/sbin/shutdown -r now
The HP-UX 11i Version 3 March 2012 release contains the current software bundles. These patches and installation instructions are provided at the HP website: http://software.hp.com

HP-UX 11.31 can experience poor I/O performance on VxFS file systems due to memory blocking during high system memory usage
The HP-UX 11.31 kernel, subsystems, and file I/O data cache can consume up to 90 percent of system memory during normal operation. When a heavy file I/O application such as a data protection application starts, memory usage can reach close to 100 percent. In such conditions, if VxFS attempts to allocate additional memory for inode caching, this can result in memory blocking and subsequent poor file I/O performance. In extreme conditions, this scenario can cause data protection applications to time out during file system reads, which could result in backup job failures.

Poor I/O performance resolution


To avoid backup job failures due to memory blocking, modify the kernel tunable parameter vx_ninode. The vx_ninode parameter determines the number of inodes in the inode table that VxFS uses for caching. By default, the size of the inode cache is decided (auto-tuned) at boot time by VxFS based on the amount of physical memory in the machine. When modifying the value of vx_ninode, HP recommends the following:
Physical memory or kernel available memory    VxFS inode cache (number of inodes)
1 GB                                          16384
2 GB                                          32768
3 GB                                          65536
> 3 GB                                        131072

To determine the current value of vx_ninode, run the following command at the shell prompt:
# /usr/sbin/kctune vx_ninode

To set vx_ninode to 32768, run the following command at the shell prompt:
# /usr/sbin/kctune vx_ninode=32768

NOTE: The kernel tunable parameters filecache_min and filecache_max control the amount of physical memory that can be used for caching file data during system I/O operations. By default, these parameters are automatically determined by the system to better balance memory usage among file system I/O intensive processes and other types of processes. The values of these parameters can be lowered to allow a larger percentage of memory to be used for purposes other than file system I/O caching. Whether to modify these parameters depends on the nature of the applications running on the system.
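The sizing table above maps directly onto a small helper. The sketch below returns the recommended vx_ninode value for a given amount of memory in GB; the function name is our own, and on HP-UX the chosen value would then be applied with /usr/sbin/kctune vx_ninode=<value> as shown in the text.

```shell
# Return the recommended vx_ninode value for a given physical (or kernel
# available) memory size in GB, per the sizing table above.
recommended_vx_ninode() {
    mem_gb=$1
    if   [ "$mem_gb" -le 1 ]; then echo 16384
    elif [ "$mem_gb" -le 2 ]; then echo 32768
    elif [ "$mem_gb" -le 3 ]; then echo 65536
    else                           echo 131072
    fi
}

# Example: a 2 GB host maps to an inode cache of 32768.
recommended_vx_ninode 2
```

On HP-UX the result would feed directly into kctune, for example: /usr/sbin/kctune vx_ninode=$(recommended_vx_ninode 2).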

HP-UX 11.23: Disabling rewind-on-close devices with st_san_safe


Turning on the HP-UX 11.23 kernel tunable parameter st_san_safe disables tape device special files that are rewind-on-close. This prevents utilities like mt from rewinding a tape that is in use by another utility. Some applications or utilities require rewind-on-close device special files (for example, the frecover utility that comes with HP-UX); in this case, disabling rewind-on-close devices renders the utility unusable. Most data protection applications, such as HP Data Protector, can be configured to use SCSI reserve/release, which protects them from rogue rewinds by other utilities. Consider the requirements of the data protection environment when determining whether to enable st_san_safe. To determine if rewind-on-close devices are currently disabled, enter:
# /usr/sbin/kctune st_san_safe
If the value of st_san_safe is 1, rewind-on-close devices are disabled. If the value is 0, they are enabled. To disable rewind-on-close devices, enter:
# /usr/sbin/kctune st_san_safe=1
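The check-then-set sequence above can be wrapped in a short script. The kctune output below is a hard-coded sample (with an assumed column layout) so the parsing runs anywhere; on HP-UX the function body would instead run /usr/sbin/kctune st_san_safe.

```shell
# Read the current st_san_safe value and report what, if anything, to do.
# Sample kctune output is embedded for illustration only.
kctune_output() {
cat <<'EOF'
Tunable       Value  Expression
st_san_safe   0      Default
EOF
}

# Extract the Value column for the st_san_safe tunable.
ST_SAN_SAFE=$(kctune_output | awk '$1 == "st_san_safe" { print $2 }')
if [ "$ST_SAN_SAFE" -eq 1 ]; then
    echo "Rewind-on-close device files are disabled."
else
    echo "Rewind-on-close device files are enabled; disable with: kctune st_san_safe=1"
fi
```

As the text cautions, check first whether any utility in use (such as frecover) depends on rewind-on-close device files before setting st_san_safe=1.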

Configuring the SAN


Set up the qualified tape library and router. See the documentation provided with each Storage Area Network (SAN) component for additional component setup and configuration information. Due to current issues with the fcparray driver within HP-UX, HP recommends that no SCC LUN be set to 0 on the router.

Final host configurations

When the preliminary devices and the appropriate drivers listed earlier are installed and the SAN configuration is complete, the host should see the devices presented to it.
1. Run ioscan to verify that the host detects the tape devices:
# ioscan
2. After verifying that all devices are detected, check for device files assigned to each device. For HP-UX 11.31 legacy device special files (DSFs), run the following commands:
# ioscan -fnkC tape
# ioscan -fnkC autoch
3. For HP-UX 11.31 persistent DSFs, run the following commands:
# ioscan -fnNkC tape
# ioscan -fnNkC autoch
NOTE: Some data protection products might not currently support HP-UX 11.31 persistent DSFs for tape. See the data protection product documentation for more information.
4. If no device files have been installed, enter the following commands:
# insf -C tape -e
# insf -C autoch -e
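The ioscan checks above can be summarized with a quick count of CLAIMED tape devices, so that the number found matches the number of drives zoned to the host. The ioscan lines below are illustrative samples in an assumed layout, not output from a specific system; on HP-UX, pipe the real output of ioscan -fnkC tape through the same filter.

```shell
# Count CLAIMED tape devices in ioscan-style output.
# Sample lines are embedded for illustration only.
ioscan_sample() {
cat <<'EOF'
tape  0  0/2/1/0.1.2.0.0.0.0  stape  CLAIMED  DEVICE  HP  Ultrium 5-SCSI
tape  1  0/2/1/0.1.2.0.0.0.1  stape  CLAIMED  DEVICE  HP  Ultrium 5-SCSI
EOF
}

CLAIMED=$(ioscan_sample | grep -c 'CLAIMED')
echo "CLAIMED tape devices: $CLAIMED"
```

If the count is lower than expected, revisit the zoning and router mapping items in the installation checklist below before rerunning insf.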

HP-UX


Installation checklist

With a complete SAN configuration, review the questions below to ensure that all components on the SAN are logged in and configured properly.
Are all hardware components at the minimum supported firmware revision, including: server, HBA, Fibre Channel switch, Fibre Channel to SCSI router, Interface Manager, Command View TL, tape drives, library robot?
Are all recommended HP-UX patches, service packs, quality packs, or hardware enablement bundles installed on the host?
Is the minimum supported HBA driver loaded on the host?
Are all tape and robotic devices mapped, configured, and presented to the host from the Fibre Channel to SCSI router or Interface Manager?
Is the tape library online?
Is the Fibre Channel to SCSI router correctly logged into the Fibre Channel switch?
Is the host HBA correctly logged into the Fibre Channel switch?
If multiple Fibre Channel switches are cascaded or meshed, are all ISL ports correctly logged in?
If using zoning on the Fibre Channel switch, are the server HBA and the tape library's Fibre Channel to SCSI router in the same switch zone (either by WWN or by switch port)?
If using zoning on the Fibre Channel switch, has the zone been added to the active switch configuration?

Windows Server and Windows Storage Server


This section provides instructions for configuring Windows Server 2008 and Windows Server 2003 in an Enterprise Backup Solution (EBS) environment. Windows Storage Server is often the operating system used to build network attached storage (NAS) servers. While this is a modified build of Windows, it operates in the same manner as other Windows servers in an EBS environment. The storage servers have limitations on how much they can be changed. Refer to the storage server installation and administration guides for more information: http://www.hp.com/go/servers

The configuration process involves:
Upgrading essential EBS hardware components to meet the minimum firmware and device driver requirements.
Installing the minimum patch/service pack level support for:

Windows Server 2008/2003 on 32- and 64-bit platforms
Data Protection software

See the following websites to obtain the necessary patches: For HP: http://www.hp.com/support For Microsoft: http://www.microsoft.com NOTE: See the HP Enterprise Backup Solutions Compatibility Matrix for all current and required hardware, software, firmware, and device driver versions at http://www.hp.com/go/ebs. See the Installation Checklist at the end of this section to ensure proper installation and configuration of the hardware and software in the SAN.


Configuration and operating system details

Configuring the SAN


This procedural overview provides the necessary steps to configure a Windows Server 2008/2003 host into an EBS. See the documentation provided with each storage area network (SAN) component for additional component setup and configuration information.

NOTE: To complete this installation, log in with administrator privileges.

Installing the HBA device driver (Windows Server 2008/2003)


Obtain the appropriate Smart Component driver install package from http://www.hp.com. Double-click the .exe file to install the driver. A reboot might be necessary after the driver installation.

Storport considerations
EBS supports Storport configurations with the Brocade, Emulex and QLogic Storport mini-port drivers. Prior to installing the Storport HBA driver, the Storport storage driver (storport.sys) must be updated. Check the HP Enterprise Backup Solutions Compatibility Matrix for the currently supported version. CAUTION: Failure to upgrade the Storport storage driver prior to installing the HBA mini-port driver may result in system instability.

Installation checklist
To ensure that all components on the SAN are logged in and configured properly, review the following questions:
Are all hardware components at the minimum supported firmware revision, including: server, HBA, Fibre Channel switch, Fibre Channel to SCSI/FC router, Interface Manager, Command View TL, tape drives, library robot?
Are all recommended Windows Server 2008/2003 patches and service packs installed on the host?
Is the minimum supported HBA driver loaded on the Windows server?
Are the tape library/VLS/D2D/NAS targets online?
Is the host HBA correctly logged into the Fibre Channel switch?
If using zoning on the Fibre Channel switch, are the server HBA and the tape library's Fibre Channel port(s) in the same switch zone (by WWN or PWWN)?
Is the Removable Storage Manager (RSM) in Windows Server disabled properly? (See Windows 2003 known issues for more information on RSM.)
Is the Test Unit Ready (TUR) command stopped? (See Windows 2003 known issues for more information on TUR.)
Has connectivity and performance been tested using HP Library and Tape Tools (L&TT)?

Windows 2003 known issues


Target and LUN shifting
Device binding can be helpful in resolving issues where device targets, and sometimes even LUNs, shift. For operating systems such as Windows and HP-UX, issues can arise when a given target or LUN changes in number. This can be caused by something as simple as plugging or unplugging another target (typically a disk or tape device) into the SAN. In most cases this can be controlled through the use of good zoning and/or persistent binding.


The Windows operating system can still have issues even when zoning and binding are used. Many of these issues are caused by the way that Windows enumerates devices. Windows enumerates devices as they are discovered during a scan sequence, assigning device handles such as TAPE0, TAPE1, and so on. The Windows device scan sequence goes in the order of bus, target, and LUN: bus is the HBA PCI slot, target represents a WWN, and LUN represents a device behind the WWN. The order is the lowest bus, then a target and its LUNs, then the next target, until no more are found on that HBA; then the next HBA, and its targets and LUNs. A common cause of device shifting is a tape device that is busy and cannot respond in time for the OS to enumerate it; each device after it shifts up a number.

Note the target persistency in the Emulex OneConnect Manager utility or QLogic's utility. The same applies to LUN binding in the Emulex full port driver utility. Some backup applications communicate with the tape device by using the Windows device name. As noted, the device name may shift and cause a problem for the backup application. Some applications monitor for this condition and adjust accordingly; others must wait for a reboot and scan of devices, or the application must be manually reconfigured to match the current device list. Neither of the binding utilities affects Windows' device enumeration.

NOTE: Some vendor applications use device serialization and are not affected by LUN shifting.
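The bus, then target, then LUN scan order described above amounts to a numeric sort on (bus, target, LUN) triples, with handles assigned in the resulting order. The sketch below mimics that enumeration with sample triples (the triples and output format are illustrative, not Windows internals):

```shell
# Simulate Windows-style enumeration: sort (bus, target, LUN) triples
# and number the device handles in the resulting order.
printf '%s\n' "0 1 0" "0 1 1" "0 2 0" "1 0 0" |
  sort -n -k1,1 -k2,2 -k3,3 |
  awk '{ printf "TAPE%d -> bus %s target %s lun %s\n", NR-1, $1, $2, $3 }'
# TAPE0 -> bus 0 target 1 lun 0
# TAPE1 -> bus 0 target 1 lun 1
# TAPE2 -> bus 0 target 2 lun 0
# TAPE3 -> bus 1 target 0 lun 0
```

Drop the "0 1 1" line (a busy drive that missed the scan) and rerun: every later device's handle shifts down by one, which is exactly the shifting a backup application keyed to device names would see.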

Interop issues with Microsoft Windows persistent binding for tape LUNs
Windows Server 2003 provides the ability to enable persistence of symbolic names assigned to tape LUNs by manually editing the Windows registry. Symbolic name persistence means that tape devices are assigned the same symbolic name across reboot cycles, regardless of the order in which the operating system actually discovers the device. This feature was originally released by Microsoft as a stand-alone patch and was later incorporated into SP1. For more information, go to http://www.microsoft.com/ and search for support article KB873337. The persistence registry key is as follows:
HKey_Local_Machine\System\CurrentControlSet\Control\Tape\Persistence
NOTE: Persistence=1: symbolic tape names are persistent

Persistence=0: symbolic tape names are not persistent

Persistence is disabled by default. When you enable persistence, symbolic tape names (also referred to as logical tape handles) change significantly. For example, \\.\Tape0 becomes \\.\Tape2147483646. It is not possible to configure the new symbolic tape name. Some applications are unable to correctly recognize and configure devices that have these longer persistent symbolic names. As a workaround, persistent binding of Fibre Channel port target IDs, enabled through the Fibre Channel host bus adapter utilities (such as Emulex OneConnect Manager, Brocade HCM, or QLogic QCC), can provide some benefit. Target ID binding ensures that targets are presented in a consistent manner but cannot guarantee consistent presentation of symbolic tape names.
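As a side note on the example handle above: 2147483646 is 2^31 - 2, so the persistent names sit just below the 32-bit signed integer maximum rather than counting up from zero (an observation about the number itself, not documented Microsoft behavior):

```shell
# 2147483646, the value embedded in \\.\Tape2147483646, is 2^31 - 2:
echo $(( (1 << 31) - 2 ))   # 2147483646
```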

Tape drive polling


The Windows Removable Storage Manager service (RSM) polls tape drives on a frequent basis. Windows built-in backup software (NTBACKUP) relies on the RSM polling to detect media changes in the tape drive. In SAN configurations, this RSM polling can have a significant negative impact on tape drive performance.


NOTE:

For SAN configurations, HP strongly recommends disabling RSM polling.

NOTE: Removable Storage Manager is no longer available as of Windows 7 and Windows Server 2008 R2.

Library slot count/Max Scatter Gather List issue


The Max Scatter Gather List parameter for the Emulex driver must be changed for hosts with a library that has a slot count greater than 2000. The recommended fix is to set the Windows registry parameter MaximumSGList to a value of 0x40 (64) or greater, which allows the transfer of larger blocks of data. This may already be the default in newer driver revisions.

Tape.sys block size issue


At the end of March 2005, Microsoft released Service Pack 1 (SP1) for the Windows Server 2003 platform. With SP1, Microsoft changed the tape.sys driver to allow NTBackup tapes written on 64-bit Windows Server 2003 to be read or cataloged on 32-bit Windows Server 2003 systems. The change in the tape.sys driver limits data transfers to a block size of no greater than 64KB. Performance issues are more apparent on high-performance tape drives such as the HP Ultrium 960 and SDLT 600 using the tape.sys driver. As a result, backup applications using HP tape drivers (hplto.sys, hpdat.sys, hpdltw32.sys, hpdltx64.sys, and hpdltw64.sys) and the tape.sys driver may experience poor performance and/or failed backup jobs. If experiencing either of these symptoms, check with the software vendor to see whether the backup application is using the Microsoft tape driver, tape.sys. Microsoft released a hotfix that replaces the affected tape.sys file with a version that removes the 64KB limitation on block sizes; see http://support.microsoft.com/kb/907418/en-us. This hotfix was integrated into Windows 2003 Service Pack 2 (SP2).

Emulex SCSIport driver issue


A potential tape I/O performance issue has been discovered with Windows Server 2003 32-bit systems configured with Emulex SCSIPort mini-port HBA drivers. This issue only affects backup applications using tape block sizes/transfer lengths exceeding 128KB. Emulex SCSIPort mini-port HBA drivers use a MaximumSGList entry in the Windows registry that defines the maximum data transfer length supported by the adapter for SCSI commands. Early Emulex drivers, version 5.5.20.8 and older, set this registry entry to 33 decimal (21 hexadecimal), limiting SCSI transfers to a maximum of 128KB. Beginning with version 5.5.20.9, this registry entry was increased to 129 decimal (81 hexadecimal), increasing SCSI transfers to 512KB. The issue surfaces when upgrading an installed Emulex SCSIPort mini-port driver from version 5.5.20.8 or earlier to version 5.5.20.10 or later (version 5.5.20.9 is exempt from this issue). During the upgrade, the existing MaximumSGList registry entry is not modified from 33 to 129. Because it remains at the lower value (33), the SCSI transfer length remains at 128KB, possibly affecting performance when large block sizes/transfer lengths are used. To resolve this issue, modify MaximumSGList in the registry as follows:

CAUTION: Using the Registry Editor incorrectly can cause serious, system-wide problems. Microsoft cannot guarantee that any problems resulting from the use of Registry Editor can be solved. Use this tool at your own risk. Back up the registry before editing.
1. Click Start > Run to open the Run dialog box.
2. Enter regedit to launch the Registry Editor.
3. Navigate to the following registry key:
\\HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\lpxnds\Parameters\Device\MaximumSGList
4. Change the REG_DWORD value from 33 to 129.
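The decimal/hex pairs and transfer limits quoted above are easy to sanity-check. The arithmetic below assumes the usual SCSIPort sizing rule on x86, where the maximum transfer is (MaximumSGList - 1) pages of 4 KB; that assumption matches the 128 KB and 512 KB figures in the text.

```shell
# Hex-to-decimal check of the two registry values:
printf '%d %d\n' 0x21 0x81        # 33 129

# Assumed SCSIPort sizing rule: (entries - 1) * 4 KB page size.
echo "$(( (33 - 1) * 4 ))KB"      # 128KB (old limit)
echo "$(( (129 - 1) * 4 ))KB"     # 512KB (new limit)
```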

FC interface controller device driver issue


In Windows environments, the Hardware Discovery Wizard detects the presence of the controller LUN, identifies it to the user as HP FC interface controller, and prompts for installation of a device driver. The FC interface controller does not require a device driver and is fully functional without one. However, until a device entry is created in the System Registry, the Windows operating system (OS) classifies it as an unknown device, and each time the server is booted, the Hardware Discovery Wizard prompts for installation of a driver file. A device information file (.inf) is available that installs a null device driver and creates a device entry in the System Registry. The .inf file is located on the HP website for your product, or by searching for HP_CPQ_router_6.zip (or a later version). By using this file, a storage router can be essentially 'registered' with the device manager once, minimizing user interaction.

"Not Enough Server Storage is Available to Process this Command" network issue


A network issue was observed where a backup application was unable to make a network connection to a remote network client. The error reported from Windows 2003, through the backup application, was Not enough server storage is available to process this command. This error can occur with various versions of Windows Server (NT, 2000, 2003) that also have Norton AntiVirus or IBM AntiVirus software installed. This is a known network issue documented in the Microsoft Knowledge Base article at http://support.microsoft.com/default.aspx?scid=kb;en-us;177078. To resolve the issue, a registry edit is required; review the steps detailed in the Microsoft Knowledge Base article.

Updating the HP Insight Management Agents for Microsoft Windows using the ProLiant Support Pack Version 7.70 (or later)
When updating the HP Insight Management Agents for Microsoft Windows using the ProLiant Support Pack Version 7.70 (or later), the Disable Fibre Agent Tape Support option is inadvertently unchecked by default. This occurs because the previous data in the registry is not saved during the software update from the Management Agents Version 7.60 (or earlier). Figure 2 shows the Disable Fibre Agent Tape Support option selected prior to the ProLiant Support Pack Version 7.70 (or later) update installation.

Figure 2 Updating the HP Insight Management Agents for Microsoft Windows using the ProLiant Support Pack Version 7.70 (or later)


Follow the link below to see the full advisory: http://h20000.www2.hp.com/bizsupport/TechSupport/Document.jsp?lang=en&cc=us&objectID=c01229672

NAS and ProLiant Storage Server devices using Microsoft Windows Storage Server 2003
Server storage differs from direct-attached storage in several ways. Stored data can have multiple sets of file attributes due to heterogeneous access (NFS, SMB, Novell, Macintosh). The storage server is typically not supported by a console. Storage server vendors are often forced to use a common backup protocol, NDMP, because backup applications do not support the underlying or customized storage server OS. Storage servers using Data Protection Manager (DPM) provide administrative tools for protecting and recovering data on the file servers in the network.

HP NAS and HP ProLiant Storage Server devices are built on Windows Storage Server (WSS) 2003. Backup applications supported by Windows 2003 also run on WSS 2003, and the terminal services on the Microsoft-based storage server support the backup application's GUI. The major backup vendors are actively testing their applications on the WSS framework. All tape devices (both SCSI- and FC-connected) supported by Windows systems are automatically available on Windows Storage Server 2003 storage server solutions. Because most storage servers are built with a specialized version of the OS, some of the device drivers may be outdated or unavailable. Updates to the storage server product from HP may have more current drivers; these updates are available for download from the HP server website. Newer tape device drivers are made available from hardware and software vendors and are used on these platforms. See the following website for the HP Enterprise Backup Solutions Compatibility Matrix and certified Windows device drivers: http://www.hp.com/go/ebs.

Known issues with NAS and ProLiant Storage Servers


Storage servers are highly dependent on networking resources to serve up data. Backup applications are also highly dependent on networking resources to establish communications with other backup servers to coordinate the usage of the tape libraries. At times this dependency on networking services can conflict, and the backup application may lose contact with the other servers, causing backups to fail. Take note of any extended networking resources used for storage servers that may be shared with backup, such as NIC teaming, and make sure that communications are not broken.

Tru64 UNIX
The configuration process for Tru64 UNIX can be fairly seamless. When firmware and driver revisions of the components are at minimum EBS-acceptable levels, the integration process is as simple as installing hardware and configuring devices to the SAN fabric. This is possible because Tru64 UNIX maintains driver and configuration parameters in the OS kernel and device database table. If new console firmware is available, however, HP recommends applying it to Tru64 UNIX servers in an EBS as outlined below.

NOTE: Refer to the HP Enterprise Backup Solutions Compatibility Matrix for all current and required hardware, software, firmware, and device driver versions at: http://www.hp.com/go/ebs. To ensure correct installation and configuration of the hardware, see Installation checklist (page 14).


Backup software patch


Refer to your backup software vendor to determine if any updates or patches are required.

Configuring the SAN


This procedural overview provides the necessary steps to configure a Tru64 UNIX host into an EBS. Refer to the documentation provided with each storage area network (SAN) component for additional component setup and configuration information.
1. Prepare the required rack-mounted hardware and cabling in accordance with the specifications listed in the backup software user guide as well as the installation and support documentation for each component in the SAN.
NOTE: Loading Console firmware from the Console firmware CD may also update the host bus adapter (HBA) firmware. This HBA firmware may or may not be the minimum supported by EBS. Refer to the HP Enterprise Backup Solutions Compatibility Matrix for minimum supported HBA firmware revisions.
2. Upgrade the AlphaServer to the latest released Console firmware revision. Refer to http://www.hp.com/support to obtain the latest Console firmware revision.
a. Boot the server to the chevron prompt (>>>).
b. Insert the Console firmware CD into the CD-ROM drive.
c. To see a list of all accessible devices, at the chevron prompt, type:
>>> show dev
d. Obtain the CD-ROM device filename from the device list. Where DQA0 is an example CD-ROM device filename for the CD-ROM drive, at the chevron prompt type:
>>> Boot DQA0
e. Complete all of the steps in the readme file, as noted in the message prompt.
f. If the minimum supported HBA firmware revision was installed in this step, go to step 3. If it was not, upgrade it at this time. Refer to the release notes provided with the HBA firmware for installation instructions. To verify the latest supported revisions of HBA firmware and driver levels for the 32-bit KGPSA-BC and 64-bit KGPSA-CA, FCA2354, FCA2384, FCA2684, and FCA2684DC, refer to the HP Enterprise Backup Solutions Compatibility Matrix at: http://www.hp.com/go/ebs.
NOTE: HBA firmware can be upgraded before or after installing Tru64 UNIX.
The driver is installed after Tru64 UNIX is installed. Contact Global Services to obtain the most current HBA firmware and drivers.
3. Install the Tru64 patch kit.
a. Refer to the release notes and perform the steps necessary to install the most current Tru64 patch kit.
b. The current patch kit installs the current Tru64 UNIX HBA driver. To verify that the installed HBA driver meets minimum support requirements, refer to the HP Enterprise Backup Solutions Compatibility Matrix at: http://www.hp.com/go/ebs. Upgrade the HBA driver if the HBA does not contain the most current supported driver.

4. Upgrade the HBA driver:
a. Contact Global Services to obtain the latest HBA driver.
b. Upgrading the HBA driver may require building a new kernel. Create a backup copy of the kernel file (/vmunix) before building a new kernel.
c. If building a new kernel was necessary, reboot the server. If building a new kernel was not necessary, at a Tru64 UNIX terminal window type:
# hwmgr -scan scsi
5. Verify that the Tru64 UNIX host is logged in to the Fibre Channel switch. Make sure that the server logs in to the switch as an F-port.

Confirming mapped components


This section provides the commands needed to confirm that the components have been successfully installed in the SAN.

Installed and configured host bus adapters


To obtain a list of all host bus adapters (HBAs) that are physically installed and configured in the server, enter the following command in a terminal window on the Tru64 host:
# emxmgr -d
For Tru64 5.1b, the following command is recommended:
# hwmgr -show fibre
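Since the recommended listing command differs by Tru64 release, a wrapper can pick it from a version string. This is a sketch only: the hba_list_cmd name and the version test are illustrative assumptions, and on a real host you would feed it your actual OS version string.

```shell
# Hypothetical helper: choose the HBA listing command by Tru64 version.
hba_list_cmd() {
  case "$1" in
    *5.1[bB]*) echo "hwmgr -show fibre" ;;   # Tru64 5.1b style
    *)         echo "emxmgr -d" ;;           # earlier releases
  esac
}

hba_list_cmd "Tru64 UNIX V5.1B"   # hwmgr -show fibre
hba_list_cmd "Tru64 UNIX V5.1A"   # emxmgr -d
```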

Visible target devices


WWN of the router: To view a list of target devices that are visible to each installed HBA on the Tru64 UNIX host, enter the following command, where emx0 is the name of the HBA:
# emxmgr -t emx0
For Tru64 5.1b, the following command is recommended:
# hwmgr -show fibre -adapter -topology
Verify that the WWN of the router is included in the list.
Tape and robot devices: To view a list of all of the tape and robotic devices that are visible to and configured by the Tru64 UNIX host, enter the following command:
# hwmgr -view dev
Tru64 UNIX dynamically builds all device files. This process may take several minutes.

Configuring switch zoning


If zoning will be used, either by World Wide Name or by port, perform the setup after the HBA has logged into the fabric. Refer to the Fibre Channel switch documentation for complete switch zone setup information. Ensure that the World Wide Name (WWN) or port of the Fibre-Channel-to-SCSI bridge is in the same zone as the WWN or port of the HBA installed in the server.

Installation checklist
Are all hardware components at the minimum supported firmware revision, including: server, HBA, Fibre Channel switch, Fibre Channel to SCSI router, Interface Manager, Command View TL, tape drives, library robot?
Are the current Tru64 operating system patches installed, and is the server running the current console firmware?

Is the minimum supported HBA driver loaded on the host?
Are all tape and robotic devices mapped, configured, and presented to the host from the Fibre Channel to SCSI router or Interface Manager?
Is the tape library online?
Is the Fibre Channel to SCSI router correctly logged into the Fibre Channel switch?
Is the host HBA correctly logged into the Fibre Channel switch?
If multiple Fibre Channel switches are cascaded or meshed, are all ISL ports correctly logged in?
If using zoning on the Fibre Channel switch, are the server HBA and the tape library's Fibre Channel to SCSI router in the same switch zone (either by WWN or by switch port)?
If using zoning on the Fibre Channel switch, has the zone been added to the active switch configuration?

Red Hat and SUSE Linux


This section provides instructions for configuring Linux in an Enterprise Backup Solution (EBS) environment. The configuration process involves:
Upgrading and installing the EBS hardware components to meet the minimum firmware and device driver requirements (this includes all supported server, Fibre Channel host bus adapter (FC HBA), switch, tape library, tape drive, and interconnect components).
Installing the minimum required patches, both for the operating system and the backup application.

NOTE: See the HP Enterprise Backup Solutions Compatibility Matrix for all current and required hardware, software, firmware, and device driver versions at: http://www.hp.com/go/ebs.

Operating system notes


In general, EBS configurations are not dependent on a specific kernel errata level. This is not the case for support of HP disk storage products: EBS follows the recommended minimum kernel errata versions as documented for HP disk arrays. This support information can be found on the Single Point of Connectivity Knowledge (SPOCK) website at http://www.hp.com/storage/spock. Access to SPOCK requires an HP Passport account. HP recommends installing the kernel development option (source code) when installing any Linux server. Availability of source code ensures the ability to install additional device support software that will be compiled into the kernel.

Installing HBA drivers and tools


Obtain the latest HP-supported Emulex or QLogic driver kit from the HP support website:
1. From an Internet browser, go to http://www.hp.com.
2. Click Support and Drivers.
3. In the For Product box, search for the driver kit appropriate for your model HBA (for example, FCA2214, A6826A, or FC2143).
4. Select the operating system version of the system in which the HBA is installed.
5. See the driver kit release notes.
6. Install the driver kit by running the Install script included in the kit. HP recommends using the Install script instead of running individual RPMs to ensure that drivers are installed with the appropriate options and that the fibre utilities are installed properly.


7. Beginning with the driver kits that included the 8.01.06 QLogic driver and the 8.0.16.27 Emulex driver (both kits released October 2006), execute the following script found in the fibreutils directory:
# cd /opt/hp/hp_fibreutils/pbl
# ./pbl_inst.sh -i
8. Reboot the server to complete the installation.
NOTE: Step 7 of the above procedure was introduced to eliminate the need to have hp_rescan -a run as part of /etc/rc.local (or some other boot script). In previous versions of the driver kit, executing the hp_rescan utility was necessary to work around an intermittent issue with device discovery of SCSI-2 tape automation products. Executing the pbl script inserts the probe-luns utility into the boot sequence and identifies and adds SCSI-2 device strings for legacy tape products into the kernel's blacklist. The result is that all of the supported tape libraries and drives should be discovered correctly without any additional steps by the user.
9. Verify that the host has successfully discovered all tape drive and library robotic devices using one of the following methods:
Review the device listing in /proc/scsi/scsi
Review the output from the hp_rescan command
Review the output from the lssg command

If there are devices that have not been successfully discovered, review the HBA driver installation procedure above, particularly step 5, then proceed to the Installation checklist (page 14). HP's fibre utilities, located in the /opt/hp/hp_fibreutils directory, are installed as part of the driver kit and include the following:
hp_rescan: forces a rescan of all SCSI buses
scsi_info: queries a device
adapter_info: displays HBA information (for example, World Wide Names)
lssd: lists disk devices (sd device files)
lssg: lists online and nearline devices (sg device files)
hp_system_info: lists system configuration information

Additional SG device files


In most environments, the default number of SG device files is sufficient to support all of the required devices. If the environment is fairly large and the default number of SG device files is fewer than the combined total of disk, tape, and controller devices being allocated to the server, then additional device files need to be created. SG device files are preferable to the standard st (SCSI tape) device files because the st driver's SCSI timeout values may not be long enough to support some tape operations. To create additional SG device files, enter:
# mknod /dev/sgX c 21 X
X signifies the number of a device file that does not already exist. For additional command options, see the mknod man page.
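When several files are needed, the mknod invocations can be generated in a loop. The sketch below prints the commands rather than executing them (a deliberate choice so they can be reviewed before running as root); the gen_sg_nodes helper name is invented here.

```shell
# Hypothetical generator: emit mknod commands for additional sg device
# files, character devices with major number 21 as described above.
gen_sg_nodes() {
  first=$1 last=$2
  i=$first
  while [ "$i" -le "$last" ]; do
    echo "mknod /dev/sg$i c 21 $i"
    i=$((i + 1))
  done
}

gen_sg_nodes 16 18
# mknod /dev/sg16 c 21 16
# mknod /dev/sg17 c 21 17
# mknod /dev/sg18 c 21 18
```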

Installation checklist
Are all hardware components at the minimum supported firmware revision, including: server, HBA, Fibre Channel switch, interface controller, Interface Manager, Command View TL, tape drives, library robot?
Are any required Linux operating system patches missing (required patches are noted on the EBS Compatibility Matrix)?

Is the supported HBA driver loaded on the host?
Are all tape and robotic devices mapped, configured, and presented to the host from the interface controller or Interface Manager?
Is the tape library online?
Is the FC-attached tape drive logged into the Fibre Channel switch (F-port)?
Is the interface controller logged into the Fibre Channel switch (F-port)?
Is the host HBA correctly logged into the Fibre Channel switch (F-port)?
If multiple Fibre Channel switches are cascaded or meshed, are all ISL ports correctly logged in?
If using zoning on the Fibre Channel switch, is the interface controller, or tape drive, configured into the same switch zone as the host (either by WWPN or by switch port number)?
If using zoning on the Fibre Channel switch, has the host's zone been added to the active switch configuration?

Linux known issues


Rewind commands being issued by rebooted Linux hosts
Device discovery that occurs as part of a normal Linux server boot operation issues a SCSI rewind command to all attached tape drives. For backup applications that do not employ SCSI Reserve and Release, if the rewind command is received while the tape drive is busy writing, the result is a corrupted tape header and an unusable piece of backup media. This issue could manifest itself as a failed verify operation, a failed restore operation, or the inability to mount a tape and read the tape header. If a backup verification is not completed, the normal backup process might not detect that an issue exists. This issue is present today in SUSE Enterprise Linux 9, and will become an issue for SUSE Enterprise Linux 8 and Red Hat Enterprise Linux 3 and 4 with the introduction of the QLogic v8.01.06 and Emulex v8.0.16.27 driver kits.

NOTE: Refer to Customer Advisory c00788781 for additional details on the new driver kits and their associated installation procedure changes.

The scope of this issue includes any EBS configuration that uses a backup application which does not implement SCSI Reserve and Release and contains at least one Linux host with shared access to tape devices. Backup applications known to be affected are HP Data Protector (all versions) and Legato NetWorker prior to v7.3. The only recommended workaround for affected applications is to not reboot Linux servers while other hosts are running backups.

Tape devices not discovered and configured across server reboots


Tape drives disappear from Linux servers after the host reboots. This issue was identified and communicated in Customer Advisory OT050715_CW01, dated 26 September 2005. Adding the line "hp_rescan -a" to /etc/rc.d/rc.local resolved the issue. The hp_rescan utility is an HP Host Bus Adapter (HBA) utility included and installed with the Fibre Channel HBA driver kit.

This issue, which affects Red Hat installations and, intermittently, some SUSE Linux installations, is now understood to be an issue with the mid-layer SCSI driver and its interaction with SCSI-2 tape automation products. The permanent resolution is to upgrade to the latest Fibre Channel driver kit (QLogic 8.01.06 or Emulex 8.0.16.27, both released in October 2006). This driver kit introduced a revised installation procedure, incorporating the probe-luns utility into the boot sequence. The revised installation procedure was outlined earlier in this section and was also communicated in Customer Advisory c00788781, dated 11 October 2006.
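The interim workaround from Customer Advisory OT050715_CW01 was a one-line addition to the local boot script; a minimal sketch:

```
# /etc/rc.d/rc.local (Red Hat) -- interim workaround from Customer
# Advisory OT050715_CW01; superseded by the October 2006 driver kits.
# hp_rescan is installed by the HP Fibre Channel HBA driver kit and
# rescans all HBAs so tape and robotic devices reappear after reboot.
hp_rescan -a
```

With the QLogic 8.01.06 or Emulex 8.0.16.27 driver kits installed, this line is no longer needed.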


Configuration and operating system details

Sparse files causing long backup times with some backup applications
Some Integrity and x64 HP servers running the Red Hat Enterprise Linux 3 operating system (or later) may have longer than expected system backup times, or appear to stall, when backing up the file /var/log/lastlog. This file is known as a "sparse file." The sparse file may appear to be over a terabyte in size, and the backup software can take a long time to back it up. Most backup software applications can handle sparse files through special sparse command flags. An example is the tar utility, which accepts the --sparse (or -S) flag for sparse files. If your backup application does not support backing up sparse files, exclude /var/log/lastlog from the backup.

LUNs shifting after reboot


The Linux 2.6 kernel and later enhanced the management of attached devices through the introduction of udev. The udev device manager provides users with a persistent naming process for all devices across reboots. For details on how to configure udev, refer to the appropriate Linux distribution documentation. If the application requires persistent device mapping, run the ISV device configuration wizard.
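As an illustration of the udev approach, a persistent symlink for a tape drive can be created with a custom rule; the match keys and serial value below are examples only, not taken from this guide:

```
# Hypothetical rule file: /etc/udev/rules.d/60-persistent-tape.rules
# Inspect your drive with "udevadm info -a -n /dev/st0" and substitute
# the real vendor string and serial number before use.
KERNEL=="st*", SUBSYSTEMS=="scsi", ATTRS{vendor}=="HP*", ENV{ID_SERIAL}=="HUE12345678", SYMLINK+="tape/drive0"
```

With such a rule in place, the backup application can be pointed at /dev/tape/drive0, which remains stable even if /dev/st0 renumbers across reboots.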

Oracle Solaris
This section provides instructions for configuring Oracle Solaris in an Enterprise Backup Solution (EBS) environment. The configuration process involves:

Upgrading essential EBS hardware components to meet the minimum firmware and device driver requirements

Installing the following minimum patches for Oracle Solaris:
Solaris 9 requires 112233-12, 112834-06, and 113277-51
Solaris 10 SPARC requires 118822-36 and 118833-36
Solaris 10 x86/64 requires 118855-36

Installing the minimum patch/service pack level support for the backup software

See the following websites to obtain the necessary patches:
For HP: http://www.hp.com/support
For Solaris: http://www.oracle.com
NOTE: Refer to the HP Enterprise Backup Solutions Compatibility Matrix for all current and required hardware, software, firmware, and device driver versions at http://www.hp.com/go/ebs.
See Installation checklist (page 14) to ensure that the hardware and software in the SAN is correctly installed and configured.

Configuring the SAN


This procedural overview provides the necessary steps to configure an Oracle Solaris host into an EBS. See the documentation provided with each storage area network (SAN) component for additional component setup and configuration information. Currently supported adapters for Oracle Solaris include Oracle, QLogic, and Emulex-branded HBAs. HP EBS supports all 4Gb and 8Gb HBAs with the Sun native driver. For some models of 2Gb HBAs, the QLogic qla and Emulex lpfc drivers are supported.


Device binding can help resolve issues where device targets shift. Issues can arise when a given target or LUN changes number. In most cases, this can be controlled through the use of good zoning or persistent binding. When using QLogic or Emulex drivers, configuring for persistent binding is recommended. For the Oracle native driver, persistent binding is not necessary unless recommended by the backup application vendor or for an environment where tape devices will be visible across multiple hosts. For configuring persistent binding with the Sun native driver, see the Oracle document Solaris SAN Configuration and Multipathing Guide at http://docs.oracle.com/cd/E19253-01/820-1931/index.html.

Oracle Solaris native driver configuration


1. Prepare the required rack-mounted hardware and cabling in accordance with the specifications listed in the backup software user guide as well as the installation and support documentation for each component in the SAN.
NOTE: To complete this installation, a root login is required.
2. For Solaris 9, download the current Sun StorEdge SAN Foundation Software (SFS) from http://www.sun.com/storage/san. Select the following files for download:
Install_it Script SAN 4.4.x (SAN_4.4.x_install_it.tar.Z)
Install_it Script SAN 4.4.x Readme (README_install_it.txt)

The README document explains how to uncompress the downloaded file and execute the Install_it Script.
NOTE: From Oracle's site, the Install_it Script is considered an optional download, but it includes all required SFS packages and patches for Solaris 9. The Install_it Script identifies the type of HBA and the version of Solaris before installing the appropriate SFS packages and patches.
3. SFS functionality is included within the Solaris 10 operating system. The Solaris native SUNWqlc driver is included with Solaris 10. For the Solaris 10 01/06 or later release, the SUNWemlxs and SUNWemlxu driver packages are included. To obtain SUNWemlx packages, go to Oracle's Products Download page at http://developers.sun.com/products/ and search for StorageTek Enterprise Emulex Host Bus Adapter Device Driver.
4. Install the appropriate patch:
For SUNWqlc on Solaris 10 SPARC, install patch 119130-33 or later
For SUNWqlc on Solaris 10 x86/64, install patch 119131-33 or later
For SUNWemlx on Solaris 10 SPARC, install patch 120222-31 or later
For SUNWemlx on Solaris 10 x86/64, install patch 120223-31 or later

Update the HBA fcode if needed using the flash-upgrade utility included in the appropriate patch:
SG-XPCI1FC-QF2 (X6767A) and SG-XPCI2FC-QL2: Patch 114873-05 or later
SG-XPCI2FC-QF2 (X6768A) and SG-XPCI2FC-QF2-Z: Patch 114874-07 or later
SG-XPCI1FC-EM2 and SG-XPCI2FC-EM2: Patch 121773-04 or later
SG-XPCI1FC-QF4 (QLA2460) and SG-XPCI2FC-QF4 (QLA2462): Patch 123305-04 or later

5. Reboot the server with the -r option: #reboot -- -r
6. Use the cfgadm utility to show the HBA devices: #cfgadm -al


7. Use the cfgadm utility to configure the HBA devices (c2 is the HBA device in this example): #cfgadm -c configure c2
8. Use the devfsadm utility to create device files: #devfsadm

Troubleshooting with the cfgadm utility


Getting the status of FC devices using cfgadm: # cfgadm -al
Example output for the above command:

This output shows a media changer at LUN 0 for the 100000e0022229fa9 world wide name, and tape and disk devices at LUN 0 for other world wide names. The devices are connected, have been configured, and are ready for use. The cfgadm -al -o show_FCP_dev command can be used to show the devices for all LUNs of each Ap_Id.
Fixing a device with an unusable condition: If the condition field of a device in the cfgadm output is unusable, the server cannot use the device. This may have been caused by a hardware issue. In this case, do the following to resolve the issue:
1. Resolve the hardware issue so the device is available to the server.
2. After the hardware issue has been resolved, use the cfgadm utility to verify device status and to mend the status if necessary:

Use cfgadm to get device status: # cfgadm -al
For a device that is unusable, use cfgadm to unconfigure and then reconfigure the device. For example (this is an example only; your device world wide name will be different):
# cfgadm -c unconfigure c4::100000e0022286ec
# cfgadm -f -c configure c4::100000e0022286ec

Use cfgadm again to verify that the condition of the device is no longer unusable: # cfgadm -al

QLogic driver configuration for QLA2340 and QLA2342


Substitute for your device name appropriately.


1. Prepare the required rack-mounted hardware and cabling in accordance with the specifications listed in the backup software user guide as well as the installation and support documentation for each component in the SAN.
NOTE: To complete this installation, log in as root.
2. After installing the HBA, verify proper hardware installation. At the OpenBoot PROM ok prompt, type: show-devs
If the HBA is installed correctly, an entry similar to the following is displayed (the path will vary slightly depending on your configuration): /pci@1f,4000/QLGC,qla@5
Verify the HBA hardware installation in Solaris at the shell prompt by typing: prtconf -v | grep QLGC
If the HBA is installed correctly, and the driver has not yet been installed, a device similar to the following is displayed: QLGC,qla (driver not attached)

3. After installing the HBA, install the device driver. The driver comes with the HBA or can be obtained from http://www.qlogic.com.
4. To ensure that no previous device driver was installed, at the prompt, type: #pkginfo | grep QLA2300
If no driver is installed, a prompt is returned. If there is a driver installed, verify that it is the correct revision by entering: #pkginfo -l QLA2300
If the driver needs to be removed, enter: #pkgrm <package name>

5. Install the new driver. Navigate to the directory where the driver package is located and, at the prompt, type: #pkgadd -d ./<package name>
6. Make sure that the driver is installed. At the prompt, type: #pkginfo -l QLA2300
7. Look at /kernel/drv/qla2300.conf (the device configuration file) to make sure the configuration is appropriate and Fibre Channel tape support is enabled. An example follows: hba0-fc-tape=1;
Persistent binding can be configured by binding SCSI target IDs to the Fibre Channel world wide port name of the router or tape device. To set up persistent binding, enable the persistent-binding-only option. An example follows: hba0-persistent-binding-configuration=1;
After enabling persistent binding only, bind the router or tape drive world wide port names (WWPN) to SCSI target IDs. For example, if a router has a WWPN of 1111222233334444 and is visible to hba0, bind it to SCSI target ID 64 as follows: hba0-SCSI-target-id-64-fibre-channel-port-name = 1111222233334444;
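Taken together, the qla2300.conf settings described in step 7 form a fragment like the following (the WWPN 1111222233334444 and target ID 64 are the examples from the text; substitute your own values):

```
# /kernel/drv/qla2300.conf (excerpt)
# Enable Fibre Channel tape support on the first adapter instance.
hba0-fc-tape=1;
# Present only persistently bound targets to the host.
hba0-persistent-binding-configuration=1;
# Bind the router or tape drive WWPN to SCSI target ID 64.
hba0-SCSI-target-id-64-fibre-channel-port-name = 1111222233334444;
```

A reconfiguration reboot is required for the driver to pick up changes to this file.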

Emulex driver configuration for LP10000 and LP10000DC


Substitute for your device name appropriately. The example shown is for a dual FC port adapter connected to the fabric.

1. Prepare the required rack-mounted hardware and cabling in accordance with the specifications listed in the backup software user guide as well as the installation and support documentation for each component in the SAN.
NOTE: To complete this installation, a root login is required.
2. After installing the HBA, verify proper hardware installation. At the OpenBoot PROM ok prompt, type: show-devs
If the HBA installed correctly, devices similar to the following will be displayed (the path will vary slightly depending on your configuration):
/pci@8,700000/fibre-channel@1,1
/pci@8,700000/fibre-channel@1
Verify the HBA hardware installation in Solaris at the shell prompt by typing: prtconf -v | grep fibre-channel

3. Install the HBA device driver. The driver can be obtained from http://www.emulex.com.
4. To ensure that no previous device driver was installed, at the prompt, type: #pkginfo -l lpfc
If no driver is loaded, a prompt is returned. If there is a driver installed, verify that it is the correct revision. If driver removal is required, enter: #pkgrm <package name>

5. Install the new driver. Navigate to one directory level above the driver package directory and, at the prompt, type: #pkgadd -d
Select the lpfc package.
6. Make sure that the driver is installed. At the prompt, type: #pkginfo -l lpfc
7. Verify that the HBA driver attached by typing: #prtconf -v | grep fibre-channel
If the driver attached, devices similar to the following are displayed:
fibre-channel, instance #0
fibre-channel, instance #1

8. Look at /kernel/drv/lpfc.conf (the device configuration file) to make sure the configuration is appropriate. For World Wide Port Name binding, add the following line: fcp-bind-method=2;
For FCP persistent binding, the fcp-bind-WWPN setting binds a specific World Wide Port Name to a target ID, for example, two NSR FC ports zoned in to the second interface on the HBA.
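A sketch of such an lpfc.conf entry follows. The WWPNs shown are placeholders, and the "wwpn:lpfcXtY" binding form should be checked against the comments in your lpfc.conf before use:

```
# /kernel/drv/lpfc.conf (excerpt) -- illustrative only.
# Bind targets by World Wide Port Name.
fcp-bind-method=2;
# Bind each NSR FC port WWPN (placeholders below) to a target ID
# on the second interface, lpfc1.
fcp-bind-WWPN="5005076300001234:lpfc1t0",
              "5005076300005678:lpfc1t1";
```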


NOTE: The interface definitions appear in /var/adm/messages. The interfaces lpfc0 and lpfc1 map to the following devices:
lpfc0 is /pci@8,700000/fibre-channel@1
lpfc1 is /pci@8,700000/fibre-channel@1,1
NOTE: Refer to the comments within lpfc.conf for more details on the syntax when setting fcp-bind-WWPN.
Add the following to item 2 within the section Configuring Oracle Servers for tape devices on SAN. For the LP10000 adapter:
name="st" class="scsi" target=62 lun=0;
name="st" class="scsi" target=62 lun=1;
name="st" class="scsi" target=62 lun=2;
name="st" class="scsi" target=62 lun=3;

Configuring Oracle Servers for tape devices on SAN


NOTE: The information in the following examples, such as target IDs, paths, and LUNs, are examples only. The specific data for your configuration may vary.
NOTE: This section applies to Solaris 9 and to Solaris 10 prior to Update 5 (05/08). Configuration of the st.conf file is no longer required with Solaris 10 Update 5 (05/08) or later; tape devices are discovered automatically after a reboot.
1. Edit the st.conf file for the type of devices to be used and also for binding. The st.conf file should already reside in the /kernel/drv directory. Many of the lines in the st.conf file are commented out. To turn on the proper tape devices, uncomment or insert the appropriate lines in the file, beginning with: tape-config-list=

NOTE: The tape-config-list is composed of a group of triplets. A triplet is composed of the Vendor ID + Product ID, the pretty print, and the data property name. The syntax is very important: there must be eight characters for the vendor ID (COMPAQ or HP) before the product ID (DLT8000, SDLT600, Ultrium, and so on). In the line above, there are exactly two spaces between COMPAQ and DLT8000, and there are exactly six spaces between HP and Ultrium. The order of the triplets is also important for discovery of Ultrium tape drives. The pretty print value is displayed in the boot log /var/adm/messages for each tape drive discovered that matches the associated Vendor ID + Product ID string. Below the tape-config-list is a list of data property names used to configure specific settings for each device type.

Some data protection applications handle the SCSI reservation of the tape drives and others require the operating system to do so. For a complete description of setting SCSI reservation, see the ST_NO_RESERVE_RELEASE options bit flag on the man page for st. The ST_NO_RESERVE_RELEASE flag is part of the fourth parameter in the data property name. For LTO1-data and LTO2-data, a value of 0x9639 means the operating system handles reserve/release and a value of 0x29639 means the application handles reserve/release. For LTO3-data and LTO4-data, a value of 0x18659 means the operating system handles reserve/release and a value of 0x38659 means the application handles reserve/release.
2. Define tape devices for other adapters by adding lines similar to the following to the SCSI target definition section of the st.conf file. Example for QLogic adapters:
name=st class=scsi parent=/pci@1f,4000/QLGC,qla@1 target=64 lun=0;
name=st class=scsi parent=/pci@1f,4000/QLGC,qla@1 target=64 lun=1;
NOTE: The parent is the location of the HBA in the /devices directory.

NOTE: The target can be chosen; however, it must not conflict with other target bindings in the st.conf and sd.conf files.
3. Perform a reconfiguration reboot (reboot -- -r) on the server and verify that the new tape devices are seen in /dev/rmt.
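Putting the formatting rules above together, an st.conf excerpt might look like the following sketch. The spacing inside the quoted Vendor ID + Product ID strings is significant (two spaces after COMPAQ, six after HP), and the options flag 0x18659 is the OS-handled reserve/release value described above; the other data-property fields are placeholders to be copied from the commented examples in your st.conf:

```
# /kernel/drv/st.conf (excerpt) -- illustrative sketch only.
tape-config-list=
  "COMPAQ  DLT8000", "Compaq DLT8000", "DLT8k-data",
  "HP      Ultrium", "HP Ultrium",     "LTO3-data";

# Fourth field is the options flag: 0x18659 lets the OS handle
# reserve/release for LTO3; use 0x38659 if the backup application
# handles it. Remaining fields (version, type, block size, density
# codes) are placeholders -- take exact values from st.conf comments.
LTO3-data = 1,0x36,0,0x18659,4,0x44,0x44,0x46,0x46,3;
```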

Configuring switch zoning


If zoning will be used, either by World Wide Name or by port, perform the setup after the HBA has logged into the fabric. Refer to the Fibre Channel switch documentation for complete switch zone setup information. Ensure that the World Wide Name (WWN) or port of the Fibre-Channel-to-SCSI bridge is in the same zone as the WWN or port of the HBA installed in the server.

Installation checklist
To ensure that all components on the SAN are logged in and configured properly, review the following questions:
Are all hardware components at the minimum supported firmware revision, including: server, HBA, Fibre Channel switch, Fibre Channel to SCSI router, Interface Manager, Command View TL, tape drives, and library robot?
Are all recommended Solaris patches installed on the host?
Is the minimum supported HBA driver loaded on the host?
Are all tape and robotic devices mapped, configured, and presented to the host from the Fibre Channel to SCSI router or Interface Manager?
Is the tape library online?

Is the Fibre Channel to SCSI router correctly logged into the Fibre Channel switch?
Is the host HBA correctly logged into the Fibre Channel switch?
If multiple Fibre Channel switches are cascaded or meshed, are all ISL ports correctly logged in?
If using zoning on the Fibre Channel switch, are the server HBA and the tape library's Fibre Channel to SCSI router in the same switch zone (either by WWN or by switch port)?
If using zoning on the Fibre Channel switch, has the zone been added to the active switch configuration?

IBM AIX
The configuration process for IBM AIX in an EBS environment involves:

Upgrading essential EBS hardware components to meet the minimum firmware and device driver requirements

Installing the minimum patch level support for:
IBM AIX
Backup software

Refer to the following websites to obtain the necessary patches:
For HP: http://www.hp.com/support
For IBM: http://www.ibm.com
NOTE: Refer to the HP Enterprise Backup Solutions Compatibility Matrix for all current and required hardware, software, firmware, and device driver versions at http://www.hp.com/go/ebs.
Refer to the Quick Checklist at the end of this section to ensure proper installation and configuration of all of the hardware and software in the SAN.

Configuring the SAN


This procedural overview provides the necessary steps to configure an AIX host into an EBS. Refer to the documentation provided with each storage area network (SAN) component for additional component setup and configuration information. NOTE: To complete this installation, log in as root.

Prepare the required hardware and cabling in accordance with the specifications listed in chapter 2 of this guide as well as the installation and support documentation for each component in the SAN.

IBM 6228, 6239, 5716, or 5759 HBA configuration


NOTE: See the EBS compatibility matrix concerning IBM AIX OS version support for these Host Bus Adapters.
1. Install the latest maintenance packages for your version of AIX. This ensures that the latest drivers for the 6228/6239/5716/5759/5773/5774 HBA are installed on your system. For AIX 4.3.3, the latest packages must be installed because the base OS does not contain drivers for the newer HBAs.
2. Install the IBM 6228/6239/5716/5759/5773/5774 HBA, and restart the server.
3. Ensure that the card is recognized. At the prompt, type: #lsdev -Cc adapter

There is a line in the output similar to the following: fcs0 Available 1D-08 FC Adapter
If the adapter is not recognized, check that the correct HBA driver is installed:
6228: #lslpp -L|grep devices.pci.df1000f7
6239: #lslpp -L|grep devices.pci.df1080f9
5716: #lslpp -L|grep devices.pci.df1000fa
5759: #lslpp -L|grep devices.pci.df1000fd
5773: #lslpp -L|grep devices.pciex.df1000fe
5774: #lslpp -L|grep devices.pciex.df1000fe
There are lines in the lslpp output similar to the following for a 6239 HBA:
devices.pci.df1080f9.diag 5.1.0.1 C F PCI-X FC Adapter Device
devices.pci.df1080f9.rte 5.1.0.1 C F PCI-X FC Adapter Device
For AIX 5.1, the device drivers may need to be installed separately from the maintenance pack. See the IBM installation guide for the 6239.
4. For information about the HBA, such as the WWN, execute the following command: #lscfg -vl fcs0
The output will look similar to the following:

5. After the HBA has successfully logged into the SAN fabric and the necessary zoning is configured, configure the HBA and devices within the fabric. At the prompt, type: #cfgmgr -l <devicename> -v
Within the command, <devicename> is the name from the output of the lsdev command in step 3, such as fcs0.


6. To ensure all tape device files are available, at the prompt, type: #lsdev -HCc tape
7. By default, AIX creates tape devices with a fixed block length. To change the devices to variable block lengths, at the prompt, type: #chdev -l <tapedevice> -a block_size=0
Configuration of the tape devices (where tape devices are rmt0, rmt1, and so on) is complete.
NOTE: HP tape drives (SDLT and LTO) use the IBM host tape driver. When properly configured, a device listing shows the tape device as follows:
For IBM native HBAs: Other FC SCSI Tape Drive
For non-IBM native HBAs: Other SCSI Tape Drive
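The listing and block-length steps above can be combined in a short root shell loop; a sketch only, to be verified against the device list on your AIX host first:

```
# For each rmtN tape device that AIX reports, set a variable block
# length (block_size=0). Run as root on the AIX host.
for dev in $(lsdev -HCc tape | awk '/^rmt[0-9]/ {print $1}')
do
    chdev -l "$dev" -a block_size=0
done
```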

Configuring switch zoning


If zoning will be used, either by World Wide Name or by port, perform the setup after the HBA has logged into the fabric. Refer to the Fibre Channel switch documentation for complete switch zone setup information. Ensure that the World Wide Name (WWN) or port of the Fibre-Channel-to-SCSI bridge is in the same zone as the WWN or port of the HBA installed in the server.

Installation checklist
Are all hardware components at the minimum supported firmware revision, including: server, HBA, Fibre Channel switch, Fibre Channel to SCSI router, Interface Manager, Command View TL, tape drives, and library robot?
Are all recommended AIX maintenance packages installed on the host?
Is the minimum supported HBA driver loaded on the host?
Are all tape and robotic devices mapped, configured, and presented to the host from the Fibre Channel to SCSI router or Interface Manager?
Is the tape library online?
Is the Fibre Channel to SCSI router correctly logged into the Fibre Channel switch?
Is the host HBA correctly logged into the Fibre Channel switch?
If multiple Fibre Channel switches are cascaded or meshed, are all ISL ports correctly logged in?
If using zoning on the Fibre Channel switch, are the server HBA and the tape library's Fibre Channel to SCSI router in the same switch zone (either by WWN or by switch port)?
If using zoning on the Fibre Channel switch, has the zone been added to the active switch configuration?

Installing backup software and patches


After all components on the SAN are logged in and configured, the system is ready for the installation of any supported backup software. Refer to the installation guide for your particular software package, or contact the vendor for detailed installation procedures and requirements. After installing the backup software, check with the software vendor for the latest updates and patches. If any updates or patches exist for your backup software, install them now.


4 Backup and recovery of Virtual Machines


Virtual Machine software is used for partitioning, consolidating, and managing computing resources, allowing multiple, unmodified operating systems and their applications to run in virtual machines that share physical resources. Each virtual machine represents a complete system with processors, memory, networking, storage, and BIOS.

Table 2 Virtual machine backup methods
VM image backup from physical host to locally attached backup device*
  File-level recovery is required: No
  Application data to backup: Yes (cold)
  Backup window required: Yes
  Large number of VMs to backup: Not suggested

Guest VM file backup to locally attached backup device*
  File-level recovery is required: Yes
  Application data to backup: Yes (hot/cold)
  Backup window required: Yes
  Large number of VMs to backup: Not suggested

Guest VM file backup to locally mapped NAS share or iSCSI backup device
  File-level recovery is required: Yes
  Application data to backup: Yes (hot/cold)
  Backup window required: Yes
  Large number of VMs to backup: Not suggested

Guest VM LAN backup to media host
  File-level recovery is required: Yes
  Application data to backup: Yes (hot/cold)
  Backup window required: Yes
  Large number of VMs to backup: Not suggested

Off-host backup server
  File-level recovery is required: VMware: Yes (VADP); HPVM: Yes (ZDB)
  Application data to backup: VMware: Yes (VADP); HPVM: Yes (ZDB)
  Backup window required: No
  Large number of VMs to backup: Yes, with multiple backup servers

*Not supported by all virtualization products
VADP = vStorage APIs for Data Protection
ZDB = Snapshot-based backup on an off-host proxy server

NOTE: Be sure to do the following:
See the backup software documentation for supported virtual machine backup methods.
See the virtual machine documentation for supported backup devices.
See the EBS Compatibility Matrix for backup application VM support and VM tape support.

HP EBS VMware backup and recovery strategy


NOTE: ESX VMs do not support FC-connected tape devices. VADP with an off-host backup server can be used to manage SAN devices.
VMware vStorage APIs for Data Protection (VADP) offloads backup responsibility from ESX hosts to a dedicated backup server or servers. This reduces the load on ESX hosts. VADP provides full-image backup and restore capabilities for all virtual machines and file-based backups for Microsoft Windows and Linux virtual machines.
VMs can also be set up as backup servers with locally mapped NAS shares or iSCSI backup devices. See the backup software documentation for VM support. VMs can also be set up for LAN backup the same as a regular client. See the backup software documentation for details.
For recommendations on Virtual Machine backup and recovery to HP StoreOnce backup systems, see Protecting VMware Virtual Machines with HP StoreOnce D2D Systems and the


VMware vStorage APIs for Data Protection at www.hp.com/go/storeonce under the White Papers link. For complete details on Zero Downtime Backup of an Oracle database running on a VMware virtual machine, see the HP Oracle on VMware ZDB Solution implementation guides at www.hp.com/go/ebs, under the EBS whitepapers link.

NOTE: VMware datastores residing on HP EVA storage arrays should use the Windows host profile mode for the VADP backup server.

HP Integrity Virtual Machines (Integrity VM)


HP supports, certifies, and sells HP Integrity Virtual Machines (HPVM) virtualization software on HP Integrity servers. HPVM is an application installed on an HP-UX server that allows multiple, unmodified operating systems (HP-UX, Windows, and Linux) and their applications to run in virtual machines that share physical resources. The HP Virtual Server Environment (VSE) for HP Integrity provides an automated infrastructure that can adapt in seconds with mission-critical reliability. HP VSE allows you to optimize server utilization in real time by creating virtual servers that can automatically grow and shrink based on business priorities and service levels.
NOTE: The HP Integrity VM host and VMs do support FC SAN connected tape, Virtual Library Systems (VLS) devices, and HP StoreOnce backup systems.
Off-host backups using HP storage array hardware mirroring or snapshots can be used to shorten backup windows and off-load resources required for backup. VMs can also be set up for LAN backup the same as a regular client or media host. See the backup software documentation for details.
For complete details on Virtual Machine backup and recovery, including off-host, LAN-based, and local media server backups, see HP EBS Solutions Guide for HP Integrity Virtual Machine Backup at www.hp.com/go/ebs under the EBS whitepapers link.

HP EBS Hyper-V backup and recovery strategy


NOTE: Hyper-V VMs do not support FC or direct attach tape devices. A backup server can be used to manage such devices.
The Volume Shadow Copy Service (VSS) Hyper-V writer can be used to quiesce Windows virtual machines and create a snapshot on the Hyper-V host volume. VMs that cannot be quiesced can be placed in the Saved state before snapshot creation. The snapshots are then used for image or file backup of the VMs. If a VM was placed in the Saved state, Hyper-V returns the VM to its original state. See the backup software documentation for details.
VMs can also be set up for LAN backup the same as a regular client. See the backup software documentation for details. VMs can also be set up as backup servers with locally mapped NAS shares or iSCSI backup devices. See the backup software documentation for VM support.

