IBM System Storage

Multipath Subsystem Device Driver User's Guide

IBM License Agreement for Machine Code

This guide might contain references to machine code, which includes Licensed Internal Code. Licensed Internal Code is licensed to you under the terms of the IBM License Agreement for Machine Code. Carefully read the agreement. By using this product, you agree to abide by the terms of this agreement and applicable copyright laws. See “IBM license agreement for machine code” on page 457.

GC52-1309-03


Note: Before using this information and the product it supports, read the information in “Notices” on page 455.


This edition applies to the following versions of IBM Multipath Subsystem Device Driver and to all subsequent releases and modifications until otherwise indicated in new editions:

Subsystem Device Driver Version 1 Release 8 Modification 0 Level x for HP-UX
Subsystem Device Driver Version 1 Release 7 Modification 2 Level x for AIX
Subsystem Device Driver Version 1 Release 6 Modification 5 Level x for Solaris
Subsystem Device Driver Version 1 Release 6 Modification 4 Level x for Windows
Subsystem Device Driver Version 1 Release 6 Modification 3 Level x for Linux
Subsystem Device Driver Version 1 Release 6 Modification 0 Level x for NetWare
Subsystem Device Driver Device Specific Module Version 2 Release 4 Modification x Level x
Subsystem Device Driver Path Control Module Version 2 Release 6 Modification x Level x
Subsystem Device Driver Path Control Module Version 3 Release 0 Modification x Level x

This edition replaces GC52-1309-02.

© Copyright IBM Corporation 1999, 2010.
US Government Users Restricted Rights – Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.

Contents
Figures    xiii

Tables    xv

About this guide    xvii
  Who should use this guide    xvii
  Summary of changes    xvii
    New information    xviii
  Command syntax conventions    xviii
    Highlighting conventions    xviii
    Special characters conventions    xviii
  Related information    xix
    The ESS library    xix
    The DS8000 library    xxi
    The DS6000 library    xxi
    The DS5000 and DS Storage Manager library    xxii
    The DS4000 library    xxii
    The SAN Volume Controller library    xxiii
    The Tivoli Storage Productivity Center and Tivoli Storage Productivity Center for Replication libraries    xxiv
  Ordering IBM publications    xxiv
    IBM Publications Center    xxiv
  How to send your comments    xxiv

Chapter 1. Overview of the SDD    1
  The SDD architecture    2
  Enhanced data availability    4
  Dynamic I/O load balancing    6
  Automatic path-failover protection    6
  Concurrent download of licensed machine code for disk storage systems    7
  Concurrent download of licensed machine code for virtualization products    7
  Concurrent download of licensed machine code for DS3950, DS4000 and DS5000    8
  Concurrent download of licensed machine code for IBM BladeCenter S SAS RAID Controller Module (RSSM)    8
  Active/Passive dual array controller path-selection algorithm for DS3950, DS4000 and DS5000 products    8
  Preferred node path-selection algorithm for the virtualization products    9
  Open HyperSwap Replication    9

Chapter 2. Using the SDD on an AIX host system    11
  Supported SDD features    11
  Verifying the hardware and software requirements    12
    Hardware    12
    Software    12
    Unsupported environments    13
    Host system requirements    13
      Disk storage systems requirements    14
      Virtualization products requirements    14
      SCSI requirements for ESS    14
      Fibre requirements    14
  Preparing for the SDD installation    15
    Configuring the disk storage system    15
    Configuring the virtualization products    16
    Installing the AIX fibre-channel device drivers    16
    Uninstalling the AIX fibre-channel device drivers    17
      Using the smitty deinstall command    17
      Using the installp command    17
    Installing and upgrading the AIX SDD host attachment    18
    Configuring fibre-channel-attached devices    18
    Removing fibre-channel-attached devices    18
    Verifying the adapter firmware level    19
    Determining if the sddServer for Expert is installed    19
    Understanding SDD support for IBM System p with static LPARs configured    20
    Determining the installation package    20
      Installation packages for 32-bit and 64-bit applications on AIX 4.3.3 (or later) host systems    20
      Switching between 32-bit and 64-bit modes on AIX 5.1.0, AIX 5.2.0, and AIX 5.3.0 host systems    21
      Installation of major files on your AIX host system    21
    Determining the installation type    22
  Installing and upgrading the SDD    23
    Installing the SDD    23
      Installing the SDD from CD-ROM    23
      Installing SDD from downloaded code    24
    Upgrading the SDD    24
      Upgrading the SDD packages automatically without system restart    24
        Preconditions for migration or upgrade    25
        Customizing the SDD migration or upgrade    26
        Procedures for automatic migration or upgrade    27
        Error recovery for migration or upgrade    27
      Upgrading SDD manually    28
      Updating SDD packages by applying a program temporary fix    29
        Committing or Rejecting a PTF Update    30
      Upgrading AIX OS or host attachment and SDD packages    32
        Case 1: In stand-alone host or with the HACMP services stopped    32
        Case 2: In HACMP node-by-node migration with nonconcurrent resource group    34
  Verifying the currently installed version of SDD for SDD 1.3.3.11 (or earlier)    35
  Verifying the currently installed version of SDD for SDD 1.4.0.0 (or later)    37

  Preparing to configure SDD    38
    Maximum number of LUNs    38
      ODM attributes for controlling the maximum number of LUNs in SDD version 1.6.0.7 or later on AIX 5.2 and later    40
      Preparing your system to configure more than 600 supported storage devices or to handle a large amount of I/O after queue depth is disabled    40
      Filesystem space    43
    Controlling I/O flow to SDD devices with the SDD qdepth_enable attribute    43
    Controlling reserve policy of SDD devices with the SDD reserve_policy attribute    44
  Configuring SDD    45
    Unconfiguring SDD    45
    Verifying the SDD configuration    46
    Dynamically adding paths to SDD vpath devices    47
    Dynamically removing or replacing PCI adapters or paths    47
      Dynamically removing a PCI adapter from SDD configuration    47
      Dynamically replacing a PCI adapter in an SDD configuration    48
      Dynamically removing a path of an SDD vpath device    49
  Removing SDD from an AIX host system    50
    Special considerations when you uninstall SDD in the NIM environment    51
  Removing SDD Host Attachment from an AIX host system    51
  SAN boot support    52
    Manual exclusion of devices from the SDD configuration    52
    Replacing manually excluded devices in the SDD configuration    52
    SAN boot installation procedures    53
      SAN boot installation procedure for AIX 5.1    53
      SAN boot installation procedure for AIX 5.2, AIX 5.3, and AIX 6.1    53
  Understanding SDD support for High Availability Cluster Multiprocessing    54
    SDD persistent reserve attributes    56
    Preparation for importing volume groups under HACMP    57
    HACMP RAID concurrent-mode volume groups and enhanced concurrent-capable volume groups    58
      Creating HACMP RAID concurrent-mode volume groups    59
      Importing HACMP RAID concurrent-mode volume groups    59
      Removing HACMP RAID concurrent-mode volume groups    62
      Extending HACMP RAID concurrent-mode volume groups    62
      Reducing HACMP RAID concurrent-mode volume groups    63
      Exporting HACMP RAID concurrent-mode volume groups    64
    Enhanced concurrent-capable volume groups    64
      Creating enhanced concurrent-capable volume groups    64
      Importing enhanced concurrent-capable volume groups    65
      Extending enhanced concurrent-capable volume groups    65
      Reducing enhanced concurrent-capable volume groups    66
      Recovering paths that are lost during HACMP node fallover that is caused when a system locks up    66
    Supporting enhanced concurrent mode in an HACMP environment    67
  SDD server daemon    67
    Verifying if the SDD server has started    68
    Starting the SDD server manually    68
    Changing to a different port number for the SDD server    68
    Stopping the SDD server    68
    Replacing the SDD server with a stand-alone version    69
    PTFs for APARs on AIX with Fibre Channel and the SDD server    69
  Dynamically changing the SDD path-selection policy algorithm    70
    datapath set device policy command    70
  Dynamically opening an invalid or close_dead path    71
  Fibre-channel dynamic device tracking for AIX 5.20 TL1 (and later)    71
  Understanding SDD 1.3.2.9 (or later) support for single-path configuration for supported storage devices    72
  Understanding the persistent reserve issue when migrating from SDD to non-SDD volume groups after a system reboot    72
  Managing secondary-system paging space    73
    Listing paging spaces    73
    Adding a paging space    73
    Removing a paging space    73
  Verifying load-balancing and failover protection    73
    Displaying the supported storage device SDD vpath device configuration    74
    Configuring volume groups for failover protection    75
  Losing failover protection    76
    Losing a device path    77
    Creating a volume group from single-path SDD vpath devices    77
    A side effect of running the disk change method    77
    Manually deleting devices and running the configuration manager (cfgmgr)    79
  Using LVM with SDD    79
    Importing volume groups with SDD    79
    Exporting a volume group with SDD    80
    Recovering from mixed volume groups    81
    Extending an existing SDD volume group    81
    Backing up all files belonging to an SDD volume group    81
    Restoring all files belonging to an SDD volume group    82
  SDD-specific SMIT panels    82

    Accessing the Display Data Path Device Configuration SMIT panel    83
    Accessing the Display Data Path Device Status SMIT panel    84
    Accessing the Display Data Path Device Adapter Status SMIT panel    84
    Accessing the Define and Configure All Data Path Devices SMIT panel    85
    Accessing the Add Paths to Available Data Path Devices SMIT panel    85
    Accessing the Configure a Defined Data Path Device SMIT panel    85
    Accessing the Remove a Data Path Device SMIT panel    85
    Accessing the Add a Volume Group with Data Path Devices SMIT panel    85
    Accessing the Add a Data Path Volume to a Volume Group SMIT panel    86
    Accessing the Remove a Physical Volume from a Volume Group SMIT panel    86
    Accessing the Backup a Volume Group with Data Path Devices SMIT panel    86
    Accessing the Remake a Volume Group with Data Path Devices SMIT panel    87
  SDD utility programs    87
    addpaths    87
    hd2vp and vp2hd    88
    dpovgfix    88
    lsvpcfg    88
    mkvg4vp    89
    extendvg4vp    90
    excludesddcfg    90
    lquerypr    91
    sddgetdata    91
  Persistent reserve command tool    91
  Using supported storage devices directly    93
  Using supported storage devices through AIX LVM    94
  Migrating a non-SDD volume group to a supported storage device SDD multipath volume group in concurrent mode    94
    Detailed instructions for migrating a non-SDD volume group to a supported storage device SDD multipath volume group in concurrent mode    96
  Using the trace function    98

Chapter 3. Using SDDPCM on an AIX host system    99
  Supported SDDPCM features    101
  Unsupported SDDPCM features    102
  Verifying the hardware and software requirements    102
    Hardware    102
    Software    102
    Unsupported environments    103
    Host system requirements    103
    Supported storage device requirements    103
      Fibre requirements    103
  Preparing for SDDPCM installation    104
    Preparing for SDDPCM installation for supported storage devices    105
      Determining the correct installation package    106
      Determining if the SDD package is installed    106
      Determining if the ibm2105.rte package is installed    106
      Determining if the devices.fcp.disk.ibm.rte package is installed    106
      Installing the AIX fibre-channel device drivers    106
      Verifying and upgrading the fibre-channel adapter firmware level    108
        Verifying the adapter firmware level    108
        Upgrading the fibre channel adapter firmware level    108
        Fibre-channel HBA attributes for DS4000, DS5000, and DS3950 storage devices    109
      Verifying and Upgrading SAS controller firmware    109
  Installing and upgrading the AIX SDDPCM host attachment    109
  Installing and upgrading SDDPCM    110
    Installing SDDPCM from CD-ROM    110
      Creating and mounting the CD-ROM filesystem    110
      Using the System Management Interface Tool facility to install SDDPCM    112
      Unmounting the CD-ROM File System    112
    Installing SDDPCM from downloaded code    113
    Installing SDDPCM with the AIX OS from an AIX NIM SPOT server to the client SAN boot disk or the internal boot disk    113
    Updating SDDPCM    114
      Updating SDDPCM packages by installing a newer base package or a program temporary fix    114
      Committing or rejecting a program temporary fix update    115
    Verifying the currently installed version of SDDPCM    116
    Maximum number of devices that SDDPCM supports    116
  Migrating SDDPCM    117
    Migrating the supported storage SAN boot device or nonboot volume group from AIX default PCM to SDDPCM    117
    Migrating from SDDPCM to the AIX default PCM or to SDD    118
    Migrating from SDD with SAN boot devices (on supported storage hdisks) to SDDPCM with multipath SAN boot devices    119
    Migrating SDDPCM during an AIX OS upgrade with multipath SAN boot devices (on supported storage hdisks)    119
  Configuring and unconfiguring supported storage MPIO-capable devices    119
    Configuring supported storage MPIO-capable devices    119
    Verifying the SDDPCM configuration    120
    Dynamically adding and removing paths or adapters    120
    Unconfiguring supported storage MPIO-capable devices    122
    Switching DS4000, DS5000, or DS3950 storage device configurations    122

  Removing SDDPCM from an AIX host system    123
  SDDPCM support for HACMP with Enhanced Concurrent Mode volume groups    123
  MPIO reserve policies    124
    No Reserve reservation policy    125
    Exclusive Host Access single-path reservation policy    125
    Persistent Reserve Exclusive Host Access reservation policy    125
    Persistent Reserve Shared Host Access reservation policy    125
  SDDPCM active/passive storage device controller health-check feature    125
  SDDPCM ODM attribute settings    126
    SDDPCM ODM attribute default settings    126
    Changing device reserve policies    127
    Changing the path selection algorithm    127
    Using the load balancing port algorithm    128
    Changing SDDPCM path healthcheck mode    129
    Changing SDDPCM path healthcheck time interval    130
    Changing the Open HyperSwap quiesce expire time    130
  Supported AIX fibre-channel device driver features    131
    Fast I/O failure of fibre-channel devices    131
    Fibre channel dynamic device tracking    131
    Changing SDDPCM controller healthcheck delay_time    132
    Changing SDDPCM controller healthcheck interval time    133
  Multipath SAN boot support    133
    Configuring supported storage system MPIO devices as the SAN boot device    134
  Support system dump device with the supported storage system MPIO device    135
  Dynamically enabling and disabling paths or adapters    135
    Dynamically enabling or disabling a path    135
    Dynamically enabling or disabling an adapter    136
  Using the SDDPCM trace function    137
  SDDPCM server daemon    137
    Verifying if the SDDPCM server has started    138
    Starting the SDDPCM server manually    138
    Stopping the SDDPCM server    138
  AE daemon    139
    Verifying if the AE server has started    139
    Starting the AE server manually    139
    Stopping the AE server manually    140
  SDDPCM utility programs    140
    Persistent reserve command tools    140
      pcmquerypr    140
      pcmgenprkey    143
      sddpcm_get_config    143
    Using SDDPCM pcmpath commands    144
      pcmpath clear device count    146
      pcmpath disable ports    147
      pcmpath enable ports    149
      pcmpath open device path    151
      pcmpath query adapter    153
      pcmpath query adaptstats    155
      pcmpath query device    157
      pcmpath query devstats    163
      pcmpath query essmap    165
      pcmpath query port    166
      pcmpath query portmap    168
      pcmpath query portstats    169
      pcmpath query session    171
      pcmpath query version    172
      pcmpath query wwpn    173
      pcmpath set adapter    174
      pcmpath set device algorithm    176
      pcmpath set device hc_interval    177
      pcmpath set device hc_mode    178
      pcmpath set device cntlhc_interval    179
      pcmpath set device cntlhc_delay    180
      pcmpath set device path    181
    Summary of command syntax    182

Chapter 4. Using the SDD on an HP-UX host system    183
  Verifying the hardware and software requirements    183
    Hardware    183
    Software    183
      SDD-supported HP-UX kernel levels    183
    Unsupported environments    184
  Understanding how SDD works on an HP-UX host system    184
  Preparing for the SDD installation    184
    Configuring the disk storage system    184
    Configuring the virtualization products    184
    Configuring DS4000 and DS5000 family disk systems    184
    Bypassing the boot disk check    185
    Planning for installation    185
    Determining if the SDD 1.3.1.5 (or later) server for Expert is installed    186
  Installing SDD    187
    Installing SDD from CD-ROM    187
    Installing SDD from downloaded code    188
    Upgrading the SDD    189
      Upgrading from SDD 1.6.0.x to SDD 1.6.1.0 or later with concurrent access    189
        Performance during upgrade    189
      Upgrading from SDD 1.5.0.4 to SDD 1.6.1.0 or later with nonconcurrent access    190
      Upgrading SDD 1.3.0.2 or earlier to SDD 1.5.0.4 or later    190
    Installing an additional package for DS4000 and DS5000 support    191
  Configuring the SDD    191
    Maximum number of LUNs    191
    Changing an SDD hardware configuration    191
    Converting a volume group    192
    Dynamic reconfiguration    192
    Dynamically changing the SDD path-selection policy algorithm    192
      datapath set device policy command    193
    Preferred node path-selection algorithm for the virtualization products    193
    SDD datapath query adapter command changes for SDD 1.4.0.0 (or later)    194

    SDD datapath query device command changes for SDD 1.4.0.0 (or later)    195
  Postinstallation    196
  Uninstalling SDD    198
  SDD server daemon    199
    Verifying if the SDD server has started    199
    Starting the SDD server manually    200
    Changing to a different port number for the SDD server    200
    Stopping the SDD server    200
  How to import and export volume groups    200
    Exporting volume groups    201
    Moving the map file    201
    Creating the volume group device directory    201
    Creating the group special file    202
    Importing volume groups    202
    Exporting or importing MC Service Guard volume groups    202
  Using applications with SDD    204
    Standard UNIX applications    204
      Creating new logical volumes    205
      Removing logical volumes    207
      Re-creating the existing logical volumes    207
    Installing SDD on a NFS file server    209
      Setting up NFS for the first time    209
      Installing SDD on a system that already has the NFS file server    210

Chapter 5. Using SDD on a Linux host system    211
  Verifying hardware and software requirements    211
    Hardware    211
    Software    211
    Unsupported environments    212
  Preparing for SDD installation    212
    Configuring disk storage systems    212
    Configuring virtualization products    212
    Configuring fibre-channel adapters on disk storage systems    212
    Configuring fibre-channel adapters on virtualization products    213
    Disabling automatic Linux system updates    213
  Installing SDD    213
    Installing SDD from CD-ROM    213
    Installing SDD from downloaded code    214
    Upgrading SDD    214
    Verifying the SDD installation    215
  Configuring SDD    216
    Configurating and verifying an SDD    217
      SDD configuration    217
      Verifying SDD configuration    218
      Automatic SCSI disk device exclusion during SDD configuration    221
    Reconfiguring using SDD userspace commands    221
      cfgvpath    221
      rmvpath    221
      lsvpcfg    222
      addpaths    222
    Configuring SDD at system startup    222
    Maintaining SDD vpath device configuration persistence    223

  Dynamically changing the SDD path-selection policy algorithm    224
    datapath set device policy command    225
  Using dynamic reconfiguration    225
  Uninstalling SDD    226
  Mounting SDD devices with automount    226
    Setting up automount    226
    Configuring automount    227
    Sample mount output    228
  Using Linux Logical Volume Manager with SDD    229
    Using LVM on SUSE    230
    Using LVM on Red Hat    231
    LVM 2 general configuration information    232
  Booting Linux over the SAN with SDD    233
    SAN Boot instructions for RHEL 3 with IBM SDD (x86)    233
      Prerequisite steps    233
      Verifying the SAN boot configuration    235
      Upgrading the SDD    240
    SAN Boot instructions for RHEL 3 with IBM SDD (ppc)    240
      Prerequisite steps    240
      SAN boot configuration    242
      SDD upgrade procedure    248
    SAN Boot Instructions for SLES 8 with IBM SDD (x86)    249
      Prerequisite steps    249
      SAN boot configuration    250
      SDD upgrade procedure    257
    SAN Boot Instructions for SLES 9 with IBM SDD (x86)    258
      Prerequisite steps    258
      SAN boot configuration    260
      SDD upgrade procedure    264
    SAN Boot instructions for SLES 9 with IBM (ppc)    264
      Prerequisite steps    264
      SAN boot configuration    266
      Upgrading the SDD    271
    SAN Boot Instructions for SLES 9 with IBM SDD (x86) and LVM 2    272
      Prerequisite steps    272
      SAN boot configuration    273
    SAN boot instructions for RHEL 4 with IBM SDD (x86)    279
      Prerequisite steps    279
      SAN boot configuration    280
      Upgrading the SDD    284
    SAN Boot instructions for RHEL 4 with IBM SDD (ppc)    285
      Prerequisite steps    285
      SAN boot configuration    287
      Upgrading the SDD    290
    SAN boot instructions for RHEL 4 with IBM SDD (x86) and LVM 2    291
      Prerequisite steps    291
      SAN boot configuration    293
    Using lilo with SDD (remote boot) on x86    298
      Manually specifying disk geometry of the boot device    299
  SDD server daemon    301

    Verifying if the SDD server has started    301
    Starting the SDD server manually    301
    Changing to a different port number for the SDD server    302
    Stopping the SDD server    302
    Collecting trace information    302
  Understanding SDD support for single-path configuration    302
  Partitioning SDD vpath devices    303
  Using standard UNIX applications    303
  Managing common issues    304

Chapter 6. Using the SDD on a NetWare host system    307
  Verifying the hardware and software requirements    307
    Hardware requirements    307
    Software requirements    307
    Supported environments    308
    Unsupported environments    308
    Disk storage system requirements    308
    SCSI requirements    308
    Fibre-channel requirements    309
  Preparing for SDD installation    309
    Configuring the disk storage system    309
    Configuring fibre-channel adapters    309
    Configuring SCSI adapters    310
    Using a NetWare Compaq Server    310
  Installing SDD    311
    Installing SDD from CD-ROM    311
    Installing SDD from downloaded code    311
  Configuring SDD    311
    Maximum number of LUNs    312
  Displaying the current version of the SDD    312
  Features    312
    Automatic path detection, failover and selection    312
    Manual operations using the datapath commands    313
    Understanding SDD error recovery algorithms    314
      Single-path mode    314
      Multiple-path mode    314
    Dynamic load balancing    314
    Disk storage system logical unit detection    314
    Error reporting and logging    314
    SDD in NetWare-layered architecture    315
      Display a single device for a multipath device    315
  Removing the SDD    315
  Cluster setup for Novell NetWare 5.1    316
  Cluster setup for Novell NetWare 6.0    316
  Examples of commands output on the Console Window    316

Chapter 7. Using the SDD on a Solaris host system    321
  Verifying the hardware and software requirements    321
    Hardware    321
    Software    321
    Supported environments    322
    Unsupported environments    322

Understanding how SDD works on a Solaris host system
Preparing for SDD installation
   Configuring disk storage systems
   Configuring virtualization products
   Determining if the SDD server for Expert is installed
   Planning for installation
Installing SDD
   Installing SDD from CD-ROM
   Installing SDD from downloaded code
      Installing SDD in Jumpstart Environment
   Postinstallation
   Verifying the SDD installation
   Upgrading the SDD
Configuring the SDD
   Changing an SDD hardware configuration
      Option 1: Reconfigure the system and reconfigure the SDD
      Option 2: Dynamic reconfiguration
   Solaris 10 ZFS support
   Solaris 10 Zone support
      SDD installation in a server with nonglobal zones
      Access SDD vpath devices in nonglobal zones
   Dynamically changing the SDD path-selection policy algorithm
      datapath set device policy command
   Excluding LUNs from being configured by SDD
      Determining the LUN identifier of a LUN
      Special consideration during SDD upgrade
Uninstalling the SDD
Understanding SDD support for single-path configuration for disk storage system
SDD server daemon
   Verifying if the SDD server has started
   Starting the SDD server manually
   Changing to a different port number for the SDD server
   Changing the retry count value when probing SDD server inquiries
   Stopping the SDD server
Using applications with SDD
   Standard UNIX applications
   Installing the SDD on a NFS file server
      Setting up NFS for the first time
      Installing SDD on a system that already has the Network File System file server
   Veritas Volume Manager
   Oracle
      Installing an Oracle database for the first time
         Using a file system
         Using raw partitions
      Installing an SDD on a system that already has Oracle in place
         If using a file system
         If using raw partitions
   Solaris Volume Manager (formerly Solstice DiskSuite)


      Installing Solaris Volume Manager for the first time
      Installing SDD on a system that already has Solstice DiskSuite in place
   Setting up transactional volume for UFS logging on a new system
   Installing vpath on a system that already has transactional volume for UFS logging in place

Chapter 8. Using the SDD on a Windows NT host system
Verifying the hardware and software requirements
   Hardware
   Software
   Supported environments
   Unsupported environments
   ESS requirements
   Host system requirements
      SCSI requirements
      Fibre-channel requirements
Preparing for SDD installation
   Configuring the ESS
   Configuring fibre-channel adapters
   Configuring SCSI adapters for ESS devices
Installing SDD
   Installing for the first time
   Upgrading SDD
   Displaying the current version of the SDD
Configuring the SDD
   Maximum number of LUNs
   Adding paths to SDD devices
      Reviewing the existing SDD configuration information
      Installing and configuring additional paths
      Verifying additional paths are installed correctly
Uninstalling the SDD
Using high-availability clustering on an ESS
   Special considerations in the high-availability clustering environment
   Configuring a Windows NT cluster with the SDD installed
SDD server daemon
   Verifying that the SDD server has started
   Starting the SDD server manually
   Changing to a different port number for the SDD server
   Stopping the SDD server

Chapter 9. Using the SDD on a Windows 2000 host system
Verifying the hardware and software requirements
   Unsupported environments
   Disk storage system requirements
   Host system requirements
      SCSI requirements
      Fibre-channel requirements
Preparing for SDD installation
   Configuring the supported storage device
   Configuring fibre-channel adapters
   Configuring SCSI adapters for ESS devices
Installing SDD
   Installing for the first time
      Installing SDD from CD-ROM
      Installing SDD from downloaded code
   Upgrading SDD
   Displaying the current version of SDD
Configuring SDD
   Maximum number of LUNs
   Verifying the configuration
   Activating additional paths
   Verifying that additional paths are installed correctly
Uninstalling SDD
SAN boot support
   Booting from a SAN device with Windows 2000 and the SDD using Qlogic HBA <BIOS 1.43> or later
   Booting from a SAN device with Windows 2000 and the SDD using an Emulex HBA <Firmware v3.92a2, v1.90.x5> or later
   Limitations when you boot from a SAN boot device on a Windows 2000 host
   SAN boot disk migration
Support for Windows 2000 clustering
   Special considerations in the Windows 2000 clustering environment
   Configuring a Windows 2000 cluster with the SDD installed
      Making the MoveGroup Service startup type automatic
SDD server daemon
   Verifying if the SDD server has started
   Starting the SDD server manually
   Changing to a different port number for the SDD server
   Stopping the SDD server

Chapter 10. Using SDD on a Windows Server 2003 host system
Verifying the hardware and software requirements
   Unsupported environments
   Disk storage system requirements
   Host system requirements
      SCSI requirements
      Fibre-channel requirements
Preparing for SDD installation
   Configuring the supported storage device
   Configuring fibre-channel adapters
   Configuring SCSI adapters for ESS devices
Installing SDD
   Installing for the first time
      Installing SDD from CD-ROM
      Installing SDD from downloaded code
   Upgrading SDD
   Displaying the current version of the SDD
Configuring the SDD
   Maximum number of LUNs
   Verifying the configuration
   Activating additional paths
   Verifying that additional paths are installed correctly
Adding or modifying a multipath storage configuration to the supported storage device
   Adding new storage to an existing configuration
   Verifying that new storage is installed correctly
Uninstalling the SDD
SAN boot support
   Booting a SAN device with Windows Server 2003 and the SDD using Qlogic HBA <BIOS 1.43> or later
   Booting a SAN device with IA64-bit Windows Server 2003 and the SDD using a Qlogic HBA
   Booting from a SAN device with Windows Server 2003 and SDD using an EMULEX HBA <Firmware v3.92a2, v1.90.x5> or later
   SAN boot disk migration
Upgrading from a Windows NT host system to Windows Server 2003
Support for Windows Server 2003 clustering
   Special considerations in the Windows Server 2003 clustering environment
   Configure Windows 2003 cluster with the SDD installed
   Upgrading SDD in a two-node cluster environment
   Uninstalling SDD in a two-node cluster environment
SDD server daemon
   Verifying if the SDD server has started
   Starting the SDD server manually
   Changing to a different port number for the SDD server
   Stopping the SDD server

Chapter 11. Using SDDDSM on a Windows Server 2003 or Windows Server 2008 host system
Verifying the hardware and software requirements
   Unsupported environments
   Host system requirements
      Fibre-channel requirements
Preparing for SDDDSM installation
   Configuring the supported storage device
   Configuring fibre-channel adapters
Installing SDDDSM
   Installing for the first time
      Installing SDDDSM from CD-ROM
      Installing SDDDSM from downloaded code
   Upgrading SDDDSM
   Displaying the current version of SDDDSM
Configuring SDDDSM
   Maximum number of LUNs
   Verifying the configuration
   Activating additional paths
   Verifying that additional paths are installed correctly
Uninstalling SDDDSM
SAN boot support
   Remote boot support for 32-bit Windows Server 2003 or Windows Server 2008 using a QLogic HBA
   Booting from a SAN device with Windows Server 2003 or Windows Server 2008 and the SDD using an Emulex HBA
Support for Windows Server 2003 or Windows Server 2008 clustering
   Special considerations in the Windows Server 2003 clustering environment
   Configuring a Windows Server 2003 or Windows Server 2008 cluster with SDDDSM installed
   Removing SDDDSM in a two-node cluster environment
SDDDSM datapath command support
SDDDSM server daemon
   Verifying if the SDDDSM server has started
   Starting the SDDDSM server manually
   Changing to a different port number for the SDDDSM server
   Stopping the SDDDSM server

Chapter 12. Using the SDD server and the SDDPCM server
SDD server daemon
   Understanding how the SDD server daemon works
      Path reclamation
      Path probing
   sddsrv and the IBM TotalStorage Expert V.2.1.0
   sddsrv and IBM TotalStorage support for Geographically Dispersed Sites for Microsoft Cluster Service
SDDPCM server daemon
sddsrv.conf file format
pcmsrv.conf file format
Enabling or disabling the sddsrv or pcmsrv TCP/IP port
Changing the sddsrv or pcmsrv TCP/IP port number
Disabling the probing function in sddsrv
Changing the probing interval in sddsrv

Chapter 13. Using the datapath commands
datapath clear device count
datapath disable ports
datapath enable ports
datapath open device path
datapath query adapter
datapath query adaptstats
datapath query device
datapath query devstats
datapath query essmap
datapath query portmap
datapath query version
datapath query wwpn
datapath remove adapter
datapath remove device path
datapath set adapter
datapath set device policy
datapath set device path
datapath set qdepth

Appendix A. SDD, SDDPCM and SDDDSM data collection for problem analysis
Enhanced trace capability for the SDD and SDDDSM
   Using sddgetdata to collect information for problem determination
Enhanced trace capability for SDDPCM
   Using sddpcmgetdata to collect information for problem determination

Appendix B. System log messages
AIX error and information messages
SDDPCM error log messages
Windows log messages

Notices
Trademarks
IBM license agreement for machine code

Glossary

Index


Figures
1. Multipath connections between a host system and the disk storage in a disk storage system
2. Multipath connections between a host system and the disk storage with the SAN Volume Controller
3. SDDPCM in the protocol stack
4. Workload imbalance when one link receives twice the load of the other links
5. Workload imbalance when one link is more heavily loaded than another link
6. Workload imbalance when one host sharing workload across two paths loses one path
7. Example showing ESS devices to the host and path access to the ESS devices in a successful SDD installation on a Windows 2000 host system
8. Example showing ESS devices to the host and path access to the ESS devices in a successful SDD installation on a Windows Server 2003 host system
9. Example showing SAN Volume Controller devices to the host and path access to the SAN Volume Controller devices in a successful SDDDSM installation on a Windows Server 2003 host system


Tables
1. SDD platforms that are supported by supported storage devices
2. SDD in the protocol stack
3. Package-naming relationship between SDD 1.3.3.x and SDD 1.4.0.0 (or later)
4. SDD 1.4.0.0 (or later) installation packages for different AIX OS levels and the supported AIX kernel mode, application mode, and interface
5. Major files included in the SDD installation package
6. List of previously installed installation packages that are supported with the installation upgrade
7. Maximum LUNs allowed for different AIX OS levels
8. Recommended maximum paths supported for different number of LUNs on AIX 5.2 or later
9. Recommended SDD installation packages and supported HACMP modes for SDD versions earlier than SDD 1.4.0.0
10. Software support for HACMP 4.5 on AIX 4.3.3 (32-bit only), 5.1.0 (32-bit and 64-bit), 5.2.0 (32-bit and 64-bit kernel)
11. Software support for HACMP 4.5 on AIX 5.1.0 (32-bit and 64-bit), 5.2.0 (32-bit and 64-bit)
12. PTFs for APARs on AIX with fibre-channel support and the SDD server daemon running
13. SDD-specific SMIT panels and how to proceed
14. Commands
15. SDD installation scenarios
16. SDD components installed for HP-UX host systems
17. System files updated for HP-UX host systems
18. SDD commands and their descriptions for HP-UX host systems
19. Patches necessary for proper operation of SDD on HP-UX
20. SDD components for a Linux host system
21. Summary of SDD commands for a Linux host system
22. SDD installation scenarios
23. Operating systems and SDD package file names
24. SDD components installed for Solaris host systems
25. System files updated for Solaris host systems
26. SDD commands and their descriptions for Solaris host systems
27. Windows 2000 clustering SCSI-2 Reserve/Release and Persistent Reserve/Release support with MSCS
28. Commands


About this guide
The IBM® System Storage® Multipath Subsystem Device Driver (SDD) supports a storage-redundant configuration environment for a host system that is attached to storage devices. It provides enhanced data availability, dynamic input/output (I/O) load balancing across multiple paths, and automatic path-failover protection for the following host systems:
v IBM AIX®
v HP-UX
v Supported Linux distributions, levels, and architectures
v Novell Netware
v Sun Solaris
v Microsoft Windows NT
v Microsoft Windows 2000
v Microsoft Windows Server 2003
v Microsoft Windows Server 2008

The IBM System Storage Multipath Subsystem Device Driver Path Control Module (SDDPCM) provides AIX Multipath I/O (MPIO) support. It is a loadable module. During the configuration of supported storage devices, SDDPCM is loaded and becomes part of the AIX MPIO Fibre Channel protocol device driver. The AIX MPIO-capable device driver with the SDDPCM module provides the same functions that SDD provides.

The IBM System Storage Multipath Subsystem Device Driver Device Specific Module (SDDDSM) provides multipath I/O support for Windows platforms based on the MPIO technology provided by Microsoft.

Who should use this guide
This guide is intended for users who will install and configure IBM System Storage Multipath Subsystem Device Driver (SDD), IBM System Storage Multipath Subsystem Device Driver Path Control Module (SDDPCM), or IBM System Storage Multipath Subsystem Device Driver Device Specific Module (SDDDSM). The installer of SDD, SDDPCM, or SDDDSM should be:
v Familiar with the operating systems on which the driver will be installed
v Familiar with the storage systems on which the driver will be operated
v Familiar with the applications used with the driver
v Aware of the procedures in this document
v Aware of related installation and service publications

Summary of changes
This guide contains both information previously published in the IBM System Storage Multipath Subsystem Device Driver User's Guide and major technical changes to that information. All changes to this guide are marked with a vertical bar (|) in the left margin.

Note: For the last-minute changes that are not included in this guide, go to the SDD Web site at: www.ibm.com/servers/storage/support/software/sdd

New information
This edition provides updates in support of SDDPCM Version 3.0.x.x. SDDPCM 3.0.x.x adds functionality to support Open HyperSwap® replication. Open HyperSwap is a special Metro Mirror replication method that is designed to automatically failover I/O from primary logical devices to secondary logical devices in the event of a primary disk storage system failure.

Updates in support of Open HyperSwap include:
v The addition of the AE Daemon to the SDDPCM installation package, which is used for communication with Tivoli® Storage Productivity Center for Replication
v Updated device support information
v Updates to the SDDPCM Object Data Manager (ODM) attribute settings, to add support for the Open HyperSwap quiesce expire time of a device
v Information about dynamically enabling or disabling a path for Open HyperSwap devices
v The addition of the pcmpath query session command, which displays the session of the Open HyperSwap devices configured on the host
v Updates to the pcmpath query device information, including the configuration state of the paths or volumes (such as Dead, Invalid, Active, Open, Closed, Online, Offline, Available, Defined)
v The addition of two more trace files for SDDPCM: AE.log and AE_bak.log
v Additional SDDPCM error log messages

Changes to this edition also include:
v Updates to the information about starting and stopping the SDDPCM server manually
v An updated “License for Machine Code” in the “Notices” section

Command syntax conventions
This section describes the notational conventions that this book uses.

Highlighting conventions
The following typefaces are used to show emphasis:
boldface
   Text in boldface represents menu items and command names.
italics
   Text in italics is used to emphasize a word. In command syntax, it is used for variables for which you supply actual values.
monospace
   Text in monospace identifies the commands that you type, samples of command output, and examples of program code or messages from the system.

Special characters conventions
The following special character conventions are used in this book:

. the order number is SK2T-8803. About this guide xix .. These publications are also available as PDF files by clicking the Documentation link on the following ESS Web site: http://www-947. and . Unless otherwise noted.com/systems/support/ See “IBM Publications Center” on page xxiv for information about ordering these and other IBM publications.. See “How to send your comments” on page xxiv for information about how to send comments about the publications.. <> Indicate optional parameters.* asterisks Asterisks (*) are used as wildcard symbols to search for the beginning or remaining characters of the installation package name. these publications are available in Adobe portable document format (PDF) on a compact disc (CD) that comes with the ESS. .ibm.. the asterisks in the beginning and at the end of Sdd characters in the lslpp -l *Sdd* command are used as wildcard symbols to search for the characters ibm. Related information The tables in this section list and describe the following publications: v The publications for the IBM TotalStorage® Enterprise Storage Server® (ESS) library v The publications for the IBM System Storage DS8000® library v The publications for the IBM System Storage DS6000™ library v The publications for the IBM System Storage DS5000 and DS Storage Manager library v The publications for the IBM System Storage DS4000® library v The publications for the IBM System Storage SAN Volume Controller library v The publications for the IBM Tivoli Storage Productivity Center and IBM Tivoli Storage Productivity Center for Replication libraries v Other IBM publications that relate to the ESS v Non-IBM publications that relate to the ESS See “Ordering IBM publications” on page xxiv for information about how to order publications. | | The ESS library The following customer publications make up the ESS library. For example..rte. If you need additional copies of this CD. ellipsis Ellipsis indicates that more commands are in the next command line.

This guide provides instructions for setting up and operating the ESS and for analyzing problems. It balances the workload across paths. such as channel commands.Title IBM TotalStorage Enterprise Storage Server Copy Services Command-Line Interface Reference Description Order number SC26-7494 This guide describes the commands that you can use (See Note. and it transparently integrates with applications. This guide provides guidelines and work sheets for SC26-7477 planning the logical configuration of an ESS that attaches (See Note.) IBM TotalStorage This guide introduces the ESS product and lists the Enterprise Storage features you can order. and IBM Eserver iSeries® 400 hosts. The CLI application provides a set of commands that you can use to write customized scripts for a host system. You can use the CLI commands to indirectly control Peer-to-Peer Remote Copy (PPRC) and IBM FlashCopy® configuration tasks within an ESS Copy Services server group. GC26-7444 GC26-7229 SC26-7297 SC26-7637 IBM TotalStorage Enterprise Storage Server User's Guide SC26-7445 (See Note. This publication describes the functions of the ESS. IBM TotalStorage Enterprise Storage Server Configuration Planner for Open-Systems Hosts IBM TotalStorage Enterprise Storage Server Configuration Planner for S/390® and IBM Eserver zSeries® Hosts IBM TotalStorage Enterprise Storage Server Host Systems Attachment Guide This guide provides guidelines and work sheets for SC26-7476 planning the logical configuration of an ESS that attaches (See Note. SDD creates redundant paths for shared LUNs. IBM Application System/400® (IBM AS/400®). and error recovery procedures for UNIX. sense bytes. This guide provides guidelines for attaching the ESS to your host system and for migrating to fibre-channel attachment from either a Small Computer System Interface (SCSI) or from the IBM SAN Data Gateway. It also provides guidelines for Server Introduction planning the installation and configuration of the ESS. 
This publication describes how to use the IBM TotalStorage ESS Subsystem Device Driver (SDD) on open-systems hosts to enhance performance and availability on the ESS.) from the ESS Copy Services command-line interface (CLI) for managing your ESS configuration and Copy Services relationships. It provides reference information.) to open-systems hosts. The scripts initiate predefined tasks in an ESS Copy Services server application. SC26-7446 (See Note. and Planning Guide IBM TotalStorage Storage Solutions Safety Notices IBM TotalStorage Enterprise Storage Server SCSI Command Reference IBM TotalStorage Enterprise Storage Server Subsystem Device Driver User's Guide This publication provides translations of the danger notices and caution notices that IBM uses in ESS publications.) xx Multipath Subsystem Device Driver User's Guide . SDD permits applications to run without interruption when path errors occur.) to either the IBM S/390 or IBM Eserver zSeries host system.

ibmlink. The DS8000 library The following publications make up the IBM System Storage DS8000 library.com/systems/support/.2 Installation and Reference GC53-1127 SC26-7917 GC35-0516 IBM CIM Agent for DS Open Application Programming GC26-7929 Interface 5.ibm.cgi. Title IBM System Storage DS6000 Installation. GC35-0489 This reference provides information about the ESS IBM TotalStorage Enterprise Storage application programming interface (API). These publications are available from www. Server Application Programming Interface Reference Note: No hardcopy book is produced for this publication.com/public/applications/ publications/cgibin/pbi.elink.4. These publications are available from www.com/public/applications/ publications/cgibin/pbi. However. Troubleshooting.ibm.ibm.5 The DS6000 library The following publications make up the IBM System Storage DS6000 library.cgi.Title IBM TotalStorage Enterprise Storage Server Web Interface User's Guide IBM TotalStorage Common Information Model Agent for the Enterprise Storage Server Installation and Configuration Guide Description This guide provides instructions for using the two ESS Web interfaces: ESS Specialist and ESS Copy Services. Order number SC26-7448 (See Note. a PDF file is available from http://www-947. The CIM agent acts as an open-system standards interpreter.4. Title Order number IBM System Storage DS8000 Introduction and Planning GC35-0515 Guide IBM System Storage DS® Command-Line Interface User's Guide for the DS6000 series and DS8000 series IBM System Storage DS8000 Host Systems Attachment Guide IBM System Storage DS Application Programming Interface 5.1 and 5.ibmlink.elink. providing a way for other CIM-compliant storage resource management applications (IBM and non-IBM) to interoperate with each other. and Recovery Guide Order number GC26-7925 IBM System Storage DS6000 Introduction and Planning GC26-7924 Guide About this guide xxi .) 
This guide introduces the common interface model (CIM) GC35-0485 concept and provides instructions for installing and configuring the CIM agent.

x Installation and Host Support Guide IBM System Storage DS Storage Manager Version 10. These publications are available from www. DS5300 and EXP5000 IBM TotalStorage DS4300 Fibre Channel Storage Subsystem Installation.ibm.elink. These publications are available from www. User's and Maintenance Guide IBM System Storage EXP5000 Storage Expansion Enclosure Installation. Title Order number IBM System Storage DS4000 Storage Manager Concepts GC26-7734 Guide IBM System Storage DS4000 Storage Manager 10 Installation and Host Support Guide IBM System Storage DS4000 Storage Manager Copy Services Guide IBM System Storage DS4000 Storage Manager Fibre Channel and Serial ATA Intermix Premium Feature Installation Overview IBM System Storage DS4000 Hard Drive and Storage Expansion Enclosure Installation and Migration Guide GA76-0422 GC27-2172 GC26-7907 GC26-7849 xxii Multipath Subsystem Device Driver User's Guide .Title IBM System Storage DS6000 Host System Attachment Guide IBM System Storage DS6000 Messages Reference Order number GC26-7923 GC26-7920 IBM System Storage DS Command-Line Interface User's GC53-1127 Guide for the DS6000 series and DS8000 series IBM System Storage DS6000 Quick Start Guide GC26-7921 The DS5000 and DS Storage Manager library The following publications make up the IBM System Storage DS5000 and DS Storage Manager library.com/public/applications/publications/cgibin/pbi. and Maintenance Guide IBM System Storage DS Storage Manager Command-Line Programming Guide IBM System Storage DS5000 Quick Start Guide: Quick Reference for the DS5100.com/public/applications/ publications/cgibin/pbi. Title IBM System Storage DS Storage Manager Version 10.ibm.ibmlink. User’s.x Copy Services Guide IBM System Storage DS4000/DS5000 Fibre Channel and Serial ATA Intermix Premium Feature Installation Overview IBM System Storage DS5100 and DS5300 Installation.ibmlink.cgi.cgi.elink. 
and Maintenance Guide Order number GC53-1135 GC53-1136 GC53-1137 GC53-1140 GC53-1141 GC52-1275 GC53-1134 GC26-7722 The DS4000 library The following publications make up the IBM System Storage DS4000 library. User’s.

Unless otherwise noted. these publications are available in Adobe PDF from the following Web site: http://www. User’s. User's and Maintenance Guide IBM System Storage DS4800 Quick Start Guide IBM System Storage DS4800 Installation. User's and Maintenance Guide IBM System Storage DS4000 EXP810 Storage Expansion Enclosure Installation.Title IBM System Storage DS3000/DS4000 Command-Line Programming Guide IBM System Storage DS4000 EXP420 Storage Expansion Unit Installation. User's and Maintenance Guide IBM System Storage DS4800 Controller Cache Upgrade Kit Instructions GC27-2147 GC26-7843 GC27-2148 GC26-7845 GC26-7774 The SAN Volume Controller library The following publications make up the SAN Volume Controller library. and Maintenance Guide IBM System Storage DS4200/DS4700 Quick Start Guide IBM System Storage DS4700 Installation. User's and Maintenance Guide Order number GC52-1275 GC27-2050 GC26-7798 IBM TotalStorage DS4000 EXP700 and EXP710 Storage GC26-7735 Expansion Enclosures Installation.com/storage/support/2145 Title IBM System Storage SAN Volume Controller Planning Guide IBM System Storage SAN Volume Controller Model 2145-8A4 Hardware Installation Guide IBM System Storage SAN Volume Controller Model 2145-CF8 Hardware Installation Guide IBM System Storage SAN Volume Controller Software Installation and Configuration Guide IBM System Storage SAN Volume Controller Command-Line Interface User's Guide IBM System Storage SAN Volume Controller CIM Agent Developer's Guide IBM System Storage SAN Volume Controller Host Attachment Guide IBM System Storage SAN Volume Controller Troubleshooting Guide Order number GA32-0551 GC27-2219 GC52–1356 SC23-6628 SC26-7903 SC23-6665 SC26-7905 GC27-2227 IBM System Storage SAN Volume Controller Hardware GC27-2226 Maintenance Guide About this guide xxiii .ibm.

The Tivoli Storage Productivity Center and Tivoli Storage Productivity Center for Replication libraries

The following publications make up the Tivoli Storage Productivity Center and Tivoli Storage Productivity Center for Replication libraries. These publications are available from the following Web site:

http://www-05.ibm.com/e-business/linkweb/publications/servlet/pbi.wss

Title (Order number)
IBM Tivoli Storage Productivity Center and IBM Tivoli Storage Productivity Center for Replication Installation and Configuration Guide (SC27-2337)
IBM Tivoli Storage Productivity Center User's Guide (SC27-2338)
IBM Tivoli Storage Productivity Center Messages (SC27-2340)
IBM Tivoli Storage Productivity Center Command-Line Interface Reference (SC27-2339)
IBM Tivoli Storage Productivity Center Problem Determination Guide (GC27-2342)
IBM Tivoli Storage Productivity Center Workflow User's Guide (SC27-2341)

Ordering IBM publications

This section tells you how to order copies of IBM publications and how to set up a profile to receive notifications about new or changed publications.

IBM Publications Center

The Publications Center is a worldwide central repository for IBM product publications and marketing material. The IBM Publications Center offers customized search functions to help you find the publications that you need. Some publications are available for you to view or download at no charge. You can also order publications. The Publications Center displays prices in your local currency. You can access the IBM Publications Center through the following Web site:

http://www-05.ibm.com/e-business/linkweb/publications/servlet/pbi.wss

How to send your comments

Your feedback is important to help us provide the highest quality information. If you have any comments about this book, you can submit them in one of the following ways:
v E-mail
– Internet: starpubs@us.ibm.com
– IBMLink from U.S.A.: STARPUBS at SJEVM5
– IBMLink from Canada: STARPUBS at TORIBM
– IBM Mail Exchange: USIB3WD at IBMMAIL
Be sure to include the name and order number of the book and, if applicable, the specific location of the text you are commenting on, such as a page number or table number.
v Mail or fax

Fill out the Readers' Comments form (RCF) at the back of this book. Return it by mail or fax (1-800-426-6209), or give it to an IBM representative. If the RCF has been removed, you can address your comments to:

International Business Machines Corporation
RCF Processing Department
Department 61C
9032 South Rita Road
Tucson, Arizona 85775-4401
U.S.A.


Chapter 1. Overview of the SDD

The IBM System Storage Multipath Subsystem Device Driver (SDD) supports a storage-redundant configuration environment for a host system that is attached to the following devices:
v IBM TotalStorage Enterprise Storage Server (ESS)
v IBM System Storage DS8000
v IBM System Storage DS6000
v IBM System Storage SAN Volume Controller
v IBM System Storage DS5000
v IBM System Storage DS4000
v IBM BladeCenter S SAS RAID Controller Module (RSSM)
v IBM System Storage DS3950

This guide uses the following terminology:
v The phrase supported storage devices refers to the following types of devices:
– RSSM
– DS3950, DS4100 (AIX only), DS4200, DS4300, DS4500, DS4700, DS4800, DS5020, DS5100, DS5300, DS6000, and DS8000
– ESS
– SAN Volume Controller
v The phrase disk storage system refers to ESS, DS8000, or DS6000 devices.
v The phrase virtualization product refers to the SAN Volume Controller.
v The phrase DS4000 refers to DS4100 (AIX only), DS4200, DS4300, DS4500, DS4700, and DS4800 devices.
v The phrase DS5000 refers to DS5100 and DS5300 devices.
v The phrase RSSM refers to IBM BladeCenter S SAS RAID Controller Module devices.
v The phrase Open HyperSwap refers to Open HyperSwap replication.
v The phrase Open HyperSwap device refers to a pair of volumes that are managed in a Tivoli Productivity Center for Replication copy set.
v The phrase Open HyperSwap session refers to a collection of Tivoli Productivity Center for Replication managed copy sets.

Table 1 indicates the products that different SDD platforms support.

Table 1. SDD platforms that are supported by supported storage devices

Platform      Supported storage devices
AIX SDD       ESS, DS8000, DS6000, SAN Volume Controller
AIX SDDPCM    ESS, DS8000, DS6000, DS5000, DS4000, DS3950, RSSM, SAN Volume Controller
HP            ESS, DS8000, DS6000, SAN Volume Controller
Linux         ESS, DS8000, DS6000, SAN Volume Controller

Table 1. SDD platforms that are supported by supported storage devices (continued)

(The continuation of Table 1 covers the Novell, SUN, Windows NT SDD, Windows 2000 and Windows 2003 SDD, and Windows 2003 and Windows 2008 SDDDSM platforms against the same storage device columns: ESS, DS8000, DS6000, DS5000, DS4000, DS3950, RSSM, and SAN Volume Controller.)

The SDD supports a storage-redundant configuration environment for a host system that is attached to storage devices. It provides enhanced data availability, dynamic input/output (I/O) load-balancing across multiple paths, and automatic path failover protection.

This guide provides step-by-step procedures on how to install, configure, and use SDD features on the following host systems:
v IBM AIX (SDD and SDDPCM)
v HP-UX
v Supported Linux distributions, levels, and architectures. For up to date information about specific kernel levels supported in this release, see the Readme file on the CD-ROM or visit the SDD Web site:
www.ibm.com/servers/storage/support/software/sdd
v Novell Netware (disk storage systems only)
v Sun Solaris
v Microsoft Windows NT, Windows 2000, or Windows 2003 SDD
v Microsoft Windows Server 2003 or Windows Server 2008 SDD
v Microsoft Windows Server 2003 or Windows Server 2008 SDDDSM

The SDD architecture

The SDD is a software solution to support the multipath configuration environments in supported storage devices. It resides in a host system with the native disk device driver and provides the following functions:
v Enhanced data availability
v Dynamic input/output (I/O) load-balancing across multiple paths
v Automatic path failover protection
v Concurrent download of licensed machine code

Table 2 on page 3 shows the position of the SDD in the protocol stack. I/O operations that are sent to the SDD proceed to the host disk driver after path

selection. When an active path experiences a failure (such as a cable or controller failure), the SDD dynamically switches to another path.

Table 2. SDD in the protocol stack

The table shows, for each platform, the Subsystem Device Driver layered between the disk I/O interfaces (raw disk I/O, or Logical Volume Manager (LVM) I/O through the LVM device driver and file system) and the native drivers: the AIX SCSI/FCP disk driver and SCSI/FCP adapter driver, the Linux disk SCSI driver and SCSI adapter driver, the Sun Solaris disk driver and SCSI adapter driver, and the HP disk driver and SCSI adapter driver.

Table 2. SDD in the protocol stack (continued)

On the Windows platforms, system disk I/O passes through the Subsystem Device Driver and the Windows NT, Windows 2000, or Windows Server 2003 disk driver on its way to the adapter driver.

Each SDD vpath device represents a unique physical device on the storage server. Each physical device is presented to the operating system as an operating system disk device. There can be up to 32 operating system disk devices that represent up to 32 different paths to the same physical device. The SDD vpath devices behave almost like native operating system disk devices. You can use most disk device operations of operating systems on the SDD vpath devices, including commands such as open, close, dd, or fsck.

Enhanced data availability

Figure 1 on page 5 shows a host system that is attached through small computer system interface (SCSI) or fibre-channel adapters to a disk storage system that has internal component redundancy and multipath configuration. The SDD, residing in the host system, uses this multipath configuration to enhance data availability. That is, when there is a path failure, the SDD reroutes I/O operations from the failing

path to an alternate operational path. This capability prevents a single failing bus adapter on the host system, SCSI or fibre-channel cable, or host-interface adapter on the disk storage system from disrupting data access.

Figure 1. Multipath connections between a host system and the disk storage in a disk storage system

(The figure shows a host system with SCSI/FCP adapters 0 and 1 connected through ESS ports 0 and 1 to clusters 1 and 2, which serve LUNs 0 through 3.)

Figure 2 shows a host system that is attached through fibre-channel adapters to a SAN Volume Controller that has internal components for redundancy and multipath configuration. The SDD, residing in the host system, uses this multipath configuration to enhance data availability. That is, when there is a path failure, the SDD reroutes I/O operations from the failing path to an alternate operational path. This capability prevents a single failing bus adapter on the host system, fibre-channel cable, or host-interface adapter on the SAN Volume Controller from disrupting data access.

Figure 2. Multipath connections between a host system and the disk storage with the SAN Volume Controller

(The figure shows a host system with FCP adapters 0 and 1 connected through a fabric to storage device ports 0 and 1, which serve LUNs 0 through 3.)

Note: SAN Volume Controller does not support parallel SCSI attachment.

Dynamic I/O load balancing

By distributing the I/O workload over multiple active paths, the SDD provides dynamic load balancing and eliminates dataflow bottlenecks. In the event of failure in one data path, the SDD automatically switches the affected I/O operations to another active data path, ensuring path-failover protection.

Automatic path-failover protection

The SDD failover protection feature minimizes any disruptions in I/O operations and recovers I/O operations from a failing data path. The SDD provides path-failover protection using the following process:
v Detects a path failure.
v Notifies the host system of the path failure.
v Selects and uses an alternate data path.
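The detect-and-reselect behavior described above can be illustrated with a small sketch. The path table format, path names, and "first OPEN path carries I/O" rule below are simplifying assumptions for illustration only, not SDD's actual implementation:

```shell
# Minimal illustration of path-failover selection (assumed model, not SDD
# code): paths are "name state" lines, and the first OPEN path carries I/O,
# so marking the active path DEAD moves selection to an alternate path.
select_path() {
  printf '%s\n' "$1" | awk '$2 == "OPEN" { print $1; exit }'
}

paths_before='path0 OPEN
path1 OPEN'
paths_after='path0 DEAD
path1 OPEN'

echo "active before failure: $(select_path "$paths_before")"
echo "active after failure: $(select_path "$paths_after")"
```

Real SDD failover additionally notifies the host system and retries the failed I/O operation on the newly selected path.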

The SDD dynamically selects an alternate I/O path when it detects a software or hardware problem. Some operating system drivers report each detected error in the system error log. With the SDD automatic path-failover feature, some reported errors are actually recovered from an alternative path.

Concurrent download of licensed machine code for disk storage systems

With the SDD multipath mode (configured with at least two paths per multipath device), you can concurrently download and install the licensed machine code (LMC) while applications continue to run. For certain disk storage system LMC, the disk storage system I/O bay or tower are quiesced and resumed. Its adapters might not respond for the duration of the service action, which could be 30 minutes or more.

Note: The SDD does not support single-path mode during the concurrent download of LMC. Also, the SDD does not support single-path mode during any disk storage system concurrent maintenance that impacts the path attachment, such as a disk storage system host-bay-adapter replacement.

Attention: Do not shut down the host or reconfigure the SDD during the concurrent download of licensed machine code or you might lose your initial SDD configuration.

For information about performing the concurrent download of LMC for ESS, see the Configuration Guide for your specific type and model.

Concurrent download of licensed machine code for virtualization products

With the SDD multipath mode (configured with at least two paths per multipath device), you can concurrently download and install the licensed machine code while your applications continue to run. At least one path must be configured through each node of a virtualization product group. That is, if only two paths exist, they must go to separate nodes for each I/O group. However, at least two paths to each node are recommended.

During the code upgrade, each node of an I/O group is upgraded sequentially. The node that is being upgraded is temporarily unavailable, and all I/O operations to that node fail. However, failed I/O operations are directed to the other node of the I/O group, and applications do not see any I/O failures.

Attention: Do not shut down the host or reconfigure the SDD during the concurrent download of licensed machine code or you might lose your initial SDD configuration.

For information about performing concurrent download of LMC for virtualization products, see the microcode installation instructions for your specific type and model.
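Before starting a concurrent download, you can confirm that a device still meets the two-path requirement. The sketch below counts OPEN paths in saved datapath query device output; the sample lines and column positions are illustrative assumptions, not an SDD-supplied check:

```shell
# Count OPEN paths in a saved "datapath query device" listing (sample text;
# on a live host you would capture the real command output to a file first).
sample='0 fscsi0/path0 OPEN NORMAL 501876 0
1 fscsi1/path1 OPEN NORMAL 499575 0
2 fscsi1/path2 CLOSE OFFLINE 0 0'

open_paths=$(printf '%s\n' "$sample" | awk '$3 == "OPEN" { n++ } END { print n + 0 }')

if [ "$open_paths" -ge 2 ]; then
  echo "OK: $open_paths open paths; the multipath requirement is met"
else
  echo "WARNING: only $open_paths open path(s); do not start the download"
fi
```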

Concurrent download of licensed machine code for DS3950, DS4000 and DS5000

If you are using the SDD multipath mode, you can concurrently download and install the licensed machine code while your applications continue to run, as long as you configure redundant paths to each storage controller port in addition to the multiple host adapter ports.

Because switching a device to another controller is a time-consuming recovery action and affects I/O performance, you can use this redundancy to avoid an unnecessary controller failover if a path fails. Therefore, configure a minimum of four paths for each LUN with two host adapter ports and two storage controller ports where each host adapter port has redundancy to each storage controller port and vice versa.

Attention: Do not shut down the host or reconfigure the SDD during the concurrent download of licensed machine code or you might lose your initial SDD configuration.

Concurrent download of licensed machine code for IBM BladeCenter S SAS RAID Controller Module (RSSM)

With the SDD multipath mode (configured with two paths per multipath device), you can concurrently download and install the licensed machine code while your applications continue to run.

During the code upgrade, each RSSM node is upgraded sequentially. The node that is being upgraded is temporarily unavailable, and all I/O operations to that node fail. However, failed I/O operations are directed to the other RSSM node, and applications do not see any I/O failures.

Attention: Do not shut down the host or reconfigure the SDD during the concurrent download of licensed machine code or you might lose your initial SDD configuration.

Refer to RSSM documentation, at the following URL, for information about performing concurrent download of LMC for RSSM:

http://www.ibm.com/systems/support/supportsite.wss/docdisplay?lndocid=MIGR-5078491&brandind=5000020

Active/Passive dual array controller path-selection algorithm for DS3950, DS4000 and DS5000 products

The DS4000 and DS5000 products are dual array controller disk subsystems. Each LUN is assigned to one controller, which is considered the owner, or the active controller, of a particular LUN. The other controller is considered as an alternate, or passive, controller. Thus, the SDD distinguishes the following paths to the DS4000 and DS5000 product LUN:
v Paths on the ownership (active) controller
v Paths on the alternate (passive) controller

With this type of active/passive dual-controller subsystem device, I/O can be sent only to the ownership controller. When the SDD selects paths for I/O, it selects

paths that are connected only to the ownership controller. If there is no path on the ownership controller that can be used, SDD changes the LUN controller ownership to an alternate controller, switches the paths that were passive to active, and then selects these active paths for I/O.

Preferred node path-selection algorithm for the virtualization products

Virtualization products are two-controller disk subsystems. The SDD distinguishes the paths to a virtualization product LUN:
v Paths on the preferred controller
v Paths on the alternate controller

When the SDD selects paths for I/O, preference is always given to a path on the preferred controller. Therefore, in the selection algorithm, an initial attempt is made to select a path on the preferred controller. Only if no path can be used on the preferred controller is a path selected on the alternate controller. This means that the SDD automatically fails back to the preferred controller any time a path on the preferred controller becomes available during either manual or automatic recovery. Paths on the alternate controller are selected at random. If an error occurs and a path retry is required, retry paths are first selected on the preferred controller. If all retries fail on the preferred controller's paths, paths on the alternate controller are selected for retry. The following is the path selection algorithm for the SDD:
1. With all paths available, I/O is only routed to paths on the preferred controller.
2. If no path on the preferred controller is available, I/O failover processing to the alternate controller occurs.
3. When failover processing to the alternate controller has occurred, automatic I/O failback processing to the preferred controller occurs if a path on the preferred controller is available.

The following output of the datapath query device command shows that the preferred paths are being selected and shows the format of the virtualization product serial number.

DEV#: 0  DEVICE NAME: Disk0 Part0  TYPE: 2145  POLICY: OPTIMIZED
SERIAL: 6005676801800005F800000000000004
============================================================================
Path#    Adapter/Hard Disk            State    Mode      Select    Errors
0        Scsi Port4 Bus0/Disk0 Part0  OPEN     NORMAL    501876    0
1        Scsi Port4 Bus0/Disk0 Part0  OPEN     NORMAL    501238    0
2        Scsi Port4 Bus0/Disk0 Part0  OPEN     NORMAL    0         0
3        Scsi Port4 Bus0/Disk0 Part0  OPEN     NORMAL    0         0
4        Scsi Port5 Bus0/Disk0 Part0  OPEN     NORMAL    499575    0
5        Scsi Port5 Bus0/Disk0 Part0  OPEN     NORMAL    500698    0
6        Scsi Port5 Bus0/Disk0 Part0  OPEN     NORMAL    0         0
7        Scsi Port5 Bus0/Disk0 Part0  OPEN     NORMAL    0         0

Open HyperSwap Replication

Open HyperSwap replication is a special Metro Mirror replication method that is designed to automatically failover I/O from the primary logical devices to the secondary logical devices in the event of a primary disk storage system failure. This function can be done with minimal disruption to the applications that are using the logical devices. Open HyperSwap replication applies to both planned and unplanned replication sessions. When a session has Open HyperSwap enabled, an I/O error on the primary site automatically causes the I/O to switch to the secondary site without any user interaction, and with minimal application impact. In addition, while Open HyperSwap is enabled, the Metro Mirror session supports disaster recovery. If a write is successful on the primary site but is unable to get replicated on the secondary site, IBM Tivoli Storage Productivity Center for Replication suspends the entire set of data consistency checking, thus ensuring that a consistent copy of the data exists on the secondary site. If the system fails, this data might not be the latest data, but the data should be consistent and allow the user to manually switch host servers to the secondary site.

You can control Open HyperSwap from any system that is running IBM Tivoli Storage Productivity Center for Replication (AIX, Windows, Linux, or z/OS®). However, the volumes that are involved with Open HyperSwap must be attached to an AIX system that is connected to IBM Tivoli Storage Productivity Center for Replication.

With an Open HyperSwap device, SDD distinguishes the paths of the source volume from the paths of the target volume on an Open HyperSwap copy set. When SDD selects paths for I/O, it only selects paths that are connected to the source volume. If there is no path on the source volume that can be used, SDD will initiate the Open HyperSwap request to Tivoli Storage Productivity Center for Replication and work together to perform the swap. After the swap, SDD will select the target volume paths for I/O.

The following output of the pcmpath query device command shows that the target volume paths are being selected.

DEV#: 14  DEVICE NAME: hdisk14  TYPE: 2107900  ALGORITHM: Load Balance
SESSION NAME: session1
OS Direction: H1<-H2
==========================================================================
PRIMARY SERIAL: 25252520000
-----------------------------
Path#    Adapter/Path Name    State    Mode      Select    Errors
0        fscsi0/path0         OPEN     NORMAL    6091      0
1        fscsi0/path2         OPEN     NORMAL    6300      0
2        fscsi1/path4         OPEN     NORMAL    6294      0
3        fscsi1/path5         OPEN     NORMAL    6187      0

SECONDARY SERIAL: 34343430000 *
-----------------------------
Path#    Adapter/Path Name    State    Mode      Select    Errors
4        fscsi0/path1         OPEN     NORMAL    59463     0
5        fscsi0/path3         OPEN     NORMAL    59250     0
6        fscsi1/path6         OPEN     NORMAL    59258     0
7        fscsi1/path7         OPEN     NORMAL    59364     0

In the preceding example, the source volume is on site one, and the target volume is on site two. The primary serial is the serial number of the volume on site one, and the secondary serial is the serial number of the volume on site two. Note: The primary serial is not always the source volume. The output shows that after the swap from site one to site two, SDD selects paths of devices on site two.
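The swap in the preceding example can be spotted mechanically: after a swap, the Select counters grow only under one serial. The parsing below runs against an abbreviated, illustrative sample of such output and is a sketch, not an SDD-supplied tool:

```shell
# From a saved "pcmpath query device" listing, report which serial's paths
# have accumulated the most Select counts (abbreviated sample below).
sample='PRIMARY SERIAL: 25252520000
0 fscsi0/path0 OPEN NORMAL 6091 0
SECONDARY SERIAL: 34343430000
4 fscsi0/path1 OPEN NORMAL 59463 0'

active=$(printf '%s\n' "$sample" | awk '
  /SERIAL:/ { serial = $NF; next }
  { sel[serial] += $5 }
  END {
    best = ""
    for (s in sel) if (best == "" || sel[s] > sel[best]) best = s
    print best
  }')

echo "most I/O has been selected on serial $active"
```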

Chapter 2. Using the SDD on an AIX host system

This chapter provides step-by-step procedures for installing, upgrading, configuring, and removing the SDD on an AIX host system that is attached to a supported storage device. Beginning with SDD 1.4.0.5, the SDD supports the coexistence of ESS and SAN Volume Controller devices. Beginning with SDD 1.6.0.0, the SDD supports the coexistence of all supported storage devices. For updated and additional information that is not included in this chapter, see the Readme file on the CD-ROM or visit the SDD Web site:

www.ibm.com/servers/storage/support/software/sdd

Supported SDD features

The following SDD features are supported in this release:
v 32- and 64-bit kernels
v Support for ESS, DS8000, DS6000 and virtualization products
v Preferred node path-selection algorithm for DS6000 and virtualization products
v Changing the SDD path-selection algorithm dynamically. Five path-selection algorithms are supported:
– Failover
– Round robin
– Round robin sequential
– Load balancing
– Load balancing sequential
v Dynamically adding paths to the SDD vpath devices
v Dynamically opening an invalid or close_dead path
v Dynamically removing or replacing PCI adapters or paths
v Fibre-channel dynamic device tracking
v SDD server daemon support
v Support for HACMP™
v Support for secondary-system paging
v Support for load-balancing and failover protection for AIX applications and LVM
v SDD utility programs
v Support for SCSI-3 persistent reserve functions
v Support for AIX trace functions
v Support more than 512 SAN Volume Controller devices from multiple SAN Volume Controller clusters on an AIX host
v Storage I/O priority feature in DS6000 and DS8000, only with AIX53 TL04 or later and with 64-bit kernel
v Two types of reserve policies: No reserve and Persistent reserve exclusive host
v General Parallel File System (GPFS™)
v Virtual I/O Server with AIX 5.
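As one concrete illustration of the load-balancing algorithm named in the list above, a path selector can favor the path with the fewest in-flight operations. The path names and counts below are hypothetical, and this is a simplified model for illustration, not the SDD implementation:

```shell
# Simplified load-balancing pick (assumed model): choose the path with the
# fewest outstanding I/O operations from "name count" lines.
pick_least_busy() {
  printf '%s\n' "$1" | sort -k2,2n | head -n 1 | cut -d' ' -f1
}

load='path0 12
path1 3
path2 7'

pick_least_busy "$load"   # prints path1
```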

3 or later
v Dual Virtual I/O Server with AIX 5.3 or later

For more information about Virtual I/O Server, go to the following Web site:

http://www14.software.ibm.com/webapp/set2/sas/f/vios/documentation/home.html

Verifying the hardware and software requirements

You must install the following hardware and software components to ensure that the SDD installs and operates successfully.

Hardware

The following hardware components are needed:
v One or more supported storage devices
v A switch if using a SAN Volume Controller (no direct attachment allowed for SAN Volume Controller)
v Host system
v SCSI adapters and cables (for ESS only)
v Fibre-channel adapters and cables

Software

The following software components are needed:
v AIX operating system. Starting with SDD 1.6.2.0, the SDD package for AIX 5.2 (devices.sdd.52.rte) requires AIX52 TL08 or later and the SDD package for AIX 5.3 (devices.sdd.53.rte) requires AIX53 TL04 or later.
v SCSI and fibre-channel device drivers
v ibm2105.rte package for ESS devices (devices.scsi.disk.ibm2105.rte or devices.fcp.disk.ibm2105.rte package if using NIM)
v devices.fcp.disk.ibm.rte for DS8000, DS6000, and SAN Volume Controller

Packages for SDD 1.4.0.0 (and later) will be using new package names in order to comply with AIX packaging rules and allow for NIM installation. Table 3 shows the package-naming relationship between SDD 1.3.3.x and SDD 1.4.0.0 (or later).

Table 3. Package-naming relationship between SDD 1.3.3.x and SDD 1.4.0.0 (or later)

SDD 1.3.3.x            SDD 1.4.0.0 (or later)   Notes®
ibmSdd_432.rte         Not applicable           Obsolete. This package has been merged with devices.sdd.43.rte.
ibmSdd_433.rte         devices.sdd.43.rte       Not applicable
ibmSdd_510.rte         Not applicable           Obsolete. This package has been merged with devices.sdd.51.rte.
ibmSdd_510nchacmp.rte  devices.sdd.51.rte       Not applicable
Not applicable         devices.sdd.52.rte       New package for AIX 5.2.0 (or later).
Not applicable         devices.sdd.53.rte       New package for AIX 5.3.0 (or later). Starting with SDD 1.6.0.0, the SDD package for AIX 5.3 (devices.sdd.53.rte) requires AIX53 TL04 with APAR IY76997. Starting with SDD 1.6.2.0, it requires AIX53 TL04 or later.
Not applicable         devices.sdd.61.rte       New package for AIX 6.1.0 (or later).

Notes:
1. SDD 1.4.0.0 (or later) no longer releases separate packages for concurrent and nonconcurrent High Availability Cluster Multiprocessing (HACMP). Both concurrent and nonconcurrent HACMP functions are now incorporated into one package for each AIX kernel level.
2. A persistent reserve issue arises when migrating from SDD to non-SDD volume groups after a system reboot. This special case only occurs if the volume group was varied on prior to the reboot and auto varyon was not set when the volume group was created. See “Understanding the persistent reserve issue when migrating from SDD to non-SDD volume groups after a system reboot” on page 72 for more information.

Unsupported environments

The SDD does not support:
v A host system with both a SCSI and fibre-channel connection to a shared ESS logical unit number (LUN)
v Placing system primary paging devices (for example, /dev/hd6) on an SDD vpath device
v Any application that depends on a SCSI-2 reserve and release device on AIX
v Single-path mode during concurrent download of licensed machine code nor during any disk storage systems concurrent maintenance that impacts the path attachment, such as a disk storage systems host-bay-adapter replacement
v Multipathing to a system boot device
v Configuring the SDD vpath devices as system primary or secondary dump devices
v More than 600 SDD vpath devices if the host system is running AIX 4.3.3 or AIX 5.1.0
v More than 1200 SDD vpath devices if the host system is running AIX 5.2, AIX 5.3, or AIX 6.1
v DS8000, DS6000, and SAN Volume Controller with SCSI connectivity
v Multiple AIX servers without the SDD-supported clustering software, such as HACMP, installed

Host system requirements

To successfully install the SDD for supported storage devices, you must have AIX 4.3.3, AIX 5.1.0, AIX 5.2, AIX 5.3, or AIX 6.1 installed on your host system.
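Table 3's naming scheme maps AIX levels to SDD package names. A small helper like the one below, a hypothetical convenience rather than part of SDD, can pick the package for the level reported by oslevel:

```shell
# Map an AIX level (as printed by "oslevel", for example 5.3.0.0) to the
# SDD package name from Table 3. The mapping mirrors the table; the helper
# itself is illustrative.
sdd_package() {
  case "$1" in
    4.3.*) echo devices.sdd.43.rte ;;
    5.1.*) echo devices.sdd.51.rte ;;
    5.2.*) echo devices.sdd.52.rte ;;
    5.3.*) echo devices.sdd.53.rte ;;
    6.1.*) echo devices.sdd.61.rte ;;
    *) echo "unsupported AIX level: $1" >&2; return 1 ;;
  esac
}

sdd_package 5.3.0.0   # prints devices.sdd.53.rte
```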

Disk storage systems requirements

To successfully install the SDD, ensure that the disk storage system devices are configured as:
v For ESS:
– IBM 2105xxx (SCSI-attached device), where xxx represents the disk storage system model number
– IBM FC 2105 (fibre-channel-attached device)
v For DS8000, IBM FC 2107
v For DS6000, IBM FC 1750

Virtualization products requirements

To successfully install the SDD, ensure that the SAN Volume Controller devices are configured either as SAN Volume Controller Device or FC 2145.

SCSI requirements for ESS

To use the SDD SCSI support for ESS, ensure that your host system meets the following requirements:
v The bos.adt package is installed. The host system can be a single processor or a multiprocessor system, such as Symmetric Multi-Processor (SMP).
v A SCSI cable connects each SCSI host adapter to an ESS port.
v If you need the SDD input/output (I/O) load-balancing and failover features, ensure that a minimum of two SCSI adapters are installed.

You must check for and download the latest authorized program analysis reports (APARs), maintenance-level fixes, and microcode updates from the following Web site:

www.ibm.com/servers/eserver/support/unixservers/aixfixes.html

For information about the SCSI adapters that can attach to your AIX host system, go to the following Web site:

www.ibm.com/servers/eserver/support/unixservers/index.html

Fibre requirements

You must check for and download the latest fibre-channel device driver APARs, maintenance-level fixes, and microcode updates from the following Web site:

www-03.ibm.com/servers/eserver/support/unixservers/index.html

Notes:
1. If your host has only one fibre-channel adapter, it requires you to connect through a switch to multiple disk storage system ports. You must have at least two fibre-channel adapters to prevent data loss due to adapter hardware failure or software failure.
2. The SAN Volume Controller always requires that the host be connected through a switch. For more information, see the IBM System Storage SAN Volume Controller Model 2145-8A4 Hardware Installation Guide.

For information about the fibre-channel adapters that can be used on your AIX host system, go to the following Web site:

www.ibm.com/servers/storage/support
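The two-adapter recommendation above can be double-checked from saved adapter listings. The sample text below is an assumed example of lsdev -C -c adapter output; on a live AIX host you would pipe the command directly:

```shell
# Count fibre-channel adapters in a saved "lsdev -C -c adapter" listing
# (sample text below is an assumed example of the output format).
sample='fcs0 Available 1Z-08 FC Adapter
fcs1 Available 2Z-08 FC Adapter
ent0 Available 1L-08 Ethernet Adapter'

fc_count=$(printf '%s\n' "$sample" | grep -c 'FC Adapter')

if [ "$fc_count" -ge 2 ]; then
  echo "adapter redundancy OK ($fc_count fibre-channel adapters)"
else
  echo "WARNING: only $fc_count fibre-channel adapter(s) found"
fi
```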

To use the SDD fibre-channel support, ensure that your host system meets the following requirements:
v The AIX host system is an IBM RS/6000® or IBM System p® with AIX 4.3.3 (or later).
v The AIX host system has the fibre-channel device drivers installed along with all latest APARs.
v The bos.adt package is installed. The host system can be a single processor or a multiprocessor system, such as SMP.
v A fiber-optic cable connects each fibre-channel adapter to a disk storage system port.
v A fiber-optic cable connects each SAN Volume Controller fibre-channel adapter to a switch. The switch must also be configured correctly.
v If you need the SDD I/O load-balancing and failover features, ensure that a minimum of two paths to a device are attached.

With a single-path configuration, failover protection is not provided.

Preparing for the SDD installation

Before you install the SDD, you must perform the tasks identified in the following sections:
1. Configuring the disk storage system
2. Configuring the virtualization products
3. Installing the AIX fibre-channel device drivers
4. Installing and upgrading the AIX SDD host attachment
5. Configuring fibre-channel-attached devices
6. Verifying the adapter firmware level
7. Determining if the sddServer for Expert is installed
8. Determining the installation package
9. Determining the installation type

Note: The SDD allows the manual exclusion of supported devices from the SDD configuration. If you want to manually exclude supported devices (hdisks) from the SDD configuration, you must use the excludesddcfg command before configuring the SDD vpath devices. The excludesddcfg command reads the unique serial number of a device (hdisk) and saves the serial number in an exclude file. For detailed information about the excludesddcfg command, see “Manual exclusion of devices from the SDD configuration” on page 52.

Configuring the disk storage system

Before you install the SDD, you must configure:
v The disk storage system to your host system
v A minimum of two independent paths that share the same logical unit

With a single-path configuration, failover protection is not provided. For more information about how to configure your disk storage system, see the Introduction and Planning Guide.

Note: Ensure that the correct host attachment package for your storage device is installed on the host system.

Configuring the virtualization products

Before you install the SDD, you must configure:
v The virtualization product to your host system
v SDD requires a minimum of two independent paths that share the same logical unit

With a single-path configuration, failover protection is not provided. For more information about how to configure your SAN Volume Controller, see the IBM System Storage SAN Volume Controller Software Installation and Configuration Guide.

Note: Ensure that the devices.fcp.disk.ibm.rte host attachment package is installed before configuring the virtualization product.

Installing the AIX fibre-channel device drivers

You must check for the latest information on fibre-channel device driver APARs, maintenance-level fixes, and microcode updates at the following Web site:

www-1.ibm.com/servers/storage/support/

The fibre-channel device drivers include the following installation packages:

devices.pci.df1000f7
The adapter device driver for RS/6000 or IBM System p with feature code 6227.

devices.pci.df1000f9
The adapter device driver for RS/6000 or IBM System p with feature code 6228.

devices.pci.df1080f9
The adapter device driver for RS/6000 or IBM System p with feature code 6239.

devices.common.IBM.fc
The FCP protocol driver.

devices.fcp.disk
The FCP disk driver.

Perform the following steps to install the AIX fibre-channel device drivers from the AIX compact disk:
1. Log in as the root user.
2. Load the compact disc into the CD-ROM drive.
3. From your desktop command prompt, enter smitty install_update and press Enter to go directly to the installation panels. The Install and Update Software menu is displayed.
4. Select Install Software and press Enter. The Install Software panel is displayed.
5. Press F4 to display the INPUT Device/Directory for Software panel.
6. Select the compact disc drive that you are using for the installation, for example, /dev/cd0, and press Enter.
7. Press Enter again. The Software to Install panel is displayed.
8. Select Software to Install and press F4. All of the software that is installed is displayed.
9. Select each fibre-channel device driver package by highlighting it and pressing F7.

10. Press Enter. The Install and Update from LATEST Available Software panel is displayed with the name of the software you selected to install.
11. Check the default option settings to ensure that they are what you need.
12. Press Enter to install. SMIT responds with the following message:

    ARE YOU SURE??
    Continuing may delete information you may want to keep.
    This is your last chance to stop before continuing.

13. Press Enter to continue. The installation process can take several minutes to complete.
14. When the installation is complete, press F10 to exit from SMIT. Remove the compact disc.
15. Check to see if the correct APARs are installed by issuing the following command:

    instfix -i | grep IYnnnnn

    where nnnnn represents the APAR numbers. If the APARs are listed, that means that they are installed. If they are installed, go to "Configuring fibre-channel-attached devices" on page 18. Otherwise, go to step 3.
16. Repeat steps 1 through 14 to install the APARs.

Uninstalling the AIX fibre-channel device drivers

The following steps describe how to uninstall the AIX fibre-channel device drivers. There are two methods for uninstalling all of your fibre-channel device drivers:
v smitty deinstall command
v installp command

Using the smitty deinstall command

Perform the following steps to use the smitty deinstall command:
1. Enter smitty deinstall at the AIX command prompt and press Enter. The Remove Installed Software panel is displayed.
2. Press F4. All of the software that is installed is displayed.
3. Select the file name of the fibre-channel device driver that you want to uninstall. Press Enter. The selected file name is displayed in the Software Name field of the Remove Installed Software panel.
4. Use the Tab key to toggle to No in the PREVIEW Only? field. Press Enter. The uninstallation process begins.

Using the installp command

Perform the following steps to use the installp command from the AIX command line:
1. Enter installp -ug devices.pci.df1000f9 and press Enter.
2. Enter installp -ug devices.pci.df1000f7 and press Enter.
3. Enter installp -ug devices.pci.df1080f9 and press Enter.
4. Enter installp -ug devices.common.IBM.fc and press Enter.
5. Enter installp -ug devices.fcp.disk and press Enter.
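The five installp commands above can be collected into one loop. The following is a minimal sketch, not part of the original procedure: because installp exists only on AIX, it defaults to a dry run that only prints each command; set DRY_RUN=0 on a real AIX host to actually run them.

```shell
#!/bin/sh
# Dry-run wrapper for the five "installp -ug" uninstall commands above.
# DRY_RUN=1 (the default) prints the commands instead of executing them,
# since installp is available only on AIX.
DRY_RUN=${DRY_RUN:-1}

uninstall_fc_drivers() {
    for pkg in devices.pci.df1000f9 devices.pci.df1000f7 \
               devices.pci.df1080f9 devices.common.IBM.fc \
               devices.fcp.disk; do
        if [ "$DRY_RUN" -eq 1 ]; then
            echo "installp -ug $pkg"
        else
            installp -ug "$pkg" || return 1
        fi
    done
}

uninstall_fc_drivers
```

Running the sketch with the default DRY_RUN=1 prints the five commands in the same order as the numbered steps above.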

Installing and upgrading the AIX SDD host attachment

The host attachments for SDD add 2105, 2145, 2107, or 1750 device information to allow AIX to properly configure 2105, 2145, 2107, or 1750 hdisks. This device information allows AIX to perform the following actions:
v Identify the hdisks as 2105, 2145, 2107, or 1750 hdisks.
v Set default hdisk attributes, such as queue_depth and timeout values.
v Indicate to the AIX device driver configure method to configure the hdisk as a non-MPIO-capable device.

The following names are AIX SDD Host Attachment packages:
v ibm2105.rte.tar for 2105 devices. Also download the SCSI (devices.scsi.disk.ibm2105.rte.tar) or FC (devices.fcp.disk.ibm2105.rte.tar) ESS NIM support add on if you are using NIM.
v devices.fcp.disk.ibm.rte.tar for 2145, 2107, and 1750 devices. A NIM support add on is not necessary for the SAN Volume Controller, DS8000, or DS6000 Host Attachment.

Note: SDD 1.6.2.0 (or later) on AIX 5.2 or AIX 5.3 requires one or both of the following packages:
v ibm2105.rte 32.6.100.27 (or later)
v devices.fcp.disk.ibm.rte 1.0.0.6 (or later)

See the SDD Readme file for the latest Host Attachment level for the SDD version that you are installing. For AIX host attachment installation and upgrade instructions, see the Host Systems Attachment Guide for your storage system.

Configuring fibre-channel-attached devices

The newly installed fibre-channel-attached devices must be configured before you can use them. Use one of the following commands to configure these devices:
v cfgmgr command
  Note: In AIX 4.3 and 5.1, you might need to issue the cfgmgr command once for each host adapter each time you add a new device. After the command prompt appears, use the lsdev -Cc disk command to check the Fibre Channel Protocol (FCP) disk configuration. If the FCP devices are configured correctly, they are in the Available state, and you can go to "Verifying the adapter firmware level" on page 19 to determine if the proper firmware level is installed.
v shutdown -rF command to restart the system.
  After the system restarts, use the lsdev -Cc disk command to check the FCP disk configuration. If the FCP devices are configured correctly, they are in the Available state, and you can go to "Verifying the adapter firmware level" on page 19 to determine if the proper firmware level is installed.

Removing fibre-channel-attached devices

To remove all fibre-channel-attached devices, you must enter the following command for each installed FCP adapter:

rmdev -dl fcsN -R

where N is the FCP adapter number.
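The per-adapter rmdev command above can be generated with a short loop. This is an illustrative sketch, not part of the original procedure: it only prints the commands, because rmdev exists only on AIX, and the adapter count is passed as an argument rather than discovered with lsdev.

```shell
#!/bin/sh
# Print (not run) one "rmdev -dl fcsN -R" command per installed FCP
# adapter. On a real AIX host the adapter list would come from
# "lsdev -Cc adapter"; here the count is supplied as an argument so the
# sketch can run anywhere.
print_rmdev_cmds() {
    n=0
    while [ "$n" -lt "$1" ]; do
        echo "rmdev -dl fcs$n -R"
        n=$((n + 1))
    done
}

print_rmdev_cmds 2
```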

For example, if you have two installed FCP adapters (adapter 0 and adapter 1), you must enter both of the following commands:

rmdev -dl fcs0 -R
rmdev -dl fcs1 -R

Verifying the adapter firmware level

You must verify that your current adapter firmware is at the latest level. If your current adapter firmware is not at the latest level, you must upgrade to a new adapter firmware (microcode). To check the current supported firmware level for fibre-channel adapters, go to the following Web site:

http://www-933.ibm.com/support/fixcentral/

Tip:
v The current firmware level for the FC 6227 adapter is 3.22A1.
v The current firmware level for the FC 6228 adapter is 3.93A0.
v The current firmware level for the FC 6239 adapter is 1.92a1.

Perform the following steps to verify the firmware level that is currently installed:
1. Enter the lscfg -vl fcsN command. The vital product data for the adapter is displayed.
2. Look at the ZB field. The ZB field should look similar to:

   Device Specific.(ZB)........S2F3.30X1

   To verify the firmware level, ignore the first three characters in the ZB field. In the example, the firmware level is 3.30X1.
3. If the adapter firmware level is at the latest level, there is no need to upgrade; otherwise, the firmware level must be upgraded. For instructions on upgrading the firmware level, see the description for each firmware at:

   http://www-933.ibm.com/support/fixcentral/

Determining if the sddServer for Expert is installed

If you previously installed the stand-alone version of the sddServer for IBM TotalStorage Expert V2R1 (ESS Expert) on your AIX host system, you must remove this stand-alone version of sddServer before you proceed with the SDD 1.3.3.9 (or later) installation. The installation package for SDD 1.3.3.9 (or later) includes the SDD server daemon (also referred to as sddsrv), which incorporates the functionality of the stand-alone version of sddServer (for ESS Expert).

To determine if the stand-alone version of sddServer is installed on your host system, enter:

lslpp -l sddServer.rte

If you previously installed the sddServer.rte package, the output from the lslpp -l sddServer.rte command looks similar to this:

Fileset                  Level    State      Description
Path: /usr/lib/objrepos
  sddServer.rte          1.0.0.0  COMMITTED  IBM SDD Server for AIX
Path: /etc/objrepos
  sddServer.rte          1.0.0.0  COMMITTED  IBM SDD Server for AIX

For instructions on how to remove the stand-alone version of sddServer (for ESS Expert) from your AIX host system, see the IBM Subsystem Device Driver Server 1.0.0.0 (sddsrv) README for IBM TotalStorage Expert V2R1 at the following Web site:

www.ibm.com/servers/storage/support/software/swexpert/

For more information about the SDD server daemon, go to "SDD server daemon" on page 67.

Understanding SDD support for IBM System p with static LPARs configured

The IBM System p server supports static LPARs as a standard feature, and users can partition them if they choose to do so. Each LPAR is composed of one or more processors, some dedicated memory, and dedicated I/O adapters. Each partition has an instance of an operating system and does not share IBM System p hardware resources with any other partition, so each partition functions the same way that it does on a stand-alone system. Storage subsystems need to be shared the same way that they have always been shared (shared storage pool, shared ports into the storage subsystem, and shared data on concurrent mode) where the application is capable of sharing data.

If a partition has multiple fibre-channel adapters that can see the same LUNs in a supported storage device, the path optimization can be performed on those adapters in the same way as in a stand-alone system. When the adapters are not shared with any other partitions, SCSI reservation, persistent reserve, and LUN level masking operate as expected (by being "bound" to an instance of the operating system). The SDD provides the same functions on one of the partitions or LPARs of an IBM System p server as it does on a stand-alone server.

Determining the installation package

Before you install the SDD on your AIX host system (4.3.3 or later), you need to determine the installation package that is appropriate for your AIX OS level. See Table 4 to determine the correct installation package.

Installation packages for 32-bit and 64-bit applications on AIX 4.3.3 (or later) host systems

Table 4. SDD 1.4.0.0 (or later) installation packages for different AIX OS levels and the supported AIX kernel mode, application mode, and interface

SDD installation package names   AIX OS level   AIX kernel mode   Application mode   SDD interface
devices.sdd.43.rte               AIX 4.3 (1)    32-bit            32-bit, 64-bit     LVM, raw device

devices.sdd.51.rte               AIX 5.1        32-bit, 64-bit    32-bit, 64-bit     LVM, raw device
devices.sdd.52.rte               AIX 5.2        32-bit, 64-bit    32-bit, 64-bit     LVM, raw device
devices.sdd.53.rte               AIX 5.3        32-bit, 64-bit    32-bit, 64-bit     LVM, raw device
devices.sdd.61.rte               AIX 6.1 (2)    64-bit            32-bit, 64-bit     LVM, raw device

Note:
1. devices.sdd.43.rte is supported only by the ESS and virtualization products.
2. devices.sdd.61.rte supports only the 64-bit kernel.

Switching between 32-bit and 64-bit modes on AIX 5.1.0, AIX 5.2.0, and AIX 5.3.0 host systems

SDD supports AIX 5.1.0, AIX 5.2.0, and AIX 5.3.0 host systems that run in both 32-bit and 64-bit kernel modes. You can use the bootinfo -K or ls -al /unix command to check the current kernel mode in which your AIX 5.1.0, 5.2.0, or 5.3.0 host system is running. The bootinfo -K command directly returns the kernel mode information of your host system. The ls -al /unix command displays the /unix link information. If the /unix links to /usr/lib/boot/unix_mp, your AIX host system runs in 32-bit mode. If the /unix links to /usr/lib/boot/unix_64, your AIX host system runs in 64-bit mode.

If your host system is currently running in 64-bit mode, you can switch it to 32-bit mode by typing the following commands in the given order:

ln -sf /usr/lib/boot/unix_mp /unix
ln -sf /usr/lib/boot/unix_mp /usr/lib/boot/unix
bosboot -ak /usr/lib/boot/unix_mp
shutdown -Fr

The kernel mode of your AIX host system is switched to 32-bit mode after the system restarts.

If your host system is currently running in 32-bit mode, you can switch it to 64-bit mode by typing the following commands in the given order:

ln -sf /usr/lib/boot/unix_64 /unix
ln -sf /usr/lib/boot/unix_64 /usr/lib/boot/unix
bosboot -ak /usr/lib/boot/unix_64
shutdown -Fr

The kernel mode of your AIX host system is switched to 64-bit mode after the system restarts.

Installation of major files on your AIX host system

The SDD installation package installs a number of major files on your AIX system. Table 5 on page 22 lists the major files that are part of the SDD installation package.
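As an aside to the "Switching between 32-bit and 64-bit modes" discussion above, the ls -al /unix check can be sketched as a small helper. This is an illustrative sketch only: it is demonstrated on a scratch symlink so it can run anywhere, and it uses readlink, which AIX may not provide (on a real AIX host you would inspect /unix itself with ls -al or bootinfo -K).

```shell
#!/bin/sh
# Classify the AIX kernel mode from the /unix symlink target, mirroring
# the "ls -al /unix" check described above.
kernel_mode() {
    case "$(readlink "$1")" in
        */unix_64) echo "64-bit" ;;
        */unix_mp) echo "32-bit" ;;
        *)         echo "unknown" ;;
    esac
}

# Demonstration with a temporary link (not a real /unix).
tmp=$(mktemp -d)
ln -s /usr/lib/boot/unix_64 "$tmp/unix"
kernel_mode "$tmp/unix"
rm -r "$tmp"
```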

Table 5. Major files included in the SDD installation package

File name           Description
defdpo              Define method of the SDD pseudo-parent data path optimizer (dpo).
cfgdpo              Configure method of the SDD pseudo-parent dpo.
define_vp           Define method of the SDD vpath devices.
addpaths            The command that dynamically adds more paths to SDD vpath devices while they are in the Available state.
cfgvpath            Configure method of the SDD vpath devices.
chgvpath            Method to change vpath attributes.
cfallvpath          Fast-path configuration method to configure the SDD pseudo-parent dpo and all SDD vpath devices.
vpathdd             The SDD device driver.
hd2vp               The SDD script that converts an hdisk device volume group to an SDD vpath device volume group.
vp2hd               The SDD script that converts an SDD vpath device volume group to an hdisk device volume group.
datapath            The SDD driver console command tool.
lquerypr            The SDD driver persistent reserve command tool.
lsvpcfg             The SDD driver query configuration state command.
excludesddcfg       The SDD driver tool to exclude user-specified hdisk devices from the SDD vpath configuration.
mkvg4vp             The command that creates an SDD volume group.
extendvg4vp         The command that extends the SDD vpath devices to an SDD volume group.
dpovgfix            The command that fixes an SDD volume group that has mixed vpath and hdisk physical volumes.
savevg4vp           The command that backs up all files belonging to a specified volume group with the SDD vpath devices.
restvg4vp           The command that restores all files belonging to a specified volume group with the SDD vpath devices.
sddsrv              The SDD server daemon for path reclamation and probe.
sample_sddsrv.conf  The sample SDD server configuration file.
lvmrecover          The SDD script that restores a system's SDD vpath devices and LVM configuration when a migration failure occurs.
sddfcmap            The SDD tool that collects information on ESS SCSI or disk storage systems fibre-channel devices through SCSI commands.
sddgetdata          The SDD data collection tool for problem analysis.

Determining the installation type

Before you install the SDD on your AIX host system 4.3.3 (or later), you need to determine the installation type that is appropriate for your environment:

v If there is no previous version of the SDD installed on the host system, see "Installing the SDD" on page 23 for instructions on installing and configuring the SDD.
v If there is a previous version of the SDD installed on the host system and you want to upgrade to one of the following packages:

  - devices.sdd.43.rte
  - devices.sdd.51.rte
  - devices.sdd.52.rte
  - devices.sdd.53.rte
  - devices.sdd.61.rte

  See "Upgrading the SDD packages automatically without system restart" on page 24 for instructions on upgrading the SDD.
v If SDD 1.4.0.0 (or later) is installed on the host system and you have an SDD PTF that you want to apply to the system, see "Updating SDD packages by applying a program temporary fix" on page 29 for instructions. A PTF file has a file extension of .bff and requires special consideration when being installed.
v If you are installing an older version of SDD when a newer version is already installed, you must first remove the newer version from your host system before you can install the older version of SDD.

Installing and upgrading the SDD

This section provides information about installing, upgrading, and verifying the SDD.

Note: The following procedures assume that SDD will be used to access all of your single-path and multipath devices.

Installing the SDD

The SDD is released as an installation image. To install the SDD, use the installation package that is appropriate for your AIX OS level. Table 4 on page 20 lists the correct installation packages for different AIX OS levels and the supported AIX kernel mode, application mode, and interface.

You must have root access and AIX system administrator knowledge to install SDD. Use the System Management Interface Tool (SMIT) facility to install SDD. The SMIT facility has two interfaces, nongraphical and graphical. Enter smitty to invoke the nongraphical user interface or enter smit to invoke the graphical user interface (GUI).

Installing the SDD from CD-ROM

Throughout this SMIT procedure, /dev/cd0 is used for the compact disc drive address. The drive address can be different in your environment. Perform the following SMIT steps to install the SDD package on your system.
1. Log in as the root user.
2. Load the compact disc into the CD-ROM drive.
3. From your desktop window, enter smitty install_update and press Enter to go directly to the installation panels. The Install and Update Software menu is displayed.
4. Select Install Software and press Enter.
5. Press F4 to display the INPUT Device/Directory for Software panel.

6. Select the compact disc drive that you are using for the installation, for example, /dev/cd0, and press Enter.
7. Press Enter again. The Install Software panel is displayed.
8. Select Software to Install and press F4. The Software to Install panel is displayed.
9. Select the installation package that is appropriate for your environment.
10. Press Enter. The Install and Update from LATEST Available Software panel is displayed with the name of the software that you selected to install.
11. Check the default option settings to ensure that they are what you need.
12. Press Enter to install. SMIT responds with the following message:

    ARE YOU SURE??
    Continuing may delete information you may want to keep.
    This is your last chance to stop before continuing.

13. Press Enter to continue. The installation process can take several minutes to complete.
14. When the installation is complete, press F10 to exit from SMIT. Remove the compact disc.

Note: You do not need to reboot SDD even though the bosboot message indicates that a reboot is necessary.

Installing SDD from downloaded code

Use this procedure to install SDD from downloaded code:
1. Download code from the Web site that is appropriate for your system.
2. Move the downloaded files to the /usr/sys/inst.images directory.
3. Change directory to the /usr/sys/inst.images directory.
4. Untar the downloaded file. For example:

   tar -xvf devices.sdd.52.rte.tar

5. Update the .toc file. For example:

   pwd
   rm -i .toc
   inutoc .
   grep -i sdd .toc

   This command reflects the newer SDD code version that will be updated.
6. Continue the installation by following the instructions beginning in step 3 on page 23.

Upgrading the SDD

Upgrading the SDD packages automatically without system restart

With SDD 1.4.0.0 (or later), a new feature is provided to migrate or upgrade SDD packages. This feature supports backup, restoration, and recovery of LVM configurations and SDD device configurations automatically on the server, as well as migration from non-PR to PR SDD packages. This is especially beneficial in a complex SAN environment where a system administrator has to maintain a large number of servers. During SDD migration or upgrade, the LVM and SDD device configuration of the host will automatically be removed, the new SDD package will be installed, and then the SDD device and LVM configuration of the host will be restored. This feature will support the following scenarios:

1. Package migration from a nonpersistent reserve package with version 1.3.1.3 (or later) to a persistent reserve package with version 1.4.0.0 (or later). That is, ibmSdd_432.rte → devices.sdd.43.rte and ibmSdd_510.rte → devices.sdd.51.rte.
2. Package migration from version 1.3.1.3 or later to version 1.4.0.0 or later. Migration from an SDD version earlier than 1.3.1.3 is not supported.
3. Package upgrade from version 1.4.0.0 to a later version.

If the SDD currently installed on your host system is listed in Table 6, you can use this automatic migration to upgrade the SDD. If the SDD currently installed on your host system is not listed in Table 6, you will need to upgrade the SDD manually.

Table 6. List of previously installed installation packages that are supported with the installation upgrade

Installation package name
ibmSdd_432.rte
ibmSdd.rte.432
ibmSdd_433.rte
ibmSdd.rte.433
ibmSdd_510.rte
ibmSdd_510nchacmp.rte
devices.sdd.43.rte
devices.sdd.51.rte
devices.sdd.52.rte
devices.sdd.53.rte
devices.sdd.61.rte

Beginning with SDD 1.6.0.0, SDD introduces a new feature in the configuration method to read the pvid from the physical disks and convert the pvid from hdisks to vpaths during the SDD vpath configuration. With this feature, you can skip the process of converting the pvid from hdisks to vpaths after configuring SDD devices. Furthermore, the SDD migration scripts can now skip the pvid conversion scripts. This tremendously reduces the SDD migration time, especially with a large number of SDD devices and LVM configuration environment. Furthermore, the SDD now introduces two new environment variables that can be used in some configuration environments to customize the SDD migration and further reduce the time needed to migrate or upgrade the SDD. See "Customizing the SDD migration or upgrade" on page 26 for details.

During the migration or upgrade of the SDD, the LVM configuration of the host will be removed, the new SDD package will be installed, and then the original LVM configuration of the host will be restored.

Preconditions for migration or upgrade: The following are the preconditions for running the migration:
1. If HACMP is running, gracefully stop the cluster services.
2. If sddServer.rte (stand-alone IBM TotalStorage Expert SDD Server) is installed, uninstall sddServer.rte.
3. If there is any I/O running to SDD devices, stop these I/O activities.

4. Stop any activity related to system configuration changes. These activities are not allowed during the SDD migration or upgrade (for example, configuring more devices).
5. If there is active paging space created with the SDD devices, deactivate the paging space.

If any of the preceding preconditions are not met, the migration or upgrade will fail.

Customizing the SDD migration or upgrade: Beginning with SDD 1.6.0.0, SDD offers two new environment variables, SKIP_SDD_MIGRATION and SDDVG_NOT_RESERVED, for you to customize the SDD migration or upgrade to maximize performance. You can set these two variables based on the configuration of your system. The following discussion explains the conditions and procedures for using these two environment variables.

SKIP_SDD_MIGRATION: The SKIP_SDD_MIGRATION environment variable is an option available to bypass the SDD automated migration process (backup, restoration, and recovery of LVM configurations and SDD device configurations). This variable could help to decrease SDD upgrade time if you choose to reboot the system after upgrading SDD. For example, you might choose this option if you are upgrading other software that requires a reboot on the host at the same time. Another example is if you have a large number of SDD devices and LVM configuration, and a system reboot is acceptable. In these cases, you might want to choose this option to skip the SDD automated migration process.

If you choose to skip the SDD automated migration process, follow these procedures to perform an SDD upgrade:
1. Issue export SKIP_SDD_MIGRATION=YES to set the SKIP_SDD_MIGRATION environment variable.
2. Issue smitty install to install SDD.
3. Reboot the system.
4. Issue varyonvg vg_name for the volume groups that are not auto-varied on after reboot.
5. Issue mount filesystem-name to mount the file system.

SDDVG_NOT_RESERVED: SDDVG_NOT_RESERVED is an environment variable to indicate to the SDD migration script whether the host has any SDD volume group reserved by another host. If the host has any SDD volume group reserved by another host, set this variable to NO. Otherwise, you should set this variable to YES. If this variable is not set, the SDD migration script will assume the value to be NO.

When this variable is set to YES, the SDD migration script skips some procedures. This dramatically reduces the SDD migration time. If SDDVG_NOT_RESERVED is set to NO, the SDD migration script makes certain assumptions and runs more steps.

Set this variable to YES if the host is:
1. A completely stand-alone host, that is, not sharing LUNs with any other host
2. A host in a clustering environment but all the volume groups (including the volume groups that belong to a cluster software resource group) are configured for concurrent access only

3. A host in a clustering environment with nonconcurrent volume groups but all the nonconcurrent volume groups on all the hosts are varied off. That is, no other node has made a reserve on the SDD volume groups.

If the host does not meet any of these three conditions, set SDDVG_NOT_RESERVED to NO, so that the SDD migration script runs the vp2hd pvid conversion script to save the pvid under hdisks.

Follow these procedures to perform SDD migration with this variable:
1. Issue export SDDVG_NOT_RESERVED=NO or export SDDVG_NOT_RESERVED=YES to set the SDDVG_NOT_RESERVED environment variable.
2. Follow the procedures in "Procedures for automatic migration or upgrade."

Procedures for automatic migration or upgrade: To start the SDD migration or upgrade:
1. Install the new SDD package by entering the smitty install command. The migration or upgrade scripts run as part of the installation procedure that is initiated by the smitty install command. These scripts save the SDD-related LVM configuration on the system.

   SDD does not support mixed volume groups with the SDD vpath devices and supported storage hdisk devices. A volume group contains either the SDD vpath devices or only supported storage hdisk devices. If you do have a mixed volume group, the SDD migration or upgrade script fixes it by changing only the volume group to contain the SDD vpath devices. The following message displays when the SDD migration or upgrade script fixes the mixed volume group:

   <volume group> has a mixed of SDD and non-SDD devices. dpovgfix <volume group> is run to correct it. Mixed volume group <volume group> is converted to SDD devices successfully!

   The following messages indicate that the preuninstallation operations of the SDD are successful:

   LVM configuration is saved successfully.
   All mounted file systems are unmounted.
   All varied-on volume groups are varied off.
   All volume groups created on SDD devices are converted to non-SDD devices.
   SDD Server is stopped.
   All SDD devices are removed.
   Ready for deinstallation of SDD!

2. The older SDD is uninstalled before the new SDD will be installed.
3. The migration or upgrade script automatically configures the SDD devices and restores the original LVM configuration.

   The following messages indicate that the postinstallation of SDD is successful:

   Original lvm configuration is restored successfully!

Error recovery for migration or upgrade: If any error occurred during the preinstallation or postinstallation procedures, you can recover the migration or upgrade. There are two common ways that the migration or the upgrade can fail:

Case 1: Smitty install failed.

Smitty install fails if there is an error during the preuninstallation activities for the older SDD package. An error message indicating the error is printed, so you can identify and fix the problem. Use the smitty install command to install the new SDD package again.

Case 2: Smitty install exits with an OK prompt but configuration of SDD devices or LVM restoration failed.

If there is an error during the postinstallation (either the configuration of SDD devices has failed or LVM restoration has failed), the new SDD package is still successfully installed. Thus, the Smitty install exits with an OK prompt. However, an error message indicating the error is printed, so you can identify and fix the problem. Then, run the shell script lvmrecover to configure SDD devices and automatically recover the original LVM configuration.

Upgrading SDD manually

The following section describes the procedure to migrate or upgrade SDD manually. See "Upgrading the SDD packages automatically without system restart" on page 24 for information about migrating or upgrading SDD automatically.

A manual migration or upgrade is required if you are upgrading from a previous version of the SDD package not listed in Table 6 on page 25. You must uninstall the existing SDD and then manually install the new version of SDD in these cases.

Perform the following steps to upgrade SDD:
1. Enter the lspv command to find out all the SDD volume groups.
2. Enter the lsvgfs command for each SDD volume group to find out which file systems are mounted. Enter the following command: lsvgfs vg_name
3. Enter the umount command to unmount all file systems belonging to the SDD volume groups. Enter the following command: umount filesystem_name
4. Enter the varyoffvg command to vary off the volume groups. Enter the following command: varyoffvg vg_name
5. If you are upgrading to an SDD version earlier than 1.6.0.0, or if you are upgrading to SDD 1.6.0.0 or later and your host is in an HACMP environment with nonconcurrent volume groups that are varied-on on other hosts (that is, reserved by another host), run the vp2hd volume_group_name script to convert the volume group from the SDD vpath devices to supported storage hdisk devices. Otherwise, you skip this step.
6. Stop the SDD server by entering the following command: stopsrc -s sddsrv
7. Remove all the SDD vpath devices. Enter the following command: rmdev -dl dpo -R
8. Use the smitty command to uninstall the SDD. Enter smitty deinstall and press Enter. The uninstallation process begins. Complete the uninstallation process. See "Removing SDD from an AIX host system" on page 50 for the step-by-step procedure for uninstalling the SDD.
9. If you need to upgrade the AIX operating system, for example, from AIX 4.3 to AIX 5.1, you could perform the upgrade now. If required, reboot the system after the operating system upgrade.

10. Use the smitty command to install the newer version of the SDD from the compact disc. Enter smitty install and press Enter. The installation process begins. Go to "Installing and upgrading the SDD" on page 23 to complete the installation process.
11. Use the smitty device command to configure all the SDD vpath devices to the Available state. See "Configuring SDD" on page 45 for a step-by-step procedure for configuring devices.
12. Enter the lsvpcfg command to verify the SDD configuration. Enter the following command: lsvpcfg
13. If you are upgrading to an SDD version earlier than 1.6.0.0, run the hd2vp volume_group_name script for each SDD volume group to convert the physical volumes from supported storage hdisk devices back to the SDD vpath devices. Enter the following command: hd2vp volume_group_name
14. Enter the lspv command to verify that all physical volumes of the SDD volume groups are SDD vpath devices.

    Attention: If the physical volumes of an SDD volume group are mixed with hdisk devices and SDD vpath devices, you must run the dpovgfix utility to fix this problem. Otherwise, SDD will not function properly. Enter the dpovgfix vg_name command to fix this problem.
15. Enter the varyonvg command for each volume group that was previously varied offline. Enter the following command: varyonvg vg_name
16. Enter the mount command to mount all file systems that were unmounted in step 3 on page 28. Enter the following command: mount filesystem-name

Updating SDD packages by applying a program temporary fix

SDD 1.4.0.0 and later allows users to update SDD by installing a program temporary fix (PTF). A PTF file has a file extension of .bff and can either be applied or committed when it is installed. If the PTF is committed, the update to SDD is permanent; to remove the PTF, you must uninstall SDD. If the PTF is applied, you can choose to commit or to reject the PTF at a later time. If you decide to reject the PTF, you will not need to uninstall SDD from the host system.

Use the System Management Interface Tool (SMIT) facility to update SDD. Throughout this SMIT procedure, /dev/cd0 is used for the compact disc drive address. The drive address can be different in your environment. Perform the following SMIT steps to update the SDD package on your system:
1. Log in as the root user.
2. Load the compact disc into the CD-ROM drive.
3. From your desktop window, enter smitty install_update and press Enter to go directly to the installation panels. The Install and Update Software menu is displayed.
4. Select Install Software and press Enter.
5. Press F4 to display the INPUT Device/Directory for Software panel.

6. Select the compact disc drive that you are using for the installation (for example, /dev/cd0) and press Enter.
7. Press Enter again. The Install and Update from LATEST Available Software panel is displayed with the name of the software that you selected to install.
8. Select Software to Install and press F4. The Software to Install panel is displayed.
9. Select the PTF package that you want to install.
10. Press Enter.
11. Check the other default option settings to ensure that they are what you need. The default setting is to commit the PTF. If you only want to apply the PTF, select Commit software Updates? and tab to change the entry to no. If you specify no to Commit Software Updates?, be sure that you specify yes to Save Replaced Files?.
12. Press Enter to install. SMIT responds with the following message:
+--------------------------------------------------------------------------+
| ARE YOU SURE??                                                           |
| Continuing may delete information you may want to keep.                  |
| This is your last chance to stop before continuing.                      |
+--------------------------------------------------------------------------+
13. Press Enter to continue. The installation process begins and can take several minutes to complete.
14. When the installation is complete, press F10 to exit from SMIT.
15. Remove the compact disc.
Note: You do not need to reboot SDD even though the bosboot message indicates that a reboot is necessary.

Committing or Rejecting a PTF Update: Before you reject a PTF update, you need to stop sddsrv and remove all SDD devices. The following steps will guide you through this process. If you want to commit a package, you will not need to perform these steps. Follow these steps prior to rejecting a PTF update:
1. Stop SDD Server. Enter the following command:
   stopsrc -s sddsrv
2. Enter the lspv command to find out all the SDD volume groups.
3. Enter the lsvgfs command for each SDD volume group to find out which file systems are mounted. Enter the following command:
   lsvgfs vg_name
4. Enter the umount command to unmount all file systems belonging to SDD volume groups. Enter the following command:
   umount filesystem_name
5. Enter the varyoffvg command to vary off the volume groups. Enter the following command:
   varyoffvg vg_name
6. If you are downgrading to an SDD version earlier than 1.6.0.0, or if you are downgrading to SDD 1.6.0.0 or later but your host is in a HACMP environment with nonconcurrent volume groups that are varied-on on other host (that is, reserved by other host), run the vp2hd volume_group_name script to convert the volume group from SDD vpath devices to supported storage hdisk devices. Otherwise, you skip this step.
7. Remove all SDD devices. Enter the following command:
   rmdev -dl dpo -R

Perform the following steps to commit or reject a PTF update with the SMIT facility.
1. Log in as the root user.
2. From your desktop window, enter smitty install and press Enter to go directly to the installation panels. The Software Installation and Maintenance menu is displayed.
3. Select Software Maintenance and Utilities and press Enter.
4. Select Commit Applied Software Updates to commit the PTF or select Reject Applied Software Updates to reject the PTF.
5. Press Enter. The Commit Applied Software Updates panel is displayed or the Reject Applied Software Updates panel is displayed.
6. Select Software name and press F4. The software name panel is displayed.
7. Select the Software package that you want to commit or reject.
8. Check the default option settings to ensure that they are what you need.
9. Press Enter. SMIT responds with the following message:
+---------------------------------------------------------------------------+
| ARE YOU SURE??                                                            |
| Continuing may delete information you may want to keep.                   |
| This is your last chance to stop before continuing.                       |
+---------------------------------------------------------------------------+
10. Press Enter to continue. The commit or reject process can take several minutes to complete.
11. When the installation is complete, press F10 to exit from SMIT.
Note: You do not need to reboot SDD even though the bosboot message might indicate that a reboot is necessary.

After the procedure to reject a PTF update completes successfully:
1. Use the smitty device command to configure all the SDD vpath devices to the Available state. See "Configuring fibre-channel-attached devices" on page 18 for a step-by-step procedure for configuring devices.
2. Enter the lsvpcfg command to verify the SDD configuration. Enter the following command:
   lsvpcfg
3. If you have downgraded to an SDD version earlier than 1.6.0.0, run the hd2vp script for each SDD volume group to convert the physical volumes from supported storage hdisk devices back to SDD vpath devices. Enter the following command:
   hd2vp vg_name
4. Enter the lspv command to verify that all physical volumes of the SDD volume groups are SDD vpath devices.
5. Enter the varyonvg command for each volume group that was previously varied offline. Enter the following command:
   varyonvg vg_name
6. Enter the mount command to mount all file systems that were unmounted in step 4. Enter the following command:
   mount filesystem-name
Note: If the physical volumes of an SDD volume group are mixed with hdisk devices and vpath devices, you must run the dpovgfix utility to fix this problem. Otherwise, SDD does not function properly. Enter the dpovgfix vg_name command to fix this problem.
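The stop/unmount/vary-off sequence that must precede removing SDD devices in the procedures above can be collected into a small script. This is a minimal sketch only: it prints the AIX commands for review instead of executing them, and the volume-group name `datavg` and file system `/data` are hypothetical example values, not names from this guide.

```shell
#!/bin/sh
# Sketch: print the PTF-reject preparation commands for one SDD volume
# group. On a real AIX host you would run each command instead of
# echoing it (and repeat the umount for every mounted file system).
sdd_reject_prep() {
    vg="$1"   # hypothetical SDD volume group name
    fs="$2"   # hypothetical file system mounted from that volume group
    echo "stopsrc -s sddsrv"     # 1. stop the SDD server
    echo "lsvgfs $vg"            # 2-3. list mounted file systems of the VG
    echo "umount $fs"            # 4. unmount each file system
    echo "varyoffvg $vg"         # 5. vary off the volume group
    echo "rmdev -dl dpo -R"      # 7. remove all SDD vpath devices
}

sdd_reject_prep datavg /data
```

Reviewing the printed sequence before running it is useful because the order matters: file systems must be unmounted before the volume group is varied off, and the volume group must be varied off before the vpath devices can be removed.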

Start the SDD Server. Enter the following command:
startsrc -s sddsrv

Upgrading AIX OS or host attachment and SDD packages
SDD provides different packages to match the AIX OS level. If an AIX system is going to be upgraded to a different OS level, you need to install the corresponding SDD package for that OS level. Automatic migration of an SDD package from an earlier OS level to a later OS level after an OS upgrade is not supported. For example, automatic migration from devices.sdd.43.rte to devices.sdd.51.rte after an OS upgrade from AIX 4.3 to AIX 5.1, or automatic migration from devices.sdd.51.rte to devices.sdd.52.rte after an OS upgrade from AIX 5.1 to 5.2, is not supported.

Use the procedures in either Case 1 or Case 2 when you want to upgrade one of the following systems:
v AIX OS only*
v Host attachment and AIX OS*
v SDD and AIX OS
v Host attachment and SDD
v Host attachment only
v SDD, host attachment, and AIX OS
* Upgrading the AIX OS always requires you to install the SDD that corresponds to the new AIX OS level.

If you want to upgrade SDD only and you are not upgrading the AIX OS, see "Upgrading the SDD packages automatically without system restart" on page 24 or "Upgrading SDD manually" on page 28. Before you upgrade, ensure that rootvg is either on local SCSI disks or a SAN boot disk.

Case 1: In stand-alone host or with the HACMP services stopped
If the system is a stand-alone host or if you can stop the HACMP cluster services, follow these procedures to upgrade the software.
1. Stop all activities related to SDD devices:
   a. Stop applications running on SDD volume groups or file systems.
   b. If your host is in an HACMP environment, stop the cluster services in an orderly manner.
   c. If there is active paging space created with SDD devices, deactivate the paging space.
   d. Use the umount command to unmount all file systems of SDD volume groups.
   e. Vary off all SDD volume groups.
2. Remove SDD vpath devices using the rmdev -dl dpo -R command.
3. Run stopsrc -s sddsrv to stop the sddsrv daemon.
4. Remove hdisk devices using the following command:
   lsdev -C -t 2105* -F name | xargs -n1 rmdev -dl    for 2105 devices
   lsdev -C -t 2145* -F name | xargs -n1 rmdev -dl    for 2145 devices
   lsdev -C -t 2107* -F name | xargs -n1 rmdev -dl    for 2107 devices
   lsdev -C -t 1750* -F name | xargs -n1 rmdev -dl    for 1750 devices
5. Verify that the hdisk devices are successfully removed using the following command:
   lsdev -C -t 2105* -F name    for 2105 devices
   lsdev -C -t 2145* -F name    for 2145 devices
   lsdev -C -t 2107* -F name    for 2107 devices
   lsdev -C -t 1750* -F name    for 1750 devices
6. If you are upgrading the operating system, follow these steps. Otherwise, if you are not upgrading the operating system, skip to step 7.
   a. Make sure no disk group is online except rootvg.
   b. Migrate the AIX OS level. The system automatically restarts at the end of migration.
   c. Install SDD for the new AIX OS level.
   d. Upgrade to the latest version of Host Attachment, if required. The following are Host Attachment Package names:
      v ibm2105.rte for 2105 devices
      v devices.fcp.disk.ibm.rte for 2145, 2107, and 1750 devices
   e. If rootvg is on a SAN boot disk, restart the system.
   f. Skip the rest of the steps and follow the procedures in "Upgrading the SDD packages automatically without system restart" on page 24 to upgrade SDD, if required.
7. If you are not upgrading the operating system, follow these steps.
   a. Uninstall SDD. See "Special considerations when you uninstall SDD in the NIM environment" on page 51.
   b. Upgrade to the latest version of the host attachment, if required. The following are package names:
      v ibm2105.rte for 2105 devices
      v devices.fcp.disk.ibm.rte for 2145, 2107, and 1750 devices
   c. After upgrading Host Attachment:
      v If rootvg is on a SAN boot disk, restart the system. Then skip the rest of the steps and follow the procedures in "Upgrading the SDD packages automatically without system restart" on page 24 to upgrade SDD.
      v If rootvg is on local SCSI disks and you can restart the system, skip the rest of the steps and restart the system. Then follow the procedures in "Upgrading the SDD packages automatically without system restart" on page 24 to upgrade SDD.
      v If rootvg is on local SCSI disks and you cannot restart the system, continue to the next step.
   d. Upgrade to the latest version of SDD, if required.
   e. Configure hdisks and SDD vpath devices by running the cfgmgr command, or configure SDD vpath devices by running the cfallvpath command.
8. Resume all activities related to SDD devices:
   a. If there was active paging space created with SDD devices, activate the paging space.
   b. If your host was in an HACMP environment, start the cluster services.
   c. If your new SDD version is earlier than 1.6.0.0, run the hd2vp command on all SDD volume groups. Otherwise, skip this step.
   d. Vary on all SDD volume groups.
   e. Mount all file systems.
   f. Start applications running on SDD volume groups or file systems.
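The per-device-type removal pipelines in step 4 above follow one pattern: list the hdisks of a storage type with lsdev, then feed each name to rmdev. The sketch below only prints the four pipelines so they can be reviewed before use; on an AIX host you would execute each printed line instead.

```shell
#!/bin/sh
# Sketch: emit the hdisk-removal pipeline for each supported storage
# device type (2105, 2145, 2107, 1750), as used in step 4 above.
# Printed for review only; not executed here.
emit_hdisk_removal() {
    for type in 2105 2145 2107 1750; do
        echo "lsdev -C -t ${type}* -F name | xargs -n1 rmdev -dl"
    done
}

emit_hdisk_removal
```

The same loop with the trailing `| xargs -n1 rmdev -dl` removed yields the verification commands of step 5, which should print nothing once all hdisks of that type are gone.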

Case 2: In HACMP node-by-node migration with nonconcurrent resource group
If you are upgrading the software on the HACMP standby node during the HACMP node-by-node migration with non-concurrent resource group, perform the following steps to upgrade the software.
Note: Currently, migration to AIX 5.2 with NIM is not supported.
1. On the HACMP standby node, stop all activities related to SDD devices:
   a. Stop applications running on SDD volume groups or file systems.
   b. Ensure that all SDD vpath devices are closed by running the datapath query device command to verify that all the paths of all SDD vpath devices are at CLOSE state.
   c. If there is active paging space created with SDD devices, deactivate the paging space.
   d. Use the umount command to unmount all file systems of SDD volume groups.
   e. Vary off all SDD volume groups.
2. Remove the SDD vpath devices using the rmdev -dl dpo -R command.
3. Run stopsrc -s sddsrv to stop the sddsrv daemon.
4. Remove all the hdisks of the SDD supported storage devices with the following command:
   lsdev -C -t 2105* -F name | xargs -n1 rmdev -dl    for 2105 devices
   lsdev -C -t 2145* -F name | xargs -n1 rmdev -dl    for 2145 devices
   lsdev -C -t 2107* -F name | xargs -n1 rmdev -dl    for 2107 devices
   lsdev -C -t 1750* -F name | xargs -n1 rmdev -dl    for 1750 devices
5. Verify that the hdisk devices are successfully removed using the following command:
   lsdev -C -t 2105* -F name    for 2105 devices
   lsdev -C -t 2145* -F name    for 2145 devices
   lsdev -C -t 2107* -F name    for 2107 devices
   lsdev -C -t 1750* -F name    for 1750 devices
6. If you do not want to upgrade OS, skip to step 10. If you want to upgrade OS, follow these procedures:
   a. Make sure no disk group is online except rootvg. If rootvg is on a local SCSI disk, migrate to the desired AIX OS level; reboot automatically starts at the end of migration.
   b. Make sure you perform the following operations for the OS migration:
      v If you are using NIM to upgrade to AIX 5.3, make sure NIM SPOT contains AIX Interim Fix APAR IY94507.
      v Change the option to automatically import user volume groups to no.
7. Uninstall SDD. See "Special considerations when you uninstall SDD in the NIM environment" on page 51.
8. Upgrade to the latest version of Host Attachment, if required. The following are Host Attachment Package names:
   v ibm2105.rte for 2105 devices
   v devices.fcp.disk.ibm.rte for 2145, 2107, and 1750 devices
9. If rootvg is on a SAN boot disk, restart the system.
10. Install SDD. If you have migrated to a new AIX OS level, make sure you install the SDD for the new AIX OS level.
11. On the HACMP standby node, configure hdisks and the SDD vpath devices using one of the following options:
    v Run cfgmgr -vl fcsX for each fibre channel adapter and then run cfallvpath
    v Run cfgmgr
    If your new SDD version is earlier than 1.6.0.0, run hd2vp on all SDD volume groups. Otherwise, skip this step.
12. On the HACMP active node, run varyonvg -bu volume group name on all the SDD non-concurrent volume groups that are shared with the standby node.
13. On the HACMP standby node, run importvg -L volume group name physical volume name to update any possible Object Data Manager (ODM) changes on a volume group.
14. On the HACMP active node, run varyonvg volume group name on all SDD non-concurrent volume groups that are shared with the standby node.

Verifying the currently installed version of SDD for SDD 1.3.3.11 (or earlier)
For SDD packages prior to SDD 1.4.0.0, you can verify your currently installed version of SDD by entering the following command:
lslpp -l '*Sdd*'
The asterisks (*) in the beginning and end of the Sdd characters are used as wildcard symbols to search for the characters "ibm..." and "...rte".
Alternatively, you can enter one of the following commands:
lslpp -l ibmSdd_432.rte
lslpp -l ibmSdd_433.rte
lslpp -l ibmSdd_510.rte
lslpp -l ibmSdd_510nchacmp.rte

If you successfully installed the package, the output from the lslpp -l '*Sdd*' or lslpp -l ibmSdd_432.rte command looks like this:

Fileset                 Level     State      Description
------------------------------------------------------------------------------
Path: /usr/lib/objrepos
  ibmSdd_432.rte        1.3.3.9   COMMITTED  IBM SDD AIX V432 V433 for concurrent HACMP
Path: /etc/objrepos
  ibmSdd_432.rte        1.3.3.9   COMMITTED  IBM SDD AIX V432 V433 for concurrent HACMP

If you successfully installed the ibmSdd_433.rte package, the output from the lslpp -l ibmSdd_433.rte command looks like this:

Fileset                 Level     State      Description
------------------------------------------------------------------------------
Path: /usr/lib/objrepos
  ibmSdd_433.rte        1.3.3.9   COMMITTED  IBM SDD AIX V433 for nonconcurrent HACMP
Path: /etc/objrepos
  ibmSdd_433.rte        1.3.3.9   COMMITTED  IBM SDD AIX V433 HACMP for nonconcurrent

If you successfully installed the ibmSdd_510.rte package, the output from the lslpp -l ibmSdd_510.rte command looks like this:

Fileset                 Level     State      Description
------------------------------------------------------------------------------
Path: /usr/lib/objrepos
  ibmSdd_510.rte        1.3.3.9   COMMITTED  IBM SDD AIX V510 for concurrent HACMP
Path: /etc/objrepos
  ibmSdd_510.rte        1.3.3.9   COMMITTED  IBM SDD AIX V510 for concurrent HACMP

If you successfully installed the ibmSdd_510nchacmp.rte package, the output from the lslpp -l ibmSdd_510nchacmp.rte command looks like this:

Fileset                 Level     State      Description
------------------------------------------------------------------------------
Path: /usr/lib/objrepos
  ibmSdd_510nchacmp.rte 1.3.3.11  COMMITTED  IBM SDD AIX V510 for nonconcurrent HACMP
Path: /etc/objrepos
  ibmSdd_510nchacmp.rte 1.3.3.11  COMMITTED  IBM SDD AIX V510 for nonconcurrent HACMP

Verifying the currently installed version of SDD for SDD 1.4.0.0 (or later)
For SDD 1.4.0.0 (and later), you can verify your currently installed version of SDD by entering the following command:
lslpp -l 'devices.sdd.*'
Alternatively, you can enter one of the following commands:
lslpp -l devices.sdd.43.rte
lslpp -l devices.sdd.51.rte
lslpp -l devices.sdd.52.rte
lslpp -l devices.sdd.53.rte
lslpp -l devices.sdd.61.rte

If you successfully installed the devices.sdd.43.rte package, the output from the lslpp -l 'devices.sdd.*' command or lslpp -l devices.sdd.43.rte command looks like this:

Fileset                 Level     State      Description
------------------------------------------------------------------------------
Path: /usr/lib/objrepos
  devices.sdd.43.rte    1.4.0.0   COMMITTED  IBM Subsystem Device Driver for AIX V433
Path: /etc/objrepos
  devices.sdd.43.rte    1.4.0.0   COMMITTED  IBM Subsystem Device Driver for AIX V433

If you successfully installed the devices.sdd.51.rte package, the output from the lslpp -l devices.sdd.51.rte command looks like this:

Fileset                 Level     State      Description
------------------------------------------------------------------------------
Path: /usr/lib/objrepos
  devices.sdd.51.rte    1.4.0.0   COMMITTED  IBM Subsystem Device Driver for AIX V51
Path: /etc/objrepos
  devices.sdd.51.rte    1.4.0.0   COMMITTED  IBM Subsystem Device Driver for AIX V51

If you successfully installed the devices.sdd.52.rte package, the output from the lslpp -l devices.sdd.52.rte command looks like this:

Fileset                 Level     State      Description
------------------------------------------------------------------------------
Path: /usr/lib/objrepos
  devices.sdd.52.rte    1.4.0.0   COMMITTED  IBM Subsystem Device Driver for AIX V52
Path: /etc/objrepos
  devices.sdd.52.rte    1.4.0.0   COMMITTED  IBM Subsystem Device Driver for AIX V52

If you successfully installed the devices.sdd.53.rte package, the output from the lslpp -l devices.sdd.53.rte command looks like this:

Fileset                 Level     State      Description
------------------------------------------------------------------------------
Path: /usr/lib/objrepos
  devices.sdd.53.rte    1.6.0.0   COMMITTED  IBM Subsystem Device Driver for AIX V53
Path: /etc/objrepos
  devices.sdd.53.rte    1.6.0.0   COMMITTED  IBM Subsystem Device Driver for AIX V53

If you successfully installed the devices.sdd.61.rte package, the output from the lslpp -l devices.sdd.61.rte command looks like this:

Fileset                 Level     State      Description
------------------------------------------------------------------------------
Path: /usr/lib/objrepos
  devices.sdd.61.rte    1.7.0.0   COMMITTED  IBM Subsystem Device Driver for AIX V61
Path: /etc/objrepos
  devices.sdd.61.rte    1.7.0.0   COMMITTED  IBM Subsystem Device Driver for AIX V61

Preparing to configure SDD
Before you configure SDD, ensure that:
v The supported storage device is operational.
v The devices.sdd.nn.rte software is installed on the AIX host system, where nn identifies the installation package.
v The supported storage device hdisks are configured correctly on the AIX host system.
Configure the supported storage devices before you configure SDD. If you configure multiple paths to a supported storage device, ensure that all paths (hdisks) are in Available state. Otherwise, some SDD vpath devices will lose multipath capability.
Perform the following steps:
1. Enter the lsdev -C -t xxxx command to check the supported storage hdisk device configuration, where xxxx is the supported storage device type. You can pipe the output of the lsdev command to grep for a certain type of device. For example, use one of the following commands:
   v lsdev -C -t 2105 to check the ESS device configuration
   v lsdev -C -t 2107 to check the DS8000 device configuration
   v lsdev -C -t 1750 to check the DS6000 device configuration
   v lsdev -C -t 2145 to check the SAN Volume Controller device configuration
2. If you have already created some active volume groups with SDD supported storage devices, vary off (deactivate) all these active volume groups by using the varyoffvg (LVM) command. If there are file systems of these volume groups that are mounted, you must also unmount all file systems in order to configure SDD vpath devices correctly.

Maximum number of LUNs
For different AIX OS levels, SDD has set different limits on the maximum number of LUNs that can be configured. These limits exist because AIX has resource limitations on the total number of devices that a system can support. In a multipath configuration environment, AIX creates one hdisk device for each path to a physical disk. Increasing the number of paths that are configured to a physical disk increases the number of AIX system hdisk devices that are created and are consuming system resources. This might leave fewer resources for SDD vpath devices to be configured. On the other hand, more SDD vpath devices can be configured if the number of paths to each disk is reduced.

AIX has a published limit of 10 000 devices per system. Based on this limitation, SDD limits the total maximum number of SDD vpath devices that can be configured to 600. This number is shared by all SDD-supported storage devices. For AIX version 5.2 or later, the resource of the AIX operating system is increased. SDD has increased the SDD vpath device limit accordingly. Beginning with SDD 1.6.0.7, SDD supports a combined maximum of 1200 supported storage devices on AIX version 5.2 or later.

Table 7 provides a summary of the maximum number of LUNs and the maximum number of paths allowed when running on host systems with different operating system levels.

Table 7. Maximum LUNs allowed for different AIX OS levels

OS level   SDD supported storage devices
AIX 4.3*   600 LUNs (maximum 32 paths)
AIX 5.1    600 LUNs (maximum 32 paths)
AIX 5.2    1200 LUNs (maximum 32 paths; see Table 8 for recommended maximum number of paths.)
AIX 5.3    1200 LUNs (maximum 32 paths; see Table 8 for recommended maximum number of paths.)
AIX 6.1    1200 LUNs (maximum 32 paths; see Table 8 for recommended maximum number of paths.)

Note: * AIX 4.3 is only supported for ESS and virtualization products.

You can have a maximum of 32 paths per SDD vpath device regardless of the number of LUNs that are configured. This means that you can only have a maximum of 32 host adapter ports for your SDD vpath devices. However, configuring more paths than is needed for failover protection might consume too many system resources and degrade system performance. Use the minimum number of paths that are necessary to achieve sufficient redundancy in the SAN environment. The recommended number of paths is 2 - 4.

To avoid exceeding the maximum number of paths per SDD vpath device on AIX 5.2 or later, follow the recommendations in Table 8.

Table 8. Recommended maximum paths supported for different number of LUNs on AIX 5.2 or later

Number of LUNs           Maximum paths per vpath
1 - 600 vpath LUN        16
601 - 900 vpath LUN      8
901 - 1200 vpath LUN*    4

Note: * In order to configure 1200 LUNs, APAR IY49825 is required.
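The recommendations in Table 8 can be expressed as a small lookup. This is an illustrative sketch only, not part of SDD; the function name is hypothetical, and it simply encodes the three ranges from the table (with the 1200-LUN ceiling, which also requires APAR IY49825).

```shell
#!/bin/sh
# Sketch: given the number of configured vpath LUNs on AIX 5.2 or later,
# print the recommended maximum paths per vpath device from Table 8.
max_paths_per_vpath() {
    luns="$1"
    if [ "$luns" -le 600 ]; then
        echo 16                      # 1 - 600 vpath LUNs
    elif [ "$luns" -le 900 ]; then
        echo 8                       # 601 - 900 vpath LUNs
    elif [ "$luns" -le 1200 ]; then
        echo 4                       # 901 - 1200 vpath LUNs (needs APAR IY49825)
    else
        echo "SDD supports at most 1200 LUNs on one AIX host" >&2
        return 1
    fi
}

max_paths_per_vpath 750   # prints 8
```

For example, a host with 750 configured vpath LUNs should be zoned and cabled so that no vpath device has more than 8 paths.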

ODM attributes for controlling the maximum number of LUNs in SDD version 1. This attribute has a default value as well as a maximum value of 1200.7 or later will fail because SDD does not support more than 1200 LUNs on one AIX host. use the lsattr -El dpo command: > lsattr -El dpo SDD_maxlun 1200 persistent_resv yes Maximum LUNS allowed for SDD Subsystem Supports Persistent Reserve Command False False Preparing your system to configure more than 600 supported storage devices or to handle a large amount of I/O after queue depth is disabled If you plan to configure more than 600 supported storage devices by configuring multiple types of supported storage systems and the total number of LUNs will exceed 600. To avoid system-performance degradation. To display the value of the SDD_maxlun attribute.0. If you have this configuration. SDD_maxlun. or if you plan to disable queue depth to remove the limit on the amount of I/O that SDD vpath devices can send.0.If you have more than 1200 vpaths already configured on your AIX host (for example. The new SDD ODM attribute. There are also some system configurations that must be changed to avoid system bottleneck.7 or later on AIX 5. if you have 800 ESS LUNs and 512 SAN Volume Controller LUNs configured as SDD vpath devices on one AIX host). you must first determine whether the system has sufficient resources for large device configuration or heavy I/O operations.6. The SDD_maxlun ODM attribute is now used to replace the following ODM attributes: v v v v v 2105_max_luns 2145_max_luns 2062_max_luns Enterpr_maxlun Virtual_maxlun See Table 7 on page 39 for information about the total number of LUNs that you can configure. This value is not user-changeable.2 and later has consolidated the ODM attributes for controlling the maximum number of LUNs for all supported storage devices. contact IBM Customer Support at 1-800-IBM-SERV. 
tune the following ODM attributes for your AIX fibre-channel adapters before you configure more than 600 supported storage devices or disable queue depth: v lg_term_dma v num_cmd_elems v max_xfer_size v fc_err_recov 40 Multipath Subsystem Device Driver User's Guide .6. SDD migration to SDD 1.2 and later SDD for AIX 5. defines the maximum number of storage LUNs that SDD can support on a host.

change these attributes before assigning or configuring supported storage devices on a host system.2. Therefore. and the maximum value is: LP9000 adapters LP10000 adapters LP7000 adapters 2048 2048 1024 When a large number of supported storage devices are configured. If you still experience poor I/O performance after changing the value to 0x400000. 2. you need to reconfigure the fibre-channel adapter and all its child devices. you can increase this attribute to improve performance. Fast failover is not supported on AIX 4. Enabling this attribute can reduce the amount of time that the AIX disk driver takes to fail I/O in certain conditions. apply APAR IY48725 (Fast I/O Failure for Fibre Channel Devices) to add the fast failover feature. A recommended change is to increase the value of lg_term_dma to 0x400000. Its default value is 100000 and the maximum value is 1000000. To enable fibre-channel adapter fast failover.3.3 or earlier.If you change these attributes. If you have a dual-port fibre-channel adapter.0 or later. For AIX 5. max_xfer_size This AIX fibre-channel adapter attribute controls the maximum transfer size of the fibre-channel adapter. Using the -P option with the chdev command causes the changes to not take effect until the system is restarted. The default value for fc_err_recov is delayed_fail. because this will cause the configuration of the second adapter port to fail. never increase lg_term_dma to the maximum value for a dual-port fibre-channel adapter. fc_err_recov Beginning with AIX 5. Different storages might need different maximum transfer size to utilize the performance. the maximum value of the lg_term_dma attribute is divided between the two adapter ports. You can increase this attribute to improve performance. 
Chapter 2. Using the SDD on an AIX host system

fc_err_recov
Beginning with AIX 5.1 and AIX52 TL02, the fc_err_recov attribute enables fast failover during error recovery, and therefore can reduce the overall error recovery time. To enable fast failover, change the value to fast_fail.
Note: You can change this attribute only with AIX 5.1 and AIX52 TL02 or later.

lg_term_dma
This AIX fibre-channel adapter attribute controls the DMA memory resource that an adapter driver can use. The default value of lg_term_dma is 0x200000, and the maximum value is 0x8000000. If I/O performance is still poor after increasing this attribute, you can increase the value of this attribute again.

num_cmd_elems
This AIX fibre-channel adapter attribute controls the maximum number of commands to be queued to the adapter. The default value is 200.

Use the -P option with the chdev command if your system falls under any one of the following conditions:
v If you have boot devices on the adapter
v If you have a large number of devices configured and prefer to restart the system
v If you plan to reboot the system later

Use the following procedure if you can reboot the system and allow the new attribute values to take effect after the reboot:
1. Issue lsattr -El fcsN to check the current value of lg_term_dma, num_cmd_elems, and max_xfer_size.
2. Issue lsattr -El fscsiN to check the current value of fc_err_recov.
3. Issue chdev -l fcsN -P -a lg_term_dma=0x400000 to increase the DMA value.
4. Issue chdev -l fcsN -P -a num_cmd_elems=1024 to increase the maximum commands value.
5. Issue chdev -l fcsN -P -a max_xfer_size=20000 to increase the maximum transfer size.
6. Issue the chdev -l fscsiX -P -a fc_err_recov=fast_fail command to enable fast failover.
7. Assign new LUNs to the AIX host, if needed.
8. Reboot the system now or later.

Use the following procedure if you cannot reboot the system but want the new attributes to take effect immediately. Because this is a disruptive procedure, use it with care:
1. Issue lsattr -El fcsN to check the current value of lg_term_dma, num_cmd_elems, and max_xfer_size.
2. Issue lsattr -El fscsiN to check the current value of fc_err_recov.
3. Use the rmdev -dl dpo -R command to remove SDD vpath devices, if they are already configured on your system.
4. Put all existing fibre-channel adapters and their child devices to the Defined state by issuing rmdev -l fcsN -R.
5. Issue chdev -l fcsN -a lg_term_dma=0x400000 to increase the DMA value.
6. Issue chdev -l fcsN -a num_cmd_elems=1024 to increase the maximum commands value.
7. Issue chdev -l fcsN -a max_xfer_size=100000 to increase the maximum transfer size.
8. Issue chdev -l fscsiX -a fc_err_recov=fast_fail to enable fast failover.
9. Assign new LUNs to the AIX host, if needed.
10. Configure the fibre-channel adapters, the child devices, and hdisks using cfgmgr -l fcsN.
11. Configure SDD vpath devices with the cfallvpath command if they are removed in step 3.

When you have a large number of LUNs, many special device files are created in the /dev directory. Issuing the ls command with a wildcard (*) in this directory might fail. If issuing the command fails in this situation, change the ncargs attribute of sys0. The ncargs attribute controls the ARG/ENV list size in 4-KB byte blocks. The default value for this attribute is 6 (24 KB) and the maximum value for this attribute is 128 (512 KB). Increase the value of this attribute to 30. If you still experience failures after changing the value to 30, increase this value to a larger number.

Changing the ncargs attribute is dynamic. Use the following command to change the ncargs attribute to 30:

chdev -l sys0 -a ncargs=30
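The reboot-style tuning procedure above can be previewed as a small script. This is a sketch only, not part of the SDD package: it prints the chdev commands for review rather than running them, and the adapter list is an assumption (on a real host you would derive it from lsdev -Cc adapter):

```shell
#!/bin/sh
# Dry-run helper: print the chdev -P tuning commands for each FC adapter.
# Assumption: two adapters; on a real host use
#   lsdev -Cc adapter | awk '/^fcs/ {print $1}'
ADAPTERS="fcs0 fcs1"

for fcs in $ADAPTERS; do
    fscsi="fscsi${fcs#fcs}"            # fcs0 -> fscsi0
    echo "chdev -l $fcs -P -a lg_term_dma=0x400000"
    echo "chdev -l $fcs -P -a num_cmd_elems=1024"
    echo "chdev -l $fcs -P -a max_xfer_size=20000"
    echo "chdev -l $fscsi -P -a fc_err_recov=fast_fail"
done
echo "# reboot afterwards so the -P (deferred) changes take effect"
```

Because the -P changes are deferred, reviewing the generated commands before running them costs nothing on a live host.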

Filesystem space: If you are increasing the maximum number of LUNs, use the following steps to determine whether there is sufficient space in the root file system after hdisks are configured:
1. Issue cfgmgr -l [scsiN/fcsN] for each relevant SCSI or FCP adapter after changing the ODM attributes.
2. Issue df to ensure that the root file system (that is, '/') size is large enough to hold the device special files. For example:

Filesystem   512-blocks   Free    %Used   Iused   %Iused   Mounted on
/dev/hd4     196608       29008   86%     15524   32%      /

The minimum required size is 8 MB. If there is insufficient space, run the chfs command to increase the size of the root file system.

Controlling I/O flow to SDD devices with the SDD qdepth_enable attribute
Starting with SDD 1.5.0.0, the SDD attribute, qdepth_enable, allows you to control I/O flow to SDD vpath devices. By default, the queue depth logic to control the amount of I/O being sent to the vpath devices is enabled in the SDD driver, and SDD uses the device queue_depth setting to control the I/O flow to the SDD vpath device and paths. With certain database applications, such as an application running with a DB2® database, IBM Lotus® Notes®, or IBM Informix® database, the software might generate many threads, which can send heavy I/O to a relatively small number of devices. Enabling queue depth logic to control I/O flow can cause performance degradation, or even a system hang. To remove the limit on the amount of I/O sent to vpath devices, use the qdepth_enable attribute to disable this queue depth logic on I/O flow control.

qdepth_enable was a dpo attribute before SDD 1.6.1.2 and it controls the queue depth logic on all the SDD vpath devices. Beginning with SDD 1.6.1.2, qdepth_enable is changed to a vpath attribute and it allows you to set different queue depth logic per SDD vpath device.

To determine if queue depth logic is enabled for a particular SDD vpath device, run the following command:

# lsattr -El vpath0
active_hdisk    hdisk66/13AB2ZA1020/fscsi3        Active hdisk                   False
active_hdisk    hdisk2/13AB2ZA1020/fscsi2         Active hdisk                   False
active_hdisk    hdisk34/13AB2ZA1020/fscsi2        Active hdisk                   False
active_hdisk    hdisk98/13AB2ZA1020/fscsi3        Active hdisk                   False
policy          df                                Scheduling Policy              True
pvid            0005f9fdcda4417d0000000000000000  Physical volume identifier     False
qdepth_enable   yes                               Queue Depth Control            True
reserve_policy  PR_exclusive                      Reserve Policy                 True
serial_number   13AB2ZA1020                       LUN serial number              False
unique_id       yes                               Device Unique Identification   False

For SDD 1.5.1.0 or later, you can change the qdepth_enable attribute dynamically. The datapath set qdepth command offers a new option to dynamically enable or disable the queue depth logic. For example, if you enter the datapath set device 0 2 qdepth disable command, the following output is displayed when the queue depth logic is currently enabled on these SDD vpath devices:

Success: set qdepth_enable to no for vpath0
Success: set qdepth_enable to no for vpath1
Success: set qdepth_enable to no for vpath2
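The range form shown above (first device, last device) can be built from a device count. A dry-run sketch, where the device count is an assumption (on a real host you would take it from datapath query device):

```shell
#!/bin/sh
# Dry-run: build the documented range form 'datapath set device <first> <last>
# qdepth disable'. NUM_DEVICES is illustrative.
NUM_DEVICES=3
first=0
last=$((NUM_DEVICES - 1))

echo "datapath set device $first $last qdepth disable"
```

With NUM_DEVICES=3 this prints the same command used in the example above, covering vpath0 through vpath2.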

The qdepth_enable ODM attribute of these SDD vpath devices will be updated. For example, the following output shows that qdepth_enable is now set to no for vpath0:

# lsattr -El vpath0
active_hdisk    hdisk66/13AB2ZA1020/fscsi3        Active hdisk                   False
active_hdisk    hdisk2/13AB2ZA1020/fscsi2         Active hdisk                   False
active_hdisk    hdisk34/13AB2ZA1020/fscsi2        Active hdisk                   False
active_hdisk    hdisk98/13AB2ZA1020/fscsi3        Active hdisk                   False
policy          df                                Scheduling Policy              True
pvid            0005f9fdcda4417d0000000000000000  Physical volume identifier     False
qdepth_enable   no                                Queue Depth Control            True
reserve_policy  PR_exclusive                      Reserve Policy                 True
serial_number   13AB2ZA1020                       LUN serial number              False
unique_id       yes                               Device Unique Identification   False

See “Preparing your system to configure more than 600 supported storage devices or to handle a large amount of I/O after queue depth is disabled” on page 40 to determine whether the system has sufficient resources for disabling queue depth logic.

Controlling reserve policy of SDD devices with the SDD reserve_policy attribute
Starting with SDD 1.7.0.0, a new reserve_policy attribute allows you to control the default reserve policy of each SDD vpath device. By default, the reserve_policy attribute is set to PR_exclusive. This means that, unless an application opens the SDD vpath device with the no reserve device open option, SDD always makes a persistent reservation on the device. In a shared environment where an application does not implement the no reserve device open option, such as the Dual Virtual I/O Servers environment, you must set the reserve_policy attribute to no_reserve. If you set the reserve_policy attribute to no_reserve, SDD does not make a persistent reservation on the device, regardless of the device open option.

To display the current reserve_policy, run the following command:

# lsattr -El vpath0
active_hdisk    hdisk331/140FCA30/fscsi1          Active hdisk                   False
active_hdisk    hdisk2/140FCA30/fscsi0            Active hdisk                   False
active_hdisk    hdisk53/140FCA30/fscsi0           Active hdisk                   False
active_hdisk    hdisk197/140FCA30/fscsi0          Active hdisk                   False
active_hdisk    hdisk280/140FCA30/fscsi1          Active hdisk                   False
active_hdisk    hdisk475/140FCA30/fscsi1          Active hdisk                   False
policy          df                                Scheduling Policy              True
pvid            00082dba1dae728c0000000000000000  Physical volume identifier     False
qdepth_enable   yes                               Queue Depth Control            True
reserve_policy  PR_exclusive                      Reserve Policy                 True
serial_number   140FCA30                          LUN serial number              False
unique_id       yes                               Device Unique Identification   False

To change the vpath attribute, reserve_policy, from the default value “PR_exclusive” to “no_reserve”, enter the following command:

# chdev -l vpath0 -a reserve_policy=no_reserve
vpath0 changed

The chdev command requires that the device is closed because it reconfigures the SDD vpath device.
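When many vpath devices need the same reserve policy, the chdev calls can be generated in a loop. A dry-run sketch (it only prints the commands; the vpath list is an assumption, and remember that each device must be closed before the real chdev runs):

```shell
#!/bin/sh
# Dry-run: print a chdev command per SDD vpath device.
# Assumption: three devices; on a real host derive the list with
#   lsvpcfg | awk '{print $1}'
VPATHS="vpath0 vpath1 vpath2"

for v in $VPATHS; do
    echo "chdev -l $v -a reserve_policy=no_reserve"
done
```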

Note: Because SDD implements the persistent reserve command set, you must remove the SDD vpath device before removing the SDD vpath device's underlying hdisk devices.

Configuring SDD
Perform the following steps to configure SDD using SMIT:
Note: The list items on the SMIT panel might be worded differently from one AIX version to another.
1. Enter smitty device from your desktop window. The Devices menu is displayed.
2. Select Data Path Device and press Enter. The Data Path Device panel is displayed.
3. Select Define and Configure All Data Path Devices and press Enter. The configuration process begins.
4. Check the SDD configuration state. See “Displaying the supported storage device SDD vpath device configuration” on page 74.
5. Use the varyonvg command to vary on all deactivated supported storage device volume groups.
6. Mount the file systems for all volume groups that were previously unmounted.

Unconfiguring SDD
Before you unconfigure SDD devices, ensure that:
v All I/O activities on the devices that you need to unconfigure are stopped.
v All file systems belonging to the SDD volume groups are unmounted and all volume groups are varied off.
v A paging space created with SDD devices is deactivated.

Run the vp2hd volume_group_name conversion script to convert the volume group from SDD devices (vpathN) to supported storage devices (hdisks).

You can use SMIT to unconfigure the SDD devices in two ways. You can either unconfigure without deleting the device information from the Object Database Manager (ODM) database, or you can unconfigure and delete device information from the ODM database:
v If you unconfigure without deleting the device information, the device remains in the Defined state. You can use either SMIT or the mkdev -l vpathN command to return the device to the Available state.
v If you unconfigure and delete the device information from the ODM database, that device is removed from the system. To reconfigure it, follow the procedure described in “Configuring SDD” on page 45.

Perform the following steps to delete device information from the ODM and to unconfigure SDD devices:
1. Enter smitty device from your desktop window. The Devices menu is displayed.
2. Select Data Path Device and press Enter. The Data Path Device panel is displayed.
3. Select Remove a Data Path Device and press Enter. A list of all SDD devices and their states (either Defined or Available) is displayed.

4. Select the device that you want to unconfigure.
5. Select whether you want to delete the device information from the ODM database.
6. Press Enter. The device is unconfigured to the state that you selected.
7. To unconfigure more SDD devices, you have to repeat steps 4 - 6 for each SDD device.

The fast-path command to unconfigure all SDD devices and change the device state from Available to Defined is: rmdev -l dpo -R. The fast-path command to unconfigure and remove all SDD devices from your system is: rmdev -dl dpo -R.

Verifying the SDD configuration
To check the SDD configuration, you can use either the SMIT Display Device Configuration panel or the lsvpcfg console command.

Perform the following steps to verify the SDD configuration on an AIX host system:
1. Enter smitty device from your desktop window. The Devices menu is displayed.
2. Select Data Path Device and press Enter. The Data Path Device panel is displayed.
3. Select Display Data Path Device Configuration and press Enter.
4. Select all devices for Select Query Option, leave the Device Name/Device Model field blank and press Enter. The state (either Defined or Available) of all SDD vpath devices and the paths to each device is displayed. If any device is listed as Defined, the configuration was not successful. Check the configuration procedure again. See “Configuring SDD” on page 45 for the procedure.

If you want to use the command-line interface to verify the configuration, enter lsvpcfg. You should see an output similar to this:

vpath0 (Avail pv vpathvg) 018FA067 = hdisk1 (Avail )
vpath1 (Avail ) 019FA067 = hdisk2 (Avail )
vpath2 (Avail ) 01AFA067 = hdisk3 (Avail )
vpath3 (Avail ) 01BFA067 = hdisk4 (Avail ) hdisk27 (Avail )
vpath4 (Avail ) 01CFA067 = hdisk5 (Avail ) hdisk28 (Avail )
vpath5 (Avail ) 01DFA067 = hdisk6 (Avail ) hdisk29 (Avail )
vpath6 (Avail ) 01EFA067 = hdisk7 (Avail ) hdisk30 (Avail )
vpath7 (Avail ) 01FFA067 = hdisk8 (Avail ) hdisk31 (Avail )
vpath8 (Avail ) 020FA067 = hdisk9 (Avail ) hdisk32 (Avail )
vpath9 (Avail pv vpathvg) 02BFA067 = hdisk20 (Avail ) hdisk44 (Avail )
vpath10 (Avail pv vpathvg) 02CFA067 = hdisk21 (Avail ) hdisk45 (Avail )
vpath11 (Avail pv vpathvg) 02DFA067 = hdisk22 (Avail ) hdisk46 (Avail )
vpath12 (Avail pv vpathvg) 02EFA067 = hdisk23 (Avail ) hdisk47 (Avail )
vpath13 (Avail pv vpathvg) 02FFA067 = hdisk24 (Avail ) hdisk48 (Avail )

The output shows:
v The name of each SDD vpath device (for example, vpath13)
v The Defined or Available state of an SDD vpath device
v Whether the SDD vpath device is defined to AIX as a physical volume (indicated by the pv flag)
v The name of the volume group the device belongs to (for example, vpathvg)

v The unit serial number of the disk storage system LUN (for example, 02FFA067) or the unit serial number of the virtualization product LUN (for example, 60056768018A0210B00000000000006B)
v The names of the AIX disk devices making up the SDD vpath device and their configuration and physical volume state

Dynamically adding paths to SDD vpath devices
This section shows you how to add paths to SDD vpath devices with the addpaths command. The addpaths command allows you to dynamically add more paths to SDD vpath devices when they are in the Available state. It also allows you to add paths to SDD vpath devices that belong to active volume groups.

If an SDD vpath device is opened, the paths that are added are automatically in the Open state. With SDD levels earlier than 1.5.1.0, there is an exception when you enter the addpaths command to add a new path to an SDD vpath device that has only one configured path. In this case, the new path is not automatically in the Open state, and you must change it to the Open state by closing and reopening the SDD vpath device. This exception is removed for SDD 1.5.1.0 and later. That is, in SDD 1.5.1.0 and later, the new path automatically opens after you add it to an opened SDD vpath device.

Notes:
1. If you configure more than 32 hdisks, running addpaths will not add any paths to vpaths.
2. Before you run addpaths, you must run cfgmgr to configure the new hdisk devices. Running cfgmgr alone does not add new paths to SDD vpath devices.

You can determine how many more hdisks can be added to the existing SDD vpath devices by using the datapath query device command to find out how many hdisks are already configured for each vpath. Ensure that the number of hdisks from the existing configuration is below the recommended maximum paths allowed according to Table 8 on page 39.

Dynamically removing or replacing PCI adapters or paths
SDD 1.5.1.0 (or later) supports AIX Hot Plug available on 5L or later. You can dynamically replace an adapter in a hot-plug slot. You can use the AIX lsslot command to display dynamically reconfigurable slots, such as hot-plug slots, and their characteristics. You can also remove a particular path of an SDD vpath device. Replacing an adapter or removing paths does not interrupt current I/O and SDD can be dynamically reconfigured without shutting down or powering off the system. Three possible scenarios using this feature in the SDD environment are:
v “Dynamically removing a PCI adapter from SDD configuration”
v “Dynamically replacing a PCI adapter in an SDD configuration” on page 48
v “Dynamically removing a path of an SDD vpath device” on page 49

Dynamically removing a PCI adapter from SDD configuration
To permanently remove a PCI adapter and its child devices from an SDD configuration, use the datapath remove adapter n command, where n is the adapter number.
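As an illustration of the path-counting check described above, the one-line-per-vpath output of lsvpcfg is easy to parse. A hedged sketch, with canned sample lines standing in for live lsvpcfg output:

```shell
#!/bin/sh
# Sketch: count configured hdisk paths per vpath from lsvpcfg-style lines.
# The sample lines are illustrative; on a real host pipe in lsvpcfg itself.
printf '%s\n' \
  'vpath9 (Avail pv vpathvg) 02BFA067 = hdisk20 (Avail ) hdisk44 (Avail )' \
  'vpath1 (Avail ) 019FA067 = hdisk2 (Avail )' |
awk '{ n = gsub(/hdisk/, "hdisk"); print $1, n, "path(s)" }'
```

Comparing the printed counts against the recommended maximum in Table 8 shows how many more hdisks each vpath can absorb.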

Dynamically replacing a PCI adapter in an SDD configuration
To dynamically replace a PCI adapter in an SDD configuration, use the datapath remove adapter n command, where n is the adapter number to be removed. This command removes the adapter and associated paths from the SDD configuration. After you physically replace and configure a new adapter, the adapter and its associated paths can be added to SDD with the addpaths command. See “datapath remove adapter” on page 442 for more information about the datapath remove adapter n command.

Complete the following steps to dynamically replace a PCI adapter in the SDD configuration:
1. Enter datapath query adapter to identify the adapter to be replaced.

Active Adapters :4

Adpt#  Adapter Name  State    Mode     Select   Errors  Paths  Active
    0  fscsi0        NORMAL   ACTIVE    62051      415     10      10
    1  fscsi1        NORMAL   ACTIVE    65386        3     10      10
    2  fscsi2        NORMAL   ACTIVE    75697       27     10      10
    3  fscsi3        NORMAL   ACTIVE     4788       35     10      10

2. Enter datapath remove adapter n, where n is the adapter number to be removed. For example, to remove adapter 0, enter datapath remove adapter 0.

Success: remove adapter 0

Active Adapters :3

Adpt#  Adapter Name  State    Mode     Select   Errors  Paths  Active
    1  fscsi1        NORMAL   ACTIVE    65916        3     10      10
    2  fscsi2        NORMAL   ACTIVE    76197       28     10      10
    3  fscsi3        NORMAL   ACTIVE     4997       39     10      10

Note that Adpt# 0 fscsi0 is removed and the Select counts are increased on the other three adapters, indicating that I/O is still running.
3. Enter rmdev -dl fcs0 -R to remove fcs0, a parent of fscsi0, and all of its child devices from the system. Issuing lsdev -Cc disk should not show any devices associated with fscsi0.
4. Enter drslot -R -c pci -s P1-I8, where P1-I8 is the slot location found by issuing lscfg -vl fcs0. This command prepares a hot-plug slot for systems with AIX 5L™ or later.
5. Follow the instruction given by drslot to physically remove the adapter and install a new one.
6. Update the World Wide Name (WWN) of the new adapter at the device end and in the fabric. For example, for ESS devices, go to the ESS Specialist to update the WWN of the new adapter. The zone information of fabric switches must be updated with the new WWN as well.
7. Enter cfgmgr or cfgmgr -vl pci(n), where n is the adapter number, to configure the new adapter and its child devices. Use the lsdev -Cc disk and lsdev -Cc adapter commands to ensure that all devices are successfully configured to the Available state.
8. Enter the addpaths command to configure the newly installed adapter and its child devices to SDD. The newly added paths are automatically opened if the vpath is open.

Active Adapters :4

Adpt#  Adapter Name  State    Mode     Select   Errors  Paths  Active
    0  fscsi0        NORMAL   ACTIVE       11        0     10      10
    1  fscsi1        NORMAL   ACTIVE   196667        6     10      10
    2  fscsi2        NORMAL   ACTIVE   208697       36     10      10
    3  fscsi3        NORMAL   ACTIVE    95188       47     10      10

Dynamically removing a path of an SDD vpath device
To dynamically remove a particular path from an SDD vpath device, use the datapath remove device m path n command, where m is the device number and n is the path number of that device. This command permanently removes the logical path from the SDD device. See “datapath remove device path” on page 443 for more information about the datapath remove device m path n command.

Complete the following steps to remove a path of an SDD vpath device:
1. Enter datapath query device to identify which path of which device is to be removed.

DEV#: 0  DEVICE NAME: vpath0  TYPE: 2105E20  POLICY: Optimized
SERIAL: 20112028

Path#  Adapter/Hard Disk  State  Mode    Select  Errors
    0  fscsi1/hdisk18     OPEN   NORMAL     557       0
    1  fscsi1/hdisk26     OPEN   NORMAL     568      30
    2  fscsi0/hdisk34     OPEN   NORMAL     566       0
    3  fscsi0/hdisk42     OPEN   NORMAL     545       0

2. Enter datapath remove device m path n, where m is the device number and n is the path number of that device. For example, enter datapath remove device 0 path 1 to remove Path#1 from DEV#0.

Success: device 0 path 1 removed

DEV#: 0  DEVICE NAME: vpath0  TYPE: 2105E20  POLICY: Optimized
SERIAL: 20112028

Path#  Adapter/Hard Disk  State  Mode    Select  Errors
    0  fscsi1/hdisk18     OPEN   NORMAL     567       0
    1  fscsi0/hdisk34     OPEN   NORMAL     596       0
    2  fscsi0/hdisk42     OPEN   NORMAL     589       0

Note that fscsi1/hdisk26 is removed and Path# 1 is now fscsi0/hdisk34.
3. To reclaim the removed path, see “Dynamically adding paths to SDD vpath devices” on page 47.

DEV#: 0  DEVICE NAME: vpath0  TYPE: 2105E20  POLICY: Optimized
SERIAL: 20112028

Path#  Adapter/Hard Disk  State  Mode    Select  Errors
    0  fscsi1/hdisk18     OPEN   NORMAL     588       0
    1  fscsi0/hdisk34     OPEN   NORMAL     656       0
    2  fscsi0/hdisk42     OPEN   NORMAL     599       0
    3  fscsi1/hdisk26     OPEN   NORMAL       9       0

Note that fscsi1/hdisk26 is added with Path# 3.
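The adapter hot-swap procedure earlier in this section can be captured as a checklist script. This is a dry-run sketch with illustrative values (adapter 0, slot P1-I8); it prints the sequence rather than running it, and the slot location would really come from lscfg -vl fcs0:

```shell
#!/bin/sh
# Dry-run checklist for hot-swapping one FC adapter (illustrative values).
ADPT_NUM=0
FCS=fcs0
SLOT=P1-I8

cat <<EOF
datapath query adapter
datapath remove adapter $ADPT_NUM
rmdev -dl $FCS -R
drslot -R -c pci -s $SLOT
# physically replace the adapter, then update the WWN in the fabric/storage
cfgmgr -vl pci$ADPT_NUM
addpaths
EOF
```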

Removing SDD from an AIX host system
The SDD server (sddsrv) is an integrated component of SDD 1.3.2.9 (or later). The SDD server daemon is automatically started after SDD is installed. You must stop the SDD server if it is running in the background before removing SDD. Go to “Verifying if the SDD server has started” on page 68 and “Stopping the SDD server” on page 68 for more instructions. See Chapter 12, “Using the SDD server and the SDDPCM server,” on page 415 for more details about the SDD server daemon.

Note: See “Verifying the currently installed version of SDD for SDD 1.3.3.11 (or earlier)” on page 35 or “Verifying the currently installed version of SDD for SDD 1.4.0.0 (or later)” on page 37 to verify your currently installed installation package or version of SDD.

Follow these procedures to completely remove the SDD package from your AIX host system.
1. Stop all activities that are related to SDD devices, if there are any:
a. Stop applications that are running on SDD volume groups or file systems.
b. If there is active paging space that is created with SDD devices, deactivate the paging space.
c. Issue the umount command to unmount all file systems of SDD volume groups.
d. Vary off all SDD volume groups.
2. Remove the SDD vpath devices using the rmdev -dl dpo -R command. See “Special considerations when you uninstall SDD in the NIM environment” on page 51.
3. SDD has a server daemon running in the background. Therefore, before uninstalling SDD, issue the stopsrc -s sddsrv command to stop the sddsrv daemon.
4. Perform the following steps to uninstall the SDD package:
a. Enter smitty deinstall from your desktop window to go directly to the Remove Installed Software panel.
b. Enter one of the following installation package names in the SOFTWARE name field:
   devices.sdd.43.rte
   devices.sdd.51.rte
   devices.sdd.52.rte
   devices.sdd.53.rte
   devices.sdd.61.rte
   Then press Enter.
   Note: You can also press F4 in the Software name field to list the currently installed installation package and do a search (/) on SDD.
c. Press the Tab key in the PREVIEW Only? field to toggle between Yes and No. Select No to remove the software package from your AIX host system.
   Note: If you select Yes, the deinstall process will do a pre-check and let you preview the results without removing the software. If the state for any SDD device is either Available or Defined, the process fails.
d. Select No for the remaining fields on this panel.
e. Press Enter. SMIT responds with the following message:

ARE YOU SURE??
Continuing may delete information you may want to keep.
This is your last chance to stop before continuing.

f. Press Enter to begin the removal process. This might take a few minutes.
g. When the process is complete, the SDD software package is removed from your system.

Special considerations when you uninstall SDD in the NIM environment
When you uninstall SDD in the Network Installation Manager (NIM) environment, SDD does not require that you remove all the SDD vpath devices or stop the sddsrv daemon before you perform the SDD uninstallation. For example, if you are using NIM to upgrade the AIX OS from 5.3 to 6.1, you can add the SDD for AIX 5.3 package uninstallation as one of the procedures in your NIM pre-migration script. You do not need to issue the rmdev command in your NIM pre-migration script before you uninstall SDD.

Removing SDD Host Attachment from an AIX host system
Follow these procedures to completely remove the SDD Host Attachment package from your AIX host system:
1. Remove the hdisk devices using the following commands:
   lsdev -C -t 2105* -F name | xargs -n1 rmdev -dl    for 2105 devices
   lsdev -C -t 2145* -F name | xargs -n1 rmdev -dl    for 2145 devices
   lsdev -C -t 2107* -F name | xargs -n1 rmdev -dl    for 2107 devices
   lsdev -C -t 1750* -F name | xargs -n1 rmdev -dl    for 1750 devices
2. Verify that the hdisk devices are successfully removed using the following commands:
   lsdev -C -t 2105* -F name    for 2105 devices
   lsdev -C -t 2145* -F name    for 2145 devices
   lsdev -C -t 2107* -F name    for 2107 devices
   lsdev -C -t 1750* -F name    for 1750 devices
3. Enter smitty deinstall from your desktop window to go directly to the Remove Installed Software panel. Press Enter.
4. Enter the following installation package names in the SOFTWARE name field:
   a. ibm2105.rte
   b. devices.fcp.disk.ibm.rte
   Note: You can also press F4 in the Software name field to list the currently installed installation package and search (/) on ibm2105 and devices.fcp.disk.ibm.
5. Press the Tab key in the PREVIEW Only? field to toggle between Yes and No. Select No to remove the software package from your AIX host system.
   Note: If you select Yes, the deinstall process does a pre-check and lets you preview the results without removing the software. If the state for any SDD device is either Available or Defined, the process fails.
6. Select No for the remaining fields on this panel.
7. Press Enter. SMIT responds with the following message:

ARE YOU SURE?
Continuing may delete information you may want to keep.
This is your last chance to stop before continuing.

8. Press Enter to begin the removal process. This might take a few minutes.
9. When the process is complete, the SDD software package is removed from your system.

SAN boot support
These sections provide information about SAN boot support.
Note: Fibre-channel boot capability is available for disk storage system only.

Manual exclusion of devices from the SDD configuration
With certain technology levels of the AIX operating system, AIX supports fibre-channel boot capability for selected IBM System p and RS/6000 systems. This allows you to select fibre-channel devices as the boot device. However, a multipathing boot device is not supported. If you plan to select a device as a boot device, do not configure that device with multipath configuration. See the Host System Attachment Guide for the supported storage device for additional information.

SDD 1.3.3.9 (or later) allows you to manually exclude devices from the SDD configuration. The maximum number of devices that can be excluded is 100. You can also exclude any virtualization products devices from the SDD configuration with the excludesddcfg command. See “excludesddcfg” on page 90 for the syntax of the excludesddcfg command.

If you require dual or multiple boot capabilities on a server and multiple operating systems are installed on multiple boot devices, use the excludesddcfg command to manually exclude all boot devices that belong to nonactive rootvg volume groups on the server or disk storage system devices that are going to be selected as boot devices.

Notes:
1. The SDD configuration methods automatically exclude any devices from SDD configuration if these boot devices are the physical volumes of an active rootvg.
2. Do not use the excludesddcfg command multiple times on the same logical device.

Replacing manually excluded devices in the SDD configuration
Use the following procedure to place manually excluded devices back in the SDD configuration.
Attention: Using this procedure will result in the loss of all data on these physical volumes. The data cannot be recovered.
1. If the excluded devices belong to an active volume group and file systems of that volume group are mounted, you need to perform one of the following actions:
a. Use the umount command to unmount all the file systems of the volume group and vary off the volume group.
b. Or, unmount all the file systems of the volume group and use the reducevg command to reduce that device from the volume group.
2. Issue excludesddcfg -dl hdisk# to remove this device from exclusion.
3. Issue the cfallvpath configure methods to configure these new devices.
4. Issue lsvpcfg to verify that these devices are configured as SDD vpath devices.

SAN boot installation procedures
DS8000 supports single-path SAN boot on AIX 5.1 and later, while ESS and DS6000 support single-path SAN boot on AIX 5.2 and above. In order to successfully install the operating system on the single-path disk storage system devices in a multipath SDD environment, follow the provided procedures. The SAN boot procedures for AIX 5.1 are slightly different from the SAN boot procedures for AIX 5.2 and later releases.

Note: Because of a current AIX boot device limitation, do not select more than 5 physical volumes (hdisks) for a boot device.

SAN boot installation procedure for AIX 5.1
Use this procedure for SAN boot installation for AIX 5.1:
1. Configure disk storage system devices to the AIX system; there should be only a single path per LUN. In other words, the AIX system should see only one hdisk configured per LUN.
2. Install the base operating system on the selected disk storage system single-path devices.
3. Upgrade the base operating system to the latest technology level.
4. Install both SDD Host Attachment and SDD.
5. Reboot the AIX system.
6. Ensure that SDD vpath devices are configured correctly. Disk storage system devices (hdisks) should be configured as IBM 2105, IBM 2107, or IBM 1750 devices. Run the datapath query device command to verify that SDD vpath devices are configured with single-path and that the SDD vpath device policy is Single Path.

7. Issue bosboot -aDd /dev/ipldevice.
8. Connect additional host adapters and additional storage adapters to the fabric and configure multipath (multiple hdisks) per disk storage system LUN.
9. Reboot the AIX system.
10. Verify that SDD vpath devices are configured correctly with multipath per LUN.
11. Issue bosboot -aDd /dev/ipldevice.
12. Reboot the AIX system.
Note: The disk storage system's hdisk(s) name selected for the OS installation should not be changed throughout this procedure.

SAN boot installation procedure for AIX 5.2 and later
Use this procedure for SAN boot installation for AIX 5.2 and later releases:
1. Configure disk storage system devices to the AIX system; there should be only a single path per LUN. In other words, the AIX system should see only one hdisk configured per LUN.
2. Install the base OS on the selected disk storage system single-path devices.
3. Upgrade the base OS to the latest technology level.
4. Connect additional AIX host adapters and additional storage adapters to the fabric in order to configure multipath (multiple hdisks) per disk storage system LUN.
5. Install both SDD Host Attachment and SDD.
6. Reboot the AIX system.
7. Verify that SDD vpath devices are configured correctly with multipath per LUN. Disk storage system devices (hdisks) should be configured as IBM 2105, IBM 2107, or IBM 1750 devices. Run the datapath query device command to verify that SDD vpath devices are configured with multiple paths and that the vpath device policy is Optimized.
8. The logical device names of the hdisks might not be configured in a continuous sequence because of the parallel configuration feature in AIX 5.2 and above. If that is the case, follow these additional steps to simplify future maintenance:
a. Remove all hdisk logical device names (rootvg will not be removed) and SDD vpath devices.
b. Reconfigure all the hdisk logical devices and SDD vpath devices with the cfgmgr command, or reboot the AIX system.
c. Verify that all the logical device names of the hdisks (except rootvg) are configured in a continuous sequence.

Understanding SDD support for High Availability Cluster Multiprocessing
You can run SDD in concurrent and nonconcurrent multihost environments in which more than one host is attached to the same LUNs on a supported storage device. SDD supports High Availability Cluster Multiprocessing (HACMP) running on RS/6000 and IBM System p servers. HACMP provides a reliable way for clustered RS/6000 and IBM System p servers that share disk resources to recover from server and disk failures. In an HACMP environment, each RS/6000 and IBM System p server in a cluster is a node. Each node has access to shared disk resources that other nodes access.

HACMP supports two modes of operation:
HACMP transfers ownership of shared disks and other resources based on how you define the resource takeover mode of the nodes in a cluster. For SDD versions earlier than 1. The logical device names of the hdisks might not be configured in a continuous sequence because of the parallel configuration feature in AIX 5. Disk Storage system devices (hdisks) should be configured as IBM 2105.1. For AIX 5. each RS/6000 and IBM System p server in a cluster is a node.3. If that is the case. When there is a failure. follow these additional steps to simplify future maintenance. This process is known as node fallover or node fallback. c. 5.1. 6. Reconfigure all the hdisk logical devices and SDD vpath devices with cfgmgr command.4. b. Remove all hdisk logical devices names (rootvg will not be removed) and SDD vpath devices. SDD supports High Availability Cluster Multiprocessing (HACMP) running on RS/6000 and IBM System p servers. 7.0 but later than version 1.
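The multipath verification described in the SAN boot procedure (each LUN ends up as one vpath with multiple underlying hdisk paths) can be sketched by counting paths in lsvpcfg-style output. The sample output below is illustrative, not captured from a live system; on AIX you would pipe the real lsvpcfg output into the awk step instead.

```shell
#!/bin/sh
# Sketch: count configured hdisk paths per SDD vpath device by parsing
# lsvpcfg-style output. The sample text is made up for illustration.
sample='vpath0 (Avail pv) 21B21411=hdisk4 (Avail) hdisk12 (Avail)
vpath1 (Avail pv) 21C21411=hdisk5 (Avail) hdisk13 (Avail)'

# Each "hdiskN (Avail" token after the serial number is one path.
printf '%s\n' "$sample" | awk '{
    name = $1
    paths = gsub(/hdisk[0-9]+ \(Avail/, "")   # gsub returns the match count
    printf "%s: %d paths\n", name, paths
}'
```

A vpath reporting only one path after the additional adapters are cabled in would indicate that multipath configuration did not complete.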

nonconcurrent
   Only one node in a cluster is actively accessing shared disk resources while other nodes are standby.
concurrent
   Multiple nodes in a cluster are actively accessing shared disk resources.

The kind of attachment support depends on the version of SDD that you have installed. SDD supports RS/6000 and IBM System p servers connected to shared disks with SCSI adapters and drives as well as FCP adapters and drives. Different storage systems or models might support different versions of HACMP. For information, see the interoperability matrix for your storage:

http://www.ibm.com/systems/storage/disk/ess/
http://www.ibm.com/systems/storage/disk/ds6000/
http://www.ibm.com/systems/storage/disk/ds8000/
http://www.ibm.com/systems/storage/software/virtualization/svc/

Table 9. Recommended SDD installation packages and supported HACMP modes for SDD versions earlier than SDD 1.4.0.0

Installation package    Version of SDD supported                            HACMP mode supported
ibmSdd_432.rte          SDD 1.1.4 (SCSI only)                               Concurrent
ibmSdd_433.rte          SDD 1.3.1.3 (or later) (SCSI and fibre channel)     Concurrent or nonconcurrent
ibmSdd_510nchacmp.rte   SDD 1.3.1.3 (or later) (SCSI and fibre channel)     Concurrent or nonconcurrent

Tip: If your SDD version is earlier than 1.4.0.0, and you use a mix of nonconcurrent and concurrent resource groups (such as cascading and concurrent resource groups, or rotating and concurrent resource groups) with HACMP, you should use the nonconcurrent version of SDD.

Table 10 and Table 11 on page 56 summarize the software requirements to support HACMP v4.5. You can use the command instfix -ik IYxxxx, where xxxx is the APAR number, to determine if APAR xxxx is installed on your system.

Table 10. Software support for HACMP 4.5 on AIX 4.3.3 (32-bit only), 5.1.0 (32-bit and 64-bit), 5.2.0 (32-bit and 64-bit)

SDD version and release level                                                   HACMP 4.5 + APARs
ibmSdd_432.rte installation package for SDD 1.1.4.0 (SCSI only) devices         Not applicable
ibmSdd_433.rte installation package for SDD 1.3.1.3 (or later) (SCSI/FCP)       v IY36938
devices                                                                         v IY36933
                                                                                v IY35735
                                                                                v IY36951

Chapter 2. Using the SDD on an AIX host system 55

Table 10. Software support for HACMP 4.5 on AIX 4.3.3 (32-bit only), 5.1.0 (32-bit and 64-bit), 5.2.0 (32-bit and 64-bit) (continued)

SDD version and release level                                                   HACMP 4.5 + APARs
ibmSdd_510nchacmp.rte installation package for SDD 1.3.0.9 (to SDD 1.3.3.x)     v IY36938
(SCSI/FCP) devices                                                              v IY36933
                                                                                v IY35735
                                                                                v IY36951

Note: For up-to-date APAR information for HACMP, go to the following Web site: http://www.ibm.com/support/us/en/

Table 11. Software support for HACMP 4.5 on AIX 5.1.0 (32-bit and 64-bit kernel)

SDD version and release level                                                   HACMP 4.5 + APARs
ibmSdd_510nchacmp.rte installation package for SDD 1.3.2.9 (to SDD 1.3.3.x)     v IY36938
(SCSI/FCP) devices                                                              v IY36933
                                                                                v IY36782
                                                                                v IY37744
                                                                                v IY37746
                                                                                v IY35810
                                                                                v IY36951

Note: For up-to-date APAR information for HACMP, go to the following Web site: http://www.ibm.com/support/us/en/

For HACMP v5.1, v5.2, v5.3, and v5.4 for AIX5L support information, go to the following Web site:
http://publib.boulder.ibm.com/infocenter/clresctr/vxrx/index.jsp?topic=/com.ibm.cluster.hacmp.doc/hacmpbooks.html

For HACMP up-to-date APAR information, go to the following Web site:
http://www14.software.ibm.com/webapp/set2/sas/f/hacmp/download/aix53.html

SDD persistent reserve attributes

With SDD 1.4.0.0 or later, a single package (that corresponds to the AIX OS level) applies to HACMP running in both concurrent and nonconcurrent mode. In order to support HACMP in nonconcurrent mode with single-point-failure protection, the SDD installation packages implement the SCSI-3 Persistent Reserve command set. The SDD installation packages have a new attribute under the pseudo-parent (dpo) that reflects whether the supported storage device supports the Persistent Reserve Command set. The attribute name is persistent_resv. If SDD detects that the storage device supports Persistent Reserve, the persistent_resv attribute is created in the CuAt ODM and the value is set to yes; otherwise, this attribute exists only in the PdAt ODM and the value is set to no (default). You can use the following command to check the persistent_resv attribute after the SDD device configuration is complete:

lsattr -El dpo

If your host is attached to a supported storage device that supports Persistent Reserve, the output should look similar to the following output:

> lsattr -El dpo
SDD_maxlun      1200  Maximum LUNS allowed                           True
persistent_resv yes   Subsystem Supports Persistent Reserve Command  False

To check the persistent reserve key of a node that HACMP provides, enter the command:

odmget -q "name = ioaccess" CuAt

The output should look similar to this:

name = "ioaccess"
attribute = "preservekey"
value = "01043792"
type = "R"
generic = ""
rep = "s"
nls_index = 0

Preparation for importing volume groups under HACMP

Beginning with SDD 1.6.0.0, the SDD configuration method reads the pvid from the physical disk and creates the pvid attribute in the ODM database for all SDD vpath devices, if the SDD vpath device is not reserved by another host and if a pvid resides on the physical disk. Furthermore, the SDD configuration method cleans up the supported storage devices' (hdisk) pvid from the ODM database. With this feature, the host should have the pvid on the SDD vpath devices after an SDD vpath configuration, if a pvid exists on the physical disk (see Scenario 3 below). If no pvid exists on the physical disk, you will see the display as shown in Scenario 4 below. You should determine the scenario that matches your host and follow the procedures described for that scenario.

SDD versions earlier than 1.6.0.0 do not automatically create the pvid attribute in the ODM database for each SDD vpath device; the AIX disk driver automatically creates the pvid attribute in the ODM database only for the hdisks. Because of this, the first time that you import a new SDD volume group to a new cluster node, you must import the volume group using hdisks as physical volumes. Next, run the hd2vp conversion script (see "SDD utility programs" on page 87) to convert the volume group's physical volumes from supported storage device hdisks to SDD vpath devices. This conversion step not only creates pvid attributes for all SDD vpath devices that belong to the imported volume group, it also deletes the pvid attributes for the underlying hdisks of these SDD vpath devices. Later on, you can import and vary on the volume group directly from the SDD vpath devices. These special requirements apply to both concurrent and nonconcurrent volume groups.

Under certain conditions, the state of a physical device's pvid on a system is not always as expected. It is necessary to determine the state of a pvid as displayed by the lspv command, in order to select the appropriate import volume group action.
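The pvid handling that the hd2vp conversion performs, as described above, can be illustrated with a toy model. Here a plain-text table stands in for the ODM, and the device names and pvid value are made up; this is not the real hd2vp implementation, only a sketch of its net effect.

```shell
#!/bin/sh
# Sketch: model the net effect of hd2vp on pvid attributes.
# A text table (device, pvid) stands in for the ODM; values are illustrative.
before='hdisk1 003dfc10a11904fa
hdisk2 003dfc10a11904fa
vpath0 none'

# hd2vp: the vpath takes over the pvid of its underlying hdisks,
# and the pvid attribute is deleted from those hdisks.
after=$(printf '%s\n' "$before" | awk '
    /^hdisk/ { pvid = $2; print $1, "none"; next }
    /^vpath/ { print $1, pvid }')
printf '%s\n' "$after"
```

After the conversion, lspv on the real system would show the pvid only against the vpath, which is what allows the volume group to be imported and varied on directly from the SDD vpath devices.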

There are four scenarios:

Scenario 1. lspv displays pvids for both hdisks and vpath:
>lspv
hdisk1 003dfc10a11904fa None
hdisk2 003dfc10a11904fa None
vpath0 003dfc10a11904fa None

Scenario 2. lspv displays pvids for hdisks only:
>lspv
hdisk1 003dfc10a11904fa None
hdisk2 003dfc10a11904fa None
vpath0 none             None

For both Scenario 1 and Scenario 2, the volume group should be imported using the hdisk names and then converted using the hd2vp command:
>importvg -y vg_name -V major# hdisk1
>hd2vp vg_name

Scenario 3. lspv displays the pvid for vpath only:
>lspv
hdisk1 none             None
hdisk2 none             None
vpath0 003dfc10a11904fa None

For Scenario 3, the volume group should be imported using the vpath name:
>importvg -y vg_name -V major# vpath0

Scenario 4. lspv does not display the pvid on the hdisks or the vpath:
>lspv
hdisk1 none None
hdisk2 none None
vpath0 none None

For Scenario 4, the pvid will need to be placed in the ODM for the SDD vpath devices, and then the volume group can be imported using the vpath name:
>chdev -l vpath0 -a pv=yes
>importvg -y vg_name -V major# vpath0

Note: See "Importing volume groups with SDD" on page 79 for a detailed procedure for importing a volume group with the SDD devices.

HACMP RAID concurrent-mode volume groups and enhanced concurrent-capable volume groups

This section provides information about HACMP RAID concurrent-mode volume groups and enhanced concurrent-capable volume groups. It also provides instructions on the following procedures for both kinds of volume groups:
v Creating volume groups
v Importing volume groups
v Removing volume groups
v Extending volume groups
v Reducing volume groups
v Exporting volume groups
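The choice among the four import actions above follows mechanically from the lspv output. The sketch below encodes that decision; the sample lspv text and device names are illustrative, and on AIX you would feed in real `lspv` output and real hdisk/vpath names.

```shell
#!/bin/sh
# Sketch: pick the import action (Scenarios 1-4) from lspv-style output.
# Sample text and device names are illustrative only.
classify() {
    # $1 = lspv-style text, $2 = hdisk name, $3 = vpath name
    hd=$(printf '%s\n' "$1" | awk -v d="$2" '$1 == d { print $2 }')
    vp=$(printf '%s\n' "$1" | awk -v d="$3" '$1 == d { print $2 }')
    if [ "$hd" != "none" ]; then
        # Scenario 1 or 2: pvid on the hdisks; import via hdisk, then hd2vp
        echo "importvg via $2 then hd2vp"
    elif [ "$vp" != "none" ]; then
        # Scenario 3: pvid already on the vpath
        echo "importvg via $3"
    else
        # Scenario 4: no pvid anywhere; place it with chdev first
        echo "chdev -l $3 -a pv=yes then importvg via $3"
    fi
}

classify 'hdisk1 003dfc10a11904fa None
vpath0 none None' hdisk1 vpath0
```

This prints the Scenario 1/2 action for the sample input, since the pvid appears on the hdisk but not the vpath.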

Starting with AIX51 TL02 and HACMP 4.4.1.4, you can create enhanced concurrent-capable volume groups with supported storage devices. HACMP supports both kinds of concurrent volume groups (HACMP RAID concurrent-mode volume groups and enhanced concurrent-capable volume groups). This section describes the advantage of enhanced concurrent-capable volume groups in an HACMP environment, and the different ways of creating the two kinds of concurrent-capable volume groups. While there are different ways to create and vary on concurrent-capable volume groups, the instructions to export a volume group are always the same. See "Exporting HACMP RAID concurrent-mode volume groups" on page 64.

Note: For more information about HACMP RAID concurrent-mode volume groups, see the HACMP Administration Guide.

Creating HACMP RAID concurrent-mode volume groups

Perform the following steps to create an HACMP RAID concurrent-mode volume group:

Note: On each node in the cluster, issue the lvlstmajor command to determine the next common available major number (volume groups must be created with a major number that is available on all nodes).

1. Enter smitty datapath_mkvg at the command prompt.
2. A screen similar to the following is displayed. Enter the information appropriate for your environment. The following example shows how to create a concurrent access volume group con_vg on an SDD vpath124.

Add a Volume Group with Data Path Devices
Type or select values in entry fields.
Press Enter AFTER making all desired changes.

                                                         [Entry Fields]
  VOLUME GROUP name                                      [con_vg]
  Physical partition SIZE in megabytes                              +
* PHYSICAL VOLUME names                                  [vpath124] +
  Force the creation of a volume group?                  no         +
  Activate volume group AUTOMATICALLY at system restart? no         +
  Volume group MAJOR NUMBER                              [80]       +#
  Create VG Concurrent Capable?                          no         +
  Auto-varyon in Concurrent Mode?                        no         +
  Create a big VG format Volume Group?                   no         +
  LTG Size in kbytes                                     128        +

Importing HACMP RAID concurrent-mode volume groups

When importing the volume group to other nodes in the cluster, you need to vary off the volume group on the node after it is created. You can import the volume group from either the SDD vpath device or the hdisk device, depending on the pvid condition on the node to which the volume group is to be imported. Follow this procedure to import a volume group with SDD vpath device:

1. On the node where the volume group was originally created, you can get the pvid:

NODE VG ORIGINALLY CREATED ON
monkey> lspv | grep con_vg
vpath124 000900cf4939f79c con_vg
monkey>

2. Then grep the pvid on the other nodes using the lspv | grep <pvid found in step 1> and the lsvpcfg commands. There are three scenarios. Follow the procedure for the scenario that matches the pvid status of your host:

a. If the pvid is on hdisk devices, the output of the lspv | grep <pvid found in step 1> and the lsvpcfg commands should look like the following example:

NODE VG BEING IMPORTED TO
zebra> lspv | grep 000900cf4939f79c
hdisk126 000900cf4939f79c none
hdisk252 000900cf4939f79c none
zebra>
zebra> lsvpcfg | egrep -e 'hdisk126 ('
vpath124 (Avail) 21B21411=hdisk126 (Avail pv) hdisk252 (Avail pv)

1) Enter smitty importvg at the command prompt.
2) A screen similar to the following is displayed. Enter the information appropriate for your environment. The following example shows how to import an HACMP RAID concurrent-mode volume group using the con_vg on an SDD hdisk126:

***********************************************************************
Import a Volume Group
Type or select values in the entry fields.
Press Enter AFTER making all desired changes.
                                                 [Entry Fields]
VOLUME GROUP name                                [con_vg]
PHYSICAL VOLUME names                            [hdisk126]
Volume Group MAJOR NUMBER                        [80]
Make this VOLUME GROUP concurrent-capable?       no
Make default varyon of VOLUME GROUP concurrent?  no
***********************************************************************

3) After importing volume groups has been completed, issue the lsvpcfg command again to verify the state of the vpath:

zebra> lsvpcfg | egrep -e 'hdisk126 ('
vpath124 (Avail) 21B21411=hdisk126 (Avail pv con_vg) hdisk252 (Avail pv con_vg)

4) Enter the hd2vp command against the volume group to convert the pvid from hdisk devices to SDD vpath devices:

zebra> hd2vp con_vg
zebra> lsvpcfg | egrep -e 'hdisk126 ('
vpath124 (Avail pv con_vg) 21B21411=hdisk126 (Avail) hdisk252 (Avail)

b. If the pvid is on an SDD vpath device, the output of the lspv | grep <pvid found in step 1> and the lsvpcfg commands should look like the following example:

NODE VG BEING IMPORTED TO
zebra> lspv | grep 000900cf4939f79c
vpath124 000900cf4939f79c none
zebra>
zebra> lsvpcfg vpath124
vpath124 (Avail pv) 21B21411=hdisk126 (Avail) hdisk252 (Avail)

1) Enter smitty importvg at the command prompt.
2) A screen similar to the following is displayed. Enter the information appropriate for your environment. The following example shows how to import an HACMP RAID concurrent-mode volume group using the con_vg on an SDD vpath device vpath124:

**********************************************************************
Import a Volume Group
Type or select values in the entry fields.
Press Enter AFTER making all desired changes.
                                                 [Entry Fields]
VOLUME GROUP name                                [con_vg]
PHYSICAL VOLUME names                            [vpath124]
Volume Group MAJOR NUMBER                        [80]
Make this VOLUME GROUP concurrent-capable?       no
Make default varyon of VOLUME GROUP concurrent?  no
**********************************************************************

3) After importing volume groups has been completed, issue the lsvpcfg command again to verify the state of the SDD vpath device:

zebra> lsvpcfg vpath124
vpath124 (Avail pv con_vg) 21B21411=hdisk126 (Avail) hdisk252 (Avail)

c. If there is no pvid on either the hdisk or the SDD vpath device, the output of the lspv | grep <pvid found in step 1> command should look like the following example:

NODE VG BEING IMPORTED TO
zebra> lspv | grep 000900cf4939f79c
zebra>

1) Issue the chdev -l vpathX -a pv=yes command to retrieve the pvid value.
2) There is a possibility that the SDD vpath device might be different on each host. Verify that the serial numbers (in this example, it is 21B21411) following the SDD vpath device names on each node are identical. To determine a matching serial number on both nodes, run the lsvpcfg command on both nodes:

monkey> lsvpcfg
vpath122 (Avail) 21921411=hdisk255 (Avail) hdisk259 (Avail)
vpath123 (Avail) 21A21411=hdisk256 (Avail) hdisk260 (Avail)
vpath124 (Avail pv con_vg) 21B21411=hdisk127 (Avail) hdisk253 (Avail)
monkey>
zebra> lsvpcfg | egrep -e '21B21411'
vpath124 (Avail) 21B21411=hdisk126 (Avail) hdisk252 (Avail)
zebra>

Note: You should also verify that the volume group is not varied on for any of the nodes in the cluster prior to attempting retrieval of the pvid.

3) Enter smitty importvg at the command prompt.
4) A screen similar to the following is displayed. Enter the information appropriate for your environment. The following example shows how to import an HACMP RAID concurrent-mode volume group using the con_vg on an SDD vpath device vpath124:

**********************************************************************
Import a Volume Group
Type or select values in the entry fields.
Press Enter AFTER making all desired changes.
                                                 [Entry Fields]
VOLUME GROUP name                                [con_vg]
PHYSICAL VOLUME names                            [vpath124]
Volume Group MAJOR NUMBER                        [80]
Make this VOLUME GROUP concurrent-capable?       no
Make default varyon of VOLUME GROUP concurrent?  no
**********************************************************************

3. After importing volume groups has been completed, issue the lsvpcfg command again to verify the state of the SDD vpath device:

zebra> lsvpcfg vpath124
vpath124 (Avail pv con_vg) 21B21411=hdisk126 (Avail) hdisk252 (Avail)
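When verifying that a vpath on two nodes refers to the same LUN, the value to compare is the serial number that lsvpcfg prints (the token just before the "="). That comparison can be scripted; the two sample lines below are illustrative, and on AIX you would capture real lsvpcfg output from each node.

```shell
#!/bin/sh
# Sketch: compare the lsvpcfg serial numbers for a vpath on two nodes.
# Sample lines are illustrative, not captured from live systems.
node_a='vpath124 (Avail pv con_vg) 21B21411=hdisk127 (Avail) hdisk253 (Avail)'
node_b='vpath124 (Avail) 21B21411=hdisk126 (Avail) hdisk252 (Avail)'

# Extract the token between the last ") " before "=" and the "=" itself.
serial() { printf '%s\n' "$1" | sed 's/.*) \([0-9A-F]*\)=.*/\1/'; }

if [ "$(serial "$node_a")" = "$(serial "$node_b")" ]; then
    echo "serials match: $(serial "$node_a")"
else
    echo "serials differ"
fi
```

Matching serials confirm that both nodes see the same LUN even when the vpath and hdisk numbers differ between hosts.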

Attention: When any of these HACMP RAID concurrent-mode volume groups are imported to the other nodes, it is important that they are not set for autovaryon; autovaryon will cause errors when attempting to synchronize the HACMP cluster. Use the lsvg vgname command to check the value of autovaryon, and use the chvg -an vgname command to set autovaryon to FALSE. When the concurrent access volume groups are not set to autovaryon, a special option flag -u is required when issuing the varyonvg command to make them concurrent-accessible across all the cluster nodes.

Removing HACMP RAID concurrent-mode volume groups

Notes:
1. Removing an HACMP RAID concurrent-mode volume group can be accomplished by exporting volume groups, or by following the procedure below.
2. These steps need to be run on all nodes.

Perform the following steps to remove an HACMP RAID concurrent-mode volume group:
1. Ensure that the volume group is varied on.
2. Enter smitty vg at the command prompt.
3. Select Remove a Volume Group from the displayed menu. A screen similar to the following example is displayed. Enter the information appropriate for your environment. The following example shows how to remove an HACMP RAID concurrent-mode volume group using the con_vg volume group.

************************************************************************
Remove a Volume Group
Type or select values in the entry fields.
Press Enter AFTER making all desired changes.
                                                 [Entry Fields]
VOLUME GROUP name                                [con_vg]
************************************************************************

Extending HACMP RAID concurrent-mode volume groups

Perform the following steps to extend an HACMP RAID concurrent-mode volume group:
1. Vary off the HACMP RAID concurrent-mode volume group to be extended on all nodes.
2. Enter smitty datapath_extendvg at the command prompt of one of the nodes.
3. A screen similar to the following example is displayed. Enter the information appropriate for your environment. The following example shows how to extend an HACMP RAID concurrent-mode volume group using the con_vg on an SDD vpath2.

****************************************************************
Add a Datapath Physical Volume to a Volume Group
Type or select values in the entry fields.
Press Enter AFTER making all desired changes.
                                                 [Entry Fields]
VOLUME GROUP name                                [con_vg]
PHYSICAL VOLUME names                            [vpath2]
*****************************************************************

4. Vary off the volume group after extending it on the current node.
5. For all the nodes sharing con_vg, do the following:
   a. Enter the chdev -l vpath2 -a pv=yes command to obtain the pvid for this vpath on the other host.
   b. Verify that the pvid exists by issuing the lspv command.
   c. Enter importvg -L con_vg vpath2 to import the volume group again.
   d. Verify that con_vg has the extended vpath included by using the lspv command.

Reducing HACMP RAID concurrent-mode volume groups

Perform the following steps to reduce an HACMP RAID concurrent-mode volume group:
1. Vary off the HACMP RAID concurrent-mode volume group to be reduced on all nodes.
2. Enter smitty vg at the command prompt.
3. Select Set Characteristics of a Volume Group from the displayed menu.
4. Select Remove a Physical Volume from a Volume Group from the displayed menu.
5. A screen similar to the following example is displayed. Enter the information appropriate for your environment. Assume that con_vg originally has vpath0 and vpath1 as its physical volumes. The following example shows how to reduce an HACMP RAID concurrent-mode volume group using the con_vg on an SDD vpath1.

************************************************************************
Remove a Physical Volume from a Volume Group
Type or select values in the entry fields.
Press Enter AFTER making all desired changes.
                                                 [Entry Fields]
VOLUME GROUP name                                [con_vg]
PHYSICAL VOLUME names                            [vpath1]
FORCE deallocation of all partitions             yes
************************************************************************

6. Vary off the volume group after reducing it on the current node.
7. For all the nodes sharing con_vg, do the following:
   a. Enter smitty importvg at the command prompt.
   b. A screen similar to the following is displayed. Enter the information appropriate for your environment.

***************************************************************
Import a Volume Group
Type or select values in entry fields.
Press Enter AFTER making all desired changes.
                                                 [Entry Fields]
VOLUME GROUP name                                [con_vg]
PHYSICAL VOLUME name                             [vpath0]   +
Volume Group MAJOR NUMBER                        [45]       +#
Make this VG Concurrent Capable?                 No         +
Make default varyon of VG Concurrent?            no         +
***************************************************************

   c. Verify that con_vg has the vpath reduced by using the lspv command.

Exporting HACMP RAID concurrent-mode volume groups

To export an HACMP RAID concurrent-mode volume group, enter exportvg at the command prompt. For example, to export con_vg, use the exportvg con_vg command.

Note: Before exporting an HACMP RAID concurrent-mode volume group, make sure the volume group is varied off.

Enhanced concurrent-capable volume groups

With the AIX51 TL02 and HACMP 4.4.1.4 environments, enhanced concurrent mode is supported with both 32-bit and 64-bit kernels. The advantage of this mode is that after you create an enhanced concurrent-capable volume group on multiple nodes, the changes made to the logical volume or volume group structures on one node (for example, extending or reducing a volume group) are propagated to all other nodes. Also, the Logical Volume Manager (LVM) configuration files are updated on all nodes.

The following sections provide information and instructions on the operating actions that you can perform. For more detailed information on enhanced concurrent-capable volume groups, see "Supporting enhanced concurrent mode in an HACMP environment" on page 67.

Creating enhanced concurrent-capable volume groups:

Perform the following steps to create enhanced concurrent-capable volume groups:

Note: On each node in the cluster, issue the lvlstmajor command to determine the next common available major number; the volume groups must be created with a major number that is available on all nodes. The following listing is an example:

dollar>lvlstmajor
41,54..58,61..62,67,78,80,...
monkey>lvlstmajor
39..49,55,58,67,80,...
zebra>lvlstmajor
40..49,55,58,67..78,80,...

From this listing, the next common available major number can be selected (55, 58, 67, 80, ...). However, if multiple volume groups are going to be created, the user might begin with the highest available (80) and increase by increments from there.

1. Enter smitty datapath_mkvg at the command prompt.

2. A screen similar to the following example is displayed. Enter the information appropriate for your environment. The following example shows how to create an enhanced concurrent-capable volume group using the con_vg on an SDD vpath0.

Add a Volume Group with Data Path Devices
Type or select values in entry fields.
Press Enter AFTER making all desired changes.

                                                         [Entry Fields]
  VOLUME GROUP name                                      [con_vg]
  Physical partition SIZE in megabytes                             +
* PHYSICAL VOLUME names                                  [vpath0]  +
  Force the creation of a volume group?                  no        +
  Activate volume group AUTOMATICALLY at system restart? no        +
  Volume group MAJOR NUMBER                              [80]      +#
  Create VG Concurrent Capable?                          yes       +
  Auto-varyon in Concurrent Mode?                        no        +
  Create a big VG format Volume Group?                   no        +
  LTG Size in kbytes                                     128       +

Importing enhanced concurrent-capable volume groups:

Perform the following steps to import enhanced concurrent-capable volume groups.

Before importing enhanced concurrent-capable volume groups on SDD vpath devices, issue the lspv command to make sure there is a pvid on the SDD vpath device. Verify that the volume group is not varied on for any of the nodes in the cluster prior to attempting to retrieve the pvid. If a pvid is not displayed, import the volume group on one of the hdisks that belongs to the SDD vpath device, and enter hd2vp to convert the volume group to SDD vpath devices. If the hdisks do not have a pvid, run chdev -l hdiskX -a pv=yes to recover it; to verify that the pvid now exists, run the lspv command against the hdisk. This method can also be used when attempting to obtain a pvid on an SDD vpath device.

1. Enter smitty importvg at the command prompt.
2. A screen similar to the following example is displayed. Enter the information appropriate to your environment. The following example shows how to import an enhanced concurrent-capable volume group using the con_vg on SDD vpath device vpath3.

********************************************************************************
Import a Volume Group
Type or select values in the entry fields.
Press Enter AFTER making all desired changes.
                                                 [Entry Fields]
VOLUME GROUP name                                [con_vg]
PHYSICAL VOLUME names                            [vpath3]
Volume Group MAJOR NUMBER                        [45]
Make this VOLUME GROUP concurrent-capable?       yes
Make default varyon of VOLUME GROUP concurrent?  no
********************************************************************************

Note: The major number identified must be the same one used when the volume group was first created.

Extending enhanced concurrent-capable volume groups:

Note: Before attempting to extend the concurrent volume group, ensure that pvids exist on the SDD vpath device/hdisks on all nodes in the cluster.

Perform the following steps to extend an enhanced concurrent-capable volume group:
1. Enter smitty datapath_extendvg at the command prompt.
2. A screen similar to the following is displayed. Enter the information appropriate for your environment. The following example shows how to extend an enhanced concurrent-capable volume group using the con_vg on SDD vpath device vpath2.

********************************************************************************
Add a Datapath Physical Volume to a Volume Group
Type or select values in the entry fields.
Press Enter AFTER making all desired changes.
                                                 [Entry Fields]
VOLUME GROUP name                                [con_vg]
PHYSICAL VOLUME names                            [vpath2]
********************************************************************************

Note: Verify that extending of enhanced concurrent-capable volume groups worked on the particular node and that all changes were propagated to all other nodes in the cluster, using the lsvpcfg command.

Reducing enhanced concurrent-capable volume groups:

Perform the following steps to reduce an enhanced concurrent-capable volume group:
1. Enter smitty vg at the command prompt.
2. Select Set Characteristics of a Volume Group from the displayed menu.
3. Select Remove a Physical Volume from a Volume Group from the displayed menu.
4. A screen similar to the following is displayed. Enter the information appropriate for your environment. The following example shows how to reduce an enhanced concurrent-capable volume group using the con_vg on SDD vpath device vpath2.

********************************************************************************
Remove a Physical Volume from a Volume Group
Type or select values in the entry fields.
Press Enter AFTER making all desired changes.
                                                 [Entry Fields]
VOLUME GROUP name                                [con_vg]
PHYSICAL VOLUME names                            [vpath2]
FORCE deallocation of all partitions             yes
********************************************************************************

Note: Verify that reducing of volume groups worked on the particular node and that all changes were propagated to all other nodes in the cluster, using the lsvpcfg command.

Recovering paths that are lost during HACMP node fallover that is caused when a system locks up

Typically, if an active node locks up, HACMP transfers ownership of shared disks and other resources through a process known as node fallover.

Certain situations, such as a loose or disconnected SCSI or fibre-channel-adapter card, can cause your SDD vpath devices to lose one or more underlying paths after the failed node is restarted. Perform the following steps to recover these paths:
v Make sure the issue that is causing the lost paths is fixed. Then run the cfgmgr command to configure all the underlying paths (hdisks) to the Available state.
v Enter the addpaths command to add the lost paths back to the SDD devices.

If your SDD vpath devices have lost one or more underlying paths that belong to an active volume group, you can use either the Add Paths to Available Data Path Devices SMIT panel or the addpaths command from the AIX command line to recover the lost paths. Go to "Dynamically adding paths to SDD vpath devices" on page 47 for more information about the addpaths command.

Note: Running the cfgmgr command while the SDD vpath devices are in the Available state will not recover the lost paths; you must run the addpaths command to recover the lost paths.

Supporting enhanced concurrent mode in an HACMP environment

To run HACMP in this enhanced concurrent mode, you need:
v The ESCRM feature of HACMP
v bos.clvm.enh and bos.rte.lvm filesets installed at level 5.1.0.10 (or later) on all the nodes
v SDD 1.3.2.9 (or later)

SDD 1.3.2.9 (or later) provides the updated version of mkvg4vp and the smit panel for the user to create enhanced concurrent-capable volume groups. To create enhanced concurrent-capable volume groups from the command line, the user needs to turn on the -c (in 32-bit kernel) or the -C (in 64-bit kernel) option for the mkvg4vp command. To create enhanced concurrent-capable volume groups from the smit panel, set Create Volume Group concurrent-capable? to yes. Both ways will leave the enhanced concurrent-capable volume group in varied-off mode. Import this concurrent volume group to all other nodes and add the volume group into the HACMP concurrent resource group, and then start the HACMP cluster. The volume group will be varied on by HACMP. After the changes are made to one volume group on one node, all changes are automatically propagated to the other nodes.

For more detailed information and instructions on creating, removing, reducing, importing, and exporting enhanced concurrent-capable volume groups, see "Enhanced concurrent-capable volume groups" on page 64.

SDD server daemon

The SDD server (sddsrv) is an integrated component of SDD 1.3.2.9 (or later). This component consists of a UNIX application daemon that is installed in addition to the SDD device driver. See Chapter 12, "Using the SDD server and the SDDPCM server," on page 415 for more information about sddsrv.

Attention: Running sddsrv will activate several AIX Fibre Channel Protocol or adapter driver problems on AIX 4.3.3 and 5.1.0. One of the problems in the AIX Fibre Channel Protocol driver is that internal resources can be leaked. You will experience this as a performance degradation that grows worse over time; performance can be restored by unconfiguring and reconfiguring the fibre-channel adapter or by rebooting the system. AIX users with Fibre Channel Protocol support and the SDD server daemon installed should apply the PTFs listed in "PTFs for APARs on AIX with Fibre Channel and the SDD server" on page 69.

Verifying if the SDD server has started

After you have installed SDD, verify if the SDD server (sddsrv) has automatically started by entering lssrc -s sddsrv. If the SDD server (sddsrv) has automatically started, the output from the lssrc -s sddsrv command looks like this:

Subsystem         GROUP            PID          Status
 sddsrv                            NNN          Active

where NNN is the process ID number. The status of sddsrv should be Active if the SDD server has automatically started. If the SDD server has not started, the status will be Inoperative. Go to “Starting the SDD server manually” to proceed.

Note: During OS installations and migrations, the following command could be added to /etc/inittab:

install_assist:2:wait:/usr/sbin/install_assist </dev/console>/dev/console 2>&1

Because this command runs in the foreground, it will prevent all the subsequent commands in the script from starting. If sddsrv happens to be behind this line, sddsrv will not run after system reboot. You should check /etc/inittab during OS installations or migrations and comment out this line.

Starting the SDD server manually

If the SDD server did not start automatically after you performed the SDD installation, you can start sddsrv by entering startsrc -s sddsrv. Go to “Verifying if the SDD server has started” to verify that the SDD server started successfully.

Changing to a different port number for the SDD server

To change to a different port number for the SDD server, see the instructions in “Changing the sddsrv or pcmsrv TCP/IP port number” on page 419.

Stopping the SDD server

You can temporarily disable sddsrv by entering the command stopsrc -s sddsrv. This will stop the current version of sddsrv, but sddsrv will start again if the system is restarted. If sddsrv is stopped, the feature that automatically recovers failed paths (DEAD or CLOSE_DEAD path) is disabled. During the concurrent storage bay quiesce/resume process, you must manually recover the adapter or paths after the quiesce/resume is completed on one bay, and before the quiesce/resume starts on the next bay. Without doing so, the application might fail. See “datapath set device path” on page 447 for more information.
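As a quick illustration of the status check, the following portable helper parses lssrc-style output and reports whether the sddsrv subsystem is active. The helper name check_sddsrv and the sample text are hypothetical additions, not part of SDD; on a real AIX host you would pipe the output of lssrc -s sddsrv into it.

```shell
# check_sddsrv: read `lssrc -s sddsrv` style output on stdin and print
# "running" if the sddsrv subsystem reports an Active status,
# "stopped" if it is listed but not active, "unknown" otherwise.
check_sddsrv() {
    awk 'tolower($1) == "sddsrv" {
             found = 1
             if (tolower($NF) == "active") print "running"; else print "stopped"
         }
         END { if (!found) print "unknown" }'
}

# Sample output as shown in the manual (12345 stands in for the PID NNN):
printf 'Subsystem         GROUP            PID          Status\n sddsrv                            12345        Active\n' | check_sddsrv
```

The last line prints running for the sample input; an Inoperative status would print stopped.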

Replacing the SDD server with a stand-alone version

You can also choose to replace the current version of sddsrv with a stand-alone version by performing the following steps:
1. Run stopsrc -s sddsrv to stop the current version of sddsrv.
2. Verify that the SDD server has stopped. See “Verifying if the SDD server has started” on page 68; the status should be Inoperative.
3. Comment out following line: srv:2:wait:/usr/bin/startsrc -s sddsrv > /dev/null 2>&1 in the system /etc/inittab table.
4. Add following line: srv:2:wait:/usr/bin/startsrc -a s0 -s sddsrv > /dev/null 2>&1 to the system /etc/inittab table.
5. Run startsrc -a s0 -s sddsrv to start a stand-alone version of sddsrv.

Starting sddsrv with the s0 flag does not provide path health check or path reclamation functions. You should manually recover paths by using the datapath command. See “datapath set device path” on page 447 for more information.

PTFs for APARs on AIX with Fibre Channel and the SDD server

If you have fibre-channel support and the SDD server daemon running, PTFs for the APARs shown in Table 12 must be applied to your AIX servers in order to avoid a performance degradation.

Table 12. PTFs for APARs on AIX with fibre-channel support and the SDD server daemon running

AIX version   APAR                                               PTF
AIX 5.1       IY32325 (available in either of
              devices.pci.df1000f7.com 5.1.0.28 or 5.1.0.35)    U476971, U482718
AIX 5.1       IY37437 (available in
              devices.pci.df1000f7.com 5.1.0.36)                U483680
AIX 4.3.3     IY35177 (available in
              devices.pci.df1000f7.com 4.3.3.84)                U483803
AIX 4.3.3     IY37841 (available in
              devices.pci.df1000f7.com 4.3.3.86)                U484723

If you experience a degradation in performance, you should disable sddsrv until the PTFs for these APARs can be installed. After the PTFs for these APARs are installed, you should re-enable sddsrv. If you are running IBM TotalStorage Expert, see “Replacing the SDD server with a stand-alone version.” Otherwise, see “Stopping the SDD server” on page 68.
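The inittab edit in steps 3 and 4 can also be performed with the AIX inittab helper commands instead of a text editor. This is a sketch, not part of the manual's procedure; it assumes the sddsrv entry uses the identifier srv shown in the lines above.

```shell
stopsrc -s sddsrv                 # step 1: stop the current daemon
lssrc -s sddsrv                   # step 2: status should now be Inoperative
rmitab srv                        # steps 3-4: swap the /etc/inittab entry
mkitab 'srv:2:wait:/usr/bin/startsrc -a s0 -s sddsrv > /dev/null 2>&1'
startsrc -a s0 -s sddsrv          # step 5: start the stand-alone version
```

Remember that the s0 version provides no path health check, so paths must be recovered manually with the datapath command.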

Dynamically changing the SDD path-selection policy algorithm

SDD 1.3.3.9 (or later) supports dynamically changing the SDD devices path-selection policy. The following path-selection policies are supported:

failover only (fo)
   All I/O operations for the device are sent to the same (preferred) path until the path fails because of I/O errors. Then an alternate path is chosen for subsequent I/O operations.

load balancing (lb)
   The path to use for an I/O operation is chosen by estimating the load on the adapter to which each path is attached. The load is a function of the number of I/O operations currently in process. If multiple paths have the same load, a path is chosen at random from those paths. Load-balancing mode also incorporates failover protection.
   Note: The load-balancing policy is also known as the optimized policy.

load balancing sequential (lbs)
   This policy is the same as the load-balancing policy with optimization for sequential I/O.

round robin (rr)
   The path to use for each I/O operation is chosen at random from paths that were not used for the last I/O operation. If a device has only two paths, SDD alternates between the two. If there are multiple (preferred) paths on multiple adapters, I/O operation on each adapter will not be balanced among the adapters based on the load of each adapter.

round robin sequential (rrs)
   This policy is the same as the round-robin policy with optimization for sequential I/O.

default (df)
   The policy is set to the default policy, which is load balancing.

The default path-selection policy for an SDD device is load balancing. The path-selection policy is set at the SDD device level. Before changing the path-selection policy, determine the active attributes for the SDD device. Enter the lsattr -El vpathN command, where N represents the vpath number, and press Enter. The output should look similar to this:

# lsattr -El vpath0
active_hdisk    hdisk66/13AB2ZA1020/fscsi3        Active hdisk                  False
active_hdisk    hdisk2/13AB2ZA1020/fscsi2         Active hdisk                  False
active_hdisk    hdisk34/13AB2ZA1020/fscsi2        Active hdisk                  False
active_hdisk    hdisk98/13AB2ZA1020/fscsi3        Active hdisk                  False
policy          df                                Scheduling Policy             True
pvid            0005f9fdcda4417d0000000000000000  Physical volume identifier    False
qdepth_enable   no                                Queue Depth Control           True
reserve_policy  PR_exclusive                      Reserve Policy                True
serial_number   13AB2ZA1020                       LUN serial number             False
unique_id       yes                               Device Unique Identification  False

datapath set device policy command

Use the datapath set device policy command to change the SDD path-selection policy dynamically:
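A typical check-and-change sequence looks like the following sketch. It must be run on an AIX host with SDD; the device number 0 and vpath0 are placeholders for whichever device you are changing.

```shell
lsattr -El vpath0 | grep policy   # current policy ('df' = default, load balancing)
datapath set device 0 policy rr   # switch device 0 to round robin
datapath set device 0 policy df   # revert to the default (load balancing)
```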

Note: You can enter the datapath set device N policy command to change the policy dynamically associated with vpaths in either Close or Open state. You can use this command even when the I/O is actively running. See “datapath set device policy” on page 446 for more information about the datapath set device policy command.

Dynamically opening an invalid or close_dead path

With SDD 1.3.2.9 (or later), you can issue the datapath open path command to dynamically open a path that is in an INVALID or CLOSE_DEAD state if the SDD vpath device it belongs to is in the OPEN state. See “datapath open device path” on page 426 in Chapter 13, “Using the datapath commands,” on page 421 for more information.

Fibre-channel dynamic device tracking for AIX 5.20 TL1 (and later)

This section applies only to AIX 5.20 TL1 and later releases. Beginning with AIX 5.20 TL1, the AIX fibre-channel driver supports fibre-channel dynamic device tracking. This enables the dynamic changing of fibre-channel cable connections on switch ports or on supported storage ports without unconfiguring and reconfiguring hdisk and SDD vpath devices. With dynamic tracking enabled, the fibre-channel adapter detects the change of the device's fibre-channel node port ID. It reroutes the traffic that is destined for that device to the new worldwide port name (WWPN) while the device is still online. SDD 1.6.0.0 and later support this feature. SDD 1.6.0.0 and later support all disk storage system devices.

This feature allows for the following scenarios to occur without I/O failure:
1. Combine two switches in two SANs into one SAN by connecting switches with cable and cascading switches within 15 seconds.
2. Change connection to another switch port; the disconnected cable must be reconnected within 15 seconds.
3. Swap switch ports of two cables on the SAN; the disconnected cable must be reconnected within 15 seconds. The switch ports must be in the same zone on the same switch.
4. Swap ports of two cables on disk storage system; the disconnected cable must be reconnected within 15 seconds.

Note: This 15 seconds includes the time to bring up the fibre channel link after you reconnect the cables. Thus the actual time that you can leave the cable disconnected is less than 15 seconds. For disk storage systems, it takes approximately 5 seconds to bring up the fibre channel link after the fibre channel cables are reconnected.

By default, dynamic tracking is disabled. Use the following procedure to enable dynamic tracking:
1. Issue the rmdev -l fscsiX -R for all adapters on your system to change all the child devices of fscsiX on your system to the defined state.
2. Issue the chdev -l fscsiX -a dyntrk=yes command for all adapters on your system.
3. Run cfgmgr to reconfigure all devices back to the available state.
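The three-step enablement procedure can be sketched as follows for a host with two fibre-channel adapters. This is an illustrative sketch for an AIX host; fscsi0 and fscsi1 are placeholder adapter names, and the loop should cover every fscsiX adapter on your system.

```shell
# Enable fibre-channel dynamic device tracking on each FC adapter.
for f in fscsi0 fscsi1; do
    rmdev -l "$f" -R              # move fscsiX and its children to Defined
    chdev -l "$f" -a dyntrk=yes   # turn dynamic tracking on
done
cfgmgr                            # reconfigure devices back to Available
```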

To use Fibre-channel Dynamic Device Tracking, you need the following fibre-channel device driver PTFs applied to your system:
v U486457.bff (This is a prerequisite PTF.)
v U486473.bff (This is a prerequisite PTF.)
v U488821.bff
v U488808.bff

After applying the PTFs listed above, use the lslpp command to ensure that the files devices.fcp.disk.rte and devices.pci.df1000f7.com are at level 5.2.0.14 or later.

Note: Fibre-channel device dynamic tracking does not support the following case: The port change on the supported storage devices where a cable is moved from one adapter to another free, previously unseen adapter on the disk storage system. The World Wide Port Name will be different for that previously unseen adapter, and tracking will not be possible. The World Wide Port Name is a static identifier of a remote port.

Understanding SDD 1.3.2.9 (or later) support for single-path configuration for supported storage devices

SDD 1.3.2.9 (or later) does not support concurrent download of licensed machine code in single-path mode. However, SDD does support single-path SCSI or fibre-channel connection from your AIX host system to supported storage devices. It is possible to create a volume group or an SDD vpath device with only a single path. However, because SDD cannot provide single-point-failure protection and load balancing with a single-path configuration, you should not use a single-path configuration.

Tip: It is also possible to change from single-path to multipath configuration by using the addpaths command. For more information about the addpaths command, go to “Dynamically adding paths to SDD vpath devices” on page 47.

Understanding the persistent reserve issue when migrating from SDD to non-SDD volume groups after a system reboot

There is an issue with migrating from SDD to non-SDD volume groups after a system reboot. This issue only occurs if the SDD volume group was varied on prior to the system reboot and auto varyon was not set when the volume group was created. After the system reboot, the volume group will not be varied on. The command to migrate from SDD to non-SDD volume group (vp2hd) will succeed, but a subsequent command to vary on the volume group will fail. This is because during the reboot, the persistent reserve on the physical volume of the volume group was not released, so when you vary on the volume group, the command will do a SCSI-2 reserve and fail with a reservation conflict.

There are two ways to avoid this issue.
1. Unmount the filesystems and vary off the volume groups before rebooting the system.
2. Issue lquerypr -Vh /dev/vpathX on the physical LUN before varying on volume groups after the system reboot. If the LUN is reserved by the current host, release the reserve by issuing the lquerypr -Vrh /dev/vpathX command. After successful processing, you can vary on the volume group successfully.
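Option 2 above, applied after a reboot, looks like the following sketch. It must be run on an AIX host with SDD; vpath10 and vpathvg are placeholders for the affected LUN and volume group.

```shell
lquerypr -Vh /dev/vpath10     # query the persistent reserve state of the LUN
lquerypr -Vrh /dev/vpath10    # if it is reserved by this host, release it
varyonvg vpathvg              # the vary-on should now succeed
```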

Managing secondary-system paging space

SDD 1.3.2.6 (or later) supports secondary-system paging on multipath fibre-channel SDD vpath device from an AIX 4.3.3, AIX 5.1, AIX 5.2, AIX 5.3, or AIX 6.1 host system to a supported storage device. SDD supports secondary-system paging on supported storage devices. The benefits are multipathing to your paging spaces. All the same commands for hdisk-based volume groups apply to using vpath-based volume groups for paging spaces.

Note: AIX does not recommend moving the primary paging space out of rootvg. Doing so might mean that no paging space is available during the system startup, which can result in poor startup performance. Do not redefine your primary paging space using SDD vpath devices.

The following sections provide information about managing secondary-system paging space.

Listing paging spaces
You can list paging spaces by entering:
lsps -a

Adding a paging space
You can add a paging space by entering:
mkps -a -n -sNN vg
The mkps command recognizes the following options and arguments:
-a     Makes the new paging space available at all system restarts.
-n     Activates the new paging space immediately.
-sNN   Accepts the number of logical partitions (NN) to allocate to the new paging space.
vg     The volume group name in which a paging logical volume is to be created.

Removing a paging space
You can remove a specified secondary paging space that is not active. For example, to remove paging space PS01, enter:
rmps PS01

Verifying load-balancing and failover protection

SDD provides load-balancing and failover protection for AIX applications and for the LVM when SDD vpath devices are used. These devices must have a minimum of two paths to a physical LUN for failover protection to exist.
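Putting the three paging-space commands together, a typical session on an AIX host might look like the following sketch; vpathvg, the 30-partition size, and PS01 are placeholder values.

```shell
mkps -a -n -s30 vpathvg   # create a 30-LP paging space on vpathvg, activate
                          # it now (-n) and at every restart (-a)
lsps -a                   # confirm the new paging space is listed and active
rmps PS01                 # later: remove a secondary paging space once inactive
```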

Displaying the supported storage device SDD vpath device configuration

To provide failover protection, an SDD vpath device must have a minimum of two paths. Both the SDD vpath device and the hdisk devices must be in the Available state. In the following example, vpath0, vpath1, and vpath2 all have a single path and, therefore, will not provide failover protection because there is no alternate path to the LUN. The other SDD vpath devices have two paths and, therefore, can provide failover protection.

To display which supported storage device SDD vpath devices are available to provide failover protection, use either the Display Data Path Device Configuration SMIT panel, or run the lsvpcfg command. Perform the following steps to use SMIT:
1. Enter smitty device from your desktop window. The Devices panel is displayed.
2. Select Data Path Devices and press Enter. The Data Path Devices panel is displayed.
3. Select Display Data Path Device Configuration and press Enter.
4. To display the state (either Defined or Available) of all SDD vpath devices and the paths to each device, select all devices for Select Query Option, leave Device Name/Device Model blank, and press Enter.

You will see an output similar to the following example:

vpath0 (Avail pv vpathvg) 018FA067 = hdisk1 (Avail )
vpath1 (Avail ) 019FA067 = hdisk2 (Avail )
vpath2 (Avail ) 01AFA067 = hdisk3 (Avail )
vpath3 (Avail ) 01BFA067 = hdisk4 (Avail ) hdisk27 (Avail )
vpath4 (Avail ) 01CFA067 = hdisk5 (Avail ) hdisk28 (Avail )
vpath5 (Avail ) 01DFA067 = hdisk6 (Avail ) hdisk29 (Avail )
vpath6 (Avail ) 01EFA067 = hdisk7 (Avail ) hdisk30 (Avail )
vpath7 (Avail ) 01FFA067 = hdisk8 (Avail ) hdisk31 (Avail )
vpath8 (Avail ) 020FA067 = hdisk9 (Avail ) hdisk32 (Avail )
vpath9 (Avail pv vpathvg) 02BFA067 = hdisk20 (Avail ) hdisk44 (Avail )
vpath10 (Avail pv vpathvg) 02CFA067 = hdisk21 (Avail ) hdisk45 (Avail )
vpath11 (Avail pv vpathvg) 02DFA067 = hdisk22 (Avail ) hdisk46 (Avail )
vpath12 (Avail pv vpathvg) 02EFA067 = hdisk23 (Avail ) hdisk47 (Avail )
vpath13 (Avail pv vpathvg) 02FFA067 = hdisk24 (Avail ) hdisk48 (Avail )

The following information is displayed:
v The name of each SDD vpath device, such as vpath1.
v The configuration state of the SDD vpath device. It is either Defined or Available. There is no failover protection if only one path is in the Available state. At least two paths to each SDD vpath device must be in the Available state to have failover protection.
Attention: The configuration state also indicates whether the SDD vpath device is defined to AIX as a physical volume (pv flag). If pv is displayed for both SDD vpath devices and the hdisk devices that it is comprised of, you might not have failover protection. Enter the dpovgfix command to fix this problem.
v The name of the volume group to which the device belongs, such as vpathvg.
v The unit serial number of the supported storage device LUN, such as 019FA067.
v The names of the AIX disk devices that comprise the SDD vpath devices, their configuration states, and the physical volume states. See “lsvpcfg” on page 88 for information about the lsvpcfg command.
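Single-path devices can also be spotted mechanically. The following portable helper, a hypothetical addition not part of SDD, reads lsvpcfg-style output and prints every vpath that lists only one hdisk, meaning it has no failover protection; on a real AIX host you would pipe lsvpcfg into it.

```shell
# single_path_vpaths: read `lsvpcfg`-style output on stdin and print the
# vpath devices that list fewer than two hdisk paths.
single_path_vpaths() {
    awk '/^vpath/ { if (gsub(/hdisk[0-9]+/, "&") < 2) print $1 }'
}

# Demonstration on two lines from the example output above:
single_path_vpaths <<'EOF'
vpath0 (Avail pv vpathvg) 018FA067 = hdisk1 (Avail )
vpath3 (Avail ) 01BFA067 = hdisk4 (Avail ) hdisk27 (Avail )
EOF
```

Only vpath0 is printed; vpath3 has two hdisks and therefore has failover protection.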

You can also use the datapath command to display information about an SDD vpath device. This command displays the number of paths to the device. For example, the datapath query device 10 command might produce this output:

DEV#: 10 DEVICE NAME: vpath10 TYPE: 2105B09 POLICY: Optimized
SERIAL: 02CFA067
==================================================================
Path#      Adapter/Hard Disk     State     Mode      Select   Errors
    0          scsi6/hdisk21      OPEN     NORMAL        44        0
    1          scsi5/hdisk45      OPEN     NORMAL        43        0

The sample output shows that device vpath10 has two paths and both are operational. See “datapath query device” on page 431 for more information about the datapath query device command.

Configuring volume groups for failover protection

You can create a volume group with SDD vpath devices using the Volume Groups SMIT panel. Choose the SDD vpath devices that have failover protection for the volume group. It is possible to create a volume group that has only a single path (see 74) and then add paths later by reconfiguring the supported storage device. (See “Dynamically adding paths to SDD vpath devices” on page 47 for information about adding paths to an SDD device.) However, an SDD volume group does not have failover protection if any of its physical volumes has only a single path.

Perform the following steps to create a new volume group with SDD vpath devices:
1. Enter smitty at the AIX command prompt. The System Management Interface Tool (SMIT) is displayed.
2. Select System Storage Management (Physical & Logical Storage) and press Enter. The System Storage Management (Physical & Logical Storage) panel is displayed.
3. Select Logical Volume Manager and press Enter. The Logical Volume Manager panel is displayed.
4. Select Volume Group and press Enter. The Volume Groups panel is displayed.
5. Select Add a Volume Group with Data Path Devices and press Enter.

Note: Press F4 while highlighting the PHYSICAL VOLUME names field to list all the available SDD vpath devices.

If you use a script file to create a volume group with SDD vpath devices, you must modify your script file and replace the mkvg command with the mkvg4vp command. All the functions that apply to a regular volume group also apply to an SDD volume group. Use SMIT to create a logical volume (mirrored, striped, or compressed) or a file system (mirrored, striped, or compressed) on an SDD volume group. After you create the volume group, AIX creates the SDD vpath device as a physical volume (pv). In the output shown on page 74, vpath9 through vpath13 are included in a volume group and they become physical volumes.
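In a script, the substitution the text describes is a one-line change. The following sketch uses the example devices from this chapter; the volume group name and device list are placeholders for your own.

```shell
# Instead of:  mkvg -y vpathvg hdisk20 hdisk21 ...
# use the SDD-aware variant on the vpath devices:
mkvg4vp -y vpathvg vpath9 vpath10 vpath11 vpath12 vpath13
```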

To list all the physical volumes known to AIX, use the lspv command. Any SDD vpath devices that were created into physical volumes are included in output similar to the following:

hdisk0    0001926922c706b2   rootvg
hdisk1    none               None
...
hdisk10   none               None
hdisk11   00000000e7f5c88a   None
...
hdisk48   none               None
hdisk49   00000000e7f5c88a   None
vpath0    00019269aa5bc858   None
vpath1    none               None
vpath2    none               None
vpath3    none               None
vpath4    none               None
vpath5    none               None
vpath6    none               None
vpath7    none               None
vpath8    none               None
vpath9    00019269aa5bbadd   vpathvg
vpath10   00019269aa5bc4dc   vpathvg
vpath11   00019269aa5bc670   vpathvg
vpath12   000192697f9fd2d3   vpathvg
vpath13   000192697f9fde04   vpathvg

To display the devices that comprise a volume group, enter the lsvg -p vg-name command. For example, the lsvg -p vpathvg command might produce the following output:

PV_NAME   PV STATE   TOTAL PPs   FREE PPs   FREE DISTRIBUTION
vpath9    active     29          4          00..00..00..00..04
vpath10   active     29          4          00..00..00..00..04
vpath11   active     29          4          00..00..00..00..04
vpath12   active     29          4          00..00..00..00..04
vpath13   active     29          28         06..05..06..05..06

The example output indicates that the vpathvg volume group uses physical volumes vpath9 through vpath13.

Losing failover protection

AIX can create volume groups only from hdisk or SDD vpath devices that are physical volumes. If a volume group is created using a device that is not a physical volume, AIX makes it a physical volume as part of the procedure of creating the volume group. A physical volume has a physical volume identifier (pvid) written on its sector 0 and also has a pvid attribute attached to the device attributes in the CuAt ODM.
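A pvid that shows up on more than one device in lspv output is the warning sign for the problems described in the next sections. The following portable helper, a hypothetical addition not part of SDD, finds such duplicates; on a real AIX host you would pipe lspv into it.

```shell
# duplicate_pvids: read `lspv`-style output (name pvid volume-group) on stdin
# and print every pvid that appears on more than one device.
duplicate_pvids() {
    awk '$2 != "none" { count[$2]++ }
         END { for (p in count) if (count[p] > 1) print p }'
}

# Demonstration on three rows from the sample output above:
duplicate_pvids <<'EOF'
hdisk11 00000000e7f5c88a None
hdisk49 00000000e7f5c88a None
vpath9  00019269aa5bbadd vpathvg
EOF
```

Here the shared pvid 00000000e7f5c88a is reported, because two devices carry it.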

In some cases, access to data is not lost, but failover protection might not be present. Failover protection can be lost in several ways:
v Losing a device path
v Creating a volume group from single-path SDD vpath devices
v A side effect of running the disk change method
v Running the mksysb restore command
v Manually deleting devices and running the configuration manager (cfgmgr)

The following sections provide more information about the ways that failover protection can be lost.

Losing a device path

Due to hardware errors, SDD might remove one or more nonfunctional paths from an SDD vpath device. The states of these nonfunctional paths are marked as Dead, Invalid, or Close_Dead by SDD. An SDD vpath device loses failover protection if it has only one functional path left. To determine if any of the SDD vpath devices have lost failover protection due to nonfunctional paths, use the datapath query device command to show the state of the paths to an SDD vpath device.

Creating a volume group from single-path SDD vpath devices

A volume group created using any single-path SDD vpath device does not have failover protection because there is no alternate path to the supported storage device LUN.

A side effect of running the disk change method

It is possible to modify attributes for an hdisk device by running the chdev command. The chdev command invokes the hdisk configuration method to make the requested change. In addition, the hdisk configuration method sets the pvid attribute for an hdisk if it determines that the hdisk has a pvid written on sector 0 of the LUN. This causes the SDD vpath device and one or more of its hdisks to have the same pvid attribute in the ODM. If the volume group containing the SDD vpath device is activated, the LVM uses the first device it finds in the ODM with the required pvid to activate the volume group. If you run the lspv command at this point, output similar to the following is displayed:

hdisk0    0001926922c706b2   rootvg
hdisk1    none               None
...
hdisk10   none               None
hdisk11   00000000e7f5c88a   None
...
hdisk48   none               None
hdisk49   00000000e7f5c88a   None
vpath0    00019269aa5bc858   None
vpath1    none               None
vpath2    none               None
vpath3    none               None
vpath4    none               None
vpath5    none               None
vpath6    none               None
vpath7    none               None
vpath8    none               None
vpath9    00019269aa5bbadd   vpathvg
vpath10   00019269aa5bc4dc   vpathvg
vpath11   00019269aa5bc670   vpathvg
vpath12   000192697f9fd2d3   vpathvg
vpath13   000192697f9fde04   vpathvg

However, if you issue the lsvpcfg command, the following output is displayed:

vpath0 (Avail pv vpathvg) 018FA067 = hdisk1 (Avail )
vpath1 (Avail ) 019FA067 = hdisk2 (Avail )
vpath2 (Avail ) 01AFA067 = hdisk3 (Avail )
vpath3 (Avail ) 01BFA067 = hdisk4 (Avail ) hdisk27 (Avail )
vpath4 (Avail ) 01CFA067 = hdisk5 (Avail ) hdisk28 (Avail )
vpath5 (Avail ) 01DFA067 = hdisk6 (Avail ) hdisk29 (Avail )
vpath6 (Avail ) 01EFA067 = hdisk7 (Avail ) hdisk30 (Avail )
vpath7 (Avail ) 01FFA067 = hdisk8 (Avail ) hdisk31 (Avail )
vpath8 (Avail ) 020FA067 = hdisk9 (Avail ) hdisk32 (Avail )
vpath9 (Avail pv vpathvg) 02BFA067 = hdisk20 (Avail ) hdisk44 (Avail )
vpath10 (Avail pv vpathvg) 02CFA067 = hdisk21 (Avail ) hdisk45 (Avail )
vpath11 (Avail pv vpathvg) 02DFA067 = hdisk22 (Avail ) hdisk46 (Avail )
vpath12 (Avail pv vpathvg) 02EFA067 = hdisk23 (Avail ) hdisk47 (Avail )
vpath13 (Avail pv vpathvg) 02FFA067 = hdisk24 (Avail ) hdisk48 (Avail )

The following example of a chdev command could also set the pvid attribute for an hdisk:

chdev -l hdisk46 -a pv=yes

For this example, the output of the lsvpcfg command would look similar to this:

vpath0 (Avail pv vpathvg) 018FA067 = hdisk1 (Avail )
vpath1 (Avail ) 019FA067 = hdisk2 (Avail )
vpath2 (Avail ) 01AFA067 = hdisk3 (Avail )
vpath3 (Avail ) 01BFA067 = hdisk4 (Avail ) hdisk27 (Avail )
vpath4 (Avail ) 01CFA067 = hdisk5 (Avail ) hdisk28 (Avail )
vpath5 (Avail ) 01DFA067 = hdisk6 (Avail ) hdisk29 (Avail )
vpath6 (Avail ) 01EFA067 = hdisk7 (Avail ) hdisk30 (Avail )
vpath7 (Avail ) 01FFA067 = hdisk8 (Avail ) hdisk31 (Avail )
vpath8 (Avail ) 020FA067 = hdisk9 (Avail ) hdisk32 (Avail )
vpath9 (Avail pv vpathvg) 02BFA067 = hdisk20 (Avail ) hdisk44 (Avail )
vpath10 (Avail pv vpathvg) 02CFA067 = hdisk21 (Avail ) hdisk45 (Avail )
vpath11 (Avail pv vpathvg) 02DFA067 = hdisk22 (Avail ) hdisk46 (Avail pv vpathvg)
vpath12 (Avail pv vpathvg) 02EFA067 = hdisk23 (Avail ) hdisk47 (Avail )
vpath13 (Avail pv vpathvg) 02FFA067 = hdisk24 (Avail ) hdisk48 (Avail )

The output of the lsvpcfg command shows that vpath11 contains hdisk22 and hdisk46. However, hdisk46 is the one with the pv attribute set. If you run the lsvg -p vpathvg command again, the output would look similar to this:

vpathvg:
PV_NAME   PV STATE   TOTAL PPs   FREE PPs   FREE DISTRIBUTION
vpath10   active     29          4          00..00..00..00..04
hdisk46   active     29          4          00..00..00..00..04
vpath12   active     29          4          00..00..00..00..04
vpath13   active     29          28         06..05..06..05..06

Notice that now device vpath11 has been replaced by hdisk46. That is because hdisk46 is one of the hdisk devices included in vpath11 and it has a pvid attribute in the ODM. In this example, the LVM used hdisk46 instead of vpath11 when it activated volume group vpathvg. The volume group is now in a mixed mode of operation because it partially uses SDD vpath devices and partially uses hdisk devices. This is a problem that must be fixed because failover protection is effectively disabled for the vpath11 physical volume of the vpathvg volume group.

Note: The way to fix this problem with the mixed volume group is to run the dpovgfix vg-name command after running the chdev command.
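The trigger-and-repair sequence for this example can be sketched as follows on an AIX host; hdisk46 and vpathvg come from the example above, and the final lsvg check is an added verification step, not part of the manual's procedure.

```shell
chdev -l hdisk46 -a pv=yes   # sets the pvid attribute on the hdisk and can
                             # turn vpathvg into a mixed volume group
dpovgfix vpathvg             # replace each hdisk in the volume group with
                             # its corresponding SDD vpath device
lsvg -p vpathvg              # verify that only vpath devices are listed again
```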

Manually deleting devices and running the configuration manager (cfgmgr)

In AIX 4.3 and 5.1, if there is more than one adapter and the storage device has been used as a physical volume of a volume group, running the cfgmgr command once might not configure all the hdisks. This means that if you manually delete SDD vpath devices and its hdisks (paths) and run configuration manager, you might lose the failover protection.

Assume that vpath3 is made up of hdisk4 and hdisk27 and that vpath3 is currently a physical volume. If the vpath3, hdisk4, and hdisk27 devices are all deleted by using the rmdev command and then cfgmgr is invoked at the command line, it is possible that only one path of the original vpath3 is configured by AIX. The following commands might produce this situation:

rmdev -dl vpath3
rmdev -dl hdisk4
rmdev -dl hdisk27
cfgmgr

The datapath query device command might show now that only one path (either hdisk4 or hdisk27) is configured for vpath3. To restore failover protection (that is, configure multiple paths for vpath3), perform the following steps:
1. Enter cfgmgr once for each installed SCSI or fibre-channel adapter, or enter cfgmgr n times, where n represents the number of paths per SDD device.
Tip: Running cfgmgr n times for n-path vpath configurations is not always required. It is only necessary to run cfgmgr n times for an n-path configuration if the supported storage device has been used as a physical volume of a volume group. This is because the AIX disk driver might configure only one set of hdisks from one adapter if pvid is present on a device. The cfgmgr command might need to be run n times when adding new LUNs.
2. Run addpaths to dynamically add the paths discovered by cfgmgr to SDD vpath devices.

The addpaths command allows you to dynamically add more paths to SDD vpath devices while they are in Available state. This command opens a new path (or multiple paths) automatically if the SDD vpath device is in the Open state. You can either use the Add Paths to Available Data Path Devices SMIT panel or run the addpaths command from the AIX command line. Go to “Dynamically adding paths to SDD vpath devices” on page 47 for more information about the addpaths command.

Using LVM with SDD

This section describes how to use the Logical Volume Manager (LVM) with SDD to import and export volume groups, recover from mixed volume groups, extend an existing volume group, and back up and restore files belonging to a volume group.

Importing volume groups with SDD

You can import a new volume group definition from a set of physical volumes with SDD vpath devices using the Volume Groups SMIT panel.

Note: To use this feature, you must either have root user authority or be a member of the system group.
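The two recovery steps can be sketched as the following commands on an AIX host. The adapter names fscsi0 and fscsi1 are placeholders; run the per-adapter cfgmgr for each installed SCSI or fibre-channel adapter on your system.

```shell
cfgmgr -l fscsi0    # configure devices under the first adapter
cfgmgr -l fscsi1    # ... and under the second (or simply run `cfgmgr` n times)
addpaths            # add the newly discovered paths back to the vpath devices
```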

Perform the following steps to import a volume group with SDD devices:
1. Enter smitty from your desktop window. SMIT is displayed.
2. Select System Storage Management (Physical & Logical Storage) and press Enter. The System Storage Management (Physical & Logical Storage) panel is displayed.
3. Select Logical Volume Manager and press Enter. The Logical Volume Manager panel is displayed.
4. Select Volume Groups and press Enter. The Volume Groups panel is displayed.
5. Select Import a Volume Group and press Enter. The Import a Volume Group panel is displayed.
6. In the Import a Volume Group panel, perform the following tasks:
   a. Enter the volume group that you want to import.
   b. Enter the physical volume that you want to import.
   c. Press Enter after making the changes.
   You can press F4 for a list of choices.

Exporting a volume group with SDD

You can export a volume group definition from the system with SDD vpath devices using the Volume Groups SMIT panel.

The exportvg command removes the definition of the volume group specified by the Volume Group parameter from the system. Because all system knowledge of the volume group and its contents is removed, an exported volume group is no longer accessible. The exportvg command does not modify any user data in the volume group.

A volume group is an unshared resource within the system; it should not be accessed by another system until it has been explicitly exported from its current system and imported on another. The primary use of the exportvg command, coupled with the importvg command, is to allow portable volumes to be exchanged between systems. Only a complete volume group can be exported, not individual physical volumes. Using the exportvg command and the importvg command, you can also switch ownership of data on physical volumes shared between two systems.

Note: To use this feature, you must either have root user authority or be a member of the system group.

Perform the following steps to export a volume group with SDD devices:
1. Enter smitty from your desktop window. SMIT is displayed.
2. Select System Storage Management (Physical & Logical Storage) and press Enter. The System Storage Management (Physical & Logical Storage) panel is displayed.
3. Select Logical Volume Manager and press Enter. The Logical Volume Manager panel is displayed.
4. Select Volume Groups and press Enter. The Volume Groups panel is displayed.
5. Select Export a Volume Group and press Enter. The Export a Volume Group panel is displayed.
6. Enter the volume group to export and press Enter. You can use the F4 key to select the volume group that you want to export.
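The SMIT panels above drive the standard AIX exportvg and importvg commands, so the same handoff between two systems can be scripted. The sketch below only assembles and prints the command sequence; vg01 and vpath10 are assumed names, and the printed commands would be run as root, the first group on the exporting host and the second on the importing host.

```shell
#!/bin/sh
# Assemble the export/import handoff described above. Nothing is
# executed here; the commands are printed for review. vg01 and vpath10
# are placeholder names for the volume group and one of its SDD vpath
# devices (importvg reads the VGDA from that physical volume).
vg=vg01
pv=vpath10

export_cmds="varyoffvg $vg
exportvg $vg"

import_cmds="importvg -y $vg $pv
varyonvg $vg"

printf '%s\n' "$export_cmds" "$import_cmds"
```

A volume group must be varied off before exportvg will remove its definition, which is why varyoffvg leads the sequence.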

Recovering from mixed volume groups

When an SDD volume group is not active (that is, varied off), and certain AIX system administrative operations cause a device reconfiguration, a pvid attribute will be created for the supported storage device hdisks. This will cause the SDD volume group to become a mixed volume group. The following command is an example of a command that does this:

chdev -l hdiskN -a queue_depth=30

Run the dpovgfix shell script to recover a mixed volume group. The syntax is dpovgfix vg-name. The script searches for an SDD vpath device that corresponds to each hdisk in the volume group and replaces the hdisk with the SDD vpath device. In order for the shell script to be run, all mounted file systems of this volume group have to be unmounted. After successful completion of the dpovgfix shell script, mount the file systems again.

Extending an existing SDD volume group

You can extend a volume group with SDD vpath devices using the Logical Volume Groups SMIT panel. The SDD vpath devices to be added to the volume group should be chosen from those that can provide failover protection. It is possible to add an SDD vpath device to an SDD volume group that has only a single path (vpath0 on page 74) and then add paths later by reconfiguring the supported storage device. With a single path, failover protection is not provided. (See "Dynamically adding paths to SDD vpath devices" on page 47 for information about adding paths to an SDD device.)

Perform the following steps to extend a volume group with SDD devices:
1. Enter smitty from your desktop window. SMIT is displayed.
2. Select System Storage Management (Physical & Logical Storage) and press Enter. The System Storage Management (Physical & Logical Storage) panel is displayed.
3. Select Logical Volume Manager and press Enter. The Logical Volume Manager panel is displayed.
4. Select Volume Group and press Enter.
5. Select Add a Data Path Volume to a Volume Group and press Enter.
6. Enter the volume group name and physical volume name and press Enter. You can also use the F4 key to list all the available SDD devices, and you can select the devices that you want to add to the volume group.

If you use a script file to extend an existing SDD volume group, you must modify your script file and replace the extendvg command with the extendvg4vp command.

Backing up all files belonging to an SDD volume group

You can back up all files belonging to a specified volume group with SDD vpath devices using the Volume Groups SMIT panel. To back up a volume group with SDD devices, go to "Accessing the Backup a Volume Group with Data Path Devices SMIT panel" on page 86. If you use a script file to back up all files belonging to a specified SDD volume group, you must modify your script file and replace the savevg command with the savevg4vp command.
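The mixed state that dpovgfix (described above under "Recovering from mixed volume groups") repairs can be detected beforehand by checking whether lsvg -p lists both hdisk and vpath physical volumes. The lsvg -p listing below is a mocked sample whose layout is an assumption; on a real AIX host you would pipe lsvg -p vg04 itself.

```shell
#!/bin/sh
# Flag a volume group as mixed when its physical volume list contains
# both hdisk and vpath entries. The sample imitates lsvg -p output.
sample='vg04:
PV_NAME           PV STATE          TOTAL PPs   FREE PPs
vpath10           active            542         0
hdisk95           active            542         100'

hdisks=$(printf '%s\n' "$sample" | awk '$1 ~ /^hdisk/ {n++} END {print n+0}')
vpaths=$(printf '%s\n' "$sample" | awk '$1 ~ /^vpath/ {n++} END {print n+0}')

if [ "$hdisks" -gt 0 ] && [ "$vpaths" -gt 0 ]; then
    verdict="mixed: unmount its file systems, then run dpovgfix vg04"
else
    verdict="not mixed"
fi
echo "$verdict"
```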

Attention: Backing up files (running the savevg4vp command) will result in the loss of all material previously stored on the selected output medium. Data integrity of the archive might be compromised if a file is modified during system backup. Keep system activity at a minimum during the system backup procedure.

Restoring all files belonging to an SDD volume group

You can restore all files belonging to a specified volume group with SDD vpath devices using the Volume Groups SMIT panel. To restore a volume group with SDD vpath devices, go to "Accessing the Remake a Volume Group with Data Path Devices SMIT panel" on page 87. If you use a script file to restore all files belonging to a specified SDD volume group, you must modify your script file and replace the restvg command with the restvg4vp command.

SDD-specific SMIT panels

SDD supports several special SMIT panels. Some SMIT panels provide SDD-specific functions, while other SMIT panels provide AIX functions (but require SDD-specific commands). For example, the Add a Volume Group with Data Path Devices function uses the SDD mkvg4vp command, instead of the AIX mkvg command. Table 13 lists the SDD-specific SMIT panels and how you can use them.

Table 13. SDD-specific SMIT panels and how to proceed

SMIT panel | How to proceed using SMITTY - Go to: | Equivalent SDD command
Display Data Path Device Configuration | "Accessing the Display Data Path Device Configuration SMIT panel" on page 83 | lsvpcfg
Display Data Path Device Status | "Accessing the Display Data Path Device Status SMIT panel" on page 84 | datapath query device
Display Data Path Device Adapter Status | "Accessing the Display Data Path Device Adapter Status SMIT panel" on page 84 | datapath query adapter
Define and Configure All Data Path Devices | "Accessing the Define and Configure All Data Path Devices SMIT panel" on page 85 | cfallvpath
Add Paths to Available Data Path Devices | "Accessing the Add Paths to Available Data Path Devices SMIT panel" on page 85 | addpaths
Configure a Defined Data Path Device | "Accessing the Configure a Defined Data Path Device SMIT panel" on page 85 | mkdev
Remove a Data Path Device | "Accessing the Remove a Data Path Device SMIT panel" on page 85 | rmdev
Add a Volume Group with Data Path Devices | "Accessing the Add a Volume Group with Data Path Devices SMIT panel" on page 85 | mkvg4vp

Table 13. SDD-specific SMIT panels and how to proceed (continued)

SMIT panel | How to proceed using SMITTY - Go to: | Equivalent SDD command
Add a Data Path Volume to a Volume Group | "Accessing the Add a Data Path Volume to a Volume Group SMIT panel" on page 86 | extendvg4vp
Remove a Physical Volume from a Volume Group | "Accessing the Remove a Physical Volume from a Volume Group SMIT panel" on page 86 | exportvg volume_group
Back Up a Volume Group with Data Path Devices | "Accessing the Backup a Volume Group with Data Path Devices SMIT panel" on page 86 | savevg4vp
Remake a Volume Group with Data Path Devices | "Accessing the Remake a Volume Group with Data Path Devices SMIT panel" on page 87 | restvg

Accessing the Display Data Path Device Configuration SMIT panel

Perform the following steps to access the Display Data Path Device Configuration panel:
1. Enter smitty from your desktop window. SMIT is displayed.
2. Select Devices and press Enter. The Devices panel is displayed.
3. Select Data Path Devices and press Enter. The Data Path Devices panel is displayed.
4. Select Display Data Path Device Configuration and press Enter.
5. The following example shows the Data Path Devices panel:

+----------------------------------------------------------------------+
| Display Data Path Device Configuration                               |
|                                                                      |
| Type or select values in entry fields.                               |
| Press Enter AFTER making all desired changes.                        |
|                                                                      |
|                                               [Entry Fields]         |
|   Select Query Option                         all devices +          |
|   Device Name/Device Model                    [          ]           |
+----------------------------------------------------------------------+

The Select Query Option has three options:

All devices
    This option runs lsvpcfg and all the data path devices are displayed. No entry is required in the Device Name/Device Model field.
Device name
    This option runs lsvpcfg <device name> and only the specified device is displayed. Enter a device name in the Device Name/Device Model field.
Device model
    This option runs lsvpcfg -d <device model> and only devices with the specified device model are displayed. Enter a device model in the Device Name/Device Model field.
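The lsvpcfg output behind these query options is plain text, so it can be post-processed directly from the command line. A minimal sketch that counts the hdisk paths behind each vpath; the sample lines follow the lsvpcfg output format shown under "lsvpcfg" later in this chapter.

```shell
#!/bin/sh
# Count the hdisk paths behind each SDD vpath device. The sample text
# mirrors lsvpcfg output; on a real host replace it with: lsvpcfg
sample='vpath10 (Avail pv ) 13916392 = hdisk95 (Avail ) hdisk179 (Avail )
vpath20 (Avail ) 02816392 = hdisk23 (Avail ) hdisk106 (Avail )
vpath30 (Avail ) 10516392 = hdisk33 (Avail ) hdisk116 (Avail )'

counts=$(printf '%s\n' "$sample" | awk '{
    n = 0
    for (i = 1; i <= NF; i++) if ($i ~ /^hdisk/) n++
    print $1, n
}')
printf '%s\n' "$counts"
```

A vpath reporting fewer paths than expected is a quick hint that another cfgmgr pass or an addpaths run is still needed.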

See "lsvpcfg" on page 88 for detailed information about the lsvpcfg command.

Accessing the Display Data Path Device Status SMIT panel

Perform the following steps to access the Display Data Path Device Status panel:
1. Enter smitty from your desktop window. SMIT is displayed.
2. Select Devices and press Enter. The Devices panel is displayed.
3. Select Data Path Devices and press Enter. The Data Path Devices panel is displayed.
4. Select Display Data Path Device Status and press Enter.
5. The following example shows the Data Path Devices Status panel:

+----------------------------------------------------------------------+
| Display Data Path Device Status                                      |
|                                                                      |
| Type or select values in entry fields.                               |
| Press Enter AFTER making all desired changes.                        |
|                                                                      |
|                                               [Entry Fields]         |
|   Select Query Option                         all devices +          |
|   Device Number/Device Model                  [          ]           |
+----------------------------------------------------------------------+

The Select Query Option has three options:

All devices
    This option runs datapath query device and all the data path devices are displayed. No entry is required in the Device Name/Device Model field.
Device number
    This option runs datapath query device <device number> and only the specified device is displayed. Enter a device number in the Device Name/Device Model field.
Device model
    This option runs datapath query device -d <device model> and only devices with the specified device model are displayed. Enter a device model in the Device Name/Device Model field.

See "datapath query device" on page 431 for detailed information about the datapath query device command.

Accessing the Display Data Path Device Adapter Status SMIT panel

Perform the following steps to access the Display Data Path Device Adapter Status panel:
1. Enter smitty from your desktop window. SMIT is displayed.
2. Select Devices and press Enter. The Devices panel is displayed.
3. Select Data Path Devices and press Enter. The Data Path Devices panel is displayed.
4. Select Display Data Path Device Adapter Status and press Enter.
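From the command line, the datapath query device output that this panel wraps can be scanned for paths that are not OPEN. The sample below is only illustrative of the output layout (an assumption); on a real host pipe datapath query device itself.

```shell
#!/bin/sh
# List path entries whose State column is not OPEN. Path rows are
# recognized by a purely numeric first column (the Path# value).
sample='DEV#:   0  DEVICE NAME: vpath0  TYPE: 2105800  POLICY: Optimized
SERIAL: 7C0FCA30
Path#      Adapter/Hard Disk      State     Mode     Select     Errors
    0      fscsi0/hdisk2          OPEN      NORMAL      557         0
    1      fscsi1/hdisk10         DEAD      NORMAL      568         4'

bad=$(printf '%s\n' "$sample" | awk '$1 ~ /^[0-9]+$/ && $3 != "OPEN" {print $2}')
echo "degraded paths: ${bad:-none}"
```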

Accessing the Define and Configure All Data Path Devices SMIT panel

Perform the following steps to access the Define and Configure All Data Path Devices panel:
1. Enter smitty from your desktop window. SMIT is displayed.
2. Select Devices and press Enter. The Devices panel is displayed.
3. Select Data Path Devices and press Enter. The Data Path Devices panel is displayed.
4. Select Define and Configure All Data Path Devices and press Enter.

Accessing the Add Paths to Available Data Path Devices SMIT panel

Perform the following steps to access the Add Paths to Available Data Path Devices panel:
1. Enter smitty from your desktop window. SMIT is displayed.
2. Select Devices and press Enter. The Devices panel is displayed.
3. Select Data Path Devices and press Enter. The Data Path Devices panel is displayed.
4. Select Add Paths to Available Data Path Devices and press Enter.

Accessing the Configure a Defined Data Path Device SMIT panel

Perform the following steps to access the Configure a Defined Data Path Device panel:
1. Enter smitty from your desktop window. SMIT is displayed.
2. Select Devices and press Enter. The Devices panel is displayed.
3. Select Data Path Devices and press Enter. The Data Path Devices panel is displayed.
4. Select Configure a Defined Data Path Device and press Enter.

Accessing the Remove a Data Path Device SMIT panel

Perform the following steps to access the Remove a Data Path Device panel:
1. Enter smitty from your desktop window. SMIT is displayed.
2. Select Devices and press Enter. The Devices panel is displayed.
3. Select Data Path Devices and press Enter. The Data Path Devices panel is displayed.
4. Select Remove a Data Path Device and press Enter.

Accessing the Add a Volume Group with Data Path Devices SMIT panel

Perform the following steps to access the Add a volume group with data path devices panel:
1. Enter smitty from your desktop window. SMIT is displayed.
2. Select System Storage Management (Physical & Logical Storage) and press Enter. The System Storage Management (Physical & Logical Storage) panel is displayed.

3. Select Logical Volume Manager and press Enter. The Logical Volume Manager panel is displayed.
4. Select Volume Groups and press Enter. The Volume Groups panel is displayed.
5. Select Add Volume Group with Data Path Devices and press Enter.
   Note: Press F4 while highlighting the PHYSICAL VOLUME names field to list all the available SDD vpaths.

Accessing the Add a Data Path Volume to a Volume Group SMIT panel

Perform the following steps to access the Add a Data Path Volume to a Volume Group panel:
1. Enter smitty from your desktop window. SMIT is displayed.
2. Select System Storage Management (Physical & Logical) and press Enter. The System Storage Management (Physical & Logical) panel is displayed.
3. Select Logical Volume Manager and press Enter. The Logical Volume Manager panel is displayed.
4. Select Volume Group and press Enter. The Volume Group panel is displayed.
5. Select Add a Data Path Volume to a Volume Group and press Enter.
6. Enter the volume group name and physical volume name and press Enter. Alternately, you can use the F4 key to list all the available SDD vpath devices and use the F7 key to select the physical volumes that you want to add.

Accessing the Remove a Physical Volume from a Volume Group SMIT panel

Perform the following steps to access the Remove a Physical Volume from a Volume Group panel:
1. Enter smitty from your desktop window. SMIT is displayed.
2. Select Logical Volume Manager and press Enter. The Logical Volume Manager panel is displayed.
3. Select Volume Groups and press Enter. The Volume Groups panel is displayed.
4. Select Set Characteristics of a Volume Group and press Enter. The Set Characteristics of a Volume Group panel is displayed.
5. Select Remove a Physical Volume from a Volume Group and press Enter. The Remove a Physical Volume from a Volume Group panel is displayed.

Accessing the Backup a Volume Group with Data Path Devices SMIT panel

Perform the following steps to access the Back Up a Volume Group with Data Path Devices panel and to backup a volume group with SDD devices:
1. Enter smitty from your desktop window. SMIT is displayed.
2. Select System Storage Management (Physical & Logical Storage) and press Enter. The System Storage Management (Physical & Logical Storage) panel is displayed.
3. Select Logical Volume Manager and press Enter. The Logical Volume Manager panel is displayed.
4. Select Volume Groups and press Enter. The Volume Groups panel is displayed.
5. Select Back Up a Volume Group with Data Path Devices and press Enter. The Back Up a Volume Group with Data Path Devices panel is displayed.

6. In the Back Up a Volume Group with Data Path Devices panel, perform the following steps:
   a. Enter the Volume Group to backup. You can also press F4 to list all the available SDD devices, and you can select the devices or files that you want to backup.
   b. Enter the Backup DEVICE or FILE name.
   c. Press Enter after making all required changes.

Attention: Backing up files (running the savevg4vp command) will result in the loss of all material previously stored on the selected output medium. Data integrity of the archive might be compromised if a file is modified during system backup. Keep system activity at a minimum during the system backup procedure.

Accessing the Remake a Volume Group with Data Path Devices SMIT panel

Perform the following steps to access the Remake a Volume Group with Data Path Devices panel and restore a volume group with SDD devices:
1. Enter smitty from your desktop window. SMIT is displayed.
2. Select System Storage Management (Physical & Logical Storage) and press Enter. The System Storage Management (Physical & Logical Storage) panel is displayed.
3. Select Logical Volume Manager and press Enter. The Logical Volume Manager panel is displayed.
4. Select Volume Groups and press Enter. The Volume Groups panel is displayed.
5. Select Remake a Volume Group with Data Path Devices and press Enter. The Remake a Volume Group with Data Path Devices panel is displayed.
6. Enter the Restore DEVICE or FILE name that you want to restore, and press Enter.
   Tip: You can also use the F4 key to list all the available SDD devices, and you can select the devices or files that you want to restore.

SDD utility programs

The following SDD utility programs are available:

addpaths

You can use the addpaths command to dynamically add more paths to the SDD devices when they are in the Available state. In addition, this command allows you to add paths to the SDD vpath devices (which are then opened) belonging to active volume groups. You can either use the Add Paths to Available Data Path Devices SMIT panel or run the addpaths command from the AIX command line. This command will open a new path (or multiple paths) automatically if the SDD vpath device is in Open state.

The syntax for this command is:

addpaths
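In practice addpaths is run right after the new paths are made visible, with one cfgmgr pass per fibre-channel adapter. A preview-style sketch of that sequence; the adapter names fcs0 and fcs1 are assumptions (list yours with lsdev -C -c adapter), and the printed commands would be run as root.

```shell
#!/bin/sh
# Build and print the reconfiguration sequence instead of executing it,
# so it can be reviewed before running on a real AIX host.
adapters="fcs0 fcs1"   # placeholder adapter names

plan=""
for adapter in $adapters; do
    plan="${plan}cfgmgr -l ${adapter}
"
done
plan="${plan}addpaths"

printf '%s\n' "$plan"
```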

For more information about this command, go to "Dynamically adding paths to SDD vpath devices" on page 47.

hd2vp and vp2hd

The SDD provides two conversion scripts, hd2vp and vp2hd. The hd2vp script converts a volume group from supported storage device hdisks to the SDD vpath devices, and the vp2hd script converts a volume group from the SDD vpath devices to supported storage device hdisks. Use the vp2hd program when you want to configure your applications back to original supported storage device hdisks, or when you want to remove the SDD from your AIX host system.

The syntax for these conversion scripts is as follows:

hd2vp vgname
vp2hd vgname

vgname
    Specifies the volume group name to be converted.

dpovgfix

You can use the dpovgfix script tool to recover mixed volume groups. Performing AIX system management operations on adapters and hdisk devices can cause original supported storage device hdisks to be contained within an SDD volume group. This is known as a mixed volume group. Mixed volume groups happen when an SDD volume group is not active (varied off), and certain AIX commands to the hdisk put the pvid attribute of the hdisk back into the ODM database. The following is an example of a command that does this:

chdev -l hdiskN -a queue_depth=30

If this disk is an active hdisk of an SDD vpath device that belongs to an SDD volume group, and you run the varyonvg command to activate this SDD volume group, LVM might pick up the hdisk device instead of the SDD vpath device. The result is that an SDD volume group partially uses the SDD vpath devices, and partially uses supported storage device hdisk devices. This causes the volume group to lose path-failover capability for that physical volume. The dpovgfix script tool fixes this problem.

The syntax for this command is:

dpovgfix vgname

vgname
    Specifies the volume group name of the mixed volume group to be recovered.

lsvpcfg

You can use the lsvpcfg script tool to display the configuration state of SDD devices. The lsvpcfg command can be issued in three ways.

1. The command can be issued without parameters. The syntax for this command is:

   lsvpcfg

   This displays the configuration state for all SDD devices. See "Verifying the SDD configuration" on page 46 for an example of the output and what it means.

2. The command can also be issued using the SDD vpath device name as a parameter. The syntax for this command is:

   lsvpcfg vpathN0 vpathN1 vpathN2

   You will see output similar to this:

   vpath10 (Avail pv ) 13916392 = hdisk95 (Avail ) hdisk179 (Avail )
   vpath20 (Avail ) 02816392 = hdisk23 (Avail ) hdisk106 (Avail )
   vpath30 (Avail ) 10516392 = hdisk33 (Avail ) hdisk116 (Avail )

   See "Verifying the SDD configuration" on page 46 for an explanation of the output.

3. The command can also be issued using the device model as a parameter. The option to specify a device model cannot be used when you specify an SDD vpath device. The syntax for this command is:

   lsvpcfg device model

   The following are examples of valid device models:

   2105       All 2105 models (ESS).
   2105F      All 2105 F models (ESS).
   2105800    All 2105 800 models (ESS).
   2145       All 2145 models (SAN Volume Controller).
   2107       All DS8000 models.
   1750       All DS6000 models.

mkvg4vp

You can use the mkvg4vp command to create an SDD volume group. For more information about this command, go to "Configuring volume groups for failover protection" on page 75. For information about the flags and parameters for this command, go to:

http://publib16.boulder.ibm.com/doc_link/en_US/a_doc_lib/cmds/aixcmds3/mkvg.htm

The syntax for this command is:

mkvg4vp [-S]* [-d MaxPVs] [-B] [-G] [-f] [-q] [-C | -c]** [-x] [-i] [-s PPsize] [-n]

        [-m MaxPVsize | -t factor] [-V MajorNumber] [-L LTGsize]*** [-y VGname] PVname

*    for AIX 5.3 and later only
**   for AIX 5.2 and later only
***  for AIX 5.1 and later only

extendvg4vp

You can use the extendvg4vp command to extend an existing SDD volume group. For more information about this command, go to "Extending an existing SDD volume group" on page 81. For information about the flag and parameters for this command, go to:

http://publib16.boulder.ibm.com/doc_link/en_US/a_doc_lib/cmds/aixcmds2/extendvg.htm

The syntax for this command is:

extendvg4vp -f VGname PVname

excludesddcfg

You can use the excludesddcfg command to exclude supported storage device (hdisk) from the SDD vpath configuration. You must run this command before the SDD vpath devices are configured. The excludesddcfg command saves the serial number for the logical device (hdisk) to an exclude file (/etc/vpexclude). During the SDD configuration, the SDD configure methods read all serial numbers that are listed in this file and exclude these devices from the SDD configuration.

The syntax for this command is:

excludesddcfg [-l hdisk#] [-d] [-dl hdisk#]

-l   Specifies the logical number of the supported storage device (hdiskN). This is not the SDD device name.
-d   When this optional flag is set, the excludesddcfg command removes all the exclusions by deleting all the existing contents from the exclude file.
-dl  When this optional flag is set, the excludesddcfg command allows users to remove the exclusion of a particular storage device.

device name
    Specifies the supported storage device (hdiskN).
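Since /etc/vpexclude is administered by hand and duplicate entries are a known hazard, it is worth checking the file for repeated serial numbers before adding exclusions. The sketch runs against a throwaway copy so it is safe anywhere; point it at the real file on an AIX host.

```shell
#!/bin/sh
# Report duplicate serial numbers in a vpexclude-style file. The
# one-serial-per-line layout is an assumption about the file format.
exclude_file=$(mktemp)
printf '%s\n' 7C0FCA30 7C0FCB31 7C0FCA30 > "$exclude_file"

dups=$(sort "$exclude_file" | uniq -d)
rm -f "$exclude_file"

echo "duplicate serials: ${dups:-none}"
```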

Example:

# excludesddcfg -l hdisk11
hdisk11 SERIAL_NUMBER = 7C0FCA30
Success: Device with this serial number is now excluded from the SDD configuration.

To undo this exclusion, issue the excludesddcfg command with the -dl flag and specify the logical device name for which you want to remove the exclusion. Example:

# excludesddcfg -dl hdisk11
hdisk11 SERIAL NUMBER = 7C0FCA30 TYPE = 2105
Success: SERIAL_NUMBER 7C0FCA30 is removed from /etc/vpexclude file.

Notes:
1. Do not use the excludesddcfg command multiple times on the same logical device. Using the excludesddcfg command multiple times on the same logical device results in duplicate entries in the /etc/vpexclude file, so that the system administrator has to administer the file and its content.
2. If the supported storage device LUN has multiple configurations on a server, use the excludesddcfg command on only one of the logical names of that LUN.
3. Do not use the excludesddcfg command to exclude a device if you want the device to be configured by SDD.
4. Issue the excludesddcfg command with the -d flag to delete all existing contents from the exclude file. If you want to remove only one device from the /etc/vpexclude file, run excludesddcfg -dl hdisk#. To configure previously excluded device(s), run cfallvpath. For detailed instructions on the proper procedure, see "Replacing manually excluded devices in the SDD configuration" on page 52.

lquerypr
    See "Persistent reserve command tool."

sddgetdata
    See Appendix A, "SDD, SDDPCM and SDDDSM data collection for problem analysis," on page 449, which describes the use of sddgetdata to collect information for problem determination.

Persistent reserve command tool

SDD offers the lquerypr tool to provide a set of persistent reserve functions. This tool supports the following persistent reserve service actions:
v Read persistent reservation key
v Release persistent reserve
v Preempt-abort persistent reserve
v Clear persistent reserve and registrations

Note: Caution must be taken with the command, especially when implementing the preempt-abort or clear persistent reserve service action. With the preempt-abort service action, not only is the current persistent reserve key preempted; it also aborts tasks on the LUN that originated from the initiators that are registered with the preempted key. With the clear service action, both persistent reservation and reservation key registrations are cleared from the device or LUN.

The following information describes in detail the syntax and examples of the lquerypr command.

lquerypr command

Purpose
    To query and implement certain SCSI-3 persistent reserve commands on a device.

Syntax

    lquerypr [-p] [-c] [-r] [-v] [-V] -h/dev/PVname

Description
    The lquerypr command implements certain SCSI-3 persistent reservation commands on a device. The device can be either hdisk or SDD vpath devices. This command supports persistent reserve service actions, such as read reservation key, release persistent reservation, preempt-abort persistent reservation, and clear persistent reservation.

    Note: This command can only be used when the device is not already opened.

Flags
    -p  If the persistent reservation key on the device is different from the current host reservation key, it preempts the persistent reservation key on the device.
    -c  If there is a persistent reservation key on the device, it removes any persistent reservation and clears all reservation key registration on the device.
    -r  Removes the persistent reservation key on the device made by this host.
    -v  Displays the persistent reservation key if it exists on the device.
    -V  Verbose mode. Prints detailed message.

Return code
    If the command is issued without the -p, -r, or -c options, the command will return 0 under two circumstances:
    1. The device is reserved by the current host.
    2. There is no persistent reservation key on the device.
    It returns 1 if the device is reserved by another host, that is, if the persistent reservation key is different from the host reservation key. If the device is already opened on a current host, the command returns 3. If the command fails, it returns 2.

Example
1. To query the persistent reservation on a device, enter lquerypr -h/dev/vpath30.
   This command queries the persistent reservation on the device without displaying. If there is a persistent reserve on a disk, it returns 0 if the device is reserved by the current host. It returns 1 if the device is reserved by another host.
2. To query and display the persistent reservation on a device, enter lquerypr -vh/dev/vpath30.

   Same as Example 1. In addition, it displays the persistent reservation key.
3. To release the persistent reservation if the device is reserved by the current host, enter lquerypr -rh/dev/vpath30.
   This command releases the persistent reserve if the device is reserved by the current host. It returns 0 if the command succeeds or the device is not reserved. It returns 2 if the command fails.
4. To reset any persistent reserve and clear all reservation key registrations, enter lquerypr -ch/dev/vpath30.
   This command resets any persistent reserve and clears all reservation key registrations on a device. It returns 0 if the command succeeds, or 2 if the command fails.
5. To remove the persistent reservation if the device is reserved by another host, enter lquerypr -ph/dev/vpath30.
   This command removes an existing registration and persistent reserve from another host. It returns 0 if the command succeeds or if the device is not persistent reserved. It returns 2 if the command fails.

Using supported storage devices directly

When you configure your host adapters, the AIX disk driver creates hdisks for the supported storage device LUNs. When you configure SDD, it creates SDD vpath devices for supported storage device LUNs. Therefore, with SDD installed on a system, an application now has two ways to access the supported storage devices.

To use the SDD load-balancing and failover features to access supported storage devices, your application must use SDD vpath devices only rather than hdisks.

Applications can access SDD vpath devices either by using raw devices or logical devices. If the application is accessing SDD vpath devices using logical devices, a volume group must be created with SDD vpath devices.

If your application used hdisk special files directly before installing SDD, convert the application to use SDD vpath device special files. After installing SDD, perform the following steps:
1. Enter smitty from your desktop window. SMIT is displayed.
2. Select Devices and press Enter. The Devices panel is displayed.
3. Select Data Path Devices and press Enter. The Data Path Devices panel is displayed.
4. Select Display Data Path Device Configuration and press Enter.
5. To display all SDD vpath devices with their attached multiple paths (hdisks), select all SDD vpath devices for Select Query Option, leave Device Name/Device Model blank, and press Enter.
6. Search the list of hdisks to locate the hdisks that your application is using.
7. Replace each hdisk with its corresponding SDD vpath device.
   Note: Depending upon your application, the manner in which you replace these files is different. If this is a new application, use the SDD vpath device rather than hdisk to use the SDD load-balancing and failover features.
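The hdisk-to-vpath replacement described above is a mechanical lookup, and the table can be derived from lsvpcfg output rather than read off the SMIT panel. The sample reuses the lsvpcfg line format shown in this chapter.

```shell
#!/bin/sh
# Build an hdisk -> vpath replacement table from lsvpcfg-style lines.
sample='vpath10 (Avail pv ) 13916392 = hdisk95 (Avail ) hdisk179 (Avail )
vpath20 (Avail ) 02816392 = hdisk23 (Avail ) hdisk106 (Avail )'

map=$(printf '%s\n' "$sample" | awk '{
    for (i = 1; i <= NF; i++) if ($i ~ /^hdisk/) print $i, "->", $1
}')
printf '%s\n' "$map"
```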

Using supported storage devices through AIX LVM

If your application accesses supported storage devices through LVM, determine that the physical volumes of the volume group that the application is accessing are SDD-supported storage devices.

Note: Alternately, you can enter lsvpcfg from the command-line interface rather than using SMIT. This displays all configured SDD vpath devices and their underlying paths (hdisks).

Then perform the following steps to convert the volume group from the original supported storage device hdisks to the SDD vpath devices:
1. Determine the file systems or logical volumes that your application accesses.
2. Enter smitty from your desktop window. SMIT is displayed.
3. Select System Storage Management (Physical & Logical Storage) and press Enter. The System Storage Management (Physical & Logical Storage) panel is displayed.
4. Select Logical Volume Manager and press Enter. The Logical Volume Manager panel is displayed.
5. Select Logical Volume and press Enter. The Logical Volume panel is displayed.
6. Select List All Logical Volumes by Volume Group to determine the logical volumes that belong to this volume group and their logical volume mount points. Press Enter. The logical volumes are listed by volume group.
7. To determine the file systems, perform the following steps:
   a. Enter smitty from your desktop window. SMIT is displayed.
   b. Select Logical Volume Manager and press Enter. The Logical Volume Manager panel is displayed.
   c. Select File Systems and press Enter. The File Systems panel is displayed.
   d. Select List All File Systems to locate all file systems that have the same mount points as the logical volumes and press Enter. The file systems are listed.
   e. Note the file system name of that volume group and the file system mount point, if it is mounted.
   f. Unmount these file systems.
8. Enter the following command to convert the volume group from the supported storage device hdisks to SDD multipath vpath devices:
   hd2vp vgname
9. When the conversion is complete, mount all file systems that you previously unmounted.

When the conversion is complete, your application now accesses supported storage device physical LUNs through SDD vpath devices. This provides load-balancing and failover protection for your application.

Migrating a non-SDD volume group to a supported storage device SDD multipath volume group in concurrent mode

Before you migrate your non-SDD volume group to an SDD volume group, make sure that you have completed the following tasks:

v The SDD for the AIX host system is installed and configured. See “Verifying the currently installed version of SDD for SDD 1.3.3.11 (or earlier)” on page 35 or “Verifying the currently installed version of SDD for SDD 1.4.0.0 (or later)” on page 37.
v The supported storage devices to which you want to migrate have multiple paths configured per LUN. To check the state of your SDD configuration, use the System Management Interface Tool (SMIT) or issue the lsvpcfg command from the command line. To use SMIT:
– Enter smitty and press Enter from your desktop window. The System Management Interface Tool panel is displayed.
– Select Devices and press Enter. The Devices panel is displayed.
– Select Data Path Device and press Enter. The Data Path Device panel is displayed.
– Select Display Data Path Device Configuration and press Enter. A list of the SDD vpath devices and whether there are multiple paths configured for the devices is displayed.
Note: Alternately, you can enter lsvpcfg from the command-line interface rather than using SMIT. This displays all configured SDD vpath devices and their underlying paths (hdisks).
v Ensure that the SDD vpath devices that you are going to migrate to do not belong to any other volume group, and that the corresponding physical device (supported storage device LUN) does not have a pvid written on it. Check the lsvpcfg command output for the SDD vpath devices that you are going to use for migration. Make sure that there is no pv displayed for this SDD vpath device and its paths (hdisks). In case there is a pvid written on the LUN and the LUN does not belong to any volume group, you need to clear the pvid from the LUN before using it to migrate a volume group. If a LUN has never belonged to any volume group, there is no pvid written on it.
The commands to clear the pvid are:
chdev -l hdiskN -a pv=clear
chdev -l vpathN -a pv=clear
Attention: Exercise care when clearing a pvid from a device with this command. Issuing this command to a device that does belong to an existing volume group can cause system failures.
You should complete the following steps to migrate a non-SDD volume group to a multipath SDD volume group in concurrent mode:
1. Add new SDD vpath devices to an existing non-SDD volume group:
   a. Enter smitty and press Enter from your desktop window. The System Management Interface Tool panel is displayed.
   b. Select System Storage Management (Physical & Logical) and press Enter. The System Storage Management (Physical & Logical) panel is displayed.
   c. Select Logical Volume Manager and press Enter. The Logical Volume Manager panel is displayed.
   d. Select Volume Group and press Enter. The Volume Group panel is displayed.
   e. Select Add a Data Path Volume to a Volume Group and press Enter.
   f. Enter the volume group name and physical volume name and press Enter. Alternately, you can use the F4 key to list all the available SDD vpath devices and use the F7 key to select the physical volumes that you want to add.
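Because the pvid must be cleared on every underlying hdisk path as well as on the vpath device itself, a small helper makes the order explicit. This is a dry-run sketch (commands are printed, not executed), and the device names vpath10, hdisk18, and hdisk34 are hypothetical examples.

```shell
#!/bin/sh
# Dry-run sketch: clear the pvid from a vpath device and its hdisk paths,
# using the chdev invocations shown above. Nothing is modified.
clear_pvid() {
  vpath="$1"; shift
  for hd in "$@"; do
    echo "chdev -l $hd -a pv=clear"    # clear pvid on each underlying hdisk
  done
  echo "chdev -l $vpath -a pv=clear"   # then clear pvid on the vpath itself
}
clear_pvid vpath10 hdisk18 hdisk34
```

As the Attention note says, run this only against devices that do not belong to any volume group.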

2. Enter the smitty mklvcopy command to mirror logical volumes from the original volume to an SDD supported storage device volume. Use the new SDD vpath devices for copying all logical volumes. Do not forget to include JFS log volumes.
Note: The command smitty mklvcopy copies one logical volume at a time. A fast-path command to mirror all the logical volumes on a volume group is mirrorvg.
3. Synchronize logical volumes (LVs) or force synchronization. Enter the smitty syncvg command to synchronize all the volumes. There are two options on the smitty panel:
v Synchronize by Logical Volume
v Synchronize by Physical Volume
The fast way to synchronize logical volumes is to select the Synchronize by Physical Volume option.
4. Remove the mirror and delete the original LVs. Enter the smitty rmlvcopy command to remove the original copy of the logical volumes from all original non-SDD physical volumes.
5. Enter the smitty reducevg command to remove the original non-SDD devices (hdisks) from the volume group. The Remove a Physical Volume panel is displayed. Remove all non-SDD devices.
Note: A non-SDD volume group refers to a volume group that consists of non-supported storage devices or supported storage hdisk devices. You do not take the volume group out of service.

Detailed instructions for migrating a non-SDD volume group to a supported storage device SDD multipath volume group in concurrent mode

This procedure shows how to migrate an existing AIX volume group to use SDD vpath devices that have multipath capability. You do not take the volume group out of service. The example shown starts with a volume group, vg1, made up of one supported storage device, hdisk13. To perform the migration, you must have SDD vpath devices available that are greater than or equal to the size of each of the hdisks making up the volume group. In this example, the volume group is migrated to an SDD device, vpath12, with two paths, hdisk14 and hdisk30.
1. Add the SDD vpath device to the volume group as an Available volume:
   a. Enter smitty and press Enter from your desktop window. The System Management Interface Tool panel is displayed.
   b. Select System Storage Management (Physical & Logical) and press Enter. The System Storage Management (Physical & Logical) panel is displayed.
   c. Select Logical Volume Manager and press Enter. The Logical Volume Manager panel is displayed.
   d. Select Volume Group and press Enter. The Volume Group panel is displayed.
   e. Select Add a Data Path Volume to a Volume Group and press Enter.
   f. Enter vg1 in the Volume Group Name field and enter vpath12 in the Physical Volume Name field. Press Enter.
You can also use the extendvg4vp -f vg1 vpath12 command.
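Taken together, the command-line fast paths mentioned in the steps above form one short sequence. The sketch below prints that sequence as a dry run using the example names from this section (vg1, vpath12, hdisk13, and the logical volumes loglv01 and lv01 from the rmlvcopy example); nothing is executed.

```shell
#!/bin/sh
# Dry-run sketch of the concurrent-mode migration fast path described above.
migrate_vg() {
  vg="$1"; new="$2"; old="$3"
  echo "extendvg4vp -f $vg $new"    # 1. add the vpath device to the VG
  echo "mirrorvg $vg $new"          # 2. mirror all logical volumes onto it
  echo "syncvg -p $old $new"        # 3. synchronize mirrors by physical volume
  echo "rmlvcopy loglv01 1 $old"    # 4. remove original copies, one LV at a
  echo "rmlvcopy lv01 1 $old"       #    time (include the JFS log volume)
  echo "reducevg $vg $old"          # 5. remove the old hdisk from the VG
}
migrate_vg vg1 vpath12 hdisk13
```

The volume group stays in service for the whole sequence, which is the point of the concurrent-mode procedure.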

2. Mirror logical volumes from the original volume to the new SDD vpath device volume:
   a. Enter smitty and press Enter from your desktop window. The System Management Interface Tool panel is displayed.
   b. Select System Storage Management (Physical & Logical) and press Enter. The System Storage Management (Physical & Logical) panel is displayed.
   c. Select Logical Volume Manager and press Enter. The Logical Volume Manager panel is displayed.
   d. Select Volume Group and press Enter. The Volume Group panel is displayed.
   e. Select Mirror a Volume Group and press Enter. The Mirror a Volume Group panel is displayed.
   f. Enter a volume group name and a physical volume name. Press Enter.
You can also enter the mirrorvg vg1 vpath12 command.
3. Synchronize the logical volumes in the volume group:
   a. Enter smitty and press Enter from your desktop window. The System Management Interface Tool panel is displayed.
   b. Select System Storage Management (Physical & Logical) and press Enter. The System Storage Management (Physical & Logical) panel is displayed.
   c. Select Logical Volume Manager and press Enter. The Logical Volume Manager panel is displayed.
   d. Select Volume Group and press Enter. The Volume Group panel is displayed.
   e. Select Synchronize LVM Mirrors and press Enter. The Synchronize LVM Mirrors panel is displayed.
   f. Select Synchronize by Physical Volume.
You can also enter the syncvg -p hdisk13 vpath12 command.
4. Delete copies of all logical volumes from the original physical volume:
   a. Enter smitty and press Enter from your desktop window. The System Management Interface Tool panel is displayed.
   b. Select Logical Volumes and press Enter. The Logical Volumes panel is displayed.
   c. Select Set Characteristic of a Logical Volume and press Enter. The Set Characteristic of a Logical Volume panel is displayed.
   d. Select Remove Copy from a Logical Volume and press Enter. The Remove Copy from a Logical Volume panel is displayed.
You can also enter the command:
rmlvcopy loglv01 1 hdisk13
rmlvcopy lv01 1 hdisk13
5. Remove the old physical volume from the volume group:
   a. Enter smitty and press Enter from your desktop window. The System Management Interface Tool panel is displayed.
   b. Select System Storage Management (Physical & Logical) and press Enter. The System Storage Management (Physical & Logical) panel is displayed.
   c. Select Logical Volume Manager and press Enter. The Logical Volume Manager panel is displayed.
   d. Select Volume Groups and press Enter. The Volume Groups panel is displayed.
   e. Select Set Characteristics of a Volume Group and press Enter. The Set Characteristics of a Volume Group panel is displayed.
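Because rmlvcopy (like smitty mklvcopy) works on one logical volume at a time, a loop is a convenient sketch for volume groups with many logical volumes. The LV names below are the examples from the text (loglv01, lv01); the function only prints the commands, so it is safe to run anywhere.

```shell
#!/bin/sh
# Dry-run sketch: remove the original copy of each logical volume from the
# old physical volume, one LV at a time, as step 4 above does.
remove_copies() {
  pv="$1"; shift
  for lv in "$@"; do
    echo "rmlvcopy $lv 1 $pv"   # keep 1 copy; drop the copy on the old PV
  done
}
remove_copies hdisk13 loglv01 lv01
```

Remember to include the JFS log volume in the list, as the earlier steps caution.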

   f. Select Remove a Physical Volume from a Volume Group and press Enter. The Remove a Physical Volume from a Volume Group panel is displayed.
You can also enter the reducevg vg1 hdisk13 command.

Using the trace function

SDD supports AIX trace functions. The SDD trace ID is 2F8. Trace ID 2F8 traces routine entry, exit, and error paths of the algorithm. To use it, manually turn on the trace function before the program starts to run, then turn off the trace function either after the program stops, or any time you need to read the trace report.
To start the trace function, enter:
trace -a -j 2F8
To stop the trace function, enter:
trcstop
To read the report, enter:
trcrpt | pg
To save the trace data to a file, enter:
trcrpt > filename
Note: To perform the AIX trace function, you must have the bos.sysmgt.trace installation package installed on your system.
By default, SDD disables the AIX trace functions. To enable the AIX trace functions for SDD, follow these steps:
1. Enter pathtest -d (for example, pathtest -d 0)
2. Enter 777.
3. Enter 20 to open the device.
4. Enter 90 (enable or disable the AIX trace).
5. Enter 3 (as option NO_DELAY).
Follow the prompt: enter 1 to enable. Then you can start the trace function.
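A complete trace session is start, run the workload, stop, and save. The sketch below prints that sequence as a dry run; the report file name /tmp/sdd_trace.out is a hypothetical example, and the trace ID 2F8 is the SDD trace ID given above.

```shell
#!/bin/sh
# Dry-run sketch of an SDD trace session (trace ID 2F8), per the text above.
# Commands are printed, not executed.
trace_session() {
  report="$1"
  echo "trace -a -j 2F8"     # start tracing SDD in the background (-a)
  echo "trcstop"             # stop tracing once the workload has finished
  echo "trcrpt > $report"    # format and save the trace report to a file
}
trace_session /tmp/sdd_trace.out
```

Run the application between the start and stop steps; the saved report can then be read at any time without keeping tracing enabled.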

Chapter 3. Using SDDPCM on an AIX host system

SDDPCM is a loadable path control module for supported storage devices to supply path management functions and error recovery algorithms. When the supported storage devices are configured as Multipath I/O (MPIO) devices, SDDPCM is loaded as part of the AIX MPIO FCP (Fibre Channel Protocol) or AIX MPIO SAS (serial-attached SCSI) device driver during the configuration. The AIX MPIO-capable device driver with the supported storage devices SDDPCM module enhances the data availability and I/O load balancing.
This chapter provides a general view of the SDDPCM path control module, including where it resides on the I/O stack in the operating system and the features and functions that it supports. This chapter also provides procedures for the following management tasks:
v Install SDDPCM
v Configure SDDPCM MPIO-capable devices
v Uninstall SDDPCM
v Migrate disk storage MPIO-capable devices from the AIX default PCM to SDDPCM
v Migrate disk storage MPIO-capable devices from SDDPCM to the AIX default PCM or SDD
v Installing SDDPCM from AIX NIM SPOT to client SAN boot disk or internal boot disk
Figure 3 on page 100 shows the position of SDDPCM in the protocol stack. I/O operations are sent to the AIX disk driver. The SDDPCM path selection routine is invoked to select an appropriate path for each I/O operation.
© Copyright IBM Corp. 1999, 2010

SDDPCM manages the paths to provide:
v High availability and load balancing of storage I/O
v Automatic path-failover protection
v Concurrent download of supported storage devices licensed machine code
v Prevention of a single-point-failure
For updated and additional information that is not included in this chapter, see the Readme file on the CD-ROM or visit the SDD Web site:
www.ibm.com/servers/storage/support/software/sdd
SDD and SDDPCM are exclusive software packages on a server. You cannot install both software packages on a server for supported storage devices. When supported storage devices are configured as non-MPIO-capable devices (that is, multiple logical device instances are created for a physical LUN), you should install SDD to get multipath support. You must install SDDPCM in order to configure supported storage devices into MPIO-capable devices (where only one logical device instance is created for a physical LUN). Before you install SDDPCM, make sure that you meet all the required hardware and software requirements. See “Verifying the hardware and software requirements” on page 102 and “Preparing for SDDPCM installation for supported storage devices” on page 105.
Note: SDDPCM does not support SCSI storage devices.
With SDD 1.6.0.0 (or later), SDDPCM and SDD cannot coexist on an AIX server. If a server connects to any supported storage devices, all devices must be configured either as non-MPIO-capable devices or as MPIO-capable devices.
Figure 3. SDDPCM in the protocol stack. (The figure shows raw I/O and Logical Volume Manager (LVM) I/O entering the AIX MPIO disk driver, which uses SDDPCM for IBM disk storage systems alongside the AIX default PCM and other vendor PCMs, all above the FCP/SAS adapter driver.)
For detailed information about MPIO support on AIX 5.2 TL07 (or later), AIX 5.3 TL03 (or later), or AIX 6.1, visit the following Web site:
http://publib16.boulder.ibm.com/pseries/en_US/aixbman/baseadmn/manage_MPIO.htm
AIX MPIO-capable device drivers automatically discover, configure and make available every storage device path.

Supported SDDPCM features

The following SDDPCM features are supported in this release:
v 32- and 64-bit kernels
v Four types of reserve policies:
– No reserve policy
– Exclusive host access single path policy
– Persistent reserve exclusive host policy
– Persistent reserve shared host access policy
v Four path-selection algorithms:
– Failover
– Round robin
– Load balancing
– Load balancing port
This algorithm is supported only with DS8000, DS6000, ESS, and SAN Volume Controller device models.
v Automatic failed paths reclamation by healthchecker
v Failback error-recovery algorithm
v Fibre-channel dynamic device tracking
v Support for all DS4000, DS5000, DS3950, ESS FCP, DS8000, DS6000, and SAN Volume Controller devices
v Support SAN boot device on MPIO supported storage devices
Notes:
1. SDDPCM supports ESS devices as SAN boot devices, beginning with AIX52 TL06 and AIX53 TL02.
2. SDDPCM supports DS8000, DS6000, and SAN Volume Controller devices as SAN boot devices, beginning with AIX52 TL07 and AIX53 TL03.
3. SDDPCM supports DS4000, DS5000, and DS3950 devices as SAN boot devices, beginning with AIX53 TL8 and AIX61 TL2.
4. SDDPCM supports RSSM devices as SAN boot devices, beginning with AIX61 TL3.
v Support for external supported MPIO storage devices as the primary or secondary dump device
v Support storage devices multipath devices as system paging space
v Support SDDPCM server daemon enhanced path health check function
v Support a maximum of 1200 LUNs
v Dynamically adding paths or adapters
v Dynamically removing paths or adapters
v Dynamically changing the device path selection algorithm
v Dynamically changing the device hc_interval
v Dynamically enabling or disabling the device health check function
v Dynamically changing the device hc_mode
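The dynamic-tuning bullets above map onto pcmpath and chdev invocations. The sketch below only prints example commands as a dry run; the device name/number and the chosen values are hypothetical, and the attribute names (algorithm, hc_interval, hc_mode) are the ones listed above — check the pcmpath documentation for your SDDPCM level before using them for real.

```shell
#!/bin/sh
# Dry-run sketch: dynamically changing SDDPCM per-device tunables named in
# the feature list. Commands are printed, not executed.
tune_device() {
  hdisk="$1"; devno="$2"
  echo "pcmpath set device $devno algorithm lb"    # switch to load balancing
  echo "pcmpath set device $devno hc_interval 60"  # health check every 60s
  echo "chdev -l $hdisk -a hc_mode=nonactive -P"   # persist setting in ODM
}
tune_device hdisk4 4
```

pcmpath changes take effect immediately on the running device, while the chdev -P form records the attribute so it survives a reconfiguration.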

v Support Virtual I/O server with AIX 5.3 or later and AIX 6.1.
Note: This support does not include DS4000, DS5000, and DS3950 storage devices.
v Support Tivoli Storage Productivity Center for Replication Metro Mirror Failover/Failback replication for Open HyperSwap for System Storage DS8000.
Note: This support is only available on AIX 5.3 TL11 (or later) and AIX 6.1 TL04 (or later).
v Support GPFS in AIX 5.2 TL06 (or later), 5.3 TL02 (or later), and AIX 6.1.
v Support HACMP with Enhanced Concurrent Mode volume group in concurrent resource groups and nonconcurrent resource groups.
Note: This support does not include RSSM devices as HACMP is not supported on IBM JS-series blades.
v Web-based System Manager (WebSM) for MPIO supported storage devices (See http://www-03.ibm.com/systems/power/software/aix/index.html for more information about WebSM.)
v Reserve last path of a device in OPEN mode
v Support the essutil Product Engineering tool in the SDDPCM pcmpath command line program

Unsupported SDDPCM features

The following SDDPCM features are not currently supported:
v HACMP with persistent reservation policies
v Multiple hosts sharing supported storage MPIO devices without supported clustering software installed
v HACMP with non-Enhanced Concurrent Mode volume group
v Mirror Write Consistency with volume group in a concurrent resource group
v Virtual I/O server is not supported with DS4000, DS5000, and DS3950 storage subsystems
v Open HyperSwap for System Storage DS8000 is not supported within a clustering environment.
v Open HyperSwap device is not supported with a SAN boot volume group.

Verifying the hardware and software requirements

You must install the following hardware and software components to ensure that SDDPCM installs and operates successfully.

Hardware
The following hardware components are needed:
v Supported storage devices (FCP and SAS devices only)
v One or more switches, if the supported storage devices is not direct-attached
v Host system
v Fibre-channel adapters or serial-attached SCSI adapters, and cables

Software
The following software components are needed:

v AIX 5.2 TL10 (or later), AIX 5.3 TL06 (or later), or AIX 6.1 operating system, with all the latest PTFs. Refer to the Readme file of the SDDPCM level that you plan to install for the required AIX TL and APARs.
v Fibre-channel device drivers or serial-attached SCSI drivers
v One of the following installation packages:
– devices.sddpcm.52.rte
– devices.sddpcm.53.rte
– devices.sddpcm.61.rte
v Supported storage devices:
– devices.fcp.disk.ibm.mpio.rte (version 1.0.0.21 or later versions of 1.x.x.x) host attachment package for SDDPCM (version 2.x.x.x or prior versions)
– devices.fcp.disk.ibm.mpio.rte (version 2.0.0.1 or later versions) host attachment package for SDDPCM (version 3.0.0.0 or later versions)
– devices.sas.disk.ibm.mpio.rte (version 1.0.0.0 or later versions) host attachment package for SDDPCM

Unsupported environments
SDDPCM does not support:
v ESS SCSI devices
v A host system with both a SCSI and fibre-channel connection to a shared ESS logical unit number (LUN)
v Single-path mode during code distribution and activation of LMC nor during any supported storage devices concurrent maintenance that impacts the path attachment, such as a supported storage device host-bay-adapter replacement

Host system requirements
To successfully install SDDPCM for supported storage devices, you must have the required AIX TL installed on your host system along with the AIX required fixes, APARs, and microcode updates that are identified on the following Web site:
www-1.ibm.com/servers/storage/support/
Starting with SDDPCM 2.2.0.0, SDDPCM requires AIX 5.2 TL10 (or later), AIX 5.3 TL06 (or later), or AIX 6.1. Always refer to the Readme file for the required AIX TL for the SDDPCM level that you plan to install.
v If your attached storage is SAN Volume Controller version 4.2.1.6 or above and you require SAN Volume Controller APAR IC55826, you must install SDDPCM 2.2.0.3 or above with the required AIX TL and APAR.
v With Open HyperSwap, ensure that the required devices.fcp.disk.ibm.rte host attachment package is installed on the server. Refer to “Installing and upgrading the AIX SDDPCM host attachment” on page 109 for more information.

Fibre requirements
Note: There is no fibre requirement for RSSM devices that are connected through SAS. Refer to RSSM documentation for SAS requirements at:
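Before installing, the levels called for above can be confirmed from the command line. This dry-run sketch prints the typical queries (it does not execute them); the fileset names are the ones listed in the software requirements, and the 5.3 package is used as an illustrative choice.

```shell
#!/bin/sh
# Dry-run sketch: pre-install level checks for SDDPCM and its host
# attachment, per the requirements above. Commands are printed only.
preinstall_checks() {
  echo "oslevel -s"                              # AIX level and TL/SP
  echo "lslpp -l devices.sddpcm.53.rte"          # SDDPCM package level
  echo "lslpp -l devices.fcp.disk.ibm.mpio.rte"  # FCP host attachment level
}
preinstall_checks
```

Compare the reported levels against the Readme file for the SDDPCM level you plan to install.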

http://www.ibm.com/systems/support/supportsite.wss/docdisplay?lndocid=MIGR-5078491&brandind=5000020
For information about the fibre-channel adapters that can be used on your AIX host system, go to the following Web site:
www.ibm.com/servers/storage/support
You must check for and download the latest fibre-channel device driver APARs, maintenance-level fixes, and microcode updates from the following Web site:
www-1.ibm.com/servers/eserver/support/
To use the SDDPCM fibre-channel support, ensure that your host system meets the following requirements:
v The AIX host system is an IBM RS/6000 or IBM System p with AIX 5.2 TL10 (or later), AIX 5.3 TL06 (or later), or AIX 6.1.
v The AIX host system has the fibre-channel device drivers installed along with all latest APARs.
v The host system can be a single processor or a multiprocessor system, such as SMP.
v A fiber-optic cable connects each fibre-channel adapter to a supported storage system port.
v If you need the SDDPCM I/O load-balancing and failover features, ensure that a minimum of two paths to a device are attached.
If your host has only one fibre-channel adapter, it requires you to connect through a switch to multiple supported storage device ports. You should have at least two fibre-channel adapters to prevent data loss due to adapter hardware failure or software failure.

Preparing for SDDPCM installation

The SDDPCM installation package installs a number of major files on your AIX system. The major files that are part of the SDDPCM installation package are:

File name — Description
sddpcmrtl — A dynamically loaded module added to the device configuration methods to extend the supported storage devices device configuration methods to facilitate the configuration operations of the PCM KE
sddpcmke — A dynamically loaded module added to the AIX 5L kernel that provides path management functions for supported storage devices
sdduserke — A dynamically loaded module added to the AIX 5L kernel that provides the API to sddpcmke
pcmpath — SDDPCM command line tool
pcmsrv — Daemon for enhanced path healthcheck, and First Time Data Capture
sample_pcmsrv.conf — The sample SDDPCM server daemon configuration file
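The "minimum of two paths" requirement above can be checked mechanically. As a sketch, the function below counts paths per disk from lspath-style "status device parent" lines and flags any disk with fewer than two; the sample data is hypothetical but mimics the lspath output format, so the same filter could be fed from `lspath` on a real host.

```shell
#!/bin/sh
# Flag disks with fewer than two configured paths, given "status name
# parent" lines such as lspath prints. Sample input is hypothetical.
count_paths() {
  awk '{ n[$2]++ } END { for (d in n) if (n[d] < 2) print d " has " n[d] " path" }'
}
sample="Enabled hdisk2 fscsi0
Enabled hdisk2 fscsi1
Enabled hdisk3 fscsi0"
printf '%s\n' "$sample" | count_paths
```

Any disk reported here would have no failover protection until a second path is attached.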

0.x) and/or devices. see the Planning Guide for your product. the relbootrsv program also can be used to release SCSI-2 reserve on an MPIO hdisk.x). and DS3950 storage and MPIO-based devices.0. Script to collect SDDPCM information. Display SDDPCM MPIO device/path configuration/state information. v Install the AIX fibre-channel device drivers.5. you must: v Determine that you have the correct installation package v Remove the SDD package.0.0.fcppcmmap pcmquerypr pcmgenprkey relbootrsv Collects supported storage devices fibre-channel device information through SCSI commands SDDPCM persistent reserve command tool SDDPCM persistent reserve command tool to generate persistent reserve key Release SCSI-2 reserve on boot devices or on active nonboot volume groups Note: Beginning with SDDPCM version 2.100. To use the load-balancing and failover features.1. SDDPCM requires a minimum of two independent paths that share the same logical unit. if they are installed. and system error logs into an sddpcmdata_host_date_time. Chapter 3. Daemon for communication with Tivoli Storage Productivity Center for Replication to support Open HyperSwap.6.0. relSDDPCMbootrsv Automatically executed upon reboot to release leftover SCSI-2 reserve on SDDPCM boot device. beginning with SDDPCM version 2. you must complete the following tasks: v Connect the supported storage device to your host system and the required fibre-channel adapters or SAS adapters that are attached. Using SDDPCM on an AIX host system 105 .0.rte ( version 1. With a single path. Note: This file was added to the SDDPCM installation package beginning with SDDPCM 3.rte (version 32.2. DS5000. if necessary. fcp. if it is installed.tar file for problem determination Display configuration information for DS4000.1. v Configure the supported storage device for single-port or multiple-port access for each LUN. trace log files.disk.ibm. 
sddpcmgetdata sddpcm_get_config lspcmcfg | | | | | | AE Preparing for SDDPCM installation for supported storage devices Before you install SDDPCM.0. For more information about how to configure your supported storage device. v Remove the ibm2105. failover protection is not available. Before you install SDDPCM.

1 Determining if the SDD package is installed To determine if the SDD is installed: 1.100.ibm. or devices.fcp.fcp.rte for AIX 6.3 v devices.disk. If SDD is installed for supported storage device configuration.disk. If ibm2105.sddpcm.rte package.0.rte is installed.disk.disk. If devices. Determining if the devices.fcp.ibm. and microcode updates at the following Web site: www-1.0 or later).rte (version 1.disk.disk.ibm. See “Removing SDD from an AIX host system” on page 50.mpio.ibm.xx is installed.sdd* commands to determine if any SDD package is installed on the system.x is installed.fcp.disk. b. If devices.rte with VRMF 32.2 v devices.com/servers/storage/support/ 106 Multipath Subsystem Device Driver User's Guide .0. Unconfigure and remove all supported storage devices.rte (version 1.rte package. Unconfigure and remove all supported storage devices. 2.v Verify and upgrade the fibre channel adapter firmware level v Install the SDDPCM host attachment: devices.rte package is installed: Note: The following steps are not required for RSSM devices.rte package is installed To determine if the ibm2105. 2. Use the lslpp -l devices.disk. 2.52.mpio.15 or later).ibm. Use the lslpp -l *ibm2105* command to determine if any ibm2105.rte is installed.fcp.sas.fcp.sddpcm. See “Removing SDD from an AIX host system” on page 50.rte for AIX 5.ibm. 1. Installing the AIX fibre-channel device drivers Note: Installing AIX fibre-channel device drivers is not a requirement for RSSM devices. Use the lslpp -l *ibmSdd* and lslpp -l devices.fcp. See “Removing SDD from an AIX host system” on page 50.0. Use smitty to uninstall the ibm2105.ibm.rte package.ibm.rte for AIX 5.61.disk.fcp. you must remove all supported storage devices and uninstall the ibm2105. Use smitty to uninstall the devices.rte is installed.ibm. b.0.rte package is installed: 1. If ibm2105. maintenance-level fixes. you must unconfigure and remove all SDD vpath devices.fcp. Determining if the ibm2105.rte with VRMF 1. 
Determining the correct installation package The package name of SDDPCM for different AIX operating system levels is the following: v devices.sddpcm. you must: a.53.0. and then uninstall the SDD package.rte package. you must: a.rte package is installed To determine if the devices.0.ibm* command to determine if any devices.disk. You must check for the latest information on fibre-channel device driver APARs.6.rte is installed. you must remove all supported storage devices and uninstall the devices.ibm.

Perform the following steps to install the AIX fibre-channel device drivers from the AIX compact disk:
1. Log in as the root user.
2. Load the compact disc into the CD-ROM drive.
3. From your desktop window, enter smitty install_update and press Enter to go directly to the installation panels. The Install and Update Software menu is displayed.
4. Highlight Install Software and press Enter.
5. Press F4 to display the INPUT Device/Directory for Software panel.
6. Select the compact disc drive that you are using for the installation, for example, /dev/cd0, and press Enter.
7. Press Enter again. The Install Software panel is displayed.
8. Highlight Software to Install and press F4. The Software to Install panel is displayed.
9. The fibre-channel device drivers include the following installation packages:
devices.pci.df1080f9 — The adapter device driver for RS/6000 or IBM System p with feature code 6239.
devices.pci.df1000f9 — The adapter device driver for RS/6000 or IBM System p with feature code 6228.
devices.pci.df1000f7 — The adapter device driver for RS/6000 or IBM System p with feature code 6227.
devices.common.IBM.fc — The FCP protocol driver.
devices.fcp.disk — The FCP disk driver.
Select each one by highlighting it and pressing F7.
10. Press Enter. The Install and Update from LATEST Available Software panel is displayed with the name of the software you selected to install.
11. Check the default option settings to ensure that they are what you need.
12. Press Enter to install. SMIT responds with the following message:
+------------------------------------------------------------------------+
| ARE YOU SURE?? |
| Continuing may delete information you may want to keep. |
| This is your last chance to stop before continuing. |
+------------------------------------------------------------------------+
13. Press Enter to continue. The installation process can take several minutes to complete.
14. When the installation is complete, press F10 to exit from SMIT.
15. Remove the compact disc.
16. Check to see if the correct APARs are installed by entering the following command:
instfix -iv | grep IYnnnnn
where nnnnn represents the APAR numbers.
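Step 16 is easy to script when several APARs must be checked. This dry-run sketch prints one instfix query per APAR; the APAR numbers are placeholders (the text itself leaves them as IYnnnnn), and `instfix -ik` is an alternative form that reports directly whether a given fix is installed.

```shell
#!/bin/sh
# Dry-run sketch of the APAR verification in step 16 above. The APAR
# numbers are hypothetical placeholders; commands are printed only.
check_apars() {
  for apar in "$@"; do
    echo "instfix -ik $apar"   # report whether this fix is installed
  done
}
check_apars IY12345 IY67890
```

Substitute the APAR numbers required by your driver level, as listed on the IBM support site.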
Using SDDPCM on an AIX host system 107 . Press Enter.df1000f9 The adapter device driver for RS/6000 or IBM System p with feature code 6228.pci. devices. 2. Select the compact disc drive that you are using for the installation. 6. devices.

. go to the following Web site: http://www-933.If the APARs are listed. After the devices are configured. The vital product data for the adapter is displayed. 3. Verifying and upgrading the fibre-channel adapter firmware level Use the following procedures to verify and upgrade your current fibre-channel adapter firmware level. If they are installed. Verify that the correct level of firmware is installed on your AIX host system. there is no need to upgrade.S2F3. where XXXXXX is the level of the microcode.com/support/fixcentral/ Perform the following steps to verify the firmware level that is currently installed: 1. 5. Press F7.” Upgrading the fibre channel adapter firmware level: Upgrading the firmware level consists of downloading the firmware (microcode) from your AIX host system to the adapter.. The ZB field should look similar to: (ZB). one adapter at a time. 7. Follow the instructions that are displayed to download the firmware.. If the adapter firmware level is at the latest level. This file was copied into the /etc/microcode directory during the installation of the fibre-channel device drivers.XXXXXX for feature code 6228. Repeat steps 1 through 14 to install the APARs. go to step 3. From the AIX command prompt. Press Enter to select all the fibre-channel adapters to which you want to download firmware. In the example. To check the current supported firmware level for fibre-channel adapters. go to “Configuring supported storage MPIO-capable devices” on page 119..30X1 To verify the firmware level. Note: The following procedures do not apply to RSSM devices.ibm.. otherwise. enter diag and press Enter.. ensure that you have configured any fibre-channel-attached devices (see “Configuring fibre-channel-attached devices” on page 18). The Download panel is displayed with one of the selected adapters highlighted. 16.. To upgrade the firmware level. Otherwise.. go to “Upgrading the fibre channel adapter firmware level. 4. Highlight the Task Selection option. 2. 
Upgrading the fibre channel adapter firmware level: Upgrading the firmware level consists of downloading the firmware (microcode) from your AIX host system to the adapter. Before you upgrade the firmware, ensure that you have configured any fibre-channel-attached devices (see “Configuring fibre-channel-attached devices” on page 18). After the devices are configured, download the firmware from the AIX host system to the FCP adapter by performing the following steps:
1. Verify that the correct level of firmware is installed on your AIX host system. Go to the /etc/microcode directory and locate the file called df1000f7.XXXXXX for feature code 6227 and df1000f9.XXXXXX for feature code 6228, where XXXXXX is the level of the microcode. This file was copied into the /etc/microcode directory during the installation of the fibre-channel device drivers.
2. From the AIX command prompt, enter diag and press Enter.
3. Highlight the Task Selection option.
4. Highlight the Download Microcode option.
5. Press Enter to select all the fibre-channel adapters to which you want to download firmware. Press F7. The Download panel is displayed with one of the selected adapters highlighted. Press Enter to continue.
6. Highlight /etc/microcode and press Enter.
7. Follow the instructions that are displayed to download the firmware, one adapter at a time.
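Reading the firmware level out of the ZB field can be sketched as simple text processing. The snippet below strips the "(ZB)" prefix and its padding dots from a sample vital-product-data field; the sample string is modeled on the ZB example in the text, and per that text the second character of the result is ignored when quoting the level (so S2F3.30X1 corresponds to level sf330X1).

```shell
#!/bin/sh
# Extract the firmware string from a `lscfg -vl fcsN` ZB field, per the
# example above. The sample line is hypothetical.
zb_level() {
  printf '%s\n' "$1" | sed 's/^(ZB)\.*//'
}
zb_level '(ZB)........S2F3.30X1'
```

On a real host, the input line would be grepped out of the `lscfg -vl fcsN` output before being passed to this filter.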

For AIX SDDPCM host attachment installation and upgrade from the SDD download Web site, use the following procedure:
1. Download code from the Web site that is appropriate for your system.
2. Move the downloaded file to the /usr/sys/inst.images directory.
3. Change the directory to the /usr/sys/inst.images directory.
4. Untar the downloaded file. For example:
tar -xvf devices.fcp.disk.ibm.mpio.rte.tar
or
tar -xvf devices.sas.disk.ibm.mpio.rte.tar
5. Update the .toc file. For example:
pwd
rm -i .toc
inutoc .
grep -i devices.fcp.disk.ibm .toc
or
pwd
rm -i .toc
inutoc .
grep -i devices.sas.disk.ibm .toc
This command should reflect the newer SDDPCM host attachment version that will be uploaded.
1750. at the following URL.sas. DS4700. 2145. or DS5020 hdisks as MPIO devices. DS5020. 2107. DS5000.wss/ docdisplay?lndocid=MIGR-5078491&brandind=5000020 Installing and upgrading the AIX SDDPCM host attachment The host attachment for SDDPCM adds supported storage devices information to allow AIX to properly configure these devices as MPIO-capable hdisks.mpio. DS4800.disk.
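The grep in step 5 succeeds only when the rebuilt .toc lists the new host attachment fileset. A minimal, portable sketch of that check follows; the .toc entry below is a simplified stand-in (on AIX, inutoc generates the real table of contents, and the entry format is more elaborate):

```shell
# Simplified stand-in for a .toc entry in /usr/sys/inst.images.
toc_sample='devices.fcp.disk.ibm.mpio.rte 1.0.0.21 I N usr,root'

# The check from step 5: confirm the host attachment fileset is listed.
if printf '%s\n' "$toc_sample" | grep -qi 'devices.fcp.disk.ibm'; then
  toc_status="fileset present in .toc"
else
  toc_status="fileset missing - rerun inutoc ."
fi
echo "$toc_status"
```

On a live system, run the same grep against the real .toc and confirm that the level shown matches the package you just untarred.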

6. From your desktop window, type smitty install_update and press Enter to go directly to the installation panels. The Install and Update Software menu is displayed.
7. Highlight Install Software and press Enter.
8. Type . to indicate the current directory and press Enter.
9. Highlight Software to Install and press F4. The Software to Install panel is displayed.
10. Select the devices.fcp.disk.ibm.mpio.rte or devices.sas.disk.ibm.mpio.rte package.
11. Press Enter. The Install and Update from the LATEST Available Software panel is displayed with the name of the software that you selected to install.
12. Check the default option settings to ensure that they are what you need.
13. Press Enter to install. SMIT responds with the following message:
+---------------------------------------------------------------------+
|ARE YOU SURE??                                                       |
|Continuing may delete information you may want to keep.              |
|This is your last chance to stop before continuing.                  |
+---------------------------------------------------------------------+
14. Press Enter to continue. The installation process may take a few minutes to complete.
15. When the installation or upgrade is complete, press F10 to exit from SMIT.
16. If this is a host attachment installation, do not reboot the system until you have the SDDPCM package installed. If this is a host attachment upgrade, and the SDDPCM package is already installed on the system, then reboot the system to complete the host attachment upgrade.
Attention: Do not reboot the system if you only have the SDDPCM host attachment package installed.

Installing and upgrading SDDPCM
Use these topics to install and upgrade SDDPCM.

Installing SDDPCM from CD-ROM
SDDPCM is released as an AIX installation image. The SDDPCM install image resides in the /usr/sys/inst.images/SDDPCM directory on the CD-ROM. Because the package does not reside in the /usr/sys/inst.images directory, which is the default directory for the AIX install program, you must mount the CD-ROM file system before you can use SMIT to install SDDPCM from the CD-ROM directory.
Notes:
1. To mount the CD-ROM and install SDDPCM, you must have root access and AIX system administrator knowledge.
2. The AIX SDDPCM host attachment package (devices.fcp.disk.ibm.mpio.rte or devices.sas.disk.ibm.mpio.rte) must be installed before you install the SDDPCM package (devices.sddpcm.52.rte, devices.sddpcm.53.rte, or devices.sddpcm.61.rte).

Creating and mounting the CD-ROM filesystem
To install SDDPCM from the CD-ROM, you must first create and mount the CD-ROM filesystem.
110 Multipath Subsystem Device Driver User's Guide

Use SMIT to perform the following steps to create and mount the CD-ROM to CD-ROM file system.
Note: Throughout this procedure, /dev/cd0 is used for the compact disc drive address. The drive address can be different in your environment.
1. Log in as the root user.
2. Insert the compact disc into the CD-ROM drive.
3. From your desktop window, enter smitty fs and press Enter.
4. Select Add / Change / Show / Delete File Systems and press Enter.
5. Select CDROM File System and press Enter.
6. Select Add a CDROM File System and press Enter. The Add a CDROM File System panel is displayed.
7. Select DEVICE name and select F4. The DEVICE name panel is displayed.
8. Select the compact disc drive that you are using for the installation (for example, cd0) and press Enter.
9. Select MOUNT POINT and enter a directory where you want the CDROM File System to be mounted (for example, /cdmnt).
10. Click the default option settings for the other fields to ensure that they are what you need.
+-----------------------------------------------------------+
+ Add a CDROM File System                                   +
+                                                           +
+ Type or select values in entry fields.                    +
+ Press Enter AFTER making all desired changes.             +
+                                                           +
+                                       [Entry Fields]      +
+ * DEVICE name                         cd0                 +
+ * MOUNT POINT                         [/cdmnt]            +
+   Mount AUTOMATICALLY at system restart?   no             +
+-----------------------------------------------------------+
11. Press Enter to create the CDROM File System.
12. When the CDROM File System has been created, press F10 to exit from smit.
13. From your desktop window, enter smitty mount and press Enter.
14. Select Mount a File System and press Enter. The Mount a File System panel is displayed.
15. Select FILE SYSTEM name and press F4.
16. Select the CDROM File System that you created and press Enter.
17. Select DIRECTORY on which to mount and press F4.
18. Select the CDROM File System that you created and press Enter.
19. Select TYPE of file system and press Enter.
20. Select cdrfs as the type of file system and press Enter.
21. Select Mount as a REMOVABLE file system? and press TAB to change the entry to yes.
22. Select Mount as a READ-ONLY system? and press TAB to change the entry to yes.
23. Click to check the default option settings for the other fields to ensure that they are what you need.
+-----------------------------------------------------------------+
+ Mount a File System                                             +
+ Type or select values in entry fields.                          +
+ Press Enter AFTER making all desired changes.                   +
+                                              [Entry Fields]     +
+   FILE SYSTEM name                           [/dev/cd0]         +
+   DIRECTORY over which to mount              [/cdmnt]           +
+   TYPE of file system                        cdrfs              +
+   FORCE the mount?                           no                 +
+   REMOTE NODE containing the file system     []                 +
+   to mount                                                      +
+   Mount as a REMOVABLE file system?          yes                +
+   Mount as a READ-ONLY system?               yes                +
+   Disallow DEVICE access via this mount?     no                 +
+   Disallow execution of SUID and sgid programs   no             +
+   in this file system?                                          +
+-----------------------------------------------------------------+
24. Press Enter to mount the file system.
25. When the file system has been mounted successfully, press F10 to exit from smit.

Using the System Management Interface Tool facility to install SDDPCM
Use the System Management Interface Tool (SMIT) facility to install SDDPCM. The SMIT facility has two interfaces: nongraphical (enter smitty to invoke the nongraphical user interface) and graphical (enter smit to invoke the graphical user interface).
Throughout this SMIT procedure, /dev/cd0 is used for the compact disc drive address. The drive address can be different in your environment. Perform the following SMIT steps to install the SDDPCM package on your system.
1. From your desktop window, cd to the directory where the CD-ROM file system is mounted, for example /cdmnt.
2. Go to the directory usr/sys/inst.images/SDDPCM.
3. From your desktop window, enter smitty install_update and press Enter to go directly to the installation panels. The Install and Update Software menu is displayed.
4. Highlight Install Software and press Enter.
5. Enter . to indicate the current directory and press Enter.
6. Highlight Software to Install and press F4. The Software to Install panel is displayed.
7. Select the devices.sddpcm.52.rte, devices.sddpcm.53.rte, or devices.sddpcm.61.rte package, based on the OS level.
8. Press Enter. The Install and Update from LATEST Available Software panel is displayed with the name of the software that you selected to install.
9. Check the default option settings to ensure that they are what you need.
10. Press Enter to install. SMIT responds with the following message:
ARE YOU SURE??
Continuing may delete information you may want to keep.
This is your last chance to stop before continuing.
11. Press Enter to continue. The installation process can take several minutes to complete.
12. When the installation is complete, press F10 to exit from SMIT.
Attention: Do not reboot the system if you only have the SDDPCM host attachment package installed.

Unmounting the CD-ROM File System
After successfully installing SDDPCM, use the following procedure to unmount the CD-ROM file system in order to remove the CD-ROM:
1. Go to the root (/) directory.
2. Enter umount /cdmnt and press Enter to unmount the CD-ROM file system from the /cdmnt directory.
3. Enter rmfs /cdmnt and press Enter to remove the CD-ROM file system.
4. Remove the CD-ROM.

Installing SDDPCM from downloaded code
Use this procedure to install SDDPCM from downloaded code:
1. Download code from the Web site that is appropriate for your system.
2. Move the downloaded files to the /usr/sys/inst.images directory.
3. Change directory to the /usr/sys/inst.images directory.
4. Untar the downloaded file: for example,
tar -xvf devices.sddpcm.53.rte.tar
5. Update the .toc file. For example:
pwd
rm -i .toc
inutoc .
grep -i sdd .toc
This command should reflect the newer SDDPCM code version that will be updated.
6. Continue the installation by following the instructions beginning in step 3 on page 112.

Installing SDDPCM with the AIX OS from an AIX NIM SPOT server to the client SAN boot disk or the internal boot disk
You can install SDDPCM from an AIX Network Installation Management (NIM) server to the client SAN boot disk or the internal boot disk at the same time that the AIX OS is installed. You must set up the NIM master and create the lpp_source and Shared Product Object Tree (SPOT) resources with the images on a file system, which is either NFS-exported or is obtained from a CD or DVD.
Prepare for the NIM SPOT installation with AIX OS and SDDPCM on the client's SAN boot disk or the internal boot disk. To do this, first set up a NIM master and create the lpp_source and SPOT resource. You can use the System Management Interface Tool (SMIT) facility to implement the following procedures:
1. Install the following filesets to set up the system as an NIM master:
bos.sysmgt.nim.master
bos.sysmgt.nim.spot
2. Initialize the NIM master system by running the smitty nim_config_env command.
3. Create a new lpp_source and SPOT resource by running the smitty nim_config_env command.
4. Add the SDDPCM fileset to the newly created lpp_source by running the smitty nim_task_inst command.
5. Create a SPOT from the new lpp_source by running the smitty nim_config_env command.
6. Define an NIM client by running the smitty nim command.
See the NIM task roadmap on the Web for detailed information on how to complete these tasks:

http://publib16.boulder.ibm.com/pseries/en_US/aixins/insgdrf/nim_roadmap.htm#nim_roadmap
After you have successfully prepared for the NIM SPOT installation, you are ready to use the SMIT tool to start the NIM installation on the client system:
1. Run the smitty nim command.
a. Click Perform NIM Administration Tasks > Manage Network Install Resource Allocation > Manage Machines > Allocate Network Install Resources.
b. Select the hostname of the client that you defined previously.
c. Select the lpp_source and SPOT resources that you created previously, and then press Enter.
2. Run the smitty nim command again.
a. Click Perform NIM Administration Tasks > Manage Machines > Perform Operations on Machines.
b. Select the hostname of the client that you selected previously.
c. Click bos_inst.
d. Set the ACCEPT new license agreements field to Yes, and then press Enter.
The system automatically reboots after the smitty nim task completes. Use the following command to check the SAN boot disk and make sure the boot disk is configured with SDDPCM:
lsattr -El hdiskX (SAN boot disk device name)
From the output of this command, check the ODM attribute PCM to ensure that the value is PCM/friend/sddpcm or PCM/friend/sddappcm.

Updating SDDPCM
The following sections discuss methods of updating SDDPCM, verifying the currently installed version of SDDPCM, and the maximum number of devices that SDDPCM supports.

Updating SDDPCM packages by installing a newer base package or a program temporary fix
SDDPCM allows you to update SDDPCM by installing a newer base package or a program temporary fix (PTF). A PTF file has a file extension of .bff (for example, devices.sddpcm.52.rte.2.1.0.1.bff) and can either be applied or committed when it is installed. If the PTF is committed, the update to SDDPCM is permanent; to remove the PTF, you must uninstall SDDPCM. If the PTF is applied, you can choose to commit or to reject the PTF at a later time. If you decide to reject the PTF, you will not need to uninstall SDDPCM from the host system.
Note: If you are not upgrading the operating system, regardless of whether you have SAN boot devices, you can update SDDPCM packages by installing a newer base package or a program temporary fix. Otherwise, see “Migrating SDDPCM during an AIX OS upgrade with multipath SAN boot devices (on supported storage hdisks)” on page 119.
After applying the base package or the PTF, reboot the system. The SDDPCM server daemon should automatically start after restarting the system. If it does not start automatically, start the SDDPCM server daemon manually.
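For reference, the apply/commit distinction described above maps onto standard installp flags if you prefer the command line to SMIT. The commands below are only built and echoed, not executed; the package name and directory are examples:

```shell
pkg=devices.sddpcm.53.rte
dir=/usr/sys/inst.images

# Apply only (the PTF can be committed or rejected later):
apply_cmd="installp -a -d $dir $pkg"
# Apply and commit in one step (permanent until SDDPCM is uninstalled):
commit_cmd="installp -ac -d $dir $pkg"
# Commit or reject a previously applied PTF:
later_commit="installp -c $pkg"
later_reject="installp -r $pkg"

echo "$apply_cmd"
```

These are standard AIX installp flags (-a apply, -c commit, -r reject, -d input device or directory); check the installp man page on your AIX level before relying on them.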

Use the SMIT facility to update SDDPCM. The SMIT facility has two interfaces: nongraphical (enter smitty to invoke the nongraphical user interface) and graphical (enter smit to invoke the GUI).
If the base package or PTF is on a CD-ROM, you must mount the CD file system. See “Creating and mounting the CD-ROM filesystem” on page 110 for directions on how to mount the CD file system. In the following procedure, /dev/cd0 is used for the CD drive address. The drive address can be different in your environment. Perform the following SMIT steps to update the SDDPCM package on your system:
1. Log in as the root user.
2. Type smitty install_update and press Enter to go directly to the installation panels. The Install and Update Software menu is displayed.
3. Select Install Software and press Enter.
4. Press F4 to display the INPUT Device/Directory for Software panel.
5. Select either a CD drive that you are using for the installation or a local directory where the packages reside, for example, /dev/cd0, and press Enter.
6. Press Enter again. The Install Software panel is displayed.
7. Select Software to Install and press F4. The Software to Install panel is displayed.
8. Select the base package or the PTF package that you want to install.
9. Press Enter. The Install and Update from LATEST Available Software panel is displayed with the name of the software that you selected to install.
10. If you only want to apply the PTF, select Commit software Updates? and tab to change the entry to no. The default setting is to commit the PTF. If you specify no to Commit Software Updates?, ensure that you specify yes to Save Replaced Files?.
11. Check the other default option settings to ensure that they are what you need.
12. Press Enter to install. SMIT responds with the following message:
+---------------------------------------------------------------------+
|ARE YOU SURE??                                                       |
|Continuing may delete information you may want to keep.              |
|This is your last chance to stop before continuing.                  |
+---------------------------------------------------------------------+
13. Press Enter to continue. The installation process can take several minutes to complete.
14. When the installation is complete, press F10 to exit from SMIT.
15. Unmount the CD-ROM file system and remove the compact disc.

Committing or rejecting a program temporary fix update
Before you reject a PTF update, you need to unconfigure and remove all supported storage devices from your host system. Committing a PTF does not require this extra step. Perform the following steps to commit or reject a PTF update with the SMIT facility. The SMIT facility has two interfaces: nongraphical (enter smitty to invoke the nongraphical user interface) and graphical (enter smit to invoke the GUI).
1. Log in as the root user.
2. From your desktop window, enter smitty install and press Enter to go directly to the installation panels. The Software Installation and Maintenance menu is displayed.
3. Select Software Maintenance and Utilities and press Enter.

4. Select Commit Applied Software Updates to commit the PTF or select Reject Applied Software Updates to reject the PTF.
5. Press Enter. The Commit Applied Software Updates panel is displayed or the Reject Applied Software Updates panel is displayed.
6. Select Software name and press F4. The software name panel is displayed.
7. Select the Software package that you want to commit or reject.
8. Check the default option settings to ensure that they are what you need.
9. Press Enter. SMIT responds with the following message:
+------------------------------------------------------------------------+
|ARE YOU SURE??                                                          |
|Continuing may delete information you may want to keep.                 |
|This is your last chance to stop before continuing.                     |
+------------------------------------------------------------------------+
10. Press Enter to continue. The commit or reject process can take several minutes to complete.
11. When the installation is complete, press F10 to exit from SMIT.
Note: You do not need to restart the system even though the bosboot message may indicate that a restart is necessary.

Verifying the currently installed version of SDDPCM
You can verify your currently installed version of SDDPCM by issuing one of the following commands:
lslpp -l *sddpcm*
or
pcmpath query version
Note: Using the pcmpath command requires that at least one SDDPCM-supported MPIO device is configured on the system.

Maximum number of devices that SDDPCM supports
SDDPCM supports a maximum of 1200 configured devices and a maximum of 16 paths per device. That means the maximum number of host adapter ports that are supported is 16. However, with the round robin or load balance path selection algorithms, configuring more than four paths per device might impact the I/O performance. Use the minimum number of paths necessary to achieve sufficient redundancy in the SAN environment. The recommended number of paths per device is four.
In order to support 1200 supported storage device LUNs, system administrators should first determine whether the system has sufficient resources to support a large number of devices. See “Preparing your system to configure more than 600 supported storage devices or to handle a large amount of I/O after queue depth is disabled” on page 40 for more information.
For AIX blade servers in an IBM BladeCenter® S Chassis connected to RSSM devices, refer to the RSSM documentation at the following URL for the maximum number of RSSM LUNs supported:
http://www.ibm.com/systems/support/supportsite.wss/docdisplay?lndocid=MIGR-5078491&brandind=5000020
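For scripted version checks, the installed fileset level can be pulled out of lslpp output. The sample line below only mimics the lslpp -l column layout and is illustrative, not taken from a live system:

```shell
# Sample line in the style of `lslpp -l "*sddpcm*"` output (illustrative).
lslpp_sample='  devices.sddpcm.61.rte    2.6.0.3  COMMITTED  IBM SDD PCM for AIX'

# The second whitespace-separated field is the installed level.
level=$(printf '%s\n' "$lslpp_sample" | awk '{print $2}')
echo "installed SDDPCM level: $level"
```

On a real host, pipe the live lslpp output into the same awk filter instead of using the sample variable.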

For AIX 5.3, a single host should manage a maximum of 1024 devices when devices have been enabled for Open HyperSwap on the host, with 8 logical paths configured for each copy set in the session.
For AIX 6.1, a single host should manage a maximum of 1024 devices when devices have been enabled for Open HyperSwap on the host, with 16 logical paths configured for each copy set in the session.

Migrating SDDPCM
The following sections discuss the methods of migrating SDDPCM with and without SAN boot devices:
v “Migrating the supported storage SAN boot device or nonboot volume group from AIX default PCM to SDDPCM”
v “Migrating from SDDPCM to the AIX default PCM or to SDD” on page 118
v “Migrating from SDD with SAN boot devices (on supported storage hdisks) to SDDPCM with multipath SAN boot devices” on page 119

Migrating the supported storage SAN boot device or nonboot volume group from AIX default PCM to SDDPCM
The default reserve policy of the AIX base PCM is a single-path policy, which is SCSI-2 reserve. The path selection algorithm is fail_over, which means only one path is opened at a time and that path made SCSI-2 reserve to the disk. All I/O is routed to this path. This reserve policy and path selection algorithm can cause problems if, after the SDDPCM packages are installed, you open the default PCM device before you restart the system. The default PCM device can be opened if:
v You build a volume group and file system with AIX default PCM devices, and leave the volume groups active and file system mounted
v You configure default PCM devices as the backing devices of a virtual target device when you are in a VIOS environment
After the system starts, you might see some paths in the INVALID state. The INVALID state means that the path failed to open. This is because the SCSI-2 reserve is not released during the system restart; thus, only the paths previously opened with SCSI-2 reserve can be opened and used for I/O after system restart. You might not see paths in the INVALID state if your system is at AIX 5.2 TL10 or later or at AIX 5.3 TL07 or later, or if you have IY83717 or IY83847 installed on your system. Instead, you might see a heavier I/O select count on one path. This is because the SCSI-2 reserve is not released during the system restart. Even if all the paths are allowed to be opened, only opened paths that previously made SCSI-2 reserve can actually be used for I/O.
If you have supported storage SAN boot devices that are configured with AIX default PCM, and the reserve policy is single_path (SCSI-2 reserve), switching the boot devices from AIX default PCM to SDDPCM might result in this reservation conflict situation. If you install an SDDPCM version earlier than 2.2.0.1, you must always run the relbootrsv command to release the SCSI-2 reserve on SAN boot devices after you install the SDDPCM host attachment package and the SDDPCM package. Run the relbootrsv command before you restart the system, and then run the following command against the hdisks that are part of the rootvg to verify that they are no longer reserved:
# pcmquerypr -Vh /dev/hdisk6
connection type: fscsi0
open dev: /dev/hdisk6
Attempt to read reservation key...
Attempt to read registration keys...
Read Keys parameter
Generation : 0
Additional Length: 0
resrvpolicy= no_reserve
Reserve Key provided by current host = none (hex)02bbb003
Not reserved.
If you install SDDPCM version 2.2.0.1 or later, the SCSI-2 reserve on SAN boot devices is released automatically during system boot.
In a VIOS environment, reservation conflict problems can occur on a virtual target device that used to have a default PCM device as its backing device. To prevent this problem, perform one of the following actions:
v Switch from the AIX default PCM to SDDPCM before you use the AIX default PCM device as the backing device of a virtual target device.
v Before switching from the AIX default PCM to SDDPCM, put the virtual target devices in the Define state. This properly closes the AIX default PCM and releases the SCSI-2 reserve before migrating to SDDPCM.
Reservation conflict problems can also occur on nonboot volume groups. To prevent this problem, perform one of the following actions:
v Switch from the AIX default PCM to SDDPCM before you make any volume groups and file systems.
v To switch from the AIX default PCM to SDDPCM, you must unmount file systems and vary off the volume group of the AIX default PCM to release the SCSI-2 reserve on the volume group before system restart.
v Issue relbootrsv VGname to release the SCSI-2 reserve on the active, nonboot volume group devices before you restart the system.
Note: If you specify a VGname (volume group name), relbootrsv releases the SCSI-2 reserve of the specified non-SAN boot volume group; otherwise, it releases the SCSI-2 reserve of a SAN boot volume group (rootvg).

Migrating from SDDPCM to the AIX default PCM or to SDD
Note: If you have supported storage devices configured with SDDPCM as MPIO boot devices, you need to contact IBM Customer Support for migration from SDDPCM to the AIX default PCM. The following procedure is applicable only for a system without SDDPCM MPIO devices configured as the SAN boot device.
To migrate from SDDPCM to the AIX default PCM or to SDD, you must first unconfigure the devices, stop the SDDPCM server daemon, and then uninstall the SDDPCM package and the SDDPCM host attachment package. See “Removing SDDPCM from an AIX host system” on page 123 for directions on uninstalling SDDPCM. After you uninstall SDDPCM, you can then restart the system to migrate supported storage MPIO devices to the AIX default PCM. If you want to migrate supported storage devices to SDD devices, you must then install the supported storage device host attachment for SDD and the appropriate SDD package for your system. Then restart the system to configure the supported storage devices to SDD vpath devices.
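The pcmquerypr output shown earlier can also be checked mechanically, for example from a post-install script that confirms relbootrsv cleared the reserve. This sketch scans sample output (trimmed from the example above) for the closing "Not reserved" line:

```shell
# Trimmed pcmquerypr -Vh output, as shown in the example above.
querypr_sample='resrvpolicy= no_reserve
Reserve Key provided by current host = none (hex)02bbb003
Not reserved.'

if printf '%s\n' "$querypr_sample" | grep -q '^Not reserved'; then
  reserve_state="clear"
else
  reserve_state="held"
fi
echo "hdisk6 reserve state: $reserve_state"
```

On a live host you would substitute the real command output for each rootvg hdisk; a "held" result means relbootrsv still needs to run before the restart.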

Migrating from SDD with SAN boot devices (on supported storage hdisks) to SDDPCM with multipath SAN boot devices
If you have supported storage devices configured with SDD and there are SAN boot devices with supported storage hdisk devices, you need to contact IBM Customer Support for migration from SDD to SDDPCM.

Migrating SDDPCM during an AIX OS upgrade with multipath SAN boot devices (on supported storage hdisks)
SDDPCM provides different packages to match the AIX OS level. If an AIX system is going to be upgraded to a different OS level, for example, from AIX 5.3 to AIX 6.1, you need to install the corresponding SDDPCM package for that OS level. If you want to upgrade AIX OS and there are SAN boot devices with SDDPCM supported storage hdisk devices, you need to contact IBM Customer Support for migration from SDDPCM during the OS upgrade. If you are not in a SAN boot environment, or you are only upgrading the AIX OS Technology Level or Service Pack, for example, from AIX 5.3 TL04 to AIX 5.3 TL06, you can follow the procedures in “Updating SDDPCM” on page 114.

Configuring and unconfiguring supported storage MPIO-capable devices
After installing the MPIO-supported storage device host attachment and the SDDPCM package, you need to restart the system in order to configure supported storage devices as MPIO-capable devices. After the first system restart, you can then use the normal AIX command line configure programs to configure and unconfigure supported storage MPIO-capable devices.

Configuring supported storage MPIO-capable devices
The newly installed supported storage devices must be configured as MPIO-capable devices before you can use them. Use one of the following commands to configure these devices:
v cfgmgr command
Note: If operating in a switched environment, issue the cfgmgr command once for each host adapter each time a device is added. If you use the cfgmgr command to configure supported storage MPIO devices, you might need to start the SDDPCM server daemon manually, if it has not already started. See “SDDPCM server daemon” on page 137 for information describing how to check the daemon status and how to manually start the daemon.
v shutdown -rF command to restart the system.
After the system restarts, the SDDPCM server daemon (pcmsrv) should automatically start.
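Because cfgmgr must be issued once per host adapter in a switched environment, a small loop helps avoid missing an adapter. The adapter names below are examples, and the commands are only echoed here rather than executed:

```shell
# Example adapter list; on a live host you might derive it from
# `lsdev -Cc adapter` output instead of hardcoding it.
for adapter in fcs0 fcs1; do
  echo "cfgmgr -vl $adapter"   # echoed for illustration; run the command itself on AIX
done
```

Remove the echo (or eval the string) on a real system once you have confirmed the adapter list.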

Verifying the SDDPCM configuration
To verify the SDDPCM configuration, you can use one of the following:
v SMIT MPIO management submenu
v SDDPCM pcmpath query device command
Perform the following steps using SMIT to verify the SDDPCM configuration on an AIX host system:
1. Enter smitty MPIO from your desktop window. The MPIO management menu is displayed.
2. Select MPIO Device Management and press Enter. The MPIO Device Management panel is displayed.
3. Select List ALL MPIO Devices and press Enter. All MPIO devices on the host are listed.
4. Search for all IBM MPIO FC XXXX devices, where XXXX can be 2105, 2107, 1750, or SAN Volume Controller MPIO devices. Ensure that these devices are in the Available state. Also search for MPIO FC 2145 if you have a host attachment package for SDDPCM with version 1.0.0.8 or later.
Search for all IBM MPIO DSXXXX array disks, where XXXX can be 4100, 4200, 4300, 4400, 4500, 4700, 4800, 5000, 5020, or 3950 devices. Ensure that these devices are in the Available state.
Search for all IBM MPIO SAS 1820 devices. Ensure that these devices are in the Available state.
You can also use the SDDPCM pcmpath query device command to query the configuration status of supported storage devices.
Notes:
1. If none of the supported storage devices are configured successfully as MPIO devices, the pcmpath query device command will fail.
2. Because the AIX disk-configure method supports parallel device configuration, the same MPIO logical device name can be configured with different supported storage devices after each device configuration. If you have a program or script that has the MPIO logical device name hardcoded, ensure that you check the MPIO logical device name binding with the storage device LUN. If there is a change to the binding between an MPIO logical device name and the storage device LUN, you must revise your program or script.

Dynamically adding and removing paths or adapters
SDDPCM supports dynamically adding or removing extra paths or adapters while I/O is running. To add extra paths that are attached to a fibre-channel adapter or a SAS adapter, to existing available devices, enter:
mkpath -l hdiskX -p fscsiY
or
mkpath -l hdiskX -p sasY
When the command returns successfully, the paths are added to the devices. To check the device configuration status, enter:
lspath -l hdiskX -H -F "name path_id parent connection status"
or
pcmpath query device X
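The lspath output format shown above lends itself to quick health checks, such as counting how many of a device's paths are in the Enabled state. The sample lines below follow that five-field format, with made-up connection values:

```shell
# Sample output in the style of:
#   lspath -l hdisk2 -H -F "name path_id parent connection status"
lspath_sample='hdisk2 0 fscsi0 5005076801102be1,0 Enabled
hdisk2 1 fscsi0 5005076801202be1,0 Enabled
hdisk2 2 fscsi1 5005076801302be1,0 Enabled
hdisk2 3 fscsi1 5005076801402be1,0 Failed'

# Count rows whose fifth field (status) is Enabled.
enabled=$(printf '%s\n' "$lspath_sample" | awk '$5 == "Enabled" {n++} END {print n+0}')
echo "hdisk2 enabled paths: $enabled"
```

Piping live lspath output through the same awk filter gives a one-line redundancy check per device.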

To add a new fibre-channel adapter to existing, available, supported storage MPIO devices, enter:
cfgmgr -vl fscX
To add a new SAS adapter, depending on the parent driver, enter:
cfgmgr -vl mptsasX
or
cfgmgr -vl sissasX
To check the adapter configuration status, enter:
pcmpath query adapter
or
pcmpath query device
To dynamically remove all paths under a parent fibre-channel adapter from a supported storage MPIO device, enter:
rmpath -dl hdiskX -p fscsiY
To dynamically remove a fibre-channel or SAS adapter and all child devices from supported storage MPIO devices, use smit mpio, or enter the following on the command line:
rmdev -l fscsiX -R
or
rmdev -l sasY -R
To dynamically remove a particular path, run smit mpio, or enter one of the following commands on the command line:
rmpath -l hdiskX -p fscsiY -w connection location code
or
rmpath -l hdiskX -p sasY -w connection location code
or
rmpath -dl hdiskX -p fscsiY -w connection location code
or
rmpath -dl hdiskX -p sasY -w connection location code
Issue the following command to get a particular path connection location code:
lspath -l hdiskX -H -F "name path_id parent connection status"
Note: You cannot remove the last path from a supported storage MPIO device. The command will fail if you try to remove the last path from a supported storage MPIO device.

Unconfiguring supported storage MPIO-capable devices
Note: Contact IBM support if your system has SAN boot MPIO devices configured with SDDPCM on supported storage system disks. The SAN boot MPIO devices cannot be unconfigured with the following procedures.
To remove all non-SAN supported storage MPIO-capable devices:
1. Unmount the file systems of all supported storage devices.
2. Vary off all supported storage device volume groups.
3. Stop pcmsrv. See “Stopping the SDDPCM server” on page 138 for the procedure.
4. For fibre-channel-attached devices, enter the following command for each adapter:
rmdev -dl fcsX -R
5. For RSSM devices, enter the following command:
lsdev -Cc disk | grep 1820 | awk '{print $1}' | xargs -n1 rmdev -dl

Switching DS4000, DS5000, or DS3950 storage device configurations
For SDDPCM version 2.5.1.0 and subsequent releases, on systems with AIX 5.3 TL08 or AIX 6.1 TL02 and the required AIX APARs installed, users can use the AIX manage_disk_drivers command to display the configured drivers for DS3950, DS4100, DS4200, DS4300, DS4500, DS4700, DS4800, DS5000, DS5020, and DS5100 storage devices, and a list of supported drivers for these device models. Supported drivers are SDDPCM (displayed as AIX_SDDAPPCM), AIX native PCM (displayed as AIX_APPCM), and the AIX_FCP array driver (displayed as AIX_fcparray; only supports DS4K storage devices). For example:
[root@telefone] /home/root # manage_disk_drivers -l
Device   Present Driver   Driver Options
2810XIV  AIX_AAPCM        AIX_AAPCM,AIX_non_MPIO
DS4100   AIX_SDDAPPCM     AIX_APPCM,AIX_fcparray
DS4200   AIX_SDDAPPCM     AIX_APPCM,AIX_fcparray
DS4300   AIX_SDDAPPCM     AIX_APPCM,AIX_fcparray
DS4500   AIX_SDDAPPCM     AIX_APPCM,AIX_fcparray
DS4700   AIX_SDDAPPCM     AIX_APPCM,AIX_fcparray
DS4800   AIX_SDDAPPCM     AIX_APPCM,AIX_fcparray
DS3950   AIX_SDDAPPCM     AIX_APPCM
DS5020   AIX_SDDAPPCM     AIX_APPCM
DS5100   AIX_SDDAPPCM     AIX_APPCM
This AIX command also allows users to switch DS4000, DS5000, and DS3950 storage device configuration drivers among these driver options. However, switching device configuration drivers requires a system reboot to reconfigure the devices after running this command. The syntax of this command function is:
manage_disk_drivers -d device -o driver_option
Note: You cannot remove the last path from a supported storage MPIO device. The command will fail if you try to remove the last path from a supported storage MPIO device.

Unconfiguring supported storage MPIO-capable devices

Note: Contact IBM support if your system has SAN boot MPIO devices configured with SDDPCM on supported storage system disks. The SAN boot MPIO devices cannot be unconfigured with the following procedures.

To remove all non-SAN supported storage MPIO-capable devices:
1. Unmount the file systems of all supported storage devices.
2. Vary off all supported storage device volume groups.
3. Stop pcmsrv. See "Stopping the SDDPCM server" on page 138 for the procedure.
4. For fibre-channel-attached devices, enter the following command for each adapter:
rmdev -dl fcsX -R
5. For RSSM devices, enter the following command:
lsdev -Cc disk | grep 1820 | awk '{print $1}' | xargs -n1 rmdev -dl

Switching DS4000, DS5000, or DS3950 storage device configurations

For SDDPCM version 2.5.1.0 and subsequent releases, with systems having AIX 5.3 TL08 and AIX 6.1 TL02 and the required AIX APARs installed, users can use the AIX manage_disk_drivers command to display the configured drivers for DS3950, DS4100, DS4200, DS4300, DS4500, DS4700, DS4800, DS5100, and DS5020 storage devices, and a list of supported drivers for these device models. Supported drivers are SDDPCM (displayed as AIX_SDDAPPCM), AIX native PCM (displayed as AIX_APPCM), and AIX_FCP array driver (displayed as AIX_fcparray; only supports DS4K storage devices). This AIX command also allows users to switch DS4000, DS5000, and DS3950 storage device configuration drivers among these driver options. However, switching device configuration drivers requires a system reboot to reconfigure the devices after running this command.

For example:
[root@telefone] /home/root # manage_disk_drivers -l
Device   Present Driver   Driver Options
2810XIV  AIX_AAPCM        AIX_AAPCM,AIX_non_MPIO
DS4100   AIX_SDDAPPCM     AIX_APPCM,AIX_fcparray
DS4200   AIX_SDDAPPCM     AIX_APPCM,AIX_fcparray
DS4300   AIX_SDDAPPCM     AIX_APPCM,AIX_fcparray
DS4500   AIX_SDDAPPCM     AIX_APPCM,AIX_fcparray
DS4700   AIX_SDDAPPCM     AIX_APPCM,AIX_fcparray
DS4800   AIX_SDDAPPCM     AIX_APPCM,AIX_fcparray
DS3950   AIX_SDDAPPCM     AIX_APPCM
DS5020   AIX_SDDAPPCM     AIX_APPCM
DS5100   AIX_SDDAPPCM     AIX_APPCM

The syntax of this command function is:
manage_disk_drivers -d device -o driver_option

122 Multipath Subsystem Device Driver User's Guide
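The RSSM removal pipeline shown above can be previewed safely before any device is deleted. This sketch substitutes echo for rmdev and feeds the same filter simulated lsdev output; the disk names and descriptions are illustrative sample data, not from the manual.

```shell
#!/bin/sh
# Simulated output of: lsdev -Cc disk   (sample data only)
sample_lsdev() {
cat <<'EOF'
hdisk0 Available 00-08-00 SAS Disk Drive
hdisk1 Available 01-08-02 1820 N00 RSSM Disk
hdisk2 Available 01-08-02 1820 N00 RSSM Disk
EOF
}

# Same filter as the documented pipeline, but echoing instead of
# running rmdev -dl, so the result can be inspected first.
sample_lsdev | grep 1820 | awk '{print $1}' | xargs -n1 echo rmdev -dl
```

Once the printed list looks right, the real pipeline from the text can be run on the AIX host.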

Removing SDDPCM from an AIX host system

Before you remove the SDDPCM package from your AIX host system, all supported storage devices must be unconfigured and removed from your host system, or migrated to the AIX default PCM. The SDDPCM server daemon must be stopped.

Notes:
1. With version 3.0.0.0, it is no longer required to manually stop the SDDPCM server daemon (pcmsrv).
2. SDDPCM supports MPIO-supported storage devices as SAN boot devices. If your server has a supported storage device configured with SDDPCM as the SAN boot device, contact IBM support for your SAN boot configuration environment.

Note: You must remove both the host attachment for SDDPCM and the SDDPCM software packages before you reconfigure supported storage devices or restart the system. Otherwise, the devices can be in the Defined state and cannot be configured as either MPIO or non-MPIO devices.

After all the supported storage devices are removed or migrated to the AIX default PCM and the SDDPCM server daemon (pcmsrv) is stopped, perform the following steps to remove the SDDPCM software package:
1. Enter smitty deinstall from your desktop window to go directly to the Remove Installed Software panel.
2. Press F4 in the SOFTWARE name field to bring up a list of packages and press the F7 key to select the package to uninstall.
3. Press Tab in the PREVIEW Only? field to toggle between Yes and No. Select No to remove the software package from your AIX host system.
Note: If you select Yes, the process stops at this point and previews what you are removing. The results of your precheck are displayed without removing the software. If the state for any supported storage MPIO device is either Available or Defined, the process fails.
4. Select No for the remaining fields on this panel.
5. Press Enter. SMIT responds with the following message:
ARE YOU SURE?? Continuing may delete information you may want to keep. This is your last chance to stop before continuing.
6. Press Enter to begin the removal process. This might take a few minutes.
7. When the process is complete, the SDDPCM software package and the supported storage device host attachment for SDDPCM are removed from your system.

SDDPCM support for HACMP with Enhanced Concurrent Mode volume groups

Beginning with SDDPCM 2.1.2.0, SDDPCM supports HACMP V5.1, V5.2, V5.3, and V5.4 on an AIX 5.2 TL07 (or later) and AIX 5.3 TL03 (or later) system with both concurrent and nonconcurrent resource groups. SDDPCM 2.4.0.0 supports HACMP V5.3 and V5.4 on an AIX 6.1 system with both concurrent and nonconcurrent resource groups.

This support requires definition of the shared volume groups as Enhanced Concurrent Mode volume groups, in either concurrent or nonconcurrent resource groups. This means that no reserve needs to be broken in response to a node failure, and hence any requirement on breaking reserves is removed. A special interaction between HACMP and LVM ensures that if the volume group is used in a nonconcurrent resource group, applications are allowed to access it on one node at a time. Only the no_reserve policy is supported in both concurrent and nonconcurrent resource groups.

The Enhanced Concurrent Mode volume groups are sufficient to ensure high availability. However, if system ECM volume groups are in nonconcurrent resource groups, you should configure your SAN using the following guidelines:
v The interaction between HACMP and LVM to ensure that only one node has nonconcurrent access at a time is advisory locking. This is in contrast to the mandatory locking provided by SCSI reserves. To ensure that production data is not inadvertently modified by nodes that are not in the HACMP cluster, the following should be done:
1. Use either physical cabling or zoning to ensure that only HACMP nodes have access to the shared LUNs. That is, non-HACMP nodes should be prevented by hardware from accessing the shared LUNs.
2. Start HACMP on the cluster nodes at boot time. This ensures that HACMP will activate the appropriate access controls on the shared disk before applications have a chance to modify the access controls.
v Configure disk heartbeating to reduce the likelihood of one node considering the other dead and attempting to take over the shared disks. (This is known as a partitioned cluster, or split brain syndrome.) If the shared disks consist of multiple enclosures, use one disk in each enclosure as a heartbeat path.

Different storage systems or models might support different versions of HACMP and SDDPCM. For information, see the interoperability matrix for your storage:
http://www.ibm.com/systems/storage/disk/ess/
http://www.ibm.com/systems/storage/disk/ds6000/
http://www.ibm.com/systems/storage/disk/ds8000/
http://www.ibm.com/systems/storage/software/virtualization/svc/

For HACMP v5.1, v5.2, v5.3, and v5.4 for AIX5L support information, go to the following Web site:
http://publib.boulder.ibm.com/infocenter/clresctr/vxrx/index.jsp?topic=/com.ibm.cluster.hacmp.doc/hacmpbooks.html

For HACMP up-to-date APAR information, go to the following Web site:
http://www14.software.ibm.com/webapp/set2/sas/f/hacmp/download/aix53.html

MPIO reserve policies

SDDPCM supports four types of MPIO reserve policies. You can select one of the four reserve policies based on your configuration environment or application needs. The supported reserve policies are:
v No Reserve reservation policy
v Exclusive Host Access Single Path reservation policy

v Persistent Reserve Exclusive Host Access reservation policy
v Persistent Reserve Shared Host Access reservation policy

No Reserve reservation policy

If you set MPIO devices with this reserve policy, there is no reserve being made on the MPIO devices. A device without reservation can be accessed by any initiators at any time. I/O can be sent from all the paths of the MPIO device. This is the default reserve policy of SDDPCM. In a concurrent clustering environment, such as HACMP, this is the reserve policy that you should select. Current HACMP clustering software supports the no_reserve policy with Enhanced Concurrent Mode volume groups. HACMP support for persistent reserve policies for supported storage MPIO devices is not available.

Exclusive Host Access single-path reservation policy

This is the SCSI-2 reservation policy. If you set this reserve policy for MPIO devices, only the fail_over path selection algorithm can be selected for the devices. With this reservation policy, an MPIO device has all paths being opened; however, only one path made a SCSI-2 reservation on the device. I/O can only be sent through this path. When this path is broken, the reserve is released, another path is selected, and the SCSI-2 reserve is reissued by this new path. All input and output is now routed to this new path.

Persistent Reserve Exclusive Host Access reservation policy

If you set an MPIO device with this persistent reserve policy, a persistent reservation is made on this device with a persistent reserve (PR) key. Any initiators that register with the same PR key can access this device. Normally, you should pick a unique PR key for a server. Different servers should have different unique PR keys. I/O is routed to all paths of the MPIO device, because all paths of an MPIO device are registered with the same PR key. In a nonconcurrent clustering environment, such as HACMP, this is the reserve policy that you should select. Current HACMP clustering software supports the no_reserve policy with Enhanced Concurrent Mode volume groups. HACMP support for persistent reserve policies for supported storage MPIO devices is not available.

Persistent Reserve Shared Host Access reservation policy

If you set an MPIO device with this persistent reserve policy, a persistent reservation is made on this device with a persistent reserve (PR) key. However, any initiators that implemented persistent registration can access this MPIO device, even if the initiators are registered with different PR keys. In a concurrent clustering environment, such as HACMP, this is the reserve policy that you should select for sharing resources among multiple servers.

SDDPCM active/passive storage device controller health-check feature

This controller health-check function, when enabled, is invoked when an enabled path becomes unavailable due to transport problems. Enabling this feature results in faster I/O failure time when access to a LUN is lost. The faster failure time is controlled by the controller health check attribute setting: cntl_delay_time. By default, the controller health-check feature is disabled. To enable this feature, set the following two ODM attributes to a non-zero value for the active/passive storage devices: cntl_delay_time and cntl_hcheck_int.
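Before enabling this feature with chdev, the two attribute values can be sanity-checked against the constraints this chapter describes (both must be non-zero, cntl_delay_time=1 also disables the feature, the valid range is 0-300 seconds, and cntl_hcheck_int must be less than cntl_delay_time). The following is a small sketch of such a guard; the rules come from this chapter, but the helper function itself is hypothetical, and the command is printed rather than executed.

```shell
#!/bin/sh
# Succeed only if the pair of values would actually enable the
# controller health-check feature: 0 disables it, and so does
# cntl_delay_time=1; cntl_hcheck_int must be smaller than cntl_delay_time.
cntl_hc_valid() {
    delay=$1
    interval=$2
    [ "$delay" -gt 1 ] && [ "$delay" -le 300 ] &&
    [ "$interval" -gt 0 ] && [ "$interval" -lt "$delay" ]
}

if cntl_hc_valid 30 2; then
    # Print (rather than run) the AIX command using the 30/2 example values.
    echo "chdev -l hdiskX -a cntl_delay_time=30 -a cntl_hcheck_int=2"
fi
```

A guard like this is useful because, as the CAUTION in this chapter notes, a cntl_delay_time that is too short can turn a temporary controller outage into an I/O failure.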

Note: Setting either value to zero (0) disables this feature. Setting cntl_delay_time=1 also disables this feature.

For example, if you want to allow the storage device to come back on the fabric within 30 seconds after leaving the fabric, set cntl_delay_time=30 and cntl_hcheck_int=2. The device, /dev/hdisk#, must not be in use when setting the ODM values with the chdev command, or the chdev -P option must be used, which requires a reboot to make the change take effect.

Two pcmpath commands allow you to dynamically change these two controller health-check ODM attributes. See "pcmpath set device cntlhc_delay" on page 180 and "pcmpath set device cntlhc_interval" on page 179 for information about these commands.

CAUTION: There are cases where the storage device may reboot both of the controllers and become inaccessible for a period of time. If the controller health check feature is enabled, and the cntl_delay_time is set too short, it may result in an I/O failure. Ensure that you have a mirrored volume with failover protection, or a GPFS configuration, if you are running with controller health check enabled and the cntl_delay_time setting is under 60 seconds, or is not long enough to cover the temporary device inaccessible conditions that can occur during the storage concurrent code load or other error injection operations.

SDDPCM ODM attribute settings

The following sections discuss the SDDPCM ODM attribute default settings, and how to change the attributes of the supported storage MPIO devices:
v "SDDPCM ODM attribute default settings"
v "Changing device reserve policies" on page 127
v "Changing the path selection algorithm" on page 127
v "Changing SDDPCM path healthcheck mode" on page 129
v "Changing SDDPCM path healthcheck time interval" on page 130
v "Changing the Open HyperSwap quiesce expire time" on page 130
v "Changing SDDPCM controller healthcheck delay_time" on page 132
v "Changing SDDPCM controller healthcheck interval time" on page 133

Note: For RSSM devices, refer to the RSSM documentation at the following URL for information about setting the queue_depth attribute:
http://www.ibm.com/systems/support/supportsite.wss/docdisplay?lndocid=MIGR-5078491&brandind=5000020

SDDPCM ODM attribute default settings

SDDPCM has the following default attribute settings:

Attribute                    Default value
device reserve policy        no_reserve
path selection algorithm     load balance
healthcheck mode             nonactive
healthcheck time interval    60 seconds

quiesce expire time          120 seconds

Changing device reserve policies

Use the chdev command to change the reserve policy for a device. Because chdev requires that the device be unconfigured and then reconfigured, this is a disruptive operation.

The following reserve policies can be used with any of the supported path selection algorithms (see "Supported SDDPCM features" on page 101):
v no_reserve
v persistent reserve exclusive host access
v persistent reserve shared host access

When the reserve policy of a device is exclusive host access single path (SCSI-2), the only supported path selection algorithm is the fail_over algorithm. The fail_over algorithm selects one path at a time for all I/Os. When the active path fails, an alternative path is selected. The SCSI-2 reserve is reissued by this alternative path.

To change the device reserve policy to no_reserve, enter:
chdev -l hdiskX -a reserve_policy=no_reserve

If you want to change the reserve policy to one of the persistent reserve policies, you must provide a persistent reserve key at the same time that you change the device policy to one of the persistent reserve types. For example, to change the reserve policy to PR_shared:
chdev -l hdiskX -a PR_key_value=0x1234 -a reserve_policy=PR_shared

Note: Beginning with SDDPCM 2.1.0.0, two persistent reserve tools are provided to manage supported storage system MPIO devices. See "Persistent reserve command tools" on page 140 for more information.

Changing the path selection algorithm

Starting with SDDPCM 2.1.0.0, you can use the pcmpath set device algorithm command to dynamically change the path selection algorithm. See "pcmpath set device algorithm" on page 176 for information about this command.

You can also use the chdev command to change the path selection algorithm of a device. Because chdev requires that the device be unconfigured and then reconfigured, this is a disruptive operation.

Use the following command to change the device path selection algorithm to round robin:
chdev -l hdiskX -a algorithm=round_robin

You can change the reserve_policy and algorithm for a device with one command. For example, to change the reserve policy to no_reserve and the path selection algorithm to round robin:
chdev -l hdiskX -a reserve_policy=no_reserve -a algorithm=round_robin
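Because the reserve policy and algorithm usually need to agree across every hdisk of a storage device, the per-disk chdev calls are often scripted. This sketch only prints the combined command for each disk (a dry run); the disk names are placeholders, and on a real host the list would come from lsdev.

```shell
#!/bin/sh
# Emit one combined chdev command per disk. The commands are printed,
# not executed; on a real AIX host you would drop the echo.
apply_reserve_and_algorithm() {
    for d in "$@"; do
        echo "chdev -l $d -a reserve_policy=no_reserve -a algorithm=round_robin"
    done
}

apply_reserve_and_algorithm hdisk4 hdisk5 hdisk6
```

Keep in mind that chdev is disruptive: as noted above, each device must be unconfigured and then reconfigured, so the commands should be run during a maintenance window.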

Using the load balancing port algorithm

Starting with SDDPCM 2.4.0.0, you can use the load balancing port algorithm to achieve better balanced I/O performance in certain specific SAN configuration environments. This path selection algorithm supports only 2105, 2107, 1750, and 2145 storage devices. This algorithm is based on the existing load balancing algorithm, but it also uses the incorporated I/O statistics on each target port to which the host is connected. You can use this algorithm for SAN configurations where I/O throughput is unbalanced on the storage targets.

The following examples demonstrate the possible configurations in which this algorithm can help to improve performance.

In Figure 4, link 2 between the switch and the storage is more heavily loaded than link 1. Host X is attempting to balance the load across four paths (A1, A2, B1, B2). Host Y has only one active path to the storage (C2). This condition creates a throughput imbalance on link 2 that prevents optimal load balancing on Host X.

Figure 4. Workload imbalance when one link is more heavily loaded than another link

In Figure 5, link 2 between the switch and the storage is receiving twice the load of the other links. Host X is attempting to balance the load across four paths (A1, A2, B1, B2). Host Y is attempting to balance the load across four paths (C2, C3, D2, D3). This condition creates a throughput imbalance on link 2 that prevents optimal load balancing on both hosts.

Figure 5. Workload imbalance when one link receives twice the load of the other links

In Figure 6 on page 129, one path is lost from a failed component, which results in the same workload imbalance shown in Figure 5.

Figure 6. Workload imbalance when one host sharing workload across two paths loses one path

Host X is attempting to balance the load across four paths (A1, A2, B1, B2). Host Y has two paths to the storage (C1, D2), but only one path is active.

Note: This algorithm improves performance by balancing drastically unbalanced throughput on multiple target ports of the same storage. It cannot completely balance the throughput on the target ports because SDDPCM manages I/O load balancing only on the host where it is installed.

You can either use pcmpath set device algorithm lbp or chdev -l hdiskX -a algorithm=load_balance_port to configure the device with the load balancing port algorithm. To ensure that the load balancing port algorithm produces the desired result, all the hdisks on a host system that is attached to a given storage device must be configured with the same algorithm.

To display performance information about the target ports, you can use the following commands:
pcmpath query port target port number
pcmpath query portstats target port number
For more information, see "Using SDDPCM pcmpath commands" on page 144.

Changing SDDPCM path healthcheck mode

SDDPCM supports the path healthcheck function. If this function is enabled, SDDPCM tests opened paths and reclaims failed paths based on the value that is set in the following device healthcheck attribute: hc_mode.

Healthchecking supports the following modes of operation:
v Enabled - When this value is selected, the healthcheck command will be sent to paths that are opened with a normal path mode.
v Failed - When this value is selected, the healthcheck command is sent to paths that are in failed state.
v Nonactive - When this value is selected, the healthcheck command will be sent to paths that have no active I/O. This includes paths that are opened or in failed state.

The default value setting of SDDPCM is nonactive.

Note: If the algorithm selected is round robin or load balance, the healthcheck command will only be sent to failed paths, because the round robin and load-balancing algorithms route I/O to all opened paths that are functional.
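The three hc_mode values can be summarized as a small decision rule: which paths receive the healthcheck command depends on the mode, the path state, and whether the path has active I/O. The function below is only an illustrative model of the rules just described, not SDDPCM code.

```shell
#!/bin/sh
# Model of the hc_mode rules: succeed if a healthcheck would be sent.
#   $1 = hc_mode (enabled | failed | nonactive)
#   $2 = path state (open | failed)
#   $3 = path has active I/O (yes | no)
hc_sends_probe() {
    case $1 in
        enabled)   [ "$2" = "open" ] ;;      # opened, normal-mode paths
        failed)    [ "$2" = "failed" ] ;;    # failed paths only
        nonactive) [ "$3" = "no" ] ;;        # any path with no active I/O
        *)         false ;;
    esac
}

hc_sends_probe nonactive failed no && echo "nonactive mode probes a failed path"
```

The model makes the practical difference visible: nonactive (the default) covers both idle opened paths and failed paths, while enabled and failed each cover only one of those sets.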

Starting with SDDPCM 2.1.0.0, the pcmpath set device hc_mode command allows you to dynamically change the path healthcheck mode. See "pcmpath set device hc_mode" on page 178 for information about this command.

You can also use the chdev command to change the device path healthcheck mode. Because chdev requires that the device be unconfigured and then reconfigured, this is a disruptive operation. To change the path healthcheck mode to failed, issue the following command:
chdev -l hdiskX -a hc_mode=failed

Changing SDDPCM path healthcheck time interval

The hc_interval attribute determines how often the paths of a device should be health-checked. The hc_interval attribute has a range of values from 0 - 3600 seconds. When a value of 0 is selected, the healthcheck function is disabled. Beginning with SDDPCM v2.1.2.3, the default value setting is changed from 20 (seconds) to 60 (seconds).

Starting with SDDPCM 2.1.0.0, the pcmpath set device hc_interval command allows you to dynamically change the path healthcheck time interval. See "pcmpath set device hc_interval" on page 177 for information about this command.

You can also use the chdev command to change the device path healthcheck time interval. Because chdev requires that the device be unconfigured and then reconfigured, this is a disruptive operation. To disable the path healthcheck interval function, issue the following command:
chdev -l hdiskX -a hc_interval=0

Note: Currently, the SDDPCM healthcheck function only checks the paths being opened. It does not healthcheck any path that is in the close state. The SDDPCM server daemon healthchecks close_failed paths. If a device healthcheck function is disabled, the SDDPCM server daemon healthchecks the device's failed path that is opened. See Chapter 12, "Using the SDD server and the SDDPCM server," on page 415 for more information.

From SDDPCM v2.1.2.3 and later, a new feature of the SDDPCM server daemon healthcheck function is introduced. The SDDPCM server daemon automatically starts or stops the healthcheck function on a device if you issue one of the following:
v pcmpath set device m hc_interval 0 on the fly to disable a device internal healthcheck function
v pcmpath set device m hc_interval n to enable a device internal healthcheck function on the fly

Note: The SDDPCM server daemon only healthchecks FAILED paths. It does not implement healthcheck on opened or idle paths as the SDDPCM internal healthcheck does. This is the difference between the SDDPCM internal healthcheck and the SDDPCM server daemon healthcheck.

Changing the Open HyperSwap quiesce expire time

Beginning with SDDPCM 3.0.0.0, you can use the chdev command to change the Open HyperSwap quiesce expire time of a device. Because chdev requires that the

device be unconfigured and then reconfigured, this is a disruptive operation.

Use the following command to change the device quiesce expire time:
chdev -l hdisk$i -a quiesce_timeout=n (30 < n < 600)

Note: For a setup with a large number or the maximum number of devices, increase the quiesce_timeout to prevent a delayed recover command.

Supported AIX fibre-channel device driver features

SDDPCM supports the following fibre channel adapter driver features:
v Fast I/O failure of fibre channel devices
v Fibre-channel dynamic device tracking
v Changing SDDPCM controller healthcheck delay_time
v Changing SDDPCM controller healthcheck interval time

Fast I/O failure of fibre-channel devices

The AIX fibre channel adapter driver supports the new fc_err_recov attribute in AIX. If enabled, this attribute can reduce the amount of time needed to fail I/O in certain conditions. By default, this attribute has a value of delayed_fail, which means that the fast failure is disabled.

To enable fast failure, you must perform the following steps:
1. Change all the child devices of fscsiX on your system to the defined state by running the rmdev -l fscsiX -R command for all adapters on your system.
2. Enter the chdev -l fscsiX -a fc_err_recov=fast_fail command for all adapters on your system.
3. Enter the cfgmgr command to reconfigure all devices back to the available state.

Fibre-channel dynamic device tracking

Dynamic Tracking support allows for the following scenarios without I/O errors or loss of access:
You need to have the following Fibre Channel device driver APARs applied to your system in order to have all dynamic tracking and fast fail fixes:

APAR      Description
IY37183   Dynamic Tracking and Fast I/O Failure of Fibre Channel Devices
IY44139   Dynamic Tracking: PERM I/O errors after Host HBA Cable Pull
IY44142   Dynamic Tracking: I/O Hang after Host HBA Cable Pull
IY44342   Dynamic Tracking: Ioctl call may fail after N_Port ID Change
IY44344   Dynamic Tracking: Back-to-Back Cable Move May Delay Error Recovery
IY45368   Fast Fail/Dynamic Tracking: FC Device Inaccessible after Move
IY46701   Dynamic Tracking & MPIO: Multiple Cable Swap Cause Path Failure

Note: You must set the FC HBA fc_err_recov attribute to fast_fail for DS4000, DS5000,

and DS3950 storage devices.

v Combine two switches in two different SANs into one SAN by connecting the switches with a cable and cascading the switches.
v Change the supported storage device FC cable from one switch port to a different switch port. The disconnected cable must be reconnected within 15 seconds.
v Swap two supported storage device FC cables on the switch ports. The disconnected cables must be reconnected within 15 seconds.
v Swap two supported storage device FC cables on the supported storage device ports. The disconnected cables must be reconnected within 15 seconds.

Notes:
1. This 15 seconds window actually includes the time required to bring up the FC link after the cable is reconnected. For example, if it takes 4 seconds to bring up the FC link, the time allowed to leave the cable disconnected is 11 seconds. I/O will fail if the cable is disconnected longer than 11 seconds.
2. Fibre-channel device dynamic tracking does not support the case when a cable is moved from one adapter on the supported storage device to another free, previously unseen adapter on the supported storage device. The reason is that the world wide port name (WWPN) will be different for that adapter, and the supported storage device cannot track that adapter. The WWPN must be a static identifier of a remote port.

AIX has added a new attribute that must be changed to enable dynamic tracking. By default, dynamic tracking is disabled. To enable dynamic tracking, you must do the following:
1. Change all the child devices of fscsiX on your system to the defined state by issuing the rmdev -l fscsiX -R command for all adapters on your system.
2. Issue the chdev -l fscsiX -a dyntrk=yes command for all adapters on your system.
3. Issue cfgmgr to reconfigure all devices back to the available state.

Note: You must set the FC HBA dyntrk attribute to yes for DS4000, DS5000, and DS3950 storage devices.

For more information about Dynamic Tracking and Fast I/O Failure, see:
http://www-01.ibm.com/support/docview.wss?rs=1203&context=SWGD0&context=SWG10&context=SWGC0&context=HW182&dc=DB550&q1=dynamic+fast+tracking+%22and%22+Fast+I%2fO+failure&uid=isg1IY37183&loc=en_US&cs=UTF-8&lang=en

Installing APAR IY37183 also installs the file:
/usr/lpp/bos/README.FIBRE-CHANNEL
This file has more information about the Dynamic Tracking and Fast I/O Failure features.

Changing SDDPCM controller healthcheck delay_time

Starting with SDDPCM V2.5.2.0 and subsequent releases, the cntl_delay_time attribute is used to specify the amount of time, in seconds, that the active/passive storage device's controller(s) will be health checked after a transport failure. At the end of the specified period, if no paths are detected as good, all pending and subsequent I/O to the device will fail until the device health checker detects that a failed path has recovered.

The cntl_delay_time attribute has a range of values from 0-300 seconds. When a value of 0 or 1 is selected, the controller healthcheck function is disabled. By default, this attribute value is set to 0, which means the controller health check function is disabled.

The pcmpath set device cntlhc_delay command allows you to dynamically change the controller health check delay_time. See "pcmpath set device cntlhc_delay" on page 180 for information about this command.

You can also use the chdev command to change the device controller healthcheck delay_time. Because chdev requires that the device be unconfigured and then reconfigured, this is a disruptive operation. Or you can use the chdev -P command, which requires a reboot to make the change take effect.

Changing SDDPCM controller healthcheck interval time

Starting with SDDPCM V2.5.2.0 and subsequent releases, the cntl_hcheck_int attribute is used to specify the amount of time, in seconds, after which the next controller health check command is issued. When the controller health check is enabled, the first controller health check is issued after a storage fabric transport failure is detected. This value must be less than the cntl_delay_time, unless it is set to zero (0) to disable this feature.

The cntl_hcheck_int attribute has a range of values from 0-300 seconds. When a value of 0 is selected, the controller healthcheck function is disabled. By default, this attribute value is set to 0, which means the controller health check function is disabled.

The pcmpath set device cntlhc_interval command allows you to dynamically change the controller health check interval time. See "pcmpath set device cntlhc_interval" on page 179 for information about this command.

You can also use the chdev command to change the device controller healthcheck interval time. Because chdev requires that the device be unconfigured and then reconfigured, this is a disruptive operation. Or you can use the chdev -P command, which requires a reboot to make the change take effect.

Note: To enable the active/passive storage device controller health check function, the values of both controller health check attributes cntl_delay_time and cntl_hcheck_int must be set to a non-zero value. For example, to enable the device controller health check function, issue the following command to set cntl_delay_time and cntl_hcheck_int to non-zero values:
chdev -l hdiskX -a cntl_delay_time=60 -a cntl_hcheck_int=3

Multipath SAN boot support

This section describes how to configure supported storage system MPIO devices as the SAN boot device.

Attention: The sddpcm default reserve policy is no_reserve and the default path selection algorithm is load_balance. If the SAN boot devices are configured with the default reserve policy and path selection algorithm, you must not configure these SAN boot devices on other AIX servers. This is because no scsi-2 reserve or scsi-3 persistent reserve is implemented on these SAN boot devices to prevent access from another AIX server.
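Given the Attention above, it can be worth confirming what a candidate SAN boot disk's reserve policy and algorithm actually are. On AIX these values come from lsattr -El hdiskX; the sketch below parses simulated lsattr-style output (sample data for a hypothetical disk) to show the two attributes in question.

```shell
#!/bin/sh
# Simulated output of: lsattr -El hdiskX  (only the two relevant rows)
sample_lsattr() {
cat <<'EOF'
algorithm      load_balance Algorithm              True
reserve_policy no_reserve   Reserve Policy         True
EOF
}

# Pull the current value of each attribute from the attribute table.
policy=$(sample_lsattr | awk '$1 == "reserve_policy" {print $2}')
algorithm=$(sample_lsattr | awk '$1 == "algorithm" {print $2}')
echo "reserve_policy=$policy algorithm=$algorithm"
```

If both values match the SDDPCM defaults shown above, the disk must not be configured on any other AIX server.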

Configuring supported storage system MPIO devices as the SAN boot device

A supported storage MPIO device can be used as the system boot device. To configure the supported storage device boot device with the SDDPCM module:
1. Select one or more supported storage system devices as the boot device.
2. Install one of the following AIX operating systems on the selected supported storage devices:
v If the selected supported storage device is ESS, the required operating system is AIX 5.1 TL03 (or later), AIX 5.2 TL06 (or later), AIX 5.3 TL03 (or later), or AIX 6.1 TL0 (or later).
v If the selected supported storage device is DS6000, the required operating system is AIX 5.2 TL07 (or later), AIX 5.3 TL03 (or later), or AIX 6.1 TL0 (or later).
v If the selected supported storage device is DS8000, the required operating system is AIX 5.2 TL07 (or later), AIX 5.3 TL03 (or later), or AIX 6.1 TL0 (or later).
v If the selected supported storage device is SAN Volume Controller, the required operating system is AIX 5.2 TL07 (or later), AIX 5.3 TL03 (or later), or AIX 6.1 TL0 (or later).
v If the selected supported storage device is DS4000, DS5000, or DS3950, the required operating system is AIX53 TL8 (or later) or AIX61 TL2 (or later).
v If the selected supported storage device is RSSM, the required operating system is AIX 6.1 (or later). Refer to the following URL for RSSM documentation with instructions on how to install the AIX operating system on a JS blade with remote volumes:
http://www.ibm.com/systems/support/supportsite.wss/docdisplay?lndocid=MIGR-5078491&brandind=5000020
3. Restart the system. The supported storage boot device is configured as an MPIO device with the AIX default PCM.
Note: For IBM DS4000 storage devices, if the OS is AIX53, or if the system was migrated from AIX53 to AIX61 (or later), the DS4000 devices are configured by the AIX FCPArray(RDAC) device driver instead of the AIX native MPIO PCM.
4. Install the supported storage device host attachment for SDDPCM and the SDDPCM packages.
5. Restart the system. All supported storage MPIO devices, including supported storage MPIO SAN boot devices, are now configured with SDDPCM.

When you convert a boot device with SDDPCM v2.2.0.0 and earlier from the AIX default PCM to SDDPCM, you might encounter a problem where either all paths of the boot device cannot be opened successfully or they can be opened but cannot be used for I/O. This problem occurs because the AIX default PCM has a default reserve policy of single-path (SCSI-2). To release the scsi-2 reserve on boot devices with SDDPCM v2.2.0.0 and earlier, you must issue the relbootrsv command. If you fail to do so, you might encounter this problem. For boot devices with SDDPCM v2.2.0.1 or later, the scsi-2 reserve is automatically released during the system reboot. If you want to release non-rootvg scsi-2 reserves, provide the volume group name as a parameter. For example: relbootrsv vgname.

See "Migrating the supported storage SAN boot device or nonboot volume group from AIX default PCM to SDDPCM" on page 117 for information about solving this problem.
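The relbootrsv decision above hinges on the SDDPCM level: v2.2.0.0 and earlier require the manual command, while later levels release the scsi-2 reserve automatically at reboot. The version comparison below is an illustrative helper only (the cut-off value comes from the text; the function itself is not an SDDPCM tool).

```shell
#!/bin/sh
# Print "yes" if the given SDDPCM level (a.b.c.d) is 2.2.0.0 or earlier,
# meaning relbootrsv must be run manually; otherwise print "no".
needs_relbootrsv() {
    awk -v v="$1" 'BEGIN {
        split(v, a, ".")
        split("2.2.0.0", b, ".")
        for (i = 1; i <= 4; i++) {
            if (a[i] + 0 < b[i] + 0) { print "yes"; exit }
            if (a[i] + 0 > b[i] + 0) { print "no"; exit }
        }
        print "yes"    # exactly 2.2.0.0 still requires the command
    }'
}

needs_relbootrsv 2.1.3.0
needs_relbootrsv 2.6.0.1
```

Comparing the four dotted fields numerically, rather than as strings, avoids misordering levels such as 2.10.0.0 versus 2.2.0.0.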

There is a known problem during the SAN boot configuration. After the system is restarted followed by the operating system installation on the supported storage MPIO devices, you might see that some paths of the rootvg are in Failed path state. This can happen even if the system is successfully restarted.

This problem is corrected in AIX 5.2 TL08 or later and AIX 5.3 TL04 or later. Apply the following APARs on these OS levels after the first reboot followed by the operating system installation:
v AIX 5.2 TL08 or later: apply APAR IY83717
v AIX 5.3 TL04 or later: apply APAR IY83847
No APAR is available to correct this problem on AIX52 TL07 and AIX53 TL03. If you configure a SAN boot device with supported storage MPIO devices on one of these operating system levels and experience this problem, you can manually recover these failed paths by issuing one of the following commands:
v chpath -s E -l hdiskX -p fscsiX
v pcmpath set device M path N online

Support system dump device with the supported storage system MPIO device

You can choose a supported storage MPIO device to configure with the system primary and secondary dump devices. You can configure the system dump device with the supported SAN boot device, or with the nonboot device. The path selection algorithm for the system dump device automatically defaults to failover_only when the system dump starts. During the system dump, only one path is selected for dump requests. If the first path fails, I/O is routed to the next path being selected. You must apply all the APARs for AIX 5.2 TL08 and later, or AIX 5.3 TL04 and later.

Dynamically enabling and disabling paths or adapters

This section describes how to dynamically enable and disable paths and adapters.

Dynamically enabling or disabling a path
There are three ways to dynamically enable (place online) or disable (place offline) a path:
1. Use the following pcmpath commands to change the path state:
pcmpath set device M path N online
or
pcmpath set device M path N offline
2. Use the path control commands provided by AIX. AIX 5.2 TL04 (or later) or AIX 5.3 TL01 (or later) provides several new path control commands. These commands can be used to configure or remove paths, change path state (enable or disable), and display the path current state. Use the following AIX path command to change the path state of the fibre-channel or SAS-connected path:

Chapter 3. Using SDDPCM on an AIX host system 135

chpath -l hdiskX -s E|D -p fscsiX -w "5005076300c99b0a,5200000000000000"
or
chpath -l hdisk1 -s E|D -p sasX -w 5005076b074012a0,1000000000000
Note: To find the parent adapter and connect location of a path, use the following AIX path command:
lspath -l hdiskX -H -F "name path_id parent connection"
3. Use the smitty MPIO management submenu.
a. Enter smitty MPIO and press Enter. This displays the MPIO Management panel.
b. Select MPIO Path Management and press Enter. This displays the MPIO Path Management panel.
c. Select Enable paths or Disable paths to enable or disable paths.
Notes:
1. If you issue the pcmpath set device M path N offline command to a DS4000 or a DS5000 device and the path to be set offline is the last active path, it causes the device to fail over to the alternate controller. This is a time-consuming action, and might take a few seconds to complete. Thus, after the pcmpath set device M path N offline command returns, the device might not have completed the controller failover yet. You must issue the pcmpath query device M command to check the result and to ensure that this set offline command completed successfully. If the command completed successfully, you can see that the active paths and passive paths are swapped. This is an asynchronous event.
2. For Open HyperSwap devices, when the device is opened with all paths to the source volume offline, an attempt to set the last active path of the source volume offline by using the pcmpath set device M path N offline command will be rejected. However, when the device is closed, it's allowed to set the last active path of the source volume offline. When the device is opened with all paths to the source volume offline, an unplanned HyperSwap will be triggered.

Dynamically enabling or disabling an adapter

The SDDPCM pcmpath command can be used to enable (place online) or disable (place offline) an adapter. To disable an adapter, use the following command:
pcmpath set adapter N offline
Notes:
1. Beginning with SDDPCM version 2.4.0.0, storage devices DS4000 and DS5000 are supported. These are active/passive storage devices, and behave differently than ESS, DS6000, DS8000, RSSM, and SAN Volume Controller storage devices. To support the new devices, the syntax of the pcmpath set adapter N online|offline command changed by adding the extra parameter aa|ap. To set the adapter that connects to ESS, DS6000, DS8000, RSSM, or SAN Volume Controller storage devices to online or offline, enter the aa parameter. To set the adapter that connects to DS4000, DS5000, or DS3950 storage devices to online or offline, enter the ap parameter.
2. SDDPCM reserves the last path of a device. This command will fail if there is any device using the last path attached to this adapter.

3. For DS4000 and DS5000 storage devices, this set adapter offline command might cause multiple devices to fail over to an alternate controller if the device path that is attached to this adapter is the last active path. Furthermore, if some devices are owned by controller A and some devices are owned by controller B, the set adapter offline command might cause some devices to fail over from controller A to controller B and some devices to fail over from controller B to controller A. It is possible that the pcmpath set adapter M offline command returns a failure message, but some number of devices are failed over successfully and some devices are not. You must issue the pcmpath query device command to find out if the command is fully completed or partially completed.
4. For Open HyperSwap devices, when the device is open, an attempt to set the last active path of the source volume offline by the pcmpath set adapter command will be rejected. However, when the device is closed, it's allowed to set the last path of the source volume offline. When the device is opened with all paths to the source volume offline, an unplanned HyperSwap will be triggered.
Attention: If the AIX server has more than one DS4000, DS5000, or DS3950 device model configured, the pcmpath set adapter N offline command might cause multiple device models to fail over to the alternate controllers. If this is the case, the pcmpath set adapter N offline command is not supported.

Using the SDDPCM trace function

SDDPCM supports the AIX trace function. The SDDPCM trace ID is 5A7. Trace ID 5A7 traces routine entry, exit, and error paths of the algorithm. To use it, manually turn on the trace function before the program starts to run, then turn off the trace function either after the program stops, or any time you need to read the trace report.
To start the trace function, enter:
trace -a -j 5A7
To stop the trace function, enter:
trcstop
To read the report, enter:
trcrpt | pg
To save the trace data to a file, enter:
trcrpt > filename
Note: To perform the SDDPCM trace function, you must have the bos.sysmgt.trace installation package installed on your system.

SDDPCM server daemon

The SDDPCM server (also referred to as pcmsrv) is an integrated component of SDDPCM 2.1.0.0 (or later). This component consists of a UNIX application daemon that is installed in addition to the SDDPCM path control module. The SDDPCM server daemon provides path-recovery function for SDDPCM devices and the First Time Data Capture function.

For detailed information about the SDDPCM server daemon, see Chapter 12, "Using the SDD server and the SDDPCM server," on page 415.

Verifying if the SDDPCM server has started

After you have installed SDDPCM and restarted the system, verify if the SDDPCM server (pcmsrv) has automatically started by entering lssrc -s pcmsrv. If pcmsrv has automatically started, the output from the lssrc -s pcmsrv command looks like this:
+-----------------------------------------------------------------+
| Subsystem         GROUP            PID          Status          |
| pcmsrv                             NNN          Active          |
+-----------------------------------------------------------------+
where NNN is the process ID number. The status of pcmsrv should be Active if the SDDPCM server has automatically started. If the SDDPCM server has not started, the status will be Inoperative. Go to "Starting the SDDPCM server manually" to proceed.
Because pcmsrv is bound to the SDDPCM kernel extension module, pcmsrv can fail to start if the SDDPCM is installed and the supported storage MPIO devices have not been configured yet. In this case, you can either restart the system or you can start pcmsrv manually after supported storage MPIO devices are configured.
Because pcmsrv is bound to the SDDPCM kernel extension module, in order to uninstall or upgrade SDDPCM, you must stop pcmsrv so that the SDDPCM kernel extension can be unloaded from the system. During an upgrade, the new SDDPCM kernel extension can be loaded into the system when supported storage MPIO devices are configured.

Starting the SDDPCM server manually

If pcmsrv did not start automatically after you performed the SDDPCM installation and configured the supported storage devices, you can start pcmsrv manually.
For SDDPCM 3.0.0.0 or later releases, you can start pcmsrv by entering:
startpcmsrv
For SDDPCM 2.6.0.1 or prior releases, you can start pcmsrv by entering:
startsrc -s pcmsrv -e XPG_SUS_ENV=ON
Go to "Verifying if the SDDPCM server has started" to see if you successfully started the SDDPCM server.

Stopping the SDDPCM server

You can manually stop pcmsrv. For SDDPCM 3.0.0.0 or later releases, you can permanently stop pcmsrv by entering:
stoppcmsrv
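Where scripting is preferred over reading the lssrc output by eye, the Active/Inoperative check described above can be automated. The following is a minimal sketch in portable shell; the helper name is ours (not part of SDDPCM), and it assumes lssrc-style columns like the sample output:

```shell
# Illustrative helper (not part of SDDPCM): read `lssrc -s pcmsrv`
# style output on stdin and succeed only if the Status column of the
# pcmsrv subsystem line is "Active".
pcmsrv_is_active() {
  awk '$1 == "pcmsrv" { status = $NF } END { exit status == "Active" ? 0 : 1 }'
}

# Example with output shaped like the sample above:
if printf 'Subsystem GROUP PID Status\npcmsrv  123 Active\n' | pcmsrv_is_active; then
  echo "pcmsrv is running"
else
  echo "pcmsrv is inoperative - see Starting the SDDPCM server manually"
fi
```

On a live AIX system the input would come from `lssrc -s pcmsrv | pcmsrv_is_active`; the sketch only parses the documented output shape and does not query anything itself.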
For SDDPCM 2.6.0.1 or earlier releases, you can temporarily disable pcmsrv by entering: stopsrc -s pcmsrv. The stopsrc -s pcmsrv command will stop the current version of pcmsrv, but pcmsrv will start again if the system is restarted.
In certain cases, when you want to permanently disable pcmsrv on SDDPCM 2.6.0.1 or earlier releases, so that it does not start even after a system restart, you must remove the following line in the system init table (/etc/inittab):
srv:2:wait:/usr/bin/startsrc -s pcmsrv -e XPG_SUS_ENV=ON > /dev/null 2>&1
Attention: The SDDPCM server (pcmsrv) recovers failed paths for devices that have disabled the SDDPCM healthcheck function. Furthermore, the SDDPCM server provides the First Time Data Capture function, which saves an SDDPCM internal trace into a log file. You should not stop pcmsrv permanently.
See Chapter 12, "Using the SDD server and the SDDPCM server," on page 415 for information about the following tasks:
v Enabling or disabling a TCP/IP port for SDDPCM server ("Enabling or disabling the sddsrv or pcmsrv TCP/IP port" on page 419)
v Changing to a different port number for the SDDPCM server ("Changing the sddsrv or pcmsrv TCP/IP port number" on page 419)

AE daemon

For SDDPCM 3.0.0.0 or later releases, a UNIX® application daemon, AE server, is added to the SDDPCM path control module. The AE server daemon interfaces with the TPC-R server and the SDDPCM kernel driver to provide Open HyperSwap functionality.

Verifying if the AE server has started

After you have installed SDDPCM and restarted the system, verify if the AE server has automatically started by entering lssrc -s AE. If AE has automatically started, the output from the lssrc -s AE command looks like this:
+-----------------------------------------------------------------+
Subsystem         Group            PID          Status
AE                                 NNN          active
+-----------------------------------------------------------------+
where NNN is the process ID number. The status of AE should be Active if the AE server has automatically started. If the AE server has not started, the status will be Inoperative. Go to "Starting the AE server manually" to proceed.

Starting the AE server manually

If the AE server did not start automatically after you performed the SDDPCM installation, you can start AE by entering:
startAE
Go to "Verifying if the AE server has started" to see if you successfully started the AE server.
Stopping the AE server manually

You can temporarily disable AE by entering:
stopAE
Attention: Don't manually stop AE while the application is running and there are devices configured. AE is an integral part of the Open HyperSwap functionality. If AE is not running, Open HyperSwap functionality is not available. It is important to ensure that AE is running.

SDDPCM utility programs

This section describes the persistent reserve command tools, the SDDPCM pcmpath commands, and a tool to query DS4000 and DS5000 storage subsystem and MPIO device configuration information.

Persistent reserve command tools

Starting with SDDPCM 2.1.0.0, SDDPCM supports two persistent reserve command tools. The following sections describe the persistent reserve command tools.

pcmquerypr

The pcmquerypr command provides a set of persistent reserve functions. This command supports the following persistent reserve service actions:
v Read persistent reservation key
v Release persistent reserve
v Preempt-abort persistent reserve
v Clear persistent reserve and registration keys
This command can be issued to all system MPIO devices, including MPIO devices not supported by SDDPCM.
The pcmquerypr command can be used in the following situation: the reserve policy of the SDDPCM MPIO devices is set to either persistent reserve exclusive host access (PR_exclusive) or persistent reserve shared host access (PR_shared), and persistent reserve has been left on the device by a node, blocking access by another node. The pcmquerypr command can be used in this situation to query, preempt, or clear the persistent reserve left by a node or server on the devices.
There are more cases when you might need this tool to solve persistent reserve related problems, such as unexpected persistent reserve left on devices because of failing to release the persistent reserve. Caution must be taken with the command, especially when implementing preempt-abort or clear persistent reserve service action. With preempt-abort service action, not only is the current persistent reserve key preempted; it also aborts tasks on the LUN that originated from the initiators that are registered with the preempted key. With clear service action, both persistent reservation and reservation key registrations are cleared from the device.
The following information describes in detail the syntax and examples of the pcmquerypr command.
Description
The pcmquerypr command implements certain SCSI-3 persistent reservation commands on a device. The device can be a supported storage MPIO device. This command supports persistent reserve IN and OUT service actions, such as read reservation key, release persistent reservation, preempt-abort persistent reservation, or clear persistent reservation and reservation key registrations.
Syntax
pcmquerypr -p -c -r -v -V -h /dev/PVname
Flags:
-p If the persistent reservation key on the device is different from the current host reservation key, the existing persistent reservation key on the device is preempted. This option can be issued only when the device is not already open.
-c If there is a persistent reservation on the device, the persistent reservation is removed and all reservation key registrations on the device are cleared. This option can be issued only when the device is not already open.
-r Removes the persistent reservation key on the device made by this host. This option can be issued only when the device is not already open.
-v Displays the persistent reservation key if it exists on the device.
-V Verbose mode. Prints detailed message.
Return code
If the command is issued without the options -p, -r or -c, the command will return:
0 There is no persistent reservation key on the device, or the device is reserved by the current host
1 The persistent reservation key is different from the host reservation key
2 The command failed.
If the command is issued with one of the options -p, -r or -c, it returns:
0 The command was successful.
2 The command failed.
Examples
1. To query the persistent reservation on a device, enter pcmquerypr -h /dev/hdisk30. This command queries the persistent reservation on the device without displaying it. If there is a persistent reserve on a disk, it returns 0 if the device is reserved by the current host. It returns 1 if the device is reserved by another host.
2. To query and display the persistent reservation on a device, enter pcmquerypr -vh /dev/hdisk30. Same as Example 1. In addition, it displays the persistent reservation key.
3. To query and display which type of persistent reservation is on a device, enter pcmquerypr -Vh /dev/hdisk#.
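The query-mode return codes above lend themselves to a small wrapper when pcmquerypr is driven from a script. The following is a sketch; the function name is ours, not part of SDDPCM:

```shell
# Sketch (ours): translate the pcmquerypr query-mode return codes
# documented above (no -p/-r/-c option given) into text.
explain_pcmquerypr_rc() {
  case "$1" in
    0) echo "no persistent reservation, or reserved by this host" ;;
    1) echo "reservation key differs from this host's key" ;;
    2) echo "command failed" ;;
    *) echo "unexpected return code: $1" ;;
  esac
}

# Typical use on AIX: pcmquerypr -h /dev/hdisk30; explain_pcmquerypr_rc $?
explain_pcmquerypr_rc 1
```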
The following output indicates there is SCSI-2 reserve on the device:
# pcmquerypr -Vh /dev/hdisk27
connection type: fscsi3
open dev: /dev/hdisk27
Attempt to read reservation key...
*> ioctl(PR_READ) error, errno = 5 (I/O error)
*> status_validity=0x1, scsi_bus_status=0x2
Attempt to read reservation key...
*> ioctl(PR_READ) error, errno = 5 (I/O error)
*> status_validity=0x1, scsi_bus_status=0x18
Attempt to read reservation key...
*> ioctl(PR_READ) error, errno = 5 (I/O error)
*> status_validity=0x1, scsi_bus_status=0x18
Attempt to read reservation key...
*> ioctl(PR_READ) error, errno = 5 (I/O error)
*> status_validity=0x1, scsi_bus_status=0x2

The following output indicates that there is SCSI-3 reserve on the device:
# pcmquerypr -Vh /dev/hdisk43
connection type: fscsi0
open dev: /dev/hdisk43
Attempt to read reservation key...
Attempt to read registration keys...
Read Keys parameter
Generation : 12
Additional Length: 32
Key0 : 0x3236303232344446
Key1 : 0x3236303232344446
Key2 : 0x3236303232344446
Key3 : 0x3236303232344446
resrvpolicy= no_reserve
Reserve Key provided by current host = none (hex)0924ffff
Reserve Key on the device: 0x3236303232344446
Reservation key type: 0x6
Device is reserved by SDD device.
4. To release the persistent reservation if the device is reserved by the current host, enter pcmquerypr -rh /dev/hdisk30. This command releases the persistent reserve if the device is reserved by the current host. It returns 0 if the command succeeds or the device is not reserved. It returns 2 if the command fails.
5. To reset any persistent reserve and clear all reservation key registrations, enter pcmquerypr -ch /dev/hdisk30. This command resets any persistent reserve and clears all reservation key registrations on a device. It returns 0 if the command succeeds, or 2 if the command fails.
6. To remove the persistent reservation if the device is reserved by another host, enter pcmquerypr -ph /dev/hdisk30. This command removes an existing registration and persistent reserve from another host. It returns 0 if the command succeeds or if the device is not persistent reserved. It returns 2 if the command fails.
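The two listings above differ in a recognizable way: a SCSI-2 reserve shows repeated PR_READ I/O errors, while a SCSI-3 reserve returns a "Reserve Key on the device" line. A rough classifier over saved pcmquerypr -Vh output can exploit this; the helper name and heuristic are ours, a sketch only, not an SDDPCM tool:

```shell
# Rough sketch (ours): classify saved `pcmquerypr -Vh` output on stdin.
# A "Reserve Key on the device" line suggests a SCSI-3 persistent
# reserve; repeated PR_READ I/O errors suggest a SCSI-2 reserve.
classify_reserve() {
  awk '
    /Reserve Key on the device/ { pr3 = 1 }
    /ioctl\(PR_READ\) error/    { err++ }
    END {
      if (pr3)          print "SCSI-3 persistent reserve"
      else if (err > 0) print "SCSI-2 reserve (PR_READ failed " err " times)"
      else              print "no reserve detected"
    }'
}

# Example with a fragment shaped like the SCSI-2 listing above:
classify_reserve <<'EOF'
Attempt to read reservation key...
*> ioctl(PR_READ) error, errno = 5 (I/O error)
*> status_validity=0x1, scsi_bus_status=0x18
EOF
```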
pcmgenprkey

Description
The pcmgenprkey command can be used to set or clear the PR_key_value ODM attribute for all SDDPCM MPIO devices. It also can be used to query and display the reservation policy of all SDDPCM MPIO devices and the persistent reserve key, if those devices have a PR key.
Syntax
pcmgenprkey -v -u -k prkeyvalue
Examples
1. To set the persistent reserve key to all SDDPCM MPIO devices with a provided key value, issue pcmgenprkey -u -k 0x1234567890abcedf. This creates a customized PR_key_value attribute with the provided key value for all SDDPCM MPIO devices, except the devices that already have the same customized PR key attribute. The provided key must contain either a decimal integer or a hexadecimal integer.
2. To clear the PR_key_value attribute from all SDDPCM MPIO devices, issue pcmgenprkey -u -k none.
3. To update the customized PR_key_value attribute with the HACMP-provided Preserve key or the output string from the uname command for all the SDDPCM MPIO devices, issue pcmgenprkey -u. When the -u option is used without the -k option, this command searches for the HACMP-provided Preservekey attribute and uses that value as the PR key if that attribute is available; otherwise, it uses the output string from the uname command as the PR key.
4. To display the reserve_policy, the PR_key_value attribute, and the persistent reserve key attribute of all the SDDPCM devices, issue pcmgenprkey -v. If the MPIO device does not have a persistent reserve key, a value of none is displayed.

sddpcm_get_config

Description
The sddpcm_get_config command can be used to display information about MPIO-based DS4000 or DS5000 subsystems and the hdisks associated with them. Specifically, it displays information about the frame (subsystem), including the frame's assigned name and worldwide name, and a list of hdisks (only those currently in the Available state) that are associated with that subsystem, including the hdisk name, LUN number, current ownership, preferred path, and the user-assigned label for that volume.
Attention: To protect the configuration database, the sddpcm_get_config command is not interruptible, because stopping this command before it completes could result in a corrupted database.
Syntax
sddpcm_get_config -v -l hdiskN -A
Flags
-l hdiskN List information for the subsystem which includes hdiskN.
-A List information for all attached subsystems.
-V List additional information, largely of limited value, including the MPIO SDDPCM internal frame number, number of controllers, partition number, and partition count.
Example
Display information about the frame of which hdisk4 is a member.
sddpcm_get_config -l hdisk4
The system displays a message similar to the following:
Storage Subsystem Name = 'SDD_DS5300'
hdisk      LUN #   Ownership        User Label
hdisk34    0       B (preferred)    ocotillop2_1
hdisk35    1       A (preferred)    ocotillop2_2
hdisk36    2       B (preferred)    ocotillop2_3
hdisk37    3       A (preferred)    ocotillop2_4
-----------------------------------------------------------------------

Using SDDPCM pcmpath commands

SDDPCM provides commands that you can use to display the status of adapters that are used to access managed devices, to display the status of devices that the device driver manages, or to map supported storage MPIO devices or paths to a supported storage device location. You can also set individual path conditions either to online or offline, set all paths that are connected to an adapter either to online or offline, or set all paths that are connected to a supported storage device port or ports to online or offline. This section includes descriptions of these commands. Table 14 provides an alphabetical list of these commands, a brief description, and where to go in this chapter for more information.

Table 14. Commands
Command                          Description                                                                 Page
pcmpath clear device count       Dynamically clears the error count or error/select counts to zero.          146
pcmpath disable ports            Places paths connected to certain ports offline.                            147
pcmpath enable ports             Places paths connected to certain ports online.                             147
pcmpath open device path         Opens an INVALID path.                                                      151
pcmpath query adapter            Displays information about adapters.                                        153
pcmpath query adaptstats         Displays performance information for all FCS adapters that are attached
                                 to SDDPCM devices.                                                          155
Table 14. Commands (continued)
Command                            Description                                                               Page
pcmpath query device               Displays information about devices.                                       157
pcmpath query devstats             Displays performance information for a single SDDPCM device or all
                                   SDDPCM devices.                                                           163
pcmpath query essmap               Displays each device, path, location, and attributes.                     165
pcmpath query port                 Displays information about a single target port or all target ports
                                   that are attached to SDDPCM-configured MPIO devices.                      166
pcmpath query portmap              Displays the status of the logic paths that are managed by SDDPCM
                                   between the host and the storage ports.                                   168
pcmpath query portstats            Displays performance information about a single target port or all
                                   target ports that are attached to SDDPCM-configured MPIO devices.         169
pcmpath query session              Displays the session of the Open HyperSwap devices configured on
                                   the host.                                                                 171
pcmpath query version              Displays the version of the currently installed SDDPCM.                   172
pcmpath query wwpn                 Displays the world wide port name (WWPN) for all fibre-channel
                                   adapters.                                                                 173
pcmpath set adapter                Sets all device paths that are attached to an adapter to online or
                                   offline.                                                                  174
pcmpath set device path            Sets the path of a device to online or offline.                           181
pcmpath set device algorithm       Set all or some of supported storage MPIO device path selection
                                   algorithm                                                                 176
pcmpath set device hc_interval     Set all or some of supported storage MPIO device health check time
                                   interval                                                                  177
pcmpath set device hc_mode         Set all or some of supported storage MPIO device health check mode        178
pcmpath set device cntlhc_interval Set all or some of supported active/passive MPIO device controller
                                   health check time interval                                                179
pcmpath set device cntlch_delay    Set all or some of supported active/passive MPIO device controller
                                   health check delay_time                                                   180
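The sddpcm_get_config listing shown earlier (Storage Subsystem Name, hdisk, LUN #, Ownership, User Label columns) is easy to post-process when checking controller balance. The following is an illustrative parse under that column-layout assumption; the helper name is ours, not an SDDPCM tool:

```shell
# Illustrative helper (ours): from `sddpcm_get_config -l hdiskN` style
# output on stdin, print the hdisks currently owned by controller $1.
# Assumes the column layout shown in the guide's sample listing.
hdisks_owned_by() {
  awk -v ctl="$1" '$1 ~ /^hdisk/ && $3 == ctl { print $1 }'
}

# Example with the sample listing from this guide
# (prints hdisk35 and hdisk37, the controller-A volumes):
hdisks_owned_by A <<'EOF'
Storage Subsystem Name = 'SDD_DS5300'
hdisk      LUN #   Ownership        User Label
hdisk34    0       B (preferred)    ocotillop2_1
hdisk35    1       A (preferred)    ocotillop2_2
hdisk36    2       B (preferred)    ocotillop2_3
hdisk37    3       A (preferred)    ocotillop2_4
EOF
```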
pcmpath clear device count

The pcmpath clear device count command dynamically sets the error counter or error/select counters to zero.
Syntax:
pcmpath clear device number 1 device number 2 count error all
Parameters:
device number 1 <device number 2> When two device numbers are entered, this command applies to all the devices whose index numbers fit within the range of these two device index numbers.
error Clears the error counter of the specified SDDPCM MPIO device or devices.
all Clears both the select counter and the error counter of the specified SDDPCM MPIO device or devices.
Examples: If you have a non-zero select counter or error counter, entering pcmpath query device 20 causes the following output to be displayed:
DEV#: 20 DEVICE NAME: hdisk20 TYPE: 2145 ALGORITHM: Load Balance
SERIAL: 60050768018180235800000000000463
===========================================================================
Path#      Adapter/Path Name     State     Mode      Select     Errors
0          fscsi1/path0          CLOSE     NORMAL    14         0
1*         fscsi1/path1          CLOSE     NORMAL    8          0
2          fscsi3/path2          CLOSE     NORMAL    10009      0
3*         fscsi3/path3          CLOSE     NORMAL    8          0
If you enter the pcmpath clear device 20 count all and then enter pcmpath query device 20, the following output is displayed:
DEV#: 20 DEVICE NAME: hdisk20 TYPE: 2145 ALGORITHM: Load Balance
SERIAL: 60050768018180235800000000000463
===========================================================================
Path#      Adapter/Path Name     State     Mode      Select     Errors
0          fscsi1/path0          CLOSE     NORMAL    0          0
1*         fscsi1/path1          CLOSE     NORMAL    0          0
2          fscsi3/path2          CLOSE     NORMAL    0          0
3*         fscsi3/path3          CLOSE     NORMAL    0          0
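A zeroed Errors column is the expected result after a clear; when auditing many devices, the non-zero rows are the interesting ones. The following is an illustrative filter over saved pcmpath query device output; the helper name is ours, and it assumes the column layout shown above (path rows start with a path number, optionally starred, and end with the Errors count):

```shell
# Illustrative filter (ours): from `pcmpath query device` output on
# stdin, print the Adapter/Path Name of every path row whose Errors
# column (last field) is non-zero.
paths_with_errors() {
  awk '$1 ~ /^[0-9]+\*?$/ && $NF + 0 > 0 { print $2 }'
}

# Example with rows shaped like the listing above:
paths_with_errors <<'EOF'
DEV#: 20 DEVICE NAME: hdisk20 TYPE: 2145 ALGORITHM: Load Balance
Path#   Adapter/Path Name  State  Mode    Select  Errors
0       fscsi1/path0       CLOSE  NORMAL  14      0
1*      fscsi1/path1       CLOSE  NORMAL  8       3
EOF
```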
pcmpath disable ports

The pcmpath disable ports command sets SDDPCM MPIO device paths offline for the specified supported storage system location code.
Syntax:
pcmpath disable ports connection ess essid
Parameters:
connection The connection code must be in one of the following formats:
v Single port = R1-Bx-Hy-Zz
v All ports on card = R1-Bx-Hy
v All ports on bay = R1-Bx
Use the output of the pcmpath query essmap command to determine the connection code.
essid The supported storage device serial number, given by the output of the pcmpath query portmap command.
Examples: If you enter the pcmpath disable ports R1-B1-H3 ess 12028 command and then enter the pcmpath query device command, the following output is displayed:
DEV#: 3 DEVICE NAME: hdisk3 TYPE: 2105E20 ALGORITHM: Load Balance
SERIAL: 20712028
===========================================================================
Path#      Adapter/Path Name     State     Mode       Select     Errors
0          fscsi0/path0          CLOSE     OFFLINE    6          0
1          fscsi0/path1          CLOSE     NORMAL     9          0
2          fscsi1/path2          CLOSE     OFFLINE    11         0
3          fscsi1/path3          CLOSE     NORMAL     9          0
DEV#: 4 DEVICE NAME: hdisk4 TYPE: 2105E20 ALGORITHM: Load Balance
SERIAL: 20712028
===========================================================================
Path#      Adapter/Path Name     State     Mode       Select     Errors
0          fscsi0/path0          CLOSE     OFFLINE    8702       0
1          fscsi0/path1          CLOSE     NORMAL     8800       0
2          fscsi1/path2          CLOSE     OFFLINE    8816       0
3          fscsi1/path3          CLOSE     NORMAL     8644       0
DEV#: 5 DEVICE NAME: hdisk5 TYPE: 2105E20 ALGORITHM: Load Balance
SERIAL: 20912028
===========================================================================
Path#      Adapter/Path Name     State     Mode       Select     Errors
0          fscsi0/path0          CLOSE     OFFLINE    8917       0
1          fscsi0/path1          CLOSE     NORMAL     8919       0
2          fscsi1/path2          CLOSE     OFFLINE    9008       0
3          fscsi1/path3          CLOSE     NORMAL     8944       0
DEV#: 6 DEVICE NAME: hdisk6 TYPE: 2105E20 ALGORITHM: Load Balance
SERIAL: 20B12028
===========================================================================
Path#      Adapter/Path Name     State     Mode       Select     Errors
0          fscsi0/path0          CLOSE     OFFLINE    9044       0
1          fscsi0/path1          CLOSE     NORMAL     9084       0
2          fscsi1/path2          CLOSE     OFFLINE    9048       0
3          fscsi1/path3          CLOSE     NORMAL     8851       0

DEV#: 7 DEVICE NAME: hdisk7 TYPE: 2105E20 ALGORITHM: Load Balance
SERIAL: 20F12028
===========================================================================
Path#      Adapter/Path Name     State     Mode       Select     Errors
0          fscsi0/path0          CLOSE     OFFLINE    9089       0
1          fscsi0/path1          CLOSE     NORMAL     9238       0
2          fscsi1/path2          CLOSE     OFFLINE    9132       0
3          fscsi1/path3          CLOSE     NORMAL     9294       0
DEV#: 8 DEVICE NAME: hdisk8 TYPE: 2105E20 ALGORITHM: Load Balance
SERIAL: 21012028
===========================================================================
Path#      Adapter/Path Name     State     Mode       Select     Errors
0          fscsi0/path0          CLOSE     OFFLINE    9059       0
1          fscsi0/path1          CLOSE     NORMAL     9121       0
2          fscsi1/path2          CLOSE     OFFLINE    9143       0
3          fscsi1/path3          CLOSE     NORMAL     9073       0
pcmpath enable ports

The pcmpath enable ports command sets MPIO device paths online for the specified supported storage device location code.
Syntax:
pcmpath enable ports connection ess essid
Parameters:
connection The connection code must be in one of the following formats:
v Single port = R1-Bx-Hy-Zz
v All ports on card = R1-Bx-Hy
v All ports on bay = R1-Bx
Use the output of the pcmpath query essmap command to determine the connection code.
essid The supported storage device serial number, given by the output of the pcmpath query portmap command.
Examples: If you enter the pcmpath enable ports R1-B1-H3 ess 12028 command and then enter the pcmpath query device command, the following output is displayed:
DEV#: 3 DEVICE NAME: hdisk3 TYPE: 2105E20 ALGORITHM: Load Balance
SERIAL: 20112028
===========================================================================
Path#      Adapter/Path Name     State     Mode      Select     Errors
0          fscsi0/path0          CLOSE     NORMAL    6          0
1          fscsi0/path1          CLOSE     NORMAL    9          0
2          fscsi1/path2          CLOSE     NORMAL    11         0
3          fscsi1/path3          CLOSE     NORMAL    9          0
DEV#: 4 DEVICE NAME: hdisk4 TYPE: 2105E20 ALGORITHM: Load Balance
SERIAL: 20712028
===========================================================================
Path#      Adapter/Path Name     State     Mode      Select     Errors
0          fscsi0/path0          CLOSE     NORMAL    8702       0
1          fscsi0/path1          CLOSE     NORMAL    8800       0
2          fscsi1/path2          CLOSE     NORMAL    8816       0
3          fscsi1/path3          CLOSE     NORMAL    8644       0
DEV#: 5 DEVICE NAME: hdisk5 TYPE: 2105E20 ALGORITHM: Load Balance
SERIAL: 20912028
===========================================================================
Path#      Adapter/Path Name     State     Mode      Select     Errors
0          fscsi0/path0          CLOSE     NORMAL    8917       0
1          fscsi0/path1          CLOSE     NORMAL    8919       0
2          fscsi1/path2          CLOSE     NORMAL    9008       0
3          fscsi1/path3          CLOSE     NORMAL    8944       0
DEV#: 6 DEVICE NAME: hdisk6 TYPE: 2105E20 ALGORITHM: Load Balance
SERIAL: 20B12028
===========================================================================
Path#      Adapter/Path Name     State     Mode      Select     Errors
0          fscsi0/path0          CLOSE     NORMAL    9044       0
1          fscsi0/path1          CLOSE     NORMAL    9084       0
2          fscsi1/path2          CLOSE     NORMAL    9048       0
3          fscsi1/path3          CLOSE     NORMAL    8851       0

DEV#: 7 DEVICE NAME: hdisk7 TYPE: 2105E20 ALGORITHM: Load Balance SERIAL: 20F12028 =========================================================================== Path# Adapter/Path Name State Mode Select Errors 0 fscsi0/path0 CLOSE NORMAL 9089 0 1 fscsi0/path1 CLOSE NORMAL 9238 0 2 fscsi1/path2 CLOSE NORMAL 9132 0 3 fscsi1/path3 CLOSE NORMAL 9294 0 DEV#: 8 DEVICE NAME: hdisk8 TYPE: 2105E20 ALGORITHM: Load Balance SERIAL: 21012028 =========================================================================== Path# Adapter/Path Name State Mode Select Errors 0 fscsi0/path0 CLOSE NORMAL 9059 0 1 fscsi0/path1 CLOSE NORMAL 9121 0 2 fscsi1/path2 CLOSE NORMAL 9143 0 3 fscsi1/path3 CLOSE NORMAL 9073 0 150 Multipath Subsystem Device Driver User's Guide .
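Because pcmpath enable ports prints no summary of its own, it is common to re-run pcmpath query device afterward and confirm that no path is left in OFFLINE mode, as in the output above. The helper below is a sketch only: the count_offline_paths name is hypothetical, and the awk column positions are assumptions taken from the example layout above; pcmpath itself exists only on the AIX host.

```shell
#!/bin/sh
# Sketch: read "pcmpath query device" output on stdin and print how many
# paths are still in OFFLINE mode. Path rows start with a numeric path
# index (optionally followed by *), with Mode in the fourth column.
count_offline_paths() {
    awk '$1 ~ /^[0-9]+\*?$/ && $4 == "OFFLINE" { n++ } END { print n + 0 }'
}

# On the AIX host you might use it as:
#   pcmpath enable ports R1-B1-H3 ess 12028
#   pcmpath query device | count_offline_paths
```

A result of 0 indicates that every configured path is back in NORMAL mode.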

pcmpath open device path

The pcmpath open device path command dynamically opens a path that is in
the invalid state. You can use this command to open an invalid path even
when I/O is actively running on the devices.

Syntax:

   pcmpath open device device number path path number

Parameters:

device number
   The logical device number of this hdisk, as displayed by the pcmpath
   query device command.

path number
   The path ID that you want to change, as displayed under "Path Name" by
   the pcmpath query device command.

Examples: If you enter the pcmpath query device 23 command, the following
output is displayed:

DEV#:  23  DEVICE NAME: hdisk23  TYPE: 2105E20  ALGORITHM: Load Balance
SERIAL: 20112028
===========================================================================
Path#      Adapter/Path Name   State     Mode      Select   Errors
    0           fscsi1/path0   OPEN      NORMAL       557        0
    1           fscsi1/path1   OPEN      NORMAL       568        0
    2           fscsi0/path2   INVALID   NORMAL         0        0
    3           fscsi0/path3   INVALID   NORMAL         0        0

Note that the current state of path 2 and path 3 is INVALID, which means
that open path 2 and path 3 failed. If the root cause of the path 2 open
failure is fixed and you enter the pcmpath open device 23 path 2 command,
the following output is displayed:

Success: device 23 path 2 opened

DEV#:  23  DEVICE NAME: hdisk23  TYPE: 2105E20  ALGORITHM: Load Balance
SERIAL: 20112028
===========================================================================
Path#      Adapter/Path Name   State     Mode      Select   Errors
    0           fscsi1/path0   OPEN      NORMAL       557        0
    1           fscsi1/path1   OPEN      NORMAL       568        0
    2           fscsi0/path2   OPEN      NORMAL         0        0
    3           fscsi0/path3   INVALID   NORMAL         0        0

After issuing the pcmpath open device 23 path 2 command, the state of
path 2 becomes OPEN.

The terms used in the output are defined as follows:

Dev#  The logical device number of this hdisk.

Device name
      The name of this device.

Type  The device product ID from inquiry data.

Algorithm
      The current path selection algorithm for the device. The algorithm
      selected is one of the following types: load balancing, load
      balancing port, round robin, or failover.

Serial
      The LUN for this device.

Path# The path index displayed by the pcmpath query device command.

Adapter
      The name of the adapter to which the path is attached.

Path Name
      The name of the path. The number displayed as part of the name is
      the path ID of this path that is used by the pcmpath open device
      path and pcmpath set device path commands.

State The condition of each path of the named device:
      Open          Path is in use.
      Close         Path is not being used.
      Close_Failed  Path is broken and is not being used.
      Failed        Path is opened, but no longer functional because of
                    error.
      Invalid       The path failed to open.

Mode  The mode of the named path, which is either Normal or Offline.

Select
      The number of times this path was selected for I/O.

Errors
      The number of I/O errors that occurred on this path.
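Because an invalid path can be reopened while I/O continues, turning the query output into the matching open commands can save typing when several paths are invalid at once. The following is a sketch under the assumption that the DEV#: header and path columns look exactly like the examples above; list_invalid_path_cmds is a hypothetical helper name, not part of SDDPCM.

```shell
#!/bin/sh
# Sketch: scan "pcmpath query device" output for paths in the INVALID
# state and print the corresponding "pcmpath open device N path M"
# commands. Device numbers come from the DEV#: header line, path numbers
# from the first column of each path row.
list_invalid_path_cmds() {
    awk '
        $1 == "DEV#:" { dev = $2 }                  # remember current device
        $1 ~ /^[0-9]+\*?$/ && $3 == "INVALID" {     # path row in INVALID state
            p = $1; sub(/\*/, "", p)
            printf "pcmpath open device %s path %s\n", dev, p
        }'
}

# On the AIX host (after fixing the root cause of the open failures):
#   pcmpath query device | list_invalid_path_cmds | sh
```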

pcmpath query adapter

The pcmpath query adapter command displays information about a single
adapter or all adapters that are attached to SDDPCM-configured MPIO
devices. If you do not enter an adapter index number, information about
all adapters is displayed.

Syntax:

   pcmpath query adapter adapter number aa ap

Parameters:

adapter number
   The index number of the adapter for which you want information
   displayed.

aa The adapter of active/active storage controller devices.

ap The adapter of active/passive storage controller devices.

Examples: If you enter the pcmpath query adapter command and your system
has both Dual Active or Active/Asymmetric (for example, ESS) and
Active/Passive (for example, DS4800) devices, the following output is
displayed:

Total Dual Active and Active/Asymmetric Adapters : 2

Adpt#  Name    State   Mode    Select  Errors  Paths  Active
    0  fscsi2  NORMAL  ACTIVE  920506       0     80      38
    1  fscsi0  NORMAL  ACTIVE  921100       0     80      38

Total Active/Passive Adapters : 2

Adpt#  Name    State   Mode    Select  Errors  Paths  Active
    0  fscsi0  NORMAL  ACTIVE       0       0     30       0
    1  fscsi1  NORMAL  ACTIVE       0       0     30       0

If you enter the pcmpath query adapter command on a host with RSSM LUNs,
the following output is displayed:

Total Dual Active and Active/Asymmetric Adapters : 1

Adpt#  Name  State   Mode    Select   Errors  Paths  Active
    0  sas1  NORMAL  ACTIVE  1680197       0     32      32

The terms used in the output are defined as follows:

Adpt # The index number of the adapter.

Name   The name of the adapter.

State  The condition of the named adapter. It can be either:
       Normal   Adapter is in use.
       Degraded One or more opened paths are not functioning.
       Failed   All opened paths that are attached to this adapter are
                not functioning.

Mode   The mode of the named adapter, which is either Active or Offline.

Select The number of times this adapter was selected for I/O.

Errors The number of errors that occurred on all paths that are attached
       to this adapter.

Paths  The number of paths that are attached to this adapter.

Active The number of functional paths that are attached to this adapter.
       The number of functional paths is equal to the number of opened
       paths attached to this adapter minus any that are identified as
       failed or disabled (offline).
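Since Active is simply Paths minus the failed or offline paths, comparing the two columns is a quick way to surface adapters that need attention. A minimal sketch, assuming the eight-column layout shown above; report_degraded_adapters is a hypothetical name:

```shell
#!/bin/sh
# Sketch: from "pcmpath query adapter" output, report adapters whose
# Active count (field 8) is lower than their Paths count (field 7),
# meaning some attached paths are failed or disabled.
report_degraded_adapters() {
    awk '$1 ~ /^[0-9]+$/ && NF == 8 && $8 < $7 {
        printf "%s: %d of %d paths functional\n", $2, $8, $7
    }'
}

# On the AIX host: pcmpath query adapter | report_degraded_adapters
```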

pcmpath query adaptstats

The pcmpath query adaptstats command displays performance information
about a single fibre-channel adapter or all fibre-channel adapters that
are attached to SDDPCM-configured MPIO devices. If you do not enter an
adapter number, information about all adapters is displayed.

Syntax:

   pcmpath query adaptstats adapter number aa ap

Parameters:

adapter number
   The index number of the adapter for which you want information
   displayed. If you do not enter an adapter index number, information
   about all adapters is displayed.

aa The adapter of active/active storage controller devices.

ap The adapter of active/passive storage controller devices.

Examples: If you enter the pcmpath query adaptstats command and your
system only has Active/Passive devices (for example, DS4800), the
following output is displayed:

Total Active/Passive Adapters : 2

Adapter #: 0
=============
           Total Read  Total Write  Active Read  Active Write   Maximum
I/O:                0            0            0             0         0
SECTOR:             0            0            0             0         0

Adapter #: 1
=============
           Total Read  Total Write  Active Read  Active Write   Maximum
I/O:                0            0            0             0         0
SECTOR:             0            0            0             0         0

/*-------------------------------------------------------------------------*/

The terms used in the output are defined as follows:

Total Read
   v I/O: total number of completed read requests
   v SECTOR: total number of sectors that have been read

Total Write
   v I/O: total number of completed write requests
   v SECTOR: total number of sectors that have been written

Active Read
   v I/O: total number of read requests in process
   v SECTOR: total number of sectors to read in process

Active Write
   v I/O: total number of write requests in process
   v SECTOR: total number of sectors to write in process

Maximum
   v I/O: the maximum number of queued I/O requests
   v SECTOR: the maximum number of queued sectors to read or write

pcmpath query device

The pcmpath query device command displays information about a single
SDDPCM MPIO device, a set of SDDPCM MPIO devices between device M and N,
a specific model of SDDPCM MPIO devices, or all SDDPCM MPIO devices. If a
device number is entered, the command displays the device information
about the hdisk that is associated with this number. If two device
numbers are entered, the command displays device information for a set of
devices between these two device numbers. The option to specify a model
cannot be used when you query with device numbers. If you do not enter a
device number, information about all devices is displayed.

Beginning with SDDPCM 2.1.0.7, a new option, -d, is provided to display
devices of a particular device model. The valid device models are:
v 1750 - All 1750 models (DS6000)
v 2105 - All 2105 models (ESS)
v 2107 - All 2107 models (DS8000)
v 2145 - All 2145 models (SAN Volume Controller)

Beginning with SDDPCM 2.4.0.0, the following device models are added for
option -d:
v 1722 - All 1722 devices (DS4300)
v 1724 - All 1724 devices (DS4100)
v 1742 - All 1742 devices (DS4400 and DS4500)
v 1814 - All 1814 devices (DS4200 and DS4700)
v 1815 - All 1815 devices (DS4800)
v 1818 - All 1818 devices (DS5100 and DS5300)

Beginning with SDDPCM 2.6.0.0, the following device model is added for
option -d:
v 1820 - All 1820 devices (RSSM)

Beginning with SDDPCM 2.1.0.7, a new feature is available to verify SAN
configurations in a controller environment, such as DS6000 or SAN Volume
Controller. A nonpreferred path will be marked with "*". For example, if
you configure 4 paths per MPIO hdisk device and you want to configure an
equal distribution between preferred controller and nonpreferred
controller of DS6000 devices, you can configure your environment with 2
paths connected to a preferred controller and 2 paths connected to a
nonpreferred controller. This feature will help you verify your
configuration by indicating which path is connected to the nonpreferred
controller before the device starts operation.

Notes:
1. You must open SDDPCM MPIO devices of DS6000 and SAN Volume Controller
   once before the '*' mark on nonpreferred paths can be displayed in the
   following three commands:
   v pcmpath query device
   v pcmpath query essmap
   v pcmpath query portmap
   You must open SDDPCM MPIO RSSM devices before the '*' mark on
   nonpreferred paths can be displayed in the pcmpath query device
   command.
2. For DS4000 and DS5000 SDDPCM MPIO devices, only opened passive paths
   are displayed with the '*' mark in the pcmpath query device command.

The pcmpath query device commands display only supported storage MPIO
devices that are configured with the SDDPCM module. Any AIX internal
disks or non-SDDPCM-configured MPIO devices are not displayed.

Beginning with SDDPCM 2.1.3.0, two new options are added to the device
query command. The first option lets you specify two numbers to query a
set of devices; the second option -i x y lets you repeat the query
command every x seconds for y times.

Beginning with SDDPCM 3.0.0.0, a new session name option is added to the
device query command that allows you to query a session name to show the
set of devices in that session.

Syntax:

   pcmpath query device device number
                        device number m device number n
                        -d device model
                        -S session name
                        -i x
                        -i x y

Parameters:

device number
   The device number refers to the logical device number of the hdisk.

device number_m device_number_n
   Use the device_number_m device_number_n option to provide a range of
   device index numbers.

device model
   Displays devices of a particular device model. The valid device
   models are:
   v 1722 - All 1722 devices (DS4300)
   v 1724 - All 1724 devices (DS4100)
   v 1742 - All 1742 devices (DS4400 and DS4500)
   v 1750 - All 1750 models (DS6000)
   v 1814 - All 1814 devices (DS4200 and DS4700)
   v 1815 - All 1815 devices (DS4800)
   v 1818 - All 1818 devices (DS5100 and DS5300)
   v 2105 - All 2105 models (ESS)
   v 2107 - All 2107 models (DS8000)
   v 2145 - All 2145 models (SAN Volume Controller)
   v 1820 - All 1820 devices (RSSM)

session name
   Displays the set of devices in the specified session.

i  Repeats the command every x seconds for y times. If you do not
   specify y, the command repeats indefinitely every x seconds.

Examples: If you enter the pcmpath query device 65 66 command, the
following output is displayed:

For the supported storage devices:

DEV#:  65  DEVICE NAME: hdisk65  TYPE: 1814  ALGORITHM: Load Balance
SERIAL: 600A0B800011796C0000B68148283C9C
===========================================================================
Path#      Adapter/Path Name   State     Mode      Select   Errors
    0*          fscsi0/path0   OPEN      NORMAL        42        0
    1*          fscsi1/path2   OPEN      NORMAL        44        0
    2           fscsi0/path1   OPEN      NORMAL        54        0
    3           fscsi1/path3   OPEN      NORMAL        52        0

DEV#:  66  DEVICE NAME: hdisk66  TYPE: 1814  ALGORITHM: Load Balance
SERIAL: 600A0B80001179440000C1FE48283A08
===========================================================================
Path#      Adapter/Path Name   State     Mode      Select   Errors
    0           fscsi0/path0   CLOSE     NORMAL        44        1
    1           fscsi1/path2   CLOSE     NORMAL        57        0
    2           fscsi0/path1   CLOSE     NORMAL         0        0
    3           fscsi1/path3   CLOSE     NORMAL         1        0

If you enter the pcmpath query device 4 5 command, the following output
is displayed:

DEV#:   4  DEVICE NAME: hdisk4  TYPE: 1820N00  ALGORITHM: Load Balance
SERIAL: 6005076B07409F7F4B54DCE9000000B9
===========================================================================
Path#      Adapter/Path Name   State     Mode      Select   Errors
    0             sas1/path0   OPEN      NORMAL   1748651        0
    1*            sas1/path1   OPEN      NORMAL       364        0

DEV#:   5  DEVICE NAME: hdisk5  TYPE: 1820N00  ALGORITHM: Load Balance
SERIAL: 6005076B07409F7F4B54DCED000000BA
===========================================================================
Path#      Adapter/Path Name   State     Mode      Select   Errors
    0             sas1/path0   OPEN      NORMAL   1748651        0
    1*            sas1/path1   OPEN      NORMAL       364        0

If hdisk2 is an Open HyperSwap device that has never been opened, and you
enter pcmpath query device 2, the following output is displayed:

DEV#:   2  DEVICE NAME: hdisk2  TYPE: 2107900  ALGORITHM: Load Balance
SESSION NAME: session1
OS DIRECTION: H1->H2
==========================================================================
PRIMARY SERIAL: 10000000F00
SECONDARY SERIAL: 20000000F80
-----------------------------
Path#      Adapter/Path Name   State     Mode      Select   Errors
    0           fscsi0/path0   CLOSE     NORMAL         0        0
    1           fscsi0/path1   CLOSE     NORMAL         0        0
    2           fscsi0/path2   CLOSE     NORMAL         0        0
    3           fscsi0/path3   CLOSE     NORMAL         0        0
    4           fscsi1/path4   CLOSE     NORMAL         0        0
    5           fscsi1/path5   CLOSE     NORMAL         0        0
    6           fscsi1/path6   CLOSE     NORMAL         0        0
    7           fscsi1/path7   CLOSE     NORMAL         0        0

If hdisk2 is an Open HyperSwap device that is being opened, and you enter
pcmpath query device 2, the following output is displayed.

Note: In the following example, hdisk2 is created on two physical
devices. At any time, only one of the two physical devices is active for
I/Os. The asterisk (*) indicates the active device to which the current
I/Os are being sent.

DEV#:   2  DEVICE NAME: hdisk2  TYPE: 2107900  ALGORITHM: Load Balance
SESSION NAME: session1
OS DIRECTION: H1->H2
==========================================================================
PRIMARY SERIAL: 10000000F00 *
-----------------------------
Path#      Adapter/Path Name   State     Mode      Select   Errors
    0           fscsi0/path0   OPEN      NORMAL         8        0
    1           fscsi0/path2   OPEN      NORMAL         9        0
    2           fscsi1/path4   OPEN      NORMAL         9        0
    3           fscsi1/path5   OPEN      NORMAL         9        0

SECONDARY SERIAL: 20000000F80
-----------------------------
Path#      Adapter/Path Name   State     Mode      Select   Errors
    4           fscsi0/path1   OPEN      NORMAL         0        0
    5           fscsi0/path3   OPEN      NORMAL         0        0
    6           fscsi1/path6   OPEN      NORMAL         0        0
    7           fscsi1/path7   OPEN      NORMAL         0        0

The terms used in the output are defined as follows:

Dev#  The logical device number of this hdisk.

Name  The logical name of this device.

Type  The device product ID from inquiry data.

Algorithm
      The current path selection algorithm selected for the device. The
      algorithm selected is one of the following: load balancing, load
      balancing port, round robin, or failover.

Session name
      The name of the Tivoli Productivity Center for Replication session
      in which this Open HyperSwap device is contained.

OS direction
      The current Open HyperSwap direction. H1->H2 shows that the I/O is
      active on the H1 site. H1<-H2 shows that the I/O is active on the
      H2 site.

Primary serial
      The serial number of the volume on the H1 site.

Secondary serial
      The serial number of the volume on the H2 site.

Path# The path index displayed by the device query command.

Adapter
      The name of the adapter to which the path is attached.

Path Name
      The name of the path. The number displayed as part of the name is
      the path ID that is used by the pcmpath open device path and
      pcmpath set device path commands.

State The condition of the path attached to the named device:
      Open          Path is in use.
      Close         Path is not being used.
      Close_Failed  Path was detected to be broken and failed to open
                    when the device was opened. The path stays in
                    Close_Failed state when the device is closed.
      Failed        Path is no longer being used. It has been removed
                    from service due to errors.
      Invalid       The path failed to open, but the MPIO device is
                    opened.

Mode  The mode of the named path. The mode can be either Normal or
      Offline.

Select
      The number of times this path was selected for I/O.

Errors
      The number of input and output errors that occurred on a path of
      this device.

Serial
      The LUN for this device.
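The '*' marker makes it possible to check the preferred/nonpreferred balance described earlier without reading every table by eye. A sketch, assuming the path indexes of nonpreferred paths are rendered as 0*, 1*, and so on, exactly as in the examples above; count_nonpreferred is a hypothetical name:

```shell
#!/bin/sh
# Sketch: for each device in "pcmpath query device" output, print the
# device number followed by its count of nonpreferred (starred) paths.
count_nonpreferred() {
    awk '
        $1 == "DEV#:" { if (dev != "") print dev, n; dev = $2; n = 0 }
        $1 ~ /^[0-9]+\*$/ { n++ }          # starred path index = nonpreferred
        END { if (dev != "") print dev, n }'
}

# On the AIX host: pcmpath query device | count_nonpreferred
# A 4-path DS6000 device configured half-and-half should report 2.
```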

pcmpath query devstats

The pcmpath query devstats command displays performance information for a
single SDDPCM MPIO device or all SDDPCM MPIO devices. If you do not enter
a device number, information about all devices is displayed. If a device
number is entered, the command displays the device information about the
hdisk that is associated with this number.

Note: The pcmpath query devstats command displays only SDDPCM MPIO
devices. Any AIX internal disks or non-SDDPCM configured MPIO devices
will not be displayed.

Beginning with SDDPCM 2.1.3.0, two new options are added to the query
devstats command. The first option lets you specify two numbers to query
a set of devices; the second option -i x y lets you repeat the query
command every x seconds for y times.

Syntax:

   pcmpath query devstats device number
                          device number m device number n
                          -d device model
                          -i x
                          -i x y

Parameters:

device number
   The device number refers to the logical device number of the hdisk.

device number_m device_number_n
   Use the device_number_m device_number_n option to provide a range of
   device index numbers.

device model
   Displays devices of a particular device model. The valid device
   models are:
   v 1722 - All 1722 devices (DS4300)
   v 1724 - All 1724 devices (DS4100)
   v 1742 - All 1742 devices (DS4400 and DS4500)
   v 1750 - All 1750 models (DS6000)
   v 1814 - All 1814 devices (DS4200 and DS4700)
   v 1815 - All 1815 devices (DS4800)
   v 2105 - All 2105 models (ESS)
   v 2107 - All 2107 models (DS8000)
   v 2145 - All 2145 models (SAN Volume Controller)
   v 1820 - All 1820 devices (RSSM)

i  Repeats the command every x seconds for y times. If you do not
   specify y, the command repeats indefinitely every x seconds.

Examples: If you enter the pcmpath query devstats 2 command, the
following output about hdisk2 is displayed:

DEV#: 2  DEVICE NAME: hdisk2
===============================
           Total Read  Total Write  Active Read  Active Write   Maximum
I/O:               60           10            0             0         2
SECTOR:           320            0            0             0        16

Transfer Size:  <= 512     <= 4k      <= 16K     <= 64K     > 64K
                    30        40           0          0         0

/*-------------------------------------------------------------------------*/

The terms used in the output are defined as follows:

Total Read
   v I/O: total number of completed read requests
   v SECTOR: total number of sectors that have been read

Total Write
   v I/O: total number of completed write requests
   v SECTOR: total number of sectors that have been written

Active Read
   v I/O: total number of read requests in process
   v SECTOR: total number of sectors to read in process

Active Write
   v I/O: total number of write requests in process
   v SECTOR: total number of sectors to write in process

Maximum
   v I/O: the maximum number of queued I/O requests
   v SECTOR: the maximum number of queued sectors to read or write

Transfer size
   v <= 512: the number of I/O requests received, whose transfer size is
     512 bytes or less
   v <= 4k: the number of I/O requests received, whose transfer size is
     4 KB or less (where KB equals 1024 bytes)
   v <= 16K: the number of I/O requests received, whose transfer size is
     16 KB or less
   v <= 64K: the number of I/O requests received, whose transfer size is
     64 KB or less
   v > 64K: the number of I/O requests received, whose transfer size is
     greater than 64 KB
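The I/O counters lend themselves to simple trend checks, for example totalling completed requests across all devices between two runs of the command. A sketch, assuming the "I/O:" row layout shown above; total_completed_io is a hypothetical name:

```shell
#!/bin/sh
# Sketch: sum completed read and write requests (the first two counts on
# each "I/O:" row) across all devices in "pcmpath query devstats" output.
total_completed_io() {
    awk '$1 == "I/O:" { sum += $2 + $3 } END { print sum + 0 }'
}

# On the AIX host, run twice and subtract to estimate request rate:
#   pcmpath query devstats | total_completed_io
```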

pcmpath query essmap

The pcmpath query essmap command displays data collected from the
fcppcmmap program after the system has been restarted, or by the pcmpath
program if any device configuration change occurs after the system has
been restarted. The fcppcmmap program issues SCSI commands directly to
the disks to collect the information that is displayed. This command is
supported for disk storage systems only.

Syntax:

   pcmpath query essmap

Examples: If you enter the pcmpath query essmap command, the following
output is displayed:

Disk    Path   P  Location      adapter  LUN SN       Type          Size  LSS  Vol  Rank  C/A  S  ...
------  -----  -  ------------  -------  -----------  ------------  ----  ---  ---  ----  ---  -  ...
hdisk5  path0  *  30-60-01[FC]  fscsi1   13AAAKA1200  IBM 1750-500  1.1    18    0  0000   01  Y  ...
hdisk5  path1     30-60-01[FC]  fscsi0   13AAAKA1200  IBM 1750-500  1.1    18    0  0000   01  Y  ...
hdisk5  path2  *  20-60-01[FC]  fscsi0   13AAAKA1200  IBM 1750-500  1.1    18    0  0000   01  Y  ...
hdisk5  path3     20-60-01[FC]  fscsi1   13AAAKA1200  IBM 1750-500  1.1    18    0  0000   01  Y  ...

The terms used in the output are defined as follows:

Disk   The logical device name assigned by the host.

Path   The logical path name of a MPIO device.

P      Indicates the logical paths and whether the path is preferred and
       nonpreferred. "*" indicates that the path is a nonpreferred path.

Location
       The physical location code of the host adapter through which the
       LUN is accessed.

Adapter
       The logical adapter name assigned by the host LUN.

LUN SN The unique serial number for each LUN within the supported
       storage device.

Type   The device and model.

Size   The capacity of the configured LUN.

LSS    The logical subsystem where the LUN resides. (Beginning with
       2.1.3.0, the value displayed is changed from decimal to
       hexadecimal.)

Vol    The volume number within the LSS.

Rank   The unique identifier for each RAID array within the supported
       storage device.

C/A    The cluster and adapter accessing the array.

S      Indicates that the device is shared by two and more supported
       storage device ports. Valid values are yes or no.

Connection
       The physical location code of supported storage device adapter
       through which the LUN is accessed.

Port   The supported storage device port through which the LUN is
       accessed.

RaidMode
       The disk RAID mode.
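Because essmap prints one row per logical path, counting rows per disk is a quick way to confirm that every LUN has the intended number of configured paths. A sketch, assuming the Disk column comes first as in the example above; paths_per_disk is a hypothetical name:

```shell
#!/bin/sh
# Sketch: count configured paths per hdisk in "pcmpath query essmap"
# output. Data rows start with the hdisk name; the header and separator
# rows do not match the pattern and are skipped.
paths_per_disk() {
    awk '$1 ~ /^hdisk[0-9]+$/ { n[$1]++ } END { for (d in n) print d, n[d] }'
}

# On the AIX host: pcmpath query essmap | paths_per_disk
```

Note that awk's `for (d in n)` iterates in no defined order, so sort the output if you compare runs.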

pcmpath query port

The pcmpath query port command displays information about a single target
port or all target ports that are attached to SDDPCM-configured MPIO
devices.

Syntax:

   pcmpath query port target port number

Parameters:

target port number
   Use the target port number option to display information about the
   target port. If you do not enter a target port number, information
   about all target ports is displayed.

Examples: If you have 12 active ports and enter the pcmpath query port
command, the following output is displayed:

Port#  Wwpn              State   Mode    Select  Errors  Paths  Active
    0  5005022300ca031e  NORMAL  ACTIVE       0       0     60       0
    1  5005022307044750  NORMAL  ACTIVE       0       0     64       0
    2  5005022307155750  NORMAL  ACTIVE       0       0     64       0
    3  500502230e02ac54  NORMAL  ACTIVE       0       0     64       0
    4  500502230e01fe54  NORMAL  ACTIVE       0       0     64       0
    5  500502230eb3d54   NORMAL  ACTIVE       0       0     64       0
    6  500503380110046b  NORMAL  ACTIVE       0       0     60       0
    7  5005033801100d64  NORMAL  ACTIVE       0       0     60       0
    8  50050338011003d1  NORMAL  ACTIVE       0       0     60       0
    9  5005033801100118  NORMAL  ACTIVE       0       0     60       0
   10  500503330323006a  NORMAL  ACTIVE    3167       0     62       4
   11  500503330323406a  NORMAL  ACTIVE    3220       1     62       4

The terms used in the output are defined as follows:

Port # The index number of the port.

WWPN   The worldwide port name of the target port.

State  The condition of the named target port, which can be one of the
       following types:
       Normal   The target port that is in use.
       Degraded One or more of the opened paths are not functioning.
       Failed   All opened paths that are attached to this target port
                are not functioning.

Mode   The mode of the named target port, which is either Active or
       Offline.

Select The number of times this target port was selected for input or
       output.

Errors The number of errors that occurred on all paths that are attached
       to this target port.

Paths  The number of paths that are attached to this target port.

Active The number of functional paths that are attached to this target
       port. The number of functional paths is equal to the number of
       opened paths that are attached to this target port minus any that
       are identified as failed or disabled (offline).
2107. Chapter 3. Using SDDPCM on an AIX host system 167 . and 2145 device types. 2105.Note: This command is supported only with 1750.

meaning that at least one path attached to this port is functional.---- y = (alternate path) online/open o = (alternate path) online/closed n = (alternate path) offline The terms used in the output are defined as follows: Y y The port is online and open.------.---. The port is online and closed. Paths connected to this port are nonpreferred paths.---.---y--.---.---.---. the following output is displayed: ESSID DISK BAY-1(B1) H1 H2 H3 H4 ABCD ABCD ABCD ABCD BAY-5(B5) H1 H2 H3 H4 ABCD ABCD ABCD ABCD BAY-2(B2) H1 H2 H3 ABCD ABCD ABCD BAY-6(B6) H1 H2 H3 ABCD ABCD ABCD H4 ABCD H4 ABCD BAY-3(B3) H1 H2 H3 ABCD ABCD ABCD BAY-7(B7) H1 H2 H3 ABCD ABCD ABCD H4 ABCD H4 ABCD BAY-4(B4) H1 H2 H3 ABCD ABCD ABCD BAY-8(B8) H1 H2 H3 ABCD ABCD ABCD H4 ABCD H4 ABCD 13AAAKA hdisk5 O--. Paths connected to this port are nonpreferred paths. whereas the serial number of DS6000 and DS8000 devices is seven digits.---. meaning that all paths attached to this port are offline. The port is offline.pcmpath query portmap The pcmpath query portmap command displays the status of the logic paths that are managed by SDDPCM between the host and the storage ports. Paths connected to this port are nonpreferred paths.---. meaning that all paths attached to this port are offline.---. It is either not functional or has been placed offline.---.---. O o N n PD Note: The following fields apply only 1750 devices and can only be shown after the device is opened once: v y v o v n The serial number of ESS devices is five digits. The port is online and closed. 168 Multipath Subsystem Device Driver User's Guide . The port is offline.---13AAAKA hdisk6 Y--.---.---.---. meaning that at least one path state and mode is closed and online.---.---Y O N PD = = = = = online/open online/closed offline path not configured path down o--. Syntax: pcmpath query portmap Examples: If you enter the pcmpath query portmap command.---. The path is down. meaning that at least one path attached to this port is functional.---. 
pcmpath query portmap

The pcmpath query portmap command displays the status of the logic paths
that are managed by SDDPCM between the host and the storage ports. This
command is supported for disk storage systems only.

Syntax:

   pcmpath query portmap

Examples: If you enter the pcmpath query portmap command, the following
output is displayed:

            BAY-1(B1)           BAY-2(B2)           BAY-3(B3)           BAY-4(B4)
ESSID DISK  H1   H2   H3   H4   H1   H2   H3   H4   H1   H2   H3   H4   H1   H2   H3   H4
            ABCD ABCD ABCD ABCD ABCD ABCD ABCD ABCD ABCD ABCD ABCD ABCD ABCD ABCD ABCD ABCD
            BAY-5(B5)           BAY-6(B6)           BAY-7(B7)           BAY-8(B8)
            H1   H2   H3   H4   H1   H2   H3   H4   H1   H2   H3   H4   H1   H2   H3   H4
            ABCD ABCD ABCD ABCD ABCD ABCD ABCD ABCD ABCD ABCD ABCD ABCD ABCD ABCD ABCD ABCD
13AAAKA hdisk5  O--- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ----
13AAAKA hdisk6  Y--- ---- ---- ---- o--- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ----

Y = online/open              y = (alternate path) online/open
O = online/closed            o = (alternate path) online/closed
N = offline                  n = (alternate path) offline
- = path not configured
PD = path down

The terms used in the output are defined as follows:

Y  The port is online and open, meaning that at least one path attached
   to this port is functional.

y  Paths connected to this port are nonpreferred paths. The port is
   online and open, meaning that at least one path attached to this port
   is functional.

O  The port is online and closed, meaning that at least one path state
   and mode is closed and online.

o  Paths connected to this port are nonpreferred paths. The port is
   online and closed, meaning that at least one path state and mode is
   closed and online.

N  The port is offline, meaning that all paths attached to this port are
   offline.

n  Paths connected to this port are nonpreferred paths. The port is
   offline, meaning that all paths attached to this port are offline.

-  The path is not configured.

PD The path is down. It is either not functional or has been placed
   offline.

Note: The following fields apply only to 1750 devices and can only be
shown after the device is opened once:
v y
v o
v n

The serial number of ESS devices is five digits, whereas the serial
number of DS6000 and DS8000 devices is seven digits.

pcmpath query portstats

The pcmpath query portstats command displays performance information
about a single target port or all target ports that are attached to
SDDPCM-configured MPIO devices. If you do not enter a target port number,
information about all target ports is displayed.

Syntax:

   pcmpath query portstats target port number

Parameters:

target port number
   Use the target port number option to display information about the
   target port. If you do not enter a target port number, information
   about all target ports is displayed.

Examples: If you have four target ports and enter the pcmpath query
portstats command, the following output is displayed:

Port #: 0
=============
           Total Read  Total Write  Active Read  Active Write   Maximum
I/O:             3169           51            0             0         2
SECTOR:         24762         6470            0             0      2096

Port #: 1
=============
           Total Read  Total Write  Active Read  Active Write   Maximum
I/O:             3109           58            0             0         2
SECTOR:         28292         4084            0             0      2096

Port #: 2
=============
           Total Read  Total Write  Active Read  Active Write   Maximum
I/O:                0            0            0             0         0
SECTOR:             0            0            0             0         0

Port #: 3
=============
           Total Read  Total Write  Active Read  Active Write   Maximum
I/O:                0            0            0             0         0
SECTOR:             0            0            0             0         0

The terms used in the output are defined as follows:

Total Read
   v I/O: The total number of completed read requests
   v SECTOR: The total number of sectors that have been read

Total Write
   v I/O: The total number of completed write requests
   v SECTOR: The total number of sectors that have been written

Active Read
   v I/O: The total number of read requests in process
   v SECTOR: The total number of sectors to read in process

Active Write
   v I/O: The total number of write requests in process
   v SECTOR: The total number of sectors to write in process

Maximum
   v I/O: The maximum number of queued I/O requests
   v SECTOR: The maximum number of queued sectors to read or write

Notes:
1. Data that is displayed by this command is collected only when the
   device's algorithm is set to lbp. For example, if the algorithm from
   hdisk10 to hdisk20 is set to lbp, the statistical data for each device
   is saved in the associated ports and displayed here. If none of the
   devices' algorithms is set to lbp, there is no change in the port
   statistical output.
2. This command is supported only with 1750, 2105, 2107, and 2145 device
   types.
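When the lbp algorithm is in use, a quick scan of the per-port counters shows how many target ports are actually carrying load. A sketch, assuming the "I/O:" row layout above; active_port_count is a hypothetical name:

```shell
#!/bin/sh
# Sketch: count ports in "pcmpath query portstats" output whose completed
# read plus write totals are nonzero, i.e. ports that have carried I/O
# since the counters were last reset.
active_port_count() {
    awk '$1 == "I/O:" && ($2 + $3) > 0 { n++ } END { print n + 0 }'
}

# On the AIX host: pcmpath query portstats | active_port_count
```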

pcmpath query session

The pcmpath query session command displays the session of the Open
HyperSwap devices configured on the host.

Syntax:

   pcmpath query session

Parameters: None

Examples: If you enter the pcmpath query session command, the following
output is displayed:

Total Open Hyperswap Sessions : 1

SESSION NAME: session1

SessId  Host_OS_State  Host_Total_copysets  Disabled  Quies  Resum  SwRes
     0  READY                            2         0      0      0      0

The terms used in the output are defined as follows:

Session Name
   The name of the Tivoli Productivity Center for Replication session in
   which this Open HyperSwap device is contained.

SessId
   The session ID.

Host_OS_State
   The session state on this host (READY or NOT READY).

Host_Total_copysets
   The number of Open HyperSwap devices configured on this host.

Disabled
   The number of Open HyperSwap devices disabled for HyperSwap on this
   host.

Quies
   The number of Open HyperSwap devices quiesced or in the process of
   being quiesced.

Resum
   The number of Open HyperSwap devices in the resume process.

SwRes
   The number of Open HyperSwap devices in the swap-and-resume process.

pcmpath query version

The pcmpath query version command displays the version of SDDPCM that is
installed.

Syntax:

   pcmpath query version

Parameters: None

Examples: If you enter the pcmpath query version command, the following
output is displayed:

[root@abc]> pcmpath query version
IBM SDDPCM Version 2.1.1.0 (devices.sddpcm.52.rte)

pcmpath query wwpn

The pcmpath query wwpn command displays the World Wide Port Name (WWPN)
of the host fibre-channel adapters.

Syntax:

   pcmpath query wwpn

Parameters: None

Examples: If you enter the pcmpath query wwpn command, the following
output is displayed:

Adapter Name  PortWWN
fscsi0        10000000C925F5B0
fscsi1        10000000C9266FD1
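Switch zoning tools usually want WWPNs as colon-separated octets rather than the packed form printed here. The following sketch reformats the query output accordingly; format_wwpns is a hypothetical name, and the 16-hex-digit check matches the example output above:

```shell
#!/bin/sh
# Sketch: convert each "adapter  WWPN" row of "pcmpath query wwpn" output
# into "adapter xx:xx:xx:xx:xx:xx:xx:xx" form. The header row fails the
# hex test and is skipped.
format_wwpns() {
    awk '$2 ~ /^[0-9A-Fa-f]+$/ && length($2) == 16 {
        w = tolower($2)
        s = substr(w, 1, 2)
        for (i = 3; i <= 15; i += 2) s = s ":" substr(w, i, 2)
        print $1, s
    }'
}

# On the AIX host: pcmpath query wwpn | format_wwpns
```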

pcmpath set adapter

The pcmpath set adapter command sets all device paths attached to an adapter either to online or offline. SDDPCM reserves the last path of a device if the device is opened. This command will fail if there is any device that has the last path attached to this adapter. If all paths of a device are attached to a single fibre-channel adapter that connects to multiple storage ports through a switch, the pcmpath set adapter 0 offline command will fail, because the device will lose all its paths.

Attention: If the device is closed, the pcmpath set adapter offline command does not have the last path offline protection.

Note: For DS4000 and DS5000 storage devices, this set adapter offline command might cause multiple devices to fail over to an alternate controller if the device path that is attached to this adapter is the last active path. Furthermore, if some devices are owned by controller A and some devices are owned by controller B, the set adapter offline command might cause some devices to fail over from controller A to controller B and some devices to fail over from controller B to controller A. It is possible that the pcmpath set adapter M offline command returns a failure message, but some number of devices are failed over successfully and some devices are not. You must issue the pcmpath query device command to find out if the command is fully completed or partially completed.

Attention: If the AIX server has more than one DS4000 or DS5000 device model configured, the pcmpath set adapter N offline command might cause multiple device models to fail over to the alternate controller. In this case, the pcmpath set adapter N offline command is not supported.

Attention: For Open HyperSwap devices, when the device is open, an attempt to set the last active path of the source volume offline by issuing the pcmpath set adapter command will be rejected. When the device is closed, it is allowed to set the last path of the source volume offline. However, when the device is opened with all paths to the source volume offline, an unplanned HyperSwap will be triggered.

Syntax: pcmpath set adapter adapter number online|offline aa|ap

Parameters:

adapter number
        The index number of the adapter that you want to change.
online  Enables the adapter for service.
offline Disables the adapter from service.
aa      The adapter of active/active storage controller devices.
ap      The adapter of active/passive storage controller devices.

Examples: If you enter the pcmpath set adapter 0 offline ap command:

v Adapter 0 of the active/passive controller devices changes to Offline mode and, if there are some paths in the opened state, its state might change to failed.
v All paths of the active/passive controller devices that are attached to adapter 0 change to Offline mode and their states change to Dead, if they were in the Open state.

pcmpath set device algorithm

The pcmpath set device algorithm command dynamically changes the path selection algorithm per MPIO device.

Syntax: pcmpath set device num1 [num2] algorithm option

Note: If the device reserve policy is set to single_path (SCSI-2 reserve), the device algorithm must be set to fail_over. Any attempt to set the algorithm to round_robin, load_balance, or load_balance_port with single_path reserve policy will fail.

Parameters:

num1 [ num2 ]
    v When only num1 is specified, the command applies to the hdisk specified by num1.
    v When two device logical numbers are entered, this command applies to all the devices whose logical numbers fit within the range of the two device logical numbers.

option
    Specifies one of the following path selection algorithms:
    v rr, where rr indicates round robin
    v lb, where lb indicates load balancing
    v fo, where fo indicates failover policy
    v lbp, where lbp indicates load balancing port

Notes:
1. You can enter the pcmpath set device N algorithm rr/fo/lb/lbp command to dynamically change the path selection algorithm associated with SDDPCM MPIO devices that are in either Close or Open state.
2. Beginning with SDDPCM 2.4.0.0, the algorithm lbp incorporates I/O statistics from both host adapters and target ports in the path selection algorithm. This new algorithm is applicable only for device models 1750, 2105, 2107, and 2145.

Examples: If you enter pcmpath set device 2 10 algorithm rr, the path-selection algorithm of hdisk 2 to hdisk 10 is immediately changed to the round robin algorithm.

You can also use the chdev command to change the path selection algorithm of a device: chdev -l hdiskX -a algorithm=load_balance_port
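The reserve-policy restriction in the note above can be pre-checked before a command is issued. The helper below is a hypothetical sketch (it is not shipped with SDDPCM) that simply restates the documented rule: a single_path (SCSI-2 reserve) policy accepts only the fail_over algorithm.

```shell
# Hypothetical pre-check mirroring the documented rule; not an SDDPCM tool.
# $1 = device reserve policy, $2 = requested path selection algorithm
check_algorithm() {
    if [ "$1" = "single_path" ] && [ "$2" != "fail_over" ]; then
        echo "rejected: single_path reserve policy allows only fail_over"
    else
        echo "accepted: $2"
    fi
}

check_algorithm single_path load_balance_port   # would fail on a real device
check_algorithm no_reserve load_balance_port    # would be accepted
```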

pcmpath set device hc_interval

The pcmpath set device hc_interval command dynamically changes the health check time interval for SDDPCM MPIO devices or disables the health check function of a device.

Note: Starting with SDDPCM v2123, when you issue this command to change the SDDPCM internal health check function for one or more devices from enable to disable or from disable to enable, the SDDPCM server daemon (pcmsrv) automatically starts or stops the health check function on the devices.

Syntax: pcmpath set device num1 [num2] hc_interval t

Parameters:

num1 [ num2 ]
    v When only num1 is specified, the command applies to the hdisk specified by num1.
    v When 2 device logical numbers are entered, this command applies to all the devices whose logical numbers fit within the range of the two device logical numbers.

t   The range of supported values for health check interval is 1-3600 seconds. To disable the health check function of a device, set interval time to 0.

Examples: If you enter pcmpath set device 2 10 hc_interval 30, the health check time interval of hdisk2 to hdisk10 is immediately changed to 30 seconds.

pcmpath set device hc_mode

The pcmpath set device hc_mode command dynamically changes the MPIO device health check mode.

Syntax: pcmpath set device num1 [num2] hc_mode option

Parameters:

num1 [ num2 ]
    v When only num1 is specified, the command applies to the hdisk specified by num1.
    v When 2 device logical numbers are entered, this command applies to all the devices whose logical numbers fit within the range of the two device logical numbers.

option
    Specifies one of the following policies:
    v enabled, indicates the health check command will be sent to paths that are opened with a normal path mode.
    v failed, indicates the health check command will be sent to paths that are in failed state.
    v nonactive, indicates the health check command will be sent to paths that have no active I/O. This includes paths that are opened or in failed state.

Examples: If you enter pcmpath set device 2 10 hc_mode enabled, the health check mode of MPIO hdisk2 to hdisk10 is immediately changed to the enabled mode.

pcmpath set device cntlhc_interval

The pcmpath set device cntlhc_interval command dynamically changes the Active/Passive MPIO device controller health check time interval or disables this feature.

Syntax: pcmpath set device num1 [num2] cntlhc_interval t

Parameters:

num1 [ num2 ]
    v When only num1 is specified, the command applies to the hdisk specified by num1.
    v When 2 device logical numbers are entered, this command applies to all active/passive devices whose logical numbers fit within the range of the two device logical numbers.

t   The range of supported values for controller health check time interval is 0-300 seconds. Setting the value to 0 will disable this feature.

Examples: If you enter pcmpath set device 2 10 cntlhc_interval 3, and hdisk2 to hdisk10 are all active/passive devices, the controller health check time interval of hdisk2 to hdisk10 is immediately changed to 3 seconds.

pcmpath set device cntlhc_delay

The pcmpath set device cntlhc_delay command dynamically changes the active/passive MPIO device controller health check delay time or disables this feature.

Syntax: pcmpath set device num1 [num2] cntlhc_delay t

Parameters:

num1 [ num2 ]
    v When only num1 is specified, the command applies to the hdisk specified by num1.
    v When 2 device logical numbers are entered, this command applies to all active/passive devices whose logical numbers fit within the range of the two device logical numbers.

t   The range of supported values for controller health check delay time is 0-300 seconds. Setting the value to 0 will disable this feature.

Examples: If you enter pcmpath set device 2 10 cntlhc_delay 30, and hdisk2 to hdisk10 are all active/passive devices, the controller health check delay time of hdisk2 to hdisk10 is immediately changed to 30 seconds.

Notes:
1. If cntl_delay_time is set to '1', it disables the controller health check feature, which is the same as setting it to '0'.
2. If you try to set cntl_hcheck_int with a value larger than cntl_delay_time, then cntl_hcheck_int will be set to the same value as cntl_delay_time.
3. If you try to set cntl_delay_time with a value smaller than cntl_hcheck_int, the command will fail with the INVALID parameter.
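Notes 2 and 3 describe an asymmetric relationship between the two controller health check attributes. The hypothetical helpers below (not part of SDDPCM) restate that relationship so it can be checked before issuing the commands: a too-large interval is clamped to the current delay, while a too-small delay is rejected outright.

```shell
# Hypothetical helpers mirroring notes 2 and 3 above; not SDDPCM tools.
# (Per note 1, a delay of 0 or 1 disables the feature entirely.)

# Note 2: an interval larger than the current cntl_delay_time is set to
# the same value as cntl_delay_time.
# $1 = requested cntl_hcheck_int, $2 = current cntl_delay_time
set_cntlhc_interval() {
    new=$1
    if [ "$new" -gt "$2" ]; then
        new=$2
    fi
    echo "$new"
}

# Note 3: a delay smaller than the current cntl_hcheck_int fails.
# $1 = requested cntl_delay_time, $2 = current cntl_hcheck_int
set_cntlhc_delay() {
    if [ "$1" -lt "$2" ]; then
        echo "INVALID parameter"
    else
        echo "$1"
    fi
}

set_cntlhc_interval 40 30   # clamped to the current delay
set_cntlhc_delay 10 20      # rejected
```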

pcmpath set device path

The pcmpath set device path command sets the path of a device either to online or offline. You cannot remove the last path to an opened device from service. This prevents a data access failure from occurring.

Attention: If the device is closed, the pcmpath set device path offline command does not have the last path offline protection.

Attention: For Open HyperSwap devices, when the device is open, an attempt to set the last active path of the source volume offline by issuing the pcmpath set device M path N offline command will be rejected. When the device is closed, it is allowed to set the last path of the source volume offline. However, when the device is opened with all paths to the source volume offline, an unplanned HyperSwap will be triggered.

Syntax: pcmpath set device device number path path-ID online|offline

Parameters:

device number
        The logical device number of the hdisk.
path ID The path ID that you want to change, as displayed under "Path Name" by the pcmpath query device command.
online  Enables the path for service.
offline Disables the path from service.

Examples: If you enter the pcmpath set device 5 path 0 offline command, path 0 for device 5 changes to Offline mode.

Summary of command syntax

SDDPCM supports the following pcmpath commands:
v pcmpath clear device <n>/(<m> <n>) count error/all
v pcmpath query adapter [<n> aa|ap]
v pcmpath query adaptstats [<n> aa|ap]
v pcmpath query device <n>/[<m> <n>] [-d <device_model>] [-S <session name>] [-i <x>/(<x> <y>)]
v pcmpath query devstats <n>/[<m> <n>] [-d <device_model>] [-i <x>/(<x> <y>)]
v pcmpath set adapter n online|offline aa|ap
v pcmpath set device M path N online|offline
v pcmpath set device <n1> [n2] algorithm <option>
v pcmpath set device <n1> [n2] hc_interval <t>
v pcmpath set device <n1> [n2] hc_mode <option>
v pcmpath set device <n1> [n2] cntlhc_interval <t>
v pcmpath set device <n1> [n2] cntlhc_delay <t>
v pcmpath disable port <location> ess <essid>
v pcmpath enable port <location> ess <essid>
v pcmpath open device <m> path <n>
v pcmpath query essmap
v pcmpath query port <target port number>
v pcmpath query portmap
v pcmpath query portstats <target port number>
v pcmpath query session
v pcmpath query wwpn
v pcmpath query version

Note: If the commands are used for a device, the n is the number of the device logical name. For example, pcmpath query devstats 3 queries the device statistics for hdisk3. If the commands are used for an adapter, the n is the index of the adapter. For example, pcmpath query adapter 2 queries the adapter statistics for the third adapter in adapter list order, which can be fscsi5.
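When an adapter is taken out of service, the commands above are typically issued as a short sequence: set the adapter offline, then confirm the adapter and device states. The sketch below only prints that sequence (a dry run) so it is safe to trace anywhere; on a live AIX host with SDDPCM installed, the printed commands would be run directly instead.

```shell
# Dry run: print, rather than execute, a typical adapter maintenance
# sequence for active/passive storage controller devices.
adapter=0
for cmd in \
    "pcmpath set adapter $adapter offline ap" \
    "pcmpath query adapter $adapter ap" \
    "pcmpath query device"
do
    echo "$cmd"   # replace this echo with a direct invocation to run for real
done
```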

ibm.ibm.com/servers/storage/support/software/sdd Verifying the hardware and software requirements You must install the following hardware and software components to ensure that SDD installs and operates successfully. see the Readme file on the CD-ROM or go to SDD Web site: www. Using the SDD on an HP-UX host system This chapter provides step-by-step procedures for you to install. Hardware The following hardware components are needed: v One or more of the supported storage devices. v For ESS devices: at least one SCSI host adapter (two are required for load balancing and failover) To install SDD and use the input/output (I/O) load balancing and failover features.Chapter 4. 2010 183 . For updated and additional information that is not included in this manual. SDD-supported HP-UX kernel levels SDD supports the following HP-UX kernel levels: v HP-UX 11. 1999. and use SDD on a Hewlett-Packard (HP-UX) host system that is attached to supported storage devices. For information on the fibre-channel adapters that can be used on your HP-UX host system go to: www. configure.com/servers/storage/support v A SCSI cable to connect each SCSI host adapter to a storage system controller port v Subsystem LUNs that have been created and confirmed for multiport access v A fiber-optic cable to connect each fibre-channel adapter to a supported storage device port Software SDD supports certain HP-UX kernel levels.11 (11i) 32-bit and 64-bit v HP-UX 11. you need a minimum of two SCSI or fibre-channel adapters. remove. A host system with a single fibre-channel adapter that connects through a switch to multiple ESS ports is considered to have multiple fibre-channel SDD vpath devices.23 (11i v2) PA_RISC 64-bit and IA 64-bit Ensure that the patches listed in Table 16 on page 186 or newer patches are installed on your HP-UX host system. © Copyright IBM Corp.

Unsupported environments

SDD does not support the following environments:
v HP-UX 11.0 32-bit kernel
v HP-UX 11.0 64-bit kernel
v A system start from an SDD pseudo device
v A system paging file on an SDD pseudo device
v A host system with both a SCSI and fibre-channel connection to a shared LUN
v Single-path mode during concurrent download of licensed machine code nor during any disk storage system concurrent maintenance that impacts the path attachment, such as a disk storage system-host-bay-adapter replacement
v Single-path configuration for fibre channel
v DS8000 and DS6000 with SCSI connectivity

Understanding how SDD works on an HP-UX host system

SDD resides above the HP SCSI disk driver (sdisk) in the protocol stack. For more information about how SDD works, see "The SDD architecture" on page 2.

Preparing for the SDD installation

Before you install SDD, you must configure the supported storage devices to your host system and attach the required SCSI or fibre-channel adapters. A maximum of 1200 LUNs can be configured by the SDD. The HP-UX SDD supports up to 32 paths per LUN. Be aware that there is no performance gain with a large number of LUNs that are smaller in size. The larger the configuration, the more system resources are required.

Note: 1200 LUNs are supported only by HP-UX 11.23 (PA-RISC 64 and IA64). 600 LUNs are supported by HP-UX 11.11.

Configuring the disk storage system

Before you install SDD, configure your ESS for single-port or multiport access for each LUN. SDD requires a minimum of two independent paths that share the same logical unit to use the load balancing and path-failover-protection features. With a single path, failover protection is not provided. For information about configuring your ESS, see the IBM TotalStorage Enterprise Storage Server Introduction and Planning Guide.

Configuring the virtualization products

Before you install SDD, configure your SAN Volume Controller devices for single-port or multiport access for each LUN. SDD requires a minimum of two independent paths that share the same logical unit to use the load balancing and path-failover-protection features. With a single path, failover protection is not provided. For information about configuring your SAN Volume Controller, see the IBM System Storage SAN Volume Controller Software Installation and Configuration Guide.

Configuring DS4000 and DS5000 family disk systems

Before you install the SDD, follow the instructions in the IBM System Storage DS4x00 Installation, User's and Maintenance Guide and the IBM System Storage

DS4000 Storage Manager Concepts Guide for configuring LUNs that are attached to the HP-UX host system. The SDD requires a minimum of two independent paths that share the same logical unit to use load balance and path failover features. With a single path, failover protection is not provided.

The DS4000 and DS5000 controllers can be set to two different modes: AVT or non-AVT, but can operate only in one mode per storage partition. The controller mode is determined by the host type, which is predefined in the DS4000 or DS5000 Storage Manager profile. Only the AVT mode is supported for the HP-UX host type. Because switching to another controller affects performance, configuring the redundant path to a controller can avoid unnecessary controller failover that is caused by a path failure. To ensure that a single path failure does not result in controller failover during recovery, configure redundant paths to each controller.

Bypassing the boot disk check

The SDD installation process checks the boot disk and then excludes the bootable disk from the SDD configuration. This process attempts to read the boot record from every device instance of the system, so the process can be lengthy if your configuration is large or if the devices are unable to satisfy the I/O request. If the system boot disk is not an SDD-supported disk, you can bypass the process that checks the boot disk. To do this, enter the following before the SDD installation:

touch /opt/IBMsdd/BYPASS_BOOTDISK_CHECK

You might want to bypass boot disk checking for systems with DS4000 or DS5000 devices, because reading the boot record from every device instance of DS4000 or DS5000 can be very time consuming.

Planning for installation

Before you install SDD on your HP-UX host system, you need to understand what kind of software runs on your host. The way you install SDD depends on the kind of software that you are running. Two types of special device files are supported:
v Block device files
v Character device files

There are three possible scenarios for installing SDD. The scenario you choose depends on the kind of software that you have installed. Table 15 further describes the various installation scenarios and how you should proceed. Table 16 on page 186 lists the patches necessary for proper operation of SDD on HP-UX.

Table 15. SDD installation scenarios

Installation scenario    Description    How to proceed

Table 15. SDD installation scenarios (continued)

Scenario 1
    v SDD is not installed.
    v The SDD server for Expert is installed.
    v No software application or DBMS communicates directly to sdisk interface.
    Go to:
    1. “Determining if the SDD 1.3.1.5 (or later) server for Expert is installed”
    2. “Installing SDD” on page 187
    3. “Standard UNIX applications” on page 204

Scenario 2
    v SDD is not installed.
    v The SDD server for Expert is installed.
    v An existing application package or DBMS communicates directly to the sdisk interface.
    Go to:
    1. “Determining if the SDD 1.3.1.5 (or later) server for Expert is installed”
    2. “Installing SDD” on page 187
    3. “Using applications with SDD” on page 204

Scenario 3
    v SDD is installed.
    v The SDD server for Expert is installed.
    Go to:
    1. “Determining if the SDD 1.3.1.5 (or later) server for Expert is installed”
    2. “Upgrading the SDD” on page 189

Table 16. Patches necessary for proper operation of SDD on HP-UX

HP-UX     Patch bundles
11.23     March 06, standard patch bundles
11.11     September 05, support plus

See http://www11.itrc.hp.com/service/home/home.do?admit=109447626+1266865784425+28353475 for patch details and prerequisites for patches.

Determining if the SDD 1.3.1.5 (or later) server for Expert is installed

If you previously installed the SDD server (the stand-alone version) for IBM TotalStorage Expert V2R1 (ESS Expert) on your HP-UX host system, you must remove this stand-alone version of the SDD server before you proceed with SDD 1.3.1.5 installation. The installation package for SDD 1.3.1.5 includes the SDD server daemon (also referred to as sddsrv), which incorporates the functionality of the stand-alone version of the SDD server (for ESS Expert).

To determine if the stand-alone version of the SDD server is installed on your host system, enter:

swlist SDDsrv

If you previously installed the stand-alone version of the SDD server, the output from the swlist SDDsrv command looks similar to this:

SDDsrv  1.0.0.0  SDDsrv bb-bit Version: 1.0.0.0 Nov-14-2001 15:34
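The check above is easy to script as part of an installation precheck. The sketch below is illustrative only (not an IBM tool): it scans a saved copy of the swlist SDDsrv output for the stand-alone package name.

```shell
# Illustrative precheck: detect the stand-alone SDD server package in saved
# `swlist SDDsrv` output. The here-string mirrors the example output above.
swlist_output='SDDsrv 1.0.0.0 SDDsrv 32-bit Version: 1.0.0.0 Nov-14-2001 15:34'

if printf '%s\n' "$swlist_output" | grep -q '^SDDsrv'; then
    echo "stand-alone SDD server found: remove it before installing SDD"
else
    echo "no stand-alone SDD server installed"
fi
```

If the stand-alone package is absent, swlist reports an error instead of a product line, so the grep finds no match and the precheck passes.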

Notes:
1. The installation package for the stand-alone version of the SDD server (for ESS Expert) is SDDsrvHPbb_yymmdd.depot (where bb represents 32- or 64-bit, and yymmdd represents the date of the installation package). For ESS Expert V2R1, the stand-alone SDD server installation package is SDDsrvHP32_020115.depot for a 32-bit environment, and SDDsrvHP64_020115.depot for a 64-bit environment.
2. For instructions on how to remove the stand-alone version of the SDD server (for ESS Expert) from your HP-UX host system, see the IBM SUBSYSTEM DEVICE DRIVER SERVER 1.0.0.0 (sddsrv) readme for IBM TotalStorage Expert V2R1 at the following Web site: www.ibm.com/servers/storage/support/software/swexpert/

For more information about the SDD server daemon, go to “SDD server daemon” on page 199.

Installing SDD

Before you install SDD, make sure that you have root access to your HP-UX host system and that all the required hardware and software is ready.

Installing SDD from CD-ROM

Perform the following steps to install SDD on your HP-UX host system:
1. Make sure that the SDD compact disc (CD) is available.
2. Insert the CD-ROM into your CD-ROM drive. If the CD-ROM directory is not known:
   a. Run ioscan -funC disk.
   b. Record the CD-ROM or DVD-ROM directory.
3. Mount the CD-ROM drive using the mount command. Here are two examples of the mount command:
   mount /dev/dsk/c0t2d0 /cdrom
   or
   mount /dev/dsk/c0t2d0 /your_installation_directory
   where /cdrom or /your_installation_directory is the name of the directory to which you want to mount the CD-ROM drive.
4. Run the sam program.
   > sam
5. Select Software Management.
6. Select Install Software to Local Host.
7. At this point, the SD Install - Software Selection panel is displayed. Almost immediately afterwards, a Specify Source menu is displayed:
   a. For Source Depot Type, select the local CD-ROM.
   b. For Source Depot Path, choose the directory and the IBMsdd.depot file.
      For 32-bit mode applications, use:
      /cdrom/hp32bit/IBMsdd.depot or /your_installation_directory/hp32bit/IBMsdd.depot
      For 64-bit mode applications, use:

      /cdrom/hp64bit/IBMsdd.depot or /your_installation_directory/hp64bit/IBMsdd.depot
   c. Click OK. You will see output similar to the following example:

      IBMsdd_tag 1.7.0.3 IBMsdd Driver 64-bit Version: 1.7.0.3 Sep-24-2007 16:35

8. Click the IBMsdd_tag product.
9. From the Bar menu, click Actions → Mark for Install.
10. From the Bar menu, click Actions → Install (analysis). An Install Analysis panel is displayed, showing the status of Ready.
11. Click OK to proceed. A Confirmation window opens and states that the installation will begin.
12. Click Yes and press Enter. The analysis phase starts.
13. After the analysis phase has finished, another Confirmation window opens informing you that the system will be restarted after installation is complete. Click Yes and press Enter. The installation of IBMsdd will now proceed.
14. An Install window opens, informing you about the progress of the IBMsdd software installation. The window looks similar to the following:

    Press ’Product Summary’ and/or ’Logfile’ for more target information.
    Target              : XXXXX
    Status              : Executing install setup
    Percent Complete    : 17%
    Kbytes Installed    : 276 of 1393
    Time Left (minutes) : 1
    Product Summary    Logfile
    Done               Help

    The Done option is not available when the installation is in progress. It becomes available after the installation process is complete.
15. Click Done.

Note: SDD 1.5.0.4 is changed from a static driver to a dynamic loadable kernel module (DLKM) driver. The system does not restart after SDD is installed. After the installation is finished, the SDD driver is automatically loaded.

You can use the datapath query device command to verify the SDD installation. SDD is successfully installed if the command runs successfully.

Installing SDD from downloaded code

The SDD download Web site, www.ibm.com/servers/storage/support/software/sdd, has several SDD packages for different versions of HP-UX. The SDD packages are in tar archives. After you have downloaded the appropriate SDD package into your chosen directory, you can untar the file using the tar xvf IBMsdd*.tar command. Then perform the following steps to install SDD:
1. Run the sam program.
   > sam
2. Select Software Management.

3. Select Install Software to Local Host.
4. At this point, the SD Install - Software Selection panel is displayed. Then a Specify Source menu is displayed:
   a. Select the Local Directory for Source Depot Type.
   b. Select the directory in which you have issued the tar xvf IBMsdd*.tar command to untar the file and the IBMsdd.depot file for the Source Depot Path. Use the fully-qualified path name for the depot file as shown below.
      /your_installation_directory/IBMsdd.depot
   c. Click OK. Output similar to the following example is displayed:

      Name          Revision    Information            Size(Kb)  Architecture       Category
      IBMsdd_tag -> B.11.23.01  IBMsdd Driver 64-bit   nnnn      HPUX_B.11.23_ia64
                    <version>   <date>

5. Click the IBMsdd_tag product and perform the steps beginning with step 9 on page 188 shown in “Installing SDD from CD-ROM” on page 187.

Upgrading the SDD

Use these sections to upgrade SDD:

Upgrading from SDD 1.x to SDD 1.6.0 or later with concurrent access

The memory management and the installation process have been enhanced to allow installation of the SDD package while the LVM volume groups are active and user applications are running. The concurrent driver upgrade function permits uninterrupted operation when installing SDD.

The installation process:
1. Converts SDD vpath devices to PVLINK devices
2. Unloads and reloads the SDD driver
3. Converts the PVLINK devices back to SDD vpath devices after the new package is installed

Because the volume groups must be active for the PVLINK conversion process, the following are the limitations:
1. The volume groups must be managed by HP-UX LVM.
2. The MC Service Guard cluster must be halted prior to upgrade. The primary node and the adoptive node or nodes must operate in a single-host environment.
3. The shared volume groups in the adoptive nodes must be exported so that the volumes are not shared; the volume groups can be active in the primary node only.
4. Restore the cluster environment after upgrading SDD.

Performance during upgrade: You should consider the following performance topics while you are upgrading:
v The PVLINK conversion process and the driver reload require additional system resources such as LVM lock, accessing LVM meta data and the kernel memory. With the concurrent I/O, the upgrade process can take longer because the conversion process must wait for the I/O to complete before a link can be removed from PVLINK. The actual time for installation depends on the processor model, physical memory size, I/O intensity, and configuration size. The larger the SDD configuration or the more concurrent I/O activities, the longer it can take to upgrade.
v Reloading the SDD driver can also take longer because of the contention with the kernel memory; the system must wait for a window when the resources become available. The installation time can also take longer if the devices from

the ioscan output are not accessible. If there were a lot of inaccessible devices as the result of fabric reconfiguration, you should attempt to clean up the configuration before upgrading.

v The diagnose message in the package installation and configuration process has been greatly improved to include logs for cfgvpath, vp2hd, hd2vp and the syslog messages. All the SDD related logs have been moved to the /var/adm/IBMsdd directory.
v The installation process also ensures that the current SDD state is not in any degraded state; the recovery process can be lengthy if the upgrade failed due to the hardware errors. Issue the swjob command that is indicated at the end of swinstall output to get detailed information about the installation.
v After the upgrade, you should check the VPATH_EVENT for allocation failures in syslog.log, /var/adm/IBMsdd/hd2vp.errlog and vp2hd.errlog. These are the indications that the upper limit of the resources have been reached during the conversion process and that you should take a more conservative approach next time. That is, the concurrent upgrade should be performed during a period when the system load is lighter than the normal operation.

Upgrading SDD 1.x to SDD 1.6.0 or later with nonconcurrent access

Upgrading SDD consists of removing and reinstalling the IBMsdd package. If you are upgrading SDD, go to “Uninstalling SDD” on page 198 and then go to “Installing SDD” on page 187.

Upgrading SDD 1.3.0.2 or earlier to SDD 1.5.0.4 or later

SDD 1.3.0.2 (or earlier) could have created a vpath0 device; vpath0 should not be configured. To upgrade your SDD, you need to ensure that you have no vpath0 devices using the following steps:
1. Remove its /etc/vpathsave.cfg (vpath name reservation file) file.
2. Issue cfgvpath -c to reboot.
3. When the host responds, use the showvpath command to determine that vpath0 has been removed. If vpath0 no longer exists and the state of all the SDD vpath devices looks good in the datapath query device output, use the vp2hd script to convert volume groups from volume groups containing SDD vpath devices to volume groups containing cXtXdX devices.
4. When the volume groups have been converted:
   a. Remove the /etc/vpathsave.cfg file.
   b. Uninstall SDD 1.3.0.2 (or earlier). Because SDD 1.3.0.2 (or earlier) does not have the volume group conversion scripts hd2vp and vp2hd, the uninstallation will not convert the existing volume groups back to cXtXdX devices.
5. Install SDD 1.5.0.4 (or later) on your host.
6. After installation, check to see that you have:
   a. The volume group conversion scripts hd2vp and vp2hd in /opt/IBMdpo/bin
   b. The /etc/vpathsave.cfg file
7. Use the hd2vp script to change volume groups to volume groups containing SDD vpath devices.
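The vpath0 check described above can be scripted against captured showvpath output. The sketch below is illustrative only: the sample output layout is an assumption, not the exact showvpath format, and a real check should be run against the actual output on your host.

```shell
# Illustrative only: scan saved `showvpath` output for a lingering vpath0
# device before upgrading. The sample text below is a stand-in; the exact
# layout of real showvpath output may differ.
showvpath_output='vpath1: /dev/dsk/c0t0d0 /dev/dsk/c1t0d0
vpath2: /dev/dsk/c0t0d1 /dev/dsk/c1t0d1'

if printf '%s\n' "$showvpath_output" | grep -q '^vpath0'; then
    echo "vpath0 is still configured: clean up before upgrading"
else
    echo "no vpath0 device found"
fi
```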

Installing an additional package for DS4000 and DS5000 support

The HP SDD support for DS4000 and DS5000 is enabled through the On-Demand-Device function, which recognizes the new devices as a plug-in to the driver. You can install the On-Demand package before or after you install the traditional SDD package.

Note: DS5000 support is limited to HP-UX 11.23 systems, including both IA64 and PA-RISC architectures.

If the On-Demand package was installed earlier, the installation process automatically recognizes and configures the DS4000 and DS5000 devices. If the On-Demand package is installed later, run cfgvpath -r to recognize and reconfigure the DS4000 or DS5000 device.

To install the DS On-Demand-Device package, issue the following command:

For DS4000: swinstall IBMsdd_ds4k_platform.depot IBMsdd_ds4k_tag
For DS5000: swinstall IBMsdd_ds5k_platform.depot IBMsdd_ds5k_tag

Configuring the SDD

This section provides information necessary to configure the SDD. Use the HP command line interface (CLI) to manage SDD devices.

Maximum number of LUNs

A maximum of 1200 LUNs can be configured by SDD. HP-UX SDD supports up to 32 paths per LUN.

Changing an SDD hardware configuration

When you are adding or removing multiport SCSI devices, you must reconfigure SDD to recognize the new devices. Perform the following steps to reconfigure SDD:
1. Issue the cfgvpath command to reconfigure the SDD vpath device by entering:
   /opt/IBMsdd/bin/cfgvpath
2. Restart the system by entering:
   shutdown -r 0

Note: Restarting the system is no longer required for SDD 1.5.0.4 or later.

The querysn command can be used to list all disk storage system devices visible to the host. The querysn command reads the unique serial number of a disk storage system device (sdisk). To manually exclude devices from the SDD configuration, include their serial number information in the /etc/vpathmanualexcl.cfg text file.

Note: Vpathname vpathN is reserved when it is assigned to a LUN even after the LUN has been removed from the host. The same vpathname, vpathN, will be assigned to the same LUN when it is reconnected to the host. /etc/vpathsave.cfg is the file to reserve vpathnames. Do not remove the /etc/vpathsave.cfg file; improper removal of the file will invalidate existing volume groups.
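The reconfiguration step above can be scripted. The following is a minimal sketch that assumes cfgvpath is installed at the path this guide documents (/opt/IBMsdd/bin); on a machine without SDD it only prints the command it would run, so the sequence can be previewed safely:

```shell
# Run SDD dynamic reconfiguration if cfgvpath is present; otherwise fall
# back to a dry run so the sketch is harmless on a non-HP-UX machine.
reconfigure_sdd() {
    cfgvpath=/opt/IBMsdd/bin/cfgvpath
    if [ -x "$cfgvpath" ]; then
        "$cfgvpath" -r            # recognize newly added DS4000/DS5000 devices
    else
        echo "dry run: $cfgvpath -r"
    fi
}

reconfigure_sdd
```

The -r option performs dynamic reconfiguration, so no reboot is needed on SDD 1.5.0.4 or later.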

For bootable devices, the get_root_disks command generates a file called /etc/vpathexcl.cfg to exclude bootable disks from the SDD configuration.

Converting a volume group

SDD provides the following conversion scripts:

hd2vp  The hd2vp script converts a volume group from supported storage device sdisks into SDD vpath devices.
       The syntax for the hd2vp script is as follows:
       hd2vp vgname

vp2hd  The vp2hd script converts a volume group from SDD vpath devices into supported storage device sdisks. Use the vp2hd program when you want to configure your applications back to original supported storage device sdisks.
       The syntax for the vp2hd script is as follows:
       vp2hd vgname

hd2vp and vp2hd convert a volume group from an sdisk pvlink to and from SDD vpath devices. The hd2vp conversion program is invoked at the system boot time to ensure the conversion of sdisks to SDD vpath devices. vp2hd is not invoked at the system shutdown.

Dynamic reconfiguration

Dynamic reconfiguration provides a way to automatically detect path configuration changes without requiring a reboot.
1. cfgvpath -r: This operation finds the current hardware configuration and compares it to the SDD vpath device configuration in memory and then identifies a list of differences. It then issues commands to update the SDD vpath device configuration in memory with the current hardware configuration. The commands that cfgvpath -r issues to the vpath driver are:
   a. Add an SDD vpath device.
   b. Remove an SDD vpath device; this will fail if the device is busy.
   c. Add a path to the SDD vpath device.
   d. Remove a path from the SDD vpath device; this will fail deletion of the path if the device is busy, but will set the path to DEAD and OFFLINE.
2. The rmvpath command removes one or more SDD vpath devices:
   rmvpath -all         # Remove all SDD vpath devices
   rmvpath vpath_name   # Remove one SDD vpath device at a time
                        # this will fail if device is busy

Dynamically changing the SDD path-selection policy algorithm

SDD 1.4.0.0 (or later) supports path-selection policies that increase the performance of multipath-configured supported storage devices and make path failures transparent to applications. The following path-selection policies are supported:

failover only (fo)
       All I/O operations for the device are sent to the same (preferred) path until the path fails because of I/O errors. Then an alternate path is chosen for subsequent I/O operations.
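The two conversion scripts are mirror images of each other, which makes them easy to wrap in a single helper. This is a sketch only: vgIBM is a placeholder volume group name, and on a machine without SDD the helper prints the command it would run instead of executing it.

```shell
# Flip a volume group between sdisk and vpath form using the documented
# conversion scripts. Dry run when the script is not installed locally.
convert_vg() {
    direction=$1   # "to_vpath" uses hd2vp, "to_sdisk" uses vp2hd
    vgname=$2
    case $direction in
        to_vpath) tool=hd2vp ;;
        to_sdisk) tool=vp2hd ;;
        *) echo "usage: convert_vg to_vpath|to_sdisk vgname" >&2; return 1 ;;
    esac
    if command -v "$tool" > /dev/null 2>&1; then
        "$tool" "$vgname"
    else
        echo "dry run: $tool $vgname"
    fi
}

convert_vg to_sdisk vgIBM
```

Running the to_sdisk direction before an uninstall or upgrade matches the procedure described later in this chapter.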

load balancing (lb)
       The path to use for an I/O operation is chosen by estimating the load on the adapter to which each path is attached. The load is a function of the number of I/O operations currently in process. If multiple paths have the same load, a path is chosen at random from those paths. Load balancing mode also incorporates failover protection.
       Note: The load balancing policy is also known as the optimized policy.

round robin (rr)
       The path to use for each I/O operation is chosen at random from those paths that were not used for the last I/O operation. If a device has only two paths, SDD alternates between the two.

The path-selection policy is set at the SDD device level. The default path-selection policy for an SDD device is load balancing. You can change the policy for an SDD device. SDD version 1.4.0.0 (or later) supports dynamic changing of the SDD devices path-selection policy.

Before changing the path-selection policy, determine the active policy for the device. Enter datapath query device N, where N is the device number of the SDD vpath device, to show the current active policy for that device.

datapath set device policy command

Use the datapath set device policy command to change the SDD path-selection policy dynamically. See “datapath set device policy” on page 446 for more information about the datapath set device policy command.

Preferred node path-selection algorithm for the virtualization products

The virtualization products are two-controller disk subsystems. SDD distinguishes the paths to a virtualization product LUN as follows:
1. Paths on the preferred controller
2. Paths on the alternate controller

When SDD selects paths for I/O, preference is always given to a path on the preferred controller. Therefore, in the selection algorithm, an initial attempt is made to select a path on the preferred controller. Only if no path can be used on the preferred controller will a path be selected on the alternate controller. This means that SDD will automatically fail back to the preferred controller any time a path on the preferred controller becomes available during either manual or automatic recovery. Paths on the alternate controller are selected at random. If an error occurs and a path retry is required, retry paths are first selected on the preferred controller. If all retries fail on the preferred controller's paths, paths on the alternate controller will be selected for retry. The following is the path selection algorithm for SDD:
1. With all paths available, I/O is only routed to paths on the preferred controller.
2. If no path on the preferred controller is available, I/O fails over to the alternate controller.
3. When failover to the alternate controller has occurred, if a path on the preferred controller is made available, I/O automatically fails back to the preferred controller.
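Checking the active policy before changing it can be automated by parsing the Dev# header line of `datapath query device` output. The sketch below uses a sample header line taken from the examples later in this chapter; the awk helper is an assumption, not part of SDD itself.

```shell
# Extract "device-name policy" pairs from datapath query device headers,
# where the Dev# line carries "Device Name: vpathN ... Policy: <policy>".
show_policy() {
    awk '/^Dev#:/ {
        for (i = 1; i <= NF; i++) {
            if ($i == "Name:")   name   = $(i + 1)
            if ($i == "Policy:") policy = $(i + 1)
        }
        print name, policy
    }'
}

printf 'Dev#: 3 Device Name: vpath5 Type: 2105800 Policy: Optimized\n' | show_policy
# prints: vpath5 Optimized
```

Optimized in the output corresponds to the load balancing (lb) policy described above.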

” on page 421 for details about the datapath commands.11 (or earlier) and for SDD 1.36 4 0/7/0/0.0.4. If you need to place an adapter offline. the output of datapath query adapter will be: 194 Multipath Subsystem Device Driver User's Guide . 2. datapath set adapter 0 offline. and you need to determine which hardware paths relate to which adapters.36 4 0/7/0/0.0. After the command is issued.0 (or later).0.4.3.0.38 1 0/4/0/0.34 5 0/4/0/0.0.18.4. For SDD 1. Example output from datapath query adapter command issued in SDD 1.4. you need to issue one single command.11 (or earlier). See Chapter 13.4.0. 4.34 6 0/7/0/0.0.0.38 2 0/7/0/0. For SDD 1.4.4. the output of the datapath query adapter command has been simplified.36 3 0/4/0/0. the output of datapath query adapter will be: Active Adapters :8 Adapter# Adapter Name 0 0/7/0/0.34 6 0/7/0/0.18.3.36 3 0/4/0/0. 6 belong to the same physical adapter.0.32 State NORMAL NORMAL NORMAL NORMAL NORMAL NORMAL NORMAL NORMAL Mode OFFLINE ACTIVE OFFLINE ACTIVE OFFLINE ACTIVE OFFLINE ACTIVE Select Error Path Active 0 0 1 0 0 0 1 0 0 0 2 0 0 0 2 0 0 0 2 0 0 0 2 0 0 0 1 0 0 0 1 0 Example output from datapath query adapter command issued in SDD 1.0.0.32 7 0/4/0/0.0. The following examples show the output resulting from the datapath query adapter command for the same configuration for SDD 1.38 1 0/4/0/0.3.4.0.0 (or later) For SDD 1.4.34 5 0/4/0/0.18.38 2 0/7/0/0.4. the output of some of the datapath commands has changed.0.0.4.4.18.32 7 0/4/0/0.4.18.0 (or later).0 (or later).18.4.4. After the four commands are issued.SDD datapath query adapter command changes for SDD 1.4.18.18.32 State NORMAL NORMAL NORMAL NORMAL NORMAL NORMAL NORMAL NORMAL Mode ACTIVE ACTIVE ACTIVE ACTIVE ACTIVE ACTIVE ACTIVE ACTIVE Select Error Path Active 0 0 1 1 0 0 1 1 0 0 2 2 0 0 2 2 0 0 2 2 0 0 2 2 0 0 1 1 0 0 1 1 Adapter #s 0.18. 
you need to manually issue multiple commands to remove all the associated hardware paths.3.18.4.3.18.4.18.11 (or earlier): Active Adapters :8 Adapter# Adapter Name 0 0/7/0/0. In order to place this adapter offline.3.18.18.0.18. you need to issue datapath set adapter offline four times.0. the output of the datapath query adapter command shows all the fibre-channel arrays as different adapters.18.4. for example.0.0.4.0 (or later): Active Adapters :2 Adapter# Adapter Name State Mode 0 0/7/0/0 NORMAL ACTIVE 1 0/4/0/0 NORMAL ACTIVE Select 0 0 Error Path Active 0 6 6 0 6 6 Adapters 0 and 1 represent two physical adapters. To place one of the adapters offline. “Using the datapath commands.0.0.4.

) Dev#: 2 Device Name: vpath4 Type: 2145 Policy: Optimized Serial: 60056768018506870000000000000000 ================================================================================== Path# Adapter H/W Path Hard Disk State Mode Select Error 0 0/7/0/0 c23t0d0 OPEN NORMAL 2736767 62 1 0/7/0/0 c9t0d0 OPEN NORMAL 6 6 2 0/3/0/0 c22t0d0 OPEN NORMAL 2876312 103 3 0/3/0/0 c8t0d0 OPEN NORMAL 102 101 Dev#: 3 Device Name: vpath5 Type: 2105800 Policy: Optimized Serial: 14123922 ================================================================================== Path# Adapter H/W Path Hard Disk State Mode Select Error 0 0/7/0/0 c19t8d1 OPEN NORMAL 3869815 0 1 0/7/0/0 c13t8d1 OPEN NORMAL 3872306 0 2 0/3/0/0 c17t8d1 OPEN NORMAL 3874461 0 3 0/3/0/0 c11t8d1 OPEN NORMAL 3872868 0 Note: vpathname vpathN is reserved once it is assigned to a LUN even after the LUN has been removed from the host. vpathN.0 (or later): (This example shows a SAN Volume Controller and a disk storage system device. Using the SDD on an HP-UX host system 195 .4.0 (or later) The following change is made in SDD for the datapath query device command to accommodate the serial numbers of supported storage devices.3. 
will be assigned to the same LUN when it is reconnected to the host.Active Adapters :2 Adapter# Adapter Name State Mode 0 0/7/0/0 NORMAL OFFLINE 1 0/4/0/0 NORMAL ACTIVE Select 0 0 Error Path Active 0 6 0 0 6 0 SDD datapath query device command changes for SDD 1.11 (or earlier): Dev#: 3 Device Name: vpath5 Type: 2105800 Serial: 14123922 Policy: Optimized ================================================================================== Path# Adapter H/W Path Hard Disk State Mode Select Error 0 0/7/0/0 c19t8d1 OPEN NORMAL 3869815 0 1 0/7/0/0 c13t8d1 OPEN NORMAL 3872306 0 2 0/3/0/0 c17t8d1 OPEN NORMAL 3874461 0 3 0/3/0/0 c11t8d1 OPEN NORMAL 3872868 0 Dev#: 3 Device Name: vpath5 Type: 2105800 Policy: Optimized Serial: 14123922 ================================================================================== Path# Adapter H/W Path Hard Disk State Mode Select Error 0 0/7/0/0 c19t8d1 OPEN NORMAL 3869815 0 1 0/7/0/0 c13t8d1 OPEN NORMAL 3872306 0 2 0/3/0/0 c17t8d1 OPEN NORMAL 3874461 0 3 0/3/0/0 c11t8d1 OPEN NORMAL 3872868 0 Example output from datapath query device command issued in SDD 1. Example output from datapath query device command issued in SDD 1.3.0.0. Chapter 4. The locations of Serial and Policy are swapped because the SAN Volume Controller serial is too long to fit in the first line. The same vpathname.4.
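Before SDD 1.6.0.0 simplified the adapter view, relating the per-array rows of `datapath query adapter` back to physical adapters had to be done by hand. A small awk helper can do that grouping by truncating each hardware path at its first dot. The sample rows below are abbreviated from the example output in this section; the helper itself is an illustration, not an SDD tool.

```shell
# Total the Path column of datapath query adapter output per physical
# adapter, grouping per-array rows (e.g. 0/7/0/0.18.4.0.38) under their
# hardware path prefix (0/7/0/0). Columns: Adapter# Name State Mode
# Select Error Path Active.
sum_paths() {
    awk 'NF {
        split($2, hw, ".")      # text before the first dot is the physical adapter
        paths[hw[1]] += $7      # column 7 holds the Path count
    }
    END { for (a in paths) print a, paths[a] }' | sort
}

adapters='
0 0/7/0/0.18.4.0.38 NORMAL ACTIVE 0 0 1 1
1 0/4/0/0.18.4.0.38 NORMAL ACTIVE 0 0 1 1
2 0/7/0/0.18.4.0.36 NORMAL ACTIVE 0 0 2 2
3 0/4/0/0.18.4.0.36 NORMAL ACTIVE 0 0 2 2'

echo "$adapters" | sum_paths
```

Each output line is one physical adapter with its combined path count, which is why placing one adapter offline in the old format required one `datapath set adapter N offline` command per array entry.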

Postinstallation

After SDD is installed, the device driver resides above the HP SCSI disk driver (sdisk) in the protocol stack. In other words, SDD now communicates to the HP-UX device layer. The SDD software installation procedure installs a number of SDD components and updates some system files. Those components and files are listed in Table 17 through Table 19 on page 197.

Table 17. SDD components installed for HP-UX host systems

File                 Location                          Description
mod.o                /opt/IBMsdd/bin                   Object file for the SDD driver module
Executables          /opt/IBMsdd/bin                   Configuration and status tools
README.sd            /opt/IBMsdd                       README file
sddsrv               /sbin/sddsrv                      SDD server daemon
sample_sddsrv.conf   /etc/                             Sample SDD server configuration file
sddserver            /sbin/init.d                      Script to start or stop the SDD daemon at system up/down time
confserver           /sbin/init.d                      Script to load SDD driver and run cfgvpath at system boot time
mvserver             /sbin/init.d                      Script to move /sbin/rc1.d/S100localmount to /sbin/rc1.d/S250localmount in order to fix the auto mount problem for SDD vpath device filesystems
datapath.1           /usr/local/man/man1/datapath.1    Manpage for datapath
rmvpath.1            /usr/local/man/man1/rmvpath.1     Manpage for rmvpath
showvpath.1          /usr/local/man/man1/showvpath.1   Manpage for showvpath
gettrace.1           /usr/local/man/man1/gettrace.1    Manpage for gettrace
sddsrv.1             /usr/local/man/man1/sddsrv.1      Manpage for sddsrv
vp2hd.1              /usr/local/man/man1/vp2hd.1       Manpage for vp2hd
hd2vp.1              /usr/local/man/man1/hd2vp.1       Manpage for hd2vp
cfgvpath.1           /usr/local/man/man1/cfgvpath.1    Manpage for cfgvpath
vpcluster.1          /usr/local/man/man1/vpcluster.1   Manpage for vpcluster
sddgetdata.1         /usr/local/man/man1/sddgetdata.1  Manpage for sddgetdata

Table 18. System files updated for HP-UX host systems

File     Location             Description
vpath    /usr/conf/master.d   Master configuration file
vpath    /stand/system.d      System configuration file

Table 19. SDD commands and their descriptions for HP-UX host systems

cfgvpath
    Configures new SDD vpath devices when there are no existing vpath devices. Do not use the legacy parameter "-c".

cfgvpath [-r] (Dynamic reconfiguration)
    The SDD vpath device configuration is updated without system reboot. If initially there is no SDD vpath device configured, cfgvpath -r will fail with message "failed to get information from kernel, don't run dynamic configuration, do cfgvpath instead." In this case, issue cfgvpath without any option.

showvpath
    Lists the configuration mapping between the SDD devices and underlying disks.

datapath
    The SDD driver console command tool.

hd2vp
    Converts a volume group from sdisks into the SDD vpath devices.

vp2hd
    Converts a volume group from the SDD vpath devices into sdisks.

vpcluster
    Imports or exports MC Service Guard volume groups.

rmvpath [-all, -vpathname]
    Removes the SDD vpath devices from the configuration.

gettrace
    Debug tool that gets trace information when a problem occurs.

sddgetdata
    The SDD data collection tool for problem analysis.

man
    Manpage for SDD commands, for example: man datapath. Supported SDD commands are datapath, gettrace, hd2vp, querysn, rmvpath, sddgetdata, showvpath, sddsrv, vp2hd, and vpcluster.

If you are not using a DBMS or an application package that communicates directly to the sdisk interface, the installation procedure is nearly complete. However, you still need to customize HP-UX so that standard UNIX® applications can use the SDD. Go to “Standard UNIX applications” on page 204 for instructions. If you have a DBMS or an application package installed that communicates directly to the sdisk interface, such as Oracle, go to “Using applications with SDD” on page 204 and read the information specific to the application that you are using.

During the installation process, the following files were copied from the IBMsdd_depot to the system:

# Kernel-related files
v /opt/IBMsdd/bin/mod.o
v /stand/system.d/vpath
v /usr/conf/master.d/vpath

# SDD driver-related files
v /opt/IBMsdd

v /opt/IBMsdd/bin
v /opt/IBMsdd/README.sd
v /opt/IBMsdd/bin/cfgvpath
v /opt/IBMsdd/bin/datapath
v /opt/IBMsdd/bin/showvpath
v /opt/IBMsdd/bin/master
v /opt/IBMsdd/bin/system
v /opt/IBMsdd/bin/mod.o
v /opt/IBMsdd/bin/rmvpath
v /opt/IBMsdd/bin/get_root_disks
v /opt/IBMsdd/bin/gettrace
v /opt/IBMsdd/bin/sddgetdata
v /opt/IBMsdd/bin/hd2vp
v /opt/IBMsdd/bin/vp2hd
v /opt/IBMsdd/bin/vpcluster
v /sbin/cfgvpath
v /sbin/datapath
v /sbin/get_root_disks
v /sbin/rmvpath
v /sbin/showvpath
v /sbin/hd2vp
v /sbin/vp2hd
v /sbin/vpcluster
v /sbin/sddgetdata
v /sbin/sddsrv
v /etc/sample_sddsrv.conf

During installation, the /opt/IBMsdd/bin/cfgvpath program is initiated to create SDD vpath devices in the /dev/dsk and /dev/rdsk directories for all IBM disks that are available on the system. After installation is done, all SDD vpath devices are configured and the driver is loaded. The system will not reboot.

Note: SDD devices are found in /dev/rdsk and /dev/dsk. The device is named according to the SDD number. A device with a number of 0 would be /dev/rdsk/vpath1.

Uninstalling SDD

The following procedure explains how to remove SDD. You must uninstall the current level of SDD before upgrading to a newer level.

Complete the following procedure to uninstall SDD:
1. Stop applications.
2. If you are using SDD with a database, such as Oracle, edit the appropriate database configuration files (database partition) to remove all the SDD devices.
3. Before running the sam program, run script vp2hd to convert volume groups from SDD vpath devices to sdisks.
4. Run the sam program.

5. > sam
6. Click Software Management.
7. Click Remove Software.
8. Click Remove Local Host Software.
9. Click the IBMsdd_tag selection.
10. From the Bar menu, click Actions → Mark for Remove.
11. From the Bar menu, click Actions → Remove (analysis). A Remove Analysis window opens and shows the status of Ready.
12. Click OK to proceed. A Confirmation window opens and indicates that the uninstallation will begin.
13. Click Yes. The analysis phase starts. After the analysis phase has finished, another Confirmation window opens indicating that the system will be restarted after the uninstallation is complete.
14. Click Yes and press Enter. The uninstallation of IBMsdd begins. An Uninstall window opens showing the progress of the IBMsdd software uninstallation. This is what the panel looks like:
    Target : XXXXX
    Status : Executing unconfigure
    Percent Complete : 17%
    Kbytes Removed : 340 of 2000
    Time Left (minutes) : 5
    Removing Software : IBMsdd_tag,...
    The Done option is not available when the uninstallation process is in progress. It becomes available after the uninstallation process completes.
15. Click Done.

When SDD has been successfully uninstalled, the first part of the procedure for upgrading the SDD is complete. To complete an upgrade, you need to reinstall SDD. See the installation procedure in “Installing SDD” on page 187.

Note: The MC Service Guard cluster must be halted prior to uninstall. The primary node and the adoptive node or nodes must operate in a single-host environment. The shared volume groups in the adoptive nodes must be exported so that the volumes are not shared; the volume groups can be active in the primary node only.

SDD server daemon

The SDD server (also referred to as sddsrv) is an integrated component of SDD 1.3.1.5 (or later). This component consists of a UNIX application daemon that is installed in addition to the SDD device driver. See Chapter 12, “Using the SDD server and the SDDPCM server,” on page 415 for more information about sddsrv.

Note: The SDD server is started from /etc/inittab. Because the environment variable $TZ is not exported at init time, you should modify /etc/default/tz to reflect the corresponding time zone so that sddsrv.log reflects the correct local time. See the man pages for the ctime command for more information.

Verifying if the SDD server has started

After you have installed SDD, verify that the SDD server (sddsrv) has automatically started by entering ps –ef | grep sddsrv.

If the SDD server (sddsrv) has automatically started, the output will display the process number on which sddsrv has started. If sddsrv has not started, you should uninstall SDD and then reinstall SDD. See “Installing SDD” on page 187 for more information.

Starting the SDD server manually

If the SDD server does not start automatically after you perform the SDD installation, or if you want to start it manually after stopping sddsrv, use the following process to start sddsrv:
1. Edit /etc/inittab and verify the sddsrv entry. For example:
   srv:23456:respawn:/sbin/sddsrv >/dev/null 2>&1 2
2. Save the file.
3. Issue init q.
4. Go to “Verifying if the SDD server has started” on page 199 for the steps to see if you successfully started the SDD server.

Changing to a different port number for the SDD server

To change to a different port number for the SDD server, see the instructions in “Changing the sddsrv or pcmsrv TCP/IP port number” on page 419.

Stopping the SDD server

To stop the SDD server:
1. Edit /etc/inittab and comment out the SDD server entry:
   #srv:23456:respawn:/sbin/sddsrv >/dev/null 2>&1 2
2. Save the file /etc/inittab.
3. Issue init q.
4. Check if sddsrv is running by issuing ps -ef | grep sddsrv.
5. If sddsrv is still running, issue kill -9 pid of sddsrv.

How to import and export volume groups

Use the HP CLI to manage SDD devices.

You can import volume groups that are created over SDD vpath devices using the vgimport command. The vgimport command is useful in conjunction with the vgexport command.

Before you can import the specified volume groups, you must perform the following tasks:
1. Export or move volume groups from one node to another node within a high availability cluster by using the vgexport command. See “Exporting volume groups” on page 201.
2. FTP the map file to the other node within a high-availability cluster. See “Moving the map file” on page 201.
3. Create the volume group device directory. See “Creating the volume group device directory” on page 201.
4. Create the group special file. See “Creating the group special file” on page 202.
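The running check used throughout this section can be wrapped in a one-line helper so scripts can branch on the result. This is a sketch; sddsrv is the daemon name documented here, and the bracketed character in the grep pattern is a common trick, not an SDD requirement.

```shell
# Report whether the sddsrv daemon is alive. The [s] in the pattern keeps
# grep from matching its own command line in the ps output.
sddsrv_running() {
    if ps -ef | grep '[s]ddsrv' > /dev/null; then
        echo "sddsrv is running"
    else
        echo "sddsrv is not running"
    fi
}

sddsrv_running
```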

Exporting volume groups The vgexport command recognizes the following options and arguments: –p –v –s The –p option previews the actions to be taken but does not update the /etc/lvmtab file or remove the devices file. and vgvpath1 represents the path name of volume group that you want to export. you must also specify the –p. Using the SDD on an HP-UX host system 201 . the volume group specified in the mapfile can be shared with the other systems in the high availability cluster. –s is the sharable option (Series 800 only). Create the group special file. When you specify the –s option. For example. See “Creating the group special file” on page 202. The –v option prints verbose messages including the names of the physical volumes associated with this volume group. a file named mapfile is created in your current directory. –v.4. enter: rcp /tmp/vgvpath1. enter: vgexport –p -v –s –m /tmp/vgpath1. The mapfile serves as input to vgimport. The vg_name is the path name of the volume group. The mapfile contains a description of the volume group and its associated logical volumes. Moving the map file You must also FTP the map file to the other node. By default.map map file to node 2. to create the volume group device directory /dev/vgvpath1 on node 2. For example. see “Importing volume groups” on page 202.map represents your mapfile. When the mapfile is used with the –s option.map node2:/tmp/vgvpath1.map Creating the volume group device directory You must also create the volume group device directory. Use the –m option to specify a different name for the mapfile. enter: mkdir /dev/vgvpath1 Chapter 4. to FTP the vgvpath1. For more information about the vgimport command.map vgvpath1 where /tmp/vgpath1. –m mapfile vg_name vgexport command example: To export the specified volume group on node 1. and –m options. A mapfile is created that can be used to create volume group entries (with the vgimport command) on other systems in the high availability cluster.
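The whole export-to-import sequence spans two nodes, so it helps to see it end to end. The sketch below is a dry run: run() echoes each command instead of executing it, because vgexport, rcp, and the cluster nodes exist only in the HP-UX environment described here. vgvpath1 and node2 are this guide's example names, the map file name is unified to /tmp/vgvpath1.map, and the 0x010000 minor number is an illustrative value that must match the one used on node 1.

```shell
# Dry-run walkthrough of exporting a volume group on node 1 and importing
# it on node 2; each step is echoed rather than executed.
run() { echo "+ $*"; }

run vgexport -p -v -s -m /tmp/vgvpath1.map vgvpath1   # node 1: create the map file
run rcp /tmp/vgvpath1.map node2:/tmp/vgvpath1.map     # move the map file to node 2
run mkdir /dev/vgvpath1                               # node 2: volume group directory
run mknod /dev/vgvpath1/group c 64 0x010000           # node 2: group special file
run vgimport -p -v -s -m /tmp/vgvpath1.map vgvpath1   # node 2: import the group
run hd2vp vgvpath1                                    # convert sdisks to vpath devices
```

The final hd2vp step matches the note later in this section that vgimport only imports the scsi pvlink devices.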

The mapfile serves as input to vgimport. Importing volume groups The vgimport command recognizes the following options and arguments: –p –v -s The –p option previews the actions to be taken but does not update the /etc/lvmtab file or remove the devices file. –s is the sharable option (disk storage system Series 800 only). –m mapfile vg_name vgimport command example: To import the specified volume group on node 2. vpcluster extracts the node and volume group information from the keywords NODE_NAME and VOLUME_GROUP as specified in the cluster configuration file. It ensures that the devices of these volumes groups 202 Multipath Subsystem Device Driver User's Guide . –v. Note: The vgimport command only imports the scsi pvlink devices. The normal Service Guard configuration and SDD device configuration process remain the same. When you specify the –s option. The vg_name is the path name of the volume group. and vgvpath1 represents the path name of the volume group that you want to import. When the mapfile is used with the –s option. you must specify the –p. a file named mapfile is created in your current directory.map vgvpath1 where /tmp/vgpath1. The specified mapfile is the same mapfile specified by using the vgexport command also using the –p. –m. The mapfile contains a description of the volume group and its associated logical volumes. Issue the hd2vp command after issuing the vgimport command. and –m options. Use the –m option to specify a different name for the mapfile. enter: mknod /dev/vgvpath1/group c 64 n where n is the same as that was given when /dev/vgvpath1/group was created on node 1. and –s options. Exporting or importing MC Service Guard volume groups The vpcluster script makes it easier to export or import MC Service Guard volume groups managed by SDD. For example. enter: vgimport -p -v -s -m /tmp/vgpath1. to create the group c 64 file. The mapfile is used to create the volume groups on the importing systems. By default.map represents your mapfile. 
the volume group specified in the mapfile can be shared among the exporting system and the importing system. The –v option prints verbose messages including the names of the logical volumes.Creating the group special file You must also create the group special file on node 2. The sharable volume groups used by the HA clusters must be marked For the primary node operation.

it validates that the adoptive node is included in the cluster configuration. In case the same minor number is in use. -adoptive Specifies adoptive node operation. performs vgexport and creates vpcluster control files for the adoptive nodes to import. Because the HP vgimport process only imports those device names with the standard name C#T#D#. specify the vpcluster control file created the primary node. A mismatch of volume group minor number or vpath device name is allowed. Other mismatches will be reported. However. Chapter 4. vpcluster uses the control file created by the primary node operation. default is /tmp/vpcluster/vpcluster. You must specify -primary or -adoptive. the vpcluster control file can be copied to each adoptive node with the rcp command. It can be any ASCII file with the line entries that contain NODE_NAME and VOLUME_GROUP keywords without # as a comment.ascii.primary. The device names. ensures the importing volume groups are not active volume groups in the adoptive node. Notes: 1. If you are using the cmquerycl command to create the cluster configuration file. For the adoptive node operation. creates volume group nodes /dev/vgXXXX using the mknod command. hd2vp is invoked to convert sdisk devices to SDD devices after a successful vgimport. you should first issue vp2hd to convert vpath devices to pvlink devices. Using the SDD on an HP-UX host system 203 . Prior to volume group import. and ensures that the same device serial and LUN-id are configured by SDD. You must specify -primary or -adoptive.tar -dorcp Specifies the vpcluster control tar file to be RCPed to the adoptive nodes.are SDD-managed vpath devices. 2. Optionally. Syntax: vpcluster -primary -adoptive -f file -dorcp -report -debug -h where. For the adoptive node. the next sequential number is assigned. the vpcluster process attempts to keep the volume group minor number consistent between the primary and adoptive nodes. 
Use the cmquerycl HP command to create the cluster configuration file. The default is no. This command recognizes only the pvlink scsi devices on both nodes. the report option for adoptive node validates all volume groups exported by the primary node. specify the cluster configuration file. either vpath# or C#T#D#. might be different between the primary and adoptive nodes. -f file For the primary node. The input file to vpcluster does not have to be the same cluster configuration file for the SG cluster creation. In addition. default is /etc/cmcluster/cmclconf. -primary Specifies primary node operation.
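The vpcluster invocations for the two roles can be summarized as a dry run; the commands are echoed, not executed, since vpcluster ships only with HP-UX SDD. The file names shown are the defaults quoted above, and combining -adoptive with -report for validation is this sketch's reading of the option descriptions, not a tested command line.

```shell
# Dry-run sketch of the vpcluster flow between primary and adoptive nodes.
run() { echo "+ $*"; }

# Primary node: export the VGs and rcp the control tar to the adoptive nodes
run vpcluster -primary -f /etc/cmcluster/cmclconf.ascii -dorcp
# Each adoptive node: import from the control file created by the primary
run vpcluster -adoptive -f /tmp/vpcluster/vpcluster.primary.tar
# Adoptive node: validate that all exported volume groups were imported
run vpcluster -adoptive -report
```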

many software applications and DBMSs need to control certain device attributes such as ownership and permissions. Due to the standard and designed behavior of the SCSI driver on HP-UX 11i V2 and prior. There are more than one way to configure the SG cluster locking: quorum server. In addition. The lock device should be excluded from the SDD configuration. SDD can still load balance with the raw device access but failover/failback is not possible. If you have not already done so.11. you need to insert the new SDD device layer between the software application and the HP-UX disk device layer. install SDD using the procedure in “Installing SDD” on page 187. In case the lock disk is chosen. from an I/O request. SDD resides above the HP-UX SCSI disk 204 Multipath Subsystem Device Driver User's Guide . you must ensure that the new SDD devices that these software applications or DBMSs access in the future have the same attributes as the HP-UX sdisk devices that they replace.31) has no way to distinguish between a non-volume-manager file system I/O and a raw access. I/O errors returned on certain file system I/Os can result in panic. I/O errors are retried indefinitely because pre-HP-UX 11i v3 (B. The positive feedback is an I/O response. This option is valid in the adoptive node. -debug Specifies that a debugging statement is to be printed during vpcluster run time. it is recommended that SDD vpath devices and sdisk pvlink should not be mixed in the same volume group. 2. You also need to customize the software application to have it communicate with the SDD devices instead of the HP-UX devices. You need to customize the application or DBMS to accomplish this. When this is done. Using applications with SDD If your system already has a software application or a DBMS installed that communicates directly with the HP-UX disk device drivers.cfg text file on page 191. Therefore. an I/O error is likely to hang due to indefinite retries at the SCSI driver layer. 
This section contains the procedures for customizing the following software applications and DBMSs for use with SDD: v Standard UNIX applications v Network File System (NFS) file server Standard UNIX applications Note: The SDD path state change depends on the “positive” feedback from an I/O request. the following conditions occur when access raw sdisk through SDD: 1. the SDD vpath device should not be used since it is not recognized by the FIRST_CLUSTER_LOCK_PV parameter. In most cases. or lock disk. See the information about the /etc/vpathmanualexcl. Furthermore. either successful or failed condition.-report Validates that the exported volume groups from the primary node are imported to the adoptive node and creates a report. -h Specifies that detailed help info about the vpcluster function is to be displayed.

Standard UNIX applications such as newfs, fsck, mkfs, and mount, which normally take a disk device or raw disk device as a parameter, also accept the SDD device as a parameter. Similarly, entries in files such as vfstab and dfstab (in the format of cntndnsn) can be replaced by entries for the corresponding SDD vpathNs devices. Make sure that the devices that you want to replace are replaced with the corresponding SDD device. Issue the showvpath command to list all SDD vpath devices and their underlying disks.

To use standard UNIX applications with SDD, you must make some changes to your logical volumes: you must either convert your existing logical volumes or create new ones. SDD communicates with the HP-UX device layer above the SCSI disk driver (sdisk) in the protocol stack. To use the SDD driver for an existing logical volume, you must run the hd2vp conversion script (see "SDD utility programs" on page 87).

Attention: Do not use SDD for critical file systems needed at startup, such as /(root), /stand, /usr, /tmp, or /var. Doing so may render your system unusable if SDD is ever uninstalled (for example, as part of an upgrade).

Creating new logical volumes

Use the following process to create a new logical volume to use SDD:

Note: You must have superuser privileges to perform these subtasks.

1. Determine the major number of the logical volume device. Enter the following command to determine the major number:
# lsdev | grep lv
A message similar to the following is displayed:
64 64 lv lvm
The first number in the message is the major number of the character device, which is the number that you want to use.

2. Create a device node for the logical volume device.
Note: If you do not have any other logical volume devices, you can use a minor number of 0x010000. In this example, assume that you have no other logical volume devices.
a. Create a subdirectory in the /dev directory for the volume group. Enter the following command:
# mkdir /dev/vgIBM
In this example, vgIBM is the name of the directory.
b. Change to the /dev directory. Enter the following command:
# cd /dev/vgIBM
c. Create a device node for the logical volume device. Enter the following command:
# mknod group c 64 0x010000
Create a physical volume by performing the procedure in step 3 on page 206.

Chapter 4. Using the SDD on an HP-UX host system 205
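The major number needed in step 2c can be pulled out of the lsdev output automatically. The sketch below parses an inlined sample of the lsdev line shown above; on a live HP-UX system you would pipe lsdev | grep lv itself.

```shell
# Sketch: extract the character-device major number (first field) from
# lsdev-style output and form the mknod command of step 2c.
lsdev_line='64 64 lv lvm'
major=$(echo "$lsdev_line" | awk '{print $1}')
echo "mknod group c $major 0x010000"
# prints: mknod group c 64 0x010000
```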

In this example. 6. enter the following command to create a file system on the volume group: # newfs -F HFS /dev/vgIBM/rlvol1 b. enter the following command to create a file system on the volume group: # newfs -F VXFS /dev/vgIBM/rlvol1 c. If you are using an HFS file system. Create a logical volume. Enter the following command to create a volume group: # vgcreate /dev/vgIBM/dev/dsk/vpath1 5. the SDD vpath device associated with the underlying disk is vpath1. Create a device node for the logical volume device. Now you are ready to create a file system on the volume group. Create a physical volume. Enter the following command to re-create the physical volume: # pvcreate /dev/rdsk/vpath1 A message similar to the following is displayed: Physical volume "/dev/rdsk/vpath1" has been successfully created. Mount the logical volume. Create a file system on the volume group. Mount the logical volume. you can make it larger if you want to. Verify the underlying disk by entering the following showvpath command: # /opt/IBMsdd/bin/showvpath A message similar to the following is displayed: vpath1: /dev/dsk/c3t4d0 3. Enter the following command to create logical volume lvol1: # lvcreate -L 100 -n lvol1 vgIBM The -L 100 portion of the command makes a 100-MB volume group. Create a volume group. Use the following process to create a file system on the volume group: a.# cd /dev/vgIBM c. 7. Enter the following command to mount the logical volume lvol1: # mount /dev/vgIBM/lvol1 /mnt 206 Multipath Subsystem Device Driver User's Guide . This process assumes that you have a mount point called /mnt. Enter the following command to create a physical volume: # pvcreate /dev/rdsk/vpath1 4. If you are using a VXFS file system.

Attention: In some cases, it may be necessary to use standard HP-UX recovery procedures to fix a volume group that has become damaged or corrupted. For information about using recovery procedures, such as vgscan, vgextend, vpchange, or vgreduce, see the following Web site:
http://docs.hp.com/
Click HP-UX Reference (Manpages). Then see HP-UX Reference Volume 2.

Removing logical volumes

Use the following procedure to remove logical volumes:

1. Remove the existing logical volume. Before the logical volume is removed, it must be unmounted. For example, enter the following command to unmount logical volume lvol1:
# umount /dev/vgIBM/lvol1
Next, remove the logical volume. For example, enter the following command to remove logical volume lvol1:
# lvremove /dev/vgIBM/lvol1
A message similar to the following is displayed:
The logical volume "/dev/vgIBM/lvol1" is not empty;
do you really want to delete the logical volume (y/n)
When prompted to delete the logical volume, enter y and press Enter. A message similar to the following is displayed:
Logical volume "/dev/vgIBM/lvol1" has been successfully removed.
Volume Group configuration for /dev/vgIBM has been saved in /etc/lvmconf/vgIBM.conf

2. Remove the existing volume group. Enter the following command to remove the volume group vgIBM:
# vgremove /dev/vgIBM
A message similar to the following is displayed:
Volume group "/dev/vgIBM" has been successfully removed.
Now, you can re-create the logical volume.

Re-creating the existing logical volumes

Use the following process to convert an existing logical volume to use SDD:

Note: You must have superuser privileges to perform these subtasks.

As an example, suppose you have a logical volume called lvol1 under a volume group vgIBM, which is currently using the disk directly (for example, through path /dev/dsk/c3t4d0). You want to convert logical volume lvol1 to use SDD.

1. Determine the size of the logical volume. Enter the following command to determine the size of the logical volume:
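The removal steps can likewise be scripted. This sketch only echoes the commands (a dry run), because umount, lvremove, and vgremove must run on the HP-UX host itself, and lvremove prompts for confirmation when run interactively as shown above.

```shell
# Dry-run sketch of the removal procedure: unmount, remove the logical
# volume, then remove the volume group. Commands are echoed only.
vg=vgIBM; lv=lvol1
run() { echo "+ $*"; }
run umount /dev/$vg/$lv    # the logical volume must be unmounted first
run lvremove /dev/$vg/$lv  # answer y to the "not empty" prompt when run for real
run vgremove /dev/$vg      # only after all LVs in the group are removed
```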

# lvdisplay /dev/vgIBM/lvol1 | grep "LV Size"
A message similar to the following is displayed:
LV Size (Mbytes) 100
In this case, the logical volume size is 100 MB.

2. Re-create the logical volume. Re-creating the logical volume consists of a number of smaller steps:
a. Re-creating the physical volume
b. Re-creating the volume group
c. Re-creating the logical volume

Re-creating the physical volume: Enter the following command to re-create the physical volume:
# pvcreate /dev/rdsk/vpath1
A message similar to the following is displayed:
Physical volume "/dev/rdsk/vpath1" has been successfully created.
In this example, the SDD vpath device associated with the underlying disk is vpath1. Verify the underlying disk by entering the following command:
# /opt/IBMsdd/bin/showvpath
A message similar to the following is displayed:
vpath1: /dev/dsk/c3t4d0

Re-creating the volume group: Enter the following command to re-create the volume group:
# vgcreate /dev/vgibm /dev/dsk/vpath1
A message similar to the following is displayed:
Increased the number of physical extents per physical volume to 2187.
Volume group "/dev/vgibm" has been successfully created.
Volume Group configuration for /dev/vgibm has been saved in /etc/lvmconf/vgibm.conf

Re-creating the logical volume: Enter the following command to re-create the logical volume:
# lvcreate -L 100 -n lvol1 vgibm
A message similar to the following is displayed:
Logical volume "/dev/vgibm/lvol1" has been successfully created with character device "/dev/vgibm/rlvol1".
Logical volume "/dev/vgibm/lvol1" has been successfully extended.
Volume Group configuration for /dev/vgibm has been saved in /etc/lvmconf/vgibm.conf
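The size check can be captured in a variable so that the re-created logical volume is guaranteed to match the original. The lvdisplay line below is inlined sample output (the 100-MB example from the text); on a live system you would pipe the lvdisplay command itself.

```shell
# Sketch: extract the LV size from lvdisplay-style output and reuse it
# in the lvcreate command, so the re-created volume matches the original.
lvdisplay_line='LV Size (Mbytes)            100'
size=$(echo "$lvdisplay_line" | awk '{print $NF}')   # last field = 100
echo "lvcreate -L $size -n lvol1 vgibm"
# prints: lvcreate -L 100 -n lvol1 vgibm
```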

The -L 100 parameter comes from the size of the original logical volume, which is determined by using the lvdisplay command. In this example, the original logical volume was 100 MB in size.

Attention: The re-created logical volume should be the same size as the original volume; otherwise, the re-created volume cannot store the data that was on the original.

Attention: In some cases, it might be necessary to use standard HP recovery procedures to fix a volume group that has become damaged or corrupted. For information about using recovery procedures, such as vgscan, vgextend, vpchange, or vgreduce, see the following Web site:
http://docs.hp.com/
Click HP-UX Reference (Manpages). Then see HP-UX Reference Volume 2.

Setting the proper timeout value for the logical volume manager

The timeout values for the Logical Volume Manager must be set correctly for SDD to operate properly. This is particularly true if a concurrent firmware download has taken place. There are two timeout values: one for the logical volume (LV) and one for the physical volume (PV).

The LV timeout value is determined by the application. If the application has no specific timeout requirement, use the HP default value, which is 0 (forever). The PV timeout value is recommended by the storage vendor. The HP default PV timeout value is 30 seconds; generally, this is sufficient during normal operations. However, during concurrent firmware download, you must set the PV timeout value to a minimum of 90 seconds. You can set the timeout value to 90 seconds for normal operations as well. In addition, if you do not use the default LV timeout value, ensure that the LV timeout value is no less than the PV timeout value multiplied by the number of paths. For example, if a vpath device has four underlying paths and the PV timeout value is 90, the LV timeout value must be at least 360.

To change the PV timeout value, use the pvchange command after pvcreate; to change the LV timeout value, use the lvchange command after lvcreate. For example:
v To change the timeout value of all underlying paths of vpathX to 90 seconds, enter pvchange -t 90 /dev/dsk/vpathX
v To change the timeout value for logical volume /dev/vgibm/lvolY to 360 seconds, enter lvchange -t 360 /dev/vgibm/lvolY
To display the timeout value, use the lvdisplay or pvdisplay command.

Installing SDD on a NFS file server

The procedures in this section show how to install SDD for use with an exported file system (NFS file server).

Setting up NFS for the first time

Perform the following steps if you are installing exported file systems on SDD devices for the first time:
1. If you have not already done so, install SDD using the procedure in "Installing SDD" on page 187.
2. Determine which SDD (vpathN) volumes you will use as file system devices.
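The rule above (LV timeout no less than the PV timeout multiplied by the number of paths) can be computed directly. This sketch echoes the resulting pvchange and lvchange commands rather than running them; vpathX and /dev/vgibm/lvolY are the example names from the text.

```shell
# Sketch: derive the minimum LV timeout from the PV timeout and the
# number of underlying paths, then show the matching commands.
pv_timeout=90   # minimum recommended during concurrent firmware download
num_paths=4     # underlying paths of the vpath device
lv_timeout=$(( pv_timeout * num_paths ))
echo "pvchange -t $pv_timeout /dev/dsk/vpathX"
echo "lvchange -t $lv_timeout /dev/vgibm/lvolY"
# prints: pvchange -t 90 /dev/dsk/vpathX
#         lvchange -t 360 /dev/vgibm/lvolY
```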

3. Create file systems on the selected SDD devices using the appropriate utilities for the type of file system that you will use. If you are using the standard HP-UX UFS file system, enter the following command:
# newfs /dev/rdsk/vpathN
In this example, N is the SDD device instance of the selected volume. Create mount points for the new file systems.
4. Install the file systems into the directory /etc/fstab. In the mount at boot field, click yes.
5. Install the file system mount points into the /etc/exports directory for export.
6. Restart the system.

Installing SDD on a system that already has the NFS file server

Perform the following steps if you have the NFS file server already configured to:
v Export file systems that reside on a multiport subsystem, and
v Use SDD partitions instead of sdisk partitions to access them

1. List the mount points for all currently exported file systems by looking in the /etc/exports directory.
2. Match the mount points found in step 1 with sdisk device link names (files named /dev/(r)dsk/cntndn) by looking in the /etc/fstab directory.
3. Match the sdisk device link names found in step 2 with SDD device link names (files named /dev/(r)dsk/vpathN) by issuing the showvpath command.
4. Make a backup copy of the current /etc/fstab file.
5. Edit the /etc/fstab file, replacing each instance of an sdisk device link named /dev/(r)dsk/cntndn with the corresponding SDD device link.
6. Restart the system.
7. Verify that each exported file system:
a. Passes the start time fsck pass
b. Mounts properly
c. Is exported and available to NFS clients
If there is a problem with any exported file system after completing step 7, restore the original /etc/fstab file and restart to restore NFS service. Then review your steps and try again.
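Step 5 of the existing-NFS procedure (replacing each sdisk link with the SDD link) is a mechanical substitution that sed can do. The sketch below edits an inlined sample fstab line instead of the real /etc/fstab, using the example mapping c3t4d0 to vpath1 from showvpath output; the pattern handles both the block (/dev/dsk) and raw (/dev/rdsk) forms of the same disk.

```shell
# Sketch: rewrite an sdisk device link to the corresponding SDD vpath
# link, covering both /dev/dsk and /dev/rdsk forms of the disk.
fstab_line='/dev/dsk/c3t4d0 /export/fs1 hfs defaults 0 2'
echo "$fstab_line" | sed 's|/dev/\(r\{0,1\}\)dsk/c3t4d0|/dev/\1dsk/vpath1|'
# prints: /dev/dsk/vpath1 /export/fs1 hfs defaults 0 2
```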

Chapter 5. Using SDD on a Linux host system

This chapter provides step-by-step procedures on how to install, configure, use, and remove SDD on supported Linux host systems that are attached to supported storage devices. For updated and additional information that is not included in this chapter, see the Readme file on the CD-ROM or visit the SDD Web site:
www.ibm.com/servers/storage/support/software/sdd

Verifying hardware and software requirements

You must install the following hardware and software components to ensure that SDD installs and operates successfully.

Hardware

The following hardware components are needed:
v Supported storage devices
v One or more pairs of fibre-channel host adapters. To use SDD's input/output (I/O) load balancing features and failover features, you need a minimum of two paths to your storage devices. For more information about the fibre-channel adapters that you can use on your Linux host system, see the Host Systems Attachment Guide for your product.
v Subsystem LUNs that have been created and configured for multiport access. Subsystem LUNs are known as SDD vpath devices in Linux SDD. Each SDD vpath device can have up to 32 paths (SCSI disk instances).
v A fibre optic cable to connect each fibre-channel adapter to a supported storage device port, or to switch ports subsequently zoned to supported storage device ports.
See the IBM TotalStorage Enterprise Storage Server: Interoperability Guide for more information regarding hardware, software, and driver support.

Software

A general list of supported Linux distributions and major release levels is shown below. For the most up-to-date information regarding support for specific architectures and kernels, see the Readme file for the latest SDD release on the CD-ROM or visit the SDD Web site:
www.ibm.com/servers/storage/support/software/sdd
v Novell SUSE
– SUSE Linux Enterprise Server (SLES) 8 / UnitedLinux 1.0
– SLES 9
v Red Hat
– RHEL 3 AS
– RHEL 4 AS
v Asianux
– Red Flag Advanced Server 4.1
– Red Flag DC Server 4.1

© Copyright IBM Corp. 1999, 2010

Unsupported environments

SDD does not support environments containing the following functions:
v DS8000 and DS6000 do not support SCSI connectivity. ESS Model 800 does support SCSI connectivity.
v The EXT3 file system on an SDD vpath device is only supported on distributions running the 2.4.21 or newer kernel.
v Single-path mode during concurrent download of licensed machine code, or during any disk storage system concurrent maintenance that impacts the path attachment, such as a disk storage system host-bay-adapter replacement, or host zoning reconfiguration that affects the host or storage ports in use.

Preparing for SDD installation

Before you install SDD, you must configure the supported storage device for your host system and attach the required fibre-channel adapters.

Configuring disk storage systems

Before you install SDD, configure your disk storage system for multiport access for each LUN. SDD requires a minimum of two paths to your storage devices that share the same LUN to use the load-balancing and path-failover-protection features. With a single path, failover protection is not provided. A host system with a single fibre-channel adapter connected through a switch to multiple disk storage system ports is considered a multipath fibre-channel connection. See the Planning Guide and Host Systems Attachment Guide for your product to find more information about how to configure the disk storage system. These publications are listed in "Related information" on page xix. See the IBM TotalStorage Enterprise Storage Server Host Systems Attachment Guide for information on working around Linux LUN limitations.

Configuring virtualization products

Before you install SDD, configure your virtualization product for multiport access for each LUN. SDD requires a minimum of two paths to your storage devices that share the same LUN to use the load-balancing and path-failover-protection features. With a single path, failover protection is not provided. A host system with a single fibre-channel adapter connected through a switch to multiple disk storage system ports is considered a multipath fibre-channel connection. For information about configuring your SAN Volume Controller, see the IBM System Storage SAN Volume Controller Software Installation and Configuration Guide. See the IBM System Storage SAN Volume Controller Host Systems Attachment Guide for information on working around Linux LUN limitations.

Configuring fibre-channel adapters on disk storage systems

You must configure the fibre-channel adapters and the adapter drivers that are attached to your Linux host system before you install SDD. Follow the adapter-specific configuration instructions to configure the adapters. See the IBM TotalStorage Enterprise Storage Server Host Systems Attachment Guide for more information about how to install and configure fibre-channel adapters for your Linux host system and for information about working around Linux LUN limitations.

Configuring fibre-channel adapters on virtualization products

You must configure the fibre-channel adapters and the adapter drivers that are attached to your Linux host system before you install SDD. Follow the adapter-specific configuration instructions to configure the adapters. For information about configuring your SAN Volume Controller, see the IBM System Storage SAN Volume Controller Planning Guide and the IBM System Storage SAN Volume Controller Software Installation and Configuration Guide. See the IBM System Storage SAN Volume Controller Host Systems Attachment Guide for more information about how to install and configure fibre-channel adapters for your Linux host system and for information about working around Linux LUN limitations.

Disabling automatic Linux system updates

Many Linux distributions give you the ability to configure your systems for automatic system updates. Red Hat provides this ability in the form of a program called up2date, while Novell SUSE provides the YaST Online Update utility. These features periodically query for updates available for each host and can be configured to automatically install any new updates that they find. Often, the automatic update process also upgrades the system to the latest kernel level. Host bus adapter (HBA) drivers need to be compiled against specific kernels in order to function optimally. Similarly, some drivers supplied by IBM, such as SDD, are dependent on a specific kernel and will cease to function in the presence of a new kernel. Hosts running SDD should therefore consider turning off the automatic update of kernel levels. By allowing automatic updates of the kernel, you risk impacting your host systems unexpectedly.

Installing SDD

Before you install SDD, make sure that you have root access to your Linux host system and that all the required hardware and software is ready.

Installing SDD from CD-ROM

Note: A new feature of SDD 1.6.1.x and later packages is the possibility of RPM relocation. RPM relocation allows you to manually relocate the files that SDD installs from the default subdirectory (/opt) to another subdirectory. You can specify the new subdirectory under the root filesystem (/) by using the --prefix option. Relocation cannot be done directly onto the root filesystem (/), but must be a subdirectory or other filesystem under root. If you want to use the relocation option, see step 7 below.

Perform the following steps to install SDD on your Linux host system:
1. Log on to your host system as the root user.
2. Insert the SDD installation CD-ROM into your CD-ROM drive.
3. Enter mount /dev/cdrom to mount the CD-ROM drive.
4. Enter the following to access your CD-ROM contents:
v For Red Hat or Asianux: enter cd /mnt/cdrom
v For SUSE: enter cd /media/cdrom

5. If you are running Red Hat, enter cd redhat. If you are running Miracle Linux, Red Flag, or Asianux, enter cd asianux. If you are running SUSE, enter cd suse. Then enter ls to display the name of the package.
6. Enter rpm -qpl IBMsdd-N.N.N.N-x.arch.distro.rpm to view all the files in the package, where:
v N.N.N.N-x represents the current version release modification level number
v arch is the architecture (i686, ppc64, ia64)
v distro is one of the following:
– rhel3
– rhel4
– ul1
– sles8
– sles9
– asianux
7. Enter the following command to install SDD:
rpm -ivh [--prefix=newpath] IBMsdd-N.N.N.N-x.arch.distro.rpm
where newpath is the new directory where you want to place SDD files (the default directory is /opt). The prefix flag is optional. You cannot specify --prefix=/.
A message similar to the following is displayed:
Preparing for installation ...
IBMsdd-N.N.N.N-1

Installing SDD from downloaded code

Use this procedure to install SDD from downloaded code:
1. Log on to your host system as the root user.
2. Download code from the Web site that is appropriate for your system.
3. Change directory to the directory that contains the downloaded .rpm file.
4. Issue the following instruction:
rpm -ivh [--prefix=newpath] IBMsdd-N.N.N.N-x.arch.distro.rpm
where newpath is the new directory under which you want to place SDD files (the default is /opt). The prefix flag is optional. You cannot specify --prefix=/.

Upgrading SDD

Perform the following steps to upgrade SDD on your Linux host system:
1. Log on to your host system as the root user.
2. Insert the SDD installation CD into your CD-ROM drive.
3. Enter mount /dev/cdrom to mount the CD-ROM drive.
4. Enter the following to access your CD-ROM contents:
v For Red Hat or Asianux: enter cd /mnt/cdrom
v For SUSE: enter cd /media/cdrom
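The package file name used in steps 6 and 7 is assembled from the version-release field, the architecture, and the distribution suffix. This sketch builds the name from illustrative values (1.6.0.1-1, i686, and rhel3 are hypothetical examples, not a statement of the current release) and echoes the install command rather than invoking rpm.

```shell
# Sketch: build the SDD package name IBMsdd-N.N.N.N-x.arch.distro.rpm
# from its parts. The values below are illustrative only.
ver=1.6.0.1-1   # N.N.N.N-x: version release modification level
arch=i686       # one of i686, ppc64, ia64
distro=rhel3    # one of rhel3, rhel4, ul1, sles8, sles9, asianux
pkg="IBMsdd-${ver}.${arch}.${distro}.rpm"
echo "rpm -ivh --prefix=/opt $pkg"
# prints: rpm -ivh --prefix=/opt IBMsdd-1.6.0.1-1.i686.rhel3.rpm
```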

5. If you are running Red Hat, enter cd redhat. If you are running SUSE, enter cd suse. Then enter ls to display the name of the package.
6. Enter rpm -qpl IBMsdd-N.N.N.N-x.arch.distro.rpm to view all the files in the package.
7. Enter rpm -U IBMsdd-N.N.N.N-x.arch.distro.rpm [--prefix=newpath] to upgrade SDD. The --prefix option should be used if it was used during the RPM installation of SDD. A message similar to the following is displayed:
Preparing for installation ...
IBMsdd-N.N.N.N-1

Note: The RPM upgrade command (rpm -U) will not work if you want to upgrade from a pre-SDD 1.6.1.x package to an SDD 1.6.1.x or later package. Instead:
1. Uninstall the SDD package using the RPM erase command (rpm -e IBMsdd).
2. Install the new SDD 1.6.1.x or later package using rpm -i.
3. If you modified your /etc/vpath.conf, the rpm -e command saved a copy in /etc/vpath.conf.rpmsave. To preserve your /etc/vpath.conf modifications, you must also copy /etc/vpath.conf.rpmsave to /etc/vpath.conf.

Verifying the SDD installation

The SDD installation installs the device driver and its utilities in the /opt/IBMsdd directory. Table 20 lists the SDD driver and its major component files. In this table, the /opt directory is the default directory; the root prefix might be different, depending on the installation.

Table 20. SDD components for a Linux host system

File name       Location         Description
sdd-mod.o-xxx   /opt/IBMsdd      SDD device driver file for Linux 2.4 and earlier kernels (where xxx stands for the kernel level of your host system)
sdd-mod.ko-xxx  /opt/IBMsdd      SDD device driver file for Linux 2.6 kernels only (where xxx stands for the kernel level of your host system)
vpath.conf      /etc             SDD configuration file
sddsrv.conf     /etc             sddsrv configuration file
executables     /opt/IBMsdd/bin  SDD configuration and status tools
                /usr/sbin        Symbolic links to the SDD utilities
sdd.rcscript    /etc/init.d/sdd  Symbolic link for the SDD system startup option
                /usr/sbin/sdd    Symbolic link for the SDD manual start or restart option

You can issue the rpm -qi IBMsdd command to receive information on the particular package, or the rpm -ql IBMsdd command to list the specific SDD files that were successfully installed on your Linux host system. If the installation was successful, issue the cd /opt/IBMsdd and then ls -l commands to list all the installed SDD components. You will see output similar to the following:

total 580
-rw-r----- 1 root root   8422 Sep 26 17:40 LICENSE
-rw-r----- 1 root root   9120 Sep 26 17:40 README
drw-r----- 2 root root   4096 Oct  2 16:21 bin
-rw-r----- 1 root root  88817 Sep 26 17:40 sdd-mod.o-2.4.2-smp
-rw-r----- 1 root root  88689 Sep 26 17:40 sdd-mod.o-2.4.6-smp
-rw-r----- 1 root root  89370 Sep 26 17:40 sdd-mod.o-2.4.9-smp

SDD utilities are packaged as executable files and contained in the /bin directory. If you issue the cd /opt/IBMsdd/bin and then ls -l commands, you will see output similar to the following:

total 232
-rwxr-x--- 1 root root  32763 Sep 26 17:40 cfgvpath
-rwxr-x--- 1 root root  28809 Sep 26 17:40 datapath
-rwxr-x--- 1 root root   1344 Sep 26 17:40 sdd.rcscript
-rwxr-x--- 1 root root  16667 Sep 26 17:40 lsvpcfg
-rwxr-x--- 1 root root  78247 Sep 26 17:40 pathtest
-rwxr-x--- 1 root root  22274 Sep 26 17:40 rmvpath
-rwxr-x--- 1 root root  92683 Sep 26 17:40 addpaths

Note: The addpaths command is still supported on the 2.4 kernels. On the 2.6 kernels, cfgvpath performs the functionality of addpaths. See the note in Table 21.

If the installation failed, a message similar to the following is displayed:
package IBMsdd is not installed

Configuring SDD

Before you start the SDD configuration process, make sure that you have successfully configured the supported storage device to which your host system is attached and that the supported storage device is operational.

This section provides instructions for the following procedures:
v Configuring and verifying an SDD
v Configuring SDD at system startup
v Maintaining SDD vpath device configuration persistence

Table 21 lists all of the commands that can help system administrators configure SDD. More details about the function and use of each command are described later in this section.

Table 21. Summary of SDD commands for a Linux host system

Command         Description
cfgvpath        Configures SDD vpath devices.
cfgvpath query  Displays all SCSI disk devices.
lsvpcfg         Displays the current devices that are configured and their corresponding paths.
rmvpath         Removes one or all SDD vpath devices.

Table 21. Summary of SDD commands for a Linux host system (continued)

Command      Description
addpaths     Adds any new paths to an existing SDD vpath device. This command is only supported for Linux 2.4 kernels. For Linux 2.6 kernels, the functionality of the addpaths command has been added to the cfgvpath command. If you need to add paths to an existing SDD vpath device with a Linux 2.6 kernel, run the cfgvpath command.
sdd start    Loads the SDD driver and automatically configures disk devices for multipath access.
sdd stop     Unloads the SDD driver (requires that no SDD vpath devices currently be in use).
sdd restart  Unloads the SDD driver (requires that no SDD vpath devices currently be in use), and then loads the SDD driver and automatically configures disk devices for multipath access.

Note: If a vpath device is in use (mounted), the sdd stop command fails with an error stating that the module sdd-mod is in use.

Configuring and verifying an SDD

Perform the following steps to load and configure SDD on your Linux host system:

SDD configuration

Use the following steps to configure SDD on your Linux host system.
1. Log on to your Linux host system as the root user.
2. Enter sdd start.
3. You can verify the configuration using the datapath query device command to determine that all your disks are configured. If the system is not configured properly, see "Verifying SDD configuration" on page 218.
4. Use the sdd stop command to unconfigure and unload the SDD driver. Use the sdd restart command to unconfigure, unload, and then restart the SDD configuration process.

For Linux 2.4 kernels, the SDD vpath devices are assigned names according to the following scheme:
vpatha, vpathb, ..., vpathp
vpathaa, vpathab, ..., vpathap
vpathba, vpathbb, ..., vpathbp
...
vpathza, vpathzb, ..., vpathzp
vpathaaa, vpathaab, ..., vpathaap
...

For Linux 2.6 kernels, the SDD vpath devices are assigned names according to the following scheme:
vpatha, vpathb, ..., vpathz
vpathaa, vpathab, ..., vpathaz
vpathba, vpathbb, ..., vpathbz
...
vpathza, vpathzb, ..., vpathzz
vpathaaa, vpathaab, ..., vpathaaz
...
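Assuming the Linux 2.6 sequence above continues alphabetically the way spreadsheet column names do (a..z, then aa..az, ba..zz, then aaa...), the name of the Nth vpath device can be computed. The vpath_name function below is a hypothetical helper written for illustration, not an SDD utility; it only makes the naming scheme concrete.

```shell
# Sketch: map a 0-based device index to its 2.6-kernel vpath name,
# assuming bijective base-26 ordering (vpatha=0 ... vpathz=25, vpathaa=26).
vpath_name() {
  n=$1; suffix=""
  letters=abcdefghijklmnopqrstuvwxyz
  while [ "$n" -ge 0 ]; do
    i=$(( n % 26 ))
    suffix="$(printf '%s' "$letters" | cut -c$((i + 1)))$suffix"
    n=$(( n / 26 - 1 ))
  done
  printf 'vpath%s\n' "$suffix"
}
vpath_name 0    # prints: vpatha
vpath_name 25   # prints: vpathz
vpath_name 26   # prints: vpathaa
```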

Verifying SDD configuration

Use the following steps to verify the SDD configuration after running the sdd start command.

1. Enter lsmod or enter cat /proc/modules to verify that the SDD sdd-mod driver is loaded (for Linux 2.6 kernels, the SDD driver is displayed as sdd_mod). If it is successfully loaded, output similar to the following is displayed:

sdd-mod        233360   0  (unused)
qla2300        192000   0
nls_iso8859-1    2880   1  (autoclean)
cs4232           3760   1  (autoclean)
ad1848          16752   0  (autoclean) [cs4232]
uart401          6352   0  (autoclean) [cs4232]
sound           56192   1  (autoclean) [cs4232 ad1848 uart401]
soundcore        4048   4  (autoclean) [sound]
nfsd            67664   4  (autoclean)
usb-uhci        20928   0  (unused)
usbcore         48320   1  [usb-uhci]
ipv6           131872  -1  (autoclean)
olympic         15856   1  (autoclean)
ipchains        34112   0  (unused)
lvm-mod         40880   0  (autoclean)

Note: If you are on an unsupported kernel, you will get an error message about the kernel not being supported.

2. Enter cat /proc/IBMsdd to verify that the SDD sdd-mod driver level matches that of your system kernel. The following example shows that SDD 1.6.0.0 is installed on a Linux host system running a 2.4.9 symmetric multiprocessor kernel:
sdd-mod: SDD 1.6.0.0 2.4.9 SMP Sep 26 2001 17:39:06 (C) IBM Corp.

3. The order of disk recognition on a Linux system is:
a. Fibre-channel Host Bus Adapter (HBA) driver. The HBA driver needs to recognize the disks. The recognized disks are typically listed in /proc/scsi/adapter_type/host_number, for example /proc/scsi/qla2300/2. Note that this is not always true for the Linux 2.6 kernel, because the HBA driver version can use the sysfs filesystem instead of the proc filesystem to expose information.
b. SCSI driver (scsi-mod or scsi_mod). The SCSI driver has to recognize the disks and, if this succeeds, it puts disk entries into /proc/scsi/scsi.
c. SCSI disk driver (sd-mod or sd_mod). The SCSI disk driver has to recognize the disk entries, and if this succeeds it puts the entries into /proc/partitions.
d. SDD driver (sdd-mod or sdd_mod). SDD then uses the disk entries in /proc/partitions to configure the SDD vpath devices. If configuration succeeds, SDD generates more entries in /proc/partitions.

Enter cat /proc/scsi/adapter_type/N to display the status of a specific adapter and the names of the attached devices. In this command, adapter_type indicates the type of adapter that you are using, and N represents the host-assigned adapter number. The following example shows a sample output:
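The step 1 check can be automated by scanning the lsmod output for the driver name (sdd-mod on 2.4 kernels, sdd_mod on 2.6). The sketch below greps inlined sample output so it runs anywhere; on a real host you would pipe lsmod directly instead.

```shell
# Sketch: detect whether the SDD driver module is loaded by matching
# either spelling of its name in lsmod-style output (sample inlined).
lsmod_output='sdd-mod   233360  0 (unused)
qla2300   192000  0'
if printf '%s\n' "$lsmod_output" | grep -q '^sdd[-_]mod'; then
  echo "SDD driver loaded"
else
  echo "SDD driver not loaded"
fi
# prints: SDD driver loaded
```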

00b5 Entry address = e08ea060 HBA: QLA2300 . scsi-qla0-target-6=500507680140017c. 0:0:8c. scsi-qla0-target-10=500507680120017c. An * at the end of a disk description indicates that the disk is not yet registered with the operating system. scsi-qla0-target-9=500507680120011a. scsi-qla0-target-3=5005076300ca9b0a. 29. scsi-qla0-target-5=500507680140011a. 0x0. Response Queue count= 512 Total number of active commands = 0 Total number of interrupts = 7503 Total number of IOCBs (used/max) = (0/600) Total number of queued commands = 0 Device queue depth = 0x10 Number of free request entries = 57 Number of mailbox timeouts = 0 Number of ISP aborts = 0 Number of loop resyncs = 47 Number of retries for empty slots = 0 Number of reqs in pending_q= 0.01. 0x0.# ls /proc/scsi/ qla2300 scsi sym53c8xx # ls /proc/scsi/qla2300/ 2 3 HbaApiNode # cat /proc/scsi/qla2300/2 QLogic PCI to Fibre Channel Host Adapter for ISP23xx: Firmware version: 3. 0:0:8c. 0x0. flags flags flags flags flags flags flags flags 0x0. 0x0. 0:0:8c. 29. Information: Total Total Total Total Total Total Total Total reqs reqs reqs reqs reqs reqs reqs reqs 35. The disk descriptions are shown one per line. scsi-qla0-target-2=5005076300ce9b0a. 29. scsi-qla0-target-4=5005076801400153. Response Queue = 0xc40000 Request Queue count= 128. Using SDD on a Linux host system 219 . retry_q= 0. 0. done_q= 0. 0. 4. Pending Pending Pending Pending Pending Pending Pending Pending reqs reqs reqs reqs reqs reqs reqs reqs 0. 0. 0. See the appropriate Host Systems Attachment Guide for your product to learn about SCSI LUN discovery in Linux. 29. 0:0:8c. 0x0. scsi-qla0-target-7=5005076801400150. The disks that the QLogic adapter recognizes are listed at the end of the output under the heading SCSI LUN Information. 0. The cfgvpath query is effectively looking at the /proc/partitions output. . 0x0.05. Chapter 5. Enter cfgvpath query to verify that you have configured the SCSI disk devices that you allocated and configured for SDD.18. 
scsi-qla0-target-0=5005076300c70fad;
scsi-qla0-target-1=10000000c92113e5;
scsi-qla0-target-2=5005076300ce9b0a;
scsi-qla0-target-3=5005076300ca9b0a;
scsi-qla0-target-4=5005076801400153;
scsi-qla0-target-5=500507680140011a;
scsi-qla0-target-6=500507680140017c;
scsi-qla0-target-7=5005076801400150;
scsi-qla0-target-8=5005076801200153;
scsi-qla0-target-9=500507680120011a;
scsi-qla0-target-10=500507680120017c;
scsi-qla0-target-11=5005076801200150;

SCSI LUN Information:
(Id:Lun)
( 2: 0): Total reqs 35, Pending reqs 0, flags 0x0, 0:0:8c,
( 2: 1): Total reqs 29, Pending reqs 0, flags 0x0, 0:0:8c,
( 2: 2): Total reqs 29, Pending reqs 0, flags 0x0, 0:0:8c,
( 2: 3): Total reqs 29, Pending reqs 0, flags 0x0, 0:0:8c,
( 2: 4): Total reqs 29, Pending reqs 0, flags 0x0, 0:0:8c,
( 2: 5): Total reqs 29, Pending reqs 0, flags 0x0, 0:0:8c,
( 2: 6): Total reqs 29, Pending reqs 0, flags 0x0, 0:0:8c,
( 2: 7): Total reqs 29, Pending reqs 0, flags 0x0, 0:0:8c,
...

The disks that the QLogic adapter recognizes are listed at the end of the output under the heading SCSI LUN Information. The disk descriptions are shown one per line. An * at the end of a disk description indicates that the disk is not yet registered with the operating system. SDD cannot configure devices that are not registered with the operating system. See the appropriate Host Systems Attachment Guide for your product to learn about SCSI LUN discovery in Linux.

4. Enter cfgvpath query to verify that you have configured the SCSI disk devices that you allocated and configured for SDD. The cfgvpath query is effectively looking at the /proc/partitions output.

After you enter the cfgvpath query command, a message similar to the following is displayed. This example output is for a system with disk storage system and virtualization product LUNs. The sample output shows the name and serial number of the SCSI disk device, its connection information, and its product identification. A capital letter X at the end of a line indicates that SDD currently does not support the device or the device is in use and cfgvpath has not configured it.

/dev/sda  ( 8,   0) host=0 ch=0 id=0  lun=0 vid=IBM pid=DDYS-T36950M serial=xxxxxxxxxxxx ctlr_flag=0 ctlr_nbr=0 df_ctlr=0 X
/dev/sdb  ( 8,  16) host=2 ch=0 id=0  lun=0 vid=IBM pid=2105E20 serial=60812028 ctlr_flag=0 ctlr_nbr=0 df_ctlr=0
/dev/sdc  ( 8,  32) host=2 ch=0 id=0  lun=1 vid=IBM pid=2105E20 serial=70912028 ctlr_flag=0 ctlr_nbr=0 df_ctlr=0
/dev/sdd  ( 8,  48) host=2 ch=0 id=0  lun=2 vid=IBM pid=2105E20 serial=31B12028 ctlr_flag=0 ctlr_nbr=0 df_ctlr=0
/dev/sde  ( 8,  64) host=2 ch=0 id=0  lun=3 vid=IBM pid=2105E20 serial=31C12028 ctlr_flag=0 ctlr_nbr=0 df_ctlr=0
/dev/sdf  ( 8,  80) host=2 ch=0 id=1  lun=0 vid=IBM pid=2105E20 serial=60812028 ctlr_flag=0 ctlr_nbr=0 df_ctlr=0
/dev/sdg  ( 8,  96) host=2 ch=0 id=1  lun=1 vid=IBM pid=2105E20 serial=70912028 ctlr_flag=0 ctlr_nbr=0 df_ctlr=0
/dev/sdh  ( 8, 112) host=2 ch=0 id=1  lun=2 vid=IBM pid=2105E20 serial=31B12028 ctlr_flag=0 ctlr_nbr=0 df_ctlr=0
/dev/sdi  ( 8, 128) host=2 ch=0 id=1  lun=3 vid=IBM pid=2105E20 serial=31C12028 ctlr_flag=0 ctlr_nbr=0 df_ctlr=0
/dev/sdj  ( 8, 144) host=2 ch=0 id=6  lun=0 vid=IBM pid=2145 serial=600507680183000a800000000000000a ctlr_flag=1 ctlr_nbr=0 df_ctlr=0
/dev/sdk  ( 8, 160) host=2 ch=0 id=6  lun=1 vid=IBM pid=2145 serial=600507680183000a800000000000000b ctlr_flag=1 ctlr_nbr=1 df_ctlr=0
.
.
.
/dev/sdad (65, 208) host=2 ch=0 id=10 lun=0 vid=IBM pid=2062 serial=600507680183000a800000000000000a ctlr_flag=1 ctlr_nbr=0 df_ctlr=0
/dev/sdae (65, 224) host=2 ch=0 id=10 lun=1 vid=IBM pid=2062 serial=600507680183000a800000000000000b ctlr_flag=1 ctlr_nbr=1 df_ctlr=0
/dev/sdaf (65, 240) host=2 ch=0 id=10 lun=2 vid=IBM pid=2062 serial=600507680183000a800000000000000c ctlr_flag=1 ctlr_nbr=0 df_ctlr=0

The cfgvpath utility examines /etc/fstab and the output of the mount command in order to determine the disks that it should not configure. If cfgvpath has not configured a disk that you think it should have configured, verify that an entry for one of these disks exists in /etc/fstab or in the output of the mount command. If the entry is incorrect, delete the wrong entry and issue cfgvpath again to configure the device.
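Because each cfgvpath query line uses a fixed `field=value` layout, it is straightforward to post-process in a script. The helper below is a hypothetical convenience for illustration (it is not part of SDD); it parses one line into a dictionary and reports the trailing X that marks an unsupported or in-use device:

```python
import re

def parse_cfgvpath_line(line):
    """Parse one line of `cfgvpath query` output into a dict.

    Returns None if the line does not look like a device line.
    The trailing capital X (device unsupported or in use) is
    reported under the 'excluded' key.
    """
    line = line.strip()
    # Leading shape: /dev/<name> ( major, minor)
    m = re.match(r'^(/dev/\S+)\s+\(\s*(\d+),\s*(\d+)\)', line)
    if not m:
        return None
    info = {
        'device': m.group(1),
        'major': int(m.group(2)),
        'minor': int(m.group(3)),
        'excluded': line.endswith(' X'),
    }
    # key=value pairs such as host=2 ch=0 id=0 lun=1 vid=IBM pid=2105E20
    for key, value in re.findall(r'(\w+)=(\S+)', line):
        info[key] = value
    return info

sample = ("/dev/sda ( 8, 0) host=0 ch=0 id=0 lun=0 vid=IBM "
          "pid=DDYS-T36950M serial=xxxxxxxxxxxx ctlr_flag=0 "
          "ctlr_nbr=0 df_ctlr=0 X")
parsed = parse_cfgvpath_line(sample)
```

Here `parsed['excluded']` is True, matching the X at the end of the sample line.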

The configuration information is saved by default in the /etc/vpath.conf file to maintain vpath name persistence in subsequent driver loads and configurations. You might choose to specify your own configuration file by issuing the cfgvpath -f configuration_file_name.cfg command, where configuration_file_name is the name of the configuration file that you want to specify. If you use a self-specified configuration file, SDD only configures the SDD vpath devices that this file defines. Enter cfgvpath ? for more information about the cfgvpath command.

Automatic SCSI disk device exclusion during SDD configuration

The SDD configuration might sometimes exclude a SCSI disk (sd) device that is present on the system from being configured for use by an SDD vpath device in the following situations:
1. The SCSI disk device is from an unsupported storage subsystem. You can determine whether your SCSI disk devices are supported by running cfgvpath query and checking the output. See “Configuring and verifying an SDD” on page 217 for additional information about how to determine whether the SCSI disk devices are supported.
2. The SCSI disk device is listed in the file /etc/fstab. fstab is a configuration file that contains information about disk devices and partitions, such as how and where they should be mounted. For example, an entry specifying the disk or partition that acts as swap space would be in fstab. The system administrator must keep the fstab configuration file up-to-date so that when SDD checks this file, it is able to correctly exclude drives and partitions.
3. The SCSI disk device is currently mounted (using the Linux mount command). SDD configuration assumes that the device is in use for another purpose and will not configure the device.
4. The SCSI disk device is currently bound to a raw device. Use the raw -qa command to check the raw device bindings. If the major, minor pair of the raw command output matches with a SCSI disk device major, minor pair, the SCSI disk device will be excluded.

Important things to note about the exclusion process are:
1. When running cfgvpath or sdd start, the SDD configuration will print out a message indicating whether it has excluded any SCSI disk devices.
2. Once a SCSI disk device that belongs to an SDD vpath device is excluded, all SCSI disk devices (or paths) belonging to the SDD vpath device will be excluded.

Reconfiguring using SDD userspace commands

You can use the following commands when reconfiguring SDD vpath devices:

cfgvpath
   Enter cfgvpath to configure SDD vpath devices.

rmvpath
   You can remove an SDD vpath device by using the rmvpath vpath_name command, where vpath_name represents the name of the SDD vpath device that is selected for removal. Enter rmvpath ? for more information about the rmvpath command.
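The four exclusion situations above amount to a simple decision per sd device. The sketch below is illustrative only (the real checks happen inside cfgvpath); it assumes the fstab entries, mounted devices, and raw-bound major/minor pairs have already been collected from /etc/fstab, mount, and raw -qa:

```python
def should_exclude(device, fstab_devices, mounted_devices,
                   raw_bound_majmin, supported, majmin):
    """Mirror the documented exclusion rules for one sd device.

    device           -- for example '/dev/sda'
    fstab_devices    -- set of devices named in /etc/fstab
    mounted_devices  -- set of devices in the `mount` output
    raw_bound_majmin -- set of (major, minor) pairs from `raw -qa`
    supported        -- False if the device is from an
                        unsupported storage subsystem
    majmin           -- (major, minor) of this device
    """
    if not supported:                  # situation 1: unsupported subsystem
        return True
    if device in fstab_devices:        # situation 2: listed in /etc/fstab
        return True
    if device in mounted_devices:      # situation 3: currently mounted
        return True
    if majmin in raw_bound_majmin:     # situation 4: bound to a raw device
        return True
    return False

# A mounted device must be excluded; a free, supported device must not be.
busy = should_exclude('/dev/sdb', set(), {'/dev/sdb'}, set(), True, (8, 16))
free = should_exclude('/dev/sdc', set(), set(), set(), True, (8, 32))
```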

lsvpcfg
   Verify the SDD vpath device configuration by entering lsvpcfg or datapath query device.

If you successfully configured SDD vpath devices, output similar to the following is displayed by lsvpcfg. This example output is for a system with disk storage system and virtualization product LUNs:

sdd-mod: SDD 1.6.0.0 2.4.19-64GB-SMP SMP Mar 3 2003 18:06:49 (C) IBM Corp.
000 vpatha ( 247,   0) 60812028 = /dev/sdb /dev/sdf /dev/sdax /dev/sdbb
001 vpathb ( 247,  16) 70912028 = /dev/sdc /dev/sdg /dev/sday /dev/sdbc
002 vpathc ( 247,  32) 31B12028 = /dev/sdd /dev/sdh /dev/sdaz /dev/sdbd
003 vpathd ( 247,  48) 31C12028 = /dev/sde /dev/sdi /dev/sdba /dev/sdbe
004 vpathe ( 247,  64) 600507680183000a800000000000000a = /dev/sdj /dev/sdt /dev/sdad /dev/sdan /dev/sdbf /dev/sdbp /dev/sdbz /dev/sdcj
005 vpathf ( 247,  80) 600507680183000a800000000000000b = /dev/sdk /dev/sdu /dev/sdae /dev/sdao /dev/sdbg /dev/sdbq /dev/sdca /dev/sdck
006 vpathg ( 247,  96) 600507680183000a800000000000000c = /dev/sdl /dev/sdv /dev/sdaf /dev/sdap /dev/sdbh /dev/sdbr /dev/sdcb /dev/sdcl
007 vpathh ( 247, 112) 600507680183000a800000000000000d = /dev/sdm /dev/sdw /dev/sdag /dev/sdaq /dev/sdbi /dev/sdbs /dev/sdcc /dev/sdcm
008 vpathi ( 247, 128) 600507680183000a800000000000000e = /dev/sdn /dev/sdx /dev/sdah /dev/sdar /dev/sdbj /dev/sdbt /dev/sdcd /dev/sdcn
009 vpathj ( 247, 144) 600507680183000a800000000000000f = /dev/sdo /dev/sdy /dev/sdai /dev/sdas /dev/sdbk /dev/sdbu /dev/sdce /dev/sdco
010 vpathk ( 247, 160) 600507680183000a8000000000000010 = /dev/sdp /dev/sdz /dev/sdaj /dev/sdat /dev/sdbl /dev/sdbv /dev/sdcf /dev/sdcp
011 vpathl ( 247, 176) 600507680183000a8000000000000011 = /dev/sdq /dev/sdaa /dev/sdak /dev/sdau /dev/sdbm /dev/sdbw /dev/sdcg /dev/sdcq
012 vpathm ( 247, 192) 600507680183000a8000000000000012 = /dev/sdr /dev/sdab /dev/sdal /dev/sdav /dev/sdbn /dev/sdbx /dev/sdch /dev/sdcr
013 vpathn ( 247, 208) 600507680183000a8000000000000013 = /dev/sds /dev/sdac /dev/sdam /dev/sdaw /dev/sdbo /dev/sdby /dev/sdci /dev/sdcs

addpaths
   You can issue the addpaths command to add paths to existing SDD vpath devices. For SDD to discover new paths, the Linux kernel SCSI disk driver must already be aware of the path. For example, addpaths would be useful in a scenario where disks are configured and are visible to the OS but were unavailable at the time that SDD was configured because of a failed switch or unplugged fibre cable. Later, when the disks are recovered through the recovery process or maintenance, you can issue addpaths on a running system to add the restored paths.

   Note: For Linux 2.6 kernels, addpaths is not supported because the functionality of addpaths has been incorporated into the cfgvpath command. To add new paths to existing disks when using a Linux 2.6 kernel, run cfgvpath. Use cfgvpath to add new disks. See “Using dynamic reconfiguration” on page 225.

See Chapter 13, “Using the datapath commands,” on page 421 for more information about the datapath query device command and all other SDD datapath commands.

Configuring SDD at system startup

Note: SDD is currently set to not be loaded on system startup after installation. Use this section to load SDD on system startup. A rpm upgrade does not change the current configuration.

SDD can be set to automatically load and configure when your Linux system starts up. SDD provides a startup script sdd.rcscript file in the /opt/IBMsdd/bin directory and creates a symbolic link to /etc/init.d/sdd.

Perform the following steps to configure SDD at system startup:
1. Log on to your Linux host system as the root user.
2. At startup run one of these commands to enable run level X:
   For Red Hat: chkconfig --level X sdd on

   For SUSE: chkconfig --set sdd X
3. Enter chkconfig --list sdd to verify that the system startup option is enabled for SDD configuration. If necessary, you can disable the startup option by entering:
   chkconfig --level X sdd off
4. Restart your host system so that SDD is loaded and configured.

In order for SDD to automatically load and configure, the host bus adapter (HBA) driver must already be loaded. This can be assured at start time by adding the appropriate driver or drivers to the kernel's initial RAM disk. See the Red Hat mkinitrd command documentation or the SUSE mk_initrd command documentation for more information. Additional suggestions might be available from the HBA driver vendor.

Sometimes certain system configurations require SDD to start earlier than is possible under the procedure described above. The general rule is: if some application, driver, or filesystem needs to use an SDD vpath device early in the boot process, then you will need to use another procedure to start SDD to allow these applications or filesystems access to SDD vpath devices. Some of the known system configurations are described below. This is not an exhaustive list, but it does provide an idea of situations where other methods are required:

1. SDD remote boot. If booting off of a SDD vpath device, SDD needs to be available before the root filesystem is mounted. This means SDD needs to be placed in the initial ramdisk (initrd). See “Booting Linux over the SAN with SDD” on page 233 for more instructions on how to set up this environment.

2. Linux Logical Volume Manager (LVM) with SDD. Linux LVM with SDD often requires SDD to start early in the init script process because the LVM initialization occurs relatively early. If LVM is used to encapsulate the root disk, SDD needs to be placed in the initial ramdisk (initrd). See “Using Linux Logical Volume Manager with SDD” on page 229 for more information.

Any other customized application, driver, or filesystem that requires access to an SDD vpath device early in the boot process might require: (1) SDD be placed in the initial ramdisk (initrd), or (2) the SDD startup script be placed earlier in the init scripts.

Maintaining SDD vpath device configuration persistence

Use the cfgvpath command to configure SDD vpath devices. For first time configuration, the configuration method finds all SCSI disk devices, then configures and assigns SDD vpath devices accordingly. The configuration is saved in /etc/vpath.conf to maintain name persistence in subsequent driver loads and configurations.

The /etc/vpath.conf is not modified during a rpm upgrade (rpm -U). However, if the rpm is removed and reinstalled (using the rpm -e and rpm -i commands), the /etc/vpath.conf is removed. If you are doing a rpm removal, it is important to manually save your /etc/vpath.conf and restore it after the rpm has been reinstalled, before issuing sdd start.
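The effect of name persistence can be illustrated with a small model: serials already recorded in /etc/vpath.conf keep their vpath names, and only new LUNs receive the next free name. This Python sketch is an illustration of the documented behavior, not SDD's actual implementation; it assumes the vpatha, vpathb, … naming continues as vpathaa after vpathz, in the same style the sd driver uses for /dev/sdaa:

```python
def next_vpath_name(index):
    """vpatha, vpathb, ..., vpathz, vpathaa, ... (base-26 naming)."""
    name = ''
    index += 1
    while index > 0:
        index, rem = divmod(index - 1, 26)
        name = chr(ord('a') + rem) + name
    return 'vpath' + name

def assign_names(existing, discovered_serials):
    """existing: dict serial -> vpath name (from /etc/vpath.conf).

    Returns an updated mapping in which known serials keep their
    names and new serials get the next unused vpath name.
    """
    assigned = dict(existing)
    used = set(assigned.values())
    index = 0
    for serial in discovered_serials:
        if serial in assigned:
            continue                      # name persists across reloads
        while next_vpath_name(index) in used:
            index += 1
        assigned[serial] = next_vpath_name(index)
        used.add(assigned[serial])
    return assigned

conf = {'60812028': 'vpatha', '70912028': 'vpathb'}
updated = assign_names(conf, ['70912028', '31B12028', '60812028'])
```

In this example the new LUN 31B12028 becomes vpathc, while the two previously configured LUNs keep vpatha and vpathb.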

After the SDD vpath devices are configured, issue lsvpcfg or the datapath query device command to verify the configuration. See “datapath query device” on page 431 for more information.

You can manually exclude a device in /etc/vpath.conf from being configured. To manually exclude a device from being configured, edit the vpath.conf file prior to running sdd start, adding a # before the first character of the entry for the device that you want to remain unconfigured. Removing the # allows a previously excluded device to be configured again. The following output shows the contents of a vpath.conf file with vpathb and vpathh not configured:

vpatha 60920530
#vpathb 60A20530
vpathc 60B20530
vpathd 60C20530
vpathe 70920530
vpathf 70A20530
vpathg 70B20530
#vpathh 70C20530

Dynamically changing the SDD path-selection policy algorithm

SDD supports path-selection policies that increase the performance of multipath-configured supported storage devices and make path failures transparent to applications. SDD supports dynamic changing of the SDD devices' path-selection policy. The following path-selection policies are supported:

failover only (fo)
   All I/O operations for the device are sent to the same (preferred) path until the path fails because of I/O errors. Then an alternate path is chosen for subsequent I/O operations.

load balancing (lb)
   The path to use for an I/O operation is chosen by estimating the load on the adapter to which each path is attached. The load is a function of the number of I/O operations currently in process. If multiple paths have the same load, a path is chosen at random from those paths. Load-balancing mode also incorporates failover protection. The load-balancing policy is also known as the optimized policy.

load balancing sequential (lbs)
   This policy is the same as the load-balancing policy with optimization for sequential I/O. The load-balancing sequential policy is also known as the optimized sequential policy. This is the default setting.

round robin (rr)
   The path to use for each I/O operation is chosen at random from paths that were not used for the last I/O operation. If a device has only two paths, SDD alternates between the two.

round robin sequential (rrs)
   This policy is the same as the round-robin policy with optimization for sequential I/O.

The default path-selection policy for an SDD device is load balancing sequential. You can change the policy for an SDD device.
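The difference between these policies can be sketched in a few lines. The following Python model is purely illustrative (SDD implements the policies inside the kernel driver); it picks a path for the next I/O given the per-path outstanding-I/O counts and the path used last:

```python
import random

def choose_path(policy, loads, last_used=None, preferred=0):
    """Pick the index of the path for the next I/O.

    policy    -- 'fo', 'lb', or 'rr'
    loads     -- outstanding I/O count per path (list of ints)
    last_used -- index of the path used for the previous I/O
    preferred -- preferred path index for failover-only
    """
    candidates = list(range(len(loads)))
    if policy == 'fo':
        # Same (preferred) path until it fails.
        return preferred
    if policy == 'lb':
        # Least-loaded path; random tie-break among equals.
        best = min(loads)
        return random.choice([i for i in candidates if loads[i] == best])
    if policy == 'rr':
        # Random among paths not used for the last I/O; with only
        # two paths this alternates between them.
        others = [i for i in candidates if i != last_used]
        return random.choice(others or candidates)
    raise ValueError('unknown policy: ' + policy)

lb_pick = choose_path('lb', [3, 0, 5, 0])         # index 1 or 3 (tied)
rr_pick = choose_path('rr', [0, 0], last_used=0)  # alternates to 1
fo_pick = choose_path('fo', [9, 0], preferred=0)  # stays on preferred path
```

The sequential variants (lbs, rrs) add an optimization for sequential I/O on top of the lb and rr behavior modeled here.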

Before changing the path-selection policy, determine the active policy for the device. Enter datapath query device N, where N is the device number of the SDD vpath device, to show the current active policy for that device.

datapath set device policy command

Use the datapath set device policy command to change the SDD path-selection policy dynamically. See “datapath set device policy” on page 446 for more information about the datapath set device policy command.

Using dynamic reconfiguration

Before SDD can configure a dynamically-added LUN, the OS and HBA drivers must detect and configure it. Typically, to see a LUN that has been dynamically added, a reload of the HBA driver is required. However, specific versions of HBA drivers and adapters might offer procedures to force a re-scan of the bus in order to detect new LUNs without requiring a driver reload. In general, the QLogic and Emulex drivers for Linux 2.6 kernels (that is, SLES 9 and RHEL 4) offer procedures using the sysfs interface. The HBA drivers for the Linux 2.4 kernel have more limited options. See the respective HBA vendor's documentation to determine if a procedure is supported and, if so, with which HBA driver versions.

If a procedure is used that does not require a HBA reload, running cfgvpath will be adequate to configure the added LUN. If a HBA driver reload is used, it is important to run sdd stop before unloading the HBA driver and sdd start after the HBA driver has been reloaded.

The cfgvpath command finds the current hardware configuration and compares it to the SDD vpath device configuration in memory and then identifies a list of differences. It then issues commands to update the SDD vpath device configuration in memory with the current hardware configuration. The commands that cfgvpath issues to the SDD driver are:
v Add an SDD vpath device.
v Remove an SDD vpath device; this will fail if the device is busy.
v Add a path to an SDD vpath device. (Only for SDD drivers on Linux 2.6 kernels.)
v Remove a path for an SDD vpath device; this will fail deletion of the path if the device is busy, but will set the path to DEAD and OFFLINE. (Only for SDD drivers on Linux 2.6 kernels.)

Note: Although unlikely, it is possible that the procedure provided by the HBA vendor to force a re-scan of the bus can cause SCSI device name slippage in the OS, which means the same device file name refers to different LUNs before and after the re-scan. For example, before the dynamic LUN addition, /dev/sda denoted a path to LUN 0, but after LUN 1 is added dynamically, /dev/sda now refers to a path to LUN 1. In such a scenario, running sdd restart is required. In SDD 1.6.1.0 versions and later, cfgvpath will automatically refuse to configure the added LUN if name slippage has occurred.
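Name slippage can be detected by snapshotting the serial-to-device-name mapping before a re-scan and comparing it afterwards. The sketch below is illustrative only; SDD 1.6.1.0 and later perform an equivalent check internally:

```python
def detect_name_slippage(before, after):
    """before/after: dict mapping LUN serial -> device file name.

    Returns the sorted list of serials whose device name changed
    across the re-scan, that is, the LUNs affected by name slippage.
    """
    return sorted(serial for serial, dev in before.items()
                  if serial in after and after[serial] != dev)

# Before the re-scan /dev/sda pointed at LUN 0; afterwards the
# same file name points at the newly added LUN 1.
before = {'LUN0': '/dev/sda'}
after = {'LUN0': '/dev/sdb', 'LUN1': '/dev/sda'}
slipped = detect_name_slippage(before, after)
```

A non-empty result corresponds to the scenario in the note above, where sdd restart (rather than cfgvpath alone) is required.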

Uninstalling SDD

You must unload the SDD driver before uninstalling SDD. Perform the following steps to remove SDD from a Linux host system:
1. Log on to your Linux host system as the root user.
2. Enter sdd stop to remove the driver.
3. Enter rpm -e IBMsdd to remove the SDD package.
4. Verify the SDD removal by entering either rpm -q IBMsdd or rpm -ql IBMsdd. If you successfully removed SDD, output similar to the following is displayed:
   package IBMsdd is not installed

Note: The sdd stop command will not unload a driver that is in use.

Mounting SDD devices with automount

This procedure describes how to automatically mount vpath partitions using the program automount. automount is a daemon that includes a kernel component that will automatically mount partitions when accessed. For example, if you had an SDD device vpatha configured to automount at the mount point /mnt/vpatha, vpatha would not be mounted until any program tried to access something on that mount point. Once automount has automatically mounted the device, it can also automatically unmount the device after a certain timeout value, unless otherwise specified.

Setting up automount

The autofs daemon should be set up at boot time by default. To check this, issue the following command:

chkconfig --list autofs

The output of the command should state the runlevels to which autofs is set. For example:

autofs 0:off 1:off 2:off 3:on 4:on 5:on 6:off

This output indicates that autofs is running on runlevels 3, 4, and 5, which should be the default setting. If you notice that the autofs daemon is not running on runlevels 3, 4, and 5, issue the following commands to ensure that it will run on startup:

On SUSE: chkconfig autofs 345
On Red Hat: chkconfig --level 345 autofs on
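The runlevel check can be automated by parsing the chkconfig --list autofs output shown above. This Python fragment is an illustration only, not an IBM-supplied tool:

```python
def autofs_runlevels_on(chkconfig_line):
    """Parse a line such as
    'autofs 0:off 1:off 2:off 3:on 4:on 5:on 6:off'
    and return the set of runlevels that are 'on'."""
    on = set()
    for token in chkconfig_line.split()[1:]:
        level, _, state = token.partition(':')
        if state == 'on':
            on.add(int(level))
    return on

line = 'autofs 0:off 1:off 2:off 3:on 4:on 5:on 6:off'
levels = autofs_runlevels_on(line)
# autofs should be enabled on runlevels 3, 4, and 5.
needs_fix = not {3, 4, 5} <= levels
```

If `needs_fix` is True, run the chkconfig command for your distribution as shown above.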

Configuring automount

Use the following steps to configure automount:

1. Configure the master map file. Automount configuration requires the configuration of the master map file, /etc/auto.master. The format of the file is the following:

   [mount point] [map file] [options]

   where:

   mount point
      This variable will be the master mount point under which all the vpath devices will be mounted. For example, /mnt or /vpath (note that it is an absolute path).

      Note: The mount point that you specify will be mounted over by autofs. That means that whatever items you had mounted at that mount point will be invisible once automount is activated. Thus, ensure that you do not have conflicting mount points for separate applications and that, if you plan to mount other things under the master mount point, you do so with automount and not within fstab or another facility or script. If the conflict is unavoidable, change the automount master mount point to a nonconflicting mount point to prevent problems from occurring.

   map file
      This is another separate file that will describe under which names certain devices will be mounted and the mount variables for the device. Usually, it is named after the mount point, such as auto.mnt or auto.vpath. It will usually reside under /etc.

   options
      These are the options that you can specify and can be referenced by the automount man page. The most relevant setting is the --timeout setting. The timeout setting is the number of seconds that automount will wait for mount point access before unmounting that mount point. If you set this value to 0, automount will not attempt to unmount the master mount point (that is, it will remain permanently mounted unless it is manually unmounted). The default setting is 5 minutes.

   The following example shows a sample auto.master file:

   /vpath /etc/auto.vpath --timeout=0

2. Configure the secondary map file. The secondary map file is the file referred to by the file /etc/auto.master. The format of this map file is:

   [secondary mount point] [mount options] [device name]

   where:

   secondary mount point
      The secondary mount point is the mount point relative to the master mount point. For example, if you wanted vpatha to be mounted at /vpath/vpatha, you would set this secondary mount point to vpatha.

   mount options
      The mount options are standard options passed to the Linux mount command using the -o option. The only difference is that you can use the option fstype to specify the exact filesystem type of the device. For example, you can use ext2, ext3, reiserfs, and so on for the fstype. Set the fstype to the correct value and use the two options defaults and check=normal. Defaults will give some values to the filesystem that are standard for most Linux operating environments. The check=normal option ensures that certain sanity checks are made on the filesystem before mounting. You can set check=strict to ensure even stricter checking rules during mount time, but performance might be degraded. Most modern filesystems check themselves after a certain number of mounts. You can find the other options under the man page for mount.

   device name
      This is the SDD vpath device to be mounted at the secondary mount point. Note that, as the following example shows, the device name is preceded by a colon in the map file.

   The following example shows a sample auto.vpath file:

   vpatha -fstype=ext3,defaults,check=normal :/dev/vpatha
   vpathi -fstype=ext2,defaults,check=normal :/dev/vpathi

3. Capture your new file settings. Perform one of the following steps:
   v Reboot the system.
   v Run /etc/init.d/autofs restart.

Test with a reboot cycle at least once to ensure that autofs is loaded with the current map files and that the system will automatically mount the devices correctly.

Sample mount output

The following samples show example output.

Mount output with automount loaded and before accessing /vpath/vpatha:

/dev/hda3 on / type ext3 (rw)
none on /proc type proc (rw)
none on /dev/pts type devpts (rw,gid=5,mode=620)
/dev/hda1 on /boot type ext3 (rw)
none on /dev/shm type tmpfs (rw)
automount(pid14846) on /vpath type autofs (rw,fd=4,pgrp=14846,minproto=2,maxproto=4)

Mount output after accessing /vpath/vpatha:

/dev/hda3 on / type ext3 (rw)
none on /proc type proc (rw)
none on /dev/pts type devpts (rw,gid=5,mode=620)
/dev/hda1 on /boot type ext3 (rw)
none on /dev/shm type tmpfs (rw)
automount(pid14846) on /vpath type autofs (rw,fd=4,pgrp=14846,minproto=2,maxproto=4)
/dev/vpatha on /vpath/vpatha type ext3 (rw,check=normal)

pgrp=16309. configured. or add the mount -a command to rc.fd=4. Also. if you want to add the mount points over SDD devices in /etc/fstab and have them mounted by mount -a during system startup time. and its filesystems mounted.check=normal) /dev/vpathi on /vpath/vpathi type ext2 (rw. This is because LVM has a dependency on SDD vpath devices. SLES 9 and RHEL 4) requires some additional configuration that is discussed at the end of this section. the SDD module must be loaded and vpath devices configured before LVM is loaded. If this script is not enabled. verify that other applications that need to use SDD vpath devices or their mount points are started after SDD has been started. Note: This procedure will describe how to set up SDD to load before LVM using boot up scripts. Supporting this type of environment requires that SDD also be placed Chapter 5.gid=5. Using Linux Logical Volume Manager with SDD Using Linux Logical Volume Manager (LVM) with SDD requires some configuration to ensure the correct behavior. One common reason why LVM would be loaded in the initrd is because the system is booting off of an LVM device for it's root disk. ensure that you make the following change in /opt/IBMsdd/bin/sdd.00 75 This allows the SDD driver tp start as early as possible so that other RC scripts that mount file systems as part of the startup sequence will mount vpath devices configured by SDD. either enable it using chkconfig.rcscript: Change: # chkconfig: . that is. However. The usual startup script that calls mount -a is S25netfs.mode=620) /dev/hda1 on /boot type ext3 (rw) none on /dev/shm type tmpfs (rw) automount(pid16309) on /vpath type autofs (rw./dev/hda3 on / type ext3 (rw) none on /proc type proc (rw) none on /dev/pts type devpts (rw. LVM 2 (on 2. In order to use SDD with the Linux LVM. You then need to issue chkconfig sdd on to configure SDD at system startup.25 75 to: # chkconfig: .6 kernels. 
This procedure might not work if LVM is loaded in the initrd (initial ramdisk). Use chkconfig --list sdd to verify the run levels that sdd is configured to start. on Red Hat Linux. and therefore SDD must be loaded prior to LVM startup. modify the run levels by using the --level option in chkconfig to adjust the levels to values that are appropriate for the system configuration. you must not enable the autofsck option (which is done by setting the sixth field of the new entry to 0).check=normal) Use automount to mount SDD vpath devices.local so that any entries in /etc/fstab that have not yet been mounted will be attempted. Also.maxproto=4) /dev/vpatha on /vpath/vpatha type ext3 (rw. If the run levels are not correct.minproto=2. Using SDD on a Linux host system 229 .
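The chkconfig-header edit described above can be scripted. The following sketch is illustrative only: it builds a sample file containing the stock header rather than editing a real sdd.rcscript, and the header content shown is an assumption based on the text above.

```shell
# Create a sample init script header like the one in sdd.rcscript
# (illustrative content only; the real file contains much more).
tmpdir=$(mktemp -d)
cat > "$tmpdir/sdd.rcscript" <<'EOF'
#!/bin/sh
# chkconfig: - 25 75
# description: loads the SDD driver and configures vpath devices
EOF

# Rewrite the start priority from 25 to 00 so the script sorts first.
sed -i 's/^# chkconfig: - 25 75$/# chkconfig: - 00 75/' "$tmpdir/sdd.rcscript"

grep '^# chkconfig:' "$tmpdir/sdd.rcscript"
```

After editing the real script, re-run chkconfig sdd on so that the init links are regenerated with the new priority.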

Using LVM on SUSE

LVM with SDD on SLES 8 is supported only on LVM version lvm-1.0.5-129 or later. Upgrade your LVM to this version or later using the SUSE maintweb update; otherwise your LVM will not work correctly with SDD vpath devices. LVM 2 with SDD on SLES 9 will work after following the LVM 2-specific directions at the end of this section.

In order to get SDD to load before LVM, you must load SDD at boot time using the bootup scripts. Complete the following steps:

1. Enter cd /etc/init.d/ to change to the /etc/init.d/ directory.
2. Enter cp /opt/IBMsdd/bin/sdd.rcscript boot.sdd to copy the script into the /etc/init.d/ directory as boot.sdd.
3. Edit the boot.sdd file and add the following line to the beginning of the start function:
   mount -n -o remount,rw / 2> /dev/null
   Because the SDD configuration utility (cfgvpath) must be able to write certain configuration parameters to the root disk, this line is needed to remount the root filesystem in read/write mode.
4. Add another line at the end of the start function to remount the root filesystem back into read-only mode. This restores the mount state before and after the start function; the system remounts the filesystem to read/write at a later point in the boot process. Use the following line to remount in read-only mode (the only change from the line above is ro):
   mount -n -o remount,ro / 2> /dev/null
   The beginning of your start function should look like the following:
   start() {
   mount -n -o remount,rw / 2> /dev/null    # ADDED THIS LINE
   echo -n "Starting $dev_name driver load: "
   rm -f ${driver_dir}/${driver}.o
   ...
   mount -n -o remount,ro / 2> /dev/null    # ADDED THIS LINE
   }
5. Enter cd /etc/init.d/boot.d to change to the /etc/init.d/boot.d directory.
6. Create a link named Sxxboot.sdd, with xx being a number smaller than the one that is on the LVM boot script link. For example, the LVM on this system is at S04boot.lvm:
   # ls -l | grep lvm
   lrwxrwxrwx  1 root root  11 Aug 12 17:35 S04boot.lvm -> ../boot.lvm*
   Because LVM is loading up at S04, you must set SDD to at least S03 in order to avoid this problem. Therefore, create a link to the boot.sdd file that was just modified:
   # ln -s ../boot.sdd S03boot.sdd
   Here are the results:
   # ls -l
   lrwxrwxrwx  1 root root  11 Mar 11 12:03 S03boot.sdd -> ../boot.sdd*
   lrwxrwxrwx  1 root root  11 Aug 12 17:35 S04boot.lvm -> ../boot.lvm*
   Because SUSE uses this numbering scheme to determine which script is run first at boot time, this ensures that the SDD script is run before the LVM script.
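The numbering rule in step 6 can be derived from the ls output itself. The helper below is hypothetical, not part of the documented procedure: it parses a link name such as S04boot.lvm and prints an SDD link name that sorts one position earlier. It assumes bash (for the 10# base prefix in arithmetic).

```shell
# Given the LVM boot link name, derive an SDD link name that sorts
# one position earlier (S04boot.lvm -> S03boot.sdd).
lvm_link="S04boot.lvm"                     # as shown by: ls -l | grep lvm
num=${lvm_link#S}                          # strip leading "S" -> 04boot.lvm
num=${num%%boot*}                          # keep the sequence number -> 04
sdd_num=$(printf '%02d' $((10#$num - 1)))  # 10# forces base 10 despite leading 0
sdd_link="S${sdd_num}boot.sdd"
echo "$sdd_link"
```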

7. If you have SDD starting in the runlevel init scripts, you must shut off the script. To do this, issue the chkconfig command:
   chkconfig -s sdd off
8. Configure LVM. Reboot the system, and the LVM configuration should come up after reboot using SDD vpath devices.

Using LVM on Red Hat

LVM with SDD on RHEL 3 is supported only on LVM version lvm-1.0.8-14 or later. This is the LVM level that comes packaged with RHEL 3 Update 6. Upgrade your LVM to this version or later using the Red Hat up2date utility; otherwise your LVM will not work correctly with SDD vpath devices. LVM 2 with SDD on RHEL 4 will work after following the LVM 2-specific directions at the end of this section.

In order to get SDD to load before LVM, use the following steps to cause SDD to load at boot time:

1. Edit the file /etc/rc.sysinit. Locate the block of commands that starts with the following:
   # Remount the root filesystem read-write.
   update_boot_stage RCmountfs
   state=`awk '/ \/ / && ($3 !~ /rootfs/) { print $4 }' /proc/mounts`
   [ "$state" != "rw" -a "$READONLY" != "yes" ] && \
     action $"Remounting root filesystem in read-write mode: " mount -n -o remount,rw /
   ...
2. Append the following to the end of that block of commands, before the # LVM initialization comment, or, on RHEL 4, before the # LVM2 initialization comment:
   # Starting SDD
   /etc/init.d/sdd start
3. The affected section of the rc.sysinit file should look like this:
   # Remount the root filesystem read-write.
   update_boot_stage RCmountfs
   state=`awk '/ \/ / && ($3 !~ /rootfs/) { print $4 }' /proc/mounts`
   [ "$state" != "rw" -a "$READONLY" != "yes" ] && \
     action $"Remounting root filesystem in read-write mode: " mount -n -o remount,rw /
   # Starting SDD
   /etc/init.d/sdd start
   # LVM initialization
   ...
4. If you have SDD starting in the runlevel init scripts, you need to shut off the script. You can do this using the chkconfig command:
   chkconfig sdd off
5. Configure LVM.
6. Reboot the system, and the LVM configuration should come up after reboot using SDD vpath devices.

LVM 2 general configuration information

The information in this section applies only to systems running LVM version 2. This means that the system should be running a 2.6 kernel, such as SLES 9 or RHEL 4. LVM2 refers to a new userspace toolset that provides logical volume management facilities on Linux. In order for LVM2 to work on your machine, you must have the LVM2 rpm installed; look on the OS installation CDs for the LVM2 package.

In order to get LVM2 to run with SDD, several changes must be made to the LVM configuration file /etc/lvm/lvm.conf. As always, back up the file before attempting to modify it. Under the devices section of this file, there are two values that must be modified:

1. Value of filter. In the file, you might see the default as the following:
   filter = [ "a/.*/" ]
   or, on some distributions, a default that accepts every block device except udev names:
   filter = [ "r|/dev/.*/by-path/.*|", "r|/dev/.*/by-id/.*|", "a/.*/" ]
   Either filter is too broad for SDD, because it recognizes both SDD vpath devices as well as the underlying paths (that is, /dev/sdxxx) to each SDD vpath device. You can narrow the regular expression to accept only vpath devices and not the underlying SCSI disk devices:
   filter = [ "a/vpath[a-z]*/", "r/.*/" ]
   This regular expression accepts all vpath devices and rejects all other devices under /dev. This is the simplest example; adjust it according to your environment.
2. Value of types. In the file, you will see that it is commented out:
   # types = [ "fd", 16 ]
   Delete the comment marker, and replace fd with vpath. This allows LVM to add vpath to its list of internally recognized devices:
   types = [ "vpath", 16 ]
   The partition number should stay at 16.

After making these two changes, save the lvm.conf file. You should be able to run pvcreate on vpath devices (that is, /dev/vpatha) and create volume groups using vgcreate.
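The intent of the narrowed filter can be sanity-checked outside LVM. Note that LVM applies its "a/.../" and "r/.../" patterns with its own first-match-wins logic, so the sketch below only approximates that behavior with grep -E; it is an illustration, not LVM code.

```shell
# Approximate the filter [ "a/vpath[a-z]*/", "r/.*/" ]: a device name is
# accepted if it matches the accept pattern, otherwise it falls through
# to the catch-all reject.
accept='vpath[a-z]*'
classify() {
    if echo "$1" | grep -qE "$accept"; then
        echo "accept $1"
    else
        echo "reject $1"
    fi
}
classify /dev/vpatha
classify /dev/sda
```

With this filter in place, pvcreate /dev/vpatha is scanned by LVM while the underlying /dev/sd paths are ignored, which is exactly the behavior the text above calls for.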
Booting Linux over the SAN with SDD

This section discusses the following procedures for remote booting Linux with SDD:
v "SAN Boot instructions for RHEL 3 with IBM SDD (x86)"
v "SAN Boot instructions for RHEL 3 with IBM SDD (ppc)" on page 240
v "SAN Boot Instructions for SLES 8 with IBM SDD (x86)" on page 249
v "SAN Boot Instructions for SLES 9 with IBM SDD (x86)" on page 258
v "SAN Boot instructions for SLES 9 with IBM SDD (ppc)" on page 264
v "SAN Boot Instructions for SLES 9 with IBM SDD (x86) and LVM 2" on page 272
v "SAN boot instructions for RHEL 4 with IBM SDD (x86)" on page 279
v "SAN Boot instructions for RHEL 4 with IBM SDD (ppc)" on page 285
v "SAN boot instructions for RHEL 4 with IBM SDD (x86) and LVM 2" on page 291
v "Using lilo with SDD (remote boot) on x86" on page 298

SAN Boot instructions for RHEL 3 with IBM SDD (x86)

The following procedure is used to install RHEL 3 x86 with fibre-channel-connected storage. This procedure assumes that no installation is present to work from and that, when completed, the boot and swap devices will be running on IBM SDD vpath devices. (This procedure was performed on an ESS Model 800.)

Prerequisite steps

1. Ensure that the following conditions exist before continuing with this procedure:
   a. The installation target MUST be single-pathed before installing RHEL 3.
   b. Have a copy of RHEL 3 x86 either network-accessible or on CD-ROM.
   c. Be familiar with the RHEL 3 installation. This includes understanding which packages will be installed.
   d. Be familiar with how to set up a SAN network or direct-attached SAN storage devices so that the host system can access LUNs from those storage systems.
   e. Be familiar with creating LUNs on the ESS Model 800 so that the host can access the ESS Model 800 devices.

   f. The boot device that will be used for installation and booting should be at least 4 GB in size. This is the minimum size for installing a base package set from the installation media to the boot device. It is also recommended that the swap device be at least the size of the physical memory that is configured in the host.
   g. Ensure that there will be network access to the system.
   h. Optionally, have an understanding of how the Linux kernel boot process functions and what processes and procedures are used to boot a Linux distribution from a local storage device.
   i. Although SDD functions correctly in single-path environments, it is recommended that there be redundant physical paths to the devices from the host after installation of RHEL 3.
2. Configure QLogic devices.
   Note: For ease of installation and to avoid issues with internal SCSI or IDE controllers, it is recommended that all internal disk drive controllers be disabled. This procedure assumes that this has been done.
   v Verify that the QLogic SAN HBA devices that are configured for the host have been set up to have their BOOT BIOS enabled. This permits discovery and use of SAN disk devices during this procedure.
   v While in the QLogic Utility, configure the ESS Model 800 device from which the system will boot. If the utility cannot see the correct device, check the SAN and ESS Model 800 configurations before continuing.
3. Configure the boot/root/swap devices.
   Note: Because of the way Linux discovers SAN devices, and if SAN devices have already been configured for multiple path access, Linux will discover the same physical device multiple times, once for each logical path to the device. Note which device will be used for the installation before proceeding, that is, /dev/sda.
4. Installation media.
   The installation media, that is, the source for installation, can be CD-ROM, NFS, HTTP, FTP, and so forth. You can use any of the installation sources that are listed. For this installation, an NFS-exported set of CD-ROMs was used.
5. Install.
   v From the BIOS menus, select the installation source to boot from.
   v Verify that the QLogic XXXXXXX SAN HBA module is loaded and that the SAN devices that will be used for installation have been detected successfully.
   v Select the desired options until arriving at the Installation Settings. Here, modifications of the partitioning settings are required for this installation. This is to make sure that the device noted in the previous step will be used for the root/boot installation target.
   v Note: The details of installation and partitioning are not written up here. See the installation procedures to determine which packages are needed for the type of system being installed. For simplicity, these instructions assume that the boot, root, and swap devices are all located on the same device; however, this is not a requirement for the installation.
6. Rebooting.
   v On reboot, modify the BIOS to boot from hard disk. The system should now boot to the newly installed OS.

Verifying the SAN boot configuration

The system should now be booted in single-path mode from the SAN. Verify this by looking at the mounted devices and where the root is mounted, using the df command. Also verify that the swap device is active, using swapon -s, and that the other configured partitions are correctly mounted. This completes the single-path boot from SAN.

v At this point the installed boot device can be set as the default boot device for the system. This step is not required, but is suggested because it enables unattended reboots after this procedure is complete.

Upgrading the SDD driver: at the end of this document are instructions on how to upgrade the SDD driver.

To modify the boot/root and other devices for booting using the SDD driver, continue with the following steps.

Note: The following instructions are examples, and the values used herein might be different on your systems. In some cases, there will not be a /boot mount; rather, the boot files will reside in the directory /boot on the root "/" mounted file system. It is recommended, but not required, that vpatha be used as the vpath boot device.

1. Install the IBM SDD driver.
   Download and install the IBM SDD driver for the kernel version being used. SDD is packaged in an RPM format and can be installed using the rpm command. See "Installing SDD" on page 213 for more information.
2. Gather SDD data in preparation for configuring /etc/fstab, menu.lst and /boot/initrd:
   sdd start
   The /etc/vpath.conf file has now been created. You must ensure that vpatha is the root device. (In this procedure, sda is the root device.) Use the cfgvpath query command to obtain the LUN ID of the root's physical device. The cfgvpath query command produces output similar to the following example; note that some data from the output has been modified for ease of reading.
   cfgvpath query
   /dev/sda (8,  0) host=0 ch=0 id=0 lun=0 vid=IBM pid=2105800 serial=12020870 lun_id=12020870
   /dev/sdb (8, 16) host=0 ch=0 id=0 lun=1 vid=IBM pid=2105800 serial=12120870 lun_id=12120870
   /dev/sdc (8, 32) host=0 ch=0 id=0 lun=2 vid=IBM pid=2105800 serial=12220870 lun_id=12220870
   /dev/sdd (8, 48) host=0 ch=0 id=0 lun=3 vid=IBM pid=2105800 serial=12320870 lun_id=12320870
   The lun_id for /dev/sda is 12020870. Edit the /etc/vpath.conf file using the lun_id for vpatha. Remove all other entries from this file (they will be automatically added later by SDD).
3. Modify the /etc/fstab file.
   There is a one-to-one correlation between sd and vpath minor devices, that is, sda1 and vpatha1. Major devices, however, might not necessarily correlate; for example, sdb1 could be vpathd1. Because /boot was installed on /dev/sda1 and vpatha corresponds to sda in the /etc/vpath.conf file, /dev/vpatha1 will be the mount device for /boot.
   Example: Change from:
   LABEL=/1     /          ext3    defaults        1 1
   LABEL=/boot1 /boot      ext3    defaults        1 2
   none         /dev/pts   devpts  gid=5,mode=620  0 0
   none         /proc      proc    defaults        0 0
   none         /dev/shm   tmpfs   defaults        0 0
   /dev/sda2    swap       swap    defaults        0 0

   To:
   /dev/vpatha3 /          ext3    defaults        1 1
   /dev/vpatha1 /boot      ext3    defaults        1 2
   none         /dev/pts   devpts  gid=5,mode=620  0 0
   none         /proc      proc    defaults        0 0
   none         /dev/shm   tmpfs   defaults        0 0
   /dev/vpatha2 swap       swap    defaults        0 0
4. Prepare the initrd file.
   The [initrd file] refers to the current initrd in /boot. The correct initrd can be determined by the following method:
   ls -1A /boot | grep initrd | grep $(uname -r)
   cd /boot
   cp [initrd file] initrd.vp.gz
   gunzip initrd.vp.gz
   mkdir /boot/mnt
   For ext2 file system initrds, you might need to resize the initrd file system:
   dd if=/dev/zero of=initrd.vp seek=33554432 count=1 bs=1
   losetup /dev/loop0 initrd.vp
   e2fsck -f /dev/loop0
   resize2fs -f /dev/loop0
   losetup -d /dev/loop0
   Note: Adding the ramdisk_size= option to the kernel entry in the boot loader file is required after increasing the size of the initrd file. For resizing the initrd to 33554432, add ramdisk_size=34000 to the kernel line in the /boot/grub/menu.lst file.
5. Modify the /boot/grub/menu.lst file. Add an entry for the SDD boot using initrd.vp:
   title Red Hat Enterprise Linux AS (2.4.21-32.0.1.ELsmp) with vpath/SDD
       root (hd0,0)
       kernel /vmlinuz-2.4.21-32.0.1.ELsmp ro root=/dev/vpatha3 ramdisk_size=34000
       initrd /initrd.vp
6. Mount the initrd file system:
   mount -o loop -t ext2 initrd.vp /boot/mnt
7. Make additional directories in /boot/mnt:
   cd /boot/mnt
   mkdir mnt
   mkdir lib/tls
   mkdir -p opt/IBMsdd/bin
   chmod -R 640 opt/IBMsdd
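The relationship between the dd seek value and the ramdisk_size= option shown above is simple arithmetic: seek=33554432 grows the initrd file to 33554432 bytes (32 MB), while ramdisk_size is expressed in 1 KB blocks, so the minimum value that fits the file is 33554432 / 1024 = 32768; the procedure's value of 34000 simply rounds up for headroom. A quick check, using the same numbers:

```shell
# ramdisk_size is given in 1 KB blocks; the initrd was grown to
# 33554432 bytes, so the minimum ramdisk_size that holds it is:
bytes=33554432
kblocks=$((bytes / 1024))
echo "$kblocks"    # minimum; the procedure uses 34000 for headroom
```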

8. Copy the following /etc files to /boot/mnt/etc:
   cp /etc/fstab /boot/mnt/etc/
   cp /etc/vpath.conf /boot/mnt/etc/
   cp /etc/group /boot/mnt/etc/
   cp /etc/passwd /boot/mnt/etc/
   cp /etc/nsswitch.conf /boot/mnt/etc/
9. Modify the /boot/mnt/etc/nsswitch.conf file if it is not as stated below.
   a. Change:
      passwd: compat
      To:
      passwd: files
   b. Change:
      group: compat
      To:
      group: files
10. Copy the correct sdd-mod to the initrd file system. Use the uname -r command to determine the correct sdd-mod and create a soft link. Example: the uname -r command will return something similar to 2.4.21-32.0.1.ELsmp.
   cp /opt/IBMsdd/sdd-mod.o-`uname -r` /boot/mnt/lib/
   cd /boot/mnt/lib
   ln -s sdd-mod.o-`uname -r` sdd-mod.o
   cd ../
11. Copy the required library files for cfgvpath. Use the ldd command to determine the library files and locations.
   Example:
   ldd /opt/IBMsdd/bin/cfgvpath | awk '{print $(NF-1)}' | grep lib
   The ldd command returns something similar to the following:
   /lib/tls/libc.so.6
   /lib/ld-linux.so.2
   XXXX
   XXXX
   These files must be copied to the /boot/mnt/lib/tls/ and /boot/mnt/lib/ directories, respectively.
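The library-copy step can be automated: feed the ldd output through the awk pipeline shown above and copy each path into the initrd tree, recreating its parent directory. The sketch below runs against a fake library tree built in a temp directory rather than real ldd output, so the file names are illustrative.

```shell
# Copy a list of library paths (as produced by the ldd | awk pipeline)
# into an initrd tree, recreating each library's parent directory.
root=$(mktemp -d)           # stand-in for the system and /boot/mnt trees
mkdir -p "$root/src/lib/tls"
touch "$root/src/lib/tls/libc.so.6" "$root/src/lib/ld-linux.so.2"

printf '%s\n' "$root/src/lib/tls/libc.so.6" "$root/src/lib/ld-linux.so.2" |
while read -r lib; do
    rel=${lib#"$root/src"}                  # e.g. /lib/tls/libc.so.6
    mkdir -p "$root/initrd$(dirname "$rel")"
    cp "$lib" "$root/initrd$rel"
done

ls "$root/initrd/lib/tls"
```

On a real system the input would be `ldd /opt/IBMsdd/bin/cfgvpath | awk '{print $(NF-1)}' | grep lib` and the destination /boot/mnt.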

12. Copy the following binaries and change their permissions to 755:
   cp /opt/IBMsdd/bin/cfgvpath /boot/mnt/opt/IBMsdd/bin/
   cp /bin/awk /boot/mnt/bin/
   cp /bin/cat /boot/mnt/bin/
   cp /bin/tar /boot/mnt/bin/
   cp /bin/grep /boot/mnt/bin/
   cp /bin/chmod /boot/mnt/bin/
   cp /bin/chown /boot/mnt/bin/
   cp /bin/mknod /boot/mnt/bin/
   cp /bin/mount /boot/mnt/bin/
   cp /bin/ls /boot/mnt/bin/
   cp /bin/umount /boot/mnt/bin/
   cp /bin/cp /boot/mnt/bin/
   cp /bin/ash /boot/mnt/bin
   cp /bin/rm /boot/mnt/bin
   cp /bin/sh /boot/mnt/bin
   cp /bin/ps /boot/mnt/bin
   cp /bin/sed /boot/mnt/bin
   cp /bin/date /boot/mnt/bin
   cp /usr/bin/cut /boot/mnt/bin
13. Copy the required library files for each binary copied in step 12. Use the ldd command to determine the library files and locations.
   Note: Many binaries use the same libraries, so there might be duplication when copying. Also copy the following libraries:
   cp /lib/libnss_files.so.2 /boot/mnt/lib
   cp /lib/libproc.so.2.0.17 /boot/mnt/lib
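The long cp list above is mechanical and can be collapsed into a loop. This is an equivalent sketch, run here against scratch directories populated with dummy files so that nothing outside a temp tree is touched; on a real system the sources would be /bin and /usr/bin and the destination /boot/mnt/bin.

```shell
# Copy the tool set into the initrd's bin directory and set permissions.
mnt=$(mktemp -d)            # stand-in for /boot/mnt
mkdir -p "$mnt/bin"
srcdir=$(mktemp -d)         # stand-in for /bin, populated with dummies
tools="awk cat tar grep chmod chown mknod mount ls umount cp ash rm sh ps sed date cut"
for b in $tools; do
    touch "$srcdir/$b"
done

for b in $tools; do
    cp "$srcdir/$b" "$mnt/bin/"
done
chmod 755 "$mnt/bin"/*

ls "$mnt/bin" | wc -l
```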

14. Modify the /boot/mnt/linuxrc file. For some storage systems with Linux 2.4 kernels, an additional option must be appended to the line where the scsi_mod module is loaded.
   Change:
   insmod /lib/scsi_mod.o
   To:
   insmod /lib/scsi_mod.o max_scsi_luns=256
   The following is the original linuxrc script in the initrd file system:
   #!/bin/nash
   echo "Loading scsi_mod.o module"
   insmod /lib/scsi_mod.o
   echo "Loading sd_mod.o module"
   insmod /lib/sd_mod.o
   echo "Loading qla2300.o module"
   insmod /lib/qla2300.o
   echo "Loading jbd.o module"
   insmod /lib/jbd.o
   echo "Loading ext3.o module"
   insmod /lib/ext3.o
   echo Mounting /proc filesystem
   mount -t proc /proc /proc
   echo Creating block devices
   mkdevices /dev
   echo Creating root device
   mkrootdev /dev/root
   echo 0x0100 > /proc/sys/kernel/real-root-dev
   echo Mounting root filesystem
   mount -o defaults --ro -t ext3 /dev/root /sysroot
   pivot_root /sysroot /sysroot/initrd
   umount /initrd/proc
   The following is the modified linuxrc script in the initrd file system:
   #!/bin/nash
   echo "Loading scsi_mod.o module"
   insmod /lib/scsi_mod.o max_scsi_luns=256
   echo "Loading sd_mod.o module"
   insmod /lib/sd_mod.o
   echo "Loading qla2300.o module"
   insmod /lib/qla2300.o
   echo "Loading jbd.o module"
   insmod /lib/jbd.o
   echo "Loading ext3.o module"
   insmod /lib/ext3.o
   echo Mounting /proc filesystem
   mount -t proc /proc /proc
   echo Creating block devices
   mkdevices /dev
   echo Loading SDD module
   insmod /lib/sdd-mod.o
   echo Running cfgvpath
   /opt/IBMsdd/bin/cfgvpath
   echo Creating block devices
   mkdevices /dev
   echo Copying over device files
   mount -o rw -t ext3 /dev/vpatha3 /sysroot
   mkdevices /sysroot/dev
   umount /sysroot
   echo 0x0100 > /proc/sys/kernel/real-root-dev
   echo Mounting root filesystem
   mount -o defaults --ro -t ext3 /dev/vpatha3 /sysroot
   pivot_root /sysroot /sysroot/initrd
   umount /initrd/proc
15. Prepare the mounted initrd file system for rebooting:
   cd /boot
   umount /boot/mnt
   gzip initrd.vp
   mv initrd.vp.gz initrd.vp
   cd /
   shutdown -r now
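The scsi_mod change shown in step 14 can be applied with sed rather than by hand, since linuxrc is a plain text file that nash interprets at boot. The sketch below operates on a minimal stand-in copy; the file content is illustrative, not a full linuxrc.

```shell
# Append max_scsi_luns=256 to the scsi_mod insmod line of a linuxrc copy.
work=$(mktemp)
cat > "$work" <<'EOF'
echo "Loading scsi_mod.o module"
insmod /lib/scsi_mod.o
echo "Loading sd_mod.o module"
insmod /lib/sd_mod.o
EOF

sed -i 's|^insmod /lib/scsi_mod\.o$|insmod /lib/scsi_mod.o max_scsi_luns=256|' "$work"
grep '^insmod' "$work"
```

The anchored pattern (^ and $) keeps the substitution from touching the echo line, which also contains the string scsi_mod.o.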

16. Once booted, verify that vpath devices are being used. Add all other paths and reboot again. The following commands can be used to verify the use of vpath devices:
   mount
   swapon -s
   lsvpcfg
   datapath query device

Upgrading the SDD

Use the following procedure to upgrade SDD:
1. Place the SDD rpm update package on the system.
2. Restart the system in single-path mode (no SDD).
3. You might come to a Repair filesystem prompt. If so, enter the root password. At the prompt, remount the file system as read/write:
   mount -n -o remount,rw /
4. Remove the previous SDD driver:
   rpm -e IBMsdd
   The /etc/vpath.conf file will be saved to vpath.conf.rpmsave.
5. Install the new SDD driver:
   rpm -ivh IBMsdd-x.x.x.x-y.i686.rhel3.rpm
6. Prepare the initrd.vp for booting:
   cd /boot
   mv initrd.vp initrd.vp.gz
   gunzip initrd.vp.gz
   mount -o loop -t ext2 initrd.vp mnt
   cp /opt/IBMsdd/sdd-mod.ko-`uname -r` /boot/mnt/lib/
   Verify that the soft link sdd-mod.ko in /boot/mnt/lib points to the current sdd module.
7. Copy the new cfgvpath command and use the ldd command to verify that the correct libraries are installed for /boot/mnt/opt/IBMsdd/bin/cfgvpath:
   cp /opt/IBMsdd/bin/cfgvpath /boot/mnt/opt/IBMsdd/bin/
8. Repackage the initrd:
   umount mnt
   gzip initrd.vp
   mv initrd.vp.gz initrd.vp
   cd /
9. Reboot the system.

SAN Boot instructions for RHEL 3 with IBM SDD (ppc)

The following procedure is used to install RHEL 3 ppc onto an IBM System p JS20 LPAR with fibre-channel-connected storage. It is assumed that no installation is present to work from and that, when completed, the boot and swap devices will be running on IBM SDD vpath devices.

Prerequisite steps
1. Ensure that the following conditions exist before continuing with this procedure:
   v Have a copy of RHEL 3 either network-accessible or on CD-ROM.

   v Be familiar with the Red Hat installation. This includes understanding which packages will be installed and how to select required options through the installation.
   v Be familiar with how to set up a SAN network or direct-attached SAN storage devices so that the configured LPAR can access LUNs from the storage unit.
   v Be familiar with creating LUNs on the storage unit so that the LPAR can access the storage devices.
   v Be familiar with how to set up an LPAR and select the required resources to create a configured LPAR with processors, memory, and SAN HBAs. Also, be familiar with how to connect to and operate IBM BladeCenter control or an IBM System p LPAR.
   v Optionally, have an understanding of how the Linux kernel boot process functions and what processes and procedures are used to boot a Linux distribution from a local storage device.
   v Although SDD functions correctly in single-path environments, there should be redundant physical paths to the devices from the host (after installation).
2. Configure the root/boot/swap devices.
   v The physical boot device that will be used for installation and booting should be at least 4 GB in size. This is the minimum size for installing all packages from the installation media to the boot device. It is also recommended that the swap device be at least the size of the physical memory that is configured in the LPAR.
   v For simplicity, these instructions assume that the root/boot/swap devices are all located on the same device; however, this is not a requirement for the installation. It is not required that a /boot mount exist. In some cases, there will not be a /boot mount; rather, the boot files will reside in the directory /boot on the root "/" mount.
3. Installation media.
   v The installation media, that is, the source for installation, can be CD-ROM, NFS, HTTP, FTP, and so forth. You can use any of the installation sources that are listed. For network installs, a network port is required; for CD-ROM installs, a CD-ROM drive is required. For this installation, an NFS-exported set of CD-ROMs was used.
4. Upgrading the SDD driver and/or OS.
   v At the end of this document are instructions on how to upgrade the SDD driver.
   v Each time the OS is updated or a new initrd is created, these procedures must be performed for the new OS and initrd.

Use this procedure to install RHEL 3:
1. From the SMS menu, select the installation source and boot from the media.
2. Configure the Fibre Channel adapters.
   v Verify that the SAN HBA devices that are configured for the system have been set up to have their BOOT BIOS enabled. This permits discovery and use of SAN disk devices during this procedure.
   v Verify that the Fibre HBA module is loaded and that the SAN devices that will be used for installation have been detected successfully.
   Note: Because of the way Linux discovers SAN devices, and if SAN devices have already been configured for multiple path access, Linux will discover the same physical device multiple times, once for each logical path to the device.

   Take note which device will be used for the installation before proceeding, that is, /dev/sdb. Also note which of the Fibre HBA devices is used to discover this device, as it will be needed in a later step.
3. Install.
   v Select the desired options until arriving at the Installation Settings step of the yast install. Here, modification of the partitioning settings is required for this installation. This is to make sure that the device noted in the previous step will be used for the root/boot installation target.
   a. Select Partitioning, and go to the custom partition setup.
   b. Select the device and Custom partitioning for experts.
   c. Make sure that there is a PReP boot partition on the root/boot device and that it is the first partition.
   d. Continue to partition devices as required to complete this configuration.
   The details of installation and partitioning are not written up here. See the installation procedures to determine which packages are needed for the type of system being installed.
4. Finish the installation. If an error occurs while attempting to create the yaboot boot loader, stating that the device type of fcp is unknown, select OK and select No when asked to retry.
5. Reboot to the SMS menu. This time, the boot device that has been set up over the previous steps is ready to be booted.
   v Select to boot from a Hard Drive/SAN and select the Fibre HBA device adapter associated with the SAN disk device on which the installation was completed.
   v The installation boot device should now be listed in the bootable devices discovered in the SAN on the selected Fibre HBA.
   v Select the appropriate device and boot.

SAN boot configuration

The system should now be booted in single-path mode from the SAN. Verify this by looking at the mounted devices and where the root is mounted, using the df command. Also verify that the swap device is active, using swapon -s, and that the other configured partitions are correctly mounted. This completes the single-path boot from SAN. To modify the boot/root and other devices for booting using the SDD driver, continue with the following steps.

Note: The following instructions are examples, and the values used herein might be different on your systems. In some cases, there will not be a /boot mount; rather, the boot files will reside in the directory /boot on the root "/" mounted file system. It is recommended, but not required, that vpatha be used as the vpath boot device.

1. Install the IBM SDD driver.
   Download and install the IBM SDD driver for the kernel version being used. SDD is packaged in an RPM format and can be installed using the rpm command. See "Installing SDD" on page 213 for more information.
2. Gather SDD data in preparation for configuring /etc/vpath.conf, /etc/fstab, /etc/yaboot.conf, and /boot/initrd:
   sdd start

   The /etc/vpath.conf file has now been created. You must ensure that vpatha is the root device. (In this procedure, sda is the root device.) Use the cfgvpath query command to obtain the LUN ID of the root's physical device. The cfgvpath query command produces output similar to the following; note that some data from the output has been modified for ease of reading.
   cfgvpath query
   /dev/sda ( 8,  0) host=0 ch=0 id=0 lun=0 vid=IBM pid=2105800 serial=13320870 lun_id=13320870 ctlr_flag=0 ctlr_nbr=0 df_ctlr=0 X
       /dev/sda not configured: Either in /etc/fstab, or mounted or is a raw device
   /dev/sdb ( 8, 16) host=0 ch=0 id=0 lun=1 vid=IBM pid=2105800 serial=13E20870 lun_id=13E20870 ctlr_flag=0 ctlr_nbr=0 df_ctlr=0
   /dev/sdc ( 8, 32) host=0 ch=0 id=0 lun=2 vid=IBM pid=2105800 serial=12E20870 lun_id=12E20870 ctlr_flag=0 ctlr_nbr=0 df_ctlr=0 X
       /dev/sdc not configured: Either in /etc/fstab, or mounted or is a raw device
   /dev/sdd ( 8, 48) host=0 ch=0 id=0 lun=3 vid=IBM pid=2105800 serial=13F20870 lun_id=13F20870 ctlr_flag=0 ctlr_nbr=0 df_ctlr=0
   The lun_id for /dev/sdb is 13E20870. Edit the /etc/vpath.conf file using the lun_id for vpatha (vpatha 13E20870). Remove all other entries from this file; they will be added automatically later by SDD.
   Contents of /etc/vpath.conf:
   vpatha 13E20870
3. Extract and mount the initrd.
   The following steps unzip and extract the initrd image so that it can be modified to include the required elements to enable a vpath boot image.
   cd /boot
   Locate the initrd image used for booting. This will be the image that /etc/yaboot.conf points to. Note that the file pointed to might be a symbolic link to another file. For example, if the file name is initrd-2.4.21-32.0.1.EL.img, the correct [initrd file] can be determined by the following method:
   cd /boot
   ls -1A /boot | grep initrd | grep $(uname -r)
   Copy the file to a temporary file name with a .gz extension:
   cp [initrd file] initrd.vp.gz
   gunzip initrd.vp.gz
   Create a temporary directory where the image will be manipulated, for example, /boot/mnt. This is referred to as the image temporary directory throughout the rest of this procedure:
   mkdir /boot/mnt
   For ext2 file system initrds, you might be required to resize the initrd file system (recommended):
   dd if=/dev/zero of=initrd.vp seek=33554432 count=1 bs=1
   e2fsck -f /boot/initrd.vp
   Output similar to the following will be displayed. Answer y when prompted:

   e2fsck 1.32 (09-Nov-2002)
   Pass 1: Checking inodes, blocks, and sizes
   Pass 2: Checking directory structure
   Pass 3: Checking directory connectivity
   /lost+found not found. Create <y>? y
   Pass 4: Checking reference counts
   Pass 5: Checking group summary information
   initrd.vp: ***** FILE SYSTEM WAS MODIFIED *****
   initrd.vp: 36/2000 files (0.0% non-contiguous), 2863/8000 blocks
   [root@elm17a212 boot]# resize2fs -f /boot/initrd.vp
   Note: Adding the ramdisk_size= option to the kernel entry in the boot loader file might be required after increasing the size of the initrd file. For resizing the initrd to 33554432, add ramdisk_size=34000 to the "append" section of the boot loader configuration file.
4. Modify the /boot/initrd.
   Mount the initrd file system:
   mount -o loop -t ext2 initrd.vp /boot/mnt
   Create the following directories in the image temporary directory:
   cd /boot/mnt
   mkdir mnt
   mkdir -p lib/tls
   mkdir -p lib64/tls
   mkdir -p opt/IBMsdd/bin
   chmod -R 640 opt/IBMsdd
   Copy the following files to the following directories relative to the image temporary directory:
   cp /opt/IBMsdd/sdd-mod.o-`uname -r` /boot/mnt/lib/
   cp /opt/IBMsdd/bin/cfgvpath /boot/mnt/opt/IBMsdd/bin/
   cp /bin/awk /boot/mnt/bin/
   cp /bin/cat /boot/mnt/bin/
   cp /bin/tar /boot/mnt/bin/
   cp /bin/grep /boot/mnt/bin/
   cp /bin/chmod /boot/mnt/bin/
   cp /bin/chown /boot/mnt/bin/
   cp /bin/mknod /boot/mnt/bin/
   cp /bin/mount /boot/mnt/bin/
   cp /bin/ls /boot/mnt/bin/
   cp /bin/umount /boot/mnt/bin/
   cp /bin/cp /boot/mnt/bin/
   cp /bin/ash /boot/mnt/bin
   cp /bin/rm /boot/mnt/bin
   cp /bin/sh /boot/mnt/bin
   cp /bin/ps /boot/mnt/bin
   cp /bin/sed /boot/mnt/bin
   cp /bin/date /boot/mnt/bin
   cp /usr/bin/cut /boot/mnt/bin
   Issue the following command from the lib directory:

cd /boot/mnt/lib
ln -s sdd-mod.o-2.4.21-32.0.1.EL sdd-mod.o

5. For each of the above binary files (except sdd-mod.o), run the ldd command and verify that the listed library files exist in the image temporary directory. If they do not, copy the listed library files that do not exist to the corresponding lib and lib64 directories in the image temporary directory. Copy the required library files for cfgvpath. Use the ldd command to determine the library files and locations.

Example:

ldd /opt/IBMsdd/bin/cfgvpath | awk '{print $(NF-1)}' | grep lib

The ldd command might return something similar to the following:

/lib64/tls/libc.so.6 XXXX
/lib64/ld-linux.so.2 XXXX

These files must be copied to the /boot/mnt/lib64/tls/ and /boot/mnt/lib64/ directories respectively. Copy this additional library file:

cp /lib/libnss_files.so.2 /boot/mnt/lib

6. Copy /etc files to the image temporary directory /boot/mnt/etc:

cp /etc/vpath.conf /boot/mnt/etc
cp /etc/passwd /boot/mnt/etc
cp /etc/group /boot/mnt/etc
cp /etc/nsswitch.conf /boot/mnt/etc

7. Modify the /etc/fstab to use vpath devices for / and swap. Other devices using vpaths will also need to be changed. For the initial install, it is recommended to only work with the root/boot/swap devices and comment out other sd and hd devices until completed.

Original:

LABEL=/1    /          ext3    defaults        1 1
none        /dev/pts   devpts  gid=5,mode=620  0 0
none        /proc      proc    defaults        0 0
none        /dev/shm   tmpfs   defaults        0 0
/dev/sdd3   swap       swap    defaults        0 0
/dev/sdb3   swap       swap    defaults        0 0

Modified:

/dev/vpatha2   /          ext3    defaults        1 1
none           /dev/pts   devpts  gid=5,mode=620  0 0
none           /proc      proc    defaults        0 0
none           /dev/shm   tmpfs   defaults        0 0
#/dev/sdd3     swap       swap    defaults        0 0
/dev/vpatha3   swap       swap    defaults        0 0

cp /etc/fstab /boot/mnt/etc

8. Edit the /boot/mnt/linuxrc file.

For some storage systems with Linux 2.4 kernels, an additional option must be appended to the line where the scsi_mod module is loaded.

Change:

insmod /lib/scsi_mod.o

To:

insmod /lib/scsi_mod.o max_scsi_luns=256

Original linuxrc script in the initrd file system:

#!/bin/nash
echo "Loading scsi_mod.o module"
insmod /lib/scsi_mod.o
echo "Loading sd_mod.o module"
insmod /lib/sd_mod.o
echo "Loading qla2300.o module"
insmod /lib/qla2300.o
echo "Loading jbd.o module"
insmod /lib/jbd.o
echo "Loading ext3.o module"
insmod /lib/ext3.o
echo Mounting /proc filesystem
mount -t proc /proc /proc
echo Creating block devices
mkdevices /dev
echo Creating root device
mkrootdev /dev/root
echo 0x0100 > /proc/sys/kernel/real-root-dev
echo Mounting root filesystem
mount -o defaults --ro -t ext3 /dev/root /sysroot
pivot_root /sysroot /sysroot/initrd
umount /initrd/proc

Modified linuxrc script in the initrd file system:

#!/bin/nash
echo "Loading scsi_mod.o module"
insmod /lib/scsi_mod.o max_scsi_luns=256
echo "Loading sd_mod.o module"
insmod /lib/sd_mod.o
echo "Loading qla2300.o module"
insmod /lib/qla2300.o
echo "Loading jbd.o module"
insmod /lib/jbd.o
echo "Loading ext3.o module"
insmod /lib/ext3.o
echo Mounting /proc filesystem
mount -t proc /proc /proc
echo Creating block devices
mkdevices /dev
echo "Loading SDD module"
insmod /lib/sdd-mod.o
echo "Configuring vpath devices"
/opt/IBMsdd/bin/cfgvpath
echo Copying over device files
mount -o rw -t ext3 /dev/vpatha3 /sysroot
mkdevices /sysroot/dev
umount /sysroot
echo 0x0100 > /proc/sys/kernel/real-root-dev
echo Mounting root filesystem
mount -o defaults --ro -t ext3 /dev/vpatha3 /sysroot
pivot_root /sysroot /sysroot/initrd
umount /initrd/proc

9. Repackaging the initrd.

To repackage all of the changes that have just been made to the initrd, issue the following commands:

cd /boot
umount /boot/mnt
gzip initrd.vp
mv initrd.vp.gz initrd.vp

The initrd.vp file now contains the repackaged initrd image with the SDD driver and the modified files required to boot from a vpath.

10. Modifying root device files and updating the boot partition. Modify /etc/yaboot.conf. Add a new entry in the file and modify the entry to point at the new initrd image created in the previous step. Also modify the root device in the new entry to point to the vpath chosen from the previous steps. Remember to include the partition if required. Also make sure to modify the entry name.

Original /etc/yaboot.conf:

image=/boot/vmlinux-2.4.21-32.0.1.EL
   label=2.4.21-32.0.1.E
   read-only
   initrd=/boot/initrd-2.4.21-32.0.1.EL.img
   append="console=hvc0 root=/LABEL=/"

Modified /etc/yaboot.conf:

image=/boot/vmlinux-2.4.21-32.0.1.EL
   label=2.4.21-32.0.1.E
   read-only
   initrd=/boot/initrd-2.4.21-32.0.1.EL.img
   append="console=hvc0 root=/LABEL=/"

image=/boot/vmlinux-2.4.21-32.0.1.EL
   label=2.4.21-32.0.1.E_SDD
   read-only
   initrd=/boot/initrd.vp
   append="console=hvc0 root=/dev/vpatha3 ramdisk_size=34000"

Write the yaboot configuration to the boot partition:

ybin -b /dev/sdb1

11. Restart the system.
a. Reboot the system.
b. From the SMS menu, select the boot devices as you did before, if the boot device is not already set up as the first boot device.
c. When the yaboot prompt is shown during boot, type in the given name for the new boot image.
d. During the OS load, ensure that the IBMsdd module is loaded after the SAN disk devices are discovered.
e. Ensure that there were no errors printed to the console during boot.
reboot the system and at the yaboot prompt. Verify System has reboot and SDD is configured correctly Chapter 5.0.

Reboot the system.vp. 248 Multipath Subsystem Device Driver User's Guide . You might come to a Repair filesystem prompt.gz gunzip initrd. 7.vp initrd.conf file will be saved to vpath.ko in /boot/mnt/lib points to the current sdd module.vp mnt cp /opt/IBMsdd/sdd-mod.rpm cd /boot mv initrd.rpmsave. the installed boot device can be set as the default boot device for the system via the SMS menu.x. verify that vpath devices are being used. cp /opt/IBMsdd/bin/cfgvpath /boot/mnt/opt/IBMsdd/bin/ 8. Install the new SDD driver. remount the file system as read/write.gz mount –o loop –t ext2 initrd. Prepare the initrd.x.vp cd / 9.conf. Restart the system in single-path mode (no SDD) 3.vp. The following commands can be used to verify the use of vpath devices: v mount v swapon –s v lsvpcfg v datapath query device At this point. but is suggested because it enables unattended reboots after this procedure is complete.rhel3. At the prompt. SDD upgrade procedure Use the following procedure to upgrade SDD: 1. Copy the new cfgvpath command and use the ldd command to verify that the correct libraries are installed for /boot/mnt/opt/IBMsdd/bin/cfgvpath. Add all other paths and reboot again. mount –n –o remount.gz initrd. Verify that the soft link sdd-mod. 4.x-y.vp. rpm –e IBMsdd The /etc/vpath. 5.vp mv initrd.rw / For systems that have /boot on a separate mount point. mount /boot partition using /dev/sd device.ppc64.Once booted. umount mnt gzip initrd. Place the SDD rpm update package on the system 2. Remove the previous SDD driver.vp for booting. rpm –ivh IBMsdd-x.ko-`uname –r` /boot/mnt/lib/ 6. enter the root password. This step is not required. If so.

Optionally. it is recommended that all internal disk drive controllers be disabled. 4. h. c. can be CD-ROM. For this installation an NFS-exported set of CD-ROMs was used. the source for installation. NFS. that is. The installation target MUST be single-pathed before installing SLES 8. This includes understanding which packages will be installed. While in the QLogic Utility. however. f. If the utility cannot see the correct device. b. This procedure assumes that no installation is present to work from and when completed. check the SAN and ESS Model 800 configurations before continuing. (This procedure was performed on an ESS Model 800). Ensure that the following conditions exist before continuing with this procedure: a. Be familiar with how to set up a SAN network or direct-attached SAN storage devices so that the host system can access LUNs from those storage systems. HTTP. firmware. d.SAN Boot Instructions for SLES 8 with IBM SDD (x86) The following procedure is used to install SLES 8 SP4 i386 on an xSeries® host with fibre-channel connect storage. this is not a requirement for the installation. g. Using SDD on a Linux host system 249 . these instructions assume that the boot. Variations from the documents have not been specifically tested. it is recommended that there be redundant physical paths to the devices from the host after installation of SLES 8. Ensure that there will be network access to the system. This permits discovery and use of SAN disk devices during this procedure. 5. 2. and swap devices are all located on the same device. root. You can use any of the installation sources that are listed. This is the minimum size for installing a base package set from the installation media to the boot devices. Although SDD functions correctly in single-path environments. FTP. This procedure is intended to be used with the following revisions of software. e. the boot and swap devices will be running on IBM SDD vpath devices. Install Chapter 5. 
Be familiar with creating LUNs on the ESS Model 800 so that the host can access the ESS Model 800 devices. Prerequisite steps 1. have an understanding of how the Linux kernel boot process functions and what processes and procedures that are used to boot a Linux distribution for a local storage device. Have a copy of SLES 8 SP4 i386 either network-accessible or on CD-ROM. Configure QLogic Devices v For ease of installation and to avoid issues with internal SCSI or IDE controllers. Configure Boot/Root/SWAP devices The boot device that will be used for installation and booting should be at least 4 GB in size. Installation Media The installation media. and hardware. Be familiar with the SLES 8 installation. configure the ESS Model 800 device from which the system will boot. and so forth. v Verify that the QLogic SAN HBA devices that are configured for the host have been setup to have their BOOT BIOS enabled. This procedure assumes that this has been done. The swap device be at least the size of physical memory that is configured in the host. For simplicity. 3.

2. sdd start The /etc/vpath. any time a kernel is upgraded a new initrd for that kernel must be built following the instructions in the procedure. You must ensure that vpatha is the root device. Gather SDD data in preparation for configuring /etc/fstab. Use the cfgvpath query command to get the LUN ID of the root's physical device. v Because of the way Linux discovers SAN devices. /dev/sda. Here. sda is the root device). menu. At this point the installed boot device can be set as the default boot device for the system. The cfgvpath query command will produce output similar to the following: Note that some data from the following output has been modified for ease of reading. Also verify that the swap. 250 Multipath Subsystem Device Driver User's Guide . The system should now boot to the newly installed OS.lst and /boot/initrd. 6. for example. SDD is packaged in an RPM format and can be installed using the rpm command. v Select the desired options until arriving at the Installation Settings. Rebooting a. Download and install the IBM SDD driver for the kernel version being used. b. This is to make sure that the device noted in the previous step will be used for the root/boot installation target. SAN boot configuration The system should be booted in single-path mode from the SAN. and if SAN devices have already been configured for multiple path access. 7. See the installation procedures to determine which packages are needed for the type of system being installed. Verify that the system is booted from the correct disk and vpaths. Note: The following instructions are examples and the values used herein might be different on your systems. On reboot. 1. See “Installing SDD” on page 213 for more information. This completes the single-path boot from SAN. using swapon -s and other configured partitions are correctly mounted. once for each logical path to the device. Note which device will be used for the installation before proceeding. (In this procedure. 
Linux will discover the same physical device multiple times.conf file has now been created. v The details of installation and partitioning are not included here. Also.v From the BIOS Menus select the installation source to boot from. Install the IBM SDD driver. c. Verify that the QLogic XXXXXXX SAN HBA module is loaded and that the SAN devices that will be used for installation have been detected successfully. but is suggested because it enables unattended reboots after this procedure is complete. modify the BIOS to boot from hard disk. are instructions on how to upgrade the SDD driver. Upgrading the SDD driver At the end of this document. To modify the boot/root and other devices for booting using the SDD driver continue with the following steps. This step is not required. modifications of the partitioning settings are required for this installation. using the df command. Verify this by looking at the mounted devices and where the root is mounted.

however.lst file.0)/vmlinuz-2.vp. sdb1 could be vpathd1. Modify the /etc/fstab and the /boot/grub/menu.0)/initrd.vp.gz mkdir /boot/mnt Chapter 5. Add an entry for the SDD boot using initrd. Remove all other entries from this file (they will be automatically added later by SDD) .0)/vmlinuz-2.cfgvpath query /dev/sda (8.conf file using the lun_id for vpatha.conf file. 32) lun_id=12220870 /dev/sdd (8. for example. might not necessarily correlate. sda1 and vpatha1. The [initrd file] refers to the current initrd in /boot.4.vp 4.mode=620 defaults defaults defaults 1 1 0 0 0 0 1 2 0 0 0 0 To: /dev/vpatha3 /dev/vpatha1 none none none /dev/vpatha2 / /boot /dev/pts /proc /dev/shm swap ext3 ext3 devpts proc tmpfs swap defaults defaults gid=5.0)/initrd-2. 3.21-295-smp title linux-smp-SDD kernel (hd0.mode=620 defaults defaults defaults 1 1 0 0 0 0 1 2 0 0 0 0 Modify the /boot/grub/menu.21-295-smp root=/dev/sda3 initrd (hd0. Edit the /etc/vpath. 0) lun_id=12020870 /dev/sdb (8.4.gz gunzip initrd. Major devices.lst There is a one-to-one correlation between sd and vpath minor devices. such as. Using SDD on a Linux host system 251 .21-295-smp root=/dev/vpatha3 ramdisk_size=34000 initrd (hd0. 48) lun_id=12320870 host=0 ch=0 id=0 lun=0 vid=IBM pid=2105800 serial=12020870 host=0 ch=0 id=0 lun=1 vid=IBM pid=2105800 serial=12120870 host=0 ch=0 id=0 lun=2 vid=IBM pid=2105800 serial=12220870 host=0 ch=0 id=0 lun=3 vid=IBM pid=2105800 serial=12320870 The lun_id for /dev/sda is 12020870.vp title linux-smp kernel (hd0. Prepare the initrd file. /dev/vpatha1 will be the mount device for /boot. 16) lun_id=12120870 /dev/sdc (8. Because /boot was installed on /dev/sda1 and vpatha corresponds to sda in the/etc/vpath. The correct initrd can be determined by the following method: ls -1A /boot | grep initrd | grep $(uname -r) cd /boot cp [initrd file] to initrd. 
Example: Change from: /dev/sda3 /dev/sda1 none none none /dev/sda2 / /boot /dev/pts /proc /dev/shm swap ext3 ext3 devpts proc tmpfs swap defaults defaults gid=5.4.

5. For ext2 file system initrds, you might need to resize the initrd file system. For Sles8u5, this step might not be required.

dd if=/dev/zero of=initrd.vp seek=33554432 count=1 bs=1
losetup /dev/loop0 initrd.vp
e2fsck -f /dev/loop0
resize2fs -f /dev/loop0
losetup -d /dev/loop0

Note: Adding the ramdisk_size= option to the kernel entry in the boot loader file might be required after increasing the size of the initrd file. For resizing the initrd to 33554432, add ramdisk_size=34000 to the /boot/grub/menu.lst file (see the previous step for modifying the /boot/grub/menu.lst file).

6. Mount the initrd file system:

mount -o loop -t ext2 initrd.vp /boot/mnt

7. Make additional directories in /boot/mnt if they do not exist:

cd /boot/mnt
mkdir mnt
mkdir -p opt/IBMsdd/bin
chmod -R 640 opt/IBMsdd
mkdir -p lib/tls    (for some Sles8 versions, create lib/i686)
mkdir sysroot
mkdir proc

8. Copy the following /etc files to /boot/mnt/etc:

cp /etc/fstab /boot/mnt/etc/
cp /etc/vpath.conf /boot/mnt/etc/
cp /etc/group /boot/mnt/etc/
cp /etc/passwd /boot/mnt/etc/
cp /etc/nsswitch.conf /boot/mnt/etc/

9. Modify the /boot/mnt/etc/nsswitch.conf file.
a. Change:
passwd: compat
To:
passwd: files
b. Change:
group: compat
To:
group: files

10. Copy cfgvpath to the initrd image. Copy /opt/IBMsdd/bin/cfgvpath to /boot/mnt/opt/IBMsdd/bin/ and change the permissions of cfgvpath to 755.
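The nsswitch.conf edits can also be made non-interactively. The snippet below is an illustrative alternative, not part of the original procedure; the function name is an assumption, and the path argument would be the copy inside the initrd image (/boot/mnt/etc/nsswitch.conf):

```shell
#!/bin/sh
# Hypothetical helper: switch the passwd and group sources from
# "compat" to "files" in a given nsswitch.conf with sed.
# Usage: fix_nsswitch <path-to-nsswitch.conf>
fix_nsswitch() {
    sed -i -e 's/^passwd:[[:space:]]*compat/passwd: files/' \
           -e 's/^group:[[:space:]]*compat/group: files/' "$1"
}
```

Using sed keeps the rest of the file untouched, which matters because the initrd copy must stay consistent with the host's other name-service entries.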

cp /opt/IBMsdd/bin/cfgvpath /boot/mnt/opt/IBMsdd/bin/

11. Copy the following binaries to the initrd bin directory:

cp /bin/awk /boot/mnt/bin/
cp /bin/chmod /boot/mnt/bin/
cp /bin/chown /boot/mnt/bin/
cp /bin/cp /boot/mnt/bin/
cp /bin/date /boot/mnt/bin/
cp /bin/grep /boot/mnt/bin/
cp /bin/ls /boot/mnt/bin/
cp /bin/mknod /boot/mnt/bin/
cp /bin/mount /boot/mnt/bin/
cp /bin/ps /boot/mnt/bin/
cp /bin/rm /boot/mnt/bin/
cp /bin/sed /boot/mnt/bin/
cp /bin/sh /boot/mnt/bin/
cp /bin/tar /boot/mnt/bin/
cp /bin/umount /boot/mnt/bin/
cp /bin/cat /boot/mnt/bin/
cp /usr/bin/cut /boot/mnt/bin/
cp /usr/bin/expr /boot/mnt/bin/

12. Copy the required library files for each binary in the previous step. Use the ldd command to determine the library files and locations. Note: Many binaries use the same libraries, so there might be duplications of copying.

13. Copy the correct sdd-mod.o file to the initrd file system. Use the uname -r command to determine the correct sdd-mod.o file and create a soft link. Example: The uname -r command will return something similar to 2.6.5-7.191-smp.

cp /opt/IBMsdd/sdd-mod.o-`uname -r` /boot/mnt/lib/
cd lib
ln -s sdd-mod.o-`uname -r` sdd-mod.o
cd ../

14. Copy the required library files for cfgvpath. Use the ldd command to determine the library files and locations.

Example:

ldd /opt/IBMsdd/bin/cfgvpath | awk '{print $(NF-1)}' | grep lib

The output of the ldd command might be similar to the following:

/lib/i686/libc.so.6 XXXX
/lib/ld-linux.so.2 XXXX

These files must be copied to the /boot/mnt/lib/i686/ and /boot/mnt/lib/ directories respectively. Also, copy the following library file:

cp /lib/libnss_files.so.2 /boot/mnt/lib

15. Copy /dev/sd devices to the initrd /dev directory using the tar command.

cd /boot/mnt
tar cps /dev/sd[a-z] /dev/sd[a-z][a-z] | tar xps

16. Modify the /boot/mnt/linuxrc file. For 2.4 kernels, an additional option must be appended to the line where the scsi_mod module is loaded, for storage systems such as the DS6000 and the DS8000.

Change:

insmod /lib/scsi_mod.o

To:

insmod /lib/scsi_mod.o max_scsi_luns=256

Add the following lines to the linuxrc file after the last driver has been loaded (just after the last kernel module is loaded):

echo "Mounting proc"
mount -n -tproc none /proc
echo "Loading SDD module"
insmod /lib/sdd-mod.o
echo "Running SDD configuration"
/opt/IBMsdd/bin/cfgvpath

Ensure that an updated copy of vpath.conf and the vpath device files are copied to the root file system during boot by using the following syntax to mount the root file system:

mount -o rw -t [fstype] [device] /mnt

Add the following lines just after the modules load entries. The values used for the [fstype] and [device] here are only examples; use the correct values for the system that is being configured.

echo "Copying over device files"
mount -o rw -t ext3 /dev/vpatha3 /sysroot
(tar cps /dev/IBMsdd /dev/vpath*) | (cd /sysroot && tar xps)
cp /etc/vpath.conf /sysroot/etc/
umount /sysroot

You must ensure that the correct major and minor number of the root vpath device is written to /proc/sys/kernel/real-root-dev. To do this, add the following lines to the linuxrc file:

echo "Setting correct root device"
for name in `cat /proc/cmdline`; do
    # Look for "root="
    echo $name | grep -q ^root
    if [ $? -eq 0 ]; then
        # chop off the "root="
        dev_name=`expr "$name" : '.*=\(.*\)'`
        echo "Found root = $dev_name"
        # chop off the "/dev/"
        dev_name=`expr "$dev_name" : '/dev/\(.*\)'`
        # find the major/minor in /proc/partitions
        parts=`grep $dev_name /proc/partitions`
        dev_major=`echo $parts | cut -d ' ' -f1`
        dev_minor=`echo $parts | cut -d ' ' -f2`
        dev_num=`expr $dev_major \* 256 + $dev_minor`
        echo $dev_num > /proc/sys/kernel/real-root-dev
        continue
    fi
done

Original linuxrc script in the initrd file system:
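The number written to /proc/sys/kernel/real-root-dev encodes the device number as major multiplied by 256, plus minor, which is exactly what the expr line in the fragment above computes. A quick standalone sanity check of that arithmetic (an illustrative sketch, not part of the original procedure; the major/minor values are example inputs):

```shell
#!/bin/sh
# real-root-dev encodes a device number as (major * 256) + minor,
# matching the expr line in the linuxrc fragment above.
dev_num() {
    expr "$1" \* 256 + "$2"
}

# Example: a device with major 8 and minor 3 encodes as 2051.
dev_num 8 3
```

If the echoed value does not match the root device's major/minor pair from /proc/partitions, the kernel will fail to mount the root file system at boot.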

#!/bin/ash
export PATH=/sbin:/bin:/usr/bin
# check for SCSI parameters in /proc/cmdline
mount -n -tproc none /proc
for p in `cat /proc/cmdline` ; do
    case $p in
    scsi*|*_scsi_*|llun_blklst=*|max_report_luns=*)
        extra_scsi_params="$extra_scsi_params $p"
    ;;
    esac
done
umount -n /proc
echo "Loading kernel/drivers/scsi/scsi_mod.o $extra_scsi_params"
insmod /lib/modules/2.4.21-295-smp/kernel/drivers/scsi/scsi_mod.o $extra_scsi_params
echo "Loading kernel/drivers/scsi/sd_mod.o"
insmod /lib/modules/2.4.21-295-smp/kernel/drivers/scsi/sd_mod.o
echo "Loading kernel/drivers/scsi/qla2300_conf.o"
insmod /lib/modules/2.4.21-295-smp/kernel/drivers/scsi/qla2300_conf.o
echo "Loading kernel/drivers/scsi/qla2300.o"
insmod /lib/modules/2.4.21-295-smp/kernel/drivers/scsi/qla2300.o
echo "Loading kernel/fs/jbd/jbd.o"
insmod /lib/modules/2.4.21-295-smp/kernel/fs/jbd/jbd.o
echo "Loading kernel/fs/ext3/ext3.o"
insmod /lib/modules/2.4.21-295-smp/kernel/fs/ext3/ext3.o
echo "Mounting proc"
mount -n -tproc none /proc
echo "Creating root device"
mkrootdev /dev/root
echo "Mounting root filesystem"
mount -o defaults --ro -t ext3 /dev/root /sysroot
pivot_root /sysroot /sysroot/initrd
umount /initrd/proc

Modified linuxrc script in the initrd file system, with modifications:
/bin/ash export PATH=/sbin:/bin:/usr/bin # check for SCSI parameters in /proc/cmdline mount -n -tproc none /proc for p in `cat /proc/cmdline` .4.21-295-smp/kernel/fs/ext3/ext3.21-295-smp/kernel/drivers/scsi/scsi_mod.o" insmod /lib/modules/2..

mount –n –o remount.vp mv initrd. Once booted. Place the SDD rpm update package on the system 2. If so. You might come to a Repair filesystem prompt. do #Look for "root=" echo $name | grep -q ^root if [ $? -eq 0 ].*\) ` #find the major/minor in /proc/partitions parts=`grep $dev_name /proc/partitions` dev_major=`echo $parts | cut -d -f1` dev_minor=`echo $parts | cut -d -f2` dev_num=`expr $dev_major \* 256 + $dev_minor` echo $dev_num > /proc/sys/kernel/real-root-dev continue fi done echo "Unmounting proc" umount /proc 17.*\) ` echo "Found root = $dev_name" #chop off the "dev" dev_name=`expr "$dev_name" : /dev/\(.gz initrd. mount /boot partition using /dev/sa device. then # chop off the "root=" dev_name=`expr "$name" : . verify that vpath devices are being used.rw / 4.vp. mount swapon -s lsvpcfg datapath query device SDD upgrade procedure Use the following procedure to upgrade SDD: 1. enter the root password. remount the file system as read/write. Prepare the mounted initrd file system for rebooting: cd /boot umount /boot/mnt gzip initrd.*=\(. Add all other paths and reboot again.vp cd / shutdown -r now 18.o echo "Running SDD configuration" /opt/IBMsdd/bin/cfgvpath echo "Copying over device files" mount -o rw -t ext3 /dev/vpatha3 /sysroot (tar cps /dev/IBMsdd /dev/vpath*) | (cd /sysroot && tar xps) umount /sysroot echo "Setting correct root device" for name in `cat /proc/cmdline`. The following commands can be used to verify the use of vpath devices. At the prompt. Restart the system in single-path mode (no SDD) 3. Chapter 5.echo "Mounting proc" mount -n -tproc none /proc echo "Loading SDD module" insmod /lib/sdd-mod. For systems that have /boot on a separate mount point. Using SDD on a Linux host system 257 .

Prepare the initrd. Ensure that there will be network access to the system.vp cd / 10. 8. 258 Multipath Subsystem Device Driver User's Guide . Verify that the soft link sdd-mod.ko-`uname –r` /boot/mnt/lib/ 7. The installation target MUST be single-pathed before installing SLES 9. Remove the previous SDD driver.conf.vp mnt cp /opt/IBMsdd/sdd-mod.x-y. Be familiar with how to set up a SAN network or direct-attached SAN storage devices so that the host system can access LUNs from those storage systems. Be familiar with creating LUNs on the ESS Model 800 so that the host can access the ESS Model 800 devices. d. it is recommended that there be redundant physical paths to the devices from the host after installation of SLES 9.vp. Copy the new cfgvpath command and use the ldd command to verify that the correct libraries are installed for /boot/mnt/opt/IBMsdd/bin/cfgvpath.gz mount –o loop –t ext2 initrd.ko in /boot/mnt/lib points to the current SDD module. Prerequisite steps 1. (This procedure was performed on an ESS Model 800). e. f. Be familiar with the SLES 9 installation. SAN Boot Instructions for SLES 9 with IBM SDD (x86) The following procedure is used to install SLES 9 x86 on an xSeries host with fibre-channel connect storage. Reboot the system.gz gunzip initrd. umount mnt gzip initrd. This procedure assumes that no installation is present to work from and when completed. Install the new SDD driver.vp. g.sles8. 6. Have a copy of SLES 9 SP2 i386 either network-accessible or on CD-ROM.5. the boot and swap devices will be running on IBM SDD vpath devices.vp initrd. This includes understanding which packages will be installed.conf file will be saved to vpath.vp for booting.x. rpm –e IBMsdd The /etc/vpath. cp /opt/IBMsdd/bin/cfgvpath /boot/mnt/opt/IBMsdd/bin/ 9.x. Optionally.gz initrd. c.vp mv initrd. rpm –ivh IBMsdd-x. 
have an understanding of how the Linux kernel boot process functions and what processes and procedures that are used to boot a Linux distribution for a local storage device.rpmsave. Although SDD functions correctly in single-path environments. h. b.i686.vp. Ensure that the following conditions exist before continuing with this procedure: a.rpm cd /boot mv initrd.

2. Configure QLogic Devices
v For ease of installation and to avoid issues with internal SCSI or IDE controllers, it is recommended that all internal disk drive controllers be disabled. This procedure assumes that this has been done.
v Verify that the QLogic SAN HBA devices that are configured for the host have been set up to have their BOOT BIOS enabled. This permits discovery and use of SAN disk devices during this procedure. While in the QLogic Utility, configure the ESS Model 800 device from which the system will boot. If the utility cannot see the correct device, check the SAN and ESS Model 800 configurations before continuing.
v For Emulex fibre HBAs, use Emulex's utility software for the Emulex model to enable the Emulex HBA BIOS (to use this utility, the system must be booted to DOS). After the BIOS is enabled, go into the Emulex BIOS during POST boot, enable the boot BIOS for each adapter, and select the boot LUN from the list.

3. Configure Boot/Root/SWAP devices
The boot device that will be used for installation and booting should be at least 4 GB in size. This is the minimum size for installing a base package set from the installation media to the boot devices. The swap device should be at least the size of physical memory that is configured in the host. For simplicity, these instructions assume that the boot, root, and swap devices are all located on the same device; however, this is not a requirement for the installation.

4. Installation Media
The installation media, that is, the source for installation, can be CD-ROM, NFS, HTTP, FTP, and so forth. For this installation, an NFS-exported set of CD-ROMs was used. You can use any of the installation sources that are listed.

5. Install
v From the Qlogic BIOS Menus, select the installation source to boot from.
v Verify that the QLogic XXXXXXX SAN HBA module is loaded and that the SAN devices that will be used for installation have been detected successfully.
and if SAN devices have already been configured for multiple path access. This is the minimum size for installing a base package set from the installation media to the boot devices. v Select the desired options until arriving at the Installation Settings. /dev/sda. For simplicity. Installation Media The installation media. Configure Boot/Root/SWAP devices The boot device that will be used for installation and booting should be at least 4 GB in size. the source for installation. but is suggested because it enables unattended reboots after this procedure is complete. Verify that the QLogic XXXXXXX SAN HBA module is loaded and that the SAN devices that will be used for installation have been detected successfully. select the installation source to boot from. See the installation procedures to determine which packages are needed for the type of system being installed. c. Here. this is not a requirement for the installation. NFS. Note which device will be used for the installation before proceeding. v Because of the way Linux discovers SAN devices. Linux will discover the same physical device multiple times. If the utility cannot see the correct device. Chapter 5. once for each logical path to the device. FTP.

SAN boot configuration
The system should be booted in single-path mode from the SAN. Verify this by looking at the mounted devices and where the root is mounted, using the df command. Also verify that the swap, using swapon -s, and other configured partitions are correctly mounted. This completes the single-path boot from SAN. To modify the boot/root and other devices for booting using the SDD driver, continue with the following steps.

Note: The following instructions are examples and the values used herein might be different on your systems. Also, any time a kernel is upgraded, a new initrd for that kernel must be built following the instructions in the procedure. At the end of this document are instructions on how to upgrade the SDD driver.

1. Install the IBM SDD driver.
Download and install the IBM SDD driver for the kernel version being used. SDD is packaged in an RPM format and can be installed using the rpm command. See "Installing SDD" on page 213 for more information.
2. Gather SDD data in preparation for configuring /etc/fstab, menu.lst and /boot/initrd.
sdd start
The /etc/vpath.conf file has now been created. You must ensure that vpatha is the root device. Use the cfgvpath query device command to obtain the LUN ID of the root's physical device. (In this procedure, sda is the root device.) The cfgvpath query device command will produce output similar to the following. Note that some data from the following output has been modified for ease of reading.
cfgvpath query
/dev/sda (8, 0)  host=0 ch=0 id=0 lun=0 vid=IBM pid=2105800 serial=12020870 lun_id=12020870
/dev/sdb (8, 16) host=0 ch=0 id=0 lun=1 vid=IBM pid=2105800 serial=12120870 lun_id=12120870
/dev/sdc (8, 32) host=0 ch=0 id=0 lun=2 vid=IBM pid=2105800 serial=12220870 lun_id=12220870
/dev/sdd (8, 48) host=0 ch=0 id=0 lun=3 vid=IBM pid=2105800 serial=12320870 lun_id=12320870
The lun_id for /dev/sda is 12020870. Edit the /etc/vpath.conf file using the lun_id for vpatha. Remove all other entries from this file (they will be automatically added later by SDD).
3. Modify the /etc/fstab, ensuring that root/boot/swap is mounted on vpath devices.
There is a one-to-one correlation between sd and vpath minor devices, for example, sda1 and vpatha1. Major devices, however, might not necessarily correlate; for example, sdb1 could be vpathd1. Because /boot was installed on /dev/sda1 and vpatha corresponds to sda in the /etc/vpath.conf file, /dev/vpatha1 will be the mount device for /boot.
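The mapping from the root sd device to the lun_id that goes into /etc/vpath.conf can be pulled out of the cfgvpath query output with a short awk filter rather than by eye. This is a minimal sketch against saved output in the format shown above; the file path and serial values are this example's, not a real system's:

```shell
# Pull the lun_id for the root sd device out of saved `cfgvpath query` output.
# /tmp/cfgvpath.out mirrors the example output shown above (sample values).
cat > /tmp/cfgvpath.out <<'EOF'
/dev/sda (8, 0) host=0 ch=0 id=0 lun=0 vid=IBM pid=2105800 serial=12020870 lun_id=12020870
/dev/sdb (8, 16) host=0 ch=0 id=0 lun=1 vid=IBM pid=2105800 serial=12120870 lun_id=12120870
EOF
root_dev=/dev/sda
# Match the line for the root device and strip the lun_id= prefix from the last field.
root_lun=$(awk -v dev="$root_dev" '$1 == dev { sub("lun_id=", "", $NF); print $NF }' /tmp/cfgvpath.out)
# /etc/vpath.conf should then contain just this one line:
echo "vpatha $root_lun"
```

With the sample data this prints "vpatha 12020870", which is exactly the single line the procedure asks you to leave in /etc/vpath.conf.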

Example: Change from:
/dev/sda3  /      ext3  defaults  1 1
/dev/sda1  /boot  ext3  defaults  1 2
/dev/sda2  swap   swap  defaults  0 0
To:
/dev/vpatha3  /      ext3  defaults  1 1
/dev/vpatha1  /boot  ext3  defaults  1 2
/dev/vpatha2  swap   swap  defaults  0 0
4. Prepare the initrd file.
The [initrd file] refers to the current initrd in /boot. The correct initrd can be determined by the following method:
ls -1A /boot | grep initrd | grep $(uname -r)
cd /boot
cp [initrd file] initrd.vp.gz
gunzip initrd.vp.gz
mkdir /boot/mnt
For the ext2 file system, it might be required to resize the initrd file system:
dd if=/dev/zero of=initrd.vp seek=33554432 count=1 bs=1
losetup /dev/loop0 initrd.vp
e2fsck -f /dev/loop0
resize2fs -f /dev/loop0
losetup -d /dev/loop0
Adding the ramdisk_size= option to the kernel entry in the boot loader file might be required after increasing the size of the initrd file. For resizing the initrd to 33554432, add ramdisk_size=34000 to the kernel entry in the /boot/grub/menu.lst file, as mentioned previously.
5. Modify the /boot/grub/menu.lst file. Add entries for the SDD boot using initrd.vp:
title Linux-sdd
   kernel (hd0,0)/vmlinuz root=/dev/vpatha3 selinux=0 splash=silent barrier=off resume=/dev/sda2 elevator=cfq showopts ramdisk_size=34000
   initrd (hd0,0)/initrd.vp
6. Mount the initrd file system:
mount -o loop -t ext2 initrd.vp /boot/mnt
7. Make additional directories in /boot/mnt if they do not exist:
cd /boot/mnt
mkdir mnt
mkdir -p opt/IBMsdd/bin
chmod -R 640 opt/IBMsdd
mkdir -p lib/tls
Chapter 5. Using SDD on a Linux host system

8. Copy the following /etc files to /boot/mnt/etc:
cp /etc/vpath.conf /boot/mnt/etc/
cp /etc/group /boot/mnt/etc/
cp /etc/passwd /boot/mnt/etc/
cp /etc/nsswitch.conf /boot/mnt/etc/
9. Create an fstab file in the initrd etc directory with the following entry (this might already exist):
sysfs /sys sysfs defaults 0 0
10. Modify the /boot/mnt/etc/nsswitch.conf file.
a. Change:
passwd: compat
To:
passwd: files
b. Change:
group: compat
To:
group: files
11. Copy cfgvpath to the initrd image:
cp /opt/IBMsdd/bin/cfgvpath /boot/mnt/opt/IBMsdd/bin/
12. Copy the required library files for cfgvpath. Use the ldd command to determine the library files and locations.
Example:
ldd /opt/IBMsdd/bin/cfgvpath | awk '{print $(NF-1)}' | grep lib
The output of the ldd command might be similar to the following:
/lib/tls/libc.so.6
/lib/ld-linux.so.2
These files must be copied to the /boot/mnt/lib/tls/ and /boot/mnt/lib/ directories respectively.
13. Copy the correct sdd-mod.o file to the initrd file system. Use the uname -r command to determine the correct sdd-mod.o file and create a soft link.
Example: The uname -r command will return something similar to 2.6.5-7.201-smp.
cp /opt/IBMsdd/sdd-mod.ko-2.6.5-7.201-smp /boot/mnt/lib/
cd lib
ln -s sdd-mod.ko-2.6.5-7.201-smp sdd-mod.ko
cd ../
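The awk expression used with ldd in the step above simply picks the next-to-last whitespace-separated field, which for the usual "name => path (addr)" lines is the library path. A small sketch against canned ldd-style output (the library paths are the example's and the temporary file names are arbitrary):

```shell
# Parse ldd-style output and keep only the library paths, as in the step above.
cat > /tmp/ldd.out <<'EOF'
        libc.so.6 => /lib/tls/libc.so.6 (0x40000000)
        /lib/ld-linux.so.2 (0x39000000)
EOF
# $(NF-1) is the path in "name => path (addr)" lines, and also works for
# loader lines like "/lib/ld-linux.so.2 (addr)"; grep lib drops anything else.
awk '{print $(NF-1)}' /tmp/ldd.out | grep lib > /tmp/libs.txt
cat /tmp/libs.txt
```

Each resulting path would then be copied under the mounted initrd, preserving its directory, for example with cp "$p" "/boot/mnt$p".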

14. Copy the following binaries to the initrd bin directory:
cp /bin/tar /boot/mnt/bin/
cp /bin/chown /boot/mnt/bin/
Note: mount and umount might already exist. If they do exist, do not copy them to the initrd mount directory.
15. Copy the required library files for each binary. Use the ldd command to determine the library files and locations.
Note: Many binaries use the same libraries, so there might be duplications of copying.
Example:
ldd /bin/mknod | awk '{print $(NF-1)}' | grep lib
/lib/libselinux.so.1
/lib/tls/libc.so.6
/lib/ld-linux.so.2
The above files must be copied to the /boot/mnt/lib/tls/ and /boot/mnt/lib/ directories respectively.
Also, copy the following library file to /boot/mnt/lib/:
cp /lib/libnss_files.so.2 /boot/mnt/lib
16. Modify the /boot/mnt/linuxrc file. Add the following lines just after the last kernel module is loaded:
echo "Loading SDD module"
insmod /lib/sdd-mod.ko
echo "Creating vpath devices"
/opt/IBMsdd/bin/cfgvpath
Ensure that an updated copy of vpath.conf is copied to the /root file system by using the following syntax to mount the root file system:
/bin/mount -o rw -t [fstype] [device] /mnt
Add the following lines just after the cfgvpath command. The values used for [fstype] and [device] here are only examples; use the correct values for the system that is being configured.
/bin/mount -o rw -t ext3 /dev/vpatha3 /mnt
/bin/cp /etc/vpath.conf /mnt/etc/
cd /mnt
/bin/tar cps /dev/IBMsdd /dev/vpath* | /bin/tar xps
cd /
/bin/umount /mnt
17. Prepare the mounted initrd file system for rebooting:
cd /boot
umount /boot/mnt
gzip initrd.vp
mv initrd.vp.gz initrd.vp
cd /
shutdown -r now
18. Once booted, verify that vpath devices are being used. Add all other paths and reboot again. The following commands can be used to verify the use of vpath devices:
mount
swapon -s
lsvpcfg
datapath query device
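The verification in the last step amounts to checking that the root entry reported by mount names a /dev/vpath device. A sketch against a canned mount line; the device name is hypothetical and on a live system you would feed the real output of the mount command instead:

```shell
# Check whether the root filesystem line from `mount` refers to a vpath device.
# A sample line stands in for real `mount` output here.
mount_line='/dev/vpatha3 on / type ext3 (rw)'
case "$mount_line" in
  /dev/vpath*" on / "*) root_on_vpath=yes ;;
  *)                    root_on_vpath=no ;;
esac
echo "root on vpath: $root_on_vpath"
```

The same pattern check can be applied to each line of swapon -s output to confirm that swap, too, moved off the sd devices.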

vp mnt cp /opt/IBMsdd/sdd-mod. It is assumed that no installation is present to work from and when completed. Prerequisite steps 1. Ensure that the following conditions exist before continuing with this procedure: 264 Multipath Subsystem Device Driver User's Guide .vp initrd. If so. umount mnt gzip initrd.rw / 4.mount swapon -s lsvpcfg datapath query device SDD upgrade procedure Use 1. rpm –e IBMsdd The /etc/vpath.gz initrd.x-y.vp for booting. enter the root password.vp. the boot and swap devices will be running on IBM SDD vpath devices. For systems that have /boot on a separate mount point.x. Copy the new cfgvpath command and use the ldd command to verify that the correct libraries are installed for /boot/mnt/opt/IBMsdd/bin/cfgvpath.gz mount –o loop –t ext2 initrd.x. Install the new SDD driver. 3.vp.vp mv initrd. At the prompt.i686. Reboot the system. 5. mount /boot partition using /dev/sd device.sles9.conf. 8.vp cd / 10.ko in /boot/mnt/lib points to the current SDD module. rpm –ivh IBMsdd-x. 6.rpmsave. Verify that the soft link sdd-mod. mount –n –o remount.ko-`uname –r` /boot/mnt/lib/ 7. Remove the previous SDD driver. SAN Boot instructions for SLES 9 with IBM (ppc) The following procedure is used to install SLES 9 ppc onto a IBM System p JS20 LPAR with fibre-channel connected storage. the following procedure to upgrade SDD: Place the SDD rpm update package on the system Restart the system in single-path mode (no SDD) You might come to a Repair filesystem prompt. remount the file system as read/write.conf file will be saved to vpath. Prepare the initrd. 2.rpm cd /boot mv initrd.gz gunzip initrd.vp. cp /opt/IBMsdd/bin/cfgvpath /boot/mnt/opt/IBMsdd/bin/ 9.

v Have a copy of SLES 9 SP2 either network-accessible or on CD-ROM.
v Be familiar with the SLES installation. This includes understanding which packages will be installed and how to select required options through the installation.
v Be familiar with how to set up an LPAR with processors, memory, and SAN HBAs. For network installs a network port is required, and for CD-ROM installs a CD-ROM is required.
v Be familiar with how to connect to and operate IBM BladeCenter JS20 or IBM System p LPAR.
v Be familiar with creating LUNs on the storage unit so that the LPAR can access the storage devices. Although SDD functions correctly in single-path environments, there should be redundant physical paths to the devices from the host (after installation).
v Be familiar with how to set up a SAN network or direct-attached SAN storage devices so that the configured system can access LUNs from the storage unit.
v Optionally, have an understanding of how the Linux kernel boot process functions and what processes and procedures are used to boot a Linux distribution for a local storage device.
2. Configure root/boot/swap devices
v The physical boot device that will be used for installation and booting should be at least 4 GB in size. This is the minimum size for installing all packages from the installation media to the boot devices. It is also recommended that the swap device be at least the size of physical memory that is configured in the system.
v For simplicity, these instructions assume that the root/boot/swap devices are all located on the same device; however, this is not a requirement for the installation. Also, it is not required that a /boot mount exists. In some cases, there will not be a /boot mount but rather the boot files will reside in the directory /boot on the root "/" mount.
3. Installation Media
The installation media, that is, the source for installation, can be CD-ROM, NFS, HTTP, FTP, and so forth. Any of the installation sources listed can be used. For this installation, an NFS-exported set of CD-ROMs was used.
4. Upgrading the SDD driver
At the end of this document are instructions on how to upgrade the SDD driver.

Use this procedure to install SLES 9:
1. From the SMS menu, select the installation source and boot from the media.
2. Verify that the Emulex lpfcdd SAN HBA module is loaded and that the SAN devices that will be used for installation have been detected successfully.
Note: Because of the way Linux discovers SAN devices, and if SAN devices have already been configured for multiple path access, Linux will discover the same physical device multiple times, once for each logical path to the device. Take note which device will be used for the installation before proceeding, for example, /dev/sdh. Also note which of the Emulex devices is used to discover this device, as it will be needed in a later step.
3. Select the desired options until arriving at the Installation Settings step of the yast install.

Here, modification of the partitioning settings is required for this installation. This is to make sure that the device noted in the previous step will be used for the root/boot installation target.
a. Select partitioning, and go to the custom partition setup.
b. Select the device and Custom partitioning for experts.
c. Make sure that there is a PReP boot partition on the root/boot device and that it is the first partition.
d. Continue to partition devices as required to complete this configuration.
The details of installation and partitioning are not written up here. See the installation procedures to determine which packages are needed for the type of system being installed.
4. Finish the installation. An error occurs while attempting to create the yaboot boot loader stating that the device type of fcp is unknown. Select OK and select No when asked to retry.
5. Rebooting
a. On reboot after initial install, enter the SMS menu.
b. Boot from the installation source media.
c. If you are installing from CD media, continue to a point where you can abort the installation and return to the command line menu system. If you are booting from the network, you should already be presented with this menu.
d. Select to boot an installed system.
e. Select the root device that was just installed in the previous steps. Yast will again come up, but from the root partition.
f. Finish the installation. This permits the installation of the boot loader to the PReP boot partition of the drive where the installation error occurred from above.
g. Once this is complete, view /etc/lilo.conf and verify that the data in this file looks correct for the boot and root partitions. Once this is verified, run lilo.
6. Rebooting
a. Reboot again and enter the SMS menu. This time the boot device which has been set up over the previous steps is now ready to be booted.
b. Select to boot from a Hard Drive/SAN and select the Emulex device adapter associated with the SAN disk device on which the installation was completed.
c. The installation boot device should now be listed in the bootable devices discovered in the SAN on the selected Emulex HBA.
d. Select the appropriate device and boot.
7. Upgrading to the latest service pack
If there is a service pack available (at the time of this writing, SP2 is available), upgrade the installed system to the latest service pack using yast.

SAN boot configuration
The system should now be booted in single-path mode from the SAN. Verify this by looking at the mounted devices and where the root is mounted, using the df command. Also verify that the swap, using swapon -s, and other configured partitions are correctly mounted. This completes the single-path boot from SAN. To modify the boot/root and other devices for booting using the SDD driver, continue with the following steps.

Note: The following instructions are examples and the values used herein might be different on your systems. In some cases, there will not be a /boot mount but rather the boot files will reside in the directory /boot on the root "/" mounted file system. It is recommended but not required that vpatha be used as the vpath boot device.
1. Install the IBM SDD driver.
Download and install the IBM SDD driver for the kernel version being used. SDD is packaged in an RPM format and can be installed using the rpm command. See "Installing SDD" on page 213 for more information.
2. Extracting the initrd.
The following will unzip and extract the initrd image so that it can be modified to include the required elements to enable a vpath boot image.
cd /boot
Locate the initrd image used for booting. This will be the image that /etc/yaboot.conf is pointing to. Note that the file pointed to might be a symbolic link to another file. Copy the file to a temporary filename with a .gz extension; for example, if the file name is initrd-2.6.5-7.191-pseries64 then:
cp initrd-2.6.5-7.191-pseries64 initrd.vp.gz
Unzip the file using gunzip:
gunzip initrd.vp.gz
Create a temporary directory where the image will be manipulated, for example, /boot/mnt. This is referred to as the image temporary directory throughout the rest of this documentation. Extract the image to that directory using the command:
mkdir -p /boot/mnt
cd /boot/mnt
cpio -iv < ../initrd.vp
3. Modifying the /boot/initrd.
Create the following directories in the image temporary directory:
mkdir mnt
mkdir dev
mkdir -p lib/tls
mkdir -p lib64/tls
mkdir -p opt/IBMsdd/bin
chmod -R 640 opt/IBMsdd
Note: There might already be a mnt directory in the temporary initrd image. If there is not, create one.
Copy the following files to the following directories relative to the image temporary directory:
cp /opt/IBMsdd/sdd-mod.ko-2.6.5-7.191-pseries64 lib/
cp /opt/IBMsdd/bin/cfgvpath opt/IBMsdd/bin/
cp /bin/cat bin/
cp /bin/cp bin/
cp /bin/chown bin/
For each of the above binary files (except sdd-mod.o), run the ldd command and verify that the listed library files exist in the image temporary directory.

If they do not, copy the listed library files that do not exist to the corresponding lib and lib64 directories in the image temporary directory. An example script to gather the correct libraries and copy them to the correct directories:
for libs in `opt/IBMsdd/bin/cfgvpath /bin/cat /bin/cp /bin/chown`; do
  ldd $libs | awk '{print $(NF-1)}' | grep lib | while read line; do
    cp $line /boot/mnt$line
  done
done
Copy this additional library file.
4. Gather SDD data in preparation for configuring /etc/fstab, /etc/yaboot.conf and /boot/initrd.
sdd start
The /etc/vpath.conf file has now been created. You must ensure that vpatha is the root device. Use the cfgvpath query device command to obtain the LUN ID of the root's physical device. (In this procedure, sda is the root device.) The cfgvpath query command produces output similar to the following example. Note that some data from the following output has been modified for ease of reading.
cfgvpath query
/dev/sda (8, 0)  host=0 ch=0 id=0 lun=0 vid=IBM pid=2105800 serial=12020870 lun_id=12020870
/dev/sdb (8, 16) host=0 ch=0 id=0 lun=1 vid=IBM pid=2105800 serial=12120870 lun_id=12120870
/dev/sdc (8, 32) host=0 ch=0 id=0 lun=2 vid=IBM pid=2105800 serial=12220870 lun_id=12220870
/dev/sdd (8, 48) host=0 ch=0 id=0 lun=3 vid=IBM pid=2105800 serial=12320870 lun_id=12320870
The lun_id for /dev/sda is 12020870. Edit the /etc/vpath.conf file using the lun_id for vpatha (vpatha 12020870). Remove all other entries from this file (they will be automatically added later by SDD).
5. Modify the /etc/fstab to use vpath devices for /root and swap. Other devices using vpaths will also need to be changed. For the initial install, work only with the root/boot/swap devices and comment out other sd and hd devices until completed.
Original:
/dev/sdd4  /              ext3    acl,user_xattr   1 1
/dev/hda2  /data1         auto    noauto,user      0 0
/dev/hda4  /data2         auto    noauto,user      0 0
/dev/hda3  swap           swap    pri=42           0 0
/dev/sdd3  swap           swap    pri=42           0 0
devpts     /dev/pts       devpts  mode=0620,gid=5  0 0
proc       /proc          proc    defaults         0 0
usbfs      /proc/bus/usb  usbfs   noauto           0 0
sysfs      /sys           sysfs   noauto           0 0
Modified:
/dev/vpatha4  /              ext3    acl,user_xattr   1 1
#/dev/hda2    /data1         auto    noauto,user      0 0
#/dev/hda4    /data2         auto    noauto,user      0 0
#/dev/hda3    swap           swap    pri=42           0 0
/dev/vpatha3  swap           swap    pri=42           0 0
devpts        /dev/pts       devpts  mode=0620,gid=5  0 0
proc          /proc          proc    defaults         0 0
usbfs         /proc/bus/usb  usbfs   noauto           0 0
sysfs         /sys           sysfs   noauto           0 0
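The fstab rewrite in this step (root and swap moved to vpath devices, the hd devices commented out) can be scripted with sed rather than edited by hand. A sketch over a copy of the example fstab; the device names are the ones from this example and would differ on a real system:

```shell
# Rewrite a copy of the example fstab: sd root/swap -> vpath, hd lines commented.
cat > /tmp/fstab <<'EOF'
/dev/sdd4 / ext3 acl,user_xattr 1 1
/dev/hda2 /data1 auto noauto,user 0 0
/dev/hda3 swap swap pri=42 0 0
/dev/sdd3 swap swap pri=42 0 0
EOF
sed -e 's|^/dev/sdd4|/dev/vpatha4|' \
    -e 's|^/dev/sdd3|/dev/vpatha3|' \
    -e 's|^/dev/hda|#/dev/hda|' /tmp/fstab > /tmp/fstab.new
cat /tmp/fstab.new
```

Anchoring each pattern with ^ keeps the substitutions from touching option fields, and working on a copy preserves the original for rollback, as the notes in this chapter recommend.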

6. Stop SDD and copy /etc files to the image temporary directories:
sdd stop
cp /etc/vpath.conf /boot/mnt/etc
cp /etc/passwd /boot/mnt/etc
cp /etc/group /boot/mnt/etc
7. Edit the init file in the image temporary directory.
Edit the /boot/mnt/linuxrc file. Go to the line that has the creating device nodes message, right after the init script creates the device nodes using /bin/udevstart, and add the following lines after the echo -n . command in the script block:
echo "Creating vpath devices"
/opt/IBMsdd/bin/cfgvpath
echo "Mounting and copying some required SDD files"
/bin/mount -o rw -t <PARTITION TYPE> /dev/vpathXXX /mnt
/bin/cp /etc/vpath.conf /mnt/etc
/bin/umount /mnt
where /dev/vpathXXX is the root drive/partition.
8. Edit the /boot/mnt/load_modules.sh file.
Edit the load_modules.sh file in the image temporary directory and add the following lines to the end of the script, after the last module load (for example, insmod /lib/scsi_mod.o):
echo "Loading SDD Module"
insmod /lib/sdd-mod.ko
Issue the following command from the lib directory. The linked module is the name of the module that was copied into the lib directory above.
cd /boot/mnt/lib
ln -s sdd-mod.ko-2.6.5-7.191-pseries64 sdd-mod.ko
9. Repackaging the initrd.
To repackage all of the changes that have just been made to the initrd, issue the following commands:
cd /boot/mnt
find . | cpio -H newc -vo > /boot/initrd.vp
cd /boot
gzip initrd.vp
mv initrd.vp.gz initrd.vp
The initrd.vp now has the repackaged initrd image with the SDD driver and modified files required to boot from a vpath.

10. Modifying root device files.
Additional files need to be modified on the root file system before the modifications are complete. Modify /etc/yaboot.conf. Add a new entry in the file and modify the entry to point at the new initrd image created in the above step. Also modify the root device in the new entry to point to the vpath chosen from the previous steps. Remember to include the partition if required. Also make sure to modify the entry name.
Original /etc/yaboot.conf:
# header section
partition = 4
timeout = 100
default = linux
# image section
image = /boot/vmlinux
label = linux
append = "root=/dev/sdd4 selinux=0 elevator=cfq"
initrd = /boot/initrd
Modified /etc/yaboot.conf:
# header section
partition = 4
timeout = 100
default = linux
# image section
image = /boot/vmlinux
label = linux
append = "root=/dev/sdd4 selinux=0 elevator=cfq"
initrd = /initrd
image = /boot/vmlinux
label = linux-sdd
append = "root=/dev/vpatha3 selinux=0 elevator=cfq"
initrd = /boot/initrd.vp
11. Restart the system.
a. Reboot the system.
b. From the SMS menu, select the boot devices as before, if the boot device is not already set up as the first boot device.
c. When the yaboot prompt is shown during boot, enter the name for the new boot image.
d. During the OS load, ensure that the IBMsdd module is loaded after the SAN disk devices are discovered.
e. Ensure that no errors were printed to the console during boot. If there were errors, reboot the system and at the yaboot prompt, select the old image to boot from. When the system boots, review the previous steps and correct any errors. Then repeat these steps, starting with step 9 (repackaging the initrd).
Note: If all the vpath devices that are discovered by cfgvpath were not created during the modification steps above, cfgvpath might have to time out waiting for these devices to be created.
12. Verify that the system has rebooted and SDD is configured correctly.
Once the system comes up, login and verify that the root mount device is the device specified during the configuration, using df. Also validate that any other configured partitions and swap devices, using swapon -s, are also now mounted on vpath devices.

Once booted, verify that vpath devices are being used. Add all other paths and reboot again. The following commands can be used to verify the use of vpath devices:
v mount
v swapon -s
v lsvpcfg
v datapath query device
At this point, the installed boot device can be set as the default boot device for the system. This step is not required, but is suggested because it enables unattended reboots after this procedure is complete.

Upgrading the SDD
Use the following procedure to upgrade SDD:
1. Place the SDD rpm update package on the system.
2. Restart the system in single-path mode (no SDD).
3. You might come to a Repair filesystem prompt. If so, enter the root password. At the prompt, remount the file system as read/write:
mount -n -o remount,rw /
For systems that have /boot on a separate mount point, mount the /boot partition using the /dev/sd device.
4. Remove the previous SDD driver:
rpm -e IBMsdd
The /etc/vpath.conf file will be saved to vpath.conf.rpmsave.
5. Install the new SDD driver:
rpm -ivh IBMsdd-x.x.x.x-y.ppc64.sles9.rpm
6. Prepare the initrd.vp for booting:
mkdir -p /boot/mnt
cd /boot
mv initrd.vp initrd.vp.gz
gunzip initrd.vp.gz
cd /boot/mnt
cpio -iv < ../initrd.vp
cp /opt/IBMsdd/sdd-mod.ko-`uname -r` /boot/mnt/lib
7. Verify that the soft link sdd-mod.ko in /boot/mnt/lib points to the current SDD module.
8. Copy the new cfgvpath command and use the ldd command to verify that the correct libraries are installed for /boot/mnt/opt/IBMsdd/bin/cfgvpath:
cp /opt/IBMsdd/bin/cfgvpath /boot/mnt/opt/IBMsdd/bin/
9. Prepare the initrd for booting:
cd /boot/mnt
find . | cpio -H newc -vo > /boot/initrd.vp
cd /boot
gzip initrd.vp
mv initrd.vp.gz initrd.vp
rm -rf mnt
cd /
10. Reboot the system.
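Step 7's check that the sdd-mod.ko soft link points at the module for the running kernel can be done with readlink and uname -r instead of by inspection. A sketch against a scratch directory standing in for /boot/mnt/lib; the module file naming follows this chapter's examples:

```shell
# Verify that the sdd-mod.ko symlink targets the module built for `uname -r`.
libdir=/tmp/mntlib          # stands in for /boot/mnt/lib
mkdir -p "$libdir"
kver=$(uname -r)
touch "$libdir/sdd-mod.ko-$kver"
ln -sf "sdd-mod.ko-$kver" "$libdir/sdd-mod.ko"

target=$(readlink "$libdir/sdd-mod.ko")
if [ "$target" = "sdd-mod.ko-$kver" ]; then
  echo "sdd-mod.ko -> $target (matches running kernel)"
else
  echo "sdd-mod.ko -> $target (MISMATCH, relink before repackaging)"
fi
```

A mismatch here is worth catching before repackaging: the initrd would otherwise load a module built for a different kernel and the vpath devices would never be created at boot.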

SAN Boot Instructions for SLES 9 with IBM SDD (x86) and LVM 2
The following procedure is used to install SLES 9 x86 on an xSeries host with fibre-channel connect storage and configure SDD with LVM. This procedure assumes that no installation is present to work from and, when completed, the boot and swap devices will be running on IBM SDD vpath devices and will be under LVM control.

Prerequisite steps
1. Ensure that the following conditions exist before continuing with this procedure:
a. Have a copy of SLES 9 SP2 i386 either network-accessible or on CD-ROM.
b. Be familiar with the SLES 9 installation. This includes understanding which packages will be installed.
c. Be familiar with how to set up a SAN network or direct-attached SAN storage devices so that the host system can access LUNs from those storage systems.
d. Be familiar with setting up root, boot, swap and any other initial mount points that will be used for the setup of the initial system under LVM control.
e. The installation target MUST be single-pathed before installing SLES 9.
f. Although SDD functions correctly in single-path environments, it is recommended that there be redundant physical paths to the devices from the host after completing this procedure.
g. Optionally, have an understanding of how the Linux kernel boot process functions and what processes and procedures are used to boot a Linux distribution for a local storage device.
h. For ease of installation and to avoid issues with internal SCSI or IDE controllers, it is recommended that all internal disk drive controllers be disabled. This procedure assumes that this has been done.
i. Ensure that there will be network access to the system.
2. Configure QLogic Devices
v Follow the IBM Host Systems Attachment Guide recommendations when setting up the QLogic BIOS for SAN Boot.
v Verify that the QLogic SAN HBA devices that are configured for the host have been set up to have their BOOT BIOS enabled. This permits discovery and use of SAN disk devices during this procedure.
v While in the QLogic Utility, configure the ESS Model 800 device from which the system will boot. If the utility cannot see the correct device, check the SAN and ESS Model 800 configurations before continuing.
v The QLogic BIOS should be enabled for discovery of SAN Devices, and the device that contains the kernel and initrd images should be selected as the boot device in the QLogic BIOS.
3. Configure Boot/Root/SWAP devices
v The root device that will be used for installation and booting should be at least 4 GB in size. This is the minimum size for installing a base package set from the installation media to the boot devices. If multiple partitions are being used (for example, /usr, /var), the total size of all mount points should be at least this size.
v It is also recommended to limit the installation to a single LUN, if possible, to ease the transition from single-path to IBM SDD vpath; however, this is not required.

More space might be required depending on the package selection. The swap device should be at least the size of the physical memory that is configured in the host. For simplicity, these instructions assume that the boot, root, and swap devices are all located on the same device; however, this is not a requirement for the installation.
v The boot (/boot) device must NOT be under LVM control.
v The root, (/), and other optional mount points (/usr, /var, /opt) can be under LVM control. If they are not, at a minimum, they should be mounted to an IBM SDD vpath device.
v SWAP can also be under LVM control, but this is not a requirement; it should at least use a vpath device.
4. Installation Media
The installation media, that is, the source for installation, can be CD-ROM, NFS, HTTP, FTP, and so forth. You can use any of the installation sources that are listed. For this installation, an NFS-exported set of CD-ROMs was used.
5. Install the system.
v From the BIOS Menus, select the installation source to boot from.
v Verify that the QLogic qla2300 SAN HBA module is loaded and that the SAN devices that will be used for installation have been detected successfully.
v Because of the way Linux discovers SAN devices, and if SAN devices have already been configured for multiple path access, Linux will discover the same physical device multiple times, once for each logical path to the device. Note which device will be used for the installation before proceeding, for example, /dev/sda.
v Select the desired options until arriving at the Installation Settings. Here, modifications of the partitioning settings are required for this installation. This is to make sure that the device noted in the previous step will be used for the root/boot installation target.
v The details of installation, partitioning, LVM setup, package selection, boot options, and so on, are not documented here. See the installation procedures to determine which packages are needed for the type of system that you are installing.
6. Restarting the system.
a. On reboot, modify the BIOS to boot from hard disk; the system should now boot to the newly installed OS.
b. Verify that the system is booted from the correct disk and vpaths.
c. At this point, the installed boot device can be set as the default boot device for the system. This step is not required, but is suggested because it enables unattended reboots after this procedure is complete.

SAN boot configuration
The system should be booted in single-path mode from the SAN. Verify this by looking at the mounted devices and where the root is mounted, using the df command. Also verify that the swap, using swapon -s, and other configured partitions are correctly mounted. This completes the single-path boot from SAN. To modify the boot/root and other devices for booting using the SDD driver, continue with the following steps. The following list contains notes about the install procedure that should be reviewed before beginning:

1.09 GB / not usable 0 Allocatable yes PE Size (KByte) 32768 Total PE 291 Free PE 1 Allocated PE 290 PV UUID SSm5g6-UoWj-evHE-kBj1-3QB4-EVi9-v88xiI 3.conf to recognize vpath devices and ignore sd devices. LABEL= is not being used. v Modify /etc/lvm/lvm. SDD is packaged in an RPM format and can be installed using the rpm command. Perform this procedure in a single-path environment. /etc/vpath. configure the SAN for multipath. /dev/rootVolGroup/ /dev/rootVolGroup/rootVol /dev/rootVolGroup/swapVol /dev/rootVolGroup/rootVol -> /dev/mapper/rootVolGroup-rootVol /dev/rootVolGroup/swapVol -> /dev/mapper/rootVolGroup-swapVol Physical device is sda2 vpath device vpatha2 v The volume groups for root and swap in the example are as follows: v Before starting SDD. ensuring that: a.vp. Install the IBM SDD driver. Modify the /etc/fstab. however. v The /etc/fstab will also need to be modified to point /boot from /dev/sd[x] or LABEL=[some_label_name_here] to /dev/vpath[x]. All commands in this procedure begin with a # sign and might be followed by the output of that command such as the command pvdisplay. See “Installing SDD” on page 213 for more information.I In this procedure. v It is always a good idea to make copies of files that are going to be manually modified such as /etc/fstab. Because /boot will not be under LVM control. you will work with a copy of the current initrd named initrd.lst.lst file to add an entry for the SDD initrd. This will ensure all devices are written to the /etc/vpath.conf file. 2.v v v v v All values and devices in the following procedure might not be the same on the system where this procedures is being conducted. recommended (but not required) that you use vpatha as the physical device for the root volume group.Physical volume --PV Name /dev/sda2 VG Name rootVolGroup PV Size 9. The following is an example of the output from pvdisplay. . Once completed and booting with SDD and LVM. 
Use pvdisplay to show the physical volume(2) currently configured for use in LVM.conf /etc/lvm/lvm. 274 Multipath Subsystem Device Driver User's Guide . comment out any sd devices from /etc/fstab other than /boot. # pvdisplay --. These devices might later be changed to vpath devices if the intent is to have them multipathed. It is. it might be safer to work from within /boot. v Modify the /boot/grub/menu.conf and /boot/grub/menu. Download and install the IBM SDD driver for the kernel version being used. These volumes will be converted from a single-path sd drive to the IBM SDD vpath device.
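The /etc/lvm/lvm.conf change described in the notes above (accept vpath devices, reject sd devices) can be scripted with sed on a copy of the file. In this sketch the input file is a two-line stand-in for lvm.conf, and the original types line ("fd", 16) is a hypothetical placeholder; only the replacement values come from this procedure:

```shell
# Swap the default LVM filter for one that accepts vpath devices and rejects sd,
# and point the types entry at vpath, working on a scratch copy of lvm.conf.
cat > /tmp/lvm.conf <<'EOF'
    filter = [ "a/.*/" ]
    types = [ "fd", 16 ]
EOF
sed -e 's|filter = \[ "a/\.\*/" \]|filter = [ "a/vpath*/", "r/sd*/" ]|' \
    -e 's|types = \[ "fd", 16 \]|types = [ "vpath", 16 ]|' \
    /tmp/lvm.conf > /tmp/lvm.conf.new
cat /tmp/lvm.conf.new
```

Keeping the rejected-device pattern ("r/sd*/") explicit matters: without it, LVM would see each physical volume twice, once through the sd device and once through the vpath device, and could activate the wrong one.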

3. Modify the /etc/fstab, ensuring that:
a. LABEL= is not being used.
b. /boot is mounted on a vpath device.
There is a one-to-one correlation between sd and vpath minor devices, such as sda1 and vpatha1. Major devices, however, might not necessarily correlate; for example, sdb1 could be vpathd1. Because /boot was installed on /dev/sda1 and vpatha corresponds to sda in the /etc/vpath.conf file, /dev/vpatha1 will be the mount device for /boot.
Example: Change from:
/dev/rootVolGroup/rootVol  /      ext3  defaults  1 1
LABEL=/boot                /boot  ext3  defaults  1 2
/dev/rootVolGroup/swapVol  swap   swap  defaults  0 0
To:
/dev/rootVolGroup/rootVol  /      ext3  defaults  1 1
/dev/vpatha1               /boot  ext3  defaults  1 2
/dev/rootVolGroup/swapVol  swap   swap  defaults  0 0
4. Modify the /boot/grub/menu.lst file.
Add an entry before the first title entry for the SDD/LVM boot using initrd.vp. Verify which is the default boot image; the default line should point to the new entry. Make sure the root and resume are identical to the current Linux installation.
title Linux w/LVM w/SDD
   kernel (hd0,0)/vmlinuz root=/dev/system/lv00 resume=/dev/system/swap selinux=0 splash=silent barrier=off elevator=cfq
   initrd (hd0,0)/initrd.vp
5. Modify /etc/lvm/lvm.conf.
This procedure will modify LVM to only discover vpath style devices. Comment out the default filter line.
Example:
filter = [ "a/.*/" ]
Add the following line:
filter = [ "a/vpath*/", "r/sd*/" ]
In the types = section, add an entry for vpath:
types = [ "vpath", 16 ]
/sbin/dmsetup mknodes

# cd /etc/init.d
# vi boot.local

Add the following line to the bottom of the file:

/opt/IBMsdd/bin/cfgvpath

7. Start the SDD:

sdd start

The /etc/vpath.conf file has now been created. You must ensure that vpatha is the root device. Use the cfgvpath query device command to obtain the LUN ID of the root's physical device. (In this procedure, sda is the root device.) The cfgvpath query command produces output similar to the following example. Note that some data from the following output has been modified for ease of reading.

# cfgvpath query
/dev/sda (8, 0) host=0 ch=0 id=0 lun=0 vid=IBM pid=2105800 serial=12020870 lun_id=12020870

The lun_id for /dev/sda is 12020870. This is the sd device that you will map to vpatha. Edit the /etc/vpath.conf file using the lun_id for vpatha and remove all other entries from this file. (SDD will automatically add them later.)

vpatha 12020870

8. Prepare the initrd file. The [initrd file] refers to the current initrd in /boot. The correct initrd can be determined by the following method:

# ls -1A /boot | grep initrd | grep $(uname -r)

initrd-2.6.5-7.191-smp might be the result.

cd /boot
cp [initrd file] initrd.vp.gz
gunzip initrd.vp.gz
mkdir /boot/mnt

9. Resize and mount the initrd image. For x86-based systems, the initrd is an ext2 file system. Because of the need to add files to the initrd image, you should increase the size of the image before continuing. After issuing the e2fsck -f initrd.vp command, you are prompted to create a /lost+found directory; enter y to create it.
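Building the single-entry /etc/vpath.conf from the cfgvpath query output can be automated with awk. This is a sketch only: the sample output below is reproduced from the text, and /tmp stands in for /etc so the example runs on any system.

```shell
# Sample `cfgvpath query` output, as shown above (one line per sd device).
cat > /tmp/cfgvpath.out <<'EOF'
/dev/sda (8, 0) host=0 ch=0 id=0 lun=0 vid=IBM pid=2105800 serial=12020870 lun_id=12020870
/dev/sdb (8, 16) host=0 ch=0 id=0 lun=1 vid=IBM pid=2105800 serial=12120870 lun_id=12120870
EOF

# Pull the lun_id field for the root device (/dev/sda in this procedure).
root_lun=$(awk '$1 == "/dev/sda" {
    for (i = 1; i <= NF; i++)
        if ($i ~ /^lun_id=/) { sub(/^lun_id=/, "", $i); print $i }
}' /tmp/cfgvpath.out)

# Write vpath.conf with vpatha as its only entry; SDD re-adds the
# remaining devices automatically the next time it starts.
echo "vpatha $root_lun" > /tmp/vpath.conf
cat /tmp/vpath.conf    # prints: vpatha 12020870
```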

e2fsck 1.36 (05-Feb-2005)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
/lost+found not found. Create<y>?

# dd if=/dev/zero of=initrd.vp seek=33554432 count=1 bs=1
# e2fsck -f initrd.vp
# resize2fs -f initrd.vp
# mount -o loop -t ext2 initrd.vp /boot/mnt

Note: For the remainder of this procedure, work from /boot/mnt.

10. Make additional directories in /boot/mnt if they do not exist.

# mkdir /boot/mnt/mnt
# mkdir -p /boot/mnt/opt/IBMsdd/bin
# chmod -R 640 /boot/mnt/opt/IBMsdd
# mkdir -p /boot/mnt/lib/tls

11. Copy the following /etc files to /boot/mnt/etc.

# cp /etc/vpath.conf /boot/mnt/etc/
# cp /etc/fstab /boot/mnt/etc/
# cp /etc/group /boot/mnt/etc/
# cp /etc/passwd /boot/mnt/etc/
# cp /etc/nsswitch.conf /boot/mnt/etc/
# cp /etc/lvm/lvm.conf /boot/mnt/etc/lvm/

12. Modify the /boot/mnt/etc/fstab file. Remove all lines that begin with /dev/*.

13. Modify the /boot/mnt/etc/nsswitch.conf file.

a. Change:
passwd: compat
To:
passwd: files

b. Change:
group: compat
To:
group: files

14. Copy /opt/IBMsdd/bin/cfgvpath to /boot/mnt/opt/IBMsdd/bin/.

# cp /opt/IBMsdd/bin/cfgvpath /boot/mnt/opt/IBMsdd/bin/
# chmod 755 /boot/mnt/opt/IBMsdd/bin/*

15. Copy the required library files for cfgvpath. Use the ldd command to determine the library files and locations. Example:

ldd /opt/IBMsdd/bin/cfgvpath | awk '{print $(NF-1)}'
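The pattern of finding a binary's shared libraries with ldd and staging them into the initrd tree can be wrapped in a small loop. A sketch under stated assumptions: /bin/sh is a stand-in for cfgvpath (which exists only on SDD systems), and /tmp/mnt is a stand-in for /boot/mnt.

```shell
# Stage a binary and every shared library it needs under an initrd tree.
stage=/tmp/mnt
mkdir -p "$stage/bin"
cp /bin/sh "$stage/bin/"

# ldd prints "libname => /path/to/lib (0x...)"; the next-to-last field is
# the path. grep '^/' drops entries with no path (such as linux-vdso).
for lib in $(ldd /bin/sh | awk '{print $(NF-1)}' | grep '^/'); do
    cp --parents "$lib" "$stage"   # --parents recreates /lib, /lib/tls, ...
done

find "$stage" -type f
```

This reproduces the directory split the procedure asks for (libraries land under the same /lib or /lib/tls paths inside the tree that they occupy on the host).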

The output of the ldd command might be similar to the following:

/lib/tls/libc.so.6
/lib/ld-linux.so.2

These files must be copied to the /boot/mnt/lib/tls/ and /boot/mnt/lib/ directories, respectively.

16. Copy the correct sdd-mod.o file to the initrd file system. Use the uname -r command to determine the correct sdd-mod.o file and create a soft link. Example: The uname -r command will return something similar to 2.6.5-7.191-smp.

# cp /opt/IBMsdd/sdd-mod.ko-<uname -r> /boot/mnt/lib/sdd-mod.ko

17. Verify that the following files exist in the initrd image. If they do not exist, copy the following binaries:

# cp /bin/tar /boot/mnt/bin/
# cp /bin/awk /boot/mnt/bin/
# cp /bin/chown /boot/mnt/bin/
# cp /bin/grep /boot/mnt/bin/
# cp /bin/mknod /boot/mnt/bin/
# cp /bin/cp /boot/mnt/bin/

18. Copy the required library files for each binary that you copied over in step 17. Use the ldd command to determine the library files and locations. Note: Many binaries use the same libraries, so there might be duplications of copying. Verify that the libraries do not already exist in /boot/mnt/lib; if they already exist, there is no need to copy over a new version. Example:

# ldd /bin/mknod | awk '{print $(NF-1)}' | grep lib
/lib/libselinux.so.1
/lib/tls/libc.so.6
/lib/ld-linux.so.2

These files must be copied to the /boot/mnt/lib/tls/ and /boot/mnt/lib/ directories, respectively. Also, copy the following library file to /boot/mnt/lib/:

cp /lib/libnss_files.so.2 /boot/mnt/lib

19. Modify the /boot/mnt/linuxrc file. Add the following lines just after the statement:

echo Loading kernel/drivers/md/dm-snapshot.ko

echo "Loading SDD module"
insmod /lib/sdd-mod.ko
echo "Creating vpath devices"
/opt/IBMsdd/bin/cfgvpath

Ensure that an updated copy of vpath.conf is copied to the /root file system by using the following syntax to mount the root file system:

/bin/mount -o rw -t [fstype] [device] /mnt
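The linuxrc edit above can also be made non-interactively with sed. The sketch below works on a hypothetical two-line stand-in for /boot/mnt/linuxrc (the real file is much longer); the assumption is that the SDD lines must run before the root file system is mounted, so the mount line is used as the insertion anchor.

```shell
# Miniature stand-in for the initrd's linuxrc (real content differs).
cat > /tmp/linuxrc <<'EOF'
echo Loading kernel/drivers/md/dm-snapshot.ko
/bin/mount -o rw -t ext3 /dev/system/lv00 /mnt
EOF

# Insert the SDD module load and vpath configuration before the mount,
# so the vpath devices exist by the time the root file system is mounted.
sed -i '/^\/bin\/mount /i\
echo "Loading SDD module"\
insmod /lib/sdd-mod.ko\
echo "Creating vpath devices"\
/opt/IBMsdd/bin/cfgvpath' /tmp/linuxrc

cat /tmp/linuxrc
```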

The values used for [fstype] and [device] here are only examples; use the correct values for the system that is being configured.

Add the following lines just after [ vgchange <...> ]:

/bin/mount -o rw -t ext3 /dev/vpatha3 /mnt
/bin/cp /etc/vpath.conf /mnt/etc/
cd /mnt

20. Prepare the mounted initrd file system for rebooting:

# cd /boot
# umount /boot/mnt
# gzip initrd.vp
# mv initrd.vp.gz initrd.vp
# cd /

21. Once booted, verify that vpath devices are being used. Add all other paths and reboot again. The following commands can be used to verify the use of vpath devices:

mount
swapon -s
pvdisplay
lsvpcfg
datapath query device

SAN boot instructions for RHEL 4 with IBM SDD (x86)

The following procedure is used to install RHEL 4 x86 on an xSeries host with fibre-channel connected storage. This procedure assumes that no installation is present to work from and, when completed, the boot and swap devices will be running on IBM SDD vpath devices.

Prerequisite steps

1. Ensure that the following conditions exist before continuing with this procedure:
a. The installation target MUST be single-pathed before installing RHEL 4.
b. Have a copy of RHEL 4 U1 or U2 i386 either network-accessible or on CD-ROM.
c. Be familiar with the RHEL 4 installation. This includes understanding which packages will be installed.
d. Be familiar with how to set up a SAN network or direct-attached SAN storage devices so that the host system can access LUNs from those storage systems. (This procedure was performed on an ESS Model 800.)
e. Be familiar with creating LUNs on the ESS Model 800 so that the host can access the ESS Model 800 devices.
f. Although SDD functions correctly in single-path environments, it is recommended that there be redundant physical paths to the devices from the host after installation of RHEL 4.
g. Optionally, have an understanding of how the Linux kernel boot process functions and what processes and procedures are used to boot a Linux distribution from a local storage device.
h. Ensure that there will be network access to the system.

2. Configure QLogic devices.

Note: For ease of installation and to avoid issues with internal SCSI or IDE controllers, all internal disk drive controllers should be disabled. This procedure assumes that this has been done.

v Verify that the QLogic qla2030 SAN HBA module is loaded and that the SAN devices that will be used for installation have been detected successfully.

Note: Because of the way Linux discovers SAN devices, if SAN devices have already been configured for multiple path access, Linux will discover the same physical device multiple times, once for each logical path to the device. Note which device will be used for the installation before proceeding, that is, /dev/sda.

v Verify that the QLogic SAN HBA devices that are configured for the host have been set up to have their BOOT BIOS enabled. This permits discovery and use of SAN disk devices during this procedure. While in the QLogic Utility, configure the ESS Model 800 device from which the system will boot. If the utility cannot see the correct device, check the SAN and ESS Model 800 configurations before continuing.

v For Emulex fibre HBAs, use Emulex utility software for the Emulex model to enable the Emulex HBA BIOS (to use this utility, the system must be booted to DOS). After the BIOS is enabled, go into the Emulex BIOS during POST boot, enable the boot BIOS for each adapter, and select the boot LUN from the list.

3. Configure boot/root/swap devices.

v The boot device that will be used for installation and booting should be at least 4 GB in size. This is the minimum size for installing a base package set from the installation media to the boot devices.

v It is also recommended that the swap device be at least the size of physical memory that is configured in the host. For simplicity, these instructions assume that the boot, root, and swap devices are all located on the same device; however, this is not a requirement for the installation.

4. Installation Media

The installation media, that is, the source for installation, can be CD-ROM, NFS, HTTP, FTP, and so forth. For this installation, an NFS-exported set of CD-ROMs was used. You can use any of the installation sources that are listed.

5. Install

v Select the desired options until arriving at the Installation Settings. Here, modifications of the partitioning settings are required for this installation. This is to make sure that the device noted in the previous step will be used for the root/boot installation target.

v The details of installation and partitioning are not written up here. See the installation procedures to determine which packages are needed for the type of system being installed.

6. Rebooting

a. On reboot, modify the BIOS to boot from hard disk.
b. Verify that the installation came up on the correct device by looking at the mounted devices and where the root is mounted, using the df command.
The system should now boot to the newly installed OS. Also verify that the swap, using swapon -s, and other configured partitions are correctly mounted. This completes the single-path boot from SAN. At this point, the installed boot device can be set as the default boot device for the system. This step is not required, but is suggested because it enables unattended reboots after this procedure is complete.

SAN boot configuration

The system should be booted in single-path mode from the SAN. The following list of suggestions should be noted before beginning this procedure:

Notes:
1. All values and devices in the following procedure might not be the same on the system where this procedure is being conducted. The following instructions are examples, and the values used herein might be different on your system.
2. Perform this procedure in a single-path environment. Once completed and booting with SDD, configure the SAN for multipath.
3. All commands in this procedure begin with a # sign and might be followed by the output of that command.
4. It is always a good idea to make copies of files that are going to be manually modified, such as /etc/fstab, /etc/vpath.conf, and /boot/grub/menu.lst.

5. Because Red Hat writes labels to the disk and uses labels in the /etc/fstab, the boot (/boot) device might be specified as a label, that is, LABEL=/boot. This might, however, be a different label other than LABEL=/boot. LABEL= in a multi-pathed environment confuses Red Hat, so any device specified with the LABEL= feature must be changed to a /dev/sd or /dev/vpath device.
6. It is recommended, but not required, that vpatha be used as the vpath boot device. It is likewise recommended (but not required) to use vpatha as the physical device for the root volume group.
7. There is a one-to-one correlation between sd and vpath minor devices, such as sda1 and vpatha1. Major devices, however, might not necessarily correlate; for example, sdb1 could be vpathd1.
8. In this procedure, you will work with a copy of the current initrd named initrd.vp.

To modify the boot/root and other devices for booting using the SDD driver, continue with the following steps:

1. Install the IBM SDD driver. Download and install the IBM SDD driver for the kernel version being used. SDD is packaged in an RPM format and can be installed using the rpm command. See "Installing SDD" on page 213 for more information.
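The LABEL= replacement described above is a one-line edit that can be scripted. A minimal sketch: the fstab contents are the example values used in this chapter, /tmp stands in for /etc, and /dev/vpatha1 is correct only when /boot was installed on sda1 and vpatha maps to sda.

```shell
# Sample fstab with /boot specified by label (example values from the text).
cat > /tmp/fstab <<'EOF'
/dev/rootVolGroup/rootVol /     ext3 defaults 1 1
LABEL=/boot               /boot ext3 defaults 1 2
/dev/rootVolGroup/swapVol swap  swap defaults 0 0
EOF

# Point /boot at the vpath partition instead of the label.
sed -i 's|^LABEL=/boot|/dev/vpatha1|' /tmp/fstab

grep '/boot' /tmp/fstab
```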
2. Modify the /etc/fstab file, ensuring that:
a. LABEL= is not being used
b. /boot is mounted on a vpath device

Check for the line in the /etc/fstab where /boot is mounted and change it to the correct vpath device, for example /dev/vpatha1. Also ensure that any other device specified with the LABEL= feature is changed to a /dev/sd or /dev/vpath device. The /etc/fstab will also need to be modified to point /boot from /dev/sd[x] or LABEL=[some_label_name_here] to /dev/vpath[x]. In some cases, there will not be a /boot mount; rather, the boot files will reside in the directory /boot on the root "/" mounted file system.

Before you start SDD, comment out any sd devices from /etc/fstab other than /boot. This ensures that all devices are written to the /etc/vpath.conf file. These devices might later be changed to vpath devices if the intent is to have them multipathed.

Because /boot was installed on /dev/sda1 and vpatha corresponds to sda in the /etc/vpath.conf file, /dev/vpatha1 will be the mount device for /boot.

3. Gather SDD data in preparation for configuring /etc/fstab, menu.lst, and /boot/initrd:

sdd start

The /etc/vpath.conf file has now been created. You must ensure that vpatha is the root device. Use the cfgvpath query device command to obtain the LUN ID of the root's physical device. (In this procedure, sda is the root device.) The cfgvpath query command produces output similar to the following example. Note that some data from the following output has been modified for ease of reading.

cfgvpath query
/dev/sda (8, 0)  host=0 ch=0 id=0 lun=0 vid=IBM pid=2105800 serial=12020870 lun_id=12020870
/dev/sdb (8, 16) host=0 ch=0 id=0 lun=1 vid=IBM pid=2105800 serial=12120870 lun_id=12120870
/dev/sdc (8, 32) host=0 ch=0 id=0 lun=2 vid=IBM pid=2105800 serial=12220870 lun_id=12220870
/dev/sdd (8, 48) host=0 ch=0 id=0 lun=3 vid=IBM pid=2105800 serial=12320870 lun_id=12320870

The lun_id for /dev/sda is 12020870. Edit the /etc/vpath.conf file using the lun_id for vpatha. Remove all other entries from this file (they will be automatically added later by SDD).

4. Modify the /boot/grub/menu.lst file. Add an entry for the SDD boot using initrd.vp:

title Red Hat Enterprise Linux AS (2.6.9-11.ELsmp) w/SDD
    root (hd0,0)
    kernel /vmlinuz-2.6.9-11.ELsmp ro root=/dev/vpatha3
    initrd /initrd.vp

5. Prepare the initrd file. The [initrd file] refers to the current initrd in /boot. The correct initrd can be determined by the following method:

ls -1A /boot | grep initrd | grep $(uname -r)

initrd-2.6.9-11.ELsmp.img might be the result.

cd /boot
cp [initrd file] initrd.vp.gz
gunzip initrd.vp.gz
mkdir /boot/mnt

6. Change directory to /boot/mnt and un-archive the initrd image to /boot/mnt:

cd /boot/mnt
cpio -iv < ../initrd.vp

7. Make additional directories in /boot/mnt:

mkdir mnt
mkdir -p opt/IBMsdd/bin
chmod -R 640 opt/IBMsdd
mkdir -p lib/tls

8. Copy the following /etc files to /boot/mnt/etc:

cp /etc/vpath.conf /boot/mnt/etc/
cp /etc/group /boot/mnt/etc/
cp /etc/passwd /boot/mnt/etc/
cp /etc/nsswitch.conf /boot/mnt/etc/
cp /etc/fstab /boot/mnt/etc/

9. Copy /opt/IBMsdd/bin/cfgvpath to /boot/mnt/opt/IBMsdd/bin/:

cp /opt/IBMsdd/bin/cfgvpath /boot/mnt/opt/IBMsdd/bin

10. Copy required library files for cfgvpath. Use the ldd command to determine the library files and locations. Example:

ldd /opt/IBMsdd/bin/cfgvpath | awk '{print $(NF-1)}'

The ldd command returns something similar to the following:

/lib/tls/libc.so.6
/lib/ld-linux.so.2

These files must be copied to the /boot/mnt/lib/tls/ and /boot/mnt/lib/ directories, respectively.

11. Copy the correct sdd-mod to the initrd file system. Use the uname -r command to determine the correct sdd-mod and create a soft link. Example: The uname -r command will return something similar to 2.6.9-11.ELsmp.

cp /opt/IBMsdd/sdd-mod.ko-2.6.9-11.ELsmp /boot/mnt/lib/sdd-mod.ko

12. Copy the following binaries and change permissions to 755:

cp /bin/mknod /boot/mnt/bin/
cp /bin/cp /boot/mnt/bin/

13. Copy required library files for each binary copied to the /boot/mnt directory in the previous step. Use the ldd command to determine the library files and locations. Note: Many binaries use the same libraries, so there might be duplications of copying. Example:

ldd /bin/mknod | awk '{print $(NF-1)}' | grep lib
/lib/libselinux.so.1
/lib/tls/libc.so.6
/lib/ld-linux.so.2

These files must be copied to the /boot/mnt/lib/tls/ and /boot/mnt/lib/ directories, respectively. Also, copy the following library files to /boot/mnt/lib/:

cp /lib/libproc-3.2.3.so /boot/mnt/lib/
cp /lib/libtermcap.so.2 /boot/mnt/lib/
cp /lib/libnss_files.so.2 /boot/mnt/lib/

14. Modify the /boot/mnt/init file. Add the following lines after the modules load and just before /sbin/udevstart. Note that /sbin/udevstart can exist multiple times in the initrd; make sure these lines are added before the correct /sbin/udevstart entry, which is located after the kernel modules load.

echo "Loading SDD module"
insmod /lib/sdd-mod.ko
echo "Creating vpath devices"
/opt/IBMsdd/bin/cfgvpath

Ensure that an updated copy of vpath.conf is copied to the /root file system at boot time by using the following syntax to mount the root file system. Values used for [fstype] and [device] here are only examples; use the correct values for the system that is being configured.

/bin/mount -o rw -t [fstype] [device] /mnt

Add the following lines to the init file just after the previously added entries:

/bin/mount -o rw -t ext3 /dev/vpatha3 /mnt
/bin/cp /etc/vpath.conf /mnt/etc/
/bin/umount /mnt

15. Use cpio to archive the /boot/mnt directory and gzip it in preparation for rebooting:

find . | cpio -H newc -vo > ../initrd.vp
cd /boot
gzip initrd.vp
mv initrd.vp.gz initrd.vp
rm -rf mnt
cd /
shutdown -r now

16. Once booted, verify that vpath devices are being used. Add all other paths and reboot again. The following commands can be used to verify the use of vpath devices:

mount
swapon -s
lsvpcfg
datapath query device

Upgrading the SDD

Use the following procedure to upgrade SDD:

1. Place the SDD rpm update package on the system.
2. Restart the system in single-path mode (no SDD).
3. You might come to a Repair filesystem prompt. If so, enter the root password. At the prompt, remount the file system as read/write.

mount -n -o remount,rw /

For systems that have /boot on a separate mount point, mount the /boot partition using the /dev/sd device.

4. Remove the previous SDD driver:

rpm -e IBMsdd

The /etc/vpath.conf file will be saved to vpath.conf.rpmsave.

5. Install the new SDD driver:

rpm -ivh IBMsdd-x.x.x.x-y.i686.rhel4.rpm
mkdir -p /boot/mnt
cd /boot
mv initrd.vp initrd.vp.gz
gunzip initrd.vp.gz
cd /boot/mnt
cpio -iv < ../initrd.vp

6. Verify that the soft link sdd-mod.ko in /boot/mnt/lib points to the current SDD module:

cp /opt/IBMsdd/sdd-mod.ko-`uname -r` /boot/mnt/lib/

7. Copy the new cfgvpath command and use the ldd command to verify that the correct libraries are installed for /boot/mnt/opt/IBMsdd/bin/cfgvpath:

cp /opt/IBMsdd/bin/cfgvpath /boot/mnt/opt/IBMsdd/bin/

8. Prepare the initrd.vp for booting:

cd /boot/mnt
find . | cpio -H newc -vo > /boot/initrd.vp
cd /boot
gzip initrd.vp
mv initrd.vp.gz initrd.vp
rm -rf mnt
cd /

9. Reboot the system.

SAN Boot instructions for RHEL 4 with IBM SDD (ppc)

The following procedure is used to install RHEL 4 ppc onto a System p host with fibre-channel connected storage. It is assumed that no installation is present to work from and, when completed, the boot and swap devices will be running on IBM SDD vpath devices.

Prerequisite steps

1. Ensure that the following conditions exist before continuing with this procedure:

v Have a copy of RHEL 4 either network accessible or on CD-ROM.
v Be familiar with the RHEL 4 installation. This includes understanding which packages will be installed and how to select required options through the installation.

5. or source for installation. This is the minimum size for installing all packages from the installation media to the boot devices. For simplicity. v Verify that the QLogic SAN HBA devices that are configured for the host have been setup to have their BOOT BIOS enabled. 3. /dev/sda. Linux will discover the same physical device multiple times. v Ensure that there is network access to the system. This procedure assumes that this has been done. have an understanding of how the Linux kernel boot process functions and what processes and procedures that are used to boot a Linux distribution for a local storage device. You can use any of the installation sources that are listed. Installation Media The installation media. Here. c. See the installation procedures to determine which packages are needed for the type of system being installed. there should be redundant physical paths to the devices from the host after installation of RHEL 4. FTP. 2. Configure QLogic devices v For ease of installation and to avoid issues with internal SCSI or IDE controllers. This permits discovery and use of SAN disk devices during this procedure. The details of installation and partitioning are not written up here. Note: Because of the way Linux discovers SAN devices. While in the QLogic Utility. Take note which device will be used for the installation before proceeding. modification of the partitioning settings is required for this installation. an NFS-exported set of CD-ROMs was used. d. check the SAN and ESS Model 800 configurations before continuing. select the installation source to boot from. these instructions assume that the root/boot/swap devices are all located on the same device. once for each logical path to the device.v Be familiar with how to set up a SAN network or direct-attached SAN storage devices so that the host system can access LUNs from those storage systems (this procedure was performed on an ESS Model 800). b. Use this procedure to install RHEL 4: a. 4. 
4. Installation Media

The installation media, or source for installation, can be CD-ROM, NFS, HTTP, FTP, or so on. For this installation, an NFS-exported set of CD-ROMs was used. You can use any of the installation sources that are listed.

5. Use this procedure to install RHEL 4:

a. From the BIOS Menus, select the installation source to boot from.
b. Verify that the QLogic qla2030 SAN HBA module is loaded and that the SAN devices that will be used for installation have been detected successfully.
c. Select the desired options until arriving at the Installation Settings. Here, modification of the partitioning settings is required for this installation. This is to make sure that the device noted in the previous step will be used for the root/boot installation target.
d. The details of installation and partitioning are not written up here. See the installation procedures to determine which packages are needed for the type of system being installed.

6. Rebooting

a. On reboot, modify the BIOS to boot from hard disk. The system should now be booted in single-path mode from the SAN. Verify this by looking at the mounted devices and where the root is mounted, using the df command.
b. At this point, the installed boot device can be set as the default boot device for the system. This step is not required, but is suggested because it enables unattended reboots after this procedure is complete.

This completes the single-path boot from SAN.

SAN boot configuration

The following list of suggestions should be noted before beginning this procedure:

Notes:
1. All values and devices in the following procedure might not be the same on the system where this procedure is being conducted.
2. Perform this procedure in a single-path environment. Once completed and booting with SDD, configure the SAN for multipath.
3. All commands in this procedure begin with a # sign and might be followed by the output of that command.
4. It is always a good idea to make copies of files that are going to be manually modified, such as /etc/fstab, /etc/vpath.conf, and /etc/yaboot.conf.
Also verify that the swap, using swapon -s, and other configured partitions are correctly mounted.

To modify the boot/root and other devices for booting using the SDD driver, continue with the following steps:

1. Install the IBM SDD driver. Download and install the IBM SDD driver for the kernel version being used. SDD is packaged in an RPM format and can be installed using the rpm command. See "Installing SDD" on page 213 for more information.

2. Modify the /etc/fstab file, ensuring that:
a. LABEL= is not being used
b. /boot is mounted on a vpath device

Because Red Hat writes labels to the disk and uses labels in the /etc/fstab, the boot (/boot) device might be specified as a label, that is, LABEL=/boot. This might, however, be a different label other than LABEL=/boot. LABEL= in a multi-pathed environment confuses Red Hat. Check for the line in the /etc/fstab where /boot is mounted and change it to the correct vpath device. Ensure that any other device specified with the LABEL= feature is changed to a /dev/sd or /dev/vpath device. The /etc/fstab will also need to be modified to point /boot from /dev/sd[x] or LABEL=[some_label_name_here] to /dev/vpath[x].

It is recommended, but not required, that vpatha be used as the vpath boot device and as the physical device for the root volume group. This procedure uses a copy of the current initrd named initrd.vp. Before starting SDD, comment out any sd devices from /etc/fstab other than /boot. This ensures that all devices are written to the /etc/vpath.conf file. These devices can later be changed to vpath devices if the intent is to have them multi-pathed. There is a one-to-one correlation between sd and vpath minor devices, such as sda1 and vpatha1. Major devices, however, might not necessarily correlate; for example,

sdb1 could be vpathd1.

3. Start SDD:

sdd start

The /etc/vpath.conf file has now been created. You must ensure that vpatha is the root device. Use the cfgvpath query device command to obtain the LUN ID of the root's physical device. (In this procedure, sda is the root device.) The cfgvpath query command produces output similar to the following example. Note that some data from the following output has been modified for ease of reading.

cfgvpath query
/dev/sda (8, 0)  host=0 ch=0 id=0 lun=0 vid=IBM pid=2105800 serial=12020870 lun_id=12020870
/dev/sdb (8, 16) host=0 ch=0 id=0 lun=1 vid=IBM pid=2105800 serial=12120870 lun_id=12120870
/dev/sdc (8, 32) host=0 ch=0 id=0 lun=2 vid=IBM pid=2105800 serial=12220870 lun_id=12220870
/dev/sdd (8, 48) host=0 ch=0 id=0 lun=3 vid=IBM pid=2105800 serial=12320870 lun_id=12320870

The lun_id for /dev/sda is 12020870. Edit the /etc/vpath.conf file using the lun_id for vpatha. Remove all other entries from this file (they will be automatically added later by SDD).

4. Modify /etc/yaboot.conf. Add an entry for the SDD boot using initrd.vp. Because /boot was installed on /dev/sda1 and vpatha corresponds to sda in the /etc/vpath.conf file, /dev/vpatha1 will be the mount device for /boot.

image=/vmlinuz-2.6.9-22.0.1.EL
        label=linux-22.01
        read-only
        initrd=/initrd-2.6.9-22.0.1.EL.img
        append="console=hvc0 root=/dev/sda4"

image=/vmlinuz-2.6.9-22.0.1.EL
        label=linux-22.01-sdd
        read-only
        initrd=/initrd.vp
        append="console=hvc0 root=/dev/vpatha4"

5. Collect SDD data in preparation for configuring /etc/vpath.conf, /etc/fstab, /etc/yaboot.conf, and /boot/initrd.
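The yaboot.conf change amounts to duplicating the existing image stanza with the SDD initrd and the vpath root device. A sketch that appends the new stanza to a sample file; the kernel and device paths are the example values from this chapter, and /tmp stands in for /etc.

```shell
# Sample yaboot.conf with the original single-path stanza.
cat > /tmp/yaboot.conf <<'EOF'
image=/vmlinuz-2.6.9-22.0.1.EL
        label=linux-22.01
        read-only
        initrd=/initrd-2.6.9-22.0.1.EL.img
        append="console=hvc0 root=/dev/sda4"
EOF

# Append the SDD stanza: same kernel, but initrd.vp and the vpath root.
cat >> /tmp/yaboot.conf <<'EOF'
image=/vmlinuz-2.6.9-22.0.1.EL
        label=linux-22.01-sdd
        read-only
        initrd=/initrd.vp
        append="console=hvc0 root=/dev/vpatha4"
EOF

grep -c '^image=' /tmp/yaboot.conf    # prints: 2
```

Keeping the original stanza in place means the system can still be booted single-path by selecting the old label if the SDD initrd fails.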
6. Prepare the initrd file. The [initrd file] refers to the current initrd in /boot. The correct initrd can be determined by the following method:

ls -1A /boot | grep initrd | grep $(uname -r)

initrd-2.6.9-22.0.1.EL.img is the result.

cd /boot
cp [initrd file] initrd.vp.gz
gunzip initrd.vp.gz
mkdir /boot/mnt

7. Change directory to /boot/mnt and un-archive the initrd image to /boot/mnt:

cd /boot/mnt
cpio -iv < ../initrd.vp

8. Make additional directories in /boot/mnt:

mkdir mnt
mkdir -p opt/IBMsdd/bin
chmod -R 640 opt/IBMsdd
mkdir -p lib/tls

9. Copy the following /etc files to /boot/mnt/etc:

cp /etc/vpath.conf /boot/mnt/etc/
cp /etc/group /boot/mnt/etc/
cp /etc/passwd /boot/mnt/etc/
cp /etc/nsswitch.conf /boot/mnt/etc/
cp /etc/fstab /boot/mnt/etc/

10. Copy /opt/IBMsdd/bin/cfgvpath to /boot/mnt/opt/IBMsdd/bin/:

cp /opt/IBMsdd/bin/cfgvpath /boot/mnt/opt/IBMsdd/bin/

11. Copy required library files for cfgvpath. Use the ldd command to determine the library files and locations. Example:

ldd /opt/IBMsdd/bin/cfgvpath | awk '{print $(NF-1)}'

The ldd command returns something similar to the following:

/lib/tls/libc.so.6
/lib/ld-linux.so.2

These files must be copied to the /boot/mnt/lib64/tls/ and /boot/mnt/lib64/ directories, respectively.

12. Copy the correct sdd-mod to the initrd file system. Use the uname -r command to determine the correct sdd-mod; the uname -r command returns something similar to 2.6.9-22.0.1.EL.

cp /opt/IBMsdd/sdd-mod.ko-2.6.9-22.0.1.EL /boot/mnt/lib/sdd-mod.ko

13. Copy the following binaries and change permissions to 755:

cp /bin/mknod /boot/mnt/bin/
cp /bin/mount /boot/mnt/bin/
cp /bin/umount /boot/mnt/bin/
cp /bin/cp /boot/mnt/bin/
chmod 755 /boot/mnt/bin/*

14. Copy required library files for each binary copied to the /boot/mnt directory in the previous step. Use the ldd command to determine the library files and locations. Many binaries use the same libraries, so there might be duplications of copying. Example:

ldd /bin/mknod | awk '{print $(NF-1)}' | grep lib
/lib/libselinux.so.1
/lib/tls/libc.so.6
/lib/ld-linux.so.2

The above files must be copied to the /boot/mnt/lib/tls/ and /boot/mnt/lib/ directories, respectively. Also, copy the following library files to /boot/mnt/lib/:

cp /lib/libproc-3.2.3.so /boot/mnt/lib/
cp /lib/libtermcap.so.2 /boot/mnt/lib/
cp /lib/libnss_files.so.2 /boot/mnt/lib/

15. Modify the /boot/mnt/init file. Add the following lines after the modules load and just before /sbin/udevstart. Note that /sbin/udevstart may exist multiple times in the initrd; make sure these lines are added before the correct /sbin/udevstart entry, which is located after the kernel modules load.

echo "Loading SDD module"
insmod /lib/sdd-mod.ko
echo "Creating vpath devices"
/opt/IBMsdd/bin/cfgvpath

16. Use cpio to archive the /boot/mnt directory and gzip it in preparation for rebooting:

find . | cpio -H newc -vo > ../initrd.vp
cd /boot
gzip initrd.vp
mv initrd.vp.gz initrd.vp
rm -rf mnt
cd /

17. Install the yaboot boot loader to a bootstrap partition using the ybin command:

ybin -b /dev/sda1

where /dev/sda1 is the PReP partition. Then restart the system:

shutdown -r now

18. Verify that the system has rebooted and SDD is configured correctly. Once booted, verify that vpath devices are being used. Add all other paths and reboot again. The following commands can be used to verify the use of vpath devices:

v mount
v swapon -s
v lsvpcfg
v datapath query device

Upgrading the SDD

Use the following procedure to upgrade SDD:

1. Place the SDD rpm update package on the system.
2. Restart the system in single-path mode (no SDD).
3. You might come to a Repair filesystem prompt. If so, enter the root password. At the prompt, remount the file system as read/write.
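The post-reboot verification that the mounted file systems actually sit on vpath devices can be scripted by parsing mount output. A sketch against sample output: the device names are the example values used in this chapter, and the capture file is a /tmp stand-in rather than live mount output.

```shell
# Sample `mount` output as it might look after a successful SDD boot.
cat > /tmp/mount.out <<'EOF'
/dev/vpatha4 on / type ext3 (rw)
/dev/vpatha1 on /boot type ext3 (rw)
proc on /proc type proc (rw)
EOF

# Report, for each checked mount point, whether a /dev/vpath device backs it.
for mp in / /boot; do
    if awk -v mp="$mp" '$3 == mp && $1 ~ /^\/dev\/vpath/ { ok = 1 }
                        END { exit !ok }' /tmp/mount.out; then
        echo "$mp is on a vpath device"
    else
        echo "$mp is NOT on a vpath device"
    fi
done
```

On a live system the heredoc would be replaced by running mount directly; swapon -s output can be checked the same way for the swap device.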

mount -n -o remount,rw /
For systems that have /boot on a separate mount point, mount the /boot partition using the /dev/sd device.
4. Remove the previous SDD driver:
rpm -e IBMsdd
The /etc/vpath.conf file will be saved to vpath.conf.rpmsave.
5. Install the new SDD driver:
rpm -ivh IBMsdd-x.x.x.x-y.rhel4.ppc64.rpm
6. Prepare the initrd.vp for booting:
mkdir -p /boot/mnt
cd /boot
mv initrd.vp initrd.vp.gz
gunzip initrd.vp.gz
cd /boot/mnt
cpio -iv < ../initrd.vp
7. Verify that the soft link sdd-mod.ko in /boot/mnt/lib points to the current sdd module. Copy the new cfgvpath command and use the ldd command to verify that the correct libraries are installed for /boot/mnt/opt/IBMsdd/bin/cfgvpath:
cp /opt/IBMsdd/bin/cfgvpath /boot/mnt/opt/IBMsdd/bin/
8. Use cpio to archive the /boot/mnt directory and gzip it in preparation for rebooting:
cd /boot/mnt
find . | cpio -H newc -vo > /boot/initrd.vp
cd /boot
gzip initrd.vp
mv initrd.vp.gz initrd.vp
rm -rf mnt
cd /
9. Install the yaboot boot loader to a bootstrap partition using the ybin command:
ybin -b /dev/sda1
where /dev/sda1 is the PReP partition.
10. Reboot the system.

SAN boot instructions for RHEL 4 with IBM SDD (x86) and LVM 2
Use this procedure to install RHEL 4 U1 (or later) and configure SDD with LVM. This procedure assumes that no installation is present to work from and that, when completed, the boot and swap devices will be running on SDD vpath devices and will be under LVM control.
Ensure that the following conditions exist before continuing with this procedure:
v The installation target MUST be single-pathed before installing RHEL 4.
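Step 4 above preserves the previous configuration as vpath.conf.rpmsave. The following illustrative check (the helper name is hypothetical, not an SDD utility) compares the active file against the saved copy after the new driver is installed:

```shell
# Illustrative check: after an upgrade, compare the active vpath.conf with
# the copy that removing the old IBMsdd rpm saved as vpath.conf.rpmsave.
preserved_vpath_conf() {
  # $1 = directory holding vpath.conf and vpath.conf.rpmsave (normally /etc)
  if cmp -s "$1/vpath.conf" "$1/vpath.conf.rpmsave"; then
    echo "vpath.conf matches saved copy"
  else
    echo "vpath.conf differs from saved copy"
  fi
}
```

A difference is not necessarily an error, but it flags that device assignments may have changed and deserve a look before rebooting.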

v A copy of RHEL 4 U1 i386, either network accessible or on CD.
v Be familiar with the RHEL 4 installation. This includes understanding which packages will be installed.
v Be familiar with how to set up a SAN network or direct-attached SAN storage devices so that the host system can access LUNs from those storage systems (this procedure was performed on an ESS Model 800).
v Be familiar with creating LUNs on the ESS Model 800 so that the host can access the ESS Model 800 devices.
v Be familiar with setting up root and swap under LVM control.
v Have an understanding of how the Linux kernel boot process functions and what processes and procedures are used to boot a Linux distribution from a local storage device.
v Ensure there will be network access to the system.
Although SDD functions correctly in single-path environments, it is recommended that there be redundant physical paths to the devices from the host after installation of RHEL 4.
Note: Due to the way Linux discovers SAN devices, if SAN devices have already been configured for multiple path access, Linux will discover the same physical device multiple times, once for each logical path to the device. Note which device will be used for the installation before proceeding, that is, /dev/sda.
For simplicity these instructions assume that the boot, root, and swap devices are all located on the same device, but this is not a requirement.
1. Configure the HBA devices.
Note: For ease of installation and to avoid issues with internal SCSI or IDE controllers, all internal disk drive controllers should be disabled. This procedure assumes that this has been done.
Verify that the SAN HBA devices that are configured for the host have been set up to have their BOOT BIOS enabled. This permits discovery and use of SAN disk devices during this procedure.
2. Configure the boot/root/swap devices.
The boot device that will be used for installation and booting should be at least 4 GB in size. This is the minimum size for installing a base package set from the installation media to the boot devices. It is also recommended that the swap device be at least the size of the physical memory that is configured in the host.
The root (/) device must be under LVM control. The boot (/boot) device must NOT be under LVM control. Swap might also be under LVM control, but this is not a requirement; it should at least use a vpath device.
3. Use the installation media.
The installation media, that is, the source for installation, can be CD-ROM, NFS, HTTP, FTP, and so forth. For this installation, an NFS-exported set of CD-ROMs was used. You can use any of the installation sources that are listed.
4. Install the system.
v From the BIOS Menus, select the installation source to boot from.
v Verify that the HBA module is loaded and that the SAN devices that will be used for installation have been detected successfully.
v Select the desired options until arriving at the Installation Settings. Here, modification of the partitioning settings is required for this installation. This is to make sure that the device noted in the previous step will be used for the root/boot installation target.
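The sizing guidance in step 2 (boot device of at least 4 GB, swap at least the size of physical memory) can be expressed as a small check. This sketch is not from the guide; sizes are passed in explicitly so the rule can be exercised without probing real hardware, though on a live host the inputs could come from /proc/meminfo and sfdisk -s:

```shell
# Rough pre-install sizing check for the step 2 guidance. All sizes in MB.
check_sizing() {
  # $1 = boot device size, $2 = swap size, $3 = physical memory
  local boot_mb="$1" swap_mb="$2" ram_mb="$3" bad=0
  [ "$boot_mb" -ge 4096 ] || { echo "boot device below 4 GB minimum"; bad=1; }
  [ "$swap_mb" -ge "$ram_mb" ] || { echo "swap smaller than physical memory"; bad=1; }
  [ "$bad" -eq 0 ] && echo "sizing ok"
  return $bad
}
```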

Note: The details of installation and partitioning are not documented here. See the installation procedures to determine which packages are needed for the type of system being installed.
5. Reboot the system.
a. On reboot, modify the BIOS to boot from hard disk. The system should now boot to the newly installed OS.
b. Verify that the system is booted from the correct disk and that the boot/root/swap and LVM configurations are correct.
c. At this point the installed boot device can be set as the default boot device for the system. This step is not required, but is suggested because it enables unattended reboots after this procedure is complete.

SAN boot configuration
The system should now be booted in single-path mode from the SAN. Verify this by looking at the mounted devices and where the root is mounted, using the df command. Also verify the swap, using swapon -s, and that the other configured partitions are correctly mounted. This completes the single-path boot from SAN. To modify the boot/root and other devices for booting using the SDD driver, continue with the following steps.
Notes:
1. All commands in this procedure begin with a # sign and can be followed by the output of that command, such as the command pvdisplay.
2. Because /boot will not be under LVM control, it might be safer to work from within /boot.
3. This procedure uses a copy of the current initrd named initrd.vp.
4. The Volume groups for root and swap on this system are as follows:
/dev/rootVolGroup/
/dev/rootVolGroup/rootVol
/dev/rootVolGroup/swapVol
/dev/rootVolGroup/rootVol -> /dev/mapper/rootVolGroup-rootVol
/dev/rootVolGroup/swapVol -> /dev/mapper/rootVolGroup-swapVol
Physical device is sda2
vpath device vpatha2
5. Perform this procedure in a single-path environment. Once completed and booting with SDD and LVM, configure the SAN for multipath.
6. All values and devices in the following procedure might not be the same on the system where this procedure is being conducted. It is always a good idea to make copies of files that are going to be manually modified, such as /etc/fstab, /etc/vpath.conf, /etc/lvm/lvm.conf, and /boot/grub/menu.lst.
The following modifications are required to boot using SDD vpath devices:
a. Modify /etc/lvm/lvm.conf to recognize vpath devices and ignore SCSI disk devices.
b. Before starting SDD, use vpatha as the physical device for the root volume group. This will ensure that all devices are written to the /etc/vpath.conf file.
c. Comment out any SCSI disk devices from /etc/fstab other than /boot. These devices can later be changed to vpath devices if the intent is to have them multipathed. The /etc/fstab will also need to be modified to point /boot from /dev/sd[x] or LABEL=[some_label_name_here] to /dev/vpath[x].
d. Modify the /boot/grub/menu.lst file to add an entry for the SDD initrd.
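Each /etc/vpath.conf entry pairs a vpath name with the lun_id that cfgvpath query reports for the underlying disk. As a sketch (the function is hypothetical; the input format follows the cfgvpath query example used later in this chapter), the entry can be derived mechanically:

```shell
# Sketch: pull the lun_id for a given /dev/sd device out of `cfgvpath query`
# output and emit the matching /etc/vpath.conf line.
vpath_conf_entry() {
  # $1 = vpath name to assign (for example vpatha)
  # $2 = device (for example /dev/sda); reads query output on stdin
  awk -v name="$1" -v dev="$2" '
    $1 == dev {
      for (i = 1; i <= NF; i++)
        if ($i ~ /^lun_id=/) { sub("lun_id=", "", $i); print name, $i }
    }'
}
```

For example, `cfgvpath query | vpath_conf_entry vpatha /dev/sda` prints the single line that the procedure asks you to keep in /etc/vpath.conf.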

1. Install the SDD driver.
Download IBMsdd-1.6.0.1-8.rhel4.i686.rpm. Change to the directory where the SDD rpm is located and use the rpm tool to install the IBMsdd driver and applications:
# rpm -ivh IBMsdd-1.6.0.1-8.rhel4.i686.rpm
2. Use pvdisplay to get the physical volume for the root and swap lvm volume group(s). In this procedure, /dev/sda2 (sda) is the device that will be used for /dev/vpatha2 (vpatha):
# pvdisplay
--- Physical volume ---
PV Name /dev/sda2
VG Name rootVolGroup
PV Size 9.09 GB / not usable 0
Allocatable yes
PE Size (KByte) 32768
Total PE 291
Free PE 1
Allocated PE 290
PV UUID SSm5g6-UoWj-evHE-kBj1-3QB4-EVi9-v88xiI
3. Modify the /etc/fstab file, ensuring that:
a. LABEL= is not being used
b. /boot is mounted on a vpath device
Because Red Hat writes labels to the disk and uses labels in the /etc/fstab, the boot (/boot) device might be specified as a label, that is, LABEL=/boot. This can, however, be a different label other than LABEL=/boot. Red Hat does not recognize LABEL= in a multipathed environment.
There is a one-to-one correlation between SCSI disk and vpath minor devices, such as sda1 and vpatha1. Major devices, however, might not correlate; that is, sdb1 could be vpathd1. Because /boot was installed on /dev/sda1 and vpatha corresponds to sda in the /etc/vpath.conf file, /dev/vpatha1 will be the mount device for /boot.
Check for the line in the /etc/fstab where /boot is mounted and change it to the correct vpath device. Also ensure that any other device specified with the LABEL= feature is changed to a /dev/sd or /dev/vpath device.
Example:
Change:
/dev/rootVolGroup/rootVol / ext3 defaults 1 1
LABEL=/boot /boot ext3 defaults 1 2
/dev/rootVolGroup/swapVol swap swap defaults 0 0
To:
/dev/rootVolGroup/rootVol / ext3 defaults 1 1
/dev/vpatha1 /boot ext3 defaults 1 2
/dev/rootVolGroup/swapVol swap swap defaults 0 0
4. Modify the /boot/grub/menu.lst file. Add an entry for the SDD/LVM boot using initrd.vp.

6.0) kernel /vmlinuz-2.img title Red Hat Enterprise Linux AS-up (2. (8.*/" ] To: filter = [ "a/vpath*/".ELsmp ro root=/dev/rootVolGroup/rootVol initrd /initrd-2. The cfgvpath query command produces output similar to the following: Note: Some data from the following output has been modified for ease of reading. (8.9-11.xpm.6.6. (In this procedure.ELsmp ro root=/dev/rootVolGroup/rootVol initrd /initrd.ELsmp. The correct initrd can be determined by the following method: Chapter 5. Modify /etc/lvm/lvm.conf.6.default=1 timeout=10 splashimage=(hd0. Start SDD. (8. Remove all other entries from this file. Using SDD on a Linux host system 295 .6.9-11. Prepare the initrd file.9-11.img 5. 0) 16) 32) 48) host=0 host=0 host=0 host=0 ch=0 ch=0 ch=0 ch=0 id=0 id=0 id=0 id=0 lun=0 lun=1 lun=2 lun=3 vid=IBM vid=IBM vid=IBM vid=IBM pid=2105800 pid=2105800 pid=2105800 pid=2105800 serial=12020870 serial=12120870 serial=12220870 serial=12320870 lun_id=12020870 lun_id=12120870 lun_id=12220870 lun_id=12320870 The lun_id for /dev/sda is 12020870.6.0) kernel /vmlinuz-2.gz hiddenmenu title Red Hat Enterprise Linux AS (2. Use the cfgvpath query device command to obtain the LUN ID of the root's physical device.conf file using the lun_id for vpatha.0) kernel /vmlinuz-2. You must ensure that vpatha is the root device.9-11. # sdd start /etc/vpath. (They are automatically added later by SDD.EL) root (hd0.9-11.ELsmp) w/LVM w/SDD root (hd0. 16 ] 6. sda is the root device).) vpatha 12020870 7. "r/sd*/" ] In the types = section.ELsmp) root (hd0.0)/grub/splash. Edit the /etc/vpath.EL ro root=/dev/rootVolGroup/rootVol initrd /initrd-2.9-11. Change: filter = [ "a/.6.9-11.conf has now been created.9-11.6. add an entry for vpath types = [ "vpath".vp title Red Hat Enterprise Linux AS (2.EL. # cfgvpath query /dev/sda /dev/sdb /dev/sdc /dev/sdd (8. The [initrd file] refers to the current initrd in /boot.

# ls -1A /boot | grep initrd | grep $(uname -r)
# cd /boot
# cp [initrd file] initrd.vp.gz
# gunzip initrd.vp.gz
# mkdir /boot/mnt
Note: The rest of this procedure involves working from /boot/mnt.
8. Change directory to /boot/mnt and unarchive the initrd image to /boot/mnt.
# cd /boot/mnt
# cpio -iv < ../initrd.vp
9. Make additional directories in /boot/mnt.
# mkdir mnt
# mkdir -p opt/IBMsdd/bin
# chmod -R 640 opt/IBMsdd
# mkdir -p lib/tls
10. Copy the following /etc files to /boot/mnt/etc.
# cp /etc/vpath.conf /boot/mnt/etc/
# cp /etc/group /boot/mnt/etc/
# cp /etc/passwd /boot/mnt/etc/
# cp /etc/nsswitch.conf /boot/mnt/etc/
# cp /etc/fstab /boot/mnt/etc/
# cp /etc/lvm/lvm.conf /boot/mnt/etc/lvm/
11. Modify the /boot/mnt/etc/nsswitch.conf file. (For rhel4u1i386, this might already be done.)
a. Change:
passwd: compat
To:
passwd: files
b. Change:
group: compat
To:
group: files
12. Copy /opt/IBMsdd/bin/cfgvpath to /boot/mnt/opt/IBMsdd/bin/ and change permissions to 755.
# cp /opt/IBMsdd/bin/cfgvpath /boot/mnt/opt/IBMsdd/bin/
# chmod 755 /boot/mnt/opt/IBMsdd/bin/*
13. Copy the required library files for cfgvpath. Use the ldd command to determine the library files and locations.
Example:
# ldd /opt/IBMsdd/bin/cfgvpath | awk '{print $(NF-1)}'

The ldd command returns the following:
/lib/tls/libc.so.6 /lib/ld-linux.so.2
For RHEL 4.0, copy these files to the /boot/mnt/lib/tls/ and /boot/mnt/lib/ directories respectively. For RHEL 4.5, copy these files to the /boot/mnt/lib64/tls/ and /boot/mnt/lib64/ directories respectively.
14. Copy the following binaries and change permissions to 755:
# cp /bin/mknod /boot/mnt/bin/
# cp /bin/mount /boot/mnt/bin/
# cp /bin/umount /boot/mnt/bin/
# cp /bin/cp /boot/mnt/bin/
# chmod 755 /boot/mnt/bin/*
15. Copy the required library files for each binary. Use the ldd command to determine the library files and locations.
Note: Many binaries use the same libraries, so there might be duplications when copying.
Example:
# ldd /bin/mknod | awk '{print $(NF-1)}'
/lib/libselinux.so.1
/lib/tls/libc.so.6
/lib/ld-linux.so.2
For RHEL 4.0, copy these files to the /boot/mnt/lib/tls/ and /boot/mnt/lib/ directories respectively. For RHEL 4.5, copy these files to the /boot/mnt/lib64/tls/ and /boot/mnt/lib64/ directories respectively.
Also, for RHEL 4.0, copy all of the following library files to /boot/mnt/lib/:
# cp /lib/libproc-3.2.3.so /boot/mnt/lib/
# cp /lib/libtermcap.so.2 /boot/mnt/lib/
# cp /lib/libnss_files.so.2 /boot/mnt/lib/
For RHEL 4.5, copy all of the following library files to /boot/mnt/lib64/:
# cp /lib/libproc-3.2.3.so /boot/mnt/lib64/
# cp /lib/libtermcap.so.2 /boot/mnt/lib64/
# cp /lib/libnss_files.so.2 /boot/mnt/lib64/
16. Copy the correct sdd-mod to the initrd file system. Use the uname -r command to determine the correct sdd-mod; uname -r returns 2.6.9-11.ELsmp.
For RHEL 4.0:
# cp /opt/IBMsdd/sdd-mod.ko-2.6.9-11.ELsmp /boot/mnt/lib/sdd-mod.ko
For RHEL 4.5:
# cp /opt/IBMsdd/sdd-mod.ko-2.6.9-11.ELsmp /boot/mnt/lib64/sdd-mod.ko
17. Modify the /boot/mnt/init file. Add the following lines just before the following statement:

[ echo "Loading dm-mod.ko module" ]
The lines to add are:
echo "Loading SDD module"
insmod /lib/sdd-mod.ko
echo "Creating vpath devices"
/opt/IBMsdd/bin/cfgvpath
Ensure that an updated copy of vpath.conf is copied to the /root filesystem by using the following syntax to mount the root file system:
/bin/mount -o rw -t [fstype] [device] /mnt
Add the following lines just after [ insmod /lib/dm-snapshot.ko ]. The values used for the [fstype] and [device] here are only examples; use the correct values for the system that is being configured.
/bin/mount -o rw -t ext3 /dev/rootVolGroup/rootVol /mnt
/bin/cp /etc/vpath.conf /mnt/etc/
/bin/umount /mnt
18. Use cpio to archive the /boot/mnt directory and gzip to compress it in preparation for rebooting.
# find . | cpio -H newc -vo > ../initrd.vp
# cd /boot
# gzip initrd.vp
# mv initrd.vp.gz initrd.vp
# cd /
# shutdown -r now
19. Once booted, verify that vpath devices are being used. Add all other paths and reboot again. The following commands can be used to verify the use of vpath devices:
# mount
# swapon -s
# pvdisplay
# lsvpcfg
# datapath query device

Using lilo with SDD (remote boot) on x86
Use this section to boot SDD using the lilo boot loader in a remote/SAN boot environment. This section only applies to booting using lilo and SDD vpath on x86-based systems. PowerPC-based systems use a combination of lilo and yaboot, whereas on x86-based systems you rely solely on either grub or lilo as the boot loader.
When running the command lilo to update the MBR, lilo reads the configuration file /etc/lilo.conf and attempts to read the disk geometry of the boot device in order to write its data to the master boot record. The method it uses for reading the disk geometry depends on a hardcoded list of major numbers that only supports certain types of devices. SDD vpath is not one of these supported devices. Therefore, when you are remote-booting and the MBR is contained on the vpath device, lilo is not able to complete. When the lilo command runs, you will get this error message:

hdparm. counting from 0 Device Boot Start End #cyls #blocks Id /dev/vpatha1 * 0+ 63 64514048+ /dev/vpatha2 64 4966 4903 39383347+ /dev/vpatha3 4967 5220 254 2040255 /dev/vpatha4 0 0 0 System 83 Linux 83 Linux 82 Linux swap 0 Empty Note the cylinders. 63 sectors/track Units = cylinders of 8225280 bytes. heads. Use the sfdisk utility to find the cylinders. 255 heads.. where MM is the major number and mm is the minor number of the device in question (in hex). Your root disk is vpatha and there are four underlying SCSI disk devices or paths that correspond to that vpath device if your lsvpcfg output looks like the following example: Chapter 5.Fatal: Sorry. hdparm only works on scsi disk or ide disk devices (/dev/sdXX or /dev/hdXX) and does not work on vpath devices. 2. can be used to get the starting sector numbers for each partition. you can manually specify the geometry of the disk in the file /etc/lilo. partition = <last vpath partition> start = <sector start # of last partition> The following example is from a RHEL3 U4 system running on vpath devices: disk = /dev/vpatha bios = 0x80 sectors = 63 heads = 255 cylinders = 5221 partition = /dev/vpatha1 start = 63 partition = /dev/vpatha2 start = 1028160 partition = /dev/vpatha3 start = 79794855 Use the following procedure to find the information from your system in order to manually specify the disk geometry: 1. don't know how to handle device 0xMMmm. To prevent lilo from checking the major numbers. and blocks. Manually specifying disk geometry of the boot device The syntax for manually specifying the disk geometry is: disk = <disk-name> bios = 0x80 sectors = <# of sectors/track> heads = <# of heads> cylinders = <# of cylinders> partition = <first vpath partition> start = <sector start # of first partition> partition = <second vpath partition> start = <sector start # of second partition> . and sectors per track and use this information to fill in the appropriate lilo.. However. 
Use the -l option to list the current partition table and geometry numbers.conf. You can use one of the underlying paths that correspond to your boot disk to check the values. blocks of 1024 bytes. For example. heads.conf entries. A separate program. Using SDD on a Linux host system 299 . [root@server ~]# sfdisk -l /dev/vpatha Disk /dev/vpatha: 5221 cylinders.

000 vpatha ( 254, 0) 75022811540 = 6005076303ffc06a0000000000001540 = /dev/sda /dev/sdb /dev/sdc /dev/sdd
3. Choose one of the underlying paths, for example /dev/sda, and then issue the following command:
[root@server ~]# hdparm -g /dev/sda
The following output is displayed:
/dev/sda:
geometry = 5221/255/63, sectors = 83886080, start = 0
4. Compare this output to the sfdisk -l output.
5. Issue hdparm -g against every partition. For example:
[root@server ~]# hdparm -g /dev/sda
/dev/sda:
geometry = 5221/255/63, sectors = 83886080, start = 0
[root@server ~]# hdparm -g /dev/sda1
/dev/sda1:
geometry = 5221/255/63, sectors = 1028097, start = 63
[root@server ~]# hdparm -g /dev/sda2
/dev/sda2:
geometry = 5221/255/63, sectors = 78766695, start = 1028160
[root@server ~]# hdparm -g /dev/sda3
/dev/sda3:
geometry = 5221/255/63, sectors = 4080510, start = 79794855
6. Use the values after the "start = " sections above as the starting sector numbers for the /etc/lilo.conf parameters. These values correspond to the starting sector numbers shown in the earlier lilo.conf example snippet. (Each start value falls on a cylinder boundary; for example, /dev/sda2 starts at cylinder 64, and 64 * 255 * 63 = 1028160.)
7. Insert the disk parameters and all the supporting information.
8. Rerun lilo. The command should now succeed because it does not have to probe the geometry of the vpath device, but instead uses the entries in lilo.conf.
Here is an example of a lilo.conf file configured for remote boot:

boot=/dev/vpatha
map=/boot/map
install=/boot/boot.b
disk = /dev/vpatha
bios = 0x80
sectors = 63
heads = 255
cylinders = 5221
partition = /dev/vpatha1
start = 63
partition = /dev/vpatha2
start = 1028160
partition = /dev/vpatha3
start = 79794855
prompt
timeout=50
message=/boot/message
default=linux
image=/boot/vmlinuz-2.4.21-27.ELsmp
label=linux
initrd=/boot/initrd-2.4.21-27.ELsmp.img.test
read-only
root=/dev/vpatha2

SDD server daemon
The SDD server (also referred to as sddsrv) is an integrated component of SDD. This component consists of a UNIX application daemon that is installed in addition to the SDD device driver. See Chapter 12, "Using the SDD server and the SDDPCM server," on page 415 for more information about sddsrv.

Verifying if the SDD server has started
After you have installed SDD, verify if the SDD server (sddsrv) has automatically started by entering ps wax | grep sddsrv.
If the SDD server (sddsrv) has automatically started, the output from the ps command looks like this:
31616 ? S 0:00 /opt/IBMsdd/bin/sddsrv
31617 ? S 0:00 /opt/IBMsdd/bin/sddsrv
31618 ? S 0:00 /opt/IBMsdd/bin/sddsrv
31619 ? S 0:10 /opt/IBMsdd/bin/sddsrv
31620 ? S 0:00 /opt/IBMsdd/bin/sddsrv
31621 ? S 0:00 /opt/IBMsdd/bin/sddsrv
31622 ? S 0:00 /opt/IBMsdd/bin/sddsrv
If processes are listed, the SDD server has automatically started. If the SDD server has not started, no processes will be listed, and you should see "Starting the SDD server manually" for instructions to start sddsrv.

Starting the SDD server manually
If the SDD server did not start automatically after you performed the SDD installation, use the following process to start sddsrv:
1. Edit /etc/inittab and append the following text:
#IBMsdd path recovery daemon:
srv:345:respawn:/opt/IBMsdd/bin/sddsrv > /dev/null 2>&1

Collecting trace information
SDD tracing can be enabled using the SDD server Web page. Enabling tracing puts the trace information into memory. To extract that information, issue killall -IO sddsrv. This command causes sddsrv to copy the trace data out of memory to the file /var/log/sdd.log on reception of this signal.

Stopping the SDD server
To stop the SDD server, perform the following steps:
1. Edit /etc/inittab and comment out the SDD server entry:
#IBMsdd path recovery daemon:
#srv:345:respawn:/opt/IBMsdd/bin/sddsrv > /dev/null 2>&1
2. Save the file.
3. Issue telinit q.
See "Verifying if the SDD server has started" on page 301 to verify that the SDD server is not running. If sddsrv is not running, no processes will be listed when you enter ps wax | grep sddsrv.

Understanding SDD support for single-path configuration
SDD does not support concurrent download of licensed machine code in single-path mode. However, SDD does support single-path SCSI or fibre-channel connection from your Linux host system to a disk storage system and single-path fibre-channel connection from your Linux host system to a SAN Volume Controller.
Notes:
1. SDD supports one fibre-channel adapter on the host system. SDD does not support SCSI adapters.
2. If your host has only one fibre-channel adapter port, it requires you to connect through a switch to multiple ports. You should have at least two fibre-channel adapters to prevent data loss due to adapter hardware failure or software failure for multipath support.
3. Because of single-path connection, SDD cannot provide single-point-failure protection and load balancing. IBM does not recommend this.
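The ps wax | grep sddsrv check described above is easy to wrap in a script. This is a hypothetical helper, not an SDD utility; the daemon path is taken from the guide, and the ps listing is read from stdin so the logic can be exercised without a live daemon:

```shell
# Hypothetical wrapper around the `ps wax | grep sddsrv` check: counts
# sddsrv processes in a ps listing supplied on stdin and succeeds only if
# at least one is present.
sddsrv_running() {
  local count
  count=$(grep -c '/opt/IBMsdd/bin/sddsrv' || true)
  [ "$count" -gt 0 ] && echo "sddsrv running ($count processes)"
}
```

On a live system it would be invoked as `ps wax | sddsrv_running`; a nonzero return indicates the manual start procedure is needed.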
Changing to a different port number for the SDD server
To change to a different port number for the SDD server, see the instructions in "Changing the sddsrv or pcmsrv TCP/IP port number" on page 419.

2. Chapter 5. 247. 247. while /dev/sda1 is a logical device representing the first partition of the whole device /dev/sda. In other words. 247. SDD vpath device names can replace corresponding SCSI disk device name entries in system configurations files. specifies the unique device name. 247. 3.Partitioning SDD vpath devices Disk partitions are known as logical devices. fsck. All I/O management. 247. statistics. Similarly then. and failover processes of the logical device follow those of the whole device. only entire SCSI disks can be configured. 247. as with SCSI disk devices. Therefore. The following output demonstrates how the partitions are named: brw-r--r-brw-r--r-brw-r--r-brw-r--r-brw-r--r-brw-r--r-brw-r--r-brw-r--r-brw-r--r-brw-r--r-brw-r--r-brw-r--r-brw-r--r-brw-r--r-brw-r--r-brw-r--r-1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 root root root root root root root root root root root root root root root root root root root root root root root root root root root root root root root root 247. Device files /dev/vpathXY are also created for each partition of the multipath device (where Y represents the corresponding partition number). 247. 247. an SDD vpath device can be partitioned into logical devices. SDD resides above the SCSI subsystem in the block I/O stack of the Linux host system. such as /etc/fstab. A number following the device name denotes the partition number. 247. 247. 0 1 10 11 12 13 14 15 2 3 4 5 6 7 8 9 Apr Apr Apr Apr Apr Apr Apr Apr Apr Apr Apr Apr Apr Apr Apr Apr 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 16:57 16:57 16:57 16:57 16:57 16:57 16:57 16:57 16:57 16:57 16:57 16:57 16:57 16:57 16:57 16:57 /dev/vpatha /dev/vpatha1 /dev/vpatha10 /dev/vpatha11 /dev/vpatha12 /dev/vpatha13 /dev/vpatha14 /dev/vpatha15 /dev/vpatha2 /dev/vpatha3 /dev/vpatha4 /dev/vpatha5 /dev/vpatha6 /dev/vpatha7 /dev/vpatha8 /dev/vpatha9 Note: For supported file systems. For example. mkfs. a-z. Once configured. X might be one or two letters). such as fdisk. 
Disk partitions cannot be configured as SDD vpath devices. a specific device file /dev/vpathX is created for each supported multipath SCSI disk device (where X represents the unique device name. 247. The next letter (or two letters). and mount accept an SDD device name as a parameter. 247. use the standard UNIX fdisk command to partition SDD vpath devices. Using standard UNIX applications After successful installation. When a file system or user application wants to use the logical device. The SDD naming scheme for disks and disk partitions follows the standard Linux disk-naming convention. The following description illustrates the naming scheme for SCSI disks and disk partitions: 1. The first two letters indicate the SCSI device. SDD recognizes and communicates with the native device driver of your Linux host system and standard UNIX applications. 247. Each device and partition has its own major and minor number. 247. /dev/sda is the whole device. 247. it should refer to /dev/vpathXY (for example. Using SDD on a Linux host system 303 . /dev/vpatha1 or /dev/vpathbc7) as its multipath logical device.
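The /dev/vpathXY naming scheme described above can be parsed mechanically. This small helper is illustrative only (not part of SDD); it splits a device node into its device and partition parts, following the convention that X is one or two letters and Y the partition number:

```shell
# Illustrative helper: split an SDD device node name into device and
# partition parts, per the /dev/vpathXY naming scheme described above.
vpath_parts() {
  # $1 = device node, for example /dev/vpatha2
  local name="${1#/dev/vpath}"
  local letters="${name%%[0-9]*}"   # strip trailing partition digits
  local part="${name#"$letters"}"
  echo "device=vpath${letters} partition=${part:-whole}"
}
```

For example, `vpath_parts /dev/vpathbc7` reports device vpathbc, partition 7, while a name with no trailing digits refers to the whole device.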

See Appendix B, "System log messages," on page 451 for more information about the messages that are logged for AIX SDD.

Managing common issues
The following list includes known general issues:
v Understanding SDD error recovery policy during total path failure
To avoid potential data corruption, if a write I/O fails on all paths of an SDD vpath device, all the paths are put into DEAD state and OFFLINE mode. The only way to make paths available again is through manual intervention. That is, you have to unmount the SDD vpath device and, if necessary, run fsck to check and repair the filesystem.
v A DEAD path is not reclaimed by sddsrv because it was offlined by the kernel.
Sometimes, such as when a failure occurs or when there is a lot of I/O, the kernel's SCSI layer will offline a path. This behavior is done independently of SDD, so sddsrv cannot reclaim a path that has been taken offline by the SCSI layer. Typically the system messages log contains messages indicating that this has happened. For example:
On Linux 2.4 kernels:
kernel: scsi: device set offline - not ready or command retry failed after bus reset: host 2 channel 0 id 1 lun 0
On Linux 2.6 kernels:
kernel: scsi: Device offlined - not ready after error recovery: host 2 channel 0 id 1 lun 0
or,
scsi1 (1:1): rejecting I/O to offline device
For 2.4 kernels, the only way to restore devices that have been taken offline by the SCSI layer is to reload the HBA driver. For 2.6 kernels, you can use the sysfs interface to dynamically re-enable SCSI devices that have been taken offline by the SCSI layer.
v Setting SCSI midlayer timeout values to address loaded storage targets
Some storage devices require a longer time period to retire an I/O command issued by an initiator under heavy load. By default, the SCSI midlayer allots only 30 seconds per SCSI command before cancelling the I/O command to the initiator. If you see SCSI errors of value 0x6000000, LUN reset messages, or abort I/O messages, you can change the timeout setting in case it helps to alleviate the situation. It might also be necessary to stop all I/O operations and allow the target to retire all outstanding I/O before starting I/O operations again with the new timeout. Consider setting the timeout value to 60 seconds.
For Linux 2.6 kernels, you can manually set the timeout value through the sysfs interface. To do this, issue the following command:
echo 60 > /sys/class/scsi_device/<host>:<channel>:<target>:<lun>/timeout
Replace the items in <> with the following (you can match with the values in /proc/scsi/scsi):
– host - host number

– channel - channel number
– target - target number
– lun - lun number
To simplify this process for multiple paths, Emulex has provided the script set_timout_target.sh at the Emulex Web site under the Linux tools page. Details on how to use the tool are available on the Emulex Web site. Because this script deals with SCSI disk devices, it can work equally well in environments that use QLogic host bus adapters.
v Changing default queue depth values to avoid overloaded storage targets
You should lower the queue depth per LUN when using multipathing. With multipathing, the default value is magnified because it equals the default queue depth of the adapter multiplied by the number of active paths to the storage device. For example, given that QLogic uses a default queue depth of 32, the recommended queue depth value to use would be 16 when using two active paths and 8 when using four active paths. Directions for adjusting the queue depth are specific to each HBA driver and should be available in the documentation for the HBA.
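The queue depth rule above is simple division: the per-LUN depth times the number of active paths should not exceed the adapter's default queue depth. A one-line helper (illustrative only, not an SDD tool) makes the arithmetic explicit:

```shell
# Illustrative arithmetic for the queue depth guidance above:
# per-LUN depth = adapter default queue depth / number of active paths.
recommended_queue_depth() {
  # $1 = adapter default queue depth, $2 = number of active paths
  echo $(( $1 / $2 ))
}
```

With the QLogic default of 32, `recommended_queue_depth 32 2` gives 16 and `recommended_queue_depth 32 4` gives 8, matching the values quoted above.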


upgrade.5 if servers are being clustered ConsoleOne v v v v v SCSI and fibre-channel device drivers © Copyright IBM Corp.1 if servers are being clustered NetWare Cluster Service for NetWare 6.1 with Support Pack – Novell NetWare 6 with Support Pack – NetWare 6.Chapter 6.5 with Support Pack NetWare Cluster Service for NetWare 5. configure.com/servers/storage/support/software/sdd Attention: SDD does not support Novell NetWare host systems attached to SAN Volume Controller. see the Readme file on the CD-ROM or visit the following SDD Web site: www. 1999. You can check for and download the latest APARs. which is a driver component that is associated with storage devices and the commands that control the storage device.0 if servers are being clustered NetWare Cluster Service for NetWare 6. 2010 307 .ibm. and microcode updates from the following Web site: www. maintenance level fixes. or NetWare 6. For updated and additional information not included in this chapter.ibm. Verifying the hardware and software requirements You must have the following hardware and software components in order to successfully install SDD. The SDD for NetWare is shipped as a Novell Custom Device Module (CDM).5) that is attached to a disk storage system.0.com/servers/storage/support/ Hardware requirements The following hardware components are needed: v IBM TotalStorage SAN Fibre Channel Switch 2109 is recommended v Host system v Fibre-channel switch v SCSI adapters and cables (ESS) v Fibre-channel adapters and cables Software requirements The following software components are needed: v Microsoft Windows operating system running on the client v One of the following NetWare operating systems running on the server: – Novell NetWare 5. and remove SDD on a NetWare host system (NetWare 5.1. NetWare 6. Using the SDD on a NetWare host system This chapter provides step-by-step procedures on how to install.

Supported environments
SDD supports the following environments:
v Novell NetWare 5.1 SP6
v Novell NetWare 6 SP1, SP2, SP3, SP4, or SP5
v Novell NetWare 6.5 SP1.1 or SP2
v Novell Cluster Services 1.01 for Novell NetWare 5.1 is supported on fibre-channel and SCSI devices.
v Novell Cluster Services 1.6 for Novell NetWare 6.0 is supported only for fibre-channel devices.
v Novell Cluster Services 1.7 for Novell NetWare 6.5 is supported only for fibre-channel devices.
Currently, only the following QLogic fibre-channel adapters are supported with SDD:
v QL2310FL
v QL2200F
v QLA2340 and QLA2340/2

Unsupported environments
SDD does not support:
v A host system with both a SCSI and fibre-channel connection to a shared disk storage system LUN
v Single-path mode during concurrent download of licensed machine code nor during any disk storage system concurrent maintenance that impacts the path attachment, such as a disk storage system host-bay-adapter replacement
v DS8000 and DS6000 do not support SCSI connectivity.

Disk storage system requirements
To successfully install SDD, ensure that the disk storage system devices are configured as one of the following:
v IBM 2105xxx (SCSI-attached device) or IBM FC 2105 (fibre-channel-attached device), for ESS devices
v IBM 2107xxx, for DS8000 devices
v IBM 1750xxx, for DS6000 devices
v IBM 2145, for SAN Volume Controller devices
where xxx represents the disk storage system model number.

SCSI requirements
To use the SDD SCSI support, ensure that your host system meets the following requirements:
v A SCSI cable connects each SCSI host adapter to an ESS port.
v If you need the SDD I/O load-balancing and failover features, ensure that a minimum of two SCSI adapters are installed.
For information about the SCSI adapters that can attach to your NetWare host system, go to the following Web site:
www.ibm.com/servers/storage/support

Fibre-channel requirements
You must check for and download the latest fibre-channel device driver APARs, maintenance level fixes, and microcode updates from the following Web site:
www.ibm.com/servers/storage/support
Note: If your host has only one fibre-channel adapter, you need to connect through a switch to multiple disk storage system ports. You should have at least two fibre-channel adapters to prevent data loss due to adapter hardware failure or software failure.
To use the SDD fibre-channel support, ensure that your host system meets the following requirements:
v The NetWare host system has the fibre-channel device drivers installed.
v A fiber-optic cable connects each fibre-channel adapter to a disk storage system port.
v If you need the SDD I/O load-balancing and failover features, ensure that a minimum of two paths to a device are attached.
For information about the fibre-channel adapters that can be used on your NetWare host system, go to the following Web site:
www.ibm.com/servers/storage/support/

Preparing for SDD installation
Before you install SDD, you must configure the disk storage system for your host system and attach required fibre-channel adapters.

Configuring the disk storage system
Before you install SDD, you must configure:
v The ESS to your host system and the required SCSI adapters that are attached.
v The disk storage system to your host system and the required fibre-channel adapters that are attached.
v The disk storage system for single-port or multiple-port access for each LUN. SDD requires a minimum of two independent paths that share the same logical unit to use the load-balancing and path-failover-protection features. With a single path, failover protection is not provided.
See the Installation and Planning Guide for your disk storage system for more information about how to configure the disk storage system.
See the Host Systems Attachment Guide for your disk storage system for information on working around Novell LUN limitations.

Configuring fibre-channel adapters
You must configure the fibre-channel adapters and the adapters' drivers that are attached to your NetWare host system before you install SDD. Follow the adapter-specific configuration instructions to configure the adapters.
For QLogic adapters, you need to add /LUNS, /ALLPATHS, /PORTNAMES while loading the FC HBA device driver. For example:

LOAD QL2200.HAM SLOT=x /LUNS /ALLPATHS /PORTNAMES /GNNFT
LOAD QL2200.HAM SLOT=y /LUNS /ALLPATHS /PORTNAMES /GNNFT
Modify the startup.ncf file by adding SET MULTI-PATH SUPPORT=OFF at the top. Then, modify the autoexec.ncf by adding SCAN ALL LUNS before MOUNT ALL:
...
SCAN ALL LUNS
MOUNT ALL
...
Ensure that you can see all the LUNs before installing SDD. Use the list storage adapters command to verify that all the LUNs are available.
See the IBM TotalStorage Enterprise Storage Server: Host Systems Attachment Guide for more information about how to install and configure fibre-channel adapters for your NetWare host system.
See the IBM TotalStorage Enterprise Storage Server: Host Systems Attachment Guide for information about working around NetWare LUN limitations.

Configuring SCSI adapters
Before you install and use SDD, you must configure your SCSI adapters. For Adaptec AHA2944 adapters, add LUN_ENABLE=FFFF in startup.ncf:
LOAD AHA2940.HAM slot=x LUN_ENABLE=FFFF
LOAD AHA2940.HAM slot=y LUN_ENABLE=FFFF
See the IBM TotalStorage Enterprise Storage Server: Host Systems Attachment Guide for working around NetWare LUN limitations.

Using a NetWare Compaq Server
When SDD is installed on a Compaq server running Novell NetWare, SDD may not failover as designed. Volume dismounts, hangs, or abnormal ends can result. Compaq servers running Novell NetWare can be configured to load the Compaq-specific CPQSHD.CDM driver. This driver has different behavior than the standard Novell SCSIHD.CDM driver. The CPQSHD.CDM driver will often do a re-scan after a path is lost. This re-scan can potentially cause volumes to be dismounted, and hangs or abnormal ends can result.
To ensure that SDD failover functions as designed and to prevent potential volume dismounts, hangs, or abnormal ends, do not load the CPQSHD.CDM file at startup. The standard Novell SCSIHD.CDM driver must be loaded in the place of the Compaq CPQSHD.CDM driver. Remove the reference to this file from the STARTUP.NCF file or comment out the line which loads CPQSHD.CDM. The STARTUP.NCF file should look similar to the following example in order for SDD to failover as designed on a Novell NetWare Compaq server:

SET MULTI-PATH SUPPORT = OFF
...
#LOAD CPQSHD.CDM
...
LOAD SCSIHD.CDM
...
LOAD QL2300.HAM SLOT=5 /LUNS /ALLPATHS /PORTNAMES /GNNFT
LOAD QL2300.HAM SLOT=6 /LUNS /ALLPATHS /PORTNAMES /GNNFT
Using SCSIHD.CDM rather than CPQSHD.CDM will not cause any problems when running SDD on a Novell NetWare Compaq server.

Installing SDD
This section describes how to install SDD from CD-ROM and downloaded code.

Installing SDD from CD-ROM
The installation CD-ROM contains the following files:
v INSTALL.NLM, main body of the installer that contains the startup program
v SDD.CDM, a device driver
v DATAPATH.NLM, datapath command
v COPY.INS, the file copy destination
v STARTUP.INS, the STARTUP update
v INFO.INS, contains messages displayed at installation
v AUTOEXEC.INS, unused
To install the SDD:
1. Insert the SDD installation media into the CD-ROM drive.
2. Enter load XXX :\path \install, where XXX is the name of the CD volume mounted, in the NetWare console window to invoke INSTALL.NLM. This file starts the installation, copies SDD.CDM to a target directory, and updates the startup file.

Installing SDD from downloaded code
Use this procedure to install SDD from downloaded code:
1. Download the NetWare SDD package from the web as a zipped file.
2. Unzip the package and copy the files to the desired location.
3. Enter load XXX :\path \install, where XXX is the location for the SDD installation files, in the NetWare console window to invoke INSTALL.NLM. This file starts the installation, copies SDD.CDM to a target directory, and updates the startup file.

Configuring SDD
To load the SDD module, enter load SDD. To unload the SDD module, enter unload SDD.
Note: During the initial load of SDD, NetWare SDD takes over control of each LUN (path) from other device drivers in order to virtualize all the underlying paths (LUNs). Messages like the following are issued during the initial load:

Device "{V597-A2-D1:2} IBM 2105F20 REV:.105" deactivated by driver due to device failure.
These messages are expected and are not cause for concern.

Maximum number of LUNs
SDD supports a total of less than 600 devices. The total devices supported equals the number of LUNs multiplied by the number of paths per LUN.

Displaying the current version of the SDD
Enter modules SDD to display the current version of the SDD.

Features
SDD provides the following functions:
v Automatic path detection, failover and selection
v Manual operations (datapath command)
v Path selection algorithms
v Dynamic load balancing
v Disk storage system logical unit detection
v Error reporting and logging
v SDD in NetWare-layered architecture

Automatic path detection, failover and selection
The SDD failover-protection system is designed to minimize any disruptions in I/O operations from a failing datapath. When a path failure is detected, the SDD moves the I/O access to another available path in order to keep the data flow.
The SDD has the following path states:
v OPEN state
v CLOSE (Error) state
v DEAD state
v INVALID (PERMANENTLY DEAD) state
The OPEN state indicates that a path is available. This is the initial path state after the system starts. When a path failure occurs in the OPEN state, the path is put into the CLOSE (Error) state. If the SDD recovers the path, the path is put back into the OPEN state. While path recovery is in progress, the path is temporarily changed to the OPEN state.
If a path failure occurs three consecutive times in the CLOSE (Error) state, the path is put into the DEAD state in multipath mode. In the single-path mode, it stays in the CLOSE state. However, if the path is recovered, it is put back into the OPEN state. While path reclamation is in progress, the path is temporarily changed to the OPEN state. The path is put into the INVALID state and is placed offline if path reclamation fails. Only a datapath command, datapath set adapter <n> online or datapath set device <n> path <m> online, can return the path to the OPEN state.
In the event that all the paths fail, all the paths except one are moved into the DEAD state. The one path will still be in OPEN state. This indicates that further access to the LUNs is still accepted.
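The path-state rules described above can be summarized in a small sketch. This is our own illustration of the stated transitions (OPEN to CLOSE (Error) on failure, CLOSE to DEAD after three consecutive failures in multipath mode, DEAD to INVALID when reclamation fails); the class and method names are ours, not SDD internals:

```python
# Illustrative sketch of the SDD path-state transitions described in the text.
OPEN, CLOSE, DEAD, INVALID = "OPEN", "CLOSE (Error)", "DEAD", "INVALID"

class Path:
    def __init__(self):
        self.state = OPEN          # initial state after the system starts
        self.close_failures = 0    # consecutive failures while in CLOSE (Error)

    def fail(self, multipath=True):
        if self.state == OPEN:
            self.state = CLOSE
            self.close_failures = 1
        elif self.state == CLOSE:
            self.close_failures += 1
            # Three consecutive failures move the path to DEAD (multipath mode);
            # in single-path mode the path stays in CLOSE (Error).
            if multipath and self.close_failures >= 3:
                self.state = DEAD

    def recover(self):
        # A successful path recovery or reclamation returns the path to OPEN.
        if self.state in (CLOSE, DEAD):
            self.state = OPEN
            self.close_failures = 0

    def reclamation_failed(self):
        # A DEAD path whose reclamation fails is placed offline as INVALID;
        # only a manual datapath command can bring it back.
        if self.state == DEAD:
            self.state = INVALID

p = Path()
p.fail(); p.fail(); p.fail()
print(p.state)  # DEAD after three consecutive CLOSE (Error) failures
```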

At each access, all paths are attempted until at least one of them is recovered. The error count is incremented only for the path in the OPEN state while all other paths are failed.

Manual operations using the datapath commands
The datapath commands allow manual path selection using a command line interface. See Chapter 13, “Using the datapath commands,” on page 421 for detailed information about the commands.
SDD in the Novell NetWare environment supports the datapath set device policy command, which has the following options:
v rr, where rr indicates round robin
v lb, where lb indicates load balancing
v df, where df indicates the default policy, which is round robin
v fo, where fo indicates failover policy
Note: The rr, lb, and df options currently have the same effect.
The path-selection policy algorithms are:
round robin (rr)
The path to use for each I/O operation is chosen at random from paths that were not used for the last I/O operation. If a device has only two paths, SDD alternates between the two.
failover only (fo)
All I/O operations for the device are sent to the same (preferred) path until the path fails because of I/O errors. Then an alternate path is chosen for subsequent I/O operations.
load balancing (lb)
The path to use for an I/O operation is chosen by estimating the load on the adapter to which each path is attached. The load is a function of the number of I/O operations currently in process. If multiple paths have the same load, a path is chosen at random from those paths.
default
This is the round-robin-path operation and is the default value.
The datapath open device path command, which is supported on other platforms, is not supported in NetWare because it is not possible to open a device that failed to configure in NetWare. NetWare does support the scan command, which scans the devices connected to the server. In case a device is detected, a message is sent to the SDD, and the SDD updates the path configuration based on the message. SDD checks to see if the new device is a LUN under the disk storage system and, if so, adds it to the path group. You can also use the scan all command to put a new path under SDD control; scan all refreshes the device table and sends a message to the SDD in case a new device is found. Therefore, you should issue the scan all command manually instead of the addpath command used on other platforms. See Chapter 13, “Using the datapath commands,” on page 421 for more information about the datapath commands.
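The three selection rules described above can be sketched as follows. This is an illustration of the stated policies using hypothetical path objects, not SDD's implementation:

```python
import random

# Hypothetical path object; the fields are ours, chosen to mirror the text.
class Path:
    def __init__(self, name):
        self.name = name
        self.inflight = 0      # I/O operations currently in process (the "load")
        self.failed = False

def round_robin(paths, last_used):
    # Choose at random from paths that were not used for the last I/O;
    # with exactly two paths this alternates between them.
    candidates = [p for p in paths if not p.failed and p is not last_used]
    return random.choice(candidates) if candidates else last_used

def load_balancing(paths):
    # Estimate load as the number of in-process I/Os; ties broken at random.
    working = [p for p in paths if not p.failed]
    lowest = min(p.inflight for p in working)
    return random.choice([p for p in working if p.inflight == lowest])

def failover_only(paths, preferred):
    # Keep using the preferred path until it fails, then pick an alternate.
    if not preferred.failed:
        return preferred
    return next(p for p in paths if not p.failed)

a, b = Path("path0"), Path("path1")
a.inflight = 5
print(load_balancing([a, b]).name)  # path1 (lower load)
```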

Understanding SDD error recovery algorithms
SDD assumes the following two operation modes:
v Single-path mode
v Multiple-path mode

Single-path mode
In single-path mode, only a single path is available in access to a device in a subsystem. The SDD never puts this path into the DEAD state.

Multiple-path mode
In this mode, two or more paths are available in access to a device in a subsystem. SDD has the following behavior concerning path operations:
v After a path failure occurs on a path, SDD attempts to use the path again after 2 000 successful I/O operations through another operational path or paths. This process is called Path Recovery.
v If the consecutive error count on the path reaches three, SDD puts the path into the DEAD state.
v SDD reverts the failed path from the DEAD state to the OPEN state after 50 000 successful I/O operations through an operational path or paths. This process is called Path Reclamation.
v If an access fails through the path that has been returned to the OPEN state, SDD puts the path into the INVALID (PERMANENTLY DEAD) state and then never attempts the path again. Only a manual operation using a datapath command can reset a path from the PERMANENTLY DEAD state to the OPEN state.
v All knowledge of prior path failures is reset when a path returns to the OPEN state.
v SDD never puts the last operational path into the DEAD state. If the last operational path fails, SDD attempts a previously-failed path or paths even though that path (or paths) is in PERMANENTLY DEAD state.
v If all the available paths failed, SDD reports an I/O error to the application.
v If the path is recovered as either a path recovery operation or a path reclamation operation, the path is then handled as a normal path in the OPEN state and the SDD stops keeping a history of the failed path.
Note: You can display the error count with the datapath command.

Dynamic load balancing
SDD distributes the I/O accesses over multiple active paths, eliminating data path bottlenecks.

Disk storage system logical unit detection
SDD works only with disk storage system logical units. SDD assumes that all logical units have 2105 as their first four characters in the Product ID in Inquiry Data. The Product ID indicates that it is a logical unit. The SDD also assumes that all logical units return unique serial numbers regardless of a port on the disk storage system.

Error reporting and logging
All error reports generated by SDD are logged in a NetWare standard log file, SYS:\SYSTEM\SYS$LOG.ERR. Any path state transition is logged in the log file. The log has the following information:
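The thresholds in the list above (three consecutive errors to DEAD, a retry after 2 000 good I/Os in Path Recovery, a revert from DEAD after 50 000 good I/Os in Path Reclamation) can be sketched as a simple counter check. This is an illustration of the stated rules only, not SDD code:

```python
# Illustrative counter logic for the retry thresholds stated in the text.
PATH_RECOVERY_INTERVAL = 2_000      # good I/Os before retrying a failed path
PATH_RECLAMATION_INTERVAL = 50_000  # good I/Os before retrying a DEAD path
DEAD_THRESHOLD = 3                  # consecutive errors that mark a path DEAD

def should_retry(state: str, good_ios_since_failure: int) -> bool:
    """Decide whether a failed path is due for another attempt."""
    if state == "CLOSE":   # Path Recovery
        return good_ios_since_failure >= PATH_RECOVERY_INTERVAL
    if state == "DEAD":    # Path Reclamation
        return good_ios_since_failure >= PATH_RECLAMATION_INTERVAL
    return False           # INVALID paths are only reopened manually

print(should_retry("CLOSE", 2_000))   # True
print(should_retry("DEAD", 10_000))   # False
```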

v Event source name
v Time stamp
v Severity
v Event number
v Event description
v SCSI sense data (in case it is valid)
Note: A failure in Path Recovery or Path Reclamation is not logged, while a successful path recovery in Path Recovery or Path Reclamation is logged.

SDD in NetWare-layered architecture
All path-management features are implemented in an SDD-unique Custom Device Module (CDM), which is called SDD.CDM. The SDD.CDM has all functions that the standard CDM has in addition to the disk storage system-specific path management features. The SDD.CDM assumes that it will be working with a standard Host Adapter Module (HAM). NetWare has assigned the SDD CDM module ID 0x7B0. It supports LUNs under disk storage systems only. Any other LUNs are supported by a NetWare standard CDM, SCSIHD.CDM.

Display a single device for a multipath device
Starting with SDD version 1.00i, the system will display a single device for a multipath device. However, datapath query device will show all the paths for each device. For example, on a system with two LUNs with each having two paths, with older versions of SDD, the following output would be displayed for the list storage adapters command:
[V597-A3] QL2300 PCI FC-AL Host Adapter Module
[V597-A3-D0:0] IBM 2105800 rev:.324
[V597-A3-D0:1] IBM 2105800 rev:.324
[V597-A4] QL2300 PCI FC-AL Host Adapter Module
[V597-A4-D0:0] IBM 2105800 rev:.324
[V597-A4-D0:1] IBM 2105800 rev:.324
Starting with SDD version 1.00i, the list storage adapters displays:
[V597-A3] QL2300 PCI FC-AL Host Adapter Module
[V597-A3-D0:0] IBM 2105800 rev:.324
[V597-A3-D0:1] IBM 2105800 rev:.324
[V597-A4] QL2300 PCI FC-AL Host Adapter Module
The datapath query device output will be same in both the cases.

Removing the SDD
Perform the following steps to remove the SDD:
1. Manually remove files from the C:\NWSERVER directory.
2. Remove SDD-related entries in startup.ncf.

Cluster setup for Novell NetWare 5.1
To set up clustering in Novell NetWare 5.1, follow the steps described in the Novell Cluster Services document available online at:
www.novell.com/documentation/lg/ncs/index.html

Cluster setup for Novell NetWare 6.0
To set up clustering in Novell NetWare 6.0, follow the steps described in the Novell Cluster Services document available online at:
www.novell.com/documentation/lg/ncs6p/index.html

Examples of commands output on the Console Window
The following examples show the basic commands output during path failover and failback. The examples are from NetWare 6.0 SP2.

END:modules sdd
SDD.CDM
Loaded from [C:\NWSERVER\] (Address Space = OS)
IBM Enterprise Storage Server SDD CDM Version 1.00.07 July 17, 2003
(C) Copyright IBM Corp. 2002 Licensed Materials - Property of IBM
END:datapath query device
Total Devices : 2
DEV#: 3A DEVICE NAME: 0x003A:[V596-A4-D1:0] TYPE: 2105E20 SERIAL: 30812028 POLICY: Round Robin
Path# Device State Mode Select Errors
0 0x003A:[V596-A4-D1:0] OPEN NORMAL 14 0
1 0x007A:[V596-A3-D1:0] OPEN NORMAL 14 0
2 0x001A:[V596-A4-D0:0] OPEN NORMAL 14 0
3 0x005A:[V596-A3-D0:0] OPEN NORMAL 14 0
DEV#: 3B DEVICE NAME: 0x003B:[V596-A4-D1:1] TYPE: 2105E20 SERIAL: 01312028 POLICY: Round Robin
Path# Device State Mode Select Errors
0 0x003B:[V596-A4-D1:1] OPEN NORMAL 1 0
1 0x007B:[V596-A3-D1:1] OPEN NORMAL 1 0
2 0x001B:[V596-A4-D0:1] OPEN NORMAL 1 0
3 0x005B:[V596-A3-D0:1] OPEN NORMAL 0 0
END:datapath query adapter
Active Adapters :2
Adpt# Adapter Name State Mode Select Errors Paths Active
0 [V596-A4] NORMAL ACTIVE 30 0 4 4
1 [V596-A3] NORMAL ACTIVE 30 0 4 4
(Creating volume tempvol on DEV#3A through ConsoleOne, mount tempvol)
END:mount tempvol
Activating volume "TEMPVOL"
** Volume layout v35.00
** Volume creation layout v35.00
** Processing volume purge log
Volume TEMPVOL set to the ACTIVATE state.
Mounting Volume TEMPVOL
** TEMPVOL mounted successfully
END:volumes
Mounted Volumes Name Spaces Flags
SYS DOS, LONG Cp Sa
_ADMIN DOS, MAC, NFS, LONG NSS P
TEMPVOL DOS, MAC, NFS, LONG NSS
3 volumes mounted
(start IO)
END:datapath query device

Total Devices : 2
DEV#: 3A DEVICE NAME: 0x003A:[V596-A4-D1:0] TYPE: 2105E20 SERIAL: 30812028 POLICY: Round Robin
Path# Device State Mode Select Errors
0 0x003A:[V596-A4-D1:0] OPEN NORMAL 224 0
1 0x007A:[V596-A3-D1:0] OPEN NORMAL 224 0
2 0x001A:[V596-A4-D0:0] OPEN NORMAL 224 0
3 0x005A:[V596-A3-D0:0] OPEN NORMAL 224 0
DEV#: 3B DEVICE NAME: 0x003B:[V596-A4-D1:1] TYPE: 2105E20 SERIAL: 01312028 POLICY: Round Robin
Path# Device State Mode Select Errors
0 0x003B:[V596-A4-D1:1] OPEN NORMAL 1 0
1 0x007B:[V596-A3-D1:1] OPEN NORMAL 1 0
2 0x001B:[V596-A4-D0:1] OPEN NORMAL 1 0
3 0x005B:[V596-A3-D0:1] OPEN NORMAL 1 0
END:datapath query adapter
Active Adapters :2
Adpt# Adapter Name State Mode Select Errors Paths Active
0 [V596-A4] NORMAL ACTIVE 795 0 4 4
1 [V596-A3] NORMAL ACTIVE 794 0 4 4
(Pull one of the cables)
Error has occured on device 0x3A path 2 (Adapter Error Code: 0x8007, Device Error Code: 0x0000) [No sense data]
This path is in CLOSE state.
Error has occured on device 0x3A path 0 (Adapter Error Code: 0x8007, Device Error Code: 0x0000) [No sense data]
This path is in CLOSE state.
Path Recovery (1) has failed on device 0x3A path 2 (Adapter Error Code: 0x8007, Device Error Code: 0x0000) [No sense data]
This path is in CLOSE state.
Path Recovery (1) has failed on device 0x3A path 0 (Adapter Error Code: 0x8007, Device Error Code: 0x0000) [No sense data]
This path is in CLOSE state.
END:datapath query device
Total Devices : 2
DEV#: 3A DEVICE NAME: 0x003A:[V596-A4-D1:0] TYPE: 2105E20 SERIAL: 30812028 POLICY: Round Robin
Path# Device State Mode Select Errors
0 0x003A:[V596-A4-D1:0] CLOSE NORMAL 418 2
1 0x007A:[V596-A3-D1:0] OPEN NORMAL 740 0
2 0x001A:[V596-A4-D0:0] CLOSE NORMAL 418 2
3 0x005A:[V596-A3-D0:0] OPEN NORMAL 739 0
DEV#: 3B DEVICE NAME: 0x003B:[V596-A4-D1:1] TYPE: 2105E20 SERIAL: 01312028 POLICY: Round Robin
Path# Device State Mode Select Errors
0 0x003B:[V596-A4-D1:1] OPEN NORMAL 1 0
1 0x007B:[V596-A3-D1:1] OPEN NORMAL 1 0
2 0x001B:[V596-A4-D0:1] OPEN NORMAL 1 0
3 0x005B:[V596-A3-D0:1] OPEN NORMAL 1 0
END:datapath query adapter
Active Adapters :2
Adpt# Adapter Name State Mode Select Errors Paths Active
0 [V596-A4] DEGRAD ACTIVE 901 5 4 2
1 [V596-A3] NORMAL ACTIVE 1510 0 4 4
(If reconnect cable and issue manual online command)
END:datapath set adapter 0 online
datapath set adapter command has been issued for adapter 4(Adpt# 0).
This adapter is in NORMAL state. device 0x58 path 0 is in OPEN state. device 0x59 path 0 is in OPEN state.
datapath set adapter command has been issued for adapter 4(Adpt# 2).
This adapter is in NORMAL state.

Block 268680(file block 63)(ZID 3779) 318 Multipath Subsystem Device Driver User's Guide . path will be set to DEAD) Path Recovery (3) has failed on device 0x3A path 2 (Adapter Error Code: 0x8007. Device Error Code: 0x0000) [No sense data] This path is in DEAD state. Path Recovery (3) has failed on device 0x3A path 0 (Adapter Error Code: 0x8007.device 0x59 path 2 is in OPEN state. volume will be deactivated. device 0x58 path 2 is in OPEN state. Path Reclamation has failed on device 0x3A path 0 (Adapter Error Code: 0x8007.c[1912]). path will be set to INVALID if path reclamation fails) Path Reclamation has failed on device 0x3A path 2 (Adapter Error Code: 0x8007. Device Error Code: 0x0000) [No sense data] This path is in INVALID state. Device Error Code: 0x0000) [No sense data] This path is in INVALID state. This path is in OPEN state. (If cable is not reconnected. END:datapath query device Total Devices : 2 DEV#: 3A DEVICE NAME: 0x003A:[V596-A4-D1:0] TYPE: 2105E20 SERIAL: 30812028 POLICY: Round Robin Path# Device State Mode Select Errors 0 0x003A:[V596-A4-D1:0] INVALID NORMAL 1418 8 1 0x007A:[V596-A3-D1:0] OPEN NORMAL 54740 0 2 0x001A:[V596-A4-D0:0] INVALID NORMAL 1418 8 3 0x005A:[V596-A3-D0:0] OPEN NORMAL 54739 0 DEV#: 3B DEVICE NAME: 0x003B:[V596-A4-D1:1] TYPE: 2105E20 SERIAL: 01312028 POLICY: Round Robin Path# Device State Mode Select Errors 0 0x003B:[V596-A4-D1:1] OPEN NORMAL 1 0 1 0x007B:[V596-A3-D1:1] OPEN NORMAL 1 0 2 0x001B:[V596-A4-D0:1] OPEN NORMAL 1 0 3 0x005B:[V596-A3-D0:1] OPEN NORMAL 1 0 (If pull both cable. This path is in OPEN state. paths will be set to INVALID except one path left OPEN) Aug 8.c[7478] Volume TEMPVOL: User data I/O error 20204(zio. IO stops.02-xxxx: comnVol. 
END:datapath query device Total Devices : 2 DEV#: 3A DEVICE NAME: 0x003A:[V596-A4-D1:0] TYPE: 2105E20 SERIAL: 30812028 POLICY: Round Robin Path# Device State Mode Select Errors 0 0x003A:[V596-A4-D1:0] DEAD NORMAL 1418 7 1 0x007A:[V596-A3-D1:0] OPEN NORMAL 4740 0 2 0x001A:[V596-A4-D0:0] DEAD NORMAL 1418 7 3 0x005A:[V596-A3-D0:0] OPEN NORMAL 4739 0 DEV#: 3B DEVICE NAME: 0x003B:[V596-A4-D1:1] TYPE: 2105E20 SERIAL: 01312028 POLICY: Round Robin Path# Device State Mode Select Errors 0 0x003B:[V596-A4-D1:1] OPEN NORMAL 1 0 1 0x007B:[V596-A3-D1:1] OPEN NORMAL 1 0 2 0x001B:[V596-A4-D0:1] OPEN NORMAL 1 0 3 0x005B:[V596-A3-D0:1] OPEN NORMAL 1 0 (If cable is continually disconnected. Path Recovery (2) has succeeded on device 0x3A path 0. Success: set adapter 0 to online Adpt# Adapter Name State Mode Select Errors Paths Active 0 [V596-A4] NORMAL ACTIVE 2838 14 4 4 (If reconnect cable and let SDD do path recovery itself) Path Recovery (2) has succeeded on device 0x3A path 2. Device Error Code: 0x0000) [No sense data] This path is in DEAD state. after 3 retries. 2003 3:05:05 am NSS <comn>-3.

02-xxxx: comnVol.c[1912]).c[1890]).. the pool may need to be activated using the command nss /poolactivate=poolname.c[7478] Block 268685(file block 68)(ZID 3779) . 2 Undo(s).c[1912]). 2003 3:05:08 am NSS<COMN>-3. Aug 8. pool activation is need) END:mount tempvol Volume TEMPVOL could NOT be mounted. 2 Logical Undo(s) ** System verification completed ** Loading system objects ** Processing volume purge log ** .. Block 268681(file block 64)(ZID 3779) Deactivating pool "TEMPPOOL"..c[2516] Pool TEMPPOOL: System data I/O error 20204(zio. ** Pool layout v40..c[1912]).Volume TEMPVOL: User data I/O error 20204(zio.. 2003 3:05:06 am NSS<COMN>-3.c[7478] Volume TEMPVOL: User data I/O error 20204(zio... 2003 3:05:06 am NSS<COMN>-3.02-xxxx: comnVol.c[7478] Volume TEMPVOL: User data I/O error 20204(zio.02-xxxx: comnVol. volume tempvol could not be mounted. Block 610296(file block 10621)(ZID 3) Dismounting Volume TEMPVOL The share point "TEMPVOL" has been deactivated due to dismount of volume TEMPVOL .02-xxxx: comnPool. Some or all volumes segments cannot be located.. Using the SDD on a NetWare host system 319 . Block 268683(file block 66)(ZID 3779) Aug 8. Pool TEMPPOOL set to the ACTIVATE state..07 ** Processing journal ** 1 uncommitted transaction(s) ** 1839 Redo(s).02-xxxx: comnVol.c[7478] Block 268684(file block 67)(ZID 3779) Aug 8. Aug 8.. 2003 3:05:08 am NSS<COMN>-3. Block 268682(file block 65)(ZID 3779) Aug 8. END:nss /poolactivate=temppool Activating pool "TEMPPOOL". 
END:datapath query device Total Devices : 2 DEV#: 3A DEVICE NAME: 0x003A:[V596-A4-D1:0] TYPE: 2105E20 SERIAL: 30812028 POLICY: Round Robin Path# Device State Mode Select Errors 0 0x003A:[V596-A4-D1:0] OPEN NORMAL 2249 3064 1 0x007A:[V596-A3-D1:0] INVALID OFFLINE 12637 1 2 0x001A:[V596-A4-D0:0] INVALID OFFLINE 2248 16 3 0x005A:[V596-A3-D0:0] INVALID OFFLINE 12637 4 DEV#: 3B DEVICE NAME: 0x003B:[V596-A4-D1:1] TYPE: 2105E20 SERIAL: 01312028 POLICY: Round Robin Path# Device State Mode Select Errors 0 0x003B:[V596-A4-D1:1] OPEN NORMAL 1 0 1 0x007B:[V596-A3-D1:1] OPEN NORMAL 1 0 2 0x001B:[V596-A4-D0:1] OPEN NORMAL 1 0 3 0x005B:[V596-A3-D0:1] OPEN NORMAL 1 0 END:datapath query adapter Active Adapters :2 Adpt# Adapter Name State Mode Select Errors Paths Active 0 [V596-A4] DEGRAD ACTIVE 4499 3080 4 2 1 [V596-A3] DEGRAD ACTIVE 25276 5 4 2 (After reconnect both cables.. If this is an NSS volume. ** Processing pool purge log ** .. END:mount tempvol Activating volume "TEMPVOL" Chapter 6. issue manual online command) END:datapath set adapter 0 online Success: set adapter 0 to online Adpt# Adapter Name State Mode Select Errors Paths Active 0 [V596-A4] NORMAL ACTIVE 4499 3080 4 4 END:datapath set adapter 1 online Success: set adapter 1 to online Adpt# Adapter Name State Mode Select Errors Paths Active 1 [V596-A3] NORMAL ACTIVE 25276 5 4 4 END:datapath query adapter Active Adapters :2 Adpt# Adapter Name State Mode Select Errors Paths Active 0 [V596-A4] NORMAL ACTIVE 4499 3080 4 4 1 [V596-A3] NORMAL ACTIVE 25276 5 4 4 (At this time.. 2003 3:05:07 am NSS<COMN>-3.. Loading volume "TEMPVOL" Volume TEMPVOL set to the DEACTIVATE state.

MAC. Volume TEMPVOL set to the ACTIVATE state.00 ** Volume creation layout v35. MAC. NFS.** Volume layout v35. Mounting Volume TEMPVOL ** TEMPVOL mounted successfully END:volumes Mounted Volumes Name Spaces Flags SYS DOS. LONG NSS P TEMPVOL DOS.00 ** Processing volume purge log ** . LONG Cp Sa _ADMIN DOS. NFS. LONG NSS 3 volumes mounted 320 Multipath Subsystem Device Driver User's Guide .

Chapter 7. Using the SDD on a Solaris host system

This chapter provides step-by-step procedures on how to install, configure, remove, and use SDD on a Solaris host system that is attached to supported storage devices. For updated and additional information not included in this manual, see the Readme file on the CD-ROM or visit the following SDD Web site:
www.ibm.com/servers/storage/support/software/sdd

Verifying the hardware and software requirements
You must install the following hardware and software components to ensure that SDD installs and operates successfully.

Hardware
The following hardware components are needed:
v One or more supported storage devices.
v For parallel SCSI access to ESS, one or more SCSI host adapters.
v One or more fibre-channel host adapters. In case of a single fibre-channel adapter, it must connect through a switch to multiple disk storage system ports.
v To install SDD and use the input/output (I/O) load-balancing and failover features, you need a minimum of two SCSI (ESS only) or fibre-channel host adapters if you are attaching to a disk storage system. To install SDD and use the input-output (I/O) load-balancing and failover features, you need a minimum of two fibre-channel host adapters if you are attaching to a virtualization product. SDD requires enabling the host-adapter persistent binding feature to have the same system device names for the same LUNs.
v Subsystem LUNs that are created and confirmed for multiport access. Each LUN should have up to eight disk instances, with one for each path on the server.
v A SCSI cable to connect each SCSI host adapter to a storage system control-unit image port
v A fiber-optic cable to connect each fibre-channel adapter to a disk storage system controller port or a fibre-channel switch connected with disk storage system or virtualization product port.

Software
SDD supports the following software components:
v ESS on a SPARC system running 32-bit Solaris 7/8/9 or 64-bit Solaris 7/8/9/10
v DS8000 on a SPARC system running 32-bit Solaris 8/9 or 64-bit Solaris 8/9/10
v DS8000 on an X64 machine running 64-bit Solaris 10
v DS6000 on a SPARC system running 32-bit Solaris 8/9 or 64-bit Solaris 8/9/10
v SAN Volume Controller on a SPARC system running 64-bit Solaris 8/9
SDD does not support the following software:
v Applications that issue SCSI 2 Reservation to storage

Supported environments
SDD supports 32-bit applications on 32-bit Solaris 7/8/9. SDD supports both 32-bit and 64-bit applications on 64-bit Solaris 7/8/9/10.

Unsupported environments
SDD does not support the following environments:
v A host system with both a SCSI and fibre-channel connection to a shared LUN
v A system start from an SDD pseudo device
v A system paging file on an SDD pseudo device
v Root (/), /var, /usr, /opt, /tmp and swap partitions on an SDD pseudo device
v Single-path mode during concurrent download of licensed machine code nor during any disk storage system concurrent maintenance that impacts the path attachment, such as a disk storage system host-bay-adapter replacement
v Single-path configuration for Fibre Channel
v DS8000 and DS6000 do not support SCSI connectivity

Understanding how SDD works on a Solaris host system
SDD resides above the Solaris SCSI disk driver (sd) in the protocol stack. For more information about how SDD works, see “The SDD architecture” on page 2. Solaris SDD supports up to 600 physical LUNs from different storage systems on a single Solaris host. Solaris SDD supports up to 32 paths per LUN.

Preparing for SDD installation
Before you install SDD, you must first configure the disk storage systems or virtualization products to your host system.

Configuring disk storage systems
SDD requires a minimum of two independent paths that share the same logical unit to use the load-balancing and path-failover-protection features. With a single path, failover protection is not provided. For information about how to configure your disk storage system, see the IBM TotalStorage Enterprise Storage Server: Introduction and Planning Guide.

Configuring virtualization products
Before you install SDD, configure your virtualization product and fibre-channel switches to assign LUNs to the system with multipath access. SDD requires a minimum of two independent paths that share the same logical unit to use the load-balancing and path-failover-protection features. For information about configuring your SAN Volume Controller, see the IBM System Storage SAN Volume Controller Software Installation and Configuration Guide.

Determining if the SDD server for Expert is installed
If you previously installed the SDD server (the stand-alone version) for IBM TotalStorage Expert V2R1 (ESS Expert) on your Solaris host system, you must remove this stand-alone version of the SDD server before you proceed with SDD installation.

The installation package for SDD 1.3.0 (or later) includes the SDD server daemon (also referred to as sddsrv), which incorporates the functionality of the stand-alone version of the SDD server (for ESS Expert).

To determine if the stand-alone version of the SDD server is installed on your host system, enter:

pkginfo -i SDDsrv

If you previously installed the stand-alone version of the SDD server, the output from the pkginfo -i SDDsrv command looks similar to the following output:

application SDDsrv SDDsrv bb-bit Version: 1.0.0.0 Nov-14-2001 15:34

Note:
- The installation package for the stand-alone version of the SDD server (for ESS Expert) is SDDsrvSUNbb_yymmdd.pkg. In this package name, bb represents 32 or 64 bit, and yymmdd represents the date of the installation package. For ESS Expert V2R1, the stand-alone SDD server installation package is SDDsrvSun32_020115.pkg for a 32-bit environment and SDDsrvSun64_020115.pkg for a 64-bit environment.
- For instructions on how to remove the stand-alone version of the SDD server (for ESS Expert) from your Solaris host system, see the IBM Subsystem Device Driver Server (sddsrv) README for IBM TotalStorage Expert V2R1 at the following Web site: www.ibm.com/servers/storage/support/software/swexpert/

For more information about the SDD server daemon, go to "SDD server daemon" on page 337.

Planning for installation

Before you install SDD on your Solaris host system, you need to understand what kind of software runs on it. The way that you install SDD depends on the kind of software that you are running. Three types of software communicate directly to raw or block disk-device interfaces such as sd and SDD:
- UNIX file systems, where no logical volume manager (LVM) is present.
- LVMs, such as Sun Solstice Disk Suite. LVMs allow the system manager to logically integrate, for example, several different physical volumes to create the image of a single large volume.
- Major application packages, such as certain database managers (DBMSs).

You can install SDD in four different ways. The way that you choose depends on the kind of software that you have installed. Table 22 on page 324 further describes the various installation scenarios and how you should proceed.
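As a quick illustration of the check above, the following shell sketch decides whether the stand-alone SDDsrv package must be removed before installing SDD. The pkginfo output is simulated with the example shown in this section; on a real host you would capture the output of pkginfo -i SDDsrv instead.

```shell
# Sketch: detect the stand-alone SDD server (for ESS Expert) package.
# The sample output below stands in for a real `pkginfo -i SDDsrv` run.
pkginfo_output="application SDDsrv SDDsrv 32-bit Version: 1.0.0.0 Nov-14-2001 15:34"

needs_removal=no
case "$pkginfo_output" in
  *SDDsrv*) needs_removal=yes ;;   # stand-alone SDD server is present
esac
echo "stand-alone sddsrv installed: $needs_removal"
```

If the package is absent, pkginfo -i prints an error instead, and the case pattern does not match.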

For SDD to operate properly, you must ensure that the required Solaris patches are installed on your operating system. Go to the following Web site for the latest information about Solaris patches: http://sunsolve.sun.com/show.do?target=patchpage

Attention: Analyze and study your operating system and application environment to ensure that there are no conflicts with these patches prior to their installation.

For more information on the Solaris patches, see the IBM TotalStorage Enterprise Storage Server Host Systems Attachment Guide or the IBM System Storage SAN Volume Controller Host Systems Attachment Guide.

Table 22. SDD installation scenarios

Scenario 1
  - SDD is not installed.
  - No volume managers are installed.
  - No software application or DBMS is installed that communicates directly to the sd interface.
  How to proceed: 1. "Installing SDD" on page 325; 2. "Standard UNIX applications" on page 339

Scenario 2
  - SDD is not installed.
  - An existing volume manager, software application, or DBMS is installed that communicates directly to the sd interface.
  How to proceed: 1. "Installing SDD" on page 325; 2. "Using applications with SDD" on page 338

Scenario 3
  - SDD is installed.
  How to proceed: Go to "Upgrading the SDD" on page 330.

Scenario 4
  - Installing SDD CD-ROM in a Jumpstart environment.
  How to proceed: Go to "Installing SDD" on page 325.

Table 23 lists the installation package file names that come with SDD.

Table 23. Operating systems and SDD package file names

  Operating system       Package file name
  32-bit Solaris 7/8/9   sun32bit/IBMsdd
  64-bit Solaris 7/8/9   sun64bit/IBMsdd
  64-bit Solaris 10      solaris10/IBMsdd
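The mapping in Table 23 can be sketched as a small shell helper. The function name and argument convention are illustrative, not part of SDD; the package paths are the ones documented above.

```shell
# Sketch: map a Solaris release number and application bitness to the
# SDD package path from Table 23.
sdd_pkg_path() {
  release="$1"   # e.g. 7, 8, 9, or 10
  bits="$2"      # 32 or 64
  case "$release:$bits" in
    10:64)    echo "solaris10/IBMsdd" ;;
    [789]:64) echo "sun64bit/IBMsdd" ;;
    [789]:32) echo "sun32bit/IBMsdd" ;;
    *)        echo "unsupported"; return 1 ;;
  esac
}

sdd_pkg_path 9 64    # -> sun64bit/IBMsdd
```

On a live system, the bitness input would come from isainfo -kv, as noted in the installation steps that follow.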

Installing SDD

Before you install SDD, make sure that you have root access to your Solaris host system and that all the required hardware and software is ready. You can download the latest SDD package and readme from the SDD Web site:

www.ibm.com/servers/storage/support/software/sdd

Note: The SDD package name has changed from IBMdpo to IBMsdd for SDD 1.4.0.0 or later.

Installing SDD from CD-ROM

To install SDD on your Solaris host system from CD-ROM, perform the following steps:
1. Make sure that the SDD CD-ROM is available.
2. Insert the CD-ROM into your CD-ROM drive.
3. Change to the installation directory:
   # cd /cdrom/cdrom0/sun32bit
   or
   # cd /cdrom/cdrom0/sun64bit
4. Issue the pkgadd command and point the -d option of the pkgadd command to the directory that contains IBMsdd. For Solaris 10, include the -G option to prevent SDD from being installed in nonglobal zones. For example:
   pkgadd -d /cdrom/cdrom0/sun32bit IBMsdd
   or
   pkgadd -d /cdrom/cdrom0/sun64bit IBMsdd
   or
   pkgadd -G -d /cdrom/cdrom0/Solaris10 IBMsdd
   Note: If the OS is Solaris 8 or Solaris 9, you can check the OS bit-level that is running by issuing # isainfo -kv.
5. A message similar to the following message is displayed:

   Processing package instance <IBMsdd> from <var/spool/pkg>
   IBM SDD driver (sparc) 1
   ## Processing package information.
   ## Processing system information.
   ## Verifying disk space requirements.
   ## Checking for conflicts with packages already installed.
   ## Checking for setuid/setgid programs.
   This package contains scripts that run with super-user permission
   during the process of installing this package.
   Do you want to continue with the installation of <IBMsdd> [y,n,?]

6. Enter y and press Enter to proceed.
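For scripted installations, the branching in step 4 (adding -G only on Solaris 10) can be sketched as follows. This only constructs the command string; the function name is hypothetical and nothing is executed.

```shell
# Sketch: build the pkgadd invocation from step 4. The -G flag (prevent
# installation into nonglobal zones) is added only on Solaris 10, as the
# note above describes. Command construction only.
build_pkgadd_cmd() {
  osrel="$1"   # output of `uname -r`, e.g. 5.9 or 5.10
  dir="$2"     # e.g. /cdrom/cdrom0/sun64bit
  if [ "$osrel" = "5.10" ]; then
    echo "pkgadd -G -d $dir IBMsdd"
  else
    echo "pkgadd -d $dir IBMsdd"
  fi
}

build_pkgadd_cmd 5.10 /cdrom/cdrom0/Solaris10
```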

The pkgadd command then installs the package and displays output similar to the following:

Installing IBM sdd driver as <IBMsdd>
## Installing part 1 of 1.
/etc/defvpath
/etc/rcS.d/S20vpath-config
/etc/sample_sddsrv.conf
/kernel/drv/sparcv9/vpathdd
/kernel/drv/vpathdd.conf
/opt/IBMsdd/bin/cfgvpath
/opt/IBMsdd/bin/datapath
/opt/IBMsdd/bin/defvpath
/opt/IBMsdd/bin/get_root_disk
/opt/IBMsdd/bin/pathtest
/opt/IBMsdd/bin/rmvpath
/opt/IBMsdd/bin/setlicense
/opt/IBMsdd/bin/showvpath
/opt/IBMsdd/bin/vpathmkdev
/opt/IBMsdd/devlink.vpath.tab
/opt/IBMsdd/etc.profile
/opt/IBMsdd/etc.system
/opt/IBMsdd/vpath.msg
/opt/IBMsdd/vpathexcl.cfg
/sbin/sddsrv
/usr/sbin/vpathmkdev
[ verifying class ]
## Executing postinstall script.
Vpath: Configuring 24 devices (3 disks * 8 slices)
Installation of <IBMsdd> was successful.

If other installable packages are present, the pkgadd command might first list the available packages and prompt for a selection, for example:

The following packages are available:
1 IBMcli  ibm2105cli (sparc) 1.1.0.0
2 IBMsdd  IBM SDD driver Version: May-10-2000 16:51 (sparc) 1
Select package(s) you wish to process (or 'all' to process all packages). (default: all) [?,??,q]:

7. If the SDD installation package determines that the system requires reboot, a message similar to the following message will be displayed:

*** IMPORTANT NOTICE ***
This machine must now be rebooted in order to ensure sane operation.

Issue shutdown -y -i6 -g0 and wait for the "Console Login:" prompt.
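When the installation is driven from a script, the reboot notice in step 7 can be detected by scanning the captured install output. The log text below is simulated from the messages shown above; the variable names are illustrative.

```shell
# Sketch: detect whether the SDD package requested a reboot by scanning
# a captured install log for the notice shown above.
install_log='Installation of <IBMsdd> was successful.
*** IMPORTANT NOTICE ***
This machine must now be rebooted in order to ensure sane operation.'

if printf '%s\n' "$install_log" | grep -q 'IMPORTANT NOTICE'; then
  reboot_needed=yes
else
  reboot_needed=no
fi
echo "reboot needed: $reboot_needed"
```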

Postinstallation

If you install SDD from a CD-ROM, you can now manually unmount the CD. Issue the umount /cdrom command from the root directory. Go to the CD-ROM drive and press the Eject button.

After SDD is installed, the device driver resides above the Sun SCSI disk driver (sd) in the protocol stack. In other words, SDD now communicates to the Solaris device layer. The SDD software installation procedure installs a number of SDD components and updates some system files. Table 24, Table 25 on page 328, and Table 26 on page 329 list those components and files.

SDD vpath devices are found in the /dev/rdsk and /dev/dsk directories. The SDD vpath device is named according to the SDD instance number. A device with an instance number of 1 would be /dev/rdsk/vpath1a, where a denotes a slice. Therefore, /dev/rdsk/vpath1c would be instance 1 and slice 2. Similarly, /dev/rdsk/vpath2c would be instance 2 and slice 2.

After you install SDD, you might need to reboot the system to ensure proper operation. The SDD installation package determines if reboot is required. SDD displays a message to inform you to reboot only if reboot is required.
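The naming rule above can be sketched with two small parsing helpers. These helpers are illustrative (they are not SDD tools), and they assume the usual Solaris convention that slice letters a through h map to slices 0 through 7, which is consistent with the c = slice 2 example in the text.

```shell
# Sketch: split an SDD vpath device name such as vpath2c into its
# instance number and slice number.
vpath_instance() { expr "$1" : 'vpath\([0-9]*\)'; }
vpath_slice() {
  letter=$(expr "$1" : 'vpath[0-9]*\(.\)')
  case "$letter" in
    a) echo 0 ;; b) echo 1 ;; c) echo 2 ;; d) echo 3 ;;
    e) echo 4 ;; f) echo 5 ;; g) echo 6 ;; h) echo 7 ;;
  esac
}

vpath_instance vpath2c   # -> 2
vpath_slice vpath2c      # -> 2
```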
Installing SDD from downloaded code

There are several SDD packages for different Solaris versions on the SDD Web site. The SDD packages are in tar archives. After you download the appropriate SDD package, you can untar the file by issuing the following command:

tar xvf IBMsdd*.tar

A directory IBMsdd is created. The installable package is located under the IBMsdd directory. Install the package using steps similar to those found in "Installing SDD" on page 325.

Installing SDD in a Jumpstart environment

Installation of SDD in a Jumpstart environment is supported by SDD 1.6.2.0 and later. To install SDD in a Jumpstart environment, add installation of SDD using the -R option in the Jumpstart finish script. With this option, you can specify the base directory to which you want to install SDD:

pkgadd -R <basedir> IBMsdd

Using this command, SDD will be installed with the directory specified by basedir as a root directory. In this type of installation, vpath devices will not be configured during installation. Vpath devices will be automatically configured after reboot.

Table 24. SDD components installed for Solaris host systems

  File            Location         Description
  vpathdd         /kernel/drv      Device driver
  vpathdd.conf    /kernel/drv      SDD config file
  Executables     /opt/IBMsdd/bin  Configuration and status tools

Prior to SDD 1. this file was named S20vpath_config.d The SDD data collection tool for problem analysis Boot initialization script Notes: 1. System files updated for Solaris host systems File /etc/system /etc/devlink.tab Location /etc /etc Description Forces the loading of SDD Tells the system how to name SDD devices in /dev 328 Multipath Subsystem Device Driver User's Guide .conf Table 25.6. 2.xml /var/svc/manifest/system SMF service manifest for boot time initialization (only on Solaris 10) Boot initialization script used by ibmsdd-init.xml manifest (only on Solaris 10) SDD server daemon Sample SDD server config file ibmsddinit /lib/svc/method sddsrv sample_sddsrv.2.0. ibmsdd-init. SDD components installed for Solaris host systems (continued) sddgetdata S65vpath_config (except Solaris 10) /opt/IBMsdd/bin /etc/rcS.Table 24. This script must come before other LVM initialization scripts.conf /sbin/sddsrv /etc/sample_sddsrv.

Files vpathMsN in the /dev/dsk/ and /dev/rdsk/ directories provide block and character access to an application the same way as the cxtydzsn devices created by the system. Note: cfgvpath -c updates the configuration file but does not update the kernel.cfg and creates SDD vpath devices /devices/pseudo/vpathdd* Note: cfgvpath without -c option should not be used after hardware reconfiguration because the SDD driver is already initialized with previous configuration information. The SDD driver will be initialized after reboot. However. cfgvpath -r Reconfigures SDD vpath devices if SDD vpath devices exist. Creates files vpathMsN in the /dev/dsk/ and /dev/rdsk/ directories by creating links to the pseudo-vpath devices /devices/pseudo/vpathdd* that are created by the SDD driver. 3. you need to reboot.Table 26. Create configuration file /etc/vpath. vpathmkdev runs automatically during SDD package installation. If no SDD vpath devices exist. This option is used to reconfigure SDD after a hardware reconfiguration. Scan the host system to find all devices (LUNs) that are accessible by the Solaris host. datapath SDD driver console command tool. Lists all SDD vpath devices and their underlying disks. SDD commands and their descriptions for Solaris host systems Command cfgvpath [-c] Description Configures SDD vpath devices using the following process: 1.cfg to save the information about devices. use cfgvpath without -r option. With -c option: cfgvpath exits without initializing the SDD driver. To update the kernel. See “Option 2: Dynamic reconfiguration” on page 331. Chapter 7. Without -c option: cfgvpath initializes the SDD device driver vpathdd with the information stored in /etc/vpath. Determine which devices (LUNs) are the same devices that are accessible through different paths. Using the SDD on a Solaris host system showvpath vpathmkdev 329 . issue manually to update files vpathMsN after hardware reconfiguration. 
Reboot is required to properly initialize the SDD driver with the new hardware configuration information. 2.

Table 26. C shell: setenv PATH /opt/IBMsdd/bin:$PATH b. Install SDD using the procedure in “Installing SDD” on page 325. software application. such as Oracle. If you are not using a volume manager. In order to upgrade SDD without requiring reboot. software application. SDD devices were not closed before step 3 and step 4. Unmount all the filesystems mounted on SDD devices. or DBMS installed that communicates directly to the sd interface. c. The active path count for all the adapters should be zero. go to “Using applications with SDD” on page 338 and read the information specific to the application that you are using. export PATH c. Upgrading the SDD To upgrade the SDD without requiring a reboot: 1. To verify that all the SDD devices closed. enter datapath query device. all the SDD devices need to be closed before uninstalling SDD. Verifying the SDD installation To verify the SDD installation. ensure that all the vpath devices are in offline mode in the Volume Manager. Korn Shell: export PATH=/opt/IBMsdd/bin:$PATH To verify that you successfully installed SDD. 2. a. Uninstall SDD using the procedure in “Uninstalling the SDD” on page 336. Issue shutdown -y -i6 -g0 and wait for the "Console Login:" prompt. for some reason. 3. See “Option 2: Dynamic reconfiguration” on page 331. Stop I/O activity on all SDD devices. the following message will be printed at the end of installation: *** IMPORTANT NOTICE *** This machine must now be rebooted in order to ensure sane operation. a. If the command runs. Bourne Shell: PATH=/opt/IBMsdd/bin:$PATH. perform the following steps: 1. issue the datapath query adapter command. Add /opt/IBMsdd/bin to the path. or DBMS that communicates directly to the sd interface. 330 Multipath Subsystem Device Driver User's Guide . If using Volume Manager. the installation procedure is nearly complete. 
rmvpath [-b] [all | vpathname]
rmvpath -ab
  Removes SDD vpath devices from the configuration. See "Option 2: Dynamic reconfiguration" on page 331.

Verifying the SDD installation

To verify the SDD installation, perform the following steps:
1. Add /opt/IBMsdd/bin to the path:
   a. C shell: setenv PATH /opt/IBMsdd/bin:$PATH
   b. Bourne shell: PATH=/opt/IBMsdd/bin:$PATH; export PATH
   c. Korn shell: export PATH=/opt/IBMsdd/bin:$PATH
2. To verify that you successfully installed SDD, issue the datapath query adapter command. If the command runs, SDD is installed.

The installation procedure is nearly complete. If you have a volume manager, software application, or DBMS installed that communicates directly to the sd interface, go to "Using applications with SDD" on page 338 and read the information specific to the application that you are using.

Upgrading the SDD

To upgrade SDD without requiring a reboot:
1. Stop I/O activity on all SDD devices. If you are using a volume manager, ensure that all the vpath devices are in offline mode in the volume manager.
2. Unmount all the filesystems mounted on SDD devices.
3. Uninstall SDD using the procedure in "Uninstalling the SDD" on page 336.
4. Install SDD using the procedure in "Installing SDD" on page 325.

In order to upgrade SDD without requiring a reboot, all the SDD devices need to be closed before uninstalling SDD. To verify that all the SDD devices are closed, enter datapath query device. The active path count for all the adapters should be zero. If, for some reason, the SDD devices were not closed before step 3 and step 4, the system will need to be rebooted after SDD is installed. If reboot is required, the following message will be printed at the end of installation:

*** IMPORTANT NOTICE ***
This machine must now be rebooted in order to ensure sane operation.

Issue shutdown -y -i6 -g0 and wait for the "Console Login:" prompt.
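The "active path count should be zero" check above can be sketched as follows. The two-line adapter report is hypothetical sample text standing in for real datapath query adapter output, and the column layout (Active as the last column) is an assumption for illustration.

```shell
# Sketch: before an upgrade, confirm the total active path count across
# all adapters is zero. The report below is simulated.
adapter_report='Adpt#  Name   State   Mode    Select  Errors  Paths  Active
0      fscsi0 NORMAL  ACTIVE  0       0       8      0
1      fscsi1 NORMAL  ACTIVE  0       0       8      0'

active_total=$(printf '%s\n' "$adapter_report" | awk 'NR > 1 { sum += $NF } END { print sum + 0 }')
echo "total active paths: $active_total"
```

A nonzero total would mean some SDD devices are still open, and the upgrade would require a reboot.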

6. Changing an SDD hardware configuration When adding or removing multiport SCSI devices from your system. the following commands will reconfigure the SDD. If you do not have a console attached to your host. Enter shutdown -i6 -g0 -y and press Enter. Enter devfsadm and press Enter to reconfigure all the drives. It will automatically add or remove vpath devices or paths based on the device attachment during boot up. make sure that you have successfully configured the disk storage system or virtualization product to which your host system is attached and that the disk storage system or virtualization product is operational. SDD automatically configures new vpath devices after each reboot. Issue the cfgvpath -r command. Before reconfiguring SDD. Using the SDD on a Solaris host system 331 . For SDD 1.0. Option 1: Reconfigure the system and reconfigure the SDD Perform the following steps to reconfigure the system and to reconfigure SDD. For SDD prior to 1.6. 1.Configuring the SDD Before you start the SDD configuration process. Option 2: Dynamic reconfiguration If the system can recognize the hardware change without reboot. 5. Save a copy of the output of both commands so that the change in the SDD configuration after the dynamic reconfiguration can be easily verified. perform a configuration restart by entering boot -r and pressing Enter at the OK prompt. 6. 2. Shut down the system.2.0 or later. 3. After the restart. you entered shutdown -i0 -g0 -y in step 1).6.2.2. Enter cfgvpath -c and press Enter.0. enter shutdown -i6 -g0 -y and press Enter to shut down and reboot the system. If you have a console attached to your host (that is. Shut down the system. If you have a console attached to your host. SDD only configures vpath devices during installation or reconfigures vpath devices initiated by the user running the SDD reconfiguration utility. In the earlier versions. the system needs to first recognize the hardware change. 
Configuring the SDD

Before you start the SDD configuration process, make sure that you have successfully configured the disk storage system or virtualization product to which your host system is attached, and that the disk storage system or virtualization product is operational.

Beginning with SDD 1.6.2.0, SDD will reconfigure vpath devices after each reboot; it automatically adds or removes vpath devices or paths based on the device attachment during boot up. In earlier versions, SDD only configures vpath devices during installation, or reconfigures vpath devices when initiated by the user running the SDD reconfiguration utility.

Changing an SDD hardware configuration

When adding or removing multiport SCSI devices from your system, you must reconfigure SDD to recognize the new devices. Before reconfiguring SDD, the system needs to first recognize the hardware change.

Option 1: Reconfigure the system and reconfigure the SDD

Perform the following steps to reconfigure the system and to reconfigure SDD. Step 1 and step 2 of this process reconfigure the system for the hardware change, and the remaining steps reconfigure SDD.
1. Shut down the system. If you have a console attached to your host, enter shutdown -i0 -g0 -y and press Enter. If you do not have a console attached to your host, enter shutdown -i6 -g0 -y and press Enter to shut down and reboot the system.
2. If you have a console attached to your host (that is, you entered shutdown -i0 -g0 -y in step 1), perform a configuration restart by entering boot -r and pressing Enter at the OK prompt.
3. After the restart, change to the /opt/IBMsdd/bin directory by entering:
   cd /opt/IBMsdd/bin
4. Enter devfsadm and press Enter to reconfigure all the drives.
5. Enter cfgvpath -c and press Enter.
6. Enter shutdown -i6 -g0 -y and press Enter.
7. After the restart, change to the /opt/IBMsdd/bin directory.
8. Enter vpathmkdev and press Enter to create all the SDD vpath devices.

Option 2: Dynamic reconfiguration

If the system can recognize the hardware change without a reboot, dynamic reconfiguration provides a way to automatically detect path configuration changes without requiring a reboot. After the system has recognized the new hardware change, the following commands will reconfigure SDD.

Tip: Before issuing the following SDD dynamic reconfiguration commands, issue the showvpath and datapath query device commands. Save a copy of the output of both commands so that the change in the SDD configuration after the dynamic reconfiguration can be easily verified.
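The Tip above amounts to a before/after comparison of configuration snapshots. The sketch below simulates it with two fixed strings; on a real host, the before and after text would come from showvpath runs, and the file names are illustrative.

```shell
# Sketch of the Tip: snapshot the configuration before dynamic
# reconfiguration and compare it afterward. Simulated showvpath output.
before='vpath0: c1t0d0s2 c2t0d0s2'
after='vpath0: c1t0d0s2 c2t0d0s2
vpath1: c1t1d0s2 c2t1d0s2'

printf '%s\n' "$before" | sort > /tmp/showvpath.before
printf '%s\n' "$after"  | sort > /tmp/showvpath.after
new_entries=$(comm -13 /tmp/showvpath.before /tmp/showvpath.after)
echo "entries added by reconfiguration:"
echo "$new_entries"
```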

issue the following command: # rmvpath -all b. issue the following command: # rmvpath vpathname For example. The cfgvpath -r operation issues these commands to the vpath driver: a. Issue the rmvpath command to remove one or more SDD vpath devices. to that LUN. To remove one SDD vpath device if the SDD vpath device is not busy. All OFFLINE paths would not be selected for I/Os. issue the following command: # rmvpath -ab Note: Option -ab does not remove any existing SDD vpath devices. Note: When an SDD vpath device. rmvpath vpath10 will remove vpath10. the corresponding path would be marked OFFLINE. This operation finds the current hardware configuration and compares it to the SDD vpath device configuration in memory and then works out a list of differences. In the case of SDD vpath device removal failure. vpathN. It then issues commands to put the SDD vpath device configuration in memory up-to-date with the current hardware configuration. Add one or more paths to the SDD vpath device. Issue cfgvpath to configure new SDD vpath devices. this will fail if the device is busy. SDD will also create a binding between that SDD vpath name. However. d. the path selection policy of the SDD vpath device will be changed to load-balancing policy. 2.Note: If there are no existing SDD vpath devices. is created for a LUN. a. Then issue devfsadm -i vpathdd and vpathmkdev. In the case of a path removal failure. When the system is rebooted. c. c. the cfgvpath -r command does not dynamically reconfigure new SDD vpath devices. issue the following command: # rmvpath -b -all or # rmvpath -b vpathname d. issue devfsadm -i vpathdd and vpathmkdev. This deletion of the path will fail if the device is busy. the SDD configuration file would be modified to reflect the paths or SDD vpath devices. To remove all SDD vpath devices that are not busy. but the path will be set to DEAD and OFFLINE. 
To remove SDD vpath devices if the SDD vpath devices are not busy and also to remove the bindings between SDD vpath device names and LUNs so that the removed SDD vpath device names can be reused for new devices. If the SDD vpath device changes from single path to multiple paths. all the paths of the SDD vpath device would be marked OFFLINE. Remove an SDD vpath device. Remove a path for an SDD vpath device. vpathN. If you are adding new SDD vpath devices. Add one or more SDD vpath devices. Removing paths of an SDD vpath device or removing an SDD vpath device can fail if the corresponding devices are busy. the new SDD configuration would be used to configure SDD vpath devices. The 332 Multipath Subsystem Device Driver User's Guide . To remove all bindings associated with currently unconfigured vpath names so that all unconfigured SDD vpath device names can be reused for new LUNs. b.

you can issue the following commands: zpool create vpool1 vpath1a vpath1b vpath2c zpool create vpool2 mirror vpath3c vpath4c For information about known ZFS issues and the minimum Solaris 10 update that is required.6.5. To port vpath devices into a nonglobal zone. you must prevent SDD from being installed in the nonglobal zone. You can use vpath devices in place of native system devices in cXtYdZsN format when you create ZFS pools. see the Solaris SDD readme. you can create device special files to port the SDD vpath devices to a nonglobal zone. Log in as root in the global zone.0 or later supports Solaris 10 ZFS. SDD installation in a server with nonglobal zones CAUTION: When you install SDD in a global zone that has nonglobal zones installed. Then. Using the SDD on a Solaris host system 333 . Find the zone path of the nonglobal zone where you want to port the vpath device. # zonecfg -z ngb_zone2 info zonename: ngb_zone2 zonepath: /zones/ngb_zone2 autoboot: true pool: limitpriv: inherit-pkg-dir: dir: /lib inherit-pkg-dir: dir: /platform inherit-pkg-dir: dir: /sbin inherit-pkg-dir: dir: /usr Chapter 7. however. vpathN.0 or later. To do this. the binding needed to be removed before reconfiguring SDD. In the following example. so they can be accessed in the nonglobal zone environment. Solaris 10 Zone support SDD cannot be installed in nonglobal zones. 2. specify option -G when you issue the pkgadd command.binding will not be removed even after the LUN has been removed from the host.5. to be assigned to the same LUN when it is reconnected to the host.6. Solaris 10 ZFS support SDD 1. For example. perform the following steps: 1. #pkgadd -G -d IBMsdd Access SDD vpath devices in nonglobal zones You can port vpath devices that are created in global zones to nonglobal zones. Note: You are no longer required to specify Option -G when you install Solaris SDD 1. the zonepath is /zones/ngb_zone2 for zone ngb_zone2. In order to reuse an SDD vpath name for a new LUN. 
The binding allows the same SDD vpath device name. SDD installs only in global zones. SDD can be installed in a global zone.

18 Jan 16 16:00 vpath2c 5. to show the current active policy for that device.3.4. In the following example.0 (or later) supports dynamic changing of the path-selection policy for SDD devices. The device special files are based on the major and minor numbers of the vpath devices. load balancing (lb) The path to use for an I/O operation is chosen by estimating the load on the adapter to which each path is attached. 18 Jan 16 15:25 /dev/rdsk/vpath2c 4. If multiple paths have the same load. The path-selection policy is set at the SDD device level. determine the active policy for the device. SDD alternates between the two. If a device has only two paths.0 (or later) supports multiple path-selection policies and allows users to change the path-selection policies dynamically. The load is a function of the number of I/O operations currently in process. In the nonglobal zone environment. The following path-selection policies are supported: failover only (fo) All I/O operations for the device are sent to the same (preferred) path until the path fails because of I/O errors. You can change the policy for an SDD device. Then an alternate path is chosen for subsequent I/O operations. Load-balancing mode also incorporates failover protection. SDD version 1. the major number of vpath device vpath2c is 271 and the minor number is 18. This policy does not attempt to perform load balancing among paths. Note: The load-balancing policy is also known as the optimized policy. Issue datapath query device N.0. where N is the device number of the SDD vpath device. 18 Jan 16 15:59 vpath2c Issue the mknod command to create a raw device special file: root 271. The default path-selection policy for an SDD device is load balancing. Issue the mknod command to create a block device special file: # cd /zones/ngb_zone2/dev/dsk # mknod vpath2c b 271 18 # ls -l total 0 brw-r--r-1 root #cd /zones/ngb_zone2/dev/rdsk #mknod vpath2c c 271 18 #ls -l total 0 crw-r--r-1 root root 271. 
the vpath devices are now available in the /dev/dsk and /dev/rdsk directories. Create block and raw device special files in the nonglobal zone /dev/dsk and /dev/rdsk directories using the mknod command.0. Before changing the path-selection policy. a path is chosen at random from those paths. Dynamically changing the SDD path-selection policy algorithm SDD 1. Find the major and minor number of the vpath device that you want to port to the nonglobal zone by using the ls -lL command. # ls -lL /dev/rdsk/vpath2c crw------1 root sys 271. The output should look similar to the following example: 334 Multipath Subsystem Device Driver User's Guide .4. round robin (rr) The path to use for each I/O operation is chosen at random from paths that were not used for the last I/O operation.

cfg.cfg. Run cfgvpath -r to reconfigure SDD. See “datapath set device policy” on page 446 for more information about the datapath set device policy command.cfg is replaced and the LUN exclusion list is lost. See “Determining the LUN identifier of a LUN” for information about how to determine LUN identifiers.7450@b/pci1077.700000/fibre channel@3 sd@1. In order to retain the exclusion list after SDD upgrade: 1. Determining the LUN identifier of a LUN After the SDD configuration.0:c.0/pci1022. /etc/vpathexcl.600000/fibre channel@1 sd@1.raw CLOSE NORMAL 0 0 datapath set device policy command Use the datapath set device policy command to change the SDD path-selection policy dynamically.0:c.raw /devices/pci@0.7450@b/pci1077.raw /devices/pci@0.DEV#: 2 DEVICE NAME: vpath1c TYPE: 2105800 POLICY: OPTIMIZED SERIAL: 03B23922 ======================================================================== Path# Adapter H/W Path Hard Disk State Mode Select Error 0 /pci@8.7450@b/pci1077. for example.101@4/sd@0.cfg.raw CLOSE NORMAL 0 0 1 /pci@8.1:c.1:c. Subsequent SDD configuration will always exclude the LUN associated with the LUN identifier listed in /etc/vpathexcl.raw CLOSE NORMAL 0 0 3 /pci@8.raw /devices/pci@0. Copy the existing /etc/vpathexcl.101@4/sd@3.raw Special consideration during SDD upgrade: During SDD upgrade. issue the showvpath -l command.0:c.0/pci1022.7450@b/pci1077.0/pci1022. /etc/vpathexcl. Add one LUN Identifier per line. Add the list of LUN identifiers to the /etc/vpathexcl.sav.cfg to a new file.700000/fibre channel@3 sd@2. Vpath devices associated with the LUN identifiers in the /etc/vpathexcl.cfg file will be removed. Determine the LUN identifier(s) of the LUN(s) to be excluded. The LUN identifier will be shown under the serial number.1/sd@1.1/sd@0. remove the LUN identifier from /etc/vpathexcl.101@4.1:c.600000/fibre channel@1 sd@2. 2. 
For example: showvpath -l vpath47: c4t0d1s2 c4t3d1s2 c5t0d1s2 c5t1d1s2 Serial Number : 13014712201 Lun Identifier: 6005076303FFC0590000000000002201 /devices/pci@0.raw CLOSE NORMAL 0 0 2 /pci@8.1:c. Excluding LUNs from being configured by SDD Use the following procedure to exclude LUNs from being configured by SDD: 1. before installing the new SDD package.cfg file. 3. Using the SDD on a Solaris host system 335 . If you want to re-create the vpath device for a LUN that has been excluded. Chapter 7.101@4.0:c.0/pci1022.

n.vpath. Enter y and press Enter. A message similar to the following message is displayed: The following package is currently installed: IBMsdd IBMsdd Driver 64-bit Version: 1.sav. Do you want to continue with the removal of this package [y.6. uninstall SDD requires you to uninstall either the IBMdpo or the IBMsdd package. A message similar to the following message is displayed: ## Verifying package dependencies. Issue cfgvpath -r again to exclude the LUNs. Because the SDD package name has changed from IBMdpo to IBMsdd for SDD 1. Unmount all file systems on SDD devices.cfg.profile /opt/IBMsdd/devlink. Make sure that you specify the correct package to uninstall. 3.q] y 5. 2.6. replace /etc/vpathexec.q] y 4.0 (or later). /etc/vpathexcl.2.d/S20vpath-config /etc/defvpath ## Updating system information.5 Do you want to remove this package? [y.conf /kernel/drv/sparcv9/vpathdd /etc/sample_sddsrv.?. such as Oracle. 336 Multipath Subsystem Device Driver User's Guide . Enter # pkgrm IBMdpo or # pkgrm IBMsdd and press Enter depending on the previous SDD package installed.4.cfg with the saved file. If you are using SDD with a database. After installing the new SDD package.system /opt/IBMsdd/etc. ## Processing package information.cfg /opt/IBMsdd/vpath.0.5 Oct-21-2004 19:36 (sparc) 1. Uninstalling the SDD The following procedure explains how to uninstall an SDD.msg /opt/IBMsdd/etc. 3.0.?.tab /opt/IBMsdd/bin /opt/IBMsdd /kernel/drv/vpathdd.n. Enter y and press Enter. A message similar to the following message is displayed: ## Removing installed package instance <IBMsdd> This package contains scripts that run with super-user permission during the process of removing this package. Perform the following steps to uninstall SDD: 1.conf /etc/rcS. Attention: A number of different installed packages are displayed. You must uninstall the current level of SDD before you upgrade to a newer level.0. ## Removing pathnames in class <none> usr/sbin/vpathmkdev /sbin/sddsrv /opt/IBMsdd/vpathexcl. 
Removal of <IBMsdd> was successful. edit the appropriate database configuration files (database partition) to remove all the SDD devices. ## Executing preremove script.

Attention: If you are not performing an SDD upgrade, you should now reboot the system. If you are in the process of upgrading SDD, you do not need to reboot at this point. You can reboot the system after installing the new SDD package.

Understanding SDD support for single-path configuration for disk storage system

SDD does support single-path SCSI or fibre-channel connection from your SUN host system to a disk storage system. It is possible to create a volume group or an SDD vpath device with only a single path. However, because SDD cannot provide single-point-failure protection and load balancing with a single-path configuration, you should not use a single-path configuration. SDD does not support concurrent download of licensed machine code in single-path mode.

SDD server daemon

The SDD server (also referred to as sddsrv) is an integrated component of SDD 1.3.1.0 (or later). This component consists of a UNIX application daemon that is installed in addition to the SDD device driver. See Chapter 12, "Using the SDD server and the SDDPCM server," on page 415 for more information about sddsrv.

Verifying if the SDD server has started

After you have installed SDD, verify that the SDD server (sddsrv) has automatically started by entering ps -ef | grep sddsrv. If the SDD server (sddsrv) has automatically started, the output will display the process number on which sddsrv has started. If the SDD server has not started, go to "Starting the SDD server manually."

Starting the SDD server manually

If the SDD server does not start automatically after you perform the SDD installation, or if you want to start it manually after stopping sddsrv, use the following process to start sddsrv:
1. Edit /etc/inittab and verify the sddsrv entry. For example:
   srv:234:respawn:/sbin/sddsrv > /dev/null 2>&1
2. Save the file /etc/inittab.
3. Issue init q.
4. Follow the directions in "Verifying if the SDD server has started" to confirm that the SDD server started successfully.

Changing to a different port number for the SDD server

To change to a different port number for the SDD server, see the instructions in "Changing the sddsrv or pcmsrv TCP/IP port number" on page 419.

Chapter 7. Using the SDD on a Solaris host system
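The verification step above can be scripted as a small helper. The sketch below is illustrative only: daemon_running is a hypothetical function name, and on a host without SDD installed the sddsrv check simply reports that the daemon has not started.

```shell
# Sketch of the "Verifying if the SDD server has started" check.
# daemon_running succeeds if a process matching the given name appears in
# ps -ef output (the grep processes themselves are filtered out).
daemon_running() {
  ps -ef | grep -v grep | grep -q "$1"
}

if daemon_running sddsrv; then
  echo "sddsrv is running"
else
  echo "sddsrv has not started; start it manually via /etc/inittab and init q"
fi
```

The same helper can be reused after the manual start procedure to confirm that init q respawned the daemon.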

Changing the retry count value when probing SDD server inquiries

Beginning with SDD version 1.6.4.x on Solaris, you can dynamically change the value of the retry count for probing inquiries. Before SDD version 1.6.4.x on Solaris, the retry count was statically compiled in the sddsrv binary and was set to 2. The retry count specifies how many consecutive probing inquiries sddsrv makes before a path is determined to be nonfunctional and is set to DEAD state. A retry count of 2 indicates that sddsrv will attempt three consecutive probing inquiries.

To change the default value, perform the following steps:
1. You must either create the probe_retry variable in the existing sddsrv.conf file or generate a new sddsrv.conf file by copying the sample_sddsrv.conf file to the sddsrv.conf file in the /etc directory.
2. Make sure that this variable is commented with the default value of probe_retry=2.
3. To change the default value, uncomment and set probe_retry to a valid value. The valid range for probe_retry is from 2 to 5. If you set a value that is not valid, sddsrv uses the default value of 2. If you set a value greater than 5, sddsrv uses 5. If you set a value less than 2, sddsrv uses 2.
4. Save the file.

Stopping the SDD server

To stop the SDD server, perform the following steps:
1. Edit /etc/inittab and comment out the SDD server entry:
   #srv:234:respawn:/sbin/sddsrv > /dev/null 2>&1
2. Save the file.
3. Issue init q.
4. Check if sddsrv is running by running ps -ef | grep sddsrv.
5. If sddsrv is still running, issue kill -9 pid of sddsrv.

Using applications with SDD

If your system already has a volume manager, software application, or DBMS installed that communicates directly with the Solaris disk device drivers, you need to insert the new SDD device layer between the program and the Solaris disk device layer. You also need to customize the volume manager, software application, or DBMS in order to have it communicate with the SDD devices instead of the Solaris devices. In addition, many software applications and DBMS need to control certain device attributes such as ownership and permissions. Therefore, you must ensure that the new SDD devices accessed by these software applications or DBMS have the same attributes as the Solaris sd devices that they replace. You need to customize the software application or DBMS to accomplish this. This section describes how to use the following applications with SDD:
v Standard UNIX applications
v NFS
v Veritas Volume Manager
v Oracle
v Solaris Volume Manager
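As a minimal sketch of the probe_retry rules described above (valid range 2 to 5, with out-of-range or non-numeric values handled as noted), the following hypothetical helper mirrors the documented behavior; it is an illustration, not part of sddsrv itself:

```shell
# Illustrative function (not part of SDD): applies the documented
# probe_retry rules. Non-numeric or empty input falls back to the
# default of 2; values above 5 become 5; values below 2 become 2.
clamp_probe_retry() {
  case "$1" in
    ""|*[!0-9]*) echo 2 ;;   # not a valid number: use the default
    *) if [ "$1" -gt 5 ]; then echo 5
       elif [ "$1" -lt 2 ]; then echo 2
       else echo "$1"
       fi ;;
  esac
}

clamp_probe_retry 3    # -> 3 (already in the valid range)
clamp_probe_retry 9    # -> 5 (clamped to the maximum)
clamp_probe_retry 0    # -> 2 (clamped to the minimum)
clamp_probe_retry abc  # -> 2 (invalid value: default)
```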

Standard UNIX applications

If you have not already done so, install SDD using the procedure in the "Installing SDD" on page 325 section. After you install SDD, the device driver resides above the Solaris SCSI disk driver (sd) in the protocol stack. In other words, SDD now communicates to the Solaris device layer. Standard UNIX applications, such as newfs, fsck, mkfs, and mount, which normally take a disk device or raw disk device as a parameter, also accept the SDD device as a parameter. Similarly, you can replace entries in files such as vfstab and dfstab (in the format of cntndnsn) by entries for the corresponding SDD vpathNs devices. Make sure that the devices that you want to replace are replaced with the corresponding SDD device. Issue the showvpath command to list all SDD devices and their underlying disks.

Installing the SDD on an NFS file server

The procedures in this section show how to install SDD for use with an exported file system (NFS file server).

Setting up NFS for the first time

Perform the following steps if you are installing exported file systems on SDD devices for the first time:
1. If you have not already done so, install SDD using the procedure in the "Installing SDD" on page 325 section.
2. Determine which SDD (vpathN) volumes you will use as file system devices.
3. Partition the selected volumes using the Solaris format utility.
4. Create file systems on the selected SDD devices using the appropriate utilities for the type of file system that you will use. If you are using the standard Solaris UFS file system, enter the following command:
   # newfs /dev/rdsk/vpathNs
   In this example, N is the SDD device instance of the selected volume.
5. Create mount points for the new file systems.
6. Install the file systems into the /etc/fstab directory. Click yes in the mount at boot field.
7. Install the file system mount points into the directory /etc/exports for export.
8. Restart the system.

Installing SDD on a system that already has the Network File System file server

Perform the following steps to install SDD on a system if you have the Network File System file server already configured to export file systems that reside on a multiport subsystem and to use SDD partitions instead of sd partitions to access file systems:
1. List the mount points for all currently exported file systems by looking in the /etc/exports directory.
2. Match the mount points found in step 1 with sdisk device link names (files named /dev/(r)dsk/cntndn) by looking in the /etc/fstab directory.
3. Match the sd device link names found in step 2 with SDD device link names (files named /dev/(r)dsk/vpathN) by issuing the showvpath command.
4. Make a backup copy of the current /etc/fstab file.

5. Edit the /etc/fstab file, replacing each instance of an sd device link named /dev/(r)dsk/cntndn with the corresponding SDD device link.
6. Restart the system.
7. Verify that each exported file system:
v Passes the start time fsck pass
v Mounts properly
v Is exported and available to NFS clients
If a problem exists with any exported file system after you complete step 7, restore the original /etc/fstab file and restart to restore Network File System service. Then review your steps and try again.

Veritas Volume Manager

For these procedures, ensure that you have a copy of the Veritas Volume Manager System Administrator's Guide and the Veritas Volume Manager Command Line Interface for Solaris to use for reference. These publications can be found at the following Web site:
www.veritas.com

SDD supports ESS devices for Veritas Volume Manager 3.5 MP2 or later and SAN Volume Controller devices for Veritas Volume Manager 3.5 MP2 Point Patch 3.1 or later with appropriate ASLs for SAN Volume Controller devices from Veritas.

The latest Veritas Volume Manager does not allow you to disable DMP. In order to use SDD as a multipathing driver under Veritas Volume Manager, you must install an appropriate ASL (device-specific library) for a specific IBM storage device type. There are two types of ASL for each device type:
v One type of ASL allows DMP to be the multipathing driver for an IBM device type
v Another type of ASL enables DMP to work in passthru mode and allows SDD to become the multipath driver for that device type
Each device type requires an appropriate ASL for that device type. The ASL to enable DMP passthru mode for ESS is called libvxvpath.so, which is normally included in the Veritas Volume Manager package. The ASL to enable DMP passthru mode for SAN Volume Controller is called libvxsvc.so, and you must install it as a separate package. The ASL package is available for downloading from the Veritas Web site.

To initially install SDD with Veritas Volume Manager:

Case 1: Installing Veritas Volume Manager for the first time.
1. Install SDD using the procedure in "Installing SDD" on page 325, if you have not already done so.
2. Ensure that you have rebooted the system after SDD is installed.
3. Install the Veritas Volume Manager package.
4. Follow the procedure in the Veritas Volume Manager manual to create the rootdg disk group and other required groups.
5. In Veritas Volume Manager, the ESS vpath devices will have names such as VPATH_SHARK0_0, VPATH_SHARK0_1, and so on. SAN Volume Controller vpath devices will have names such as VPATH_SANVC0_0, VPATH_SANVC0_1, and so on.
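Step 5 of the NFS conversion procedure above, like the vfstab and dfstab edits described under Standard UNIX applications, is a mechanical device-name substitution. The sketch below assumes the mapping c1t3d2s4 to vpath2e; take the real mapping from showvpath output on your own system.

```shell
# Rewrite one fstab-style entry to use the SDD vpath device instead of
# the sd device. The c1t3d2s4 -> vpath2e mapping and the mount point are
# assumed examples; obtain the real mapping from the showvpath command.
entry="/dev/dsk/c1t3d2s4 /dev/rdsk/c1t3d2s4 /export/fs1 ufs 2 yes -"
echo "$entry" | sed 's|c1t3d2s4|vpath2e|g'
# -> /dev/dsk/vpath2e /dev/rdsk/vpath2e /export/fs1 ufs 2 yes -
```

Applied with sed -i (or via a rewritten copy of the file), the same substitution converts every entry, after which a restart remounts the file systems through SDD.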

Case 2: Installing SDD with Veritas already installed.
1. Install SDD using the procedure that is described in "Installing SDD" on page 325.
2. Ensure that you have rebooted the system after SDD is installed.
3. In Veritas Volume Manager, the ESS vpath devices will have names such as VPATH_SHARK0_0, VPATH_SHARK0_1, and so on. SAN Volume Controller vpath devices will have names such as VPATH_SANVC0_0, VPATH_SANVC0_1, and so on.
Note: Multipathing of ESS and SAN Volume Controller devices that were managed by DMP before SDD was installed will be managed by SDD after SDD is installed.

Oracle

You must have superuser privileges to perform the following procedures. You also need to have Oracle documentation on hand. These procedures were tested with Oracle 8.0.5 Enterprise server with the 8.0.5.1 patch set from Oracle.

Installing an Oracle database for the first time

You can set up your Oracle database in one of two ways. You can set it up to use a file system or raw partitions. The procedure for installing your database can differ depending on the choice that you make.

Using a file system:
1. If you have not already done so, install SDD using the procedure that is described in "Installing SDD" on page 325.
2. Create and mount file systems on one or more SDD partitions. (Oracle recommends three mount points on different physical devices.)
3. Follow the Oracle Installation Guide for instructions on how to install to a file system. (During the Oracle installation, you will be asked to name three mount points. Supply the mount points for the file systems that you created on the SDD partitions.)

Using raw partitions:

Attention: If using raw partitions, make sure that all the databases are closed before going further. Make sure that the ownership and permissions of the SDD devices are the same as the ownership and permissions of the raw devices that they are replacing. Do not use disk cylinder 0 (sector 0), which is the disk label; using it corrupts the disk. For example, slice 2 on Sun is the whole disk. If you use this device without repartitioning it to start at sector 1, the disk label is corrupted.

In the following procedure, you will replace the raw devices with the SDD devices.
1. If you have not already done so, install SDD using the procedure outlined in the "Installing SDD" on page 325 section.
2. Create the Oracle software owner user in the local server /etc/passwd file. You must also complete the following related activities:
a. Complete the rest of the Oracle preinstallation tasks described in the Oracle8 Installation Guide. Plan to install Oracle8 on a file system that resides on an SDD partition.

b. Set up the ORACLE_BASE and ORACLE_HOME environment variables of the Oracle user to be directories of this file system.
c. Create two more SDD-resident file systems on two other SDD volumes. Each of the resulting three mount points should have a subdirectory named oradata. The subdirectory is used as a control file and redo log location for the installer's default database (a sample database) as described in the Installation Guide. Oracle recommends using raw partitions for redo logs. To use SDD raw partitions as redo logs, create symbolic links from the three redo log locations to SDD raw device links that point to the slice. These files are named /dev/rdsk/vpathNs, where N is the SDD instance number, and s is the partition ID.
3. Determine which SDD (vpathN) volumes you will use as Oracle8 database devices.
4. Partition the selected volumes using the Solaris format utility. If Oracle8 is to use SDD raw partitions as database devices, be sure to leave sector 0/disk cylinder 0 of the associated volume unused. This protects UNIX disk labels from corruption by Oracle8.
5. Ensure that the Oracle software owner has read and write privileges to the selected SDD raw partition device files under the /devices/pseudo directory.
6. Set up symbolic links in the oradata directory under the first of the three mount points. Link the database files to SDD raw device links (files named /dev/rdsk/vpathNs) that point to partitions of the appropriate size.
7. Install the Oracle8 server following the instructions in the Oracle Installation Guide. Be sure to be logged in as the Oracle software owner when you run the orainst /m command. Select the Install New Product - Create Database Objects option. Select Raw Devices for the storage type. Specify the raw device links set up in step 2 for the redo logs. Specify the raw device links set up in step 3 for the database files of the default database.
8. To set up other Oracle8 databases, you must set up control files, redo logs, and database files following the guidelines in the Oracle8 Administrator's Reference. Make sure that any raw devices and file systems that you set up reside on SDD volumes.
9. Launch the sqlplus utility.
10. Issue the create database SQL command, specifying the control, log, and system data files that you have set up.
11. Issue the create rollback segment SQL command to create the three redo log files that you set up.
12. Issue the create tablespace SQL command to set up each of the temp, rbs, tools, and users database files that you created. For the syntax of these three create commands, see the Oracle8 Server SQL Language Reference Manual.

Installing an SDD on a system that already has Oracle in place

The installation procedure for a new SDD installation differs depending on whether you are using a file system or raw partitions for your Oracle database.
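The symbolic-link setup described above can be sketched as follows. The vpath partition names and redo log file names are assumed examples, and the links are created in a scratch directory here rather than in a live oradata directory.

```shell
# Sketch of linking redo log locations to SDD raw device links. The
# vpathNg names and redo0N.log names are assumed examples; on a real
# system the links live in the oradata directory and point at raw
# partitions of the appropriate size.
dir=$(mktemp -d)
for i in 1 2 3; do
  ln -s "/dev/rdsk/vpath${i}g" "$dir/redo0${i}.log"
done
readlink "$dir/redo02.log"
# -> /dev/rdsk/vpath2g
```

Because the links point at character (raw) device nodes, the ownership and permission checks in the surrounding steps still apply to the link targets, not to the links themselves.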

If using a file system:

Perform the following procedure if you are installing SDD for the first time on a system with an Oracle database that uses a file system:
1. Record the raw disk partitions being used (they are in the cntndnsn format) or the partitions where the Oracle file systems reside. You can get this information from the /etc/vfstab file if you know where the Oracle files are. Your database administrator can tell you where the Oracle files are, or you can check for directories with the name oradata.
2. Complete the basic installation steps in the "Installing SDD" on page 325 section.
3. Change to the directory where you installed the SDD utilities. Issue the showvpath command.
4. Check the directory list to find a cntndn directory that is the same as the one where the Oracle files are. For example, if the Oracle files are on c1t8d0s4, look for c1t8d0s2. If you find it, you will know that /dev/dsk/vpath0c is the same as /dev/dsk/c1t8d0s2. (SDD partition identifiers end in an alphabetical character from a-g rather than s0, s1, s2, and so forth.) A message similar to the following message is displayed:
vpath1c c1t8d0s2 c2t8d0s2 /devices/pci@1f,0/pci@1/scsi@2/sd@1,0:c,raw /devices/pci@1f,0/pci@1/scsi@2,1/sd@1,0:c,raw
5. Use the SDD partition identifiers instead of the original Solaris identifiers when mounting the file systems. For example, assume that vpath2c is the SDD identifier. If you originally used the following Solaris identifiers:
mount /dev/dsk/c1t3d2s4 /oracle/mp1
you now use the following SDD partition identifiers:
mount /dev/dsk/vpath2e /oracle/mp1
Follow the instructions in the Oracle Installation Guide for setting ownership and permissions.

If using raw partitions:

Perform the following procedure if you have Oracle8 already installed and want to reconfigure it to use SDD partitions instead of sd partitions (for example, partitions accessed through /dev/rdsk/cntndn files).

All Oracle8 control, log, and data files are accessed either directly from mounted file systems or through links from the oradata subdirectory of each Oracle mount point set up on the server. Therefore, the process of converting an Oracle installation from sdisk to SDD has two parts:
v Change the Oracle mount points' physical devices in /etc/fstab from sdisk device partition links to the SDD device partition links that access the same physical partitions.
v Re-create any links to raw sdisk device links to point to raw SDD device links that access the same physical partitions.

Converting an Oracle installation from sd to SDD partitions:

Perform the following steps to convert an Oracle installation from sd to SDD partitions:
1. Back up your Oracle8 database files, control files, and redo logs.
2. Obtain the sd device names for the Oracle8 mounted file systems by looking up the Oracle8 mount points in /etc/vfstab and extracting the corresponding sd device link name (for example, /dev/rdsk/c1t4d0s4).
3. Launch the sqlplus utility.
4. Enter the command:
select * from sys.dba_data_files;

The output lists the locations of all data files in use by Oracle. Determine the underlying device where each data file resides. You can do this by either looking up mounted file systems in the /etc/vfstab file or by extracting raw device link names directly from the select command output.
5. Enter the ls -l command on each device link found in step 4 and extract the link source device file name. For example, if you enter the command:
# ls -l /dev/rdsk/c1t1d0s4
A message similar to the following message is displayed:
/dev/rdsk/c1t1d0s4 /devices/pci@1f,0/pci@1/scsi@2/sd@1,0:e
6. Write down the file ownership and permissions by issuing the ls -lL command on either the files in /dev/ or /devices (it yields the same result). For example, if you enter the command:
# ls -lL /dev/rdsk/c1t1d0s4
A message similar to the following message is displayed:
crw-r--r-- oracle dba 32,252 Nov 16 11:49 /dev/rdsk/c1t1d0s4
7. Match each cntndns device with its associated vpathNs device link name by issuing the showvpath command. Remember that vpathNs partition names use the letters a-h in the s position to indicate slices 0-7 in the corresponding cntndnsn slice names.
8. Make a copy of the existing /etc/vfstab file for recovery purposes.
9. Edit the /etc/vfstab file, changing each Oracle device link to its corresponding SDD device link.
10. For each link found in an oradata directory, re-create the link using the appropriate SDD device link as the source file instead of the associated sd device link. As you perform this step, generate a reversing shell script that can restore all the original links in case of error.
11. Write down the SDD device nodes for each SDD device link by tracing back to the link source file.
12. Change the attributes of each SDD device to match the attributes of the corresponding disk device using the chgrp and chmod commands. Issue the ls -l command on each SDD device link to verify the result.
13. Restart the server.
14. Verify that all file system and database consistency checks complete successfully.

Solaris Volume Manager (formerly Solstice DiskSuite)

Note: Sun has renamed Solstice DiskSuite to Solaris Volume Manager. The following procedure applies to Solaris Volume Manager. Depending on the DiskSuite version, the md.tab file is in either the /etc/opt/SUNWmd/ directory or the /etc/lvm/ directory.
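Steps 7 through 10 above amount to applying a cntndnsn-to-vpathNs mapping, as reported by showvpath, across configuration files such as /etc/vfstab. The following sketch shows one hypothetical way to do the bulk rewrite; the mapping and the vfstab entries are assumed examples.

```shell
# Apply an sd-to-vpath name mapping to a configuration file. Both the
# mapping and the vfstab-style input are assumed examples; a real
# mapping comes from showvpath output on the host being converted.
cat > /tmp/map.txt <<'EOF'
c1t1d0s4 vpath1e
c2t1d0s0 vpath2a
EOF
cat > /tmp/vfstab.new <<'EOF'
/dev/dsk/c1t1d0s4 /dev/rdsk/c1t1d0s4 /oracle/mp1 ufs 2 yes -
/dev/dsk/c2t1d0s0 /dev/rdsk/c2t1d0s0 /oracle/mp2 ufs 2 yes -
EOF
awk 'NR==FNR { map[$1]=$2; next }
     { for (old in map) gsub(old, map[old]); print }' /tmp/map.txt /tmp/vfstab.new
# -> /dev/dsk/vpath1e /dev/rdsk/vpath1e /oracle/mp1 ufs 2 yes -
#    /dev/dsk/vpath2a /dev/rdsk/vpath2a /oracle/mp2 ufs 2 yes -
```

Writing the rewritten output to a new file, rather than editing /etc/vfstab in place, preserves the recovery copy that step 8 calls for.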

For these procedures, you need access to the Solaris answerbook facility. These procedures were tested using Solstice DiskSuite 4.2 with the patch 106627-04 (DiskSuite patch) installed. You should have a copy of the DiskSuite Administration Guide available to complete these procedures. You must have superuser privileges to perform these procedures.

Note: SDD only supports the Solstice DiskSuite line command interface. The DiskSuite Tool (metatool) does not recognize and present SDD devices for configuration. SDD does not support the Solaris Volume Manager disk set feature, which issues SCSI 2 Reservation to storage.

Installing Solaris Volume Manager for the first time

Perform the following steps if you are installing Solaris Volume Manager on the multiport subsystem server for the first time:
1. Install SDD using the procedure in the "Installing SDD" on page 325 section, if you have not already done so.
2. Use the boot -r command to configure the SPARC server to recognize all devices over all paths.
3. Install the Solaris Volume Manager packages and the answerbook. Do not restart yet.
4. Determine which SDD vpath devices you will use to create Solaris Volume Manager metadevices. Partition these devices by selecting them in the Solaris format utility. The devices appear as vpathNs, where N is the vpath driver instance number. Use the partition submenu, just as you would for an sd device link of the form cntndn. If you want to know which cntndn links correspond to a particular SDD vpath device, enter the showvpath command and press Enter. Remember that partitions a-h of an SDD vpath device correspond to slices 0-7 of the underlying multiport subsystem device.
Note: You do not need to partition any sd (cntndn) devices.
5. Reserve at least three partitions of three cylinders each for use as Solaris Volume Manager Replica database locations. Set up the replica databases on a separate partition. This partition needs to be at least three cylinders in size. Do not use a partition that includes Sector 0 for this database replica partition.
Note: Verify that Solaris Volume Manager on the host supports replica databases on SAN devices before you set up replica databases on SDD vpath devices.
6. Perform the instructions for setting up replica databases on the vpathNs partitions, where N is the SDD vpath device instance number and s is the letter denoting the three-cylinder partition.
7. Follow the instructions in the Solaris Volume Manager Administration Guide to build the types of metadevices that you need. Use the metainit command and the /dev/(r)dsk/vpathNs device link names wherever the instructions specify /dev/(r)dsk/cntndnsn device link names.
8. Insert the setup of all vpathNs devices that are used by DiskSuite into the md.tab file.

Installing SDD on a system that already has Solstice DiskSuite in place

Perform the following steps if Solstice DiskSuite is already installed and you want to convert existing sd devices used in metadevice configuration to the corresponding SDD devices:

1. Back up all data.
2. Back up the current Solstice configuration by making a copy of the md.tab file and recording the output of the metastat and metadb -i commands. Make sure that all sd device links in use by DiskSuite are entered in the md.tab file and that they all come up properly after a restart.
3. Install SDD using the procedure in the "Installing SDD" on page 325 section, if you have not already done so. After the installation completes, enter shutdown -i6 -y -g0 and press Enter. This verifies the SDD vpath installation.
Note: Do not do a reconfiguration restart.
4. Using a plain sheet of paper, make a two-column list and match the /dev/(r)dsk/cntndnsn device links found in step 2 with the corresponding /dev/(r)dsk/vpathNs device links. Use the showvpath command to do this step.
5. Delete each replica database that is currently configured with a /dev/(r)dsk/cntndnsn device by using the metadb -d -f <device> command. Replace the replica database with the corresponding /dev/(r)dsk/vpathNs device found in step 2 by using the metadb -a <device> command.
6. Create a new md.tab file. Insert the corresponding vpathNs device link name in place of each cntndnsn device link name. Do not do this for start device partitions (vpath does not currently support these). When you are confident that the new file is correct, install it in either the /etc/opt/SUNWmd directory or the /etc/lvm directory, depending on the DiskSuite version.
7. Restart the server, or proceed to the next step if you want to avoid restarting your system.
To back out the SDD vpath in case there are any problems following step 7:
a. Reverse the procedures in step 4 to step 6, reinstalling the original md.tab in the /etc/opt/SUNWmd directory or the /etc/lvm directory, depending on the DiskSuite version.
b. Enter the pkgrm IBMsdd command.
c. Restart.
8. Stop all applications using DiskSuite, including file systems.
9. Enter the following commands for each existing metadevice:
metaclear <device>
10. Enter metainit -a to create metadevices on the /dev/(r)dsk/vpathNs devices.
11. Compare the metadevices that are created with the saved metastat output from step 2. Create any missing metadevices and reconfigure the metadevices based on the configuration information from the saved metastat output.
12. Restart your applications.

Setting up transactional volume for UFS logging on a new system

For these procedures, you need access to the Solaris answerbook facility. You must have super-user privileges to perform these procedures.

Perform the following steps if you are installing a new UFS logging file system on SDD vpath devices:
1. Install SDD using the procedure in the "Installing SDD" on page 325 section, if you have not already done so.

2. Determine which SDD vpath (vpathNs) volumes you will use as file system devices. Partition the selected SDD vpath volumes using the Solaris format utility. Be sure to create partitions for UFS logging devices as well as for UFS master devices.
3. Create file systems on the selected vpath UFS master device partitions using the newfs command.
4. Install Solaris Volume Manager, if you have not already done so.
5. Create the metatrans device using metainit. For example, assume that /dev/dsk/vpath1d is your UFS master device used in step 3, /dev/dsk/vpath1e is its corresponding log device, and d0 is the trans device that you want to create for UFS logging. Enter metainit d0 -t vpath1d vpath1e and press Enter.
6. Create mount points for each UFS logging file system that you have created using steps 3 and 5.
7. Install the file systems into the /etc/vfstab directory, specifying /dev/md/(r)dsk/d<metadevice number> for the raw and block devices. Set the mount at boot field to yes.
8. Restart your system.

Installing vpath on a system that already has transactional volume for UFS logging in place

Perform the following steps if you already have UFS logging file systems residing on a multiport subsystem and you want to use vpath partitions instead of sd partitions to access them:
1. Make a list of the DiskSuite metatrans devices for all existing UFS logging file systems by looking in the /etc/vfstab directory. Make sure that all configured metatrans devices are set up correctly in the md.tab file. If the devices are not set up now, set them up before continuing. Save a copy of the md.tab file.
2. Match the device names found in step 1 with sd device link names (files named /dev/(r)dsk/cntndnsn) using the metastat command.
3. Install SDD using the procedure in the "Installing SDD" on page 325 section, if you have not already done so.
4. Match the sd device link names found in step 2 with SDD vpath device link names (files named /dev/(r)dsk/vpathNs) by issuing the /opt/IBMsdd/bin/showvpath command.
5. Unmount all current UFS logging file systems known to reside on the multiport subsystem using the umount command.
6. Enter metaclear -a and press Enter.
7. Create new metatrans devices from the vpathNs partitions found in step 4 that correspond to the sd device links found in step 2. Remember that vpath partitions a-h correspond to sd slices 0-7. Use the metainit d<metadevice number> -t <"vpathNs" - master device> <"vpathNs" - logging device> command. Be sure to use the same metadevice numbering as you originally used with the sd partitions.
8. Edit the md.tab file to change each metatrans device entry to use vpathNs devices.
9. Restart the system.
Note: If there is a problem with a metatrans device after steps 7 and 8, restore the original md.tab file and restart the system. Review your steps and try again.


Chapter 8. Using the SDD on a Windows NT host system

This chapter provides procedures for you to install, configure, remove, and use the SDD on a Windows NT host system that is attached to an ESS device. For updated and additional information that is not included in this chapter, see the Readme file on the CD-ROM or visit the following SDD Web site:
www.ibm.com/servers/storage/support/software/sdd
Click Subsystem Device Driver.

Attention: The SDD does not support Windows NT for systems attached to DS8000 or DS6000 devices.

Verifying the hardware and software requirements

You must install the following hardware and software components to ensure that SDD installs and operates successfully.

Hardware
The following hardware components are needed:
v One or more supported storage devices
v Host system
v For ESS devices: SCSI adapters and cables
v Fibre-channel adapters and cables

Software
The following software components are needed:
v Windows NT 4.0 operating system with Service Pack 6A or later
v For ESS devices: SCSI device drivers
v Fibre-channel device drivers

Unsupported environments
SDD does not support the following environments:
v A host system with both a SCSI channel and fibre-channel connection to a shared LUN.
v Single-path mode during concurrent download of licensed machine code, or during any ESS concurrent maintenance that impacts the path attachment, such as an ESS host-bay-adapter replacement.
v I/O load balancing in a Windows NT clustering environment.
v Storing the Windows NT operating system or a paging file on an SDD-controlled multipath device (that is, SDD does not support boot from an ESS device).

However, SDD requires a minimum of two independent paths that share the same logical unit to use the load-balancing and path-failover-protection features.

ESS requirements
To successfully install SDD, ensure that your host system is configured to the ESS as an Intel processor-based PC server with Windows NT 4.0 Service Pack 6A (or later) installed.

Host system requirements
To successfully install SDD, your Windows NT host system must be an Intel processor-based system with Windows NT Version 4.0 Service Pack 6A or higher installed. The host system can be a uniprocessor or a multiprocessor system. To install all components, you must have 1 MB (MB equals approximately 1 000 000 bytes) of available disk space.

SCSI requirements
To use the SDD SCSI support on ESS devices, ensure that your host system meets the following requirements:
v No more than 32 SCSI adapters are attached.
v A SCSI cable connects each SCSI host adapter to an ESS port.
v If you need the SDD I/O load-balancing and failover features, ensure that a minimum of two SCSI adapters are installed.
Note: SDD also supports one SCSI adapter on the host system. With single-path access, concurrent download of licensed machine code is supported with SCSI devices. However, the load-balancing and failover features are not available.
For information about the SCSI adapters that can attach to your Windows NT host system, go to the following Web site:
www.ibm.com/servers/storage/support

Fibre-channel requirements
To use the SDD fibre-channel support, ensure that your host system meets the following requirements:
v No more than 32 fibre-channel adapters are attached.
v A fiber-optic cable connects each fibre-channel adapter to a supported storage device port.
v If you need the SDD I/O load-balancing and failover features, ensure that a minimum of two fibre-channel paths are configured between the host and the subsystem.
Note: If your host has only one fibre-channel adapter, it requires you to connect through a switch to multiple supported storage device ports.
For information about the fibre-channel adapters that can attach to your Windows NT host system, go to the following Web site:
www.ibm.com/servers/storage/support

Preparing for SDD installation
Before you install SDD, you must configure the supported storage device to your host system and the required fibre-channel adapters that are attached.

Configuring the ESS
Before you install SDD, configure the ESS for single-port or multiport access for each LUN. To use the load-balancing and failover features, SDD requires a minimum of two independent paths that share the same LUN. With a single path, failover protection is not provided. For information about configuring ESS, see the IBM TotalStorage Enterprise Storage Server: Introduction and Planning Guide.

Configuring fibre-channel adapters
You must configure the fibre-channel adapters that are attached to your Windows NT host system before you install SDD. Follow the adapter-specific configuration instructions to configure the adapters attached to your Windows NT host systems. SDD supports Emulex adapters with full port driver only. When you configure the Emulex adapter for multipath functions, select Allow Multiple paths to SCSI Targets in the Emulex Configuration Tool panel. See the IBM TotalStorage Enterprise Storage Server Host Systems Attachment Guide for more information about installing and configuring fibre-channel adapters for your Windows NT host systems.

Configuring SCSI adapters for ESS devices
Attention: Failure to disable the BIOS of attached nonstart devices might cause your system to attempt to start unexpectedly from a nonstart device.
Before you install and use SDD, you must configure your SCSI adapters. For SCSI adapters that attach start devices, ensure that the BIOS for the adapter is enabled. For all other adapters that attach nonstart devices, ensure that the BIOS for the adapter is disabled.
Note: When the adapter shares the SCSI bus with other adapters, the BIOS must be disabled.
SCSI adapters are not supported on DS8000 or DS6000 devices.

Installing SDD
These sections describe how to install the IBM System Storage Multipath Subsystem Device Driver.

Installing for the first time
This section describes how to install SDD. Make sure that all hardware and software requirements are met before you install the Subsystem Device Driver. See “Verifying the hardware and software requirements” on page 365 for more information. Make sure that your Windows NT host system has Service Pack 6A or higher.

Installing SDD from CD-ROM
Perform the following steps to install the SDD filter and application programs on your system:
1. Log on as the administrator user.
2. Insert the SDD installation compact disc into the CD-ROM drive.
3. Start the Windows NT Explorer program.
4. Double-click the CD-ROM drive. A list of all the installed directories on the compact disc is displayed.
5. Double-click the \winNt\IBMsdd directory.
6. Run the setup.exe program. The Installshield program starts.
7. Click Next. The Software License agreement is displayed.
8. Select I accept the terms in the License Agreement and then click Next. The User Information window opens.
9. Type your name and your company name. Click Next. The Choose Destination Location window opens.
10. Click Next. The Setup Type window opens.
11. Select the type of setup that you prefer from the following setup choices, and then click Next. The Ready to Install The Program window opens.
    Complete (recommended): Selects all options.
    Custom: Select the options that you need.
12. Click Install.
13. When the setup.exe program is finished, the Installshield Wizard Completed window opens.
14. Click Finish. The Installation program prompts you to restart your computer; you will be asked if you want to reboot.
15. Click Yes to start your computer again.
16. When you log on again, you see a Subsystem Device Driver Management entry in your Program menu containing the following files:
    a. Subsystem Device Driver Management
    b. Subsystem Device Driver manual
    c. README
Note: You can use the datapath query device command to verify the SDD installation. SDD is successfully installed if the command runs successfully.

Installing SDD from downloaded code
Use this procedure to install SDD from downloaded code:
1. Unzip the SDD code to your installation subdirectory.
2. Run the setup.exe program and follow the instructions.
3. Shut down your Windows NT host system.
4. Reconnect all cables that connect the host bus adapters and the supported storage devices if needed.
5. Change any zoning information that needs to be updated.
6. Restart your Windows NT host system.

Upgrading SDD
If you attempt to install over an existing version of SDD, the installation fails. You must uninstall any previous version of the SDD before installing a new version of SDD.
Attention: After uninstalling the previous version, you must immediately install the new version of SDD to avoid any potential data loss. If you perform a system restart before installing the new version, you might lose access to your assigned volumes.
Perform the following steps to upgrade to a newer SDD version:
1. Uninstall the previous version of SDD. (See “Uninstalling the SDD” on page 359 for instructions.)
2. Install the new version of SDD. (See “Installing SDD” on page 351 for instructions.)

Displaying the current version of the SDD
You can display the current SDD version on a Windows NT host system by viewing the sddpath.sys file properties. Perform the following steps to view the properties of the sddpath.sys file:
1. Click Start → Programs → Accessories → Windows Explorer. Windows will open Windows Explorer.
2. In Windows Explorer, go to the %SystemRoot%\system32\drivers directory, where %SystemRoot% is %SystemDrive%\winnt for Windows NT. If Windows is installed on the C: drive, %SystemDrive% is C:. If Windows is installed on the E: drive, %SystemDrive% is E:.
3. Right-click the sddpath.sys file and then click Properties. The sddpath.sys properties window opens.
4. In the sddpath.sys properties window, click Version. The file version and copyright information about the sddpath.sys file is displayed.

Configuring the SDD
To activate the SDD, you need to restart your Windows NT system after it is installed. In fact, a restart is required to activate multipath support whenever a new file system or partition is added.

Maximum number of LUNs
There is no physical limitation on the maximum number of devices that can be configured for SDD on a Windows platform; any limitation is imposed by the host operating system. A practical limit is 250 LUNs. Consult the Host Attachment Guide for your product.

Adding paths to SDD devices
Attention: Ensure that SDD is installed before you add a new path to a device. Otherwise, the Windows NT server could lose the ability to access existing data on that device.

This section contains the procedures for adding paths to SDD devices in multipath environments.

Reviewing the existing SDD configuration information
Before adding any additional hardware, review the configuration information for the adapters and devices currently on your Windows NT server. Perform the following steps to display information about the adapters and devices:
1. Click Start → Program → Subsystem Device Driver → Subsystem Device Driver Management. An MS-DOS window opens.
2. Enter datapath query adapter and press Enter. The output includes information about all the installed adapters. In the example shown in the following output, one host bus adapter has 10 active paths:

Active Adapters :1
Adpt#  Adapter Name     State    Mode    Select  Errors  Paths  Active
    0  Scsi Port6 Bus0  NORMAL   ACTIVE     542       0     10      10

3. Enter datapath query device and press Enter. In the following example, SDD displays 10 devices. There are five physical drives, and one partition has been assigned on each drive for this configuration. Each SDD device reflects a partition that has been created for a physical drive. Partition 0 stores information about the drive; the operating system masks this partition from the user, but it still exists. Verify that the number of adapters and the number of paths to each supported storage device volume match the known configuration.
Note: In a stand-alone environment, the policy field is optimized. In a cluster environment, the policy field is changed to reserved when a LUN becomes a cluster resource.

Total Devices : 10

DEV#: 0 DEVICE NAME: Disk2 Part0 TYPE: 2105E20 POLICY: OPTIMIZED
SERIAL: 02B12028
=====================================================================
Path#    Adapter/Hard Disk            State   Mode     Select  Errors
    0    Scsi Port6 Bus0/Disk2 Part0  OPEN    NORMAL       14       0

DEV#: 1 DEVICE NAME: Disk2 Part1 TYPE: 2105E20 POLICY: OPTIMIZED
SERIAL: 02B12028
=====================================================================
Path#    Adapter/Hard Disk            State   Mode     Select  Errors
    0    Scsi Port6 Bus0/Disk2 Part1  OPEN    NORMAL       94       0

DEV#: 2 DEVICE NAME: Disk3 Part0 TYPE: 2105E20 POLICY: OPTIMIZED
SERIAL: 02C12028
=====================================================================
Path#    Adapter/Hard Disk            State   Mode     Select  Errors
    0    Scsi Port6 Bus0/Disk3 Part0  OPEN    NORMAL       16       0

DEV#: 3 DEVICE NAME: Disk3 Part1 TYPE: 2105E20 POLICY: OPTIMIZED
SERIAL: 02C12028
=====================================================================
Path#    Adapter/Hard Disk            State   Mode     Select  Errors
    0    Scsi Port6 Bus0/Disk3 Part1  OPEN    NORMAL       94       0

DEV#: 4 DEVICE NAME: Disk4 Part0 TYPE: 2105E20 POLICY: OPTIMIZED
SERIAL: 02D12028
=====================================================================
Path#    Adapter/Hard Disk            State   Mode     Select  Errors
    0    Scsi Port6 Bus0/Disk4 Part0  OPEN    NORMAL       14       0

DEV#: 5 DEVICE NAME: Disk4 Part1 TYPE: 2105E20 POLICY: OPTIMIZED
SERIAL: 02D22028
=====================================================================
Path#    Adapter/Hard Disk            State   Mode     Select  Errors
    0    Scsi Port6 Bus0/Disk4 Part1  OPEN    NORMAL       94       0

DEV#: 6 DEVICE NAME: Disk5 Part0 TYPE: 2105E20 POLICY: OPTIMIZED
SERIAL: 02E12028
=====================================================================
Path#    Adapter/Hard Disk            State   Mode     Select  Errors
    0    Scsi Port6 Bus0/Disk5 Part0  OPEN    NORMAL       14       0

DEV#: 7 DEVICE NAME: Disk5 Part1 TYPE: 2105E20 POLICY: OPTIMIZED
SERIAL: 02E12028
=====================================================================
Path#    Adapter/Hard Disk            State   Mode     Select  Errors
    0    Scsi Port6 Bus0/Disk5 Part1  OPEN    NORMAL       94       0

DEV#: 8 DEVICE NAME: Disk6 Part0 TYPE: 2105E20 POLICY: OPTIMIZED
SERIAL: 02F12028
=====================================================================
Path#    Adapter/Hard Disk            State   Mode     Select  Errors
    0    Scsi Port6 Bus0/Disk6 Part0  OPEN    NORMAL       14       0

DEV#: 9 DEVICE NAME: Disk6 Part1 TYPE: 2105E20 POLICY: OPTIMIZED
SERIAL: 02F12028
=====================================================================
Path#    Adapter/Hard Disk            State   Mode     Select  Errors
    0    Scsi Port6 Bus0/Disk6 Part1  OPEN    NORMAL       94       0

Installing and configuring additional paths
Perform the following steps to install and configure additional paths:
1. Install any additional hardware on the Windows NT server.
2. Install any additional hardware on the supported storage device.
3. Configure the new paths to the server.
4. Restart the Windows NT server. Restarting will ensure correct multipath access to both existing and new storage and to your Windows NT server.
5. Verify that the path is added correctly. See “Verifying additional paths are installed correctly.”

Verifying additional paths are installed correctly
After installing additional paths to SDD devices, verify the following conditions:
v All additional paths have been installed correctly.
v The number of adapters and the number of paths to each storage volume match the updated configuration.
v The Windows disk numbers of all primary paths are labeled as path #0.
Perform the following steps to verify that the additional paths have been installed correctly:
1. Click Start → Program → Subsystem Device Driver → Subsystem Device Driver Management. An MS-DOS window opens.
2. Type datapath query adapter and press Enter. The output includes information about any additional adapters that were installed. In the example shown in the following output, an additional path is installed to the previous configuration:

Active Adapters :2
Adpt#  Adapter Name     State    Mode    Select  Errors  Paths  Active
    0  Scsi Port6 Bus0  NORMAL   ACTIVE     188       0     10      10
    1  Scsi Port7 Bus0  NORMAL   ACTIVE     204       0     10      10

3. Type datapath query device and press Enter. The output includes information about any additional devices that were installed. In the example shown in the following output, the output includes information about the new host bus adapter that was assigned:

Total Devices : 10

DEV#: 0 DEVICE NAME: Disk2 Part0 TYPE: 2105E20 POLICY: OPTIMIZED
SERIAL: 02B12028
=====================================================================
Path#    Adapter/Hard Disk             State   Mode     Select  Errors
    0    Scsi Port6 Bus0/Disk2 Part0   OPEN    NORMAL        5       0
    1    Scsi Port7 Bus0/Disk7 Part0   OPEN    NORMAL        9       0

DEV#: 1 DEVICE NAME: Disk2 Part1 TYPE: 2105E20 POLICY: OPTIMIZED
SERIAL: 02B12028
=====================================================================
Path#    Adapter/Hard Disk             State   Mode     Select  Errors
    0    Scsi Port6 Bus0/Disk2 Part1   OPEN    NORMAL       32       0
    1    Scsi Port7 Bus0/Disk7 Part1   OPEN    NORMAL       32       0

DEV#: 2 DEVICE NAME: Disk3 Part0 TYPE: 2105E20 POLICY: OPTIMIZED
SERIAL: 02C12028
=====================================================================
Path#    Adapter/Hard Disk             State   Mode     Select  Errors
    0    Scsi Port6 Bus0/Disk3 Part0   OPEN    NORMAL        7       0
    1    Scsi Port7 Bus0/Disk8 Part0   OPEN    NORMAL        9       0

DEV#: 3 DEVICE NAME: Disk3 Part1 TYPE: 2105E20 POLICY: OPTIMIZED
SERIAL: 02C22028
=====================================================================
Path#    Adapter/Hard Disk             State   Mode     Select  Errors
    0    Scsi Port6 Bus0/Disk3 Part1   OPEN    NORMAL       28       0
    1    Scsi Port7 Bus0/Disk8 Part1   OPEN    NORMAL       36       0

DEV#: 4 DEVICE NAME: Disk4 Part0 TYPE: 2105E20 POLICY: OPTIMIZED
SERIAL: 02D12028
=====================================================================
Path#    Adapter/Hard Disk             State   Mode     Select  Errors
    0    Scsi Port6 Bus0/Disk4 Part0   OPEN    NORMAL        8       0
    1    Scsi Port7 Bus0/Disk9 Part0   OPEN    NORMAL        6       0

DEV#: 5 DEVICE NAME: Disk4 Part1 TYPE: 2105E20 POLICY: OPTIMIZED
SERIAL: 02D22028
=====================================================================
Path#    Adapter/Hard Disk             State   Mode     Select  Errors
    0    Scsi Port6 Bus0/Disk4 Part1   OPEN    NORMAL       35       0
    1    Scsi Port7 Bus0/Disk9 Part1   OPEN    NORMAL       29       0

DEV#: 6 DEVICE NAME: Disk5 Part0 TYPE: 2105E20 POLICY: OPTIMIZED
SERIAL: 02E12028
=====================================================================
Path#    Adapter/Hard Disk             State   Mode     Select  Errors
    0    Scsi Port6 Bus0/Disk5 Part0   OPEN    NORMAL        6       0
    1    Scsi Port7 Bus0/Disk10 Part0  OPEN    NORMAL        8       0

DEV#: 7 DEVICE NAME: Disk5 Part1 TYPE: 2105E20 POLICY: OPTIMIZED
SERIAL: 02E22028
=====================================================================
Path#    Adapter/Hard Disk             State   Mode     Select  Errors
    0    Scsi Port6 Bus0/Disk5 Part1   OPEN    NORMAL       24       0
    1    Scsi Port7 Bus0/Disk10 Part1  OPEN    NORMAL       40       0
DEV#: 8 DEVICE NAME: Disk6 Part0 TYPE: 2105E20 POLICY: OPTIMIZED
SERIAL: 02F12028
=====================================================================
Path#    Adapter/Hard Disk             State   Mode     Select  Errors
    0    Scsi Port6 Bus0/Disk6 Part0   OPEN    NORMAL        8       0
    1    Scsi Port7 Bus0/Disk11 Part0  OPEN    NORMAL        6       0

DEV#: 9 DEVICE NAME: Disk6 Part1 TYPE: 2105E20 POLICY: OPTIMIZED
SERIAL: 02F22028
=====================================================================
Path#    Adapter/Hard Disk             State   Mode     Select  Errors
    0    Scsi Port6 Bus0/Disk6 Part1   OPEN    NORMAL       35       0
    1    Scsi Port7 Bus0/Disk11 Part1  OPEN    NORMAL       29       0

The definitive way to identify unique volumes on the storage subsystem is by the serial number displayed. The volume appears at the SCSI level as multiple disks (more properly, Adapter/Bus/ID/LUN), but it is the same volume on the ESS. The previous example shows two paths to each partition (path 0: Scsi Port6 Bus0/Disk2, and path 1: Scsi Port7 Bus0/Disk7). The example shows partition 0 (Part0) for each of the devices. This partition stores information about the Windows partition on the drive. The operating system masks this partition from the user, but it still exists. In general, you will see one more partition from the output of the datapath query device command than what is being displayed from the Disk Administrator application.

Adding or modifying a multipath storage configuration to the supported storage device
This section contains the procedures for adding new storage to an existing configuration in multipath environments.

Reviewing the existing SDD configuration information
Before adding any additional hardware, review the configuration information for the adapters and devices currently on your Windows NT server. Perform the following steps to display information about the adapters and devices:
1. Click Start → Program → Subsystem Device Driver → Subsystem Device Driver Management. An MS-DOS window opens.
2. Enter datapath query adapter and press Enter. The output includes information about all the installed adapters. In the example shown in the following output, two host bus adapters are installed on the Windows NT host server:

Active Adapters :2
Adpt#  Adapter Name     State    Mode    Select  Errors  Paths  Active
    0  Scsi Port6 Bus0  NORMAL   ACTIVE     188       0     10      10
    1  Scsi Port7 Bus0  NORMAL   ACTIVE     204       0     10      10

3. Enter datapath query device and press Enter. Verify that the number of adapters and the number of paths to each supported storage device volume match the known configuration. In the following example output from an ESS device, four devices are attached to the SCSI path:

Total Devices : 2

DEV#: 0 DEVICE NAME: Disk2 Part0 TYPE: 2105E20 POLICY: OPTIMIZED
SERIAL: 02B12028
===========================================================================
Path#    Adapter/Hard Disk             State   Mode     Select    Errors
    0    Scsi Port5 Bus0/Disk2 Part0   OPEN    NORMAL          4       0
    1    Scsi Port5 Bus0/Disk8 Part0   OPEN    NORMAL          7       0

DEV#: 1 DEVICE NAME: Disk2 Part1 TYPE: 2105E20 POLICY: OPTIMIZED
SERIAL: 02B12028
===========================================================================
Path#    Adapter/Hard Disk             State   Mode     Select    Errors
    0    Scsi Port5 Bus0/Disk2 Part1   OPEN    NORMAL   14792670       0
    1    Scsi Port5 Bus0/Disk8 Part1   OPEN    NORMAL   14799942       0
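The datapath query adapter report has a fixed column layout (Adpt#, Adapter Name, State, Mode, Select, Errors, Paths, Active), so the review in step 2 can also be scripted. The following Python sketch is illustrative only — it is not part of SDD — and parses sample output copied from this chapter to confirm that each adapter is NORMAL/ACTIVE and that all of its paths are active:

```python
import re

# Sample "datapath query adapter" output, copied from this chapter.
SAMPLE = """Active Adapters :2
Adpt#  Adapter Name     State    Mode    Select  Errors  Paths  Active
    0  Scsi Port6 Bus0  NORMAL   ACTIVE     188       0     10      10
    1  Scsi Port7 Bus0  NORMAL   ACTIVE     204       0     10      10
"""

def parse_adapters(text):
    """Return one dict per adapter line of a datapath query adapter report."""
    adapters = []
    for line in text.splitlines():
        m = re.match(
            r"\s*(\d+)\s+(.+?)\s+(NORMAL|DEGRADED|FAILED)\s+(ACTIVE|OFFLINE)"
            r"\s+(\d+)\s+(\d+)\s+(\d+)\s+(\d+)\s*$",
            line,
        )
        if m:
            adapters.append({
                "adpt": int(m.group(1)),
                "name": m.group(2),
                "state": m.group(3),
                "mode": m.group(4),
                "select": int(m.group(5)),
                "errors": int(m.group(6)),
                "paths": int(m.group(7)),
                "active": int(m.group(8)),
            })
    return adapters

for a in parse_adapters(SAMPLE):
    ok = a["state"] == "NORMAL" and a["mode"] == "ACTIVE" and a["paths"] == a["active"]
    print(f'{a["name"]}: {a["paths"]} paths, {a["active"]} active -> {"OK" if ok else "CHECK"}')
```

The state and mode vocabularies in the pattern are assumptions drawn from the values this guide shows; extend them to whatever your SDD level actually reports.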

Adding new storage to an existing configuration
Perform the following steps to install additional storage:
1. Install any additional hardware to the supported storage device.
2. Configure the new storage to the server.
3. Restart the Windows NT server. Restarting will ensure correct multipath access to both existing and new storage and to your Windows NT server.
4. Verify that the new storage is added correctly. See “Verifying that new storage is installed correctly.”

Verifying that new storage is installed correctly
After adding new storage to an existing configuration, you should verify the following conditions:
v The new storage is correctly installed and configured.
v The number of adapters and the number of paths to each ESS volume match the updated configuration.
v The Windows disk numbers of all primary paths are labeled as path #0.
Perform the following steps to verify that the additional storage has been installed correctly:
1. Click Start → Program → Subsystem Device Driver → Subsystem Device Driver Management. An MS-DOS window opens.
2. Enter datapath query adapter and press Enter. The output includes information about all the installed adapters. In the example shown in the following output, two SCSI adapters are installed on the Windows NT host server:

Active Adapters :2
Adpt#  Adapter Name     State    Mode    Select  Errors  Paths  Active
    0  Scsi Port6 Bus0  NORMAL   ACTIVE     295       0     16      16
    1  Scsi Port7 Bus0  NORMAL   ACTIVE     329       0     16      16

3. Enter datapath query device and press Enter. The output includes information about any additional devices that were installed. In the following example output from an ESS device, the output includes information about the new devices that were assigned:

Total Devices : 2

DEV#: 0 DEVICE NAME: Disk2 Part0 TYPE: 2105E20 POLICY: OPTIMIZED
SERIAL: 02B12028
===========================================================================
Path#    Adapter/Hard Disk             State   Mode     Select    Errors
    0    Scsi Port5 Bus0/Disk2 Part0   OPEN    NORMAL          4       0
    1    Scsi Port5 Bus0/Disk8 Part0   OPEN    NORMAL          7       0
    2    Scsi Port6 Bus0/Disk14 Part0  OPEN    NORMAL          6       0
    3    Scsi Port6 Bus0/Disk20 Part0  OPEN    NORMAL          5       0

DEV#: 1 DEVICE NAME: Disk2 Part1 TYPE: 2105E20 POLICY: OPTIMIZED
SERIAL: 02B12028
===========================================================================
Path#    Adapter/Hard Disk             State   Mode     Select    Errors
    0    Scsi Port5 Bus0/Disk2 Part1   OPEN    NORMAL   14792670       0
    1    Scsi Port5 Bus0/Disk8 Part1   OPEN    NORMAL   14799942       0
    2    Scsi Port6 Bus0/Disk14 Part1  OPEN    NORMAL   14926972       0
    3    Scsi Port6 Bus0/Disk20 Part1  OPEN    NORMAL   14931115       0
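Because the SERIAL field — not the Windows disk number — identifies a unique volume, a small script can tally how many paths serve each serial number and flag any volume left without the two paths needed for failover protection. This Python sketch is illustrative only (not part of SDD) and parses abbreviated sample output in the format shown in this chapter:

```python
import re
from collections import defaultdict

# Abbreviated "datapath query device" output in the format shown in this chapter.
SAMPLE = """Total Devices : 2
DEV#: 0 DEVICE NAME: Disk2 Part0 TYPE: 2105E20 POLICY: OPTIMIZED
SERIAL: 02B12028
=====================================================================
Path#    Adapter/Hard Disk            State  Mode    Select  Errors
    0    Scsi Port5 Bus0/Disk2 Part0  OPEN   NORMAL       4       0
    1    Scsi Port5 Bus0/Disk8 Part0  OPEN   NORMAL       7       0
DEV#: 1 DEVICE NAME: Disk3 Part0 TYPE: 2105E20 POLICY: OPTIMIZED
SERIAL: 02C12028
=====================================================================
Path#    Adapter/Hard Disk            State  Mode    Select  Errors
    0    Scsi Port5 Bus0/Disk3 Part0  OPEN   NORMAL      16       0
"""

def paths_per_serial(text):
    """Map each volume serial number to the number of path lines listed under it."""
    counts = defaultdict(int)
    serial = None
    for line in text.splitlines():
        m = re.match(r"SERIAL:\s*(\S+)", line)
        if m:
            serial = m.group(1)
            continue
        # A path line starts with a path index and names an Adapter/Hard Disk pair.
        if serial and re.match(r"\s*\d+\s+\S+.*/", line):
            counts[serial] += 1
    return dict(counts)

for serial, n in paths_per_serial(SAMPLE).items():
    note = "" if n >= 2 else "  <- no failover protection"
    print(f"serial {serial}: {n} path(s){note}")
```

A result of one path for a serial number is exactly the single-path situation this chapter warns about: the device works, but load balancing and failover are unavailable.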

The definitive way to identify unique volumes on the ESS device is by the serial number displayed. The volume appears at the SCSI level as multiple disks (more properly, Adapter/Bus/ID/LUN), but it is the same volume on the ESS. The previous example shows two paths to each partition (path 0: Scsi Port6 Bus0/Disk2, and path 1: Scsi Port7 Bus0/Disk10). The example shows partition 0 (Part0) for each device. This partition stores information about the Windows partition on the drive. The operating system masks this partition from the user, but it still exists. In general, you will see one more partition from the output of the datapath query device command than what is being displayed in the Disk Administrator application.

Uninstalling the SDD
Perform the following steps to uninstall SDD on a Windows NT host system:
1. Log on as the administrator user.
2. Click Start → Settings → Control Panel. The Control Panel window opens.
3. Double-click Add/Remove Programs. The Add/Remove Programs window opens.
4. In the Add/Remove Programs window, select Subsystem Device Driver from the Currently installed programs selection list.
5. Click Add/Remove.
Attention:
v After uninstalling the previous version, you must immediately install the new version of SDD to avoid any potential data loss. (See “Installing SDD” on page 351 for instructions.)
v If you perform a system restart and accidentally overwrite the disk signature, you may permanently lose access to your assigned volume. If you do not plan to install the new version of SDD immediately, you need to remove the multipath access to your shared volume. For additional information, see the Multiple-Path Software May Cause Disk Signature to Change Microsoft article (Knowledge Base Article Number Q293778). This article can be found at the following Web site: http://support.microsoft.com

Using high-availability clustering on an ESS
The following items are required to support the Windows NT operating system on an ESS in a clustering environment:
v SDD 1.2.1 or later
v Windows NT 4.0 Enterprise Edition with Service Pack 6A
v Microsoft hotfix Q305638 for the clustering environment
Note: SDD does not support I/O load balancing in a Windows NT clustering environment.

Special considerations in the high-availability clustering environment
There are subtle differences in the way that SDD handles path reclamation in a Windows NT clustering environment compared to a nonclustering environment. When the Windows NT server loses a path in a nonclustering environment, the path condition changes from open to dead and the adapter condition changes from active to degraded. The adapter and path condition will not change until the path is made operational again. When the Windows NT server loses a path in a clustering environment, the path condition changes from open to dead and the adapter condition changes from active to degraded. However, after a period of time, the path condition changes back to open and the adapter condition changes back to normal, even if the path has not been made operational again. The datapath set adapter # offline command operates differently in a clustering environment as compared to a nonclustering environment. In a clustering environment, the datapath set adapter offline command does not change the condition of the path if the path is active or being reserved. If you issue the command, the following message is displayed: to preserve access some paths left online.
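The contrast described above can be summarized as a toy state model — purely illustrative, not SDD code. In both environments a lost path goes from open to dead and its adapter from active to degraded; only in a clustering environment do the conditions revert after a period of time, even if the path was never repaired:

```python
def reclaim(path_state, adapter_state, clustered, reclamation_period_elapsed):
    """Toy model of the path-reclamation behavior described in this section."""
    # In both environments, losing a path changes it from OPEN to DEAD and
    # its adapter from NORMAL to DEGRADED (that transition happens elsewhere).
    if path_state == "DEAD" and clustered and reclamation_period_elapsed:
        # Clustering only: after a period of time the conditions change back,
        # even if the path has not been made operational again.
        return "OPEN", "NORMAL"
    return path_state, adapter_state

# Nonclustering: the conditions do not change until the path is operational again.
print(reclaim("DEAD", "DEGRADED", clustered=False, reclamation_period_elapsed=True))
# Clustering: the path reverts to open and the adapter to normal.
print(reclaim("DEAD", "DEGRADED", clustered=True, reclamation_period_elapsed=True))
```

The state names mirror the datapath query output columns; the reclamation timer itself is internal to SDD and is only represented here by a boolean.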

Configuring a Windows NT cluster with the SDD installed
The following variables are used in this procedure: server_1 represents the first server with two host bus adapters (HBAs). server_2 represents the second server with two HBAs. hba_a represents the first HBA for server_1. hba_b represents the second HBA for server_1. hba_c represents the first HBA for server_2. hba_d represents the second HBA for server_2. Perform the following steps to configure a Windows NT cluster with the SDD: 1. Configure LUNs on the ESS as shared for all HBAs on both server_1 and server_2. 2. Connect hba_a to the ESS, and restart server_1. 3. Click Start → Programs → Administrative Tools → Disk Administrator. The Disk Administrator is displayed. Use the Disk Administrator to verify the number of LUNs that are connected to server_1. The operating system recognizes each additional path to the same LUN as a device. 4. Disconnect hba_a and connect hba_b to the ESS. Restart server_1. 5. Click Start → Programs → Administrative Tools → Disk Administrator. The Disk Administrator is displayed. Use the Disk Administrator to verify the number of LUNs that are connected to server_1. If the number of LUNs that are connected to server_1 is correct, proceed to 6 on page 361. If the number of LUNs that are connected to server_1 is incorrect, perform the following steps:


a. Verify that the cable for hba_b is connected to the ESS. b. Verify that your LUN configuration on the ESS is correct. c. Repeat steps 2 through 5 on page 360. 6. Install SDD on server_1, and restart server_1. For installation instructions, go to “Installing SDD” on page 351. 7. Connect hba_c to the ESS, and restart server_2. 8. Click Start → Programs → Administrative Tools → Disk Administrator. The Disk Administrator is displayed. Use the Disk Administrator to verify the number of LUNs that are connected to server_2. The operating system recognizes each additional path to the same LUN as a device. 9. Disconnect hba_c and connect hba_d to the ESS. Restart server_2. 10. Click Start → Programs → Administrative Tools → Disk Administrator. The Disk Administrator is displayed. Use the Disk Administrator to verify that the correct number of LUNs are connected to server_2. If the number of LUNs that are connected to server_2 is correct, proceed to 11. If the number of LUNs that are connected to server_2 is incorrect, perform the following steps: a. Verify that the cable for hba_d is connected to the ESS. b. Verify your LUN configuration on the ESS. c. Repeat steps 7 through 10. 11. Install SDD on server_2, and restart server_2. For installation instructions, go to “Installing SDD” on page 351. 12. Connect both hba_c and hba_d on server_2 to the ESS, and restart server_2. 13. Use the datapath query adapter and datapath query device commands to verify the number of LUNs and paths on server_2. 14. Click Start → Programs → Administrative Tools → Disk Administrator. The Disk Administrator is displayed. Use the Disk Administrator to verify the number of LUNs as online devices. You also need to verify that all additional paths are shown as offline devices. 15. Format the raw devices with NTFS. Make sure to keep track of the assigned drive letters on server_2. 16. Connect both hba_a and hba_b on server_1 to the ESS, and restart server_1. 17. 
Use the datapath query adapter and datapath query device commands to verify the correct number of LUNs and paths on server_1. Verify that the assigned drive letters on server_1 match the assigned drive letters on server_2. 18. Restart server_2. v Install the Microsoft Cluster Server (MSCS) software on server_1. When server_1 is up, install Service Pack 6A (or later) to server_1, and restart server_1. Then install hotfix Q305638 and restart server_1 again. v Install the MSCS software on server_2. When server_2 is up, install Service Pack 6A (or later) to server_2, and restart server_2. Then install hotfix Q305638 and restart server_2 again. 19. Use the datapath query adapter and datapath query device commands to verify the correct number of LUNs and paths on server_1 and server_2. (This step is optional.)
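Steps 15 and 17 above depend on the drive letters matching between the two servers. If you record each server's assignments as a map from volume serial number to drive letter (a hypothetical record you keep by hand — SDD does not produce it), comparing the two records is straightforward:

```python
# Hypothetical hand-kept records (volume serial number -> assigned drive letter)
# from steps 15 and 17; the serial numbers and letters are made up for illustration.
server_1 = {"02B12028": "F:", "02C12028": "G:", "02D12028": "H:"}
server_2 = {"02B12028": "F:", "02C12028": "G:", "02D12028": "J:"}

# Collect every volume whose letter differs (or is missing) on server_2.
mismatches = {
    serial: (server_1[serial], server_2.get(serial))
    for serial in server_1
    if server_1[serial] != server_2.get(serial)
}
for serial, (letter_1, letter_2) in sorted(mismatches.items()):
    print(f"serial {serial}: server_1 assigned {letter_1}, server_2 assigned {letter_2}")
```

Keying the comparison on the serial number rather than the Windows disk number follows the advice elsewhere in this chapter: the serial number is the definitive identifier for a volume.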

Chapter 8. Using the SDD on a Windows NT host system


Note: You can use the datapath query adapter and datapath query device commands to show all the physical volumes and logical volumes for the host server. The secondary server shows only the physical volumes and the logical volumes that it owns.

Making the MoveGroup Service startup type automatic
The MoveGroup Service is shipped with SDD 1.3.4.4 or later for Windows NT to enable access to the cluster resources when a movegroup is performed and the primary path is disabled in a cluster environment. The default startup type of the MoveGroup Service is manual. For the service to take effect when a node is restarted, the startup type needs to be automatic. You can change the startup type to automatic as follows: 1. Click Start → Settings → Control Panel → Services → SDD MoveGroup Service. 2. Change Startup type to Automatic. 3. Click OK. After the startup type of MoveGroup Service is changed to Automatic, a movegroup of all cluster resources will be performed when a node of the NT cluster is restarted. Note: The startup type of the MoveGroup Service should be the same for both cluster nodes.

SDD server daemon
The SDD server (also referred to as sddsrv) is an integrated component of SDD 1.3.4.x (or later). This component consists of a Windows application daemon that is installed in addition to the SDD device driver. See Chapter 12, “Using the SDD server and the SDDPCM server,” on page 415 for more information about sddsrv.

Verifying that the SDD server has started
After you have installed SDD, verify that the SDD server (sddsrv) has automatically started: 1. Click Start → Settings → Control Panel. 2. Double-click Services. 3. Look for SDD_Service. The status of SDD_Service should be Started if the SDD server has automatically started.

Starting the SDD server manually
If the SDD server did not start automatically after you performed the SDD installation, you can start sddsrv: 1. Click Start → Settings → Control Panel. 2. Double-click Services. 3. Select SDD_Service. 4. Click Start.

Changing to a different port number for the SDD server
To change to a different port number for the SDD server, see the instructions in “Changing the sddsrv or pcmsrv TCP/IP port number” on page 419.


Stopping the SDD server
You can stop the SDD server by performing the following steps: v Click Start → Settings → Control Panel. v Double-click Services. v Select SDD_Service. v Click Stop.


Chapter 9. Using the SDD on a Windows 2000 host system
This chapter provides procedures for you to install, configure, remove, and use the SDD on a Windows 2000 host system that is attached to a supported storage device. For updated and additional information not included in this chapter, see the Readme file on the CD-ROM or visit the SDD Web site at: www.ibm.com/servers/storage/support/software/sdd

Verifying the hardware and software requirements
You must have the following hardware and software components in order to install SDD: Hardware The following hardware components are needed: v One or more supported storage devices v Host system v SCSI adapters and cables (ESS) v Fibre-channel adapters and cables Software The following software components are needed: v Windows 2000 operating system with Service Pack 4 v SCSI device drivers (ESS devices) v Fibre-channel device drivers

Unsupported environments
SDD does not support the following environments: v DS8000 and DS6000 devices with SCSI connectivity. v A host system with both a SCSI channel and a fibre-channel connection to a shared LUN. v Single-path mode during concurrent download of licensed machine code or during any ESS-concurrent maintenance that impacts the path attachment, such as an ESS host-bay-adapter replacement. v The HBA Symbios SYM8751D; support has been withdrawn starting with ESS Model 800 and SDD 1.3.3.3.

Supported storage device requirements
Before you can install the SDD, you must configure the following devices. Ensure that the supported storage devices are configured: v IBM 2105xxx, for ESS devices v IBM 2107xxx, for DS8000 devices v IBM 1750xxx, for DS6000 devices v IBM 2145, for SAN Volume Controller

© Copyright IBM Corp. 1999, 2010


where xxx represents the disk storage system model number. To successfully install SDD on your virtualization product, ensure that you configure the virtualization product devices as fibre-channel devices attached to the virtualization product on your Windows 2000 host system.

Host system requirements
To successfully install SDD, your Windows 2000 host system must be an Intel-based system with Windows 2000 Service Pack 4 installed. The host system can be a uniprocessor or a multiprocessor system. To install all components, you must have at least 1 MB (MB equals approximately 1 000 000 bytes) of disk space available on the drive where Windows 2000 is installed. SCSI is not supported on SAN Volume Controller, DS8000, or DS6000. To use the SDD SCSI support, ensure that your host system meets the following requirements: v No more than 32 SCSI adapters are attached. v A SCSI cable connects each SCSI host adapter to an ESS port. v If you need the SDD I/O load-balancing and failover features, ensure that a minimum of two independent paths are configured between the host and the subsystem. Note: SDD also supports one SCSI adapter on the host system. With single-path access, concurrent download of licensed machine code is supported with SCSI devices. However, the load-balancing and failover features are not available. v For information about the SCSI adapters that can attach to your Windows 2000 host system, go to the following Web site: www.ibm.com/servers/storage/support

Fibre-channel requirements
To use the SDD fibre-channel support, ensure that your host system meets the following requirements: v Depending on the fabric and supported storage configuration, the number of fibre-channel adapters attached should be less than or equal to 32 / (n * m), where n is the number of supported storage ports and m is the number of paths that have access to the supported storage device from the fabric. v A fiber-optic cable connects each fibre-channel adapter to a supported storage port or a fabric. v If you need the SDD I/O load-balancing and failover features, ensure that a minimum of two independent fibre-channel paths are installed. Note: You should have at least two fibre-channel adapters to prevent data loss due to adapter hardware failure. For information about the fibre-channel adapters that can attach to your Windows 2000 host system, go to the following Web site: www.ibm.com/servers/storage/support
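The adapter-count rule above can be sketched as a small calculation. This is an illustrative sketch only; the function name and the example values (4 storage ports, 2 paths) are assumptions, not figures from this guide:

```python
# Illustrative sketch of the fibre-channel adapter limit described above:
# number of adapters <= 32 / (n * m), where n is the number of supported
# storage ports and m is the number of paths to the device from the fabric.

def max_fc_adapters(storage_ports: int, paths_per_device: int) -> int:
    """Return the maximum number of fibre-channel adapters allowed."""
    return 32 // (storage_ports * paths_per_device)

# Example: 4 storage ports and 2 paths per device allow at most 4 adapters.
print(max_fc_adapters(4, 2))  # 4
```

With a single storage port and a single path, the formula allows the full complement of 32 adapters.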


Preparing for an SDD installation
Before you install the SDD, you must perform the following steps: 1. Ensure that all hardware and software requirements are met before you install SDD. See “Verifying the hardware and software requirements” on page 365 for more information. 2. Configure the supported storage device to your host system. See “Configuring the supported storage device” for more information. 3. Configure the fibre-channel adapters that are attached to your Windows 2000 host system. See “Configuring fibre-channel adapters” for more information. 4. Configure the SCSI adapters that are attached to your Windows 2000 host system. See “Configuring SCSI adapters for ESS devices” for more information.

Configuring the supported storage device
Before you install SDD, configure your supported storage device for single-port or multiport access for each LUN. SDD requires a minimum of two independent paths that share the same logical unit to use the load-balancing and failover features. For information about configuring your supported storage device, see the Introduction and Planning Guide or Configuration Guide for your product.

Configuring fibre-channel adapters
You must configure the fibre-channel adapters that are attached to your Windows 2000 host system before you install SDD. Follow the adapter-specific configuration instructions to configure the adapters attached to your Windows 2000 host systems. To get the latest recommendation for host adapter settings for ESS disk storage system, see the Enterprise Storage Server interoperability matrix at the following Web site: www-03.ibm.com/systems/storage/disk/ess/interop.html To get the latest recommendation for host adapter settings for the SAN Volume Controller, see the IBM System Storage SAN Volume Controller Host Systems Attachment Guide and the following Web site: www.ibm.com/storage/support/2145/ To get the latest recommendation for host adapter settings for DS disk storage system, see the HBA interoperability search tool at the following Web site: http://www-03.ibm.com/servers/storage/support/config/hba/index.wss . Note: SDD supports the Emulex HBA with full-port driver. When you configure the Emulex HBA for multipath functions, select Allow Multiple Paths to SCSI Targets in the Emulex Configuration Tool panel.

Configuring SCSI adapters for ESS devices
Attention: Failure to disable the BIOS of attached nonstart devices may cause your system to attempt to restart from an unexpected nonstart device.


Before you install and use SDD, you must configure your SCSI adapters. For SCSI adapters that are attached to start devices, ensure that the BIOS for the adapter is enabled. For all other adapters that are attached to nonstart devices, ensure that the BIOS for the adapter is disabled. Note: When the adapter shares the SCSI bus with other adapters, the BIOS must be disabled.

Installing SDD
The following section describes how to install SDD on your system.

Installing for the first time
Attention: Ensure that there is a single-path connection from the system to the storage device before SDD is installed. Ensure that SDD is installed before adding additional paths to a device. Otherwise, the Windows 2000 server could lose the ability to access existing data on that device.

Installing SDD from CD-ROM
Perform the following steps to install SDD on your system:
1. Log on as the administrator user.
2. To install from CD-ROM:
a. Insert the SDD installation CD-ROM into the selected drive.
b. Start the Windows 2000 Explorer program.
c. Double-click the CD-ROM drive. A list of all the installed directories on the compact disc is displayed.
d. Double-click the \win2k\IBMsdd directory.
3. Run the setup.exe program. Tip: The setup.exe program offers the following command line options for silent install:
--> setup -s : silent install --> setup -s -n : silent install; no reboot (requires SDD 1.6.0.4 or later)

4. When the setup.exe program is finished, you will be asked if you want to reboot. If you answer y, the setup.exe program restarts your Windows 2000 system immediately. Follow the instructions to restart. Otherwise, the setup.exe program exits, and you need to manually restart your Windows 2000 system to activate the new installation. 5. Shut down your Windows 2000 host system. 6. Reconnect all cables that connect the host bus adapters and the supported storage devices if needed. 7. Change any zoning information that needs to be updated. 8. Restart your Windows 2000 host system. After completing the installation procedures and when you log on again, your Program menu will include a Subsystem Device Driver entry containing the following selections: 1. Subsystem Device Driver management 2. SDD Technical Support Web site 3. README


Notes: 1. You can verify that SDD has been successfully installed by issuing the datapath query device command. If the command runs, SDD is installed. You must issue the datapath command from the datapath directory. You can also use the following operation to verify that SDD has been successfully installed: a. Click Start → Programs → Administrative Tools → Computer Management. b. Double-click Device Manager. c. Expand Disk drives in the right pane. IBM 2105xxx SDD Disk Device: indicates ESS devices connected to Windows 2000 host. Figure 7 shows six ESS devices connected to the host and four paths to each of the disk storage system devices. The Device manager shows six IBM 2105xxx SDD Disk Devices and 24 IBM 2105xxx SCSI Disk Devices.

Figure 7. Example showing ESS devices connected to the host and path access to the ESS devices in a successful SDD installation on a Windows 2000 host system

2. You can also verify the current version of SDD. For more information, go to “datapath query version” on page 440.

Installing SDD from downloaded code
Use this procedure to install SDD from downloaded code: 1. Unzip the SDD code to your installation subdirectory. 2. Run the setup.exe program. Tip: The setup.exe program offers the following command line options for silent install:
--> setup -s : silent install --> setup -s -n : silent install; no reboot (requires SDD 1.6.0.4 or later)


3. When the setup.exe program is finished, you will be asked if you want to reboot. If you answer y, the setup.exe program will restart your Windows 2000 system immediately. Follow the instructions to restart. Otherwise, the setup.exe program exits, and you need to manually restart your Windows 2000 system to activate the new installation.
4. Shut down your Windows 2000 host system.
5. Reconnect all cables that connect the host bus adapters and the supported storage devices if needed.
6. Change any zoning information that needs to be updated.
7. Restart your Windows 2000 host system.

Upgrading SDD

Perform the following steps to upgrade SDD on your host system:
1. Log on as the administrator user.
2. To upgrade from CD-ROM:
a. Insert the SDD installation CD-ROM into the selected drive.
b. Start the Windows 2000 Explorer program.
c. Double-click the CD-ROM drive. A list of all the installed directories on the compact disc is displayed.
d. Double-click the \win2k\IBMsdd directory.
To upgrade from code downloaded from the SDD web site:
a. Unzip the SDD code to your installation subdirectory.
b. Run the setup.exe program.
3. Run the setup.exe program. Follow the displayed setup program instructions to complete the installation. If you have previously installed an earlier version of SDD, you will see an "Upgrade?" question while the setup program is running. You should answer y to this question to continue the installation.

Tip: The setup program provides the following command-line options for silent install/upgrade:

--> setup -s : silent install/upgrade --> setup -s -n : silent install/upgrade; no reboot (requires SDD 1.6.0.6 or later)

4. When the setup program is finished, you will be asked if you want to reboot. If you answer y, the setup program restarts your Windows 2000 system immediately. Follow the instructions to restart. Otherwise, the setup program exits, and you need to manually restart your Windows 2000 system to activate the new installation.
5. Shut down your Windows 2000 host system.
6. Reconnect all cables that connect the host bus adapters and the supported storage devices if needed.
7. Change any zoning information that needs to be updated.
8. Restart your Windows 2000 host system.

Notes:
1. You can verify that SDD has been successfully installed by issuing the datapath query device command. If the command runs, SDD is installed.
2. You can also verify the current version of SDD. See "Displaying the current version of SDD."

Displaying the current version of SDD

There are two ways to display the current version of SDD on a Windows 2000 host system:
1. By viewing the sddbus.sys file properties.

You can display the current version of SDD on a Windows 2000 host system by viewing the sddbus.sys file properties. Perform the following steps to view the properties of the sddbus.sys file: a. Click Start → Programs → Accessories → Windows Explorer to open Windows Explorer. b. In Windows Explorer, go to the %SystemRoot%\system32\drivers directory, where %SystemRoot% is %SystemDrive%\winnt for Windows 2000. If Windows is installed on the C: drive, %SystemDrive% is C:. If Windows is installed on the E: drive, %SystemDrive% is E:. c. Right-click the sddbus.sys file, and then click Properties. The sddbus.sys properties window opens. d. In the sddbus.sys properties window, click Version. The file version and copyright information about the sddbus.sys file is displayed. 2. By running the datapath query version command (requires SDD 1.6.1.x or later).

Configuring SDD
Use the following sections to configure SDD.

Maximum number of LUNs
There is no physical limitation on the maximum number of devices that can be configured for SDD on a Windows 2000 platform; any limit is imposed by the host operating system. A practical limit is 250 LUNs. Consult the Host Attachment Guide for your product.

Verifying the configuration
To activate SDD, you need to restart your Windows 2000 system after it is installed. Attention: Ensure that SDD is installed before you add additional paths to a device. Otherwise, the Windows 2000 server could lose the ability to access existing data on that device. Before adding any additional hardware, review the configuration information for the adapters and devices currently on your Windows 2000 server. Perform the following steps to display information about the adapters and devices: 1. You must log on as an administrator user to have access to the Windows 2000 Computer Management. 2. Click Start → Program → Subsystem Device Driver → Subsystem Device Driver Management. An MS-DOS window opens. 3. Enter datapath query adapter and press Enter. The output includes information about all the installed adapters. In the example shown in the following output, one host bus adapter is installed:
Active Adapters :1

Adpt#     Adapter Name    State     Mode   Select  Errors  Paths  Active
    0  Scsi Port4 Bus0   NORMAL   ACTIVE      592       0      6       6
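As a rough illustration of how the columns of an adapter output row line up, the following sketch splits one row into named fields. The parsing helper is hypothetical, not an SDD tool, and it assumes the exact column layout shown above:

```python
# Sketch: split one `datapath query adapter` output row into named fields.
# Assumes the layout shown above, where the adapter name is the three
# whitespace-separated tokens "Scsi PortN Bus0".
def parse_adapter(line: str) -> dict:
    f = line.split()
    return {
        "adpt": int(f[0]),
        "name": " ".join(f[1:4]),   # e.g. "Scsi Port4 Bus0"
        "state": f[4],
        "mode": f[5],
        "select": int(f[6]),
        "errors": int(f[7]),
        "paths": int(f[8]),
        "active": int(f[9]),
    }

row = "0  Scsi Port4 Bus0  NORMAL  ACTIVE  592  0  6  6"
print(parse_adapter(row)["name"])  # Scsi Port4 Bus0
```

A healthy configuration shows State NORMAL, Mode ACTIVE, and equal Paths and Active counts.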


4. Enter datapath query device and press Enter. In the following example showing disk storage system device output, six devices are attached to the SCSI path:
Total Devices : 6

DEV#:   0  DEVICE NAME: Disk1 Part0  TYPE: 2107900  POLICY: OPTIMIZED
SERIAL: 06D23922
============================================================================
Path#              Adapter/Hard Disk   State   Mode     Select  Errors
    0    Scsi Port4 Bus0/Disk1 Part0    OPEN   NORMAL      108       0

DEV#:   1  DEVICE NAME: Disk2 Part0  TYPE: 2107900  POLICY: OPTIMIZED
SERIAL: 06E23922
============================================================================
Path#              Adapter/Hard Disk   State   Mode     Select  Errors
    0    Scsi Port4 Bus0/Disk2 Part0    OPEN   NORMAL       96       0

DEV#:   2  DEVICE NAME: Disk3 Part0  TYPE: 2107900  POLICY: OPTIMIZED
SERIAL: 06F23922
============================================================================
Path#              Adapter/Hard Disk   State   Mode     Select  Errors
    0    Scsi Port4 Bus0/Disk3 Part0    OPEN   NORMAL       96       0

DEV#:   3  DEVICE NAME: Disk4 Part0  TYPE: 2107900  POLICY: OPTIMIZED
SERIAL: 07023922
============================================================================
Path#              Adapter/Hard Disk   State   Mode     Select  Errors
    0    Scsi Port4 Bus0/Disk4 Part0    OPEN   NORMAL       94       0

DEV#:   4  DEVICE NAME: Disk5 Part0  TYPE: 2107900  POLICY: OPTIMIZED
SERIAL: 07123922
============================================================================
Path#              Adapter/Hard Disk   State   Mode     Select  Errors
    0    Scsi Port4 Bus0/Disk5 Part0    OPEN   NORMAL       90       0

DEV#:   5  DEVICE NAME: Disk6 Part0  TYPE: 2107900  POLICY: OPTIMIZED
SERIAL: 07223922
============================================================================
Path#              Adapter/Hard Disk   State   Mode     Select  Errors
    0    Scsi Port4 Bus0/Disk6 Part0    OPEN   NORMAL       98       0
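A minimal sketch of tallying this kind of output, assuming the `DEV#:` and path-line layout shown above. The `summarize` helper and the sample text are illustrative assumptions, not part of SDD:

```python
import re

# Minimal sketch: count devices and path lines in `datapath query device`
# output. Assumes device headers start with "DEV#:" and each path line
# starts with a path number followed by "Scsi PortN".
def summarize(output: str):
    devices = re.findall(r"DEV#:\s+(\d+)", output)
    paths = re.findall(r"^\s*\d+\s+Scsi Port\d+", output, flags=re.M)
    return len(devices), len(paths)

sample = """\
DEV#:   0  DEVICE NAME: Disk1 Part0  TYPE: 2107900  POLICY: OPTIMIZED
SERIAL: 06D23922
Path#              Adapter/Hard Disk   State   Mode     Select  Errors
    0    Scsi Port4 Bus0/Disk1 Part0    OPEN   NORMAL      108       0
"""
print(summarize(sample))  # (1, 1)
```

For the six-device, single-path output above, such a tally would report 6 devices and 6 path lines.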

Activating additional paths
Perform the following steps to activate additional paths to an SDD vpath device: 1. Install any additional hardware on the Windows 2000 server or the ESS. 2. Click Start → Program → Administrative Tools → Computer Management. 3. Click Device Manager. 4. Right-click Disk drives. 5. Click Scan for hardware changes. 6. Verify that the path is added correctly. See “Verifying that additional paths are installed correctly.”

Verifying that additional paths are installed correctly
After installing additional paths to SDD devices, verify that the additional paths have been installed correctly. Perform the following steps to verify that the additional paths have been installed correctly: 1. Click Start → Program → Subsystem Device Driver → Subsystem Device Driver Management. An MS-DOS window opens.


2. Enter datapath query adapter and press Enter. The output includes information about any additional adapters that were installed. In the example shown in the following output, an additional host bus adapter has been installed:
Active Adapters :2

Adpt#     Adapter Name    State     Mode   Select  Errors  Paths  Active
    0  Scsi Port1 Bus0   NORMAL   ACTIVE     1325       0      8       8
    1  Scsi Port2 Bus0   NORMAL   ACTIVE     1312       0      8       8

3. Enter datapath query device and press Enter. The output should include information about any additional devices that were installed. In this example, the output includes information about the new host bus adapter that was assigned:
Total Devices : 6

DEV#:   0  DEVICE NAME: Disk1 Part0  TYPE: 2107900  POLICY: OPTIMIZED
SERIAL: 06D23922
============================================================================
Path#              Adapter/Hard Disk   State   Mode     Select  Errors
    0    Scsi Port4 Bus0/Disk1 Part0    OPEN   NORMAL      108       0
    1    Scsi Port5 Bus0/Disk1 Part0    OPEN   NORMAL       96       0

DEV#:   1  DEVICE NAME: Disk2 Part0  TYPE: 2107900  POLICY: OPTIMIZED
SERIAL: 06E23922
============================================================================
Path#              Adapter/Hard Disk   State   Mode     Select  Errors
    0    Scsi Port4 Bus0/Disk2 Part0    OPEN   NORMAL       96       0
    1    Scsi Port5 Bus0/Disk2 Part0    OPEN   NORMAL       95       0

DEV#:   2  DEVICE NAME: Disk3 Part0  TYPE: 2107900  POLICY: OPTIMIZED
SERIAL: 06F23922
============================================================================
Path#              Adapter/Hard Disk   State   Mode     Select  Errors
    0    Scsi Port4 Bus0/Disk3 Part0    OPEN   NORMAL       96       0
    1    Scsi Port5 Bus0/Disk3 Part0    OPEN   NORMAL       94       0

DEV#:   3  DEVICE NAME: Disk4 Part0  TYPE: 2107900  POLICY: OPTIMIZED
SERIAL: 07023922
============================================================================
Path#              Adapter/Hard Disk   State   Mode     Select  Errors
    0    Scsi Port4 Bus0/Disk4 Part0    OPEN   NORMAL       94       0
    1    Scsi Port5 Bus0/Disk4 Part0    OPEN   NORMAL       96       0

DEV#:   4  DEVICE NAME: Disk5 Part0  TYPE: 2107900  POLICY: OPTIMIZED
SERIAL: 07123922
============================================================================
Path#              Adapter/Hard Disk   State   Mode     Select  Errors
    0    Scsi Port4 Bus0/Disk5 Part0    OPEN   NORMAL       90       0
    1    Scsi Port5 Bus0/Disk5 Part0    OPEN   NORMAL       99       0

DEV#:   5  DEVICE NAME: Disk6 Part0  TYPE: 2107900  POLICY: OPTIMIZED
SERIAL: 07223922
============================================================================
Path#              Adapter/Hard Disk   State   Mode     Select  Errors
    0    Scsi Port4 Bus0/Disk6 Part0    OPEN   NORMAL       98       0
    1    Scsi Port5 Bus0/Disk6 Part0    OPEN   NORMAL       79       0
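To spot-check that every device gained a second path, a small script can count path lines per device in output like the above. This is a hedged sketch assuming the layout shown; the helper and sample are illustrative, not part of SDD:

```python
import re

# Sketch: count paths per device in `datapath query device` output and
# confirm each device has at least two paths, as required for the SDD
# load-balancing and failover features.
def paths_per_device(output: str) -> dict:
    counts, dev = {}, None
    for line in output.splitlines():
        m = re.match(r"\s*DEV#:\s+(\d+)", line)
        if m:
            dev = int(m.group(1))
            counts[dev] = 0
        elif dev is not None and re.match(r"\s*\d+\s+Scsi Port", line):
            counts[dev] += 1
    return counts

sample = """\
DEV#:   0  DEVICE NAME: Disk1 Part0  TYPE: 2107900  POLICY: OPTIMIZED
    0    Scsi Port4 Bus0/Disk1 Part0    OPEN   NORMAL      108       0
    1    Scsi Port5 Bus0/Disk1 Part0    OPEN   NORMAL       96       0
"""
counts = paths_per_device(sample)
assert all(n >= 2 for n in counts.values())
print(counts)  # {0: 2}
```

If any device reports fewer than two paths after activating additional hardware, rescan for hardware changes and repeat the verification.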

Uninstalling SDD
Perform the following steps to uninstall SDD on a Windows 2000 host system. 1. Shut down your Windows 2000 host system. 2. Ensure that there is a single-path connection from the system to the storage device.

3. Turn on your Windows 2000 host system.
4. Log on as the administrator user.
5. Click Start → Settings → Control Panel. The Control Panel opens.
6. Double-click Add/Remove Programs. The Add/Remove Programs window opens.
7. In the Add/Remove Programs window, select Subsystem Device Driver from the currently installed programs selection list.
8. Click Add/Remove. You will be asked to verify that you want to uninstall SDD.
9. Restart your system.

The SDD setup.exe program provides the following command line options for silent uninstall:
--> setup -s -u: silent uninstall --> setup -s -u -n : silent uninstall; no reboot (requires SDD 1.6.0.6 or later)

SAN boot support
Note: Support for remote boot from ESS devices connected to a SCSI adapter is not available. See the Host Systems Attachment Guide for your supported storage device for additional information.

Booting from a SAN device with Windows 2000 and the SDD using Qlogic HBA <BIOS 1.43> or later
For information about how SAN boot works, see the Qlogic Web site: http://www.qlogic.com/.

Perform the following steps to set up a SAN boot device with SDD:
1. Configure the SAN environment so that both Qlogic HBAs in the host system can see the SAN boot device. Ensure that there is a single-path connection from each Qlogic HBA to the SAN boot device.
2. Turn on the host system with two fibre-channel cables connected to both HBAs. When the HBA banner appears, press CTRL-Q.
3. Select the first HBA from the displayed list.
4. Select Configuration Settings.
5. Select Host Adapter Settings.
6. Select Host Adapter BIOS and enable it.
7. Press the Back key to back up one menu.
8. Select Selectable Boot Settings.
9. Under Selectable Boot Settings, enable Selectable Boot.
10. Under (Primary) Boot Port Name, LUN, select the IBM device that will be providing the storage for SAN boot. At the Select LUN prompt, select the first supported LUN, which is LUN 0.
11. This returns you to the previous screen, which will now have information under (Primary) Boot Port Name, LUN for the device that you selected in the previous step.
12. Press the Back key twice to exit the menus. Then select Save Changes to save the changes.


13. Select the second HBA and repeat steps 4-12.
14. Unplug the fibre-channel cable from the second HBA and plug the fibre-channel cable into the first HBA so that you have a single path from the first HBA to the SAN device.
15. Insert the Windows 2000 CD-ROM with the latest Service Pack into the CD-ROM drive.
16. Restart the host system.
17. At the very first Windows 2000 installation screen, quickly press F6 to install a third-party device.
18. Select S to specify an additional device.
19. Insert the diskette with the Qlogic driver into the diskette drive and press Enter.
20. Continue with the Windows 2000 installation process. Remember to select the IBM SAN device seen by the Qlogic HBA as the device on which to install Windows 2000. Continue with the OS installation.
21. After Windows 2000 is successfully installed on the SAN boot device, shut down the host system.
22. Unplug the fibre-channel cable from the first HBA and plug the fibre-channel cable into the second HBA so that you have a single connection from the second HBA to the IBM SAN boot device.
23. Restart the host system to boot from the SAN.
24. Install the latest SDD version on your host system and reboot.
25. To add multipath support to the SAN boot device, perform the following steps:
a. Shut down the host system.
b. Configure the SAN to allow additional paths to the SAN boot device if needed.
c. Connect all fibre-channel cables.
26. Restart the host system.

Booting from a SAN device with Windows 2000 and the SDD using an Emulex HBA <Firmware v3.92a2, v1.90.x5> or later
Note: The Automatic LUN Mapping checkbox of the Emulex Configuration Settings should be selected so that both HBA ports can see all assigned LUNs.

Perform the following steps to set up a SAN boot device with Windows 2000 and SDD using an Emulex HBA:
1. Configure the SAN environment so that both Emulex HBAs in the host system can see the SAN boot device. Ensure that there is a single-path connection from each Emulex HBA to the SAN boot device.
2. Turn on the host system with two fibre-channel cables connected to both HBAs. Press Alt+E to go to the Emulex BIOS Utilities.
3. Select the first HBA.
4. Select Configure HBA Parameter Settings.
5. Select Option 1 to enable BIOS for this HBA.
6. Press the Page Up key to go back. Then select Configure boot device.
7. Select the first unused boot device for Select Boot Entry from the List Of Saved Boot Devices.

8. Select 01 for Select The Two Digit Number Of The Desired Boot Device.
9. Enter 00 for Enter Two Digit Of Starting LUNs (hexadecimal).
10. Select device number 01 for Enter Selection For Starting LUN.
11. Select Boot Device Via WWPN.
12. Page up and select the second HBA. Repeat steps 4-11 to configure boot support for this HBA.
13. Unplug the fibre-channel cable from the second HBA and plug the fibre-channel cable into the first HBA so that you have a single path from the first HBA to the SAN device.
14. Insert the Windows 2000 CD-ROM with the latest Service Pack into the CD-ROM drive.
15. Restart the host system.
16. At the very first Windows 2000 installation screen, quickly press F6 to install a third-party device.
17. Select S to specify an additional device.
18. Insert the diskette with the Emulex HBA driver into the diskette drive and press Enter.
19. Continue with the Windows 2000 installation process. Remember to select the IBM SAN device that is seen by the Emulex HBA as the device on which to install Windows 2000. Continue with the OS installation.
20. After Windows 2000 is successfully installed on the SAN boot device, shut down the host system.
21. Unplug the fibre-channel cable from the first HBA and plug the fibre-channel cable into the second HBA so that you have a single-path connection from the second HBA to the IBM SAN boot device.
22. Restart the host system to boot from the SAN.
23. Install the latest SDD and reboot.
24. To add multipath support to the SAN boot device, perform the following steps:
a. Shut down the host system.
b. Configure the SAN to add multipaths to the SAN boot device if needed.
c. Reconnect all fibre-channel cables.
25. Restart the host system.

Limitations when you boot from a SAN boot device on a Windows 2000 host
The following limitations apply when you boot from a SAN boot device on a Windows 2000 host: v You cannot use the same HBA as both the SAN boot device and a clustering adapter. This is a Microsoft physical limitation. v The following limitations might apply to a host system that is running at a BIOS or Firmware level older than the specified one. 1. If you reboot a system with adapters while the primary path is in failed state, you must: a. Manually disable the BIOS on the first adapter. b. Manually enable the BIOS on the second adapter.


2. You cannot enable the BIOS for both adapters at the same time. If the BIOS for both adapters is enabled at the same time and there is path failure on the primary adapter, you will receive the error message INACCESSIBLE_BOOT_DEVICE when the system restarts.

SAN boot disk migration
The datapath set bootdiskmigrate command migrates a SAN boot host with SDD from one storage type to another:

datapath set bootdiskmigrate device type

where:

device type
    The device type to which you are migrating.

To migrate the boot disk from a Model 2105 to a Model 2107:
1. Enter the datapath set bootdiskmigrate 2107 command on the remote boot host that needs to be migrated.
2. Shut down the host.
3. Using Metro Mirror, PPRC, or any other tool, migrate the data to the 2107 disk.
4. Boot the host from the 2107 disk instead of the 2105 disk.

Support for Windows 2000 clustering
SDD 1.6.0.0 (or later) is required to support load balancing in Windows 2000 clustering. The following table indicates support by release.
Table 27. Windows 2000 clustering SCSI-2 Reserve/Release and Persistent Reserve/Release support with MSCS

SDD version                               SCSI-2 Reserve/Release     Persistent Reserve/Release
                                          support with MSCS          support with MSCS
SDD 1.5.x.x and earlier                   Yes                        No
SDD 1.6.0.400                             Yes                        No
SDD 1.6.0.x and later                     No                         Yes
(except for 1.6.0.400)
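Table 27 can be read as a simple version-to-support mapping. The following sketch encodes that mapping for illustration only; the function name is an assumption, and version strings are compared component by component:

```python
# Illustrative encoding of Table 27: which MSCS reserve mechanism a given
# SDD version uses. 1.6.0.400 is the documented exception that retains
# SCSI-2 Reserve/Release.
def mscs_reserve_type(version: str) -> str:
    parts = [int(p) for p in version.split(".")]
    if parts == [1, 6, 0, 400]:
        return "SCSI-2 Reserve/Release"
    if parts >= [1, 6, 0]:          # lexicographic component comparison
        return "Persistent Reserve/Release"
    return "SCSI-2 Reserve/Release"

print(mscs_reserve_type("1.5.1.2"))    # SCSI-2 Reserve/Release
print(mscs_reserve_type("1.6.0.400"))  # SCSI-2 Reserve/Release
print(mscs_reserve_type("1.6.2.0"))    # Persistent Reserve/Release
```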

When running Windows 2000 clustering, clustering failover might not occur when the last path is being removed from the shared resources. See Microsoft article Q294173 for additional information. Windows 2000 does not support dynamic disks in the MSCS environment.

Special considerations in the Windows 2000 clustering environment
Note: The adapter goes to DEGRAD state when there are active paths left on the adapter. It goes to FAILED state when there are no active paths.

The datapath set adapter # offline command operates differently in a SCSI Reserve/Release support clustering environment when compared to a nonclustering or Persistent Reserve/Release environment. In a SCSI Reserve/Release support clustering environment, the datapath set adapter offline command does not change the condition of the path if the path is active or being reserved. If you issue the command, the following message is displayed:

to preserve access some paths left online

Configuring a Windows 2000 cluster with the SDD installed
The following variables are used in this procedure:

server_1
    Represents the first server with two host bus adapters (HBAs).
server_2
    Represents the second server with two HBAs.

Perform the following steps to configure a Windows 2000 cluster with SDD:
1. On both server_1 and server_2, configure SAN devices on supported storage as shared for all HBAs.
2. Install the latest SDD on server_1. For installation instructions, see “Installing SDD” on page 368.
3. Connect fibre-channel cables from server_1 to the supported storage device, and restart server_1.
4. Use the datapath query adapter and datapath query device commands to verify the correct number of SAN devices and paths on server_1.
5. Click Start → All Programs → Administrative Tools → Computer Management. From the Computer Management window, select Storage and then select Disk Management to work with the storage devices attached to the host system.
6. Format the raw devices with NTFS and assign drive letters for all SAN devices that are going to be used as MSCS resources. Ensure that you keep track of the assigned drive letters on server_1.
7. Shut down server_1.
8. Install the latest SDD on server_2. For installation instructions, see “Installing SDD” on page 368.
9. Connect fibre-channel cables from server_2 to the supported storage device, and restart server_2.
10. Use the datapath query adapter and datapath query device commands to verify the correct number of SAN devices and paths on server_2.
11. Click Start → All Programs → Administrative Tools → Computer Management. From the Computer Management window, select Storage and then select Disk Management to work with the storage devices attached to the host system. Verify that the assigned drive letters for MSCS resources on server_2 match the assigned drive letters on server_1.
12. Insert the Windows 2000 CD-ROM into the CD-ROM drive and install the MSCS software on server_2.
13. Restart server_1.
14. Insert the Windows 2000 CD-ROM into the CD-ROM drive and install the MSCS software on server_1 as the second node of the MSCS cluster.
Information about installing a Windows 2000 cluster can be found at: www.microsoft.com/windows2000/techinfo/planning/server/clustersteps.asp

Upgrading the SDD in a two-node cluster environment
Perform the following steps to upgrade SDD in a two-node cluster environment:
1. Move all cluster resources from node A to node B.
2. On node A, follow the instructions from “Upgrading SDD” on page 370.
3. After node A is started, move all resources from node B to node A.
4. On node B, follow the instructions from “Upgrading SDD” on page 370.

Uninstalling the SDD in a two-node cluster environment
Use the following steps if you intend to remove the multipathing functions of SDD from a supported storage device in a two-node cluster environment.
Perform the following steps to uninstall SDD in a two-node cluster environment:
1. Move all cluster resources from node A to node B.
2. Ensure that there is a single-path connection from the system to the storage device, which may include the following activities:
a. Disable access of the second HBA to the storage device.
b. Change the zoning configuration to allow only one port accessed by this host.
c. Remove shared access to the second HBA.
d. Remove multiple supported storage port access, if applicable.
3. Uninstall SDD. See “Uninstalling SDD” on page 373 for details.
4. Restart your system.
5. Move all cluster resources from node B to node A.
6. Perform steps 2 - 5 on node B.
The preceding procedures can be expanded to upgrade SDD or to uninstall SDD in an n-node cluster environment.
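The guide notes that the two-node procedure expands to an n-node cluster. A small sketch of the resulting rolling order is below; the node names and the `rolling_plan` helper are illustrative ("act" stands for the upgrade or uninstall steps performed on the drained node).

```python
# Sketch: expand the two-node procedure above to n nodes by generating
# the order in which resources are moved and each node is acted on.
# Node names are hypothetical examples.

def rolling_plan(nodes):
    """Yield (move_resources_to, act_on) pairs covering every node once."""
    plan = []
    for i, node in enumerate(nodes):
        standby = nodes[(i + 1) % len(nodes)]
        plan.append((standby, node))  # drain `node`, then upgrade/uninstall it
    return plan

print(rolling_plan(["A", "B"]))  # [('B', 'A'), ('A', 'B')]
```

For two nodes this reproduces the procedure above: move resources to B, act on A, then move resources to A and act on B.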

SDD server daemon
The SDD server (also referred to as sddsrv) is an integrated component of SDD 1.3.4.1 (or later). This component consists of a Windows application daemon that is installed in addition to the SDD device driver.

Verifying if the SDD server has started
After you have installed SDD, verify that the SDD server (sddsrv) has automatically started:
1. Click Start → Programs → Administrative Tools → Computer Management.
2. Expand the Services and Applications tree.
3. Click Services.
4. Locate SDD_Service in the list of services and check its status. The status of SDD_Service should be Started if the SDD server has automatically started.

Starting the SDD server manually
If the SDD server did not start automatically after you performed the SDD installation, you can use the following process to start sddsrv:
1. Click Start → Programs → Administrative Tools → Computer Management.
2. Expand the Services and Applications tree.
3. Click Services.
4. Right-click SDD_Service.
5. Click Start.
Chapter 9. Using the SDD on a Windows 2000 host system

379

Changing to a different port number for the SDD server
To change to a different port number for the SDD server, see the instructions in “Changing the sddsrv or pcmsrv TCP/IP port number” on page 419.

Stopping the SDD server
To stop the SDD server, perform the following steps:
1. Click Start → Programs → Administrative Tools → Computer Management.
2. Expand the Services and Applications tree.
3. Click Services.
4. Right-click SDD_Service.
5. Click Stop.


Chapter 10. Using SDD on a Windows Server 2003 host system

This chapter provides procedures for you to install, configure, remove, and use the SDD on a Windows Server 2003 host system that is attached to a supported storage device. SDD supports both 32-bit and 64-bit environments running Windows Server 2003. For the Windows 2003 Server 32-bit environment, install the package from the \win2k3\i386\IBMsdd directory of the SDD CD-ROM. For the Windows 2003 Server 64-bit environment, install the package from the \win2k3\IA64\IBMsdd directory of the SDD CD-ROM.

For updated and additional information that is not included in this chapter, see the Readme file on the CD-ROM or visit the SDD Web site: www.ibm.com/servers/storage/support/software/sdd

Verifying the hardware and software requirements
You must have the following hardware and software components in order to install SDD:
Hardware
The following hardware components are needed:
v Supported storage devices
v Host system
v SCSI adapters and cables (ESS)
v Fibre-channel adapters and cables
Software
The following software components are needed:
v Windows Server 2003 operating system Standard or Enterprise edition
v Device driver for SCSI or fibre-channel adapters

Unsupported environments
SDD does not support the following environments:
v A host system with both a SCSI channel and a fibre-channel connection to a shared LUN.
v Single-path mode during code distribution and activation of LMC nor during any disk storage system concurrent maintenance that impacts the path attachment, such as a disk storage system host-bay-adapter replacement.
v SDD is not supported on the Windows Server 2003 Web edition.
v DS8000 and DS6000 do not support SCSI connectivity.

Disk storage system requirements
To successfully install SDD, ensure that the disk storage system devices are configured as either an:
v IBM 2105xxx, for ESS devices
v IBM 2107xxx, for DS8000 devices
v IBM 1750xxx, for DS6000 devices
v IBM 2145, for SAN Volume Controller devices
where xxx represents the disk storage system model number.

Host system requirements
To successfully install SDD, your Windows Server 2003 host system must be an Intel-based system with Windows Server 2003 installed. The host system can be a uniprocessor or a multiprocessor system. To install all components, you must have at least 1 MB (MB equals approximately 1 000 000 bytes) of disk space available on the drive where Windows Server 2003 is installed.

SCSI requirements
(DS8000 and DS6000 do not support SCSI connectivity.) To use the SDD SCSI support, ensure that your host system meets the following requirements:
v No more than 32 SCSI adapters are attached.
v A SCSI cable connects each SCSI host adapter to an ESS port.
v If you need the SDD I/O load-balancing and failover features, ensure that a minimum of two independent paths are configured between the host and the subsystem.
Note: SDD also supports one SCSI adapter on the host system. With single-path access, concurrent download of licensed machine code is supported with SCSI devices. However, the load-balancing and failover features are not available.
v For information about the SCSI adapters that can attach to your Windows Server 2003 host system, go to the following Web site: www.ibm.com/servers/storage/support

Fibre-channel requirements
To use the SDD fibre-channel support, ensure that your host system meets the following requirements:
v Depending on the fabric and supported storage configuration, the number of fibre-channel adapters attached should be less than or equal to 32 / (n * m), where n is the number of supported storage ports and m is the number of paths that have access to the supported storage device from the fabric.
v A fiber-optic cable connects each fibre-channel adapter to a disk storage system port.
v If you need the SDD I/O load-balancing and failover features, ensure that a minimum of two independent fibre-channel paths are installed. You should have at least two fibre-channel adapters to prevent data loss due to adapter hardware failure.
For information about the fibre-channel adapters that can attach to your Windows Server 2003 host system, go to the following Web site: www.ibm.com/servers/storage/support
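The adapter-count rule above can be sketched as a small calculation; the function and parameter names are mine, not from the guide.

```python
# Sketch of the fibre-channel sizing rule: the number of fibre-channel
# adapters should be <= 32 / (n * m), where n is the number of supported
# storage ports and m is the number of paths that have access to the
# supported storage device from the fabric.

def max_fc_adapters(storage_ports, fabric_paths):
    """Largest whole adapter count satisfying adapters <= 32 / (n * m)."""
    return 32 // (storage_ports * fabric_paths)

print(max_fc_adapters(4, 2))  # 4: with n=4 ports and m=2 paths, at most 4 adapters
```

For example, with two storage ports and one fabric path the rule allows up to 16 adapters, while a heavily multipathed configuration (n=8, m=4) allows only one.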

Preparing for SDD installation

Note: If you have Windows 2000 server running and SDD 1.3.x.x already installed and you want to upgrade to Windows Server 2003, you should:
1. Upgrade SDD to the latest level.
2. Upgrade Windows 2000 server to Windows Server 2003.

Before installing SDD, you must:
1. Ensure that all hardware and software requirements are met before you install SDD. See “Verifying the hardware and software requirements” on page 381 for more information.
2. Configure the supported storage device to your host system. See “Configuring the supported storage device” for more information.
3. Configure the fibre-channel adapters that are attached to your Windows Server 2003 host system. See “Configuring fibre-channel adapters” for more information.
4. Configure the SCSI adapters that are attached to your Windows Server 2003 host system. See “Configuring SCSI adapters for ESS devices” on page 384 for more information.
5. Uninstall any previously installed version of SDD on your host system. For SDD uninstallation and installation instructions, see “Uninstalling the SDD” on page 391 and “Installing SDD” on page 384.

Configuring the supported storage device
Before you install SDD, configure your supported storage device for single-port or multiport access for each LUN. SDD requires a minimum of two independent paths that share the same logical unit to use the load-balancing and failover features. For information about configuring your disk storage system, see the Introduction and Planning Guide or Configuration Guide for your product.
Note: During heavy usage, the Windows Server 2003 operating system might slow down while trying to recover from error conditions.

Configuring fibre-channel adapters
You must configure the fibre-channel adapters that are attached to your Windows Server 2003 host system before you install SDD. Follow the adapter-specific configuration instructions to configure the adapters attached to your Windows Server 2003 host systems.
To get the latest recommendation for host adapter settings for the ESS disk storage system, see the Enterprise Storage Server interoperability matrix at the following Web site: http://www-947.ibm.com/systems/support/
To get the latest recommendation for host adapter settings for the SAN Volume Controller, see the IBM System Storage SAN Volume Controller Host Systems Attachment Guide and the following Web site: www.ibm.com/storage/support/2145/
To get the latest recommendation for host adapter settings for a disk storage system, see the HBA interoperability search tool at the following Web site: www-03.ibm.com/servers/storage/support/config/hba/index.wss
SDD supports the Emulex HBA with full-port driver. When you configure the Emulex HBA for multipath functions, select Allow Multiple Paths to SCSI Targets in the Emulex Configuration Tool panel.

Configuring SCSI adapters for ESS devices
Attention: Failure to disable the BIOS of attached nonstart devices may cause your system to attempt to restart from an unexpected nonstart device.
Before you install and use SDD, you must configure your SCSI adapters. For SCSI adapters that are attached to start devices, ensure that the BIOS for the adapter is enabled. For all other adapters that are attached to nonstart devices, ensure that the BIOS for the adapter is disabled.
Note: When the adapter shares the SCSI bus with other adapters, the BIOS must be disabled.

Installing SDD
This section describes first time installation, upgrading, displaying current versions, and upgrading from Windows NT.

Installing for the first time
The following section describes how to install SDD on your system.
Attention: Ensure that there is single-path connection from the system to the storage device that does not have SDD installed. Ensure that SDD is installed before you add additional paths to a device. Otherwise, the Windows 2003 server could lose the ability to access existing data on that device.

Installing SDD from CD-ROM
Perform the following steps to install SDD on your system:
1. Log on as the administrator user. To install from the CD-ROM:
a. Insert the SDD installation CD-ROM into the selected drive.
b. Start the Windows Server 2003 Explorer program.
c. Double-click the CD-ROM drive. A list of all the installed directories on the CD-ROM is displayed.
d. Select the \win2k3\i386\IBMsdd directory for 32-bit or the \win2k3\IA64\IBMsdd directory for IA 64-bit.
2. Run the setup.exe program.
Tip: The setup.exe program offers the following command line options for silent install:
--> setup -s : silent install
--> setup -s -n : silent install, no reboot (requires SDD 1.6.0.4 or later)

3. When the setup program is finished, you are asked if you want to reboot. If you answer y, the setup program restarts your Windows 2003 system immediately. Otherwise the setup program exits, and you need to manually restart your Windows 2003 system to activate the new installation.
4. Shut down your Windows 2003 host system.
5. Reconnect all cables that connect the host bus adapters and the supported storage devices, if needed.
6. Change any zoning information that needs to be updated.
7. Restart your Windows 2003 host system.
After completing the installation procedures and when you log on again, you will see a Subsystem Device Driver entry in your Program menu containing the following selections:
1. Subsystem Device Driver Management
2. SDD Technical Support Web site
3. README
Notes:
1. You can verify that SDD has been successfully installed by issuing the datapath query device command. If the command runs, SDD is installed. You must issue the datapath command from the datapath directory.
2. You can also use the following procedure to verify that SDD has been successfully installed:
a. Click Start → Programs → Administrative Tools → Computer Management.
b. Double-click Device Manager.
c. Expand Disk drives in the right pane.
IBM 2105 indicates an ESS device. IBM 2107 indicates a DS8000 device. IBM 1750 indicates a DS6000 device. IBM 2145 indicates a SAN Volume Controller device.
In Figure 8 on page 386, there are six ESS devices connected to the host and four paths to each of the ESS devices. The Device Manager shows six IBM 2105xxx SDD disk devices and 24 IBM 2105xxx SCSI disk devices.

Figure 8. Example showing ESS devices to the host and path access to the ESS devices in a successful SDD installation on a Windows Server 2003 host system

You can also verify the current version of SDD. For more information, go to “Displaying the current version of the SDD” on page 387.

Installing SDD from downloaded code
Use this procedure to install SDD from downloaded code:
1. Unzip the SDD code to your installation subdirectory.
2. Run the setup.exe program and follow the instructions.
Tip: The setup.exe program offers the following command line options for silent install:
--> setup -s : silent install
--> setup -s -n : silent install, no reboot (requires SDD 1.6.0.4 or later)
3. When the setup.exe program is finished, you will be asked if you want to reboot. If you answer y, the setup.exe program will restart your Windows 2003 system immediately. Otherwise, the setup.exe program exits, and you need to manually restart your Windows 2003 system to activate the new installation.
4. Shut down your Windows 2003 host system.
5. Reconnect all cables that connect the host bus adapters and the supported storage devices, if needed.
6. Change any zoning information that needs to be updated.
7. Restart your Windows 2003 host system.

Upgrading the SDD
Perform the following steps to upgrade SDD on your host system:
1. Log on as the administrator user.
To upgrade from CD-ROM:
a. Insert the SDD installation CD-ROM into the selected drive.
b. Start the Windows Server 2003 Explorer program.
c. Double-click the CD-ROM drive. A list of all the installed directories on the compact disc is displayed.
d. Select the \win2k3\i386\IBMsdd directory for 32-bit or the \win2k3\IA64\IBMsdd directory for IA 64-bit.
To upgrade from downloaded code:
a. Download code from the SDD Web site: www.ibm.com/servers/storage/support/software/sdd
b. Unzip the SDD code to your installation subdirectory.
2. Run the setup.exe program. Follow the displayed setup instructions to complete the installation. If you have previously installed a 1.3.1.1 (or earlier) version of SDD, you will see an Upgrade? question while the setup program is running. You should answer y to this question to continue the installation.
Tip: The setup.exe program provides the following command-line options for silent install/upgrade:
--> setup -s : silent install/upgrade
--> setup -s -n : silent install/upgrade, no reboot (requires SDD 1.6.0.6 or later)
3. When the setup program is finished, you are asked if you want to reboot. If you answer y, the setup program restarts your Windows Server 2003 system immediately. Otherwise the setup program exits, and you need to manually restart your Windows Server 2003 system to activate the new installation.

Displaying the current version of the SDD
There are two ways to display the current version of SDD on a Windows Server 2003 host system:
1. View the sddbus.sys file properties. To view the properties of the sddbus.sys file:
a. Click Start → Programs → Accessories → Windows Explorer to open Windows Explorer.
b. In Windows Explorer, go to the %SystemRoot%\system32\drivers directory, where %SystemRoot% is %SystemDrive%\Windows for Windows Server 2003. If Windows is installed on the C: drive, %SystemDrive% is C:. If Windows is installed on the E: drive, %SystemDrive% is E:.
c. Right-click the sddbus.sys file, and then click Properties.

d. In the sddbus.sys properties window, click Version. The file version and copyright information about the sddbus.sys file is displayed.
2. Run the datapath query version command (requires SDD 1.6.1.x or later).

Upgrading from a Windows NT host system to Windows Server 2003
Use the following procedure to upgrade SDD to a Windows Server 2003 host:
1. Uninstall SDD from the Windows NT host system. See “Uninstalling the SDD” on page 359.
2. Shut down the system.
3. Disconnect all cables that allow the Windows NT host to access to the supported storage devices.
4. Restart the system.
5. Perform the Windows NT to Windows Server 2003 upgrade according to your migration plans.
6. After your host upgrade is complete, install Windows Server 2003-supported HBA drivers.
7. Enable a single-path access from your server to the supported storage device.
8. Restart your host.
9. Install the latest version of SDD for Windows 2003. See “Installing SDD” on page 384.
10. Reboot the system, enabling additional paths to the supported storage device.

Configuring the SDD
Use these sections to configure the SDD.

Maximum number of LUNs
There is no physical limitation on the maximum number of devices that can be configured for SDD on a Windows Server 2003 platform. Any limitation exists on the host operating system. A practical limit is 250 LUNs. Consult the Host Attachment Guide for your product.

Verifying the configuration
To activate SDD, you need to restart your Windows Server 2003 system after it is installed.
Attention: Ensure that SDD is installed before you add additional paths to a device. Otherwise, the Windows Server 2003 server could lose the ability to access existing data on that device.
Before adding any additional hardware, review the configuration information for the adapters and devices currently on your Windows Server 2003 server.
Perform the following steps to display information about the adapters and devices:
1. You must log on as an administrator user to have access to the Windows Server 2003 Computer Management.
2. Click Start → Program → Subsystem Device Driver → Subsystem Device Driver Management. An MS-DOS window opens.

3. Enter datapath query adapter and press Enter. The output includes information about all the installed adapters. In the example shown in the following output, one HBA is installed:

Active Adapters :1
Adpt#  Adapter Name     State   Mode    Select  Errors  Paths  Active
0      Scsi Port4 Bus0  NORMAL  ACTIVE  592     0       6      6

4. Enter datapath query device and press Enter. In the example shown in the following output, six devices are attached to the SCSI path:

Total Devices : 6

DEV#: 0  DEVICE NAME: Disk1 Part0  TYPE: 2107900  POLICY: OPTIMIZED  SERIAL: 06D23922
Path#  Adapter/Hard Disk            State  Mode    Select  Errors
0      Scsi Port4 Bus0/Disk1 Part0  OPEN   NORMAL  108     0

DEV#: 1  DEVICE NAME: Disk2 Part0  TYPE: 2107900  POLICY: OPTIMIZED  SERIAL: 06E23922
Path#  Adapter/Hard Disk            State  Mode    Select  Errors
0      Scsi Port4 Bus0/Disk2 Part0  OPEN   NORMAL  96      0

DEV#: 2  DEVICE NAME: Disk3 Part0  TYPE: 2107900  POLICY: OPTIMIZED  SERIAL: 06F23922
Path#  Adapter/Hard Disk            State  Mode    Select  Errors
0      Scsi Port4 Bus0/Disk3 Part0  OPEN   NORMAL  96      0

DEV#: 3  DEVICE NAME: Disk4 Part0  TYPE: 2107900  POLICY: OPTIMIZED  SERIAL: 07023922
Path#  Adapter/Hard Disk            State  Mode    Select  Errors
0      Scsi Port4 Bus0/Disk4 Part0  OPEN   NORMAL  94      0

DEV#: 4  DEVICE NAME: Disk5 Part0  TYPE: 2107900  POLICY: OPTIMIZED  SERIAL: 07123922
Path#  Adapter/Hard Disk            State  Mode    Select  Errors
0      Scsi Port4 Bus0/Disk5 Part0  OPEN   NORMAL  90      0

DEV#: 5  DEVICE NAME: Disk6 Part0  TYPE: 2107900  POLICY: OPTIMIZED  SERIAL: 07223922
Path#  Adapter/Hard Disk            State  Mode    Select  Errors
0      Scsi Port4 Bus0/Disk6 Part0  OPEN   NORMAL  98      0

Activating additional paths
Perform the following steps to activate additional paths to an SDD vpath device:
1. Install any additional hardware on the Windows Server 2003 server or the ESS.
2. Click Start → Program → Administrative Tools → Computer Management.
3. Click Device Manager.
4. Right-click Disk drives.
5. Click Scan for hardware changes.
6. Verify that the path is added correctly. See “Verifying that additional paths are installed correctly” on page 390.
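A healthy adapter in the `datapath query adapter` output shows an Active path count equal to its Paths count. The sketch below parses a simplified rendition of that report and flags degraded adapters; the report text and parser are illustrative, since the real command's column layout may differ.

```python
# Sketch: flag adapters in a `datapath query adapter`-style report whose
# Active count has dropped below Paths (a sign of failed paths).
# The REPORT text mimics the example output above; parsing is illustrative.

REPORT = """\
Adpt#  Adapter Name     State   Mode    Select  Errors  Paths  Active
0      Scsi Port4 Bus0  NORMAL  ACTIVE  592     0       6      6
1      Scsi Port5 Bus0  NORMAL  ACTIVE  559     0       6      4
"""

def degraded_adapters(report):
    """Return Adpt# values where Active < Paths."""
    bad = []
    for line in report.splitlines()[1:]:  # skip the header row
        fields = line.split()
        adpt, paths, active = fields[0], int(fields[-2]), int(fields[-1])
        if active < paths:
            bad.append(adpt)
    return bad

print(degraded_adapters(REPORT))  # ['1']
```

Taking the Paths and Active columns from the end of each row keeps the parser indifferent to spaces inside the adapter name.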

Verifying that additional paths are installed correctly
After installing additional paths to SDD devices, verify that the additional paths have been installed correctly.
Perform the following steps to verify that the additional paths have been installed correctly:
1. Click Start → Program → Subsystem Device Driver → Subsystem Device Driver Management. An MS-DOS window opens.
2. Enter datapath query adapter and press Enter. The output includes information about any additional adapters that were installed. In the example shown in the following output, an additional HBA has been installed:

Active Adapters :2
Adpt#  Adapter Name     State   Mode    Select  Errors  Paths  Active
0      Scsi Port4 Bus0  NORMAL  ACTIVE  592     0       6      6
1      Scsi Port5 Bus0  NORMAL  ACTIVE  559     0       6      6

3. Enter datapath query device and press Enter. The output should include information about any additional devices that were installed. In this example, the output includes information about the new host bus adapter that was assigned. The following output is displayed:
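After adding a second HBA, each device in the `datapath query device` output should list one path per adapter (two here). A minimal sketch of that check follows; the SAMPLE lines condense the per-device output into one line each and are illustrative, not real command output.

```python
# Sketch: verify that every SDD device gained the expected number of
# paths after activating a new HBA. SAMPLE condenses the per-device
# report into "DEV#: n SERIAL: s PATHS: p" lines for illustration.

SAMPLE = """\
DEV#: 0 SERIAL: 06D23922 PATHS: 2
DEV#: 1 SERIAL: 06E23922 PATHS: 2
DEV#: 2 SERIAL: 06F23922 PATHS: 1
"""

def underpathed(report, expected=2):
    """Return serials of devices with fewer than `expected` paths."""
    out = []
    for line in report.splitlines():
        parts = line.split()
        serial, paths = parts[3], int(parts[5])
        if paths < expected:
            out.append(serial)
    return out

print(underpathed(SAMPLE))  # ['06F23922']
```

A device reported here still has only a single path; recheck its cabling and zoning before relying on failover.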

Total Devices : 6

DEV#: 0  DEVICE NAME: Disk1 Part0  TYPE: 2107900  POLICY: OPTIMIZED  SERIAL: 06D23922
Path#  Adapter/Hard Disk            State  Mode    Select  Errors
0      Scsi Port4 Bus0/Disk1 Part0  OPEN   NORMAL  108     0
1      Scsi Port5 Bus0/Disk1 Part0  OPEN   NORMAL  96      0

DEV#: 1  DEVICE NAME: Disk2 Part0  TYPE: 2107900  POLICY: OPTIMIZED  SERIAL: 06E23922
Path#  Adapter/Hard Disk            State  Mode    Select  Errors
0      Scsi Port4 Bus0/Disk2 Part0  OPEN   NORMAL  96      0
1      Scsi Port5 Bus0/Disk2 Part0  OPEN   NORMAL  95      0

DEV#: 2  DEVICE NAME: Disk3 Part0  TYPE: 2107900  POLICY: OPTIMIZED  SERIAL: 06F23922
Path#  Adapter/Hard Disk            State  Mode    Select  Errors
0      Scsi Port4 Bus0/Disk3 Part0  OPEN   NORMAL  96      0
1      Scsi Port5 Bus0/Disk3 Part0  OPEN   NORMAL  94      0

DEV#: 3  DEVICE NAME: Disk4 Part0  TYPE: 2107900  POLICY: OPTIMIZED  SERIAL: 07023922
Path#  Adapter/Hard Disk            State  Mode    Select  Errors
0      Scsi Port4 Bus0/Disk4 Part0  OPEN   NORMAL  94      0
1      Scsi Port5 Bus0/Disk4 Part0  OPEN   NORMAL  96      0

DEV#: 4  DEVICE NAME: Disk5 Part0  TYPE: 2107900  POLICY: OPTIMIZED  SERIAL: 07123922
Path#  Adapter/Hard Disk            State  Mode    Select  Errors
0      Scsi Port4 Bus0/Disk5 Part0  OPEN   NORMAL  90      0
1      Scsi Port5 Bus0/Disk5 Part0  OPEN   NORMAL  99      0

DEV#: 5  DEVICE NAME: Disk6 Part0  TYPE: 2107900  POLICY: OPTIMIZED  SERIAL: 07223922
Path#  Adapter/Hard Disk            State  Mode    Select  Errors
0      Scsi Port4 Bus0/Disk6 Part0  OPEN   NORMAL  98      0
1      Scsi Port5 Bus0/Disk6 Part0  OPEN   NORMAL  79      0

Uninstalling the SDD
Perform the following steps to uninstall the SDD on a Windows Server 2003 host system:
1. Shut down your Windows 2003 host system.
2. Ensure that there is a single-path connection from the system to the supported storage device.
3. Turn on your Windows 2003 host system.
4. Log on as the administrator user.
5. Click Start → Settings → Control Panel. The Control Panel opens.
6. Double-click Add/Remove Programs. The Add/Remove Programs window opens.
7. In the Add/Remove Programs window, select Subsystem Device Driver from the currently installed programs selection list.
8. Click Add/Remove. You will be asked to verify that you want to uninstall SDD.
9. Restart your Windows Server 2003 host system.

Tip: The SDD setup.exe program provides the following command line options for silent uninstall:
--> setup -s -u : silent uninstall
--> setup -s -u -n : silent uninstall, no reboot (requires SDD 1.6.0.6 or later)

SAN boot support
Use the following procedures for remote boot support.

Booting a SAN device with Windows Server 2003 and the SDD using Qlogic HBA <BIOS 1.43> or later
Perform the following steps to install the SDD:
1. Configure the SAN environment so that both Qlogic HBAs in the host system can see the SAN boot device. Ensure that there is a single-path connection from each of the Qlogic HBAs to the SAN boot device.
2. Turn on the host system with 2 fibre-channel cables connected to both HBAs.
3. Press CTRL-Q when the HBA banner appears during boot process.
4. Select the first HBA from the displayed list.
5. Select Configuration Settings.
6. Select Host Adapter Settings.
7. Select Host Adapter BIOS and enable it.
8. Press the Page Up key to back up one menu.
9. Select Selectable Boot Settings.
10. Under Selectable Boot Settings, enable Selectable Boot.
11. Under (Primary) Boot Port Name, LUN, select the IBM device that will be providing the storage for SAN boot. At the Select LUN prompt, select the first supported LUN, which is LUN 0. This returns you to the previous screen, which now has information under (Primary) Boot Port Name, LUN for the device you selected in the previous step.
12. Press the Page Up key twice to exit the menus. Select Save Changes to save the changes.
13. Select the second HBA and repeat steps 4-12.
14. Unplug the fibre-channel cable from the second HBA and plug the fibre-channel cable to the first HBA so that you have a single path from the first HBA to the SAN device.
15. Insert the Windows Server 2003 with latest Service Pack CD-ROM into the CD-ROM drive.
16. Restart the host system.
17. At the first Windows Server 2003 installation screen, quickly press F6 to install a third-party device. Select S to specify an additional device.
18. Insert the diskette with the Qlogic driver into the diskette drive and press Enter.
19. Continue with the Windows Server 2003 installation process. Remember to select the SAN device seen by the Qlogic HBA as the device to install Windows Server 2003.
20. After Windows Server 2003 is successfully installed on the SAN boot device, shut down the system.
21. Unplug the fibre-channel cable from the first HBA and plug the fibre-channel cable to the second HBA so that you have a single path from the second HBA to the SAN device.
22. Restart the host system to boot from the SAN.
23. Install the latest SDD version on your host system and restart.
24. To add multipath support to the SAN boot device, perform the following steps:
a. Shut down the host system.
b. Configure the SAN to allow additional paths to the SAN boot device, if needed.
c. Connect all FC cables.
25. Restart the host system.

Booting a SAN device with IA64-bit Windows Server 2003 and the SDD using a Qlogic HBA
Perform the following steps to install the SDD:
1. Load EFI code v1.07 into QLogic HBA flash:
a. Build the QLogic EFI code using the ISO file.
b. Insert the EFI code CD-ROM in the CD-ROM drive.
c. At the EFI prompt, enter the following commands:
fs0
flasutil
After some time, the flash utility starts. It displays the addresses of all available QLogic adapters.
d. Select the address of each HBA and select the f option to load code into flash memory.
2. Enable the boot option in the QLogic EFI configuration:
a. At the EFI shell prompt, enter drivers -b. A list of installed EFI drivers is displayed. Locate the driver named QlcFC SCSI PASS Thru Driver. Determine the DRVNUM of that driver.
b. For each HBA for which you need to configure the boot option:
1) Enter DrvCfg DRVNUM.
2) A list of adapters under this driver is displayed. Each adapter has its own CTRLNUM.
3) For each HBA, enter Drvcfg -s DRVNUM CTRLNUM.
c. At the QLcConfig> prompt, enter b to enable the boot option, enter c for the connection option, or enter d to display the storage back-end WWN. The topology should be point-to-point.
d. Exit the EFI environment.
e. Reboot the system.
3. Connect the USB drive to the system.
4. Insert the disk that contains the ramdisk.efi file. You can obtain this file from the Intel Application Tool Kit in the binaries\sal64 directory. See www.intel.com/technology/efi/
5. The USB drive should be attached to fs0. Enter the following command:
fs0: load ramdisk.efi
This will create virtual storage.

6. Enter map -r to refresh. Assume that fs0 is virtual storage and fs1 is the USB drive. You can enter map -b to find out fs0.
7. Enter copy fs1:\*.* This will copy the QLogic driver to the virtual storage.
8. Install the Windows Server 2003 64-bit OS on the SAN device:
a. At the very first Windows Server 2003 installation screen, quickly press F6 to install a third-party device.
b. Insert the diskette that contains the QLogic driver for your QLA HBAs. Use the QLogic driver loaded from virtual storage.
c. Select the first ESS volume seen by the QLogic HBA as the device on which to install Windows Server 2003.
d. Install the Windows Server 2003 Service Pack, if applicable.
9. Install SDD.
10. Add multipaths to ESS.

Booting from a SAN device with Windows Server 2003 and SDD using an EMULEX HBA <Firmware v3.92a2, v1.x5> or later
Perform the following steps to set up a SAN boot device with Windows Server 2003 and SDD using an Emulex HBA:
1. Configure the SAN environment so that both Emulex HBAs in the host system can see the SAN boot device. Ensure that there is a single-path connection from each of the Emulex HBAs to the SAN boot device.
2. Power up the host system with 2 fibre-channel cables connected to both HBAs.
3. Press Alt+E to go to the Emulex Bios Utilities.
4. Select the first HBA.
5. Select Configure HBA Parameter Settings.
6. Select Option 1 to enable BIOS for this HBA.
7. Press Page-up to go back and select Configure boot device.
8. Select first unused boot device for Select Boot Entry from the List Of Saved Boot Devices.
9. Select 01 for Select The Two Digit Number Of The Desired Boot Device.
10. Enter 00 for Enter Two Digit Of Starting LUNs (hexadecimal).
11. Select device number 01 for Enter Selection For Starting LUN. Select Boot Device Via WWPN.
12. Page up and select the second HBA. Repeat steps 4-11 to configure boot support for this HBA.
13. Unplug the fibre-channel cable from the second HBA and plug the fibre-channel cable to the first HBA so that you have a single path from the first HBA to the SAN device.
14. Insert the Windows Server 2003 with the latest Service Pack CD-ROM into the CD-ROM drive.
15. Restart the host system.
16. At the first Windows 2003 installation panel, press F6 to install a third-party device.
17. Select S to specify an additional device.
18. Insert the diskette with the Emulex HBA driver into the diskette drive and press Enter.
19. Continue with the OS installation. Remember to select the SAN device seen by the Emulex HBA as the device to install Windows Server 2003.
20. After Windows Server 2003 is successfully installed on the SAN boot device, shut down the host system.
21. Unplug the fibre-channel cable from the first HBA and plug the fibre-channel cable to the second HBA so that you have a single path from the second HBA to the SAN device.
22. Restart the host system to boot from the SAN.
23. Install the latest SDD and reboot.
24. To add multipath support to the SAN boot device, perform the following steps:
a. Shut down the host system.
b. Configure the SAN to add multipaths to the SAN boot device, if needed.
c. Reconnect all fibre-channel cables.

18. Insert the diskette with the Emulex HBA driver into the diskette drive and press Enter.
19. Continue with the OS installation. Remember to select the SAN device seen by the Emulex HBA as the device to install Windows Server 2003. Continue with the Windows Server 2003 installation process.
20. After Windows Server 2003 is successfully installed on the SAN boot device, shut down the host system.
21. Unplug the fibre-channel cable from the first HBA and plug the fibre-channel cable into the second HBA so that you have a single path from the second HBA to the SAN device.
22. Restart the host system to boot from the SAN.
23. Install the latest SDD and reboot.
24. To add multipath support to the SAN boot device, perform the following steps:
    a. Shut down the host system.
    b. Configure the SAN to add multipaths to the SAN boot device if needed.
    c. Reconnect all fibre-channel cables.
25. Restart the host system.

SAN boot disk migration

The datapath set bootdiskmigrate command migrates a SAN boot host with SDD from one storage type to another.

datapath set bootdiskmigrate device type

where,

device type
   The device type to which you are migrating.

To migrate the boot disk from a Model 2105 to a Model 2107:
1. Enter the datapath set bootdiskmigrate 2107 command on the remote boot host that needs to be migrated.
2. Shut down the host.
3. Using Metro Mirror, PPRC, or any other tool, migrate the data to the 2107 disk.
4. Boot the host from the 2107 disk instead of the 2105 disk.

Support for Windows Server 2003 clustering

SDD 1.5.x.x does not support I/O load balancing in a Windows Server 2003 clustering. SDD 1.6.0.0 (or later) is required to support load balancing in a Windows Server 2003 clustering. When running Windows Server 2003 clustering, clustering failover might not occur when the last path is being removed from the shared resources. See Microsoft article Q294173 for additional information. Windows Server 2003 does not support dynamic disks in the MSCS environment.
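The bootdiskmigrate syntax described above can be sketched as a small wrapper. This is a hypothetical helper, not part of the documented command set, and it only accepts the 2105 and 2107 device types that the text mentions:

```shell
# build_bootdiskmigrate_cmd is a hypothetical helper that assembles the
# documented command line, for example "datapath set bootdiskmigrate 2107".
# Only the 2105 and 2107 device types from the text are accepted here.
build_bootdiskmigrate_cmd() {
  local device_type="$1"
  case "$device_type" in
    2105|2107)
      printf 'datapath set bootdiskmigrate %s\n' "$device_type"
      ;;
    *)
      echo "unsupported device type: $device_type" >&2
      return 1
      ;;
  esac
}
```

On a real remote boot host, the assembled command would be run before shutting down and migrating the data, as in the numbered steps above.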

Special considerations in the Windows Server 2003 clustering environment

Note: The adapter goes to DEGRAD state when there are active paths left on the adapter. It goes to FAILED state when there are no active paths.

The datapath set adapter # offline command operates differently in a SCSI Reserve/Release support clustering environment as compared to a nonclustering or Persistent Reserve/Release environment. In a SCSI Reserve/Release support clustering environment, the datapath set adapter offline command does not change the condition of the path if the path is active or being reserved. If you issue the command, the following message is displayed: to preserve access some paths left online.

Configure Windows 2003 cluster with the SDD installed

The following variables are used in this procedure:

server_1
   Represents the first server with two HBAs.
server_2
   Represents the second server with two HBAs.

Perform the following steps to configure a Windows Server 2003 cluster with SDD:
1. On both server_1 and server_2, configure SAN devices on supported storage as shared for all HBAs.
2. Install the latest SDD on server_1. For installation instructions, see “Installing SDD” on page 384.
3. Connect fibre-channel cables from server_1 to the supported storage, and restart server_1.
4. Use the datapath query adapter and datapath query device commands to verify the correct number of SAN devices and paths on server_1.
5. Click Start → All Programs → Administrative Tools → Computer Management. From the Computer Management window, select Storage and then select Disk Management to work with the storage devices attached to the host system.
6. Format the raw devices with NTFS and assign drive letters for all SAN devices that are going to be used as MSCS resources. Ensure that you keep track of the assigned drive letters on server_1.
7. Shut down server_1.
8. Install the latest SDD on server_2. For installation instructions, see “Installing SDD” on page 384.
9. Connect fibre-channel cables from server_2 to the supported storage device, and restart server_2.
10. Use the datapath query adapter and datapath query device commands to verify the correct number of SAN devices and paths on server_2.
11. Click Start → All Programs → Administrative Tools → Computer Management. From the Computer Management window, select Storage and then select Disk Management to work with the storage devices attached to the host system. Verify that the assigned drive letters for MSCS resources on server_2 match the assigned drive letters on server_1.
12. Insert the Windows Server 2003 CD-ROM into the CD-ROM drive and install the MSCS software on server_2.
13. Restart server_1.

14. Insert the Windows Server 2003 CD-ROM into the CD-ROM drive and install the MSCS software on server_1 as the second node of the MSCS cluster.

Information about installing a Windows 2003 cluster can be found in the file confclus.exe, located at:

http://www.microsoft.com/downloads/details.aspx?familyid=96F76ED7-9634-4300-9159-89638F4B4EF7&displaylang=en

The procedures listed in this section can be expanded to upgrade SDD or to uninstall SDD in an n-node cluster environment.

Upgrading SDD in a two-node cluster environment

Perform the following steps for upgrading SDD in a two-node cluster environment:
1. Move all cluster resources from node A to node B.
2. On node A, follow the instructions from “Upgrading the SDD” on page 387.
3. After node A is started, move all resources from node B to node A.
4. On node B, follow the instructions from “Upgrading the SDD” on page 387.

Uninstalling SDD in a two-node cluster environment

Perform the following steps to uninstall SDD in a two-node cluster environment:
1. Move all cluster resources from node A to node B.
2. Ensure that there is a single-path connection from the system to the storage device, which may include the following activities:
   a. Disable access of the second HBA to the storage device.
   b. Change the zoning configuration to allow only one port accessed by this host.
   c. Remove shared access to the second HBA.
   d. Remove multiple supported storage port access, if applicable.
3. Uninstall SDD. See “Uninstalling the SDD” on page 391 for instructions.
4. Restart your system.
5. Move all cluster resources from node B to node A.
6. Perform steps 2 - 5 on node B.

SDD server daemon

The SDD server (also referred to as sddsrv) is an integrated component of SDD 1.6.0.0 (or later). This component consists of a Windows application daemon that is installed in addition to the SDD device driver.

Verifying if the SDD server has started

After you have installed SDD, verify if the SDD server (sddsrv) has automatically started:
1. Click Start → Programs → Administrative Tools → Computer Management.
2. Expand the Services and Applications tree.
3. Click Services.
4. Right-click SDD_Service.
5. The status of SDD Service should be Started if the SDD server has automatically started.
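The datapath query adapter checks used in the clustering procedure above (verifying that server_1 and server_2 see the same number of paths) can be automated. A minimal sketch, assuming the report has been saved to a file in the columnar layout shown elsewhere in this chapter:

```shell
# count_active_paths sums the Active column of a saved `datapath query
# adapter` report, so the totals captured on server_1 and server_2 can be
# compared. Data rows are recognized by a numeric Adpt# in the first field.
count_active_paths() {
  awk '$1 ~ /^[0-9]+$/ { sum += $NF } END { print sum + 0 }' "$1"
}
```

Run against reports saved on each node, matching totals are a quick sanity check before installing the MSCS software.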

Starting the SDD server manually

If the SDD server did not start automatically after you performed the SDD installation, you can start sddsrv:
1. Click Start → Programs → Administrative Tools → Computer Management.
2. Expand the Services and Applications tree.
3. Click Services.
4. Right-click SDD_Service.
5. Click Start.

Changing to a different port number for the SDD server

To change to a different port number for the SDD server, see the instructions in “Changing the sddsrv or pcmsrv TCP/IP port number” on page 419.

Stopping the SDD server

To stop the SDD server, perform the following steps:
1. Click Start → Programs → Administrative Tools → Computer Management.
2. Expand the Services and Applications tree.
3. Click Services.
4. Right-click SDD_Service.
5. Click Stop.
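As an alternative to the Computer Management GUI steps above, the standard Windows `sc query` command reports the same service state. The sketch below parses saved `sc query SDD_Service` output; the sample layout in the usage is the usual `sc` report format, which is an assumption here rather than something shown in this guide:

```shell
# sddsrv_state extracts the STATE value (for example RUNNING or STOPPED)
# from saved `sc query SDD_Service` output. `sc` prints the state as
# "STATE : 4  RUNNING", so the last field of the STATE line is the name.
sddsrv_state() {
  awk '/STATE/ { print $NF; exit }' "$1"
}
```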

Chapter 11. Using SDDDSM on a Windows Server 2003 or Windows Server 2008 host system

Subsystem Device Driver Device Specific Module (SDDDSM) provides multipath I/O support based on the MPIO technology of Microsoft. SDDDSM is a device-specific module designed to provide support for supported storage devices.

This chapter provides procedures for you to install, configure, use, and remove SDDDSM on a Windows Server 2003 or a Windows Server 2008 host system that is attached to a supported storage device.

For updated and additional information that is not included in this chapter, see the readme file on the CD-ROM or visit the SDDDSM Web site:

www.ibm.com/servers/storage/support/software/sdd

Verifying the hardware and software requirements

You must have the following hardware and software components in order to install SDDDSM:

Hardware
The following hardware components are needed:
v Supported storage devices
v Host system
v Fibre-channel adapters and cables

Software
The following software components are needed:
v Operating system:
– Windows Server 2003 or Windows Server 2008 (32-bit)
– Windows Server 2003 or Windows Server 2008 (x64)
v Device driver for fibre-channel adapters (it must be a STORPORT-based miniport driver)
v SDDDSM installation package, which includes:
– MPIO drivers. The SDDDSM package is used to install and uninstall MPIO. MPIO is not shipped with the Windows Server 2003 operating system but is shipped with the Windows Server 2008 operating system. On Windows Server 2003, the MPIO drivers that are shipped with the SDDDSM package are used, while for Windows Server 2008, the MPIO drivers that are shipped with the operating system are used.
– SDDDSM driver: sdddsm.sys
– SDDDSM installer: setup.exe. Install the package from the %ProgramFiles%\IBM\SDDDSM directory of the SDDDSM CD-ROM or the location where the SDDDSM package was saved.
– SDDDSM readme file
– SDDDSM command windows shortcut

Unsupported environments

SDDDSM does not support the following environments:
v Single-path mode during code distribution and activation of LMC nor during any disk storage system concurrent maintenance that impacts the path attachment, such as a disk storage system host-bay-adapter replacement.
v DS8000 and DS6000 do not support SCSI connectivity.
v SDDDSM is not supported on the Windows Server 2003 Web edition.

Host system requirements

To successfully install SDDDSM, your Windows Server 2003 or Windows Server 2008 host system must be an x86-based 32-bit system, an AMD-based 64-bit system, or an Intel EM64T-based system with Windows Server 2003 installed. The host system can be a uniprocessor system or a multiprocessor system.

To install all components, you must have at least 1 MB (MB equals approximately 1 000 000 bytes) of disk space available on the drive where Windows Server 2003 or Windows Server 2008 is installed.

To support SDDDSM, your Windows Server 2003 or Windows Server 2008 host system must have the following installed:
v SDDDSM
v Any Windows Server 2003 or Windows Server 2008 hotfixes, when applicable
v Any HBA-required Windows Server 2003 or Windows Server 2008 hotfixes, when applicable

For additional information, see the SDDDSM readme and the HBA driver readme or release notes.

Fibre-channel requirements

To use the SDDDSM fibre-channel support, ensure that your host system meets the following requirements:
v No more than 32 fibre-channel adapters are attached.
v A fiber-optic cable connects each fibre-channel adapter to a disk storage system port.
v If you need the SDDDSM I/O load-balancing and failover features, ensure that a minimum of two fibre-channel adapters are installed.

Note: If your host has only one fibre-channel adapter, it requires you to connect through a switch to multiple disk storage system ports. You should have at least two fibre-channel adapters to prevent data loss due to adapter hardware failure or software failure.

For information about the fibre-channel adapters that can attach to your Windows Server 2003 or Windows Server 2008 host system, go to the following Web site: www.ibm.com/servers/storage/support

Preparing for SDDDSM installation

Before installing SDDDSM, you must:
1. Ensure that all hardware and software requirements are met before you install SDDDSM. See “Verifying the hardware and software requirements” on page 399 for more information.

2. Configure the fibre-channel adapters that are attached to your Windows Server 2003 or Windows Server 2008 host system. See “Configuring fibre-channel adapters” for more information.
3. For SDDDSM uninstallation and installation instructions, see “Uninstalling SDDDSM” on page 408 and “Installing SDDDSM.”

Configuring the supported storage device

Before you install SDDDSM, configure your supported storage device for single-port or multiport access for each LUN. To use the load-balancing and failover features, SDDDSM requires a minimum of two independent paths that share the same logical unit.

For information about configuring your supported storage device, see the Implementation Guide or Configuration Guide for your product.

Configuring fibre-channel adapters

You must configure the fibre-channel adapters that are attached to your Windows Server 2003 host system before you install SDDDSM. Follow the adapter-specific configuration instructions to configure the adapters attached to your Windows Server 2003 or Windows Server 2008 host systems.

Installing SDDDSM

You can install SDDDSM either from a CD-ROM or download. After it is installed, you can update SDDDSM or display the current version number.

Installing for the first time

The following section describes how to install SDDDSM on your system.

Note: Ensure that SDDDSM is installed before adding additional paths to a device. Otherwise, the Windows Server 2003 or Windows Server 2008 server could lose the ability to access existing data on that device.

Installing SDDDSM from CD-ROM

Perform the following steps to install SDDDSM on your system:
1. If this is a new installation, ensure that there is a single connection from your host to your storage. Multipath access to the storage can be added after SDDDSM is installed.
2. Log on as the administrator user. If you plan to install SDDDSM from CD-ROM, continue with step 3. If you have the zip file for the SDDDSM package available, go to step 8.
3. Insert the SDDDSM installation CD-ROM into the selected drive.
4. Start the Windows Server 2003 or Windows Server 2008 Explorer program.
5. Select the CD-ROM drive. A list of all the installed directories on the compact disc is displayed.
6. Select the %ProgramFiles%\IBM\SDDDSM installation subdirectory and go to step 9.
7. If you still do not have the zip file for the SDDDSM package available, go to the SDD Web site and download and save it to a directory.
8. Extract the zip file for the SDDDSM package to a directory and go to that directory.
9. Run the setup.exe program. Follow the instructions.

10. Shut down your Windows Server 2003 or Windows Server 2008 host system.
11. Connect additional cables to your storage if needed.
12. Make any necessary zoning configuration changes.
13. Restart your Windows Server 2003 or Windows Server 2008 host system.

After completing the installation procedures and when you log on again, you will see an SDDDSM entry in your Program menu containing the following selections:
1. Subsystem Device Driver DSM
2. SDDDSM Technical Support Web site
3. README

Notes:
1. You can verify that SDDDSM has been successfully installed by issuing the datapath query device command. If the command runs, SDDDSM is installed. You must issue the datapath command from the datapath directory.
2. You can also use the following operation to verify that SDDDSM has been successfully installed:
   a. Click Start → Programs → Administrative Tools → Computer Management.
   b. Double-click Device Manager.
   c. Expand Disk drives in the right pane. In Figure 9 on page 403, there are eight SAN Volume Controller devices connected to the host and four paths to each of the SAN Volume Controller devices. The Device Manager shows eight 2145 Multipath Disk Devices and 32 2145 SDDDSM SCSI Devices.
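The Device Manager counts in note 2 follow from one multipath disk device per LUN plus one SCSI device per path, so eight disks with four paths each yield 32 SCSI devices. A trivial sketch of that arithmetic (the helper name is illustrative):

```shell
# expected_scsi_devices computes disks * paths_per_disk; with the example
# above, 8 multipath disk devices with 4 paths each give 32 SCSI devices.
expected_scsi_devices() {
  echo $(( $1 * $2 ))
}
```

If Device Manager shows fewer SCSI devices than this product, one or more paths are missing.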

Figure 9. Example showing SAN Volume Controller devices to the host and path access to the SAN Volume Controller devices in a successful SDDDSM installation on a Windows Server 2003 host system

Installing SDDDSM from downloaded code

Use this procedure to install SDDDSM from downloaded code:
1. Unzip the SDDDSM code to your installation subdirectory.
2. Run the setup.exe program and follow the instructions.
   Tip: The setup.exe program offers the following command line options for silent install:
   --> setup -s : silent install
   --> setup -s -n : silent install; no reboot (requires SDDDSM 2.1.1.0 or later)
3. When the setup.exe program is finished, you will be asked if you want to reboot. If you answer y, the setup.exe program will restart your SDDDSM system immediately. Follow the instructions to restart. Otherwise, the setup.exe program exits, and you need to manually restart your SDDDSM system to activate the new installation.
4. Shut down your SDDDSM host system.
5. Reconnect all cables that connect the host bus adapters and the supported storage devices if needed.
6. Change any zoning information that needs to be updated.
7. Restart your SDDDSM host system.

You can also verify the current version of SDDDSM. For more information, go to “Displaying the current version of SDDDSM” on page 404.
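The tip above gates `setup -s -n` on SDDDSM level 2.1.1.0. A hedged sketch of that version check using a field-wise numeric sort; the function name is an assumption, and the comparison assumes the usual four-part dotted version format:

```shell
# supports_silent_no_reboot succeeds when the given dotted SDDDSM version is
# at or above 2.1.1.0, the minimum level the text gives for `setup -s -n`.
# It sorts the two versions numerically by field and checks which is lowest.
supports_silent_no_reboot() {
  printf '%s\n2.1.1.0\n' "$1" \
    | sort -t. -k1,1n -k2,2n -k3,3n -k4,4n \
    | head -n 1 \
    | grep -qx '2\.1\.1\.0'
}
```

A wrapper script could use this to fall back to plain `setup -s` on older levels.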

Upgrading SDDDSM

Use the following procedure to upgrade SDDDSM.
1. Log in as administrator.
2. Open Windows Explorer and go to the directory where the SDDDSM package is located.
3. Double-click the file setup.exe.
4. Follow the instructions to continue with the SDDDSM setup procedure.
5. When the upgrade is complete, SDDDSM will ask you to reboot. Answer yes to reboot the system and activate the new SDDDSM.

You can check the SDDDSM version to verify that SDDDSM has been successfully upgraded. For more information, see “Displaying the current version of SDDDSM.”

Displaying the current version of SDDDSM

You can display the current version of SDDDSM on a Windows Server 2003 host system by viewing the sdddsm.sys file properties. Perform the following steps to view the properties of the sdddsm.sys file:
1. Click Start → Programs → Accessories → Windows Explorer to open Windows Explorer.
2. In Windows Explorer, go to the %SystemRoot%\system32\drivers directory, where %SystemRoot% is %SystemDrive%\Windows for Windows Server 2003 or Windows Server 2008. If Windows is installed on the C: drive, %SystemDrive% is C:. If Windows is installed on the E: drive, %SystemDrive% is E:.
3. Right-click the sdddsm.sys file, and then click Properties. The sdddsm.sys properties window opens.
4. In the sdddsm.sys properties window, click Version. The file version and copyright information about the sdddsm.sys file is displayed.

If the SDDDSM version is 2.1.1.0 or later, you can use the datapath query version command:
1. Open the SDDDSM command window.
2. Issue the datapath query version command.

Configuring SDDDSM

Use these topics to configure SDDDSM.

Maximum number of LUNs

There is no physical limitation on the maximum number of devices that can be configured for SDDDSM. Any limitation exists on the host operating system. A practical limit is 250 LUNs. Consult the Host Attachment Guide for your product.

Verifying the configuration

To activate SDDDSM, you need to restart your Windows Server 2003 or Windows Server 2008 system after it is installed.
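Step 2 of the sdddsm.sys procedure above resolves %SystemRoot% from the system drive. A small sketch of that path construction; the helper name is illustrative, and the Windows directory name follows the %SystemDrive%\Windows convention stated in the text:

```shell
# sdddsm_driver_path builds the sdddsm.sys location from %SystemDrive%,
# for example C: -> C:\Windows\system32\drivers\sdddsm.sys.
# The suffix is passed as a %s argument so backslashes stay literal.
sdddsm_driver_path() {
  printf '%s%s\n' "$1" '\Windows\system32\drivers\sdddsm.sys'
}
```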

Attention: Ensure that SDDDSM is installed and activated before you add additional paths to a device. Otherwise, the Windows Server 2003 or Windows Server 2008 server could lose the ability to access existing data on that device.

Before adding any additional hardware, review the configuration information for the adapters and devices currently on your Windows Server 2003 server. Perform the following steps to display information about the adapters and devices:
1. You must log on as an administrator user to have access to the Windows Server 2003 or Windows Server 2008 Computer Management.
2. Click Start → Program → Subsystem Device Driver DSM → Subsystem Device Driver Management. An MS-DOS window opens.
3. Enter datapath query adapter and press Enter. The output includes information about all the installed adapters. In the example shown in the following output, one HBA is installed:

Active Adapters : 1

Adpt#  Adapter Name     State    Mode     Select  Errors  Paths  Active
    0  Scsi Port4 Bus0  NORMAL   ACTIVE      592       0      6       6

4. Enter datapath query device and press Enter. In the example shown in the following output, six devices are attached to the SCSI path:

Total Devices : 6

DEV#: 0 DEVICE NAME: Disk1 Part0 TYPE: 2107900 POLICY: OPTIMIZED
SERIAL: 06D23922
============================================================================
Path#            Adapter/Hard Disk    State  Mode    Select  Errors
    0  Scsi Port4 Bus0/Disk1 Part0     OPEN  NORMAL     108       0

DEV#: 1 DEVICE NAME: Disk2 Part0 TYPE: 2107900 POLICY: OPTIMIZED
SERIAL: 06E23922
============================================================================
Path#            Adapter/Hard Disk    State  Mode    Select  Errors
    0  Scsi Port4 Bus0/Disk2 Part0     OPEN  NORMAL      96       0

DEV#: 2 DEVICE NAME: Disk3 Part0 TYPE: 2107900 POLICY: OPTIMIZED
SERIAL: 06F23922
============================================================================
Path#            Adapter/Hard Disk    State  Mode    Select  Errors
    0  Scsi Port4 Bus0/Disk3 Part0     OPEN  NORMAL      96       0

DEV#: 3 DEVICE NAME: Disk4 Part0 TYPE: 2107900 POLICY: OPTIMIZED
SERIAL: 07023922
============================================================================
Path#            Adapter/Hard Disk    State  Mode    Select  Errors
    0  Scsi Port4 Bus0/Disk4 Part0     OPEN  NORMAL      94       0

DEV#: 4 DEVICE NAME: Disk5 Part0 TYPE: 2107900 POLICY: OPTIMIZED
SERIAL: 07123922
============================================================================
Path#            Adapter/Hard Disk    State  Mode    Select  Errors
    0  Scsi Port4 Bus0/Disk5 Part0     OPEN  NORMAL      90       0

DEV#: 5 DEVICE NAME: Disk6 Part0 TYPE: 2107900 POLICY: OPTIMIZED
SERIAL: 07223922
============================================================================
Path#            Adapter/Hard Disk    State  Mode    Select  Errors
    0  Scsi Port4 Bus0/Disk6 Part0     OPEN  NORMAL      98       0
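A report like the one above can also be summarized mechanically. This sketch counts DEV# headers and path rows from saved `datapath query device` output; it assumes the layout shown above, where each path row starts with a numeric Path# followed by "Scsi Port...":

```shell
# summarize_device_report prints "<devices> <paths>" for a saved
# `datapath query device` report: DEV#: lines mark devices, and rows whose
# first field is numeric and second field is "Scsi" are individual paths.
summarize_device_report() {
  awk '/^DEV#:/ { d++ }
       $2 == "Scsi" && $1 ~ /^[0-9]+$/ { p++ }
       END { printf "%d %d\n", d + 0, p + 0 }' "$1"
}
```

For the six-device, single-path example above this prints "6 6"; comparing the path count before and after cabling changes is a quick multipath check.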

Activating additional paths

Perform the following steps to activate additional paths to an SDDDSM vpath device:
1. Install any additional hardware on the Windows Server 2003 or Windows Server 2008 server or the SAN Volume Controller.
2. Click Start → Program → Administrative Tools → Computer Management.
3. Click Device Manager.
4. Right-click Disk drives.
5. Click Scan for hardware changes.
6. Verify that the path is added correctly. See “Verifying that additional paths are installed correctly.”

Verifying that additional paths are installed correctly

After installing additional paths to SDDDSM devices, verify that the additional paths have been installed correctly.

Perform the following steps to verify that the additional paths have been installed correctly:
1. Click Start → Program → Subsystem Device Driver DSM → Subsystem Device Driver Management. An MS-DOS window opens.
2. Enter datapath query adapter and press Enter. The output includes information about any additional adapters that were installed. In the example shown in the following output, an additional HBA has been installed:

Active Adapters : 2

Adpt#  Adapter Name     State    Mode     Select  Errors  Paths  Active
    0  Scsi Port2 Bus0  NORMAL   ACTIVE   391888     844     16      16
    1  Scsi Port3 Bus0  NORMAL   ACTIVE   479686     566     16      16

3. Enter datapath query device and press Enter. The output should include information about any additional devices that were installed. In this example, the output includes information about the new HBA and the new device numbers that were assigned. The following output is displayed:

====================================================================================== Path# Adapter/Hard Disk State Mode Select Errors 0 Scsi Port2 Bus0/Disk6 Path0 OPEN NORMAL 45604 107 1 Scsi Port2 Bus0/Disk6 Path1 OPEN NORMAL 45 6 2 Scsi Port3 Bus0/Disk6 Path2 OPEN NORMAL 60839 76 3 Scsi Port3 Bus0/Disk6 Path3 OPEN NORMAL 31 4 DEV#: 5 DEVICE NAME: \Device\Harddisk7\DR0 TYPE: 2145 POLICY: OPTIMIZED SERIAL: 6005076801968009A800000000000029 ====================================================================================== Path# Adapter/Hard Disk State Mode Select Errors 0 Scsi Port2 Bus0/Disk7 Path0 OPEN NORMAL 46439 80 1 Scsi Port2 Bus0/Disk7 Path1 OPEN NORMAL 423 6 2 Scsi Port3 Bus0/Disk7 Path2 OPEN NORMAL 50638 76 3 Scsi Port3 Bus0/Disk7 Path3 OPEN NORMAL 10226 4 DEV#: 6 DEVICE NAME: \Device\Harddisk8\DR0 TYPE: 2145 POLICY: OPTIMIZED SERIAL: 6005076801968009A800000000000028 ====================================================================================== Path# Adapter/Hard Disk State Mode Select Errors 0 Scsi Port2 Bus0/Disk8 Path0 OPEN NORMAL 42857 92 1 Scsi Port2 Bus0/Disk8 Path1 OPEN NORMAL 46 6 2 Scsi Port3 Bus0/Disk8 Path2 OPEN NORMAL 61256 53 3 Scsi Port3 Bus0/Disk8 Path3 OPEN NORMAL 31 4 DEV#: 7 DEVICE NAME: \Device\Harddisk9\DR0 TYPE: 2145 POLICY: OPTIMIZED SERIAL: 6005076801968009A80000000000002A ====================================================================================== Path# Adapter/Hard Disk State Mode Select Errors 0 Scsi Port2 Bus0/Disk9 Path0 OPEN NORMAL 2161 62 1 Scsi Port2 Bus0/Disk9 Path1 OPEN NORMAL 108007 27 2 Scsi Port3 Bus0/Disk9 Path2 OPEN NORMAL 50767 50 3 Scsi Port3 Bus0/Disk9 Path3 OPEN NORMAL 10214 4 Chapter 11.

Uninstalling SDDDSM

Attention:
1. You must install SDDDSM immediately before performing a system restart to avoid any potential data loss. Go to “Installing SDDDSM” on page 401 for instructions.
2. If you are not planning to reinstall SDDDSM after the uninstallation, ensure that there is a single-path connection from the system to the storage device before performing a restart to avoid any potential data loss.

Perform the following steps to uninstall SDDDSM on a Windows Server 2003 or Windows Server 2008 host system:
1. Log on as the administrator user.
2. Click Start → Settings → Control Panel. The Control Panel opens.
3. Double-click Add/Remove Programs. The Add/Remove Programs window opens.
4. In the Add/Remove Programs window, select Subsystem Device Driver DSM from the currently installed programs selection list.
5. Click Add/Remove. You will be asked to confirm that you want to uninstall.
6. Shut down your Windows Server 2003 or Windows Server 2008 host system after the uninstallation process has been completed.
7. Change the zoning configuration or cable connections to ensure that there is only a single-path connection from the system to the storage device.
8. Power on your Windows Server 2003 or Windows Server 2008 host system.

SAN boot support

Use the following procedures for SAN boot support.

Remote boot support for 32-bit Windows Server 2003 or Windows Server 2008 using a QLogic HBA

Perform the following steps to install SDD:
1. Configure the SAN environment so that both Qlogic HBAs in the server can see the SAN boot device.
2. Start the server with 2 fibre-channel cables connected to both HBAs.
3. Press Ctrl+Q to go to Qlogic Bios Fast Utilities.
4. Select the first HBA.
5. Select Configuration Settings.
6. Select Host Adapter Setting.
7. Enable the BIOS.
8. Press ESC.
9. Select Selectable Boot Settings.
10. Enable Selectable Boot.
11. Select first (primary) boot and press Enter.
12. Select IBM storage device and press Enter.
13. At the Select LUN prompt, select the first supported LUN, which is LUN 0.

14. Press Esc and select Save Changes.
15. Select the second HBA and repeat steps 5-13.
16. Remove the fibre-channel cable from the second HBA so that you have only a single path to the first HBA.
17. Restart the server with the Windows Server 2003 or Windows Server 2008 with latest Service Pack CD-ROM.
18. At the very first Windows 2003 installation screen, quickly press F6 to install the third-party device. For Windows Server 2008, skip to step 20.
19. Insert the diskette with the Qlogic storport miniport driver into the diskette drive and press Enter.
20. Continue with the Windows Server 2003 or Windows Server 2008 installation process. Remember to select the SAN device that is seen by the Qlogic HBA as the device to install Windows Server 2003 or Windows Server 2008.
21. After Windows Server 2003 or Windows Server 2008 is successfully installed on the SAN boot device, shut down the system.
22. Install the latest SDDDSM and restart.
23. To add multipath support to a SAN boot device:
    a. Shut down the server.
    b. Plug in the fibre-channel cable to the other HBA.
    c. Configure the SAN to have more paths to the SAN boot device if needed.
24. Restart the server. Boot the server with 2 fibre-channel cables connected to both HBAs. The system should come up in SAN Boot mode.

Booting from a SAN device with Windows Server 2003 or Windows Server 2008 and the SDD using an Emulex HBA

Note: The Automatic LUN Mapping checkbox of the Emulex Configuration Settings should be selected so that both HBA ports can see all assigned LUNs.

Perform the following steps.
1. Configure the SAN environment so that both Emulex HBAs in the server can see the SAN boot device.
2. Boot the server with 2 fibre-channel cables connected to both HBAs.
3. Press Alt+E to go to the Emulex BIOS Utilities.
4. Select the first HBA.
5. Select Configure HBA Parameter Settings.
6. Select Option 1 to enable BIOS for this HBA.
7. Press Page Up to go back. Then select Configure Boot Device.
8. Select the first unused boot device for Select Boot Entry from the List Of Saved Boot Devices.
9. Select device number 01 for Enter Selection For Starting LUN.
10. Select Boot Device Via WWPN.
11. Select 01 for Select The Two Digit Number Of The Desired Boot Device.
12. Enter 00 for Enter Two Digit Of Starting LUNs (hexadecimal).

13. Page up. Then select the second HBA. Repeat steps 5-12 to configure boot support for this HBA.
14. Unplug the fibre-channel cable from the second HBA and plug the fibre-channel cable into the first HBA so that you have a single path from the first HBA to the SAN device.
15. Restart the server with the Windows Server 2003 or Windows Server 2008 with latest Service Pack CD-ROM.
16. At the very first Windows 2003 installation screen, quickly press F6 to install third-party device. For Windows Server 2008, skip to step 19.
17. Select S to specify an additional device.
18. Insert the diskette with the Emulex HBA driver into the diskette drive and press Enter.
19. Continue with the Windows Server 2003 or Windows Server 2008 installation process. Remember to select the SAN device seen by the Emulex HBA as the device on which to install Windows 2003. Continue with the OS installation.
20. After Windows 2003 is successfully installed on the SAN Boot device, shut down the system.
21. Unplug the fibre-channel cable from the first HBA and plug in the fibre-channel cable to the second HBA so that you have a single path from the second HBA to the SAN device.
22. Restart the server. The system should be up in SAN boot mode.
23. Install the latest SDD and restart.
24. To add multipath support to a SAN boot device:
a. Shut down the server.
b. Plug in the fibre-channel cable to the other HBA.
c. Configure the SAN to have more paths to the SAN boot device if needed.
25. Restart the server.

Support for Windows Server 2003 or Windows Server 2008 clustering

When running Windows Server 2003 clustering, clustering failover might not occur when the last path is being removed from the shared resources. See Microsoft article Q294173 for additional information. Windows Server 2003 does not support dynamic disks in the MSCS environment.

Special considerations in the Windows Server 2003 clustering environment

There are subtle differences in the way that SDDDSM handles path reclamation in a Windows Server 2003 clustering environment compared to a nonclustering environment. When the Windows Server 2003 server loses a path in a nonclustering environment, the path condition changes from open to dead and the adapter condition changes from active to degraded. The adapter and path condition will not change until the path is made operational again. When the Windows Server 2003 server loses a path in a clustering environment, the path condition also changes from open to dead and the adapter condition changes from active to degraded. However, after a period of time, the path condition changes back to open and the adapter condition changes back to normal, even if the path has not been made operational again.

Note: The adapter goes to DEGRAD state when there are active paths left on the adapter. It goes to FAILED state when there are no active paths.
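The adapter-state rule in the note above can be summarized in a small model. The following sketch is illustrative only (SDDDSM's actual internal logic is not exposed by the product): an adapter is ACTIVE while all its paths are open, DEGRAD while some active paths remain, and FAILED when none do.

```python
# Toy model of the adapter/path conditions described above.
# Illustrative only; not SDDDSM's real implementation.

def adapter_state(path_states):
    """Derive the adapter condition from its path conditions."""
    active = [p for p in path_states if p == "open"]
    if len(active) == len(path_states):
        return "ACTIVE"
    return "DEGRAD" if active else "FAILED"

paths = ["open", "open"]
assert adapter_state(paths) == "ACTIVE"

paths[0] = "dead"   # the server loses one path
assert adapter_state(paths) == "DEGRAD"

paths[1] = "dead"   # no active paths are left on the adapter
assert adapter_state(paths) == "FAILED"
```

The model captures only the DEGRAD/FAILED distinction from the note; the clustering-specific behavior, where a dead path reverts to open after a period of time, is not modeled here.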

The datapath set adapter # offline command operates differently in a clustering environment as compared to a nonclustering environment. In a clustering environment, the datapath set adapter offline command does not change the condition of the path if the path is active or being reserved. If you issue the command, the following message is displayed: to preserve access some paths left online.

Configuring a Windows Server 2003 or Windows Server 2008 cluster with SDDDSM installed

The following variables are used in this procedure:
server_1 Represents the first server with two HBAs.
server_2 Represents the second server with two HBAs.

Perform the following steps to configure a Windows Server 2003 or Windows Server 2008 cluster with SDDDSM:
1. On both server_1 and server_2, configure SAN devices on supported storage as shared for all HBAs.
2. Install the latest SDDDSM on server_1. For installation instructions, see “Installing SDDDSM” on page 401.
3. Connect fibre-channel cables from server_1 to the supported storage, and restart server_1.
4. Use the datapath query adapter and datapath query device commands to verify the correct number of SAN devices and paths on server_1.
5. Click Start → All Programs → Administrative Tools → Computer Management. From the Computer Management window, select Storage and then select Disk Management to work with the storage devices attached to the host system.
6. Format the raw devices with NTFS and assign drive letters for all SAN devices that are going to be used as MSCS resources. Ensure that you keep track of the assigned drive letters on server_1.
7. Shut down server_1.
8. Install the latest SDDDSM on server_2. For installation instructions, see “Installing SDDDSM” on page 401.
9. Connect fibre-channel cables from server_2 to the supported storage, and restart server_2.
10. Use the datapath query adapter and datapath query device commands to verify the correct number of SAN devices and paths on server_2.
11. Click Start → All Programs → Administrative Tools → Computer Management. From the Computer Management window, select Storage and then select Disk Management to work with the storage devices attached to the host system. Verify that the assigned drive letters for MSCS resources on server_2 match the assigned drive letters on server_1.
12. Insert the Windows 2003 CD-ROM into the CD-ROM drive and install the MSCS software on server_2. For Windows 2008, enable the “Failover Clustering” feature and configure MSCS on server_2.
13. Restart server_1.
14. Insert the Windows 2003 CD-ROM into the CD-ROM drive and install the MSCS software on server_1 as the second node of the MSCS cluster. For Windows 2008, enable the “Failover Clustering” feature and configure MSCS on server_1 as the second node of the MSCS cluster.
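Verifying that the MSCS drive letters on server_2 match those on server_1 can be tedious with many LUNs. If you record the assignments from Disk Management on each node, a small script can flag mismatches. This is a hypothetical helper, not part of SDDDSM; the device names and drive letters below are invented examples.

```python
# Illustrative helper for the drive-letter verification step:
# compare the MSCS drive-letter assignments recorded from
# Disk Management on each server. The dictionaries below are
# hypothetical; record the real assignments from the Computer
# Management window on each node.

def mismatched_letters(server_1, server_2):
    """Return the device names whose assigned drive letters differ."""
    return sorted(
        name
        for name in server_1
        if server_2.get(name) != server_1[name]
    )

server_1 = {"SAN_LUN_0": "Q:", "SAN_LUN_1": "R:"}
server_2 = {"SAN_LUN_0": "Q:", "SAN_LUN_1": "S:"}

print(mismatched_letters(server_1, server_2))  # ['SAN_LUN_1']
```

Any device name the function returns needs its drive letter corrected on server_2 before the MSCS software is installed.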

15. Use the datapath query adapter and datapath query device commands to verify the correct number of LUNs and paths on server_1 and server_2. (This step is optional.)

Note: You can use the datapath query adapter and datapath query device commands to show all the physical and logical volumes for the host server. The secondary server shows only the physical volumes and the logical volumes that it owns.

Information about installing a Windows 2003 cluster can be found in the confclus.exe file, located at:

http://www.microsoft.com/downloads/details.aspx?familyid=96F76ED7-9634-4300-9159-89638F4B4EF7&displaylang=en

Removing SDDDSM in a two-node cluster environment

Use the following steps if you intend to remove the multipathing functions to a SAN device in a two-node cluster environment.

Perform the following steps to remove SDDDSM in a two-node cluster environment:
1. Move all cluster resources from node A to node B.
2. Ensure that there is a single-path connection from the system to the storage device, which may include the following activities:
a. Disable access of the second HBA to the storage device.
b. Change the zoning configuration to allow only one port accessed by this host.
c. Remove shared access to the second HBA through the IBM TotalStorage Expert V.2.1.0 Specialist.
d. Remove multiple SAN Volume Controller port access, if applicable.
3. Uninstall SDDDSM. See “Uninstalling SDDDSM” on page 408 for instructions.
4. Restart your system.
5. Move all cluster resources from node B to node A.
6. Perform steps 2 - 5 on node B.

SDDDSM datapath command support

SDDDSM supports the following datapath commands:
v datapath query adapter [ ]
v datapath query device [ ]/[ ] [-d ] [-i x/x y] [-l] [-s]
v datapath set adapter online/offline
v datapath set device online/offline
v datapath query adaptstats [ ]
v datapath query devstats [ ]/[ ] [-d ] [-i x/x y] [-l] [-s]

Beginning with SDDDSM version 2.1.1.0, SDDDSM also supports the following datapath commands:
v datapath query version
v datapath query portmap
v datapath query essmap
v datapath set device device_number policy rr/rrs/fo/lb/lbs/df
v datapath clear device device_number count error/all
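Output from datapath query device can be post-processed to summarize path states across many LUNs. The sketch below parses an abbreviated, illustrative rendering of that output (the real layout varies by SDDDSM level, so treat the sample text and column positions as assumptions) and counts the paths in each state.

```python
# Sketch: summarize path states from captured "datapath query device"
# output. The SAMPLE text is an abbreviated, invented rendering of the
# command's output; verify the layout produced by your SDDDSM level.

SAMPLE = """\
DEV#: 0  DEVICE NAME: Disk1 Part0  TYPE: 2145  POLICY: OPTIMIZED
Path#    Adapter/Hard Disk        State   Mode    Select  Errors
    0    Scsi Port2 Bus0/Disk1    OPEN    NORMAL  1045    0
    1    Scsi Port3 Bus0/Disk1    DEAD    NORMAL  887     2
"""

STATES = {"OPEN", "CLOSE", "DEAD", "INVALID", "CLOSE_DEAD"}

def count_states(text):
    """Count path states in datapath-style output."""
    counts = {}
    for line in text.splitlines():
        fields = line.split()
        if fields and fields[0].isdigit():          # path detail lines
            state = next((f for f in fields if f in STATES), None)
            if state:
                counts[state] = counts.get(state, 0) + 1
    return counts

print(count_states(SAMPLE))  # {'OPEN': 1, 'DEAD': 1}
```

A nonzero DEAD count is the kind of condition the clustering notes above describe: the path has been lost but the device may still be reachable through the remaining paths.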

v datapath disable/enable ports ess

Notes:
1. The options [ ], [-d ], [-i x/x y], [-s] in datapath query device are supported only by SDDDSM 2.1.1.0 or later.
2. For RSSM devices, SDDDSM finds only a single HBA, even when two or more serial-attached SCSI (SAS) HBAs are installed on the host, and the output of datapath query adapter shows only one adapter.
3. For additional information about the datapath commands, see Chapter 13, “Using the datapath commands,” on page 421.

SDDDSM server daemon

The SDDDSM server (also referred to as sddsrv) is an integrated component of SDDDSM. This component consists of a Windows application daemon that is installed in addition to the SDDDSM device driver.

Verifying if the SDDDSM server has started

After you have installed SDD, perform the following steps to verify that the SDDDSM server (sddsrv) has automatically started:
1. Click Start → Programs → Administrative Tools → Computer Management.
2. Expand the Services and Applications tree.
3. Click Services.
4. Right-click SDD Service. The status of SDD Service should be Started if the SDDDSM server has automatically started.

Starting the SDDDSM server manually

If the SDDDSM server did not start automatically after you performed the SDDDSM installation, perform the following steps to start sddsrv:
1. Click Start → Programs → Administrative Tools → Computer Management.
2. Expand the Services and Applications tree.
3. Click Services.
4. Right-click SDD Service.
5. Click Start.

Changing to a different port number for the SDDDSM server

To change to a different port number for the SDDDSM server, see the instructions in the “Changing the sddsrv or pcmsrv TCP/IP port number” on page 419.

Stopping the SDDDSM server

To stop the SDDDSM server, perform the following steps:
1. Click Start → Programs → Administrative Tools → Computer Management.
2. Expand the Services and Applications tree.
3. Click Services.
4. Right-click SDD Service.
5. Click Stop.
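The procedures above use the Services GUI. If you instead capture service status from the command line (for example with the Windows sc query command), a short script can decide whether the daemon is running. The service name and the sample output below are illustrative assumptions; verify the actual service name and output on your system.

```python
# Sketch: decide whether the SDDDSM server service is running by
# parsing output captured from the Windows "sc query" command.
# SC_OUTPUT below mimics typical sc output; the service name shown
# is an assumption, not necessarily the name on your system.

SC_OUTPUT = """\
SERVICE_NAME: SDD_Service
        TYPE               : 10  WIN32_OWN_PROCESS
        STATE              : 4  RUNNING
        WIN32_EXIT_CODE    : 0  (0x0)
"""

def service_running(sc_text):
    """Return True if the STATE line of sc output reports RUNNING."""
    for line in sc_text.splitlines():
        if line.strip().startswith("STATE"):
            return line.split(":", 1)[1].strip().endswith("RUNNING")
    return False

print(service_running(SC_OUTPUT))  # True
```

On a live system you would feed the function the real command output (for example, the text returned by running sc query against the SDD service) rather than the canned sample.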


Chapter 12. Using the SDD server and the SDDPCM server

The SDD server (sddsrv) is an application program that is installed in addition to SDD. SDDPCM server (pcmsrv) is an integrated component of SDDPCM 2.0.1.0 (or later).

SDD server daemon

The SDD server daemon (sddsrv) starts automatically after the SDD driver package is installed. The sddsrv daemon runs in the background at all times. The daemon scans to find failing paths (INVALID, CLOSE_DEAD, or DEAD) at regular intervals between two and five minutes unless otherwise indicated for a specific platform. The daemon probes idle paths that are in the CLOSE or OPEN states at regular, one-minute intervals unless otherwise indicated for a specific platform. See the appropriate chapter in this guide for the specific platform modifications to sddsrv operation.

The server daemon also provides the First Time Data Capture (FTDC) function, which periodically saves the SDD driver internal trace into a log file. Stopping the SDD server daemon loses this FTDC function, which makes debugging much harder.

Note: The sddsrv daemon is not available on NetWare host systems.

Understanding how the SDD server daemon works

The sddsrv daemon provides path reclamation and path probing.

Path reclamation

The SDD server regularly tests and recovers broken paths that have become operational. It tests invalid, close_dead, or dead paths and detects if these paths have become operational. The daemon sleeps for three-minute intervals between consecutive runs unless otherwise specified for a specific platform. If the test succeeds, sddsrv reclaims these paths and changes the states of these paths according to the following characteristics:
v If the state of the SDD vpath device is OPEN, sddsrv changes the states of INVALID and CLOSE_DEAD paths of that SDD vpath device to OPEN.
v If the state of the SDD vpath device is CLOSE, sddsrv changes the states of CLOSE_DEAD paths of that SDD vpath device to CLOSE.
v The sddsrv daemon changes the states of dead paths to OPEN.

Path probing

The SDD server regularly tests close paths and open paths that are idle to see if they are operational or have become not operational. The daemon sleeps for one-minute intervals between consecutive runs unless otherwise specified for a specific platform. If the test fails, sddsrv then changes the states of these paths according to the following characteristics:
v If the SDD vpath device is in the OPEN state and the path is not working, sddsrv changes the state of the path from OPEN to DEAD.
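The path-reclamation rules listed above can be expressed as a small decision function. The following sketch is illustrative only and is not sddsrv's actual implementation; it returns the state a failing path is moved to after a successful test, given the vpath device state.

```python
# Toy encoding of the sddsrv path-reclamation rules described above.
# Illustrative only; sddsrv's real logic is internal to the product.

def reclaim(device_state, path_state):
    """State a failing path is reclaimed to after a successful test."""
    if device_state == "OPEN" and path_state in ("INVALID", "CLOSE_DEAD"):
        return "OPEN"
    if device_state == "CLOSE" and path_state == "CLOSE_DEAD":
        return "CLOSE"
    if path_state == "DEAD":
        return "OPEN"
    return path_state  # no rule applies; leave the path unchanged

assert reclaim("OPEN", "INVALID") == "OPEN"
assert reclaim("CLOSE", "CLOSE_DEAD") == "CLOSE"
assert reclaim("OPEN", "DEAD") == "OPEN"
```

Note that reclamation only runs after the path test succeeds; a failed test leaves the path in its failing state until a later run.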

v If the SDD vpath device is in the CLOSE state and the path is not working, sddsrv changes the state of the path from CLOSE to CLOSE_DEAD.
v The sddsrv daemon will set the last path to DEAD or CLOSE_DEAD depending upon the state of the SDD vpath device.

Note: The sddsrv daemon will not test paths that are manually placed offline.

In SDD 1.5.0.x (or earlier), sddsrv by default was binding to a TCP/IP port and listening for incoming requests. In SDD 1.5.1.x (or later), sddsrv does not bind to any TCP/IP port by default, but allows port binding to be dynamically enabled or disabled. For all platforms except Linux, the SDD package ships a template file of sddsrv.conf that is named sample_sddsrv.conf. On all UNIX platforms except Linux, the sample_sddsrv.conf file is located in the /etc directory. On Windows platforms, the sample_sddsrv.conf file is in the directory where SDD is installed. You must use the sample_sddsrv.conf file to create the sddsrv.conf file in the same directory as sample_sddsrv.conf by copying it and naming the copied file sddsrv.conf. You can then dynamically change port binding by modifying parameters in sddsrv.conf.

Because the TCP/IP interface of sddsrv is disabled by default, you cannot get sddsrv traces from a Web browser like you could in SDD releases earlier than 1.5.1.x. Starting with SDD 1.5.1.x, the sddsrv trace is saved in sddsrv.log and sddsrv_bak.log files. The sddsrv trace log files are wrap-around files, and each file is a maximum of 4 MB in size. The sddsrv daemon also collects the SDD driver trace and puts it in log files. It creates sdd.log and sdd_bak.log files for the driver trace. The SDD driver trace log files are also wrap-around files, and each file is a maximum of 4 MB in size. You will find the sddsrv.log, sddsrv_bak.log, sdd.log, and sdd_bak.log files in the following directory based on your host system platform:
v AIX: /var/adm/ras
v HP-UX: /var/adm/IBMsdd
v Linux: /var/log
v Solaris: /var/adm
v Windows 2000 and Windows NT: \WINNT\system32
v Windows Server 2003: \Windows\system32

See Appendix A, “SDD, SDDPCM and SDDDSM data collection for problem analysis,” on page 449 for information about reporting SDD problems.

sddsrv and the IBM TotalStorage Expert V.2.1.0

The IBM TotalStorage Expert V.2.1.0 needs to communicate with sddsrv through a TCP/IP socket on the port on which sddsrv is running. The sddsrv TCP/IP port must be enabled to listen over the network when the IBM TotalStorage Expert V.2.1.0 is collecting host volume data. Apply your corporate security rules to this port.

sddsrv and IBM TotalStorage support for Geographically Dispersed Sites for Microsoft Cluster Service

The sddsrv TCP/IP port must be enabled to listen over the network if you are using IBM TotalStorage Support for Geographically Dispersed Sites for Microsoft Cluster Service (MSCS). Apply your corporate security rules to this port.
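The copy-and-rename procedure for creating sddsrv.conf can be scripted. The sketch below is a minimal illustration that assumes a "name = value" syntax for the enableport parameter (described under "sddsrv.conf file format"); check your sample file for the exact format, and note that the file paths used in the demo are temporary stand-ins, not the real /etc or SDD installation locations.

```python
# Sketch: create sddsrv.conf from sample_sddsrv.conf and enable the
# TCP/IP interface. The "enableport = false" syntax is an assumption;
# verify it against your sample_sddsrv.conf before scripting this.

import os
import shutil
import tempfile

def make_sddsrv_conf(sample_path, conf_path):
    """Copy the template, then switch enableport from false to true."""
    shutil.copyfile(sample_path, conf_path)      # copy and rename
    with open(conf_path) as f:
        text = f.read()
    text = text.replace("enableport = false", "enableport = true")
    with open(conf_path, "w") as f:
        f.write(text)

# Demo with a stand-in template in a temporary directory.
tmp = tempfile.mkdtemp()
sample = os.path.join(tmp, "sample_sddsrv.conf")
conf = os.path.join(tmp, "sddsrv.conf")
with open(sample, "w") as f:
    f.write("enableport = false\nloopbackbind = true\n")

make_sddsrv_conf(sample, conf)
print(open(conf).read())
```

Because sddsrv rereads its parameters dynamically, editing sddsrv.conf this way is how port binding is enabled or disabled without reinstalling the driver; remember to apply your corporate security rules to the port once it is enabled.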

SDDPCM server daemon

The SDDPCM server daemon (pcmsrv) component consists of a UNIX application daemon that is installed in addition to the SDDPCM path control module.

The pcmsrv daemon only provides the path-reclamation function for SDDPCM. It regularly tests and recovers broken paths that have become operational. It tests CLOSE_FAILED paths for devices that are in the CLOSED state. It also tests OPEN_FAILED paths when a device internal healthcheck is turned off. Starting with SDDPCM v2123, the pcmsrv path healthcheck adds an enhancement that automatically starts or stops the path healthcheck of a device when you issue the pcmpath set device hc_interval command to disable or enable a device's internal healthcheck function. However, the pcmsrv path healthcheck function does not check the path that is in good open state, which the internal path healthcheck function does.

Note: For RSSM devices, pcmsrv will not test CLOSED_FAILED paths and always change the path state from CLOSED_FAILED to CLOSED regardless of whether the path is operational or not.

The pcmsrv daemon sleeps for one-minute intervals between consecutive runs. If the test succeeds, the pcmsrv daemon reclaims the path and changes the state of the path according to the following characteristics:
v If the state of the device is OPEN and the healthcheck function is turned off, pcmsrv changes the state of OPEN_FAILED path of that device to OPEN.
v If the state of the device is CLOSE, pcmsrv changes the state of CLOSE_FAILED path of the device to CLOSE.

The pcmsrv daemon also provides the First Time Data Capture (FTDC) function, which periodically saves the SDDPCM internal trace into a log file. When you stop the pcmsrv daemon, the FTDC function is lost, which makes debugging much harder.

The trace for the pcmsrv daemon is saved in pcmsrv.log and pcmsrv_bak.log files. These are wrap-around files and each is a maximum of 4 MB in size. Trace files are located in the /var/adm/ras directory.

The pcmsrv daemon does not bind to any TCP/IP port by default but allows port binding to be dynamically enabled or disabled. The SDDPCM package ships a template file of pcmsrv.conf that is named as sample_pcmsrv.conf. The sample_pcmsrv.conf file is located in the /etc directory. You must use the sample_pcmsrv.conf file to create the pcmsrv.conf file in the /etc directory by copying sample_pcmsrv.conf and naming the copied file pcmsrv.conf. You can then dynamically change port binding by modifying parameters in pcmsrv.conf.

sddsrv.conf file format

The sddsrv.conf file contains the following parameters:

enableport
Use this parameter to enable or disable sddsrv to bind to a TCP/IP port. The default value of this parameter is set to false (disabled). You can set this parameter to true if you want to enable the TCP/IP interface of sddsrv.

loopbackbind
If you set the enableport parameter to true, the loopbackbind parameter specifies whether sddsrv listens to any Internet address or the loopback (127.0.0.1) address. To enable sddsrv to listen to any Internet address, set the