To ensure the best performance with Pure Storage FlashArrays, use this guide to configure and
implement HP-UX hosts in your environment. Pure Storage recommends that you follow HP's best practices and
install the latest patch bundles and quality packs on your servers.
These recommendations apply to the versions of HP-UX that we have certified as per our Compatibility Matrix.
General Considerations
1. HP-UX has no native SCSI UNMAP support, so there is no way to reclaim deleted blocks with the native HP-UX JFS
or OnlineJFS file systems.
2. Do not connect Pure Storage volumes to a host or host group until the host personality has been set to HP-UX.
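As a sketch, the personality can be set from the FlashArray CLI before any volumes are connected (the host name hpux-host1 below is a placeholder; verify the purehost syntax against your Purity release):

```
# On the FlashArray CLI: set the HP-UX personality before connecting volumes
purehost setattr --personality hpux hpux-host1

# Confirm the setting
purehost list hpux-host1
```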
3. HP-UX 11i v3 introduces native multipathing, which provides not only failover protection but also
performance gains from true load balancing. If you are running HP-UX 11i v2, consider upgrading.
HP-UX 11i v3 introduces a new representation of mass storage devices, known as the agile view. In the agile
view, disk devices and tape drives are identified by the actual object, not by a hardware path to the object. In
addition, paths to the device can change dynamically and multiple paths to a single device can be
transparently treated as a single virtualized path, with I/O being distributed across those multiple paths.
In HP-UX 11i v3, there are three different types of paths to a device: legacy hardware path, lunpath hardware
path, and LUN hardware path. All three are numeric strings of hardware components, with each number typically
representing the location of a hardware component on the path to the device.
The new agile view increases the reliability, adaptability, performance, and scalability of the mass storage
stack, all without the need for operator intervention. For more information, see the white papers “The Next
Generation Mass Storage Stack: HP-UX 11i v3” and “HP-UX 11i v3 Persistent DSF Migration Guide”
(http://hp.com/go/hpux-core-docs).
Pure Storage recommends that you use agile DSFs due to the round-robin load-balancing capability they introduce.
1. The default round_robin policy distributes I/Os evenly across all Active/Optimized paths. A newer MPIO policy,
least_cmd_load, is similar to round robin in that I/Os are distributed across all available Active/Optimized paths;
however, it provides some additional benefits. The least_cmd_load policy biases I/Os toward paths that are servicing
I/O more quickly (paths with shorter queues). If one path becomes intermittently disruptive or experiences higher
latency, least_cmd_load avoids that path, reducing the effect of the problem path.
2. Use the following commands to set the load-balancing algorithm for the Pure Storage LUNs:
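A sketch using the HP-UX 11i v3 scsimgr utility (the disk instance disk5 is a placeholder; substitute each of your Pure Storage LUNs):

```
# Set the load-balancing policy for the running system
scsimgr set_attr -D /dev/rdisk/disk5 -a load_bal_policy=least_cmd_load

# Persist the setting across reboots
scsimgr save_attr -D /dev/rdisk/disk5 -a load_bal_policy=least_cmd_load

# Verify the current policy
scsimgr get_attr -D /dev/rdisk/disk5 -a load_bal_policy
```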
Match the Pure Storage volume serial number to the HP-UX disk
Use the following command on the HP-UX server to get the serial number attributes:
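One way to do this (a sketch; disk5 is a placeholder) is scsimgr's get_attr operation, since the Pure Storage volume serial number is reported as a LUN attribute:

```
# Show the WWID and serial number for a given disk instance
scsimgr get_attr -D /dev/rdisk/disk5 -a wwid -a serial_number
```

The serial number returned should match the serial number shown for the volume on the FlashArray (for example, in the output of purevol list).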
Device IDs
Customers can set device IDs, which are user-friendly names. User-friendly device identifiers can only be set for
devices supporting the SET DEVICE IDENTIFIER and REPORT DEVICE IDENTIFIER SCSI commands. In this case, the
identifier resides in non-volatile memory on the device and can be queried by all systems accessing the device. The
alias, however, is stored locally in the system registry; therefore, it must be set on each HP-UX system accessing the
device (such as in a cluster).
To assign the user-friendly device identifier "Engineering" to disk device disk0:
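A sketch using scsimgr's set_devid operation (assuming disk0 supports the SET DEVICE IDENTIFIER command; verify the exact syntax with man scsimgr on your release):

```
# Write the identifier "Engineering" to the device's non-volatile memory
scsimgr set_devid -D /dev/rdisk/disk0 "Engineering"
```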
HP-UX 11i v2
This version of HP-UX does not include native round-robin load balancing.
Considerations:
1. PV Links are active/passive in nature, with only one of the paths being active to the array.
2. If the primary path fails, PV links will switch the active path to one of the remaining paths.
3. PV Links provide basic path failover only, not the load balancing and performance gains you would get from
round-robin by upgrading to HP-UX 11i v3.
4. The order in which PV Links selects alternate paths during failures is controlled by the order in which the logical
disk device special files are added into the volume group.
5. When using PV Links with a 4 Gb, or even 8 Gb, HBA, you may not get the performance results or gains you
would otherwise expect, because only one path to the array is active.
Configuration:
For this example, assume you have a Pure Storage volume with four paths to the array; you will then have
four disk devices on the HP-UX system that map to the array's target ports.
/dev/dsk/c1t0d0 = ct0.fc0
/dev/dsk/c2t0d0 = ct0.fc2
/dev/dsk/c3t0d0 = ct1.fc0
/dev/dsk/c4t0d0 = ct1.fc2
As per consideration number four, be sure to add the disk devices to the volume group in such a way
that you alternate between controllers. This is important because the default PV timeout value is 30 seconds. If a
controller were to go down, or if you are upgrading Purity and your primary path and first alternate path are on the
same controller, a failover could take a long time: PV Links waits the full timeout (30 seconds by default) before
switching paths and moving on to the next one.
With the example devices above, you would run the following commands:
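A sketch of the volume group setup, alternating controllers when adding alternate links (vg01, the minor number, and the device names are placeholders based on the example mapping above; pvcreate and the group node creation are shown for completeness):

```
# Prepare the physical volume on the primary path (ct0.fc0)
pvcreate /dev/rdsk/c1t0d0

# Create the volume group device node (LVM group files use character major 64)
mkdir /dev/vg01
mknod /dev/vg01/group c 64 0x010000

# Create the volume group on the primary path
vgcreate /dev/vg01 /dev/dsk/c1t0d0

# Add alternate links, alternating controllers: ct1.fc0, then ct0.fc2, then ct1.fc2
vgextend /dev/vg01 /dev/dsk/c3t0d0
vgextend /dev/vg01 /dev/dsk/c2t0d0
vgextend /dev/vg01 /dev/dsk/c4t0d0
```

Ordering the vgextend commands this way ensures that the first alternate path is on the other controller, so a single controller failure triggers at most one PV timeout before I/O resumes.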