
Multipathing in AIX with NetApp storage

2009-04-20
Hi friends,

I have AIX 5.3 servers in my environment which are connected to NetApp storage (FAS6080-R5). We don't have multipathing software installed
on our servers. I am an HP-UX guy; I know how to configure this on HP-UX but don't have knowledge of AIX. Last week I added disks on an AIX
server, but haven't done anything with multipathing. Any help will be appreciated. Need help configuring multipathing on AIX.

Thanks
Madan

cggibbo 2009-04-21 1:31

Re: Multipathing in AIX with NetApp storage
As long as the NetApp FCP AIX Host Utilities are installed, multipathing is handled by the AIX default Path Control Module (PCM). So, all
you should need to do is run cfgmgr once the disk has been presented to the AIX hosts.

You can use the AIX command lspath and the NetApp tool sanlun to check that multipathing is working as expected, e.g.:
1. lspath -l hdisk0
Enabled hdisk0 fscsi0
Enabled hdisk0 fscsi0
Enabled hdisk0 fscsi2
Enabled hdisk0 fscsi2
2. sanlun lun show
filer: lun-pathname device filename adapter protocol lun size lun state
FAS608001X: /vol/UNIX_PRD_VOL006/HVIO/LUN001 hdisk0 fcs2 FCP 50g (53687091200) GOOD
FAS608001X: /vol/UNIX_TST_VOL015/HXAIX/LUN001 hdisk15 fcs0 FCP 500.1g (536952700928) GOOD
FAS608001X: /vol/UNIX_TST_VOL009/HXAIX/LUN001 hdisk9 fcs0 FCP 500.1g (536952700928) GOOD
3. sanlun lun show all -pv | head -30
ONTAP_PATH: FAS608001A:/vol/UNIX_TST_VOL017/HXAIX/LUN001
LUN: 18
LUN Size: 10g (10737418240)
Host Device: hdisk17
LUN State: GOOD
Controller_CF_State: Cluster Enabled
Controller Partner: FAS608001X
Multipath Provider: AIX Native
Multipathing Algorithm: round_robin
MPIO path  Controller  AIX MPIO  host  Controller       AIX MPIO
status     path type   path      HBA   target HBA port  path priority
---------  ----------  --------  ----  ---------------  -------------
Enabled    primary     path0     fcs0  0e               1
Enabled    secondary   path1     fcs0  0e               1
Enabled    primary     path2     fcs2  0f               1
Enabled    secondary   path3     fcs2  0f               1
===============================================================================
MPIO:

Multipath I/O is a technique that defines more than one physical path between the computer and the
storage system, providing fault tolerance and improved performance. For example, a disk can connect
through 2 Fibre Channel adapters, so we have 2 paths to the disk. If one path (adapter) fails,
I/O can be routed to the remaining adapter without interrupting the application. (If both paths are
used simultaneously, data can be transported at double the speed.)

The Path Control Module (PCM) is responsible for controlling these multiple paths. Each storage device
requires a PCM. A PCM is storage-vendor-supplied code that gets control from the device driver to handle
path management. It can be separate (3rd-party) software (a driver), or AIX's native PCM package,
which comes with the base operating system. (People usually refer to the latter as AIXPCM, MPIO PCM or
just MPIO.)

# lslpp -L devices.common.IBM.mpio.rte
Fileset Level State Type Description (Uninstaller)
----------------------------------------------------------------------------
devices.common.IBM.mpio.rte 6.1.9.30 C F MPIO Disk Path Control Module

As IBM creates storage systems (DS8000...), it provides additional drivers for these storage devices, which
are separate software packages from AIX; but as mentioned above, AIX has a native package which can be used for
multipathing as well.

With AIX and multipathing (on IBM storage) we have the following options (in AIX 5.3):
-classic SDD: (ODM definitions: ibm2105.rte, SDD driver: devices.sdd.53.rte)
-default PCM (MPIO): comes with AIX (no extra filesets; it is activated only if there are no SDD
ODM definitions)
-SDDPCM: SDD version which uses MPIO and has the same commands as SDD
(ODM def: devices.fcp.disk.ibm2105.mpio.rte, SDDPCM driver: devices.sddpcm.53.rte)

As a summary, native MPIO is installed as part of the base OS, packaged as a kernel extension. Paths are
discovered during system boot (cfgmgr), and disks are created from paths at the same time. No further
configuration is required. (Only this native MPIO is discussed on this page.)
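After cfgmgr a quick sanity check is to confirm every disk has the number of paths you expect. The check_paths helper below is not an AIX command, just a hypothetical sketch: it reads lspath output on stdin and flags any disk whose Enabled-path count differs from the expected number.

```shell
# Hypothetical helper: reads `lspath` output on stdin and prints any disk
# whose number of Enabled paths differs from the expected count given as $1.
check_paths() {
  awk -v want="$1" '
    $1 == "Enabled" { n[$2]++ }    # column 2 of lspath output is the hdisk name
    END { for (d in n) if (n[d] != want) print d " has " n[d] " paths" }'
}
# on AIX, after running cfgmgr:  lspath | check_paths 4
```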

--------------------------------------------------

Path statuses

Path status values of lspath:

enabled: path is configured and operational. It will be considered when paths are selected for IO.
disabled: path has been manually disabled and will not be considered when paths are selected for IO.
(Set back to enabled with 'chpath'.)
failed: path has had IO failures that have rendered it unusable. It will not be considered when paths are
selected for IO.
defined: path has not been configured into the device driver.
missing: path was defined in a previous boot, but it was not detected in the most recent boot. (These
can be recovered with 'cfgmgr'.)
detected: path was detected during boot, but it was not configured. (This status should only appear
during boot, so it should never show up in lspath output.)

It is best to manually disable paths before storage maintenance (rmpath). AIX MPIO stops using any
Disabled or Defined paths, so no error detection or recovery is attempted on them. This ensures that the
AIX host does not go into extended error recovery during scheduled maintenance. After the maintenance is
complete, the paths can be recovered with cfgmgr. (When disabling multiple paths for multiple LUNs,
rmpath is simpler than chpath, as it does not have to be run on a per-disk basis.)
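The maintenance flow described above can be sketched as the following command sequence. This is a hedged outline, not taken from the post: fscsi0 is a hypothetical adapter name, and the exact steps should be validated against your change procedure.

```shell
# Before maintenance on the fabric behind fscsi0 (hypothetical adapter name):
rmpath -p fscsi0          # put all paths through fscsi0 into Defined state in one shot
lspath -p fscsi0          # verify they now show Defined
# ... perform the storage / fabric maintenance ...
cfgmgr -l fscsi0          # rediscover the paths; they return to Enabled
lspath -p fscsi0          # confirm all paths are Enabled again
```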

Additional path_status values of lsmpio:


Sel Path is being selected for I/O operations at the time the lsmpio command is run.
Rsv Path has experienced a reservation conflict. This might indicate a usage or configuration error,
with multiple hosts accessing the same disk.
Fai Path has experienced a failure. I/O sent on this path is failing.
In some cases, AIX MPIO leaves one path to the device in Enabled state, even when all paths are
experiencing errors.
Deg Path is in a degraded state. The path was used for I/O, but errors occurred, causing AIX to
temporarily avoid using the path.
Clo Path is closed. If only some paths are closed, those paths might have experienced errors. If all
paths are closed, the device is closed.
AIX MPIO periodically attempts to recover closed paths until the device path is open.

--------------------------------------------------

Disk Parameters

# lsattr -El hdisk0


PCM PCM/friend/vscsi Path Control Module False
algorithm fail_over Algorithm True
hcheck_cmd test_unit_rdy Health Check Command True+
hcheck_interval 60 Health Check Interval True+
hcheck_mode nonactive Health Check Mode True+

algorithm: (determines how many paths are used to transmit I/O)
fail_over: I/O is routed to one path at a time. If it fails, the next enabled path is selected. (Path
priority determines which path is next.)
round_robin: I/O is distributed across all enabled paths. Paths with the same priority get equal I/O;
otherwise a higher priority gets a higher percentage of the I/O.
shortest_queue: similar to round_robin, but when load increases it favors the path with the fewest
active I/O operations. Path priority is ignored.

The fail_over algorithm is always used for virtual SCSI (VSCSI) disks on a Virtual I/O Server (VIOS)
client, although the backing devices on the VIOS instance might still use round_robin. Fail_over is also
the only algorithm that might be used if using SCSI-2 reserves (reserve_policy=single_path).
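Switching the algorithm on a disk is a chdev call. A hedged sketch follows: hdisk2 is a hypothetical disk, and because round_robin/shortest_queue cannot be combined with a SCSI-2 reserve, the reserve policy is changed alongside the algorithm. The disk must be closed for a plain chdev to succeed.

```shell
# Hypothetical hdisk2: switch from the default fail_over to shortest_queue.
# shortest_queue/round_robin require that no SCSI-2 reserve is held, so the
# reserve policy is set to no_reserve in the same command.
chdev -l hdisk2 -a reserve_policy=no_reserve -a algorithm=shortest_queue
lsattr -El hdisk2 -a algorithm -a reserve_policy   # verify the new settings
# (if the disk is open, use chdev -P and reboot, or vary off its volume group first)
```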

hcheck_mode: (determines which paths are health-checked. Paths in Disabled or Missing state are
never checked.)
nonactive: only paths with no active I/O (no ongoing I/O operation) are checked.
enabled: all enabled paths are checked, whether or not they have I/O in progress.
failed: only failed paths are checked.

With the nonactive setting, paths marked 'failed' are checked as well (in addition to 'enabled' ones). With
round_robin and shortest_queue all paths are used for I/O, so the health check command is sent only on
failed paths. The default value for all devices is nonactive, and there is little reason to change this
value unless business or application requirements dictate otherwise.
hcheck_interval: (the interval in seconds at which the health check runs to verify path availability)
An hcheck_interval of 0 disables path health checking, which means any failed path requires manual
intervention to be recovered.

The best practice is that it should be greater than or equal to the rw_timeout (read/write timeout)
value on the disks. Better performance is achieved when hcheck_interval is slightly greater than the
rw_timeout value on the disks.
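The "slightly greater than rw_timeout" rule can be scripted. In the sketch below, pick_interval is a hypothetical helper and the 10-second margin is this example's own assumption, not an IBM-documented figure; hdisk2 is also hypothetical.

```shell
# Pick an hcheck_interval slightly above rw_timeout, per the best practice above.
# The 10-second margin is an assumption made for this example, not an IBM figure.
pick_interval() { echo $(( $1 + 10 )); }

# on AIX (hdisk2 is hypothetical):
#   rw=$(lsattr -El hdisk2 -a rw_timeout -F value)
#   chdev -l hdisk2 -a hcheck_interval="$(pick_interval "$rw")"
```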

--------------------------------------------------

smitty mpio

lspath lists paths (lspath -l hdisk46)


lspath -l hdisk0 -HF "name path_id parent connection path_status status" more detailed info about a
path (like lsdev for devices)
lspath -AHE -l hdisk0 -p vscsi0 -w "810000000000" displays attributes for the given path and connection (-w)
(-A is like lsattr for devices)
(if only 1 path exists to the parent device, the connection can be omitted:
lspath -AHE -l hdisk0 -p vscsi0)

lsmpio lists additional info about paths (e.g. which path is selected)


lsmpio -q shows disks with its size
lsmpio -ql hdiskX shows disk serial number (LUN ID)
lsmpio -Sl hdisk0 | grep Path shows path statistics (which path was used mostly in the past)

chpath changes path state (enabled, disabled)


chpath -s enabled -l hdiskX -p vscsi0 sets the path to enabled status

rmpath -l hdiskX -p vscsi0 -w 870000000000 puts a path into Defined state (-w can be omitted if only 1 path
exists to the parent device)
rmpath -dl hdiskX -p fscsiY dynamically removes all paths under a parent adapter from a supported
storage MPIO device
(-d: deletes; without it the path is put into Defined state)
(The last path cannot be removed; the command will fail if you try to
remove the last path.)
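When many disks share an adapter, the per-disk rmpath calls can be generated from lspath output. The gen_rmpath helper below is a hypothetical sketch (not an AIX tool); it is a dry run that prints the commands so they can be reviewed before being piped to sh.

```shell
# Hypothetical helper: read `lspath` output on stdin and emit one rmpath command
# per disk that has a path through the given adapter (dry run -- pipe to sh to apply).
gen_rmpath() {
  awk -v ad="$1" '$3 == ad { print "rmpath -l " $2 " -p " ad }' | sort -u
}
# on AIX:  lspath | gen_rmpath fscsi0        # review, then: lspath | gen_rmpath fscsi0 | sh
```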

--------------------------------------------------

Failed path handling:


(There were Hitachi disks in Offline (E) state, but they had not been unconfigured earlier.)
-lspath | grep -v Enab
-rmpath -p fscsiX -d
-cfgmgr -l fcsX
-lspath | grep -v Enab
-dlnkmgr view -lu -item

--------------------------------------------------

Change adapter setting online:

rmpath -d -p vscsi0 <--removes all paths from the adapter (rmpath -dl hdisk0 -p
vscsi0 removes only the specified path)
rmdev -l vscsi0 <--puts the adapter into Defined state
chdev -l vscsi0 -a vscsi_err_recov=fast_fail <--changes the adapter setting (if -P is used it will be
activated after reboot)
cfgmgr -l vscsi0 <--configures the adapter back
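For a physical Fibre Channel adapter, the analogous sequence targets the protocol device (fscsiX). This is a hedged sketch, not from the post: fscsi0/fcs0 are hypothetical names, fc_err_recov=fast_fail and dyntrk=yes are the commonly recommended MPIO-friendly settings, and on a live system the disks under the adapter must be closed (volume groups varied off) before the rmdev -R step.

```shell
# Change FC error recovery settings online (fscsi0/fcs0 are hypothetical names):
rmpath -p fscsi0                                         # quiesce paths (Defined state)
rmdev -l fscsi0 -R                                       # unconfigure fscsi0 and its children
chdev -l fscsi0 -a fc_err_recov=fast_fail -a dyntrk=yes  # fast fail + dynamic tracking
cfgmgr -l fcs0                                           # configure the adapter stack back
```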
Labels: STORAGE
1.
AnonymousJuly 23, 2012 at 11:36 AM
some very useful information... Thanks.
Reply
2.
AnonymousAugust 26, 2012 at 7:09 AM
Hi AIX,

Can the Hitachi LUNs be accessed without the dlnkmgr software, i.e., by using the default SDDPCM
driver?
Reply
Replies

1.
aixAugust 26, 2012 at 11:12 AM
The answer to your question is yes and no.
Yes, Hitachi LUNs can be accessed without dlnkmgr software, but SDDPCM is not for Hitachi
LUNs.
You have to differentiate SDDPCM and PCM (native AIX PCM).
SDDPCM is for IBM storage only; for this you need to install an additional fileset, for example
devices.sddpcm.53.rte, and then you can use pcmpath commands. But for native AIX PCM (MPIO) you
don't have to install additional software. AIX, with a base operating system install, is
capable of using some third-party devices (i.e. Hitachi) as MPIO devices (lspath, rmpath...).
But you need to ask Hitachi support as well, whether the model you have supports being used this
way.
Reply

3.
aixOctober 16, 2012 at 8:20 AM
Hi,

I have received this from Jose, and I would like to share:

"I was interested in a summary table of my disks, so I wrote the script below; pcmpath query essmap
was also OK for me but gave me further info.

Displays each disk found by lspath as: No., hdiskXX, size in MB, number of paths (no root authority needed):
p="/usr/sbin/lspath";for i in `$p| awk ' !/Missing/ {print $2}'|sort|uniq `;do echo "$i; `getconf
DISK_SIZE /dev/$i` mb; `$p| awk ' !/Missing/ &&/'$i' / {print $2}'|wc -l|sed 's/ //g'`" ; done|cat
-n

Output
1 hdisk0; 70006 mb; 1
2 hdisk1; 70006 mb; 1
3 hdisk2; 20480 mb; 4
4 hdisk25; 20480 mb; 4
5 hdisk26; 20480 mb; 4"
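Jose's one-liner can be unrolled for readability. The list_disks name below is hypothetical; it reads lspath output on stdin and prints "hdiskXX; path count" per disk, numbered. The MB column needs AIX's getconf DISK_SIZE, so it is left as a comment in the usage line.

```shell
# Readable version of the one-liner above (list_disks is a hypothetical name).
# Reads `lspath` output on stdin; prints a numbered "disk; path count" table.
list_disks() {
  awk '$1 != "Missing" { paths[$2]++ }        # count non-Missing paths per disk
       END { for (d in paths) print d "; " paths[d] }' | sort | cat -n
}
# on AIX:  lspath | list_disks    # add sizes per disk with: getconf DISK_SIZE /dev/hdiskXX
```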
Reply
4.
AnonymousDecember 16, 2012 at 6:21 PM
Very helpful.
Is there any way I can locate a network adapter's physical location by turning on its identify light? I have 3 network
adapters in a P550 machine in a production environment. My oslevel is AIX52TL4.
Thanks in advance.
Reply
Replies

1.
aixDecember 16, 2012 at 7:26 PM
Hi, on AIX 5.3, if you issue the command: diag -> Task Selection -> Hot Plug Task -> PCI Hot
Plug Manager -> Identify a PCI Hot Plug Slot,
it will blink the light at that location. You should try it; maybe it works on AIX
5.2 as well.
2.
AnonymousDecember 18, 2012 at 8:14 PM
Thanks for your help. It works.

3.
aixDecember 18, 2012 at 10:22 PM
Welcome :)
Reply
5.
SivaApril 21, 2013 at 8:29 PM
Hi,

Please explain briefly about Missing, Failed and Defined states in lspath output.

Regards,
Siva
Reply
Replies

1.
aixApril 27, 2013 at 5:34 PM
Hi,

failed
Indicates that the path is configured, but it has had IO failures that have rendered it
unusable. It will not be considered when paths are selected for IO.
defined
Indicates that the path has not been configured into the device driver.
missing
Indicates that the path was defined in a previous boot, but it was not detected in the most
recent boot of the system.
2.
AnonymousJuly 25, 2013 at 4:18 PM
Hi,

enabled
Indicates that the path is configured and operational. It will be considered when paths are
selected for IO.
disabled
Indicates that the path is configured, but not currently operational. It has been manually
disabled and will not be considered when paths are selected for
IO.
failed
Indicates that the path is configured, but it has had IO failures that have rendered it
unusable. It will not be considered when paths are selected for
IO.

defined
Indicates that the path has not been configured into the device driver.
missing
Indicates that the path was defined in a previous boot, but it was not detected in the most
recent boot of the system.
detected
Indicates that the path was detected in the most recent boot of the system, but for some
reason it was not configured. A path should only have this status
during boot and so this status should never appear as a result of the lspath command.
Regards
Virender Kumar
Reply
6.
AnonymousApril 25, 2013 at 5:53 AM
hello,

if I have the command pcmpath then I have everything right sddpcm? stg ds8k
Reply
Replies

1.
aixApril 25, 2013 at 9:03 AM
hi, I would say yes...if it works correctly.
Reply
7.
AnonymousMay 2, 2013 at 6:31 AM
I am using MPIO on VIO; I used lspath to display one of the SAN disks on the VIO server.
# lspath -l hdisk12 -H -F "name parent connection path_id"
name parent connection path_id

hdisk12 fscsi0 500507630700067a,4060400500000000 0


hdisk12 fscsi0 50050763070b067a,4060400500000000 1
hdisk12 fscsi1 500507630710067a,4060400500000000 2
hdisk12 fscsi1 50050763071b067a,4060400500000000 3

how to explain connection and path_id ?


Reply
Replies

1.
aixMay 2, 2013 at 8:41 AM
Hi,
Both of them can be used to uniquely identify paths, for example with chpath commands.

The connection information differentiates the multiple path instances that share the same
logical parent (adapter). (It is the SCSI ID and LUN ID of the device associated with this path.)

path_id: the ID of the path; it is used to uniquely identify a path.
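For example, a single path can be addressed by its parent adapter plus connection. The sketch below uses values from the lspath output in the question; it is illustrative only.

```shell
# Disable just one of hdisk12's four paths, addressed by parent + connection
# (values taken from the lspath output above):
chpath -l hdisk12 -p fscsi0 -w "50050763070b067a,4060400500000000" -s disabled
lspath -l hdisk12 -F "status name parent connection path_id"   # verify path_id 1 is Disabled
```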


Reply
8.
AnonymousAugust 16, 2013 at 3:10 PM
Hi,

I need to find out the MPIO package version which is installed on AIX. And I need to know which version
of HBA cards are being used.

Thank you
Reply
Replies

1.
Marcel GueryNovember 23, 2013 at 7:03 AM
Hi ,
try the following :
# lslpp -L '-a' devices.common.IBM.mpio.rte
or :
# smit list_installed
Reply
9.
AnonymousOctober 30, 2013 at 1:59 PM
Hi,

how can we check the disk raid level from AIX 7.1 Machine.

Note: In AIX 5.3 it is #lsattr -El hdisk8 | grep -i raid

Thanks
Reply
10.
AnonymousFebruary 25, 2014 at 7:26 PM
Hi AIX man !
I have this environment where an HDS disk is connected to an LPAR. The point is, even with MPIO
installed, there are not 2 paths to the disks. So, my question is: what packages are required
on AIX to configure multipathing?

AIX 6100-04
devices.common.IBM.mpio.rte
6.1.5.0 COMMITTED MPIO Disk Path Control Module
devices.fcp.disk.Hitachi.array.mpio.rte
5.4.0.0 COMMITTED AIX MPIO Support for Hitachi
5.4.0.1 APPLIED AIX MPIO Support for Hitachi
5.4.0.2 APPLIED AIX MPIO Support for Hitachi
5.4.0.3 APPLIED AIX MPIO Support for Hitachi
5.4.0.4 APPLIED AIX MPIO Support for Hitachi
5.4.0.5 APPLIED AIX MPIO Support for Hitachi
devices.fcp.disk.Hitachi.modular.mpio.rte
6.0.0.0 COMMITTED AIX MPIO Support for Hitachi
6.0.0.1 APPLIED AIX MPIO Support for Hitachi
devices.common.IBM.mpio.rte
6.1.5.0 COMMITTED MPIO Disk Path Control Module
Reply
11.
AnonymousApril 30, 2014 at 12:37 AM
Hi All
I have a question: I am trying to install the Hitachi software 6001, which is already there in my . file.
After running everything, the output comes back failed.
This is what I am getting; can anyone tell me where I am making a mistake?

cannot open /output/hitachi/odm/mpio/6000/HTC_MPIO_Modular_ODM_6000I: No such file or directory


Please mount volume 1 on /output/hitachi/odm/mpio/6000/HTC_MPIO_Modular_ODM_6000I
...and press Enter to continue installp: An error occurred while running the restore command.
Use local problem reporting procedures.

installp: CANCELED software for:


devices.fcp.disk.Hitachi.modular.mpio.rte 6.0.0.0

Thanks.
Reply

12.
yogesh cSeptember 25, 2015 at 7:14 PM
How to collect the MPIO related error logs/ event logs on AIX?
Reply
13.
AnonymousApril 22, 2016 at 3:40 PM
Hi Admin,
please could you help me to understand the difference between MPIO and SDDPCM.
also, what are the advantages of moving from MPIO/SDD to SDDPCM?

Thanks in adv.
Reply

14.
liferAugust 12, 2016 at 11:43 AM
I have a question about reserve_policy=single_path. If this setting is configured, does it stop
other HBAs from logging in (PLOGI) to the storage?
Reply

15.
MichaelSeptember 11, 2016 at 7:43 AM
Are there any timeouts in the FC adapter (fcsX) or device driver (fscsiX) that we can tune?
We need to extend the time that AIX spends on the alternate path during failover.
We observed AIX failed quickly when the LUNs on the alternate path were not yet up,
and when AIX failed and bubbled up the error to the application, the application failed.
In other OSes, we can extend this duration to 300 seconds but in AIX we do not know
what it is and what is the default value. lsattr -El fcsX or fscsiX do not show any
relevant attributes.

Thanks.
Michael
Reply
Replies

1.
aixSeptember 11, 2016 at 4:35 PM
Probably there are some parameters in MPIO which could help, I suggest checking MPIO best
practices: https://www.ibm.com/developerworks/aix/library/au-aix-mpio/
Reply

16.
MichaelSeptember 11, 2016 at 7:46 PM
Thank you very much for the reply.

I came across the following statement in the page you mentioned:

https://www.ibm.com/developerworks/aix/library/au-aix-mpio/

"AIX implements an emergency last gasp health check to recover paths when needed. If a device has
only one non-failed path and an error is detected on that last path, AIX sends a health check command
on all of the other failed paths before retrying the I/O, regardless of the health check interval
setting. This eliminates the need for a small health check interval to recover paths quickly. If
there is at least one good path, AIX discovers it and uses it before failing user I/O, regardless
of the health check interval setting."

From the traces we have, since the primary path was gone (rebooted and switch sent RSCN), the
alternate path was still alive just the LUNs were in transition (not yet active), AIX checked the
state of LUNs (using Test Unit Ready first, then resent 8 failed IOs, then sent Start/Stop Unit and
8 failed IOs, then repeated the Start/Stop Unit and 8-IO sequences); after N seconds, AIX failed the
IOs and the application on AIX failed. We have yet to capture longer traces showing the duration
of this check, but we need to know which timeout it is, what its default value is, and whether it can be extended.
We need AIX to check at most 300 seconds so the application on AIX can survive the LUNs
takeover/failback. Thanks.
Reply

17.
MichaelSeptember 11, 2016 at 7:55 PM
From the application log, the entire LUN takeover took around 80 seconds; AIX spent 15-20
seconds checking the primary path before failing over to the alternate path, so AIX spent less than
60 seconds checking the alternate path (it failed user IOs after that). The length of time for LUN
takeover varies, but in our test we used a predefined configuration for verification. Thanks.
Reply
18.
AnonymousMarch 5, 2017 at 12:45 AM
Hi,

Can anyone advise me on this pls..


While the storage array (EMC VNX) SPs were rebooting, AIX LPARs went into a hung state because the VIO was not able to
fail over the paths. What could be the reason why MPIO did not work on the VIO as expected?
Reply


Sample from a Power 822S system:
cbsrac1:/>lsmpio
name path_id status path_status parent connection

hdisk0 0 Enabled Sel,Opt sas0 2070e29e00,0


hdisk1 0 Enabled Sel,Opt sas0 4070e29e00,0
hdisk2 0 Enabled Non fscsi1 200200a098a55f42,0
hdisk2 1 Enabled Sel,Opt fscsi1 200400a098a55f42,0
hdisk2 2 Enabled Non fscsi3 200100a098a55f42,0
hdisk2 3 Enabled Sel,Opt fscsi3 200300a098a55f42,0
hdisk3 0 Enabled Non fscsi1 200200a098a55f42,1000000000000
hdisk3 1 Enabled Sel,Opt fscsi1 200400a098a55f42,1000000000000
hdisk3 2 Enabled Non fscsi3 200100a098a55f42,1000000000000
hdisk3 3 Enabled Sel,Opt fscsi3 200300a098a55f42,1000000000000
hdisk4 0 Enabled Non fscsi1 200200a098a55f42,2000000000000
hdisk4 1 Enabled Sel,Opt fscsi1 200400a098a55f42,2000000000000
hdisk4 2 Enabled Non fscsi3 200100a098a55f42,2000000000000
hdisk4 3 Enabled Sel,Opt fscsi3 200300a098a55f42,2000000000000
hdisk5 0 Enabled Non fscsi1 200200a098a55f42,3000000000000
hdisk5 1 Enabled Sel,Opt fscsi1 200400a098a55f42,3000000000000
hdisk5 2 Enabled Non fscsi3 200100a098a55f42,3000000000000
hdisk5 3 Enabled Sel,Opt fscsi3 200300a098a55f42,3000000000000
hdisk6 0 Enabled Non fscsi1 200200a098a55f42,4000000000000
hdisk6 1 Enabled Sel,Opt fscsi1 200400a098a55f42,4000000000000
hdisk6 2 Enabled Non fscsi3 200100a098a55f42,4000000000000
hdisk6 3 Enabled Sel,Opt fscsi3 200300a098a55f42,4000000000000
hdisk7 0 Enabled Non fscsi1 200200a098a55f42,5000000000000
hdisk7 1 Enabled Sel,Opt fscsi1 200400a098a55f42,5000000000000
hdisk7 2 Enabled Non fscsi3 200100a098a55f42,5000000000000
hdisk7 3 Enabled Sel,Opt fscsi3 200300a098a55f42,5000000000000
hdisk8 0 Enabled Non fscsi1 200200a098a55f42,6000000000000
hdisk8 1 Enabled Sel,Opt fscsi1 200400a098a55f42,6000000000000
hdisk8 2 Enabled Non fscsi3 200100a098a55f42,6000000000000
hdisk8 3 Enabled Sel,Opt fscsi3 200300a098a55f42,6000000000000
hdisk9 0 Enabled Non fscsi1 200200a098a55f42,7000000000000
hdisk9 1 Enabled Sel,Opt fscsi1 200400a098a55f42,7000000000000
hdisk9 2 Enabled Non fscsi3 200100a098a55f42,7000000000000
hdisk9 3 Enabled Sel,Opt fscsi3 200300a098a55f42,7000000000000
hdisk10 0 Enabled Non fscsi1 200200a098a55f42,8000000000000
hdisk10 1 Enabled Sel,Opt fscsi1 200400a098a55f42,8000000000000
hdisk10 2 Enabled Non fscsi3 200100a098a55f42,8000000000000
hdisk10 3 Enabled Sel,Opt fscsi3 200300a098a55f42,8000000000000
hdisk11 0 Enabled Non fscsi1 200200a098a55f42,9000000000000
hdisk11 1 Enabled Sel,Opt fscsi1 200400a098a55f42,9000000000000
hdisk11 2 Enabled Non fscsi3 200100a098a55f42,9000000000000
hdisk11 3 Enabled Sel,Opt fscsi3 200300a098a55f42,9000000000000
cbsrac1:/>lspath
Available pdisk0 sas0
Available pdisk1 sas0
Enabled hdisk0 sas0
Enabled hdisk1 sas0
Enabled hdisk2 fscsi1
Enabled hdisk2 fscsi1
Enabled hdisk3 fscsi1
Enabled hdisk3 fscsi1
Enabled hdisk4 fscsi1
Enabled hdisk4 fscsi1
Enabled hdisk5 fscsi1
Enabled hdisk5 fscsi1
Enabled hdisk6 fscsi1
Enabled hdisk6 fscsi1
Enabled hdisk7 fscsi1
Enabled hdisk7 fscsi1
Enabled hdisk8 fscsi1
Enabled hdisk8 fscsi1
Enabled hdisk9 fscsi1
Enabled hdisk9 fscsi1
Enabled hdisk10 fscsi1
Enabled hdisk10 fscsi1
Enabled hdisk11 fscsi1
Enabled hdisk11 fscsi1
Enabled hdisk2 fscsi3
Enabled hdisk3 fscsi3
Enabled hdisk4 fscsi3
Enabled hdisk5 fscsi3
Enabled hdisk6 fscsi3
Enabled hdisk7 fscsi3
Enabled hdisk8 fscsi3
Enabled hdisk9 fscsi3
Enabled hdisk10 fscsi3
Enabled hdisk11 fscsi3
Enabled hdisk2 fscsi3
Enabled hdisk3 fscsi3
Enabled hdisk4 fscsi3
Enabled hdisk5 fscsi3
Enabled hdisk6 fscsi3
Enabled hdisk7 fscsi3
Enabled hdisk8 fscsi3
Enabled hdisk9 fscsi3
Enabled hdisk10 fscsi3
Enabled hdisk11 fscsi3
cbsrac1:/>sanlun -help
Usage:
sanlun lun show [-v] [-d <host_device_filename> |
all |
<controller/vserver_name> |
<controller/vserver_name>:<path_name>]
sanlun lun show -wwpn
sanlun lun show -p [-v] [ all |
<controller/vserver_name> |
<controller/vserver_name>:<path_name>]
sanlun fcp show adapter [ -c | [ -v ] [<adapter_name> | all ]]
sanlun version
sanlun [ lun | fcp ] help

cbsrac1:/>sanlun lun show all


controller(7mode)/ device host lun
vserver(Cmode) lun-pathname filename adapter protocol size mode
------------------------------------------------------------------------------------------------------
GYNS_SVM01 /vol/vol_rac_aix01/c_lun_ocr2 hdisk10 fcs3 FCP 2g C
GYNS_SVM01 /vol/vol_rac_aix01/c_lun_ocr3 hdisk11 fcs3 FCP 2g C
GYNS_SVM01 /vol/vol_rac_aix01/c_lun_data1 hdisk2 fcs3 FCP 100g C
GYNS_SVM01 /vol/vol_rac_aix01/c_lun_data2 hdisk3 fcs3 FCP 100g C
GYNS_SVM01 /vol/vol_rac_aix01/c_lun_data3 hdisk4 fcs3 FCP 100g C
GYNS_SVM01 /vol/vol_rac_aix01/c_lun_fr1 hdisk5 fcs3 FCP 50g C
GYNS_SVM01 /vol/vol_rac_aix01/c_lun_fr2 hdisk6 fcs3 FCP 50g C
GYNS_SVM01 /vol/vol_rac_aix01/c_lun_fr3 hdisk7 fcs3 FCP 50g C
GYNS_SVM01 /vol/vol_rac_aix01/c_lun_fr4 hdisk8 fcs3 FCP 50g C
GYNS_SVM01 /vol/vol_rac_aix01/c_lun_ocr1 hdisk9 fcs3 FCP 2g C
cbsrac1:/>sanlun fcp show adapter all

fcs0 WWPN:10000090fadca364

fcs1 WWPN:10000090fadca365

fcs2 WWPN:10000090fadca6e8

fcs3 WWPN:10000090fadca6e9

cbsrac1:/>sanlun lun help


sanlun lun show [-v] [-d <host_device_filename> |
all |
<controller/vserver_name> |
<controller/vserver_name>:<path_name>]
sanlun lun show -wwpn
sanlun lun show -p [-v] [ all |
<controller/vserver_name> |
<controller/vserver_name>:<path_name>]

-v gives verbose output.


-wwpn includes target WWPN information.
-p includes multipathing information.
<host_device_filename>
is the name of a character special device filename or a block
device filename that may represent a controller/vserver LUN.

all will list all controller/vserver LUNs under the device directory (Ex: /dev/rdsk)
<controller/vserver_name>
will list all controller/vserver LUNs under the device directory
which are on <controller/vserver_name>.

<controller/vserver_name>:<path_name>
will list all controller/vserver LUNs
under /dev/rdsk which are connected to the
controller/vserver LUN on <controller/vserver_name> with path <path_name>.

Examples:
sanlun lun show -d /dev/rdsk/c3t4d2s2
sanlun lun show -v all
sanlun lun show all -pv
sanlun lun show toaster
sanlun lun show toaster:/vol/vol0/lun0
cbsrac1:/>sanlun lun show -v all
device host lun
vserver lun-pathname filename adapter protocol size mode
------------------------------------------------------------------------------------------------------
GYNS_SVM01 /vol/vol_rac_aix01/c_lun_ocr2 hdisk10 fcs3 FCP 2g C
LUN Serial number: 80861+JmIXjs
Controller Model Name: FAS8040
Vserver FCP nodename: 200500a098a55f42
Vserver FCP portname: 200400a098a55f42
Vserver LIF name: SAN_LIF1_node2_0f
Vserver IP address: 10.0.101.21
Vserver volume name: vol_rac_aix01 MSID::0x00000000000000000000000080CC83A2
Vserver snapshot name:
device host lun
vserver lun-pathname filename adapter protocol size mode
------------------------------------------------------------------------------------------------------
GYNS_SVM01 /vol/vol_rac_aix01/c_lun_ocr3 hdisk11 fcs3 FCP 2g C
LUN Serial number: 80861+JmIXjt
Controller Model Name: FAS8040
Vserver FCP nodename: 200500a098a55f42
Vserver FCP portname: 200400a098a55f42
Vserver LIF name: SAN_LIF1_node2_0f
Vserver IP address: 10.0.101.21
Vserver volume name: vol_rac_aix01 MSID::0x00000000000000000000000080CC83A2
Vserver snapshot name:
device host lun
vserver lun-pathname filename adapter protocol size mode
------------------------------------------------------------------------------------------------------
GYNS_SVM01 /vol/vol_rac_aix01/c_lun_data1 hdisk2 fcs3 FCP 100g C
LUN Serial number: 80861+JmIXjk
Controller Model Name: FAS8040
Vserver FCP nodename: 200500a098a55f42
Vserver FCP portname: 200400a098a55f42
Vserver LIF name: SAN_LIF1_node2_0f
Vserver IP address: 10.0.101.21
Vserver volume name: vol_rac_aix01 MSID::0x00000000000000000000000080CC83A2
Vserver snapshot name:
device host lun
vserver lun-pathname filename adapter protocol size mode
------------------------------------------------------------------------------------------------------
GYNS_SVM01 /vol/vol_rac_aix01/c_lun_data2 hdisk3 fcs3 FCP 100g C
LUN Serial number: 80861+JmIXjl
Controller Model Name: FAS8040
Vserver FCP nodename: 200500a098a55f42
Vserver FCP portname: 200400a098a55f42
Vserver LIF name: SAN_LIF1_node2_0f
Vserver IP address: 10.0.101.21
Vserver volume name: vol_rac_aix01 MSID::0x00000000000000000000000080CC83A2
Vserver snapshot name:
device host lun
vserver lun-pathname filename adapter protocol size mode
------------------------------------------------------------------------------------------------------
GYNS_SVM01 /vol/vol_rac_aix01/c_lun_data3 hdisk4 fcs3 FCP 100g C
LUN Serial number: 80861+JmIXjm
Controller Model Name: FAS8040
Vserver FCP nodename: 200500a098a55f42
Vserver FCP portname: 200400a098a55f42
Vserver LIF name: SAN_LIF1_node2_0f
Vserver IP address: 10.0.101.21
Vserver volume name: vol_rac_aix01 MSID::0x00000000000000000000000080CC83A2
Vserver snapshot name:
device host lun
vserver lun-pathname filename adapter protocol size mode
------------------------------------------------------------------------------------------------------
GYNS_SVM01 /vol/vol_rac_aix01/c_lun_fr1 hdisk5 fcs3 FCP 50g C
LUN Serial number: 80861+JmIXjn
Controller Model Name: FAS8040
Vserver FCP nodename: 200500a098a55f42
Vserver FCP portname: 200400a098a55f42
Vserver LIF name: SAN_LIF1_node2_0f
Vserver IP address: 10.0.101.21
Vserver volume name: vol_rac_aix01 MSID::0x00000000000000000000000080CC83A2
Vserver snapshot name:
device host lun
vserver lun-pathname filename adapter protocol size mode
------------------------------------------------------------------------------------------------------
GYNS_SVM01 /vol/vol_rac_aix01/c_lun_fr2 hdisk6 fcs3 FCP 50g C
LUN Serial number: 80861+JmIXjo
Controller Model Name: FAS8040
Vserver FCP nodename: 200500a098a55f42
Vserver FCP portname: 200400a098a55f42
Vserver LIF name: SAN_LIF1_node2_0f
Vserver IP address: 10.0.101.21
Vserver volume name: vol_rac_aix01 MSID::0x00000000000000000000000080CC83A2
Vserver snapshot name:
device host lun
vserver lun-pathname filename adapter protocol size mode
------------------------------------------------------------------------------------------------------
GYNS_SVM01 /vol/vol_rac_aix01/c_lun_fr3 hdisk7 fcs3 FCP 50g C
LUN Serial number: 80861+JmIXjp
Controller Model Name: FAS8040
Vserver FCP nodename: 200500a098a55f42
Vserver FCP portname: 200400a098a55f42
Vserver LIF name: SAN_LIF1_node2_0f
Vserver IP address: 10.0.101.21
Vserver volume name: vol_rac_aix01 MSID::0x00000000000000000000000080CC83A2
Vserver snapshot name:
device host lun
vserver lun-pathname filename adapter protocol size mode
------------------------------------------------------------------------------------------------------
GYNS_SVM01 /vol/vol_rac_aix01/c_lun_fr4 hdisk8 fcs3 FCP 50g C
LUN Serial number: 80861+JmIXjq
Controller Model Name: FAS8040
Vserver FCP nodename: 200500a098a55f42
Vserver FCP portname: 200400a098a55f42
Vserver LIF name: SAN_LIF1_node2_0f
Vserver IP address: 10.0.101.21
Vserver volume name: vol_rac_aix01 MSID::0x00000000000000000000000080CC83A2
Vserver snapshot name:
device host lun
vserver lun-pathname filename adapter protocol size mode
------------------------------------------------------------------------------------------------------
GYNS_SVM01 /vol/vol_rac_aix01/c_lun_ocr1 hdisk9 fcs3 FCP 2g C
LUN Serial number: 80861+JmIXjr
Controller Model Name: FAS8040
Vserver FCP nodename: 200500a098a55f42
Vserver FCP portname: 200400a098a55f42
Vserver LIF name: SAN_LIF1_node2_0f
Vserver IP address: 10.0.101.21
Vserver volume name: vol_rac_aix01 MSID::0x00000000000000000000000080CC83A2
Vserver snapshot name:
cbsrac1:/>sanlun lun show all -pv

ONTAP Path: GYNS_SVM01:/vol/vol_rac_aix01/c_lun_ocr1


LUN: 7
LUN Size: 2g
Host Device: hdisk9
Mode: C
Multipath Provider: AIX Native
Multipathing Algorithm: round_robin
--------- ----------- ------ ------- ---------------------------------------------- ----------
host vserver AIX AIX MPIO
path path MPIO host vserver path
state type path adapter LIF priority
--------- ----------- ------ ------- ---------------------------------------------- ----------
up secondary path0 fcs1 SAN_LIF1_node1_0f 1
up primary path1 fcs1 SAN_LIF1_node2_0f 1
up secondary path2 fcs3 SAN_LIF1_node1_0e 1
up primary path3 fcs3 SAN_LIF1_node2_0e 1

ONTAP Path: GYNS_SVM01:/vol/vol_rac_aix01/c_lun_fr4


LUN: 6
LUN Size: 50g
Host Device: hdisk8
Mode: C
Multipath Provider: AIX Native
Multipathing Algorithm: round_robin
--------- ----------- ------ ------- ---------------------------------------------- ----------
host vserver AIX AIX MPIO
path path MPIO host vserver path
state type path adapter LIF priority
--------- ----------- ------ ------- ---------------------------------------------- ----------
up secondary path0 fcs1 SAN_LIF1_node1_0f 1
up primary path1 fcs1 SAN_LIF1_node2_0f 1
up secondary path2 fcs3 SAN_LIF1_node1_0e 1
up primary path3 fcs3 SAN_LIF1_node2_0e 1

ONTAP Path: GYNS_SVM01:/vol/vol_rac_aix01/c_lun_fr3


LUN: 5
LUN Size: 50g
Host Device: hdisk7
Mode: C
Multipath Provider: AIX Native
Multipathing Algorithm: round_robin
--------- ----------- ------ ------- ---------------------------------------------- ----------
host vserver AIX AIX MPIO
path path MPIO host vserver path
state type path adapter LIF priority
--------- ----------- ------ ------- ---------------------------------------------- ----------
up secondary path0 fcs1 SAN_LIF1_node1_0f 1
up primary path1 fcs1 SAN_LIF1_node2_0f 1
up secondary path2 fcs3 SAN_LIF1_node1_0e 1
up primary path3 fcs3 SAN_LIF1_node2_0e 1

ONTAP Path: GYNS_SVM01:/vol/vol_rac_aix01/c_lun_fr2


LUN: 4
LUN Size: 50g
Host Device: hdisk6
Mode: C
Multipath Provider: AIX Native
Multipathing Algorithm: round_robin
--------- ----------- ------ ------- ---------------------------------------------- ----------
host vserver AIX AIX MPIO
path path MPIO host vserver path
state type path adapter LIF priority
--------- ----------- ------ ------- ---------------------------------------------- ----------
up secondary path0 fcs1 SAN_LIF1_node1_0f 1
up primary path1 fcs1 SAN_LIF1_node2_0f 1
up secondary path2 fcs3 SAN_LIF1_node1_0e 1
up primary path3 fcs3 SAN_LIF1_node2_0e 1

ONTAP Path: GYNS_SVM01:/vol/vol_rac_aix01/c_lun_fr1


LUN: 3
LUN Size: 50g
Host Device: hdisk5
Mode: C
Multipath Provider: AIX Native
Multipathing Algorithm: round_robin
--------- ----------- ------ ------- ---------------------------------------------- ----------
host vserver AIX AIX MPIO
path path MPIO host vserver path
state type path adapter LIF priority
--------- ----------- ------ ------- ---------------------------------------------- ----------
up secondary path0 fcs1 SAN_LIF1_node1_0f 1
up primary path1 fcs1 SAN_LIF1_node2_0f 1
up secondary path2 fcs3 SAN_LIF1_node1_0e 1
up primary path3 fcs3 SAN_LIF1_node2_0e 1

ONTAP Path: GYNS_SVM01:/vol/vol_rac_aix01/c_lun_data3


LUN: 2
LUN Size: 100g
Host Device: hdisk4
Mode: C
Multipath Provider: AIX Native
Multipathing Algorithm: round_robin
--------- ----------- ------ ------- ---------------------------------------------- ----------
host vserver AIX AIX MPIO
path path MPIO host vserver path
state type path adapter LIF priority
--------- ----------- ------ ------- ---------------------------------------------- ----------
up secondary path0 fcs1 SAN_LIF1_node1_0f 1
up primary path1 fcs1 SAN_LIF1_node2_0f 1
up secondary path2 fcs3 SAN_LIF1_node1_0e 1
up primary path3 fcs3 SAN_LIF1_node2_0e 1

ONTAP Path: GYNS_SVM01:/vol/vol_rac_aix01/c_lun_data2


LUN: 1
LUN Size: 100g
Host Device: hdisk3
Mode: C
Multipath Provider: AIX Native
Multipathing Algorithm: round_robin
--------- ----------- ------ ------- ---------------------------------------------- ----------
host vserver AIX AIX MPIO
path path MPIO host vserver path
state type path adapter LIF priority
--------- ----------- ------ ------- ---------------------------------------------- ----------
up secondary path0 fcs1 SAN_LIF1_node1_0f 1
up primary path1 fcs1 SAN_LIF1_node2_0f 1
up secondary path2 fcs3 SAN_LIF1_node1_0e 1
up primary path3 fcs3 SAN_LIF1_node2_0e 1

ONTAP Path: GYNS_SVM01:/vol/vol_rac_aix01/c_lun_data1


LUN: 0
LUN Size: 100g
Host Device: hdisk2
Mode: C
Multipath Provider: AIX Native
Multipathing Algorithm: round_robin
--------- ----------- ------ ------- ---------------------------------------------- ----------
host vserver AIX AIX MPIO
path path MPIO host vserver path
state type path adapter LIF priority
--------- ----------- ------ ------- ---------------------------------------------- ----------
up secondary path0 fcs1 SAN_LIF1_node1_0f 1
up primary path1 fcs1 SAN_LIF1_node2_0f 1
up secondary path2 fcs3 SAN_LIF1_node1_0e 1
up primary path3 fcs3 SAN_LIF1_node2_0e 1

ONTAP Path: GYNS_SVM01:/vol/vol_rac_aix01/c_lun_ocr3


LUN: 9
LUN Size: 2g
Host Device: hdisk11
Mode: C
Multipath Provider: AIX Native
Multipathing Algorithm: round_robin
--------- ----------- ------ ------- ---------------------------------------------- ----------
host vserver AIX AIX MPIO
path path MPIO host vserver path
state type path adapter LIF priority
--------- ----------- ------ ------- ---------------------------------------------- ----------
up secondary path0 fcs1 SAN_LIF1_node1_0f 1
up primary path1 fcs1 SAN_LIF1_node2_0f 1
up secondary path2 fcs3 SAN_LIF1_node1_0e 1
up primary path3 fcs3 SAN_LIF1_node2_0e 1

ONTAP Path: GYNS_SVM01:/vol/vol_rac_aix01/c_lun_ocr2


LUN: 8
LUN Size: 2g
Host Device: hdisk10
Mode: C
Multipath Provider: AIX Native
Multipathing Algorithm: round_robin
--------- ----------- ------ ------- ---------------------------------------------- ----------
host vserver AIX AIX MPIO
path path MPIO host vserver path
state type path adapter LIF priority
--------- ----------- ------ ------- ---------------------------------------------- ----------
up secondary path0 fcs1 SAN_LIF1_node1_0f 1
up primary path1 fcs1 SAN_LIF1_node2_0f 1
up secondary path2 fcs3 SAN_LIF1_node1_0e 1
up primary path3 fcs3 SAN_LIF1_node2_0e 1
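A quick sanity check on the `sanlun lun show all -pv` output above: with two HBAs and two LIFs per controller, every LUN should report two primary and two secondary paths, all up. This can be verified mechanically with awk. A minimal sketch — the printf feeds sample lines copied from one LUN's path table; on the real host you would pipe the actual command output instead:

```shell
# Count primary vs secondary paths in a sanlun path table.
# On AIX you would run something like:
#   sanlun lun show all -pv | awk '$2=="primary"{p++} $2=="secondary"{s++} END {...}'
printf '%s\n' \
  'up secondary path0 fcs1 SAN_LIF1_node1_0f 1' \
  'up primary path1 fcs1 SAN_LIF1_node2_0f 1' \
  'up secondary path2 fcs3 SAN_LIF1_node1_0e 1' \
  'up primary path3 fcs3 SAN_LIF1_node2_0e 1' |
awk '$2=="primary"{p++} $2=="secondary"{s++} END {print p" primary, "s" secondary"}'
```

For the symmetric dual-fabric layout shown above this should print "2 primary, 2 secondary" for each LUN; anything else means a missing or failed path.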

cbsrac1:/>lsmpio -h

Usage:
lsmpio [-l device name]
lsmpio -S [-l device name] [-d]
lsmpio -z [-l device name]
lsmpio -q [-l device name]
lsmpio -a [-r]
lsmpio -h
cbsrac1:/>lsmpio -a
Adapter Driver: fscsi1 -> AIX PCM
Adapter WWPN: 10000090fadca365
Link State: Up

Adapter Driver: fscsi3 -> AIX PCM
Adapter WWPN: 10000090fadca6e9
Link State: Up

cbsrac1:/>lsmpio -ar
Adapter Driver: fscsi1 -> AIX PCM
Adapter WWPN: 10000090fadca365
Link State: Up
Paths Paths Paths Paths
Remote Ports Enabled Disabled Failed Missing ID
200200a098a55f42 20 0 0 0 0x21601
200400a098a55f42 20 0 0 0 0x21701

Adapter Driver: fscsi3 -> AIX PCM
Adapter WWPN: 10000090fadca6e9
Link State: Up
Paths Paths Paths Paths
Remote Ports Enabled Disabled Failed Missing ID
200100a098a55f42 20 0 0 0 0x11601
200300a098a55f42 20 0 0 0 0x11701

cbsrac1:/>
cbsrac1:/>lspv
hdisk0 00faf3115c5b1a6d rootvg active
hdisk1 00faf311fbd66687 rootvg active
hdisk2 none None
hdisk3 none None
hdisk4 none None
hdisk5 none None
hdisk6 none None
hdisk7 none None
hdisk8 none None
hdisk9 none None
hdisk10 none None
hdisk11 none None

10 LUNs with 4 paths each: 20 paths per FC adapter, 20+20=40 paths in total, matching the lsmpio -ar counts above.

cbsrac1:/>lspath
Available pdisk0 sas0
Available pdisk1 sas0
Enabled hdisk0 sas0
Enabled hdisk1 sas0
Enabled hdisk2 fscsi1
Enabled hdisk2 fscsi1
Enabled hdisk3 fscsi1
Enabled hdisk3 fscsi1
Enabled hdisk4 fscsi1
Enabled hdisk4 fscsi1
Enabled hdisk5 fscsi1
Enabled hdisk5 fscsi1
Enabled hdisk6 fscsi1
Enabled hdisk6 fscsi1
Enabled hdisk7 fscsi1
Enabled hdisk7 fscsi1
Enabled hdisk8 fscsi1
Enabled hdisk8 fscsi1
Enabled hdisk9 fscsi1
Enabled hdisk9 fscsi1
Enabled hdisk10 fscsi1
Enabled hdisk10 fscsi1
Enabled hdisk11 fscsi1
Enabled hdisk11 fscsi1
Enabled hdisk2 fscsi3
Enabled hdisk3 fscsi3
Enabled hdisk4 fscsi3
Enabled hdisk5 fscsi3
Enabled hdisk6 fscsi3
Enabled hdisk7 fscsi3
Enabled hdisk8 fscsi3
Enabled hdisk9 fscsi3
Enabled hdisk10 fscsi3
Enabled hdisk11 fscsi3
Enabled hdisk2 fscsi3
Enabled hdisk3 fscsi3
Enabled hdisk4 fscsi3
Enabled hdisk5 fscsi3
Enabled hdisk6 fscsi3
Enabled hdisk7 fscsi3
Enabled hdisk8 fscsi3
Enabled hdisk9 fscsi3
Enabled hdisk10 fscsi3
Enabled hdisk11 fscsi3
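The lspath listing above can be summarised per disk to confirm that every NetApp LUN really has four enabled paths. A minimal sketch — the printf inlines a few sample lines here; on the server you would pipe the real command as shown in the comment:

```shell
# Count enabled FC paths per hdisk from lspath-style output.
# On AIX you would run:
#   lspath | grep fscsi | awk '{n[$2]++} END {for (d in n) print d, n[d]}' | sort
printf '%s\n' \
  'Enabled hdisk2 fscsi1' \
  'Enabled hdisk2 fscsi1' \
  'Enabled hdisk2 fscsi3' \
  'Enabled hdisk2 fscsi3' \
  'Enabled hdisk3 fscsi1' |
awk '{n[$2]++} END {for (d in n) print d, n[d]}' | sort
```

A disk that reports fewer paths than its peers is the first place to look for a zoning or cabling problem.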
