
Switch

In the computer storage field, a Fibre Channel switch is a network switch compatible with the Fibre Channel (FC) protocol. The host performs its read/write operations on the storage through the switch; in other words, both the host and the storage are connected to the switch.

SAN Connectivity

 Switch Topology

1. Core edge Topology


2. Edge core Topology

Many vendors in the market manufacture FC switches, such as Brocade, Cisco, McData, and EMC Connectrix.

Front view of a Switch


Rear view of a Switch

In a switch, the major administrative activity is zoning.

Zoning means "grouping of host HBA WWPNs and storage front-end port WWPNs so that they can speak to each other."

Zoning Structure
Types of Zoning:

1. Soft Zoning
2. Hard Zoning

3. Mixed Zoning

Soft Zoning:

It uses the server HBA WWN and the storage front-end port WWN. It's also known as WWN zoning. A major advantage of WWN zoning is its flexibility: it allows the SAN to be recabled without reconfiguring the zone information.
Physical Cabling between Server, Switch and Storage with Single Path

Physical Cabling between Server, Switch and Storage with Multipath

Hard Zoning:

It uses the physical switch ports to which the server and storage are connected on an FC switch. It's also known as port zoning. A major disadvantage is that any change in the fabric cabling affects the zoning database.

Mixed Zoning: 

It combines the qualities of both WWN zoning and port zoning.


Introduction

Here we list some of the leading vendors in the IT infrastructure space.

Storage Vendors: EMC, NetApp, HP, Hitachi, IBM, Dell, Tintri, Oracle, etc.

HBA Vendors: QLogic and Emulex.

Server Vendors: Oracle, Dell, IBM, HP, and Hitachi.

Types of drives: SATA, SAS, NL-SAS, FC, and EFD/SSD/Flash drives.

There are some common terms that you will keep encountering in the storage platform.

LUN:  LUN stands for Logical Unit Number. It's a slice of space carved out of the underlying drives and presented to a host.

RAID Group:  A group of up to 16 drives of the same drive type from which LUNs are created.

Storage Pool:  A collection of drives, of the same or different drive types, from which LUNs are created.

Masking:  It means that a particular LUN is visible only to a particular host. In other words, a LUN can be made visible to only one storage group/host.

Storage Group:  Effectively, it represents a host. A storage group is a collection of one or more LUNs (or meta LUNs) to which you connect one or more servers.

Meta LUN: The meta LUN feature allows traditional LUNs to be aggregated in order to increase the size or performance of the base LUN. The LUN is expanded by the addition of other LUNs. The LUNs that make up a meta LUN are called meta members, and the base LUN is known as the meta head. We can add up to 255 meta members to 1 meta head (256 LUNs in total).

Access Logix: Access Logix provides LUN masking, which allows the storage system to be shared among multiple hosts.

PSM: The Persistent Storage Manager (PSM) LUN stores configuration information about the VNX/CLARiiON, such as disks, RAID groups, LUNs, Access Logix information, and the SnapView, MirrorView, and SAN Copy configurations.

The FLARE Code is broken down as follows:

 1.14.600.5.022 (32 Bit)

 2.16.700.5.031 (32 Bit)

 2.24.700.5.031 (32 Bit)

 3.26.020.5.011 (32 Bit)

 4.28.480.5.010 (64 Bit)

The first digit (1, 2, 3, or 4) indicates the generation of the machine this code level can be installed on. For the 1st and 2nd generation machines (CX600 and CX700), you should be able to use standard 2nd generation code levels. CX3 code levels have a 3 in front, and so forth.

These numbers will always increase as new Generations of VNX/Clariion machines are added.

The next two digits are the release numbers; these release numbers are very important and really give
you additional features related to the VNX/Clariion FLARE Operating Environment. When someone
comes up to you and says, my VNX/Clariion CX3 is running Flare 26, this is what they mean.

These numbers will always increase, 28 being the latest FLARE Code Version.

The next 3 digits are the model number of the VNX/Clariion, like the CX600, CX700, CX3-20 and CX4-480.

These numbers can be all over the map, depending on what the model number of your VNX/CLARiiON is.
The 5 here is of unknown meaning; it carries over from previous FLARE releases. Going back to the pre-CX days
(FC series), this 5 was already in use. I believe it was some sort of internal code used at Data General
indicating a FLARE release.

The last 3 digits are the Patch level of the FLARE Environment. This would be the last known compilation
of the code for that FLARE version.
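
To make the breakdown above concrete, here is a minimal Python sketch that splits a FLARE version string into its parts (the field names are my own labels for illustration, not official EMC terminology):

def parse_flare_version(version):
    # Split a FLARE version string such as "4.28.480.5.010" into its five fields.
    generation, release, model, legacy, patch = version.split(".")
    return {
        "generation": generation,  # array generation (e.g. 4 = CX4 series)
        "release": release,        # FLARE release number (e.g. 28)
        "model": model,            # array model number (e.g. 480 = CX4-480)
        "legacy": legacy,          # the carried-over "5" of unknown meaning
        "patch": patch,            # patch level of this FLARE build
    }

print(parse_flare_version("4.28.480.5.010"))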

Failover modes: There are 4 types:

                1. Failover mode 1 or Passive/Passive mode

                2. Failover mode 2 or Passive/Active mode

                3. Failover mode 3 or Active/Passive mode

                4. Failover mode 4 or Active/Active mode

As per best practice, failover mode 4 (ALUA) is preferred over the others.

With Failover Mode 4, in the case of a path, HBA, or switch failure, when I/O routes to the non-owning
SP, the LUN may not trespass immediately.

Clariion/VNX LUN Provisioning or Allocation

Log in to Unisphere with the specific IP address and authorized user credentials.

For example: 10.XX.XX.X.XX

The VNX Unisphere dashboard will look as shown below.


Dashboard page

Go to Storage Tab and select the LUN option

Slide 1

Click on the Create option; a pop-up window will open.

Slide 2
Fill in all the fields with the required info, click Apply, and hit the OK button.

Slide 3

Select the newly created LUN and then select the Add to Storage Group option at the bottom right of the window.
Slide 4

Select the specific storage group (host) listed in the Available Storage Groups column and click the right-side arrow; the selected storage group will move to the Selected Storage Groups column. Then hit the OK button.

Slide 5

Now we have to pass this information to the platform team so they can check the visibility of the LUN on the host.
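
For reference, a rough naviseccli sketch of the same provisioning steps, assuming a pool-based LUN; the pool name, LUN name/ID, storage group name, and HLU/ALU numbers below are placeholders to replace with your own values:

naviseccli -h <SP_IP> lun -create -type nonThin -capacity 100 -sq gb -poolName "Pool 0" -name NEW_LUN -l 145     (create a 100 GB thick pool LUN)

naviseccli -h <SP_IP> storagegroup -addhlu -gname HOST_SG -hlu 0 -alu 145     (mask the LUN to the host's storage group)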

VNX LUN Expansion

Procedure:

The platform team will raise a request for the LUN expansion activity. The workflow follows the pattern below:

When a request is raised to expand a LUN, we first check the prerequisites such as the LUN naa ID and the LUN name.

Log in to the VNX Unisphere with authorized credentials.

Go to the Storage tab and select the LUN option.

Select the specific LUN, click the Properties option, and verify the LUN naa ID and LUN name.

Once verified, right-click on the specific LUN and select the Expand option.

Enter the size to which the LUN needs to be expanded and click OK.
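
For reference, a hedged naviseccli sketch of the same expansion for a pool LUN; the LUN ID and new size are placeholders, and the exact flags should be verified against your Block CLI version:

naviseccli -h <SP_IP> lun -expand -l 145 -capacity 200 -sq gb     (expand pool LUN 145 to 200 GB)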

Creation of Storage Group

The procedure is as follows:

Login to the Unisphere.

Go to Host Tab and select the Storage Group option.

Unisphere image

Select the Create option at the bottom of the page.


Storage Group

Name the storage group and hit the OK button.

Creation Window
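
A rough naviseccli equivalent of creating the storage group and connecting a host to it; the group and host names are placeholders, and the -connecthost step assumes the host's initiators are already registered on the array:

naviseccli -h <SP_IP> storagegroup -create -gname NEW_HOST_SG

naviseccli -h <SP_IP> storagegroup -connecthost -host NEW_HOST -gname NEW_HOST_SG -o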

Creation of Storage Pool in VNX Unisphere

Storage Pool: 

                  It's a physical collection of disks on which logical units (LUNs) are created. 

Pools are dedicated for use by pool (thin or thick) LUNs. Whereas a RAID group can only contain up to 16 disks, a pool can contain hundreds of disks.

Login to the VNX Unisphere.


VNX Unisphere

Go to the Storage tab on the menu bar and select the "Storage Pool" option.

Storage Pool Tab

Click on Create button to create a new storage pool.


Storage Pool Create option

A pop-up window will open. Fill in all the fields, such as the name of the storage pool, its ID, and the type of pool you need to create (Extreme Performance, Performance, or Capacity), then select the Automatic disk selection option and hit the OK button.

Creation of Storage Pool
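
To confirm the pool was created and to check its free capacity afterwards, a quick CLI check (assuming the storagepool object is available in your version of naviseccli):

naviseccli -h <SP_IP> storagepool -list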


LUN Provisioning for a New Server

Whenever a new server is deployed in the environment, the platform team, whether from the Windows, Linux, or Solaris domain, will contact you for free ports on the switch (for example, a Cisco switch) to connect the server with the storage.

We will log in to the switch with authorized credentials via PuTTY.

To download putty please find the link below:

 http://www.sanadmin.net/2015/12/putty-download.html

Once logged in, we check the free port details by using the command below.

Switch # sh interface brief


 

Note:  As a storage admin, we also have to know the server's HBA details. Based on that information, we identify the free ports on the two switches.
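
Alongside sh interface brief, a couple of standard Cisco MDS show commands are useful at this stage (shown as a sketch; the VSAN number is just the one used in the zoning example later):

Switch # show flogi database          (lists the WWPNs logged in on each interface)

Switch # show fcns database vsan 2    (lists the devices registered in the name server for VSAN 2)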

We will share the free port details with the platform team, and the platform team will then contact the data center folks to lay the physical cabling between the new server and the switches.

Note:  The Storage ports are already connected to the switches.

Once the cabling is complete, the platform team will inform us to do the zoning.

Zoning:

Grouping of Host HBA WWPN and Storage Front End Ports WWPN to speak each other.

All the commands should be run in Configuration Mode.


SwitchName # config t      (enter configuration mode)

SwitchName (config) # zone name ABC vsan 2

SwitchName (config-zone) # member pwwn 50:06:01:60:08:60:37:fb

SwitchName (config-zone) # member pwwn 21:00:00:24:ff:0d:fc:fb

SwitchName (config-zone) # exit

SwitchName (config) # zoneset name XYZ vsan 2

SwitchName (config-zoneset) # member ABC

SwitchName (config-zoneset) # exit

SwitchName (config) # zoneset activate name XYZ vsan 2

For more details about the zoning, please refer to the link below.

http://www.sanadmin.net/2015/11/cisco-zoning-procedure-with-commands.html
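
After activation, it is worth verifying the zone and saving the switch configuration; a short sketch using standard MDS commands (the zone and zoneset names match the example above):

SwitchName # show zone name ABC vsan 2              (verify the zone members)

SwitchName # show zoneset active vsan 2             (verify the active zoneset)

SwitchName # copy running-config startup-config     (save the configuration)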

Once the zoning is completed, we have to check the initiator status by logging in to Unisphere (the VNX storage array).

The procedure is as follows:


Go to the Host Tab and select the Initiators tab.

Search for the host for which you have done the zoning activity.

Verify the host name, IP address, and the host HBA WWPN and WWNN numbers.

In the Initiators window, check the Registered and Logged In columns. If both columns show "Yes", your zoning is correct and the host is connected to the storage box.

If one column shows "Yes" and the other shows "No", your zoning is not correct. Recheck the zoning steps; if they are correct and the issue persists, check the host WWPN and WWNN and the cable connectivity.
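
The same initiator information can also be pulled from the CLI; a sketch, assuming your naviseccli version supports the -hba qualifier:

naviseccli -h <SP_IP> port -list -hba     (lists registered HBAs and their logged-in status)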

Now we have to create a New Storage Group for the New Host.

The procedure is as follows:

Go to Host Tab and select the Storage group option.

Click on the Create option to create a new storage group.

Name the storage group for your identification and hit the OK button to complete your task.

Before creating a LUN of a specified size, we have to check prerequisites like the following:

Check the availability of free space in the storage pool from which you are going to create the LUN.
If free space is not available in that specific storage pool, share the information with your reporting manager or your seniors.

Now we will create a LUN with the specified size.

Login to the VNX Unisphere.

Go to Storage Tab and select the LUN option.

Fill in all the fields, such as the storage pool, LUN capacity, number of LUNs to be created, and name of the LUN, and specify whether the LUN is THICK or THIN.

Then hit the OK button to complete the LUN creation task.

Now we have to add the newly created LUN to the newly created Storage Group (Masking). 

To know more about the storage terminology, refer to the link below:

http://www.sanadmin.net/2015/12/storage-terminology.html

On the LUN creation page, there is an option known as "ADD TO STORAGE GROUP" at the bottom of the page.

Click on it and a new page will open.

Two columns will appear on the page, one labeled "Available Hosts" and the other "Connected Hosts".
Select the new storage group in the available hosts column and click the right-side arrow; the host will move to the connected hosts column. Then hit the OK button.

Inform the platform team that the LUN has been assigned to the host, attaching a screenshot of the page, and also ask them to rescan the disks at the platform level.

Initiating LUN Migration

Log in to the Unisphere.

Go to the Storage tab and select the LUN which you want to migrate.

Right-click on the source LUN (for example, LUN 145) and choose Migrate.

Select the target destination LUN (for example, LUN 149) from the Available Destination LUN field.

Set the Migration Rate in the drop-down menu (for example, Low).

NOTE:  The Migration Rate drop-down menu offers four levels of service: ASAP, High, Medium, and Low. 

Source LUN > 150GB; Migration Rate: Medium

Source LUN < 150GB; Migration Rate: High

Click OK 

YES

DONE

The Destination LUN assumes the identity of Source LUN.


The Source LUN is unbound when the migration process is complete.

Note: During migration, the destination LUN will be shown under 'Private LUNs'.

IMP: Only one LUN migration per array at any time. The size of the target LUN should be equal to or greater than that of the source LUN.
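
The same migration can be driven from the CLI; a sketch where the LUN IDs match the example above and the -rate values are asap, high, medium, or low (treat the exact flags as assumptions to verify against your CLI version):

naviseccli -h <SP_IP> migrate -start -source 145 -dest 149 -rate low

naviseccli -h <SP_IP> migrate -list     (monitor the progress of the migration)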

Gather SP Collects in VNX Unisphere GUI

The procedure is as follows:

The main purpose of collecting the SP collect logs is to analyze the storage array's performance.
We can also find errors such as disk soft media errors, hardware failures, and so on.

Login to the VNX Unisphere.

Click on the System tab on the menu bar.

Storage tab in VNX Unisphere

On the right side of the main screen, you will be able to see the Wizards column.

Go to Diagnostic Files tab.


Select the Generate Diagnostic Files – SP A to generate the logs for Storage Processor A

Diagnostic Files Column

And also select the Generate Diagnostic Files – SP B to generate the logs for Storage Processor B as
shown above.

Select the Get Diagnostic Files – SP A to retrieve the logs

Diagnostic Files Column 

And also select the Get Diagnostic Files – SP B to retrieve the logs as shown above.

A page will open showing the logs being generated; sort the logs by date range, and your log file will be shown with a name like XXX.runlog.txt
SP File Transfer Manager

It will take 10-15 minutes to gather the logs for each SP.

Once it completes, the log file will change from runlog.txt to data.zip, as shown below.

SP File Transfer Manager

Transfer the file to your desired location and upload the logs to the EMC support team to analyze the array's performance.

Commands to gather SP Collects from NaviCLI

Open a command prompt on the Management Station.


Type cd c:\program files\emc\navisphere cli

Type navicli -h <SP_IP_address> spcollect -messner            

This starts the SP collect script, which gathers the logs.

Type navicli -h <SP_IP_address> managefiles -list     

This will list the files created by SPcollect.

Type navicli -h <SP_IP_address> managefiles -retrieve

This will display the files that can be moved from the SP to the Management Station.

Example:

Gathering SP Collects through NaviCLI

Enter files to be retrieved with index separated by comma (1,2,3,4,5) OR by a range (1-3) OR enter 'all'
to retrieve all file OR 'quit' to quit> 13.

This will pull the index number 13 from the corresponding SP and copy it to the c:\program
files\emc\navisphere cli directory with a filename of SPA__APM00023000437_9c773_05-
27-2004_46_data.zip.
 
Upload the file or files to an FTP site as directed by EMC Support.   

Email Alert Notification

Notification alerts are a pro-active step for getting ahead of future hardware or software failures.
Whenever a change occurs in our storage array, whether it is a critical, warning, or informational message, an alert notification will be triggered to your specified email address or to a group of members.

Login to the VNX Unisphere.

Click on the System tab on the menu bar.

Click on Monitoring and Alert option.

Select Notification option.

Click on Notification template tab.

Go to Engineering Mode by pressing the Shift+Ctrl+F12 buttons on your keyboard.

Type the password as “messner” and click on Ok.

Select the Call_Home_Template and click on Properties.


Notification Template

A new window will open; in the "General" tab, click the "Advanced" option to select the event codes (Critical Error, Error, Warning, Information) whose alerts should trigger to your email address.

Go to E-Mail tab and specify all the parameters for the email notification.

To trigger the email notification alerts, the SMTP server IP address is mandatory.


Template Properties

The Message will look like in this pattern:

Time Stamp %T% (GMT) Event Number %N%

Severity %SEV% Host %H%

Storage Array %SUB% %SP% Device %D%

Description %E%

Company Name:

Contact Name:
Contact Phone Number:

Contact Email Address:

Secondary Contact Name:

Secondary Contact phone Number:

Secondary Contact Email Address:

Additional Comments:

IP Address SP A: 10.XX.XX.XXX SP B:10.XX.XX.XXX

Once all the parameters are specified, click on Test to verify whether the email notification alerts are triggered to your specified email address.

Once the test completes successfully, click OK.

.NAR Files

Whenever we face performance issues at the server, storage, or switch level, we have to generate the .NAR files and upload them to EMC. The performance team will analyze the files and give recommendations to resolve the issue.

To generate the logs, please follow the steps below.

Login to the Unisphere


Go to System tab and select the Monitoring and alerts options.

Select the Statistics option

Go to the Performance Data Logging option under the Settings column.

Performance Data Logging option under the Statistics tab

Check the status of the data logging. If the status is Stopped, start it; we then have to wait 24-48 hours for the .NAR file to be generated.

If the status is Started, go to the Retrieve Archives option under the Archive Management column.

Select the file, browse to the desired location, and then click the OK button.

How trespassing works using ALUA (Failover Mode 4) on a VNX/CLARiiON

Product:

CLARiiON CX3 & CX4 Series/VNX

Description:
 How trespassing works using ALUA (Failover mode 4) on a VNX/CLARiiON storage system?

Resolution:

Since FLARE 26, Asymmetric Active/Active has provided a new way for CLARiiON arrays to present LUNs to hosts, eliminating the need for hosts to deal with the LUN ownership model. Prior to FLARE 26, all CLARiiON arrays used the standard active/passive presentation, in which one SP "owns" the LUN and all I/O to that LUN is sent only to that SP. If all paths to that SP fail, the ownership of the LUN is 'trespassed' to the other SP and the host-based path management software adjusts the I/O path accordingly.

Asymmetric Active/Active introduces a new initiator Failover Mode (Failover mode 4) where initiators
are permitted to send I/O to a LUN regardless of which SP actually owns the LUN.

Manual trespass:

When a manual trespass is issued (using Navisphere Manager or CLI) to a LUN on an SP that is accessed by a host with Failover Mode 1, subsequent I/O for that LUN is rejected on the SP on which the manual trespass was issued. The failover software redirects I/O to the SP that owns the LUN.

A manual trespass operation causes the ownership of a given LUN, owned by a given SP, to change. If this LUN is accessed by an ALUA host (Failover Mode set to 4) and I/O is sent to the SP that does not currently own the LUN, this causes I/O redirection. In such a situation, the array will change the ownership of the LUN based on how many I/Os the LUN processes on each SP (a threshold of roughly 64,000 I/Os).

Path, HBA, switch failure:

If a host is configured with Failover Mode 1 and all the paths to the SP that owns a LUN fail, the LUN is 
trespassed to the other SP by the host’s failover software.

With Failover Mode 4, in the case of a path, HBA, or switch failure, when I/O routes to the non-owning
SP, the LUN may not trespass immediately (depending on the failover software on the host). If the LUN
is not trespassed to the owning SP, FLARE will trespass the LUN to the SP that receives the most I/O
requests to  that LUN. This is accomplished by the array keeping track of how many I/Os a LUN processes
on each SP. If the non-optimized SP processes 64,000 or more I/Os than the optimal SP, the array will
change the ownership to the non-optimal SP, making it optimal.   
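
To make the counter logic above concrete, here is an illustrative Python sketch of the ownership-change decision; the function and threshold handling are my own simplification, not FLARE's actual implementation:

# Simplified illustration of the ALUA trespass heuristic described above.
TRESPASS_THRESHOLD = 64000  # I/O count difference that triggers an ownership change

def should_trespass(non_optimal_io_count, optimal_io_count):
    # Trespass the LUN if the non-owning SP is handling far more of its I/O.
    return non_optimal_io_count - optimal_io_count >= TRESPASS_THRESHOLD

# Example: the non-owning SP has handled 70,000 more I/Os for the LUN than the owning SP.
print(should_trespass(120000, 50000))  # True -> FLARE changes the LUN's ownership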

SP failure

In case of an SP failure for a host configured as Failover Mode 1, the failover software trespasses the
LUN to the surviving SP.

With Failover Mode 4, if an I/O arrives from an ALUA initiator on the surviving SP (non-optimal), FLARE
initiates an internal trespass operation. This operation changes ownership of the target LUN to the
surviving SP since its peer SP is dead. Hence, the host (failover software) must have access to the
secondary SP so that it can issue an I/O under these circumstances.  

Single backend failure

Before FLARE Release 26, if the failover software was misconfigured (for example, a single attach 
configuration), a single back-end failure (for example, an LCC or BCC failure) would generate an I/O error
since the failover software would not be able to try the alternate path to the other SP with a stable
backend.

With release 26 of FLARE, regardless of the Failover Mode for a given host, when the SP that owns the
LUN cannot access that LUN due to a back-end failure, I/O is redirected through the other SP by the
lower redirector. In this situation, the LUN is trespassed by FLARE to the SP that can access the LUN.
After the  failure is corrected, the LUN is trespassed back to the SP that previously owned the LUN.  See
the “Enabler for masking back-end failures” section for more information.   

Note: Information in this solution is taken from the White Paper "EMC CLARiiON  Asymmetric
Active/Active Feature"

For more information, refer to Primus article "emc202744".


Presenting VNX LUNs as read-only to the Veeam Windows host

I have a VNX5200, block only, no control station. I'm pretty new at this, so I take no responsibility for any damage you may do to your LUNs.

I figured out how to present LUNs as read-only to my Veeam Windows host.

To do it I created a second storage group and added my windows host to that storage group. I called
that storage group Veeam.

Using the Navisphere CLI, issue the following command.

naviseccli -h <SP_IP> storagegroup -addhlu -gname <storage_group_name> -hlu <host_lun_SCSIID> -alu <CX_lun_ID> -readonly

My command looked like this.

naviseccli -h 10.2.3.99 storagegroup -addhlu -gname Veeambackup -hlu 100 -alu 1 -readonly

I started off with test LUNs and test VMs.

After you issue the command, rescan the storage on the Windows host and rescan the proxy in Veeam. I put my Veeam proxy into SAN mode and backed up the test VM. It worked!

I also cannot initialize, delete, or do anything with the volumes in Windows. They truly are read-only.

I went ahead and tried this with some of my tier 3 LUNs/VMs and it worked. I ran a Veeam backup job against a VM that was not presented to the storage group and it failed, as expected, since the proxy was in SAN-only mode.

I have not tried this with any tier 2 or tier 1 servers yet; I was hoping someone could chime in on this, because the -readonly switch isn't really documented anywhere.
