Professional Documents
Culture Documents
SAN Connectivity
Switch Topology
Many vendors in the market manufacture FC switches, such as Brocade, Cisco, McDATA, and EMC Connectrix.
Zoning means “grouping a host HBA WWPN with storage front-end port WWPNs so that they can communicate with each other.”
Zoning Structure
Types of Zoning:
1. Soft Zoning
2. Hard Zoning
3. Mixed Zoning
Soft Zoning:
It uses the server HBA WWN and the storage front-end port WWN. It is also known as WWN zoning. A major advantage of WWN zoning is its flexibility: it allows the SAN to be re-cabled without reconfiguring the zone information.
Physical cabling between server, switch, and storage with a single path
Hard Zoning:
It uses the physical switch ports to which the server and storage are connected. It is also known as port zoning. A major disadvantage is that any change in the fabric configuration affects the zoning database.
Mixed Zoning:
It combines WWN zoning and port zoning, so a single zone can contain both WWN members and switch-port members.
Storage Vendors: EMC, NetApp, HP, Hitachi, IBM, Dell, Tintri, Oracle, etc.
The following terms come up again and again on any storage platform.
LUN: LUN stands for Logical Unit Number. It is a slice of space carved from the underlying drives.
Raid Group: A group of up to 16 drives of the same drive type from which LUNs are created.
Storage Pool: A collection of drives, of the same or different types, from which LUNs are created.
Masking: Making a particular LUN visible only to a particular host. In other words, a LUN is visible only to the storage group/host it is assigned to.
Storage Group: Essentially a container named after the host. A storage group is a collection of one or more LUNs (or meta LUNs) to which you connect one or more servers.
Meta LUN: The meta LUN feature allows traditional LUNs to be aggregated in order to increase the size or performance of the base LUN. The LUN is expanded by adding other LUNs to it. The LUNs that make up a meta LUN are called meta members, and the base LUN is known as the meta head. We can add up to 255 meta members to 1 meta head (256 LUNs in total).
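The 255-member limit above is easy to get wrong off by one, so here it is as a tiny check. This is only an illustrative sketch; the helper name is mine, not a Navisphere/Unisphere API:

```python
# One meta head plus up to 255 meta members = at most 256 component LUNs.
# Illustrative helper only (not an EMC API).
MAX_META_MEMBERS = 255

def can_add_member(current_members):
    """True if another meta member can still be attached to the meta head."""
    return current_members < MAX_META_MEMBERS

print(can_add_member(254))  # True: this would be the 255th member
print(can_add_member(255))  # False: head + 255 members = 256 LUNs already
```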
Access Logix: Access Logix provides LUN masking, which allows a storage system to be shared among multiple hosts.
PSM: The Persistent Storage Manager LUN stores configuration information about the VNX/CLARiiON, such as disks, RAID groups, LUNs, Access Logix information, and the SnapView, MirrorView, and SAN Copy configurations.
Example FLARE code levels:
1.14.600.5.022 (32 Bit)
2.16.700.5.031 (32 Bit)
2.24.700.5.031 (32 Bit)
3.26.020.5.011 (32 Bit)
4.28.480.5.010 (64 Bit)
The first digit (1, 2, 3, or 4) indicates the generation of machine the code level can be installed on. For the 1st and 2nd generations of machines (CX600 and CX700), you should be able to use standard 2nd-generation code levels. CX3 code levels have a 3 in front of them, and so forth.
These numbers will always increase as new Generations of VNX/Clariion machines are added.
The next two digits are the release number. The release number is very important: it tells you which features of the VNX/CLARiiON FLARE Operating Environment are available. When someone says, "my VNX/CLARiiON CX3 is running FLARE 26," this is the number they mean. These numbers always increase, 28 being the latest FLARE code version.
The next three digits are the model number of the VNX/CLARiiON, such as CX600, CX700, CX3-20, or CX4-480. These digits vary widely depending on the model of your array.
The 5 is a legacy digit carried across from previous FLARE releases. Going back to the pre-CX days (the FC series), this 5 was already in use; I believe it was an internal code at Data General indicating a FLARE release.
The last three digits are the patch level of the FLARE environment. This is the last known compilation of the code for that FLARE version.
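Putting the five fields together, a revision string can be decoded mechanically. A minimal sketch, with field names of my own choosing (they are not EMC terminology):

```python
# Decode a FLARE code level such as '4.28.480.5.010' using the field
# meanings described above. Field names here are illustrative.
def parse_flare_revision(rev):
    generation, release, model, legacy, patch = rev.split(".")
    return {
        "generation": int(generation),  # 1/2 = CX600/CX700, 3 = CX3, 4 = CX4
        "release": int(release),        # FLARE release, e.g. 26 or 28
        "model": model,                 # array model digits, e.g. 480 for CX4-480
        "legacy": legacy,               # the historical '5' from pre-CX FLARE
        "patch": patch,                 # patch/compilation level
    }

parts = parse_flare_revision("4.28.480.5.010")
print(parts["generation"], parts["release"])  # 4 28 -> a CX4 running FLARE 28
```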
Log in to Unisphere with the specific IP address and authorized user credentials.
Slide 1
Slide 2
Fill in all the fields with the required information, click Apply, and then click OK.
Slide 3
Select the newly created LUN and then select the Add to Storage Group option at the bottom right of the window.
Slide 4
Select the specific storage group (host) listed in the Available Storage Groups column and click the right arrow; the selected storage group moves to the Selected Storage Groups column. Then click OK.
Slide 5
Now pass this information to the platform team so they can check the visibility of the LUN.
Procedure:
The platform team will raise a request for a LUN expansion activity. The workflow follows the pattern below:
When a request is raised to expand a LUN, first check the prerequisites, namely the LUN NAA ID and LUN name.
Select the specific LUN, open its Properties, and verify the LUN NAA ID and LUN name.
Once verified, right-click the LUN and select the Expand option.
Enter the amount of space by which the LUN should be expanded and click OK.
Unisphere image
Creation Window
Storage Pool:
It is a physical collection of disks on which logical units (LUNs) are created.
Pools are dedicated for use by pool (thin or thick) LUNs. Whereas a RAID group can contain at most 16 disks, a pool can contain hundreds of disks.
Go to the Storage tab in the menu bar and select the “Storage Pool” option.
A pop-up window will open. Fill in all the fields, such as the storage pool name and ID, choose the type of pool you need (Extreme Performance, Performance, or Capacity), select the Automatic disk selection option, and click OK.
Whenever a new server is deployed in the environment, the platform team, whether from the Windows, Linux, or Solaris domain, will contact you for free ports on the switch (for example, a Cisco switch) to connect the server to the storage.
http://www.sanadmin.net/2015/12/putty-download.html
Once logged in, we check the free port details from the switch CLI.
Note: As storage admins, we also need to know the server's HBA details. Based on that information, we fetch the free port details on the two switches.
We then share the free port details with the platform team, who contact the data-center folks to lay the physical cabling between the new server and the switches.
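The free-port check itself is simple bookkeeping; here is a toy sketch of it. The port names and the helper are invented for illustration; in practice the data comes from the switch CLI:

```python
# Toy bookkeeping for the free-port check: given the switch's ports and
# the ones already cabled, pick ports for the new server's HBAs.
def free_ports(all_ports, used_ports, hba_count):
    free = [p for p in all_ports if p not in set(used_ports)]
    if len(free) < hba_count:
        raise ValueError("not enough free switch ports for this host")
    return free[:hba_count]

print(free_ports(["fc1/1", "fc1/2", "fc1/3", "fc1/4"],
                 ["fc1/1", "fc1/3"], 2))  # ['fc1/2', 'fc1/4']
```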
Once the cabling is complete, the platform team will ask us to perform the zoning.
Zoning:
Grouping a host HBA WWPN with storage front-end port WWPNs so that they can communicate with each other.
For more details about zoning, refer to the link below.
http://www.sanadmin.net/2015/11/cisco-zoning-procedure-with-commands.html
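As a rough illustration of what such a procedure builds, single-initiator/single-target zoning on a Cisco MDS groups one host HBA WWPN with one storage front-end port WWPN. The zone/zoneset names, the VSAN number, and the WWPNs below are made up; follow the linked procedure for the authoritative command flow:

```python
# Sketch of single-initiator/single-target zoning on a Cisco MDS:
# one host HBA WWPN grouped with one storage front-end port WWPN.
def cisco_zone_commands(zone, vsan, host_wwpn, storage_wwpn, zoneset):
    return "\n".join([
        f"zone name {zone} vsan {vsan}",
        f"  member pwwn {host_wwpn}",
        f"  member pwwn {storage_wwpn}",
        f"zoneset name {zoneset} vsan {vsan}",
        f"  member {zone}",
        f"zoneset activate name {zoneset} vsan {vsan}",
    ])

print(cisco_zone_commands("NEWHOST_HBA0_VNX_SPA0", 10,
                          "10:00:00:00:c9:aa:bb:cc",
                          "50:06:01:60:11:22:33:44",
                          "FABRIC_A_ZS"))
```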
Once the zoning is completed, check the initiator status by logging in to Unisphere (on the VNX storage array).
Search for the host for which you performed the zoning activity and verify the host name, IP address, and the host's HBA WWPN and WWNN numbers.
In the Initiators window, check the Registered and Logged In columns. If "Yes" appears in both columns, the zoning is correct and the host is connected to the storage box.
If "Yes" appears in one column and "No" in the other, the zoning is not correct. Re-check the zoning steps; if they are correct and the issue persists, check the host WWPN and WWNN and the cable connectivity.
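The Registered / Logged In check above boils down to a small decision table. Purely illustrative; the function and messages are mine, not a Unisphere API:

```python
# Decision helper mirroring the Registered / Logged In check above.
def zoning_status(registered, logged_in):
    if registered and logged_in:
        return "OK: host is connected to the storage box"
    if registered != logged_in:
        return "Check zoning; if zoning is correct, check WWPN/WWNN and cabling"
    return "Host not visible: verify zoning and physical connectivity"

print(zoning_status(True, True))
print(zoning_status(True, False))
```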
Now we have to create a New Storage Group for the New Host.
Name the storage group something you can identify, and click OK to complete the task.
Before creating a LUN of a specified size, check the prerequisites:
Check the availability of free space in the storage pool from which you are going to create the LUN. If free space is not available in that storage pool, escalate to your reporting manager or seniors.
Fill in all the fields, such as the storage pool, LUN capacity, number of LUNs to be created, and the LUN name, and specify whether the LUN is thick or thin.
Now we have to add the newly created LUN to the newly created Storage Group (Masking).
To learn more about storage terminology, refer to the link below:
http://www.sanadmin.net/2015/12/storage-terminology.html
In the LUN creation page, there is an option called "Add to Storage Group" at the bottom of the page.
Two columns will appear on the page: "Available Hosts" and "Connected Hosts".
Select the new storage group in the Available Hosts column and click the right arrow; the host will appear in the Connected Hosts column. Then click OK.
Inform the platform team that the LUN has been assigned to the host (attach a screenshot of the page) and ask them to rescan the disks at the platform level.
Go to the Storage tab and select the LUN which you want to migrate.
Select the targeted destination LUN (LUN 149 in this example) from the Available Destination LUN field.
NOTE: The Migration Rate drop-down menu offers four levels of service: ASAP, High, Medium, and Low.
Click OK
YES
DONE
Note: During migration, the destination LUN is shown under 'Private LUNs'.
IMP: Only one LUN migration per array at any given time. The target LUN must be equal to or greater than the source LUN in size.
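The two prerequisites from the IMP note above can be expressed as a pre-flight check. Function and argument names are illustrative, not a Navisphere API:

```python
# Pre-flight check for the migration rules stated above: at most one
# migration per array at a time, and target size >= source size.
def can_migrate(source_gb, target_gb, migrations_in_progress):
    if migrations_in_progress >= 1:
        return False, "only one LUN migration per array at a time"
    if target_gb < source_gb:
        return False, "target LUN must be equal to or larger than the source"
    return True, "ok"

print(can_migrate(100, 150, 0))  # (True, 'ok')
print(can_migrate(100, 50, 0))   # rejected: target smaller than source
```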
The main purpose of collecting SP Collects logs is to analyze the storage array's performance.
They also let us find errors such as disk soft media errors, hardware failures, and so on.
On the right side of the main screen you can see the Wizards column.
Also select Generate Diagnostic Files – SP B to generate the logs for storage processor B, as shown above.
Likewise, select Get Diagnostic Files – SP B to retrieve the logs, as shown above.
A page will open showing the logs being generated. Sort the logs by date range; your log file will be named XXX.runlog.txt.
SP File Transfer Manager
It will take 10–15 minutes to gather the logs for each SP.
Once complete, the log file is converted from runlog.txt to data.zip, as shown below.
Transfer the file to your desired location and upload the logs to the EMC support team for analysis of the array's performance.
This will display the files that can be moved from the SP to the Management Station.
Example:
Enter files to be retrieved with index separated by comma (1,2,3,4,5) OR by a range (1-3) OR enter 'all'
to retrieve all file OR 'quit' to quit> 13
This will pull index number 13 from the corresponding SP and copy it to the c:\program files\emc\navisphere cli directory with a filename of SPA__APM00023000437_9c773_05-27-2004_46_data.zip.
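The index prompt above accepts comma-separated indices, a dash range, or 'all'. A sketch of how such an entry could be interpreted, written from the prompt text alone (the real SP File Transfer Manager's parser may differ):

```python
# Interpret a file-selection entry: '1,2,5', a range '1-3', or 'all'.
def parse_selection(entry, available):
    entry = entry.strip().rstrip(".")
    if entry == "all":
        return list(available)
    if "-" in entry:
        lo, hi = map(int, entry.split("-"))
        return [i for i in available if lo <= i <= hi]
    return [int(x) for x in entry.split(",")]

print(parse_selection("1-3", range(1, 6)))   # [1, 2, 3]
print(parse_selection("13", range(1, 20)))   # [13]
```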
Upload the file or files to an FTP site as directed by EMC Support.
Notification alerts are a pro-active step to get ahead of future hardware or software failures.
Whenever a change occurs in the storage array, whether it is a critical, informational, or warning message, an alert notification is triggered to your specified email address or to a group of members.
A new window will open. In the "General" tab, click the "Advanced" option to select which event codes (Critical Error, Error, Warning, Information) should trigger alerts to your email address.
Go to E-Mail tab and specify all the parameters for the email notification.
Description %E%
Company Name:
Contact Name:
Contact Phone Number:
Additional Comments:
Once all the parameters are specified, click Test to verify that the email notification alerts are delivered to your specified email address.
.NAR Files
Whenever we face performance issues at the server, storage, or switch level, we generate .NAR files and upload them to EMC. The performance team analyzes the files and provides recommendations to resolve the issue.
Check the status of Data Logging. If the status is Stopped, start it; we then have to wait 24–48 hours for the .NAR file to be generated.
If the status is Started, go to the Retrieve Archives option under the Archive Management column.
Select the file, browse to the desired location, and click OK.
Product:
Description:
How does trespassing work using ALUA (failover mode 4) on a VNX/CLARiiON storage system?
Resolution:
Since FLARE 26, Asymmetric Active/Active has provided a new way for CLARiiON arrays to present LUNs to hosts, eliminating the need for hosts to deal with the LUN ownership model. Prior to FLARE 26, all CLARiiON arrays used the standard active/passive presentation model, in which one SP "owns" the LUN and all I/O to that LUN is sent only to that SP. If all paths to that SP failed, ownership of the LUN was 'trespassed' to the other SP and the host-based path-management software adjusted the I/O path accordingly.
Asymmetric Active/Active introduces a new initiator Failover Mode (Failover mode 4) where initiators
are permitted to send I/O to a LUN regardless of which SP actually owns the LUN.
Manual trespass:
When a manual trespass is issued (using Navisphere Manager or CLI) to a LUN on an SP that is accessed by a host with failover mode 1, subsequent I/O for that LUN is rejected on the SP where the manual trespass was issued. The failover software redirects I/O to the SP that owns the LUN.
A manual trespass operation changes the ownership of a LUN from one SP to the other. If that LUN is accessed by an ALUA host (failover mode 4) and I/O is sent to the SP that does not currently own the LUN, the I/O is redirected. In that situation, the array changes the ownership of the LUN based on how many I/Os the LUN processes on each SP (a threshold of about 64,000 I/Os).
If a host is configured with Failover Mode 1 and all the paths to the SP that owns a LUN fail, the LUN is
trespassed to the other SP by the host’s failover software.
With failover mode 4, in the case of a path, HBA, or switch failure, when I/O routes to the non-owning SP, the LUN may not trespass immediately (depending on the failover software on the host). If the LUN is not trespassed back to the owning SP, FLARE will trespass the LUN to the SP that receives the most I/O requests for it. This is accomplished by the array keeping track of how many I/Os the LUN processes on each SP: if the non-optimal SP processes 64,000 or more I/Os in excess of the optimal SP, the array changes ownership to the non-optimal SP, making it the optimal one.
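The ownership decision described above reduces to comparing per-SP I/O counters against the threshold. A toy model only; FLARE's real bookkeeping is internal to the array:

```python
# Trespass when the non-owning SP has processed 64,000 or more I/Os
# beyond the owning SP, per the threshold described above.
TRESPASS_THRESHOLD = 64000

def should_trespass(ios_on_owning_sp, ios_on_non_owning_sp):
    return ios_on_non_owning_sp - ios_on_owning_sp >= TRESPASS_THRESHOLD

print(should_trespass(1000, 66000))  # True  -> ownership moves
print(should_trespass(1000, 2000))   # False -> owner keeps the LUN
```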
SP failure
In case of an SP failure for a host configured as Failover Mode 1, the failover software trespasses the
LUN to the surviving SP.
With Failover Mode 4, if an I/O arrives from an ALUA initiator on the surviving SP (non-optimal), FLARE
initiates an internal trespass operation. This operation changes ownership of the target LUN to the
surviving SP since its peer SP is dead. Hence, the host (failover software) must have access to the
secondary SP so that it can issue an I/O under these circumstances.
Before FLARE Release 26, if the failover software was misconfigured (for example, a single attach
configuration), a single back-end failure (for example, an LCC or BCC failure) would generate an I/O error
since the failover software would not be able to try the alternate path to the other SP with a stable
backend.
With release 26 of FLARE, regardless of the Failover Mode for a given host, when the SP that owns the
LUN cannot access that LUN due to a back-end failure, I/O is redirected through the other SP by the
lower redirector. In this situation, the LUN is trespassed by FLARE to the SP that can access the LUN.
After the failure is corrected, the LUN is trespassed back to the SP that previously owned the LUN. See
the “Enabler for masking back-end failures” section for more information.
Note: Information in this solution is taken from the White Paper "EMC CLARiiON Asymmetric
Active/Active Feature"
I have a VNX5200, block only, with no control station. I'm pretty new at this, so I take no responsibility for any damage you may do to your LUNs.
I figured out how to present LUNs as read-only to my Veeam Windows host.
To do it, I created a second storage group and added my Windows host to it. I called that storage group Veeam.
After you issue the command, rescan the storage on the Windows host and rescan the proxy in Veeam. I put my Veeam proxy into SAN mode and backed up the test VM. It worked!
I also cannot initialize, delete or do anything with the volumes in windows. They truly are read only.
I went ahead and tried this with some of my tier-3 LUNs/VMs and it worked. I ran a Veeam backup job against a VM that was not presented to the storage group and it failed, as expected, since the proxy was in SAN-only mode.
I haven't tried this with any tier-2 or tier-1 servers yet; I was hoping someone could chime in, because the -readonly switch isn't really documented anywhere.