
LUN Provisioning for a New Server

            Whenever a new server is deployed in the environment, the platform team (whether it is the
Windows, Linux, or Solaris domain) will contact you for free ports on the switch (for
example, a Cisco switch) to connect the server to the storage.

We log in to the switch with authorized credentials via PuTTY.

To download PuTTY, please use the link below:

 http://www.sanadmin.net/2015/12/putty-download.html

Once logged in, we check the free port details by using the following command:

Switch # sh interface brief
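A couple of related commands are handy at this stage (standard Cisco MDS commands; the exact output format varies by NX-OS/SAN-OS release):

Switch # show interface brief      (ports showing a status such as notConnected or sfpAbsent are candidates for free ports)

Switch # show flogi database       (lists the WWPNs already logged in, so you do not hand out a port that is in use)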


 
Note:  As a storage admin, you also need to know the server HBA details. Based on that
information, identify the free ports on the two switches.
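If the HBA WWPNs have to be collected from the host side, they can usually be read directly from the operating system; on a Linux server, for example, they are exposed through sysfs (the output value below is illustrative only):

# cat /sys/class/fc_host/host*/port_name
0x21000024ff0dfcfb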

Share the free port details with the platform team; the platform team will then contact the data
center folks to lay the physical cabling between the new server and the switches.

Note:  The Storage ports are already connected to the switches.

Once the cabling is complete, the platform team will ask us to perform zoning.

Zoning:

Zoning groups a host HBA WWPN with the storage front-end port WWPNs so that they can
communicate with each other.

All the commands should be run in Configuration Mode.

SwitchName # config t      (configuration terminal)

SwitchName # zone name ABC vsan 2

SwitchName – zone # member pwwn 50:06:01:60:08:60:37:fb

SwitchName – zone # member pwwn 21:00:00:24:ff:0d:fc:fb

SwitchName – zone # exit

SwitchName # zoneset name XYZ vsan 2


SwitchName – zoneset # member ABC

SwitchName – zoneset # exit

SwitchName # zoneset activate name XYZ vsan 2
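After activating the zone set, it is good practice to verify the result and save the configuration (standard Cisco MDS commands):

SwitchName # show zoneset active vsan 2

SwitchName # copy running-config startup-config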

For more details about zoning, please refer to the link below.

http://www.sanadmin.net/2015/11/cisco-zoning-procedure-with-commands.html

Once zoning is complete, we have to check the initiator status by logging in to
Unisphere (on the VNX storage array).

The procedure is as follows:

Go to the Host Tab and select the Initiators tab.

Search for the host for which you have done the zoning activity.

Verify the host name, IP address, and the host HBA WWPN and WWNN numbers.

In the Initiators window, check the Registered and Logged In columns. If both columns show
"Yes", the zoning is correct and the host is connected to the storage array.

If one column shows "Yes" and the other shows "No", the zoning is not correct.
Re-check the zoning steps; if they are correct and the issue persists, check the host WWPN
and WWNN and the cable connectivity.
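If you prefer the CLI to Unisphere, the same initiator information can usually be pulled with naviseccli (assuming the Navisphere/Unisphere CLI is installed; the exact switches may vary by release):

naviseccli -h <SP_IP_address> port -list -hba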

Now we have to create a New Storage Group for the New Host.

The procedure is as follows:

Go to Host Tab and select the Storage group option.

Click on the Create option to create a new storage group.

Give the storage group an identifiable name and click OK to complete the task.

Before creating a LUN of the required size, check the prerequisites:

Verify that enough free space is available in the storage pool from which you are going to create the
LUN.
If free space is not available in that storage pool, inform your reporting manager or your
seniors.
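Free pool capacity can be checked from Unisphere or, as a quick CLI alternative, with naviseccli (a sketch; the exact syntax can vary by VNX release):

naviseccli -h <SP_IP_address> storagepool -list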

Now we will create a LUN of the specified size.

Login to the VNX Unisphere.

Go to Storage Tab and select the LUN option.

Fill in all the fields: storage pool, LUN capacity, number of LUNs to be created, name of the
LUN, and whether the LUN is thick or thin.

Click OK to complete the LUN creation task.
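If many LUNs have to be created, the same task can be scripted with naviseccli; the command below is only a sketch (the pool name, LUN name, and size are placeholders, and switches can vary by release):

naviseccli -h <SP_IP_address> lun -create -type Thin -capacity 100 -sq gb -poolName "Pool_1" -name "NewServer_LUN_01"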

Now we have to add the newly created LUN to the newly created Storage Group (Masking). 

To learn more about storage terminology, refer to the link below:

http://www.sanadmin.net/2015/12/storage-terminology.html

On the LUN creation page, there is an option labeled "Add to Storage Group" at the
bottom of the page.

Click on it and a new page will open.

Two columns appear on the page: "Available Host" and "Connected Host".

Select the new storage group in the Available Host column and click the right arrow; it will
appear in the Connected Host column. Then click OK.

Inform the platform team that the LUN has been assigned to the host, attaching a screenshot of the
page, and ask them to rescan the disks at the platform level.
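For reference, the masking step can also be done from the CLI. A sketch of the equivalent naviseccli commands (the storage group name, host name, and HLU/ALU numbers below are placeholders):

naviseccli -h <SP_IP_address> storagegroup -connecthost -host <host_name> -gname "NewServer_SG"

naviseccli -h <SP_IP_address> storagegroup -addhlu -gname "NewServer_SG" -hlu 0 -alu 25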

Initial Switch Configuration


                         Once the Customer Engineer (CE) has racked the switch and all the cabling is
done, the next task is the switch configuration.

Configuration Prerequisites

Before you configure a switch in the Cisco MDS 9000 Family for the first time, make sure you
have the following information:

Administrator password.

Switch name. This name is also used as your switch prompt.

IP address for the switch’s management interface.

Subnet mask for the switch's management interface.

IP address of the default gateway.

The network team will assist you in getting the IP address and subnet mask details.

Procedure to configure switch

1.         Verify the physical connections for the new Cisco MDS 9000 Family switch.

2.         Power on the switch. The switch boots automatically.

Note: If the switch boots to the loader> or switch (boot) prompt, contact your storage vendor
support organization for technical assistance.

After powering on the switch, you see the following output:

General Software Firmbase[r] SMM Kernel 1.1.1002 Aug 6 2003 22:19:14
Copyright (C) 2002 General Software, Inc.

Firmbase initialized.

00000589K Low Memory Passed
01042304K Ext Memory Passed
Wait.....

General Software Pentium III Embedded BIOS 2000 (tm) Revision 1.1.(0)
(C) 2002 General Software, Inc.
Pentium III-1.1-6E69-AA6E

+------------------------------------------------------------------------------+
| System BIOS Configuration, (C) 2002 General Software, Inc.                    |
+---------------------------------------+--------------------------------------+
| System CPU         : Pentium III      | Low Memory      : 630KB              |
| Coprocessor        : Enabled          | Extended Memory : 1018MB             |
| Embedded BIOS Date : 10/24/03         | ROM Shadowing   : Enabled            |
+---------------------------------------+--------------------------------------+

Loader Loading stage1.5.
Loader loading, please wait...
Auto booting bootflash:/m9500-sf1ek9-kickstart-mz.2.1.1a.bin
bootflash:/m9500-sf1ek9-mz.2.1.1a.bin...
Booting kickstart image: bootflash:/m9500-sf1ek9-kickstart-
mz.2.1.1a.bin...................Image verification OK

Starting kernel...
INIT: version 2.78 booting
Checking all filesystems..... done.
Loading system software
Uncompressing system image: bootflash:/m9500-sf1ek9-mz.2.1.1a.bin
CCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCC
INIT: Entering runlevel: 3

3.         Make sure you enter the password you wish to assign for the admin username.

Tip: If you create a password that is short and easy to guess, it will be rejected.
Be sure to configure a strong password. Passwords are case-sensitive.

4.         Enter yes to enter setup mode.

This setup utility will guide you through the basic configuration
of the system. Setup configures only enough connectivity for
management of the system.
*Note: setup is mainly used for configuring the system initially,
when no configuration is present. So setup always assumes system
defaults and not the current system configuration values.
Press Enter at any time to skip a dialog. Use ctrl-c at anytime
to skip the remaining dialogs.

Would you like to enter the basic configuration dialog


(yes/no): yes

The switch setup utility guides you through the basic configuration process. Press Ctrl-C at any
prompt to end the configuration process.

5.         Enter no (no is the default) to not create any additional accounts.

Create another login account (yes/no) [n]: no

6.         Enter no (no is the default) to not configure any read-only SNMP community strings.
Configure read-only SNMP community string (yes/no) [n]: no

7.         Enter no (no is the default) to not configure any read-write SNMP community strings.

Configure read-write SNMP community string (yes/no) [n]: no

8.         Enter a name for the switch.

Note: The switch name is limited to 32 alphanumeric characters. The default is switch.

Enter the switch name: switch_name

9.         Enter yes (yes is the default) to configure the out-of-band management configuration.

Continue with Out-of-band (mgmt0) management configuration?


(yes/no) [y]: yes

a.         Enter the IP address for the mgmt0 interface.

Mgmt0 IP address: mgmt_IP_address

b.         Enter the netmask for the mgmt0 interface in the xxx.xxx.xxx.xxx format.

Mgmt0 IP netmask : xxx.xxx.xxx.xxx

10.       Enter yes (yes is the default) to configure the default gateway (recommended).

Configure the default-gateway: (yes/no) [y]: yes

11.       Enter the default gateway IP address.

IP address of the default-gateway: default_gateway

12.       Enter no (no is the default) to configure advanced IP options such as in-band
management, static routes, default network, DNS, and domain name.

Configure Advanced IP options (yes/no)? [n]: no

13.       Enter yes (yes is the default) to enable Telnet service.


Enable the telnet service? (yes/no) [y]: yes

14.       Enter no (no is the default) to not enable the SSH service.

Enable the ssh service? (yes/no) [n]: no

15.       Enter no (no is the default) to not configure the NTP server.

Configure the ntp server? (yes/no) [n]: no

16.       Enter noshut (shut is the default) to configure the default switch port interface to the
noshut state.

Configure default switchport interface state (shut/noshut)


[shut]: noshut

17.       Enter on (on is the default) to configure the switch port trunk mode.

Configure default switchport trunk mode (on/off/auto) [on]: on

18.       Enter deny (deny is the default) to configure a default zone policy configuration.

Configure default zone policy (permit/deny) [deny]: deny

This step denies traffic flow for all members of the default zone.

19.       Enter yes (no is the default) to enable a full zone set distribution (refer to the Cisco MDS
9000 Family Configuration Guide).

Enable full zoneset distribution (yes/no) [n]: yes

You see the new configuration. Review and edit the configuration that you have just entered.

20.       Enter no (no is the default) if you are satisfied with the configuration.

The following configuration will be applied:


switchname switch_name
interface mgmt0
ip address mgmt_IP_address
subnetmask mgmt0_ip_netmask
no shutdown
ip default-gateway default_gateway
telnet server enable
no ssh server enable
no system default switchport shutdown
system default switchport trunk mode on
no zone default-zone permit vsan 1-4093
zoneset distribute full vsan 1-4093
Would you like to edit the configuration? (yes/no) [n]: no

21.       Enter yes (yes is the default) to use and save this configuration.

Use this configuration and save it? (yes/no) [y]: yes
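Once the setup utility finishes, the configuration can be reviewed and preserved with the usual commands:

switch_name # show running-config

switch_name # copy running-config startup-config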

------------------------------------------------------

To learn about switch installation, click the URL below:

http://www.sanadmin.net/2015/11/fibre-channel-switch-installation.html

Brocade
Brocade Communications Systems, Inc. is an American technology company specializing in
data and storage networking products. Originally known for its leadership in Fibre
Channel storage networks, the company has expanded its focus to include a wide range of
products for New IP and Third platform technologies.

Brocade was founded in August 1995 by Seth Neiman (a venture capitalist, former executive
from Sun Microsystems, and professional auto racer) and Kumar Malavalli (a co-author of
the Fibre Channel specification).

The company's first product, SilkWorm, which was a Fibre Channel Switch, was released in
early 1997. A second generation of switches was announced in 1999.

On January 14, 2013, Brocade named Lloyd Carney as its new chief executive officer.

Brocade FC switches come in many models with different port counts; the details are listed below.
List of Brocade FC switches

Work flow for zoning activity

  The platform team will inform you that they are going to provision a new server in the
environment and will request the free port details on the switches that exist in the
data center.

 Once you share the information with the platform team, they coordinate with the data center team to
lay the cables between the server and the switch. (The storage ports or tape library are already
connected to the switch.)

 After the cables are laid, the platform team will request you to check the connectivity and will
share the server HBA WWPNs so you can verify them against what is connected.
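On a Brocade switch, the quickest way to verify the new connections is to compare the WWPNs shared by the platform team with what the switch actually sees (both commands appear in the command table later in this section):

switchshow      (port status and the WWPN logged in on each port)

nsshow          (local name server entries, including device WWPNs)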

Physical cabling between Server and storage through Switch with Single path
Physical cabling between Server and storage through Switch with Multipath

Zoning can be done in seven simple steps; the pictorial diagram is as follows.

Steps to perform zoning


Zoning steps:-

1. Identify the WWPNs of the server HBA and the storage HBA.

2. Create aliases for the server and storage HBAs by using the command

     alicreate

3. Create zones for the server and storage by using the command

     zonecreate

4. Check whether an active configuration is already present by using the command

      cfgactvshow

5. If an active configuration already exists, just add the zone to it by using the command (see the example after the zoning commands below)

      cfgadd

6. If there is no active configuration, create a new one by using the command

      cfgcreate

7. Save it and enable it (cfgsave and cfgenable).

Please find the example for zoning:

alicreate "server_hba","11:11:11:11:11:11:11:11"

alicreate "storage_hba","22:22:22:22:22:22:22:22"

zonecreate "server_hba-storage_hba","server_hba; storage_hba"

cfgcreate "cfg_switch1","server_hba-storage_hba"

cfgenable "cfg_switch1"

cfgsave
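If an active configuration already exists (step 5 above), the new zone is simply added to it and the configuration re-enabled; a sketch reusing the same names from the example above:

cfgactvshow

cfgadd "cfg_switch1","server_hba-storage_hba"

cfgsave

cfgenable "cfg_switch1"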

Brocade switches can be managed through both the web interface and the CLI; the table below lists some,
but not all, of the CLI commands.

help            prints available commands

switchdisable   disable the switch
switchenable    enable the switch
licensehelp     license commands
diaghelp        diagnostic commands
configure       change switch parameters (BB credits, etc.)
diagshow        POST results since last boot
routehelp       routing commands
switchshow      display switch status (normally the first command to run to obtain the switch configuration)
supportshow     full detailed switch info
portshow #      display port info
nsshow          name server contents
nsallshow       NS for the full fabric
fabricshow      fabric information
version         firmware code revision
reboot          full reboot with POST
fastboot        reboot without POST

B-Series (Brocade) zoning commands are detailed in the table below.

zonecreate (zone) create a zone


zoneshow shows defined and effective zones and configurations
zoneadd adds a member to a zone
zoneremove removes a member from a zone
zonedelete delete a zone
cfgcreate (zoneset) create a zoneset configuration
cfgadd adds a zone to a zone configuration
cfgshow display the zoning information
cfgenable enable a zone set
cfgsave saves defined config to all switches in fabric across reboots
cfgremove removes a zone from a zone configuration
cfgdelete deletes a zone configuration
cfgclear clears all zoning information (must disable the effective config first)
cfgdisable disables the effective zone set

B-series creating a zone commands

Create a zone by WWN            zonecreate "zone1", "20:00:00:e0:69:40:07:08; 50:06:04:82:b8:90:c1:8d"
Create a zone configuration     cfgcreate "test_cfg", "zone1 ; zone2"
Save the zone configuration     cfgsave (this will save across reboots)
Enable the zone configuration   cfgenable "test_cfg"
View zoning information         zoneshow or cfgshow

aliAdd Add a member to a zone alias


aliCopy Copy a zone alias
aliCreate Create a zone alias
aliDelete Delete a zone alias
aliRemove Remove a member from a zone alias
aliRename Rename a zone alias
aliShow Print zone alias information

cfgAdd Add a member to a configuration


cfgCopy Copy a zone configuration
cfgCreate Create a zone configuration
cfgDelete Delete a zone configuration
cfgRemove Remove a member from a configuration
cfgRename Rename a zone configuration
cfgShow Print zone configuration information

zoneAdd Add a member to a zone


zoneCopy Copy a zone
zoneCreate Create a zone
zoneDelete Delete a zone
zoneRemove Remove a member from a zone
zoneRename Rename a zone
zoneShow Print zone information

cfgClear Clear all zone configurations


cfgDisable Disable a zone configuration
cfgEnable Enable a zone configuration
cfgSave Save zone configurations in flash
cfgSize Print size details of zone database
cfgActvShow Print effective zone configuration
cfgTransAbort Abort zone configuration transaction
------------------------------------------------------------------------------

Troubleshooting Zone Configuration Issues with the CLI


 

Troubleshooting commands
Note: To issue commands with the internal keyword, you must have a network-admin group
account.

Example for Full Zoning Analysis

Switch # show zone analysis vsan 1


Zoning database analysis vsan 1
Full zoning database
Last updated at: 15:57:10 IST Feb 20 2006
Last updated by: Local [ CLI ]
Num zonesets: 1
Num zones: 1
Num aliases: 0
Num attribute groups: 0
Formattted size: 36 bytes / 2048 Kb
Unassigned Zones: 1
zone name z1 vsan 1

Example for Active Zoning Database Analysis

Switch # show zone analysis active vsan 1


Zoning database analysis vsan 1
Active zoneset: zs1 [*]
Activated at: 08:03:35 UTC Nov 17 2005
Activated by: Local [ GS ]
Default zone policy: Deny
Number of devices zoned in vsan: 0/2 (Unzoned: 2)
Number of zone members resolved: 0/2 (Unresolved: 2)
Num zones: 1
Number of IVR zones: 0
Number of IPS zones: 0
Formattted size: 38 bytes / 2048 Kb

Example for Zone Set Analysis

Switch # show zone analysis zoneset zs1 vsan 1


Zoning database analysis vsan 1
Zoneset analysis: zs1
Num zonesets: 1
Num zones: 0
Num aliases: 0
Num attribute groups: 0
Formattted size: 20 bytes / 2048 Kb

Resolving Host Not Communicating with Storage Using the CLI

To verify that the host is not communicating with storage using the CLI, follow these steps:
1.         Verify that the host and storage device are in the same VSAN.

2.         Configure zoning, if necessary, by using the show zone status vsan-id command to determine if
the default zone policy is set to deny.

Switch # show zone status vsan 1

VSAN: 1 default-zone: deny distribute: active only Interop: default
mode: basic merge-control: allow session: none
hard-zoning: enabled
Default zone:
qos: low broadcast: disabled ronly: disabled
Full Zoning Database :
Zonesets:0 Zones:0 Aliases: 0
Active Zoning Database:
Name: Database Not Available
Status:

The default zone policy of permit means all nodes can see all other nodes. Deny means all nodes
are isolated when not explicitly placed in a zone.

3.         Use the show zone member command for host and storage device to verify that they
are both in the same zone.

4.         Use the show zoneset active command to determine whether the zone from Step 3 and the
host and disk appear in the active zone set.

Switch # show zoneset active vsan 2

zoneset name ZoneSet3 vsan 2


zone name Zone5 vsan 2
pwwn 10:00:00:00:77:99:7a:1b
pwwn 21:21:21:21:21:21:21:21

5.   If there is no active zone set, use the zoneset activate command to activate the zone set.

Switch (config) # zoneset activate name ZoneSet1 vsan 2

6.         Verify that the host and storage can now communicate.

Resolving Host and Storage Not in the Same Zone Using the CLI

To move the host and storage device into the same zone using the CLI, follow these steps:
1.         Use the zone name zonename vsan-id command to create a zone in the VSAN if
necessary, and add the host or storage into this zone.

Switch (config) # zone name NewZoneName vsan 2


Switch (config-zone) # member pwwn 22:35:00:0c:85:e9:d2:c2
Switch (config-zone) # member pwwn 10:00:00:00:c9:32:8b:a8

Note:   The pWWNs for zone members can be obtained from the device or by issuing the show
flogi database vsan-id command.

2.             Use the show zone command to verify that host and storage are now in the same zone.

Switch # show zone

zone name NewZoneName vsan 2


pwwn 22:35:00:0c:85:e9:d2:c2
pwwn 10:00:00:00:c9:32:8b:a8

zone name Zone2 vsan 4


pwwn 10:00:00:e0:02:21:df:ef
pwwn 20:00:00:e0:69:a1:b9:fc

zone name zone-cc vsan 5


pwwn 50:06:0e:80:03:50:5c:01
pwwn 20:00:00:e0:69:41:a0:12
pwwn 20:00:00:e0:69:41:98:93

3.         Use the show zoneset active command to verify that you have an active zone set. If you
do not have an active zone set, use the zoneset activate command to activate the zone set.

4.         Use the show zoneset active command to verify that the zone in Step 2 is in the active
zone set. If it is not, use the zoneset name command to enter the zone set configuration
submode, and use the member command to add the zone to the active zone set.

Switch (config) # zoneset name zoneset1 vsan 2


Switch (config-zoneset) # member NewZoneName

5.         Use the zoneset activate command to activate the zone set.

Switch (config) # zoneset activate name ZoneSet1 vsan 2

6.         Verify that the host and storage can now communicate.

 Resolving a Zone That Is Not in the Active Zone Set Using the CLI


To add a zone to the active zone set using the CLI, follow these steps:

1.         Use the show zoneset active command to verify that you have an active zone set. If you
do not have an active zone set, use the zoneset activate command to activate the zone set.

2.         Use the show zoneset active command to verify that the zone in Step 1 is not in the
active zone set.

3.         Use the zoneset name command to enter the zone set configuration submode, and use the
member command to add the zone to the active zone set.

Switch (config) # zoneset name zoneset1 vsan 2


Switch (config-zoneset) # member NewZoneName

4.         Use the zoneset activate command to activate the zone set.

Switch (config) # zoneset activate name ZoneSet1 vsan 2

5.         Verify that the host and storage can now communicate.

Data Protection: RAID

In the late 1980’s, rapid growth of new applications and databases created a high demand for
storage capacity. At that time, data was stored on a single large, expensive disk drive called
Single Large Expensive Drive (SLED).

In 1987, Patterson, Gibson, and Katz at the University of California, Berkeley, published a paper
titled “A Case for Redundant Arrays of Inexpensive Disks (RAID).” This paper described the
use of small-capacity, inexpensive disk drives as an alternative to large-capacity drives common
on mainframe computers. The term RAID has been redefined to refer to independent disks, to
reflect advances in the storage technology.

There are two types of RAID implementation, hardware and software. Both have their merits and
demerits and are discussed in this section.

Software RAID

Software RAID uses host-based software to provide RAID functions. It is implemented at the
operating-system level and does not use a dedicated hardware controller to manage the RAID
array.

Hardware RAID
A specialized hardware controller is implemented either on the host or on the array. These
implementations vary in the way the storage array interacts with the host.

RAID Array Components

A RAID array is an enclosure that contains a number of HDDs and the supporting hardware and
software to implement RAID. HDDs inside a RAID array are usually contained in smaller sub-
enclosures. These sub-enclosures, or physical arrays, hold a fixed number of HDDs, and may
also include other supporting hardware, such as power supplies. A subset of disks within a RAID
array can be grouped to form logical associations called logical arrays, also known as a RAID set
or a RAID group.

Logical arrays are comprised of logical volumes (LV). The operating system recognizes the LVs
as if they are physical HDDs managed by the RAID controller. The number of HDDs in a logical
array depends on the RAID level used.

Components of a Raid Array

Raid Levels

RAID levels are defined on the basis of striping, mirroring, and parity techniques. These
techniques determine the data availability and performance characteristics of an array.

RAID 0: RAID 0 is also known as disk striping. All the data is spread out in chunks across all
the disks in the RAID set. RAID 0 is only good for better performance, and not for high
availability, since parity is not generated for RAID 0 disks. RAID 0 requires at least two physical
disks.
Raid 0 Striping

Raid 1:  RAID 1 is also known as disk mirroring. All the data is written to at least two separate
physical disks. The disks are essentially mirror images of each other. If one of the disks fails, the
other can be used to retrieve data. Disk mirroring is good for very fast read operations. It's slower
when writing to the disks, since the data needs to be written twice. RAID 1 requires at least two
physical disks.

Raid 1 Mirroring
RAID 5: RAID 5 uses disk striping with parity. The data is striped across all the disks in the
RAID set; it achieves a good balance between performance and availability. RAID 5 requires at
least three physical disks.

Raid 5 Single Distributed Parity

RAID 6: RAID 6 increases reliability by utilizing two parity stripes, which allows for two disk failures
within the RAID set before data is lost. RAID 6 is seen in SATA environments, and solutions that
require long data retention periods, such as data archiving or disk-based backup.

Raid 6 Double Distributed Parity

RAID 1+0: RAID 1+0, which is also called RAID 10, uses a combination of disk mirroring and
disk striping. The data is normally mirrored first and then striped. RAID 1+0 requires a
minimum of four physical disks.
Raid 1+0 Striped Mirroring

RAID 0+1: RAID 0+1, also called RAID 01, is a RAID level that uses a mirror of stripes,
achieving both replication and sharing of data between disks.

Raid 0+1 Mirrored Striping


RAID Comparison

Here we compare all the RAID levels in terms of read and write
performance, the minimum number of disks required to build each level, and so on.
Comparison Chart 1

Comparison Chart 2
Minimum disks in a RAID group

Minimum / maximum disks in a RAID group


Raid Group Capacity Utilization

Application IOPS and RAID Configurations

When deciding the number of disks required for an application, it is important to consider the
impact of RAID based on the IOPS generated by the application. The total disk load should be
computed by considering the type of RAID configuration and the ratio of reads to writes
from the host.

The following example illustrates the method of computing the disk load in different types of
RAID.
Consider an application that generates 5,200 IOPS, with 60 percent of them being reads.

The disk load in RAID 5 is calculated as follows:

RAID 5 disk load = 0.6 × 5,200 + 4 × (0.4 × 5,200) [because the write penalty for RAID 5 is 4]
= 3,120 + 4 × 2,080
= 3,120 + 8,320
= 11,440 IOPS

The disk load in RAID 1 is calculated as follows:

RAID 1 disk load = 0.6 × 5,200 + 2 × (0.4 × 5,200) [because every write manifests as two writes
to the disks]
= 3,120 + 2 × 2,080
= 3,120 + 4,160
= 7,280 IOPS

The computed disk load determines the number of disks required for the application. If, in this
example, an HDD rated at a maximum of 180 IOPS is used for the application, the number of
disks required to meet the workload for each RAID configuration would be as
follows:

RAID 5: 11,440 / 180 = 64 disks

RAID 1: 7,280 / 180 = 42 disks (approximated to the nearest even number)
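The same arithmetic is easy to script when several applications have to be sized. A minimal Python sketch of the calculation above (the 5,200 IOPS, 60 percent read ratio, write penalties, and 180 IOPS per disk are this example's assumptions, not fixed values):

import math

def disk_load(app_iops, read_ratio, write_penalty):
    # split application IOPS into reads and writes, then apply the write penalty
    reads = read_ratio * app_iops
    writes = (1 - read_ratio) * app_iops
    return reads + write_penalty * writes

def disks_required(load, iops_per_disk=180, pairs=False):
    # round up to whole disks; mirrored levels are bought in pairs
    n = math.ceil(load / iops_per_disk)
    if pairs and n % 2:
        n += 1
    return n

app_iops, read_ratio = 5200, 0.60
for level, penalty, pairs in (("RAID 5", 4, False), ("RAID 1", 2, True)):
    load = disk_load(app_iops, read_ratio, penalty)
    print(f"{level}: disk load = {load:.0f} IOPS, disks = {disks_required(load, pairs=pairs)}")

Running it reproduces the figures above: a disk load of 11,440 IOPS and 64 disks for RAID 5, and 7,280 IOPS and 42 disks for RAID 1.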

Hot Spares

A hot spare refers to a spare HDD in a RAID array that temporarily replaces a failed HDD of a
RAID set. A hot spare takes the identity of the failed HDD in the array.

Hot spares are of two types: permanent and temporary.

Permanent Hot Spare: The hot spare replaces the failed HDD permanently. This means that it is
no longer a hot spare, and a new hot spare must be configured on the array.

Temporary Hot Spare: When a new HDD is added to the system, data from the hot spare is
copied to it. The hot spare returns to its idle state, ready to replace the next failed drive.

Intelligent Storage System

Business-critical applications require high levels of performance, availability, security, and
scalability. RAID technology made an important contribution to enhancing storage performance
and reliability, but hard disk drives, even with a RAID implementation, could not meet the
performance requirements of today's applications.

With advancements in technology, a new breed of storage solutions known as an intelligent
storage system has evolved. These arrays have an operating environment that controls the
management, allocation, and utilization of storage resources. These storage systems are
configured with large amounts of memory called cache.

Components of an Intelligent Storage System

An intelligent storage system consists of four key components: front end, cache, back end, and
physical disks. An I/O request received from the host at the front-end port is processed through
cache and the back end, to enable storage and retrieval of data from the physical disk. A read
request can be serviced directly from cache if the requested data is found in cache.
Components of an intelligent storage system
Front End

The front end provides the interface between the storage system and the host. It consists of two
components: front-end ports and front-end controllers. The front-end ports enable hosts to
connect to the intelligent storage system.

Front-end controllers route data to and from cache via the internal data bus. When cache receives
write data, the controller sends an acknowledgment message back to the host.

Cache

Cache is an important component that enhances the I/O performance in an intelligent storage
system. Cache is semiconductor memory where data is placed temporarily to reduce the time
required to service I/O requests from the host.

Structure of Cache

Cache is organized into pages or slots, which is the smallest unit of cache allocation. The size of
a cache page is configured according to the application I/O size. Cache consists of the data store
and tag RAM. The data store holds the data while tag RAM tracks the location of the data in the
data store and in disk. Entries in tag RAM indicate where data is found in cache and where the
data belongs on the disk. Tag RAM includes a dirty bit flag, which indicates whether the data in
cache has been committed to the disk or not.
Structure of cache
Read Operation with Cache

When a host issues a read request, the front-end controller accesses the tag RAM to determine
whether the required data is available in cache. If the requested data is found in the cache, it is
called a read cache hit or read hit.

Read performance is measured in terms of the read hit ratio, or the hit rate, usually expressed as a
percentage. This ratio is the number of read hits with respect to the total number of read requests.
A higher read hit ratio improves the read performance.
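For example, if 850 of every 1,000 read requests are served from cache, the read hit ratio is 85 percent.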

Read hit and read miss operations

Write Operation with Cache


Write operations with cache provide performance advantages over writing directly to disks.
When an I/O is written to cache and acknowledged, it is completed in far less time (from the
host’s perspective) than it would take to write directly to disk.

A write operation with cache is implemented in the following ways:

Write-back cache: Data is placed in cache and an acknowledgment is sent to the host
immediately. Later, data from several writes are committed (de-staged) to the disk. Write
response times are much faster. However, uncommitted data is at risk of loss in the event of
cache failures.

Write-through cache: Data is placed in the cache and immediately written to the disk, and an
acknowledgment is sent to the host. Because data is committed to disk as it arrives, the risks of
data loss are low but write response time is longer because of the disk operations.

Cache Management

Cache is a finite and expensive resource that needs proper management. Even though intelligent
storage systems can be configured with large amounts of cache, when all cache pages are filled,
some pages have to be freed up to accommodate new data and avoid performance degradation.

Least Recently Used (LRU): An algorithm that continuously monitors data access in cache and
identifies the cache pages that have not been accessed for a long time. LRU either frees up these
pages or marks them for reuse.

Most Recently Used (MRU): An algorithm that is the converse of LRU. In MRU, the pages that
have been accessed most recently are freed up or marked for reuse.

As cache fills, the storage system must take action to flush dirty pages (data written into the
cache but not yet written to the disk) in order to manage its availability. Flushing is the process
of committing data from cache to the disk.

On the basis of the I/O access rate and pattern, high and low levels called watermarks are set in
cache to manage the flushing process. High watermark (HWM) is the cache utilization level at
which the storage system starts high-speed flushing of cache data. Low watermark (LWM) is the
point at which the storage system stops the high-speed or forced flushing and returns to idle flush
behavior.

Idle flushing: Occurs continuously, at a modest rate, when the cache utilization level is between
the high and low watermark.

High watermark flushing: Activated when cache utilization hits the high watermark. The
storage system dedicates some additional resources to flushing. This type of flushing has
minimal impact on host I/O processing.
Forced flushing: Occurs in the event of a large I/O burst when cache reaches 100 percent of its
capacity, which significantly affects the I/O response time.  In forced flushing, dirty pages are
forcibly flushed to disk.

Types of flushing
Back End

The back end provides an interface between cache and the physical disks. It consists of two
components: back-end ports and back-end controllers. The back end controls data transfers
between cache and the physical disks. From cache, data is sent to the back end and then routed to
the destination disk. Physical disks are connected to ports on the back end.

Storage Topologies 

There are three storage topologies:

1. Direct Attached Storage (DAS)


2. Storage Area Network (SAN)
3. Network Attached Storage (NAS)

Direct Attached Storage:  The storage device is internally connected to the host by a serial or
parallel bus. The physical bus has distance limitations and can only be sustained over a shorter
distance for high-speed connectivity.
DAS Architecture

Storage Area Network: A storage area network (SAN) carries data between servers (also
known as hosts) and storage devices through fibre channel switches. A SAN enables storage
consolidation and allows storage to be shared across multiple servers.
SAN Implementation

Structure of SAN

Before learning more about SAN, let us take a brief look at Fibre Channel.
The FC architecture forms the fundamental construct of the SAN infrastructure. Fibre Channel is
a high-speed network technology that runs on high-speed optical fiber cables (preferred for front-
end SAN connectivity) and serial copper cables (preferred for back-end disk connectivity). The
FC technology was created to meet the demand for increased speeds of data transfer among
computers and servers.

FC Connectivity

The FC architecture supports three basic interconnectivity options: point-to-point, arbitrated
loop (FC-AL), and Fibre Channel switched fabric.

Point-to-Point

Point-to-point is the simplest FC configuration — two devices are connected directly to each
other. This configuration provides a dedicated connection for data transmission between nodes.
However, the
point-to-point configuration offers limited connectivity, as only two devices can communicate
with each other at a given time.

Point-to-Point Topology

Fibre Channel Arbitrated Loop

In the FC-AL configuration, devices are attached to a shared loop. FC-AL has the characteristics
of a token ring topology and a physical star topology. In FC-AL, each device contends with other
devices to perform I/O operations. At any given time, only one device can perform I/O
operations on the loop.
FC-AL shares the bandwidth in the loop. Only one device can perform I/O operations at a time.
Because each device in a loop has to wait for its turn to process an I/O request, the speed of data
transmission is low in an FC-AL topology.

FC-AL uses 8-bit addressing. It can support up to 127 devices on a loop.

Fibre Channel arbitrated loop

Fibre Channel Switched Fabric

 A Fibre Channel switched fabric (FC-SW) network provides interconnected devices, dedicated
bandwidth, and scalability. The addition or removal of a device in a switched fabric is minimally
disruptive; it does not affect the ongoing traffic between other devices.

Fibre Channel switched fabric


Fibre Channel Ports

 Ports on the switch can be one of the following types:

N_port: An end point in the fabric. This port is also known as the node port. Typically, it is a
host port (HBA) or a storage array port that is connected to a switch in a switched fabric.

NL_port: A node port that supports the arbitrated loop topology. This port is also known as the
node loop port.

E_port: An FC port that forms the connection between two FC switches. This port is also known
as the expansion port. The E_port on an FC switch connects to the E_port of another FC switch
in the fabric through a link, which is called an Inter-Switch Link (ISL). ISLs are used to transfer
host-to-storage data as well as the fabric management traffic from one switch to another. ISL is
also one of the scaling mechanisms in SAN connectivity.

F_port: A port on a switch that connects an N_port. It is also known as a fabric port and cannot
participate in FC-AL.

FL_port: A fabric port that participates in FC-AL. This port is connected to the NL_ports on an
FC-AL loop. A FL_port also connects a loop to a switch in a switched fabric. As a result, all
NL_ports in the loop can participate in FC-SW. This configuration is referred to as a public loop.
In contrast, an arbitrated loop without any switches is referred to as a private loop. A private loop
contains nodes with NL_ports, and does not contain FL_port.

G_port: A generic port that can operate as an E_port or an F_port and determines its
functionality automatically during initialization.

SAN Topologies
World Wide Names
Each device in the FC environment is assigned a 64-bit unique identifier called the World Wide
Name (WWN).

The Fibre Channel environment uses two types of WWNs: World Wide Node Name (WWNN)
and World Wide Port Name (WWPN).

Structure of a WWPN format

Description of  WWN

Network-Attached Storage: Network-attached storage (NAS) is an IP-based file-sharing
device attached to a local area network. It provides storage consolidation through file-level data
access and sharing.

NAS uses network and file-sharing protocols to perform filing and storage functions. These
protocols include TCP/IP for data transfer and CIFS and NFS for remote file service. NAS
enables both UNIX and Microsoft Windows users to share the same data seamlessly. To enable
data sharing, NAS typically uses NFS for UNIX and CIFS for Windows. A NAS device is a
dedicated, high-performance, high-speed, single-purpose file serving and storage system.

Benefits of NAS
NAS offers the following benefits:

Improved efficiency
Improved flexibility
Centralized storage
Simplified management
Scalability
High availability
Security

FC Protocol Architecture

FC Cables and Transceivers

Fibre Channel cabling

Multimode Fiber
Multimode Step-Index Fiber

Single-Mode Fiber
Single-Mode Step-Index Fiber
Inside of a Single-Mode Step-Index Fiber

Fibre Channel Frame
Frame of a Fibre Channel

To refer to Storage Area Network Class 1, please click on the link below:
http://www.sanadmin.net/2015/10/introduction-to-information-storage.html
