
Here is Your Customized Document

Your Configuration is:


Manage storage pools
Model - VNX5300
Storage Type - Unified (NAS and SAN)
Connection Type - Fibre Channel Switch or Boot from SAN
Operating System - ESX Server 5i
Path Management Software - VMware native
Document ID - 1428635554847

Reporting Problems
To send comments or report errors regarding this document,
please email: mydocs@emc.com.
For issues not related to this document, contact your service provider.
Refer to Document ID: 1428635554847
Content Creation Date: April 9, 2015

EMC VNX Series


Managing LUNs on your VNX System
November 2014
This guide describes how to manage LUNs within Unisphere for EMC VNX platforms.
Topics include:
- Starting Unisphere
- Committing VNX for Block Operating Environment (OE) software with Unisphere
- Configuring cache with Unisphere
- Enabling storage groups with Unisphere
- Verifying that each LUN is fully initialized using Unisphere
- MetaLUNs overview
- Allocating storage on a new system with the Unisphere LUN Provisioning Wizard
- Create pool LUNs
- Create classic LUNs
- Create a LUNs folder
- Add LUNs to folders
- Remove LUNs from a folder
- Setting LUN properties
- Set classic LUN write cache or FAST Cache properties
- Auto assign for a LUN
- Default owner of a LUN
- Source LUN
- Destination LUN definition
- Verify priority for a LUN
- Rebuild priority for a LUN
- Start the Storage Expansion wizard
- Delete LUNs
- LUN migration overview
- Start a LUN migration
- Cancel (stop) a LUN migration
- Display the status of active LUN migrations
- Creating storage groups with Unisphere
- Making virtual disks visible to an ESXi Server
- Verifying that native multipath failover sees all paths to the LUNs

Starting Unisphere
Procedure
1. Log in to a host (which can be a server) that is connected through a network to the
system's management ports and that has an Internet browser: Microsoft Internet
Explorer, Netscape, or Mozilla.
2. Start the browser.
3. In the browser window, enter the IP address of one of the following that is in the same
domain as the systems that you want to manage:
- A system SP with the most recent version of the VNX Operating Environment (OE)
  installed

  Note

  This SP can be in one of the systems that you want to manage.

- A Unisphere management station with the most recent Unisphere Server and UIs
  installed

Note

If you do not have a supported version of the JRE installed, you will be directed to the
Sun website where you can select a supported version to download. For information
on the supported JRE versions for your version of Unisphere, refer to Environment and
System Requirements in the Unisphere release notes on the EMC Online Support
website.
4. Enter your user name and password.
5. Select Use LDAP if you are using an LDAP-based directory server to authenticate user
credentials.
If you select the Use LDAP option, do not include the domain name.
When you select the LDAP option, the username / password entries are mapped to an
external LDAP or Active Directory server for authentication. Username / password
pairs whose roles are not mapped to the external directory will be denied access. If
the user credentials are valid, Unisphere stores them as the default credentials.
6. Select Options to specify the scope of the systems to be managed.
Global (default) indicates that all systems in the domain and any remote domains can
be managed. Local indicates that only the targeted system can be managed.
7. Click Login.
When the user credentials are successfully authenticated, Unisphere stores them as
the default credentials and the specified system is added to the list of managed
systems in the Local domain.
8. If you are prompted to add the system to a domain, add it now.
The first time that you log in to a system, you are prompted to add the system to a
Unisphere domain. If the system is the first one, create a domain for it. If you already
have systems in a domain, you can either add the new system to the existing domain
or create a new domain for it. For details on adding the system to a domain, use the
Unisphere help.


Committing VNX for Block Operating Environment (OE) software with Unisphere
If you did not install a VNX for Block OE update on the system, you need to commit the
VNX for Block OE software now.
Procedure
1. From Unisphere, select All Systems > System List.
2. From the Systems page, right-click the entry for the system for which you want to
commit the VNX for Block OE and select Properties.
3. Click the Software tab, select VNX-Block-Operating-Environment, and click Commit.
4. Click Apply.

Configuring cache with Unisphere


Procedure
1. From Unisphere, select All Systems > System List.
2. From the Systems page, right-click the entry for the system for which you want to
verify cache properties and select Properties.
3. Enable or configure the cache as described in the Unisphere online help.
Note

The latest version of Unisphere automatically sets the read and write cache sizes. If
your system is running an older version of Unisphere, refer to the system's version of
the online help for advice on setting read/write cache values and setting watermarks.

Enabling storage groups with Unisphere


You must enable storage groups using Unisphere if only one server is connected to the
system and you want to connect additional servers to the system.
Procedure
1. From Unisphere, select All Systems > System List.
2. From the Systems page, right-click the icon for the system, and click Properties.
3. Click the General tab, and select Storage Groups.
4. Click OK.

Verifying that each LUN is fully initialized using Unisphere


Although the storage group with a new LUN is assigned to the server, the server cannot
see the new LUN until it is fully initialized (completely bound). The time the initialization
process takes to complete varies with the size of the LUN and other parameters. While a
LUN is initializing, it is in a transitioning state, and when the initialization is complete, its
state becomes ready.
To determine the state of a LUN:

Procedure
1. From Unisphere, navigate to the LUN you want to verify (Storage > LUNs).
2. Right-click the LUN and click Properties.
3. Verify that the state of the LUN is Normal.
If the state is Transitioning, wait for the state to change to Ready before continuing.
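When scripting this check rather than watching the dialog, the wait in the last step can be automated by polling. The sketch below is illustrative only: `get_lun_state` is a hypothetical caller-supplied function standing in for however you query LUN state (for example, via the VNX for Block CLI), not a real Unisphere API.

```python
import time

def wait_until_ready(get_lun_state, lun_id, timeout_s=3600, poll_s=10):
    """Poll a LUN's state until it is Ready, or raise on timeout.

    get_lun_state is a caller-supplied (hypothetical) function that
    returns the current state string, e.g. "Transitioning" or "Ready".
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if get_lun_state(lun_id) == "Ready":
            return True
        time.sleep(poll_s)  # LUN is still initializing (binding)
    raise TimeoutError(f"LUN {lun_id} still not Ready after {timeout_s}s")
```

Initialization time scales with LUN size, so pick a timeout generous enough for your largest LUNs.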

MetaLUNs overview
MetaLUNs are available for classic LUNs only.
NOTICE

EMC strongly recommends that you do not expand LUN capacity by concatenating LUNs
of different RAID types. Do this only in an emergency situation when you need to add
capacity to a LUN and you do not have LUNs of the same RAID type or the disk capacity to
create new ones. Concatenating metaLUN components with a variety of RAID types could
impact the performance of the resulting metaLUN. Once you expand a LUN, you cannot
change the RAID type of any of its components without destroying the metaLUN.
Destroying a metaLUN destroys all LUNs in the metaLUN, and therefore causes data to be
lost.
A metaLUN is a type of LUN whose maximum capacity can be the combined capacities of
all the LUNs that compose it. The metaLUN feature lets you dynamically expand the
capacity of a single LUN (base LUN) into a larger unit called a metaLUN. Do this by adding
LUNs to the base LUN. You can also add LUNs to a metaLUN to further increase its
capacity. Like a LUN, a metaLUN can belong to a Storage Group, and can participate in
SnapView, MirrorView, and SAN Copy sessions.
Note

Thin LUNs cannot be part of a metaLUN.


A metaLUN may include multiple sets of LUNs and each set of LUNs is called a
component. The LUNs within a component are striped together and are independent of
other LUNs in the metaLUN. Any data that is written to a metaLUN component is striped
across all the LUNs in the component. The first component of any metaLUN always
includes the base LUN.
You can expand a LUN or metaLUN in two ways: stripe expansion or concatenate
expansion.

- A stripe expansion takes the existing data on the LUN or metaLUN you are expanding,
  and restripes (redistributes) it across the existing LUNs and the new LUNs you are
  adding. The stripe expansion may take a long time to complete.

- A concatenate expansion creates a new metaLUN component that includes the new
  expansion LUNs, and appends this component to the existing LUN or metaLUN as a
  single, separate, striped component. No restriping of data between the original
  storage and the new LUNs occurs. The concatenate operation completes immediately.

During the expansion process, the host is able to process I/O to the LUN or metaLUN, and
access any existing data. It does not, however, have access to any added capacity until
the expansion is complete. Whether you can actually use the increased user capacity of
the metaLUN depends on the operating system running on the servers connected to the
storage system.
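As a rough model of the capacity arithmetic above: a metaLUN's user capacity is the sum of its components, and each component contributes the capacities of its member LUNs. This is a back-of-the-envelope sketch, not EMC's internal capacity accounting:

```python
def metalun_capacity(components):
    """Total user capacity of a metaLUN, in GB.

    components is a list of components; each component is a list of
    member-LUN capacities in GB. The first component always includes
    the base LUN.
    """
    return sum(sum(comp) for comp in components)

# A 100 GB base LUN stripe-expanded with two more 100 GB LUNs (one
# striped component), then concatenate-expanded with a 50 GB component:
total_gb = metalun_capacity([[100, 100, 100], [50]])  # 350 GB
```

Whether the host can actually use the increased capacity still depends on the server's operating system, as noted above.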


Allocating storage on a new system with the Unisphere LUN Provisioning Wizard
NOTICE

If you have a Hyper-V or ESX server, perform this procedure on your Hyper-V or ESX server.
Procedure
1. Select the system for which you want to allocate storage.
2. Select Storage > LUNs > LUNs.
3. Under the Wizards list, select the LUN Provisioning Wizard.
4. On the Select Servers page, select Assign LUNs to the Servers, and select the servers
or virtual machines that will have access to the new LUNs.
5. Select the system in which the new LUNs will reside.
6. Create a LUN:
a. Select a pool or RAID group in which to create a LUN, or create a new pool for the
LUN.
We recommend you use an existing pool or create a pool instead of a RAID group
because a pool supports options, such as Fully Automated Storage Tiering (FAST)
and Thin Provisioning, which a RAID group does not support.
b. If you are creating a pool LUN and you want the LUN to be a thin LUN, select Thin
LUN.
The Thin LUN option is available and will be selected by default if the Thin
Provisioning enabler is installed. To learn about pools and thin LUNs, click the ?
icon next to Thin LUN.
c. Select the properties for the LUN.
d. Add the LUNs to a user-defined folder or do not place them in a folder.
e. Click Finish to create the LUN.
7. Verify that the server was assigned to the storage group containing the LUNs you
created:
- If you know the name of the storage group in which the LUNs reside, from
  Unisphere, select Storage > Storage Pools.

- If you know the name of the server or virtual machine to which the storage group is
  assigned, from Unisphere, select Storage > LUNs and confirm that the new LUNs
  are listed.

If you do not see any of the LUNs you just created, you may not have selected the
Assign LUNs to a server option in the Select Servers page of the LUN Provisioning
wizard. You can use the Storage Assignment Wizard for Block to assign the LUNs to a
server.
8. Create a hot spare policy (a RAID group with a hot spare RAID Type) as described in
the Unisphere online help. To do this, select System > Hardware > Hot Spare Policy.
A hot spare is a single disk that serves as a temporary replacement for a failed disk in
a RAID 6, 5, 3, 1, or 1/0 group. Data from the failed disk is reconstructed
automatically on the hot spare from the parity or mirrored data on the working disks in
the LUN, so the data on the LUN is always accessible.

Note

Only RAID group LUNs can be hot spares.


Note

Vault drives (the first 4 drives) cannot be qualified as hot spares.
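The two notes above reduce to a simple eligibility rule. The check below is a sketch that encodes it over assumed inputs (a LUN-type flag and the LUN's disk indexes); it is not an actual Unisphere API:

```python
def can_be_hot_spare(is_raid_group_lun, disk_indexes):
    """Return True if a LUN may serve as a hot spare.

    Only RAID group (classic) LUNs qualify, and none of the LUN's
    disks may be a vault drive (the first 4 drives, indexes 0-3).
    """
    if not is_raid_group_lun:
        return False  # pool (thin) LUNs cannot be hot spares
    return all(index > 3 for index in disk_indexes)
```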

Create pool LUNs


Lets you create one or more pool LUNs of a specified size within a storage pool and
specify details such as the LUN name and the number of LUNs to create.
Procedure
1. In the systems drop-down list on the menu bar, select a storage system.
2. Select Storage > LUNs > LUNs.
3. In the LUNs view, click Create.
4. In the Create LUN dialog, under Storage Pool Properties:
a. Select Pool.
b. Select a RAID type for the pool in which the LUN will be created.
For Pool LUNs, only RAID 6, RAID 5, and RAID 1/0 are valid. RAID 5 is the default
RAID type. If mixed tiers are available that use different RAID types, this field
displays Mixed.
If available, the software populates Storage Pool for new LUN with a list of pools
that have the specified RAID type, or displays the name of the selected pool. The
Capacity section displays information about the selected pool. If there are no
pools with the specified RAID type, click New to create a new one.
5. In LUN Properties, the Thin checkbox is selected by default. If you do not want to
create a thin LUN, clear the Thin checkbox.
6. Assign a User Capacity and ID to the LUN you want to create.
7. If you want to create more than one LUN, select a number in Number of LUNs to
create.
Note

For multiple LUNs, the software assigns sequential IDs to the LUNs as they are
available. For example, if you want to create five LUNs starting with LUN ID 11, the LUN
IDs might be 11, 12, 15, 17, and 18.
8. In LUN Name, either specify a name or select Automatically assign LUN IDs as LUN
Names.
9. Choose one of the following:
- Click Apply to create the LUN with the default advanced properties, or

- Click the Advanced tab to assign the properties yourself.

10. Assign optional advanced properties for the LUN:


a. Select a default owner (SP A or SP B) for the new LUN or accept the default value of
Auto.
b. Set the FAST VP tiering policy option.

11. Click Apply to create the LUN, and then click Cancel to close the dialog box.
An icon for the LUN is added to the LUNs view window.
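The sequential-ID behavior described in the note at step 7 can be modeled as scanning upward from the starting ID and skipping IDs already in use; with IDs 13, 14, and 16 taken, this reproduces the 11, 12, 15, 17, 18 example. A sketch, assuming you know the set of in-use IDs:

```python
def assign_lun_ids(start_id, count, used_ids):
    """Pick `count` LUN IDs ascending from start_id, skipping used IDs."""
    ids, candidate = [], start_id
    while len(ids) < count:
        if candidate not in used_ids:
            ids.append(candidate)  # ID is free; assign it
        candidate += 1
    return ids
```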

Create classic LUNs


Lets you create one or more classic LUNs of a specified size within a RAID group and
specify details such as SP owner, element size, and the number of LUNs to create.
Before creating the LUNs, determine whether the RAID group has enough free space to
accommodate them.
Note

If no LUNs exist on a storage system connected to a NetWare server, refer to the Release
Notice for the NetWare Unisphere Agent for information on how to create the first LUN.
If you are creating LUNs on a storage system connected to a Solaris server, and no
failover software is installed, refer to the Storage System Host Utilities for Solaris
Administrators Guide for information on how to create the first LUN.
If the LUNs you are creating reside on a storage system connected to a VMware ESX
server, and these LUNs will be used with layered applications such as SnapView,
configure the LUNs as raw device mapping volumes set to physical compatibility mode.
You may receive a message that this ID is already being used by a private classic LUN. If
you get this message, assign a new ID, keeping in mind that the system assigns high
numbers to private LUN IDs.
Procedure
1. Select Storage > LUNs > LUNs.
2. In the LUNs view, click Create.
3. In the General tab, under Storage Pool Properties, select RAID Group.
4. Select a RAID type that you want to assign to the LUN.
5. If there are no RAID groups with the specified RAID type, click New to create a new
RAID group.
The software populates Storage Pool for new LUN with a list of RAID Groups with the
specified RAID type, or displays the name of the selected RAID group. The Capacity
section displays information about the selected RAID group.
6. In LUN Properties, assign a user capacity and ID to the LUN you want to create.
If you want to create more than one LUN, select a number in Number of LUNs to
create.
For multiple LUNs, the software assigns sequential IDs to the LUNs as they are
available. For example, if you want to create five LUNs, starting with LUN ID 11, the
LUN IDs may be similar to 11, 12, 15, 17, and 18.
7. In LUN Name, either type a name or select the Automatically assign LUN IDs as LUN
Names checkbox.
8. Choose one of the following:
- Click Apply to create the LUN with the default advanced properties.

- Click the Advanced tab to manually assign the properties.

9. Assign advanced properties for a classic LUN.


a. By default, the Use SP Write Cache checkbox is selected to enable write caching
for the classic LUN. Clear the checkbox if you want to disable write caching.

b. If you want to perform an initial background verify to eliminate latent soft media
errors on the newly bound LUN, do NOT select the No Initial Verify checkbox
(cleared is the default).
c. If you do NOT want to perform the background verify operation, select the No Initial
Verify checkbox.
NOTICE

Do not send data to the LUN until the background verify operation is complete.
d. In the Rebuild Priority list, select a rebuild priority of ASAP, High (default),
Medium, or Low.
e. In the Verify Priority list, select a verify priority of ASAP, High, Medium
(default), or Low.
f. Select a default owner (SP A or SP B) for the new LUN.

Create a LUNs folder


Lets you create a new user-defined LUNs folder, which is a folder created by you in order
to organize your LUNs. You can modify user-defined folders.
Procedure
1. In the systems drop-down list on the menu bar, select a system.
2. Select Storage > LUNs > LUN Folders.
3. Click Create.
4. In Folder Name, enter a name for the new folder.
EMC recommends that you select a name that helps you identify the LUNs in the
folder. For example, you might use the name Accounts Payable.
5. Click OK to save the changes and close the dialog box.

Add LUNs to folders


Lets you add a LUN to one or more folders.
Procedure
1. In the systems drop-down list on the menu bar, select the system that includes the
folders.
2. Select Storage > LUNs > LUNs.
3. In the LUNs view, right-click the icon for a LUN, and then click Select Folders.
4. In Available Folders, double-click the folder to which you want to add the LUN.
The folder moves into the Selected Folders list.
5. Click OK to save the changes and close the dialog box.
The software adds the LUN to the specified folder.

Remove LUNs from a folder


Lets you remove LUNs from the selected folder.

Procedure
1. In the systems drop-down list on the menu bar, select a system.
2. Select Storage > LUNs > LUN Folders.
3. In the Folders view, right-click the folder from which you want to remove LUNs and
select Select LUNs.
4. In the LUNs tab, under Selected LUNs, select one or more LUNs and click Remove.
5. Click OK to save the changes and close the dialog box.
The software removes the selected LUNs from the folder.

Setting LUN properties


Note

In this topic, the term LUN refers to both pool LUNs and classic LUNs.
The LUN properties determine the individual characteristics of a LUN. You set LUN
properties when you create the LUN. You can also change some LUN properties after the
LUN is created.
Procedure
1. In the systems drop-down list on the menu bar, select a system.
2. Select Storage > LUNs > LUNs.
3. In the LUNs view, select a LUN and click Properties.
4. Click one of the property tabs to view and change the current properties for the LUN.

Set classic LUN write cache or FAST Cache properties


NOTICE

For a classic LUN to use write cache, write cache must be enabled for the system. For a
classic LUN to use the FAST Cache, FAST Cache must be configured on the system and
enabled on the LUN.
Procedure
1. In the systems drop-down list on the menu bar, select the storage system.
2. Select Storage > LUNs > LUNs.
3. Right-click the icon for the classic LUN, and then click Properties.
4. Select the Cache tab.
5. By default, the Use SP Write Cache checkbox is selected to enable write caching for
the classic LUN. Clear the checkbox if you want to disable write caching.
6. Select the FAST Cache checkbox to enable the FAST Cache for the classic LUN, or clear
it to disable the FAST Cache for the classic LUN.
You should not enable the FAST Cache for write intent log LUNs and Clone Private
LUNs. Enabling the FAST Cache for these LUNs is a suboptimal use of the FAST Cache
and may degrade the cache's performance for other LUNs.


Note

If the FAST Cache enabler is not installed, FAST Cache is not displayed.
7. Click Apply to save changes without closing the dialog box, or click OK to save
changes and close the dialog box.

Auto assign for a LUN


NOTICE

Enable this LUN property only if the connected host does not use failover software. The
auto assign property is ignored when the storage system's failover mode for an initiator is
set to 1. This property will not interfere with PowerPath's control of a LUN.
Auto assign enables or disables (default) auto assign for a LUN. Auto assign controls the
ownership of the LUN when an SP fails in a storage system with two SPs. You enable or
disable auto assign for a LUN when you bind it. You can also enable or disable it after the
LUN is created without affecting the data on it.
With auto assign enabled, if the SP that owns the LUN fails and the server tries to access
that LUN through the second SP, the second SP assumes ownership of the LUN to enable
access. The second SP continues to own the LUN until the failed SP is replaced and the
storage system is powered up. Then, ownership of the LUN returns to its default owner. If
auto assign is disabled in this situation, the second SP does not assume ownership of
the LUN, and access to the LUN does not occur.
If you are running failover software on a server connected to the LUNs in a storage
system, you must disable auto assignment for all LUNs that you want the software to fail
over when an SP fails. In this situation, the failover software, not auto assign, controls
ownership of the LUN in a storage system with two SPs.
Note

The auto assign property is not available for a Hot Spare LUN.
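The failover behavior described above amounts to a small decision rule. The sketch below models which SP ends up owning a LUN when its owning SP fails in a two-SP system; it is illustrative only, and with failover software such as PowerPath the trespass is driven by the host, not by auto assign:

```python
def owner_after_sp_failure(default_owner, failed_sp, auto_assign):
    """Which SP owns the LUN after an SP failure in a two-SP system?

    Returns the surviving SP if auto assign lets it take over, or None
    if the LUN remains with the failed SP and is inaccessible.
    """
    if default_owner != failed_sp:
        return default_owner  # the owning SP did not fail
    peer = "SP B" if failed_sp == "SP A" else "SP A"
    # With auto assign enabled, the peer SP assumes ownership on access;
    # with it disabled (and no failover software), access does not occur.
    return peer if auto_assign else None
```

Once the failed SP is replaced and the system is powered up, ownership returns to the default owner, as described above.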

Default owner of a LUN


The default owner is the SP that assumes ownership of the LUN when the storage system
is powered up. If the storage system has two SPs, you can choose to create some LUNs
using one SP as the default owner and the rest using the other SP as the default owner, or
you can select Auto, which tries to divide the LUNs equally between SPs. The primary
route to a LUN is the route through the SP that is its default owner, and the secondary
route is through the other SP.
If you do not specifically select one of the Default Owner values, default LUN owners are
assigned according to RAID Group IDs as follows:
Table 1 Default LUN owners

RAID Group IDs    Default LUN owner
Odd-numbered      SP A
Even-numbered     SP B
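When no default owner is chosen explicitly, Table 1 is simply a parity rule on the RAID group ID, which can be sketched as:

```python
def default_lun_owner(raid_group_id):
    """Default SP owner assigned by RAID group ID parity (Table 1)."""
    return "SP A" if raid_group_id % 2 == 1 else "SP B"
```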


Note

The default owner property is unavailable for a Hot Spare LUN.

Source LUN
A classic LUN, metaLUN, or thin LUN from which data is moved. After a LUN migration
completes, the source LUN is destroyed (becomes private).
NOTICE

The source LUN cannot be:

- a Hot Spare
- in the process of being created
- in the process of expanding
- a private LUN
- a component of a metaLUN.

Destination LUN definition


A classic LUN, metaLUN, or thin LUN to which data is moved. After a LUN migration
completes, the destination LUN assumes the identity of the source LUN, and the source
LUN is destroyed. The capacity of the destination LUN must be equal to or greater than
the capacity of the source LUN. The destination can be a different RAID type than that of
the source LUN.
NOTICE

The destination LUN cannot be:

- a Hot Spare
- in the process of being created
- in the process of expanding
- in a Storage Group
- a private LUN
- a LUN that is participating in a MirrorView, SnapView, or SAN Copy session.
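The two NOTICE lists (source above, destination here) plus the capacity rule can be collapsed into one pre-flight check. This is a sketch over assumed LUN attributes; the field names are illustrative, not Unisphere's:

```python
def validate_migration(source, dest):
    """Return the reasons a proposed LUN migration is invalid ([] if OK).

    source and dest are dicts with illustrative keys: capacity_gb,
    is_hot_spare, is_private, creating, expanding, in_metalun,
    in_storage_group, in_replication_session.
    """
    problems = []
    for name, lun in (("source", source), ("destination", dest)):
        if lun.get("is_hot_spare"):
            problems.append(f"{name} LUN is a Hot Spare")
        if lun.get("is_private"):
            problems.append(f"{name} LUN is private")
        if lun.get("creating") or lun.get("expanding"):
            problems.append(f"{name} LUN is being created or expanded")
    if source.get("in_metalun"):
        problems.append("source LUN is a component of a metaLUN")
    if dest.get("in_storage_group"):
        problems.append("destination LUN is in a Storage Group")
    if dest.get("in_replication_session"):
        problems.append("destination LUN is in a MirrorView, SnapView, "
                        "or SAN Copy session")
    if dest["capacity_gb"] < source["capacity_gb"]:
        problems.append("destination capacity is less than source capacity")
    return problems
```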

Verify priority for a LUN


The verify priority is the relative importance of validating the consistency of redundant
information in a LUN. The priority dictates the amount of resources the SP devotes to
checking LUN integrity versus performing normal I/O. You set the verify priority for a LUN
when you create it, and you can change it after the LUN is bound without affecting the
data on the LUN.
If an event happens, such as when an SP fails and the LUN is taken over by the other SP,
a background verification begins to check the redundant information within the LUN.
Valid verify priorities are ASAP, High, Medium (default), and Low. The ASAP setting
checks and verifies as fast as possible, but may degrade storage-system performance.

Note

When creating a RAID 0, Disk, or Hot Spare LUN, the verify priority property is unavailable.

Rebuild priority for a LUN


The rebuild priority is the relative importance of reconstructing data on either a hot spare
or a new disk that replaces a failed disk in a LUN. It determines the amount of resources
the SP devotes to rebuilding instead of to normal I/O activity. Valid rebuild priorities are:
Table 2 Rebuild priorities

Value     Target rebuild rate in GB/hour
ASAP      0 (as quickly as possible)
High      12 (default value)
Medium    6
Low

Rebuild priorities correspond to target rebuild rates in the table above. Actual time to
rebuild a LUN is dependent on I/O workload, LUN size and LUN RAID type. Each LUN
builds at its own specified rate.
A rebuild operation with an ASAP or High (default) priority restores the LUN faster than
one with Medium or Low priority, but may degrade storage system performance.
You set the rebuild priority for a LUN when you create it, and you can change it after the
LUN is bound without affecting the data on the LUN.
Note

The rebuild priority property is unavailable for a RAID 0, Disk, or Hot Spare LUN.
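The target rates in Table 2 give a back-of-the-envelope rebuild estimate for an idle system; as noted above, actual rebuild time also depends on I/O workload, LUN size, and RAID type. A sketch:

```python
# Target rates from Table 2; ASAP is unthrottled, and the table does not
# give a rate for Low, so neither can be estimated this way.
REBUILD_RATE_GB_PER_HOUR = {"High": 12, "Medium": 6}

def estimated_rebuild_hours(lun_size_gb, priority):
    """Idle-system rebuild estimate from the target rate, or None."""
    rate = REBUILD_RATE_GB_PER_HOUR.get(priority)
    return lun_size_gb / rate if rate else None
```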

Start the Storage Expansion wizard


The Storage Expansion wizard is supported for classic LUNs only.
The RAID Group LUN Expansion Wizard lets you dynamically expand the capacity of new
or existing LUNs by combining multiple LUNs into a single unit called a metaLUN. You can
add additional LUNs to a metaLUN to increase its capacity even more. The wizard
preserves the expanded LUN's data. You do not have to unbind the LUN you want to
expand and lose all the data on this LUN. Once you create a metaLUN, it acts like a
standard LUN. You can expand it, add it to a Storage Group, view its properties, and
destroy it.
For existing metaLUNs, you can expand only the last component of the metaLUN. If you
click a component other than the last one and select Add LUNs, the software displays an
error message.
A metaLUN can span multiple RAID Groups and, depending on expansion type
(concatenate or stripe), the LUNs in a metaLUN can be different sizes and RAID Types.


Note

The software allows only four expansions per storage system to be running at the same
time. Any additional requests for expansion are added to a queue, and when one
expansion completes, the first one in the queue begins.
Procedure
1. In the systems drop-down list on the menu bar, select a storage system.
2. From the task list, under Wizards, select RAID Group LUN Expansion Wizard.
3. Follow the steps in the wizard, and when available, click the Learn more links for
additional information.
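The four-at-a-time limit in the note behaves like a bounded pool with a FIFO queue. The sketch below models that scheduling rule; it is illustrative only, not the array's actual scheduler:

```python
from collections import deque

class ExpansionScheduler:
    """Run at most `limit` expansions at once; extras queue up FIFO."""

    def __init__(self, limit=4):
        self.limit = limit
        self.active = set()
        self.queue = deque()

    def request(self, lun_id):
        if len(self.active) < self.limit:
            self.active.add(lun_id)    # a slot is free: start immediately
        else:
            self.queue.append(lun_id)  # all slots busy: wait in line

    def complete(self, lun_id):
        self.active.discard(lun_id)
        if self.queue:                 # promote the oldest queued request
            self.active.add(self.queue.popleft())
```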

Delete LUNs
NOTICE

Deleting a LUN (classic LUN or pool LUN) will delete all data stored on the LUN. If the LUN
is part of a Storage Group, you must remove the LUN from the Storage Group before you
unbind it. Before unbinding a LUN, make a backup copy of any data on it that you want to
retain.
Typically, you delete a LUN only if you want to:
- Delete a storage pool (RAID group or pool) on a storage system. You cannot delete a
  storage pool that includes LUNs.

- Add disks to it. If the LUN is the only LUN in a storage pool, you can add disks to it by
  expanding the storage pool.

- Use its disks in a different storage pool.

- Recreate it with a different capacity of disks.

In any of these situations, you should make sure that the LUN contains the disks that you
want.
Procedure
1. To determine which disks make up a LUN, do the following:
a. In the systems drop-down list on the menu bar, select a system.
b. Select Storage > LUNs > LUNs.
c. Open the LUN Properties dialog box by double-clicking the LUN icon, or by
selecting the LUN and clicking the Properties button.
d. Select Disks to view a list of disks.
2. To delete a LUN, do the following:
a. In the systems drop-down list on the menu bar, select a system.
b. Select Storage > LUNs > LUNs.
c. Right-click the LUN icon and select Delete, or select the LUN and click Delete.
d. Click Yes to continue with the operation, or click No to cancel the operation.


LUN migration overview


The LUN migration feature, included in the Unisphere software and the VNX for Block CLI,
lets you move the data in one LUN, thin LUN, or metaLUN to another LUN, thin LUN, or
metaLUN. You might do this to:
- Change the type of drive the data is stored on (for example, from more economical
  NL-SAS to faster SAS, or vice versa).

- Select a RAID type that better matches the data usage.

- Recreate a LUN with more disk space.

For example, you may have a metaLUN that has been expanded several times by
concatenation with other LUNs (not by addition of another entire disk unit), and whose
performance suffers as a result. You can use the migration feature to copy the metaLUN
onto a new LUN, which, being a single entity and not a group of several entities, provides
better performance.
During a LUN migration, the Unisphere software copies the data from the source LUN to a
destination LUN. After migration is complete:
- The destination LUN assumes the identity (World Wide Name and other IDs) of the
  source LUN.

- The source LUN consumes the destination LUN's storage, and frees the storage it
  consumed in its former storage pool or RAID group.

- The destination LUN is removed.

The migration operation detects the zeros on the source LUN and deallocates them on the
target LUN, which frees up more storage capacity on the target LUN. For better
performance and improved use of space, make sure that the target LUN is a newly
created LUN with no existing data.
Using the Unisphere software, you can start migrations, display and modify migration
properties, and display a summary of all current migrations on one storage system or all
the systems in the domain. You can also cancel (stop) a migration, which deletes the
destination copy and restores the storage system to its original state.
The number of supported active and queued migrations is based on the storage system
type.

Start a LUN migration


Lets you configure and start the LUN migration operation. Prior to starting the migration
operation, if the source LUN and the destination LUN belong to different SPs, the software
trespasses the destination LUN to the SP that owns the source LUN.
Note

If the destination LUN is a thin LUN, the migration operation detects zeros on the source LUN and deallocates them on the target LUN, which frees up more storage capacity on the target LUN. For better performance and improved use of space, make sure that the target LUN is a newly created LUN with no existing data.
Procedure
1. In the systems drop-down list on the menu bar, select a system.
2. Select Storage > LUNs > LUNs.

3. Navigate to the LUN that you want to be the source LUN for the migration operation, right-click it, and select Migrate.
4. In the Start Migration dialog box, select a migration rate, and then select the
participating destination LUN.
5. Click OK to start the data migration, or click Cancel to close the dialog box.
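The same operation can be started from the VNX for Block CLI mentioned in the overview. Below is a sketch that only assembles the naviseccli arguments; the SP address and LUN numbers are placeholders, and the -rate values mirror the migration rates offered in the Start Migration dialog.

```python
# Sketch: build the VNX for Block CLI (naviseccli) call that starts a LUN
# migration. SP address and LUN numbers below are placeholders.
def build_migrate_start(sp_address, source_lun, dest_lun, rate="low"):
    """Return the naviseccli argument list for 'migrate -start'.

    rate corresponds to the migration rate chosen in the
    Start Migration dialog (low, medium, high, or asap).
    """
    if rate not in ("low", "medium", "high", "asap"):
        raise ValueError("unsupported migration rate: %s" % rate)
    return [
        "naviseccli", "-h", sp_address,
        "migrate", "-start",
        "-source", str(source_lun),
        "-dest", str(dest_lun),
        "-rate", rate,
        "-o",                    # suppress the confirmation prompt
    ]

print(" ".join(build_migrate_start("10.0.0.1", 6, 12, rate="low")))
```

As with the dialog, the destination LUN must be at least as large as the source and not already in a storage group presented to another host.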

Cancel (stop) a LUN migration


Lets you cancel an active LUN migration. Canceling a LUN migration deletes the
destination copy and restores the storage system to its original state.
Procedure
1. Select Storage > LUNs > LUNs and navigate to the source LUN that is participating in
the data migration.
2. Click Properties.
3. In the LUN Properties dialog box, select the Migration tab, and click Cancel Migration.
The Unisphere software displays a confirmation dialog box, asking you to confirm the
cancel request.
4. Click Yes to cancel LUN migration.

Display the status of active LUN migrations


Shows a summary of all the currently active migrations for a particular storage system, or for all storage systems within the domain that support the LUN migration feature. You can:
- Display status of active migrations for a specific storage system.
- Display status of active migrations for all supported storage systems in the domain.
Procedure
1. In the systems drop-down list on the menu bar, select a system.
2. Select Storage > LUNs > LUNs.
3. In the task list, under Block Storage, select LUN Migration Summary.
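If you collect the same status from the VNX for Block CLI instead, the migration listing arrives as key/value text that is easy to post-process. The sketch below parses an illustrative listing; the field names and values are assumptions for the example, not output captured from a real array.

```python
# Sketch: parse the kind of "Key:  value" listing that the Block CLI prints
# for each active migration. The sample text is illustrative only.
sample = """\
Source LU Name:  LUN 6
Source LU ID:  6
Dest LU Name:  LUN 12
Dest LU ID:  12
Migration Rate:  LOW
Percent Complete:  42
"""

def parse_migration_listing(text):
    """Turn 'Key:  value' lines into a dict, ignoring lines with no colon."""
    info = {}
    for line in text.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            info[key.strip()] = value.strip()
    return info

status = parse_migration_listing(sample)
print(status["Percent Complete"])   # prints: 42
```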

Creating storage groups with Unisphere


If you do not have any storage groups created, create them now.
Procedure
1. In the systems drop-down list on the menu bar, select a system.
2. Select Hosts > Storage Groups.
3. Under Storage Groups, select Create.
4. In Storage Group Name, enter a name for the Storage Group to replace the default
name.
5. Choose from the following:
- Click OK to create the new Storage Group and close the dialog box, or
- Click Apply to create the new Storage Group without closing the dialog box. This allows you to create additional Storage Groups.

6. Select the storage group you just created and click Connect Hosts.
7. Move the host from Available Hosts to Hosts to be Connected, and click OK.

Making virtual disks visible to an ESXi Server


To allow the ESXi server to access the virtual disks you created, you must make the virtual disks visible to ESXi:
Procedure
1. Log in to the VMware vSphere Client as administrator.
2. From the inventory panel, select the server, and click the Configuration tab.
3. Under Hardware, click Storage Adapters.
4. In the list of adapters, select the adapter (HBA), and click Rescan above the Storage Adapters panel.
Note

NICs are listed under iSCSI Software Adapters.


5. In the Rescan dialog box, select Scan for New Storage Devices and Scan for New
VMFS Volumes, and click OK.
6. Verify that the new virtual disks that you created are in the disk/LUNs list.
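The rescan in steps 4 and 5 can also be triggered from the ESXi command line with esxcli. The sketch below only assembles the esxcli argument list; the adapter name vmhba2 is a placeholder.

```python
# Sketch: build the esxcli command that rescans storage adapters, the CLI
# counterpart of the Rescan button in the vSphere Client.
def build_rescan(adapter=None):
    """Return the esxcli argument list for a storage adapter rescan.

    With no adapter given, rescan all adapters.
    """
    cmd = ["esxcli", "storage", "core", "adapter", "rescan"]
    if adapter:
        cmd += ["--adapter", adapter]   # rescan one HBA, e.g. "vmhba2"
    else:
        cmd += ["--all"]                # rescan every adapter
    return cmd

print(" ".join(build_rescan("vmhba2")))
```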

Verifying that native multipath failover sees all paths to the LUNs
Note

If you have a Hyper-V or ESX server, perform this procedure on your Hyper-V or ESX server.
Procedure
1. For paths to VMFS volumes:
a. Log in to the VMware vSphere Client as administrator.
b. From the inventory panel, select the server, and click the Configuration tab.
c. Under Hardware, click Storage and select the LUN.
d. Click Properties and then Volume Properties.
e. In the Volume Properties page, click Manage Paths.
The Manage Paths window lists the paths and their states for all paths from the server to the LUNs. ESX Server scans for paths to LUNs. When it finds a LUN through a path, it assigns a name to the LUN.
For example, vmhba6:1:2, read from left to right, is the adapter, the target (SP), and the LUN (LUN 2).
2. For paths to RDM volumes:
a. Log in to the VMware vSphere Client as administrator.
b. From the inventory panel, select the server, and click the Configuration tab.
c. In the Configuration tab, click Storage Adapters.
d. Select the adapter, right-click the path, and click Manage Paths.

The Manage Paths window lists the paths and their states for all paths from the server to the LUNs. ESX Server scans for paths to LUNs. When it finds a LUN through a path, it assigns a name to the LUN.
For example, vmhba36:C0:T2:L0, read from left to right, is the adapter (vmhba36), the channel (C0), the target (T2), and the LUN (LUN 0).
3. For each LUN, verify that all paths to the system are working. In the Status column of the Manage Paths window, you should see:
- One active path to each LUN
- One or more standby (non-active) paths to each LUN
- No dead paths
The active path is the path that the server is currently using to access data on the LUN. The standby paths are available for failover, should the active path fail. You should not see any dead paths, which are paths that have failed and need to be repaired. Disabled paths are paths that have been intentionally turned off.
For example, for a switch configuration, you should see something like:

Runtime Name     Target                                           Status
vmhba2:C0:T0:L2  50:06:01:60:bb:60:00:56 50:06:01:6c:3b:60:00:56  Active
vmhba2:C0:T1:L2  50:06:01:60:bb:60:00:56 50:06:01:6d:3b:60:00:56  Standby
vmhba1:C0:T0:L2  50:06:01:60:bb:60:00:56 50:06:01:64:3b:60:00:56  Standby
vmhba1:C0:T1:L2  50:06:01:60:bb:60:00:56 50:06:01:65:3b:60:00:56  Standby

Under SAN Identifier, the world wide names (WWNs) indicate the SP port used for each path. In the WWN, the fourth set of digits from the left indicates the SP port. For the example WWN 50:xx:xx:nn:xx:xx:xx:xx, the nn indicates the SP port as follows:

60 = SP A port 0
61 = SP A port 1
68 = SP B port 0
69 = SP B port 1

The switch configuration example shows four paths through two HBAs (vmhba1 and vmhba2) and four SP ports (SP A port 0, SP A port 1, SP B port 0, and SP B port 1) to LUN 2.
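The two identifiers above can be decoded mechanically. The sketch below splits a runtime name into its adapter, channel, target, and LUN fields, and looks up the SP port from the fourth byte of a port WWN; the mapping table covers only the four ports listed in this guide.

```python
# Sketch: decode the identifiers shown in the Manage Paths window.
import re

def parse_runtime_name(name):
    """Split 'vmhba36:C0:T2:L0' into adapter, channel, target, and LUN."""
    m = re.fullmatch(r"(vmhba\d+):C(\d+):T(\d+):L(\d+)", name)
    if not m:
        raise ValueError("unrecognized runtime name: %s" % name)
    adapter, channel, target, lun = m.groups()
    return {"adapter": adapter, "channel": int(channel),
            "target": int(target), "lun": int(lun)}

# Fourth byte of the port WWN -> SP port, per the table in this guide.
SP_PORT_BY_WWN_BYTE = {
    "60": "SP A port 0",
    "61": "SP A port 1",
    "68": "SP B port 0",
    "69": "SP B port 1",
}

def sp_port_from_wwn(wwn):
    """Look up the SP port from the fourth byte of a 50:xx:xx:nn:... WWN."""
    return SP_PORT_BY_WWN_BYTE.get(wwn.split(":")[3], "unknown")

print(parse_runtime_name("vmhba36:C0:T2:L0")["lun"])   # prints: 0
print(sp_port_from_wwn("50:06:01:61:bb:60:00:56"))     # prints: SP A port 1
```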


Copyright 2006-2014 EMC Corporation. All rights reserved. Published in USA.


Published November, 2014
EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without
notice.
The information in this publication is provided as is. EMC Corporation makes no representations or warranties of any kind with
respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a
particular purpose. Use, copying, and distribution of any EMC software described in this publication requires an applicable software
license.
EMC, EMC², and the EMC logo are registered trademarks or trademarks of EMC Corporation in the United States and other countries.
All other trademarks used herein are the property of their respective owners.
For the most up-to-date regulatory document for your product line, go to EMC Online Support (https://support.emc.com).