
Symmetrix (DMX) Architecture

Today we will discuss the EMC Symmetrix architecture and its components.
Components

Symmetrix DMX

Rear view of Symmetrix DMX

Description
• It is a high-end storage array.
• It uses an active-active architecture.
• It has 24 directors: 8 FA (front-end) directors, 8 DA (back-end) directors and 8 global memory directors.
• It supports up to 128 point-to-point serial connections.

Front View of System Bay

Rear view of System Bay

• FAs (front-end adapters) handle host I/O requests.


• DAs (back-end adapters) communicate with the back-end disks.
• The DAs are connected to the disks through Link Control Cards (LCCs).
• Each FA has 4 processors, named A, B, C and D, and each processor has 2 ports, named 0 and 1.

Image of DMX Array

Image of DAE & SPS

• The FA directors run at 4 Gbps and the DA directors at 2 Gbps.
• The FA directors are paired using rule 17 for redundancy.

Rule 17

• There are 2 XCM modules connected to the service processor, which dials home to EMC in case of hardware failures.

A full view of Symmetrix DMX Array

A rear view of Symmetrix DMX Array

Symmetrix DMX-3
• 2005
• World's first petabyte disk array
• 24-slot, scalable (2 to 9 bays)
• Up to 16 directors
• Up to 8 memory boards (Max Capacity: 512 GB)
• Max number of drives: 2,400 (73 GB, 146 GB, 300 GB, 500 GB)
• Max system capacity up to 1 PB
Symmetrix DMX-4
• 2007
• World's first enterprise class flash drive array
• 24-slot, scalable (2 to 9 bays)
• Up to 16 directors
• Up to 8 memory boards (Max Capacity: 512 GB)
• Max number of drives: 2,400
• 73 GB, 146 GB, 200 GB, 400 GB EFD
• 73 GB, 146 GB, 300 GB, 450 GB FC
• 500 GB, 1 TB SATA
• Max system capacity up to 2 PB

Symmetrix DMX-3 & DMX-4

Symmetrix VMAX Virtual Matrix Architecture Family


Symmetrix VMAX
• 2009
• World's first high-end array purpose built for virtual environments
• Virtual Matrix sRIO interface
• 1 system bay, up to 10 storage bays
• Up to 8 VMAX Engines, running Intel multi-core CPUs
• Up to 16 Symmetrix directors
• Up to 1 TB of global memory
• Max number of drives: 2,400
• 200 GB, 400 GB EFD
• 146 GB, 300 GB, 450 GB, 600 GB FC
• 1 TB, 2 TB SATA
• Max system capacity up to 3 PB

Symmetrix VMAX

DMX Features

TimeFinder, TimeFinder/Clone — Local Replication


Symmetrix Remote Data Facility (SRDF) -- remote replication
Symmetrix Optimizer -- dynamically swaps disks based on workload
Symmetrix command line interface (SYMCLI)
SymmWin, Enginuity -- Symmetrix GUI console (since the Symm3, Symm4 models)
AnatMain -- Symmetrix pseudo-GUI console (before the Symm3, Symm4 models)
Symmetrix remote console (SymmRemote)
FAST -- Fully Automated Storage Tiering
FTS -- Federated Tiered Storage
ECC -- EMC Control Center

How to do LUN Provisioning in Symmetrix DMX

LUN Provisioning

Today we will discuss LUN provisioning in the DMX through the CLI.

Symmetrix LUN Provisioning

Before attempting LUN provisioning, make sure you understand the Symmetrix architecture.

Basically, LUN allocation involves 4 simple steps:

1. Creating STD device


2. Meta device creation
3. Mapping
4. Masking

The step by step procedure for LUN Provisioning in Symmetrix DMX is as follows:

1. Open a text file and add the following entry to create the STD devices:

create dev count=7, size=10240, emulation=FBA, config=2-way-mir, disk_group=2;

Execute the text file using the symconfigure command with the preview, prepare and commit options:

symconfigure -sid XXX -f "name of the text file" -v -noprompt preview

symconfigure -sid XXX -f "name of the text file" -v -noprompt prepare

symconfigure -sid XXX -f "name of the text file" -v -noprompt commit

Verify the newly created devices:

symdev -sid XXX list -noport
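The file-plus-symconfigure pattern above repeats for every configuration change, so it is handy to script it. A minimal sketch (the SID `XXX`, the file name and the device parameters are placeholders; the symconfigure lines are only printed here, since running them requires Solutions Enabler on a host attached to the array):

```shell
#!/bin/sh
# Sketch: build a symconfigure command file for STD device creation,
# then show the three-phase run. SID and file name are placeholders.
SID=XXX
CMDFILE=create_std.txt

cat > "$CMDFILE" <<'EOF'
create dev count=7, size=10240, emulation=FBA, config=2-way-mir, disk_group=2;
EOF

# The same file goes through all three phases in order:
for phase in preview prepare commit; do
    echo "symconfigure -sid $SID -f $CMDFILE -v -noprompt $phase"
done
```

Preview only parses the file, prepare additionally validates it against the array, and commit applies it, so always run them in that order.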

2. Open a text file to form the meta and add devices to the meta head:

form meta from dev 27CA, config=striped, stripe_size=1920;
add dev 27CB:27E4 to meta 27CA;

Execute the text file using the symconfigure command with the preview, prepare and commit options:

symconfigure -sid XXX -f "name of the text file" -v -noprompt preview

symconfigure -sid XXX -f "name of the text file" -v -noprompt prepare

symconfigure -sid XXX -f "name of the text file" -v -noprompt commit

Verify the newly created meta devices:

symdev -sid XXX list -noport

Find the host-connected directors and port details:

symcfg -sid XXX list -connections

Find the available addresses on that port:

symcfg -sid XXX list -address -available -dir 6d -p 1

3. Open a text file with the following entry to map the device to the FA port:

map dev 27CA to dir 6d:1, lun=023;

Execute the text file using the symconfigure command with the preview, prepare and commit options:

symconfigure -sid XXX -f "name of the text file" -v -noprompt preview

symconfigure -sid XXX -f "name of the text file" -v -noprompt prepare

symconfigure -sid XXX -f "name of the text file" -v -noprompt commit

4. Mask the devices to the host HBA:

symmask -sid XXX -wwn 1000000c94d35cd -dir 6d -p 1 add devs 27CA -nop

Refresh the masking database:

symmask -sid XXX refresh


LUN De-allocation

Today we will walk through the LUN de-allocation procedure in Symmetrix DMX.
To learn about LUN provisioning in DMX, refer to the link below:
http://www.sanadmin.net/2016/04/Symmetrix-lun-provisioning.html

Symmetrix DMX LUN Deallocation


Steps:
Below are the 5 simple steps to perform the LUN deallocation in Symmetrix DMX.

1. Unmasking
2. Write disable
3. Un-mapping
4. Dissolve meta
5. Deleting hypers/LUNs

Procedure:

1. Unmask the devices from the host:

symmask -sid XXX -wwn 10000003efgae62 -dir 6d -p 1 remove devs 27CA

Refresh the Symmetrix masking database:

symmask -sid XXX refresh

2. Write disable the device before unmapping it from the director port:

symdev -sid XXX write_disable 27CA -sa 6d -p 1 -noprompt

3. Open a text file with the following entry to unmap the device:

unmap dev 27CA from dir all:all;

Perform the preview, prepare and commit operations using the symconfigure command:

symconfigure -sid XXX -f "name of the text file" -v -noprompt preview

symconfigure -sid XXX -f "name of the text file" -v -noprompt prepare

symconfigure -sid XXX -f "name of the text file" -v -noprompt commit

Verify that the device has been unmapped:

symdev -sid XXX list -noport

4. Open a text file with the following entry to dissolve the meta:

dissolve meta dev 27CA;

Perform the preview, prepare and commit operations using the symconfigure command:

symconfigure -sid XXX -f "name of the text file" -v -noprompt preview

symconfigure -sid XXX -f "name of the text file" -v -noprompt prepare

symconfigure -sid XXX -f "name of the text file" -v -noprompt commit

Verify that the meta has been dissolved:

symdev -sid XXX list -noport

5. Open a text file with the following entry to delete the device:

delete dev 27CA;

Perform the preview, prepare and commit operations using the symconfigure command:

symconfigure -sid XXX -f "name of the text file" -v -noprompt preview

symconfigure -sid XXX -f "name of the text file" -v -noprompt prepare

symconfigure -sid XXX -f "name of the text file" -v -noprompt commit

Verify that the hypers have been deleted:

symdev -sid XXX list -noport


##########################

PROVISIONING A LUN/STORAGE TO HOST IN DMX

Provisioning a LUN from storage to a host follows a simple process. First we need to understand the requirement, then we start the actual process. Here we deal only with STD devices and meta devices formed from STD devices. Thin device creation and virtual provisioning will be discussed later for DMX-4 and VMAX.
In provisioning LUN(/storage) to host process below steps are involved

1) CREATING DEVICES

Devices may be individual STD devices or meta devices formed by combining two or more STD devices. We will discuss the different device types later in this post; for now we stick to provisioning STD/meta devices in DMX-3/4.

2) MAPPING

The devices we created need a path to reach the host, so we assign them to Symmetrix front-end ports; this process is called mapping.

3) MASKING
So we have devices (LUNs) created for the host and a path prepared to access them. Then what is masking and why do we need it? When we map devices to a front-end port, every host zoned to that port has access to all the LUNs mapped to it. To give a particular host secure access to a particular device/LUN, we mask the devices to that host.

Let's jump into the actual process using Solutions Enabler SYMCLI.

Let our requirement be 4 x 100 GB concatenated meta LUNs with RAID-5 (3+1) protection.
Host name: windows, host WWN: 1000000000123
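A quick sanity check of that sizing before creating anything on the array: four 100 GB metas built from 25 GB members need 16 STD devices in total (pure arithmetic, no SYMCLI involved):

```shell
#!/bin/sh
# Sizing check: 4 metas x 100 GB, each built from 25 GB STD members.
META_COUNT=4
META_GB=100
MEMBER_GB=25

MEMBERS_PER_META=$((META_GB / MEMBER_GB))      # 100 / 25 = 4 members per meta
TOTAL_DEVS=$((META_COUNT * MEMBERS_PER_META))  # 4 x 4 = 16 STD devices

echo "members per meta: $MEMBERS_PER_META"
echo "STD devices to create: $TOTAL_DEVS"
```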

####################################################

1) CREATING DEVICES

a) STD device creation:

Step 1: Check for any configuration lock:

$symconfigure -sid 1234 verify

Abort any hung/running configuration session if needed:

$symconfigure -sid 1234 abort

Step 2: Check the free space in the array:

$symconfigure -sid 1234 list -freespace -units MB

Step 3: Create the STD devices.

First create a file "dev_create.txt" with the following command:

create dev count=16, size=25gb, config=RAID-5, data_member_count=3, emulation=FBA, disk_group=2;

$symconfigure -sid 1234 -f dev_create.txt commit -v -nop

This command creates 16 devices of 25 GB each. Let's say the devices are 0001:000F and 0010 (device IDs are hexadecimal, so 000F is followed by 0010).
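Because the IDs are hexadecimal, a range such as 0001:0010 can be expanded with printf; a small illustration (the range matches the 16 devices above):

```shell
#!/bin/sh
# Expand the hex device range 0001..0010 (decimal 1..16).
IDS=$(for n in $(seq 1 16); do printf '%04X\n' "$n"; done)
echo "$IDS"
```

Note that 000F is immediately followed by 0010, as in the device list above.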
b) CREATE META DEVICES

First create a file "meta_create.txt" with the commands below:

form meta from dev 0001, config=concatenated;
add dev 0002:0004 to meta 0001;
form meta from dev 0005, config=concatenated;
add dev 0006:0008 to meta 0005;
form meta from dev 0009, config=concatenated;
add dev 000A:000C to meta 0009;
form meta from dev 000D, config=concatenated;
add dev 000E:0010 to meta 000D;

Now run the configuration command to create the meta devices:

$symconfigure -sid 1234 -f meta_create.txt -v -nop commit

Now we have created 4 x 100 GB meta devices.

Let's check the devices we just created; at any time we can list devices not mapped to any front-end port using:

$symdev -sid 1234 list -noport
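The repetitive meta_create.txt above can also be generated with a small loop over the meta heads, using hex arithmetic to derive each head's member range — a sketch, with the file name and device IDs taken from the example above:

```shell
#!/bin/sh
# Generate meta_create.txt: 4 concatenated metas with heads 0001/0005/0009/000D,
# each followed by its next 3 devices as members.
: > meta_create.txt
for head in 1 5 9 13; do                   # decimal values of 0001 0005 0009 000D
    h=$(printf '%04X' "$head")
    first=$(printf '%04X' $((head + 1)))
    last=$(printf '%04X' $((head + 3)))
    echo "form meta from dev $h, config=concatenated;" >> meta_create.txt
    echo "add dev $first:$last to meta $h;"            >> meta_create.txt
done
cat meta_create.txt
```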

####################################################

2) MAPPING

Let's check the front-end ports:

$symcfg -sid 1234 list -fa all

Now let's check on which FA port the host is logged in (from the requirements we will have the host name, the host WWN, or both):

$symmask -sid 1234 list logins -wwn host_wwn

We now have the front-end port where the host is logged in; next we need to find an available LUN address on that port. Let's say director 7c port 0 is our front-end director/adapter:

$symcfg -sid 1234 list -available -address -dir 7c -p 0

## now find an available address from the output ##

Map the devices to the front-end port.

We can map only one device at a time, so it is better to create a file (map.txt) with the mapping commands for all the devices and then run them with a single command:

map dev 0001 to dir 7c:0, lun=123, target=0;
map dev 0005 to dir 7c:0, lun=124, target=0;
map dev 0009 to dir 7c:0, lun=125, target=0;
map dev 000D to dir 7c:0, lun=126, target=0;

$symconfigure -sid 1234 -f map.txt -v -nop commit

We are done mapping the devices to the front-end port. Let's check the mapping:

$symmask -sid 1234 list no_assignment -dir 7c -p 0

####################################################

3) MASKING

Masking is straightforward and has a single step:

$symmask -sid 1234 -wwn 10000000000012 -dir 7c -p 0 add devs 0001,0005,0009,000D

To update the information in the VCMDB:

$symmask -sid 1234 refresh

DMX RECLAMATION USING SYMCLI


When you have to reclaim SAN storage from servers, there are two scenarios:
1. Complete reclamation, e.g. the server is decommissioned
2. Partial reclamation - remove one or a few LUNs from the server

Complete reclamation for DMX


Steps for Complete Reclamation
1. Remove Zoning from both the switches
2. Unmask devices from HBAs
3. Write disable the devices
4. Unmap devices
5. Dissolve Metas
6. Change the VSAN in Device Manager to "1" to list the ports as available

To remove zoning: delete the zone, then activate the zone configuration.
Once zoning is removed, check on the array that the server HBAs are no longer logged in.
The command below lists the server's HBAs and shows whether they are logged in:
symmask -sid <ArrayID> list logins | grep <server name>
Run "symmaskdb -sid <ArrayID> list capacity -host <server name>" to get the list
of devices masked to the host and the port information.

To unmask the devices


symmask -sid <ArrayID> -wwn <WWN1> -dir <dir1> -p <port> remove devs <dev1,dev2,dev3>
symmask -sid <ArrayID> -wwn <WWN2> -dir <dir2> -p <port> remove devs <dev1,dev2,dev3>

To write disable the devices, populate a file named test.dev with the list of
unmasked devices and run the commands below:
symdev -sid <ArrayID> -f test.dev write_disable -sa <dir1> -p <port>
symdev -sid <ArrayID> -f test.dev write_disable -sa <dir2> -p <port>

To unmap the devices, populate a file named unmap.dev with the commands below for
all the unmasked devices and the ports they are mapped to:
unmap dev <dev1> from dir <dir1>:<port>;
unmap dev <dev1> from dir <dir2>:<port>;
Run the file using symconfigure to unmap the devices:
symconfigure -sid <ArrayID> -f unmap.dev -v preview/prepare/commit
Preview - checks the syntax of the commands in the file
Prepare - checks whether the given commands are valid for the devices in the Symmetrix
Commit - makes the configuration changes
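Since each device needs one unmap line per director:port it is mapped to, unmap.dev is another file worth generating with a loop. A sketch — the device IDs and director:port pairs below are hypothetical placeholders, not values from a real array:

```shell
#!/bin/sh
# Generate unmap.dev: one unmap line per device per director:port pair.
DEVS="0ABC 0ABD 0ABE"    # hypothetical device IDs
DIRS="7c:0 10c:0"        # hypothetical director:port pairs

: > unmap.dev
for dev in $DEVS; do
    for dp in $DIRS; do
        echo "unmap dev $dev from dir $dp;" >> unmap.dev
    done
done
cat unmap.dev
```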

Run the list-capacity command for the server again to confirm that all the devices
are reclaimed:
symmaskdb -sid <ArrayID> list capacity -host <server name>
If the reclaimed devices are metas, dissolve them. To dissolve metas, populate a
file dissolve.dev with the command below for each meta to be dissolved:
dissolve meta dev <dev1>;
symconfigure -sid <ArrayID> -f dissolve.dev -v preview/prepare/commit
Once the reclamation of devices on the array is complete, log in to Fabric Manager
and delete the FC aliases of the server.
Log in to Device Manager and change the VSAN of this server's ports to 1.
Partial reclamation for DMX
For partial reclamation we exclude a few steps from the complete procedure; the
rest remains the same.
1. List the device IDs to be removed from the server.
2. Unmask devices from HBAs
3. Write disable the devices
4. Unmap devices
5. Dissolve metas
No changes on the switch and no removal of zoning for partial reclamations; Fabric
Manager and Device Manager do not require any modifications.
DMX Replications
TF/Snap - point-in-time copy of the source device.
TF/Clone - exact copy of the source device.
TF/Mirror - similar to a clone, but kept in sync regularly.
The main differences?
TimeFinder VP Snap is part of TimeFinder/Clone; it uses TDEVs as the snap targets,
which are bound to thin pools.
TimeFinder/Snap uses VDEVs as the snap targets and requires that you create a snap
pool using SAVE devices (similar to TDATs).

EMC – SRDF Theory


EMC SRDF (Symmetrix Remote Data Facility) is a replication product used to
replicate data from one array to a second array. Its primary use is business
continuity/disaster recovery; another use is migrating data from one array to
another. This post contains a brief explanation of the terms associated with SRDF,
its operational use, and the commonly used commands.
RA port and group:
RA Port - used for replication between EMC arrays
RA Group - a group of RA ports between 2 EMC arrays. Each RA group has a unique
group ID.
RDF directors are emulations running on replication ports:
RF is a Fibre Channel director converted to an RDF port; it uses the Fibre Channel protocol.
RE is GigE, so it uses TCP/IP and has an IP address assigned to it.
RA is a remote adapter.
RDF ports are replication ports.

Various Link status:


RW – Ready – Enabled for both reads and writes
NR – Not Ready – Disabled for both reads and writes
WD – Write Disabled – Enabled for reads but not writes
NA – Not available – Unable to report on correct state

Different modes of SRDF


Synchronous replication (SRDF/S):
In this mode of operation, R2 is a real-time mirror image of R1, and data on R1 and
R2 is always fully synchronized. A write I/O is received from the local host into
the cache of the source array and is then transferred to the cache of the remote
array. Upon successful receipt of the write in its cache, the remote array sends an
acknowledgement to the source array, which in turn acknowledges successful ending
status to the local host.
Asynchronous replication (SRDF/A):
In this mode, host writes are accumulated on the source array, and the host
receives immediate acknowledgement for all writes. When the cycle time is reached,
all accumulated data is transferred to the remote array in one delta set. SRDF/A
uses 4 types of delta sets to manage the I/Os. These are as below:
Capture Delta set on Source Array – It captures all incoming writes to all R1s
defined in SRDF/A group.
Transmit Delta Set on Source Array – It transfers its contents from the source to
the remote array.
Receive Delta Set on Remote Array – It receives the data being transferred by the
source-side Transmit Delta Set.
Apply Delta Set on Remote Array – It writes the delta sets to R2s defined in SRDF/A
group to create a consistent recoverable remote copy.
The above cycle is repeated to provide a continuous checkpoint of delta sets. Data
on the remote array is within seconds to minutes of the source array. Delta sets
are resident in cache and hence require additional cache: approximately 0.75 GB of
additional cache per 1 TB of remotely mirrored data is the minimum requirement.
Extra delta-set space can be assigned from hard drives using the feature called
"Delta Set Extension".
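The 0.75 GB-per-TB minimum is easy to work out for a given workload; for example, for a hypothetical 40 TB of remotely mirrored data (integer math in MB, with 0.75 GB expressed as 768 MB):

```shell
#!/bin/sh
# Minimum extra SRDF/A cache at ~0.75 GB per 1 TB of mirrored data.
MIRRORED_TB=40                     # hypothetical amount of mirrored data
CACHE_MB=$((MIRRORED_TB * 768))    # 0.75 GB expressed as 768 MB
echo "extra cache needed: $CACHE_MB MB ($((CACHE_MB / 1024)) GB)"
```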
Since the writes are transferred in cycles, duplicate writes to the same track can
be eliminated through Symmetrix ordered write processing, which transfers the
changed tracks over the links only once within any single cycle.
Enginuity 5874 or higher incorporates a feature called "Write Pacing". When the
SRDF I/O service rates are lower than the host I/O rates, it takes corrective
action to slow the host I/O rates to match the SRDF I/O service rates. It is a
dynamic feature applied at the SRDF group level. This helps control the amount of
cache used by SRDF/A on the source array, preventing it from being exhausted.
Adaptive Copy replication (SRDF Data Mobility - SRDF/DM):
In this mode, the source R1s and target R2s can be a few or many I/Os out of
synchronization. A write I/O is received from the local host into the cache of the
source array, and the source array immediately acknowledges successful ending
status to the local host. The source array processes the I/O using either Adaptive
Copy Write Pending mode or Adaptive Copy Disk mode; it is then placed into the SRDF
queue and transmitted to the cache of the remote array. Upon successful receipt of
the write in its cache, the remote array sends an acknowledgement to the source
array.
There are two ways in which Adaptive Copy can operate: Adaptive Copy Disk Mode and
Adaptive Copy Write Pending Mode.
Adaptive Copy Disk Mode
The source array acknowledges all writes to the host and saves them in the cache of
the local array. These writes are de-staged from cache to the source volumes (R1s)
and the pending I/Os are tracked as invalid tracks. The data is subsequently
transferred to the remote volumes (R2s).
Adaptive Copy Write Pending Mode
The source array acknowledges all writes to the host. These writes are saved in the
cache of the local array until they are successfully written to both the source
volumes (R1s) and the target volumes (R2s).
SRDF/DM is a data-mobility solution (data migrations and data-center moves) rather
than a disaster recovery solution.
SRDF/Automated Replication (SRDF/AR):
It is an automated replication solution that uses both SRDF and TimeFinder to
provide a periodic asynchronous replication of a re-startable data image. It is
offered in two varieties – SRDF/AR Single Hop and SRDF/AR Multiple Hop.
SRDF/AR (Single Hop)
In this scenario, BCVs of the source devices at the local site act as R1s and
replicate to R2 devices at the remote site, which might have their own BCV devices.
SRDF/AR (Multiple Hop)
3 sites are involved in this operation. Source devices from primary site are
replicated to R2 devices at intermediate site via SRDF/S. BCV devices of these R2
devices acts as R1s and they replicate to R2 devices at target site using SRDF/AR
Single Hop.
SRDF/Star Using Concurrent SRDF Mode:
3 sites are involved in this operation. The primary site has R11 devices, which are
the source devices for the remaining 2 sites; the secondary site has R21 devices,
and the tertiary site has R22 devices. R22 volumes have 2 SRDF mirrors, only one of
which is allowed to be active at a given time. Using R22 involves the steps below:
1. Create R1/R2 pair between primary and secondary sites using SRDF/S
2. Create R11/R2 pair between primary and tertiary sites using SRDF/A
3. Create R21/R22 pair between secondary and tertiary sites using SRDF/A.
In case of a primary site failure, fail over to the secondary site (making it
primary) and resume remote mirroring to the tertiary site.
SRDF/Extended Distance Protection (SRDF/EDP):
It involves 3 sites and provides a 'no data loss' extended-distance disaster
protection solution using the basic cascaded SRDF configuration. The R21 device is
a cache-only device with no disk copy of the data; the R1 at the primary site and
the R2 at the tertiary site are the only two full copies of the data.


Simple Clone Operations


Steps to create and delete a Clone copy session using a Device Group (DG).
Table of contents
• Create a Regular Device Group
• Add devices to the Device Group
• Create Clone Session
• Activate Session
• Check Copy operation status
• Break or Terminate Session

1. Create a Regular Device Group


 symdg create TestDg -type regular

2. Add devices to Device Group


Add the standard devices AAA (source) and AAB (target) to the DG. Assume these are
added as DEV001 and DEV002.
 symld -sid 1234 -g TestDg add dev AAA
 symld -sid 1234 -g TestDg add dev AAB

3. Create Clone Session


 symclone -g TestDg create DEV001 sym ld DEV002
The above command creates a clone session and puts the target device in the
Not Ready (NR) state. You can also add the -precopy option to start a background
copy from the source to the target device prior to activation:
 symclone -g TestDg create DEV001 sym ld DEV002 -precopy

4. Activate Session
 symclone -g TestDg activate DEV001 sym ld DEV002
The above command starts the copy operation from the source device to the target,
and also sets the target device to Read/Write, making it accessible to the host.
5. Check status of Copy operation
 symclone -g TestDg query
Once the source device is fully copied to the target, the state of the pair changes
from 'CopyInProg' to 'Copied'.

6. Break or Terminate Session


 symclone -g TestDg terminate DEV001 sym ld DEV002
This terminates the clone copy session, deletes the pairing information from the
storage array, and removes any hold on the target device. Terminate only while the
pair is in the 'Copied' state to ensure the target has fully valid data.
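The whole session lifecycle above can be kept as a short runbook that prints the commands in the required order — a sketch only, using the DG and device names from this example; nothing here touches an array:

```shell
#!/bin/sh
# Print the symclone lifecycle for one source/target pair in a device group.
DG=TestDg
SRC=DEV001
TGT=DEV002

{
    echo "symclone -g $DG create $SRC sym ld $TGT"
    echo "symclone -g $DG activate $SRC sym ld $TGT"
    echo "symclone -g $DG query"
    echo "symclone -g $DG terminate $SRC sym ld $TGT"
} > clone_steps.txt
cat clone_steps.txt
```

Remember to terminate only after the query shows the pair in the 'Copied' state.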
