
EMC Symmetrix with

Microsoft Windows Server 2003 and 2008


Best Practices Planning

Abstract
This white paper outlines the concepts, procedures, and best practices associated with deploying Microsoft
Windows Server 2003 and 2008 with EMC Symmetrix DMX-3, DMX-4, and Symmetrix V-Max storage.

October 2009




Copyright 2009 EMC Corporation. All rights reserved.
EMC believes the information in this publication is accurate as of its publication date. The information is
subject to change without notice.
THE INFORMATION IN THIS PUBLICATION IS PROVIDED "AS IS." EMC CORPORATION
MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE
INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED
WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.
Use, copying, and distribution of any EMC software described in this publication requires an applicable
software license.
For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com.
All other trademarks used herein are the property of their respective owners.
Part Number h6665



Table of Contents

Executive summary ............................................................................................5
Introduction.........................................................................................................5
Audience....................................................................................................................... 5
Windows storage connectivity ..........................................................................5
Symmetrix front-end director flags............................................................................ 6
Additional director flag information ............................................................................. 7
SCSI-3 persistent group reservations......................................................................... 9
LUN mapping and masking ........................................................................................ 9
Connectivity recommendations.......................................................................11
Multipathing ............................................................................................................... 12
Symmetrix storage............................................................................................14
Understanding hypervolumes.................................................................................. 14
Understanding metavolumes ................................................................................... 15
Metavolume configurations....................................................................................... 15
Gatekeepers ............................................................................................................... 16
RAID options.............................................................................................................. 17
Disk types................................................................................................................... 17
Virtual Provisioning................................................................................................... 18
Discovering storage .........................................................................................19
Windows Server 2008 SAN Policy............................................................................ 20
Offline Shared ............................................................................................................ 21
Automount.................................................................................................................. 22
Initializing and formatting storage ..................................................................22
Disk types................................................................................................................... 22
Master Boot Record (MBR) ...................................................................................... 22
GUID partition table (GPT) ....................................................................................... 22
Basic disks................................................................................................................ 23
Dynamic disks .......................................................................................................... 23
Veritas Storage Foundation for Windows................................................................. 24
Disk type recommendations ..................................................................................... 24
Large volume considerations.................................................................................... 24
Partition alignment .................................................................................................... 25
Partition alignment prior to Windows Server 2003 SP1............................................ 25
Partition alignment with Windows Server 2003, SP1, or later versions.................... 25
Partition alignment with Windows Server 2008 ........................................................ 26
Querying alignment .................................................................................................. 26
Formatting.................................................................................................................. 27
Allocation unit size.................................................................................................... 27
Quick format vs. regular format ................................................................................ 28
Windows Server 2003 format ................................................................................... 28
Windows Server 2008 format ................................................................................... 28
Volume expansion ............................................................................................28
Striped metavolume expansion example ................................................................ 29



Symmetrix replication technologies and management tools .......................32
EMC TimeFinder family............................................................................................. 32
EMC SRDF family....................................................................................................... 34
Open Replicator overview......................................................................................... 35
Symmetrix Integration Utilities................................................................................. 37
EMC Replication Manager......................................................................................... 38
Managing storage replicas...............................................................................39
Symmetrix device states........................................................................................... 39
Read write (RW) ....................................................................................................... 39
Write disabled (WD) ................................................................................................. 39
Not ready (NR) ......................................................................................................... 40
Managing the mount state of storage replicas ....................................................... 41
Conclusion ........................................................................................................47
References ........................................................................................................47




Executive summary
The success of deploying and managing storage in Windows environments is heavily dependent on
utilizing vendor-qualified and vendor-supported configurations while ensuring the proper processes and
procedures are used during implementation. Supported configurations and defined best practices are
continually changing, which requires a high level of due diligence to ensure new, as well as existing,
environments are properly deployed.
EMC Symmetrix V-Max and Symmetrix DMX storage systems undergo rigorous qualifications to
ensure supported topologies throughout the storage stack (operating system, driver, host bus adapter,
firmware, switch, and so on) provide the highest levels of stability and performance available in the
industry. Additionally, best practices and recommendations are continually tested and re-evaluated to
ensure deployments are optimized as new operating system versions, patches, and features are made
available. EMC provides a myriad of delivery mechanisms for relaying the information found during
qualification and testing, including documentation and white papers, support forums, technical advisory
notifications, and extensive support matrices as qualified by EMC's quality assurance organizations,
including EMC E-Lab.
By combining best-of-breed software and hardware technologies like the Symmetrix DMX and Symmetrix
V-Max with thorough qualification, support, and documentation facilities, EMC provides the most
comprehensive set of tools to ensure five 9s availability in the most demanding environments.
Introduction
Critical information for deploying Windows-based servers on Symmetrix storage is available today but can
be spread across various white papers, technical documentation, and knowledgebase articles. The goal of
this paper is to define and consolidate key concepts and frequently asked questions for implementing
Windows Server 2003 and 2008-based operating systems with Symmetrix storage. Some topics will be
directly addressed in this paper, while others will reference more in-depth information available from other
resources where detailed step-by-step guidance is required.
The general topics covered include settings and best practices in the context of storage connectivity, device
presentation, multipathing, Windows and Symmetrix disk configurations, and LUN management including
growth and replication. Additional documentation will be referenced where appropriate and a list of related
resources will be included in the References section on page 47.
Audience
This white paper is intended for storage architects and administrators responsible for deploying Microsoft
Windows Server 2003 and 2008 operating systems on Symmetrix V-Max, Symmetrix DMX-4, and
Symmetrix DMX-3 storage systems.
Windows storage connectivity
Symmetrix storage systems support several modes of connectivity for Windows hosts including Fibre
Channel (FC), Fibre Channel over Ethernet (FCoE), and Internet Small Computer System Interface
(iSCSI). Additionally, the Symmetrix can support direct connections from host bus adapters (HBAs)
utilizing Fibre Channel Arbitrated Loop (FC-AL) or connections via switched architectures (FC-SW). FCoE
environments currently require an FCoE switch to convert between FCoE and native Fibre Channel from the Symmetrix
array. For each of these connectivity options, specific host and operating system functionality can be
supported, including boot from SAN and clustering configurations. For detailed information on supported
hardware and software configurations with these technologies, please see the EMC Host Connectivity
Guide for Windows and the EMC Support Matrix (ESM), both available at http://elabnavigator.emc.com
(access required).

Beyond the supported configurations listed within the ESM, specific configurations are qualified as part of
the Microsoft Windows Server Catalog (WSC), also referred to as the hardware compatibility list (HCL).



For clustering with Windows Server 2003, referred to as Microsoft Cluster Service (MSCS), Microsoft
Customer Support Services (CSS) only supports clusters where the hardware and software, in their entirety,
are listed on the WSC. Microsoft Knowledge Base (KB) article 309395, which can be found at
http://support.microsoft.com/kb/309395/en-us, has additional details.

For failover clustering with Windows Server 2008, officially supported solutions require software and
hardware components to receive a "Certified for Windows Server 2008" logo. Windows Server 2008
failover clusters, however, do not need to be listed in the WSC in contrast to the requirements of Windows
Server 2003. For Windows Server 2008 failover clustering, the fully configured cluster must pass a
validation test. The validation test is provided as part of the "Validate a Configuration" wizard included
with the Windows Server 2008 operating system. The cluster validation runs a set of tests against the
defined cluster nodes in the environment, including tests for processor architecture, drivers, networking
configuration, storage, and Active Directory, among other components. By allowing specific
configurations to be tested by an end user, the validation process allows for a much simpler and streamlined
procedure for qualifying a specific clustered environment. Because of this change in support policy,
specific Windows Server 2008 failover clustering configurations will not necessarily be listed in the ESM
or WSC.

Geographically dispersed clusters are unique in the way they are validated with Windows Server 2008.
Geographically dispersed clusters are clusters where nodes and storage arrays are separated across data
centers for the purposes of disaster recovery. The Symmetrix Remote Data Facility/Cluster Enabler, or
SRDF/CE, is an EMC-developed extension to Windows Server configurations, which implements support
of a geographically dispersed cluster. With SRDF/CE, nodes within a cluster will access different storage
arrays, depending on their geographic locations, and subsequently different LUNs where data is replicated
consistently with SRDF. With nodes potentially accessing separate LUNs, some of the storage specific
tests performed by the validation wizard, including SCSI-3 persistent reservation tests, will not be
successful. The storage test failures are expected, and due to the nature of geographical clusters such as
SRDF/CE, Microsoft does not require them to pass the storage tests within the validation process. For
more information regarding cluster validation with Windows Server 2008, including Microsoft policy
around geographically dispersed clusters, please see Microsoft Knowledge Base article 943984
(http://support.microsoft.com/kb/943984).
Symmetrix front-end director flags
The EMC Support Matrix is the definitive guide for information regarding Symmetrix director flags and
should be consulted prior to server deployments or operating system upgrades. The ESM can be viewed at
http://elabnavigator.emc.com, which is also known as the E-Lab Interoperability Navigator. One method
for using the Navigator to determine the appropriate director flags is to utilize the Advanced Query option.

From within the Navigator as depicted in Figure 1, under the Advanced Query tab, select the appropriate
host operating system and storage array. Once selected and queried via "get results," support statements
will become available for the selected components. Within the support statements, under "Networked
Storage" a link called "Director Bit/Flag Information" appears. This link contains the most up-to-date
information regarding the appropriate director flags for the selected operating system and Symmetrix
storage array.





Figure 1. E-Lab Interoperability Navigator
Table 1 outlines the director flags required for Windows Server 2003 and 2008 standalone or clustered
hosts on Symmetrix V-Max and Symmetrix DMX-3/DMX-4 arrays at the time of this paper's publication.
Please note that for Windows Server 2008 failover clustering an additional device-level flag is required to
enable SCSI-3 persistent reservations. Please see the section "SCSI-3 persistent group reservations" for
additional details.

Table 1. Windows Server 2003 and 2008 required Symmetrix port flags
Bit Description
Common_Serial_Number (C) This flag should be enabled for multipath configurations or hosts that
need a unique serial number to determine which paths lead to the
same device.
SCSI_3 (SC3) When enabled, the Inquiry data is altered when returned by any
device on the port to report that the Symmetrix supports the SCSI_3
protocol.
SPC-2 Compliance (SPC-2) Provides compliance with the newer SCSI Primary Commands-2 protocol
specification. For more information, see the SPC-2 section.
Host SCSI Compliance 2007
(OS2007)
When enabled, this flag provides a stricter compliance with SCSI
standards for managing device identifiers, multi-port targets, unit
attention reports, and the absence of a device at LUN 0. For more
information please see the OS2007 section.

Additional director flag information
For Symmetrix V-Max, volume masking is enabled via the ACLX director flag. For Symmetrix
DMX-3/DMX-4, volume masking is enabled via the VCM director flag. In most switched Fibre Channel
environments it is recommended to enable masking. For iSCSI environments it is required to have masking
enabled in order to allow initiators to log in to the Symmetrix. The section "LUN mapping and masking"
has additional information.

For FC Loop-based topologies, logically enable the following base settings in addition to the required
Windows settings in Table 1: EAN (Enable Auto Negotiation), UWN (Unique WWN). For FC switched-
based topologies, logically enable the following base settings with the required Windows settings in Table 1:
EAN (Enable Auto Negotiation), PP (Point-to-Point), UWN (Unique WWN).

SPC-2
With Windows Server 2003 versions prior to SP1, SPC-2 was not a required director flag. With Windows
2003 SP1 and later, specific Microsoft applications began checking for SPC-2 storage compliance,
including the Microsoft Hardware Compatibility Test (HCT) 12.1, as well as the Volume Shadow Copy
Service (VSS) when used in conjunction with Microsoft clusters. Due to specific applications requiring
SPC-2 compliance, it was recommended to enable SPC-2 in legacy Windows Server 2003 SP1
environments. Current Windows Server 2003-based qualifications for the Windows Server Catalog are
executed with the SPC-2 flag enabled; therefore SPC-2 must be enabled in those environments to remain
compliant. For Windows Server 2008 environments, the SPC-2 flag has always been required.

Should any software modifications, including service packs, hotfixes, or driver updates, be made to a
legacy Windows Server 2003 environment where SPC-2 is not enabled, the SPC-2 director flag should be
enabled at that time. Specific Windows Server 2003 hotfixes (including Microsoft hotfix 950903) may
require SPC-2 compliance and could otherwise cause an outage in the environment if this flag is not set.

OS2007
Windows Server 2008 configurations require the OS2007 director flag be enabled. For Windows Server
2003 environments it is recommended to have this setting enabled; however, it is not required in legacy
Windows Server 2003 environments. Having the OS2007 flag enabled in Windows Server 2003
environments does not affect the OS and is recommended to be enabled in case there is a future upgrade to
Windows Server 2008. As with the SPC-2 flag, future Windows 2003 Windows Server Catalog
qualifications will be executed with the OS2007 flag enabled, which will impact Windows Server 2003
compliance where OS2007 is not enabled in new or upgraded environments.

Methods for setting director flags
Director flags can be configured at the director port level or at the HBA level. When director flags are set
at the director port level, all hosts connected to those ports will be presented with the same settings. In a
heterogeneous environment where ports are shared, different host operating systems may require different
flags. In such cases it is possible to enable specific settings based on an HBA-to-director port relationship.

Director-level flags can be set via configuration changes commonly done with the Solutions Enabler (SE)
command line interface (CLI) symconfigure. HBA-level director flags are enabled via masking
operations, such as with the symmask or symaccess hba_flags functionality. Director- or HBA-level
settings can also be managed via the Symmetrix Management Console (SMC) graphical user interface
(GUI), or with EMC Ionix ControlCenter (ECC).

The following is an example of using the symconfigure CLI command to enable the OS2007 flag at the
director port (port 0 on director 7f) level:

symconfigure -cmd "set port 7f:0 SCSI_Support1=enable;" -sid 94 commit

The following is an example of using the symaccess CLI command to enable the OS2007 flag for a specific
WWN:

symaccess -sid 94 set hba_flags on OS2007 -enable -wwn 10000000c96d0a50

Conflicts regarding director flags can occur in existing environments where requirements change based on
the introduction of new or updated operating systems. Most flags can be modified while the director port
remains online; however, the hosts connected to those ports may need to be restarted for the operating
system to properly detect and otherwise manage the change in settings. The requirement for restarting is
especially true for director-level changes that cause modification to SCSI inquiry data, such as the SPC-2 or
OS2007 director flags.




For configurations where changes to flags are required for some, but not all, hosts connected to a common
set of director ports, modifying the flags at the HBA level will ensure the smallest impact to the existing
environment. The tradeoff for setting director flags at the HBA level is the additional overhead for
managing the settings at a more granular level, which can be problematic in large environments.

It is also important to ensure in multipathed or clustered environments that all paths for all cluster nodes
have the same director flag settings. Configuring director flags inconsistently across ports and HBAs, or in
a piecemeal fashion in an effort to avoid system reboots, is not supported and could lead to instability in
the environment.

Recommendations regarding the ability to modify director flags without impact to Windows or other
operating systems are outside of the scope of this paper. For the most up-to-date and detailed resources
regarding director configuration changes and their impact on specific operating systems, please see the
ESM or query the EMC support knowledgebase available on Powerlink (http://powerlink.emc.com).
SCSI-3 persistent group reservations
Functionality new to Windows Server 2008 failover clustering is the use of SCSI-3 persistent group
reservations. Persistent reservations allow for multiple hosts to register unique keys with a storage array
through which a persistent reservation can be taken against a specified LUN. Persistent reservations
introduce several improvements over the previously used SCSI-2 reserve/release commands utilized by
MSCS with Windows Server 2003, including the ability to maintain reservations such that a shared LUN is
never left in an unprotected state.

For a Symmetrix to support SCSI-3 persistent reservations, and subsequently support Windows Server
2008 clustering, a logical device level setting must be enabled on each LUN requiring persistent reservation
support. This setting is commonly referred to as the PER bit, or the SCSI3_persist_reserv attribute from a
Solutions Enabler perspective. The SCSI3_persist_reserv attribute can be enabled via configuration
changes commonly done with the Solutions Enabler command line interface (CLI) symconfigure. The
setting can also be managed via SMC or ControlCenter.

Metavolumes require that all member devices have the same attributes prior to forming the metadevice.
With this in mind, it is necessary to set the SCSI3_persist_reserv attribute against any hypervolumes
intended to form metavolumes in the future. For existing metavolumes, this attribute needs only to be set
on the metavolume head device when making configuration changes using Solutions Enabler.

The following is an example of using the symconfigure CLI command to set SCSI-3 persistent reservation
support for a contiguous range of devices:

symconfigure -sid 94 -cmd "set dev 42D:430 attribute=SCSI3_persist_reserv;" commit

When the persistent reservation attribute is enabled, the Symmetrix is required to store and otherwise query
the reservation status of the device. Because of this, it is generally recommended to only enable persistent
reservation support for the devices that require this functionality. If the environment is dynamic enough
such that enabling the persistent reservation attribute on demand creates significant administrative
overhead, it is possible to set the attribute on all devices.
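
To confirm the setting on a device after the configuration change, the device attributes can be reviewed
with Solutions Enabler, for example with the following command. The device number is reused from the
example above, and the exact label of the persistent reservation field in the output varies by Solutions
Enabler version:

symdev -sid 94 show 42D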
LUN mapping and masking
Symmetrix arrays manage the presentation of devices to operating systems through front-end director ports
via a combination of mapping and masking functionality. Mapping Symmetrix devices is the process by
which a LUN address is assigned to a specific device on a given front-end director. Should masking be
disabled on a director port (VCM or ACLX director flags set to disabled), any hosts zoned to or directly
attached to that director will have access to all mapped devices. The LUN address assigned to the device is
the LUN number by which the host will discover and access the storage. For example, if the LUN address
is defined on the director as F0 in hex (240 decimal), the host will discover the device as LUN 240.



In switched environments, where multiple hosts commonly access the same front-end directors, an
additional level of device presentation granularity can be accommodated with the Symmetrix masking
functionality. Masking operations allow for the restriction of access for a given WWN (defined on an
HBA) to mapped devices regardless of the physical or zoned connectivity in the environment. Masking
records define which WWN is allowed to access which Symmetrix devices on which director ports.
Masking operations also allow for the modification of LUN addresses as seen by the host to provide a more
predictable, uniform approach.

In iSCSI environments it is required for masking to be enabled on the Symmetrix front-end directors.
iSCSI connectivity to a Symmetrix requires the iSCSI Qualified Name (IQN) to have masking entries that
subsequently allow an HBA or NIC to log in to a front-end director.

One exception to the rule that masking prevents access to all mapped devices involves the VCM or ACLX
device. The VCM or ACLX flag is a special device attribute that allows a LUN, when mapped, to be
viewed by hosts regardless of masking entries. In older versions of the Symmetrix operating environment,
Enginuity, the VCM device was the repository where masking records were maintained. With newer
versions of Enginuity the VCM or ACLX device is simply a gatekeeper that can be used for the initial
configuration of the Symmetrix from a host.

The VCM or ACLX device need not be mapped to Symmetrix front-end adapters or otherwise presented to hosts
in order to perform masking operations. Masking can be performed through regular gatekeeper devices.
Additionally the VCM or ACLX device, when presented to potential cluster nodes undergoing cluster validation
with Windows Server 2008, may cause validation warnings. These warnings can be avoided by removing the
VCM or ACLX device from being mapped to the front-end directors.

When mapping and masking Symmetrix devices to a host, it is important to note the Windows maximum
limit of 255 usable LUNs per HBA target. While this number applies to the total number of addressable
LUNs per target, it also impacts the LUN numbers through which Windows allows access for devices. The
LUN address range for Windows is from 0 to 254. Should a LUN have an address higher than 254, even if
the operating system is not accessing more than 255 total LUNs on that target, the device will not be
detected for use by the operating system. To some degree this limitation can be managed by the HBA
driver. For instance the Emulex SCSIPort driver with Windows Server 2003 allows for higher LUN
addresses to be managed (up to 512) via an adjusted LUN mapping. However, with Windows Storport and
HBA miniport drivers, the 254 LUN address limit is enforced by the operating system.

A Symmetrix can support a much higher number of mapped devices per director, well beyond 255.
Therefore the ability to modify LUN addresses via masking can be an important feature in large
environments. With older versions of Solutions Enabler and Enginuity, a "lun offset" feature was used to
adjust the starting LUN address for a given HBA and director combination. The lun offset functionality,
however, has become obsolete with newer code revisions and is otherwise replaced by Dynamic LUN
Addressing (DLA). DLA allows for Symmetrix devices, regardless of their LUN address on the front-end
director, to start at address 0 for a given HBA and director port pairing. In addition, DLA can be used to
directly specify a LUN address for a given device. The Symmetrix V-Max, with the use of Auto-
provisioning Groups, not only automates director LUN mapping but also utilizes DLA to simplify LUN
addressing. For more information regarding dynamic LUN addressing, please see the Symmetrix Dynamic
LUN Addressing Technical Note available on Powerlink.
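
As an illustration of Auto-provisioning Groups with Solutions Enabler 7.x, the following commands sketch
how a storage group, initiator group, and port group might be combined into a masking view. The group
names, device range, director ports, and WWN shown are hypothetical values reused from other examples
in this paper; creating the view maps and masks the devices in one step and assigns dynamic LUN
addresses automatically:

symaccess -sid 94 create -name win_sg -type storage devs 42D:430
symaccess -sid 94 create -name win_ig -type initiator -wwn 10000000c96d0a50
symaccess -sid 94 create -name win_pg -type port -dirport 7F:0,8F:0
symaccess -sid 94 create view -name win_view -sg win_sg -ig win_ig -pg win_pg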

The LUN address value is very different from the Symmetrix device number. The Symmetrix device number is
assigned to a Symmetrix addressable volume upon its creation and will remain the same independent of the LUN
address used across directors.

When using multiple paths to a Symmetrix device, or when presenting shared storage to a cluster, it is
recommended to ensure the LUN address is the same across all given directors. This guideline is more for
ease of troubleshooting and not a hard requirement, as it is possible for LUNs to be multipathed to a
Windows host or presented to multiple clustered hosts with different LUN addresses.



Connectivity recommendations
It is recommended to configure at least two HBAs per Windows server with the goal of presenting multiple
unique paths to the Symmetrix system. The benefits of multiple paths include high availability from a host,
switch, and Symmetrix front-end director perspective, as well as enhanced performance.
From a high-availability perspective, given the possibility for director maintenance, each Windows server
should have redundant paths to multiple front-end directors. For a Symmetrix V-Max, this can be
accomplished by connecting to opposite even and odd directors within a V-Max Engine, or across directors
within multiple V-Max Engines (recommended when multiple engines are available). In the case of a
Symmetrix DMX array this can be accomplished by ensuring a given host is connected to different
numbered directors (director 4a and director 13a for example).
For each HBA port at least one Symmetrix front-end port should be configured. For I/O intensive hosts in
the environment, it could prove beneficial to connect each HBA port to multiple Symmetrix front-end
ports. Connectivity to the Symmetrix front-end ports should consist of first connecting unique hosts to port
0 of the front-end directors before connecting additional hosts to port 1 of the same director and processor.
This methodology for connectivity ensures all front-end directors and processors are utilized, providing
maximum potential performance and load balancing for I/O intensive operations.
As port 0 and port 1 of a given director number and letter, or "slice," share a given processor complex, it is
not recommended to connect the same HBAs for a given host to both port 0 and port 1 of the same director.
Ideally individual hosts should be connected to port 0 or port 1 from different directors. For Windows
Server 2008 failover clustering environments it is currently required to ensure a given HBA is not
presented to both port 0 and port 1 from the same front-end director processor. For example, to zone, map,
and mask devices from director 7A port 0 and director 7A port 1 to the same HBA is not supported in a
Windows Server 2008 failover cluster. At the time this paper was published, the SCSI-3 persistent
reservations of a given initiator are maintained at the front-end processor level. Because port 0 and port 1
of a given director slice share the same processors it is not supported to have an application that utilizes
SCSI-3 persistent reservations access a LUN on an HBA sharing both ports.
Figure 2 uses a physical view of a Symmetrix V-Max Engine to provide a depiction of the aforementioned
recommendations.

Figure 2. Connectivity recommendations for a Symmetrix V-Max Engine



Multipathing
Configurations with multiple paths to storage LUNs will require a path management software solution on
the Windows host. The recommended solution for multipathing software is EMC PowerPath, the
industry-leading path management software with benefits including:
Enhanced path failover and failure recovery logic
Improved I/O throughput based on advanced algorithms such as the Symmetrix Optimization load
balancing and failover policy
Ease of management including a Microsoft Management Console (MMC) GUI snap-in and CLI
utilities, such as powermt (shown in the example after this list), to control all PowerPath features
Value-added functionality including Migration Enabler, to aid with online data migration, and LUN
encryption utilizing RSA technology
Product maturity with proven reliability over years of development and use in the most demanding
enterprise environments
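
Assuming PowerPath is installed, the powermt CLI can be used to verify that all paths to the Symmetrix are
visible and to apply the Symmetrix Optimization policy. The following is a minimal sketch; the device and
class arguments may need to be adjusted for a given environment:

powermt display dev=all
powermt set policy=so dev=all class=symm
powermt save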

While PowerPath is recommended, an alternative is the use of the native Multipath I/O (MPIO) capabilities
of the Windows operating system. The MPIO framework has been available for the Windows operating
system for many years; however, it was not until the release of Windows Server 2008 where a generic
device specific module (DSM) was provided by Microsoft to manage Fibre Channel devices. For more
information regarding the Windows MPIO DSM implementation, please see the Multipath I/O Overview
article at http://technet.microsoft.com/en-us/library/cc725907.aspx.
Should native MPIO be chosen as the method for path management, the default failover policy with the
RTM release of Windows Server 2008, for devices that do not report ALUA support such as the
Symmetrix, is Fail Over Only. For performance reasons, especially in I/O intensive environments, it will
be beneficial to modify this default behavior to one of the other options, including, but not limited to, Least
Queue Depth.
The load-balance policy can be found under the MPIO tab within the Properties of each physical disk
resource in the Windows Device Manager, as depicted in Figure 3.




Figure 3. MPIO load-balance policy for Windows Server 2008 RTM

With Windows Server 2008 R2, the default load-balance policy for non-ALUA reporting devices, including
Symmetrix, has changed from Fail Over Only to Round Robin. MPIO also has an additional load-balance
policy with Windows Server 2008 R2 called Least Blocks. To help with managing MPIO more efficiently,
Windows Server 2008 R2 has an enhanced mpclaim CLI with the ability to modify the default load-balance
policy at either a device, target hardware ID (such as Symmetrix), or global DSM level. The following
section gives an example of how to set the default load-balancing policy at the target hardware ID level
using the mpclaim CLI.

To view the target hardware identifier:

mpclaim /e

"Target H/W Identifier " Bus Type MPIO-ed ALUA Support
-----------------------------------------------------------------------
--------
"EMC SYMMETRIX " Fibre NO ALUA Not
Supported

To claim all devices for the Microsoft MPIO DSM based on target hardware ID (if not already done),
please do the following. Note that the spaces are required within the EMC Symmetrix hardware ID string.

mpclaim -n -i -d "EMC SYMMETRIX "

Success, reboot required.

To set the load-balance policy to least queue depth (4 in this example) based on target hardware ID:




mpclaim -l -t "EMC SYMMETRIX " 4

To view target-wide load-balance policies after being set:

mpclaim -s -t

"Target H/W Identifier " LB Policy
-----------------------------------------------------------------------
--------
"EMC SYMMETRIX " LQD

With the preceding commands completed all existing and any future Symmetrix devices discovered by
MPIO will have a load-balance policy of least queue depth.

Additional information regarding connectivity and multipathing can be found in the EMC Host
Connectivity Guide for Windows.
Symmetrix storage
Understanding hypervolumes
To provide data storage, a Symmetrix system's physical devices must be configured into logical volumes
called hypervolumes. Hypervolumes are the unit of storage at which RAID protection is defined. A given
open systems, Fixed Block Architecture (FBA) hypervolume can have a RAID 1, RAID 5, or RAID 6
configuration. Cache-only hypervolumes, such as thin devices or virtual (TimeFinder/Snap) devices, are
unique in that they do not have a direct RAID protection. RAID protection for the physical storage used by
cache-only devices is defined within the pools that provide the storage area for cache-based hypervolumes.

Symmetrix systems allow a maximum of 512 logical volumes on each physical drive, depending on the
hardware configuration and the type of RAID protection used. Prior to Enginuity 5874 on the Symmetrix
V-Max, the largest single hypervolume that could be created on a Symmetrix was 65,520 cylinders,
approximately 59.99 GB. With Enginuity version 5874, a hypervolume can be configured up to a
maximum capacity of 262,668 cylinders, or approximately 240.48 GB, about four times as large as with
Enginuity version 577x.

Figure 4 shows four disks with hypervolumes configured in a logical-to-physical ratio of 8 to 1.


Figure 4. Symmetrix physical disks with hypervolumes
In general, fewer, larger hypervolumes are recommended where applicable in a Symmetrix environment;
however, to ensure the best possible performance experience, large hypervolumes should be carefully
considered in a traditional, fully provisioned environment. For example, to assign a single large
hypervolume that is RAID 1 protected would only allow for two physical spindles to support the workload
intended for that LUN. Should the RAID protection for a single large hypervolume be RAID 5 7+1,
however, then this concern is lessened as eight disks would be available to service the workload.
Additionally, striped metavolumes, outlined in the next section, provide the ability to spread a given
workload across a larger number of physical spindles.

Large hypervolumes provide additional value in Virtual Provisioning environments. In these
environments, administrators may strive to overprovision the thin pool as a means to improve storage
utilization. Furthermore, Virtual Provisioning deals with the performance needs by utilizing a striping
mechanism across all data devices allocated to the thin pool. Performance limits can be mitigated by the
total number of spindles allocated to the thin pool. Additional information about Virtual Provisioning is
provided later.
Understanding metavolumes
A metavolume is an aggregation of two or more Symmetrix hypervolumes presented to a host as a single
addressable device. Creating metavolumes provides the ability to define host volumes larger than the
maximum size of a single hypervolume. A single Symmetrix system metavolume can contain a maximum
of 255 hypervolumes. When combining the maximum hypervolume size with the maximum number of
metavolume members, the largest addressable single LUN is 61.32 TB (240.48 GB * 255 members) for a
Symmetrix V-Max and 15.29 TB (59.99 GB * 255 members) for a DMX-3/DMX-4.

Configuring metavolumes helps to reduce the number of host-visible devices as each metavolume is
counted as a single logical volume. Devices that are members of the metavolume, however, are counted
toward the maximum number of host-supported logical volumes for a given Symmetrix director.
Metavolumes contain a head device, which provides control information, member devices, and a tail
device. All devices defined for the metavolume are used to store data. Metavolumes also provide the
mechanism by which a host addressable LUN can be expanded. Metavolumes allow for additional
members to be added for the purposes of presenting additional storage within an existing LUN. The
section "Volume expansion" on page 28 provides additional details.

Figure 5 shows a metavolume comprised of four hypervolumes on different physical devices.

Figure 5. Symmetrix metavolume
Metavolume configurations
Metavolumes provide two ways to access data: Concatenated and striped.

Concatenated metavolumes organize addresses for the first byte of data at the beginning of the first volume,
and continue sequentially to the end of the volume. Once the first hypervolume is full, data is then written
to the next member device, again sequentially, beginning with the first byte until the end of the volume.

Figure 6 shows a concatenated metavolume.




Figure 6. Concatenated metavolume
Striped metavolumes organize addresses across all members by using addresses that are interleaved
between hypervolumes. The interleave or striping of data across the metavolume is done at a default stripe
depth of 960K (one or two cylinders depending on the Enginuity version). Data striping benefits
configurations with random operations by avoiding stacking I/O on a single hypervolume, spindle, and
director. In this fashion data striping helps to balance the I/O activity between the drives and the
Symmetrix system directors.

Figure 7 shows a striped metavolume.

Figure 7. Striped metavolume
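
As a sketch of how a striped metavolume might be formed with the symconfigure CLI, assuming four
existing, unmapped hypervolumes are available (the device numbers and Symmetrix ID below are
hypothetical):

symconfigure -sid 94 -cmd "form meta from dev 0100, config=striped;
add dev 0101:0103 to meta 0100;" commit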
Gatekeepers
Low-level I/O commands executed using Solutions Enabler SYMCLI are routed to the Symmetrix array by
way of a Symmetrix storage device that is specified as a gatekeeper. The gatekeeper device allows
SYMCLI commands to retrieve configuration and status information from the Symmetrix array without
interfering with normal Symmetrix operations. A gatekeeper is not intended to store data and is usually
configured as a small device (typically six cylinders, or 2.8 MB). The gatekeeper must be accessible from
the host where the commands are being executed.

Gatekeepers should be dedicated to the specific host that will be issuing commands to control or otherwise
query a Symmetrix. In Microsoft failover clustering environments it is recommended to not cluster
gatekeeper devices and to present unique gatekeepers to each cluster node as required.

When presented to a Windows host, there is no requirement to write a disk signature to or otherwise format
a gatekeeper device. It will automatically become available for use by the host to communicate with the Symmetrix.



Detailed information regarding gatekeeper devices can be found in the EMC Solutions Enabler Symmetrix
Array Management CLI Product Guide available on Powerlink.
RAID options
Symmetrix systems support varying levels of RAID protection. RAID protection options are configured at
the physical drive level based on hypervolumes. Multiple types of RAID protection can be configured for
different datasets in a Symmetrix system. Table 2 shows the levels of RAID protection available for open
systems hosts like Microsoft Windows.

Table 2. RAID protection options
RAID option    Provides the following    Configuration considerations
RAID 1 The highest level of performance and availability for
all mission-critical and business-critical applications.
Maintains a duplicate copy of a volume on
two drives:
If a drive in the mirrored pair fails, the Symmetrix
system automatically uses the mirrored partner
without interruption of data availability.
When the drive is (nondisruptively) replaced, the
Symmetrix system re-establishes the mirrored pair
and automatically re-synchronizes the data with
the drive.

Withstands failure of a single
drive.
RAID 1 provides 50% data
storage capacity.
For a single write operation from
a host, RAID 1 devices will
perform two disk I/O operations
(a write to each mirror member).
RAID 5 Distributed parity and striped data across all drives in
the RAID group. Options include:
RAID 5 (3 + 1) Consists of four drives with
parity and data striped across each device.
RAID 5 (7 + 1) Consists of eight drives with
data and parity striped across each device.

RAID 5 (3 + 1) provides 75%
data storage capacity.
RAID 5 (7 + 1) provides 87.5%
storage capacity.
Withstands failure of a single
drive.
For a single random write
operation from a host, RAID 5
devices will perform four disk
I/O operations (two reads and two
writes).

RAID 6 Striped drives with double distributed parity
(horizontal and diagonal). Options include:
RAID 6 (6 + 2) Consists of eight drives with
dual parity and data striped across each device.
RAID 6 (14 + 2) Consists of 16 drives with
dual parity and data striped across each device.

RAID 6 (6 + 2) provides 75%
data storage capacity.
RAID 6 (14 + 2) provides 87.5%
storage capacity.
Withstands failure of two drives.
For a single random write
operation from a host, RAID 6
devices will perform six disk I/O
operations (three reads and three
writes).


Disk types
Along with the aforementioned RAID technologies, Symmetrix storage can be configured across a wide
range of disk technologies. Symmetrix storage systems support high-capacity, low-cost SATA II drives,
high-performing 10k rpm and 15k rpm Fibre Channel drives, as well as ultra-high-performance solid state
Enterprise Flash Drives. Supported drive types, capacities, and speeds are continually changing as new
technology becomes available. Please see Powerlink for the most up-to-date lists of supported drive types
and capacities for Symmetrix systems.
Virtual Provisioning
Virtual Provisioning, generally known in the industry as thin provisioning, enables organizations to
enhance performance and increase capacity utilization in their Symmetrix storage environments. Virtual
Provisioning features provide:

Simplified storage management Allows storage to be provisioned independent of physical
constraints and reduces the steps required to accommodate growth.
Improved capacity utilization Reduces the storage that is allocated but unused.
Simplified data layout Includes automated wide striping that can provide similar and
potentially better performance than standard provisioning.

Symmetrix thin devices are host-accessible devices that can be used in many of the same ways that
Symmetrix devices have traditionally been used. Unlike regular host-accessible Symmetrix devices, thin
devices do not need to have physical storage completely allocated at the time the device is created and
presented to a host. The physical storage that is used to supply disk space to thin devices comes from a
shared storage pool called a thin pool. The thin pool is comprised of devices called data devices that
provide the actual physical storage to support the thin device allocations.

When a write is performed to a part of the thin device for which physical storage has not yet been allocated,
the Symmetrix allocates physical storage from the thin pool that covers that portion of the thin device.
Enginuity satisfies the requirement by providing a block of storage from the thin pool called a thin device
extent. This approach allows for on-demand allocation from the thin pool and reduces the amount of
storage that is consumed or otherwise dedicated to a particular device. When more storage is required to
service existing or future thin devices, data devices can be added to the thin storage pools. Virtual
Provisioning data devices are supported on all RAID types. However, thin pools cannot be protected by a
mixture of RAID types.

The architecture of Virtual Provisioning creates a naturally striped environment where the thin extents are
allocated across all volumes in the assigned storage pool. By striping the data across all devices within a
thin storage pool, a widely striped environment is created. The larger the storage pool for the allocations,
the greater the number of devices that can be leveraged for a thin device. It is this wide and evenly
balanced striping across a large number of devices in a pool that allows for optimized performance in the
environment.

If metavolumes are required for the thin devices in a particular environment, it is recommended that the
metavolume be concatenated rather than striped since the thin pool is already striped using thin extents.
Concatenated metavolumes also support fast expansion capabilities, as new metavolume members can
easily be appended to the existing concatenated metavolume. This functionality may be applicable when
the provisioned thin device has become fully allocated at the host level, and it is required to further increase
the thin device to gain additional space. Striped metavolumes are supported with Virtual Provisioning and
there may be workloads that will benefit from multiple levels of striping.
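
As an illustration of the provisioning flow described above, the following symconfigure sketch creates thin
devices and binds them to an existing, enabled thin pool. The Symmetrix ID, pool name, device count, and
size are hypothetical, and the exact syntax can vary by Solutions Enabler version:

symconfigure -sid 94 -cmd "create dev count=4, size=4602 cyl, emulation=FBA,
config=TDEV, binding to pool=WinThinPool;" commit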

For additional information on the use of Virtual Provisioning with Windows operating systems, please see
the white papers "Implementing Virtual Provisioning on EMC Symmetrix DMX with Microsoft Exchange
2007" and "Implementing Virtual Provisioning on EMC Symmetrix DMX with Microsoft SQL Server 2005,"
both available on Powerlink.




Discovering storage
Once the appropriate steps have been taken from the connectivity, zoning, volume creation, mapping, and
masking perspectives, devices can be discovered by the operating system. In most cases the discovery of
new devices can be done by performing a rescan operation from the disk management console or from the
diskpart command line interface as depicted in Figure 8.


Figure 8. Diskpart rescan
In some instances the discovery of the initial target requires either a host reboot or an HBA reset. Once the
target and first device(s) are discovered, the host should not need to be rebooted and the HBA should not
need to be reset in order to discover additional storage. A reset should not be issued on a host that is already
accessing in-use storage devices through the HBA to be refreshed, as this may interrupt access to those
devices.

Should a disk management console or diskpart rescan not prove successful in discovering new devices, a
plug and play rescan can also be issued. Plug and play rescans can be executed from Windows Device
Manager using the Scan for hardware changes option. With Windows Server 2003, the devcon CLI, a
free download from Microsoft, can also be used to perform these kinds of rescans. EMC also offers ways
to perform this operation with the Symmetrix Integration Utilities (SIU). Among its functions the SIU CLI
symntctl has a rescan function to assist in discovering storage. SIU is available as a free download from
Powerlink and is now included with Solutions Enabler 7.0 or later.

Rescan operations for storage are generally not synchronous with regards to the completion of the rescan
command that initiated the discovery. What this means is that a rescan may return complete; however, the
actual discovery and surfacing of the LUNs to the operating system may happen several seconds after the
command finishes. This behavior is important to note when scripting operations that surface LUNs and
then perform a subsequent action against those LUNs. In this case it may be necessary to sleep, loop, or
provide additional checks in scripts to allow all LUNs to be discovered and otherwise be available to the
operating system.
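
When scripting such operations, a short delay or retry loop after the rescan helps ensure the LUNs are
available before the next step acts on them. A minimal batch file sketch follows; the script path and delay
value are arbitrary:

echo rescan > C:\temp\rescan.txt
diskpart /s C:\temp\rescan.txt

rem Pause roughly 15 seconds to allow plug and play to surface the new LUNs
ping -n 16 127.0.0.1 > nul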

Once LUNs are discovered they are given a "physicaldrive" or disk number generally based on the order
of discovery by the operating system based on LUN address. There are several methods to ensure the
correct Symmetrix devices are being seen as disks by the host. One method is to use the EMC inq utility
available at ftp://ftp.emc.com/pub/symm3000/inquiry. The inq CLI uses SCSI inquiry information to list
Symmetrix specific information, including Symmetrix serial number and device numbers associated with a
given physical drive. Figure 9 gives an example of using the inq utility with the sym_wwn option.





Figure 9. Inq utility

In addition to inq, Solutions Enabler can be installed on the host for the purposes of querying physical drive
specific information. Similar to the inq utility, Solutions Enabler includes a syminq CLI that performs a
SCSI inquiry collection and returns the current disk information. Along with syminq, SE provides a
sympd CLI that can return additional Symmetrix specific information associated with a physical drive. It
should be noted that the drive associations used by the sympd command are cached within the SE (symapi)
database. To update this cached information a symcfg discover command should be run if any changes
were made to the drives presented to the host. At the time of publication, a symcfg sync command does
not update the physical drive specific information in the symapi database.
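
For example, after presenting new devices to a host with Solutions Enabler installed, the following sequence
refreshes the symapi database and then lists the physical drive to Symmetrix device associations (the
Symmetrix ID is hypothetical):

symcfg discover
syminq
sympd list -sid 94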

In environments where masking is enabled, it is possible for the VCM or ACLX device to be mapped to
director ports. As previously discussed, this means the VCM or ACLX device will be available to all hosts
connected to those directors. In multipathed environments, where EMC PowerPath is used, the VCM or
ACLX device is the only Symmetrix LUN where PowerPath will not automatically manage multipathing.
With this in mind it should be expected that the VCM or ACLX device will be seen multiple times by the
operating system.
Windows Server 2008 SAN Policy
Functionality new to Windows Server 2008, referred to as SAN Policy, allows administrators to control
how newly discovered storage devices are managed by the operating system. With Windows Server 2003,
new disks discovered by Windows would automatically be brought online for potential use by the operating
system. With Windows Server 2008, the SAN Policy allows administrators to control the way disks are
brought online. Specifically, the SAN Policy determines whether new disks are brought online or remain
offline, and whether they are marked as read-only or read/write.




The specific options offered by the SAN policy are shown in Table 3.
Table 3. SAN Policy options
SAN Policy Description
Offline Shared

Offline Shared is the default policy for Windows Server 2008 Enterprise and Datacenter
editions. This policy causes any storage discovered on a shared bus (FC,
SCSI, iSCSI, SAS, and so on) to be placed offline and read-only. Any storage
discovered on a non-shared bus, as well as the boot disk, will be brought online
read/write.

Symmetrix devices presented to a Windows Server 2008 host with the Offline
Shared policy will be placed offline and read-only. The only exception would be
the boot device in a boot from SAN configuration.
Online

This policy will bring all discovered storage devices online and read/write
automatically.
Offline All In this case all disks, except for the boot disk, will be marked offline and read-only.

To modify the policy the diskpart CLI can be used. Specifically the SAN option within diskpart can be
used to view and change the policy. The full syntax of the SAN command can be obtained by typing "help
san" from a diskpart command prompt.
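
For example, to display the current policy and then allow all newly discovered disks to be brought online
and read/write automatically, the following diskpart commands could be used (the OnlineAll policy may
not be appropriate for all environments, particularly clusters):

diskpart
DISKPART> san
DISKPART> san policy=OnlineAll
DISKPART> exit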

The state of the disks can be managed from either the disk management console or the diskpart CLI.
Changing online or offline status for disks from the disk management console will also affect the read/write
state of the device. For example, an online from disk management will also read/write enable the disk, and
conversely an offline of a disk will subsequently mark the device as read-only automatically. The diskpart
CLI offers more granular control, as an offline or online (using the online disk syntax) does not modify
the read/write state of the device. To modify the read/write state of the disk, the disk specific setting must
also be modified via the attributes disk diskpart command. Figure 10 provides an example of how to
online and read/write a specific disk using diskpart.


Figure 10. Diskpart command to online and read/write a disk
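As a minimal sketch of the sequence shown in the figure, assuming the target is disk 2:

DISKPART> select disk 2
DISKPART> online disk
DISKPART> attributes disk clear readonly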



Automount
Windows Server 2003 and 2008 include the ability to automatically mount newly discovered basic disk
storage to the next available drive letter upon discovery. For Windows Server 2003, this setting is disabled
by default, while for Windows Server 2008 this setting is enabled by default. To view or otherwise modify
this setting the diskpart CLI can be used, specifically using the automount command. The mountvol CLI
can also be used to disable or enable automounting of new devices. In most SAN environments it is not
necessary to have Windows automatically mount storage, as applications or scripts are used to manage the
device state. With this said, it may not be necessary to change the automount setting unless otherwise
recommended by the application vendor or as required due to unwanted behavior in a specific environment.
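As a sketch, the setting can be viewed and disabled from a diskpart prompt, or disabled with mountvol:

DISKPART> automount
DISKPART> automount disable
C:\> mountvol /N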
Initializing and formatting storage
Newly presented and previously unused storage will display as not initialized when marked as online to
Windows. The act of initializing a disk performs several functions including the assignment of a disk
signature, boot record, and partition table as written to the disk. Prior to initializing the storage, the disk
type, be it Master Boot Record (MBR) or GUID partition table (GPT), needs to be determined.
Additionally, whether the disks are basic or dynamic needs to be considered and defined based on storage
requirements. The following sections outline the definitions and capabilities of MBR and GPT style basic
or dynamic disk storage with Windows Server 2003 and 2008.
Disk types
Master Boot Record (MBR)
The MBR partitioning has historically been the most commonly used disk type on the Windows platform.
MBR disks create a 512-byte record in the first sector of a disk containing boot information, disk signature,
and a table of primary partitions. The following list highlights the main features and limitations of MBR
disks on Windows operating systems:

Support up to four primary partitions.
Support for more than four partitions requires an extended partition in which logical drives are
created.
Support 32-bit entries for partition length and partition starting address, which limits the maximum
size of the disk to 2^32 blocks (512 bytes) or 2 TB.
Contain a 32-bit, eight-character hexadecimal signature.
Partition GUIDs for MBR basic disk volumes are not stored on disk and are otherwise assigned by the
operating system and maintained in the registry.
Support with Windows Server 2003 and 2008 standalone or clustered hosts.
GUID partition table (GPT)
The GPT disk format was designed to overcome the limitations in the MBR style of partitioning. GPT
disks start with a protective MBR in the first sector of the disk. The protective MBR is designed to prevent
operating systems that do not recognize the GPT format from assuming the disk is not partitioned. After
the protective MBR, GPT information is maintained in the next 32 sectors of the disk. This information
includes the primary GPT header and self-identifying partition entries. GPT disks also maintain a
redundant copy of its information at the end of the disk and have CRC32 checksums for added integrity.
The following list highlights additional features and limitations of GPT disks on Windows operating
systems.

Support up to 128 partitions
Support 64-bit partition table entries, which in theory can produce disks or partitions that are zettabytes
(2^64 blocks) in size.



Windows limits supportable disk sizes to 18 exabytes where raw partitions are used and 256 terabytes
for NTFS formatted partitions.
Maintain a 128-bit Globally Unique ID (GUID) for each disk
Maintain a 128-bit GUID for each partition on a disk.
Supported with Windows Server 2003 SP1 or later
Windows Server 2003 clustering support requires a hotfix (http://support.microsoft.com/kb/919117)
Full support with Windows Server 2008
Basic disks
Basic disks utilize the native partitioning capabilities of the MBR and GPT formats. MBR basic disks will
support primary partitions, extended partitions and logical drives. GPT basic disks will support the
partition table entries native to this format. Volumes on MBR or GPT basic disks cannot span across
multiple disks, but can be expanded in-place assuming there is space available on the disk where the
partition resides. Basic disks are also natively supported in Microsoft clusters.
Dynamic disks
The native Microsoft logical disk manager (LDM) also offers the ability to create so-called dynamic disks.
Dynamic disks maintain a 1 MB private region on each disk to store the LDM database. The LDM
database stores the relevant information regarding dynamic disks in the system including volume types,
offsets, memberships, and drive letters for each volume. Dynamic disks can be either MBR- or GPT-based
and include the capability to distribute filesystems across multiple disks as presented to the OS. Dynamic
disks, while providing for enhanced functionality, are not supported in Microsoft clusters when using the
base LDM. Dynamic disks can be used to create several types of volumes in non-clustered environments
including simple, spanned, striped, mirrored, and RAID 5.

Simple
A simple dynamic volume is a volume that resides on a dynamic disk but does not span across multiple
disks. Simple volumes can be created from free space on a dynamic disk, or by converting a basic disk
with existing partitions. The value of a simple volume is the ability to subsequently create a spanned
volume (assuming it is not a system or boot partition) or a mirrored volume. A simple volume cannot be
used to create a striped or RAID 5 volume.

Spanned
A spanned dynamic volume is a concatenation of multiple volumes across one or multiple dynamic disks.
Spanned volumes write data sequentially to each volume, filling one before moving onto the next volume
in the spanned set. The value of a spanned volume is the ability to grow a filesystem across multiple
dynamic disks non-disruptively. A spanned volume can be created or expanded across two to 32
dynamic disks, but is not fault-tolerant. Should any one member of the spanned volume become
unavailable, the entire volume will go into a failed state.

Striped
A striped dynamic volume, as it sounds, is a dynamic volume that stripes a filesystem across multiple disks.
The stripe depth (amount of data written to one disk before moving onto the next in the stripe) is 64 KB. A
striped dynamic volume can be formed with anywhere between two and 32 dynamic disks. Once created, a
striped volume cannot be expanded with the base Windows LDM. A striped volume is not fault-tolerant
and is considered to be a RAID 0 device. Should any one member of the striped volume become
unavailable, the entire volume will go into a failed state.

Mirrored
Mirrored dynamic volumes are volumes synchronized across two physical disks. Mirrored dynamic
volumes are considered RAID 1 protected and provide for fault-tolerance should one of the disks fail. A
mirrored volume will require twice the amount of storage for the same amount of usable space. Mirrored



dynamic disks can be created or broken online without disruption to the availability of the volume. Once
created, a mirrored volume cannot be extended.

RAID 5
RAID 5 dynamic volumes are fault-tolerant volumes that contain data and parity striped across a set of at
least three and up to 32 dynamic disks. The parity space required will consume an amount of storage equal
to one full member of the RAID 5 set. Should any one disk fail, the RAID 5 volume will remain online.
Data and parity can be rebuilt from the remaining members upon recovery of the failed disk. Once created,
a RAID 5 volume cannot be extended.
Veritas Storage Foundation for Windows
The dynamic disk functionality and restrictions listed above apply to the base LDM included with
Windows Server 2003 and 2008 operating systems. With Veritas Storage Foundation for Windows (SFW),
dynamic disk support and capabilities are expanded to include additional functionality. The following list
details some but not all of the additional functionality provided by SFW with dynamic disks over and above
the base Windows LDM:

Simple volumes can be dynamically converted to striped volumes.
Spanned volumes can support up to 256 dynamic disks.
Mirrored volumes can be extended and striped to create RAID 0 + 1 devices. Mirrored volumes can
also be assigned a preferred mirrored disk or plex.
Striped volumes can be mirrored and extended to create RAID 0 + 1 devices. Striped volumes can also
be dynamically modified to change stripe characteristics including change to a concatenated volume.
Stripe depth can also be controlled.
RAID 5 volumes can be extended.
Multiple dynamic disk groups are supported.
Microsoft clustering is supported with dynamic disks.

Additional functionality provided by Veritas Storage Foundation for Windows can be found on the
Symantec website.
Disk type recommendations
In most environments, MBR basic disks with a single partition fulfill the majority of storage requirements.
MBR basic disks offer a disk type supported by all Microsoft and third-party applications. The
functionality offered by dynamic disks, including striped volumes, RAID protection, and volume growth, is
somewhat mitigated because the same functions can be performed more efficiently within the Symmetrix array. Additionally, the
restriction that dynamic disks are not supported with Microsoft clustering when using the base LDM
prohibits their use in many environments.

The GPT disk type is generally reserved for environments that require volumes larger than 2 TB in size.
While the GPT disk type, upon first release on Windows platforms, did have some support limitations,
most of those limits have been removed by both Microsoft and third-party applications. In the future, GPT-
based disks should become the standard partitioning format. Before utilizing GPT disks, ensure the disk
type is supported by the required Microsoft or third-party applications.
Large volume considerations
While GPT disks allow for larger disk sizes, volumes that are multiple terabytes in size should be created
and used with some degree of caution. The main concerns regarding large volumes are generally
performance-related or tied to the ability to perform administrative tasks in a timely manner. Common
administrative tasks where very large volumes become a concern include backup and restore activities,
defragmentation, or filesystem verification tasks like chkdsk. The amount of time needed to perform
administrative tasks like chkdsk has as much to do with the number of files in the filesystem as with the



size of the volume itself. A small number of large files will chkdsk much faster than a large number of
small files in a comparably sized file system.

Performance concerns also come from the fact that a single large volume could contain enough data that,
when accessed with enough user concurrency, would potentially saturate the performance capabilities of
the underlying disks. This concern can be mitigated on the Symmetrix by creating metavolumes with
enough meta members to spread the workload across a larger number of physical spindles. The use of
Virtual Provisioning can also provide a mechanism to spread large LUNs across a greater number of drives.

Partition alignment
Historically Windows operating systems have calculated disk geometry based on generic SCSI information
including Cylinder-Head-Sector or CHS values as reported by SCSI controllers. The perceived or assumed
geometry of the disk based on CHS values led Windows to create partitions based on 63 sectors per track.
Generally speaking, this meant Windows would create the first partition at the 63rd sector, or at an offset of
32,256 bytes into the physical drive, assuming 512-byte sectors. The creation of partitions based on the
assumption of 63 sectors per track led to the partition and subsequently data within the partition to be
misaligned with storage boundaries in the Symmetrix. Misalignment with these storage boundaries could
potentially lead to performance problems.

The logical geometry of Symmetrix host addressable logical volumes is listed in Table 4.
Table 4. Symmetrix device geometry

Symmetrix DMX-2 and prior:
Cylinder = 15 tracks (480K)
Track = 8 sectors (32K)
Sector = 8 blocks (4K)
RAID 5/6 stripe boundary = 4 tracks (128K)
Metavolume default stripe boundary = 2 cylinders (960K)

Symmetrix DMX-3 and later, including V-Max:
Cylinder = 15 tracks (960K)
Track = 8 sectors (64K)
Sector = 16 blocks (8K)
RAID 5/6 stripe boundary = 2 tracks (128K)
Metavolume default stripe boundary = 1 cylinder (960K)

Based on these values, misaligned I/O could cause partial sector write activity and additional, unwanted I/O
within the Symmetrix from crossing track and/or stripe boundaries.

Depending on the version of Windows, there are several ways to correct alignment and ensure optimal
performance. In all cases it is recommended that the partition offset or alignment be equal to some
increment of 64 KB. This could mean that the partition may start 128 sectors or 65,536 bytes into the disk,
or at some number larger but evenly divisible by 128 sectors or 64 KB. In either case, the partition will be
considered aligned.
Partition alignment prior to Windows Server 2003 SP1
Prior to Windows Server 2003 SP1, the diskpar utility could be used to manually create partitions on a
specific offset or boundary within a physical drive. The recommended offset value when creating a
partition using the diskpar command is 128 sectors. For dynamic disks a filler partition must first be
created with diskpar prior to converting the disk to dynamic and subsequently creating volumes for user
data. For detailed information regarding partition alignment with the diskpar command, please see Using
diskpar and diskpart to Align Partition on Windows Basic and Dynamic Disks available on Powerlink.
Partition alignment with Windows Server 2003, SP1, or later versions
With Windows Server 2003 SP1 or later, Microsoft introduced a version of the diskpart CLI command
that included an option to align a partition upon creation. The recommended value when creating a
partition using the diskpart command is 64 KB. Figure 11 gives an example of how to align an MBR basic
partition using the diskpart command including the align option.
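As a minimal sketch, an aligned MBR partition can be created from diskpart as follows, assuming the target
is disk 3 (the align value is specified in KB):

DISKPART> select disk 3
DISKPART> create partition primary align=64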




For dynamic disks, the diskpart command cannot be used to create aligned volumes. The first reason for
this is that the align option is not available as a part of the diskpart command when creating dynamic
volumes. Secondly, the diskpart command cannot be used to create a filler partition in order to force
alignment for subsequent volumes (the filler partition created by diskpart starts aligned, but does not end
aligned). The diskpar command must be used to create a filler partition prior to converting the disk to
dynamic and creating subsequent volumes for user data. For detailed information regarding partition
alignment with the diskpart command, please see the white paper Using diskpar and diskpart to Align
Partition on Windows Basic and Dynamic Disks and the Aligning GPT Basic and Dynamic Disks For
Microsoft Windows 2003 Technical Note available on Powerlink.


Figure 11. Using Diskpart to create an aligned partition on an MBR basic disk
Partition alignment with Windows Server 2008
With Windows Server 2008, the issues around partition alignment when using default tools, such as the
disk management MMC, have been corrected. By default Windows Server 2008 will create partitions
based on a 1 MB boundary or offset. Specifically, for disks larger than 4 GB, Windows will create
partitions with an offset of 1 MB increments. For disks smaller than 4 GB, Windows will default to an
offset of 64 KB. In both cases, the partition will be aligned with the recommended Symmetrix best practice
of 64 KB increments.
Querying alignment
One method to query and otherwise ensure alignment is to use the WMI interfaces native to Windows
Server 2003 and 2008. These versions of Windows include a WMI CLI interface called wmic that can be
used to determine if a partition is properly aligned. The example in Figure 12 uses the wmic CLI to return
specific partition information including the starting offset, from an MBR basic disk and a GPT basic disk
created specifying a 64 KB alignment with diskpart.
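A minimal example of such a query follows; partition names and offsets will vary by system:

wmic partition get BlockSize, StartingOffset, Name, Index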





Figure 12. Using the wmic CLI to query partition alignment

The starting offset provided by the wmic command is in bytes. To ensure proper alignment, this number
should be evenly divisible by 65536. Alternatively, the provided offset in bytes can be divided by the
block size (512 bytes) to get the number of blocks or sectors for the offset. The sector offset should then be
evenly divisible by 128.
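For example, the legacy default offset of 32,256 bytes equals 63 sectors (32,256 / 512), which is not evenly
divisible by 128, so the partition is misaligned. An offset of 1,048,576 bytes (1 MB) equals 2,048 sectors
and is evenly divisible by both 65,536 bytes and 128 sectors (1,048,576 / 65,536 = 16), so the partition is
aligned.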
Formatting
Once a partition is created, it will generally be formatted with an NTFS file system. The process of
formatting a partition with a filesystem performs several functions, such as creating NTFS metadata
including the Master File Table (MFT) and defining the allocation unit size; the administrator must also
determine whether a quick format should be performed.
Allocation unit size
The allocation unit size, or cluster size, is the smallest amount of storage that can be allocated to an object
or fragment of an object in a filesystem. The ideal allocation unit size, generally speaking, should represent
the average file size for the filesystem in question. An allocation unit size that is too large could lead to
wasted space in the filesystem, while an allocation unit size that is too small could lead to excessive
fragmentation.

In the context of alignment, the allocation unit size will also determine where an object resides in the
filesystem. To have a properly aligned partition is the first step in ensuring aligned operations in the
environment. However, files do not live or otherwise start in the first sector of an aligned and formatted
partition (which is reserved for the NTFS header). Files will start in the filesystem at some offset based on
the allocation unit size. For example, a partition aligned at 64 KB with an allocation unit size of 4 KB
would cause files to be created at 64 KB plus some multiple of 4 KB into the filesystem. This may not
be an issue for general purpose filesystems, but for database applications such as Microsoft Exchange and
Microsoft SQL Server, this could cause the internal structures of the data file to be misaligned with some of
the critical storage boundaries as mentioned in the Partition alignment section. Because of the impact to
alignment caused by the allocation unit size, it is recommended, especially for database applications, to
format a volume with a cluster size of 64 KB. The 64 KB allocation unit size will ensure that the file(s)
created in the filesystem will maintain a 64 KB offset from the beginning of the partition. Assuming the
partition is also aligned with a 64 KB offset, this will ensure I/O operations are as aligned as possible with
the critical boundaries in the Symmetrix.
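As a minimal sketch, a volume mounted at a hypothetical drive letter S: can be formatted with a 64 KB
allocation unit from the command line; the quick format switch is included in line with the Virtual
Provisioning guidance later in this paper:

format S: /FS:NTFS /A:64K /Q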

Querying allocation unit size
The allocation unit size can be determined by using the wmic CLI. Specifically, the WMI volume object
can be queried to determine the blocksize (in bytes) of the filesystem. Figure 13 gives an example of using
wmic to determine the allocation unit size.
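A minimal example of such a query follows; drive letters and values will vary by system:

wmic volume get DriveLetter, Label, BlockSize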





Figure 13. Using the wmic CLI to query allocation unit size
Quick format vs. regular format
Depending on the version of the Windows operating system, the behavior of a non-quick or regular
format will differ. In either case, references to files in an existing filesystem will be removed. But the
difference in behavior is most interesting in the context of Virtual Provisioning, specifically the potential
impact to thin pool allocations.
Windows Server 2003 format
With Windows Server 2003, the difference between a regular and quick format is that a regular format will
scan the entire disk for bad sectors. The scan for bad sectors (SCSI verify command) is a read operation.
In virtually provisioned environments, this read operation will not cause space to be allocated in the thin
pool. When a read is requested from a thin device on an area of the LUN that has not been allocated, the
device will simply return zeroes to the application. Since a full format is an unnecessary operation when
considering there is no actual allocation or disk to verify in virtually provisioned environments, a quick
format should be used. However, no harm will be done should a regular format accidentally be selected;
there will simply be unnecessary I/O to the array. So whether a regular format or a quick format is
selected, only a small number of writes will occur against the thin device, causing minimal allocation
within the thin pool.
Windows Server 2008 format
With Windows Server 2008 the difference between a regular and a quick format is that a regular format
will write zeroes to every block in the filesystem. From a Virtual Provisioning perspective this will cause a
thin device to become fully allocated within its respective thin pool. With this behavior in mind, it is
important to select the quick format option (/Q from the command line) when formatting any thin device on
Windows Server 2008. A quick format will perform similarly to Windows Server 2003 where only a small
number of tracks will become allocated within a thin pool.

Volume expansion
Storage administrators are continually looking for flexibility in the way that storage is provisioned and may
be altered in-place and online. Administrators in Microsoft environments may find the need to increase
storage for a given filesystem due to an increase in storage requirements. One method to account for
growth in storage needs is to expand the LUN on which a given partition or filesystem resides.

Previous versions of Enginuity have provided methods in which to grow Symmetrix volumes. The method
to expand volumes in place and online would involve adding additional members to an existing
metavolume. If the metavolume was concatenated, then only the additional volumes to be added to the
meta would be required to expand the volume online with no disruption to the application. Striped
metavolume expansion, however, required not only the additional volumes but also a mirrored BCV in
order to perform the expansion with data protection. The requirement for a mirrored BCV excluded other
more cost-effective protection types, such as RAID 5, which may be more desirable for BCV volumes.




With Enginuity 5874 and Symmetrix V-Max arrays, users may now use other protection types for the BCV
used in conjunction with striped metavolume expansion, including RAID 5 or RAID 6. The following
section provides an example of online striped metavolume expansion using a RAID 5 BCV.
Striped metavolume expansion example
This example focuses on Symmetrix metavolume 41F, which happens to hold a Microsoft Exchange
database. We will expand metavolume 41F with four new devices (42D, 42E, 42F, and 430) that reside in
the same 15k rpm Fibre Channel disk group that holds the existing metavolume. The RAID 5 BCV
metavolume to be used in order to protect data during the migration, device 431, exists on a separate disk
group.

In preparation for expanding a striped metavolume with data protection, it is necessary to ensure there are
no existing Symmetrix-based replication sessions occurring against the device. This includes ensuring
TimeFinder, SRDF, and Open Replicator sessions have been removed, terminated, or canceled as
appropriate to the respective technology. The requirement to remove all replication sessions also applies to
the TimeFinder BCV to be used for protecting data during the expansion. The BCV cannot be
synchronized or otherwise have a relationship with the metavolume prior to running the expansion
procedure using Solutions Enabler 7.0.

It is also important to ensure that the devices being added to the existing metavolume have the same
attributes. In this example the metavolume is a clustered resource within a Windows Server 2008 failover
cluster. A Symmetrix device within a Windows Server 2008 failover cluster requires that the SCSI-3
persistent reservation attribute be set. Since at the beginning of this example the SCSI-3 persistent
reservation attribute is not set on the volumes being used for the expansion, the following command needs
to be issued:

symconfigure -sid 94 -cmd "set dev 42D:430 attribute =
SCSI3_persist_reserv;" commit

Once the environment is prepared, the LUN expansion can be executed. The expansion procedure will be
executed from a host with gatekeeper access to the required Symmetrix using the symconfigure CLI
command. Figure 14 shows the partition for the LUN in this example, as seen from the disk administrator,
prior to it being expanded.


Figure 14. Striped metavolume prior to expansion
To expand the metavolume, the following command was executed:
symconfigure -sid 94 -cmd "add dev 42d:430 to meta 41f
protect_data=true bcv_meta_head=431;" commit

Once the expansion process has begun, the following high-level steps will be taken:

1. The BCV metadevice specified for data protection will begin a clone copy operation from the source
metavolume.
2. During the clone copy operation, writes from an application, like Exchange, will be mirrored between
the source metavolume and the BCV.



3. When the BCV copy is complete, the BCV is split from the source and all read and write I/O is
redirected to the BCV device.
4. While the I/O is redirected, the source metavolume will be expanded with the specified volumes.
5. After the metavolume is expanded, the data from the BCV is copied back and restriped across all
members of the newly expanded metavolume.
6. During the copy from the BCV, I/O is redirected back to the expanded metavolume.
7. Once the copy back is complete, the BCV clone relationship is terminated and the expansion
completes.

Due to the nature of the volume expansion there will be a performance impact for reads and writes to the
LUN. With this in mind it is recommended to perform any expansion operations during maintenance
windows or times of low I/O rates to the LUN.

The symconfigure command will monitor the expansion throughout the process as seen in Figure 15. Once
the expansion is complete symconfigure will exit.


Figure 15. symconfigure command during the expansion process
After the symconfigure command completes, the administrator must extend the partition
that resides on the now larger LUN. To do this, the first step is to perform a rescan from the host via the
disk management console or the diskpart CLI. Since this is a clustered environment, we will need to
perform a rescan from all nodes in order to discover the new LUN size. Once the rescan is executed the
new size of the LUN should be seen from all hosts as depicted in Figure 16.





Figure 16. Metavolume after the expansion
At the completion of the metavolume expansion the diskpart command can be used to grow the partition
into the newly discovered free space. From the diskpart command, either the volume or the partition needs
to be selected prior to issuing the extend option. Figure 17 gives an example of using diskpart to select
the target disk, selecting the appropriate partition on the disk, followed by issuing the extend command.


Figure 17. diskpart commands to expand the NTFS partition
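As a minimal sketch of the commands shown in the figure, assuming the expanded metavolume is seen by
the host as disk 4 with a single data partition:

DISKPART> select disk 4
DISKPART> select partition 1
DISKPART> extend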
The extend command will grow the partition into the free space on the disk, as shown in Figure 18.
Another rescan will need to be issued on all cluster nodes in order to discover the now larger partition.
This completes the expansion process and the Exchange database can grow into the now larger volume on
which it resides.

Figure 18. Metavolume following the diskpart extend command
This particular example was tested on a Windows Server 2008 failover cluster while running a light
loadgen workload against the database LUN being expanded (~200 IOPS). The ~88 GB LUN was
expanded in roughly 35 minutes.



Symmetrix replication technologies and management
tools
In many environments a key aspect of managing Symmetrix storage involves storage replication. The
Symmetrix offers several native forms of replication including TimeFinder, SRDF, and Open Replicator.
Each of these technologies offers LUN-based replication either within a Symmetrix array (TimeFinder),
between multiple Symmetrix arrays (SRDF or Open Replicator), or between the Symmetrix and other
qualified storage arrays (Open Replicator). The following sections offer an introductory description to
these technologies.
EMC TimeFinder family
The TimeFinder family of software provides a local copy or image of data, independent of the host and
operating system, application, and database. TimeFinder local replication software helps to manage backup
windows while minimizing or eliminating any impact on the application and host performance. It allows
for immediate application and host access during restores, also referred to as instant restore. TimeFinder
also allows for fast data refreshes for activities such as data warehousing and decision support as well as
test and development.

The TimeFinder family of software includes:

TimeFinder/Clone, depicted in Figure 19, creates full-volume copies of production data within a
Symmetrix system. TimeFinder/Clone allows up to 16 active clones of a single production device.
Clone devices can have RAID 1, RAID 5, or RAID 6 protection schemes. TimeFinder/Clone can be
used to copy data between Symmetrix standard devices (which can optionally be labeled target
devices), between standard and business continuance volumes (BCV), or between BCV volumes.

Figure 19. TimeFinder/Clone example
TimeFinder/Snap, depicted in Figure 20, creates space-saving copies of production data within a
Symmetrix system. TimeFinder/Snap allows up to 128 active snapshot copies of a single production
device. TimeFinder/Snap utilizes cache-only devices referred to as VDEVs to create a pointer-based
copy of a production standard device. Should any writes occur to the standard device or the VDEV,
data representing the point-in-time image of the VDEV will be copied to what is called a save pool.
Save pools consist of save devices that provide durable storage in a RAID 1, RAID 5, or RAID 6



configuration. The mechanism used to maintain the pointer-based copy of a VDEV is commonly
referred to as copy-on-first-write. In a Symmetrix system this copy-on-first-write activity can be done
asynchronously, resulting in minimal impact to the first writes performed on the production standard
devices.

Figure 20. TimeFinder/Snap example

The TimeFinder family of software also includes additional options that address a wider range of business
needs. The TimeFinder options include:

TimeFinder/Consistency Groups (TF/CG) provides, at no additional cost, dependent-write
consistency of an application or group of applications when creating a point-in-time image across
multiple devices, either within a single Symmetrix system or spanning multiple Symmetrix systems.
TimeFinder/Exchange Integration Module (TF/EIM) provides a CLI driven recovery management
interface for Windows servers that support Microsoft Exchange databases in Symmetrix systems.
TF/EIM automates the process of creating TimeFinder copies for backup and restore operations in
Exchange Server 2003 and Exchange Server 2007 environments. TF/EIM utilizes the Windows
Volume Shadow Copy Service (VSS) to coordinate the operation of creating an Exchange-based full,
copy only or log (vssdiff) TimeFinder replica of production databases. TF/EIM utilizes the EMC VSS
hardware provider to coordinate the TimeFinder replica creation with the necessary Exchange
processes, including a freeze and thaw of write I/O activity to an Exchange database, mounting and
checksum verification of the TimeFinder replica, and log truncation following a successful backup.
TimeFinder/SQL Integration Module (TF/SIM) provides a CLI driven recovery management
interface for Windows servers that support Microsoft SQL Server databases residing in Symmetrix
systems. TF/SIM automates the process of creating TimeFinder copies for backup and restore
operations in SQL Server 2005 and 2008 environments. TF/SIM can utilize either the Virtual Device
Interface (VDI) native to SQL Server or the Windows VSS framework in order to coordinate
TimeFinder replica creation with the given instance of SQL Server.



TimeFinder/Clone Emulation Mode (included at no charge with TimeFinder/Clone) enables
customers to easily leverage their existing TimeFinder/Mirror scripts with new Symmetrix systems that
utilize TimeFinder/Clone functionality.
EMC SRDF family
SRDF is a business continuance solution that maintains a replica of data at the device level in Symmetrix
arrays located in physically separate sites. The Solutions Enabler SRDF component extends the basic
SYMCLI command set to include SRDF commands that allow you to perform control operations on
remotely located RDF devices. SRDF provides a recovery solution for component or site failures between
remotely mirrored devices, as shown in Figure 21. SRDF mirroring reduces backup and recovery costs and
significantly reduces recovery time after a disaster.

Figure 21. SRDF bidirectional configuration

In an SRDF configuration, the individual Symmetrix devices are designated as either a source mirror or a
target mirror to synchronize and coordinate SRDF activity. If the source (R1) device fails, the data on its
corresponding target (R2) device can be accessed. When the source (R1) device is replaced, the source
(R1) device can be resynchronized. SRDF configurations have at least one source (R1) device mirrored to
one target (R2) device.

SRDF site configurations provide for either a unidirectional or a bidirectional data transfer from one
storage site to another. In a unidirectional SRDF configuration, all source (R1) devices reside in the local
Symmetrix array and all target (R2) devices in the remote site Symmetrix array. Data flows from the source
(R1) devices over an SRDF link to the target (R2) devices. In a bidirectional configuration, both source
(R1) and target (R2) devices reside in each Symmetrix array, so each array serves as both a master copy
point and a mirror copy point in the SRDF configuration. Data flows from the source (R1) devices to the target (R2) devices.

The SRDF family of software provides the following products:
SRDF/Synchronous (SRDF/S) maintains realtime synchronous remote data replication from one
Symmetrix production site to one or more Symmetrix systems located within campus, metropolitan, or
regional distances. SRDF/S provides for a recovery point objective (RPO) of zero data loss.



SRDF/Asynchronous (SRDF/A) maintains asynchronous data replication usually at extended
distances and provides an RPO that could be as minimal as a few seconds. SRDF/A maintains
dependent write consistent copies of data across a group of devices by creating delta sets as a unit of
consistency for asynchronous replication between sites.
SRDF/Data Mobility (SRDF/DM) provides for the transfer of a Symmetrix data volume to a
secondary Symmetrix locally or across an extended distance. General uses of SRDF/DM can include
disaster restart, information sharing for decision support or data warehousing, or migration of data
between Symmetrix systems.

The SRDF family of software also consists of other add-on options including advanced three-site
capabilities using the combination of SRDF/S, SRDF/A, SRDF/DM, and TimeFinder. The other SRDF
options and advanced three-site solutions include:

SRDF/Automated Replication (SRDF/AR) enables rapid disaster restart over any distance with a
two-site single hop option using SRDF/DM in combination with TimeFinder, or a three-site multi-hop
option using a combination of SRDF/S, SRDF/DM, and TimeFinder.
SRDF/Cluster Enabler (SRDF/CE) enables automated or semi-automated site failover using SRDF/S
or SRDF/A with Microsoft failover clusters. SRDF/CE allows Windows Server 2003 and Windows
Server 2008 editions running Microsoft failover clusters to operate across pairs of SRDF-connected
Symmetrix arrays as geographically distributed clusters.
SRDF/Star is a three-site disaster-restart solution that can enable zero data loss with SRDF/S between
two sites, while preserving SRDF/A replication to a third site. With this, SRDF/Star offers a
combination of continuous protection, incremental data resynchronization, and enterprise consistency
between two remaining sites in the event of the workload site going offline due to a site failure, fault,
or disaster event.
SRDF/Concurrent provides the ability to remotely mirror a Symmetrix production-site device to two
secondary-site Symmetrix arrays simultaneously using either SRDF/S or a combination of SRDF/S and
SRDF/A.
SRDF Cascaded is an advanced three-site solution that utilizes SRDF/S between a workload site and a
secondary-site Symmetrix, then SRDF/A from that secondary-site Symmetrix to an out-of-region third
Symmetrix array. This configuration offers zero data loss achievable in the out-of-region site in the
event of a production-site disaster event.
SRDF/Extended Distance Protection (SRDF/EDP) is a new disaster-restart solution providing
customers the ability to achieve no data loss at an out-of-region site at a lower cost. Using cascaded
mode SRDF operations as a building block for this solution, SRDF/EDP allows the intermediate site to
provide data pass-through to the out-of-region site without the need to allocate an equal amount of
storage within the intermediate site.
SRDF/Consistency Groups (SRDF/CG) ensures dependent-write consistency of an application or
group of applications being remotely mirrored by SRDF. SRDF/CG helps allow for a business point
of consistency for remote-site disaster restart for all applications associated with a business function.
Open Replicator overview
Open Replicator provides a method for copying device data from various types of arrays within a storage
area network (SAN) infrastructure to or from a Symmetrix DMX or Symmetrix V-Max storage array. For
example, Symmetrix Open Replicator provides a tool that can be used to migrate data from older
Symmetrix arrays, EMC CLARiiON arrays, and certain third-party storage arrays to a Symmetrix DMX
or V-Max storage array. Alternatively, the Open Replicator command can also be used to migrate data
from a Symmetrix DMX or V-Max storage array to other types of storage arrays within the SAN
infrastructure. Copying data from a Symmetrix DMX or V-Max storage array to devices on remote storage
arrays allows for data to be copied fully or incrementally.

Open Replicator is commonly used for the following functions:




Migrate data between Symmetrix DMX or V-Max storage arrays and third-party storage arrays within
the SAN infrastructure without interfering with host applications and ongoing business operations.
Back up and archive existing data within the SAN infrastructure as part of an information lifecycle
management solution.

Open Replicator copy operations are controlled from a local host attached to the Symmetrix V-Max or
DMX storage array. Data copying is accomplished as part of the storage system process and does not
require host resources. Optionally, the data can be copied online between the Symmetrix array and remote
devices, allowing host applications, such as a database or file server, to remain operational (function
normally) during the copy process. Data is copied in sessions with up to 512 sessions allowed per
Symmetrix array.

The Symmetrix V-Max or DMX array and its devices will always be referred to as the control side of the
copy operation. Older Symmetrix arrays, CLARiiON arrays, or third-party arrays on the SAN will always
be referred to as the remote array/devices. With the focus on the control side, there are two types of copy
operations, push and pull. A push operation copies data from the control device to the remote device(s). A
pull operation copies data to the control device from the remote device(s). Copy operations are either hot
(online) or cold (offline).

Open Replicator can be used to migrate data into a Symmetrix V-Max or DMX array from older
Symmetrix arrays, CLARiiON arrays, or other third-party arrays. Figure 22 shows two Open Replicator copy
sessions performing a pull operation, where data is copied through the SAN infrastructure from remote
devices to the Symmetrix array.


Figure 22. Open Replicator pull operation



Open Replicator can be used to copy data from a Symmetrix V-Max or DMX array to older Symmetrix
and CLARiiON arrays. Figure 23 shows two Open Replicator copy sessions performing a push operation,
where data is copied from the Symmetrix array to remote devices within the SAN infrastructure.

Figure 23. Open Replicator push operation
Symmetrix Integration Utilities
Symmetrix Integration Utilities (SIU) is a CLI (symntctl) that integrates and extends the Windows Server
2003 and 2008 disk management functionality to better operate with EMC Symmetrix storage devices.
SIU provides particular value in environments where TimeFinder, SRDF, or Open Replicator is being used.
SIU is not a replacement for the Windows logical disk manager (LDM), but bridges gaps in functionality
that Windows administrators need to work optimally with EMC storage devices. Specifically, SIU
enables administrators to perform the following actions:
View the physical disk, volume, and VMware datastore configuration data.
Update the partition table on a disk.
Set and clear volume flags.
Flush any pending cached file system data to disk.
Show individual disk, volume, or VMware datastore details.
Mount and unmount volumes to a drive letter or mount point.
Manipulate disk signatures.
Scan the drive connections and discover any new disks available to the system.
Mask devices to and unmask devices from the Windows host.




With the release of Solutions Enabler 7.0, the command line utility symntctl, also referred to as SIU, is now
included. The typical install of Solutions Enabler 7.0 installs the symntctl CLI onto Windows platforms
automatically.
It should be noted that versions of SIU at 4.2 or later are separate from and not dependent on the SIU
service. The symntctl CLI functions that previously relied on the SIU service are now managed directly by
SIU or the operating system, therefore the SIU service need not be installed in order to utilize symntctl.
Additional details regarding SIU functionality and recommended operation can be found in the EMC
Solutions Enabler Symmetrix Array Controls CLI Product Guide available on Powerlink.
EMC Replication Manager
EMC Replication Manager is an EMC software application that dramatically simplifies the management
and use of disk-based replication to improve the availability of users' mission-critical data and provide rapid
recovery of that data in case of corruption.

Replication Manager helps users manage replicas as if they were tape cartridges in a tape library unit.
Replicas may be scheduled or created on demand, with predefined expiration periods and automatic
mounting to alternate hosts for backups or scripted processing. Individual users with different levels of
access ensure system and replica integrity. In addition to these features, Replication Manager is fully
integrated with many critical applications such as DB2 LUW, Oracle, and Microsoft Exchange.

Replication Manager makes it easy to create point-in-time, disk-based replicas of applications, file systems,
or logical volumes residing on existing storage arrays. It can create replicas of information stored in the
following environments:

Windows file systems
Microsoft SQL Server databases
Microsoft Exchange databases
Microsoft Office SharePoint Server
Oracle databases
DB2 LUW databases
UNIX file systems

Replication Manager has a generic storage technology interface that allows it to connect to and invoke a wide
range of replication technologies. Replicas created by Replication Manager can be stored on Symmetrix
TimeFinder/Mirror, TimeFinder/Clone, or TimeFinder/Snap (VDEV) devices; CLARiiON clones or snapshots;
Invista clones; or Celerra SnapSure local snapshots and Celerra Replicator remote snapshots.

Replication Manager also supports data using the RecoverPoint Appliance storage service.

Replication Manager allows for local and remote replications using TimeFinder, SRDF, SAN Copy,
Navisphere, Celerra iSCSI, and Celerra NFS, and can manage replicas of MirrorView/A or MirrorView/S
secondaries using SnapView/Snap and SnapView/Clone replication technologies where appropriate.

Some of the use cases for Replication Manager include:

Create point-in-time replicas of production data in seconds.
Facilitate quick, frequent, and non-destructive backups from replicas.
Mount replicas to alternate hosts to facilitate offline processing (for example, decision-support
services, integrity checking, and offline reporting).
Restore deleted or damaged information quickly and easily from a disk replica.
Set the retention period for replicas so that storage is made available automatically.




Specific to managing Symmetrix replicas, Replication Manager utilizes Solutions Enabler software and
interfaces to the TimeFinder family of products. Replication Manager automatically controls the
complexities associated with creating, mounting, restoring, and expiring replicas of data. Replication
Manager performs all of these tasks and offers a logical view of the production data and corresponding
replicas via the Replication Manager console, as depicted in Figure 24.


Figure 24. Replication Manager console
Additional information can be found in the Replication Manager Product Guide available on Powerlink.
Managing storage replicas
Symmetrix device states
Symmetrix host addressable devices can be placed into several states depending on their use. The expected
device states will depend on several factors including the type of device and whether it is being used for
replication. The following section outlines the possible device states, when they are expected, and how
Windows manages the state.
Read write (RW)
The expected device state of host-accessible, mounted, and in-use Symmetrix devices will be read/write.
Read/write Symmetrix devices report as RW when queried using various Solutions Enabler commands. As
the state implies the device is open for read and write access from a host. Windows will be able to perform
all expected disk management operations when a Symmetrix device is RW.
Write disabled (WD)
A device that is RW can be placed into a write disabled or WD state in the Symmetrix. When a device is
WD, any write activity to the device will fail, including initializing, partitioning, formatting, and setting
volume attributes. If a WD device has an existing filesystem, Windows will allow the mounting of the
device for read-only access. Windows Server 2008 will automatically detect the write disabled state of the
device and mark the disk attribute as read-only (note that the volume attribute will not be set to read-only).
By automatically marking the disk as read-only any options to manipulate the disk or volume from disk



management will be grayed out. Any mount point manipulation of the volume would need to occur from
other interfaces, like diskpart, mountvol, or symntctl.

SRDF R2 or target devices are marked by default as WD in the Symmetrix. It is not recommended to
present a WD R2 device to a host for the purposes of mounting while SRDF is active. While the R2 device
can technically be mounted while WD, the data on the device will be changing within the Symmetrix as
updates are propagated from the R1 device. This will lead to inconsistency with what is read from the disk
into host cache and what may be changing on the disk from an SRDF perspective. Should an SRDF R2
device be properly mounted, while replication is inactive and while the device is RW, it should be
unmounted prior to resuming replication and again marking the device as WD. For more details please see
the section titled Managing the mount state of storage replicas.

While it is possible to mount a device marked as WD in the Symmetrix it is recommended to manage write
access by marking the volume as read-only from Windows. Marking a volume as read-only from Windows
requires the device be RW in the Symmetrix. Marking the volume as read-only can be accomplished by
using the diskpart command, specifically the attributes command and setting the readonly flag at the
volume level as depicted in Figure 25.

If certain Symmetrix-based administrative tasks require that a device be write disabled, for example, when
unmapping a device from a front-end director, this state needs to be managed by write disabling the device in the
Symmetrix.


Figure 25. Using diskpart to set the readonly volume attribute
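As a minimal sketch of the operation shown in the figure, assuming the target is volume 3:

DISKPART> select volume 3
DISKPART> attributes volume set readonly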
Not ready (NR)
In certain instances Symmetrix devices will be placed into a not ready or NR state. An NR device will be
visible but not usable to a Windows host. An NR device will be seen as either not initialized or
unreadable from the disk management console. The following list provides details on when this NR
state should be expected:




When a VDEV is not in a relationship with a standard device, the default state of the VDEV will be
NR.
VDEVs in a created state, in preparation for being activated and made read/write, will be in a NR state.
When a TDEV is not bound to a thin pool and otherwise not in use, the default state of the TDEV will
be NR.
When BCVs or clone targets are in the process of copying data from a production device during an
establish or create operation, the devices will be NR.
During a TimeFinder restore the BCV or clone device used to restore data will be NR. In addition, the
source standard or production device being restored will toggle NR for a very short window (a few
seconds) while the restore is initiated.

At no time should a Symmetrix device about to enter an NR state be mounted on a Windows host. Access
to the volume must be removed using one of the methods outlined in the next section.
Managing the mount state of storage replicas
Managing the mount state of storage replicas is a critical aspect of any TimeFinder-, SRDF-, or Open
Replicator-based deployment. If volumes are not properly managed from a Windows host prior to being
updated at the storage system level, there is the possibility that Windows will maintain cached file system
information about the prior state of the volumes. This possible inconsistency between the operating system
view of the LUN and the storage state could lead to data corruption.

There are several methods that can be used to properly manage host access to TimeFinder-, SRDF-, or
Open Replicator-based replicas. The following sections outline the main manual or scripting based
options.

Properly unmount replica volumes presented to a host
The simplest and most commonly used method to manage replica access is to unmount volumes being
updated by TimeFinder, SRDF, or Open Replicator from their respective host. The proper method of
unmounting the volume must be deployed in order to avoid cache inconsistencies between the host and the
storage. Windows Server 2003 and 2008 have the ability to Offline a volume to ensure that volume
cache is removed from the operating system and that volume access is blocked to prevent use while
unmounted.

The recommended method for mounting, unmounting, and otherwise managing volumes used with
replication technologies is to use SIU. The SIU mount and unmount commands provide the necessary
flush, mount point removal, and volume offline operations to properly manage operating system cache for
storage replicas.

Native to both Windows Server 2003 and 2008, Microsoft provides another potential method for
unmounting storage devices, specifically using the mountvol command. Mountvol contains two options for
unmounting volumes, the /P switch and the /D switch. Volumes can be properly unmounted and placed
offline by using the mountvol command with the /P switch. Mountvol with the /P switch performs the
same Windows Virtual Disk Service (VDS) operations as SIU, which properly unmount and offline a
volume. Conversely, mountvol should not be used with the /D switch as the unmount done in this case
does not place the volume offline. Mountvol with the /D switch simply removes the volume mount point,
but does not perform the other necessary operations to ensure host cache coherency for devices to be
updated from a TimeFinder, SRDF, or Open Replicator perspective.
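As a sketch, assuming a hypothetical mount point of S:\sqldata, the two mountvol forms discussed above are:

mountvol S:\sqldata /P    (unmounts the volume and places it offline)
mountvol S:\sqldata /D    (removes the mount point only; not recommended before a replica refresh)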

When using mountvol to view a properly unmounted and offlined volume, the state of the volume will
report *** NOT MOUNTABLE UNTIL A VOLUME MOUNT POINT IS CREATED ***.




When using SIU to view a properly unmounted and offlined volume, the state of the volume will report
PERMANENTLY DISMOUNTED.

The following is a simplified example of the commands and workflow needed to correctly unmount,
refresh and mount TimeFinder BCV devices on a mount host:

1. Unmount the volumes to be updated by TimeFinder.
a. symntctl unmount -path s:\sqldata
b. symntctl unmount -path s:\sqllogs

2. Establish, verify, and split the TimeFinder copy.
a. symmir -g sql est -nop
b. symmir -g sql verify -synched -i 15
c. symmir -g sql split -consistent -nop

3. Mount the volumes.
a. symntctl mount -path s:\sqldata -symdev 162 -sid 58
b. symntctl mount -path s:\sqllogs -symdev 163 -sid 58

Additional examples on how to use SIU to manage storage replicas can be found in the EMC Solutions
Enabler Symmetrix Array Controls CLI Product Guide available on Powerlink.
Older versions of SIU (prior to version 4.2) relied on a proprietary SIU service to take a filesystem lock of
the unmounted volume to block access. These older versions can still be used; however, the offline volume
state as previously discussed does not apply. Earlier versions of SIU also had specific scenarios where
unmounting a volume could still lead to issues with cache coherency. The first scenario was if the force
flag was used to unmount the volume. If the force flag was used, the volume would be unmounted,
however, cache would not be removed for that volume. If the volume was not subsequently masked from
the system, cache corruption could occur. SIU with version 4.2 or later does not have this issue as a forced
unmount will still offline the volume and maintain cache coherency. The second scenario with
SIU prior to 4.2 was if the mount host was rebooted while the volumes were unmounted. The service used
to maintain the filesystem lock (which prevented volume access) would be restarted with the system and
subsequently lose its lock. Upon reboot Windows could read the unmounted filesystem information into
cache, which would cause corruption during the following refresh of the SRDF, TimeFinder, or Open
Replicator copy. SIU with version 4.2 or later does not have cache consistency issues due to the offline
volume state. Once a volume is offline, it will remain offline across host reboots and cannot be accessed by
the operating system or applications without an explicit mount command being issued.

Use masking to remove host access to replicas
Removing host access to replicas via masking also ensures no cached volume information is maintained by
the host. Prior to removing LUN access from the host, any associated volumes should be unmounted.
Depending on the nature of the replica, an appropriate method for unmounting the volume should be
considered. For example, if the replica will be overwritten by an establish or recreate TimeFinder
operation, the volume can be forcibly unmounted with the Windows mountvol CLI, or with SIU (symntctl)
including the force switch. If the data on the replica must be maintained, the volume should be gracefully
unmounted by removing any open handles against the volume prior to unmounting. Ideally in this latter
case, SIU will be used to unmount the volume. The mountvol CLI can also be used in conjunction with the
/P switch. It is important to note that mountvol forcibly unmounts volumes automatically, so if an
application is inadvertently left open against a volume, it will be disconnected. Because of this, it is
recommended to use SIU, as SIU will return error conditions to indicate usage by online applications.

Once the volume is unmounted, the LUN can be removed from the host using masking commands. When
the LUN is masked away from the host, any cached volume information maintained for that LUN will be
removed by the operating system. Along with native symmask or symaccess commands, SIU can also be
used to mask volumes from the host using the symntctl mask and unmask syntax.
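
As a brief illustration only (the storage group name, initiator WWN, and director/port values below are
assumptions and will differ in any real environment), removing Symmetrix device 162 on Symmetrix 58
from a backup host could look like the following, using Auto-provisioning Groups (symaccess) on
Symmetrix V-Max or device masking (symmask) on Symmetrix DMX:

   symntctl unmount -path s:\sqldata
   symaccess -sid 58 -name backup_sg -type storage remove devs 162      (V-Max)
   symmask -sid 58 -wwn 10000000c9123456 -dir 7a -p 0 remove devs 162   (DMX)
   symmask -sid 58 refresh                                              (DMX - refresh the masking database)

The symntctl mask and unmask operations provide similar functionality directly from the Windows host.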

Power off the host accessing the replicas
Powering off the host that has access to the SRDF or TimeFinder replicas also ensures the server does not
maintain any cached information about the content of the volumes. If the replica disks were mounted using
the same signatures as the source LUNs, the volumes will mount to the same mount points upon restart,
provided the TimeFinder, SRDF, or Open Replicator copies are read/write at system start and there are no
identical copies in the system, which could otherwise lead to signature conflicts and automatic resignaturing.

Manage clustered storage replicas
Microsoft Windows Server 2003 Cluster Service (MSCS)
To properly manage volume cache for a clustered physical disk resource on Windows Server 2003, the
resource should either be taken offline (from a clustered resource perspective) or set into extended
maintenance mode. Taking a disk resource offline or setting it into extended maintenance mode will clear
any volume cache and make the volume inaccessible. There is no need to attempt to unmount a volume
that is offline in the cluster or in extended maintenance mode with Windows Server 2003-based clustering.

Disk resources should be taken offline or set into extended maintenance mode prior to refreshing data to or
from an SRDF-, TimeFinder-, or Open Replicator-based copy. Because enabling SRDF replication will
mark the R2 device as write disabled, and refreshing a TimeFinder copy will place the device in a Not
Ready state, the disk resource within the cluster would otherwise fail if not taken offline or set into
extended maintenance mode. The following section gives additional information on extended maintenance
mode with Windows Server 2003 Cluster Service.

Volumes and LUNs under MSCS control undergo specific health checks in order to determine their
availability. Should any of these health checks fail, MSCS will attempt to offline the disk resource or move
the disk resource to another node. When a volume or LUN is in a locked or Not Ready state, the cluster
health checks are likely to fail. Some normal administrative tasks cause volumes or LUNs to become
exclusively locked or Not Ready (for example, a chkdsk command, or a TimeFinder restore).

To solve this problem, Microsoft introduced a new MSCS feature called maintenance mode in
Windows Server 2003 Service Pack 1. Shortly after SP1's release, a post-SP1 hotfix was issued that
introduced an extended maintenance mode.

The goal of maintenance mode is to help enable certain administrative tasks against volumes or LUNs that
would otherwise cause resources such as disks to fail. Correct usage of the maintenance modes will
suppress the normal volume and disk LookAlive/IsAlive checks so that operations such as TimeFinder
restores (which will set a LUN Not Ready for a short amount of time) can be done without impacting an
entire resource group.

In order to perform TimeFinder- or SRDF-based operations without impacting the disk resource, extended
maintenance mode must be used.

The act of enabling extended maintenance mode is equivalent to performing an offline of the disk resource.
The disk resource will be offline internally; however, to the cluster service and dependent applications,
the disk resource will appear online. For more information regarding MSCS health checks and
maintenance mode, including information on how to obtain the hotfix for extended maintenance mode,
please reference Microsoft KB article 903650 (http://support.microsoft.com/kb/903650/).

EMC supports the use of extended maintenance mode in performing TimeFinder, SRDF, or Open
Replicator operations. An important note when using extended maintenance mode is to ensure the correct
disk signature is being restored. In some environments, the BCV or R2 device may be presented to a
backup or reporting host. If the backup/reporting host detects a signature conflict due to multiple copies of
the same LUN, or a problem with multipathing, Windows will automatically change the disk signature of
the conflicting device. Should that modified signature be restored to the Windows Server 2003-based
cluster from which it came, the clustered resource will fail the next time the signature is checked. In its
current form, extended maintenance mode will not check the signature of the disk device when the disk
device is placed back into normal operation. Therefore, a restore may appear to have been performed
without a problem, but the next time that disk resource is moved or placed offline, the resource will fail
with an invalid signature when it attempts to come online. It is therefore imperative to validate disk
signatures before performing any restores into an MSCS environment.

Please also note that all application-level access to the volume(s) being placed in extended maintenance
mode must be terminated. In the case of restorations, the database store will typically be detached or
deleted from the storage engine. This style of operation terminates user access to the device. Taking a
resource offline by placing it in extended maintenance mode without first appropriately addressing
user/process access is not supported and may result in adverse effects.

The following is a simplified example of the commands and workflow needed to correctly restore a SQL
database backup taken with the TimeFinder SQL Integration Module (TF/SIM) on a Windows Server 2003
failover cluster, utilizing extended maintenance mode (a scripted sketch of steps 3 through 7 follows the list):

1. Detach the SQL database(s) to be restored
2. Verify the signature of the clustered disk resource matches the BCV:
a. Validate the signature of the clustered disk resource:
i. cluster res ResourceName /priv
b. Validate the signature of the BCV on the backup or reporting host using SIU:
i. symntctl list physical
c. After verifying the signature, ensure the BCV is unmounted on the backup host with
SIU.
3. Enable maintenance mode for the clustered disk resources to be restored:
a. cluster res ResourceName /maint:on
4. Enable extended maintenance mode for the clustered disk resources to be restored:
a. cluster res ResourceName /extmaint:on
5. Run the appropriate TimeFinder restore command targeting the disks that hold the SQL
database(s):
a. symmir -g DgName restore
i. If the restore is protected, move to step 6. Otherwise, wait for the restore to
complete, then split.
1. symmir -g DgName split
6. Disable extended maintenance mode for the restored clustered disk resources:
a. cluster res ResourceName /extmaint:off
7. Disable maintenance mode for the restored clustered disk resources:
a. cluster res ResourceName /maint:off
8. Restore the database using the appropriate TF/SIM restore command.
9. If the option to use protected restore was used, monitor the progress of the restore and be sure to
split the BCV devices once they are in a restored state.
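
The following batch-style sketch shows a scripted form of steps 3 through 7 above for an unprotected
restore. The clustered disk resource name ("Disk S:") and device group name (sql) are illustrative
assumptions only:

   rem Suppress cluster health checks on the disk resource
   cluster res "Disk S:" /maint:on
   cluster res "Disk S:" /extmaint:on

   rem Restore the BCV data to the standard device, wait for completion, then split
   symmir -g sql restore -nop
   symmir -g sql verify -restored -i 15
   symmir -g sql split -nop

   rem Return the disk resource to normal cluster monitoring
   cluster res "Disk S:" /extmaint:off
   cluster res "Disk S:" /maint:off
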
Windows Server 2008 failover clustering
Similar to Windows Server 2003, the proper way to manage volume cache for a clustered physical disk
resource is to either take the resource offline or set it into maintenance mode. Unlike Windows Server
2003, Windows Server 2008 has only one form of maintenance mode, and that form does not offline
volumes or manage volume cache for a device. Therefore, after placing a resource into maintenance mode,
the volume must also be unmounted with SIU or with mountvol using the /P switch. An offline of the
resource, however, will correctly stop access to the volume and clear any volume cache without the need
for an unmount.
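
As a brief illustration (the clustered disk resource name and mount path are assumptions for this sketch),
placing a Windows Server 2008 clustered disk into maintenance mode and then unmounting its volume
could look like:

   cluster res "Cluster Disk 3" /maint:on
   symntctl unmount -path s:\sqldata

   (or, using the native Windows CLI:  mountvol s:\sqldata /P)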

Windows Server 2008 failover clustering also offers additional functionality beyond maintenance mode to
help with managing storage, including disk-based replicas. One such feature is the ability to add or remove
cluster resource dependencies online and without impact to dependent resources. This ability is useful in
the context of offlining disk resources in the cluster prior to disk-based restores or replica refresh
operations. By removing dependencies on the disk to be offlined, resources that would otherwise be
impacted can remain online. Once the necessary disk maintenance is complete, the disk can be brought
online and the dependency can be re-added without impacting the dependent resource and potentially other
databases or applications.

For managing volume cache during replica operations in Windows Server 2008 failover clusters, it is
recommended to use online dependency modification in conjunction with offlining the disk resource,
rather than maintenance mode. Maintenance mode, with its requirement to unmount the filesystems, adds
complexity that online dependency addition and removal makes unnecessary.

The following is a simplified example of the commands and workflow needed to correctly restore a SQL
database backup taken with the TimeFinder SQL Integration Module (TF/SIM) on a Windows Server 2008
failover cluster, utilizing online dependency modification (a scripted sketch of the cluster and TimeFinder
steps follows the list):

1. Detach the SQL database(s) to be restored
2. In Failover Cluster Manager, remove the SQL Server resource dependencies for the disks that contain
the database(s) to be restored
3. Verify the signature of the clustered disk resource matches the BCV:
a. Validate the signature of the clustered disk resource:
i. cluster res ResourceName /priv
b. Validate the signature of the BCV on the backup or reporting host using SIU:
i. symntctl list physical
c. After verifying the signature, ensure the BCV is unmounted on the backup host with
SIU.
4. In Failover Cluster Manager, take offline all disk resources that contain the database(s) targeted for
restore.
5. Run the appropriate TimeFinder restore command for the disks that hold the SQL database(s):
a. symmir -g DgName restore
i. If the restore is protected, move to step 6. Otherwise, wait for the restore to
complete, then split.
1. symmir -g DgName split
6. In Failover Cluster Manager, bring online all disk resources previously taken offline in step 4.
7. In Failover Cluster Manager, add back the disk dependencies previously removed in step 2.
8. Restore the database using the appropriate TF/SIM restore command.
9. If the option to use protected restore was used, monitor the progress of the restore and be sure to
split the BCV devices once they are in a restored state.
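
The graphical steps above can also be scripted with cluster.exe. The following minimal sketch covers the
dependency, disk offline/online, and TimeFinder restore steps (2 and 4 through 7) for an unprotected
restore; the resource names ("SQL Server (MSSQLSERVER)", "Cluster Disk 2") and device group name
(sql) are illustrative assumptions only:

   rem Remove the dependency so the SQL Server resource is not impacted by the disk offline
   cluster res "SQL Server (MSSQLSERVER)" /removedep:"Cluster Disk 2"

   rem Offline the disk, restore the BCV data, wait for completion, split, then bring the disk back online
   cluster res "Cluster Disk 2" /offline
   symmir -g sql restore -nop
   symmir -g sql verify -restored -i 15
   symmir -g sql split -nop
   cluster res "Cluster Disk 2" /online

   rem Re-add the dependency removed earlier
   cluster res "SQL Server (MSSQLSERVER)" /adddep:"Cluster Disk 2"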

Another feature with Windows Server 2008 failover clustering that helps with managing disk resources is
the self-healing disk functionality. Disk resources within failover clustering are uniquely identified based
on two parameters, the disk signature and the unique SCSI identifier of the LUN as reported by the
Symmetrix. Should either one of these identifiers not match that information already defined for the disk
resources, the cluster will automatically change the resource to reflect and otherwise correct the conflict.
For example, should a TimeFinder restore from a replica with an incorrect signature cause the production
LUN to conflict with its registered signature, the cluster will use the unique SCSI identifier to determine
that the correct disk is being used and subsequently change the signature of the resource in the registry
(cluster hive).

This functionality is also invoked in geographically dispersed clusters, such as those created with
SRDF/CE. When a disk resource moves from one Symmetrix array to another via SRDF, the disk
signature will remain the same; however, the unique LUN identifier will differ between the R1 device and
the R2 device. The self-healing mechanism of failover clustering will determine that the correct resource is
being used, based on the signature, and subsequently update the cluster hive to represent the unique SCSI
identifier of the new SRDF device from the opposite Symmetrix array.

Presenting multiple disk replicas to the same hosts
The MBR- and GPT-based disk signatures defined in the Disk types section are critical structures used
by Windows to uniquely identify storage devices. Windows uses the disk signature to identify mount
points in the case of basic disks and as a mechanism to determine shared storage in clustered environments.

Due to the importance of signature uniqueness, it is generally not recommended to present LUNs with the
same MBR or GPT disk signature to the same standalone or clustered Windows hosts. While it is not
recommended, in some environments it is required to present multiple disk-based replicas from the same
source to a common backup, test, or reporting host. Should multiple disk-based copies be presented to the
same Windows Server 2003 or 2008 host, and be read/write from a Symmetrix perspective, the conflicting
disk will automatically be resignatured by the operating system. In the case of basic disks, the multiple
copies will then be usable by the operating system.

It should also be noted that the volume GUID, commonly used as an identifier for scripting mount
operations with mountvol, will be modified as part of resignaturing operations. In the case of MBR basic
disks, the volume GUID is maintained in the registry; for GPT disks, the volume GUID is stored on disk. In
either case, scripts must be written to expect GUID changes whenever there is the possibility of signature
conflicts on a given backup, testing, or reporting host. SIU can help mitigate this concern by allowing
mount operations to be specified against the Symmetrix device ID using the -symdev option. It should be
noted that SIU relies on an up-to-date symapi database with respect to locally attached physical drive
numbers, as previously noted in the Discovering storage section. With this in mind, it is important to issue
the symcfg discover command when changes are made in an environment utilizing this functionality.
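
For example (the mount path and device number below are illustrative only), refreshing the SYMAPI
database and then mounting by Symmetrix device ID would look like:

   symcfg discover
   symntctl mount -path t:\report_copy -symdev 1A2 -sid 58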

Should any device be resignatured and later need to be restored to a production system, it is critical to
ensure the signature being restored is the same as the original source. EMC provides several mechanisms
for manipulating MBR-based disk signatures. MBR signatures can be modified from a Windows host with
access to the disk using SIU. The Symmetrix also has a native internal mechanism that can modify the
signature of an MBR disk. Specifically, the symlabel command can be used to assign a label to a device,
which is then applied on demand using the symdev relabel command and syntax.

Neither SIU nor the symlabel command can be used to manipulate GPT signatures. For Windows Server
2008, GPT signatures as well as MBR signatures can be manipulated using diskpart. Specifically, the
uniqueid disk syntax can be used to display, modify, or otherwise assign a specific MBR or GPT signature.
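
As a short illustration (the disk number and signature value are examples only), the diskpart uniqueid
syntax on Windows Server 2008 can be used as follows:

   diskpart
   DISKPART> select disk 4
   DISKPART> uniqueid disk                  (displays the current signature)
   DISKPART> uniqueid disk id=1A2B3C4D      (assigns a new MBR signature)

For a GPT disk, the id value is a GUID rather than an eight-digit hexadecimal MBR signature.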

Dynamic disks are unique in that they also rely on a private region, or database, at the end of each disk to
uniquely identify the available volumes to a system. Because of this private region, it is not possible to
present multiple dynamic disk-based copies to the same host and have them be available at the same time.
At the time of publication of this paper, there is no method available, either with native LDM tools or with
Veritas Storage Foundation for Windows, to modify the private region so that a host can access multiple
replicas of the same dynamic disk in parallel.

Much of the complexity and process discussed in this section can be eased and otherwise automated using
tools like EMC Replication Manager, which greatly simplifies the creation, mounting, and restoration of
disk-based replicas.
Conclusion
Symmetrix V-Max and Symmetrix DMX storage systems offer a wide range of connectivity, storage
capacity, performance, and replication options for Windows-based environments. Connectivity options are
extensively tested by EMC E-Lab to ensure the highest levels of availability in the industry. Flexibility in
deploying storage across various RAID architectures and disk types allows for tiering based on service
levels. Technologies like Virtual Provisioning provide for efficiencies in storage capacity utilization as
well as ease of provisioning. Additionally, storage-based replication technologies such as TimeFinder,
SRDF, and Open Replicator help to deliver business continuity, optimize backup processes, and enhance
reporting, testing, and QA, as well as protect customer resources in geographically dispersed datacenters.
All of these technologies are tested to ensure interoperability with the Windows platform and are
continually improved to further drive down cost and provide the lowest total cost of ownership.
References

White papers
Using diskpar and diskpart to Align Partitions on Windows Basic and Dynamic Disks
EMC Symmetrix V-Max and Microsoft Exchange Server Applied Technology
EMC Symmetrix V-Max and Microsoft SQL Server Applied Technology
Implementing Virtual Provisioning on EMC Symmetrix DMX with Microsoft Exchange 2007 Applied
Technology
Implementing Virtual Provisioning on EMC Symmetrix DMX with Microsoft SQL Server 2005
Applied Technology
EMC Symmetrix DMX-4 Flash Drives with Microsoft Exchange Applied Technology
EMC Symmetrix DMX-4 Flash Drives with Microsoft SQL Server Databases Applied Technology
EMC Symmetrix with Microsoft Hyper-V Virtualization Applied Technology

TechBooks
Microsoft Exchange Server on EMC Symmetrix Storage Systems
Microsoft SQL Server on EMC Symmetrix Storage Systems

Product documentation
EMC Solutions Enabler Symmetrix Array Management CLI Product Guide
EMC Solutions Enabler Symmetrix Array Controls CLI Product Guide
EMC Solutions Enabler Symmetrix CLI Command Reference
EMC Solutions Enabler Symmetrix Open Replicator CLI Product Guide
EMC Solutions Enabler Symmetrix SRDF Family Product Guide
EMC Solutions Enabler Symmetrix TimeFinder Family CLI Product Guide
Replication Manager Product Guide
SRDF/Cluster Enabler Plug-in Product Guide
TimeFinder/Integration Modules (TF/EIM, TF/SIM) Product Guide
EMC Host Connectivity Guide for Windows

