FAST VP for EMC® Symmetrix® VMAX™ Theory and Best Practices for Planning and Performance

Technical Notes
P/N 300-012-014 REV A02 June 2011

This technical notes document contains information on these topics:

Executive summary
Introduction and overview
Fully Automated Storage Tiering
FAST and FAST VP comparison
Theory of operation
Performance considerations
Product and feature interoperability
Planning and design considerations
Summary and conclusion
Appendix: Best practices quick reference

Executive summary

The EMC® Symmetrix VMAX™ series with Enginuity™ incorporates a scalable fabric interconnect design that allows the storage array to seamlessly grow from an entry-level configuration to a 2 PB system. Symmetrix VMAX provides predictable, self-optimizing performance and enables organizations to scale out on demand in Private Cloud environments. VMAX automates storage operations to exceed business requirements in virtualized environments, with management tools that integrate with virtualized servers and reduce administration time in private cloud infrastructures. Customers are able to achieve "always on" availability with maximum security, fully nondisruptive operations, and multi-site migration, recovery, and restart to prevent application downtime.

Enginuity 5875 for Symmetrix VMAX extends customer benefits in the following areas:

- More efficiency — Zero-downtime tech refreshes with Federated Live Migration, and lower costs with automated tiering
- More scalability — Up to 2x increased system bandwidth, with the ability to manage up to 10x more capacity per storage admin
- More security — Built-in Data at Rest Encryption
- Improved application compatibility — Increased value for virtual environments, including improved performance and faster provisioning

Information infrastructure must continuously adapt to changing business requirements. EMC Symmetrix Fully Automated Storage Tiering for Virtual Pools (FAST VP) automates tiered storage strategies in Virtual Provisioning™ environments by easily moving workloads between Symmetrix tiers as performance characteristics change over time. FAST VP performs data movements that improve performance and reduce costs, all while maintaining vital service levels.

Introduction and overview
EMC Symmetrix VMAX FAST VP for Virtual Provisioning environments automates the identification of data volumes for the purposes of relocating application data across different performance/capacity tiers within an array. FAST VP proactively monitors workloads at both the LUN and sub-LUN level in order to identify "busy" data that would
benefit from being moved to higher-performing drives. FAST VP will also identify less "busy" data that could be moved to higher-capacity drives without affecting existing performance. This promotion/demotion activity is based on policies that associate a storage group with multiple drive technologies, or RAID protection schemes, via thin storage pools, as well as on the performance requirements of the application contained within the storage group. Data movement executed during this activity is performed nondisruptively, without affecting business continuity and data availability.
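The policy relationship described above can be sketched as a toy model: a FAST VP policy is, in effect, a set of per-tier capacity limits attached to a storage group. The tier names, percentages, and function names below are illustrative assumptions, not an EMC data structure:

```python
# Toy model of a FAST VP policy: each tier named in the policy carries
# an upper limit on the percentage of the storage group's configured
# capacity that may reside on that tier. All values are illustrative.

policy = {"EFD": 10, "FC": 40, "SATA": 100}  # max % of group capacity per tier

def compliant(allocated_pct_by_tier, policy):
    """True when all data sits on tiers named in the policy and every
    tier's allocation is within the policy's upper limit."""
    return all(
        tier in policy and pct <= policy[tier]
        for tier, pct in allocated_pct_by_tier.items()
    )
```

In an actual FAST VP policy the per-tier limits must together cover at least 100 percent of the storage group's configured capacity; the sketch above does not enforce that.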

Audience
This technical notes document is intended for anyone who needs to understand FAST VP theory, best practices, and associated recommendations as necessary to achieve the best performance for FAST VP configurations. This document is specifically targeted at EMC customers, sales, and field technical staff who are either running FAST VP or are considering FAST VP for future implementation. Significant portions of this document assume a base knowledge regarding the implementation and management of FAST VP. For information regarding the implementation and management of FAST VP in Virtual Provisioning environments, please refer to the Implementing Fully Automated Storage Tiering for Virtual Pools (FAST VP) for EMC Symmetrix VMAX Series Arrays Technical Note (P/N 300-012-015).

Fully Automated Storage Tiering
Fully Automated Storage Tiering (FAST) automates the identification of data volumes for the purposes of relocating application data across different performance/capacity tiers within an array. The primary benefits of FAST include:

- Elimination of manually tiering applications when performance objectives change over time
- Automating the process of identifying data that can benefit from Enterprise Flash Drives or that can be kept on higher-capacity, less-expensive SATA drives without impacting performance
- Improving application performance at the same cost, or providing the same application performance at lower cost. Cost is defined as acquisition (both hardware and software), space/energy, and management expense
- Optimizing and prioritizing business applications, allowing customers to dynamically allocate resources within a single array
- Delivering greater flexibility in meeting different price/performance ratios throughout the lifecycle of the information stored

Due to advances in drive technology, and the need for storage consolidation, the number of drive types supported by Symmetrix has grown significantly. Several differences exist between the three drive technologies supported by the Symmetrix VMAX – Enterprise Flash Drive (EFD), Fibre Channel, and SATA. These drives span a range of storage service specializations and cost characteristics that differ greatly. The primary areas in which they differ are:

- Response time
- Cost per unit of storage capacity
- Cost per unit of storage request processing

At one extreme are EFDs, which have a very low response time and a low cost per unit of storage request processing, but a high cost per unit of storage capacity. At the other extreme are SATA drives, which have a low cost per unit of storage capacity, but high response times and a high cost per unit of storage request processing. In between these two extremes lie Fibre Channel drives.

Based on the nature of the differences that exist between these three drive types, the following observations can be made regarding the most suitable workload type for each:

- Enterprise Flash Drives — EFDs are best suited for workloads that have a high back-end random read storage request density. Such workloads take advantage of both the low response time provided by the drive and the low cost per unit of storage request processing, without requiring a lot of storage capacity.
- Fibre Channel drives — FC drives are the best drive type for workloads with a back-end storage request density that is not consistently high or low.
- SATA drives — SATA drives are suited to workloads that have a low back-end storage request density.

This disparity in suitable workloads presents both an opportunity and a challenge for storage administrators. To the degree that it can be arranged for storage workloads to be served by the best suited drive technology, the opportunity exists to improve application performance, reduce hardware acquisition expenses, and reduce operating expenses (including energy costs and space consumption). The challenge, however, lies in how to realize these benefits without introducing additional administrative overhead and complexity.

The approach taken with FAST is to automate the process of identifying which regions of storage should reside on a given drive technology, and to automatically and nondisruptively move storage between tiers to optimize storage resource usage accordingly. This must also be done while taking into account optional constraints on tier capacity usage that may be imposed on specific groups of storage devices.

FAST and FAST VP comparison

EMC Symmetrix VMAX FAST and FAST VP automate the identification of data volumes for the purposes of relocating application data across different performance/capacity tiers within an array. The goal of FAST and FAST VP is to optimize the performance and cost efficiency of configurations containing mixed drive technologies.

FAST operates on non-thin, or disk group provisioned, Symmetrix volumes. Data movements executed between tiers are performed at the full volume level. FAST VP operates on Virtual Provisioning thin devices. While FAST monitors and moves storage in units of entire logical devices, FAST VP monitors data access with much finer granularity, and data movements can be performed at the sub-LUN level. As such, a single thin device may have extents allocated across multiple thin pools within the array.

Note: For more information on FAST, please refer to the Implementing Fully Automated Storage Tiering (FAST) for EMC Symmetrix VMAX Series Arrays technical note available on Powerlink. For more information on Virtual Provisioning, please refer to the Best Practices for Fast, Simple Capacity Allocation with EMC Symmetrix Virtual Provisioning Technical Note available on Powerlink.

Aside from some shared configuration parameters, the management and operation of each can be considered separately. The administration procedures used with FAST VP are very similar to those available with FAST, the major difference being that the storage pools used by FAST VP are thin storage pools. Because FAST and FAST VP support different device types – non-thin and thin, respectively – both can operate simultaneously within a single array.

By more effectively exploiting drive technology specializations, FAST VP delivers better performance and greater cost efficiency than FAST. This is due to the fact that the workload may be adjusted by moving less data. In this way, FAST VP more closely aligns storage access workloads with the best suited drive technology than is possible if all regions of a given device must be mapped to the same tier. FAST VP also better adapts to shifting workload locality of reference, which further contributes to making FAST VP more effective at exploiting drive specializations and also enhances some of the operational advantages of FAST. The ability of FAST VP to monitor and move data with much finer granularity greatly enhances the value proposition of automated tiering.

Theory of operation

FAST VP builds upon and extends the existing capabilities of Virtual Provisioning and FAST to provide the user with enhanced Symmetrix tiering options. The Virtual Provisioning underpinnings of FAST VP allow it to combine the core benefits of Virtual Provisioning (wide striping and thin provisioning) with the benefits of automated tiering. This includes the ability to nondisruptively adjust the quality of storage service (response time and throughput) provided to a storage group, through changes in tier allocation limits or storage group priority.

FAST VP determines the most appropriate tier (based on optimizing performance and cost efficiency) for each 7,680 KB region of storage. At any given time, the hot regions of a thin device managed by FAST VP may be mapped to an EFD tier, while the warm parts may be mapped to an FC tier and the cold parts may be mapped to a SATA tier.

There are two components of FAST VP – Symmetrix microcode and the FAST controller. The Symmetrix microcode is a part of the Enginuity storage operating environment that controls components within the array. The FAST controller is a service that runs on the service processor.
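The hot/warm/cold placement described above can be illustrated with a toy classifier that assigns each extent group of a thin device to a tier by activity level. The thresholds and function names are hypothetical; the actual Enginuity decision logic also weighs cost efficiency and policy constraints:

```python
# Illustrative sketch of sub-LUN tiering: each 7,680 KB extent group is
# placed on the tier best suited to its activity. Thresholds are made up.

def place_extent_group(requests_per_hour):
    """Return a hypothetical tier for one extent group."""
    if requests_per_hour >= 1000:  # "hot" data -> low-latency flash
        return "EFD"
    if requests_per_hour >= 100:   # "warm" data -> Fibre Channel
        return "FC"
    return "SATA"                  # "cold" data -> high-capacity tier

def place_device(activity_per_group):
    """A single thin device may span all three tiers at once."""
    return [place_extent_group(a) for a in activity_per_group]
```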

Figure 1. FAST VP components

When FAST VP is active, both components participate in the execution of two algorithms – the intelligent tiering algorithm and the allocation compliance algorithm – to determine appropriate data placement.

The intelligent tiering algorithm uses performance data collected by the microcode, as well as supporting calculations performed by the FAST controller, to issue data movement requests to the VLUN VP data movement engine. The allocation compliance algorithm enforces the upper limits of storage capacity that can be used in each tier by a given storage group, also by issuing data movement requests to the VLUN VP data movement engine.

Performance time windows can be defined to specify when the FAST VP controller should collect performance data, upon which analysis is performed to determine the appropriate tier for devices. By default, this will occur 24 hours a day. Defined data movement windows determine when to execute the data movements necessary to move data between tiers.

The following sections further describe each of these algorithms.

Intelligent tiering algorithm

The goal of the intelligent tiering algorithm is to use the performance metrics collected at the sub-LUN level to determine which tier each extent group should reside in, and to submit the needed data movements to the Virtual LUN (VLUN) VP data movement engine. The algorithm is structured into two components: a main component, which executes within the Symmetrix microcode, and a secondary, supporting component, which executes within the FAST controller on the service processor.

The main component periodically assesses whether extent groups need to be moved in order to optimize the use of the FAST VP storage tiers. When determining the appropriate tier for each extent group, the main component makes use of both the FAST VP metrics previously discussed and supporting calculations performed by the secondary component on the service processor. If extent groups need to be moved, the required data movement requests are issued to the VLUN VP data movement engine.

The determination of which extent groups need to be moved is performed by a task that runs within the Symmetrix array. This task runs continuously during open data movement windows. As such, performance-related data movements can occur continuously during an open data movement window, when FAST is enabled and the FAST VP operating mode is Automatic.

Data movements performed by the microcode are achieved by moving allocated extents between tiers. The size of a data movement can be as small as 768 KB, representing a single allocated thin device extent, but will more typically be an entire extent group, which is 7,680 KB in size.

Allocation compliance algorithm

The goal of the allocation compliance algorithm is to detect and correct situations where the allocated capacity for a particular storage group within a thin storage tier exceeds the maximum capacity allowed by the associated FAST policy. A storage group is considered to be in compliance with its associated FAST policy when the configured capacity of the thin devices in the storage group is located on tiers defined in the policy, and when the usage of each tier is within the upper limits of the tier usage limits specified in the policy.

Compliance violations may occur for multiple reasons, including:

- New extent allocations performed for thin devices managed by FAST VP
- Changes made to the upper usage limits for a VP tier in a FAST policy
- Adding thin devices to a storage group that are themselves out of compliance
- Manual VLUN VP migrations of thin devices

When a compliance violation exists, the algorithm will generate a data movement request to return the allocations to within the required limits. These movements bring the capacity of the storage group back within the boundaries specified by the associated policy. The compliance algorithm will attempt to minimize the amount of movement performed to correct compliance that may, in turn, generate movements performed by the intelligent tiering algorithm. This is done by coordinating the movement requests with the analysis performed by the intelligent tiering algorithm when determining the most appropriate extents to be moved. Performance information from the intelligent tiering algorithm is used to determine the most appropriate sub-extents, and the most appropriate tier, when restoring compliance.

The compliance algorithm runs every 10 minutes during open data movement windows, when FAST is enabled and the FAST VP operating mode is Automatic.

Data movement

Data movements executed by FAST VP are performed by the VLUN VP data movement engine, and involve moving thin device extents between thin pools within the array. To complete a move, the following must hold true:

- The FAST VP operating mode must be Automatic.
- The VP data movement window must be open.
- The thin device affected must not be pinned.
- There must be sufficient unallocated space in the thin pools included in the destination tier to accommodate the data being moved.
- The destination tier must contain at least one thin pool that has not exceeded the pool reserved capacity (PRC).

Note: If the selected destination tier contains only pools that have reached the PRC limit, then an alternate tier may be considered by the movement task.

There are two types of data movement that can occur under FAST VP – movements generated by the intelligent tiering algorithm and movements generated by the allocation compliance algorithm. Intelligent tiering related movements are requested and executed by the Symmetrix microcode. These movements are governed by the workload on each extent group, but will only be executed within the constraints of the associated FAST policy. That is, a performance movement will not cause a storage group to become non-compliant with its FAST policy. Allocation compliance related movements are generated by the FAST controller and executed by the microcode. Such a request will explicitly indicate which thin device extents should be moved, and the specific thin pools they should be moved to.

Both types of data movement will only occur during user-defined data movement windows, which specify date and time ranges when data movements are allowed, or not allowed, to be performed. Data movement windows can be planned so as to minimize impact on the performance of other, more critical workloads, and FAST VP data movements run as low-priority tasks on the Symmetrix back end.

The movement of extents, or extent groups, does not change the thin device binding information. That is, the thin device will still remain bound to the pool it was originally bound to, and new allocations for the thin device, as the result of host writes, will continue to come from the bound pool.

Other movement considerations include:

- Only extents that are allocated will be moved.
- Extents are moved via a move process only; extents are not swapped between pools. As swaps are not performed, there is no requirement for any swap space, such as DRVs, to facilitate data movement.
- No back-end configuration changes are performed during a FAST VP data movement, and as such no configuration locks are held during the process.
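The preconditions for a move listed above can be gathered into a single check. This is a simplified model with assumed field names for pool state, not an EMC API:

```python
# Sketch of the FAST VP move preconditions: operating mode, movement
# window, device pinning, destination free space, and the pool reserved
# capacity (PRC). Pool dictionaries use illustrative field names.

def can_move(mode, window_open, device_pinned, dest_pools, move_mb):
    """dest_pools: dicts with 'free_mb', 'capacity_mb' and 'prc_pct'."""
    if mode != "Automatic" or not window_open or device_pinned:
        return False
    for pool in dest_pools:
        unallocated_pct = 100.0 * pool["free_mb"] / pool["capacity_mb"]
        # At least one destination pool must still be above its PRC and
        # have room for the data being moved.
        if unallocated_pct > pool["prc_pct"] and pool["free_mb"] >= move_mb:
            return True
    return False
```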

Performance considerations

Performance data for use by FAST VP is collected and maintained by the Symmetrix microcode. This data is then analyzed by the FAST controller, and guidelines are generated for the placement of thin device data on the defined VP tiers within the array.

Performance metrics

When collecting performance data at the LUN and sub-LUN level for use by FAST VP, the Symmetrix microcode only collects statistics related to Symmetrix back-end activity that is the result of host I/O. The metrics collected are:

- Read miss
- Write
- Prefetch (sequential read)

The read miss metric accounts for each DA read operation that is performed. Reads to areas of a thin device that have not had space allocated in a thin pool are not counted. Read hits, which are serviced from cache, are not considered.

Write operations are counted in terms of the number of distinct DA operations that are performed. The metric accounts for when a write is destaged – write hits, to cache, are not considered. In the case of RAID 1 protected devices, the write I/O is only counted for one of the mirrors. In the case of RAID 5 and RAID 6 protected devices, parity reads and writes are not counted. Writes related to specific RAID protection schemes will also not be counted.

Prefetch operations are accounted for in terms of the number of distinct DA operations performed to prefetch data spanning a FAST VP extent. This metric considers each DA read operation performed as a front-end prefetch operation.

Workload related to internal copy operations, such as drive rebuilds, clone operations, VLUN migrations, or even FAST VP data movements, is not included in the FAST VP metrics.

These FAST VP performance metrics provide a measure of activity that assigns greater weight to more recent I/O requests, but they are also influenced by less recent activity. By default, based on a Workload Analysis Period of 24 hours, an I/O that has just been received is weighted two times more heavily than an I/O received 24 hours previously.
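The recency weighting described above — a just-received I/O counting twice as much as one received 24 hours earlier, under the default 24-hour Workload Analysis Period — is consistent with exponential decay whose half-life equals the analysis period. The sketch below models it that way; the actual Enginuity smoothing function is not published in this document:

```python
# Model of recency-weighted activity: each I/O's contribution halves
# every Workload Analysis Period (24 hours by default). This is an
# assumed model of the behavior described in the text.

def io_weight(age_hours, analysis_period_hours=24.0):
    """Weight of one I/O observed age_hours ago."""
    return 0.5 ** (age_hours / analysis_period_hours)

def activity_score(io_ages_hours):
    """Recency-weighted activity score for an extent group."""
    return sum(io_weight(age) for age in io_ages_hours)
```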

Note: Performance metrics are only collected during user-defined performance time windows. The times during which metrics are not being collected do not contribute to reducing the weight assigned to those metrics already collected.

The metrics collected at the sub-LUN level for thin devices under FAST VP control contain measurements that allow FAST VP to make separate data movement requests for each 7,680 KB unit of storage that makes up the thin device. This unit of storage consists of 10 contiguous thin device extents and is known as an extent group.

In order to maintain the sub-LUN-level metrics, the Symmetrix allocates one cache slot for each thin device that is under FAST VP control. When managing metadevices, cache slots are allocated for both the metahead and for each of the metamembers. If a thin device is removed from FAST VP control, then the cache slot reserved for collecting and maintaining the sub-LUN statistics is released. This can be done either by removing the thin device from a storage group associated with a FAST policy or by disassociating the storage group from a policy.

Note: A Symmetrix VMAX cache slot represents a single track of 64 KB.

FAST VP tuning

FAST VP provides a number of parameters that can be used to tune the performance of FAST VP and to control the aggressiveness of the data movements. These parameters can be used to nondisruptively adjust the amount of tier storage that a given storage group is allowed to use, the priority given to moving data between pools, or the manner in which storage groups using the same tier compete with each other for space.

Relocation Rate

The Relocation Rate is a quality of service (QoS) setting for FAST VP that affects the "aggressiveness" of the data movement requests generated by FAST VP. This aggressiveness is measured as the amount of data that will be requested to be moved at any given time.

Note: The rate at which data is moved between pools can also be controlled via the Symmetrix Quality of Service VLUN setting.
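The units used in this section fit together by simple arithmetic — a thin device extent is 768 KB, an extent group is 10 contiguous extents, and a cache slot represents one 64 KB track:

```python
# Arithmetic behind the FAST VP units of storage described above.

TRACK_KB = 64           # one Symmetrix VMAX cache slot / track
EXTENT_KB = 768         # one thin device extent
EXTENTS_PER_GROUP = 10  # extents per extent group

extent_group_kb = EXTENT_KB * EXTENTS_PER_GROUP  # the 7,680 KB unit
tracks_per_extent = EXTENT_KB // TRACK_KB
tracks_per_group = extent_group_kb // TRACK_KB
```

So the smallest possible movement (one allocated extent) covers 12 tracks, while a typical extent-group movement covers 120 tracks.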

Pool reserved capacity

The PRC reserves a percentage of each pool included in a VP tier for non-FAST VP activities. The purpose of this is to ensure that FAST VP data movements do not fill a thin pool and subsequently cause a new extent allocation, the result of a host write, to fail. When the percentage of unallocated space in a thin pool is equal to the PRC, FAST VP will no longer perform data movements into that pool. However, data movements may continue to occur out of the pool to other pools. When the percentage of unallocated space becomes greater than the PRC, FAST VP can begin performing data movements into that pool again.

The PRC can be set both system-wide and for each individual pool. By default, the system-wide setting is applied to all thin pools that have been included in VP tier definitions. However, this can be overridden for each individual pool by using the pool-level setting.

FAST VP time windows

FAST VP utilizes time windows to define certain behaviors regarding performance data collection and data movement. There are two possible window types:

- Performance time window
- Data movement time window

The performance time windows are used to specify when performance metrics should be collected by the microcode. The data movement time windows define when to perform the data relocations necessary to move data between tiers. Separate data movement windows can be defined for full LUN movement, performed by FAST and Optimizer, and for sub-LUN data movement, performed by FAST VP.

Both performance time windows and data movement windows may be defined as inclusion or exclusion windows. An inclusion time window indicates that the action should be performed during the defined time window. An exclusion time window indicates that the action should be performed outside the defined time window.

Performance time window

The performance time windows are used to identify the business cycle for the Symmetrix array. They specify date and time ranges (past or future) when performance samples should be collected, or not collected, for the purposes of FAST VP performance analysis. The intent of defining performance time windows is to distinguish periods of time when the Symmetrix is idle from periods when the Symmetrix is active, and to only include performance data collected during the active periods. By default, performance metrics will be collected 24 hours a day, 7 days a week, 365 days a year.
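Inclusion and exclusion windows, as described above, come down to a simple predicate: perform the action inside the window, or only outside it. Hours are reduced to whole numbers here for illustration; real windows are date and time ranges:

```python
# Sketch of inclusion vs. exclusion time windows. An inclusion window
# permits the action (metric collection or data movement) inside the
# window; an exclusion window permits it only outside.

def action_allowed(hour, start_hour, end_hour, inclusion=True):
    inside = start_hour <= hour < end_hour
    return inside if inclusion else not inside
```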

Data movement time window

Data movement time windows are used to specify date and time ranges when data movements are allowed, or not allowed, to be performed. By default, data movement is prevented 24 hours a day, 7 days a week, 365 days a year.

Storage group priority

When a storage group is associated with a FAST policy, a priority value must be assigned to the storage group. This priority value can be between 1 and 3, with 1 being the highest priority; the default is 2. When multiple storage groups share the same policy, the priority value is used when the data contained in the storage groups is competing for the same resources in one of the associated tiers. Storage groups with a higher priority will be given preference when deciding which data needs to be moved to another tier.

Product and feature interoperability

FAST VP is fully interoperable with all Symmetrix replication technologies – EMC SRDF®, EMC TimeFinder®/Clone, TimeFinder/Snap, and Open Replicator. Any active replication on a Symmetrix device remains intact while data from that device is being moved. FAST VP also operates alongside Symmetrix features such as Symmetrix Optimizer, Dynamic Cache Partitioning, and Auto-provisioning Groups.

SRDF

Thin SRDF devices, R1 or R2, in either synchronous or asynchronous mode, can be associated with a FAST policy. Extents of SRDF devices can be moved between tiers while the devices are being actively replicated.

While there are no restrictions on the ability to manage SRDF devices with FAST VP, what must be considered is that data movements are restricted to the array upon which FAST VP is operating, with FAST VP acting independently on both the local and remote arrays. There is no coordination of data movements on both sides of the link. This means that, in an SRDF failover scenario, the remote Symmetrix array will have different performance characteristics than the local, production array being failed over from. Also, in an SRDF/Asynchronous environment, FAST VP data movements on the production R1 array could result in an unbalanced configuration between R1 and R2 (where the performance characteristics of the R2 device are lower than those of the paired R1 device).

TimeFinder/Clone

Both the source and target devices of a TimeFinder/Clone session can be managed by FAST VP. However, the source and target will be managed independently, and as such may end up with different extent allocations across tiers.

TimeFinder/Snap

The source device in a TimeFinder/Snap session can be managed by FAST VP. However, target device VDEVs are not managed by FAST VP.

Open Replicator for Symmetrix

The control device in an Open Replicator session, push or pull, can have extents moved by FAST VP.

Virtual Provisioning

All thin devices, whether under FAST VP control or not, may only be bound to a single thin pool. All host write generated allocations, or user requested pre-allocations, are performed to this pool. FAST VP data movements do not change the binding information for a thin device, even though a device under FAST VP control may have extents allocated across multiple thin pools.

It is possible to change the binding information for a thin device without changing any of the current extent allocations for the device. However, when rebinding a device that is under FAST VP control, the thin pool the device is being re-bound to must belong to one of the VP tiers contained in the policy the device is associated with.

Virtual Provisioning space reclamation

Space reclamation may be run against a thin device under FAST VP control. However, during the space reclamation process, no sub-LUN performance metrics will be updated, and no data movements will be performed.

Prior to issuing the space reclamation task. 16 FAST VP for EMC Symmetrix VMAX Theory and Best Practices for Planning and Performance . While both data device draining and automated pool rebalancing may be active in a thin pool that is included in a VP tier. that extent can be deallocated and free space returned to the pool. automated pool rebalancing may be run. both of these processes may affect performance of FAST VP data movements. Similarly. If the unmap command range covers only some tracks in an extent. Virtual Provisioning pool management Data devices may be added to or removed from a thin pool that is included in the FAST VP tier. a request to reclaim space on that device will fail. This will suspend any active FAST VP data movements for the device and allow the request to succeed. The extent is not deallocated. Such a migration will result in all allocated extents of the device being moved to a single thin pool. however. those tracks are marked Never Written by Host (NWBH). no FAST VP related data movements will be performed. will continue while the data devices are being modified. Note: If FAST VP is actively moving extents of a device. Virtual LUN VP Mobility A thin device under FAST VP control may be migrated using VLUN VP. however those tracks will not have to be retrieved from disk should a read request be performed. Virtual Provisioning T10 unmap Unmap commands can be issued to thin devices under FAST VP control. If this range covers a full thin device extent. Once the migration is complete. the device should first be pinned. While the migration is in progress. into or out of the thin pool. In the case of adding data devices to a thin pool. The T10 SCSI unmap command for thin devices advises a target thin device that a range of blocks are no longer in use. when disabling and removing data devices from the pool. Instead. they will drain their allocated tracks to other enabled data devices in the pool. 
all allocated extents of the thin device will be available to be retiered. Data movements related to FAST VP.Product and feature interoperability performed. the Symmetrix array will immediately return all zeros.
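As a rough mental model of the unmap handling just described, the following sketch (hypothetical code, not Enginuity internals; the 12-track, 768 KB extent size is the standard VMAX thin device extent, while the data structures are invented for illustration) deallocates fully covered extents and marks partially covered tracks NWBH:

```python
TRACKS_PER_EXTENT = 12  # one VMAX thin device extent = 12 tracks (768 KB)

def process_unmap(extents, first_track, track_count):
    """Apply a T10 unmap covering [first_track, first_track + track_count).

    extents: dict extent_id -> {"allocated": bool, "nwbh": set of track offsets}
    Returns the number of extents deallocated back to the pool.
    """
    freed = 0
    end = first_track + track_count  # exclusive
    for ext in range(first_track // TRACKS_PER_EXTENT,
                     (end - 1) // TRACKS_PER_EXTENT + 1):
        base = ext * TRACKS_PER_EXTENT
        covered = range(max(base, first_track) - base,
                        min(base + TRACKS_PER_EXTENT, end) - base)
        if len(covered) == TRACKS_PER_EXTENT:
            # Full extent covered: deallocate it; free space returns to the pool.
            extents[ext]["allocated"] = False
            extents[ext]["nwbh"].clear()
            freed += 1
        else:
            # Partial coverage: tracks are only marked Never Written by Host;
            # reads of these tracks return zeros without going to disk.
            extents[ext]["nwbh"].update(covered)
    return freed
```

A full-extent unmap returns capacity to the pool; a partial unmap leaves the extent allocated but lets reads of the unmapped tracks return zeros without a disk access.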

To prevent the migrated device from being retiered by FAST VP immediately following the migration, it is recommended that the device first be pinned. To re-enable FAST VP-related data movements, the device can later be unpinned.

FAST

Both FAST and FAST VP may coexist within a single Symmetrix. FAST will only perform full device movements of non-thin devices. As such, there will be no impact on FAST VP's management of thin devices. Both FAST and FAST VP do share some configuration parameters. These are:

Workload Analysis Period
Initial Analysis Period
Performance Time Windows

Symmetrix Optimizer

Symmetrix Optimizer operates only on non-thin devices. As such, there will be no impact to FAST VP's management of thin devices. Both Optimizer and FAST VP share some configuration parameters. These are:

Workload Analysis Period
Initial Analysis Period
Performance Time Windows

Dynamic Cache Partitioning (DCP)

Dynamic Cache Partitioning can be used to isolate the storage handling of different applications. As data movements use the same cache partition as the application, movements of data on behalf of one application do not affect the performance of applications that are not sharing the same cache partition.

Auto-provisioning Groups

Storage groups created for the purposes of Auto-provisioning may also be used for FAST VP. However, while a device may be contained in multiple storage groups for the purposes of Auto-provisioning, it may only be contained in one storage group that is associated with a FAST policy (DP or VP). Should a storage group contain a mix of device types, thin and non-thin, only the devices matching the type of FAST policy it is associated with will be managed by FAST. If it is intended that both device types in an Auto-provisioning storage group be managed by FAST and FAST VP, then separate storage groups will need to be created. A separate storage group containing the thin devices will be associated with a policy containing VP tiers. A storage group with the non-thin devices may then be associated with a policy containing DP tiers.

Planning and design considerations

The following sections detail best practice recommendations for planning the implementation of a FAST VP environment. The best practices documented are based on features available in Enginuity 5875, Solutions Enabler 7.3, and Symmetrix Management Console 7.3.

FAST VP configuration parameters

FAST VP includes multiple configuration parameters that control its behavior. These include settings to determine the effect of past workloads on data analysis, performance collection and data movement time windows, quality of service for data movements, and pool space to be reserved for non-FAST VP activities. The following sections describe best practice recommendations for each of these configuration parameters.

Note: For more information on each of these configuration parameters, refer to the Implementing Fully Automated Storage Tiering for Virtual Pools (FAST VP) for EMC Symmetrix VMAX Series Arrays technical note available on Powerlink.

Performance time window

The performance time windows specify date and time ranges when performance metrics should be collected, or not collected, for the purposes of FAST VP performance analysis. By default, performance metrics are collected 24 hours a day, every day. Time windows may be defined, however, to include only certain hours or days of the week, as well as to exclude other time periods. As a best practice, the default performance window should be left unchanged.
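For orientation, the parameters discussed in the following sections, with the default values this document cites, can be summarized in one place. This is a reader's sketch only; the class and field names are not a Solutions Enabler or SMC interface:

```python
from dataclasses import dataclass

@dataclass
class FastVPDefaults:
    """Default values as stated in this document; names are illustrative."""
    workload_analysis_period_hours: int = 168   # 7 days
    initial_analysis_period_hours: int = 8
    relocation_rate: int = 5                    # 1 = most aggressive, 10 = least
    pool_reserved_capacity_pct: int = 10        # system-wide default

    def __post_init__(self):
        # The FRR is defined on a 1-10 scale.
        if not 1 <= self.relocation_rate <= 10:
            raise ValueError("relocation rate must be between 1 and 10")
```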

This will allow FAST VP to react more quickly and more dynamically to any changes in workload that occur on the array. However, if there are extended periods of time when the workloads managed by FAST VP are not active, these time periods should be excluded. If multiple applications are active on the array, but active at different times, then the default performance time window behavior should be left unchanged.

Note: The performance time window is applied system-wide.

Data movement time window

Data movement time windows are used to specify date and time ranges when data movements are allowed, or not allowed, to be performed. Unless there are specific time periods to avoid data movements, for example, during a backup window, it may be appropriate to set the data movement window to allow FAST VP to perform movements 24 hours a day, every day. The best practice recommendation is that, at a minimum, the data movement window should allow data movements for the same period of time that the performance time windows allow data collection.

Note: If there is a concern about the possible impact of data movements occurring during a production workload, then the FAST VP Relocation Rate can be used to minimize this impact.

Workload Analysis Period

The Workload Analysis Period (WAP) determines the degree to which FAST VP metrics are influenced by recent host activity, and also by less recent host activity, that takes place while the performance time window is considered open. The longer the time defined in the workload analysis period, the greater the amount of weight assigned to less recent host activity. The best practice recommendation for the workload analysis period is to use the default value of 7 days (168 hours).

Initial Analysis Period

The Initial Analysis Period (IAP) defines the minimum amount of time a thin device should be under FAST VP management before any performance-related data movements are applied. This parameter should be set to a long enough value to allow sufficient data samples for FAST VP to establish a good characterization of the typical workload on that device.
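The inclusion/exclusion window logic described above can be sketched as follows (assumed semantics for illustration; the actual window definitions are managed through Solutions Enabler or SMC):

```python
from datetime import datetime

def window_open(ts, include, exclude):
    """include/exclude: lists of (days, start_hour, end_hour) tuples; days is a
    set of weekday numbers (Monday = 0); hours are a half-open range."""
    def matches(win):
        days, start, end = win
        return ts.weekday() in days and start <= ts.hour < end
    return any(map(matches, include)) and not any(map(matches, exclude))

include = [(set(range(7)), 0, 24)]   # the default: 24 hours a day, every day
exclude = [(set(range(5)), 1, 5)]    # e.g. a weeknight backup window, 01:00-05:00
```

With these definitions, a Wednesday 03:00 timestamp falls inside the backup exclusion and is closed, while the same time on a Saturday is open.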

At the initial deployment of FAST VP, it may make sense to set the initial analysis period to at least 24 hours to ensure that a typical daily workload cycle is seen. However, once FAST VP data movement has begun, setting the IAP back to the default of 8 hours will allow newly associated devices to benefit from FAST VP movement recommendations more quickly.

FAST VP Relocation Rate

The FAST VP Relocation Rate (FRR) is a quality of service (QoS) setting for FAST VP. The FRR affects the "aggressiveness" of data movement requests generated by FAST VP. This aggressiveness is measured as the amount of data that will be requested to be moved at any given time, and the priority given to moving the data between pools.

Setting the FRR to the most aggressive value, 1, will cause FAST VP to attempt to move the most data it can, as quickly as it can. Dependent on the amount of data to be moved, an FRR of 1 will be more likely to cause impact to host I/O response times, due to the additional back-end overhead being generated by the FAST VP data movements. However, the distribution of data across tiers will be completed in a shorter period of time. Setting the FRR to the least aggressive value, 10, will cause FAST VP to greatly reduce the amount of data that is moved and the pace at which it is moved. This setting will cause no impact to host response time, but the final distribution of data will take longer.

Figure 2 shows the same workload, 1,500 IOPS of type OLTP2, being run on an environment containing two FAST VP tiers, Fibre Channel (FC) and Enterprise Flash (EFD). The same test was carried out with three separate relocation rates – 1, 5, and 8. With a FRR of 1, an initial increase in response time is seen at the two-hour mark, when FAST VP data movement was initiated, while no increase in response time was seen when the relocation rate was set to 8. However, the steady state for the response time is seen in a much shorter period of time for the lower, more aggressive, setting.
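FAST VP's metric formulas are not published in this document, but the WAP behavior described above, where a longer window gives more weight to less recent activity, behaves like an exponentially decayed average whose half-life grows with the window. The following is purely illustrative:

```python
def decayed_average(samples_per_hour, window_hours):
    """Weight hourly I/O samples (oldest first) so that older activity
    decays with a half-life tied to the analysis window length."""
    half_life = window_hours / 2.0
    weight_sum = value_sum = 0.0
    for age, value in enumerate(reversed(samples_per_hour)):  # age 0 = newest
        w = 0.5 ** (age / half_life)
        weight_sum += w
        value_sum += w * value
    return value_sum / weight_sum

# An idle week followed by an 18-hour burst of 500 IOPS:
history = [0.0] * 150 + [500.0] * 18
hot_short = decayed_average(history, window_hours=24)   # short window: looks hot
hot_long = decayed_average(history, window_hours=168)   # long window: idle week still counts
```

The short window scores the device much "hotter" than the long window, which still remembers the idle period; this mirrors the trade-off the WAP setting controls.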

Figure 2. Example workload with varying relocation rates

The default value for the FRR is 5. However, for the initial deployment of FAST VP, the best practice recommendation is to start with a more conservative value for the relocation rate, perhaps 7 or 8. The reason for this is that when FAST VP is first enabled, the amount of data to be moved is likely to be greater, compared to when FAST VP has been running for some time. At a later date, when it is seen that the amount of data movement between tiers is less, the FRR can be set to a more aggressive level, possibly 2 or 3. This will allow FAST VP to adjust to small changes in workload more quickly.

Pool Reserved Capacity

The Pool Reserved Capacity (PRC) reserves a percentage of each pool included in a VP tier for non-FAST VP activities. When the percentage of unallocated space in a thin pool is equal to the PRC, FAST VP will no longer perform data movements into that pool. The PRC can be set both as a system-wide setting and for each individual pool. If the PRC has not been set for a pool, or the PRC for the pool has been set to NONE, then the system-wide setting is used.

For the system-wide setting, the best practice recommendation is to use the default value of 10 percent. For individual pools, if thin devices are bound to the pool, the best practice recommendation is to set the PRC based on the lowest allocation warning level for that thin pool. For example, if a warning is triggered when a thin pool has reached an allocation of 80 percent of its capacity, then the PRC should be set to 20 percent. This will ensure that the remaining 20 percent of the pool will only be used for new host-generated allocations, and not FAST VP data movements. If no thin devices are bound to the pool, or are going to be bound, then the PRC should be set to the lowest possible value, 1 percent.

The PRC value only affects the ability of FAST VP to move data into a pool.

Note: If the PRC is increased, causing a thin pool to be within the PRC limit, FAST VP will not automatically start moving data out of the pool.

FAST VP policy configuration

A FAST VP policy groups between one and three VP tiers and assigns an upper usage limit for each storage tier. The upper limit specifies the maximum amount of capacity of a storage group associated with the policy that can reside on that particular tier. The upper capacity usage limit for each storage tier is specified as a percentage of the configured, logical capacity of the associated storage group.

Figure 3. Storage tier, policy, storage group association

The usage limit for each tier must be between 1 percent and 100 percent. When combined, the upper usage limits for all thin storage tiers in the policy must total at least 100 percent, but may be greater than 100 percent. Creating a policy with a total upper usage limit greater than 100 percent allows flexibility with the configuration of a storage group, whereby data may be moved between tiers without necessarily having to move a corresponding amount of other data within the same storage group.

The ideal FAST VP policy would be 100 percent EFD, 100 percent FC, and 100 percent SATA. Such a policy would provide the greatest amount of flexibility to an associated storage group, as it would allow 100 percent of the storage group's capacity to be promoted or demoted to any tier within the policy.
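The policy rules above (one to three tiers, each limit between 1 and 100 percent, limits totaling at least 100 percent) can be captured in a small validation sketch; the function is illustrative, not an EMC API:

```python
def validate_policy(tier_limits):
    """tier_limits: dict of tier name -> upper usage limit in percent."""
    if not 1 <= len(tier_limits) <= 3:
        raise ValueError("a FAST VP policy groups between one and three VP tiers")
    for tier, limit in tier_limits.items():
        if not 1 <= limit <= 100:
            raise ValueError(f"{tier}: usage limit must be between 1 and 100 percent")
    if sum(tier_limits.values()) < 100:
        raise ValueError("combined usage limits must total at least 100 percent")
    return True

# The "ideal" policy from the text: maximum flexibility.
validate_policy({"EFD": 100, "FC": 100, "SATA": 100})
# A restrictive policy: at most 5 percent of the group's capacity on EFD.
validate_policy({"EFD": 5, "FC": 50, "SATA": 100})
```

A total above 100 percent is what lets data move to one tier without a matching amount moving off another tier within the same storage group.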

While ideal, operationally it may not be appropriate to deploy the 100/100/100 policy. There may be reasons to limit access to a particular tier within the array. As an example, it may be appropriate to limit the amount of a storage group's capacity that can be placed on EFD. This may be used to prevent one single storage group, or application, from consuming all of the EFD resources. Similarly, it may be appropriate to restrict the amount of SATA capacity a storage group will utilize. Some applications, which can become inactive from time to time, may require a minimum level of performance when they become active again. For such applications a policy excluding the SATA tier could be appropriate.

The best way to determine appropriate policies for a FAST VP implementation is to examine the workload skew for the application data to be managed by FAST VP. The workload skew defines an asymmetry in data usage over time, meaning a small percentage of the data on the array may be servicing the majority of the workload on the array. In this case, a policy containing just a small percentage for EFD would be recommended. One tool that provides insight into this workload skew is Tier Advisor.

Tier Advisor

Tier Advisor is a utility, available to EMC technical staff, that estimates the performance and cost of mixing drives of different technology types (EFD, FC, and SATA) within Symmetrix VMAX storage arrays. Tier Advisor can examine performance data collected from Symmetrix, VNX, or CLARiiON storage arrays and determine the workload skew at the full LUN level. It can also estimate the workload skew at the sub-LUN level. With this information, Tier Advisor can model an optimal storage array configuration by enabling the ability to interactively experiment with different storage tiers and storage policies until achieving the desired cost and performance preferences.
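The notion of workload skew can be made concrete with a small sketch: given per-extent I/O rates, find the smallest fraction of capacity that services a target share of the total workload. This only illustrates the concept; it is not Tier Advisor's published method:

```python
def capacity_fraction_for_load(extent_iops, load_target=0.8):
    """Smallest fraction of extents (by count) serving load_target of total I/O."""
    total = float(sum(extent_iops))
    served = 0.0
    for n, iops in enumerate(sorted(extent_iops, reverse=True), start=1):
        served += iops
        if served >= load_target * total:
            return n / len(extent_iops)
    return 1.0

# A skewed workload: two hot extents out of ten carry most of the I/O,
# suggesting a policy with only a small EFD percentage.
skew = capacity_fraction_for_load([400, 350, 30, 25, 20, 15, 10, 5, 3, 2])
```

Here 20 percent of the capacity serves 80 percent of the I/O; a perfectly uniform workload would show no such skew.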

Thin device binding

Each thin device, whether under FAST VP control or not, may only be bound to a single thin pool. All host write generated allocations, or user requested pre-allocations, are performed from this pool. FAST VP data movements do not change the binding information for a thin device. Host writes to unallocated areas of a thin device will fail if there is insufficient space in the bound pool. In determining the appropriate pool to bind thin devices to, both performance requirements and capacity management should be taken into consideration, as well as the use of thin device preallocation and the system write pending limit.

Performance consideration

If no advance knowledge of the expected workload on newly written data is available, the best practice recommendation is to bind all thin devices to a pool within the second highest tier. In a three-tier configuration (EFD, FC, and SATA), this would imply the Fibre Channel tier. In an EFD and SATA configuration, or a FC and SATA configuration, this would imply the SATA tier. Once the data has been written, FAST VP will then make the decision to promote or demote the data as appropriate.

If it is known that newly allocated tracks will not be accessed by a host application for some time after the initial allocation, then it may be appropriate to bind the thin devices to the SATA tier.

Note: Unless there is a very specific performance need, it is not recommended to bind thin devices, under FAST VP control, to the EFD tier.

Capacity management consideration

Binding all thin devices to a single pool will, ultimately, cause that single pool to be oversubscribed. This could potentially lead to issues as the pool fills up and space is allocated in the pool. However, if the FC tier is significantly smaller than the SATA tier, binding all thin devices to the SATA tier will reduce the likelihood of the bound pool filling up. If performance needs require the thin devices to be bound to FC, it is recommended to bind thin devices to a pool in the FC tier; the FAST VP policy configuration, the Pool Reserved Capacity, or a combination of both, can be used to alleviate a potential pool full condition.

From an ease-of-management and reporting perspective, it is recommended that all thin devices be bound to a single pool within the Symmetrix array.

Preallocation

A way to avoid writes to a thin device failing due to a pool being fully allocated is to preallocate the thin device when it is bound. However, when FAST VP performs data movements, only allocated extents are moved. This applies not only to extents allocated as the result of a host write, but also to extents that have been preallocated. These preallocated extents will be moved even if no data has yet been written to them.

Preallocated, but unwritten, extents will show as inactive and as such will be demoted to the lowest tier included in the associated FAST VP policy. When these extents are eventually written to, the write performance will be that of the tier they have been demoted to. As such, the performance requirements of newly written data should be considered prior to using preallocation.

A best practice recommendation is to not preallocate thin devices managed by FAST VP. Preallocation should only be used selectively for those devices that can never tolerate a write failure due to a full pool.

Note: When moving preallocated, but unwritten, extents no data is actually moved. The pointer for the extent is simply redirected to the pool in the target tier.

System write pending limit

FAST VP is designed to throttle back data movements, both promotions and demotions, as the write pending count approaches the system write pending limit. If the write pending count reaches 60 percent of the write pending limit, FAST VP data movements will stop. This throttling gives an even higher priority to host I/O, to ensure that tracks marked as write pending are destaged appropriately. As the write pending count decreases below this level, data movements will automatically restart.

In an environment with a high write workload, a very busy workload running on SATA disks, with the SATA disks at, or near, 100 percent utilization causing a high write pending count, will prevent FAST VP from promoting active extents to the FC and/or EFD tiers.

Note: By default, the system write pending limit on a Symmetrix VMAX running Enginuity 5875 is set to 75 percent of the available cache.

Migration

When performing a migration to thin devices, the thin devices being migrated to should be bound to a pool that has sufficient capacity to contain the full capacity of each of the devices, as it is possible that the thin devices will become fully allocated as a result of the migration. Virtual Provisioning zero space reclamation can be used following the migration to deallocate zero data copied during the migration. Alternatively, the front-end zero detection capabilities of SRDF and Open Replicator can be used during the migration.
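The write-pending throttling described above reduces to a simple threshold check, using the values given in this document (a write pending limit of 75 percent of cache, and movement stopping at 60 percent of that limit); the function itself is illustrative, not a Symmetrix interface:

```python
WP_LIMIT_PCT_OF_CACHE = 75   # Enginuity 5875 default, per this document
THROTTLE_AT = 0.60           # fraction of the WP limit at which movements stop

def movements_allowed(write_pending_tracks, cache_tracks):
    """Return True while FAST VP data movements may proceed."""
    wp_limit = cache_tracks * WP_LIMIT_PCT_OF_CACHE / 100
    return write_pending_tracks < THROTTLE_AT * wp_limit

cache = 1_000_000
print(movements_allowed(400_000, cache))  # 400k < 450k threshold -> True
print(movements_allowed(500_000, cache))  # 500k >= 450k threshold -> False
```

As the write pending count falls back below the threshold, the check turns true again, which is when movements automatically restart.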

Rebinding a thin device

It is possible to change the binding information for a thin device without moving any of the current extent allocations for the device. This is done by a process called rebinding. Rebinding a thin device will increase the subscription level of the pool the device is being bound to, and decrease the subscription level of the pool it was previously bound to. However, the allocation levels in both pools will remain unchanged.

Note: When rebinding a device that is under FAST VP control, the thin pool the device is being re-bound to must belong to one of the VP tiers contained in the policy the device is associated with.

RAID protection considerations

When designing a Virtual Provisioning configuration, particularly choosing a RAID protection strategy, both device-level performance and availability implications should be carefully considered. For more information on these considerations, refer to the Best Practices for Fast, Simple Capacity Allocation with EMC Symmetrix Virtual Provisioning technical note available on Powerlink.

FAST VP does not change these considerations and recommendations from a performance perspective. What FAST VP does change, however, is that a single thin device can now have its data spread across multiple tiers, of varying RAID protection and drive technology, within the array. Because of this, the availability of an individual thin device will not be based just on the availability characteristics of the thin pool the device is bound to. Instead, availability will be based on the characteristics of the tier with the lowest availability.

While performance and availability requirements will ultimately determine the configuration of each tier within the Symmetrix array, it is recommended to use either RAID 1 or RAID 6 on the SATA tier. This is due to the slower rebuild times of the SATA drives (compared to EFD and FC) and the increased chance of a dual drive failure leading to data unavailability with RAID 5 protection. Also, as a best practice it is recommended to choose RAID 1 or RAID 5 protection on EFDs. The faster rebuild times of EFDs provide higher availability for these protection schemes on that tier.

Note: For new environments being configured in anticipation of implementing FAST VP, or for existing environments having additional tiers added, it is highly recommended that EMC representatives be engaged to assist in determining the appropriate RAID protection schemes for each tier.

Drive configuration

The VMAX best practice configuration guideline for most customer configurations recommends an even, balanced distribution of physical disks across the disk adapters (DAs). There will be scenarios, however, where it is not appropriate to evenly distribute disk resources across DAs. An example of this is where the customer desires partitioning and isolation of disk resources to separate customer environments and workloads within the VMAX.

An even distribution of I/O across all DAs is optimal to maximize their capability, and the overall capability of the VMAX. This is of particular relevance for Enterprise Flash Drives (EFDs), which are each able to support thousands of I/Os per second, and therefore are able to create a load on the DA equivalent to that of multiple hard disk drives. Generally it is appropriate to configure more, smaller EFDs, than fewer, larger EFDs, to spread the I/O load as wide as possible. For example, if the EFD raw capacity requirement was 3.2 TB, on a 2-engine VMAX with 16 DAs, it would be more optimal to configure 16 x 200 GB EFDs than 8 x 400 GB EFDs, as each DA could have one EFD configured on it.

Storage group priority

When a storage group is associated with a FAST policy, a priority value must be assigned to the storage group. This priority value can be between 1 and 3, with 1 being the highest priority; the default is 2. When multiple storage groups are associated with FAST VP policies, the priority value is used when the data contained in the storage groups is competing for the same resources on one of the FAST VP tiers. Storage groups with a higher priority will be given preference when deciding which data needs to be moved to another tier.

The best practice recommendation is to use the default priority of 2 for all storage groups associated with FAST VP policies. These values may then be modified if it is seen, for example, that a high-priority application is not getting sufficient resources from a higher tier.

SRDF

FAST VP has no restrictions in its ability to manage SRDF devices. However, what must be considered is that data movements are restricted to the array upon which FAST VP is operating. There is no coordination of data movements between the two arrays,
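The EFD sizing example above works out as follows; the drive counts come from the text, while the round-robin assignment function is an illustrative assumption:

```python
def efd_layout(raw_capacity_gb, drive_size_gb, num_das):
    """Number of drives needed for the raw capacity, and how a simple
    round-robin placement spreads them across the DAs."""
    drives = -(-raw_capacity_gb // drive_size_gb)  # ceiling division
    per_da = [0] * num_das
    for i in range(drives):                        # round-robin across DAs
        per_da[i % num_das] += 1
    return drives, per_da

# 3.2 TB raw on a 2-engine VMAX with 16 DAs:
small = efd_layout(3200, 200, 16)   # 16 x 200 GB -> one drive on every DA
large = efd_layout(3200, 400, 16)   # 8 x 400 GB -> half the DAs carry no EFD
```

The smaller drives engage all 16 DAs, while the larger ones leave half of them without an EFD, concentrating the flash I/O load on fewer adapters.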

with FAST VP acting independently on both the local and remote arrays.

Note: The following sections assume that SRDF is implemented with all Symmetrix arrays configured for Virtual Provisioning (all SRDF devices are thin devices) and installed with the minimum Enginuity version capable of running FAST VP.

SRDF operating mode

EMC best practices, for both synchronous and asynchronous modes of SRDF operation, recommend implementing a balanced configuration on both the R1 and R2 Symmetrix arrays. Ideally, data on each array would be located on devices configured with the same RAID protection type, on the same drive type. As FAST VP operates independently on each array, there is no guarantee that such a balance will be maintained.

As a general best practice, FAST VP should be employed for both R1 and R2 devices, particularly if the remote R2 array has also been configured with tiered storage capacity. Similar FAST VP tiers and policies should be configured at each site. Prior to following this general best practice, however, the information in the following sections should be considered.

Note: Each SRDF configuration will present its own unique behaviors and workloads. Prior to implementing FAST VP in an SRDF environment it is highly recommended that EMC representatives be engaged to assist in determining the appropriate application of FAST VP on both the local and remote Symmetrix VMAX arrays.

FAST VP behavior

For SRDF R1 devices, FAST VP will promote and demote extent groups based on the read and write activity experienced on the R1 devices. Meanwhile, the SRDF R2 devices will typically only experience write activity during normal operations. As such, FAST VP is likely to promote only R2 device extents that are experiencing writes. If there are R1 device extents that only experience read activity, then the corresponding extents on the R2 devices will see no I/O activity. This will likely lead to these R2 device extents being demoted to the SATA tier, assuming this was included in the FAST VP policy.

In SRDF synchronous (SRDF/S) mode, host writes are transferred synchronously from R1 to R2. These writes are only acknowledged to the host when the data has been received into cache on the remote, R2, array. These writes to cache are then destaged asynchronously to disk on the R2. In an unbalanced configuration, where the R2 data resides on a lower-performing tier than on the R1, performance impact may be seen at the host if the number of write pendings builds up and writes to cache are delayed on the R2 array. With FAST VP this will typically not cause a problem, as the promotions that occur on the R2 side will be the result of write activity. Areas of the thin devices under heavy write workload are likely to be promoted, and maintained, dynamically, on the higher-performing tiers on the R2 array.

Note: For more information on SRDF/S, see the EMC Solutions Enabler Symmetrix SRDF Family CLI Product Guide available on Powerlink.

In SRDF asynchronous (SRDF/A) mode, host writes are transferred asynchronously in pre-defined time periods or delta sets. At any given time there will be three delta sets in effect – the capture set, the transmit/receive set, and the apply set. A balanced SRDF configuration is more important for SRDF/A, as data cannot transition from one delta set to the next until the apply set has completed destaging to disk. If the data resides on a lower-performing tier on the R2 array, then the SRDF/A cycle time may elongate and eventually cause the SRDF/A session to drop. Similarly to SRDF/S mode, in most environments, this may not be a large issue as the data under write workload will be promoted and maintained on the higher-performing tiers.

Note: For more information on SRDF/A, see the EMC Solutions Enabler Symmetrix SRDF Family CLI Product Guide available on Powerlink.

SRDF/A DSE (delta set extension) should be considered to prevent SRDF/A sessions from dropping should a situation arise where writes propagated to the R2 array are being destaged to a lower tier, potentially causing an elongation of the SRDF/A cycle time.

Note: For more information on SRDF/A DSE, see the Best Practices for EMC SRDF/A Delta Set Extension technical note available on Powerlink.

SRDF failover

As FAST VP works independently on both the R1 and R2 arrays, it should be expected that the data layout will be different on each side. If an SRDF failover operation is performed, and host applications are brought up on the R2 devices, those devices will come under a mixed read and write workload.

Similarly, it should also be expected that the performance characteristics on the R2 will be different from those on the R1. In this situation it will take FAST VP some period of time to adjust to the change in workload and start promotion and demotion activities based on the mixed read and write workload. As such, it should be expected that some period of time will pass before performance on the R2 devices will be similar to that of the R1 devices prior to the failover.

Note: This same behavior would be expected following an SRDF personality swap, when applications are brought up on the devices that were formerly R2 devices.

EFD considerations

If there is a difference in the configuration of the EFD tier on the remote array, then it is recommended not to include the EFD tier in the FAST VP policies on the R2 array. Examples of a configuration difference include either fewer EFDs or no EFDs. Similarly, if the EFD configuration on the R2 array does not follow the best practice guideline of being balanced across DAs, do not include the EFD tier on the R2 side. This can be done by excluding any EFD tiers from the policies associated with the R2 devices. Should a failover be performed, then the EFD tier can be added, dynamically, to the policy associated with the R2 devices.

Note: This recommendation assumes that the lower tiers are of sufficient capacity and I/O capability to handle the expected SRDF write workload on the R2 devices.

SRDF bi-directional

Best practice recommendations change slightly in a bi-directional SRDF environment, where each Symmetrix array has both R1 and R2 devices configured. In this case the R1 and R2 devices on the same array will be under different workloads – read and write for R1, and write only for R2. In this scenario, it is recommended to reserve the EFD tier for R1 device usage. This can be done by excluding any EFD tiers from the policies associated with the R2 devices. Once again, should a failover be performed, the EFD tier can be added to the policy associated with the R2 devices.

Note: This recommendation assumes that the lower tiers are of sufficient capacity and I/O capability to handle the expected SRDF write workload on the R2 devices.

Summary and conclusion

EMC Symmetrix VMAX series with Enginuity incorporates a scalable fabric interconnect design that allows the storage array to seamlessly grow from an entry-level configuration to a 2 PB system. Symmetrix VMAX provides predictable, self-optimizing performance and enables organizations to scale out on demand in Private Cloud environments.

Information infrastructure must continuously adapt to changing business requirements. FAST VP automates tiered storage strategies, in Virtual Provisioning environments, by easily moving workloads between Symmetrix tiers as performance characteristics change over time. FAST VP performs data movements, improving performance and reducing costs, all while maintaining vital service levels.

EMC Symmetrix VMAX FAST VP for Virtual Provisioning environments automates the identification of data volumes for the purpose of relocating application data across different performance/capacity tiers within an array. FAST VP proactively monitors workloads at both the LUN and sub-LUN level in order to identify "busy" data that would benefit from being moved to higher-performing drives. FAST VP will also identify less "busy" data that could be moved to higher-capacity drives, without existing performance being affected. Promotion/demotion activity is based on policies that associate a storage group to multiple drive technologies, or RAID protection schemes, via thin storage pools, as well as the performance requirements of the application contained within the storage group. Data movement executed during this activity is performed nondisruptively, without affecting business continuity and data availability.

There are two components of FAST VP – Symmetrix microcode and the FAST controller. The Symmetrix microcode is a part of the Enginuity storage operating environment that controls components within the array. The FAST controller is a service that runs on the service processor.

FAST VP uses two distinct algorithms, one performance-oriented and one capacity allocation-oriented, in order to determine the appropriate tier a device should belong to. The intelligent tiering algorithm considers the performance metrics of all thin devices under FAST VP control, and determines the appropriate tier for each extent group. The allocation compliance algorithm is used to enforce the per-tier storage capacity usage limits.

Data movements executed by FAST VP are performed by the VLUN VP data movement engine, and involve moving thin device extents between thin pools within the array.

Extents are moved via a move process only. Any active replication on a Symmetrix device remains intact while data from that device is being moved. and Open Replicator. extents are not swapped between pools. This data is then analyzed by the FAST controller and guidelines generated for the placement of thin device data on the defined VP tiers within the array. When collecting performance data at the LUN and sub-LUN level for use by FAST VP. or to adjust the manner in which storage groups using the same tier compete with each other for space. Dynamic Cache Partitioning. FAST VP provides a number of parameters that can be used to tune the performance of FAST VP and to control the aggressiveness of the data movements. FAST VP also operates alongside Symmetrix features such as Symmetrix Optimizer. TimeFinder/Snap. FAST VP is fully interoperable with all Symmetrix replication technologies—EMC SRDF. the Symmetrix microcode only collects statistics related to Symmetrix back-end activity that is the result of host I/O. all incremental relationships are maintained for the moved or swapped devices. These parameters can be used to nondisruptively adjust the amount of tier storage that a given storage group is allowed to use. Performance data for use by FAST VP is collected and maintained by the Symmetrix microcode. Similarly. 32 FAST VP for EMC Symmetrix VMAX Theory and Best Practices for Planning and Performance . EMC TimeFinder/Clone.Summary and conclusion thin pools within the array. and Autoprovisioning Groups.
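As a rough illustration of how the performance-oriented and capacity-oriented passes interact, the sketch below ranks extent groups by activity and packs them into tiers from fastest to slowest, subject to per-tier capacity limits. All names, numbers, and the greedy packing logic are hypothetical simplifications; the actual intelligent tiering and allocation compliance algorithms are internal to Enginuity and considerably more sophisticated.

```python
# Hypothetical simplification of FAST VP tiering: the performance pass
# ranks extent groups by activity and fills the fastest tiers first,
# while the per-tier capacity limits stand in for compliance enforcement.

def place_extents(extents, tiers):
    """
    extents: list of (extent_id, activity_score, size_gb)
    tiers:   list of (tier_name, capacity_limit_gb), fastest first
    Returns {tier_name: [extent_id, ...]} respecting each tier's limit.
    """
    placement = {name: [] for name, _ in tiers}
    used = {name: 0.0 for name, _ in tiers}
    # Hottest extent groups are considered for the fastest tier first.
    for ext_id, _score, size in sorted(extents, key=lambda e: -e[1]):
        for name, limit in tiers:
            if used[name] + size <= limit:   # stay within the tier limit
                placement[name].append(ext_id)
                used[name] += size
                break
    return placement

tiers = [("EFD", 2.0), ("FC", 1.0), ("SATA", 100.0)]
extents = [("e1", 900, 1.0), ("e2", 850, 1.0), ("e3", 800, 1.0),
           ("e4", 20, 1.0), ("e5", 5, 1.0)]
result = place_extents(extents, tiers)
print(result)
```

In this toy run the two hottest extent groups land on EFD, the next fills the small FC allowance, and the cold remainder spills to SATA; in the real feature, movements are then carried out nondisruptively by the VLUN VP engine.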

Appendix: Best practices quick reference

The following provides a quick reference to the general best practice recommendations for planning the implementation of a FAST VP environment. The best practices documented are based on features available in Enginuity 5875, Solutions Enabler 7.3, and Symmetrix Management Console 7.3. For more detail on these recommendations, and other considerations, see "Planning and design considerations."

FAST VP configuration parameters
FAST VP includes multiple configuration parameters that control its behavior. The following sections describe best practice recommendations for each of these configuration parameters.

Initial Analysis Period
At the initial deployment of FAST VP, set the initial analysis period to at least 24 hours to ensure, at a minimum, that a typical daily workload cycle is seen.

Workload Analysis Period
Use the default workload analysis period of 7 days (168 hours).

Performance time window
Use the default performance time window to collect performance metrics 24 hours a day, every day.

Data movement time window
Create a data movement window to allow data movements for the same period of time that the performance time windows allow data collection.

FAST VP Relocation Rate
For the initial deployment of FAST VP, start with a conservative value for the relocation rate, perhaps 7 or 8. At a later date the FRR can be gradually lowered to a more aggressive level, possibly 2 or 3.

Pool Reserved Capacity
For individual pools with bound thin devices, set the PRC based on the lowest allocation warning level for that thin pool. For pools with no bound thin devices, set the PRC to 1 percent.

FAST VP policy configuration
The ideal FAST VP policy would be 100 percent EFD, 100 percent FC, and 100 percent SATA. While ideal, operationally it may not be appropriate to deploy the 100/100/100 policy. There may be reasons to limit access to a particular tier within the array. The best way to determine appropriate policies for a FAST VP implementation is to examine the workload skew for the application data to be managed by FAST VP, by using a tool such as Tier Advisor.

Storage group priority
The best practice recommendation is to use the default priority of 2 for all storage groups associated with FAST VP policies.

Thin device binding
If no advance knowledge of the expected workload on newly written data is available, the best practice recommendation is to bind all thin devices to a pool within the second highest tier. In a three-tier configuration this would imply the FC tier. It is not recommended to bind thin devices, under FAST VP control, to the EFD tier. A best practice recommendation is to not preallocate thin devices managed by FAST VP.

RAID protection considerations
For EFD, choose RAID 1 or RAID 5 protection. For FC, choose RAID 1, RAID 5, or RAID 6 protection. For SATA, choose RAID 1 or RAID 6 protection.

Drive configuration
Where possible, balance physical drives evenly across DAs. Configure more, smaller EFDs than fewer larger EFDs to spread I/O load as wide as possible.

SRDF
As a general best practice, FAST VP should be employed for both R1 and R2 devices, particularly if the remote R2 array has also been configured with tiered storage capacity. Similar FAST VP tiers and policies should be configured at each site.
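The 100/100/100 policy above places no restriction on any tier: a storage group may, if its workload warrants it, move entirely into any one tier. A more restrictive policy still has to leave room for all of a storage group's capacity, so the per-tier upper usage limits must total at least 100 percent. The helper below is a hypothetical illustration of that arithmetic, not part of any EMC toolkit.

```python
# Hypothetical check: a FAST VP policy assigns each tier an upper usage
# limit (percent of the storage group's capacity allowed on that tier).
# The limits must total at least 100 percent so every extent has a home.

def validate_policy(policy):
    """policy: dict of tier name -> upper usage limit in percent."""
    for tier, pct in policy.items():
        if not 1 <= pct <= 100:
            raise ValueError(f"{tier}: limit must be between 1 and 100 percent")
    if sum(policy.values()) < 100:
        raise ValueError("tier limits must total at least 100 percent")
    return True

print(validate_policy({"EFD": 100, "FC": 100, "SATA": 100}))  # unrestricted
print(validate_policy({"EFD": 5, "FC": 30, "SATA": 100}))     # skewed workload
```

The second example reflects the skew-driven approach described above: a small EFD allowance for the hottest data, a moderate FC allowance, and SATA left unrestricted as the capacity tier of last resort.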

Copyright © 2010, 2011 EMC Corporation. All Rights Reserved.

EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

THE INFORMATION IN THIS PUBLICATION IS PROVIDED "AS IS." EMC CORPORATION MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.

Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.

For the most up-to-date regulatory document for your product line, go to the Technical Documentation and Advisories section on EMC Powerlink. For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com. All other trademarks used herein are the property of their respective owners.
