
EMC Global Education – IMPACT

Home

IMPACT modules consist of focused, in-depth training content that can be consumed in about 1-2 hours

Welcome to Symmetrix Foundations

Course Description
Start Training – Run/Download the PowerPoint presentation
Student Resource Guide – Training slides with notes
Assessment – Must be completed online
(Note: Completed Assessments will be reflected online within 24-48 hrs.)

Complete Course – Directions on how to update your online transcript to reflect a complete status for this course.

For questions or support please contact Global Education

© 2004 EMC Corporation. All rights reserved.

EMC Global Education – IMPACT

Course Completion

Link to Knowledgelink to update your transcript and indicate that you have completed the course.

Symmetrix Foundations
Course Completion Steps:

1. Logon to Knowledgelink (EMC Learning management system).
2. Click on 'My Development'.
3. Locate the entry for this learning event you wish to complete.
4. Click on the complete icon [ ].

Note: The Mark Complete button does not apply to items with the Type: Class, Downloadable (AICC Compliant) or Assessment Test. Any item you cancel from your Enrollments will automatically be deleted from your Development Plan.

Click here to link to Knowledgelink

For questions or support please contact Global Education

EMC Global Education

Symmetrix Foundations - IMPACT
Course Description
This foundation level course provides participants with an understanding of the Symmetrix architecture and how it is an integral component of EMC’s offering. This course is part of the EMC Technology Foundations curriculum and is a prerequisite to other learning paths.

Audience
This course is intended for any person who presently does, or plans to:
• Educate partners and/or customers on the value of EMC’s Symmetrix-based storage infrastructure
• Provide technical consulting skills and support for EMC products
• Analyze a customer’s business technology requirements
• Qualify the value of EMC’s products
• Collaborate with customers as a storage solutions advisor
Course Number: MR-5WP-SYMMFD
Method: IMPACT e-Learning
Duration: 3 hours

Prerequisites
Prior to taking this course, participants should have a strong understanding of IT concepts and a basic knowledge of storage concepts.

Course Objectives
Upon successful completion of this course, participants should be able to:
• Draw and describe the basic architecture of a Symmetrix Integrated Cached Disk Array (ICDA)
• Write a detailed list of host connectivity options for Symmetrix
• Explain how Symmetrix functionally handles I/O requests from the host environment
• Illustrate the relationship between Symmetrix physical disk drives and Symmetrix Logical Volumes
• Describe the media protection options available on the Symmetrix
• Referencing a diagram, explain some of the high availability features of Symmetrix and how they potentially impact data availability
• Describe the front-end, back-end, cache, and physical drive configurations of a DMX and other Symmetrix models

Modules Covered
• This course includes a single module on Symmetrix Architecture

Labs
Labs reinforce the information you have been taught. The labs for this course include:
• None

Assessments
Assessments validate that you have learned the knowledge or skills presented during a learning experience. This course includes a self-assessment quiz, to be conducted on-line via KnowledgeLink, EMC’s Learning Management System.

If you have any questions, please contact us by email at GlobalEd@emc.com


Copyright © 2004 EMC Corporation. All rights reserved. EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice. THE INFORMATION IN THIS PUBLICATION IS PROVIDED “AS IS.” EMC CORPORATION MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Use, copying, and distribution of any EMC software described in this publication requires an applicable software license. These materials may not be copied without EMC's written consent.

Symmetrix Foundations, 1

Welcome to Symmetrix Foundations. This training provides an architectural introduction to the Symmetrix family of products. The focus will be on DMX, but prior generations of Symmetrix will also be discussed. EMC offers a full range of storage platforms, from the CLARiiON CX200 at the low end to the unsurpassed DMX 3000 at the high end.

Symmetrix Foundations, 2

Symmetrix Foundations
After completing this course, you will be able to:
– Draw and describe the basic architecture of a Symmetrix Integrated Cached Disk Array (ICDA)
– Write a detailed list of host connectivity options for Symmetrix
– Explain how Symmetrix functionally handles I/O requests from the host environment
– Illustrate the relationship between Symmetrix physical disk drives and Symmetrix Logical Volumes
– Describe the media protection options available on the Symmetrix
– Explain some of the high availability features of Symmetrix and how they impact data availability
– Describe the front-end, back-end, cache, and physical drive configurations of various Symmetrix models

These are the learning objectives for this training. Please take a moment to read them.

Symmetrix Foundations, 3

High-End Storage: The New Definition

High-End Then
– Simple redundancy
  • Automated fail-over
– Benchmark performance (IOPs and MB/s)
  • Single and/or simple workloads
– Basic local and remote data replication
  • Backup windows, testing, disaster recovery, etc.
– Manage the storage array
  • Easy configuration, simple operation, minimal tuning

High-End Today
– Non-disruptive everything
  • Upgrades, operation, and service
– Predictable performance… unpredictable world
  • Complex, dynamic workloads
– Replicate any amount, anywhere, any time
  • Replicate any amount of data, across any distance, without impact to service levels
– Scalability
  • Capacity
– Flexibility
  • Capacity, connectivity, workloads
– Manage service levels
  • Centralized management of the storage environment
  • Both Open Systems and Mainframe connectivity

Both of EMC’s storage platforms, the CLARiiON and the DMX Symmetrix, raised the bar. What was once considered high-end is provided in the CLARiiON today. The Symmetrix has provided higher levels of capabilities that were never before available.

Availability – It used to be that high-end availability meant simple redundancy. Use two of everything: two busses, dual power suppliers (use the second one if the first one breaks), mirrored cache boards. But today, that’s what mid-tier arrays do. High-end needs to be always online—which means “non-disruptive everything”—non-disruptive upgrades, non-disruptive reconfigurations, and non-disruptive serviceability.

Performance – It used to be all about low-level benchmarks—how many, and how fast. IOPs and megabytes per second. Today, simple benchmarks are used to measure mid-tier arrays. High-end customers want predictable performance in an unpredictable world. High service levels mean being able to guarantee great application response, even if there’s a surprise like an unpredictable workload. And you can’t measure that with a simple benchmark.

Scalability – In today’s world, SANs give you lots of ports. If you want large capacities, a 50 TB CLARiiON is the better deal, or Centera, which can handle up to a petabyte. There’s a new requirement, and one of the things that sets high-end apart is its ability to handle change.

Flexibility – Being able to handle requirements with just the right mix of performance and capacity. It means supporting the right connections—like iSCSI and GigE. And it means being able to handle different requirements—cost-effectively—if things change.

Replication – Today, just about every mid-tier array can do replication. High-end means being able to copy any amount of data, and send it any distance if need be, at any time during the day, delivering high application performance… all at the same time.

Management – It wasn’t so long ago that management at the high end meant a nice, easy-to-use GUI that helped you configure the array. Today, that’s a mid-tier feature. It’s not just the array. It’s the switch, and the server, and the applications—it’s the whole end-to-end stack.

Symmetrix Foundations, 4

Symmetrix Integrated Cached Disk Array
Highest level of performance and availability in the industry
Consolidation
– Capacities to 84TB – Up to 64 host ports – SAN or NAS

Advanced functionality
– Parallel processing architecture – Intelligent prefetch – Auto cache destage – Dynamic mirror service policy – Multi-region internal memory – Predictive failure analysis and call home – Back-end optimization

Enginuity Operating Environment
– Base services for data integrity, optimization, security, and Quality of Service – Core services for data mobility, sharing, repurposing, and recovery

There are basically three categories of storage architectures: cache centric, storage processor centric, and JBOD or Just a Bunch Of Disks. The Symmetrix falls under the category of cache centric storage. We call it an ICDA, or Integrated Cached Disk Array. It is not a RAID box, it is an Integrated Cached Disk Array! As we go through this presentation, you will understand the differences.


Symmetrix Foundations, 5

Enginuity Operating Environment

[Software stack diagram: Symmetrix-based applications, host-based management software, and ISV software sit on top of WideSky management middleware, which sits on the Enginuity Operating Environment, which runs on the Symmetrix hardware.]

Enginuity Operating Environment is the Symmetrix software that:
– Manages all operations
– Ensures data integrity
– Optimizes performance
Enginuity is often referred to as “the microcode”
WideSky middleware provides common API and CLI interface for managing Symmetrix and the entire storage infrastructure
EMC and ISV develop management software supporting heterogeneous platforms using WideSky API and CLIs

Before we get into the hardware, let’s briefly introduce the software components, as most functionality is based in software and supported by the hardware. Enginuity is the operating environment for the Symmetrix storage systems. Enginuity manages all Symmetrix operations, from monitoring and optimizing internal data flow, to ensuring the fastest response to the user’s requests for information, to protecting and replicating data. Enginuity is often referred to as “the Microcode”. WideSky is storage management middleware that provides a common access mechanism for managing multivendor environments, including the Symmetrix, storage, switches, and host storage resources. It enables the creation of powerful storage management applications that don’t have to understand the management details of each piece within an EMC user’s environment. In addition to being middleware, WideSky is a development initiative (that is, a program available to ISVs and developers through the EMC Developers Program™) and provides a set of storage application programming interfaces (APIs) that shield the management applications from the details beneath. It provides a common set of interfaces to manage all aspects of storage. With WideSky providing building blocks for integrating layered software applications, ISVs and third-party software developers (through the EMC Developers Program), and EMC software developers are given wide-scale access to Enginuity functionality.


Symmetrix Foundations, 6

Symmetrix Architecture

Front End Channel Director

Shared Global Memory “Cache”

Back End Disk Director

All Symmetrix share the same basic architecture

All members of the Symmetrix family share the same fundamental architecture. This architecture was initially called MOSAIC 2000 and is the architecture that continues to drive the Symmetrix through the year 2000 and beyond. This modular hardware framework allows rapid development of new storage technology, while supporting existing configurations. There are three functional areas:
• Shared Global Memory – provides cache memory and the link between the independent front end and back end (intelligent boards comprised of memory chips)
• Front End – how the Symmetrix connects to the host (server) environment; referred to as Channel Directors (multi-processor circuit boards)
• Back End – how the Symmetrix controls and manages its physical disk drives; referred to as Disk Directors or Disk Adapters (multi-processor circuit boards)

What differentiates the different generations and models is the number, type, and speed of the various processors, and the technology used to interconnect the front-end and back-end with cache.


Symmetrix Foundations, 7

Symmetrix 4.8 Architecture
[Diagram: Front End Channel Director, Shared Global Memory (cache), and Back End Disk Director connected by dual “X” and “Y” buses; each director has an a and a b processor (Motorola 68060, 75 MHz) and ports C and D; 40 MB/s UWD SCSI back-end bus; 360 MB/s per internal bus (total of 720 MB/s); models 3930/5930 (3-bay cabinet), 3830/5830 (1-bay cabinet), and 3630/5630 (½-bay cabinet) in open systems (OS) and mainframe (MF) variants.]

The Symm 4.x architecture includes:
• Dual “X” and “Y” buses
• Odd numbered directors connect to the X bus; even numbered directors connect to the Y bus
• Memory boards connect to both the X and Y busses
• Motorola processors
• 40 MB/second Ultra SCSI back-end

The Symm 4.X family is based on a dual system bus design. Each director is connected to either the X bus (odd numbered director) or the Y bus (even numbered director). Each director card has two sides, the b processor (top half) and the a processor (bottom half). The a and b processors have their own dedicated circuitry, except for SDRAM (Control Store, where the microcode lives) and the logic to arbitrate for and control the internal system busses. The processors for the Symm 4.X are Motorola 68000 series (Symm 4 core frequency of 66 MHz | Symm 4.8 = 75 MHz). Data is transferred throughout the Symm (from Channel Director to Memory to Disk Director) in a serial fashion along the system busses. For every 64 bits of data, the Symm creates a 72 bit “Memory Word” (64 bits of data + 8 bits of parity). These Memory Words are then sent in a serial fashion across the internal busses from director to cache or from cache to director.
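To make the 72-bit memory word concrete, here is a minimal Python sketch. It assumes one parity bit per data byte purely for illustration; the course does not describe how the Symmetrix actually derives its 8 check bits, so treat this as a picture of the 64 + 8 layout rather than the real encoding.

```python
def even_parity_bit(byte: int) -> int:
    """Return the even-parity bit for one 8-bit value."""
    return bin(byte).count("1") % 2


def build_memory_word(data: bytes) -> tuple[bytes, int]:
    """Pack 8 data bytes (64 bits) plus 8 check bits into a 72-bit 'memory word'.

    Assumption: one parity bit per byte. The real Symmetrix check-bit scheme
    is not described in the course material.
    """
    assert len(data) == 8, "a memory word carries exactly 64 data bits"
    check_bits = 0
    for i, byte in enumerate(data):
        check_bits |= even_parity_bit(byte) << i
    return data, check_bits  # 64 data bits + 8 check bits = 72 bits


word, check = build_memory_word(b"SYMM4.8!")
print(f"data={word!r} check=0b{check:08b}")
```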

Symmetrix Foundations, 8

Symmetrix 5.0 Architecture
[Diagram: Front End Channel Director, Shared Global Memory (High Memory and Low Memory boards), and Back End Disk Director connected by four internal buses (Top High, Top Low, Bottom High, Bottom Low) at 360 MB/s each; PowerPC 750 266 MHz processors; 40 MB/s SCSI back-end bus; models 8730 (3-bay cabinet) and 8430 (1-bay cabinet). New memory and quad bus architecture.]

The Symmetrix 5 is a prime example of MOSAIC 2000. The basic architecture has not changed from Symm 4 to Symm 5, but has been enhanced. Here is what has changed:
• Addition of 2 internal system busses (total of 4), each bus still 360 MB/s for an aggregate of 1440 MB/s. Each director connects to 2 internal system busses: odd numbered directors connect to both the Top High and Bottom Low busses, and even numbered directors connect to both the Top Low and Bottom High busses.
• The director processors are IBM/Motorola (jointly developed) PowerPC 750 (RISC-based) processors. This processor switch required the Symm microcode to be translated from Motorola Assembler Language to C++.
• To further enable the processor swap, each director has an additional chip (called “the Gumba”) that makes the PowerPC “look like” a 68060 to the CPU Control Gate Array. It handles Control Store mirroring functions and is responsible for SDRAM control.
• The M3 generation of memory boards introduced the concept of 4 addressable regions per board. Memory boards connect to either the Top High and Bottom High busses (High Memory) or the Top Low and Bottom Low busses (Low Memory).

Symmetrix Foundations, 9

Symmetrix 5.X LVD Architecture
[Diagram: Front End Channel Director, Shared Global Memory (High and Low Memory), and Back End Disk Director connected by four internal buses at 400 MB/s each; PowerPC 750 333 MHz processors; 80 MB/s SCSI LVD back-end bus; models 8830 (3-bay cabinet), 8530 (1-bay cabinet), and 8230 (½-bay cabinet). Faster processors, faster bus, faster back-end.]

Again, here is another example of the MOSAIC 2000 architecture. The basic architecture hasn’t changed, but has been enhanced to improve performance by eliminating bottlenecks. Here is what has changed for the Symm 5.X LVD architecture:
• Increased bus speed to 400 MB/s for an aggregate of 1600 MB/s
• The director processors are now 333 MHz; ESCON directors are 400 MHz
• Back End Directors and drives support Ultra 2 SCSI LVD (Low Voltage Differential), and the bus speed has increased to 80 MB/s
• Each director connects to 2 internal system busses (Top High & Bottom Low for odd directors | Bottom High & Top Low for even directors)
• The M4 generation of memory boards supports LVD (Low Voltage Differential, or Ultra 2 SCSI)
• Requires Enginuity 5567 or greater

Symmetrix Foundations, 10

Symmetrix DMX Architecture
[Diagram: Front End Channel Director, Shared Global Memory (cache slots, track table, status and communications mailboxes), and Back End Disk Director; quad processor directors (PowerPC, 500 MHz); Direct Matrix – each director gets its own 500 MB/sec point-to-point connection to each cache board; 2 Gb Fibre Channel back-end; faster processors, Direct Matrix, and communications matrix.]

A testimonial to EMC’s Symmetrix architecture is the DMX. While Symmetrix Direct Matrix (DMX) is a radical redesign, it contains the same functional blocks, with a significant advantage beyond yesterday’s bus and switch architecture. The result is even greater performance and availability.

Performance – The Symmetrix DMX dramatically reset performance expectations in a broad range of demanding transactional, decision support, and consolidated environments. When coupled with the Enginuity storage operating system, Symmetrix DMX has a unique ability to effectively react to bursts of unexpected activity while continuing to deliver high service levels.

Availability – The Symmetrix DMX goes beyond yesterday’s design to set a new standard in availability, including the elimination of buses and switches, the incorporation of triple-module-voting for key components, and the ability to do on-line upgrades. Power systems have been dramatically improved.

Symmetrix Foundations, 11

Symmetrix DMX Architecture
[Diagram: Servers connect through channel directors to global memory directors and on through disk directors to the disks. Each board gets its own direct connection to cache!]

The Symmetrix DMX features a high-performance Direct Matrix Architecture (DMX) supporting up to 128 point-to-point serial connections in the DMX2000/3000 (up to 64 in the DMX1000). The major components of Symmetrix DMX architecture are the front-end channel directors (and their interface adapters), global memory directors, and back-end disk directors (and their interface adapters). Symmetrix DMX technology is distributed across all channel directors, disk directors, and global memory directors in Symmetrix DMX systems. Enhanced global memory technology supports multiple regions and 16 connections on each global memory director.

In a fully configured Symmetrix DMX1000 system, each of the eight director ports on the eight directors connects to one of the sixteen memory ports on each of the four global memory directors. These 64 individual point-to-point connections facilitate up to 64 concurrent global memory operations in the system; that is, there are two connections between each director and each global memory director. In a fully configured Symmetrix DMX2000/3000 system, each of the eight director ports on the sixteen directors connects to one of the sixteen memory ports on each of the eight global memory directors. These 128 individual point-to-point connections facilitate up to 128 concurrent global memory operations in the system.

In the Direct Matrix Architecture, contention is minimized because control information and commands are transferred across a separate and dedicated message matrix.
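The connection counts above follow directly from the port arithmetic. The short Python sketch below is illustrative only (not an EMC tool); it simply restates that arithmetic for the two configurations.

```python
def matrix_connections(directors: int, ports_per_director: int,
                       memory_directors: int, ports_per_memory_director: int) -> int:
    """Count the point-to-point links in a fully configured Direct Matrix.

    Every director port lands on exactly one global memory port, so in a
    balanced (full) configuration the two port pools are the same size.
    """
    director_ports = directors * ports_per_director
    memory_ports = memory_directors * ports_per_memory_director
    assert director_ports == memory_ports, "a full configuration is balanced"
    return director_ports


# Fully configured DMX1000: 8 directors x 8 ports, 4 global memory directors x 16 ports
print(matrix_connections(8, 8, 4, 16))    # 64 point-to-point connections
# Fully configured DMX2000/3000: 16 directors x 8 ports, 8 global memory directors x 16 ports
print(matrix_connections(16, 8, 8, 16))   # 128 point-to-point connections
```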

Symmetrix Foundations, 12

Symmetrix DMX Architecture
[Diagram: Servers and disks connected through the directors, with a separate control and communications message matrix between the directors.]

Another major performance improvement with the DMX is the separate control and communications matrix. This enables communication between the directors without consuming cache bandwidth. It becomes more apparent as we talk about read and write operations and the information flow through the Symmetrix later in this module.

Symmetrix Foundations, 13

Read Operation – Cache Hit
1. Host sends READ request
2. Channel Director checks Track Table
3. Requested data located in cache – Cache Hit!
4. CD retrieves data and sends to host
Read operation completed at memory speed!

1. The host sends a read request (requesting to read some number of blocks from a physical disk). The host sees storage on the Symm as an entire physical drive (actually a logical volume on the Symmetrix – a piece of a physical drive). Open systems hosts view disk drives using the SCSI target and LUN addressing scheme (target ranging from 0-16 | LUN ranging from 0-16). Through the configuration file (bin file), logical volumes are given a channel address for the 1) Channel Director, 2) Processor, and 3) Port that will be accessing that volume (logical volume 001 gets channel address (1,0) on SA #3, Processor a, Port A).
2. The Channel Director receives the request to read some number of blocks for target 1, LUN 0 (continuing from the previous example). By looking in the bin file (stored within the director’s EPROM), it translates the blocks requested for (1,0) as blocks requested for logical volume 001. The Channel Director then scans the track table to discover if the requested blocks on 001 are already resident in cache.
3. In this case (read-hit), the data that is being requested is resident within cache. The Channel Director reads the requested data from cache. At this point, the Age-Link-Chain is updated to reflect the access (the data is moved to the top of the LRU queue – now most recently used).
4. The data is sent from the Channel Director back to the host. Total I/O response time would be something on the order of 1 millisecond.

Symmetrix Foundations, 14

Read Operation – Cache Miss
1. Host sends READ request
2. CD checks Track Table – data not in cache
3. CD notifies DA using Message Matrix
4. DA retrieves data from disk (updates track table)
5. CD is notified that data is in cache
6. CD retrieves data and sends to host

1. The host sends a read request.
2. The Channel Director checks the track table and the data is not in cache. If the requested data is in the process of prefetch (known as a “short miss”), the Channel Director will not disconnect from the channel. If the data being requested is not in the process of prefetch, the Channel Director will disconnect from the channel (known as a “long miss”). This enables the host to perform other operations.
3. On Symm 4 and Symm 5 architectures, directors communicate through “Mailboxes”; all directors monitor the mailbox area in cache to see if there is work for them. With the DMX, directors communicate with each other through the communication matrix. This eliminates the added burden on cache of continuously polling the mailbox.
4. The DA retrieves the data from the physical disk and places it in an available cache slot.
5. The Channel Director is notified by the Disk Director (via the Status & Communications Mailboxes, or in a DMX through the communication matrix) to check the track table once more.
6. If the Channel Director has disconnected from the channel, it must now reconnect to the channel. From this point, the operation that occurs is exactly the same as a read cache hit.

It may seem that in the case of a read request not being in cache, it would simply be faster to bypass cache and retrieve the information directly from the physical storage on the back end. While this is certainly true, the important thing to keep in mind is that if cache is bypassed, the requested read would not be placed in cache for future access. Additionally, you would also lose the integrity checking that occurs as data is placed within cache. Again, with the DMX architecture and the efficiency gained through director communications via the communications matrix, the faster quad processor directors, up to 128GB of cache, and the 2Gb Fibre Channel back-end drives, the impact of a cache miss is reduced greatly.
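The read-hit and read-miss flows can be summarized in one small sketch. This is a conceptual model only, with invented class and method names rather than real Enginuity interfaces: it shows the track-table lookup, the hit path served at memory speed, and the miss path in which the disk adapter stages the track into cache before the channel director reads it from there.

```python
class ReadFlowModel:
    """Toy model of the Symmetrix read path: channel director, cache, disk adapter."""

    def __init__(self):
        self.cache = {}   # track id -> data currently staged in a cache slot
        self.disk = {}    # track id -> data on the physical drive
        self.lru = []     # age-link-chain: most recently used track ids first

    def _touch(self, track):
        # Move the track to the top of the LRU age-link-chain.
        if track in self.lru:
            self.lru.remove(track)
        self.lru.insert(0, track)

    def read(self, track):
        if track in self.cache:                # cache hit: served at memory speed
            self._touch(track)
            return self.cache[track]
        # Cache miss: the CD asks the DA (mailbox or message matrix) to stage the track.
        self.cache[track] = self.disk[track]   # DA stages the track into a cache slot
        self._touch(track)                     # the track is now most recently used
        return self.cache[track]               # CD rechecks the track table, reads cache


symm = ReadFlowModel()
symm.disk["vol001/track1"] = b"blocks 0-7"
print(symm.read("vol001/track1"))   # first read takes the miss path
print(symm.read("vol001/track1"))   # second read is a cache hit
```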

Symmetrix Foundations, 15

Write Operation – Cache Hit (Fast Write)
1. Host sends WRITE request to CD
2. CD places data in an available cache slot
3. Write Complete sent to host
4. Tracks marked as Write Pending – DA will de-stage at earliest convenience
Data remains in cache until replaced by LRU algorithm

1. The host sends a write request to the Channel Director.
2. The Channel Director locates an available cache slot and places the data in cache. If the track(s) already exist in cache as write pending (waiting to be written to disk), the Channel Director will write the data in question to the existing slot in cache. For example, I/O #1 consists of a write to the last block on the first track on the first cylinder of Logical Volume 001, and I/O #2 then consists of a write to the first block on that same track. When the Channel Director checks the track table for an available slot in cache, it will see that the track in question is already flagged as write pending. Therefore, the Channel Director will write the first block (I/O #2) to the same slot in cache where the last block (I/O #1) on that track is already residing.
3. The host is notified that the write is complete. The effect of a write cache hit is that the host is immediately freed up to process more I/O as soon as the write is received in cache. This greatly enhances the performance of the host itself: fewer cycles are spent awaiting acknowledgement, freeing the host to process application data.
4. The track is marked as write pending. Note: even if the host only writes/updates one block, the entire track (8 blocks in a sector, 8 sectors in a track) is marked as write pending, and all four mirrored positions are flagged write pending for that track. As soon as the Disk Director(s) that are managing the physical copy(ies) of the data are available, the data will be read from cache and written to the buffer on the Disk Director. The data is then written to the physical disk.

Write pending tracks are not subject to the LRU algorithm. Remember that the data remains in cache until 1) it is committed to disk, and 2) it becomes the Least Recently Used data in cache. When the data is destaged to disk (removal of the write pending flag), it then enters the LRU Age-Link-Chain as the most recently used data. If the data is frequently accessed, it will remain towards the front of the Chain. If the data is not accessed, it will move to the end of the Chain and subsequently be cycled out of cache (the slot in cache is made available for other use).
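A fast write can be sketched in the same style — again a toy model with invented names, not Enginuity code. The write lands in a cache slot, the track is flagged write pending, the host is acknowledged immediately, and a later write to the same track reuses the same slot until the disk adapter destages it.

```python
class FastWriteModel:
    """Toy model of a fast write: acknowledge at cache speed, destage to disk later."""

    def __init__(self):
        self.cache = {}             # track id -> data held in the cache slot (simplified)
        self.write_pending = set()  # tracks waiting to be destaged by the DA
        self.disk = {}

    def write(self, track, data):
        # Simplification: the slot just holds whatever was last written for the track.
        self.cache[track] = data
        self.write_pending.add(track)   # whole track flagged write pending
        return "write complete"         # host is released as soon as the data is in cache

    def destage(self):
        # The DA destages write pendings at its earliest convenience.
        for track in list(self.write_pending):
            self.disk[track] = self.cache[track]
            self.write_pending.discard(track)


fw = FastWriteModel()
print(fw.write("vol001/track1", b"last block"))    # I/O #1
print(fw.write("vol001/track1", b"first block"))   # I/O #2 reuses the same pending slot
fw.destage()                                       # track leaves the write-pending set
```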

Symmetrix Foundations, 16

Fast Write Ceiling
Cache algorithms are designed to optimize cache utilization and “fairness” for all Symmetrix volumes.
Cache allocation is dynamically adjusted based on current usage:
– Symmetrix constantly monitors system utilization (including individual volume activity)
– “More active” volumes are dynamically allocated additional cache resources from relatively “less active” volumes
– Each volume has a minimum and maximum number of cache slots for write operations based on configuration (known as the “Fast Write Ceiling” or “Write Pending Ceiling”)
During a write operation, a Delayed Write occurs when the Write Pending Ceiling is reached at the:
– Logical Volume level (not a fixed percentage – dynamically determined by Symmetrix)
– Symmetrix system level (80% of Symmetrix cache slots contain “write pendings”)

When a Symmetrix is IMPL’ed (Initial Microcode Program Load), the amount of available cache resources is automatically distributed to all of the logical volumes in the configuration. For example, if a Symmetrix were configured with 100 logical volumes of the same size and emulation, then at IMPL each one would receive 1% of available cache resources. As soon as reads and writes to volumes begin, the Symmetrix Operating Environment (Enginuity) dynamically adjusts the allocation of cache. If only 1 of the 100 volumes was active, it would get incrementally more cache and the remaining amount would be redistributed to the other 99 volumes. Managing each individual volume’s write activity (via the dynamic fast write ceiling) enables Enginuity to typically prevent system-wide delayed write situations. It is important to remember that there will always be cache resources available for reads: by default, the 80% fast write ceiling ensures that at least 20% of cache resources will be free for read requests.

Symmetrix Foundations, 17

Write Operation – Delayed Fast Write
1. Host sends WRITE request to CD
2. CD cannot locate free cache slot and signals DA to destage
3. DA will do a forced de-stage of Write Pendings to free cache slots
4. DA signals CD of available slots
5. CD places data in an available cache slot
6. Write complete sent to host

1. The host sends a write request.
2. The Channel Director does not find available cache slots for writing because the volume has reached its Fast Write Ceiling, or the entire Symm has 80% of its cache slots containing “write pendings”. When the volume Fast Write Ceiling is reached, only that volume’s performance is impacted. When the Symm system Fast Write Ceiling is reached, the entire Symm’s performance is impacted.
3. The Disk Director frees up cache slots.
4. The Disk Director signals the Channel Director through the Mailbox, or through the Communication Matrix on DMX.
5-6. The rest of the operation is similar to a fast write. Again, this operation takes significantly longer than a fast write, but ensures that the I/O flows through cache. It is likely that information just written by a host will be read in the near future; if cache were bypassed and the data written directly to disk, the data would not then be available directly from cache for the next request.
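The check that separates a fast write from a delayed fast write can be reduced to a small rule, sketched below with illustrative names and numbers. The per-volume ceiling is dynamic in a real Symmetrix, so a caller-supplied value stands in for it here; only the 80% system-level figure comes from the material above.

```python
SYSTEM_CEILING = 0.80   # 80% of all cache slots holding write pendings


def classify_write(volume_pendings: int, volume_ceiling: int,
                   system_pendings: int, total_slots: int) -> str:
    """Decide whether a write is a fast write or must wait for a forced destage.

    volume_ceiling is dynamically determined in a real system; here it is just
    an input to keep the example simple.
    """
    if system_pendings >= SYSTEM_CEILING * total_slots:
        return "delayed fast write (system ceiling reached - whole Symm impacted)"
    if volume_pendings >= volume_ceiling:
        return "delayed fast write (volume ceiling reached - only this volume impacted)"
    return "fast write (slot available, host acknowledged at cache speed)"


print(classify_write(volume_pendings=10, volume_ceiling=50,
                     system_pendings=1_000, total_slots=10_000))
print(classify_write(volume_pendings=60, volume_ceiling=50,
                     system_pendings=1_000, total_slots=10_000))
print(classify_write(volume_pendings=10, volume_ceiling=50,
                     system_pendings=8_500, total_slots=10_000))
```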

Symmetrix Foundations, 18

Symmetrix Front End
Channel Directors allow Symmetrix to connect to the host environment
– Minimum of 2 directors per frame (redundancy)
– Maximum of 4, 6 or 8 directors per frame (depending upon model and configuration)
Type(s) of Channel Director cards determined by the type of host and the selected protocol for communication with Symmetrix
Cards are Field Replaceable Units (FRUs) and “hot swappable”
Open Systems and Windows hosts connect to Symmetrix using either:
– SCSI (Small Computer System Interface)
– Fibre Channel (SCSI protocol to be sent over greater distances via Fibre Channel protocol and fiber optic cable)
Mainframe hosts will typically connect to Symmetrix using ESCON or FICON (IBM-based protocols that allow mainframe hosts to connect to storage using fiber optic cables)

Normally, Channel Directors are installed in pairs, providing redundancy and continuous availability in the event of repair or replacement of any one Channel Director. Each Channel Director has multiple microprocessors and supports multiple independent data paths to the global memory to and from the host system.

Symmetrix Foundations, 19

Open Systems Connectivity Options
DMX supports eight-port, four processor Fibre Channel Directors
– 2Gb/sec (can be configured for 1Gb/sec)
– Single-mode and multi-mode configurations:
  • Eight multi-mode ports
  • Seven multi-mode ports and one single-mode port
  • Six multi-mode ports and two single-mode ports
– 8,192 Logical Volumes per director (2048 per port)
SCSI Channel Directors supported on Symm 8000
– 4 ports, 4 concurrent I/Os (Ultra 40MB/sec)
– 4 ports, 4 concurrent I/Os (Ultra LVD 80MB/sec)
iSCSI support using Multi-Protocol Channel Director
– Low cost connectivity using existing IP network infrastructure

Depending upon the model, from 2 to 8 front-end Channel Directors are supported per system. Today, networked storage (SAN or NAS) is the preferred method to connect hosts with storage; legacy systems often use parallel SCSI. For SAN connectivity, Fibre Channel is the interface of choice.

Fibre Channel – The DMX supports an eight-port, four-processor Fibre Channel Director. Earlier Symmetrix offered 2 port and 4 port Fibre Channel directors, and a 12 port director with an embedded switch. Both switched fabric and arbitrated loop SANs are supported. The standard Fibre Channel connection uses short-wave laser optics and multimode fiber optical cables for distances of up to 500 meters over a 50 micron cable. The optional long-wave laser uses 9 micron single-mode optics for distances of 10 km and greater.

SCSI – SCSI Channel Directors support HVD and LVD and speeds to 80MB/sec. SCSI Channel Directors are not supported in DMX; SCSI front-end directors are supported on non-DMX systems.

iSCSI – iSCSI allows block level access over IP networks. It is ideal for storage and server consolidation environments that require low cost connectivity that leverages existing IP networks. It is supported on the DMX using the new Multi-Protocol Channel Director. This director can be configured to support FICON, 1Gb Ethernet for SRDF attach, and 1Gb Ethernet for iSCSI host attach.

Note: The 4 port Multi-Protocol Channel Director is supported on the DMX1000, DMX2000, and DMX3000. The 2-port director is supported on all DMX systems.

Symmetrix Foundations, 20

Mainframe Connectivity Options
ESCON eight-port, four processor Director
– Supports data transfer rates up to 17 MB/s per port
– Single-mode and multi-mode configurations:
  • Eight multi-mode ports
  • Seven multi-mode ports and one single-mode port
  • Six multi-mode ports and two single-mode ports
– 8,192 Logical Volumes per director (2048 per port)
FICON support using Multi-Protocol Channel Director
– 2Gb/sec
– Point-to-point
– Switched point-to-point
  • Single FICON Fibre Channel Director between server and storage
  • No mixing FICON and FC Open Systems on the same switch

Today, except for a few legacy systems, mainframe connectivity is through either ESCON or FICON serial channels. The original mainframe connectivity was through parallel interfaces with bus and tag cables. This bus and tag has been replaced with ESCON because of increased speed and flexibility. ESCON uses multimode fiber optics and supports distances of up to 3 kilometers; greater distances are supported using media converters.

FICON is Fibre Channel for mainframes. It offers superior performance and extended distance as compared to its predecessor, ESCON. FICON uses multimode fiber optics and supports distances of up to 500 meters, and may also use single mode fiber optics for distances of up to 10 km and beyond. As such, most mainframe customers will adopt FICON as their primary mainframe channel connectivity over the next few years.

Symmetrix Foundations, 21

Symmetrix Back End
Disk Director (also called Disk Adapter or DA) writes and reads data to/from physical disk drives
– DA also responsible for disk and cache “scrubbing” and assists in parity-based data rebuilding
DAs are Field Replaceable Units (FRUs) and are “hot swappable”
DAs installed in pairs on adjacent slots within the card cage of Symmetrix
Symm 4 and 5 architectures use 40/80MB/s SCSI to connect physical drives, with a maximum of 12 drives per port
DMX architecture uses 2Gb Fibre Channel drives
– Eight ports per DA
– Maximum 18 dual ported drives per port
– In addition to the Direct Matrix connections to cache, each director has a separate message matrix for the transfer of control information

The primary purpose of the back-end director is to read and write data to the physical disks. However, when it is not staging data into cache or destaging data to disk, the disk director is responsible for proactive monitoring of physical drives and cache memory. This is referred to as disk and cache “scrubbing”.

“Disk Scrubbing” or Disk Error Correction and Error Verification: The disk directors use idle time to read data and check the polynomial correction bits for validity. If a disk read error occurs, the disk director reads all data on that track to Symmetrix cache memory. The disk director writes several worst-case patterns to that track, searching for media errors. The disk microprocessor maps around any bad block (or blocks) detected during the worst-case write operation, thus skipping defects in the media. If necessary, the disk microprocessor can reallocate up to 32 blocks of data on that track. When the test completes, the disk director rewrites the data from cache to the disk device, verifying the write operation. This entire process is called “error verification.” The disk director increments a soft error counter with each bad block detected. If the number of bad blocks per track exceeds 32 blocks, the disk director rewrites the data to an available spare cylinder; to further safeguard the data, each disk device has several spare cylinders available. When the internal soft error threshold is reached, the Symmetrix service processor automatically dials the EMC Customer Support Center and notifies the host system of errors via sense data. It also invokes dynamic sparing (if the Dynamic Sparing option is enabled). This feature maximizes data availability by diagnosing marginal media errors before data becomes unreadable.

“Cache Scrubbing” or Cache Error Correction and Error Verification: The disk directors use idle time to periodically read cache, correct errors, and write the corrected data back to cache. When the directors detect an uncorrectable error in cache, Symmetrix reads the data from disk and takes the defective cache memory block offline until an EMC Customer Engineer can repair it. Error verification maximizes data availability by significantly reducing the probability of encountering an uncorrectable error, by preventing bit errors from accumulating in cache.
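The reallocation decision in the disk-scrubbing description boils down to a simple per-track rule. The sketch below uses invented names and an assumed soft-error threshold value; only the 32-block-per-track limit comes from the text above.

```python
MAX_REALLOCATED_BLOCKS_PER_TRACK = 32


def handle_scrubbed_track(bad_blocks_found: int, soft_error_count: int,
                          soft_error_threshold: int) -> list[str]:
    """Illustrative decision logic for one scrubbed track (not actual Enginuity code)."""
    if bad_blocks_found == 0:
        return ["track verified clean"]
    actions = []
    if bad_blocks_found <= MAX_REALLOCATED_BLOCKS_PER_TRACK:
        actions.append(f"map around {bad_blocks_found} bad block(s) on the track")
    else:
        actions.append("rewrite the track's data to an available spare cylinder")
    if soft_error_count + bad_blocks_found >= soft_error_threshold:
        actions.append("call home to EMC support and consider dynamic sparing")
    return actions


print(handle_scrubbed_track(bad_blocks_found=3, soft_error_count=10, soft_error_threshold=100))
print(handle_scrubbed_track(bad_blocks_found=40, soft_error_count=70, soft_error_threshold=100))
```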

Symmetrix Foundations, 22

Disk Performance Basics
Three components of disk performance:
– Time to reposition actuator – seek time
– Rotational latency
– Transfer rate
Disk I/O = Seek time + Rotational delay + Transfer time
With a Symmetrix, I/Os are serviced from cache, not from the physical HDA
– Minimizes the inherent latencies of physical disk I/O
– Disk I/O at memory speeds

When you look at a physical disk drive, a read or write operation has three components that add up to the overall response time.

Actuator positioning is the time it takes to move the read/write heads over the desired cylinder. This is mechanical movement and is typically measured in milliseconds. The actual time that it takes to reposition depends on how far the heads have to move, but this contributes the greatest share of the overall response time.

Rotational delay is the time it takes for the desired information to come under the read/write head. This time is a function of the revolutions per second, or drive RPM. The faster the drive turns, the lower the rotational latency, which is half the time it takes to make one revolution. A 10,000 RPM drive has an average rotational latency of approximately 3.00 milliseconds.

Transfer rate is the smallest time component and consists of the time it takes to actually read/write the data. It is often measured as internal transfer rate or external transfer rate. The internal transfer rate is a function of drive RPM and the data density. The external rate is the speed that the drive transfers data to the controller; this is limited by the internal transfer rate, but with buffers on the drive modules themselves, it allows faster transfer rates.

The design objective of a Symmetrix is not to limit the performance of host applications based on the performance limitations of the physical disk. This is accomplished using cache. Read operations are from cache, using the Least Recently Used algorithm and prefetching to keep the information that is most likely to be accessed in memory. Write operations are to cache and are asynchronously destaged to disk.
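The response-time equation on the slide is easy to work through numerically. The sketch below assumes a hypothetical 10,000 RPM drive with a 5 ms average seek and a 0.5 ms transfer component, so the total is illustrative; the ~3 ms rotational-latency term follows directly from the RPM.

```python
def avg_rotational_latency_ms(rpm: float) -> float:
    """Average rotational delay: half of one revolution, in milliseconds."""
    ms_per_revolution = 60_000.0 / rpm
    return ms_per_revolution / 2


def disk_io_time_ms(seek_ms: float, rpm: float, transfer_ms: float) -> float:
    """Disk I/O = seek time + rotational delay + transfer time."""
    return seek_ms + avg_rotational_latency_ms(rpm) + transfer_ms


print(f"rotational latency @ 10,000 RPM: {avg_rotational_latency_ms(10_000):.2f} ms")  # ~3.00 ms
# Assumed 5 ms seek and 0.5 ms transfer, purely for illustration:
print(f"total service time: {disk_io_time_ms(5.0, 10_000, 0.5):.2f} ms")
# Compare with the ~1 ms read hit serviced entirely from cache earlier in this module.
```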

Symmetrix Foundations, 23

Symmetrix Disk Comparisons
[Table: drive options of 18, 36, 73, 146, and 181 GB at spindle speeds of 7,200 and 10,000 RPM, listed by Symmetrix architecture (Symm 4.8 and Symm 5 with an Ultra SCSI interface; DMX with Fibre Channel) and by formatted capacity in marketing GB and engineering GB for mainframe (MF) and open systems (OS) emulation – for example, the 146 GB open systems drive is listed at 135.97 engineering GB.]

Symmetrix physical drives are manufactured by our suppliers (Seagate) to meet EMC’s rigorous quality standards and unique product specifications. These specifications include dedicated microprocessors (that can be XOR capable), large onboard buffer memory (4MB – 32MB), and the most functionally robust microcode available. Again, while the physical speed of disk drives does contribute to the overall performance, the design is for most read or write operations to be handled from cache.

Note: Marketing defines a GB as 1000 x 1000 x 1000 bytes, while Engineering defines a GB as 1024 x 1024 x 1024 bytes.
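The marketing-versus-engineering gigabyte note can be checked with a few lines of Python; for example, the 146 GB (marketing) open systems drive works out to roughly 135.97 engineering GB, which matches the table.

```python
def marketing_gb_to_engineering_gb(marketing_gb: float) -> float:
    """Marketing GB = 10**9 bytes; engineering GB = 2**30 bytes."""
    return marketing_gb * 10**9 / 2**30


for size in (18, 36, 73, 146, 181):
    print(f"{size:>3} marketing GB ~= {marketing_gb_to_engineering_gb(size):6.2f} engineering GB")
```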

Symmetrix Foundations, 24

Symmetrix Global Cache Directors
Memory boards are now referred to as Global Cache Directors and contain global shared memory
Symmetrix has a minimum of 2 memory boards and a maximum of 8
Individual cache directors are available in 2 GB, 4 GB, 8 GB, and 16 GB sizes
Boards are comprised of memory chips and divided into four addressable regions
Generally installed in pairs
Memory boards are FRUs and “hot swappable” (does not require Symm power down or “reboot”)

Model: Number of Cache Boards / Maximum Cache Size
– DMX 800: 2 / 32 GB
– DMX 1000: 4 / 64 GB
– DMX 2000: 8 / 128 GB
– DMX 3000: 8 / 128 GB
– 8830: 4 / 64 GB
– 8530: 2 or 4 / 32 GB or 64 GB
– 8230: 2 / 32 GB

Cache boards are designed for each family of Symm: Symm 4.8 uses the M2 generation of memory boards, Symm 5 uses the M3/M4 generation, and the DMX uses M5. Because these boards have different designs, they cannot be swapped between families of Symm. It is important to note that even on the Symm 4.X, cache connects to both the X and Y internal busses. On Symm 5, memory boards that connect to the Top High and Bottom High internal system busses are referred to as “High Memory”; conversely, boards that connect to Top Low and Bottom Low are known as “Low Memory”. DMX uses direct connections between directors and cache.

Memory boards in the DMX are referred to as Global Cache Directors with CacheStorm technology. When configuring cache for the Symmetrix DMX systems, follow these guidelines:
• A minimum of four and a maximum of eight cache director boards is required for the DMX2000 system configuration, and a minimum of two and a maximum of four cache director boards is required for the DMX1000 system configuration.
• Cache directors can be added one at a time to configurations of two boards and greater.
• Two-board cache director configurations require boards of equal size.
• A maximum of two different cache director sizes is supported, and the smallest cache director must be at least one-half the size of the largest cache director.
• In cache director configurations with more than two boards, no more than one half of the boards can be smaller than the largest cache director.

“Hot swappable” means that a Customer Engineer, following documented procedure, can remove and replace the board without powering down the Symm. The CE procedure includes destaging all remaining data in cache and fencing off the board in order to prevent loss of data.

Symmetrix Foundations, 25

Symmetrix Shared Global Memory
Shared Global Memory contains three types of information:
– Cache Slots: temporary repository for frequently accessed data (staging area between host and physical drive)
– Track Table: directory of the data residing in cache and of the location and condition of the data residing on Symmetrix physical disk(s)
– Communications and Mailboxes: contains performance and diagnostic information concerning Symmetrix and allows the independent front end and back end to communicate
– DMX also uses the message matrix for control and communications

The actual size requirements for cache depend on the configuration. The primary use for cache is for staging and destaging data between the host and the disk drives. Cache is allocated in tracks and is referred to as cache slots, which are 32 Kbytes in size (47 or 57 Kbytes for mainframe). If the Symm is supporting both FBA and CKD emulation within the same frame, the cache slots will be the size of the largest track size, either 47K (3380) or 57K (3390).

The Track Table is used to keep track of the status of each track of each logical volume. Approximately 16 bytes of cache space is used for each track. So, a 2GB volume would use approximately 1MB of cache for track table space ((2GB / 32KB) x 16B).

The Mailbox is used for communications between the directors, as well as providing the facility for Channel Directors to communicate with Disk Directors. With DMX, while the Mailboxes still exist, a Communications and Control Matrix allows direct communication between directors.

Cache is also used to maintain all diagnostic and short-term performance information. The Symm maintains diagnostic information for every component within the architecture. Performance data includes I/Os per second, cache hit rate, and read/write percentage for the entire system, individual directors, and individual devices (logical volumes). This information is accumulated and stored as part of the Symm’s normal operations, whether or not someone (CE or customer) is referencing it.

The general rule that “more is better” also applies to cache, but again, the actual requirement is a function of the configuration and application access patterns. You can see that cache requirements depend on the actual configuration; the CQS system provides sizing guidelines based on actual configuration, cache hit rate, and read/write percentage.
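The track-table sizing rule above can be written out as a quick calculation, assuming the open systems 32 KB track size quoted above.

```python
TRACK_SIZE_BYTES = 32 * 1024     # open systems (FBA) track / cache slot size
TRACK_TABLE_ENTRY_BYTES = 16     # approximate bookkeeping per track


def track_table_bytes(volume_bytes: int) -> int:
    """Approximate track-table space consumed in cache for one logical volume."""
    tracks = volume_bytes // TRACK_SIZE_BYTES
    return tracks * TRACK_TABLE_ENTRY_BYTES


two_gb_volume = 2 * 1024**3
print(track_table_bytes(two_gb_volume) / 1024**2, "MB")   # 1.0 MB for a 2 GB volume
```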

Symmetrix Foundations, 26

Symmetrix Cache Management
Symmetrix cache management is based upon the following principles:
– Locality of Reference
  • If a data block has been recently used, adjacent data will be needed soon
  • Data staged from disk to cache at a minimum of 4K, or blocks to end of track, or full track
  • Prefetch algorithm detects sequential data access patterns
– Data re-use
  • Accessed data will probably be used again
– Least Recently Used (LRU) data is flushed from the cache first
  • Only keep active data in the cache
  • Free up cache slots that are inactive to make room for more active data

The data residing in cache is ordered through an Age-Link-Chain. Every time a director performs a cache operation, it must take control of the LRU algorithm. As data is placed into cache or accessed within cache, it is given a pseudo timestamp. As data is touched (a read operation, for example), it moves to the top of the Age-Link-Chain. This forces the director to mark the least recently used data in cache as available (to be overwritten by the next cache operation). This allows the Symm to maintain only the most frequently accessed data in cache memory.

Prefetching – Once sequential access is detected, prefetch is turned on for that logical volume. Prefetch is initiated by 2 sequential accesses to a volume. Once turned on, for every sequential access, the Symm will pull the next two successive tracks into cache (access to track 1 on cylinder 1 will prompt the prefetch of tracks 2 & 3 on cylinder 1). After 100 sequential accesses to that volume, the next sequential access will initiate the prefetching of the next 5 tracks on that volume (access to track 1 on cylinder 10 will prompt the prefetch of tracks 2, 3, 4, 5 & 6 on cylinder 10). After the next 100 sequential accesses to that volume, the prefetch track value is increased to 8 (access to track 1 on cylinder 100 will prompt the prefetch of tracks 2, 3, 4, 5, 6, 7, 8 & 9 on cylinder 100). Any non-sequential access to that volume will turn the prefetch capability off.
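The prefetch escalation described in the notes can be captured in a small state sketch with invented names (the real detection logic inside Enginuity is not documented here): two sequential accesses switch prefetch on at 2 tracks, it steps up to 5 and then 8 tracks after successive runs of 100 sequential accesses, and any non-sequential access switches it off.

```python
class PrefetchState:
    """Toy model of per-volume prefetch escalation (2 -> 5 -> 8 tracks ahead)."""

    def __init__(self):
        self.sequential_run = 0
        self.prefetch_tracks = 0          # 0 means prefetch is off

    def access(self, sequential: bool) -> int:
        if not sequential:                # any non-sequential access turns prefetch off
            self.sequential_run = 0
            self.prefetch_tracks = 0
            return 0
        self.sequential_run += 1
        if self.sequential_run >= 202:    # assumed: 2 to enable plus two runs of 100
            self.prefetch_tracks = 8
        elif self.sequential_run >= 102:  # assumed: 2 to enable plus one run of 100
            self.prefetch_tracks = 5
        elif self.sequential_run >= 2:    # two sequential accesses enable prefetch
            self.prefetch_tracks = 2
        return self.prefetch_tracks       # tracks to stage ahead of the current access


state = PrefetchState()
for _ in range(204):
    depth = state.access(sequential=True)
print("prefetch depth after a long sequential run:", depth)      # 8
print("after a random access:", state.access(sequential=False))  # 0
```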

Symmetrix Foundations, 27

Symmetrix Card Cage
[Diagram: card cage slot maps for the DMX800, DMX1000, DMX2000, and DMX3000, showing disk adapter (DA), SCSI/Fibre/ESCON adapter (SA, FA, EA), and memory (MM) boards placed side by side in numbered slots.]

Model: Maximum Front End Directors / Maximum Back End Directors / Maximum Cache Directors / Maximum Cache / Maximum Disk Drives
– DMX800: 2 / 2 / 2 / 32GB / 120
– DMX1000: 6 / 2 / 4 / 128GB / 144
– DMX1000P: 4 / 4 / 4 / 64GB / 144
– DMX2000: 12 / 4 / 8 / 256GB / 288
– DMX2000P: 8 / 8 / 8 / 256GB / 288
– DMX3000: 8 / 8 / 8 / 256GB / 576
– 8830: 8 / 8 / 4 / 64GB / 384
– 8530: 4 / 4 / 4 / 64GB / 96
– 8230: 2 / 2 / 2 / 32GB / 48

Though we logically divide the architecture of the Symm into Front End, Back End, and Shared Global Memory, physically these director and memory cards reside side-by-side within the card cage of the Symm. The DMX “P” models are configured for maximum performance rather than connectivity.

Symmetrix Foundations, 28

Enginuity Overview
Operating Environment for Symmetrix
– Each processor in each director is loaded with Enginuity
  • Downloaded from the service processor to the directors over an internal LAN
  • Zipped code loaded from EEPROM to SDRAM (control store of the director)
– Enginuity is what allows the independent director processors to act as one Integrated Cached Disk Array
  • Also provides the framework (coding) for advanced functionality like SRDF, TimeFinder, etc.
– All DMX shipped with the latest Enginuity 5670 as of Sept. 2003

Enginuity release numbering, for example 5568.34.22:
– Symmetrix hardware supported (50 = Symm3, 52 = Symm4, 55 = Symm5, 56 = DMX)
– Microcode “family” (major release level)
– Field release level of Symmetrix microcode (minor release level)
– Field release level of Service Processor code (minor release level)

Enginuity automatically reserves 12 GB (raw) for internal use as a Symmetrix File System (SFS). This 12 GB of raw SFS space is translated into 6 GB of usable space (mirrored configuration) and is spread equally across two 3 GB volumes. This space is automatically allocated while initially loading the Enginuity Operating Environment on Symmetrix systems and is not visible to the host environment. The SFS stores statistic data that is generated and is used to provide a number of benefits:
• Dynamically adjusting performance algorithms
• Enhancement of dynamic mirror service policy
• Enhancement of Symmetrix Optimizer
• More rapid recovery from problems
• Enhanced system audit and investigation

Enginuity also allows Quality of Service (QoS), giving the ability to set varying priority levels to applications residing within a Symmetrix to meet varying customer needs or agreements.

Symmetrix Foundations, 29

Symmetrix Configuration Information
Symmetrix configuration information includes the following:
– Physical hardware that is installed – number and type of directors, memory, and physical drives
– Mapping of physical disks to logical volumes
– Mapping of SCSI addresses to volumes and volumes to front-end directors
– Operational parameters for front-end directors
Configuration information is referred to as the IMPL.bin file, or simply “the bin file”
Stored in two places:
– On the hard disk of the Symmetrix Service Processor
– In the EEPROM of each Symmetrix Director
Configuration changes can also be made using the EMC ControlCenter Configuration Manager GUI and WideSky CLI

Two very important concepts:
Each director (both Channel and Disk) has a local copy (stored in EPROM) of the configuration file. This enables Channel Directors to be aware of the Disk Directors that are managing the physical copy(ies) of Symmetrix Logical Volumes, and vice versa. The bin file also allows Channel Directors to map host requests to a channel address, or target and LUN, to the Symmetrix Logical Volume.
Changes made to the bin file (non-SDR changes) must first be made to the IMPL.BIN on the Service Processor and then downloaded to the directors over the internal Ethernet LAN. In addition, CS requires that all CEs do a comparison analysis prior to committing changes (read out the existing IMPL.BIN and compare it to the proposed IMPL.BIN file). Though Customer Service has the capability to do remote bin file updates (using the SymmRemote application), their standard operating procedure mandates that the CE be physically present for all configuration changes.

Mapping Physical Volumes to Logical Volumes (Slide 30)

Symmetrix physical drives are split into Hyper Volume Extensions. In the example, an 18 GB physical drive is split into four hyper volumes of 4.5 GB each.

Hyper Volume Extensions (disk slices) are then defined as Symmetrix Logical Volumes:
– Symmetrix Logical Volumes are internally labeled with a hexadecimal identifier (0000-FFFF)
– Maximum number of Logical Volumes per Symmetrix configuration = 8192

Notes: Symmetrix Logical Volumes are defined by the Symmetrix Configuration (BIN file). While "hyper-volume" and "split" refer to the same thing (a portion of a Symmetrix physical drive), a "logical volume" is a slightly different concept. A logical volume is the disk entity presented to a host via a Symmetrix channel director port. As far as the host is concerned, the Symmetrix Logical Volume (SLV) is a physical drive. In actuality, an SLV physically resides on at least one hyper volume, but may be mirrored to more than one hyper volume on the back end. Do not confuse Symmetrix Logical Volumes with host-based logical volumes, which are configured (by customers) through Logical Volume Manager software (Veritas LVM, NT Disk Administrator, etc.). Note: this is a very simplistic example of hyper-volume extensions on a physical drive. In actuality, the true usable capacity of the drive would be less than 18 GB due to disk formatting and overhead (track tables, etc.), so each of the four splits in this example would be approximately 4.21 GB in size (open systems).
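As a rough illustration of the split arithmetic above, the sketch below divides a drive's nominal capacity into equal hyper volumes. The 18 GB drive and the four-way split come from the slide; the overhead fraction is only a placeholder, since the actual formatted capacity depends on the drive and on Enginuity overhead (track tables and the like).

    # Illustrative only: nominal vs. approximate usable split sizes for a drive.
    # The 6.5% overhead figure is a placeholder, not an EMC specification.
    def split_drive(nominal_gb: float, splits: int, overhead_fraction: float = 0.065):
        nominal_split = nominal_gb / splits
        usable_split = nominal_gb * (1 - overhead_fraction) / splits
        return nominal_split, usable_split

    nominal, usable = split_drive(18.0, 4)
    print(f"nominal split ~{nominal:.2f} GB, usable split ~{usable:.2f} GB")
    # nominal split ~4.50 GB, usable split ~4.21 GB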

Symmetrix Logical Volume Specifications (Slide 31)

Volume specifications:
– Enginuity allows up to 128 Hyper Volumes to be configured from a single physical drive
– The size of a volume is defined as a number of cylinders (an FBA cylinder = 15 * 32 KB), with a maximum size of approximately 16 GB
– All Hyper Volumes on a physical disk do not have to be the same size; however, a consistent size makes planning and ongoing management easier
– Hyper Volume(s) are the physical disk partitions that comprise Symmetrix Logical Volumes
  • One mirrored Symmetrix Logical Volume = two Hyper Volumes

Notes: Volume specifications are illustrated here.

Defining Symmetrix Logical Volumes (Slide 32)

Symmetrix Logical Volumes are configured using the service processor and the SymmWin interface/application:
– The EMC Configuration Group uses information gathered during the pre-site survey to create the initial configuration
  • Subsequent changes to the configuration must be approved by the Configuration Group through its standard change control process (expected turnaround is 5 days)
– This generates the configuration file (IMPL.BIN) that is downloaded from the service processor to each director
– Most configuration changes can be performed on-line at the discretion of the EMC Customer Engineer
– Configuration changes can also be performed online using the EMC ControlCenter Configuration Manager and the WideSky Command Line Interface

Notes: The C4 group (Configuration and Change Control Committee) is the division of Global Services responsible for the initial Symm configuration and any subsequent changes to the configuration. They use time-honored and extensive best practices and tools to configure Symms. There is also much manual review to be done to ensure that BIN files are valid. For planning purposes, allow at least 5 days to produce a BIN file or make major changes to a configuration. Prior to 5x66 Enginuity, BIN file configuration was performed using a DOS-based program called AnatMain. An important misperception to correct is that only the CE can change the bin file. While this might have been true at one time, today the customer may make configuration changes using the EMC ControlCenter GUI or the WideSky CLI.

Symmetrix Logical Volume Types (Slide 33)

Open Systems hosts use Fixed Block Architecture (FBA):
– Each block is a fixed size of 512 bytes
– Sector = 8 blocks (4,096 bytes)
– Track = 8 sectors (32,768 bytes)
– Cylinder = 15 tracks (491,520 bytes)
– Volume size is referred to by the number of cylinders

Mainframes use Count Key Data (CKD):
– Variable block size specified in "count"
– Emulate standard IBM volumes
  • 3380 D, E, K, K+, K++ (maximum track size 47,476 bytes)
  • 3390-1, -2, -3, -9 (maximum track size approximately 56,664 bytes)
– Volume size defined as a number of cylinders

Symmetrix stores data in cache in FBA and CKD format, and on physical disk in FBA format (32 KB tracks). The "expected" disk geometry is emulated to the host OS through the Channel Directors.

Notes: CKD and FBA physicals can be mixed in a Symmetrix if the ESP license is purchased for that Symm; ESP allows the Symmetrix to deal with the two fundamentally different types of low-level formats. A notable exception to the "512-byte" Open Systems rule is the AS/400, which uses 520 bytes per block; the extra 8 bytes are for host system overhead. Prior to 5566 on the Symmetrix 5, Enginuity supports only a single type of FBA format on Open Systems drives. If you connect an AS/400 to a pre-5566 Symmetrix, all FBA devices must be formatted 520, and Open Systems hosts other than the AS/400 must be configured to use 520-formatted volumes. Also, reformatting existing 512 devices will erase them, requiring a potentially complex backup and restore of all Open Systems data (VTOC the drives). With 5566+ on Symm 5 and later, Enginuity has SLLF (Selective Low-Level Format) capabilities, which allow some drives to be formatted 512 and others 520, avoiding the complications mentioned above. BE AWARE THAT CHANGING THE LOW-LEVEL FORMAT OF PHYSICAL DEVICES TYPICALLY REQUIRES SYMMETRIX DOWNTIME.
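The FBA geometry above lends itself to a quick arithmetic check. The sketch below uses only the figures from the slide (512-byte blocks, 8 blocks per sector, 8 sectors per track, 15 tracks per cylinder); the cylinder count in the example is arbitrary and is not an EMC configuration limit.

    # Illustrative FBA geometry arithmetic from the slide's figures.
    BLOCK = 512                     # bytes
    SECTOR = 8 * BLOCK              # 4,096 bytes
    TRACK = 8 * SECTOR              # 32,768 bytes
    CYLINDER = 15 * TRACK           # 491,520 bytes

    def volume_size_gib(cylinders: int) -> float:
        """Approximate volume size in GiB for a given cylinder count."""
        return cylinders * CYLINDER / 2**30

    # Roughly 35,000 cylinders works out to about 16 GiB, in line with the
    # ~16 GB maximum volume size mentioned on the previous slide.
    print(f"{volume_size_gib(35_000):.1f} GiB")   # ~16.0 GiB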

Media Protection (Slide 34)

Data protection options are configured at the volume level, and the same system can employ a variety of protection schemes:
– Mirroring (RAID 1)
  • Highest performance, availability, and functionality
  • Two mirrors of one Symmetrix Logical Volume located on separate physical drives
– RAID 1/0: mirrored striped mainframe volumes
– Parity RAID
  • 3+1 (3 data and 1 parity volume) or 7+1 (7 data and 1 parity volume)
  • Known as RAID S or RAID R in Symm 5 and earlier
– Dynamic Sparing
  • One or more HDAs that are used when Symmetrix detects a potentially failing (or failed) device
  • Can be utilized to augment the data protection scheme
  • Minimizes exposure after a drive failure and before drive replacement
– SRDF (Symmetrix Remote Data Facility)
  • Mirror of a Symmetrix Logical Volume maintained in a separate Symmetrix frame

Notes: RAID stands for Redundant Array of Independent Disks. The RAID Advisory Board has rated configurations with both SRDF and Parity RAID or RAID 1 mirroring with the highest availability and protection classification: Disaster Tolerant Disk System Plus (DTDS+). See http://www.raid-advisory.com/emc.html for the ratings.

Mirror Positions (Slide 35)

Internally, each logical volume is represented by four mirror positions: M1, M2, M3, M4.
– Mirror positions are actually data structures that point to the physical location of a mirror of the data and the status of each track
– Each mirror position represents a mirror copy of the volume or is unused
(Diagram: Symmetrix Logical Volume 001 with mirror positions M1 through M4 pointing to an unprotected copy, a local replica, and a remote replica.)

Notes: Before getting too far into volume configuration, understanding the concept of mirror positions is very important. Within the Symmetrix, each Symmetrix Logical Volume is represented by four mirror positions: M1, M2, M3, and M4. These mirror positions are data structures that point to the physical location of a data mirror and the status of each track. Each position either represents a mirror or is unused. For example, an unprotected volume will only use the M1 position to point to the only data copy. A RAID-1 protected volume will use the M1 and M2 positions. If this volume was also protected with SRDF, three mirror positions would be used, and if we add a BCV to this SRDF-protected RAID-1 volume, all four mirror positions would be used. Note that the order in which mirror positions are assigned is not important: a BCV or SRDF mirror is assigned the next available unused mirror position. For example, if a BCV was established to a RAID-1 protected volume, it would assume the M3 mirror position. Another thing to keep in mind is that mirror positions are logical pointers. With local mirrors, the pointer is to the physical hyper volume (Disk Director, Drive, and Split). In the case of SRDF, the mirror position actually points to a Logical Volume in the remote Symmetrix.
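To make the "mirror positions are logical pointers" idea concrete, here is a minimal, hypothetical sketch of the kind of bookkeeping described in the notes. The class and field names are invented for illustration and do not reflect actual Enginuity data structures.

    # Hypothetical illustration of mirror positions as pointers (not Enginuity code).
    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class LocalMirror:                 # points to a hyper volume on the back end
        disk_director: int
        drive: int
        split: int

    @dataclass
    class RemoteMirror:                # SRDF: points to a volume in another Symmetrix
        remote_frame: str
        remote_volume: str

    @dataclass
    class LogicalVolume:
        ident: str                     # hexadecimal identifier, e.g. "001"
        mirrors: List[Optional[object]] = field(default_factory=lambda: [None] * 4)

        def attach(self, mirror) -> int:
            """Assign the next available unused mirror position (M1..M4)."""
            for i, slot in enumerate(self.mirrors):
                if slot is None:
                    self.mirrors[i] = mirror
                    return i + 1       # 1-based: M1..M4
            raise RuntimeError("all four mirror positions are in use")

    lv = LogicalVolume("001")
    lv.attach(LocalMirror(2, 0, 3))                  # M1
    lv.attach(LocalMirror(15, 0, 0))                 # M2 (RAID-1)
    lv.attach(RemoteMirror("remote-symm", "001"))    # M3 (SRDF)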

Mirroring: RAID-1 (Slide 36)

– Two physical "copies" or mirrors of the data
– The host is unaware of the data protection being applied
– In the example, Logical Volume 001 is presented at Target = 1, LUN = 0, with its M1 on a physical drive behind Disk Director 2 and its M2 on a physical drive behind Disk Director 15

Notes: Mirroring provides the highest level of performance and availability for all applications. Mirroring maintains a duplicate copy of a logical volume on two physical drives, and the Symmetrix maintains these copies internally by writing all modified data to both devices. The mirroring function is transparent to attached hosts, as the host views the mirrored volumes as a single logical volume. In the example shown, two physical mirrors of one logical volume are being presented to the host (using SCSI address 1,0) as if it were an entire physical drive. Hyper 3 on physical drive 0 on DA 2 is the M1 for Logical Volume 001; hyper 0 on physical drive 0 on DA 15 is the M2 for Logical Volume 001. Notice that if the director numbers of the DAs are added together (2+15), they equal 17. Because of where the DA pairs reside within the card cage (1/2, 3/4, ..., 13/14, 15/16), the mirrors will always be on different internal system busses (for the highest availability and maximum use of Symm resources) as long as the sum of the DA director numbers equals 17 (1/16, 2/15, 3/14, 4/13). This is what is known as the "rule of 17".
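A tiny sketch of the "rule of 17" placement check described in the notes; the helper below is purely illustrative and simply verifies that two disk-director numbers sum to 17.

    # Illustrative check for the "rule of 17" mirror placement described above.
    def follows_rule_of_17(da_m1: int, da_m2: int) -> bool:
        return da_m1 + da_m2 == 17

    print(follows_rule_of_17(2, 15))   # True  (M1 on DA 2, M2 on DA 15)
    print(follows_rule_of_17(1, 16))   # True
    print(follows_rule_of_17(3, 4))    # False (adjacent card-cage pair, not a mirror pair)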

Mirrored Service Policy (Slide 37)

Symmetrix leverages either or both mirrors of a Logical Volume to fulfill read requests as quickly and efficiently as possible. There are two options for mirror reads, Interleave and Split:
– Interleave maximizes throughput by using both Hyper Volumes alternately for reads
– Split minimizes head movement by targeting reads for specific volumes to either the M1 or the M2 mirror

Dynamic Mirror Service Policy (DMSP) sets, and directs read operations for mirrored data to, the appropriate mirror; the policy is dynamically adjusted based on I/O patterns:
– Adjusted approximately every 5 minutes
– Set at the logical volume level
(Diagram: logical volumes 000, 004, 008, and 00C with their M1 mirrors on one physical drive and their M2 mirrors on another.)

Notes: During a read operation, if the data is not available in cache memory, the Symmetrix reads the data from the volume chosen for best overall system performance. The Symmetrix system tracks the I/O performance of logical volumes (including BCV volumes), physical disks, and disk directors. Performance algorithms within Enginuity track path-busy information, as well as the actuator location and which sector is currently under the disk head in each device. Based on these measurements, Symmetrix performance algorithms choose the best volume in the mirrored pair for a read operation based upon these service policies:
• Interleave Service Policy: shares the read operations of a mirror pair by reading tracks from both logical volumes in an alternating manner, a number of tracks from the primary volume (M1) and a number of tracks from the secondary volume (M2). The Interleave Service Policy is designed to achieve maximum throughput. This is the default mode.
• Split Service Policy: differs from the Interleave Service Policy in that read operations are assigned to either the M1 or the M2 logical volume, but not both. The Split Service Policy is designed to minimize head movement.
• Dynamic Mirror Service Policy (DMSP): dynamically chooses between the Interleave and Split policies at the logical volume level based on current performance and environmental variables. DMSP adjusts each logical volume dynamically based on recent access patterns, to maximize throughput and minimize head movement. As access patterns and workloads change, the DMSP algorithm analyzes the new workload and adjusts the service policy to optimize performance.
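To clarify the difference between the two read policies, the following toy sketch routes a sequence of track reads to M1 or M2. It is a simplification invented for this note; real policy selection in Enginuity considers actuator position, path-busy statistics, and the other factors described above.

    # Toy illustration of Interleave vs. Split read targeting (not the real algorithm).
    def choose_mirror(policy: str, track: int, volume_id: int) -> str:
        if policy == "interleave":
            # alternate groups of tracks between the two mirrors
            return "M1" if (track // 8) % 2 == 0 else "M2"
        if policy == "split":
            # pin the whole logical volume's reads to one mirror
            return "M1" if volume_id % 2 == 0 else "M2"
        raise ValueError("unknown policy")

    for track in range(0, 32, 8):
        print(track, choose_mirror("interleave", track, volume_id=0x008))
    # tracks 0 and 16 go to M1, tracks 8 and 24 go to M2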

Parity RAID (Slide 38)

Parity RAID is also referred to as RAID-S in Symm 5 and earlier architectures.
– 3+1 (3 data volumes and 1 parity volume) or 7+1
– Parity is calculated by the Symmetrix disk drives using the Exclusive-OR (XOR) function
– Parity and difference data (the result of XOR calculations) are passed between drives by the DAs
– Member drives must be on different DA ports (ideally on different DAs)
– Parity volumes are distributed across the member drives in the RAID group
– Unlike RAID-5, the data is not striped: "Volume A" in the diagram is an entire Logical Volume and is only related to "Volume B" and "Volume C" via parity calculations
(Diagram: RAID ranks 0 through 3 holding data volumes A through L, with the parity volumes for ABC, DEF, GHI, and JKL rotated across the member drives.)

Notes: Symmetrix Parity RAID technology is a combination of hardware and software functionality that improves data availability on drives in Symmetrix systems by using a portion of the array to store redundancy information. This redundancy information, called parity, can be used to regenerate data if the data on a disk drive becomes unavailable. Parity RAID employs the same technique for generating parity information as many other commercially available RAID solutions: the Boolean EXCLUSIVE OR (XOR) operation. It resembles RAID-5 in that parity is striped across all disks in the rank; however, EMC's Parity RAID DOES NOT STRIPE DATA. EMC's Parity RAID implementation reduces the overhead associated with parity computation by moving the operation from controller microcode to the hardware on the XOR-capable disk drives. Two configurations are supported: 3+1 and 7+1. Parity RAID offers more usable capacity than a mirrored system containing the same number of disk drives. Within the same Symmetrix system, data can be protected through Parity RAID, mirroring, and SRDF. Like the Mirroring or Dynamic Sparing options, Symmetrix RAID parity protection can be dynamically added or removed; for example, a Parity RAID group of volumes can be reconfigured as multiple mirrored pairs for higher performance requirements and high availability.
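The XOR mechanics mentioned above can be shown in a few lines. The sketch below is a generic parity illustration, not EMC's on-drive implementation: parity is the byte-wise XOR of the data members, and a lost member can be regenerated by XOR-ing the parity with the surviving members.

    # Generic XOR parity illustration (3 data members + 1 parity, as in 3+1 Parity RAID).
    from functools import reduce

    def xor_bytes(*members: bytes) -> bytes:
        return bytes(reduce(lambda x, y: x ^ y, chunk) for chunk in zip(*members))

    a = b"volume A data..."
    b = b"volume B data..."
    c = b"volume C data..."
    parity = xor_bytes(a, b, c)

    # Simulate losing member B and rebuilding it from parity plus the survivors.
    rebuilt_b = xor_bytes(parity, a, c)
    assert rebuilt_b == b
    print("member B regenerated from parity")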

Parity RAID Considerations (Slide 39)

While Symmetrix Parity RAID minimizes some of the hardware and software overhead associated with typical RAID-5, it is not offered as a performance solution:
– Intended for high data availability environments where cost and performance must be balanced
– A fixed 3+1 configuration means 25% of disk space is used for protection
– Avoid using it in application environments that are 25% or more write intensive
– Every write to a data volume requires an update (write) to the parity volume within that rank
– Write activity to the parity volume equals the total writes to the 3 data volumes within that rank
– In write-intensive environments, the parity volume is likely to reach its Fast Write Ceiling, sending the entire rank into delayed write mode
– Spread high write volumes across Parity RAID groups (avoid spindle contention)
– In some configurations, Parity RAID in a DMX environment may perform as well as RAID 1 protection on a Symmetrix 8000

Notes: Some of the inefficiencies associated with RAID-5 have been eliminated with EMC's Parity RAID in a DMX system. RAID-1 mirroring, however, continues to provide the highest availability and performance, and should be positioned as such. If customer requirements dictate using Parity RAID, planning and careful attention to layout are required to ensure optimal performance.

Dynamic Sparing (Slide 40)

– Dedicated spare(s) protect storage
– Disk errors are detected during I/O operations or through the DA's "disk scrubbing"
– Data from the failed disk is copied to the Dynamic Spare
– When the failed disk is replaced, data is automatically restored and the Dynamic Spare resumes its role as standby

Notes: Every Symmetrix logical volume has four mirror positions. There is no priority associated with any of these positions; they simply point to potential physical locations (on the back end of the Symmetrix) for the logical volume entity. Dynamic sparing occurs at the physical drive level, since a physical drive is the FRU (Field Replaceable Unit) in the Symmetrix. In other words, you can't just replace a failed hyper volume, only the disk it resides on. However, the actual data migration from the volumes on the failed drive to the dynamic spare occurs at the logical volume level. When sparing is necessitated, hyper volumes on the spare disk devices take the next available mirror position for the logical volumes present on the failing drive. All of these dynamic spare hyper volumes are marked as having all tracks invalid in the respective mirror positions of the logical volumes; it is then the responsibility of the Symmetrix to copy all tracks over to the Dynamic Spare. Dynamic Sparing is also supported with Parity RAID. If a drive fails, a dynamic spare drive will copy the data volumes onto itself by rebuilding them from parity and reading from any remaining uncorrupted data. If there are at least three spares available, the other two spares will copy the contents of the remaining data volumes on the unaffected drives in the group, and the first spare will also start copying data from the uncorrupted drives in the group. This results in the formerly parity-protected volumes being temporarily mirrored. Since parity can't be calculated with a drive lost, mirroring the entire RAID group is the best way to protect against data loss until the problematic drive can be replaced, and mirroring is a faster way to make sure the data is redundantly protected. A minimum of three spares in the configuration is suggested.

Meta Volumes (Slide 41)

Between 2 and 255* Symmetrix Logical Volumes can be grouped into a Meta Volume configuration and presented to Open Systems hosts as a single disk:
– Assigned one SCSI address
(Diagram: Logical Volumes 001, 002, 003, ..., 00F grouped into a meta volume and presented at one SCSI address, Target = 1, LUN = 0.)
– Allows volumes larger than the current maximum hyper volume size of 16 GB
– Satisfies requirements for environments where there is a limited number of SCSI addresses or volume labels available
– Data is striped or concatenated within the Meta Volume
– Stripe size is configurable; a 2-cylinder stripe is the default and appropriate for most environments
*Note: Symmetrix Engineering recommends Meta Volumes no larger than 512 GB.

Notes: Meta Volumes become very useful in several environments; a quick arithmetic version of the addressing argument follows this note. First, environments where channel addresses are at a premium: the maximum number of devices that can be presented on a Symm 5 FA port is 256 (128 for Symm 4.x). If the customer has multipathing software (like PowerPath), devices will be presented down multiple Symm ports; four paths to 64 volumes has just exhausted the 256 devices for those four Symm ports. Meta Volumes allow customers to present more GB with fewer channel addresses. Second, there is a limitation on the number of volumes a host can manage. For example, with NT, drive lettering puts a limit on the number of volumes, and Meta Volumes prevent "running out of drive letters" by presenting larger volumes to NT hosts (Engineering has successfully presented a 1 TB volume to NT). Meta Volumes allow customers to present larger Symmetrix Logical Volumes to the host environment.
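A back-of-the-envelope version of the addressing example in the notes; this arithmetic simply restates the figures above (a 256-device limit and four paths) and is not a sizing tool.

    # Address-consumption arithmetic from the example above (illustrative only).
    DEVICE_LIMIT = 256          # maximum devices cited for a Symm 5 FA port
    paths = 4

    def addresses_used(volumes_presented: int, paths: int) -> int:
        # each volume presented on each path consumes a device address
        return volumes_presented * paths

    print(addresses_used(64, paths))        # 256 -> the limit from the notes is reached
    # Grouping 16 logical volumes into one meta volume presents the same capacity
    # through a single address per path:
    print(addresses_used(64 // 16, paths))  # 16 addresses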

TimeFinder Introduction (Slide 42)

TimeFinder allows local replication of Symmetrix Logical Volumes for business continuance operations. It utilizes a special Symmetrix Logical Volume called a BCV, or Business Continuance Volume:
– A BCV can be dynamically attached to another volume, synchronized, and split off
– The host can access the BCV as an independent volume that may be used for business continuance operations
– Full volume copy
(Diagram, STD and BCV volumes: 1. "Establish" BCV; 2. Synchronized; 3. "Split" BCV; 4. Execute BC operations using the BCV.)

Notes: TimeFinder uses Business Continuance Volumes (BCVs) to create copies of a volume for parallel processing. Basic TimeFinder operations include:
• Establish: a mirror relationship between any standard volume and a BCV. Data is synchronized from the Source to the BCV volume, and synchronization takes place while production continues on the source volume. Basically, the BCV assumes the next available mirror position of the source volume. While a BCV is established, it is "hidden" from view and cannot be accessed.
• Split: allows the BCV to be accessed as an independent volume for parallel processing.
• Restore: allows the BCV to be established as a mirror to either the original source or a different volume, and the data on the BCV is synchronized.
TimeFinder supports incremental establish by default, where only data changed since the last establish is synchronized.

EMC SNAP Introduction (Slide 43)

EMC SNAP uses snapshot techniques to create logical point-in-time images of a source volume:
– A snapshot is a virtual abstraction of a volume
– Multiple snapshots can be created from the same source
– Snapshots are available immediately

EMC SNAP does a Copy-on-Write:
– Writes to the production volume are first copied to the Save Area
– Uses only a fraction of the source volume's capacity (~20-30%)

Snapshots can be used for both read and write processing:
– Reads of unchanged data come from the production volume
– Changed data is read from the Save Area
– Writes to the snapshot are saved in the Save Area
(Diagram: Volume A as the production view; the Snapshot of Volume A (VDEV) plus the Save Area as the snapshot view.)

Notes: EMC Snap creates space-saving, logical point-in-time images or "snapshots." The snapshots are not full copies of data; they are logical images of the original information based on the time the snapshot was created. A set of pointers to the source volume data tracks is instantly created upon activation of the snapshot. This set of pointers is addressed as a logical volume and is made accessible to a secondary host that uses the point-in-time image of the underlying data. It is simply a view into the data.
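The copy-on-write behavior described above is easy to demonstrate with a toy model. The sketch below is a deliberately simplified, hypothetical illustration of the pointer-and-save-area idea, with tracks modeled as dictionary entries; it is not how Enginuity implements virtual devices.

    # Toy copy-on-write snapshot model (illustrative, not Enginuity internals).
    class CowSnapshot:
        def __init__(self, source: dict):
            self.source = source       # production volume: track -> data
            self.save_area = {}        # original data preserved for changed tracks

        def production_write(self, track, data):
            # first write to a track since activation: preserve the old data
            if track not in self.save_area:
                self.save_area[track] = self.source.get(track)
            self.source[track] = data

        def snapshot_read(self, track):
            # unchanged tracks come from production; changed tracks from the save area
            return self.save_area.get(track, self.source.get(track))

    volume_a = {0: "alpha", 1: "bravo"}
    snap = CowSnapshot(volume_a)
    snap.production_write(1, "charlie")
    print(snap.snapshot_read(0), snap.snapshot_read(1))   # alpha bravo (point-in-time view)
    print(volume_a[1])                                     # charlie (production view)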

SRDF Introduction (Slide 44)

Symmetrix Remote Data Facility (SRDF) maintains a real-time, or near real-time, copy of data at a remote location:
– Similar concept to RAID-1, except the mirror is located in a different Symmetrix
– The primary copy is called the Source; the remote copy is called the Target
– Link options between the local and remote Symmetrix are based on distance and performance requirements: ESCON, Fibre Channel, Gigabit Ethernet
– Three different options to meet recovery objectives

Notes: SRDF is an online, host-independent, mirrored data storage solution that duplicates production site data (source) to a secondary site (target). If the production site becomes inoperable, SRDF enables rapid manual failover to the secondary site, allowing critical data to be available to the business operation in minutes. While it is easy to see this as a disaster recovery solution, the remote copy can also be used for business continuance during planned outages as well as for backups, testing, and decision support applications. When implementing a remote replication solution, users must balance application response time, recovery point objectives, communication requirements, and infrastructure costs. EMC offers a complete set of replication solutions to meet a wide range of service level requirements:
• SRDF, in synchronous mode, provides the highest level of data integrity. Performance is dependent on the distance between the source and target Symmetrix: the greater the distance, the more overhead to complete the write operation. The communications connections must be sized appropriately to handle peak processing workloads without impacting performance.
• SRDF/AR (formerly SAR) offers no impact to host server performance, but requires BCVs (Business Continuance Volumes) to allow point-in-time copies to be periodically split off from the source copy. Because the copy is periodically split off, it is no longer synchronous to the source, meaning that in the event of a source failure the data on the target will only be current to the last resync from the BCV. The communication bandwidth requirement is less than for a synchronous mode operation.
• SRDF/A bridges the gap between SRDF and SRDF/AR by balancing response time, recovery point objectives, and communications and infrastructure costs to provide a new level of remote replication. SRDF/A offers minimal impact to the application and no impact to the host servers, though it requires some additional cache to operate, adding slightly to the infrastructure costs. SRDF/A provides an improved recovery point objective (vs. SRDF/AR) and allows customers to deploy remote replication over extended distances, while only requiring communication links sized to meet the average I/O workload (vs. peak for SRDF synchronous).

Physical and Logical Volumes (Slide 45)

– Symmetrix physical drives are divided into Hyper Volumes (disk slices)
– One or more Hyper Volumes comprise a Symmetrix Logical Volume
  • Mirroring requires 2 Hyper Volumes for every 1 Symmetrix Logical Volume (M1 and M2)
– Symmetrix Logical Volumes are made available to hosts through Channel Directors
  • The bin file must map the Logical Volume's channel address to the Channel Director, processor, and port in order for it to be discovered and used by hosts
– The host sees Symmetrix Logical Volumes as if they were entire physical drives
(Diagram: physical disk drives connect through Disk Directors and Cache to Channel Directors, which present Symmetrix Logical Volumes to the hosts.)

Notes: From the Symmetrix perspective, physical disk drives are partitioned into disk slices called Hyper Volumes. Hyper Volumes could be used as an unprotected Symmetrix Logical Volume, a mirror of a Symmetrix Logical Volume, a parity volume for RAID S, a Business Continuance Volume (BCV), a BCV mirror, a remote mirror using SRDF, a Disk Reallocation Volume (DRV), a Dynamic Spare, etc. Within the Symmetrix bin file, the emulation type, size in cylinders, number of mirrors, RAID protection, remote mirroring, and special flags (like BCV, DRV, Dynamic Spare) are defined. The bin file also tells the Channel Director what volumes are presented on what port and the address used to access them. From the host's perspective, the Symm looks like JBOD (Just a Bunch Of Disks): when a device discovery process occurs, the information provided back to the OS appears to reference a series of SCSI disk drives. In other words, the host "thinks it's getting" an entire physical drive; the host is unaware of the bin file. When more than one host is connected to a port, LUN masking, using Volume Logix, is used to further restrict which host has access to which volume.

Configuration Considerations (Slide 46)

Understand the applications on the hosts connected to the Symmetrix system:
– Capacity requirements
– I/O rates
– Read/write ratios
– Locality of reference: sequential or random

Understand special host considerations:
– Maximum drive and file system sizes supported
– Consider Logical Volume Manager (LVM) on the host and the use of data striping
– Device sharing requirements (clustering)

Determine volume size and the appropriate level of protection:
– Symmetrix provides flexibility for different sizes and protection within a system
– Standard sizes make it easier to manage

Determine connectivity requirements:
– Number of channels available from each host

Distribute workloads from the busiest to the least busy.

Notes: The best advice for configuring a Symmetrix storage subsystem for maximum performance is "Go wide before deep!" This means the best possible performance will only be achieved if all the resources within the system are being utilized equally. This is much easier said than done, but through careful planning you will have a better chance for success. Planning starts with understanding the host and application requirements.

Symmetrix Availability: Phone-Home and Dial-In (Slide 47)

EMC Phone-Home capability:
– The Service Processor connects to an external modem (can fit in existing telco racks)
– Communicates error and diagnostic information to EMC Customer Service
– Provides problem resolution

Dial-In capability:
– Product Support Engineer (PSE) or Customer Engineer (CE) dial-in
– Allows full control of the service processor through a proprietary and secure interface
– Allows for proactive and reactive maintenance
– Can be disabled by the customer through the external modem

Notes: Every Symmetrix unit has an integrated service processor that continuously monitors the Symmetrix environment. The service processor automatically dials the Customer Support Center whenever the Symmetrix detects a component failure or environmental violation, communicating with the EMC Customer Support Center through a customer-supplied, direct phone line. An EMC Product Support Engineer at the Customer Support Center can also run diagnostics remotely through the service processor to determine the source of a problem and potentially resolve it before the problem becomes critical. Most call-home incidents are software-related and can be resolved remotely by dialing back into the Symmetrix. When required, a Customer Engineer is dispatched to the Symmetrix to replace hardware or perform other maintenance.

Symmetrix Availability: Hardware Redundancy (Slide 48)

Symmetrix architecture is based on the concept of N + 1 redundancy: one more component than is necessary for operation. This allows continuous operation even if failures occur in any major component:
– Global Memory Director boards
– Channel Director boards
– Disk Director boards
– Disk drives
– Communications Control Module
– Cooling Fan Modules
– Power modules
– Batteries
– Service Processor
Non-disruptive microcode upgrades and loads are also supported.

Notes: The Symmetrix undergoes the most rigorous pre-ship testing in the industry. Component, environmental, and operational testing all but guarantee the elimination of defective or substandard components. Non-disruptive microcode upgrade and load capabilities are currently available for the Symmetrix. Symmetrix takes advantage of a multi-processing and redundant architecture to allow for hot loadability of similar microcode platforms. Within a code family, release levels can be loaded non-disruptively without interruption to user access; this capability can be utilized to upgrade, or to back down from, a release level within a family. During a non-disruptive microcode upgrade, the Product Support Engineer downloads the new microcode to the service processor. The new microcode loads into the EEPROM areas within the channel and disk directors and remains idle until requested for hot load into control storage. All channel and disk directors remain in an online state to the host processor, thus maintaining application access. Symmetrix loads the executable code at selected "windows of opportunity" within each director hardware resource until all directors have been loaded. Once the executable code is loaded, internal processing is synchronized and the new code becomes operational. The Symmetrix system does not require manual intervention on the customer's part to perform this function. NOTE: During a non-disruptive microcode load within a code family, the full microcode is loaded, which consists of the same base code plus additional patches that reside in the patch area.

Advanced Availability: PowerPath (Slide 49)

PowerPath from EMC is host-based software that supports multiple paths to a Symmetrix volume:
– Open Systems only (not needed for OS/390)
– GUI or CLI management capabilities
– The Symmetrix is configured so that volumes can be accessed through multiple directors/ports
– Eliminates the HBA, cable, switch, and director as single points of failure
– Load balancing across paths also improves performance

Notes: PowerPath is an Open Systems, host-based software application that allows UNIX and Windows hosts to have multiple paths to the same Symmetrix Logical Volume (a disk, from the host's perspective). EMC PowerPath, along with properly architected connectivity from hosts to storage, ensures continuous availability on the front end. While Channel Directors are redundant, it is important to remember that there is no automatic failover on the front end. For the highest availability, the physical connections from the HBAs should be to: 1) separate Channel Directors, and 2) directors that are located on different internal system busses. The easiest way to achieve this configuration is to ensure one Channel Director is odd numbered and one is even numbered. This is not an issue with the DMX, in that all directors have a direct path to cache. Important note: though PowerPath can accommodate up to 32 paths to one Logical Volume, realistically, the more paths that exist to one Symmetrix Logical Volume, the more SCSI addresses are being used within the Symm. This is because of the 256-device maximum on any one FA port. For example, with 4 Symm ports and 100 volumes, this could quickly exhaust the available addresses; it would be impossible to present all 100 volumes on all 4 ports (paths).

DMX: Dual-ported Disk and Redundant Directors (Slide 50)

– Directors are always configured in pairs to facilitate secondary paths to drives
– Each disk module has two fully independent Fibre Channel ports
– Each drive port connects to a Director by a separate loop
  • Each port connects to a different Director in the Director pair
  • Star-hub topology
  • Port bypass cards prevent a drive failure or replacement from affecting the other drives on the loop
– Directors have four primary loops for normal drive communication, and four secondary loops to provide an alternate path if the other director fails
(Diagram: Disk Director 1 and Disk Director 16; P = primary connection to the drive, S = secondary connection for redundancy.)

Notes: The Symmetrix DMX back end employs an arbitrated loop design and dual-ported disk drives. Each drive connects to two Disk Directors through separate Fibre Channel loops. The loops are configured in a star-hub topology with gated hub ports and bypass switches that allow individual Fibre Channel disk drives to be dynamically inserted or removed.

Symm 5: Dual-Initiator Disk Director (Slide 51)

– Disk Directors are installed in pairs to facilitate secondary paths to drives
– In the unlikely event of a disk director processor failure, the adjacent director will continue servicing the attached drives through the secondary path
  • In this example, DA 1 processor "b" would see ports C and D of DA 2 processor "b" as its A and B ports in a fail-over scenario
– Protects against DA processor card failure
– Physical drives are not dual-ported but are connected via a dual-initiator SCSI bus
– Volumes are typically mirrored across directors
(Diagram: DA 1 processor b and DA 2 processor b, ports C and D, connected across the midplane; solid line = primary path, dotted line = secondary path.)

Notes: Symm 4 and 5 architectures utilize a dual-initiator back-end architecture that ensures continuous availability of data in the unlikely event of a Disk Director failure. This feature works by having two disk directors shadow each other's function. Under normal conditions, each disk director services its own disk devices; that is, each disk director has the capability of servicing any or all of the disk devices of the disk director it is paired with. If Symmetrix detects a disk director hardware failure, Symmetrix "calls home" but continues to read from or write to the disk devices through the disk director it is paired with. When the source of the failure is corrected, Symmetrix returns the I/O servicing of the two disk directors to their normal state. Note: on the 4.x family, dual-initiator operation is achieved by physically connecting one disk director's port card to the port card of the adjacent disk director.

Advanced Availability: Power Subsystem (Slide 52)

Each Symmetrix has 3 power supplies and redundant batteries:
– Symmetrix can connect to 2 external power sources (primary / auxiliary)
– Three AC/DC and three DC/DC power supply modules operate in a redundant parallel configuration
– While one battery is acting as the "primary standby", the other battery is acting as the "secondary standby" (they periodically switch roles)
– Power modules and batteries are FRUs and "hot swappable"
– Batteries are periodically load tested to ensure their availability in the event of a main power system failure
– Batteries power the cache and all disks within the ICDA
– Upon detection of a main power failure, the Symm will continue to accept I/O from the host environment for 90 seconds
– If power is not re-established:
  • The Symm stops accepting I/O
  • All write pending data is destaged to its actual location on disk
  • The Symm then waits for the battery timer to run down before beginning the "graceful" shutdown process (spin down the drives and retract the heads)
  • The Symm would be immediately available to hosts (no IML required) if power returns prior to the battery timer running down

Notes: Power Subsystem: the Symmetrix has a modular power subsystem featuring a redundant architecture that facilitates field replacement without interruption. The Symmetrix power subsystem connects to two dedicated or isolated AC power lines; if AC power fails on one line, the power subsystem automatically switches to the other. Three AC/DC power supply modules operate in a parallel configuration and provide 56V power for the DC/DC power distribution system; the DC/DC modules provide 5V and 12V power to the various components in the Symmetrix unit. If any single AC/DC power supply module fails, the remaining power supplies continue to share the load. System Battery Backup: the Symmetrix backup battery subsystem maintains power to the entire system if AC power is lost. When a power failure occurs, power switches immediately to the backup battery and Symmetrix continues to operate normally. The backup battery subsystem allows Symmetrix to remain online to the host system for three minutes in the event of an AC power loss. When the battery timer window elapses, Symmetrix presents a busy status to prevent the attached hosts from initiating any new I/O, allows the directors to flush cache write data to the disk devices, spins down the disk devices, retracts the heads, and powers down. Symmetrix continually recharges the battery subsystem whenever it is under AC power. Symmetrix Emergency Power Off: the emergency power off sequence allows 20 seconds to destage pending write data. When the EPO switch is set to off, Symmetrix immediately switches to battery backup and initiates writes of cache data, destaging any write data still in cache to disk. Data directed to mirrored pairs is written to only one device: the first available mirror device receives the data, and the other mirror device's status is set to invalid. Data directed to non-mirrored volumes is written to the first available spare area on any device available for writing. The director records that there are pending write operations to complete, and stores the location of all data that has been temporarily redirected. When power is restored, all data is written to its proper volume and mirrored pairs are re-established as part of the initial load sequence.

Advanced Availability: Cache Protection (Slide 53)

Why is cache not mirrored?
– An advanced method of cache protection allows for more usable cache for optimal I/O performance
– Proven effective by the Symm's install base

Minimum of 2 memory boards per Symmetrix (redundancy):
– Each board is connected to and accessed by multiple busses
– Each board has redundant power sources

Memory boards are comprised of multiple chips (chips are proactively monitored through I/O activity and "cache scrubbing"):
– Each chip has redundant paths (A and B port ASICs)
– Through Enginuity, each chip has a threshold for correctable errors

When the correctable error threshold is reached or a permanent (uncorrectable) error is detected:
– A Call-Home is initiated and the suspect area within cache is "fenced off"
– Any write pending data is written to disk
– The board is non-disruptively replaced by a Customer Engineer

Data written to cache is rescanned against the data residing within the DA or Channel Director buffer to ensure correctness.

Notes: Proactive Cache Maintenance: EMC makes every effort to provide the most highly reliable hardware in the industry, and provides unique methods for detecting and preventing failures in a proactive way. Symmetrix actively looks for "soft" errors before they become permanent. By tracking these soft or temporary errors during normal operation, Symmetrix can recognize patterns of error activity and predict a hard failure before it occurs. This proactive error tracking can usually prevent an error in cache by generating a call-home for service, or by fencing off a failing memory segment before any hard data errors occur. This represents the EMC Engineering philosophy of not accepting any level of probability for errors, and it sets Symmetrix apart from all others in providing continuous data integrity and high availability. Cache Scrubbing: all locations in cache are periodically read and rewritten to detect any increase in single-bit errors; constant cache scrubbing reduces the potential for multi-bit or hard errors. This cache scrubbing technique maintains a record of errors for each memory segment and, if they exceed a preset level, the call-home is executed. Even in cases where errors are occurring and are easily corrected, the service processor generates a call-home for immediate attention. If the predetermined error threshold is reached for single-bit errors, the segment's contents are moved to another area in cache. Should a multi-bit error be detected during the scrubbing process, it is considered a permanent error, the segment is immediately fenced (removed from service), Customer Service is immediately notified, and a customer engineer is dispatched with the appropriate parts for speedy repair. On-line Maintenance: every Symmetrix is configured with a minimum of two memory boards to allow for on-line hot replacement of a failing memory board.

Advanced Availability: Cache Protection (Slide 54)

Cache slots are protected using advanced error detection and correction logic along with data "interleaving":
– ECC is employed by every director to allow single-bit and non-consecutive double-bit error detection and correction
– Data is sent between Directors and Cache as a 72-bit "memory word" (64 bits of data + 8 bits of parity)
– The Port ASIC on the memory board creates an 80-bit package (64 bits of data + 16 bits of parity) from the incoming memory word
– These 80 bits are interleaved among 20 different SDRAM chips (a memory bank)
– LRC (longitudinal redundancy check) is also employed to XOR accumulated 4 KB sectors within a region
– These factors enable single-nibble (4 consecutive bits) error correction and double-nibble error detection, resulting in the capability to withstand the failure of an entire SDRAM chip
(Diagram: each memory board has A and B Port ASICs and a maintenance processor; data paths are 64+8 bits, command paths 10+1 bits, address paths 32+1 bits.)

Notes: Symmetrix assures the highest level of data integrity by checking data validity through the various levels of the data transfer in and out of cache. Byte Level Parity Checking: all data and control paths have parity generating and checking circuitry that verify hardware integrity at the byte or word level; all data and command words passed on the system bus, and within each director and global memory board, include parity bits used to check integrity at each stage of the data transfer. Error Checking and Correction (ECC): the directors detect and correct single-bit and non-consecutive double-bit errors and report uncorrectable errors of 3 bits or more. Nibble-level Interleaving: data and storage locations are spread across multiple components to improve error detection and recovery. For example, each memory word and its associated ECC (80 bits) are stored in 20 separate DRAM chips. The failure of a single memory chip, the most common failure, is detected as a correctable error: 4 consecutive bits (a nibble) can be rebuilt using the remaining healthy chips and the associated ECC. Sector Level Longitudinal Redundancy Code (LRC): the LRC calculation further assures data integrity. The check bytes are the XOR (exclusive OR) value of the accumulated bytes in a 4 KB sector. LRC checking can detect both data errors and wrong-block-access problems.
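As a simple illustration of the LRC idea described in the notes (check bytes formed by XOR-ing the bytes of a 4 KB sector), here is a generic sketch. The single-byte checksum and sector layout are simplifications invented for illustration and are not the actual Symmetrix implementation.

    # Generic longitudinal-redundancy-check illustration (simplified to one check byte).
    import os
    from functools import reduce

    SECTOR_SIZE = 4096

    def lrc(sector: bytes) -> int:
        """XOR of all bytes in the sector."""
        return reduce(lambda acc, byte: acc ^ byte, sector, 0)

    sector = bytearray(os.urandom(SECTOR_SIZE))
    check = lrc(sector)

    sector[100] ^= 0x01          # flip one bit to simulate corruption
    print(lrc(sector) == check)  # False: the LRC no longer matches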

Symmetrix DMX Series (Slide 55)

DMX800: Modular packaging; 60 / 120 drives; 8.75 / 17.5 TB raw capacity; 7.6 / 15.3 TB usable (parity 7+1); 8 / 16 x 2 Gb Fibre Channel drive channels; 2 cache directors; 32 GB maximum cache; connectivity: 8/16 x 2 Gb FC, 4 x 2 Gb FICON, 4 x GigE SRDF, 4 x GigE iSCSI.
DMX1000: Integrated packaging; 144 drives; 21 TB raw capacity; 18.4 TB usable (parity 7+1); 16 x 2 Gb Fibre Channel drive channels; 2-4 cache directors; 64 GB maximum cache; connectivity: 48 x 2 Gb FC, 48 x ESCON, 24 x 2 Gb FICON, 8 x GigE SRDF, 24 x GigE iSCSI.
DMX2000: Integrated packaging; 288 drives; 42 TB raw capacity; 36.8 TB usable (parity 7+1); 32 x 2 Gb Fibre Channel drive channels; 4-8 cache directors; 128 GB maximum cache; connectivity: 64 x 2 Gb FC, 64 x ESCON, 32 x 2 Gb FICON, 8 x GigE SRDF, 32 x GigE iSCSI.
DMX3000: Integrated packaging; 576 drives; 84 TB raw capacity; 73.5 TB usable (parity 7+1); 64 x 2 Gb Fibre Channel drive channels; 4-8 cache directors; 128 GB maximum cache; connectivity: 96 x 2 Gb FC, 96 x ESCON, 48 x 2 Gb FICON, 8 x GigE SRDF, 48 x GigE iSCSI.

Connectivity combinations may be limited by board slots.

Notes: These are the features of the DMX series.

Symmetrix Foundations Summary (Slide 56)

– The Symmetrix basic architecture is comprised of three functional areas (Front End, Back End, and Shared Global Memory), connected by four internal system busses
– Hosts connect to Symmetrix using SCSI, Fibre Channel, or ESCON, and today, FICON and iSCSI
– All I/O must be serviced through cache (read hit, read miss, fast write, delayed write)
– Symmetrix physical disk drives are divided into Hyper Volumes, which comprise Symmetrix Logical Volumes that are presented to the host environment as if they were entire physical drives
– Mirroring, Parity RAID, SRDF, and Dynamic Sparing are all media protection options available on Symmetrix
– Redundancy in the hardware design and intelligence through Enginuity allow Symmetrix to provide the highest levels of data availability

Notes: These are some of the main features of the Symmetrix. Please take a moment to read them.

Course Summary (Slide 57)

Key points covered in this course:
– Draw and describe the basic architecture of a Symmetrix Integrated Cached Disk Array (ICDA)
– Write a detailed list of host connectivity options for Symmetrix
– Explain how Symmetrix functionally handles I/O requests from the host environment
– Illustrate the relationship between Symmetrix physical disk drives and Symmetrix Logical Volumes
– Describe the media protection options available on the Symmetrix
– Referencing a diagram, explain some of the high availability features of Symmetrix and how this potentially impacts data availability
– Describe the front-end, back-end, cache, and physical drive configurations of various Symmetrix models

Notes: These are the main points covered in this training. Please take a moment to read them.

Enginuity 5670+ Update (Slide 58)

Notes: Updates have been made to this course based on Enginuity code 5670+. This section includes the new features supported by this code update.

Update Objectives (Slide 59)

Upon completing this update, you will be able to list:
– Enginuity 5670+ Management Features
– Enginuity 5670+ Business Continuity Features
– Enginuity 5670+ Performance Features

Notes: Upon completion of this update, you will be able to list the features supported by Enginuity 5670+.

Management Features (Slide 60)

5670+ Management Features:
– End User Configuration: user control of volumes and type
– Symm Purge: secure deletion method
– Logical Volumes: increased number of "hypers"
– Volume Expansion: striped and concatenated meta expansion

Notes: User Configuration: Enginuity 5670+ will allow users to un-map CKD volumes, delete CKD volumes, or convert CKD volumes to FBA. These user configuration controls will simplify the task of reusing a Symmetrix by not requiring an EMC resource to modify the "bin" file. Symm Purge: provides customers a secure method of deleting (electronically shredding) sensitive data; this will simplify the reuse of drive assets. Logical Volumes: 5670+ will support an increased number of hypers per spindle; the number of "hypers" will be dependent upon the protection scheme. Volume Expansion: previous microcode versions only supported the expansion of concatenated meta volumes; 5670+ will support the expansion of both striped and concatenated meta volumes.

Business Continuity Features (Slide 61)

5670+ Business Continuity Features:
– SRDF/A: multi-session support
– Protected Restore: enhanced restore features
– SNAP Persistence: preserves the snap session

Notes: SRDF/A: currently (5670), SRDF/A can only support a single session. With 5670+ code, support will be available for multi-session SRDF/A data replication. Multi-session uses host control (mainframe only), and cycle switching is synchronized between the single-session SRDF/A Symmetrix pairs. Protected Restore: 5670+ provides Protected Restore features. While the restore is in progress, read miss data will come from the BCV volume, writes to the Standard volume will not propagate to the BCV volume, and the original Standard-to-BCV volume relationship will be maintained. SNAP Persistence: 5670+ allows a protected snap restore and preserves the virtual snap session when the restore terminates.

Performance Features (Slide 62)

5670+ Performance Features:
– RAID 5
  • Either (3+1) or (7+1) configurations in the same system
  • Both Parity RAID and RAID 5 can exist in the same system on the same disks
  • SRDF/BCV protection
– Optimizer for RAID 5
  • Support for swapping individual members
  • No support for Parity RAID

Notes: RAID 5 will be available in two configurations: either 3 data drives and 1 parity drive, or 7 data drives and 1 parity drive. A single RAID 5 protection scheme can be configured within a frame with any combination of SRDF, BCV, and mirroring protection. A single Parity RAID protection scheme can be configured within a frame with any combination of SRDF, BCV, and mirroring protection. A single Parity RAID protection scheme and a RAID 5 scheme of the same geometry can also be configured within a frame with any combination of SRDF, BCV, and mirroring protection (for example, Parity RAID 3+1 is supported with RAID 5 3+1). Current limitations include: RAID 5 3+1 and 7+1 configurations cannot exist in the same frame; the same restriction must be observed for Parity RAID (no mixing of Parity RAID 3+1 and 7+1 in the same frame), as well as for any combination of 3+1 and 7+1 Parity RAID and RAID 5 configurations; and Parity RAID (3+1) is not supported with RAID 5 (7+1). Optimizer: 5670+ will provide Optimizer support for RAID 5. The microcode will support swapping individual members of a RAID 5 group instead of swapping the entire RAID 5 group. Optimizer does not support Parity RAID.

com. . the following key point were discussed: – Enginuity 5670+ Management Features – Enginuity 5670+ Business Continuity Features – Enginuity 5670+ Performance Features For additional information: http://powerlink. 63 New Enginuity 5670+ features were covered in this Symmetrix Foundations module update. Copyright © 2004 EMC Corporation. 63 Summary In this update. For additional information refer to http://powerlink. All Rights Reserved. All rights reserved.emc.emc.com EMC Global Education © 2004 EMC Corporation.Symmetrix Foundations.

Closing (Slide 64)

Thank you for your attention. This ends our Symmetrix Foundations training.