
Sun StorageTek 6000 Modular Product Line Installation and Configuration Training

Student Guide

Sun Microsystems, Inc. UBRM03-195 500 Eldorado Blvd. Broomfield, CO 80021 U.S.A. Sun Confidential: Internal Only Revision A

August 13, 2007

Copyright 2007 Sun Microsystems, Inc., 901 San Antonio Road, Palo Alto, California 94303, U.S.A. All rights reserved.

This product or document is protected by copyright and distributed under licenses restricting its use, copying, distribution, and decompilation. No part of this product or document may be reproduced in any form by any means without prior written authorization of Sun and its licensors, if any. Third-party software, including font technology, is copyrighted and licensed from Sun suppliers. Sun, Sun Microsystems, the Sun logo, Sun StorEdge, Sun StorageTek, Solaris, and Java are trademarks or registered trademarks of Sun Microsystems, Inc. in the U.S. and other countries. All SPARC trademarks are used under license and are trademarks or registered trademarks of SPARC International, Inc. in the United States and other countries. Products bearing SPARC trademarks are based upon an architecture developed by Sun Microsystems, Inc. UNIX is a registered trademark in the U.S. and other countries, exclusively licensed through X/Open Company, Ltd. Export Laws. Products, Services, and technical data delivered by Sun may be subject to U.S. export controls or the trade laws of other countries. You will comply with all such laws and obtain all licenses to export, re-export, or import as may be required after delivery to You. You will not export or re-export to entities on the most current U.S. export exclusions lists or to any country subject to U.S. embargo or terrorist controls as specified in the U.S. export laws. You will not use or provide Products, Services, or technical data for nuclear, missile, or chemical biological weaponry end uses. DOCUMENTATION IS PROVIDED AS IS AND ALL EXPRESS OR IMPLIED CONDITIONS, REPRESENTATIONS, AND WARRANTIES, INCLUDING ANY IMPLIED WARRANTY OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE OR NON-INFRINGEMENT, ARE DISCLAIMED, EXCEPT TO THE EXTENT THAT SUCH DISCLAIMERS ARE HELD TO BE LEGALLY INVALID. 
THIS MANUAL IS DESIGNED TO SUPPORT AN INSTRUCTOR-LED TRAINING (ILT) COURSE AND IS INTENDED TO BE USED FOR REFERENCE PURPOSES IN CONJUNCTION WITH THE ILT COURSE. THE MANUAL IS NOT A STANDALONE TRAINING TOOL. USE OF THE MANUAL FOR SELF-STUDY WITHOUT CLASS ATTENDANCE IS NOT RECOMMENDED. Export Control Classification Number (ECCN): March 24, 2006

Sun Confidential: Internal Only


Please Recycle





Table of Contents
About This Course ................................................... 1-vii
  Course Goals ....................................................... 1-vii
Sun StorageTek 6540 Product Overview .................................. 1-1
  Objectives .......................................................... 1-1
  Sun StorageTek 6540 Product Overview ................................ 1-2
  Compare the Sun StorEdge 6140 and the Sun StorageTek 6540 Arrays .... 1-4
  Hardware Overview ................................................... 1-6
    Hardware Components of the Sun StorageTek 6540 .................... 1-6
    Controller Tray ................................................... 1-7
    6540 Controller Enclosure FRU details ............................. 1-8
    6540 Controller Canister highlights .............................. 1-17
  Knowledge Check - 6540 Controller .................................. 1-31
Sun StorageTek 6140 Product Overview ................................. 2-37
  Objectives ......................................................... 2-37
  Sun StorageTek 6140 Product Overview ............................... 2-38
  Compare the Sun StorEdge 6130 and the Sun StorageTek 6140 Arrays ... 2-39
    Hardware Components of the Sun StorageTek 6140 ................... 2-41
    Storage Management Software ...................................... 2-42
  Hardware Overview .................................................. 2-43
    Controller Tray .................................................. 2-43
    Back View of Controller Module ................................... 2-50
    Controller Architecture .......................................... 2-63
  Knowledge Check - 6140 ............................................. 2-64
Sun StorageTek CSMII Expansion Tray Overview ......................... 3-71
  Objectives ......................................................... 3-71
  Sun StorageTek CSMII Expansion Tray Overview ....................... 3-72
  Hardware Overview .................................................. 3-73
    CSMII Expansion Tray ............................................. 3-74
    CSMII Expansion Tray - Front View ................................ 3-74



Copyright 2006 Sun Microsystems, Inc. All Rights Reserved. Sun Services, Revision A

    CSMII Expansion Tray - Back View ................................. 3-82
  Architecture Overview .............................................. 3-87
    Switched Bunch of Disks (SBOD) Architecture ...................... 3-88
  Knowledge Check .................................................... 3-89
Sun StorageTek 6540 - 6140 Hardware Installation ..................... 4-93
  Objectives ......................................................... 4-93
  Overview of the Installation Process ............................... 4-94
  Cabling Procedures ................................................. 4-95
    Cable Types ...................................................... 4-95
  Recommended Cabling Practices ...................................... 4-97
    Cabling for Redundancy - Top-Down Bottom-Up ...................... 4-98
    Cabling for Performance .......................................... 4-99
  Hot-adding an expansion enclosure ................................. 4-101
    Cabling Summary ................................................. 4-105
  Recommended Cabling Practices ..................................... 4-106
    Drive Cabling for Redundancy - Top-Down or Bottom-Up ............ 4-106
  Considerations for Drive Channel Speed ............................ 4-113
  Proper Power Procedures ........................................... 4-114
    Turning On the Power ............................................ 4-114
    Turning Off the Power ........................................... 4-116
  Set the Controller IP Addresses ................................... 4-117
    Configuring Dynamic IP Addressing ............................... 4-117
    Configuring Static IP Addressing ................................ 4-118
  Serial Port Service Interface ..................................... 4-119
    Serial Port Recovery Interface Procedure ........................ 4-119
  Use the Hardware Compatibility Matrix to Verify SAN Components .... 4-122
  Attach the Host Interface Cables .................................. 4-122
    Host Cabling for Redundancy ..................................... 4-122
    Connecting Data Hosts Directly .................................. 4-123
    Connecting Data Hosts through an external FC switch ............. 4-124
Sun StorageTek 6x40 - Common Array Manager .......................... 5-131
  Objectives ........................................................ 5-131
  What is the Sun StorageTek Common Array Manager? .................. 5-132
  The CAM Interface ................................................. 5-134
    SMI-S Overview .................................................. 5-134
  Software Components ............................................... 5-136
    Firmware and NVSRAM files ....................................... 5-138
  CAM Management Method ............................................. 5-139
    Out-of-Band Management Method ................................... 5-139
  Sun StorageTek Common Array Manager Installation .................. 5-141
  Sun StorageTek Common Array Manager Navigation .................... 5-142
    Common Array Manager Banner ..................................... 5-142
Sun Confidential: Internal Only Sun StorageTek 6540 Array Installation and Maintenance


    Common Array Manager's Navigation Tree .......................... 5-143
    Common Array Manager's Content Area ............................. 5-144
    Additional Navigation Aids ...................................... 5-145
  Initial Common Array Manager Configuration ........................ 5-147
    Configure IP Addressing ......................................... 5-147
    Accessing the Management Software ............................... 5-150
    Naming an Array ................................................. 5-150
    Configuring the Array Password .................................. 5-151
    Setting the System Time ......................................... 5-151
    Adding Additional Users ......................................... 5-151
    Setting Tray IDs ................................................ 5-152
Array Configuration Using Sun StorageTek Common Array Manager ...... 6-153
  Objectives ........................................................ 6-153
  Configuration Components of the Common Array Manager ............. 6-154
  Creating a Volume With Common Array Manager ...................... 6-156
    Storage Profiles ................................................ 6-156
    Storage Pools ................................................... 6-160
    Volumes ......................................................... 6-160
    Virtual Disks ................................................... 6-165
    Administration functions and parameters ......................... 6-166
  Knowledge Check ................................................... 6-171
Storage Domains ..................................................... 7-173
  Objectives ........................................................ 7-173
  What are Storage Domains? ......................................... 7-174
  Storage Domains Benefits (pre-sales) .............................. 7-175
  Storage Domains Benefits (technical) .............................. 7-176
  Storage Domains Terminology ....................................... 7-177
  Steps for creating a Storage Domain ............................... 7-181
  How Storage Domains Work .......................................... 7-183
    What the Host Sees .............................................. 7-184
    What the Storage System Sees .................................... 7-185
  Summary of Creating Storage Domains ............................... 7-188
  Knowledge Check ................................................... 7-189
Monitoring Performance and Dynamic Features ......................... 8-193
  Objectives ........................................................ 8-193
  Storage system parameters that can improve Performance ............ 8-198
Integrated Data Services - Snapshot ................................. 9-213
  Objectives ........................................................ 9-213
  Data Services Overview ............................................ 9-214
  Snapshot .......................................................... 9-215
    Snapshot Terminology ............................................ 9-216
    Snapshot - Benefits (pre-sales) ................................. 9-219

    Snapshot - Benefits (technical) ................................. 9-220
    How does Snapshot work? ......................................... 9-221
  Examples of how Snapshot works .................................... 9-222
    Standard Read - No Snapshot ..................................... 9-222
    Snapshot is Created ............................................. 9-223
    Read From Snapshot (1st Case) ................................... 9-224
    Write to Base ................................................... 9-225
    Re-Write to Base ................................................ 9-226
    Read From Snapshot (2nd Case) ................................... 9-227
    Write to Snapshot ............................................... 9-227
    Write to Base (1st Case) ........................................ 9-229
    Write to Base (2nd Case) ........................................ 9-230
    Disabling and Recreating ........................................ 9-230
    Snapshot Considerations ......................................... 9-231
    Snapshot OS support ............................................. 9-232
  Managing Snapshots ................................................ 9-234
    Creating a Snapshot ............................................. 9-234
    Creating a Snapshot ............................................. 9-237
Integrated Data Services - Volume Copy ............................. 10-239
  Objectives ....................................................... 10-239
  Volume Copy Overview ............................................. 10-240
    Volume Copy Terminology ........................................ 10-240
    Volume Copy - Benefits (pre-sales) ............................. 10-242
    Volume Copy - Benefits (technical) ............................. 10-243
    How Volume Copy Works .......................................... 10-244
    Factors Affecting Volume Copy .................................. 10-245
    Volume Copy States ............................................. 10-245
    Volume Copy Read/Write Restrictions ............................ 10-247
  Creating a Volume Copy ........................................... 10-248
    Functions that can be performed on a Copy Pair ................. 10-248
    Recopying a Volume ............................................. 10-249
    Stopping a Volume Copy ......................................... 10-249
    Removing Copy Pairs ............................................ 10-250
    Changing Copy Priority ......................................... 10-251
    Volume Permissions ............................................. 10-251
    Volume Copy Compatibility with Other Data Services ............. 10-252
    Volume Copy OS Support ......................................... 10-255
  Configuring a Volume Copy ........................................ 10-257
    Configuring a Volume Copy with Common Array Manager ............ 10-257
Integrated Data Services - Remote Replication ...................... 11-263
  Objectives ....................................................... 11-263
  Remote Replication Overview ...................................... 11-264
    Remote Replication Terminology ................................. 11-265
    Summary of Remote Replication Modes ............................ 11-272

    Benefits of Remote Replication ................................. 11-273
  Technical Features of Remote Replication ......................... 11-274
    Suspend and Resume ............................................. 11-275
    Role Reversal .................................................. 11-275
  How Remote Replication Works ..................................... 11-276
    What Happens When an Error Occurs? ............................. 11-277
  Configuring Remote Replication ................................... 11-278
    Configuring the Hardware for Data Replication .................. 11-278
    Configuring Data Replication with CAM .......................... 11-280
    Examples of Remote Replication Configurations .................. 11-290
  Knowledge Check - Snapshot, Volume Copy, Remote Replication ...... 11-293
Problem Determination .............................................. 12-297
  Objectives ....................................................... 12-297
  Problem Determination ............................................ 12-298
  What tools are available? ........................................ 12-299
  Sun StorageTek Common Array Manager CLI (SSCS) ................... 12-319






Preface

About This Course


Course Goals
Upon completion of this course, you should be able to:

- Describe the features, functions, and terminology of the Sun StorageTek 6540 array
- Describe the customer benefits and requirements to migrate to or use the Sun StorageTek 6540 array
- Describe the architecture of the Sun StorageTek 6540 array
- Install the Sun StorageTek 6540 array hardware
- Install the management software (Common Array Manager)
- Configure the 6540 array using CAM
- Attach production hosts to the Sun StorageTek 6540 array
- Configure and use Snapshots on the Sun StorageTek 6540 array
- Configure and use Volume Copies on the Sun StorageTek 6540 array
- Configure and use Replication Sets on the Sun StorageTek 6540 array
- Diagnose problems using available tools



Copyright 2006 Sun Microsystems, Inc. All Rights Reserved. Enterprise Services, Revision A


Module 1

Sun StorageTek 6540 Product Overview


Objectives
Upon completion of this module, you should be able to:

- Describe the Sun StorageTek 6540 key features
- Identify the hardware components of the 6540 controller enclosure
- Describe the functionality of the 6540
- Interpret LEDs for proper parts replacement



Copyright 2007 Sun Microsystems, Inc. All Rights Reserved. Sun Services, Revision A


Sun StorageTek 6540 Product Overview


Today's open systems environments create unique challenges for storage systems. Round-the-clock processing requires the highest availability and online administration. Varying applications result in a range of performance requirements, from transaction-heavy (I/O per second) to throughput-intensive (Mbyte per second). Unpredictable capacity growth demands efficient scalability. Finally, the sheer volume of storage in today's enterprise requires centralized administration and simple storage management.

Sun StorageTek provides storage systems that are designed specifically to address the needs of the open systems environment: the Sun StorageTek 6140 and 6540. Both storage systems are high-performance, enterprise-class, full 4-gigabit per second (Gbps) Fibre Channel/SATA II solutions that combine outstanding performance with the highest reliability, availability, flexibility, and manageability. This course focuses on the Sun StorageTek 6540 storage system.

The Sun StorageTek 6540 storage system provides the performance demanded by high performance computing (HPC) environments that store and utilize vast amounts of data for high-bandwidth programs and complex application processing. The Sun StorageTek 6540 has the powerful 6998 controller architecture and 4 Gb/s interfaces, which are ideally suited for bandwidth-oriented applications such as sophisticated data-intensive research, visualization, 3-D computer modeling, rich media, seismic processing, data mining, and large-scale simulation.

The 6998 controller used in the 6540 storage system is the most sophisticated and highest-performing controller to date from Sun StorageTek for the 6000 mid-range disk product line. Its sixth-generation XBB architecture boasts our fastest cache memory, 4 Gbps Fibre Channel host and drive interfaces, high-speed buses, and multiple processing elements to optimize resource utilization.

The 6998 controller's high-speed XOR engine generates RAID parity with no performance penalty, enabling this compute-intensive task to be handled efficiently and effortlessly. A separate 2.4 GHz Xeon processor focuses on data movement control, allowing setup and control instructions to be processed and dispatched independently of data.
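The parity math that the 6998's XOR engine performs in hardware can be illustrated in a few lines of software. This is an instructional sketch only (the `xor_strips` helper is hypothetical, not a Sun API): RAID parity is the bytewise XOR of the data strips in a stripe, and any single lost strip can be rebuilt by XOR-ing the surviving strips with the parity.

```python
from functools import reduce

def xor_strips(strips):
    # Bytewise XOR across equal-length strips: the core of RAID parity.
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*strips))

data = [b"AAAA", b"BBBB", b"CCCC"]   # three data strips in one stripe
parity = xor_strips(data)            # the parity strip for the stripe

# Simulate losing strip 1, then rebuild it from the survivors plus parity.
rebuilt = xor_strips([data[0], data[2], parity])
assert rebuilt == data[1]
```

On the 6540, a dedicated ASIC performs this calculation so the Xeon processor is free to handle data movement control.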


Two 6998 controllers are integrated into a controller enclosure and combined with one or more drive enclosures to create a fully featured 6540 storage system. These dual-controller systems are fully redundant and support up to eight 4, 2, or 1 Gbps Fibre Channel host connections and 224 Fibre Channel or SATA disk drives. The 6540 storage system has eight 4 Gbps FC-AL host or FC-SW SAN connections and eight 4 Gbps FC-AL drive expansion connections. Extensive compatibility and the ability to auto-negotiate 4, 2, or 1 Gbps FC host connectivity speeds result in minimal or no impact on an existing storage network, protecting customers' infrastructure investment.

The Sun StorageTek 6140 and 6540 storage systems run similar firmware. This unique implementation creates a lower total cost of ownership and higher return on investment by enabling seamless data and model migration, common features and functionality, centralized management, a consistent interface, and reduced training and support costs. Additionally, the 6140 storage system can be upgraded to a high-performance 6540 HPC storage system. In each instance, all configuration and user data remains intact on the drives.

The Sun StorageTek 6540 storage system is modular, rack mountable, and scalable from a single controller tray (CRM = Controller RAID Module) plus one expansion tray (CEM = Controller Expansion Module) to a maximum of 13 additional expansion trays.

Summary of the features offered by the Sun StorageTek 6540 storage system:

- The Sun StorageTek 6540 has two 6998 controllers.
- Each 6998 controller has four 4 Gb/s Fibre Channel host I/O ports (eight per dual-controller storage system) supporting direct host or SAN attachments.
- The eight 4 Gb/s Fibre Channel host ports support 4, 2, and 1 Gb/s connectivity.
- Each 6998 controller has a powerful 2.4 GHz Intel Xeon processor. Each controller also has a dedicated next-generation ASIC to perform the RAID parity calculation, thereby offloading this function from the processor.
- Supports up to 224 drives, FC or SATA.
- HotScale technology enables online capacity expansion up to 67 TB with FC drives (224 x 300 GB), or 89 TB with SATA drives (224 x 400 GB).



- 4 GB, 8 GB, and 16 GB cache options are available (2 GB, 4 GB, or 8 GB per controller, respectively).
- Four drive loops per controller that can support either 2 Gb/s or 4 Gb/s drive enclosures.
- All components are hot-swappable.
- RoHS compliant.
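The maximum-capacity figures in the feature list above follow directly from the drive count and drive sizes. A quick back-of-the-envelope check in Python (using decimal units, as drive vendors quote capacity):

```python
# Raw-capacity arithmetic from the 6540 feature list: 224 drive slots,
# 300 GB FC or 400 GB SATA drives (decimal GB; 1 TB = 1000 GB here).
MAX_DRIVES = 224

fc_tb = MAX_DRIVES * 300 / 1000     # 300 GB FC drives
sata_tb = MAX_DRIVES * 400 / 1000   # 400 GB SATA drives

print(fc_tb)    # 67.2 -> quoted in the text as "up to 67 TB"
print(sata_tb)  # 89.6 -> quoted in the text as "up to 89 TB"
```

The text's 67 TB and 89 TB figures are simply these products rounded down.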

Compare the Sun StorEdge 6140 and the Sun StorageTek 6540 Arrays
The Sun StorageTek 6140 storage system is targeted at the SMB (Small to Medium Business) market, while the Sun StorageTek 6540 storage system is targeted at enterprise environments.

Table 1-1   Comparison Chart: 6140 and 6540 Differences

                             6140 Lite               6140                    6540
Controller CPU/Processor     667 MHz XScale w/ XOR   667 MHz XScale w/ XOR   2.4 GHz Xeon, dedicated XOR
Host Ports                   1/2/4 Gb/s, 2 per ctlr  1/2/4 Gb/s, 4 per ctlr  1/2/4 Gb/s, 4 per ctlr
Expansion Ports              2 per ctlr              2 per ctlr              4 per ctlr
Controller Cache             1 GB per ctlr           2 GB per ctlr           2/4/8 GB per ctlr
Ethernet Ports               2 per ctlr              2 per ctlr              2 per ctlr
Controller                   3992                    3994                    6998
Expansion Tray IOM           FC                      FC                      FC
# of Disk Drives per Tray    16                      16                      16


Table 1-1   Comparison Chart: 6140 and 6540 Differences (continued)

                                    6140 Lite              6140                   6540
Disk Types                          2/4 Gb/s: FC, SATA II  2/4 Gb/s: FC, SATA II  2/4 Gb/s: FC, SATA II
# Expansion Trays                   3                      6                      14
Maximum Disks                       64                     112                    224

Configuration
Maximum Hosts                       512 (256 redundant)    512 (256 redundant)    512
Maximum Volumes                     1024                   1024                   1024

Performance Targets
Burst I/O rate - Cache Read         120,100                120,100                575,000
Sustained I/O rate - Disk Read      30,235                 44,000                 85,000
Sustained I/O rate - Disk Write     5,789                  9,000                  22,000
Sustained throughput - Disk Read    750 MBps               990 MBps               1,600 MBps
Sustained throughput - Disk Write   698 MBps               850 MBps               1,300 MBps
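The 6540's sustained-throughput targets sit comfortably below its aggregate host-port bandwidth. A rough sanity check (assuming the common rule of thumb of roughly 100 MB/s of usable payload per 1 Gb/s of Fibre Channel line rate, which accounts for 8b/10b encoding; this rule of thumb is an assumption, not a spec value):

```python
# Aggregate host-side bandwidth vs. the 6540's sustained disk-read target.
# Assumes ~100 MB/s usable per 1 Gb/s of FC line rate (rule of thumb).
HOST_PORTS = 8            # host ports per dual-controller 6540
PORT_GBPS = 4             # each port negotiates up to 4 Gb/s
usable_mbps_per_port = PORT_GBPS * 100

aggregate = HOST_PORTS * usable_mbps_per_port
print(aggregate)          # 3200 MB/s of aggregate host-port bandwidth
print(aggregate > 1600)   # True: the 1,600 MBps target is not port-limited
```

In other words, the sustained-throughput targets are bounded by the disk and controller back end, not by host connectivity.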



Hardware Overview
Hardware Components of the Sun StorageTek 6540
The Sun StorageTek 6540 storage system consists of two main tray types: the 6540 controller tray and a minimum of one expansion tray. The expansion tray is also known as the Common Storage Module 2 (CSM2 or CSMII).

Figure 1-1

Sun StorageTek 6540 storage system

This section describes the main components of the Sun StorageTek 6540 controller tray (CRM). The CSMII is covered in another module.

Figure 1-2

Components of the 6540 controller enclosure



Controller Tray
Figure 1-3 shows a block diagram for the Sun StorageTek 6540. The blocks represent the placement of the controllers, the power-fan canisters, and the removable mid-plane canister.

Figure 1-3

Block diagram for the Sun StorageTek 6540.

The Sun StorageTek 6540 controller enclosure has five main canisters:

- Two Power-Fan canisters
- One Interconnect canister (removable mid-plane)
- Two controller canisters

There are also two battery FRUs (Field Replaceable Units) within the Interconnect-Battery canister, bringing the total number of FRUs for the 6540 controller enclosure to seven. The enclosure does not have a fixed mid-plane; instead, it is designed so that all the canisters interconnect with one another.

Caution - Follow Service Advisor procedures when removing a FRU, because there are interdependencies between the FRUs.



6540 Controller Enclosure FRU details


Power-Fan canister (x2):
- Power supply
- Fans
- Battery chargers (x2)
- Thermal sensor

Interconnect-Battery canister:
- Mid-plane
- Battery packs (x2)
- Audible alarm
- Front bezel LEDs

Controller canisters in the back (x2):
- Base controller board
- Manufacturing-configurable host interface card

[Figure: Controller A (top) and Controller B (bottom). The Interconnect canister connects to both controllers and contains two battery packs, the audible alarm, and LEDs. The left Power/Fan canister connects to controller B; the right Power/Fan canister connects to controller A. Each Power/Fan canister contains a power supply, fans, two battery chargers, and a thermal sensor. The front of the enclosure is at the bottom.]

Figure 1-4   6540 FRUs

The two Power-Fan canisters and the Interconnect-Battery canister are located behind the front cover. The Power-Fan canister on the left is right-side up, and the Power-Fan canister on the right is upside down. The two controllers are located in the rear of the enclosure. All canisters are hot-swappable as long as interdependencies between the FRUs are taken into consideration.



Power-Fan Canister

Figure 1-5   6540 Power-Fan Canister and Battery Canister LEDs

The main purpose of the Power-Fan canister is as the name suggests - to provide power and cooling to the storage system. Each Power-Fan canister contains:

- a power supply - provides power to the controllers by converting incoming AC voltage to the appropriate DC voltages. In addition to the AC-to-DC power supply, a DC-to-DC power supply will be supported when it becomes available (there is a DC connector on the controller canister, but it is not currently functional).
- two system cooling fans - the fans are powered by the power supplies in both Power-Fan canisters. If either power supply fails, the fans continue to operate.
- two battery chargers - the battery chargers perform battery tests when the 6540 enclosure is first powered on, and every 25 hours thereafter. If needed, the batteries are recharged at that time. The batteries are located in the Interconnect-Battery canister.
- a thermal sensor - prevents the power supplies from overheating. Under normal operating conditions, with an ambient air temperature of 5°C to 40°C (41°F to 104°F), the cooling fans maintain a proper operating temperature inside the enclosure.

Factors that can cause power supplies to overheat:

- unusually high room temperature



- fan failure
- defective circuitry in the power supply
- blocked air vents
- failure of other devices installed in the cabinet

If the internal temperature rises above 70°C (158°F), one or both power supplies will automatically shut down, and the storage management software will report the exception. Critical event notifications will also be issued if event monitoring is enabled and event notification is configured. In the figure above, note the black connector when looking at the back of the canister - this connector connects to one of the controllers. The Power-Fan canister on the right has the connector at the top and therefore connects to controller A. The Power-Fan canister on the left is upside down and has the connector on the bottom and therefore connects to controller B.
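The shutdown rule above reduces to a one-line threshold check. The following is an illustrative sketch only; the constants come from the text, but the names are not controller firmware:

```python
# Thresholds taken from the text above; names are illustrative only.
NORMAL_AMBIENT_RANGE_C = (5, 40)  # normal ambient operating range
SHUTDOWN_THRESHOLD_C = 70         # power supplies shut down above this internal temp

def power_supply_state(internal_temp_c):
    """Return the expected power-supply state for a given internal temperature."""
    return "shutdown" if internal_temp_c > SHUTDOWN_THRESHOLD_C else "running"
```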

Power-Fan canister LEDs

Figure 1-6   6540 Power-Fan canister LEDs

Information about the condition of the power supplies, fans, and battery chargers is conveyed by indicator lights on the front of each Power-Fan canister. You must remove the front cover of the 6540 enclosure to see the indicator lights.


Typically there is a one-to-one relationship between the Needs Attention/Service Action Required (SAR) and OK to Remove/Service Action Allowed (SAA) LEDs, but there are exceptions. An example is if both Power-Fan canisters have a fault, one due to a power fault and the other due to a fan fault. The Power-Fan canister with the power fault should be removed and replaced first. If the Power-Fan canister with the fan fault were removed, the system would be left with no power. In this case, the Power-Fan canister with the fan fault would have the SAR LED ON, but the SAA LED OFF.
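The exception described above can be modeled as a small rule: a faulted canister gets the SAA light only if removing it would still leave the system with a power source. This sketch assumes a two-canister model with made-up names, not actual firmware logic:

```python
def saa_states(left_fault, right_fault):
    """Return the set of Power-Fan canisters showing Service Action Allowed.

    Fault values are 'power', 'fan', or None. A canister with only a fan
    fault still provides power, so it may not be removable if it is the
    sole remaining power source.
    """
    canisters = (("left", left_fault), ("right", right_fault))
    # Canisters still supplying power (a fan fault does not cut power).
    providing_power = [side for side, fault in canisters if fault != "power"]
    allowed = set()
    for side, fault in canisters:
        if fault is None:
            continue  # no Service Action Required, so no SAA either
        if [s for s in providing_power if s != side]:
            allowed.add(side)  # removal leaves at least one power source
    return allowed
```

With one power fault and one fan fault, only the power-faulted canister is flagged safe to remove, matching the text.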

Interconnect-Battery Canister
The purpose of the Interconnect-Battery canister is to serve as a mid-plane for pass-through of controller status lines, power distribution lines, and drive channels. Additionally, it contains the batteries that hold data in cache in the event of a loss of power, summary indicators for the entire storage system, and the audible alarm. The Interconnect-Battery canister contains:

- a removable mid-plane - provides cross-coupled signal connections between the controller canisters. The control output from each controller canister is connected to the control input of the alternate controller canister.
- two battery packs - provide backup power to the controller cache memory. Each battery pack is sealed and contains two clusters of lithium ion batteries. Each battery pack is connected to both controllers: one cluster to controller A, the other to controller B. The battery pack voltage ranges from 9 to 13 V. When two battery packs are present, the 6540 storage system data cache is backed up for three days.



- front bezel LEDs - the LEDs that are displayed through the front cover are located on the Interconnect-Battery canister.

Figure 1-7   Interconnect-Battery canister LEDs

Information about the condition of the interconnect-battery canister is conveyed by indicator lights on the front of the Interconnect-Battery canister. The Power, Service Action Required, and Locate lights are general indicators for the entire command enclosure, not specifically for the Interconnect-Battery canister. The Service Action Required light turns on if a fault condition is detected in any component in the controller enclosure. The Power, Service Action Required, and Locate lights shine through the front cover.


The Service Action Allowed LED is for the Interconnect-Battery canister itself.

Caution: Never remove the Interconnect-Battery canister unless directed to do so by Customer Support. Removing the Interconnect-Battery canister after either a controller or a Power-Fan canister has already been removed could result in loss of data access. In the unlikely event an Interconnect-Battery canister must be replaced (for example, due to a bent pin, or as a last resort to resolve a problem), the storage management software will provide details on the procedure.

Data access is limited to only one controller (controller A) when the Interconnect-Battery canister is removed. Removal of the Interconnect-Battery canister automatically suspends controller B, and all I/O is performed by controller A. It is recommended that you prepare for the removal of the Interconnect-Battery canister instead of just pulling it out. Preparation involves:

- Placing controller B offline so that host failover software can detect the offline controller and re-route all I/O to controller A.
- Turning ON the Service Action Allowed LED using the storage management software.
- Removing and replacing the Interconnect-Battery canister.
- Turning OFF the Service Action Allowed LED using the storage management software.
- Placing controller B online and re-balancing the volumes.



Interconnect-Battery Canister - Battery Pack

Figure 1-8   6540 Interconnect-Battery canister showing a single battery pack

The above figure shows the Interconnect-Battery canister with the access cover removed. For clarity, the picture shows only one battery pack; normally there are two. The battery pack is mounted to a sheet metal bracket. You can see the flange at the end of the bracket closest to the access opening; grasp the flange to remove the battery pack. When replacing the battery pack, push it firmly into the Interconnect-Battery canister to ensure it completely engages with the connectors at the back of the canister.



Power Distribution and Battery System

[Figure: top view of the 6540. Controller A (top) contains cache memory and a voltage regulator. The left Power/Fan canister (power supply and two chargers) and the right Power/Fan canister (power supply and two chargers) flank the Interconnect canister, which holds the battery packs; each battery pack has clusters wired to controllers A and B. The front of the enclosure is at the bottom.]

Figure 1-9   6540 as seen from the top, showing the power distribution

The 6540 enclosure does not have a midplane (sometimes also referred to as a backplane) such as the one found in the pre-sixth generation Sun StorageTek 6140 and 6130 products. This diagram shows how the canisters are interconnected, and also gives an overview of how the power distribution and battery system work. Power from the left Power-Fan canister is distributed via controller B, and power from the right Power-Fan canister is distributed via controller A. Both controllers must be in place in order to provide redundant power to each controller.



Figure 1-10   Which component should be replaced first - the right Power/Fan canister or the left Power/Fan canister?

Service Advisor procedures must be followed carefully if both the power supply connected to controller B (the left Power-Fan canister) and controller A fail. Removing controller A before replacing the failed Power-Fan canister will cause controller B to lose power, resulting in loss of data access. This occurs because power distribution from each Power-Fan canister is through the controller physically connected to that Power-Fan canister.

Figure 1-11   Which component should be replaced first - controller A or the left Power/Fan canister?

The battery system spans all the canisters:

- The two battery packs are in the Interconnect-Battery canister. Half of each battery pack is dedicated to each controller.
- Two charging circuits in each Power/Fan canister - one charger for one battery cluster in each of the battery packs.



- A voltage regulator in each controller ensures that the lithium ion batteries are not overcharged.

6540 Controller Canister highlights


Processors
- 6091-0901 controller model number (also referred to as 6998)
- Next-generation hardware XOR engine
- 2.4 GHz Xeon processor

Data cache
- Optional 2, 4, or 8 GB of cache per controller
- Faster memory

Host channels
- Four independent 4 Gbps FC channels per controller (8 independent ports per dual-controller system)
- Auto-negotiate to 1, 2, and 4 Gbps speeds

Drive channels
- Two 4 Gbps FC loop switches per controller
- Total of 8 drive loops per system
- Run at 2 Gbps and 4 Gbps
- Auto-detect drive-side speed
- Can support both 2 Gbps and 4 Gbps drive enclosures behind the same controller on different drive channels

Dual 10/100 Ethernet for out-of-band management
- One for customer out-of-band management
- One for service diagnostics, serviceability
- Totally isolated to prevent exposure to the customer's LAN

RS-232 interface for diagnostics



6540 Controller Canister

Figure 1-12   6540 Controller Canister LEDs in Front

The 6540 command enclosure has two 6998 controllers. Both controllers are identical and install from the rear of the command enclosure. The top controller is controller A and the bottom controller is controller B. All connections to the hosts and the drives are made through the controller canisters. The host-side connections support fibre-optic connections; the drive-side connections support either copper or fibre-optic connections. The 6998 controller inside the controller canister comprises two circuit boards:

- The base controller board - contains the 2.4 GHz processor, the DIMM slots for cache memory, and four Emulex SOC 422 loop switch chips for the four drive channels. Each loop switch combines two loops together for one drive channel, and also provides an external connection for each loop.
- The host interface card - plugs into the base controller board and provides the four 4 Gbps host-side connections. In the future, there will be several variations of the host interface card, allowing the customer to order a host interface card with the number, type, and speed of host connections that meets their needs.



6540 Controller Canister Connections

Figure 1-13 6540 Controller canister connections

6540 Controller Canister LED indicators

Figure 1-14   6540 Controller canister LED indicators

Each 6540 controller canister provides the following connections and LED indicators, which are described in detail in the following sections:

- Four 4 Gbps host interface ports
- Four 4 Gbps disk expansion ports
- Dual 10/100 Base-T Ethernet ports with EEPROM
- Serial port connector
- Seven-segment display
- Controller service indicators
- AC or DC power (DC power connector present, but DC power not currently implemented)



Figure 1-15 Controller Tray LEDs and Indicators

6540 4 Gbps Host Interface Ports

Figure 1-16   6540 Host and Drive interface ports

The 6540 storage system has eight 4 Gbps FC-AL host or FC-SW SAN connections.

The host-side connections perform link speed negotiation on each host channel port (also referred to as auto-negotiation) for 4, 2, or 1 Gbps FC host connectivity speeds, resulting in minimal or no impact on the existing storage network. Link speed negotiation for a given host channel is limited to the link speeds supported by the Small Form-factor Pluggable (SFP) transceiver on that channel. The controllers enter auto-negotiation at these points in time:

- Controller boot-up sequence



- Detection of a link-up event after a previous link-down event.

If the auto-negotiation process fails, the controllers consider the link to be down until negotiation is again attempted at one of these points in time. For a 4 Gb controller, the supported link speeds are 1, 2, and 4 Gbps.

Auto-Negotiation
The Fibre Channel host interface performs link speed negotiation on each host channel Fibre Channel port. This process, referred to as auto-negotiation, means that the port interacts with the host or switch to determine the fastest compatible speed between the controller and the other device. The fastest compatible speed becomes the operating speed of the link. If the device on the other end of the link is a fixed-speed device or is not capable of negotiating, the controller automatically detects the operating speed of the other device and sets its link speed accordingly.
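The negotiation rule ("fastest compatible speed wins, bounded by the SFP") can be sketched as follows. Names and structure are illustrative only, not the controller's actual firmware logic:

```python
SFP_SUPPORTED = {1, 2, 4}  # Gbps speeds allowed by the installed SFP (assumed)

def negotiate_link_speed(controller_speeds, peer_speeds):
    """Return the fastest speed common to controller, SFP, and peer.

    For a fixed-speed peer, pass its single detected speed, e.g. {2}.
    Returns None when no common speed exists (link stays down).
    """
    usable = set(controller_speeds) & SFP_SUPPORTED & set(peer_speeds)
    return max(usable) if usable else None
```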

6540 4 Gbps Disk Expansion Ports

Figure 1-17   Disk expansion ports

Each 6540 controller canister has two dual-ported drive channels. Each channel consists of two drive loops, and each drive loop has an external connection, so each 6540 controller has four drive-side port connections.


The connections for Channel 1 and Channel 2 are on controller A. The connections for Channel 3 and Channel 4 are on controller B. When attaching drive enclosures, it is important that each drive enclosure is cabled to a drive channel on each controller to ensure redundancy. The drive channels can operate at 2 Gbps or 4 Gbps. The drive channels perform link speed detection (which is different from link speed negotiation): the controller automatically matches the link speed of the attached drive enclosures. Drive channels can operate at different link speeds, but both ports of a single channel must run at the same speed. Two LEDs indicate the speed of each disk drive port channel, as shown in the figure below.
[Figure: drive-port speed LEDs labeled 4 and 2, between ports P1 and P2; channel labels Ch 2 (Ctrl B) and Ch 2 (Ctrl A) shown]

Figure 1-18   Disk expansion ports

The behavior of the LEDs is as follows:

- When both LEDs are OFF, there is no FC connection or the link is down.
- When the left (4) LED is OFF and the right (2) LED is ON, the port is operating at 2 Gbps.
- When both LEDs are ON, the port is operating at 4 Gbps.
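Decoding the two speed LEDs is a simple truth table; a sketch follows (function and parameter names are illustrative only):

```python
def drive_port_speed(led4_on, led2_on):
    """Map the '4' and '2' speed LEDs to a drive-port link state."""
    if led4_on and led2_on:
        return "4 Gbps"
    if not led4_on and led2_on:
        return "2 Gbps"
    if not led4_on and not led2_on:
        return "no connection / link down"
    return "invalid LED state"  # '4' on alone is not a documented state
```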

Fibre Channel Port By-Pass Indicator


The Fibre Channel port by-pass indicator has two settings: on and off. Figure 1-19 shows the indicator.

Figure 1-19   Port By-Pass Indicator

When OFF, either no SFP is installed or the port is enabled. When ON (amber), no valid device is detected and the channel or port is internally bypassed.




6540 Drive Channels and Loop Switches

Figure 1-20   6540 Drive Channels and Loop Switches

Each drive port is capable of delivering 400 MB/s of bandwidth; however, both ports of a loop switch (one channel) run at the same link speed: either both ports run at 4 Gbps or both run at 2 Gbps. Each controller has two dual-ported 4 Gbps FC chips. Each FC chip is attached to a loop switch chip on both controller A and controller B; therefore, both controllers are connected to all four drive channels. Both the FC chips and the loop switch chips support concurrent full link speed on both ports of each chip. Each loop switch chip represents a drive channel. Each drive channel can support a maximum of 126 devices (drives, IOMs, and controllers). The 6540 subsystem supports a maximum of 224 disks. Each drive channel has two independent drive loops, represented by the two ports per drive channel.
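The device limits above imply that raw channel addressing (4 channels of 126 devices each) exceeds the system's disk limit, so the 224-disk cap is the binding constraint. A quick check, with constants taken from the text:

```python
MAX_DEVICES_PER_CHANNEL = 126  # drives, IOMs, and controllers per drive channel
DRIVE_CHANNELS = 4
SYSTEM_DISK_LIMIT = 224        # 6540 subsystem maximum

channel_capacity = MAX_DEVICES_PER_CHANNEL * DRIVE_CHANNELS  # addressable devices
supported_disks = min(channel_capacity, SYSTEM_DISK_LIMIT)   # the firmware cap wins
```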



Figure 1-21   Each drive channel has two ports

Host and drive side cabling will be covered after a hardware overview of the CSMII drive enclosure.

Dual 10/100 Base-T Ethernet Ports With EEPROM


Figure 1-22 illustrates the Ethernet status LEDs.

Figure 1-22   Ethernet Status Lights

The 6540 has two RJ-45 Ethernet ports per controller canister. Ethernet port 1 is for the management host; port 2 is reserved for future use. Do not use port 2 for management of the trays.

Default IP addresses (default subnet mask is 255.255.255.0):

- Controller A, interface 0: 192.168.128.101
- Controller A, interface 1: 192.168.129.101
- Controller B, interface 0: 192.168.128.102
- Controller B, interface 1: 192.168.129.102
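Since out-of-band management relies on these defaults, the management station must sit on the matching subnet. A small sanity check using Python's standard ipaddress module (the station addresses used later are made-up examples; the controller addresses come from the text):

```python
import ipaddress

DEFAULT_MASK = "255.255.255.0"
CONTROLLER_PORTS = {  # default addresses from the text above
    ("A", 0): "192.168.128.101",
    ("A", 1): "192.168.129.101",
    ("B", 0): "192.168.128.102",
    ("B", 1): "192.168.129.102",
}

def same_subnet(station_ip, controller, interface):
    """True if a management station can reach the given controller port directly."""
    port_ip = CONTROLLER_PORTS[(controller, interface)]
    network = ipaddress.ip_network(f"{port_ip}/{DEFAULT_MASK}", strict=False)
    return ipaddress.ip_address(station_ip) in network
```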


Light                    Color       Normal Status
Ethernet Link Speed      Green LED   Off = 10 Base-T; On = 100 Base-T
Ethernet Link Activity   Green LED   Off = No link established; On = Link established; Blinking = Activity

Serial Port Connector


To access the serial port, use an RS-232 DB9 null modem serial cable. This port is used to access the Service Serial Interface, which is used for viewing or setting a static IP address for the controllers. This interface can also clear the storage system password. Figure 1-23 shows the RS-232 DB9 null modem cable for serial port access.

Figure 1-23 RS232 null modem cable

Seven-Segment Display

Figure 1-24 Seven-Segment Display and Heartbeat


The numeric display consists of two seven-segment LEDs that provide information about enclosure identification and diagnostics. When the controller enclosure is operating normally, the numeric display shows the tray identification (tray ID) of the controller enclosure. The controller enclosure tray ID is set by the controller firmware to a value from 80 to 99 and automatically adjusts during power-on to avoid conflicts with existing drive tray IDs. There is no physical ID selector on the controller enclosure; you can, however, set the controller enclosure tray ID through the storage management software. The controller tray ID should not be changed to an ID below 80, as it will not work properly.

Each digit of the numeric display has a decimal point and is rotated 180 degrees relative to the other digit. With this orientation, the display looks the same regardless of controller orientation. The numeric display, as shown in Figure 1-24, shows the tray identification (tray ID) or a diagnostic error code. The heartbeat is the small decimal point in the lower right-hand corner of the first digit; when the heartbeat is blinking, the number displayed is the tray ID. The diagnostic light is the small decimal point in the upper left-hand corner of the second digit; when the diagnostic light is blinking, the number displayed is a diagnostic code. The tray ID is an attribute of the 6540 command enclosure; both controllers display the same tray ID. It is possible, however, that one controller will display the tray ID while the other controller displays a diagnostic code.

Power-on behavior - The Diagnostic Light, the Heartbeat Light, and all seven segments of both digits turn on when a power-on or reset occurs. The tray ID display may be used to temporarily display diagnostic codes after each power cycle or reset. The Diagnostic Light remains on until the tray ID is displayed. After diagnostics are completed, the current tray ID is displayed.

Diagnostic behavior - Diagnostic codes in the form of Lx or Hx, where x is a hexadecimal digit, are used to indicate state information. In general, these codes are displayed only when the canister is in a non-operational state. The canister may be non-operational due to a configuration problem (such as mismatched IOM and/or controller types), or due to hardware faults. If the controller/IOM is non-operational due to system configuration, the


controller/IOM Fault Light will be off. If the controller/IOM is non-operational due to a hardware fault, the controller/IOM Fault Light will be on.

Value  Description
--     Boot FW is booting up
FF     Boot diagnostic executing
88     This controller/IOM is being held in reset by the other controller/IOM
AA     ESM-A application is booting
bb     ESM-B application is booting
L0     Mismatched IOM types
L2     Persistent memory errors
L3     Persistent hardware errors
L9     Over temperature
H0     SOC (Fibre Channel interface) failure
H1     SFP speed mismatch (2 Gb SFP installed when operating at 4 Gb)
H2     Invalid/incomplete configuration
H3     Maximum reboot attempts exceeded
H4     Cannot communicate with the other IOM
H5     Mid-plane harness failure
H6     Firmware failure
H7     Current enclosure Fibre Channel rate different than rate switch
H8     SFP(s) present in currently unsupported slot (2A or 2B)
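The display rules (blinking heartbeat means tray ID, blinking diagnostic light means code) lend themselves to a small lookup helper. The code subset and all names below are illustrative only, transcribed from this section's text:

```python
DIAG_CODES = {  # a few entries transcribed from the table above
    "L0": "Mismatched IOM types",
    "L2": "Persistent memory errors",
    "L3": "Persistent hardware errors",
    "L9": "Over temperature",
    "H4": "Cannot communicate with the other IOM",
    "H6": "Firmware failure",
}

def read_display(value, heartbeat_blinking, diagnostic_blinking):
    """Classify what the two-digit display is showing."""
    if heartbeat_blinking:
        tray_id = int(value)
        assert 80 <= tray_id <= 99, "controller tray IDs are soft-set to 80-99"
        return ("tray-id", tray_id)
    if diagnostic_blinking:
        return ("diagnostic", DIAG_CODES.get(value, "unknown code"))
    return ("unknown", value)
```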



Controller Service Indicators


Figure 1-25 shows controller service indicators.

Figure 1-25 Controller Service Indicators

Service Action Allowed LED


Figure 1-26 shows the Service Action Allowed LED.

Figure 1-26 Service Action Allowed LED


- Normal status: OFF.
- Problem status: ON (blue). It is OK to remove the canister; a service action can be performed on the designated component with no adverse consequences.

Each drive, power-fan, and controller/IOM canister has a Service Action Allowed light. The Service Action Allowed light lets you know when you can remove a component safely.

Caution: Potential loss of data access. Never remove a drive, power-fan, or controller/IOM canister unless the Service Action Allowed light is turned on.

If a drive, power-fan, or controller/IOM canister fails and must be replaced, the Service Action Required (Fault) light on that canister turns on to indicate that service action is required. The Service Action Allowed light will also turn on if it is safe to remove the canister. If there are data availability dependencies or other conditions that dictate that a canister should not be removed, the Service Action Allowed light will remain off. The Service Action Allowed light automatically turns on or off as conditions change. In most cases, the Service Action Allowed light turns on when the Service Action Required (Fault) light is turned on for a canister.



Note: If the Service Action Required (Fault) light is turned on but the Service Action Allowed light is turned off for a particular canister, you might have to service another canister first. Check your storage management software to determine the action you should take.

Service Action Required (Fault)


Figure 1-27 shows the Service Action Required LED.

Figure 1-27 Service Action Required LED


- Normal status: OFF.
- Problem status: ON (amber). A condition exists that requires service; the canister has failed. Use the storage management software to diagnose the problem.

Cache Active Indicator


Figure 1-28 shows the Cache Active Indicator LED.

Figure 1-28 Cache Active Indicator

- OFF: no data is in cache; all cache data has been written to disk.
- ON (green): data is in cache.



Summary of 6540 Controller Canister LED Definitions


Light                                             Color                                Status
Host Channel Speed L1/L2                          Green LEDs                           L1 Off, L2 Off = no connection / link down; L1 On, L2 Off = 1 Gbps; L1 Off, L2 On = 2 Gbps; L1 On, L2 On = 4 Gbps
Host Port Bypass (one light per port)             Amber LED                            Off = normal; On = bypass
Drive Channel Speed L1/L2                         Green LEDs                           L1 Off, L2 Off = no connection / link down; L1 Off, L2 On = 2 Gbps; L1 On, L2 On = 4 Gbps
Drive Port Bypass (one light per port)            Amber LED                            Off = normal; On = bypass
Service Action Allowed (SAA)                      Blue LED                             Off = normal; On = controller safe to remove
Service Action Required (SAR)                     Amber LED                            Off = normal; On = controller needs attention
Cache Active LED                                  Green LED                            On = data in cache; Off = no data in cache
Ethernet Link Speed                               Green LED                            Off = 10Base-T; On = 100Base-T
Ethernet Link Activity                            Green LED                            Off = no link established; On = link established; Blinking = activity
Numeric Display (Tray ID and Diagnostic Display)  Green/yellow seven-segment display   Diagnostic LED Off = tray ID; Diagnostic LED On = diagnostic code

Figure 1-29   Summary of 6540 Controller Canister LED definitions



Knowledge Check - 6540 Controller


Complete the following

1) Identify the module shown above. _______________________________

Using the letters, identify the parts of the component shown above:

2a) A _______________________________________
2b) B _______________________________________
2c) C _______________________________________
2d) D _______________________________________
2e) E _______________________________________
2f) F _______________________________________

3a) If both LEDs in the middle are on, what speed is the port operating at?

3b) What is the function of the LEDs to the far left and far right?
[Figure: disk expansion ports - P1, Ch 2 (Ctrl B), P2, Ch 2 (Ctrl A)]


4) Explain how tray IDs are set. How can you change them?

5a) Why are there two ethernet ports?

5b) Which port should be used for normal operation? __________________

6) Why should you never remove the Interconnect Battery canister without Customer Support approval?

7) Power from the left power-fan canister is distributed via controller _______. Power from the right is distributed via controller ________.

8) If one drive port on one channel is set at 4 Gbps link speed and the other is set at 2 Gbps, what will be the speed for both ports?

9) What is meant when a port is said to be able to autonegotiate?



10) Where can you find the heart beat of the controller?

11) What is the default controller tray ID that is set by the controller firmware?



Knowledge Check - answers

1) Identify the module shown above.
The module is the 6998 controller canister module.

Using the letters, identify the parts of the component shown above:

2a) A - Host-side ports
2b) B - Ethernet ports
2c) C - Controller service indicators (Service Action Allowed, Service Action Required, Cache Active)
2d) D - Seven-segment display for tray ID and fault identification
2e) E - Drive-side ports
2f) F - Serial port

3a) If both LEDs in the middle are on, what speed is the port operating at?
4 Gbps


3b) What is the function of the LEDs to the far left and far right?

Port by-pass indicator. OFF: no SFP is installed or the port is enabled. ON (amber): no valid device is detected and the channel or port is internally bypassed.


4) Explain how tray IDs are set. How can you change them?
Tray IDs are soft-set by the controller to avoid tray ID conflicts. You can change them through CAM or through the sscs command line.

5a) Why are there two Ethernet ports?
Ethernet port 1 is for normal operation. Ethernet port 2 is available for support to use.

5b) Which port should be used for normal operation?
Ethernet port 1.

6) Why should you never remove the Interconnect-Battery canister without Customer Support approval?
It serves as a mid-plane for pass-through of controller status lines, power distribution lines, and drive channels.

7) Power from the left power-fan canister is distributed via controller ___B____. Power from the right is distributed via controller ___A_____.

8) If one drive port on one channel is set at 4 Gbps link speed and the other is set at 2 Gbps, what will be the speed for both ports?
Both ports on a drive channel must run at the same speed.

9) What is meant when a port is said to be able to auto-negotiate?
The port will interact with the host HBA or switch to determine the fastest compatible speed between the controller and the other device.

10) Where can you find the heartbeat of the controller?
In the lower right-hand corner of the left digit of the seven-segment display.


Knowledge Check - 6540 Controller

11) What is the default controller tray ID that is set by the controller firmware? 85


Module 2

Sun StorageTek 6140 Product Overview


Objectives
Upon completion of this module, you should be able to:

- Provide an overview of the Sun StorageTek 6140 and its associated management software
- Describe the hardware for the controller and expansion trays
- Describe the overall architecture of the controller and SBODs


Sun StorageTek 6140 Product Overview


Today's open systems environments create unique challenges for storage systems. Round-the-clock processing requires the highest availability and online administration. Varying applications result in a range of performance requirements: from transaction-heavy (I/O per second) to throughput-intensive (Mbyte per second). Unpredictable capacity growth demands efficient scalability. Finally, the sheer volume of storage in today's enterprise requires centralized administration and simple storage management. There is a storage system designed specifically to address the needs of the open systems environment: the Sun StorageTek 6140. The 6140 storage system is a high-performance, enterprise-class, full 4-gigabit per second (Gbit/sec) Fibre Channel/SATA II solution that combines outstanding performance with the highest reliability, availability, flexibility and manageability. The Sun StorageTek 6140 storage system is modular, rack mountable and scalable from a single controller tray (CRM) to a maximum of six additional expansion trays (CEM). The Sun StorageTek 6140 storage system offers these new features:

New technology:

- End-to-end 4-Gbit/sec FC
- Mix FC and SATA in the same tray

More connectivity:

- 8 host ports (4 per controller)

More density:

- 16 drives per tray
- 112 drives in 7 trays (1 controller tray and 6 expansion trays)

More performance:

- 4 Gbyte cache (2 Gbyte per controller)
- 120K IOPS, 1,500 MBps

More serviceability:

- Battery is a separate FRU
- RS232 interface on IOM
- Removable drive cage




- ANSI standard LEDs
- RoHS compliant

Compare the Sun StorEdge 6130 and the Sun StorageTek 6140 Arrays
The Sun StorageTek 6140 storage system comprises two tray types, the controller tray (CRM) and the expansion tray (CEM). Both tray types utilize the Common Storage Module 2 (CSM) and are differentiated by the module in the controller bay. A CRM uses a RAID controller, whereas a CEM uses an IO Module (IOM).

Table 2-1 Comparison Chart: 6130 and 6140 Differences

                            StorEdge 6130          Sun StorageTek 6140 Lite  Sun StorageTek 6140
Controller CPU Processor    600 MHz Xscale w/ XOR  667 MHz Xscale w/ XOR     667 MHz Xscale w/ XOR
Host Interface Speed        1/2 Gb                 1/2/4 Gb                  1/2/4 Gb
Host Ports                  2 per ctlr             2 per ctlr                4 per ctlr
Expansion Ports             1 per ctlr             2 per ctlr                2 per ctlr
Controller Cache            1 GB per ctlr          1 GB per ctlr             2 GB per ctlr
Ethernet Ports              1 per ctlr             2 per ctlr                2 per ctlr
Controller                  2882                   3992                      3994
Expansion Tray IOM          FC or SATA             FC                        FC
# of Disk Drives per Tray   14                     16                        16
Disk Types                  1/2 Gb: FC, SATA       2/4 Gb: FC, SATA II       2/4 Gb: FC, SATA II
# Expansion Trays           7                      3                         6
Maximum Disks               112                    64                        112


Table 2-1 Comparison Chart: 6130 and 6140 Differences (continued)

                                     StorEdge 6130     Sun StorageTek 6140 Lite  Sun StorageTek 6140
Disk Types                           1/2 Gb: FC, SATA  2/4 Gb: FC, SATA II       2/4 Gb: FC, SATA II

Configuration
Maximum Hosts                        256               512 (256 redundant)       512 (256 redundant)
Maximum Volumes                      1024              1024                      1024

Performance Targets
Burst I/O rate - Cache Read          77,000            120,100                   120,100
Sustained I/O rate - Disk Read       25,000            30,235                    44,000
Sustained I/O rate - Disk Write      5,000             5,789                     9,000
Burst throughput - Cache Read        800 MBps          1,270 MBps                1,500 MBps
Sustained throughput - Disk Read     400 MBps          750 MBps                  990 MBps
Sustained throughput - Disk Writes   300 MBps          698 MBps                  850 MBps

Self-Check

Can you explain the difference between the application of IOPS and MBps?



Hardware Components of the Sun StorageTek 6140


The Sun StorageTek 6140 storage system comprises two main trays, the controller tray and the expansion tray. The expansion tray is also known as the Common Storage Module 2 (CSM2). Each tray has 16 FC or SATA II drives, a switched architecture, 2 power-fan canisters, and a removable drive cage. The controller tray can be a standalone storage system, or you can add up to 6 expansion trays. The difference between the controller tray and the CSM2 is that the controller tray holds controller canisters while the CSM2 holds Input/Output Modules (IOMs). Two dual-active controllers are located in the controller tray. Each controller canister has:

- 2 or 4 host/SAN connections - 1, 2, or 4 Gbit/sec speed (autonegotiated)
  - 6140 Lite has 2 host ports
  - 6140 has 4 host ports
- 2 expansion ports - 2 or 4 Gbit/sec (set by the link rate switch on the front of the tray)
- 2 Ethernet connections - 1 for storage management, 1 reserved for future functionality/service
- Serial port
- 7-segment display for tray ID and diagnostics

Two Input/Output Modules (IOMs) are located in the expansion tray (CSM2). Each IOM has:

- 2 expansion ports - 2 or 4 Gbit/sec (set by the link rate switch on the front of the tray)
- Serial port
- 7-segment display for tray ID and diagnostics
- 2 expansion ports reserved for future functionality


Figure 2-1 shows a block diagram for the Sun StorageTek 6140. The blocks represent placement of drives, drive cage, power-fan canisters and either the controller canister or IOM.
2 power-fan canister

2 controllers or IOMs 16 drives

Removable drive cage

Figure 2-1

Diagram of Sun StorageTek 6140

Storage Management Software


There are four management software products available to administrators to manage the Sun StorageTek 6140 array. These include:

- Common Array Manager (CAM) - This web-based user interface is the main interface for managing and supporting Sun's storage products.
- Sun Storage Configuration Service CLI - The sscs command-line interface provides a secure method of managing arrays, alarms, and logs on Sun's storage products.
- SANtricity - This graphical user interface is centralized storage management software from LSI Logic and is used to manage StorageTek disk-based storage systems.
- SANtricity CLI - The SANtricity CLI (SMcli) is a script-based command-line interface that provides another method of managing the storage products.



Note The preferred management tool for Sun Storage products is CAM. However, you should be aware of the other tools and be familiar with how to use them.

Hardware Overview
This section describes the main components of the Sun StorageTek 6140 controller tray (CRM) and the expansion tray (CEM).

Controller Tray
The controller tray contains up to 16 drives, two controller canisters, two power-fan canisters and a removable drive cage. The front of the controller tray has a molded frame that contains global lights and the Link Rate switch. Figure 2-2 shows the front view of the Sun StorageTek 6140.

Figure 2-2

Sun StorageTek 6140 Front View

Drive Field Replaceable Unit (FRU)


Each disk drive is housed in a removable, portable canister. The FC drives are low-profile hot-swappable, dual-ported fibre channel disk drives.


The SATA II drives utilize the same canister as the FC drives, but since SATA II drives are single-ported, an additional SATA II Interface Card (SIC) is added to the rear of the canister. The card provides a fibre channel connector, simulates a dual-port configuration, provides 3 Gbit/sec-to-4 Gbit/sec rate buffering, and performs SATA II-to-FC protocol translation. The SATA II drive negotiates between 2 Gbit and 4 Gbit based on the setting of the Link Rate Switch on the tray. Figure 2-3 shows the SATA II interface card.

Figure 2-3

SATA II Interface Card

The drives are removed by gently lifting on the lower portion of the handle, which releases the handle.

Caution - Only add or remove drives while the storage system is powered on. The drive should not be physically removed from its slot until it has stopped spinning. Release the handle of the drive to pop the drive out of its connector. The drive can be removed from its slot after it has spun down. This usually takes 30 to 60 seconds.

Disk drive options include:

10K RPM FC drives
  - 2 Gbit interface
  - 146 Gbyte and 300 Gbyte

15K RPM FC drives
  - 2 Gbit or 4 Gbit interface
  - 73 Gbyte and 146 Gbyte

SATA II disk options



  - 3 Gbit/sec drive with a 4 Gbit/sec FC interface
  - Native command queuing
  - 500 Gbyte

Self-Check

List the differences between Fibre Channel and SATA II drives. What is the difference between SATA and SATA II drives?

DACstore
DACstore is a region on each drive that is reserved for the use of the storage system controller. One can think of it as storage system configuration metadata stored on each drive. The DACstore area is created when the drives are introduced to the controller. Each drive contains a DACstore area that is used to store information about the drive's state or status, volume state or status, and other information needed by the controller. The DACstore region extends 512 Mbytes from the last sector of the disk.
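As a rough illustration of the reservation described above (assuming 512 Mbytes of DACstore per drive, as stated; the helper name is hypothetical), the usable capacity of a tray can be estimated:

```python
# Illustrative sketch only: estimate usable tray capacity after the
# per-drive DACstore reservation (512 Mbytes per drive, as stated above).
MIB = 1024 * 1024
DACSTORE_BYTES = 512 * MIB

def usable_bytes(drive_bytes, n_drives):
    """Raw capacity minus the DACstore reservation on each drive."""
    return n_drives * (drive_bytes - DACSTORE_BYTES)

# Example: a tray of sixteen 146-Gbyte FC drives.
tray_capacity = usable_bytes(146 * 10**9, 16)
```

The point of the sketch is simply that the reservation scales with the drive count, not the drive size, so larger drives lose a proportionally smaller share of capacity to DACstore.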

DACstore Benefits
DACstore exists on every drive and can be read by all Engenio controllers. Therefore, when an entire virtual disk is moved from one storage system to a new storage system, the data remains intact and can be read by the controllers in the new storage system.

Investment protection through data-intact upgrades and migrations:

- All Engenio controllers recognize configuration and data from other Engenio storage systems.

Storage system level relocation:

- DACstore enables relocation of drives within the same storage subsystem in order to maximize availability. When expansion trays are added, DACstore gives the ability to relocate drives such that drives are striped vertically across all expansion trays, and no one tray has more than one drive of a virtual disk.



Sundry Drive
A Sundry drive stores the summary information about all of the drives in the storage system. The controllers assign a minimum of three sundry drives. Sundry drives are designated to hold certain global information regarding the state of the storage system. This information resides within the DACstore area on the sundry drives. Every attempt is made to assign sundry drives to drives on different channels. The controllers assign at least one sundry drive from each virtual disk. This will guarantee that if a virtual disk is removed for migration, at least one sundry drive will remain in the storage system and a sundry drive will migrate to the destination storage system to transport Storage Partition Mappings (SPM) with the migrating volumes. There is no limit on the maximum number of sundry drives that may exist. Other information stored in the DACstore region of the sundry drive:

- Failed Drive Store - saves information about the current failed drives
- Global Hot Spare Store - saves the current Global Hot Spare drive state/status
- Storage system password
- Media scan rate
- Cache configuration of the storage system
- Storage system user label
- MEL logs - used to log controller events
- Volume/LUN mappings, host type mappings, and other information used by the storage partitions feature
- NVSRAM store - saves a copy of the current controller NVSRAM values for use in the case of a controller swap
- Premium feature keys and permissions allowed for this controller



Field Replaceable Drive Cage


The field replaceable drive cage holds sixteen 3.5 inch drives. The midplane is located on the back of the cage as shown in Figure 2-4.

Figure 2-4

Drive Cage

Disk Drive LEDs


The disk drive LEDs are illustrated in Figure 2-5.

Figure 2-5

Disk Drive LEDs


The LEDs are:

1. Drive Service Action Allowed: If this LED is on, it is OK to remove the drive. Normal status is OFF. Problem status is ON (BLUE).
2. Drive Fault: Normal status is OFF. BLINKING indicates the drive, volume, or storage system locate function. Problem status is ON (AMBER).
3. Drive Active (GREEN): ON (not blinking) means no data is being processed. BLINKING means data is being processed. Problem status is OFF.

Global Controller Tray and Expansion Tray LEDs


Each component in a tray has LEDs that indicate functionality for that individual component. Global LEDs indicate functionality for the entire tray. Global LEDs are shown in Figure 2-6.

Figure 2-6

Global LEDs on 6140 Controller or Expansion Tray

The global LEDs are as follows:

1. Global Locate: Normal status is OFF. Only on when the user is performing the locate function (WHITE).
2. Global Summary Fault: Normal status is OFF. Problem status is ON (AMBER).
3. Global Power: Normal status is ON (GREEN). Problem status is OFF.



Alarm Mute Button


The 6140 controller and expansion tray have a configurable audible alarm. The Alarm Mute button is located on the front bezel to the right of the Global LEDs, as shown in Figure 2-7. The audible alarm is turned off by a setting in NVSRAM for the Sun StorageTek 6140.

Figure 2-7

Alarm Mute Button

Link Rate Switch


The Link Rate Switch shown in Figure 2-8 enables you to select the data transfer rate between the IOMs, drives and controllers. Setting the Link Rate switch determines the speed of the back end drive channel.

Figure 2-8

Link Rate Switch

Important things to remember:

- The correct position is 4 Gbit/sec to the left; 2 Gbit/sec to the right.
- All trays of a 6140 must be set to operate at the same data transfer rate.
- The drives in the controller and expansion tray must support the selected link rate speed.
- If a 6140 is set to operate at 4 Gbit/sec:
  - A 2 Gbit drive will be bypassed.
  - If a tray is set to operate at 2 Gbit/sec, the tray will be bypassed.



- If a 6140 is set to operate at 2 Gbit/sec, all 4 Gbit/sec drives will operate at 2 Gbit/sec.
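The link-rate rules above can be sketched as a small check. This is an illustration only; the function name and return strings are assumptions, not controller firmware behavior:

```python
# Illustrative sketch of the Link Rate switch rules described above.
def drive_state(switch_gbit, drive_gbit):
    """Apply the tray Link Rate switch rules to a single drive."""
    if switch_gbit == 4 and drive_gbit == 2:
        return "bypassed"                     # 2-Gbit drive in a 4-Gbit tray
    # Otherwise the drive follows the switch setting; a 4-Gbit drive
    # in a 2-Gbit tray simply runs at 2 Gbit/sec.
    return "runs at %d Gbit/sec" % switch_gbit

print(drive_state(4, 2))  # bypassed
print(drive_state(2, 4))  # runs at 2 Gbit/sec
```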

Caution Change the Link Rate switch only when there is no power to the CRM or CEM tray.

Back View of Controller Module


The Sun StorageTek 6140 controller tray is 4-Gbit capable and comes in two versions. At the back of the controller tray, the controller canisters and the power-fan canisters on the top are inverted 180 degrees from the canisters on the bottom, as shown in Figure 2-9. In a fully-configured system, the replaceable components are fully redundant. If one component fails, its counterpart can maintain operations until you replace the failed component.

Figure 2-9

Sun StorageTek 6140 Lite 3992 Controller

The Sun StorageTek 6140 lite with two host ports (3992 controller) is 4 Gbit capable, front and back. The 3992 auto-negotiates 1 Gbit, 2 Gbit and 4 Gbit speeds on the host side. With dual controllers, there are a total of 4 host ports per storage system. The 3992 controller has 1 Gbyte of cache memory. The two expansion ports support 2 Gbit or 4 Gbit speeds selected by the Link Rate Switch.


Figure 2-10 shows a 3994 controller.

Figure 2-10 Sun StorageTek 6140 3994 Controller

The Sun StorageTek 6140 with 4 host ports is 4 Gbit capable, front and back. The 3994 controller auto-negotiates 1 Gbit, 2 Gbit and 4 Gbit speeds. With dual controllers, there are a total of 8 host ports per storage system. Each 3994 controller has 2 Gbyte of cache memory. The two expansion ports support 2 Gbit or 4 Gbit speeds, selected by the Link Rate Switch.

Caution - Never insert a 3992 controller and a 3994 controller into the same unit. This will cause the storage system to become inoperable.

6140 Controller Tray Details

The top left controller (A) is inverted from the bottom-right controller (B). The top right power-fan canister is inverted from the bottom-left power-fan canister. The battery is located in its own separate removable FRU to the left of controller A and to the right of controller B.



The Controller Canister


Figure 2-11 shows the 3994 controller canister.

Figure 2-11 3994 Controller Canister

FC host ports
The fibre channel host port LEDs indicate the speed of the ports, as shown in Figure 2-12.
Figure 2-12 Host Port LEDs

The ports:


- Support speeds of 1/2/4 Gbit/sec using the Agilent DX4+
- Auto-negotiate for speed

Host port LEDs - two LEDs indicate the speed of the port:

  OFF and OFF = No connection/link down
  ON and OFF  = 1 Gbit/sec (GREEN)
  OFF and ON  = 2 Gbit/sec (GREEN)
  ON and ON   = 4 Gbit/sec (GREEN)
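The LED-pair encoding above can be expressed as a simple lookup. The helper below is hypothetical (not part of any Sun tool); it just restates the table in code:

```python
# Hypothetical helper mapping the two host-port LEDs to the link state,
# following the table above.
HOST_PORT_LEDS = {
    (False, False): "no connection/link down",
    (True, False): "1 Gbit/sec",
    (False, True): "2 Gbit/sec",
    (True, True): "4 Gbit/sec",
}

def host_port_speed(led1_on, led2_on):
    """Decode the (LED1, LED2) pair into the port's link state."""
    return HOST_PORT_LEDS[(led1_on, led2_on)]
```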



Auto-Negotiation
The Fibre Channel host interface performs link speed negotiation on each host channel Fibre Channel port. This process, referred to as autonegotiation, means that it will interact with the host or switch to determine the fastest compatible speed between the controller and the other device. The fastest compatible speed will become the operating speed of the link. If the device on the other end of the link is a fixed speed device or is not capable of negotiating, the controller will automatically detect the operating speed of the other device and set its link speed accordingly.
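The negotiation described above amounts to picking the fastest speed both ends of the link support. A minimal sketch, assuming the controller port supports 1, 2, and 4 Gbit/sec (names and structure are illustrative, not Sun firmware):

```python
# Sketch of link speed auto-negotiation: the operating speed is the
# fastest speed common to the controller and the peer device.
CONTROLLER_SPEEDS = {1, 2, 4}  # Gbit/sec supported by the host port

def negotiate(peer_speeds):
    """Return the fastest mutually supported speed, or None if no match."""
    common = CONTROLLER_SPEEDS & set(peer_speeds)
    return max(common) if common else None
```

For example, a 2-Gbit HBA (`negotiate({1, 2})`) links at 2 Gbit/sec, while a 4-Gbit switch (`negotiate({1, 2, 4})`) links at 4 Gbit/sec.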

Dual 10/100 Base-T Ethernet Ports With EEPROM


Figure 2-13 illustrates the ethernet status LEDs.

Figure 2-13 Ethernet Status Lights

Ethernet port 1 must be used for the management host, while port 2 is reserved for future use. Do not use port 2 for management of the trays.

Light                   Color      Status
Ethernet Link Speed     Green LED  Off = 10 Base-T; On = 100 Base-T
Ethernet Link Activity  Green LED  Off = No link established; On = Link established; Blinking = Activity



Serial Port Connector


To access the serial port, use an RS232 DB9-to-MINI DIN 6 adapter with a null modem serial cable. This port is used to access the Service Serial Interface, which is used for viewing or setting a static IP address for the controllers. This interface can also clear the storage system password. Figure 2-14 shows the serial port connector.
Serial Port

Figure 2-14 Serial Port Connector

Figure 2-15 shows the RS232 DB9-to-MINI DIN 6 adapter. Use it with a null modem cable for serial port access.

Figure 2-15 RS232 DB9-MINI DIN 6

Dual Disk Expansion Ports


Two LEDs indicate the speed of the channel of the disk drive ports, as shown in Figure 2-16.
Figure 2-16 Disk Expansion Ports

The behavior of the LEDs is as follows:


- When both LEDs are OFF, there is no connection or the link is down.
- With the left LED OFF and the right LED ON, the port is operating at 2 Gbit/sec.
- When both LEDs are ON, the port is operating at 4 Gbit/sec.

Fibre Channel Port By-Pass Indicator


The fibre channel port by-pass indicator has two settings: on and off. Figure 2-17 shows the indicator.

Figure 2-17 Port By-Pass Indicator

When the indicator is OFF, either no SFP is installed or the port is enabled. When it is ON (AMBER), no valid device is detected and the channel or port is internally bypassed.

Seven-Segment Display
Each controller module has a pair of seven-segment displays that form a two-digit display. Each digit has a decimal point, and is rotated 180 degrees relative to the other digit. With this orientation, the display looks the same regardless of controller orientation. The numeric display as shown in Figure 2-18 shows the tray identification (Tray ID) or a diagnostic error code.

Figure 2-18 Seven-Segment Display and Heartbeat

The heartbeat is the small decimal point in the lower right-hand corner of the first digit. The diagnostic light is the small decimal point in the upper left-hand corner of the second digit.


The controller tray ID is set to 85 by the controller firmware. The controller tray ID should not be changed, as the tray will not work properly with an ID below 80. The expansion tray IDs are automatically set during power-on to avoid conflicts with existing expansion tray IDs. Values on each display are shown as if the digits had the same orientation. During normal operation, the seven-segment display shows the tray ID. The display may also be used for diagnostic codes. The Diagnostic Light (upper digit decimal point) indicates current usage. The Diagnostic Light is off when the display is used to show the current tray ID. The tray ID is an attribute of the enclosure; in other words, both controllers will always display the same tray ID. It is possible, however, that one controller may display the tray ID while the other controller displays a diagnostic code.

Power on behavior - The Diagnostic Light, the Heartbeat Light, and all 7 segments of both digits will be on if a power-on or reset occurs. The tray ID display may be used to temporarily display diagnostic codes after each power cycle or reset. The Diagnostic Light will remain on until the tray ID is displayed. After diagnostics are completed, the current tray ID will be displayed.

Diagnostic behavior - Diagnostic codes in the form of Lx or Hx, where x is a hexadecimal digit, are used to indicate state information. In general, these codes are displayed only when the canister is in a non-operational state. The canister may be non-operational due to a configuration problem (such as mismatched IOM and/or controller types), or it may be non-operational due to hardware faults. If the controller/IOM is non-operational due to system configuration, the controller/IOM Fault Light will be off. If the controller/IOM is non-operational due to a hardware fault, the controller/IOM Fault Light will be on.

Table 2-2 Numeric Display Diagnostic Codes

Value  Description
--     Boot FW is booting up
FF     Boot Diagnostic executing
88     This controller/IOM is being held in reset by the other controller/IOM


Value  Description
AA     ESM-A application is booting
bb     ESM-B application is booting
L0     Mismatched IOM types
L2     Persistent memory errors
L3     Persistent hardware errors
L9     Over temperature
H0     SOC (Fibre Channel Interface) failure
H1     SFP speed mismatch (2 Gb SFP installed when operating at 4 Gb)
H2     Invalid/incomplete configuration
H3     Maximum reboot attempts exceeded
H4     Cannot communicate with the other IOM
H5     Mid-plane harness failure
H6     Firmware failure
H7     Current enclosure Fibre Channel rate different than rate switch
H8     SFP(s) present in currently unsupported slot (2A or 2B)
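As an illustration of how a monitoring script might translate the seven-segment display, here is a partial lookup of the codes in Table 2-2 (the helper itself is hypothetical, not a Sun utility):

```python
# Partial lookup of Table 2-2 diagnostic codes (illustrative only).
DIAG_CODES = {
    "--": "Boot FW is booting up",
    "FF": "Boot Diagnostic executing",
    "88": "Held in reset by the other controller/IOM",
    "L0": "Mismatched IOM types",
    "L9": "Over temperature",
    "H4": "Cannot communicate with the other IOM",
}

def describe(code):
    """Return the meaning of a displayed code, if known."""
    return DIAG_CODES.get(code, "unknown code - consult Table 2-2")
```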

Controller Service Indicators


Figure 2-19 shows controller service indicators.

Figure 2-19 Controller Service Indicators



Service Action Allowed LED


Figure 2-20 shows the Service Action Allowed LED.

Figure 2-20 Service Action Allowed LED


- Normal status is OFF.
- Problem status is ON (BLUE) - OK to remove the canister. A service action can be performed on the designated component with no adverse consequences.

Each drive, power-fan, and controller/IOM canister has a Service Action Allowed light. The Service Action Allowed light lets you know when you can remove a component safely.

Caution - Potential loss of data access. Never remove a drive, power-fan, or controller or IOM canister unless the Service Action Allowed light is turned on.

If a drive, power-fan, or controller/IOM canister fails and must be replaced, the Service Action Required (Fault) light on that canister turns on to indicate that service action is required. The Service Action Allowed light will also turn on if it is safe to remove the canister. If there are data availability dependencies or other conditions that dictate that a canister should not be removed, the Service Action Allowed light will remain off. The Service Action Allowed light automatically turns on or off as conditions change. In most cases, the Service Action Allowed light turns on when the Service Action Required (Fault) light is turned on for a canister.

Note IMPORTANT. If the Service Action Required (Fault) light is turned on but the Service Action Allowed light is turned off for a particular canister, you might have to service another canister first. Check your storage manager software to determine the action you should take.



Service Action Required (Fault)


Figure 2-21 shows the Service Action Required LED.

Figure 2-21 Service Action Required LED


- Normal status is OFF.
- Problem status is ON (AMBER). A condition exists that requires service; the canister has failed. Use the storage management software to diagnose the problem.

Cache Active Indicator


Figure 2-22 shows the Cache Active Indicator LED.

Figure 2-22 Cache Active Indicator

- OFF: No data is in cache; all cache data has been written to disk.
- ON (GREEN): Data is in cache.

Battery
Figure 2-23 shows the controller with its battery, which is to the right of the controller.

Figure 2-23 Controller With Battery


Batteries are used to preserve the contents of controller cache memory during power outages. An error is reported if any of the batteries are missing. An installation date is tracked for each battery FRU installed in the storage system. The battery installation date is set when the battery packages are installed into the storage system during manufacturing. If a battery package is replaced, the system administrator must set the installation date to the current date by resetting the battery age for that battery package in the storage manager or the command-line interface.

SANtricity command line: reset storageArray batteryInstallDate; (If no controller is specified, both battery dates are reset; otherwise, to specify a controller, add controller=a.)

Each day, the storage system controllers determine the age of each battery package in the storage system by comparing the current date to the installation date. If a battery package has reached its expiration age, cache battery failure event notification will occur. The storage system can be configured to generate cache battery near expiration event notification prior to reaching the expiration age. The controller module has a removable battery canister. The Lithium Ion battery will need to be replaced every three years. It will hold data in cache for up to 72 hours. Figure 2-24 shows the battery removal procedure.

Figure 2-24 Battery Removal
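The daily battery-age comparison described above can be sketched as follows. The three-year lifetime comes from the text; the 90-day near-expiration window and the function names are assumptions for illustration:

```python
from datetime import date

# Sketch of the controllers' daily battery-age check described above.
# EXPIRATION_DAYS reflects the stated three-year replacement interval;
# the 90-day warning window is an assumption, not a firmware constant.
EXPIRATION_DAYS = 3 * 365
NEAR_EXPIRY_DAYS = EXPIRATION_DAYS - 90

def battery_status(install_date, today):
    """Compare today's date against the recorded installation date."""
    age = (today - install_date).days
    if age >= EXPIRATION_DAYS:
        return "cache battery failure"
    if age >= NEAR_EXPIRY_DAYS:
        return "battery near expiration"
    return "ok"
```

For example, a battery installed in August 2004 and checked in August 2007 has passed the three-year mark and would trigger the failure notification.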


Figure 2-25 shows the Battery LEDs.

Figure 2-25 Battery LEDs

1. Service Action Allowed (OK to Remove): Normal status is OFF. Problem status is ON.
2. Service Action Required (Fault): Normal status is OFF. Problem status is ON.
3. Battery Charging: Normal operating status is ON. Blinking means charging. Problem status is OFF.

The Power-Fan Canister


The controller module has two removable power-fan canisters. Each power-fan canister contains one power supply and two fans. The four fans pull air through the canister from front to back across the drives. The fans provide redundant cooling, which means that if one of the fans in either fan housing fails, the remaining fans continue to provide sufficient cooling to operate the system. Cooling is improved by using side cooling for the controllers and IOMs. The 600-watt power supplies provide power to the internal components by converting incoming AC voltage to DC voltage. If one power supply is turned off or malfunctions, the other power supply maintains electrical power to the tray. The power-fan canister contains:

One 600 watt redundant switching power supply


Each power supply generates +5 and +12 volts. The two power supplies are tied to a common power bus on the mid-plane, using active current share between the redundant pair. The power supplies have power-factor correction and support wide-ranging AC or DC input.



They are able to operate in the range from 90 VAC to 264 VAC (50 Hz to 60 Hz) or, if the DC supply is selected, from 36 VDC to 72 VDC. If one blower fails, the second blower automatically increases to maximum speed to maintain cooling until a replacement power supply is available. Blower speed is monitored and controlled by a microcontroller and a thermal sensor within the power supply.

Two integrated +12V blower fans.
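The supported input ranges above can be summarized in a small sketch. This is a hypothetical illustration only (not part of any Sun tool); the function name and structure are assumptions, while the numeric limits come from the text.

```python
# Hypothetical helper: checks a measured input voltage against the
# supported ranges stated above (90-264 VAC at 50-60 Hz, or 36-72 VDC).

def input_in_range(volts, supply="AC"):
    """Return True if the input voltage is within the power supply's
    supported operating range for the selected supply type."""
    if supply == "AC":
        return 90 <= volts <= 264
    if supply == "DC":
        return 36 <= volts <= 72
    raise ValueError("supply must be 'AC' or 'DC'")
```

For example, a 230 VAC feed or a 48 VDC feed is in range, while a 30 VDC feed is not.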

Figure 2-26 shows the Power Fan Canister LEDs.

Figure 2-26 Power Fan Canister LEDs
1) Power (AC): Indicates input power is being applied to the power supply and the power switch is on. Normal status ON. Problem status OFF (GREEN).
2) Service Action Allowed (OK to remove): Normal status OFF. Problem status ON (BLUE).
3) Service Action Required (Fault): Glows amber when:
a. The power cord is plugged in, the power switch is on, and the power supply is not correctly connected to the mid-plane.
b. The power cord is plugged in, the power switch is on, the power supply is correctly seated in the mid-plane, and a power supply or blower fault condition exists.
Normal status OFF. Problem status ON (AMBER).

4) Direct Current Enabled: DC Power LED glows green to indicate the DC power rails are within regulation. Normal status ON. Problem status OFF (GREEN).

Controller Architecture
Figure 2-27 illustrates the architecture for the controller.

Figure 2-27 Controller Block Diagram


- 667 MHz Xscale processor with embedded XOR engine
- One DIMM memory slot
- Shared memory bus (the first 128 Mbytes are used by the processor)
- Cache memory: 3994, 2 GB per controller; 3992, 1 GB per controller


Knowledge Check - 6140


1) Identify the module shown above. _______________________________

Using the letters, identify the parts of the component shown above.
2a) A _______________________________________
2b) B _______________________________________
2c) C _______________________________________
2d) D _______________________________________
2e) E _______________________________________
2f) F _______________________________________

3a) On which module would you find this set of ports and LEDs? ___________________________________
P1 Ch 2 (Ctrl B) P2 Ch 2 (Ctrl A)

3b) If both LEDs in the middle are on, what speed is the port operating at?

3c) What is the function of the LEDs to the far left and far right?


4) List 3 benefits of DACstore.

5) Differentiate the functionality of the Sundry drives compared to the other drives in the system.

6) Explain how tray IDs are set. How can you change them?

7) How do you differentiate the 6140 lite and 6140 controllers?

8a) Why are there two ethernet ports?

8b) Which port should be used for normal operation? __________________

Circle the correct system for each item below.
9a) 2 GB cache per controller          6130    6140 - 2 port    6140 - 4 port
9b) 112 maximum disks                  6130    6140 - 2 port    6140 - 4 port


9c) 2 Gb SATA disk drives              6130    6140 - 2 port    6140 - 4 port
9d) 16 disks per tray                  6130    6140 - 2 port    6140 - 4 port
9e) 1 and 2 Gb host ports              6130    6140 - 2 port    6140 - 4 port
9f) 2048 volumes, max                  6130    6140 - 2 port    6140 - 4 port
9g) Max 3 expansion trays              6130    6140 - 2 port    6140 - 4 port
9h) 2 expansion ports per controller   6130    6140 - 2 port    6140 - 4 port

10) Why are the controllers inverted in a 3994/3992 controller module?



Exercise Solutions
Task - Complete the Following

1) Identify the module shown above. The module is the 3994 controller module.
Using the letters, identify the parts of the component shown above.
2a) A Host side ports
2b) B Ethernet ports
2c) C Service action allowed
2d) D Seven-segment display for tray ID and fault identification
2e) E Drive side ports
2f) F Serial port
3a) On which module would you find this set of ports and LEDs? Controller tray
3b) If both LEDs in the middle are on, what speed is the port operating at? 4 Gbit
P1 Ch 2 (Ctrl B) P2 Ch 2 (Ctrl A)


3c) What is the function of the LEDs to the far left and far right? Port by-pass indicator. OFF: no SFP is installed, or the port is enabled. ON (amber): no valid device is detected, and the channel or port is internally bypassed.

4) List 3 benefits of DACstore.
1. All controllers recognize configuration and data from other storage systems.
2. Storage system level relocation.
3. DACstore also enables relocation of drives within the same storage subsystem in order to:
a. Maximize performance: as customers add expansion units, DACstore allows the customer to relocate drives so that drives within an array are spread across all drive channels.
b. Maximize availability: as the customer adds expansion units, DACstore allows the user to relocate drives so that drives are striped vertically across all expansion trays, and no one tray has more than one disk of a virtual disk.

5) Differentiate the functionality of the Sundry drives compared to the other drives in the system. The sundry drive contains information about the entire system, whereas all the other drives contain only their own information in the DACstore.

6) Explain how tray IDs are set. How can you change them? Tray IDs are soft-set by the controller to avoid tray ID conflicts. You can change them through the SANtricity GUI or through the command line.


7) How do you differentiate the 6140 lite and 6140 controllers? The 6140 lite controller has two host ports. The 6140 controller has four host ports.

8a) Why are there two ethernet ports? Ethernet port 1 is for normal operation. Ethernet port 2 is available for support to use.

8b) Which port should be used for normal operation? Ethernet port 1.
Circle the correct system for each item below.
9a) 2 GB cache per controller          6130    6140 - 2 port    6140 - 4 port
9b) 112 maximum disks                  6130    6140 - 2 port    6140 - 4 port
9c) 2 Gb SATA disk drives              6130    6140 - 2 port    6140 - 4 port
9d) 16 disks per tray                  6130    6140 - 2 port    6140 - 4 port
9e) 1 and 2 Gb host ports              6130    6140 - 2 port    6140 - 4 port
9f) 2048 volumes, max                  6130    6140 - 2 port    6140 - 4 port
9g) Max 3 expansion trays              6130    6140 - 2 port    6140 - 4 port
9h) 2 expansion ports per controller   6130    6140 - 2 port    6140 - 4 port

10) Why are the controllers inverted in a 3994/3992 controller module? Cooling and power cord management.



Module 3

Sun StorageTek CSMII Expansion Tray Overview


Objectives
Upon completion of this module, you should be able to:

Describe the Sun StorageTek CSMII Expansion Tray key features Identify the hardware components of the CSMII Expansion Tray Describe the functionality of the CSMII Expansion Tray Interpret LEDs for proper parts replacement



Sun StorageTek CSMII Expansion Tray Overview


The CSMII is the latest disk drive enclosure in the Sun StorageTek midrange 6000 series of products. This 3U enclosure has 4 Gbps Fibre Channel (FC) interfaces and supports up to 16 disk drives. The 4 Gbps-ready CSMII expansion tray offers a new 16-bay disk enclosure for attachment to selected midrange 6000 storage systems, with up to 4.8 terabytes (TB) of physical capacity per expansion unit using sixteen 300 GB FC disk drives. The CSMII supports the current 2 Gbps FC drives, and the intermix of 4 Gbps FC drives and SATA II drives, all within the same enclosure. The CSMII contains redundant (AC) power and cooling modules and IOM interfaces. The features offered by the CSMII expansion tray are summarized below:

- 16 drives per enclosure
- Support for multiple drive types:
  - 2 Gbps 10K RPM FC drives: 146 GB and 300 GB
  - 4 Gbps 15K RPM FC drives: 73 GB and 146 GB
  - 3 Gbps 7.2K RPM SATA II drives: 500 GB
- SATA II and FC drives can be intermixed in the same enclosure (controller firmware dependent)
- A selectable loop speed switch allows the enclosure to run at 2 Gbps or 4 Gbps (not auto-sensing)
- Switched loop design improves RAS (Reliability, Availability, and Serviceability) and reduces latency
- All components are hot-swappable
- RoHS compliant
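The 4.8 TB figure quoted above follows from simple arithmetic. The sketch below is illustrative only (the variable names are assumptions); it uses decimal terabytes, as capacity figures in this guide do.

```python
# Reproduce the quoted per-tray capacity: 16 bays x 300 GB FC drives.
DRIVES_PER_TRAY = 16
DRIVE_GB = 300  # largest FC drive quoted above

capacity_tb = DRIVES_PER_TRAY * DRIVE_GB / 1000.0  # decimal TB
# 16 x 300 GB = 4800 GB = 4.8 TB per expansion tray
```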



Hardware Overview
Hardware Components of the Sun StorageTek 6540
The Sun StorageTek 6540 storage system consists of two main trays: the 6540 controller tray and a minimum of one expansion tray. The expansion tray is also known as the Common Storage Module 2 (CSMII).

Figure 3-1

Sun StorageTek 6540 storage system

This section describes the main components of the CSMII expansion tray.



CSMII Expansion Tray


Figure 3-2 shows a block diagram for the CSMII expansion tray. The blocks represent the placement of the IOMs, power-fan canisters, and removable mid-plane canister.
Figure 3-2 Block diagram for the CSMII Expansion Tray (power/cooling canisters, FC/SATA drives, IOMs, and the global LEDs and Link Rate switch)

The CSMII enclosure has the following FRUs:


Disk drive canisters Power-Fan canisters IOM canister

The enclosure has a removable drive cage and a removable midplane.

Caution: Service Advisor procedures should be followed when removing a FRU.

CSMII Expansion Tray - Front View


The expansion tray contains up to 16 drives, two IOM canisters, two power-fan canisters, and a removable drive cage. The front of the tray has a molded frame that contains the global lights and the Link Rate switch.


Figure 3-3 shows the front view of the Sun StorageTek CSMII.

Figure 3-3 Sun StorageTek CSMII Expansion Tray Front View

Drive Field Replaceable Unit (FRU)


Each disk drive is housed in a removable, portable canister. The FC drives are low-profile, hot-swappable, dual-ported Fibre Channel disk drives. The SATA II drives use the same canister as the FC drives, but have a SATA II Interface Card (SIC) added to the rear of the canister. The SIC card serves three purposes:
1. It provides redundant paths to the disk. SATA II drives are single-ported, so the SIC card acts as a multiplexer and effectively simulates a dual-ported disk.
2. It provides SATA II to FC protocol translation, thereby enabling a SATA II disk to function within an FC expansion tray.
3. It provides speed matching. The SIC card negotiates between 2 Gbps and 4 Gbps based on the setting of the Link Rate switch on the expansion tray. SATA II drives run at 3 Gbps; the SIC card does the 3 Gbps to 4 Gbps buffering so the SATA II drive effectively runs at 4 Gbps (and can similarly run at 2 Gbps).


Figure 3-4 shows the SATA II interface card.


Figure 3-4 SATA II Interface Card (showing the SATA disk drive, SATA connector, interposer card with intelligent controller, and FC connector)

The drives are removed by gently lifting the lower portion of the handle, which releases the handle.

Caution: Add or remove drives only while the storage system is powered on. The drive should not be physically removed from its slot until it has stopped spinning. Release the handle of the drive to pop the drive out of its connector. The drive can be removed from its slot after it has spun down; this usually takes 30 to 60 seconds.

Disk drive options include:

- 10K RPM FC drives:
  - 2 Gbps interface
  - 146 GB and 300 GB
- 15K RPM FC drives:
  - 4 Gbps interface (can also run at 2 Gbps)
  - 73 GB, 146 GB, and 300 GB
- SATA II disk options:
  - 3 Gbps (with the SIC, can run at either 2 Gbps or 4 Gbps)
  - Native command queuing
  - 500 GB and 750 GB


DACstore
DACstore is a region on each drive that is reserved for the use of the storage system controller. One can think of it as storage system configuration metadata stored on each drive. The DACstore area is created when the drives are introduced to the controller. Each drive contains a DACstore area that is used to store information about the drive's state or status, volume state or status, and other information needed by the controller. The DACstore region extends 512 Mbytes from the last sector of the disk.
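Because the DACstore region occupies a fixed 512 Mbytes at the end of the drive, its starting sector is easy to compute. The sketch below is a hypothetical illustration (the function name and the 512-byte sector size are assumptions, not from this guide):

```python
# Hypothetical sketch: where the DACstore region begins on a drive,
# assuming 512-byte sectors. The 512 MB reserved size is from the text.

SECTOR_BYTES = 512
DACSTORE_BYTES = 512 * 1024 * 1024  # 512 Mbytes at the end of the disk

def dacstore_first_sector(total_sectors):
    """Return the first sector of the DACstore region, which occupies
    the last 512 MB of the drive."""
    return total_sectors - DACSTORE_BYTES // SECTOR_BYTES
```

For a drive with 10,000,000 sectors, the region would begin 1,048,576 sectors before the end.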

DACstore Benefits
DACstore exists on every drive and can be read by all 6140 / 6540 controllers. Therefore, when an entire virtual disk is moved from one storage system to a new storage system, the data remains intact and can be read by the controllers in the new storage system. This provides investment protection through data-intact upgrades and migrations.

- All Engenio controllers recognize configuration and data from other Engenio storage systems.
- Storage system level relocation.
- DACstore enables relocation of drives within the same storage subsystem to maximize availability. When expansion trays are added, DACstore gives the ability to relocate drives so that drives are striped vertically across all expansion trays, and no one tray has more than one drive of a virtual disk.

Sundry Drive
A Sundry drive stores the summary information about all of the drives in the storage system. The controllers assign a minimum of three sundry drives. Sundry drives are designated to hold certain global information regarding the state of the storage system. This information resides within the DACstore area on the sundry drives.


Every attempt is made to assign sundry drives to drives on different channels. The controllers assign at least one sundry drive from each virtual disk. This guarantees that if a virtual disk is removed for migration, at least one sundry drive will remain in the storage system, and a sundry drive will migrate to the destination storage system to transport Storage Domain Mappings with the migrating volumes. There is no limit on the maximum number of sundry drives that may exist. Other information stored in the DACstore region of the sundry drive:

- Failed Drive Store: saves information about the current failed drives
- Global Hot Spare Store: saves the current Global Hot Spare drive state/status
- Storage system password
- Media scan rate
- Cache configuration of the storage system
- Storage system user label
- MEL logs: used to log controller events
- Volume/LUN mappings, host type mappings, and other information used by the storage partitions feature
- NVSRAM store: saves a copy of the current controller NVSRAM values for use in the case of a controller swap
- Premium feature keys and permissions allowed for this controller



Field Replaceable Drive Cage


The field replaceable drive cage holds sixteen 3.5 inch drives. The midplane is located on the back of the cage as shown in Figure 3-5.

Figure 3-5 Drive Cage

Disk Drive LEDs


The disk drive LEDs are illustrated in Figure 3-6.

Figure 3-6 Disk Drive LEDs


The LEDs are:
1. Drive Service Action Allowed: If this LED is on, it is OK to remove the drive. Normal status is OFF. Problem status is ON (BLUE).
2. Drive Fault: Normal status is OFF. BLINKING indicates the drive, volume, or storage system locate function. Problem status is ON (AMBER).
3. Drive Active: ON (not blinking) means no data is being processed. BLINKING means data is being processed. Problem status is OFF (GREEN).

Global CSMII Expansion Tray LEDs


Each component in a tray has LEDs that indicate functionality for that individual component. Global LEDs indicate functionality for the entire tray. Global LEDs are shown in Figure 3-7.

Figure 3-7 Global LEDs on CSMII Expansion Tray

The global LEDs are as follows:
1. Global Locate: Normal status is OFF. On only while the user is performing the locate function (WHITE).
2. Global Summary Fault: Normal status is OFF. Problem status is ON (AMBER).
3. Global Power: Normal status is ON. Problem status is OFF (GREEN).



Link Rate Switch


The Link Rate Switch shown in Figure 3-8 enables you to select the data transfer rate between the IOMs, drives and controllers. Setting the Link Rate switch determines the speed of the back end drive channel.

Figure 3-8 Link Rate Switch

Important things to remember:


- The correct position is 4 Gbps to the left; 2 Gbps to the right.
- All trays on a pair of drive channels must be set to operate at the same data transfer rate.
- The drives in the expansion tray must support the selected link rate speed.
- If a tray is set to operate at 4 Gbps, all 2 Gbit drives in that tray will be bypassed.
- If a tray is set to operate at 2 Gbps, all 4 Gbps drives will operate at 2 Gbps.

Caution: Change the Link Rate switch only when there is no power to the CSMII tray.
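The link-rate rules above can be captured in a small sketch. This is a hypothetical helper for study purposes, not a Sun utility; the function name and return convention are assumptions.

```python
# Sketch of the Link Rate switch rules: a 4 Gbps tray bypasses 2 Gbit
# drives, while a 2 Gbps tray runs 4 Gbps drives at 2 Gbps.

def effective_speed(tray_gbps, drive_max_gbps):
    """Return the effective drive speed in Gbps, or None if the drive
    is bypassed under the tray's Link Rate switch setting."""
    if tray_gbps == 4 and drive_max_gbps < 4:
        return None  # 2 Gbit drive in a 4 Gbps tray: bypassed
    return min(tray_gbps, drive_max_gbps)
```

For example, a 2 Gbit drive in a 4 Gbps tray is bypassed, while a 4 Gbps drive in a 2 Gbps tray runs at 2 Gbps.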



CSMII Expansion Tray - Back View

Figure 3-9 Back View of Expansion Tray (CSMII)

At the back of the expansion tray, the IO modules (IOMs) and the power-fan canisters on the top are inverted 180 degrees from the canisters on the bottom. In a fully configured system, the field replaceable canisters are fully redundant: if one component fails, its counterpart can maintain operations until the failed component is replaced. IOM modules are also sometimes referred to as ESMs (Environmental Services Modules).


Figure 3-10 shows the LEDs.

Figure 3-10 LEDs and Indicators

The IOM has four drive ports; however, only two are available for use. Do not use the drive ports (2A and 2B) nearest the seven-segment display; these are reserved for future functionality. The IOM runs at 2 Gbit or 4 Gbit, determined by the switch on the front side of the expansion tray.

Figure 3-11 IOM (input-output module)

The following environmental conditions are monitored by the IOM:

- The presence and absence of disk drives and the two power-fan canisters
- The operational status line of the two power-fan canisters



- Enclosure temperature reading. Temperature shutdown will occur; the preset values are 60 degrees Celsius for the high warning fault and 68 degrees Celsius for the high critical fault.
- Fan rotational speed for all four fans (two per power-fan canister)
- Voltage level reading for the 5V and 12V supply buses
- Voltage level reading for the 1.2V, 1.8V, 2.5V, and 3.3V on-board supply buses
- Control of the fault status lines for the drives
- Control of the Locator LED, Summary Fault LED, and Service Action Allowed LED
- Presence of the second 4 Gbit FC ESM in the enclosure
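The two temperature thresholds above (60 degrees C high warning, 68 degrees C high critical) can be expressed as a simple classifier. This is an illustrative sketch only; the function and label names are assumptions, while the thresholds come from the text.

```python
# Sketch of the IOM enclosure temperature thresholds described above.
HIGH_WARNING_C = 60   # high warning fault
HIGH_CRITICAL_C = 68  # high critical fault (shutdown territory)

def temperature_status(celsius):
    """Classify an enclosure temperature against the preset faults."""
    if celsius >= HIGH_CRITICAL_C:
        return "critical"
    if celsius >= HIGH_WARNING_C:
        return "warning"
    return "normal"
```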

Dual Disk Expansion Ports


Only disk drive ports 1A and 1B should be used. Two LEDs indicate the speed of the channel of the disk drive ports, as shown in Figure 3-12.


Figure 3-12 Disk Expansion Ports

The behavior of the LEDs is as follows:

- When both LEDs are OFF, there is no FC connection or the link is down.
- When the left LED is OFF and the right LED is ON, the port is operating at 2 Gbps.
- When both LEDs are ON, the port is operating at 4 Gbps.



Fibre Channel Port By-Pass Indicator


The fibre channel port by-pass indicator has two settings: on and off. Figure 3-13 shows the indicator.

Figure 3-13 Port By-Pass Indicator

When OFF, no SFP is installed or the port is enabled. When ON (AMBER), no valid device is detected and the channel or port is internally bypassed.

Seven Segment Display and IOM Service Indicators


The Seven Segment Display and Service Indicators shown in Figure 3-14 have the same function and definition as already described for the 6540 controller module - please refer to the 6540 module for a description of these indicators.

Figure 3-14 Seven Segment Display and Service Indicators

The Power-Fan Canister


The CSMII tray has two removable power-fan canisters. Each power-fan canister contains one power supply and two fans. The four fans pull air through the canister from front to back across the drives. The fans provide redundant cooling, which means that if one of the fans in either fan housing fails, the remaining fans continue to provide sufficient cooling to operate the system. Cooling is improved by using side cooling for the IOMs. The 600-watt power supplies provide power to the internal components by converting incoming AC voltage to DC voltage. If one power supply is turned off or malfunctions, the other power supply maintains electrical power to the tray. The power-fan canister contains:

One 600 watt redundant switching power supply

Each power supply generates +5 and +12 volts.



The two power supplies are tied to a common power bus on the mid-plane, using active current sharing between the redundant pair. The power supplies have power-factor correction and support wide-ranging AC or DC input. They are able to operate from 90 VAC to 264 VAC (50 Hz to 60 Hz) or, if the DC supply is selected, from 36 VDC to 72 VDC. If one blower fails, the second blower automatically increases to maximum speed to maintain cooling until a replacement power supply is available. Blower speed is monitored and controlled by a microcontroller and thermal sensor within the power supply.

Two integrated +12V blower fans.

Figure 3-15 shows the Power Fan Canister LEDs.

Figure 3-15 Power Fan Canister LEDs
1) Power (AC): Indicates input power is being applied to the power supply and the power switch is on. Normal status ON. Problem status OFF (GREEN).
2) Service Action Allowed (OK to remove): Normal status OFF. Problem status ON (BLUE).
3) Service Action Required (Fault): Glows amber when:


a. The power cord is plugged in, the power switch is on, and the power supply is not correctly connected to the mid-plane.
b. The power cord is plugged in, the power switch is on, the power supply is correctly seated in the mid-plane, and a power supply or blower fault condition exists.
Normal status OFF. Problem status ON (AMBER).
4) Direct Current Enabled: DC Power LED glows green to indicate the DC power rails are within regulation. Normal status ON. Problem status OFF (GREEN).

Architecture Overview
The following section shows the architecture for the CSMII expansion tray, which is a switched bunch of disks (SBOD).

Figure 3-16 Comparing JBOD and SBOD technology



Switched Bunch of Disks (SBOD) Architecture


The loop-switch technology enables direct and detailed FC communication with each drive. A loop switch allows devices on an FC loop to operate as though they were on a private Fibre Channel Arbitrated Loop (FCAL), but with the performance and diagnostic advantages of Fibre Channel fabric. A SOC (switch-on-a-chip) allows FCAL devices to communicate directly with each other, which reduces the loop latency inherent in a true arbitrated loop. Because Fibre Channel communication is essentially point-to-point with a loop switch, diagnosis and isolation of loop problems is simplified. Figure 3-17 shows the SBOD architecture.

Figure 3-17 SBOD Architecture



Knowledge Check
A B C

1) Identify the module shown above. _______________________________

Using the letters, identify the parts of the component shown above.
2a) A _______________________________________
2b) B _______________________________________
2c) C _______________________________________
2d) D _______________________________________
2e) E _______________________________________

2) List 3 benefits of DACstore

3) Differentiate the functionality of the Sundry drives compared to the other drives in the system.




Knowledge Check Answers


A B C

1) Identify the module shown above. The module is the CSMII IOM module.
Using the letters, identify the parts of the component shown above.
A Drive expansion ports
B IOM Service Indicators
C Seven-segment display for tray ID and fault identification
D Serial port
E Reserved ports

2) List 3 benefits of DACstore.
1. All controllers recognize configuration and data from other storage systems.
2. Storage system level relocation: relocation of drive trays or drives to another storage system.
3. DACstore also enables relocation of drives within the same storage subsystem in order to:
a. Maximize performance: as customers add expansion units, DACstore allows the customer to relocate drives so that drives within a virtual disk are spread across all drive channels.
b. Maximize availability: as the customer adds expansion units, DACstore allows the user to relocate drives so that drives are striped vertically across all expansion trays, and no one tray has more than one disk of a virtual disk.


3) Differentiate the functionality of the Sundry drives compared to the other drives in the system. The sundry drive contains information about the entire system, whereas all the other drives contain only their own information in the DACstore.


Module 4

Sun StorageTek 6540 - 6140 Hardware Installation


Objectives
Upon completion of this module, you should be able to:

List the basic steps for installing the Sun StorageTek 6540 Describe proper cabling techniques and methodologies List the basic steps of hot-adding CSMII drive trays to a 6540 Perform the proper power sequence for the 6540 storage system Describe procedure to set static IP addresses for the 6540



Overview of the Installation Process


The following list outlines the tasks required for installing the Sun StorageTek 6540 hardware. The installation of the management software, Common Array Manager (CAM), is covered in another module. The first three tasks are not covered in this section; use the unpacking and installation instructions that ship with the product to complete them.
1. Unpack the hardware according to the directions in the unpacking guide attached to the outside of the shipping carton.
2. Install the cabinet, controller tray, and expansion trays by following the directions in the hardware installation guide.
3. Attach the power cables.
4. Attach the Ethernet cables, one to each controller.
5. Cable the controller and expansion trays.
6. Check the link rate switch.
7. Turn on the power.
8. Set the controllers' IP addresses.
9. Use the hardware compatibility matrix to evaluate the system setup.

10. Attach the host interface cables.
(Items in bold are covered in detail in this module.)

Standard 19-inch cabinets can be customized for maximum flexibility and can contain a combination of twelve enclosures. Always start loading the cabinet from the bottom up. Always push the cabinet from the front.

A "U" is a unit of measurement used to describe the height of computer equipment components and of the standard racks in which these components are mounted. 1U is equal to 1.75 inches (44.45 millimeters), so, for example, a 2U component is 3.5 inches high. The height of the drive tray is 3U. The height of a controller enclosure is 4U.


The 72-inch cabinet is approximately 41U. This means that eleven drive trays and one controller enclosure will fit in a 72-inch cabinet.
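The rack-unit arithmetic above can be checked directly: eleven 3U drive trays plus one 4U controller enclosure use 37U, which fits in a 41U cabinet. This is an illustrative sketch; the variable names are assumptions.

```python
# Verify the rack-unit math for a 72-inch (~41U) cabinet.
DRIVE_TRAY_U = 3   # height of one drive tray
CONTROLLER_U = 4   # height of one controller enclosure
CABINET_U = 41     # usable height of the 72-inch cabinet

used_u = 11 * DRIVE_TRAY_U + 1 * CONTROLLER_U  # 11 trays + 1 controller
fits = used_u <= CABINET_U
```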

Cabling Procedures
The following section highlights proper cabling methods for the controller and expansion trays, keeping in mind how to cable for redundancy.

Cable Types
Fiber-optic cables and small form-factor pluggable (SFP) transceivers are used for connections to the host. If the system will be cabled with fiber-optic cables, you must install active SFPs into each port where a cable will be connected before plugging in the cable.

Figure 4-1 Fiber-optic cable with LC connector

Copper cables do not require separate SFP transceivers; the SFP transceivers are integrated into the cables themselves. Copper cables are used to connect the expansion trays. The two types of cables for expansion cabling are:

- 2 Gbit - Molex
- 4 Gbit - Tyco

Note - Host connections require the use of fiber-optic cables, but either copper or fiber-optic cables can be used to connect expansion trays.

Sun Confidential: Internal Only Sun StorageTek 6540 - 6140 Hardware Installation
Copyright 2007 Sun Microsystems, Inc. All Rights Reserved. Sun Services, Revision A

4-95


Comparing Copper and Fiber Optic

The choice between optical fiber and electrical ("copper") transmission for a particular system is made based on a number of trade-offs. Optical fiber is generally chosen for systems with higher bandwidths, spanning longer distances, than copper cabling can provide. The main benefits of fiber are its exceptionally low loss, allowing long distances between amplifiers or repeaters, and its inherently high data-carrying capacity, such that thousands of electrical links would be required to replace a single high-bandwidth fiber. Typically, copper cables are used for short distances, such as interconnecting drive enclosures. Fiber cables are used for long distances, such as connecting the storage system directly to servers or to an FC switch.
Figure 4-2 Fiber-Optic and Copper Cables (a fiber-optic connection uses an active SFP transceiver separate from the cable; a copper connection uses a passive SFP transceiver integrated with the cable)

Cable Considerations

Light is transmitted through fiber-optic cables; therefore, kinks or tight bends can degrade performance or damage the cables.

4-96

Sun Confidential: Internal Only Sun StorageTek 6540 Array - Hardware Installation
Copyright 2007 Sun Microsystems, Inc. All Rights Reserved. Sun Services, Revision A


Fiber-optic cables are fragile. Bending, twisting, folding, or pinching fiber-optic cables can damage the cables, degrade performance, or cause data loss. To avoid damage, do not step on, twist, fold, or pinch the cables, and do not bend them tighter than a 2-inch radius. Install SFP transceivers only in the ports that are used.

Caution - Fiber-optic cables are fragile. Do not bend, twist, fold, pinch, or step on the fiber-optic cables. Doing so can degrade performance or cause loss of data connectivity.

Recommended Cabling Practices


This section explains recommended cabling practices. To ensure that your cabling topology results in optimal performance and reliability, observe these practices.

What's wrong with this cabling method?

Figure 4-3

If a drive enclosure fails, neither Drive Channel 1 nor Drive Channel 3 can access the remaining drive enclosure.

If both redundant drive loops are cabled in the same direction, then a loss of power or communication to one drive enclosure can result in loss of access to the remaining drive enclosures.



Cabling for Redundancy - Top-Down / Bottom-Up

Figure 4-4

If a drive enclosure fails, the remaining drive enclosure can be accessed with Drive Channel 3.

When attaching expansion trays, create a cabling topology that uses the redundant paths to eliminate inter-tray connections as a potential single point of failure. To ensure that the loss of an expansion tray itself does not affect access to other trays, cable one drive channel from controller A of the 6540 top-down, and one drive channel from controller B bottom-up. Thus, the loss of a single tray will not prevent trays on the other side of the failure from being accessed by the other path.

Figure 4-4 shows full redundancy cabling on the drive channel side. Each drive tray is cabled to both controllers - that is, from each drive tray, one IOM is cabled to Controller A, and the other IOM is cabled to Controller B. Drive Channel 1 from Controller A is cabled top-down. The redundant loop, Drive Channel 3 from Controller B, is cabled bottom-up. Even if a whole drive tray fails, the connection to all other drive trays is not lost.
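The reasoning above can be illustrated with a toy model - a sketch only, not actual firmware behavior. Each loop walks the daisy chain of trays from its entry end and is cut at a failed tray:

```python
# Toy model of drive-loop reachability in a daisy chain of expansion trays.
# Trays are numbered 0 (top) .. n-1 (bottom); a loop entering at one end can
# reach trays in order until it hits the failed tray, where the chain is cut.

def reachable(n_trays, failed, entry_points):
    """Set of trays still reachable; entry_points lists 'top'/'bottom' per loop."""
    ok = set()
    for entry in entry_points:
        order = range(n_trays) if entry == 'top' else range(n_trays - 1, -1, -1)
        for tray in order:
            if tray == failed:
                break          # chain is cut at the failed tray
            ok.add(tray)
    return ok

# Top-down + bottom-up: every surviving tray stays reachable.
print(reachable(4, failed=1, entry_points=['top', 'bottom']))   # {0, 2, 3}
# Both loops cabled in the same direction: trays beyond the failure are lost.
print(reachable(4, failed=1, entry_points=['top', 'top']))      # {0}
```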


Each 6540 controller has two drive channels, and dual expansion ports for each drive channel. Splitting the trays between drive channels, or between the two ports of a single channel, further isolates the effect of a tray failure by half.

Cabling for Performance

Figure 4-5 6540 best practice for creating redundant drive-side loops

Generally speaking, performance is enhanced by maximizing bandwidth, that is, the ability to process more I/O across more channels. Therefore, a configuration that maximizes the number of host channels and the number of drive channels available to process I/O will maximize performance. Of course, faster processing speeds also maximize performance.

Drive enclosures should be balanced across controller back-end loops to achieve maximum throughput performance. Balancing drive trays also provides some additional tray loss protection if virtual disks are properly configured across enclosures.
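The balancing idea can be sketched as follows. The channel names and the round-robin policy here are illustrative assumptions, not a prescribed algorithm:

```python
# Sketch: spread expansion trays evenly across the available redundant
# drive-channel pairs, round-robin, so back-end I/O is balanced.

def balance(trays, channel_pairs):
    """Assign each tray to a redundant channel pair, round-robin."""
    plan = {pair: [] for pair in channel_pairs}
    for i, tray in enumerate(trays):
        pair = channel_pairs[i % len(channel_pairs)]
        plan[pair].append(tray)
    return plan

# Odd channels (1, 3) as one redundant pair, even channels (2, 4) as the other.
pairs = [('Ch1', 'Ch3'), ('Ch2', 'Ch4')]
print(balance(['tray1', 'tray2', 'tray3', 'tray4'], pairs))
# {('Ch1', 'Ch3'): ['tray1', 'tray3'], ('Ch2', 'Ch4'): ['tray2', 'tray4']}
```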



Figure 4-6 Which configuration is cabled for optimal performance?


Hot-adding an expansion enclosure


The top-down / bottom-up cabling methodology has the added benefit of enabling hot-adding an expansion enclosure while the storage system is in production. After power has been applied to a storage system and it is in production, this cabling methodology, along with the Hot-add or HotScale technology, enables online system expansion and reconfiguration with no forced downtime.

Port bypass technology is built into the ports of the interface modules (minihubs) and the ESM modules. Port bypass automatically opens and closes ports when devices are added or removed. Fibre Channel loops stay intact so that system integrity is maintained. You can add drive enclosures or hosts on the fly without suspending user access or compromising availability in any way.

The system expansion is a simple process:

Note - This is only a high-level overview of the procedure; refer to the appropriate user documentation for details.


1. Install the new drive enclosure in the rack but do not apply power to it yet.

Figure 4-7 Hot Add step 1 - install the new drive enclosure in the rack but do not apply power to it yet


2. Add the new enclosure to the top-down loop (in this example, Drive Channel 1).

Figure 4-8 Hot Add step 2 - cable top-down

3. Power up the new drive enclosure.
4. Verify that the storage management software recognizes and displays the new drive enclosure.


5. Re-cable the bottom-up loop to include the new enclosure (in this example, Drive Channel 3).

Figure 4-9 Hot Add step 5 - cable bottom-up


Figure 4-10 A fully configured 6540 with 14 CSMII expansion enclosures (drive trays arranged as Stack 1 through Stack 4)

Cabling Summary

- Have two Fibre Channel drive loops to each drive enclosure for redundancy: one drive loop from Controller A to the left-side IOM of a drive enclosure, and the redundant drive loop from Controller B to the right-side IOM of a drive enclosure.
- Have drive loops travel in opposite directions across all of the drive enclosures on those loops for robustness in case of a drive enclosure failure.
- Use all of the drive-side channels (drive loops) available for improved performance.
- From the controller enclosure, cable to the 1B port of the drive enclosure IOM.
- Use odd-numbered drive channels as a redundant pair, and even-numbered drive channels as a redundant pair.
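The pairing rule in the last point can be expressed as a tiny helper. This is illustrative only; the channel numbering follows the text (Drive Channels 1-4, paired 1 with 3 and 2 with 4):

```python
# Illustrative helper: on the 6540, drive channels pair as (1,3) and (2,4),
# i.e. odd-numbered channels form one redundant pair and even-numbered the other.

def redundant_partner(channel):
    """Return the partner drive channel in the redundant pair (1<->3, 2<->4)."""
    if channel not in (1, 2, 3, 4):
        raise ValueError('drive channels are numbered 1-4')
    return channel + 2 if channel in (1, 2) else channel - 2

print(redundant_partner(1))  # 3
print(redundant_partner(4))  # 2
```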


Recommended Cabling Practices


This section explains recommended cabling practices. To ensure that your cabling topology results in optimal performance and reliability, observe these practices.

Drive Cabling for Redundancy - Top-Down or Bottom-Up


When attaching expansion trays, create a cabling topology that uses the redundant paths to eliminate inter-tray connections as a potential single point of failure. To ensure that the loss of an expansion tray itself does not affect access to other trays, cable one side of the 6140 bottom-up and the other side top-down. Thus, the loss of a single tray will not prevent trays on the other side of the failure from being accessed by the other path. Each controller also has dual expansion ports. Splitting the trays between each port or channel further isolates the effect of a tray failure by half.

The first path is created by cabling the expansion trays sequentially from Controller A. For example, Controller A is connected to expansion tray 1 through Port 1B, which is connected to expansion tray 2 through Port 1B, which is connected to expansion tray 3 through Port 1B, which is connected to expansion tray 4 through Port 1B.

The alternate path is created by cabling the drive modules in the reverse order from Controller B. For example, Controller B is connected to expansion tray 4, which is connected to expansion tray 3, which is connected to expansion tray 2, which is connected to expansion tray 1.


In the event that expansion tray 2 fails, expansion trays 3 and 4 are still accessible through the alternate path.

While identical cabling topologies are simpler, a single point of failure exists: if an expansion tray fails, all expansion trays beyond the failure are no longer accessible. Such a topology is vulnerable to loss of access to data due to an expansion tray failure.
Figure 4-11 Redundant Cabling With One Controller Tray and Four Expansion Trays

Self-Check: How many back-end loops are shown in the diagram above?


The following figures show one controller tray cabled to an increasing number of expansion trays, culminating with Figure 4-17, which shows six expansion trays in the configuration.

Figure 4-12 One Controller Tray and One Expansion Tray

Figure 4-13 One Controller Tray and Two Expansion Trays


Figure 4-14 One Controller Tray and Three Expansion Trays


Figure 4-15 One Controller Tray and Four Expansion Trays


Figure 4-16 One Controller Tray and Five Expansion Trays


Figure 4-17 One Controller Tray and Six Expansion Trays


Considerations for Drive Channel Speed


When multiple expansion trays are connected to the same 6540 controller tray, all expansion trays attached to the same drive channel must operate at the same speed. The drive channels are used in pairs for redundancy: Drive Channels 1 and 3 are one redundant pair, and Drive Channels 2 and 4 are the other. Therefore, all enclosures attached to a redundant pair of drive channels must operate at the same speed.

Before powering on the system, check that the Link Rate switch is set to the appropriate data transfer rate. If it is not, move the switch to the correct position.

- 4 Gbps - switch to the left
- 2 Gbps - switch to the right

Since the switch is recessed, you will need to use a small tool to slide it to the proper position.

Figure 4-18 Link Rate Switch


Before powering on the system, check that the link rate switch is set to the appropriate data transfer rate.

Note - All expansion trays and the controller tray must be set to operate at the same data transfer rate.

Proper Power Procedures


The following section highlights the proper power-on procedures for the controller and expansion disk trays.

Turning On the Power


The process of powering on a storage system is easy if the right procedure is followed. First, make sure that all the trays have been cabled correctly. Then the key to this process is to power on the expansion trays before the controller tray. The controllers read the storage system configuration from the DACstore on the drives; therefore, all drives need power before the controllers are turned on. The first thing a controller does is issue a Drive Spin Up command to each drive. After all drives are spun up, the controller reads the DACstore information from each drive.

Caution - Potential damage to drives: repeatedly turning the power off and on without waiting for the drives to spin down can damage the drives. Always wait at least 30 seconds from when you turn off the power until you turn on the power again.

Caution If you are connecting a power cord to an expansion tray, turn off both power switches on the controller/expansion tray first. If the main circuit breaker in the cabinet is turned off, be sure both power switches are turned off on each tray in the cabinet before turning on the main circuit breakers.



Power-on Procedure
1. Are the main circuit breakers turned on?
   a. YES - Turn off both power switches on each tray you intend to turn on.
   b. NO - Turn off both power switches on all trays in the cabinet.
2. If the main circuit breakers are turned off, turn them on.

Note - IMPORTANT: Turn on the power to the expansion trays before turning on the power to the controller tray to ensure that the controllers acknowledge each attached expansion tray. If the controllers have power before the drives, the controllers could interpret this as a drive loss situation.

3. Turn on both power switches on the back of each expansion tray.
4. Turn on both power switches on the back of the controller tray.

A controller tray can take up to 10 seconds to power on and up to 15 minutes to complete its controller battery self-test. During this time, the lights on the front and back of the tray blink intermittently.

   a. Check the status of the LEDs on the front and back of each tray. Green lights indicate a normal status; amber lights may indicate a hardware fault.
   b. If any fault lights are on, diagnose and correct the fault.

Note To diagnose and correct the fault, you may need help from the storage management software. The use of storage manager to recover from faults will be covered in a later section.



Turning Off the Power


Storage systems are designed to run continuously, 24 hours a day. After you turn on power to a tray, it should remain on unless you need to perform certain upgrade and service procedures. There is a possibility of data loss during power-off if it is not done correctly; data can be lost from the cache, or from I/Os in the process of being written from a server or to the drives. Always ensure that I/Os from the server are stopped, drive activity has ceased, and the Cache Active LED on the controller is off; then power off the entire rack, or power off the controller tray followed by the expansion trays.

Power-off Procedure
1. Stop all I/O activity to each tray you are going to power off.

Note - Always wait until the Cache Active light on the back of the controller tray turns off and all drive active lights stop blinking before turning off the power.

2. Check the lights on the back of the controller and expansion trays. If one or more fault lights are on, do not continue with the power-off procedure until you have corrected the fault.
3. Turn off the power switches on each fan-power canister in the controller tray.
4. Turn off the power switches on each fan-power canister in each expansion tray.

Note - Power on: the expansion trays first, then the controller tray. Power off: the controller tray first, then the expansion trays.
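The ordering rule in the note can be captured in a small illustrative checker (the function names are hypothetical; this is a sketch of the rule, not management software behavior):

```python
# Tiny checker for the power-sequencing rule: expansion trays power on before
# the controller tray, and power off in the reverse order.

def valid_power_on(order):
    """order: list of 'expansion'/'controller' in the sequence switches are turned on.
    Valid when no expansion tray is powered on after the controller tray."""
    controller_on = False
    for kind in order:
        if kind == 'controller':
            controller_on = True
        elif controller_on:       # an expansion tray after the controller: invalid
            return False
    return True

def valid_power_off(order):
    """Powering off is valid when the reversed sequence would be a valid power-on."""
    return valid_power_on(list(reversed(order)))

print(valid_power_on(['expansion', 'expansion', 'controller']))   # True
print(valid_power_on(['controller', 'expansion']))                # False
print(valid_power_off(['controller', 'expansion', 'expansion']))  # True
```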


Set the Controller IP Addresses


The 6540 has two Ethernet ports. Ethernet port 1 is used for management. Ethernet port 2 is reserved for future use; do not use it for management.

Figure 4-19 Ethernet Ports on the 6540 Controller

To configure a controller's IP address for Ethernet port 1, you need an IP connection between the controllers and a management host. You can configure the controllers with either a dynamic or a static IP address.

Note - Each controller must have its own IP address. The default IP address for controller port A1 is 192.168.128.101. The default IP address for controller port B1 is 192.168.128.102.

Configuring Dynamic IP Addressing


Dynamic IP addresses for each controller can be assigned through a DHCP server. The dynamic IP address from a DHCP server can be used if BOOTP services are available.



Configuring Static IP Addressing


If a DHCP server is not available, the controllers use the following default internal IP addresses:

- Controller A1: 192.168.128.101
- Controller B1: 192.168.128.102
- Controller A2: 192.168.129.101
- Controller B2: 192.168.129.102

There are several ways to change the controllers' default IP addresses to the desired static IP addresses:

1. Connect the controller tray directly to a management host using a cross-over Ethernet cable and change the IP address using the management software, CAM.
2. Connect the controller tray to a management host using an Ethernet hub and change the IP address using CAM.
3. Connect the controller tray to an existing subnet and change the IP address using CAM.
4. Utilize the Serial Port Service Interface through the serial port.
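Before attaching a management host to a controller at one of its default addresses, a quick sanity check is that the host interface sits on the same subnet as the controller. A sketch using Python's ipaddress module; the addresses are the defaults listed in the text, while the /24 prefix is an assumption:

```python
# Sanity-check that a management host address is on the same subnet as a
# controller's default management address (the /24 mask is an assumption).
import ipaddress

DEFAULTS = {
    'A1': '192.168.128.101',
    'B1': '192.168.128.102',
}

def same_subnet(host_ip, controller_ip, prefix=24):
    """True if host_ip falls inside controller_ip's /prefix network."""
    net = ipaddress.ip_network(f'{controller_ip}/{prefix}', strict=False)
    return ipaddress.ip_address(host_ip) in net

print(same_subnet('192.168.128.50', DEFAULTS['A1']))   # True
print(same_subnet('10.0.0.5', DEFAULTS['B1']))         # False
```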


Serial Port Service Interface


With the Sun StorageTek 6540, the IP address can be changed through the Serial Port Service Interface. Use this interface when neither a DHCP server nor setting a static IP address through Ethernet is available. This interface:

- Displays network parameters
- Sets network parameters
- Clears the storage system password

To connect to the 6540 serial port, use the null-modem cable, which should be supplied with the controller tray.

Figure 4-20 6-Pin to 9-Pin Serial Converter and Null-Modem Cable

Serial Port Recovery Interface Procedure


Once you have a physical connection on the serial port, use the following steps to complete your connection:

1. Connect to the serial port of controller A with a terminal emulation program (9600, 8, none, 1).
2. Send <break> for the interface or baud rate change.
3. Within 5 seconds, press S (<shift>+s) to enter the Serial Port Interface.
4. Enter the password within 60 seconds or else access will terminate.
   a. password = kra16wen
5. Make a selection from the menu.


Figure 4-21 and Figure 4-22 show sample screens of the Service Interface Main Menu and the Ethernet Port Configuration screen.

Figure 4-21 Service Interface Main Menu

Figure 4-22 Display IP Configuration


Figure 4-23 shows the screen where Change IP Configuration can be selected.

Figure 4-23 Change IP Configuration

If you answer Y to configure using DHCP, the system tries for 20 seconds to connect to the DHCP server. If no DHCP server is found, the system cycles back to the main menu.


Use the Hardware Compatibility Matrix to Verify SAN Components


Interoperability and solution test labs conduct rigorous testing on components and systems. Upon successful completion of comprehensive testing, these products are added to the appropriate compatibility list. Always use the hardware compatibility matrix to verify that all SAN components are certified with the Sun StorageTek 6540. The components include the data host, OS version, host bus adapters, and switches. Always verify firmware levels and BIOS settings for new systems or firmware upgrades.

Note - Refer to the Sun web site for the Interoperability Matrix. As of 3/2007, the URL for the Interoperability Matrix is https://extranet.stortek.com/interop/interop

Attach the Host Interface Cables


You can connect data hosts to the Sun StorageTek 6540 storage system through Fibre Channel switches or directly to the system. The 6540 system has eight host connections: four per 6998 controller. This allows four redundant hosts to be directly connected to the system.

Caution - If you will be using Remote Replication with the 6998 controller, do not use host port 4 on controller A or controller B. When Remote Replication is activated, host port 4 on each controller is reserved for replication, and any data host connected to it will be logged out.

Host Cabling for Redundancy


To ensure that, in the event of a host channel failure, the storage system remains accessible to the host, establish two physical paths from each host or switch to the controllers, and install a path failover driver, such as MPxIO, on the host. This cabling topology, when used with a path failover driver, ensures a redundant path from the host to the controllers.



Connecting Data Hosts Directly


A direct point-to-point connection is a physical connection in which the HBAs are cabled directly to the storage system's host ports. Before you connect data hosts directly to the system, check that the following prerequisites have been met:

- Fiber-optic cables of the appropriate length are available to connect the array host ports to the data host HBAs.
- Redundant connections from the host to each controller module are available.
- Certified failover software is enabled on the host.

Figure 4-24 Direct host connections

Note Check the hardware/software compatibility matrix to determine the certified failover solutions.



Connecting Data Hosts through an external FC switch


You can connect the Sun StorageTek 6540 storage system to data hosts through external FC switches. Always check the compatibility matrix to ensure the switch is listed as part of a certified solution. Before you connect data hosts, check that the following prerequisites have been met:

- The FC switches are installed and configured as described in the vendor's installation documentation. Redundant switches inherently provide two distinct connections to the storage system.
- Interface cables are connected and routed between the host bus adapters (HBAs), switches, and installation site.
- Fiber-optic cables of adequate length are available to connect the array to the FC switch.

Figure 4-25 Fabric host connections



Knowledge Check
1. On the diagram below, number the expansion trays and design a cabling scheme for the Sun StorageTek 6540 that has one controller tray and six expansion trays.


2. On the diagram below, design a cabling scheme for the Sun StorageTek 6140 that has one controller tray and six expansion trays.


3. Why is it important to have a unique tray ID assigned to a drive enclosure?

4. Why would you choose to use fiber cabling over copper?

5. Why is top-down bottom-up cabling important?

6. What is the best way to power on an entire storage system?


1. Knowledge Check - this is only one solution of many valid ones. On the diagram below, design a cabling scheme for the Sun StorageTek


6140 that has one controller tray and 6 expansion trays.



Module 5

Sun StorageTek 6x40 - Common Array Manager


Objectives
Upon completion of this module, you should be able to:

- Describe the Sun StorageTek Common Array Manager (CAM)
- Explain the function of the main components of CAM
- List when and where to install each component of CAM
- Explain the function of firmware and NVSRAM files
- Describe the out-of-band method of management used by CAM
- Describe logging into and navigating within CAM


What is the Sun StorageTek Common Array Manager?


The Sun StorageTek Common Array Manager (CAM) allows storage administrators to monitor, configure, and maintain Sun StorageTek midrange 6000 storage systems over existing LANs and WANs. CAM features online administration of all 6x40 system functions. Fully dynamic reconfiguration allows for the creation, assignment, or reassignment of volumes without interruption to other active volumes. Maintenance of the storage array is simplified as well, since storage administrators can receive important event information regarding the status of the array where and when needed by e-mail notification.

CAM management activities include:

- Centralized management - monitor and manage 6x40 systems from any location on the network.
- Web-based GUI - the CAM GUI displays information about the storage system's logical components (storage volumes and virtual disks), physical components (controllers and disk drives), topological elements (host groups, hosts, host ports), and volume-to-LUN mappings.
- Volume configuration flexibility - the characteristics of a volume are defined not only during volume creation, but also by the storage pool and profile that are associated with the volume. The volume characteristics ensure the most optimal configuration settings are used to create volumes that meet the requirements of specific types of applications. Volume characteristics include:
  - capacity
  - segment size
  - modification priority
  - enable/disable read cache
  - enable write cache (write back)
  - disable write cache (write through)
  - enable/disable write cache mirroring
  - read-ahead multiplier
  - enable/disable background media scan, with or without redundancy check



Online administration - CAM enables most management tasks to be performed while the storage remains online with complete read/write data access. This allows storage administrators to make configuration changes, conduct maintenance, or expand the storage capacity without disrupting I/O to attached hosts. CAM's online capabilities include:

- Dynamic expansion enables new expansion trays to be added, virtual disks to be configured, and volumes to be created without disrupting access to existing data. Once a newly created volume is defined, its LUN is immediately available to be mapped and accessed by the data host.
- Dynamic capacity expansion (DCE) adds up to two drives at a time to an existing virtual disk, creating free capacity for volume creation or expansion and improving the IOPS performance of the volumes residing on the virtual disk.
- Dynamic volume expansion (DVE) allows you to expand the capacity of an existing volume by using the free capacity on an existing virtual disk.
- Dynamic virtual disk defragmentation allows you to consolidate all free capacity on a selected virtual disk. A fragmented virtual disk can result from volume deletion leaving groups of free data blocks interleaved between configured volumes. Using the Defrag option on a virtual disk allows you to maximize the amount of free capacity available to create additional volumes on that virtual disk.

Highest availability - CAM software ensures uninterrupted access to data with online storage management and support for up to 15 global hot spares.

Intuitive diagnostics and recovery - The Service Advisor provides valuable troubleshooting assistance by diagnosing storage system problems and determining the appropriate procedure to use for recovery.

Extensive operating system support - CAM software provides a broad range of platform support for open systems environments, including Windows 2000, Windows Server 2003, Solaris, HP-UX, Linux, AIX, NetWare, and IRIX. CAM itself, however, is only installable on Windows, Linux, and Solaris (SPARC and x86).


In summary, CAM allows management from one or more points on the network, centralization of capacity allocation decisions, and remote support and management.

The CAM Interface


The Sun StorageTek Common Array Manager (CAM) provides a common management interface for Sun-supported storage solutions. It greatly reduces the complexity of data storage implementations by providing:

- A standard interface for storage management and event reporting
- Standard terms associated with Sun's storage systems
- A shorter learning curve when transitioning to newer storage products

The Common Array Manager allows users to manage multiple storage devices from a single interface by adhering to the Storage Management Initiative Specification (SMI-S) created by the Storage Networking Industry Association (SNIA).

SMI-S Overview
Storage management today is a myriad of different software packages from different vendors that are not coordinated with each other. Furthermore, many of these applications lack the functionality, security, and dependability needed to ensure greater business efficiency, and incompatible application program interfaces (APIs) for storage management are spread throughout today's multi-vendor SANs. The Storage Management Initiative Specification (SMI-S) helps administrators gather and examine data from dissimilar vendors' products and puts it in a common format. This lets storage managers manage all devices on the SAN from a centralized application.


Figure 5-1 displays an overview of SMI-S.

Figure 5-1 SMI-S Overview

SMI-S is derived from the Web-Based Enterprise Management (WBEM) initiative. WBEM contains the Common Information Model (CIM) for managing network infrastructures, together with a data model, a transport mechanism that uses Hypertext Transfer Protocol (HTTP), and encoding that uses Extensible Markup Language (XML). SMI-S goes further than the open management functionality found in the Internet Engineering Task Force's (IETF) longstanding and much-used Simple Network Management Protocol (SNMP). There are two current versions of SMI-S, version 1.0.1 and version 1.0.2. The Common Array Manager uses version 1.0.2, which allows for the most up-to-date support for SAN infrastructures. SMI-S provides heterogeneous vendor support and functionally rich, dependable, and secure monitoring and control of mission-essential resources. This interface sweeps away the deficiencies associated with legacy management.


Software Components
The management software is delivered on compact disc (CD) or can be downloaded from the Sun website. The management software consists of a number of components:

- Sun StorageTek Management Host Software (CAM)
- Sun StorageTek Data Host Software
- Sun StorageTek Remote Management Host Software

Sun StorageTek Management Host Software


The management host software is the Sun StorageTek Common Array Manager (CAM) 5.xx package and contains the following:

- A graphical user interface (GUI), also referred to as the browser user interface (BUI), which includes the Java platform and the Sun Web Console. CAM's web-based Java console is the primary interface for configuration and administration of the array. It enables users to manage the array from any system with a web browser that is on the same network as the management host. For a list of supported browsers, see the release notes.
- The Sun StorageTek Configuration Service (SSCS) command line interface (CLI). The SSCS CLI provides the same control and monitoring capability as the web browser. In addition, the CLI is scriptable for running frequently performed tasks.
- A built-in Service Advisor and background Fault Management Service (FMS). Both were formerly features of a separate product called the Sun Storage Automated Diagnostic Environment (StorADE) but have now been incorporated into CAM. The Service Advisor provides online advice for replacing components and for diagnosing and resolving issues. The Fault Management Service is a service, or daemon, that runs in the background and monitors the arrays for exceptions. Via CAM you can configure the FMS to monitor the arrays on a 24-hour basis, collecting information that enhances the reliability, availability, and serviceability (RAS) of the array. CAM automates the transmission of alerts, which can be sent via e-mail, pager, or other diagnostic software (for example, an SNMP service) installed on a management host on the network.
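Because the SSCS CLI is scriptable, routine tasks can be batched. The sketch below assumes the 6000-series sscs command layout; the host name, array name, and user shown are placeholders, so check the CLI reference for your CAM release before relying on exact options:

```shell
# Log in to the CAM management host (host and user are placeholders)
sscs login -h camhost.example.com -u root

# List the registered arrays, then the volumes on one of them
# (the array name "array01" is a placeholder)
sscs list array
sscs list -a array01 volume

# End the CLI session
sscs logout
```

Wrapping commands like these in a cron-driven script is a typical way to automate frequently performed monitoring tasks.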



Sun StorageTek Data Host Software


Data host software controls the data path between the data host and the array. It contains tools that manage the data path I/O connections, including drivers and utilities that enable hosts to connect to, monitor, and transfer data in a storage area network (SAN). The type of data host software you need depends on your operating system. You must obtain the data host software from the Sun Download Center or another source. The data host software has in the past consisted of the following:

- Sun StorageTek SAN Foundation Software - Used to manage the data paths between data hosts and the storage arrays. This software includes drivers and utilities that enable data hosts to connect to, monitor, and transfer data in a storage area network (SAN).
- Sun StorageTek Traffic Manager software - Provides multipathing functionality and the ability to reliably communicate with the storage array.

Sun StorageTek Remote Management Host Software


The Remote Management Host software contains only the CLI. It can be installed on remote Solaris and non-Solaris systems, allowing users to manage the storage array remotely.

Note The Remote Management Host software that comes on the CD is only for the Solaris OS on the SPARC platform. Versions for other platforms can be downloaded from the Sun website. Use of the Remote Management Host software still requires a CAM management host; the CLI simply communicates with the CAM management host to perform the desired tasks.



Firmware and NVSRAM files


The controller firmware, NVSRAM, IOM, and drive firmware are bundled with CAM. When installing a new version of CAM, the CAM software confirms that the firmware on the controller is compatible with the version of CAM. All versions of CAM are backward compatible, which allows higher levels of CAM to manage storage systems running lower levels of firmware. If a detected array is not at the baseline firmware level, the firmware can be upgraded during the array registration process or at a later time. Consult the CAM installation and support guide and the compatibility matrix for more information. Firmware resides on each controller, each IOM, and each disk drive. Non-Volatile Static Random Access Memory (NVSRAM) is a file that specifies default settings for the controllers.

Caution Each 6x40 controller model has a unique NVSRAM file. Inappropriate application of this file can cause serious problems, including loss of connectivity with the storage system.


CAM Management Method


CAM uses the out-of-band management method, and will support in-band management in the future.

Out-of-Band Management Method


The out-of-band management method allows storage management commands to be sent to the controllers in the storage system directly over the network through each controller's Ethernet connection. Out-of-band management requires that each controller already have an IP address that was either set statically or assigned via a DHCP server. The procedure to statically set the IP address is described in the 6540 Hardware Installation chapter.

Note Full array management requires that both controllers are accessible via Ethernet. If only one controller is accessible, then only a subset of the array management functions will be available.

To manage the storage system through an Ethernet connection:
1. Attach cables from the Ethernet connections on the storage system to the network.
2. The 6x40 storage system has two Ethernet ports. Be sure that the Ethernet cable used for management is connected to Ethernet port 1.
3. Install CAM on a management host.
4. Register the storage system with CAM by completing an auto-discovery (Scan the Subnet) or by entering the IP address of one of the storage system controllers.

Note Multiple users can be logged into the CAM management server concurrently.

Note The default IP address for controller A is 192.168.128.101. The default IP address for controller B is 192.168.128.102.
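Before registering, you can confirm from a Solaris management host that both controllers answer on these defaults. This is a sketch only; bge0 is a placeholder for your actual interface name, and the commands require root privileges:

```shell
# Bring up an interface on the controllers' default 192.168.128.0 subnet
# (bge0 is a placeholder interface name; requires root)
ifconfig bge0 plumb
ifconfig bge0 192.168.128.100 netmask 255.255.255.0 up

# Verify that both controllers respond before registering the array
ping 192.168.128.101   # controller A default address
ping 192.168.128.102   # controller B default address
```

If either ping fails, only a subset of array management functions will be available, so resolve the connectivity problem before proceeding.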



Figure 5-2 Out-of-Band Management and location of CAM components

(Diagram: a CAM management host and a network management station for SNMP traps sit on the Ethernet network and reach both controllers' Ethernet connections over the management path. Windows, Solaris, and Linux data hosts, running HBA drivers and failover software, reach the controllers' firmware/NVSRAM through an FC fabric over the I/O path. Any server can also be a management host if CAM is installed on it.)

- Management Host - Used to manage the storage system. This can be any host that has a network connection to the storage system and has the CAM Management Host Software installed.
- Data Host - Used to read and write data to the storage system. This can be any host that has an FC connection to the storage system and has the CAM Data Host Software installed.

Hosts that have both network and FC connections to the storage system can act as both management and data hosts.


Sun StorageTek Common Array Manager Installation


Common Array Manager is installed using the CD provided with the 6x40 or downloaded from the Sun website, and currently runs on the Solaris, Linux, and Windows operating systems. Following the installation steps ensures proper function once installation is complete.

Note Detailed installation instructions are provided in the Sun StorageTek Common Array Manager Software Installation Guide.

Before you start the installation, ensure the following requirements are met:

- The root password of the management host is available (for running the installation script). Note that the root password is required for the initial login to the Sun Java Web Console after the software is installed.
- The following amount of space is available for the installation:

  - 555 Mbytes on Solaris
  - 660 Mbytes on Linux
  - 530 Mbytes on Windows

Note Review the release notes for the most up to date list of supported operating systems.

- Previous versions of the management software are not installed.

The installation script verifies these requirements. If a requirement is not met, the script informs the user or, in some cases, exits. The installation wizard provides two choices for installation: typical and custom. In a typical installation, the management host software, the data host software, and the Sun StorageTek Configuration Service packages are installed. If the custom installation is selected, the user can choose the packages to be installed. Select the custom option only if you want to install only the management host software or only the data host software, or if you also want to install the Remote Management Host software.



Note During the software installation, the progress indicator may reflect 0% for a considerable portion of the installation process. This is the expected progress indication when the typical installation process is selected.

Sun StorageTek Common Array Manager Navigation


Navigation through the Common Array Manager Java console is performed in the same manner used to navigate a typical web page. The navigation tree to the left of most screens provides navigation among pages within an application. On-screen links can be clicked for additional details. In addition, information displayed on a page can be sorted and filtered. When the cursor is moved over a button, tree object, link, icon, or column, a tool tip with a brief description of the object will be displayed. Most screens are broken into three sections: the banner, the navigation tree and the content area.

Common Array Manager Banner


The banner consists of access buttons across the top and quick status displays on the left and right sides. Figure 5-3 displays the page banner.

Figure 5-3 Page Banner

The access buttons provide the following functions:


- Console - Returns to the Sun Java Web Console page
- Version - Displays the version of the component currently being viewed on screen
- Refresh - Updates the current view



- Log Out - Logs the current user out and then displays the Sun Java Web Console login page
- Help - Opens the online help system

Note There are additional buttons specific to some screens.

The quick status display on the left of the banner provides the current user's role and the server name. The display on the right provides the number of users currently logged in, the date and time the array information was last refreshed (by the Refresh button), and current alarms.

Common Array Manager's Navigation Tree


The navigation tree is only displayed in the Sun StorageTek Configuration Service console. It is used to navigate between areas of the interface that allow users to view, configure, manage, and monitor the system. Each folder can be expanded or collapsed by clicking the triangle on the left side of the folder. The Common Array Manager's navigation tree is displayed in Figure 5-4.

Figure 5-4 Common Array Manager Navigation Tree

The main headings in the tree are:



- Logical Storage - Enables users to configure volumes, snapshots, replication sets, virtual disks, storage pools, and storage profiles.
- Physical Storage - Allows users to configure initiators, ports, virtual disks, trays, and disks.
- Mappings - Used to view the mappings for the selected array.
- Jobs - Provides access to current configuration processes (jobs running). This area also provides a history of jobs.
- Administration - Allows users to configure base system parameters as well as perform administrative tasks.

Common Array Manager's Content Area


The content area of the Common Array Manager displays information about either data storage systems or hosts, depending on what has been requested. Content area pages are generally displayed as tables or forms. Each may contain links to additional information or steps, drop-down menus, and text boxes. An example of a content area is displayed in Figure 5-5.

Figure 5-5 Common Array Manager Content Area



Additional Navigation Aids


There are a variety of other icons, drop-down menus, and links that will help you navigate and organize information presented by the Common Array Manager. Table 5-1 provides a list of the more commonly used items.

Note Not all of the navigation aids in Table 5-1 are available in every content screen.

Table 5-1 Common Navigation Aids (icon/indicator and description)

- Filter menu - Filters out undesirable information. To filter information, choose the filter criterion from the drop-down menu. When filtering tables, use the following guidelines: a filter must have at least one defined criterion, and a filter applies to the current server only.
- Paging toggle icon - Toggles between displaying a page at a time and displaying 15 or 25 rows at a time. When the top icon is displayed, click the icon to page through all data. When the bottom icon is displayed, click the icon to page through 15 or 25 rows at a time.
- Select/deselect all icons - Select or deselect all check boxes in a table. The icon on the left selects all items and the icon on the right deselects all items.
- Ascending sort indicator - Indicates that the column in the table is sorted in ascending order, for example, 0 to 9. A highlighted symbol indicates that it is the active column being used to sort the data.


- Descending sort indicator - Indicates that the column in the table is sorted in descending order, for example, 9 to 0. A highlighted symbol indicates that it is the active column being used to sort the data.
- Page indicator - Indicates the current page out of the total number of pages. You can also type in the desired page number and click Go to jump to that page.
- Red asterisk - Indicates a required field.
- Double down arrows - Display the part of the form indicated by the text next to the icon.
- Double up arrows - Click to return to the top of the form.


Initial Common Array Manager Configuration


Once CAM has been properly installed, the user must set up the arrays for management and use. The following procedures are performed to complete the initial CAM configuration:

- Configure IP addressing
- Access the management software
- Register the array
- Name the array
- Set the array password
- Set the system time
- Add any additional users
- Set up the Sun StorageTek Automated Diagnostic Environment

Note Unless otherwise specified, all steps detailed are for the Sun StorageTek Configuration Service interface.

Configure IP Addressing
To configure the IP address for each controller's Ethernet port, an IP connection between the controller trays and a management host must already have been established using the controllers' default IP addresses. It is important that both controllers are configured with an IP address to ensure proper function. The controllers' Ethernet ports can be configured with either a dynamic or a static IP address.

Configuring Dynamic IP Addresses


Dynamic IP addresses for the controllers' Ethernet ports are assigned by a dynamic host configuration protocol (DHCP) server. The address from the DHCP server will be used if bootstrap protocol (BOOTP) services are available.



Configuring Static IP Addresses


The Sun StorageTek 6x40 array has the following default internal IP addresses for the first port:

Controller A: 192.168.128.101
Controller B: 192.168.128.102

In order to change the controllers' default IP addresses to the desired static IP addresses, first set up an Ethernet interface on the management host with an IP address of 192.168.128.100 (or any IP address on the 192.168.128.0 subnet, provided it does not conflict with the controller tray's IP addresses). Then connect the management host to the storage.

Note If connecting directly to the storage without an Ethernet hub or switch, a crossover cable may need to be used. Review the Getting Started Guide for details.

The procedure to change the default IP addresses of the controllers' Ethernet ports is:
1. To access the management software, open a web browser and enter the address of the management host in the following format, where management-host is the IP address of the machine where you installed the management software (or type localhost in place of the IP address if the browser runs on the management host itself):
https://management-host:6789
2. The login page is displayed. Log in as a user that has the privileges of root (for Solaris) or Administrator (for Windows) on the management host:
login: root
password: password

Note The password is the root or Administrator password of the machine where you installed the management software.

3. From the Sun Java Web Console page, click Sun StorageTek Configuration Service.
4. Click the Register button.



Note If the array is displayed, then it is already registered.

5. Follow the Register Array wizard. Using the Array Registration wizard, the management software can either auto-discover one or more arrays that are connected to the network and are not already registered (this is also called Scan the Subnet), or you can choose to manually register each array. The Scan the Subnet process sends out a broadcast message across the local network to identify any unregistered arrays. The discovery process displays the percentage of completion while the array management software polls devices in the network to determine whether any new arrays are available. When complete, a list of discovered arrays is displayed; select one or more arrays to register from the list. Manual registration enables you to register an array by identifying the IP address of its controller. This option is typically used only to add a storage array that is outside of the local network. It can also be used to register arrays that have been registered on other management hosts. Ensure that Scan the Subnet is selected in step one of the wizard to discover any array on the subnet. With Scan the Subnet selected, the management software detects the array you installed and adds it to the Array Summary page. During this process the software will ensure the controller firmware is up to date.

Note It takes approximately 2 minutes for the software to discover arrays.

6. Verify that the array(s) have been added to the Array Summary page. If the array is not displayed, check the hardware connections and ensure that the array can be contacted using the ping command.

Note Steps 7 through 16 only need to be performed if the default IP addresses are not desired.

7. Select the array for which you want to modify the IP addresses.
8. Click Administration.

Sun Confidential: Internal Only Sun StorageTek 6x40 - Common Array Manager
Copyright 2006 Sun Microsystems, Inc. All Rights Reserved. Sun Services, Revision A

5-149

9. The General Setup page is displayed. Enter the array name and default host type, and then click OK.

10. Click Administration > Controllers.
11. For controller A's Ethernet ports, select Specify Network Configuration, enter the IP address, gateway address, and subnet, then click OK.
12. Repeat steps 10 and 11 for controller B's Ethernet ports.

Note An error message indicating that contact has been lost with the array may be displayed as a result of the changed IP address. This is expected due to the change.

13. Log out and log back in to the console.
14. On the Array Summary page, select the original array with the original IP address, and delete it to remove the old IP address entry.
15. Click Scan the Subnet to have the management software find the array with its new IP addresses.
16. If multiple arrays are being configured, clear the Address Resolution Protocol (ARP) table entry for each controller.
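Step 16 can be performed from the management host's command line. A sketch (run as root; the addresses shown are the old default controller entries, so substitute whatever stale entries your host actually cached):

```shell
# Remove stale ARP entries so the host re-resolves the
# controllers at their new addresses (requires root)
arp -d 192.168.128.101   # old controller A entry
arp -d 192.168.128.102   # old controller B entry

# Display the ARP table to confirm the stale entries are gone
arp -a
```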

Accessing the Management Software


To access the management software, open a browser and type:
https://[ip-address]:6789
Then log in to the Common Array Manager as root, using the root password for the management host (on Solaris this is typically the root login and password; on Windows, the Administrator login and whatever password was specified). Then select Sun StorageTek Configuration Service from the Storage section of the Sun Java Web Console page.
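Before opening a browser, you can confirm that the Sun Java Web Console is listening on port 6789. This is a sketch; the host name is a placeholder, and -k is used on the assumption that the console presents a self-signed certificate:

```shell
# Request only the HTTP response headers from the web console port;
# -k skips certificate validation (self-signed certificate assumed)
curl -k -I https://management-host.example.com:6789/
```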

Naming an Array
The storage array comes with a default name, which you should change to a unique name to simplify identification. The array name can be changed on the Administration Details page.



Configuring The Array Password


One of the additional options on the Administration Details page is the Manage Passwords button located at the top of the screen. A new Sun StorageTek 6x40 array is shipped with a blank, or empty, password field. Sun recommends that an array password be created during initial setup for security purposes. The password prevents other management hosts from making unauthorized changes to the configuration of the array. If an array is moved from one management host to another, the user will need to provide the password when registering the array that was moved.

Note The password can be unique between different arrays on the same management host. However, if a single array is being managed by more than one management host, the password must be the same on each management host.

The management software stores an encrypted copy of the array password, known as the local password, on the management host. Use Update Array Password in the Array Registration Database to ensure that there is no password conflict with another instance of the management software.

Setting the System Time


Another option on the Administration Details page is the system time and date. When the time and date for a selected array are set, the values are updated for all arrays in the system. The time can be set automatically using the network's Network Time Protocol (NTP) server. If an NTP server is used in the network, click Synchronize with Server to synchronize the time on the array with that of your management host. This saves steps, since the time will not have to be set manually.

Adding Additional Users


There are two types of privileges that can be assigned to users. The assignable privileges are:

- Storage - The storage role can view and modify all attributes.
- Guest - The guest role can only view (monitor) all attributes.


To be eligible for privileges to the CAM interface, users must have valid Solaris or Windows user accounts and have access to the management host. The users can then log in to the CAM interface using their Solaris or Windows user names and passwords. If multiple users are logged in to the array with Storage privileges, there is a risk of one user's changes overwriting another's. For this reason, storage administrators should develop procedures to manage this risk.

Setting Tray IDs


Although setting the tray IDs is not a requirement to configure the Common Array Manager, it is a good practice to ensure the tray IDs are unique and in order. By default the controller tray is set to ID 85; thus, when viewed it may be at the top or bottom of your screen. Each additional expansion tray added to the system should be numbered 1, 2, 3, and so on. For example, if you have two expansion trays attached, their IDs should be set to 1 and 2 respectively. To set the tray IDs, perform the following steps:
1. Open the physical devices folder in the navigation tree if it is not already open.
2. Select Trays.
3. Set the tray ID using the drop-down list.
4. Click Save.


Module 6

Array Configuration Using Sun StorageTek Common Array Manager


Objectives
Upon completion of this module, you should be able to:

- List the configuration components of CAM
- List the functions available in CAM
- Describe the parameters that affect a volume


Configuration Components of the Common Array Manager


Prior to administering the Sun StorageTek 6x40, it is important to understand the basic configuration components used in the CAM interface. To configure storage resources, you must work with both physical and logical components. The physical components are:

- Initiator – A port on a Fibre Channel (FC) host bus adapter (HBA) that allows a data host to gain access to the storage array for data I/O purposes. The initiator has a globally unique World Wide Name (WWN).
- Host – A server, or data host, with one or more initiators that can store data on an array. A host can be viewed as a logical grouping of initiators. You can define volume-to-logical unit number (LUN) mappings to an individual host or assign a host to a host group.
- Host Group – A collection of one or more data hosts in a clustered environment. A host can be part of only one host group at a time. You can map one or more volumes to a host group to enable the hosts in the group to share access to a volume.
- Controllers – The RAID controllers in the Sun StorageTek 6x40 Array.
- Ports – The physical ports in the Sun StorageTek 6x40 Array.
- Tray – An enclosure that contains from 5 to 16 disks.
- Disk – A non-volatile, randomly addressable, re-writable data storage device. Physical disks are managed as a pool of storage space for creating volumes.

The logical components are:

- Virtual Disk – One or more physical disks that are configured with a given RAID level (also called a RAID set). All physical disks in a virtual disk must be of the same type, FC or SATA II.
- Volume – A container into which applications, databases, and file systems store data. Volumes are created from a virtual disk, based on the characteristics of a storage pool. You assign a LUN number to a volume and map it to a host or host group.
- Profile – A set of attributes that are used to create a storage pool. The system has a pre-defined set of storage profiles. You can choose a profile suitable for the application that is using the storage, or you can create a custom profile.


- Pool – A collection of volumes with the same configuration. A storage pool is associated with a storage profile, which defines the storage properties and performance characteristics of a volume.
- Storage Domain – A logical entity that defines the mappings between volumes and hosts or host groups.
- Snapshot – A point-in-time copy of a primary volume. The snapshot can be mounted by an application and used for backup, application testing, or data mining without requiring you to take the primary volume offline. Snapshots are a premium feature that requires a right-to-use license.
- Data Replication – A volume-level replication tool that protects your data. It can be used to replicate volumes between physically separate primary and secondary arrays in real time. The replication is active while your applications access the volumes, and it continuously replicates the data between volumes.

Figure 6-1 shows the relationship of basic configuration components.

[Figure 6-1: diagram; visible labels include "Host Group", "Based on a Storage Profile", and "Other vDisk and volumes"]

Figure 6-1 Relationship of Basic Configuration Components

Creating a Volume With Common Array Manager


To configure a volume on the StorageTek 6x40, perform the following steps:

1. Select or create a profile.
2. Create storage pools.
3. Create a volume.

Storage Profiles
A storage profile consists of a set of attributes that are applied to a storage pool. Each disk or virtual disk must meet the attributes defined by the storage profile to be a member of a storage pool. The use of storage profiles simplifies configuration by predefining basic attributes that have been optimized for a specific application or data type. Prior to configuring an array, it is important to review the available storage profiles and ensure a profile exists that matches the user's targeted application and performance needs; if not, the user can create a new storage profile. The Sun StorageTek 6x40 Array provides several predefined storage profiles that meet most storage configuration requirements; see Table 6-1.

Table 6-1 Sun StorageTek 6140 Array Predefined Storage Profiles

Name                         RAID Level   Segment Size   Drive Type   Number of Drives   Read-Ahead Mode
Default                      RAID-5       512 KB         FC           Variable           Enabled
High_Capacity_Computing      RAID-5       512 KB         SATA         Variable           Enabled
High_Performance_Computing   RAID-5       512 KB         FC           Variable           Enabled
Mail_Spooling                RAID-1       512 KB         FC           Variable           Enabled
Microsoft_Exchange           RAID-5       32 KB          FC           4                  Enabled
Microsoft_NTFS               RAID-5       64 KB          ANY          4                  Enabled
Microsoft_NTFS_HA            RAID-1       64 KB          FC           Variable           Enabled
NFS_Mirroring                RAID-1       512 KB         FC           Variable           Enabled
NFS_Striping                 RAID-5       512 KB         FC           Variable           Enabled
Oracle_10_ASM_VxFS_HA        RAID-5       256 KB         FC           4                  Enabled
Oracle_8_VxFS                RAID-5       128 KB         FC           4                  Enabled
Oracle_9_VxFS                RAID-5       128 KB         FC           4                  Enabled
Oracle_9_VxFS_HA             RAID-1       128 KB         FC           4                  Enabled
Oracle_DSS                   RAID-5       512 KB         FC           Variable           Enabled
Oracle_OLTP                  RAID-5       512 KB         FC           Variable           Enabled
Oracle_OLTP_HA               RAID-1       512 KB         FC           Variable           Enabled
Random_1                     RAID-1       512 KB         FC           Variable           Enabled
Sequential                   RAID-5       512 KB         FC           Variable           Enabled
Sun_SAM-FS                   RAID-5       128 KB         ANY          4                  Enabled
Sun_ZFS                      RAID-5       128 KB         ANY          4                  Enabled
Sybase_DSS                   RAID-5       512 KB         FC           Variable           Enabled
Sybase_OLTP                  RAID-5       512 KB         FC           Variable           Enabled
Sybase_OLTP_HA               RAID-1       512 KB         FC           Variable           Enabled
To view the Storage Profile Summary screen, select Profiles from the navigation pane. In addition to the profile's parameters, the Storage Profile Summary screen provides the state of each profile. The possible states are In Use and Not In Use. The details of each profile can be viewed by clicking the profile name.


The predefined storage profiles cannot be modified. Custom storage profiles created by the user can be modified.

Note – The last profile listed is Test. The Test profile is a custom profile and is selectable by clicking the check box to the left of the profile name.

If the provided storage profiles do not meet the performance needs of a specific application, a custom profile can be created based on the parameters listed below. To create a new profile, perform the following steps:

1. Click the New button. The New Storage Profile screen is displayed.
2. Provide the storage profile parameters listed below.
3. Click OK. The new profile is displayed in the Storage Profile Summary list.

Once selected, a custom profile can be copied or deleted. The copy function allows the user to copy a custom profile from one array to another. The default profiles cannot be copied; there is no need, since they already exist on the other array by default. Additionally, default profiles cannot be deleted.

Storage Profile Parameters


- Name – The unique identifier for the storage profile. The profile name can be up to 32 characters.
- Description – A typed description of the profile. This parameter is optional.
- RAID Level – Can be 0, 1, 3, 5, or 10. This is the RAID level that will be configured across all disks within a virtual disk.

Note – RAID 1 is used for 2 drives only. RAID 10 is used if RAID 1 is chosen and more than 2 drives are specified.


Segment size – The amount of data, in kilobytes (KB), that the controller writes on a single drive in a volume before writing data on the next drive. Data blocks store 512 bytes of data and are the smallest units of storage. The size of a segment determines how many data blocks it contains; for example, an 8 KB segment holds 16 data blocks, and a 64 KB segment holds 128 data blocks.

For optimal performance in a multi-user database or file system storage environment, set the segment size to minimize the number of drives needed to satisfy an I/O request. Using a single drive for a single request leaves the other drives available to service other requests simultaneously. If the volume is in a single-user, large-I/O environment (such as multimedia), performance is maximized when a single I/O request can be serviced with a single data stripe: the segment size multiplied by the number of drives in the volume group that are used for I/O. In this case, multiple disks are used for the same request, but each disk is accessed only once.

Read ahead – Allows the controller, while it is reading and copying host-requested data blocks from disk into the cache, to copy additional data blocks into the cache. This increases the chance that a future request for data can be fulfilled from the cache. Cache read-ahead is important for multimedia applications that use sequential I/O. The cache read-ahead multiplier value is multiplied by the segment size of the volume to determine the amount of data that is read ahead; the multiplier is chosen by the controllers based on the I/O pattern of the data. Setting this value to Disabled turns off read ahead. Setting it to Enabled tells the controllers to determine the most optimal multiplier value.

Number of Disks – Can be set to a value between 1 and 30, or to the value Variable. This parameter specifies the number of disks to be grouped together in a virtual disk. For example, if you create a storage pool with a profile that has the number of disks parameter set to a specific number, all virtual disks that are part of that storage pool must have the same number of disks. If the parameter is set to Variable, you are prompted for the number of disks when storage is added to the pool.

Note – The maximum number of disk drives is 30, but the actual limitation is based on the 2-terabyte size restriction for a volume.

Note – Tray loss protection is achieved when all the drives that comprise the virtual disk are located in different expansion trays.

Disk Type – Specifies the drive type to be used for the volume: FC, SATA, or Any. Mixing drive types (SATA and Fibre Channel) within a single virtual disk is not permitted. If the available disk drives have different capacities or different speeds, the overall capacity of the virtual disk is based on the smallest-capacity drive and its performance on the slowest drive.
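The segment-size arithmetic above can be sketched in a few lines. This is illustrative arithmetic drawn from the text, not a CAM API: data blocks are 512 bytes, so a segment's block count is its size divided by 512, and a full data stripe is the segment size times the number of data drives.

```python
BLOCK_SIZE = 512  # bytes per data block, per the text

def blocks_per_segment(segment_kb):
    """Number of 512-byte data blocks held by a segment of segment_kb KB."""
    return segment_kb * 1024 // BLOCK_SIZE

def stripe_size_kb(segment_kb, data_drives):
    """Amount of data in one full stripe: segment size times data drives used for I/O."""
    return segment_kb * data_drives

print(blocks_per_segment(8))    # 16 blocks, as in the example above
print(blocks_per_segment(64))   # 128 blocks
print(stripe_size_kb(128, 4))   # 4 data drives with 128 KB segments -> 512 KB stripe
```

For a large sequential I/O workload, the goal stated above is that a single request fit within one such stripe; for multi-user random I/O, a segment at least as large as a typical request keeps each request on one drive.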

Storage Pools
An array can be divided into storage pools. Each pool is associated with a profile and acts as a container for volumes or physical storage devices that meet the storage profile. This allows users to optimize each storage pool for the type of application it will be used with.

Note – Removing a storage pool destroys all data stored in the pool and deletes all volumes that are members of the pool. The data can be restored from backup after new storage pools are added, but it is far easier to avoid the difficulty in the first place.

Volumes
A volume is a container into which applications, databases, and file systems can store data. A volume is created from a Virtual Disk that is part of a storage pool. The creation of a volume is comparable to partitioning a disk drive, in that a volume is a part of a Virtual Disk. There are several different types of volumes:

Standard volume - A standard volume is a logical structure created on a storage array for data storage. When you create a volume, initially it is a standard volume. Standard volumes are the typical volumes that users will access from data hosts.



Source volume – A standard volume becomes a source volume when it participates in a volume copy operation as the source of the data to be copied to a target volume. The source and target volumes maintain their association through a copy pair. When the copy pair is removed, the source volume reverts to a standard volume.

Target volume – A standard volume becomes a target volume when it participates in a volume copy operation as the recipient of the data from a source volume. When the copy pair is removed, the target volume reverts to a standard volume.

Replicated volume – A volume that participates in a replication set. A replication set consists of two volumes, each located on a separate array. After you create a replication set, the software ensures that the replicated volumes contain the same data on an ongoing basis.

Snapshot volume – A point-in-time image of a standard volume. The management software creates a snapshot volume when you use the snapshot feature. The standard volume on which a snapshot is based is also known as the base or primary volume.

Reserve volume – There are two types of reserve volumes: a snapshot reserve volume and a remote replication reserve volume. Every snapshot created results in the automatic creation of a snapshot reserve volume, which is used to save original data from the base volume as changes are made to the base volume. The remote replication reserve volume is automatically created when the Remote Replication feature is activated. One remote replication reserve is created for each controller; the reserve is fixed in size (128 MB) and is used to store information about the state of the Remote Replication volumes.

Volume Configuration Preparation


Creating a volume involves a number of tasks and decisions about a variety of elements in your storage configuration. On a brand-new system with nothing configured, creating a volume automatically results in the creation of a virtual disk. Prior to creating a volume, be prepared to provide the following information:

Volume name - Provide a unique name that identifies the volume.



Volume capacity – Identify the capacity of the volume in megabytes, gigabytes, or terabytes. The capacity is the amount of disk space on the virtual disk that will be used for this volume.

Storage Profile – Check the list of configured profiles to see if any contain the desired characteristics (RAID level, drive type, segment size, number of drives, and so on). If a suitable profile does not exist, create a new profile.

Storage Pool – The storage pool selected is associated with a storage profile, which determines the volume's characteristics. The management software supplies a default storage pool. This pool uses the default storage profile, which implements RAID-5 storage characteristics suitable for the most common storage environments. Other pools may also have been configured. Choose a storage pool associated with the storage profile whose attributes best suit the application.

Disk selection method for creating the virtual disk – A volume can be created on a virtual disk as long as the RAID level, the number of disks, and the disk type (either FC or SATA) of the virtual disk match the storage profile associated with the volume's pool. The virtual disk must also have enough capacity for the volume. In addition, the method of determining which virtual disk will be used to create the volume must be chosen. The following options are available:

- Automatic – The management software automatically searches for and selects a virtual disk that matches the storage profile of the selected storage pool. If none is available, it creates a new virtual disk based on the profile.
- Create Volume on an Existing Virtual Disk – Manually select the virtual disk on which to create the volume from the list of all available virtual disks. Be sure that the virtual disk you select has enough capacity for the volume.
- Create a New Virtual Disk – A new virtual disk is created by specifying the number of disks or selecting from a list of available disks. The virtual disk is then used to create the volume. Be sure that the disks you select have enough capacity for the volume, accounting for the parity used by the chosen RAID level.

Whether to map the volume now or later – You can add the volume to an existing storage domain, including the default storage domain, or create a new one by mapping the volume to a host or host group.
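The matching rule behind the Automatic option can be illustrated with a short sketch. This is a hypothetical illustration of the rule described above, not CAM's actual code: a virtual disk qualifies if its RAID level, disk count, and disk type match the pool's profile and it has enough free capacity; otherwise a new virtual disk would be created from the profile.

```python
# Hypothetical sketch of the "Automatic" virtual-disk selection rule.
def find_matching_vdisk(vdisks, profile, volume_gb):
    """Return the first virtual disk matching the profile with room for the volume."""
    for vd in vdisks:
        if (vd["raid"] == profile["raid"]
                and vd["disk_type"] == profile["disk_type"]
                and (profile["disks"] == "Variable" or vd["disks"] == profile["disks"])
                and vd["free_gb"] >= volume_gb):
            return vd
    return None  # caller would create a new virtual disk based on the profile

profile = {"raid": "RAID-5", "disk_type": "FC", "disks": 4}
vdisks = [
    {"name": "vd1", "raid": "RAID-1", "disk_type": "FC", "disks": 2, "free_gb": 100},
    {"name": "vd2", "raid": "RAID-5", "disk_type": "FC", "disks": 4, "free_gb": 50},
]
print(find_matching_vdisk(vdisks, profile, 40)["name"])  # vd2 matches; vd1's RAID level does not
```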



Once the volume or volumes have been successfully mapped to a host or host group, the storage resource is available to the host's operating system.

Volume Parameters
The following parameters can be viewed, and most can be modified dynamically, after the volume has been created.

Cache Settings

Cache is high-speed memory designed to hold recently accessed data and data that is about to be accessed. Cache works like this: when data is needed, the cache hardware and software check whether the information is already in cache. If it is, the data is served from cache; this is called a cache hit. If it is not, this is called a cache miss, and the controller has to access the disk, which is slower. The use of cache increases controller performance in three ways.

- Cache acts as a buffer, so that host and drive data transfers do not need to be synchronized.
- The data for a read or write operation from the host may already be in the cache from a previous operation, eliminating the need to access the drive itself.
- If write caching is enabled, the host can continue before the write operation actually occurs.

Read caching allows read operations from the host to be stored in controller cache memory. If a host requests data that is not in the cache, the controller reads the needed data blocks from the disk and then places them in the cache. Until the cache is flushed, all other requests for this data are fulfilled with cache data rather than from a physical disk read, increasing throughput. Read caching is enabled by default and cannot be modified. Write caching allows write operations from the host to be stored in cache memory. Unwritten volume data in cache is written to disk, or flushed, automatically every 10 seconds.


Write caching with replication allows cached data to be mirrored across redundant controllers with the same cache size. Data written to the cache memory of one controller is also written to the cache memory of the other controller. Therefore, if one controller fails, the other can complete all outstanding write operations. This option is available only when write caching is also enabled.

Write caching without batteries allows write caching to continue even if the controller batteries are discharged completely, not fully charged, or not present. If you select this parameter without a UPS for backup power, you could lose data if power fails.

Caution – This option should never be used in production environments.

Disk Scrubbing for an Individual Volume – Enabling the disk scrubbing process lets the process find media errors before they disrupt normal drive reads and writes. The media scan process scans all volume data to verify that it can be accessed, and optionally scans the volume redundancy data.

Disk Scrubbing with Redundancy scans the blocks in a RAID 3 or RAID 5 volume and checks the redundancy information for each block, or it compares data blocks on RAID 1 mirrored pairs. Errors are corrected where possible, and all errors are reported to the Event Log.

Preferred controller ownership of a volume or virtual disk designates the controller that owns it.

Modification Priority defines how much processing time is allocated for volume modification operations relative to system performance. You can increase the volume modification priority, although this might affect system performance. Operations affected by the Modification Priority include:

- Copyback
- Reconstruction
- Initialization
- Changing segment size
- Defragmentation of a virtual disk
- Expanding a virtual disk (adding more drives to an existing virtual disk)
- Dynamic Volume Expansion (DVE)
- Changing from one storage profile to another that would result in a change of RAID level or segment size

Modification Priority Rates – The Lowest priority rate favors system performance, but the modification operation takes longer. The Highest priority rate favors the modification operation, but system performance might be compromised.

Virtual Disks
During the configuration of a volume, the Common Array Manager creates a virtual disk automatically. Virtual disks are created and removed indirectly through the process of creating or deleting volumes or snapshots. A virtual disk is the RAID set that contains the specified number of disks and is created based on the RAID level assigned in the storage profile. The disk drives that participate in the virtual disk must all be of the same type, either Serial Advanced Technology Attachment (SATA) or Fibre Channel (FC). Once established, virtual disks can be modified in the following ways:

- Defragment the virtual disk. Defragmenting the virtual disk ensures that all volumes in the virtual disk are contiguous. For example, if there were three volumes in a virtual disk and the middle volume was deleted, the defragment feature moves the third volume into the place previously occupied by the second.
- Place the virtual disk offline.
- Expand the virtual disk by adding additional drives.

Summary and detail information on existing virtual disks can be displayed. Summary information about the disk drives and volumes associated with each virtual disk can also be displayed.
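The defragmentation example above (three volumes, middle one deleted) can be illustrated with a small sketch. This is a hypothetical illustration of the compaction behavior, not CAM code: surviving volumes are shifted down so that free space becomes contiguous at the end of the virtual disk.

```python
# Hypothetical illustration of virtual-disk defragmentation.
def defragment(extents):
    """extents: list of (name, size_gb) in on-disk order; None marks a freed gap.

    Returns a packed layout of (name, start_offset_gb, size_gb) tuples."""
    packed = [e for e in extents if e is not None]
    offset, layout = 0, []
    for name, size in packed:
        layout.append((name, offset, size))
        offset += size
    return layout

# Three volumes; the middle one was deleted, leaving a gap.
before = [("vol1", 100), None, ("vol3", 100)]
print(defragment(before))  # vol3 now starts right after vol1, at offset 100
```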



Administration Functions and Parameters


Once CAM has been properly installed, the user must set up the arrays for management and use. The following functions and parameters can be set via the General Configuration tab in CAM to complete the initial array configuration.

Array Name
When naming storage systems, keep the following in mind:

- There is a 30-character limit.
- All leading and trailing spaces are deleted from the name.
- Use a unique, meaningful name that is easy to understand and remember.
- A name can consist of letters and numbers; only two special characters may be used, the dash (-) and the underscore (_). No spaces are allowed.

Note The storage management software does not check for duplicate names. Verify that the name chosen is not already in use by another system.
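The naming rules above can be expressed as a minimal validity check. This is an illustrative sketch, not CAM code; it assumes the 30-character limit applies after leading and trailing spaces are removed.

```python
import re

def valid_array_name(name):
    """Check a storage system name against the stated rules:
    letters, digits, dash, and underscore only; at most 30 characters."""
    name = name.strip()  # leading and trailing spaces are deleted from the name
    return 0 < len(name) <= 30 and re.fullmatch(r"[A-Za-z0-9_-]+", name) is not None

print(valid_array_name("sales-array_01"))  # True
print(valid_array_name("bad name"))        # False: embedded spaces are not allowed
print(valid_array_name("x" * 31))          # False: over the 30-character limit
```

Note that, as the text warns, no such check guards against duplicate names; uniqueness must be verified manually.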

Default Host Type


The host type defines how the controllers in the storage array work with the particular operating system on the connected data hosts when volumes are accessed. The host type indicates an operating system (Windows 2000, for example) or a variant of an operating system (Windows 2000 running in a clustered environment). Generally, you will use this option only if all hosts connected to the storage array have the same operating system (a homogeneous host environment). If hosts with different operating systems are attached (a heterogeneous host environment), you will define the individual host types as part of creating Storage Domains.



Hot Spares
A valuable strategy to keep data available is to assign available drives in the storage system as hot spare drives. A global hot spare (GHS) is a drive within the storage system that has been defined by the user as a spare drive, to be used in the event that a drive that is part of a volume with redundancy fails. When the failure occurs and a GHS is configured, the controller begins reconstructing the data from the failed drive to the GHS drive. When the failed drive is replaced with a good drive, the copy-back process starts automatically. The volume remains online and accessible while you are replacing the failed drive, since the hot spare drive automatically substitutes for the failed drive.

Reconstruction is the process of reading data from the remaining drives and the parity drive. This data is processed through an XOR operation to recreate the missing data, which is written to the hot spare. Copy-back is the process of copying the data from the GHS drive to the drive that has replaced the failed drive. The time to reconstruct the GHS drive varies and depends on the activity of the storage system, the size of the failed volume, and the speed of the drives.

A hot spare drive is not dedicated to a specific volume group; instead it is global, which means that it can be used for any failed drive in the storage system with the same or smaller capacity. Hot spare drives are only available for RAID level 1, 3, or 5 volume groups. When creating a global hot spare, keep the following in mind:

- Select a drive with a capacity equal to or larger than the total capacity of the drive you want to cover with the hot spare. Generally, you should not assign a drive as a hot spare unless its capacity is equal to or greater than the capacity of the largest drive in the storage system.
- The maximum number of hot spare drives per system is 15.
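The hot spare guidance above can be sketched as a simple eligibility check. This is a hypothetical illustration, not CAM code: a candidate spare should be at least as large as the largest drive in the system, and the system-wide limit of 15 hot spares applies.

```python
MAX_HOT_SPARES = 15  # per-system limit stated in the text

def eligible_spares(unassigned_drives, largest_drive_gb, current_spares):
    """Return unassigned drives big enough to cover every drive in the system."""
    if current_spares >= MAX_HOT_SPARES:
        return []  # the system already has the maximum number of hot spares
    return [d for d in unassigned_drives if d["capacity_gb"] >= largest_drive_gb]

drives = [{"slot": 1, "capacity_gb": 146}, {"slot": 2, "capacity_gb": 300}]
print([d["slot"] for d in eligible_spares(drives, 300, current_spares=0)])  # [2]
```

Only the 300 GB drive qualifies: assigning the 146 GB drive would leave the 300 GB data drives uncovered.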

Storage System Cache Settings


There are cache settings that can be set at the storage system level that are in effect for all volumes in the storage system.


Cache start and stop percentages – The start value (percentage) indicates when unwritten cache data should be written to disk (flushed). The stop value (percentage) indicates when a cache flush should stop. When the cache holds the specified start percentage of unwritten data, a flush is triggered; when the cache flushes down to the specified stop percentage, the flush is stopped. For example, you can specify that the controller start flushing the cache when it reaches 80% full and stop flushing when it reaches 16% full.

Note – Unwritten writes are written to disk every 10 seconds; this is not affected by the cache settings. For best performance, keep the start and stop values equal.
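The start/stop behavior can be worked through with a small sketch. This is an illustrative model of the thresholds described above, not controller firmware: flushing begins at the start threshold and, once begun, continues until the cache drains to the stop threshold.

```python
def flush_state(cache_pct_unwritten, start=80, stop=16, flushing=False):
    """Should the controller be flushing, given the unwritten-cache fill level?"""
    if cache_pct_unwritten >= start:
        return True   # reached the start threshold: begin flushing
    if flushing and cache_pct_unwritten > stop:
        return True   # already flushing: continue until the stop threshold
    return False

print(flush_state(85))                 # True: above the 80% start value
print(flush_state(50, flushing=True))  # True: still above the 16% stop value
print(flush_state(10, flushing=True))  # False: flush stops at or below 16%
```

Setting start and stop equal, as the note recommends, makes each flush a short trim back to the threshold rather than a long drain.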

Cache block size


The cache block size is the block size used by the controller in managing the cache. For the 6540, the default cache block size is 16 KB and cannot be modified. This parameter applies to the entire storage system: the cache block size is used for all volumes in the storage system and, in redundant controller configurations, for all volumes owned by both controllers.

- 4 KB – a good choice for file system or database application use
- 16 KB – a good choice for applications that generate sequential I/O, such as multimedia

Disk Scrubbing
The Disk Scrubbing feature provides a means of detecting drive media errors before they are found during a normal read or write to the drive. It is intended to provide an early indication of an impending drive failure and to reduce the possibility of encountering a media error during host operations. The feature also provides an option to verify data/parity consistency for those volumes that include redundancy information. When enabled, it runs on all volumes in the storage system that:

- are optimal
- have no modification operations in progress
- have the Disk Scrubbing parameter enabled on the Volume Properties dialog

The disk scrubbing interval specifies the number of days over which the media scan should be run on the eligible volumes. The controller uses this duration, in conjunction with its knowledge of which volumes must be scanned, to determine a constant rate at which to perform media scan activities. This rate is maintained regardless of host I/O activity. By default this parameter is not enabled. Additional disk scrubbing options exist for individual volumes.
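The constant-rate idea can be shown with illustrative arithmetic (not CAM code): the controller spreads the scan of all eligible volume capacity evenly across the configured number of days.

```python
def scan_rate_gb_per_hour(total_eligible_gb, interval_days):
    """Constant media-scan rate needed to cover all eligible data in the interval."""
    return total_eligible_gb / (interval_days * 24)

# e.g. 7,200 GB of eligible volumes scanned over a 30-day interval
print(round(scan_rate_gb_per_hour(7200, 30), 1))  # 10.0 GB per hour
```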

Failover Alert Delay


The Failover Alert Delay specifies the delay before a critical event is logged when a volume is transferred to a non-preferred controller. A value of 0 creates a log entry immediately.

System Time
The System Time option synchronizes the storage system controller clocks with the storage management station. This option ensures that event timestamps written by controllers to the Event Log match event timestamps written to host log files. Controllers remain available during synchronization. You also have the option to manually set the date and time.

Manage Passwords
Implementing destructive commands on a storage system can cause serious damage, including data loss. Unless a password is specified, all options are available within the storage management software. If you specify a password, then any option that is destructive will be password protected. A destructive option includes any functions that change the state of the storage system such as creation of Volumes, modification of cache settings and so on. The password is stored on the storage system. Therefore a password needs to be set for each storage system in the management domain. When selecting a password, keep the following in mind:

- The maximum length is 30 characters.
- The password is case sensitive.
- Trailing spaces are not stripped from the password.

Note If you have forgotten the password, contact your customer support representative.


Knowledge Check
1. You can mix drive types (SATA and Fibre Channel) in a single tray.
True False

2. Why is it important to know what type of data you'll be working with when determining segment size?

3. What is a preferred controller?

4. What is cache? What effect does it have on a volume?

5. What does "global" refer to in relation to a hot spare?

6. What is media scan used for?

7. What is the difference between "reconstruction" and "copy-back" in relation to a hot spare?

8. Why should you name your storage system?


9. What can happen if you do not set your controller clocks to match your management station?

10. What part of the storage system takes advantage of the cache block size? What does it do with it?

11. Why is it important to keep a copy of all the support data?


Module 7

Storage Domains
Objectives
Upon completion of this module, you should be able to:

- Explain the benefits of Storage Domains
- Define Storage Domains terminology
- Describe the functionality of Storage Domains
- Calculate Storage Domain usage


What are Storage Domains?


Storage Domains allows a single physical storage system to be shared among multiple servers, regardless of server type, application, or operating system. The storage system is partitioned into several virtual storage systems, so the job of several small storage systems can be done with a single larger system. Storage Domains also manages and controls host access to Volumes.

Figure 7-1  Partitioning one physical storage system into several virtual storage systems.


Storage Domains Benefits (pre-sales)

Consolidation through Storage Domains capitalizes on the power of Sun 6x40 series storage systems and delivers significant benefits to the IT environment. By solving many of the challenges faced by IT organizations today, Storage Domains enables higher storage ROI through improved efficiency, cost avoidance, and lower TCO.

- More efficient utilization of storage capacity is possible, allowing islands of isolated server-attached storage to be eliminated or minimized. There is no need for extra capacity on each server, as available capacity can be easily allocated to servers as needed. This means that unused storage does not sit wasted on a given server.
- More efficient storage management is also a benefit, as busy storage administrators can reduce the number of individual storage systems that need to be managed. Fewer storage systems are needed to support many servers, allowing administrators to spend less time and money managing storage.
- Improved storage flexibility comes from de-coupling servers from storage and eliminating server-captive storage limitations. The typical one-to-one relationship between servers and storage can become a many-to-one or one-to-many relationship, allowing new servers or additional storage to be added quickly and easily. Existing servers and storage can also be reconfigured without the need to unload and reload data or interrupt data availability.
- Storage Domains can also play a significant role in reducing storage total cost of ownership (TCO). Through storage consolidation, servers with low capacity requirements can take full advantage of larger storage systems in a cost-effective manner, delivering greater performance, expanded functionality, and higher availability than is typically offered by the low-cost solutions designed for smaller capacity requirements.
- Storage Domains enables sharing the cost of high-performing, highly available storage over multiple servers or clusters, including servers where it was not previously economically feasible.


Storage Domains Benefits (technical)

Storage Domains provides the same functionality as LUN masking or LUN mapping. LUN masking can be accomplished at various levels: the host adapter driver/software level, the fabric switch level, or the storage system level. Storage Domains resides in the storage system and is not dependent on a particular HBA or driver.

- This allows usage of standard HBA drivers (certified drivers recommended), minimizing compatibility issues in shared server and OS environments.
- Management of the partitioning is done through the Storage Manager, providing a consistent interface across all host platforms. This eliminates the need to handle multiple operating system vendor-specific LUN masking/mapping mechanisms.
- Storage can be consolidated and centrally managed. Having a single point of storage management allows users to centralize and simplify administrative tasks, such as managing growth and allocating capacity.
- Storage Domains enables large-scale storage consolidation by providing multiple domains (up to 64) per storage system.
- In contrast to software- or HBA driver-controlled storage access management, Storage Domains protects Volumes against rogue hosts in the SAN. Volumes are not visible to or accessible by any host unless a specific mapping has been done. Any other host will not have access.
- Storage Domains' heterogeneous host support allows the storage system to tailor its behavior to the needs of the host operating systems. This provides each individual host the view of the storage system that it would experience if it had exclusive access to the storage.
- Storage Domains' controller-based implementation ensures data integrity, as Volume access is maintained at the controller level, ensuring complete data integrity in multi-host, multi-OS environments.
- Finally, logical partitioning enables administrators to choose from a range of Volumes with different characteristics to meet a server's exact needs for a given LUN.


Storage Domains Terminology


Figure 7-2  Storage Domains terminology (diagram: a Host with two HBA ports accesses Storage Partition 1, where LUN 0 and LUN 1 map to Volume 0 and Volume 1; a Host Group accesses Storage Partition 2, where LUN 0 and LUN 5 map to Volume 2 and Volume 3 on the Storage System)

Storage Domain: A storage domain consists of one or more Volumes that can be accessed by a single host or shared among hosts (known as a host group). A storage domain is created when the first Volume is mapped to the host or host group. This volume-to-LUN mapping allows you to define what host or host group will have access to a particular Volume in your storage subsystem. Hosts and host groups can only access data through assigned volume-to-LUN mappings.
Characteristics:

- Configuring domains manages access to Volumes. Hosts residing in different domains are isolated. This allows attachment of multiple hosts to a single storage system, even if the hosts are running different operating systems.
- Storage Domains can be licensed in steps: 4, 8, 16, or 64 domains. So, as many as 64 virtual systems can be created on a single storage system.
- Each domain represents a virtual storage system and consists of one or more Volumes assigned to a host or group of hosts.

Default Storage Domain: The default storage domain is a collection of hosts and Volumes that do not already belong to a defined storage domain.
Characteristics:
- All hosts in the Default Storage Domain share access to Volumes in the Default Storage Domain.
- A Volume resides in the Default Storage Domain only if it was assigned a default LUN


number during Volume creation (otherwise the Volume has a state of Free, waiting to be assigned a LUN number). A Default Storage Domain exists to include the following:

- All host groups and hosts that do not have a volume explicitly mapped to them.
- All volumes that have a default volume-to-LUN mapping assigned.
- All automatically detected initiators.

Any volumes within the default storage domain can be accessed by all hosts and host groups within that storage domain. Creating an explicit volume-to-LUN mapping for any host or host group and volume within the default storage domain causes the management software to remove the specified host or host group and volume from the default storage domain and create a new, separate storage domain.

Host Group: a label for one or more hosts that need to share access to a Volume.
Characteristics: Define a Host Group only if you have two or more hosts that will share access to the same Volumes.

Host: a label for a host that contains one or more FC ports connected to the storage system.
Characteristics: A host is a computer that is attached to the storage system and accesses various Volumes on the storage system through its host ports (host adapters).

Host Ports: A host port is a physical connection on a host adapter that resides within a host. This physical connection is represented by a world wide port name (WWPN) in the storage management software.
Characteristics:

- When the host adapter has only one physical connection (host port), the terms host port and host adapter are synonymous.
- Host ports are automatically discovered by the storage management software after the storage subsystem has been connected and powered up.
- A host port is the actual physical connection that allows a host to gain access to the Volumes in the storage system. Therefore, if you want to define specific


volume-to-LUN mappings for a particular host and create storage domains, you must define the host's associated host ports.

- Initially, all discovered host ports belong to the Default Host Group and have access to any Volumes that were automatically assigned default LUN mappings by the controller firmware during Volume creation.
- A host is identified by the WWPN of its HBA. A list of HBA WWPNs can be viewed in the Mappings View of the storage management software. This list holds all HBAs that:

- performed a fibre channel port login to the storage subsystem
- are not already configured as host ports

The WWPN of the server's HBA must be matched with the WWPN in the list. The following tools can be used to determine the WWPN of the HBAs in the server:
- HBA vendor tools such as SANsurfer, HBAnywhere, or EZFibre
- the HBA BIOS
- querying the name server of the fibre channel switch, if you know the port number the HBA is plugged into
- Solaris/Linux system logs, which contain messages showing the WWPN of discovered HBAs

Note If you move or change a host adapter in a server, remember to remap any volume-to-LUN mappings. Access to your data will be lost until this is done.

Host Type: the type of OS or OS variant (e.g., W2K or W2K Clustered) running on the host. The Host Type defines the behavior of the Volume (such as LUN reporting and error conditions).
Characteristics: The Host Type allows hosts running different operating systems to access a single storage system. A Host Type could be set to completely different operating systems (such as Solaris and Windows 2000) or variants of the same operating system (such as Windows 2000 Clustered and Windows 2000 Non-Clustered). When a Host Type is specified, the storage system tailors the behavior of the mapped Volume to the needs of the operating system.


Logical Unit Number: A LUN is the number a host uses to access a Volume on a storage system. Characteristics:

- After a Volume on the storage system is mapped to a host or group of hosts, it is presented to the host or host group with a Logical Unit Number (LUN).
- Because each host has its own LUN address space, you can use the same LUN in more than one volume-to-LUN mapping, as long as that LUN is available for use by each host within the host group. This allows the storage to present up to 256 LUNs to a single host or host group, and up to 2048 Volumes in total.
- A Volume can only be mapped to a single LUN. A LUN cannot be mapped to more than one host group or host.

Default volume-to-LUN mapping: This mapping defines hosts and Volumes that belong to the Default Host Group.
Characteristics: During Volume creation, you can specify that you want the controller to automatically assign a LUN to the Volume, or that you want to map the Volumes later (available only with the Storage Domains premium feature enabled). These default volume-to-LUN mappings can be accessed by all host groups and hosts that do not have specific volume-to-LUN mappings.

Specific volume-to-LUN mapping: This mapping defines hosts and Volumes that belong to a defined Storage Domain.
Characteristics: A specific volume-to-LUN mapping occurs when you select a defined host group or host, and assign a specific logical unit number to a Volume. This designates that only the selected host group or host has access to that particular Volume through the assigned LUN. You can define one or more specific volume-to-LUN mappings for a host group or host.

Note Volume-to-LUN mappings are dynamic, meaning that a mapping can be created and changed at any time without the need to reboot the storage system.
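The mapping rules above can be sketched as a small model. This is an illustration with invented names (not CAM's API): a Volume maps to exactly one LUN, a LUN number is unique within a host or host group's address space (up to 256 LUNs), and the same LUN number may be reused across different domains.

```python
# Hypothetical model of the volume-to-LUN mapping rules described above.
class StorageDomain:
    MAX_LUNS = 256  # per host or host group

    def __init__(self, name):
        self.name = name
        self.lun_to_volume = {}

    def map_volume(self, lun, volume):
        # A LUN number must be unique within this domain's address space.
        if lun in self.lun_to_volume:
            raise ValueError(f"LUN {lun} already in use in domain {self.name}")
        if len(self.lun_to_volume) >= self.MAX_LUNS:
            raise ValueError("LUN address space exhausted")
        self.lun_to_volume[lun] = volume

dom_a = StorageDomain("HostA")
dom_b = StorageDomain("HostGroupB")
dom_a.map_volume(0, "Volume_Marketing")
dom_b.map_volume(0, "Volume_Engineering")  # same LUN, different domain: OK
```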


Steps for creating a Storage Domain


1. Enable the Storage Domains feature. If Storage Domains is not already enabled on a storage system, a Feature Key file is needed. The Feature Key file can be created by your storage supplier by sending your supplier the Feature Enable Identifier specific to the storage system. You can obtain the Feature Enable Identifier by selecting the Administration->Licensing->List option in the SMW. After your storage supplier has sent back a Feature Key file, you can use it to enable Storage Domains.
2. Create or select the Storage Profile and Storage Pool with the appropriate characteristics for your application.
3. Create Volumes using 1-100% of the Virtual Disk. As part of the Volume creation, specify one of the following volume-to-LUN mapping settings:
- Automatic - this setting specifies that a LUN be automatically assigned to the Volume using the next available LUN within the Default Host Group. This setting grants Volume access to host groups or hosts that have no specific volume-to-LUN mappings. If Storage Domains is disabled, this is the default setting.
- Map later using the Mappings View - this setting specifies that a LUN not be assigned to the Volume during creation. It allows definition of a specific volume-to-LUN mapping and creation of storage domains using the Mappings View tab. The Volume resides in the Undefined Mappings node until a specific volume-to-LUN mapping is defined. If Storage Domains is enabled, choose this setting.
4. Map the Volume to hosts or host groups and assign LUN numbers by defining the following:
- host groups and/or hosts
- host ports for each host:
  - host port identifier (WWPN)
  - host type


  - host port name
- Volume-to-LUN mappings:
  - select a defined host group or host
  - define an additional mapping
  - select the Volume
  - select the next available LUN number

The relationship between a host (or host group) and one or more Volumes is a storage domain.


How Storage Domains Works

During Volume creation, each Volume is assigned a unique ID, referred to as the volume ID. When the user defines a storage domain, the Storage Manager wizard builds a lookup table mapping a host initiator's WWPN to a LUN. When the host sends an I/O request with its initiator WWPN and the LUN number it wishes to access, the controller verifies that the request is an allowed combination by checking the lookup table. The lookup table then returns a volume ID for that LUN, and the I/O request is completed. The lookup table is stored in the DACstore region of every configured drive as well as in the controllers' memory.

Figure 7-3  Storage Domains uses a lookup table of WWPNs to determine whether a host has access to a particular Volume (the host sends an I/O request with its port WWN and LUN number; the storage system verifies that the request is an allowed combination by checking the Storage Domains table, returns a Volume ID, and the I/O request is completed)
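The lookup described above can be sketched as a dictionary keyed by (initiator WWPN, LUN). The table contents and function name below are hypothetical illustrations, not the controller firmware's actual data structures.

```python
# Illustrative sketch of the controller's access check: a lookup table
# keyed by (initiator WWPN, LUN) resolves to an internal volume ID;
# requests with no matching entry are rejected.
lookup_table = {
    ("21:00:00:e0:8b:05:05:04", 0): "volume-id-A",
    ("21:00:00:e0:8b:05:05:04", 1): "volume-id-G",
}

def resolve_io_request(wwpn, lun):
    volume_id = lookup_table.get((wwpn, lun))
    if volume_id is None:
        return None  # host is not allowed to access this LUN
    return volume_id

assert resolve_io_request("21:00:00:e0:8b:05:05:04", 0) == "volume-id-A"
assert resolve_io_request("21:00:00:e0:8b:99:99:99", 0) is None  # rogue host
```

Because the check is keyed by the initiator's WWPN, an unmapped ("rogue") host simply finds no entry and is denied, which is the protection the technical benefits section describes.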

There are two types of mappings: default mapping or specific mapping. Default mapping means any Volumes in the default group can be accessed by any host attached to the storage, as long as that host is not already part of a domain. Host or host group specific mapping means a given server can only see and access the Volumes in its domain.

Note A host group or host can access Volumes with either default mappings or specific mappings, but not both.

When using the Storage Domains wizard, only one volume-to-LUN mapping is allowed per Volume. If more than one server needs to access a single Volume, a host group should be used. All servers in a host group can access all the Volumes in that domain.


Server clusters need to use host groups so that all the servers can share access to the same Volumes. But servers in a host group do not necessarily need to run clustering software. Keep in mind, however, that without file sharing software, all servers in a host group can access the same Volumes, which can lead to data integrity issues. A given host can be part of a host group and have its own individual mappings. Each host has its own LUN address space within a domain, meaning the same LUN number can be used in multiple volume-to-LUN mappings, just not in the same domain.

What the Host Sees

Figure 7-4  Two Volumes are mapped to Host A (diagram labels: LUN 1, 50 GB, Volume "Marketing")

If we look at what the host sees, it has a LUN number that maps to a Volume on the storage system. Each host can only see the Volumes in its domain. For instance, host A has two LUNs that map to two Volumes (shown in red). It has no idea that there is additional capacity on the storage system. The same is true for host B, which sees only its two blue Volumes, and host group C, which sees only its three green Volumes. Unmapped Volumes can be assigned to any of the domains.


What the Storage System Sees

Figure 7-5  The two Volumes mapped to Host A are on different Virtual Disks, and could even be on different drive technologies (A1 on FC and A3 on SATA); diagram labels: Port WWN, 50 GB, Volume ID

From the storage system's perspective, it maps a volume ID to the world wide port name of a host adapter. It doesn't matter where the Volume resides within the storage system. Storage Domains' volume-to-LUN mapping implementation creates valuable flexibility for the storage administrator, as any available Volume can be mapped to any attached server. So, while the individual server sees a virtual storage system that consists of only its mapped LUNs, the physical Volumes can be intermixed throughout the storage system within one or more RAID Virtual Disks. The previous diagram showed Host A had two red Volumes that comprised its domain. On this diagram, you can see those two Volumes reside on two different RAID Virtual Disks. Volume A, which is Host A's LUN 0, is in Virtual Disk A1, and Volume G, which is Host A's LUN 1, is in Virtual Disk A3.


This is a powerful feature, as it enables administrators to choose from a range of Volumes with different characteristics to meet a server's exact needs for a given LUN. Each Volume can have unique configuration settings and reside on different drive types with different RAID levels. In this example, Virtual Disk A1 could be on high-speed FC drives configured as RAID 1, and Virtual Disk A3 could be on low-cost SATA drives configured as RAID 5. This flexibility enables a range of hosts with different capacity, performance, or data protection demands to effectively share a single Sun storage system.


Storage Domains - how many domains are required?

LUNs - how do you number these LUNs?


Summary of Creating Storage Domains


1. Enable the Storage Domains premium feature.
2. Create Volumes on the storage system (during creation, check 'Map Later with Storage Domains').
3. Define the storage topology using the Mappings tab:
- host group or hosts
- host ports for each host
- host type for each host port
4. Define volume-to-LUN mappings.
5. Verify mappings from the host:
- run the OS-native utility to rescan the fibre channel loop or fabric (e.g., in Solaris devfsadm may be necessary; on Windows use Disk Management->Tools->Rescan Disks)
- the new volume should be recognized by the host (e.g., in Solaris the new volume will be listed by the format command; in Windows the new volume will be listed in Disk Management)

Now, the Volume(s) will be ready for use by the host (or host groups).


Knowledge Check
True or False

1. A storage domain is created when a host group or a single host is associated with a Volume-to-LUN mapping.
True False

2. A host group or host can access volumes with default mappings and specific mappings.
True False

3. You cannot use the same LUN number in more than one Volume-to-LUN mapping.
True False

4. A Default Host Group shares access to any volumes that were automatically assigned default LUN numbers.
True False

Multiple Choice

1. After defining the first specific Volume-to-LUN mapping for a host,
a) Host ports must be defined
b) The host type can no longer be changed
c) The LUN number cannot be used by other hosts in the topology
d) The host and host ports move out of the Default Host Group

2. In a heterogeneous environment,
a) Each host type must be set to the appropriate operating system during host port definition
b) Volumes can have more than one Volume-to-LUN number
c) Hosts with different operating systems can share volumes
d) A host can access volumes with either default mappings or specific Volume-to-LUN mappings


Customer Scenario

Mr. Customer has 3 servers and one storage system (6540). The servers: W2003 (two single-ported HBAs), Linux (one dual-ported HBA), and AIX (one single-ported HBA). The W2003 server will run the Exchange application, and the Exchange administrator has requested 2 'drives' - one for the database, the other for a log file. The Linux server will be used for software development and will require disk space for source code and development tools (2 volumes). The AIX server will run the engineering document database and will require 1 volume. The Finance Dept. has requested a 'disk' for storing employee expense statements. The application that accesses the employee expense statements will run on all the servers. First draw a diagram showing the servers and the storage, so you and the customer have the same understanding of the requested configuration. List the Host Groups that will be created:

List the Hosts that will be created under each Host Group:

List the number of Host Ports under each Host:

List the Host Types used for each Host Port:

Will the Default Host Group be empty?


How many domains will the customer require?

What needs to be done by the user or storage administrator when an HBA is replaced in one of the servers?

How many partitions would you need for the configuration below?_______



Module 8

Monitoring Performance and Dynamic Features


Objectives
Upon completion of this module, you will be able to:

- Explain the data presented by the CAM built-in Performance Monitor
- List the factors that influence storage system performance
- Explain how cache parameters impact performance
- List the Dynamic functions and explain how they impact performance

This section describes various storage system configuration options that, when utilized, will maximize data availability. Hardware redundancy keeps the storage system working if a component fails. In addition, you can use the storage management software features described in this section to implement strategies that not only protect data, but also improve performance.

40/30/30 Rule

Fine-tuning a RAID storage system begins with the recognition of the 40/30/30 performance rule, which states that 40% of the performance from the system is within the hardware set-up, 30% is found in the system software, and another 30% resides in the application software. Taken from Kreiser, Randy. I/O and Storage Tuning - An Introduction to I/O and
Storage Tuning Tips and Techniques.

Sun Confidential: Internal Only 8-193


Copyright 2007 Sun Microsystems, Inc. All Rights Reserved. Sun Services, Revision A

This section explains the 40% of hardware set-up, which consists of fine-tuning the configuration of the storage system.

(Figure: drive trays cabled in four stacks, Stacks 1-4.)

Performance Tip - Preferred ownership for Virtual Disks and Volumes: A - Stacks 1 & 3; B - Stacks 2 & 4.

Cabling

Cabling the drive trays as shown above ensures redundancy and utilizes all available back-end channels. By utilizing all channels, the maximum aggregate bandwidth can be obtained.

Performance Monitor

Use the CAM built-in Performance Monitor to monitor storage system performance in real time and to save performance data to a file for later analysis. You can monitor the performance of all volumes, an individual volume, or just the controllers. Totals for the entire array are also available; this data combines the statistics for all volumes and both controllers in an active-active controller pair. Do not run the Performance Monitor while volumes are being initialized or a modification operation is occurring, since these operations negatively impact performance.



The Performance Monitor Pages

- Performance Summary page - allows you to set performance monitoring options and view performance statistics for the array.
- Performance Statistics Summary, Volumes page - enables you to view performance statistics for all volumes.
- Performance Statistics, Controller Details page - enables you to view performance statistics for both controllers A and B.
- Performance Statistics, Volume Details page - enables you to view performance statistics for the selected volume.

The statistics displayed in each page are described in the CAM on-line help.

Fine Tuning
The following describes how some of the data fields can be used to analyze the performance of the storage array.

Total IOPS - this data field is useful for monitoring the I/O activity to a specific controller and a specific volume. This field helps you identify possible I/O "hot spots." If the I/O rate is slow on a volume, try increasing the number of drives in the virtual disk by using the DCE option.

You might notice a disparity in the Total I/Os (workload) of the controllers; for example, the workload of one controller is heavy or increasing over time while that of the other controller is lighter or more stable. In this case, consider changing the controller ownership of one or more volumes to the controller with the lighter workload. Use the volume Total I/O statistics to determine which volumes to move.

If you notice that the workload across the storage subsystem (the Total IOPS statistic on the Performance Monitoring page) continues to increase over time while application performance decreases, this might indicate the need to add additional storage arrays to your enterprise so that you can continue to meet application needs at an acceptable performance level.
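The controller-balance check described above can be sketched as a simple calculation. The function name and the 30% threshold are assumptions chosen for illustration, not CAM behavior: the idea is just to flag when one controller's share of the combined Total IOPS drifts far from 50%.

```python
# Hypothetical helper: compare Total IOPS per controller and flag an
# imbalance worth rebalancing by moving volume ownership.
def workload_imbalance(controller_a_iops, controller_b_iops, threshold=0.30):
    """True when one controller's share of the combined I/O load
    deviates from an even 50/50 split by more than threshold/2."""
    total = controller_a_iops + controller_b_iops
    if total == 0:
        return False  # idle array: nothing to rebalance
    share_a = controller_a_iops / total
    return abs(share_a - 0.5) > threshold / 2

print(workload_imbalance(9000, 3000))  # 75%/25% split: consider moving volumes
print(workload_imbalance(5200, 4800))  # roughly balanced
```

In practice you would feed this with the Total IOPS values gathered across several polling sessions, since a single sample may reflect only momentarily active applications.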


Since I/O loads are constantly changing, it can be difficult to perfectly balance the I/O load across controllers and volumes. The volumes and data accessed during your polling session depend on which applications and users were active during that time period. It is important to monitor performance during different time periods and gather data at regular intervals so you can identify performance trends.

Read Percentage - use this statistic for a volume to determine actual application behavior. If there is a low percentage of read activity relative to write activity, consider changing the RAID level of a virtual disk from RAID 5 to RAID 1 for faster performance.

Cache Hit Rate - a higher percentage is desirable for optimal application performance. There is a positive correlation between the cache hit percentage and I/O rates. The cache hit percentage of all of the volumes may be low or trending downward. This may indicate inherent randomness in access patterns, or, at the storage subsystem or controller level, it can indicate the need to install more controller cache memory if you do not have the maximum amount of memory installed. If an individual volume is experiencing a low cache hit percentage, consider enabling cache read ahead for that volume. Cache read ahead can increase the cache hit percentage for a sequential I/O workload.

Total Data Transferred - the transfer rates of the controller are determined by the application I/O size and the I/O request rate. In general, a small application I/O request size results in a lower transfer rate, but provides a faster I/O request rate and a shorter response time. With larger application I/O request sizes, higher throughput rates are possible. Understanding your typical application I/O patterns can give you an idea of the maximum I/O transfer rates that are possible for a given storage system.
Consider a storage system equipped with controllers and Fibre Channel interfaces that supports a maximum transfer rate of 100 MB (100,000 KB) per second. Suppose you typically achieve an average transfer rate of 20,000 KB per second on the storage system. This average transfer rate is a function of the typical I/O size for the applications using the storage system. (If the typical I/O size for your applications is 4K, 5,000 I/Os must be transferred per second to reach an average transfer rate of 20,000 KB.) In this case, the I/O size is small and there is system overhead associated with each I/O transferred, so you can never expect to see transfer rates that approach 100,000 KB per second. However, if your typical I/O size is large, a transfer rate within a range of 80,000 - 90,000 KB per second might be achieved.
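The arithmetic in this example reduces to transfer rate = I/O size multiplied by I/O request rate. A quick sketch using the text's own figures (the function name is illustrative):

```python
def transfer_rate_kb_per_sec(io_size_kb, iops):
    """Average transfer rate is simply the I/O size times the I/O request rate."""
    return io_size_kb * iops

# Small 4K I/Os: it takes 5,000 IOPS just to reach 20,000 KB/s.
assert transfer_rate_kb_per_sec(4, 5000) == 20000

# Large 256K I/Os: only 350 IOPS already approach the 100,000 KB/s ceiling.
assert transfer_rate_kb_per_sec(256, 350) == 89600
```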


Because of the dependency on I/O size and transmission media, the only technique you can use to improve transfer rates is to improve the I/O request rate. Use host operating system utilities to gather I/O size data so you understand the maximum transfer rates possible. Then use the tuning options available in the storage management software to optimize the I/O request rate so you can reach the maximum possible transfer rate.

Average IOPS - Factors that affect I/Os per second include the access pattern (random or sequential), I/O size, RAID level, segment size, and the number of drives in the virtual disks or storage system. The higher the cache hit rate, the higher the I/O rates. Performance improvements caused by changing the segment size can be seen in the I/Os per second statistics for a volume. Experiment to determine the optimal segment size, or use the file system or database block size.

Higher write I/O rates are experienced with write caching enabled than with it disabled. In deciding whether to enable write caching for an individual volume, consider the current and maximum I/Os per second. You should expect to see higher rates for sequential I/O patterns than for random I/O patterns. Regardless of your I/O pattern, it is recommended that write caching be enabled to maximize the I/O rate and shorten application response time.

If the Total IOPS or Average IOPS is not as expected, one factor might be host-side file fragmentation. Each access of the drive to read or write a file results in movement of the read/write heads, so minimize disk accesses by defragmenting your files. When the files are defragmented, the data blocks making up each file are contiguous, so the read/write heads do not have to travel all over the disk to retrieve the separate parts of the file. Fragmented files are particularly detrimental to the performance of a volume with sequential I/O access patterns.



Polling Interval

The frequency at which performance data is obtained from the storage array is controlled by the polling interval. Each time the polling interval elapses, the Performance Monitor re-queries the storage array for performance statistics. If you are monitoring the array via CAM, update the statistics frequently by selecting a short polling interval, for example, 3 or 5 seconds. If you are saving results to a file to examine later via SSCS, choose a slightly longer interval, for example, 30 to 60 seconds, to decrease the system overhead and the performance impact.

Note - Best Practice: Be sure to monitor during different time periods to account for variance in users and applications.
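The interval tradeoff can be sketched with a generic polling loop. This is illustrative only - `query_stats` is a hypothetical stand-in for however you query the array (CAM or an `sscs` invocation), not a real API:

```python
import time

def monitor(query_stats, polling_interval_s, duration_s):
    """Re-query performance statistics each time the polling interval elapses.

    A short interval (3-5 s) gives fresher data for live monitoring; a longer
    interval (30-60 s) reduces overhead when logging to a file for later review.
    """
    samples = []
    elapsed = 0.0
    while elapsed < duration_s:
        samples.append(query_stats())        # one poll of the storage array
        time.sleep(polling_interval_s)       # wait for the interval to elapse
        elapsed += polling_interval_s
    return samples
```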

Storage system parameters that can improve Performance


Enabling write caching

Higher write I/O rates are experienced with write caching enabled than with it disabled, especially for sequential I/O access patterns. Regardless of your I/O pattern, it is recommended that you enable write caching to maximize the I/O rate and shorten application response time.
Optimizing the Cache Hit Percentage

A higher Cache Hit Percentage is desirable for optimal application performance and is positively correlated with I/O request rate. If the Cache Hit Percentage of all volumes is low or trending downward, and you do not have the maximum amount of controller cache memory installed, this could indicate the need to install more memory.


If an individual volume is experiencing a low cache hit percentage, consider enabling cache read-ahead for that volume. Cache read-ahead can increase the Cache Hit Percentage for a sequential I/O workload. When cache read-ahead is enabled, the cache reads the requested data from the disk and also fetches additional data, usually from adjacent data blocks on the drive. This feature increases the chance that a future request for data can be fulfilled from the cache rather than requiring a disk access.
Choosing an appropriate RAID Level

RAID Level  Description                                                   Parity Overhead   Application
0           Stripes data across multiple drives                           0%                I/Os, MB/s
1           Disk data is mirrored to another drive                        100%              I/Os
1+0         Data is striped across multiple drives and mirrored
            to the same number of disks                                   50%               I/Os
3           Data is distributed across multiple drives in lockstep;
            parity information is written to one disk in the group        33% - 3.33%       MB/s
5           Drives operate independently, with data and parity
            blocks distributed across all drives in the group             33% - 3.33%       I/Os, MB/s

(For RAID 3 and RAID 5, parity consumes one drive's worth of capacity, so the overhead depends on the number of drives in the group: 33% with 3 drives down to 3.33% with 30.)
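The parity-overhead figures follow directly from the group size: RAID 3 and RAID 5 dedicate one drive's worth of capacity to parity, so the overhead is 1/n of the group. A small sketch (function name illustrative):

```python
def parity_overhead_pct(num_drives):
    """Parity consumes one drive's worth of capacity in a RAID 3/5 group."""
    return 100.0 / num_drives

assert round(parity_overhead_pct(3), 2) == 33.33   # minimum group: 3 drives
assert parity_overhead_pct(5) == 20.0              # five-drive group
assert round(parity_overhead_pct(30), 2) == 3.33   # maximum group: 30 drives
```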

Advantages and Disadvantages of RAID Levels


Raid 0

Advantages: Performance, due to parallel operation of accesses. Disadvantages: No redundancy; if one drive fails, data is lost.
Raid 1

Advantages: Performance as multiple requests can be fulfilled simultaneously.


Disadvantages: Storage costs are doubled.


Raid 1+0

Advantages: Performance as multiple requests can be fulfilled simultaneously. Disadvantages: Storage Costs are doubled.
Raid 3

Advantages: High performance for large, sequentially accessed files (image, video, and graphical data). Disadvantages: Degraded performance with 8 - 9 I/O threads, random I/Os, and smaller, more numerous I/Os.
Raid 5

Advantages: Good for reads, small I/Os, many concurrent I/Os, and random I/Os. Disadvantages: Writes are particularly demanding, because each write also requires the associated parity to be read, recalculated, and rewritten.

Dynamic RAID Migration (DRM)


The RAID level of a selected virtual disk can be changed by applying a Storage Profile with the desired RAID level to an existing virtual disk. Applying the new profile changes the RAID level of every volume in the virtual disk. Performance might be slightly affected during the operation. Important:

You cannot cancel this operation after it begins.
Your data remains available during this operation.
The virtual disk must be in an Optimal state before you can perform this operation.

If you do not have enough capacity in the virtual disk to convert to the new RAID level, you will receive an error message and the operation will not continue. If you have unassigned drives, add additional drives to the virtual disk. Then, retry the operation.



RAID Level Drive Number Constraints


There are drive number requirements for choosing a particular RAID level:

RAID 0 - Minimum of one drive, up to the maximum of 30 drives allowed in the virtual disk.

RAID 1 - Minimum of two drives; beyond that, you must have an even number of drives in the virtual disk. If you do not have an even number of drives and you have some remaining unassigned drives, use the Virtual Disk >> Add Free Capacity option to add additional drives to the virtual disk, then retry the operation.

RAID 1+0 - Minimum of four drives; beyond that, an even number of drives.

RAID 3 or 5 - Minimum of three drives in the virtual disk.

If you do not have enough capacity in the virtual disk to convert to the new RAID level, you will receive an error message and the operation will not continue. If you have unassigned drives, use the Virtual Disk >> Add Free Capacity option to add additional capacity to the virtual disk. Then retry the operation.
Number of Volumes in a Virtual Disk

Creating a virtual disk that contains only one volume is recommended. If you make a virtual disk that has more than one volume, try not to make more than three. Having more than three active volumes on a virtual disk could cause disk thrashing and thus poor I/O performance.
Choosing an Optimal Volume Modification Priority

The modification priority defines how much processing time is allocated for volume modification operations relative to system performance. You can increase the volume modification priority, although this may affect system performance. Operations affected by the modification priority include:



Copyback
Reconstruction
Initialization
Changing segment size
Defragmentation of a virtual disk
Adding free capacity to a virtual disk
Changing the RAID level of a virtual disk

Modification Priority Rates


The following priority rates are available.

Lowest
Low
Medium
High
Highest

The Lowest priority rate favors system performance, but the modification operation will take longer. The Highest priority rate favors the modification operation, but system performance may be compromised.
Choosing an Optimal Segment Size

A segment is the amount of data, in kilobytes, that the controller writes on a single drive in a volume before writing data on the next drive. Data blocks store 512 bytes of data and are the smallest units of storage. The size of a segment determines how many data blocks it contains. For example, an 8K segment holds 16 data blocks and a 64K segment holds 128 data blocks. A default segment size is set during volume creation, based on the virtual disk RAID level and the volume I/O characteristics specified in the Storage Profile in use. These two parameters should optimize the segment size appropriately for your environment. When you create the volume, you are also given the option of selecting a custom segment size rather than accepting the default segment size. Monitor your storage system and change segment size when necessary for optimal performance based on the guidelines below.
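The block arithmetic above can be checked directly (a minimal sketch; the names are illustrative):

```python
BLOCK_BYTES = 512  # a data block is the smallest unit of storage

def blocks_per_segment(segment_kb):
    """Number of 512-byte data blocks contained in a segment."""
    return segment_kb * 1024 // BLOCK_BYTES

assert blocks_per_segment(8) == 16    # an 8K segment holds 16 data blocks
assert blocks_per_segment(64) == 128  # a 64K segment holds 128 data blocks
```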


If your typical I/O size is larger than your segment size, increase your segment size in order to minimize the number of drives needed to satisfy an I/O request. This technique helps even more if you have random I/O access patterns: using a single drive for a single request leaves other drives available to simultaneously service other requests.

If you are using the volume in a single-user, large-I/O environment such as multimedia application storage, performance is optimized when a single I/O request can be serviced with a single data stripe (the segment size multiplied by the number of drives in the virtual disk used for I/O). In this case, multiple disks are used for the same request, but each disk is only accessed once. Supported segment sizes are:

8K, 16K, 32K, 64K, 128K, 256K, 512K
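For the single-user, large-I/O case described above, the target I/O size is one full stripe - the segment size multiplied by the number of drives. A sketch under that definition (names illustrative):

```python
def stripe_kb(segment_kb, num_drives):
    """Full-stripe size: one segment on each drive of the virtual disk."""
    return segment_kb * num_drives

# A five-drive virtual disk with 128K segments is best served by 640K I/Os:
# each of the five disks is accessed exactly once per request.
assert stripe_kb(128, 5) == 640
```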


Remember

You cannot cancel this operation once it begins.
Do not begin this operation unless the virtual disk is Optimal.

The controller firmware determines the segment size transitions that are allowed. Segment sizes that are inappropriate transitions from the current segment size are unavailable on the menu. Allowed transitions are typically double or half of the current segment size. For example, if the current volume segment size is 32K, a new volume segment size of either 16K or 64K is allowed.

How Long Does a Change Segment Size Operation Take?


The operation is slower than other modification operations (for example, changing RAID levels or adding free capacity to a virtual disk) because of how the data is reorganized and because of the temporary internal backup procedures that occur during the operation.


How long a Change Segment Size operation takes depends on many variables, including:

The I/O load from the hosts
The modification priority of the volume
The number of drives in the virtual disk
The number of drive channels
The processing power of the storage system controllers

If you want this operation to complete faster, you can increase the modification priority, although this may decrease system I/O performance. To change the priority, select a volume in the virtual disk, then select Volume >> Change >> Modification Priority.

Figure 8-1    Dynamic Capacity Expansion (before and after DCE)

Dynamic Capacity Expansion (DCE)


Dynamic Capacity Expansion (DCE) describes a modification operation used to increase the available free capacity on a virtual disk. The increase in capacity is achieved by selecting unassigned drives to be added to the virtual disk. Once the capacity expansion is completed, additional free capacity is available on the virtual disk for creation of other volumes. The additional free capacity could then be used to perform a Dynamic Volume Expansion (DVE) on a standard or reserve volume. This modification operation is considered to be "dynamic" because you have the ability to continually access data on virtual disks, volumes, and disk drives.

You can add one or two drives at a time; the drives can be either previously unassigned or newly inserted into an existing drive tray.

What does DCE really do?


It improves performance by providing more drive I/Os
It expands capacity by reducing the parity overhead of a virtual disk
It removes gaps from previous configurations

An added advantage of adding more spindles to a parity group is that parity becomes a smaller percentage of total capacity. For example, in a five-drive virtual disk with RAID 5, the capacity of one of the disks is dedicated to parity, spread across all five disks in the group. As a percentage, a five-drive virtual disk has a 20% overhead for parity data. As the number of disks in the group increases, the percentage of parity decreases.

Example - Before expansion: Consider a virtual disk of five 18 GB drives, with two additional unused 18 GB drives available. Volume 0 capacity is 20 GB (4 GB per drive); Volume 1 has been deleted; Volume 2 capacity is 30 GB (6 GB per drive). The remaining capacity of this virtual disk is approximately 40 GB, and the parity overhead is 20%.

Figure 8-2    Before DCE

After expansion: Two additional 18 GB drives are added, making a virtual disk of seven drives. Volume 0 capacity is still 20 GB (approximately 2.8 GB per drive), and Volume 2 capacity is still 30 GB (approximately 4.3 GB per drive). The remaining capacity of this virtual disk is now approximately 76 GB, and the parity overhead is approximately 14%.

Figure 8-3    After DCE
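The before/after figures in this example come from simple division; a sketch that reproduces them (function names illustrative; `remaining_gb` uses the example's raw-capacity accounting):

```python
def per_drive_gb(volume_gb, num_drives):
    """Capacity a volume consumes on each drive of its virtual disk."""
    return volume_gb / num_drives

def parity_pct(num_drives):
    """RAID 5 parity overhead: one drive's worth of the group."""
    return 100.0 / num_drives

def remaining_gb(num_drives, drive_gb, allocated_gb):
    """Raw capacity minus the capacity allocated to volumes."""
    return num_drives * drive_gb - allocated_gb

# Before DCE: five 18 GB drives; Volumes 0 and 2 use 50 GB in total.
assert per_drive_gb(20, 5) == 4.0                  # Volume 0: 4 GB per drive
assert parity_pct(5) == 20.0                       # 20% parity overhead
assert remaining_gb(5, 18, 50) == 40               # ~40 GB remaining

# After DCE: seven drives.
assert round(per_drive_gb(20, 7), 2) == 2.86       # Volume 0: ~2.8 GB per drive
assert round(parity_pct(7), 1) == 14.3             # ~14% parity overhead
assert remaining_gb(7, 18, 50) == 76               # ~76 GB remaining
```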



Dynamic Volume Expansion (DVE)


Dynamic volume expansion (DVE) is the ability to seamlessly increase the capacity of standard volumes and reserve volumes. DVE allows you to expand the capacity of an existing volume by either using free capacity on an existing virtual disk or by adding unconfigured capacity through dynamic capacity expansion to that virtual disk. You can expand a volume dynamically without losing access to it or to any other volumes.

Increasing the capacity of a standard volume is only supported on certain operating systems.

Windows 2000 (Dynamic Disks)
Windows NT (Basic Disks)
Linux
NetWare

If you increase the volume capacity on an unsupported host operating system, the expanded capacity is unusable, and you cannot restore the original volume capacity. However, because Snapshot reserve volumes are not mapped to hosts, their expansion is supported in all host environments. The DVE option is not available if the volume:

Has a non-optimal status
Has no free capacity on its virtual disk and no unconfigured capacity exists on the storage system

The availability of the capacity added to an existing volume depends on whether free capacity large enough for the expansion is located directly before or after the volume to modify. By nature, a volume must cover a contiguous disk capacity within a virtual disk. This leads to three possible scenarios:



Free capacity directly before or after the volume - The added capacity is available immediately.

Enough free capacity in the virtual disk, but not directly before or after the volume to expand - All volumes between the volume to expand and the free capacity have to be relocated. Once this background relocation process finishes, the added capacity becomes available.

Not enough free capacity in the virtual disk - The virtual disk must first be expanded via Dynamic Capacity Expansion; DCE is then coupled with DVE. Once the restripe (DCE) finishes, the capacity becomes available.

As soon as the free capacity is positioned properly, the extra capacity is available to the host. Example:

Increase Volume 1 by 2.5 GB:

Step 1 - Look at the space directly below Volume 1: no free capacity.
Step 2 - Look at the space directly above Volume 1: no free capacity.
Steps 3 and 4 - Continue to move outward: 1 GB of unused capacity is found above Volume 0. To position this capacity below Volume 1, Volumes 0 and 1 must move up 1 GB.
Step 5 - 1.5 GB is still required, so it is taken from the 3 GB of unused capacity at the end of the drive. To position it below Volume 1, Volumes 2 and 3 must move down 1.5 GB.

[Figure: drive layout at each step - Unused 1 GB, Volume 0, Volume 1, Volume 2, Volume 3, Unused 3 GB - showing the relocations that make the free capacity contiguous with Volume 1]
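The search-and-relocate behavior can be modeled with a toy layout of (name, GB) extents. This is purely illustrative - the controller firmware relocates data in the background - but it shows the capacity bookkeeping: free extents are consumed and the volume grows by exactly the requested amount:

```python
def expand_volume(layout, name, extra_gb):
    """Grow a volume by extra_gb, consuming free extents elsewhere in the
    virtual disk (after relocation has made them contiguous with the volume)."""
    free = sum(gb for kind, gb in layout if kind == "free")
    if free < extra_gb:
        raise ValueError("not enough free capacity: expand the virtual disk (DCE) first")
    new_layout, remaining = [], extra_gb
    for kind, gb in layout:
        if kind == name:
            new_layout.append((kind, gb + extra_gb))   # the expanded volume
        elif kind == "free":
            take = min(gb, remaining)                  # consume free capacity
            remaining -= take
            if gb - take:
                new_layout.append(("free", gb - take))
        else:
            new_layout.append((kind, gb))              # relocated, size unchanged
    return new_layout

# The example above: 1 GB unused before Volume 0, 3 GB unused after Volume 3.
layout = [("free", 1), ("vol0", 4), ("vol1", 4), ("vol2", 4), ("vol3", 4), ("free", 3)]
layout = expand_volume(layout, "vol1", 2.5)
assert ("vol1", 6.5) in layout                         # Volume 1 grew by 2.5 GB
assert sum(gb for _, gb in layout) == 20               # total capacity unchanged
```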


Dynamic volume expansion is considered an exclusive operation. Other exclusive operations include Dynamic Capacity Expansion (DCE), Dynamic Segment Sizing (DSS), and Dynamic RAID Migration (DRM). Only one such operation can be active per virtual disk. While dynamic volume expansion is in progress:

The DVE operation cannot be stopped
The affected volume(s) and virtual disk cannot be deleted


Knowledge Check

1. Explain the 40/30/30 rule.

2. A high read cache hit rate is desirable for what kind of environments?

3. What are the cache parameters that can be set for each volume?

4. Which volume cache parameters have a positive effect on performance?

5. How is cabling important for performance?

True or False

1. Increasing the segment size will always improve performance.
True False

2. The Performance Monitor can monitor specific Virtual Disks, Volumes or Controllers, but not specific disks.
True False


3. The higher the modification priority is set, the faster the I/Os are serviced, and the modification operations complete at a slower pace.
True False

4. A segment is the amount of data, in kilobytes, that the controller writes on a single drive in a Volume before writing data on the next drive.
True False

5. The Immediate Availability Feature allows reads and writes to a Volume while initialization is still taking place.
True False

6. The Dynamic Functions (DSS, DRM, DCE, and DVE) will terminate if the storage system is powered off.
True False

Multiple Choice

1. What is performance?
a) How well a storage system stores or retrieves data for various host workloads.
b) The probability that a disk sub-system is available 7 x 24.
c) The maximum ratio of read operations to write operations that a storage system can execute.
d) The number of requests that can be fulfilled simultaneously to retrieve data.

2. You would enable write cache with mirroring when
a) You need top performance
b) You need additional reliability
c) You need to have an extra copy of the volume
d) You need to have more cache

3. Applications with a high read percentage do very well using
a) RAID 0
b) RAID 1
c) RAID 3
d) RAID 5


4. The Add Free Capacity option allows the addition of capacity to a virtual disk. How many drives can be added at one time?
a) Only 1 drive at a time
b) 1 or 2 drives
c) A maximum of 2 for RAID 1 and a maximum of 3 for RAID 3 and RAID 5
d) As many drives as are available

5. If your typical I/O size is larger than your segment size,
a) Increase your segment size in order to minimize the number of drives needed to satisfy an I/O request.
b) Decrease your segment size in order to maximize the number of drives needed to satisfy an I/O request.
c) The number of drives should be equal to the segment size
d) Multiply the segment size by the number of drives in the virtual disk to optimize striping



Module 9

Integrated Data Services Snapshot


Objectives
Upon completion of this module, you should be able to:

Identify the available data services for the Sun StorageTek 6540 array
List the benefits and applications of Snapshot
Explain how Snapshot is implemented

Sun Confidential: Internal Only 9-213


Copyright 2006 Sun Microsystems, Inc. All Rights Reserved. Sun Services, Revision A

Data Services Overview

Data Services Overview


The Sun StorageTek 6540 array offers separate licenses for the integrated data services software features:

Volume Snapshot - a point-in-time (PiT) image of a volume within a Sun StorageTek 6540 array. Currently, the Sun StorageTek 6540 array supports up to 1024 snapshots.

Volume Copy - a complete (byte-by-byte or block-by-block) PiT replication of one volume to another within a storage system. Currently, the Sun StorageTek 6540 array supports up to 1024 volume copies.

Remote Replication - a real-time copy of volumes between two storage systems over a remote distance through an FC SAN. Currently, the Sun StorageTek 6540 array supports up to 64 remote mirrors.

These features are ideal for data protection surrounding backup, business continuance and disaster recovery situations. All three features require a license and can be enabled or disabled as you choose.


Snapshot

Snapshot
Snapshot creates a static or point-in-time (PiT) image of a volume. The Snapshot volume is created almost instantaneously and appears and functions as a volume as shown in Figure 9-1.

Figure 9-1    Snapshot Volume: Base, Reserve, and Snapshot

Self-Check: Why is Snapshot not a good option for disaster recovery?


Snapshot

Snapshot Terminology
To better understand snapshot, there are several terms which must be defined. This list includes:

Base volume
Snapshot volume
Reserve volume
Original data blocks


[Figure: storage system diagram contrasting logical and physical disk space - the Snapshot is a logical point-in-time image of another volume (the logical equivalent of a complete physical copy); the Base volume is the volume from which the Snapshot is created; the Reserve stores original blocks from the Base before they are overwritten with new data]

Base Volume
Definition: The base volume is the volume from which the Snapshot is taken. Characteristics: It must be a standard volume in your storage system; you cannot take a Snapshot of a Snapshot. The base volume remains online and user-accessible regardless of the state of the Snapshot. Note - Invalid base volumes include snapshot reserve volumes, snapshot volumes, mirror reserve volumes, and target volumes participating in a volume copy.


Snapshot

Snapshot Volume
Definition: A snapshot volume is a logical point-in-time image of another volume in the storage system. It is the logical equivalent of a complete physical copy, but is created much more quickly than a physical copy and requires less disk space. Taking a Snapshot is like taking a photograph, freezing the state of the data. The exact state is kept, while the source volume can be used again for reading and writing purposes. Characteristics: The Snapshot is treated the same as any other volume that can be mapped to a host. The Snapshot volume has all of the characteristics of the Base volume, such as:

Same size and RAID level as the Base volume at the time of the Snapshot
Mappable to any host
Can be read from and written to
Has a unique World Wide Device Name (WWN/WWD), which allows operating systems and applications to recognize it as an individual volume instead of as an alternative path to the Base volume

Additionally:

Snapshots can be disabled (stopped)
Snapshots can be re-created at a later time
A maximum of 4 Snapshots per Base volume can exist
The maximum number of Snapshots per storage system is one half the total number of volumes supported by the controller model
The Snapshot is virtual: it actually consists of the Base and Reserve volumes

Note - Due to its dependency on the Base volume, a Snapshot should not be used for data migration or for disaster recovery purposes to protect against a catastrophic failure of the original volume or storage system.

Reserve Volume
Definition: The Reserve volume (also called a Reserve) is a physical volume. It is used to hold the metadata (the copy-on-write map) and the original data of the blocks that have been modified on the Base volume.


Characteristics: The reserve volume:

Requires you to consider capacity allocation during volume creation, so as to retain free capacity for Reserve volumes
Can be smaller than the Base volume (defaults to 20% of the Base; minimum 8 MB), but can be expanded with Dynamic Volume Expansion
Consumes some space for metadata, but the metadata is very small (192 KB), so it does not need to be taken into account when determining the size of the Reserve
Can be expanded later via DVE regardless of OS. When you create a Snapshot, the Reserve may need to be expanded because more modifications are made to the Base volume than originally anticipated; therefore, ensure that enough free capacity exists next to the Reserve volume on the same virtual disk so it can be expanded without delay
Has a configurable warning/alert threshold
Has a configurable response when the Reserve is full
Cannot be mapped; no host I/O
One Reserve exists per Snapshot
Can reside in a different virtual disk from the Base
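The sizing rules above (20% of the Base by default, 8 MB minimum, 192 KB of metadata that can be ignored) can be expressed as a small helper. This is an illustrative sketch, not an actual management-software API:

```python
MIN_RESERVE_MB = 8  # minimum Reserve volume size

def default_reserve_mb(base_volume_mb, pct=20):
    """Default Reserve sizing: 20% of the Base volume, never below 8 MB."""
    return max(MIN_RESERVE_MB, base_volume_mb * pct // 100)

# A 1 TB Base volume defaults to roughly 200 GB of Reserve.
assert default_reserve_mb(1024 * 1024) == 209715
# Tiny volumes are clamped to the 8 MB minimum.
assert default_reserve_mb(10) == 8
```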

Original Data Blocks


Definition: The original data blocks are data blocks that are on the base volume at the time the Snapshot was taken. Characteristics: Original data blocks will continue to reside on the Base Volume if those blocks have not been modified since the Snapshot was taken. Original data blocks will be copied to the Reserve Volume if those blocks on the Base Volume are overwritten (modified) with new data after the Snapshot was taken. The Snapshot Volume is comprised of Original data blocks that are still on the Base Volume and Original data blocks that have been copied to the Reserve Volume.


Snapshot

Snapshot - Benefits (pre-sales)


Data Protection - The time and cost to back up data is a major consideration, but more important is the ability to recover data and restore it in a timely manner. A PiT image or backup is needed to protect against the most common reason to restore: user or operator error. Even sophisticated disaster recovery sites with redundantly mirrored disk systems cannot protect against the need to go back to a point before corruption occurred. Tape is much slower than disk. Additionally, a Snapshot can be taken multiple times throughout the day, whereas tape backup is typically feasible only once per day. This allows more recent information to be recovered should the need arise. Taking these multiple Snapshots by automated scripting means no operator intervention is required. Tape, being the least expensive medium, can still be used for longer-term archives.

Application Testing - The Snapshot feature expedites application testing by utilizing the Snapshot volume in a test environment. The Snapshot is taken instantaneously and uses less disk space, thus providing an efficient data set for application development and testing. This facilitates enhanced data processing and can create a competitive advantage. Upgrades and modifications can be tested on the Snapshot, which saves time compared to making full copies of the data. A disk-to-disk copy of 1 TB would take approximately one hour, whereas a Snapshot is nearly instantaneous and typically takes only 200 GB of storage (based on a typical configuration where the Snapshot Reserve is 20% of the Base volume).

Summary of benefits (pre-sales):

Improves data utilization. Snapshot enables non-production servers to access an up-to-date copy of production data for a variety of applications - including backup, application testing, or data mining - while the production data remains online and user-accessible.
Improves employee productivity by providing an immediate copy. No more waiting for large volumes of data to copy; Snapshot is nearly instantaneous.
Protects data by providing a readily available online copy that reduces restore time.
Reduces disk space requirements by using an innovative copy-on-write technology. The Snapshot image requires only a fraction of the original volume's disk space.
Provides a copy to use as the source of a backup. This allows continuous processing during the backup procedure.

Sun Confidential: Internal Only Integrated Data Services Snapshot


Copyright 2006 Sun Microsystems, Inc. All Rights Reserved. Sun Services, Revision A

9-219

Snapshot

- Provides more rapid application development through the immediate creation of a test environment, capitalizing on the ability to write to the Snapshot image.

Snapshot Benefits (technical)


Snapshot offers several benefits, including:

- Provides an instantaneous PiT image of the data.
- Utilizes only a small fraction of the original disk space.
- Enables quick, frequent, and non-disruptive backups.
- Allows the Snapshot to be read, written, and copied. The ability to write to a Snapshot is a valuable capability that opens up new techniques for creating immediate test and small data mining environments, and is a distinguishing capability of the Snapshot feature.
- Utilizes the same high-availability characteristics as the original (Base) volume, such as RAID protection and redundant path failover.
- Provides placement flexibility: the Snapshot can be mapped and made accessible to any host on the SAN. Snapshot data can be made available to secondary hosts for read and write access by mapping the Snapshot volume to those hosts.
- Is integrated into the Storage Manager software for consistent, simple management, with an easy-to-use GUI and a command line interface that can script Snapshot functions, such as automated backups from the copy.
- Supports up to four copies per volume, with a maximum of 512 copies in the 6140 storage system.
- Provides expandable Reserve capacity with full warning and statistical information.



How does Snapshot work?


A Snapshot is a Point in Time (PiT) logical view of data that is created by saving the original data to a Reserve whenever data in the Base volume is overwritten. The technique that allows a Snapshot to be created instantaneously is copy-on-write technology. Essentially, the Snapshot process creates an empty Reserve that will hold the original values of blocks that change in the Base volume after the time of Snapshot creation. Creating the Snapshot takes only as long as needed to create the empty Reserve volume and the Snapshot volume pointers - a nearly instantaneous operation. It is recommended that the Base volume be quiesced during Snapshot creation so that a stable image of that moment in time is captured.

The Snapshot is actually seen by combining the Reserve of original data with the Base volume; thus, the Snapshot presents an exact copy of the data at the moment the Snapshot was taken. This copy-on-write technology enables the instantaneous nature of the Snapshot while requiring only a fraction of the Base volume's disk space. The instant creation and small size distinguish a Snapshot from a full-volume copy, which must physically copy all of the data and can take more than an hour for a 500 GB volume.

The Snapshot appears as a volume containing the original data at the time of creation, but it is actually an image seen by combining the Reserve with the original Base volume. The Reserve, which houses the original data changed after the PiT, is the only additional disk space needed for the Snapshot. It is typically 10 to 20 percent of the Base volume, and will vary depending on the amount of change to the original data. The longer a Snapshot is active, the larger the Reserve that is needed.
The Storage Manager Snapshot wizard provides notification upon reaching a user-defined saturation point for the Reserve, notifying the administrator that the Reserve has reached a certain capacity limit and needs to be expanded. The default size of the Snapshot Reserve is 20 percent of the size of the Base volume. Note that the Snapshot depends on the Base volume in order to reconstruct the PiT image.

Note - Due to the dependency on the Base volume, a Snapshot should not be used for data migration or disaster recovery purposes to protect against catastrophic failure of the original volume or storage system.


Examples of how Snapshot works


In order to better understand the relationship between the Base, Reserve and Snapshot volumes, consider the following examples.

Standard Read No Snapshot


Figure 9-2 shows a standard read I/O from a Base volume with no Snapshot. In this example, a single Base volume with 8 data blocks exists on the storage system. At 11:00 am, the host issues a read request for block A. The data block resides on the Base volume, and the read data comes directly from there.

Figure 9-2

Standard Read No Snapshot



Snapshot is Created
At 11:05, a Snapshot is created. When the Snapshot is taken, the controller suspends I/O to the Base volume for a few seconds while it creates a physical Reserve volume to store the Snapshot metadata and copy-on-write data. The logical Snapshot volume is also created and is immediately available for mapping. See Figure 9-3.

Figure 9-3

Snapshot is Created

Notice that the Snapshot volume is identical to the Base volume at the time the Snapshot is created - in this example, data blocks A, B, C, D, E, F, G, and H. No matter how much the Base volume changes after 11:05, the Snapshot volume will look the same as the Base volume did at 11:05. The Snapshot always reflects the original data.



Read From Snapshot (1st Case)


The first use everyone thinks of for a Snapshot is backups, so the Snapshot needs to be readable. Refer to Figure 9-4.

Figure 9-4

Read From Snapshot

At 11:15, the host issues a read for data block A from the Snapshot volume. As mentioned previously, no physical data resides in the Snapshot volume. The Reserve volume combined with the original Base volume creates the logical Snapshot volume. So, when data is requested from the Snapshot volume, the disk system determines if the data is in the Base volume or the Reserve volume. In this case, data block A resides in the Base volume, so the read comes directly from there.



Write to Base
Now at 11:30, the host issues a write to the Base volume: blocks Z,Y will overwrite blocks A,B. Because blocks A,B are needed for the Snapshot volume, a copy-on-write occurs, copying blocks A,B into the Reserve volume for safekeeping. Once this is done, the write of blocks Z,Y to the Base volume completes and is acknowledged to the host. Figure 9-5 shows the write to the Base volume.

Figure 9-5

Snapshot Write to Base



Re-Write to Base
Re-writes of already-changed blocks do not require any action on the Snapshot, because the original data is already in the Reserve, as shown in Figure 9-6.

Figure 9-6

Snapshot Re-Write to Base

At 11:45, the host issues another write to the Base volume. This time block X overwrites block Z. Snapshot is more accurately described as copy-on-FIRST-write technology: no additional copy-on-write operation is needed, because the original data block (A) was already moved to the Reserve when the first write to this location took place. Subsequent writes to the same block therefore require no action. In this example, block X simply overwrites block Z, and the write is acknowledged to the host.



Read From Snapshot (2nd Case)


Reads to the Snapshot will physically be read from the Base and the Reserve volumes. The metadata map in the Reserve is consulted to determine if the data should be read from the Base because it has not changed, or read from the Reserve because the data in the Base has been modified since the time the Snapshot was taken.

Figure 9-7

Read From Snapshot

At 12:00, the host issues a read for blocks A,B,C,D from the Snapshot volume (see Figure 9-7). When block A was read at 11:15, it was still in its original location in the Base volume; since then, however, it has been overwritten. So when the host now reads blocks A,B,C,D from the Snapshot volume, the storage system uses the metadata map. Blocks A,B are in the Reserve volume, and the data is read directly from there. Blocks C,D are still on the Base volume and have not been overwritten since the Snapshot was taken, so the Snapshot (metadata map) simply points to the original blocks, and C,D are read directly from the Base volume.

Write to Snapshot
Writes to the Snapshot are stored in the Reserve, because the Snapshot is not a physical volume and therefore cannot store data. If a write is performed to the Snapshot, the preserved data in the Reserve is overwritten, and the Snapshot is no longer a PiT image of the original data.


In Figure 9-8, the host is overwriting data block D in the Snapshot volume with block M. Because the Snapshot volume is not a physical volume, block M has to go somewhere: writing to the Snapshot volume puts the data directly into the Reserve.

Figure 9-8

Write to Snapshot



Write to Base (1st Case)


Because writes (updates) to the Snapshot are stored in the Reserve, any subsequent changes to those same blocks on the Base are not saved, since the data written to the Snapshot supersedes the point-in-time data. Once write data has been issued to a Snapshot volume, it is no longer a PiT image of the Base volume, as shown in Figure 9-9.

Figure 9-9

Snapshot - Write to Base - 1st Example



Write to Base (2nd Case)


If another write is performed to a block that was already modified (such as W), there will be no change to the Reserve as this data is not original data. If a write is performed to a block that has NOT been modified (such as C), the copy-on-write procedure is performed again as in Figure 9-10.

Figure 9-10 Snapshot Write to base - 2nd Example
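The read and write cases in Figures 9-2 through 9-10 can be condensed into a small working model. The following Python sketch is purely illustrative - the class name, the dictionary-based metadata map, and the block numbering are assumptions for demonstration, not the controller's actual implementation:

```python
class SnapshotModel:
    """Toy model of copy-on-first-write Snapshot behavior (illustrative only)."""

    def __init__(self, base):
        self.base = base      # live Base volume: block number -> data
        self.reserve = {}     # Reserve: preserved originals / snapshot writes

    def write_base(self, block, data):
        # Copy-on-FIRST-write: preserve the original value only once.
        if block not in self.reserve:
            self.reserve[block] = self.base[block]
        self.base[block] = data

    def read_snapshot(self, block):
        # Metadata map: changed blocks come from the Reserve,
        # unchanged blocks fall through to the Base volume.
        return self.reserve.get(block, self.base[block])

    def write_snapshot(self, block, data):
        # Writes to the Snapshot land in the Reserve; the image is
        # no longer a pure PiT copy afterwards.
        self.reserve[block] = data


# Recreate the chapter's timeline with blocks A..H:
vol = SnapshotModel({i: ch for i, ch in enumerate("ABCDEFGH")})
vol.write_base(0, "Z")       # 11:30 - copy-on-write preserves A
vol.write_base(1, "Y")       #         and B in the Reserve
vol.write_base(0, "X")       # 11:45 - re-write: no further copy needed
print(vol.read_snapshot(0))  # -> A  (from the Reserve)
print(vol.read_snapshot(2))  # -> C  (still on the Base volume)
vol.write_snapshot(3, "M")   # write to Snapshot goes into the Reserve
print(vol.read_snapshot(3))  # -> M  (Snapshot no longer pure PiT)
print(vol.base[3])           # -> D  (Base volume is unaffected)
```

Note how the Reserve serves double duty, exactly as in the figures: it preserves originals of changed Base blocks and absorbs writes made directly to the Snapshot.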

Disabling and Recreating


While a Snapshot is enabled, a performance impact is experienced due to the copy-on-write procedure. If the Snapshot is no longer required, it can be Disabled (stopped), and the copy-on-write penalty on the Base goes away. An example of when a Snapshot should be disabled is when a backup completes. A Disabled Snapshot is retained along with its associated Reserve volume. When it is needed again, it can be Re-created (re-snapped) using the recreate option: a new point-in-time image is created in less time, because the Recreate reuses the existing Reserve volume definitions and parameters from the previous Snapshot.



Snapshot Considerations
There are several things to consider when you are creating a snapshot, including:

- Performance
- Volume failover considerations
- Handling deletion of base or snapshot repository volumes
- Maintaining a consistent state on your volumes

Performance
The copy-on-write process consumes a portion of the available performance. Systems under high load might experience performance degradation while one or more Snapshots are active. Writes in particular might be slower, because the original data must first be copied to the Reserve. Read operations from the Snapshot might also be slower than reads from the Base volume, because the metadata map in the Reserve must be consulted first.

Volume Failover
Ownership changes affect the Base volume and ALL of its Snapshots. The Base volume, Snapshot, and Reserve are all owned by the same controller. The rules that apply to the Base volume for AVT and RDAC modes also apply to the associated Snapshots and Reserves: all related volumes change controller ownership as a group.

Deleting a Base or Snapshot


Base, Snapshot, and Reserve volumes are all associated, and each Snapshot requires its own Reserve; a Snapshot cannot exist without a Base or a Reserve. When you delete a Base volume, all Snapshots of that volume and their associated Reserves are also deleted. When you delete a Snapshot, the associated Reserve is also deleted.
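The delete cascade described above can be sketched in a few lines. This is an illustrative model only; the catalog structure and function name are invented for the example:

```python
def delete_base(catalog, base):
    """Cascade delete: removing a Base volume removes all of its
    Snapshots and their Reserves (illustrative model of the rules above)."""
    for snapshot, reserve in catalog.pop(base, {}).items():
        # In the real system, CAM deletes these volumes for you.
        print("deleted snapshot", snapshot, "and reserve", reserve)


# Hypothetical catalog: Base -> {Snapshot -> Reserve}
catalog = {"base1": {"snap1": "res1", "snap2": "res2"}}
delete_base(catalog, "base1")
# catalog no longer contains base1, its snapshots, or its reserves
```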



Logically Identical Volumes


When using cloning operations like Snapshot and Volume Copy, the most important consideration is to ensure that the source volume is in a consistent state. By nature, a clone is an identical copy of its source volume: if the source volume is not consistent, the clone is also not consistent, and an inconsistent volume might be unusable for its intended purpose.

Open applications like databases, or even mounted file systems, keep files open, and flags on the physical disks usually indicate the opened status. If a Snapshot is taken at that point, the Snapshot will also indicate the opened state. For a file system or database, the application would then demand a database check or file system check, which can easily take a couple of hours. To be fully consistent, it is preferred to bring a database into quiesce or hot backup mode; this closes the transaction logs and creates redo or recovery points. A file system can be unmounted (remove the drive letter in Windows) to make it consistent.

Hosts also use buffers - reserved space in host memory that acts as a kind of cache. The most heavily used data, such as directory structures or bitmap tables, is quite often kept in buffers to improve overall disk performance. Snapshot and Volume Copy can only copy what's physically on disk, not what's stored in the host's memory.

Snapshot OS support
Since the Snapshot is identical to the Base volume, some operating systems may not support the mapping of volumes with identical signatures/block information to the same data host.

Table 9-1 Snapshot OS Support

Host Environment and Volume Type    Use Snapshot on SAME system as Base    Use Snapshot on DIFFERENT system
Win NT Regular Disk                 Supported                              Supported
Win NT Fault-Tolerant               Not Supported                          Not Supported
Win 2K Basic Disk                   Supported                              Supported


Table 9-1 Snapshot OS Support (continued)

Host Environment and Volume Type    Use Snapshot on SAME system as Base    Use Snapshot on DIFFERENT system
Win 2K Dynamic Disk                 Not Supported                          Supported
Solaris Regular Volume              Supported                              Supported
Solaris VxVM Volume                 Not Supported                          Supported
HP-UX Regular Volume                Supported                              Supported
HP-UX Logical Volume                Supported                              Supported
Irix Regular Volume                 Supported                              Supported
Irix XLV Volume                     Supported                              Supported
NetWare Volume                      Not Supported                          Supported
AIX Logical Volume                  Supported                              Supported
Linux Regular Volume                Supported                              Supported
Linux Logical Volume                Not Supported                          Supported


Managing Snapshots
The following section covers things to consider when creating and managing a snapshot using CAM.

Creating a Snapshot
Prior to creating a snapshot with the Common Array Manager, it is important to plan the following aspects:

- The names of the snapshot and reserve volumes - When a snapshot is created, it must be assigned a unique name; this simplifies identification relative to the primary volume. Each snapshot has an associated reserve volume that stores information about the data that has changed since the snapshot was created. It too must have a unique name, making it easy to identify as the reserve volume of the snapshot to which it corresponds.

- The capacity of the reserve volume - To determine the appropriate capacity, calculate both the management overhead required and the percentage of change expected on the base volume.
- The warning threshold - When a snapshot volume is created, you can specify the threshold at which the management software generates messages indicating the level of space left in the reserve volume. By default, the software generates a warning notification when data in the reserve volume reaches 50 percent of the available capacity. The percentage of space used can be monitored on the Snapshot Details page for the snapshot.
- The method used to handle snapshot failures - When a snapshot volume is created, you can determine how the management software responds when the reserve volume for the snapshot becomes full. The management software can do either of the following:

- Fail the snapshot volume. The snapshot becomes invalid, but the base volume continues to operate normally.
- Fail the base volume. Attempts to write new data to the base volume fail, which leaves the snapshot as a valid copy of the original base volume.
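The two failure-handling choices can be expressed as a simple policy table. The policy names below are hypothetical labels for illustration, not CAM identifiers:

```python
def on_reserve_full(policy):
    """Resolve what happens when the reserve volume fills, per the two
    options described above. Policy names are illustrative only."""
    if policy == "fail_snapshot":
        # Snapshot becomes invalid; the base volume keeps operating normally.
        return {"snapshot": "failed", "base": "online"}
    if policy == "fail_base_writes":
        # New writes to the base are rejected; the snapshot stays valid.
        return {"snapshot": "valid", "base": "writes rejected"}
    raise ValueError("unknown policy: " + policy)
```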

- The virtual disk selection method - A snapshot can be created on any virtual disk that has enough capacity for the snapshot. The following options are available:



- Automatic - The management software automatically searches for and selects a virtual disk that matches the necessary criteria. If there is none, and enough space is available, it creates a new virtual disk.
- Create Volume on an Existing Virtual Disk - You manually select the virtual disks on which you want to create the volume from the list of all available virtual disks. Be sure that the disks you select have enough capacity for the volume.
- Create a New Virtual Disk - Creates a new virtual disk on which to create the volume. Be sure that the virtual disk you create has enough capacity for the volume.

- The snapshot mapping option - The snapshot can be mapped to an existing host or host group. During snapshot creation, you can choose between the following mapping options:

- Map Snapshot to One Host or Host Group - This option enables you to explicitly map the snapshot to a specific host or host group, or to include the snapshot in the default storage domain.
- Do Not Map this Snapshot - This option causes the management software to automatically include the snapshot in the default storage domain.

Note - A host or host group will be available as a mapping option only if an initiator is associated with each individual host and each host is included in a host group.

Calculating Reserve Volume Capacity


When creating a snapshot, you must specify the size of the snapshot reserve volume that will store snapshot data and any other data needed during the life of the snapshot. The size is entered as a percentage of the size of the base volume, as long as that percentage does not translate to a size of less than 8 megabytes.


The capacity needed for the snapshot reserve volume varies, depending on the frequency and size of I/O writes to the base volume and how long the snapshot volume will be kept. In general, choose a large capacity for the reserve volume if you intend to keep the snapshot volume for a long period of time, or if you anticipate heavy I/O activity that will cause a large percentage of data blocks to change on the base volume during the life of the snapshot volume. Use historical performance monitoring data or other operating system utilities to help you determine typical I/O activity on the base volume.

As noted earlier, a warning is given when the snapshot reserve volume reaches a specified capacity threshold. This threshold is set at the time of snapshot volume creation; the default level is 50 percent. If you receive a warning and determine that the snapshot reserve volume is in danger of filling up before you have finished using the snapshot, increase its capacity by navigating to the Snapshot Details page and clicking Expand. If the snapshot reserve volume fills up before you have finished using the snapshot, the snapshot failure-handling conditions specify the action that will be taken.

Use the following information to determine the appropriate capacity of the snapshot reserve volume:

- A snapshot reserve volume cannot be smaller than 8 megabytes.
- The amount of write activity to the base volume after the snapshot volume has been created dictates how large the snapshot reserve volume needs to be. As the amount of write activity to the base volume increases, the number of original data blocks that need to be copied from the base volume to the snapshot reserve volume also increases.
- The estimated life expectancy of the snapshot volume contributes to determining the appropriate capacity of the snapshot reserve volume. If the snapshot volume remains enabled for a long period of time, the snapshot reserve volume runs the risk of reaching its maximum capacity.
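As a back-of-the-envelope aid, the sizing rules above (percentage of the base volume, the 8 MB floor, and the 50 percent default warning threshold) can be sketched as follows. These are hypothetical helper functions for planning, not part of CAM:

```python
def reserve_size_mb(base_mb, change_pct=20.0):
    """Reserve capacity as a percentage of the base volume, with the
    8 MB minimum noted above (hypothetical helper, not a CAM API)."""
    return max(8.0, base_mb * change_pct / 100.0)


def warning_reached(used_mb, reserve_mb, threshold_pct=50.0):
    """True once reserve usage crosses the warning threshold
    (default 50 percent, matching the text)."""
    return 100.0 * used_mb / reserve_mb >= threshold_pct
```

For example, a 1000 MB base volume with the 20 percent default yields a 200 MB reserve, while a tiny 10 MB volume is clamped to the 8 MB minimum.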



Creating a Snapshot
Figure 9-11 illustrates an overall view of creating the snapshot.

Figure 9-11 Creating Snapshot Flow Chart



Module 10

Integrated Data Services Volume Copy


Objectives
Upon completion of this module, you should be able to:

- Explain how Volume Copy is implemented
- List the benefits and applications of Volume Copy
- Explain the functions that can be performed on a Copy Pair


Volume Copy Overview


The Volume Copy premium feature is used to copy data from one volume (the source volume) to another volume (the target volume) within a single storage system. Volume Copy creates a complete physical replication of the source volume, at a suspended point in time (PiT), on the target volume.

Figure 10-1 Volume Copy

Volume Copy Terminology


To better understand Volume Copy, several terms must be defined:

- Source volume
- Target volume
- Copy pair

Source Volume
Definition: The source volume is the volume that accepts host I/O and stores application data. When a volume copy is started, data from the source volume is copied in its entirety to the target volume.


In order to maintain the data integrity of the point-in-time target, Volume Copy suspends writes to the source volume during the copy process. Therefore, to maintain normal I/O activity and ensure data availability, Volume Copy should be used in conjunction with Snapshot, where the Snapshot volume is the source volume for the Volume Copy. A source volume can be any of the following volume types:

- Standard volume
- Snapshot volume
- Base volume of a snapshot
- Target volume

One source volume can be copied to several different target volumes.

Target volume
Definition: The target volume is a standard volume that maintains a copy of the data from the source volume. The target volume will be identical to the source volume after the copy completes.

Caution - A volume copy overwrites all data on the target volume and automatically makes the target volume read-only to hosts. Ensure that you no longer need the data on the target volume, or have backed it up, before starting a volume copy.

While the copy is in progress, the target volume is not available for any host I/O. When the copy is complete, the target volume is read-only by default, but can be changed by the user to be read/write accessible. The target volume must have the same or greater capacity than the source volume, but can be of a different RAID level. A target volume can be a:

- Standard volume
- Base volume of a failed or disabled Snapshot volume
- Remote Mirror primary volume

If you choose the base volume of an active Snapshot volume as the target volume, you must disable all Snapshot volumes associated with the base volume before creating a volume copy.
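The target-volume rules above lend themselves to a simple pre-flight check. The sketch below is illustrative only (CAM performs its own validation, and the function name and parameters are invented for the example):

```python
def check_copy_target(source_mb, target_mb, target_has_active_snapshot=False):
    """Pre-flight checks for a volume copy target, following the rules
    above: equal or greater capacity, and no active Snapshots on the
    target's base volume. RAID level may differ, so it is not checked."""
    if target_mb < source_mb:
        raise ValueError("target must be the same size or larger than source")
    if target_has_active_snapshot:
        raise ValueError("disable the target's Snapshot volumes before copying")
    return True
```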



Copy Pair
The source volume and its associated target volume for a single volume copy are known as a copy pair. The copy pair relationship links the source and target volumes together. A copy pair can be:

- Stopped - The copy is stopped, but the copy pair relationship is maintained.
- Re-copied - The source is re-copied to the target, overwriting the previous data on the target.
- Removed - The copy pair relationship is severed, leaving the data on the source and target intact.

Note - A maximum of eight copy pairs can have a status of In Progress at one time. If more than eight volume copies are created, each additional copy will have a status of Pending until one of the copies with a status of In Progress completes. For example, if a ninth copy pair is defined, it is placed in a queue until one of the existing eight copy processes completes, at which time the ninth copy process begins.
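The eight-copy limit behaves like a simple admission queue. A minimal sketch, assuming statuses are assigned in request order:

```python
MAX_IN_PROGRESS = 8  # per the note above


def copy_statuses(n_requested):
    """Status assigned to each requested volume copy: the first eight run
    In Progress, later requests wait as Pending (simplified model that
    ignores completions freeing up slots)."""
    return ["In Progress" if i < MAX_IN_PROGRESS else "Pending"
            for i in range(n_requested)]
```

With nine requests, the first eight are In Progress and the ninth is Pending, matching the example in the note.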

Volume Copy Benefits (pre-sales)


There are several benefits to implementing a volume copy, including:

- Volume Copy creates an exact point-in-time clone of production data that can be mapped to a separate data host for analysis. The target volume can be mapped to any host, enabling data analysis, data mining, and application testing to run without degrading the performance of the production volume.
- Using the target volume for backup can eliminate I/O contention on the source volume compared to using a Snapshot. If application write activity to the production volume is heavy while the Snapshot is being backed up, production application performance may be affected. In these instances, Volume Copy can be used to make a separate copy of the production data faster than the Snapshot can be transferred to tape. Once the copy is complete, the Snapshot can be deleted, removing the performance overhead of its maintenance, and the copy can then be backed up. This enables the production volume to sustain a performance hit for a minimum amount of time, while still creating a complete point-in-time copy of that volume for backup.



- Data can be backed up to, and restored directly from, the target volume. This enables faster backups and restoration compared to tape.
- Volume Copy can also redistribute data or migrate data to newer, faster, or larger drives (for example, copying data from a virtual disk that uses smaller-capacity drives to a virtual disk using larger-capacity drives). You can even migrate volumes to a virtual disk with more drives, or a more effective RAID level.

Figure 10-2 Migrate Data to Larger Drives and Change RAID Level

Volume Copy Benefits (technical)

- The Volume Copy function is controller-based and resides in the storage system; it requires no host interaction or server CPU cycles to perform the copy, minimizing the performance impact on the server.
- Eight concurrent copies can be taking place at any given time.
- Volume Copy can be configured via an intuitive GUI, or scripted for automation via the CLI (command line interface).
- Volume Copy is a background operation with five priority settings that define how much of the storage system's resources are used to complete a volume copy versus fulfill I/O requests (the higher the priority, the quicker the volume copy completes, but the greater the performance impact on storage system I/O).
- The copy progress is checked every 60 seconds throughout the copy process. Interruptions while the copy is in progress (such as a controller reset or failover) are recovered by continuing from the last known progress boundary.



- As long as the copy pair relationship is maintained, the target volume can be set to read-only upon copy completion so that the point-in-time clone cannot be modified.

How Volume Copy Works


A Volume Copy wizard walks you through creating a volume copy. When configuration through the wizard is complete, the application host sends a volume copy request to the controller that owns the source volume. Data from the source volume is read and copied to the target volume. Operation in Progress icons are displayed on the source and target volumes while the volume copy is completing.

During a volume copy, the same controller must own both the source and target volumes. If both volumes do not have the same preferred controller when the volume copy starts, ownership of the target volume is automatically transferred to the preferred controller of the source volume. When the volume copy is completed or stopped, ownership of the target volume is restored to its original controller. If ownership of the source volume is changed during the volume copy, ownership of the target volume is also changed.

If the storage system controllers experience a reset while a volume copy is in progress, the request is restored during start-of-day processing and the copy continues from the point where the controllers were reset. For example, if a volume copy was 65% complete when a controller reset occurred, the copy resumes from the 65% point when start-of-day processing begins.
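The resume behavior can be modeled as a checkpointed copy loop. This Python sketch is illustrative only: copying block by block and checkpointing every N blocks stand in for the controller's 60-second progress checks and start-of-day recovery:

```python
def copy_blocks(source, checkpoint_every, start=0, reset_after=None):
    """Copy source blocks to a target, recording a progress checkpoint at
    fixed intervals (a stand-in for the 60-second progress check).
    A reset_after value simulates a controller reset mid-copy; the copy
    is later resumed from the returned checkpoint."""
    target, checkpoint = {}, start
    for i in range(start, len(source)):
        if reset_after is not None and i >= reset_after:
            return target, checkpoint      # reset: resume point is the
                                           # last recorded boundary
        target[i] = source[i]
        if (i + 1) % checkpoint_every == 0:
            checkpoint = i + 1             # progress boundary recorded
    return target, len(source)


# First attempt is interrupted at block 65 of 100; last boundary was 60:
src = list(range(100))
partial, resume = copy_blocks(src, checkpoint_every=10, reset_after=65)
# After start-of-day processing, the copy continues from the boundary
# (blocks copied earlier already reside on the target):
remaining, done = copy_blocks(src, checkpoint_every=10, start=resume)
```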



Factors Affecting Volume Copy


Several factors contribute to the storage array's performance, including:

- I/O activity
- Volume redundant array of independent disks (RAID) level
- Volume configuration (number of drives and cache parameters)
- Volume type (snapshot volumes may take more time to copy than standard volumes)

When you create a new volume copy, you will define the copy priority to determine how much controller processing time is allocated for the volume copy process and diverted from I/O activity. There are five relative priority settings. The Highest priority rate supports the volume copy at the expense of I/O activity. The Lowest priority rate supports I/O activity at the expense of volume copy speed. You can specify the copy priority:

- Before the volume copy process begins
- While it is in progress
- After it has finished (in preparation for re-copying the volume)
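The trade-off between the five priority settings can be modeled as a split of controller time between the copy job and host I/O. The weights below are invented for illustration; the actual firmware ratios are not published.

```python
# Illustrative only: the five relative priority settings modeled as the
# fraction of controller time given to the copy versus host I/O.
# These weights are assumptions, not the firmware's actual numbers.
COPY_PRIORITY = {
    "lowest": 0.1,
    "low": 0.25,
    "medium": 0.5,
    "high": 0.75,
    "highest": 0.9,
}


def schedule(priority, total_cycles):
    """Split controller cycles between the copy job and host I/O.

    A higher priority finishes the copy sooner at the expense of I/O;
    a lower priority favors I/O at the expense of copy speed.
    """
    share = COPY_PRIORITY[priority]
    copy_cycles = int(total_cycles * share)
    return copy_cycles, total_cycles - copy_cycles
```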

Volume Copy States


While a volume copy is being created and maintained, it passes through several states, both during and after the copy operation.



During the Volume Copy


Once the volume copy starts copying data from the source volume to the target volume, no read or write requests to the target volume are allowed. The volume copy goes from an idle state to an active state, displaying either an In Progress or a Pending (resources not available) status. These status conditions are displayed in the Jobs window of CAM.

Table 10-1 Volume Copy States During a Volume Copy

State         Description
In Progress   Data on the source volume is being read and then written to
              the target volume. While a volume copy has this status, the
              host has read-only access to the source volume, and read and
              write requests to the target volume are rejected until the
              volume copy has completed.
Pending       The volume copy has been created, but system resources do not
              allow it to start. While in this status, the host has
              read-only access to the source volume, and read and write
              requests to the target volume are rejected until the volume
              copy has completed.

After the Volume Copy


After the volume copy is complete, by default the target volume automatically becomes read-only to hosts, and write requests to the target volume are rejected.

State          Description
Copy Complete  The data on the source volume has been successfully copied
               to the target volume. This status is accompanied by a
               timestamp attribute.


State        Description
Copy Failed  An error occurred during the volume copy. A Failed status can
             result from a read error on the source volume, a write error
             on the target volume, or a failure on the storage system that
             affects the source volume or target volume. A critical event
             is logged in the Event Log and a Critical Alarm icon is
             displayed.
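The states above can be viewed as a small state machine. The exact transition set is an assumption drawn from the text (Stop Copy and Re-Copy are described later in this module), not a published specification.

```python
# Illustrative state model of a volume copy. The transition set is an
# assumption based on the states and operations described in this module.
TRANSITIONS = {
    "Idle":          {"Pending", "In Progress"},
    "Pending":       {"In Progress", "Stopped"},       # Stop Copy allowed
    "In Progress":   {"Copy Complete", "Copy Failed", "Stopped"},
    "Copy Failed":   {"In Progress", "Stopped"},       # Re-Copy or Stop Copy
    "Stopped":       {"In Progress"},                  # Re-Copy on the pair
    "Copy Complete": {"In Progress"},                  # Re-Copy on the pair
}


def can_transition(current, new):
    """Return True if the state machine permits moving current -> new."""
    return new in TRANSITIONS.get(current, set())
```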

Volume Copy Read/Write Restrictions


The following restrictions apply to the source volume, target volume and storage system.

- The source volume is available for read I/O activity only while a volume copy has a status of In Progress or Pending. Write requests are allowed after the volume copy is completed.
- A volume that is the source or target volume in another volume copy with a status of Failed, In Progress, or Pending cannot be used as a source or target volume.
- A volume with a status of Failed cannot be used as a source or target volume.
- A volume with a status of Degraded cannot be used as a target volume.

Table 10-2 Volume Copy Read/Write Restrictions

        During volume copy     After volume copy
I/O     Source    Target       Source    Target            Target
                                         (write-protect    (write-protect
                                         disabled)         enabled)
Read    Allowed   Rejected     Allowed   Allowed           Allowed
Write   Rejected  Rejected     Allowed   Allowed           Rejected
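Table 10-2 can be expressed as a small predicate; this is simply the table restated as code, with hypothetical parameter names.

```python
def io_allowed(op, volume_role, phase, write_protect=True):
    """Return True if a host I/O is allowed, per Table 10-2.

    op: 'read' or 'write'; volume_role: 'source' or 'target';
    phase: 'during' or 'after' the volume copy. write_protect applies
    to the target after the copy (enabled is the default on completion).
    """
    if phase == "during":
        if volume_role == "source":
            return op == "read"   # source is read-only during the copy
        return False              # target rejects all I/O during the copy
    # After the copy:
    if volume_role == "source":
        return True               # source regains full read/write access
    if write_protect:
        return op == "read"       # write-protected target is read-only
    return True                   # write-protect disabled: full access
```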

If a modification operation is running on a source volume or target volume, and the volume copy has a status of In Progress, Pending, or Failed, the volume copy will not take place. If a modification operation is running


on a source or target volume after a volume copy has been created, the modification operation must complete before the volume copy can start. If a volume copy has a status of In Progress, modification operations will not be allowed.

Creating a Volume Copy


Before a volume copy is created, a target and source volume must either already exist on the storage system or be created by the user at that point. When a volume copy is created, the data from the source volume is written to the target volume. To ensure that all the data is copied, the target volume's capacity must be equal to or greater than the source volume's capacity. After the volume copy has completed, the target volume automatically becomes read-only to hosts, and write requests to the target volume will not be permitted. Perform the following before starting a volume copy:

1. Stop all I/O activity to the source and target volumes.
2. Unmount any file systems on the source and target volumes.
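The capacity requirement above amounts to a one-line check; the helper name is hypothetical.

```python
def validate_copy_pair(source_capacity_gb, target_capacity_gb):
    """Reject a copy pair whose target cannot hold all of the source's
    data: the target must be equal to or larger than the source.
    Hypothetical helper name, for illustration."""
    if target_capacity_gb < source_capacity_gb:
        raise ValueError("target volume smaller than source volume")
    return True
```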

Functions that can be performed on a Copy Pair


The source volume and target volume for a single volume copy are known as a copy pair. The Volume Details page of either the source or the target volume can be used to:

- Re-copy a copy pair
- Stop a copy pair with a status of In Progress
- Remove copy pairs (which removes the copy pair association information from the storage system but leaves the data intact on both the source and target volumes)
- Change the volume copy priority
- Disable the target volume's Read-Only attribute



Recopying a Volume
The Re-Copy option enables you to create a new volume copy for a previously defined copy pair that has been stopped, has failed, or has completed. This option can be used for creating scheduled, complete backups of the target volume that can then be copied to tape for offsite storage. After the Re-Copy option is started, the data on the source volume is copied in its entirety to the target volume. Volume Copy does not support the ability to resynchronize the target with only the changes that occurred to the source after the copy was completed. The copy process is a full, block-by-block replication at a given point in time. It is not mirroring technology, which continuously updates the target. You can also set the copy priority for the volume copy at this time. Higher priorities allocate storage system resources to the volume copy at the expense of the storage system's performance.

Re-Copy Considerations

- This option will overwrite existing data on the target volume and make the target volume read-only to hosts.
- This option will fail all snapshot volumes associated with the target volume, if any exist.
- Only one copy pair at a time can be selected to be re-copied.
- Re-Copy is similar to Snapshot re-create: it produces a new full-size point-in-time copy using the same source and target.
- The Re-Copy option is always available except when a copy is already pending or in progress (the option is available when the copy has failed), the target is a degraded volume, or the source or target volume is also a secondary Enhanced Remote Mirror volume, an offline volume, a failed volume, or a missing volume.
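The availability conditions above can be collapsed into one predicate. Parameter names are hypothetical; the logic simply restates the considerations listed.

```python
def recopy_available(copy_status, target_degraded=False,
                     secondary_mirror=False, volume_unavailable=False):
    """Return True if Re-Copy is available for a copy pair.

    Re-Copy is blocked while a copy is Pending or In Progress (it is
    available when a copy has failed), when the target is degraded,
    when a pair member is a secondary Enhanced Remote Mirror volume,
    or when a volume is offline, failed, or missing.
    Hypothetical helper, restating the considerations above.
    """
    if copy_status in ("Pending", "In Progress"):
        return False
    if target_degraded or secondary_mirror or volume_unavailable:
        return False
    return True
```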

Stopping a Volume Copy


The Stop Copy option allows you to stop a volume copy that has a status of In Progress, Pending, or Failed. Using this option on a volume copy with a status of Failed clears the Critical Alarm status displayed for the storage system in the Current Alarms of the storage management software.


After the volume copy has been stopped, the Re-Copy option can be used to create a new volume copy using the original copy pair.

Note When the volume copy is stopped, all mapped hosts will have write access to the source volume. If data is written to the source volume, the data on the target volume will no longer match the data on the source volume.

Stopping a Volume Copy Considerations


- Stop Copy is available when the status is Pending, In Progress, or Failed.
- The operation stops, but the copy pair relationship is still maintained.

Removing Copy Pairs


The Remove Copy Pairs option allows you to remove one or more volume copy pairs. Any volume copy-related information for the source volume and target volume is removed from the Volume Properties and Storage Array Profile dialogs. After the volume copy is removed, the target volume can be selected as a source volume or target volume for a new volume copy. Removing a volume copy also permanently removes the Read-Only attribute for the target volume.

Note If the volume copy has a status of In Progress, you must stop the volume copy before you can remove the copy pair.

Removing a Copy Pair Considerations

The data on the source Volume or target Volume is not deleted.



Changing Copy Priority


The Change Copy Priority dialog allows you to set the rate at which the volume copy completes. The copy priority setting defines how much of the storage system's resources are used to complete a volume copy versus fulfill I/O requests. There are five relative settings ranging from Lowest to Highest. The Highest priority rate supports the volume copy, but I/O activity may be affected. The Lowest priority rate supports I/O activity, but the volume copy will take longer. You can change the copy priority for a copy pair:

- Before the volume copy begins
- While the volume copy has a status of In Progress
- After the volume copy has completed, when re-creating a volume copy using the Re-Copy option

Changing Copy Priority Considerations


- Available whenever a copy is Pending or In Progress.
- Enables resource balancing between the copy and I/O.

Volume Permissions
Read and write requests to the target Volume will be rejected while the volume copy has a status of In Progress, Pending, or Failed. After the volume copy has completed, the target Volume automatically becomes read-only to hosts. You may want to keep the Read-Only attribute enabled in order to preserve the data on the target Volume. Examples of when you may want to keep the Read-Only attribute enabled include:

- If you are using the target volume for backup purposes
- If you are copying data from one array to a larger array for greater accessibility
- If you are planning to use the data on the target volume to copy back to the base volume in case of a disabled or failed Snapshot volume


If you decide to allow host write access to the data on the target volume after the volume copy is completed, use the Volume Details page in CAM to disable the Read-Only attribute for the target volume.

Volume Permission Considerations

- Setting target volume permissions is not available when the copy is Pending, In Progress, or Failed.
- Target permissions toggle between read and write access to target volumes.

Note Some operating systems may report an error when accessing a read-only device, in which case the read-only access on the target volume must be disabled in order to allow read and write access. UNIX operating systems may allow access to a read-only device as long as it is mounted as a read-only device.

Volume Copy Compatibility with Other Data Services


Volume Copy can be used in conjunction with other integrated data services.

Storage Partitioning
When a volume copy is created, the target volume automatically becomes read-only to hosts, to ensure that the data is preserved. Hosts that have been mapped to a target volume will not have write access to the volume. Any attempts to write to the read-only target volume will result in a host I/O error. If you want hosts to have read and write access to the data on the target volume, use the Volume Details page in CAM to disable the read-only attribute for the target volume.



Snapshot
In order to maintain the data integrity of the point-in-time clone, Volume Copy suspends writes to the source during the copy process. If the volume being copied is large, this can result in an extended period of time without the ability to make updates or changes. Even though the source volume does support read-only access, many operating systems still try to write to the volume when it is in a read-only mode. If this happens, the server can hang. Figure 10-3 shows that copying the Snapshot creates a full point-in-time clone copy while I/O continues to the base volume.

Figure 10-3 Copying the Snapshot

Therefore, in order to maintain normal I/O activity and ensure server availability, Volume Copy must be used in conjunction with Snapshot, where the Snapshot is the source for the volume copy.


Copying the Snapshot creates the same full point-in-time clone of the desired source volume, and it does so while I/O continues to the production volume. The process is straightforward: first, a Snapshot of a volume is created; then Volume Copy uses the Snapshot volume as its source volume. Once the copy is complete, the Snapshot volume can be deleted.

The volume for which the Snapshot is created is known as the base volume and must be a standard volume in the storage system. For the volume copy feature, the base volume of a Snapshot volume is permitted to be selected as the source volume for a volume copy.

Note If you choose the base volume of a Snapshot volume as your target volume, you must disable all Snapshot volumes associated with the base volume before you can select it as a target volume. Otherwise, the base volume cannot be used as a target volume.

When you create a Snapshot volume, a Snapshot reserve volume is automatically created. The Snapshot reserve volume stores information about all the data altered since the Snapshot volume was created, and cannot be selected as a source volume or target volume in a volume copy. The Snapshot premium feature can be used in conjunction with the Volume Copy premium feature to back up data on the same storage system, and to restore the data on the Snapshot volume back to its original base volume.

Enhanced Remote Mirroring


In the Enhanced Remote Mirroring premium feature, a mirrored volume pair is created and consists of a primary volume on a primary storage system and a secondary volume on a secondary storage system.


A primary volume participating in an Enhanced Remote Mirror can be selected as the source volume for a volume copy. A secondary volume participating in an Enhanced Remote Mirror cannot be selected as either the source or target volume for a volume copy.

Note If a primary volume is selected as the source volume for a volume copy, you must ensure that the capacity of the target volume is equal to, or greater than, the usable capacity of the primary volume. The usable capacity of the primary volume is the minimum of the primary and secondary volumes' actual capacities.

If a catastrophic failure occurs on the storage system containing the primary volume (also participating in a volume copy as a source volume), the secondary volume is promoted to the primary volume role, allowing hosts to continue accessing data and business operations to continue. Any volume copies that are In Progress will fail and cannot be restarted until the primary volume is demoted back to its original secondary volume role. If the primary storage system is recovered but is unreachable due to a link failure, a forced promotion of the secondary volume will result in both the primary and secondary volumes viewing themselves in the primary volume role. If this occurs, the original primary volume and any associated volume copies will be unaffected.
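The usable-capacity rule in the note above reduces to a min() of the two mirror members' capacities; the helper name is hypothetical.

```python
def required_target_capacity(primary_capacity, secondary_capacity):
    """When an Enhanced Remote Mirror primary volume is the copy source,
    the copy target must hold the primary's usable capacity, which is
    the minimum of the primary and secondary volumes' actual capacities.
    Hypothetical helper, restating the note above."""
    return min(primary_capacity, secondary_capacity)
```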

Volume Copy OS Support


Since the volume copy target volume is identical to the source volume, some operating systems may not support the mapping of identical drives, with identical signature/block information, to the same server.

Table 10-3 Volume Copy OS Support

Host Environment and      Target on SAME system   Target on DIFFERENT
Volume Type               as source volume        system than source volume
Win NT Regular Disk       Supported               Supported
Win NT Fault-Tolerant     Not Supported           Not Supported
Win 2K Basic Disk         Supported               Supported
Win 2K Dynamic Disk       Not Supported           Supported
Solaris Regular Volume    Supported               Supported
Solaris VxVM Volume       Not Supported           Supported
HP-UX Regular Volume      Supported               Supported
HP-UX Logical Volume      Supported               Supported
Irix Regular Volume       Supported               Supported
Irix XLV Volume           Supported               Supported
NetWare Volume            Not Supported           Supported
AIX Logical Volume        Supported               Supported
Linux Regular Volume      Supported               Supported
Linux Logical Volume      Not Supported           Supported
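For scripted checks, Table 10-3 can be encoded as a lookup; this is the table restated as data, nothing more.

```python
# Table 10-3 as a lookup: (same_host_supported, different_host_supported).
OS_SUPPORT = {
    "Win NT Regular Disk":    (True,  True),
    "Win NT Fault-Tolerant":  (False, False),
    "Win 2K Basic Disk":      (True,  True),
    "Win 2K Dynamic Disk":    (False, True),
    "Solaris Regular Volume": (True,  True),
    "Solaris VxVM Volume":    (False, True),
    "HP-UX Regular Volume":   (True,  True),
    "HP-UX Logical Volume":   (True,  True),
    "Irix Regular Volume":    (True,  True),
    "Irix XLV Volume":        (True,  True),
    "NetWare Volume":         (False, True),
    "AIX Logical Volume":     (True,  True),
    "Linux Regular Volume":   (True,  True),
    "Linux Logical Volume":   (False, True),
}


def target_mapping_supported(environment, same_host):
    """Return True if the target volume may be mapped on the given host
    (same server as the source, or a different one), per Table 10-3."""
    same, different = OS_SUPPORT[environment]
    return same if same_host else different
```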



Configuring a Volume Copy


The following section covers how to create and manage a volume copy using CAM.

Configuring a Volume Copy with Common Array Manager


Before a volume copy is created, a target and source volume must either already exist on the storage system or be created by the user at that point. After the volume copy has completed, the target volume automatically becomes read-only to hosts, and write requests to the target volume will not be permitted. When creating a volume copy, be prepared to do the following:

Select a source volume from the Volume Summary page or from the Snapshot Summary page.

Note In order for a volume to be used as a target volume, its snapshots need to be either failed or disabled.

Select a target volume from the list of target volume candidates.

Caution Remember, a volume copy will overwrite all data on the target volume and automatically make the target volume read-only to hosts. After the volume copy process has finished, you can enable hosts to write to the target volume by changing the target volume's Read-Only attribute on the Volume Details page.

Note Because a target volume can have only one source volume, it can participate in one copy pair as a target. However, a target volume can also be a source volume for another volume copy, enabling you to make a volume copy of a volume copy.

Set the copy priority for the volume copy. During a volume copy, the storage array's resources may be diverted from processing I/O activity to completing a volume copy, which may affect the storage array's overall performance.



Enabling the Volume Copy Feature


To enable the volume copy feature:

1. Click Sun StorageTek Configuration Service. The Array Summary page is displayed.
2. Click the array on which you want to use the volume copy feature. The Volume Summary page for that array is displayed.
3. In the navigation pane, click Administration > Licensing. The Licensable Feature Summary page is displayed.
4. Click Add License. The Add License page is displayed.
5. Select Volume Copying from the License Type menu.
6. Enter the version number and the key digest, and click OK.

Note If you disable the volume copy feature, but volume copy pairs still exist, you can still remove the copy pair, start a copy using the existing copy pair, and change the setting of the read-only attribute for target volumes. However, you cannot create new volume copies.

Creating a Volume Copy


Before creating a volume copy, be sure that a suitable target volume exists on the storage array, or create a new target volume specifically for the volume copy. You can create a copy of a standard volume, a target volume, or a snapshot volume.

To create a volume copy of a standard volume or a target volume:

1. From the Volume Summary page, click the name of the volume whose contents you want to copy to another volume. The volume you select must be either a standard volume, a snapshot volume, or a target volume. The Volume Details page for that volume is displayed.
2. Click Copy.
3. When prompted to continue, click OK. The Copy Volume page is displayed.
4. Select the copy priority. The higher the priority you select, the more resources will be allocated to the volume copy operation at the expense of the storage array's performance.


5. Select the target volume you want from the Target Volumes list. Select a target volume with a capacity similar to the usable capacity of the source volume to reduce the risk of having unusable space on the target volume after the volume copy is created.
6. Before starting the volume copy process:
   a. Stop all I/O activity to the source and target volumes.
   b. Unmount any file systems on the source and target volumes, if applicable.
7. Review the specified information on the Copy Volume page. If you are satisfied, click OK to start the volume copy. A message confirms that the volume copy has successfully started.
8. After the volume copy process has finished:
   a. Remount any file systems on the source volume and target volume, if applicable.
   b. Enable I/O activity to the source volume and target volume.
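The ordering of steps 6 through 8 can be captured in a small wrapper. This is purely illustrative: the callables are supplied by the caller and nothing here talks to a real array or to CAM.

```python
def run_volume_copy(stop_io, unmount, start_copy, remount, enable_io):
    """Run the quiesce / copy / restore sequence from steps 6-8 above,
    returning the order in which the operations were performed.

    The five callables are hypothetical hooks the caller provides
    (e.g. host scripts); this sketch only enforces their ordering.
    """
    log = []
    stop_io();    log.append("stop_io")     # step 6a: quiesce host I/O
    unmount();    log.append("unmount")     # step 6b: unmount file systems
    start_copy(); log.append("copy")        # step 7: start the volume copy
    remount();    log.append("remount")     # step 8a: remount file systems
    enable_io();  log.append("enable_io")   # step 8b: resume host I/O
    return log
```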

Recopying a Volume Copy


A volume copy can be recopied for an existing copy pair. Recopying a volume copy is useful when you want to perform a scheduled, complete backup of the target volume that can then be moved to a tape drive for off-site storage.

Caution Recopying a volume copy will overwrite all data on the target volume and automatically make the target volume read-only to hosts. Ensure that you no longer need the data, or have backed up the data on the target volume, before recopying a volume copy.

To recopy a volume copy:

1. Click Sun StorageTek Configuration Service. The Array Summary page is displayed.
2. Click the array for which you want to recopy a volume copy. The Volume Summary page for that array is displayed.
3. Click the name of the target volume that you want to recopy. The Volume Details page for that volume is displayed.
4. Stop all I/O activity to the source volume and target volume.
5. Unmount any file systems on the source volume and target volume, if applicable.


6. Click Recopy. The management software recopies the source volume to the target volume and displays a confirmation message.
7. Remount any file systems on the source volume and target volume, if applicable.
8. Enable I/O activity to the source volume and target volume.

Changing the Copy Priority


To change the copy priority for a volume copy:

1. From the Array Summary page, click the array for which you want to change the copy priority of a volume copy. The Volume Summary page for that array is displayed.
2. Click the name of the volume for which you want to change the copy priority. The Volume Details page for the selected volume is displayed.
3. In the Copy Priority field, select the copy priority you want. The higher the priority you select, the more resources will be allocated to the volume copy operation at the expense of the storage array's performance.
4. Click OK. A confirmation message indicates that the change was successful.

Re-Copying a Volume
After starting the Re-Copy option, the data on the source volume is copied in its entirety to the target volume. Volume copy does not support the ability to resynchronize the target with only the changes that occurred to the source after the copy was completed. The copy process is a full, block-by-block replication at a given point in time. It is not mirroring technology, which continuously updates the target. You can also set the copy priority for the volume copy at this time. Higher priorities allocate storage system resources to the volume copy at the expense of the storage system's performance. There are several things to consider when performing a re-copy.

- This option will overwrite existing data on the target volume and make the target volume read-only to hosts.
- This option will fail all Snapshot volumes associated with the target volume, if any exist.
- Only one copy pair at a time can be selected to be re-copied.




- Re-Copy is similar to Snapshot re-create: it produces a new full-size point-in-time copy using the same source and target.
- The Re-Copy option is always available except when a copy is already pending or in progress (the option is available when the copy has failed), the target is a degraded volume, or the source or target volume is also a secondary Enhanced Remote Mirror volume, an offline volume, a failed volume, or a missing volume.

Stopping a Volume Copy


The Stop Copy option allows you to stop a volume copy that has a status of In Progress, Pending, or Failed. Using this option on a volume copy with a status of Failed clears the Critical Alarm status displayed for the storage system in the Current Alarms of the storage management software. After the volume copy has been stopped, the Re-Copy option can be used to create a new volume copy using the original copy pair.

Note When the volume copy is stopped, all mapped hosts will have write access to the source volume. If data is written to the source volume, the data on the target volume will no longer match the data on the source volume.

When you stop a volume copy, the following occurs:

- Stop Copy is available when the status is Pending, In Progress, or Failed.
- The operation stops, but the copy pair relationship is still maintained.

Removing Copy Pairs


The Remove Copy Pairs option allows you to remove one or more volume copy pairs. Any volume copy-related information for the source volume and target volume is removed from the Volume Properties and Storage Array Profile dialogs. After the volume copy is removed, the target volume can be selected as a source volume or target volume for a new volume copy. Removing a volume copy also permanently removes the Read-Only attribute for the target volume.



Note If the volume copy has a status of In Progress, you must stop the volume copy before you can remove the copy pair. The data on the source volume or target volume is not deleted when you remove a copy pair.

Changing Copy Priority


The Change Copy Priority dialog allows you to set the rate at which the volume copy completes. Changing the copy priority:

Is available whenever a copy is Pending or In Progress. Enables resource balancing between copy and I/O.

Volume Permissions
Read and write requests to the target volume will be rejected while the volume copy has a status of In Progress, Pending, or Failed. After the volume copy has completed, the target volume automatically becomes read-only to hosts. You may want to keep the Read-Only attribute enabled in order to preserve the data on the target volume. Examples of when you may want to keep the Read-Only attribute enabled include:

- If you are using the target volume for backup purposes
- If you are copying data from one virtual disk to a larger virtual disk for greater accessibility
- If you are planning to use the data on the target volume to copy back to the base volume in case of a disabled or failed Snapshot volume

If you decide to allow host write access to the data on the target volume after the volume copy is completed, use the Volume Details page in CAM to disable the Read-Only attribute for the target volume. The following are things to consider when changing the volume permissions:

- Setting target volume permissions is not available when the copy is Pending, In Progress, or Failed.
- Target permissions toggle between read and write access to target volumes.


Module 11

Integrated Data Services Remote Replication


Objectives
Upon completion of this module, you should be able to:

- Explain how Replication is implemented in the storage manager
- Describe the benefits and applications of Replication
- Differentiate between Synchronous and Asynchronous mirroring modes



Remote Replication Overview


Remote Replication provides the ability to maintain synchronous or asynchronous online, real-time copies of data between two Sun StorageTek 6540 arrays over a remote distance. Ideally, Remote Replication is used for disaster recovery: in the event of a disaster, all data is mirrored to an alternate site, which comprises storage components and workstations. From a business continuance perspective, critical data can be mirrored to a remote location to enable the continuity of critical business activities such as billing, ordering, and production. When a disaster occurs at one site, the secondary or backup site takes over responsibility for computer services, so users and hosts that were previously mapped to the primary storage system can have access to the secondary storage system. Essentially, this is a good Business Continuity and Disaster Recovery (BCDR) plan, where a robust business continuance strategy keeps essential services operational during and after a failure or a disaster. A business continuance strategy is diagrammed in Figure 11-1.

Figure 11-1 Business Continuance Strategy



Note The terms local and remote are relative. Support for cross mirrors between storage systems means a given system can be considered both local and remote with both primary and secondary volumes.

Remote Replication Terminology


Figure 11-2 Example Remote Replication

To better understand remote replication, several terms must be defined:

Primary volume
Secondary volume
Mirror reserve volume
Replication set
Synchronous mirroring
Asynchronous mirroring
Asynchronous mirroring with write consistency

Primary Volume
The volume residing in the primary or local storage system is the primary volume. The primary volume accepts host I/O and stores application data. The data on a primary volume is replicated to the secondary volume.


When a mirror pair is first created, data from the primary volume is copied in its entirety to the secondary volume. This process is known as full synchronization and is directed by the controller that owns the primary volume. During a full synchronization, the primary volume remains fully accessible for all read and write host I/O.

Secondary Volume
The volume residing in the secondary or remote storage system is the secondary volume. This volume maintains a mirror (or copy) of the data on its associated primary volume. The controller that owns the secondary volume receives remote writes for the volume from the controller that owns the primary volume; it does not accept host write requests.

The secondary volume can be mapped to a host for use in disaster recovery situations; however, only read host I/O is allowed. The secondary volume remains read-only to host applications while mirroring is underway. It can be used for backups and analysis, so its capacity is not wasted while waiting for a disaster to occur.

In the event of a disaster or catastrophic failure of the primary volume, a role reversal can be performed to promote the secondary volume to a primary role. Hosts are then able to access the newly promoted volume, and business operations can continue.

The secondary volume must be of equal or greater size than the primary; RAID level and drive type do not matter. The secondary volume can also be the base volume for a Snapshot.

Mirror Reserve Volume


A mirror reserve volume is a special volume in the storage system created as a resource for the controller to store mirroring information, such as specifics about remote writes that have not yet been written to the secondary volume. The controller can use this information to recover from controller resets and accidental powering-down of the storage system.

Two mirror reserve volumes are required per storage system, one for each controller. Each mirror reserve volume is 128 Mbyte (256 Mbyte total per storage system). Unlike Snapshot reserves, mirror reserves are not required for each mirrored pair, because actual read/write data is not stored in the mirror reserve.


The delta log and the FIFO log are kept in the mirror reserve.

The delta log tracks changes to the primary volume that have not yet been replicated to the secondary volume. If an interruption occurs in the communication between the two storage systems, the delta log can be used to re-synchronize the data between the secondary and primary volumes. The delta log is a bit map (maximum 1 million bits per mirror), where each bit represents a section of the primary volume that was written by the host but has not yet been copied to the secondary volume. The number of blocks represented by a single bit is computed based on the usable capacity of the primary volume. The minimum amount of data represented by a single bit is 64 Kbyte, that is, 128 512-byte blocks. For example, for a 2-TB volume, each bit represents a data range of 2 Mbyte.

The FIFO log is used during the Write Consistency mirroring mode to ensure writes are completed in the same order on both the primary and secondary volumes.

Figure 11-3 illustrates the replication bit map.

Figure 11-3 Replication Bitmap
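The granularity rule above can be sketched as a small calculation. This is an illustrative helper, not controller firmware; the function name is hypothetical, and it assumes the "1 million bits" ceiling is 2^20, which matches the 2-TB-to-2-Mbyte example in the text.

```python
# Illustrative sketch of the delta-log granularity rule (hypothetical
# helper; assumes the "1 million bits" ceiling is 2**20, which matches
# the 2-TB -> 2-Mbyte example above).

MAX_BITS = 2**20               # ~1 million bits per mirror
MIN_BYTES_PER_BIT = 64 * 1024  # 64 Kbyte = 128 blocks of 512 bytes

def bytes_per_bit(volume_bytes: int) -> int:
    """Data range covered by one delta-log bit for a given volume size."""
    needed = -(-volume_bytes // MAX_BITS)    # ceiling division
    units = -(-needed // MIN_BYTES_PER_BIT)  # round up to whole 64-Kbyte units
    return units * MIN_BYTES_PER_BIT

print(bytes_per_bit(2**41) // 2**20)  # 2-TB volume -> 2 (Mbyte per bit)
print(bytes_per_bit(2**30) // 2**10)  # 1-GB volume -> 64 (Kbyte per bit)
```

The larger the primary volume, the coarser each bit, so the bitmap never outgrows its fixed space in the mirror reserve.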

Replication Set
When you create Remote Replication, a mirrored pair is created that consists of one primary volume at a local storage system and one secondary volume at a remote storage system. A replication set has the following characteristics:

A volume can belong to only one mirrored pair at any given time; a single primary volume cannot have two secondary volumes.
A mirror pair (or the mirror relationship) is on a volume-per-volume basis, not on a file basis.
Only standard volumes may be included in a Replication Set.



A maximum of 32 Replication Sets are permitted on each storage system.
The primary volume is the volume that accepts host I/O.
When the Replication Set is first created, the controller that owns the primary volume copies all of the data from the primary volume to the secondary volume. This is a full synchronization.
Both volumes in a mirror pair must be owned by the same controller in each storage system. Volume ownership is determined by the owner of the primary volume. An ownership change on the primary volume automatically causes a corresponding ownership change on the associated secondary volume on the next I/O. AVT and failover controller ownership change requests to the secondary volume are rejected.

Figure 11-4 shows a replication set diagram.

Figure 11-4 Replication Set

Synchronous Mirroring
A write I/O from a host must be written to both the primary and secondary volumes before the I/O is reported as complete. When the controller owner of the primary volume receives a write request from a host, the controller first logs the information about the write to the mirror reserve, then writes the data to the primary volume. The controller then initiates a remote write operation to copy the affected data blocks from the primary to the secondary volume.


After the host write request has been written to the primary volume and the data has been successfully copied to the secondary volume, the controller removes the log entry from the mirror reserve and sends an I/O completion status back to the data host. This mirroring mode is called synchronous because the controller does not send the I/O completion to the host until the data has been copied to both the primary and secondary volumes.

When a read request is received from a host system, the controller that owns the primary volume handles the request normally. No communication takes place between the primary and secondary storage systems.

Synchronous mirroring provides continuous mirroring between primary and secondary volumes to ensure absolute synchronization. Application performance is impacted because an I/O is not complete until it has made the round-trip journey to the secondary storage system. Figure 11-5 shows a diagram of synchronous mirroring.

Figure 11-5 Synchronous Mirroring
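As a conceptual illustration, the synchronous sequence above can be written out in a few lines of Python. The class and function names here are hypothetical, not the controller firmware; only the ordering (log, write primary, copy to secondary, clear log, then acknowledge) follows the text.

```python
# Conceptual sketch of the synchronous write path (hypothetical names;
# the ordering of steps follows the description in the text).

class Volume:
    def __init__(self):
        self.blocks = []
    def write(self, data):
        self.blocks.append(data)

class MirrorReserve:
    def __init__(self):
        self.pending = []
    def log(self, data):
        self.pending.append(data)   # record the in-flight remote write
        return data
    def remove(self, entry):
        self.pending.remove(entry)

def synchronous_write(data, reserve, primary, secondary):
    entry = reserve.log(data)       # 1. log the write to the mirror reserve
    primary.write(data)             # 2. write the primary volume
    secondary.write(data)           # 3. remote write; host is still waiting
    reserve.remove(entry)           # 4. remote copy confirmed, clear the log
    return "complete"               # 5. only now is the host acknowledged

primary, secondary, reserve = Volume(), Volume(), MirrorReserve()
synchronous_write(b"block-0", reserve, primary, secondary)
print(primary.blocks == secondary.blocks, reserve.pending)  # -> True []
```

The host-visible latency therefore includes the full round trip to the secondary system, which is the performance cost the text describes.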

Asynchronous Mirroring
Host write requests are written to just the primary volume before the controller sends an I/O completion status back to the host system, regardless of whether the data has been successfully copied to the secondary storage system. The asynchronous write mode offers faster I/O performance but does not guarantee that the copy has been successfully completed before processing the next write request.


In asynchronous mirroring, the primary storage system does not wait for the I/O to complete to the secondary storage system before sending an I/O completion status to the server. Therefore, there can be multiple outstanding I/Os to the secondary storage system. Remote Replication supports up to 128 outstanding I/Os per mirror pair. After the 128th I/O has been issued to the secondary volume, the primary volume suspends any new I/Os until one of the outstanding I/Os to the secondary volume has completed and freed space in the queue for pending I/Os.

Asynchronous mirroring offers the following benefits:

Queues remote writes to offer faster host I/O performance, thereby improving response to applications using the primary volume.
Can effectively replicate over longer distances, since longer latency times are acceptable.
Allows the secondary volume to fall behind during peak times.

The following are things to consider when dealing with an asynchronous mirror:

The remote site may not have all of the latest data.
Non-peak times are needed for the secondary volume to catch up with the primary volume.
The maximum number of outstanding write requests is 128 per mirror.

Figure 11-6 illustrates Asynchronous remote replication.

Figure 11-6 Asynchronous Remote Replication
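The 128-deep outstanding-I/O queue described above can be sketched as follows. This is a hypothetical illustration (the class and method names are the author's own); it shows only the queueing behavior: the host is acknowledged immediately, remote copies drain later, and new writes stall once 128 remote I/Os are outstanding.

```python
# Hypothetical sketch of the asynchronous queue behavior: immediate host
# acknowledgement, with new writes suspended at 128 outstanding remote I/Os.
from collections import deque

MAX_OUTSTANDING = 128

class AsyncMirror:
    def __init__(self):
        self.outstanding = deque()   # remote writes not yet completed
        self.secondary = []

    def host_write(self, data):
        if len(self.outstanding) >= MAX_OUTSTANDING:
            return "suspended"       # must wait for a remote I/O to finish
        self.outstanding.append(data)
        return "complete"            # host acknowledged before the remote copy

    def remote_io_done(self):
        # Oldest outstanding remote write finishes and frees a queue slot.
        self.secondary.append(self.outstanding.popleft())

m = AsyncMirror()
for i in range(MAX_OUTSTANDING):
    m.host_write(i)
print(m.host_write(128))   # -> suspended
m.remote_io_done()         # one remote write completes...
print(m.host_write(128))   # -> complete
```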



Asynchronous Remote Replication with Write Consistency


Write consistency is a configuration option that ensures writes to the remote storage system complete in the same order as on the local storage system. This method of remote replication is critical for maintaining data integrity in multi-volume applications, such as databases, by eliminating out-of-order updates at the remote site that can cause logical corruption. The write consistency option is available for any primary and secondary volumes participating in an asynchronous remote replication relationship.

When asynchronous remote replication mode is selected, write requests to the primary volume are completed by the controller without waiting for an indication of a successful write to the secondary storage system. As a result, write requests are not guaranteed to be completed in the same order on the secondary volume as they are on the primary volume. If the order of write requests is not retained, data on the secondary volume may become inconsistent with the data on the primary volume and could jeopardize any attempt to recover data if a disaster occurs on the primary storage system.

When the write consistency option is selected for multiple volumes on the same storage system, the order in which data is synchronized is preserved. Selecting the write consistency option for a single mirror pair does not make sense, because the process by which data is replicated does not change; more than one mirror pair must have the write consistency option selected for the replication process to change. When multiple replication pairs exist on the same storage systems and have been configured for asynchronous mirroring with write consistency, they are considered to be an interdependent group known as a write consistency group. All mirror pairs in the write consistency group maintain the same order when sending writes from the primary volume to their corresponding secondary volume.
The data on the secondary volume cannot be considered fully synchronized until all mirror pairs in the write consistency group are synchronized. If one mirror pair in a write consistency group becomes unsynchronized, all of the mirrored pairs in the write consistency group become unsynchronized, and any write activity to the remote site is prevented to protect the write consistency of the remote data set. In short, an asynchronous mirror with write consistency maintains data integrity in multi-volume databases.


There are, however, some things to consider:

Asynchronous mirroring with the write consistency option has decreased performance compared to plain asynchronous mirroring, because I/Os for all the mirror pairs in the write consistency group are serialized.
There is only one write consistency group per storage system.

Figure 11-7 shows a diagram of Asynchronous remote replication with write consistency.

Figure 11-7 Asynchronous Remote Replication With Write Consistency
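The role of the shared FIFO log can be illustrated with a short sketch. This is a hypothetical simplification (class and method names are the author's own): all mirror pairs in the group feed one FIFO, so remote writes replay in exactly the order the hosts issued them, even across volumes.

```python
# Simplified sketch of a write consistency group: one shared FIFO log
# preserves cross-volume write order to the remote site (hypothetical names).
from collections import deque

class ConsistencyGroup:
    def __init__(self):
        self.fifo = deque()          # the single FIFO log per storage system
        self.remote = {}             # secondary volumes, keyed by name

    def host_write(self, volume, data):
        self.fifo.append((volume, data))   # arrival order is preserved
        return "complete"                  # asynchronous: host acked at once

    def replicate_one(self):
        volume, data = self.fifo.popleft() # always the oldest entry first
        self.remote.setdefault(volume, []).append(data)

g = ConsistencyGroup()
g.host_write("db-data", "row-1")
g.host_write("db-log", "commit-1")   # depends on row-1 arriving first
while g.fifo:
    g.replicate_one()
print(g.remote)  # -> {'db-data': ['row-1'], 'db-log': ['commit-1']}
```

Because every pair in the group drains through the same FIFO, the database's log record can never reach the remote site ahead of the data it describes, which is exactly the out-of-order corruption the option prevents.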

Summary of Remote Replication Modes


The following table highlights the remote replication modes.

Table 11-1 Summary of Remote Replication Modes

Synchronous:
1. Host issues write to primary volume
2. Primary controller adds entry to metadata log
3. Write to primary volume
4. Copy to secondary volume
5. Notify host, write is complete
6. Entry removed from metadata log

Asynchronous:
1. Host issues write to primary volume
2. Primary controller adds entry to metadata log
3. Write to primary volume
4. Notify host, write is complete
5. Copy to secondary volume
6. Entry removed from metadata log

Preserved Write Order:
1. Host issues write to primary volume
2. Primary controller adds entry to metadata log
3. Write to primary volume
4. Primary controller moves log entry to FIFO log
   a. Read first entry from FIFO log
   b. Copy to secondary volume
   c. Remove first entry from FIFO log
5. Notify host, write is complete

Benefits of Remote Replication


Remote replication offers the following beneficial features:

Disaster recovery - Remote Replication allows the storage system to replicate critical data from one site to another storage system at another site. Data transfers occur at Fibre Channel speeds, providing an exact mirror duplicate at the remote secondary site. In the event that the primary site fails, mirrored data at the remote site is used for data host failover and recovery; operations may then be shifted over to the remote mirror site for continued operation of all services normally provided by the primary site.

Data vaulting and data availability - Remote Replication allows data to be sent off site, where it can be protected from hardware failures and other threats. The off-site copy of the data can then be used for testing, or may be backed up without interruption to critical operations at the primary site.

High-performance remote copy - Remote Replication provides a complete copy of data on a second storage system for use in applications testing. This method removes the burden of processing from the original system with no impact on the host server. The secondary data host and storage system simply break the mirror, use it, and re-sync for the next testing cycle.

Two-way data protection - Remote Replication provides the ability to have two storage systems back each other up by mirroring critical volumes on each storage system to volumes on the other storage system. This allows each system to recover data from the other system in the event of any service interruptions.



Technical Features of Remote Replication


The following is a list of the remote replication features available:

Synchronous, asynchronous, and write order consistency mirroring modes enable administrators to choose the replication method that best meets protection, distance, or performance requirements.
Dynamic mode switching without suspending the mirror accommodates changing application and bandwidth requirements.
Suspend/resume with delta resynchronization reduces the vulnerability associated with reestablishing the mirror.
Without interrupting the normal mirroring from the local to the remote site, Remote Replication provides read-only and Snapshot access to the secondary volume. This enables the remote data to be utilized prior to a disaster (for backup, vaulting, data mining, application testing, and so on) without sacrificing protection of the primary site data.
Storage-based implementation has no host server or application overhead, for high performance.
Multiple remote systems can mirror to a single system for centralized data protection, mining, or backups.
Cross-mirroring data between storage systems protects the data on each storage system.
Remote Replication is a premium feature that is fully integrated into the storage management software for a single point of control for all storage administration and replication needs.
User-selectable synchronization priority controls the impact of data transfers on application performance.
Managed by the controllers, Remote Replication is transparent to the data host and applications. Once replication is established, data synchronization begins. Data is copied to a secondary volume in the background. After synchronization, online replication continues.



Suspend and Resume


A mirror pair can be suspended to stop data transfer between the primary and secondary volumes. When a mirror pair is in a suspended state, no attempt is made to contact the secondary volume. Any data that is written to the primary volume while the mirror pair is suspended is logged in the mirror reserve and is automatically written to the secondary volume when the mirror relationship is resumed; a full synchronization is not required.

A mirror pair can be resumed to restart data transfer between a primary volume and a secondary volume participating in a mirror relationship after the mirror has been suspended or has become unsynchronized. After the mirror pair is resumed, only the regions of the primary volume known to have changed since the mirror pair was suspended are written to the secondary volume. A mirror that was either manually suspended or stopped due to an unplanned communication error does not need to restart the lengthy process of establishing the mirror: when the mirror is resumed, only the data blocks written to the primary volume while the mirror was suspended are copied to the secondary volume.

This delta resynchronization process is user defined, initiated either as an operator command or automatically when communication is restored. The suspend-and-resume feature works in conjunction with major database solutions to extend backup and recovery best practices for enhanced business continuity.
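Delta resynchronization can be sketched in miniature. This is a hypothetical illustration (names invented for this example): writes during a suspension only mark dirty regions in the delta log, and resume copies just those regions rather than the whole volume.

```python
# Hypothetical sketch of delta resynchronization: track dirty regions
# while suspended, then copy only those regions on resume.

class SuspendedMirror:
    def __init__(self, regions):
        self.primary = dict.fromkeys(range(regions), b"")
        self.secondary = dict(self.primary)
        self.delta = set()           # the delta log: dirty region numbers

    def host_write(self, region, data):
        self.primary[region] = data
        self.delta.add(region)       # logged, not sent to the secondary

    def resume(self):
        copied = 0
        for region in sorted(self.delta):
            self.secondary[region] = self.primary[region]
            copied += 1
        self.delta.clear()
        return copied                # regions copied, not the full volume

m = SuspendedMirror(regions=1000)
m.host_write(7, b"new")
m.host_write(42, b"newer")
print(m.resume(), m.primary == m.secondary)  # -> 2 True
```

Only 2 of the 1000 regions travel to the secondary, which is why resuming a suspended mirror avoids the full synchronization described earlier.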

Role Reversal
A role reversal is the act of promoting the secondary volume to be the primary volume within the mirrored volume pair, and demoting the primary volume to be the secondary volume. The role reversal process always requires a user to initiate it. When the secondary volume becomes a primary volume, any hosts that are mapped to the volume through a volume-to-LUN mapping are able to read from or write to the volume.



How Remote Replication Works


When the controller that owns the primary volume receives a write request from the data host, the controller first logs information about the write to a mirror reserve volume, then writes the data to the primary volume. The controller then initiates a remote operation to copy the affected data blocks to the secondary volume on the secondary storage array.

Figure 11-8 Remote Replication

During the synchronous mode of Remote Replication, data is written to both storage arrays (primary and secondary) before the host is informed that the data has been written. The controller sends an I/O completion indication to the host after the data has been successfully copied to the secondary storage system. This write mode is designed to offer one of the highest forms of protection, with both the primary and secondary volumes kept current to the last update.


During the asynchronous mode of Remote Replication, data is written to one storage system but may not be sent to the other until some time has passed. The controller sends an I/O completion indication back to the host system before the data has been successfully copied to the secondary storage system.

What Happens When an Error Occurs?


[Figure: a host writing to the primary volume of a mirrored pair in the Unsynchronized state, with the link to the secondary volume interrupted]

When processing write requests, the primary controller may be able to write to the primary volume, but a link interruption prevents communication with the secondary volume. In this case, the remote write cannot complete to the secondary volume, and the primary and secondary volumes are no longer correctly mirrored. The primary controller transitions the mirror pair to an Unsynchronized status and sends an I/O completion to the primary host. The primary host can continue to write to the primary volume, but remote writes do not take place.

When connectivity is restored between the primary volume and the secondary volume, a resynchronization takes place, either automatically or manually, depending on which method you chose when setting up the mirror pair. During resynchronization, only the blocks of data that have changed on the primary volume during the link interruption are copied to the secondary volume. After the resynchronization begins, the mirrored pair transitions from an Unsynchronized status to a Synchronization in Progress status.
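The status transitions on a link failure and recovery can be summarized as a tiny state machine. This is a simplified sketch with assumed method names; the status strings follow the text, and note that host I/O is never failed by the link interruption itself.

```python
# Simplified sketch (assumed names) of mirror-status transitions on a
# link failure and recovery, as described above.

class MirrorPair:
    def __init__(self):
        self.status = "Synchronized"

    def remote_write(self, link_up):
        if not link_up:
            self.status = "Unsynchronized"  # primary still acks the host
        return "complete"                   # host I/O is never failed here

    def link_restored(self):
        self.status = "Synchronization in Progress"  # delta copy begins

    def resync_done(self):
        self.status = "Synchronized"

p = MirrorPair()
p.remote_write(link_up=False)
print(p.status)      # -> Unsynchronized
p.link_restored()
print(p.status)      # -> Synchronization in Progress
```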


The primary controller also marks the mirror pair as Unsynchronized when a volume error on the secondary prevents the remote write from completing. For example, an offline or failed secondary volume can cause the mirror pair to become Unsynchronized. When the volume error is corrected (the secondary volume is placed online or recovered to an Optimal status), a synchronization (automatic or manual) is required, and the mirror pair transitions to Synchronization in Progress.

Configuring Remote Replication


The following section covers how to create and manage remote replication using CAM. First, you need to configure the hardware to support remote replication.

Configuring the Hardware for Data Replication


To configure remote or data replication for the Sun StorageTek 6540 arrays, you will need to verify the following:

Confirm that you are able to manage each array that will be part of the remote replication with its appropriate host through the array's IP address(es).
Each array that will be part of a remote replication set should be on a Fibre Channel switch. If there are more arrays on the switch than will be in that specific replication set, you must zone the switch.

Set up the hardware.


Remote Replication requires two storage systems and an FC switch. The FC switch provides a Name Service function so storage systems can identify and access other storage systems on the SAN. The FC switch also provides diagnostics on the FC link between the local and remote sites. Figure 11-9 shows the hardware ports for Remote Replication. The last host port on each controller is dedicated to Remote Replication (port 4 on the 6140, and port 2 on the 6140 Lite).



Figure 11-9 Dedicated Remote Replication Ports

Caution Once Remote Replication is activated, the last port on each controller is dedicated to the replication function. Any hosts connected to this port will be logged out. Use this last port (port 4 on the 6540) to connect the system to the SAN for Remote Replication.

The mirroring distances supported between storage systems participating in a mirror relationship are governed by the distance limits of the Fibre Channel standard. The following distances have been tested using an FC fabric in conjunction with CNT routers:

Synchronous - 100 miles (160 km)
Asynchronous - 3200 miles (5150 km)
Asynchronous with Write Consistency - 800 miles (1285 km)

To configure the remote replication environment:

1. Direct-attach the management host to the array using any of ports 1 through 3. Port 4 is dedicated to remote replication.
2. Using Fibre Channel cables, connect port 4 from each controller, controller A and controller B, to the Fibre Channel switch. Do this for each controller tray in the replication set.

Note Additional zones can be configured to group the connections from controller A on the arrays together, while grouping the connections from controller B on the arrays together, further isolating group A from group B.


Figure 11-10 on page 11-280 shows how the arrays with their associated hosts are cabled using three switches.

Note The same goal can be achieved using a single 16-port switch configured with three zones.

Figure 11-10 Remote Replication Cabling

Configuring Data Replication with CAM


Installing the license for the Sun StorageTek Data Replicator software premium feature on an array enables data replication for that array only. Since two arrays participate in a replication set, you must install a license on both arrays that will participate in the replication set.



Note The array dedicates Fibre Channel (FC) port 4 on each controller for use with the Sun StorageTek Data Replicator software premium feature. Before enabling data replication on an array, you must ensure that FC port 4 on each controller is not currently in use. If it is in use, you must move all connections from FC port 4 to FC port 1.

Note On the Sun StorageTek 6540, FC port 4 is dedicated to remote replication functions.

To enable data replication on an array:
1. Click Sun StorageTek Configuration Service. The Array Summary page is displayed.
2. Click the array on which you want to enable data replication. The Volume Summary page for that array is displayed.
3. In the navigation pane, click Administration > Licensing. The Licensable Feature Summary page is displayed.
4. Click Add License. The Add License page is displayed.
5. Select Sun StorEdge Data Replicator Software from the License Type menu.
6. Enter the version number and the key digest, and click OK.

Activating and Deactivating Data Replication


Activating the Sun StorageTek Data Replicator software premium feature prepares the array to create and configure replication sets. After data replication is activated, the secondary ports for each of the array's controllers are reserved and dedicated to data replication. In addition, a replication reserve volume is automatically created for each controller in the array. Activating the feature does the following:

Reserves the last host port on each controller for mirroring operations - Remote Replication requires a dedicated host port between storage systems for mirroring data. After Remote Replication has been activated, one Fibre Channel host-side I/O port on each controller is solely dedicated to mirroring operations.



Any host-initiated I/O operations are not accepted by the dedicated port, and any requests received on this dedicated port are only accepted from another controller participating in the mirror relationship. Controller ports dedicated to Remote Replication must be attached to a Fibre Channel fabric environment with support for the Directory Service and Name Service interfaces.

Creates the mirror repositories - When you activate Remote Replication on the storage system, you create two mirror reserve volumes, one for each controller in the storage system. During this process you have the option to decide where the mirror reserve volumes will reside: on free capacity on an existing virtual disk or in a newly created virtual disk. Because of the critical nature of the data being stored, the RAID level of the mirror reserve volumes must not be RAID 0 (data striping with no redundancy). Each mirror reserve volume is a fixed size of 128 Mbyte. An individual mirror reserve volume is not needed for each mirror pair.

Caution Before activating Remote Replication, verify that creating the replication repositories will not exceed the volume limits of the storage system.

Note The replication reserve volumes require a total of 256 megabytes of available capacity on an array. The two replication reserve volumes are created with a size of 128 MB, one for each controller.

If no replication sets exist and the Sun StorageTek Data Replicator software premium feature is no longer required, you can deactivate data replication in order to reestablish normal use of the dedicated ports on both storage arrays and delete both replication reserve volumes.

Note You must delete all replication sets before you can deactivate the premium feature.

To activate or deactivate the Sun StorageTek Data Replicator software premium feature:
1. Click Sun StorageTek Configuration Service. The Array Summary page is displayed.
2. Click the array containing the primary volume in the data replication set.


The Volume Summary page for that array is displayed.
3. In the navigation pane, click Administration > Licensing. The Licensable Feature Summary page is displayed.
4. Click Replication Sets. The Licensable Feature Details - Replication Sets page is displayed.
5. Click Activate or Deactivate, as appropriate.

A confirmation dialog box indicates success or failure.

Disabling Data Replication


When data replication is in the disabled/activated state, previously existing replication sets can still be maintained and managed; however, new data replication sets cannot be created. When in the disabled/deactivated state, no data replication activity can occur.

To disable data replication:
1. Click Sun StorageTek Configuration Service. The Array Summary page is displayed.
2. Click the array on which you want to locate the primary volume in the data replication set. The Volume Summary page for that array is displayed.
3. In the navigation pane, click Administration > Licensing. The Licensable Feature Summary page is displayed.
4. Click the check box to the left of Replication Sets. This enables the Disable button.
5. Click Disable.

Creating Replication Sets


Before any mirror relationships can be created, volumes must exist at both the primary and secondary sites. If a primary volume does not exist, one will need to be created on the primary storage system. If a secondary volume does not exist, one will need to be created on the secondary storage system. Consider the following when creating the secondary volume:

- The secondary volume must be of equal or greater size than the associated primary volume.
- The RAID level and drive type of the secondary volume do not have to be the same as the primary volume.
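The sizing rule above reduces to a one-line comparison. In this sketch the function name and volume sizes are hypothetical; only the size constraint is checked, since RAID level and drive type may differ.

```shell
# Sketch: a secondary volume qualifies if it is at least as large as
# the primary. Sizes in MB; can_be_secondary is a hypothetical helper.
can_be_secondary() {   # usage: can_be_secondary <primary_mb> <secondary_mb>
  [ "$2" -ge "$1" ]
}
can_be_secondary 500 512 && echo "512 MB secondary for a 500 MB primary: ok"
can_be_secondary 500 400 || echo "400 MB secondary for a 500 MB primary: too small"
```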


When adequate volumes exist at both sites, mirror relationships can be created using the Create Replication Set Wizard.

Just before creating the replication set:
- Stop all I/O activity and unmount any file systems on the secondary volume.
- Log in to the system using the storage user role.

The Create Replication Set wizard enables you to create a replication set, either standalone or as part of the consistency group. To create a replication set:
1. Click Sun StorageTek Configuration Service. The Array Summary page is displayed.
2. Click the name of the array containing the primary volume that you want to replicate to the secondary volume. The Volume Summary page is displayed.
3. Click the name of the primary volume that you want to replicate to the secondary volume. The Volume Details page for the selected volume is displayed.

Note You cannot replicate a volume that is already in a replication set.

4. Click Replicate. The Create Replication Set wizard is displayed.
5. Follow the steps in the wizard. The Create Replication Set wizard also allows you to include the new replication set in the consistency group, if desired.

When creating the replication set, the system copies all data from the primary volume to the secondary volume, overwriting any existing data on the secondary volume. If replication is suspended, either manually or due to a system or communication problem, and then resumed, only the differences in data between the volumes are copied.

When creating a replication set you have the option to select the Synchronization Priority Level. You can choose from five synchronization priorities for the primary volume, ranging from lowest to highest. The priority determines how much precedence the full synchronization receives relative to host I/O activity and, therefore, how much of a performance impact there will be. The following guidelines roughly approximate the differences between the five priorities. Note that volume size can cause these estimates to vary widely.


- A full synchronization at the Lowest Synchronization Priority Level will take approximately eight times as long as a full synchronization at the Highest Synchronization Priority Level.
- A full synchronization at the Low Synchronization Priority Level will take approximately six times as long as a full synchronization at the Highest Synchronization Priority Level.
- A full synchronization at the Medium Synchronization Priority Level will take approximately three and a half times as long as a full synchronization at the Highest Synchronization Priority Level.
- A full synchronization at the High Synchronization Priority Level will take approximately twice as long as a full synchronization at the Highest Synchronization Priority Level.
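The multipliers above can be turned into rough elapsed-time estimates. This is only a sketch: the 30-minute Highest-priority baseline is an assumed figure, and the guide itself warns these estimates vary widely with volume size.

```shell
# Sketch of the guide's priority multipliers as elapsed-time estimates
# relative to a hypothetical Highest-priority baseline.
baseline_min=30   # assumed Highest-priority full-sync time, in minutes
estimate() {      # usage: estimate <lowest|low|medium|high|highest>
  case "$1" in
    highest) mult=10 ;;   # 1x baseline
    high)    mult=20 ;;   # ~2x
    medium)  mult=35 ;;   # ~3.5x
    low)     mult=60 ;;   # ~6x
    lowest)  mult=80 ;;   # ~8x
    *) echo "unknown priority" >&2; return 1 ;;
  esac
  echo $(( baseline_min * mult / 10 ))
}
estimate lowest   # a 30-minute sync could take roughly 240 minutes at Lowest
```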

The Synchronization Priority Level of a mirror relationship defines the amount of system resources used to synchronize the data between the primary and secondary volumes of the mirror relationship. If the highest priority level is selected, the data synchronization uses a large amount of system resources to perform the full synchronization, but may decrease performance for host I/O, including other mirror relationships. Conversely, if the lowest priority level is selected, there is less impact on overall system performance, but the full synchronization is slower.

Note Use the highest replication synchronization priority that your applications will permit. At lower priorities, applications run faster but synchronization is slower; at higher priorities, synchronization is faster but applications run slower.

Note An alternative method of creating a replication set is to go to the Replication Set Summary page and click on the New button. In this case, an additional step in the wizard prompts you to filter and select the primary volume from the current array.



Reversing Roles
It is possible to perform the role reversal from either volume in the replication set. For example, when you promote the secondary volume to a primary role, the existing primary volume is automatically demoted to a secondary role (unless the system cannot communicate with the existing primary volume). A role reversal may be performed using one of the following methods:

- Changing a secondary mirrored volume to a primary volume: This option promotes the selected secondary volume to become the primary volume of the mirrored pair and would be used when a catastrophic failure has occurred. For step-by-step instructions, refer to "Changing a Secondary Volume to a Primary Volume" in the online help.
- Changing a primary mirrored volume to a secondary volume: This option demotes the selected primary volume to become the secondary volume of the mirrored pair and would be used during normal operating conditions. For step-by-step instructions, refer to "Changing a Primary Volume to a Secondary Volume" in the online help.

If a communication problem between the secondary and primary sites prevents the demotion of the remote primary volume, an error message is displayed. However, you are given the opportunity to proceed with the promotion of the secondary volume, even though this will lead to a dual-primary condition. This condition can be remedied later, because the communication problem does not permanently prevent the original primary volume from being demoted by a user once connectivity is restored. If communication with the remote storage system is down, you can force a role reversal even when it will result in a dual-primary or dual-secondary condition. Use the Recovery Guru to recover from one of these conditions after communication is restored with the remote system.

Note If the role of a volume in a replication set that is a member of the consistency group is changed, the replication set becomes a member of the consistency group on the array that hosts the newly promoted primary volume.

To reverse the roles of the volumes within a replication set:
1. Click Sun StorageTek Configuration Service.


The Array Summary page is displayed.
2. Click the name of the array containing the volume in the replication set whose role you want to reverse. The Volume Summary page is displayed.
3. Click the Replication Sets tab. The Replication Set Summary page is displayed.
4. Click the name of the replication set that includes the volume. The Replication Set Details page is displayed.
5. Click Role to Secondary or Role to Primary, as appropriate. A confirmation message is displayed.
6. Click OK.

The roles of the volumes are now reversed.

Changing Replication Modes


A number of factors must be considered and a number of decisions must be made before changing the replication mode of a replication set. Ensure that you have a full understanding of the replication modes before changing them.

To change the replication mode of a replication set:
1. Click Sun StorageTek Configuration Service. The Array Summary page is displayed.
2. Click the name of the array containing the replication set whose replication mode you want to change. The Volume Summary page is displayed.
3. Click the Replication Sets tab. The Replication Set Summary page is displayed.
4. Click the name of the replication set whose replication mode you want to change. The Replication Set Details page is displayed.
5. Select Asynchronous or Synchronous, as appropriate, from the drop-down list. If you select Asynchronous, write order consistency is disabled by default. To enable write order consistency for all replication sets using asynchronous mode, select the Consistency Group check box.
6. Click OK to save the changes.

Suspending and Resuming Data Replication


To suspend or resume data replication in an existing replication set:


1. Click Sun StorageTek Configuration Service. The Array Summary page is displayed.
2. Click the name of the array containing the replication set for which you want to suspend or resume replication. The Volume Summary page is displayed.
3. Click the Replication Sets tab. The Replication Set Summary page is displayed.
4. Click the name of the replication set for which you want to suspend or resume replication. The Replication Set Details page is displayed.
5. Do one of the following:
- If you want to suspend replication and track changes between the volumes, click Suspend.

Note If the replication set is already in a Suspended, Unsynchronized, or Failed/Suspended state, only the Resume button is available.

Suspending a replication set stops the coordination of data between the primary and the secondary volume. Any data that is written to the primary volume is tracked while the replication set is suspended and is automatically written to the secondary volume when replication is resumed, so a full synchronization is not required.

- If you want to resume replication and copy only the data changes, not the entire contents of the volume, click Resume.

6. When prompted to confirm the selected action, click OK.

Note If you are suspending or resuming replication for a replication set that is part of the consistency group, all other replication sets in the group with primary volumes on the primary array will also be suspended or resumed.



Testing Replication Sets


You can test communication between volumes in a replication set by clicking the Test Communication button on the Replication Set Details page. If a viable link exists between primary and secondary volumes, a message displays indicating that communication between the primary and secondary volume is normal. If there is a problem with the link, a message displays details about the communication problem.

Removing a Mirror Relationship


Removing a mirror relationship between a primary and secondary volume does not affect any of the existing data on either volume. The relationship between the volumes is removed; they are no longer tied together. The primary volume continues normal I/O operation. The secondary volume becomes a standard volume and can be mapped to a host for read and write access.

Note For backup routines, use the Suspend option rather than removing the mirror relationship.



Examples of Remote Replication Configurations

[Diagram: two sites connected over an Ethernet network. At each site a host cluster (Hosts 1 and 2 at Site 1, Hosts 3 and 4 at Site 2) connects through two switches (1A/1B and 2A/2B) to a dual-controller storage array (Storage Array 1 and Storage Array 2, each with controller ports A1, A2, B1, B2). Two fabrics of up to 10 km each span the sites, carrying host FC cables, storage array FC cables, and dedicated RVM feature FC cables.]

Figure 11-11 A Fully Redundant Enhanced Remote Mirroring Configuration



[Diagram: the same two-site layout using a single switch per site (Switch 1 at Site 1, Switch 2 at Site 2) joined by one fabric of up to 10 km. Each dual-controller storage array (ports A1, A2, B1, B2) connects through its local switch; the legend distinguishes host FC cables, storage array FC cables, and dedicated Enhanced Remote Mirroring feature FC cables.]

Figure 11-12 Low-Cost Enhanced Remote Mirroring Campus Configuration



[Diagram: a same-site configuration. Hosts 1-4, grouped into two host clusters on an Ethernet network, connect through Switch 1 and Switch 2 to two dual-controller storage arrays (Storage Array 1 and Storage Array 2, each with controller ports A1, A2, B1, B2) in the same location, with host FC cables, storage array FC cables, and dedicated RVM feature FC cables.]

Figure 11-13 Same-Site Enhanced Remote Mirroring Configuration


Knowledge Check - Snapshot, Volume Copy, Remote Replication



1. A snapshot is a method for creating a point-in-time image of volumes and is immediately out of date as soon as a new write is made to the system. True / False

2. Volume Copy volumes MUST have the same RAID level and configuration. True / False

3. Remote Volume Mirroring continuously copies from one volume to another to produce an exact copy of the source volume. True / False

4. Asynchronous mirroring is faster than synchronous mirroring. True / False

5. I/Os can continue to the source volume during a volume copy. True / False

6. When using RVM, your mirrored volume must be located off-site. True / False

7. Why is a snapshot referred to as a "point-in-time" (PiT) image?

8. Is snapshot a true disaster recovery feature? Why or why not?


9. What is the maximum number of snapshots that can be created on one base volume?

10. What happens if a data block on the base volume is changed more than once after the snapshot is taken?

11. What is the difference between disabling and deleting a snapshot?

12. What volumes are included in a "copy pair"?

13. What is the maximum number of copy pairs that can be in progress at one time?

14. Why would you want to change the copy priority?

15. Why are there two mirror repository volumes on a system?

16. What are the two logs kept in the mirror repository volume? Briefly describe what each does.


17. How does "write consistency mode" differ from "asynchronous mode"?

18. What happens if there is a link interruption during the remote mirror process?




Module 12

Problem Determination
Objectives
Upon completion of this module, you should be able to:

- Utilize the tools in CAM to analyze information about a storage system issue
- Use Service Advisor to determine how to solve problems
- Use the SSCS to import and export the configuration



Problem Determination
Ask yourself these questions about the Storage System Environment:

- What has changed, or is changing?
- What does the physical environment look like (LEDs, cabling)?
- What is the configuration?
- Are there other indicators?
- What other questions should you ask?
- What tools are available to aid in observation?

Figure 12-1 What can go wrong?


What tools are available?


Figure 12-2 Visual clues

Compatibility Matrix
A table of all third-party hardware and software components that should be used with a particular level of controller firmware. When determining compatibility, it is important to verify:

- Controller FW release (05.40, 6.10, 6.12...)
- Vendor (QLogic, Emulex, LSI...)
- Component type (HBA, switch...)




- O/S version certified (Win2003, RH7.2...)
- The component description lists versions specific to the respective component (for example, HBA driver, BIOS, FW)

Problems and Recovery

Figure 12-3 Problems and recovery

Service Advisor
A collection of service procedures. You can manually locate the one you need, or arrive at it via an alarm.



Service Advisor Tasks


FRU Removal/Replacement Procedures (for both 6540 and CSMII):
- Controllers
- Batteries
- Interconnect canister
- Disk drives
- I/O module (IOM)
- Battery
- Power supply
- SAS interface cable

X-Options:
- Adding array capacity
- Removing array capacity
- Adding expansion trays
- Removing expansion trays

Troubleshooting and Recovery:
- Offline/online controllers
- Reset controller
- Correcting an IOM firmware mismatch
- Redistribute volumes
- Setting the drive channel to optimal
- Reviving a disk drive
- Recovering from an overheated power supply

Service Only:
- Tray midplane removal/replacement



Support Data

Various types of inventory, status, and performance data that can help troubleshoot any problems with your storage system:
- Gathered into a zipped-file format
- Gathered through the storage manager or through the command line interface (CLI)

Figure 12-4 Collect support data

Support Data Bundle


Legend: C = current configuration information, S = current state information, PS = performance/statistical information, E = event tracking.

1. NVSRAMdata.txt (C, current NVSRAM configuration) - A controller file that specifies the default settings for the controllers.
2. stateCaptureData.dmp* (S, current state of the controller from the viewpoint of the controller firmware; this log is nothing more than a series of controller shell commands and their output):
   - moduleList - Displays the versions of the loaded software modules on the controllers
   - arrayPrintSummary - Prints a summary of storage system controller states and volume ownership
   - cfgUnitList - Displays volume state for all volumes
   - ghsList - Displays information about global hot spare drives


   - printBatteryAge - Prints current, installation, expiration, and warning time, in seconds since UNIX time zero (1 Jan 1970)
   - cfgPhyList - Displays volume state for all volumes
   - spmShowMaps - Shows volume-to-LUN mapping
   - spmShow - Shows volume-to-LUN mapping
   - getObjectGraph_MT(1) - Displays status of the controllers
   - getObjectGraph_MT(4) - Displays status of the drives
   - getObjectGraph_MT(8) - Displays status of the power supplies, ESM, SFP, and so on
   - ccmStateAnalyze - Summary controller information; can also be used to determine the state of the cache
     - Controller flags: BPR battery present, BOK battery OK, ABPR alternate battery present, ABOK alternate battery OK
     - Volume flags: RCA read cache active, FWT forced write through, CWOB cache write without battery, WCE write cache enabled, WCA write cache active, CME cache mirroring enabled, CMA cache mirroring active, ACMA alternate cache mirroring active
   - i - Lists summary task information
   - dqflush
   - dqPrint
   - fcAll - Displays status and cumulative error counts for source and destination fibre loops
   - showEnclosures - Displays information about enclosure devices
   - showEnclosuresPage81 - Displays information about SOC (switched) enclosures
   - excLogShow(0) - Displays the exception log
   - hwLogShow - Displays the exception hardware log
   - vdShow - Shows detailed information about volume configuration; when no number is given, volume 0 is assumed

3. objectBundle (S, the information the controller firmware has reported back to the management software concerning the current state of the storage system; normally intended for developer use)


4. driveDiagnosticData.bin (S, binary file used for failure analysis; intended for developer use)
5. storageSubsystemProfile.txt (C, current physical and logical configuration for the storage system)
6. performanceStatistics.csv (PS, point-in-time I/O performance information)
7. majorEventLog.txt (E, used for event tracking on the storage system) - A detailed list of events that occur on the storage array. The list is stored in the DACstore region on the disks in the storage system and records configuration events and storage array component failures.
8. alarms.txt (S, used for current alarms on the storage system)
9. badBlocksData.txt (E, contains Volume, Date/Time, Volume LBA, Drive Location, Drive LBA, Failure Type)
10. readLinkStatus.csv (S, used to diagnose drive-side channel component errors (IOM, SFP, drives); this log is commonly used to isolate component errors in configurations with JBOD drive trays. Be mindful of the back-end architecture (SBOD or JBOD) to best interpret the information.)
11. persistentReservation.txt (S, for viewing LUN persistent reservation locks; the only time this log would be viewed is when the storage system is used in a clustered application with multiple hosts accessing the same LUN)
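When scripting against an unpacked support bundle, the legend letters above can be restated as a small lookup. This is a sketch for illustration only; `category_of` is a hypothetical helper, not part of the CAM tooling.

```shell
# Sketch: classify support-bundle files by the guide's legend
# (C = configuration, S = state, PS = performance, E = events).
category_of() {   # usage: category_of <file-name>
  case "$1" in
    NVSRAMdata.txt|storageSubsystemProfile.txt) echo C ;;
    stateCaptureData.dmp|driveDiagnosticData.bin|alarms.txt|readLinkStatus.csv|persistentReservation.txt) echo S ;;
    performanceStatistics.csv) echo PS ;;
    majorEventLog.txt|badBlocksData.txt) echo E ;;
    *) echo unknown ;;
  esac
}
category_of majorEventLog.txt   # prints E
```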



CAM Navigation Tree

Figure 12-5 CAM navigation tree



Alarms

Figure 12-6 Alarms

Current Alarms

Figure 12-7 Current alarms




- Black - Down
- Red - Major
- Yellow - Minor
- Blue - Note

Figure 12-8 Alarm list
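The severity-to-color legend above can be captured in a small lookup for scripting or note-taking. This is only a sketch; CAM does not expose such a shell function.

```shell
# Sketch: the alarm color legend as a lookup table.
severity_color() {   # usage: severity_color <down|major|minor|note>
  case "$1" in
    down)  echo Black  ;;
    major) echo Red    ;;
    minor) echo Yellow ;;
    note)  echo Blue   ;;
    *)     echo unknown ;;
  esac
}
severity_color major   # prints Red
```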

Summary List

- CAM-wide or array specific
- Acknowledge, Re-Open, Delete

Detailed Listing

- Probable cause
- Recommended action
- View aggregated events
- Link to Service Advisor

Alarms Aggregate Problems


CAM aggregates events to provide better fault isolation.

Example: Controller failure. Other software may report:
- Failed controller


- 2x Loss of communication with unknown device (this is the batteries)
- Tray path redundancy failure
- Volumes not on preferred path

CAM reports all of this as a single failed-controller alarm.

Example: Loss of power to a controller enclosure module. Other software reports 22 events; CAM reduces this to 1 entry.

Service Advisor Links Alarms and Solutions

Figure 12-9 Service advisor links alarms and solutions



Links to exact place

Figure 12-10 Links to exact place

With pictures

Figure 12-11 With pictures



Active links check status

Figure 12-12 Active links from service advisor

Troubleshooting link from the CAM navigation tree

Figure 12-13 Troubleshooting link from CAM navigation tree



Controller Diagnostics
Controller read test

Checks for data integrity and redundancy errors

Controller write test

Initiates a write command to the diagnostics region on a specified drive

Internal loopback test

Passes data through each controller's drive-side channel, out onto the loop and then back again to determine channel error conditions

All controller tests

All controller tests are run

Remote Peer Communication Check

Only if remote replication has been configured

Out of Band Diagnostics


Reset

Resets the controller

Test communications

Test the communication between the management host and the storage system



FRU - Field-Replaceable Units

Figure 12-14 FRU Summary

Summary Pages

- Physical aspects of managed components
- Alarms link to the alarm page
- Installed and slot counts determine the configuration

Component Summary

- Name links to a Details page, which contains FRU properties
- State and status
- Revision or firmware version
- FRU ID tied to the physical element



Events

Figure 12-15 Events


- Summary of all events for the device; a filter is available
- Some events turn into alarms
- Some events are aggregated into a single event
- Events can be sent using email notification



Array Administration

Figure 12-16 Array Administration

Administration

- Manage passwords
- Redistribute volumes
- Reset configuration
- Upgrade firmware
- Change array name
- Define default host type
- Define start/stop cache %
- Configure background disk scrubbing
- Configure alert fail-over delay
- Set time manually or synchronize time with a time server
- Array health monitoring: enable health monitoring
- Configure performance monitoring: enable/disable, set polling interval, set data retention period
- Add licenses
- Disable licenses
- View activity log
- View system-specific alarms



Sun Connection

Figure 12-17 Sun Connection


- Alarms and other information sent back to Sun
- Product registration for CAM and all storage managed by CAM
- Real-time alarm reporting and asset information
- Fault reporting for the automatic case generation service offering
- Reliability reporting for delivering configuration information for analysis
- Sun Service pilot program in February using the CAM beta version



Health Administration

Figure 12-18 Health Administration

Health Agent Configuration


- Enable/disable or manually run a monitoring cycle
- Select device types to monitor
- Select monitoring frequency
- Set the number of unique monitoring threads
- Adjust timeout settings for monitoring activity

Notification
Email

- User email or pager
- Filters available per email address



SNMP Traps

Programmatic data sent to SNMP trap listeners. Management Integration: SunMC, HP Openview...

Figure 12-19 Notification



Activity Log

Figure 12-20 Activity log

- Keeps a record of management activity for all arrays being managed by CAM
- Date of operation recorded
- Errors marked with an icon
- Operation details describe what was done

Array Health and Status Sources

Alarms: primary indicator of problems
Jobs: asynchronous, long-running array operations
Events: log of array status changes
Activity log: array operations history
Performance monitoring: I/O statistics

Sun StorageTek Common Array Manager CLI (SSCS)


Features

Remote CLI shared across the product line
Multi-platform support
Full feature set
Scriptable
Backward compatible: continued support for the 6120, 6320, and 6920
Man pages (UNIX)

Benefits

All processing performed on the server
New features installed on the server are immediately available to all clients
Client upgrade not necessary with a server upgrade
Performance independent of the client machine
Code sharing with the GUI

Usage
Login

You must log in to a CAM host before executing sscs commands. Example: ./sscs login -h localhost -u storage

Built-in help

help keyword
Correct syntax is shown as part of error messages
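As a sketch, a typical session ties the pieces above together. The host name and user are placeholders, and subcommand names such as logout should be verified against the man pages for the installed CAM release:

```shell
# Log in to the CAM management host; the "storage" role can modify
# array configuration, while "guest" is read-only.
./sscs login -h localhost -u storage

# Any subcommand can now be issued until the 30-minute inactivity
# timeout expires, for example listing the registered arrays:
./sscs list array

# End the session explicitly when finished.
./sscs logout
```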

To manage the Sun StorageTek arrays, use the /opt/SUNWsesscs/cli/bin/sscs command. From a terminal window, type the sscs command with a subcommand and any applicable parameters.

Note: The sscs command has an inactivity timer. The session terminates if you do not issue any sscs commands for 30 minutes. You must log in again after the timeout to issue a command.

Example: save a configuration

sscs login -u <storage|guest> -h <host-name>

The CLI provides exporting and importing of the array configuration.

sscs export -a <array name>

Exports the storage array configuration in XML format. An output file can be specified by redirecting the output:

sscs export -a <array name> > /temp/configfile.xml

sscs import -x <xml file> array <array name>

Imports a previously exported configuration back to an array.
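As a sketch, the export and import commands combine into a backup-and-restore round trip. The array name and file path below are placeholders, not values from a real site:

```shell
# Save the configuration of array00 as XML.
sscs export -a array00 > /tmp/array00-config.xml

# Later, re-apply that saved configuration to the same array.
sscs import -x /tmp/array00-config.xml array array00
```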

Other useful information to collect


Pool: sscs list -a <array name> pool <pool name>
Profile: sscs list -a <array name> profile <profile name>

Virtual disk: sscs list -a <array name> vdisk <vdisk name>
Disks: sscs list -a <array name> disks
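The listing commands above can be wrapped in a small shell function so that all of the useful objects for one array are collected in a single pass. The function name is an illustration, not part of the product; pass the real binary path (/opt/SUNWsesscs/cli/bin/sscs) on a CAM host.

```shell
# collect_array_config: run the "sscs list" subcommands shown above
# for one array, printing pool, profile, vdisk, and disk details.
# Arguments: path to the sscs binary, array name.
collect_array_config() {
    sscs_bin="$1"
    array="$2"
    for object in pool profile vdisk disks; do
        "$sscs_bin" list -a "$array" "$object"
    done
}

# On a CAM host this would typically be invoked as:
#   collect_array_config /opt/SUNWsesscs/cli/bin/sscs array00
```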

Other Command Line Interface Tools


Fault Management Service Command Line Interface
$FMS_HOME/bin/ras_admin

The back end for many of the browser user interface (BUI) and CLI functions
Useful for diagnosing why an array cannot be registered through the BUI or CLI
Offers in-band and device-list methods for discovering arrays
Can also perform other functions:

List and delete alerts
List and delete devices
Add, list, and delete email addresses for notifications
List and display reports
List and display topologies

Command Service Module (CSM)


$SLM_HOME/bin/csmservice

Field utility for analyzing and updating array firmware baselines (a baseline is a collection of controller, IOM, and disk firmware versions that have been tested together)
With CAM 5.1, supports stand-alone mode, displaying current firmware versions, and updating individual array components

For example, update just the disk drives.

Note: On Windows, the assumption is made that MSVCR71.dll is present. If it is missing, download it from the web (search for MSVCR71.dll).

Example:

$ /opt/SUNEstksm/bin/csmservice -h
CSMServices 5.1.0.10
usage: csmservice [-h|-s|-i] [-f] [-m num]
Where:
  -h      Displays the command usage statement
  -s      Install in full service mode
  -i      Install the array firmware CAM baseline
  -f      Install in force mode
  -m num  Set the number of arrays to update in parallel
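A hedged example invocation, assuming a firmware baseline has already been staged on the management host; the parallelism value is illustrative:

```shell
# Install the CAM firmware baseline, updating at most two arrays at a
# time; adding -f would force the update even where versions already match.
$ /opt/SUNEstksm/bin/csmservice -i -m 2
```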


Support Data: Collect from the command line


Output name

supportdata-<array key>-<array timestamp>.zip

Save location

UNIX: /tmp
Windows: %SYSTEMDRIVE%\tmp

Methods of collecting

From the Service Advisor
From the command line: $FMS_HOME/bin/supportData
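The output naming convention and platform-specific save location above can be captured in a small shell helper, which is useful when scripting collection and then verifying that the expected bundle exists. The function names are illustrative, not part of the product:

```shell
# support_data_name: build the expected bundle name from the array key
# and timestamp, following the supportdata-<key>-<timestamp>.zip pattern.
support_data_name() {
    printf 'supportdata-%s-%s.zip' "$1" "$2"
}

# support_data_dir: pick the documented save location for this platform
# (/tmp on UNIX, %SYSTEMDRIVE%\tmp on Windows shells such as Cygwin).
support_data_dir() {
    case "$(uname -s 2>/dev/null)" in
        CYGWIN*|MINGW*|MSYS*) printf '%s\\tmp' "${SYSTEMDRIVE:-C:}" ;;
        *)                    printf '/tmp' ;;
    esac
}

# Example: on UNIX, a bundle for array key 6540-01 collected at
# 20070813120700 would be expected at
#   /tmp/supportdata-6540-01-20070813120700.zip
```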

