Sun - VERITAS Volume Manager - Student Guide
Student Guide
Sun Microsystems, Inc. UBRM05-104 500 Eldorado Blvd. Broomfield, CO 80021 U.S.A. Revision B
Copyright 2003 Sun Microsystems, Inc., 901 San Antonio Road, Palo Alto, California 94303, U.S.A. All rights reserved. This product or document is protected by copyright and distributed under licenses restricting its use, copying, distribution, and decompilation. No part of this product or document may be reproduced in any form by any means without prior written authorization of Sun and its licensors, if any. Third-party software, including font technology, is copyrighted and licensed from Sun suppliers. Sun, Sun Microsystems, the Sun Logo, Solaris, StorEdge, Sun Enterprise, SunSolve, Sun Enterprise Network Array, JumpStart, OpenBoot, Solstice, Sun BluePrints, and Solstice DiskSuite are trademarks or registered trademarks of Sun Microsystems, Inc. in the U.S. and other countries. All SPARC trademarks are used under license and are trademarks or registered trademarks of SPARC International, Inc. in the U.S. and other countries. Products bearing SPARC trademarks are based upon an architecture developed by Sun Microsystems, Inc. UNIX is a registered trademark in the U.S. and other countries, exclusively licensed through X/Open Company, Ltd. U.S. Government approval might be required when exporting the product. RESTRICTED RIGHTS: Use, duplication, or disclosure by the U.S. Government is subject to restrictions of FAR 52.227-14(g)(2)(6/87) and FAR 52.227-19(6/87), or DFAR 252.227-7015 (b)(6/95) and DFAR 227.7202-3(a). DOCUMENTATION IS PROVIDED AS IS AND ALL EXPRESS OR IMPLIED CONDITIONS, REPRESENTATIONS, AND WARRANTIES, INCLUDING ANY IMPLIED WARRANTY OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE OR NON-INFRINGEMENT, ARE DISCLAIMED, EXCEPT TO THE EXTENT THAT SUCH DISCLAIMERS ARE HELD TO BE LEGALLY INVALID.
THIS MANUAL IS DESIGNED TO SUPPORT AN INSTRUCTOR-LED TRAINING (ILT) COURSE AND IS INTENDED TO BE USED FOR REFERENCE PURPOSES IN CONJUNCTION WITH THE ILT COURSE. THE MANUAL IS NOT A STANDALONE TRAINING TOOL. USE OF THE MANUAL FOR SELF-STUDY WITHOUT CLASS ATTENDANCE IS NOT RECOMMENDED.
Export Control Classification Number (ECCN) assigned: 17 July 2002
Please Recycle
Table of Contents
About This Course .......... Preface-xiii
  Course Goals .......... Preface-xiii
  Course Map .......... Preface-xiv
  Topics Not Covered .......... Preface-xv
  How Prepared Are You? .......... Preface-xvi
  Introductions .......... Preface-xvii
  How to Use Course Materials .......... Preface-xviii
  Conventions .......... Preface-xix
    Icons .......... Preface-xix
    Typographical Conventions .......... Preface-xx
Introducing the VERITAS Volume Manager Software Architecture .......... 1-1
  Objectives .......... 1-1
  Relevance .......... 1-2
  Additional Resources .......... 1-3
  Introducing Storage Management .......... 1-4
    Host-Based Storage Management .......... 1-4
    Controller-Based Storage Management .......... 1-5
    Comparison of Storage Management Methods .......... 1-6
  Exploring VxVM Software and Storage Management .......... 1-7
    Relationship to the Operating System Environment .......... 1-7
    Configuration Database .......... 1-8
    Device Discovery Layer (DDL) .......... 1-9
    Drivers and Daemons .......... 1-9
    VxVM Software Support Files .......... 1-11
  Examining VxVM Software Objects .......... 1-20
    Physical Disks .......... 1-21
    VxVM Software Disks .......... 1-21
    Disk Groups .......... 1-23
    Subdisks .......... 1-24
    Plexes .......... 1-25
    Volumes .......... 1-26
    VxVM Software Layered Volume Objects .......... 1-27
Sun Proprietary: Internal Use Only v
Copyright 2003 Sun Microsystems, Inc. All Rights Reserved. Enterprise Services, Revision B
  Resynchronizing Volumes .......... 1-29
    Dirty Flag .......... 1-29
    Resynchronization Process .......... 1-30
  Introducing VxVM Software Logging .......... 1-31
    RAID 5 Logs .......... 1-31
    Dirty Region Logs (DRLs) .......... 1-32
  Examining Plex States .......... 1-34
    Plex State Descriptions .......... 1-34
    Plex Kernel States .......... 1-36
    General Plex State Cycle .......... 1-37
    Other Plex State Transitions .......... 1-37
  Introducing Supported VxVM Software Version 3.2 Features .......... 1-40
    Device Discovery .......... 1-40
    Device Naming .......... 1-43
    Sun StorEdge Traffic Manager Software Support .......... 1-46
    Explicit LUN Failover .......... 1-46
    Ordered Storage Allocation .......... 1-46
  Introducing Unsupported VxVM Software Version 3.2 Features .......... 1-47
    Disk Group Split, Move, and Join .......... 1-47
    Persistent Fast Resync Function .......... 1-47
Encapsulating Disks .......... 2-1
  Objectives .......... 2-1
  Relevance .......... 2-2
  Additional Resources .......... 2-3
  Introducing Disk Encapsulation .......... 2-4
    Encapsulated Disk Types .......... 2-4
    Reasons to Encapsulate Disks .......... 2-4
  Encapsulating Data Disks .......... 2-5
    Pre-Encapsulation Configuration Data .......... 2-6
    Data Disk Encapsulation Process .......... 2-8
    Post-Encapsulation Configuration Data .......... 2-13
    Related Post-Encapsulation Files .......... 2-18
    Encapsulating a Non-Conforming Disk .......... 2-20
  Unencapsulating Data Disks .......... 2-23
  Encapsulating Boot Disks .......... 2-31
    Booting root Volumes .......... 2-32
    Volume Restrictions .......... 2-33
    Boot Disk Encapsulation Process .......... 2-33
    Pre-Encapsulation Configuration Data .......... 2-35
    Post-Encapsulation Configuration Data .......... 2-38
    Mirroring the Encapsulated Boot Disk .......... 2-41
    Related Post-Encapsulation Files .......... 2-45
  Examining Sun Enterprise Services Best Practices for VxVM Software-Managed Boot Disks .......... 2-48
    Best-Practice Boot Disk Configuration Guidelines .......... 2-49
    Manually Bringing the Boot Disk Under VxVM Software Management .......... 2-50
    Scripted Process Using the EIS CD-ROM .......... 2-56
  Unencapsulating Boot Disks .......... 2-57
    Unencapsulating a Boot Disk Using the vxunroot Utility .......... 2-57
    Manually Unencapsulating a Boot Disk .......... 2-60
    Unencapsulating When Booted From the CD-ROM .......... 2-64
    Performing a Basic or Functional Unencapsulation .......... 2-68
  Exploring Unencapsulation Issues .......... 2-72
    Data Disks .......... 2-72
    Boot Disks .......... 2-72
  Exercise: Encapsulating Disks .......... 2-75
    Preparation .......... 2-75
    Task 1 – Installing the VxVM Software and Encapsulating a Boot Disk .......... 2-76
    Task 2 – Unencapsulating a Boot Disk Using the vxunroot Utility .......... 2-78
    Task 3 – Encapsulating a Boot Disk Using the vxdiskadm Utility .......... 2-79
    Task 4 – Manually Unencapsulating a Boot Disk When Booted From the CD-ROM .......... 2-79
    Task 5 – Encapsulating a Data Disk Using the vxdiskadm Utility (Optional) .......... 2-80
    Task 6 – Unencapsulating a Data Disk (Optional) .......... 2-82
    Task 7 – Encapsulating a Non-Conforming Data Disk (Optional) .......... 2-83
    Task 8 – Lab House Cleaning .......... 2-84
  Exercise Summary .......... 2-85
  Exercise: Encapsulating Disks .......... 2-86
    Task 1 Solutions .......... 2-86
    Task 2 Solutions .......... 2-88
    Task 3 Solutions .......... 2-89
    Task 4 Solutions .......... 2-89
    Task 5 Solutions .......... 2-90
    Task 6 Solutions .......... 2-92
    Task 7 Solutions .......... 2-93
Managing Dynamic Multi-Pathing .......... 3-1
  Objectives .......... 3-1
  Relevance .......... 3-2
  Additional Resources .......... 3-3
  Examining VxVM Software and Dynamic Multi-Pathing .......... 3-4
    VxVM Software Architecture and DMP .......... 3-4
    Load Balancing .......... 3-5
    Unique Disk Identifier .......... 3-6
    DMP Device Paths .......... 3-6
    Solaris OE Drives and DMP Drives .......... 3-6
    DMP and Device Discovery .......... 3-8
  Installing and Verifying DMP .......... 3-9
  Enabling and Disabling DMP .......... 3-10
    Disabling DMP in Version 3.0 and Earlier .......... 3-10
    Disabling DMP in Version 3.1 and Later .......... 3-10
    Files Related to Disabling and Suppression .......... 3-15
  Administrating DMP With the vxdmpadm Command .......... 3-17
    The listctrl Option .......... 3-17
    Viewing Multi-Pathing Status .......... 3-17
    The getsubpaths Option .......... 3-18
    The getdmpnode Option .......... 3-19
    The disable Option .......... 3-19
    The enable Option .......... 3-20
    The start restore and stop restore Options .......... 3-20
  Reviewing Common DMP Problems .......... 3-21
    Disks Do Not Appear to Be Multi-Pathed .......... 3-21
    Serial Number Problems .......... 3-21
    The product_id and vendor_id Do Not Match .......... 3-22
    VxVM Software Does Not See Disk Devices .......... 3-22
  Exercise: Operating DMP .......... 3-23
    Preparation .......... 3-23
    Task 1 – Enabling and Disabling DMP Operations .......... 3-24
    Task 2 – Administrating DMP .......... 3-25
    Task 3 – Using the /etc/vx/diag.d/vxdmping Script .......... 3-26
  Exercise Summary .......... 3-27
  Exercise: Operating DMP .......... 3-28
    Task 1 Solutions .......... 3-28
    Task 2 Solutions .......... 3-29
    Task 3 Solutions .......... 3-30
Troubleshooting Tools and Utilities .......... 4-1
  Objectives .......... 4-1
  Relevance .......... 4-2
  Additional Resources .......... 4-3
  Logging Errors .......... 4-4
    Using the /var/vxvm/vxconfigd.log File .......... 4-4
    Interpreting /var/adm/messages File syslog Messages .......... 4-8
    The root Mail .......... 4-9
  Recovering From Errors .......... 4-10
  Using the Debugging Tools and Utilities .......... 4-11
    The vxexplorer Utility .......... 4-18
    The vxprivutil Utility .......... 4-33
    The vxdevwalk Utility .......... 4-38
    The vxkprint Utility .......... 4-40
  Using System-Level Debugging Utilities .......... 4-41
  Exercise: Using the Error Logging and Debugging Utilities .......... 4-42
    Preparation .......... 4-42
    Task 1 – Enabling vxconfigd Debug Logging .......... 4-42
    Task 2 – Viewing the VxVM Software Messages .......... 4-43
    Task 3 – Using VxVM Software Debug Utilities .......... 4-44
  Exercise Summary .......... 4-46
  Exercise: Using the Error Logging and Debugging Utilities .......... 4-47
    Task 1 Solutions .......... 4-47
    Task 2 Solutions .......... 4-47
    Task 3 Solutions .......... 4-49
Recovering Boot and System Processes .......... 5-1
  Objectives .......... 5-1
  Relevance .......... 5-2
  Additional Resources .......... 5-3
  Surveying VxVM Software System Recovery Processes .......... 5-4
  Examining the VxVM Software Boot Process .......... 5-5
    Single-User Boot Processing .......... 5-6
    Multi-User Startup Files .......... 5-17
    Other Boot Process Failures .......... 5-18
  Troubleshooting Boot Process Failures .......... 5-20
    Bootable Boot Disk .......... 5-20
    Valid /etc/system File .......... 5-21
    Valid /etc/vfstab File .......... 5-22
    Valid rootdg Disk Group .......... 5-23
    Startable Volumes .......... 5-23
    Non-Corrupted and Appropriate Binaries and Libraries .......... 5-24
    Valid /etc/vx/volboot File .......... 5-27
    VxVM Software Startup Scripts .......... 5-27
  Troubleshooting VxVM Software Failures .......... 5-29
  Exercise: Determining the VxVM Software Problem .......... 5-30
    Preparation .......... 5-30
    Tasks .......... 5-32
  Exercise Summary .......... 5-35
  Exercise: Determining the VxVM Software Problem .......... 5-36
    Task Solutions .......... 5-36
Recovering Disk, Disk Group, and Volume Failures .......... 6-1
  Objectives .......... 6-1
  Relevance .......... 6-2
  Additional Resources .......... 6-3
  Introducing Recovery Processes .......... 6-4
  Identifying Disk Errors .......... 6-5
    The vxprint Command .......... 6-5
    The vxdisk Command .......... 6-6
    The vxdiskadm list Option .......... 6-7
    The vxstat Command .......... 6-7
    Disk Failure Categories and root Mail .......... 6-8
    Full Disk Failure .......... 6-8
  Replacing Disks .......... 6-10
    Replacing Disks Using the vxdiskadm Utility .......... 6-10
    Replacing Disks Using the Command Line .......... 6-10
    Recovering a Disk Which VxVM Software Cannot See .......... 6-11
  Troubleshooting Volume Errors .......... 6-13
    Listing Unstartable Volumes .......... 6-13
    Restarting a Disabled Volume .......... 6-14
    Recovering a Mirrored Volume .......... 6-14
    Recovering a RAID 5 Volume .......... 6-15
    Forcibly Starting RAID 5 Volumes .......... 6-16
  Troubleshooting Disk Group Errors .......... 6-17
  Exercise: Determining the VxVM Software Disk Problem .......... 6-18
    Preparation .......... 6-18
    Tasks .......... 6-20
  Exercise Summary .......... 6-23
  Exercise: Determining the VxVM Software Disk Problem .......... 6-24
    Task Solutions .......... 6-24
Upgrading the VxVM Software .......... 7-1
  Objectives .......... 7-1
  Relevance .......... 7-2
  Additional Resources .......... 7-3
  Surveying the Upgrade Processes and Procedures .......... 7-4
    Upgrading With a Script (VxVM 3.1/3.2/3.5) .......... 7-4
    Upgrading Manually (VxVM 3.1/3.2/3.5) .......... 7-6
    Upgrade Using pkgadd (VxVM 3.5) .......... 7-9
    Upgrading a Disk Group .......... 7-11
    Release Notes .......... 7-12
    Licensing .......... 7-12
  Upgrading the Solaris OE .......... 7-13
  Exercise: Upgrading the VxVM Software .......... 7-14
    Preparation .......... 7-14
    Tasks .......... 7-14
  Exercise Summary .......... 7-16
SunSolve INFODOCs .......... A-1
  INFODOC 16051 .......... A-2
  INFODOC 24663 .......... A-4
Disk Encapsulation Processes .......... B-1
Example Five-Slice Boot Disk Encapsulation .......... C-1
  Pre-Encapsulation Configuration Data .......... C-3
    Pre-Encapsulation prtvtoc Command .......... C-3
    Pre-Encapsulation format Print Utility .......... C-4
    Pre-Encapsulation df -k Command .......... C-4
    Pre-Encapsulation swap -l Command .......... C-4
  Post-Encapsulation Configuration Data .......... C-5
    Post-Encapsulation prtvtoc /dev/rdsk/c1t2d0s2 Command .......... C-5
    Post-Encapsulation format Print Utility .......... C-6
    Post-Encapsulation df -k Command .......... C-6
    Post-Encapsulation vxprint Command .......... C-6
    Post-Encapsulation /etc/vfstab File .......... C-7
    Post-Encapsulation Mirror Disk prtvtoc Command .......... C-8
    Post-Encapsulation Mirror Disk format Print Utility .......... C-9
  Example Five-Slice Boot Disk Unencapsulation .......... C-10
    Unencapsulating Using the vxunroot Command .......... C-10
    Manually Unencapsulating .......... C-10
The Boot Process .......... D-1
Configuring the VxVM Software .......... E-1
  Objectives .......... E-1
  Relevance .......... E-2
  Additional Resources .......... E-3
  Supported RAID Levels .......... E-4
    Layered Volumes .......... E-4
    Ordered Allocation of Storage .......... E-4
  VxVM Software States .......... E-5
    Plex States .......... E-5
    Plex and Volume Kernel States .......... E-5
    Volume States .......... E-5
    Disk States .......... E-6
  The vxprint Command .......... E-7
    Output Header Information .......... E-7
    Field Descriptions .......... E-8
    Stripe Width and Stripe Units .......... E-11
  Complex RAID Levels .......... E-12
    Additional Layout Options .......... E-15
    Additional Mirror Attributes .......... E-17
  Disk Layout Practices .......... E-18
    Guidelines for RAID Array Layouts .......... E-19
    Guidelines for Complex File System Layouts .......... E-19
    Striping Considerations .......... E-20
  Manipulating Disk Layouts .......... E-22
    Online Re-Layout .......... E-22
    Growing File Systems .......... E-23
  Changing Disk Group Configurations .......... E-27
    Disk Group Reconfiguration Commands .......... E-27
    Disk Group Reconfiguration Recovery .......... E-28
    Reconfiguration Considerations .......... E-30
  Hot-Relocation .......... E-32
    Hot-Relocation Process .......... E-32
    Hot-Relocation Configuration .......... E-36
    Unrelocating .......... E-37
  Hot-Spares .......... E-38
    Comparison of Hot-Relocation and Hot-Spares .......... E-38
    Activating the Hot-Spare Function .......... E-39
Preface
Course Goals

- Define information technology (IT) storage management
- Improve the availability of the VxVM software
- Identify and repair common disk and disk group management problems
- Upgrade the VxVM software
- Resolve VxVM software version and licensing problems
Course Map
The following course map shows the general topics and the modules within each topic area in relation to the course goal.
Architecture
- The VERITAS Volume Manager Software Architecture

Availability Management
- Encapsulating Disks
- Managing Dynamic Multi-Pathing

Problem Management
- Troubleshooting Tools and Utilities

Release Management
- Upgrading the VxVM Software
Topics Not Covered

- Storage area networks – covered in ES-475: Design and Administration of Storage Area Networks
- Solaris Operating Environment (OE) administration – covered in SA-288: Solaris 8 System Administration II
- Basic VxVM software administration – covered in IES-310: Sun StorEdge Volume Manager Administration
- Hitachi Lightning SE9900 administration – covered in HDS-335: Hitachi Lightning SE9900 Overview and Configuration
- Sun StorEdge T3 administration – covered in ES-255: Sun Hardware RAID and T3 Storage Systems Administration
Refer to the Sun Educational Services catalog for specific information and registration.
Can you install the VxVM software? Can you create the following VxVM software objects from the command line and the Storage Administrator graphical user interface (GUI)?
• Can you use the vxprint utility to print VxVM software configuration information?
• Can you mirror the boot disk on a server?
• Can you replace a failed disk using command-line utilities and the Storage Administrator GUI?
Introductions
Now that you have been introduced to the course, introduce yourself to the other students and the instructor, addressing the following items:
• Name
• Company affiliation
• Title, function, and job responsibility
• Experience related to topics presented in this course
• Reasons for enrolling in this course
• Expectations for this course
• Goals: You should be able to accomplish the goals after finishing this course and meeting all of its objectives.
• Objectives: You should be able to accomplish the objectives after completing a portion of instructional content. Objectives support goals and can support other higher-level objectives.
• Lecture: The instructor presents information specific to the objective of the module. This information helps you learn the knowledge and skills necessary to succeed with the activities.
• Activities: The activities take on various forms, such as an exercise, self-check, discussion, and demonstration. Activities are used to facilitate mastery of an objective.
• Visual aids: The instructor might use several visual aids to convey a concept, such as a process, in a visual form. Visual aids commonly contain graphics, animation, and video.
Conventions
The following icons and typographical conventions are used in this course to represent various training elements and alternative learning resources.
Icons
• Additional resources: Indicates additional reference materials are available.
• Discussion: Indicates a small-group or class discussion on the current topic is recommended at this time.
• Power user: Indicates additional supportive topics, ideas, or other optional information.
• Note: Indicates additional information that can help but is not crucial to understanding the concept being described. Examples of notational information include keyword shortcuts and minor system adjustments.
• Caution: Indicates that there is a risk of personal injury from a nonelectrical hazard, or risk of irreversible damage to data, software, or the operating system. A caution indicates the possibility of a hazard (as opposed to a certainty), depending on the action of the user.
• Warning: Indicates that either personal injury or irreversible damage to data, software, or the operating system will occur if the user performs this action. A warning does not indicate potential events; if the action is performed, catastrophic events will occur.
• Warning: Indicates a risk of injury due to heat or hot surfaces.
Typographical Conventions
Courier is used for the names of commands, files, and directories, as well as for on-screen computer output. For example:

Use ls -al to list all files.
system% You have mail.

Courier bold is used for characters and numbers that you type. For example:

system% su
Password:
Courier italic is used for variables and command-line placeholders that are replaced with a real name or value. For example:

To delete a file, type the rm filename command.

Courier italic bold is used to represent variables whose values are to be entered by the student as part of an activity. For example:

Type chmod a+rwx filename to grant read, write, and execute rights for filename to world, group, and users.

Palatino italics is used for book titles, new words or terms, or words that are emphasized. For example:

Read Chapter 6 in the User's Guide. You must be root to do this.
Module 1: Introducing the VERITAS Volume Manager Software Architecture
• Describe the two storage management methodologies
• Describe the relationship between the VxVM software and the Solaris Operating Environment
• Identify and describe the major components of the VxVM software configuration database
• Identify and define all the VxVM software objects
• Describe the resynchronization process
• Describe how the VxVM software identifies disks under its control
• Describe the different plex states
• List newly supported and unsupported features introduced in the VxVM software version 3.2
Relevance
Discussion: The following questions are relevant to understanding how the VxVM software internal architecture supports the management of enterprise storage subsystems:
• What VxVM software objects are used to build mountable volumes?
• What files are used to control the VxVM software internal configuration, and where are they located?
• Where are the VxVM software driver and driver configuration files stored?
• How does device discovery enable adding disks to enterprise storage subsystems without a reboot of the system?
• What new features are included in the VxVM software version 3.2?
Additional Resources
Additional resources: The following references provide additional information on the topics described in this module:
• VERITAS Volume Manager 3.2 Administrator's Guide. Mountain View, California: VERITAS Software Corporation, August 2001, number 30-000392-011, TechPDF ID 240253.
• VERITAS Volume Manager 3.2 Installation Guide. Mountain View, California: VERITAS Software Corporation, August 2001, number 30-000395-011, TechPDF ID 240256.
• VERITAS Volume Manager 3.2 Troubleshooting Guide. Mountain View, California: VERITAS Software Corporation, August 2001, number 30-000394-011, TechPDF ID 240255.
• VERITAS Volume Manager Storage Administrator 3.2 Administrator's Guide. Mountain View, California: VERITAS Software Corporation, July 2001, number 30-000393-011, TechPDF ID 240257.
Sun Proprietary: Internal Use Only Introducing the VERITAS Volume Manager Software Architecture
Copyright 2003 Sun Microsystems, Inc. All Rights Reserved. Enterprise Services, Revision B
Figure 1-1: JBOD storage. A virtual volume layer sits between a user process and the disks of a JBOD enclosure, which presents 3-Gbyte disks/slices/LUNs through an interface adapter board.
Introducing Storage Management

JBOD disks also can be found in Sun's midrange servers, such as the Sun Enterprise 3500 and Sun Enterprise 250 servers. These disks are not considered by the VxVM software to be enclosure-based JBODs because they are configured differently. This difference is addressed later in this module.
RAID Hardware
Figure 1-2: RAID hardware. An intelligent controller presents 3-Gbyte disks/slices built from the array's physical disks.
Intelligent controller storage can be managed by the storage subsystem's controller, by the VxVM software, or by both.
Figure 1-3: VxVM software architecture. Applications, the vxconfigd daemon, and the device discovery layer (DDL) run in user space; the DMP driver runs in the kernel, above the enclosure/array.
Configuration Database
The VxVM software configuration database stores all disk and volume configuration data. The following apply to the configuration database:
• Database access is managed through the /dev/vx/config device.
• Accesses are executed serially.
• Initial volume configurations are downloaded to the kernel through this device.
• The vxconfigd daemon updates the database to reflect changes to the configuration of VxVM software objects.
The vxconfigd daemon is the exclusive owner of the /dev/vx/config device. Non-volatile copies of the database are stored in the private region of a VM disk as follows:
• Configuration copies are replicated within the disk group and are available to prevent total loss of the database.
• The kernel configuration database is created from the disk group copies during the system boot process.
Kernel Drivers
VxVM software storage management drivers include the following:
• vxio: The vxio driver manages access to VxVM software virtual devices. Prior to initiating an input/output (I/O) operation to one of these virtual devices, the vxio driver consults the VxVM configuration database. The vxio driver is also responsible for reporting device errors.
• vxdmp: The vxdmp driver performs DMP operations on multipathed storage subsystems.
• vxspec: The vxspec control and status driver is used by vxconfigd and other VxVM software utilities to communicate with the Solaris OE kernel.
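On a running system, one way to confirm that these drivers are loaded is with the Solaris modinfo command; a minimal sketch (module IDs, addresses, and version strings vary by system and VxVM release):

```shell
# List loaded kernel modules and filter for the VxVM drivers.
# The exact output differs per system; typical entries include
# lines for vxio, vxspec, and vxdmp.
modinfo | egrep 'vxio|vxdmp|vxspec'
```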
vxconfigd: The vxconfigd configuration daemon is responsible for maintaining disk and disk group configuration information. The vxconfigd daemon performs the following:
• It takes configuration change requests from VxVM software utilities, communicates them to the kernel, and updates the VxVM software configuration database.
• During system boot processing, vxconfigd reads the kernel log to determine the current state of VxVM software objects and any recovery operations to be performed.
• During disk group imports, vxconfigd scans the private regions of the disk group's VM disks to find the most current copy of its configuration database. The daemon then adds that data to the system's VxVM software kernel configuration database.
• It receives cluster-related information from the vxclust utility. In a cluster environment, the different instances of vxconfigd running on the cluster nodes communicate with each other across the network.
• It logs any VxVM software object errors.
Note The vxconfigd logging and error messages are discussed in detail in Module 4, Troubleshooting Tools and Utilities.
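The vxconfigd daemon is managed with the vxdctl utility; a sketch of common subcommands (all are standard vxdctl operations):

```shell
vxdctl mode      # report whether vxconfigd is running, enabled, or disabled
vxdctl enable    # rescan for devices and re-enable vxconfigd after repairs
vxdctl stop      # stop vxconfigd cleanly
```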
vxrelocd: The vxrelocd daemon performs hot-relocation to restore redundancy. The vxrelocd daemon performs the following:
• Data located on subdisks that are part of a failed VM disk is relocated to spare disks, configured in the disk group, that have sufficient free space.
• When a relocation operation begins, vxrelocd sends mail to the local root account.
The VxVM software installs its components into the following directories:

/kernel/drv
/kernel/drv/sparcv9
/sbin
/usr/sbin
/etc/init.d
/etc/rc*.d
/opt
/etc/vx
/var/vxvm
/dev/vx
If the VxVM software driver binaries are corrupted, copy the proper architecture version of the binary over the corrupted driver binary file to correct the problem. This process is presented in Module 5, Recovering Boot and System Processes.
Note: The vxconfigd file has Solaris OE architectural requirements similar to the driver binaries in the /kernel/drv directory.
K10vmsa-server
K99vxvm-shutdown

• The /etc/rc1.d directory contains the K10vmsa-server link.
• The /etc/rcS.d directory contains the VxVM software startup links.
The VxVM software startup sequence during a system boot is covered in Module 5, Recovering Boot and System Processes.
vmsaguide.pdf
vmsaguide.ps
vmsa
LICENSE.ps
vmsa_server
The /etc/vx/reconfig.d directory holds files and subdirectories that are used to define the VxVM software's present and prior configurations.
There are three files in this directory that describe disks under VxVM software control:
• enclrs: This file lists the files in the directory that identify disks detected by the VxVM software. An example is shown here:
::::::::::::::
enclrs
::::::::::::::
OTHER_DISKS
SENA0
• OTHER_DISKS: This file lists all non-enclosure disks, as seen in this example:
::::::::::::::
OTHER_DISKS
::::::::::::::
c0t1d0
• SENA0: This file lists all disks in the enclosure named SENA0 detected by VxVM software device discovery. An example of the SENA0 file is shown here:
::::::::::::::
SENA0
::::::::::::::
c2t25d0
c2t16d0
c2t26d0
c2t10d0
c2t5d0
c2t0d0
c2t19d0
The /etc/vx/reconfig.d/disk.d directory lists subdirectories that hold pre- and post-encapsulation information for encapsulated disks, as seen in these list outputs.
./vx/reconfig.d/disk.d:
drwxr-xr-x   2 root  other   512 Mar 29 16:01 .
drwxr-xr-x   3 root  other   512 Mar 29 16:06 ..
drwxr-xr-x   2 root  other   512 Mar 29 16:01 c0t0d0

./vx/reconfig.d/disk.d/c0t0d0:
total 12
drwxr-xr-x   2 root  other   512 Mar 29 16:01 .
drwxr-xr-x   3 root  other   512 Mar 29 16:01 ..
-rw-r--r--   1 root  other     9 Mar 29 16:01 dmname
-rw-r--r--   1 root  other   933 Mar 29 16:01 newpart
-rw-r--r--   1 root  other     7 Mar 29 16:01 primary_node
-rw-r--r--   1 root  other   452 Mar 29 16:01 vtoc
The ./c0t0d0 directory contains files that describe present and prior configuration information about the encapsulated boot disk. The file named vtoc holds the boot disk's pre-encapsulation vtoc and can be used to recover the original configuration for the boot disk if unencapsulation fails.

Note: The procedure for using the vtoc file to unencapsulate the boot disk is addressed in Module 2, Encapsulating Disks.
The /etc/vx/reconfig.d/saveconf.d directory has a single subdirectory, ./etc, that contains dump device information as well as a current copy of the /etc/system file.
The /etc/vx directory also holds a file called /etc/vx/volboot. This is the VxVM software bootstrap file. It is an ASCII file that adheres to a very strict format and should not be edited. This file has the following characteristics:
• It is 512 bytes in length, including padding.
• It is updated using the vxdctl command.
• It holds the VxVM software host identifier, hostid. This is usually the Solaris OE node name, not the hardware hostid. Keep in mind the following concepts:
  • The VxVM software hostid does not have to match the server's node name, which can cause confusion.
  • The hostid is used to establish disk and disk group ownership. If two or more servers can access the same disks over the same bus, the VxVM software hostid ensures that the hosts do not interfere with each other when accessing the VxVM software disks.
• The volboot file can also contain a list of simple disks for rootdg. Refer to VxVM Software Disks on page 1-21 for more information.
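A sketch of examining and rewriting the volboot file with vxdctl rather than editing it by hand (the host name newhost is a hypothetical example):

```shell
vxdctl list              # display the volboot file contents, including the hostid
vxdctl hostid newhost    # rewrite volboot with a new VxVM hostid (hypothetical name)
vxdctl init              # reinitialize volboot with the default hostid
```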
The /var/vxvm directory contains the tempdb subdirectory:

drwxr-xr-x   2 root  root   512 Mar 29 17:15 tempdb
Caution: This directory must be present at boot time or the VxVM software does not start. If the boot disk is under VxVM control, the system does not boot. Do not remove this directory.
Further explanation of these device files is covered later in Examining VxVM Software Objects on page 1-20.
• Physical objects: Physical objects are physical storage devices that present raw or block device interfaces to the Solaris OE.
• Virtual objects: The VxVM software builds virtual objects from physical storage objects that are brought under the VxVM software control. The VxVM software virtual objects are combined into volumes, which are then formatted and mounted for use by applications and users. All objects that are not physical objects are virtual objects.
Figure 1-4 illustrates the relationship between physical and virtual VxVM software objects.
Figure 1-4: Physical disks are brought under VxVM software control as a disk group; each VxVM disk has a private region and a public region, and subdisks from the public regions are combined into a plex.
Physical Disks
Physical disks are storage devices where data is ultimately stored. Physical disks, or physical objects, are identified by the Solaris OE using a unique identifier called a ctd number. Valid ctd identifiers are:
• c: The system controller or host bus adapter number
• t: The Small Computer System Interface (SCSI) target identifier
• d: The device or logical unit number
• s: The slice or partition number
The VxVM software uses a drive's ctd number to identify the physical device when it is brought under VxVM software control.
Initialized Disks
Initialized disks are reformatted with either one or two partitions, and all data is destroyed. The partitions are used to store the VxVM software configuration and data areas, called the private and public regions. The private region is a small partition where disk group configuration information is stored. The private region has the following characteristics:
• It is usually slice 3.
• The region size starts at 1024 sectors in early versions of the VxVM software, and 2048 sectors in version 3.2. Use the vxdisksetup command privlen option to expand the size of the private region. This is a complicated procedure and is not recommended. The procedure for expanding the private region is addressed in a lab exercise.
• It is assigned vtoc tag number 15 for identification purposes.
• This region is not used for data storage.
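A sketch of initializing a disk with a non-default private region length (the device name and privlen value are illustrative; as noted above, enlarging the private region is not generally recommended):

```shell
# Initialize c1t2d0 for VxVM use with a 4096-sector private region.
# Device name and privlen value are illustrative.
vxdisksetup -i c1t2d0 privlen=4096
```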
The private region contains the following information:
• Disk name and identifier
• Disk group name and identifier
• Disk group configuration copy
The public region uses the remaining space available on the physical disk to store subdisks. The public region has the following characteristics:
• It is usually slice 4.
• It is used for data storage.
• The region is maintained by the VxVM software commands.
• It is assigned vtoc tag number 14 for identification purposes.
• The sliced configuration: Public and private regions are defined as separate Solaris OE partitions. This is the preferred method for initializing a VxVM software disk.
• The simple configuration: Public and private regions are defined within a single Solaris OE partition. These are volatile devices and disappear after a reboot if they are not in use or defined in the /etc/vx/volboot file.
• The nopriv configuration: The disk is configured without a private region. Configuration information is held and maintained by a sliced disk acting as a proxy. These disks are not automatically discovered at boot unless defined in the /etc/vx/volboot file.

Note: The nopriv configuration is not supported in future releases. Avoid using this configuration method.
Encapsulated Disks
Encapsulation brings a physical disk under VxVM software control and preserves its data. The /etc/vfstab file is modified to reflect the new volume names for the disk's file systems. Encapsulation has the following characteristics:
• Encapsulation requires free space on the disk, usually for the private region, to store configuration information. The private region size starts at 1024 sectors prior to the VxVM software version 3.2, and 2048 sectors in version 3.2 and later.
• Encapsulation fails if there is not enough free space to build a private region. If space is not available for a private region, use the nopriv option.
• All partitions on the disk are reassigned to a new public region, which is usually slice 6.
• The boot disk can be encapsulated and remain bootable.

Note: Avoid nopriv configurations; support for this configuration is being phased out. Encapsulation, including boot disk encapsulation, is covered later in this course.
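A sketch of encapsulating a data disk from the command line (the disk media name datadisk01 and device are illustrative):

```shell
# Place c1t3d0 under VxVM control while preserving its data;
# the disk media name datadisk01 is illustrative.
vxencap -g rootdg datadisk01=c1t3d0
# A reboot completes the encapsulation.
```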
Disk Groups
A named collection of VxVM software disks that share a common configuration is called a disk group. Common configuration refers to a set of records that provide detailed information about related VxVM software objects, their connections, and their attributes. This configuration information is stored in the private region of the VxVM software disks. A backup copy for each configured disk group is stored in /var/vxvm/tempdb. Disk groups are virtual objects and have the following characteristics:
• The default disk group is rootdg.
• Additional disk groups can be created on the fly.
• Disk group names are a maximum of 31 characters in length.
• Disk groups can be renamed.
• Disk groups are versioned.
• Disk groups allow grouping of the VxVM software disks into logical collections.
• Disk groups can be moved from one system to another with an import and deport process.
• Volumes created within a specific disk group can use only the VxVM software disks that are members of that disk group.
• Volumes and disks can be moved among disk groups.
Note: Moving volumes and disks among disk groups using an early version of the VxVM software was a risky procedure. The VxVM software version 3.2 has new options for the vxdg command to move VxVM software objects among disk groups and to split and join disk groups. These new options require a special license.
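A sketch of common disk group operations (the disk group, disk media, and device names are illustrative):

```shell
vxdg init datadg datadg01=c1t4d0         # create a new disk group with one disk
vxdg -g datadg adddisk datadg02=c1t5d0   # add a second disk to the group
vxdg deport datadg                       # release the group so another host can use it
vxdg import datadg                       # take ownership of the group on this host
```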
Subdisks
Subdisks are contiguous blocks of disk space. Subdisks provide the basic building blocks for the VxVM software plexes and volumes, forming the VxVM software's basic unit of storage allocation. Subdisks are virtual objects. The following characteristics apply to subdisks:
• Subdisk storage is allocated from the public region of a VxVM software disk.
• A VxVM software disk can be subdivided into one or more subdisks.
• Multiple subdisks cannot overlap.
• Space on a VxVM software disk not allocated to a subdisk is considered free space.
Subdisk names are based on the name of the VxVM software disk where they reside, appended with an incremental numeric identifier. Figure 1-5 on page 1-25 illustrates how subdisk names are derived.
Figure 1-5: Subdisks (disk01-01 through disk01-03) allocated from the public region of a physical disk.
Plexes
The VxVM software virtual objects built from subdisks are called plexes. A plex consists of one or more subdisks located on one or more VxVM software disks. Plexes are:
• Also known as submirrors
• Mid-tier building blocks of the VxVM software volumes
• Named based on the name of the volume for which the plex is a submirror, plus an appended incremental numeric identifier
• Organized using layout methods such as concatenation, striping, and RAID 5
Examining VxVM Software Objects

Figure 1-6 illustrates the architectural components of a plex.
Figure 1-6: Plex components. Subdisks disk01-01, disk01-02, and disk01-03 from the public region of a VxVM software disk form vol01-01, the first plex of volume vol01.
Note: Plex states are covered in detail in the section Resynchronizing Volumes on page 1-29. Additional plex state information is found in Appendix E, Configuring the VxVM Software.
Volumes
Volumes are virtual devices (virtual objects) that appear to be physical disks to applications, databases, and file systems. Volumes are the VxVM software's top-tier virtual objects. Although volumes appear to be physical disks, they do not share the limitations of physical disks. Volumes have the following characteristics:
• Volumes are directly addressed by the Solaris OE.
• They consist of one or more plexes. Each plex holds one copy of the volume's data.
• Volumes are not restricted to a single disk or to specific areas of disks.
• The configuration of a volume can be changed using the VxVM software utilities. Configuration changes can be accomplished without disruption to volume operations.
• Volume names can contain up to 31 characters.
• Volumes can consist of up to 32 plexes.
• Each plex can contain multiple subdisks, and all subdisks must be in the same disk group.
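A sketch of creating and mirroring a volume, then displaying its records (the disk group and volume names are illustrative):

```shell
vxassist -g datadg make datavol 2g   # create a 2-Gbyte volume
vxassist -g datadg mirror datavol    # attach a second plex as a mirror
vxprint -g datadg -ht datavol        # show the volume, plex, and subdisk records
```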
Figure 1-7: A two-plex volume. Plexes vol01-01 (subdisks disk01-01 through disk01-03) and vol01-02 (subdisks disk02-01 through disk02-03) form volume vol01.
Note: Detailed volume layouts are described in Appendix E, Configuring the VxVM Software.
Layered volumes are used to implement RAID 1+0 or RAID 10 virtual devices. RAID 1+0 devices are striped, mirrored volumes and provide a more fault-resilient volume configuration. Implementing these layered volumes requires two new VxVM software objects: subplexes and subvolumes. Figure 1-8 illustrates how a layered volume differs from a standard volume.
Figure 1-8: Layered volumes. A standard volume mirrors two striped plexes of subdisks, whereas a RAID 1+0 layered volume contains a striped plex whose columns are subvolumes, each mirroring a pair of subplexes.
Layered volumes require more VxVM software objects than standard volumes. A standard volume requires 9 objects; a layered volume of the same capacity requires 17 objects. Large disk groups using layered volumes can exceed the allotted space for the disk group's private region.
Resynchronizing Volumes
The VxVM software ensures that all data stored redundantly in mirrored or RAID 5 volumes remains in a consistent state. Data is written in parallel to mirrored and RAID 5 volumes so that copies remain consistent unless there is a system crash or physical disk failure; if either occurs, data can become inconsistent, or unsynchronized. System failures are not the only way data can become unsynchronized: data can also become inconsistent during maintenance procedures, when a mirrored plex or RAID 5 element is taken offline. If data becomes inconsistent between mirrored plexes or within a RAID 5 volume's data, use volume resynchronization to correct the problem. Volume resynchronization happens for some volumes during a system reboot; other volumes undergo resynchronization when they are started. Plexes that are taken offline for maintenance become stale, and stale plexes are resynchronized when configured back into a volume. The components of volume resynchronization are discussed in detail in this section.
Dirty Flag
The VxVM software keeps track of data synchronization operations using a flag called the dirty flag. When data is written to a volume, the volume is marked dirty until the volume stops or all writes are completed and the data in the volume's plexes is identical. Volumes whose dirty flag is not reset require volume resynchronization when started or during a system reboot.
Resynchronization Process
The resynchronization (resync) process depends on the type of volume being started. During resync of a RAID 5 volume, if a log is available, the log is replayed. If a log is not available, the volume is placed in reconstruct-recovery mode, and all parity is regenerated. This is a very time-consuming recovery process; make sure that all RAID 5 volumes have a log attached to reduce the resync time. During resync of a mirrored volume, the volume is placed into recovery mode (or read-writeback recovery mode). Data remains available for use by the Solaris OE. This type of recovery is executed in the background and is also time consuming. Attaching a dirty region log (DRL) to mirrored volumes and using a fast resync operation can help speed up the recovery process. The resync process can be expensive and can have an adverse impact on system resources. The VxVM software reduces the impact of multiple resyncs by managing the recovery operations to avoid placing stress on specific disks and controllers.
RAID 5 Logs
RAID 5 logs perform the following tasks:
• RAID 5 logs protect data and parity calculations from system crashes.
• Degraded RAID 5 volumes are not restarted during a reboot; they must be started manually. Attaching a RAID 5 log does not alter this fact.
RAID 5 parity is only used when data needs to be rebuilt. It is only written, never checked, so run the vxr5check utility on a regular basis to check parity.
Create a RAID 5 log that is large enough to allow multiple concurrent accesses to the volume; make the log several times the stripe size of a RAID 5 plex. It is possible to configure a RAID 5 volume without a log, but it is highly discouraged: RAID 5 volumes without logs are subject to silent parity errors in the event of a system crash.
Sun StorEdge documentation recommends using two RAID 5 logs per RAID 5 volume. Using two logs can have a significant performance impact on a volume type that already has performance problems, so most sites use only one log.
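A sketch of creating a RAID 5 volume with a log and adding a second log (the disk group and volume names are illustrative):

```shell
# Create a 4-Gbyte RAID 5 volume; layout=raid5 attaches a log plex by default.
vxassist -g ddg make r0 4g layout=raid5,log
# Attach an additional RAID 5 log plex to the volume.
vxassist -g ddg addlog r0
```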
Introducing VxVM Software Logging

The following is an example of a RAID 5 volume with an attached log:
Disk group: ddg

V  NAME      USETYPE   KSTATE   STATE     LENGTH  READPOL    PREFPLEX
PL NAME      VOLUME    KSTATE   STATE     LENGTH  LAYOUT     NCOL/WID  MODE
SD NAME      PLEX      DISK     DISKOFFS  LENGTH  [COL/]OFF  DEVICE    MODE

v  r0        raid5     ENABLED  ACTIVE    21568   RAID       -
pl r0-01     r0        ENABLED  ACTIVE    21600   RAID       3/32
sd ddg01-01  r0-01     ddg01    0         10800   0/0        c1t2d0
sd ddg02-01  r0-01     ddg02    0         10800   1/0        c1t2d1
sd ddg03-01  r0-01     ddg03    0         10800   2/0        c1t3d0
pl r0-02     r0        ENABLED  LOG       4320    CONCAT     -
sd ddg04-01  r0-02     ddg04    0         4320    0          c1t4d0
Dirty Region Logs

A dirty region log (DRL) provides faster resynchronization in the event of a system crash or abnormal system termination only. After a system crash, only the dirty regions are resynchronized. The logging does not perform the same function as the Solstice DiskSuite software's dirty mirror region log. Dirty region logging does not enable partial resynchronization if a plex is taken offline for maintenance; partial resynchronization is handled by the fast mirror resync (FMR) facility, which requires a separate license.
• Dirty region logging requires additional system I/O overhead.
• A DRL is relatively small.
• DRLs are not supported on core system volumes, such as the /, /usr, and /var volumes.
• Dirty region logging is usually implemented as a log plex.
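A sketch of attaching a DRL when creating a mirrored volume, and of adding one to an existing mirror (the disk group and volume names are illustrative):

```shell
# Create a 1-Gbyte mirrored volume with a dirty region log plex.
vxassist -g datadg make mvol 1g layout=mirror,log
# Add a DRL plex to an existing mirrored volume.
vxassist -g datadg addlog mvol logtype=drl
```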
Examining Plex States

Plex states perform the following functions:
• Indicate the state of the volume's contents
• Determine if a plex's content is valid
• Track whether a plex was actively in use at the time of a system failure
• Monitor plex operations
• Empty: When a volume is created and the plex is not initialized, the plex is in the empty state.
• Clean: A plex is in the clean state when it is known to contain a good copy (mirror) of the volume. If all the plexes of a volume are clean, no action is required.
• Active: A plex is in the active state when the volume is started and the plex fully participates in normal volume I/O (meaning the plex contents change as the contents of the volume change). A plex state is also active when the volume was stopped as a result of a system crash and the plex was active at the moment of the crash. In this case, a system failure can leave plex contents in an inconsistent state. When the volume is started, the VxVM software performs a recovery action to guarantee that the contents of the plexes are identical, then marks them as active.

Note: The active state is the most common state for plexes on a well-running system.
Stale
If there is a possibility that a plex does not have the complete and current volume contents, the plex is placed in the stale state. Additionally, if I/O errors occur on a plex, the kernel stops using and updating the plex, and the operation sets the state of the plex to stale.
To reattach a stale plex to the volume, synchronize its data, and return the plex to the active state, use the vxplex att command. To force a plex into the stale state, use the vxmend fix stale command.
Offline
The following command detaches a plex from volume I/O and changes the plex state to offline:

vxmend -g diskgroup off plexname

Although the detached plex remains associated with the volume, changes to the volume are not reflected to the plex while it is in the offline state.
To return an offline plex to the stale state, so that its data is recovered at the next vxvol start operation, use the vxmend on command.
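The individual command invocations are not reproduced in this extract, so the following is a hedged sketch of the whole offline-maintenance cycle on VxVM 3.x; the disk group mydg, volume vol01, and plex vol01-02 are illustrative names:

```shell
# Take the plex offline for maintenance; its state becomes OFFLINE
vxmend -g mydg off vol01-02

# ... perform maintenance on the underlying disk ...

# Return the plex to the STALE state so it is eligible for recovery
vxmend -g mydg on vol01-02

# Resynchronize stale plexes; the plex returns to the ACTIVE state
vxrecover -g mydg vol01
```

Alternatively, a stale plex can be reattached and resynchronized directly with vxplex -g mydg att vol01 vol01-02.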
Temp
A utility sets the plex state to temp at the start of an operation, and also sets the plex to an appropriate state at the end of the operation. For example, attaching a plex to an enabled volume requires copying volume contents to the plex before it can be fully attached. If the system goes down for any reason, a temp plex state indicates the operation is incomplete; a subsequent vxvol operation starts to disassociate plexes in the temp state.
Temprm
A temprm plex state resembles a temp state, with the exception that, upon completion of the operation, the temprm plex is removed. If the system goes down for any reason, a temprm plex state indicates the operation is incomplete; a subsequent vxvol operation starts to disassociate plexes and remove the temprm plex.
Temprmsd
The temprmsd plex state is used by vxassist when attaching new plexes. If the operation does not complete, the plex and its subdisks are removed.

Iofail
The iofail plex state is associated with persistent state logging. On detection of a failure of an active plex, vxconfigd places that plex in the iofail state to disqualify it from the recovery selection process at volume start time.
Disabled
The plex may not be accessed.

Detached
A write to the volume is not reflected to the plex. A read request from the volume is never satisfied from the plex. Plex operations and ioctl functions are accepted.

Enabled
A write request to the volume is reflected to the plex. A read request from the volume is satisfied from the plex.
Figure 1-9 (PS = plex state)
[Figure 1-10 diagram: plex state (PS) and plex kernel state (PKS) transitions, covering plex creation, a crash and reboot followed by vxvol start, reattachment with vxplex att, and a failed resync leading to PS: IOFAIL / PKS: Detached]
Figure 1-10 Additional Plex State Transitions

An example of how these additional states are used to manage the integrity of plexes is as follows:

1. A new plex is created, leaving the plex state (PS) as empty and the plex kernel state (PKS) as disabled.
2. The plex is initialized and transitions to the clean PS. It remains in the disabled PKS.
3. Once the volume is started, the plex transitions to the active PS and to the enabled PKS. At this time, the plex is available for use by the volume and stays in this state until an error occurs or the volume is stopped.
When an uncorrectable error happens, the following plex states are used to manage recovery:

1. An uncorrectable I/O failure occurs, and the PS transitions to the iofail state. The PKS transitions to the detached state.
2. Repairs are effected, and the plex is reattached to the volume. This causes the execution of a data resync. Once the data in the plex is updated, the PS transitions to active, and the PKS transitions to enabled. The plex is now usable by the volume.

Use the vxprint utility to view plex states and their transitions. If a small plex is resyncing, vxprint might not show the transition states, because they can happen too fast for vxprint to report. The following example shows a VM disk (rootmirror) that has an I/O failure. The plex rootvol-02 has transitioned to a detached PKS and an iofail PS.
bash-2.03# vxprint -htg rootdg
DG NAME          NCONFIG      NLOG      MINORS    GROUP-ID
DM NAME          DEVICE       TYPE      PRIVLEN   PUBLEN    STATE
V  NAME          RVG          KSTATE    STATE     LENGTH    READPOL
PL NAME          VOLUME       KSTATE    STATE     LENGTH    LAYOUT
SD NAME          PLEX         DISK      DISKOFFS  LENGTH    [COL/]OFF

dg rootdg        default      default

dm rootdisk      c1t0d0s2     sliced    3590      17678493
dm rootmirror    -            -         -         -

v  rootvol       -            ENABLED   ACTIVE    4197879   ROUND
pl rootvol-01    rootvol      ENABLED   ACTIVE    4197879   CONCAT
sd rootdisk-B0   rootvol-01   rootdisk  17678492  1         0
sd rootdisk-02   rootvol-01   rootdisk  0         4197878   1
pl rootvol-02    rootvol      DETACHED  IOFAIL    4197879   CONCAT
sd rootmirror-01 rootvol-02   rootmirror 0        4197879   0
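A hedged sketch of recovering from this condition once the hardware problem is repaired; the command forms are assumed from general VxVM 3.x usage and are not reproduced from this course:

```shell
# Resynchronize and reattach the failed plex once the disk is healthy;
# rootvol-02 returns to ENABLED/ACTIVE when the resync finishes
vxrecover -g rootdg rootvol

# Alternatively, reattach the detached plex explicitly
vxplex -g rootdg att rootvol rootvol-02
```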
Device Discovery
Device discovery is separated from the base VxVM software functionality into its own layer. Previously, the VxVM software discovered block storage devices by scanning the Solaris OE device tree using the vxiod daemon. This strategy assumed that the Solaris OE device tree remained static or changed very little; when changes occurred, command-line utilities were used to update the VxVM software configuration. Additionally, dynamic multipathing (DMP) must be aware of all multipathed storage subsystems, and must know the type of each storage subsystem to enable DMP support. Prior to VxVM software version 3.2, support for new arrays necessitated changes to the DMP device discovery code, which required patching the VxVM software and undergoing a probable system outage. With the growth of disk subsystem vendors and of storage area networks implemented within storage environments, the previous strategy proved to be inadequate. The current device discovery facility is designed to allow the dynamic addition of new storage subsystems without modification to the VxVM software modules. The VxVM software device discovery layer (DDL) discovers all disks that are visible to the Solaris OE.
The DDL:

- Discovers all block storage devices connected to a host
- Probes using SCSI commands from user space to determine device attributes
- Executes from the vxdisk utility to perform a vxdisk scan command
Figure 1-11 illustrates DDL components and the relationship to the VxVM software kernel.
[Figure 1-11 diagram: user-space applications and the vxconfigd daemon communicate with the DDL; the DMP driver resides in the kernel and connects to the enclosure/array]
Introducing Supported VxVM Software Version 3.2 Features

Support can be added dynamically for new types of disk arrays, such as the active/active (A/A) and active/passive group (A/PG) array types shown in the vxddladm listsupport output below.
JBOD Support
The device discovery facility can detect multipathed disk storage devices that do not belong to a disk array but are capable of being multipathed by DMP. Use the vxddladm utility to add or remove JBODs. To be detected correctly, disks must have a unique serial number that can be read through a SCSI inquiry or mode sense command.
DDL Administration
The device discovery facility is managed by the vxddladm utility, which provides a high-level interface for device discovery and configuration. It is used to:

- Add JBODs
- List supported JBODs
- Remove JBODs
- Include arrays
- Exclude arrays
- List supported arrays
- List excluded arrays
A sample output from the execution of the vxddladm command with the listsupport option is shown here. This option lists all disk enclosures currently supported by the system's DDL.
bash-2.03# vxddladm listsupport
LIB_NAME         ARRAY_TYPE  VID       PID
=======================================================================
libvxap.so       A/A         SUN       AP_NODES
libvxatf.so      A/A         VERITAS   ATFNODES
libvxeccs.so     A/A         ECCS      all
libvxemc.so      A/A         EMC       SYMMETRIX
libvxhds.so      A/A         HITACHI   OPEN-*
libvxhitachi.so  A/PG        HITACHI   DF350
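The JBOD operations listed earlier can be sketched as follows; the vendor ID SEAGATE is only an illustrative value, not one taken from this course's equipment:

```shell
# List JBOD entries the DDL currently knows about
vxddladm listjbod

# Add JBOD (DMP multipathing) support for disks with a given vendor ID
vxddladm addjbod vid=SEAGATE

# Remove that JBOD entry again
vxddladm rmjbod vid=SEAGATE
```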
Note See the vxddladm man page for command syntax options.
Device Naming
Enclosure-based and operating system (OS)-independent naming changes the way storage devices are identified by the VxVM software and provides the following benefits:
Naming is independent of the OS. This mitigates device naming confusion in multi-OS environments.
- OS-independent naming relieves problems with some third-party drivers that do not use the standard Solaris OE c#t#d# device naming schema. The VxVM software is now independent of arbitrary device names.
- Enclosure-based naming allows for better location identification of disks, which leads to faster fault isolation.
- In cluster environments, disk array names are the same on all nodes in the cluster.
logical_enclosure_name_#
Default disk names are based on the vendor identification (ID). For example, disks in a Sun StorEdge T3 multipath array named purple0 by the VxVM software have the following names:

purple0_1
purple0_2
purple0_3

Device names have the following characteristics:
- Logical names persist across reboots.
- Logical enclosure names can be customized.
- Disk names within enclosures are persistent as long as the disk's position within an enclosure remains static.
- All fabric devices are displayed using the enclosure format. During installation, an option allows the installer to display all nonfabric devices using the native open profiling standard (OPS) format.
Enclosure-based naming can be enabled or disabled during installation, or enabled or disabled later using vxdiskadm option 20.
Notice that the dmp metanodes now use devices named SENA0_xyz. A long listing of the directory shows that c1t0d0 and SENA0_0 are the same device by comparing the major and minor numbers.
bash-2.03# ls -las /dev/vx/dmp
total 12
  10 drwxr-xr-x  2 root  other   5120 Jul  5 09:17 .
   2 drwxr-xr-x  6 root  other    512 Jun  8 09:49 ..
   0 brw-------  1 root  other  68, 2 Jul  5 09:05 SENA0_0
...
   0 brw-------  1 root  other  68, 2 Jun  8 09:41 c1t0d0
An experienced system administrator can take the c#t#d# number and cross-reference it to a physical device. The following sample vxdisk command listing shows that the VxVM software is using enclosure-based names.
bash-2.03# vxdisk list
DEVICE       TYPE      DISK         GROUP        STATUS
SENA0_0      sliced    rootdisk     rootdg       online
SENA0_1      sliced    -            -            offline
SENA0_2      sliced    -            -            error
SENA0_3      sliced    -            -            online
SENA0_4      sliced    -            -            online
SENA0_5      sliced    -            -            offline
SENA0_6      sliced    rootmirror   rootdg       online
SENA0_7      sliced    -            -            online
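Logical enclosure names such as SENA0 can be customized. As a hedged sketch, using the purple0 name from the earlier example:

```shell
# Rename the SENA0 enclosure; disk names become purple0_0, purple0_1, ...
vxdmpadm setattr enclosure SENA0 name=purple0

# Confirm the new enclosure name
vxdmpadm listenclosure all
```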
- Single-host environment: set mp_support to rw.
- Multi-host environment: set mp_support to std.
- Split: Removes objects from an established disk group and creates a new disk group from those objects
- Move: Moves objects from one disk group to another
- Join: Joins two disk groups
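These operations map onto the vxdg command. As a hedged sketch (the disk group and volume names are illustrative, and the syntax shown is the VxVM 3.x form):

```shell
# Preview which objects would move along with vol01 (closure check)
vxdg listmove srcdg dstdg vol01

# Move vol01 (and the disks it occupies) from srcdg to an existing dstdg
vxdg move srcdg dstdg vol01

# Split vol02 out of srcdg into a new disk group newdg
vxdg split srcdg newdg vol02

# Join all objects of newdg back into srcdg; newdg ceases to exist
vxdg join newdg srcdg
```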
Module 2
Encapsulating Disks
Objectives
Upon completion of this module, you should be able to:
- Describe the encapsulation and unencapsulation processes for data and boot (system) disks
- Encapsulate a data disk
- Unencapsulate a data disk
- Encapsulate a boot disk
- Recover the loss of the encapsulated disk in a boot disk mirrored pair
- Describe best-practice partition configurations for boot disk encapsulation
- Unencapsulate a boot disk, including from a compact disc read-only memory (CD-ROM) boot
- Perform a successful vxunroot process on a boot disk mirror, including a mirror that has a replaced encapsulated disk
- Recover an encapsulated boot disk that failed the vxunroot process
Relevance
Discussion: The following questions are relevant to understanding how the VxVM software supports the preservation (encapsulation) of existing data on disks brought under the software's control:
- What is disk encapsulation?
- How is disk encapsulation implemented?
- What are the differences between encapsulating a data disk as opposed to a system (boot) disk?
- How does a user reverse the encapsulation process without having to restore data?
- Should the VxVM software be used to encapsulate a system boot disk?
- What are the supported slice configurations for a boot disk?
- What process recovers an encapsulated boot disk that fails the vxunroot utility's attempt at unencapsulation?
Additional Resources
The following references provide additional information on the topics described in this module:

- VERITAS Volume Manager 3.2 Administrator's Guide. Mountain View, California: VERITAS Software Corporation, August 2001, number 30-000392-011, TechPDF ID 240253.
- VERITAS Volume Manager 3.2 Installation Guide. Mountain View, California: VERITAS Software Corporation, August 2001, number 30-000395-011, TechPDF ID 240256.
- VERITAS Volume Manager 3.2 Troubleshooting Guide. Mountain View, California: VERITAS Software Corporation, August 2001, number 30-000394-011, TechPDF ID 240255.
- VERITAS Volume Manager Storage Administrator 3.2 Administrator's Guide. Mountain View, California: VERITAS Software Corporation, July 2001, number 30-000393-011, TechPDF ID 240257.
- http://storage.east, http://storage.central, and http://storage.west
- SunSolve Online Free Info Docs and Free Symptoms and Resolutions, [http://sunsolve.Sun.COM/pub-cgi/search.pl?mode=advanced].
- INFODOC 16051, How to Encapsulate Disks With No Free Space Using Volume Manager, 22 March 2002. (See Appendix A, SunSolve INFODOCs.)
- INFODOC 24663, Full and Basic/Functional Unencapsulation of a Volume Manager Encapsulated Root Disk While Booted CDROM, 22 March 2002. (See Appendix A, SunSolve INFODOCs.)
- INFODOCs 13775, 13781, 15838, and 19245, and Symptom and Resolution Database (SRDB) 19245.
- VERITAS Software Corporation support knowledge base TechNote ID 244678, [http://seer.support.veritas.com/nav_bar/index.asp?content_sURL=%2Fsearch%5Fforms%2Ftechsearch%2Easp].
- Sun BluePrints OnLine part number 806-6197-10, Toward a Reference Configuration for VxVM Managed Boot Disks, August 2000. [http://www.sun.com/solutions/blueprints/online.html]
2-3
- Increase the fault resiliency of data through mirroring
- Grow the data storage area
- Convert a data disk to a striped volume to increase performance
- Provide a disk to populate the rootdg disk group
- Two unused partitions in the partition table
- A small amount of unallocated disk space (two cylinders) at the beginning or end of the disk
- Slice 2 representing the whole disk
If these requirements are not met, encapsulation is a more difficult procedure, as described in Encapsulating a Non-Conforming Disk on page 2-20. When the encapsulation process is finished, the encapsulated disk has two additional partitions. These are usually partition 6, used for the public region, and partition 7, used for the private region. Figure 2-1 illustrates this partitioning.
Figure 2-1 [diagram: a data disk with slices 0 through 4 plus free space, and the same disk after encapsulation, with the public region on slice 6, the private region on slice 7, and slice 2 still representing the whole disk]
Although the VxVM software repartitions the disk, the original data resides in the same blocks it occupied prior to encapsulation. The data is not moved or overwritten. The VxVM software is responsible for performing the translation necessary to make the data available as VxVM software volumes.
[/etc/vfstab excerpt (pre-encapsulation); only fragments survived extraction: /dev/rdsk/c1d0s2, /dev/rdsk/c1t0d0s0]
Pre-Encapsulation df -k Command
This example output from the df -k command displays the file systems currently mounted from disk c1t3d0.
# df -k
Filesystem            kbytes    used   avail capacity  Mounted on
/dev/dsk/c1t0d0s0    7670973 1536449 6057815    21%    /
/proc                      0       0       0     0%    /proc
fd                         0       0       0     0%    /dev/fd
mnttab                     0       0       0     0%    /etc/mnttab
swap                 1209520      16 1209504     1%    /var/run
swap                 1209520      16 1209504     1%    /tmp
Partition  Tag  Mount Directory
0          0    /fs1   <-- Mount points
1          0    /fs2   <--
2          5
3          0    /fs3   <--
4          0    /fs4   <--
Notice that the beginning sector of unallocated space is the last sector of partition 4 plus 1.
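As a worked check of that note, the arithmetic can be done in shell. The figures below assume this example disk's geometry (3591 sectors per cylinder, computed from 17682084 blocks over 4924 cylinders, with partition 4 starting at cylinder 3507 and spanning 4197879 sectors, as in the partition tables shown later in this module); they are illustrative, not universal:

```shell
# Sectors per cylinder on this example disk: 17682084 / 4924 = 3591
SECTORS_PER_CYL=3591

# Partition 4 starts at cylinder 3507 and is 4197879 sectors long
P4_START=$((3507 * SECTORS_PER_CYL))
P4_LEN=4197879

# First sector of unallocated space: last sector of partition 4, plus 1
FREE_START=$((P4_START + P4_LEN))
echo "Unallocated space begins at sector $FREE_START"
```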
1. Start the vxdiskadm utility from the shell command line while logged in as the superuser. Type the following:

# vxdiskadm
?    Display help about menu
??   Display help about the menuing system
q    Exit from menus
2. Select menu item 2, Encapsulate one or more disks, to begin the encapsulation process. Type the following:
Encapsulate one or more disks
Menu: VolumeManager/Disk/Encapsulate

Use this operation to convert one or more disks to use the Volume Manager. This adds the disks to a disk group and replaces existing partitions with volumes. Disk encapsulation requires a reboot for the changes to take effect.

More than one disk or pattern may be entered at the prompt. Here are some disk selection examples:

all:     all disks
c3 c4t2: all disks on both controller 3 and controller 4, target 2
c3t4d2:  a single disk (in the c#t#d# naming scheme)
xyz_0:   a single disk (in the enclosure based naming scheme)
xyz_:    all disks on the enclosure whose name is xyz
3. Enter the disk to be encapsulated using c#t#d# notation, and answer yes to the Continue operation? prompt. Type the following:
Select disk devices to encapsulate:
[<pattern-list>,all,list,q,?] c1t3d0

Here is the disk selected.  Output format: [Device_Name]

c1t3d0

Continue operation? [y,n,q,?] (default: y) y

You can choose to add this disk to an existing disk group or to a new disk group. To create a new disk group, select a disk group name that does not yet exist.
4. Enter the name of the disk group for this encapsulated disk to join. In the following example, the disk group name is storagedg. Answer all prompts as appropriate for this particular encapsulation procedure:
Which disk group [<group>,list,q,?] (default: rootdg) storagedg

There is no active disk group named storagedg.

Create a new group named storagedg? [y,n,q,?] (default: y) y

Use a default disk name for the disk? [y,n,q,?] (default: y) n

A new disk group will be created named storagedg and the selected disks will be encapsulated and added to this disk group with disk names that will be specified interactively.

c1t3d0

Continue with operation? [y,n,q,?] (default: y) y

The following disk has been selected for encapsulation.  Output format: [Device_Name]

c1t3d0

Continue with encapsulation? [y,n,q,?] (default: y) y
5. Select a name for the newly encapsulated disk. Although the name is arbitrary, it is a good idea to follow some naming convention. In the following example, the disk name is storage01. Answer the remaining prompts as appropriate for this specific encapsulation procedure:
Enter disk name for c1t3d0 [<name>,q,?] (default: storage01) storage01

A new disk group storagedg will be created and the disk device c1t3d0 will be encapsulated and added to the disk group with the disk name storage01.

Use a default private region length for this disk? [y,n,q,?] (default: y) y
The typical response to the prompt Use a default private region length for this disk? is yes. If the size of the private region in the selected disk group was modified, answer no, and follow the prompts to adjust the size.

Caution: The size of the private region of all member disks in a disk group must be the same.
?    Display help about menu
??   Display help about the menuing system
q    Exit from menus
This example illustrates how the VxVM software views this disk prior to completion of the encapsulation procedure. After the encapsulation process is completed, compare this example to the same information for the encapsulated disk.
Select an operation to perform: list
List disk information
Menu: VolumeManager/Disk/ListDisk

Use this menu operation to display a list of disks. You can also choose to list detailed information about the disk at a specific disk device address.

Enter disk device or "all" [<address>,all,q,?] (default: all) all

DEVICE       DISK        GROUP        STATUS
c1t0d0       rootdisk    rootdg       online
c1t1d0       rootmirror  rootdg       error
c1t2d0       -           -            error
c1t3d0       -           -            error   <--- Disk is in the error state.
c1t4d0       -           -            online
c1t5d0       -           -            online
c1t6d0       -           -            error
c1t16d0      -           -            error
c1t17d0      -           -            error
c1t18d0      -           -            error
c1t19d0      -           -            error
c1t20d0      -           -            online
c1t21d0      -           -            online
c1t22d0      -           -            online

Device to list in detail [<address>,none,q,?] (default: none) c1t3d0

Device:       c1t3d0s2
devicetag:    c1t3d0
type:         sliced
flags:        online error private autoconfig
errno:        Disk is not usable   <--- Disk is not usable
Multipathing information:
numpaths:     2
c1t3d0s2      state=enabled
c2t3d0s2      state=enabled

List another disk device? [y,n,q,?] (default: n) n

Volume Manager Support Operations
Menu: VolumeManager/Disk
?    Display help about menu
??   Display help about the menuing system
q    Exit from menus
A system reboot is required before the disk is available for use by the VxVM software.
[/etc/vfstab excerpt (post-encapsulation); only fragments survived extraction: /dev/rdsk/c1d0s2, /dev/rdsk/c1t0d0s0]
Post-Encapsulation df -k Command
The VxVM software volumes in the following example of the df -k command are displayed as the file systems for /fs1 through /fs4.
bash-2.03# df -k
Filesystem                 kbytes    used   avail capacity  Mounted on
/dev/vx/dsk/rootvol       7670973 1646146 5948118    22%    /
/proc                           0       0       0     0%    /proc
fd                              0       0       0     0%    /dev/fd
mnttab                          0       0       0     0%    /etc/mnttab
swap                      1175152      16 1175136     1%    /var/run
swap                      1175160      24 1175136     1%    /tmp
/dev/vx/dsk/storagedg/fs2 2055705    2073 1991961     1%    /fs2
/dev/vx/dsk/storagedg/fs1 2055705    2073 1991961     1%    /fs1
/dev/vx/dsk/storagedg/fs3 2055705    2073 1991961     1%    /fs3
/dev/vx/dsk/storagedg/fs4 2055705    2073 1991961     1%    /fs4
The unencapsulation process for data disks is described in Unencapsulating Data Disks on page 2-23.
bash-2.03# vxprint -g storagedg -ht
dg storagedg      default      default   127000    1019688341.1137.lowtide

dm storage01      c1t3d0s2     sliced    3590      17678493

v  fs1            -            ENABLED   ACTIVE    4197879  ROUND    -   gen
pl fs1-01         fs1          ENABLED   ACTIVE    4197879  CONCAT   -   RW
sd storage01-B0   fs1-01       storage01 17678492  1        0    c1t3d0  ENA
sd storage01-04   fs1-01       storage01 0         4197878  1    c1t3d0  ENA

v  fs2            -            ENABLED   ACTIVE    4197879  ROUND    -   gen
pl fs2-01         fs2          ENABLED   ACTIVE    4197879  CONCAT   -   RW
sd storage01-03   fs2-01       storage01 4197878   4197879  0    c1t3d0  ENA

v  fs3            -            ENABLED   ACTIVE    4197879  ROUND    -   gen
pl fs3-01         fs3          ENABLED   ACTIVE    4197879  CONCAT   -   RW
sd storage01-02   fs3-01       storage01 8395757   4197879  0    c1t3d0  ENA

v  fs4            -            ENABLED   ACTIVE    4197879  ROUND    -   gen
pl fs4-01         fs4          ENABLED   ACTIVE    4197879  CONCAT   -   RW
sd storage01-01   fs4-01       storage01 12593636  4197879  0    c1t3d0  ENA
The dg File
The dg file in the following example delineates the disk group of which this encapsulated disk is a member.
If there is not enough free space on the disk and additional free space cannot be configured, the situation can be alleviated by temporarily encapsulating the non-conforming disk without a private region, and then mirroring the encapsulated disk volumes to another disk. Once the data is mirrored to a real VxVM software disk, the encapsulated plexes are detached, leaving the data on the mirror. At that point, mirror the mirror disk to another VxVM software disk to complete the operation. Only use this procedure if the data on a non-conforming disk must be brought under VxVM software control. The following procedure is based on SunSolve INFODOC 16051; refer to Appendix A, SunSolve INFODOCs, for a complete copy of this document, and check the SunSolve Online Web site to determine if the INFODOC has been updated.

To encapsulate a non-conforming disk:

1. For each slice on the disk (excluding slice 2), run the following command to prepare the disk for encapsulation without a private region. In this example, only slices 5 and 6 have data on them. Type the following:
2. Use the vxdg -g command to add each of the slices of the target disk to a disk group, as though each slice is a disk, assigning the slice a name. In this example the names NPdisk05 and NPdisk06 are used, as follows:
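The exact invocations are not reproduced in this extract. As a hedged sketch, using the vxdg adddisk form and the slice devices named later in this procedure:

```shell
# Add each data slice to the disk group as if it were a disk
vxdg -g <diskgroup> adddisk NPdisk05=c0t5d10s5
vxdg -g <diskgroup> adddisk NPdisk06=c0t5d10s6
```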
3. On each of the new disks, create a simple volume (not a file system) that spans the entire disk. First use the vxdisk list command to determine the maximum size for the volumes to be created:
bash-2.03# vxdisk list NPdisk05 | grep public
public:    slice=0 offset=0 len=8196096
bash-2.03# vxdisk list NPdisk06 | grep public
public:    slice=0 offset=0 len=9400320
4. Use the len value derived from the vxdisk list command with the vxassist command to create the volumes. The following example names the volumes NPdisk05vol and NPdisk06vol.
vxassist -g <diskgroup> make NPdisk05vol 8196096 layout=nostripe alloc="NPdisk05"
vxassist -g <diskgroup> make NPdisk06vol 9400320 layout=nostripe alloc="NPdisk06"
5. Mirror the volumes to a disk that has enough space to mirror both volumes. In the following example, the volumes are mirrored to a disk named disk01.
vxassist -g <diskgroup> mirror NPdisk05vol layout=nostripe alloc="disk01"
vxassist -g <diskgroup> mirror NPdisk06vol layout=nostripe alloc="disk01"
6. When the mirroring process is complete, remove the original side of each mirror. This removes the plexes on the disk that does not have a private region configured. Type the following:
7. Remove the old disks from the disk group and return them to their original state:

vxdg -g <diskgroup> rmdisk NPdisk05
vxdg -g <diskgroup> rmdisk NPdisk06
vxdisk rm c0t5d10s5
vxdisk rm c0t5d10s6
This leaves two concatenated volumes named NPdisk05vol and NPdisk06vol. These volumes contain the data that was originally located on c#t#d#s5 and c#t#d#s6.
bash-2.03# vxprint -g storagedg -ht
dg storagedg      default      default   1019688341.1137.lowtide

dm storage01      c1t3d0s2     sliced    3590      17678493
dm storage02      c1t4d0s2     sliced    3590      17674902

v  fs1            -            ENABLED   ACTIVE    4197879  ROUND    -   gen
pl fs1-01         fs1          ENABLED   ACTIVE    4197879  CONCAT   -   RW
sd storage01-B0   fs1-01       storage01 17678492  1        0    c1t3d0  ENA
sd storage01-04   fs1-01       storage01 0         4197878  1    c1t3d0  ENA
pl fs1-02         fs1          ENABLED   ACTIVE    4197879  CONCAT   -   RW
sd storage02-01   fs1-02       storage02 0         4197879  0    c1t4d0  ENA
...

Note that all volumes on c1t3d0 (fs1 through fs4) have a value of LAYOUT=CONCAT, which indicates that the volumes were not grown or converted to striped volumes. This means that the unencapsulation process should work on this disk. Each volume has two plexes: fs#-01, the primary, and fs#-02, the mirror. This is verified by printing the partition table of each disk. Additionally, the B0 subdisk is used as a safety mechanism to protect the bootblock of encapsulated disks; it can be safely ignored during this process. The partition tables for these example disks are as follows:
Encapsulated disk
The following capture is of disk c1t3d0. Notice that it has slicing indicative of an encapsulated disk. This is the target disk for the unencapsulation example.

partition> print
Current partition table (original):
Total disk cylinders available: 4924 + 2 (reserved cylinders)

Part       Tag  Flag  Blocks
 0  unassigned   wm   (0/0/0)            0
 1  unassigned   wm   (0/0/0)            0
 2      backup   wu   (4924/0/0) 17682084
 3  unassigned   wm   (0/0/0)            0
 4  unassigned   wm   (0/0/0)            0
 5  unassigned   wm   (0/0/0)            0
 6               wu   (4924/0/0) 17682084  <-- Public
 7               wu   (1/0/0)        3591  <-- Private

Mirror disk
This is a partition print of disk c1t4d0. It is the mirror of c1t3d0 and has the partitioning scheme of a VxVM software disk. This is the disk from which the plexes are detached (unmirrored) during the unencapsulation process.
2. Unmirror all volumes to be unencapsulated, using either the vxassist or the vxplex command. The following example uses the vxplex command:

bash-2.03# vxplex -o rm dis fs1-02
bash-2.03# vxplex -o rm dis fs2-02
bash-2.03# vxplex -o rm dis fs3-02
bash-2.03# vxplex -o rm dis fs4-02
Power user: Alternatively, for disks with multiple mirrors, the vxplex command can be looped:

bash-2.03# for i in 1 2 3 4
> do
> vxplex -o rm dis fs$i-02
> done

3. Use the vxprint command to verify that all mirrors are detached:

bash-2.03# vxprint -g storagedg -ht
dg storagedg      default      default   1019688341.1137.lowtide

dm storage01      c1t3d0s2     sliced    3590      17678493
dm storage02      c1t4d0s2     sliced    3590      17674902

v  fs1            -            ENABLED   ACTIVE    4197879  ROUND    -   gen
pl fs1-01         fs1          ENABLED   ACTIVE    4197879  CONCAT   -   RW
sd storage01-B0   fs1-01       storage01 17678492  1        0    c1t3d0  ENA
sd storage01-04   fs1-01       storage01 0         4197878  1    c1t3d0  ENA
...
Notice that the -02 plexes are missing from the vxprint output for these volumes. This is a clear indication that the mirrors no longer exist. Also, note the subdisk numbers for the surviving plex of each volume; this information is used in the next step of the unencapsulation process.

4. For normal (non-private) partitions, recreate the original partitions on the disk by running the vxmksdpart command on each subdisk, as shown in this example.
Caution: The following steps must be performed exactly as shown, or loss of data may occur. It is very important to fully understand the pre-encapsulation partition layout of the disk; otherwise, the post-encapsulation partition layout could be incorrect. To make sure that the mapping of subdisks to encapsulated partitions is correct, cross-reference the output of the vxprint command with the commented lines of the /etc/vfstab file. These comments were added to the file by the VxVM software during the encapsulation process. This method provides a one-for-one match between subdisks and encapsulated partitions.
bash-2.03# /etc/vx/bin/vxmksdpart -g storagedg storage01-04 0 0x00 0x00
partition> print
Current partition table (original):
Total disk cylinders available: 4924 + 2 (reserved cylinders)

Part       Tag  Flag  Cylinders     Size    Blocks
 0  unassigned   wm   0 - 1168      2.00GB  (1169/0/0)  4197879
 1  unassigned   wm   0             0       (0/0/0)            0
 2      backup   wu   0 - 4923      8.43GB  (4924/0/0)  17682084
 3  unassigned   wm   0             0       (0/0/0)            0
 4  unassigned   wm   0             0       (0/0/0)            0
 5  unassigned   wm   0             0       (0/0/0)            0
 6               wu   0 - 4923      8.43GB  (4924/0/0)  17682084
 7               wu   4923 - 4923   1.75MB  (1/0/0)         3591
Notice that after execution of the vxmksdpart command for the storage01-04 subdisk, the partition table of c1t3d0 is modified to reflect the original sliced location of that subdisk's data. The process continues by creating partitions for the remaining subdisks.
bash-2.03# /etc/vx/bin/vxmksdpart -g storagedg storage01-03 1 0x00 0x00
bash-2.03# /etc/vx/bin/vxmksdpart -g storagedg storage01-02 3 0x00 0x00
bash-2.03# /etc/vx/bin/vxmksdpart -g storagedg storage01-01 4 0x00 0x00
When the process is completed, a printout of the partition table for the c1t3d0 disk shows that all the original partitions were recreated and in the correct locations.
partition> print
Current partition table (original):
Total disk cylinders available: 4924 + 2 (reserved cylinders)

Part       Tag  Flag  Cylinders     Size    Blocks
 0  unassigned   wm   0 - 1168      2.00GB  (1169/0/0)  4197879
 1  unassigned   wm   1169 - 2337   2.00GB  (1169/0/0)  4197879
 2      backup   wu   0 - 4923      8.43GB  (4924/0/0)  17682084
 3  unassigned   wm   2338 - 3506   2.00GB  (1169/0/0)  4197879
 4  unassigned   wm   3507 - 4675   2.00GB  (1169/0/0)  4197879
 5  unassigned   wm   0             0       (0/0/0)            0
 6               wu   0 - 4923      8.43GB  (4924/0/0)  17682084
 7               wu   4923 - 4923   1.75MB  (1/0/0)         3591
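The subdisk-to-slice mapping used in step 4 can be reviewed safely as a dry run before any partitions are written. This sketch only prints the vxmksdpart commands (using the mapping from this example: storage01-04 to slice 0, -03 to slice 1, -02 to slice 3, and -01 to slice 4); it does not execute them:

```shell
# Print, but do not run, the vxmksdpart command for each subdisk
gen_vxmksdpart_cmds() {
  for pair in "storage01-04 0" "storage01-03 1" \
              "storage01-02 3" "storage01-01 4"; do
    # Split "subdisk slice" into positional parameters $1 and $2
    set -- $pair
    echo "/etc/vx/bin/vxmksdpart -g storagedg $1 $2 0x00 0x00"
  done
}

# Review the generated commands before running them for real
gen_vxmksdpart_cmds
```

Cross-check each generated line against the commented entries in /etc/vfstab before executing anything.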
5. Unmount all file systems that are hosted on volumes of this disk. Before the unmount, df -k shows the mounted volumes:

bash-2.03# df -k
Filesystem                 kbytes    used   avail capacity  Mounted on
/dev/vx/dsk/rootvol       7670973 1646148 5948116    22%    /
/proc                           0       0       0     0%    /proc
fd                              0       0       0     0%    /dev/fd
mnttab                          0       0       0     0%    /etc/mnttab
swap                      1174480      16 1174464     1%    /var/run
swap                      1174496      32 1174464     1%    /tmp
/dev/vx/dsk/storagedg/fs2 2055705    2073 1991961     1%    /fs2
/dev/vx/dsk/storagedg/fs1 2055705    2073 1991961     1%    /fs1
/dev/vx/dsk/storagedg/fs3 2055705    2073 1991961     1%    /fs3
/dev/vx/dsk/storagedg/fs4 2055705    2073 1991961     1%    /fs4
When this step is completed, all volumes hosted by this disk are unmounted, as shown in the following example:
bash-2.03# df -k
Filesystem           kbytes    used   avail capacity  Mounted on
/dev/vx/dsk/rootvol 7670973 1646148 5948116    22%    /
/proc                     0       0       0     0%    /proc
fd                        0       0       0     0%    /dev/fd
mnttab                    0       0       0     0%    /etc/mnttab
swap                1176080      16 1176064     1%    /var/run
swap                1176096      32 1176064     1%    /tmp
6. Stop all applications using data on these volumes prior to executing the remaining steps.

7. Edit the /etc/vfstab file by changing the mount statements to reflect partitions instead of volumes:
#device             device              mount     FS    fsck  mount    mount
#to mount           to fsck             point     type  pass  at boot  options
#
#/dev/dsk/c1d0s2    /dev/rdsk/c1d0s2    /usr      ufs   1     yes      -
fd                  -                   /dev/fd   fd    -     no       -
/proc               -                   /proc     proc  -     no       -
/dev/vx/dsk/swapvol -                   -         swap  -     no       -
/dev/vx/dsk/rootvol /dev/vx/rdsk/rootvol /        ufs   1     no       logging
swap                -                   /tmp      tmpfs -     yes      -
#
# Storage
#
/dev/dsk/c1t3d0s0   /dev/rdsk/c1t3d0s0  /fs1      ufs   1     yes      -
/dev/dsk/c1t3d0s1   /dev/rdsk/c1t3d0s1  /fs2      ufs   1     yes      -
/dev/dsk/c1t3d0s3   /dev/rdsk/c1t3d0s3  /fs3      ufs   1     yes      -
/dev/dsk/c1t3d0s4   /dev/rdsk/c1t3d0s4  /fs4      ufs   1     yes      -
#NOTE: volume rootvol (/) encapsulated partition c1t0d0s0
#NOTE: volume swapvol (swap) encapsulated partition c1t0d0s1
8. Remount the file systems using the mountall command, and verify the new mounts:

bash-2.03# mountall
bash-2.03# df -k
Filesystem           kbytes    used   avail capacity  Mounted on
/dev/vx/dsk/rootvol 7670973 1646148 5948116    22%    /
/proc                     0       0       0     0%    /proc
fd                        0       0       0     0%    /dev/fd
mnttab                    0       0       0     0%    /etc/mnttab
swap                1174552      16 1174536     1%    /var/run
swap                1174568      32 1174536     1%    /tmp
/dev/dsk/c1t3d0s1   2055705    2073 1991961     1%    /fs2
/dev/dsk/c1t3d0s3   2055705    2073 1991961     1%    /fs3
/dev/dsk/c1t3d0s4   2055705    2073 1991961     1%    /fs4
/dev/dsk/c1t3d0s0   2055705    2073 1991961     1%    /fs1
It is now safe to start applications that use data contained on these partitions.
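Before restarting applications, it is worth confirming that no data file system is still served from a volume. The here-document below is a stand-in for live df -k output; the field positions assume the standard six-column df -k format.

```shell
# Stand-in for `df -k` output after the remount.
cat > /tmp/df.sample <<'EOF'
/dev/vx/dsk/rootvol 7670973 1646148 5948116 22% /
/dev/dsk/c1t3d0s1 2055705 2073 1991961 1% /fs2
/dev/dsk/c1t3d0s0 2055705 2073 1991961 1% /fs1
EOF
# Count /fs* mounts that are still served from /dev/vx devices;
# anything non-zero would indicate an incomplete vfstab edit.
LEFTOVER=$(awk '$6 ~ /^\/fs/ && $1 ~ /^\/dev\/vx/ {n++} END {print n+0}' /tmp/df.sample)
echo "$LEFTOVER"
```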
9. Remove the volumes hosted by this disk, and then display the remaining disk group configuration:

bash-2.03# vxprint -g storagedg -ht
DG NAME         NCONFIG      NLOG     MINORS   GROUP-ID
DM NAME         DEVICE       TYPE     PRIVLEN  PUBLEN   STATE
RV NAME         RLINK_CNT    KSTATE   STATE    PRIMARY  DATAVOLS  SRL
RL NAME         RVG          KSTATE   STATE    REM_HOST REM_DG    REM_RLNK
V  NAME         RVG          KSTATE   STATE    LENGTH   READPOL   PREFPLEX UTYPE
PL NAME         VOLUME       KSTATE   STATE    LENGTH   LAYOUT    NCOL/WID MODE
SD NAME         PLEX         DISK     DISKOFFS LENGTH   [COL/]OFF DEVICE   MODE
SV NAME         PLEX         VOLNAME  NVOLLAYR LENGTH   [COL/]OFF AM/NM    MODE
DC NAME         PARENTVOL    LOGVOL
SP NAME         SNAPVOL      DCO

dg storagedg    default      default

dm storage01    c1t3d0s2     sliced
dm storage02    c1t4d0s2     sliced
Notice that all volumes are removed.

10. Remove the disk from the disk group. Type the following:
bash-2.03# vxdg rmdisk storage01
bash-2.03# vxdisk rm c1t3d0
bash-2.03# vxdisk list
DEVICE       TYPE      STATUS
c1t0d0s2     sliced    online
c1t1d0s2     sliced    error
c1t2d0s2     sliced    error
c1t4d0s2     sliced    online
c1t5d0s2     sliced    online
c1t6d0s2     sliced    error
c1t16d0s2    sliced    error
c1t17d0s2    sliced    error
c1t18d0s2    sliced    error
c1t19d0s2    sliced    error
c1t20d0s2    sliced    online
c1t21d0s2    sliced    online
c1t22d0s2    sliced    online
The vxdisk list output shows that c1t3d0 was removed from VxVM software control.
11. Remove the public and private partitions using the format utility. Type the following:
partition> print
Current partition table (unnamed):
Total disk cylinders available: 4924 + 2 (reserved cylinders)

Part      Tag    Flag     Cylinders        Size            Blocks
  0 unassigned    wm       0 - 1168        2.00GB    (1169/0/0)  4197879
  1 unassigned    wm    1169 - 2337        2.00GB    (1169/0/0)  4197879
  2     backup    wu       0 - 4923        8.43GB    (4924/0/0) 17682084
  3 unassigned    wm    2338 - 3506        2.00GB    (1169/0/0)  4197879
  4 unassigned    wm    3507 - 4675        2.00GB    (1169/0/0)  4197879
  5 unassigned    wm       0               0         (0/0/0)           0
  6 unassigned    wu       0               0         (0/0/0)           0
  7 unassigned    wu       0               0         (0/0/0)           0
Figure 2-2 shows the partition mapping for an encapsulated boot disk with two system partitions. On the boot disk, rootvol occupies slice 0 with an overlay partition for / (0), swapvol occupies slice 1 with an overlay partition for swap (1), slice 2 overlays the entire disk, slice 4 holds the private region, and the remainder is free space. The mirror disk repeats the overlay partitions for the rootvol and swapvol mirrors.

Figure 2-2 Partition Mapping for an Encapsulated Boot Disk
Encapsulating Boot Disks

Partition mapping for an encapsulated boot disk with four system partitions is shown in Figure 2-3.
The boot disk hosts rootvol and swapvol, with slice 2 and slice 6 forming the public region and the remainder left as free space. Overlay partitions are present for / (0), swap (1), /usr (6), and /var (7) on both the boot disk and the mirror disk.

Figure 2-3 Partition Mapping for an Encapsulated Boot Disk With Four System Partitions
Encapsulating the boot disk brings increased fault resiliency to a server but also brings with it additional system management issues that must be addressed by system administrators. These issues are addressed in this section on boot disk encapsulation.
Volume Restrictions
The rootvol, swapvol, and usr volumes have restrictions that other encapsulated volumes do not have. These restrictions are:

- The root volume (rootvol) must be a member of the rootdg disk group.
- The rootvol, swapvol, and usr volumes must use specific, reserved minor numbers.
- The rootvol, swapvol, usr, and var volumes use restricted mirrors that have overlay partitions created for them. Overlay partitions occupy the same disk blocks as the restricted mirror and are used to boot the system before the vxconfigd daemon is available.
- The rootvol, swapvol, usr, var, and opt volumes cannot be grown or spanned, and cannot occupy a plex that has multiple non-contiguous subdisks. All data associated with encapsulated system partitions must reside in contiguous blocks of space.
- When mirroring the boot disk, the mirror disk must be large enough to hold all plexes on that disk, or mirroring fails for one or more volumes on the encapsulated boot disk.
- The rootvol, swapvol, and usr volumes cannot have dirty region logging (DRL) attached.
Note: The following processes and procedures are standard VERITAS Software Corporation recommendations for encapsulating and mirroring boot disks. Sun best practices for boot disk management are discussed in Examining Sun Enterprise Services Best Practices for VxVM Software-Managed Boot Disks on page 2-48. A detailed boot disk encapsulation example is not presented in this section due to the similarity of that process to the encapsulation process for data disks.
Pre-Encapsulation df -k Command
A df -k command lists the example system's mounted file systems. At this time, this server does not have any application or data file systems configured.

bash-2.03# df -k
Filesystem           kbytes    used   avail capacity  Mounted on
/dev/dsk/c1t0d0s0   7670973 1536449 6057815    21%    /
/proc                     0       0       0     0%    /proc
fd                        0       0       0     0%    /dev/fd
mnttab                    0       0       0     0%    /etc/mnttab
swap                1209520      16 1209504     1%    /var/run
swap                1209520      16 1209504     1%    /tmp
In this example, the space problem was rectified by reducing the size of the swap partition, leaving sufficient unpartitioned free space for use by the encapsulation process. Additional swap space can be configured using swap files to supplement the reduced swap partition size. The corrected partition map now shows unallocated space.
bash-2.03# prtvtoc /dev/rdsk/c1t0d0s2
* /dev/rdsk/c1t0d0s2 partition map
*
* Dimensions:
*     512 bytes/sector
*     133 sectors/track
*      27 tracks/cylinder
*    3591 sectors/cylinder
*    4926 cylinders
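The dimensions printed by prtvtoc are self-consistent and can be verified with shell arithmetic:

```shell
# sectors/track * tracks/cylinder gives sectors/cylinder, and that value
# times the 4924 accessible cylinders gives the size of slice 2.
SECTORS_PER_TRACK=133
TRACKS_PER_CYL=27
SECTORS_PER_CYL=$((SECTORS_PER_TRACK * TRACKS_PER_CYL))
DISK_SECTORS=$((SECTORS_PER_CYL * 4924))
echo "$SECTORS_PER_CYL"   # 3591, as printed by prtvtoc
echo "$DISK_SECTORS"      # 17682084, the whole-disk block count
```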
Note: It is a VxVM software best practice to always configure boot disks with free unpartitioned space at the beginning or end of the disk in the event that the disk is encapsulated at another time.
Post-Encapsulation df -k Command
The following example of the df -k command shows that / is mounted using the new encapsulated root volume.
bash-2.03# df -k
Filesystem           kbytes    used   avail capacity  Mounted on
/dev/vx/dsk/rootvol 7670973 1646089 5948175    22%    /
/proc                     0       0       0     0%    /proc
fd                        0       0       0     0%    /dev/fd
mnttab                    0       0       0     0%    /etc/mnttab
swap                1175352      16 1175336     1%    /var/run
swap                1175360      24 1175336     1%    /tmp
After encapsulation, the boot disk configuration includes the following:

- A VxVM software disk called rootdisk. The rootdisk is the default disk name for an encapsulated disk when using the vxinstall or the vxdiskadm utility to perform the encapsulation.
- One plex for the rootvol volume, containing two subdisks:
  - rootdisk-B0, a shadow subdisk used to preserve an encapsulated disk boot block.
  - rootdisk-02, the encapsulated / partition.
- One plex for the swapvol volume. Subdisk rootdisk-01 is the encapsulated swap partition.
bash-2.03# vxprint -ht
Disk group: rootdg

DG NAME         NCONFIG      NLOG     MINORS   GROUP-ID
DM NAME         DEVICE       TYPE     PRIVLEN  PUBLEN   STATE
RV NAME         RLINK_CNT    KSTATE   STATE    PRIMARY  DATAVOLS  SRL
RL NAME         RVG          KSTATE   STATE    REM_HOST REM_DG    REM_RLNK
V  NAME         RVG          KSTATE   STATE    LENGTH   READPOL   PREFPLEX UTYPE
PL NAME         VOLUME       KSTATE   STATE    LENGTH   LAYOUT    NCOL/WID MODE
SD NAME         PLEX         DISK     DISKOFFS LENGTH   [COL/]OFF DEVICE   MODE

dg rootdg       default      default  0        1019673916.1025.lowtide

dm rootdisk     c1t0d0s2     sliced   0        17682083 -

sd rootdiskPriv -            rootdisk 15581349 3590     -         c1t0d0   PRIVATE

v  rootvol      -            ENABLED  ACTIVE   15581349 ROUND     -        root
pl rootvol-01   rootvol      ENABLED  ACTIVE   15581349 CONCAT    -        RW
sd rootdisk-B0  rootvol-01   rootdisk 15581348 1        0         c1t0d0   ENA
sd rootdisk-02  rootvol-01   rootdisk 0        15581348 1         c1t0d0   ENA

v  swapvol      -            ENABLED  ACTIVE   1052163  ROUND     -        swap
pl swapvol-01   swapvol      ENABLED  ACTIVE   1052163  CONCAT    -        RW
sd rootdisk-01  swapvol-01   rootdisk 15584939 1052163  0         c1t0d0   ENA
bash-2.03# vxprint -ht -g rootdg
DG NAME         NCONFIG      NLOG     MINORS   GROUP-ID
DM NAME         DEVICE       TYPE     PRIVLEN  PUBLEN   STATE
RV NAME         RLINK_CNT    KSTATE   STATE    PRIMARY  DATAVOLS  SRL
RL NAME         RVG          KSTATE   STATE    REM_HOST REM_DG    REM_RLNK
V  NAME         RVG          KSTATE   STATE    LENGTH   READPOL   PREFPLEX UTYPE
PL NAME         VOLUME       KSTATE   STATE    LENGTH   LAYOUT    NCOL/WID MODE
SD NAME         PLEX         DISK     DISKOFFS LENGTH   [COL/]OFF DEVICE   MODE
SV NAME         PLEX         VOLNAME  NVOLLAYR LENGTH   [COL/]OFF AM/NM    MODE
DC NAME         PARENTVOL    LOGVOL
SP NAME         SNAPVOL      DCO

dg rootdg        default      default   0        1019673916.1025.lowtide

v  rootvol       -            ENABLED   ACTIVE   15581349 ROUND    -       root
pl rootvol-01    rootvol      ENABLED   ACTIVE   15581349 CONCAT   -       RW
sd rootdisk-B0   rootvol-01   rootdisk  17678492 1        0        c1t0d0  ENA
sd rootdisk-02   rootvol-01   rootdisk  0        15581348 1        c1t0d0  ENA
pl rootvol-02    rootvol      ENABLED   ACTIVE   15581349 CONCAT   -       RW
sd rootmirror-01 rootvol-02   rootmirror 0       15581349 0        c1t22d0 ENA

v  swapvol       -            ENABLED   ACTIVE   1052163  ROUND    -       swap
pl swapvol-01    swapvol      ENABLED   ACTIVE   1052163  CONCAT   -       RW
sd rootdisk-01   swapvol-01   rootdisk  15584939 1052163  0        c1t0d0  ENA
pl swapvol-02    swapvol      ENABLED   ACTIVE   1052163  CONCAT   -       RW
sd rootmirror-02 swapvol-02   rootmirror 15581349 1052163 0        c1t22d0 ENA
File List
A directory list of files specific to c1t0d0 is shown in the following example.

bash-2.03# pwd
/etc/vx/reconfig.d/disk.d/c1t0d0
bash-2.03# ls -las
total 12
2 drwxr-xr-x  2
2 drwxr-xr-x  4
2 -rw-r--r--  1
2 -rw-r--r--  1
2 -rw-r--r--  1
2 -rw-r--r--  1
Examining Sun Enterprise Services Best Practices for VxVM Software-Managed Boot Disks
The previous sections describe the processes and procedures to encapsulate and mirror a system's boot disk using standard VxVM software recommendations from the VERITAS Software Corporation. This type of encapsulation is found in many sites that have not contracted with Sun Enterprise Services to install the VxVM software and bring the system's boot disk under VxVM software management. The main difference between the standard VxVM software-encapsulated disk and the Sun Enterprise Services management process is that the Sun Enterprise Services method does not leave the root disk as an encapsulated disk. Once the boot disk is mirrored, the encapsulated root disk is detached, initialized, and mirrored to the original boot disk mirror. This removes the B0 subdisk and makes the two VxVM disks used to manage the rootability of the system identical. The complexity of the boot disk configuration is reduced by providing a consistent slice structure between the boot disk and its mirrors. This section addresses the following two methods of bringing a boot disk under VxVM software management using the Sun Enterprise Services best practice recommendations:
- Manual process using the command line
- Scripted process using the Sun Enterprise Installation Services (EIS) CD-ROM
Note: The Sun Enterprise Services best practices for boot disk management are based on guidelines from the Sun BluePrints OnLine document Toward a Reference Configuration for VxVM Managed Boot Disks, part number 806-6197-10.
Simple: Keep the configuration simple. A system administrator with moderate levels of experience should be able to view the boot disk and understand the configuration.

Consistent: Boot disk configurations that are alike across an enterprise result in simpler recovery operations and installation procedures. If system administrators can successfully troubleshoot a boot disk failure on one system, consistency increases the possibility that the administrators can do so on other systems.

Resilient: Design boot media so that a single hardware, driver, or device failure does not cause a system failure. Do not tolerate a single point of failure.
The following are recommended guidelines for configuring VxVM software-managed boot media:

- Use a maximum of three slices for the boot disk. The suggested best practice is to use only the / and swap slices. If necessary, also configure the /var slice as a discrete slice.
- Never configure /usr as a separate slice. All the support files and utilities for the VxVM software are located in the /usr directory. If the /usr volume cannot be mounted, these support files are unavailable.
- Attach mirrors in geographical, not alphabetical, order. The vxdiskadm process mirrors volumes in alphabetical order, so do not use vxdiskadm to mirror the boot media. Mirroring the disk in geographical order ensures that the mirror disk overlay slicing looks identical to the original boot disk. Depending on the release of the VxVM software, this is only an issue if the boot disk is sliced with partitions other than /, swap, and /var.
- Convert the boot disk from an encapsulated disk to an initialized disk. An encapsulated disk is a special case and may be the only encapsulated device in the system. Replacing the encapsulated boot disk with an initialized disk ensures that the mirror disk and the boot disk are exact clones of each other. This reduces the complexity of the VxVM software-managed boot disk configuration.
- Map core operating system volumes to disk slices or partitions. This ensures that the system is bootable with minimal effort if the VxVM software does not start.
- Ensure that all mirrors of the boot disk are bootable and that a device alias is present in the OpenBoot programmable read-only memory (PROM).
- Create a clone disk. The clone disk must be bootable from slices, not volumes, and be able to run VxVM software utilities. This disk is used if there is a complete failure of a VxVM software-managed boot disk that makes the system un-bootable.
Perform the following:

1. Save the boot disk's vtoc to a file for later reference, if needed. Type the following:
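A minimal sketch of step 1, assuming the boot disk is c1t0d0 and a backup location of /var/tmp (both hypothetical names). The command is echoed for review rather than executed here, since prtvtoc only exists on a live Solaris system.

```shell
# Hypothetical device name: adjust DISK for the actual boot disk.
DISK=c1t0d0s2
BACKUP=/var/tmp/vtoc.${DISK}
CMD="prtvtoc /dev/rdsk/${DISK} > ${BACKUP}"
echo "$CMD"   # run this command on the live system to save the vtoc
```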
2. Encapsulate the boot disk, using either the vxinstall or vxdiskadm utility. Boot disk encapsulation is usually completed during the installation and initial setup of the VxVM software by using the vxinstall utility.
3. Initialize the disk to be used as the rootmirror and add it to the rootdg disk group. Type the following:
4. Attach the mirrors in the order of the slices on the encapsulated boot disk.

Note: If the boot disk was sliced with swap as the first slice, reverse the order of mirroring for the / and swap slices.

Perform the following steps:

a. Use the following command to start the procedure to mirror / to the rootmirror disk:

# /etc/vx/bin/vxrootmir rootmirror &

b. Use the following command to start the procedure to mirror the swap slice:

c. Use the following command to start the procedure to mirror the /var slice:
Use the following procedure to display the progress of the mirroring process:
# while true
> do
> vxtask list
> sleep 15
> echo "##################"
> done
TASKID  PTID TYPE/STATE    PCT     PROGRESS
   160       ATCOPY/R   84.39%  0/4197879/3542680 PLXATT rootvol rootvol-02
   163       ATCOPY/R   24.95%  0/4197879/1047304 PLXATT var var-02
##################
TASKID  PTID TYPE/STATE    PCT     PROGRESS
   160       ATCOPY/R   86.00%  0/4197879/3610384 PLXATT rootvol rootvol-02
   163       ATCOPY/R   26.57%  0/4197879/1115256 PLXATT var var-02
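A variation of the monitoring loop exits on its own once vxtask reports no remaining tasks. Because vxtask is only available on a live VxVM system, it is stubbed here with a shell function (state kept in a temporary file) so the loop logic can be exercised; replace the stub with the real command in practice.

```shell
echo 3 > /tmp/tasks.left                  # stub state: tasks finish after 3 polls
vxtask() {                                # stand-in for the real vxtask command
    N=$(cat /tmp/tasks.left)
    N=$((N - 1))
    echo "$N" > /tmp/tasks.left
    if [ "$N" -gt 0 ]; then
        echo "160 - ATCOPY/R 84.39% 0/4197879/3542680 PLXATT rootvol rootvol-02"
    fi
}
POLLS=0
while true
do
    OUT=$(vxtask list)
    POLLS=$((POLLS + 1))
    [ -z "$OUT" ] && break                # no tasks left: mirroring is complete
    sleep 1                               # use sleep 15 on a live system
done
echo "mirroring complete after $POLLS polls"
```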
Wait for the mirroring procedure to complete before continuing this procedure.

5. Disassociate and remove the plexes that reside on the encapsulated boot disk:

# vxplex -g rootdg dis rootvol-01
# vxplex -g rootdg dis swapvol-01
# vxplex -g rootdg dis var-01
# vxedit -fr rm rootvol-01 swapvol-01 var-01
6. Remove rootdisk from the rootdg disk group:

7. Initialize the rootdisk and add it back to the rootdg disk group:
8. Attach the mirrors in the order of the slices on the encapsulated boot disk. This is similar to the process outlined in Step 4.

Note: If the boot disk was sliced with swap as the first slice, reverse the order of mirroring for the / and swap slices.

Perform the following steps:

a. Use the following command to start the procedure to mirror / to the rootdisk:

# /etc/vx/bin/vxrootmir rootdisk &

b. Use the following command to start the procedure to mirror the swap slice:

c. Use the following command to start the procedure to mirror the /var slice:
Use the following procedure to display the progress of the mirroring process:
# while true
> do
> vxtask list
> sleep 15
> echo "##################"
> done
Wait for the mirroring procedure to complete before continuing this procedure.

9. If needed, create the underlying partitions by creating overlay partitions for each partition on the boot disk. Use the vxmksdpart command to create the overlay partitions; vxmksdpart requires the subdisk name, partition number, flags, and tags as input.
Note Depending on the version of the VxVM software used, building overlay partitions for /, swap and /var might not be necessary.
A list of valid flags is shown in Table 2-1.

Table 2-1 Partition Flags

    Name                        Flag
    Mountable, Read and Write   0x00
    Not Mountable               0x01
    Mountable, Read Only        0x10

Valid tags are listed in Table 2-2.

Table 2-2 Partition Tags

    Name                  Tag
    UNASSIGNED            0x00
    BOOT                  0x01
    ROOT                  0x02
    SWAP                  0x03
    USR                   0x04
    BACKUP                0x05
    STAND                 0x06
    VAR                   0x07
    HOME                  0x08
    ALTSCTR               0x09
    CACHE                 0x0a
    VxVM PRIVATE REGION   0x15
    VxVM PUBLIC REGION    0x14
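A small helper built directly from Table 2-2 converts a tag name to the hex value that vxmksdpart expects (only a subset of the tags is shown; extend the case statement as needed):

```shell
# Map a partition tag name from Table 2-2 to its hex value.
tag_value() {
    case "$1" in
        UNASSIGNED) echo 0x00 ;;
        BOOT)       echo 0x01 ;;
        ROOT)       echo 0x02 ;;
        SWAP)       echo 0x03 ;;
        USR)        echo 0x04 ;;
        BACKUP)     echo 0x05 ;;
        STAND)      echo 0x06 ;;
        VAR)        echo 0x07 ;;
        HOME)       echo 0x08 ;;
        *)          echo unknown ;;
    esac
}
tag_value VAR    # prints 0x07
tag_value SWAP   # prints 0x03
```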
Use vxprint and prtvtoc to print partitioning and subdisk information to identify which overlay partitions must be built. These commands are shown in the following example.
# vxprint -qhtg rootdg
dg rootdg        default      default   0        1023554924.1025.lowtide

dm rootdisk      c1t0d0s2     sliced    3590     17678493 -
dm rootmirror    c1t1d0s2     sliced    3590     17674902 -

v  rootvol       -            ENABLED   ACTIVE   4197879  ROUND    -       root
pl rootvol-01    rootvol      ENABLED   ACTIVE   4197879  CONCAT   -       RW
sd rootdisk-B0   rootvol-01   rootdisk  17678492 1        0        c1t0d0  ENA
sd rootdisk-02   rootvol-01   rootdisk  0        4197878  1        c1t0d0  ENA
pl rootvol-02    rootvol      ENABLED   ACTIVE   4197879  CONCAT   -       RW
sd rootmirror-01 rootvol-02   rootmirror 0       4197879  0        c1t1d0  ENA

v  swapvol       -            ENABLED   ACTIVE   1052163  ROUND    -       swap
pl swapvol-01    swapvol      ENABLED   ACTIVE   1052163  CONCAT   -       RW
sd rootdisk-01   swapvol-01   rootdisk  4197878  1052163  0        c1t0d0  ENA
pl swapvol-02    swapvol      ENABLED   ACTIVE   1052163  CONCAT   -       RW
sd rootmirror-02 swapvol-02   rootmirror 4197879 1052163  0        c1t1d0  ENA

v  var           -            ENABLED   ACTIVE   4197879  ROUND    -       fsgen
pl var-01        var          ENABLED   ACTIVE   4197879  CONCAT   -       RW
sd rootdisk-03   var-01       rootdisk  9447920  4197879  0        c1t0d0  ENA
pl var-02        var          ENABLED   ACTIVE   4197879  CONCAT   -       RW
sd rootmirror-03 var-02       rootmirror 5250042 4197879  0        c1t1d0  ENA
# prtvtoc /dev/rdsk/c1t1d0s2
* /dev/rdsk/c1t1d0s2 partition map
*
* Dimensions:
*     512 bytes/sector
*     133 sectors/track
*      27 tracks/cylinder
*    3591 sectors/cylinder
*    4926 cylinders
*    4924 accessible cylinders
*
* Flags:
*   1: unmountable
*  10: read-only
*
* Unallocated space:
*       First     Sector    Last
*       Sector    Count     Sector
*     9455103    8226981  17682083
*
*                          First     Sector    Last
* Partition  Tag  Flags    Sector    Count     Sector  Mount Directory
       0      2    00         7182   4197879   4205060
       1      3    01      4205061   1052163   5257223
       2      5    00            0  17682084  17682083
       3     15    01         3591      3591      7181
       4     14    01         7182  17674902  17682083
Note: In this example, the capture was modified to show the /var slice as not built. This was done to help show how to use the vxmksdpart command to create overlay partitions. The VxVM software version 3.2 creates overlay partitions for /, swap, and /var, and thus you do not need to execute any additional commands. If /opt or other non-system partitions are defined on the boot disk, use vxmksdpart to define those partitions.

In this example, an overlay partition must be built for the /var slice. The subdisk information listed in Table 2-3 is needed as input to the vxmksdpart command.

Table 2-3 Required Subdisk Information

    Subdisk         Slice
    rootdisk-03     5
    rootmirror-03   5
This example output from the prtvtoc command now shows the /var partition as an overlay partition.
# prtvtoc /dev/rdsk/c1t1d0s2
* /dev/rdsk/c1t1d0s2 partition map
*
* Dimensions:
*     512 bytes/sector
*     133 sectors/track
*      27 tracks/cylinder
*    3591 sectors/cylinder
*    4926 cylinders
*    4924 accessible cylinders
*
* Flags:
*   1: unmountable
*  10: read-only
10. Set the dump device to a non-VxVM software disk, if available. If such a disk is not available, use the boot disk. Type the following:
# dumpadm -d /dev/dsk/c0t0d0s1
11. Create the OpenBoot PROM device aliases, if needed. Build the aliases using the eeprom commands nvedit or nvalias at the OpenBoot PROM prompt.

Note: Depending on the version of VxVM software installed, this step might not be necessary.
Caution: This script is distributed as is and is not supported by Sun Microsystems. To gain familiarity with the process, try performing the manual procedure multiple times prior to using this script.
2. Record and save the current configuration of rootdg using the vxprint -qhtg rootdg command. This preserves a copy of the rootdg configuration prior to unencapsulation, as shown in the following example.

bash-2.03# vxprint -qhtg rootdg > /rootdg.print
bash-2.03# more /rootdg.print
dg rootdg        default      default   0        1019673916.1025.lowtide
dm rootdisk      c1t0d0s2     sliced    3590     17678493 -
dm rootmirror    c1t22d0s2    sliced    3590     17674902 -

v  rootvol       -            ENABLED   ACTIVE   15581349 ROUND    -       root
pl rootvol-01    rootvol      ENABLED   ACTIVE   15581349 CONCAT   -       RW
sd rootdisk-B0   rootvol-01   rootdisk  17678492 1        0        c1t0d0  ENA
sd rootdisk-02   rootvol-01   rootdisk  0        15581348 1        c1t0d0  ENA
pl rootvol-02    rootvol      ENABLED   ACTIVE   15581349 CONCAT   -       RW
sd rootmirror-01 rootvol-02   rootmirror 0       15581349 0        c1t22d0 ENA

v  swapvol       -            ENABLED   ACTIVE   1052163  ROUND    -       swap
pl swapvol-01    swapvol      ENABLED   ACTIVE   1052163  CONCAT   -       RW
sd rootdisk-01   swapvol-01   rootdisk  15584939 1052163  0        c1t0d0  ENA
pl swapvol-02    swapvol      ENABLED   ACTIVE   1052163  CONCAT   -       RW
sd rootmirror-02 swapvol-02   rootmirror 15581349 1052163 0        c1t22d0 ENA
3. Capture the current configuration of the disk using the vxdisk utility.

4. Detach all plexes associated with the rootmirror disk. Type the following:
bash-2.03# vxprint -qhtg rootdg -s | grep -i rootmirror | \
awk '{print $3}' > /rmsub.plex
bash-2.03# more /rmsub.plex
rootvol-02
swapvol-02
bash-2.03# for i in `cat /rmsub.plex`
> do
> vxplex -g rootdg dis $i
> vxprint -qhtg rootdg -p $i
> done
pl rootvol-02    -            DISABLED  -        15581349 CONCAT   -       RW
sd rootmirror-01 rootvol-02   rootmirror 0       15581349 0        c1t22d0 ENA
pl swapvol-02    -            DISABLED  -        1052163  CONCAT   -       RW
sd rootmirror-02 swapvol-02   rootmirror 15581349 1052163 0        c1t22d0 ENA
bash-2.03#
Adding the rm option to the vxplex command removes the mirror plexes in addition to performing the disable operation. The command syntax is as follows:

# vxplex -g rootdg -o rm dis plex_name

If executed within a loop, the command syntax is:

> vxplex -g rootdg -o rm dis $i

5. Verify that all rootmirror plexes were detached. Use the vxprint command as follows:
bash-2.03# vxprint -qhtg rootdg
dg rootdg        default      default   0        1019673916.1025.lowtide

dm rootdisk      c1t0d0s2     sliced    3590     17678493 -
dm rootmirror    c1t22d0s2    sliced    3590     17674902 -

pl rootvol-02    -            DISABLED  -        15581349 CONCAT   -       RW
sd rootmirror-01 rootvol-02   rootmirror 0       15581349 0        c1t22d0 ENA
pl swapvol-02    -            DISABLED  -        1052163  CONCAT   -       RW
sd rootmirror-02 swapvol-02   rootmirror 15581349 1052163 0        c1t22d0 ENA

v  rootvol       -            ENABLED   ACTIVE   15581349 ROUND    -       root
pl rootvol-01    rootvol      ENABLED   ACTIVE   15581349 CONCAT   -       RW
sd rootdisk-B0   rootvol-01   rootdisk  17678492 1        0        c1t0d0  ENA
sd rootdisk-02   rootvol-01   rootdisk  0        15581348 1        c1t0d0  ENA

v  swapvol       -            ENABLED   ACTIVE   1052163  ROUND    -       swap
pl swapvol-01    swapvol      ENABLED   ACTIVE   1052163  CONCAT   -       RW
sd rootdisk-01   swapvol-01   rootdisk  15584939 1052163  0        c1t0d0  ENA
Successful completion of the previous steps is critical to the success of this process. If all plexes from the mirror are not disabled, the vxunroot utility fails.

6. Remove rootability using the vxunroot utility as follows:
bash-2.03# /etc/vx/bin/vxunroot
This operation will convert the following file systems from
volumes to regular partitions:
        root swap usr var opt home
Replace volume rootvol with c1t0d0s0.
This operation will require a system reboot. If you choose to
continue with this operation, system configuration will be updated
to discontinue use of the volume manager for your root and swap
devices.
Do you wish to do this now [y,n,q,?] (default: y) y
Restoring kernel configuration...
A shutdown is now required to install the new kernel.
You can choose to shutdown now, or you can shutdown later, at your
convenience.
Do you wish to shutdown now [y,n,q,?] (default: n) n
Please shutdown before you perform any additional volume manager
or disk reconfiguration. To shutdown your system cd to / and type

        shutdown -g0 -y -i6

bash-2.03# init 6
Unencapsulating Boot Disks

7. After the reboot completes, check the devices used for the / and swap partitions. Verify that these partitions use non-VxVM software objects:
bash-2.03# df -k
Filesystem           kbytes    used   avail capacity  Mounted on
/dev/dsk/c1t0d0s0   7670973 1697985 5896279    23%    /
/proc                     0       0       0     0%    /proc
fd                        0       0       0     0%    /dev/fd
mnttab                    0       0       0     0%    /etc/mnttab
swap                 655784      16  655768     1%    /var/run
swap                 655792      24  655768     1%    /tmp
bash-2.03# swap -l
swapfile             dev      swaplo  blocks    free
/dev/dsk/c1t0d0s1    118,145      16  1052144   1052144
2. Record the current configuration of rootdg using the vxprint -qhtg rootdg command. This preserves a copy of the rootdg configuration prior to unencapsulation. Type the following:

3. Capture the current disk configuration using the vxdisk utility:
4. Detach all plexes associated with the rootmirror disk. Use the following script:

bash-2.03# vxprint -qhtg rootdg -s | grep -i rootmirror | \
awk '{print $3}' > /rmsub.plex
bash-2.03# for i in `cat ./rmsub.plex`
> do
> vxplex -g rootdg -o rm dis $i
> done
5. Verify that all rootmirror plexes were detached using the vxprint command as follows:
bash-2.03# vxprint -qhtg rootdg
dg rootdg        default      default   0        1019673916.1025.lowtide

dm rootdisk      c1t0d0s2     sliced    3590     17678493 -
dm rootmirror    c1t22d0s2    sliced    3590     17674902 -

v  rootvol       -            ENABLED   ACTIVE   15581349 ROUND    -       root
pl rootvol-01    rootvol      ENABLED   ACTIVE   15581349 CONCAT   -       RW
sd rootdisk-B0   rootvol-01   rootdisk  17678492 1        0        c1t0d0  ENA
sd rootdisk-02   rootvol-01   rootdisk  0        15581348 1        c1t0d0  ENA

v  swapvol       -            ENABLED   ACTIVE   1052163  ROUND    -       swap
pl swapvol-01    swapvol      ENABLED   ACTIVE   1052163  CONCAT   -       RW
sd rootdisk-01   swapvol-01   rootdisk  15584939 1052163  0        c1t0d0  ENA
Successful completion of the previous steps is critical to the success of this process. If all plexes from the mirror are not disabled, the vxunroot utility fails.

6. Remove rootability using the following manual procedure:

a. Remove the /etc/vx/reconfig.d/state.d/root-done file. Type the following:

bash-2.03# rm /etc/vx/reconfig.d/state.d/root-done

Removal of this file tells the VxVM software that the root disk is no longer encapsulated.

b. Edit the /etc/system and /etc/vfstab files back to their pre-encapsulation state. Be sure to back up each file prior to editing. Perform the following tasks:

- Remove the following lines from the /etc/system file:

rootdev:/pseudo/vxio@0:0
set vxio:vol_rootdev_is_volume=1
- Restore the /etc/vfstab file to its pre-encapsulation state. Edit the /etc/vfstab file and modify the mount statements for /, usr, var, and swap to use the original physical devices. The original partitions were preserved through the use of overlay partitions. Alternatively, copy the /etc/vfstab.prevm file to /etc/vfstab. This option is valid only if there were no changes to the system storage configuration since the boot disk was encapsulated. The edited file looks like the following example:
#device             device              mount     FS    fsck  mount    mount
#to mount           to fsck             point     type  pass  at boot  options
#
#/dev/dsk/c1d0s2    /dev/rdsk/c1d0s2    /usr      ufs   1     yes      -
fd                  -                   /dev/fd   fd    -     no       -
/proc               -                   /proc     proc  -     no       -
/dev/dsk/c1t0d0s1   -                   -         swap  -     no       -
/dev/dsk/c1t0d0s0   /dev/rdsk/c1t0d0s0  /         ufs   1     no       logging
swap                -                   /tmp      tmpfs -     yes      -
c. If non-system (/, usr, var, and swap) partitions exist on the boot disk and do not have an overlay partition already configured, use the vxmksdpart command to recreate the partitions.
Note: This process is similar to the one used to restore partitions for data disk unencapsulation.

7. Reboot the system.

8. Recursively remove boot volumes from the rootdg disk group:
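A sketch of the recursive removal, assuming the standard volume names rootvol and swapvol (add usr, var, or opt if they exist in your rootdg). The commands are written to a file for review rather than executed, since vxedit only exists on a live VxVM system.

```shell
# Generate one vxedit removal command per boot volume.
for VOL in rootvol swapvol
do
    echo "/usr/sbin/vxedit -g rootdg -rf rm $VOL"
done > /tmp/rm.cmds
cat /tmp/rm.cmds
```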
9. Verify that the volumes were removed. Type the following:

bash-2.03# vxprint -qhtg rootdg
dg rootdg        default      default

dm rootdisk      c1t0d0s2     sliced
dm rootmirror    c1t22d0s2    sliced
10. Remove the boot disk from VxVM software control. Type the following:
bash-2.03# vxdg -g rootdg rmdisk rootdisk
Caution: There must be at least one disk in rootdg, or the VxVM software does not start. Do not remove the disk that is used as the root mirror.

11. Verify that the boot disk is removed from VxVM software control:
bash-2.03# vxprint -qhtg rootdg
dg rootdg        default      default   0     1019673916.1025.lowtide

dm rootmirror    c1t22d0s2    sliced    3590  17674902 -
12. Delete public and private region partitions by using the format utility. Type the following:
partition> print
Current partition table (unnamed):
Total disk cylinders available: 4924 + 2 (reserved cylinders)

Part      Tag    Flag     Cylinders        Size            Blocks
  0       root    wm       0 - 4338        7.43GB    (4339/0/0) 15581349
  1       swap    wu    4340 - 4632      513.75MB    (293/0/0)   1052163
  2     backup    wm       0 - 4923        8.43GB    (4924/0/0) 17682084
  3 unassigned    wu       0               0         (0/0/0)           0
  4 unassigned    wu       0               0         (0/0/0)           0
  5 unassigned    wm       0               0         (0/0/0)           0
  6 unassigned    wm       0               0         (0/0/0)           0
  7 unassigned    wm       0               0         (0/0/0)           0
13. Run the vxdctl enable command to update the VxVM software view of installed disks. Type the following:
bash-2.03# vxdctl enable
bash-2.03# vxdisk list
DEVICE       TYPE      DISK         GROUP        STATUS
c1t0d0s2     sliced    -            -            error
c1t1d0s2     sliced    -            -            error
c1t2d0s2     sliced    -            -            error
c1t3d0s2     sliced    -            -            online
c1t4d0s2     sliced    -            -            online
c1t5d0s2     sliced    -            -            online
c1t6d0s2     sliced    -            -            online invalid
c1t16d0s2    sliced    -            -            error
c1t17d0s2    sliced    -            -            error
c1t18d0s2    sliced    -            -            error
c1t19d0s2    sliced    -            -            error
c1t20d0s2    sliced    -            -            online
c1t21d0s2    sliced    -            -            online
c1t22d0s2    sliced    rootmirror   rootdg       online
1. Bring the system to the ok prompt and insert a Solaris OE compact disc (CD) into the CD-ROM drive.

2. Boot the system to single-user mode from the CD-ROM.

3. After the system is booted from the CD-ROM, set the terminal type so that the vi utility works correctly. If TERM=sun does not work, try TERM=vt100. Type the following:

# TERM=vt100; export TERM
4. Execute an fsck on the root file system. Type the following:

# fsck -y /dev/rdsk/c#t#d#s0

5. If the fsck response is clean, mount slice 0 to /a. If fsck cannot repair the root file system, determine the source of the problem and correct it. This guide does not contain explanations of file system corruption or how to repair it. The fsck response must be clean to continue this procedure.
6. Mount the root file system to /a. Type the following:

# mount /dev/dsk/c#t#d#s0 /a

7. Make a backup of /a/etc/system. Type the following:

# cp /a/etc/system /a/etc/system.orig

8. Edit the /etc/system file. Type the following:

# vi /a/etc/system
9. Completely remove the following lines from the system file:

rootdev:/pseudo/vxio@0:0
set vxio:vol_rootdev_is_volume=1

Note: If the disk is re-encapsulated, these lines are added correctly by the process, so there is no harm done by removing them.

10. Make a backup of /a/etc/vfstab. Type the following:
# cp /a/etc/vfstab /a/etc/vfstab.orig
11. Edit the vfstab le back to its original state, pointing /, swap, /usr, and /var to hard partitions on the disk, such as /dev/dsk and /dev/rdsk, rather than to /dev/vx/ entries. Type the following:
# vi /a/etc/vfstab
12. Temporarily comment out all other /dev/vx volumes from the /a/etc/vfstab le by using the # character. This includes le systems like /opt and /export, if they exist. The original /etc/vfstab looks like the following, assuming root is c0t0d0.
---------------------------------------------------------------------------
/dev/vx/dsk/swapvol  -                     -         swap   -   no   -
/dev/vx/dsk/rootvol  /dev/vx/rdsk/rootvol  /         ufs    1   no   -
/dev/vx/dsk/usr      /dev/vx/rdsk/usr      /usr      ufs    1   no   -
/dev/vx/dsk/var      /dev/vx/rdsk/var      /var      ufs    1   no   -
/dev/vx/dsk/export   /dev/vx/rdsk/export   /export   ufs    2   yes  -
swap                 -                     /tmp      tmpfs  -   yes  -
/dev/vx/dsk/datadg/somevol /dev/vx/rdsk/datadg/somevol /somevol ufs 2 yes -
#NOTE: volume rootvol (/) encapsulated partition c0t0d0s0
#NOTE: volume swapvol (swap) encapsulated partition c0t0d0s1
#NOTE: volume usr (/usr) encapsulated partition c0t0d0s5
#NOTE: volume var (/var) encapsulated partition c0t0d0s6
#NOTE: volume export (/export) encapsulated partition c0t0d0s7
---------------------------------------------------------------------------
2-65
13. Make sure that the VxVM software does not start during the next boot.
# touch /a/etc/vx/reconfig.d/state.d/install-db
This is important because if the root disk contains mirrors and the system boots, the mirrors are synced; this corrupts the changes just made.
14. Remove the flag that tells the VxVM software that the root disk is encapsulated.
# rm /a/etc/vx/reconfig.d/state.d/root-done
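Both flag files in steps 13 and 14 work purely by presence. The minimal sketch below is an illustration only; it uses a temporary directory as a stand-in for /a/etc/vx/reconfig.d/state.d to show the boot-time logic the text describes.

```shell
# Stand-in directory for /a/etc/vx/reconfig.d/state.d.
statedir=$(mktemp -d)

touch "$statedir/install-db"   # step 13: presence blocks VxVM startup
rm -f "$statedir/root-done"    # step 14: root disk no longer marked encapsulated

# The VxVM startup scripts skip configuration when install-db exists.
if [ -f "$statedir/install-db" ]; then
  echo "VxVM startup suppressed"
fi
```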
15. Reboot the system for changes to take effect. Type the following:
# reboot
Once rebooted, the system comes up in a partially unencapsulated state with /, /usr, /var, and swap mounted. Note The VxVM software does not start. It can be started manually once the system is booted. 16. Start the VxVM software. Execute the following commands:
# rm /etc/vx/reconfig.d/state.d/install-db
# vxiod set 10
# vxconfigd -m disable
# vxdctl enable
17. Remove the volumes that existed on the encapsulated boot disk. These are generally rootvol, swapvol, usr, and var. This might also include home, opt, or other non-standard root partitions. Use the command vxprint -htg rootdg to list the volumes in rootdg before removing them. Then, for each volume, run the following command:
# /usr/sbin/vxedit -rf rm volume_name
18. Remove the rootdisk from the rootdg disk group. The disk name is usually rootdisk. Type the following:
# /usr/sbin/vxdg -k rmdisk disk_name
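Step 17 can be wrapped in a small loop. The sketch below is a dry run only (it echoes the commands instead of executing them, since vxedit -rf rm is destructive). The volume names are the typical ones named above; on a live system the list would come from vxprint -qhtg rootdg instead.

```shell
# Typical encapsulated boot-disk volumes, per the text above; on a
# live system, derive this list from "vxprint -qhtg rootdg".
for vol in rootvol swapvol usr var; do
  # Dry run: print the removal command rather than running it.
  echo "/usr/sbin/vxedit -rf rm $vol"
done
```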
2-66
19. Re-write the vtoc of the disk so that hard partitions are again defined for the root file systems. There are several ways to put the hard partitions back into the vtoc, including using the fmthard command on a modified /etc/vx/reconfig.d/disk.d/c#t#d#/vtoc file, using the format utility to partition the disk manually, or using the vxmksdpart command. The simplest method, however, is to use the vxedvtoc command.
a. When the VxVM software encapsulates a disk, it makes a record of the old vtoc of the disk. This file is stored for each disk in /etc/vx/reconfig.d/disk.d/c#t#d#. It is stored in a VxVM software-specific format, so it cannot be used as an argument for the fmthard command unless it is modified. The vxedvtoc command is similar to the fmthard command except that it can read this vtoc file and write that vtoc to a disk. The command takes the following form:
vxedvtoc -f filename devicename
b. Assuming that the boot disk is c0t0d0, run the command as follows:
# /etc/vx/bin/vxedvtoc -f /etc/vx/reconfig.d/disk.d/c0t0d0/vtoc /dev/rdsk/c0t0d0s2
# THE ORIGINAL PARTITIONING IS AS FOLLOWS:
#SLICE  TAG   FLAGS   START    SIZE
 0      0x0   0x200   0        0
 1      0x0   0x200   0        0
 2      0x5   0x201   0        8794112
 3      0x0   0x200   0        0
 4      0x0   0x200   0        0
 5      0x0   0x200   0        0
 6      0xe   0x201   0        8794112
 7      0xf   0x201   8790016  4096
# THE NEW PARTITIONING WILL BE AS FOLLOWS:
#SLICE  TAG   FLAGS   START    SIZE
 0      0x0   0x200   0        2048000
 1      0x0   0x200   2048000  2048000
 2      0x5   0x201   0        8794112
 3      0x0   0x201   4096000  2048000
 4      0x0   0x201   6144000  2048000
 5      0x0   0x200   0        0
 6      0x0   0x200   0        0
 7      0x0   0x200   0        0
DO YOU WANT TO WRITE THIS TO THE DISK ? [Y/N] :y
WRITING THE NEW VTOC TO THE DISK
This partitions the disk back to a pre-encapsulation state.
20. Uncomment the entries for the non-root partitions (those other than /, /usr, /var, and swap) from /etc/vfstab, as well as any data volumes.
2-67
In this example, comments were removed from /export and the data volume /somevol:
# vi /etc/vfstab
/dev/dsk/c0t0d0s1  -                   -        swap   -  no   -
/dev/dsk/c0t0d0s0  /dev/rdsk/c0t0d0s0  /        ufs    1  no   -
/dev/dsk/c0t0d0s5  /dev/rdsk/c0t0d0s5  /usr     ufs    1  no   -
/dev/dsk/c0t0d0s6  /dev/rdsk/c0t0d0s6  /var     ufs    1  no   -
/dev/dsk/c0t0d0s7  /dev/rdsk/c0t0d0s7  /export  ufs    2  yes  -
swap               -                   /tmp     tmpfs  -  yes  -
/dev/vx/dsk/datadg/somevol  /dev/vx/rdsk/datadg/somevol  /somevol  ufs  2  yes  -
At this point the root disk is completely free of VxVM software control. The VxVM software daemons are started, and all system file systems should be mounted.
2-68
1. Bring the system to the ok prompt, and insert a Solaris OE CD into the CD-ROM drive.
2. Boot the system to single-user mode from the CD-ROM. Type the following:
3. After the system is booted from the CD-ROM, set the terminal type so that the vi utility works correctly. If TERM=sun does not work, try TERM=vt100.
# TERM=vt100;export TERM
4. Run the fsck utility on the root file system. Type the following:
# fsck -y /dev/rdsk/c#t#d#s0
5. If the fsck response is clean, mount slice 0 to /a. If fsck cannot repair the root file system, determine the source of the problem and correct it. This guide does not explain file system corruption or how to repair it; the fsck response must be clean to continue this procedure.
6. Mount the root file system to /a. Type the following:
# mount /dev/dsk/c#t#d#s0 /a
7. Make a backup of /a/etc/system. Type the following:
# cp /a/etc/system /a/etc/system.orig
8. Edit the /etc/system file. Type the following:
# vi /a/etc/system
9. Comment out the following lines by using double asterisks (**):
10. Make a backup of /a/etc/vfstab. Type the following:
# cp /a/etc/vfstab /a/etc/vfstab.orig
11. Edit the vfstab file back to its original state, pointing /, swap, /usr, and /var to hard partitions on the disk, such as /dev/dsk and /dev/rdsk entries, rather than to /dev/vx/ entries. Type the following:
# vi /a/etc/vfstab
12. Temporarily comment out all other /dev/vx volumes from the /a/etc/vfstab file by using the # character. This includes file systems like /opt and /export, if they exist.
2-69
The original /etc/vfstab looks like the following, assuming root is c0t0d0:
---------------------------------------------------------------------------
/dev/vx/dsk/swapvol  -                     -        swap   -  no   -
/dev/vx/dsk/rootvol  /dev/vx/rdsk/rootvol  /        ufs    1  no   -
/dev/vx/dsk/usr      /dev/vx/rdsk/usr      /usr     ufs    1  no   -
/dev/vx/dsk/var      /dev/vx/rdsk/var      /var     ufs    1  no   -
/dev/vx/dsk/export   /dev/vx/rdsk/export   /export  ufs    2  yes  -
swap                 -                     /tmp     tmpfs  -  yes  -
/dev/vx/dsk/datadg/somevol  /dev/vx/rdsk/datadg/somevol  /somevol  ufs  2  yes  -
#NOTE: volume rootvol (/) encapsulated partition c0t0d0s0
#NOTE: volume swapvol (swap) encapsulated partition c0t0d0s1
#NOTE: volume usr (/usr) encapsulated partition c0t0d0s5
#NOTE: volume var (/var) encapsulated partition c0t0d0s6
#NOTE: volume export (/export) encapsulated partition c0t0d0s7
---------------------------------------------------------------------------
13. Make sure the VxVM software does not start during the next boot. Type the following:
# touch /a/etc/vx/reconfig.d/state.d/install-db
This is important because if the root disk contains mirrors and the system boots, the mirrors are synced; this corrupts the changes just made.
14. Remove the flag that tells the VxVM software that the root disk is encapsulated.
# rm /a/etc/vx/reconfig.d/state.d/root-done
2-70
15. Reboot the system for changes to take effect. Type the following:
# reboot
Once rebooted, the system comes up in an unencapsulated state with /, /usr, /var, and swap mounted. At this point a basic or functional unencapsulation is complete. Do not leave the system in this state permanently; it is a state that is useful for troubleshooting and system maintenance. When problems with the system are resolved and it is ready to be re-encapsulated, perform the following series of commands:
# touch /etc/vx/reconfig.d/state.d/root-done
# rm /etc/vx/reconfig.d/state.d/install-db
# cp /a/etc/vfstab.orig /a/etc/vfstab
# cp /a/etc/system.orig /a/etc/system
# reboot
2-71
Data Disks
Data disks fail unencapsulation if any of the following is true:
- Any encapsulated partition was grown or has a modified layout.
- The encapsulated disk failed and was replaced.
- The encapsulated disk was non-conforming and was encapsulated using the procedure in Encapsulating a Non-Conforming Disk on page 2-20.
If the data on the disk must be removed from VxVM software control, back up the data and restore it to a non-VxVM software disk. Data must be restored because the encapsulated disk's original mapping of partitions to blocks within the public region changes when the disk is replaced and synced. This prevents the vxmksdpart command from properly mapping subdisks within the public region to physical partitions.
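A back-up-and-restore pass of the kind described above might look like the following dry run. This is an illustration only: it echoes the commands instead of running them, and the tape device and target slice names are placeholders, not values from the original text; /somevol is the sample file system used elsewhere in this module.

```shell
# Dry run: print, rather than execute, a ufsdump/ufsrestore sequence
# that moves data off an encapsulated volume onto a plain slice.
# /dev/rmt/0 and c2t0d0s0 are placeholder names.
echo "ufsdump 0f /dev/rmt/0 /somevol"
echo "newfs /dev/rdsk/c2t0d0s0"
echo "mount /dev/dsk/c2t0d0s0 /mnt"
echo "cd /mnt && ufsrestore rf /dev/rmt/0"
```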
Boot Disks
The issues described in this section affect boot disk encapsulation.
2-72
The /, /usr, /var, and swap partitions
There should be no problems with these partitions; the disk unencapsulates successfully using both scripted and manual unencapsulation methods. In particular:
- On a two-slice boot disk (/ and swap partitions), the unencapsulated partition scheme is identical to the pre-encapsulation scheme, except that slice 0 is moved back one or two cylinders from the start of the disk. Modifications to the /etc/vfstab file are unnecessary if manually unencapsulated.
- On three- or four-slice boot disks (/, /usr, /var, and swap partitions), the unencapsulated partition scheme is different from the pre-encapsulated scheme. The /usr and /var partitions are relocated to partitions 6 and 7 if originally configured in partitions 3 and 4. Scripted unencapsulation using the vxunroot command successfully unencapsulates the boot disk and modifies the /etc/vfstab file to reflect the new locations of the /usr and /var partitions. Manual unencapsulation requires manual modification of the /etc/vfstab file to reflect the new locations of the /usr and /var partitions.
Caution If the required modifications to the /etc/vfstab file are not made, the reboot fails. Recovery from this error requires booting from CD-ROM, mounting the root file system, and editing the /etc/vfstab file.
2-73
The /, /usr, /var, and swap partitions plus /opt or /home
The partition scheme can change depending on whether scripted or manual unencapsulation is used:
- Scripted unencapsulation using vxunroot works with no problems. The final partition scheme is changed from the original, which is reflected in the /etc/vfstab file. The system boots with no problems.
- Manual unencapsulation can be confusing if the original (pre-encapsulation) partition scheme is not known. Determining where the original /, /usr, /var, and swap partitions were located is easy; output from the format utility delineates those partitions. The confusing part is determining which of the preserved system partitions (such as /opt and /home) is which if the encapsulated boot disk has both. Additionally, manual unencapsulation procedures must be modified to reflect the following:
  - The vxmksdpart and vxedvtoc commands are not necessary because the disk is already partitioned. All system partitions (/, /usr, /var, and swap) and encapsulated system partitions (/opt and /home) are visible. Additionally, the data these commands use is invalid.
  - Use the format utility only to remove the public or private region partitions.
  - Edit the /etc/vfstab file to reflect the new locations for any relocated partitions. If /usr was originally in slice 4 and now occupies slice 6, this must be reflected in the /etc/vfstab file prior to a system reboot.
Note If the Sun Enterprise Services best practices boot disk management processes are followed, the partitioning concerns described in Encapsulated Disk Was Replaced on page 2-73 are eliminated.
2-74
- Install the VxVM software
- Encapsulate a boot disk
- Unencapsulate a boot disk using the vxunroot command
- Unencapsulate a boot disk using manual methods
- Unencapsulate a boot disk that has a replaced primary mirror
- Encapsulate a data disk
- Unencapsulate a data disk
- Encapsulate a non-conforming data disk
Preparation
To prepare for this exercise:
- Identify four disks in addition to the boot disk to use as mirror and data disks.
- Make sure that the boot disk has the /, swap, /usr, /var, and /opt partitions. If the boot disk is not configured this way, have the instructor perform a re-flash installation on your system using the proper boot disk configuration for this lab.
- Ask your instructor for the location of the VxVM software packages, patches, and supporting Solaris OE software.
- Have paper and writing instruments for taking notes.
2-75
- The prtvtoc output
- The format utility partition print
- The df -k output
- Contents of the /etc/vfstab file
Save the file as /bootdisk_capture. The information in this file is used later in this lab exercise.
2. Install the VxVM software release 3.2 packages.
3. Install all VxVM software release 3.2 patches as indicated by the instructor.
4. Use vxinstall to configure the system as follows:
   a. Do not use enclosure-based naming.
   b. Select custom install.
   c. Encapsulate the boot disk as follows:
      1. Assign it a VxVM software disk name of rootdisk (default).
      2. Accept the default private region size.
5. Reboot the system.
6. Open /bootdisk_capture using a text editor and capture the following post-encapsulation boot disk information:
- The prtvtoc output
- The format utility partition print
- The df -k output
- Contents of the /etc/vfstab file
- The vxprint -qhtg rootdg output
2-76
Exercise: Encapsulating Disks
7. Mirror the encapsulated boot disk using the Sun Enterprise Services best practices procedure (see Examining Sun Enterprise Services Best Practices for VxVM Software-Managed Boot Disks on page 2-48). Assign the mirror disk a VxVM software disk name of rootmirror.
8. While the boot disk is being mirrored, answer the following questions:
   a. Describe the difference between the pre- and post-encapsulation boot disk partition configuration. Use the contents of /bootdisk_capture as a guide.
   ________________________________________________________
   ________________________________________________________
   b. What directory contains both pre- and post-encapsulation configuration information about the boot disk?
   ________________________________________________________
   ________________________________________________________
   c. State the purpose of the following files:
9. After the boot disk is mirrored, use vxprint to capture the post-mirror configuration. Copy this information to the /bootdisk_capture file for use later in this lab.
2-77
2-78
Once the encapsulation process is complete and the system has rebooted, mirror the boot disk using the Sun Enterprise Services best practice procedure (see Examining Sun Enterprise Services Best Practices for VxVM Software-Managed Boot Disks on page 2-48). Assign the mirror disk a VxVM software disk name of rootmirror.
Task 4 Manually Unencapsulating a Boot Disk When Booted From the CD-ROM
Complete the following steps:
1. Unencapsulate the boot disk using the manual procedure when booted from CD-ROM described in Unencapsulating When Booted From the CD-ROM on page 2-64. Be sure to recover the encapsulated /opt partition using the vxedvtoc command outlined in this procedure. Be patient; this procedure reboots the system multiple times.
2. Was the unencapsulation successful?
_____________________________________________________________
_____________________________________________________________
3. If the unencapsulation was not successful, what do you think went wrong?
_____________________________________________________________
_____________________________________________________________
_____________________________________________________________
_____________________________________________________________
_____________________________________________________________
2-79
4. Is the mistake recoverable?
_____________________________________________________________
_____________________________________________________________
5. Describe the process used to recover and successfully unencapsulate the boot disk.
_____________________________________________________________
_____________________________________________________________
_____________________________________________________________
_____________________________________________________________
_____________________________________________________________
_____________________________________________________________
6. Re-encapsulate the boot disk using the vxdiskadm utility.
7. After the system reboots, re-mirror the boot disk using the Sun Enterprise Services best practices procedure (see Examining Sun Enterprise Services Best Practices for VxVM Software-Managed Boot Disks on page 2-48). Assign the mirror disk a VxVM software disk name of rootmirror.
Caution Do not continue the lab without re-mirroring the boot disk. The next task requires that the boot disk be mirrored.
- Create two partitions minimum (512-megabyte maximum size).
- Leave partitions 6 and 7 unallocated.
3. Build file systems on each of the partitions.
2-80
4. Create mount points for each of the partitions on the data disk, and verify that the new file systems successfully mount.
5. Update the /etc/vfstab file to auto-mount these file systems during system reboot.
6. Verify that the file systems mount from the /etc/vfstab file.
7. Use prtvtoc, the df command, and the format utility to capture pre-encapsulation and mount information to the /datadisk_capture file. Also capture the contents of the /etc/vfstab file. This information is used later in this lab exercise.
8. Encapsulate this disk using the vxdiskadm utility. While encapsulating this disk, you are asked for a disk group for this disk to join. Create a new disk group called datadg, or use rootdg.
9. After the system reboots, verify that the encapsulation was successful using the df -k and vxprint commands.
10. Mirror the encapsulated data disk using a disk you identified for this purpose and the vxdiskadm utility.
11. While the data disk is being mirrored, answer the following questions:
   a. What are the differences between the pre- and post-encapsulation partition configuration for this disk?
   ________________________________________________________
   ________________________________________________________
   b. How many reboots did the system execute?
   ________________________________________________________
   ________________________________________________________
   c. Using output from the execution of a df -k command, contrast the differences in the devices used to mount the newly encapsulated file systems.
   ________________________________________________________
   ________________________________________________________
2-81
   d. View the /etc/vfstab file and describe the differences between the pre- and post-data disk encapsulation content.
   ________________________________________________________
   ________________________________________________________
12. Wait for the mirror process to complete before proceeding to the next task.
2-82
6. Once the unencapsulation successfully completes, contrast the pre-encapsulation and post-unencapsulation partition scheme of this disk. Are there any differences?
_____________________________________________________________
_____________________________________________________________
If yes, list them.
_____________________________________________________________
_____________________________________________________________
_____________________________________________________________
_____________________________________________________________
7. Unmount and remove all references to the file systems on this disk from the /etc/vfstab file.
- Create two partitions, minimum, using all the available space on the disk. Do not leave any free space.
- Leave at least one partition unused.
3. Build file systems on each of the partitions.
4. Create mount points for each of the partitions on the data disk, and verify that the new file systems successfully mount.
5. Update the /etc/vfstab file to auto-mount these file systems during system reboot.
6. Verify that the file systems mount from the /etc/vfstab file.
2-83
7. Use prtvtoc, the df command, and the format utility to capture pre-encapsulation and mount information to the /datadisk_capture file. Also capture the contents of the /etc/vfstab file. This information is used later in this lab exercise.
8. Encapsulate this disk using the procedure in Encapsulating a Non-Conforming Disk on page 2-20.
2. 3.
Caution It is critical for the successful completion of labs in other modules that you complete the lab house-cleaning tasks. When you are finished, only the rootdg disk group should exist, with the boot disk encapsulated and mirrored.
2-84
Exercise Summary
Discussion
Take a few minutes to discuss what experiences, issues, or discoveries you had during the lab exercise.
2-85
Task 1 Solutions
Complete the following steps:
1. Open a text editor such as vi and capture the following pre-encapsulation boot disk information:
- The prtvtoc output
- The format utility partition print
- The df -k output
- Contents of the /etc/vfstab file
Save the file as /bootdisk_capture. The information in this file is used later in this lab exercise.
2. Install the VxVM software release 3.2 packages.
3. Install all VxVM software release 3.2 patches as indicated by the instructor.
4. Use vxinstall to configure the system as follows:
   a. Do not use enclosure-based naming.
   b. Select custom install.
   c. Encapsulate the boot disk as follows:
      1. Assign it a VxVM software disk name of rootdisk (default).
      2. Accept the default private region size.
5. Reboot the system.
6. Open /bootdisk_capture using a text editor and capture the following post-encapsulation boot disk information:
- The prtvtoc output
- The format utility partition print
- The df -k output
2-86
7. Mirror the encapsulated boot disk using the Sun Enterprise Services best practices procedure (see Examining Sun Enterprise Services Best Practices for VxVM Software-Managed Boot Disks on page 2-48). Assign the mirror disk a VxVM software disk name of rootmirror.
8. While the boot disk is being mirrored, answer the following questions:
   a. Describe the difference between the pre- and post-encapsulation boot disk partition configuration. Use the contents of /bootdisk_capture as a guide.
   There are two new disk partitions for the private and public regions. Additionally, /opt was encapsulated into the disk's public region and is no longer visible. The /, swap, /usr, and /var partitions are still visible as overlay partitions.
   b. What directory contains both pre- and post-encapsulation configuration information about the boot disk?
   /etc/vx/reconfig.d
   c. State the purpose of the following files:
- /etc/vx/reconfig.d/state.d/root-done
This file defines by its presence that the root disk is encapsulated.
- /etc/vx/reconfig.d/state.d/install-db
This file defines by its presence that vxinstall was not executed. It prevents the VxVM software daemons from starting.
- /etc/vfstab.prevm
This file holds a copy of the pre-boot disk encapsulation /etc/vfstab file contents.
9. After the boot disk is mirrored, use vxprint to capture the post-mirror configuration. Copy this information to the /bootdisk_capture file for use later in this lab.
2-87
Task 2 Solutions
Complete the following steps:
1. Unencapsulate the boot disk by using the vxunroot command. The procedure for this process is found in Examining Sun Enterprise Services Best Practices for VxVM Software-Managed Boot Disks on page 2-48, but can be successfully used for a five-slice boot disk as described in Unencapsulating a Boot Disk Using the vxunroot Utility on page 2-57. To unencapsulate a five-slice boot disk, use the following command syntax in place of that listed in the procedure to execute the loop to remove the mirrors:
# for i in `cat /rmsub.plex`
> do
> vxplex -g rootdg -o rm dis $i
> done
This substitution results in full removal of the plexes and their subdisks.
2. Was the unencapsulation successful?
It should be. 3. How do you know, and what commands do you use to verify this?
df -k
vxprint
mount
4. Compare the post-unencapsulation partition configuration of the boot disk with the pre-unencapsulation configuration and describe the differences.
Partitions were restored to the pre-encapsulation configuration. The public and private region partitions were removed.
2-88
Task 3 Solutions
Complete the following steps:
1. Encapsulate the boot disk using the vxdiskadm utility, menu selection 6. Use the following configuration:
   a. Assign it a VxVM software disk name of rootdisk (default).
   b. Do not configure it as a spare disk.
   c. Accept all other defaults.
2. Once the encapsulation process is complete and the system has rebooted, mirror the boot disk using the Sun Enterprise Services best practice procedure (see Examining Sun Enterprise Services Best Practices for VxVM Software-Managed Boot Disks on page 2-48). Assign the mirror disk a VxVM software disk name of rootmirror.
Task 4 Solutions
Complete the following steps:
1. Unencapsulate the boot disk using the manual procedure when booted from CD-ROM described in Unencapsulating When Booted From the CD-ROM on page 2-64. Be sure to recover the encapsulated /opt partition using the vxedvtoc command outlined in this procedure. Be patient; this procedure reboots the system multiple times.
2. Was the unencapsulation successful?
It should be.
2-89
3. If the unencapsulation was not successful, what do you think went wrong?
It depends on the problem. 5. Describe the process used to recover and successfully unencapsulate the boot disk.
It depends on the problem.
6. Re-encapsulate the boot disk using the vxdiskadm utility.
7. After the system reboots, re-mirror the boot disk using the Sun Enterprise Services best practices procedure (see Examining Sun Enterprise Services Best Practices for VxVM Software-Managed Boot Disks on page 2-48). Assign the mirror disk a VxVM software disk name of rootmirror.
Caution Do not continue the lab without re-mirroring the boot disk. The next task requires that the boot disk be mirrored.
Task 5 Solutions
Complete the following steps:
1. Select a disk to be used as a data (non-root) disk for encapsulation.
2. Use the format utility to create the following partition configuration for this disk:
   - Create two partitions minimum (512-megabyte maximum size).
   - Leave partitions 6 and 7 unallocated.
3. Build file systems on each of the partitions.
4. Create mount points for each of the partitions on the data disk and verify that the new file systems successfully mount.
5. Update the /etc/vfstab file to auto-mount these file systems during system reboot.
2-90
6. Verify that the file systems mount from the /etc/vfstab file.
7. Use prtvtoc, the df command, and the format utility to capture pre-encapsulation and mount information to the /datadisk_capture file. Also capture the contents of the /etc/vfstab file. This information is used later in this lab exercise.
8. Encapsulate this disk using the vxdiskadm utility. While encapsulating this disk, you are asked for a disk group for this disk to join. Create a new disk group called datadg, or use rootdg.
9. After the system reboots, verify that the encapsulation was successful using the df -k and vxprint commands.
10. Mirror the encapsulated data disk using a disk you identified for this purpose and the vxdiskadm utility.
11. While the data disk is being mirrored, answer the following questions:
   a. What are the differences between the pre- and post-encapsulation partition configuration for this disk?
   The post-encapsulated disk has only partitions 6 and 7. The original partitions were encapsulated in slice 6, the public region. Slice 7 is the private region.
   b. How many reboots did the system execute?
   One.
   c. Using output from the execution of a df -k command, contrast the differences in the devices used to mount the newly encapsulated file systems.
   Original devices were normal Solaris OE devices, which use /dev/dsk/c#t#d#s# addresses. Post-encapsulation devices use the VxVM software volume names, such as /dev/vx/dsk/data1 or other volume names.
   d. View the /etc/vfstab file and describe the differences between the pre- and post-data disk encapsulation content.
   The /etc/vfstab file now uses the VxVM software volumes as devices to mount and fsck. The original device information was saved as comments at the end of the file.
12. Wait for the mirror process to complete before proceeding to the next task.
2-91
Task 6 Solutions
Complete the following steps:
1. Unencapsulate the data disk using the procedure outlined in Unencapsulating Data Disks on page 2-23.
2. Was the unencapsulation successful?
It should be. 3. If the unencapsulation was not successful, what do you think went wrong?
It depends on the problem. 5. Describe the process used to recover and successfully unencapsulate the data disk.
It depends on the problem.
6. Once the unencapsulation successfully completes, contrast the pre-encapsulation and post-unencapsulation partition scheme of this disk. Are there any differences?
No.
If yes, list them.
There should not be any differences.
7. Unmount and remove all references to the file systems on this disk from the /etc/vfstab file.
2-92
Task 7 Solutions
Complete the following steps:
1. Select a disk to be used as a data (non-root) disk for encapsulation.
2. Use the format utility to create the following partition configuration for this disk:
   - Create two partitions, minimum, using all the available space on the disk. Do not leave any free space.
   - Leave at least one partition unused.
3. Build file systems on each of the partitions.
4. Create mount points for each of the partitions on the data disk, and verify that the new file systems successfully mount.
5. Update the /etc/vfstab file to auto-mount these file systems during system reboot.
6. Verify that the file systems mount from the /etc/vfstab file.
7. Use prtvtoc, the df command, and the format utility to capture pre-encapsulation and mount information to the /datadisk_capture file. Also capture the contents of the /etc/vfstab file.
8. Encapsulate this disk using the procedure in Encapsulating a Non-Conforming Disk on page 2-20.
2-93
Module 3
- Define and explain how the dynamic multi-pathing (DMP) functions enhance the availability and accessibility of VxVM software-managed storage devices
- Explain how DMP identifies disks in both pre- and post-version 3.2 of the VxVM software
- Describe how to install and verify DMP
- Enable and disable multi-pathing to selected disks and controllers
- Administer DMP by using the vxdmpadm utility
- Perform start restore and stop restore functions
- Identify common DMP problems
Relevance
Discussion
The following questions are relevant to understanding what DMP and DMP administration are all about:
- What are the key features of DMP?
- How does DMP manage load balancing?
- How does DMP improve the fault resiliency of system storage?
- How is DMP functionality enabled and disabled on a controller and on individual disks?
- How does DMP recognize disks?
- How does DMP interface with the DDL function introduced in the VxVM software version 3.2?
- Is DMP compatible with other Sun Microsystems multi-pathing applications?
- Is DMP compatible with the Sun Microsystems dynamic reconfiguration (DR) software?
- How does DMP interface with Sun StorEdge T3 storage arrays, and how is this different from DMP support of the Sun StorEdge A5000 series of array storage subsystems?
3-2
Additional Resources
The following references provide additional details on the topics discussed in this module:
- VERITAS Volume Manager 3.2 Administrator's Guide. Mountain View, California: VERITAS Software Corporation, August 2001, number 30-000392-011, TechPDF ID 240253.
- VERITAS Volume Manager 3.2 Installation Guide. Mountain View, California: VERITAS Software Corporation, August 2001, number 30-000395-011, TechPDF ID 240256.
- VERITAS Volume Manager 3.2 Troubleshooting Guide. Mountain View, California: VERITAS Software Corporation, August 2001, number 30-000394-011, TechPDF ID 240255.
- VERITAS Volume Manager Storage Administrator 3.2 Administrator's Guide. Mountain View, California: VERITAS Software Corporation, July 2001, number 30-000393-011, TechPDF ID 240257.
- http://storage.east, http://storage.central, and http://storage.west
- SunSolve Online SRDBs and INFODOC 18314, [http://sunsolve.Sun.COM/pub-cgi/search.pl?mode=advanced].
- VERITAS Software Corporation support knowledge base, [http://seer.support.veritas.com/nav_bar/index.asp?content_sURL=%2Fsearch%5Fforms%2Ftechsearch%2Easp].
- Man pages for the vxdmpadm command.
Figure 3-1 Two paths (Path 1 and Path 2) from a host to a Hitachi Data Systems storage array
Load Balancing
Load balancing is the function that attempts to maximize I/O throughput by using the full bandwidth of all paths. Although the goal is the same, load balancing is implemented differently depending on which version of the VxVM software is used.
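The idea of spreading I/Os across all paths can be illustrated with a simple round-robin sketch. This is illustrative only: real DMP balancing is implemented inside the kernel vxdmp driver, and the path names here are made up for the demo.

```shell
#!/bin/bash
# Round-robin sketch: successive I/Os rotate through the available paths.
paths=(c1 c2 c3 c4)      # example path names, not real controllers
n=${#paths[@]}
for io in 0 1 2 3 4 5; do
  # pick the next path in rotation: I/O number modulo path count
  echo "I/O $io -> ${paths[io % n]}"
done
```

With four paths, I/O 4 wraps back to the first path, which is the full-bandwidth behavior the text describes.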
Conversely, DMP creates one metanode per LUN. That is, DMP identifies LUNs and creates a metanode for each LUN. The result is that a disk with multiple paths is seen as a single disk by the VxVM software.
Examining VxVM Software and Dynamic Multi-Pathing

A LUN can be a physical disk, as in the individual disks of a Sun StorEdge A5000 series of array storage subsystems, or it may be a more complex configuration of multiple physical disks, as in a software RAID device such as a Sun StorEdge T3 storage array. The operating system (and therefore the user) sees no difference between a LUN and a disk. For example, an array with five LUNs has five metanodes. If there are four paths from the host to the array, the host has 20 entries under the /dev/dsk tree, and only five entries under the /dev/vx/dmp tree. The following example illustrates multiple paths to a single disk in a Sun StorEdge A5000 Sun Network Storage Array (SENA), with two interface boards (IBs) and two gigabit interface converters (GBICs).
bash-2.03# ls -las /dev/rdsk/*t0d0s0
2 lrwxrwxrwx 1 root root 74 Apr 21 15:22 /dev/rdsk/c1t0d0s0 -> ../../devices/sbus@3,0/SUNW,socal@0,0/sf@0,0/ssd@w210000203713f643,0:a,raw
2 lrwxrwxrwx 1 root root 74 Apr 21 15:22 /dev/rdsk/c2t0d0s0 -> ../../devices/sbus@3,0/SUNW,socal@0,0/sf@1,0/ssd@w220000203713f643,0:a,raw
In the following example, the same disks are managed by DMP, which arbitrarily chooses one of the two paths and creates a single device entry. To see the multiple paths to individual disks, use the vxdmpadm command.
bash-2.03# ls -las /dev/vx/rdmp/*t0d0s0
0 crw-------  1 root  other  74, 0 May 21 15:34 /dev/vx/rdmp/c1t0d0s0
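The device-entry arithmetic described above (one operating system entry per path per LUN, but only one DMP metanode per LUN) can be sketched directly; the LUN and path counts below are the example values from the text.

```shell
#!/bin/bash
# Example values from the text: an array with five LUNs, four host paths.
luns=5
paths=4
os_entries=$((luns * paths))   # one /dev/dsk entry per path per LUN
dmp_entries=$luns              # one /dev/vx/dmp metanode per LUN
echo "/dev/dsk entries:    $os_entries"
echo "/dev/vx/dmp entries: $dmp_entries"
```

The 20-versus-5 difference is exactly why a multi-pathed disk appears only once to the VxVM software.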
The following example shows output from the vxdisk list command to illustrate how the VxVM software sees the same disk shown in the previous examples.
bash-2.03# vxdisk list
DEVICE       TYPE      DISK        GROUP     STATUS
c1t0d0s2     sliced    rootdisk    rootdg    online
c1t3d0s2     sliced    -           -         error
c1t4d0s2     sliced    -           -         online
c1t5d0s2     sliced    -           -         online
c1t6d0s2     sliced    -           -         error
c1t16d0s2    sliced    -           -         error
c1t18d0s2    sliced    -           -         error
c1t19d0s2    sliced    -           -         error
c1t22d0s2    sliced    rootmirror  rootdg    online
One way to verify that the DMP driver is installed correctly and that it is not corrupt is to compare the size of the DMP driver file to that of the driver_OSversion file, which is in the same directory. Both drivers should be the same size, as seen in this example. The /kernel/drv directory holds the 32-bit drivers, and /kernel/drv/sparcv9 holds the 64-bit drivers.
bash-2.03# pwd
/kernel/drv
bash-2.03# ls -las vxdmp*
640 -rw-r--r--  1 root  ...
608 -rw-r--r--  1 root  ...
608 -rw-r--r--  1 root  ...
640 -rw-r--r--  1 root  ...
  4 -rw-r--r--  1 root  ...
bash-2.03# pwd
/kernel/drv/sparcv9
bash-2.03# ls -las vxdmp*
800 -rw-r--r--  1 root  ...
768 -rw-r--r--  1 root  ...
800 -rw-r--r--  1 root  ...
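The size check can be scripted with standard tools. This is a sketch only: on a real system the two files would be the DMP driver and its driver_OSversion counterpart under /kernel/drv; here stand-in files are created so the check itself can be run anywhere.

```shell
#!/bin/bash
# Create two stand-in "driver" files of equal size for the demo.
drv=$(mktemp)
drv_os=$(mktemp)
head -c 640 /dev/zero > "$drv"
head -c 640 /dev/zero > "$drv_os"

# Compare byte counts; a mismatch suggests a corrupt driver file.
s1=$(wc -c < "$drv")
s2=$(wc -c < "$drv_os")
if [ "$s1" -eq "$s2" ]; then
  result="sizes match ($s1 bytes)"
else
  result="sizes differ ($s1 vs $s2 bytes): driver may be corrupt"
fi
echo "$result"
rm -f "$drv" "$drv_os"
```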
Enabling and Disabling DMP

One of the challenges of implementing DMP in this fashion is that, if multi-pathing is disabled or prevented for a disk, the VxVM software sees both paths for that disk. This can lead to administrative confusion. To resolve this problem, the VxVM software suppresses the second path through options provided in the vxinstall and vxdiskadm utilities.
DMP Terminology
Both vxinstall and vxdiskadm use the following terminology to describe enabling and disabling DMP on selected devices:
- Exclude: Excludes a disk or controller from DMP control. Exclude has the following options:
  - Prevent: Prevents multi-pathing operations
  - Suppress: Suppresses the VxVM software operations on the secondary paths to a device
- Include: Returns a previously excluded device to DMP control. Include has the following options:
  - Allow: Allows multi-pathing operations
  - Un-suppress: Makes secondary paths visible to the VxVM software operations
When you are disabling DMP on a device or group of devices, the prevent operation executes first, and the suppress operation executes second. When you are enabling DMP, the un-suppress operation occurs first, and the allow operation occurs second.
Quick Installation examines each disk attached to your system and attempts to create volumes to cover all disk partitions that might be used for file systems or for other similar purposes.
If you want to exclude any devices from being seen by VxVM or not be multipathed by VxDMP then use the Prevent multipathing/Suppress devices from VxVM's view option, before you choose Custom Installation or Quick Installation.
If you do not wish to use some disks with the Volume Manager, or if you wish to reinitialize some disks, use the Custom Installation option. Otherwise, we suggest that you use the Quick Installation option. Hit RETURN to continue.
Options 1 through 4 on the vxinstall utility main menu are used for suppression operations. Options 5 through 7 are used for prevention operations. Option 8 lists suppressed or non-multi-pathed devices. To use these options, select the menu option for the specific operation and follow the scripted prompts. Whenever multi-pathing is prevented and secondary paths are suppressed, the system must be rebooted for these changes to take effect.
Use the vxdiskadm utility menu items 17 and 18 to exclude devices from, and include devices in, DMP operations. Selecting option 17, Prevent multipathing/Suppress devices from VxVM's view, produces the following output:
Exclude Devices
Menu: VolumeManager/Disk/ExcludeDevices

  This operation might lead to some devices being suppressed from
  VxVM's view or prevent them from being multipathed by vxdmp (This
  operation can be reversed using the vxdiskadm command).

Do you want to continue ? [y,n,q,?] (default: y)
Volume Manager Device Operations
Menu: VolumeManager/Disk/ExcludeDevices

 1  Suppress all paths through a controller from VxVM's view
 2  Suppress a path from VxVM's view
 3  Suppress disks from VxVM's view by specifying a VID:PID combination
 4  Suppress all but one paths to a disk
 5  Prevent multipathing of all disks on a controller by VxVM
 6  Prevent multipathing of a disk by VxVM
 7  Prevent multipathing of disks by specifying a VID:PID combination
 8  List currently suppressed/non-multipathed devices

 ?  Display help about menu
 ?? Display help about the menuing system
 q  Exit from menus
To use these options, select the menu option for the specific operation and follow the scripted prompts. A system reboot is required to activate the changes. Selecting the vxdiskadm utility menu option 18, Allow multipathing/Unsuppress devices from VxVM's view, produces the following output:
Include Devices
Menu: VolumeManager/Disk/IncludeDevices

  The devices selected in this operation will become visible to VxVM
  and/or will be multipathed by vxdmp again. Only those devices which
  were previously excluded can be included again.
Volume Manager Device Operations
Menu: VolumeManager/Disk/IncludeDevices

 1  Unsuppress all paths through a controller from VxVM's view
 2  Unsuppress a path from VxVM's view
 3  Unsuppress disks from VxVM's view by specifying a VID:PID combination
 4  Remove a pathgroup definition
 5  Allow multipathing of all disks on a controller by VxVM
 6  Allow multipathing of a disk by VxVM
 7  Allow multipathing of disks by specifying a VID:PID combination
 8  List currently suppressed/non-multipathed devices

 ?  Display help about menu
 ?? Display help about the menuing system
 q  Exit from menus
To use these options, select the menu option for the specific operation and follow the scripted prompts. A system reboot is required to activate the changes.
When vxinstall or vxdiskadm performs any exclude or include operations, these files are modified. The /etc/vx/vxdmp.exclude file is updated for DMP prevent or allow operations. The /etc/vx/vxvm.exclude file is updated for suppress and un-suppress operations. The following example shows the /etc/vx/vxdmp.exclude file after multipathing is disabled on disk c1t0d0.
:::::::::::::::
vxdmp.exclude
:::::::::::::::
exclude_all 0
paths
c1t0d0s2 /sbus@2,0/SUNW,socal@d,10000/sf@0,0/ssd@w210000203713fc9f,0
c2t0d0s2 /sbus@2,0/SUNW,socal@d,10000/sf@1,0/ssd@w220000203713fc9f,0
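The exclude-file format shown above can be inspected with standard tools. As a sketch, the following embeds a sample file in the same layout so the command can be tried without VxVM installed; it lists just the device names recorded under the paths section.

```shell
#!/bin/bash
# Build a sample vxdmp.exclude in the format shown above.
f=$(mktemp)
cat > "$f" <<'EOF'
exclude_all 0
paths
c1t0d0s2 /sbus@2,0/SUNW,socal@d,10000/sf@0,0/ssd@w210000203713fc9f,0
c2t0d0s2 /sbus@2,0/SUNW,socal@d,10000/sf@1,0/ssd@w220000203713fc9f,0
EOF

# Print only the device names listed after the "paths" keyword.
devs=$(awk '/^paths$/ {inpaths=1; next} inpaths && NF {print $1}' "$f")
echo "$devs"
rm -f "$f"
```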
The following example shows the /etc/vx/vxvm.exclude file when the secondary path (c2t0d0) for disk c1t0d0 is suppressed.
::::::::::::::
vxvm.exclude
::::::::::::::
exclude_all 0
paths
c2t0d0s2 /sbus@3,0/SUNW,socal@0,0/sf@1,0/ssd@w220000203713fc9f,0
#
controllers
#
product
#
pathgroups
#
Caution: Do not edit these files manually. Use vxdiskadm to make all updates to these files. If it is necessary to edit these files manually, be very careful, and delete only the lines needed to get an inoperable system back into service.
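If a manual edit is ever unavoidable, take a copy first so the original can be restored. This is a sketch only; the file path below is a stand-in for the real /etc/vx/vxvm.exclude.

```shell
#!/bin/bash
# Stand-in path; on a real system this would be /etc/vx/vxvm.exclude.
f=$(mktemp -d)/vxvm.exclude
printf 'exclude_all 0\npaths\n' > "$f"

# Keep an untouched, permission-preserving copy before any edit.
cp -p "$f" "$f.orig"
cmp -s "$f" "$f.orig" && echo "backup verified"
```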
An example of the vxdisk list command shows that c1t0d0s2 is still usable by the VxVM software, but the secondary path c2t0d0s2 is suppressed and not visible to the software.
bash-2.03# vxdisk list
DEVICE       TYPE      DISK        GROUP     STATUS
c1t0d0s2     sliced    rootdisk    rootdg    online
c1t3d0s2     sliced    -           -         error
c1t4d0s2     sliced    -           -         online
c1t5d0s2     sliced    -           -         online
c1t6d0s2     sliced    -           -         error
c1t16d0s2    sliced    -           -         error
c1t18d0s2    sliced    -           -         error
c1t19d0s2    sliced    -           -         error
c1t22d0s2    sliced    rootmirror  rootdg    online
Administrating DMP With the vxdmpadm Command

It is also possible to list the subpaths to a particular node using the getsubpaths dmpnodename option. This helps to focus on a specific node by eliminating unwanted information and showing the status of the paths for that node, as seen in this example.
# vxdmpadm getsubpaths dmpnodename=c1t1d0s2
NAME         STATE      PATH-TYPE   CTLR-NAME   ENCLR-TYPE   ENCLR-NAME
====================================================================
c1t1d0s2     ENABLED                c1          SENA         SENA0
c2t1d0s2     ENABLED                c2          SENA         SENA0
An example of the vxdmpadm listctlr all command shows the path disabled.
bash-2.03# vxdmpadm listctlr all
CTLR-NAME   ENCLR-TYPE   STATE      ENCLR-NAME
=====================================================
c1          SENA         DISABLED   SENA0
c2          SENA         ENABLED    SENA0
For the VxVM software version 2.5.x, EMC Corporation disk arrays require a license. Use vxdmpdebug and check the following:

DEBUG: is_module_present: vxdmp module is present
DEBUG: is_module_present: vxdmp module is loaded
DEBUG: is_module_present: vxdmp module is installed
DEBUG: dmp_is_link: /dev/vx/dmp/ is NOT a link
DEBUG: dmp_is_link: /dev/vx/rdmp/ is NOT a link
DEBUG: Have Simple disk DMP license
DEBUG: No SSA DMP license
DEBUG: dmp_set_lic: Start
DEBUG: dmp_set_lic: Done
- Make sure that the operating system can see these disks.
- Make sure that I/O can be performed on the disks.

Utilities useful for these checks:

- vxdmpinq
- vxdmpdebug
Note Problems seeing disks are addressed in Module 6, Recovering Disk, Disk Group, and Volume Failures. DMP debugging utilities and troubleshooting suggestions are discussed in Module 4, Troubleshooting Tools and Utilities.
- Configure DMP to exclude a disk device from multi-pathing operations using the vxdiskadm utility.
- Suppress a path from a DMP-excluded disk using the vxdiskadm utility.
- Unsuppress a path from a DMP-excluded disk using the vxdiskadm utility.
- Configure DMP to include a disk device for DMP operations using the vxdiskadm utility.
- View DMP status using the vxdmpadm utility.
- Locate files used to disable and enable DMP.
Preparation
To prepare for this exercise:
- The VxVM software 3.2 must be installed and operational.
- The boot disk must be encapsulated and mirrored.
- A second disk group other than rootdg must exist, with a configured volume that is formatted and mounted.

Open a window, and execute iostat while performing the lab exercises to see the effect on pathing as paths are removed and added. Use the following script to run the iostat command:

while true
do
    iostat -xcnm
    sleep 2
done
Generate looping I/Os to one of the non-root volumes created in the exercises in Module 2, Encapsulating Disks.
Exercise: Operating DMP

View the contents of these files to see the changes that were made.
_____________________________________________________________
5. Un-suppress and allow DMP operations on the disk excluded in the first two tasks of this lab exercise. What commands did you use, and in what order did you execute them?
_____________________________________________________________
_____________________________________________________________
_____________________________________________________________
View the contents of the two exclude files to see what changes were made.
_____________________________________________________________
4. Verify that you were successful.
_____________________________________________________________
What format of the vxdmpadm command did you use?
_____________________________________________________________
_____________________________________________________________
_____________________________________________________________
5. Enable the storage controller you just disabled using the vxdmpadm utility. What format of the vxdmpadm command did you use?
_____________________________________________________________
_____________________________________________________________
_____________________________________________________________
6. Verify that you were successful. What format of the vxdmpadm command did you use?
_____________________________________________________________
_____________________________________________________________
_____________________________________________________________
Exercise Summary
Discussion

Take a few minutes to discuss what experiences, issues, or discoveries you had during the lab exercise.
Task 1 Solutions
Complete the following steps:

1. Use the vxdiskadm utility to disable multi-pathing to one VxVM software disk on your lab team's system. Do not use the boot disk for this task.
   What vxdiskadm main menu option did you use?
   _____________________________________________________________
   How did you activate the change you made?
   Reboot.
2. Use the vxdiskadm utility to suppress the secondary paths to the disk on which you just prevented DMP operations.
   What vxdiskadm main menu option did you use?
   Use option 17, Prevent multipathing/Suppress devices from VxVM's view.
   Did you have to reboot to activate this operation?
   No.
3. View the DMP multi-pathing status to verify that you were successful in the execution of the previous tasks. What commands did you use?
   Use the following:
   - Issue a vxdisk list command.
   - Issue a vxdmpadm listexclude command.
4. What two files were modified to enable the previous operation?
   The files /etc/vx/vxdmp.exclude and /etc/vx/vxvm.exclude.
   View the contents of these files to see the changes that were made.
5. Un-suppress and allow DMP operations on the disk excluded in the first two tasks of this lab exercise. What commands did you use, and in what order did you execute them?
   Use vxdiskadm menu option 18 to unsuppress the secondary path. Use vxdiskadm menu option 18 to include both disk paths in DMP operations.
   View the contents of the two exclude files to see what changes were made.
Task 2 Solutions
Complete the following steps:

1. Use the vxdmpadm utility to list all controllers seen by the Solaris OE. What format of the vxdmpadm command did you use?
   Use vxdmpadm listctlr all.
2. Use the vxdmpadm utility to list all subpaths to a specific node. The DMP node you select depends on the configuration of your system. What format of the vxdmpadm command did you use?
   Use vxdmpadm getsubpaths dmpnodename=c#t#d#s2.
3. Disable a storage controller using the vxdmpadm utility. What format of the vxdmpadm command did you use?
   Use vxdmpadm disable ctlr=full_Solaris_OE_device_path-name.
4. Verify that you were successful. What format of the vxdmpadm command did you use?
   Use vxdmpadm listctlr all.
5. Enable the storage controller you just disabled using the vxdmpadm utility. What format of the vxdmpadm command did you use?
   Use vxdmpadm enable ctlr=full_Solaris_OE_device_path-name.
6. Verify that you were successful. What format of the vxdmpadm command did you use?
   Use vxdmpadm listctlr all.
Task 3 Solutions
Use the vxdmpinq utility to view the serial number of a disk of your choice. What command syntax did you use?
Use /etc/vx/diag.d/vxdmpinq /dev/rdsk/c#t#d#s2.
What is the serial number of the disk you selected?
Depends on the disk selected.
Module 4
- Describe the troubleshooting tools and utilities available for the VxVM software
- Enable vxconfigd logging from the command line and the VxVM software startup scripts
- Reference a VxVM failure to the error messages section of the VxVM software troubleshooting manual
- Use the VxVM software debugging and information-gathering tools and utilities
Relevance
Discussion

The following questions are relevant to understanding how to use the VxVM software tools and utilities to troubleshoot VxVM software failures:
- What tools and utilities are available to system administrators to use in debugging VxVM software problems?
- How is vxconfigd logging enabled?
- What system-level information-gathering tools are available for capturing VxVM software-specific information to use in debugging problems?
Additional Resources
The following references provide additional information on the topics described in this module:
- Service Support (IT Infrastructure Library Series). Stationery Office Books, February 2002, ISBN 0113300158.
- CCTA Staff. Service Delivery. Stationery Office Books, April 2001, ISBN 0113300174.
- Ivor Macfarlane and Colin Rudd. IT Service Management Version 2. United Kingdom: itSMF Ltd, March 2001, ISBN 0-9524706-1-6.
- VERITAS Volume Manager 3.2 Administrator's Guide. Mountain View, California: VERITAS Software Corporation, August 2001, number 30-000392-011, TechPDF ID 240253.
- VERITAS Volume Manager 3.2 Installation Guide. Mountain View, California: VERITAS Software Corporation, August 2001, number 30-000395-011, TechPDF ID 240256.
- VERITAS Volume Manager 3.2 Troubleshooting Guide. Mountain View, California: VERITAS Software Corporation, August 2001, number 30-000394-011, TechPDF ID 240255.
- VERITAS Volume Manager Storage Administrator 3.2 Administrator's Guide. Mountain View, California: VERITAS Software Corporation, July 2001, number 30-000393-011, TechPDF ID 240257.
- http://storage.east, http://storage.central, and http://storage.west
Logging Errors
The VxVM software provides extensive logging capabilities that include the following error logging mechanisms:
This enables maximum vxconfigd logging while suppressing vxconfigd log messages to the standard output and standard error outputs. To enable logging selectively from the command line, do one of the following:

- Enable maximum logging of error and debug messages to /var/vxvm/vxconfigd.log. Type the following:
  # vxconfigd -x 9 -x log
- Enable syslog logging of the VxVM software console messages. Run the following command:
  # vxconfigd -x syslog
  This version of the vxconfigd command logs all Error, Fatal Error, Warning, and Notice messages to the syslog. Debug messages are not logged.
- Enable maximum logging to the vxconfigd log and the syslog file. Type the following:
  # vxconfigd -x 9 -x log -x syslog
To disable vxconfigd logging after it is enabled, use the following syntax: # pkill -9 vxconfigd; vxconfigd -m enable
# to turn on debugging console output, uncomment the following line.
# The debug level can be set higher for more output. The highest debug
# level is 9.
#debug=1   # enable debugging console output
To activate logging after the vxvm-sysboot file is modified, you must reboot the system.
07/03 09:01:17: DEBUG: request_loop: notify_clients 0
07/03 09:01:17: DEBUG: request_loop: notify_clients 1
The vxconfigd logging Debug messages are not described in the VERITAS Volume Manager 3.2 Troubleshooting Guide. To make these messages useful, decode any device-related major and minor number information included in the Debug message.
This message is decoded as follows:

1. Look at the event type:
   event type = DMP_PATH_DISABLE_EVENT
   This indicates a path error to one or more disk devices.
2. The part of the message following the event type defines the disk device affected. The major and minor numbers of that device are listed:
   path 118/48
   To find the cxtxdx number, do a long list on the /dev/dsk directory and search for the major and minor number pair listed in the error message. Use the following form of the ls command:
   # ls -laRL /dev/dsk | grep "118, 48"
   brw-r----- 1 root sys 118, 48 Jun 7 23:47 c2t1d0s0
3. Using this information, the experienced system administrator maps this to a failing piece of hardware. From the preceding example and the ls -l /dev/dsk/c2t1d0s0 command, the hardware address for device c2t1d0s0 is:
   /sbus@3,0/SUNW,socal@0,0/sf@1,0/ssd@w210000203713f96d,0
   This address can now be cross-referenced to a specific I/O bus, card slot, port, and cable. Information about bus mapping is available on the SunSolve Online Web site.
4. The remaining part of the message following the event type points to the DMP metanode. The major and minor numbers of that device are listed:
   dmpnode 68/48
   This device can also be found by using the ls command to list the /dev/vx/dmp directory:
   # ls -laRL /dev/vx/dmp | grep "68, 48"
   brw------- 1 root other 68, 48 Jun 8 09:41 c1t1d0s0
   This clearly defines the DMP metanode device as c1t1d0s0.

Note: DMP devices, metanodes, and other information about DMP were described in Module 3, "Managing Dynamic Multi-Pathing."
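The major/minor lookup in the steps above can be scripted. This is a sketch: the two listing lines below stand in for real `ls -laRL /dev/dsk` output, so the search itself can be run anywhere.

```shell
#!/bin/bash
# Major/minor pair taken from the example error message above.
major=118
minor=48

# Stand-in for `ls -laRL /dev/dsk` output on the failing system.
listing='brw-r-----  1 root sys 118,  32 Jun  7 23:47 c2t0d0s0
brw-r-----  1 root sys 118,  48 Jun  7 23:47 c2t1d0s0'

# Field 5 is the major number (with trailing comma); field 6 is the minor.
device=$(printf '%s\n' "$listing" |
  awk -v maj="$major" -v min="$minor" '$5 == maj "," && $6 == min {print $NF}')
echo "failed path: $device"
```

On a live system the `printf` would be replaced by the actual `ls -laRL /dev/dsk` command.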
Cross-reference this message to the section in the VERITAS Volume Manager 3.2 Troubleshooting Guide on error messages generated by the VxVM software. Use the following procedure to troubleshoot this problem:

1. Go to section 3 in the VERITAS Volume Manager 3.2 Troubleshooting Guide.
2. Go to the section on vxdmp NOTICE messages.
3. Look up the NOTICE message for the failure listed in the messages log. The error message description is as follows:
   Description: A path under the control of the DMP driver failed. The failed device major and minor numbers are supplied in the message.
   Action: None
   Note: The action recommendation is None because the error was not caused by the VxVM software. This is a hardware problem, and there are no VxVM software commands that can fix the error.
4. Find the major and minor numbers, and use the following version of the ls command to find the cxtxdx number:
   # ls -laRL /dev/dsk | grep "118, 48"
   brw-r----- 1 root sys 118, 48 Jun 7 23:47 c2t1d0s0
   The failed path is c2t1d0s0. Map this path to a specific piece of hardware using step 3 of the decoding procedure described earlier in this module.
Using the Debugging Tools and Utilities

The debugging tools are in the /opt/VRTSspt directory. A list of the contents of this directory is as follows:
bash-2.03# ls -R /opt/VRTSspt
FS   VRTSexplorer   VVR   ...

/opt/VRTSspt/FS:
MetaSave   VxBench

/opt/VRTSspt/VRTSexplorer:
README        VRTSexplorer   bin.SunOS   dbed    dbed1
edition.dro   edition.sybed  fw          gcm     isis
lib           main.SunOS     ndmp        sal     salr
samba         spc            spcs        spnas   sybed1

/opt/VRTSspt/VVR:
Client.pl   README.Client_Server   Server.pl   VolSig   vvrmemstat
Note: The FS and VVR tools require additional licenses and are not part of the basic VxVM software. They are not described in this guide.

The VxVM software debugging tools are listed in Table 4-1.

Table 4-1 The VRTSspt Debugging Tools

  Tool            Use                                                      Tool License
  metasave        Gathers metadata from a VxVM file system                 FS
  vxbench         Generates various I/Os for benchmarking file system      FS
                  performance
  VRTSexplorer*   Gathers configuration information about the system
                  and any installed Sun StorEdge products
  vvrmemstat      Gathers Sun StorEdge Volume Replicator (VVR)             VVR
                  memory usage statistics
  volsig          Determines the data consistency between the primary      VVR
                  and secondary volumes in a VVR configuration
  Client.pl,      Simulates traffic between the primary and secondary      VVR
  Server.pl       nodes in a VVR configuration

* Some of the information generated by VRTSexplorer is redundant with the information generated by the Sun Explorer 3.5.0 data collector.
There are additional diagnostic and debugging tools installed as part of the VRTSvxvm package. These tools and utilities are in the /etc/vx/diag.d directory. A list of the VRTSvxvm tools and utilities available for use by system administrators is as follows:
bash-2.03# ls /etc/vx/diag.d
config.d      macros.d      scripts       vxaslkey      vxautoconfig
vxconfigdump  vxdevwalk     vxdmpdbprint  vxdmpdebug    vxdmpinq
vxdmptp       vxkprint      vxprivutil

/etc/vx/diag.d/config.d:
sparcv7   sparcv9

The config.d/sparcv7 and config.d/sparcv9 subdirectories hold per-release copies of the vxautoconfig and vxdevwalk tools, for example vxautoconfig.SunOS_5.6, vxautoconfig.SunOS_5.7, vxautoconfig.SunOS_5.8, and the matching vxdevwalk.SunOS_5.x files.

The macros.d directory (with a sparcv9 subdirectory of the same entries) holds the dmp* macros, including dmp, dmp_cpuiocount, dmp_ctlr, dmp_dev_list, dmp_dmpnode, dmp_errq_buf, dmp_opaths, dmp_path, dmp_print_dev_list_ctlrs, dmp_print_dev_list_dmpnodes, dmp_print_errq, and related entries.

/etc/vx/diag.d/scripts:
fix_lib   fixmountroot   fixsetup   fixstartup   fixunroot

/etc/vx/diag.d/scripts/fix_lib:
fixdevsetup   fixgetmajor
Table 4-2 contains a partial list of the VRTSvxvm tools.

Note: For information on how to use utilities in the /etc/vx/diag.d directory that is not available in this guide, the man pages, SunSolve, the command-line prompt, or the VERITAS Software Corporation support site, call support at Sun Microsystems.

Table 4-2 The VRTSvxvm Debugging Tools

  Tool           Use
  vxprivutil     Allows system administrators to list and modify the
                 private region of disks.
  vxdevwalk      Traverses the dev_info tree in kernel space. It can be
                 used to correlate user-space entries with those in the
                 dev_info tree.
  vxkprint       Prints kernel-space information about VxVM objects such
                 as disk groups, disks, and volumes.
  vxdmpdebug     Creates a DMP configuration dump suitable for sending to
                 Sun second-level support. Used in debugging DMP problems.
  vxdmpinq       Provides output that displays a disk's unique serial
                 number and vendor information. Used in debugging DMP
                 problems.
  vxautoconfig   Prints device, driver, and controller information about
                 a server's attached storage. This is very low-level
                 information and might not be useful to system
                 administrators.
  vxdmpdbprint   Prints the contents of the DMP database. Useful for
                 troubleshooting DMP path and device errors.
Table 4-3 lists scripts found in the /etc/vx/diag.d/scripts directory. Use these scripts to recover a VxVM software configuration that is corrupted and prevents a system from booting. These scripts help reconfigure the VxVM software by defining alternative locations for binaries, which can be located on a CD-ROM, and by mounting boot disks that are VxVM software volumes.

Caution: The /etc/vx/diag.d/scripts scripts are designed to help recover a corrupted VxVM software configuration that prevents a system from booting or mounting the VxVM software devices. Module 2, "Encapsulating Disks," describes how to recover problems with booting from encapsulated disks when the problem does not render a system un-bootable. Use the /etc/vx/diag.d/scripts scripts with caution or under the direction of Sun support.

Table 4-3 The /etc/vx/diag.d/scripts Script Utilities

  Script         Use
  fixmountroot   Mounts a root file system that is encapsulated as a
                 volume. The file system is mounted as read-only, using a
                 file system slice that underlies the root file system
                 volume.
  fixsetup       Configures the VxVM software to run from an alternate
                 directory. This allows the system administrator to make
                 changes to a VxVM software configuration when the VxVM
                 software binaries are not accessible. Run this script
                 prior to running the fixmountroot script.
  fixstartup     Starts the VxVM software running from the VxVM CD-ROM.
                 The script requires some information that the VxVM
                 software normally gets from the root file system. Run
                 the fixstartup command if the device containing the root
                 file system or one of the mirrors of the root file
                 system volume is known. The fixstartup command asks for
                 a disk containing a copy of the root file system, and
                 uses the file system on that disk to get the necessary
                 startup files.
Table 4-3  The /etc/vx/diag.d/scripts Script Utilities (Continued)

Script     Use
fixunroot  Converts system files so that the files no longer require the VxVM software to boot the root file system. Also disables startup of the VxVM software, so that future recovery of a mirrored root volume does not cause corruption.

Caution - After running this script, use caution when bringing up the VxVM software again. If the VxVM software configuration retains a mirrored root volume, starting the VxVM software can cause severe corruption to the root file system.
- uname -a - Displays host and OS revision information.
- ioscan -f - Scans and lists system I/O hardware.
- model - Prints the system model (on systems that do not use SPARC technology).
- isainfo -v - Displays supported instruction sets.
- prtdiag - Displays system configuration and diagnostic data generated by power-on self-test (POST).
- prtconf -v - Displays general system-hardware configuration information.
- showrev -p - Displays the system patch list.
- ps -elf - Shows running processes, including the command-line options used to start them.
- pkginfo -l - Displays installed packages.
- modinfo - Displays loadable modules that are active on the system.
- /etc/system, fstab, bootconf, vfstab, and /etc/name_to_major - Captures key system information files from /etc. (The fstab and bootconf files are not Solaris OE files.)
- /etc/services - Lists Internet services ports and protocols.
- /var/adm/messages - Contains the VxVM software and Solaris OE error messages.
- ls -l /kernel/drv/vx* and /kernel/drv/sparcv9/vx* - Collects a list of installed VxVM software drivers.
- /kernel/drv/*.conf - Collects a complete list of driver configuration files.
- vxlicense -p, vxfsserial -p, and vxliccheck -vp - Displays VxVM software licensing data.
- ls -laR /dev/dsk, ls -laR /dev/rdsk, ls -laRL /dev/dsk, and ls -laRL /dev/rdsk - Displays physical device information, including major and minor numbers.
- ls -laR /dev/es, ls -laRL /dev/es, luxadm (for each es device), and luxadm fcal_s_download - Displays information about installed Sun Enterprise Network Array arrays.
- prtvtoc information - Displays the VTOC for each disk.
- diskinfo - A command (not Solaris OE) that shows a disk's characteristics.
- df -k and mount -v - Lists basic file system information.
- ifconfig -a, netstat -in, netstat -rn, arp -a, and lanscan - Lists basic network information.
- vgdisplay -v - A command (not Solaris OE) that displays information about logical volume manager (LVM) volume groups.
- ls -l /etc/rc* and cp -pR /etc/rc* - Lists and copies boot and startup scripts.
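The collection work behind these files amounts to running each command and saving its output under a collection directory, one file per command. The loop below is an illustrative sketch only, not the actual vxexplorer implementation; the command list and the underscore-based file-naming scheme are assumptions:

```shell
# Run a small set of commands and save each output in a collection
# directory, one file per command (spaces and dashes become underscores).
outdir=/tmp/explorer.$$
mkdir -p "$outdir"
for cmd in "uname -a" "df -k" "ps -e"; do
    name=$(echo "$cmd" | tr ' -' '__')
    $cmd > "$outdir/$name" 2>&1
done
ls "$outdir"
```

Each saved file then carries the name of the command that produced it, which is the same pattern visible in the VRTSexplorer output directory listings later in this section.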
- cp -pR /etc/inet, cp -pR /etc/*router, cp -pR /etc/host*, cp -pR /etc/*nodename*, and cp -pR /etc/nsswitch.conf - Obtains copies of network configuration files.
- cp -pR /var/spool/cron - Obtains information on cron jobs.
- eeprom command output - Displays OpenBoot PROM information.
- cp -pR /etc/dfs - Obtains information on exported NFS file systems.
- vxdctl mode - Displays the current operating mode of vxconfigd. Effectively displays the state of the VxVM software.
- vxprint outputs - Displays the configuration of all VxVM software objects.
- vxkprint output - Displays the VxVM software's configuration as seen by the kernel. This output can occasionally show useful information.
- vxdisk list output - Shows state and configuration information for VxVM software disks.
- vxdg list output - Displays disk group ownership and state information. Shows which disk groups are imported.
- /var/vxvm contents - Lists and displays the contents of the temp_db files. These files can be useful in troubleshooting odd disk group problems.
- ls -laR /dev/vx/* - Displays the VxVM software device tree.
- /etc/vx contents - Contains various VxVM software configuration and library files.
- /VXVM*UPGRADE* contents - Contains information pertaining to the VxVM software configuration before upgrade. This file is created by the upgrade_start script.
DMP Information
The vxexplorer utility options for gathering DMP-specific information are:
- vxautoconfig - Captures the disks on a system that are visible to the VxVM software without DMP. Mimics the VxVM software device discovery routines.
- vxdevwalk - Captures disk device information by walking the kernel device tree.
- SCSI inquiry output - DMP requires a unique serial number for each disk on the system. It uses SCSI inquiries to read the serial number. The vxexplorer utility uses the vxdmpinq command to capture this information.
- vxconfigd -x 9 output - Starting vxconfigd with the -x 9 option enables vxconfigd logging and displays all the steps used to start vxconfigd. This includes DMP initialization and the display of all the disks that are or are not visible to DMP. If the VERITAS Cluster Server (VCS) or EMC PowerPath software is running, a storage or node failover might occur.
- DMP kernel table variables - Various kernel table information is captured using the adb utility.
- SRVM
- VxFS
- VxLD (NFS Accelerator)
- First Watch
- VCS
- Sybase Edition
- Oracle Edition
- Sanbox
- Global Cluster Manager
- VFR
When VRTSexplorer executes, the program prompts for an optional case number and whether to stop and restart vxconfigd to enable vxconfigd logging. Answer the prompts as appropriate for each system.
The VRTSexplorer output file is stored by default in /tmp, using the following file name format:

VRTSexplorer_<case_number>_<VxVM_hostid>.tar.Z

Open this file and use its contents to aid in troubleshooting VxVM software problems.
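The archive name can be assembled from the pieces the prompts collect. A minimal sketch follows; the case number and hostid are made-up values, and the unpack command is shown as a comment because it needs the real archive:

```shell
# Build the expected archive name from a case number and the VxVM hostid.
case_no=12345678            # hypothetical case number
vxvm_hostid=lowtide         # hypothetical VxVM hostid
archive="VRTSexplorer_${case_no}_${vxvm_hostid}.tar.Z"
echo "$archive"
# To browse the contents where the real file exists:
#   mkdir /tmp/explorer && cd /tmp/explorer && zcat /tmp/"$archive" | tar xf -
```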
2. The expanded output directory contains the following files:

root other  2294 Jul  3 06:27 .
root sys     676 Jul  3 06:27 ..
root other   359 Jul  3 06:00 arp_a
root sys     309 Jun  7 23:11 cron
root other  3308 Jul  3 06:01 dev
root other   529 Jul  3 06:00 df_klaV
root other  4380 Jul  3 06:00 df_klag
root other  1152 Jul  3 06:00 df_klat
root other  1503 Jul  3 06:00 eeprom
root other  2426 Jul  3 06:01 etc
root other     9 Jul  3 05:59 hostid
root other   477 Jul  3 06:00 ifconfig_a
root other    54 Jul  3 05:59 isainfo_v
root other   177 Jul  3 06:00 kernel
root other  7132 Jul  3 06:00 modinfo
root other  1759 Jul  3 06:00 mount_v
root other   542 Jul  3 06:00 netstat_i
4. From information stored in the preceding files, determine the system's configuration as follows:

a. Verify the hostid.

# more ./hostid
807d5d60

b. Verify the Solaris OE revision and hardware architecture.

# more uname_a
SunOS lowtide 5.8 Generic_108528-14 sun4u sparc SUNW,Ultra-Enterprise

c. Determine the version of the VxVM software installed by viewing the contents of the pkginfo file. If the VxVM software was upgraded on the system, there may be more than one pkginfo file. To determine which is the most current listing, look at the PKGINST and INSTDATE fields, as shown in the following example.
# more ./pkginfo_l
...skipping
   PKGINST:  VRTSvxvm
      NAME:  VERITAS Volume Manager, Binaries
  CATEGORY:  system
      ARCH:  sparc
   VERSION:  3.2,REV=08.15.2001.23.27
   BASEDIR:  /
    VENDOR:  VERITAS Software
      DESC:  Virtual Disk Subsystem
    PSTAMP:  VERITAS-3.2t_p2.5:23-May-2002
  INSTDATE:  Jun 08 2002 09:30
   HOTLINE:  800-342-0652
     EMAIL:  support@veritas.com
Note - The ./pkginfo_l file is a long list of all packages installed on the system. Search through the file to find the VRTS packages.

d. Verify that all the correct modules are loaded for the VRTS packages installed. If the modules are not loaded, there may have been problems upgrading the VxVM software, or the system was not rebooted after the modules were installed. The following example shows a list of installed modules.
bash-2.03# grep vx ./modinfo
 19 101e9005 ffa88 74 1 vxio (VxVM 3.2t_p2.5 I/O driver)
 21 102d5440 17428 68 1 vxdmp (VxVM 3.2t_p2.5: DMP Driver)
 22 102ea888 83f 75 1 vxspec (VxVM 3.2t_p2.5 control/status
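A quick way to perform the check in step d is to test the captured modinfo file for each core VxVM driver by name. The sample file below reproduces the lines shown above; with a real VRTSexplorer archive, point the loop at ./modinfo instead:

```shell
# Verify that the three core VxVM drivers appear in a captured modinfo file.
cat > /tmp/modinfo.sample <<'EOF'
 19 101e9005 ffa88 74 1 vxio (VxVM 3.2t_p2.5 I/O driver)
 21 102d5440 17428 68 1 vxdmp (VxVM 3.2t_p2.5: DMP Driver)
 22 102ea888 83f 75 1 vxspec (VxVM 3.2t_p2.5 control/status)
EOF
for drv in vxio vxdmp vxspec; do
    # -w matches the driver name as a whole word only
    if grep -w "$drv" /tmp/modinfo.sample > /dev/null; then
        echo "$drv loaded"
    else
        echo "$drv MISSING"
    fi
done
```

A "MISSING" line indicates the upgrade or reboot problems described in step d.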
e. Verify that the correct licenses are installed by viewing the vxlicense_p file.

bash-2.03# more ./vxlicense_p
vrts:vxlicense: INFO: Feature name: PHOTON [98]
vrts:vxlicense: INFO: Number of licenses: 99
vrts:vxlicense: INFO: Expiration date: No expiration date
vrts:vxlicense: INFO: Release Level: 20
vrts:vxlicense: INFO: Machine Class: 334147806
f. Check the OpenBoot PROM information by opening the eeprom file. Some OpenBoot PROM information is shown in the following example.
g. Viewing the prtdiag file displays system POST diagnostic information.

bash-2.03# more prtdiag
System Configuration:  Sun Microsystems  sun4u 8-slot Sun Enterprise 4000/5000
System clock frequency: 84 MHz
Memory size: 256Mb

========================= CPUs =========================
                   Run   Ecache  CPU    CPU
Brd  CPU  Module   MHz     MB    Impl.  Mask
---  ---  ------  -----  ------  -----  ----
 0    0      0     168    1.0    US-I   4.0
 0    1      1     168    1.0    US-I   4.0

========================= Memory =========================
Brd  Bank   MB   Status   Condition  Speed
---  ----  ----  -------  ---------  -----
 0    0    256   Active   OK         60ns

========================= IO Cards =========================
     Bus   Freq
Brd  Type  MHz   Slot
---  ----  ----  ----
 1   SBus  25     0
 1   SBus  25     1
 1   SBus  25     2
 1   SBus  25     3
 1   SBus  25     3
 1   SBus  25    13
501-2069
h. The ./dev directory contains a copy of the contents of the /dev directory of the live system. Use this information to verify which devices the Solaris OE sees, including major and minor numbers, prtvtoc output, and more. An example follows:

./dev:
root other  3308 Jul  3 06:01 .
root other  2294 Jul  3 06:27 ..
root other 31869 Jul  3 06:00 dsk

bash-2.03# more ./dev/dsk
/dev/dsk:
total 482
drwxr-xr-x   2 root  sys   5120 Jun  7 23:47 .
drwxr-xr-x  14 root  sys   3584 Jul  3 05:29 ..
lrwxrwxrwx   1 root  root    50 Jun  7 23:47 c0t6d0s0 -> ../../devices/sbus@3,0/SUNW,fas@3,8800000/sd@6,0:a
lrwxrwxrwx   1 root  root    50 Jun  7 23:47 c0t6d0s1 -> ../../devices/sbus@3,0/SUNW,fas@3,8800000/sd@6,0:b
lrwxrwxrwx   1 root  root    50 Jun  7 23:47 c0t6d0s2 -> ../../devices/sbus@3,0/SUNW,fas@3,8800000/sd@6,0:c
lrwxrwxrwx   1 root  root    50 Jun  7 23:47 c0t6d0s3 -> ../../devices/sbus@3,0/SUNW,fas@3,8800000/sd@6,0:d
lrwxrwxrwx   1 root  root    50 Jun  7 23:47 c0t6d0s4 -> ../../devices/sbus@3,0/SUNW,fas@3,8800000/sd@6,0:e
lrwxrwxrwx   1 root  root    50 Jun  7 23:47 c0t6d0s5 -> ../../devices/sbus@3,0/SUNW,fas@3,8800000/sd@6,0:f

bash-2.03# more prtvtoc_c1t16d0s2
* /dev/rdsk/c1t16d0s2 partition map
*
* Dimensions:
*     512 bytes/sector
*     133 sectors/track
*      27 tracks/cylinder
*    3591 sectors/cylinder
*    4926 cylinders
*    4924 accessible cylinders
*
* Flags:
*   1: unmountable
*  10: read-only
*
* Unallocated space:
*       First     Sector    Last
*       Sector    Count     Sector
*         3591      3591      7181
*
*                          First     Sector    Last
* Partition  Tag  Flags    Sector    Count     Sector    Mount Directory
      2       5    00          0   17682084  17682083
      3      15    01          0       3591      3590
      4      14    01       7182   17674902  17682083
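The geometry figures in a prtvtoc listing are internally consistent, and shell arithmetic makes a quick sanity check. The values below are the ones from the example above:

```shell
# sectors/track x tracks/cylinder = sectors/cylinder
echo $((133 * 27))
# accessible cylinders x sectors/cylinder = total addressable sectors,
# which matches the sector count of the full-disk slice
echo $((4924 * 3591))
```

The first result is 3591, matching the reported sectors/cylinder, and the second is 17682084, matching the sector count of the slice that covers the whole disk.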
i. Check the ./etc directory, which contains a copy of the contents of the /etc directory of the live system. An example is as follows:

bash-2.03# ls -las ./etc
total 640
16 drwxr-xr-x 12 root other  2426 Jul  3 06:01 .
16 drwxr-xr-x  9 root other  2294 Jul  3 06:27 ..
16 -rw-r--r--  1 root other    12 Jun  8 05:43 defaultrouter
16 drwxr-xr-x  2 root sys     308 Jun  7 23:12 dfs
16 -rw-r--r--  1 root other    12 Jul  3 06:01 err.out
16 -rw-r--r--  1 root root      8 Jun  7 23:47 hostname.hme0
16 -rw-r--r--  1 root root      1 Jun  7 23:47 hostname6.hme0
16 -r--r--r--  1 root sys      97 Jun  8 05:43 hosts
16 drwxr-xr-x  2 root sys    1223 Jun  8 04:52 inet
32 -rw-r--r--  1 root other 12200 Jul  3 06:00 ls_l_rc
16 -rw-r--r--  1 root sys    1731 Jun  8 09:20 name_to_major
16 -rw-r--r--  1 root root      8 Jun  7 23:47 nodename
16 -rw-r--r--  1 root sys     780 Jun  8 07:34 nsswitch.conf
16 -r--r--r--  1 root root   5036 Jun  8 09:28 path_to_inst
16 -rwxr--r--  1 root sys    2792 Jan  5  2000 rc0
16 drwxr-xr-x  2 root sys    2558 Jun  8 09:32 rc0.d
16 -rwxr--r--  1 root sys    3177 Jan  5  2000 rc1
16 drwxr-xr-x  2 root sys    2347 Jun  8 09:23 rc1.d
16 -rwxr--r--  1 root sys    2885 Jan  5  2000 rc2
16 drwxr-xr-x  2 root sys    3456 Jun  8 09:23 rc2.d
16 -rwxr--r--  1 root sys    2341 Jan  5  2000 rc3
16 drwxrwxr-x  2 root sys     650 Jun  8 08:11 rc3.d
16 -rwxr--r--  1 root sys    2792 Jan  5  2000 rc5
16 -rwxr--r--  1 root sys    2792 Jan  5  2000 rc6
32 -rwxr--r--  1 root sys    9973 Jan  5  2000 rcS
16 drwxr-xr-x  2 root sys    3275 Jun  8 09:32 rcS.d
16 drwxr-xr-x  3 root sys     181 Jun  7 23:11 rcm
16 -r--r--r--  1 root sys     184 Dec 18  2001 release
16 -r--r--r--  1 root sys    3701 Jun  8 06:50 services
16 -rw-r--r--  1 root sys    1001 Jun  7 23:12 syslog.conf
16 drwxr-xr-x  2 root other   182 Jul  3 06:00 syslog.d
16 -rw-r--r--  1 root root   2161 Jun  8 09:49 system
16 -rw-r--r--  1 root root   2161 Jun  8 09:49 system.GOOD
16 -rw-r--r--  1 root other  2161 Jun 21 17:15 system.sav
16 -rw-r--r--  1 root other  2161 Jun 24 14:39 system_06242002
16 -rw-r--r--  1 root root    728 Jun  8 11:23 vfstab
16 -rw-r--r--  1 root other   415 Jun  8 09:41 vfstab.prevm
16 drwxr-xr-x  9 root other  1483 Jul  3 06:01 vx

The ./etc/vx subdirectory contains, among other files: elm, guid.state, jbod.info, lib, reconfig.d, saveconfig.d, slib, sr_port, type, voladm.d, volboot, vold_diag, vold_request, vxdmp.exclude, and vxvm.exclude.
j. The ./var directory contains the captured messages files. Use the information in this directory to detect failure messages logged by the VxVM software. An example is as follows:

bash-2.03# ls -las ./var
total 64
16 drwxr-xr-x  4 root
16 drwxr-xr-x  9 root
16 drwxr-xr-x  2 root
 0 -rw-r--r--  1 root
16 drwxr-xr-x  3 root
bash-2.03# more ./var/adm/messages
.
.
.
Jun  8 09:56:55 lowtide pseudo: [ID 129642 kern.info] pseudo-device: devinfo0
Jun  8 09:56:55 lowtide genunix: [ID 936769 kern.info] devinfo0 is /pseudo/devinfo@0
Jun  8 11:55:40 lowtide vxdmp: [ID 619769 kern.notice] NOTICE: vxdmp: Path failure on 118/0x84
Jun  8 11:55:40 lowtide vxdmp: [ID 997040 kern.notice] NOTICE: vxvm:vxdmp: disabled path 118/0x80 belonging to the dmpnode 68/0x10
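Messages files from a long-running system can be large, so a short pipeline that pulls out just the DMP notices is useful. The sample file below stands in for the captured ./var/adm/messages shown above:

```shell
# Extract DMP path-failure notices from a captured messages file.
cat > /tmp/messages.sample <<'EOF'
Jun  8 09:56:55 lowtide genunix: [ID 936769 kern.info] devinfo0 is /pseudo/devinfo@0
Jun  8 11:55:40 lowtide vxdmp: [ID 619769 kern.notice] NOTICE: vxdmp: Path failure on 118/0x84
Jun  8 11:55:40 lowtide vxdmp: [ID 997040 kern.notice] NOTICE: vxvm:vxdmp: disabled path 118/0x80 belonging to the dmpnode 68/0x10
EOF
# Count the vxdmp NOTICE lines, then report the disabled path's
# major/minor pair (field 13 of the "disabled path" message).
grep vxdmp /tmp/messages.sample | grep -c NOTICE
grep 'disabled path' /tmp/messages.sample | awk '{print "disabled path (major/minor):", $13}'
```

The major/minor pair printed this way is the value to correlate against the /dev/dsk listings later in this module.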
The temp_db files are also captured, under the ./var/vxvm directory. An example is as follows:

bash-2.03# ls -las ./var/vxvm/tempdb
total 48
16 drwxr-xr-x  2 root root   180 Jul  3 05:29 .
16 drwxr-xr-x  3 root sys    180 Jul  3 05:29 ..
16 -rw-r--r--  1 root root  5120 Jul  3 05:29 rootdg
k. The ./vxvm directory contains information about the VxVM software's disks and disk groups. There is also a ./vxvm/dmp directory that contains DMP debugging information. The ./dmp/dmp.out file contains the output from execution of the vxautoconfig, vxdevwalk, and vxdmpinq commands, among others. An example of the ./vxvm directory contents is as follows:

bash-2.03# ls -las ./vxvm
total 464
16 drwxr-xr-x  3 root other   1953 Jul  3 06:01 .
16 drwxr-xr-x  9 root other   2294 Jul  3 06:27 ..
16 drwxr-xr-x  2 root other    181 Jul  3 06:01 dmp
16 -rw-r--r--  1 root other     14 Jul  3 06:00 vxdctl_mode
16 -rw-r--r--  1 root other     78 Jul  3 06:01 vxdg_list
16 -rw-r--r--  1 root other    491 Jul  3 06:01 vxdg_list_rootdg
16 -rw-r--r--  1 root other    841 Jul  3 06:00 vxdisk_list
16 -rw-r--r--  1 root other    925 Jul  3 06:00 vxdisk_list_c1t0d0s2
16 -rw-r--r--  1 root other    891 Jul  3 06:00 vxdisk_list_c1t16d0s2
16 -rw-r--r--  1 root other    889 Jul  3 06:00 vxdisk_list_c1t17d0s2
16 -rw-r--r--  1 root other    889 Jul  3 06:01 vxdisk_list_c1t18d0s2
16 -rw-r--r--  1 root other    890 Jul  3 06:01 vxdisk_list_c1t19d0s2
16 -rw-r--r--  1 root other    927 Jul  3 06:00 vxdisk_list_c1t1d0s2
16 -rw-r--r--  1 root other    219 Jul  3 06:01 vxdisk_list_c1t20d0s2
16 -rw-r--r--  1 root other    222 Jul  3 06:01 vxdisk_list_c1t21d0s2
16 -rw-r--r--  1 root other    222 Jul  3 06:01 vxdisk_list_c1t22d0s2
16 -rw-r--r--  1 root other    929 Jul  3 06:00 vxdisk_list_c1t2d0s2
16 -rw-r--r--  1 root other    218 Jul  3 06:00 vxdisk_list_c1t3d0s2
16 -rw-r--r--  1 root other    883 Jul  3 06:00 vxdisk_list_c1t4d0s2
16 -rw-r--r--  1 root other    883 Jul  3 06:00 vxdisk_list_c1t5d0s2
16 -rw-r--r--  1 root other    215 Jul  3 06:00 vxdisk_list_c1t6d0s2
16 -rw-r--r--  1 root other   2107 Jul  3 06:00 vxdisk_s_list
32 -rw-r--r--  1 root other  13555 Jul  3 06:00 vxkprint
16 -rw-r--r--  1 root other   1985 Jul  3 06:00 vxprint
16 -rw-r--r--  1 root other   2585 Jul  3 06:00 vxprint_ht
32 -rw-r--r--  1 root other  15366 Jul  3 06:01 vxprint_mpvshr_rootdg
16 -rw-r--r--  1 root other    432 Jul  3 06:01 vxstat_g_rootdg

bash-2.03# ls -las ./dmp
total 848
 16 drwxr-xr-x  2 root
 16 drwxr-xr-x  3 root
816 -rw-r--r--  1 root
The STATUS column of the captured vxdisk list output shows a mix of disk states:

STATUS
online online online spare error online online error online online
online online error error error

The captured vxprint output shows the rootdg configuration:

dg rootdg        default    default  0          1023554924.1025.lowtide

v  rootvol       -          ENABLED  ACTIVE     17674902  ROUND   -  root
pl rootvol-01    rootvol    ENABLED  ACTIVE     17678493  CONCAT  -  RW
sd rootdisk-B0   rootvol-01 rootdisk 17678492   1         0  c1t0d0  ENA
sd rootdisk-02   rootvol-01 rootdisk 0          17678492  1  c1t0d0  ENA
pl rootvol-02    rootvol    ENABLED  ACTIVE     17678493  CONCAT  -  RW
sd rootmirror-01 rootvol-02 rootmirror 0        17678493  0  c1t1d0  ENA

v  swapvol       -          ENABLED  ACTIVE     1052163   ROUND   -  swap
pl swapvol-01    swapvol    ENABLED  ACTIVE     1052163   CONCAT  -  RW
sd rootdisk-01   swapvol-01 rootdisk 4197878    1052163   0  c1t0d0  ENA
pl swapvol-02    swapvol    ENABLED  ACTIVE     1052163   CONCAT  -  RW
sd rootmirror-02 swapvol-02 rootmirror 4197879  1052163   0  c1t1d0  ENA
bash-2.03# more ./dmp/dmp.out
DMP Debugging information

testautoconfig output

binding_name:  ssd
node_name:     ssd
node name:     ssd
node addr:     w220000203713f582,0
parent_name:   sf
instance:      14
minor:         a, ddi_block:wwn
binding_name:  ssd
node_name:     ssd
node name:     ssd
node addr:     w220000203713f643,0
parent_name:   sf
instance:      15
minor:         a, ddi_block:wwn
binding_name:  ssd
node_name:     ssd
node name:     ssd
node addr:     w220000203713e0b9,0
.
.
.

vxdevwalk output

SUNW,Ultra-Enterprise :: id=-268264420
    driver properties:
        pm-hardware-state value=6e6f2d73 75737065 6e642d72 6573756d 6500 ascii=no-suspend-resume.
    system properties:
        relative-addressing value=00000001
        MMU_PAGEOFFSET value=00001fff
        MMU_PAGESIZE value=00002000
        PAGESIZE value=00002000
Driver packages :: id=-268251420
Driver packages/terminal-emulator :: id=-268212908
The dmp.out file is large and contains the output of all DMP debugging and information-gathering commands executed by vxexplorer. Use shell commands to parse this file for the information needed to verify the existence of a DMP problem.
Caution - The set option can damage the configuration database (private region) beyond repair if not used properly. Some of the information presented in this section is for reference only.
dg rootdg        default    default  0          1023554924.1025.lowtide

v  rootvol       -          DISABLED ACTIVE     17674902  ROUND   -  root
pl rootvol-01    rootvol    DISABLED ACTIVE     17678493  CONCAT  -  RW
sd rootdisk-B0   rootvol-01 rootdisk 17678492   1         0  c1t0d0  DIS
sd rootdisk-02   rootvol-01 rootdisk 0          17678492  1  c1t0d0  DIS
pl rootvol-02    rootvol    DISABLED ACTIVE     17678493  CONCAT  -  RW
sd rootmirror-01 rootvol-02 rootmirror 0        17678493  0  c1t1d0  DIS

v  swapvol       -          DISABLED ACTIVE     1052163   ROUND   -  swap
pl swapvol-01    swapvol    DISABLED ACTIVE     1052163   CONCAT  -  RW
sd rootdisk-01   swapvol-01 rootdisk 4197878    1052163   0  c1t0d0  DIS
pl swapvol-02    swapvol    DISABLED ACTIVE     1052163   CONCAT  -  RW
sd rootmirror-02 swapvol-02 rootmirror 4197879  1052163   0  c1t1d0  DIS

v  usr           -          DISABLED ACTIVE     4197879   ROUND   -  fsgen
pl usr-01        usr        DISABLED ACTIVE     4197879   CONCAT  -  RW
sd rootdisk-04   usr-01     rootdisk 5250041    4197879   0  c1t0d0  DIS
pl usr-02        usr        DISABLED ACTIVE     4197879   CONCAT  -  RW
sd rootmirror-03 usr-02     rootmirror 5250042  4197879   0  c1t1d0  DIS

v  var           -          DISABLED ACTIVE     4197879   ROUND   -  fsgen
pl var-01        var        DISABLED ACTIVE     4197879   CONCAT  -  RW
sd rootdisk-03   var-01     rootdisk 9447920    4197879   0  c1t0d0  DIS
pl var-02        var        DISABLED ACTIVE     4197879   CONCAT  -  RW
sd rootmirror-04 var-02     rootmirror 9447921  4197879   0  c1t1d0  DIS
# vxdisk list c1t15d0
Device:    c1t15d0s2
devicetag: c1t15d0
type:      sliced
hostid:    plstr04.veritas.com
disk:      name= id=968346728.1264.plstr04.veritas.com
group:     name=rootdg id=968426567.1289.plstr04.veritas.com
flags:     online ready private autoconfig noautoimport
pubpaths:  block=/dev/vx/dmp/c1t15d0s4 char=/dev/vx/rdmp/c1t15d0s4
privpaths: block=/dev/vx/dmp/c1t15d0s3 char=/dev/vx/rdmp/c1t15d0s3
version:   2.1
iosize:    min=512 (bytes) max=2048 (blocks)
public:    slice=4 offset=0 len=17877575
private:   slice=3 offset=1 len=3049
update:    time=968446320 seqno=0.18
headers:   0 248
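When vxdisk list output has been saved to a file, the public and private region geometry can be pulled out with awk. The sample file below reproduces the two lines of interest from the listing above:

```shell
# Extract the public/private region lengths from saved vxdisk list output.
cat > /tmp/vxdisk.sample <<'EOF'
public:    slice=4 offset=0 len=17877575
private:   slice=3 offset=1 len=3049
EOF
# Split each matching line on "len=" and print the trailing length value.
awk -F'len=' '/^public:/  {print "public region length:  " $2}' /tmp/vxdisk.sample
awk -F'len=' '/^private:/ {print "private region length: " $2}' /tmp/vxdisk.sample
```

The same pattern works against the vxdisk_list_* files captured in a VRTSexplorer archive.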
An attempt to import the disk group fails even when clearing import locks with the -C option, as shown in the following output:
# vxdg -C import 968426567.1289.plstr04.veritas.com vxvm:vxdg: ERROR: Disk group 968426567.1289.plstr04.veritas.com: import failed: Record already exists in disk group
Use this information to correlate the kernel-space information with user-space entries by performing a long list on the /dev/dsk directory and a grep for the wwn or the major and minor numbers of the listed device. For example:
bash-2.03# ls -las /dev/dsk | grep 3f579 2 lrwxrwxrwx 1 root root 70 Jun 7 23:47 c1t2d0s0 -> ../../devices/sbus@3,0/SUNW,socal@0,0/sf@0,0/ssd@w220000203713f579,0:a 2 lrwxrwxrwx 1 root root 70 Jun 7 23:47 c1t2d0s1 -> ../../devices/sbus@3,0/SUNW,socal@0,0/sf@0,0/ssd@w220000203713f579,0:b 2 lrwxrwxrwx 1 root root 70 Jun 7 23:47 c1t2d0s2 -> ../../devices/sbus@3,0/SUNW,socal@0,0/sf@0,0/ssd@w220000203713f579,0:c 2 lrwxrwxrwx 1 root root 70 Jun 7 23:47 c1t2d0s3 -> ../../devices/sbus@3,0/SUNW,socal@0,0/sf@0,0/ssd@w220000203713f579,0:d 2 lrwxrwxrwx 1 root root 70 Jun 7 23:47 c1t2d0s4 -> ../../devices/sbus@3,0/SUNW,socal@0,0/sf@0,0/ssd@w220000203713f579,0:e 2 lrwxrwxrwx 1 root root 70 Jun 7 23:47 c1t2d0s5 -> ../../devices/sbus@3,0/SUNW,socal@0,0/sf@0,0/ssd@w220000203713f579,0:f 2 lrwxrwxrwx 1 root root 70 Jun 7 23:47 c1t2d0s6 -> ../../devices/sbus@3,0/SUNW,socal@0,0/sf@0,0/ssd@w220000203713f579,0:g 2 lrwxrwxrwx 1 root root 70 Jun 7 23:47 c1t2d0s7 -> ../../devices/sbus@3,0/SUNW,socal@0,0/sf@0,0/ssd@w220000203713f579,0:h 2 lrwxrwxrwx 1 root root 70 Jun 7 23:47 c2t2d0s0 -> ../../devices/sbus@3,0/SUNW,socal@0,0/sf@1,0/ssd@w210000203713f579,0:a 2 lrwxrwxrwx 1 root root 70 Jun 7 23:47 c2t2d0s1 -> ../../devices/sbus@3,0/SUNW,socal@0,0/sf@1,0/ssd@w210000203713f579,0:b 2 lrwxrwxrwx 1 root root 70 Jun 7 23:47 c2t2d0s2 -> ../../devices/sbus@3,0/SUNW,socal@0,0/sf@1,0/ssd@w210000203713f579,0:c 2 lrwxrwxrwx 1 root root 70 Jun 7 23:47 c2t2d0s3 -> ../../devices/sbus@3,0/SUNW,socal@0,0/sf@1,0/ssd@w210000203713f579,0:d 2 lrwxrwxrwx 1 root root 70 Jun 7 23:47 c2t2d0s4 -> ../../devices/sbus@3,0/SUNW,socal@0,0/sf@1,0/ssd@w210000203713f579,0:e 2 lrwxrwxrwx 1 root root 70 Jun 7 23:47 c2t2d0s5 -> ../../devices/sbus@3,0/SUNW,socal@0,0/sf@1,0/ssd@w210000203713f579,0:f 2 lrwxrwxrwx 1 root root 70 Jun 7 23:47 c2t2d0s6 -> ../../devices/sbus@3,0/SUNW,socal@0,0/sf@1,0/ssd@w210000203713f579,0:g 2 lrwxrwxrwx 1 root root 70 Jun 7 23:47 c2t2d0s7 -> 
../../devices/sbus@3,0/SUNW,socal@0,0/sf@1,0/ssd@w210000203713f579,0:h
If the device is multi-pathed, this output shows both paths. In the previous example, controllers c1 and c2 are multiple paths for a single device.
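Grouping the symlink targets on the serial-number portion of the wwn makes multipathed disks stand out without reading every line. The two sample lines reproduce one slice from the listing above; on a live system, feed the pipeline from ls -l /dev/dsk instead. Stripping the first two wwn digits (which differ between the two array ports in this example) is an assumption about this particular naming scheme:

```shell
# Count how many device paths share the same wwn serial portion.
cat > /tmp/paths.sample <<'EOF'
../../devices/sbus@3,0/SUNW,socal@0,0/sf@0,0/ssd@w220000203713f579,0:c
../../devices/sbus@3,0/SUNW,socal@0,0/sf@1,0/ssd@w210000203713f579,0:c
EOF
# Keep only the wwn, minus its first two (port-specific) digits.
sed 's/.*ssd@w..\(.*\),0:.*/\1/' /tmp/paths.sample | sort | uniq -c
```

A count of 2 for one serial portion indicates two paths, through two controllers, to the same disk.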
Disk group information presented by this command shows that disk group rootdg is imported by system lowtide. The disk group revision level is 90.
System administrators use these utilities to analyze panic dump files. The VxVM software could at some point cause a system panic. It is helpful to be able to diagnose a panic dump file to determine whether the VxVM software was responsible for the crash and, if so, which component of the VxVM software failed.

Note - Analyzing panic dump files is beyond the scope of this class. Sun Education provides classes that teach the use of system-level debugging tools to analyze crash dump files.
- Enable vxconfigd debugging
- Generate simple errors
- View error messages in the messages log and the vxconfigd log
- Use the VERITAS Volume Manager 3.2 Troubleshooting Guide for an explanation of error messages
- Run VRTSexplorer and view data
- Use vxprivutil to display the contents of a disk's private region
- Use vxdevwalk to display dev_info tree data
Preparation
To prepare for this exercise, make sure that the VxVM software is installed and operational.
- What error messages from the messages file were you able to find in the VERITAS Volume Manager 3.2 Troubleshooting Guide?
________________________________________________________
________________________________________________________
________________________________________________________
- What is the cxtxdx address of the failed path?
________________________________________________________
- What is the physical path of the failed path?
________________________________________________________
Power user - The following task is for students who have the skills to map a device tree entry to a physical hardware slot. If you have access to SunSolve Online, pull the hardware mapping for the system used by your lab group and map the physical device address of the failing path to the cable that was disconnected. What is the physical Peripheral Component Interconnect (PCI) or SBus slot of the pulled cable?
_____________________________________________________________
8. Replace the pulled cable.
9. View any messages logged by the VxVM software as the path is brought online.
_____________________________________________________________
_____________________________________________________________
10. Disable vxconfigd logging.
11. Close the real-time viewing window for the vxconfigd log.
12. Remove the vxconfigd log.
- hostid
- uname_a
- pkginfo_l - Search for the VxVM software packages and verify the package levels and installation dates.
- vxlicense_p
- prtdiag
- eeprom
- Contents of various files under the ./dev subdirectory
- Contents of various files under the ./vxvm subdirectory
4. Run the vxprivutil command with the dumpconfig option on a selected VxVM software disk. Pipe the output to the vxprint command to display a vxprint -ht type of printout. What command did you use?
_____________________________________________________________
5. Run the vxprivutil command with the list option on a selected VxVM software disk, and view the output.
6. Run the vxdevwalk command, and view the output.
7. Run the vxkprint command, and view the output.
Exercise Summary
Discussion Take a few minutes to discuss what experiences, issues, or discoveries you had during the lab exercise.
Task 1 Solutions
Complete the following steps: 1. Enable maximum vxconfigd logging from the command line. What command did you use?
# pkill -9 vxconfigd; vxconfigd -x 9 -x log -x syslog &

2. What is the location of the log?
The log is in /var/vxvm/vxconfigd.log.

3. View the contents of the log. Is there data present?
_____________________________________________________________
_____________________________________________________________
4. Disable vxconfigd logging. What command did you use?
Task 2 Solutions
Complete the following steps: 1. Enable maximum vxconfigd logging. What command did you use?
# pkill -9 vxconfigd; vxconfigd -x 9 -x log -x syslog &

2. Open a new window, and view the contents of the vxconfigd log in real time. What command did you use?
# tail -f /var/vxvm/vxconfigd.log

3. Open a new window and view the current messages file in real time. What command did you use?
# tail -f /var/adm/messages
Exercise: Using the Error Logging and Debugging Utilities

4. Pull a fiber cable.
5. Perform an I/O operation to the attached disks by executing the following command:
   # find / -name foo.txt
6. View the messages and vxconfigd log as DMP notices that a path is missing.
7. After the logging completes, find the error message logged by DMP in the messages file and locate the appropriate message in the VERITAS Volume Manager 3.2 Troubleshooting Guide. Answer the following questions:
- What error messages from the messages file were you able to find in the VERITAS Volume Manager 3.2 Troubleshooting Guide?
________________________________________________________
________________________________________________________
________________________________________________________
- What is the cxtxdx address of the failed path?
________________________________________________________
- What is the physical path of the failed path?
________________________________________________________
Power user - The following task is for students who have the skills to map a device tree entry to a physical hardware slot. If you have access to SunSolve Online, pull the hardware mapping for the system used by your lab group and map the physical device address of the failing path to the cable that was disconnected. What is the physical Peripheral Component Interconnect (PCI) or SBus slot of the pulled cable?
_____________________________________________________________
8. Replace the pulled cable.
9. View any messages logged by the VxVM software as the path is brought online.
_____________________________________________________________
_____________________________________________________________
10. Disable vxconfigd logging.
11. Close the real-time viewing window for the vxconfigd log.
12. Remove the vxconfigd log.
Task 3 Solutions
Complete the following steps:
1. Install the VRTSspt package, if it is not already installed.
2. Run the VRTSexplorer utility using the procedure described in "The vxexplorer Utility" on page 4-18.
3. Browse the output, and view the following files:
- hostid
- uname_a
- pkginfo_l - Search for the VxVM software packages and verify the package levels and installation dates.
- vxlicense_p
- prtdiag
- eeprom
- Contents of various files under the ./dev subdirectory
- Contents of various files under the ./vxvm subdirectory
4. Run the vxprivutil command with the dumpconfig option on a selected VxVM software disk. Pipe the output to the vxprint command to display a vxprint -ht type of printout. What command did you use?
5. Run the vxprivutil command with the list option on a selected VxVM software disk, and view the output.
6. Run the vxdevwalk command, and view the output.
7. Run the vxkprint command, and view the output.
Module 5 - Recovering Boot and System Processes
- Describe the VxVM software system recovery processes
- Describe how the VxVM software is initialized during system boot
- Successfully troubleshoot boot problems that prevent the VxVM software from starting
- Identify errors that prevent the VxVM software from functioning
- Use the correct recovery procedures to resolve initialization and operational problems
- Correctly determine when to reinstall the VxVM software
- Identify the VxVM software errors, match these errors to a list of known errors, and successfully repair the problems
Relevance

Discussion - The following questions are relevant to understanding the VxVM software boot and recovery processes:
- What files are accessed during system boot to initialize the VxVM software?
- What entries in the /etc/system file are necessary for the VxVM software initialization?
- What recovery procedure is used on a system that has a problem preventing the VxVM software from starting?
- What procedures can be used to boot a system without starting the VxVM software?
- What recovery procedure is used on a system that has corrupted VxVM software binaries?
- What VxVM software configuration allows multiple rootdgs to coexist on a single system?
Additional Resources

The following references provide additional information on the topics described in this module:
- VERITAS Volume Manager 3.2 Administrator's Guide. Mountain View, California: VERITAS Software Corporation, August 2001, number 30-000392-011, TechPDF ID 240253.
- VERITAS Volume Manager 3.2 Installation Guide. Mountain View, California: VERITAS Software Corporation, August 2001, number 30-000395-011, TechPDF ID 240256.
- VERITAS Volume Manager 3.2 Troubleshooting Guide. Mountain View, California: VERITAS Software Corporation, August 2001, number 30-000394-011, TechPDF ID 240255.
- VERITAS Volume Manager Storage Administrator 3.2 Administrator's Guide. Mountain View, California: VERITAS Software Corporation, July 2001, number 30-000393-011, TechPDF ID 240257.
- SunSolve Online SRDB 24657, [http://sunsolve.Sun.COM/pub-cgi/search.pl?mode=advanced].
- http://storage.east, http://storage.central, and http://storage.west.
Sun Proprietary: Internal Use Only Recovering Boot and System Processes
Copyright 2003 Sun Microsystems, Inc. All Rights Reserved. Enterprise Services, Revision B
- Boot process failures that prevent the VxVM software from starting
- Failures of the VxVM software that prevent it from functioning once the system is booted
- Storage device errors
This module addresses errors that occur in the boot process and the VxVM software functionality. Storage device errors are addressed in Module 6, Recovering Disk, Disk Group, and Volume Failures.
Figure 5-1 The Boot Process (Boot, /etc/rc2.d, System Ready)

Boot Scripts
Scripts that execute in single-user mode and affect the VxVM software initialization include:

- /etc/rcS.d/S25vxvm-sysboot
- /etc/rcS.d/S35vxvm-startup1
- /etc/rcS.d/S50devfsadm
- /etc/rcS.d/S85vxvm-startup2
- /etc/rcS.d/S86vxvm-reconfig
Scripts that execute in multi-user mode and affect the VxVM software initialization include:

- /etc/rc2.d/S95vxvm-recover
- /etc/rc2.d/S96vmsa-server
Note Appendix D, The Boot Process, contains an example of the boot -v process, with annotations that indicate when each of these scripts executes. Refer to this appendix for more information on processes that execute during boot processing.
Figure 5-2 /etc/rcS.d/S25vxvm-sysboot: The rootdg ownership is determined, and the VxVM software is started in boot mode.
Exclusion of certain DMP devices was implemented in the VxVM software version 3.1. Information about excluded drivers is downloaded to DMP by invoking vxdmpadm with the doioctl option before vxconfigd is started by the S25vxvm-sysboot script. The VxVM software version 3.2 does not use the vxdmpadm doioctl command; this functionality is now incorporated into the vxconfigd process.
The dev_info tree is scanned for new devices. If the Solaris OE does not see a disk using format, the VxVM software does not see the disk.
Kernel and user-space entries are matched. These are the /dev/dsk links. If a user-space entry does not exist, the VxVM software guesses what it should be.
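The kernel-to-user-space pairing can be illustrated with a short, hedged sketch: on a Solaris system the /dev/dsk entries are symbolic links into the /devices tree built from dev_info data. The demonstration below reproduces that layout in a scratch directory (/tmp/devdemo) with an illustrative device name, since the real paths vary by system:

```shell
# Reproduce the /dev/dsk -> /devices pairing in a scratch directory.
# The device name below is illustrative only.
rm -rf /tmp/devdemo
mkdir -p /tmp/devdemo/dev/dsk "/tmp/devdemo/devices/sbus@3,0"
touch "/tmp/devdemo/devices/sbus@3,0/ssd@w220000203713fc9f,0:a"

# The user-space entry is just a symbolic link to the kernel device node:
ln -s "../../devices/sbus@3,0/ssd@w220000203713fc9f,0:a" \
      /tmp/devdemo/dev/dsk/c1t0d0s0

# On a live system, 'ls -l /dev/dsk/c1t0d0s0' shows the same kind of
# link target, built from the dev_info tree:
ls -l /tmp/devdemo/dev/dsk/c1t0d0s0
```

If the link (the user-space entry) is missing while the /devices node exists, the VxVM software has to guess the pairing, which is the situation described above.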
The dev_info tree is a kernel structure that is built from device tree information in the boot PROM. To view the dev_info tree, use the /etc/vx/diag.d/vxdevwalk command. The following excerpt from the execution of the vxdevwalk command shows the dev_info tree data for device ssd@w220000203713fc9f.
Driver sbus@3,0/SUNW,socal@0,0/sf@0,0/ssd@w220000203713fc9f,0 :: id=25 instance=18
driver properties:
  inquiry-revision-id value=31373745 00 ascii=177E.
  inquiry-product-id value=53543139 31373146 4353554e 392e3047 00 ascii=ST19171FCSUN9.0G.
  inquiry-vendor-id value=53454147 41544500 ascii=SEAGATE.
  pm-hardware-state value=6e656564 732d7375 7370656e 642d7265 73756d65 00 ascii=needs-suspend-resume.
  ddi-kernel-ioctl
device nodes:
  a :: dev=118,144:block nodetype=ddi_block:wwn
  b :: dev=118,145:block nodetype=ddi_block:wwn
  c :: dev=118,146:block nodetype=ddi_block:wwn
  d :: dev=118,147:block nodetype=ddi_block:wwn
  e :: dev=118,148:block nodetype=ddi_block:wwn
  f :: dev=118,149:block nodetype=ddi_block:wwn
  g :: dev=118,150:block nodetype=ddi_block:wwn
The entries are matched by the dev_info tree entry, shown in the following example.
Driver sbus@3,0/SUNW,socal@0,0/sf@0,0/ssd@w220000203713fc9f,0 :: id=25 instance=18
Unmatched Entries
Unmatched entries are given new names, as follows:

- Entries are identified in the dev_info tree.
- If a match is not located in user space, an arbitrary name is created.
Note Only disks that are members of imported disk groups have entries in the DISK and GROUP columns.
Disk Ownership
The /etc/vx/volboot file is used to delineate disk ownership. The vxdctl process uses this file to manage the state of vxconfigd and to bootstrap rootdg during the VxVM software initialization. The following example illustrates a basic volboot file.
volboot 3.1 0.2 30
hostid lowtide
end
###############################################################
###############################################################
###############################################################
###############################################################
###############################################################
###############################################################
###############################################################
#########################
If simple disks are part of a server's VxVM software configuration, these disks must be listed in the volboot file, or the VxVM software does not recognize them. The following example illustrates a /etc/vx/volboot file that includes information on simple disks.
volboot 3.1 0.2 30
hostid lowtide
disk c1t6d0s4 simple privoffset=1
disk c2t4d0s3 simple privoffset=1
disk c1t2d0s5 simple privoffset=1
end
###############################################################
###############################################################
###############################################################
###############################################################
###############################################################
######################################################
This file must be 512 bytes in length, including padding characters. Do not edit this file using vi or another text editor. If this file is corrupted, the VxVM software initialization fails.
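A minimal sketch of verifying the 512-byte requirement, run against a scratch copy (/tmp/volboot.demo is an assumption made for the demonstration; on a live system point the check at /etc/vx/volboot and skip the file-creation steps):

```shell
# Build a scratch volboot-like file: header lines, then '#' padding
# out to exactly 512 bytes, as shown in the examples above.
VOLBOOT=/tmp/volboot.demo
printf 'volboot 3.1 0.2 30\nhostid lowtide\nend\n' > "$VOLBOOT"
cur=$(wc -c < "$VOLBOOT")
dd if=/dev/zero bs=1 count=$((512 - cur)) 2>/dev/null | tr '\000' '#' >> "$VOLBOOT"

# The integrity check itself: anything other than 512 bytes means the
# file is corrupt and must be rebuilt with vxdctl, never with an editor.
size=$(wc -c < "$VOLBOOT")
if [ "$size" -eq 512 ]; then
    echo "volboot size OK (512 bytes)"
else
    echo "volboot is $size bytes -- rebuild it with vxdctl init" >&2
fi
```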
To update the /etc/vx/volboot file, use the following commands:

- vxdctl init [newname]
- vxdctl add disk c#t#d#s#
Running the vxdctl init newname command recreates the /etc/vx/volboot file. Caution: If simple disks are part of a server's VxVM disk configuration, these entries are deleted when a vxdctl init newname command is run. Record this information before running the command so that the simple disk configuration can be restored using the vxdctl add disk c#t#d#s# command.
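One hedged way to record the simple-disk entries before reinitializing volboot is sketched below. The scratch file paths and disk names are illustrative, and the vxdctl restore loop is commented out because it can only run on a system with the VxVM software installed:

```shell
# Scratch copy of a volboot file containing simple-disk entries
# (on a live system, read /etc/vx/volboot directly instead).
VOLBOOT=/tmp/volboot.simple
cat > "$VOLBOOT" <<'EOF'
volboot 3.1 0.2 30
hostid lowtide
disk c1t6d0s4 simple privoffset=1
disk c2t4d0s3 simple privoffset=1
end
EOF

# Record the simple-disk lines before running 'vxdctl init <newname>':
grep '^disk ' "$VOLBOOT" > /tmp/volboot.simple-disks
cat /tmp/volboot.simple-disks

# After 'vxdctl init', restore each entry (commented out here because
# vxdctl exists only where the VxVM software is installed):
# while read -r _ device _; do
#     vxdctl add disk "$device"
# done < /tmp/volboot.simple-disks
```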
Boot Mode
During this part of the VxVM software initialization, the software is started in boot mode using the vxconfigd -m boot command. The following initialization operations occur at this time:
- The rootdg disk group is imported.
- If the boot disk is under the VxVM software control, rootvol and user volumes are started.
Note If the /etc/rcS.d/S25vxvm-sysboot file is modified for vxconfigd debugging using the vxconfigd -x 1-9 command, vxconfigd starts in debug mode at this time. The 1-9 option sets the level of debugging information that vxconfigd displays during initialization. Other debug sub-options are available for use with the -x option and are listed in the vxconfigd man page.
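The edit described in the note can be sketched as a sed substitution. The one-line script content shown is a simplification of the real S25vxvm-sysboot, and the demonstration works on a scratch copy; always back up the real script before modifying it:

```shell
# Scratch stand-in for the vxconfigd invocation inside S25vxvm-sysboot.
SCRIPT=/tmp/S25vxvm-sysboot.demo
echo 'vxconfigd -m boot' > "$SCRIPT"

# Insert a debug level of 9 (most verbose) ahead of the boot-mode flag:
sed 's/vxconfigd -m boot/vxconfigd -x 9 -m boot/' "$SCRIPT" > "$SCRIPT.new"
cat "$SCRIPT.new"
```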
An original equipment manufacturer (OEM)-supported script can be created to automatically suggest the alternate mirror. The /etc/vx/sbin/vxstale file is no longer packaged with the VxVM software distribution.
/etc/vx/reconfig.d/state.d/install-db
- This file is created during installation of the VxVM software packages. It is also created if an installation of the VxVM software is incomplete. In that case, if the boot disk is under the VxVM software control, the system does not boot past single-user mode.
- This file prevents vxconfigd from starting when the system boots. The VxVM software starts if the boot disk is under the VxVM software control, but it is crippled.
Note Although vxconfigd may not be started, the vxdmp and vxspec modules are started.
/VXVM#.#.#-UPGRADE/.start_runed
- The value #.#.# is the software level to which the system is being upgraded, such as 3.1.1.
- This is a hidden file, created by the upgrade_start script. It is removed when the upgrade is finished.
- This file prevents vxconfigd from starting even if the boot disk is under the VxVM software control.
Figure 5-3 /etc/rcS.d/S35vxvm-startup1
This script executes after the / and /usr volumes are available and makes other volumes available that are needed early in the Solaris OE boot sequence.
Special Volumes
The following special volumes, if configured, are started:

- swap
- /var
Note These volumes must be in the rootdg disk group. Recovery operations are not performed on them at this point in the boot sequence.
Dump Device
The dump device is used to store core information when the system panics. Dump device configuration is as follows:

- A swap device must be listed in the /etc/vfstab file. The swap device must be in rootdg.
- A physical partition must be available underneath the swap volume. Swap files cannot be used as a dump device.
- The dump device is registered by adding and then removing the swap device. The VxVM software does not have hooks for dumping, so the swap device must be created prior to the creation of the dump device. The dump device must be the first swap device listed in the /etc/vfstab file.
Core file creation and recovery are performed outside of the VxVM software operations.
The VxVM software treats all file systems with file type swap in the /etc/vfstab file as swap volumes. The primary swap volume must be:

- In rootdg
- The first swap device listed in the /etc/vfstab file
Swap device size is limited to 2000 megabytes in the early releases of the Solaris OE.
Figure 5-4 /etc/rcS.d/S50devfsadm
The script executes during all boots. When the script executes as part of a reconfiguration reboot, new devices are discovered and configured. To perform a reconfiguration reboot, do one of the following:

1. Use the boot -r command.
2. Set the /reconfigure flag.
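The second method can be sketched as follows. The flag path is redirected to /tmp for the demonstration, whereas on a real system the flag is the file /reconfigure in the root directory:

```shell
# Request a reconfiguration reboot by creating the flag file.
# FLAG stands in for /reconfigure so the sketch can run anywhere.
FLAG=/tmp/reconfigure.demo
touch "$FLAG"
ls -l "$FLAG"

# ...followed by a reboot, e.g.:  init 6   (commented out here)
```

Alternatively, the same effect is obtained at the OpenBoot PROM with `ok boot -r`, with no flag file needed.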
Solaris OE releases prior to release 8 used the /etc/rcS.d/S50drvconfig and S60devlinks scripts.
Discovery of new devices can be performed without a system reboot using the following commands:

- devfsadm (or drvconfig and disks on Solaris OE releases prior to release 8)
- vxdctl enable
Figure 5-5 /etc/rcS.d/S85vxvm-startup2: The dev_info tree is checked for new devices, all auto-import disk groups are imported, and all volumes are started.
The /etc/rcS.d/S85vxvm-startup2 script must be run after the /, /usr, and /var volumes are mounted. This script:

- Starts some I/O daemons
- Rebuilds the /dev/vx/dsk and /dev/vx/rdsk directories
- Imports all disk groups
- Starts all volumes that were not started earlier in the boot sequence
Flag files may affect this process. Volume recovery is not performed. New device entries are added, and invalid entries are kept.
Figure 5-6 /etc/rcS.d/S86vxvm-reconfig
Flag Files
Flag files are queried for reconfiguration and reboot procedures. The /etc/vx/reconfig.d/state.d directory contains flag files set by prior operations, as follows:

- Pre-recovery events that might have occurred
- Encapsulation procedures

Encapsulation requires a reboot and creates flag files to delineate actions that need to be taken. If encapsulation fails or is incomplete, the flag files must be removed manually.
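The manual cleanup can be sketched as follows. STATEDIR and the flag name reconfig are illustrative; list the real /etc/vx/reconfig.d/state.d directory first, and leave root_done alone if the boot disk is validly encapsulated:

```shell
# Scratch stand-in for /etc/vx/reconfig.d/state.d.
STATEDIR=/tmp/flags.demo
rm -rf "$STATEDIR"
mkdir -p "$STATEDIR"
touch "$STATEDIR/reconfig"      # simulate a flag left by a failed run

echo "Flag files present:"
ls "$STATEDIR"

# Remove the leftover flags only after confirming that encapsulation
# failed or is incomplete:
rm -f "$STATEDIR"/*
echo "Remaining: $(ls "$STATEDIR" | wc -l) files"
```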
Note The root_done flag tells the VxVM software that the boot disk is under VxVM software control and that this startup script can exit without any action.
Volume recovery and resynchronization are started on all volumes. Relocation daemons are started on all volumes. If a disk failure is detected, subdisks are relocated to another disk. During relocation operations, disks marked as spare are used to relocate data.
3. Clean up the system configuration, following these steps:
   a. Clean up rootability.
   b. Clean up volumes.
   c. Clean up disk configuration.
   d. Reconfigure rootability.
4. Perform the final volume reconfiguration.
Note A detailed and extensive reinstallation recovery procedure is found in the VERITAS Volume Manager 3.2 Troubleshooting Guide under the heading Recovery from Boot Disk Failure.
- Bootable boot disk
- Valid /etc/system file
- Valid /etc/vfstab file
- Valid rootdg disk group
- Startable volumes, including no stale plexes
- Non-corrupted driver and daemon binaries
- Appropriate VxVM software binaries loaded on the system for the specific release of the Solaris OE installed
- Access to library files needed by the VxVM software for initialization of devices and system volumes
- Valid /etc/vx/volboot file
- Existing /var/vxvm/tempdb directory and supporting tempdb files: This includes the temporary storage area for the VxVM software configuration copies. The vxconfigd process needs these files to transition to the enabled state.
Note The boot disk device address depends on the system and on which storage device is configured as the boot disk. If the boot disk is under VxVM software control, check that the boot-device is set to the proper VxVM software-generated device alias, as follows:
boot-file: data not available
boot-device=vx-rootdisk
local-mac-address?=false
Power user If the primary boot device fails and the system must be rebooted, use the vx-rootmirror device alias to boot the system. If the primary boot disk fails and a spare was configured in rootdg, then after the spare disk replaces the failed boot disk and the failed volumes are hot-relocated, the VxVM software builds an additional device alias to enable booting from the spare disk, as follows:
vx-rdgspare01 /sbus@3,0/SUNW,socal@0,0/sf@1,0/ssd@w220000203713f96d,0:a
Use this device alias if the system must be booted from the spare disk. Note The spare disk device address depends on the system and on which storage device is configured as the spare disk. Also, the spare disk name is arbitrary and is reflected in the device alias name.
Troubleshooting Boot Process Failures

The last two lines of the /etc/system file are only present if the boot disk is under VxVM software control. If these lines are not present and the boot disk is under VxVM software control, the system will not boot. Also, the forceload statements for drv/sd and drv/ssd must be present in the /etc/system file if the boot device is one of these classes of disk. If these forceload statements are not present or are corrupted, the system reboots recursively.

Power user Recovery of a corrupted /etc/system file requires booting with the -a option and selecting a backup /etc/system file that has the previously mentioned entries. If a backup /etc/system file is unavailable or does not contain these statements, perform the following:

1. Boot from CD-ROM: boot cdrom -s
2. Mount the boot disk to /a.
3. Set the terminal type to vt100.
4. Use a text editor such as vi to edit /etc/system and fix the problem, or copy the default from CD-ROM and modify it to match the system configuration.
5. Save the new /etc/system file.
6. Unmount /a.
7. Reboot the system.
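A sanity check for the VxVM entries in /etc/system can be sketched as below. The rootdev:/pseudo/vxio@0:0 and set vxio:vol_rootdev_is_volume=1 lines are the two entries added when the boot disk is encapsulated; the check runs here against a scratch copy (/tmp/etc-system.demo is an assumption), so point SYS at /a/etc/system when working from a CD-ROM boot:

```shell
# Scratch copy of the VxVM-relevant /etc/system lines.
SYS=/tmp/etc-system.demo
cat > "$SYS" <<'EOF'
forceload: drv/sd
rootdev:/pseudo/vxio@0:0
set vxio:vol_rootdev_is_volume=1
EOF

# Verify the entries an encapsulated boot disk depends on:
ok=yes
grep -q '^rootdev:/pseudo/vxio@0:0' "$SYS"          || ok=no
grep -q '^set vxio:vol_rootdev_is_volume=1' "$SYS"  || ok=no
grep -q '^forceload: drv/sd' "$SYS"                 || ok=no
echo "VxVM /etc/system entries intact: $ok"
```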
Note This process is presented in greater detail in the VERITAS Volume Manager Troubleshooting Guide, in the section Recovery from Boot Disk Failure.
Recovery of this file requires editing the file to correct problems and, in severe cases, can require booting from the CD-ROM. Note Additional information on how to recover from problems in the /etc/vfstab file is found in the VERITAS Volume Manager Troubleshooting Guide in the section Recovery from Boot Disk Failure.
Startable Volumes
If the boot disk is under the VxVM software control, all system volumes (/, swap, /usr, and /var) must start. One reason a volume might not start is the presence of a stale or unusable plex. A stale plex is defined as a plex that has data which is inconsistent with other mirrors of that volume. During the boot process, only the plexes on the boot disk are accessed until the VxVM software is fully initialized and a complete configuration for the volumes on the boot disk can be obtained. If the data on the boot disk is stale, then the system must be rebooted from an alternate boot disk that does not contain stale plexes.
The vxconfigd command identifies a disk that is suitable for booting and that does not contain stale or unusable plexes. If stale plexes are present, vxconfigd displays the following message:
vxvm:vxconfigd: Warning: Plex rootvol-01 for root volume is stale or unusable.
vxvm:vxconfigd: Error: System boot disk does not have a valid root plex
Please boot from one of the following disks:
Disk: rootmirror    Device: c1t21d0s2
vxvm:vxconfigd: Error: System startup failed
The system is down
Note Additional information on how to recover from stale or unusable plexes is found in the VERITAS Volume Manager Troubleshooting Guide under the heading Recovery from Boot Disk Failure. If the boot-device nvram parameter is set, and aliases are set for both the rootdisk and the rootmirror, the system reboots using the rootmirror disk.
Binary Files
Each version of the Solaris OE requires different VxVM software daemon and driver binaries. These binaries are found in /kernel/drv and /kernel/drv/sparcv9 (for the 64-bit Solaris OE). The binaries are:
bash-2.03# ls -las /kernel/drv/vx*
(listing of the 32-bit vxdmp, vxio, and vxspec driver binaries and their .conf configuration files)

bash-2.03# ls -las /kernel/drv/sparcv9/vx*
(listing of the 64-bit vxdmp, vxio, and vxspec driver binaries)
Power user If a system message lists an incorrect version of the VxVM software installed for the Solaris OE, the most probable cause is that the incorrect version of the Solaris OE was selected during the VRTSvxvm package addition. The solution is to copy the correct binary file (vxio.xxxxx, vxdmp.xxxx, or vxspec.xxx) over the corresponding vxio, vxdmp, or vxspec file. Additionally, if these files become corrupt, copy over these files using the correct binary file for the Solaris OE installed. If the boot disk is under the VxVM software control and a driver file is corrupt, the system boot fails. The system administrator must perform a basic or functional unencapsulation (refer to Performing a Basic or Functional Unencapsulation on page 2-68 for details).
Library Files
The following library files must be in the /etc/vx/slib directory:
bash-2.03# ls -las /etc/vx/slib
total 5346
   2 drwxr-xr-x   2 root     other        512 May 11 13:18 .
   2 drwxr-xr-x   9 root     other        512 May 20 07:20 ..
 186 -rwxr-xr-x   1 root     other      95052 May  5 08:33 liba5k.so.2
  10 -rwxr-xr-x   1 root     other       4392 May  5 08:33 liba5k_stub.so.2
2256 -rwxr-xr-x   1 root     other    1146284 May  5 08:32 libc.so.1
  10 -rwxr-xr-x   1 root     other       4848 May  5 08:34 libc_psr.so.1
  46 -rwxr-xr-x   1 root     other      23348 May  5 08:33 libdevice.so.1
  26 -rwxr-xr-x   1 root     other      12432 May  5 09:56 libdevid.so
  26 -rwxr-xr-x   1 root     other      12432 May  5 09:56 libdevid.so.1
 172 -rwxr-xr-x   1 root     other      87108 May  5 08:33 libdevinfo.so.1
 256 -rwxr-xr-x   1 root     other     119244 May  5 08:33 libg_fc.so.2
  12 -rwxr-xr-x   1 root     other       5456 May  5 08:33 libg_fc_stub.so.2
  50 -rwxr-xr-x   1 root     other      24968 May  5 08:33 libmp.so
1776 -rwxr-xr-x   1 root     other     898600 May  5 08:33 libnsl.so.1
  48 -rwxr-xr-x   1 root     other      24360 May  5 08:33 libnvpair.so.1
 140 -rwxr-xr-x   1 root     other      70864 May  5 08:32 libsocket.so.1
If any of these library files are not accessible and the boot disk is encapsulated, a message similar to the following is displayed:
Starting VxVM restore daemon...
VxVM starting in boot mode...
ld.so.1: vxconfigd: fatal: <missing library file name is displayed>: open failed: No such file or directory
Killed
Errors were encountered in starting the root disk group, as a result
VxVM is unable to configure the root and/or /usr volumes.
If you have mirrored the root disk, you can try booting from that
disk. Please refer to Appendix C of the Installation Guide for more
details. If you cannot boot from the root disk, you can try to repair
the problem using a network-mounted root file system or some other
alternate root file system. Again, see the Installation Guide for
more details.
Would you like a shell prompt right now? [no]
If this problem exists, perform a basic or functional unencapsulation and copy the missing libraries from /usr/lib to /etc/vx/slib. Refer to Performing a Basic or Functional Unencapsulation on page 2-68 for details.
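The copy step can be sketched as a loop. SRC and DST point at scratch directories here (on a real system they would be /usr/lib and /etc/vx/slib), and the two library names are stand-ins taken from the listing above:

```shell
# Scratch stand-ins for /usr/lib (source) and /etc/vx/slib (destination).
SRC=/tmp/usr-lib.demo
DST=/tmp/vx-slib.demo
rm -rf "$SRC" "$DST"
mkdir -p "$SRC" "$DST"
touch "$SRC/libnsl.so.1" "$SRC/libsocket.so.1"

# Copy each required library into the VxVM private library directory
# only if it is missing:
for lib in libnsl.so.1 libsocket.so.1; do
    if [ ! -f "$DST/$lib" ]; then
        echo "restoring $lib"
        cp "$SRC/$lib" "$DST/$lib"
    fi
done
ls "$DST"
```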
The /etc/vx/volboot file is an ASCII file that adheres to a strict format and should not be edited. It has the following characteristics:

- 512 bytes in length, including padding
- Updated using the vxdctl command
The /etc/vx/volboot file holds the VxVM software host identifier hostid. This is usually the Solaris OE nodename, not the hardware hostid. The VxVM software hostid does not have to match the server's nodename, which can be very confusing. This file has the following characteristics:

- It establishes disk and disk group ownership.
- If two or more servers access the same disks using the same bus, the VxVM software hostid ensures that the two hosts do not interfere with each other when they are accessing VxVM software disks.
If this file is corrupted or deleted, recover the file from backups of the affected system. If backups are not available, a reinstall may be necessary.
/etc/rcS.d/S25vxvm-sysboot Checks system and vfstab files, including install-db, to determine whether the VxVM software should start. (For VxVM software version 2.3, this script also checks for the /VXVM2.3-UPGRADE/.start_runed file.) If the software should start, the script prints the message VxVM starting in boot mode... and executes a vxconfigd -m boot command.

/etc/rcS.d/S35vxvm-startup1 If needed, special volumes such as / and swap are started.
/etc/rcS.d/S85vxvm-startup2 Prints the message VxVM general startup..., and starts the vxiod daemons. This script also enables vxconfigd or starts it if it is not already started. This script is where the /dev/vx directory is made or updated, if needed, and a vxrecover command starts all volumes.

/etc/rcS.d/S86vxvm-reconfig Checks for the existence of the /etc/vx/reconfig.d/state.d/reconfig file. If there are reconfigurations needed (for example, for encapsulated disks), the work is done here. The system can be rebooted from this script if needed.

/etc/rc2.d/S95vxvm-recover Runs vxrecover, which fixes any remaining problems or starts any unstarted volumes. Also starts vxrelocd, which starts another vxrelocd and a vxnotify daemon.

/etc/rc2.d/S96vmsa-server [VxVM software version 3.x only] Starts the vmsa GUI server, which is needed for the vmsa GUI to work.
Power user The recovery from errors resulting from the execution of these scripts requires reading each individual script and determining where the failure is within the script. This requires shell programming expertise and a detailed understanding of the VxVM software commands and support files. All of the VxVM software startup scripts are hard linked to files in the /etc/init.d directory.
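The hard-link relationship can be verified by comparing inode numbers (the first column of ls -li). The sketch below demonstrates the idea in a scratch directory; on a live system compare, for example, /etc/init.d/vxvm-sysboot with /etc/rcS.d/S25vxvm-sysboot (exact file names vary by release):

```shell
# Reproduce the init.d / rcS.d hard-link layout in a scratch directory.
DEMO=/tmp/hardlink.demo
rm -rf "$DEMO"
mkdir -p "$DEMO/init.d" "$DEMO/rcS.d"
echo '#!/sbin/sh' > "$DEMO/init.d/vxvm-sysboot"

# A hard link, not a symlink, as used by the rc script framework:
ln "$DEMO/init.d/vxvm-sysboot" "$DEMO/rcS.d/S25vxvm-sysboot"

# Both entries share one inode, so the link count shown is 2:
ls -li "$DEMO/init.d/vxvm-sysboot" "$DEMO/rcS.d/S25vxvm-sysboot"
```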
Corrupt or missing commands and utilities The VxVM software commands and utilities are found in the following directories:

- /usr/sbin
- /usr/lib/vxvm
- /etc/vx/bin

Corrupt or missing configuration files Most VxVM software configuration files are in the /etc/vx directory. Driver configuration files are in the /kernel/drv directory.
To correct these problems, restore a valid backup of the missing or corrupted information, or copy non-system-specific files from another system on the network. Caution: Be careful when copying files from other systems on the network. If a system-specific file, such as /etc/vx/volboot, is copied from another server, the system to which it is copied can be placed into a non-bootable state. Use backups from the system missing the needed file whenever possible.
Preparation
To prepare for this exercise:
- The VxVM software must be installed and operational.
- The boot disk must be encapsulated and mirrored.
- There must be one additional disk group other than rootdg with at least one configured, started, and mounted volume.
- The instructor must give you the location and name of the break-and-fix script.
The break-and-fix script is a menu-driven script used to break a system based on bugs submitted by Sun Support. This script performs the following tasks:
- Provides a list of bugs for use by students to inject real-world bugs into the lab systems.
- Keeps track of bugs completed and those skipped or not completed by lab teams.
- Provides pointers to SunSolve SRDB records and INFODOCs.
- Provides hints to help lab teams troubleshoot bugs.
- Fixes the injected bug if the lab team desires to proceed to another bug without resolving the current bug.
To run the break-and-fix script:

1. Enter the following:

# breakfix

The following is displayed:
breakfix Main Menu

 1. Problem 1          11. Problem 11
 2. Problem 2          12. Problem 12
 3. Problem 3          13. Problem 13
 4. Problem 4          14. Problem 14
2. Select a bug by entering the bug number from the menu. For example, if you select Bug 1, the following is displayed:

Problem #1, Invalid disk.exclude file

b) Break it
s) Solution
h) Hint
m) Return to Main Menu
x) Exit
3. Enter b to break, f to fix, and m to return to the main menu. If you select the b or break option, a system reboot is necessary to activate the bug. After the symptom is observed by your lab team, access SunSolve Online, and search for the proper SRDB to help diagnose the problem. If you need or want a hint, re-execute the break-and-fix script and select the h option for the bug currently being worked on. The following is displayed:
This is SRDB ID: 14736.
The /etc/vx/disks.exclude file requires the following format:
c#t#d# for each disk you wish to exclude in /etc/vx/disks.exclude.
Notice that the referring SRDB is listed along with hints to help resolve the problem.
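The required format can be checked mechanically. The sketch below validates a scratch copy of the file (path and disk names are illustrative) against the c#t#d# pattern given in the hint:

```shell
# Scratch stand-in for /etc/vx/disks.exclude: one bare c#t#d# device
# name per line, nothing else.
EXCL=/tmp/disks.exclude.demo
printf 'c1t6d0\nc2t4d0\n' > "$EXCL"

# Count lines that do NOT match the required format:
bad=$(grep -cvE '^c[0-9]+t[0-9]+d[0-9]+$' "$EXCL")
if [ "$bad" -eq 0 ]; then
    echo "disks.exclude format OK"
else
    echo "$bad malformed line(s) in $EXCL" >&2
fi
```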
Tasks
Execute the break-and-fix script, and select bugs 1 through 10. List problem resolution steps for each bug in the space provided below.

Problem 1:
__________________________________________________________________
__________________________________________________________________
__________________________________________________________________
__________________________________________________________________
__________________________________________________________________

Problem 2:
__________________________________________________________________
__________________________________________________________________
__________________________________________________________________
__________________________________________________________________
__________________________________________________________________

Problem 3:
__________________________________________________________________
__________________________________________________________________
__________________________________________________________________
__________________________________________________________________
__________________________________________________________________
Problem 4:
__________________________________________________________________
__________________________________________________________________
__________________________________________________________________
__________________________________________________________________
__________________________________________________________________

Problem 5:
__________________________________________________________________
__________________________________________________________________
__________________________________________________________________
__________________________________________________________________
__________________________________________________________________

Problem 6:
__________________________________________________________________
__________________________________________________________________
__________________________________________________________________
__________________________________________________________________
__________________________________________________________________

Problem 7:
__________________________________________________________________
__________________________________________________________________
__________________________________________________________________
__________________________________________________________________
__________________________________________________________________

Problem 8:
__________________________________________________________________
__________________________________________________________________
__________________________________________________________________
__________________________________________________________________
__________________________________________________________________

Problem 9:
__________________________________________________________________
__________________________________________________________________
__________________________________________________________________
__________________________________________________________________
__________________________________________________________________

Problem 10:
__________________________________________________________________
__________________________________________________________________
__________________________________________________________________
__________________________________________________________________
__________________________________________________________________

The lab is complete when all 10 bugs are fixed, your lab system with the VxVM software is operational, and all volumes are started and accessible.
Exercise Summary
Discussion Take a few minutes to discuss what experiences, issues, or discoveries you had during the lab exercise.
Task Solutions
The solutions for the tasks in the lab exercise are found in the break-and-fix script and on SunSolve in the appropriate SRDB and INFODOC files.
5-36
Module 6
Describe the VxVM software utilities, commands, and virtual devices to help with recovery processes
Identify and repair failed disks
Perform disk recovery processes and procedures
Identify and repair volume errors
Identify and repair disk group failures
Relevance
Discussion The following questions are relevant to understanding the VxVM software disk recovery processes:
What tools, utilities, commands, or files are available to help identify failed VxVM software disks and other objects used to access managed storage devices?
What are the indications that a disk failed?
How are damaged disk groups repaired?
How are new disks made visible to the VxVM software?
What are the procedures to replace a failed VM disk?
How are duplicate record entries recognized and repaired?
6-2
Additional Resources
Additional resources The following references provide additional information on the topics described in this module:
VERITAS Volume Manager 3.2 Administrator's Guide. Mountain View, California: VERITAS Software Corporation, August 2001, number 30-000392-011, TechPDF ID 240253.
VERITAS Volume Manager 3.2 Installation Guide. Mountain View, California: VERITAS Software Corporation, August 2001, number 30-000395-011, TechPDF ID 240256.
VERITAS Volume Manager 3.2 Troubleshooting Guide. Mountain View, California: VERITAS Software Corporation, August 2001, number 30-000394-011, TechPDF ID 240255.
VERITAS Volume Manager Storage Administrator 3.2 Administrator's Guide. Mountain View, California: VERITAS Software Corporation, July 2001, number 30-000393-011, TechPDF ID 240257.
http://storage.east, http://storage.central, and http://storage.west.
SunSolve Online SRDBs and INFODOCs 13364, 14820, 21626, and 26367, [http://sunsolve.Sun.COM/pub-cgi/search.pl?mode=advanced].
Man pages for the following commands:
Sun Proprietary: Internal Use Only Recovering Disk, Disk Group, and Volume Failures
Copyright 2003 Sun Microsystems, Inc. All Rights Reserved. Enterprise Services, Revision B
6-3
The disk must be a VxVM software disk. The disk group of which the disk is a member must be imported. The disk must have subdisks that belong to an active volume that is in use by the system.
Depending on the architecture of the volume, the loss of a single disk should not affect the availability of the volume. Disk group configuration problems can prevent the import or creation of a disk group. The result of this problem is that the resources managed by the affected disk group are not available to the system. Disk group import problems are generally due to ownership issues, such as the appearance that a disk group is imported when it is not. This module describes how to identify and recover from disk, disk group, and volume failures using the VxVM software.
6-4
Use the following commands and utilities to display the operational state of managed disks:
Note The vmsa GUI administrative console is not discussed in this module because it is unavailable when you are diagnosing problems remotely. Configuration of vxconfigd logging, interpreting error messages, and using root mail are discussed in Module 4, Troubleshooting Tools and Utilities. This section discusses these commands and their use in recovering from full or partial disk failures.
6-5
(Fragments of example command output showing a disk in the NODEVICE state.)
rootvol       ENABLED    ACTIVE    15581349  ROUND   root
rootvol-01    rootvol    ENABLED   ACTIVE    15581349  CONCAT  -  RW
rootdisk-02   rootvol-01 rootdisk  7181      15581349  0  c1t0d0  ENA
rootvol-02    rootvol    DISABLED  NODEVICE  15581349  CONCAT  -  RW
rootmirror-01 rootvol-02 rootmirror 0        15581349  0  -  RLOC
swapvol       ENABLED    ACTIVE    1052163   ROUND   swap
swapvol-01    swapvol    ENABLED   ACTIVE    1052163   CONCAT  -  RW
rootdisk-01   swapvol-01 rootdisk  15588530  1052163   0  c1t0d0  ENA
swapvol-02    swapvol    DISABLED  NODEVICE  1052163   CONCAT  -  RW
rootmirror-02 swapvol-02 rootmirror 15581349 1052163   0  -  NDEV
Note in this example that the failed plex for volume rootvol is rootvol-02. This plex has a failed subdisk named rootmirror-01. The volume is unaffected because it is enabled and active.
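When this kind of output has been saved to a file, the failed plexes can be pulled out mechanically. The following is a minimal sketch, not a VxVM tool: the find_failed_plexes helper is hypothetical, and the embedded records are taken from the example above.

```shell
# Hypothetical helper: print "plex volume" for every vxprint-style
# record whose state column (field 4) reads NODEVICE.
find_failed_plexes() {
  awk '$4 == "NODEVICE" { print $1, $2 }'
}

# Sample records from the example output above:
find_failed_plexes <<'EOF'
rootvol       ENABLED    ACTIVE    15581349  ROUND   root
rootvol-02    rootvol    DISABLED  NODEVICE  15581349  CONCAT  -  RW
swapvol-02    swapvol    DISABLED  NODEVICE  1052163   CONCAT  -  RW
EOF
# prints:
# rootvol-02 rootvol
# swapvol-02 swapvol
```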
6-6
Identifying Disk Errors The STATUS column displays the status of disks visible to the VxVM software. A status of error does not always indicate a disk error; it usually indicates that the disk is not under the VxVM software control.
6-7
This clearly shows that the rootdisk disk experienced a read failure when accessing subdisks rootdisk-04 and -05.
6-8
6-9
Replacing Disks
If a disk fails, it must be replaced. Use one of the following two methods to replace a failed disk:
Caution The vxrecover -bs command attempts to perform recovery operations on all volumes with failed plexes, including starting any nonstarting volumes, prior to performing the plex recovery operations. This might not be the correct operation to perform, depending on the current volume configuration of the system being recovered. See the vxrecover man page for more information.
6-10
(Flow diagram: Yes/No decision branches test whether the VxVM software and the Solaris OE can see the disk (format, prtvtoc); one branch runs vxdctl initdmp and vxdctl enable.)
Figure 6-1
6-11
Perform the following steps to troubleshoot a disk visibility problem:
1. Does the VxVM software see the disk? If not, check the Solaris OE. If the Solaris OE cannot see a disk device, then the VxVM software cannot see it.
2. Use the format or prtvtoc command to verify that the Solaris OE is able to detect the disk. If the Solaris OE cannot see the disk, run the devfsadm or drvconfig;disks command to make the Solaris OE re-scan the devices. If the Solaris OE can see the disk, then the visibility problem is with the VxVM software.
3. If the Solaris OE still cannot see the disk, check again to see if the Solaris OE can see the disk. If the Solaris OE can see the disk, proceed to the next step in the flow diagram and execute the necessary commands that allow the VxVM software to see the disk. If the Solaris OE still cannot see the disk, there is a hardware, software, or driver problem that must be corrected before this disk can be used by the VxVM software.
4. It might be necessary to recreate the DMP user-level nodes for all nodes in the system. In addition, it might be necessary to build (or rebuild) the VxVM software user-space devices. To force the VxVM software to re-scan the Solaris OE device tree and build the necessary VxVM device nodes, run the vxdctl initdmp or the vxdctl enable command.
When these procedures are complete, the disk device should be available for use, unless there is a hardware or functional VxVM software problem.
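The re-scan sequence from the steps above can be sketched as a small script. DRY_RUN is our own guard so the sketch only prints the commands it would issue; on a real system you would run devfsadm and vxdctl directly as root.

```shell
#!/bin/sh
# Sketch of the device re-scan sequence described above (assumption:
# run as root on a live system; here DRY_RUN=1 so commands only print).
DRY_RUN=${DRY_RUN:-1}

run() {
  if [ "$DRY_RUN" -eq 1 ]; then
    echo "would run: $*"
  else
    "$@"
  fi
}

run devfsadm         # make the Solaris OE re-scan its device tree
run vxdctl initdmp   # rebuild the DMP device nodes
run vxdctl enable    # have vxconfigd re-scan for newly visible disks
```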
6-12
An unstartable volume is listed as Unstartable in the third column of the command output. After the problem with the volume is corrected, use the vxvol command to restart the volume.
6-13
Note For more information on this procedure, refer to the VERITAS Volume Manager Troubleshooting Guide for the version of the VxVM software installed on the system, and to the man pages for the vxmend and vxvol commands.
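One common repair-and-restart sequence using the vxmend and vxvol commands the note refers to can be sketched as follows. This is an illustration, not the definitive procedure: the disk group, volume, and plex names are placeholders, and the function only prints the commands, so it is safe to run anywhere.

```shell
# Hypothetical wrapper that emits a typical two-step restart sequence
# for an unstartable volume whose bad plex has been repaired.
restart_volume() {
  dg=$1; vol=$2; plex=$3
  echo "vxmend -g $dg fix clean $plex"   # mark the repaired plex CLEAN
  echo "vxvol -g $dg start $vol"         # then restart the volume
}

restart_volume datadg datavol datavol-01
# prints:
# vxmend -g datadg fix clean datavol-01
# vxvol -g datadg start datavol
```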
6-14
System failures: The system stopped operating abruptly due to a power outage or kernel panic. Errors of this type can leave the RAID 5 volume with stale parity. Stale parity must be reconstructed by reading all the data on the volume and rebuilding the parity blocks. This can take considerable time and impact system performance. To mitigate errors of this type, attach a RAID 5 log.
Disk failures: Data on some number of disks can become unavailable due to controller, disk, or media failure. Errors of this type cause a subdisk to be detached and can leave the RAID 5 volume in degraded mode. Once the failure that forced the RAID 5 volume into degraded mode is repaired, the data resident on the failed subdisk is reconstructed from data on other stripe units in the stripe. This reconstruction of the missing data, while the data managed by the volume remains available to the Solaris OE, is called a reconstructing-read. This process results in degraded performance and, if another subdisk fails during the reconstruction, the volume experiences total failure and moves into the disabled state.
Recovery of RAID 5 volumes is categorized into three groups, and is accomplished as follows:
Parity resynchronization uses the vxvol command. Type the following:
# vxvol resync RAID_5_volume_name
Log plex recovery uses the vxplex command. Type the following:
# vxplex att RAID_5_volume_name Log_Plex_name
6-15
Stale subdisk recovery uses two versions of the vxvol command. To recover one stale subdisk, type the following:
# vxvol recover RAID_5_volume_name stale_plex_name
To recover multiple stale subdisks, type the following:
# vxvol recover RAID_5_volume_name
Note RAID 5 recovery is complex. For more information, refer to the VERITAS Volume Manager Troubleshooting Guide for the version of the VxVM software installed on the system, and to the man pages for the vxplex and vxvol commands. Additional information is found in SunSolve INFODOCs 13364 and 26367.
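The three recovery cases above can be gathered into one dispatcher. This is only an illustration: the helper prints the command it would run, and vol01 and vol01-l01 are placeholder object names.

```shell
# Hypothetical dispatcher over the three RAID 5 recovery cases above.
raid5_recover_cmd() {
  kind=$1; vol=$2; obj=$3
  case $kind in
    parity) echo "vxvol resync $vol" ;;      # stale parity
    log)    echo "vxplex att $vol $obj" ;;   # reattach a log plex
    stale)                                   # stale subdisk(s)
      if [ -n "$obj" ]; then echo "vxvol recover $vol $obj"
      else echo "vxvol recover $vol"; fi ;;
    *) echo "unknown recovery type: $kind" >&2; return 1 ;;
  esac
}

raid5_recover_cmd parity vol01             # vxvol resync vol01
raid5_recover_cmd log    vol01 vol01-l01   # vxplex att vol01 vol01-l01
raid5_recover_cmd stale  vol01             # vxvol recover vol01
```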
6-16
Note The TUTIL field temporarily locks objects during configuration changes and recoveries. This field should be cleared after a system reboot. The TUTIL and PUTIL fields are discussed in Changing Disk Group Configurations in Appendix E, Configuring the VxVM Software.
2. Try to use the vxdg command to complete the move. Type the following:
# vxdg recover disk_group_name
3. If the previous step does not work, try to reset the move flag as follows:
# vxdg -o clean recover disk_group_name
4. If the previous step does not work, try to remove the move flag. Type the following:
# vxdg -o remove recover disk_group_name
If these commands cannot be run on the system performing the move and the disk group is still accessible from a second system, run these commands on the second system. To make sure that any disk group name conflicts are resolved, use a different name for the disk group being repaired.
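The escalation through steps 2 to 4 uses the same vxdg recover command with progressively stronger flags. A sketch that prints the attempts in order (mydg is a placeholder name; on a real system you would stop at the first command that succeeds):

```shell
# Print the escalating recovery attempts described in steps 2-4 above.
dg_recover_attempts() {
  dg=$1
  for flags in "" "-o clean" "-o remove"; do
    echo "vxdg ${flags:+$flags }recover $dg"
  done
}

dg_recover_attempts mydg
# prints:
# vxdg recover mydg
# vxdg -o clean recover mydg
# vxdg -o remove recover mydg
```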
6-17
Preparation
To prepare for this exercise:
Make sure that the VxVM software is installed and operational. The boot disk must be encapsulated and mirrored.
There must be one additional disk group other than rootdg with at least one configured, started, and mounted volume.
You must have access to another system's VxVM software-managed storage devices, either through a storage area network (SAN) or by reconfiguring shared storage attached to the two systems.
The instructor must give you the location and name of the break-and-fix script.
The break-and-fix script is a menu-driven script that is used to break a system based on bugs submitted by Sun Support. This script performs the following tasks:
Provides a list of bugs for use by students to inject real-world bugs into lab systems.
Keeps track of bugs completed and those skipped or not completed by lab teams.
Provides pointers to SunSolve SRDB records and INFODOCs.
Provides hints to help lab teams troubleshoot bugs.
Fixes the injected bug if the lab team desires to proceed to another bug without resolving the current bug.
6-18
Exercise: Determining the VxVM Software Disk Problem
To run the break-and-fix script:
1. Enter the following:
# breakfix
The following is displayed:
breakfix Main Menu
 1. Problem 1      11. Problem 11
 2. Problem 2      12. Problem 12
 3. Problem 3      13. Problem 13
 4. Problem 4      14. Problem 14
 5. Problem 5      15. Problem 15
 6. Problem 6      16. Problem 16
 7. Problem 7      17. Problem 17
 8. Problem 8      18. Problem 18
 9. Problem 9      19. Problem 19
10. Problem 10     20. Problem 20
2. Select a bug by entering the bug number from the menu. For example, if you select Bug 1, the following is displayed:
Problem #1, Invalid disk.exclude file
b) Break it
f) Solution
h) Hint
m) Return to Main Menu
x) Exit
6-19
3. Enter b to break, f to fix, and m to return to the main menu. If you select the b or break option, a system reboot is necessary to activate the bug. After the symptom is observed by your lab team, access SunSolve Online and search for the proper SRDB to help diagnose the problem. If you need or want a hint, re-execute the break-and-fix script and select the h option for the bug currently being worked on. The following is displayed:
This is SRDB ID: 14736. The /etc/vx/disks.exclude file requires the following format: c#t#d# for each disk you wish to exclude in /etc/vx/disks.exclude.
Notice that the referring SRDB is listed along with hints to help resolve the problem.
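The c#t#d# format rule from SRDB 14736 can be checked mechanically. The following is a hedged sketch: the check_exclude_format helper and the sample entries are ours, and on a real system the input would come from /etc/vx/disks.exclude.

```shell
# Validate that every entry matches the c#t#d# form the SRDB requires.
check_exclude_format() {
  awk '!/^c[0-9]+t[0-9]+d[0-9]+$/ { print "bad entry: " $0; bad=1 }
       END { exit bad }'
}

# Sample data standing in for /etc/vx/disks.exclude contents:
printf '%s\n' c0t1d0 c1t2d0 | check_exclude_format && echo "format OK"
# prints: format OK
```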
Tasks
Execute the break-and-fix script, and select bugs 11 through 20. List problem resolution steps for each bug in the space provided.
Problem 11:
__________________________________________________________________
__________________________________________________________________
__________________________________________________________________
__________________________________________________________________
Problem 12:
__________________________________________________________________
__________________________________________________________________
__________________________________________________________________
__________________________________________________________________
6-20
Problem 13:
__________________________________________________________________
__________________________________________________________________
__________________________________________________________________
__________________________________________________________________
Problem 14:
__________________________________________________________________
__________________________________________________________________
__________________________________________________________________
Problem 15:
__________________________________________________________________
__________________________________________________________________
__________________________________________________________________
__________________________________________________________________
Problem 16:
__________________________________________________________________
__________________________________________________________________
__________________________________________________________________
__________________________________________________________________
6-21
Problem 17:
__________________________________________________________________
__________________________________________________________________
__________________________________________________________________
__________________________________________________________________
Problem 18:
__________________________________________________________________
__________________________________________________________________
__________________________________________________________________
__________________________________________________________________
Problem 19:
__________________________________________________________________
__________________________________________________________________
__________________________________________________________________
__________________________________________________________________
Problem 20:
__________________________________________________________________
__________________________________________________________________
__________________________________________________________________
__________________________________________________________________
The lab is complete when all 10 bugs are fixed and your lab system is operational, with the VxVM software and all volumes started and accessible.
6-22
Exercise Summary
Discussion Take a few minutes to discuss what experiences, issues, or discoveries you had during the lab exercise.
6-23
Task Solutions
The solutions for the tasks in the lab exercise are found in the break-and-fix script and on SunSolve in the appropriate SRDB and INFODOC files.
6-24
Module 7
Describe the processes used to upgrade the VxVM software from release 3.1 to 3.2 and from 3.2 to 3.5
Perform an upgrade of the VxVM software to release 3.5
Read, review, and interpret release notes
Identify the top three installation problems
Identify bugs, find patches, and apply patches for the release of the VxVM software installed
Identify and resolve licensing issues
Upgrade the Solaris OE when the VxVM software is installed on the system
Relevance
Discussion The following questions are relevant to understanding the VxVM software upgrade processes:
What is the upgrade process for the VxVM software when the boot disk is encapsulated?
What is the upgrade process for the VxVM software when the boot disk is not encapsulated?
What is the resolution process for licensing problems when upgrading the VxVM software?
How is the Solaris OE upgraded when the VxVM software is installed?
Do release notes provide useful information when upgrading the VxVM software?
7-2
Additional Resources
Additional resources The following references provide additional information on the topics described in this module:
VERITAS Volume Manager 3.2 Administrator's Guide. Mountain View, California: VERITAS Software Corporation, August 2001, number 30-000392-011, TechPDF ID 240253.
VERITAS Volume Manager 3.2 Installation Guide. Mountain View, California: VERITAS Software Corporation, August 2001, number 30-000395-011, TechPDF ID 240256.
VERITAS Volume Manager 3.2 Troubleshooting Guide. Mountain View, California: VERITAS Software Corporation, August 2001, number 30-000394-011, TechPDF ID 240255.
VERITAS Volume Manager Storage Administrator 3.2 Administrator's Guide. Mountain View, California: VERITAS Software Corporation, July 2001, number 30-000393-011, TechPDF ID 240257.
http://storage.east, http://storage.central, and http://storage.west.
SunSolve Online INFODOCS 14189, 17714, and 19492, [http://sunsolve.Sun.COM/pub-cgi/search.pl?mode=advanced].
VERITAS Software Corporation support knowledge base TechNote IDs 230184, 240004, and 240006, [http://seer.support.veritas.com/nav_bar/index.asp?] [content_sURL=%2Fsearch%5Fforms%2Ftechsearch%2Easp].
7-3
Encapsulated boot disk
Non-encapsulated boot disk
The VxVM software licensing
Patches
The scripted, manual, and pkgadd upgrade processes make provisions for the encapsulated state of the boot disk. These procedures, as applied to the boot disk, must be executed precisely, or the upgrade fails. Upgrades to the Solaris OE are also affected if the VxVM software is installed. Some of the issues faced with this upgrade are:
Encapsulated boot disk
Volume configuration information preservation
Manual upgrade as opposed to a JumpStart process or flash upgrade
Concurrent upgrades of both the Solaris OE and the VxVM software
The VxVM software licensing
Patches
/CD_Path/scripts
/Package_Distribution_Path/scripts
7-4
Surveying the Upgrade Processes and Procedures The upgrade scripts include:
q
upgrade_start: Performs first-phase checks and preliminary operations for the VxVM software upgrade, including the following:
Check for an encapsulated boot disk
Check for a supported level of alternate pathing (AP) installed
Check for required patches
Check for problems that prevent VxVM software upgrades
Handles encapsulated boot disk issues
Performs necessary reboots
Saves system configuration files such as the /etc/vfstab and /etc/system files
Saves pre-upgrade VxVM software configuration files such as the /kernel/drv/dmp.conf file
Unmounts volumes
Upgrades the VxVM software eeprom entries
upgrade_finish: Completes the VxVM software upgrade by restoring saved files and restarting the VxVM software.
There are extensive sections of code in each script that are dedicated to managing the upgrade process if AP is installed. Although this is called a scripted upgrade, there are manual steps needed to finish the upgrade process. A general description of the manual steps needed is as follows:
1. Obtain and install any new license keys needed for the new release of the VxVM software.
2. Make sure that any system-level file systems that are under VxVM software control have at least one plex where they begin on a cylinder boundary.
3. If installing any documentation or man page packages, /opt must exist, be writable, and not be symbolically linked.
4. Boot the system to single-user mode.
5. Load and mount the VxVM software upgrade CD-ROM.
6. Execute the upgrade_start script.
7. Reboot to single-user mode.
7-5
8. Remove the old VxVM software packages.
9. Reboot the system to single-user mode.
10. Load and mount the VxVM software upgrade CD-ROM.
11. Install the VRTSlic package.
12. Install the remaining VxVM software upgrade packages.
13. Run the upgrade_finish script.
14. Perform a reconfiguration reboot.
A complete description of the upgrade process is found in VERITAS Software Corporation TechNote ID 230184 and the VERITAS Volume Manager 3.2 Installation Guide.
Caution Be sure to back up the boot disk prior to performing any upgrade of the VxVM software or the Solaris OE.
Upgrading when the boot disk is encapsulated: Detailed instructions for this procedure are found in VERITAS Software Corporation TechNote ID 240006.
Upgrading when the boot disk is unencapsulated: Detailed instructions for this procedure are found in VERITAS Software Corporation TechNote ID 240004.
Caution Be sure to back up the boot disk prior to performing any upgrade of the VxVM software or the Solaris OE.
7-6
10. Re-encapsulate the boot disk using the recommended Sun best practices.
11. Re-mirror the boot disk.
12. Import the original disk groups. Make sure that they are upgraded, if necessary, to use any new features provided by the new release of the VxVM software.
7-7
7-8
7-9
Table 7-1 VxVM 3.5 Upgrade Comparison (Continued)
Using pkgadd (disadvantages):
1. If a system failure occurs during the pkgadd process, the boot disk may become unbootable.
2. The new package is displayed as VRTSvxvm2 by pkginfo unless instance=overwrite is inserted in the default package admin file.
3. The VRTSlic package might not be able to be removed if a second VRTSvxvm package is using it.
Using upgrade scripts (disadvantage): Usually requires three reboots.
Note The procedure for modifying the default package admin file to support the upgrade without having multiple instances of VxVM packages is listed in VERITAS TechNote ID 248394.
Following is a high-level overview of the VxVM 3.5 upgrade process using pkgadd:
1. Make a copy of the default package admin file.
2. Modify the copy so instance=overwrite.
3. Upgrade to VxVM 3.5 using the following pkgadd command syntax:
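Steps 1 and 2 can be sketched with sample data. The contents of admin.default below are a stand-in for the real /var/sadm/install/admin/default file (only the instance= line matters here), and the commented pkgadd line is illustrative only; see the TechNote for the exact syntax.

```shell
# Step 1: make a copy of the default admin file (sample contents here).
cat > admin.default <<'EOF'
mail=
instance=unique
action=nocheck
EOF

# Step 2: switch instance= to overwrite so pkgadd replaces the existing
# VRTSvxvm package instead of adding a second package instance.
sed 's/^instance=.*/instance=overwrite/' admin.default > admin.vxvm
grep '^instance=' admin.vxvm
# prints: instance=overwrite

# Step 3 (illustrative; consult TechNote 248394 for the real syntax):
#   pkgadd -a ./admin.vxvm -d /path/to/packages VRTSvxvm
```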
7-10
Caution It is important that the correct procedure is used to upgrade to release 3.5. If the procedures listed in the VERITAS Volume Manager 3.5 Installation Guide are not followed precisely, the system may become unusable.
7-11
Release Notes
All upgrades and patches include release notes. Be sure to read the release notes prior to performing any upgrade or patch. Release notes contain valuable installation and operation information specific to the patch or release of the VxVM software being installed. Release notes are normally found in the base directory of the patch or upgrade distribution and contain either release or notes in the file name.
Licensing
When upgrading the VxVM software, licensing must be taken into consideration. If upgrading from the VxVM software version 2.x to 3.x, a version 3.x license key must be installed prior to the upgrade, or the VxVM software does not start. Licensing issues are discussed in the following SunSolve INFODOCs:
17714: VxVM and SEVM Licensing Explained
19492: How to Move a VxVM License between Systems
In addition, the upgrade procedures referred to in this module have a section on pre-upgrade licensing.
7-12
VERITAS Volume Manager 3.2 Installation Guide
SunSolve INFODOC 14189
VERITAS Software Corporation TechNote ID 230184
Caution Be sure to back up the boot disk prior to performing any upgrade of the VxVM software or the Solaris OE. A Solaris OE upgrade using the JumpStart process or flash install requires extensive scripting to successfully complete the upgrade. It is beyond the scope of this course to provide the knowledge necessary to build these scripts.
7-13
Upgrade the VxVM software to a later release using either scripted or manual methods
Upgrade disk groups
Preparation
To prepare for this exercise:
All lab systems must have the Solaris 8 OE flash or JumpStart installed, with all supporting packages and patches. At a minimum, the VxVM software version 3.2 must be installed and configured. The boot disk must be encapsulated.
The VxVM software packages and patches for the new version must be available either through network file system (NFS) mounts, ftp, or local access.
You must have access to all SunSolve and VERITAS Software Corporation documents referenced in this module.
Tasks
Complete the following steps: 1. Upgrade the VxVM software to a newer release using one of the following methods:
Upgrade using the upgrade scripts located in the /Package_Distribution_Path/scripts directory. Reference the VERITAS Volume Manager 3.x Installation Guide section Upgrading VxVM on an Encapsulated Root Disk. This section provides a detailed, step-by-step procedure for using the upgrade scripts.
Upgrade using manual procedures. Reference VERITAS Software Corporation TechNote ID 240006 for the procedures to perform the upgrade without using the upgrade scripts.
7-14
Exercise: Upgrading the VxVM Software
The method used by your lab group is not as important as completing the procedure successfully. Both procedures work. Pick the procedure that best fits the experience level of your lab group. Do not perform the backup of the boot disk as recommended by the procedures. Tape drives are not available in the lab.
Upgrade using pkgadd procedures. Reference VERITAS Software Corporation TechNote ID 248394 for the procedures to perform the upgrade without generating additional instances of the VxVM software packages.
2. Re-encapsulate the boot disk.
3. Upgrade all configured disk groups if not specified by the upgrade procedures.
4. Mount any volumes not mounted as part of the upgrade procedures.
5. Verify full functionality and fix any problems encountered.
7-15
Exercise Summary
Discussion Take a few minutes to discuss what experiences, issues, or discoveries you had during the lab exercise.
7-16
Appendix A
SunSolve INFODOCs
This appendix contains the following SunSolve Online INFODOC sources, available at: http://sunsolve.Sun.COM/pub-cgi/search.pl?mode=advanced
INFODOC 16051: How to Encapsulate Disks With No Free Space Using Volume Manager. 22 March 2002.
INFODOC 24663: Full and Basic/Functional Unencapsulation of a Volume Manager Encapsulated Root Disk While Booted CDROM. 22 March 2002.
INFODOC 16051
INFODOC ID: 16051
SYNOPSIS: How to 'Encapsulate' Disks With No Free Space Using Volume Manager
DETAIL DESCRIPTION:
To encapsulate a disk into Veritas Volume Manager or Sun Enterprise Volume Manager[TM], you must have some free space on the disk in order for Volume Manager to write a private region to the disk. The private region is generally smaller than 2 MB. However, if you absolutely do not have any free space on the disk, and you can't free up any, and you really want to get this data under Volume Manager control, you can work around this by TEMPORARILY "encapsulating" one or more slices from a disk into Volume Manager so that the data may be mirrored to another disk. Once the data is mirrored to a "real" Volume Manager disk with a private and public region, you can then break the mirror, leaving the data on the "real" Volume Manager disk.
SOLUTION SUMMARY:
Here is how to do it:
For each slice on the disk (excluding slice 2), run the following command. In this example, only slices 5 and 6 have data on them.
vxdisk define c#t#d#s5 type=nopriv
vxdisk define c#t#d#s6 type=nopriv
Then add each of these "slices" as a disk in a disk group and give them a name. This example names them NPdisk05 and NPdisk06.
vxdg -g <diskgroup> adddisk NPdisk05=c#t#d#s5
vxdg -g <diskgroup> adddisk NPdisk06=c#t#d#s6
Next we create a simple volume (not a file system, just a volume) on each of these new "disks" that spans the entire "disk". To do this we first check to see what the max size is for the volumes we are about to create. We're looking for the len value to then use with the vxassist command to create the volumes.
vxdisk list NPdisk05 | grep public
public: slice=0 offset=0 len=8196096
vxdisk list NPdisk06 | grep public
public: slice=0 offset=0 len=9400320
With this info we create the volumes, naming them NPdisk05vol and NPdisk06vol:
vxassist -g <diskgroup> make NPdisk05vol 8196096 layout=nostripe alloc="NPdisk05"
vxassist -g <diskgroup> make NPdisk06vol 9400320 layout=nostripe alloc="NPdisk06"
Next step is to mirror the volumes, assuming that we are mirroring the volumes to a disk named disk01 that has enough space to mirror both volumes to it:
vxassist -g <diskgroup> mirror NPdisk05vol layout=nostripe alloc="disk01"
vxassist -g <diskgroup> mirror NPdisk06vol layout=nostripe alloc="disk01"
Once that is complete we then remove the original side of the mirror.
vxplex -g <diskgroup> -o rm dis NPdisk05vol-01
vxplex -g <diskgroup> -o rm dis NPdisk06vol-01
The final step is to remove the old disks from the disk group and return them to their original state.
vxdg -g <diskgroup> rmdisk NPdisk05
vxdg -g <diskgroup> rmdisk NPdisk06
vxdisk rm c0t5d10s5
vxdisk rm c0t5d10s6
This leaves us with two concat volumes named NPdisk05vol and NPdisk06vol. These volumes will contain the data that was originally located on c#t#d#s5 and c#t#d#s6.
A-2
INFODOC 16051
Keywords: SEVM, VxVM, Volume Manager, recover, recovery, configure, configured, configuration APPLIES TO: Storage/Veritas, Storage/Volume Manager, AFO Vertical Team Docs/Storage ATTACHMENTS:
A-3
INFODOC 24663
INFODOC ID: 24663 SYNOPSIS: Full and Basic/Functional Unencapsulation of a Volume Manager Encapsulated Root Disk While Booted CDROM DETAIL DESCRIPTION: Overview: This document explains the steps necessary to unencapsulate the root disk from Volume Manager control. This document applies to both Sun Enterprise Volume Manager[TM] (SEVM) 2.x and Veritas Volume Manager (VxVM) 3.x. This document is divided into two distinct sections. The first section describes full unencapsulation while booted from a Solaris CDROM. This procedure should be used any time it is necessary to completely remove the root disk from Volume Manager control and bring the disk back to a pre-encapsulation state including all partitions such as /export, and /opt. The second section explains the steps to perform a Basic/Functional (BF) unencapsulation while booted from a Solaris CDROM. Basic/Functional unencapsulation temporarily unencapsulates the root disk so that troubleshooting of booting issues or other issues can be done. BF unencapsulation gives you access to an unencapsulated /, swap, /usr, and /var but no access to non "big-4" partitions. SOLUTION SUMMARY: Notes for Full Unencapsulation: Under normal circumstances, if the system can be booted to at least single user mode, it is recommended that the vxunroot command be used to unencapsulate root. A full unencapsulation should be performed if the vxunroot command is not working for some reason, or if the system cannot be booted and we want to completely remove Volume Manager from having any control over the root disk. You cannot perform a full unencapsulation and still maintain Volume Manager functionality if the root disk is the ONLY disk in the rootdg diskgroup. If the root disk is the only disk in rootdg, you can still unencapsulate, but Volume Manager will not work until another disk is initialized into rootdg using vxinstall after the system has been fully unencapsulated. 
Normally, if root is encapsulated, it is also mirrored, which gives us another disk in rootdg. However, always verify that there is at least one other disk in rootdg before following this procedure so that you know what to expect once root is unencapsulated.

Also note that Volume Manager allows you to create volumes using free space on the root disk after the root disk has been encapsulated. Volumes created post-encapsulation like this do not have underlying hard partitions and therefore are not recoverable with this procedure. If at all possible, make backups of any volumes created on the root disk post-encapsulation before following this procedure. Once the disk is unencapsulated, if you have free space and a free partition, you can newfs that partition and restore to it from your backup.

Steps for Full Unencapsulation:

Bring the system to the OK prompt and insert a Solaris CD into the CDROM drive. Then issue:

boot cdrom -s

Once booted from the CDROM, set your terminal type so that vi will work correctly. If TERM=sun does not work, TERM=vt100 often will:

TERM=sun;export TERM

Fsck your root filesystem:

fsck -y /dev/rdsk/c#t#d#s0
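The "at least one other disk in rootdg" check above can be scripted. The following is a minimal sketch only: the sample text mimics the columns of vxdisk list output (device, type, disk, group, status), and the disk and group names in it are illustrative. On a live system you would pipe the real vxdisk list output into the function instead.

```shell
# Sketch: count the disks in rootdg before a full unencapsulation.
# The sample output below is illustrative, not from a real system.
count_rootdg_disks() {
    # Count lines whose disk group column (field 4) is "rootdg"
    awk '$4 == "rootdg" { n++ } END { print n+0 }'
}

sample_vxdisk_list='DEVICE       TYPE      DISK         GROUP        STATUS
c0t0d0s2     sliced    rootdisk     rootdg       online
c0t1d0s2     sliced    rootmirror   rootdg       online
c1t0d0s2     sliced    disk01       datadg       online'

n=$(printf '%s\n' "$sample_vxdisk_list" | count_rootdg_disks)
echo "rootdg contains $n disk(s)"
if [ "$n" -lt 2 ]; then
    echo "WARNING: root disk may be the only disk in rootdg"
fi
```

With fewer than two rootdg disks, Volume Manager will not function after the unencapsulation until another disk is initialized into rootdg.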
If fsck comes back cleanly, mount slice 0 on /a. If fsck cannot repair the root file system, there are a number of possibilities; this procedure does not attempt to explain file system corruption or how to repair it beyond fsck. Fsck must come back cleanly before you can continue and mount root.

mount /dev/dsk/c#t#d#s0 /a

Make a backup of /a/etc/system and then edit it:

cp /a/etc/system /a/etc/system.orig
vi /a/etc/system

Completely remove the following lines from the system file. If you re-encapsulate in the future, these lines will be added back correctly, so nothing is lost by removing them:

rootdev:/pseudo/vxio@0:0
set vxio:vol_rootdev_is_volume=1

Make a backup of /a/etc/vfstab and then edit it:

cp /a/etc/vfstab /a/etc/vfstab.orig
vi /a/etc/vfstab

Edit the vfstab file back to its original state, pointing /, swap, /usr, and /var to hard partitions on the disk (/dev/dsk and /dev/rdsk entries) rather than /dev/vx/ entries. Temporarily comment out all other /dev/vx volumes in the /a/etc/vfstab file using the # character. This includes filesystems like /opt and /export, if they exist. The original /etc/vfstab will look something like this, assuming root is c0t0d0:

Note: Columns have been aligned and spaces added for clarity.
---------------------------------------------------------------------------
/dev/vx/dsk/swapvol         -                            -         swap   -  no   -
/dev/vx/dsk/rootvol         /dev/vx/rdsk/rootvol         /         ufs    1  no   -
/dev/vx/dsk/usr             /dev/vx/rdsk/usr             /usr      ufs    1  no   -
/dev/vx/dsk/var             /dev/vx/rdsk/var             /var      ufs    1  no   -
/dev/vx/dsk/export          /dev/vx/rdsk/export          /export   ufs    2  yes  -
swap                        -                            /tmp      tmpfs  -  yes  -
/dev/vx/dsk/datadg/somevol  /dev/vx/rdsk/datadg/somevol  /somevol  ufs    2  yes  -
#NOTE: volume rootvol (/) encapsulated partition c0t0d0s0
#NOTE: volume swapvol (swap) encapsulated partition c0t0d0s1
#NOTE: volume usr (/usr) encapsulated partition c0t0d0s5
#NOTE: volume var (/var) encapsulated partition c0t0d0s6
#NOTE: volume export (/export) encapsulated partition c0t0d0s7
---------------------------------------------------------------------------

Once edited, the vfstab should look something like this:
---------------------------------------------------------------------------
/dev/dsk/c0t0d0s1            -                            -         swap   -  no   -
/dev/dsk/c0t0d0s0            /dev/rdsk/c0t0d0s0           /         ufs    1  no   -
/dev/dsk/c0t0d0s5            /dev/rdsk/c0t0d0s5           /usr      ufs    1  no   -
/dev/dsk/c0t0d0s6            /dev/rdsk/c0t0d0s6           /var      ufs    1  no   -
#/dev/dsk/c0t0d0s7           /dev/rdsk/c0t0d0s7           /export   ufs    2  yes  -
swap                         -                            /tmp      tmpfs  -  yes  -
#/dev/vx/dsk/datadg/somevol  /dev/vx/rdsk/datadg/somevol  /somevol  ufs    2  yes  -
#NOTE: volume rootvol (/) encapsulated partition c0t0d0s0
#NOTE: volume swapvol (swap) encapsulated partition c0t0d0s1
#NOTE: volume usr (/usr) encapsulated partition c0t0d0s5
#NOTE: volume var (/var) encapsulated partition c0t0d0s6
#NOTE: volume export (/export) encapsulated partition c0t0d0s7
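The vfstab edit described above can also be sketched as a sed rewrite. This is a sketch only: the disk name (c0t0d0) and the volume-to-slice mapping (rootvol to s0, swapvol to s1, usr to s5, var to s6) are assumptions taken from the #NOTE comments in the example; verify your own mapping before using anything like this on a real vfstab.

```shell
# Sketch only: rewrite the big-4 vfstab entries back to hard partitions
# and comment out all remaining /dev/vx volumes. The disk name and the
# volume-to-slice mapping are assumptions from the #NOTE comments above.
rewrite_vfstab() {
    disk=$1
    sed \
        -e "s|^/dev/vx/dsk/rootvol[[:space:]].*|/dev/dsk/${disk}s0 /dev/rdsk/${disk}s0 / ufs 1 no -|" \
        -e "s|^/dev/vx/dsk/swapvol[[:space:]].*|/dev/dsk/${disk}s1 - - swap - no -|" \
        -e "s|^/dev/vx/dsk/usr[[:space:]].*|/dev/dsk/${disk}s5 /dev/rdsk/${disk}s5 /usr ufs 1 no -|" \
        -e "s|^/dev/vx/dsk/var[[:space:]].*|/dev/dsk/${disk}s6 /dev/rdsk/${disk}s6 /var ufs 1 no -|" \
        -e "s|^/dev/vx/|#/dev/vx/|"
}

vfstab='/dev/vx/dsk/swapvol - - swap - no -
/dev/vx/dsk/rootvol /dev/vx/rdsk/rootvol / ufs 1 no -
/dev/vx/dsk/usr /dev/vx/rdsk/usr /usr ufs 1 no -
/dev/vx/dsk/var /dev/vx/rdsk/var /var ufs 1 no -
/dev/vx/dsk/export /dev/vx/rdsk/export /export ufs 2 yes -'

out=$(printf '%s\n' "$vfstab" | rewrite_vfstab c0t0d0)
printf '%s\n' "$out"
```

The final sed expression only hits /dev/vx/ lines that the first four did not rewrite, so /export (and any data volumes) end up commented out while the big-4 entries point at hard partitions.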
---------------------------------------------------------------------------

Now make sure Volume Manager does not start on the next boot:

touch /a/etc/vx/reconfig.d/state.d/install-db

This is important because if the root disk contains mirrors and the system boots up, the mirrors will be resynchronized, corrupting the changes just made.

Remove the flag that tells Volume Manager that the root disk is encapsulated:

rm /a/etc/vx/reconfig.d/state.d/root-done

Reboot the system for the changes to take effect:

reboot

After the reboot, the system comes up in a partially unencapsulated state with /, /usr, /var, and swap mounted. Volume Manager will not start, but it can be started manually once the system is booted. To start Volume Manager, run the following commands:

rm /etc/vx/reconfig.d/state.d/install-db
vxiod set 10
vxconfigd -m disable
vxdctl enable

Now remove the volumes that existed on the encapsulated boot disk. They will generally be rootvol, swapvol, usr, and var. They might also include home, opt, or other non-standard root partitions. Use the command 'vxprint -htg rootdg' to list the volumes in rootdg before removing them. Then, for each volume, run the command:

/usr/sbin/vxedit -rf rm <volume name>

Remove the root disk from rootdg now that it has no volumes, plexes, or subdisks (the disk name is usually 'rootdisk'):

/usr/sbin/vxdg rmdisk <disk name>

The final step is to rewrite the VTOC of the disk so that hard partitions are again defined for the root file systems. There are several ways to put the hard partitions back, including using fmthard on a modified /etc/vx/reconfig.d/disk.d/c#t#d#/vtoc file, using format to manually repartition the disk, or using the vxmksdpart command. The simplest method, however, is to use the vxedvtoc command as explained below.

When Volume Manager encapsulates a disk, it makes a record of the old VTOC of the disk. This file is stored for each disk in /etc/vx/reconfig.d/disk.d/c#t#d#.
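The per-volume removal step above can be scripted by extracting the volume names from vxprint output. This is a hedged sketch: the sample records below are trimmed and illustrative (in vxprint -htg output, volume records begin with "v" in the first column), and the loop only echoes the commands it would run rather than executing vxedit.

```shell
# Sketch: generate the per-volume vxedit removal commands from
# vxprint-style output. Sample records are illustrative, not real output.
list_volumes() {
    awk '$1 == "v" { print $2 }'
}

sample_vxprint='v  rootvol  -  ENABLED  ACTIVE  2100735  ROUND  -  root
v  swapvol  -  ENABLED  ACTIVE  2100735  ROUND  -  swap
v  usr      -  ENABLED  ACTIVE  4197879  ROUND  -  gen
v  var      -  ENABLED  ACTIVE  4197879  ROUND  -  gen'

cmds=$(for vol in $(printf '%s\n' "$sample_vxprint" | list_volumes)
do
    # On a live system you would run the command instead of echoing it
    echo "/usr/sbin/vxedit -rf rm $vol"
done)
printf '%s\n' "$cmds"
```

Review the generated list against the actual vxprint -htg rootdg output before running anything, since removal is destructive.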
This file is stored in a Volume Manager specific format, so it cannot be used as an argument to fmthard unless it is modified. The vxedvtoc command is similar to fmthard but knows how to read this vtoc file and write that VTOC to a disk. The command takes the form:

vxedvtoc -f <filename> <devicename>

Assuming that the boot disk is c0t0d0, we would now run the command:

/etc/vx/bin/vxedvtoc -f /etc/vx/reconfig.d/disk.d/c0t0d0/vtoc /dev/rdsk/c0t0d0s2

# THE ORIGINAL PARTITIONING IS AS FOLLOWS :
#SLICE     TAG  FLAGS    START     SIZE
 0         0x0  0x200        0        0
 1         0x0  0x200        0        0
 2         0x5  0x201        0  8794112
 3         0x0  0x200        0        0
 4         0x0  0x200        0        0
 5         0x0  0x200        0        0
 6         0xe  0x201        0  8794112
 7         0xf  0x201  8790016     4096

# THE NEW PARTITIONING WILL BE AS FOLLOWS :
#SLICE     TAG  FLAGS    START     SIZE
 0         0x0  0x200        0  2048000
 1         0x0  0x200  2048000  2048000
 2         0x5  0x201        0  8794112
 3         0x0  0x201  4096000  2048000
 4         0x0  0x201  6144000  2048000
 5         0x0  0x200        0        0
 6         0x0  0x200        0        0
 7         0x0  0x200        0        0

DO YOU WANT TO WRITE THIS TO THE DISK ? [Y/N] :y
WRITING THE NEW VTOC TO THE DISK

This partitions the disk back to its pre-encapsulation state. Now uncomment the entries for any non Big-4 root partitions in /etc/vfstab, as well as any data volumes. In this example we removed the comments from /export and the data volume /somevol:

vi /etc/vfstab

/dev/dsk/c0t0d0s1           -                            -         swap   -  no   -
/dev/dsk/c0t0d0s0           /dev/rdsk/c0t0d0s0           /         ufs    1  no   -
/dev/dsk/c0t0d0s5           /dev/rdsk/c0t0d0s5           /usr      ufs    1  no   -
/dev/dsk/c0t0d0s6           /dev/rdsk/c0t0d0s6           /var      ufs    1  no   -
/dev/dsk/c0t0d0s7           /dev/rdsk/c0t0d0s7           /export   ufs    2  yes  -
swap                        -                            /tmp      tmpfs  -  yes  -
/dev/vx/dsk/datadg/somevol  /dev/vx/rdsk/datadg/somevol  /somevol  ufs    2  yes  -

Just to make sure, start all volumes:

/usr/sbin/vxvol startall

Now issue a mountall to mount the now-uncommented volumes:

mountall

At this point the root disk is completely free of Volume Manager control, the Volume Manager daemons are started, and all file systems and volumes should be mounted.

Notes for Basic/Functional Unencapsulation:

This section explains the steps necessary to temporarily unencapsulate the root disk from Volume Manager control. BF unencapsulation allows the system to be booted from the raw Solaris partitions while still leaving the root disk under Volume Manager control. This is a good method for troubleshooting boot issues that appear to be due to the disk being encapsulated, because it can be undone by reversing the steps. This procedure can be used even if the root disk is the only disk in the rootdg disk group because, throughout the procedure, root keeps its private and public regions.

This procedure only allows you to mount /, /usr, /var, and swap. Non "Big-4" partitions will not be mounted. If you must have non "Big-4" partitions available, perform a full unencapsulation as outlined above.

Steps for Basic/Functional Unencapsulation:

Bring the system to the OK prompt and insert a Solaris CD into the CDROM drive. Then issue:

boot cdrom -s

Once booted from the CDROM, set your terminal type so that vi will work correctly. If TERM=sun does not work, TERM=vt100 often will:

TERM=sun;export TERM

Fsck your root filesystem:

fsck -y /dev/rdsk/c#t#d#s0

If fsck comes back cleanly, mount slice 0 on /a. If fsck cannot repair the root file system, there are a number of possibilities; this procedure does not attempt to explain file system corruption or how to repair it. Fsck must come back cleanly before you can continue and mount root.

mount /dev/dsk/c#t#d#s0 /a

Make a backup of /a/etc/system and then edit it:

cp /a/etc/system /a/etc/system.orig
vi /a/etc/system
Comment out the following lines using double asterisks (**):

**rootdev:/pseudo/vxio@0:0
**set vxio:vol_rootdev_is_volume=1

Make a backup of /a/etc/vfstab and then edit it:

cp /a/etc/vfstab /a/etc/vfstab.orig
vi /a/etc/vfstab

Edit the vfstab file back to its original state, pointing /, swap, /usr, and /var to hard partitions on the disk (/dev/dsk and /dev/rdsk entries) rather than /dev/vx/ entries. Temporarily comment out all other /dev/vx volumes in the /a/etc/vfstab file using the # character. This includes filesystems like /opt and /export, if they exist. The original /etc/vfstab will look something like this, assuming root is c0t0d0:

Note: Columns have been aligned and spaces added for clarity.
---------------------------------------------------------------------------
/dev/vx/dsk/swapvol         -                            -         swap   -  no   -
/dev/vx/dsk/rootvol         /dev/vx/rdsk/rootvol         /         ufs    1  no   -
/dev/vx/dsk/usr             /dev/vx/rdsk/usr             /usr      ufs    1  no   -
/dev/vx/dsk/var             /dev/vx/rdsk/var             /var      ufs    1  no   -
/dev/vx/dsk/export          /dev/vx/rdsk/export          /export   ufs    2  yes  -
swap                        -                            /tmp      tmpfs  -  yes  -
/dev/vx/dsk/datadg/somevol  /dev/vx/rdsk/datadg/somevol  /somevol  ufs    2  yes  -
#NOTE: volume rootvol (/) encapsulated partition c0t0d0s0
#NOTE: volume swapvol (swap) encapsulated partition c0t0d0s1
#NOTE: volume usr (/usr) encapsulated partition c0t0d0s5
#NOTE: volume var (/var) encapsulated partition c0t0d0s6
#NOTE: volume export (/export) encapsulated partition c0t0d0s7
---------------------------------------------------------------------------

Once edited, it should look something like this:
---------------------------------------------------------------------------
/dev/dsk/c0t0d0s1            -                            -         swap   -  no   -
/dev/dsk/c0t0d0s0            /dev/rdsk/c0t0d0s0           /         ufs    1  no   -
/dev/dsk/c0t0d0s5            /dev/rdsk/c0t0d0s5           /usr      ufs    1  no   -
/dev/dsk/c0t0d0s6            /dev/rdsk/c0t0d0s6           /var      ufs    1  no   -
#/dev/dsk/c0t0d0s7           /dev/rdsk/c0t0d0s7           /export   ufs    2  yes  -
swap                         -                            /tmp      tmpfs  -  yes  -
#/dev/vx/dsk/datadg/somevol  /dev/vx/rdsk/datadg/somevol  /somevol  ufs    2  yes  -
#NOTE: volume rootvol (/) encapsulated partition c0t0d0s0
#NOTE: volume swapvol (swap) encapsulated partition c0t0d0s1
#NOTE: volume usr (/usr) encapsulated partition c0t0d0s5
#NOTE: volume var (/var) encapsulated partition c0t0d0s6
#NOTE: volume export (/export) encapsulated partition c0t0d0s7
---------------------------------------------------------------------------

Now make sure Volume Manager does not start on the next boot:

touch /a/etc/vx/reconfig.d/state.d/install-db

The presence of install-db tells Volume Manager not to start at boot. This is important because if the root disk contains mirrors and the system boots up, the mirrors will be resynchronized, corrupting the changes just made.

Remove the flag that tells Volume Manager that the root disk is encapsulated:

rm /a/etc/vx/reconfig.d/state.d/root-done

Reboot the system for the changes to take effect:

reboot
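The BF edit to the system file, commenting the two VxVM rootdev lines with the ** convention shown above, can be sketched with sed. The sample file contents here are illustrative (a real /etc/system holds many more lines); on a live system the input would be /a/etc/system.

```shell
# Sketch: comment out the VxVM rootdev lines in a copy of /etc/system
# using the "**" convention. The sample input is illustrative only.
bf_comment_system() {
    sed \
        -e 's|^rootdev:/pseudo/vxio@0:0|**&|' \
        -e 's|^set vxio:vol_rootdev_is_volume=1|**&|'
}

sample_system='set maxusers=128
rootdev:/pseudo/vxio@0:0
set vxio:vol_rootdev_is_volume=1'

out=$(printf '%s\n' "$sample_system" | bf_comment_system)
printf '%s\n' "$out"
```

Unlike the full unencapsulation, the lines are commented rather than deleted, which makes the edit trivially reversible when the disk is re-encapsulated.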
When the system reboots, it comes up in an unencapsulated state with /, /usr, /var, and swap mounted. At this point you have performed a Basic/Functional unencapsulation. This is not a state that the system should be left in permanently; it is a state that is useful for troubleshooting and system maintenance. When the problems with the system are resolved and you are ready to re-encapsulate, perform the following:

touch /etc/vx/reconfig.d/state.d/root-done
rm /etc/vx/reconfig.d/state.d/install-db
cp /a/etc/vfstab.orig /a/etc/vfstab
cp /a/etc/system.orig /a/etc/system
reboot

Keywords: SEVM, VxVM, Volume Manager, encapsulation
APPLIES TO: Operating Systems/Solaris/Solaris 8, Operating Systems/Solaris/Solaris 7, Operating Systems/Solaris/Solaris 2.6, Operating Systems/Solaris/Solaris 2.5.1, Storage/Veritas, Storage/Volume Manager, AFO Vertical Team Docs/Storage
ATTACHMENTS:
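The restore sequence above can be rehearsed safely in a throwaway directory before touching the live root. This is a sandbox sketch: ROOT is a temporary directory standing in for the real filesystem root, the file contents are placeholders, and the reboot step is of course omitted.

```shell
# Sketch: the re-encapsulation restore sequence, exercised in a sandbox.
# ROOT is an assumption standing in for the real filesystem root.
ROOT=$(mktemp -d)
mkdir -p "$ROOT/etc/vx/reconfig.d/state.d"

# Simulate the BF-unencapsulated state: install-db present, root-done
# absent, backup copies of system and vfstab in place.
touch "$ROOT/etc/vx/reconfig.d/state.d/install-db"
echo "original vfstab" > "$ROOT/etc/vfstab.orig"
echo "edited vfstab"   > "$ROOT/etc/vfstab"
echo "original system" > "$ROOT/etc/system.orig"
echo "edited system"   > "$ROOT/etc/system"

# The restore steps (reboot omitted in this sandbox):
touch "$ROOT/etc/vx/reconfig.d/state.d/root-done"
rm "$ROOT/etc/vx/reconfig.d/state.d/install-db"
cp "$ROOT/etc/vfstab.orig" "$ROOT/etc/vfstab"
cp "$ROOT/etc/system.orig" "$ROOT/etc/system"

restored_vfstab=$(cat "$ROOT/etc/vfstab")
flag_ok=no
if [ -f "$ROOT/etc/vx/reconfig.d/state.d/root-done" ] && \
   [ ! -f "$ROOT/etc/vx/reconfig.d/state.d/install-db" ]; then
    flag_ok=yes
fi
echo "restored vfstab: $restored_vfstab (flags ok: $flag_ok)"
rm -rf "$ROOT"
```

The two flag files are the key: root-done present plus install-db absent is the state that lets Volume Manager start and treat the disk as encapsulated again on the next boot.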
Appendix B
Encapsulate non-root or data disks
Unencapsulate non-root or data disks
Encapsulate boot disks
Unencapsulate boot disks by using the vxunroot command
Unencapsulate boot disks without using the vxunroot command
Unencapsulate boot disks when the system is booted from a CDROM
Perform a basic boot disk unencapsulation
Perform the Sun Enterprise Services best practice procedures for managing boot disks with the VxVM software
[Flowchart: top-level decision tree. Start -> Encapsulate? If yes: Boot Disk? Yes -> page 10 of 26; No -> page 2 of 26. If no: Unencapsulate? If yes: Boot Disk? Yes -> page 11 of 26; No -> page 7 of 26. Otherwise -> End.]
B-2
This flow has been selected because a disk with existing data must be brought under VxVM software control.

[Flowchart summary: Open the vxdiskadm utility (# vxdiskadm) and select the disk to encapsulate. The selected disk must have sufficient space to hold the private region and have two unused slices. If the selected disk does not have sufficient free space and partitions for encapsulation, continue on page 3 of 26. Enter the disk group name that this disk shall join; if a default disk name was not previously selected, enter an arbitrary name now (rootdisk is the normal name for the boot disk). Choose whether to use the default private region size; if you change the default size, all disks in this disk group must have the same private region size. Confirm the c#t#d# device by answering y, then exit vxdiskadm. End.]

Disk Encapsulation Flowchart, Rev-B, Page 2 of 26, 07/18/2002
B-3
This flow has been selected because a disk with existing data (non-root) that does not meet minimum encapsulation requirements must be brought under VxVM software control.

[Flowchart: decision flow based on vxdisk output.]
B-4
This flow has been selected because a process executing in flow E was directed here.

[Flowchart summary: Document the maximum size of the "disks" just created; this information shall be used to create the volumes. Example:

# vxdisk list NPdisk05 | grep public
public: slice=0 offset=0 len=8196096
# vxdisk list NPdisk06 | grep public
public: slice=0 offset=0 len=9400320

Create a volume on each disk using the documented size:

# vxassist -g datadg make NPdisk05vol 8196096 layout=nostripe \
  alloc="NPdisk05"
# vxassist -g datadg make NPdisk06vol 9400320 layout=nostripe \
  alloc="NPdisk06"

Create additional volumes? Yes -> repeat; No -> continue.]
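The sizing step above can be scripted by pulling the public region length out of the vxdisk output and feeding it to vxassist. This is a sketch under assumptions: the sample line mirrors the example's format (the exact field layout of vxdisk list varies between VxVM releases), and the command is echoed rather than executed.

```shell
# Sketch: extract the public region length from vxdisk-style output and
# build the matching vxassist command. Field layout is an assumption;
# verify against real vxdisk list output before use.
public_len() {
    # Print the number following "len=" on the public-region line
    sed -n 's/.*public.*len=\([0-9][0-9]*\).*/\1/p'
}

sample='public: slice=0 offset=0 len=8196096'
len=$(printf '%s\n' "$sample" | public_len)
echo "vxassist -g datadg make NPdisk05vol $len layout=nostripe alloc=NPdisk05"
```

Using the recorded public length as the volume size ensures the new volume exactly covers the usable region of the disk, as in the example above.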
B-5
This flow has been selected because a process executing in flow F was directed here.

[Flowchart summary: Mirror each volume. Example:

# vxassist -g datadg mirror NPdisk05vol layout=nostripe \
  alloc="disk01"
# vxassist -g datadg mirror NPdisk06vol layout=nostripe \
  alloc="disk01"]
B-6
This flow has been selected because a process executing in flow G was directed here.

[Flowchart summary: Remove the disks from the disk group and from VxVM control. Example:

# vxdg -g datadg rmdisk NPdisk05
# vxdg -g datadg rmdisk NPdisk06
# vxdisk rm c1t3d0s5
# vxdisk rm c1t3d0s6

End.]
B-7
This flow has been selected because an encapsulated data disk (non-root) must be unencapsulated. Applications using the volumes encapsulated on this disk must be shut down.

[Flowchart summary: Identify the disk to be unencapsulated. Encapsulated disks use slices six and seven for the public and private regions; mirror disks use slices three and four. Review the content of the /etc/vfstab file to define the pre-encapsulation configuration. Verify the volumes are backed up; if necessary, back up the data and restore it to a non-VxVM managed disk. Continue on page 8 of 26.]
B-8
This flow has been selected because a process executing in flow C was directed here.

[Flowchart summary: Recreate the original partitions, using the /etc/vfstab file and the output from the vxprint -qhtg <diskgroup> command as a reference. Partitions can be rebuilt with the vxmksdpart command:

vxmksdpart -g <diskgroup> <plexname> <partition#> <tags> <flags>

Example:
# /etc/vx/bin/vxmksdpart -g datadg datadg01-03 0 0x00 0x00
# /etc/vx/bin/vxmksdpart -g datadg datadg01-02 1 0x00 0x00

If the partitions cannot be restored, the disk is unable to be unencapsulated; restore the data from backups to a non-VxVM software managed disk and end. Otherwise continue on page 9 of 26.]
B-9
This flow has been selected because a process executing in flow I was directed here.

[Flowchart summary: Recursively remove each volume. Example:

vxedit -r -f rm <volume>

Then remount the filesystems. End.]
B-10
This flow has been selected because the boot disk must be brought under Volume Manager control.

[Flowchart summary: Encapsulate and mirror using the Sun ES Best Practice for boot disks? Yes -> page 24 of 26. No: Encapsulate using vxinstall? This option is used during the initial installation of the Volume Manager software; to encapsulate the boot disk, answer yes to the "Encapsulate Boot Disk" installation prompt. Otherwise, use the data disk encapsulation flow on page 2 of 26. End.]
B-11
This flow has been selected because the boot disk must be removed from Volume Manager control.

[Flowchart summary: choose an unencapsulation method.
vxunroot? Yes -> page 12 of 26.
Manual? Yes -> page 15 of 26.
Booted from CDROM? Yes -> page 19 of 26.
Basic Functionality? Yes -> page 22 of 26. This unencapsulation procedure is designed to help troubleshoot system problems that require the VxVM software to be shut down. It is also a good technique to use for system maintenance tasks.
Otherwise, end.]
B-12
This flow has been selected because a process executing in flow D was directed here. The following procedure requires a reboot of the system. Terminate all production applications and log off all non-administrative users prior to starting.

[Flowchart summary: Unmount all file systems under the control of the VxVM software except those on the boot disk (/, /usr, and so on). Properly configured boot disks only contain system partitions, so this should not be an issue. Record the rootdg configuration and dissociate the "rootmirror" plexes:

vxdisk -o alldgs list > /vxdisk_alldgs.list
vxprint -qhtg rootdg -s | grep -i rootmirror | \
awk '{print $3}' > /rmsub.plex

for i in `cat ./rmsub.plex`
do
vxplex -g rootdg -o rm dis $i
done
vxprint -qhtg rootdg

Verify that all "rootmirror" plexes have been removed, then continue to the vxunroot flow.]
B-13
This flow has been selected because a process executing in flow D was directed here.

[Flowchart summary: Run the vxunroot command:

/etc/vx/bin/vxunroot

Successful? No -> page 14 of 26. Yes -> reboot the system. Reboot successful? No -> page 14 of 26. Yes -> End.]
B-14
This flow has been selected because a process executing in flow O was directed here.

[Flowchart summary: troubleshooting a failed vxunroot or a failed reboot.
If the vxunroot command failed: verify all non-system volumes on this disk are unmounted; verify all volumes on this disk have not been grown or structurally modified; verify no "rootmirror" plexes remain. If the disk is still using VxVM software objects, execute the vxunroot unencapsulation process again, following the instructions exactly as written. If vxunroot still fails, go to the CDROM unencapsulation process. For any other problem, call support.
If the reboot failed: if the system is bootable, execute the vxunroot unencapsulation process again; for any other problem, call support. End.]

Disk Encapsulation Flowchart, Rev-B, Page 14 of 26, 07/18/2002
B-15
This flow has been selected because a process executing in flow D was directed here. The following procedure requires a reboot of the system. Terminate all production applications and log off all non-administrative users prior to starting. Verify that a current boot disk backup exists.

[Flowchart summary: Unmount all file systems under the control of the VxVM software except those on the boot disk (/, /usr, and so on). Properly configured boot disks only contain system partitions, so this should not be an issue. Record the rootdg configuration and dissociate the "rootmirror" plexes:

vxdisk -o alldgs list > /vxdisk_alldgs.list
vxprint -qhtg rootdg -s | grep -i rootmirror | \
awk '{print $3}' > /rmsub.plex

for i in `cat ./rmsub.plex`
do
vxplex -g rootdg -o rm dis $i
done
vxprint -qhtg rootdg

All "rootmirror" plexes must be removed from all volumes residing on the boot disk before continuing.]
B-16
This flow has been selected because a process executing in flow N was directed here.

[Flowchart summary: Remove rootability using the following processes. Edit the /etc/system file and remove the "rootdev" statements. Edit the /etc/vfstab file and restore the pre-encapsulation mount entries; this is safe provided there have been no changes to this system's storage configuration after the boot disk was encapsulated. If there are any non-system partitions on this disk, such as /opt or /home, use the vxmksdpart command to restore them. Reboot. Reboot successful? No -> page 18 of 26. Yes -> page 17 of 26.]
B-17
This flow has been selected because a process executing in flow O was directed here.

[Flowchart summary: Verify that all non-system partitions were properly restored and match the /etc/vfstab entries. Remove the boot disk volumes and then the disk itself from rootdg. Once the volumes and disk are removed, delete the public and private region partitions using the format utility, then run:

vxdctl enable

End.]
B-18
This flow has been selected because a process executing in flow P was directed here.

[Flowchart summary: System bootable? Yes -> End. No: did the CDROM unencapsulation fail, leaving an unbootable system? If so, rebuild the boot disk from pre-encapsulation backups; once restored, boot from CDROM and disable boot disk encapsulation using the CDROM unencapsulation procedure. For any other problem, call support. End.]
B-19
This flow has been selected because a process executing in flow D was directed here. The following procedure requires a reboot of the system. Terminate all production applications and log off all non-administrative users prior to starting. Verify that a current boot disk backup exists.

[Flowchart summary: Bring the system to the boot PROM and boot to single-user mode from CDROM. Set the terminal type (for example, vt100) so that vi will work, then fsck and mount the root partition on /a. Edit the /etc/system file on /a and remove the "rootdev" statements. Back up and edit the /a/etc/vfstab file. Continue on page 20 of 26.]
B-20
This flow has been selected because a process executing in flow S was directed here.

Edit the /etc/vfstab file and restore it to its pre-encapsulation state. Use either the comments at the end of the file or the contents of the /etc/vfstab.prevm file on the boot disk to restore the original mount statements. The /etc/vfstab.prevm file can be copied over the /etc/vfstab file provided there have been no changes to this system's storage configuration after the boot disk was encapsulated.

If there are non-system (/, swap, /usr, /var) partitions on this disk, do not restore those mount statements at this time. These non-system partitions could include /opt, /home, or other named filesystems. Their mount statements shall be restored later in this process, after their partitions have been recovered. It is acceptable to restore the mount statements now if they are commented out so they will not be executed during the next system reboot.

Remove the boot disk encapsulation flag:

rm /a/etc/vx/reconfig.d/state.d/root-done

Reboot. If the reboot is not successful, go to page 18 of 26. The system will reboot in a partially unencapsulated state with /, swap, /usr, and /var mounted. Then start the VxVM software manually and list all volumes encapsulated on the boot disk, noting them for use later:

rm /etc/vx/reconfig.d/state.d/install-db
vxiod set 10
vxconfigd -m disable
vxdctl enable
vxprint -qhtg rootdg

Continue on page 21 of 26.
B-21
This flow has been selected because a process executing in flow S was directed here.

[Flowchart summary: Remove each volume that was encapsulated on the boot disk, then remove the boot disk from rootdg. Restore the vtoc to its pre-encapsulation state; this defines hard partitions for the file systems resident on the boot disk. Use the vxedvtoc command with the saved vtoc file for the c#t#d# boot device. If this step is performed incorrectly, loss of data could occur; additionally, the boot disk could be corrupted, causing the system to become unbootable. Edit the /etc/vfstab file and restore the mount statements for any non-system filesystems resident on the boot disk, such as /home, /export, /opt, and others. Mount all non-system file systems that may have been resident on the boot disk. End.]
B-22
This flow has been selected because a process executing in flow D was directed here. The following procedure requires a reboot of the system. Terminate all production applications and log off all non-administrative users prior to starting. Verify that a current boot disk backup exists.

[Flowchart summary: Bring the system to the boot PROM and boot to single-user mode from CDROM. Set the terminal type (for example, vt100) so that vi will work, then mount the root partition on /a. Edit the /etc/system file on /a, then back up and edit /a/etc/vfstab:

cp /a/etc/vfstab /a/etc/vfstab.orig

Continue on page 23 of 26.]
B-23
This flow has been selected because a process executing in flow V was directed here.

[Flowchart summary: Edit the /a/etc/vfstab file and restore /, swap, /usr, and /var to mounting from their pre-encapsulation physical devices. Comment out all VxVM managed volumes (/dev/vx... entries); this includes /export and /opt, if they exist. Remove the boot disk encapsulation flag and reboot. The system comes up with /, swap, /usr, and /var mounted and the VxVM software stopped. Any other filesystems resident on the boot disk are not available; if those filesystems are needed, a full unencapsulation of the boot disk is necessary. When maintenance is finished, restore the system to its encapsulated, VxVM software enabled state. End.]
B-24
This flow has been selected because a process executing in flow B was directed here.

[Flowchart summary: Sun ES best practice guidelines for boot disks:
1.-2. Configure the boot disk with only /, swap, /usr, and /var (optional), each as a separate slice.
3. Attach mirrors in geographical order, not in alphabetical order (vxdiskadm attaches mirrors in alphabetical order).
4. Replace the encapsulated boot disk with an initialized disk. Replacing the encapsulated disk with an initialized disk ensures that the boot disk and the mirror disk are identical. This reduces the complexity of the mirrored boot disk configuration.
5. Map the core system volumes to slices/partitions. This should not be necessary if /, swap, and /var are the only slices on the boot disk and you are using VxVM 3.x or higher.
6. Ensure that all boot disk mirrors are bootable and have a devalias built in the OpenBoot PROM.
7. Create a clone disk. This disk must be able to be booted from slices (a non-encapsulated copy of the boot disk, not under VxVM control) and be able to run the VxVM software utilities (VxVM installed). This disk is used if there is a complete failure of the VxVM managed boot disk, making the system unbootable.
Save the original vtoc for reference if needed. Encapsulate using vxdiskadm or vxinstall, with the mirror disk named rootmirror in rootdg. Continue on page 25 of 26.]
B-25
This flow has been selected because a process executing in flow X was directed here.

[Flowchart summary: Mirror the boot volumes to the mirror disk:

# /etc/vx/bin/vxrootmir rootmir
# vxassist -g rootdg mirror swapvol rootmirror &
# vxassist -g rootdg mirror var rootmirror &

Note: If the original boot disk was configured only using /, swap, and /var, vxdiskadm is acceptable.

Wait for the mirroring process to complete:

while true
> do
> vxtask list
> sleep 15
> echo "#####################"
> done

When the mirrors are complete, dissociate and remove the original plexes:

vxplex -g rootdg dis rootvol-01
vxplex -g rootdg dis swapvol-01
vxplex -g rootdg dis var-01
vxedit -g rootdg -fr rm rootvol-01 swapvol-01 var-01

Continue on page 26 of 26.]
B-26
This flow has been selected because a process executing in flow Y was directed here.

[Flowchart summary: Remove the rootdisk from rootdg and initialize it as the new mirror, then mirror the volumes back to it:

/etc/vx/bin/vxrootmir rootmir
vxassist -g rootdg mirror swapvol rootmirror &
vxassist -g rootdg mirror var rootmirror &

Build any overlay partitions with vxmksdpart:

# /usr/vx/bin/vxmksdpart -g rootdg rootdisk-03 \
  6 0x07 0x00

(This example creates the overlay partition for /var, if configured.)

Note: If /, swap, and /var are the only file systems on the boot disk, vxdiskadm can be used to create the new mirror, and vxdiskadm will build all the overlay partitions, making this step unnecessary.

Note: Run vxprint -qhtg rootdg prior to executing the vxmksdpart command to get the subdisk names for the overlay partitions needing to be built.

Reset the dump device and update the OBP device aliases if needed:

# dumpadm -d /dev/dsk/c#t#d#s#

End.]
B-27
Appendix C
Figure C-1 illustrates an example partitioning scheme of a complex, five-slice boot disk.

[Figure C-1: five-slice boot disk and mirror disk layout.
Boot disk slices: Slice 0 - /, Slice 1 - swap, Slice 2 (full disk), Slice 3 - /usr, Slice 4 - /var, Slice 5 - /opt, plus free space. After encapsulation, overlay partitions cover / (0), swap (1), /usr (3), and /var (4); slice 2 and slice 6 (the public region) overlap the full disk, and /opt is encapsulated in the public region with its contents preserved.
Mirror disk: slice 3 holds the private region and slice 2 the public region, with overlay partitions for / (0), swap (1), /opt (5), /usr (6), and /var (7).]

Figure C-1  Example Five-Slice Boot Disk Encapsulation
C-2
Sun Proprietary: Internal Use Only Example Five-Slice Boot Disk Encapsulation
Copyright 2003 Sun Microsystems, Inc. All Rights Reserved. Enterprise Services, Revision B
C-3
Pre-Encapsulation df -k Command
Example output from the df -k command delineates the physical devices supporting currently mounted file systems.

bash-2.03# df -k
Filesystem            kbytes    used    avail  capacity  Mounted on
/dev/dsk/c1t2d0s0    1018382   47206   910074     5%     /
/dev/dsk/c1t2d0s3    2055705  772430  1221604    39%     /usr
/proc                      0       0        0     0%     /proc
fd                         0       0        0     0%     /dev/fd
mnttab                     0       0        0     0%     /etc/mnttab
/dev/dsk/c1t2d0s4    2055705  956248  1037786    48%     /var
swap                 1222104      16  1222088     1%     /var/run
swap                 1222104      16  1222088     1%     /tmp
/dev/dsk/c1t2d0s5    2055705    2133  1991901     1%     /opt
C-4
C-5
Post-Encapsulation df -k Command
The following example of output from the df -k command shows that the VxVM software volumes are used for the file systems provided by the boot disk.

bash-2.03# df -k
Filesystem            kbytes    used    avail  capacity  Mounted on
/dev/vx/dsk/rootvol  1018382   75398   881882     8%     /
/dev/vx/dsk/usr      2055705  805992  1188042    41%     /usr
/proc                      0       0        0     0%     /proc
fd                         0       0        0     0%     /dev/fd
mnttab                     0       0        0     0%     /etc/mnttab
/dev/vx/dsk/var      2055705  974276  1019758    49%     /var
swap                 1180368      16  1180352     1%     /var/run
swap                 1180424      72  1180352     1%     /tmp
/dev/vx/dsk/opt      2055705   53686  1940348     3%     /opt
C-6
[Output of vxprint for the disk group rootdg (ID 1020469848.1025.lowtide5), with columns garbled in the source: it lists the volumes opt, rootvol, swapvol, usr, and var, each ENABLED/ACTIVE with two concatenated (CONCAT) plexes, one built from rootdisk (c1t2d0) subdisks and one from rootmirror (c1t20d0) subdisks; usage types are root for rootvol, swap for swapvol, and gen for opt, usr, and var.]
Sun Proprietary: Internal Use Only Example Five-Slice Boot Disk Encapsulation
Copyright 2003 Sun Microsystems, Inc. All Rights Reserved. Enterprise Services, Revision B
C-7
[Partition map excerpt, last sector and mount directory columns: 6302204 <-- /, 8402939 <-- swap, 17682083 <-- overlap/backup, 3590 <-- Private, 17682083 <-- Public, 4201469 <-- /opt, 12600818 <-- /usr, 16798697 <-- /var]
C-8
C-9
Manually Unencapsulating
The manual unencapsulation process for a five-slice disk is similar to the manual procedure described in Manually Unencapsulating a Boot Disk on page 2-60. The one difference in the process of unencapsulating a five-slice boot disk is the recovery of the /opt partition; this procedure is covered as part of the manual unencapsulation procedure. Additionally, the procedures in Unencapsulating When Booted From the CD-ROM on page 2-64 and Performing a Basic or Functional Unencapsulation on page 2-68 are similar for a five-slice boot disk. Execute these procedures as written in the referenced sections. Be sure to recover the /opt partition using the instructions for one of the following commands:
- The vxmksdpart command (see step 6.c on page 2-62)
- The vxedvtoc command (see step 19.b on page 2-67)
C-10
Appendix D
ssd3 at sf1: name w210000203713f565,0, bus address e1
ssd3 is /sbus@3,0/SUNW,socal@0,0/sf@1,0/ssd@w210000203713f565,0
<SUN9.0G cyl 4924 alt 2 hd 27 sec 133>
/sbus@3,0/SUNW,socal@0,0/sf@1,0/ssd@w210000203713f565,0 (ssd3) online
ssd4 at sf1: name w210000203713fc9f,0, bus address ef
ssd4 is /sbus@3,0/SUNW,socal@0,0/sf@1,0/ssd@w210000203713fc9f,0
<SUN9.0G cyl 4924 alt 2 hd 27 sec 133>
/sbus@3,0/SUNW,socal@0,0/sf@1,0/ssd@w210000203713fc9f,0 (ssd4) online
ssd5 at sf1: name w210000203713df01,0, bus address dc
ssd5 is /sbus@3,0/SUNW,socal@0,0/sf@1,0/ssd@w210000203713df01,0
<SUN9.0G cyl 4924 alt 2 hd 27 sec 133>
/sbus@3,0/SUNW,socal@0,0/sf@1,0/ssd@w210000203713df01,0 (ssd5) online
ssd6 at sf1: name w210000203713f96d,0, bus address e8
ssd6 is /sbus@3,0/SUNW,socal@0,0/sf@1,0/ssd@w210000203713f96d,0
<SUN9.0G cyl 4924 alt 2 hd 27 sec 133>
/sbus@3,0/SUNW,socal@0,0/sf@1,0/ssd@w210000203713f96d,0 (ssd6) online
ssd7 at sf1: name w210000203713f7f0,0, bus address cb
ssd7 is /sbus@3,0/SUNW,socal@0,0/sf@1,0/ssd@w210000203713f7f0,0
<SUN9.0G cyl 4924 alt 2 hd 27 sec 133>
/sbus@3,0/SUNW,socal@0,0/sf@1,0/ssd@w210000203713f7f0,0 (ssd7) online
ssd8 at sf1: name w21000020372d5917,0, bus address c9
ssd8 is /sbus@3,0/SUNW,socal@0,0/sf@1,0/ssd@w21000020372d5917,0
<SUN9.0G cyl 4924 alt 2 hd 27 sec 133>
/sbus@3,0/SUNW,socal@0,0/sf@1,0/ssd@w21000020372d5917,0 (ssd8) online
ssd9 at sf1: name w210000203713fc0f,0, bus address c6
ssd9 is /sbus@3,0/SUNW,socal@0,0/sf@1,0/ssd@w210000203713fc0f,0
<SUN9.0G cyl 4924 alt 2 hd 27 sec 133>
/sbus@3,0/SUNW,socal@0,0/sf@1,0/ssd@w210000203713fc0f,0 (ssd9) online
ssd10 at sf1: name w2100002037140269,0, bus address cd
ssd10 is /sbus@3,0/SUNW,socal@0,0/sf@1,0/ssd@w2100002037140269,0
<SUN9.0G cyl 4924 alt 2 hd 27 sec 133>
/sbus@3,0/SUNW,socal@0,0/sf@1,0/ssd@w2100002037140269,0 (ssd10) online
ssd11 at sf1: name w210000203713f579,0, bus address e4
ssd11 is /sbus@3,0/SUNW,socal@0,0/sf@1,0/ssd@w210000203713f579,0
<SUN9.0G cyl 4924 alt 2 hd 27 sec 133>
/sbus@3,0/SUNW,socal@0,0/sf@1,0/ssd@w210000203713f579,0 (ssd11) online
ssd12 at sf1: name w210000203716cf3d,0, bus address e0
ssd12 is /sbus@3,0/SUNW,socal@0,0/sf@1,0/ssd@w210000203716cf3d,0
<SUN9.0G cyl 4924 alt 2 hd 27 sec 133>
/sbus@3,0/SUNW,socal@0,0/sf@1,0/ssd@w210000203716cf3d,0 (ssd12) online
ssd13 at sf1: name w210000203713f49f,0, bus address ca
ssd13 is /sbus@3,0/SUNW,socal@0,0/sf@1,0/ssd@w210000203713f49f,0
<SUN9.0G cyl 4924 alt 2 hd 27 sec 133>
/sbus@3,0/SUNW,socal@0,0/sf@1,0/ssd@w210000203713f49f,0 (ssd13) online
ssd14 at sf0: name w220000203713f582,0, bus address c7
ssd14 is /sbus@3,0/SUNW,socal@0,0/sf@0,0/ssd@w220000203713f582,0
<SUN9.0G cyl 4924 alt 2 hd 27 sec 133>
/sbus@3,0/SUNW,socal@0,0/sf@0,0/ssd@w220000203713f582,0 (ssd14) online
ssd15 at sf0: name w220000203713f643,0, bus address cc
ssd15 is /sbus@3,0/SUNW,socal@0,0/sf@0,0/ssd@w220000203713f643,0
<SUN9.0G cyl 4924 alt 2 hd 27 sec 133>
/sbus@3,0/SUNW,socal@0,0/sf@0,0/ssd@w220000203713f643,0 (ssd15) online
ssd16 at sf0: name w220000203713e0b9,0, bus address e2
ssd16 is /sbus@3,0/SUNW,socal@0,0/sf@0,0/ssd@w220000203713e0b9,0
D-2
<SUN9.0G cyl 4924 alt 2 hd 27 sec 133>
/sbus@3,0/SUNW,socal@0,0/sf@0,0/ssd@w220000203713e0b9,0 (ssd16) online
ssd17 at sf0: name w220000203713f565,0, bus address e1
ssd17 is /sbus@3,0/SUNW,socal@0,0/sf@0,0/ssd@w220000203713f565,0
<SUN9.0G cyl 4924 alt 2 hd 27 sec 133>
/sbus@3,0/SUNW,socal@0,0/sf@0,0/ssd@w220000203713f565,0 (ssd17) online
ssd18 at sf0: name w220000203713fc9f,0, bus address ef
ssd18 is /sbus@3,0/SUNW,socal@0,0/sf@0,0/ssd@w220000203713fc9f,0
<SUN9.0G cyl 4924 alt 2 hd 27 sec 133>
/sbus@3,0/SUNW,socal@0,0/sf@0,0/ssd@w220000203713fc9f,0 (ssd18) online
ssd19 at sf0: name w220000203713df01,0, bus address dc
ssd19 is /sbus@3,0/SUNW,socal@0,0/sf@0,0/ssd@w220000203713df01,0
<SUN9.0G cyl 4924 alt 2 hd 27 sec 133>
/sbus@3,0/SUNW,socal@0,0/sf@0,0/ssd@w220000203713df01,0 (ssd19) online
ssd20 at sf0: name w220000203713f96d,0, bus address e8
ssd20 is /sbus@3,0/SUNW,socal@0,0/sf@0,0/ssd@w220000203713f96d,0
<SUN9.0G cyl 4924 alt 2 hd 27 sec 133>
/sbus@3,0/SUNW,socal@0,0/sf@0,0/ssd@w220000203713f96d,0 (ssd20) online
ssd21 at sf0: name w220000203713f7f0,0, bus address cb
ssd21 is /sbus@3,0/SUNW,socal@0,0/sf@0,0/ssd@w220000203713f7f0,0
<SUN9.0G cyl 4924 alt 2 hd 27 sec 133>
/sbus@3,0/SUNW,socal@0,0/sf@0,0/ssd@w220000203713f7f0,0 (ssd21) online
ssd22 at sf0: name w22000020372d5917,0, bus address c9
ssd22 is /sbus@3,0/SUNW,socal@0,0/sf@0,0/ssd@w22000020372d5917,0
<SUN9.0G cyl 4924 alt 2 hd 27 sec 133>
/sbus@3,0/SUNW,socal@0,0/sf@0,0/ssd@w22000020372d5917,0 (ssd22) online
ssd23 at sf0: name w220000203713fc0f,0, bus address c6
ssd23 is /sbus@3,0/SUNW,socal@0,0/sf@0,0/ssd@w220000203713fc0f,0
<SUN9.0G cyl 4924 alt 2 hd 27 sec 133>
/sbus@3,0/SUNW,socal@0,0/sf@0,0/ssd@w220000203713fc0f,0 (ssd23) online
ssd24 at sf0: name w2200002037140269,0, bus address cd
ssd24 is /sbus@3,0/SUNW,socal@0,0/sf@0,0/ssd@w2200002037140269,0
<SUN9.0G cyl 4924 alt 2 hd 27 sec 133>
/sbus@3,0/SUNW,socal@0,0/sf@0,0/ssd@w2200002037140269,0 (ssd24) online
ssd25 at sf0: name w220000203713f579,0, bus
address e4
ssd25 is /sbus@3,0/SUNW,socal@0,0/sf@0,0/ssd@w220000203713f579,0
<SUN9.0G cyl 4924 alt 2 hd 27 sec 133>
/sbus@3,0/SUNW,socal@0,0/sf@0,0/ssd@w220000203713f579,0 (ssd25) online
ssd26 at sf0: name w220000203716cf3d,0, bus address e0
ssd26 is /sbus@3,0/SUNW,socal@0,0/sf@0,0/ssd@w220000203716cf3d,0
<SUN9.0G cyl 4924 alt 2 hd 27 sec 133>
/sbus@3,0/SUNW,socal@0,0/sf@0,0/ssd@w220000203716cf3d,0 (ssd26) online
ssd27 at sf0: name w220000203713f49f,0, bus address ca
ssd27 is /sbus@3,0/SUNW,socal@0,0/sf@0,0/ssd@w220000203713f49f,0
<SUN9.0G cyl 4924 alt 2 hd 27 sec 133>
/sbus@3,0/SUNW,socal@0,0/sf@0,0/ssd@w220000203713f49f,0 (ssd27) online
/sbus@3,0/SUNW,fas@3,8800000 (fas0): rev 2.2 FEPS chip
fas0 at sbus1: SBus1 slot 0x3 offset 0x8800000 and slot 0x3 offset 0x8810000 SBus level 3 sparc9 ipl 5
fas0 is /sbus@3,0/SUNW,fas@3,8800000
sd6 at fas0: target 6 lun 0
sd6 is /sbus@3,0/SUNW,fas@3,8800000/sd@6,0
root on /pseudo/vxio@0:0 fstype ufs
D-3
WARNING: forceload of drv/SUNW,socal failed
fhc0 at root: UPA 0x0 0xf8800000
fhc0 is /fhc@0,f8800000
ac0 board 0 bank 0: base 0x0 size 256mb rstate 2 ostate 1 condition 1
ac0 is /fhc@0,f8800000/ac@0,1000000
fhc1 at root: UPA 0x2 0xf8800000
fhc1 is /fhc@2,f8800000
ac1 is /fhc@2,f8800000/ac@0,1000000
central0 at root: UPA 0x1f 0x0
...
fhc2 at root: UPA 0x0 0xf8800000
fhc2 is /central@1f,0/fhc@0,f8800000
WARNING: Core Power Supply 3 Failing
sysctrl0 is /central@1f,0/fhc@0,f8800000/clock-board@0,900000
sysctrl0: Key switch is not in the secure position
environ0 is /fhc@0,f8800000/environment@0,400000
environ1 is /fhc@2,f8800000/environment@0,400000
simmstat0 is /fhc@0,f8800000/simm-status@0,600000
sram0 is /fhc@0,f8800000/sram@0,200000
zs0 is /central@1f,0/fhc@0,f8800000/zs@0,902000
zs1 is /central@1f,0/fhc@0,f8800000/zs@0,904000
cpu0: SUNW,UltraSPARC (upaid 0 impl 0x10 ver 0x40 clock 168 MHz)
cpu1: SUNW,UltraSPARC (upaid 1 impl 0x10 ver 0x40 clock 168 MHz)
Starting /etc/rcS.d/S25vxvm-sysboot script
Starting VxVM restore daemon...
VxVM starting in boot mode...
ses16 at sf1: name w5080020000034909,0, bus address d2
ses16 is /sbus@3,0/SUNW,socal@0,0/sf@1,0/ses@w5080020000034909,0
ses17 at sf1: name w508002000003490a,0, bus address b5
ses17 is /sbus@3,0/SUNW,socal@0,0/sf@1,0/ses@w508002000003490a,0
ses18 at sf0: name w508002000003490b,0, bus address d2
ses18 is /sbus@3,0/SUNW,socal@0,0/sf@0,0/ses@w508002000003490b,0
ses19 at sf0: name w508002000003490c,0, bus address b5
ses19 is /sbus@3,0/SUNW,socal@0,0/sf@0,0/ses@w508002000003490c,0
NOTICE: vxvm:vxdmp: enabled path 118/0x20 belonging to the dmpnode 68/0x0
NOTICE: vxvm:vxio: Cannot open disk c1t3d0s2: kernel error 6
NOTICE: vxvm:vxio: Cannot open disk c1t3d0s2: kernel error 6
SUNW,hme0 : Sbus (Rev Id = 22) Found
hme0 at sbus0: SBus0 slot 0x1 offset 0x8c00000 and slot 0x1 offset 0x8c02000 and slot 0x1 offset 0x8c04000 and slot 0x1 offset 0x8c06000 and slot 0x1 offset 0x8c07000 SBus level 4 sparc9 ipl 7
hme0 is /sbus@2,0/SUNW,hme@1,8c00000
SUNW,hme1 : Sbus (Rev Id = 22) Found
hme1 at sbus1: SBus1 slot 0x3 offset 0x8c00000 and slot 0x3 offset 0x8c02000 and slot 0x3 offset 0x8c04000 and slot 0x3 offset 0x8c06000 and slot 0x3 offset 0x8c07000 SBus level 4 sparc9 ipl 7
hme1 is /sbus@3,0/SUNW,hme@3,8c00000
configuring IPv4 interfaces: hme0.
configuring IPv6 interfaces: hme0.
Hostname: lowtide
Executing /etc/rcS.d/S35vxvm-startup1 Script
D-4
VxVM starting special volumes ( swapvol var )...
Executing /etc/rcS.d/standardmounts Script
SUNW,hme0 : Internal Transceiver Selected.
SUNW,hme0 : 100 Mbps Full-Duplex Link Up
Executing /etc/rcS.d/S50devfsadm Script
Executing /etc/rcS.d/S85vxvm-startup2 Script
pseudo-device: devinfo0
devinfo0 is /pseudo/devinfo@0
VxVM general startup...
NOTICE: vxvm:vxio: Cannot open disk c1t3d0s2: kernel error 6
dump on /dev/dsk/c1t0d0s1 size 513 MB
NOTICE: vxvm:vxio: Cannot open disk c1t3d0s2: kernel error 6
Executing /etc/rcS.d/S86vxvm-reconfig Script
The system is coming up. Please wait.
Executing /etc/rc2.d/S20sysetup Script
Starting IPv6 neighbor discovery.
Setting default IPv6 interface for multicast: add net ff00::/8: gateway fe80::a00:20ff:fe7d:5d60
starting rpc services: rpcbind done.
Setting netmask of hme0 to 255.255.255.0
Setting default IPv4 interface for multicast: add net 224.0/4: gateway lowtide
syslog service starting.
Print services started.
Jun 24 14:39:05 lowtide pseudo: pseudo-device: tod0
Jun 24 14:39:05 lowtide genunix: tod0 is /pseudo/tod@0
Jun 24 14:39:05 lowtide pseudo: pseudo-device: pm0
Jun 24 14:39:05 lowtide genunix: pm0 is /pseudo/pm@0
volume management starting.
Executing /etc/rc2.d/S94vxnm-host_infod Script
Executing /etc/rc2.d/S94vxnm-vxnetd Script
Jun 24 14:39:07 lowtide pseudo: pseudo-device: vol0
Jun 24 14:39:07 lowtide genunix: vol0 is /pseudo/vol@0
Executing /etc/rc2.d/S95vxvm-recover Script
Executing /etc/rc2.d/S96vmsa-server Script
D-5
Starting RMI Registry
Starting VERITAS VM Storage Administrator Command Server
Starting VERITAS VM Storage Administrator Server
The system is ready.
lowtide console login:
D-6
Appendix E
Relevance
Discussion

Modules in this course discussed the basic elements and architecture of the VxVM software. This appendix provides detailed information on how to build and assess VxVM software configurations in an environment and how to determine the effectiveness of those configurations. The following questions are relevant to the material presented in this appendix:
- What are important factors to look at when formulating a VxVM software solution?
- Can you identify the standards for RAID installations?
E-2
Additional Resources
The following references provide additional details on the topics discussed in this appendix:
- VERITAS Volume Manager 3.2 Administrator's Guide. Mountain View, California: VERITAS Software Corporation, August 2001, number 30-000392-011, TechPDF ID 240253.
- VERITAS Volume Manager 3.2 SRT Instructor Guide. Mountain View, California: VERITAS Software Corporation.
- VERITAS Volume Manager 3.2 Release Notes. Mountain View, California: VERITAS Software Corporation, August 2001, number 30-0003962-011, TechPDF ID 240252.
- SunSolve Online INFODOC 40143, [http://sunsolve.Sun.COM/pub-cgi/search.pl?mode=advanced].
- Man pages for the vxassist and vxdg commands.
E-3
- RAID 0: Uses striping
- RAID 1: Uses mirroring
- RAID 0 + RAID 1: Uses striping and mirroring
- RAID 1 + RAID 0: Uses mirroring and striping; also referred to as RAID 10
- RAID 5: Uses parity striping
Layered Volumes
Prior to version 3.2, the VxVM software applied default rules to assign disks using the vxassist command. The default rules essentially created the RAID stripe first and then the mirror. VxVM software version 3.0 introduced layered volumes, allowing for RAID 1 + RAID 0 (mirroring and striping) configurations. This feature is fully implemented in VxVM software version 3.2 through the vxassist command's -o ordered option.

VxVM software version 3.2 provides configuration control, allowing the selection of either the default configuration (RAID 0+1) or a layered volume configuration (RAID 1+0), also referred to as a stripe-pro or concat-pro configuration.
E-4
Plex States
Plex state information reflects consistent or inconsistent configurations and the state of those configurations. Valid plex states are described in detail in Module 1, "Introducing the VERITAS Volume Manager Software Architecture," in the section "Plex State Descriptions" on page 1-34.
Volume States
Volume states consist of the following:

- Clean: The volume is not started; the kernel state is disabled, but plexes are synchronized.
- Active: The volume is started; the kernel state is enabled.
- Empty: The volume is not initialized; the kernel state is disabled.
- Sync: The volume is in recovery mode; the kernel state is enabled. This state also indicates a volume recovered after boot; the kernel state is disabled, and plexes need to be resynchronized.
- Needsync: The volume requires resynchronization.
- Replay: The volume is in a transient state as part of log replay (only valid for RAID 5).
E-5
Disk States
The following example shows output of the vxdisk list command:
DEVICE       TYPE     DISK       GROUP    STATUS
c0t0d0s2     sliced   disk06     -        online
c0t2d0s2     sliced   -          -        error
c1t0d0s2     sliced   rootdisk   rootdg   online
c2t0d0s2     sliced   -          -        error
c3t0d1s2     sliced   -          -        online
c3t0d4s2     sliced   rootmir    rootdg   online
c3t1d1s2     sliced   newdg01    newdg    online spare
c3t2d0s2     sliced   newdg02    newdg    online spare
c3t2d1s2     sliced   disk01     rootdg   online
c3t2d2s2     sliced   newdg04    newdg    online reserved
c3t3d0s2     sliced   newdg05    newdg    online nohotuse
c3t3d2s2     sliced   newdg07    newdg    online invalid
c3t3d3s2     sliced   -          -        error
c3t4d0s2     sliced   newdg09    newdg    online failing
c3t4d3s2     sliced   newdg12    newdg    online altused
c3t5d2s2     sliced   newdg15    newdg    online
-            -        newdg03    newdg    removed was:c3t2d1s2
-            -        newdg08    newdg    removed was:c3t3d3s2
-            -        newdg11    newdg    failed was:c3t4d2s2
- Invalid: A private area exists, but the information in it is not a valid configuration.
- Altused: The alternate configuration copy in the private area is in use.
- Failed was: Indicates full disk failure.
- Removed was: The vxdiskadm option 4 command was executed.
- Error: No private area exists on this disk.
- Reserved: This disk is not used by the vxassist command to make new volumes or for relocation.
- Failing (VxVM software versions 2.x and 3.x): This disk incurred a hardware error in the past.
- Nohotuse (VxVM software version 3.x): This disk is not used for relocated data.
- Dgname (VxVM software version 3.x): The disk group name, shown only when using the -o alldgs option.
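When scanning this output by hand, it is easy to miss a failing or removed disk. The following awk sketch filters on the STATUS column; the sample variable below mimics a few rows of the example output, and on a live system you would pipe vxdisk list into the same awk program instead.

```shell
# Print every disk whose status is not exactly "online" (error, failing,
# removed, failed, and so on). The sample data mimics the example output.
vxdisk_output='DEVICE       TYPE    DISK      GROUP   STATUS
c0t2d0s2     sliced  -         -       error
c1t0d0s2     sliced  rootdisk  rootdg  online
c3t4d0s2     sliced  newdg09   newdg   online failing
-            -       newdg11   newdg   failed was:c3t4d2s2'

# Skip the header (NR > 1); print rows that are not a clean 5-field
# "online" entry, which catches both bad states and online-with-flags.
echo "$vxdisk_output" | awk 'NR > 1 && !($5 == "online" && NF == 5)'
```

Against the sample data, this prints the error, failing, and failed rows while suppressing the healthy rootdisk entry.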
E-6
dg rootdg        0            1023554924.1025.lowtide

dm rootdisk      c1t0d0       sliced     3590      17678493  -
dm rootmirror    c1t1d0       sliced     3590      17674902  -

v  rootvol       root         ENABLED    ACTIVE    4197879   ROUND   -
pl rootvol-01    rootvol      ENABLED    ACTIVE    4197879   CONCAT  -        RW
sd rootdisk-B0   rootvol-01   rootdisk   17678492  1         0       c1t0d0   ENA
sd rootdisk-02   rootvol-01   rootdisk   0         4197878   1       c1t0d0   ENA
pl rootvol-02    rootvol      ENABLED    ACTIVE    4197879   CONCAT  -        RW
sd rootmirror-01 rootvol-02   rootmirror 0         4197879   0       c1t1d0   ENA
E-7
Field Descriptions
This section contains an explanation of the fields in the vxprint output. For additional information about the vxprint command, see the vxprint man page.
For disk groups, the output consists of the following fields:

- Disk group name
- Disk ID

For disk media records, the output consists of the following fields:

- Record name
- Underlying disk access record
- Disk access record type (sliced, simple, or nopriv)
- Length of the disk's private region
- Length of the disk's public region

For subdisks, the output consists of the following fields:

- Record name
- Associated plex, or dash (-) if the subdisk is dissociated
- Name of the disk media record used by the subdisk
- Device offset in sectors
- Subdisk length in sectors
- Plex association offset. Optionally, this value is preceded by the subdisk column number for subdisks associated with striped plexes, LOG for log subdisks, or the putil[0] field if the subdisk is dissociated. The putil[0] field can be non-empty to reserve the subdisk space for non-volume uses. If the putil[0] field is empty, it is a dissociated subdisk.
- Subdisk state string:
  - ENA: The subdisk is usable.
  - DIS: The subdisk is disabled.
  - RCOV: The subdisk is part of a RAID-5 plex and has stale content.
  - DET: The subdisk is detached.
  - KDET: The subdisk is detached in the kernel due to an error.
E-8
  - RMOV: The media record on which the subdisk is defined was removed from its disk access record by a utility.
  - RLOC: The subdisk has failed and is waiting to be relocated.
  - NDEV: The media record on which the subdisk is defined has no associated access record.
For subvolumes, the output consists of the following fields:

- Record name
- Associated plex, or dash (-) if the subvolume is dissociated
- Name of the underlying (layered) volume record used by the subvolume
- Number of layers used in the subvolume
- Subvolume length in sectors
- Plex association offset. This value is optionally preceded by the subvolume column number for subvolumes associated with striped plexes.
- Number of active plexes, followed by the number of plexes in the underlying (layered) volume
- Subvolume state string:
  - ENA: The subvolume is usable.
  - DIS: The subvolume is disabled.
  - KDET: The subvolume was detached in the kernel due to an error.
For plexes, the output consists of the following fields:

- Record name
- Associated volume, or dash (-) if the plex is dissociated
- Plex kernel state
- Plex utility state. If an exception condition is recognized on the plex (such as an I/O failure, a removed or inaccessible disk, or an unrecovered stale data condition), that condition is listed instead of the value of the plex record's state field.
- Plex length in sectors
- Plex layout type
E-9
- Number of columns and plex stripe width, or dash (-) if the plex is not striped
- Plex I/O mode
For volumes, the output consists of the following fields, from left to right:

- Record name
- Associated usage type
- Volume kernel state
- Volume utility state
- Volume length in sectors
- Volume read policy
- The preferred plex, if the read policy uses a preferred plex

For data change objects (DCOs), the output consists of the following fields:

- Record name
- Associated volume, or dash (-) if the DCO is dissociated
- Name of the DCO log volume, or dash (-) if no DCO log volume is associated with the DCO object

For snap objects, the output consists of the following fields:

- Record name
- Name of the volume which this snap record describes
- Name of the DCO with which this snap record is associated

For replicated volume groups (RVGs), the output consists of the following fields:

- Record name
- Associated remote link (RLINK) object count
- RVG kernel state (derived from various flags)
- RVG utility state
- RVG primary flag (primary or secondary)
E-10
- Associated data volume count
- The SRL volume

For RLINK records, the output consists of the following fields:

- Record name
- Associated RVG, or dash (-) if the RLINK is dissociated
- RLINK kernel state (derived from various flags)
- RLINK utility state
- The remote host
- The remote disk group
- The remote RLINK
E-11
Figure E-1  RAID 0+1 (mirrored stripe) and RAID 1+0 (striped mirror) layouts, showing the volume, mirror, plex, stripe, and subdisk levels
The layout in Figure E-1 is referred to as layered volumes, or as a VERITAS stripe-pro or VERITAS concat-pro layout. This layout enhances redundancy and reduces recovery time in case of an error. In a mirror-stripe layout, if a disk fails, the entire plex is detached, and redundancy is lost on the entire volume. When the disk is replaced, the entire plex must be brought up to date. Recovering the entire plex can take a substantial amount of time.
E-12
Complex RAID Levels

If a disk fails in a stripe-mirror layout, only the failing subdisk must be detached, and only that portion of the volume loses redundancy. When the disk is replaced, only a portion of the volume needs to be recovered. Compared to mirroring plus striping, striping plus mirroring offers a volume that is more tolerant of disk failure and quicker to recover. The following example demonstrates RAID 1 + 0 using the -o ordered option:
# vxassist -g newdg3dg -o ordered make vol01 1g layout=stripe-mirror ncol=2 \
disk04 disk02 disk01 disk03
# vxprint -htr
. . .
The previous example shows stripes of RAID 0 combined with two subvolumes of a RAID 1 mirror, as shown in Figure E-2.

Figure E-2  Volume vol01 containing plex vol01-03, which stripes across subvolumes vol01-S01 and vol01-S02
E-13
Matching the diagram in Figure E-2 on page E-13 against the output of the example vxprint command shows the following objects:

- Volume (v) vol01
- Plex (pl) vol01-03
- Subvolume vol01-S01
- Subvolume vol01-S02
These objects provide the stripe. Notice that even though mirroring is implemented, it is not visible to the single upper-layer plex. In essence, this configuration is a single stripe that is mirrored. There are two subvolumes with a total of four subdisks in the plex. From the perspective of the plex, the capacity is equal to two subdisks. The other subdisks provide the data redundancy. Underneath the plex, logical subvolumes provide the mirroring, as shown in Figure E-3.

Figure E-3  Subvolume vol01-S01 containing layered volume vol01-L01, whose plexes vol01-P01 and vol01-P02 mirror disk04 and disk01
The subvolumes each contain two subplexes which are concatenated as part of a mirror. Each subplex can be detached or resynchronized individually, allowing for faster recovery times.
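The capacity relationship described above can be expressed as simple arithmetic. This is an illustrative sketch with made-up sizes, not VxVM output: a stripe-mirror with ncol columns and nmirror mirrors consumes ncol x nmirror subdisks of raw space but presents only ncol subdisks of usable space.

```shell
# Usable versus consumed capacity of a RAID 1+0 (stripe-mirror) volume.
subdisk_size=1048576    # sectors per subdisk (illustrative value)
ncol=2                  # columns in the stripe
nmirror=2               # mirror copies of each column

usable=$((ncol * subdisk_size))              # what the plex presents
consumed=$((ncol * nmirror * subdisk_size))  # raw space used on disks
echo "usable=$usable consumed=$consumed"     # usable is half of consumed
```

With two columns and two mirrors, four subdisks are consumed for two subdisks' worth of addressable space, matching the example volume above.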
E-14
The following examples show a RAID 1 + 0 configuration without using the -o ordered option:
# vxassist -g testdg make vol01 1g layout=stripe-mirror ncol=2
# vxassist -g testdg make vol01 1g layout=stripe-mirror ncol=2 \
disk04 disk02 disk01 disk03
E-15
This command allocates space, as shown in Figure E-4.

Figure E-4  Mirrored stripe volume with two striped plexes, built from column 1 (disk01-01, disk02-01), column 2 (disk03-01, disk04-01), and column 3 (disk05-01, disk06-01)
Storage is also allocated based on controllers and enclosures with ordered allocation. For example, the following command creates a three-column, mirrored-stripe volume across controllers:
# vxassist -o ordered make mirvol 80g layout=mirror-stripe ncol=3 \ ctlr:c1 ctlr:c2 ctlr:c3 ctlr:c4 ctlr:c5 ctlr:c6
This command allocates space for column 1 from disks on controller c1, for column 2 from disks on controller c2, and so on.
E-16
- The attribute mirror=ctlr specifies placement of disks in one mirror on a different controller than disks in other mirrors within the same volume.
- The attribute mirror=target specifies mirroring of volumes between identical target IDs on different controllers.
- The attribute mirror=enclr specifies placement of disks in one mirror in a different enclosure than disks in other mirrors within the same volume.
The following command creates a mirrored volume with two plexes on different controllers:
# vxassist make volmnt 10g layout=mirror nmirror=2 mirror=ctlr ctlr:c1 ctlr:c2
In this example, the disks in one plex are all attached to controller c1, and the disks in the other plex are all attached to controller c2. If a controller fails, only one side of the mirror is lost.
E-17
Start simple. Start with a simple disk layout, and then add complexity. Learn the layout that is needed, identify the tool set requirements, and then begin further tuning.
More spindles are better. The ability to get top performance from a system depends on avoiding contention and evenly spreading throughput.
Bigger is not always better. Do not confuse capacity with performance. Often one must be compromised to achieve the other. Doubling the capacity of storage does not double its performance. Capacity is measured in gigabytes, while performance is measured in I/O operations per second.
Look at the big picture. Learn the system environment before focusing too much on where a problem is located. Look at the whole environment, and apply efforts across the entire environment.
Trade-offs must be made. There is no perfect solution, and there is always more than one solution. Designing a layout usually involves a mutually exclusive choice between goals that cannot all be met at the same time.
Performance is never the only consideration. Availability, reliability, and manageability are very high on the list of concerns in system layout, as are price of ownership (not to be confused with purchase price alone) and scalability (that is, room to grow).
E-18
RAID 1 Mirroring

Production file systems are mirrored; staging and development file systems are not. Proper backup procedures must be implemented. When possible, disks are mirrored across controllers. Experience shows that when this is done correctly, either a disk or a controller can fail and the system remains fully operational. Mirror across arrays, and label the arrays A and B. Doing this allows a system to lose an entire array and still stay online. A typical approach is to use the VxVM software to rename the disks to include the array identifier. For example, rename disk22 to diskA22 and the mirror to diskB22.
RAID 0 Striping

Striping is highly effective when used correctly to solve a known and understood problem. Experience shows that when it is used incorrectly, however, it compounds problems and makes problem resolution difficult. Where striping is used, implement it in a standardized manner across all platforms. This reduces administration overhead.
RAID 5

Parity disks are used at the hardware level only or on read-only oriented file systems.
Small file systems (under 2 gigabytes) are generally mapped to a single disk. Medium file systems (between 2 and 8 gigabytes) are usually striped over four disks. Large file systems (8 gigabytes and above) are usually striped over four to eight disks.
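These size guidelines can be captured in a small helper function. This is our own illustrative sketch; the function name is hypothetical, and the thresholds simply restate the guidelines above.

```shell
# Suggest a disk count for a file system, following the guidelines:
# under 2 GB -> 1 disk; 2 to under 8 GB -> 4 disks; 8 GB and above -> 4-8.
suggest_disks() {
    size_gb=$1
    if [ "$size_gb" -lt 2 ]; then
        echo "1"
    elif [ "$size_gb" -lt 8 ]; then
        echo "4"
    else
        echo "4-8"
    fi
}

suggest_disks 1     # small file system  -> 1
suggest_disks 6     # medium file system -> 4
suggest_disks 20    # large file system  -> 4-8
```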
E-19
Striping Considerations
The question of how to perform striping is a subject of hot debate with no single correct answer. This is partly because striping can either help or hurt performance, depending on the workload, the number of stripes per disk, concurrent access, and how well the application uses I/O.
Striping Characteristics
The objective of striping is to reduce head contention. Potentially, especially with high-capacity disks, striping can increase head contention by increasing concurrent access, resulting in inconsistent allocation of the stripe or poor data layouts (multiple heavy I/O stripes on the same disks). After striping is set up, it can be more difficult to reconfigure than an unstriped environment, which adds to the time the system administrator needs to set up and maintain the environment. Another consideration is the human overhead associated with configuring and managing a striped I/O subsystem. This impact is far higher on a busy system than on an idle one. Operators and administrators must be trained on the striping setup and the impact each disk has on system performance.
The primary goal of striping is to identify the large file systems that require it. Large file systems with high-intensity sequential reads or writes require striping. Place file systems that do not require striping in a single-disk volume.
When striping, use increments of the disks available on the system; usually, that is a four-column or six-column stripe. Using symmetrical striping avoids creating roving hot spots. A good starting point is a four-way stripe set with a 64-kilobyte stripe unit. The stripe should become a standard implementation. RAID 1 + 0 stripes can potentially be larger because recovery times are not as dramatically affected.
E-20
Avoid a stripe width of greater than eight columns. The risk of contention goes up, and the performance gains level off, in systems with greater than eight-column stripes. The stripe size should always be a multiple of the database block size. It is grossly inefficient to split an I/O over multiple I/O requests. When performing I/Os that require multiple blocks, it is best to ensure that each I/O can be satisfied from one disk. Examples of this type of I/O request are table scans, backups, and disaster recovery restore operations. Choose a stripe size that is a multiple of the file system block size (for example, in Oracle this would be a multiple of the db_file_multiblock_read_count init.ora parameter). Random-access, non-table-scanning applications tolerate a smaller block size. Do not set the stripe size so small that backups and restores are inefficient. Table-scanning applications benefit from large stripe sizes. Good stripe sizes vary from 128 kilobytes for online transaction processing (OLTP) applications to several megabytes for data warehouse applications.
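The multiple-of-block-size rule is easy to check mechanically. A sketch follows; the 128-kilobyte and 8-kilobyte figures are examples only, not recommendations for any particular application.

```shell
# A stripe unit that is a whole multiple of the application block size
# lets a multi-block I/O be satisfied without splitting across columns.
stripe_kb=128   # candidate stripe unit, in kilobytes
block_kb=8      # application or file system block size, in kilobytes

if [ $((stripe_kb % block_kb)) -eq 0 ]; then
    echo "OK: ${stripe_kb}KB stripe unit is a multiple of ${block_kb}KB"
else
    echo "BAD: ${block_kb}KB I/Os will split across stripe columns"
fi
```

For the example values, 128 divides evenly by 8, so the check reports OK; a 96-kilobyte stripe against a 64-kilobyte block size, by contrast, would fail the test.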
E-21
Online Re-Layout
Online re-layout allows reconfiguration of RAID layouts on currently configured volumes. A volume can be re-laid out, or converted, to another layout. Online re-layout is supported for the following operations:

- RAID 5 to mirror
- Mirror to RAID 5
- Mirror to concat
- Concat to RAID 5
- RAID 5 to concat
- Adding or removing parity
- Adding or removing columns
- Changing stripe width
Only one plex is used if you are re-laying out to RAID 5. See the vxassist man pages for additional information.
E-22
2. Determine the current size of the file system.

   a. For UFS, type the following:
# fstyp -v /dev/vx/dsk/test01/vol01 | head -10 | grep ncg
ncg 5 size 20480 blocks 19183
The number after size is the size of the file system, in kilobytes. To translate to sectors, multiply by 2.

   b. For VxFS, type the following:

# fstyp -v /dev/vx/dsk/test01/vol01 | grep nau
bsize 1024 size 20480 dsize 91392 ninode 0 nau 3
E-23
Manipulating Disk Layouts

Multiply the number after bsize by the number after size. That determines the size of the file system, in bytes. To translate to sectors, divide that number by 512.

1024 (bsize) x 20480 (size) / 512 = 40960 sectors

This file system size is 40960 sectors, which matches the volume size.

3. Determine the largest size to which an existing volume can be grown. Type the following:
# vxassist -g rootdg maxgrow vol01 disk01 disk02 disk03
Volume vol01 can be extended by 12244992 to 12285952 (5999Mb)
If you do not specify the disks, the vxassist command uses the disks in the disk group.

4. Grow the volume to the required size. Type the following:

# vxassist -g rootdg growto vol01 10639360 disk01 disk02 disk03

This command only grows the volume. Remember to grow the file system as well, if needed.

Note: The vxassist command also has a growby option. See the vxassist man pages for more information.
2. The mkfs command has an option to grow a file system instead of making one. To use this option, the file system must be mounted. You must also use the full path to the mkfs program; do not use /usr/sbin/mkfs.

Note: The above process is described in detail in SunSolve INFODOC 14881.
E-24
To grow a file system, execute the following command for VxFS:
# /usr/lib/fs/vxfs/fsadm -b 10639360 -r /dev/vx/rdsk/rootdg/vol01 /mnt
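The sector arithmetic used in this procedure (UFS reports size in kilobytes, so multiply by 2; VxFS reports bsize and size, so multiply and divide by 512) can be scripted as a quick check. The numbers below are the ones from the example fstyp output:

```shell
# File system size in 512-byte sectors, from the example fstyp -v fields.
ufs_size_kb=20480               # UFS "size" field, in kilobytes
vxfs_bsize=1024                 # VxFS block size, in bytes
vxfs_size=20480                 # VxFS "size" field, in blocks

ufs_sectors=$((ufs_size_kb * 2))
vxfs_sectors=$((vxfs_bsize * vxfs_size / 512))
echo "UFS: ${ufs_sectors} sectors, VxFS: ${vxfs_sectors} sectors"  # both 40960
```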
v  vol04        fsgen
pl vol04-01     vol04                          RW
sd newdg01-01   vol04-01
sd newdg02-01   vol04-01
sd newdg03-01   vol04-01
sd newdg04-01   vol04-01
sd newdg05-01   vol04-01
sd newdg06-01   vol04-01
sd newdg07-01   vol04-01
sd newdg08-01   vol04-01
E-25
E-26
Caution: Using the same target name as an existing deported disk group destroys that group.

To join disk groups, use the following syntax:
vxdg [-o expand] join sourcedg targetdg [object ...]
To move objects from one imported disk group to another, use the following syntax:
vxdg [-o expand] move sourcedg targetdg [object ...]
For each of these vxdg commands, the option -o expand includes all disks from volumes sharing subdisks. For a complete list of options for the vxdg command, see the man pages.
E-27
The following example commands each split a disk group and then restart the volumes in the new disk group:

# vxdg split old-dg new-dg disk01 disk02 disk03
# vxrecover -g new-dg -m vol01
# vxvol -g new-dg start vol01
The following commands join a disk group:

# vxdg join new-dg old-dg
# vxrecover -g old-dg -m vol01
This command displays potential objects to move, including shared subdisks:

# vxdg -o expand listmove old-dg new-dg

This command moves objects from one disk group to another:

# vxdg -o expand move old-dg new-dg vol04
# vxprint -h new-dg
Disk group: new-dg

TY  NAME    ASSOC   KSTATE  LENGTH  PLOFFS  STATE  TUTIL0  PUTIL0
dg  new-dg  new-dg  -       -       -       -      MOVE    -
E-28
Changing Disk Group Configurations

The VxVM software uses the TUTIL0 and PUTIL0 fields to lock the affected objects during transition.

2. Enter the following command to complete the move:

# vxdg recover new-dg
vxvm: vxdg: ERROR: diskgroup: Disk group does not exist

3. If the recovery fails, check to see if the disk group was imported onto another host. If it was imported, deport it from that host, and import it onto the current host.

4. If all the required objects already exist in either the source or target disk group, use the following command to reset the MOVE flags in that disk group:

# vxdg -o clean recover new-dg

5. Use the following command on the other disk group to remove the objects that have TUTIL0 fields marked as MOVE:

# vxdg -o remove recover old-dg

6. If only one disk group is available to be imported, use the following command to reset the MOVE flags on this disk group:

# vxdg -o clean recover old-dg
The PUTIL fields are permanent and stay set even after a reboot. The TUTIL fields are temporary and do not survive a reboot.
To list the PUTIL and TUTIL fields, use the following command:
# vxprint -h vol01
TY NAME       ASSOC    KSTATE  LENGTH  PLOFFS STATE    TUTIL0 PUTIL0
v  vol01      fsgen    ENABLED 2050272 -      ACTIVE   -      -
pl vol04-01   vol01    ENABLED 2050272 -      ACTIVE   ATT1   -
sd newdg02-02 vol04-01 ENABLED 2050272 0      TEMPRMSD ATT    -
pl vol01-01   vol01    ENABLED 2050272 -      ACTIVE   -      -
sd newdg11-01 vol01-01 ENABLED 2050272 0      -        -      -
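A quick way to spot objects left locked by an interrupted move is to scan vxprint output for MOVE in the TUTIL0 column. The following sketch uses hard-coded sample output with the column layout assumed above; on a live system the text would come from vxprint -g diskgroup:

```shell
# Sketch: list objects whose TUTIL0 field is still set to MOVE after an
# interrupted disk group move. The records below are illustrative sample
# data; on a live host, capture them with: vxprint -g <diskgroup>
vxprint_output='TY NAME   ASSOC  KSTATE  LENGTH  PLOFFS STATE  TUTIL0 PUTIL0
dg new-dg new-dg -       -       -      -      MOVE   -
v  vol04  fsgen  ENABLED 2050272 -      ACTIVE -      -'

# TUTIL0 is the eighth whitespace-separated column in this layout.
locked=$(printf '%s\n' "$vxprint_output" |
         awk 'NR > 1 && $8 == "MOVE" { print $1, $2 }')
echo "$locked"
```

Any objects the script prints are candidates for the vxdg recover procedures described above.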
Reconfiguration Considerations
The disk group move, split, and join features have the following limitations:
• Disk groups involved in a move, split, or join must be version 90 or greater.
• The reconfiguration must involve physical disks.
• Objects to be moved must not contain open volumes.
• Moved volumes are initially disabled following a disk group move, split, or join. Use either vxrecover -m or vxvol startall to restart the volumes.
• Data change objects (DCOs) and snap objects that are dissociated by persistent fast resynchronization cannot be moved between disk groups.
• Sun StorEdge Volume Replicator (VR) objects cannot be moved between disk groups.
For a disk group reconfiguration to succeed, the source and target disk groups must be able to store copies of the configuration database after the move. The configuration database in the target disk group must also be able to store all the object information for the enlarged disk group.
Splitting or moving a volume into a different disk group changes the volume record ID.
Hot-Relocation
Hot-relocation allows a system to relocate data automatically in a redundant configuration when a subdisk fails. There are, however, a number of restrictions on hot-relocation use:
• Subdisks must be in a redundant configuration (mirror or RAID 5).
• Space must be available to contain the recovered data.
• Hot-relocation fails if:
  • Space is only available in the same plex of the mirror as the failed subdisk.
  • Space is only available on a plex that contains a RAID 5 log.
  • The failed subdisk is in the same plex as a DRL log.
This section discusses the hot-relocation process and how to perform it.
Hot-Relocation Process
To execute relocation, the hot-relocation daemon, vxrelocd, handles four distinct operations, in the following order:
1. Failure detection
2. Notification
3. Relocation
4. Recovery
The vxrelocd daemon monitors events that signify a subdisk or plex failure. When a failure occurs, data space is selected from hot-relocation-reserved disks or from available free space in the disk group, and a resynchronization begins. The failed disk is marked as failing. The daemon notifies designated users and then reconstructs the objects in the new location; the new subdisks retain the existing names. The last operation is to recover the data. Figure E-5 shows the interaction of the various processes during these procedures.
Figure E-5  Hot-Relocation Process: vxnotify detects the failure; vxrelocd determines the correct action, accesses the disks, notifies users through mailx, and determines available space; vxconfigd updates the configuration; vxassist builds the new objects; and vxrecover recovers the data.
Failure Detection
The vxrelocd daemon is a script running in the background, interacting with vxnotify, to monitor three types of failure:
• Disk failures, using the failing flag in DM records
• Plex failures due to uncorrectable I/O errors
• Subdisk failures in a RAID 5 volume
After a failure is detected, the vxrelocd daemon takes object-specific actions, which can include a kernel-initiated change that alters the configuration. If a configuration change is required, the vxconfigd daemon is notified with the update request. The vxconfigd daemon, in turn, writes the configuration change out to the database copies.
Notification
The vxrelocd daemon notifies users (by default, the root user) using the mailx command, providing information about the failure and the status of relocation and recovery. The file /etc/rc2.d/S95vxvm-recover contains the list of users to notify. To add users to the notification list, modify the /etc/rc2.d/S95vxvm-recover file as follows:
vxrelocd root username1 username2 &
Changes to this file take effect at the next reboot or when the vxrelocd command is executed from the command line. To execute this command from the command line, kill the running vxrelocd daemon first, but be careful not to kill the daemon in the middle of a relocation process. Error notification takes the following form:
Date: Wed, 6 Feb 2002 12:04:27 GMT
From: root
Subject: Volume Manager failures on host fred
To: root

Volume vol02 Subdisk disk04-02 relocated to disk06-03,
but not yet recovered.
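The edit to the startup script can also be made non-interactively. The following sketch works on a copy of the file; the path suffix and the usernames admin1 and admin2 are illustrative, and the edit should never be applied to the live script while a relocation is in progress:

```shell
# Sketch: add users to the vxrelocd notification line in a COPY of the
# startup script. The copy path and usernames are illustrative.
script=/tmp/S95vxvm-recover.copy
cat > "$script" <<'EOF'
vxrelocd root &
EOF

# Insert the extra usernames after "root" on the vxrelocd line.
sed 's/^vxrelocd root/vxrelocd root admin1 admin2/' "$script" > "$script.new" &&
    mv "$script.new" "$script"
cat "$script"
```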
Relocation
The vxrelocd daemon determines available space by looking for the spare flag in the DM record. To display the spare flag, use either the vxdisk or the vxprint command, as shown in the following example:
# vxdisk list
DEVICE    TYPE   DISK     GROUP  STATUS
c0t0d0s2  sliced rootdisk rootdg online
c1t12d0s2 sliced altboot  rootdg online
c1t13d0s2 sliced newdg01  newdg  online
c1t14d0s2 sliced newdg02  newdg  online
c1t15d0s2 sliced newdg03  newdg  online spare
If the spare flag is not set, then the vxrelocd daemon uses available space in the disk group to build the VxVM objects. To exclude a disk from use in hot-relocation, set the nohotuse flag as follows:
# /etc/vx/bin/vxedit -g disk_group set nohotuse=on disk_name
If a disk fails, remove it from space allocation as follows:
# vxedit set failing=on fail_disk
To rejoin space allocation:
# vxedit set failing=off fail_disk
After space is determined, the vxrelocd daemon uses the vxassist command with the move option to create the new objects, which retain the existing names.
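Spare disks can be identified by scanning the vxdisk list output for the spare keyword in the STATUS column. The following sketch hard-codes sample output in the layout shown above; on a live system the text would come from vxdisk list:

```shell
# Sketch: list the disk media names of hot-relocation spares by parsing
# `vxdisk list` output. The records below are illustrative sample data;
# on a live host, capture them with: vxdisk list
vxdisk_output='DEVICE    TYPE   DISK     GROUP  STATUS
c0t0d0s2  sliced rootdisk rootdg online
c1t15d0s2 sliced newdg03  newdg  online spare'

# A spare disk carries "spare" as the last word of its STATUS field;
# the disk media name is the third column.
spares=$(printf '%s\n' "$vxdisk_output" | awk '$NF == "spare" { print $3 }')
echo "$spares"
```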
Recovery
Recovery is the last step in the hot-relocation process. After the new objects are moved, the vxrelocd daemon calls the vxrecover command to recover the data. Two fields are added to the subdisk record to identify the original location of the object: the orig_dmname= and orig_dmoffset= fields. These fields assist in manually reconstructing the original object.
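The orig_dmname= and orig_dmoffset= fields can be pulled out of a subdisk record with a simple filter. The record below is illustrative sample data in the style of vxprint -m output; the subdisk and disk names are assumptions, not real output:

```shell
# Sketch: extract the original location of a relocated subdisk from the
# orig_dmname= and orig_dmoffset= fields. The record is illustrative
# sample data; on a live host it would come from `vxprint -m`.
sd_record='sd disk06-03
orig_dmname=disk04
orig_dmoffset=2050272'

orig_disk=$(printf '%s\n' "$sd_record" | sed -n 's/^orig_dmname=//p')
orig_offset=$(printf '%s\n' "$sd_record" | sed -n 's/^orig_dmoffset=//p')
echo "original location: $orig_disk at offset $orig_offset"
```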
Hot-Relocation Configuration
The vxdiskadm utility has options that support hot-relocation, available as of VxVM software version 3.1. A sample of the vxdiskadm menu display follows:
Volume Manager Support Operations
Menu: VolumeManager/Disk

 1    Add or initialize one or more disks
 2    Encapsulate one or more disks
 3    Remove a disk
 4    Remove a disk for replacement
 5    Replace a failed or removed disk
 6    Mirror volumes on a disk
 7    Move volumes from a disk
 8    Enable access to (import) a disk group
 9    Remove access to (deport) a disk group
 10   Enable (online) a disk device
 11   Disable (offline) a disk device
 12   Mark a disk as a spare for a disk group
 13   Turn off the spare flag on a disk
 14   Unrelocate subdisks back to a disk
 15   Exclude a disk from hot-relocation use
 16   Make a disk available for hot-relocation use
 17   Prevent multipathing/Suppress devices from VxVM's view
 18   Allow multipathing/Unsuppress devices from VxVM's view
 19   List currently suppressed/non-multipathed devices
 list List disk information
 q    Exit from menus
Alternatively, use the vxedit command to set the spare flag from the command line:
# vxedit set spare=on disk_name
Unrelocating
If you try to manually move a relocated subdisk using the vxsd command, the following message is displayed:
vxvm:vxsd: ERROR: Relocate trace information in subdisk disk04-01 not empty. Use -r to retain or -d to discard it
You can then decide whether to retain the orig_dmname information or discard it. The VxVM software version 3.1 provides the vxunreloc command to restore a relocated subdisk to its original location.
Hot-Spares
The hot-spare process is similar to hot-relocation, but there are significant differences between the two. The primary functional difference is disk selection in the event of a failure: hot-relocation is able to relocate individual subdisks, whereas the hot-spare process relocates entire disks. With hot-relocation, a subdisk failure no longer impacts all volumes on the physical disk, unless the entire disk fails. Hot-relocation is the recovery process available in the VxVM software, starting with version 2.3. In the hot-spare process, the /etc/vol/sparelist file is maintained to determine the locations of available spare disks. In the hot-relocation process, inherent policies determine where subdisks are relocated. The primary policy for both the hot-spare and hot-relocation processes is to locate recovered data based on the spare flag and drive locality: the closer a drive is to the failed drive, the greater the preference.