Exam Objectives Reference

The CX-310-202 Sun Certified System Administrator for Solaris 10 (SCSA) exam is the second of two exams required for obtaining the SCSA certification, and candidates can use this book to prepare for it. The exam tests the knowledge and skills you need to successfully install and manage a Solaris 10 system, including managing virtual file systems, managing storage volumes, controlling system access, configuring naming services, and performing advanced installation procedures. The following topics are general guidelines for the content likely to be included on the exam. The objectives could change at any time without notice, so it is recommended that you visit the www.UnixEd.com website for updates; other related topics might also appear on any specific delivery of the exam.

Describe Network Basics
- Control and monitor network interfaces, including MAC addresses, IP addresses, and network packets, and configure IPv4 interfaces at boot time.
- Explain the client-server model, and enable and disable server processes.

Manage Virtual File Systems and Core Dumps
- Explain virtual memory concepts and, given a scenario, configure and manage swap space.
- Manage crash dump and core file behaviors.
- Explain NFS fundamentals, and configure and manage the NFS server and client, including daemons, files, and commands.
- Troubleshoot various NFS errors.
- Explain and manage AutoFS, and use automount maps (master, direct, and indirect) to configure automounting.
- Implement patch management using Sun Connection Services, including the Update Manager client, the smpatch command line, and the Sun Connection hosted Web application.
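As a concrete illustration of the NFS and AutoFS objectives above, the following sketch shows how a shared directory and an indirect automount map fit together. The server name `sunfire` and the paths are illustrative assumptions, not exam content:

```
# /etc/dfs/dfstab on the NFS server: share home directories read/write
share -F nfs -o rw /export/home

# /etc/auto_master on the client: hand the /home mount point to the
# auto_home indirect map
/home   auto_home   -nobrowse

# /etc/auto_home (indirect map): mount each user's directory on demand;
# the & is replaced by the key that the * wildcard matched
*       sunfire:/export/home/&
```

Running `shareall` on the server publishes the dfstab entries, and `automount -v` on the client rereads the maps; the daemons, files, and commands involved are covered in Chapter 2.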
Manage Storage Volumes
- Analyze and explain RAID (0, 1, 5) and SVM concepts (logical volumes, soft partitions, state databases, hot spares, and hot spare pools).
- Create the state database, build a mirror, and unmirror the root file system.
- Describe the Solaris ZFS file system; create new ZFS pools and file systems; modify ZFS file system properties; mount and unmount ZFS file systems; destroy ZFS pools and file systems; work with ZFS snapshots and clones; and use ZFS datasets with Solaris Zones.

Control Access and Configure System Messaging
- Configure role-based access control (RBAC), including assigning rights profiles, roles, and authorizations to users.
- Analyze RBAC configuration file summaries and manage RBAC from the command line.
- Explain syslog fundamentals, and configure and manage the /etc/syslog.conf file and syslog messaging.

Naming Services
- Explain naming services (DNS, NIS, NIS+, and LDAP) and the name service switch file (database sources, status codes, and actions).
- Configure, stop, and start the Name Service Cache Daemon (nscd), and retrieve naming service information using the getent command.
- Configure naming service clients during installation, configure the DNS client, and set up the LDAP client (client authentication, client profiles, proxy accounts, and LDAP configurations) after installation.

- Explain NIS and NIS security, including NIS namespace information, domains, processes, securenets, and passwd.adjunct.
- Configure the NIS domain: build and update NIS maps, manage the NIS master and slave servers, configure the NIS client, and troubleshoot NIS server and client failure messages.

Perform Advanced Installation Procedures
- Explain consolidation issues and the features of Solaris Zones; distinguish between the different zone concepts, including zone types, daemons, networking, and command scope; and, given a scenario, create a Solaris zone.
- Given a zone configuration scenario, identify zone components and zonecfg resource parameters, allocate file system space, use the zonecfg command, describe the interactive configuration of a zone, and view the zone configuration file.
- Given a scenario, use the zoneadm command to view, install, boot, halt, reboot, and delete a zone.
- Explain custom JumpStart configuration, including the boot, identification, configuration, and installation services.
- Configure a JumpStart installation, including implementing a JumpStart server; editing the sysidcfg, rules, and profile files; and establishing JumpStart software alternatives (setup, establishing alternatives, troubleshooting, and resolving problems).
- Explain Solaris Flash, create and manipulate a Flash archive, and use it for installation.
- Given a PXE installation scenario, identify the requirements and install methods, configure both the install server and the DHCP server, and boot the x86 client.
- Configure a WAN boot installation and perform a Live Upgrade installation.
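To make the JumpStart objectives more tangible, here is a minimal sketch of a rules-file entry and the class (profile) file it names. The profile name `basic_prof` and the package cluster chosen are illustrative assumptions:

```
# rules file: any sun4u SPARC client gets basic_prof,
# with no begin or finish scripts (the - placeholders)
karch sun4u - basic_prof -

# basic_prof (class file): a hands-off initial installation
install_type    initial_install
system_type     standalone
partitioning    default
cluster         SUNWCreq
```

The `check` script shipped in the JumpStart sample directory validates the rules file and generates the rules.ok file that the installer actually reads.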

Exam CX-310-203 (Solaris 10 Upgrade Exam)
If you’re already certified on Solaris 2.6, 7, 8, or 9, you need to take only the CX-310-203 upgrade exam to update your certification. As of this writing, the objectives for that exam (explained in the preceding section) are:

- Install software
- Manage file systems
- Perform system boot and shutdown procedures for SPARC-, x64-, and x86-based systems
- Perform user and security administration
- Perform system backups and restores
- Perform advanced installation procedures

Solaris 10 System Administration
(Exam CX-310-202), Part II
Bill Calkins

Solaris 10 System Administration Exam Prep (Exam CX-310-202), Part II

Copyright © 2009 by Que Publishing
All rights reserved. No part of this book shall be reproduced, stored in a retrieval system, or transmitted by any means, electronic, mechanical, photocopying, recording, or otherwise, without written permission from the publisher. No patent liability is assumed with respect to the use of the information contained herein. Although every precaution has been taken in the preparation of this book, the publisher and author assume no responsibility for errors or omissions. Nor is any liability assumed for damages resulting from the use of the information contained herein.

ISBN-13: 978-0-7897-3817-2
ISBN-10: 0-7897-3817-1

Associate Publisher: David Dusthimer
Acquisitions Editor: Betsy Brown
Senior Development Editor: Christopher Cleveland
Technical Editor: John Philcox
Managing Editor: Patrick Kanouse
Project Editor: Jennifer Gallant
Copy Editor: Gayle Johnson
Indexer: Lisa Stumpf
Proofreader: Arle Writing and Editing
Publishing Coordinator: Vanessa Evans
Book Designer: Gary Adair
Page Layout: Mark Shirar

Library of Congress Cataloging-in-Publication Data:
Calkins, Bill.
  Solaris 10 system administration exam prep (Exam CX-310-200) / Bill Calkins.
       p. cm.
  ISBN 978-0-7897-3790-8 (pbk. w/cd)
  1. Electronic data processing personnel--Certification. 2. Operating systems (Computers)--Examinations--Study guides. 3. Solaris (Computer file) I. Title.
  QA76.3.C34346 2008
  005.4'32--dc22
  2008031592

Printed in the United States of America
First Printing: May 2009

Trademarks
All terms mentioned in this book that are known to be trademarks or service marks have been appropriately capitalized. Que Publishing cannot attest to the accuracy of this information. Use of a term in this book should not be regarded as affecting the validity of any trademark or service mark.

Warning and Disclaimer
Every effort has been made to make this book as complete and accurate as possible, but no warranty or fitness is implied. The information provided is on an “as is” basis. The author and the publisher shall have neither liability nor responsibility to any person or entity with respect to any loss or damages arising from the information contained in this book or from the use of the CD or programs accompanying it.

Bulk Sales
Que Publishing offers excellent discounts on this book when ordered in quantity for bulk purchases or special sales. For more information, please contact:

U.S. Corporate and Government Sales
1-800-382-3419
corpsales@pearsontechgroup.com

For sales outside of the U.S., please contact:

International Sales
+1-317-581-3793
international@pearsontechgroup.com

Contents at a Glance
Introduction . . . 1
Study and Exam Prep Tips . . . 9

Part I: Exam Preparation

Chapter 1: The Solaris Network Environment . . . 17
Chapter 2: Virtual File Systems, Swap Space, and Core Dumps . . . 49
Chapter 3: Managing Storage Volumes . . . 121
Chapter 4: Controlling Access and Configuring System Messaging . . . 187
Chapter 5: Naming Services . . . 217
Chapter 6: Solaris Zones . . . 271
Chapter 7: Advanced Installation Procedures: JumpStart, Flash Archive, and PXE . . . 315
Chapter 8: Advanced Installation Procedures: WAN Boot and Live Upgrade . . . 415
Chapter 9: Administering ZFS File Systems . . . 469

Part II: Final Review

Fast Facts . . . 537
Practice Exam . . . 565
Answers to Practice Exam . . . 583

What's on the CD-ROM (On the Book's Website)
Glossary (On the Book's Website)
Index . . . 591

Table of Contents
Introduction . . . 1
Study and Exam Prep Tips . . . 9

Part I: Exam Preparation

Chapter 1: The Solaris Network Environment . . . 17
    Introduction . . . 20
    Client/Server Model . . . 20
        Hosts . . . 20
        IPv4 Addressing . . . 21
    Network Interfaces . . . 22
        Controlling and Monitoring an IPv4 Network Interface . . . 22
        Configuring an IPv4 Network Interface . . . 26
        Changing the System Hostname . . . 29
    Network Services . . . 31
        RPC Services . . . 34
    Network Maintenance . . . 36
    Summary . . . 40
        Key Terms . . . 40
    Apply Your Knowledge . . . 41
        Exercises . . . 41
        Exam Questions . . . 43
        Answers to Exam Questions . . . 45
    Suggested Reading and Resources . . . 47

Chapter 2: Virtual File Systems, Swap Space, and Core Dumps . . . 49
    Introduction . . . 52
    The Swap File System . . . 52
        Swap Space and TMPFS . . . 53
        Sizing Swap Space . . . 54
        Monitoring Swap Resources . . . 55
        Setting Up Swap Space . . . 58
    Core File Configuration . . . 63
    Crash Dump Configuration . . . 66
    NFS . . . 68
        NFS Version 4 . . . 69
        Servers and Clients . . . 69


        NFS Daemons . . . 70
        Setting Up NFS . . . 71
        Mounting a Remote File System . . . 74
        NFS Server Logging . . . 78
        Troubleshooting NFS Errors . . . 80
    AutoFS . . . 81
        AutoFS Maps . . . 85
        When to Use automount . . . 97
    Sun Update Connection Service . . . 97
        Using the Update Manager . . . 98
        Sun Update Manager Proxy . . . 103
    Summary . . . 104
        Key Terms . . . 104
    Apply Your Knowledge . . . 105
        Exercises . . . 105
        Exam Questions . . . 109
        Answers to Exam Questions . . . 116
    Suggested Reading and Resources . . . 120

Chapter 3: Managing Storage Volumes . . . 121
    Introduction . . . 124
    RAID . . . 124
        RAID 0 . . . 126
        RAID 1 . . . 128
        RAID 5 . . . 129
        RAID 0+1 . . . 130
        RAID 1+0 . . . 130
    Solaris Volume Manager (SVM) . . . 132
        SVM Volumes . . . 133
        Planning Your SVM Configuration . . . 136
        Metadisk Driver . . . 139
        SVM Commands . . . 139
        Creating the State Database . . . 141
        Monitoring the Status of the State Database . . . 143
        Creating a RAID 0 (Concatenated) Volume . . . 146
        Creating a RAID 0 (Stripe) Volume . . . 149
        Monitoring the Status of a Volume . . . 149
        Creating a Soft Partition . . . 150


        Expanding an SVM Volume . . . 153
        Creating a Mirror . . . 156
        Unmirroring a Noncritical File System . . . 159
        Placing a Submirror Offline . . . 160
        Mirroring the Root File System on a SPARC-Based System . . . 162
        Mirroring the Root File System on an x86-Based System . . . 166
        Unmirroring the Root File System . . . 173
    Veritas Volume Manager . . . 176
    Summary . . . 179
        Key Terms . . . 179
    Apply Your Knowledge . . . 180
        Exercise . . . 180
        Exam Questions . . . 181
        Answers to Exam Questions . . . 184
    Suggested Reading and Resources . . . 185

Chapter 4: Controlling Access and Configuring System Messaging . . . 187
    Introduction . . . 189
    Role-Based Access Control (RBAC) . . . 189
        Using RBAC . . . 190
        RBAC Components . . . 195
    syslog . . . 203
        Using the logger Command . . . 208
    Summary . . . 209
        Key Terms . . . 209
    Apply Your Knowledge . . . 210
        Exercise . . . 210
        Exam Questions . . . 211
        Answers to Exam Questions . . . 214
    Suggested Reading and Resources . . . 215

Chapter 5: Naming Services . . . 217
    Introduction . . . 220
    Name Services Overview . . . 220
        The Name Service Switch File . . . 222
    /etc Files . . . 226
    NIS . . . 227
        The Structure of the NIS Network . . . 227
        Determining How Many NIS Servers You Need . . . 228


        Determining Which Hosts Will Be NIS Servers . . . 229
        Information Managed by NIS . . . 229
        Planning Your NIS Domain . . . 233
        Configuring an NIS Master Server . . . 234
        Setting Up NIS Clients . . . 243
        Setting Up NIS Slave Servers . . . 244
        Creating Custom NIS Maps . . . 245
        NIS Security . . . 246
        Troubleshooting NIS . . . 247
    NIS+ . . . 248
        Hierarchical Namespace . . . 249
        NIS+ Security . . . 249
        Authentication . . . 249
        Authorization . . . 250
    DNS . . . 251
        Configuring the DNS Client . . . 252
    Lightweight Directory Access Protocol (LDAP) . . . 254
        Sun Java System Directory Server . . . 255
    Name Service Cache Daemon (nscd) . . . 258
    The getent Command . . . 260
    Summary . . . 261
        Key Terms . . . 261
    Apply Your Knowledge . . . 262
        Exercises . . . 262
        Exam Questions . . . 264
        Answers to Exam Questions . . . 269
    Suggested Reading and Resources . . . 270

Chapter 6: Solaris Zones . . . 271
    Introduction . . . 274
    Consolidation and Resource Management . . . 275
        Consolidation . . . 276
    Solaris Zones . . . 277
        Types of Zones . . . 277
        Zone Features . . . 279
        Nonglobal Zone Root File System Models . . . 280
        Networking in a Zone Environment . . . 281
        Zone Daemons . . . 282


        Configuring a Zone . . . 282
        Viewing the Zone Configuration . . . 287
        Installing a Zone . . . 289
        Booting a Zone . . . 289
        Halting a Zone . . . 290
        Rebooting a Zone . . . 291
        Uninstalling a Zone . . . 291
        Deleting a Zone . . . 292
        Zone Login . . . 292
        Creating a Zone . . . 296
        Making Modifications to an Existing Zone . . . 299
        Moving a Zone . . . 300
        Migrating a Zone . . . 300
        Cloning a Zone . . . 302
        Backing Up a Zone . . . 304
    Summary . . . 305
        Key Terms . . . 305
    Apply Your Knowledge . . . 306
        Exercise . . . 306
        Exam Questions . . . 308
        Answers to Exam Questions . . . 312
    Suggested Reading and Resources . . . 313

Chapter 7: Advanced Installation Procedures: JumpStart, Flash Archive, and PXE . . . 315
    Introduction . . . 318
    Custom JumpStart . . . 318
        Preparing for a Custom JumpStart Installation . . . 320
        What Happens During a Custom JumpStart Installation? . . . 321
        Differences Between SPARC and x86/x64-Based Systems . . . 321
        The Boot Server . . . 324
        The Install Server . . . 329
        The Configuration Server . . . 331
        The Rules File . . . 333
        begin and finish Scripts . . . 342
        Creating class Files . . . 343
        Testing Class Files . . . 363
        sysidcfg File . . . 366


        Setting Up JumpStart in a Name Service Environment . . . 372
        Setting Up Clients . . . 372
        Troubleshooting JumpStart . . . 375
        A Sample JumpStart Installation . . . 376
    Solaris Flash . . . 382
        Creating a Flash Archive . . . 383
        Using the Solaris Installation Program to Install a Flash Archive . . . 387
        Creating a Differential Flash Archive . . . 390
        Solaris Flash and JumpStart . . . 391
    Preboot Execution Environment (PXE) . . . 392
        Preparing for a PXE Boot Client . . . 393
        Booting the x86 Client . . . 402
    Summary . . . 403
        Key Terms . . . 403
    Apply Your Knowledge . . . 404
        Exercise . . . 404
        Exam Questions . . . 407
        Answers to Exam Questions . . . 412
    Suggested Reading and Resources . . . 414

Chapter 8: Advanced Installation Procedures: WAN Boot and Live Upgrade . . . 415
    Introduction to WAN Boot . . . 418
        WAN Boot Requirements . . . 418
        WAN Boot Components . . . 420
        The WAN Boot Process . . . 421
        The WAN Boot Server . . . 421
        Configure the WAN Boot Server . . . 422
        Configure the WAN Boot and JumpStart Files . . . 423
        The wanboot.conf File . . . 428
        Booting the WAN Boot Client . . . 431
    Solaris Live Upgrade . . . 437
        Live Upgrade Requirements . . . 438
        Solaris Live Upgrade Process . . . 439
        Maintaining Solaris Live Upgrade Boot Environments . . . 456
    Summary . . .
. . . . . . . . . . . 462 Key Terms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 462


Contents

Apply Your Knowledge . . . 463
Exercises . . . 463
Exam Questions . . . 463
Answers to Exam Questions . . . 466
Suggested Reading and Resources . . . 467
Chapter 9: Administering ZFS File Systems . . . 469
Introduction to ZFS . . . 472
ZFS Storage Pools . . . 472
ZFS Is Self-Healing . . . 473
Simplified Administration . . . 474
ZFS Terms . . . 474
ZFS Hardware and Software Requirements . . . 475
ZFS RAID Configurations . . . 476
Creating a Basic ZFS File System . . . 476
Renaming a ZFS File System . . . 478
Listing ZFS File Systems . . . 478
Removing a ZFS File System . . . 479
Removing a ZFS Storage Pool . . . 480
ZFS Components . . . 481
Using Disks in a ZFS Storage Pool . . . 482
Using Files in a ZFS Storage Pool . . . 482
Mirrored Storage Pools . . . 483
RAID-Z Storage Pools . . . 484
Displaying ZFS Storage Pool Information . . . 484
Adding Devices to a ZFS Storage Pool . . . 488
Attaching and Detaching Devices in a Storage Pool . . . 489
Converting a Nonredundant Pool to a Mirrored Pool . . . 490
Detaching a Device from a Mirrored Pool . . . 491
Taking Devices in a Storage Pool Offline and Online . . . 492
ZFS History . . . 494
ZFS Properties . . . 494
Setting ZFS Properties . . . 497
Mounting ZFS File Systems . . . 500
Legacy Mount Points . . . 502
Sharing ZFS File Systems . . . 504

ZFS Web-Based Management GUI . . . 506
ZFS Snapshots . . . 508
Creating a ZFS Snapshot . . . 508
Listing ZFS Snapshots . . . 509
Saving and Restoring a ZFS Snapshot . . . 510
Destroying a ZFS Snapshot . . . 510
Renaming a ZFS Snapshot . . . 510
Rolling Back a ZFS Snapshot . . . 511
ZFS Clones . . . 512
Destroying a ZFS Clone . . . 513
Replacing a ZFS File System with a ZFS Clone . . . 513
zpool Scrub . . . 514
Replacing Devices in a Storage Pool . . . 515
A ZFS Root File System . . . 517
Using ZFS for Solaris Zones . . . 518
Adding a ZFS Dataset to a Nonglobal Zone . . . 519
Delegating a ZFS Dataset to a Nonglobal Zone . . . 521
Summary . . . 523
Key Terms . . . 524
Apply Your Knowledge . . . 524
Exercise . . . 524
Exam Questions . . . 525
Answers to Exam Questions . . . 532
Suggested Reading and Resources . . . 534
Part II: Final Review
Fast Facts . . . 537
Practice Exam . . . 565
Answers to Practice Exam . . . 583
What's on the CD-ROM (On the Book's Website)
Glossary (On the Book's Website)
Index . . . 591

About the Author

Bill Calkins is a Sun Certified System Administrator for the Solaris operating environment. He is owner and president of Pyramid Consulting, Inc., a computer training and consulting firm located near Grand Rapids, Michigan, specializing in the implementation and administration of open systems. He has more than 20 years of experience in UNIX system administration, consulting, and training at more than 150 different companies. His experience covers all varieties of UNIX, including Solaris, HP-UX, AIX, IRIX, and Linux. He works as a consultant with the certification group at Sun Microsystems and assists with the development of the Solaris 10 SCSA, SCNA, and SCSECA certification exams. He also consults with Sun Microsystems Professional Services and assists in the development of Solaris training and testing materials for the education division at Sun Microsystems. Recently he was recognized by the United States Central Command (CENTCOM) as the "technical trainer of choice for the joint war-fighting community."

Calkins also works as an instructor in government, corporate, and university settings and has helped thousands of administrators get their certification. He draws on his many years of experience in system administration and training to provide a unique approach to UNIX training. His professional interests include consulting, teaching, writing, traveling, and developing web-based training materials. When he's not working in the field, he writes UNIX books and conducts training and educational seminars on various system administration topics.

He has authored several UNIX textbooks, which are currently best sellers and are used by universities and training organizations worldwide:

. Solaris 2.6 Administrator Certification Training Guide, Part I (New Riders Publishing, ISBN 157870085X)
. Solaris 2.6 Administrator Certification Training Guide, Part II (New Riders Publishing, ISBN 1578700868)
. Solaris 7 Administrator Certification Training Guide, Part I and Part II (New Riders Publishing, ISBN 1578702496)
. Solaris 8 Training Guide (CX-310-011 and CX-310-012): System Administrator Certification (New Riders Publishing, ISBN 1578702593)
. Inside Solaris 9 (New Riders Publishing, ISBN 0735711011)
. Solaris 9 Training Guide (CX-310-014 and CX-310-015): System Administrator Certification (Que, ISBN 0789729229)
. Solaris 10 System Administration Exam Prep (Que, ISBN 0-7897-3461-3)
. Solaris 10 System Administration Exam Prep, Part I (Que, ISBN 0-7897-3790-6)

Calkins has worked with Sun Press and Prentice Hall as a technical editor and a major contributor to many of their Solaris titles.

Acknowledgments

I'd like to thank John Philcox of Mobile Ventures Limited, who once again has helped me get this book together. You've been a great asset and have become a good friend to have along on all of my books and projects. As always, John, you've done a great job. I value your input greatly, and the book would not be as complete without your help. It's been a great team effort.

With each book, our tech editors get more refined. I want to thank all the editors who have contributed to this book. This book would not be what it is if it were not for your valuable input over the years.

A lot of people behind the scenes make a book like this happen, and their work is a huge contribution to the quality of this book. A big thanks to everyone who edits the text, lays out the pages, and ships the book. After several books, I still don't have a clue how it all works, but it's a great team effort. My efforts would be lost in a closet somewhere if it weren't for your hard work.

Thank you, the reader, for buying my books and providing comments to improve the content with each new release. May the material in this book help you better your skills, enhance your career, and achieve your goal to become certified. Best of luck!

We Want to Hear from You!

As the reader of this book, you are our most important critic and commentator. We value your opinion and want to know what we're doing right, what we could do better, what areas you'd like to see us publish in, and any other words of wisdom you're willing to pass our way.

As an associate publisher for Que Publishing, I welcome your comments. You can email or write me directly to let me know what you did or didn't like about this book, as well as what we can do to make our books better.

Please note that I cannot help you with technical problems related to the topic of this book. We do have a User Services group, however, where I will forward specific technical questions related to the book.

When you write, please be sure to include this book's title and author as well as your name, email address, and phone number. I will carefully review your comments and share them with the author and editors who worked on the book.

Email: feedback@quepublishing.com

Mail: Dave Dusthimer
Associate Publisher
Que Publishing
800 East 96th Street
Indianapolis, IN 46240 USA

Reader Services

Visit our website and register this book at www.quepublishing.com/register for convenient access to any updates, downloads, or errata that might be available for this book.

Introduction

Bill Calkins has been training Solaris system administrators for more than 15 years. This book contains the training material that he uses in his basic and advanced Solaris administration courses that, over the years, have helped thousands of Solaris administrators become certified. This is our second edition of the Solaris 10 System Administration Exam Prep. It began with the Training Guide for Solaris 2.6, 7, 8, and 9 and is now the Exam Prep for Solaris 10. It covers updates that Sun has made to the Solaris 10 operating environment as of the October 2008 release. Many of you have written with your success stories, suggestions, and comments. Your suggestions are what keep making this guide more valuable.

The Solaris 10 System Administration Exam Prep books, Parts I and II, provide training materials for anyone interested in becoming a Sun Certified System Administrator (SCSA) for Solaris 10. Each book covers the exam objectives in enough detail for inexperienced administrators to learn the objectives and apply the knowledge to real-life scenarios. Experienced readers will find the material in these books complete and concise, making it a valuable study guide for the Sun Certified System Administrator exams. Instructors from universities and training organizations around the world have used the book as courseware in their Solaris administration courses. In addition, administrators from around the world have used this book for self-study when instruction from a Sun training center is either unavailable or not within their budget.

This book is not a cheat sheet or cram session for the exam; it is a training manual. In other words, it does not merely give answers to the questions you will be asked on the exam. This book teaches you what you need to know, from start to finish. We have made certain that this book addresses the exam objectives in detail. If you are unsure about the objectives on the exams, these two books will save you a great deal of time and effort searching for information you will need to know when taking the exam.

When used as a study guide, assess your knowledge of the material covered after reading each chapter, using the review questions at the end of the chapter. When you have completed reading a section, use the practice exam at the end of the book and the ExamGear test engine on the CD-ROM to assess your knowledge of the objectives covered on each exam. This CD-ROM contains sample questions similar to what you are likely to see on the real exams. More sample questions are available at http://www.UnixEd.com, so make sure you visit this site to find additional training and study materials.

How This Book Helps You

This book teaches you advanced topics in administering the Solaris 10 operating system. It offers you a self-guided training course of all the areas covered on the CX-310-202 certification exam by installing, configuring, and administering the Solaris 10 operating environment. Every objective you need to know to install, configure, and administer a Solaris 10 system is in this book. You will learn the specific skills that are required to administer a system and, specifically, to pass the second part of the Sun Certified System Administrator exam for Solaris 10 (CX-310-202). If you are an experienced administrator who is upgrading an existing Solaris certification, you'll find in-depth coverage of the new topics you need to learn for the CX-310-203 upgrade exam in both the SCSA Solaris 10 OS CX-310-200 and CX-310-202 Exam Prep books.

This book is set up as follows:

. Organization: This book is organized according to individual exam objectives. This book includes the full list of exam topics and objectives, exactly as they are defined by Sun. We have attempted to present the objectives in an order that is as close as possible to that listed by Sun. However, we have not hesitated to reorganize them as needed to make the material as easy as possible for you to learn. We have also attempted to make the information accessible in the following ways: each chapter begins with a list of the objectives to be covered, and each chapter also begins with an outline that provides you with an overview of the material and the page numbers where particular topics can be found. Read the "Study and Exam Prep Tips" element early on to help develop study strategies.

. Instructional features: This book is designed to provide you with multiple ways to learn and reinforce the exam material. Throughout the book, material that is directly related to the exam objectives is identified. Throughout each section, we provide helpful tips and real-world examples that we have encountered as system administrators. In addition, we provide useful, real-world exercises to help you practice the material you have learned. The following are some of the helpful methods:

. Objective explanations: As mentioned, each chapter begins with a list of the objectives to be covered.

. Study strategies: The beginning of each chapter also includes strategies for studying and retaining the material in the chapter, particularly as it is addressed on the exam.

. Exam Alerts: Throughout each chapter you'll find exam tips that will help you prepare for exam day. These tips were written by those who have already taken the Solaris 10 certification exams. This element provides you with valuable exam-day tips and information on exam/question formats such as adaptive tests and case study-based questions.

. Step By Steps: These are hands-on lab exercises that walk you through a particular task or function relevant to the exam objectives.

. Exercises: Found near the end of the chapters, exercises are performance-based opportunities for you to learn and assess your knowledge.

. Key Terms: A list of key terms appears near the end of each chapter.

. Notes: These contain various types of useful information, such as tips on technology or administrative practices, historical background on terms and technologies, or side commentary on industry issues.

. Cautions: When you use sophisticated information technology, mistakes or even catastrophes are always possible because of improper application of the technology. Cautions alert you to such potential problems.

. Suggested Reading and Resources: At the end of each chapter is a list of additional resources that you can use if you are interested in going beyond the objectives and learning more about the topics presented in the chapter.

. Extensive practice test options: The book provides numerous opportunities for you to assess your knowledge and practice for the exam. The practice options include the following:

. Exam questions: Each chapter ends with questions, with questions written in styles similar to those used on the actual exam. They allow you to quickly assess your comprehension of what you just read in the chapter. Answers to the questions are provided in a separate element titled "Answers to Exam Questions."

. Practice exam: A practice exam is included in Part II, "Final Review," for each exam (as discussed in a moment). Use the practice exam to assess your readiness for the real exam.

. ExamGear: The ExamGear software included on the CD-ROM provides further practice questions.

NOTE
ExamGear software: For a complete description of the ExamGear test engine, see Appendix A, "What's on the CD-ROM."

. Final Review: This part of the book provides you with three valuable tools for preparing for the exam:

. Fast Facts: This condensed version of the information contained in the book will prove extremely useful for last-minute review.

. Practice Exam: A full practice exam is included.

. Answers to Practice Exam: This element provides the answers to the full practice exam, with detailed explanations. These should help you assess your strengths and weaknesses.

. Appendixes: The book contains valuable appendixes as well, including a glossary and a description of what is on the CD-ROM (Appendix A).

These and all the other book features mentioned previously will enable you to thoroughly prepare for the exam.

Conventions Used in This Book

. Commands: In the steps and examples, the commands you type are displayed in a special monospace font.

. Arguments, options, and <cr>: In command syntax, command options and arguments are enclosed in < >. The words within the < > stand for what you will actually type. You don't type the < >. For example:

lp -d<printer name> <filename> <cr>

The <cr> that follows the command means to press Enter. You don't type the <cr>.

. Code continuation character: When a line of code is too long to fit on one line of the book, it is broken and continued to the next line. The continuation is preceded by a backslash.

. Menu options: The names of menus and the options that appear on them are separated by a comma. For example, "Select File, Open" means to pull down the File menu and choose the Open option.

. Using the mouse: When using menus and windows, you select items with the mouse. Here is the default mapping for a three-button mouse:

Left button: Select
Middle button: Transfer/adjust
Right button: Menu

You use the Select button to select objects and activate controls. The middle mouse button is configured for either Transfer or Adjust. By default, it is set up for Transfer, which means that you use this button to drag or drop list or text items. You use the left mouse button to highlight text, and then you use the middle button to move the text to another window or to reissue a command. The middle button can also be used to move windows around on the screen. You use the right mouse button, the Menu button, to display and choose options from pop-up menus.
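As a brief illustration of the command conventions above (the printer name, filename, and message text here are hypothetical, chosen only for the example), a command printed as lp -d<printer name> <filename> <cr> would be typed with your own values substituted, and a long line broken in the book with a backslash is entered as one continued command:

```shell
# A command shown in the book as:  lp -d<printer name> <filename> <cr>
# would be typed with real values substituted for the < > placeholders,
# for example (hypothetical printer queue and file):
#
#   lp -dlaser1 /etc/hosts
#
# The backslash continuation convention: a line broken across two lines
# of the book is one command, continued with a trailing backslash.
echo "This line demonstrates the backslash \
continuation convention"
```

The shell joins the two physical lines into a single logical line before running the command, which is exactly how a continued line from the book should be typed.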

Audience

This book is the second book in a series designed for anyone who has a basic understanding of UNIX and wants to learn more about Solaris system administration. This book is intended for experienced system administrators who want to become certified, update their current Solaris certification, or simply learn about the features of the Solaris 10 operating environment. It covers advanced system administration topics you need to know before you begin administering the Solaris operating system, and it covers the material on the Part II exam. It's the same training material that the author uses in his Solaris 10 Intermediate and Advanced System Administration courses. The only prerequisite is that you have read my Solaris 10 System Administration Exam Prep Part I book. Our goal is to present the material in an easy-to-follow format, with text that is easy to read and understand. Whether or not you plan to become certified, the Solaris 10 System Administration Exam Prep books, Part I and Part II, are the starting point to becoming a Solaris System Administrator.

The Sun Certified System Administrator Exams

To become a Sun Certified System Administrator, you need to pass two exams: CX-310-200 (Part I) and CX-310-202 (Part II). You must pass the CX-310-200 exam before taking the CX-310-202 exam, and you will not receive a certificate until you have passed both examinations. In addition, if you are already certified in Solaris 2.6, 7, 8, or 9, you need to know the material covered in this book as well as in Solaris 10 System Administration Exam Prep: CX-310-200 Part I to take the upgrade exam, CX-310-203.

To pass the CX-310-202 and CX-310-203 certification exams, you need a solid understanding of the fundamentals of administering Solaris 10. This book helps you review the fundamentals required to pass the certification exam. Also, feel free to visit our online Solaris certification discussion forum at www.UnixEd.com, where you can ask me questions directly.

Beware of fakes. We have seen some websites promoting their own certification programs, so be sure to evaluate them carefully. Certification programs promoted by these sites are not the same as the Sun certification program. You will not receive a certificate from Sun until you pass Sun's exams from a certified Sun testing center. Go to my website (www.UnixEd.com) for links to the real exams and information on Sun's certification program if you are in doubt.

Summary

It's not uncommon for Sun to change the exam objectives or to shift them around after the exams have been published. We highly recommend that before you begin reading this book, you visit my website at www.UnixEd.com to get the most up-to-date list of exam objectives, the errata for this book, and any other last-minute notes about these exams.

In the back of this book is the ExamGear software test CD that will prepare you for the questions you might see on the exam. It is a preview of the types of questions to expect on the exams and tests your knowledge of all the exam objectives. You'll receive hundreds of questions that will take you deep into each exam objective. This will give you a comprehensive skills assessment and help you evaluate your readiness and retention of the materials. The CD-ROM-based test engine was designed by educational experts to help you learn as you test. Each question on the CD-ROM has a flash card to help you in case you get stuck. This flash card contains brief, concise textbook excerpts that explain why each answer is correct so that you can learn as you test. Also, for an additional cost, you can purchase more questions for the ExamGear test engine from our website.

We will provide all the information you need to pass the exam—all you need to do is devote the time.

Advice on Taking the Exam

More extensive tips are found in the "Study and Exam Prep Tips" element and throughout the book, but keep in mind the following advice as you study for the exam:

. Read all the material. This book includes information not reflected in the exam objectives to better prepare you for the exam and for real-world experiences. Read all the material to benefit from this.

. Do the step-by-step lab exercises and complete the exercises in each chapter. You need access to both SPARC and x86/x64-based systems running Solaris 10 so that you can practice what you have learned. Learning the objectives is the first step; the next step is to practice. Unless you have a supernatural memory, it's difficult to pass the exams without practice. This will help you gain experience and prepare you for the scenario-type questions that you will encounter.

. Use the questions to assess your knowledge. Each chapter contains review questions and exam questions. Use these to assess your knowledge and determine where you need to review material. If you are weak in any area, the sample questions will help you identify that area so that you can go back to the appropriate chapter and study the topic.

. Review the exam objectives. Develop your own questions and examples for each topic listed. If you can develop and answer several questions for each topic, you should not find it difficult to pass the exam.

. Review all the material in the "Fast Facts" element the night before or the morning you take the exam.

. Relax and sleep before taking the exam. Be sure to sleep well the night before the exam because of the stress that the time limitations put on you.

. Don't be lazy during the examination. The time for taking the examination is limited. You need to complete the exam in the time allotted, so answer all the questions as quickly as possible. Any unfinished questions will be marked incorrect. However, if you have prepared and you know Solaris network administration, you will have plenty of time to answer all the questions. If you don't know the answer to a question, just skip it and don't waste time.

The ExamGear test engine has hundreds of questions that you can use to further assess your retention of the material presented in the book. The exams feature electronic flash cards that take the place of those sticky notes that you've used as bookmarks throughout the book. Don't attempt the real exam until you can pass every section of the practice exams with a 95% or better score. When you feel confident, take the real exams and become certified.

Make sure you check my website, www.UnixEd.com, before taking the exam. It contains the following:

. Late-breaking changes that Sun might make to the exam or the objectives
. A FAQs page with frequently asked questions and errata regarding this book or the exams
. Additional practice questions and sample exams for the ExamGear test engine
. Additional study materials, training programs, and online seminars related to Solaris certification
. An online forum where you can discuss certification-related issues with me and other system administrators, including some who have already taken the exam
. Links to other informative websites

You can also email me directly from this website with questions or comments about this book. I always try to answer each one. Don't forget to drop me an email and let me know how you did on the exam (guru@UnixEd.com).


Study and Exam Prep Tips

These study and exam prep tips provide you with some general guidelines to help you prepare for the Sun Certified System Administrator exam. The information is organized into two sections. The first section addresses your pre-exam preparation activities and covers general study tips. The second section offers some tips and hints for the actual test-taking situation.

Learning as a Process

To better understand the nature of preparing for the exams, it is important to understand learning as a process. Learning is a developmental process, and as part of that process, you need to focus on what you know and what you have yet to learn. Learning takes place when we match new information to old. You have some previous experience with computers, and now you are preparing for this certification exam. Using this book, software, and supplementary material will not just add incrementally to what you know; as you study, you will actually change the organization of your knowledge as you integrate this new information into your existing knowledge base. This will lead you to a more comprehensive understanding of the tasks and concepts outlined in the objectives and of computing in general. Again, this happens as a repetitive process rather than a singular event. Keep this model of learning in mind as you prepare for the exam, and you will make better decisions concerning what to study and how much more studying you need to do.

Study Tips

There are many ways to approach studying, just as there are many different types of material to study. Obviously, you cannot start studying for this exam the night before you take it; test preparation takes place over time. Before tackling those areas, however, think a little bit about how you learn. You probably know how you best learn new material. You might find that outlining works best for you, or you might need to "see" things as a visual learner. Whatever your learning style, the following tips should work well for the type of material covered on the certification exam.

Study Strategies

Although individuals vary in how they learn, some basic principles apply to everyone. You should adopt some study strategies that take advantage of these principles. One of these principles is that learning can be broken into various depths. Recognition (of terms, for example) exemplifies a more surface level of learning in which you rely on a prompt of some sort to elicit recall. Comprehension or understanding (of the concepts behind the terms, for example) represents a deeper level of learning. The ability to analyze a concept and apply your understanding of it in a new way represents an even deeper level of learning. Your learning strategy should enable you to know the material at a level or two deeper than mere recognition. This will help you do well on the exam: You will know the material so thoroughly that you can easily handle the recognition-level types of questions used in multiple-choice testing, and you also will be able to apply your knowledge to solve new problems.

Macro and Micro Study Strategies

One strategy that can lead to this deeper learning includes preparing an outline that covers all the exam objectives. You should delve a bit further into the material and include a level or two of detail beyond the stated exam objectives. Then expand the outline by coming up with a statement of definition or a summary for each point in the outline.

An outline provides two approaches to studying. First, you can study the outline by focusing on the organization of the material. Work your way through the points and subpoints of your outline, with the goal of learning how they relate to one another. Be certain, for example, that you understand how each of the objective areas is similar to and different from the others. Next, you can work through the outline, focusing on learning the details. Memorize and understand terms and their definitions, facts, rules and strategies, advantages and disadvantages, and so on. In this pass through the outline, attempt to learn detail rather than the big picture (the organizational information that you worked on in the first pass through the outline).

Research has shown that attempting to assimilate both types of information at the same time seems to interfere with the overall learning process. To better perform on the exam, separate your studying into these two approaches.

Active Study Strategies

Develop and exercise an active study strategy. Write down and define objectives, terms, facts, and definitions. In human information-processing terms, writing forces you to engage in more active encoding of the information; just reading over it exemplifies more passive processing. Write down this information to process the facts and concepts in a more active fashion.

Next, determine whether you can apply the information you have learned by attempting to create examples and scenarios on your own. Think about how or where you could apply the concepts you are learning.

Commonsense Strategies

Finally, you also should follow commonsense practices when studying: Study when you are alert, reduce or eliminate distractions, take breaks when you become fatigued, and so on.

Pretesting Yourself

Pretesting enables you to assess how well you are learning. One of the most important aspects of learning is what has been called metalearning. Metalearning has to do with realizing when you know something well or when you need to study some more. In other words, you recognize how well or how poorly you have learned the material you are studying. For most people, this can be difficult to assess objectively on their own. Practice tests are useful because they reveal more objectively what you have learned and what you have not learned. You should use this information to guide review and further study. Developmental learning takes place as you cycle through studying, assessing how well you have learned, reviewing, and assessing again until you think you are ready to take the exam.

You might have noticed the practice exam included in this book. By using the practice exam, you can take a timed practice test that is quite similar to the actual Solaris exam. The ExamGear software on the CD-ROM also provides a variety of ways to test yourself before you take the actual exam. For a more detailed description of the exam simulation software, see Appendix A, "What's on the CD-ROM." Set a goal for your pretesting, and use it as part of the learning process. A reasonable goal would be to score consistently in the 95% range in all categories.

Exam Prep Tips

The Solaris certification exam reflects the knowledge domains established by Sun Microsystems for Solaris OS administrators. You must complete the CX-310-200 exam before proceeding to the second exam, CX-310-202. You will not receive a certificate until you have successfully passed both exams. Solaris exams are similar in terms of content coverage, number of questions, and allotted time. The exam is based on a fixed set of exam questions. The individual questions are presented in random order during a test session. If you take the same exam more than once, you will see the same number of questions, but you won't necessarily see the same questions. You might notice that some of the same questions appear on, or rather are shared among, different final forms. When questions are shared among multiple final forms of an exam, the percentage of sharing generally is small. Solaris exams also have a fixed time limit in which you must complete the exam.

Finally, the score you achieve on a fixed-form exam is based on the number of questions you answer correctly. You receive one point for each correctly answered question, and the exam's passing score is the same for all final forms of a given fixed-form exam. Every exam contains different questions. Many of the multiple-choice questions are scenarios that have more than one correct answer. The question tells you how many answers to select. However, if you get even one answer wrong, the entire question is marked wrong, and you do not receive a point.

When you finish the exam, you receive the results, with a report outlining your score for each section of the exam. You do not know which questions you answered correctly or incorrectly. If you feel that you were scored unfairly, you can request a review by sending an email to who2contact@sun.com. If you fail, you'll need to purchase another voucher and retake the exam after a two-week waiting period. For other information related to the SCSA exams, refer to Sun Microsystems' FAQ at www.sun.com/training/certification/faq/index.html.

Putting It All Together

Given all these different pieces of information, the task now is to assemble a set of tips that will help you successfully tackle the Solaris certification exam. Table 1 shows the exam's format.

Table 1: Time, Number of Questions, and Passing Score for the Exam

Exam: Sun Certified System Administrator for the Solaris 10 Operating System: Part II
Time Limit in Minutes: 105
Number of Questions: 60
Passing %: 63

Question types on the exam are multiple choice and drag-and-drop. As of this writing, there are no true/false or free-response-type questions. Your 105 minutes of exam time can be consumed very quickly, and any unfinished questions are marked as incorrect. Remember not to dwell on any one question for too long.

More Pre-Exam Prep Tips

Generic exam-preparation advice is always useful. Here are some tips:

. The certification exams are directed toward experienced Solaris system administrators—typically those who have 6 to 12 months of actual job experience. Although the Sun training courses can help you prepare, some of the material found on the exam is not taught in the Sun training courses. Every topic on the exam, however, is covered in this book.

. It is difficult, but not impossible, to pass the exam without hands-on experience. Hands-on experience is one of the keys to success, and there is no shortcut for learning the material. To pass the exam, you need to retain everything presented in this book. If you know the material, you can handle any scenario-based question thrown at you.

. Review the current exam-preparation guide on the Sun website, and review the chapter-specific study tips at the beginning of each chapter for instructions on how to best prepare for the exam.

. Memorize foundational technical detail, but remember that you need to be able to think your way through questions as well. You need to know the objectives, commands, and equipment, and become familiar with general terminology.

. Take any of the available practice tests that assess your knowledge against the stated exam objectives—not the practice exams that cheat and promise to show you actual exam questions and answers. To help you assess your skills, I've created the ExamGear test engine, which you will use to assess your retention of the materials. The test engine on this CD is designed to complement the material in this book and help you prepare for the real exam by helping you learn and assess your retention of the materials. These are true skill assessment exams with flash cards to help you learn and retain information while taking the exams. I recommend the practice exams included in this book and the exams available using the ExamGear software on the CD-ROM. In addition, you can purchase hundreds of additional ExamGear test questions from www.UnixEd.com to assess your knowledge of the material. I keep the questions up to date and relevant to the objectives. I don't recommend taking the Sun certification exams until you consistently pass these practice exams with a 95% or higher in all categories.

. Avoid using "brain dumps" available from various websites and newsgroups. Sun changes the questions too often for these types of practice exams to be useful, and they may be illegal. Sun knows that these exams and brain dumps are available, and Sun goes through a 13-step process to develop these exams and to prevent cheating. Too many users have written me to say that they thought they were prepared because they passed the exam simulators, only to find that the questions and answers were different on the actual exam. Your exam may not match that particular user's exam, and you'll obtain a false sense of readiness. You cannot pass these exams without understanding the material. Besides, brain dumps do not prepare you for the scenario-type questions you will see on the exam. And what good is the certification if you don't know the material? You'll never get through the job interview screening.

. Visit my website, www.UnixEd.com, for late-breaking changes and up-to-date study tips from other administrators who have taken the exam. In addition, through our Solaris Certification online forum, you can share your experiences with other Solaris administrators who are preparing for the exam, just like you. Use the forum to talk to others who have taken the exam and learn from those who have gone through the process.

In addition, this website provides up-to-date links to the official Sun certification websites.

During the Exam Session

The following generic exam-taking advice that you have heard for years applies when you take this exam:

. Take a deep breath and try to relax when you first sit down for your exam session. It is important to control the pressure you might (naturally) feel when taking exams.

. You will be provided scratch paper. Take a moment to write down any factual information and technical details you committed to short-term memory.

. Read the exam questions carefully. Reread each question to identify all relevant details. Many questions are scenarios that require careful reading of all the information and instruction screens. These displays have been put together to give you information relevant to the exam you are taking. Pay particular attention to questions that seem to have a lot of detail or that involve graphics. You may find that all answers are correct, but you may be asked to choose the best answer for that particular scenario.

. Tackle the questions in the order they are presented. Skipping around will not build your confidence, and the clock is always counting down.

. Do not rush, but also do not linger on difficult questions. The questions vary in degree of difficulty. Don't get flustered by a particularly difficult or verbose question. Note the time allotted and the number of questions on the exam you are taking. Make a rough calculation of how many minutes you can spend on each question, and use this to pace yourself through the exam.

. Take advantage of the fact that you can return to and review skipped or previously answered questions. Record the questions you cannot answer confidently, noting the relative difficulty of each question, on the scratch paper provided. After you have made it to the end of the exam, return to the more difficult questions.

. If session time remains after you have completed all the questions (and if you aren't too fatigued!), review your answers. As you check your answers, if one clearly stands out as incorrectly marked, change it. As for changing your answers otherwise, the general rule of thumb is don't! If you read the question carefully and completely the first time and you felt like you knew the right answer, you probably did. Do not second-guess yourself. If you are at all unsure, go with your first instinct.

If you have done your studying and you follow the preceding suggestions, you should do well. Good luck!

PART I: Exam Preparation

Chapter 1 The Solaris Network Environment
Chapter 2 Virtual File Systems, Swap Space, and Core Dumps
Chapter 3 Managing Storage Volumes
Chapter 4 Controlling Access and Configuring System Messaging
Chapter 5 Naming Services
Chapter 6 Solaris Zones
Chapter 7 Advanced Installation Procedures: JumpStart, Flash Archive, and PXE
Chapter 8 Advanced Installation Procedures: WAN Boot and Live Upgrade
Chapter 9 Administering ZFS File Systems


ONE

The Solaris Network Environment

Objectives

The following test objectives for Exam CX-310-202 are covered in this chapter:

. Control and monitor network interfaces including MAC addresses, IP addresses, network packets, and configure the IPv4 interfaces at boot time.

This chapter describes the files that are used to configure IPv4 network interfaces, how to start and stop these network interfaces, and how to test whether the interfaces are working correctly. It also discusses two methods of changing the system hostname: editing a number of system files and using the sys-unconfig command.

. Explain the client/server model, and enable/disable server processes.

This chapter describes how the client/server model functions in the Solaris 10 environment. The network services are started and managed by the Service Management Facility (SMF). This chapter describes how to manage network services as well as how to add new services to be managed by SMF.

Outline

Introduction
Client/Server Model
    Hosts
    IPv4 Addressing
    Planning for IPv4 Addressing
Network Interfaces
    Controlling and Monitoring an IPv4 Network Interface
    Configuring an IPv4 Network Interface
        The /lib/svc/method/net-physical File
        The /etc/hostname.<interface> File
        The /etc/inet/hosts File
    Changing the System Hostname
Network Maintenance
Network Services
    RPC Services
Summary
Key Terms
Apply Your Knowledge
    Exercises
    Exam Questions
    Answers to Exam Questions
Suggested Reading and Resources

Study Strategies

The following study strategies will help you prepare for the test:

. As you study this chapter, it's important that you practice using each command that is presented on a Solaris system. Practice is very important on these topics, and you should practice until you can repeat each procedure from memory. You should understand each command in this chapter and be prepared to match the command to the correct description.

. You should know all the terms listed in the "Key Terms" section near the end of this chapter. You should be prepared to match each term presented in this chapter with the correct definition.

. You should pay special attention to the section on network services, which has changed with the introduction of Solaris 10, and know how to convert services to use the Service Management Facility (SMF).

Introduction

This chapter covers the basics of the Solaris network environment. It does not go into too much detail because Sun provides a separate certification track for Solaris network administrators, but it does provide you with the fundamental information you need to get started managing a Solaris system in a networked environment. The topics discussed here include an overview of the client/server model, information on setting up IPv4 network interfaces, managing network services, and configuring the services that are started automatically at boot time.

Client/Server Model

Objective

. Explain the client/server model.

The client/server model describes the communication process between computers or programs. A client is a host or process that uses services from another host or program. A server is a host or process that provides services to a client. When the client makes a service request to the server, the server fulfills that request. For example, a server may provide disk space, windowing, or web services to a client. A server can provide and manage many different services for the client, and a client can also provide services to other client applications. Although a system can be both a server and a client, the model is more widely used across a network. Typical examples of client/server relationships are DNS and NFS. Both of these topics are described later in this book. The later section "RPC Services" describes specifically how the server responds to a client's request for services.

Hosts

If you are an experienced UNIX/Solaris user, you are no doubt familiar with the term host, which is often used as a synonym for computer or machine. From a TCP/IP perspective, only two types of entities exist on a network: routers and hosts. A server and a client are both hosts on the network, and each has a hostname. When a host initiates communication, it is called a sending host, or sender. For example, a host initiates communications when the user uses ping or sends an email message to another user. The host that is the target of the communication is called the receiving host, or recipient.
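The request/response exchange at the heart of the client/server model can be mimicked locally with named pipes, just to make the idea concrete. This is only a toy illustration of "client sends a request, server fulfills it"; it is not how Solaris network services are actually implemented.

```shell
#!/bin/sh
# Toy client/server exchange over named pipes: a background "server"
# process reads one request and writes back a reply; the "client" sends
# a request and reads the response.
request_service() {    # usage: request_service <request-string>
    dir=$(mktemp -d)
    mkfifo "$dir/req" "$dir/rep"
    # Server: wait for one request, then send a reply.
    ( read request < "$dir/req"
      echo "served: $request" > "$dir/rep" ) &
    echo "$1" > "$dir/req"       # client sends the service request
    read reply < "$dir/rep"      # client receives the response
    wait
    rm -r "$dir"
    echo "$reply"
}

request_service disk-space       # prints: served: disk-space
```

Real services replace the pipes with network sockets, so the client and server can run on different hosts.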

Each host has an Internet address and a hardware address that identify it to its peers on the network, and usually a hostname. These are described in Table 1.1.

Table 1.1 Host Information

Identity: Hostname
Description: Every system on the network usually has a unique hostname. Hostnames let users refer to any computer on the network by using a short, easily remembered name rather than the host's network IP address.

Identity: Internet address
Description: Each machine on a TCP/IP network has a 32-bit Internet address (or IP address) that identifies the machine to its peers on the network. This address must be unique on the network. If the host will participate on the Internet, this address must also be unique to the Internet. For this reason, IP addresses are assigned by special organizations known as regional Internet registries (RIRs).

Identity: Hardware address
Description: Each host on a network has a unique Ethernet address, also referred to as the media access control (MAC) address. The manufacturer physically assigns this address to the machine's network interface card(s). This address is unique worldwide—not just for the network to which it is connected.

IPv4 Addressing

In IPv4, each host on a TCP/IP network has a 32-bit network address—called the IP address—that must be unique for each host on the network. An IPv4 address is a sequence of 4 bytes and is written in the form of four decimal integers separated by periods (for example, 10.11.12.13). Each integer is 8 bits long and ranges from 0 to 255. An IPv4 address consists of two parts: a network ID, which is assigned by an RIR, and a host ID, which is assigned by the local administrator. Five classes of IPv4 addresses exist: A, B, C, D, and E. The first integer of the address (the 10 in 10.0.0.0) determines the address type and is referred to as its class. The IPv4 address space is the responsibility of the Internet Corporation for Assigned Names and Numbers (ICANN, www.icann.org). The overall responsibility for IP addresses, including the responsibility for allocation of IP ranges, belongs to the Internet Assigned Numbers Authority (IANA, www.iana.org).

NOTE: IPv6. Due to limited address space and other considerations of the IPv4 scheme, a revised IP protocol is gradually being made available. The protocol, named IPv6, has been designed to overcome the major limitations of the current approach. IPv6 is compatible with IPv4, but IPv6 makes it possible to assign many more unique Internet addresses and offers support for improved security and performance.
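Because the class is determined by the first octet (roughly: A up to 127, B 128 to 191, C 192 to 223, D 224 to 239 for multicast, E 240 to 255 reserved), it can be derived mechanically. The following portable shell sketch illustrates the classful boundaries; it is an illustration only, not a Solaris utility:

```shell
#!/bin/sh
# ipclass -- print the classful category of an IPv4 address, judged
# purely from its first decimal octet.
ipclass() {
    first=${1%%.*}                           # extract the first octet
    case $first in
        12[8-9]|1[3-8][0-9]|19[0-1]) echo B ;;   # 128-191
        19[2-9]|2[0-1][0-9]|22[0-3]) echo C ;;   # 192-223
        22[4-9]|23[0-9])             echo D ;;   # 224-239 (multicast)
        24[0-9]|25[0-5])             echo E ;;   # 240-255 (experimental)
        *)                           echo A ;;   # everything below 128
    esac
}

ipclass 10.11.12.13      # prints A
ipclass 172.16.5.9       # prints B
ipclass 192.168.1.106    # prints C
```

Classful addressing has long been superseded by CIDR on the Internet at large, but the exam objectives still expect you to recognize these ranges.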

Planning for IPv4 Addressing

The first step in planning for IPv4 addressing on a network is to determine how many IP addresses you need and whether the network will be connected to the Internet. For networks that will be connected to the Internet—and hence visible to the rest of the world—you need to obtain legal IP addresses. This is necessary because each host on a network must have a unique IP address. If the network won't be connected to the Internet, you could choose addresses in the 10.x.x.x, 172.16.x.x to 172.31.x.x, or 192.168.x.x range.

NOTE: Be careful with IP addresses. You should not arbitrarily assign network numbers to a network, even if you do not plan to attach your network to other existing TCP/IP networks. Instead, you might want to use the specially reserved IPv4 networks 10.x.x.x, 172.16.x.x to 172.31.x.x, or 192.168.x.x for networks that are not connected to the Internet. As your network grows, you might decide to connect it to other networks, and changing IP addresses at that time can be a great deal of work and can cause downtime.

Network Interfaces

A Sun system normally contains at least one network interface. When you add a network interface to a system, a number of files need to be configured in order to create the connection between the hardware and the software address assigned to the interface, to allow it to participate in a network environment. The following sections describe how to monitor, control, and configure an IPv4 network interface.

Controlling and Monitoring an IPv4 Network Interface

Objective

. Control and monitor network interfaces including MAC addresses, IP addresses, network packets, and configure the IPv4 interfaces at boot time.

As root, you can use the ifconfig -a command to display both the system's IP and MAC addresses, as in this example:

# ifconfig -a<cr>
lo0: flags=1000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        inet 127.0.0.1 netmask ff000000
eri0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
        inet 192.168.1.106 netmask ffffff00 broadcast 192.168.1.255
        ether 0:3:ba:1f:85:7b
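Whether an address falls in one of these reserved private ranges (10.0.0.0/8, 172.16.0.0 to 172.31.255.255, 192.168.0.0/16) can be checked by examining the leading octets. The following small shell sketch is an illustration only, not a Solaris tool:

```shell
#!/bin/sh
# is_private -- report whether an IPv4 address lies in one of the
# specially reserved (RFC 1918) private ranges.
is_private() {
    case $1 in
        10.*)                                   echo private ;;  # 10.x.x.x
        172.1[6-9].*|172.2[0-9].*|172.3[0-1].*) echo private ;;  # 172.16-172.31
        192.168.*)                              echo private ;;  # 192.168.x.x
        *)                                      echo public ;;
    esac
}

is_private 192.168.1.106    # prints private
is_private 172.31.255.1     # prints private
is_private 11.0.0.1         # prints public
```

Addresses reported as public here would need to be legally allocated before being used on an Internet-connected network.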

You can also retrieve the MAC address from a system by using the banner command at the OpenBoot prompt:

ok banner<cr>
Sun Fire V120 (UltraSPARC-IIe 548MHz), No Keyboard
OpenBoot 4.0, 1024 MB memory installed, Serial #52397435.
Ethernet address 0:3:ba:1f:85:7b, Host ID: 831f857b.

NOTE: Displaying a MAC address. If you enter the /sbin/ifconfig -a command as a nonprivileged user, the MAC address is not displayed. To display the MAC address, the root user must enter the ifconfig -a command.

You can mark an Ethernet interface as up or down by using the ifconfig command. Marking an interface as up allows it to communicate with other systems on the network. For example, to mark the eri0 interface as down, you use the following command:

# ifconfig eri0 down<cr>
# ifconfig -a<cr>
lo0: flags=1000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        inet 127.0.0.1 netmask ff000000
eri0: flags=1000842<BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
        inet 192.168.1.106 netmask ffffff00 broadcast 192.168.1.255
        ether 0:3:ba:1f:85:7b

Notice that the up flag is no longer present for the eri0 interface and also that the value of flags has changed to 1000842. To mark the interface as up, you use the following command:

# ifconfig eri0 up<cr>
# ifconfig -a<cr>
lo0: flags=1000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        inet 127.0.0.1 netmask ff000000
eri0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
        inet 192.168.1.106 netmask ffffff00 broadcast 192.168.1.255
        ether 0:3:ba:1f:85:7b

To determine whether another system can be contacted over the network, you use the ping command:

# ping sunfire1<cr>

If host sunfire1 is up, this message is displayed:

sunfire1 is alive

The message indicates that sunfire1 responded to the request and can be contacted. However, if sunfire1 is down or cannot receive the request (perhaps the network interface has been configured as down), you receive the following response:

no answer from sunfire1

NOTE: Names to addresses. The sunfire1 is alive message assumes that the host sunfire1 can be resolved either through an entry in the /etc/hosts file or by using DNS. If you do not know the hostname, you can use the ping command with the IP address instead of the hostname.

In order for a ping request to be successful, the following conditions must be met:

. The interface must be plumbed. This is automatically carried out at boot time by the script /lib/svc/method/net-physical.

. The interface must be configured. An address must be assigned to a network interface. This is done via the ifconfig command; it is carried out initially when you install the Solaris operating environment, as discussed in the section "Configuring an IPv4 Network Interface," later in this chapter.

. The interface must be up. The network interface can communicate only when it is marked as up.

. The interface must be physically connected. The network interface must be connected to the network, using the appropriate cable.

. The interface must have valid routes configured. The routing provides the directions to the destination computer when each computer exists on a different network. This is an advanced networking topic that is not covered on the exam. A separate Solaris certification exam, "Solaris Network Administrator," deals with routing in detail, but it is mentioned here for completeness.

You can also use the /usr/sbin/snoop command to capture and inspect network packets to observe network communication between systems. For example, to view data transmissions between systemA and systemB, use the following command:

# snoop systemA systemB<cr>

The system responds with one line of output for each packet on the network:

Using device /dev/eri (promiscuous mode)
192.168.1.27 -> sunfire1     TELNET C port=64311
sunfire1 -> 192.168.1.27     TELNET R port=64311

The snoop command can be run only by the root user. snoop continues to display information until you press Ctrl+C to stop it. Table 1.2 lists some of the more common options used with the snoop command.

EXAM ALERT: Although snoop is more of a networking topic, you should be familiar with its options, because you will see questions on the exam related to the functionality of these options.

Table 1.2 snoop Options

Option: -a
Description: Listens to packets on /dev/audio. This option enables audible clicks, which can notify you of any network traffic.

Option: -v
Description: Detailed verbose mode. Prints packet headers with lots of detail.

Option: -V
Description: Verbose summary mode. More than one line is printed for each packet, but the output is less than what is displayed with the -v option.

Option: -q
Description: Quiet mode. The packet count is not displayed.

Option: -o <filename>
Description: Saves the captured packets to a file.

Option: -i <filename>
Description: Displays packets that were previously captured in a file rather than from the network interface.

Option: -d <devicename>
Description: Receives packets from the network using the interface specified by <devicename>.

Expressions can also be supplied to the snoop command to filter the information. The following example uses the snoop command to enable audible clicks and to display only DHCP traffic:

# snoop -a dhcp<cr>

The system displays the following:

Using device /dev/eri (promiscuous mode)
192.168.1.250 -> BROADCAST    DHCP/BOOTP DHCPDISCOVER
192.168.1.250 -> BROADCAST    DHCP/BOOTP DHCPDISCOVER
192.168.1.250 -> BROADCAST    DHCP/BOOTP DHCPOFFER

Configuring an IPv4 Network Interface

When you install the Solaris operating environment, you configure a network interface as part of the installation program. You can configure additional interfaces at system boot time, or you can modify the original interface, by having an understanding of only three files:
. /etc/hostname.<interface>
. /lib/svc/method/net-physical
. /etc/inet/hosts
Each of these is discussed in the following sections.

The /etc/hostname.<interface> File

The /etc/hostname.<interface> file defines the network interfaces on the local host. At least one /etc/hostname.<interface> file should exist on the local machine. The Solaris installation program creates this file for you. In the filename, <interface> is replaced by the device name of the primary network interface. An example of such a file is /etc/hostname.eri0, which refers to the configuration file for the first eri network interface. (Interface numbering starts with 0, not 1. Hence, eri1 would be the second eri interface on the system.)

This file contains only one entry: the hostname or IP address associated with the network interface. For example, suppose eri0 is the primary network interface for a machine called system1. The file would be called /etc/hostname.eri0, and the file would contain the entry system1.

The /lib/svc/method/net-physical File

The svc:/network/physical:default service calls the /lib/svc/method/net-physical method script. It is one of the startup scripts that runs each time you boot the system. The /lib/svc/method/net-physical method script uses the ifconfig utility to configure each network interface that has an IP address assigned to it by searching for files named hostname.<interface> in the /etc directory. For each hostname.<interface> file, the script runs the ifconfig command with the plumb option. This enables the kernel to communicate with the named network interface and sets up the streams needed by IP to use the device.

NOTE A new startup script: The file /lib/svc/method/net-physical is new in the Solaris 10 operating environment. If you're familiar with releases prior to Solaris 10, you'll recognize that this script performs the same functions as the file /etc/rcS.d/S30network.sh in previous releases, but it is now part of the Service Management Facility (SMF).
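A rough sketch of the search-and-configure loop performed by the method script follows. This is an illustration under stated assumptions, not the real script: it scans a scratch directory instead of /etc and prints the ifconfig commands it would run rather than executing them, and plumb_interfaces is a hypothetical name.

```shell
#!/bin/sh
# Sketch: find each hostname.<interface> file and show how the
# matching interface would be plumbed and brought up.
plumb_interfaces() {
    etcdir=$1
    for f in "$etcdir"/hostname.*; do
        [ -f "$f" ] || continue
        iface=${f##*/hostname.}    # e.g. hostname.eri0 -> eri0
        name=$(cat "$f")           # hostname or IP stored in the file
        echo "ifconfig $iface plumb"
        echo "ifconfig $iface $name netmask + broadcast + up"
    done
}

demo=$(mktemp -d)
echo system1 > "$demo/hostname.eri0"
plumb_interfaces "$demo"
rm -rf "$demo"
```

The key point the sketch illustrates is that the interface name is taken from the filename, while the hostname or IP address is taken from the file's single-line contents.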

The /etc/inet/hosts File

The hosts database contains details of the machines on your network. The /etc/inet/hosts file contains the hostnames and IP addresses of the primary network interface and any other network addresses the machine must know about. For compatibility with Berkeley Software Distribution (BSD)-based UNIX operating systems, the file /etc/hosts is a symbolic link to /etc/inet/hosts.

When a user enters a command such as ping xena, the system needs to know how to get to the host named xena. The /etc/inet/hosts file provides a cross-reference to look up and find xena's network IP address. You can use the /etc/inet/hosts file with other hosts databases, such as DNS, NIS, NIS+, and LDAP.

Each line in the /etc/inet/hosts file uses the following format:
<address> <hostname> <nickname> [#comment]
Each field in this syntax is described in Table 1.3.

Table 1.3 The /etc/inet/hosts File Format
Field        Description
<address>    The IPv4 address for each interface the local host must know about.
<hostname>   The hostname assigned to the machine at setup and the hostnames assigned to additional network interfaces that the local host must know about.
<nickname>   An optional field that contains a nickname or an alias for the host. More than one nickname can exist.
[# comment]  An optional field in which you can include a comment.

When you run the Solaris installation program on a system, it sets up the initial /etc/inet/hosts file. This file contains the minimum entries that the local host requires: its loopback address, its IP address, and its hostname. The IP address 127.0.0.1 is the loopback address, the reserved network interface used by the local machine to allow interprocess communication so that it sends packets to itself. The operating system uses the loopback address for configuration and testing. Every machine on a TCP/IP network must have an entry for the localhost and must use the IP address 127.0.0.1.

For example, the Solaris installation program might create the following entries in the /etc/inet/hosts file for a system called xena:
127.0.0.1    localhost          #loopback address
192.9.200.3  xena     loghost   #hostname

The following Step By Step demonstrates how to configure a network interface from the command line. In this exercise, we'll configure the primary network interface (eri0) through the ifconfig command.
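The lookup against the <address> <hostname> <nickname> [#comment] format can be mimicked with a short awk sketch. hosts_lookup is a hypothetical helper written for illustration; the sample data mirrors the xena example.

```shell
#!/bin/sh
# Resolve a hostname or nickname to its IPv4 address in data using
# the /etc/inet/hosts format: <address> <hostname> <nickname> [#comment]
hosts_lookup() {
    awk -v target="$1" '
        { sub(/#.*/, "") }                    # strip trailing comments
        { for (i = 2; i <= NF; i++)
              if ($i == target) { print $1; exit } }'
}

hosts_data='127.0.0.1    localhost          #loopback address
192.9.200.3  xena   loghost   #hostname'

printf '%s\n' "$hosts_data" | hosts_lookup loghost
```

Note that aliases resolve to the same address as the canonical hostname, which is exactly how the loghost nickname behaves on a real system.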

The goal is connectivity with other systems on the network, with an IP address of 192.168.1.111 and a network mask of 255.255.255.0. The hostname is set to achilles.

STEP BY STEP 1.1 Configuring an IPv4 Network Interface
1. Take the network interface down using the ifconfig command:
# ifconfig eri0 down<cr>
Display the current network interface configuration using the ifconfig command and make sure the interface is down:
# ifconfig -a<cr>
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232\
index 1 inet 127.0.0.1 netmask ff000000
eri0: flags=1000842<BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
inet 192.168.1.30 netmask ffffff00 broadcast 192.168.1.255
ether 0:3:ba:1f:85:7b
2. Edit the file /etc/inet/hosts and add the following entry:
192.168.1.111 achilles
3. Edit the file /etc/hostname.eri0 to contain the following entry:
achilles
4. Edit the file /etc/inet/netmasks and add the following entry:
192.168.1.0 255.255.255.0
The preconfiguration of the interface is now complete.
5. We can now use the ifconfig command to initialize the interface and make it operational:
# ifconfig eri0 achilles netmask + broadcast + up<cr>
6. Verify that the interface is now operational and correctly configured using the ifconfig -a command:
# ifconfig -a<cr>
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232\
index 1 inet 127.0.0.1 netmask ff000000
eri0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
inet 192.168.1.111 netmask ffffff00 broadcast 192.168.1.255
ether 0:3:ba:1f:85:7b
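The + arguments make ifconfig look up the netmask in /etc/inet/netmasks and derive the broadcast address itself. The derivation (broadcast = address OR the complement of the netmask, per octet) can be sketched in shell; broadcast_addr is a hypothetical helper for illustration.

```shell
#!/bin/sh
# Broadcast derivation: broadcast = address OR complement-of-netmask,
# per octet. For 192.168.1.111 with mask 255.255.255.0 this yields
# 192.168.1.255, matching the ifconfig output shown above.
broadcast_addr() {
    ip=$1 mask=$2
    oldIFS=$IFS
    IFS=.
    set -- $ip;   i1=$1 i2=$2 i3=$3 i4=$4
    set -- $mask; m1=$1 m2=$2 m3=$3 m4=$4
    IFS=$oldIFS
    echo "$((i1 | (255 - m1))).$((i2 | (255 - m2))).$((i3 | (255 - m3))).$((i4 | (255 - m4)))"
}

broadcast_addr 192.168.1.111 255.255.255.0
```

With a subnetted mask such as 255.255.248.0 the result is less obvious by eye, which is why letting ifconfig consult /etc/inet/netmasks is the safer habit.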

EXAM ALERT Use the plus (+): Using the + option to the ifconfig command causes a lookup in the /etc/inet/netmasks file to determine the correct values, based on the network mask value that has been inserted for the relevant network. You must make sure the /etc/inet/netmasks file is accurate for this to work correctly. You can always specify the full values to the ifconfig command, but doing so requires that the broadcast address is calculated manually, which can be difficult when subnetworks are used.

Changing the System Hostname

The system hostname can be changed temporarily or permanently. Use the hostname command with an argument to temporarily change the hostname. For example, the following command changes the system hostname to zeus:
# hostname zeus<cr>
Verify the hostname by typing the hostname command with no argument:
# hostname<cr>
The system responds with the current hostname:
zeus
When the system is rebooted, the system changes back to its original hostname. You can also change the hostname by running the command uname -S and supplying a new hostname, but if you do so, the change again does not persist across reboots.

There are two methods available for permanently changing the system hostname. The first is to edit the necessary files manually and reboot the system. Beginning with Solaris 10 08/07, the system's hostname is contained within three files on a Solaris system, and it is necessary to modify all these files in order to successfully change the hostname of a system manually. These files need to be changed:
. /etc/nodename: This file contains the local source for a system name. This is the location where the system hostname is set. In other words, it contains the system's hostname. The only information contained within this file is the name of the system (for example, sunfire1). The command uname -n, which prints the system's node name, looks in this file for the information.
. /etc/hostname.<interface>: This file defines the network interfaces on the local host and is discussed earlier in this chapter, in the section "The /etc/hostname.<interface> File."
. /etc/inet/hosts: The hosts file contains details of the machines on your network and is discussed earlier in this chapter, in the section "The /etc/inet/hosts File."
Having changed the contents of the files just listed, the system needs to be rebooted to implement the new hostname.

Before Solaris 10 08/07, you also needed to modify the /etc/inet/ipnodes file when changing the system hostname. This file contained details of the machines on your network and included both IPv4 and IPv6 addresses. Since Solaris 10 08/07, the /etc/inet/ipnodes file is replaced with a symbolic link of the same name to /etc/inet/hosts. For backward compatibility, it is not necessary to maintain IPv4 entries in both the /etc/inet/hosts and /etc/inet/ipnodes files.

The second method for changing the hostname is to use the sys-unconfig command. The result of running this command is the removal of the system identification details, such as hostname, IP address, default router, subnet mask, naming service configuration, time zone, and the root password. When the command completes, the system automatically shuts down. To complete the process, boot the system. You are presented with a number of configuration questions, all very similar to when you perform an initial installation of the Solaris 10 Operating Environment.

Table 1.4 lists other network-related files that are worth noting but that are not required for configuring the network interface.

Table 1.4 Miscellaneous Network Configuration Files
File                Description
/etc/defaultdomain  This file contains one entry: the fully qualified domain name of the administrative domain to which the local host's network belongs. Without an entry in this file, the sendmail service displays the following message on the console:
                    sunfire console login: Mar 10 18:54:29 sunfire
                    sendmail[530]: My unqualified host name (sunfire)
                    unknown; sleeping for retry
/etc/defaultrouter  This file can contain an entry for each router that is directly connected to the network. The entry should be the name of the network interface that functions as a router between networks. The presence of the /etc/defaultrouter file indicates that the system is configured to support static routing. When hostnames are used as entries in this file, corresponding hostnames need to be present in the /etc/inet/hosts file, because no name service is running when the /etc/defaultrouter file is read at boot time.
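The manual hostname change (edit /etc/nodename, /etc/hostname.<interface>, and /etc/inet/hosts, then reboot) can be sketched as follows. The sketch deliberately edits copies in a scratch directory rather than the live /etc files, and change_hostname is a hypothetical helper name.

```shell
#!/bin/sh
# Sketch of the manual hostname change: write the new name into
# nodename and hostname.<interface>, substitute it in the hosts file,
# then (on a real system) reboot. Scratch-directory copies stand in
# for the live /etc files here.
change_hostname() {
    root=$1 old=$2 new=$3 iface=$4
    echo "$new" > "$root/nodename"
    echo "$new" > "$root/hostname.$iface"
    sed "s/$old/$new/g" "$root/inet_hosts" > "$root/inet_hosts.new" &&
        mv "$root/inet_hosts.new" "$root/inet_hosts"
}

demo=$(mktemp -d)
echo sunfire1 > "$demo/nodename"
echo sunfire1 > "$demo/hostname.eri0"
echo '192.168.1.30 sunfire1 loghost' > "$demo/inet_hosts"
change_hostname "$demo" sunfire1 zeus eri0
cat "$demo/nodename" "$demo/inet_hosts"
rm -rf "$demo"
```

Keeping all three files in step is the whole point: missing any one of them leaves the system with an inconsistent identity after the reboot.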

Table 1.4 Miscellaneous Network Configuration Files (continued)
File                Description
/etc/inet/netmasks  You need to edit this file only if you have set up subnetting on your network. The netmasks database consists of a list of networks and their associated subnet masks.
/etc/inet/ipnodes   Although this file had a role in previous versions of Solaris 10, /etc/inet/ipnodes is no longer functional in releases after Solaris 10 11/06. It is now simply a link pointing to the /etc/inet/hosts file.

Network Services

Objective:
. Explain the client-server model and enable/disable server processes.

In previous releases of Solaris, the inetd network daemon was responsible for running network services on demand and was configured by editing the file /etc/inetd.conf. As of Solaris 10, this has all changed. The services that were previously configured using this file are now configured and managed by the Service Management Facility (SMF). The default /etc/inetd.conf file now contains only a few entries, unlike in previous versions of Solaris where all the network services were listed.

The inetd daemon can no longer be run manually from the command line outside of SMF, nor can it be instructed to re-read its configuration file, as in previous releases of Solaris. If you attempt to run inetd manually, you receive an error message. Changes or modifications to the configuration of network services are done using the inetadm or svccfg commands. A new command, inetadm, is used to carry out the management of these network services. This topic is described fully in Chapter 3 of the Solaris 10 System Administration Exam Prep, Part I book.

NOTE The /etc/inetd.conf file: You might need to make an entry in the /etc/inetd.conf file. For example, you might have a service that you want to have automatically started by the inetd daemon. The /etc/inetd.conf file may still be used as a mechanism for adding new (third-party additional software) services, but in order to make use of these services, they must be converted to run under SMF. This is carried out using the inetconv command. When you run this command with no options, it automatically reads the /etc/inetd.conf file and converts any entries to services that can run under SMF. Make the entry in /etc/inetd.conf, but make sure that you refresh the inetd daemon after making changes to its configuration file. The following command instructs inetd to reread its configuration data:
svcadm refresh inetd<cr>

To see the network services being managed by SMF, enter the inetadm command with no options:
# inetadm<cr>
ENABLED   STATE      FMRI
enabled   online     svc:/network/rpc/gss:default
enabled   online     svc:/network/rpc/mdcomm:default
enabled   online     svc:/network/rpc/meta:default
enabled   online     svc:/network/rpc/metamed:default
enabled   online     svc:/network/rpc/metamh:default
disabled  disabled   svc:/network/rpc/rex:default
enabled   online     svc:/network/rpc/rstat:default
enabled   online     svc:/network/rpc/rusers:default
disabled  disabled   svc:/network/rpc/spray:default
disabled  disabled   svc:/network/rpc/wall:default
disabled  disabled   svc:/network/tname:default
enabled   online     svc:/network/security/ktkt_warn:default
enabled   online     svc:/network/telnet:default
enabled   online     svc:/network/nfs/rquota:default
disabled  disabled   svc:/network/chargen:dgram
disabled  disabled   svc:/network/chargen:stream
disabled  disabled   svc:/network/daytime:dgram
disabled  disabled   svc:/network/daytime:stream
disabled  disabled   svc:/network/discard:dgram
disabled  disabled   svc:/network/discard:stream
disabled  disabled   svc:/network/echo:dgram
disabled  disabled   svc:/network/echo:stream
disabled  disabled   svc:/network/time:dgram
disabled  disabled   svc:/network/time:stream
enabled   online     svc:/network/ftp:default
disabled  disabled   svc:/network/comsat:default
enabled   online     svc:/network/finger:default
disabled  disabled   svc:/network/login:eklogin
disabled  disabled   svc:/network/login:klogin
enabled   online     svc:/network/login:rlogin
disabled  disabled   svc:/network/rexec:default
enabled   online     svc:/network/shell:default
disabled  disabled   svc:/network/shell:kshell
disabled  disabled   svc:/network/talk:default
enabled   online     svc:/application/font/stfsloader:default
enabled   online     svc:/application/x11/xfs:default
enabled   online     svc:/network/rpc/smserver:default
disabled  disabled   svc:/network/rpc/ocfserv:default
enabled   offline    svc:/application/print/rfc1179:default
disabled  disabled   svc:/platform/sun4u/dcs:default
disabled  disabled   svc:/network/uucp:default
disabled  disabled   svc:/network/security/krb5_prop:default
disabled  disabled   svc:/network/apocd/udp:default
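Output in this ENABLED/STATE/FMRI shape is easy to post-process. The sketch below tallies services by their ENABLED field; the three sample lines are a hypothetical excerpt, and in practice you would pipe inetadm straight into the function. count_states is an invented helper name.

```shell
#!/bin/sh
# Tally inetadm-style output by its ENABLED column (first field),
# skipping the header line.
count_states() {
    awk 'NR > 1 { n[$1]++ }
         END { printf "enabled=%d disabled=%d\n", n["enabled"], n["disabled"] }'
}

# Hypothetical excerpt; on a live system: inetadm | count_states
sample='ENABLED  STATE     FMRI
enabled  online    svc:/network/telnet:default
disabled disabled  svc:/network/rpc/spray:default
enabled  online    svc:/network/ftp:default'

printf '%s\n' "$sample" | count_states
```

A quick count like this is handy when hardening a system: the enabled number should shrink as you disable services you do not need.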

enabled   online     svc:/network/rpc-100235_1/rpc_ticotsord:default
enabled   online     svc:/network/rpc-100083_1/rpc_tcp:default
enabled   online     svc:/network/rpc-100068_2-5/rpc_udp:default
enabled   online     svc:/network/tftp/udp6:default

The preceding code shows, for example, that the spray service is in the disabled state. To enable this service, use the inetadm command with the -e option:
# inetadm -e spray<cr>
Now you can see that the service has been enabled and is available for use:
# inetadm | grep spray<cr>
enabled   online     svc:/network/rpc/spray:default
To disable the spray service, use the inetadm command with the -d option:
# inetadm -d spray<cr>
Check again to verify that the service is now disabled:
# inetadm | grep spray<cr>
disabled  disabled   svc:/network/rpc/spray:default

NOTE Other commands work too: You are not limited to the inetadm command to view and control legacy network services. You can also use the svcadm command to disable network services. For example, you could disable spray by typing svcadm disable svc:/network/rpc/spray:default. The svcs -a command can also be used to view the status, and the svcadm command can control legacy network services as well.

You can also list the properties and values of a selected network service using the -l option to the inetadm command. The following code lists the properties of the spray service:
# inetadm -l spray<cr>
SCOPE    NAME=VALUE
         name="sprayd"
         endpoint_type="tli"
         proto="datagram_v"
         isrpc=TRUE
         rpc_low_version=1
         rpc_high_version=1
         wait=TRUE
         exec="/usr/lib/netsvc/spray/rpc.sprayd"
         user="root"

default  bind_addr=""
default  bind_fail_max=-1
default  bind_fail_interval=-1
default  max_con_rate=-1
default  max_copies=-1
default  con_rate_offline=-1
default  failrate_cnt=40
default  failrate_interval=60
default  inherit_env=TRUE
default  tcp_trace=FALSE
default  tcp_wrappers=FALSE

Each network service uses a port that represents an address space and is reserved for that service. Systems communicate with each other through these ports. Each network service uses a well-known port number that is used by all the hosts on the network. Keeping track of these ports can be difficult, especially on a network that supports several network services. Well-known ports are listed in the /etc/services file, which is a symbolic link to /etc/inet/services. The following are a few entries from the /etc/services file:
chargen   19/tcp   ttytst source
chargen   19/udp   ttytst source
ftp-data  20/tcp
ftp       21/tcp
From these entries, you can see that the chargen service uses port 19 and uses both TCP and UDP protocols. It also has aliases assigned.

RPC Services

EXAM ALERT You'll see several questions related to RPC services on the exam. This section summarizes the information you need to know for the exam. Make sure that you understand the two types of RPC services and how the client interacts with the server when requesting RPC services.

Solaris utilizes a client/server model known as remote procedure calls (RPC). With an RPC service, a client connects to a special server process, rpcbind, which is a "well-known service." When you boot the Solaris 10 OS, the /lib/svc/method/rpc-bind startup script initializes the rpcbind service. After the system starts up, the rpcbind daemon starts listening at port 111. The port number used by the rpcbind daemon is listed in the /etc/inet/services file.
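Looking up a service's port and protocol in /etc/services-format data can be sketched with awk. service_port is a hypothetical helper; the sample entries are the ones shown above.

```shell
#!/bin/sh
# Look up a service's port and protocol in /etc/services-format data:
# <name> <port>/<protocol> [aliases...]
service_port() {
    awk -v svc="$1" '$1 == svc { split($2, pp, "/"); print pp[1], pp[2] }'
}

services='chargen   19/tcp   ttytst source
chargen   19/udp   ttytst source
ftp-data  20/tcp
ftp       21/tcp'

printf '%s\n' "$services" | service_port ftp
```

Run against chargen instead of ftp, the helper prints two lines, one per protocol, which matches the point above that a single service name can be registered for both TCP and UDP.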

RPC services are services developed using a set of utilities developed by Sun Microsystems, Inc. The developer assigns them a unique program number when they are written. Because RPC services are started on available ports above 32768, typically they are not assigned to well-known ports. When a remote system (client) makes an RPC call to a given program number on a server, it must first contact the rpcbind service on the server to obtain the port address. The client must do this before it can send the RPC requests. The rpcbind process receives all RPC-based client application connection requests and sends the client the appropriate server port number: when a client requests a service, the rpcbind process returns the port number of the requested service to the client. The client then generates a new request using the port number it just received for the requested service.

There are two types of RPC services:
. Services that start by default at system boot time (such as mountd)
. Services that do not start automatically at boot and must start on demand (such as sprayd)

RPC services that are started at bootup are started via their individual startup scripts. An example of an RPC service is the mountd daemon, which is started automatically by the svc:/network/nfs/server service. It registers its current port assignment and program number with the rpcbind process during boot. For example, mountd is listed in the /etc/rpc file as follows:
mountd 100005 mount showmount

Some RPC services are started on demand. The rpcbind daemon is started via its startup script, and during boot, rpcbind registers port numbers associated with each RPC service listed in the /etc/rpc file. The sprayd service is listed in the /etc/rpc file. Here's how the process takes place:
1. A user on a remote system, sysA (the client), issues a spray command to sysB (the server).
2. The spray request is initially addressed to port 111 and contains the program number of the sprayd service.
3. The rpcbind daemon on sysB reads the program number and determines that the request is for the sprayd service. The rpcbind daemon returns the current port number of the sprayd service to sysA.
4. sysA sends a second request to the port number of the sprayd service on sysB.
5. The inetd daemon receives the request. The rpc.sprayd daemon then takes over the spray session's communication.
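The name-to-program-number step in this exchange can be sketched against /etc/rpc-format data. rpc_number is a hypothetical helper; the mountd line is the one shown above, and the sprayd line (program 100012) is an assumption added for illustration.

```shell
#!/bin/sh
# Resolve an RPC program number from /etc/rpc-format data:
# <name> <program-number> [aliases...]
# Matching by name or alias loosely mirrors the lookup a client
# performs before asking rpcbind (port 111) for the current port.
rpc_number() {
    awk -v target="$1" '
        { for (i = 1; i <= NF; i++)
              if (i != 2 && $i == target) { print $2; exit } }'
}

rpc_data='mountd 100005 mount showmount
sprayd 100012 spray'

printf '%s\n' "$rpc_data" | rpc_number showmount
```

Notice that an alias such as showmount resolves to the same program number as the canonical name mountd, which is why either name works with tools like rpcinfo.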

The mountd daemon has a program number of 100005 and is also known as mount and showmount. You use the rpcinfo utility with the -p option to list registered RPC programs running on a system. For example, you can check on processes on another system like this:
# rpcinfo -p 192.168.1.21<cr>
The system responds with a list of all the registered RPC services found running on that system:
program 100005 vers 1 proto udp port 32784 service mountd
The output displays the program number, version, protocol, port, and service name. One of them in this example is the mountd service.

You can also use rpcinfo to unregister an RPC program. When you use rpcinfo with the -d option, you can delete registration for a service. For example, if sprayd is running on the local system, you can unregister and disable it:
# rpcinfo -d sprayd 1<cr>
The sprayd service would be unregistered from RPC. You could restart the sprayd service by issuing a restart command using the svcadm command:
# svcadm restart spray<cr>
This causes the spray service to restart and automatically re-register the RPC program associated with the spray service.

Network Maintenance

Solaris provides several network commands that you can use to check and troubleshoot a network:
. ping: ping stands for packet Internet groper. As described earlier in this chapter, the ping command sends an ICMP packet to another host to test its network status. The options to the command allow continuous packets or a specified number of packets to be sent as well as different sizes of packets. The remote system sends an ICMP packet back to the originating host if the ping command succeeds. If no packet is received from the remote system, it is deemed to be down, and a message is returned to the calling host.
. snoop: As described earlier in this chapter, the snoop command captures and inspects network packets. Captured packets can be displayed as they are received or saved into a file to be analyzed later, with each entry being displayed in single-line summary form or multiline verbose form. snoop can produce large amounts of information.
. netstat: The netstat command displays network status information. You can see the status of the network interface, monitor how many packets are passing through the interface, and monitor how many errors are occurring. This command is used extensively in identifying overloaded networks where the packet collision rate would be much higher than expected.
. ifconfig: The ifconfig command can be used as described earlier to check the status of the network interface.

EXAM ALERT You'll see more than one question on the exam about using the snoop command to troubleshoot network connectivity problems.

Each of the commands listed here is demonstrated in Step By Step 1.2.

STEP BY STEP 1.2 Verifying That a Network Is Operational
1. Check the network connection to another system by typing the following:
# ping <options> <ip-address><cr>
For example, to check the network between systemA and systemB, type ping systemB from systemA. If the check is successful, the remote system replies with this:
systemB is alive
If the network is not active, you get this message:
no answer from systemB
If you get this negative response, check your cable and make sure that both the local system and the remote system are configured properly. It could also be that the network interface is not marked as up.
2. Use the snoop utility to determine what information is flowing between systems. The snoop utility can show what actually happens when one system sends a ping to another system. The following example shows network traffic being monitored between two hosts, namely 192.168.1.106 and 192.168.1.21:
# snoop 192.168.1.106 192.168.1.21<cr>

The system responds with one line of output for each packet on the network:
Using device /dev/hme (promiscuous mode)
192.168.1.106 -> 192.168.1.21  ICMP Echo request (ID: 2677 Sequence number: 0)
192.168.1.21 -> 192.168.1.106  ICMP Echo reply (ID: 2677 Sequence number: 0)
When you are finished viewing information from snoop, press Ctrl+C to quit.

NOTE The -d option: On a system with multiple network interfaces, use the -d option with snoop to specify the network device you want to watch. For example, to watch the eri0 interface only, type
# snoop -d eri0 192.168.1.106 192.168.1.21<cr>

3. Check for network traffic by typing the following:
# netstat -i 5<cr>
The system responds with this:
         input    eri0      output             input   (Total)   output
packets  errs   packets  errs  colls   packets  errs   packets  errs  colls
95218    189    49983    1     0       218706   3      123677   3     0
0        0      0        0     0       3        0      3        0     0
0        0      0        0     0       4        0      4        0     0
1        0      1        0     0       144      1      143      0     0
0        0      0        0     0       256      0      256      0     0
0        0      0        0     0       95       0      95       0     0
0        0      0        0     0       1171     0      1171     0     0
The netstat command is used to monitor the system's TCP/IP network activity. The -i option shows the state of the network interface used for TCP/IP traffic. The last option, in this case 5 seconds, reissues the netstat command every 5 seconds to get a good sampling of network activity, with each line showing the activity since the last display. You should ignore the first line of output, as this shows the overall activity since the system was last booted. netstat can provide some basic data about how much and what kind of network activity is happening. You can press Ctrl+C to break out of the netstat command.

4. Examine the errs column to see if a large number of errors occurred. To calculate the input packet error rate, divide the number of input errors by the total number of input packets. If the input error rate is high (more than 25%), the host might be dropping packets because of transmission problems. Transmission problems can be caused by other hardware on the network and by heavy traffic and low-level hardware problems.
5. Look in the colls column to see if a large number of collisions occurred. To calculate the network collision rate, divide the number of output collisions (output colls) by the number of output packets. A network-wide collision rate greater than 10% can indicate an overloaded network, a poorly configured network, or hardware problems.
6. Type ping -sRv <hostname> from the client to determine how long it takes a packet to make a round-trip on the network. Issue the ping command twice, and ignore the first set of results. If the round-trip takes more than a few milliseconds, the routers on the network are slow or the network is very busy. If the response time (in milliseconds) from one host is not what you expect, you should investigate that host. The ping -sRv command also displays packet losses. If you suspect a physical problem, you can use ping -sRv to find the response times of several hosts on the network. Routers can drop packets, forcing retransmissions and causing degraded performance.
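The two ratios in steps 4 and 5 are simple divisions. A small helper makes the arithmetic explicit; rate_pct is a hypothetical name, and the counter values fed in below are placeholders for illustration.

```shell
#!/bin/sh
# Health ratios from netstat -i counters:
#   input error rate = input errs / input packets
#   collision rate   = output colls / output packets
# (expressed as percentages, one decimal place)
rate_pct() {
    awk -v n="$1" -v d="$2" 'BEGIN { printf "%.1f\n", (d ? 100 * n / d : 0) }'
}

echo "input error rate: $(rate_pct 189 95218)%"
echo "collision rate:   $(rate_pct 50 49983)%"
```

Both sample ratios come out well under the 25% and 10% thresholds quoted above, so counters like these would not, on their own, indicate a network problem.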

Summary

Although networking is a topic that could consume many chapters in this book, the fundamentals that you need to know to be able to manage a Solaris system on the network are described here. All the concepts that you need to know for the Sun Certified System Administrator for the Solaris 10 Operating Environment exam (CX-310-202) are described. After reading this chapter, you should understand how to configure and manage network services in Solaris 10. Some new commands were introduced, specifically inetadm and inetconv.

In addition, this chapter discussed some of the network-related commands and utilities that you can use for monitoring and maintaining the network. In a networked environment, system performance depends on how well you've maintained your network. An overloaded network can disguise itself as a slow system and can even cause downtime. You should monitor your network continuously. You need to know how the network looks when things are running well so that you know what to look for when the network is performing poorly. The network commands described in this chapter only report numbers; you're the one who decides whether these numbers are acceptable for your environment. As stated earlier, practice and experience will help you excel at system administration. The same holds true for network administration.

Chapter 2, "Virtual File Systems, Swap Space, and Core Dumps," describes how to manage swap space, configure core and crash dump files, and use NFS to share file systems across a network. You'll also learn how to configure the automounter for use with AutoFS.

Key Terms
. Client/server model
. Host
. Hostname
. ICMP
. IP address
. MAC address
. Network interface
. Network mask
. Network service
. Packet
. Router

. Remote Procedure Calls (RPC)
. Service Management Facility (SMF)

Apply Your Knowledge

Exercises

The following exercises require that you have two hosts connected via an Ethernet network, one named hostA and the other named hostB.

1.1 Obtaining Network Information

In this exercise, you'll use the various network commands and utilities to obtain information about your system and network.
Estimated time: 15 minutes
1. Log in as root on hostA. Make sure you have an entry in your /etc/inet/hosts file for hostB.
2. Use ping to send ICMP echo requests from hostA to hostB:
# ping hostB<cr>
3. On hostA, use the ifconfig command to display information about your network interface:
# ifconfig -a<cr>
lo0: flags=1000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1\
inet 127.0.0.1 netmask ff000000
eri0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,DHCP,IPv4> mtu 1500 index 2\
inet 192.168.1.106 netmask ffffff00 broadcast 192.168.1.255
ether 0:3:ba:1f:85:7b
The ifconfig utility shows that the Ethernet address of the eri0 interface is 0:3:ba:1f:85:7b. The first half of the address is generally specific to the manufacturer; in this case, 0:3:ba is Sun Microsystems. The last half of the address, in this case 1f:85:7b, is unique for every system.
4. As root, use the rpcinfo utility to list the registered RPC programs:
# rpcinfo<cr>
Look for the sprayd service on your system:
# rpcinfo | grep sprayd<cr>
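The manufacturer/serial split of the Ethernet address described in step 3 can be sketched as follows; mac_halves is a hypothetical helper written for illustration.

```shell
#!/bin/sh
# Split a colon-separated Ethernet (MAC) address into its
# manufacturer half (OUI) and its system-specific half.
mac_halves() {
    echo "$1" | awk -F: '{ printf "%s:%s:%s %s:%s:%s\n", $1, $2, $3, $4, $5, $6 }'
}

mac_halves 0:3:ba:1f:85:7b
```

For the address above, the first field printed (0:3:ba) is the Sun Microsystems prefix and the second (1f:85:7b) is unique to the system.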

5. Stop the sprayd service on your local system:
# rpcinfo -d sprayd 1<cr>
6. Verify that the sprayd service has been unregistered from RPC:
# rpcinfo | grep sprayd<cr>
7. Restart the sprayd service by issuing the svcadm restart command:
# svcadm restart spray<cr>
8. Verify that the sprayd service is now registered with RPC:
# rpcinfo | grep sprayd<cr>

1.2 Using snoop to Display Network Information

In this exercise, you'll use the snoop, spray, and ping commands to obtain information from your network.
Estimated time: 10 minutes
1. On hostA, log in to an X Window session (CDE, Gnome, or Java Desktop System [JDS]) as root. In one window, start up the snoop utility:
# snoop hostA hostB<cr>
snoop shows what actually happens when hostA uses the ping command to communicate with hostB.
2. In a second window on hostA, type the following:
# ping hostB<cr>
3. Watch the information that is displayed in the first window that is running snoop.
4. Issue the spray command to send a one-way stream of packets to hostB:
# spray hostB<cr>
5. Watch the information that is displayed in the first window that is running snoop.

Exam Questions
1. What is a name for a unique Ethernet address?
❍ A. IP address
❍ B. MAC address
❍ C. Internet address
❍ D. Hostname
2. When you are setting up at least one network interface, which of the following network configuration files does the Solaris installation program always set up? (Choose three.)
❍ A. /etc/hostname.interface
❍ B. /etc/nodename
❍ C. /etc/inet/hosts
❍ D. /etc/defaultdomain
❍ E. /etc/inet/ipnodes
3. Which command lists the network services and their current state?
❍ A. inetadm
❍ B. inetd
❍ C. rpcinfo
❍ D. nfsd
4. Which of the following statements about IPv4 addresses are true? (Choose all that apply.)
❍ A. The IP address identifies the machine to its peers on the network.
❍ B. IP addresses are divided into three unique numbers: network, class, and host.
❍ C. IP addresses provide a means of identifying and locating network resources.
❍ D. IP addresses are written as four sets of numbers separated by periods.
5. Which of the following statements is true about the /etc/hostname.xxy file?
❍ A. It is a system script file.
❍ B. It contains the hostname of the local host.
❍ C. It identifies a network interface on the local host.
❍ D. It is a Sparc executable file.

6. Which of the following statements are true of the snoop command? (Choose two.)
❍ A. You press Ctrl+D to stop the command.
❍ B. Each packet on the network produces one line of output.
❍ C. You press Ctrl+C to stop the command.
❍ D. snoop displays the network statistics for the physical interfaces.

7. Which of the following are files that have to be edited when you manually change the hostname on a Solaris system? (Choose two.)
❍ A. /etc/hostname.xxy
❍ B. /etc/networks
❍ C. /etc/nodename
❍ D. /etc/defaultdomain
❍ E. /etc/inet/ipnodes

8. Which of the following contains the IP addresses and hostnames of machines on a network?
❍ A. /etc/inet/hosts
❍ B. /etc/defaultdomain
❍ C. /etc/networks
❍ D. /etc/nodename

9. Which command is used to determine the information that is flowing between systems across a network?
❍ A. iostat
❍ B. vmstat
❍ C. netstat
❍ D. snoop

10. Which of the following commands is used to monitor the system's TCP/IP network activity?
❍ A. iostat
❍ B. netstat
❍ C. snoop
❍ D. ping

11. Your system has four network interfaces: qfe0, qfe1, qfe2, and qfe3. When you issue the netstat -i command, you see only information for qfe0 displayed. What is the problem?
❍ A. You must use netstat -a to see all the network interfaces.
❍ B. The interfaces are not plumbed.
❍ C. The interfaces are not configured with an IP address.
❍ D. You need to create a file named /etc/hostname.<interfacename> for each network interface.

12. Which methods can you use to determine the MAC address of a Solaris-based system? (Choose two.)
❍ A. Use the banner command at the ok prompt.
❍ B. ifconfig <interfacename> -m
❍ C. uname -a
❍ D. eeprom
❍ E. ifconfig -a
❍ F. Use the netstat -i command.

13. Which of the following are correct entries in the /etc/hostname.eri0 file? (Choose two.)
❍ A. 192.168.1.100
❍ B. ifconfig 192.168.1.100
❍ C. systemA
❍ D. example.com
❍ E. eri0 192.168.1.100

Answers to Exam Questions

1. B. A host's unique Ethernet address is also referred to as the MAC address. Answers A and C are wrong because these names refer to the unique Internet address that is assigned to a network interface by the system administrator. Answer D is wrong because the hostname is the alphanumeric system name that is assigned to a system. For more information, see the section "Configuring an IPv4 Network Interface."

2. A, B, C. The network configuration files /etc/hostname.<interfacename>, /etc/nodename, and /etc/inet/hosts are initially set up by the Solaris installation program. The /etc/defaultdomain file is an optional file and is not set up by the installation program. For more information, see the section "Configuring an IPv4 Network Interface."

3. D. The /etc/hostname.xxy file identifies the network interface on the local host. The /etc/hostname.xxy file contains either an IP address or a hostname, but not both. Answers A and B are wrong because this file is neither a script nor an executable file. Answer C is wrong because it is the /etc/nodename file that contains the hostname of the local host. For more information, see the section "Configuring an IPv4 Network Interface."

4. A. The inetadm command lists the network services and their current state. This is a new feature in Solaris 10. inetd and nfsd are daemons and do not list anything when they are executed. rpcinfo reports RPC information. For more information, see the section "Network Services" for a full description of the inetadm command.

5. A, B, D. The following are true of IP addresses: IP addresses are written as four sets of numbers separated by periods, IP addresses provide a means of identifying and locating network resources, and IP addresses identify the machines to their peers on the network. Answer C is wrong because IP addresses are not divided into three numbers. For more information, see the section "IPv4 Addressing."

6. B, C. snoop generates one line of output for each packet on the network, and the snoop command continues to generate output until you press Ctrl+C to exit the command. Answer A is wrong because Ctrl+D does not exit the snoop command. Answer D is wrong because snoop does not display the network statistics of a physical interface. For more information, see the section "Network Maintenance."

7. A, C. When you manually change the hostname on a Solaris system, you must edit the /etc/hostname.xxy and /etc/nodename files. The /etc/nodename file contains only the hostname. The file /etc/defaultdomain sets the domain name, and /etc/networks identifies the different networks. For more information, see the section "Changing the System Hostname."

8. A. The /etc/inet/hosts file contains the IP addresses and hostnames of machines on a network. For more information, see the section "Hosts."

9. D. The snoop command is used to determine what information is flowing between systems across a network. You use the iostat command to monitor disk I/O, vmstat is used to monitor virtual memory statistics, and netstat is used to monitor network statistics. For more information, see the section "Network Maintenance."

10. B. The netstat command is used to monitor the system's TCP/IP network activity. netstat can provide some basic data about how much and what kind of network activity is happening. You use the iostat command to monitor disk I/O, and ping is used to send ICMP packets to another network host. For more information, see the section "Network Maintenance."

11. B. If the network interface is not plumbed, it does not show up with the netstat command. Answer A is wrong because the -a option does not display the interface if it is not plumbed. Answer C is wrong because the netstat command displays information about a network interface even if it does not have an IP address assigned to it. Answer D is wrong because simply creating this file for the interface does not plumb the interface unless the system is also rebooted or the network services restarted. For more information, see the section "Configuring an IPv4 Network Interface."

12. A, E. Two methods can be used to obtain the MAC address on a SPARC-based system: the banner command and the ifconfig -a command. For more information, see the section "Controlling and Monitoring an IPv4 Network Interface."

13. A, C. The /etc/hostname.<interface> file contains one entry: the hostname or IPv4 address that is associated with the network interface. The IPv4 address can be expressed in traditional dotted-decimal format or in CIDR notation. If you use a hostname as the entry for the /etc/hostname.<interface> file, that hostname must also exist in the /etc/inet/hosts file. For more information, see the section "The /etc/hostname.<interface> File."

Suggested Reading and Resources

. Douglas Comer. Internetworking with TCP/IP: Principles, Protocols and Architecture. Prentice Hall, March 2000.
. "IP Services" guide in the System Administration Collection of the Solaris 10 documentation set. See http://docs.sun.com.
. "Managing Services" section in the "Basic System Administration" guide in the System Administration Collection of the Solaris 10 documentation set. See http://docs.sun.com.


Chapter 2: Virtual File Systems, Swap Space, and Core Dumps

Objectives

The following test objectives for exam CX-310-202 are covered in this chapter:

Explain virtual memory concepts and, given a scenario, configure and manage swap space.
. The Solaris operating environment can use disk space, called swap areas or swap space, for temporary memory storage when a system does not have enough physical memory to handle currently running processes. A system's memory requirements change, and you must be knowledgeable in swap space management in order to monitor these resources and make ongoing adjustments as needed.

Manage crash dumps and core file behaviors.
. You can configure the creation and storage of crash dump and core files. You can create application core files on a global or per-process basis. You must be able to customize the configuration according to various circumstances, depending on the requirement.

Explain NFS fundamentals, and configure and manage the NFS server and client including daemons, files, and commands.
. Network File System (NFS) facilitates the sharing of data between networked systems. NFS servers share resources that are to be used by NFS clients. This chapter describes NFS and the tasks required to administer NFS servers and clients.

Troubleshoot various NFS errors.
. You must have a thorough understanding of the problems that can arise within the NFS client/server process and how to address them. This chapter describes a number of problem areas and what to do in order to rectify them.

Explain and manage AutoFS and use automount maps (master, direct, and indirect) to configure automounting.

. AutoFS allows NFS directories to be mounted and unmounted automatically. It also provides for centralized administration of NFS resources. This chapter describes AutoFS and how to configure the various automount maps.

Implement patch management using Sun Connection Services including the Update Manager client, the smpatch command line, and the Sun Connection hosted web application.
. Sun's Connection Service provides an automated approach to patch management, making it more convenient to keep your operating system up to date with the latest updates from Sun. This chapter describes how to set up and use Sun Connection services.

Outline

Introduction
The Swap File System
  Swap Space and TMPFS
  Sizing Swap Space
  Monitoring Swap Resources
  Setting Up Swap Space
Core File Configuration
Crash Dump Configuration
NFS
  NFS Version 4
  Servers and Clients
  NFS Daemons
  Setting Up NFS
  Mounting a Remote File System
  NFS Server Logging
  Troubleshooting NFS Errors
    The stale NFS file handle Message
    The RPC: Program not registered Error
    The NFS: service not responding Error
    The server not responding Error
    The RPC: Unknown host Error
    The NFS server not responding, still trying Message
    The No such file or directory Error
AutoFS
  AutoFS Maps
    Master Maps
    Direct Maps
    Indirect Maps
  When to Use automount
Sun Update Connection Service
  Using the Update Manager
  Sun Update Manager Proxy
Summary
Apply Your Knowledge
  Exercises
  Exam Questions
  Answers to Exam Questions
Key Terms
Suggested Reading and Resources

Study Strategies

The following study strategies will help you prepare for the test:

. As you study this chapter, it's important that you practice on a Solaris system each Step By Step and each command that is presented. Practice is very important on these topics, so you should practice until you can repeat each procedure from memory.
. You must understand the concept of a virtual file system, including how it works, how to configure additional swap space, and how to use tools to monitor it.
. You need to understand each command in this chapter and be prepared to match the command to the correct description.
. You need to know all the terms listed in the "Key Terms" section near the end of this chapter.

Introduction

Swap space is used to supplement the use of physical memory when a running process requires more resources than are currently available. Swap space can be allocated either as a dedicated disk slice or as a normal file in an existing file system. The latter option is often only used as an emergency solution. Both of these methods for adding swap space are described in this chapter, which also describes how to monitor the use of swap space, how to add more when necessary, and how to delete additional swap space if it is no longer required.

Core files are produced when a process encounters an unexpected error. When this happens, the memory contents of the process are dumped to a file for further analysis. This chapter describes the configuration of core files and how they can be managed effectively. Crash dump files are produced when a system encounters a failure that it cannot recover from. The contents of kernel memory are dumped to a temporary location (normally the swap device) before the system reboots and subsequently are moved to a permanent location to save them from being overwritten. This chapter also describes crash dump files and how to manage and configure them.

Network File System (NFS) is a means of sharing file systems across the network. NFS allows multiple systems to make use of the same physical file system without having to maintain numerous copies of the data, which could cause consistency problems. NFS is discussed in this chapter, as is AutoFS, a method of automatically mounting file systems on demand and unmounting them when a specified amount of time has elapsed during which no activity has occurred. This chapter describes how to configure automount maps and make use of this extremely useful feature.

Sun Update Connection Manager is a facility that helps you keep your operating system up to date. You can use it to analyze all your systems for available operating system patches. You also can remotely manage updates on all your systems. The procedure for setting up and using the Sun Update Connection Service is described in this chapter.

The Swap File System

Objective:
. Explain virtual memory concepts and, given a scenario, configure and manage swap space.

Physical memory is the random-access memory (RAM) installed in a computer. To view the amount of physical memory installed in your computer, type the following:

# prtconf | grep 'Memory size'<cr>

The system displays a message similar to the following:

Memory size: 1024 Megabytes

Not all physical memory is available for Solaris processes. Some memory is reserved for kernel code and data structures. The remaining memory is referred to as available memory. Processes and applications on a system can use available memory.

Every process running on a Solaris system requires space in memory. Space is allocated to processes in units known as pages. Some of a process's pages are used to store the process executable, and other pages are used to store the process's data. Physical memory is a finite resource on any computer, and sometimes there are not enough pages in physical memory for all of a system's processes. Physical memory is supplemented by specially configured space on the physical disk that is known as swap space; together they are referred to as virtual memory.

When a physical memory shortfall is encountered, the virtual memory system begins moving data from physical memory out to the system's configured swap areas. This process is known as paging. When a process requests data that has been sent to a swap area, the virtual memory system brings that data back into physical memory. To the user, this is transparent.

Swap space is configured either on a special disk partition known as a swap partition or on a swap file system (swapfs). In addition to swap partitions, special files called swap files can also be configured in existing UNIX file systems (UFS) to provide additional swap space when needed.

The Solaris virtual memory system maps the files on disk to virtual addresses in memory. As data in those files is needed, the virtual memory system maps the virtual addresses in memory to real physical addresses in memory. This mapping process greatly reduces the need for large amounts of physical swap space on systems with large amounts of available memory. This is because swapfs provides virtual swap space addresses rather than real physical swap space addresses in response to the requests to reserve swap space. This is referred to as virtual swap space. With the virtual swap space provided by swapfs, real disk-based swap space is required only with the onset of paging, because when paging occurs, swapfs must convert the virtual swap space addresses to physical swap space addresses in order for paging to actual disk-based swap space to occur. The virtual swap space provided by swapfs reduces the need for configuring large amounts of disk-based swap space on systems with large amounts of physical memory.

Swap Space and TMPFS

The temporary file system (TMPFS) makes use of virtual memory for its storage. This can be either physical RAM or swap space. /tmp is a good example of a TMPFS file system, where temporary files and their associated information are stored in memory (in the /tmp directory) rather than on disk. This speeds up access to those files and results in a major performance enhancement for applications such as compilers and database management system (DBMS) products that use /tmp heavily. Because TMPFS uses virtual memory, it is transparent to the user.
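As an aside (not from the original text), you can see the page size the virtual memory system uses with the POSIX getconf utility; UltraSPARC systems running Solaris typically report 8192 bytes, while most x86 systems report 4096:

```shell
# Print the size, in bytes, of one memory page -- the unit in which the
# virtual memory system allocates space to processes and pages to swap.
getconf PAGESIZE
```

Solaris also ships a dedicated pagesize(1) command that prints the same value.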

TMPFS allocates space in the /tmp directory from the system's virtual memory resources. This means that as you use up space in /tmp, you are also using up virtual memory space. So if your applications use /tmp heavily and you do not monitor virtual memory usage, your system could run out of this resource.

You need to determine whether large applications (such as compilers) will use the /tmp directory. Then you need to allocate additional swap space to be used by TMPFS. (How to monitor swap space and how to add additional space to a running system are discussed in the next few sections.)

NOTE
Movement of swap: Starting with the release of Solaris 9, the installation program allocates swap at the first available cylinder on the disk (this is normally cylinder 0). This practice allows the root file system the maximum space on the disk and allows for expansion of the file system during an upgrade.

Sizing Swap Space

The amount of swap space required on a system is based on the following criteria:

. Application programs need a minimum amount of swap space to operate properly, plus the requirements of any concurrently running processes. You should follow the manufacturer's recommendation for swap space requirements. This information is usually contained in the documentation that comes with the application.
. The amount of disk-based swap space on a system must be large enough to be able to accommodate a kernel memory dump. To prevent any possible panic dumps resulting from fatal system failures, there must be sufficient swap space to hold the necessary kernel memory pages in RAM at the time of a failure. Kernel memory accounts for around 20% of total memory, so if you have 1GB of physical memory, you will need about 256MB of disk-based space for a worst-case crash dump.

It is quite rare nowadays to need more swap space than RAM, which used to be a recommendation with older versions of SunOS. In fact, the opposite is often true: you now often need less swap space than physical RAM. If you are prepared to keep track of your swap space and administer it regularly, you can run with much less swap space than in older versions of SunOS. Many other factors also contribute to the amount of swap space you need to configure, such as the number of concurrent users and the naming service, for example Network Information System Plus (NIS+).
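As a quick sanity check of these figures (my own arithmetic sketch, not part of the original text), 20% of 1GB is about 205MB of kernel memory, and sizing the dump area at roughly 25% of RAM gives the 256MB worst-case figure:

```shell
# Rough crash dump space estimate for a system with 1GB (1024MB) of RAM.
ram_mb=1024
kernel_mb=$((ram_mb * 20 / 100))       # ~20% of total memory is kernel memory
worst_case_mb=$((ram_mb * 25 / 100))   # sizing the dump slice at ~25% of RAM
echo "${kernel_mb}MB kernel, ${worst_case_mb}MB worst-case dump space"
# prints: 204MB kernel, 256MB worst-case dump space
```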

NOTE
Reducing swap space problems: If the amount of swap space is equal to the amount of physical RAM, you should generally experience no swap space problems, although the type of application being used on the system is a major factor. A common problem is when someone uses /tmp as a place to store large temporary files. Be aware that anything in /tmp uses available swap space. By default, available space in the /tmp file system is equal to the size of your swap space. Therefore, you may want to restrict how much space the /tmp file system can consume by specifying the size option in the /etc/vfstab file:

swap  -  /tmp  tmpfs  -  yes  size=4096m

This example limits the /tmp file system to 4096MB of space.

Monitoring Swap Resources

If you run into a swap shortfall due to heavy demand on memory, you get error messages on your system's console. The error might look something like this:

<application> is out of memory
malloc error
Jul 18 15:12:47 sunfire genunix: [ID 470503 kern.warning]
WARNING: Sorry, no swap space to grow stack for pid 100295 (myprog)

This error means that an application is trying to get more memory but no swap space is available to accommodate it. You could fill up a TMPFS file system due to the lack of available swap and get the following error message:

<directory>: File system full, swap space limit exceeded

or this one:

<directory>: File system full, memory allocation failed

The first message can occur, for example, when TMPFS tries to write more than it is allowed or when TMPFS runs out of physical memory while attempting to create a new file or directory. The second type of message is displayed if a page cannot be allocated when a file is being written.

You need to regularly monitor your swap space. This helps you determine whether you are running on the edge and need to increase the resource, or whether you have too much swap space allocated and are wasting disk space. Most commercial performance monitoring tools keep track of swap space or can be configured to generate warnings when it gets low. Besides these commercial tools, you can use the helpful tools that Solaris provides (see Table 2.1). System performance monitoring is not covered on the administrator certification exams, so this chapter describes only the /usr/sbin/swap command.

and Core Dumps Table 2. You can use the -l option to list swap space and to determine the location of a system’s swap areas: # swap -l<cr> The system displays details of the system’s physical swap space. swaplo is a kernel parameter that you can modify.9 16 1049312 free 1049312 This output is described in Table 2. in 512-byte blocks. /dev/dsk/c0t0d0s1). The swaplo value for the area. deleting.1 Command Swap Monitoring Tools Description The /usr/sbin/swap utility provides a method for adding. The value includes all mapped files and devices. /usr/sbin/swap /usr/bin/ps /usr/ucb/ps /usr/bin/vmstat /usr/bin/sar /usr/bin/prstat You can use two options with the /usr/sbin/swap command to monitor swap space.2 Keyword path dev swaplo Output from the swap -l Command* Description The pathname for the swap area (for example. in 512-byte blocks. The number of 512-byte blocks in this area that are not currently allocated. This tool reports virtual memory statistics. The swaplen value for the area. and it defines the size of the swap area. and it is reported in kilobytes rather than pages. These device mappings do not use swap space.2. and monitoring the system swap areas used by the memory manager. in 512-byte blocks. where usable swap space begins. Table 2. Use the prstat command with the -a option to report swap size information for processes and users. Swap Space. Use the -t option to report a total swap usage summary for each user. The major/minor device number for a block special device. and it represents the offset. blocks free ular swap area. and it is reported in pages. The value includes all mapped files and devices. You can use this Berkley version of the ps command with the -alx options to report the total size of a process that is currently in virtual memory.56 Chapter 2: Virtual File Systems. *This table does not include swap space in the form of physical memory because that space is not associated with a partic- . swaplen is a kernel parameter that you can modify. 
This system has a 512MB swap slice allocated: swapfile /dev/dsk/c0t0d0s1 dev swaplo blocks 136. This is a system activity reporter. this value is zeros otherwise. You can use the -al options with the /usr/bin/ps command to report the total size of a process that is currently in virtual memory. in 512-byte blocks.

You use the -s option to list a summary of the system's virtual swap space:

# swap -s<cr>

The system displays the following information. This system has 384MB of physical memory and a 512MB swap slice:

total: 191388k bytes allocated + 38676k reserved = 230064k used, 919848k available

This output is described in Table 2.3.

Table 2.3  Output from the swap -s Command

Keyword          Description
bytes allocated  The total amount of swap space, in 1,024-byte blocks, that is
                 currently allocated as backing store (that is, disk-backed swap
                 space).
reserved         The total amount of swap space, in 1,024-byte blocks, that is not
                 currently allocated but is claimed by memory for possible future use.
used             The total amount of swap space, in 1,024-byte blocks, that is either
                 allocated or reserved.
available        The total amount of swap space, in 1,024-byte blocks, that is
                 currently available for future reservation and allocation.

You can use the amounts of swap space available and used (in the swap -s output) as a way to monitor swap space usage over time. If a system's performance is good, you can use swap -s to see how much swap space is available. When the performance of a system slows down, you can check the amount of swap space available to see if it has decreased. Then you can identify what changes to the system might have caused swap space usage to increase.

Keep in mind when using the swap command that the amount of physical memory available for swap usage changes dynamically as the kernel and user processes reserve and release physical memory.

NOTE
Swap space calculations: The swap -l command displays swap space in 512-byte blocks, and the swap -s command displays swap space in 1,024-byte blocks. If you add up the blocks from swap -l and convert them to kilobytes, you'll see that it is less than the swap space used plus available (as shown in the swap -s output) because swap -l does not include physical memory in its calculation of swap space.
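To make the unit difference concrete, here is a small conversion sketch (the snippet is mine, not from the book; the example value is the 512MB swap slice shown earlier):

```shell
# swap -l reports 512-byte blocks; swap -s reports 1,024-byte (KB) units.
blocks=1049312                      # "blocks" column from the swap -l example
kb=$((blocks * 512 / 1024))         # equivalent to dividing the block count by 2
echo "${blocks} blocks = ${kb}KB"   # prints: 1049312 blocks = 524656KB
```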

NOTE
Crash dumps: As described later in this chapter, a crash dump is a disk copy of the kernel memory of the computer at the time of a fatal system error. When a fatal operating system error occurs, a message describing the error is printed to the console. The operating system then generates a crash dump by writing the contents of kernel memory to a predetermined dump device, which is typically a local disk partition. By default, the dump device is configured to be an appropriate swap partition. Therefore, it's necessary to make sure that your swap area is at least as large as about 25% of your physical RAM; otherwise, the system may not have enough room to store the crash dump. You can then analyze this crash dump to determine the cause of the system error. Crash dumps and core files are discussed later in this chapter, in the sections "Core File Configuration" and "Crash Dump Configuration."

Setting Up Swap Space

Swap space is initially configured during software installation through the installation program. If you use the installation program's automatic layout of disk slices and do not manually change the size of the swap slice, the Solaris installation program allocates a default swap slice of 512MB. The software installation program adds entries for swap slices and files in the /etc/vfstab file. These swap areas are activated each time the system is booted by /sbin/swapadd.

As system configurations change, more users are added, and new software packages are installed, you might need to add more swap space. There are two methods for adding more swap to a system:

. Create a secondary swap partition
. Create a swap file in an existing UFS

Creating a secondary swap partition requires additional, unused disk space. The process is described in Step By Step 2.1. In this example, you add an additional 512MB of swap space to your system. You don't have any more room on the disk for more swap space, but the /data directory (currently mounted on slice 4 of disk c0t1d0) is 512MB in size. Move all the data in /data to another server to free up the partition so that you can use it as a swap partition. You can use any of the backup methods described in Solaris 10 System Administration Exam Prep (Exam CX-310-200), Part I.

STEP BY STEP
2.1 Creating a Secondary Swap Partition

1. Use the format command, as described in Solaris 10 System Administration Exam Prep (Exam CX-310-200), Part I, to create a new partition on a disk.

2. After freeing up the /data directory and unmounting /dev/dsk/c0t1d0s4, use the format utility to set the tag name to swap and the permission flag to wu (writable and unmountable):

partition> 4
Part      Tag    Flag     Cylinders     Size        Blocks
  4 unassigned   wm       3400 - 4440   512.37MB    (1041/0/0) 1049328
Enter partition id tag[unassigned]: swap
Enter partition permission flags[wm]: wu
Enter new starting cyl[3400]: <cr>
Enter partition size[1049328b, 1041c, 1040e, 512.37mb, 0.50gb]: <cr>

The bold text indicates what the user enters. Label the disk:

partition> la
Ready to label disk? Y

3. Make an entry in the /etc/vfstab file, where the fields are as follows:

Device to mount: <name of swap block device or swap file>
Device to fsck: -
Mount point: -
FS type: swap
fsck pass: -
Mount at boot: no
Mount options: -

Here's an example of an entry for the swap partition just added:

/dev/dsk/c0t1d0s4  -  -  swap  -  no  -

4. Run the swapadd script to add the swap to your system:

# /sbin/swapadd<cr>

5. Verify that the swap has been added:

# swap -l<cr>

The system responds with this:

swapfile            dev    swaplo  blocks   free
/dev/dsk/c0t0d0s1   136,9  16      1049312  1049312
/dev/dsk/c0t1d0s4   136,3  16      1052624  1052624

/dev/dsk/c0t1d0s4 has been added to the list of available swap areas.
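The partition sizes reported by format can be cross-checked with a little arithmetic (a sketch of my own, using the numbers from the format output above):

```shell
# 1049328 blocks x 512 bytes/block, expressed in MB (1MB = 1048576 bytes).
awk 'BEGIN { printf "%.2fMB\n", 1049328 * 512 / 1048576 }'   # prints: 512.37MB
```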

EXAM ALERT
/etc/vfstab syntax: You should be familiar with the entry for swap files in /etc/vfstab. The syntax can be tricky, especially because of the hyphens.

The easiest way to add more swap space is to use the mkfile and swap commands to designate a part of an existing UFS file system as a supplementary swap area. You can do this as a temporary or semitemporary solution for a swap shortage. Although you can do this for longer durations as well, it has a few disadvantages:

. This method of creating a swap file has a negative effect on system performance because the swap file is slower than a dedicated swap slice.
. A swap file is considered a file within a file system; therefore, you cannot unmount that file system while the swap file is in use.
. Because a swap file is simply a file in some file system, when you back up a file system, a rather large swap file (empty file) is also backed up if you don't specifically exclude it.

The additional notes explain how to add swap partitions:

. On systems running the 32-bit version of Solaris, swap areas must not exceed 2GB. If you wanted to add a 9GB disk to a swap area, you should slice it up into 2GB chunks. Then you need to put a separate entry in /etc/vfstab for each slice. On systems running the 64-bit version of Solaris 10, you can use a block device larger than 2GB.
. Swap space is allocated in a round-robin fashion from swap partition to swap partition. Swap space is allocated 1MB at a time from each swap partition in turn, unless one is full, and it is not possible to prioritize usage of the various swap areas. You get a large performance benefit from having swap partitions spread across separate disks.
. It is not worth making a striped metadevice to swap on; that would just add overhead and slow down paging.

Step By Step 2.2 explains how to add more swap space without repartitioning a disk.

STEP BY STEP
2.2 Adding Swap Space Without Repartitioning a Disk

1. As root, use the df -h command to locate a file system that has enough room to support a swap file that's the size that you want to add:

# df -h<cr>
Filesystem            size   used  avail capacity  Mounted on
/dev/dsk/c0t0d0s0     4.9G   2.2G   2.7G    44%    /
/devices                0K     0K     0K     0%    /devices
ctfs                    0K     0K     0K     0%    /system/contract
proc                    0K     0K     0K     0%    /proc
mnttab                  0K     0K     0K     0%    /etc/mnttab
swap                  1.2G   1.1M   1.2G     1%    /etc/svc/volatile
objfs                   0K     0K     0K     0%    /system/object
fd                      0K     0K     0K     0%    /dev/fd
/dev/dsk/c0t0d0s7     1.5G   304K   1.4G     1%    /var
swap                  1.2G    48K   1.2G     1%    /tmp
swap                  1.2G     0K   1.2G     0%    /var/run
/dev/dsk/c0t1d0s0     4.2G   1.7G   2.5G    40%    /data1
/dev/dsk/c0t1d0s7     5.9G   4.5G   1.4G    77%    /data2

2. Use the mkfile command to add a 512MB swap file named swapfile in the /data2 partition:

# mkfile 512m /data2/swapfile<cr>

Use the ls -l /data2 command to verify that the file has been created:

# ls -l /data2/swapfile<cr>
-rw------T   1 root     root     536870912 Aug 19 23:31 /data2/swapfile

The system shows the file named swapfile along with the file size. Notice that the sticky bit (which is described in Solaris 10 System Administration Exam Prep (Exam CX-310-200), Part I) has automatically been set.

NOTE
Swap permissions: You can create a swap file without root permissions, but it is a good idea for root to be the owner of the swap file to prevent someone from accidentally overwriting it.

3. Activate the swap area by using the swap command:

# /usr/sbin/swap -a /data2/swapfile<cr>

You must use the absolute pathname to specify the swap file. The swap file is added and available until the file system is unmounted, the system is rebooted, or the swap file is removed. Keep in mind that you can't unmount a file system while the swap file is still being used or a process is swapping to the swap file.

4. Verify that the new swap area was added:

# swap -l<cr>

The system should respond with a message such as the following that shows the swap file:

swapfile             dev    swaplo  blocks   free

/dev/dsk/c0t0d0s1    136,9    16    1049312  1049312
/data2/swapfile        -      16    1048560  1048560

5. If this will be a permanent swap area, add to the /etc/vfstab file an entry for the swap file that specifies the full pathname of the swap file and designates swap as the file system type:

/data2/swapfile  -  -  swap  -  no  -

There is some disagreement about which type of swap area provides the best performance: a swap partition or a swap file. Both scenarios have advantages. Sun's official statement, however, and the general consensus in the user community, is that there will be a performance impact if you go the swap file route rather than the partition route. These are two of the best reasons in favor of swap partitions:

. A partition provides contiguous space and can be positioned between the specific cylinders that will provide the best performance.
. A swap file has to work through the file system when updates are made, whereas a swap partition has data written to it at a lower level, bypassing the interaction with the file system. This makes a swap partition slightly faster than a swap file.

For these reasons, Sun recommends that you use swap files only as a temporary solution until you can add a swap partition.

NOTE
Swap files on NFS: In an emergency, when no other local space is available, it's possible to add a swap file to a networked file system by using NFS (NFS is described later in this chapter). Using NFS to access swap space on another host is not recommended, however, because it puts an increased load on your network and makes performance unacceptable. If you do need to use NFS for additional swap files, try using the -n option when you run mkfile, because this allocates disk blocks only as they are written.

Swap files can be deleted as well as added. For example, you might determine that you have allocated too much swap space and that you need that disk space for other uses. Alternatively, the additional swap space might have been temporarily added to accommodate a one-off large job. The steps involved in removing a swap file are outlined in Step By Step 2.3.

STEP BY STEP
2.3 Removing a Swap File

1. As root, use the swap -d command to remove the swap area. Use the following for a swap partition:

# swap -d /dev/dsk/c0t0d0s4<cr>

or use this for a swap file:

# swap -d /data2/swapfile<cr>

The file itself is not deleted.

2. In the /etc/vfstab file, delete the entry for the swap file.

3. Remove the swap file to recover the disk space:

# rm /data2/swapfile<cr>

If the swap area was in a partition, you can now allocate this disk space as you would a normal file system.

4. Issue the swap -l command to ensure that the swap area is gone:

# swap -l<cr>
swapfile             dev    swaplo  blocks   free
/dev/dsk/c0t0d0s1    136,9    16    1049312  1049312

The swap file filename is removed from the list, so you know it is no longer available for swapping.

Core File Configuration

Objective:
. Manage crash dumps and core file behaviors.

Core files are created when a program or application terminates abnormally. Not only can software problems cause core dumps, but so can hardware problems. The default location for a core file to be written is the current working directory. However, as the system administrator, you might want to configure the system so that all core files are written to a central location. This would make administration and management of core files much easier because core files can sometimes take up a significant amount of disk space.

You manage core files by using the coreadm command:

coreadm [-g <pattern>] [-G <content>] [-i <pattern>] [-I <content>] \
    [-d <option> ...] [-e <option> ...]
coreadm [-p <pattern>] [-P <content>] [pid]
coreadm -u

The options for the coreadm command are described in Table 2.4.

Running coreadm with no options displays the current configuration, which you can also determine by reading the file /etc/coreadm.conf.

Table 2.4 coreadm Command Options

-g <pattern>   Sets the global core file name pattern.
-G <content>   Sets the global core file content using one of the description tokens.
-i <pattern>   Sets the per-process core file name pattern.
-I <content>   Sets the per-process core file content to content.
-d <option>    Disables the specified core file option.
-e <option>    Enables the specified core file option.
-p <pattern>   Sets the per-process core file name pattern for each of the specified pids.
-P <content>   Sets the per-process core file content to content for each of the specified pids.
-u             Updates the systemwide core file options from the configuration file /etc/coreadm.conf.

A core file name pattern consists of a file system pathname, along with embedded variables. These variables are specified with a leading % character and are expanded when a core file is created. Valid pattern variables are described in Table 2.5.

Table 2.5 coreadm Patterns

%p   Specifies the process ID (PID).
%u   Specifies the effective user ID.
%g   Specifies the effective group ID.
%d   Specifies the executable file directory name.
%f   Specifies the executable filename.
%n   Specifies the system node name. This is the same as running uname -n.
%m   Specifies the machine name. This is the same as running uname -m.
%t   Specifies the decimal value of time, as the number of seconds since 00:00:00 January 1, 1970.
%z   Specifies the name of the zone in which the process is executed (zonename).
%%   Specifies a literal % character.

The -d and -e flags of the coreadm command can take several options. These are listed in Table 2.6.

Table 2.6 coreadm -d and -e Flag Options

global        Allows core dumps, using the global core pattern.
process       Allows core dumps, using the per-process core pattern.
global-setid  Allows set-id core dumps, using the global core pattern.
proc-setid    Allows set-id core dumps, using the per-process core pattern.
log           Produces a syslog message when an attempt is made to generate a global core file.

To modify the core file configuration so that all files are dumped into the directory /cores and named core, followed by the system name and then the name of the program being run, you can follow the procedure described in Step By Step 2.4.

STEP BY STEP
2.4 Configuring Core Files

1. As root, use the coreadm command to display the current coreadm configuration:

# coreadm<cr>
     global core file pattern:
     global core file content: default
       init core file pattern: core
       init core file content: default
            global core dumps: disabled
       per-process core dumps: enabled
      global setid core dumps: disabled
 per-process setid core dumps: disabled
     global core dump logging: disabled

2. As root, issue the following command to change the core file setup:

# coreadm -i /cores/core.%n.%f<cr>

3. Run coreadm again to verify that the change has been made permanent:

# coreadm<cr>
     global core file pattern:
     global core file content: default
       init core file pattern: /cores/core.%n.%f
       init core file content: default
            global core dumps: disabled
       per-process core dumps: enabled
      global setid core dumps: disabled
 per-process setid core dumps: disabled
     global core dump logging: disabled
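To see how the pattern variables combine into a core file name, the expansion can be imitated with sed. This is an illustrative helper only (the real expansion is done by the system when a core file is written), and it handles just %p, %n, %f, and %%:

```shell
# Imitate coreadm pattern expansion for a few variables (illustration only).
expand_core_pattern() {
  pattern=$1 pid=$2 node=$3 file=$4
  printf '%s\n' "$pattern" |
    sed -e "s/%p/$pid/g" -e "s/%n/$node/g" -e "s/%f/$file/g" -e 's/%%/%/g'
}

# The pattern set in Step By Step 2.4, expanded for a crashed sh process
# (PID 1234) on a host named sunfire:
expand_core_pattern '/cores/core.%n.%f' 1234 sunfire sh   # -> /cores/core.sunfire.sh
```

The result matches the file name you will see the system produce in the gcore example that follows.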

Use the gcore command to manually generate a core dump of a process. This is useful for verifying your coreadm settings or if you need to generate a core dump for analysis purposes. For example, to create a per-process core image of the current shell, type

# gcore -p $$<cr>

The system responds with this:

gcore: /cores/core.sunfire.sh dumped

The -p option produces per-process specific content, and the -g option produces a global core file. Various commands such as dbx, mdb, and pstack can be used to analyze a core dump file, but those commands are beyond the scope of this book.

The coreadm process is configured by the Service Management Facility (SMF) at system boot time. The service name for this process is svc:/system/coreadm:default. Use the svcs command to check its status.

Crash Dump Configuration

Objective:
. Manage crash dumps and core file behaviors.

A crash dump is a snapshot of the physical memory, saved on disk, at the time a fatal system error occurs. When a serious error is encountered, the system displays an error message on the console, dumps the entire contents of physical memory to the disk, and then reboots the system. Normally, crash dumps are configured to use the swap partition to write the contents of memory. The savecore program runs when the system reboots and saves the image in a predefined location, usually /var/crash/<hostname>, where <hostname> represents the name of the system.

You configure crash dump files by using the dumpadm command. Running this command with no options displays the current configuration, which is obtained from the file /etc/dumpadm.conf:

# dumpadm<cr>

The system responds with this:

      Dump content: kernel pages

       Dump device: /dev/dsk/c0t0d0s1 (swap)
Savecore directory: /var/crash/sunfire
  Savecore enabled: yes

The following is the syntax of the dumpadm command:

/usr/sbin/dumpadm [-nuy] [-c <content-type>] [-d <dump-device>] \
    [-m <mink> | <minm> | <min%>] [-s <savecore-dir>] [-r <root-dir>]

The options for the dumpadm command are described in Table 2.7.

Table 2.7 dumpadm Command Syntax

-c <content-type>   Modifies crash dump content. Valid values are kernel (just kernel pages), all (all memory pages), and curproc (kernel pages and currently executing process pages).
-d <dump-device>    Modifies the dump device. This can be specified either as an absolute pathname (such as /dev/dsk/c0t0d0s1) or the word swap, in which case the system identifies the best swap area to use.
-m <mink> | <minm> | <min%>   Maintains minimum free space in the current savecore directory, specified either in kilobytes, megabytes, or a percentage of the total current size of the directory.
-n                  Disables savecore from running on reboot. This is not recommended because with it, any crash dumps would be lost.
-s <savecore-dir>   Specifies a savecore directory other than the default /var/crash/hostname.
-u                  Forcibly updates the kernel dump configuration based on the contents of /etc/dumpadm.conf.
-r <root-dir>       Specifies a different root directory. If this option is not used, the default / is used.
-y                  Enables savecore to run on the next reboot. This setting is used by default.

To set up a dedicated disk named c0t2d0s2 for crash dumps, you issue the following command:

# dumpadm -d /dev/dsk/c0t2d0s2<cr>

When you specify s2, the entire disk is used for a crash dump. The system responds with this:

      Dump content: kernel pages
       Dump device: /dev/dsk/c0t2d0s2 (dedicated)
Savecore directory: /var/crash/sunfire
  Savecore enabled: yes
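dumpadm persists its settings in /etc/dumpadm.conf. After the dedicated-device example above, the file would contain entries along these lines. The contents shown are illustrative, assuming the example system; the file is maintained by dumpadm itself and should not be hand-edited:

```shell
# Illustrative /etc/dumpadm.conf after: dumpadm -d /dev/dsk/c0t2d0s2
DUMPADM_DEVICE=/dev/dsk/c0t2d0s2
DUMPADM_SAVDIR=/var/crash/sunfire
DUMPADM_CONTENT=kernel
DUMPADM_ENABLE=yes
```

The dumpadm -u option rebuilds the kernel's dump configuration from exactly this file.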

For testing purposes, you may want to generate a system crash dump. You can do this by issuing the reboot -d command or by using the savecore -L command to create a live OS core dump. To use the savecore command, you must first use dumpadm to set a nonswap device as the dump device. Another method is to press Stop+A to get to the OpenBoot PROM and then type the OBP command sync to force a crash dump.

The dumpadm process is now configured by the Service Management Facility (SMF) at system boot time. The service name for this process is svc:/system/dumpadm:default. Use the svcs command to check its status.

NFS

Objective:
. Explain NFS fundamentals, and configure and manage the NFS server and client including daemons, files, and commands.

The NFS service lets computers of different architectures, running different operating systems, share file systems across a network. Just as the mount command lets you mount a file system on a local disk, NFS lets you mount a file system that is located on another system anywhere on the network. Furthermore, NFS support has been implemented on many platforms, ranging from Microsoft Windows on personal computers to mainframe operating systems, such as Multiprogramming using Virtual Storage (MVS). For example, a Sun system can mount the file system from a Microsoft Windows or Linux system. Each operating system applies the NFS model to its file system semantics. File system operations, such as reading and writing, function as though they are occurring on local files. Response time might be slower when a file system is physically located on a remote system, but the connection is transparent to the user regardless of the hardware or operating systems.

The NFS service provides the following benefits:

. Lets multiple computers use the same files so that everyone on the network can access the same data. This eliminates the need to have redundant data on several systems.
. Reduces storage costs by having computers share applications and data.
. Provides data consistency and reliability because all users access the same data.
. Makes mounting of file systems transparent to users.
. Makes accessing remote files transparent to users.
. Supports heterogeneous environments.
. Reduces system administration overhead.

The NFS service makes the physical location of the file system irrelevant to the user. With NFS, instead of placing copies of commonly used files on every system, you can place one copy on one computer's disk and have all other systems across the network access it. Under NFS operation, remote file systems are almost indistinguishable from local ones. You can use NFS to allow users to see all the data, regardless of location. At the same time, the system administrator has complete control over which file systems can be mounted and who can mount them.

NFS Version 4

Solaris 10 introduced a new version of the NFS protocol, which has the following features:

. The User ID and Group ID are represented as strings. A new daemon process, nfsmapid, maps these IDs to local numeric IDs. The nfsmapid daemon is described later in this chapter.
. NFS version 4 provides a pseudo file system to give clients access to exported objects on the NFS server.
. NFS version 4 is a stateful protocol: both the client and the server hold information about current locks and open files. In previous versions of NFS, this information was not retained. When a crash or failure occurs, the client and the server work together to re-establish the open or locked files. All state and lock information is destroyed when a file system is unshared.
. NFS version 4 supports delegation, a technique where management responsibility of a file can be delegated by the server to the client. A client can be granted a read delegation, which can be granted to multiple clients, or a write delegation, providing exclusive access to a file. Delegation is supported in both the NFS server and the NFS client.
. NFS version 4 no longer uses the mountd, statd, or nfslogd daemons.
. The default transport for NFS version 4 is the Remote Direct Memory Access (RDMA) protocol, a technology for memory-to-memory transfer over high-speed data networks. RDMA improves performance by reducing load on the CPU and I/O. If RDMA is unavailable on both server and client, TCP is used as the transport.

Servers and Clients

With NFS, systems have a client/server relationship. The NFS server is where the file system resides. Any system with a local file system can be an NFS server. As described later in this chapter, in the section "Setting Up NFS," you can configure the NFS server to make file systems available to other systems and users.

An NFS client is a system that mounts a remote file system from an NFS server. You'll learn later in this chapter, in the section "Mounting a Remote File System," how you can create a local directory and mount the file system. As you will see, a system can be both an NFS server and an NFS client.

NFS Daemons

NFS uses a number of daemons to handle its services. These services are initialized at startup from the svc:/network/nfs/server:default and svc:/network/nfs/client:default startup service management functions. The most important NFS daemons are described in Table 2.8.

Table 2.8 NFS Daemons

nfsd      An NFS server daemon that handles file system exporting and file access requests from remote systems. An NFS server runs multiple instances of this daemon. This daemon is usually invoked at the multi-user-server milestone and is started by the svc:/network/nfs/server:default service identifier.
mountd    An NFS server daemon that handles mount requests from NFS clients. This daemon also provides information about which file systems are mounted by which clients; you use the showmount command, described later in this chapter, to view this information. This daemon is usually invoked at the multi-user-server milestone and is started by the svc:/network/nfs/server:default service identifier. This daemon is not used in NFS version 4.
lockd     A daemon that runs on the NFS server and NFS client and provides file-locking services in NFS. This daemon is started by the svc:/network/nfs/client service identifier at the multi-user milestone.
statd     A daemon that runs on the NFS server and NFS client and interacts with lockd to provide the crash and recovery functions for the locking services on NFS. This daemon is started by the svc:/network/nfs/client service identifier at the multi-user milestone. This daemon is not used in NFS version 4.
rpcbind   A daemon that facilitates the initial connection between the client and the server.
nfsmapid  A new daemon that maps to and from NFS version 4 owner and group identification and UID and GID numbers. It uses entries in the passwd and group files to carry out the mapping, and also references /etc/nsswitch.conf to determine the order of access.
nfs4cbd   A new client-side daemon that listens on each transport and manages the callback functions to the NFS server.
nfslogd   A daemon that provides operational logging to the Solaris NFS server. nfslogd is described later in this chapter, in the section "NFS Server Logging." The nfslogd daemon is not used in NFS version 4.

Setting Up NFS

Servers let other systems access their file systems by sharing them over the NFS environment. A shared file system is referred to as a shared resource. You specify which file systems are to be shared by entering the information in the file /etc/dfs/dfstab. Entries in this file are shared automatically whenever you start the NFS server operation. You should set up automatic sharing if you need to share the same set of file systems on a regular basis; most file system sharing should be done automatically, and the only time manual sharing should occur is during testing or troubleshooting.

The /etc/dfs/dfstab file lists all the file systems your NFS server shares with its NFS clients. It also controls which clients can mount a file system. If you want to modify /etc/dfs/dfstab to add or delete a file system or to modify the way sharing is done, you edit the file with a text editor, such as vi. The next time the computer enters the multi-user-server milestone, the system reads the updated /etc/dfs/dfstab to determine which file systems should be shared automatically.

NOTE
/etc/dfs/dfstab: The system does not need to be rebooted just to share file systems listed in the /etc/dfs/dfstab file. The system also reads entries in the /etc/dfs/dfstab file when the nfs/server service is enabled:

# svcadm enable nfs/server<cr>

when the nfs/server service is restarted:

# svcadm restart nfs/server<cr>

or when you issue the /usr/sbin/shareall command:

# shareall<cr>

Each line in the dfstab file consists of a share command, as shown in the following example:

# more /etc/dfs/dfstab<cr>

The system responds by displaying the contents of /etc/dfs/dfstab:

#       Place share(1M) commands here for automatic execution
#       on entering init state 3.
#
#       Issue the command 'svcadm enable network/nfs/server' to
#       run the NFS daemon processes and the share commands, after adding
#       the very first entry to this file.
#
#       share [-F fstype] [ -o options] [-d "<text>"] <pathname> [resource]
#       .e.g,
#       share  -F nfs  -o rw=engineering  -d "home dirs"  /export/home2
share -F nfs /export/install/sunfire
share -F nfs /jumpstart
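Because every non-comment line in dfstab is just a share command, the paths a server will share at boot can be pulled out with a short awk filter. This is a hypothetical one-liner for illustration, not a standard Solaris tool:

```shell
# List the pathnames shared by a dfstab-style file: take the last field
# of every line whose first word is "share" (comment lines start with "#").
dfstab='share -F nfs /export/install/sunfire
share -F nfs /jumpstart'
printf '%s\n' "$dfstab" | awk '$1 == "share" { print $NF }'
```

For the example file above, this prints /export/install/sunfire and /jumpstart. Note that a quoted -d description containing spaces would defeat this simple last-field trick, so it is only a quick inspection aid.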

The /usr/sbin/share command exports a resource or makes a resource available for mounting. The share command can be run at the command line to achieve the same results as the /etc/dfs/dfstab file, but you should use this method only when testing. This is the syntax for the share command:

share -F <FSType> -o <options> -d <description> <pathname>

where <pathname> is the name of the file system to be shared. If share is invoked with no arguments, it displays all shared file systems. Table 2.9 describes the options of the share command.

Table 2.9 share Command Syntax

-F <FSType>   Specifies the file system type, such as NFS. If the -F option is omitted, the first file system type listed in /etc/dfs/fstypes is used as the default (nfs).

-o <options>  One of the following options:

rw: Makes <pathname> shared read-write to all clients. This is also the default behavior.
rw=<client>[:<client>]...: Makes <pathname> shared read-write, but only to the listed clients. No other systems can access <pathname>.
ro: Makes <pathname> shared read-only to all clients.
ro=<client>[:<client>]...: Makes <pathname> shared read-only, but only to the listed clients. No other systems can access <pathname>.
anon=<uid>: Sets <uid> to be the effective user ID (UID) of unknown users. By default, unknown users are given the effective UID nobody. If <uid> is set to -1, access is denied.
aclok: Allows the NFS server to do access control for NFS version 2 clients (running Solaris 2.4 or earlier). When aclok is set on the server, maximum access is given to all clients. For example, with aclok set, if anyone has read permissions, everyone does. If aclok is not set, minimal access is given to all clients.
nosuid: Causes the server file system to silently ignore any attempt to enable the setuid or setgid mode bits. By default, clients can create files on the shared file system if the setuid or setgid mode is enabled. See Solaris 10 System Administration Exam Prep (Exam CX-310-200), Part I for a description of setuid and setgid.
public: Enables NFS browsing of the file system by a WebNFS-enabled browser. Only one file system per server can use this option. The -ro=<list> and -rw=<list> options can be included with this option.
index=<file>: Loads a file rather than a listing of the directory containing this specific file when the directory is referenced by an NFS uniform resource locator (URL).
nosub: Prevents clients from mounting subdirectories of shared directories. This only applies to NFS versions 2 and 3 because NFS version 4 does not use the Mount protocol.

root=<host>[:<host>]...: Specifies that only root users from the specified hosts have root access. By default, no host has root access, so root users are mapped to an anonymous user ID (see the description of the anon=<uid> option).
sec=<mode>: Uses one or more of the security modes specified by <mode> to authenticate clients. The <mode> option establishes the security mode of NFS servers. If the NFS connection uses the NFS version 2 protocol, the NFS client uses the default security mode, which is currently sys. If the NFS connection uses the NFS version 3 protocol, the NFS clients must query the server for the appropriate <mode> to use. NFS clients can force the use of a specific security mode by specifying the sec=<mode> option on the command line; however, if the file system on the server is not shared with that security mode, the client may be denied access. The following are valid modes:

sys: Use AUTH_SYS authentication. The user's UNIX user ID and group IDs are passed in clear text on the network, unauthenticated by the NFS server.
dh: Use a Diffie-Hellman public key system.
krb5: Use the Kerberos version 5 authentication.
krb5i: Use the Kerberos version 5 authentication with integrity checking to verify that the data has not been compromised.
krb5p: Use the Kerberos version 5 authentication with integrity checking and privacy protection (encryption). This is the most secure, but it also incurs additional overhead.
none: Use null authentication.

log=<tag>: Enables NFS server logging for the specified file system. The optional <tag> determines the location of the related log files. The tag is defined in /etc/nfs/nfslog.conf. If no tag is specified, the default values associated with the global tag in /etc/nfs/nfslog.conf are used. NFS logging is described later in this chapter, in the section "NFS Server Logging." Support for NFS logging is only available for NFS versions 2 and 3.

-d <description>  Describes the resource being shared.

To share a file system as read-only every time the system is started, you add this line to the /etc/dfs/dfstab file:

share -F nfs -o ro /data1

After you edit the /etc/dfs/dfstab file, restart the NFS server to start the NFS server daemons by either rebooting the system, typing the shareall command, or restarting the nfs/server service as follows:

# svcadm restart nfs/server<cr>

You can also execute the share command directly from the command line. When you execute the share command, the nfs/server service is enabled, and all the required NFS server daemons are started automatically. However, because you did not make an entry in the /etc/dfs/dfstab file, the share is not persistent across reboots.
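The colon-separated client lists used by the ro= and rw= options are easy to assemble in a script. A small sketch, using hypothetical host names hostA and hostB and the /export/home2 path from the dfstab example:

```shell
# Build an access-list option such as rw=hostA:hostB for the share command.
clients='hostA hostB'
opt="rw=$(echo $clients | tr ' ' ':')"

# Print the share command that would be placed in /etc/dfs/dfstab:
echo "share -F nfs -o $opt /export/home2"
```

This prints: share -F nfs -o rw=hostA:hostB /export/home2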

At startup, when the system enters the multi-user-server milestone, mountd and nfsd are not started if the /etc/dfs/dfstab file does not contain a share command. Even when you enable the nfs/server service, if the /etc/dfs/dfstab file does not contain a share command, the service remains disabled.

After you have made an initial entry in the /etc/dfs/dfstab file and have executed either the shareall or svcadm enable nfs/server command, you can add entries to the /etc/dfs/dfstab file without restarting the daemons. You simply execute the shareall command, and any new entries in the /etc/dfs/dfstab file are shared. You can share additional file systems by typing the share command directly from the command line. Be aware, however, that if you don't add the entry to the /etc/dfs/dfstab file, the file system is not automatically shared the next time the system is restarted. Remember that the NFS server daemons must be running in order for a shared resource to be available to the NFS clients.

EXAM ALERT
File system sharing: The exam often has at least one question related to the sharing of file systems.

Another place to find information on shared resources is in the server's /etc/dfs/sharetab file. This file contains a list of the resources currently being shared.

The dfshares command displays information about the shared resources that are available to the host from an NFS server. Here is the syntax for dfshares:

dfshares <servername>

You can view the shared file systems on a remote NFS server by using the dfshares command, like this:

# dfshares apollo<cr>

If no <servername> is specified, all resources currently being shared on the local host are displayed.

Mounting a Remote File System

Solaris 10 System Administration Exam Prep (Exam CX-310-200), Part I describes how to mount a local file system by using the mount command. You can use the same mount command to mount a shared file system on a remote host using NFS. Here is the syntax for mounting NFS file systems:

mount -F nfs <options> <-o specific-options> <-O> <server>:<file-system> <mount-point>

In this syntax, <server> is the name of the NFS server in which the file system is located, <file-system> is the name of the shared file system on the NFS server, and <mount-point> is the name of the local directory that serves as the mount point. As you can see, this is similar to mounting a local file system. The options for the mount command are described in Table 2.10.

Table 2.10 NFS mount Command Syntax

-F nfs   Specifies the FSType on which to operate. In this case, the value is nfs.
-r       Mounts the specified file system as read-only.
-m       Does not append an entry to the /etc/mnttab table of the mounted file systems.

-o <specific-options>   Can be any of the following options, separated by commas:

rw | ro: The resource is mounted read-write or read-only. The default is rw.
suid | nosuid: setuid execution is enabled or disabled. The default is suid.
remount: If a file system is mounted as read-only, this option remounts it as read-write.
bg | fg: If the first attempt to mount the remote file system fails, this option retries it in the background (bg) or in the foreground (fg). The default is fg.
quota: This option checks whether the user is over the quota on this file system. If the file system has quotas enabled on the server, quotas are still checked for operations on this file system.
noquota: This option prevents quota from checking whether the user has exceeded the quota on this file system. If the file system has quotas enabled on the server, quotas are still checked for operations on this file system.
nocto: Do not perform the normal close-to-open consistency. This option can be used when only one client is accessing a specified file system. In this case, performance may be improved, but it should be used with caution.
noac: Suppress data and attribute caching.
actimeo=<n>: Set minimum and maximum times for directories and regular files, in seconds.
acdirmax=<n>: The maximum time that cached attributes are held after directory update. The default is 60 seconds.
acdirmin=<n>: The minimum time that cached attributes are held after directory update. The default is 30 seconds.
acregmax=<n>: The maximum time that cached attributes are held after file modification. The default is 60 seconds.
acregmin=<n>: The minimum time that cached attributes are held after file modification. The default is 3 seconds.
grpid: The GID of a new file is unconditionally inherited from that of the parent directory, overriding any set-GID options.
forcedirectio | noforcedirectio: If the file system is mounted with forcedirectio, data is transferred directly between client and server, with no buffering on the client. Using noforcedirectio causes buffering to be done on the client.

intr | nointr: Enables or does not enable keyboard interrupts to kill a process that hangs while waiting for a response on a hard-mounted file system. The default is intr, which makes it possible for clients to interrupt applications that might be waiting for an NFS server to respond.

port=<n>: Specifies the server IP port number. The default is NFS_PORT.

proto=netid | rdma: The default transport is the first rdma protocol supported by both client and server. If there is no rdma, TCP is used and, failing that, UDP. Note that NFS version 4 does not use UDP, so if you specify proto=udp, NFS version 4 is not used.

public: Forces the use of the public file handle when connecting to the NFS server.

retrans=<n>: Sets the number of NFS retransmissions to <n>. The default value is 5. For connection-oriented transports, this option has no effect, because it is assumed that the transport will perform retransmissions on behalf of NFS.

retry=<n>: Specifies the number of times to retry the mount operation. The default is 10000.

rsize=<n>: Sets the read buffer size to <n> bytes. The default value is 32768 with version 3 or 4 of the NFS protocol. The default can be negotiated down if the server prefers a smaller transfer size. With version 2, the default value is 8192.

sec=<mode>: Sets the security mode for NFS transactions. NFS version 3 and 4 mounts negotiate a security mode: version 3 mounts pick the first mode supported, whereas version 4 mounts try each supported mode in turn, until one is successful.

soft | hard: Returns an error if the server does not respond (soft), or continues the retry request until the server responds (hard). The default value is hard. If you're using hard, the system appears to hang until the NFS server responds.

timeo=<n>: Sets the NFS timeout to <n> tenths of a second. The default value is 11 tenths of a second for connectionless transports and 600 tenths of a second for connection-oriented transports.

vers=<NFS-version-number>: By default, the version of the NFS protocol used between the client and the server is the highest one available on both systems. If the NFS server does not support the NFS 4 protocol, the NFS mount uses version 2 or 3.

wsize=<n>: Sets the write buffer size to <n> bytes. The default value is 32768 with version 3 or 4 of the NFS protocol. The default can be negotiated down if the server prefers a smaller transfer size. With NFS version 2, the default value is 8192.

xattr | noxattr: Allows or disallows the creation of extended attributes. The default is xattr (allow extended attributes).

-O: The overlay mount lets the file system be mounted over an existing mount point, making the underlying file system inaccessible. If a mount is attempted on a preexisting mount point and this flag is not set, the mount fails, producing the "device busy" error.
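To see how several of these options combine on a real command line, here is a hedged sketch; the server thor, its share paths, and the mount points are assumptions for the example:

```shell
# Read-only data that must never hang the client: soft, retried in the background.
mount -F nfs -o ro,soft,bg,retry=5 thor:/export/docs /mnt/docs

# Read-write data: hard,intr (the recommendation discussed below), larger buffers.
mount -F nfs -o rw,hard,intr,rsize=32768,wsize=32768 thor:/export/data /mnt/data
```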

To mount a file system called /data that is located on an NFS server called thor, you issue the following command, as root, from the NFS client:

# mount -F nfs -o ro thor:/data /thor_data<cr>

In this case, the /data file system from the server thor is mounted read-only on /thor_data on the local system. Mounting from the command line enables temporary viewing of the file system. If the umount command is issued or the client is restarted, the mount is lost. If you would like this file system to be mounted automatically at every startup, you can add the following line to the /etc/vfstab file:

thor:/data  -  /thor_data  nfs  -  yes  ro

File systems mounted with the bg option indicate that mount is to retry in the background if the server's mount daemon (mountd) does not respond when, for example, the NFS server is restarted. From the NFS client, mount retries the request up to the count specified in the retry=<n> option. After the file system is mounted, each NFS request made in the kernel waits a specified number of seconds for a response (specified with the timeo=<n> option). If no response arrives, the timeout is multiplied by 2, and the request is retransmitted. When the number of retransmissions reaches the number specified in the retrans=<n> option, a file system mounted with the soft option returns an error, and a file system mounted with the hard option prints a warning message and continues to retry the request.

Sun recommends that file systems mounted as read-write or containing executable files should always be mounted with the hard option. If you use soft-mounted file systems, unexpected I/O errors can occur. For example, consider a write request: if the NFS server goes down, the pending write request simply gives up, resulting in a corrupted file on the remote file system. A read-write file system should always be mounted with the hard and intr options. This lets users make their own decisions about killing hung processes. You use the following to mount a file system named /data located on a host named thor with the hard and intr options:

# mount -F nfs -o hard,intr thor:/data /data<cr>

If a file system is mounted hard and the intr option is not specified, the process hangs when the NFS server goes down or the network connection is lost, and it continues to hang until the NFS server or network connection becomes operational. For a terminal process, this can be annoying. If intr is specified, sending an interrupt signal to the process kills it. For a terminal process, you can do this by pressing Ctrl+C. For a background process, sending an INT or QUIT signal usually works:

# kill -QUIT 3421<cr>

NOTE - Overkill won't work: Sending a KILL signal (-9) does not terminate a hung NFS process.

To view resources that can be mounted on the local or remote system, you use the dfmounts command:

# dfmounts sunfire<cr>

The system responds with a list of file systems currently mounted on sunfire:

RESOURCE  SERVER   PATHNAME  CLIENTS
  -       sunfire  /usr      192.168.1.201
  -       sunfire  /usr/dt   192.168.1.201

NOTE - mount permissions: The mount and umount commands require root access. The umount command and /etc/vfstab file are described in Solaris 10 System Administration Exam Prep (Exam CX-310-200), Part I.

Sometimes you rely on NFS mount points for critical information. If the NFS server were to go down unexpectedly, you would lose the information contained at that mount point. You can address this issue by using client-side failover. With client-side failover, you specify an alternative host to use in case the primary host fails. The primary and alternative hosts should contain equivalent directory structures and identical files. If the NFS server goes down, failover uses the next alternative server to access files.

To set up client-side failover, on the NFS client, mount the file system read-only; failover is available only on read-only file systems. You can do this from the command line or by adding an entry to the /etc/vfstab file that looks like the following:

zeus,thor:/data  -  /remote_data  nfs  -  no  ro

If multiple file systems are named and the first server in the list is down, failover uses the next alternative server, which might have different paths to the file system. To mount a replicated set of NFS file systems from the command line, you use the following mount command:

# mount -F nfs -o ro zeus:/usr/local/data,thor:/home/data /usr/local/data<cr>

Replication is discussed further in the "AutoFS" section, later in this chapter.

NFS Server Logging

A feature that first appeared in Solaris 8 is NFS server logging. NFS server logging provides event and audit logging functionality to networked file systems. The daemon nfslogd provides NFS logging, and you enable it by using the log=<tag> option in the share command, as described earlier in this chapter, in the section "Setting Up NFS." When NFS logging is enabled, the kernel records all NFS operations on the file system in a buffer. The data recorded includes a timestamp, the client IP address (or hostname if it can be resolved), the UID of the requestor, the file handle of the resource that is being accessed, and the type of operation that occurred. The nfslogd daemon converts this information into ASCII records that are stored in ASCII log files.

NOTE - Logging pros and cons: NFS server logging is particularly useful for being able to audit operations carried out on a shared file system. The logging can also be extended to audit directory creations and deletions. With logging enabled, however, the logs can become large and consume huge amounts of disk space. It is necessary to configure NFS logging appropriately so that the logs are pruned at regular intervals.

You can change the configuration settings in the NFS server logging configuration file, /etc/nfs/nfslog.conf. This file defines pathnames, filenames, and types of logging to be used by nfslogd. Each definition is associated with a tag. The global tag defines the default values, but you can create new tags and specify them for each file system you share. The NFS operations to be logged by nfslogd are defined in the /etc/default/nfslogd configuration file. To enable NFS server logging, follow the procedure described in Step By Step 2.5.

STEP BY STEP
2.5 Enabling NFS Server Logging

1. As root, share the NFS resource by typing the following entry at the command prompt:

# share -F nfs -o ro,log=global <file-system-name><cr>

Add this entry to your /etc/dfs/dfstab file if you want it to go into effect every time the server is booted.

2. If the nfslogd daemon is not already running, start it by entering this:

# /usr/lib/nfs/nfslogd<cr>

EXAM ALERT - NFS server logging configuration: You should be familiar with the concept of NFS server logging, especially the location of the configuration file (/etc/nfs/nfslog.conf). The nfs directory in the path can be easily forgotten, and you lose an exam point unnecessarily if you leave it out.

NOTE - No logging in NFS version 4: Remember that NFS logging is not supported in NFS version 4.
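For orientation, the stock /etc/nfs/nfslog.conf defines the global tag along these lines; treat this as a sketch (exact defaults can vary by release), and note that the publogs tag and its paths are invented for the example:

```
# /etc/nfs/nfslog.conf (sketch)
global  defaultdir=/var/nfs \
        log=nfslog fhtable=fhtable buffer=nfslog_workbuffer

# A custom tag that sends logging for one share to its own files:
publogs defaultdir=/var/nfs/pub \
        log=publog fhtable=pubfhtable buffer=pubworkbuffer
```

A share would then reference the custom tag with share -F nfs -o ro,log=publogs <file-system-name>.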

Troubleshooting NFS Errors

Objective:
. Troubleshoot various NFS errors

After you configure NFS, it's not uncommon to encounter various NFS error messages. The following sections describe some of the common errors you may encounter while using NFS.

The NFS: service not responding Error

This error message indicates that the NFS server may not be running the required NFS server daemons. To solve the problem, log in to the NFS server and type:

# who -r<cr>

Make sure that the server is at run level 3. Check to verify that the nfsd daemon is running by issuing the following command:

# pgrep -fl nfsd<cr>

If nfsd is not running, verify that you have the directory shared in the /etc/dfs/dfstab file, and try starting the nfs/server service:

# svcadm enable svc:/network/nfs/server<cr>

The stale NFS file handle Message

This message appears when a file was deleted on the NFS server and replaced with a file of the same name. In this case, the NFS server generates a new file handle for the new file; if the client is still using the old file handle, the server returns an error that the file handle is stale. (If a file on the NFS server was simply renamed, the file handle remains the same.) A solution to this problem is to unmount and remount the NFS resource on the client.

The RPC: Program not registered Error

You may receive this message while trying to mount a remote NFS resource or during the boot process. This message indicates that the NFS server is not running the mountd daemon. To solve the problem, log in to the NFS server and type:

# who -r<cr>

Make sure that the server is at run level 3. Check to verify that the mountd daemon is running by issuing the following command:

# pgrep -fl mountd<cr>

If mountd is not running, verify that you have the directory shared in the /etc/dfs/dfstab file, and try starting the nfs/server service:

# svcadm enable svc:/network/nfs/server<cr>

The server not responding Error

This message appears when the NFS server is inaccessible for some reason; your NFS server might have failed. To solve the problem, verify that network connectivity exists between the client and the NFS server.

The NFS server not responding, still trying Message

This message appears when the NFS server is inaccessible for some reason. It could be that the NFS server is too busy to respond to the NFS request. To solve the problem, verify that network connectivity exists between the client and the NFS server. You may need to set up a failover server or move the NFS resource to a server that has the capacity to better respond to the NFS requests.

The No such file or directory Error

You may receive this message while trying to mount a remote resource or during the boot process. This error indicates that an unknown file resource is on the NFS server. To solve the problem, make sure that you are specifying the correct directory name that is shared on the server. Check the spelling on the command line or in the /etc/vfstab file. Execute the dfshares command on the server to verify the name of the shared resource.

The RPC: Unknown host Error

This message indicates that the hostname of the NFS server is missing from the hosts table. To solve the problem, verify that you've typed the server name correctly and that the hostname can be resolved properly.

AutoFS

Objective:
. Explain and manage AutoFS and use automount maps (master, direct, and indirect) to configure automounting.

When a network contains even a moderate number of systems, all trying to mount file systems from each other, managing NFS can quickly become a nightmare. The AutoFS facility, also called the automounter, is designed to handle such situations by providing a method by which remote directories are mounted automatically, only when they are being used.

AutoFS, a client-side service, is a file system structure that provides automatic mounting. File systems shared through the NFS service can be mounted via AutoFS. The AutoFS service mounts file systems as the user accesses them and unmounts file systems when they are no longer required, without any intervention on the part of the user. Mounting does not need to be done at system startup, and the user does not need to know the superuser password to mount a directory (normally file system mounts require superuser privilege). With AutoFS, users do not use the mount and umount commands. When a user or an application accesses an NFS mount point, the mount is established. When the file system is no longer needed or has not been accessed for a certain period, the file system is automatically unmounted. As a result, network overhead is lower, the system boots faster because NFS mounts are done later, and systems can be shut down with fewer ill effects and hung processes. However, some file systems still need to be mounted by using the mount command with root privileges. For example, on a diskless computer you must mount / (root), /usr, and /usr/kvm by using the mount command, and you cannot take advantage of AutoFS for them.

Two programs support the AutoFS service: automount and automountd. Both are run when a system is started by the svc:/system/filesystem/autofs:default service identifier. AutoFS is initialized by automount, which is run automatically when a system is started. The automount command, which is called at system startup time, reads the master map file /etc/auto_master to create the initial set of AutoFS mounts. These mounts are not automatically mounted at startup time. They are trigger points, also called trigger nodes, under which file systems are mounted in the future. The automount service sets up the AutoFS mount points and associates the information in the /etc/auto_master file with each mount point. The automount daemon, automountd, runs continuously, mounting and unmounting remote directories on an as-needed basis.

The following is the syntax for automount:

automount [-t <duration>] [-v]

Table 2.11 describes the syntax options for the automount command.

Table 2.11 automount Command Syntax

-t <duration>: Sets the time, in seconds, that a file system is to remain mounted if it is not being used. The default value is 600 seconds.

-v: Selects verbose mode. Running the automount command in verbose mode allows easier troubleshooting.

If it is not specifically set, the value for <duration> of an unused mount is set to 10 minutes. In most circumstances, this value is good. However, on systems that have many automounted file systems, you might need to decrease the <duration> value. In particular, if a server has many users, active checking of the automounted file systems every 10 minutes can be inefficient; checking AutoFS every 300 seconds (5 minutes) might be better. You can edit the /etc/default/autofs script to change the default values and make them persistent across reboots.

The automountd daemon handles the mount and unmount requests from the AutoFS service. If AutoFS receives a request to access a file system that is not currently mounted, AutoFS calls automountd, which mounts the requested file system under the trigger node. The syntax of this command is as follows:

automountd [-Tnv] [-D <name>=<value>]

Table 2.12 describes the syntax options for the automountd command.

Table 2.12 automountd Command Syntax

-T: Displays each remote procedure call (RPC) to standard output. You use this option for troubleshooting.

-n: Disables browsing on all AutoFS nodes.

-v: Logs all status messages to the console.

-D <name>=<value>: Substitutes value for the automount map variable indicated by <name>. The default <value> for the automount map is /etc/auto_master.

The automountd daemon is completely independent from the automount command. Because of this separation, it is possible to add, delete, or change map information without first having to stop and start the automountd daemon process.
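If you settle on the 300-second timeout suggested above, the change can be made persistent in /etc/default/autofs. A minimal sketch; the variable names follow the stock Solaris 10 file, but verify them on your release:

```
# /etc/default/autofs (excerpt)
AUTOMOUNT_TIMEOUT=300     # same effect as automount -t 300
AUTOMOUNT_VERBOSE=TRUE    # same effect as automount -v
AUTOMOUNTD_VERBOSE=TRUE   # same effect as automountd -v
```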

When AutoFS runs, the system goes through the following steps:

1. If a request is made to access a file system at an AutoFS mount point, AutoFS intercepts the request.
2. AutoFS sends a message to the automountd daemon for the requested file system to be mounted.
3. automountd locates the file system information in a map and performs the mount.
4. AutoFS allows the intercepted request to proceed.
5. AutoFS unmounts the file system after a period of inactivity.

NOTE - Automatic, not manual, mounts: Mounts managed through the AutoFS service should not be manually mounted or unmounted. Even if the operation is successful, the AutoFS service does not check that the object has been unmounted, and this can result in possible inconsistency. A restart clears all AutoFS mount points.

automount and automountd initiate at startup time from the svc:/system/filesystem/autofs service identifier.

To see who might be using a particular NFS mount, you use the showmount command. The syntax for showmount is shown here:

showmount <options>

The options for the showmount command are described in Table 2.13.

Table 2.13 showmount Command Syntax

-a: Prints all the remote mounts in the format <hostname>:<directory>, where <hostname> is the name of the client and <directory> is the root of the file system that has been mounted.

-d: Lists directories that have been remotely mounted by clients.

-e: Prints the list of shared file systems.

The following example illustrates the use of showmount to display file systems currently mounted from remote systems. On the NFS server named neptune, you could enter the following command:

# showmount -a<cr>

The system would display the following information:

apollo:/export/home/neil

showmount says that the remote host, apollo, is currently mounting /export/home/neil on this server.

AutoFS Maps

The behavior of the automounter is governed by its configuration files, called maps. Map files contain information, such as the location of other maps to be searched or the location of a user's home directory. AutoFS searches maps to navigate its way through the network. The three types of automount maps are the master map, the direct map, and the indirect map. Each is described in the following sections.

Master Maps

To start the navigation process, the automount command reads the master map at system startup. The master map is a list that specifies all the maps that AutoFS should check. It lists all direct and indirect maps and their associated directories; in other words, this map tells the automounter about map files and mount points. The following example shows what an auto_master file could contain:

# Master map for automounter
#
+auto_master
/net    -hosts      -nosuid,nobrowse
/home   auto_home   -nobrowse

This example shows the default auto_master file. The lines that begin with # are comments. The line that contains +auto_master specifies the AutoFS NIS table map, which is explained in Chapter 5, "Naming Services." Each line thereafter in the master map, which is in the /etc/auto_master file, associates a directory with a map, using the following syntax:

<mount-point> <map-name> <mount-options>

Each of these fields is described in Table 2.14.

Table 2.14 /etc/auto_master Fields

<mount-point>: The full (absolute) pathname of a directory that is used as the mount point. If the directory does not exist, AutoFS creates it, if possible. If the directory does exist and is not empty, mounting it hides its contents; in that case, AutoFS issues a warning. Using the notation /- as a mount point indicates that a direct map with no particular mount point is associated with the map.

<map-name>: The map that AutoFS uses to find directions to locations or mount information. If the name is preceded by a slash (/), AutoFS interprets the name as a local file. Otherwise, AutoFS searches for the mount information by using the search specified in the name service switch configuration file (/etc/nsswitch.conf). Name service switches are described in Chapter 5, "Naming Services."

<mount-options>: An optional comma-separated list of options that apply to the mounting of the entries specified in <map-name>, unless the entries list other options. Options for each specific type of file system are listed in Table 2.10. For NFS-specific mount points, the bg (background) and fg (foreground) options do not apply.

NOTE - Map format: A line that begins with a pound sign (#) is a comment, and everything that follows it until the end of the line is ignored. The maximum number of characters in an entry is 1,024. To split long lines into shorter ones, you can put a backslash (\) at the end of the line.

Every Solaris installation comes with a master map, called /etc/auto_master, that has the default entries described earlier. Without any changes to the generic system setup, clients should be able to access remote file systems through the /net mount point. The following entry in /etc/auto_master allows this to happen:

/net -hosts -nosuid,nobrowse

For example, let's say that you have an NFS server named apollo that has the /export file system shared. Another system, named zeus, exists on the network. This system has the default /etc/auto_master file; by default, it has a directory named /net. If you type the following, the command comes back showing that the directory is empty; nothing is in it:

# ls /net<cr>

Now type this:

# ls /net/apollo<cr>

The system responds with this:

export

Why was the /net directory empty the first time you issued the ls command? And when you issued ls /net/apollo, why did it find a subdirectory? This is the automounter in action. When you specified /net with a hostname, automountd looked at the map file (in this case, /etc/hosts), found apollo and its IP address, went to apollo, found the exported file system, and created a local mount point for /net/apollo/export. It also added this entry to the /etc/mnttab table of the mounted file systems:

-hosts /net/apollo/export autofs nosuid,ignore,nest,nobrowse,dev=2b80005 941812769

This entry in the /etc/mnttab table is referred to as a trigger node (because changing to the specified directory "triggers" the mount of the file system). If you enter mount, you won't see anything mounted at this point:

# mount<cr>

The system responds with this:

/ on /dev/dsk/c0t3d0s0 read/write/setuid/largefiles on Mon Aug 11 09:45:21 2008
/usr on /dev/dsk/c0t3d0s6 read/write/setuid/largefiles on Mon Aug 11 09:45:21 2008
/proc on /proc read/write/setuid on Mon Aug 11 09:45:21 2008
/dev/fd on fd read/write/setuid on Mon Aug 11 09:45:21 2008
/export on /dev/dsk/c0t3d0s3 setuid/read/write/largefiles on Mon Aug 11 09:45:24 2008
/export/swap on /dev/dsk/c0t3d0s4 setuid/read/write/largefiles on Mon Aug 11 09:45:24 2008
/tmp on swap read/write on Mon Aug 11 09:45:24 2008

Now type this:

# ls /net/apollo/export<cr>

You should have a bit of a delay while automountd mounts the file system. It responds with the following:

files lost+found

The files listed are files located on apollo, in the /export directory. If you enter mount, you see a file system mounted from apollo that wasn't listed before:

# mount<cr>

/ on /dev/dsk/c0t3d0s0 read/write/setuid/largefiles on Mon Aug 11 09:45:21 2008
/usr on /dev/dsk/c0t3d0s6 read/write/setuid/largefiles on Mon Aug 11 09:45:21 2008
/proc on /proc read/write/setuid on Mon Aug 11 09:45:21 2008
/dev/fd on fd read/write/setuid on Mon Aug 11 09:45:21 2008
/export on /dev/dsk/c0t3d0s3 setuid/read/write/largefiles on Mon Aug 11 09:45:24 2008
/export/swap on /dev/dsk/c0t3d0s4 setuid/read/write/largefiles on Mon Aug 11 09:45:24 2008
/tmp on swap read/write on Mon Aug 11 09:45:24 2008
/net/apollo/export on apollo:/export nosuid/remote on Fri Aug 15 09:48:03 2008

Now look at the /etc/mnttab file again, and you will see additional entries:

# more /etc/mnttab<cr>

/dev/dsk/c0t3d0s0 / ufs rw,suid,dev=800018,largefiles 941454346
/dev/dsk/c0t3d0s6 /usr ufs rw,suid,dev=80001e,largefiles 941454346
/proc /proc proc rw,suid,dev=2940000 941454346
fd /dev/fd fd rw,suid,dev=2a00000 941454346
/dev/dsk/c0t3d0s3 /export ufs suid,rw,largefiles,dev=80001b 941454349
/dev/dsk/c0t3d0s4 /export/swap ufs suid,rw,largefiles,dev=80001c 941454349
swap /tmp tmpfs dev=1 941454349
-hosts /net autofs ignore,indirect,nosuid,nobrowse,dev=2b80001 941454394
auto_home /home autofs ignore,indirect,nobrowse,dev=2b80002 941454394
-xfn /xfn autofs ignore,indirect,dev=2b80003 941454394
sunfire:vold(pid246) /vol nfs ignore,noquota,dev=2b40001 941454409
-hosts /net/apollo/export autofs nosuid,ignore,nest,nobrowse,dev=2b80005 941812769
apollo:/export /net/apollo/export nfs nosuid,dev=2b40003 941813283

If the /net/apollo/export directory is accessed, the AutoFS service completes the process with these steps:

1. It pings the server's mount service to see if it's alive.
2. It mounts the requested file system under /net/apollo/export. Now the /etc/mnttab file contains the following entries:

-hosts /net/apollo/export autofs nosuid,ignore,nest,nobrowse,dev=2b80005 941812769
apollo:/export /net/apollo/export nfs nosuid,dev=2b40003 941813283

Because the automounter lets all users mount file systems, root access is not required. AutoFS also provides for automatic unmounting of file systems, so there is no need to unmount them when you are done.

Direct Maps

A direct map lists a set of unrelated mount points that might be spread out across the file system. With a direct map, there is a direct association between a mount point on the client and a directory on the server; the map has a full pathname and indicates the relationship explicitly. A complete path (for example, /usr/man) is listed in the map as a mount point, and these mounts appear as links with the name of the direct mount point. A direct map is specified in a configuration file called /etc/auto_direct. This is a typical /etc/auto_direct map:

/usr/local       -ro \
   /share        ivy:/export/local/share \
   /src          ivy:/export/local/src
/usr/man         -ro apollo:/usr/man zeus:/usr/man neptune:/usr/man
/usr/game        -ro peach:/usr/games
/usr/spool/news  -ro jupiter:/usr/spool/news saturn:/var/spool/news

NOTE - Map naming: The direct map name /etc/auto_direct is not a mandatory name; it can be any name you choose, although it should be meaningful to the system administrator. It is used here as an example of a direct map. The name of a direct map must be added to the /etc/auto_master file.

A good example of where to use a direct mount point is /usr/man. The /usr directory contains many other directories, such as /usr/bin and /usr/local. If you used an indirect map for /usr/man, the local /usr file system would be the mount point, and you would cover up the local /usr/bin and /usr/etc directories when you established the mount. A direct map lets the automounter complete mounts on a single directory entry such as /usr/man; therefore, it cannot be an indirect mount point.

Lines in direct maps have the following syntax:

<key> <mount-options> <location>

The fields of this syntax are described in Table 2.15.
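Because a direct map is consulted only if the master map points to it, the final step is an entry in /etc/auto_master that uses the /- notation described earlier. A minimal sketch, reusing the example map name /etc/auto_direct:

```
# /etc/auto_master (excerpt)
+auto_master
/net    -hosts            -nosuid,nobrowse
/home   auto_home         -nobrowse
/-      /etc/auto_direct  -ro
```

Running the automount command after editing the master map makes the automounter pick up the new entry without a reboot.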

Table 2.15 Direct Map Fields

<key>: Indicates the pathname of the mount point in a direct map. This pathname specifies the local directory on which to mount the automounted directory.

<mount-options>: Indicates the options you want to apply to this particular mount. These options are required only if they differ from the map default options specified in the /etc/auto_master file. The available options are listed in Table 2.10. There is no concatenation of options between the automounter maps; any options added to an automounter map override all the options listed in previously searched maps. For instance, options included in the auto_master map would be overwritten by corresponding entries in any other map.

<location>: Indicates the remote location of the file system, specified as <server>:<pathname>. More than one location can be specified. The <pathname> should not include an automounted mount point; it should be the actual absolute path to the file system. For instance, the location of a home directory should be listed as server:/export/home/username, not as server:/home/username.

In the previous example of the /etc/auto_direct map file, the mount points /usr/man and /usr/spool/news list more than one location:

/usr/man         -ro apollo:/usr/man zeus:/usr/man neptune:/usr/man
/usr/spool/news  -ro jupiter:/usr/spool/news saturn:/var/spool/news

Multiple locations, such as those shown here, are used for replication, or failover. With multiple mount locations specified, the clients have the advantage of using failover. The benefit of replication is that the best available server is used automatically, without any effort required by the user. Not only is the best server automatically determined, but, if that server becomes unavailable, the client automatically uses the next-best server.

For the purposes of failover, a file system can be called a replica if each file is the same size and it is the same type of file system. Permissions, creation dates, and other file attributes are not a consideration. If the file size or the file system types are different, the remap fails and the process hangs until the old server becomes available.

Replication makes sense only if you mount a file system that is read-only, because you must have some control over the locations of files that you write or modify. You don't want to modify one server's files on one occasion and, minutes later, modify the "same" file on another server. An example of a good file system to configure as a replica is the manual (man) pages. In a large network, more than one server can export the current set of man pages. Which server you mount them from doesn't matter, as long as the server is running and sharing its file systems. If the file systems are configured as replicas, the clients have the advantage of using failover. For instance, you could mount the man pages from the

With failover, the sorting is checked once at mount time, to select one server from which to mount, and again if the mounted server becomes unavailable. Failover is particularly useful in a large network with many subnets. AutoFS chooses the nearest server and therefore confines NFS network traffic to a local network segment. The best server depends on a number of factors, including the number of servers supporting a particular NFS protocol level, the proximity of the server, and weighting. The process of selecting a server goes like this:

1. First, a count of the number of servers supporting the NFS version 2, 3, and 4 protocols is done. The protocol supported on the most servers is the protocol that is supported by default. This provides the client with the maximum number of servers to depend on.

2. After the largest subset of servers that have the same protocol version is found, that server list is sorted by proximity. Servers on the local subnet are given preference over servers on a remote subnet. The closest server is given preference, which reduces latency and network traffic.

3. If several servers are supporting the same protocol on the local subnet, the time to connect to each server is determined, and the fastest time is used.

If version 3 servers are most abundant, the sorting process becomes more complex. Normally, servers on the local subnet are given preference over servers on a remote subnet. A version 2 server on the local subnet can complicate matters because it could be closer than the nearest version 3 server. If there is a version 2 server on the local subnet and the closest version 3 server is on a remote subnet, the version 2 server is given preference. This is checked only if there are more version 3 servers than version 2 servers, because version 3 servers will otherwise be chosen as long as a version 2 server on the local subnet is not being ignored. If there are more version 2 servers than version 3 servers, only a version 2 server is selected.

In servers with multiple network interfaces, AutoFS lists the hostname associated with each network interface as if it were a separate server. It then selects the nearest interface to the client.

You can influence the selection of servers at the same proximity level by adding a numeric weighting value in parentheses after the server name in the AutoFS map. Here's an example:

/usr/man   -ro   apollo,zeus(1),neptune(2):/usr/man

Servers without a weighting have a value of 0, which makes them the most likely servers to be selected. The higher the weighting value is, the less chance the server has of being selected. All other server-selection factors are more important than weighting. Weighting is considered only in selections between servers with the same network proximity.

In the following example, you set up a direct map for /usr/local on zeus. After the first two steps of the procedure, zeus has a directory called /usr/local with the following directories: bin, etc, files, and programs.

Follow the procedure shown in Step By Step 2.6.

STEP BY STEP 2.6 Creating a Direct Map

For this Step By Step, you need two systems: a local system (client) and a remote system named zeus. It does not matter what the local (client) system is named, but if your remote system is not named zeus, be sure to substitute your system's hostname.

Perform steps 1 and 2 on the remote system, zeus:

1. Create a directory named /usr/local, and share it:

# mkdir /usr/local<cr>
# share -F nfs /usr/local<cr>

2. Create the following files and directories in the /usr/local directory:

# mkdir /usr/local/bin /usr/local/etc<cr>
# touch /usr/local/files /usr/local/programs<cr>

# ls /usr/local<cr>

The following local directories are displayed:

bin etc files programs

Perform steps 3 through 5 on the local system (client):

3. Add the following entry in the master map file called /etc/auto_master:

/- /etc/auto_direct

4. Create the direct map file called /etc/auto_direct with the following entry:

/usr/local zeus:/usr/local

5. Because you're modifying a direct map, run automount to reload the AutoFS tables:

# automount<cr>

If you set up the automount direct map, the NFS mount point is established by using the direct map you have set up. If you have access to the /usr/local directory, you can see how the /usr/local directory is overwritten by the NFS mount. The contents of /usr/local have changed because the direct map has covered up the local copy of /usr/local (which contained the fasttrack and answerbook directories):

# ls /usr/local<cr>

You should see the following directories listed:

bin etc files programs

NOTE - Overlay mounting: The local contents of /usr/local have not been overwritten. After the NFS mount point is unmounted, the original contents of /usr/local are redisplayed.

If you enter the mount command, you see that /usr/local is now mounted remotely from zeus:

# mount<cr>
/ on /dev/dsk/c0t3d0s0 read/write/setuid/largefiles on Mon Aug 11 09:45:21 2008
/usr on /dev/dsk/c0t3d0s6 read/write/setuid/largefiles on Mon Aug 11 09:45:21 2008
/proc on /proc read/write/setuid on Mon Aug 11 09:45:21 2008
/dev/fd on fd read/write/setuid on Mon Aug 11 09:45:21 2008
/export on /dev/dsk/c0t3d0s3 setuid/read/write/largefiles on Mon Aug 11 09:45:24 2008
/export/swap on /dev/dsk/c0t3d0s4 setuid/read/write/largefiles on Mon Aug 11 09:45:24 2008
/tmp on swap read/write on Mon Aug 11 09:45:24 2008
/usr/local on zeus:/usr/local read/write/remote on Sat Aug 16 08:06:40 2008

Indirect Maps

Indirect maps are the simplest and most useful AutoFS maps. An indirect map uses a key's substitution value to establish the association between a mount point on the client and a directory on the server. Indirect maps are useful for accessing specific file systems, such as home directories, from anywhere on the network. The following entry in the /etc/auto_master file is an example of an indirect map:

/share   /etc/auto_share

With this entry in the /etc/auto_master file, /etc/auto_share is the name of the indirect map file for the mount point /share. For this entry, you need to create an indirect map file named /etc/auto_share, which would look like this:

# share directory map for automounter
#
ws   neptune:/export/share/ws

If the /share/ws directory is accessed, the AutoFS service creates a trigger node for /share/ws, and the following entry is made in the /etc/mnttab file:

-hosts /share/ws autofs nosuid,ignore,nobrowse,nest,dev=###

If the /share/ws directory is accessed, the AutoFS service completes the process with these steps:

1. It pings the server's mount service to see if it's alive.

2. It mounts the requested file system under /share. Now the /etc/mnttab file contains the following entries:

-hosts /share/ws autofs nosuid,ignore,nest,dev=###
neptune:/export/share/ws /share/ws nfs nosuid,dev=#### #####

Lines in indirect maps have the following syntax:

<key> <mount-options> <location>

The fields in this syntax are described in Table 2.16.

Table 2.16 Indirect Map Field Syntax

<key>: A simple name (with no slashes) in an indirect map.

<mount-options>: The options you want to apply to this particular mount. These options, which are described in Table 2.10, are required only if they differ from the map default options specified in the /etc/auto_master file.

<location>: The remote location of the file system, specified as <server:pathname>. More than one location can be specified. <pathname> should not include an automounted mount point; it should be the actual absolute path to the file system. For instance, the location of a directory should be listed as server:/usr/local, not as server:/net/server/usr/local.

For example, say an indirect map is being used with user home directories. As users log in to several different systems, their home directories are not always local to the system. It's convenient for the users to use the automounter to access their home directories, regardless of what system they're logged in to. To accomplish this, the default /etc/auto_master map file needs to contain the following entry:

/home   /etc/auto_home   -nobrowse

/etc/auto_home is the name of the indirect map file that contains the entries to be mounted under /home. A typical /etc/auto_home map file might look like this:

# more /etc/auto_home<cr>
dean      willow:/export/home/dean
william   cypress:/export/home/william
nicole    poplar:/export/home/nicole
glenda    pine:/export/home/glenda
steve     apple:/export/home/steve
burk      ivy:/export/home/burk
neil      -rw,nosuid   peach:/export/home/neil
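A map of per-user entries like the auto_home example shown earlier can often be collapsed into a single line, because an indirect map accepts a wildcard key. The following is a minimal sketch, assuming every home directory lives under /export/home on a single server; the hostname cedar is a placeholder:

```
# /etc/auto_home -- wildcard form: "*" matches any key (the login name),
# and "&" is replaced by the matched key.
*   cedar:/export/home/&
```

With this one entry, a reference to /home/dean triggers a mount of cedar:/export/home/dean, just as an explicit dean entry would. The explicit per-user form is still needed when home directories are spread across several servers, as in the example above.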

Now assume that the /etc/auto_home map is on the host oak. If user neil has an entry in the password database that specifies his home directory as /home/neil, whenever he logs in to computer oak, AutoFS mounts the directory /export/home/neil, which resides on the computer peach. Neil's home directory is mounted read-write, nosuid.

Under these conditions, user neil can run login, or rlogin, on any computer that has the /etc/auto_home map set up, and his home directory is mounted in place for him. Anyone, including Neil, has access to this path from any computer set up with the master map referring to the /etc/auto_home map in this example.

Another example of when to use an indirect map is when you want to make all project-related files available under a directory called /data that is to be common across all workstations at the site. Step By Step 2.7 shows how to do this.

NOTE - Indirect map names: As with direct maps, the actual name of an indirect map is up to the system administrator, but a corresponding entry must be placed in the /etc/auto_master file, and the name should be meaningful to the system administrator.

STEP BY STEP 2.7 Setting Up an Indirect Map

1. Add an entry for the /data directory to the /etc/auto_master map file:

/data   /etc/auto_data   -nosuid

The auto_data map file, named /etc/auto_data, determines the contents of the /data directory. Add the -nosuid option as a precaution. The -nosuid option prevents users from creating files with the setuid or setgid bit set.

2. Create the /etc/auto_data file and add entries to the auto_data map. The auto_data map is organized so that each entry describes a subproject.

3. Edit /etc/auto_data to create a map that looks like the following:

compiler   apollo:/export/data/&
window     apollo:/export/data/&
files      zeus:/export/data/&
drivers    apollo:/export/data/&
man        zeus:/export/data/&
tools      zeus:/export/data/&

NOTE - Using the entry key: The ampersand (&) at the end of each entry is an abbreviation for the entry key. For instance, the first entry is equivalent to compiler apollo:/export/data/compiler.

Because you changed the /etc/auto_master map, the final step is to reload the AutoFS tables:

# automount<cr>

4. Now, if a user changes to the /data/compiler directory, the mount point to apollo:/export/data/compiler is created:

# cd /data/compiler<cr>

NOTE - Directory creation: There is no need to create the directory /data/compiler to be used as the mount point. AutoFS creates all the necessary directories before establishing the mount.

5. Type mount to see the mount point that was established:

# mount<cr>

The system shows that /data/compiler is mapped to apollo:/export/data/compiler:

/data/compiler on apollo:/export/data/compiler read/write/remote on Fri Aug\
 15 17:17:02 2008

If the user changes to /data/tools, the mount point to zeus:/export/data/tools is created under the mount point /data/tools. Because the servers apollo and zeus view similar AutoFS maps locally, any users who log in to these computers find the /data file system as expected. These users are provided direct access to local files through loopback mounts instead of NFS mounts.

As applications (and other file systems that users require) change location, the maps must reflect those changes. You can modify, delete, or add entries to maps to meet the needs of the environment. You can modify AutoFS maps at any time. However, if a change is made to the auto_master map or to a direct map, changes do not take place until the file system is unmounted and remounted and the AutoFS tables are reloaded:

# automount<cr>
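The key substitution that the automounter performs with the ampersand can be illustrated with a small, hedged sketch in plain shell. This is only an illustration of the substitution rule, not AutoFS code:

```shell
#!/bin/sh
# Illustration only: mimic the automounter's "&" substitution rule.
# Given a map key and a location template, replace each "&" with the key.
expand_entry() {
    key=$1
    location=$2
    # On the pattern side of sed's s command, "&" is a literal character.
    printf '%s\n' "$location" | sed "s|&|$key|g"
}

expand_entry compiler "apollo:/export/data/&"
# prints: apollo:/export/data/compiler
```

So the first auto_data entry above expands exactly as the NOTE describes: compiler apollo:/export/data/& is equivalent to compiler apollo:/export/data/compiler.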

EXAM ALERT - Direct versus indirect maps: Remember the difference between direct and indirect maps. The /- entry in /etc/auto_master signifies a direct map because no mount point is specified, which means that an absolute pathname is specified in the map itself. Indirect maps contain relative addresses, so the starting mount point, such as /home, appears in the /etc/auto_master entry for an indirect map.

When to Use automount

The most common and advantageous use of automount is for mounting infrequently used file systems on an NFS client, such as online reference man pages, so mounting infrequently used file systems on an NFS client is an excellent use for automount. You should avoid using automount to mount frequently used file systems, such as those that contain user commands or frequently used applications; conventional NFS mounting is more efficient in this situation.

Another common use is accessing user home directories anywhere on the network. This works well for users who do not have a dedicated system and who tend to log in from different locations. Without the AutoFS service, a system administrator has to create home directories on every system that the user logs in to, to permit access. You certainly don't want to create permanent NFS mounts for all user home directories on each system: data has to be duplicated everywhere, and it can easily become out of sync.

You also use automount if a read-only file system exists on more than one server. By using automount instead of conventional NFS mounting, you can configure the NFS client to query all the servers on which the file system exists and mount from the server that responds first. It is quite practical and typical to combine the use of automount with conventional NFS mounting on the same NFS client.

Sun Update Connection Service

Objective:
. Implement patch management using Sun Connection Services including the Update Manager client, the smpatch command line, and Sun Connection hosted web application.

The Sun Update Connection service has been available in Solaris 10 since the 1/06 release. You'll use the service to keep your system up to date with all the latest OS patches. Patching the operating system is covered in Solaris 10 System Administration Exam Prep (Exam CX-310-200), Part I. That book describes how to verify, install, and remove OS patches, so I won't explain all the methods used to manage patches. This section describes how to use the Sun Update Manager to handle the OS patching process. The Sun Update Connection services include the following:

. Sun Update Manager: Consists of two interfaces: a graphical and a command-line interface that you will use to manage the updates on your system. Choose either the GUI version or the command-line version of Update Manager, but do not use both at the same time.
. Sun Update Connection: A web application hosted at Sun that allows you to remotely manage the updates on all your Sun systems.
. Sun Update Connection Proxy: A local caching proxy that provides all the OS updates obtained from Sun to the clients inside your network.
. SunSolve Patch and Updates Portal: Provides access to OS patches for manual download.

Using the Update Manager

The Update Manager replaces the Solaris Patch Manager application that was available in previous releases of Solaris. Using an updated version of the PatchPro tool, Update Manager performs the following tasks:

. Analyzes your system for available OS updates
. Displays the list of updates that are available for your system
. Provides details of each available OS update
. Installs selected OS updates
. Removes (backs out) installed OS updates

You begin by starting the Update Manager client software. To start the GUI, type

# /usr/bin/updatemanager<cr>

The Sun Update Manager GUI opens, as shown in Figure 2.1.

FIGURE 2.1 Sun Update Manager.

The following examples use the command-line interface, so I'll type

# /usr/sbin/smpatch<cr>

The command syntax for the smpatch command varies, depending on the mode of operation. The basic syntax is as follows:

/usr/sadm/bin/smpatch <subcommand> [<auth_args>] -- [<subcommand_args>]

The smpatch command uses subcommands for the various modes of operation; each subcommand has its own list of options. The smpatch subcommands are as follows:

. add: Installs patches on single or multiple machines.
. analyze: Analyzes and lists the patches required for a specified machine.
. download: Downloads patches from the SunSolve Online database to the patch directory.
. remove: Removes a single patch from a system.

Refer to the online man pages for a complete set of options for each subcommand. The advantage of using the command-line version of Update Manager is that you can embed the smpatch commands into shell scripts to increase efficiency.

The types of patches that you can download depends on the type of Sun service contract you have. If you do not have a service contract, you can still register, but you can download only security, hardware driver, and data integrity updates.

You need to register your system at Sun before you can use the Update Manager client. Use the sconadm command to register. Its syntax is as follows:

/usr/sbin/sconadm register -a [-e softwareUpdate | -E softwareUpdate]
[-h <hostname>] [-l <logfile>] [-N] [-p <proxy_host>[:<proxy_port>]]
[-r <registration_profile>] [-u <username>] [-x <proxy_username>]

where:

. -a: Is used to accept the Terms of Use and Binary Code License. Absence of this option means that you do not accept the license.
. -e softwareUpdate: Enables the client to be managed at the Sun-hosted Update Connection Service.
. -E softwareUpdate: Disables the client's ability to be managed at the Sun-hosted Update Connection Service.
. -h <hostname>: Specifies the hostname of the machine you want to register.
. -l <logfile>: Specifies the pathname of a log file.
. -N: Never registers.
. -p <proxy_host>[:<proxy_port>]: Proxy hostname and optional proxy port number.
. -r <registration_profile>: Pathname to a registration profile. The registration profile is described later in this section.
. -u <username>: Specifies the username used to connect to the Sun Online Account.
. -x <proxy_username>: Specifies the username on the proxy host.

Before you use the sconadm command, create a registration profile. Information in this file will be used when you register. For the example, in Step By Step 2.8, you'll use the vi editor to create a profile file named /tmp/regprofile. If you do not already have a Sun online account, you need to establish one.
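Because the registration profile is a plain key=value file, it can also be generated non-interactively, which is handy when registering several machines. The following is a hedged sketch; the field names follow the /tmp/regprofile example, and the account name and password shown are placeholders, not real credentials:

```shell
#!/bin/sh
# Sketch: write a sconadm registration profile without an editor.
# Field names follow the /tmp/regprofile example; values are placeholders.
make_regprofile() {
    profile=$1 user=$2 pass=$3
    {
        printf 'userName=%s\n' "$user"
        printf 'password=%s\n' "$pass"
        printf 'hostName=\n'
        printf 'subscriptionKey=\n'
        printf 'portalEnabled=false\n'
        printf 'proxyHostName=\n'
        printf 'proxyPort=\n'
        printf 'proxyUserName=\n'
        printf 'proxyPassword=\n'
    } > "$profile"
    chmod 600 "$profile"    # keep the stored password private
}

# Usage on a Solaris 10 system, followed by registration:
#   make_regprofile /tmp/regprofile myaccount 'secret'
#   sconadm register -a -r /tmp/regprofile
```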

STEP BY STEP 2.8 Registering Your System with Sun Connection Services

1. Use the vi editor to open the profile named regprofile:

# vi /tmp/regprofile<cr>

2. Add the following lines to your profile using your Sun Online user account name and password:

userName=<Sun Online account username>
password=<password>
hostName=
subscriptionKey=
portalEnabled=false
proxyHostName=
proxyPort=
proxyUserName=
proxyPassword=

3. Change the permissions on the profile to 400 or 600:

# chmod 600 /tmp/regprofile<cr>

4. Register using the sconadm command:

# sconadm register -a -r /tmp/regprofile<cr>
sconadm is running
Authenticating user ...
finish registration!

The -a option is used to accept the terms of the license, and the -r option specifies the use of a registration profile.

5. Now that the system is registered, you can use the smpatch command to analyze your system for patches:

# smpatch analyze<cr>

A list of patches is displayed:

120199-13 SunOS 5.10: sysidtool patch
119252-23 SunOS 5.10: System Administration Applications Patch
121430-25 SunOS 5.8 5.9 5.10: Live Upgrade Patch
124628-06 SunOS 5.10: CD-ROM Install Boot Image Patch
119254-57 SunOS 5.10: Install and Patch Utilities Patch
124630-17 SunOS 5.10: System Administration Applications, Network, and \
Core Libraries Patch
119963-10 SunOS 5.10: Shared library patch for C++
119280-18 CDE 1.6: Runtime library patch for Solaris 10

119278-23 CDE 1.6: dtlogin patch
<output has been truncated>

To download a specific patch using smpatch, type

# smpatch download -i 119278-23<cr>

The system responds with this:

119278-23 has been validated.

The patch is downloaded into the spool directory, which, by default, is /var/sadm/spool. Use the following command to verify that this directory is the default and has not been modified:

# smpatch get<cr>

The system responds with this:

patchpro.backout.directory      ""
patchpro.baseline.directory     /var/sadm/spool
patchpro.download.directory     /var/sadm/spool
patchpro.install.types          rebootafter:reconfigafter:standard
patchpro.patch.source           https://getupdates1.sun.com/
patchpro.patchset               current
patchpro.proxy.host             ""
patchpro.proxy.passwd           ****
patchpro.proxy.port             8080
patchpro.proxy.user             ""

Just because the patch has been downloaded doesn't mean it has been installed. You still need to install the patch using the following command:

# smpatch add -i 119278-23<cr>

The system responds with this:

add patch 119278-23
Transition old-style patching.
Patch 119278-23 has been successfully installed.
/var/sadm/spool/patchpro_dnld_2008.07.17@19:01:35:EDT.txt has been moved \
to /var/sadm/spool/patchproSequester/patchpro_dnld_2008.07.17@19:01:35:EDT.txt

As an alternative to performing all the previous steps, you can analyze, download, and install a patch in one step:

# smpatch update -i 119278-23<cr>

The system responds with this:

Installing patches from /var/sadm/spool.
119278-23 has been applied.
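As noted earlier, the command-line interface lends itself to scripting. The following is a hedged sketch of batching the analyze/update cycle; it assumes the analyze output format shown above (a patch ID such as 119278-23 at the start of each line), and the parsing helper is a hypothetical addition, not part of smpatch itself:

```shell
#!/bin/sh
# Sketch: pull the patch IDs out of "smpatch analyze" output so each one
# can be fed to "smpatch update -i". The ID format assumed is NNNNNN-NN.
extract_patch_ids() {
    awk '$1 ~ /^[0-9][0-9][0-9][0-9][0-9][0-9]-[0-9][0-9]$/ { print $1 }'
}

# Usage on a registered Solaris 10 system:
#   /usr/sbin/smpatch analyze | extract_patch_ids |
#   while read id; do
#       /usr/sbin/smpatch update -i "$id"
#   done
```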

To remove (back out) the 119278-23 patch, issue the following command:

# smpatch remove -i 119278-23<cr>
remove patch 119278-23
Transition old-style patching.
Patch 119278-23 has been backed out.
/var/sadm/spool/patchpro_dnld_2008.07.17@19:18:09:EDT.txt has been moved \
to /var/sadm/spool/patchproSequester/patchpro_\
dnld_2008.07.17@19:18:09:EDT.txt

Sun Update Manager Proxy

Many systems cannot be directly connected to the Internet due to security concerns. To address this, use the Sun Update Manager Proxy. The Update Manager proxy is an optional feature, available only to those with Sun service contracts. When you configure an Update Manager proxy server on your network, the proxy server obtains the updates from Sun via the Internet and serves those updates to your local systems.

There is much more to the Sun Update Connection service that I have not covered. I recommend that you refer to the "Sun Update Connection System Administrator Guide" described at the end of this chapter for more information.

Summary

In this chapter, you have learned how a Solaris system utilizes the swapfs file system as virtual memory storage when the system does not have enough physical memory to handle the needs of the currently running processes. You have learned how to add, monitor, and delete swap files and partitions. You have also learned how to manage core files and crash dumps.

This chapter also described what NFS is and how to share resources on an NFS server. Accessing resources on the NFS client from a server was discussed, as was configuring NFS to record all activity via the NFS logging daemon. The troubleshooting section described some of the more common problems and error messages that you may encounter while using NFS. This chapter also described AutoFS and the many options that are available when you're mounting NFS resources so that user downtime is minimized by unplanned system outages and unavailable resources.

Finally, I described the Sun Update Connection Service for automating the installation of OS patches. You were reintroduced to the smpatch command, which is described in Solaris 10 System Administration Exam Prep (Exam CX-310-200), Part I. There is much more to discuss on this topic; I recommend that you look over the suggested readings at the end of this chapter.

Key Terms
. automount
. Core file
. Crash dump
. Direct map
. Dynamic failover
. Hard mount
. Indirect map
. lockd
. Master map
. mountd
. NFS
. nfs4cbd
. nfslogd

. NFS client
. NFS logging
. NFS server
. NFS Version 4
. nfsd
. nfsmapid
. Replication
. Secondary swap partition
. Shared resource
. Soft mount
. Swap file
. Trigger point
. Update Manager
. Update Manager Client
. Update Manager Proxy
. Virtual file system

Apply Your Knowledge

Exercises

2.1 Adding Temporary Swap Space

In this exercise, you'll create a swap file to add additional, temporary swap space on your system.

Estimated time: 15 minutes

1. As root, use the df -h command to locate a file system that has enough room to support a 512MB swap file.

2. Use the mkfile command to add a 512MB swap file named swapfile in a directory:

# mkfile 512m /<directory>/swapfile<cr>

3. Use the ls -l /<directory> command to verify that the file has been created.

4. Activate the swap area with the swap command:

# /usr/sbin/swap -a /<directory>/swapfile<cr>

5. Use the swap -l command to verify that the new swap area was added:

# swap -l<cr>

6. Use the swap -d command to remove the swap area:

# swap -d /<directory>/swapfile<cr>

7. Issue the swap -l command to verify that the swap area is gone:

# swap -l<cr>

8. Remove the swap file that was created:

# rm /<directory>/swapfile<cr>

The following two exercises require a minimum of two networked Solaris systems. You need to determine in advance which system will serve as the NFS server and which system will be the NFS client.

2.2 NFS Server Setup

In this exercise, you'll set up an NFS server to share the contents of the /usr/share/man directory for read-only access. The NFS server must have man pages installed in the /usr/share/man directory.

Estimated time: 30 minutes

1. Make the following entry in the /etc/dfs/dfstab file:

share -F nfs -o ro /usr/share/man

2. Restart the NFS server service to start the nfsd and mountd daemons:

# svcadm restart nfs/server<cr>

3. Verify that the NFS server service is online by typing this:

# svcs nfs/server<cr>

4. Verify that the resource is shared by typing this:

# share<cr>

The system displays this:

-   /usr/share/man   ro   ""

5. Verify that you can see the shared resource on the NFS server by typing this:

# dfshares <nfs-server-name><cr>

The system should display a message similar to the following:

RESOURCE                    SERVER       ACCESS   TRANSPORT
192.168.0.4:/usr/share/man  192.168.0.4  -        -

6. On the NFS client, rename the /usr/share/man directory so that man pages are no longer accessible:

# cd /usr/share<cr>
# mv man man.bkup<cr>

7. Verify that the manual pages are no longer accessible by typing this:

# man tar<cr>

8. Create a new man directory to be used as a mount point:

# mkdir man<cr>

9. Mount the /usr/share/man directory located on the NFS server to the directory you created in step 8:

# mount <nfs-server-name>:/usr/share/man /usr/share/man<cr>

10. See if the man pages are accessible by typing this:

# man tar<cr>

11. Verify the list of mounts that the server is providing by typing this:

# dfmounts <nfs-server-name><cr>

The system should display something like this:

RESOURCE   SERVER       PATHNAME        CLIENTS
-          192.168.0.4  /usr/share/man  192.168.0.21

12. Unmount the directory on the NFS client:

# umountall -r<cr>

The -r option specifies that only remote file system types are to be unmounted.

13. Verify that the file system is no longer mounted by typing this:

# dfmounts <nfs-server-name><cr>

14. On the NFS server, unshare the /usr/share/man directory:

# unshare /usr/share/man<cr>

15. On the NFS client, try to mount the /usr/share/man directory from the NFS server:

# mount <nfs-server-name>:/usr/share/man /usr/share/man<cr>

The NFS server should not allow you to mount the file system.

16. Check the shared resources on the NFS server by typing this:

# dfshares <nfs-server-name><cr>

The file system can no longer be mounted because it is no longer shared.

2.3 Using AutoFS

This exercise demonstrates the use of AutoFS. The NFS server should already have an entry in the /etc/dfs/dfstab file from the previous exercise. It looks like this:

share -F nfs -o ro /usr/share/man

The nfsd and mountd daemons should also be running on this server.

Estimated time: 30 minutes

1. On the NFS client, verify that the man pages are not working by typing this:

# man tar<cr>

2. On the NFS client, remove the directory you created in Exercise 2.2:

# rmdir /usr/share/man<cr>

3. On the NFS client, edit the /etc/auto_master file to add the following line for a direct map:

/- /etc/auto_direct

4. On the NFS client, use vi to create a new file named /etc/auto_direct. Add the following line to the new file:

/usr/share/man <nfs-server-name>:/usr/share/man

5. Run the automount command to update the list of directories managed by AutoFS:

# automount -v<cr>

6. See if man pages are working on the NFS client by typing this:

# man tar<cr>

7. On the NFS client, use mount to see whether AutoFS automatically mounted the remote directory on the NFS server:

# mount<cr>

8. On the NFS server, unshare the shared directory by typing this:

# unshareall<cr>

9. On the NFS server, shut down the NFS server daemons:

# svcadm disable nfs/server<cr>

10. On the NFS client, edit the /etc/auto_master file and remove this line:

/- /etc/auto_direct

11. On the NFS client, remove the file named /etc/auto_direct:

# rm /etc/auto_direct<cr>

12. On the NFS client, run the automount command to update the list of directories managed by AutoFS:

# automount -v<cr>

13. On the NFS client, return /usr/share/man to its original state, like this:

# cd /usr/share<cr>
# rmdir man<cr>
# mv man.bkup man<cr>

Exam Questions

1. Which command is used to create a swap file?
A. cat
B. touch
C. mkfile
D. swapadd
E. newfs

2. After you create and add additional swap space, what is the correct method to ensure the swap space is available following subsequent reboots?
A. You can add an entry to the /etc/vfstab file.
B. You can modify the startup scripts to include a swapadd command.
C. No additional steps are required, because the necessary changes are made to the startup file when the swap space is added.
D. Swap cannot be added; therefore, you must adjust the size of the swap partition.

3. How are swap areas activated each time the system boots?
A. The swapon command activates them.
B. The /sbin/swapadd script activates them.
C. The entry in the /etc/vfstab file activates them.
D. The /usr/sbin/swap -a command activates them.

4. Which statements are true about swap areas? (Choose three.)
A. A swap file is created in any ordinary file system.
B. A swap area must not exceed 2GB on a Solaris 10 system.
C. An NFS file system can be used for a temporary swap area.
D. You cannot unmount a file system while a swap file is in use.
E. A swap file is the preferred method of adding swap space on a permanent basis.
F. Using a striped metadevice for swap space is very advantageous and improves performance.

5. Which command is used to show the available swap space?
A. prtconf
B. iostat
C. swap -s
D. swap -l
E. /usr/bin/ps

6. If you add resources to a particular file, you can then make the resources available and unavailable by using the shareall and unshareall commands. Which file does this describe?
A. /etc/dfs/dfstab
B. /etc/dfs/sharetab
C. /etc/vfstab
D. /etc/mnttab

7. To stop and restart NFS to enable a new share, which of the following do you use?
A. svcadm restart autofs
B. svcadm restart nfs/client
C. svcadm restart nfs/server
D. automount -v

8. Which NFS daemons are found only on the NFS server? (Choose three.)
A. nfsd
B. mountd
C. lockd
D. statd
E. automountd
F. nfslogd

9. Which file do you use to specify the file systems that are to be shared?
A. /etc/dfs/sharetab
B. /etc/dfs/dfstab
C. /etc/vfstab
D. /etc/mnttab

10. NFS daemons are started at bootup from which of the following services or files? (Choose two.)
A. svc:/network/nfs/server
B. svc:/network/nfs/client
C. svc:/system/filesystem/autofs
D. /etc/inittab

11. Which of the following is not an NFS daemon?
A. rpcd
B. mountd
C. lockd
D. statd

12. Which of the following maps has a full pathname and indicates the relationship explicitly?
A. NIS
B. auto_master
C. indirect
D. direct

13. In AutoFS, which of the following associates a directory with a map?
A. master
B. direct
C. indirect
D. automount

14. Which command makes a resource available for mounting?
A. export
B. share
C. exportfs
D. mount

15. Which command displays information about shared resources that are available to the host from an NFS server?
A. shareall
B. share
C. dfshares
D. dfinfo

16. Which of the following options to the mount command specifies how long (in seconds) each NFS request made in the kernel should wait for a response?
A. retrans
B. timeo
C. retry
D. intr

17. File systems mounted with which of the following options indicate that mount is to retry in the background if the server's mount daemon (mountd) does not respond?
A. fg
B. bg
C. soft
D. remount

18. File systems that are mounted read-write or that contain executable files should always be mounted with which option?
A. hard
B. soft
C. intr
D. nointr

19. When an NFS server goes down, which of the following options to the mount command allows you to send a kill signal to a hung NFS process?
A. bg
B. nointr
C. intr
D. timeo

20. From the NFS client, which of the following options makes mount retry the request up to a specified number of times when the NFS server becomes unavailable?
A. retry
B. retrans
C. remount
D. timeo

21. Which of the following commands do you use to see who is using a particular NFS mount?
A. nfsstat
B. showmount
C. ps
D. share

22. Which of the following programs support the AutoFS service? (Choose two.)
A. mount
B. dfshares
C. automount
D. automountd

23. From which of the following files does automountd start?
A. /etc/init.d/volmgt
B. svc:/system/filesystem/autofs
C. svc:/network/nfs/server
D. svc:/network/nfs/client

24. Every Solaris installation comes with a default master map with default entries. Which of the following files lists all direct and indirect maps for AutoFS?
A. /etc/auto_master
B. /etc/auto_direct
C. /etc/auto_share
D. /lib/svc/method/svc-autofs

25. Without any changes to the generic system setup, clients should be able to access remote file systems through which of the following mount points?
A. /tmp_mnt
B. /net
C. /export
D. /export/home

26. What types of maps are available in AutoFS?
A. Direct and indirect
B. Master, direct, and indirect
C. Master and direct
D. Master and indirect

27. What is the default time for automountd to unmount a file system that is not in use?
A. 600 seconds
B. 60 seconds
C. 120 seconds
D. 180 seconds

28. Which of the following is the simplest and most useful AutoFS map?
A. Master map
B. Direct map
C. Indirect map
D. All are equal

29. Which of the following commands is used to cause a disk resource to be made available to other systems via NFS?
A. share
B. mount
C. export
D. dfshares

30. Which of the following scripts or services starts the NFS log daemon?
A. /usr/lib/nfs/nfslogd
B. /etc/nfs/nfslog.conf
C. /etc/dfs/dfstab
D. /etc/default/nfs

31. Which of the following daemons provides NFS logging?
A. syslogd
B. nfsd
C. statd
D. nfslogd

32. Which of the following describes how to register your system with Sun Update Connection services?
A. Go to http://sunsolve.sun.com and create an account.
B. Use the sconadm command.
C. Use the Update Manager.
D. Type smpatch -u <username> -p <password>.

33. Your company does not have a Sun service contract. When using the smpatch command to analyze your system, which patches will you have access to? (Choose three.)
A. Data integrity patches
B. Recommended patches
C. Driver patches
D. Security patches
E. All patches

34. Which command-line utility allows the system administrator to embed the patch analyze, download, and add commands into shell scripts to increase efficiency?
A. smpatch
B. PatchTool
C. Update Manager
D. Patchadd

Answers to Exam Questions

1. A. After you create and add additional swap space, you can add an entry for that swap space in the /etc/vfstab file to ensure that the swap space is available following subsequent reboots. An entry in the vfstab file is used by swapadd, which is called at system startup time. Answer B is wrong because editing startup scripts directly to add swap is a poor policy. For more information, see the section "Setting Up Swap Space."

2. C. You use the mkfile and swap commands to designate a part of an existing UFS as a supplementary swap area. The cat command does not create a swap file; it's used to view a file. The touch command is used to change the time and date on a file, or it creates an empty file when used on a filename that does not exist. The newfs command is used to create a file system, not swap. For more information, see the section "Setting Up Swap Space."

3. A. Swap areas are activated by the /sbin/swapadd script each time the system boots. Answer B is wrong because an entry in the vfstab file is used by swapadd, but it does not activate the swap space directly. swapon is not used during the boot process and is not available on Solaris 10. The /usr/sbin/swap -a command is used to add swap space on a running system, but not during the boot process. For more information, see the section "Setting Up Swap Space."

4. C. The swap -s command is used to display the available swap space on a system. prtconf is used to print the system configuration. iostat is used to display I/O statistics. The ps command is used to display process information. For more information, see the section "Setting Up Swap Space."

5. A, C, D. These statements are all true of a swap area: An NFS file system can be used for a swap area, but only in emergencies; a swap file is created in any ordinary file system; and you cannot unmount a file system while a swap file is in use. Answer B is wrong because a swap file should be used only on a temporary basis. Answer E is wrong because swap can exceed 2GB. Answer F is wrong because swap should not be put on a striped device; it adds overhead and can slow down paging. For more information, see the section "Setting Up Swap Space."

6. C. To restart NFS to enable a new share, you type svcadm restart nfs/server. Answer A is wrong because autofs is not used to stop and start NFS server daemons and enable a new share. Answer B is wrong because the service name should be nfs/server, not nfs/client. Answer D is wrong because the automount command is not used to stop and start NFS server daemons or to enable a share. For more information, see the section "Setting Up NFS."

7. A. If you add resources to the /etc/dfs/dfstab file, you can then make the resources available and unavailable by using the shareall and unshareall commands. Answer C is wrong because the vfstab file is used to mount a shared resource, not to share a resource. For more information, see the section "Setting Up NFS."
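The swap commands discussed in these answers fit together as in the following session sketch; the swap file name, size, and location are hypothetical:

```shell
# Create a 512MB swap file with mkfile and activate it with swap -a:
# mkfile 512m /export/data/swapfile
# /usr/sbin/swap -a /export/data/swapfile
#
# Display the available swap space, then list the individual swap areas:
# swap -s
# swap -l
#
# An entry like this in /etc/vfstab lets the /sbin/swapadd script
# activate the swap file again at each boot:
# /export/data/swapfile  -  -  swap  -  no  -
```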

8. A, B, F. The NFS daemons found only on the NFS server are nfsd, mountd, and nfslogd. lockd is both an NFS server and client daemon. automountd answers file system mount and unmount requests described by the automount command and runs on the client. For more information, see the section "NFS Daemons."

9. B. You specify which file systems are to be shared by entering the information in the file /etc/dfs/dfstab. The sharetab and mnttab files are not edited directly. The vfstab file contains all the mount points that are to be mounted during the boot process. For more information, see the section "Setting Up NFS."

10. A, B. NFS uses a number of daemons to handle its services. These services are initialized at startup from the svc:/network/nfs/server and svc:/network/nfs/client service identifiers. Answers C and D are wrong because neither of these starts the NFS server daemons. For more information, see the section "NFS Daemons."

11. A. rpcd is not an NFS daemon. mountd, lockd, and statd are all NFS daemons. For more information, see the section "NFS Daemons."

12. D. A direct map has a full pathname and indicates the relationship explicitly. With a direct map, there is a direct association between a mount point on the client and a directory on the server. Answer A is wrong because NIS is a name service. Answer B is wrong because auto_master is the master map for automounter. Answer C is wrong because there is not a direct association between a mount point and a directory on the server when using indirect maps. For more information, see the section "AutoFS Maps."

13. A. A master map, which is in the /etc/auto_master file, associates a directory with a map. For more information, see the section "AutoFS Maps."

14. B. The share command exports a resource and makes a resource available for mounting. The export command is a shell built-in used to export variables; it is not used to share file systems. exportfs is a compatibility script that uses the share command to share file systems; it is part of the BSD compatibility package. The mount command does not share a resource. For more information, see the section "Setting Up NFS."

15. C. The dfshares command displays information about the shared resources that are available to the host from an NFS server. A shared file system is called a shared resource. The shareall command is used to share all the file systems listed in the /etc/dfs/dfstab file; it does not display shared resources on an NFS server. The share command shares file systems, but it also displays information about shared file systems when used alone, with no arguments, on the NFS server. dfinfo is an invalid command. For more information, see the section "Setting Up NFS."

16. B. The timeo option sets the NFS timeout value. After the file system is mounted, each NFS request made in the kernel waits a specified number of seconds for a response (which is specified with the timeo=<n> option). The retrans option sets the number of retransmission attempts. The retry option sets the number of times to retry the mount operation. For more information, see the section "Mounting a Remote File System."

17. B. File systems mounted with the bg option indicate that mount is to retry in the background if the server's mount daemon (mountd) does not respond when, for example, the NFS server is restarted. File systems mounted with the fg option indicate that mount is to retry in the foreground if the server's mount daemon (mountd) does not respond. The remount option sets a read-only file system as read-write (using the rw option). For more information, see the section "Mounting a Remote File System."

18. A. Sun recommends that file systems that are mounted as read-write or that contain executable files should always be mounted with the hard option. The soft option makes the NFS client give up and return an error when the NFS server does not respond; only file systems that are mounted as read-only should be mounted with the soft option. The intr option allows keyboard interrupts to kill a process that is waiting for a response from a hard-mounted file system; the nointr option disallows them. For more information, see the section "Mounting a Remote File System."
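As an illustrative session combining the sharing and mounting commands covered by these answers (the share path, options, and server name are hypothetical):

```shell
# On the NFS server: place an entry in /etc/dfs/dfstab, then share it:
# share -F nfs -o ro /export/docs     (line added to /etc/dfs/dfstab)
# shareall
#
# On the NFS client: list the server's shared resources, then mount one
# read-write with the recommended hard option, retrying in the background
# and allowing keyboard interrupts to kill a hung process:
# dfshares server1
# mount -F nfs -o bg,hard,intr server1:/export/docs /docs
```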

19. C. If a file system is mounted hard and the intr option is not specified, the process hangs until the remote file system reappears if the NFS server goes down. If intr is specified, sending an interrupt signal to the process kills it. The nointr option disallows keyboard interrupts to kill a process that is waiting for a response from a hard-mounted file system. File systems mounted with the bg option indicate that mount is to retry in the background if the server's mount daemon (mountd) does not respond. The timeo option sets the NFS timeout value. For more information, see the section "Mounting a Remote File System."

20. A. After the file system is mounted, mount retries the request up to the count specified with the retry=<n> option. The retrans option sets the number of retransmission attempts. The remount option sets a read-only file system as read-write (using the rw option). The timeo option sets the NFS timeout value; each NFS request that is made in the kernel waits a specified number of seconds for a response. For more information, see the section "Mounting a Remote File System."

21. B. To see who is using a particular NFS mount, you use the showmount command. The nfsstat command displays NFS statistics. The ps command displays system process information. The share command is executed on the NFS server to share a resource. For more information, see the section "Mounting a Remote File System."

22. C, D. Two programs support the AutoFS service: automount and automountd. File systems that are shared through the NFS service can be mounted by using AutoFS. AutoFS, a client-side service, is a file system structure that provides automatic mounting, mounting and unmounting remote directories on an as-needed basis. AutoFS is initialized by automount, which is run automatically when a system is started. The automount daemon, named automountd, runs continuously. Both are run when a system is started by the svc:/system/filesystem/autofs service identifier. The mount command is used to manually mount a file system. The dfshares command is used to list available resources. For more information, see the section "AutoFS."

23. B. The automountd daemon is started by the svc:/system/filesystem/autofs service identifier when a system is booted. For more information, see the section "AutoFS."

24. A. The /etc/auto_master file lists all direct and indirect maps for AutoFS. A master map is a list that specifies all the maps that AutoFS should check. /lib/svc/method/svc-autofs is not a master map. /etc/auto_direct and /etc/auto_share are not master maps and do not list direct and indirect maps. For more information, see the section "AutoFS Maps."
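A minimal AutoFS configuration tying these answers together might look like this sketch (the map entries, paths, and server name are hypothetical):

```shell
# /etc/auto_master entry associating the direct map with its map file:
# /-   auto_direct
#
# /etc/auto_direct entry: a direct map names the full mount-point
# pathname explicitly:
# /docs   -ro   server1:/export/docs
#
# Reread the maps without rebooting:
# automount -v
```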

25. B. Without any changes to the generic system setup, clients should be able to access remote file systems through the /net mount point. /tmp_mnt, /export, and /export/home are not default mount points for NFS file systems. For more information, see the section "AutoFS."

26. B. The three types of AutoFS maps are master, direct, and indirect maps. A master map, which is in the /etc/auto_master file, associates a directory with a map. For more information, see the section "AutoFS Maps."

27. A. The default is 600 seconds. The -t option to the automount command sets the time, in seconds, that a file system is to remain mounted if it is not being used. For more information, see the section "AutoFS."

28. C. Indirect maps are the simplest and most useful maps. Indirect maps are useful for accessing specific file systems, such as home directories, from anywhere on the network. A direct map is a more complex AutoFS map compared to an indirect map. For more information, see the section "AutoFS Maps."

29. A. The share command is used to specify a disk resource that is to be made available to other systems via NFS. The mount command simply connects to the remote resource. export is a shell built-in for exporting shell variables. The dfshares command lists available resources. For more information, see the section "Setting Up NFS."

30. A. The /usr/lib/nfs/nfslogd script starts the NFS log daemon (nfslogd). nfslog.conf is the NFS server logging configuration file. The dfstab file contains a list of file systems to be shared. The /etc/default/nfs file is used to configure NFS parameters. For more information, see the section "NFS Server Logging."

31. D. The nfslogd daemon provides NFS logging and is enabled by using the log=<tag> option in the share command. When NFS logging is enabled, all NFS operations on the file system are recorded in a buffer by the kernel. The syslogd daemon logs system messages. The nfsd daemon handles client file system requests. The statd daemon works with the lockd daemon to provide crash recovery functions for the NFS lock manager. For more information, see the section "NFS Server Logging."

32. B. You use the sconadm command to register your system with Sun Update Connection services. For more information, see the section "Using the Update Manager."

33. A, C, D. The types of patches you can download depends on the type of Sun service contract you have. If you do not have a service contract, you can still register, but you can download only security, hardware driver, and data integrity updates. Recommended patches are provided only to companies that have an active Sun service contract. For more information, see the section "Using the Update Manager."

34. A. The advantage of using smpatch is that you can embed all the smpatch commands into shell scripts. For more information, see the section "Using the Update Manager."
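The smpatch subcommands named in question 34 can be embedded in a script along these lines (a sketch only; the patch-id placeholder is hypothetical, and the system is assumed to be already registered with sconadm):

```shell
#!/bin/sh
# Unattended patch pass: analyze, download, then apply.
# smpatch analyze                    # list the patches the system needs
# smpatch download -i <patch-id>     # fetch a patch into the download area
# smpatch add -i <patch-id>          # apply the downloaded patch
```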

Suggested Reading and Resources

. "System Administration Guide: Advanced Administration" and "System Administration Guide: Network Services" manuals from the Solaris 10 documentation CD.

. "System Administration Guide: Network Services" and "System Administration Guide: Advanced Administration" books in the System Administration Collection of the Solaris 10 documentation set. See http://docs.sun.com.

. "Sun Update Connection System Administrator Guide," part number 819-4687-10, at http://docs.sun.com.

THREE

Managing Storage Volumes

Objectives

The following test objectives for Exam CX-310-202 are covered in this chapter:

. Analyze and explain RAID (0, 1, 5) and SVM concepts (logical volumes, soft partitions, state databases, hot spares, and hot spare pools).

A thorough understanding of the most popular RAID levels is essential to any system administrator managing disk storage. This chapter covers all the basic Solaris Volume Manager (SVM) concepts that the system administrator needs to know for the exam.

. Create the state database, build a mirror, and unmirror the root file system.

The system administrator needs to be able to manipulate the state database replicas and create logical volumes, such as mirrors (RAID 1). This chapter details the procedure for creating the state databases as well as mirroring and unmirroring the root file system.

Outline

Introduction
RAID
  RAID 0
  RAID 1
  RAID 5
  RAID 0+1
  RAID 1+0
Solaris Volume Manager (SVM)
  SVM Volumes
    Concatenations
    Stripes
    Concatenated Stripes
    Mirrors
    RAID 5 Volumes
  Planning Your SVM Configuration
  Metadisk Driver
  SVM Commands
  Creating the State Database
  Monitoring the Status of the State Database
  Recovering from State Database Problems
  Creating a RAID 0 (Concatenated) Volume
  Creating a RAID 0 (Stripe) Volume
  Monitoring the Status of a Volume
  Creating a Soft Partition
  Expanding an SVM Volume
  Creating a Mirror
  Unmirroring a Noncritical File System
  Placing a Submirror Offline
  Mirroring the Root File System on a SPARC-Based System
  Mirroring the Root File System on an x86-Based System
  Unmirroring the Root File System
  Troubleshooting Root File System Mirrors
Veritas Volume Manager
Summary
Key Terms
Apply Your Knowledge
  Exercise
  Exam Questions
  Answers to Exam Questions
Suggested Reading and Resources

Study Strategies

The following strategies will help you prepare for the test:

. As you study this chapter, the main objective is to become comfortable with the terms and concepts that are introduced. Be sure that you know all the terms listed in the "Key Terms" section near the end of this chapter.

. Be sure that you understand the levels of RAID discussed and the differences between them. Pay special attention to metadevices and the different types that are available.

. Questions on SVM will be scenario-based and quite lengthy. They will describe various IT situations, and you will need to choose the best storage solution. You'll be required to recommend the best storage configuration for a particular "real-life" scenario.

. Practice is very important on these topics, so you should practice until you can repeat each procedure from memory. For this chapter it's important that you practice each Step By Step example on both Solaris SPARC and x86/x64-based systems (with more than one disk).

Introduction

With standard disk devices, each disk slice has its own physical and logical device, and a file system cannot span more than one disk slice. In other words, with standard Solaris file systems, the maximum size of a file system is limited to the size of a single disk. This was a limitation in all UNIX systems until the introduction of virtual disks, also called virtual volumes. On a large server with many disk drives, standard methods of disk slicing are inadequate and inefficient.

To eliminate the limitation of one slice per file system, virtual volume management packages can create virtual volume structures in which a single file system can consist of nearly an unlimited number of disks or partitions. The key feature of these virtual volume management packages is that they transparently provide a virtual volume that can consist of many physical disk partitions. In other words, disk partitions are grouped across several disks to appear as a single volume to the operating system.

Each flavor of UNIX has its own method of creating virtual volumes, and Sun has addressed virtual volume management with their Solaris Volume Manager product called SVM, which has always been included as part of the standard Solaris 10 release. The objectives on the Part II exam have changed so that you are now required to be able to set up virtual disk volumes. This chapter introduces you to SVM and describes SVM in enough depth to meet the objectives of the certification exam. It is by no means a complete reference for SVM.

New in the Solaris 10 6/06 release is the ZFS file system, another form of creating virtual volumes. Because ZFS is a large topic, I've devoted an entire chapter to it. Refer to Chapter 9, "Administering ZFS File Systems," for more information. Also in this chapter, we have included a brief introduction of Veritas Volume Manager, an unbundled product that is purchased separately. Even though this product is not specifically included in the objectives for the exam, it provides some useful background information.

RAID

Objective

. Analyze and explain RAID (Redundant Array of Independent Disks).

RAID is an acronym for Redundant Array of Inexpensive (or Independent) Disks. Usually these disks are housed together in a cabinet and referred to as an array. When describing SVM volumes, it's common to describe which level of RAID the volume conforms to. Several RAID levels exist, each referring to a method of organizing data while ensuring data resilience or performance. These levels are not ratings, but rather classifications of functionality. Different RAID levels offer dramatic differences in performance, data availability, and data integrity

depending on the specific I/O environment. Table 3.1 describes the various levels of RAID supported by Solaris Volume Manager.

EXAM ALERT

RAID levels For the exam, you should be familiar with RAID levels 0, 1, 5, 0+1, and 1+0. These are the only levels that can be used with Solaris Volume Manager.

Table 3.1 RAID Levels

Level 0: Striped disk array without fault tolerance.
Level 1: Maintains duplicate sets of all data on separate disk drives (mirroring).
Level 2: Data is written across each drive in succession, one bit at a time. Checksum data is recorded in a separate drive. This method is very slow for disk writes and is seldom used today since Error Checking and Correction (ECC) is embedded in almost all modern disk drives.
Level 3: Data striping and bit interleave. Data is striped across a set of disks one byte at a time, and parity is generated and stored on a dedicated disk.
Level 4: Data striping with bit interleave and parity checking. This is the same as level 3 RAID except data is striped across a set of disks at a block level. Parity is generated and stored on a dedicated disk.
Level 5: Unlike RAID 3 and 4, where parity is stored on one disk, both parity and data are striped across a set of disks.
Level 6: Similar to RAID 5, but with additional parity information written to recover data if two drives fail.
Level 0+1: Also referred to as a "mirrored stripe" or "mirroring above striping." First, a stripe is created by spreading data across multiple slices or entire disks. Then the entire stripe is mirrored for redundancy. For mirroring above striping to be effective, the stripe and its mirrors must be allocated from separate disks.
Level 1+0: Also referred to as a "striped mirror" or "striping above mirroring." Create a RAID 1+0 device opposite of how you would create a RAID 0+1 device. The slices, or entire disks, are mirrored first. Then the slices are combined into a stripe. If the hardware is properly configured, a RAID 1+0 volume can tolerate a higher percentage of hardware failures than RAID 0+1 without disabling the volume.

RAID level 0 does not provide data redundancy, but is usually included as a RAID classification because it is the basis for the majority of RAID configurations in use. Table 3.1 described some of the more popular RAID levels; however, many are not provided in SVM. The following is a more in-depth description of the RAID levels provided in SVM.

RAID 0

Although they do not provide redundancy, concatenations and stripes are often referred to as RAID 0. With a concatenated device, a logical device is created by combining slices from multiple disks. As data is written to a concatenated device, the first slice is filled first, and then the second is filled, and so on. The process continues until all the slices in the concatenated device are full. With concatenations, the size of each individual slice can vary. More space can easily be added to a concatenation simply by adding more disk slices. Because data is written to one disk at a time, performance is no better than with a single disk. A concatenated volume is shown in Figure 3.1.

FIGURE 3.1 RAID 0 concatenated volume. (Three 36GB physical disks combined into a single 108GB volume; the interlaces fill one disk after another.)

With striping, a logical device is created by combining slices from two or more physical disks. These slices must be of equal size. With striping, I/O is balanced and significantly improved by using parallel data transfer to and from the multiple disks, as shown in Figure 3.2. The I/O data stream

is divided into segments called interlaces. The interlaces are spread across relatively small, equally sized fragments that are allocated alternately and evenly across multiple physical disks. A RAID 0 striped device has better performance than a concatenated device. If additional space is needed in a striped device, it cannot be added as easily as with the concatenated device. To add more space to a stripe, the logical device must be destroyed and re-created.

FIGURE 3.2 RAID 0 striped volume. (Three 36GB physical disks striped into a single 108GB volume, with interlaces distributed alternately across all three disks.)

The advantages of RAID 0 concatenated devices are as follows:

. It is quite easy to add space to a RAID 0 concatenated device.

. With both RAID 0 configurations, all the disk drive capacity is available for use.

. Read operations on a RAID 0 concatenated device may be improved slightly over that of a standard UNIX partition when read operations are random and the data accessed is spread over multiple disk drives.

The disadvantage of a RAID 0 logical device is that it has no redundancy. The loss of a single disk results in the loss of all the data across the entire logical device.
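As a preview of the SVM commands covered later in this chapter, a concatenation and a stripe of this shape might be created along these lines (the metadevice names and disk slices are hypothetical):

```shell
# Concatenation (RAID 0): three slices filled one after another
# (three stripes of one slice each):
# metainit d10 3 1 c1t1d0s0 1 c1t2d0s0 1 c1t3d0s0
#
# Stripe (RAID 0): the same three slices with data interlaced
# across them, using a 32KB interlace size:
# metainit d11 1 3 c1t1d0s0 c1t2d0s0 c1t3d0s0 -i 32k
```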

RAID 1

RAID 1 employs data mirroring to achieve redundancy. Two copies of the data are created and maintained on separate disks, each containing a mirror image of the other. In the event of a disk failure, RAID 1 provides a high level of availability because the system can switch automatically to the mirrored disk with minimal impact on performance and no need to rebuild lost data. RAID 1 also provides an opportunity to improve performance for reads, because read requests are directed to the mirrored copy if the primary copy is busy. On the other hand, mirroring can degrade performance for write operations, because data must be written on both submirrors. RAID 1 is the most expensive of the array implementations because the data is duplicated.

In Figure 3.3, four drives are used to create a mirrored volume. To begin, two physical disks are concatenated to form each RAID 0 volume—the submirrors. Then, the two submirrors are mirrored to form a RAID 1 volume.

FIGURE 3.3 RAID 1 volume. (Two 2GB RAID 0 concatenations, each built from two 1GB slices, mirrored to form a 2GB RAID 1 volume.)

Finally, when a disk in a submirror fails and the disk is replaced, the entire submirror must be resynchronized. On a large volume, this resync process can be lengthy. Although data remains available during the resync process, performance of the entire mirror is degraded. See the following sections on RAID 0+1 and RAID 1+0, where striping is used to improve performance on a mirrored volume.
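The four-disk mirror in Figure 3.3 might be assembled with SVM commands along these lines (covered in detail later in this chapter; the metadevice names and slices are hypothetical):

```shell
# Build each two-disk submirror as a RAID 0 concatenation:
# metainit d21 2 1 c1t1d0s0 1 c1t2d0s0
# metainit d22 2 1 c2t1d0s0 1 c2t2d0s0
#
# Create the mirror from the first submirror, then attach the second;
# attaching triggers the initial resync of the new submirror:
# metainit d20 -m d21
# metattach d20 d22
```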

RAID 5

RAID 5 provides data striping with distributed parity. Unlike RAID 3 and 4, RAID 5 does not have a dedicated parity disk, but instead interleaves both data and parity on all disks, as shown in Figure 3.4. In RAID 5, the disk access arms can move independently of one another, thereby satisfying multiple concurrent I/O requests and providing higher transaction throughput. This enables multiple concurrent accesses to the multiple physical disks. RAID 5 is best suited for random access data in small blocks.

A "write penalty" is associated with RAID 5. Every write I/O results in four actual I/O operations—two to read the old data and parity, and two to write the new data and parity. Therefore, volumes with more than approximately 20% writes would not be good candidates for RAID 5. If data redundancy is needed on a write-intensive volume, consider mirroring.

FIGURE 3.4 RAID 5 volume. (Data and parity segments interleaved across four 36GB physical disks, forming a 108GB volume; each disk holds the parity for one group of data segments.)
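A RAID 5 volume like the one in Figure 3.4 might be created with a single SVM command (the metadevice name and slices are hypothetical; metainit is covered later in this chapter):

```shell
# -r requests a RAID 5 volume; SVM distributes the data and parity
# segments across all four slices:
# metainit d45 -r c2t1d0s0 c2t2d0s0 c2t3d0s0 c2t4d0s0
```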

In RAID 5, data and parity segments are spread across all the disks. A RAID 5 device must consist of at least three components. In Figure 3.4, four disk drives are being striped. The first three data segments are written to disk 1, disk 2, and disk 3. A parity segment for these is then written to disk 4. The segment consists of an exclusive OR of the first three segments of data. This redundant data contains information about user data stored on the remainder of the RAID 5 volume's components.

Similar to a mirror, a RAID 5 volume increases data availability, but with a minimum of cost in terms of hardware and only a moderate penalty for write operations. A RAID 5 volume uses storage capacity equivalent to one component in the volume to store parity information, and the parity information is distributed across all components in the volume. If the volume has three components, the equivalent of one component is used for the parity information; in the four-disk volume of Figure 3.4, approximately 25% of the disk space is used to store parity data. The parity protects against a single disk failure only: even if the volume has five components, it can handle only a single component failure.

RAID 0+1
SVM supports RAID 0+1 (stripes that are then mirrored), also called a "mirrored stripe." As described in the "RAID 1" section, write performance can suffer on a mirrored volume; therefore, many administrators add striping to the mirrored volume to improve disk I/O. With RAID 0+1, data is striped across multiple disk drives to improve disk I/O and then is mirrored to add redundancy, as shown in Figure 3.5. As you can see, this configuration combines the benefits of RAID 1 (mirroring) for redundancy and RAID 0 (striping) for performance. When a disk fails in a RAID 0+1 volume with no hot spares, the entire submirror fails; however, data is still available from the alternate submirror. Be aware that when the failed disk is replaced, the entire submirror must be resynchronized. This resync process degrades performance and can be lengthy for a large volume.

RAID 1+0
SVM also supports RAID 1+0 (mirrors that are then striped), as shown in Figure 3.6. The two levels of RAID differ in how they are constructed: with RAID 0+1, the stripes are created and then mirrored, whereas with RAID 1+0, the slices are first mirrored and then striped. This method enhances redundancy and reduces recovery time after a disk failure. The failure of a single disk in a RAID 1+0 volume affects only the submirror it was located in; all other submirrors remain functional, and, as with RAID 0+1, only the data in that submirror needs to be resynced when the disk is replaced. Even if a second disk fails in another submirror, the data in the RAID 1+0 volume is still available, so a RAID 1+0 volume is less vulnerable than a RAID 0+1 volume.

FIGURE 3.5 RAID 0+1 (mirrored stripe). (A 2GB mirrored volume composed of two 2GB RAID 0 stripes, each built from two 1GB slices on separate physical disks; the stripes are created first and then mirrored.)

FIGURE 3.6 RAID 1+0 (striped mirror). (A 2GB mirrored volume composed of three 1GB RAID 1 submirrors striped together; each submirror mirrors a pair of 1GB slices on separate physical disks, using six physical disks in all.)

If a device fails, the entire stripe or concatenation is not taken offline, only the failed device.
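The difference in vulnerability can be made concrete with a small simulation (illustrative Python, not part of SVM): take six disks arranged as two striped submirrors (RAID 0+1) versus three mirrored pairs that are striped (RAID 1+0), and count how many two-disk failures each layout survives.

```python
from itertools import combinations

DISKS = range(6)

def raid01_survives(failed):
    # RAID 0+1: two submirrors, each a stripe over disks 0-2 and 3-5.
    # A submirror fails if ANY of its disks fails; data survives while
    # at least one whole submirror is intact.
    side_a_ok = not any(d in failed for d in (0, 1, 2))
    side_b_ok = not any(d in failed for d in (3, 4, 5))
    return side_a_ok or side_b_ok

def raid10_survives(failed):
    # RAID 1+0: three mirrored pairs (0,3), (1,4), (2,5), then striped.
    # Data survives while every pair keeps at least one working disk.
    return all(not (a in failed and b in failed)
               for a, b in ((0, 3), (1, 4), (2, 5)))

pairs = list(combinations(DISKS, 2))
r01 = sum(raid01_survives(set(p)) for p in pairs)
r10 = sum(raid10_survives(set(p)) for p in pairs)
print(r01, r10, len(pairs))  # 6 12 15
```

Of the 15 possible two-disk failures, the RAID 1+0 layout survives 12 while the RAID 0+1 layout survives only 6, which matches the statement that RAID 1+0 is less vulnerable.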

Solaris Volume Manager (SVM)

Objective:
. Analyze and explain SVM concepts (logical volumes, soft partitions, state databases, hot spares, and hot spare pools).
. Create the state database, build a mirror, and unmirror the root file system.

SVM, formerly called Solstice DiskSuite, comes bundled with the Solaris 10 operating system and uses virtual disks, called volumes, to manage physical disks and their associated data. You may also hear volumes referred to as virtual or pseudo devices. A volume is functionally identical to a physical disk from the point of view of an application. SVM uses a special driver, called the metadisk driver, to coordinate I/O to and from physical devices and volumes, enabling applications to treat a volume like a physical device. This type of driver is also called a logical or pseudo driver.

NOTE SVM terminology - If you are familiar with Solstice DiskSuite, you'll remember that virtual disks were called metadevices.

In SVM, volumes are built from standard disk slices that have been created using the format utility. Using either the SVM command-line utilities or the graphical user interface of the Solaris Management Console (SMC), the system administrator creates each device by executing commands or dragging slices onto one of four types of SVM objects: volumes, state database replicas, disk sets, and hot spare pools. These elements are described in Table 3.2.

A recent feature of SVM is soft partitions. This breaks the traditional eight-slices-per-disk barrier by allowing disks to be subdivided into many more partitions. One reason for doing this might be to create more manageable file systems, given the ever-increasing capacity of disks. Solaris 10 SVM can support up to 8,192 logical volumes per disk set, but the default is to support 128 logical volumes—namely, d0 through d127.

Table 3.2 SVM Elements
Volume - Also called a metadevice; a group of physical slices that appear to the system as a single, logical device. A volume is used to increase storage capacity and increase data availability. The various types of volumes are described in the next section of this chapter.

Table 3.2 SVM Elements (continued)
State database - A database that stores information about the state of the SVM configuration. Each state database is a collection of multiple, replicated database copies. Each copy is referred to as a state database replica. SVM cannot operate until you have created the state database and its replicas. You should create at least three state database replicas when using SVM because the validation process requires a majority (half + 1) of the state databases to be consistent with each other before the system will start up correctly. Each state database replica should ideally be physically located on a separate disk (and preferably a separate disk controller) for added resilience.
Soft partition - A means of dividing a disk or volume into as many partitions as needed, overcoming the current limitation of eight. This is done by creating logical partitions within physical disk slices or logical volumes.
Disk set - A set of disk drives containing state database replicas, volumes, and hot spares that can be shared exclusively, but not at the same time, by multiple hosts. If one host fails, another host can take over the failed host's disk set. This type of fail-over configuration is referred to as a clustered environment.
Hot spare - A slice that is reserved for use in case of a slice failure in another volume, such as a submirror or a RAID 5 metadevice. It is used to increase data availability.
Hot spare pool - A collection of hot spares. A hot spare pool can be used to provide a number of hot spares for specific volumes or metadevices. For example, a pool may be used to provide resilience for the rootdisk, while another pool provides resilience for data disks.

SVM Volumes
The types of SVM volumes you can create using Solaris Management Console or the SVM command-line utilities are concatenations, stripes, concatenated stripes, mirrors, and RAID 5 volumes. All the SVM volumes are described in the following sections.

NOTE No more transactional volumes - As of Solaris 10, you should note that transactional volumes are no longer available with the Solaris Volume Manager (SVM). Use UFS logging to achieve the same functionality.

Concatenations
Concatenations work much the same way the UNIX cat command is used to concatenate two or more files to create one larger file. If partitions are concatenated, the addressing of the component blocks is done on the components sequentially. This means that data is written to the first available slice until it is full and then moves to the next available slice. The file system can use the entire concatenation.

A concatenation can contain disk slices of different sizes because they are merely joined together. This type of volume provides no data redundancy, and the entire volume fails if a single slice fails. The file system can use the entire concatenation, even though it spreads across multiple disk drives.

Stripes
A stripe is similar to a concatenation, except that the addressing of the component blocks is interlaced on all the slices comprising the stripe rather than sequentially. You should note that, unlike a concatenation, the components making up a stripe must all be the same size. Striping is used to gain performance. When data is striped across disks, all disks are accessed at the same time in parallel, so multiple controllers can access data simultaneously, improving read performance.

An interlace refers to a grouped segment of blocks on a particular slice. The size of the interlace can be configured when the slice is created (the default value being 16K) and cannot be modified afterward without destroying and recreating the stripe. Because each sequential chunk resides on a separate slice, four chunks of data (16K each, because of the interlace size) are read simultaneously. Different interlace values can increase performance; in determining the size of the interlace, the specific application must be taken into account. For example, if most of the I/O requests are for large amounts of data (an I/O request of, say, 10 megabytes), an interlace size of 2 megabytes produces a significant performance increase when using a five-disk stripe.

Concatenated Stripes
A concatenated stripe is a stripe that has been expanded by concatenating additional striped slices.

Mirrors
A mirror is composed of one or more stripes or concatenations. The volumes that are mirrored are called submirrors. SVM makes duplicate copies of the data located on multiple physical disks and presents one virtual disk to the application. A mirror replicates all writes to a single logical device (the mirror) and then to multiple devices (the submirrors) while distributing read operations. This provides redundancy of data in the event of a disk or hardware failure. All disk writes are duplicated; disk reads come from one of the underlying submirrors.

Some mirror options can be defined when the mirror is initially created, or following the setup. For example, these options can allow all reads to be distributed across the submirror components. Table 3.3 describes the mirror read policies that can be configured.
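Interlaced addressing can be sketched as a simple mapping from a logical block number to a (slice, offset) pair. The following Python is purely illustrative (it is not how SVM is implemented); it shows why sequential chunks land on different slices and can therefore be read in parallel:

```python
def stripe_location(block, n_slices, interlace=4):
    """Map a logical block number to (slice index, block offset on that
    slice) for a stripe whose interlace is `interlace` blocks wide."""
    chunk, within = divmod(block, interlace)           # which interlace-sized chunk
    slice_idx = chunk % n_slices                       # chunks rotate across slices
    offset = (chunk // n_slices) * interlace + within  # position on that slice
    return slice_idx, offset

# The first block of each successive chunk maps to a different slice:
print([stripe_location(b, n_slices=4)[0] for b in range(0, 32, 4)])
# -> [0, 1, 2, 3, 0, 1, 2, 3]
```

A concatenation, by contrast, would fill all of slice 0 before touching slice 1, which is why it offers no read parallelism for sequential access.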

Table 3.3 Mirror Read Policies
Round Robin - This is the default policy; it distributes the reads across submirrors.
Geometric - Reads are divided between the submirrors based on a logical disk block address.
First - This directs all reads to use the first submirror only.

Write performance can also be improved by configuring writes to all submirrors simultaneously. The trade-off with this option, however, is that all submirrors will be in an unknown state if a failure occurs. If a submirror goes offline, it must be resynchronized when the fault is resolved and it returns to service. Table 3.4 describes the write policies that can be configured for mirror volumes.

Table 3.4 Mirror Write Policies
Parallel - This is the default policy; it directs the write operation to all submirrors simultaneously.
Serial - This policy specifies that writes to one submirror must complete before writes to the next submirror are started.

EXAM ALERT
Read and write policies - Make sure you are familiar with the policies for both read and write. There have been exam questions that ask for the valid mirror policies.

RAID 5 Volumes
A RAID 5 volume stripes the data, as described in the "Stripes" section earlier, but in addition to striping, RAID 5 replicates data by using parity information. A RAID 5 metadevice is composed of multiple slices. Some space is allocated to parity information and is distributed across all slices in the RAID 5 metadevice. In the case of missing data, the data can be regenerated using the available data and the parity information. Merely striping doesn't provide data protection (redundancy), but the striped metadevice's performance is better than the RAID 5 metadevice's because the RAID 5 metadevice has a parity overhead.
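The parity overhead on writes comes from the read-modify-write update described in the "RAID 5" section: read the old data and old parity, then write the new data and a new parity computed as old parity XOR old data XOR new data. A hedged Python sketch of just the parity arithmetic (not SVM code):

```python
def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def raid5_small_write(old_data, old_parity, new_data):
    """Return the new parity after overwriting one data segment.
    I/O 1: read old_data; I/O 2: read old_parity;
    I/O 3: write new_data; I/O 4: write new_parity."""
    return xor(xor(old_parity, old_data), new_data)

# A stripe of three data segments plus their parity:
d1, d2, d3 = b"\x01\x01", b"\x02\x02", b"\x04\x04"
parity = xor(xor(d1, d2), d3)  # 0x07 0x07

# Overwrite d2 without touching d1 or d3:
new_d2 = b"\x08\x08"
parity = raid5_small_write(d2, parity, new_d2)
print(parity == xor(xor(d1, new_d2), d3))  # True: same as a full recompute
```

Only the modified segment and the parity segment are touched, yet the result matches recomputing the parity over the whole stripe; those four I/Os per logical write are the "write penalty."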

Planning Your SVM Configuration

EXAM ALERT
You'll see several questions that describe various data center scenarios. You'll be given criteria, such as cost constraints, data availability, or performance, and you'll be asked to choose which SVM configuration is best for a given situation. Make sure you are familiar with the pros and cons of each RAID solution. Try to memorize the following guidelines so that you understand why one configuration might be chosen over another for a given situation.

When designing your storage configuration, keep in mind the following guidelines:
. Identify the most frequently accessed data, and increase access bandwidth to that data with mirroring or striping.
. Both RAID 0 stripes and RAID 5 volumes distribute data across multiple disk drives and help balance the I/O load.
. RAID 0 striping generally has the best performance, but it offers no data protection. A RAID 0 stripe's performance is better than that of a RAID 5 volume, but RAID 0 stripes do not provide data protection (redundancy).
. RAID 1 and RAID 5 volumes both increase data availability, but both generally result in lower performance, especially for write operations. Mirroring does improve random read performance, and if the underlying submirror is a stripe, mirroring can improve write operations.
. For write-intensive applications, RAID 1 generally has better performance than RAID 5.
. For raw random I/O reads, the RAID 0 stripe and the RAID 5 volume are comparable. Both the stripe and the RAID 5 volume split the data across multiple disks, and the RAID 5 volume's parity calculations aren't a factor in reads except after a slice failure.
. For raw random I/O writes, a RAID 0 stripe is superior to RAID 5 volumes. RAID 5 volume performance is lower than striped RAID 0 performance for write operations because the RAID 5 volume requires multiple I/O operations to calculate and store the parity.
. RAID 5 requires less disk space; therefore, RAID 5 volumes have a lower hardware cost than RAID 1 volumes. RAID 0 volumes have the lowest hardware cost.

EXAM ALERT
RAID solutions - You might get an exam question that describes an application and then asks which RAID solution would be best suited for it. For example, a financial application with mission-critical data would require mirroring to provide the best protection for the data, whereas a video editing application would require striping for the pure performance gain.
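The disk-space trade-off in the guidelines above is easy to quantify. The following small Python helper is illustrative only (not part of SVM); it compares usable capacity for equal-sized components under each RAID level:

```python
def usable_capacity(raid, n, slice_gb):
    """Usable GB for n equal-sized slices under a given RAID level."""
    if raid == 0:               # stripe or concatenation: all space usable
        return n * slice_gb
    if raid == 1:               # mirror: data is duplicated
        return (n * slice_gb) // 2
    if raid == 5:               # one component's worth holds the parity
        return (n - 1) * slice_gb
    raise ValueError(raid)

# Four 36GB slices, as in the Figure 3.4 example:
print(usable_capacity(0, 4, 36))  # 144
print(usable_capacity(1, 4, 36))  # 72
print(usable_capacity(5, 4, 36))  # 108, matching the 108GB RAID 5 volume
```

This is the arithmetic behind the cost guideline: RAID 0 has the lowest hardware cost per usable gigabyte, RAID 5 gives redundancy for the price of one component, and RAID 1 doubles the hardware requirement.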

Using SVM, you can utilize volumes to provide increased capacity, higher availability, and better performance. In addition, the hot spare capability provided by SVM can provide another level of data availability for mirrors and RAID 5 volumes. Hot spares were described earlier in this chapter.

After you have set up your configuration, you can use Solaris utilities such as iostat, metastat, and metadb to report on its operation. The iostat utility is used to provide information on disk usage and shows you which metadevices are being heavily utilized. The metastat and metadb utilities provide status information on the metadevices and state databases, respectively.

The metastat command displays the current status for each metadevice. Its syntax is as follows:

metastat [<options>] <metadevice>

The options are as follows:
. -a: Displays all disk sets.
. -B: Displays the current status of all the 64-bit metadevices and hot spares.
. -c: Displays concise output.
. -h: Displays the command usage message.
. -i: Checks the status of all active metadevices and hot spares. The inquiry causes all components of each metadevice to be checked for accessibility, starting at the top-level metadevice. When problems are discovered, the metadevice state databases are updated as if an error occurred.
. -p: Displays the active metadevices in the same format as the md.tab file.
. -q: Displays the status of metadevices without the device relocation information.
. -r: Displays whether subdevices are relocatable.
. -s: Specifies the name of the disk set on which metastat works.
. -t: Prints the current status and timestamp for the specified metadevices and hot spare pools. The timestamp provides the date and time of the last state change.

For example, the following output provides information from the metastat utility while two mirror metadevices are being synchronized:

# metastat -i<cr>
d60: Mirror
    Submirror 0: d61
      State: Okay
    Submirror 1: d62
      State: Resyncing
    Resync in progress: 16 % done

    Pass: 1
    Read option: roundrobin (default)
    Write option: parallel (default)
    Size: 10420224 blocks (5.0 GB)

d61: Submirror of d60
    State: Okay
    Size: 10420224 blocks (5.0 GB)
    Stripe 0:
        Device     Start Block  Dbase        State Reloc Hot Spare
        c2t4d0s6          0     No            Okay   Yes

d62: Submirror of d60
    State: Resyncing
    Size: 10420224 blocks (5.0 GB)
    Stripe 0:
        Device     Start Block  Dbase        State Reloc Hot Spare
        c2t5d0s6          0     No            Okay   Yes

Device Relocation Information:
Device   Reloc  Device ID
c2t5d0   Yes    id1,sd@SATA_____VBOX_HARDDISK____VB62849a49-3829a15b
c2t4d0   Yes    id1,sd@SATA_____VBOX_HARDDISK____VB139b5e81-7e6b7a18
#

Notice from the preceding output that there are two mirror metadevices, each containing two submirror component metadevices: d60 contains submirrors d61 and d62, and d50 contains submirrors d51 and d52. It can be seen that the metadevices d52 and d62 are in the process of resynchronization. Use of this utility is important, as there could be a noticeable degradation of service during the resynchronization operation on these volumes, which can be closely monitored as metastat also displays the progress of the operation, in percentage complete terms. Further information on these utilities is available from the online manual pages.

You can also use SVM's Simple Network Management Protocol (SNMP) trap generating daemon to work with a network monitoring console to automatically receive SVM error messages. Configure SVM's SNMP trap to trap the following instances:
. A RAID 1 or RAID 5 subcomponent goes into "needs maintenance" state. A disk failure or too many errors would cause the software to mark the component as "needs maintenance."
. A hot spare volume is swapped into service.
. A hot spare volume starts to resynchronize.
. A hot spare volume completes resynchronization.
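In a monitoring script, you might track resync progress by parsing metastat output. The following Python is a hedged sketch (the parsing logic and sample text are illustrative; they merely mimic the output format shown above):

```python
import re

sample = """d60: Mirror
    Submirror 0: d61
      State: Okay
    Submirror 1: d62
      State: Resyncing
    Resync in progress: 16 % done
"""

def resync_progress(metastat_output):
    """Return {mirror_name: percent_done} for mirrors that are resyncing."""
    progress = {}
    current = None
    for line in metastat_output.splitlines():
        header = re.match(r"(\S+): Mirror", line)
        if header:
            current = header.group(1)
        pct = re.search(r"Resync in progress:\s*(\d+)\s*% done", line)
        if pct and current:
            progress[current] = int(pct.group(1))
    return progress

print(resync_progress(sample))  # {'d60': 16}
```

A cron job could run `metastat -i`, feed the text through a parser like this, and alert when a resync runs during peak service hours.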

. A mirror is taken offline.
. A disk set is taken by another host and the current host panics.

The system administrator now can receive messages from SVM when an error condition or notable event occurs.

Metadisk Driver
The metadisk driver, the driver used to manage SVM volumes, is implemented as a set of loadable pseudo device drivers. It uses other physical device drivers to pass I/O requests to and from the underlying devices. The metadisk driver operates between the file system and application interfaces and the device driver interface. It interprets information from both the UFS or applications and the physical device drivers. After passing through the metadevice driver, information is received in the expected form by both the file system and the device drivers. All operations that affect SVM volumes are managed by the metadisk driver. The metadevice is a loadable device driver, and it has all the same characteristics as any other disk device driver.

By default, there are 128 unique metadisk devices in the range of 0 to 127. Additional volumes, up to 8192, can be added to the kernel by editing the /kernel/drv/md.conf file. The volume name begins with "d" and is followed by a number. The meta block device accesses the disk using the system's normal buffering mechanism. There is also a character (or raw) device that provides for direct transmission between the disk and the user's read or write buffer. The names of the block devices are found in the /dev/md/dsk directory, and the names of the raw devices are found in the /dev/md/rdsk directory. The following is an example of a block and raw logical device name for metadevice d0:

/dev/md/dsk/d0   - block metadevice d0
/dev/md/rdsk/d0  - raw metadevice d0

You must have root access to administer SVM or have equivalent privileges granted through RBAC. (RBAC is described in Chapter 4, "Controlling Access and Configuring System Messaging.")

SVM Commands
A number of SVM commands help you create, maintain, monitor, and remove metadevices. All the commands are delivered with the standard Solaris 10 Operating Environment distribution. Table 3.5 briefly describes the function of the more frequently used commands that are available to the system administrator.

NOTE Where they live - The majority of the SVM commands reside in the /usr/sbin directory, although you should be aware that metainit, metadb, metastat, metadevadm, and metarecover reside in /sbin. /usr/sbin contains links to these commands as well.

Table 3.5 Solaris Volume Manager Commands
metaclear - Used to delete metadevices; can also be used to delete hot spare pools.
metadb - Used to create and delete the state database and its replicas. metadb with the -i option is used to monitor the status of the state database and its replicas.
metadetach - Used to detach a metadevice, typically removing one half of a mirror.
metadevadm - Used to update the metadevice information, an example being if a disk device changes its target address (ID).
metahs - Used to manage hot spare devices and hot spare pools.
metainit - Used to configure metadevices. You would use metainit to create concatenations or striped metadevices.
metattach - Used to attach a metadevice, typically used when creating a mirror or adding additional mirrors.
metaoffline - Used to place submirrors in an offline state.
metaonline - Used to place submirrors in an online state.
metareplace - Used to replace components of submirrors or RAID 5 metadevices. You would use metareplace when replacing a failed disk drive.
metarecover - Used to recover soft partition information.
metaroot - Used to set up the system files for the root metadevice. metaroot configures the / (root) file system to use a metadevice. It adds an entry to /etc/system and also updates /etc/vfstab to reflect the new device to use to mount the root (/) file system.
metastat - Used to display the status of a metadevice, all metadevices, or hot spare pools.

NOTE No more metatool - You should note that the metatool command is no longer available in Solaris 10. Similar functionality—managing metadevices through a graphical utility—can be achieved using the Solaris Management Console (SMC)—specifically, the Enhanced Storage section.

Creating the State Database

The SVM state database contains vital information on the configuration and status of all volumes, hot spares, and disk sets. There are normally multiple copies of the state database, called replicas. The state database, together with its replicas, guarantees the integrity of the state database by using a majority consensus algorithm. The algorithm used by SVM for database replicas is as follows:
. The system will continue to run if at least half of the state database replicas are available.
. The system will panic if fewer than half of the state database replicas are available.
. The system cannot reboot into multiuser mode unless a majority (half + 1) of the total number of state database replicas are available.

This is why at least three state database replicas must be created initially, to allow for the majority algorithm to work correctly. If insufficient state database replicas are available, you need to boot to single-user mode and delete or replace enough of the corrupted or missing database replicas to achieve a quorum. If a system crashes and corrupts a state database replica, the majority of the remaining replicas must be available and consistent—that is, half + 1—when the system is rebooted. The Solaris operating system continues to function normally if all state database replicas are deleted; however, the system loses all Solaris Volume Manager configuration data when no state database replicas are available on a disk.

NOTE No automatic problem detection - The SVM software does not detect problems with state database replicas until an existing SVM configuration changes and an update to the database replicas is required.

You also need to put some thought into the placement of your state database replicas. It is recommended that state database replicas be located on different physical disks, to provide added resilience. You cannot create state database replicas on slices containing existing file systems or data. The following are some guidelines:
. When possible, place state database replicas on slices that are on separate disk drives.
. When possible, use drives that are on different host bus adapters, or even different controllers if possible.
. Sun recommends that you create state database replicas on a dedicated slice that is at least 4MB in size for each database replica it will store. However, because disk space is relatively cheap, I recommend 10MB per state database replica. I've seen Sun increase the size of a metadb in the past from 1024 blocks to 8192, so I like to be prepared. Also, the size of a database replica could be increased if you create more than 128 metadevices.
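The majority consensus rules above can be written down directly. The following Python is illustrative only; the thresholds mirror the "half" and "half + 1" rules described in the text, not actual SVM code:

```python
def replica_state(total, available):
    """Classify system behavior from state database replica counts."""
    if available * 2 < total:    # fewer than half available: panic
        return "panic"
    if available * 2 > total:    # majority (half + 1): normal multiuser boot
        return "boots multiuser"
    return "runs, but cannot reboot into multiuser mode"  # exactly half

# With three replicas, losing one still leaves a majority:
print(replica_state(3, 2))  # boots multiuser
# With four replicas split two per disk, one disk failure leaves exactly half:
print(replica_state(4, 2))  # runs, but cannot reboot into multiuser mode
print(replica_state(4, 1))  # panic
```

This is also why three is the minimum sensible replica count: with only two replicas, a single loss leaves exactly half, which keeps the system running but blocks a clean multiuser reboot.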

When distributing your state database replicas, follow these rules:
. Create three replicas on one slice for a system with a single disk drive. Realize, however, that if the drive fails, all your database replicas will be unavailable, and your system will crash.
. Create two replicas on each drive for a system with two to four disk drives.
. Create one replica on each drive for a system with five or more drives.

The state database and its replicas are managed using the metadb command. The syntax of this command is as follows:

/sbin/metadb -h
/sbin/metadb [-s <setname>]
/sbin/metadb [-s <setname>] -a [-f] [-k <system-file>] mddbnn
/sbin/metadb [-s <setname>] -a [-f] [-k <system-file>] [-c <number>] \
    [-l <length>] slice...
/sbin/metadb [-s <setname>] -d [-f] [-k <system-file>] mddbnn
/sbin/metadb [-s <setname>] -d [-f] [-k <system-file>] slice...
/sbin/metadb [-s <setname>] -i
/sbin/metadb [-s <setname>] -p [-k <system-file>] [mddb.cf-file]

Table 3.6 describes the options available for the metadb command.

Table 3.6 metadb Command Options
-a - Specifies the creation of a new database replica.
-c <number> - Specifies the number of replicas to be created on each device. The default is 1.
-d - Deletes all the replicas that are present in the specified slice.
-f - Forces the creation of the first database replica (when used in conjunction with the -a option) and the deletion of the last remaining database replica (when used in conjunction with the -d option).
-h - Displays the usage message.
-i - Displays status information about all database replicas.
-k <system-file> - Specifies the name of the kernel file where the replica information should be written. By default, this is /kernel/drv/md.conf.
-l <length> - Specifies the size (in blocks) of each replica. The default length is 8,192 blocks.
-p - Specifies that the system file (the default is /kernel/drv/md.conf) should be updated with entries from /etc/lvm/mddb.cf.
-s <setname> - Specifies the name of the disk set on which metadb should run.
slice - Specifies the disk slice to use, such as /dev/dsk/c0t0d0s6.

In the following example, I have reserved a slice (slice 4) on each of two disks to hold the copies of the state database, and I'll create two copies in each reserved disk slice, giving a total of four state database replicas. To create the state database and its replicas, using the reserved disk slices, enter the following command:

# metadb -a -f -c2 c0t0d0s4 c0t1d0s4<cr>

Here, -a indicates a new database is being added, -f forces the creation of the initial database, -c2 indicates that two copies of the database are to be created, and the two cxtxdxsx entries describe where the state databases are to be physically located. The system returns the prompt; there is no confirmation that the database has been created.

In this scenario, the failure of one disk drive will not result in the loss of more than half of the operational state database replicas, so the system will continue to function. For example, if I had created only three database replicas and the drive containing two of the replicas fails, more than half of the replicas would be lost, and the system will panic. The system will panic only when more than half of the database replicas are lost.

The following example demonstrates how to remove the state database replicas from two disk slices, namely c0t0d0s4 and c0t1d0s4:

# metadb -d c0t0d0s4 c0t1d0s4<cr>

The next section shows how to verify the status of the state database.

Monitoring the Status of the State Database
When the state database and its replicas have been created, you can use the metadb command, with no options, to see the current status. If you use the -i flag, you also see a description of the status flags. Examine the state database as shown here:

# metadb -i<cr>
        flags           first blk       block count
     a m  p  luo        16              8192            /dev/dsk/c0t0d0s4
     a    p  luo        8208            8192            /dev/dsk/c0t0d0s4
     a    p  luo        16              8192            /dev/dsk/c0t1d0s4
     a    p  luo        8208            8192            /dev/dsk/c0t1d0s4
 r - replica does not have device relocation information
 o - replica active prior to last mddb configuration change
 u - replica is up to date
 l - locator for this replica was read successfully
 c - replica's location was in /etc/lvm/mddb.cf
 p - replica's location was patched in kernel
 m - replica is master, this is replica selected as input
 W - replica has device write errors

 a - replica is active, commits are occurring to this replica
 M - replica had problem with master blocks
 D - replica had problem with data blocks
 F - replica had format problems
 S - replica is too small to hold current data base
 R - replica had device read errors

Each line of output is divided into the following fields:
. flags: This field contains one or more state database status letters. A normal status is a "u" and indicates that the database is up to date and active. Uppercase status letters indicate a problem, and lowercase letters are informational only.
. first blk: The starting block number of the state database replica in its partition. Multiple state database replicas in the same partition will show different starting blocks.
. block count: The size of the replica in disk blocks. The default length is 8192 blocks (4MB), but the size could be increased if you anticipate creating more than 128 metadevices, in which case you would need to increase the size of all state databases.

The last field in each state database listing is the path to the location of the state database replica. As the code shows, all four replicas are active and up to date and have been read successfully, and there is one master replica.

Recovering from State Database Problems
SVM requires that at least half of the state database replicas be available for the system to function correctly. When a disk fails or some of the state database replicas become corrupt, they must be removed, with the system at the Single User state, to allow the system to boot correctly. When the system is operational again (albeit with fewer state database replicas), additional replicas can again be created.

The following example shows a system with two disks, each with two state database replicas, on slices c0t0d0s7 and c0t1d0s7. If we run metadb -i, we can see that the state database replicas are all present and working correctly:

# metadb -i<cr>
        flags           first blk       block count
     a m  p  luo        16              8192            /dev/dsk/c0t0d0s7
     a    p  luo        8208            8192            /dev/dsk/c0t0d0s7
     a    p  luo        16              8192            /dev/dsk/c0t1d0s7
     a    p  luo        8208            8192            /dev/dsk/c0t1d0s7
(The same status-flag legend shown previously follows the output.)

a disk failure or corruption occurs on the disk c0t1d0 and renders the two replicas unusable. you need to be in single-user mode. Reboot the system when finished to reload the metadevice database. To repair the situation. commits are occurring to this replica M .replica active prior to last mddb configuration change u .replica had format problems S . repair any broken database replicas which were deleted. Ignore any Read-only file system error messages. so boot the system with -s and then remove the failed state database replicas on c0t1d0s7: # metadb -d c0t1d0s7<cr> . The metadb -i command shows that errors have occurred on the two replicas on c0t1d0s7: # metadb -i<cr> flags first blk block count a m p luo 16 8192 /dev/dsk/c0t0d0s7 a p luo 8208 8192 /dev/dsk/c0t0d0s7 M p 16 unknown /dev/dsk/c0t1d0s7 M p 8208 unknown /dev/dsk/c0t1d0s7 r .replica is active.replica’s location was patched in kernel m .replica does not have device relocation information o .replica had problem with master blocks D .replica has device write errors a . Use metadb to delete databases which are broken. commits are occurring to this replica replica had problem with master blocks replica had problem with data blocks replica had format problems replica is too small to hold current data base replica had device read errors Subsequently. this is replica selected as input replica has device write errors replica is active.locator for this replica was read successfully c .replica had device read errors When the system is rebooted.replica is master. After reboot.replica’s location was in /etc/lvm/mddb.cf p .cf replica’s location was patched in kernel replica is master. this is replica selected as input W . 
the following messages appear: Insufficient metadevice database replicas located.replica had problem with data blocks F .145 Solaris Volume Manager (SVM) u l c p m W a M D F S R replica is up to date locator for this replica was read successfully replica’s location was in /etc/lvm/mddb.replica is up to date l .replica is too small to hold current data base R .
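The replica-quorum rule can be checked mechanically. The following sketch counts healthy replicas in a hardcoded sample of metadb -i output; the sample and the script are illustrative only (metadb itself is the authority on replica state):

```shell
# Sample 'metadb -i' listing from a degraded system (illustrative);
# uppercase status letters (here M) mark replicas with errors.
sample='a m  p  luo  16    8192     /dev/dsk/c0t0d0s7
a    p  luo  8208  8192     /dev/dsk/c0t0d0s7
 M   p       16    unknown  /dev/dsk/c0t1d0s7
 M   p       8208  unknown  /dev/dsk/c0t1d0s7'

total=$(printf '%s\n' "$sample" | wc -l)
# Healthy replicas are those whose flags contain no uppercase letters.
healthy=$(printf '%s\n' "$sample" | grep -cv '[A-Z]')

echo "healthy=$healthy total=$total"
# SVM keeps running with at least half the replicas available, but needs
# a majority (more than half) to boot unattended into multiuser mode.
if [ $(( healthy * 2 )) -le "$total" ]; then
    echo "no replica majority: boot with -s and run metadb -d"
fi
```

With two of four replicas errored, the check reports that the majority is lost, matching the boot-time messages shown above.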

Now reboot the system again. It boots with no problems, although you now have fewer state database replicas. This enables you to repair the failed disk and re-create the metadevice state database replicas.

Creating a RAID 0 (Concatenated) Volume
A RAID 0 volume is also called a concatenated volume or a simple volume. It provides no redundancy but gives you a method to quickly expand disk storage. You create a concatenated volume when you want to place an existing file system under SVM control. The concatenated volume spreads data across all the components in the volume, but it starts with the first available component and uses it until it's full. When the first component is full, the volume starts to fill the next available component. There is no performance gain over conventional file systems and slices, because the system still writes to only one disk at a time.

Use the metainit command to create an SVM volume. Use the following syntax when using metainit to create a concatenated volume:

/sbin/metainit <volume-name> <number-of-stripes> <components-per-stripe>\
<component-names> [-i <interlace>]

Table 3.7 describes the metainit options used to create a concatenated volume.

Table 3.7    metainit Command Options
Command Option          Description
<volume-name>           Specifies the name of the volume to create, such as d0 and d10. Solaris 10 currently requires that all volume names begin with d followed by a number, so we currently don't have much flexibility in our volume names. This probably will change in the future; in fact, some releases of OpenSolaris allow the use of descriptive names. Use a standard naming convention for your volume names to simplify administration. This is especially true when I describe setting up mirrored volumes.
<number-of-stripes>     Specifies the number of stripes to create.
<components-per-stripe> Specifies the number of components each stripe should have.
<component-names>       Specifies the names of the components that are used. The component name could be a physical disk slice, such as c0t1d0s2, or another volume, such as d1. If more than one component is used, separate each component with a space.
-i <interlace>          Specifies the interlace width to use for the stripe. The interlace width is a value, followed by either k for kilobytes, m for megabytes, or b for blocks. The interlace specified cannot be less than 16 blocks or greater than 100 megabytes. The default interlace width is 16 kilobytes.

Table 3.7    metainit Command Options (continued)
Generic Command Options That Can Be Used When Creating All Types of SVM Volumes
-f              Forces the metainit command to continue even if one of the slices contains a mounted file system or is being used as swap. This option is useful when you're configuring mirrors on root (/), swap, and /usr.
-h              Displays the command usage message.
-n              Checks the syntax of your command line or md.tab entry without actually setting up the metadevice. If used with -a, all devices are checked but not initialized.
-r              Used only in a shell script at boot time. Sets up all metadevices that were configured before the system crashed or was shut down. This option is necessary if you are configuring mirrors on root (/), swap, or /usr.
-s <setname>    Specifies the name of the disk set on which metainit will work. Without this option, metainit operates on your local metadevices and/or hot spares.

In the following example, a concatenated metadevice (simple volume) is created using a single disk slice named /dev/dsk/c0t0d0s5. The metadevice is named d100. The concatenation consists of one stripe (number of stripes = 1), and the stripe is composed of one slice (components per stripe = 1):

# metainit -f d100 1 1 c0t0d0s5<cr>
d100: Concat/Stripe is setup

The -f option is a generic option that forces the metainit command to continue even if one of the slices contains a mounted file system or is being used as swap.

View the metadevice with the metastat command as follows:

# metastat -c<cr>
d100    s  2.0GB c0t0d0s5

The -c option to the metastat command displays the output in a concise format.

A metadevice is removed with the metaclear command. The syntax for the metaclear command is as follows:

/sbin/metaclear [<options>] <metadevice>

Table 3.8 lists the options of the metaclear command.

Table 3.8    metaclear Command Options
Option   Description
-a       Deletes all metadevices.
-f       Deletes forcibly. Use this to delete a metadevice or component that is in an error state.
-h       Displays the metaclear usage message.
-p       Purges (deletes) all soft partitions from the specified metadevice or component.
-r       Recursively deletes specified metadevices and hot spares. This option does not delete metadevices on which other metadevices depend.

Remove the metadevice with the metaclear command:

# metaclear d100<cr>
d100: Concat/Stripe is cleared

In this next example, I'll create a concatenation of three separate disk slices (c2t1d0s6, c2t2d0s6, and c2t3d0s6). The metadevice is named d101. The concatenation consists of three stripes (number of stripes = 3), and each stripe is composed of one slice (components per stripe = 1):

# metainit -f d101 3 1 c2t1d0s6 1 c2t2d0s6 1 c2t3d0s6<cr>
d101: Concat/Stripe is setup

View the metadevice with the metastat command as follows:

# metastat<cr>
d101: Concat/Stripe
    Size: 50135040 blocks (23 GB)
    Stripe 0:
        Device     Start Block  Dbase  Reloc
        c2t1d0s6   0            No     Yes
    Stripe 1:
        Device     Start Block  Dbase  Reloc
        c2t2d0s6   0            No     Yes
    Stripe 2:
        Device     Start Block  Dbase  Reloc
        c2t3d0s6   0            No     Yes

Use the metaclear command to remove an SVM volume. For example, to remove the volume named d101, type the following:

# metaclear d101<cr>
d101: Concat/Stripe is cleared
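Sizes in metastat output are given in 512-byte disk blocks. As a quick sanity check (my own arithmetic, not SVM output), converting d101's block count confirms the 23 GB figure:

```shell
# metastat reports sizes in 512-byte disk blocks; convert d101's
# 50135040 blocks to gigabytes.
blocks=50135040
bytes=$(( blocks * 512 ))
gib=$(( bytes / 1024 / 1024 / 1024 ))
echo "${gib} GB"
```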

Creating a RAID 0 (Stripe) Volume
A RAID 0 stripe volume provides better performance than a RAID 0 concatenated volume because all the disks are accessed in parallel rather than sequentially. You create a striped volume using the metainit command that was described earlier when I created a RAID 0 concatenated volume. However, when I create the stripe volume, I'll specify more than one slice per stripe:

# metainit -f d200 1 3 c2t1d0s6 c2t2d0s6 c2t3d0s6<cr>
d200: Concat/Stripe is setup

The volume named d200 consists of a single stripe (number of stripes = 1), and the stripe is composed of three slices (components per stripe = 3). The slices used are c2t1d0s6, c2t2d0s6, and c2t3d0s6, as with the concatenated stripe.

Display the metadevice with the metastat command:

# metastat<cr>
d200: Concat/Stripe
    Size: 50135040 blocks (23 GB)
    Stripe 0: (interlace: 32 blocks)
        Device     Start Block  Dbase  Reloc
        c2t1d0s6   0            No     Yes
        c2t2d0s6   0            No     Yes
        c2t3d0s6   0            No     Yes

Notice the difference between this striped metadevice (d200) and the concatenated metadevice (d101) that was created in the previous section.

Use the metaclear command to remove an SVM volume. For example, to remove the RAID 0 volume named d200, type the following:

# metaclear d200<cr>
d200: Concat/Stripe is cleared

Monitoring the Status of a Volume
Solaris Volume Manager provides the metastat command to monitor the status of all volumes. The syntax of this command is as follows:

/usr/sbin/metastat -h
/usr/sbin/metastat [-a] [-B] [-c] [-i] [-p] [-q] [-s <setname>]\
[-t <metadevice>] component

Table 3.9 describes the options for the metastat command.

Table 3.9    metastat Command Options
Option            Description
-a                Displays the metadevices for all disk sets owned by the current host.
-B                Displays the status of all 64-bit metadevices and hot spares.
-c                Displays concise output, only one line per metadevice.
-h                Displays a usage message.
-i                Checks the status of RAID 1 (mirror) volumes as well as RAID 5 and hot spares.
-p                Displays the list of active metadevices and hot spare pools. The output is displayed in the same format as the configuration file md.tab.
-q                Displays the status of metadevices, but without the device relocation information.
-s <setname>      Restricts the status to that of the specified disk set.
-t <metadevice>   Displays the status and timestamp of the specified metadevices and hot spares. The timestamp shows the date and time of the last state change.
component         Specifies the component or metadevice to restrict the output to. If this option is omitted, the status of all metadevices is displayed.

In the following example, the metastat command is used to display the status of a single metadevice, d100:

# metastat d100<cr>
d100: Concat/Stripe
    Size: 10489680 blocks (5.0 GB)
    Stripe 0:
        Device     Start Block  Dbase  State  Reloc  Hot Spare
        c0t0d0s5   0            No     Okay   Yes

Device Relocation Information:
Device    Reloc   Device ID
c0t0d0    Yes     id1,dad@ASAMSUNG_SP0411N=S01JJ60X901935

In the next example, the metastat -c command displays the status for the same metadevice (d100), but this time in concise format:

# metastat -c d100<cr>
d100    s  5.0GB c0t0d0s5

Creating a Soft Partition
Soft partitions are used to divide large partitions into smaller areas, or extents, without the limitations imposed by hard slices. The soft partition is created by specifying a start block and a block size. Soft partitions differ from hard slices created using the format command because soft partitions can be noncontiguous, whereas a hard slice is contiguous.

A soft partition can be built on a disk slice or another SVM volume, such as a concatenated device. Note that soft partitions can cause I/O performance degradation. Therefore, for maximum flexibility and high availability, build RAID 1 (mirror) or RAID 5 volumes on disk slices, and then create soft partitions on the mirror or RAID volume.

As when creating other SVM volumes, you create soft partitions using the SVM command metainit. The syntax is as follows:

metainit <soft-partition> -p [-e] <component> <size>

Table 3.10 describes the metainit options used to create a soft partition.

Table 3.10    metainit Command Options
Command Option      Description
<soft-partition>    The name of the metadevice. The name begins with d followed by a number.
<component>         The name of the disk slice or SVM volume that the soft partition will be created on.
<size>              Specifies the size of the soft partition. The size is specified as a number followed by M or m for megabytes, G or g for gigabytes, T or t for terabytes, or B or b for blocks.
-p                  Specifies that the metadevice will be a soft partition.
-e                  Specifies that the entire disk should be reformatted. Formatting the disk creates slice 0, which takes most of the disk, and slice 7, with a size of 4MB, for storing a state database replica.

For example, let's say that we have a hard slice named c2t1d0s1 that is 10GB in size and was created using the format command. To create a soft partition named d10 which is 1GB in size, and assuming that you've already created the required database replicas, issue the following command:

# metainit d10 -p c2t1d0s1 1g<cr>

The system responds with
d10: Soft Partition is setup

View the soft partition using the metastat command:

# metastat d10<cr>
d10: Soft Partition
    Device: c2t1d0s1
    State: Okay
    Size: 2097152 blocks (1.0 GB)
        Device     Start Block  Dbase  Reloc
        c2t1d0s1   25920        Yes    Yes

        Extent       Start Block      Block count
        0            25921            2097152

Device Relocation Information:
Device    Reloc   Device ID
c2t1d0    Yes     id1,sd@SIBM_____DDRS34560SUN4.2G564442__________

Create a file system on the soft partition using the newfs command:

# newfs /dev/md/rdsk/d10<cr>

It's good practice to check the new file system using the fsck command:

# fsck /dev/md/rdsk/d10<cr>

Now you can mount a directory named /data onto the soft partition:

# mount /dev/md/dsk/d10 /data<cr>

To remove the soft partition named d10, unmount the file system that is mounted to the soft partition and issue the metaclear command:

# metaclear d10<cr>

The system responds with
d10: Soft Partition is cleared

CAUTION
Removing the soft partition destroys all data that is currently stored on that partition.

You can also create a soft partition on an existing SVM volume, such as a striped or mirrored volume. In the following example, I've already created a RAID 0 striped volume named d200. The stripe is built on top of three 2GB slices, so d200 represents a volume that is approximately 6GB. I'll create two 500MB soft partitions named d40 and d50 on this striped volume:

# metainit d40 -p d200 500m<cr>
d40: Soft Partition is setup
# metainit d50 -p d200 500m<cr>
d50: Soft Partition is setup

Because the soft partitions are built on top of a striped volume, my performance will be improved as a result of the striping.
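As a back-of-the-envelope check (my own arithmetic; it ignores the small per-partition metadata overhead SVM reserves), the ~6GB stripe has room for roughly a dozen such 500MB soft partitions:

```shell
# The d200 stripe concatenates three 2GB slices, roughly 6GB in all.
# Ignoring SVM's small per-partition metadata overhead, estimate how
# many 500MB soft partitions would fit.
stripe_mb=$(( 3 * 2 * 1024 ))    # 6144 MB
part_mb=500
fits=$(( stripe_mb / part_mb ))
echo "$fits"
```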


Remove all the metadevices by typing this:
# metaclear -a<cr>
d50: Soft Partition is cleared
d40: Soft Partition is cleared

The -a option deletes all metadevices.

Expanding an SVM Volume
With SVM, you can increase the size of a file system while it is active and without unmounting the file system. The process of expanding a file system consists of first increasing the size of the SVM volume using the metattach command. The metattach command is used to grow soft partitions, metadevices, submirrors, and mirrors. Furthermore, metadevices can be grown without interrupting service. The syntax for using the metattach command to expand a soft partition is as follows:
/sbin/metattach [-s <setname>] <metadevice> <size>

where:
. -s <setname>: Specifies the name of the disk set on which the metattach or metadetach command will work. Using the -s option causes the command to perform its administrative function within the specified disk set. Without this option, the command performs its function on local metadevices.
. <metadevice>: Specifies the metadevice name of the existing soft partition or metadevice.
. <size>: Specifies the amount of space to add to the soft partition in K or k for kilobytes, M or m for megabytes, G or g for gigabytes, T or t for terabytes, or B or b for blocks (sectors).

After increasing the size of the volume with metattach, you grow the file system that has been created on the partition using the growfs command. growfs nondestructively expands a mounted or unmounted UNIX file system (UFS) to the size of the file system's slice(s). The syntax for the growfs command is as follows:
/sbin/growfs [-M <mountpoint>] [<newfs-options>] [<raw-device>]

where:
. -M <mountpoint>: Specifies that the file system to be expanded is mounted on <mountpoint>. File system locking (lockfs) is used.

. <newfs-options>: See the newfs man pages.
. <raw-device>: Specifies the name of the raw metadevice residing in /dev/md/rdsk or /dev/rdsk.
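The <size> argument accepted by metattach uses the suffixes listed above. A helper like the following (my own illustration, not an SVM utility) shows how each suffix maps onto 512-byte disk blocks:

```shell
# Convert a metattach/metainit-style size argument (1g, 500m, 8192b)
# to 512-byte blocks. Illustrative helper only, not part of SVM.
size_to_blocks() {
    num=${1%?}                                 # strip the suffix letter
    case $1 in
        *[kK]) echo $(( num * 2 )) ;;          # 1KB  = 2 blocks
        *[mM]) echo $(( num * 2048 )) ;;       # 1MB  = 2048 blocks
        *[gG]) echo $(( num * 2097152 )) ;;    # 1GB  = 2097152 blocks
        *[tT]) echo $(( num * 2147483648 )) ;; # 1TB
        *[bB]) echo "$num" ;;                  # already in blocks
        *)     echo "missing size suffix" >&2; return 1 ;;
    esac
}

size_to_blocks 1g     # matches the 2097152 blocks metastat showed for 1GB
size_to_blocks 500m
```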


In Step By Step 3.1, I’ll use metattach to increase the size of a soft partition, and I’ll use growfs to increase the size of the file system mounted on it.

STEP BY STEP
3.1 Increasing the Size of a Mounted File System
1. Check the current size of the /data file system:

# df -h /data<cr>
Filesystem             size   used  avail capacity  Mounted on
/dev/md/dsk/d10        960M   1.0M   901M     1%    /data

Note that the size of /data is currently 960MB. A metastat -c shows the size as 1.0GB:

# metastat -c d10<cr>
d10  p  1.0GB c2t1d0s1

2. Use the metattach command to increase the SVM volume named d10 from 1GB to 2GB as follows:
# metattach d10 1gb<cr>

Another metastat -c shows that the soft partition is now 2GB, as follows:
# metastat -c d10<cr> d10 p 2.0GB c2t1d0s1

Check the size of /data again, and note that the size did not change:
# df -h /data<cr>
Filesystem             size   used  avail capacity  Mounted on
/dev/md/dsk/d10        960M   1.0M   901M     1%    /data

3. To increase the mounted file system /data, use the growfs command:
# growfs -M /data /dev/md/rdsk/d10<cr>
Warning: 416 sector(s) in last cylinder unallocated
/dev/md/rdsk/d10:       4194304 sectors in 1942 cylinders of 16 tracks, 135 sectors
        2048.0MB in 61 cyl groups (32 c/g, 33.75MB/g, 16768 i/g)
super-block backups (for fsck -F ufs -o b=#) at:
 32, 69296, 138560, 207824, 277088, 346352, 415616, 484880, 554144, 623408,
 3525584, 3594848, 3664112, 3733376, 3802640, 3871904, 3941168, 4010432,
 4079696, 4148960,

Another df -h /data command shows that the /data file system has been increased as follows:
# df -h /data<cr>
Filesystem             size   used  avail capacity  Mounted on
/dev/md/dsk/d10        1.9G   2.0M   1.9G     1%    /data


Soft partitions can be built on top of concatenated devices, and you can increase a soft partition as long as there is room on the underlying metadevice. For example, you can't increase a 1GB soft partition if the metadevice on which it is currently built is only 1GB in size; you would first have to add another slice to the underlying metadevice. In Step By Step 3.2 you will create an SVM device on c2t1d0s1 named d9 that is 4GB in size. You then will create a 3GB soft partition named d10 built on this device. To add more space to d10, you first need to increase the size of d9. The only way to accomplish this is to concatenate more space onto d9, as described in the Step by Step.
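The constraint is simple arithmetic. This sketch uses the Step By Step 3.2 sizes plus a hypothetical 2GB growth request; on a real device the free space is slightly less because of SVM metadata:

```shell
# A soft partition can grow only into free space on its underlying
# metadevice. Illustrative check using the Step By Step 3.2 numbers.
device_mb=$(( 4 * 1024 ))   # d9 is 4GB
used_mb=$(( 3 * 1024 ))     # d10 already occupies 3GB
grow_mb=2048                # hypothetical request to grow d10 by 2GB

if [ $(( used_mb + grow_mb )) -le "$device_mb" ]; then
    msg="growable in place"
else
    msg="extend the underlying device first (metattach d9 <slice>)"
fi
echo "$msg"
```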

STEP BY STEP
3.2 Concatenate a New Slice to an Existing Volume
1. Log in as root and create the state database replicas as described earlier in this chapter. 2. Use the metainit command to create a simple SVM volume on c2t1d0s1:
# metainit d9 1 1 c2t1d0s1<cr> d9: Concat/Stripe is setup

Use the metastat command to view the simple metadevice named d9:

# metastat d9<cr>
d9: Concat/Stripe
    Size: 8311680 blocks (4.0 GB)
    Stripe 0:
        Device     Start Block  Dbase  State  Reloc  Hot Spare
        c2t1d0s1   25920        Yes    Okay   Yes

Device Relocation Information:
Device    Reloc   Device ID
c2t1d0    Yes     id1,sd@SIBM_____DDRS34560SUN4.2G564442__________

3. Create a 3GB soft partition on top of the simple device:
# metainit d10 -p d9 3g<cr> d10: Soft Partition is setup

4. Before we can add more space to d10, we first need to add more space to the simple volume by concatenating another 3.9GB slice (c2t2d0s1) to d9:
# metattach d9 c2t2d0s1<cr> d9: component is attached

The metastat command shows the following information about d9:

# metastat d9<cr>
d9: Concat/Stripe
    Size: 16670880 blocks (7.9 GB)
    Stripe 0:
        Device     Start Block  Dbase  State  Reloc  Hot Spare
        c2t1d0s1   25920        Yes    Okay   Yes
    Stripe 1:
        Device     Start Block  Dbase  State  Reloc  Hot Spare
        c2t2d0s1   0            No     Okay   Yes

Device Relocation Information:
Device    Reloc   Device ID
c2t1d0    Yes     id1,sd@SIBM_____DDRS34560SUN4.2G564442__________
c2t2d0    Yes     id1,sd@SIBM_____DDRS34560SUN4.2G3Z1411__________

Notice that the metadevice d9 is made up of two disk slices (c2t1d0s1 and c2t2d0s1) and that the total size of d9 is now 7.9GB.

5. Now we can increase the size of the metadevice d10 using the metattach command described in Step By Step 3.1.
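The stripe sizes add up, and the arithmetic (mine, derived from the two metastat listings) recovers the size of the slice that was attached:

```shell
# d9 grew from 8311680 blocks (4.0GB) to 16670880 blocks (7.9GB);
# the difference is the c2t2d0s1 slice that metattach concatenated.
before=8311680
after=16670880
added=$(( after - before ))
echo "$added blocks"
echo "$(( added * 512 / 1024 / 1024 )) MB"   # about 4081 MB, the ~3.9GB slice
```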

Creating a Mirror
A mirror is a logical volume that consists of more than one metadevice, also called a submirror. You create a mirrored volume using the metainit command used earlier to create a RAID 0 volume. However, the syntax and options are not the same:
/sbin/metainit [<generic options>] <mirror> -m <submirror> [<read_options>]\ [<write_options>] [<pass_num>]

Table 3.11 describes the generic options for the metainit command and the options specific to creating a mirror.

Table 3.11    metainit Mirror Options
Command Option       Description
<mirror>             <mirror> is the metadevice name of the mirror.
-m <submirror>       The -m option indicates that the configuration being created is a mirror. <submirror> is a metadevice that makes up the initial one-way mirror.
<read_options>       The following read options are available for mirrors:
    -g               Enables the geometric read option, which results in faster performance on sequential reads.
    -r               Directs all reads to the first submirror. This flag cannot be used with the -g option.
<write_options>      The following write options are available for mirrors:
    -S               Performs serial writes to mirrors. The first submirror write completes before the second is started. This may be useful if hardware is susceptible to partial sector failures. If -S is not specified, writes are replicated and dispatched to all mirrors simultaneously.
<pass_num>           A number in the range 0 to 9 at the end of an entry defining a mirror that determines the order in which that mirror is resynced during a reboot. The default is 1. Smaller pass numbers are resynced first. Equal pass numbers are run concurrently. If 0 is used, the resync is skipped. 0 should be used only for mirrors mounted as read-only, or as swap.

This example has two physical disks: c0t0d0 and c0t1d0. Slice 5 is free on both disks, which will comprise the two submirrors, d12 and d22. The logical mirror will be named d2; it is this device that will be used when a file system is created. Step By Step 3.3 details the whole process.

STEP BY STEP
3.3 Creating a Mirror
1. Create the two simple metadevices that will be used as submirrors:

# metainit d12 1 1 c0t0d0s5<cr>
d12: Concat/Stripe is setup
# metainit d22 1 1 c0t1d0s5<cr>
d22: Concat/Stripe is setup

2. Having created the submirrors, now create the actual mirror device, d2, but attach only one of the submirrors. The second submirror will be attached manually.
# metainit d2 -m d12<cr> d2: Mirror is setup

At this point, a one-way mirror has been created. 3. Attach the second submirror to the mirror device, d2:
# metattach d2 d22<cr> d2: Submirror d22 is attached

At this point, a two-way mirror has been created. The second submirror will be synchronized with the first submirror to ensure that they are identical.


CAUTION
It is not recommended that you create a mirror device and specify both submirrors on the command line. Even though this would work, no resynchronization will occur between the two submirrors, which could lead to data corruption.

4. Verify that the mirror has been created successfully and that the two submirrors are being synchronized:
# metastat -q<cr>
d2: Mirror
    Submirror 0: d12
      State: Okay
    Submirror 1: d22
      State: Resyncing
    Resync in progress: 27 % done
    Pass: 1
    Read option: roundrobin (default)
    Write option: parallel (default)
    Size: 4194828 blocks (2.0 GB)

d12: Submirror of d2
    State: Okay
    Size: 4194828 blocks (2.0 GB)
    Stripe 0:
        Device     Start Block  Dbase  State  Reloc  Hot Spare
        c0t0d0s5   0            No     Okay   Yes

d22: Submirror of d2
    State: Resyncing
    Size: 4194828 blocks (2.0 GB)
    Stripe 0:
        Device     Start Block  Dbase  State  Reloc  Hot Spare
        c0t1d0s5   0            No     Okay   Yes

Notice that the status of d12, the first submirror, is Okay, and that the second submirror, d22, is currently Resyncing and is 27% complete. The mirror is now ready for use as a file system. 5. Create a UFS file system on the mirrored device:
# newfs /dev/md/rdsk/d2<cr>
newfs: construct a new file system /dev/md/rdsk/d2: (y/n)? y
Warning: 4016 sector(s) in last cylinder unallocated
/dev/md/rdsk/d2:        4194304 sectors in 1029 cylinders of 16 tracks, 255 sectors
        2048.0MB in 45 cyl groups (23 c/g, 45.82MB/g, 11264 i/g)
super-block backups (for fsck -F ufs -o b=#) at:
 32, 94128, 188224, 282320, 376416, 470512, 564608, 658704, 752800, 846896,
 3285200, 3379296, 3473392, 3567488, 3661584, 3755680, 3849776, 3943872,
 4037968, 4132064,

Note that it is the d2 metadevice that has the file system created on it.

6. Run fsck on the newly created file system before attempting to mount it. This step is not absolutely necessary, but it is good practice because it verifies the state of a file system before it is mounted for the first time:
# fsck /dev/md/rdsk/d2<cr>
** /dev/md/rdsk/d2
** Last Mounted on
** Phase 1 - Check Blocks and Sizes
** Phase 2 - Check Pathnames
** Phase 3 - Check Connectivity
** Phase 4 - Check Reference Counts
** Phase 5 - Check Cyl groups
2 files, 9 used, 2033046 free (14 frags, 254129 blocks, 0.0% fragmentation)

The file system can now be mounted in the normal way. Remember to edit /etc/vfstab to make the mount permanent, and remember to use the md device. For this example, we'll mount the file system on /mnt:
# mount /dev/md/dsk/d2 /mnt<cr> #
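To make the mount permanent, /etc/vfstab needs an entry that uses the md devices. Assuming the /mnt mount point from this example, the line would look like this (a sketch; adjust the mount point and fsck pass for your system):

```
/dev/md/dsk/d2   /dev/md/rdsk/d2   /mnt   ufs   2   yes   -
```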

Unmirroring a Noncritical File System
This section details the procedure for removing a mirror on a file system that can be removed and remounted without having to reboot the system. The metadetach command is used to detach submirrors from a mirror. When the submirror is detached, it is no longer part of the mirrored volume. You cannot detach the only existing submirror from a mirrored volume. The syntax for the metadetach command is as follows:
/sbin/metadetach <mirror> <metadevice>

where:
. <mirror>: Specifies the name of the mirrored volume that the submirror is being detached from.
. <metadevice>: Specifies the name of the submirror that will be detached from the mirrored volume.

Step By Step 3.4 shows how to detach a submirror and then remove the mirrored volume. This example uses a file system, /test, that is currently mirrored using the metadevice d2, a mirror that consists of submirrors d12 and d22. To start, /test will be unmounted. Then I will use metadetach to break the submirror d12 (c0t0d0s5) away from the mirrored volume. I'll use metaclear to remove the mirrored volume and remaining submirror, d22. Finally, I'll mount /dev/dsk/c0t0d0s5 onto the /test mountpoint in a nonmirrored environment.


STEP BY STEP
3.4 Unmirror a Noncritical File System
1. Unmount the /test file system:

# umount /test<cr>

2. Detach the submirror, d12, that will be used as a UFS file system:
# metadetach d2 d12<cr> d2: submirror d12 is detached

3. Delete the mirror (d2) and the remaining submirror (d22):
# metaclear -r d2<cr>
d2: Mirror is cleared
d22: Concat/Stripe is cleared

At this point, the file system is no longer mirrored. It is worth noting that the metadevice d12 still exists and could be used as the device to mount the file system. Alternatively, the full device name, /dev/dsk/c0t0d0s5, can be used if you do not want the disk device to support a volume. For this example, we will mount using the full device name (as you would a normal UFS file system), so we will delete the d12 metadevice first. 4. Delete the d12 metadevice:
# metaclear d12<cr> d12: Concat/Stripe is cleared

5. Edit /etc/vfstab to change this entry:
/dev/md/dsk/d2 /dev/md/rdsk/d2 /test ufs 2 yes -

to this:
/dev/dsk/c0t0d0s5 /dev/rdsk/c0t0d0s5 /test ufs 2 yes -

6. Remount the /test file system:
# mount /test<cr>

Placing a Submirror Offline
Taking a submirror offline is preferred to detaching it when the submirror is to be removed only temporarily, such as to perform a backup. Use the metaoffline command to take a submirror offline. The metaoffline command differs from the metadetach command in that it does not sever the logical association between the submirror and the mirror. A submirror that has been taken offline remains offline until the metaonline command is invoked or the system is rebooted.


In Step By Step 3.5, I have a mirrored volume named d10. I’ll offline the d12 submirror so that I can back up the submirror. This allows me to back up a read-only image of the data on d10 without backing up a live file system. While I run the backup, read/write operations can still take place on d10, but the mirror is inactive. Data will be out of sync on d12 as soon as data is written to d10.

STEP BY STEP
3.5 Offlining a Submirror
1. Use the metastat command to view the current SVM configuration. The system has a file system named /data that has been created on a mirrored volume named d10. The d10 mirror has two submirrors, d11 and d12:
# metastat -c<cr>
d10    m  2.0GB d11 d12
    d11    s  2.0GB c2t0d0s6
    d12    s  2.0GB c2t1d0s6

2. Take the d12 submirror (c2t1d0s6) offline using the metaoffline command:
# metaoffline d10 d12<cr> d10: submirror d12 is offlined

A second metastat shows the status as offline:
# metastat -c<cr>
d10    m  2.0GB d11 d12 (offline)
    d11    s  2.0GB c2t0d0s6
    d12    s  2.0GB c2t1d0s6

The /data file system continues to run, and users can read/write to that file system. However, as soon as a write is made to /data, the mirror is out of sync. The writes to d10 are tracked in a dirty region log so that d12 can be resynchronized when it is brought back online with the metaonline command. 3. Mount the offlined submirror (d12) onto a temporary mount point so that you can back up the data on the submirror:
# mkdir /bkup<cr>
# mount -o ro /dev/md/dsk/d12 /bkup<cr>

You can only mount the d12 submirror as read-only. A read-only image of the data (at the time the submirror was offlined) exists in the /bkup file system. Now you can back up the /bkup file system safely with ufsdump, tar, or cpio. 4. When the backup is complete, umount the submirror, and bring the submirror back online:
# cd /<cr>
# umount /bkup<cr>
# metaonline d10 d12<cr>
d10: submirror d12 is onlined


When the metaonline command is used, read/write operations to the d12 submirror resume. A resync is automatically invoked to resync the regions written while the submirror was offline. Writes are directed to the d12 submirror during resync. Reads, however, come from submirror d11 until d12 is back in sync. When the resync operation completes, reads and writes are performed on submirror d12. The metaonline command is effective only on a submirror of a mirror that has been taken offline.
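Backup scripts often wait for a resync to finish by polling metastat. The parsing below runs on a hardcoded sample of metastat output, since the command itself needs a live SVM configuration; the wait loop is shown only as a comment:

```shell
# Extract the resync percentage from metastat-style output. The sample
# is hardcoded; on a live system you would pipe 'metastat d10' instead.
sample='d10: Mirror
    Submirror 0: d11
      State: Okay
    Submirror 1: d12
      State: Resyncing
    Resync in progress: 27 % done'

pct=$(printf '%s\n' "$sample" | awk '/Resync in progress/ {print $4}')
echo "resync ${pct}% complete"

# A live wait loop might look like (not executed here):
#   while metastat d10 | grep -q "Resync in progress"; do sleep 60; done
```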

Mirroring the Root File System on a SPARC-Based System
In this section we will create another mirror, but this time it will be the root file system on a SPARC-based system. This is different from Step By Step 3.3 because we are mirroring an existing file system that cannot be unmounted. We can’t do this while the file system is mounted, so we’ll configure the metadevice and a reboot will be necessary to implement the logical volume and to update the system configuration file. The objective is to create a two-way mirror of the root file system, currently residing on /dev/dsk/c0t0d0s0. We will use a spare disk slice of the same size, /dev/dsk/c0t1d0s0, for the second submirror. The mirror will be named d10, and the submirrors will be d11 and d12. Additionally, because this is the root (/) file system, we’ll also configure the second submirror as an alternate boot device, so that this second slice can be used to boot the system if the primary slice becomes unavailable. Step By Step 3.6 shows the procedure to follow for mirroring the boot disk on a SPARC-based system:

STEP BY STEP
3.6 Mirror the Boot Disk on a SPARC-Based System

NOTE
The system that we are mirroring in this Step By Step has a single hard partition for / (root) and a second hard partition for swap. Everything else (/var, /opt, /usr, and /export/home) is in the / (root) file system on a single slice. This is the scenario that you will most likely see on the certification exam. However, if you have a separate partition for /var and/or /export/home, this procedure must be modified accordingly. If your system has separate disk partitions for /var and/or /export/home, you may want to review Step By Step 3.7, which describes how to mirror a boot disk on an x86-based system that has separate / (root), /var, and /export/home file systems.

1. Verify that the current root file system is mounted from /dev/dsk/c0t0d0s0:
# df -h /<cr>
Filesystem            size  used  avail capacity  Mounted on
/dev/dsk/c0t0d0s0     4.9G  3.7G   1.2G    77%    /


2. Create the state database replicas, specifying the disk slices c0t0d0s4 and c0t1d0s4. We will create two replicas on each slice.
# metadb -a -f -c2 c0t0d0s4 c0t1d0s4<cr>

3. Create the two submirrors for the / (root) file system, d11 and d12:
# metainit -f d11 1 1 c0t0d0s0<cr>
d11: Concat/Stripe is setup
# metainit d12 1 1 c0t1d0s0<cr>
d12: Concat/Stripe is setup

Note that the -f option was used in the first metainit command. This option forces execution of the command, which is required because we are creating a metadevice on an existing, mounted file system. The -f option was not necessary in the second metainit command because that slice is currently unused.

4. Create the two submirrors for swap, d21 and d22:
# metainit d21 1 1 c0t0d0s3<cr>
d21: Concat/Stripe is setup
# metainit d22 1 1 c0t1d0s3<cr>
d22: Concat/Stripe is setup

5. Create a one-way mirror for / (root), d10, specifying d11 as the submirror to attach:
# metainit d10 -m d11<cr>
d10: Mirror is setup

6. Create a one-way mirror for swap, d20, specifying d21 as the submirror to attach:
# metainit d20 -m d21<cr>
d20: Mirror is setup

7. Set up the system files to support the new metadevice, after taking a backup copy of the files that will be affected. It is a good idea to name the copies with a relevant extension so that they can be easily identified if problems are encountered and you later have to revert to the original files. We will use the .nosvm extension in this Step By Step.
# cp /etc/system /etc/system.nosvm<cr>
# cp /etc/vfstab /etc/vfstab.nosvm<cr>
# metaroot d10<cr>

The metaroot command has added the following lines to the system configuration file, /etc/system, to allow the system to boot with the / file system residing on a logical volume. This command is only necessary for the root device.
* Begin MDD root info (do not edit)
rootdev:/pseudo/md@0:0,0,blk
* End MDD root info (do not edit)
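You can confirm the effect of metaroot by checking for the rootdev line it adds. The check below is illustrative only; it runs against a copied sample in /tmp rather than the live /etc/system.

```shell
# Illustrative sanity check (not a Solaris utility): does a copy of
# /etc/system carry the rootdev entry that metaroot adds?
cat > /tmp/system.sample <<'EOF'
* Begin MDD root info (do not edit)
rootdev:/pseudo/md@0:0,0,blk
* End MDD root info (do not edit)
EOF

if grep -q '^rootdev:/pseudo/md@' /tmp/system.sample; then
  echo "root device is a metadevice"
else
  echo "root device is a physical slice"
fi
```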


It has also modified the /etc/vfstab entry for the / file system. It now reflects the metadevice to use to mount the file system at boot time:
/dev/md/dsk/d10 /dev/md/rdsk/d10 /      ufs     1       no      -

You also need to modify the swap entry in the /etc/vfstab file so that it references the metadevice:
/dev/md/dsk/d20 -       -       swap    -       no      -
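A complete /etc/vfstab entry always has seven fields (device to mount, device to fsck, mount point, FS type, fsck pass, mount at boot, and mount options). The illustrative awk check below, run against sample entries modeled on this procedure rather than the live file, flags any line that does not:

```shell
# Illustrative field-count check for vfstab-style entries (our own
# script, not part of Solaris). Sample lines follow the mirrored setup.
cat > /tmp/vfstab.sample <<'EOF'
/dev/md/dsk/d10 /dev/md/rdsk/d10 / ufs 1 no -
/dev/md/dsk/d20 - - swap - no -
EOF

awk 'NF != 7 { print "bad entry on line " NR; bad = 1 }
     END { if (!bad) print "all entries have 7 fields" }' /tmp/vfstab.sample
```

A malformed entry here is a common cause of a system that will not come up multiuser after the reboot, so a quick check like this is cheap insurance.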

8. Synchronize file systems prior to rebooting the system:
# lockfs -fa<cr>

The lockfs command is used to flush all buffers so that when the system is rebooted, the file systems are all up to date. This step is not compulsory, but is good practice. 9. Reboot the system:
# init 6<cr>

10. Verify that the root file system is now being mounted from the metadevice /dev/md/dsk/d10:
# df -h /<cr>
Filesystem            size  used  avail capacity  Mounted on
/dev/md/dsk/d10       4.9G  3.7G   1.2G    77%    /

11. Attach the second submirror for / (root) and verify that a resynchronization operation is carried out:
# metattach d10 d12<cr> d10: Submirror d12 is attached

Verify the new metadevice:
# metastat -q d10<cr>
d10: Mirror
    Submirror 0: d11
      State: Okay
    Submirror 1: d12
      State: Resyncing
    Resync in progress: 62 % done
    Pass: 1
    Read option: roundrobin (default)
    Write option: parallel (default)
    Size: 10462032 blocks (5.0 GB)

d11: Submirror of d10
    State: Okay
    Size: 10462032 blocks (5.0 GB)
    Stripe 0:
        Device      Start Block  Dbase        State  Reloc  Hot Spare
        c0t0d0s0              0  No           Okay   Yes

d12: Submirror of d10
    State: Resyncing
    Size: 10462032 blocks (5.0 GB)
    Stripe 0:
        Device      Start Block  Dbase        State  Reloc  Hot Spare
        c0t1d0s0              0  No           Okay   Yes

12. Attach the second submirror for swap:
# metattach d20 d22<cr>
d20: Submirror d22 is attached

13. Identify the physical device name of the secondary submirror. This step is necessary because it is the root (/) file system that is being mirrored; the address is required to assign an OpenBoot alias for a backup boot device.
# ls -l /dev/dsk/c0t1d0s0<cr>
lrwxrwxrwx 1 root root 46 Mar 12 2008 /dev/dsk/c0t1d0s0 -> \
../../devices/pci@1f,0/pci@1,1/ide@3/dad@1,0:a

Record the address starting with /pci, and change the dad string to disk. In this case, this leaves you with /pci@1f,0/pci@1,1/ide@3/disk@1,0:a.

14. Install a boot block on the second submirror to make this slice bootable:
# installboot /usr/platform/`uname -i`/lib/fs/ufs/bootblk /dev/rdsk/c0t1d0s0<cr>
#

The uname -i command substitutes the system's platform name.

15. The dump device currently points to the physical device, so you need to change the dump device to reflect the metadevice:
# dumpadm -s /var/crash/`hostname` -d /dev/md/dsk/d20<cr>

The system responds with the following:
Dump content:       kernel pages
Dump device:        /dev/md/dsk/d20 (swap)
Savecore directory: /var/crash/train10
Savecore enabled:   yes

16. For the next step you need to be at the ok prompt, so enter init 0 to shut down the system:
# init 0<cr>
svc.startd: The system is coming down. Please wait.
svc.startd: 74 system services are now being stopped.
[ output truncated ]
ok

Enter the nvalias command to create an alias named backup-root, which points to the address recorded in step 13:
ok nvalias backup-root /pci@1f,0/pci@1,1/ide@3/disk@1,0:a<cr>

Inspect the current setting of the boot-device variable and add the name backup-root as the secondary boot path, so that this device is used before going to the network. When this has been done, enter the nvstore command to save the alias created:
ok printenv boot-device<cr>
boot-device = disk net
ok setenv boot-device disk backup-root net<cr>
boot-device = disk backup-root net
ok nvstore<cr>

17. Boot the system from the secondary submirror to prove that it works. This can be done manually from the ok prompt:
ok boot backup-root<cr>
Resetting ...
Rebooting with command: boot backup-root
Boot device: /pci@1f,0/pci@1,1/ide@3/disk@1,0 File and args:
SunOS Release 5.10 Version Generic 64-bit
Copyright 1983-2008 Sun Microsystems, Inc. All rights reserved.
Use is subject to license terms.
[... output truncated]
<hostname> console login:

Mirroring the Root File System on an x86-Based System
In this section I will describe how to create a mirror of the boot disk on an x86/x64-based system. The process is similar to mirroring the boot disk on a SPARC-based system, as described earlier, with the following exceptions:
. Disk device names are different on the x86/x64 platform.
. Run fdisk on a new disk before it is partitioned.
. Execute the installgrub command to install the stage1 and stage2 programs.
. Modify the menu.lst file to set up the alternate boot device.

The x86 system that will be used in this Step By Step is configured as follows:
. Solaris 10 is currently installed on c0d0.
. / (root) is on slice 0.
. /var is on slice 1.
. swap is on slice 3.
. /export/home is on slice 7.
. Slice 5 is available and will be used to store the state database replicas.
. The alternate boot disk that will be used for the secondary submirror is c1d0.
. The boot disk does not have an empty partition, so I'll create one using the format command. I'll define slice 5 to start at cylinder 3531, and I'll make it 20MB.

Step By Step 3.7 shows the procedure to follow for mirroring the boot disk on an x86/x64-based system.

STEP BY STEP
3.7 Mirror the root File System on an x86/x64-Based System

1. Verify that the current root file system is mounted from /dev/dsk/c0d0s0:
# df -h /<cr>
Filesystem            size  used  avail capacity  Mounted on
/dev/dsk/c0d0s0       4.3G  3.1G   1.1G    74%    /

2. Create the state database replicas on slice 5. The format command was used to partition c1d0 exactly like c0d0. Another option is to use the fmthard command to copy the label from c0d0 to c1d0:
# prtvtoc /dev/rdsk/c0d0s2 | fmthard -s - /dev/rdsk/c1d0s2<cr>

Now, I'll create two state databases on c0d0s5 and c1d0s5:
# metadb -a -f -c2 c0d0s5 c1d0s5<cr>

Verify the database replicas as follows:
# metadb -i<cr>
        flags           first blk       block count
     a        u         16              8192            /dev/dsk/c0d0s5
     a        u         8208            8192            /dev/dsk/c0d0s5
     a        u         16              8192            /dev/dsk/c1d0s5
     a        u         8208            8192            /dev/dsk/c1d0s5
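As an illustrative aside (the sample file and awk script here are ours, not part of SVM), you can count replicas per slice from metadb -i style output to confirm that the -c2 request really produced two per slice:

```shell
# Illustrative: tally state database replicas per slice from a canned
# metadb -i transcript (modeled on the output shown above).
cat > /tmp/metadb.sample <<'EOF'
        flags           first blk       block count
     a        u         16              8192            /dev/dsk/c0d0s5
     a        u         8208            8192            /dev/dsk/c0d0s5
     a        u         16              8192            /dev/dsk/c1d0s5
     a        u         8208            8192            /dev/dsk/c1d0s5
EOF

awk 'NR > 1 { count[$NF]++ }
     END { for (d in count) print d, count[d] }' /tmp/metadb.sample | sort
```

With only two disks, losing one leaves exactly half the replicas, which is why the replica-quorum questions later in this chapter matter.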

3. Create the primary submirrors on c0d0 for /, /var, swap, and /export/home. These are RAID 0 simple volumes.
a. Create the primary submirror for / (root):
# metainit -f d11 1 1 c0d0s0<cr>
d11: Concat/Stripe is setup
b. Create the primary submirror for /var:
# metainit -f d21 1 1 c0d0s1<cr>
d21: Concat/Stripe is setup
c. Create the primary submirror for swap:
# metainit -f d31 1 1 c0d0s3<cr>
d31: Concat/Stripe is setup
d. Create the primary submirror for /export/home:
# metainit -f d41 1 1 c0d0s7<cr>
d41: Concat/Stripe is setup

4. Create the secondary submirrors on c1d0 for /, /var, swap, and /export/home. These are also RAID 0 simple volumes.
a. Create the secondary submirror for / (root):
# metainit d12 1 1 c1d0s0<cr>
d12: Concat/Stripe is setup
b. Create the secondary submirror for /var:
# metainit d22 1 1 c1d0s1<cr>
d22: Concat/Stripe is setup
c. Create the secondary submirror for swap:
# metainit d32 1 1 c1d0s3<cr>
d32: Concat/Stripe is setup
d. Create the secondary submirror for /export/home:
# metainit d42 1 1 c1d0s7<cr>
d42: Concat/Stripe is setup

5. Create a RAID 1 volume (a one-way mirror) for each file system on c0d0, specifying the primary submirror as the source. The volume names for the RAID 1 mirrors will be as follows:
d10: / (root)
d20: /var
d30: swap

d40: /export/home
a. Create the RAID 1 volume for /:
# metainit d10 -m d11<cr>
d10: Mirror is setup
b. Create the RAID 1 volume for /var:
# metainit d20 -m d21<cr>
d20: Mirror is setup
c. Create the RAID 1 volume for swap:
# metainit d30 -m d31<cr>
d30: Mirror is setup
d. Create the RAID 1 volume for /export/home:
# metainit d40 -m d41<cr>
d40: Mirror is setup

NOTE
Why mirror swap? If we want to survive the loss of a submirror, we need to have swap mirrored just like any other file system. That way, if a drive fails, we can still boot from the alternate disk, and swap will point to the available submirror, whichever good submirror that may be at the time. Also, swap will point to d30 and not to a physical device.

6. Set up the system files to support the new metadevice, after taking a backup copy of the files that will be affected. It is a good idea to name the copies with a relevant extension so that they can be easily identified if problems are encountered and you later have to revert to the original files. We will use the .nosvm extension in this Step By Step.
# cp /etc/system /etc/system.nosvm<cr>
# cp /etc/vfstab /etc/vfstab.nosvm<cr>
# metaroot d10<cr>

The metaroot command has added the following lines to the system configuration file, /etc/system, to allow the system to boot with the / file system residing on a logical volume. This command is only necessary for the root device.
* Begin MDD root info (do not edit)
rootdev:/pseudo/md@0:0,0,blk
* End MDD root info (do not edit)

It has also modified the /etc/vfstab entry for the / file system. The /etc/vfstab file now reflects the metadevice to use to mount the file system at boot time:
/dev/md/dsk/d10 /dev/md/rdsk/d10 /      ufs     1       no      -

If you look at the /etc/vfstab file, it now looks like this:
# more /etc/vfstab<cr>
#device           device             mount            FS     fsck  mount    mount
#to mount         to fsck            point            type   pass  at boot  options
#
fd                -                  /dev/fd          fd     -     no       -
/proc             -                  /proc            proc   -     no       -
/dev/dsk/c0d0s3   -                  -                swap   -     no       -
/dev/md/dsk/d10   /dev/md/rdsk/d10   /                ufs    1     no       -
/dev/dsk/c0d0s1   /dev/rdsk/c0d0s1   /var             ufs    1     no       -
/dev/dsk/c0d0s7   /dev/rdsk/c0d0s7   /export/home     ufs    2     yes      -
/devices          -                  /devices         devfs  -     no       -
ctfs              -                  /system/contract ctfs   -     no       -
objfs             -                  /system/object   objfs  -     no       -
swap              -                  /tmp             tmpfs  -     yes      -

You still need to make additional modifications to the /etc/vfstab file for the /var, swap, and /export/home file systems:
#device           device             mount            FS     fsck  mount    mount
#to mount         to fsck            point            type   pass  at boot  options
#
fd                -                  /dev/fd          fd     -     no       -
/proc             -                  /proc            proc   -     no       -
/dev/md/dsk/d30   -                  -                swap   -     no       -
/dev/md/dsk/d10   /dev/md/rdsk/d10   /                ufs    1     no       -
/dev/md/dsk/d20   /dev/md/rdsk/d20   /var             ufs    1     no       -
/dev/md/dsk/d40   /dev/md/rdsk/d40   /export/home     ufs    2     yes      -
/devices          -                  /devices         devfs  -     no       -
ctfs              -                  /system/contract ctfs   -     no       -
objfs             -                  /system/object   objfs  -     no       -
swap              -                  /tmp             tmpfs  -     yes      -

7. The dump device currently points to the physical device, so you need to change the dump device to reflect the metadevice:
# dumpadm -s /var/crash/`hostname` -d /dev/md/dsk/d30<cr>

The system responds with this:
Dump content:       kernel pages
Dump device:        /dev/md/dsk/d30 (swap)
Savecore directory: /var/crash/train10
Savecore enabled:   yes

8. Synchronize file systems before rebooting the system:
# lockfs -fa<cr>

The lockfs command is used to flush all buffers so that when the system is rebooted, the file systems are all up to date. This step is not compulsory, but it is good practice.

9. Reboot the system:
# init 6<cr>

10. Verify that the file systems are now being mounted from the metadevices:
# df -h<cr>
Filesystem            size  used  avail capacity  Mounted on
/dev/md/dsk/d10       4.3G  3.1G   1.1G    74%    /
/devices                0K    0K     0K     0%    /devices
ctfs                    0K    0K     0K     0%    /system/contract
proc                    0K    0K     0K     0%    /proc
mnttab                  0K    0K     0K     0%    /etc/mnttab
swap                  702M  908K   701M     1%    /etc/svc/volatile
objfs                   0K    0K     0K     0%    /system/object
/usr/lib/libc/libc_hwcap1.so.1
                      4.3G  3.1G   1.1G    74%    /lib/libc.so.1
fd                      0K    0K     0K     0%    /dev/fd
/dev/md/dsk/d20       940M   72M   812M     9%    /var
swap                  701M   80K   701M     1%    /tmp
swap                  701M   28K   701M     1%    /var/run
/dev/md/dsk/d40       940M  111M   773M    13%    /export/home

Notice that / is mounted on metadevice d10, /var is mounted on d20, and /export/home is mounted on d40. Now check swap:
# swap -l<cr>
swapfile           dev    swaplo  blocks    free
/dev/md/dsk/d30    85,30       8  1208312  1208312

11. Attach the secondary submirrors on c1d0 using metattach:
# metattach d10 d12<cr>
d10: submirror d12 is attached
# metattach d20 d22<cr>
d20: submirror d22 is attached
# metattach d30 d32<cr>
d30: submirror d32 is attached
# metattach d40 d42<cr>
d40: submirror d42 is attached

Verify that a resynchronization operation is carried out:
# metastat -c<cr>
d40    m  2.0GB  d41 d42 (resync-0%)
    d41    s  2.0GB  c0d0s7
    d42    s  2.0GB  c1d0s7

d30    m  590MB  d31 d32 (resync-85%)
    d31    s  590MB  c0d0s3
    d32    s  590MB  c1d0s3
d20    m  1.0GB  d21 d22 (resync-65%)
    d21    s  1.0GB  c0d0s1
    d22    s  1.0GB  c1d0s1
d10    m  4.4GB  d11 d12 (resync-15%)
    d11    s  4.4GB  c0d0s0
    d12    s  4.4GB  c1d0s0

Notice that the secondary submirrors are being synchronized.

12. Use the installgrub command to install the stage1 and stage2 programs onto the Solaris fdisk partition of the secondary disk drive:
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1d0s0<cr>

The system responds with this:
stage1 written to partition 0 sector 0 (abs 4096)
stage2 written to partition 0, 233 sectors starting at 50 (abs 4146)

13. You need to configure your system to boot from the secondary submirror if the primary submirror fails. The secondary submirror will be the alternate boot device, so you need to define the alternate boot path in the /boot/grub/menu.lst GRUB configuration file. Currently, the menu.lst file is configured to boot from the master IDE drive connected to the primary IDE controller (hd0,0,a):
#————— ADDED BY BOOTADM - DO NOT EDIT —————
title Solaris 10 5/08 s10x_u5wos_10 X86
kernel /platform/i86pc/multiboot
module /platform/i86pc/boot_archive
#——————————END BOOTADM——————————

I'll add a new entry to the end of the menu.lst file to allow booting from the alternate boot device, c1d0. This is the master IDE drive that is connected to the secondary IDE controller, so it is referred to as hd1,0,a. The entry to boot from the alternate disk is as follows:
title Solaris 10 5/08 s10x_u5wos_10 X86 (Alternate Boot Path)
root (hd1,0,a)
kernel /platform/i86pc/multiboot
module /platform/i86pc/boot_archive

14. After the submirrors have finished synchronizing, restart the system. At the GRUB menu, select the entry titled “Solaris 10 5/08 s10x_u5wos_10 X86 (Alternate Boot Path),” and make sure that the system boots from the alternate boot device for verification.
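Before rebooting, it is worth confirming that the alternate entry really made it into the file. The check below is illustrative only and runs against a sample copy of menu.lst; the entry title and root line are the ones used in this Step By Step.

```shell
# Illustrative check (our own script): verify that the alternate boot
# entry exists and points at the second disk (hd1) in a menu.lst copy.
cat > /tmp/menu.lst.sample <<'EOF'
title Solaris 10 5/08 s10x_u5wos_10 X86
kernel /platform/i86pc/multiboot
module /platform/i86pc/boot_archive
title Solaris 10 5/08 s10x_u5wos_10 X86 (Alternate Boot Path)
root (hd1,0,a)
kernel /platform/i86pc/multiboot
module /platform/i86pc/boot_archive
EOF

awk '/Alternate Boot Path/ { found = 1 }
     found && /root \(hd1,0,a\)/ { print "alternate boot entry present"; exit }' \
    /tmp/menu.lst.sample
```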

Unmirroring the Root File System
Unlike Step By Step 3.4, where a file system was unmirrored and remounted without affecting the operation of the system, unmirroring a root file system is different because it cannot be unmounted while the system is running. In this case, it is necessary to perform a reboot to implement the change. Step By Step 3.8 shows how to unmirror the root file system that was successfully mirrored in Step By Step 3.6. If your boot disk has separate file systems for /var and/or /export/home, you need to modify this procedure to also unmirror those additional file systems.

In this next scenario, the boot disk has two partitions: slice 0 is used for / (root), and slice 3 is used for swap. This example comprises a mirror of / (root), d10, consisting of two submirrors, d11 and d12. There is also a mirror of swap, d20, consisting of submirrors d21 and d22. The primary submirror is on c0t0d0, and the secondary submirror is on c1t0d0. The objective is to remount the / file system using its full disk device name, /dev/dsk/c0t0d0s0, instead of using /dev/md/dsk/d10, and to remount swap on /dev/dsk/c0t0d0s3. When the file systems have been unmirrored, c1t0d0 will be unused.

STEP BY STEP
3.8 Unmirror the Boot Disk

1. Verify that the current root file system is mounted from the metadevice /dev/md/dsk/d10:
# df -h /<cr>
Filesystem            size  used  avail capacity  Mounted on
/dev/md/dsk/d10       4.9G  3.7G   1.2G    77%    /

2. Detach the submirror that is to be used as the / file system:
# metadetach d10 d11<cr>
d10: Submirror d11 is detached

3. Set up the /etc/system file and /etc/vfstab to revert to the full disk device name, /dev/dsk/c0t0d0s0. If you created backup copies of the /etc/vfstab and /etc/system files before setting up the mirror, you could simply move those backup files back into place. If you don't have backup copies, issue the following command:
# metaroot /dev/dsk/c0t0d0s0<cr>

Notice that the entry that was added to /etc/system when the file system was mirrored has been removed, and that the /etc/vfstab entry for / has reverted to /dev/dsk/c0t0d0s0. You still need to manually edit the /etc/vfstab file to revert swap to /dev/dsk/c0t0d0s3.

a. Detach the submirror that is being used as swap:
# metadetach d20 d21<cr>
d20: Submirror d21 is detached

b. Change the dump device to the physical slice c0t0d0s3:
# dumpadm -s /var/crash/`hostname` -d /dev/dsk/c0t0d0s3<cr>

4. Reboot the system to make the change take effect:
# init 6<cr>

5. Verify that the root file system is now being mounted from the full disk device, /dev/dsk/c0t0d0s0:
# df -h /<cr>
Filesystem            size  used  avail capacity  Mounted on
/dev/dsk/c0t0d0s0     4.9G  3.7G   1.2G    77%    /

6. Remove the remaining mirrors and submirrors.
a. Remove the mirror d10, and its remaining submirrors, d11 and d12:
# metaclear -r d10<cr>
d10: Mirror is cleared
d11: Concat/Stripe is cleared
d12: Concat/Stripe is cleared
b. Remove the submirror named d21 as follows:
# metaclear d21<cr>
c. Remove the mirror d20, and its remaining submirrors:
# metaclear -r d20<cr>
d20: Mirror is cleared
d22: Concat/Stripe is cleared

Troubleshooting Root File System Mirrors
Occasionally, a root mirror fails and recovery action has to be taken. Often, only one side of the mirror fails, in which case it can be detached using the metadetach command. You then replace the faulty disk and reattach it. Sometimes, though, a more serious problem occurs, prohibiting you from booting the system with SVM present. In this case, you have two options. First, you can boot from a CD-ROM and recover the root file system manually by carrying out an fsck. Second, you can reinstate pre-SVM copies of the files /etc/system and /etc/vfstab. In Step By Step 3.6 we took a copy of these files (step 7). This is good practice and should always be done when editing important system files.

To disable SVM, temporarily remove the SVM configuration so that you boot from the original c0t0d0s0 device. Copy the current files again, to take a backup, and then copy the originals back to make them operational, as shown here:
# cp /etc/system /etc/system.svm<cr>
# cp /etc/vfstab /etc/vfstab.svm<cr>
# cp /etc/system.nosvm /etc/system<cr>
# cp /etc/vfstab.nosvm /etc/vfstab<cr>
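The swap-the-copies pattern above can be sketched as a few shell commands. This illustration uses scratch files in /tmp with placeholder contents rather than the real /etc files, so it is safe to run anywhere:

```shell
# Illustrative rotation of config-file copies, using scratch files with
# placeholder text instead of the real /etc/system. The .svm copy keeps
# the current (SVM) version; the .nosvm copy is the pre-SVM original.
mkdir -p /tmp/svmdemo && cd /tmp/svmdemo
echo "with SVM entries"  > system         # stands in for the live file
echo "original pre-SVM"  > system.nosvm   # backup taken before mirroring

cp system system.nosvm.bak 2>/dev/null || true  # optional extra safety net
cp system system.svm       # preserve the current version
cp system.nosvm system     # reinstate the pre-SVM version

cat system
```

After recovery, reversing the copies (system.svm back to system) restores the SVM configuration.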

You should now be able to reboot the system to single-user mode without SVM and recover any failed file systems.

If the preceding does not work, you might need to repair the root file system manually, requiring you to boot from a DVD or CD-ROM. First, insert the Solaris 10 DVD (or the Solaris 10 CD 1) and shut down the system if it is not already shut down. On a SPARC-based system, boot to single-user mode from the CD-ROM as follows:
ok boot cdrom -s<cr>

(On an x86/x64-based system, boot to Failsafe mode from the GRUB menu.) When the system prompt is displayed, you can manually run fsck on the root file system. In this example, I assume that a root file system exists on /dev/rdsk/c0t0d0s0:
# fsck /dev/rdsk/c0t0d0s0<cr>
** /dev/rdsk/c0t0d0s0
** Last Mounted on /
** Phase 1 - Check Blocks and Sizes
** Phase 2 - Check Pathnames
** Phase 3 - Check Connectivity
** Phase 4 - Check Reference Counts
** Phase 5 - Check Cyl groups
FREE BLK COUNT(S) WRONG IN SUPERBLK
SALVAGE? y
136955 files, 3732764 used, 1404922 free (201802 frags, 150390 blocks, 3.9% fragmentation)
***** FILE SYSTEM WAS MODIFIED *****

You should now be able to reboot the system using SVM, and you should resynchronize the root mirror as soon as the system is available. This can be achieved easily by detaching the second submirror and then reattaching it. The following example shows a mirror d10 consisting of d11 and d12:
# metadetach d10 d11<cr>
d10: submirror d11 is detached
# metattach d10 d11<cr>
d10: submirror d11 is attached

To demonstrate that the mirror is performing a resynchronization operation, you can issue the metastat command as follows, which shows the progress as a percentage:
# metastat d10<cr>
d10: Mirror
    Submirror 0: d11
      State: Okay
    Submirror 1: d12
      State: Resyncing
    Resync in progress: 37 % done

    Pass: 1
    Read option: roundrobin (default)
    Write option: parallel (default)
    Size: 10462032 blocks (5.0 GB)

d11: Submirror of d10
    State: Okay
    Size: 10462032 blocks (5.0 GB)
    Stripe 0:
        Device      Start Block  Dbase        State  Reloc  Hot Spare
        c0t0d0s0              0  No           Okay   Yes

d12: Submirror of d10
    State: Resyncing
    Size: 10462032 blocks (5.0 GB)
    Stripe 0:
        Device      Start Block  Dbase        State  Reloc  Hot Spare
        c0t1d0s0              0  No           Okay   Yes

Device Relocation Information:
Device   Reloc  Device ID
c0t0d0   Yes    id1,dad@AWDC_AC310200R=WD-WT6750311269
c0t1d0   Yes    id1,dad@ASAMSUNG_SP0411N=S01JJ60X901935

Veritas Volume Manager

EXAM ALERT
Veritas Volume Manager The exam has no questions on the Veritas Volume Manager. This section has been included solely to provide some additional information for system administrators and to allow comparison between this product and the Solaris Volume Manager.

Veritas Volume Manager is an unbundled software package that does not come as part of the standard Solaris 10 release; it can be purchased separately via Sun or directly from Symantec. It is widely used for performing virtual volume management functions on large-scale systems such as Sun, Sequent, and HP, and has traditionally been used for managing SAN-connected storage. A course is run by Sun Microsystems for administrators using Veritas Volume Manager.

Veritas Volume Manager used to be much more robust than the older Solstice DiskSuite product (the predecessor to the Solaris Volume Manager), providing tools that identify and analyze storage access patterns so that I/O loads can be balanced across complex disk configurations. SVM is now a much more robust product, and the difference is negligible. Although Veritas Volume Manager also provides the capability to mirror the OS drive, in actual industry practice you'll still see SVM used to mirror the OS drive, even on large Sun servers that use Veritas Volume Manager to manage the remaining data.

Veritas Volume Manager is a complex product that would take much more than this chapter to describe in detail. This chapter introduces you to the Veritas Volume Manager and some of the terms you will find useful.

The Volume Manager builds virtual devices called volumes on top of physical disks. Volumes are accessed by the Solaris file system, a database, or other applications in the same way physical disk partitions would be accessed. Volumes and their virtual components are referred to as Volume Manager objects. The Volume Manager uses several Volume Manager objects to perform disk management tasks, as shown in Table 3.12.

Table 3.12 Volume Manager Objects
Object Name   Description
VM disk       A contiguous area of disk space from which the Volume Manager allocates storage. A VM disk usually refers to a physical disk in the array. Each VM disk corresponds to at least one partition. A VM disk can be divided into one or more subdisks.
Disk group    A collection of VM disks that share a common configuration. Disk groups allow the administrator to group disks into logical collections for administrative convenience. Volumes are created within a disk group;
a given volume must be configured from disks belonging to the same disk group. The default disk group used to be rootdg (the root disk group) in versions prior to version 4, but now no default disk group is assigned. Additional disk groups can be created as necessary.
Subdisk       A set of contiguous disk blocks; subdisks are the basic units in which the Volume Manager allocates disk space.
Plex          Often referred to as a mirror, a plex consists of one or more subdisks located on one or more disks, forming one side of a mirror configuration. The use of two or more plexes forms a functional mirror.
Volume        A virtual disk device that appears to be a physical disk partition to applications, databases, and file systems, but does not have the physical limitations of a physical disk partition. Volumes are created within a disk group; a given volume must be configured from disks belonging to the same disk group.

NOTE
Plex configuration A number of plexes (usually two) are associated with a volume to form a working mirror. Stripes and concatenations are normally achieved during the creation of the plex.

A physical disk is the underlying storage device (media), which may or may not be under Volume Manager control. A physical disk can be accessed using a device name such as /dev/rdsk/c#t#d#. The physical disk can be divided into one or more slices.

Volume Manager objects can be manipulated in a variety of ways to optimize performance, provide redundancy of data, and perform backups or other administrative tasks on one or more physical disks without interrupting applications. As a result, data availability and disk subsystem throughput are improved.

A standard Solaris disk partitioning environment has an eight-partition limit per disk. Veritas Volume Manager, by contrast, manages disk space by using contiguous sectors. The application formats each disk into only two slices: Slice 3 and Slice 4. Slice 3 is called the private area; it maintains information about the virtual-to-physical device mappings. Slice 4 is the public area; it provides the space used to build the virtual devices. The advantage of this approach is that there is almost no limit to the number of subdisks you can create on a single drive.

The names of the block devices for virtual volumes created using Veritas Volume Manager are found in the /dev/vx/dsk/<disk_group>/<volume_name> directory, and the names of the raw devices are found in the /dev/vx/rdsk/<disk_group>/<volume_name> directory. The following is an example of a block and a raw logical device name:
/dev/vx/dsk/apps/vol01 - block device
/dev/vx/rdsk/apps/vol01 - raw device
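To illustrate the naming convention (the disk group and volume names here are just the example values from the text), the block and raw paths can be assembled like this:

```shell
# Illustrative: build Veritas block and raw device paths from a disk
# group and volume name, following /dev/vx/{dsk,rdsk}/<group>/<volume>.
dg=apps
vol=vol01
blk="/dev/vx/dsk/$dg/$vol"
raw="/dev/vx/rdsk/$dg/$vol"
echo "$blk"
echo "$raw"
```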

Summary
This chapter described the basic concepts behind RAID and the Solaris Volume Manager (SVM). It described the various levels of RAID and the differences between them, as well as the elements of SVM and how they can be used to provide a reliable data storage solution. We also covered the creation and monitoring of the state database replicas and how to mirror and unmirror file systems. Finally, you learned about Veritas Volume Manager, a third-party product used predominantly in larger systems with disk arrays.

Key Terms
. RAID (0, 1, 5, 0+1, 1+0)
. Virtual volume
. Volume
. Metadevice
. Metadisk
. Meta state database
. Mirror
. Submirror
. Concatenation
. Stripe
. Interlace
. Soft partition
. Hot spare pool
. Hot-swappable
. Hot-pluggable
. Veritas Volume Manager objects

Apply Your Knowledge

Exercise
Along with the exercise in this section, make sure that you can perform all the Step By Steps in this chapter from memory.

3.1 Monitoring Disk Usage
In this exercise, you'll see how to use the iostat utility to monitor disk usage. You need a Solaris 10 workstation with local disk storage and a file system with at least 50 Megabytes of free space. Make sure you have write permission to the file system. You also need CDE window sessions. For this exercise, you do not have to make use of metadevices, because the utility displays information on standard disks as well as metadevices. The commands are identical whether or not you are running Solaris Volume Manager.

Estimated time: 5 minutes

1. In the first window, start the iostat utility so that extended information about each disk or metadevice is displayed. You will also enter a parameter to produce output every 3 seconds. Enter the following command at the command prompt:
# iostat -xn 3<cr>

2. The output is displayed and is updated every 3 seconds. Watch the %b column, which tells you how busy the disk, or metadevice, is at the moment.

3. In the second window, change to a directory where you have at least 50 Megabytes of free disk space and create an empty file of this size. My example directory is /data. The file to be created is called testfile, as shown in the following code:
# cd /data<cr>
# mkfile 50M testfile<cr>

4. The file will take several seconds to be created, but watch the output being displayed in the first window and notice the increase in the %b column. You should see the affected disk slice suddenly become a lot busier.

5. Continue to monitor the output when the command has completed and notice that the disk returns to its normal usage level. Press Ctrl+C to stop the iostat output in the first window and delete the file created when you have finished, as shown here:
# rm testfile<cr>
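If you want to act on the iostat output rather than eyeball it, an awk one-liner can pull out the %b column. The sample below is a canned iostat -xn style snapshot, not live output, and the 80% threshold is an arbitrary choice for the illustration:

```shell
# Illustrative: flag busy disks from iostat -xn style output, where %b
# is the second-to-last field and the device name is the last field.
cat > /tmp/iostat.sample <<'EOF'
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.2   11.3    1.4  512.7  0.0  0.9    0.0   78.2   0  91 c0t0d0
    0.0    0.1    0.0    0.3  0.0  0.0    0.0    1.2   0   0 c0t1d0
EOF

awk 'NR > 1 && $(NF-1) > 80 { print $NF " is " $(NF-1) "% busy" }' /tmp/iostat.sample
```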

Exam Questions
1. Which of the following is a device that represents several disks or disk slices?
❍ A. Physical device
❍ B. Metadevice
❍ C. Pseudo device
❍ D. Instance

2. Which of the following provides redundancy of data in the event of a disk or hardware failure?
❍ A. Metadevice
❍ B. Mirror
❍ C. Concatenated stripe
❍ D. Stripe

3. Which of the following volumes organizes the data sequentially across slices?
❍ A. Mirror
❍ B. Concatenation
❍ C. Concatenated stripe
❍ D. Stripe

4. Which of the following types of addressing interlaces component blocks across all the slices?
❍ A. Mirror
❍ B. Stripe
❍ C. Volume
❍ D. Metadevice

5. Which of the following is a collection of slices reserved to be automatically substituted in case of slice failure in either a submirror or RAID 5 metadevice?
❍ A. Hot spare pool
❍ B. Subdisks
❍ C. Plexes
❍ D. Metadevice

6. Which of the following replicates data by using parity information, so that in the case of missing data, the missing data can be regenerated using available data and the parity information?
❍ A. Hot spare pool
❍ B. Mirroring
❍ C. Trans
❍ D. RAID 5

7. Which of the following has an eight-partition limit per disk?
❍ A. Solaris Volume Manager
❍ B. Veritas Volume Manager
❍ C. VM disk
❍ D. Standard Solaris SPARC disk
❍ E. Plex

8. Which of the following commands would create 3 state database replicas on slice c0t0d0s3?
❍ A. metadb -i
❍ B. metainit -a -f -c3 c0t0d0s3
❍ C. metadb -a -f -c3 c0t0d0s3
❍ D. metaclear

9. Which of the following commands would create a one-way mirror (d1), using metadevice d14 as the submirror?
❍ A. metaclear -r d1
❍ B. metainit d1 1 1 d14
❍ C. metainit d1 -m d14
❍ D. metadb -i

10. Your supervisor has given you the task of building an array of disks for an application that is very write-intensive. The budget is tight, so he has not requested any redundancy. Which SVM volume would you use?
❍ A. RAID 0 concatenation volume
❍ B. RAID 0 stripe volume
❍ C. RAID 1 volume
❍ D. RAID 5 volume

11. Your client has given you the task of building an array of disks for an application that is read- and write-intensive. Data availability is critical, and cost is not an issue. Which options would you choose? (Choose two.)
❍ A. RAID 0 concatenation volume
❍ B. RAID 0 stripe volume
❍ C. RAID 1 volume
❍ D. RAID 5 volume
❍ E. RAID 0+1
❍ F. RAID 1+0

12. Your server uses SVM volumes to mirror the operating system disks. The server has two physical disks and two state database replicas on slice 7 of each disk. What would happen if one of the disk drives failed? (Choose two.)
❍ A. The system will panic.
❍ B. The system will continue to run.
❍ C. The system cannot reboot to multiuser mode.
❍ D. Nothing will happen with the current number of state database replicas still online.

13. What is the recommended placement of your state database replicas when you have four disk drives?
❍ A. Create one state database on each disk.
❍ B. Create two state databases on each disk.
❍ C. Create three state databases on each disk.
❍ D. Create at least three state database replicas (one per disk).

14. Which entry in the menu.lst file provides an option to boot to the alternate boot device, c1d0 (the master IDE drive on the secondary IDE controller)?
❍ A. root (hd0,0,a)
❍ B. root (hd1,0,a)
❍ C. altbootpath=/eisa/eha@1000,0/cmdk@1,0:a
❍ D. altbootpath=hd1.0.0:a

Answers to Exam Questions

1. B. A volume (often called a metadevice) is a group of physical slices that appear to the system as a single, logical device. A volume is used to increase storage capacity and increase data availability. For more information, see the "SVM Volumes" section.

2. C. Concatenations work in much the same way as the UNIX cat command is used to concatenate two or more files to create one larger file. If partitions are concatenated, the addressing of the component blocks is done on the components sequentially. The file system can use the entire concatenation. For more information, see the "SVM Volumes" section.

3. D. A stripe is similar to concatenation, except that the addressing of the component blocks is interlaced on all the slices rather than sequentially, thus providing high data transfer rates and high I/O throughput. For more information, see the "Solaris SVM" section.

4. C. A mirror is composed of one or more simple metadevices called submirrors. A mirror replicates all writes to a single logical device (the mirror) and then to multiple devices (the submirrors) while distributing read operations. This provides redundancy of data in the event of a disk or hardware failure. For more information, see the "Solaris SVM" section.

5. A. A hot spare pool is a collection of slices (hot spares) reserved to be automatically substituted in case of slice failure in either a submirror or RAID 5 metadevice. For more information, see the "Solaris SVM" section.

6. D. RAID 5 replicates data by using parity information. In the case of missing data, the data can be regenerated using available data and the parity information. For more information, see the "RAID" section.

7. D. A standard Solaris SPARC disk-partitioned environment has an eight-partition limit per disk. For more information, see the "Solaris SVM" section.

8. C. The command metadb -a -f -c3 c0t0d0s3 would create the required state database replicas. For more information, see the "Creating the State Database" section.

9. C. The command metainit d1 -m d14 would create a one-way mirror. For more information, see the "Creating a Mirror" section.

10. B. RAID 0 stripes and concatenations do not provide redundancy. However, a RAID 0 stripe spreads data evenly across multiple physical disks. RAID 1 mirrored volumes and RAID 5 striped volumes provide redundancy and therefore require additional disks and money. For more information, see the "Planning Your SVM Configuration" section.

11. E, F. A RAID 1+0 and RAID 0+1 volume would provide the best option for redundancy and fast I/O throughput on read/write operations. A RAID 5 stripe provides the best performance for read operations while providing redundancy in the event of a disk failure, but there is a penalty for write operations. A RAID 5 stripe performs slower than a RAID 0 stripe. For more information, see the "Planning Your SVM Configuration" section.

12. B, C. With two state database replicas on each of two disks, when one disk fails, the server continues to run. However, at the next reboot, you will need to boot into single-user mode and delete the failed state database replicas before you can boot the system into multiuser mode. For more information, see the "Creating the State Database" section.

13. B. When distributing your state database replicas, create two state database replicas on each drive for a system with two to four disk drives. For more information, see the "Creating the State Database" section.

14. B. Add the following new entry to the menu.lst file to allow booting from the alternate boot device, c1d0:

root (hd1,0,a)

For more information, see the "Mirroring the Root File System on an x86-Based System" section.

Suggested Reading and Resources

Solaris 10 Documentation CD: "Solaris Volume Manager Administration Guide" manual, part number 816-4520-12. Also available at http://docs.sun.com.


Chapter 4: Controlling Access and Configuring System Messaging

Objectives

The following test objectives for Exam CX-310-202 are covered in this chapter:

Configure Role-Based Access Control (RBAC), including assigning rights profiles, roles, and authorizations to users.

. This chapter describes Role-Based Access Control (RBAC) and identifies the four main databases involved with RBAC. The system administrator needs to understand the function and structure of each of these databases and how to apply the RBAC functionality in real-world situations.

Analyze RBAC configuration file summaries and manage RBAC using the command line.

. You will see how to assign a role to a user and use rights profiles by using commands that are described in this chapter. These can greatly assist the system administrator when managing a large number of rights that are to be assigned to a number of users.

Explain syslog function fundamentals, and configure and manage the /etc/syslog.conf file and syslog messaging.

. This chapter describes the basics of system messaging in the Solaris operating environment, introduces the daemon responsible for managing the messaging, and describes the configuration file that determines what information is logged and where it is stored. It also describes the new method of restarting/refreshing the syslog process when changes are made to its configuration file.

Outline

Introduction
Role-Based Access Control (RBAC)
    Using RBAC
    RBAC Components
        Extended User Attributes (user_attr) Database
        Authorizations (auth_attr) Database
        Rights Profiles (prof_attr) Database
        Execution Attributes (exec_attr) Database
syslog
    Using the logger Command
Summary
Key Terms
Apply Your Knowledge
    Exercise
    Exam Questions
    Answers to Exam Questions
    Suggested Reading and Resources

Study Strategies

The following strategies will help you prepare for the test:

. Hands-on experience is important when learning these topics. As you study this chapter, it's important that you practice each exercise and each command that is presented on a Solaris system, so practice until you can repeat the procedures from memory.

. Be prepared to match the terms presented in this chapter with the correct description. Be sure you know all the terms listed in the "Key Terms" section near the end of this chapter.

. Pay special attention to the databases used in Role-Based Access Control (RBAC) and the uses and format of each.

. Be sure you understand each command and be prepared to match the command to the correct description.

. Finally, you must understand the concept of system messaging—its purpose, how it works, and how to configure and manage it.

Introduction

This chapter covers two main topics—Role-Based Access Control (RBAC) and system messaging (syslog). These are both related in that they participate in the securing and monitoring of systems in a Solaris environment. The system messaging service (syslog) stores important system and security messages and is fully configurable. The system administrator can tune the service so that certain messages are delivered to several places (such as a log file, a message, and the system console), greatly increasing the chances of it being noticed quickly.

Role-Based Access Control (RBAC)

Objectives

. Configure Role-Based Access Control (RBAC) including assigning rights profiles, roles, and authorizations to users.

. Analyze RBAC configuration file summaries and manage RBAC using the command line.

Granting superuser access to nonroot users has always been an issue in UNIX systems. In the past, you had to rely on a third-party package, such as sudo, to provide this functionality. The problem was that sudo was an unsupported piece of freeware that had to be downloaded from the Internet and installed onto your system. In extreme cases, the system administrator had to set the setuid permission bit on the file so that a user could execute the command as root.

With Role-Based Access Control (RBAC) in the Solaris 10 operating environment, administrators can not only assign limited administrative capabilities to nonroot users, they can also provide the mechanism where a user can carry out a specific function as another user (if required). This is achieved through three features:

. Authorizations: User rights that grant access to a restricted function.

. Execution profiles: Bundling mechanisms for grouping authorizations and commands with special attributes, for example, user and group IDs or superuser ID.

. Roles: Special type of user accounts intended for performing a set of administrative tasks.

The use of Role-Based Access Control makes the delegation of authorizations much easier for the system administrator to manage, as groups of privileges can easily be given to a role through the use of profiles. Also, the use of roles means that a user has to first log in using his or her normal ID and then use the su command to gain access to the role (and therefore assigned privileges). This has the advantage of being logged and therefore helps establish accountability.

CAUTION

Assigning superuser access using RBAC: Most often, you will probably use RBAC to provide superuser access to administrative tasks within the system. Exercise caution and avoid creating security lapses by providing access to administrative functions by unauthorized users.

Using RBAC

To better describe RBAC, it's easier to first describe how a system administrator would utilize RBAC to delegate an administrative task to a nonroot user in a fictional setting at Acme Corp. At Acme Corp., the system administrator is overwhelmed with tasks. He decides to delegate some of his responsibility to Neil, a user from the engineering department who helps out sometimes with system administration tasks.

The system administrator first needs to define which tasks he wants Neil to help with. He has identified three tasks:

. Change user passwords, but do not add or remove accounts.
. Mount and share file systems.
. Shut down the system.

In RBAC, when we speak of delegating administrative tasks, it is referred to as a role account. A role account is a special type of user account that is intended for performing a set of administrative tasks. It is like a normal user account in most respects except that users can gain access to it only through the su command after they have logged in to the system with their normal login account. A role account is not accessible for normal logins, for example, through the CDE login window. From a role account, a user can access commands with special attributes, typically the superuser privilege, which are unavailable to users with normal accounts.

After the tasks have been identified, the system administrator needs to define a role username for the tasks he wants to delegate. Let's use the role username "adminusr." After Neil logs in with his normal login name of ncalkins, he then needs to issue the su command and switch to adminusr whenever he wants to perform administrative tasks.

So far we have determined that we want to name the role account adminusr. The system administrator creates the role account using the roleadd command, although you should note that the Solaris Management Console can also be used. In this chapter, you learn how to create a role account using the command line interface. The roleadd command adds a role account to the /etc/passwd, /etc/shadow, and /etc/user_attr files. The syntax for the roleadd command is as follows:

roleadd [-c comment] [-d dir] [-e expire] [-f inactive] [-g group] \
[-G group] [-m] [-k skel_dir] [-u uid] [-s shell] \
[-A authorization] [-P profile] <role username>

You'll notice that roleadd looks a great deal like the useradd command. Table 4.1 describes the options for the roleadd command.

Table 4.1 roleadd Options

-c <comment>: Any text string to provide a brief description of the role.

-d <dir>: The home directory of the new role account.

-m: Creates the new role's home directory if it does not already exist.

-e <expire>: Specifies the expiration date for a role. After this date, no user can access this role. The <expire> option argument is a date entered using one of the date formats included in the template file /etc/datemsk. For example, you can enter 10/30/02 or October 30, 2002. A value of " " defeats the status of the expired date.

-f <inactive>: Specifies the maximum number of days allowed between uses of a login ID before that login ID is declared invalid. Normal values are positive integers.

-g <group>: Specifies an existing group's integer ID or character-string name. It redefines the role's primary group membership.

-G <group>: Specifies an existing group's integer ID, or character string name. It redefines the role's supplementary group membership. Duplicates between groups with the -g and -G options are ignored.

-k <skel_dir>: A directory that contains skeleton information (such as .profile) that can be copied into a new role's home directory. This directory must already exist. The system provides the /etc/skel directory that can be used for this purpose.

-s <shell>: Specifies the user's shell on login. The default is /bin/pfsh.

-A <authorization>: One or more comma-separated authorizations to assign to the role. An authorization is a user right that grants access to a restricted function. It is a unique string that identifies what is being authorized as well as who created the authorization.

-P <profile>: One or more comma-separated profiles to assign to the role. The -A and -P options respectively assign authorizations and profiles to the role. Authorizations and profiles are described later in this section.

-u <uid>: Specifies a UID for the new role. It must be a nonnegative decimal integer. The UID associated with the role's home directory is not modified with this option; a role does not have access to its home directory until the UID is manually reassigned using the chown command.

The other options are the same options that were described for the useradd command, outlined in Solaris 10 System Administration Exam Prep: CX-310-200, Part I. When creating a role account with the roleadd command, you need to specify an authorization or profile to the role.

For the Acme Corp. example, the system administrator needs to specify the authorizations shown here:

solaris.admin.usermgr.pswd
solaris.system.shutdown
solaris.admin.fsmgr.write

Following are the predefined authorizations from the /etc/security/auth_attr file that apply to the tasks to be delegated:

solaris.admin.usermgr.pswd:::Change Password::help=AuthUserMgrPswd.html
solaris.system.shutdown:::Shutdown the System::help=SysShutdown.html
solaris.admin.fsmgr.write:::Mount and Share File Systems::\
help=AuthFsMgrWrite.html

All authorizations are stored in the auth_attr database, so the system administrator needs to use one or more of the authorizations that are stored in that file. The system administrator would therefore issue the roleadd command as follows:

# roleadd -m -d /export/home/adminusr -c "Admin Assistant" \
-A solaris.admin.usermgr.pswd,solaris.system.shutdown,\
solaris.admin.fsmgr.write adminusr<cr>

A role account named adminusr with the required directory structures has been created. The next step is to set the password for the adminusr role account by typing the following:

passwd adminusr

You are prompted to type the new password twice.

Now we need to set up Neil's account so he can access the new role account named adminusr. To access the administrative functions, Neil needs to first log in using his regular user account named neil. With the usermod command, we assign the role to the user account using the -R option:

usermod -R adminusr neil

NOTE

No need to be logged out: Previously, you needed to ensure that the user was not logged in at the time of assigning a role; otherwise, you received an error message and the role was not assigned. This is no longer the case. A role can be assigned to a user while the user is still logged in.

Neil can check which roles he has been granted by typing the following at the command line:

$ roles<cr>

The system responds with the roles that have been granted to the user account neil:

adminusr
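Because auth_attr entries are plain colon-delimited text, they are easy to inspect with standard tools. The sketch below copies the three entries above into a sample file rather than reading a live /etc/security/auth_attr, then prints each authorization name alongside its short description (the fourth field).

```shell
# Sample auth_attr-format data; a real system would read
# /etc/security/auth_attr instead.
cat <<'EOF' > /tmp/auth_attr.sample
solaris.admin.usermgr.pswd:::Change Password::help=AuthUserMgrPswd.html
solaris.system.shutdown:::Shutdown the System::help=SysShutdown.html
solaris.admin.fsmgr.write:::Mount and Share File Systems::help=AuthFsMgrWrite.html
EOF
# Field 1 is the authorization name, field 4 the short description.
awk -F: '{ printf "%-28s %s\n", $1, $4 }' /tmp/auth_attr.sample
```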

Neil then needs to su to the adminusr account by typing the following:

$ su adminusr<cr>

Neil is prompted to type the password for the role account. Now Neil can modify user passwords, shut down the system, and mount and share file systems. Any other user trying to su to the adminusr account gets this message:

$ su adminusr<cr>
Password:
Roles can only be assumed by authorized users
su: Sorry
$

If the system administrator later wants to assign additional authorizations to the role account named adminusr, he would do so using the rolemod command. The rolemod command modifies a role's login information on the system. The syntax for the rolemod command is as follows:

rolemod [-u uid] [-o] [-g group] [-G group] [-d dir] [-m] [-s shell]\
[-c comment] [-l new_name] [-f inactive] [-e expire] [-A Authorization]\
[-P profile] <role account>

Table 4.2 describes options for the rolemod command where they differ from the roleadd command.

Table 4.2 rolemod Options

-A <authorization>: One or more comma-separated authorizations as defined in the auth_attr database. This replaces any existing authorization setting.

-d <dir>: Specifies the new home directory of the role. It defaults to <base_dir>/<login>, in which <base_dir> is the base directory for new login home directories and <login> is the new login.

-l <new_logname>: Specifies the new login name for the role. The <new_logname> argument is a string no more than eight bytes consisting of characters from the set of alphabetic characters, numeric characters, period (.), underline (_), and hyphen (-). The first character should be alphabetic, and the field should contain at least one lowercase alphabetic character. A warning message is written if these restrictions are not met. A future Solaris release might refuse to accept login fields that do not meet these requirements. The <new_logname> argument must contain at least one character and must not contain a colon (:) or newline (\n).

-m: Moves the role's home directory to the new directory specified with the -d option. If the directory already exists, it must have permissions read/write/execute by group, in which group is the role's primary group.

Table 4.2 rolemod Options (continued)

-o: Allows the specified UID to be duplicated (nonunique).

-P <profile>: One or more comma-separated execution profiles, as defined in the prof_attr database. Replaces any existing profile setting.

-u <uid>: Specifies a new UID for the role. It must be a nonnegative decimal integer. The UID associated with the role's home directory is not modified with this option; a role does not have access to its home directory until the UID is manually reassigned using the chown command.

To add the ability to purge log files, you need to add solaris.admin.logsvc.purge to the list of authorizations for adminusr. To do this, issue the rolemod command:

# rolemod -A solaris.admin.usermgr.pswd,solaris.system.shutdown,\
solaris.admin.fsmgr.write,solaris.admin.logsvc.purge adminusr<cr>

CAUTION

rolemod warning: The rolemod command does not add to the existing authorizations; it replaces any existing authorization setting, so the full list of authorizations must be specified each time.

You can verify that the new authorizations have been added to the role by typing the auths command at the command line:

# auths adminusr<cr>
solaris.admin.usermgr.pswd,solaris.system.shutdown,solaris.admin.fsmgr.\
write,solaris.admin.logsvc.purge,...
[ output has been truncated]

If you want to remove a role account, use the roledel command:

roledel [-r] <role account name>

The -r option removes the role's home directory from the system. For example, to remove the adminusr role account, issue the following command:

# roledel -r adminusr<cr>

The next section discusses each of the RBAC databases in detail, describing the entries made when we executed the roleadd and usermod commands.

RBAC Components

RBAC relies on the following four databases to provide users access to privileged operations:

. /etc/user_attr (extended user attributes database): Associates users and roles with authorizations and profiles.

. /etc/security/auth_attr (authorization attributes database): Defines authorizations and their attributes and identifies the associated help file.

. /etc/security/prof_attr (rights profile attributes database): Defines profiles, lists the profile's assigned authorizations, and identifies the associated help file.

. /etc/security/exec_attr (profile attributes database): Defines the privileged operations assigned to a profile.

These four databases are logically interconnected.

EXAM ALERT

RBAC database functions: You need to be able to correctly identify the function and location of each RBAC database. A common exam question is to match the description with the relevant RBAC database. Remember that the user_attr database resides in the /etc directory and not in the /etc/security directory.

Extended User Attributes (user_attr) Database

The /etc/user_attr database supplements the passwd and shadow databases. It contains extended user attributes, such as authorizations and profiles. It also allows roles to be assigned to a user. Following is an example of the /etc/user_attr database:

# more /etc/user_attr<cr>
# Copyright 2003 by Sun Microsystems, Inc. All rights reserved.
#
# /etc/user_attr
#
# user attributes. see user_attr(4)
#
#pragma ident "@(#)user_attr 1.1 03/07/09 SMI"
#
adm::::profiles=Log Management
lp::::profiles=Printer Management
root::::auths=solaris.*,solaris.grant;profiles=All
adminusr::::type=role;auths=solaris.admin.usermgr.pswd,\
solaris.system.shutdown,solaris.admin.fsmgr.write;profiles=All
neil::::type=normal;roles=adminusr
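Since user_attr is colon-delimited with the attributes in the fifth field, a one-line awk program can separate role accounts from normal users. This sketch works on sample data mirroring the listing above rather than a live /etc/user_attr.

```shell
# Sample user_attr-format data; field 5 holds semicolon-separated
# key-value pairs such as type=, auths=, profiles=, and roles=.
cat <<'EOF' > /tmp/user_attr.sample
adm::::profiles=Log Management
root::::auths=solaris.*,solaris.grant;profiles=All
adminusr::::type=role;profiles=All
neil::::type=normal;roles=adminusr
EOF
# Print the names of entries whose attr field marks them as roles.
awk -F: '$5 ~ /(^|;)type=role(;|$)/ { print $1 }' /tmp/user_attr.sample
```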

The following fields in the user_attr database are separated by colons:

user:qualifier:res1:res2:attr

Each field is described in Table 4.3.

Table 4.3 user_attr Fields

user: Describes the name of the user or role, as specified in the passwd database.

qualifier: Reserved for future use.

res1: Reserved for future use.

res2: Reserved for future use.

attr: Contains an optional list of semicolon-separated (;) key-value pairs that describe the security attributes to be applied when the user runs commands. Eight valid keys exist: auths, profiles, roles, type, project, defaultpriv, limitpriv, and lock_after_retries.

auths specifies a comma-separated list of authorization names chosen from names defined in the auth_attr database. Authorization names can include the asterisk (*) character as a wildcard. For example, solaris.device.* means all the Solaris device authorizations.

profiles contains an ordered, comma-separated list of profile names chosen from prof_attr. A profile determines which commands a user can execute and with which command attributes. The order of profiles is important; it works similarly to UNIX search paths. The first profile in the list that contains the command to be executed defines which (if any) attributes are to be applied to the command. At a minimum, each user in user_attr should have the All profile, which makes all commands available but without attributes. Profiles are described in the section titled "Rights Profiles (prof_attr) Database."

roles can be assigned to the user using a comma-separated list of role names. Note that roles are defined in the same user_attr database; they are indicated by setting the type value to role. Roles cannot be assigned to other roles. A normal user assumes a role after he has logged in.

type can be set to normal, if this account is for a normal user, or to role, if this account is for a role.

project can be set to a project from the projects database, so that the user is placed in a default project at login time.

defaultpriv is the list of default privileges the user is assigned.

limitpriv: The system administrator can limit the set of privileges allowed, and this attribute contains the maximum set of privileges the user can be allowed. Care must be taken when limiting privileges so as to not affect other applications the user might execute.
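The "first profile in the list wins" rule can be sketched in shell. The profile-to-command mapping below is hypothetical and hard-coded purely to show that the order of the list decides which profile supplies a command's attributes; real lookups consult the prof_attr and exec_attr databases.

```shell
# Return the first profile in an ordered, comma-separated list that
# defines the given command (hypothetical mapping for illustration).
first_profile() {  # $1 = command, $2 = ordered profile list
  echo "$2" | tr ',' '\n' | while read -r prof; do
    case "$prof" in
      "Printer Management") [ "$1" = lpadmin ] && { echo "$prof"; break; } ;;
      All) echo "$prof"; break ;;   # All matches every command
    esac
  done
}
first_profile lpadmin 'Printer Management,All'   # Printer Management wins
first_profile lpadmin 'All,Printer Management'   # All comes first and wins
```

Just as with a PATH lookup, reordering the list changes which entry is used even though both lists contain the same profiles.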

lock_after_retries specifies whether an account is locked out following a number of failed logins. The number of failed logins is taken from the RETRIES option in /etc/default/login. The default is no.

In the previous section, we issued the following roleadd command to add a role named adminusr:

# roleadd -m -d /export/home/adminusr -c "Admin Assistant" \
-A solaris.admin.usermgr.pswd,solaris.system.shutdown,\
solaris.admin.fsmgr.write adminusr<cr>

The roleadd command made the following entry in the user_attr database:

adminusr::::type=role;auths=solaris.admin.usermgr.pswd,\
solaris.system.shutdown,solaris.admin.fsmgr.write;profiles=All

We can then issue the following usermod command to assign the new role to the user neil:

# usermod -R adminusr neil<cr>

which in turn makes the following entry in the user_attr database:

neil::::type=normal;roles=adminusr

Authorizations (auth_attr) Database

An authorization is a user right that grants access to a restricted function. It is a unique string that identifies what is being authorized as well as who created the authorization. Certain privileged programs check the authorizations to determine whether users can execute restricted functionality. For example, the solaris.jobs.admin authorization is required for one user to edit another user's crontab file.

Authorizations can be assigned directly to users (or roles), in which case they are entered in the user_attr database. Authorizations can also be assigned to profiles, which in turn are assigned to users. Profiles are described in the "Rights Profiles (prof_attr) Database" section.

In the previous section, the system administrator wanted to delegate some of the system administrative tasks to Neil. Remember that we used the following authorizations to give Neil the ability to modify user passwords, shut down the system, and mount and share file systems:

solaris.admin.usermgr.pswd
solaris.system.shutdown
solaris.admin.fsmgr.write

All authorizations are stored in the auth_attr database. If no name service is used, the database is located in a file named /etc/security/auth_attr. The fields in the auth_attr database are separated by colons, as shown here:

authname:res1:res2:short_desc:long_desc:attr

Each field is described in Table 4.4.

Table 4.4 auth_attr Fields

authname[suffix]: A unique character string used to identify the authorization. Authorizations for the Solaris operating environment use solaris as a prefix. All other authorizations should use a prefix that begins with the reverse-order Internet domain name of the organization that creates the authorization (for example, com.xyzcompany). The suffix indicates what is being authorized—typically, the functional area and operation.

When no suffix exists (that is, the authname consists of a prefix and functional area and ends with a period), the authname serves as a heading for use by applications in their GUIs rather than as an authorization. The authname solaris.printmgr is an example of a heading.

When the authname ends with the word grant, the authname serves as a grant authorization and allows the user to delegate related authorizations (that is, authorizations with the same prefix and functional area) to other users. The authname solaris.printmgr.grant is an example of a grant authorization; it gives the user the right to delegate such authorizations as solaris.printmgr.admin and solaris.printmgr.nobanner to other users.

res1: Reserved for future use.

res2: Reserved for future use.

short_desc: A shortened name for the authorization suitable for displaying in user interfaces, such as in a scrolling list in a GUI.

long_desc: A long description. This field identifies the purpose of the authorization, the applications in which it is used, and the type of user interested in using it. The long description can be displayed in the help text of an application.

attr: An optional list of semicolon-separated (;) key-value pairs that describe the attributes of an authorization. Zero or more keys can be specified. The keyword help identifies a help file in HTML. Help files can be accessed from the index.html file in the /usr/lib/help/auths/locale/C directory.

The following are some typical values found in the default auth_attr database:

solaris.admin.usermgr.pswd:::Change Password::help=AuthUserMgrPswd.html
solaris.system.shutdown:::Shutdown the System::help=SysShutdown.html
solaris.admin.fsmgr.write:::Mount and Share File Systems::\
help=AuthFsMgrWrite.html

Look at the relationship between the auth_attr and the user_attr databases for the adminusr role we added earlier:

adminusr::::type=role;auths=solaris.admin.usermgr.pswd,\
solaris.system.shutdown,solaris.admin.fsmgr.write;profiles=All

Notice the authorization entries that are bold. These authorization entries came out of the auth_attr database. The solaris.system.shutdown authorization, which is defined in the auth_attr database, gives the role the right to shut down the system.

Rights Profiles (prof_attr) Database

Up until now, we assigned authorization rights to the role account. Defining a role account that has several authorizations can be tedious. In this case, it's better to define a profile, which is several authorizations bundled together under one name called a profile name. We referred to rights profiles, or simply profiles, earlier in this chapter. The definition of the profile is stored in the prof_attr database. Again, if you are not using a name service, the prof_attr file is located in the /etc/security directory.

Following is an example of a profile named Operator, which is in the default prof_attr database:

Operator:::Can perform simple administrative tasks:profiles=Printer Management,Media Backup,All;help=RtOperator.html

Colons separate the fields in the prof_attr database:

profname:res1:res2:desc:attr

The fields are defined in Table 4.5.

Table 4.5    prof_attr Fields

Field Name   Description
profname     The name of the profile. Profile names are case-sensitive.
res1         A field reserved for future use.
res2         A field reserved for future use.
desc         A long description. This field should explain the purpose of the profile, including what type of user would be interested in using it. The long description should be suitable for displaying in the help text of an application.
attr         An optional list of key-value pairs separated by semicolons (;) that describe the security attributes to apply to the object upon execution. Zero or more keys can be specified. The four valid keys are help, profiles, auths, and privs. The keyword help identifies a help file in HTML. Help files can be accessed from the index.html file in the /usr/lib/help/auths/locale/C directory. auths specifies a comma-separated list of authorization names chosen from those names defined in the auth_attr database. Authorization names can be specified using the asterisk (*) character as a wildcard.

Several other profiles are defined in the prof_attr database.
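The colon-separated prof_attr layout is easy to pick apart programmatically. The following sketch is illustrative only (it is not a Solaris tool, and parse_prof_attr is a hypothetical helper name): it splits an entry into its five fields and then decodes the semicolon-separated key=value pairs in the attr field.

```python
# Split a prof_attr entry (profname:res1:res2:desc:attr) into its fields
# and decode the attr field's semicolon-separated key=value pairs.
def parse_prof_attr(line):
    profname, res1, res2, desc, attr = line.split(":", 4)
    pairs = {}
    for kv in filter(None, attr.split(";")):
        key, _, value = kv.partition("=")
        pairs[key] = value
    return {"profname": profname, "desc": desc, "attr": pairs}

entry = ("Operator:::Can perform simple administrative tasks:"
         "profiles=Printer Management,Media Backup,All;help=RtOperator.html")
parsed = parse_prof_attr(entry)
print(parsed["profname"])            # Operator
print(parsed["attr"]["profiles"])    # Printer Management,Media Backup,All
```

Note that the attr field itself uses semicolons between pairs, while a single value such as profiles can contain a comma-separated list of profile names.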

Perhaps the system administrator wants to create a new role account and delegate the task of printer management and backups. He could look through the auth_attr file for each authorization and assign each one to the new role account using the roleadd command. Or, he could use the Operator profile currently defined in the prof_attr database, which looks like this:

Operator:::Can perform simple administrative tasks:profiles=Printer Management,Media Backup,All;help=RtOperator.html

The Operator profile consists of three other profiles:

. Printer Management
. Media Backup
. All

Let's look at each of these profiles as defined in the prof_attr database:

Printer Management:::Manage printers, daemons, spooling:help=RtPrntAdmin.html;auths=solaris.admin.printer.read,solaris.admin.printer.modify,solaris.admin.printer.delete
Media Backup:::Backup files and file systems:help=RtMediaBkup.html
All:::Execute any command as the user or role:help=RtAll.html

Printer Management has the following authorizations assigned to it:

. solaris.admin.printer.read
. solaris.admin.printer.modify
. solaris.admin.printer.delete

When you look at these three authorizations in the auth_attr database, you see the following entries:

solaris.admin.printer.read:::View Printer Information::help=AuthPrinterRead.html
solaris.admin.printer.modify:::Update Printer Information::help=AuthPrinterModify.html
solaris.admin.printer.delete:::Delete Printer Information::help=AuthPrinterDelete.html

Assigning the Printer Management profile is the same as assigning the three authorizations for viewing, updating, and deleting printer information. The Media Backup profile provides authorization for backing up data, but not restoring data. The Media Backup profile does not have authorizations associated with it like the Printer Management profile has. I'll describe how this profile is defined in the next section when I describe execution attributes.

To create a new role account named admin2 specifying the Operator profile, use the roleadd command with the -P option:

# roleadd -m -d /export/home/admin2 -c "Admin Assistant" -P Operator admin2<cr>

The following entry is added to the user_attr database:

admin2::::type=role;profiles=Operator

At any time, users can check which profiles have been granted to them with the profiles command:

$ profiles<cr>

The system lists the profiles that have been granted to that particular user account.

The All profile grants the right for a role account to use any command when working in an administrator's shell. These shells can only execute commands that have been explicitly assigned to a role account through granted rights. We'll explore this concept further when I describe execution attributes in the next section.

Execution Attributes (exec_attr) Database

An execution attribute associated with a profile is a command (with any special security attributes) that can be run by those users or roles to which the profile is assigned. For example, in the previous section, we looked at the profile named Media Backup in the prof_attr database. Although no authorizations were assigned to this profile, the Media Backup profile was defined in the exec_attr database as follows:

Media Backup:solaris:act:::Tar;*;*;MAGTAPE;*:privs=all
Media Backup:solaris:act:::Tar;*;*;TAR;*:privs=all
Media Backup:solaris:act:::TarList;*;*;*;*:privs=all
Media Backup:suser:cmd:::/usr/bin/mt:euid=0
Media Backup:suser:cmd:::/usr/lib/fs/ufs/ufsdump:euid=0;gid=sys
Media Backup:suser:cmd:::/usr/sbin/tar:euid=0

The fields in the exec_attr database are as follows and are separated by colons:

name:policy:type:res1:res2:id:attr

The fields are defined in Table 4.6.
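The user_attr entry that roleadd writes follows the same colon-and-semicolon layout as the other RBAC databases. This short sketch (illustrative only; parse_user_attr is a hypothetical helper, not a Solaris command) decodes the entry shown above:

```python
# Decode a user_attr entry (user:qualifier:res1:res2:attr); the attr field
# carries semicolon-separated key=value pairs such as type and profiles.
def parse_user_attr(line):
    user, _qual, _res1, _res2, attr = line.split(":", 4)
    pairs = dict(kv.split("=", 1) for kv in attr.split(";") if kv)
    return user, pairs

user, pairs = parse_user_attr("admin2::::type=role;profiles=Operator")
print(user, pairs["type"], pairs["profiles"])  # admin2 role Operator
```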

Table 4.6    exec_attr Fields

Field Name   Description
name         The name of the profile. Profile names are case-sensitive.
policy       The security policy associated with this entry. Currently, suser (the superuser policy model) and solaris are the only valid policy entries. The solaris policy recognizes privileges, whereas the suser policy does not.
type         The type of entity whose attributes are specified. The two valid types are cmd (command) and act. The cmd type specifies that the id field is a command that would be executed by a shell. The act type is available only if the system is configured with Trusted Extensions.
res1         This field is reserved for future use.
res2         This field is reserved for future use.
id           A string identifying the entity. Commands should have the full path or a path with a wildcard; the asterisk (*) wildcard can be used. To specify arguments, write a script with the arguments and point the id to the script.
attr         An optional list of semicolon (;) separated key-value pairs that describe the security attributes to apply to the entity upon execution. Zero or more keys can be specified. The list of valid keywords depends on the policy being enforced. Six valid keys exist: euid, uid, egid, gid, privs, and limitprivs. euid and uid contain a single username or numeric user ID. Commands designated with euid run with the effective UID indicated, which is similar to setting the setuid bit on an executable file. Commands designated with uid run with both the real and effective UIDs. egid and gid contain a single group name or numeric group ID. Commands designated with egid run with the effective GID indicated, which is similar to setting the setgid bit on an executable file. Commands designated with gid run with both the real and effective GIDs.

NOTE: Trusted Solaris
You will see an additional security policy if you are running Trusted Solaris, a special security-enhanced version of the operating environment. The policy tsol is the trusted solaris policy model.

Looking back to the Media Backup profile as defined in the exec_attr database, we see that the following commands have an effective UID of 0 (superuser):

/usr/bin/mt
/usr/sbin/tar
/usr/lib/fs/ufs/ufsdump
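The same field layout can be walked mechanically to answer questions such as "which commands does this profile run as root?". The sketch below is illustrative only (euid0_commands is a hypothetical helper, not a Solaris tool); it filters cmd-type exec_attr entries for a profile and keeps those whose attr field sets euid=0:

```python
# Collect the commands in exec_attr entries (name:policy:type:res1:res2:id:attr)
# that a given profile runs with an effective UID of 0.
def euid0_commands(entries, profile):
    cmds = []
    for line in entries:
        name, _policy, typ, _r1, _r2, ident, attr = line.split(":", 6)
        pairs = dict(kv.split("=", 1) for kv in attr.split(";") if kv)
        if name == profile and typ == "cmd" and pairs.get("euid") == "0":
            cmds.append(ident)
    return cmds

exec_attr = [
    "Media Backup:suser:cmd:::/usr/bin/mt:euid=0",
    "Media Backup:suser:cmd:::/usr/lib/fs/ufs/ufsdump:euid=0;gid=sys",
    "Media Backup:suser:cmd:::/usr/sbin/tar:euid=0",
]
print(euid0_commands(exec_attr, "Media Backup"))
# ['/usr/bin/mt', '/usr/lib/fs/ufs/ufsdump', '/usr/sbin/tar']
```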

For these entries, the type of entity is cmd. Therefore, any user that has been granted the Media Backup profile can execute the previous backup commands with an effective user ID of 0 (superuser).

In the prof_attr database, we also saw that the Operator profile consisted of a profile named All. When we look at the exec_attr database for a definition of the All profile, we get the following entry:

All:suser:cmd:::*:

Examining each field, we see that All is the profile name, the security policy is suser, and the type of entity is cmd. The id field has an *, a wildcard entry that matches every command. Notice that no special process attributes are associated with the wildcard, so the effect is that all commands matching the wildcard run with the UID and GID of the current user (or role). In other words, the user has access to any command while working in the shell. Again, All did not have authorizations associated with it.

It's common to grant all users the All profile. Without the All profile, a user would have access to the privileged commands, but no access to normal commands such as ls and cd.

NOTE: The All profile
Always assign the All profile last in the list of profiles. If it is listed first, no other rights are consulted when you look up command attributes.

syslog

Objective:
. Explain syslog function fundamentals, and configure and manage the /etc/syslog.conf file and syslog messaging.

A critical part of the system administrator's job is monitoring the system. Solaris uses the syslog message facility to do this. The messages can be warnings, alerts, or simply informational messages. As the system administrator, you customize syslog to specify where and how system messages are to be saved.

syslogd is the daemon responsible for capturing system messages. The syslogd daemon receives messages from applications on the local host or from remote hosts and then directs messages to a specified log file. To each message that syslog captures, it adds a timestamp, the message type keyword at the beginning of the message, and a newline at the end of the message. For example, the following messages were logged in the /var/adm/messages file:

July 15 23:06:39 sunfire ufs: [ID 845546 kern.notice] NOTICE: alloc: /var: file system full
Sep 1 04:57:06 docbert nfs: [ID 563706 kern.notice] NFS server saturn.east ok
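The stamped lines in /var/adm/messages follow a regular shape, which makes them easy to dissect in scripts. The regular expression below is an illustrative sketch whose pattern is assumed from the two example lines (it is not part of Solaris); it pulls out the host, message ID, and facility.level pair:

```python
import re

# Pull the host, message ID, and facility.level out of a /var/adm/messages
# line of the form shown in the examples (pattern assumed from those lines).
pattern = re.compile(r"^(?P<date>\w+\s+\d+ [\d:]+) (?P<host>\S+) (?P<tag>\w+): "
                     r"\[ID (?P<msgid>\d+) (?P<facility>\w+)\.(?P<level>\w+)\]")
line = "Sep 1 04:57:06 docbert nfs: [ID 563706 kern.notice] NFS server saturn.east ok"
m = pattern.match(line)
print(m.group("host"), m.group("facility"), m.group("level"))  # docbert kern notice
```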

syslog enables you to capture messages by facility (the part of the system that generated the message) and by level of importance. Facility is considered to be the service area generating the message or error (such as printing, email, or network), whereas the level can be considered the level of severity (such as notice, warning, error, or emergency). syslog also enables you to forward messages to another machine so that all your messages can be logged in one location.

The syslogd daemon reads and logs messages into a set of files described by the configuration file /etc/syslog.conf. When the syslogd daemon starts up, it preprocesses the /etc/syslog.conf file through the m4 macro processor to get the correct information for specific log files; syslogd does not read the /etc/syslog.conf file directly. syslogd starts m4, which parses the /etc/syslog.conf file for ifdef statements that can be interpreted by m4. The function ifdef is an integral part of m4 and identifies the system designated as LOGHOST. The macro then can evaluate whether log files are to be held locally or on a remote system. When m4 encounters ifdef statements that it can process, the statement is evaluated for a true or false condition and the message is routed relative to the output of the test. If m4 doesn't recognize any m4 commands in the syslog.conf file, output is passed back to syslogd. syslogd then uses this output to route messages to appropriate destinations.

An entry in the /etc/syslog.conf file is composed of two fields:

selector action

The selector field contains a semicolon-separated list of priority specifications of this form:

facility.level [ ; facility.level ]

The action field indicates where to forward the message. Many defined facilities exist.

EXAM ALERT: Separate with tabs
The separator between the two fields must be a tab character. Spaces do not work and give unexpected results. This is a very common mistake.

EXAM ALERT: /etc/syslog.conf and ifdef statements
Make sure you become familiar with the facilities and values listed in the tables in this section. An exam question might provide a sample file and ask where a specific type of message, such as a failed login, will be logged. Also watch out for the ifdef statements to see if the logging is being carried out on a remote system, or a combination of both.
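The two-field entry format, and the tab requirement called out in the exam alert, can be sketched in a few lines. This is an illustrative parser, not Solaris code (parse_entry is a hypothetical helper); it rejects entries without a tab and splits the selector into its (facility, level) pairs:

```python
# Split a syslog.conf entry into selector and action on the required tab;
# a space instead of a tab is the classic mistake called out above.
def parse_entry(entry):
    if "\t" not in entry:
        raise ValueError("selector and action must be separated by a tab")
    selector, action = entry.split("\t", 1)
    specs = [tuple(s.split(".", 1)) for s in selector.split(";")]
    return specs, action.strip()

specs, action = parse_entry("*.err;kern.notice\t/var/adm/messages")
print(specs)   # [('*', 'err'), ('kern', 'notice')]
print(action)  # /var/adm/messages
```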

The facilities are described in Table 4.7.

Table 4.7    Recognized Values for Facilities

Value     Description
user      Messages generated by user processes. This is the default priority for messages from programs or facilities not listed in this file.
kern      Messages generated by the kernel.
mail      The mail system.
daemon    System daemons, such as in.ftpd and others.
auth      The authorization system, such as login, su, getty, and others.
lpr       The line printer spooling system (lpr and lpc). lpr is the syslogd facility responsible for generating messages from the line printer spooling system.
news      Reserved for the Usenet network news system.
uucp      Reserved for the UUCP system. It does not currently use the syslog mechanism.
cron      The cron/at facility, such as crontab, at, cron, and others.
audit     The audit facility, such as auditd.
local0-7  Reserved for local use.
mark      For timestamp messages produced internally by syslogd.
*         Indicates all facilities except the mark facility.

Table 4.8 lists recognized values for the syslog level field. They are listed in descending order of severity.

Table 4.8    Recognized Values for level

Value    Description
emerg    Panic conditions that would normally be broadcast to all users.
alert    Conditions that should be corrected immediately, such as a corrupted system database.
crit     Warnings about critical conditions, such as hard device errors.
err      Other errors.
warning  Warning messages.
notice   Conditions that are not error conditions but that might require special handling. A failed login attempt is considered a notice and not an error.
info     Informational messages.
debug    Messages that are normally used only when debugging a program.
none     Does not send messages from the indicated facility to the selected file. For example, the entry *.debug;mail.none in /etc/syslog.conf sends all messages except mail messages to the selected file.
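Because Table 4.8 is ordered by descending severity, a selector at a given level also catches everything above it in the list. A minimal sketch of that rule (illustrative only; matches is a hypothetical helper):

```python
# Table 4.8's levels in descending order of severity; a selector at a given
# level also matches every level above it in this list.
LEVELS = ["emerg", "alert", "crit", "err", "warning", "notice", "info", "debug"]

def matches(selector_level, message_level):
    # Lower index means more severe, so "at or above" is index <= index.
    return LEVELS.index(message_level) <= LEVELS.index(selector_level)

print(matches("err", "alert"))    # True  (alert is more severe than err)
print(matches("err", "warning"))  # False (warning is less severe)
```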

NOTE: Levels include all higher levels too
When you specify a syslog level, it means the specified level and all higher levels. For example, if you specify the err level, this includes crit, alert, and emerg levels as well.

Values for the action field can have one of four forms:

. A filename, beginning with a leading slash, which indicates that messages specified by the selector are to be written to the specified file. The file is opened in append mode and must already exist; syslog does not create the file if it doesn't already exist.

. The name of a remote host, prefixed with a @. An example is @server, which indicates that messages specified by the selector are to be forwarded to syslogd on the named host. The hostname loghost is the hostname given to the machine that will log syslogd messages. This is specified in the local /etc/hosts file. Every machine is its own loghost by default. It is also possible to specify one machine on a network to be loghost by making the appropriate host table entries. If the local machine is designated as loghost, syslogd messages are written to the appropriate files. Otherwise, they are sent to the machine loghost on the network.

. A comma-separated list of usernames, which indicates that messages specified by the selector are to be written to the named users if they are logged in.

. An asterisk, which indicates that messages specified by the selector are to be written to all logged-in users.

Blank lines are ignored. Lines in which the first nonwhitespace character is a # are treated as comments.

All of this becomes much clearer when you look at sample entries from an /etc/syslog.conf file:

*.err                                       /dev/console
*.err;daemon.notice;auth.notice;mail.crit   /var/adm/messages
mail.debug                                  /var/log/syslog
*.alert                                     root
*.emerg                                     *
kern.err                                    @server
*.alert;auth.warning                        /var/log/auth

In this example, the first line prints all errors on the console. The second line sends all errors, daemon and authentication system notices, and critical errors from the mail system to the file /var/adm/messages.
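How a selector routes a message can be modeled in a few lines. The following is a much-simplified illustrative sketch, not Solaris code: it combines the facility match and the severity ordering for three entries of the kind found in a syslog.conf file, and it ignores m4 processing and the none level entirely.

```python
# Decide which actions receive a (facility, level) message under a few
# syslog.conf-style entries; a simplified model ignoring m4 and `none`.
LEVELS = ["emerg", "alert", "crit", "err", "warning", "notice", "info", "debug"]
CONF = [
    ("*.err", "/dev/console"),
    ("*.alert", "root"),
    ("kern.err", "@server"),
]

def destinations(facility, level):
    dests = []
    for selector, action in CONF:
        for spec in selector.split(";"):
            fac, lvl = spec.split(".")
            # A selector matches its own facility (or *) at its level or above.
            if fac in ("*", facility) and LEVELS.index(level) <= LEVELS.index(lvl):
                dests.append(action)
    return dests

print(destinations("kern", "alert"))  # ['/dev/console', 'root', '@server']
```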

The third line sends mail system debug messages to /var/log/syslog. The mail system, sendmail, logs a number of messages and can produce a large amount of information, so some system administrators disable mail messages or send them to another file that they clean out frequently. Before disabling mail messages, however, remember that sendmail messages come in very handy when you're diagnosing mail problems or tracking mail forgeries. The fourth line sends all alert messages to user root. The fifth line sends all emergency messages to all users. The sixth line forwards kernel messages of err (error) severity or higher to the machine named server. The last line logs all alert messages and messages of warning level or higher from the authorization system to the file /var/log/auth.

The level none may be used to disable a facility. This is usually done in the context of eliminating messages. For example:

*.debug;mail.none   /var/adm/messages

This selects debug messages and above from all facilities except those from mail. In other words, mail messages are disabled.

As of Solaris 10, the mechanism for stopping, starting, and refreshing syslogd has changed. The syslog function is now under the control of the Service Management Facility (SMF), which is described in detail in the book Solaris 10 System Administration Exam Prep: CX-310-200, Part I. To stop or start syslogd, use the svcadm command with the appropriate parameter, enable or disable:

# svcadm enable -t system-log<cr>
# svcadm disable -t system-log<cr>

The syslog facility reads its configuration information from /etc/syslog.conf whenever it receives a refresh command from the service administration command, svcadm, and when the system is booted. You can make your changes to /etc/syslog.conf and then run the following command to cause the file to be reread by the syslogd daemon:

# svcadm refresh system-log<cr>

EXAM ALERT: No more kill -HUP
Make sure you remember that the kill -HUP facility should no longer be used to try to cause a daemon process to re-read its configuration file, even though it still works. The svcadm refresh command is now the recommended way of achieving this.

The first message in the log file is logged by the syslog daemon itself to show when the process was started.

syslog logs are automatically rotated on a regular basis. A new method of log rotation was introduced with Solaris 9: logadm, a program normally run as a root-owned cron job. In previous Solaris releases, this was achieved by the program newsyslog. A configuration file, /etc/logadm.conf, is now used to manage log rotation and allows a number of criteria to be specified. See the logadm and logadm.conf manual pages for further details.

Using the logger Command

The logger command provides the means of manually adding one-line entries to the system logs from the command line. This is especially useful in shell scripts. The syntax for the logger command is as follows:

logger [-i] [-f file] [-p priority] [-t tag] [message]

Options to the logger command are described in Table 4.9.

Table 4.9    logger Options

Option         Description
-i             Logs the Process ID (PID) of the logger process with each line written to a log file.
-f <file>      Use the contents of file as the message to be logged.
-p <priority>  The message priority. This can be defined as a numeric value or as a facility.level pair, as described in Tables 4.7 and 4.8. The default priority is user.notice.
-t <tag>       Marks each line with the specified tag.
message        One or more string arguments, separated by a single space character, comprising the text of the message to be logged.

For example, perhaps you have a simple shell script that backs up files:

#!/bin/ksh
tar cvf /tmp/backup .
logger -p user.alert "Backups Completed"

The last line of the script uses the logger command to send a "Backups Completed" message to the default system log (/var/adm/messages). After running the script, I see the following message appended to the log file:

Jan 23 14:02:52 sunfire root: [ID 702911 user.alert] Backups Completed
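Since -p accepts either a facility.level pair or a numeric value, it can help to see how the two relate. In the classic BSD syslog encoding (an assumption here: the standard protocol encoding, where priority = facility code * 8 + severity code), user.alert works out to 9. An illustrative sketch:

```python
# Compute the numeric priority that corresponds to a facility.level pair,
# using the classic BSD syslog encoding (priority = facility * 8 + severity).
FACILITIES = {"kern": 0, "user": 1, "mail": 2, "daemon": 3, "auth": 4}
SEVERITIES = {"emerg": 0, "alert": 1, "crit": 2, "err": 3,
              "warning": 4, "notice": 5, "info": 6, "debug": 7}

def priority(pair):
    facility, level = pair.split(".")
    return FACILITIES[facility] * 8 + SEVERITIES[level]

print(priority("user.alert"))   # 9
print(priority("user.notice"))  # 13  (the logger default)
```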

Summary

In this chapter you learned about Role-Based Access Control (RBAC), which allows the system administrator to delegate administrative responsibilities to users without having to divulge the root password. A number of profiles allow privileges to be grouped together so that a user can easily be granted a restricted set of additional privileges. Four main RBAC databases interact with each other to provide users with access to privileged operations:

. /etc/user_attr: Associates users and roles with authorizations and execution profiles.
. /etc/security/auth_attr: Defines authorizations and their attributes and identifies the associated help file.
. /etc/security/prof_attr: Defines the profiles, lists the profile's assigned authorizations, and identifies the associated help file.
. /etc/security/exec_attr: Defines the privileged operations assigned to a profile.

Also in this chapter, you learned about the system logging facility (syslog) and the configuration that facilitates routing of system messages according to specific criteria, as well as determining where the messages are logged. The logger command was covered, which allows the system administrator to enter ad-hoc messages into the system log files.

Key Terms

. RBAC
. Authorization
. Rights profile
. Role
. Execution profile
. RBAC databases (know about all four)
. syslog
. logger
. svcadm command

Apply Your Knowledge

Exercise 4.1    Creating a User and a Role

In this exercise, you'll create a new role named admin1 and a profile called Shutdown. A user account trng1 will be created and have the admin1 role assigned to it. The user will then assume the role and execute a privileged command to shut down the system.

Estimated time: 20 minutes

To create a user and a role, follow these steps:

1. Create the role named admin1:
# roleadd -u 2000 -g 10 -d /export/home/admin1 -m admin1<cr>
# passwd admin1<cr>
You are prompted to enter the password twice.

2. Verify that the entry has been made to the passwd, shadow, and user_attr files:
# more /etc/passwd<cr>
# more /etc/shadow<cr>
# more /etc/user_attr<cr>

3. Create a profile to allow the user to shut down a system. Edit the /etc/security/prof_attr file and enter the following line:
Shutdown:::Permit system shutdown:
Save and exit the file.

4. Add the Shutdown and All profiles to the role:
# rolemod -P Shutdown,All admin1<cr>
The Shutdown profile will be added to the role.

5. Verify that the changes have been made to the user_attr database:
# more /etc/user_attr<cr>

6. Create the user account and assign it access to the admin1 role:
# useradd -u 3000 -g 10 -d /export/home/trng1 -m -s /bin/ksh -R admin1 trng1<cr>

7. Assign a password to the new user account:
# passwd trng1<cr>
You are prompted to enter the password twice.

8. Assign commands to the Shutdown profile. Edit the /etc/security/exec_attr file and add the following line:
Shutdown:suser:cmd:::/usr/sbin/shutdown:uid=0
Save and exit the file.

9. Test the new role and user account as follows:
a. Log in as trng1.
b. List the roles that are granted to you by typing the following:
$ roles<cr>
c. Use the su command to assume the role admin1:
$ su admin1<cr>
You are prompted to enter the password for the role.
d. List the profiles that are granted to you by typing the following:
$ profiles<cr>
e. Shut down the system:
$ /usr/sbin/shutdown -i 0 -g 0<cr>

Exam Questions

1. Which of the following commands is used to create a role?
❍ A. useradd
❍ B. makerole
❍ C. roleadd
❍ D. addrole

2. In Role-Based Access Control, which file contains details of the user attributes?
❍ A. /etc/security/prof_attr
❍ B. /etc/user_attr
❍ C. /etc/security/user_attr
❍ D. /etc/shadow

3. Which two statements about the roleadd command are true? (Choose two.)
❍ A. The -A option associates an account with a profile.
❍ B. roleadd uses the profile shell (pfsh) as the default shell.
❍ C. An account created with roleadd is the same as a normal login account.
❍ D. roleadd looks similar to the useradd command.

4. Which component of RBAC defines the privileged operations assigned to a profile?
❍ A. user_attr
❍ B. prof_attr
❍ C. auth_attr
❍ D. exec_attr

5. Which component of RBAC associates users and roles with authorizations and profiles?
❍ A. user_attr
❍ B. prof_attr
❍ C. auth_attr
❍ D. exec_attr

6. In the execution attributes database, which of the following is not a valid value for the attr field?
❍ A. euid
❍ B. uid
❍ C. egid
❍ D. suid

7. After creating an RBAC role, you find that the only commands that can be executed within the role are the privileged commands that you have set up. Ordinary nonprivileged commands are not available. The RBAC setup has a problem. What is the cause of this problem?
❍ A. The role's profile is not associated with the correct commands.
❍ B. The role is not associated with a correct profile.
❍ C. The role's profile is not associated with the correct authorizations.
❍ D. The access mechanism to the role is not initializing properly.
❍ E. The file identifying the privileged commands has missing entries.

8. Which option to the rolemod command appends an authorization to an existing list of authorizations?
❍ A. -A
❍ B. -P
❍ C. -a
❍ D. -o
❍ E. None

9. In which files are profiles defined? (Choose all that apply.)
❍ A. /etc/user_attr
❍ B. /etc/security/prof_attr
❍ C. /etc/security/exec_attr
❍ D. /etc/security/auth_attr

10. Which command(s) grant a user access to a role account? (Choose two.)
❍ A. roleadd
❍ B. rolemod
❍ C. useradd
❍ D. usermod

11. You want to enable a user to administer all user cron tables. This includes amending entries in any user's crontab. Given due care to system security, what should you do to enable the user to carry out this duty?
❍ A. Give the user the root password.
❍ B. Set the suid on the crontab command.
❍ C. Use RBAC to give the user an ID of root when executing the crontab command.
❍ D. Use the ACL mechanism to give the user RW access to each crontab table.
❍ E. Use RBAC to authorize the user to administer cron tables.

12. Which of the following are valid RBAC databases? (Choose three.)
❍ A. /etc/usr_attr
❍ B. /etc/user_attr
❍ C. /etc/security/exec_attr
❍ D. /etc/security/prof_attr

13. Which statements are true regarding the following line? (Choose all that apply.)

Media Restore:suser:cmd:::/usr/lib/fs/ufs/ufsrestore:euid=0

❍ A. It represents a profile in the exec_attr database.
❍ B. It represents a role definition in the user_attr database.
❍ C. It represents a profile in the prof_attr database.
❍ D. Any role that has Media Restore as a profile can execute the ufsrestore command with an effective UID of root.

14. In RBAC, which of the following is a bundling mechanism for grouping authorizations and commands with special attributes?
❍ A. Profile
❍ B. Role
❍ C. Authorization
❍ D. Group

Answers to Exam Questions

1. C. Use the roleadd command to create a role account. For more information, see the "Using RBAC" section.

2. B. /etc/user_attr contains details of the extended user attributes. For more information, see the "RBAC Components" section.

3. B, D. The roleadd command looks very similar to the useradd command, but it uses the profile shell as the default shell. For more information, see the "Using RBAC" section.

4. D. exec_attr (profile attributes database) defines the privileged operations assigned to a profile. For more information, see the "RBAC Components" section.

5. A. user_attr (extended user attributes database) associates users and roles with authorizations and profiles. For more information, see the "RBAC Components" section.

6. D. Six valid keys exist: euid, uid, egid, gid, privs, and limitprivs. For more information, see the "RBAC Components" section.

7. B. If a role is not associated with a correct profile, the only commands that can be executed within the role are the privileged commands that you have set up. Ordinary nonprivileged commands are unavailable. For more information, see the "Using RBAC" section.

8. E. The rolemod command does not add to the existing authorizations; it replaces any existing authorization setting. For more information, see the "Using RBAC" section.

9. B, C. /etc/security/prof_attr (rights profile attributes database) defines profiles, lists the profile's assigned authorizations, and identifies the associated help file. /etc/security/exec_attr (profile attributes database) defines the privileged operations assigned to a profile. For more information, see the "RBAC Components" section.

10. C, D. Use the roleadd command to create a role account. If you are creating a new user account, use the useradd command with the -R option to assign the role to the new user account. Then, to assign the role to an existing user account, use the usermod command with the -R option. For more information, see the "Using RBAC" section.

11. E. To enable a user to administer all user cron tables, configure RBAC to authorize the user to administer cron tables. For more information, see the "Using RBAC" section.

12. B, C, D. The three valid RBAC databases are /etc/user_attr, /etc/security/exec_attr, and /etc/security/prof_attr. For more information, see the "RBAC Components" section.

13. A, D. The following entry in the exec_attr database represents a profile named Media Restore: Media Restore:suser:cmd:::/usr/lib/fs/ufs/ufsrestore:euid=0. Any role that has Media Restore as a profile can execute the ufsrestore command with an effective UID of root. For more information, see the "RBAC Components" section.

14. A. Execution profiles are bundling mechanisms for grouping authorizations and commands with special attributes. For more information, see the "Using RBAC" section.

Suggested Reading and Resources

Solaris 10 Documentation CD: "Security Services" and "System Administration Guide: Advanced Administration" manuals.

Solaris 10 documentation set: "Security Services" and "System Administration Guide: Advanced Administration" books in the System Administration collection, at http://docs.sun.com.


FIVE

Naming Services

Objectives

The following test objectives for exam CX-310-202 are covered in this chapter:

Explain naming services (DNS, NIS, NIS+, and LDAP) and the naming service switch file (database sources, status codes, and actions).

. This chapter describes the name services available in Solaris 10 so that you can identify the appropriate name service to use for your network. The name services in Solaris help to centralize the shared information on your network. The name service switch file /etc/nsswitch.conf is used to direct requests to the correct name service in use on the system or network. This chapter describes how to select and configure the correct file for use with the available naming services.

Configure, stop and start the Name Service Cache Daemon (nscd) and retrieve naming service information using the getent command.

. This chapter describes the use of the Name Service Cache Daemon (nscd), which speeds up queries of the most common data, and the getent command to retrieve naming service information from specified databases.

Configure the NIS domain: Build and update NIS maps, manage the NIS master and slave server, configure the NIS client, and troubleshoot NIS for server and client failure messages.

. The NIS name service is covered along with what a domain is and which processes run to manage the domain from a master server, slave server, and client perspective.

Explain NIS and NIS security including NIS namespace information, domains, processes, securenets, and password.adjunct.

. This chapter also discusses NIS security.

Configure naming service clients during install, configure the DNS client, and set up the LDAP client (client authentication, client profiles, proxy accounts, and LDAP configurations) after installation.

. This chapter describes how to configure a DNS client and an LDAP client. It assumes, however, that a DNS server and an LDAP server have already been configured elsewhere. This chapter shows how to set up a client using the LDAP and DNS Naming Services.
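Each line of the name service switch file names a database and the ordered list of sources to consult for it, for example "hosts: files dns". A minimal illustrative sketch of that layout (parse_switch_line is a hypothetical helper, not Solaris code, and it ignores bracketed status/action clauses such as [NOTFOUND=return]):

```python
# Break an nsswitch.conf-style line into its database and the ordered list
# of sources the resolver would consult.
def parse_switch_line(line):
    database, _, sources = line.partition(":")
    return database.strip(), sources.split()

db, sources = parse_switch_line("hosts:      files dns")
print(db, sources)  # hosts ['files', 'dns']
```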

NIS provides a number of default maps, which also are examined. This chapter describes how to configure and manage an NIS domain, including setting up an NIS master server, an NIS slave server, and an NIS client, along with the failure messages that can be encountered both on a server and a client.

Outline

Introduction
Name Services Overview
The Name Service Switch File
  /etc Files
  DNS
  NIS
    The Structure of the NIS Network
      Determining How Many NIS Servers You Need
      Determining Which Hosts Will Be NIS Servers
    Information Managed by NIS
    Planning Your NIS Domain
    Configuring an NIS Master Server
      Creating the Master passwd File
      Creating the Master Group File
      Creating the Master hosts File
      Creating Other Master Files
      Preparing the Makefile
      Setting Up the Master Server with ypinit
      Starting and Stopping NIS on the Master Server
    Setting Up NIS Clients
    Setting Up NIS Slave Servers
    Creating Custom NIS Maps
  NIS Security
    The passwd.adjunct Map
    The securenets File
  Troubleshooting NIS
    Binding Problems
    Server Problems
The getent Command
Name Service Cache Daemon (nscd)
Lightweight Directory Access Protocol (LDAP)
  Sun Java System Directory Server
  Setting Up the LDAP Client
  Modifying the LDAP Client
  Listing the LDAP Client Properties
  Uninitializing the LDAP Client
Configuring the DNS Client
NIS+
  Hierarchical Namespace
  NIS+ Security
    Authentication
    Authorization
Summary
Key Terms
Apply Your Knowledge
  Exercises
  Exam Questions
  Answers to Exam Questions
  Suggested Reading and Resources

Study Strategies

The following strategies will help you prepare for the test:

. As you study this chapter, be prepared to state the purpose of a name service and the type of information it manages. Be prepared to describe the characteristics of each naming service, compare their functionality, and identify the correct name service switch file associated with a naming service.

. NIS is covered in depth as the main naming service. The exam focuses mainly on NIS, with only a few questions on the other name services, although you have to know how to configure LDAP and DNS clients. Be sure that you understand how to configure NIS master servers, slave servers, and clients. You'll need to understand entries in the NIS name service switch file.

. You'll need at least two networked Solaris systems to practice the examples and step-by-step exercises. We highly recommend that you practice the tasks until you can perform them from memory. See if you can make use of an existing LDAP or DNS server to practice client commands.

. Also, be sure you can describe each command we've covered in this chapter, specifically the ones we've used as examples. On the exam you will be asked to match a command or term with the appropriate description.

. Finally, study the terms provided near the end of this chapter in the "Key Terms" section.

Introduction

This chapter concentrates mainly on how to configure and administer the servers and clients in an NIS (Network Information Service) domain. NIS is a huge topic that could potentially span several volumes. The purpose of this chapter is to prepare you for questions regarding NIS that might appear on the exam. We also want to provide an overview of NIS, complete enough so that you are equipped to set up a basic NIS network and understand its use.

DNS and LDAP are also introduced in this chapter (LDAP is expected to replace NIS and NIS+ in the future). This chapter shows how to set up a client using the LDAP and DNS Naming Services. A brief overview of NIS+, originally designed as a replacement for NIS, is included in this chapter, as it is not specifically tested in the exam other than to explain what it is. It is included here for background information and comprehensiveness, but you should note that Sun does not intend to support this name service in future releases of the Solaris operating environment.

Name Services Overview

Name services store information in a central location that users, systems, and applications must be able to access to communicate across the network. Information is stored in files, maps, or database tables, and is therefore centrally located. Without a central name service, each system would have to maintain its own copy of this information. Therefore, centrally locating this data makes it easier to administer large networks. The examples provided in this book relate to local area networks, where a DNS server would contain host information relating to the local environment.

NOTE: DNS exception
The DNS name service can be thought of as an exception when considering its global nature because information is stored in hierarchical root servers and in many other servers around the world. The exception applies when the DNS server is connected to the Internet and is part of the global DNS namespace.

A name service enables centralized management of host files so that systems can be identified by common names instead of by numeric addresses. This simplifies communication because users do not have to remember to enter cumbersome numeric addresses such as 129.44.3.1.

Addresses are not the only network information that systems need to store. They also need to store security information, email addresses, information about their Ethernet interfaces, network services, groups of users allowed to use the network, services offered on the network, and so on. As networks offer more services, the list grows. As a result, each system might need to keep an entire set of files similar to /etc/hosts.

As this information changes, administrators must keep it current on every system in the network. In a small network, without a name service, this is simply tedious, but on a medium or large network, the job becomes not only time-consuming but also nearly unmanageable. A name service solves this problem. It stores network information on servers and provides the information to clients that ask for it.

The information handled by a name service includes, but is not limited to, the following:

. System (host) names and addresses
. User names
. Passwords
. Groups
. Automounter configuration files (auto.master, auto.home)
. Access permissions and RBAC database files

The Solaris 10 release provides the name services listed in Table 5.1.

Table 5.1  Solaris 10 Name Services

Name Service   Description
/etc files     The original UNIX naming system
NIS            The Network Information Service
NIS+           The Network Information Service Plus (NIS+ is being dropped from future Solaris releases; NIS+ users are recommended to migrate to LDAP)
DNS            The Domain Name System
LDAP           Lightweight Directory Access Protocol

The Name Service Switch File

The name service switch file controls how a client machine (or application) obtains network information. It is a file called nsswitch.conf, which is stored in each system's /etc directory. The name service switch is often simply referred to as "the switch." The switch determines which naming services an application uses to obtain naming information, and in what order.

The name service switch file coordinates the usage of the different naming services and has the following roles:

. It determines which sources will be used to resolve names of other hosts on the network. This can be a single source or multiple sources. All sources are searched until the information is found.
. It is used to determine how user logins and passwords are resolved at login.
. It contains the information that the client system needs to locate user authorizations and profiles.

Whatever name service you choose, you'll find templates that can be used as the nsswitch.conf file in every system's /etc directory. Select the appropriate name service switch template, copy it to nsswitch.conf, and customize it as required. The templates are described in Table 5.2.

Table 5.2  Name Service Switch File Templates

Name               Description
nsswitch.files     Use this template when local files in the /etc directory are to be used and no name service exists. Sets up the name service to search the local /etc files for all entries.
nsswitch.nis       Uses the NIS database as the primary source of all information except the passwd, group, automount, aliases, services, project, auth_attr, and prof_attr maps. These are directed to use the local /etc files first and then the NIS databases. The printers map searches local user files first, then /etc files, and the NIS database last.
nsswitch.nisplus   Uses the NIS+ database as the primary source of all information except the passwd, group, automount, aliases, services, project, auth_attr, and prof_attr tables. These are directed to use the local /etc files first and then the NIS+ databases. The printers map searches local user files first, then /etc files, and the NIS+ database last.
nsswitch.dns       Searches the local /etc files for all entries. If, for example, a host entry is not located in the /etc/hosts file, the hosts entry is directed to use DNS for lookup.
nsswitch.ldap      Uses LDAP as the primary source of all information except the passwd, group, automount, aliases, services, project, auth_attr, and prof_attr tables. These are directed to use the local /etc files first and then the LDAP databases. The search sequence for the tnrhtp and tnrhdb databases is local /etc files first and the ldap databases last.

When you install Solaris 10, the template file matching the name service you chose during software installation is copied to /etc/nsswitch.conf. This template file contains the default switch configurations used by the chosen naming service. If, during software installation, you select "none" as the default name service, the system does not use any naming service; /etc/nsswitch.conf is created from nsswitch.files, and the local /etc files are used.

The default /etc/nsswitch.files template looks like this:

# /etc/nsswitch.files:
#
# An example file that could be copied over to /etc/nsswitch.conf; it
# does not use any naming service.
#
# "hosts:" and "services:" in this file are used only if the
# /etc/netconfig file has a "-" for nametoaddr_libs of "inet" transports.

passwd:     files
group:      files
hosts:      files
ipnodes:    files
networks:   files
protocols:  files
rpc:        files
ethers:     files
netmasks:   files
bootparams: files
publickey:  files
# At present there isn't a 'files' backend for netgroup;
# the system will figure it out pretty quickly,
# and won't use netgroups at all.
netgroup:   files
automount:  files
aliases:    files
services:   files
sendmailvars: files

printers:   user files
auth_attr:  files
prof_attr:  files
project:    files

If you decide to use a different name service after software installation, you can move the correct switch file into place manually. For example, if you start using NIS, copy /etc/nsswitch.nis to /etc/nsswitch.conf as follows:

# cp /etc/nsswitch.nis /etc/nsswitch.conf<cr>

The default /etc/nsswitch.nis file looks like this:

# /etc/nsswitch.nis:
#
# An example file that could be copied over to /etc/nsswitch.conf; it
# uses NIS (YP) in conjunction with files.
#
# "hosts:" and "services:" in this file are used only if the
# /etc/netconfig file has a "-" for nametoaddr_libs of "inet" transports.
#
# NIS service requires that svc:/network/nis/client:default be enabled
# and online.
#
# the following two lines obviate the "+" entry in /etc/passwd and /etc/group.
passwd:     files nis
group:      files nis

# consult /etc "files" only if nis is down.
hosts:      nis [NOTFOUND=return] files
# Note that IPv4 addresses are searched for in all of the ipnodes databases
# before searching the hosts databases.
ipnodes:    nis [NOTFOUND=return] files

networks:   nis [NOTFOUND=return] files
protocols:  nis [NOTFOUND=return] files
rpc:        nis [NOTFOUND=return] files
ethers:     nis [NOTFOUND=return] files
netmasks:   nis [NOTFOUND=return] files
bootparams: nis [NOTFOUND=return] files
publickey:  nis [NOTFOUND=return] files

netgroup:   nis

automount:  files nis
aliases:    files nis

# for efficient getservbyname() avoid nis
services:   files nis
printers:   user files nis
auth_attr:  files nis
prof_attr:  files nis
project:    files nis

Each line of the /etc/nsswitch.nis file identifies a particular type of network information, such as host, password, and group, called databases, followed by one or more sources, such as NIS maps, the DNS hosts table, or the local /etc files, and the order in which the sources are to be searched. The source is where the client looks for the network information, such as local files or NIS. As shown in the previous nsswitch.nis template file, the system should first look for the passwd information in the /etc/passwd file. Then, if it does not find the login name there, it needs to query the NIS server. Table 5.3 lists valid sources that can be specified in this file.

Table 5.3  Name Service Sources

Source    Description
files     Refers to the client's local /etc files.
nisplus   Refers to an NIS+ table.
nis       Refers to an NIS table.
user      Refers to the ${HOME}/.printers file.
dns       Applies only to the hosts entry.
ldap      Refers to the LDAP directory.
compat    Supports an old-style + syntax that used to be used in the passwd and group information.

When the naming service searches a specified source, the source returns a status code. Also, the name service switch file can contain action values for several of the entries. The status codes are described in Table 5.4.

Table 5.4  Name Service Source Status Codes

Source      Description
SUCCESS     The requested entry was found.
UNAVAIL     The source was unavailable.
NOTFOUND    The source contains no such entry.
TRYAGAIN    The source returned an "I am busy, try later" message.
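Lookups that go through the switch can be exercised from the command line with the getent command, which is covered in depth later in this chapter. A quick sanity check, assuming a system whose passwd and hosts databases resolve through the switch:

```shell
# getent resolves a key through the sources configured for the named
# database in nsswitch.conf, so its answer is what applications will see.
getent passwd root        # follows the passwd sources (for example: files nis)
getent hosts localhost    # follows the hosts sources
```

Because getent simply returns whatever the configured sources return, it is a convenient way to verify a switch configuration without writing a test program.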

For each status code, two actions are possible:

. Return: Stop looking for an entry.
. Continue: Try the next source.

The default actions are as follows:

SUCCESS = return
UNAVAIL = continue
NOTFOUND = continue
TRYAGAIN = continue

Normally, a success indicates that the search is over, and an unsuccessful result indicates that the next source should be queried. But sometimes you want to stop searching when an unsuccessful search result is returned. For example, the following entry in the nsswitch.nis template states that only the NIS hosts table in the NIS map is searched:

hosts: nis [NOTFOUND=return] files

If the NIS map has no entry for the host lookup, the system would not reference the local /etc/hosts file. Remove the [NOTFOUND=return] entry if you want to search the NIS hosts table and the local /etc/hosts file.

NOTE: NOTFOUND=return
The next source in the list is searched only if NIS is down or has been disabled.

/etc Files

/etc files are the traditional UNIX way of maintaining information about hosts, users, passwords, groups, and automount maps, to name just a few. These files are text files located on each individual system that can be edited using the vi editor or the text editor within CDE. Each file needs to be individually maintained, and on a large network it can become difficult to maintain all these files and keep them in sync between each system. As IP addresses change, and users' accounts are added and deleted, keeping every copy current can be a difficult task on a large, changing network. The traditional approach to maintaining this information had to change; therefore, the following name services were introduced.
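The status codes and actions above can be modeled with a short script. This is a toy sketch, not Solaris code: the two plain-text files stand in for an NIS map and /etc/hosts, and the hostnames are made-up sample data.

```shell
# Model of how the switch walks its sources: a hit returns immediately
# (SUCCESS=return); on a miss, NOTFOUND=continue tries the next source,
# while NOTFOUND=return stops the search, as in "nis [NOTFOUND=return] files".
lookup() {  # lookup <key> <action-on-notfound> <file>...
    key=$1; action=$2; shift 2
    for src in "$@"; do
        val=$(awk -v k="$key" '$1==k {print $2; exit}' "$src")
        if [ -n "$val" ]; then echo "$val"; return 0; fi   # SUCCESS=return
        [ "$action" = "return" ] && return 1               # NOTFOUND=return
    done                                                   # NOTFOUND=continue
    return 1
}

d=$(mktemp -d)
printf 'hosta 192.168.1.1\n' > "$d/nis"      # stand-in for the NIS hosts map
printf 'hostb 192.168.1.2\n' > "$d/files"    # stand-in for /etc/hosts

r1=$(lookup hostb continue "$d/nis" "$d/files")               # falls through
r2=$(lookup hostb return "$d/nis" "$d/files" || echo "not found")  # stops early
echo "$r1"
echo "$r2"
```

With the default continue action the second source answers; with NOTFOUND=return the lookup fails even though the entry exists in the second source, which is exactly the hosts behavior described above.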

NIS

NIS, formerly called the Yellow Pages (YP), is a distributed database system that lets the system administrator administer the configuration of many hosts from a central location. Common configuration information, which would have to be maintained separately on each host in a network without NIS, can be stored and maintained in a central location and then propagated to all the nodes in the network. NIS stores information about workstation names and addresses, users, the network itself, and network services. This collection of network information is referred to as the NIS namespace.

NOTE: YP to NIS
As stated, NIS was formerly known as Sun Yellow Pages (YP). The functionality of the two remains the same; only the name has changed.

Before beginning the discussion of the structure of NIS, you need to be aware that the NIS administration databases are called maps.

The Structure of the NIS Network

The systems within an NIS network are configured in the following ways:

. Master server
. Slave servers
. Clients of NIS servers

The center of the NIS network is the NIS master server. An NIS domain is a collection of systems that share a common set of NIS maps. Each NIS domain must have one, and only one, master server. The system designated as master server contains the set of maps that you, the NIS administrator, create and update as necessary. After the NIS network is set up, any changes to the maps must be made on the master server. The master server should be a system that can handle the additional load of propagating NIS updates with minimal performance degradation.

In addition to the master server, you can create backup servers, called NIS slave servers, to take some of the load off the master server and to substitute for the master server if it goes down. A slave server has a complete copy of the master set of NIS maps. If you create an NIS slave server, the maps on the master server are copied to the slave server. If a change is made to a map on the master server, the updates are propagated among the slave servers. The existence of slave servers lets the system administrator evenly distribute, depending on the total number of clients, the load that results from answering NIS requests. It also minimizes the impact of a server becoming unavailable.

The master copies of the maps are located on the NIS master server, in the directory /var/yp/<domainname>, in which <domainname> is the chosen name for your own domain. Each slave server has an identical directory containing the same set of maps. Under the <domainname> directory, each map is stored as two files: <mapname>.dir and <mapname>.pag.

As mentioned earlier, the set of maps shared by the servers and clients is called the NIS domain. Normally, an NIS master server supports only one NIS domain; however, it can be configured to support multiple domains. A host can be a slave server for multiple domains; a master server for one domain might be a slave server for another domain. A client, however, belongs to only one domain.

Any system can be an NIS client, but only systems with disks should be NIS servers. Servers are also clients of themselves. Typically, all the hosts in the network, including the master and slave servers, are NIS clients. If a process on an NIS client requests configuration information, it calls NIS instead of looking in its local configuration files. For group and password information and mail aliases, the /etc files might be consulted first, and then NIS might be consulted if the requested information is not found in the /etc files. Doing this, for example, allows each physical system to have a separate root account password.

When a client starts up, it broadcasts a message to find the nearest server that serves its domain. Any server that has the set of maps for the client's domain, whether it's a master or a slave server, can answer the request. The client "binds" to the first server that answers its request, and that server then answers all its NIS queries. Solaris 10 does not require the server to be on the same subnet, but it is faster and more resilient to do so.
Determining How Many NIS Servers You Need

The following guidelines can be used to determine how many NIS servers you need in your domain:

. You should put at least one server on each subnet in your domain. In general, each subnet should have enough servers to accommodate the clients on that subnet. Although it isn't a requirement, it's a good idea to distribute servers appropriately among client networks.

. The number of NIS clients a server can handle is limited by the physical hardware specification and current load of the server. A fast, lightly loaded server can easily support hundreds of NIS clients, while a slower, heavily loaded database server, for example, would struggle to support 50 clients.

Determining Which Hosts Will Be NIS Servers

Determine which systems on your network will be NIS servers as follows:

. Choose servers that are reliable and highly available.

. Choose fast servers that are not used for CPU-intensive applications. Do not use gateways or terminal servers as NIS servers.

You might even see situations where the master and slave servers are running in Solaris zones, sharing the hardware with other virtual servers, but this is a topic for another time.

Information Managed by NIS

As discussed, NIS stores information in a set of files called maps. Maps were designed to replace UNIX /etc files, as well as other configuration files. NIS maps are two-column tables. One column is the key, and the other column is the information value related to the key. NIS finds information for a client by searching through the keys. Some information is stored in several maps because each map uses a different key. For example, the names and addresses of systems are stored in two maps: hosts.byname and hosts.byaddr. If a server has a system's name and needs to find its address, it looks in the hosts.byname map. If it has the address and needs to find the name, it looks in the hosts.byaddr map.

Maps for a domain are located in each server's /var/yp/<domainname> directory. For example, the maps that belong to the domain pyramid.com are located in each server's /var/yp/pyramid.com directory.
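The two-map arrangement can be illustrated with ordinary text files. This is a toy sketch, not the real ypcat/makedbm machinery (real maps are ndbm files), and the host names and addresses are made-up sample data:

```shell
# Why NIS keeps hosts.byname AND hosts.byaddr: the same data is stored
# twice under different keys, so a lookup in either direction is a
# single key search in column 1.
d=$(mktemp -d)
printf 'pyramid1 192.168.0.10\npyramid2 192.168.0.11\n' > "$d/hosts.byname"
awk '{print $2, $1}' "$d/hosts.byname" > "$d/hosts.byaddr"    # re-key by address

addr=$(awk '$1=="pyramid2" {print $2}' "$d/hosts.byname")     # name -> address
name=$(awk '$1=="192.168.0.10" {print $2}' "$d/hosts.byaddr") # address -> name
echo "$addr $name"
```

Storing the data twice trades disk space for lookup speed, which is the same design choice the real maps make.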

An NIS Makefile is stored in the /var/yp directory of the NIS server at installation time. If you run the /usr/ccs/bin/make command in that directory, makedbm creates or modifies the default NIS maps from the input files. For example, an input file might be /etc/hosts. Issue the following commands to create the NIS map files:

# cd /var/yp<cr>
# /usr/ccs/bin/make<cr>

NOTE: Generate maps on the master server only
Always make the maps on the master server and never on a slave server. If you run make on a slave server, the maps are generated from data in the slave server's local files and are inconsistent with the rest of the domain. Additionally, NIS clients that are bound to the slave server will query inconsistent data and receive unexpected results.

Solaris provides a default set of NIS maps. You might want to use all or only some of these maps. Additionally, if you install other software products, NIS can also use whatever maps you create or add. The default maps are described in Table 5.5, including the corresponding file that is used to create each of them. Creating NIS maps is described in more detail later in this chapter in the "Configuring an NIS Master Server" section.

Table 5.5  Default NIS Maps

Map Name        Admin File                 Description
ageing.byname   /etc/shadow                Contains password aging information.
audit_user      /etc/security/audit_user   Contains per user auditing preselection data.
auth_attr       /etc/security/auth_attr    Contains the authorization description database, part of RBAC.
auto.home       /etc/auto_home             Automounter file for home directories.
auto.master     /etc/auto_master           Master automounter map.
bootparams      /etc/bootparams            Contains the pathnames that clients need during startup: root, swap, and possibly others.
ethers.byaddr   /etc/ethers                Contains system names and Ethernet addresses. The Ethernet address is the key in the map.
ethers.byname   /etc/ethers                Contains system names and Ethernet addresses. The system name is the key.

Table 5.5  Default NIS Maps (continued)

Map Name               Admin File                Description
exec_attr              /etc/security/exec_attr   Contains execution profiles, part of RBAC.
group.adjunct.byname   /etc/group                C2 security option for group files that use passwords. The group name is the key.
group.bygid            /etc/group                Contains group security information. The GID (group ID) is the key.
group.byname           /etc/group                Contains group security information. The group name is the key.
hosts.byaddr           /etc/hosts                Contains the system name and IP address. The IP address is the key.
hosts.byname           /etc/hosts                Contains the system name and IP address. The system (host) name is the key.
ipnodes.byaddr         /etc/inet/ipnodes         Contains the system name and IP address. The IP address is the key.
ipnodes.byname         /etc/inet/ipnodes         Contains the system name and IP address. The system (host) name is the key.
mail.aliases           /etc/mail/aliases         Contains aliases and mail addresses. The alias is the key.
mail.byaddr            /etc/mail/aliases         Contains mail addresses and aliases. The mail address is the key.
netgroup               /etc/netgroup             Contains the group name, username, and system name. The group name is the key.
netgroup.byhost        /etc/netgroup             Contains the group name, username, and system name. The system name is the key.
netgroup.byuser        /etc/netgroup             Contains the group name, username, and system name. The username is the key.
netid.byname           /etc/passwd               Used for UNIX-style hosts and group authentication. It contains the system name and mail address (including domain name). If a netid file is available, it is consulted in addition to the data available through the other files.

Table 5.5  Default NIS Maps (continued)

Map Name               Admin File                   Description
netmasks.byaddr        /etc/netmasks                Contains the network masks to be used with IP subnetting. The address is the key.
networks.byaddr        /etc/networks                Contains names of networks known to your system and their IP addresses. The address is the key.
networks.byname        /etc/networks                Contains names of networks known to your system and their IP addresses. The name of the network is the key.
passwd.adjunct.byname  /etc/passwd and /etc/shadow  Contains auditing shadow information and the hidden password information for C2 clients.
passwd.byname          /etc/passwd and /etc/shadow  Contains password and shadow information. The username is the key.
passwd.byuid           /etc/passwd and /etc/shadow  Contains password and shadow information. The user ID is the key.
prof_attr              /etc/security/prof_attr      Contains profile descriptions, part of RBAC.
project.byname         /etc/project                 Contains the projects in use on the network. The project name is the key.
project.bynumber       /etc/project                 Contains the projects in use on the network. The project number (ID) is the key.
protocols.byname       /etc/protocols               Contains the network protocols known to your network. The protocol is the key.
protocols.bynumber     /etc/protocols               Contains the network protocols known to your network. The protocol number is the key.
publickey.byname       /etc/publickey               Contains public or secret keys.
rpc.bynumber           /etc/rpc                     Contains the program number and the name of Remote Procedure Calls (RPCs) known to your system. The program number is the key.
services.byname        /etc/services                Lists Internet services known to your network. The key port or protocol is the key.
services.byservice     /etc/services                Lists Internet services known to your network. The service name is the key.
timezone.byname        /etc/timezone                Contains the default timezone database. The timezone name is the key.
user_attr              /etc/user_attr               Contains the extended user attributes database, part of RBAC.
ypservers              N/A                          Lists the NIS servers known to your network. It's a single-column table with the system name as the key.

The information in these files is put into NIS databases automatically when you create an NIS master server. Other system files can also be managed by NIS if you want to customize your configuration.

NIS makes updating network databases much simpler than with the /etc file system. You no longer have to change the administrative /etc files on every system each time you modify the network environment. For example, if you add a new system to a network running NIS, you only have to update the input file on the master server and run /usr/ccs/bin/make from the /var/yp directory. This process automatically updates the hosts.byname and hosts.byaddr maps. These maps are then transferred to any slave servers and made available to all the domain's client systems and their programs.

Just as you use the cat command to display the contents of a text file, you can use the ypcat command to display the values in a map. Here is the basic ypcat syntax:

ypcat [-k] <mapname>

In this case, mapname is the name of the map you want to examine. If a map is composed only of keys, as in the case of ypservers, use ypcat -k. Otherwise, ypcat prints blank lines.

You can use the ypwhich command to determine which server is the master of a particular map:

ypwhich -m <mapname>

In this case, mapname is the name of the map whose master you want to find. ypwhich responds by displaying the name of the master server. These and other NIS commands are covered in the following sections.

Planning Your NIS Domain

Before you configure systems as NIS servers or clients, you must plan the NIS domain. Each domain has a domain name, and each system shares the common set of maps belonging to that domain. Step By Step 5.1 outlines the steps for planning an NIS domain.

STEP BY STEP
5.1  Planning Your NIS Domain

1. Choose an NIS domain name. An NIS domain name can be up to 256 characters long, although much shorter names are more practical. A good practice is to limit domain names to no more than 32 characters. Domain names are case-sensitive. For convenience, you can use your Internet domain name as the basis for your NIS domain name. For example, if your Internet domain name is pdesigninc.com, you can name your NIS domain pdesigninc.com.

2. Decide which systems will be in your NIS domain.

3. Before a system can use NIS, the correct NIS domain name and system name must be set. A system's hostname is set by the system's /etc/nodename file, and the system's domain name is set by the system's /etc/defaultdomain file. These files are read at startup, and the contents are used by the uname -n and domainname commands. A sample /etc/nodename file would look like this:

# more /etc/nodename<cr>

The system responds with this:

sunfire

A sample /etc/defaultdomain file would look like this:

# more /etc/defaultdomain<cr>

The system responds with this:

pdesigninc.com

To set the domain name, you would either have to run the domainname command, entering your domain name as the argument to the command, or reboot if you have edited /etc/defaultdomain. Whichever way you choose, this must be done on the NIS servers as well as the clients.

Configuring an NIS Master Server

Before configuring an NIS master server, be sure the NIS software cluster is installed. The package names are SUNWypu and SUNWypr. Both packages are part of the standard Solaris 10 release. Use the pkginfo command to check for these packages. After you have verified that the packages are installed, you are now ready to configure your NIS master server.

The daemons that support NIS are described in Table 5.6.

Table 5.6  NIS Daemons

Daemon          Description
ypserv          This daemon is the NIS database lookup server. At least one ypserv daemon must be present on the network for the NIS service to function. The ypserv daemon's primary function is to look up information in its local database of NIS maps. If the /var/yp/ypserv.log file exists when ypserv starts up, log information is written to it (if error conditions arise).
ypbind          This daemon is the NIS binding process that runs on all client systems that are set up to use NIS. The function of ypbind is to remember information that lets all NIS client processes on a node communicate with some NIS server process.
ypxfrd          This daemon provides the high-speed map transfer. ypxfrd moves an NIS map in the default domain to the local host.
rpc.yppasswdd   This daemon handles password change requests from the yppasswd command. It changes a password entry in the passwd, shadow, and security/passwd.adjunct files.
rpc.ypupdated   This daemon updates NIS information. ypupdated consults the updaters file in the /var/yp directory to determine which NIS maps should be updated and how to change them. It creates a temporary map in the directory /var/yp/ypdomain.
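The /etc/nodename and /etc/defaultdomain files described above can be staged and checked as shown below. This sketch writes to a scratch directory rather than the real /etc, so it is safe to try on any system; sunfire and pdesigninc.com are the same sample values used in this chapter.

```shell
# Stage the two identity files NIS depends on. On a real Solaris host the
# files are /etc/nodename and /etc/defaultdomain, and you would run
# "domainname pdesigninc.com" (as root) to set the domain without a reboot.
etc=$(mktemp -d)
echo "sunfire"        > "$etc/nodename"        # system hostname
echo "pdesigninc.com" > "$etc/defaultdomain"   # NIS domain name
cat "$etc/nodename" "$etc/defaultdomain"
```

Remember that the real files are only read at startup, which is why the book offers the choice between running domainname by hand and rebooting.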

The NIS master server delivers information to NIS clients and supplies the NIS slave servers with up-to-date maps. An NIS master server holds the source files for all the NIS maps in the domain. Any changes to the NIS maps must be made on the NIS master server. Before the NIS master server is started, some of the NIS source files need to be created.

The commands that you use to manage NIS are shown in Table 5.7. We describe some of these commands in more detail later when we show examples of setting up NIS.

Table 5.7  NIS Management Commands

Utility    Description
make       This command updates NIS maps by reading the Makefile (if run in the /var/yp directory). When you run make in the /var/yp directory, makedbm creates or modifies the default NIS maps from the input files. You can use make to update all maps based on the input files or to update individual maps.
makedbm    This command creates a dbm file for an NIS map. The makedbm command takes an input file and converts it to a pair of files in ndbm format.
ypcat      This command lists data in an NIS map.
ypinit     This command builds and installs an NIS database and initializes the NIS client's (and server's) ypservers list. ypinit is used to set up an NIS client system. You must be the superuser to run this command.
yppoll     This command gets a map order number from a server. The yppoll command asks a ypserv process what the order number is and which host is the master NIS server for the named map.
yppush     This command propagates a new version of an NIS map from the NIS master server to NIS slave servers.
ypset      This command sets binding to a particular server. ypset is useful for binding a client node that is on a different broadcast network.
ypstart    This command is used to start NIS. After the host has been configured using the ypinit command, ypstart automatically determines the machine's NIS status and starts the appropriate daemons. This command, although still available, is not the recommended way to start NIS and might even have unpredictable results. NIS should be started via the Service Management Facility (SMF).
ypstop     This command is used to stop the NIS processes. This command, although still available, is not the recommended way to stop the NIS processes and might even have unpredictable results. NIS should be stopped via the Service Management Facility (SMF).
ypwhich    This command returns the name of the NIS server that supplies the NIS name services to an NIS client, or it returns the name of the master for a map.

EXAM ALERT: Identifying daemons versus commands
Make sure you are familiar with what each daemon and command does. Exam questions are frequently presented by describing the daemon or command and asking you to identify it correctly.

The basic steps for setting up an NIS master server are as follows:

. Creating the master passwd file
. Creating the master group file
. Creating the master hosts file
. Creating other master files
. Preparing the Makefile
. Setting up the master server with ypinit
. Starting and stopping NIS on the master server
. Setting up the name service switch

Each of these tasks is described in the following subsections.

Creating the Master passwd File

The first task in setting up an NIS master server is to prepare the source file for the passwd map. This file is used to create the NIS map. The source files can be located either in the /etc directory on the master server or in some other directory. Locating the source files in /etc is undesirable because the contents of the maps are then the same as the contents of the local files on the master server. This is a special problem for passwd and shadow files, because all users would have access to the master server maps and because the root password would be passed to all YP clients through the passwd map. Sun recommends that, for security reasons, the password maps should not be built from the files located in the master server's /etc directory; they should be located in a directory that can be protected from unauthorized access. For this exercise, copy all the source files from the /etc directory into the /var/yp directory.

Now, to create the passwd source file, use a copy of the /etc/passwd file on the system that becomes the master NIS server. Create a passwd file that has all the logins in it. However, to prevent unauthorized root access, the files used to build the NIS password maps should not contain an entry for root, so be careful with this source file. The password files used to build the passwd maps should have the root entry removed from them. Step By Step 5.2 shows you how to create the passwd source file.

Because the source files are located in a directory other than /etc, modify the Makefile in /var/yp by changing the DIR=/etc line to DIR=/var/yp. Also, modify the PWDIR password macro in the Makefile to refer to the directory in which the passwd and shadow files reside by changing the line PWDIR=/etc to PWDIR=/var/yp.
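The two Makefile edits just described (DIR and PWDIR) can also be scripted with sed. This is a minimal sketch run against a throwaway copy rather than the real /var/yp/Makefile, and the two default lines are an assumption about your Makefile's exact spacing:

```shell
# Work on a scratch copy; in practice you would edit /var/yp/Makefile itself.
MK=$(mktemp)
printf 'DIR =/etc\nPWDIR =/etc\n' > "$MK"   # assumed defaults from a stock Makefile

# Point both macros at /var/yp, tolerating whitespace before the '='.
sed -e 's|^DIR[ ]*=.*|DIR =/var/yp|' \
    -e 's|^PWDIR[ ]*=.*|PWDIR =/var/yp|' "$MK" > "$MK.new"
cat "$MK.new"
```

Because the PWDIR pattern is anchored at the start of the line, the first expression cannot accidentally rewrite the PWDIR line.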

STEP BY STEP 5.2 Creating the Password Source File

1. Copy the /etc/passwd file from each host in your network to the /var/yp directory on the host that will be the master server. Name each copy /var/yp/passwd.<hostname>, in which <hostname> is the name of the host it came from.

2. Concatenate all the passwd files into a temporary passwd file:
# cd /var/yp<cr>
# cat passwd passwd.hostname1 passwd.hostname2 ... > passwd.temp<cr>

3. Issue the sort command to sort the temporary passwd file by username, and then pipe it to the uniq command to remove duplicate entries. (Sort into a new file first; redirecting the pipeline straight back into passwd.temp would truncate the file before sort reads it.)
# sort -t: -k1,1 /var/yp/passwd.temp | uniq > /var/yp/passwd.uniq<cr>
# mv /var/yp/passwd.uniq /var/yp/passwd.temp<cr>

4. Examine /var/yp/passwd.temp for duplicate usernames that were not caught by the previous uniq command. This could happen if a user login occurs twice, but the lines are not exactly the same. If you find multiple entries for the same user, edit the file to remove the redundant ones.

5. Issue the following command to sort the temporary passwd file by UID:
# sort -o /var/yp/passwd.temp -t: -k3n,3 /var/yp/passwd.temp<cr>

6. Examine /var/yp/passwd.temp for duplicate UIDs once more. If you find multiple entries with the same UID, edit the file to change the UIDs so that no two users have the same UID.

7. Remove the root login from the /var/yp/passwd.temp file. If you notice that the root login occurs more than once, remove all of its entries.

8. After you have a complete passwd file with no duplicates, move /var/yp/passwd.temp (the sorted, edited file) to /var/yp/passwd. This file is used to generate the passwd map for your NIS domain. Remove all the /var/yp/passwd.<hostname> files from the master server.

NOTE Duplicate UIDs and usernames: You have to resolve duplicate UIDs (where the same UID has been used on more than one system) and usernames (where a user has previously had home directories on each system). Be sure each user in your network has a unique username and UID (user ID). The NIS-managed UID has ownership of any duplicated UIDs' files unless they are changed accordingly to match modifications made to this file.

NOTE Sorting the passwd file: NIS does not require that the passwd file be sorted in any particular way. Sorting the passwd file simply makes it easier to find duplicate entries.
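The concatenate, dedupe, and re-sort sequence from Step By Step 5.2 can be rehearsed safely on sample data. This self-contained sketch uses a temp directory as a stand-in for /var/yp, and the two per-host files and their entries are hypothetical:

```shell
# Rehearse steps 2, 3, 5, and 7 in a scratch directory (stand-in for /var/yp).
cd "$(mktemp -d)" || exit 1

# Hypothetical per-host copies from step 1; note the root entry and a duplicate:
printf 'root:x:0:0:Super-User:/:/sbin/sh\nalice:x:100:10::/home/alice:/bin/sh\n' > passwd.host1
printf 'alice:x:100:10::/home/alice:/bin/sh\nbob:x:101:10::/home/bob:/bin/sh\n'  > passwd.host2

cat passwd.host1 passwd.host2 > passwd.temp                # step 2: concatenate
grep -v '^root:' passwd.temp > p && mv p passwd.temp       # step 7 (done early): drop root
sort -t: -k1,1 passwd.temp | uniq > p && mv p passwd.temp  # step 3: dedupe by username
sort -t: -k3n,3 -o passwd.temp passwd.temp                 # step 5: order by UID for review
cat passwd.temp
```

After the run, passwd.temp holds one line each for alice and bob, with the root entry and the exact duplicate removed.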

Creating the Master Group File

Just like creating a master /var/yp/passwd file, the next task is to prepare one master /var/yp/group file to be used to create an NIS map. Step By Step 5.3 shows you how to create the master group file.

STEP BY STEP 5.3 Creating the Master Group File

1. Copy the /etc/group file from each host in your NIS domain to the /var/yp directory on the host that will be the master server. Name each copy /var/yp/group.<hostname>, in which <hostname> is the name of the host it came from.

2. Concatenate all the group files, including the master server's group file, into a temporary group file:
# cd /var/yp<cr>
# cat group group.hostname1 group.hostname2 ... > group.temp<cr>

3. Issue the following command to sort the temporary group file by group name:
# sort -o /var/yp/group.temp -t: -k1,1 /var/yp/group.temp<cr>

4. Examine /var/yp/group.temp for duplicate group names. If a group name appears more than once, merge the groups that have the same name into one group and remove the duplicate entries.

5. Issue the following command to sort the temporary group file by GID:
# sort -o /var/yp/group.temp -t: -k3n,3 /var/yp/group.temp<cr>

6. Examine /var/yp/group.temp for duplicate GIDs. If you find multiple entries with the same GID, edit the file to change the GIDs so that no two groups have the same GID.

7. Move /var/yp/group.temp (the sorted, edited file) to /var/yp/group. This file is used to generate the group map for your NIS domain. Remove the /var/yp/group.<hostname> files from the master server.

NOTE Duplicate GIDs: You have to resolve duplicate GIDs (where the same GID has been used on more than one system) and group names (where a group has previously existed on each system). The NIS-managed GID will have group ownership of any duplicated GIDs' files unless they are changed accordingly to match modifications made to this file.

NOTE Sorting the group file: NIS does not require that the group file be sorted in any particular way. Sorting the group file simply makes it easier to find duplicate entries.
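Rather than eyeballing the sorted file for step 6, you can have awk flag any GID that appears on more than one line. The sample group entries here are hypothetical:

```shell
# Flag any GID that occurs on more than one line of the merged group file.
GRP=$(mktemp)
printf 'staff:x:10:alice\nsysadmin:x:14:bob\noperators:x:10:carol\n' > "$GRP"  # sample data

# seen[$3]++ is false the first time a GID appears, true on every repeat.
awk -F: 'seen[$3]++ { print "duplicate GID " $3 ": " $1 }' "$GRP"
```

The same one-liner checks usernames or UIDs in the passwd file by switching the field number.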

Creating the Master hosts File

Now create the master hosts file the same way you created the master /var/yp/passwd and /var/yp/group files (see Step By Step 5.4).

STEP BY STEP 5.4 Creating the Master hosts File

1. Copy the /etc/hosts file from each host in your NIS domain to the /var/yp directory on the host that will be the master server. Name each copy /var/yp/hosts.<hostname>, in which <hostname> is the name of the host from which it came.

2. Concatenate all the host files, including the master server's host file, into a temporary hosts file:
# cd /var/yp<cr>
# cat hosts hosts.hostname1 hosts.hostname2 ... > hosts.temp<cr>

3. Issue the following command to sort the temporary hosts file by hostname:
# sort -o /var/yp/hosts.temp -b -k2,2 /var/yp/hosts.temp<cr>

4. Examine /var/yp/hosts.temp for duplicate hostnames. If a hostname appears in multiple entries that are mapped to IP addresses on different hosts, remove all the entries but one. A hostname can be mapped to multiple IP addresses only if the IP addresses belong to different LAN cards on the same host.

5. Issue the following command to sort the temporary hosts file so that duplicate IP addresses are on adjacent lines:
# sort -o /var/yp/hosts.temp /var/yp/hosts.temp<cr>

6. Examine /var/yp/hosts.temp for duplicate IP addresses. If you need to map an IP address to multiple hostnames, include them as aliases in a single entry.

7. Examine the /var/yp/hosts.temp file for duplicate aliases. No alias should appear in more than one entry.

8. Move /var/yp/hosts.temp (the sorted, edited file) to /var/yp/hosts. This file is used to generate the hosts map for your NIS domain. Remove the /var/yp/hosts.<hostname> files from the master server.
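Step 5 works because plain sort puts duplicate IP addresses on adjacent lines; uniq -d then lists each offending address exactly once. A small rehearsal on hypothetical sample entries:

```shell
# List IP addresses that occur in more than one hosts.temp entry.
HOSTS=$(mktemp)
printf '10.1.1.7 alpha\n10.1.1.8 beta\n10.1.1.7 gamma\n' > "$HOSTS"  # sample data

# Extract the address column, sort it, and print only the duplicated values.
awk '{ print $1 }' "$HOSTS" | sort | uniq -d
```

An empty result means no address needs to be merged into a single aliased entry.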

Creating Other Master Files

The following files, which were described in Table 5.2, can also be copied to the /var/yp directory to be used as source files for NIS maps. But first be sure that they reflect an up-to-date picture of your system environment:

. /etc/auto_home
. /etc/auto_master
. /etc/bootparams
. /etc/ethers
. /etc/inet/ipnodes
. /etc/netgroup
. /etc/netmasks
. /etc/networks
. /etc/project
. /etc/protocols
. /etc/publickey
. /etc/rpc
. /etc/services
. /etc/shadow
. /etc/timezone
. /etc/user_attr
. /etc/security/audit_user
. /etc/security/auth_attr
. /etc/security/exec_attr
. /etc/security/prof_attr

Unlike the other source files, the /etc/mail/aliases file cannot be moved to another directory; this file must reside in the /etc/mail directory. Be sure that the /etc/mail/aliases source file is complete by verifying that it contains all the mail aliases that you want to have available throughout the domain.

Preparing the Makefile

After checking the source files and copying them into the source file directory, you need to convert those source files into the ndbm format maps that NIS uses. This is done automatically for you by ypinit, which uses the file Makefile located in the /var/yp directory; the ypinit script calls the program make. A default Makefile is provided for you in this directory. It contains the commands needed to transform the source files into the desired ndbm format maps. The function of the Makefile is to create the appropriate NIS maps for each of the databases listed under "all." After passing through makedbm, the data is collected in two files, mapname.dir and mapname.pag. Both files are located in the /var/yp/<domainname> directory on the master server. The Makefile builds passwd maps from the $PWDIR/passwd, $PWDIR/shadow, and $PWDIR/security/passwd.adjunct files.

Setting Up the Master Server with ypinit

The /usr/sbin/ypinit shell script sets up master and slave servers and clients to use NIS. It also initially runs make to create the maps on the master server. We describe how to use ypinit in the next section; Step By Step 5.5 shows you how to set up a master server using ypinit.

STEP BY STEP 5.5 Using ypinit to Set Up the Master Server

1. Become root on the master server and ensure that the name service receives its information from the /etc files, not from NIS, by typing the following:
# cp /etc/nsswitch.files /etc/nsswitch.conf<cr>

2. Edit the /etc/hosts file to add the name and IP address of each of the NIS servers.

3. To build new maps on the master server, type
# /usr/sbin/ypinit -m<cr>

4. ypinit prompts you for a list of other systems to become NIS slave servers. Type the name of the server you are working on, along with the names of your NIS slave servers. Enter each server on a separate line: enter the server name, and then press Enter. Do this for each server, as appropriate. Press Ctrl+D when you're finished. At this point, the entered list of servers is displayed, and you are asked if it is correct. Type y if it is correct. If the list is incorrect, type n; you are returned to the list of servers to add extra entries.

5. ypinit asks whether you want the procedure to terminate at the first nonfatal error or to continue despite nonfatal errors.

If you typed y, ypinit exits upon encountering the first problem; you can then fix the problem and restart ypinit. This procedure is recommended if you are running ypinit for the first time. If you prefer to continue, you can manually try to fix all the problems that might occur and then restart ypinit.

NOTE Nonfatal errors: A nonfatal error might be displayed if some of the map files are not present. These errors do not affect the functionality of NIS.

6. ypinit asks whether the existing files in the /var/yp/<domainname> directory can be destroyed. This message is displayed only if NIS was previously installed. You must answer yes to install the new version of NIS.

7. After ypinit has constructed the list of servers, it invokes make. The make command uses the instructions contained in the Makefile located in /var/yp. It cleans any remaining comment lines from the files you designated and then runs makedbm on them, creating the appropriate maps and establishing the name of the master server for each map.

To enable NIS as the naming service, type
# cp /etc/nsswitch.nis /etc/nsswitch.conf<cr>
This command replaces the current switch file with the default NIS-oriented one. You can edit this file as necessary. The name service switch file /etc/nsswitch.conf is described later in this chapter.

Starting and Stopping NIS on the Master Server

Now that the master maps are created, you can start the NIS daemons on the master server. To start up NIS on the master server, you need to start the ypserv process on the server and run ypbind. The daemon ypserv answers information requests from clients after looking them up in the NIS maps. This is accomplished via SMF: after you configure the NIS master server by running ypinit, the NIS server is automatically invoked to start ypserv whenever the system is started. You can also start NIS manually on the server by running the svcadm enable nis/server command from the command line, followed by svcadm enable nis/client.

EXAM ALERT Selecting the correct command option: Exam questions are often based on the syntax of the ypinit command. You might be given a scenario where you are asked to select the correct command option to initialize either a master server, a slave server, or a client. Ensure that you are completely familiar with what each command option achieves.

The recommended way to start and stop NIS is via SMF. To manually stop the NIS server processes, run the svcadm disable commands on the server as follows:
# svcadm disable nis/server<cr>
# svcadm disable nis/client<cr>

NOTE NIS and SMF: You should note that the NIS service is now managed via the Service Management Facility (SMF) and can be stopped and started using the svcadm command, as described earlier in this chapter. You can still use the ypstop and ypstart commands, but you might get unexpected results, especially as SMF could automatically restart the service if you stop it manually.

Setting Up NIS Clients

As root, you must perform four tasks to set up a system as an NIS client:

. Set the domain name on the client.
. Set up the nsswitch.conf file on the client.
. Ensure that user account information from the /etc/passwd and /etc/group files on the client has already been taken into account in the master passwd and group files. If not, refer to the earlier sections "Creating the Master passwd File" and "Creating the Master Group File" for details on how to merge existing account information into the NIS-managed maps.
. Configure the client to use NIS, as explained next.

The first step is to remove from the /etc/passwd file all the user entries that are managed by the NIS server. Also, remove entries from /etc/group, /etc/hosts, and any other network files that are now managed by NIS. Don't forget to update the /etc/shadow file.

NOTE Client home directories: Home directories that have previously existed on separate systems need to be taken into account when NIS is introduced. Without correct handling, a user's files might come under the ownership of another user, unless they are dealt with at the time of any passwd and group modifications.

After setting up the nsswitch.conf file and setting your domain name as described in the section titled "Planning Your NIS Domain," you configure each client system to use NIS by logging in as root and running the /usr/sbin/ypinit command:
# ypinit -c<cr>

You are asked to identify the NIS servers from which the client can obtain name service information. You can list one master and as many slave servers as you want. The servers that you list can be located anywhere in the domain. It is good practice to first list the servers closest (in network terms) to the system, followed by the more distant servers on the network, because the client attempts to bind to the first server on the list. Enter each server name, followed by a carriage return. When you enter a server name during the client setup, you need an entry for this hostname in the local /etc/hosts file; otherwise, you need to specify the IP address of the NIS server.

At this point, the file /var/yp/<domainname>/ypservers is populated with the list of servers you enter. This list is used each time the client is rebooted to establish a "binding" with an NIS server. An alternative method is to rename the previously mentioned file and restart NIS. This causes the client to "broadcast" over the local subnet to try to find an NIS server to bind to. If no server responds, the client is unable to use the name service until either an NIS slave server is configured on the same subnet or the list of servers is reinstated.

Test the NIS client by logging out and logging back in using a login name that is no longer in the /etc/passwd file and is managed by NIS. Test the hosts map by pinging a system that is not identified in the local /etc/hosts file. Remember, we are assuming that you're not using DNS to manage hostnames (DNS is covered later in this chapter).

Setting Up NIS Slave Servers

Your network can have one or more slave servers. Having slave servers ensures the continuity of NIS if the master server is unavailable. Before setting up an NIS slave server, you must set it up as an NIS client. After you've verified that the NIS master server is functioning properly by testing NIS on this system, you can set up the system as a slave server; see Step By Step 5.6. Before actually running ypinit to create the slave servers, you should run the domainname command on each NIS slave to be sure that the domain name is consistent with the master server; the domain name is set by adding the domain name to the /etc/defaultdomain file.

STEP BY STEP 5.6 Setting Up the NIS Slave Server

1. As root, edit the /etc/hosts file on the slave server to add the name and IP address of the NIS master server.

2. Change directories to /var/yp on the slave server.

3. To initialize the slave server as a client, type the following:
# /usr/sbin/ypinit -c<cr>
Step 3 prompts you for the hostname of the NIS master server.

The ypinit command prompts you for a list of NIS servers. Enter the name of the local slave you are working on first and then the master server, followed by the other NIS slave servers in your domain, in order, from the physically closest to the farthest (in network terms).

4. You need to determine whether ypbind is already running. Check to see if ypbind is running by typing this:
# pgrep -l ypbind<cr>
If a listing is displayed, ypbind is running. If ypbind is running, stop it by typing this:
# svcadm disable nis/client<cr>

5. Type the following to restart ypbind:
# svcadm enable nis/client<cr>

6. To initialize this system as a slave, type the following:
# /usr/sbin/ypinit -s <master><cr>
In this example, <master> is the system name of the existing NIS master server.

7. Now you can start the daemons on the slave server and begin using NIS. First, you must stop all existing yp processes by typing the following:
# svcadm disable nis/server<cr>
To start ypserv on the slave server and run ypbind, you can either reboot the server or type the following:
# svcadm enable nis/server<cr>

Repeat the procedures described in these steps for each system that you want configured as an NIS slave server.

Creating Custom NIS Maps

NIS provides a number of default maps, as we have already seen earlier in this chapter. You can also add your own map to be managed by NIS. This is a simple process in which you first create the file with a normal text editor, such as vi, and then create the map. The following example shows how to create a fictional address book map called abook from the text file /etc/abook. We assume here that the domain being used is pdesigninc.com:

# cd /var/yp<cr>
# makedbm /etc/abook pdesigninc.com/abook<cr>

The map is now created and exists in the master server's directory. You can now run such commands as ypcat to list the contents of the map. To distribute it to other slave servers, use the ypxfr command.
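makedbm's input is simply one "key value" line per entry. If you want to sanity-check a source file's lookup semantics before feeding it to makedbm, a portable awk stand-in behaves like a ypmatch-style query; the lookup helper and the sample entries here are our own illustration, not Solaris commands:

```shell
# Hypothetical abook source file: one "key value" line per entry, as makedbm expects.
ABOOK=$(mktemp)
printf 'alice alice@pdesigninc.com\nbob bob@pdesigninc.com\n' > "$ABOOK"

# Emulate a keyed map lookup: print the value whose key matches exactly.
lookup() { awk -v k="$1" '$1 == k { $1 = ""; sub(/^ /, ""); print }' "$2"; }

lookup alice "$ABOOK"
```

A key that is not present produces no output, which mirrors a failed map lookup.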

If you want to verify the contents of an NIS map, you can use the makedbm command with the -u flag. This writes the contents of the map to the screen, so redirect the output to another file if it will produce a large amount of text.

To make a new NIS map permanent, you have to add the details of the new map to the Makefile in /var/yp. Have a look at the Makefile to see how to modify it to add a new entry. When this has been done, any further changes to the new map are automatically propagated to all other NIS servers when the make command is run. This ensures that the map is updated when changes are made.

NIS Security

NIS has been traditionally insecure because the passwd map contains the encrypted passwords for all user accounts. Any user can list the contents of the passwd map, so a potential attacker could easily gather the encrypted passwords for use with a password cracking program. This issue is partially addressed in two ways: by using the passwd.adjunct file to remove encrypted passwords from the passwd map, and by using the securenets file to restrict the hosts, or networks, that can access the NIS namespace.

The passwd.adjunct Map

If you copy the contents of your shadow file to passwd.adjunct in the same directory as your passwd and shadow files (/var/yp in the examples used in this chapter), a separate map, passwd.adjunct.byname, is created. This map is accessible only by the root user, so it protects the encrypted passwords from unauthorized users. In addition to creating the file, you also have to modify the NIS Makefile (held in /var/yp) to add the passwd.adjunct entry to the "all" section.

NOTE Extra editing: The only downside of using this option is that when a new user is created or an existing user is modified, the passwd.adjunct file must be amended to correctly reflect the current shadow file. This is an overhead for the system administrator, but it should be offset against the increased security that is achieved by doing this.

The securenets File

A further enhancement to NIS security is to restrict the hosts, or networks, that can access the NIS maps. The file /var/yp/securenets achieves this. Entries in this file consist of two fields, a netmask and a network. An example securenets file is shown here:

255.255.255.0 210.100.35.0
255.255.255.0 210.100.36.0
255.255.255.0 210.100.37.0
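Each securenets line means "a host may use the service if its address ANDed with the netmask equals the network". That matching rule can be sketched portably; ip_in_net is our own helper for illustration, not a Solaris command:

```shell
# Succeed if (ip & mask) == net, with all three given in dotted-quad form.
ip_in_net() {
    ip=$1 mask=$2 net=$3
    old_ifs=$IFS; IFS=.
    set -- $ip;   a1=$1 a2=$2 a3=$3 a4=$4
    set -- $mask; m1=$1 m2=$2 m3=$3 m4=$4
    set -- $net;  n1=$1 n2=$2 n3=$3 n4=$4
    IFS=$old_ifs
    # Compare octet by octet, masking the address with the netmask.
    [ $((a1 & m1)) -eq "$n1" ] && [ $((a2 & m2)) -eq "$n2" ] &&
    [ $((a3 & m3)) -eq "$n3" ] && [ $((a4 & m4)) -eq "$n4" ]
}

# A host on 210.100.35.0/24 matches the first example entry; an outsider does not.
ip_in_net 210.100.35.17 255.255.255.0 210.100.35.0 && echo allowed
ip_in_net 192.168.1.5   255.255.255.0 210.100.35.0 || echo denied
```

A "host" entry is just the degenerate case: a netmask of 255.255.255.255 against a single address.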

This code shows that only hosts with IP addresses in the specified networks can access the NIS namespace. You can also add entries for specific hosts. The following modified securenets file was created by adding two individual hosts:

host 10.48.76.3
host 10.48.76.4
255.255.255.0 210.100.35.0
255.255.255.0 210.100.36.0
255.255.255.0 210.100.37.0

You should make sure that all NIS servers are covered by the network entries in the securenets file; otherwise, they might not be authorized. If any servers are not on these networks, you need to add individual host entries for them.

NOTE securenets: Don't fall into the trap of not allowing your own NIS servers to access the NIS namespace, particularly if you have several NIS servers configured in the domain.

The securenets file is read by the ypserv and ypxfrd processes on startup. If you make any modifications to the securenets file, you must also restart the NIS daemons to allow the changes to take effect.

Troubleshooting NIS

This section provides some details of how to troubleshoot NIS when problems occur, and the actions to take. It looks briefly at some of the errors seen on the server as well as some of the errors seen on a client.

Binding Problems

Normally, when a client fails to bind with an NIS server, one of the following has occurred:

. ypbind isn't running on the client: In this case, enter svcadm enable network/nis/client to start the process.

. The domain name is set incorrectly or not set at all: Check the contents of /etc/defaultdomain or run the domainname command. Frequently, this problem occurs because the domain name has been set manually but not entered into the file /etc/defaultdomain, so when the system is rebooted, the domain name is lost.

. No NIS server is available: This would point to a possible network problem. Check that the client has network connectivity. If only a single NIS server is present, you should check that the ypserv daemon is running. Also, check that the client's /etc/nsswitch.conf is configured correctly.
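The "domain name lost after reboot" symptom can be caught with a tiny check. check_defaultdomain is our own hypothetical helper, demonstrated here against scratch files rather than the real /etc/defaultdomain:

```shell
# Succeed and print the domain if the given file exists and is non-empty.
check_defaultdomain() {
    f=${1:-/etc/defaultdomain}
    [ -s "$f" ] || { echo "no domain name recorded in $f" >&2; return 1; }
    printf 'domain: %s\n' "$(cat "$f")"
}

good=$(mktemp); echo 'pdesigninc.com' > "$good"
bad=$(mktemp)   # empty file: mimics a domain set manually but never recorded

check_defaultdomain "$good"
check_defaultdomain "$bad" || echo "domain would be lost at reboot; record it in the file"
```

On a real client you would run it with no argument so it inspects /etc/defaultdomain itself.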

Server Problems

Problems encountered in an NIS environment normally point to network or hardware problems. If you find that you cannot connect to an NIS server, or if you are not getting any response to NIS commands, try the following:

. ping the server to make sure it is accessible across the network.
. Check that the NIS daemons are running on the server, and restart the service if necessary. You can restart the NIS server by executing svcadm restart network/nis/server.
. Run ypwhich to verify which server you are meant to be bound to, especially when several NIS servers are available.
. Check that the server isn't busy or overloaded. Use commands such as vmstat, iostat, and netstat to monitor the server for possible performance issues.

NIS+

NIS+ is similar to NIS, but with more features. NIS+ is not an extension of NIS, but a new system; it was designed to replace NIS. NIS addresses the administrative requirements of small-to-medium client/server computing networks, those with fewer than a few hundred clients, and some sites with thousands of users find NIS adequate as well. NIS+ is designed for the now-prevalent larger networks in which systems are spread across remote sites in various time zones and in which clients number in the thousands. In addition, the information stored in networks today changes much more frequently, and NIS had to be updated to handle this environment. Last but not least, systems today require a higher level of security than provided by NIS, and NIS+ addresses many security issues that NIS did not.

NOTE End of life for NIS+: It is important to note that Sun Microsystems issued an end-of-support notice for NIS+ with the release of Solaris 9, and again with the release of Solaris 10. It is likely that Solaris 10 will be the last release to contain NIS+ as a naming service. Sun recommends that users of NIS+ migrate to LDAP using the Sun Java System Directory Server. To this end, and because NIS+ is not mentioned as an objective for this exam, it is covered only briefly in this chapter.

Hierarchical Namespace

NIS+ lets you store information about workstation addresses, security, mail, Ethernet interfaces, and network services in central locations where all workstations on a network can access it. This configuration of network information is referred to as the NIS+ namespace. The NIS+ namespace is the arrangement of information stored by NIS+. The namespace can be arranged in a variety of ways to fit an organization's needs; thus, NIS+ can be arranged to manage large networks with more than one domain.

Although the arrangement of an NIS+ namespace can vary from site to site, all sites use the same structural components: directories, tables, and groups. These components are called objects, and they can be arranged into a hierarchy that resembles a UNIX file system. Although UNIX directories are designed to hold UNIX files, NIS+ directories are designed to hold NIS+ objects: other directories, tables, and groups. Directory objects form the skeleton of the namespace. When arranged in a treelike structure, they divide the namespace into separate parts, much like UNIX directories and subdirectories. The topmost directory in a namespace is the root directory. If a namespace is flat, it has only one directory: the root directory. The directory objects beneath the root directory are called directories. A namespace can have several levels of directories. When identifying the relation of one directory to another, the directory beneath is called the child directory, and the directory above is the parent. Any NIS+ directory that stores NIS+ groups is named groups_dir, and any directory that stores NIS+ system tables is named org_dir.

NIS+ Security

NIS+ security is enhanced in two ways. First, it can authenticate access to the service, so it can discriminate between access that is enabled to members of the community and other network entities. Second, it includes an authorization model that allows specific rights to be granted or denied based on this authentication.

Authentication

Authentication is used to identify NIS+ principals. An NIS+ principal can be a client user or a client workstation. An NIS+ principal might be someone who is logged in to a client system as a regular user, someone who is logged in as superuser, or any process that runs with superuser permission on an NIS+ client system. Every time a principal (user or system) tries to access an NIS+ object, the user's identity and password are confirmed and validated.

Authorization

Authorization is used to specify access rights. NIS+ authorization is the process of granting NIS+ principals access rights to an NIS+ object. Every time NIS+ principals try to access NIS+ objects, they are placed in one of four authorization classes, or categories:

. Owner: A single NIS+ principal
. Group: A collection of NIS+ principals
. World: All principals authenticated by NIS+
. Nobody: Unauthenticated principals

The NIS+ server finds out what access rights are assigned to that principal by that particular object. If the access rights match, the server answers the request. If they do not match, the server denies the request and returns an error message. Access rights are similar to file permissions. Four types of access rights exist:

. Read: The principal can read the contents of the object.
. Modify: The principal can modify the contents of the object.
. Create: The principal can create new objects in a table or directory.
. Destroy: The principal can destroy objects in a table or directory.

The NIS+ security system lets NIS+ administrators specify different read, modify, create, and destroy rights to NIS+ objects for each class. For example, a given class could be permitted to modify a particular column in the passwd table but not read that column, or a different class could be allowed to read some entries of a table but not others. Access rights are displayed as 16 characters. They can be displayed with the command nisls -l and can be changed with the command nischmod.

The implementation of the authorization scheme just described is determined by the domain's level of security. An NIS+ server can operate at one of three security levels, summarized in Table 5.8.

Table 5.8 NIS+ Server Security Levels

Security Level: Description

0: Security level 0 is designed for testing and setting up the initial NIS+ namespace. An NIS+ server running at security level 0 grants any NIS+ principal full access rights to all NIS+ objects in the domain. Level 0 is for setup purposes only, and administrators should use it only for that purpose. Regular users should not use level 0 on networks in normal operation.

1: Security level 1 uses AUTH_SYS security. This level is not supported by NIS+, and it should not be used.

2: Security level 2 is the default. It authenticates only requests that use Data Encryption Standard (DES) credentials. Requests with no credentials are assigned to the nobody class and have whatever access rights have been granted to that class. Requests that use invalid DES credentials are retried. After repeated failures to obtain a valid DES credential, requests with invalid credentials fail with an authentication error. (A credential might be invalid for a variety of reasons: the principal making the request might not be logged in on that system, the clocks might be out of sync, there might be a key mismatch, and so forth.) Security level 2 is the highest level of security currently provided by NIS+.

DNS

DNS is the name service used by the Internet and other Transmission Control Protocol/Internet Protocol (TCP/IP) networks. It was developed so that workstations on the network can be identified by common names instead of Internet addresses: DNS is a system that converts domain names to their IP addresses and vice versa. Without it, users would have to remember numbers instead of words to get around the Internet. The process of finding a computer's IP address by using its hostname as an index is referred to as name-to-address resolution, or mapping. DNS duplicates some of the information stored in the NIS or NIS+ tables, but DNS information is available to all hosts on the network. The collection of networked systems that use DNS is referred to as the DNS namespace. The DNS namespace can be divided into a hierarchy of domains; a DNS domain is simply a group of systems. Two or more name servers support each domain: the primary, secondary, or cache-only servers. Each domain must have one primary server and should have at least one secondary server to provide backup.

Configuring the DNS Client

On the client side, DNS is implemented through a set of dynamic library routines, collectively called the resolver. The resolver's function is to resolve users' queries; it is neither a daemon nor a single program but a set of dynamic library routines used by applications that need to find IP addresses given domain names. The resolver library uses the file /etc/resolv.conf, which lists the addresses of DNS servers where it can obtain its information. Normally, each DNS client system on your network has a resolv.conf file in its /etc directory. (If a client does not have a resolv.conf file, it defaults to using a server at IP address 127.0.0.1, which is the local host.) The resolver reads this /etc/resolv.conf file to find the name of the local domain and the location of domain name servers. It sets the local domain name and instructs the resolver routines to query the listed name servers for information.

Here's an example of the /etc/resolv.conf file for the machine server1:

; Sample resolv.conf file for the machine server1
domain example.com
; try local name server
nameserver 127.0.0.1
; if local name server down, try these servers
nameserver 123.45.6.1
nameserver 111.22.3.5

The first line of the /etc/resolv.conf file lists the domain name in this form:

domain <domainname>

<domainname> is the name registered with the Internet's domain name servers.

NOTE - Domain name format: No spaces or tabs are permitted at the end of the domain name. Make sure that you enter a hard carriage return immediately after the last character of the domain name.

The second line identifies the loopback name server in the following form:

nameserver 127.0.0.1

The remaining lines list the IP addresses of up to three DNS master, secondary, or cache-only name servers that the resolver should consult to resolve queries. (Do not list more than three primary or secondary servers.) Name server entries have the following form:

nameserver <IP_address>

<IP_address> is the IP address of a DNS name server. The resolver queries these name servers in the order they are listed until it obtains the information it needs.
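Because the resolver tries nameserver lines strictly in file order, the query order can be checked mechanically. The snippet below recreates the sample file in /tmp (rather than touching a live /etc/resolv.conf) and prints the servers in the order they would be queried.

```shell
# Recreate the sample resolv.conf in a scratch file, then list the
# name servers in the order the resolver will query them (file order).
cat > /tmp/resolv.conf.sample <<'EOF'
domain example.com
nameserver 127.0.0.1
nameserver 123.45.6.1
nameserver 111.22.3.5
EOF

awk '$1 == "nameserver" { print "query " ++n ": " $2 }' /tmp/resolv.conf.sample
```

The loopback server is tried first, followed by the two remote servers, matching the comments in the sample file.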

After the resolver is configured, a system can request DNS service from a name server. Name-to-address mapping occurs when a program running on your local system needs to contact a remote computer. The program most likely knows the hostname of the remote computer but might not know how to locate it, particularly if the remote system is in another network. To obtain the remote system's address, the program requests assistance from the DNS software running on your local system, which is considered a DNS client. The DNS client sends a request to a DNS name server, which maintains the distributed DNS database. The name server uses the hostname that your system sent as part of its request to find or "resolve" the IP address of the remote system. It then returns this IP address to your local system if the hostname is in its DNS database. If the hostname is not in that name server's DNS database, this indicates that the system is outside its authority, or, to use DNS terminology, outside the local administrative domain.

The files in the DNS database bear little resemblance to the NIS+ host table or even to the local /etc/hosts file, although they maintain similar information: the hostnames, IP addresses, and other information about a particular group of computers. Because maintaining a central list of domain name/IP address correspondences would be impractical, the lists of domain names and IP addresses are distributed throughout the Internet in a hierarchy of authority. If your network is connected to the Internet, a DNS server that maps the domain names in your network answers requests or forwards them to other servers on the Internet.

Whenever the resolver must find the IP address of a host (or the hostname corresponding to an address), it builds a query package and sends it to the name servers listed in /etc/resolv.conf. The servers either answer the query locally or contact other servers known to them, ultimately returning the answer to the resolver. If the resolver queries a name server, the server returns either the requested information or a referral to another server.

Each DNS server implements DNS by running a daemon called in.named. When run without any arguments, in.named reads the default configuration file /etc/named.conf, loads the DNS zones it is responsible for, and listens for queries from the DNS clients.

Whether the resolver libraries are used at all depends on the name service switch. If a system's /etc/nsswitch.conf file specifies hosts: dns, the resolver libraries are automatically used. If the nsswitch.conf file specifies some other name service before DNS, such as NIS, that name service is consulted first for host information, and only if that name service does not find the host in question are the resolver libraries used. For example, if the hosts line in the nsswitch.conf file specifies hosts: nis dns, the NIS name service is first searched for host information. If the information is not found in NIS, the DNS resolver is used and external servers are consulted to try to resolve the hostname. Because name services such as NIS and NIS+ contain only information about hosts in their own network, the effect of a hosts: nis dns line in a switch file is to specify the use of NIS for local host information and DNS for information on remote hosts on the Internet, outside the local administrative domain.
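The consultation order just described can be read directly off the hosts line of the switch file. The snippet below uses a sample line rather than the live /etc/nsswitch.conf, and it deliberately ignores bracketed status actions such as [NOTFOUND=return], so treat it as a simple sketch.

```shell
# Print the host-lookup sources in the order the switch consults them.
# A sample hosts line is used instead of the live /etc/nsswitch.conf.
line='hosts:      nis dns'

printf '%s\n' "$line" |
awk '{ for (i = 2; i <= NF; i++) print "source " i-1 ": " $i }'
```

For the sample line, NIS is consulted first and DNS second, which is exactly the hosts: nis dns behavior described above.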

Lightweight Directory Access Protocol (LDAP)

LDAP is the latest name-lookup service to be added to Solaris. It can be used in conjunction with or in place of NIS+ or DNS. LDAP is a directory service, similar to DNS or NIS+. A directory service is like a database, but it contains more descriptive, attribute-based information. The information in a directory is generally read, not written, so LDAP is practical only in read-intensive environments in which you do not need frequent updates. Unlike NIS, which provides only a flat structure and is accessible by only one domain, LDAP provides a hierarchical structure that more closely resembles the internal structure of an organization and can access multiple domains. In LDAP, directory entries are arranged in a hierarchical, tree-like structure that reflects political, geographic, or organizational boundaries. Entries representing countries appear at the top of the tree. Below them are entries representing states or national organizations. Below them might be entries representing people, organizational units, printers, documents, or just about anything else you can think of.

NOTE - LDAP information: LDAP is a protocol that email programs can use to look up contact information from a server, probably provided by your Internet access provider. Every email program has a personal address book, but how do you look up an address for someone who has never sent you email? Client programs can ask LDAP servers to look up entries in a variety of ways. LDAP servers index all the data in their entries, and filters may be used to select just the person or group you want and return just the information you want to see.

Most of the time, LDAP is used to search for information in the directory. The LDAP search operation allows some portion of the directory to be searched for entries that match criteria specified by a search filter, and information can be requested from each entry that matches the criteria. LDAP also has provisions for adding an entry to the directory, deleting an entry, changing an existing entry, and changing the name of an entry. LDAP can be used to store the same information that is stored in NIS or NIS+. Sometimes LDAP is used as a resource locator, for example as an online phone directory that eliminates the need for a printed one. This application is mainly read-intensive, but authorized users can update the contents to maintain its accuracy.

For example, here's an LDAP search translated into plain English: "Search people located in Hudsonville whose names contain 'Bill' and who have an email address. Return their full name and email address."
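In standard LDAP filter syntax (RFC 4515), that plain-English search could be written roughly as shown below. The attribute names used (l for locality, cn for common name, mail) are common LDAP schema attributes, but a particular directory might use different ones, so treat this as a sketch rather than a recipe.

```shell
# The plain-English search as an LDAP filter: locality is Hudsonville,
# the common name contains Bill, and a mail attribute must be present.
filter='(&(l=Hudsonville)(cn=*Bill*)(mail=*))'
printf '%s\n' "$filter"

# With an OpenLDAP command-line client it could be used like this
# (hypothetical server and search base -- not runnable as-is):
#   ldapsearch -x -b "dc=example,dc=com" "$filter" cn mail
```

The trailing attribute list (cn mail) is what returns "their full name and email address" while omitting everything else.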

Perhaps you want to search the entire directory subtree below the University of Michigan for people with the name Bill Calkins, retrieving the email address of each entry found. Or you might want to search the entries directly below the U.S. entry for organizations with the string "Pyramid" in their names and that have a fax number. LDAP lets you do this easily. LDAP was designed at the University of Michigan to adapt a complex enterprise directory system, called X.500, to the modern Internet. X.500 is too complex to support on desktops and over the Internet, so LDAP was created to provide this service to general users.

LDAP provides a method for a client to authenticate, or prove, its identity to a directory server, paving the way for rich access control to protect the information the server contains. Some directory services provide no protection, allowing anyone to see the information.

Sun Java System Directory Server

Sun Java System Directory Server is a Sun product that provides a centralized directory service for your network and is used to manage an enterprise-wide directory of information. A directory server runs on a host computer on the Internet, and various client programs that understand the protocol can log in to the server and look up entries. The server provides a standard protocol and a common application programming interface (API) that client applications and servers need to communicate with each other. Java System Directory Server provides a hierarchical namespace that can be used to manage anything that has previously been managed by the NIS and NIS+ name services. Sun Java System Directory Server meets the needs of many applications, including the following:

- Logins and passwords
- Public employee information, such as name, email address, phone number, and department
- Private employee information, such as salary, employee identification numbers, emergency contact information, and pay grade
- Physical device information, such as data about the printers in your organization: where they are located, whether they support color or duplexing, the manufacturer and serial number, and company asset tag information
- Customer information, such as the name of a client, bidding information, contract numbers, and project dates

As discussed earlier, because LDAP is platform-independent, it very likely will eventually replace NIS and NIS+, providing all the functionality once provided by these name services. The advantages of the Java System Directory Server over NIS and NIS+ are listed here:

- It gives you the capability to consolidate information by replacing application-specific databases. It also reduces the number of distinct databases to be managed.
- It allows for more frequent data synchronization between masters and replicas.
- It is compatible with multiple platforms and vendors.
- It is more secure.

The Java System Directory Server runs as the ns-slapd process on your directory server. The server manages the directory databases and responds to all client requests. Each host in the domain that uses resources from the LDAP server is referred to as an LDAP client.

Setting Up the LDAP Client

It's not within the scope of this chapter to describe how to set up an LDAP server; that requires an in-depth working knowledge of LDAP. The scope of this chapter is to describe how to set up the LDAP client. For background information on LDAP and Java System Directory Server, refer to the System Administration Guide: Naming and Directory Services (DNS, NIS, and LDAP) Guide available at http://docs.sun.com.

Before setting up the LDAP client, a few things must already be in place:

- The LDAP server must be installed and configured as a naming service with the appropriate client profiles before you can set up any clients. The LDAP client profile consists of configuration information that the client uses to access the LDAP information on the LDAP server.
- The client's domain name must be served by the LDAP server.
- The nsswitch.conf file must point to LDAP for the required services. This is achieved by copying the file /etc/nsswitch.ldap to /etc/nsswitch.conf.
- At least one server for which a client is configured must be up and running.

The ldapclient utility is used to set up an LDAP client; it assumes that the server has already been configured with the appropriate client profiles. To initialize a client using a profile, log in as root and run the ldapclient command as follows:

# ldapclient init -a profileName=new -a domainName=east.example.com 192.168.0.1<cr>

The system responds with this:

System successfully configured

Whereas init initializes the host as an LDAP client, profileName refers to an existing profile on the LDAP server, and domainName refers to the domain for which the LDAP server is configured.

To initialize a client using a proxy account, run the ldapclient command as follows:

# ldapclient init -a proxyDN=proxyagent \
-a profileName=new \
-a domainName=east.example.com \
-a proxyPassword=test0000 \
192.168.0.1<cr>

The proxyDN and proxyPassword parameters are necessary if the profile is to be used as a proxy. The proxy information is stored in the file /var/ldap/ldap_client_cred; the remaining LDAP client information is stored in the file /var/ldap/ldap_client_file.

Modifying the LDAP Client

After the LDAP client has been set up, it can be modified using the ldapclient mod command. One of the things you can change is the authentication mechanism used by the client. If no particular encryption service is being used, set this to simple, as shown here:

# ldapclient mod -a authenticationMethod=simple<cr>

Listing the LDAP Client Properties

To list the properties of the LDAP client, use the ldapclient list command as shown here:

# ldapclient list<cr>
NS_LDAP_FILE_VERSION= 2.0
NS_LDAP_BINDDN= cn=proxyagent
NS_LDAP_BINDPASSWD= <encrypted password>
NS_LDAP_SERVERS= 192.168.0.1
NS_LDAP_AUTH= simple

Uninitializing the LDAP Client

To remove an LDAP client and restore the name service that was in use prior to initializing this client, use the ldapclient uninit command as follows:

# ldapclient uninit<cr>

The system responds with this:

System successfully recovered

Name Service Cache Daemon (nscd)

nscd is a daemon that runs on a Solaris system and provides a caching mechanism for the most common name service requests. It is automatically started when the system boots to a multiuser state. nscd provides caching for the following name service databases:

- passwd
- group
- hosts
- ipnodes
- exec_attr
- prof_attr
- user_attr

Because nscd is running all the time as a daemon, any nscd commands that are entered are passed to the already running daemon transparently. The behavior of nscd is managed via a configuration file, /etc/nscd.conf. This file lists a number of tunable parameters for each of the supported databases just listed. The following is an example of the /etc/nscd.conf file:

debug-level 0
positive-time-to-live audit_user 3600
negative-time-to-live audit_user 5
keep-hot-count audit_user 20
check-files audit_user yes
positive-time-to-live auth_attr 3600
negative-time-to-live auth_attr 5
keep-hot-count auth_attr 20
check-files auth_attr yes
<output has been truncated>

Each line in this file specifies an attribute and a value. The attributes are described in Table 5.9.

Table 5.9 /etc/nscd.conf Attributes

Attribute: Description

logfile <debug-file-name>: Specifies the name of the file where debug info should be written. /dev/tty is used for standard output.

debug-level <value>: Sets the desired debug level, 0 to 10. The default is 0.

enable-cache <cachename> <value>: Enables or disables the specified cache. <value> may be either yes or no.

positive-time-to-live <cachename> <value>: Sets the time-to-live for positive entries (successful queries) in the specified <cachename>. <value> is in integer seconds. Larger values can be specified to increase the cache hit rates and reduce mean response times, but this can increase problems with cache coherence.

negative-time-to-live <cachename> <value>: Sets the time-to-live for negative entries (unsuccessful queries) in the specified <cachename>. <value> is in integer seconds. The <value> should be kept small to reduce cache coherency problems. This attribute can be adjusted to significantly improve performance if there are several files owned by UIDs not found in the system databases.

The syntax for the nscd command is as follows:

nscd [-f configuration-file] [-g] [-e cachename, yes|no] [-i cachename]

The options for the nscd command are described in Table 5.10.

Table 5.10 nscd Command Options

Option: Description

-f <configuration-file>: Causes nscd to read its configuration data from the specified file.

-g: Displays current configuration and statistical data. This is the only option that can be run by a nonprivileged user.

-e <cachename>, yes|no: Enables or disables the specified cache.

-i <cachename>: Invalidates the specified cache.

Whenever a change is made to the name service switch file, /etc/nsswitch.conf, the nscd daemon must be stopped and started so that the changes take effect. The commands to stop and start nscd have changed because the cache daemon is now managed by the Service Management Facility (SMF). The command to use is as follows:

# svcadm restart system/name-service-cache<cr>

Restarting nscd forces the daemon to reread its configuration file, /etc/nscd.conf, and clears out any information that it may have stored in its cache.

Statistics can be obtained from nscd by running the command with the -g flag. The truncated output that follows shows the results of the cache statistics for the hosts database:

# nscd -g<cr>
[...output truncated...]

hosts cache:

Yes     cache is enabled
44      cache hits on positive entries
0       cache hits on negative entries
3       cache misses on positive entries
1       cache misses on negative entries
91.7%   cache hit rate
0       queries deferred
4       total entries
211     suggested size
3600    seconds time to live for positive entries
5       seconds time to live for negative entries
20      most active entries to be kept valid
Yes     check /etc/{passwd, group, hosts, inet/ipnodes} file for changes
No      use possibly stale data rather than waiting for refresh

[...output truncated...]

The getent Command

The getent command is a generic user interface that is used to get a list of entries from any of the name service databases. getent consults each name service database in the order listed in the /etc/nsswitch.conf file. The getent command displays the entries of the specified database that match each of the supplied keys. Multiple keys can be specified; if no key is specified, all entries are printed. The syntax for the getent command is shown in the following code:

getent database [key...]

The options for the getent command are described in Table 5.11.

Table 5.11 getent Command Options

Option: Description

database: The name of the database to be examined. This can be hosts, ipnodes, passwd, group, services, protocols, ethers, networks, or netmasks.

key: An appropriate key for the specified database, such as a hostname or IP address for the hosts database.

The following example looks at the root entry of the passwd database:

# getent passwd root<cr>
root:x:0:1:Super-User:/:/sbin/sh
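Because getent follows the switch-file order, it is also a quick way to check host lookups and to fetch several entries at once. The example below assumes the hosts database contains a localhost entry, which is true on virtually every system:

```shell
# Look up localhost through the configured hosts sources; the answer
# comes from whichever source (files, NIS, DNS) satisfies it first.
getent hosts localhost

# Multiple keys may be given; each matching entry is printed in turn:
getent passwd root daemon
```

If a key matches no entry in any configured source, getent simply prints nothing for that key and returns a nonzero exit status.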

Summary

This chapter covered all the name service topics that are included in the Solaris 10 System Administrator exams. This includes the local files in the /etc directory, NIS, NIS+, DNS, and LDAP. You will see that most are similar in their implementation, with only subtle differences. This chapter described how to configure the master server, slave servers, and clients for the most commonly used name service, NIS. Configuring clients for DNS and LDAP was also covered briefly. The name service switch file used by the operating system for any network information lookups was covered as well. In addition, this chapter described the Sun Java System Directory Server that could soon replace NIS+, and eventually NIS. If you will be migrating from NIS+, you can refer to the section titled "Transitioning from NIS+ to LDAP" in the Solaris 10 System Administration Guide: Naming and Directory Services (NIS+), which is available on the Solaris Documentation CD and the online documentation site, http://docs.sun.com. Finally, this chapter described the Name Service Cache Daemon, used to speed up the most common name service requests, and the getent command, which is used to retrieve entries from specified name service databases.

Many large networks that use a name service are heterogeneous, meaning that they have more than just Solaris systems connected to the network. Refer to the vendor's documentation for each particular system to understand how each different operating system implements name services. Of course, a better understanding of the naming services will come as you use the systems described and become experienced over time.

Key Terms

- Name service
- Name service switch
- Hierarchical namespace
- DNS
- DNS resolver
- NIS
- NIS+
- LDAP
- Master NIS server
- Makefile

- Slave NIS server
- /var/yp/securenets file
- NIS map
- NIS source file
- NIS client
- NIS security (passwd.adjunct)
- NIS+ objects
- NIS+ security levels (three levels)
- NIS+ authorization (four classes and four types of access rights)
- nscd (Name Service Cache Daemon)

Apply Your Knowledge

Exercises

For these exercises, you'll need two Solaris systems attached to a network. One system will be configured as the NIS master server, and the other will be the NIS client.

5.1 Setting Up the NIS Master Server

In this exercise, you'll go through the steps to set up your NIS master server.

Estimated time: 20 minutes

1. Log in as root.

2. Set your domain name if it is not already set:

# domainname <yourname>.com<cr>

Populate the /etc/defaultdomain file with your domain name:

# domainname > /etc/defaultdomain<cr>

3. On the system that will become your master NIS server, create the master /var/yp/passwd, /var/yp/group, and /var/yp/hosts files. Follow the instructions described in this chapter to create these files.

4. Change entries for /etc to /var/yp in /var/yp/Makefile as follows:

Change this:

DIR = /etc
PWDIR = /etc

to this:

DIR = /var/yp
PWDIR = /var/yp

5. Create the name service switch file by copying the NIS template file as follows:

# cp /etc/nsswitch.nis /etc/nsswitch.conf<cr>

6. Run the ypinit command as follows to set up this system as the NIS master:

# ypinit -m<cr>

When asked for the next host to add as an NIS slave server, press Ctrl+D; for this exercise, we will not add an NIS slave server. Indicate that you do not want ypinit to quit on nonfatal errors by typing N when asked. You'll know the process was successful when you get the message indicating that the current system was set up as a master server without any errors.

7. Start up the NIS service on the master server by running

# svcadm enable network/nis/server<cr>

8. Verify that the NIS master server is up by typing

# ypwhich -m<cr>

5.2 Setting Up the NIS Client

In this exercise, you'll go through the steps to set up your NIS client.

Estimated time: 10 minutes

1. Log in as root.

2. Set your domain name if it is not already set:

# domainname <yourname>.com<cr>

Populate the /etc/defaultdomain file with your domain name:

# domainname > /etc/defaultdomain<cr>

3. Create the name service switch file by copying the NIS template file as follows:

# cp /etc/nsswitch.nis /etc/nsswitch.conf<cr>

4. Configure the client system to use NIS by running the ypinit command:

# ypinit -c<cr>

You are asked to identify the NIS server from which the client can obtain name service information. Type the NIS master server name, followed by a carriage return. When asked for the next host to add, press Ctrl+D.

5. Start the NIS client daemon by running

# svcadm enable network/nis/client<cr>

6. Verify that the NIS client is bound to the NIS master by typing

# ypwhich<cr>

The master server name should be displayed.

7. Test the NIS client by logging out and logging back in using a login name that is no longer in the local /etc/passwd file and is managed by NIS.

Exam Questions

1. Which of the following is not a Solaris name service?
❍ A. DNS
❍ B. NIS
❍ C. NIS+
❍ D. DES

2. Which of the following services stores information that users, systems, and applications must have access to in order to communicate across the network, in a central location?
❍ A. NIS
❍ B. NFS service
❍ C. Automount
❍ D. AutoFS

3. Which of the following is the traditional UNIX way of maintaining information about hosts, users, passwords, groups, and automount maps?
❍ A. DNS
❍ B. NIS
❍ C. NIS+
❍ D. /etc

4. What is the set of maps shared by the servers and clients called?
❍ A. A table
❍ B. An object
❍ C. The NIS Namespace
❍ D. None of the above

5. What are the NIS administration databases called?
❍ A. Files
❍ B. Tables
❍ C. Maps
❍ D. Objects

6. When you add a new system to a network running NIS, you have to update the input file in the master server and run which of the following?
❍ A. makedbm
❍ B. make
❍ C. yppush
❍ D. ypinit

7. Which of the following commands is used to display the values in an NIS map?
❍ A. ypcat
❍ B. ypbind
❍ C. ypserv
❍ D. ypwhich

8. Which of the following commands can be used to determine which server is the master of a particular map?
❍ A. ypcat
❍ B. ypbind
❍ C. ypserv
❍ D. ypwhich -m

9. Which of the following propagates a new version of an NIS map from the NIS master server to NIS slave servers?
❍ A. ypinit
❍ B. yppush
❍ C. make
❍ D. yppoll

10. Which of the following sets up master and slave servers and clients to use NIS?
❍ A. makedbm
❍ B. make
❍ C. ypinit
❍ D. yppush

11. Which of the following is the configuration file for the name service switch?
❍ A. nsswitch.conf
❍ B. resolv.conf
❍ C. /etc/netconfig
❍ D. nsswitch.nis

12. Each line of which of the following files identifies a particular type of network information, such as host, password, and group, followed by one or more sources, such as NIS+ tables, NIS maps, the DNS hosts table, or local /etc?
❍ A. nsswitch.conf
❍ B. resolv.conf
❍ C. /etc/netconfig
❍ D. nsswitch.nis

13. In the name service switch file, what does the following entry mean if the NIS naming service is being used?

hosts: nis [NOTFOUND=return] files

❍ A. Search only the NIS hosts table in the NIS map.
❍ B. Search the NIS map and then the local /etc/hosts file.
❍ C. Do not search the NIS hosts table or the local /etc/hosts file.
❍ D. Search only the /etc/hosts file.

14. Which name service switch template files are found in Solaris 10? (Choose two.)
❍ A. nsswitch.files
❍ B. nsswitch.nis+
❍ C. nsswitch.nisplus
❍ D. nsswitch.fns

15. Which of the following is the name service provided by the Internet for TCP/IP networks?
❍ A. DNS
❍ B. NIS
❍ C. NIS+
❍ D. None of the above

16. What are the four types of NIS+ access rights?
❍ A. Read, write, execute, no access
❍ B. Read, write, execute, delete
❍ C. Read, write, create, modify
❍ D. Read, modify, create, destroy

17. Each server implements DNS by running a daemon called what?
❍ A. named
❍ B. in.named
❍ C. nfsd
❍ D. dnsd

18. The primary task of DNS is to provide what?
❍ A. Security service
❍ B. Name-to-address resolution
❍ C. Name service
❍ D. Namespace services

19. Which of the following describes the difference between NIS+ authentication and authorization?
❍ A. Authentication is checking whether the information requester is a valid user on the network, and authorization determines whether the particular user is allowed to have or modify the information.
❍ B. Authorization is checking whether the information requester is a valid user on the network, and authentication determines whether the particular user is allowed to have or modify the information.

20. This file determines how a particular type of information is obtained and in which order the naming services should be queried. Which file is being described?
❍ A. /etc/resolv.conf
❍ B. /etc/nsswitch.conf
❍ C. /etc/nsswitch.nis
❍ D. /etc/nsswitch.nisplus

21. Which of the following is the name service used by the Internet?
❍ A. DNS
❍ B. NIS
❍ C. NIS+
❍ D. DES

22. How many name services does Solaris 10 support?
❍ A. Three
❍ B. Four
❍ C. Five
❍ D. Six

23. Which of the following commands is used to set up an NIS master server?
❍ A. ypserver -m
❍ B. nisserver -m
❍ C. nisinit -m
❍ D. ypinit -m

Answers to Exam Questions

1. D. DES is not a Solaris name service. For more information, see the "Name Services Overview" section.

2. A. NIS stores information about workstation names, addresses, users, the network itself, and network services in central locations where all workstations on the network can access it. For more information, see the "Name Services Overview" section.

3. D. /etc files are the traditional UNIX way of maintaining information about hosts, users, passwords, groups, and automount maps. For more information, see the "Name Services Overview" section.

4. C. The set of maps shared by the servers and clients is called the NIS Namespace. For more information, see the "Name Services Overview" section.

5. C. The NIS administration databases are called maps. For more information, see the "Name Services Overview" section.

6. B. To update the input file in the master server with a new system name, you execute the /usr/ccs/bin/make command. For more information, see the "Configuring an NIS Master Server" section.

7. A. Just as you use the cat command to display the contents of a text file, you can use the ypcat command to display the values in an NIS map. For more information, see the "Configuring an NIS Master Server" section.

8. D. You can use the ypwhich -m command to determine which server is the master of a particular map. For more information, see the "Configuring an NIS Master Server" section.

9. B. The command yppush propagates a new version of an NIS map from the NIS master server to NIS slave servers. For more information, see the "Configuring an NIS Master Server" section.

10. C. The ypinit command builds and installs an NIS database and initializes the NIS client's (and server's) ypservers list. For more information, see the "Configuring an NIS Master Server" section.

11. A. nsswitch.conf is the configuration file for the name service switch. In setting up NIS, you set up the name service switch, which involves editing the /etc/nsswitch.conf file. For more information, see the "The Name Service Switch" section.

12. A. Each line of the /etc/nsswitch.conf file identifies a particular type of network information, such as host, password, and group, followed by one or more sources, such as NIS+ tables, NIS maps, the DNS hosts table, or the local /etc files. For more information, see the "The Name Service Switch" section.

13. A. The following entry in the nsswitch.nis template states that only the NIS hosts table in the NIS map is searched: hosts: nis [NOTFOUND=return] files. For more information, see the "The Name Service Switch" section.

14. A, C. The following template files are available: nsswitch.files, nsswitch.nis, nsswitch.nisplus, nsswitch.dns, and nsswitch.ldap. For more information, see the "The Name Service Switch" section.

B. For more information. type /usr/sbin/ypinit -m. the user’s identity and secure RPC password are confirmed and validated. Solaris 10 supports five name services: /etc files. NIS+.com. A. A. see the “Name Services Overview” section. The /etc/nsswitch. 16. C. An NIS+ principal can be a client user or a client workstation. see the “DNS” section. see the “DNS” section.270 Chapter 5: Naming Services 15. For more information. see the “Configuring an NIS Master Server” section. http://docs. For more information. B. The four types of access rights are read. see the “The Name Service Switch” section. For more information. To build new maps on the master server. . The primary task of DNS is to provide name-toaddress resolution. DNS is the name service used by the Internet. modify. 19. DNS. Access rights are similar to file permissions. For more information. For more information. For more information. 21. NIS. System Administration Guide: Advanced Administration and System Administration Guide: Naming and Directory Services books in the System Administration collection. create. D. For more information. D. see the “NIS+ Security” section. 18. A. and destroy. Solaris 10 documentation set. see the “DNS” section. 17.sun. For more information. 20. A. Authorization is used to specify access rights. The process of finding a computer’s IP address by using its hostname as an index is referred to as name-to-address resolution. and LDAP. 23. see the “NIS+ Security” section. Each server implements DNS by running a daemon called in. DNS is the name service provided by the Internet for Transmission Control Protocol/Internet Protocol (TCP/IP) networks. Authentication is used to identify NIS+ principals.conf file determines how a particular type of information is obtained and in which order the naming services should be queried. 22.named. 
Suggested Reading and Resources Solaris 10 Documentation CD: System Administration Guide: Advanced Administration and System Administration Guide: Naming and Directory Services manuals. Every time a principal (user or system) tries to access an NIS+ object. or mapping. see the “DNS” section.

6
Solaris Zones

Objectives
The following test objectives for exam CX-310-202 are covered in this chapter:

- Explain consolidation issues and features of Solaris zones, and decipher between the different zone concepts, including zone types, daemons, networking, and command scope, and, given a scenario, create a Solaris zone.
- Given a zone configuration scenario, identify zone components and zonecfg resource parameters; use the zonecfg command to describe the interactive configuration of a zone, allocate file system space, and view the zone configuration file.
- Given a scenario, use the zoneadm command to view, install, boot, halt, and reboot a zone.

This chapter helps you understand the components of the new zones feature, first introduced in Solaris 10. It describes the zone concepts and how they fit into the overall container structure. This chapter explains the different components of a zone and how to carry out zone configuration. It also describes the zone configuration and the mechanism to verify that a zone has been configured correctly. In this chapter, we create a zone. You'll see how to install zones, boot and reboot, halt, and check the status of installed zones, as well as uninstall and remove zones. We also show how zones are viewed from a global zone.

Outline

Introduction
Consolidation and Resource Management
Consolidation
Solaris Zones
Types of Zones
Zone States
Zone Features
Nonglobal Zone Root File System Models
Sparse Root Zones
Whole Root Zones
Networking in a Zone Environment
Zone Daemons
Configuring a Zone
The zonecfg Command
Viewing the Zone Configuration
Installing a Zone
Booting a Zone
Halting a Zone
Rebooting a Zone
Uninstalling a Zone
Deleting a Zone
Zone Login
Initial Zone Login
Using a sysidcfg File
Logging in to the Zone Console
Logging in to a Zone
Running a Command in a Zone
Creating a Zone
Making Modifications to an Existing Zone
Moving a Zone
Migrating a Zone
Cloning a Zone
Backing Up a Zone
Summary
Key Terms
Apply Your Knowledge
Exercise
Exam Questions
Answers to Exam Questions
Suggested Reading and Resources

Study Strategies
The following strategies will help you prepare for the test:

- Make sure you are familiar with all the concepts introduced in this chapter, particularly the types of zones and the commands used to create, manipulate, and manage them.
- You'll see questions on the exam related to the zonecfg, zoneadm, and zlogin commands. Understand each of the commands described in this chapter, and get familiar with all the options, especially the ones used in the examples.
- Practice the step-by-step examples provided in this chapter on a Solaris system. Be sure that you understand each step and can describe the process of setting up a zone, installing and booting a zone, as well as uninstalling and deleting a zone.
- You need to know all the terms listed in the "Key Terms" section near the end of this chapter.

Introduction

Solaris zones is a major new feature of Solaris 10 and provides additional facilities that were not available in previous releases of the Operating Environment. Previously, the only way of compartmenting an environment was to purchase a separate server, or to use an expensive high-end server capable of physical partitioning, such as the Starfire servers. Now you can create virtual environments on any machine capable of running the Solaris 10 Operating Environment.

Zones provide a virtual operating system environment within a single physical instance of Solaris 10, allowing virtual environments to run on the same physical system. Applications can run in an isolated and secure environment, and an application in one zone does not affect applications in another zone on the same system. A further important aspect of zones is that a failing application, such as one that would traditionally have leaked all available memory or exhausted all CPU resources, can be limited to affect only the zone in which it is running. This is achieved by limiting the amount of physical resources on the system that the zone can use.

The following are features provided by zones:

- Security: When a process is created in a zone, that process (and any of its children) cannot change zones or affect other zones. Network services can be isolated to each zone so that if a network service is compromised in a zone, activities using that service affect only that zone.
- Isolation: Multiple applications can be deployed on the same machine, each in different zones. This isolation prevents an application running in one zone from monitoring or affecting an application running in a different zone. Each zone has its own set of user accounts, root account, and passwords.
- Network isolation: Allows the zone to have an exclusive IP, allowing the zone to run on a different LAN or VLAN (when used on an exclusive NIC) than the global zone.
- Virtualization: In a virtualized environment, each zone is administered separately. Details about the system's physical devices and primary IP address are hidden from the applications in each zone.

- Environment: Zones provide the same standard Solaris interfaces and application environment that applications expect on a Solaris 10 system. In fact, with branded zones, it is possible to run a different operating environment inside a nonglobal zone, such as a Solaris 8, Solaris 9, or Linux environment.
- Granularity: Hardware resources can be shared between several zones or allocated on a per-zone basis using Solaris resource management tools.

You might be familiar with VMware, which is available on x86-compatible computers. It is used to host multiple OS instances on a single computer. Zones differ from VMware in that VMware uses large amounts of the system's CPU capacity to manage the VMware environments. With zones, the system overhead is negligible; in most cases, several dozen zones take up less than 1% of the system's resources. The best comparison of zones to existing technology would be FreeBSD Jails.

CAUTION
Zones and containers: Some people refer to zones and containers interchangeably, as if they are the same thing. This is incorrect. Containers are a technology that combines a zone with the operating system's Resource Management (RSM) features, so the two terms should not be used interchangeably. Solaris zones is a subset of containers.

This chapter looks at the whole concept of Solaris zones and how to configure and create a zone, make it operational, and then remove it. Resource management is not an objective for exam CX-310-202, but a brief introduction is included in this chapter to help put the zones feature in the correct context.

Consolidation and Resource Management
Resource management (RSM) is one of the components of the Solaris 10 containers technology. With containers, a system administrator can use the resource management facility to allocate resources, such as memory and CPU, to applications and services within each zone. It allows you to do the following:

- Allocate specific computer resources, such as CPU time and memory.
- Monitor how resource allocations are being used, and adjust the allocations when required.

- Generate more detailed accounting information. The extended accounting feature of Solaris 10 provides this facility.

A new resource capping daemon (rcapd) allows you to regulate how much physical memory is used by a project by "capping" the overall amount that can be used. Remember that a project can be a number of processes or users, so it provides a useful control mechanism for a number of functions. Additionally, the resource management feature can tailor the behavior of the Fair Share Scheduler (FSS) to give priority to specific applications.

Resource pools can be utilized to group applications, or functions, together and control their resource usage globally, such as the maximum amount of CPU resource or memory. This is very useful if you need to allocate additional resources to a group of resources for a limited period of time. An example of this would be when a company runs end-of-month reports. Before resource management was introduced, this would have meant that a larger server would be needed to accommodate the resource requirement, even though it would be used to its capacity only once a month.

Using the resource management facility is beyond the scope of this book and is not covered on the CX-310-202 certification exam. For more information on RSM, refer to the Sun Microsystems "Solaris Containers—Resource Management and Solaris Zones" administration guide, described at the end of this chapter.

Consolidation
The resource management feature of Solaris containers is extremely useful when you want to consolidate a number of applications to run on a single server. Consolidation has become more popular in recent years because it reduces the cost and complexity of having to manage numerous separate systems. Previously, a number of applications would run on separate servers, with each application having full access to the system on which it is running. Using the resource management feature, multiple workloads can now be run on a single server, providing an isolated environment for each, so that one workload cannot affect the performance of another. Now the resources can be allocated according to priority, allowing the server to be more efficiently utilized. You can consolidate applications onto fewer, larger, more scalable servers, and also segregate the workload to restrict the resources that each can use.
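To make the Fair Share Scheduler idea concrete: under contention, each workload is entitled to its share count divided by the total shares of the competing workloads. The share values below are made up purely for illustration; the arithmetic is the point.

```shell
# Hypothetical FSS share assignments for three consolidated workloads.
web_shares=20
db_shares=50
batch_shares=30
total=$((web_shares + db_shares + batch_shares))

# Under full contention, each workload is entitled to shares/total of the CPU.
web_pct=$((100 * web_shares / total))
db_pct=$((100 * db_shares / total))
batch_pct=$((100 * batch_shares / total))
echo "web=${web_pct}% db=${db_pct}% batch=${batch_pct}%"   # -> web=20% db=50% batch=30%
```

Note that shares are relative entitlements, not hard caps: if one workload is idle, the others are free to use its spare capacity.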

Solaris Zones

Objectives
- Explain consolidation issues and features of Solaris zones, and decipher between the different zone concepts, including zone types, daemons, networking, and command scope, and, given a scenario, create a Solaris zone.
- Given a zone configuration scenario, identify zone components and zonecfg resource parameters; use the zonecfg command to describe the interactive configuration of a zone, allocate file system space, and view the zone configuration file.
- Given a scenario, use the zoneadm command to view, install, boot, halt, and reboot a zone.

A zone is a virtual environment that is created within a single running instance of the Solaris Operating Environment. The zones technology provides virtual operating system services to allow applications to run in an isolated and secure environment. Applications running in a zone environment cannot affect applications running in a different zone, even though they exist and run on the same physical server. Applications that run in a nonglobal zone are isolated from applications running in a separate nonglobal zone, allowing multiple versions of the same application to run on the same physical server. Even a privileged user in a zone cannot monitor or access processes running in a different zone.

Types of Zones
The two types of zones are global and nonglobal. Think of a global zone as the server itself, the traditional view of a Solaris system as we all know it, where you can log in as root and have full control of the entire system. The global zone is the default zone and is used for system-wide configuration and control. Every system contains a global zone, and there can only be one global zone on a physical Solaris server.

A nonglobal zone is created from the global zone and is also managed by it. You can have up to 8,192 nonglobal zones on a single physical system; the only real limitation is the capability of the server itself. By default, a nonglobal zone has the same operating system and characteristics as the global zone, because they share the same kernel.

Beginning with Solaris 10 version 08/07, it is possible to run a different operating environment inside a nonglobal zone. This is called a branded zone (BrandZ). It allows the creation of brands, which allow an alternative runtime configuration within each zone. This brand could be used to "emulate" Solaris 8, Solaris 9, or even Linux. For example, the lx brand provides a Linux environment for the x86/x64-based platforms. It enables binary applications designed for specific distributions of Linux to run unmodified within the Solaris zone. The zone does not actually run the Linux OS.

In branded zones, the brand defines the operating environment to be installed and how the system will behave within the zone. The zoneadm command is used to verify that the zone will run on the designated Solaris system.

Zone States
Nonglobal zones are referred to simply as zones and can be in a number of states, depending on the current state of configuration or readiness for operation. You should note that zone states refer only to nonglobal zones, because the global zone is always running and represents the system itself. The only time the global zone is not running is when the server has been shut down. Table 6.1 describes the six states that a zone can be in.

Table 6.1 Zone States

Configured: A zone is in this state when the configuration has been completed and storage has been committed. Additional configuration that must be done after the initial reboot has yet to be done.
Incomplete: A zone is set to this state during an install or uninstall operation. Upon completion of the operation, it changes to the correct state.
Installed: A zone in this state has a confirmed configuration. Packages have been installed under the zone's root path. Even though the zone is installed, it still has no virtual platform associated with it.
Ready: The zone's virtual platform is established. The kernel creates the zsched process, the network interfaces are plumbed, and file systems are mounted. The system also assigns a zone ID at this state, but no processes are associated with this zone.
Running: A zone enters this state when the first user process is created. This is the normal state for an operational zone.
Shutting Down and Down: Transitional states that are visible only while a zone is in the process of being halted. If a zone cannot shut down for any reason, it also displays this state.

EXAM ALERT
Know your zone states: The exam often has at least one question about the different zone states. You may get a question that asks you to match the correct state to the correct description. Pay particular attention to the differences between the configured, installed, ready, and running states.
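As a study aid, the lifecycle can be boiled down to a lookup from the administrative operation performed to the state the zone lands in afterward. The helper below is purely illustrative (it is not a Solaris command) and deliberately ignores the transient Incomplete, Shutting Down, and Down states:

```shell
# Hypothetical study helper: which stable state does a zone end up in
# after each administrative operation? (Simplified; transient states omitted.)
next_state() {
  case "$1" in
    configure) echo "configured" ;;  # zonecfg configuration committed
    install)   echo "installed"  ;;  # packages installed under the zone root
    ready)     echo "ready"      ;;  # virtual platform established, zsched started
    boot)      echo "running"    ;;  # first user process created
    halt)      echo "installed"  ;;  # virtual platform torn down
    uninstall) echo "configured" ;;  # zone root removed
    *)         echo "unknown"    ;;
  esac
}

next_state boot   # -> running
```

Working through a few transitions this way (configure, install, boot, halt) is a quick check of whether you have the table above memorized.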

Zone Features
This section describes the features of both the global zone and nonglobal zones. The global zone has the following features:

- It is assigned zone ID 0 by the system.
- It provides the single bootable instance of the Solaris Operating Environment that runs on the system.
- It contains a full installation of Solaris system packages.
- It can contain additional software packages, or additional software, files, or data that was not installed through the packages mechanism.
- It contains a complete product database of all installed software components.
- It holds configuration information specific to the global zone, such as the global zone hostname and the file system table.
- It is the only zone that is aware of all file systems and devices on the system.
- It is the only zone that is aware of nonglobal zones and their configuration.
- It is the only zone from which a nonglobal zone can be configured, installed, managed, and uninstalled.

Nonglobal zones have the following features:

- The nonglobal zone is assigned a zone ID by the system when it is booted.
- It shares the Solaris kernel that is booted from the global zone.
- It contains a subset of the installed Solaris system packages.
- It can contain additional software packages shared from the global zone.
- It can contain additional software packages that are not shared from the global zone.
- It can contain additional software, files, or data that was not installed using the package mechanism or shared from the global zone.
- It contains a complete product database of all software components that are installed in the zone. This includes software that was installed independently of the global zone as well as software shared from the global zone.
- It holds configuration information specific to itself, such as the nonglobal zone hostname, domain name, NIS server, and file system table.
- It is unaware of the existence of other zones.
- It cannot install, manage, or uninstall other zones, including itself.

EXAM ALERT
Be very familiar with the characteristics of the global zone and the nonglobal zone. Several questions on the exam will require you to thoroughly understand these characteristics.

NOTE
Zones: Only one kernel is running on the system, and it is running on the global zone. The nonglobal zones share this kernel. Therefore, all nonglobal zones are at the same kernel patch level as the global zone. However, each zone can be patched on a per-zone basis for middleware applications, such as Java Enterprise System. A nonglobal zone cannot be an NFS server.

Nonglobal Zone Root File System Models
A nonglobal zone contains its own root (/) file system. The size and contents of this file system depend on how you configure the global zone and the amount of configuration flexibility that is required. There is no limit on how much disk space a zone can use, but the zone administrator, normally the system administrator, must ensure that sufficient local storage exists to accommodate the requirements of all nonglobal zones being created on the system. The system administrator can restrict the overall size of the nonglobal zone file system by using any of the following:

- Standard disk partitions on a disk can be used to provide a separate file system for each nonglobal zone.
- Soft partitions can be used to divide disk slices or logical volumes into a number of partitions. Soft partitions are covered in Chapter 3, "Managing Storage Volumes."
- Use a lofi-mounted file system to place the zone on. For further information on the loopback device driver, see the manual pages for lofi and lofiadm.
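For the lofi option, the general idea is to back a file system with an ordinary file and mount it at the zone's path, so the zone can never grow beyond the size of the backing file. The following sketch shows what that might look like on a Solaris 10 system; the file names, size, and device number are examples only, so check the lofiadm and lofi manual pages before relying on the exact sequence:

```
# mkfile 4g /export/zonedisks/apps.img
# lofiadm -a /export/zonedisks/apps.img
/dev/lofi/1
# newfs /dev/rlofi/1
# mount /dev/lofi/1 /export/zones/apps
```

The zone's zonepath can then be set to the mount point (/export/zones/apps in this sketch).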

Sparse Root Zones
When you create a nonglobal zone, you have to decide how much of the global zone file system you want to be inherited from the global zone. A sparse root zone optimizes sharing by implementing read-only loopback file systems from the global zone and installing only a subset of the system root packages locally. The majority of the root file system is shared (inherited) from the global zone. Generally, this model requires about 100MB of disk space when the global zone has all the standard Solaris packages installed. A sparse root zone uses the inherit-pkg-dir resource, where a list of inherited directories from the global zone is specified.

Whole Root Zones
This model provides the greatest configuration flexibility because all the required (and any other selected) Solaris packages are copied to the zone's private file system, unlike the sparse root model, where loopback file systems are used. The disk space requirement for this model is considerably greater and is determined by evaluating the space used by the packages currently installed in the global zone.

Networking in a Zone Environment
On a system supporting zones, the zones can communicate with each other over the network, but even though the zones reside on the same physical system, network traffic is restricted so that applications running on a specified zone cannot interfere with applications running on a different zone. This is because the IP stack on a system supporting zones implements the separation of network traffic between zones. The only interaction allowed is for ICMP traffic to resolve problems, so that commands such as ping can be used to check connectivity. Only the global zone has visibility of all zones on the system and can also inspect network traffic, using snoop, for example.

When a zone is created, a dedicated IP address is configured that identifies the host associated with the zone. When the zone is running, the zone's IP address is configured as a logical interface on the network interface specified in the zone's configuration parameters. In reality, though, a zone behaves like any other Solaris system on the network in that you can telnet or ftp to the zone as if it were any other system, assuming that the zone has configured these network services for use.

Each zone has its own set of bindings, and zones can all run their own network daemons. As an example, consider three zones all providing web server facilities using the apache package. Using zones, all three zones can host websites on port 80, the default port for http traffic, without any interference between them.

Zone Daemons
The zone management service is managed through the Service Management Facility (SMF); the service identifier is called svc:/system/zones:default. Two daemon processes are associated with zones—zoneadmd and zsched.

The zoneadmd daemon starts when a zone needs to be managed. It is started automatically by SMF and is also shut down automatically when no longer required. An instance of zoneadmd is started for each zone, so it is not uncommon to have multiple instances of this daemon running on a single server. The zoneadmd daemon carries out the following actions:

- Allocates the zone ID and starts the zsched process
- Sets system-wide resource controls
- Prepares the zone's devices if any are specified in the zone configuration
- Plumbs the virtual network interface
- Mounts any loopback or conventional file systems

The zsched process is started by zoneadmd and exists for each active zone. (A zone is said to be active when it is in the ready, running, or shutting down state.) It is also known as the zone scheduler. The job of zsched is to keep track of kernel threads running within the zone.

Configuring a Zone
Before a zone can be installed and booted, it has to be created and configured. This section deals with the initial configuration of a zone and describes the zone components.

The zonecfg Command
Objective
- Given a zone configuration scenario, identify zone components and zonecfg resource parameters; use the zonecfg command to describe the interactive configuration of a zone, allocate file system space, and view the zone configuration file.

A zone is configured using the zonecfg command. The zonecfg command is also used to verify that the resources and properties that are specified during configuration are valid for use on a Solaris system. zonecfg checks that a zone path has been specified and that, for each resource, all the required properties have been specified.

The zonecfg command is used to configure a zone. It can run interactively, on the command line, or using a command file. A command file is created by using the export subcommand of zonecfg. zonecfg carries out the following operations:

- Create, or delete, a zone configuration
- Add, or remove, resources in a configuration
- Set the properties for a resource in the configuration
- Query and verify a configuration
- Commit (save) a configuration
- Revert to a previous configuration
- Exit from a zonecfg session

When you enter zonecfg in interactive mode, the prompt changes to show that you are in a zonecfg session. If you are configuring a zone called apps, the prompt changes:

# zonecfg -z apps<cr>
zonecfg:apps>

This is known as the global scope of zonecfg. When you configure a specific resource, the prompt changes to include the resource being configured. The command scope also changes so that you are limited to entering commands relevant to the current scope. You have to enter an end command to return to the global scope. Table 6.2 describes the subcommands that are available with the interactive mode of zonecfg.

Table 6.2 zonecfg Subcommands

help: Prints general help, or help about a specific resource.
create: Begins configuring a zone. This starts a configuration in memory for a new zone. This is applicable only in the global scope.
export: Prints the configuration to stdout, or to a specified file name, which can be used as a command file.
add: In the global scope, this command takes you to the specified resource scope. In the resource scope, it adds the specified property to the resource type.
set: Sets a specified property name to a specified property value.
select: In the global scope, it selects the resource of the specified type. The scope changes to the resource, but you have to enter sufficient property name-value pairs to uniquely identify the required resource.
remove: In the global scope, removes the specified resource type. You have to enter sufficient property name-value pairs to uniquely identify the required resource.

Table 6.2 zonecfg Subcommands (continued)

end: This is available only in the resource scope and ends the current resource specification.
cancel: This is available only in the resource scope. It ends the resource specification and returns to the global scope. Any partially specified resources are discarded.
delete: Destroys the specified configuration. You need to use the -F option to force deletion with this subcommand.
info: Displays information about the current configuration. If a resource type is specified, it displays information about the resource type.
verify: Verifies the current configuration to ensure that all resources have the required properties specified.
commit: Commits the current configuration from memory to disk. A configuration must be committed before it can be used by the zoneadm command, described later in this chapter.
revert: Reverts the configuration to the last committed state. You can use the -F option with this subcommand to force the command to execute.
exit: Exits the zonecfg session.

Table 6.3 lists the resource types that are applicable to the zonecfg command.

Table 6.3 zonecfg Resource Types

zonename: Identifies the zone and must be unique. It can't be longer than 64 characters. It's case-sensitive and must begin with an alphanumeric character. It can also contain underscores (_), hyphens (-), and periods (.). The name global and all names beginning with SUNW are reserved and not allowed.
zonepath: The path to the zone root in relation to the global zone's root directory (/). To restrict visibility to nonprivileged users in the global zone, the permissions on the zonepath directory should be set to 700.
fs: Each zone can mount file systems. This resource specifies the path to the file system mount point.
inherit-pkg-dir: Specifies directories that contain software packages that are shared with the global zone. The packages associated with these directories are inherited (in a read-only loopback file system mount) by the nonglobal zone; the nonglobal zone inherits only read-only access. Four default inherit-pkg-dir resources are included in the configuration: /lib, /platform, /sbin, and /usr.
net: Each zone can have network interfaces that are plumbed when the zone transitions from the installed state to the ready state. Network interfaces are implemented as virtual interfaces.

Table 6.3 zonecfg Resource Types (continued)

device: Each zone can have devices that are configured when the zone transitions from the installed state to the ready state.
rctl: Used for zone-wide resource controls. The controls are enabled when the zone transitions from the installed state to the ready state. The zone-wide resource controls implemented in Solaris 10 are zone.cpu-shares and zone.max-lwps.
attr: A generic type most often used for comments.

Some of the resource types described in Table 6.3 also have properties that need to be configured if the resource type is to be used. The following list describes the properties and the parameters, along with examples of usage:

- fs: dir, special, raw, type, options

The following code gives an example of how these properties are used. Everything after each prompt is what the user enters.

zonecfg:apps> add fs
zonecfg:apps:fs> set dir=/testmount
zonecfg:apps:fs> set special=/dev/dsk/c0t1d0s0
zonecfg:apps:fs> set raw=/dev/rdsk/c0t1d0s0
zonecfg:apps:fs> set type=ufs
zonecfg:apps:fs> add options [logging,nosuid]
zonecfg:apps:fs> end

This code example specifies that /dev/dsk/c0t1d0s0 in the global zone is to be mounted on directory /testmount in the nonglobal zone and that the raw device /dev/rdsk/c0t1d0s0 is the device to fsck before attempting the mount. The file system is of type ufs, and a couple of mount options (logging and nosuid) have been added.

- inherit-pkg-dir: dir

This specifies the directory that is to be loopback-mounted from the global zone. The following example shows that /opt/sfw is to be mounted:

zonecfg:apps> add inherit-pkg-dir
zonecfg:apps:inherit-pkg-dir> set dir=/opt/sfw
zonecfg:apps:inherit-pkg-dir> end
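The zonename rules listed in Table 6.3 are easy to trip over, so it can help to see them expressed as code. The helper below is hypothetical (zonecfg performs its own validation; this is only a study sketch of the rules as stated above):

```shell
# Hypothetical checker for the zonename rules in Table 6.3:
# at most 64 characters, begins with an alphanumeric character,
# contains only alphanumerics, underscores, hyphens, and periods,
# and is neither "global" nor any name beginning with "SUNW".
valid_zonename() {
  name=$1
  [ ${#name} -le 64 ] || return 1          # length limit
  case "$name" in
    global|SUNW*) return 1 ;;              # reserved names
  esac
  case "$name" in
    [0-9a-zA-Z]*) : ;;                     # must start alphanumeric
    *) return 1 ;;
  esac
  case "$name" in
    *[!0-9a-zA-Z_.-]*) return 1 ;;         # reject disallowed characters
  esac
  return 0
}

valid_zonename apps && echo "apps: ok"              # -> apps: ok
valid_zonename global || echo "global: reserved"    # -> global: reserved
```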

.max-lwps: The maximum number of LWPs simultaneously available to this zone. attr: name.0.max-locked-memory: The total amount of physical locked memory available to a zone. device: match This specifies a device to be included in the zone.42 zonecfg:apps:net> end . value The attr resource type is mainly used for adding a comment to a zone. zone. The zone. The following example adds a comment for the zone apps: zonecfg:apps> add zonecfg:apps:attr> zonecfg:apps:attr> zonecfg:apps:attr> zonecfg:apps:attr> attr set name=comment set type=string set value=”The Application Zone” end There are several zone-wide resource controls: . The following code example specifies an IP address of 192. .max-lwps controls prevent the zone from exhausting resources that could affect the performance or operation of other zones. zone.42 and that the physical interface to be used is eri0: zonecfg:apps> add net zonecfg:apps:net> set physical=eri0 zonecfg:apps:net> set address=192. rctl: name. /dev/rmt/0: zonecfg:apps> add device zonecfg:apps:device> set match=/dev/rmt/0 zonecfg:apps:device> end .168. zone. value .168. type.max-swap: The total amount of swap that can be consumed by user process address space mappings and tmpfs mounts for this zone.cpu-shares: The number of fair share scheduler (FSS) CPU shares for this zone. zone. The following code example includes a tape drive. .cpu-shares and zone. physical This specifies the setup of the network interface for the zone. .0. net: address.286 Chapter 6: Solaris Zones .

The following example allocates 20 CPU shares to the zone:

zonecfg:apps> add rctl
zonecfg:apps:rctl> set name=zone.cpu-shares
zonecfg:apps:rctl> set value=(priv=privileged,limit=20,action=none)
zonecfg:apps:rctl> end

This demonstrates the use of the Solaris Containers feature to manage a resource within a zone. The resource manager in Solaris 10 is based on a Fair Share Scheduler (FSS). FSS ensures that processes get their fair share of the processing power, as opposed to a fixed percentage. If nothing else is using the processor, this zone gets 100% of the CPU power. If other zones are contending for CPU power, the shares determine who gets what. For an overview of CPU shares and the Fair Share Scheduler (FSS), refer to the Sun Microsystems "Solaris Containers—Resource Management and Solaris Zones" administration guide, described at the end of this chapter.

There are no known methods of breaking into a zone from another zone. However, it is possible for an attacker to try to use up all the PIDs in a system by issuing a denial-of-service (DOS) attack on one zone. Using up all the PIDs in a zone could essentially use up all the PIDs and virtual memory on the entire system, including the global zone. To prevent this type of attack, you could limit the number of lightweight processes (LWPs) that can be run simultaneously within a given zone:

zonecfg:apps> add rctl
zonecfg:apps:rctl> set name=zone.max-lwps
zonecfg:apps:rctl> add value (priv=privileged,limit=1000,action=deny)
zonecfg:apps:rctl> end

This prevents a zone's processes from having more than 1,000 simultaneous LWPs.

Viewing the Zone Configuration

Objective:
. Given a scenario, use the zoneadm command to view a zone.

The zone configuration data can be viewed in two ways:

. By viewing a file
. By using the export option of zonecfg

Both of these methods are described next. The zone configuration file is held in the /etc/zones directory and is stored as an XML file. To view the configuration for a zone named testzone, you would enter

# cat /etc/zones/testzone.xml<cr>

You can also view the zone configuration by using the info option with the zonecfg command:

# zonecfg -z testzone info<cr>

The system displays the following information about the zone:

zonename: testzone
zonepath: /export/zones/testzone
brand: native
autoboot: true
bootargs:
pool:
limitpriv:
scheduling-class:
ip-type: shared
inherit-pkg-dir:
        dir: /lib
inherit-pkg-dir:
        dir: /platform
inherit-pkg-dir:
        dir: /sbin
inherit-pkg-dir:
        dir: /usr
fs:
        dir: /data
        special: /dev/dsk/c0t1d0s7
        raw: /dev/rdsk/c0t1d0s7
        type: ufs
        options: []
net:
        address: 192.168.0.43
        physical: eri0
attr:
        name: comment
        type: string
        value: "first zone - testzone"

The alternative method of viewing the configuration is to use the zonecfg command with the export option. The following example shows how to export the configuration data for zone testzone:

# zonecfg -z testzone export<cr>

By default, the output goes to stdout, but you can change this by entering a filename instead. If you save the configuration to a file, it can be used later, if required, as a command file input to the zonecfg command. This option is useful if you have to re-create the zone for any reason.
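As a rough illustration of what lives in the /etc/zones XML file, the sketch below pulls the zonepath attribute out of a line written in the style of that file. The sample line is an assumption for illustration only; the real file's exact format is governed by the zonecfg DTD shipped with Solaris, so inspect the file on your own system:

```shell
# A sample line in the style of a zone's XML configuration file
# (illustrative only; the real format is defined by the zonecfg DTD).
line='<zone name="testzone" zonepath="/export/zones/testzone" autoboot="true">'
# Extract the value of the zonepath attribute.
zonepath=$(echo "$line" | sed 's/.*zonepath="\([^"]*\)".*/\1/')
echo "$zonepath"
```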

View all zones installed on a system by issuing the following command:

# zoneadm list -iv<cr>
ID NAME       STATUS     PATH                      BRAND    IP
 0 global     running    /                         native   shared
 - testzone   installed  /export/zones/testzone    native   shared
 - clonezone  installed  /export/zones/clonezone   native   shared

Installing a Zone

Objective:
. Given a scenario, use the zoneadm command to install a zone.

When a zone has been configured, the next step in its creation is to install it. You should verify a configuration before it is installed to ensure that everything is set up correctly. To verify the zone configuration for a zone named testzone, enter the following command:

# zoneadm -z testzone verify<cr>

If, for example, the zonepath does not exist, or it does not have the correct permissions set, the verify operation generates a suitable error message. When the zone has been successfully verified, it can be installed:

# zoneadm -z testzone install<cr>

A number of status and progress messages are displayed on the screen as the files are copied and the package database is updated. This has the effect of copying the necessary files from the global zone and populating the product database for the zone. Notice that while the zone is installing, its state changes from configured to incomplete. The state changes to installed when the install operation has completed.

Booting a Zone

Objective:
. Given a scenario, use the zoneadm command to boot a zone.

Before the boot command is issued, a zone needs to be transitioned to the ready state. This can be done using the zoneadm command:

# zoneadm -z testzone ready<cr>

The effect of the ready command is to establish the virtual platform, plumb the network interface, and mount any file systems. At this point, no processes are running.

NOTE
No need to ready  If you want to boot a zone, though, there is no need to transition to the ready state. The boot operation does this automatically before booting the zone.

To boot the zone testzone, issue the following command:

# zoneadm -z testzone boot<cr>

Confirm that the zone has booted successfully by listing the zone using the zoneadm command:

# zoneadm -z testzone list -v<cr>

The state of the zone will have changed to running if the boot operation was successful. You can also supply other boot arguments when booting a zone:

. To boot the zone into single-user mode, issue the following command:

# zoneadm -z testzone boot -s<cr>

. To boot a zone using the verbose option, issue the following command:

# zoneadm -z testzone boot -- -m verbose<cr>

. Boot a zone into the single-user milestone as follows:

# zoneadm -z testzone boot -- -m milestone=single-user<cr>

Halting a Zone

Objective:
. Given a scenario, use the zoneadm command to halt a zone.

To shut down a zone, issue the halt option of the zoneadm command:

# zoneadm -z testzone halt<cr>

The zone state changes from running to installed when a zone is halted.
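Scripts that drive these state transitions often need to test a zone's current state first. In addition to the -v listings shown earlier, zoneadm list accepts -p for machine-parsable, colon-delimited output. The sketch below extracts the state field from one such line; the line is hard-coded so the sketch runs anywhere, and the field order shown (id:name:state:path:uuid:brand:ip-type) should be verified against your Solaris release:

```shell
# One line of sample output in the style of `zoneadm list -p`
# (fields: zoneid:zonename:state:zonepath:uuid:brand:ip-type).
line="1:testzone:running:/export/zones/testzone::native:shared"
# Pull out the state field (third colon-delimited column).
state=$(echo "$line" | awk -F: '{print $3}')
echo "$state"
```

On a real system you would feed the output of zoneadm list -p itself through the same awk filter, for example to wait until a zone reaches the running state before using zlogin.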

Rebooting a Zone

Objective:
. Given a scenario, use the zoneadm command to reboot a zone.

A zone can be rebooted at any time without affecting any other zone on the system. The reboot option of the zoneadm command is used to reboot the zone testzone:

# zoneadm -z testzone reboot<cr>

The state of the zone should be running when the reboot operation has completed. You can also use the zlogin command to reboot a zone:

# zlogin <zone> reboot<cr>

zlogin is described later, in the "Zone Login" section.

Uninstalling a Zone

When a zone is no longer required, it should be uninstalled before it is deleted. In order to uninstall a zone, it must first be halted. When this has been done, issue the uninstall command to uninstall the zone testzone2:

# zoneadm -z testzone2 uninstall -F<cr>

The -F option forces the command to execute without confirmation. If you omit this option, you are asked to confirm that you want to uninstall the zone. List the zones on the system to verify that the zone has been uninstalled:

# zoneadm list -iv<cr>
ID NAME       STATUS     PATH                      BRAND    IP
 0 global     running    /                         native   shared
 - testzone1  installed  /export/zones/testzone1   native   shared

The zone is not listed because the -i option displays only zones in the installed state. To view all configured zones, regardless of their state, type the following:

# zoneadm list -cv<cr>
ID NAME       STATUS      PATH                      BRAND    IP
 0 global     running     /                         native   shared
 - testzone1  installed   /export/zones/testzone1   native   shared
 - testzone2  configured  /export/zones/testzone2   native   shared

Deleting a Zone

Objective:
. Given a scenario, use the zonecfg command to delete a zone.

When a zone has been successfully uninstalled, its configuration can be deleted from the system. Enter the zonecfg command to delete the zone testzone from the system:

# zonecfg -z testzone delete -F<cr>

The -F option forces the command to execute without confirmation. If you omit this option, you are asked to confirm that you want to delete the zone configuration.

EXAM ALERT
Remember the force  Unlike most other UNIX commands, zoneadm and zonecfg use an uppercase letter F to force the command to be executed without prompting you for confirmation. All other commands, such as mv, rm, and umount, always use a lowercase letter f. Make sure you are aware of this anomaly when you take the exam.

Zone Login

When a zone is operational and running, the normal network access commands, such as telnet, rlogin, and ssh, can be used to access it, but a nonglobal zone can also be accessed from the global zone using zlogin. This is necessary for administration purposes and to be able to access the console session for a zone. Only the superuser (root), or a role with the RBAC profile "Zone Management," can use the zlogin command from the global zone.

The syntax for the zlogin command is as follows:

zlogin [-CE] [-e c] [-l <username>] <zonename>
zlogin [-ES] [-e c] [-l <username>] <zonename> <utility> [argument...]

zlogin works in three modes:

. Interactive: A login session is established from the global zone.

. Noninteractive: A single command or utility can be executed. Upon completion of the command (or utility), the session is automatically closed.

. Console: A console session is established for administration purposes.

Table 6.4 describes the various options for zlogin.

Table 6.4   zlogin Options

Option                 Description
-C                     A connection is made to the zone's console device, and
                       zlogin operates in console mode.
-e c                   Changes the escape sequence to exit from the console
                       session. The default is the tilde dot (~.).
-E                     Disables the use of extended functions and also prohibits
                       the use of the escape sequence to disconnect from the
                       session. This option cannot be used in console mode.
-l <username>          Specifies a different user for the zone login. User root
                       is used when this option is omitted. This option cannot
                       be used when using zlogin in console mode.
-S                     "Safe" login mode. This option is used to recover a
                       damaged zone when other login forms do not work. This
                       option cannot be used in console mode.
<zonename>             Specifies the zone to connect to.
<utility> <argument>   Specifies the utility, or command, to run in the zone.
                       This option allows arguments to be specified and passed
                       to the utility or command being executed.

Initial Zone Login

When a zone has been installed and is booted for the first time, it is still not fully operational, because the internal zone configuration needs to be completed, similar to when you first install the Solaris 10 Operating Environment. This includes setting the following:

. Language
. Terminal type
. Hostname
. Security policy
. Name service
. Time zone
. Root password

These settings are configured interactively the first time you use zlogin to connect to the zone console. The zone then reboots to implement the changes. When this reboot completes, the zone is fully operational. If this is not completed, the zone will not be operational, and users will be unable to connect to the zone across the network.

NOTE
Initial console login  You must complete the configuration by establishing a console connection.

Using a sysidcfg File

Instead of completing the zone configuration interactively, you can preconfigure the required options in a sysidcfg file. This enables the zone configuration to be completed without intervention. The sysidcfg file needs to be placed in the /etc directory of the zone's root. For a zone named testzone with a zonepath of /export/zones/testzone, the sysidcfg file would be placed in /export/zones/testzone/root/etc.

The following example of a sysidcfg file sets the required parameters for a SPARC-based system but doesn't use a naming service or a security policy. Note that the root password entry needs to include the encrypted password:

lang=C
system_locale=en_GB
terminal=vt100
network_interface=primary { hostname=testzone }
security_policy=NONE
name_service=NONE
timezone=GB
nfs4_domain=dynamic
root_password=dKsw26jNk2CCE

In previous releases of Solaris 10, you could suppress the prompt for an NFSv4 domain name during the installation by creating the following file in the zone's root /etc directory:

# touch /export/zones/testzone/root/etc/.NFS4inst_state.domain<cr>

Since Solaris 10 08/07, this file is no longer created and has been replaced by the nfs4_domain keyword in the sysidcfg file.

NOTE
Install sysidcfg before boot  You need to install the sysidcfg file and create the .NFS4inst_state.domain file before the initial boot of the zone. Otherwise, the files will be ignored, and you will have to complete the zone setup interactively.

Logging in to the Zone Console

You can access the console of a zone by using the zlogin -C <zonename> command. The zone console is available as soon as the zone is in the installed state. If you are completing a hands-off configuration, connect to the console before the initial boot. You will see the boot messages appear in the console as well as the reboot after the sysidcfg file has been referenced.

The following session shows what happens when the zone testzone is booted for the first time, using a sysidcfg file:

# zlogin -C testzone<cr>

[NOTICE: Zone readied]
[NOTICE: Zone booting up]
SunOS Release 5.10 Version Generic 64-bit
Copyright 1983-2008 Sun Microsystems, Inc.  All rights reserved.
Use is subject to license terms.
Hostname: testzone
Loading smf(5) service descriptions: 100/100
Creating new rsa public/private host key pair
Creating new dsa public/private host key pair
Configuring network interface addresses: eri0.
rebooting system due to change(s) in /etc/default/init

[NOTICE: Zone rebooting]
SunOS Release 5.10 Version Generic 64-bit
Copyright 1983-2008 Sun Microsystems, Inc.  All rights reserved.
Use is subject to license terms.
Hostname: testzone

testzone console login:

To disconnect from the zone console, type ~. to break the connection. Be aware that breaking the connection to the zone's console is not the same as logging out. Connections to the console persist even when the zone is rebooted.

Logging in to a Zone

The superuser (root), or a role with the RBAC profile "Zone Management," can log directly into a zone from the global zone, without having to supply a password. The system administrator uses the zlogin command. The following example shows a zone login to the testzone zone. The command zonename is run to display the name of the current zone, and then the connection is closed:

# zlogin testzone<cr>
[Connected to zone 'testzone' pts/6]
Sun Microsystems Inc.   SunOS 5.10      Generic January 2005
# zonename<cr>
testzone
# exit<cr>
[Connection to zone 'testzone' pts/6 closed]

Running a Command in a Zone

In the previous section an interactive login to a zone was achieved. Here, a noninteractive login is initiated and a single command is executed. The connection is automatically disconnected as soon as the command has completed. The following example shows how this works. First, the hostname command is run, demonstrating that we are on the host called global. Then a noninteractive login to the testzone zone runs, which runs the zonename command and then exits automatically. Finally, the same hostname command is run, which shows we are back on the host called global:

# hostname<cr>
global
# zlogin testzone zonename<cr>
testzone
# hostname<cr>
global

EXAM ALERT
No -z in zlogin  Be careful not to include the -z option when answering exam questions on zlogin. It's easy to get confused with the zoneadm command, where the -z option is used.

Creating a Zone

Now that we have seen the technicalities of configuring a zone, let's put it all together and create a zone. This zone will be a sparse root zone with no additional file systems being mounted from the global zone. The zonepath will be /export/zones/testzone, and the IP address will be 192.168.0.43. Step By Step 6.1 configures the zone named testzone, installs it, and boots it. Finally, we will list the zone configuration data.

STEP BY STEP
6.1 Creating a Zone

1. Create the zonepath, and assign the correct permission (700) to the directory. The text in bold indicates what the user enters:

# mkdir -p /export/zones/testzone<cr>
# chmod 700 /export/zones/testzone<cr>

2. Enter the zonecfg command to configure the new zone:

# zonecfg -z testzone<cr>
testzone: No such zone configured
Use 'create' to begin configuring a new zone.

3. Perform the initial configuration on a zone named testzone:

zonecfg:testzone> create
zonecfg:testzone> set zonepath=/export/zones/testzone
zonecfg:testzone> set autoboot=true
zonecfg:testzone> add net
zonecfg:testzone:net> set physical=eri0
zonecfg:testzone:net> set address=192.168.0.43
zonecfg:testzone:net> end
zonecfg:testzone> add rctl
zonecfg:testzone:rctl> set name=zone.cpu-shares
zonecfg:testzone:rctl> add value (priv=privileged,limit=20,action=none)
zonecfg:testzone:rctl> end
zonecfg:testzone> add attr
zonecfg:testzone:attr> set name=comment
zonecfg:testzone:attr> set type=string
zonecfg:testzone:attr> set value="First zone - Testzone"
zonecfg:testzone:attr> end

Having entered the initial configuration information, use a separate login session to check to see if the zone exists using the zoneadm command:

# zoneadm -z testzone list -v<cr>
zoneadm: testzone: No such zone configured

At this point the zone configuration has not been committed and saved to disk, so it exists only in memory.

4. Verify and save the zone configuration. Exit zonecfg, and then check to see if the zone exists using the zoneadm command:

zonecfg:testzone> verify
zonecfg:testzone> commit
zonecfg:testzone> exit
# zoneadm -z testzone list -v<cr>
ID NAME      STATUS      PATH
 - testzone  configured  /export/zones/testzone

Notice that the zone now exists and that it has been placed in the configured state.

5. Use the zoneadm command to verify that the zone is correctly configured and ready to be installed:

# zoneadm -z testzone verify<cr>

If you do not verify the zone prior to installing it, the verification is performed automatically when the zone is installed.

6. Install the zone:

# zoneadm -z testzone install<cr>
Preparing to install zone <testzone>.

Creating list of files to copy from the global zone.
Copying <77108> files to the zone.
Initializing zone product registry.
Determining zone package initialization order.
Preparing to initialize <1141> packages on the zone.
Initialized <1141> packages on zone.
Zone <testzone> is initialized.
The file </export/zones/testzone/root/var/sadm/system/logs/install_log>
contains a log of the zone installation.

7. Change the state to ready and verify that it has changed, and then boot the zone and check that the state has changed to running:

# zoneadm -z testzone ready<cr>
# zoneadm -z testzone list -v<cr>
ID NAME      STATUS   PATH
 7 testzone  ready    /export/zones/testzone
# zoneadm -z testzone boot<cr>
# zoneadm -z testzone list -v<cr>
ID NAME      STATUS   PATH
 7 testzone  running  /export/zones/testzone

8. Connect to the console to watch the system boot and to finish the configuration:

# zlogin -C testzone<cr>
[Connected to zone 'testzone' console]

After the system initializes, you're prompted to enter the system identification information, such as hostname, network information, time zone, and root password. The zone is now ready to be used operationally.

9. View the configuration data by exporting the configuration to stdout:

# zonecfg -z testzone export<cr>

The system displays the following information:

create -b
set zonepath=/export/zones/testzone
set autoboot=true
add inherit-pkg-dir
set dir=/lib
end
add inherit-pkg-dir
set dir=/platform
end
add inherit-pkg-dir
set dir=/sbin
end
add inherit-pkg-dir
set dir=/usr

end
add net
set address=192.168.0.43
set physical=eri0
end
add rctl
set name=zone.cpu-shares
add value (priv=privileged,limit=20,action=none)
end
add attr
set name=comment
set type=string
set value="First zone - Testzone"
end

Notice the four default inherit-pkg-dir entries showing that this is a sparse root zone.

EXAM ALERT
Zone configuration file  You can also view the configuration file directly by viewing the /etc/zones/<zonename>.xml file from the global zone. This file is created when you save the configuration using zonecfg. You might be asked this location on the exam.

Making Modifications to an Existing Zone

After a zone has been installed, you can still reconfigure it. For example, suppose you want to add a file system to an existing zone. Let's say that you have a file system named /data in the global zone, and you want to add it to the nonglobal zone named testzone. This task is performed from the global zone:

Halt the zone:

# zoneadm -z testzone halt<cr>

After the zone has been halted, use the zonecfg command to edit the zone configuration:

# zonecfg -z testzone<cr>
zonecfg:testzone> add fs
zonecfg:testzone:fs> set dir=/data
zonecfg:testzone:fs> set special=/dev/dsk/c0t1d0s7
zonecfg:testzone:fs> set raw=/dev/rdsk/c0t1d0s7
zonecfg:testzone:fs> set type=ufs
zonecfg:testzone:fs> end
zonecfg:testzone> exit
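The add fs session above can also be driven noninteractively: as noted earlier in the chapter, zonecfg accepts a command file. The sketch below only builds such a command file (the file name is arbitrary); the zonecfg invocation itself is shown in a comment, since it works only on a Solaris global zone:

```shell
# Build a zonecfg command file equivalent to the interactive
# add fs session shown above.
cmdfile=$(mktemp)
cat > "$cmdfile" <<'EOF'
add fs
set dir=/data
set special=/dev/dsk/c0t1d0s7
set raw=/dev/rdsk/c0t1d0s7
set type=ufs
end
EOF
# On the global zone, you would then apply it with:
#   zonecfg -z testzone -f "$cmdfile"
grep -c '^set ' "$cmdfile"
```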

View the entire zone configuration using the following command:

# zonecfg -z testzone info<cr>

All the information about the zone configuration is displayed. The following is the information displayed from the zonecfg command related to the file system that was just added:

<output has been truncated>
fs:
        dir: /data
        special: /dev/dsk/c0t1d0s7
        raw: /dev/rdsk/c0t1d0s7
        type: ufs
        options: []
<output has been truncated>

Boot the nonglobal zone. The /data file system will be mounted during the boot process. Many operations can be performed on a running zone without a reboot, such as adding a network controller, storage device, or file system. But this would require additional steps.

Moving a Zone

You will move a zone when you simply want to relocate a nonglobal zone from one point on a system to another point. Typically, it's when you want to move a zone's path on a system from one directory to another. The new directory can be on an alternate file system, but it cannot be on an NFS mounted file system. The zoneadm command is used to halt and then move a zone, as demonstrated in the following example. When the zone is moved to a different file system, the data is copied, and the original directory is removed. All data is copied using cpio to preserve all data within the zone. In the following example, I move the zone named testzone from /export/zones/testzone to the /testzone file system:

# zoneadm -z testzone halt<cr>
# zoneadm -z testzone move /testzone<cr>

Migrating a Zone

You migrate a zone when you want to move a zone from one system to another. The following rules apply when migrating a zone:

. Typically, the zone can be migrated if the new host has the same or later versions of the zone-dependent packages and associated patches.

. Starting with the Solaris 10/08 release, using zoneadm attach with the -u option updates those packages within the zone to match the new host. If the new host has a mixture of higher and lower version patches as compared to the source host,

an update during the attach operation is not allowed.

. Beginning with the Solaris 10/08 release, zoneadm attach with the -u option also enables migration between machine classes, such as from sun4u to sun4v.

During this procedure, you halt the zone, detach the zone, copy the zone configuration to the new system, reconfigure the zone on the new system, and finally attach the zone and boot it. Detaching a zone leaves the zone in a configured state on the original system. An XML file, called the manifest, is generated and stored in the zone's path. The manifest describes the versions of installed packages and patches installed on the host. The manifest contains information required to verify that the zone can be successfully attached to systemB. Step By Step 6.2 describes the process of migrating a zone from systemA to systemB.

STEP BY STEP
6.2 Migrating a Zone

A zone named "testzone" already exists and is currently running on systemA. The zone's path is /export/zones/testzone.

1. Halt the zone:

# zoneadm -z testzone halt<cr>

2. Detach the zone. The following command detaches the testzone:

# zoneadm -z testzone detach<cr>

3. Gather the data from the zone path on the original system, and copy it to systemB. I'll use the tar command to create a tar file of the data:

# cd /export/zones<cr>
# tar cf testzone.tar testzone<cr>

4. I'll use sftp to transfer the tar file to systemB:

# sftp systemB<cr>
Connecting to systemB . . .
Password:
sftp> cd /export/zones
sftp> put testzone.tar
Uploading testzone.tar to /export/zones/testzone.tar
sftp> bye

5. Log into systemB, and change to the /export/zones directory:

# cd /export/zones<cr>

6. Extract the tar file:

# tar xf testzone.tar<cr>

7. Use the zonecfg command to create the zone configuration:

# zonecfg -z testzone<cr>
testzone: No such zone configured

The system displays the zonecfg:testzone> prompt.

8. Use the create subcommand to begin configuring a new zone:

zonecfg:testzone> create -a /export/zones/testzone

The -a option instructs zonecfg to use the XML description of the detached zone.

9. Now is the time to make any changes to the zone configuration. For example, if systemB has a different network interface than what was installed on systemA, you need to make this modification. Let's assume that systemA has an hme interface and systemB has an eri interface. I would make this change to the network interface:

zonecfg:testzone> select net physical=hme0
zonecfg:testzone:net> set physical=eri0

10. Now that the configuration is correct, you can attach the zone:

# zoneadm -z testzone attach<cr>

Cloning a Zone

You clone a zone when it is copied from its current zone path to a new zone path. The objective is to have two identical nonglobal zones running on the same global zone. The process to clone a zone is outlined in Step By Step 6.3.

STEP BY STEP
6.3 Cloning a Zone

A zone named "testzone" already exists and is currently running on systemA. The zone's path is /export/zones/testzone. You want to create a clone of this zone and name it "clonezone." Its zone path will be /export/zones/clonezone.

1. As root (on the global zone), halt the testzone:

# zoneadm -z testzone halt<cr>

2. Configure the new zone, clonezone, by exporting the configuration from testzone. The procedure will create a configuration file named /export/zones/master:

# zonecfg -z testzone export -f /export/zones/master<cr>

3. Use the vi editor to edit the master file that was created in the previous step. Modify the zone properties, such as zonepath, that must differ in the new zone. The following output is a sample master file; the items in bold have been modified for the new zone, clonezone:

# more /export/zones/master<cr>

The system displays the following information:

create -b
set zonepath=/export/zones/clonezone
set autoboot=true
set ip-type=shared
add inherit-pkg-dir
set dir=/lib
end
add inherit-pkg-dir
set dir=/platform
end
add inherit-pkg-dir
set dir=/sbin
end
add inherit-pkg-dir
set dir=/usr
end
add fs
set dir=/data
set special=/dev/dsk/c0t2d0s7
set raw=/dev/rdsk/c0t2d0s7
set type=ufs
end
add net
set address=192.168.0.44
set physical=eri0
end
add attr
set name=comment
set type=string
set value="first zone - clonezone"
end

4. Create a directory for the new zone, and set the permissions:

# mkdir /export/zones/clonezone<cr>
# chmod 700 /export/zones/clonezone<cr>

5. Create the new zone, clonezone:

# zonecfg -z clonezone -f /export/zones/master<cr>

6. Install the new zone by cloning testzone:

# zoneadm -z clonezone clone testzone<cr>
Cloning zonepath /export/zones/testzone...

7. List the zones on the system, and verify that both zones are installed:

# zoneadm list -iv<cr>
ID NAME       STATUS     PATH                      BRAND    IP
 0 global     running    /                         native   shared
 - testzone   installed  /export/zones/testzone    native   shared
 - clonezone  installed  /export/zones/clonezone   native   shared

Backing Up a Zone

To make a backup of a zone from the global zone, follow the steps outlined in Step By Step 6.4.

STEP BY STEP
6.4 Backing Up a Zone

A zone named testzone already exists and is currently running on systemA. The zone's path is /export/zones/testzone. Follow these steps to back up this zone using ufsdump. The backup will be saved in /backup/testzonebkup.dmp.

1. Halt the zone:

# zoneadm -z testzone halt<cr>

2. Perform the backup:

# ufsdump 0f /backup/testzonebkup.dmp /export/zones/testzone<cr>

3. After the ufsdump is complete, boot the zone:

# zoneadm -z testzone boot<cr>

You could also back up a zone while it is running by first creating a UFS snapshot of the zone's path using fssnap and then backing up the snapshot using ufsdump. UFS snapshot is described in the Solaris 10 System Administration Part 1 book.

Summary

The Solaris zones facility is a major step forward in the Solaris Operating Environment. It allows virtualization of operating system services so that applications can run in an isolated and secure environment. Previously, this functionality has been available only on high-end, extremely expensive servers. One of the advantages of zones is that multiple versions of the same application can be run on the same physical system, but independently of each other. Solaris zones also protects the user from having a single application that can exhaust the CPU or memory resources when it encounters an error.

This chapter has described the concepts of Solaris zones and the zone components as well as the types of zone that can be configured. You have seen how to configure a zone from scratch and install and boot a zone. You've learned how to list, view, uninstall, remove, move, migrate, and clone a zone configuration. You've also learned how to access the zone console and log in to a zone for system administration purposes.

Key Terms

. zone
. Global zone
. Nonglobal zone
. Branded zone
. Whole root zone
. Sparse root zone
. zonecfg
. zoneadm
. zoneadmd
. zsched
. zlogin
. Container
. Consolidation
. Virtualization
. Resource management
. Isolation

Apply Your Knowledge

Exercise

6.1 Creating a Whole Root Zone

In this exercise, you'll see how to create a nonglobal zone. The zone you will create will be called zone1, and its IP address will be 192.168.0.28. In this exercise, only the basic setup is required, but in order to create a whole root zone, the default inherited file systems must be removed. This is necessary to ensure that the entire Solaris package collection is copied to the zone.

You will need a Solaris 10 workstation with approximately 3.5GB of free disk space. Make sure you are logged in as root and are running a window system (either CDE or Gnome). Open a terminal window, and identify a file system with at least 3.5GB of free disk space. For this example, we have used the /export file system.

Estimated time: 1 hour

1. Create the zone directory. You also need to set the permissions on the directory. Enter the following commands at the command prompt:

# mkdir -p /export/zones/zone1<cr>
# chmod 700 /export/zones/zone1<cr>

2. Start creating the zone using the zonecfg command. Enter the commands shown in bold:

# zonecfg -z zone1<cr>
zone1: No such zone configured
Use 'create' to begin configuring a new zone.
zonecfg:zone1> create
zonecfg:zone1> set zonepath=/export/zones/zone1
zonecfg:zone1> set autoboot=true
zonecfg:zone1> add net
zonecfg:zone1:net> set address=192.168.0.28
zonecfg:zone1:net> set physical=eri0
zonecfg:zone1:net> end
zonecfg:zone1> add rctl
zonecfg:zone1:rctl> set name=zone.cpu-shares
zonecfg:zone1:rctl> add value (priv=privileged,limit=20,action=none)
zonecfg:zone1:rctl> end
zonecfg:zone1> add attr
zonecfg:zone1:attr> set name=comment
zonecfg:zone1:attr> set type=string
zonecfg:zone1:attr> set value="This is a whole root zone"
zonecfg:zone1:attr> end
zonecfg:zone1> remove inherit-pkg-dir dir=/lib

and a time zone. Verify the zone. The next thing to do is to make the zone ready and boot it so that it is running: # zoneadm -z zone1 ready<cr> # zoneadm -z zone1 boot<cr> 7. When it’s complete.307 Apply Your Knowledge zonecfg:zone1> zonecfg:zone1> zonecfg:zone1> zonecfg:zone1> zonecfg:zone1> zonecfg:zone1> remove inherit-pkg-dir dir=/platform remove inherit-pkg-dir dir=/sbin remove inherit-pkg-dir dir=/usr verify commit exit 3. because the internal configuration of the zone has yet to be completed. This will fail. You can view the state by entering the following command: # zoneadm -z zone1 list -v<cr> 4. The reboot takes only a few seconds. A console session is established with the new zone. . a naming service (choose “none” if a naming service is not being used). the hostname for the zone. you will be asked to enter a root password. 10. locale. When you have entered all the required information. The zone has now been created and should be in the configured state. terminal. a final prompt appears concerning the NFSv4 domain name. When it has completed. Add an entry to the global zone /etc/hosts file. you will be able to telnet to the zone as if it were any other remote system. 9. A number of questions need to be answered before the zone is fully operational. Several messages inform you of the progress of the installation. and then enter the command to install the files from the global zone: # zoneadm -z zone1 verify<cr> # zoneadm -z zone1 install<cr> 5. a security policy (if required). Complete the installation by logging in to the console of the newly created zone: # zlogin -C zone1<cr> 8. Finally. The zone reboots to implement the configuration you have just specified. Answer this question (“no” is the default). verify that the zone state has now changed to installed by re-entering the following command: # zoneadm -z zone1 list -v<cr> 6. and try to connect to the hostname for the zone using telnet. Enter the language.
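The interactive zonecfg session in step 2 can also be driven non-interactively with zonecfg's -f option, which reads the same subcommands from a file. The following command file is a sketch for this exercise; the file name zone1.cfg is an arbitrary assumption, and the commands simply mirror the interactive session above:

```
# zone1.cfg -- used as: zonecfg -z zone1 -f zone1.cfg
# (hypothetical file name; same subcommands as the interactive session)
create
set zonepath=/export/zones/zone1
set autoboot=true
add net
set address=192.168.0.28
set physical=eri0
end
remove inherit-pkg-dir dir=/lib
remove inherit-pkg-dir dir=/platform
remove inherit-pkg-dir dir=/sbin
remove inherit-pkg-dir dir=/usr
verify
commit
```

This is useful when you need to create several whole root zones with the same layout, since the file can be edited and reused for each zone.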

Exam Questions

1. Which of the following is the correct command to install the zone called appzone1?
❍ A. zonecfg -z appzone1 install
❍ B. zoneadm appzone1 install
❍ C. zoneadm -z appzone1 install
❍ D. zonecfg appzone1 install

2. Which of the following would uninstall the zone called appzone1 automatically, without requesting confirmation from the system administrator?
❍ A. zonecfg appzone1 uninstall
❍ B. zoneadm -z appzone1 uninstall -F
❍ C. zoneadm -z appzone1 install -U
❍ D. zoneadm -z appzone1 uninstall

3. Which of the following are valid types of Root File System types for a nonglobal zone? (Choose two.)
❍ A. Whole Root
❍ B. Zone Root
❍ C. Part Root
❍ D. Sparse Root

4. You are the system administrator, and you need to administer a zone called testzone. Which command will perform an interactive administration login to the zone directly from the global zone?
❍ A. zlogin -z testzone
❍ B. zlogin testzone
❍ C. zoneadm testzone
❍ D. zoneadm -z testzone

5. You are the system administrator, and you need to see if the user account testuser has been created in the zone testzone. Which command from the global zone will achieve this using a noninteractive login to the zone?
❍ A. zoneadm testzone grep testuser /etc/passwd
❍ B. zlogin -z testzone grep testuser /etc/passwd
❍ C. grep testuser /etc/passwd
❍ D. zlogin testzone grep testuser /etc/passwd

6. Which command displays the current state of the zone testzone?
❍ A. zoneadm list
❍ B. zoneadm -z testzone list -v
❍ C. zonecfg -z testzone list
❍ D. zlogin testzone zonename

7. You are creating a new nonglobal zone. Which of the following zone names is invalid?
❍ A. zone1
❍ B. sunzone
❍ C. SUNWzone
❍ D. sun-zone

8. Which of the following describes how networking in a nonglobal zone is implemented in Solaris zones?
❍ A. Each nonglobal zone uses a logical interface and is assigned a unique IP address.
❍ B. Each nonglobal zone requires its own physical network interface.
❍ C. All nonglobal zones must use the same IP address.
❍ D. Nonglobal zones must use unique port numbers to avoid conflict.

9. Which of the following are features of the global zone? (Choose three.)
❍ A. The global zone is always assigned Zone ID 0.
❍ B. It contains a full installation of Solaris system packages.
❍ C. It contains a subset of the installed Solaris system packages.
❍ D. It provides the single bootable instance of the Solaris Operating Environment that runs on the system.
❍ E. The global zone is not aware of the existence of other zones.

10. Which daemon process allocates the zone ID for a nonglobal zone, plumbs the virtual network interface, and mounts any loopback or conventional file systems?
❍ A. zoneadmd
❍ B. zsched
❍ C. init
❍ D. inetd

11. You are configuring a nonglobal zone called zone1, which has a zonepath of /export/zones/zone1. You have preconfigured the zone configuration by creating a sysidcfg file, and you need to install it in the correct location so that when you log in following the initial boot of the zone, the configuration will complete automatically. Where will you install the sysidcfg file?
❍ A. /export/zones/zone1
❍ B. /etc
❍ C. /export/zones/zone1/etc
❍ D. /export/zones/zone1/root/etc

12. Which transitional zone state can be seen when a nonglobal zone is being installed or uninstalled?
❍ A. Ready
❍ B. Incomplete
❍ C. Configured
❍ D. Installed

13. You have a nonglobal zone called tempzone that is no longer required. The zone has already been halted and uninstalled. Which command deletes the zone configuration for this zone without asking for confirmation?
❍ A. zonecfg delete tempzone
❍ B. zoneadm -z tempzone delete -F
❍ C. zonecfg -z tempzone delete -F
❍ D. zoneadm delete tempzone

14. Which option of the zlogin command would be used to gain access to a damaged zone for recovery purposes when other forms of login are not working?
❍ A. -C
❍ B. -S

cat /etc/zones/newzone.) ❍ A.xml ❍ C. Uninstalled ❍ D. and you want to view the zone configuration data. -l ❍ D. zoneadm -z testzone -s boot ❍ D. ❍ B. It contains a subset of the installed Solaris system packages. It is always assigned Zone ID 0. zoneadm -z newzone list -v ❍ D. Which of the following are features of a nonglobal zone? (Choose two. zoneadm -zs testzone boot ❍ B.311 Apply Your Knowledge ❍ C. Its zone ID is assigned when it is booted.) ❍ A. ❍ D. zonecfg -z newzone export 18. Booting F. Running 16. zoneadm -z testzone boot -. Ready ❍ ❍ E. zoneadm -z testzone boot -s ❍ C. You have created a new nonglobal zone called newzone. zoneadm -z testzone boot -. cat /export/zones/newzone/root/etc/zones/newzone.) ❍ A. 17.-m milestone=single-user . Configured ❍ B.) ❍ A. Which of the following will display the required information? (Choose two. ❍ C.xml ❍ B. Which of the following are valid states for a nonglobal zone? (Choose three. -E 15. Prepared ❍ C. ❍ E. Which commands can be used to boot a zone into single-user mode or into the single-user milestone? (Choose two.-m single-user ❍ E. It contains a full installation of Solaris system packages. It provides the single bootable instance of the Solaris Operating Environment that runs on a system.

Answers to Exam Questions

1. C. The command zoneadm -z appzone1 install will successfully install the zone called appzone1. For more information, see the section "Installing a Zone."

2. B. The command zoneadm -z appzone1 uninstall -F will successfully uninstall the zone called appzone1 without asking the administrator for confirmation. For more information, see the section "Uninstalling a Zone."

3. A, D. Whole Root and Sparse Root are valid types of Root File System in the nonglobal zone. For more information, see the section "Nonglobal Zone Root File System Models."

4. B. The command zlogin testzone will initiate an interactive login to the zone from the global zone. For more information, see the section "Logging in to a Zone."

5. D. The command zlogin testzone grep testuser /etc/passwd will run the command grep testuser /etc/passwd in the testzone zone, in a noninteractive login from the global zone. For more information, see the section "Running a Command in a Zone."

6. B. The command zoneadm -z testzone list -v displays the current state of the zone called testzone. For more information, see the section "Zone States."

7. C. The zone name "SUNWzone" is invalid because all zone names beginning with "SUNW" are reserved. For more information, see the section "The zonecfg Command."

8. A. Networking in nonglobal zones is implemented by using a logical network interface, and the zone is assigned a unique IP address. For more information, see the section "Networking in a Zone Environment."

9. A, B, D. The global zone is always assigned Zone ID 0, it contains a full installation of Solaris system packages, and it also provides the single bootable instance of the Solaris Operating Environment that runs on the system. For more information, see the section "Zone Features."

10. A. The zoneadmd daemon process assigns the zone ID to a nonglobal zone; it also plumbs the virtual network interface and mounts any loopback or conventional file systems. For more information, see the section "Zone Daemons."

11. D. In order to get the nonglobal zone zone1 to automatically complete the zone configuration, the sysidcfg would be installed in the /export/zones/zone1/root/etc directory. For more information, see the section "Using a sysidcfg File."

12. B. The zone state being described is incomplete, because it is a transitional state that is displayed when a nonglobal zone is being installed or uninstalled. For more information, see the section "Zone States."

13. C. The command zonecfg -z tempzone delete -F will successfully delete the configuration for zone tempzone. For more information, see the section "The zonecfg Command."

14. B. The zlogin -S command is used to gain access to a damaged zone for recovery purposes when other forms of login are not working. For more information, see the section "Zone Login."

15. A, D, F. The valid zone states are configured, ready, and running. For more information, see the section "Zone States."

16. B, D. The nonglobal zone contains a subset of the installed Solaris system packages, and its zone ID is assigned by the system when it boots. For more information, see the section "Zone Features."

17. B, D. The two ways of displaying the zone configuration data for the zone newzone are cat /etc/zones/newzone.xml and zonecfg -z newzone export. For more information, see the section "Viewing the Zone Configuration."

18. B, E. Boot a zone into single-user mode using zoneadm -z testzone boot -s. You can also boot into the single-user milestone using the following command: zoneadm -z testzone boot -- -m milestone=single-user. For more information, see the section "Booting a Zone."

Suggested Reading and Resources

. "System Administration Guide: Solaris Containers—Resource Management and Solaris Zones" manual from the Solaris 10 Documentation CD.

. "System Administration Guide: Solaris Containers—Resource Management and Solaris Zones" book (part number 817-1592-15) in the System Administration Collection of the Solaris 10 documentation set, available at http://docs.sun.com.


SEVEN

7 Advanced Installation Procedures: JumpStart, Flash Archive, and PXE

Objectives

The following test objectives for exam CX-310-202 are covered in this chapter:

. Explain custom JumpStart configuration, including the boot, identification, configuration, and installation services.
This chapter helps you understand the components of a JumpStart network installation. You'll learn about setting up servers and clients to support a JumpStart installation.

. Configure a JumpStart including implementing a JumpStart server, editing the sysidcfg, rules, and profile files, and establishing JumpStart software alternatives (setup, establishing alternatives, troubleshooting, and resolving problems).
This chapter shows you how to implement a JumpStart installation, including JumpStart-related commands, configuration files, and services, as well as the files and scripts that are modified and used.

. Explain Flash, identify requirements and install methods, create and manipulate the Flash archive, and use it for installation.
The Solaris Flash feature takes a snapshot of a Solaris operating environment, complete with patches and applications, if desired. It can be used only in initial installations, not upgrades.

. Given a Preboot Execution Environment (PXE) installation scenario, configure both the install and DHCP server, and boot the x86 client.
This chapter shows you how to use the PXE to boot and install an x86 client across the network.

Outline

Introduction
Custom JumpStart
  Preparing for a Custom JumpStart Installation
  What Happens During a Custom JumpStart Installation?
  Differences Between SPARC and x86/x64-Based Systems
    JumpStart Stages: SPARC System
    JumpStart Stages: x86/x64 System
  The Boot Server
    /etc/ethers
    /etc/hosts
    /etc/dfs/dfstab
    /etc/bootparams
    /tftpboot
    Setting Up the Boot Server
  The Install Server
  The Configuration Server
    Setting Up a Profile Diskette
  The Rules File
    Rules File Requirements
    Rules File Matches
    Validating the Rules File
  begin and finish Scripts
  Creating class Files
    archive_location
    backup_media
    boot_device
    bootenv_createbe
    client_arch
    client_root
    client_swap
    cluster
    dontuse
    filesys
    forced_deployment
    install_type
    geo
    layout_constraint
    local_customization
    locale
    metadb
    no_content_check
    no_master_check
    num_clients
    package
    partitioning
    pool
    root_device
    system_type
    usedisk
  Testing Class Files
  sysidcfg File
    Name Service, Domain Name, Terminal, and Name Server Keywords
    Network-Related Keywords
    Setting the Root Password
    Setting the System Locale, Time Zone, and Time Server
  Setting Up JumpStart in a Name Service Environment
  Setting Up Clients
  Troubleshooting JumpStart
    Installation Setup
    Client Boot Problems
A Sample JumpStart Installation
  Setting Up the Install Server
  Creating the JumpStart Directory
  Setting Up a Configuration Server
  Setting Up Clients
  Starting Up the Clients
Solaris Flash
  Creating a Flash Archive
  Using the Solaris Installation Program to Install a Flash Archive
  Creating a Differential Flash Archive
  Solaris Flash and JumpStart
Preboot Execution Environment (PXE)
  Preparing for a PXE Boot Client
  Configuring the DHCP Server
  Adding an x86 Client to Use DHCP
  Booting the x86 Client
Summary
Key Terms
Apply Your Knowledge
  Exercise
  Exam Questions
  Answers to Exam Questions
  Suggested Reading and Resources

Study Strategies

The following strategies will help you prepare for the test:

. Practice the Step By Step examples provided in this chapter on a Solaris system. Make sure you are comfortable with the concepts being introduced.

. State the purpose of the JumpStart server and identify the main components of each type of server. Be sure that you understand each step and can describe the process of setting up a boot server, an install server, and a configuration server. You should also be able to identify the events that occur during the JumpStart client boot sequence.

. Given the appropriate software source, be prepared to explain how to create a configuration server with a customized rules file and class files. You'll see questions on the exam related to the add_install_client and add_to_install_server scripts.

. State the purpose of the sysidcfg file, the class file, and the rules file.

. Understand each of the commands described in this chapter, especially the ones used in the examples. Get familiar with all the options.

. State the features and limitations of Solaris Flash and be able to implement a Flash Archive. Practice the Flash Archive example in this chapter using two Solaris systems.

. Become familiar with the Preboot Execution Environment (PXE) features, the requirements, and the procedures to follow in order to get an x86 client to successfully boot across the network. Also, make sure you understand what the DHCP symbols represent, and be prepared for a question in the exam that asks you to match a symbol with its corresponding description.

. Learn the terms listed in the "Key Terms" section near the end of this chapter. Be prepared to define each term.

Custom JumpStart

Objectives

. Explain custom JumpStart configuration, including the boot, identification, configuration, and installation services.

. Configure a JumpStart including implementing a JumpStart server, editing the sysidcfg, rules, and profile files, and establishing JumpStart software alternatives (setup, establishing alternatives, troubleshooting, and resolving problems).

Introduction

There are six ways to install the Solaris software on a system. The first two interactive methods of installation, GUI and command-line, are described in Solaris 10 System Administration Exam Prep (Exam CX-310-200), Part I. The more advanced installation methods—custom JumpStart, Solaris Flash, WAN boot, and Live Upgrade—are described in this book.

If you use the interactive method to install the operating system, you must interact with the installation program by answering various questions. At a large site with several systems that are to be configured exactly the same, this task can be monotonous and time-consuming. In addition, there is no guarantee that each system is set up the same. Custom JumpStart solves this problem by providing a method to create sets of configuration files beforehand so that the installation process can use them to configure each system automatically. The custom JumpStart method of installing the operating system provides a way to install groups of similar systems automatically and identically; this method effectively creates a clone.

In this chapter, I describe how to use custom JumpStart to install the operating system onto SPARC-based clients across the network. Also in this chapter, I describe the Solaris Flash Archive method of installation. With a Flash Archive, you can take a complete snapshot of the Solaris operating environment on a running system, including patches and applications, and create an archive that can be used to install other systems. Finally, I describe the Preboot Execution Environment (PXE). PXE is a direct form of network boot that can be used to install the Solaris Operating Environment onto x86/x64-based systems across the network using DHCP. It does not require the client to have any form of local boot media. The topics are quite lengthy, so I divided them into two chapters.

Custom JumpStart

Custom JumpStart is used to install groups of similar systems automatically and identically. It requires up-front work, creating custom configuration files before the systems can be installed, but it's the most efficient way to centralize and automate the operating system installation at large enterprise sites. Custom JumpStart can be set up to be completely hands-off. The custom configuration files that need to be created for JumpStart are the rules and class files. Both of these files consist of several keywords and values and are described in this chapter. Another file that is introduced in this chapter is the sysidcfg file, which can be used to preconfigure the system identification information and achieve a fully hands-off installation. Table 7.1 lists the various commands that are introduced in this chapter.

Table 7.1 JumpStart Commands

setup_install_server: Sets up an install server to provide the operating system to the client during a JumpStart installation. This command is also used to set up a boot-only server when the -b option is specified.

add_to_install_server: A script that copies additional packages within a product tree on the Solaris 10 Software and Solaris 10 Languages CDs to the local disk on an existing install server. This is not necessary when creating an install server from a DVD.

add_install_client: A command that adds network installation information about a system to an install or boot server's /etc files so that the system can install over the network.

rm_install_client: Removes JumpStart clients that were previously set up for network installation.

check: Validates the information in the rules file.

pfinstall: Performs a dry run installation to test the class file.

patchadd -C: A command to add patches to the files in the miniroot (located in the Solaris_10/Tools/Boot directory) of an installation image created by setup_install_server. This facility enables you to patch Solaris installation commands and other miniroot-specific commands.

JumpStart has three main components:

. Boot and Client Identification Services: These services typically are provided by a networked boot server and provide the information that a JumpStart client needs to boot using the network. Alternatively, the identification service can be provided by any network server configured to provide this service.

. Configuration Services: These are provided by a networked configuration server and provide information that a JumpStart client uses to partition disks, create file systems, add or remove Solaris packages, and perform other configuration tasks.

. Installation Services: These are provided by a networked install server, which provides an image of the Solaris operating environment the JumpStart client uses as its source of data to install.

Each of these components is described in this chapter. If any of these three components is improperly configured, the JumpStart clients can

. Fail to boot.
. Fail to find a Solaris Operating Environment to load.
. Fail to partition disks, create file systems, and load the operating environment.
. Ask questions interactively for configuration.

NOTE Server configurations: At times we describe the boot server, the install server, and the configuration server as though they are three separate systems. The reality, however, is that most sites have one system that performs all three functions.

Preparing for a Custom JumpStart Installation

The first step in preparing a custom JumpStart installation is to decide how you want the systems at your site to be installed. Here are some questions you should answer before you begin:

. Will the installation be an initial installation or an upgrade?
. Who will use the system?
. What applications will the system support?
. How much swap space is required?

These questions will help you group the systems when you create the class and rules files later in this chapter. After you answer these questions, group systems according to their configuration (as shown in the example of a custom JumpStart near the end of this chapter). Additional concerns to be addressed include what software packages need to be installed and what size the disk partitions need to be in order to accommodate the software.

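As a sketch of how the commands in Table 7.1 fit together on the server side, the session below sets up an install server and registers a JumpStart client. The host names, paths, and the sun4u platform group are illustrative assumptions, and option details vary by media and release, so check the setup_install_server(1M) and add_install_client(1M) man pages before running anything like this:

```
# Copy the Solaris 10 media image to the install server's disk
# (media mount point and target directory are examples):
# cd /cdrom/cdrom0/s0/Solaris_10/Tools
# ./setup_install_server /export/install

# From the installed image, register a client for network installation
# (configuration server path, client name, and platform group are examples):
# cd /export/install/Solaris_10/Tools
# ./add_install_client -c server1:/export/jumpstart client1 sun4u

# Later, remove the client's network-installation entries:
# ./rm_install_client client1
```

Running setup_install_server with -b instead would create a boot-only server, as noted in Table 7.1.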
The next step in preparing a custom JumpStart installation is to create the configuration files that will be used during the installation: the rules.ok file (a validated rules file) and a class file for each group of systems. The rules.ok file is a file that should contain a rule for each group of systems you want to install. Each rule distinguishes a group of systems based on one or more system attributes. The rule links each group to a class file, which is a text file that defines how the Solaris software is to be installed on each system in the group. Both the rules.ok file and the class files must be located in a JumpStart directory you define.

The custom JumpStart configuration files that you need to set up can be located on either a diskette (called a configuration diskette) or a server (called a configuration server). Use a configuration diskette when you want to perform custom JumpStart installations on nonnetworked standalone systems. Use a configuration server when you want to perform custom JumpStart installations on networked systems that have access to the server. This chapter covers both procedures.

What Happens During a Custom JumpStart Installation?

This section provides a quick overview of what takes place during a custom JumpStart installation. To be able to start up and install the operating system on a client, you need to set up three servers: a boot server, an install server, and a configuration server. These can be three separate servers; however, in most cases, one server provides all these services. To prepare for the installation, you create a set of JumpStart configuration files, the rules and class files, in the JumpStart directory, which is usually located on the boot server. Next, you set up a server to provide a startup kernel that is passed to the client across the network, on a server that is located on the same network as the client you are installing. This is called the boot server (or sometimes it is called the startup server). After the client starts up, the boot server directs the client to the JumpStart directory. The configuration files in the JumpStart directory direct and automate the entire Solaris installation on the client. Each step is described in detail in this chapter.

Differences Between SPARC and x86/x64-Based Systems

SPARC and x86/x64-based systems differ in how they perform a network boot. These differences affect the JumpStart process and are worth noting. SPARC systems initiate a network boot by executing the boot net command from the OpenBoot prompt. On the other hand, most x86/x64-based systems can boot directly from a network interface card using the Preboot Execution Environment (PXE).
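To make the rule-to-class relationship concrete, here is a sketch of a minimal rules file entry and the class file it names. The keywords shown (karch, install_type, system_type, partitioning, cluster) are real JumpStart keywords covered later in this chapter, but the values and file names are illustrative assumptions:

```
# rules -- one rule per group of systems.
# Fields: match-keyword value  begin-script  class-file  finish-script
# ('-' means no begin or finish script is used)
karch sun4u - eng_class -

# eng_class -- the class (profile) file named by the rule above
install_type  initial_install
system_type   standalone
partitioning  default
cluster       SUNWCprog
```

Running the check script against the rules file validates it and produces the rules.ok file that the installation actually uses.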

asking for an IP address. The boot server sends a JumpStart mini-root kernel to the client via TFTP.install<cr> 2. The client sends a BOOTPARAMS request to the boot server to get a root (/) file system. The boot server responds to the RARP request with an IP address. 13. Flash Archive. 8. the client uses NFS to mount the root (/) file system from the boot server and starts the init program. it locates the client’s configu- ration server in the bootparams table. The boot client boots to the mini-root kernel that was sent from the boot server. 12. 5. 4. and PXE JumpStart Stages: SPARC System The JumpStart stages for a SPARC-based system are as follows: 1. A rarpd daemon. The boot server searches the ethers and hosts databases to map the client’s Ethernet MAC address to an IP address. The client sends a BOOTPARAMS request to the boot server to get its hostname. The boot server returns a hostname obtained from its bootparams table. The client broadcasts a reverse address resolution protocol (RARP) request over the network requesting an IP address. The in. the inetd daemon receives the TFTP request and starts the in. 17. running on a boot server. At the boot server. The client mounts the OS image on the install server. The boot client broadcasts another RARP request. The client mounts the OS image from the install server and executes sysidtool to obtain system identification information.tftpd daemon locates an IP address along with the boot client’s architecture in the /tftpboot directory. 15.tftpd daemon. The client uses the BOOTPARAMS information to search for the configuration server. 11. 14. 16. After the boot server is finished bootstrapping the client. 18. 7. 6. 3. Using the bootparams information just received. The client runs install-solaris and installs the OS.322 Chapter 7: Advanced Installation Procedures: JumpStart. 10. responds to the RARP request with an IP address for the boot client. It sends this information to the client. 
The client issues a TFTP request to the boot server to send over the bootstrap loader. Boot the client from the OpenBoot PROM: ok> boot net . . The boot server locates the information in the bootparams table and sends the root file system information to the client. 9.

8. 6. 5. During the network boot. 3. A network boot is configured in the system’s BIOS (or in the network interface utility).tftpd daemon. The client then discovers the boot server and receives the name of an executable file on the chosen boot server. or both SPARC and x86/x64 clients could use DHCP. But x86/x64 clients that use the PXE use only DHCP for their configuration. After obtaining the boot image. the PXE client broadcasts a DHCPDISCOVER message containing an extension that identifies the request as coming from a client that implements the PXE protocol. The client issues a TFTP request based on the BootSrvA and BootFile parameters it received from the DHCP server and requests a download of the executable from the boot server. 4. or the user can press the appropriate key at boot time to display a menu of boot options. the PXE client issues another DHCPDISCOVER request message. the inetd daemon receives the TFTP request and starts the in. The same boot server may provide ARP/RARP services for SPARC clients and DHCP services for x86/x64 clients. Setting up a DHCP server to support PXE clients is described later in this chapter. requesting a new IP address.323 Custom JumpStart NOTE Configuring a DHCP server For SPARC-based clients. Therefore. The boot server responds with a network bootstrap program filename. . At the boot server. The boot server sends the PXE client a list of appropriate boot servers. The in. JumpStart Stages: x86/x64 System The JumpStart Stages for an x86/x64-based system are as follows: 1. The boot server sends a JumpStart mini-root kernel to the client via TFTP. The PXE client initiates execution of the downloaded image. 2. The BootsrvA parameter specifies the IP address of the boot server. you must configure a DHCP server to support the boot and identification operations of x86/x64-based JumpStart clients. The PXE client downloads the executable file using either standard TFTP or MTFTP (Multisource File Transfer Protocol). 
Typically.tftpd daemon locates an IP address along with the boot client’s architecture in the /tftpboot directory. The BootFile parameter specifies the file that the PXE client will use to boot through the network. you have the option of using RARP or DHCP to supply the identity information they require to boot and begin the system identification and installation process. this key is F12. 7.

This server must be on the local subnet (not across routers). 12. The client mounts the configuration server and executes sysidtool to obtain system identification information. in. it does not have an operating system installed or an IP address assigned. The boot server running the RARP (Reverse Address Resolution Protocol) daemon. SsysidCF: Path to the sysidcfg 10.rarpd. When a client is first turned on. also called DHCPOFFERACK. SrootNM and SrootIP4: Hostname and IP address of the boot server . SrootPTH: Path to the exported Solaris distribution on the install server . when the client is first started. The client mounts the OS image on the install server. and passes the Internet address back to the client.324 Chapter 7: Advanced Installation Procedures: JumpStart. SinstNM and SinstIP4: Hostname and IP address of the install server . SjumpsCF: Path to the Jumpstart configuration . The client runs install-solaris and installs the OS. is where the client systems access the startup files. therefore. Using this DHCP information received from the DHCP server. The in. there must be a boot server that resides on the same subnet as the client. Flash Archive. and PXE 9. the boot server provides this information. the PXE client uses NFS to mount the root (/) file system from the boot server and to locate the configuration server. 13. The Boot Server The boot server.rarpd service is managed by the service management facility under the FMRI svc:/network/rarp. looks up the Ethernet address in the /etc/ethers file. NOTE Check the rarpd daemon rarpd is a daemon that is not always running. The DHCP server responds with a DHCPACK. 11. Although it is possible to install systems over the network that are not on the same subnet as the install server. which includes the following: . Make sure that this service is enabled by issuing the following command: svcadm enable svc:/network/rarp . also called the startup server. 
SrootPTH: Path to the exported mini-root file system on the boot server . checks for a corresponding name in its /etc/hosts file.

RARP is a method by which a client is assigned an IP address based on a lookup of its Ethernet address. After supplying an IP address, the boot server searches the /tftpboot directory for a symbolic link named for the client's IP address expressed in hexadecimal format. For SPARC systems, the filename is <hex-IP address>.<architecture>. For example:
C009C864.SUN4U -> inetboot.sun4u.Solaris_10-1
This link points to a boot program for a particular Solaris release and client architecture. The boot server uses the in.tftpd daemon to transmit the boot program to the client via Trivial File Transfer Protocol (TFTP).

Make sure that the in.tftpd daemon is enabled on your boot server by typing:
# svcs -a|grep tftp<cr>
If nothing is displayed, the in.tftpd daemon is not enabled. Although the service is managed by SMF under the FMRI svc:/network/tftp/udp6:default, you enable it by uncommenting the following line in the /etc/inetd.conf file:
tftp dgram udp6 wait root /usr/sbin/in.tftpd in.tftpd -s /tftpboot
After uncommenting the line, run the following command:
# /usr/sbin/inetconv<cr>
Check that the service is running by typing:
# svcs -a|grep tftp<cr>
The system displays this:
online 10:02:35 svc:/network/tftp/udp6:default

The client runs this boot program to start up. The boot program then issues a whoami request to discover the client's hostname. The boot server running the boot parameter daemon, rpc.bootparamd, looks up the hostname and responds to the client. The boot program then issues a getfile request to obtain the location of the client's root and swap space. The boot server responds with the information obtained from the /etc/bootparams file. As soon as the client has its boot parameters, the boot program tries to mount the root file system: the boot program on the client mounts the / (root) file system from the boot server. The client loads its kernel and starts the init program. When the boot server is finished bootstrapping the client, it redirects the client to the configuration server. The client searches for the configuration server using the bootparams information, mounts the configuration directory, and runs sysidtool, which is a suite of tools used to configure the system identification information. Typically, all the system identification information is stored in the sysidcfg file, described later in this chapter.
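The hexadecimal link name described above can be computed from the client's dotted IP address. The following helper function is an illustrative sketch, not part of the Solaris tools: it prints each octet as two uppercase hex digits.

```shell
# Convert a dotted-decimal IP address to the uppercase hexadecimal form
# used for /tftpboot link names. Illustrative helper, not a Solaris command.
ip_to_hex() {
    oldIFS=$IFS
    IFS=.
    set -- $1              # split the address into its four octets
    IFS=$oldIFS
    printf '%02X%02X%02X%02X\n' "$1" "$2" "$3" "$4"
}

ip_to_hex 192.9.200.100    # prints C009C864, as in the C009C864.SUN4U link
```

Running the function on 192.9.200.100 yields C009C864, which matches the example link shown above (C0 = 192, 09 = 9, C8 = 200, 64 = 100).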

The client then uses the bootparams information to locate and mount the installation directory where the Solaris image resides. The client then runs the install-solaris program and installs the operating system.

For boot operations to proceed, the following files, directories, and services must be properly configured on the boot server:
. /etc/ethers
. /etc/hosts
. /etc/bootparams
. /etc/dfs/dfstab
. /tftpboot
. The rarpd service in SMF
. The TFTP service in SMF

The following sections describe each file.

/etc/ethers
This file is required on the boot server. It supports RARP requests sent from the SPARC-based JumpStart client. When the JumpStart client boots, it has no IP address; therefore, it broadcasts its Ethernet address to the network using RARP. The boot server receives this request and attempts to match the client's Ethernet address with an entry in the local /etc/ethers file. If a match is found, the client name is matched to an entry in the /etc/hosts file. In response to the RARP request from the client, the boot server sends the IP address from the /etc/hosts file back to the client, and the client continues the boot process using the assigned IP address. An entry for the JumpStart client must be created by editing the /etc/ethers file or by using the add_install_client script, described later in this chapter in the section "Setting Up Clients." In a name service environment, this file would be controlled by NIS. See Chapter 5, "Naming Services," for more information on how this file can be managed by NIS.

/etc/hosts
The /etc/hosts file was described in Chapter 1, "The Solaris Network Environment." It is the local file that associates the names of hosts with their IP addresses. The boot server references this file when trying to match an entry from the local /etc/ethers file in response to a RARP request from a client.
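As a sketch of the two entries described above, a JumpStart client might appear as follows (the MAC address, hostname, and IP address are hypothetical examples):

```
# /etc/ethers entry: Ethernet (MAC) address and client hostname
8:0:20:21:49:25    sunclient1

# /etc/hosts entry: IP address and the same client hostname
192.9.200.100      sunclient1
```

With these two entries in place, the boot server can answer the client's RARP request with the address 192.9.200.100.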

/etc/bootparams
The /etc/bootparams file contains entries that network clients use for booting. When a client boots, it retrieves the information from this file by issuing requests to a server running the rpc.bootparamd program. See the section "Setting Up Clients" later in this chapter for more information on how this file is configured.

/etc/dfs/dfstab
The /etc/dfs/dfstab file lists local file systems to be shared to the network. This file is described in detail in Chapter 2, "Virtual File Systems, Swap Space, and Core Dumps."

/tftpboot
/tftpboot is a directory that contains the inetboot.SUN4x.Solaris_10-1 file and the symbolic link that is created for each JumpStart client when the add_install_client script is run. The client's IP address is expressed in hexadecimal format, and the link points to a boot program for a particular Solaris release and client architecture. See how this directory is configured in the section "Setting Up Clients."

NOTE Booting on a separate subnet: Normally, the Solaris network booting architecture requires you to set up a separate boot server when the install client is on a different subnet than the install server. Here's the reason: SPARC install clients require a boot server when they exist on different subnets because the network booting architecture uses Reverse Address Resolution Protocol (RARP), which does not acquire the netmask number, which is required to communicate across a router on a network. When booting over the network, the JumpStart client's boot PROM makes a RARP request, and when it receives a reply, the PROM broadcasts a TFTP request to fetch the inetboot file from any server that responds and executes it. If the boot server exists across a router, the boot fails because the network traffic cannot be routed correctly without a netmask. DHCP, however, is used to support x86/x64-based JumpStart clients.

NOTE DHCP services: DHCP services on the boot server can be used as an alternate method of providing boot and identification information to the JumpStart client.

Setting Up the Boot Server
The boot server is set up to answer RARP, TFTP, and BOOTPARAMS requests from clients using the add_install_client command, which is described later in this chapter. Before a client can start up from a boot server, the setup_install_server command is used to set up the boot server. In fact, the install server also provides the boot program for booting clients. If the same server will be used as a boot server and an install server, proceed to the next section, "The Install Server." To set up the boot server, follow the steps in Step By Step 8.1.

STEP BY STEP 8.1 Setting Up the Boot Server
Ensure the system has an empty directory with approximately 350MB of available disk space.
1. On the system that is the boot server, insert the Solaris 10 DVD or Software CD 1 into the DVD/CD-ROM drive, allowing vold to automatically mount the media.
2. Change the directory to the mounted media. Here is an example:
# cd /cdrom/cdrom0/s0/Solaris_10/Tools<cr>
3. Use the setup_install_server command to set up the boot server. Enter this command:
# ./setup_install_server -b <boot_dir_path><cr>
where -b specifies that the system is set up as a boot server and <boot_dir_path> specifies the directory where the Solaris image is to be copied. The -b option copies just the startup software from the Solaris media to the local disk. You can substitute any directory path, as long as that path is shared across the network. For example, the following command copies the kernel architecture information into the /export/jumpstart directory:
# ./setup_install_server -b /export/jumpstart<cr>
The system responds with this:
Verifying target directory..
Calculating space required for the installation boot image
Copying Solaris_10 Tools hierarchy..
Copying Install Boot Image hierarchy..
Install Server setup complete

NOTE Insufficient disk space: The following error indicates that there is not enough room in the directory to install the necessary files. You need to either clean up files in that file system to make more room or choose a different file system:
ERROR: Insufficient space to copy Install Boot image
362978 necessary -69372 available

NOTE Destination must be empty: The location in which you are trying to create the boot server must be empty. You see the following error if the target directory is not empty:
The target directory /export/jumpstart is not empty. Please choose an
empty directory or remove all files from the specified directory and
run this program again.

If no errors are displayed, the boot server is now set up. The installation program creates a subdirectory named Solaris_10 in the <boot_dir_path> directory. This boot server will handle all boot requests on this subnet. A client can only boot to a boot server located on its subnet. If you have JumpStart clients on other subnets, you need to create a boot server for each of those subnets.

The Install Server
The install server is a networked system that provides the Solaris 10 operating system image. Typically, the boot server and the install server are the same system. The exception is when the client on which Solaris 10 is to be installed is located on a different subnet than the install server; then a boot server is required on that subnet.

As explained in the previous section, we create an install server by copying the images from the Solaris installation media onto the server's hard disk. This can be any of the following:
. A shared CD-ROM or DVD-ROM drive with the Solaris OE media inserted
. A spooled image from either CD or DVD media
. A Flash installation image

By copying these CD images (or single DVD image) to the server's hard disk, you enable a single install server to provide Solaris 10 CD images for multiple releases, including Solaris 10 CD images for different platforms. For example, a SPARC install server could provide the following:
. Solaris 10 Software CD 1 CD image
. Solaris 10 Software CD 2 CD image
. Solaris 10 Software CD 3 CD image
. Solaris 10 Software CD 4 CD image
. Solaris 10 Languages CD image (this CD is optional)

This chapter focuses on using CD images, but you should be aware that Solaris 10 is also available on a single DVD.

To set up a server as a boot and install server, complete Step By Step 8.2. This Step by Step assumes that all systems are on the same subnet and that the boot and install server are to be on the same system.

STEP BY STEP 8.2 Setting Up a Server as a Boot and Install Server
The first step is to copy the Solaris 10 Software CD images to the server:
1. Insert the CD labeled "Solaris 10 Software CD 1" into the CD-ROM, allowing vold to automatically mount the CD.
2. Change to the Tools directory on the CD:
# cd /cdrom/cdrom0/s0/Solaris_10/Tools<cr>
3. Use the setup_install_server command to install the software onto the hard drive:
# ./setup_install_server <install_dir_path><cr>
<install_dir_path> is the path to which the CD images will be copied. This directory must be empty, it must be shared so that the JumpStart client can access it across the network during the JumpStart installation, and it must have approximately 3GB of space available if all four CD images and the Language CD image are to be copied. Many system administrators like to put the CD images for the boot server and install server into /export/install and create a directory for each architecture being installed, such as sparc_10 or x86_10. This is because the install server could be used to hold multiple versions and multiple architectures. It's a personal preference; just be sure that the target directory is empty. To install the operating environment software into the /export/install/sparc_10 directory, issue the following command:
# ./setup_install_server /export/install/sparc_10<cr>
The system responds with this:
Verifying target directory..
Calculating the required disk space for the Solaris_10 Product
Calculating space required for the installation boot image
Copying the CD image to disk..
Install Server setup complete
When the copy completes, eject the CD, insert the CD labeled "Solaris 10 Software CD 2" into the CD-ROM, allowing vold to automatically mount the CD, and change to the Tools directory on the mounted CD:
# cd /cdrom/cdrom0/Solaris_10/Tools<cr>

4. Run the add_to_install_server script to install the additional software into the <install_dir_path> directory:
# ./add_to_install_server <install_dir_path><cr>
For example, to copy the software into the /export/install/sparc_10 directory, issue the following command:
# ./add_to_install_server /export/install/sparc_10<cr>
The system responds with the following messages:
The following Products will be copied to /export/install/sparc_10/Solaris_10/Product:
Solaris_2
If only a subset of products is needed enter Control-C \
and invoke ./add_to_install_server with the -s option.
Checking required disk space...
Copying Top Level Installer...
131008 blocks
Copying Tools Directory...
4256 blocks
Processing completed successfully.
5. When it's finished installing, repeat the process with the remaining CDs and then with the Solaris 10 Languages CD if you are planning to support multiple languages.

When using a DVD, these additional steps are not required. After copying the Solaris CDs, you can use the patchadd -C command to patch the Solaris miniroot image on the install server's hard disk. This option patches only the miniroot; systems that are installed still have to apply recommended patches if they are required.

The Configuration Server
If you are setting up custom JumpStart installations for systems on the network, you have to create a directory on a server called a configuration directory. This directory contains all the essential custom JumpStart configuration files, such as the rules file, the rules.ok file, the check script, the class file, and the optional begin and finish scripts. The rules, rules.ok, and class files are covered later in this section. The server that contains a JumpStart configuration directory is called a configuration server. It is usually the same system as the install and boot server, although it can be a completely different server. The configuration directory on the configuration server should be owned by root and should have permissions set to 755. To set up the configuration server, follow Step By Step 8.3.

STEP BY STEP 8.3 Setting Up a Configuration Server
1. Choose the system that acts as the server, and log in as root.
2. Create the configuration directory anywhere on the server (such as /export/jumpstart). Place the JumpStart files (that is, the rules, rules.ok, and class files) in the /export/jumpstart directory. Sample copies of these files can be found in the Misc/jumpstart_sample subdirectory of the location where you installed the JumpStart install server.
3. To be certain that this directory is shared across the network, edit the /etc/dfs/dfstab file and add the following entry:
share -F nfs -o ro,anon=0 /export/jumpstart
4. Execute the svcadm enable network/nfs/server command. If the system is already an NFS server, you only need to type shareall and press Enter.

You can also use the add_install_client script, which makes an entry into the /etc/dfs/dfstab file as part of the script. The add_install_client script is described in the section "Setting Up Clients."

Setting Up a Profile Diskette
An alternative to setting up a configuration server is to create a profile diskette, also called a configuration diskette (provided that the systems that are to be installed have diskette drives). The diskette that contains JumpStart files is called a profile diskette. If you use a diskette for custom JumpStart installations, the essential custom JumpStart files (the rules file, the rules.ok file, and the class files) must reside in the root directory on the diskette. The custom JumpStart files on the diskette should be owned by root and should have permissions set to 755. Follow Step By Step 8.4 to set up a profile diskette.

STEP BY STEP 8.4 Setting Up a Profile Disk
1. Format the disk by typing the following:
# fdformat -U<cr>

2. Create a file system on the disk by issuing the newfs command:
# newfs /vol/dev/aliases/floppy0<cr>
3. Eject the disk by typing the following:
# eject floppy<cr>
4. Insert the formatted disk into the disk drive. If your system uses Volume Manager, it will be mounted automatically.
You have completed the creation of a diskette that can be used as a profile diskette. Now you can create the rules file and create class files on the configuration diskette to perform custom JumpStart installations.

The Rules File
The rules file is a text file that should contain a rule for each group of systems you want to install automatically. Each rule distinguishes a group of systems based on one or more system attributes and links each group to a class file, which is a text file that defines how the Solaris software is installed on each system in the group. After deciding how you want each group of systems at your site to be installed, you need to create a rules file for each specific group of systems to be installed. After you create the rules file, validate it with the check script by changing to the /export/jumpstart directory and issuing the check command. If the check script runs successfully, it creates the rules.ok file. The rules.ok file is a validated version of the rules file that the Solaris installation program uses to perform a custom JumpStart installation. During a custom JumpStart installation, the Solaris installation program reads the rules.ok file and tries to find the first rule that has a system attribute matching the system being installed. If a match occurs, the installation program uses the class file specified in the rule to install the system.

You'll find a sample rules file on the install server located in the <install_dir_path>/Solaris_10/Misc/jumpstart_sample directory, where <install_dir_path> is the directory that was specified using the setup_install_server script when the install server was set up. For the examples in this chapter, the install directory is /export/install/sparc_10. A sample rules file for a Sun Ultra is shown next. Notice that almost all the lines in the sample rules file are commented out. These are simply instructions and sample entries to help the system administrator make the correct entry. The last, uncommented line is the rule we added for the example. Each line has a rule keyword and a valid value for that keyword. The syntax is discussed later in this chapter.

# @(#)rules 1.12 94/07/27 SMI
#
# The rules file is a text file used to create the rules.ok file for
# a custom JumpStart installation. The rules file is a lookup table
# consisting of one or more rules that define matches between system
# attributes and profiles.
#
# This example rules file contains:
#   o syntax of a rule used in the rules file
#   o rule_keyword and rule_value descriptions
#   o rule examples
#
# See the installation manual for a complete description of the rules file.
#
##########################################################################
#
# RULE SYNTAX:
#
# [!]rule_keyword rule_value [&& [!]rule_keyword rule_value]... \
#     begin profile finish
#
# "[ ]"  indicates an optional expression or field
# "..."  indicates the preceding expression may be repeated
# "&&"   used to "logically AND" rule_keyword and rule_value pairs together
# "!"    indicates negation of the following rule_keyword
#
# rule_keyword  a predefined keyword that describes a general system
#               attribute. It is used with the rule_value to match a
#               system with the same attribute to a profile.
#
# rule_value    a value that provides the specific system attribute
#               for the corresponding rule_keyword. A rule_value can
#               be text or a range of values (NN-MM). To match a range
#               of values, a system's value must be greater than or
#               equal to NN and less than or equal to MM.
#
# begin         a file name of an optional Bourne shell script that
#               will be executed before the installation begins. If no
#               begin script exists, you must enter a minus sign (-)
#               in this field.
#
# profile       a file name of a text file used as a template by the
#               custom JumpStart installation software that defines how
#               to install Solaris on a system.
#
# finish        a file name of an optional Bourne shell script that
#               will be executed after the installation completes. If no
#               finish script exists, you must enter a minus sign (-)
#               in this field.
#
# Notes:
#   1. Rules are matched in descending order: first rule through the
#      last rule.
#   2. Rules can be continued to a new line by using the backslash (\)
#      before the carriage return.
#   3. You can add comments after the pound sign (#) anywhere on a line.
#   4. Don't use the "*" character or other shell wildcards, because
#      the rules file is interpreted by a Bourne shell script.
#
##########################################################################
#
# RULE_KEYWORD AND RULE_VALUE DESCRIPTIONS
#
# rule_keyword  rule_value Type  rule_value Description
# ------------  ---------------  ----------------------
# any           minus sign (-)   always matches
# arch          text             system's architecture type
# domainname    text             system's domain name
# disksize      text range       system's disk size
#                                disk device name (text)
#                                disk size (MBytes range)
# hostname      text             system's host name
# installed     text text        system's installed version of Solaris
#                                disk device name (text)
#                                OS release (text)
# karch         text             system's kernel architecture
# memsize       range            system's memory size (MBytes range)
# model         text             system's model number
# network       text             system's IP address
# totaldisk     range            system's total disk size (MBytes range)
#
##########################################################################
#
# RULE EXAMPLES
#
# The following rule matches only one system:
#
#hostname sample_host  -  host_class  set_root_pw
#
# The following rule matches any system that is on the 924.222.43.0
# network and has the sun4u kernel architecture:
# Note: The backslash (\) is used to continue the rule to a new line.
#
#network 924.222.43.0 && \
#   karch sun4u  -  net924_sun4u  -
#
# The following rule matches any sparc system with a c0t3d0 disk that is
# between 400 to 600 MBytes and has Solaris 2.1 installed on it:
#
#arch sparc && \
#   disksize c0t3d0 400-600 && \
#   installed c0t3d0s0 solaris_2.1  -  upgrade  -
#
# The following rule matches all x86 systems:
#
#arch i386  x86-begin  x86-class  -
#
# The following rule matches any system:
#
#any  -  -  any_machine  -
#
# END RULE EXAMPLES
#
karch sun4u  -  basic_prof  -

The Solaris installation program scans the rules.ok file from top to bottom. If the program matches a rule keyword and value with a known system, it installs the Solaris software specified by the class file listed in the class file field. Table 7.2 describes the syntax that the rules file must follow.

Table 7.2 Rule Syntax

! : Use this before a rule keyword to indicate negation.

[ ] : Use this to indicate an optional expression or field.

... : Use this to indicate that the preceding expression might be repeated.

rule_keyword : A predefined keyword that describes a general system attribute, such as a hostname (hostname) or the memory size (memsize). It is used with rule_value to match a system with the same attribute to a profile. The complete list of rule_keywords is described in Table 7.3.

rule_value : Provides the specific system attribute value for the corresponding rule_keyword. See Table 7.3 for the list of rule_values.

&& : Use this to join rule keyword and rule value pairs in the same rule (a logical AND). During a custom JumpStart installation, a system must match every pair in the rule before the rule matches.

<begin> : A name of an optional Bourne shell script that can be executed before the installation begins. If no begin script exists, you must enter a minus sign (-) in this field. All begin scripts must reside in the JumpStart directory. See the section "begin and finish Scripts" for more information.

<profile> : The name of the class file, a text file that defines how the Solaris software is installed on the system if a system matches the rule. The information in a class file consists of class file keywords and their corresponding class file values. All class files must reside in the JumpStart directory. Class files are described in the section "Creating Class Files."

<finish> : The name of an optional Bourne shell script that can be executed after the installation completes. If no finish script exists, you must enter a minus sign (-) in this field. All finish scripts must reside in the JumpStart directory. See the section "begin and finish Scripts" for more information.

Rules File Requirements
The rules file must have the following:
. The name "rules"
. At least one rule, with at least a rule keyword, a rule value, and a corresponding profile
. A minus sign (-) in the begin and finish fields if there is no entry

The rules file should be saved in the JumpStart directory, should be owned by root, and should have permissions set to 644. The rules file can contain any of the following:
. A comment after the pound sign (#) anywhere on a line. If a line begins with a #, the entire line is a comment. If a # is specified in the middle of a line, everything after the # is considered a comment.
. Blank lines.
. Rules that span multiple lines. You can let a rule wrap to a new line, or you can continue a rule on a new line by using a backslash (\) before pressing Enter.
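Putting these requirements together, a minimal rules file might contain just the following two rules (the class file names basic_prof and generic_class are examples; the corresponding class files must exist in the JumpStart directory):

```
# Each rule: match criteria, begin script, class file, finish script
karch sun4u  -  basic_prof     -
any    -     -  generic_class  -
```

The minus signs mark the empty begin and finish fields, as required above.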

Table 7.3 describes the rule_keywords and rule_values that were mentioned in Table 7.2.

Table 7.3 rule_keyword and rule_value Descriptions

any (minus sign (-)): The match always succeeds.

arch (<processor_type>): The system's architecture processor type as reported by the arch or the uname -i commands. For example, i86pc or sparc.

disksize (<disk_name> <size_range>): Matches a system's disk (in MB). <disk_name> is a disk name in the form c?t?d?, such as c0t0d0, or the special words any or rootdisk. If any is used, all the system's disks will try to be matched (in kernel probe order). If rootdisk is used, the disk to be matched is determined in the following order: 1. The disk that contains the preinstalled boot image (a new SPARC-based system with factory JumpStart installed). 2. The c0t0d0s0 disk, if it exists. 3. The first available disk (searched in kernel probe order). <size_range> is the size of the disk, which must be specified as a range of MB (xx to xx). Example: disksize c0t0d0 32768-65536. This example tries to match a system with a c0t0d0 disk that is between 32768 and 65536MB (32 to 64GB). Note: When calculating size_range, remember that a megabyte equals 1,048,576 bytes.

domainname (<domain_name>): Matches a system's domain name. If you have a system already installed, the domainname command reports the system's domain name.

hostaddress (<IP_address>): Matches a system's IP address.

hostname (<host_name>): Matches a system's hostname. If you have a system already installed, the uname -n command reports the system's hostname.

installed (<slice> <version>): Matches a disk that has a root file system corresponding to a particular version of Solaris software. <slice> is a disk slice name in the form c?t?d?s?, such as c0t0d0s5, or the special words any or rootdisk. If any is used, all the system's disks will try to be matched (in kernel probe order). If rootdisk is used, the disk to be matched is determined in the following order: 1. The disk that contains the preinstalled boot image (a new SPARC-based system with factory JumpStart installed). 2. The c0t0d0s0 disk, if it exists. 3. The first available disk (searched in kernel probe order). <version> is a version name, Solaris_2.x, or the special words any or upgrade. If any is used, any Solaris or SunOS release is matched. If upgrade is used, any upgradeable Solaris 2.1 or greater release is matched. Example: installed c0t0d0s0 Solaris_9. This example tries to match a system that has a Solaris 9 root file system on c0t0d0s0.

karch (<platform_group>): Matches a system's platform group. Valid values are sun4m, sun4u, i86pc, and prep (the name for PowerPC systems). If you have a system already installed, the arch -k command or the uname -m command reports the system's platform group.

memsize (<physical_mem>): Matches a system's physical memory size (in MB). The value must be a range of MB (xx to xx) or a single MB value. Example: memsize 256-1024. The example tries to match a system with a physical memory size between 256MB and 1GB. If you have a system already installed, the output of the prtconf command (line 2) reports the system's physical memory size.

model (<platform_name>): Matches a system's platform name. Any valid platform name will work. To find the platform name of an installed system, use the uname -i command or the output of the prtconf command (line 5). Example: 'SUNW,Ultra-5_10'. Note: If the <platform_name> contains spaces, you must enclose it in single quotes (').

network (<network_num>): Matches a system's network number, which the Solaris installation program determines by performing a logical AND between the system's IP address and the subnet mask. Example: network 193.144.2.0. This example tries to match a system with a 193.144.2.0 IP address (if the subnet mask were 255.255.255.0).

osname (<solaris_2.x>): Matches a version of Solaris software already installed on a system. Example: osname Solaris_9. This example tries to match a system with Solaris 9 already installed.

probe (<probe_keyword>): Use the probe keyword to return a value from a system. For example, probe disks returns the size of the system's disks in megabytes and in kernel probe order.

totaldisk (<size_range>): Matches the total disk space on a system (in MB). The total disk space includes all the operational disks attached to a system. The value must be specified as a range of MB (xx to xx). Example: totaldisk 32768-65536. This example tries to match a system with a total disk space between 32GB and 64GB.
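The logical AND performed for the network keyword can be sketched in a few lines of shell (illustrative only; this is not part of the Solaris installation program):

```shell
# AND each octet of an IP address with the matching octet of the subnet
# mask to obtain the network number (illustrative helper).
network_number() {
    addr=$1
    mask=$2
    oldIFS=$IFS
    IFS=.
    set -- $addr
    a1=$1 a2=$2 a3=$3 a4=$4
    set -- $mask
    m1=$1 m2=$2 m3=$3 m4=$4
    IFS=$oldIFS
    echo "$((a1 & m1)).$((a2 & m2)).$((a3 & m3)).$((a4 & m4))"
}

network_number 193.144.2.8 255.255.255.0    # prints 193.144.2.0
```

A host at 193.144.2.8 with a 255.255.255.0 netmask therefore matches the rule "network 193.144.2.0".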

During a custom JumpStart installation, the Solaris installation program attempts to match the system being installed to the rules in the rules.ok file in order—the first rule through the last rule.

Rules File Matches
A rule match occurs when the system being installed matches all the system attributes defined in the rule. As soon as a system matches a rule, the Solaris installation program stops reading the rules.ok file and begins installing the software based on the matched rule’s class file. Here are a few sample rules:
karch sun4u - basic_prof -

The preceding example specifies that the Solaris installation program should automatically install any system with the sun4u platform group based on the information in the basic_prof class file. There is no begin or finish script.
hostname pyramid2 - ultra_class -

The rule matches a system on the network called pyramid2. The class file to be used is named ultra_class. No begin or finish script is specified.
network 192.168.0.0 && !model 'SUNW,Ultra-5_10' - net_class set_root_passwd

The third rule matches any system on the network that is not an Ultra 5 or Ultra 10. The class file to be used is named net_class, and the finish script to be run is named set_root_passwd.
any - - generic_class -

The last example matches any system. The class file to be used is named generic_class, located in the /export/jumpstart directory. There is no begin or finish script.
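Taken together, the four sample rules form a small rules file. The following sketch assembles one under /tmp for inspection; /tmp/jumpstart stands in for the JumpStart configuration directory (such as /export/jumpstart), and the class file names are the illustrative ones used above:

```shell
# Sketch: assemble the sample rules shown above into a rules file.
# /tmp/jumpstart stands in for the JumpStart configuration directory.
mkdir -p /tmp/jumpstart
cat > /tmp/jumpstart/rules <<'EOF'
# rule_keyword rule_value  begin  class  finish
karch sun4u - basic_prof -
hostname pyramid2 - ultra_class -
network 192.168.0.0 && !model 'SUNW,Ultra-5_10' - net_class set_root_passwd
any - - generic_class -
EOF
cat /tmp/jumpstart/rules
```

On a real configuration server you would then run the check script against this file to produce rules.ok, as described in the next section.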


Validating the Rules File
Before the rules file can be used, you must run the check script to validate that this file is set up correctly. If all the rules are valid, the rules.ok file is created. To validate the rules file, use the check script provided in the <install_dir_path>/Solaris_10/Misc/jumpstart_sample directory on the install server.

Copy the check script to the directory containing your rules file and run the check script to validate the rules file:
# cd /export/jumpstart<cr>
# ./check [-p <path>] [-r <file_name>]<cr>

<install_dir_path> is the directory that was specified using the setup_install_server script when the install server was set up. The check script options are described in Table 7.4.

Table 7.4 Check Script Options

-p <path>  Validates the rules file by using the check script from a specified Solaris 10 CD image, instead of the check script from the system you are using. <path> is the pathname to a Solaris installation image on a local disk or a mounted Solaris CD. Use this option to run the most recent version of check if your system is running a previous version of Solaris.

-r <file_name>  Specifies a rules file other than a file named "rules." Using this option, you can test the validity of a rule before integrating it into the rules file. With this option, a rules.ok file is not created.

When you use check to validate a rules file, the following things happen:
1. The rules file is checked for syntax. check makes sure that the rule keywords are legitimate, and the <begin>, <class>, and <finish> fields are specified for each rule.
2. If no errors are found in the rules file, each class file specified in the rules file is checked for syntax. The class file must exist in the JumpStart installation directory and is covered in the next section.
3. If no errors are found, check creates the rules.ok file from the rules file, removing all comments and blank lines, retaining all the rules, and adding the following comment line to the end:
version=2 checksum=<num>


As the check script runs, it reports that it is checking the validity of the rules file and the validity of each class file. If no errors are encountered, it reports the following:
The custom JumpStart configuration is ok.

The following is a sample session that uses check to validate a rules file and a class file. I named the rules file "rulestest" temporarily, the class file is named "basic_prof", and I am using the -r option. With -r, the rules.ok file is not created, and only the rulestest file is checked.
# /export/install/Solaris_10/Misc/jumpstart_sample/check -r /tmp/rulestest<cr>
Validating /tmp/rulestest...
Validating profile basic_prof...
Error in file "/tmp/rulestest", line 113
any - - any_maine
ERROR: Profile missing: any_maine

In this example, the check script found a misspelled class (profile) name: any_machine was incorrectly entered as any_maine. The check script reported this error. In the next example, the error has been fixed, we copied the file from rulestest to /export/jumpstart/rules, and reran the check script:
# cp rulestest /export/jumpstart/rules<cr>
# cd /export/jumpstart<cr>
# /export/install/Solaris_10/Misc/jumpstart_sample/check<cr>
Validating rules...
Validating profile basic_prof...
Validating profile any_machine...
The custom JumpStart configuration is ok.

If no errors are encountered, the rules file is validated and check reports The custom JumpStart configuration is ok. After the rules.ok file is created, verify that it is owned by root and that it has permissions set to 644.

begin and finish Scripts
A begin script is a user-defined Bourne shell script, located in the JumpStart configuration directory on the configuration server, specified within the rules file, that performs tasks before the Solaris software is installed on the system. You can set up begin scripts to perform the following tasks:
. Backing up a file system before upgrading
. Saving files to a safe location
. Loading other applications


Output from the begin script goes to /var/sadm/system/logs/begin.log.

CAUTION
Beware of /a Be careful not to specify something in the script that would prevent the mounting of file systems to the /a directory during an initial or upgrade installation. If the Solaris installation program cannot mount the file systems to /a, an error occurs, and the installation fails.

begin scripts should be owned by root and should have permissions set to 744.

In addition to begin scripts, you can also have finish scripts. A finish script is a user-defined Bourne shell script, specified within the rules file, that performs tasks after the Solaris software is installed on the system but before the system restarts. finish scripts can be used only with custom JumpStart installations. You can set up finish scripts to perform the following tasks:
. Move saved files back into place.
. Add packages or patches.
. Set the system's root password.

Output from the finish script goes to /var/sadm/system/logs/finish.log. When used to add patches and software packages, begin and finish scripts can ensure that the installation is consistent between all systems.
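As a sketch, a short finish script might look like the following. On a real installation the new system is mounted under /a, so a finish script writes below /a; here a throwaway directory stands in for it, and the file names are illustrative:

```shell
#!/bin/sh
# Sketch of a JumpStart finish script. On a real install the freshly
# installed system is mounted under /a; /tmp/demo_a stands in for it here.
ROOT=/tmp/demo_a
mkdir -p $ROOT/etc

# Illustrative task: drop a marker file on the new system. A real finish
# script might restore saved files, add packages, or set the root password.
echo "JumpStart installed $(date)" > $ROOT/etc/motd

# Output such as this ends up in /var/sadm/system/logs/finish.log.
echo "finish script complete"
```

Because the script runs before the system restarts, any file it creates under /a is in place on the very first boot of the installed system.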

Creating class Files
A class file is a text file that defines how to install the Solaris software on a system. Every rule in the rules file specifies a class file that defines how a system is installed when the rule is matched. You usually create a different class file for every rule; however, the same class file can be used in more than one rule.

EXAM ALERT
Terminology warning You’ll see the class file referred to as the profile in many Sun documents, scripts, and programs that relate to JumpStart. In Sun System Administration training classes, however, it is sometimes called a class file. That’s how we refer to it throughout this chapter. On the exams, it is also called both a profile and a class file. The same is true of the configuration server. Sometimes Sun calls this server a profile server.

A class file consists of one or more class file keywords (they are described in the following sections). Each class file keyword is a command that controls one aspect of how the Solaris installation program installs the Solaris software on a system. Use the vi editor (or any other text editor) to create a class file in the JumpStart configuration directory on the configuration server. You can create a new class file or edit one of the sample profiles located in /cdrom/cdrom0/s0/Solaris_10/Misc/jumpstart_sample on the Solaris 10 Software CD 1.

The class file can be named anything, but it should reflect the way in which it installs the Solaris software on a system. Sample names are basic_install, eng_profile, and accntg_profile. A class file must have the following:

. The install_type keyword as the first entry
. Only one keyword on a line
. The root_device keyword if the systems being upgraded by the class file have more than one root file system that can be upgraded

A class file can contain either of the following:

. A comment after the pound sign (#) anywhere on a line. If a line begins with a #, the entire line is a comment. If a # is specified in the middle of a line, everything after the # is considered a comment.
. Blank lines.

The class file is made up of keywords and their values. The class file keywords and their respective values are described in the following sections.
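A minimal class file that meets these requirements might look like the following sketch; the keyword values are illustrative choices, and /tmp/jumpstart stands in for the JumpStart configuration directory:

```shell
# Sketch: create a minimal class file (profile). The values shown are
# illustrative; install_type must be the first keyword.
mkdir -p /tmp/jumpstart
cat > /tmp/jumpstart/basic_prof <<'EOF'
# basic_prof - one keyword per line, comments allowed
install_type  initial_install
system_type   standalone
partitioning  default
cluster       SUNWCuser
EOF
head -3 /tmp/jumpstart/basic_prof
```

Note that the comment line is legal, and that install_type is still the first keyword, as required.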

archive_location
This keyword is used when installing a Solaris Flash Archive and specifies the source of the Flash Archive. The syntax for this option is shown here:
archive_location retrieval_type location

The retrieval_type parameter can be one of the following:
. NFS
. HTTP or HTTPS
. FTP
. Local tape
. Local device
. Local file


The syntax for a Flash Archive located on an NFS server is as follows:
archive_location nfs server_name:/path/filename <retry n>

where <retry n> specifies the maximum number of attempts to mount the archive. The syntax for a Flash Archive located on an HTTP or HTTPS server is as follows:
archive_location http://server_name:port/path/filename <optional keywords>
archive_location https://server_name:port/path/filename <optional keywords>

Table 7.5 lists the optional keywords that can be used with this option.

Table 7.5 HTTP Server Optional Keywords

auth basic <user> <password>  If the HTTP server is password-protected, a username and password must be supplied to access the archive.
timeout <min>  Specifies the maximum time, in minutes, that is allowed to elapse without receiving data from the HTTP server.
proxy <host>:<port>  Specifies a proxy host and port. The proxy option can be used when you need to access an archive from the other side of a firewall. The <port> value must be supplied.

The syntax for a Flash Archive located on an FTP server is as follows:
archive_location ftp://username:password@server_name:port/path/filename <optional keywords>

Table 7.6 lists the optional keywords that can be used with this option.

Table 7.6 FTP Server Optional Keywords

timeout <min>  Specifies the maximum time, in minutes, that is allowed to elapse without receiving data from the FTP server.
proxy <host>:<port>  Specifies a proxy host and port. The proxy option can be used when you need to access an archive from the other side of a firewall. The <port> value must be supplied.

The syntax for a Flash Archive located on local tape is as follows:
archive_location local_tape <device> <position>

where <device> specifies the device path of the tape drive and <position> specifies the file number on the tape where the archive is located. The <position> parameter is useful because you can store a begin script or a sysidcfg file on the tape prior to the actual archive.


The syntax for a Flash Archive located on a local device is as follows:
archive_location local_device device path/filename file_system_type

The syntax for a Flash Archive located in a local file is as follows:
archive_location local_file path/filename

All that is needed for this option is to specify the full pathname to the Flash Archive file.
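Putting the variants side by side, archive_location lines in a class file might look like the following sketch; the server name, port, paths, tape position, and retry count are all illustrative:

```shell
# Sketch: archive_location entries for several retrieval types.
# Server names, ports, paths, and positions are illustrative.
cat > /tmp/archive_location.examples <<'EOF'
archive_location nfs jumpsrv:/export/flash/s10.flar retry 3
archive_location http://jumpsrv:80/flash/s10.flar timeout 5
archive_location local_tape /dev/rmt/0 2
archive_location local_file /export/flash/s10.flar
EOF
cat /tmp/archive_location.examples
```

Only one archive_location line would appear in a real profile; the file above simply collects one example of each form for comparison.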

backup_media
backup_media defines the medium that is used to back up file systems if they need to be reallocated during an upgrade because of space problems. If multiple tapes or disks are required for the backup, you are prompted to insert these during the upgrade. Here is the backup_media syntax:
backup_media <type> <path>

<type> can be one of the keywords listed in Table 7.7.

Table 7.7 backup_media Keywords

local_tape  Specifies a local tape drive on the system being upgraded. The <path> must be the character (raw) device path for the tape drive, such as /dev/rmt/0.
local_diskette  Specifies a local diskette drive on the system being upgraded. The <path> is the local diskette, such as /dev/rdiskette0. The diskette must be formatted.
local_filesystem  Specifies a local file system on the system being upgraded. The <path> can be a block device path for a disk slice or the absolute <path> to a file system mounted by the /etc/vfstab file. Examples of <path> are /dev/dsk/c0t0d0s7 and /home.
remote_filesystem  Specifies an NFS file system on a remote system. The <path> must include the name or IP address of the remote system (host) and the absolute <path> to the file system. The file system must have read/write access. A sample <path> is sparc1:/home.
remote_system  Specifies a directory on a remote system that can be reached by a remote shell (rsh). The system being upgraded must have access to the remote system. The <path> must include the name of the remote system and the absolute path to the directory. If a user login is not specified, the login is tried as root. A sample <path> is bcalkins@sparc1:/home.

Here are some examples of class file keywords being used:
backup_media local_tape /dev/rmt/0
backup_media local_diskette /dev/rdiskette0
backup_media local_filesystem /dev/dsk/c0t3d0s7
backup_media local_filesystem /export
backup_media remote_filesystem sparc1:/export/temp
backup_media remote_system bcalkins@sparc1:/export/temp

backup_media must be used with the upgrade option only when disk space reallocation is necessary.

boot_device
boot_device designates the device where the installation program installs the root file system and consequently what the system's startup device is. The boot_device keyword can be used when you install either a UFS file system or a ZFS root pool. The eeprom value also lets you update the system's EEPROM if you change its current startup device so that the system can automatically start up from the new startup device. Here's the boot_device syntax:
boot_device <device> <eeprom>

Table 7.8 describes the <device> and <eeprom> values.

Table 7.8 boot_device Keywords

<device>  Specifies the startup device by specifying a disk slice, such as c0t1d0s0 (c0d1 for x86 systems). It can be the keyword existing, which places the root file system on the existing startup device, or the keyword any, which lets the installation program choose where to put the root file system.
<eeprom>  Specifies whether you want to update the system's EEPROM to the specified startup device. <eeprom> specifies the value update, which tells the installation program to update the system's EEPROM to the specified startup device, or preserve, which leaves the startup device value in the system's EEPROM unchanged. An example for a SPARC system is boot_device c0t1d0s0 update.

NOTE
x86 preserve only  For x86 systems, the <eeprom> parameter must be preserve.

In the preceding SPARC example, the installation program installs the root file system on c0t1d0s0 and updates the EEPROM to start up automatically from the new startup device. For more information on the boot device, see Chapter 3, "Perform System Boot and Shutdown Procedures," in Solaris 10 System Administration Exam Prep (Exam CX-310-200), Part I.


bootenv createbe
bootenv createbe enables an empty, inactive boot environment to be created at the same time the Solaris OS is installed. The bootenv keyword can be used when you install either a UFS file system or a ZFS root pool. You only need to create a / file system; other file system slices are reserved, but not populated. This kind of boot environment is installed with a Solaris Flash Archive, at which time the other reserved file system slices are created. Here's the bootenv createbe syntax:
bootenv createbe bename <new_BE_name> filesystem <mountpoint:device:fs_options>

The bename and filesystem values are described in Table 7.9.

Table 7.9 bootenv createbe Keywords

bename <new_BE_name>  Specifies the name of the new boot environment to be created. It can be no longer than 30 characters, must contain only alphanumeric characters, and must be unique on the system.
filesystem <mountpoint:device:fs_options>  Specifies the type and number of file systems to be created in the new boot environment. The mountpoint can be any valid mount point, or a hyphen (-) for swap, and fs_options can be swap or ufs. The device must be in the form /dev/dsk/cwtxdysz. You cannot use Solaris Volume Manager volumes or Veritas Volume Manager objects.

For a ZFS root pool, the bootenv keyword changes the characteristics of the default boot environment that is created at install time. This new boot environment is a copy of the root file system you are installing. The following options can be used when creating a ZFS root pool:
. installbe: Used to change the characteristics of the default boot environment that gets created during the installation.
. bename <name>: Specifies the name of the new boot environment.
. dataset <mountpoint>: Identifies a /var dataset that is separate from the ROOT dataset. The <mountpoint> value is limited to /var.

For example, to create a ZFS root pool with a boot environment named "zfsroot" and a separate /var dataset, use the following syntax:
bootenv installbe bename zfsroot dataset /var

client_arch
client_arch indicates that the operating system server supports a platform group other than its own. If you do not specify client_arch, any diskless client that uses the operating system server must have the same platform group as the server. client_arch can be used only when system_type is specified as server. You must specify each platform group that you want the operating system server to support. Here's the client_arch syntax:
client_arch karch_value [karch_value...]

Valid values for <karch_value> are sun4u and i86pc. Here’s an example:
client_arch sun4u

client_root
client_root defines the amount of root space, in MB, to allocate for each diskless client. If you do not specify client_root in a server's profile, the installation software automatically allocates 15MB of root space per client. The size of the client root area is used in combination with the num_clients keyword to determine how much space to reserve for the /export/root file system. You can use the client_root keyword only when system_type is specified as server. Here's the syntax:
client_root <root_size>

where <root_size> is specified in MB. Here’s an example:
client_root 20

NOTE
Don’t waste space When allocating root space, 20MB is an adequate size. 15MB is the minimum size required. Any more than 20MB is just wasting disk space.

client_swap
client_swap defines the amount of swap space, in MB, to allocate for each diskless client. If you do not specify client_swap, 32MB of swap space is allocated. Physical memory plus swap space must be a minimum of 32MB. If a class file does not explicitly specify the size of swap, the Solaris installation program determines the maximum size that the swap file can be, based on the system’s physical memory. The Solaris installation program makes the size of swap no more than 20% of the disk where it resides, unless free space is left on the disk after the other file systems are laid out.

Here’s the syntax:
client_swap <swap_size>


where <swap_size> is specified in MB. Here’s an example:
client_swap 64

This example specifies that each diskless client has a swap space of 64MB.

cluster
cluster designates which software group to add to the system. Table 7.10 lists the software groups.

Table 7.10 Software Groups

SUNWCrnet  Reduced network support
SUNWCreq   Core
SUNWCuser  End-user system support
SUNWCprog  Developer system support
SUNWCall   Entire distribution
SUNWCXall  Entire distribution plus OEM support

You can specify only one software group in a profile, and it must be specified before other cluster and package entries. If you do not specify a software group with cluster, the end-user software group, SUNWCuser, is installed on the system by default. Here is cluster's syntax:
cluster <group_name>

Here’s an example:
cluster SUNWCall

This example specifies that the Entire Distribution group should be installed. The cluster keyword can also be used to designate whether a cluster should be added to or deleted from the software group that was installed on the system. add and delete indicate whether the cluster should be added or deleted. If you do not specify add or delete, add is set by default. Here’s the syntax:
cluster <cluster_name> [add | delete]

<cluster_name> must be in the form SUNWCname.
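As a sketch, a profile that starts from the developer group and prunes one cluster might contain lines like the following; SUNWCacc (system accounting) is used as an assumed example cluster name:

```shell
# Sketch: cluster entries in a class file. SUNWCacc (system accounting)
# is an assumed example of a SUNWCname-form cluster.
cat > /tmp/cluster_prof <<'EOF'
install_type initial_install
cluster      SUNWCprog
cluster      SUNWCacc delete
EOF
grep '^cluster' /tmp/cluster_prof
```

The first cluster line selects the software group; the second, because it names an individual cluster with delete, removes that cluster from the installed group.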


dontuse
dontuse designates one or more disks that you don't want the Solaris installation program to use. By default, the installation program uses all the operational disks on the system. <disk_name> must be specified in the form c?t?d? or c?d?, such as c0t0d0. Here's the syntax:
dontuse disk_name [disk_name...]

Here’s an example:
dontuse c0t0d0 c0t1d0

NOTE
dontuse and usedisk

You cannot specify the usedisk keyword and the dontuse keyword in the same class file, because they are mutually exclusive.

filesys
filesys can be used to create local file systems during the installation by using this syntax:
filesys <slice> <size> [file_system] [optional_parameters]

The values listed in Table 7.11 can be used for <slice>.

Table 7.11 <slice> Values

any  This variable tells the installation program to place the file system on any disk.
c?t?d?s? or c?d?s?  The disk slice where the Solaris installation program places the file system, such as c0t0d0s0.
rootdisk.sn  The variable that contains the value for the system's root disk, which is determined by the Solaris installation program. The sn suffix indicates a specific slice on the disk.

The values listed in Table 7.12 can be used for <size>.

Table 7.12 <size> Values

num  The size of the file system in MB.
existing  The current size of the existing file system.
auto  The size of the file system is determined automatically, depending on the selected software.
all  The specified slice uses the entire disk for the file system. When you specify this value, no other file systems can reside on the specified disk.

unnamed is set by default. the size of swap is set to 512MB. This can be used to ignore a file system on a disk during an installation so that the Solaris installation program can create a new file system on the same disk with the same name. If file_system is not specified. overlap can be specified only when <size> is existing. preserve can be specified only when size is existing and slice is c?t?d?s?. file_system is an optional field when slice is specified as any or c?t?d?s?.13 can be used for file_system. or start:size. The specified slice is used as swap. Table 7. . but you can’t specify the optional_parameters value.14 Option preserve <mount_options> optional_parameters Options Description The file system on the specified slice is preserved.14. unnamed is set by default. The specified slice is defined as a representation of the whole disk. The file system is explicitly partitioned. and <size> is the number of cylinders for the slice. In the following example. The specified slice is defined as a raw slice. Flash Archive. and the installation program determines what disk to put it on when you specify the any value: filesys any auto /usr The optional_parameters field can be one of the options listed in Table 7. so the slice does not have a mount point name.13 Value <mount_pt_name> <swap> <overlap> <unnamed> <ignore> file_system Values Description The file system’s mount point name. The specified slice is not used or recognized by the Solaris installation program. and PXE Table 7. One or more mount options that are added to the /etc/vfstab entry for the specified <mount_pt_name>. all.12 Value free <start>: <size> <size> Values Description The remaining unused space on the disk is used for the file system. /usr is based on the selected software. Table 7. <start> is the cylinder where the slice begins. and it is installed on c0t0d0s1: filesys c0t0d0s1 512 swap In the next example. If file_system is not specified. 
such as /opt.352 Chapter 7: Advanced Installation Procedures: JumpStart. ignore can be used only when existing partitioning is specified. The values listed in Table 7.

<slice> <size> <file_system> <optional parameters> filesys can also be used to set up the installed system to mount remote file systems automatically when it starts up. Table 7. d50. You can specify filesys more than once. NOTE Only on initial install The filesys mirror keyword is supported for only initial installations.15 Option <name> filesys mirror Options Description An optional keyword allowing you to name the mirror.16. The size of the file system in megabytes. Specifies the file system you are mirroring. One or more mount options that are added to the /etc/vfstab entry for the specified <mount_pt_name>. Specifies the disk slice where the custom JumpStart program places the file system you want to duplicate with the mirror. The naming convention follows metadevices in Solaris Volume Manager. This facility allows the creation of mirrored file systems. .353 Custom JumpStart A new option to the filesys keyword in Solaris 10 is mirror. The syntax for the filesys mirror keyword is as follows: Filesys mirror [:<name>]slice [<slice>] <size> <file_system> <optional_parameters> Table 7. including root (/) or swap. This can be any file system. The following syntax describes using filesys to set up mounts to remote systems: filesys <server>:<path> <server_address> <mount_pt_name> [mount_options] The filesys keywords are described in Table 7. which facilitates the creation of RAID-1 volumes as part of the custom JumpStart installation.15 details the available options for the filesys mirror keyword. the custom JumpStart program assigns one for you. If a name is not specified. You can issue this keyword more than once to create mirrors for different file systems. in the format dxxx (where xxx is a number between 0 and 127)—for example.

install_type must be the first class file keyword in every profile. Don’t forget to include the colon (:). only the initial_install keyword can be used. For a ZFS installation. An example is ro. If you need to specify more than one mount option. install_type install_type specifies whether to perform the initial installation option or the upgrade option on the system. upgrade. flash_install. Here is the syntax: install_type [initial_install | upgrade] Select one of initial_install. or flash_update. Flash Archive.intr forced_deployment This keyword forces a Solaris Flash differential archive to be installed on a clone system even though the clone system is different from what the software expects. The syntax is geo <locale> .354 Chapter 7: Advanced Installation Procedures: JumpStart. One or more mount options that are added to the /etc/vfstab entry for the specified <mount_pt_name>. The remote file system’s mount point name. The IP address of the server specified in <server>:<path>. so it should be used with caution. but you must specify a minus sign (-). this value can be used to populate the /etc/hosts file with the server’s IP address. The name of the mount point where the remote file system will be mounted. <server_address> <mount_pt_name> [mount_options] Here’s an example: filesys zeus:/export/home/user1 192.200. If you don’t have a name service running on the network.bg.1 /home ro. Here’s an example: install_type initial_install geo The geo keyword followed by a <locale> designates the regional locale or locales you want to install on a system (or to add when upgrading a system).quota.16 Keyword <server>: <path> filesys Remote Mount Keywords Description The name of the server where the remote file system resides. and PXE Table 7.9. This option deletes files to bring the clone system to an expected state. the mount options must be separated by commas and no spaces.

and Switzerland Eastern Europe. layout_constraint can be used for the upgrade option only when disk space reallocation is required. or online at http://docs. Colombia.sun. Portugal. and Spain Western Europe. Bulgaria. Here’s an example where the locale specified is S_America: geo S_America layout_constraint layout_constraint designates the constraint that auto-layout has on a file system if it needs to be reallocated during an upgrade because of space problems. including Greece. Italy. including Canada and the United States South America. and the Netherlands Middle East. Poland. Here’s the syntax: layout_constraint <slice> <constraint> [minimum_size] The <slice> field specifies the file system disk slice on which to specify the constraint. Latvia.17 Value N_Africa C_America N_America S_America Asia Ausi C_Europe E_Europe N_Europe S_Europe W_Europe M_East <locale> Values Description Northern Africa. Finland. Hungary. With layout_constraint. Macedonia. Lithuania. Bolivia. Uruguay. Estonia. Nicaragua. . and Turkey Northern Europe.com. Chile. and Panama North America. Paraguay. Republic of Korea. This guide is available on the Solaris 10 documentation CD. you specify the file system and the constraint you want to put on it. including Belgium. Peru. Ireland. and Sweden Southern Europe. Slovenia. Mexico. including Denmark. Ecuador. Iceland. Brazil. including Austria. Norway. including Argentina. France. including Japan. and Thailand Australasia. Bosnia. including Australia and New Zealand Central Europe. Guatemala. Croatia. It must be specified in the form c?t?d?s? or c?d?s?. including Egypt Central America. Taiwan. including Israel Refer to the “International Language Environments Guide” in the “Solaris 10 International Language Support Collection” for a complete listing of <locale> values. Romania. El Salvador.355 Custom JumpStart Values you can specify for <locale> are listed in Table 7. including Albania. Serbia. Slovakia.17. Table 7. including Costa Rica. Russia. 
Germany. Czech Republic. and Venezuela Asia. Republic of China. Great Britain.

Chapter 7: Advanced Installation Procedures: JumpStart, Flash Archive, and PXE

Table 7.18 describes the options for layout_constraint.

Table 7.18 layout_constraint Options

changeable: Auto-layout can move the file system to another location and can change its size. You can change the file system's size by specifying the minimum_size value. When you mark a file system as changeable and minimum_size is not specified, the file system's minimum size is set to 10% greater than the minimum size required. For example, if the minimum size for a file system is 1000MB, the changed size would be 1100MB. If minimum_size is specified, any free space left over (the original size minus the minimum size) is used for other file systems.

movable: Auto-layout can move the file system to another slice on the same disk or on a different disk, but its size stays the same.

available: Auto-layout can use all the space on the file system to reallocate space. All the data in the file system is then lost. This constraint can be specified only on file systems that are not mounted by the /etc/vfstab file.

collapse: Auto-layout moves (collapses) the specified file system into its parent file system. You can use this option to reduce the number of file systems on a system as part of the upgrade.

minimum_size: This value lets you change the size of a file system by specifying the size you want it to be after auto-layout reallocates space. The size of the file system might end up being more if unallocated space is added to it, but the size is never less than the value you specify. You can use this optional value only if you have marked a file system as changeable. The minimum_size cannot be less than the file system needs for its existing contents.

The following are some examples:

layout_constraint c0t0d0s3 changeable 1200

The file system c0t0d0s3 can be moved to another location, and its size can be changed to more than 1200MB but no less than 1200MB.

layout_constraint c0t0d0s4 movable

The file system on slice c0t0d0s4 can move to another disk slice, but its size stays the same.

layout_constraint c0t2d0s1 collapse

c0t2d0s1 is moved into its parent directory to reduce the number of file systems. For example, if the system has the /usr and /usr/openwin file systems, collapsing the /usr/openwin file system would move it into /usr (its parent).
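Taken together in a class file, these constraints might look like the following hypothetical upgrade profile fragment (the slice names are examples only, chosen to match the examples above):

```
install_type      upgrade
layout_constraint c0t0d0s3 changeable 1200
layout_constraint c0t0d0s4 movable
layout_constraint c0t2d0s1 collapse
```

During the upgrade, auto-layout would then be free to resize c0t0d0s3 (to no less than 1200MB), relocate c0t0d0s4, and fold c0t2d0s1 into its parent.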

local_customization

This keyword is used when installing Solaris Flash Archives and can be used to create custom scripts to preserve local configurations on a clone system before installing a Solaris Flash Archive. The syntax for this option is

local_customization local_directory

The local_directory parameter specifies the directory on the clone system where any scripts are held.

locale

locale designates which language or locale packages should be installed for the specified locale_name. A locale determines how online information is displayed for a specific language or region, such as date, time, spelling, and monetary value. The English language packages are installed by default. You can specify a locale keyword for each language or locale you need to add to a system. Following is the locale syntax:

locale <locale_name>

Here's an example:

locale es

This example specifies Spanish as the language package you want installed. Therefore, if you want English as your language but you also want to use the monetary values for Australia, you would choose the Australia locale value (en_AU) instead of the English language value.

metadb

The metadb keyword allows you to create Solaris Volume Manager state database replicas as part of the custom JumpStart installation. You can use this keyword more than once to create state database replicas on several disk slices. The syntax for this keyword is shown here:

metadb slice [size <size-in-blocks>] [count <number-of-replicas>]

Table 7.19 describes the options for metadb.

Table 7.19 metadb Options

slice: The disk slice on which you want to place the state database replica. It must be in the format cwtxdysz.

size <size-in-blocks>: The number of blocks specifying the size of the replica. If this option is omitted, a default size of 8192 is allocated.

count <number-of-replicas>: The number of replicas to create. If this option is omitted, three replicas are created by default.

no_content_check

This keyword is used when installing Solaris Flash Archives. When specified, it ignores file-by-file validation, which is used to ensure that a clone system is a duplicate of the master system. Use this option only if you are sure the clone is a duplicate of the master system, because files are deleted to bring the clone to an expected state if discrepancies are found.

no_master_check

This keyword is used when installing Solaris Flash Archives. When specified, it ignores the check to verify that a clone system was built from the original master system. Use this option only if you are sure the clone is a duplicate of the original master system.

num_clients

When a server is installed, space is allocated for each diskless client's root (/) and swap file systems. num_clients defines the number of diskless clients that a server supports. If you do not specify num_clients, five diskless clients are allocated. You can use this option only when system_type is set to server. Following is the syntax:

num_clients client_num

Here's an example:

num_clients 10

In this example, space is allocated for 10 diskless clients.

package

package designates whether a package should be added to or deleted from the software group that is installed on the system. add or delete indicates the action required. If you do not specify add or delete, add is set by default. Following is the syntax:

package <package_name> [add [<retrieval_type> location] | delete]
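Pulling the last few keywords together, here is a hypothetical server class file fragment; the slice names, sizes, and package choice are illustrative only (SUNWman is the standard Solaris man page package):

```
system_type   server
num_clients   10
metadb        c0t0d0s7 size 8192 count 3
metadb        c0t1d0s7 size 8192 count 3
package       SUNWman add
```

Because metadb appears twice, state database replicas are spread across two disk slices, and the server reserves root and swap space for 10 diskless clients.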

The <package_name> must be in the form SUNWname. The <retrieval_type> parameter can be one of the following:

. NFS
. HTTP or HTTPS
. Local device
. Local file

The syntax for a package located on an NFS server is as follows:

package <package_name> add nfs server_name:/path <retry n>

where <retry n> specifies the maximum number of attempts to mount the directory. Here's an example:

package SUNWxwman add nfs server1:/var/spool/packages retry 5

In this example, SUNWxwman (X Window online man pages) is being installed on the system from a location on a remote NFS server.

The syntax for a package located on an HTTP or HTTPS server is as follows:

package <package_name> add http://server_name:port/path <optional keywords>
package <package_name> add https://server_name:port/path <optional keywords>

Table 7.20 lists the optional keywords that can be used with this option.

Table 7.20 HTTP package Optional Keywords

timeout <min>: Specifies the maximum time, in minutes, that is allowed to elapse without receiving data from the HTTP server.

proxy <host>:<port>: Specifies a proxy host and port. The proxy option can be used when you need to access a package from the other side of a firewall. The <port> value must be supplied.

The syntax for a package located on a local device is as follows:

package <package_name> add <local_device> <device> <path> <file_system_type>

The syntax for a package located in a local file is as follows:

package <package_name> add <local_file> <path>

All that is needed for this option is to specify the full pathname to the directory containing the package.
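Combining the HTTP retrieval type with both optional keywords from Table 7.20, a hypothetical entry might read (the server name, port, path, and proxy host are invented for illustration):

```
package SUNWxwman add http://pkgserver:8080/solaris/packages timeout 5 proxy webcache:8080
```

Here the installer gives up if no data arrives from the HTTP server for 5 minutes, and all requests are routed through the proxy host webcache on port 8080.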

partitioning

partitioning defines how the disks are divided into slices for file systems during the installation. Following is the syntax:

partitioning default|existing|explicit

The partitioning options are described in Table 7.21.

Table 7.21 partitioning Options

default: The Solaris installation program selects the disks and creates the file systems where the specified software is installed, except for any file systems specified by the filesys keyword. rootdisk is selected first. Additional disks are used if the specified software does not fit on rootdisk.

existing: The Solaris installation program uses the existing file systems on the system's disks. All file systems except /, /usr, /usr/openwin, /opt, and /var are preserved. The installation program uses the last mount point field from the file system superblock to determine which file system mount point the slice represents.

explicit: The Solaris installation program uses the disks and creates the file systems specified by the filesys keywords. When you use the explicit class file value, you must use the filesys class file keyword to specify which disks to use and what file systems to create.

If you do not specify partitioning, the default is set. If you specify only the root (/) file system with the filesys keyword, all the Solaris software is installed in the root file system.

pool

The pool keyword is used for ZFS only and defines the new root pool to be created. The syntax for this keyword is as follows:

pool <poolname> <poolsize> <swapsize> <dumpsize> <vdevlist>

where <poolname> is the name of the new ZFS pool to be created. The <poolsize>, <swapsize>, <dumpsize>, and <vdevlist> options, described in the following list, are required:

. <poolsize>: A value specifying the size of the new pool to be created. The options are auto and size. When using auto, the installer allocates the largest possible pool size on the device. The size is assumed to be in megabytes unless g or auto is specified.

. <swapsize>: A value specifying the size of the swap volume (zvol). Use the auto option to use the default swap size, or specify a custom size using the size option. When using auto, the swap area is automatically sized; the default size is one half the size of physical memory, but no less than 512MB and no greater than 2GB. You can set the size outside this range by using the size option. size is assumed to be in megabytes, unless specified by g (gigabytes).

. <dumpsize>: Specifies the size of the dump volume that will be created within the new root pool. When using auto, the dump device is sized automatically.

. <vdevlist>: Specifies the devices used to create the pool. Devices in the vdevlist must be slices for the root pool. vdevlist can be either a <single-device-name> in the form c#t#d#s#, or the mirror or any option:

<single-device-name>: A disk slice in the form c#t#d#s#, such as c0t0d0s0.

mirror <device-names>: Specifies the mirroring of the disk. The device names are in the form of c#t#d#s#.

mirror any: Enables the installer to select a suitable device.

The following example creates a new 20GB root pool on device c0t0d0s0; the swap and dump volumes are 4GB each:

pool rpool 20G 4G 4G c0t0d0s0

The following example installs a mirrored ZFS root pool. The root pool is named "rootpool," the disk slice is 80GB, and the swap and dump volumes are 2GB each. The root pool will be mirrored and will use any two available devices that are large enough to create an 80GB pool:

pool rootpool 80g 2g 2g mirror any any

This example is the same as the previous one, except that the disk devices are specified:

pool rootpool 80g 2g 2g mirror c0t0d0s0 c1t0d0s0

This example creates a new root pool named "rootpool." The size of the pool is determined automatically by the size of the disks, the swap and dump volumes are sized automatically, and the mirror is set up on devices c0t0d0s0 and c0t1d0s0:

pool rootpool auto auto auto mirror c0t0d0s0 c0t1d0s0

patch

patch specifies the patch ID numbers that are to be installed. The list should be a list of comma-separated Solaris patch IDs (no spaces). The patches are installed in the order specified in the list. The syntax for this keyword is as follows:

patch <patchid_list>

or

patch <patch_file> <patch_location> <optional_keywords>

where:

. <patchid_list>: Specifies the patch ID numbers that are to be installed.

. <patch_file>: A file that contains a list of patches that are found in the <patch_location>.

. <patch_location>: Specifies the location where the patches are found. This location can be an NFS server, HTTP server, local device, or local file.

. <optional_keywords>: Optional keywords that depend on where the patches are stored. Refer to "Solaris 10 Installation Guide: JumpStart and Advanced Installations" for a list of keywords.

root_device

root_device designates the system's root disk. Following is the syntax:

root_device <slice>

Here's an example:

root_device c0t0d0s0

NOTE Specifying mirrors If you are upgrading a RAID-1 (mirror) volume, the slice you specify should be one side of the mirror. The other side will be upgraded automatically.

system_type

system_type defines the type of system being installed. Following is the syntax:

system_type [standalone | server]

Here's an example:

system_type server

If you do not specify system_type in a class file, standalone is set by default.
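As a sketch of how several of these keywords combine, here is a hypothetical ZFS-root class file; the device names and patch IDs are examples only, and the cluster keyword (which selects the software group, here the full distribution) is shown for completeness even though it is covered elsewhere:

```
install_type  initial_install
system_type   standalone
cluster       SUNWCall
pool          rpool auto auto auto mirror c0t0d0s0 c0t1d0s0
patch         119254-92,119317-01
```

This profile would create a mirrored root pool sized automatically from the two slices, with auto-sized swap and dump volumes, and then apply the two listed patches in order.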

usedisk

usedisk designates one or more disks that you want the Solaris installation program to use when the partitioning default is specified. disk_name must be specified in the form c?t?d? or c?d?, such as c0t0d0. If you specify the usedisk class file keyword in a class file, the Solaris installation program uses only the disks that you specify. By default, the installation program uses all the operational disks on the system. Following is the syntax:

usedisk <disk_name> [<disk_name>]

Here's an example:

usedisk c0t0d0 c0t1d0

NOTE dontuse and usedisk You cannot specify the usedisk keyword and the dontuse keyword in the same class file, because they are mutually exclusive.

Testing Class Files

After you create a class file, you can use the pfinstall command to test it. Testing a class file is sometimes called a dry run installation. By looking at the installation output generated by pfinstall, you can quickly determine whether a class file will do what you expect. For example, you can determine whether a system has enough disk space to upgrade to a new release of Solaris before you actually perform the upgrade.

To test a class file for a particular Solaris release, you must test it within the Solaris environment of the same release. For example, if you want to test a class file for Solaris 10, you have to run the pfinstall command on a system running Solaris 10.

To test the class file, change to the JumpStart directory that contains the class file, and type the following:

# /usr/sbin/install.d/pfinstall -d <disk_config><cr>

or type the following:

# /usr/sbin/install.d/pfinstall -D<cr>

NOTE Install or test? Without the -d or -D option, pfinstall actually installs the Solaris software on the system by using the specified class file, and the data on the system is overwritten.

You must always test an upgrade class file against a system's disk configuration using the -D option. Following is the syntax for pfinstall:

/usr/sbin/install.d/pfinstall [-D|-d <disk_config>] [-c <path>] <profile>

The pfinstall options are described in Table 7.22.

Table 7.22 pfinstall Options

-D: Tells pfinstall to use the current system's disk configuration to test the class file against.

-d <disk_config>: Tells pfinstall to use a disk configuration file, <disk_config>, to test the class file against. If the <disk_config> file is not in the directory where pfinstall is run, you must specify the path. This option cannot be used with an upgrade class file (an install_type of upgrade). See the example following this table of how to create the <disk_config> file.

-c <path>: Specifies the path to the Solaris CD image. This is required if the Solaris CD is not mounted on /cdrom. For example, use this option if the system is using Volume Manager to mount the Solaris CD.

<profile>: Specifies the name of the class file to test. If the class file is not in the directory where pfinstall is being run, you must specify the path.

A disk configuration file represents a disk's structure. It describes a disk's bytes per sector, flags, and slices. You can create a <disk_config> file by issuing the following command:

prtvtoc /dev/rdsk/<device_name> > <disk_config>

where /dev/rdsk/<device_name> is the device name of the system's disk, and <disk_config> is the name of the disk configuration file to contain the redirected output. <device_name> must be in the form c?t?d?s2 or c?d?s2.

NOTE Identifying disks c?t?d?s2 designates a specific target for a SCSI disk, and c?d?s2 designates a non-SCSI disk.

Here's an example:

# prtvtoc /dev/rdsk/c0t0d0s2 > test<cr>

The file named "test" created by this example would be your <disk_config> file, and it would look like this:

* /dev/rdsk/c0t0d0s2 partition map
*

* Dimensions:
*     512 bytes/sector
*     126 sectors/track
*       4 tracks/cylinder
*     504 sectors/cylinder
*    4106 cylinders
*    4104 accessible cylinders
*
* Flags:
*   1: unmountable
*  10: read-only
*
*                          First     Sector     Last
* Partition  Tag  Flags    Sector    Count      Sector   Mount Directory
        0     2    00           0    268632     268631   /
        1     3    01      268632    193032     461663
        2     5    00           0   2068416    2068415
        3     0    00      461664    152712     614375   /export
        4     0    00      614376    141624     755999   /export/swap
        6     4    00      756000   1312416    2068415   /usr

NOTE Multiple disks If you want to test installing Solaris software on multiple disks, concatenate single disk configuration files and save the output to a new file.

In addition, if you want to test the class file for a system with a specific system memory size, set SYS_MEMSIZE to the specific memory size in MB.

The following example tests the ultra_class class file against the disk configuration on a Solaris 10 system on which pfinstall is being run. The ultra_class class file is located in the /export/jumpstart directory, and the path to the Solaris CD image is specified because Volume Management is being used. For this example, I'll set SYS_MEMSIZE to 512MB:

# SYS_MEMSIZE=512<cr>
# export SYS_MEMSIZE<cr>
# cd /export/jumpstart<cr>
# /usr/sbin/install.d/pfinstall -D -c /cdrom/cdrom0/s0 ultra_class<cr>

The system tests the class file and displays several pages of results. Look for the following message, which indicates that the test was successful:

Installation complete
Test run complete. Exit status 0.
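The multiple-disk note above amounts to a simple concatenation. A minimal sketch with hypothetical file names follows; each single-disk file would normally come from prtvtoc, which is Solaris-specific, so stand-in contents are generated here:

```shell
# Stand-ins for per-disk files normally produced with:
#   prtvtoc /dev/rdsk/<device_name> > <disk_config>
printf '* /dev/rdsk/c0t0d0s2 partition map\n' > disk0_config
printf '* /dev/rdsk/c0t1d0s2 partition map\n' > disk1_config

# Concatenate the single-disk files into one multi-disk configuration
# file, which can then be passed to pfinstall with the -d option.
cat disk0_config disk1_config > multi_disk_config
```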

sysidcfg File

When a SPARC-based JumpStart client boots for the first time, the booting software first tries to obtain the system identification information from either the sysidcfg file or a name service. The identification information and the configurable sources are described in Table 7.23.

Table 7.23 JumpStart Client Identification Information

Identification Item    Configurable Using sysidcfg?   Configurable Using a Name Service?
Name service           Yes                            Yes
Domain name            Yes                            No
Name server            Yes                            No
Network interface      Yes                            No
Hostname               Yes                            Yes
IP address             Yes                            Yes
Netmask                Yes                            Yes
DHCP                   Yes                            No
IPv6                   Yes                            No
Default router         Yes                            No
Root password          Yes                            No
Security policy        Yes                            No
Locale                 Yes                            Yes if NIS or NIS+; No if DNS or LDAP
Terminal type          Yes                            No
Time zone              Yes                            Yes
Date and time          Yes                            Yes
Power management       No                             No
Service profile        Yes                            No
Timeserver             Yes                            Yes
Keyboard               Yes                            No
NFS4 domain            Yes                            No

The JumpStart client determines the location of the sysidcfg file from the BOOTPARAMS information provided by the boot server. The location of the sysidcfg file was specified when you set up the JumpStart client on the boot server using the add_install_client script; it is specified by the -p argument to the add_install_client script used to create a JumpStart client information file.

If you're not using a name service, you'll use the sysidcfg file to answer system identification questions during the initial part of the installation. If you're using a name service, you'll want to look over the section "Setting Up JumpStart in a Name Service Environment." If the JumpStart server provides this information, the client bypasses the initial system identification portion of the Solaris 10 installation process. Without the sysidcfg file, the client displays the appropriate interactive dialog to request system identification information.

Creating a sysidcfg file requires the system administrator to specify a set of keywords in the sysidcfg file to preconfigure a system. You must create a unique sysidcfg file for every system that requires different configuration information. Only one sysidcfg file can reside in a directory or on a diskette. The sysidcfg file can reside on a shared NFS directory or the root (/) directory on a UFS file system. It can also reside on a PCFS file system located on a diskette.

You use two types of keywords in the sysidcfg file: independent and dependent. Values can optionally be enclosed in single quotes (') or double quotes ("). Here's an example illustrating independent and dependent keywords:

name_service=NIS {domain_name=pyramid.com
                  name_server=server(192.168.0.1)}

In this example, name_service is the independent keyword, and domain_name and name_server are the dependent keywords.

NOTE Dependent keywords Enclose all dependent keywords in curly braces ({ }) to tie them to their associated independent keyword.

To help explain sysidcfg keywords, we'll group them in categories and describe each of them in detail.

Name Service, Domain Name, and Name Server Keywords

The following keywords are related to the name service you will be using. The name_service=<value> keyword is assigned one of five values that specify the name service to be used: NIS, NIS+, DNS, LDAP, and NONE:

. NIS or NIS+: If you are using NIS as your name service, specify the following:

name_service=NIS

For the NIS and NIS+ values, additional keywords are specified:

domain_name=<value>

The domain <value> in the previous line is the domain name. For example, if the domain name is pyramid.com, specify it as follows:

domain_name=pyramid.com

name_server=<value>

The name_server <value> is the hostname or IP address for the name server. You can specify up to three IP addresses for the name_server. For example:

name_server=192.168.0.1,192.168.0.2,192.168.0.3

. DNS: If you are using DNS for the name_service <value>, specify the following:

name_service=DNS

Then you need to specify the following additional dependent keywords:

domain_name=<value>

Enter the domain name for the domain_name <value>. For example, if the domain name is pyramid.com, specify it as follows:

domain_name=pyramid.com

name_server=<value>

For the name_server <value>, you can specify up to three IP addresses for the name_server. For example:

name_server=192.168.0.1,192.168.0.2,192.168.0.3

search=<value>

where <value> is the search entry, which cannot exceed 250 characters. The search option adds the values to the search path to use for DNS queries. Here's a sample DNS search entry:

search=pyramid.com,west.pyramid.com,east.pyramid.com

. LDAP: If you are using LDAP for the name_service <value>, specify the following:

name_service=LDAP

Then you need to specify the following additional dependent keywords:

domain_name=<value>

Enter the domain name for the domain_name <value>.

The profile parameter can also be specified to identify an LDAP profile to use. Specify this as follows:

profile=<value>

where <value> is the profile name. The profile server identifies the IP address of the profile server from which the LDAP profile can be obtained. Specify this as follows:

profile_server=<value>

where <value> is the IP address of the profile server. Here's an example LDAP entry with its dependent keywords:

name_service=LDAP {domain_name=west.pyramid.com
                   profile=default
                   profile_server=192.168.0.100}

Network-Related Keywords

Network-related keywords relate to the network interface to be used. Specify this item as follows:

network_interface=<value>

Specify a <value> for the interface to be configured. You can enter a specific interface, such as eri0, or you can enter NONE (if there are no interfaces to configure) or PRIMARY (to select the primary interface):

network_interface=eri0

If you are not using DHCP, the dependent keywords for a PRIMARY interface are as follows:

hostname=<hostname>
ip_address=<ip_address>
netmask=<netmask value>
default_route=<ip_address>
protocol_ipv6=<yes or no>

For example, if your primary network interface is named eri0, here's a sample sysidcfg file:

network_interface=eri0 {primary hostname=client1
                        ip_address=192.168.0.10
                        netmask=255.255.255.0
                        default_route=192.168.0.1
                        protocol_ipv6=no}

If you are using DHCP, the only keywords available are the following:

dhcp
protocol_ipv6=<yes or no>

For example, here's a sample entry:

network_interface=eri0 {primary dhcp protocol_ipv6=no}

Whether using DHCP or not, the protocol_ipv6 keyword is optional.

NOTE Multiple interfaces allowed You can now enter multiple network interfaces into the sysidcfg file; just specify a separate network_interface entry for each one to be included.

Setting the Root Password

The root password keyword is

root_password=<encrypted passwd>

The value for <encrypted passwd> is taken from the /etc/shadow file. For example, an entry might look like this:

root_password=XbcjeAgl8jLeI

The following is the security-related keyword:

security_policy=<value>

where <value> is either KERBEROS or NONE. When specifying the KERBEROS value, you also need to specify the following dependent keywords:

default_realm=<fully qualified domain name>
admin_server=<fully qualified domain name>
kdc=<value>

where <value> can list a maximum of three key distribution centers (KDCs) for a security_policy keyword. At least one is required. Here's an example using the security_policy keyword:

security_policy=kerberos {default_realm=pyramid.com
                          admin_server=krbadmin.pyramid.com
                          kdc=kdc1.pyramid.com,kdc2.pyramid.com}
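Following the NOTE above about multiple interfaces, a hypothetical sysidcfg fragment configuring two interfaces might look like this (the interface names, hostnames, and addresses are examples only):

```
network_interface=eri0 {primary hostname=client1
                        ip_address=192.168.0.10
                        netmask=255.255.255.0
                        protocol_ipv6=no}
network_interface=eri1 {hostname=client1-bk
                        ip_address=192.168.1.10
                        netmask=255.255.255.0
                        protocol_ipv6=no}
```

Each interface gets its own independent network_interface entry with its own set of dependent keywords in curly braces; only one entry carries the primary keyword.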

Setting the System Locale, Terminal, Time Zone, and Time Server

The keyword used to set the system locale is

system_locale=<value>

where <value> is an entry from the /usr/lib/locale directory. The following example sets the value to English:

system_locale=en_US

The keyword to set the terminal type is as follows:

terminal=<terminal_type>

where <terminal_type> is an entry from the /usr/share/lib/terminfo database. The following example sets the terminal type to vt100:

terminal=vt100

The keyword to set the time zone is as follows:

timezone=<timezone>

where <timezone> is an entry from the /usr/share/lib/zoneinfo directory. The following entry sets the time zone to Eastern Standard Time:

timezone=EST

The keyword to set the time server is as follows:

timeserver=<value>

where <value> can be LOCALHOST, HOSTNAME, or IP_ADDRESS. The following example sets the time server to be the localhost:

timeserver=localhost

The following rules apply to keywords in the sysidcfg file:

. Keywords can be in any order.
. Keywords are not case-sensitive.
. Keyword values can be optionally enclosed in single quotes (').
. Only the first instance of a keyword is valid; if you specify the same keyword more than once, the first keyword specified is used.

The following is a sample sysidcfg file, located in the configuration directory named /export/jumpstart:

system_locale=en_US
timezone=EST
timeserver=localhost
terminal=vt100
name_service=NONE
security_policy=none
root_password=XbcjeAgl8jLeI
nfs4_domain=dynamic
network_interface=eri0 {primary hostname=sunfire
                        ip_address=192.168.1.10
                        protocol_ipv6=no
                        netmask=255.255.255.0}

Setting Up JumpStart in a Name Service Environment

As stated in the previous section, you can use the sysidcfg file to answer system identification questions during the initial part of installation regardless of whether a name service is used. When the sysidcfg file is used with the NIS naming service, identification parameters such as locale and time zone can be provided from the name service, and a separate sysidcfg file for each client is unnecessary. The sysidcfg file necessary for installing a JumpStart client on a network running the NIS name service is typically much shorter.

You'll use the /etc/locale, /etc/timezone, /etc/hosts, /etc/ethers, and /etc/netmasks files as the source for creating NIS databases to support JumpStart client installations. See Chapter 5 for more information on NIS and how to create NIS maps.

Setting Up Clients

Now you need to set up the clients to install over the network. After setting up the /export/jumpstart directory and the appropriate files, use the add_install_client command on the install server to set up remote workstations to install Solaris from the install server. The command syntax for the add_install_client command is as follows:

add_install_client [-e <ethernet_addr>] [-i <ip_addr>] \
  [-s <install_svr:/dist>] [-c <config_svr:/config_dir>] \
  [-p <sysidcfg_svr:/sysid_config_dir>] <host_name> <platform_group>

add_install_client -d [-s <install_svr:/dist>] \
  [-c <config_svr:/config_dir>] [-p <sysidcfg_svr:/sysid_config_dir>] \
  [-t <install_boot_image_path>] <platform_name> <platform_group>

The add_install_client options are described in Table 7.24.

Table 7.24 add_install_client Options

-d: Specifies that the client is to use DHCP to obtain the network install parameters. This option must be used for PXE clients to boot from the network.

-e <ethernet_addr>: Specifies the Ethernet address of the install client and is necessary if the client is not defined in the name service.

-i <ip_addr>: Specifies the IP address of the install client and is necessary if the client is not defined in the name service.

-s <install_svr:/dist>: Specifies the name of the install server (install_svr) and the path to the Solaris 10 operating environment distribution (/dist). This option is necessary if the client is being added to a boot server.

-c <config_svr:/config_dir>: Specifies the configuration server (config_svr) and path (/config_dir) to the configuration directory.

-p <sysidcfg_svr:/sysid_config_dir>: Specifies the configuration server (sysidcfg_svr) and the path to the sysidcfg file (/sysid_config_dir).

-t <install_boot_image_path>: Allows you to specify an alternate miniroot.

<host_name>: The hostname for the install client.

<platform_name>: Specifies the platform group to be used. Determine the platform group of the client by running uname -i. For a Sunfire box, this would be set to SUNW,UltraAX-i2.

<platform_group>: Specifies the client's architecture of the systems that use <servername> as an install server.
For additional options to the add_install_client command, see the Solaris online manual pages. In Step By Step 8.5, you’ll create a JumpStart client that will boot from a system that is configured as both the boot and install server. In addition, the entire Solaris 10 media is copied to the local disk.


STEP BY STEP

8.5 Creating a JumpStart Client

NOTE Sample setup In the following steps, the following associations have been made in the examples:

Install server name: sunfire
Distribution directory: /export/jumpstart/install
Configuration server name: sunfire
Configuration directory: /export/jumpstart/config
Boot server name: sunfire
Install client: client1
Install client's MAC address: 8:0:20:21:49:25
Client architecture: sun4u

1. On the install server, change to the directory that contains the installed Solaris 10 Operating Environment image:
# cd /export/jumpstart/install/Solaris_10/Tools<cr>

2. Create the JumpStart client using the add_install_client script found in the local directory:
# ./add_install_client -s sunfire:/export/jumpstart/install \
  -c sunfire:/export/jumpstart/config -p sunfire:/export/jumpstart \
  -e 8:0:20:21:49:25 -i 192.168.1.106 client1 sun4u<cr>

The system responds with this:
Adding Ethernet number for client1 to /etc/ethers
Adding "share -F nfs -o ro,anon=0 /export/jumpstart/install" to /etc/dfs/dfstab
making /tftpboot
enabling tftp in /etc/inetd.conf
updating /etc/bootparams
copying inetboot to /tftpboot

The add_install_client script automatically made entries into the following files and directory:
/etc/ethers:
8:0:20:21:49:25 client1

/etc/dfs/dfstab:
share -F nfs -o ro,anon=0 /export/jumpstart/install

/etc/bootparams:
client1 root=sunfire:/export/jumpstart/Solaris_10/Tools/Boot \
  install=sunfire:/export/jumpstart/install boottype=:in \
  sysid_config=sunfire:/export/jumpstart/config \
  install_config=sunfire:/export/jumpstart rootopts=:rsize=32768

/tftpboot directory:
lrwxrwxrwx 1 root other     26 Jun 19 16:11 C0A8016A -> inetboot.SUN4U.Solaris_10-1
lrwxrwxrwx 1 root other     26 Jun 19 16:11 C0A8016A.SUN4U -> inetboot.SUN4U.Solaris_10-1
-rwxr-xr-x 1 root other 158592 Jun 19 16:11 inetboot.SUN4U.Solaris_10-1
-rw-r--r-- 1 root other    317 Jun 19 16:11 rm.192.168.1.106
lrwxrwxrwx 1 root other      1 Jun 19 16:11 tftpboot -> .

3. Use the rm_install_client command to remove a JumpStart client’s entries and configuration information from the boot server:
# ./rm_install_client client1<cr>

The system responds with this:
removing client1 from bootparams removing /etc/bootparams, since it is empty removing /tftpboot/inetboot.SUN4U.Solaris_10-1 removing /tftpboot disabling tftp in /etc/inetd.conf

TIP
Know your config files Make sure you are familiar with the differences between the rules file, a class file, and the sysidcfg file. It is quite common to get an exam question that displays the contents of one of them and asks you to identify which one it is.

Troubleshooting JumpStart
The most common problems encountered with custom JumpStart involve setting up the network installation or booting the client. This section briefly describes some of the more common errors and what to do if you are faced with them.

Installation Setup
When running the add_install_client command to set up a new JumpStart client, you might get the following message:
Unknown client "hostname"

The probable cause of this error message is that the client does not have an entry in the hosts file (or table if using a name service).


Chapter 7: Advanced Installation Procedures: JumpStart, Flash Archive, and PXE

Make sure the client has an entry in the hosts file, or table, and rerun the add_install_client command. When you have set up the JumpStart Install server, make sure the relevant directories are shared correctly. It is a common problem to share the file systems at the wrong level so that the table of contents file cannot be found when the client tries to mount the remote file system.
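That hosts-file check can be scripted. The helper below is a minimal sketch; the client name client1 and the demonstration file are illustrative assumptions, and it deliberately runs against a temporary file rather than the real /etc/hosts:

```shell
#!/bin/sh
# Minimal sketch: confirm a JumpStart client has a hosts entry before
# rerunning add_install_client. Names and paths here are illustrative.
check_host_entry() {
    host="$1"
    file="$2"
    if grep -w "$host" "$file" >/dev/null 2>&1; then
        echo "ok: $host present in $file"
    else
        echo "missing: add $host to $file, then rerun add_install_client"
    fi
}

# Demonstrate against a temporary hosts file rather than /etc/hosts:
tmp="/tmp/hosts.$$"
printf '192.168.1.106\tclient1\n' > "$tmp"
check_host_entry client1 "$tmp"   # present
check_host_entry client2 "$tmp"   # reported missing
rm -f "$tmp"
```

On a system using a name service, the same check would be made against the hosts table (for example, with getent hosts) rather than the local file.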

Client Boot Problems
The following error message can appear if the Ethernet address of the JumpStart client has been specified incorrectly:
Timeout waiting for ARP/RARP packet...

Check the /etc/ethers file on the JumpStart server, and verify that the client’s Ethernet address has been specified correctly. When booting the client from the network, to initiate a custom JumpStart installation, you might get the following error message if more than one server attempts to respond to the boot request:
WARNING: getfile: RPC failed: error 5 (RPC Timed out).

This error indicates that more than one server has an entry for the client in its /etc/bootparams file. To rectify this problem, you need to check the servers on the subnet to find any duplicate entries and remove them, leaving only the entry required on the JumpStart server. When booting the client from the network, you could get the following error message if the system cannot find the correct media required for booting:
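The duplicate-entry check above can be sketched as a small script. This is illustrative only; the client name is an assumption, and it demonstrates against a temporary file, whereas on a real server you would check /etc/bootparams on each suspect host:

```shell
#!/bin/sh
# Sketch: count bootparams entries for a client. More than one entry
# across the servers on the subnet causes the RPC timeout described
# above. The client name and file contents are illustrative.
count_bootparams() {
    client="$1"
    file="$2"
    grep -c "^$client " "$file" 2>/dev/null
}

tmp="/tmp/bootparams.$$"
cat > "$tmp" <<'EOF'
client1 root=sunfire:/export/jumpstart/Solaris_10/Tools/Boot
client1 root=otherserver:/export/boot
EOF
n=$(count_bootparams client1 "$tmp")
echo "client1 has $n bootparams entries"
[ "$n" -gt 1 ] && echo "duplicate entries found: remove all but one"
rm -f "$tmp"
```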
The file just loaded does not appear to be executable

You need to verify that the custom JumpStart server has been correctly set up as a boot and install server. Additionally, make sure you specified the correct platform group for the client when you ran add_install_client to set up the client to be able to use JumpStart.

A Sample JumpStart Installation
The following example shows how you would set up a custom JumpStart installation for a fictitious site. The network consists of an Enterprise 3000 server and five Ultra workstations. The next section details how to start the JumpStart installation process by creating the install server.

Setting Up the Install Server
The first step is to set up the install server (see Step By Step 8.6). You'll choose the Enterprise server. This is where the contents of the Solaris CD are located. The contents of the CD can be made available by either loading the CD in the CD-ROM drive or copying the CD to the server's local hard drive. For this example, you will copy the files to the local hard drive. Use the setup_install_server command to copy the contents of the Solaris CD to the server's local disk. Files are copied to the /export/install directory.

STEP BY STEP
8.6 Setting Up the Install Server
1. Insert the Solaris Software CD 1 into the server's CD-ROM drive.

2. Type the following:
# cd /cdrom/cdrom0/s0/Solaris_10/Tools<cr>
# ./setup_install_server /export/install<cr>

The system responds with this:
Verifying target directory...
Calculating the required disk space for the Solaris_10 Product
Calculating space required for the installation boot image
Copying the CD image to disk...
Copying Install boot image hierarchy...
Install Server setup complete

3. Eject the Solaris 10 Software CD 1, and put in the Solaris 10 Software CD 2. Let vold automatically mount the CD.

4. Change to the Tools directory on the CD:
# cd /cdrom/cdrom0/Solaris_10/Tools<cr>

5. Execute the add_to_install_server script as follows to copy the images from the CD to the /export/install directory:
# ./add_to_install_server /export/install<cr>

6. Repeat steps 3, 4, and 5 for the remaining CDs.

Creating the JumpStart Directory
After you set up the install server, you need to set up a JumpStart configuration directory on the server. This directory holds the files necessary for a custom JumpStart installation of the Solaris software. You set up this directory by copying the sample directory from one of the Solaris CD images that has been put in /export/install. Do this by typing the following:
# mkdir /export/jumpstart<cr>
# cp -r /export/install/Solaris_10/Misc/jumpstart_sample/* /export/jumpstart<cr>


Any directory name can be used. You’ll use /export/jumpstart for this example.

Setting Up a Configuration Server
Follow the procedure in Step By Step 8.7 to set up a configuration server.

STEP BY STEP
8.7 Setting Up a Configuration Server
1. Log in as root on the server where you want the JumpStart configuration directory to reside.

2. Edit the /etc/dfs/dfstab file. Add the following entry:
share -F nfs -o ro,anon=0 /export/jumpstart

NOTE
NFS server It may be necessary to run the svcadm enable nfs/server command if the NFS server daemons are not running.
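You can check the service state before enabling it. The sketch below is guarded because svcs and svcadm are Solaris-specific; on any other system it only prints a note:

```shell
#!/bin/sh
# Guarded sketch (Solaris-specific): enable the NFS server service
# only if it is not already online.
if command -v svcs >/dev/null 2>&1; then
    svcs nfs/server
    svcs -H -o state nfs/server | grep -w online >/dev/null \
        || svcadm enable nfs/server
else
    echo "svcs not available (not a Solaris system)"
fi
```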

3. Type shareall and press Enter. This makes the contents of the /export/jumpstart directory accessible to systems on the network.

4. Working with the sample class file and rules files that were copied into the JumpStart directory earlier, use them to create configuration files that represent your network. For this example, I create a class file named engrg_prof. It looks like this:
# Specifies that the installation will be treated as an initial
# installation, as opposed to an upgrade.
install_type initial_install
# Specifies that the engineering systems are standalone systems.
system_type standalone
# Specifies that the JumpStart software uses default disk partitioning
# for installing Solaris software on the engineering systems.
partitioning default
# Specifies that the developer's software group will be installed.
cluster SUNWCprog
# Specifies that each system in the engineering group will have 2048
# Mbytes of swap space.
filesys any 2048 swap

The rules file contains the following rule:
network 192.9.200.0 - engrg_prof -

This rules file states that systems on the 192.9.200.0 network are installed using the engrg_prof class file.

5. Validate the rules and class files:
# cd /export/jumpstart<cr>
# ./check<cr>
Validating rules...
Validating profile engrg_prof...
The custom JumpStart configuration is ok.
# /usr/sbin/install.d/pfinstall -D -c /export/install engrg_prof<cr>

If check doesn’t find any errors, it creates the rules.ok file. Look for the following message, which indicates that the pfinstall test was successful:
Installation complete
Test run complete. Exit status 0.

You are finished creating the configuration server.

Setting Up Clients
Now, on the install server, set up each client:
# cd /export/install/Solaris_10/Tools<cr>
# ./add_install_client -s sparcserver:/export/install -c \
sparcserver:/export/jumpstart -p sparcserver:/export/jumpstart -e \
8:0:20:21:49:25 -i 192.9.200.106 sun1 sun4u<cr>
# ./add_install_client -s sparcserver:/export/install -c \
sparcserver:/export/jumpstart -p sparcserver:/export/jumpstart -e \
8:0:20:21:49:24 -i 192.9.200.107 sun2 sun4u<cr>

This example sets up two engineering workstations, sun1 and sun2, so that they can be installed over the network from the install server named sparcserver. It is assumed that a sysidcfg file is located in the /export/jumpstart directory on “sparcserver” and that both clients will use the same sysidcfg file.
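The contents of that sysidcfg file are site-specific. The sketch below writes a minimal illustrative version; every value in it (locale, time zone, netmask, and especially the password hash placeholder) is an assumption for this fictitious network, not a prescription:

```shell
#!/bin/sh
# Write an illustrative sysidcfg; all values are example placeholders
# and must be adjusted for your site before real use.
cat > /tmp/sysidcfg.example <<'EOF'
system_locale=en_US
timezone=US/Eastern
timeserver=localhost
terminal=vt100
name_service=NONE
security_policy=NONE
root_password=PLACEHOLDER_HASH
network_interface=primary {netmask=255.255.255.0 protocol_ipv6=no}
EOF
grep -c '=' /tmp/sysidcfg.example   # every line contains '=': prints 8
```

Any keyword omitted from sysidcfg causes the client to prompt interactively for that value during the otherwise hands-off installation.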

Starting Up the Clients
After the setup is complete, you can start up the engineering systems by using the following startup command at the OK (PROM) prompt of each system:
ok boot net - install<cr>

You see the following:
Rebooting with command: net - install
Boot device: /pci@1f,0/pci@1,1/network@1,1 File and args: - install
20800 SunOS Release 5.10 Version Generic_127127-11 64-bit
Copyright 1983-2008 Sun Microsystems, Inc. All rights reserved.
whoami: no domain name
Configuring /dev and /devices
Using RPC Bootparams for network configuration information.
Configured interface eri0
Using sysid configuration file 192.9.200.101:/export/jumpstart/sysidcfg
The system is coming up. Please wait.
Starting remote procedure call (RPC) services: sysidns done.
Starting Solaris installation program...
Searching for JumpStart directory...
Using rules.ok from 192.9.200.101:/export/jumpstart.
Checking rules.ok file...
Using profile: engrg_prof
Executing JumpStart preinstall phase...
Searching for SolStart directory...
Checking rules.ok file...
Using begin script: install_begin
Using finish script: patch_finish
Executing SolStart preinstall phase...
Executing begin script "install_begin"...
Begin script install_begin execution completed.
Processing default locales
- Specifying default locale (en_US)
Processing profile
- Selecting cluster (SUNWCprog)
WARNING: Unknown cluster ignored (SUNWCxgl)
- Selecting package (SUNWaudmo)
- Selecting locale (en_US)
Installing 64 Bit Solaris Packages
- Selecting all disks
- Configuring boot device
- Configuring swap (any)
- Configuring /opt (any)
- Automatically configuring disks for Solaris operating environment
Verifying disk configuration
Verifying space allocation
- Total software size: 3771.46 Mbytes
Preparing system for Solaris install
Configuring disk (c0t0d0)
- Creating Solaris disk label (VTOC)
Creating and checking UFS file systems
- Creating / (c0t0d0s0)
- Creating /opt (c0t0d0s5)
Beginning Solaris software installation
Starting software installation

SUNWxwrtl...done. 3756.31 Mbytes remaining.

<output truncated>
Completed software installation
Solaris 10 software installation succeeded
Customizing system files
- Mount points table (/etc/vfstab)
- Network host addresses (/etc/hosts)
Customizing system devices
- Physical devices (/devices)
- Logical devices (/dev)
Installing boot information
- Installing boot blocks (c0t0d0s0)
Installation log location
- /a/var/sadm/system/logs/install_log (before reboot)
- /var/sadm/system/logs/install_log (after reboot)
Installation complete
Executing SolStart postinstall phase...
Executing finish script "patch_finish"...
Finish script patch_finish execution completed.
Executing JumpStart postinstall phase...
The begin script log 'begin.log' is located in /var/sadm/system/logs after reboot.
The finish script log 'finish.log' is located in /var/sadm/system/logs after reboot.
syncing file systems... done
rebooting...

The client reads the sysidcfg file, then the rules.ok file, and then the class file on the server. If any system identification information is missing from the sysidcfg file, the client displays the appropriate dialog requesting that information. The system then automatically installs the Solaris operating environment. This completes the JumpStart configuration.


Solaris Flash
Objective
. Explain Flash, create and manipulate the Flash Archive, and use it for installation.

The main feature of Solaris Flash is to provide a method to store a snapshot of the Solaris operating environment, complete with all installed patches and applications. This snapshot is called the Flash Archive, and the system that the archive is taken from is called the master machine. This archive can be stored on disk, CD-ROM, or tape media. You can use this archive for disaster recovery purposes or to replicate (clone) an environment on one or more other systems. When using a Flash Archive to install the Solaris environment onto a system, the target system we are installing the environment on is called the installation client.

When you're ready to install the Solaris environment using the Flash Archive, you can access the archive on either local media or across the network. The Flash Archive is made available across the network by using FTP, NFS, HTTP, or HTTPS. Furthermore, when installing from a Flash Archive onto the installation client, the install can be modified from the original archive to accommodate things such as kernel architecture, device differences, and partitioning schemes between the master machine and the installation client.

A few limitations of the Flash Archive are worth noting:
- Flash does not support metadevices or non-UFS file systems.
- The archive can only be generated using packages that are currently installed and available on the master server.

You can also initiate pre- and post-installation scripts to further customize the system before or after the installation of the Flash Archive. These standard shell scripts can be run during creation, installation, post-installation, and the first reboot. Specifically, you could use these scripts to perform the following tasks:

- Configure applications on the clone.
- Validate the installation on the clone.
- Protect local customizations from being overwritten by the Solaris Flash software.

This section describes how to create the Flash Archive using the flarcreate command, how to obtain information from an existing Flash Archive using the flar command, and how to install the operating system on an installation client from a Flash Archive.


NOTE
Flash install enhancement A Flash installation can now be used to update a system, using a differential Flash Archive. Previously, a Flash Install could only be used to perform an initial installation. A new <install_type> of flash_update is available with Solaris 10.

Creating a Flash Archive
The first step is to identify the master machine. This system will serve as the template for the archive. All software and data on the master machine, unless specifically excluded, will become part of the Flash Archive that will be installed on the installation client. Next, make sure that the master machine is completely installed, patched, and has all its applications installed. Depending on the application, however, you may want to create the archive before the application is configured; this allows you to configure the application specifically for each system it runs on. To ensure that the archive is clean, it's recommended that the archive be created before the master machine has ever gone into production and while the system is in a quiescent state.

Finally, determine where the archive will be stored. You can store the archive on a disk, a CD-ROM, or a tape. After the archive has been stored, you can even compress it so that it takes up less space. Because these archives can be used for disaster recovery, store the archive somewhere offsite.

You'll use the flarcreate command to create the archive. The syntax for the command is as follows:
flarcreate -n <name> [-R <root>] [-A <system_image>] [-H] [-I] [-L <archiver>] \
  [-M] [-S] [-c] [-t [-p <posn>] [-b <blocksize>]] [-i <date>] \
  [-u <section>...] [-m <master>] [-f [<filelist> | -]] [-F] [-a <author>] \
  [-e <descr> | -E <descr_file>] [-T <type>] [-U key=value...] \
  [-x <exclude>...] [-y <include>...] [-z <filelist>...] [-X <filelist>...] \
  <archive>

The options to the flarcreate command are described in Table 7.25. In this command syntax, <archive> is the name of the archive file to be created. If you do not specify a path, flarcreate saves the archive file in the current directory.


Table 7.25  Command-Line Options for flarcreate

The Following Option Is Required

-n <name>
The value of this flag is the name of the archive. This is a name stored internally in the archive and should not be confused with the filename used when storing the archive.

The Following General Options Are Available

-A <system_image>
Creates a differential Flash Archive by comparing a new system image with the image specified by <system_image>.

-f <filelist>
Uses the contents of <filelist> as a list of files to include in the archive.

-F
Uses only files listed in <filelist>, making this an absolute list of files, instead of an addition to the normal file list.

-c
Compresses the archive by using the compress command.

-H
Does not generate a hash identifier.

-I
Ignores the integrity check.

-L <archiver>
The value for the file_archived_method field in the identification section. cpio is the default method used, but you could specify -L pax to use the pax utility to create an archive without a 4GB limitation on individual file sizes.

-M
Used only when you are creating a differential Flash Archive (described in the next section). When creating a differential archive, flarcreate creates a long list of the files in the system that remain the same, are changed, and are to be deleted on clone systems. This list is stored in the manifest section of the archive. When the differential archive is deployed, the Flash software uses this list to perform a file-by-file check, ensuring the integrity of the clone system. Use of this option avoids this check and saves the space used by the manifest section in a differential archive. However, you must weigh the savings in time and disk space against the loss of an integrity check upon deployment. Because of this loss, use of this option is not recommended.

-R <root>
Creates the archive from the file system tree that is rooted at <root>. If you do not specify this option, flarcreate creates an archive from a file system that is rooted at /.

-S
Skips the disk space check and doesn't write archive size data to the archive.

-s
Does not include sizing information in the archive.

-x <exclude>
Excludes the file or directory from the archive. If you specify a file system with -R <root>, the path to the directory to exclude is assumed to be relative to <root>.

-y <include>
Includes the file or directory in the archive. This option can be used in conjunction with the -x option to include a specific file or directory within an excluded directory.

-X <filelist>
Uses the contents of <filelist> as a list of files or directories to exclude from the archive. The <filelist> argument contains filenames or directory names.

-z <filelist>
Uses the contents of <filelist> as a list of files to include in or exclude from the archive. Each filename is prefixed with either a plus (+), to include it in the archive, or a minus (-), to exclude it from the archive.

Options for Archive Identification

-i <date>
If you do not specify a date, flarcreate uses the current system time and date.

-m <master>
If you do not specify a master, flarcreate uses the system name that is reported by uname -n.

-e <descr>
Specifies a description.

-E <descr_file>
Specifies that a description is contained in the file <descr_file>.

-T <type>
Specifies the archive's content type.

-t
Creates an archive on a tape device.

-a <author>
Allows you to specify the archive's author.

Additional options are available, such as for creating the archive on tape and adding some user-defined options. Information on these options is found in the online manual pages and in the Solaris 10 Installation Guide in the Solaris 10 Release and Installation Collection.

The following example shows how to use the flarcreate command to create the Flash Archive:

# flarcreate -n "Solaris 10 Ultra Archive" -a "WS Calkins" -R / /u01/ultra.flar<cr>

In the previous example, we are creating a Flash Archive named "Solaris 10 Ultra Archive." We are specifying the author (creator) to be labeled as "WS Calkins." The -R option specifies to recursively descend from the specified directory. The last part of the command specifies which directory to store the archive in and what to name the archive. After you enter the command and press Enter, the flarcreate command displays the status of the operation:

Full Flash
Checking integrity...
Integrity OK.
Running precreation scripts...
Precreation scripts done.
Determining the size of the archive...
8172587 blocks
The archive will be approximately 3.89GB.
Creating the archive...
8172587 blocks
Archive creation complete.
Running postcreation scripts...
Postcreation scripts done.
Running pre-exit scripts...
Pre-exit scripts done.

When the operation is complete, you can see the archive file by issuing the ls command:

# ls -l /u01/ultra.flar<cr>
-rw-r--r-- 1 root other 3820943938 Sep 3 11:12 ultra.flar

The flar command is used to administer Flash Archives. With the flar command, you can

- Extract information from an archive
- Split archives
- Combine archives

To use the flar command to extract information from an archive, use the following command:

# flar -i /u01/ultra.flar<cr>

The system displays the following information about the Flash Archive:

archive_id=fb2cfa3c51d3af4a10ce6e804243fe19
files_archived_method=cpio
creation_date=20090217003131
creation_master=ultra
content_name=Solaris 10 Ultra Archive
creation_node=ultra10
creation_hardware_class=sun4u
creation_platform=SUNW,UltraAX-i2
creation_processor=sparc
creation_release=5.10
creation_os_name=SunOS
creation_os_version=Generic_137137-09
files_compressed_method=none
files_archived_size=4184375301
files_unarchived_size=4184375301
content_author=WS Calkins
content_architectures=sun4u
type=FULL

For additional information on the flarcreate or flar commands, refer to the online manual pages or to "Solaris 10 Installation Guide: Solaris Flash Archives (Creation and Installation)" in the Solaris 10 Release and Installation Collection.
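The split and combine operations can be sketched in the same style. The commands below are guarded because flar exists only on Solaris, and the paths are assumptions carried over from the example above; check the flar(1M) manual page for the exact options supported by your release:

```shell
#!/bin/sh
# Guarded sketch (Solaris-specific): split a Flash Archive into its
# sections, then recombine them. Paths are illustrative assumptions.
if command -v flar >/dev/null 2>&1; then
    mkdir -p /tmp/flar_work && cd /tmp/flar_work
    flar split /u01/ultra.flar                     # one file per section
    flar combine -d /tmp/flar_work /u01/recombined.flar
else
    echo "flar not available (not a Solaris system)"
fi
```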

Using the Solaris Installation Program to Install a Flash Archive
In the previous section we described how to create a Flash Archive. In this section, you learn how to install this archive on an installation client using the GUI-based Solaris installation program. The Flash Archive was created on a system named ultra10 with the IP address of 192.168.1.110 and placed into a file system named /u01. On ultra10 we need to share the /u01 file system so that the archive is available to other systems on the network via NFS. You use the share command to do this. NFS and the share command are described in Chapter 2.

Initiate a Solaris installation from CD-ROM. When prompted to select the Installation Media, select Network File System, as shown in Figure 7.1.

FIGURE 7.1 Specify Media window.

Click the Next button. You're prompted to enter the path to the network file system that contains the Flash Archive, as shown in Figure 7.2.

FIGURE 7.2 Specify Network File System Path window.

After entering the path, click the Next button. The Flash Archive Summary window appears, as shown in Figure 7.3. The selected archive is listed. Verify that it is correct, and then click the Next button.

FIGURE 7.3 Flash Archive Summary window.

You're prompted to enter any additional archives you want to install, as shown in Figure 7.4. You have no additional archives to install, so click the Next button.

FIGURE 7.4 Additional Flash Archives window.

The system is initialized, as shown in Figure 7.5.

FIGURE 7.5 Initialization window.

After the system initialization is finished, the installation continues as a normal GUI-based installation. You see the Disk Selection window displayed as with a normal GUI-based installation. The difference is that you are not asked to select the software you want to install. Instead, the entire Flash Archive is installed. When the installation is complete, the system reboots (if you selected this option during the earlier dialog), and the login message appears. The final step is to log in as root, configure your applications, and make system-specific customizations. From this point forward, the system is ready for production use.

Creating a Differential Flash Archive
If you created a clone system using a Flash Archive, you can update that clone using a differential archive. For example, let's say you have already created a Flash Archive on a master server using the flarcreate command, and you used that archive to install a clone system. Later, on the master server, you install updates and make other changes to the OS, and you want to apply these changes to the clone system. You'll use a differential archive to accomplish the task.

When creating a differential archive on the master server, you'll create it by comparing the current OS to the original master Flash Archive. The original, unchanged master Flash Archive must still be present and untouched. The differential archive contains only the differences between the two images, so when installing the differential archive on the clone, only the files that are in the differential archive are changed. You can install the differential archive on the clone system with custom JumpStart, or you can use Solaris Live Upgrade to install the differential archive on an inactive boot environment. Step By Step 7.8 describes the process of creating a differential archive.

NOTE
Differential update failure If the clone has been manually updated after it was originally created from the master server's archive, the differential update fails.

STEP BY STEP
7.8 Creating a Differential Archive

1. Create your original Flash Archive on the master server. This is the archive that was initially used to create the clone:

# flarcreate -n "Archive" /u01/original.flar<cr>

2. After updating or making changes to the master server's OS (adding/removing packages, patches, and so on), create a differential archive by comparing the original Flash Archive with the current OS image that is installed in (/):

# flarcreate -n "differential archive" -A /u01/original.flar /u01/diff_archive<cr>

where -A specifies the location of the original, unchanged master Flash Archive. The name of the new differential archive is /u01/diff_archive. The differential archive is created and contains the differences between the two archives. Now you can install the differential archive on the clone system, using custom JumpStart or Live Upgrade.

Solaris Flash and JumpStart
Earlier in this chapter, I described how to set up a JumpStart installation. If you recall, we set up a boot server, which provided the information that a JumpStart client needed to boot across the network. We also set up an install server, which supplied the Solaris image, and we created the profile and rules configuration files, which provided additional setup information such as disk partitions and software packages.

You can utilize a Solaris Flash Archive in a JumpStart installation, but first you need to add the installation client to the JumpStart boot server as described earlier in this chapter. The next step is to create a profile for the installation client. This was also described earlier in this chapter. However, when using JumpStart to install from a Flash Archive, only the following keywords can be used in the profile:

- install_type: For a full flash archive install, specify this option as flash_install. For a differential flash archive, specify flash_update.
- archive_location
- partitioning: Only the keyword values of explicit or existing must be used.
- filesys: The keyword value auto must not be used.
- no_content_check: Used only for a differential flash archive.
- no_master_check: Used only for a differential flash archive.
- forced_deployment
- local_customization
- package: Used only for a full flash installation; cannot be used with a differential flash archive.
- root_device

Here's a sample profile for an installation client using a Flash Archive:

install_type flash_install
archive_location nfs://192.168.1.110/u01/ultra.flar
partitioning explicit
#
# 8 GB / and 1GB swap on a 9GB Disk
#
filesys rootdisk.s0 free /
filesys rootdisk.s1 1:449 swap

The rules and sysidcfg files for the Flash installation client would be the same as described earlier in this chapter. When finished configuring the profile, rules, and sysidcfg files, and assuming the Flash Archive is available on the install server in a shared file system, you can boot the installation client using this:

ok boot net - install<cr>

The automated installation proceeds without further intervention, and the system will be installed using the Flash Archive.

Preboot Execution Environment (PXE)
The Preboot Execution Environment (PXE) is a direct form of network boot that can be used to install the Solaris Operating Environment over the network using DHCP. PXE is available only to x86/x64-based systems that implement the Intel Preboot Execution Environment specification. It does not require the client to have any form of local boot media.

With PXE, x86/x64-based clients can boot consistently and in an interoperable manner, regardless of the sources or vendors of the software and the hardware of both client and server machines. This is accomplished via a uniform and consistent set of preboot protocol services within the client. They ensure that network-based booting is accomplished through industry-standard protocols used to communicate with the server. In addition, to ensure interoperability, the downloaded Network Bootstrap Program (NBP) is presented with a uniform and consistent preboot operating environment within the booting client, so it can accomplish its task independent of the type of network adapter implemented in the system.

Depending on your system, PXE may be implemented in the system's BIOS or might be configurable via the network adapter's configuration utility. You need to consult the hardware documentation for your system to determine whether it supports the PXE network boot.

To use PXE, you need three systems:

- A configured DHCP server from which to boot successfully
- A configured install server containing the Solaris boot image and images of the Solaris CDs
- An x86 client that supports the PXE network boot

NOTE
Only one DHCP server You must make sure that only one DHCP server is on the same subnet as the PXE client, because the PXE network boot does not work properly on a subnet containing multiple DHCP servers.
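For an x86 PXE client, the client is registered on the install server with add_install_client's -d option, so that boot information is served via DHCP rather than bootparams. A guarded sketch follows; the MAC address and image path are illustrative assumptions:

```shell
#!/bin/sh
# Guarded sketch (Solaris-specific): register a PXE/DHCP boot client.
# The MAC address and install-image path are illustrative assumptions.
# add_install_client -d prints the DHCP macro data to configure.
if [ -x /export/install/Solaris_10/Tools/add_install_client ]; then
    cd /export/install/Solaris_10/Tools
    ./add_install_client -d -e 00:07:e9:04:4a:bf i86pc
else
    echo "install server image not present; skipping"
fi
```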

Preparing for a PXE Boot Client

As you saw in the previous section, three systems are required in order to be able to make use of the PXE network boot: an install server, a DHCP server, and the x86 client itself.

The first of these is the install server. Setting up the install server was described earlier in this chapter, in the section “The Install Server.” The procedure for an x86 install server is the same, but you store x86 CD images instead of SPARC images. All it does is share the CD images over the network.

NOTE - You can still use SPARC: Even though you are setting up an x86 installation, you can still use a SPARC system as your install server if you want to, and a single install server can serve both SPARC and x86 clients. Remember that you cannot run setup_install_server on a SPARC system using an x86 CD, or vice versa; however, you can from a DVD.

It is the second of these systems, the DHCP server, that requires the most work. The third system, the client itself, is also very straightforward, although you have to consult your hardware documentation to verify whether PXE network boot is supported by the BIOS. It is worth investigating whether an upgrade to the BIOS firmware is necessary as well.

Configuring the DHCP Server

A few parameters need to be configured to ensure that the client, when booted, has all the information it requires in order to boot successfully and then access the install server containing the correct CD images, required for the installation of the Solaris Operating Environment. It is necessary to create some vendor class macros so that the correct configuration information is passed to the client when booting across the network. Table 7.26 lists some of the most common parameters.

NOTE - DHCP already configured: You should note that a working DHCP server should already be configured. Configuring a DHCP server is beyond the scope of this exam and is covered completely in the Solaris 10 Network Administrator Exam (Exam 310-302). The details described in this section merely configure some parameters within the DHCP server.

Table 7.26  Vendor Client Class Options

Symbol Name  Code  Type        Granularity  Max  Description
SrootIP4     2     IP Address  1            1    The root server’s IP address
SrootNM      3     ASCII Text  1            0    The root server’s hostname
SrootPTH     4     ASCII Text  1            0    The path to the client’s root directory on the root server

Table 7.26  Vendor Client Class Options (continued)

Symbol Name  Code  Type        Granularity  Max  Description
SinstIP4     10    IP Address  1            1    The JumpStart install server’s IP address
SinstNM      11    ASCII Text  1            0    The JumpStart install server’s hostname
SinstPTH     12    ASCII Text  1            0    The path to the installation image on the JumpStart install server
SrootOpt     1     ASCII Text  1            0    NFS mount options for the client’s root file system
SbootFIL     7     ASCII Text  1            0    Path to the client’s boot file
SbootRS      9     Number      2            1    NFS read size used by the standalone boot program when loading the kernel
SsysidCF     13    ASCII Text  1            0    Path to the sysidcfg file, in the format <server>:</path>
SjumpsCF     14    ASCII Text  1            0    Path to the JumpStart configuration file, in the format <server>:</path>

The fields are described here:

. Symbol Name: The name of the symbol.
. Code: A unique code number.
. Type: The data type of the entry.
. Granularity: The number of instances. For example, a symbol with a data type of IP Address and a Granularity of 2 means that the entry must contain two IP addresses.
. Max: The maximum number of values. For example, a symbol with a data type of IP Address, a Granularity of 2, and a Max of 2 means that the symbol can contain a maximum of two pairs of IP addresses.
. Description: A textual description of the symbol.

You can add these symbols to the DHCP server by using the dhtadm command:

dhtadm -A -s <symbol> -d <definition>

or by using the GUI-based dhcpmgr command. The following example shows how to add a symbol (SrootIP4) and Vendor Client Class (SUNW.i86pc) to the achilles macro using the GUI-based dhcpmgr:

1. Start dhcpmgr by entering /usr/sadm/admin/bin/dhcpmgr & from any CDE window. The DHCP Manager window appears, as shown in Figure 7.6. Note that the DHCP server is already configured to support 10 IP addresses and that the DHCP server name is achilles.
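If you prefer the command line to the dhcpmgr GUI, the same definitions can be scripted with dhtadm. The sketch below is a dry run that only prints the dhtadm commands it would issue, so it can be reviewed safely on any system; change run() as noted to apply the changes on a real DHCP server. The symbol definitions mirror Table 7.26, and the macro name achilles and the IP address are the ones used in this example.

```shell
#!/bin/sh
# Dry-run sketch: print the dhtadm commands that would define the three
# root-server symbols from Table 7.26 for the SUNW.i86pc vendor class,
# then add SrootIP4 to the achilles macro.
run() { echo "$@"; }    # change to: run() { "$@"; } to really execute

# -A -s adds a symbol; the definition format is
#   Vendor=<client class>,<code>,<type>,<granularity>,<maximum>
run dhtadm -A -s SrootIP4 -d 'Vendor=SUNW.i86pc,2,IP,1,1'
run dhtadm -A -s SrootNM  -d 'Vendor=SUNW.i86pc,3,ASCII,1,0'
run dhtadm -A -s SrootPTH -d 'Vendor=SUNW.i86pc,4,ASCII,1,0'

# -M -m modifies an existing macro; -e adds or edits a symbol=value pair
run dhtadm -M -m achilles -e 'SrootIP4=192.168.0.110'
```

The remaining Table 7.26 symbols can be added the same way, one dhtadm -A -s line per symbol.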

FIGURE 7.6 DHCP Manager window.

2. Select the Options tab. The Options window appears, as shown in Figure 7.7. Select Edit, Create.

FIGURE 7.7 DHCP Options window.

3. A subwindow appears to create the option, as shown in Figure 7.8. Enter the name SrootIP4 in the Name field.

FIGURE 7.8 DHCP Create Options window.

4. The next field is a pull-down menu. This is where you specify which class of systems the option applies to. In this case, select Vendor from this menu.

5. On the right side of the window is the Vendor Client Classes box, which lists the valid values for the symbols to be added. For example, if an x86 client is being used, the client class is SUNW.i86pc. Enter this in the box provided and click Add. The class now appears in the list, as shown in Figure 7.9.

6. Refer to Table 7.26. The code value for the symbol SrootIP4 is 2. The type is currently set to IP Address, which is correct. Table 7.26 also states the values for Granularity and Maximum; enter these accordingly into their correct locations.

7. Make sure the box titled Notify DHCP server of change is checked, and click OK to complete the operation. You are returned to the Options window, which now includes the symbol just created, as shown in Figure 7.10.

FIGURE 7.9 DHCP Create Options window, completed.

FIGURE 7.10 DHCP Options window with a symbol defined.

8. The remaining symbols can be added by repeating the previous steps.

9. To add the symbol SrootIP4 to the achilles macro, select the Macro tab and the achilles macro from the list on the left. Figure 7.11 shows the current contents of this macro.

FIGURE 7.11 The achilles macro.

10. Select Edit, Properties. Figure 7.12 shows the Properties window.

FIGURE 7.12 The Properties window.

11. You need to locate the symbol that you want to add, so click Select to the right of the Option Name field. The Select Option window appears, as shown in Figure 7.13.

FIGURE 7.13 The Select Option (Standard) window.

12. The symbol just created is a Vendor class symbol, and the options being displayed are standard symbols. The selector field is a pull-down menu, so click the menu and choose Vendor. The symbol SrootIP4 appears, as shown in Figure 7.14.

FIGURE 7.14 The Select Option (Vendor) window.

13. Click the symbol SrootIP4, and then click OK to display the Macro Properties window, as shown in Figure 7.15.

FIGURE 7.15 The Macro Properties window, showing the contents of the achilles macro.

14. This symbol identifies the IP address of the JumpStart root server, which is 192.168.0.110 for this example. Enter this in the Option Value field.

15. Click Add to insert the symbol and value into the macro properties. Figure 7.16 shows that the symbol SrootIP4 has been added to the macro. When you click OK to complete the operation, you are returned to the macro window. Figure 7.17 shows the completed operation.

FIGURE 7.16 The Macro Properties window with symbol added.

FIGURE 7.17 The achilles macro with symbol added.

16. Repeat this operation for the other symbols that the DHCP server requires to properly support the PXE network boot.

When the macro and symbols have been configured, the DHCP server is ready to handle the client correctly when it boots across the network.

Adding an x86 Client to Use DHCP

Having configured the DHCP server, the next task is to add the client to the install server. This is carried out using the add_install_client command, virtually the same as for a custom JumpStart, but this time the majority of the configuration information is supplied by the DHCP server. The following command adds support for the SUNW.i86pc class of system:

# cd /export/install/x86pc/Tools<cr>
# ./add_install_client -d SUNW.i86pc i86pc<cr>

This add_install_client example configures DHCP to PXE boot a class of machines. The next example configures DHCP to PXE boot one specific machine based on its MAC address of 00:21:9b:33:c0:d7:

# ./add_install_client -d -e 00:21:9b:33:c0:d7<cr>
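When a specific machine is added with -e, the client-specific DHCP entries are keyed by the PXE client identifier: the Ethernet hardware type 01 followed by the MAC address with the colons removed, in uppercase. The helper below is my own illustration, not a Solaris command; it derives that identifier, which is handy when checking the client-specific macro that add_install_client tells you to configure. It assumes the MAC is written with two hex digits per octet, as in the example above.

```shell
#!/bin/sh
# Hypothetical helper (not part of Solaris): derive the PXE client ID
# ("01" + MAC with colons stripped, uppercased) used to name the
# client-specific DHCP macro for an x86 PXE boot client.
pxe_client_id() {
    printf '01%s\n' "$(printf '%s' "$1" | tr -d ':' | tr 'abcdef' 'ABCDEF')"
}

pxe_client_id 00:21:9b:33:c0:d7    # prints 0100219B33C0D7
```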

Booting the x86 Client

When the install server and the DHCP server have been configured correctly and the x86 client has been added, the only remaining thing to do is to boot the x86 client to install over the network. The way in which this is done depends on the hardware that you have, but usually one of the following will have the desired effect:

. Enter the system BIOS using the appropriate keystrokes.
. Configure the BIOS to boot from the network.
. Adjust the boot device priority list, if present, so that a network boot is attempted first.
. Exit the system BIOS.

The system should start booting from the network and should prompt you for the type of installation you want to run. The remainder of the installation process depends on which installation type you choose.

NOTE - Set back boot options: Remember when the installation finishes and the system reboots to re-enter the system BIOS and restore the original boot configuration.

Summary

It’s been my experience that JumpStart is not widely used, mainly because of its complexity. Many system administrators would rather go through an interactive installation for each system than automate the process. System administrators could save a great deal of time if they would only learn more about this type of installation. Many of the popular UNIX systems have installation programs similar to JumpStart, and most are underutilized.

The key to using JumpStart is whether it will benefit you to spend the time learning and understanding what is required, especially if the JumpStart installation is to be used more than once. A good example of this is in a test environment, where systems might have to be regularly reinstalled to a particular specification. For system administrators managing large numbers of systems (say, more than 100), it is probably worth the effort. On the other hand, if the system administrator manages only three or four systems, it is questionable as to whether the time is worth investing. It might be more efficient to carry out interactive installations.

In this chapter, I described the entire process of installing a networked system via JumpStart, including how to set up the boot server, the install server, and a configuration server; these need to be installed only once. I also described the necessary procedures that need to be performed for each client that you plan to install: editing a rules file to ensure that all systems are accommodated, and then creating the necessary class files and the configuration files located on the configuration server.

You also learned how to use the Solaris Flash Archive feature to create an exact image of a particular Solaris environment and replicate this environment across many systems, or simply store it away in case you need to rebuild the system as a result of a system failure. You learned how the Flash Archive can be used in a JumpStart session for a completely automated installation. You also learned how to create a differential Flash Archive by comparing a new root (/) image to an existing Flash Archive.

Finally, you learned about a new facility, the Preboot Execution Environment (PXE), which facilitates the installing of x86 clients across the network using a DHCP server to provide the boot configuration information. You also learned how to configure a DHCP server to add the required symbols to properly support a booting x86 client.

Key Terms

. Boot server
. Class file
. Clone system

. Configuration server
. Custom JumpStart
. DHCP server
. Differential archive
. Flash Archive
. Flash installation
. Install server
. JumpStart client
. JumpStart directory
. JumpStart server
. MTFTP
. NBP
. Preboot Execution Environment (PXE)
. Profile
. RARP
. Rules file
. Solaris Flash
. TFTP

Apply Your Knowledge

Exercise 8.1: Creating JumpStart Servers

In this exercise, you’ll create a JumpStart boot server, install server, and configuration server, and configure a JumpStart client to automatically install the Solaris 10 operating environment across the network. For this exercise, you’ll need two systems connected on a network. One system will serve as the boot/install/configuration server, so it needs about 5GB of free disk space. The second system will be the client and will have the entire disk destroyed and the operating system reloaded.

CAUTION - Destructive process: This procedure destroys data on the disk. Be sure you have proper backups if you want to save any data on these systems.

Estimated time: 1 hour

1. On the system that will be used as the boot and install server, log in as root.

2. Create the install server:
a. Insert the Solaris DVD (or CD labeled Solaris 10 CD 1), and let vold automatically mount the DVD/CD.
b. Change to the Tools directory on the CD:
# cd /cdrom/cdrom0/s0/Solaris_10/Tools<cr>
c. Run the setup_install_server script, and specify the location for the Solaris image. In the following example, I use /export/install as the install directory. Be sure you have about 5GB of free space and that the target directory is empty:
# ./setup_install_server /export/install<cr>

3. Add the additional software:
a. Eject the Solaris 10 CD 1, and put in the Solaris 10 CD 2. Let vold automatically mount the CD.
b. Change to the Tools directory on the CD:
# cd /cdrom/cdrom0/Solaris_10/Tools<cr>
c. Execute the add_to_install_server script as follows to copy the images from the CD to the /export/install directory:
# ./add_to_install_server /export/install<cr>
d. Repeat the procedure with the remaining CDs.

4. Edit the /etc/hosts file, and make an entry for the JumpStart client.

5. Create the JumpStart configuration directory:
# mkdir /export/jumpstart<cr>

6. Add the following entry in the /etc/dfs/dfstab file for this directory to share it across the network:
share -F nfs -o ro,anon=0 /export/jumpstart<cr>

7. Start the NFS server as follows if the nfsd daemon is not already running:
# svcadm enable nfs/server<cr>

8. In the /export/jumpstart directory, use the vi editor to create a class file named basic_class with the following entries:

#Specifies that the installation will be treated as an initial
#installation, as opposed to an upgrade.
install_type initial_install
#Specifies that the engineering systems are standalone
#systems.
system_type standalone
#Specifies that the JumpStart software uses default disk
#partitioning for installing Solaris software on the
#engineering systems.
partitioning default
#Specifies that the developer’s software group will be installed
cluster SUNWCprog
#Specifies that each system in the engineering group will have
#2048 Mbytes of swap space.
filesys any 2048 swap

9. In the /export/jumpstart directory, use the vi editor to create a rules file named rules with the following entry:

hostname sun1 - basic_class -

10. Validate the class and rules files with the check and pfinstall commands:
# cd /export/jumpstart<cr>
# /export/install/Solaris_10/Misc/jumpstart_sample/check<cr>
# /usr/sbin/install.d/pfinstall -D -c /export/install basic_class<cr>

11. Set up the JumpStart client:
# cd /export/install/Solaris_10/Tools<cr>
# ./add_install_client -s <SERVERNAME>:/export/install \
-c <SERVERNAME>:/export/jumpstart -p <SERVERNAME>:/export/jumpstart \
-e <MAC ADDRESS> <CLIENTNAME> <PLATFORM><cr>

where SERVERNAME is the hostname of your boot/install server, CLIENTNAME is your client’s hostname, MAC ADDRESS is your client’s Ethernet address, and PLATFORM is your client’s architecture (such as sun4u). For example:

# ./add_install_client -s sparcserver:/export/install \
-c sparcserver:/export/jumpstart -p sparcserver:/export/jumpstart \
-e 8:0:20:21:49:24 sun1 sun4u<cr>

12. Go to the client and turn on the power. At the boot PROM, issue the following command:

ok boot net - install<cr>

The JumpStart installation executes.

Exam Questions

1. Which of the following is a method to automatically install Solaris on a new SPARC system by inserting the Solaris Operating System DVD in the drive and powering on the system?
❍ A. JumpStart
❍ B. WAN boot installation
❍ C. Interactive installation
❍ D. Custom JumpStart

2. Which of the following is a method to automatically install groups of identical systems?
❍ A. Custom JumpStart
❍ B. JumpStart
❍ C. Network Installation
❍ D. Interactive installation

3. Which of the following sets up an install server to provide the operating system to the client during a JumpStart installation?
❍ A. add_install_client
❍ B. add_install_server
❍ C. pfinstall
❍ D. setup_install_server

4. For a JumpStart installation, which of the following files should contain a rule for each group of systems that you want to install?
❍ A. sysidcfg
❍ B. rules.ok
❍ C. profile
❍ D. check

5. For a JumpStart installation, which of the following servers is set up to answer RARP requests from clients?
❍ A. Boot server
❍ B. Install server
❍ C. Configuration server
❍ D. JumpStart server

6. Which of the following is used as an alternative to setting up a configuration directory?
❍ A. Boot server
❍ B. Install server
❍ C. Configuration diskette
❍ D. rules.ok file

7. For a JumpStart installation, which of the following files defines how to install the Solaris software on a system?
❍ A. rules
❍ B. class file
❍ C. profile
❍ D. profile diskette

8. Which of the following is a user-defined Bourne shell script, specified within the rules file?
❍ A. add_install_client script
❍ B. class file
❍ C. check script
❍ D. begin script

9. In JumpStart, which of the following files contains the name of a finish script?
❍ A. check
❍ B. rules
❍ C. rules.ok
❍ D. install.log

10. Which of the following is used to test a JumpStart class file?
❍ A. pfinstall
❍ B. check
❍ C. rules
❍ D. class

11. When working with JumpStart, which of the following files is not used to provide information about clients?
❍ A. sysidcfg
❍ B. rules
❍ C. check
❍ D. class

12. Which of the following is not a valid entry in the first field in the rules file?
❍ A. karch
❍ B. any
❍ C. hostname
❍ D. install_type

13. Which of the following files is the JumpStart file that can use any name and still work properly?
❍ A. rules
❍ B. check
❍ C. sysidcfg
❍ D. class

14. Which of the following scripts updates or creates the rules.ok file?
❍ A. check
❍ B. setup_install_server
❍ C. add_install_client
❍ D. pfinstall

15. Which of the following contains the JumpStart directory and configuration files such as the class file and the rules file?
❍ A. Profile diskette
❍ B. Install server
❍ C. Configuration server
❍ D. /jumpstart directory

16. Which of the following supplies the operating system during a JumpStart installation?
❍ A. Setup server
❍ B. Install server
❍ C. Profile server
❍ D. Configuration server

17. Which of the following commands is issued on the install server to set up remote workstations to install Solaris from the install server?
❍ A. add_install_client
❍ B. setup_install_server
❍ C. setup_install_client
❍ D. setup_client

18. Which of the following commands sets up a system as a boot server only?
❍ A. setup_install_server
❍ B. add_install_server -b
❍ C. setup_install_server -b
❍ D. setup_boot_server

19. Which of the following commands is used on a JumpStart client to start the installation?
❍ A. boot net - install
❍ B. boot net
❍ C. boot net - jumpstart
❍ D. boot - jumpstart

20. Which script copies additional packages within a product tree to the local disk on an existing install server?
❍ A. add_install_server -a
❍ B. add_to_install_server
❍ C. setup_install_server
❍ D. setup_install_server -a

21. Which of the following class file keywords is valid only for a Solaris Flash Install using JumpStart?
❍ A. archive_location
❍ B. install_type
❍ C. locale
❍ D. system_type

22. Which of the following are required to be able to boot an x86 client using the PXE network boot and install method? (Choose three.)
❍ A. A system with more than 1 GB of physical memory
❍ B. An x86 client with a system BIOS that supports the Intel Preboot Execution Environment specification
❍ C. A configured DHCP server
❍ D. A server running either NIS or NIS+ naming service
❍ E. An install server

23. Which of the following symbols would you configure in a DHCP server to correctly specify the hostname of the JumpStart install server so that a PXE network client would be passed the correct configuration information at boot time?
❍ A. SinstIP4
❍ B. SinstNM
❍ C. SrootNM
❍ D. SrootIP4

24. Which option is used to create a differential Flash Archive?
❍ A. -D
❍ B. -A
❍ C. -C
❍ D. -M

Answers to Exam Questions

1. A. JumpStart lets you automatically install the Solaris software on a SPARC-based system just by inserting the Solaris CD and powering on the system. You do not need to specify the boot command at the ok prompt. For more information, see the section “JumpStart.”

2. A. The custom JumpStart method of installing the operating system provides a way to install groups of similar systems automatically and identically. For more information, see the section “JumpStart.”

3. D. The setup_install_server script sets up an install server to provide the operating system to the client during a JumpStart installation. For more information, see the section “The Install Server.”

4. B. The rules.ok file is a file that should contain a rule for each group of systems you want to install. For more information, see the section “The Rules File.”

5. A. The boot server is set up to answer RARP requests from a JumpStart client. For more information, see the section “Setting Up the Boot Server.”

6. C. A configuration disk is used as an alternative to setting up a configuration directory. For more information, see the section “Setting Up a Configuration Diskette.”

7. B. A class file is a text file that defines how to install the Solaris software on a system. For more information, see the section “Creating Class Files.”

8. D. A begin script is a user-defined Bourne shell script, specified within the rules file, that performs tasks before the Solaris software is installed on the system. For more information, see the section “begin and finish Scripts.”

9. C. The rules.ok file contains the name of a finish script. For more information, see the section “The Rules File.”

10. A. After you create a class file, you can use the pfinstall command to test it. For more information, see the section “Testing Class Files.”

11. C. The sysidcfg, rules, and class files all provide information about the JumpStart client. For more information, see the section “The Rules File.”

12. D. any, hostname, and karch are all valid keywords that can be used in the rules file. For more information, see the section “The Rules File.”

13. D. The class file can be named anything, but it should reflect the way in which it installs the Solaris software on a system. For more information, see the section “Creating Class Files.”

14. A. The check script updates or creates the rules.ok file. The check script is used to validate the rules file. For more information, see the section “Validating the Rules File.”

15. C. The configuration server contains all the essential custom JumpStart configuration files, such as the rules file, the rules.ok file, the class file, the check script, and the optional begin and finish scripts. For more information, see the section “Configuration Server.”

16. B. The install server supplies the operating system during a JumpStart installation. For more information, see the section “The Install Server.”

17. A. Use the add_install_client command on the install server to set up remote workstations to install Solaris from the install server. For more information, see the section “The Install Server.”

18. C. setup_install_server -b sets up a system as a boot server only. For more information, see the section “Setting Up the Boot Server.”

19. A. boot net - install is used on a JumpStart client to start the installation. For more information, see the section “Starting Up the Clients.”

20. B. The add_to_install_server script copies additional packages within a product tree to the local disk on an existing install server. For more information, see the section “The Install Server.”

21. A. The archive_location option is a valid class file keyword that is used only when installing a Flash Archive using JumpStart. For more information, see the section “Creating a Flash Archive.”

22. B, C, E. The requirements for a PXE network boot are an install server, a configured DHCP server, and an x86 client that supports the Intel Preboot Execution Environment specification. For more information, see the section “Preboot Execution Environment.”

23. B. The DHCP symbol SinstNM specifies the hostname of the JumpStart install server. For more information, see the section “Configuring the DHCP Server.”

24. B. The -A option is used to create a differential Flash Archive by comparing a new system image to the original Flash Archive image. For more information, see the section “Creating a Flash Archive.”

Suggested Reading and Resources

. Solaris 10 documentation set, http://docs.sun.com: “Solaris 10 Installation Guide: Network Based Installations” book in the Solaris 10 Release and Installation collection.
. Solaris 10 documentation set, http://docs.sun.com: “Solaris 10 Installation Guide: Custom JumpStart and Advanced Installations” book in the Solaris 10 Release and Installation collection.
. Solaris 10 documentation set, http://docs.sun.com: “Solaris 10 Installation Guide: Solaris Flash Archives (Creation and Installation)” book in the Solaris 10 Release and Installation collection.
. Solaris 10 Documentation CD: “Solaris 10 Installation Guide: Network Based Installations” manual.
. Solaris 10 Documentation CD: “Solaris 10 Installation Guide: Custom JumpStart and Advanced Installations” manual.
. Solaris 10 Documentation CD: “Solaris 10 Installation Guide: Solaris Flash Archives (Creation and Installation)” manual.

8
Advanced Installation Procedures: WAN Boot and Live Upgrade

Objectives

The following test objectives for exam CX-310-202 are covered in this chapter:

Configure a WAN boot installation and perform a Live Upgrade installation.

. You'll learn the requirements for a WAN boot installation.
. You'll understand the differences between a WAN boot installation and a custom JumpStart installation.
. You'll learn how to configure and perform a secure WAN boot installation across a wide area network.
. You'll learn how to perform an operating system upgrade while the system is running.

Outline

Introduction to WAN Boot
  WAN Boot Requirements
  WAN Boot Components
  The WAN Boot Process
The WAN Boot Server
  Configure the WAN Boot Server
  Configure the WAN Boot and JumpStart Files
  The wanboot.conf File
Booting the WAN Boot Client
  Boot the Client from the Local CD/DVD
  Boot the Client Interactively from the OBP
  Boot the Client Noninteractively from the OBP
  Boot the Client with a DHCP Server
Solaris Live Upgrade
  Live Upgrade Requirements
  Solaris Live Upgrade Process
  Creating a New Boot Environment
  Displaying the Status of the New Boot Environment
  Upgrading the New Boot Environment
  Activating the New Boot Environment
    luactivate on the x86/x64 Platform
    lucreate on the SPARC Platform
Maintaining Solaris Live Upgrade Boot Environments
  Removing Software Packages from a Boot Environment
  Adding Software Packages to a Boot Environment
  Removing Patches on an OS Installed on a Boot Environment
  Adding Patches to an OS Installed on a New Boot Environment
  Deleting an Inactive Boot Environment
  Changing the Name of a Boot Environment
  Changing the Description of a Boot Environment
  Viewing the Configuration of a Boot Environment
Summary
Key Terms
Apply Your Knowledge
  Exercises
  Exam Questions
  Answers to Exam Questions
Suggested Reading and Resources

Study Strategies

The following strategies will help you prepare for the test:

. Because WAN boot is built on JumpStart, be sure you thoroughly understand how to set up a custom JumpStart installation, as described in Chapter 7, “Advanced Installation Procedures: JumpStart, Flash Archive, and PXE.”
. Practice the step-by-step examples provided in this chapter on a Solaris system.
. Be familiar with all the configuration files and scripts that are associated with a WAN boot installation.
. Understand how to configure a WAN boot server.
. Understand how to initiate a WAN boot installation from the client.
. Know the requirements for performing a Solaris Live Upgrade.
. Understand how to perform a Live Upgrade on a system, including a system that has limited disk space.

Introduction to WAN Boot

Objective:
. Configure a WAN boot installation

A WAN boot installation enables a system administrator to boot and install software over a wide area network (WAN) by using HTTP. WAN boot is used to install the Solaris OS on SPARC-based systems over a large public network where the network infrastructure might be untrustworthy. x86/x64-based systems currently cannot be installed using a WAN boot installation.

A WAN boot installation performs a custom JumpStart installation. Chapter 7 describes how to perform a custom JumpStart installation, but WAN boot goes beyond a custom JumpStart installation in that it provides the following advantages:

. WAN boot provides a scalable process for the automated installation of systems anywhere over the Internet or other WANs.
. JumpStart boot services are not required to be on the same subnet as the installation client.
. You can use WAN boot with security features to protect data confidentiality and installation image integrity.

A WAN boot installation is more secure than a custom JumpStart installation for the following reasons:

. The WAN boot client and server can authenticate using SHA hash algorithms.
. The Solaris 10 OS can be downloaded to the WAN boot client using HTTPS.

EXAM ALERT - Understand the advantages of a WAN boot installation over a JumpStart installation.

WAN Boot Requirements

EXAM ALERT - Understand all the requirements of a WAN boot installation.

Before you can perform a WAN boot installation, you need to make sure that your system meets the minimum requirements for a WAN boot. The WAN boot client must have:

. An UltraSPARC II processor or newer
. A minimum of 512MB of RAM
. At least 2GB of hard drive space

It's best if the WAN boot client system's OpenBoot PROM (OBP) supports WAN boot, which requires a minimum of OpenBoot firmware version 4.14. You can check your PROM version as follows:

# prtconf -V<cr>
OBP 4.14.12 2002/01/08 13:01

Or you can check it as follows:

# eeprom | grep network-boot-arguments<cr>

If the variable network-boot-arguments is displayed, or if the preceding command returns the output network-boot-arguments: data not available, the OBP supports a WAN boot installation. If the client's OBP does not support WAN boot, you can still perform a WAN boot installation by utilizing WAN boot programs from a local CD/DVD: for clients with OpenBoot firmware that does not support WAN boot, perform the WAN boot installation from the Solaris Software CD1 or DVD. This option works in all cases when the current OBP does not provide WAN boot support.

In addition, WAN boot requires a web server configured to respond to WAN boot client requests. The WAN boot server must meet these requirements:

. Must be a SPARC or x86-based system running Solaris 9 release 12/03 or higher.
. Must be configured as a web server and must support HTTP 1.1 minimum.
. If you want to use HTTPS in your WAN boot installation, the web server software must support SSL version 3.

The install server must meet these requirements:

. Must be running Solaris 9 release 12/03 or higher.
. Must have a local CD or DVD.
. Must have enough disk space to hold the Flash Archive. Flash Archives must be available to the web server; traditional JumpStart images, such as a spooled image of the CD/DVD that performed a pkgadd-style install, do not work with WAN boot. Flash Archives are the only format supported.
. Must be configured as a web server and must support HTTP 1.1 minimum.
. If you want to use HTTPS in your WAN boot installation, the web server software must support SSL.
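The eeprom check above can be wrapped in a small script. In this sketch the eeprom output is passed in as an argument so the decision logic is easy to test anywhere; on a real client you would call it as check_wanboot "$(eeprom 2>/dev/null | grep network-boot-arguments)". Note that, per the rule described above, even the "data not available" output indicates OBP support.

```shell
#!/bin/sh
# Sketch: decide from eeprom output whether the client's OBP supports
# WAN boot. Any mention of the network-boot-arguments variable (even
# "data not available") indicates support.
check_wanboot() {
    case "$1" in
        *network-boot-arguments*) echo "WAN boot supported" ;;
        *) echo "no WAN boot support in OBP; boot from local CD/DVD" ;;
    esac
}

check_wanboot "network-boot-arguments: data not available"
```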

NOTE - Solaris 10 version: This chapter was written using Solaris 10 05/08. If you are installing a more recent version of Solaris 10, be sure to read the Solaris 10 release notes that accompany that release. Review any new installation issues or requirements associated with a Solaris Live Upgrade before beginning the upgrade.

WAN Boot Components

To perform a WAN boot installation, you must first configure the WAN boot server. This involves configuring the web server, an optional DHCP server, and a JumpStart server. Configuring the WAN boot server is described later in this chapter. Before describing the WAN boot process, it's necessary to define some of the WAN boot files and components that you'll see used throughout this chapter:

. wanboot program: A second-level boot program that is used to load the miniroot, installation, and configuration files onto the WAN boot client. The wanboot program performs tasks similar to those that are performed by the ufsboot and inetboot second-level boot programs.
. WAN boot miniroot: A version of the Solaris miniroot that has been modified to perform a WAN boot installation. The WAN boot miniroot, like the Solaris miniroot, contains a kernel and just enough software to install the Solaris environment. The WAN boot miniroot contains a subset of the software found in the Solaris miniroot.
. wanboot.conf: A text file in which you specify the configuration information and security settings that are required to perform a WAN boot installation.
. WAN boot file system: Files used to configure and retrieve data for the WAN boot client installation are stored on the web server in /etc/netboot. The information in this directory is transferred to the client via the wanboot-cgi program as a file system, referred to as the WAN boot file system.
. wanboot-cgi: A Common Gateway Interface (CGI) program on the web server that services all client requests. It parses the WAN boot server files and client configuration files into a format that the WAN boot client expects.
. bootlog-cgi: A CGI program on the web server that creates a log of all client activity in the /tmp/bootlog.client file.
. JumpStart and JumpStart configuration files: These terms are described fully in Chapter 7.
. Install server: Provides the Solaris Flash Archive and custom JumpStart files that are required to install the client.
. WAN boot server: A web server that provides the wanboot program, the configuration and security files, and the WAN boot miniroot.

EXAM ALERT - Understand all the WAN boot components. Pay special attention to the wanboot-cgi program.

The WAN Boot Process

During a WAN boot installation, the following takes place:

1. OpenBoot uses configuration information to communicate with the wanboot-cgi program on the WAN boot server and request a download of the wanboot program from the server. Alternatively, the client can request the wanboot program from the local CD/DVD.

2. After the download, the client executes the wanboot program. The wanboot program performs the following functions on the client:
. wanboot requests a download of authentication and configuration information from the WAN boot server.
. wanboot requests a download of the WAN boot miniroot from the WAN boot server. The information gets transmitted to the client by the server's wanboot-cgi program using HTTP or HTTPS.

3. wanboot loads the UNIX kernel into RAM and executes the kernel.

4. The kernel loads and mounts the WAN boot file system and begins the installation program.

5. The installation program begins a custom JumpStart installation to install the Solaris Flash Archive on the client. The installation program requests a download of the Flash Archive and custom JumpStart files from the install server and installs the Solaris Flash Archive. The archive and files are transmitted using either HTTP or HTTPS.

The WAN Boot Server

The WAN boot server provides the boot and configuration data during the WAN boot installation. The WAN boot server can be a single server, or the functions can be spread across several servers:

. Single server: Centralize the WAN boot data and files on one system by hosting all the servers on the same machine. You can administer all your different servers on one system, and you need to configure only one system as a web server.

The WAN boot server is described later in this chapter.
The WAN Boot Process

When the WAN boot client is booted, OpenBoot uses configuration information to communicate with the wanboot-cgi program on the WAN boot server and request a download of the wanboot program from the server. Alternatively, the client can request the wanboot program from a local CD/DVD. After the download, the client executes the wanboot program, which performs the following functions on the client:

. wanboot requests a download of authentication and configuration information from the WAN boot server. The information gets transmitted to the client by the server's wanboot-cgi program using HTTP or HTTPS.
. wanboot requests a download of the WAN boot miniroot from the WAN boot server, and the information is transmitted using either HTTP or HTTPS.
. wanboot loads the UNIX kernel into RAM and executes the kernel.

The kernel loads and mounts the WAN boot file system and begins the installation program. The installation program begins a custom JumpStart installation to install the Solaris Flash Archive on the client: it requests a download of the Flash Archive and custom JumpStart files from the install server and installs the Solaris Flash Archive. The archive and files are transmitted using either HTTP or HTTPS.

EXAM ALERT: Understand all the WAN boot components. Pay special attention to the wanboot-cgi program.

The WAN Boot Server

The WAN boot server is a web server that provides the wanboot program, the configuration and security files, and the WAN boot miniroot; it supplies the boot and configuration data during the WAN boot installation. The WAN boot server is described later in this chapter. The WAN boot server can be a single server, or the functions can be spread across several servers:

. Single server: Centralize the WAN boot data and files on one system by hosting all the servers on the same machine. You can administer all your different servers on one system, and you need to configure only one system as a web server.

. Multiple servers: If you want to distribute the installation data and files across your network, you can host these servers on multiple machines. You could set up a central WAN boot server and configure one or more install servers to host the Solaris Flash Archives. For the examples in this book, I'll use the single-server method.

You will configure three components on the WAN boot server:

. The web server
. The optional DHCP server
. The JumpStart server

Before beginning the WAN boot setup, I recommend that you gather all the information you will need, as provided in the following lists:

WAN boot server information:
. Path to the wanboot program
. URL of the wanboot-cgi program
. Path to the WAN boot miniroot
. Path to the custom JumpStart files
. Path to the client's subdirectory in the /etc/netboot hierarchy

WAN boot client information:
. The client's IP address
. The client's subnet mask
. IP address for the client's router
. The client's hostname
. The client's MAC address

Configure the WAN Boot Server

The first step of setting up the WAN boot server is to configure it as a web server, as described in Step By Step 8.1. In this example, you configure the Apache version 2 web server for an unsecure WAN boot installation.

STEP BY STEP 8.1 Configuring the Apache Web Server

1. Move the unused index files from the Apache document root directory:
# cd /var/apache2/htdocs<cr>
# cp index.html.en index.html<cr>
# mkdir INDEX<cr>
# mv index.html.* INDEX<cr>

2. Update the primary Apache configuration file with the WAN boot server's hostname:
# cp /etc/apache2/httpd.conf-example /etc/apache2/httpd.conf<cr>
# vi /etc/apache2/httpd.conf<cr>
Edit the following line:
ServerName 127.0.0.1
Replace the IP address with the hostname of the WAN boot server. My server is named "sunfire," so I'll change the line to the following:
ServerName sunfire
Save and exit the file.

3. Start the Apache web server:
# svcadm enable apache2<cr>

4. Verify that the web server is listening on port 80 by issuing the following command:
# netstat -an | grep 80<cr>
*.80        *.*      0   0   49152   0   LISTEN
*.80        *.*      0   0   49152   0   LISTEN
*.32780     *.*      0   0   49152   0   LISTEN

Configure the WAN Boot and JumpStart Files

After configuring the web server, you are ready to set up the files necessary to perform a WAN boot. These files must be made accessible to the web server by storing them in the WAN boot server's document root directory, which in our example is the /var/apache2/htdocs directory. Step By Step 8.2 describes the process of setting up these files.

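The ServerName edit in Step By Step 8.1 (step 2) can also be made non-interactively. A sketch using sed on a scratch copy of the file (the /tmp path and the "sunfire" hostname are this example's assumptions, not the live /etc/apache2/httpd.conf):

```shell
# Rewrite the ServerName line without opening vi; operates on a demo copy.
conf=/tmp/httpd.conf.demo
printf 'ServerName 127.0.0.1\n' > "$conf"      # stand-in for the shipped default line
sed 's/^ServerName 127\.0\.0\.1$/ServerName sunfire/' "$conf" > "$conf.new" \
  && mv "$conf.new" "$conf"
grep '^ServerName' "$conf"                      # prints "ServerName sunfire"
```

The same sed command applied to the real httpd.conf (after backing it up) replaces the interactive edit in step 2.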
STEP BY STEP 8.2 Configuring the WAN Boot and JumpStart Files for an Unsecure WAN Boot Installation

1. Place the Solaris 10 DVD into the DVD drive. If you are using a CD, place CD #1 into the CD-ROM drive.

2. Create the directories needed for the WAN boot configuration in the /var/apache2/htdocs directory:
# cd /var/apache2/htdocs<cr>
a. Create the wanboot directory. This directory will contain the ramdisk image used to start the client boot process.
# mkdir wanboot<cr>
b. Create the miniroot directory. This directory will contain the WAN boot miniroot image needed to start the JumpStart process over HTTP.
# mkdir miniroot<cr>
c. Create the install directory. This directory will contain the remote root file system.
# mkdir install<cr>
d. Create the config directory. This directory will contain the WAN boot JumpStart configuration files.
# mkdir config<cr>

3. Create the /var/apache2/htdocs/flash directory, and place your Flash Archive file in it.

4. Set up the WAN boot install server using the setup_install_server command. Because I will be using a Flash Archive for the installation, it is not necessary to spool the entire contents of the DVD/CD onto the server. Use the -b option to install the boot image only into the /var/apache2/htdocs/install directory and the -w option to copy the WAN boot miniroot image into the /var/apache2/htdocs/wanboot directory:
# cd /cdrom/sol_10_508_sparc/s0/Solaris_10/Tools<cr>
# ./setup_install_server -b -w /var/apache2/htdocs/wanboot/ /var/apache2/htdocs/install<cr>
The system responds with the following:
Verifying target directory...
Calculating space required for the installation boot image
Copying Solaris_10 Tools hierarchy...
Copying Install Boot Image hierarchy...
686800 blocks

Starting WAN boot Image build
Calculating space required for WAN boot Image
Copying WAN boot Image hierarchy...
567008 blocks
Removing unneeded packages from WAN boot Image hierarchy
Creating the WAN boot Image file
Image size is 288128000 bytes
Copying WAN boot to Image file...
WAN boot Image creation complete
The WAN boot Image file has been placed in
/var/apache2/htdocs/wanboot/miniroot
Ensure that you move this file to a location accessible to the web server,
and that the WAN boot configuration file wanboot.conf(4) for each WAN boot
client contains the entries:
root_server=<URL>
  where <URL> is an HTTP or HTTPS URL scheme pointing to the location
  of the WAN boot CGI program
root_file=<miniroot>
  where <miniroot> is the path and file name, relative to the web server
  document directory, of 'miniroot'
You should also make sure you have initialized the key generation process
by issuing (once):
# /usr/sbin/wanbootutil keygen -m
Install Server setup complete

5. Copy the architecture-specific wanboot program from the CD/DVD to the wanboot directory on the WAN boot server:
# cd /cdrom/cdrom0/s0/Solaris_10/Tools/Boot/platform/sun4u/<cr>
# cp wanboot /var/apache2/htdocs/wanboot/wanboot.s10_sparc<cr>

6. Copy the CGI scripts into the web server software directory, and set the file permissions:
# cp /usr/lib/inet/wanboot/wanboot-cgi /var/apache2/cgi-bin/wanboot-cgi<cr>
# chmod 755 /var/apache2/cgi-bin/wanboot-cgi<cr>
# cp /usr/lib/inet/wanboot/bootlog-cgi /var/apache2/cgi-bin/bootlog-cgi<cr>
# chmod 755 /var/apache2/cgi-bin/bootlog-cgi<cr>

7. Create the /etc/netboot hierarchy, and set the permissions. The WAN boot installation programs will retrieve configuration and security information from this directory during the installation:
# mkdir /etc/netboot<cr>
# chmod 700 /etc/netboot<cr>
# chown webservd:webservd /etc/netboot<cr>

8. Configure the install server WAN boot parameters in the /etc/netboot/wanboot.conf file. The wanboot.conf file parameters and syntax are described in the next section. Open the file using the vi editor:
# vi /etc/netboot/wanboot.conf<cr>
Make the following entries, and save the file:
boot_file=/wanboot/wanboot.s10_sparc
root_server=http://192.168.1.109/cgi-bin/wanboot-cgi
root_file=/wanboot/miniroot
signature_type=
encryption_type=
server_authentication=no
client_authentication=no
resolve_hosts=
boot_logger=http://192.168.1.109/cgi-bin/bootlog-cgi
system_conf=system.conf
In the sample wanboot.conf file, my web server's IP address is 192.168.1.109. Substitute your web server's IP address for the root_server and boot_logger entries. Also in the example, boot_logger is set to log all messages to the WAN boot server in the /tmp directory. If you leave this line blank, all log messages will be displayed on the WAN boot client's console.

9. Configure the client configuration file pointer parameters in the /etc/netboot/system.conf file. Open the file using the vi editor:
# vi /etc/netboot/system.conf<cr>
Make the following entries, and save the file:
SsysidCF=http://192.168.1.109/config
SjumpsCF=http://192.168.1.109/config
In the sample system.conf file, my web server's IP address is 192.168.1.109. Substitute your web server's IP address in both lines.

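Steps 8 and 9 can be scripted instead of edited in vi. A sketch that writes both sample files with here-documents (192.168.1.109 is the chapter's example server address; the /tmp target directory is an assumption for illustration — on a real WAN boot server you would write into /etc/netboot):

```shell
# Generate the sample wanboot.conf and system.conf non-interactively.
SRV=192.168.1.109                 # substitute your web server's IP address
DIR=/tmp/netboot.demo             # use /etc/netboot on the WAN boot server
mkdir -p "$DIR"

cat > "$DIR/wanboot.conf" <<EOF
boot_file=/wanboot/wanboot.s10_sparc
root_server=http://$SRV/cgi-bin/wanboot-cgi
root_file=/wanboot/miniroot
signature_type=
encryption_type=
server_authentication=no
client_authentication=no
resolve_hosts=
boot_logger=http://$SRV/cgi-bin/bootlog-cgi
system_conf=system.conf
EOF

cat > "$DIR/system.conf" <<EOF
SsysidCF=http://$SRV/config
SjumpsCF=http://$SRV/config
EOF
```

Because the server address appears in four places across the two files, generating them from a single SRV variable avoids the most common transcription mistake.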
NOTE - File ownership: Set the file ownership on the following files so that they are owned by the web server:
# chown webservd:webservd /var/apache2/cgi-bin/wanboot-cgi\
/etc/netboot/wanboot.conf /etc/netboot/system.conf<cr>
Your system may be different, so be sure to verify the web server ownership. You can check by running the following command:
# ps -ef |grep httpd<cr>
webservd 5298 5297 0 Sep 18 ? 0:00 /usr/apache2/bin/httpd -k start

10. Change to the /var/apache2/htdocs/config directory, and create the sysidcfg file. You will configure the sysidcfg file for a WAN boot client the same as you would for a JumpStart installation. Configuring the sysidcfg file for a JumpStart installation is covered in detail in Chapter 7; refer to that chapter for instructions. For this example, I made the following entries, and I named the file sysidcfg:
# cd /var/apache2/htdocs/config<cr>
# more /var/apache2/htdocs/config/sysidcfg<cr>
timeserver=localhost
system_locale=C
network_interface=eri0 { default_route=none
netmask=255.255.255.0
protocol_ipv6=no }
timezone=US/Central
nfs4_domain=dynamic
terminal=vt100
name_service=NONE
security_policy=NONE
root_password=dT/6kwp5bQJIo

11. In the /var/apache2/htdocs/config directory, configure the client installation parameters by creating a profile. You will configure the profile for a WAN boot client the same as you would for a JumpStart installation. Configuring the profile for a JumpStart installation is covered in detail in Chapter 7; refer to that chapter for instructions. You could also use a template supplied on the CD/DVD in the /cdrom/cdrom0/s0/Solaris_10/Misc/jumpstart_sample directory. For this example, I made the following entries, and I named the file profile:
# more /var/apache2/htdocs/config/profile<cr>
install_type flash_install
archive_location http://192.168.1.109/flash/archive.flar
partitioning explicit
filesys c0t0d0s0 free /
filesys c0t0d0s1 512 swap
I placed the Flash Archive in the /var/apache2/htdocs/flash directory and named the file archive.flar.

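A sysidcfg file that is missing a keyword causes the installer to prompt for the answer, which defeats a hands-off install, so it is worth sanity-checking the file before booting the client. A sketch that verifies the keywords used in this example are present (the keyword list mirrors the sample file above and is not exhaustive; the /tmp demo paths are assumptions):

```shell
# Check a sysidcfg file for the identification keywords used in this example.
check_sysidcfg() {   # $1 = path to a sysidcfg file
  for kw in timeserver system_locale network_interface timezone \
            nfs4_domain terminal name_service security_policy root_password; do
    grep -q "^$kw=" "$1" || { echo "missing: $kw"; return 1; }
  done
  echo ok
}

# Demo copy of the sample file from step 10.
cat > /tmp/sysidcfg.demo <<'EOF'
timeserver=localhost
system_locale=C
network_interface=eri0 { default_route=none
netmask=255.255.255.0
protocol_ipv6=no }
timezone=US/Central
nfs4_domain=dynamic
terminal=vt100
name_service=NONE
security_policy=NONE
root_password=dT/6kwp5bQJIo
EOF
check_sysidcfg /tmp/sysidcfg.demo    # prints "ok"
```

Run the same function against /var/apache2/htdocs/config/sysidcfg on the WAN boot server before attempting a noninteractive installation.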
In the sample sysidcfg file, the eri0 network interface and the encrypted root password are unique to the system. Substitute the values used in this example with the network device and root password (cut and pasted from your system's /etc/shadow file) that are specific to your system.

12. Copy the sample rules file from the CD/DVD:
# cd /var/apache2/htdocs/config<cr>
# cp /cdrom/sol_10_508_sparc/s0/Solaris_10/Misc/jumpstart_sample/rules .<cr>
You will configure the rules file for a WAN boot client the same as you would for a JumpStart installation; configuring the rules file is covered in detail in Chapter 7. Refer to that chapter for instructions. For this example, I made the following entry in the rules file:
# more rules<cr>
any - - profile -
After creating the rules file, check it using the check script, as described in Chapter 7:
# ./check<cr>
Validating rules...
Validating profile profile...
The custom JumpStart configuration is ok.
If the check script is not in the /var/apache2/htdocs/config directory, copy it there from the CD/DVD:
# cp /cdrom/sol_10_508_sparc/s0/Solaris_10/Misc/jumpstart_sample/check\
/var/apache2/htdocs/config<cr>

13. Verify the configuration of the WAN boot server:
# bootconfchk /etc/netboot/wanboot.conf<cr>
#
No output appears if the server has been configured successfully.

The wanboot.conf File

EXAM ALERT: Understand the purpose of the wanboot.conf file and the configuration information it contains.

The wanboot.conf file is a plain-text configuration file that is stored in the client's subdirectory located in the /etc/netboot directory. It is the repository for WAN boot configuration data (file paths, encryption type, signing policies). The following WAN boot installation programs and files use it to perform the WAN boot installation:

. wanboot-cgi program
. WAN boot file system
. WAN boot miniroot

Each line in the wanboot.conf file has the following syntax:
<parameter>=<value>
Parameter entries cannot span lines. You can include comments in the file by preceding the comments with the # character. Table 8.1 describes each wanboot.conf parameter.

Table 8.1 wanboot.conf File Parameters

boot_file=<wanboot-path>
  Specifies the path to the wanboot program. The value is a path relative to the document root directory on the WAN boot server. For example:
  boot_file=/wanboot/wanboot.s10_sparc

root_server=<wanbootCGI-URL>/wanboot-cgi
  Specifies the URL of the wanboot-cgi program on the WAN boot server. The following is a sample setting used for an unsecure WAN boot installation:
  root_server=http://www.example.com/cgi-bin/wanboot-cgi
  The following example is for a secure installation:
  root_server=https://www.example.com/cgi-bin/wanboot-cgi

root_file=<miniroot-path>
  Specifies the path to the WAN boot miniroot on the WAN boot server. The value is a path relative to the document root directory. For example:
  root_file=/miniroot/miniroot.s10

signature_type=sha1 | <empty>
  Specifies the type of hashing key used to check the integrity of the data and files that are transmitted during a WAN boot installation. For secure WAN boot installations that use a hashing key to protect the wanboot program, set this value to sha1. For example:
  signature_type=sha1
  For an insecure WAN boot installation that does not use a hashing key, leave this value blank:
  signature_type=

encryption_type=3des | aes | <empty>
  Specifies the type of encryption used to encrypt the wanboot program and WAN boot file system. For WAN boot installations that use HTTPS, set this value to 3des or aes to match the key formats you use. When setting the encryption type to 3des or aes, you must also set the signature_type keyword value to sha1. For example:
  encryption_type=3des
  For an unsecure WAN boot installation that does not use an encryption key, leave this value blank:
  encryption_type=

Table 8.1 wanboot.conf File Parameters (continued)

server_authentication=yes | no
  Specifies whether the server is authenticated during the WAN boot installation. When using server authentication, set this value to yes:
  server_authentication=yes
  When the value is set to yes, you must also set the value of signature_type to sha1, set encryption_type to 3des or aes, and set the URL of root_server to an HTTPS value. For an unsecure WAN boot installation that does not use authentication, set this value to no. You can also leave the value blank:
  server_authentication=

client_authentication=yes | no
  Specifies whether the client should be authenticated during a WAN boot installation. When using server and client authentication, set this value to yes:
  client_authentication=yes
  When the value is set to yes, you must also set the value of signature_type to sha1, set encryption_type to 3des or aes, and set the URL of root_server to an HTTPS value. For an unsecure WAN boot installation that does not use authentication, set this value to no. You can also leave the value blank:
  client_authentication=

resolve_hosts=<hostname> | <empty>
  Specifies additional hosts that need to be resolved for the wanboot-cgi program during the installation. Set the value to the hostnames of systems that have not already been specified in the wanboot.conf file or in a client certificate. When specifying hostnames, use this syntax:
  resolve_hosts=sysA,sysB
  If all the required hosts are listed in the wanboot.conf file or the client certificate, leave this value blank:
  resolve_hosts=

boot_logger=<bootlog-cgi-path> | <empty>
  Specifies the URL to the bootlog-cgi script on the logging server. To send WAN boot log messages to a dedicated log server, use the following syntax:
  boot_logger=http://www.example.com/cgi-bin/bootlog-cgi
  To display WAN boot and installation messages on the client console, leave the value of this parameter blank:
  boot_logger=

system_conf=system.conf | <custom-system-conf>
  Specifies the path to the system configuration file. The value of this parameter is the path to the sysidcfg and custom JumpStart files on the web server. For example:
  system_conf=sys.conf

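The parameters in Table 8.1 are interdependent, and bootconfchk enforces rules like the ones just described. A small sketch of two of those cross-checks in portable shell (an illustration of the rules, not the real bootconfchk logic; the /tmp demo file is an assumption):

```shell
# Check two wanboot.conf consistency rules from Table 8.1:
#  - encryption_type=3des|aes requires signature_type=sha1
#  - server_authentication=yes requires an https:// root_server URL
check_wanboot_conf() {   # $1 = path to a wanboot.conf file
  enc=$(awk -F= '$1 == "encryption_type"         { print $2 }' "$1")
  sig=$(awk -F= '$1 == "signature_type"          { print $2 }' "$1")
  sauth=$(awk -F= '$1 == "server_authentication" { print $2 }' "$1")
  url=$(grep '^root_server=' "$1" | sed 's/^root_server=//')
  if [ -n "$enc" ] && [ "$sig" != "sha1" ]; then
    echo "encryption_type=$enc requires signature_type=sha1"; return 1
  fi
  if [ "$sauth" = "yes" ]; then
    case "$url" in
      https://*) : ;;
      *) echo "server_authentication=yes requires an https root_server"; return 1 ;;
    esac
  fi
  echo ok
}

# Demo: an inconsistent file (encryption enabled but no hashing key type).
printf 'encryption_type=3des\nsignature_type=\nroot_server=http://x/cgi-bin/wanboot-cgi\nserver_authentication=no\n' > /tmp/wb.demo
check_wanboot_conf /tmp/wb.demo   # prints "encryption_type=3des requires signature_type=sha1"
```

On a real server, run bootconfchk itself (as in step 13); the sketch only makes the table's dependency rules concrete.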
Booting the WAN Boot Client

EXAM ALERT: Understand the OBP commands used to initiate the four types of WAN boot installation methods described in this section.

You have four options when booting and installing the WAN boot client:

. Noninteractive installation: A hands-off installation. All the client information is configured on the WAN boot server so that no questions are asked during the installation process.
. Interactive installation: Use this method if you want to be prompted for the client configuration information during the boot process, before the OS is installed.
. Installing with a DHCP server: Configure the network DHCP server to provide the client configuration information during the installation.
. Installing with local CD/DVD media: If your client's OBP does not support a WAN boot, this method allows you to boot the client from a local CD/DVD and then continue the installation via the WAN boot server. When you use a local CD/DVD, the client retrieves the wanboot program from the local media rather than from the WAN boot server.

The following sections describe how to boot a client using the various methods.

Boot the Client from the Local CD/DVD

Some older SPARC stations have OpenBoot PROM versions that do not support a WAN boot. It's still possible to use WAN boot to install the OS on these systems, but you need to perform the WAN boot from CD/DVD rather than directly from the OpenBoot PROM. The instructions to boot from a CD/DVD are described in Step By Step 8.3; they can be performed on any SPARC-based client.

STEP BY STEP 8.3 Booting a SPARC System from a Local CD/DVD

1. Power on the system, and insert the Solaris software DVD or the Solaris Software #1 CD in the CD-ROM/DVD drive. From the OpenBoot ok prompt, type:
ok boot cdrom -o prompt -F wanboot -install<cr>

The following options are used with the boot command:

. cdrom: Instructs the OBP to boot from the local CD-ROM.
. -o prompt: Instructs the wanboot program to prompt the user to enter client configuration information.
. -F wanboot: Instructs the OBP to load the wanboot program from the CD-ROM.
. -install: Instructs the client to perform a WAN boot installation.

After you enter the boot command, the system responds with the following:
Boot device: /pci@1f,0/pci@1,1/ide@d/cdrom@0,0:f File and args:\
-o prompt -F wanboot -install
<time unavailable> wanboot info: WAN boot messages->console
<time unavailable> wanboot info: Default net-config-strategy: manual
The boot prompt appears:
boot>

2. At the boot> prompt, issue the prompt command:
boot> prompt<cr>

3. The system prompts you to enter the client's network interface settings and encryption keys. Each prompt is described next.
Enter the client's IP address:
host-ip? 192.168.1.102<cr>
Enter the client's subnet mask value:
subnet-mask? 255.255.255.0<cr>
Enter the IP address of the network router:
router-ip? 192.168.1.1<cr>
Enter the client's hostname:
hostname? client1<cr>
You may leave the remaining prompts blank by just pressing Enter; they are not needed for an unsecure installation:
http-proxy?<cr>
client-id?<cr>
aes?<cr>
3des?<cr>
sha1?<cr>

Enter the information for the WAN boot server (use the IP address of the WAN boot server):
bootserver? http://192.168.1.109/cgi-bin/wanboot-cgi<cr>
The system responds with the following error, which you can ignore:
Unknown variable '/192.168.1.109/cgi-bin/wanboot-cgi', ignored
boot>

4. At the boot> prompt, use the list command to display and verify the settings:
boot> list<cr>
The system responds with a summary of the information you entered:
host-ip:     192.168.1.102
subnet-mask: 255.255.255.0
router-ip:   192.168.1.1
hostname:    client1
http-proxy:  UNSET
client-id:   UNSET
aes:         *HIDDEN*
3des:        *HIDDEN*
sha1:        *HIDDEN*
bootserver:  http://192.168.1.109/cgi-bin/wanboot-cgi

5. Initiate the WAN boot installation with the go command:
boot> go<cr>
The system begins to boot from the WAN boot server, and the following information is displayed:
<time unavailable> wanboot progress: wanbootfs: Read 72 of 368 kB (19%)
<time unavailable> wanboot progress: wanbootfs: Read 152 of 368 kB (41%)
<time unavailable> wanboot progress: wanbootfs: Read 368 of 368 kB (100%)
<time unavailable> wanboot info: wanbootfs: Download complete
After downloading the WAN boot miniroot, the system reboots:
Mon Sep 22 18:35:10 wanboot info: WAN boot messages->192.168.1.109:80
SunOS Release 5.10 Version Generic_127127-11 64-bit
Copyright 1983-2008 Sun Microsystems, Inc. All rights reserved.
Use is subject to license terms.
Configuring devices.
Network interface was configured manually.
Sep 22 11:28:01 client eri: SUNW,eri0 : 100 Mbps full duplex link up
syslogd: line 24: WARNING: loghost could not be resolved
Beginning system identification...
Searching for configuration file(s)...
Using sysid configuration file http://192.168.1.109/config/sysidcfg

Search complete.
Discovering additional network configuration...
Completing system identification...
Starting remote procedure call (RPC) services: done.

At this point, the Solaris installation program begins the boot process and installation over the WAN. The client is configured according to the configuration files on the WAN boot server, and the Flash Archive is extracted and installed.

Boot the Client Interactively from the OBP

Use the interactive installation method if you want to install keys and set client configuration information from the command line during the installation, as described in Step By Step 8.4. Your OBP must support WAN boot to perform this type of installation.

STEP BY STEP 8.4 Booting the Client Interactively from the OBP

1. At the ok prompt on the client system, set the network-boot-arguments variable in OBP:
ok setenv network-boot-arguments host-ip=<client-IP>,\
subnet-mask=<value>,router-ip=<router-ip>,\
hostname=<client-name>,http-proxy=<proxy-ip:port>,\
bootserver=<wanbootCGI-URL><cr>
The network-boot-arguments variable instructs the OBP to set the following boot arguments:
. host-ip=<client-IP>: Specifies the client's IP address.
. subnet-mask=<value>: Specifies the subnet mask value.
. router-ip=<router-ip>: Specifies the network router's IP address.
. hostname=<client-name>: Specifies the client's hostname.
. http-proxy=<proxy-ip:port>: An optional variable used to specify the IP address and port of the network's proxy server.
. bootserver=<wanbootCGI-URL>: Specifies the URL of the web server's wanboot-cgi program. The URL must start with http://; the URL value for the bootserver variable must not be an HTTPS URL.

2. Boot the client from the network using the network boot argument variables:
ok boot net -o prompt -install<cr>
Resetting ...

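Because the setenv line in step 1 is long and comma-separated, it is easy to mistype at the ok prompt. A sketch that assembles the string in shell first so it can be reviewed before being typed (values are the chapter's example client; http-proxy is omitted here):

```shell
# Build the network-boot-arguments value for review before entering it at the
# ok prompt. Example addresses match the chapter's sample network.
host_ip=192.168.1.102
mask=255.255.255.0
router=192.168.1.1
host=client1
cgi=http://192.168.1.109/cgi-bin/wanboot-cgi   # must be http://, not https://

args="host-ip=$host_ip,subnet-mask=$mask,router-ip=$router"
args="$args,hostname=$host,bootserver=$cgi"
echo "setenv network-boot-arguments $args"
```

The printed line can then be compared against the argument list in step 1 (and cut and pasted if you reach the client over a console server).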
The boot net -o prompt -install command instructs the client to boot and install from the network. It also instructs the wanboot program to prompt the user to set the key values for the client system at the boot> prompt. If you are performing an insecure installation that does not use keys, go directly to step 3.

Obtain the client's key values on the WAN boot server by using the wanbootutil command. The wanbootutil keygen command is used to create and display client and server HMAC SHA1, 3DES, and AES keys:
# wanbootutil keygen -d -c -o net=<network-IP>,cid=<client-ID>,type=<key-type><cr>
where:
. -d: Generates and stores per-client 3DES/AES encryption keys, avoiding any DES weak keys.
. -c: Displays a key of the type specified by the key type.
. -o: Specifies the WAN boot client and/or key type.
. net=<network-IP>: The IP address of the client's subnet.
. cid=<client-ID>: The ID of the client you want to install. The client ID can be a user-defined ID or the DHCP client ID.
. type=<key-type>: The key type that you want to install on the client, which must be either 3des, aes, or sha1.

Obtain the client's SHA1 key value on the WAN boot server by typing the following:
# wanbootutil keygen -d -c -o net=<network-IP>,cid=<client-ID>,type=sha1<cr>
The hexadecimal value for the key is displayed:
b482aaab82cb8d5631e16d51478c90079cc1d463
Obtain the client's 3DES key value on the WAN boot server by typing the following:
# wanbootutil keygen -d -c -o net=<network-IP>,cid=<client-ID>,type=3des<cr>
The hexadecimal value for the key is displayed:
9ebc7a57f240e97c9b9401e9d3ae9b292943d3c143d07f04

For a secure WAN boot installation using HTTPS, enter the key values at the boot> prompt. Type the hashing key value:
boot> sha1=<key-value><cr>
where sha1=<key-value> specifies the hashing key value. At the next boot> prompt, type the encryption key value:
boot> 3des=<key-value><cr>
where 3des=<key-value> specifies the hexadecimal string of the 3DES key. If you use an AES encryption key, use the following format for this command:
boot> aes=<key-value><cr>

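The key strings above have fixed sizes — an HMAC SHA1 key is 20 bytes (40 hex digits) and a 3DES key is 24 bytes (48 hex digits) — so a pasted key can be sanity-checked by length before it is typed at the boot> prompt. A sketch (the 32-digit AES case assumes a 128-bit key):

```shell
# Classify a wanbootutil key string by its hex length.
key_kind() {
  case "$1" in
    *[!0-9a-f]*) echo "not a lowercase hex string"; return 1 ;;
  esac
  case "${#1}" in
    40) echo sha1 ;;                      # 20-byte HMAC SHA1 key
    48) echo 3des ;;                      # 24-byte 3DES key
    32) echo aes ;;                       # 16-byte AES key (assumed 128-bit)
    *)  echo "unexpected length ${#1}"; return 1 ;;
  esac
}

key_kind b482aaab82cb8d5631e16d51478c90079cc1d463          # prints "sha1"
key_kind 9ebc7a57f240e97c9b9401e9d3ae9b292943d3c143d07f04  # prints "3des"
```

A truncated paste is the most common failure when transferring keys by hand, and it surfaces here as "unexpected length" rather than as a cryptic boot failure later.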
3. After you enter the client key values, start the boot process by typing go:
boot> go<cr>
The system begins the boot process and installation over the WAN. If the WAN boot programs do not find all the necessary installation information, the wanboot program prompts you to provide the missing information.

Boot the Client Noninteractively from the OBP

Use this installation method to boot the client without any interaction after entering the initial boot command. For you to perform this type of installation, your system's OpenBoot PROM must support WAN boot. After setting up the WAN boot server, as described earlier in this chapter, follow these instructions to boot the client:

1. At the ok prompt on the client system, set the network-boot-arguments variable in OBP:
ok setenv network-boot-arguments host-ip=<client-IP>,\
subnet-mask=<value>,router-ip=<router-ip>,\
hostname=<client-name>,http-proxy=<proxy-ip:port>,\
bootserver=<wanbootCGI-URL><cr>

2. Boot the client from the network using the network boot argument variables:
ok boot net -install<cr>
Resetting ...
The system begins the boot process and installation over the WAN. If the WAN boot programs do not find all the necessary installation information, the wanboot program prompts you to provide the missing information.

Boot the Client with a DHCP Server

If you configured a DHCP server to support WAN boot options, you can use the DHCP server to provide client configuration information during bootup and installation. Before you try to boot with a DHCP server, make sure your client's OBP supports a WAN boot installation. You must first configure your DHCP server to supply the following information:
. The location of the wanboot-cgi program: Specified using the SbootURI option on your DHCP server.
. The proxy server's IP address: Specified using the SHTTPproxy option on your DHCP server.

I won't go into the details of setting up a DHCP server on the network. This topic is covered in the Solaris Installation Guide published by Sun Microsystems for each version of the Solaris 10 operating system.

After you've configured the DHCP server, follow these instructions to boot the client:

1. At the ok prompt, set the network-boot-arguments variable:
ok setenv network-boot-arguments dhcp,hostname=<client-name><cr>
The network-boot-arguments variable instructs the OBP to set the following boot arguments:
. dhcp: Instructs the OBP to use the DHCP server to configure the client.
. hostname=<client-name>: Specifies the hostname that you want assigned to the client.

2. Boot the client from the network using the network boot argument variables:
ok boot net -install<cr>
The system begins the boot process and installation over the WAN. If the WAN boot programs do not find all the necessary installation information, the wanboot program prompts you to provide the missing information.

Solaris Live Upgrade

Objective: Perform a Live Upgrade installation

Solaris Live Upgrade significantly reduces the downtime caused by an operating system upgrade by allowing the system administrator to upgrade the operating system, or install a Flash Archive, while the system is in operation. The Live Upgrade process involves creating a duplicate of the running environment and upgrading that duplicate; the current running environment remains untouched and unaffected by the upgrade. When the upgrade is complete, it is activated with the luactivate command and a system reboot. If, after testing, you want to go back to the old operating environment, you can reboot to the old environment anytime. The upgrade does not necessarily need to be a complete OS upgrade; it could simply consist of adding a few OS patches. In addition, you could use Solaris Live Upgrade to clone an active boot environment for purposes other than an OS upgrade. It's a great way to simply create a backup of the current boot disk.

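The clone-upgrade-activate-reboot cycle just described maps to a short command sequence. The sketch below only prints the commands (a dry run): the boot environment name, target slice, and image path are hypothetical, and you should verify the lucreate/luupgrade/luactivate syntax against the man pages for your release before running anything.

```shell
# Dry run: print a typical Live Upgrade sequence instead of executing it.
new_be=s10u6                            # name for the new boot environment (assumed)
slice=/dev/dsk/c0t1d0s0                 # target slice for the copy (assumed)
image=/net/installserver/export/s10u6   # OS image to upgrade to (assumed)

echo "lucreate -c current -n $new_be -m /:$slice:ufs"  # clone the running BE
echo "luupgrade -u -n $new_be -s $image"               # upgrade the inactive copy
echo "luactivate $new_be"                              # mark it active for next boot
echo "init 6"                                          # reboot into the new BE
```

If the upgraded environment misbehaves, booting the old environment back is a luactivate of the original boot environment name followed by another reboot.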
Solaris Live Upgrade enables you to perform the following tasks on a running system:

. Upgrade the operating system to a new OS release or new patch level. In fact, this is the recommended way to do all patching and OS upgrades.
. Resize the boot disk configuration, such as changing file system types, sizes, and layouts on the new boot environment.
. Maintain numerous boot images, such as images with different patch levels, or a different OS release.

Live Upgrade Requirements

Solaris Live Upgrade is included in the Solaris 10 operating environment. However, you must ensure that the system meets current patch requirements before attempting to install and use the Solaris Live Upgrade software on your system. For the Solaris 10 05/08 release, these patches are listed in the Sun Microsystems info doc 206844, which can be found on Sun's website. You can also locate the list of patches by searching for "Live Upgrade Patch" at http://sunsolve.sun.com.

TIP - An important point about the Live Upgrade software: The release of the Live Upgrade software packages must match the release of the OS you are upgrading to. For example, if your current OS is the Solaris 9 release, and you want to upgrade to the Solaris 10 10/08 release, you need to install the Solaris Live Upgrade packages from the Solaris 10 10/08 release. Therefore, you probably will install a more current version of the Solaris Live Upgrade software than what is currently on your system.

Step By Step 8.5 describes the process of installing the required Solaris Live Upgrade packages.

STEP BY STEP 8.5 Installing the Solaris Live Upgrade Packages

1. Insert the CD/DVD from the version of the Solaris OS that you will be upgrading to.

2. Remove the existing Live Upgrade packages:
# pkgrm SUNWlucfg SUNWlur SUNWluu<cr>

3. Install the packages in the following order:
# pkgadd -d <path_to_packages> SUNWlucfg SUNWlur SUNWluu<cr>

where <path_to_packages> specifies the absolute path to the software packages on the CD/DVD. Currently (as of the Solaris 10 05/08 release), the Live Upgrade packages are located on CD #2 using the following path: /cdrom/sol_10_508_sparc_2/Solaris_10/Product
4. Verify that the packages have been installed successfully:
# pkginfo | grep -i "live upgrade"<cr>
application SUNWlucfg  Live Upgrade Configuration
application SUNWlur    Live Upgrade (root)
application SUNWluu    Live Upgrade (usr)
application SUNWluzone Live Upgrade (zones support)

Solaris Live Upgrade Process
After installing the necessary patches and software packages to support Solaris Live Upgrade, you need to create a new boot environment. This task is covered in the next section, "Creating a New Boot Environment." Creating the new boot environment involves copying the critical file systems from an active boot environment to the new boot environment. When you create a new inactive boot environment, you do not affect the active boot environment. After you have created a new boot environment, you perform an upgrade on that boot environment; shareable file systems are not changed when the inactive boot environment is upgraded.

The disk on the new boot environment must be able to serve as a boot device. The root (/) file system does not need to be on the same physical disk as the currently active root (/) file system, as long as the disk can be used as a boot device. However, it's preferable that the new boot environment be put on a separate disk if your system has one available. Check that the disk is formatted properly, and verify that the disk slices are large enough to hold the file systems to be copied. The disk might need to be prepared with format or fdisk before you create the new boot environment.

Disk space requirements for the new boot environment vary, depending on which software packages are currently installed and what version of the OS you are upgrading to. To estimate the file system size that is needed to create the new boot environment, start the creation of a new boot environment, as described in the upcoming section "Creating a New Boot Environment." The size is calculated, and you can then abort the process.

The following list describes critical and shareable file systems:
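The package verification in step 4 lends itself to scripting. Below is a minimal, hedged sketch; check_lu_pkgs and the canned pkginfo lines are stand-ins, not part of Solaris:

```shell
# check_lu_pkgs is a hypothetical helper (not a Solaris command): given
# `pkginfo` output, confirm the three core Live Upgrade packages exist.
check_lu_pkgs() {
  for pkg in SUNWlucfg SUNWlur SUNWluu; do
    case "$1" in
      *"$pkg"*) ;;                          # found; keep checking
      *) echo "missing: $pkg"; return 1 ;;
    esac
  done
  echo "all Live Upgrade packages installed"
}

# On a real system: check_lu_pkgs "$(pkginfo | grep -i 'live upgrade')"
# Canned output stands in for a Solaris box here:
sample='application SUNWlucfg Live Upgrade Configuration
application SUNWlur   Live Upgrade (root)
application SUNWluu   Live Upgrade (usr)'
check_lu_pkgs "$sample"
```

Running such a check before lucreate avoids discovering a missing package halfway through a copy.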

. Critical file systems are required by the Solaris OS. These file systems are separate mount points in the /etc/vfstab file of the active and new (inactive) boot environments. Examples of critical file systems are root (/), /usr, /var, and /opt. These file systems are always copied from the source to the new boot environment. Critical file systems are sometimes called nonshareable file systems.
. Shareable file systems are user-defined files such as /export that contain the same mount point in the /etc/vfstab file in both the active and inactive boot environments. Shareable file systems are not copied, but they are shared. Updating shared files in the active boot environment also updates data in the inactive boot environment. Like a shareable file system, all swap slices are shared by default.

Rather than upgrading the new boot environment, you could install a Solaris Flash Archive on the new boot environment. When you install the Solaris Flash Archive, the archive replaces all the files on the existing boot environment as if you performed an initial installation.

The final step in the Live Upgrade process is to activate the new boot environment. Activating a boot environment makes it bootable on the next reboot of the system. You can also switch back quickly to the original boot environment if a failure occurs on booting the newly active boot environment or if you simply want to go back to the older version of the OS.

Solaris Live Upgrade is performed from the command line using the commands listed in Table 8.2.

Table 8.2 Solaris Live Upgrade Commands
Command     Description
luactivate  Activates an inactive boot environment.
lucancel    Cancels a scheduled copy or create job.
lucompare   Compares an active boot environment with an inactive boot environment.
lumake      Recopies file systems to update an inactive boot environment.
lucreate    Creates a boot environment.
lucurr      Names the active boot environment.
ludelete    Deletes a boot environment.
ludesc      Adds a description to a boot environment name.
lufslist    Lists critical file systems for each boot environment.
lumount     Enables a mount of all the file systems in a boot environment. This command enables you to modify the files in a boot environment while that boot environment is inactive.
luupgrade   Enables you to install software on a specified boot environment.
lurename    Renames a boot environment.

You can also use the lu command to get into the Live Upgrade utility to perform any of the Live Upgrade functions; the menu will be displayed on any ASCII terminal, and a bitmapped terminal is not required. However, Sun Microsystems no longer recommends the use of the lu utility; Sun recommends that you issue the Live Upgrade commands from the command line, as done in this chapter.

Creating a New Boot Environment
Creating a new, inactive boot environment involves copying critical file systems from the active environment to the new boot environment using the lucreate command. The syntax for the lucreate command is as follows, along with some of its more common options:
lucreate [-A '<description>'] [-c <name>] [-x <file>] \
  -m <mountpoint>:<device>:<ufstype> [-m ...] -n <name>

where:
. -A <description>: (optional) Assigns a description to the boot environment.
. -c <name>: (optional) Assigns a name to the active boot environment. If you do not specify a name, the system assigns one.
. -m <mountpoint>:<device>:<ufstype>: Specifies the /etc/vfstab information for the new boot environment. The file systems that are specified as arguments to -m can be on the same disk, or they can be spread across multiple disks. Use this option as many times as necessary to create the number of file systems that are needed to support the new boot environment.
<mountpoint> can be any valid mount point. A - (hyphen) indicates a swap partition.

<device> can be any of the following:
. The name of a disk device
. An SVM volume (such as /dev/md/dsk/<devicename>)
. A Veritas volume (such as /dev/vx/dsk/<devicename>)
. The keyword merged, indicating that the file system at the specified mount point is to be merged with its parent

<ufstype> can be one or more of the following keywords: ufs, vxfs, preserve, mirror, attach, detach, swap.

. -n <name>: The name of the boot environment to be created. The name must be unique for the system.
. -x <file>: (optional) Excludes the file or directory from the new boot environment.

First, you need to select an unused disk slice where the new boot environment will be created. It must be on a bootable disk drive. If a slice is not available, you need to create one. If your system has only a single disk, you can still perform a Solaris Live Upgrade, but you need enough space on the disk to create an empty slice large enough to hold the new boot environment. In Solaris 10 10/08, for a bootable ZFS root pool, the disks in the pool must contain slices. The simplest configuration is to put the entire disk capacity in slice 0 and then use that slice for the root pool. This process is described later in this section. Migrating a UFS root (/) file system to a ZFS root pool is beyond the scope of this chapter; refer to "Solaris 10 10/08 Installation Guide: Solaris Live Upgrade and Upgrade Planning" for information on migrating from a UFS file system to a ZFS root pool.

Every system configuration varies, so covering all the possible disk scenarios is not possible. For simplicity, and to cover the topics that you will encounter on the exam, I'll describe a very common configuration. In my example, I have a system with two 36GB hard drives: c0t0d0 and c0t1d0. The current boot drive is c0t0d0, and I want to upgrade the OS to Solaris 10 05/08. To create the new boot environment on c0t1d0, I'll use the lucreate command:
# lucreate -A 'My first boot environment' -c active_boot \
-m /:/dev/dsk/c0t1d0s0:ufs -n new_BE<cr>

Several lines of output are displayed as the new boot environment is being created and copied. The following messages appear when the operation is complete:
<output truncated>
Population of boot environment <new_BE> successful.
Creation of boot environment <new_BE> successful.
#

The previous command created a new boot environment with the following characteristics:
. The description of the new boot environment is "My first boot environment."
. The current (active) boot environment is named "active_boot."
. A file system is created on the secondary disk (c0t1d0) for root (/).
. The new boot environment is named "new_BE."
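Before running a lucreate command like this, it's worth sanity-checking the target slice's capacity against the size lucreate reports it needs. A minimal sketch; on a real system the sector count would come from prtvtoc, and the numbers below are made-up stand-ins:

```shell
# Hypothetical numbers: on Solaris you would read the sector count for
# c0t1d0s0 from `prtvtoc /dev/rdsk/c0t1d0s0` and take the needed size
# from lucreate's own estimate before aborting or proceeding.
slice_sectors=16777216      # stand-in: an 8GB slice
bytes_per_sector=512
needed_mb=6144              # stand-in: space lucreate says it needs

slice_mb=$((slice_sectors * bytes_per_sector / 1024 / 1024))
if [ "$slice_mb" -ge "$needed_mb" ]; then
  echo "slice OK: ${slice_mb} MB available"
else
  echo "slice too small: ${slice_mb} MB"
fi
```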

Optionally, I could create a new boot environment where root (/) and /usr are split into two separate file systems. To split the root (/) file system into two file systems, issue the following command:
# lucreate -A 'My first boot environment' -c active_boot \
-m /:/dev/dsk/c0t1d0s0:ufs -m /usr:/dev/dsk/c0t1d0s3:ufs -n new_BE<cr>


In the previous examples, swap slices are shared between boot environments. Because I did not specify swap with the -m option, the current and new boot environment share the same swap slice. In the following example, I’ll use the -m option to add a swap slice in the new boot environment, which is recommended:
# lucreate -A 'My first boot environment' -c active_boot \
-m /:/dev/dsk/c0t1d0s0:ufs -m -:/dev/dsk/c0t1d0s1:swap -n new_BE<cr>

If you want a shareable file system to be copied to the new boot environment, specify the mount point of the file system to be copied using the -m option. Otherwise, shareable file systems are shared by default, and they maintain the same mount point in the /etc/vfstab file. Any change or update made to the shareable file system is available to both boot environments. For example, to copy the /data file system to the new boot environment, issue the following command:
# lucreate -A 'My first boot environment' -c active_boot \
-m /:/dev/dsk/c0t1d0s0:ufs -m /data:/dev/dsk/c0t1d0s4:ufs -n new_BE<cr>

You can also create a new boot environment and merge file systems in the new BE. For example, in the current boot environment (active_boot), we have root (/), /usr, and /opt. The /opt file system is combined with its parent file system, /usr. The new boot environment is named new_BE. The command to create this new boot environment is as follows:
# lucreate -A 'My first boot environment' -c active_boot \
-m /:/dev/dsk/c0t1d0s0:ufs -m /usr:/dev/dsk/c0t1d0s1:ufs \
-m /usr/opt:merged:ufs -n new_BE<cr>

In some cases, you might want to create an empty boot environment. When you use the lucreate command with the -s - option, lucreate creates an empty boot environment. The slices are reserved for the file systems that are specified, but no file systems are copied. The boot environment is named, but it is not actually created until it is installed with a Solaris Flash Archive. The following example creates an empty boot environment:
# lucreate -A 'My first boot environment' -s - -c active_boot \
-m /:/dev/dsk/c0t1d0s0:ufs -n new_BE<cr>

If you are running Solaris 10 10/08 and are currently using a ZFS root pool, you can either create a new ZFS boot environment within the same root pool or create the new boot environment on a new root pool. The quickest method is to create a new boot environment with the same ZFS root pool. The lucreate command creates a snapshot from the source boot environment, and then a clone is built from the snapshot. The amount of space required varies; it depends on how many files are replaced as part of the upgrade process. To create a new boot environment within the same root pool, issue the following command:
# lucreate -c current_zfsBE -n new_zfsBE<cr>


The system displays the following output (the entire process took less than a minute):
Analyzing system configuration.
No name for current boot environment.
Current boot environment is named <current_zfsBE>.
Creating initial configuration for primary boot environment <current_zfsBE>.
The device </dev/dsk/c1t0d0s0> is not a root device for any boot environment; cannot get BE ID.
PBE configuration successful: PBE name <current_zfsBE> PBE Boot Device </dev/dsk/c1t0d0s0>.
Comparing source boot environment <current_zfsBE> file systems with the file
system(s) you specified for the new boot environment. Determining which
file systems should be in the new boot environment.
Updating boot environment description database on all BEs.
Updating system configuration files.
Creating configuration for boot environment <new_zfsBE>.
Source boot environment is <current_zfsBE>.
Creating boot environment <new_zfsBE>.
Cloning file systems from boot environment <current_zfsBE> to create boot environment <new_zfsBE>.
Creating snapshot for <rpool/ROOT/s10s_u6wos_07b> on <rpool/ROOT/s10s_u6wos_07b@new_zfsBE>.
Creating clone for <rpool/ROOT/s10s_u6wos_07b@new_zfsBE> on <rpool/ROOT/new_zfsBE>.
Setting canmount=noauto for </> in zone <global> on <rpool/ROOT/new_zfsBE>.
Creating snapshot for <rpool/ROOT/s10s_u6wos_07b/var> on <rpool/ROOT/s10s_u6wos_07b/var@new_zfsBE>.
Creating clone for <rpool/ROOT/s10s_u6wos_07b/var@new_zfsBE> on <rpool/ROOT/new_zfsBE/var>.
Setting canmount=noauto for </var> in zone <global> on <rpool/ROOT/new_zfsBE/var>.
Population of boot environment <new_zfsBE> successful.
Creation of boot environment <new_zfsBE> successful.
#
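The snapshot and clone names in that output follow a simple pattern: the source dataset plus @new-BE-name for the snapshot, then pool/ROOT/new-BE-name for the clone (on a real system, the equivalent manual steps would use zfs snapshot and zfs clone). The helper below only derives those names and is hypothetical, not part of Live Upgrade:

```shell
# lu_clone_names is an illustrative helper, not a Solaris command: it
# shows how lucreate composes the snapshot and clone dataset names when
# cloning a BE within the same ZFS root pool.
lu_clone_names() {
  # $1: source BE dataset, e.g. rpool/ROOT/s10s_u6wos_07b
  # $2: new BE name, e.g. new_zfsBE
  pool_root=${1%/*}                 # strip the BE leaf -> rpool/ROOT
  echo "snapshot: $1@$2"
  echo "clone:    $pool_root/$2"
}

lu_clone_names rpool/ROOT/s10s_u6wos_07b new_zfsBE
```

Because the clone shares blocks with its snapshot, the new BE initially consumes almost no space; usage grows only as upgraded files diverge from the source.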

A second option when creating a new boot environment from a ZFS root pool is to create the new boot environment on another root pool. You need to be aware of a few requirements for the new root pool:
. The ZFS storage pool must be created with slices rather than whole disks. The pool must have an SMI label. An EFI-labeled disk cannot be booted.
. On the x86 platform only, the ZFS pool must be in a slice with an fdisk partition.
. If you mirror the boot disk later, make sure you specify a bootable slice and not the whole disk, because the latter may try to install an EFI label.

. You cannot use a RAID-Z configuration for a root pool. Only single-disk pools or pools with mirrored disks are supported. You will see the following message if you attempt to use an unsupported pool for the root pool:
ERROR: ZFS pool <pool-name> does not support boot environments

The process of creating a new boot environment on another root pool is described in Step By Step 8.6.

STEP BY STEP
8.6 Creating a New Boot Environment in Another Root Pool
1. Create a new ZFS pool on a slice located on a secondary disk. You must create the root pool on a disk slice. For the example, I’ll be performing the steps on an x86-based Solaris system. I’ve already used the format command to put the entire disk capacity of c1d1 in slice 0. I’ll use that slice when I create the root pool:
# zpool create rpool2 c1d1s0<cr>

Creating a ZFS pool is described in Chapter 9, "Administering ZFS File Systems."
2. Create the new boot environment on rpool2:
# lucreate -n new_zfsBE -p rpool2<cr>

The new boot environment is named new_zfsBE. Because I didn’t use the -c option to name the current boot environment, it is given a default name, as shown in the following output:
Checking GRUB menu...
Analyzing system configuration.
No name for current boot environment.
INFORMATION: The current boot environment is not named - assigning name <s10x_u6wos_07b>.
Current boot environment is named <s10x_u6wos_07b>.
Creating initial configuration for primary boot environment <s10x_u6wos_07b>.
The device </dev/dsk/c0d0s0> is not a root device for any boot environment; cannot get BE ID.
PBE configuration successful: PBE name <s10x_u6wos_07b> PBE Boot Device </dev/dsk/c0d0s0>.
Comparing source boot environment <s10x_u6wos_07b> file systems with the
file system(s) you specified for the new boot environment. Determining
which file systems should be in the new boot environment.
Updating boot environment description database on all BEs.
Updating system configuration files.
The device </dev/dsk/c1d1s0> is not a root device for any boot environment; cannot get BE ID.

Creating configuration for boot environment <new_zfsBE>.
Source boot environment is <s10x_u6wos_07b>.
Creating boot environment <new_zfsBE>.
Creating file systems on boot environment <new_zfsBE>.
Creating <zfs> file system for </> in zone <global> on <rpool2/ROOT/new_zfsBE>.
Creating <zfs> file system for </var> in zone <global> on <rpool2/ROOT/new_zfsBE/var>.
Populating file systems on boot environment <new_zfsBE>.
Checking selection integrity.
Integrity check OK.
Populating contents of mount point </>.
Populating contents of mount point </var>.
Copying.
Creating shared file system mount points.
Copying root of zone <testzone>.
zoneadm: zone 'testzone': illegal UUID value specified
Copying root of zone <testzone2>.
Creating compare databases for boot environment <new_zfsBE>.
Creating compare database for file system </var>.
Creating compare database for file system </rpool2/ROOT>.
Creating compare database for file system </>.
Updating compare databases on boot environment <new_zfsBE>.
Making boot environment <new_zfsBE> bootable.
Updating bootenv.rc on ABE <new_zfsBE>.
File </boot/grub/menu.lst> propagation successful
Copied GRUB menu from PBE to ABE
No entry for BE <new_zfsBE> in GRUB menu
Population of boot environment <new_zfsBE> successful.
Creation of boot environment <new_zfsBE> successful.
#

You have several other options when creating a new boot environment:
. Creating a boot environment from a different source (other than the active boot environment)
. Merging file systems in the new boot environment
. Reconfiguring swap in the new boot environment
. Creating a boot environment with RAID-1 (mirrored) volumes
. Migrating a UFS root (/) file system to a ZFS root pool

NOTE
Creating a boot environment Refer to the Sun Microsystems “Solaris 10 5/08 Installation Guide: Solaris Live Upgrade and Upgrade Planning” for more information.


Displaying the Status of the New Boot Environment
Verify the status of the new boot environment using the lustatus command:
# lustatus<cr>
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
active_boot                yes      yes    yes       no     -
new_BE                     yes      no     no        yes    -

Table 8.3 describes the columns of information that are displayed.

Table 8.3 lustatus Information
Column                 Description
Boot Environment Name  The name of the active and inactive boot environments.
Is Complete            Specifies whether a boot environment can be booted. Complete indicates that the environment is bootable.
Active Now             Indicates which environment is currently active.
Active On Reboot       Indicates which boot environment will be active on the next system boot.
Can Delete             Indicates that no copy, compare, or upgrade operations are being performed on a boot environment. Also, none of that boot environment's file systems are currently mounted. With all these conditions in place, the boot environment can be deleted.
Copy Status            Indicates whether the creation or repopulation of a boot environment is scheduled or active. A status of ACTIVE, COMPARING, UPGRADING, or SCHEDULED prevents a Live Upgrade copy, rename, or upgrade operation.

At this point, the new boot environment is set up. You can even test it by booting to c0t1d0 from the OBP.
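Scripts that drive Live Upgrade often gate on this lustatus output. A hedged sketch; be_complete and the canned lustatus rows are stand-ins for the real command:

```shell
# be_complete is a hypothetical helper: given lustatus data rows and a BE
# name, print that BE's "Is Complete" column (column 2).
be_complete() {
  # $1: lustatus data rows, $2: boot environment name
  echo "$1" | awk -v be="$2" '$1 == be { print $2 }'
}

# On a real system you would feed it `lustatus` output (minus headers);
# canned rows stand in here:
sample='active_boot  yes  yes  yes  no   -
new_BE       yes  no   no   yes  -'
be_complete "$sample" new_BE
```

A wrapper could refuse to run luupgrade or luactivate unless this prints "yes".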

Upgrading the New Boot Environment
After creating the new boot environment, you will use the luupgrade command to upgrade the new boot environment. The luupgrade command enables you to install software in a specified boot environment. Specifically, luupgrade performs the following functions:
. Upgrades an operating system image on a boot environment. The source of the image can be any valid Solaris installation medium.
. Runs an installer program to install software from an installation medium.
. Extracts a Solaris Flash Archive onto a boot environment.

. Adds or removes a package to or from a boot environment.
. Adds or removes a patch to or from a boot environment.
. Checks or obtains information about software packages.
. Checks an operating system installation medium.

The syntax is
luupgrade [-iIufpPtTcC] [<options>]

where the options are as follows:
. -l <logfile>: Error and status messages are sent to <logfile>, in addition to where they are sent in your current environment.
. -o <outfile>: All command output is sent to <outfile>, in addition to where it is sent in your current environment.
. -N: Dry-run mode. Enables you to determine whether your command arguments are correctly formed.
. -X: Enables XML output.
. -f: Extracts a Solaris Flash Archive onto a boot environment.

The following luupgrade options apply when you’re upgrading an operating system:
. -u: Upgrades an OS.
. -n <BE_name>: Specifies the name of the boot environment to receive the OS upgrade.
. -s <os_path>: Specifies the pathname of a directory containing an OS image. This can be a directory, CD-ROM, or an NFS mount point.

The following luupgrade options apply when you're upgrading from a Solaris Flash Archive:
. -n <BE_name>: Specifies the name of the boot environment to receive the OS upgrade.
. -s <os_path>: Specifies the pathname of a directory containing an OS image. This can be a directory on an installation medium such as a CD-ROM, or it can be an NFS or UFS directory.
. -a <archive>: Specifies the path to the Flash Archive.

The following luupgrade options apply when you add or remove software packages:
. -p: Adds software packages.
. -P: Removes software packages.
. -n <BE_name>: Specifies the name of the boot environment on which to add or remove the packages.
. -s <pkgs_path>: Specifies the pathname of a directory containing software packages to add.
. -O: Used to pass options to the pkgadd and pkgrm commands.
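As a sketch of the package options, the hypothetical wrapper below validates its arguments and echoes the luupgrade invocation it would run; it does not actually call luupgrade, and the wrapper name and sample package are illustrative only:

```shell
# lu_add_pkg is a hypothetical wrapper, not part of Solaris: it refuses to
# build a `luupgrade -p` command line without a BE name, package source
# directory, and package name.
lu_add_pkg() {
  be=$1; src=$2; pkg=$3
  if [ -z "$be" ] || [ -z "$src" ] || [ -z "$pkg" ]; then
    echo "usage: lu_add_pkg <BE_name> <pkgs_path> <package>"
    return 1
  fi
  # On a real system this would exec: luupgrade -p -n "$be" -s "$src" "$pkg"
  echo "luupgrade -p -n $be -s $src $pkg"
}

lu_add_pkg new_BE /var/spool/pkg SUNWman
```

Combining such a wrapper with luupgrade's own -N dry-run mode gives two layers of checking before the inactive BE is touched.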

I'll describe how to upgrade the new boot environment using both a Solaris CD/DVD (Step By Step 8.7) and a Flash Archive (Step By Step 8.8). In the first example, I have a Solaris x86-based system with two disk drives, running the Solaris 10 8/07 release. I've created the new boot environment, which is as follows:
# lustatus<cr>
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
active_boot                yes      yes    yes       no     -
new_BE                     yes      no     no        yes    -

Step By Step 8.7 describes how to update the new boot environment named new_BE on c0d1. In the example, I’ll update the system to Solaris 10 release 05/08 from the local DVD. I’ve already installed the Live Upgrade packages and patches from the Solaris 10 05/08 release as described earlier in this chapter.

STEP BY STEP
8.7 Performing a Solaris Live Upgrade from a Local DVD
1. Insert the Solaris 10 05/08 DVD into the DVD-ROM.
2. Issue the luupgrade command:
# luupgrade -u -n new_BE -s /cdrom/cdrom0<cr>

Several lines of output are displayed as the new boot environment is being upgraded. The following messages are displayed when the operation is complete:
Upgrading Solaris: 100% completed
...<output has been truncated>
The Solaris upgrade of the boot environment <new_BE> is complete.
Installing failsafe
Failsafe install is complete
#


When using Live Upgrade to install a Flash Archive, use the lucreate command with the -s - option to create an empty boot environment, as described earlier in this chapter. When the empty boot environment is complete, a Flash Archive can be installed on the boot environment, as described in Step By Step 8.8.

STEP BY STEP
8.8 Upgrading from a Flash Archive from a DVD
1. Insert the Solaris 10 05/08 DVD into the DVD-ROM.
2. Issue the luupgrade command:
# luupgrade -f -n new_BE -s /cdrom/cdrom0 -a /export/home/flash.flar<cr>

where -a /export/home/flash.flar is the name of the Flash Archive. Several lines of output are displayed as the new boot environment is being upgraded. The following messages are displayed when the operation is complete:
<output has been truncated>
Upgrading Solaris: 100% completed
...<output has been truncated>
The Solaris upgrade of the boot environment <new_BE> is complete.
Installing failsafe
Failsafe install is complete
#

Activating the New Boot Environment
Activating the upgraded boot environment with the luactivate command will make it bootable at the next reboot. In addition, you can use the luactivate command to switch back to the old boot environment if necessary. To activate a boot environment, the following requirements must be met:
. The boot environment must have a status of "complete."
. If the boot environment is not the current boot environment, you cannot have mounted the partitions of that boot environment using the lumount or mount commands.
. The boot environment that you want to activate cannot be involved in a comparison operation (lucompare).
. If you want to reconfigure swap, make this change prior to booting the inactive boot environment. By default, all boot environments share the same swap devices.

In the previous section, I upgraded the OS on an x86/x64-based system. Before I activate the new boot environment, I'll check the status again:
# lustatus<cr>
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
active_boot                yes      yes    yes       no     -
new_BE                     yes      no     no        yes    -

The status shows "Complete," so I'm ready to issue the luactivate command. The syntax for the luactivate command is as follows:

# luactivate [-s] [-l <errlog>] [-o <out_file>] <BE_name><cr>

where:
. <new_BE>: Specifies the name of the upgraded boot environment you want to activate.
. -o <out_file>: All output is sent to the <out_file> file in addition to your current environment.
. -l <errlog>: Error and status messages are sent to the <errlog> file in addition to your current environment.
. -s: Forces a synchronization of files between the last-active boot environment and the new boot environment. The first time a boot environment is activated, the files between the boot environments are synchronized. With subsequent activations, the files are not synchronized unless you use the -s option. "Synchronize" means that certain critical system files and directories are copied from the last-active boot environment to the boot environment being booted.

The luactivate command performs the following tasks:
. The first time you boot to a new boot environment (BE), the Solaris Live Upgrade software synchronizes this BE with the BE that was last active.
. If luactivate detects conflicts between files that are subject to synchronization, it issues a warning and does not perform the synchronization for those files. However, activation can still complete successfully, in spite of such a conflict. A conflict can occur if you make changes to the same file on both the new boot environment and the active boot environment. For example, you make changes to the /etc/vfstab file in the original boot environment, and then you make other changes to the /etc/vfstab file in the new boot environment. The synchronization process cannot choose which file to copy for the synchronization.
. luactivate checks to see whether upgrade problems occurred. For example, important software packages might be missing. This package check is done for the global zone as well as all nonglobal zones inside the BE. The command can issue a warning or, if a BE is incomplete, can refuse to activate the BE.


. On a SPARC system, luactivate determines whether the bootstrap program requires updating and takes steps to update if necessary. If a bootstrap program changed from one operating release to another, an incorrect bootstrap program might render an upgraded BE unbootable.
. luactivate modifies the root partition ID on a Solaris x86/x64-based disk to enable multiple BEs to reside on a single disk. In this configuration, if you do not run luactivate, booting of the BE will fail.
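The restriction on reboot commands after activation (repeated throughout this chapter) can be encoded in a small guard. safe_restart_cmd is illustrative only, not a Solaris utility:

```shell
# safe_restart_cmd is a hypothetical guard: after luactivate, only init or
# shutdown run the scripts that complete the BE switch; reboot, halt, and
# uadmin bypass them and boot the last-active BE instead.
safe_restart_cmd() {
  case "$1" in
    init|shutdown)      echo "ok: $1" ;;
    reboot|halt|uadmin) echo "refused: $1 would skip the BE switch" ;;
    *)                  echo "unknown: $1" ;;
  esac
}

safe_restart_cmd init
safe_restart_cmd reboot
```

The same rule is stated in luactivate's own on-screen warning, shown in the next section.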

luactivate on the x86/x64 Platform
To activate the upgraded boot environment on the x86/x64-based platform, issue the luactivate command:
# luactivate -s new_BE<cr>

The system displays the steps to be taken for fallback in case problems are encountered on the next reboot. Make note of these instructions, and follow them exactly if it becomes necessary to fall back to the previous boot environment:
System has findroot enabled GRUB
Generating boot-sign, partition and slice information for PBE <active_BE>
A Live Upgrade Sync operation will be performed on startup of boot environment <new_BE>.
Generating boot-sign for ABE <new_BE>
Generating partition and slice information for ABE <new_BE>
Boot menu exists.
Generating multiboot menu entries for PBE.
Generating multiboot menu entries for ABE.
Disabling splashimage
Re-enabling splashimage
No more bootadm entries. Deletion of bootadm entries is complete.
GRUB menu default setting is unaffected
Done eliding bootadm entries.
**********************************************************************
The target boot environment has been activated. It will be used when you
reboot. NOTE: You MUST NOT USE the reboot, halt, or uadmin commands. You
MUST USE either the init or the shutdown command when you reboot. If you
do not use either init or shutdown, the system will not boot using the
target BE.
**********************************************************************

In case of a failure while booting to the target BE, the following process
needs to be followed to fallback to the currently working boot environment:

1. Boot from the Solaris failsafe or boot in Single User mode from Solaris
   Install CD or Network.
2. Mount the Parent boot environment root slice to some directory (like
   /mnt). You can use the following command to mount:
   mount -Fufs /dev/dsk/c0d0s0 /mnt
3. Run <luactivate> utility with out any arguments from the Parent boot
   environment root slice, as shown below:
   /mnt/sbin/luactivate
4. luactivate, activates the previous working boot environment and
   indicates the result.
5. Exit Single User mode and reboot the machine.
**********************************************************************
Modifying boot archive service
Propagating findroot GRUB for menu conversion.
File </etc/lu/installgrub.findroot> propagation successful
File </etc/lu/stage1.findroot> propagation successful
File </etc/lu/stage2.findroot> propagation successful
File </etc/lu/GRUB_capability> propagation successful
Deleting stale GRUB loader from all BEs.
File </etc/lu/installgrub.latest> deletion successful
File </etc/lu/stage1.latest> deletion successful
File </etc/lu/stage2.latest> deletion successful
Activation of boot environment <new_BE> successful.
#

In addition, when you activate a boot environment on an x86/x64-based system, the luactivate command modifies the menu.lst file (GRUB boot menu), as shown in Figure 8.1. The menu.lst file contains the information that is displayed in the GRUB menu.

FIGURE 8.1 Modifying the menu.lst file.

After you run the luactivate command on an x86/x64-based system and then shut down for a reboot, you must use the shutdown or init command. The reboot, halt, and uadmin commands do not switch boot environments, and the system boots to the last-active boot environment. This is necessary only when you're performing the first reboot after running the luactivate command. The next time you boot, you can choose the boot environment directly from the GRUB menu without using the luactivate command. However, when you switch between boot environments with the GRUB menu, files are not synchronized.

Keep in mind a couple cautions when using the GRUB menu to boot to an alternate boot environment:
. Do not use the GRUB menu.lst file to modify Solaris Live Upgrade entries. Modifications could cause Solaris Live Upgrade to fail. The preferred method for customization is to use the eeprom command when possible.
. The GRUB menu is stored on the primary boot disk, not necessarily on the active boot environment disk. Be careful if you change the disk order in the BIOS; changing the order might cause the GRUB menu to become invalid. If this problem occurs, changing the disk order back to the original state fixes the GRUB menu.
. If a boot environment was created with the Solaris 8, 9, or 10 3/05 release, the boot environment must always be activated with the luactivate command. These older boot environments do not appear on the GRUB menu.

For more information on booting x86/x64-based systems and the GRUB menu, refer to Solaris 10 System Administration Exam Prep (Exam CX-310-200), Part I.

luactivate on the SPARC Platform

To activate the upgraded boot environment on the SPARC platform, issue the following command:

# luactivate new_BE<cr>

The system displays the steps to be taken for fallback in case problems are encountered on the next reboot. Make note of these instructions, and follow them exactly if it becomes necessary to fall back to the previous boot environment:

****************************************************************

The target boot environment has been activated. It will be used when you reboot. NOTE: You MUST NOT USE the reboot, halt, or uadmin commands. You MUST USE either the init or the shutdown command when you reboot. If you do not use either init or shutdown, the system will not boot using the target BE.

****************************************************************

In case of a failure while booting to the target BE, the following process needs to be followed to fallback to the currently working boot environment:

1. Enter the PROM monitor (ok prompt).
2. Change the boot device back to the original boot environment by typing:
   setenv boot-device /pci@1f,0/pci@1/scsi@8/disk@0,0:a
3. Boot to the original boot environment by typing: boot

****************************************************************

Activation of boot environment <new_BE> successful.
#

After running the luactivate command on a SPARC system, when you shut down for a reboot, you must use the shutdown or init command. The reboot and halt commands do not switch boot environments, and the system boots to the last-active boot environment. This is because it's important to run the shutdown scripts necessary to perform the upgrade.

Verify that the OS has been upgraded to the Solaris 10 5/08 release with the following command:

# cat /etc/release<cr>
Solaris 10 5/08 s10s_u5wos_10 SPARC
Copyright 2008 Sun Microsystems, Inc. All Rights Reserved.
Use is subject to license terms.
Assembled 24 March 2008

During the first boot of a new boot environment, data is copied from the source boot environment. This list of files copied is maintained in /etc/lu/synclist.
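The synchronization list can be inspected with ordinary text tools. The sketch below assumes a two-column layout (pathname, then an action keyword) for /etc/lu/synclist entries; the sample entries and action names here are illustrative, not copied from a live system:

```shell
#!/bin/sh
# Illustrative synclist-style data: pathname followed by an action keyword.
# These entries are hypothetical examples, not a real /etc/lu/synclist.
cat > /tmp/synclist.sample <<'EOF'
/var/mail                OVERWRITE
/var/spool/mqueue        OVERWRITE
/etc/passwd              OVERWRITE
/var/log/syslog          APPEND
EOF

# List only the entries whose action is OVERWRITE.
awk '$2 == "OVERWRITE" { print $1 }' /tmp/synclist.sample
```

The same one-liner, pointed at the real file, shows which files the first boot will carry forward from the source boot environment.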

Maintaining Solaris Live Upgrade Boot Environments

You can perform various administrative tasks on an inactive boot environment:

. Adding and removing packages for an OS installed on a new boot environment
. Adding patches to an OS installed on a new boot environment
. Removing patches on an OS installed on a boot environment
. Checking for differences between the active boot environment and other boot environments
. Deleting an inactive boot environment
. Changing the name or description of a boot environment
. Updating the contents of a previously configured boot environment
. Viewing the configuration of a boot environment

These administrative tasks are described in the following sections.

Removing Software Packages from a Boot Environment

The following example uses the luupgrade command with the -P option to remove the SUNWgzip software package from the OS image on an inactive boot environment named new_BE:

# luupgrade -P -n new_BE SUNWgzip<cr>

where:

. -P: Used to remove the named software packages from the boot environment.
. -n <BE_name>: Specifies the name of the boot environment where the package is to be removed.

The system responds with the following output:

Mounting the BE <new_BE>.
Removing packages from the BE <new_BE>.

The following package is currently installed:
   SUNWgzip   The GNU Zip (gzip) compression utility
              (sparc) 11.10.0,REV=2005.01.08.05.16

Do you want to remove this package? [y,n,?,q] y<cr>
## Removing installed package instance <SUNWgzip>
## Verifying package <SUNWgzip> dependencies in global zone
WARNING: The <SUNWdtdte> package depends on the package currently being removed.
WARNING: The <SUNWfppd> package depends on the package currently being removed.
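When a removal triggers dependency warnings like these, it can help to collect the names of the dependent packages before deciding whether to continue. A minimal sketch that scans saved luupgrade output; the captured text below is a sample written to a hypothetical /tmp/luupgrade.out:

```shell
#!/bin/sh
# Sample of the WARNING lines printed during the SUNWgzip removal.
cat > /tmp/luupgrade.out <<'EOF'
WARNING: The <SUNWdtdte> package depends on the package currently being removed.
WARNING: The <SUNWfppd> package depends on the package currently being removed.
EOF

# Pull the dependent package names out of the angle brackets.
sed -n 's/^WARNING: The <\(.*\)> package depends.*/\1/p' /tmp/luupgrade.out
```

Each name printed is a package you would want to check with pkginfo before forcing the removal through.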

Dependency checking failed.
Do you want to continue with the removal of this package [y,n,?,q] y<cr>
## Processing package information.
## Removing pathnames in class <none>
/a/usr/bin/gznew
/a/usr/bin/gzmore
/a/usr/bin/gzless
/a/usr/bin/gzip
/a/usr/bin/gzgrep
/a/usr/bin/gzforce
/a/usr/bin/gzfgrep
/a/usr/bin/gzexe
/a/usr/bin/gzegrep
/a/usr/bin/gzdiff
/a/usr/bin/gzcmp
/a/usr/bin/gzcat
/a/usr/bin/gunzip
/a/usr/bin <shared pathname not removed>
/a/usr <shared pathname not removed>
## Updating system information.
Removal of <SUNWgzip> was successful.
The package remove from the BE <new_BE> completed.
Unmounting the BE <new_BE>.
#

Adding Software Packages to a Boot Environment

The following example uses the luupgrade command with the -p option to add the SUNWgzip software package to the OS image on an inactive boot environment named new_BE:

# luupgrade -p -n new_BE -s /cdrom/sol_10_508_sparc_2/Solaris_10/Product SUNWgzip<cr>

where:

. -p: Used to add the named software packages to the boot environment.
. -n <BE_name>: Specifies the name of the boot environment where the package is to be added.
. -s <path-to-pkg>: Specifies the path to a directory containing the package or packages to be installed.

The system responds with the following output:

Validating the contents of the media </cdrom/sol_10_508_sparc_2/Solaris_10/Product>.
Mounting the BE <new_BE>.
Adding packages to the BE <new_BE>.

Processing package instance <SUNWgzip> from </cdrom/sol_10_508_sparc_2/Solaris_10/Product>

The GNU Zip (gzip) compression utility (sparc) 11.10.0,REV=2005.01.08.05.16
Copyright 1992-1993 Jean-loup Gailly

This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY, without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.

Using </a> as the package base directory.
## Processing package information.
## Processing system information.
   2 package pathnames are already properly installed.
## Verifying package dependencies.
## Verifying disk space requirements.
## Checking for conflicts with packages already installed.
## Checking for setuid/setgid programs.

This package contains scripts which will be executed with super-user permission during the process of installing this package.

Do you want to continue with the installation of <SUNWgzip> [y,n,?] y<cr>

Installing The GNU Zip (gzip) compression utility as <SUNWgzip>
## Installing part 1 of 1.
160 blocks
Installation of <SUNWgzip> was successful.
The package add to the BE <new_BE> completed.
Unmounting the BE <new_BE>.
#

Removing Patches on an OS Installed on a Boot Environment

The following example uses the luupgrade command to remove a software patch named 119317-01 from the OS image on an inactive boot environment named new_BE:

# luupgrade -T -n new_BE 119317-01<cr>

where -T is used to remove a patch from the named boot environment.

Adding Patches to an OS Installed on a New Boot Environment

The following example uses the luupgrade command to add a software patch named 119317-01 to the OS image on an inactive boot environment named new_BE:

# luupgrade -t -n 'new_BE' -s /tmp/119317-01 119317-01<cr>

where:

. -t: Adds a patch or patches to an inactive boot environment.
. -s: Specifies the path to the directory containing the patch.

Other tasks, such as updating an existing boot environment and checking for differences between boot environments, are beyond the scope of the CX-310-202 exam and this book. If you would like more information on these topics, refer to Sun Microsystems' Solaris 10 5/08 Installation Guide.

Deleting an Inactive Boot Environment

Use the ludelete command to delete an inactive boot environment. The following limitations apply to the ludelete command:

. You cannot delete the active boot environment or the boot environment that is activated on the next reboot.
. You can only delete a boot environment that has a status of complete.
. You cannot delete a boot environment that has file systems mounted with lumount.
. x86/x64-based systems: Starting with the Solaris 10 1/06 release, you cannot delete a boot environment that contains the active GRUB menu.

The following boot environments are available on the system:

# lustatus<cr>
Boot Environment           Is        Active  Active     Can     Copy
Name                       Complete  Now     On Reboot  Delete  Status
-------------------------- --------  ------  ---------  ------  ------
active_boot                yes       yes     yes        no      -
new_BE                     yes       no      no         yes     -

Notice that the "Can Delete" field is marked yes for the new_BE boot environment. To remove the new_BE boot environment, issue the following command:

# ludelete new_BE<cr>

The system responds with this:

Determining the devices to be marked free.
Updating boot environment configuration database.
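Because ludelete works only on a boot environment whose "Can Delete" field shows yes, a quick filter over captured lustatus output can list the candidates. The sketch below uses a simplified whitespace-separated layout (name, complete, active-now, active-on-reboot, can-delete, copy-status); the column positions are an assumption, so check your own lustatus output before reusing the awk field numbers:

```shell
#!/bin/sh
# Simplified capture of lustatus output: name, complete, active-now,
# active-on-reboot, can-delete, copy-status.
cat > /tmp/lustatus.out <<'EOF'
active_boot  yes  yes  yes  no   -
new_BE       yes  no   no   yes  -
EOF

# Print the names of boot environments whose "Can Delete" field is yes.
awk '$5 == "yes" { print $1 }' /tmp/lustatus.out
```

Only new_BE is printed here, matching the ludelete limitations: the active boot environment and the one activated for the next reboot are never deletable.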

Updating boot environment description database on all BEs.
Updating all boot environment configuration databases.
Boot environment <new_BE> deleted.
#

Changing the Name of a Boot Environment

You can rename a boot environment using the lurename command. In the following example, I rename the boot environment from new_BE to solaris10_0508_BE:

# lurename -e new_BE -n solaris10_0508_BE<cr>

where:

. -e <name>: Specifies the inactive boot environment name to be changed.
. -n <newname>: Specifies the new name of the inactive boot environment.

The system responds with this:

Renaming boot environment <new_BE> to <solaris10_0508_BE>.
Changing the name of BE in the BE definition file.
Changing the name of BE in configuration file.
Updating compare databases on boot environment <solaris10_0508_BE>.
Changing the name of BE in Internal Configuration Files.
Propagating the boot environment name change to all BEs.
Boot environment <new_BE> renamed to <solaris10_0508_BE>.
#

Verify that the name was changed with the lustatus command:

# lustatus<cr>
Boot Environment           Is        Active  Active     Can     Copy
Name                       Complete  Now     On Reboot  Delete  Status
-------------------------- --------  ------  ---------  ------  ------
active_boot                yes       yes     yes        no      -
solaris10_0508_BE          yes       no      no         yes     -

Changing the Description of a Boot Environment

It's a good idea to have a description associated with each boot environment on your system. You can create a description when you create the boot environment using the lucreate -A option or after the boot environment has been created using the ludesc command. In the following example, I add a description to an existing boot environment:

# ludesc -n solaris10_0508_BE "Solaris 10 05/08 upgrade"<cr>

where -n <BEname> specifies the boot environment name, followed by the description enclosed in double quotes (" ").

The system responds with this:

Setting description for boot environment <solaris10_0508_BE>.
Updating boot environment description database on all BEs.
#

I can view the description by using the ludesc command with the -n option followed by the boot environment's name:

# ludesc -n solaris10_0508_BE<cr>

The system responds with this:

Solaris 10 05/08 upgrade

In the output, ludesc does not append a newline to the display of the BE description text string.

Viewing the Configuration of a Boot Environment

Issuing the lustatus command displays the following information:

# lustatus<cr>
Boot Environment           Is        Active  Active     Can     Copy
Name                       Complete  Now     On Reboot  Delete  Status
-------------------------- --------  ------  ---------  ------  ------
active_boot                yes       yes     yes        no      -
solaris10_0508_BE          yes       no      no         yes     -

Where the lustatus command is used to display the status of a boot environment, the lufslist command displays the disk slice, file system type, and file system size of each boot environment mount point. Use the lufslist command to display the configuration of a particular boot environment:

# lufslist -n solaris10_0508_BE<cr>
               boot environment name: solaris10_0508_BE
               This boot environment will be active on next system boot.

Filesystem          fstype  device size   Mounted on    Mount Options
------------------- ------  -----------   ------------  -------------
/dev/dsk/c0t0d0s3   swap    2097460224    -             -
/dev/dsk/c0t1d0s0   ufs     10738759680   /             -
/dev/dsk/c0t0d0s7   ufs     10485821952   /export/home  -
#
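The device sizes that lufslist reports are in bytes, which is awkward to read at a glance. A small sketch that converts a captured size column to gigabytes, using the device paths and byte counts from the lufslist example:

```shell
#!/bin/sh
# Device sizes (in bytes) taken from the lufslist example.
cat > /tmp/lufslist.sizes <<'EOF'
/dev/dsk/c0t0d0s3 2097460224
/dev/dsk/c0t1d0s0 10738759680
/dev/dsk/c0t0d0s7 10485821952
EOF

# Convert bytes to gigabytes (1 GB = 1024^3 bytes), one decimal place.
awk '{ printf "%s %.1f GB\n", $1, $2 / (1024 * 1024 * 1024) }' /tmp/lufslist.sizes
```

The swap slice works out to roughly 2 GB and the root slice to roughly 10 GB, which is much easier to sanity-check against your disk layout than the raw byte counts.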

Summary

This chapter has described how to configure a WAN boot installation and perform a Solaris Live Upgrade installation. You read about how to use a WAN boot installation to install the operating system securely and insecurely over a WAN and how to use a Solaris Live Upgrade to copy and update an existing OS.

For the certification exam, you need to understand the advantages of a WAN boot installation over other methods used to install the OS. Understand the requirements for a WAN boot installation, know how to perform a WAN boot installation, and know which types of media you can use with a WAN boot install. Go back and perform the steps I've outlined in this chapter to become familiar with the process and the files that are associated with a WAN boot installation.

For the Solaris Live Upgrade portion of the certification exam, you should know the system requirements and the various Live Upgrade commands described in this chapter. Understand the use and, in some cases, limitations associated with each command. As of the Solaris 10 05/08 release, Solaris Live Upgrade cannot be performed on a ZFS file system. New in the Solaris 10 10/08 release, administrators can migrate from a UFS file system to a ZFS file system during a Solaris Live Upgrade.

The next chapter describes ZFS file systems and how they've revolutionized disk storage.

Key Terms

. Boot environment
. bootlog-cgi
. CGI
. DES
. DHCP
. Document root directory
. Encryption
. Fallback
. Flash Archive
. GRUB
. Hashing
. HMAC

. HTTP
. HTTPS
. Key
. SHA1
. Solaris Live Upgrade
. SSL
. sysidcfg file
. URL
. WAN
. WAN boot installation
. WAN boot miniroot
. wanboot-cgi
. wanboot.conf
. wanboot program
. WAN boot server

Apply Your Knowledge

Exercises

Perform Step By Steps 8.2, 8.3, and 8.7.

Exam Questions

1. Which of the following describe the advantages of a WAN boot installation over a JumpStart installation? (Choose four.)
❍ A. A WAN boot installation is more secure than a custom JumpStart installation.
❍ B. Boot services are not required to be on the same subnet as the installation client.
❍ C. A WAN boot provides a scalable process for the automated installation of systems.
❍ D. A WAN boot supports all SPARC-based systems.
❍ E. A WAN boot supports all x86-based systems.

2. Before you can perform a WAN boot installation, you need to make sure that your server meets the minimum requirements for a WAN boot installation. Which of the following are requirements that your server must meet before it can be used as a WAN boot server?
❍ A. The WAN boot server must be a SPARC system running Solaris 9 release 12/03 or higher.
❍ B. The WAN boot server must be a SPARC or x86-based system running Solaris 9 release 12/03 or higher.
❍ C. The WAN boot server must be a SPARC or x86-based system running Solaris 10 or higher.
❍ D. A WAN boot requires a web server to be configured, and it must support SSL version 3.

3. Before you can perform a WAN boot installation, you need to make sure that the WAN boot client meets the minimum requirements for a WAN boot installation. Which of the following are requirements that your system must meet before it can be used as a client in a WAN boot installation? (Choose two.)
❍ A. A minimum OpenBoot firmware version of 4.14
❍ B. The WAN boot client must have a minimum of 512MB of RAM.
❍ C. The WAN boot client must have a SPARC II processor or newer.
❍ D. x86 systems must support a PXE boot.

4. Which of the following is a second-level boot program that is used to load the miniroot, installation, and configuration files onto the WAN boot client?
❍ A. ufsboot
❍ B. wanboot-cgi
❍ C. bootlog-cgi
❍ D. wanboot program
❍ E. HTTP

5. Which of the following services all WAN boot client requests and parses the WAN boot server files and client configuration files into a format that the WAN boot client expects? OpenBoot uses configuration information to communicate with this program on the WAN boot server and request a download of the wanboot program from the server.
❍ A. wanboot program
❍ B. wanboot-cgi
❍ C. bootlog-cgi
❍ D. wan boot miniroot

6. On the WAN boot server, where does the wanboot program reside?
❍ A. In an NFS shared (exported) directory on the WAN boot server
❍ B. Either in an NFS shared (exported) directory on the WAN boot server or on the client's local CD or DVD
❍ C. In the WAN boot server's document root directory
❍ D. In /etc/netboot on the WAN boot server

7. Which of the following is a file in which you specify the configuration information and security settings (file paths, encryption type, signing policies) that are required to perform a WAN boot installation?
❍ A. wanboot
❍ B. wanboot.conf
❍ C. bootlog.conf
❍ D. /etc/netboot

8. Which commands (issued on the WAN boot client) can be used to initiate the WAN boot installation? (Choose two.)
❍ A. ok boot net - install
❍ B. ok boot net
❍ C. ok boot -F wanboot - install
❍ D. ok boot cdrom -o prompt -F wanboot - install

9. Which of the following can be used to supply the operating system during a WAN boot installation?
❍ A. Local CD/DVD
❍ B. A spooled image of the DVD or CDs
❍ C. Flash Archive
❍ D. Any image supported by JumpStart is also supported by WAN boot.

10. In terms of a Solaris Live Upgrade, which of the following are examples of shareable file systems? (Choose two.)
❍ A. /
❍ B. /usr
❍ C. swap
❍ D. /export

11. Creating a new, inactive boot environment involves copying critical file systems from the active environment to the new boot environment. Which command is used to accomplish this?
❍ A. luactivate
❍ B. lumake
❍ C. lucopy
❍ D. lucreate
❍ E. luupgrade

12. Which of the following are requirements for performing a Solaris Live Upgrade? (Choose two.)
❍ A. Only SPARC-based systems can use Solaris Live Upgrade.
❍ B. Ensure that the system meets current patch requirements.
❍ C. The release of the Live Upgrade software packages must match the release of the OS you are upgrading to.
❍ D. The root (/) file system of the new inactive boot environment must be on the same physical disk as the currently active root (/) file system.

Answers to Exam Questions

1. A, B, C, D. All the answers describe advantages of a WAN boot installation over a JumpStart installation except answer E. x86/x64-based systems cannot be installed using a WAN boot installation. For more information, see the section "Introduction to WAN Boot."

2. B, D. The server can be a SPARC or x86-based system, but the WAN boot server must be running Solaris 9 release 12/03 or higher, and it must be configured as a web server. For more information, see the section "WAN Boot Requirements."

3. B, C. The WAN boot client must have a minimum of 512MB of RAM, and the WAN boot client must have a SPARC II processor or newer. Although it's best if the WAN boot client system's OpenBoot PROM (OBP) supports WAN boot, it is not a requirement. You can still perform a WAN boot installation by utilizing WAN boot programs from a local CD/DVD. For more information, see the section "WAN Boot Requirements."

4. D. The wanboot program is a second-level boot program that is used to load the miniroot, installation, and configuration files onto the WAN boot client. The wanboot program performs tasks similar to those that are performed by the ufsboot or inetboot second-level boot programs. For more information, see the section "Understanding the WAN Boot Process."

5. B. wanboot-cgi is a Common Gateway Interface (CGI) program on the web server that services all client requests. It parses the WAN boot server files and client configuration files into a format that the WAN boot client expects. When the WAN boot client is booted, OpenBoot uses configuration information to communicate with the wanboot-cgi program on the WAN boot server and to request a download of the wanboot program from the server. For more information, see the section "Understanding the WAN Boot Process."

6. C. The files necessary to perform a WAN boot must be made accessible to the web server by storing them in the WAN boot server's document root directory. For more information, see the section "Configure the WAN Boot and JumpStart Files."

7. B. The wanboot.conf file is a text file in which you specify the configuration information and security settings that are required to perform a WAN boot installation. For more information, see the section "The wanboot.conf File."

8. A, D. When the OBP supports WAN boot, you use the boot net - install command to boot the WAN boot client. If the OBP does not support WAN boot, you can still boot using the WAN boot programs located on the local CD/DVD as follows: boot cdrom -o prompt -F wanboot - install. For more information, see the section "Booting the WAN Boot Client."

9. C. Flash Archives are the only format supported. Traditional JumpStart images, such as a spooled image of the CD/DVD that performed a pkgadd-style install, do not work with WAN boot. For more information, see the section "WAN Boot Requirements."

10. C, D. Shareable file systems are user-defined files such as /export that contain the same mount point in the /etc/vfstab file in both the active and inactive boot environments. Shareable file systems are not copied, but they are shared. In addition, when you create a new inactive boot environment, all swap slices are shared by default. Like a shareable file system, swap slices are not copied, but they are shared. For more information, see the section "Solaris Live Upgrade Process."

11. D. Creating a new, inactive boot environment involves copying critical file systems from the active environment to the new boot environment using the lucreate command. For more information, see the section "Creating a New Boot Environment."

12. B, C. You must ensure that the system meets current patch requirements before attempting to install and use the Solaris Live Upgrade software on your system, and the release of the Live Upgrade software packages must match the release of the OS you are upgrading to. However, the root (/) file system of the new inactive boot environment does not need to be on the same physical disk as the currently active root (/) file system, as long as the disk can be used as a boot device. The disk on the new boot environment must be able to serve as a boot device. For more information, see the section "Live Upgrade Requirements."

Suggested Reading and Resources

"Solaris Installation Guide: Solaris Live Upgrade and Upgrade Planning," Sun Microsystems part number 820-4041-11, available at http://docs.sun.com.

"Solaris Installation Guide: Network-Based Installations," Sun Microsystems part number 820-4040-10, available at http://docs.sun.com.


NINE

Administering ZFS File Systems

Objectives

The following test objectives for exam CX-310-202 are covered in this chapter:

. Describe the Solaris ZFS file system, create new ZFS pools and file systems, modify ZFS file system properties, mount and unmount ZFS file systems, destroy ZFS pools and file systems, work with ZFS snapshots and clones, and use ZFS datasets with Solaris Zones.

In addition, you'll learn the following about ZFS file systems:

. Why the Solaris ZFS file system is a revolutionary file system when compared to traditional Solaris file systems.
. How to create and remove ZFS pools and ZFS file systems.
. How to view and modify ZFS file system properties.
. Mounting and unmounting ZFS file systems.
. Creating ZFS snapshots.
. Cloning ZFS file systems.
. Using ZFS datasets with Solaris Zones.
. How to set up a bootable ZFS root file system during the installation of the operating system.

You'll also learn about the features and benefits of ZFS and how ZFS file systems differ from traditional Solaris file systems.

Outline

Introduction to ZFS
  ZFS Storage Pools
  ZFS Is Self-Healing
  Simplified Administration
ZFS Terms
ZFS Hardware and Software Requirements
ZFS RAID Configurations
Creating a Basic ZFS File System
Renaming a ZFS File System
Listing ZFS File Systems
Removing a ZFS File System
Removing a ZFS Storage Pool
ZFS Components
  Using Disks in a ZFS Storage Pool
  Using Files in a ZFS Storage Pool
  Mirrored Storage Pools
  RAID-Z Storage Pools
Displaying ZFS Storage Pool Information
Adding Devices to a ZFS Storage Pool
Attaching and Detaching Devices in a Storage Pool
  Converting a Nonredundant Pool to a Mirrored Pool
  Detaching a Device from a Mirrored Pool
Taking Devices in a Storage Pool Offline and Online
ZFS History
ZFS Properties
  Setting ZFS Properties
Mounting ZFS File Systems
  Legacy Mount Points
Sharing ZFS File Systems
ZFS Web-Based Management GUI
ZFS Snapshots
  Creating a ZFS Snapshot
  Listing ZFS Snapshots
  Saving and Restoring a ZFS Snapshot
  Destroying a ZFS Snapshot
  Renaming a ZFS Snapshot
  Rolling Back a ZFS Snapshot
ZFS Clones
  Destroying a ZFS Clone
  Replacing a ZFS File System with a ZFS Clone
zpool Scrub
  Replacing Devices in a Storage Pool
A ZFS Root File System
Using ZFS for Solaris Zones
  Adding a ZFS Dataset to a Nonglobal Zone
  Delegating a ZFS Dataset to a Nonglobal Zone
Summary
Key Terms
Apply Your Knowledge
  Exercise
  Exam Questions
  Answers to Exam Questions
Suggested Reading and Resources

Study Strategies

The following strategies will help you prepare for the test:

. Practice the Step By Step examples provided in this chapter on either a SPARC-based or x86-based Solaris system. It is recommended that your Solaris system have at least three spare disks.

. Understand all the ZFS terms described in this chapter, as well as the system requirements that are outlined.

Introduction to ZFS

ZFS is a 128-bit file system that was introduced in the 6/06 update of Solaris 10 in June 2006. ZFS comes from the acronym for "Zettabyte File System," mainly because "Zetta" was one of the largest SI prefixes. The name referred to the fact that ZFS could store 256 quadrillion zettabytes of data. Since then, we simply call it ZFS, and it is no longer an acronym for anything.

ZFS represents an entirely new approach to managing disk storage space. It revolutionizes the traditional Solaris file systems described in Solaris 10 System Administration Exam Prep (Exam CX-310-200), Part I. ZFS does not replace those traditional file systems, nor is it an improvement on that existing technology, but it is a fundamental new approach to data management. ZFS was designed to be more robust, more scalable, and easier to administer than traditional Solaris file systems.

All the algorithms were written with scalability in mind. ZFS allows for 256 quadrillion zettabytes of storage. Directories can have up to 256 trillion entries, and no limit exists on the number of file systems or number of files that can be contained within a ZFS file system. All metadata is allocated dynamically, so there is no need to preallocate I-nodes or otherwise limit the scalability of the file system when it is first created.

As you learn about ZFS, it's best to try to forget everything you know about traditional file systems and volume management. ZFS has no slices, no initialization or mount procedures, and no file system consistency checks. ZFS is quite different and much easier to administer.

ZFS Storage Pools

With conventional file systems, we add disks to the system and then divide those disks into one or more file systems. As we add data to a file system, the file system begins to fill up. When a file system fills up, we manually allocate more space to that file system. To get more free disk space, we either add another disk or take away space from another file system. Taking away space from an existing file system typically requires backing up, destroying, and rebuilding the existing file system. Sometimes we allocate too much space to one file system while another file system fills up.

With ZFS, disk space is not allocated to a file system. There is just a pool of disks, and ZFS manages how the storage gets allocated, much as we do not worry about allocating physical memory when we add DIMMs (dual inline memory modules) to a server. When I add RAM to a server, I don't partition it and allocate the RAM to each application one chip at a time. I simply install the DIMMs and let the kernel manage it all. That is precisely what ZFS does to the disks installed on a server.

ZFS uses storage pools, called "zpools," to manage physical storage. Block devices (disks or disk slices) make up the zpool. Your server may have one or more zpools.

When I create a ZFS file system, I specify which zpool the file system belongs to. I do not, however, specify the size of the file system. The file system takes data blocks from the zpool as it needs the storage space; ZFS allocates the space as it is needed. I can limit how much space the ZFS file system takes from the zpool, or I simply let ZFS use as much as it needs. When I run out of space in the zpool, I add another block device to increase the size of the zpool.

As with the Solaris Volume Manager (SVM), described in Chapter 3, "Managing Storage Volumes," ZFS file systems can span multiple devices. ZFS differs from SVM in that we do not need to allocate blocks of storage to each file system as it is created.

ZFS Is Self-Healing

ZFS is a transactional file system that ensures that data is always consistent. Traditional file systems simply overwrite old data as data changes. ZFS uses copy-on-write semantics, in which live data is never overwritten, and any sequence of operations is either entirely committed or entirely ignored. This mechanism ensures that the ZFS file system can never be corrupted through loss of power or a system crash. The most recently written pieces of data might be lost, but the file system itself is always consistent. Furthermore, there is no need for an fsck equivalent.

NOTE ZFS file system: The ZFS transactional file system should not be confused with file system journaling, which is used on traditional file systems, described in previous chapters. The journaling process records an action in a separate journal. The journal can be replayed if a system crash occurs. The journaling process introduces unnecessary overhead, however, because the data needs to be written twice. This often results in a new set of problems, such as when the journal can't be replayed properly.

In addition, in a replicated (mirrored or RAID) configuration, every block is checksummed to prevent silent data corruption. In a mirrored ZFS file system, if one copy is damaged, ZFS detects it and uses another copy to repair it.

NOTE What is a checksum?: A checksum is a value used to ensure that data is stored without error. It is derived by calculating the binary value in a block of data using a particular algorithm and storing the calculated results with the data. When data is retrieved, the checksum is recalculated and matched against the stored checksum. If the checksums are the same, the data has not changed. If the checksums are different, the data has been changed, corrupted, or tampered with.
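The checksum idea is easy to demonstrate with the standard cksum utility: compute a checksum, change one byte of the data, and the recomputed checksum no longer matches. (cksum uses a 32-bit CRC rather than ZFS's 256-bit block checksums, but the detection principle is the same.)

```shell
#!/bin/sh
# Store some data and record its checksum.
printf 'important data\n' > /tmp/block.dat
before=$(cksum /tmp/block.dat | awk '{ print $1 }')

# Simulate corruption by changing one byte of the data, then recompute.
printf 'importent data\n' > /tmp/block.dat
after=$(cksum /tmp/block.dat | awk '{ print $1 }')

# A mismatch means the data has been changed, corrupted, or tampered with.
if [ "$before" != "$after" ]; then
    echo "checksum mismatch detected"
fi
```

Where cksum only tells you something changed, a mirrored ZFS configuration also has a second copy of the block to repair the damage from.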

ZFS checksums each block as it is returned from disk. If there's a disparity between the 256-bit checksum and the block, ZFS terminates the request and pulls the block from the other member of the mirror set, matching the checksums and delivering the valid data to the application. In a subsequent operation, the bad block seen on the first disk is replaced with a good copy of the data from the redundant copy, essentially providing a continuous file system check-and-repair operation. Performance is not negatively affected on newer systems, because performance is maintained by delegating a single core of a multicore CPU to perform the checksums.

Simplified Administration

ZFS greatly simplifies file system administration as compared to traditional file systems. The system administrator will find it easy to create and manage file systems without issuing multiple commands or editing configuration files. You'll find it easy to mount file systems, set disk quotas, enable file compression, and manage numerous file systems with a single command. All these tasks are described in this chapter.

ZFS Terms

Before I describe how to manage ZFS, Table 9.1 defines some terms that you will need to understand for this chapter.

Table 9.1 ZFS Terminology

Checksum: A 256-bit hash of the data in a file system block.

Clone: A file system with contents that are identical to the contents of a ZFS snapshot.

Dataset: A generic name for the following ZFS entities: clones, file systems, snapshots, and volumes. Each dataset is identified by a unique name in the ZFS namespace. Datasets are identified using the following format: <pool>/<path>[@<snapshot>], where <pool> is the name of the storage pool that contains the dataset, <path> is a slash-delimited pathname for the dataset object, and [<snapshot>] is an optional component that identifies a snapshot of a dataset.

Default file system: A file system that is created by default when using Solaris Live Upgrade to migrate from UFS to a ZFS root file system. The current set of default file systems is /, /usr, /opt, and /var.

ZFS file system: A ZFS dataset that is mounted within the standard system namespace and behaves like other traditional file systems.

Mirror: A virtual device, also called a RAID-1 device, that stores identical copies of data on two or more disks.
and manage numerous file systems with a single command.1 defines some terms that you will need to understand for this chapter. also called a RAID-1 device. The system administrator will find it easy to create and manage file systems without issuing multiple commands or editing configuration files. Datasets are identified using the following format: <pool>/<path>[@<snapshot> <pool> is the name of the storage pool that contains the dataset.

Table 9.1 ZFS Terminology (continued)

Pool: A logical group of block devices describing the layout and physical characteristics of the available storage. Space for datasets is allocated from a pool. Also called a storage pool or simply a pool.

RAID-Z: A virtual device that stores data and parity on multiple disks, similar to RAID-5.

Resilvering: The process of transferring data from one device to another. For example, when a mirror component is taken offline and then later is put back online, the data from the up-to-date mirror component is copied to the newly restored mirror component. The process is also called mirror resynchronization in traditional volume management products.

Shared file systems: The set of file systems that are shared between the alternate boot environment and the primary boot environment. This set includes file systems, such as /export, and the area reserved for swap. Shared file systems might also contain zone roots.

Snapshot: A read-only image of a file system or volume at a given point in time.

Virtual device: A logical device in a pool, which can be a physical device, a file, or a collection of devices.

Volume: A dataset used to emulate a physical device. For example, you can create a ZFS volume as a swap device.

ZFS Hardware and Software Requirements

The system must meet the following requirements before ZFS can be utilized:

. The machine must be a SPARC or x86/x64 system that is running the Solaris 10 6/06 release or newer.
. The minimum disk size that can be used in a ZFS environment is 128MB. The minimum amount of disk space for a storage pool is approximately 64MB.
. For good ZFS performance, at least 1GB or more of memory is recommended.
. Multiple controllers are recommended for a mirrored disk configuration, but this is not a requirement.
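Returning to the dataset naming format from Table 9.1, <pool>/<path>[@<snapshot>], the structure is easy to model. The following Python sketch (the parse_dataset helper is hypothetical, purely for illustration) splits a dataset name into its three parts:

```python
def parse_dataset(name):
    """Split a ZFS dataset name of the form <pool>/<path>[@<snapshot>]."""
    if "@" in name:
        name, snapshot = name.split("@", 1)
    else:
        snapshot = None                    # no snapshot component given
    pool, _, path = name.partition("/")
    return pool, path or None, snapshot

print(parse_dataset("pool1/data"))         # ('pool1', 'data', None)
print(parse_dataset("pool1/data@today"))   # ('pool1', 'data', 'today')
```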

ZFS RAID Configurations

ZFS supports the following RAID (Redundant Array of Inexpensive Disks) configurations:

. RAID-0: Data is distributed across one or more disks with no redundancy. If a single disk fails, all data is lost.
. RAID-1: Mirrored disks, where two or more disks store exactly the same data at the same time. Data is not lost as long as one mirror set survives.
. RAID-Z: A ZFS redundancy scheme using a copy-on-write policy. RAID-Z is similar to RAID-5, but RAID-Z eliminates a flaw in the RAID-5 scheme called the RAID-5 write hole. Using a dynamic stripe width, every block of data is its own RAID-Z stripe, so that every write is a full stripe write rather than writing over old data.

Creating a Basic ZFS File System

The easiest way to create a basic ZFS file system on a single disk is by using the zpool create command:

# zpool create pool1 c1t1d0<cr>

NOTE (Pool terminology): The terms storage pool, zpool, and pool are used interchangeably. All three terms refer to a logical group of block devices describing the layout and physical characteristics of the available storage in a ZFS file system.

In the previous example, I created a RAID-0 zpool named "pool1" on a 36GB disk named "c1t1d0." Notice that I did not specify a slice, so the entire 36GB disk is assigned to the zpool. If the disk has an existing file system, you receive the following error:

invalid vdev specification
use '-f' to override the following errors:
/dev/dsk/c1t1d0s0 contains a ufs filesystem.
/dev/dsk/c1t1d0s2 contains a ufs filesystem.

To force the system to overwrite the file system, type this:

# zpool create -f pool1 c1t1d0<cr>
#

The system returns to the prompt if successful.

When I issue the df -h command, I see that the following /pool1 file system is ready for data:

# df -h<cr>
Filesystem                      size   used  avail capacity  Mounted on
rpool/ROOT/s10s_u6wos_07b        33G   4.2G    27G    14%    /
/devices                          0K     0K     0K     0%    /devices
ctfs                              0K     0K     0K     0%    /system/contract
proc                              0K     0K     0K     0%    /proc
mnttab                            0K     0K     0K     0%    /etc/mnttab
swap                            985M   1.4M   983M     1%    /etc/svc/volatile
objfs                             0K     0K     0K     0%    /system/object
sharefs                           0K     0K     0K     0%    /etc/dfs/sharetab
fd                                0K     0K     0K     0%    /dev/fd
rpool/ROOT/s10s_u6wos_07b/var    33G    67M    27G     1%    /var
swap                            983M     0K   983M     0%    /tmp
swap                            984M    40K   983M     1%    /var/run
rpool/export                     33G    20K    27G     1%    /export
rpool/export/home                33G    18K    27G     1%    /export/home
rpool                            33G    94K    27G     1%    /rpool
pool1                            33G    18K    33G     1%    /pool1

The previous zpool create command created a zpool named "pool1" and a ZFS file system in that pool, also named "pool1." The /pool1 directory should be empty or, better yet, must not exist before the storage pool is created. ZFS creates this directory automatically when the pool is created, and the ZFS file system is mounted automatically after it is created. The /pool1 file system has 33GB available, the entire size of my disk (minus 3GB for overhead).

Now, I'll create another ZFS file system in the same zpool:

# zfs create pool1/data<cr>

I've just created a ZFS file system named /pool1/data in the pool1 zpool. The new file system is called a descendant of the pool1 storage pool; pool1 is its parent file system. Each of the file systems has access to all the space in the zpool. The pool1 pool is 33GB. A df -h command shows the following information:

<df output has been truncated>
pool1         33G    18K    33G     1%    /pool1
pool1/data    33G    18K    33G     1%    /pool1/data

Again, the /pool1/data file system has 33GB available, as does its parent. Now, I'll create a 1GB file in the /pool1/data file system:

# mkfile 1g /pool1/data/largefile<cr>

The df -h command displays the following storage information for each of the ZFS file systems:

<df output has been truncated>
pool1         33G    19K    32G     1%    /pool1
pool1/data    33G   925M    32G     3%    /pool1/data
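In the df output above, both file systems report less available space after the write. A toy Python model of that shared accounting (class and variable names invented; sizes tracked in MB for simplicity) shows why: all datasets draw from the same pool of free blocks.

```python
class Pool:
    """Toy model: every dataset draws from one shared reserve of free space."""
    def __init__(self, size_mb):
        self.size = size_mb
        self.used = 0

class Dataset:
    def __init__(self, pool):
        self.pool = pool
    def write(self, mb):
        self.pool.used += mb       # consuming space anywhere in the pool...
    def avail(self):
        return self.pool.size - self.pool.used

pool = Pool(33 * 1024)             # a 33GB pool, expressed in MB
root, data = Dataset(pool), Dataset(pool)
data.write(1024)                   # a 1GB file written into one dataset...
print(root.avail(), data.avail())  # ...shrinks avail for every dataset
```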

Notice how the available space has decreased for each file system.

The example I've shown is a quick and easy way to create a ZFS file system. However, you may want more control over the hierarchy of the file systems, which I'll describe later.

Renaming a ZFS File System

You can rename a ZFS file system using the zfs rename command. In the following example, the zfs rename command is used to rename the pool1/data file system to pool1/documents:

# zfs rename pool1/data pool1/documents<cr>

Listing ZFS File Systems

List all the active ZFS file systems and volumes on a machine using the zfs list command:

# zfs list<cr>

All the file systems and volumes on this particular system are displayed:

NAME                            USED   AVAIL  REFER  MOUNTPOINT
pool1                           106K   4.89G    18K  /pool1
pool2                           150K   4.89G    18K  none
pool2/data                       18K   4.89G    18K  /export/data
rpool                          4.44G   12.9G  35.5K  /rpool
rpool/ROOT                     3.72G   12.9G    18K  legacy
rpool/ROOT/s10x_u6wos_07b      3.72G   12.9G  3.38G  /
rpool/ROOT/s10x_u6wos_07b/var  67.9M   12.9G  67.9M  /var
rpool/dump                      788M   12.9G   788M  -
rpool/export                     39K   12.9G    21K  /export
rpool/export/home                18K   12.9G    18K  /export/home
rpool/swap                      512M   13.3G  59.9M  -

The information displayed includes the following:

. NAME: The name of the dataset.
. USED: The amount of space consumed by the dataset and all its descendents.
. AVAIL: The amount of space available to the dataset and all its children. This space is shared with all the datasets within that pool. The space can be limited by quotas and other datasets within that pool.
. REFER: The amount of data accessible by this dataset, which might or might not be shared with other datasets in the pool.
. MOUNTPOINT: The mount point used by this file system. If the value is legacy, the file system is mounted manually using the mount command.

In the following example, pool1 is the top-level file system, and other ZFS file systems are created under it:

pool1              33G    20K    33G     1%    /pool1
pool1/data         33G    19K    33G     1%    /pool1/data
pool1/data/app1    33G    18K    33G     1%    /pool1/data/app1
pool1/data/app2    33G    18K    33G     1%    /pool1/data/app2

To recursively list only the datasets in the pool2 storage pool, use the -r option followed by the pool name:

# zfs list -r pool2<cr>
NAME         USED  AVAIL  REFER  MOUNTPOINT
pool2        150K  4.89G    18K  none
pool2/data    18K  4.89G    18K  /export/data

Removing a ZFS File System

Use the zfs destroy command to remove a ZFS file system. In the following example, I'll use the zfs destroy command to remove the /pool1/data file system created earlier:

# zfs destroy pool1/data<cr>

You receive no confirmation prompt after the command is executed.

CAUTION (Destroying data): The zfs destroy and zpool destroy commands destroy data. Make certain that you are destroying the correct file system or storage pool. If you accidentally destroy the wrong file system or pool, you'll lose data. You can attempt to recover the pool using zpool import.

Destroying a file system can fail for the following reasons:

. The file system could be in use and busy.
. The file system has children. In other words, it is a parent file system.
. The ZFS file system has indirect dependents such as clones or snapshots associated with it.

When a file system is busy, you can forcibly remove it using the -f option. In the following example, I forcibly remove the pool1/data file system:

# zfs destroy -f pool1/data<cr>

CAUTION (The -f option): Use the -f option with caution, because it will unmount, unshare, and destroy active file systems, causing unexpected application behavior.
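The three failure reasons above can be modeled with a small Python sketch. This is a toy guard function, not the real zfs logic; all names are invented for illustration.

```python
def try_destroy(ds, busy, children, clones, force=False, recursive=False):
    """Mimic the checks: busy file systems need -f, children need -r,
    and indirect dependents (clones) block the destroy outright."""
    if busy.get(ds) and not force:
        return "cannot destroy: filesystem is busy"
    if children.get(ds) and not recursive:
        return "cannot destroy: filesystem has children"
    if clones.get(ds):
        return "cannot destroy: snapshot has dependent clones"
    return "destroyed"

busy, children, clones = {"pool1/data": True}, {}, {}
print(try_destroy("pool1/data", busy, children, clones))              # refused
print(try_destroy("pool1/data", busy, children, clones, force=True))  # destroyed
```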

At this point, the /pool1 and /pool1/data ZFS file systems that I created earlier have been removed.

NOTE (Object sets): ZFS supports hierarchically structured object sets—object sets within other object sets. A child dataset is dependent on the existence of its parent. A parent cannot be destroyed without first destroying all children.

For a ZFS file system with children, use the -r option to recursively destroy the parent file system named pool1/data and all its descendants:

# zfs destroy -r pool1/data<cr>

Use the -R option to destroy a file system and all its dependents. The -R option to the zfs destroy command overrides the parent/child restriction and automatically removes the parent and its children, but use extreme caution when using this option. You receive no confirmation prompt, and you could remove dependents that you did not know existed. In the following example, I'll remove the file system named pool1/data and all its dependents:

# zfs destroy -R pool1/data<cr>

You can view a dataset's dependencies by looking at the properties for that particular dataset. For example, the origin property for a ZFS clone displays a dependency between the clone and the snapshot. The zfs destroy command lists any dependencies, as shown in the example when I try to destroy the pool1/data@today snapshot:

# zfs destroy pool1/data@today<cr>
cannot destroy 'pool1/data@today': snapshot has dependent clones
use '-R' to destroy the following datasets:
pool1/clone

Removing a ZFS Storage Pool

Use the zpool destroy command to remove an entire storage pool and all the file systems it contains. Earlier in this chapter, I created a storage pool named pool1. I'll remove pool1 using the following command:

# cd /<cr>
# zpool destroy pool1<cr>

When I destroy the storage pool, everything in that pool is also destroyed. When you destroy a pool, ZFS marks that pool as destroyed, but nothing is actually erased. If you accidentally destroy a pool, you can attempt to recover it by using the zpool import command. This space will get used over time, so the amount of time that this destroyed pool remains available for recovery will vary.
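Because a parent cannot be destroyed before its children, a recursive destroy must work bottom-up. A minimal Python sketch of that ordering (the tree structure and names are hypothetical):

```python
def destroy_recursive(tree, name, out):
    """Destroy children before the parent, as a recursive destroy must."""
    for child in tree.get(name, []):
        destroy_recursive(tree, child, out)
    out.append(name)

# hypothetical dataset hierarchy
tree = {"pool1/data": ["pool1/data/app1", "pool1/data/app2"]}
order = []
destroy_recursive(tree, "pool1/data", order)
print(order)   # children first, parent last
```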

List your destroyed pools using the zpool import command with the -D option:

# zpool import -D<cr>

The system responds with this:

pool: pool1
id: 11755426293844032183
state: ONLINE (DESTROYED)
action: The pool can be imported using its name or numeric identifier.
config:

pool1       ONLINE
  c1t1d0    ONLINE

In the output produced from zpool import, you can identify the pool1 pool that was destroyed earlier. To recover the pool, issue the zpool import command again using the -D and -f options, and specify the name of the pool to be recovered:

# zpool import -Df pool1<cr>

The -f option forces the import of the pool, even if the pool has been destroyed. Now, list the pool:

# zpool list pool1<cr>
NAME    SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
pool1  33.8G  50.2M  33.7G   0%  ONLINE  -

The pool has been recovered, and all the data is accessible.

ZFS Components

The following are considered ZFS components:

. Disks
. Files
. Virtual devices

Follow these rules when naming ZFS components:

. Each component can contain only alphanumeric characters in addition to the following: underscore (_), hyphen (-), colon (:), and period (.).
. Empty components are not permitted.

. Pool names must begin with a letter, with the following additional restrictions: the beginning sequence c[0-9] is not allowed; a name that begins with "mirror," "raidz," or "spare" is not allowed, because these names are reserved; the name "log" is reserved and cannot be used; and pool names cannot begin with a percent sign (%).
. Dataset names must begin with an alphanumeric character and must not contain a percent sign (%).

Using Disks in a ZFS Storage Pool

The most basic element in a storage pool is a physical storage device, which can be either a disk or a slice on a disk. The only requirement is that the device must be at least 128MB in size. It is recommended that an entire disk be allocated to a storage pool. Although disk slices can be used in storage pools, it makes administration more difficult, and performance could be adversely affected. When using an entire disk for ZFS, there is no need to format the disk. ZFS formats the disk for you using an EFI disk label, and slice 0 encompasses the entire disk. For more information on disk slices and EFI disk labels, refer to Solaris 10 System Administration Exam Prep (Exam CX-310-200), Part I.

Using Files in a ZFS Storage Pool

You can use UFS files as virtual devices in your ZFS storage pool. Use this feature for testing purposes only, because any use of files relies on the underlying file system for consistency. If you create a ZFS pool backed by files on a UFS file system, you are relying on UFS to guarantee correctness and synchronous semantics and not fully utilizing the benefits of ZFS. In the following example, I'll create a ZFS pool on a file located in a UFS file system when I don't have any physical devices. I'll do this strictly for testing purposes. The example in Step By Step 9.1 creates a ZFS pool in a UFS file.

STEP BY STEP
9.1 Using a UFS File for a ZFS Storage Pool

1. Use the mkfile command to create an empty file in the /export/home file system. I'll use the -n option, which only "reserves" the space and does not actually allocate disk blocks to the file system until data is written to the file:

# mkfile -n 200m /export/home/zfsfile<cr>

2. Create a ZFS pool and file system named "tempzfs" on the UFS file:

# zpool create tempzfs /export/home/zfsfile<cr>

3. Verify the status of the new pool:

# zpool status -v tempzfs<cr>

The system displays the following information:

pool: tempzfs
state: ONLINE
scrub: none requested
config:

NAME                    STATE   READ WRITE CKSUM
tempzfs                 ONLINE     0     0     0
/export/home/zfsfile    ONLINE     0     0     0

errors: No known data errors

Mirrored Storage Pools

At least two disks are required for a mirrored storage pool. A two-way mirror consists of two disks, and a three-way mirror consists of three disks. It's recommended that each of these disks be connected to separate disk controllers. A storage pool can contain more than one mirror; when creating a mirrored pool, a separate top-level device is created. Use the following command to create a two-way mirror device:

# zpool create pool2 mirror c2t2d0 c2t3d0<cr>

This pool was created using two 5GB disks. The df -h command shows that the following file system has been created:

pool2    4.9G    1K    4.9G    1%    /pool2

RAID-Z Storage Pools

RAID-Z provides the fault tolerance of a mirrored storage pool, but it also provides single or double parity fault tolerance. Single parity is similar to RAID-5, and double-parity RAID-Z is similar to RAID-6.

Use the zpool create command to create a single RAID-Z (single-parity) device that consists of three disks:

# zpool create pool3 raidz c2t2d0 c2t3d0 c2t4d0<cr>

This RAID-Z pool is created from three 5GB disks. The df -h command shows the following information:

pool3    9.8G    24K    9.8G    1%    /pool3

You need at least two disks for a single-parity RAID-Z configuration and at least three disks for a double-parity RAID-Z configuration.

Like RAID-5, RAID-Z can handle a whole-disk failure, but it can also be more proactive and actually detect and correct any corruption it encounters. When ZFS reads a RAID-Z block, it compares it against its checksum. If the data disks didn't return the right answer, ZFS reads the parity and then does reconstruction to figure out which disk returned the bad data. It then repairs the damaged disk and returns good data to the application. ZFS also reports the incident through Solaris FMA (Fault Management Architecture) so that the system administrator knows that one of the disks is silently failing.

Create a double-parity RAID-Z configuration by using the raidz2 keyword:

# zpool create pool3 raidz2 c2t2d0 c2t3d0 c2t4d0<cr>

Displaying ZFS Storage Pool Information

You can display status information about the usage, I/O statistics, and health of your ZFS pools using the zpool list command. To display basic status information about all the storage pools installed on the system, type the following command:

# zpool list<cr>

The system displays this:

NAME    SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
pool1  9.94G   112K  9.94G   0%  ONLINE  -
pool2  9.94G   111K  9.94G   0%  ONLINE  -
rpool  17.9G  4.27G  13.6G  23%  ONLINE  -

To display information about a specific pool, specify the pool name:

# zpool list pool1<cr>
NAME    SIZE  USED  AVAIL  CAP  HEALTH  ALTROOT
pool1  9.94G  112K  9.94G   0%  ONLINE  -
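As an aside, the parity reconstruction described for RAID-Z can be illustrated with the classic XOR scheme that single-parity RAID uses. This is an illustration only; RAID-Z's actual on-disk layout (dynamic stripe width, copy-on-write) is more involved.

```python
from functools import reduce

def parity(blocks):
    """XOR parity across equal-sized blocks (RAID-5-style single parity)."""
    return bytes(reduce(lambda x, y: x ^ y, col) for col in zip(*blocks))

data = [b"AAAA", b"BBBB", b"CCCC"]      # three data disks
p = parity(data)                        # stored on the parity disk

# disk 1 "fails": rebuild its block from the survivors plus parity
rebuilt = parity([data[0], data[2], p])
print(rebuilt == data[1])               # True
```

XOR-ing the surviving blocks with the parity block cancels out everything except the missing block, which is exactly the reconstruction step the text describes.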

The information displayed includes the following:

. NAME: The pool's name.
. SIZE: The pool's total size. The size represents the total size of all top-level virtual devices.
. USED: The amount of space allocated by all the datasets.
. USED CAPACITY: The amount of data currently stored in the pool or device.
. AVAILABLE: The unallocated space in the pool.
. AVAILABLE CAPACITY: The amount of space available in the pool or device.
. CAPACITY (CAP): The space used, calculated as a percentage of total space.
. HEALTH: The pool's current health status.
. ALTROOT: The alternate root of the pool if an alternate exists.

Instruct the system to display only specific information about the pool:

# zpool list -o name,size pool1<cr>

The system displays only the name and the total size for pool1:

NAME    SIZE
pool1  9.94G

In addition, pools can be imported using an alternate root. An example is a recovery situation, where the mount point must not be interpreted in the context of the current root directory, but under some temporary directory where repairs can be made. Alternate root pools are also used with removable media, where users typically want a single file system and they want it mounted wherever they choose. An alternate root pool is created using the -R option, as shown in the example where I create a new pool named pool2 using /mnt as the alternate root path. zpool list shows the following information:

# zpool list<cr>
NAME    SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
pool2   195M   103K   195M   0%  ONLINE  /mnt
rpool  33.8G  5.30G  28.4G  15%  ONLINE  -

The following storage pool I/O statistics can also be displayed for each pool:

. READ OPERATIONS: The number of read I/O operations sent to the pool or device.
. WRITE OPERATIONS: The number of write I/O operations sent to the pool or device.
. READ BANDWIDTH: The bandwidth of all read operations (including metadata), expressed as units per second.
. WRITE BANDWIDTH: The bandwidth of all write operations, expressed as units per second.

Use the following command to list all the I/O statistics for each storage pool:

# zpool iostat<cr>

The system displays the following:

            capacity      operations     bandwidth
pool      used   avail   read   write   read   write
-----    -----   -----   ----   -----   ----   -----
pool1     112K   9.94G      0       0    240   2.23K
pool2     111K   9.94G      0       0      0     270
rpool    4.27G   13.6G      2       1   183K   16.9K
-----    -----   -----   ----   -----   ----   -----

All the statistics displayed are cumulative since the system was booted. It's best to specify an interval with the zpool command, where the first line of output is cumulative and the next lines represent activity since the previous stat. The following command displays current stats every 2 seconds until Ctrl+C is pressed:

# zpool iostat pool1 2<cr>

The system displays the following:

            capacity      operations     bandwidth
pool      used   avail   read   write   read   write
-----    -----   -----   ----   -----   ----   -----
pool1     112K   9.94G      0       0      0       0
pool1     112K   9.94G      0       0      0       0
pool1     112K   9.94G      0       0      0       0
pool1     112K   9.94G      0       0      0       0
<Ctrl+C>
#

Last, view the health of the storage pools and devices using the zpool status command. The health of the storage pool is determined by the health of the devices that make up the pool. Use the zpool status command to obtain the health information:

# zpool status<cr>

The system displays the following:

pool: pool1
state: ONLINE
scrub: none requested
config:

NAME      STATE   READ WRITE CKSUM
pool1     ONLINE     0     0     0
c2t2d0    ONLINE     0     0     0
c2t3d0    ONLINE     0     0     0

errors: No known data errors

The following two options are available with the zpool status command:

. The -v option displays verbose output. The default is to display verbose output.
. The -x option can be used to display only the status of pools that are exhibiting errors or are otherwise unavailable:

# zpool status -x<cr>
all pools are healthy

The health of the storage pool is determined by the health of all its top-level virtual devices. The health status of each device falls into one of the following states:

. ONLINE: The device is normal and in good working order. In some cases, it's possible for some transient errors to still occur. If all virtual devices are ONLINE, the storage pool is ONLINE.
. DEGRADED: The virtual device has experienced a failure, but the device can still function. This state is most common when a mirror or RAID-Z device has lost one or more constituent devices. The pool's fault tolerance might be compromised, because a subsequent fault in another device might be unrecoverable.
. FAULTED: The virtual device is inaccessible due to a total failure. ZFS is incapable of sending data to it or receiving data from it. If a top-level virtual device is FAULTED, the pool is also FAULTED and is inaccessible.
. OFFLINE: The administrator has taken the virtual device offline.
. UNAVAILABLE: The device or virtual device cannot be opened. In some cases, pools with UNAVAILABLE devices appear in DEGRADED mode. If a top-level virtual device is UNAVAILABLE, nothing in the pool can be accessed.
. REMOVED: The device was physically removed while the system was running. Device removal detection is hardware-dependent and might not be supported on all platforms.
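The way device states roll up into a pool state can be sketched as follows. This is a simplified Python model of a single mirror whose surviving replicas keep the pool usable; the real rules depend on the vdev layout, and the function name is invented.

```python
def pool_health(device_states):
    """Aggregate the states of a mirror's member devices into a pool state:
    a FAULTED member with no replicas faults the pool; a member that is
    DEGRADED, OFFLINE, or UNAVAIL degrades it; otherwise the pool is ONLINE."""
    if all(s == "FAULTED" for s in device_states):
        return "FAULTED"
    if any(s != "ONLINE" for s in device_states):
        return "DEGRADED"
    return "ONLINE"

print(pool_health(["ONLINE", "ONLINE"]))    # ONLINE
print(pool_health(["ONLINE", "UNAVAIL"]))   # DEGRADED: replicas still exist
print(pool_health(["FAULTED", "FAULTED"]))  # FAULTED: nothing left to read
```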

http://www. Sufficient replicas exist pool to continue functioning in a degraded state.sun. Adding Devices to a ZFS Storage Pool Add more space to a storage pool using the zpool add command.78G pool1/data 4. This link (http://www.00G 5.sun. It provides up-to-date information on the problem and describes the best recovery procedure.com/msg/ZFS-8000-2Q none requested NAME pool1 c2t2d0 c2t3d0 STATE DEGRADED ONLINE UNAVAIL READ WRITE CKSUM 0 0 0 0 0 0 0 0 0 cannot open errors: No known data errors Notice the link displayed in the output.96G 949M pool1/data 3.com/msg/ZFS-8000-2Q) points to an online article to visit for more information. The following example shows a storage pool named pool1 with a dataset named /pool1/data: # zfs list -r pool1<cr> NAME USED AVAIL pool1 3.96G 949M REFER 19K 3.78G REFER 19K 4.00G MOUNTPOINT /pool1 /pool1/data .96G MOUNTPOINT /pool1 /pool1/data Storage pool1 currently has a single 5GB disk (c2t2d0). Add another 5GB disk drive (c2t3d0) to the pool: # zpool add pool1 c2t3d0<cr> Another check of the storage pool shows that the size has been increased: # zfs list -r pool1<cr> NAME USED AVAIL pool1 4.00G 5.488 Chapter 9: Administering ZFS File Systems The following example displays the health status of a pool with a failed disk drive: # zpool pool: state: status: for the action: see: scrub: config: status -x<cr> pool1 DEGRADED One or more devices could not be opened. The additional space becomes available immediately to all datasets within the pool. Attach the missing device and online it using ‘zpool online’.

A check of the storage pool shows the status of the two disk drives:

# zpool status pool1<cr>
pool: pool1
state: ONLINE
scrub: none requested
config:

NAME      STATE   READ WRITE CKSUM
pool1     ONLINE     0     0     0
c2t2d0    ONLINE     0     0     0
c2t3d0    ONLINE     0     0     0

errors: No known data errors

Attaching and Detaching Devices in a Storage Pool

Add another device to a mirrored storage pool using the zpool attach command. The following example shows a two-way mirrored storage pool named pool2 with a dataset named /pool2/docs:

# zfs list -r pool2<cr>
NAME         USED  AVAIL  REFER  MOUNTPOINT
pool2        132K  4.89G    19K  /pool2
pool2/docs    18K  4.89G    18K  /pool2/docs

A check of the storage pool shows the mirror's status:

# zpool status pool2<cr>
pool: pool2
state: ONLINE
scrub: none requested
config:

NAME        STATE   READ WRITE CKSUM
pool2       ONLINE     0     0     0
  mirror    ONLINE     0     0     0
    c2t2d0  ONLINE     0     0     0
    c2t3d0  ONLINE     0     0     0

errors: No known data errors

To convert this pool to a three-way mirror, attach another 5GB disk (c2t4d0) to the pool:

# zpool attach pool2 c2t3d0 c2t4d0<cr>

A check of the storage pool shows the mirror's status:

# zpool status pool2<cr>
pool: pool2
state: ONLINE
scrub: resilver completed after 0h0m with 0 errors on Thu Dec 11 09:26:01 2008
config:

NAME        STATE   READ WRITE CKSUM
pool2       ONLINE     0     0     0
  mirror    ONLINE     0     0     0
    c2t2d0  ONLINE     0     0     0
    c2t3d0  ONLINE     0     0     0
    c2t4d0  ONLINE     0     0     0

errors: No known data errors

The three-way mirror is online, and resilvering is complete.

Converting a Nonredundant Pool to a Mirrored Pool

Use the zpool attach command to convert a nonredundant pool into a mirrored (redundant) storage pool. Step By Step 9.2 describes the process.

STEP BY STEP
9.2 Convert a Nonredundant Pool to a Mirrored Storage Pool

1. Create a nonredundant storage pool:

# zpool create mypool c2t2d0<cr>

Verify the pool:

# zpool status mypool<cr>

The system displays this:

pool: mypool
state: ONLINE
scrub: none requested
config:

NAME      STATE   READ WRITE CKSUM
mypool    ONLINE     0     0     0
c2t2d0    ONLINE     0     0     0

errors: No known data errors

2. Attach a second disk to the pool to create a mirrored (redundant) pool:

# zpool attach mypool c2t2d0 c2t3d0<cr>

Verify the creation of the redundant pool:

# zpool status mypool<cr>
pool: mypool
state: ONLINE
scrub: resilver completed after 0h0m with 0 errors on Thu Dec 11 09:37:23 2008
config:

NAME        STATE   READ WRITE CKSUM
mypool      ONLINE     0     0     0
  mirror    ONLINE     0     0     0
    c2t2d0  ONLINE     0     0     0
    c2t3d0  ONLINE     0     0     0

errors: No known data errors

Notice that the STATE is ONLINE and resilvering is complete.

Detaching a Device from a Mirrored Pool

Use the zpool detach command to detach a device from a mirrored storage pool. For example, in the previous section we created a redundant pool named mypool. The current status is as follows:

# zpool status mypool<cr>
pool: mypool
state: ONLINE
scrub: resilver completed after 0h0m with 0 errors on Thu Dec 11 09:37:23 2008
config:

NAME        STATE   READ WRITE CKSUM
mypool      ONLINE     0     0     0
  mirror    ONLINE     0     0     0
    c2t2d0  ONLINE     0     0     0
    c2t3d0  ONLINE     0     0     0

errors: No known data errors

To detach the device c2t3d0 and convert the mirror back to a nonredundant pool, issue the zpool detach command:

# zpool detach mypool c2t3d0<cr>

A check of the storage pool shows the status:

# zpool status mypool<cr>
pool: mypool
state: ONLINE
scrub: resilver completed after 0h0m with 0 errors on Thu Dec 11 09:37:23 2008
config:

NAME      STATE   READ WRITE CKSUM
mypool    ONLINE     0     0     0
c2t2d0    ONLINE     0     0     0

errors: No known data errors

Notice that the zpool status output shows that a resilvering operation was performed. The message refers to the previous resilver operation that was performed when the pool was originally mirrored; ZFS did not perform a resilvering operation when the c2t3d0 device was detached. The scrub message gets updated only when a ZFS scrub or resilvering operation completes. Because the detach operation did not perform a scrub, the old message still appears. That message remains until the next operation.

NOTE: A device cannot be detached from a nonredundant pool.

Taking Devices in a Storage Pool Offline and Online

To temporarily disconnect a device from a storage pool for maintenance purposes, ZFS allows a device to be taken offline using the zpool offline command. Taking a device offline is not the same as detaching a device, which was described earlier. Offlining a device is meant to be a temporary state, whereas detaching a device is a permanent state. In the following example, a redundant storage pool named mypool is set up on a server. A check of the status shows the following information about that pool:

# zpool status mypool<cr>
pool: mypool
state: ONLINE
scrub: resilver completed after 0h0m with 0 errors on Thu Dec 11 10:58:07 2008
config:

NAME        STATE   READ WRITE CKSUM

mypool      ONLINE     0     0     0
  mirror    ONLINE     0     0     0
    c2t2d0  ONLINE     0     0     0
    c2t3d0  ONLINE     0     0     0

errors: No known data errors

Take the c2t2d0 device offline using the following command:

# zpool offline mypool c2t2d0<cr>

The pool's status has changed, as displayed by the following zpool status command:

# zpool status mypool<cr>
pool: mypool
state: DEGRADED
status: One or more devices has been taken offline by the administrator. Sufficient replicas exist for the pool to continue functioning in a degraded state.
action: Online the device using 'zpool online' or replace the device with 'zpool replace'.
scrub: resilver completed after 0h0m with 0 errors on Thu Dec 11 10:58:07 2008
config:

NAME        STATE     READ WRITE CKSUM
mypool      DEGRADED     0     0     0
  mirror    DEGRADED     0     0     0
    c2t2d0  OFFLINE      0     0     0
    c2t3d0  ONLINE       0     0     0

errors: No known data errors

While the device is offline, data can still be written to the mypool storage pool. All the data gets written to the c2t3d0 device, and there is no redundancy. The offline state is persistent, and this device remains offline even after the system has been rebooted. To bring the c2t2d0 device back online, issue the following command:

# zpool online mypool c2t2d0<cr>

A device can be brought back online while the file system is active. When a device is brought back online, any information that was previously written to the storage pool is resynchronized to the newly available device.

NOTE (Offlining a device): You cannot use device onlining to replace a disk. If you offline a device, replace the drive, and try to bring it online, the device remains in a faulted state.

Chapter 9: Administering ZFS File Systems

ZFS History

The system administrator can view all the operations that have been performed on a ZFS pool by viewing the history. Use the zpool history command:

# zpool history pool2<cr>

The system displays all the history for that pool:

History for 'pool2':
2009-02-22.13:33:34 zpool create -R /mnt pool2 /export/home/zfsfile
2009-02-22.15:49:28 zpool attach pool2 /export/home/zfsfile /export/home/mirror
2009-02-22.15:50:29 zpool detach pool2 /export/home/mirror
2009-02-22.15:55:34 zpool scrub pool2
2009-02-22.15:56:24 zpool attach pool2 /export/home/zfsfile /export/home/mirror
2009-02-22.15:56:47 zpool detach pool2 /export/home/mirror
2009-02-22.15:59:13 zpool scrub pool2

Use the -l option to display the log records in long format:

# zpool history -l pool2<cr>
History for 'pool2':
2009-02-22.13:33:34 zpool create -R /mnt pool2 /export/home/zfsfile [user root on server:global]
2009-02-22.15:49:28 zpool attach pool2 /export/home/zfsfile /export/home/mirror [user root on server:global]
2009-02-22.15:50:29 zpool detach pool2 /export/home/mirror [user root on server:global]
2009-02-22.15:55:34 zpool scrub pool2 [user root on server:global]
2009-02-22.15:56:24 zpool attach pool2 /export/home/zfsfile /export/home/mirror [user root on server:global]
2009-02-22.15:56:47 zpool detach pool2 /export/home/mirror [user root on server:global]
2009-02-22.15:59:13 zpool scrub pool2 [user root on server:global]

The -i option displays internally logged ZFS events in addition to user-initiated events.

ZFS Properties

When you create ZFS file systems, a default set of properties controls the behavior of the file systems and volumes. These properties are divided into two types: native and user-defined. Native properties either export internal statistics or control ZFS file system behavior; native properties are either read-only or settable. User properties have no effect on ZFS file system behavior, but you can use them to annotate datasets in a way that is meaningful in your environment.
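Looking back at the zpool history output: each plain line has the fixed layout <timestamp> zpool <subcommand> <arguments>, so captured history is easy to post-process. Here is a minimal awk sketch that tallies subcommands, fed the sample log from this section via a heredoc; on a live system you would pipe `zpool history pool2` in instead, and the tally_history name is purely illustrative:

```shell
# Tally zpool subcommands ($3 on each line) from a captured history log.
tally_history() {
  awk '{ count[$3]++ } END { for (c in count) print c, count[c] }' | sort
}

tally_history <<'EOF'
2009-02-22.13:33:34 zpool create -R /mnt pool2 /export/home/zfsfile
2009-02-22.15:49:28 zpool attach pool2 /export/home/zfsfile /export/home/mirror
2009-02-22.15:50:29 zpool detach pool2 /export/home/mirror
2009-02-22.15:55:34 zpool scrub pool2
2009-02-22.15:56:24 zpool attach pool2 /export/home/zfsfile /export/home/mirror
2009-02-22.15:56:47 zpool detach pool2 /export/home/mirror
2009-02-22.15:59:13 zpool scrub pool2
EOF
```

For the sample log this prints one line per subcommand with its count (attach 2, create 1, detach 2, scrub 2).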

ZFS dataset properties are managed using the zfs set, zfs inherit, and zfs get commands. Use the zfs get command with the all keyword to view all the dataset properties for the storage pool named pool1:

# zfs get all pool1<cr>

The system displays the list of properties:

NAME   PROPERTY       VALUE                  SOURCE
pool1  type           filesystem             -
pool1  creation       Mon Dec  8 14:39 2008  -
pool1  used           136K                   -
pool1  available      9.78G                  -
pool1  referenced     20K                    -
pool1  compressratio  1.00x                  -
pool1  mounted        yes                    -
pool1  quota          none                   default
<the list has been truncated>

Table 9.2 lists some of the more common native read-only ZFS file system properties. These properties cannot be set, nor are they inherited. For a complete set of ZFS properties, see the ZFS man pages by typing man zfs at the command prompt.

Table 9.2  Native Read-Only ZFS Properties

Property Name   Description
available       The amount of space available to the dataset and all its children, assuming no other activity in the pool.
compressratio   A read-only property that identifies the compression ratio achieved for this dataset.
creation        The time when the dataset was created.
mounted         For file systems, indicates whether the file system is currently mounted. This property can be either yes or no.

Many settable properties are inherited from the parent and are propagated to its descendants. All inheritable properties have an associated source indicating how the property was obtained. The source can have the following values:

. local: A local source indicates that the property was explicitly set on the dataset by using the zfs set command.
. default: A value of default means that the property setting was not inherited or set locally. This source is a result of no ancestor's having the property as source local.
. inherited from <dataset-name>: <dataset-name> specifies where that property was inherited.
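Because zfs get prints fixed columns (NAME PROPERTY VALUE SOURCE), the effect of source filtering can be reproduced with awk when all you have is captured output. A small sketch; the filter_source helper is hypothetical, the sample rows echo the listings in this section, and since VALUE may contain spaces the SOURCE test uses the last field:

```shell
# Keep only rows of captured `zfs get` output whose SOURCE column (the
# last field) matches the requested source type, as `zfs get -s` would.
filter_source() {
  awk -v s="$1" 'NR > 1 && $NF == s { print $1, $2 }'
}

filter_source local <<'EOF'
NAME             PROPERTY     VALUE  SOURCE
pool1            compression  on     local
pool1/documents  quota        25G    local
pool1            quota        none   default
EOF
```

For the sample input this keeps only the two rows whose source is local.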

Table 9.2  Native Read-Only ZFS Properties (continued)

Property Name   Description
origin          For cloned file systems or volumes, the snapshot from which the clone was created.
type            The type of dataset: filesystem, volume, snapshot, or clone.
used            The amount of space consumed by this dataset and all its descendents.

Table 9.3 lists the settable ZFS properties. These are properties whose values can be both retrieved and set. These properties are set using the zfs set command, described later in this section. Most of these properties are inherited from the parent, with the exception of "quota" and "reservation."

Table 9.3  Settable ZFS Properties

Property Name  Default Value  Description
aclinherit     secure         Controls how ACL entries are inherited when files and directories are created.
aclmode        groupmask      Controls how an ACL is modified during chmod.
atime          on             Controls whether the access time for files is updated when they are read.
canmount       on             If this property is set to off, the file system cannot be mounted by using the zfs mount or zfs mount -a command.
checksum       on             Controls the checksum used to verify data integrity.
compression    off            Controls the compression algorithm used for this dataset.
devices        on             Controls whether device nodes can be opened on this file system.
exec           on             Controls whether processes can be executed from within this file system.
mountpoint     N/A            Controls the mount point used for this file system.
quota          none           Limits the amount of space a dataset and its descendents can consume. This property enforces a hard limit on the amount of space used.
readonly       off            Controls whether this dataset can be modified.
recordsize     128K           Specifies a suggested block size for files in the file system.
reservation    none           The minimum amount of space guaranteed to a dataset and its descendents.
setuid         on             Controls whether the set-UID bit is respected for the file system.
sharenfs       off            Controls whether the file system is shared via NFS, and what options are used. A file system with a sharenfs property of off is managed through traditional tools such as share, unshare, and dfstab.

Table 9.3  Settable ZFS Properties (continued)

Property Name  Default Value  Description
snapdir        hidden         Controls whether the .zfs directory is hidden or visible in the root of the file system, as discussed in the "ZFS Snapshots" section.
volsize        8Kbytes        For volumes, specifies the volume's logical size.
zoned          off            Controls whether the dataset is managed from a nonglobal zone.

In addition to the native properties that have been described, ZFS supports arbitrary user properties. The user properties have no effect on the ZFS behavior, but they can be used to annotate datasets with meaningful information. User properties are arbitrary strings that are always inherited and are never validated. The user properties must conform to the following rules:

. Contain a colon (:) character to distinguish them from native properties.
. Contain lowercase letters, numbers, and the following punctuation characters: :, +, ., _.
. The maximum user property name is 256 characters.
. The maximum user property value is 1,024 characters.

Typically, the property name is divided into the following two components, but this namespace is not enforced by ZFS:

<module>:<property>

Here are two examples of user properties:

dept:users=finance
backup:frequency=daily

Setting ZFS Properties

You can modify any of the ZFS settable properties using the zfs set command. The syntax is as follows:

zfs set <property>=<value>

Only one property can be set or modified during each zfs set invocation. The following command sets the file system quota to 25GB. This prevents the pool1/data file system from using all the space in the pool:

# zfs set quota=25G pool1/data<cr>
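The user property naming rules can be checked mechanically. A hedged sketch: valid_user_prop is a made-up helper, not a ZFS command, and it encodes exactly the rules listed above (a colon; only lowercase letters, digits, and : + . _ ; at most 256 characters):

```shell
# Validate a candidate ZFS user property name against the rules above.
valid_user_prop() {
  name="$1"
  case "$name" in
    *:*) ;;                      # must contain a colon
    *)   return 1 ;;
  esac
  [ "${#name}" -le 256 ] || return 1      # name length limit
  case "$name" in
    *[!a-z0-9:+._]*) return 1 ;;          # only the allowed characters
  esac
  return 0
}

valid_user_prop backup:frequency && echo "backup:frequency is valid"
valid_user_prop nocolon          || echo "nocolon is rejected"
```

The same check rejects uppercase names such as Dept:Users, since only lowercase letters are permitted.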

View a specific property using the following command:

# zfs get quota pool1/documents<cr>

The system displays the following:

NAME             PROPERTY  VALUE  SOURCE
pool1/documents  quota     25G    local

The following illustrates how properties are inherited. In this example, I have a storage pool named pool1 and a ZFS file system in that pool named pool1/documents. I'll start by setting the compression property on the storage pool named pool1:

# zfs set compression=on pool1<cr>

NOTE Compression: In addition to reducing space usage by two to three times, compression reduces the amount of I/O by two to three times. For this reason, enabling compression actually makes some workloads go faster.

I'll create a user-definable property named backup:frequency and set the value to daily:

# zfs set backup:frequency=daily pool1/documents<cr>

Now I'll use the -s option to list the properties by source type. The valid source types are local, default, inherited, temporary, and none. The following example uses the -s option to list only properties that were set locally on pool1/documents:

# zfs get -s local all pool1/documents<cr>

The system displays this:

NAME             PROPERTY          VALUE  SOURCE
pool1/documents  quota             25G    local
pool1/documents  backup:frequency  daily  local

Use the -r option to recursively display the compression property for all the children of the pool1 dataset:

# zfs get -r compression pool1<cr>

The system displays only the compression property:

NAME             PROPERTY     VALUE  SOURCE
pool1            compression  on     local
pool1/documents  compression  off    local

Notice that compression is set to on for pool1 but is set to off for pool1/documents, which was a previously created dataset. Now, I'll create two new file systems in pool1:

# zfs create pool1/bill<cr>
# zfs create pool1/data<cr>

Check the compression property for all the datasets in pool1:

# zfs get -r compression pool1<cr>

The system displays the following information. Notice that compression in pool1/bill and pool1/data was automatically set to on:

NAME             PROPERTY     VALUE  SOURCE
pool1            compression  on     local
pool1/bill       compression  on     inherited from pool1
pool1/data       compression  on     inherited from pool1
pool1/documents  compression  off    local

The compression property for both datasets was inherited from pool1. Therefore, you can use the zfs inherit command to clear a property setting for all the datasets in a pool. The use of the -r option clears the current property setting for all descendant datasets. When you issue the zfs inherit command, the compression property goes back to its default value for all the datasets:

# zfs inherit compression pool1<cr>
# zfs get -r compression pool1<cr>

The system displays the following:

NAME             PROPERTY     VALUE  SOURCE
pool1            compression  off    default
pool1/bill       compression  off    default
pool1/data       compression  off    default
pool1/documents  compression  off    local

Notice that compression=off for all the datasets in pool1. Setting the compression property again automatically sets it for all the datasets except pool1/documents:

# zfs set compression=on pool1<cr>
# zfs get -r compression pool1<cr>

The system displays the following:

NAME             PROPERTY     VALUE  SOURCE
pool1            compression  on     local
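The set/inherit behavior shown above boils down to a simple rule: take the nearest explicit setting walking from the dataset up toward the pool, otherwise use the default. A runnable stand-in, with no ZFS required; the effective function and the LOCALS table are illustrative, seeded with this example's settings:

```shell
# Resolve an effective property value the way `zfs get` reports it:
# the dataset's own local setting if present, otherwise the nearest
# ancestor's, otherwise the default. Local settings are "dataset=value"
# pairs in LOCALS.
LOCALS="pool1=on pool1/documents=off"
DEFAULT=off

effective() {
  ds="$1"
  while [ -n "$ds" ]; do
    for pair in $LOCALS; do
      [ "${pair%%=*}" = "$ds" ] && { echo "${pair#*=}"; return; }
    done
    case "$ds" in
      */*) ds="${ds%/*}" ;;   # climb to the parent dataset
      *)   ds="" ;;           # ran out of ancestors
    esac
  done
  echo "$DEFAULT"
}

effective pool1/bill       # → on (inherited from pool1)
effective pool1/documents  # → off (local setting)
```

Datasets in a pool with no local setting anywhere on the path fall through to the default, mirroring what zfs inherit produces.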

For example. as shown in the output from the following df -h command: pool2/data 4.500 Chapter 9: Administering ZFS File Systems pool1/bill pool1/data pool1/documents compression compression compression on on off inherited from pool1 inherited from pool1 local Mounting ZFS File Systems As you can see by now.9G 1% /pool2/data When the pool2/data file system was created. However. It is not necessary to manually mount a ZFS file system. the mountpoint property for the pool2/data file system can be displayed as follows: # zfs get mountpoint pool2/data<cr> NAME PROPERTY VALUE pool2/data mountpoint /pool2/data SOURCE default The ZFS file system is automatically mounted on /pool2/data. It is not necessary to make an entry in the /etc/vfstab file for a ZFS file system to be mounted at boot time. the mountpoint property was inherited. change the mount point on pool2/data to /export/data: # zfs set mountpoint=/export/data pool2/data<cr> Whenever the mountpoint property is changed. ZFS file systems are automatically mounted by SMF via the svc://system/filesystem/local service. For example.9G 18K 4. Use the zfs mount command to list all currently mounted file systems that are managed by ZFS: # zfs mount<cr> rpool/ROOT/s10x_u6wos_07b rpool/ROOT/s10x_u6wos_07b/var rpool/export rpool/export/home rpool pool2/data pool1 / /var /export /export/home /rpool /export/data /mnt ZFS uses the value of the mountpoint property when mounting a ZFS file system. as was required with traditional file systems. a ZFS file system is automatically mounted when it is created. Now the df -h command shows the following information: pool2/data 5128704 18 5128563 1% /export/data . the file system is automatically unmounted from the old mount point and remounted to the new mount point. a file system’s mount point can be changed simply by changing the mountpoint property. At boot time.

4G 0K 0K 0K 0K 924K 0K 0K 0K 70M 84K 28K avail capacity 13G 0K 0K 0K 0K 861M 0K 0K 0K 13G 861M 861M 22% 0% 0% 0% 0% 1% 0% 0% 0% 1% 1% 1% Mounted on / /devices /system/contract /proc /etc/mnttab /etc/svc/volatile /system/object /etc/dfs/sharetab /dev/fdrpool/ROOT/\ /var /tmp /var/run . the /pool1 file system does not get mounted. to unmount the /export/data file system. I typically don’t want users putting files directly into the top-level file system named /pool1. For example. I simply don’t mount /pool1 by setting the mountpoint property to none. ZFS creates the mount point directories as needed and removes them when they are no longer needed. The mountpoint property could be set to none. issue the following command: # zfs umount /export/data<cr> The file system can be mounted as follows: # zfs mount pool2/data<cr> Notice how the dataset name is specified (pool2/data) rather than the mountpoint property value /export/data. A listing of the system’s file systems shows the following: # df -h<cr> Filesystem size rpool/ROOT/s10x_u6wos_07b 18G /devices 0K ctfs 0K proc 0K mnttab 0K swap 862M objfs 0K sharefs 0K fd 0K s10x_u6wos_07b/var 18G swap 861M swap 861M used 3. When I create a ZFS file system using the following command: # zpool create pool1<cr> # zfs create pool1/data<cr> Two file systems are created: /pool1 and /pool1/data. preventing the file system from being mounted: # zfs set mountpoint=none pool2<cr> Now. /pool2 does not show up when the df -h command is executed. This can be useful for the following reason. Mounted ZFS file systems can be unmounted manually using the zfs umount command.501 Mounting ZFS File Systems Notice how I was able to change the mount point to /export/data without creating the /export/data directory. Therefore. With the mountpoint property set to none.

9G 18K avail capacity 4. . ZFS will not automatically mount and manage this file system. /pool1 is not mounted and /pool1/data is mounted: # df -h<cr> Filesystem size used <output has been truncated> pool1/data 4.ro pool2/data<cr> To temporarily change a property on a file system that is currently mounted.3 describes how to set up a ZFS file system using a legacy mount point.9G 1% Mounted on /pool1/data ZFS mount properties can be changed temporarily. If you set the file system’s mountpoint property to legacy. Step By Step 9. Display the readonly property using the following command: # zfs get readonly pool2/data<cr> The readonly value is displayed: NAME PROPERTY VALUE pool2/data readonly on SOURCE temporary Legacy Mount Points File systems can also be managed through the legacy mount command and the /etc/vfstab file. In the following example. Temporary properties revert to their original settings when the file system is unmounted. The file system must be managed using the legacy commands mount and umount and the /etc/vfstab file. so /pool1/data also was set to none: # zfs get -r mountpoint NAME PROPERTY pool1 mountpoint pool1/data mountpoint pool1<cr> VALUE none none SOURCE local inherited from pool1 Therefore.502 Chapter 9: Administering ZFS File Systems rpool/export rpool/export/home rpool 18G 18G 18G 21K 18K 35K 13G 13G 13G 1% 1% 1% /export /export/home /rpool The descendants of pool1 inherited the mountpoint property. I’ll change the pool1/data mountpoint property to /pool1/data: # zfs set mountpoint=/pool1/data pool1/data<cr> Now. you must use the special remount option. the readonly property is temporarily changed to on for a file system that is currently mounted: # zfs mount -o remount.
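The mount-point rule illustrated earlier (a dataset mounts at / plus its dataset name unless an explicit mountpoint on it or on an ancestor substitutes a new prefix) can be sketched without ZFS. mountpoint_for and the two variables are hypothetical stand-ins, seeded with the pool2/data example:

```shell
# Compute where a dataset would be mounted: an explicit mountpoint on a
# dataset replaces that dataset's prefix for itself and its descendants;
# otherwise the mount point is "/" plus the dataset name. ("none" and
# "legacy" suppress ZFS-managed mounting and are not modeled here.)
MOUNTPOINT_DS="pool2/data"        # dataset with an explicit mountpoint
MOUNTPOINT_VAL="/export/data"     # its mountpoint property

mountpoint_for() {
  ds="$1"
  case "$ds" in
    "$MOUNTPOINT_DS")   echo "$MOUNTPOINT_VAL" ;;
    "$MOUNTPOINT_DS"/*) echo "$MOUNTPOINT_VAL/${ds#"$MOUNTPOINT_DS"/}" ;;
    *)                  echo "/$ds" ;;
  esac
}

mountpoint_for pool2/data        # → /export/data
mountpoint_for pool2/data/docs   # → /export/data/docs
mountpoint_for pool1/data        # → /pool1/data
```

This mirrors why changing pool2/data's mountpoint moves its children too, while pool1/data keeps the default /pool1/data.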

STEP BY STEP 9.3  Set up a Legacy Mount Point for a ZFS File System

1. Find an unused disk that is available for use in a ZFS storage pool.

a. Use the format command to find all the available disks on your system:

# format<cr>
Searching for disks...done
AVAILABLE DISK SELECTIONS:
0. c0d0 <DEFAULT cyl 2346 alt 2 hd 255 sec 63>
   /pci@0,0/pci-ide@1,1/ide@0/cmdk@0,0
1. c0d1 <DEFAULT cyl 2557 alt 2 hd 128 sec 32>
   /pci@0,0/pci-ide@1,1/ide@0/cmdk@1,0
2. c1d1 <DEFAULT cyl 2347 alt 2 hd 255 sec 63>
   /pci@0,0/pci-ide@1,1/ide@1/cmdk@1,0
3. c2t2d0 <ATA-VBOX HARDDISK-1.0-5.00GB>
   /pci@0,0/pci8086,2829@d/disk@2,0
4. c2t3d0 <ATA-VBOX HARDDISK-1.0-5.00GB>
   /pci@0,0/pci8086,2829@d/disk@3,0
5. c2t4d0 <ATA-VBOX HARDDISK-1.0-5.00GB>
   /pci@0,0/pci8086,2829@d/disk@4,0
6. c2t5d0 <ATA-VBOX HARDDISK-1.0-5.00GB>
   /pci@0,0/pci8086,2829@d/disk@5,0
Specify disk (enter its number):

All the available disks are listed.

b. Check which disks ZFS is using:

# zpool status<cr>
pool: pool2
state: ONLINE
scrub: none requested
config:

NAME       STATE   READ WRITE CKSUM
pool2      ONLINE     0     0     0
 mirror    ONLINE     0     0     0
  c2t4d0   ONLINE     0     0     0
  c2t5d0   ONLINE     0     0     0

errors: No known data errors

pool: rpool
state: ONLINE
scrub: none requested
config:

NAME     STATE   READ WRITE CKSUM
rpool    ONLINE     0     0     0
 c0d0s0  ONLINE     0     0     0

errors: No known data errors

In the output, notice that c2t4d0 and c2t5d0 are in use for the pool2 mirror and that c0d0s0 is in use for rpool.

c. Make sure that none of the disks are being used for traditional file systems, SVM, or Veritas volumes by issuing the df -h command and checking for mounted slices.

2. Create a ZFS pool and file system on that disk. After verifying that the disk was not being used, I chose c2t2d0:

# zpool create pool1 c2t2d0<cr>

Verify that /pool1 is mounted by issuing the df -h command.

3. Change the mountpoint property to legacy:

# zfs set mountpoint=legacy pool1<cr>

The df -h command verifies that the /pool1 file system is no longer mounted.

4. Create a directory for the mount point:

# mkdir /data<cr>

5. Mount the ZFS file system:

# mount -F zfs pool1 /data<cr>

Use the df -h command to verify that the file system is mounted as /data.

6. To automatically mount the ZFS file system at bootup, make the following entry in the /etc/vfstab file:

pool1  -  /data  zfs  -  yes  -

Legacy mount points must be managed through legacy tools. Any attempt to use ZFS tools will result in an error. Any mount point properties are set explicitly using the mount -o command and by specifying the required mount options.

Sharing ZFS File Systems

ZFS can automatically share file systems as an NFS resource by setting the sharenfs property to on. Using this method, ZFS file systems do not need to be shared using the /etc/dfs/dfstab file or the share command. The sharenfs property is a comma-separated
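A vfstab entry has seven whitespace-separated fields (device to mount, device to fsck, mount point, FS type, fsck pass, mount at boot, mount options). A small awk sketch that labels the fields that matter for a legacy ZFS mount; the sample line is the pool1 entry written out in full seven-field form, and describe_vfstab is just an illustrative name:

```shell
# Print the vfstab fields relevant to a legacy-mounted ZFS file system.
describe_vfstab() {
  awk '{ printf "device=%s mountpoint=%s fstype=%s atboot=%s\n", $1, $3, $4, $6 }'
}

echo "pool1 - /data zfs - yes -" | describe_vfstab
# → device=pool1 mountpoint=/data fstype=zfs atboot=yes
```

The "-" placeholders fill the fsck-device, fsck-pass, and options fields, which do not apply to ZFS.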

list of options that are passed to the share command. The default is to set all ZFS file systems as unshared. When the sharenfs property is set to off, the file system is not managed by ZFS and can be shared using traditional methods, such as the /etc/dfs/dfstab file.

For example, create a new file system:

# zpool create pool2 c2t3d0<cr>

Turn on sharenfs for pool2:

# zfs set sharenfs=on pool2<cr>

Create a new file system under pool2:

# zfs create pool2/data<cr>

List the sharenfs property for pool2 and its descendants:

# zfs get -r sharenfs pool2<cr>

The sharenfs property is inherited, as shown in the output:

NAME        PROPERTY  VALUE  SOURCE
pool2       sharenfs  on     local
pool2/data  sharenfs  on     inherited from pool2

File systems are automatically shared on creation if their inherited property is not off, and all ZFS file systems whose sharenfs property is not off are shared during boot.

Share a file system using the zfs set command:

# zfs set sharenfs=on pool2/data<cr>

Issue the share command, and you'll see that the file system is now shared:

# share<cr>

The system displays all active shares:

-  /pool2/data  rw  ""

File systems are initially shared writeable. To set them as readonly, change the sharenfs property to readonly:

# zfs set sharenfs=ro pool2/data<cr>

The share command shows the following active shares:

# share<cr>
-  /pool2       rw          ""
-  /pool2/data  sec=sys,ro  ""
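Since sharenfs is handed to share as an option list, the correspondence between property values and share invocations can be sketched directly. This is illustrative plumbing only; share_cmd is a made-up helper that prints the equivalent command rather than running it:

```shell
# Build the share command line implied by a sharenfs value:
# "on" shares with default options, "off" does not share, and any
# other value is passed through as share options via -o.
share_cmd() {
  dataset_mnt="$1"; sharenfs="$2"
  case "$sharenfs" in
    off) echo "not shared" ;;
    on)  echo "share -F nfs $dataset_mnt" ;;
    *)   echo "share -F nfs -o $sharenfs $dataset_mnt" ;;
  esac
}

share_cmd /pool2/data on   # → share -F nfs /pool2/data
share_cmd /pool2/data ro   # → share -F nfs -o ro /pool2/data
share_cmd /pool2/data off  # → not shared
```

Comma-separated values such as ro,anon=0 pass straight through the -o branch, matching how the property is described above.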

ZFS file systems can be unshared using the zfs unshare command:

# zfs unshare pool2/data<cr>

The command unshares the /pool2/data file system. If the sharenfs property is off, ZFS does not attempt to share or unshare the file system at any time. This setting enables you to administer the NFS resource through traditional means such as the /etc/dfs/dfstab file. For more information on administering NFS, refer to Chapter 2, "Virtual File Systems, Swap Space, and Core Dumps."

ZFS Web-Based Management GUI

Throughout this chapter I've described how to manage ZFS from the command line. If you prefer a GUI interface, you can use the ZFS web-based interface to manage ZFS. Use this GUI to perform the following tasks:

. Create a new storage pool
. Add capacity to an existing pool
. Move (export) a storage pool to another system
. Import a previously exported storage pool to make it available on another system
. View information about storage pools
. Create a file system
. Create a volume
. Take a snapshot of a file system or volume
. Roll back a file system to a previous snapshot

You first need to start the SMC web server by executing the following command:

# /usr/sbin/smcwebserver start<cr>

You can set the server to start automatically at bootup by enabling the SMF service:

# /usr/sbin/smcwebserver enable<cr>

Access the Administration console by opening a web browser and entering the following URL:

https://localhost:6789/zfs

The Java Web Console login screen appears, as shown in Figure 9.1.

At the Java Web Console screen, enter the administrator login and password and then click the Log In button to proceed. The ZFS Administration window appears, as shown in Figure 9.2.

FIGURE 9.1  Web Console login screen.

FIGURE 9.2  ZFS administration window.

ZFS Snapshots

A ZFS snapshot is a read-only copy of a ZFS file system. As you'll see, snapshots are a great tool for backing up live file systems. ZFS snapshots provide the following features:

. Snapshots are created almost instantly, and a snapshot initially consumes no space within the pool.
. The snapshot does not use a separate backing store. The snapshot simply references the data in the file system from which it was created, and it consumes space from the same storage pool as that file system. As the file system from which the snapshot was created changes, the snapshot grows and consumes space in the storage pool.
. The number of snapshots that can be taken is virtually unlimited. The theoretical maximum is 2^64.
. The snapshot persists across reboots.
. Any snapshot can be used to generate a full backup, and any pair of snapshots can be used to generate an incremental backup.

Creating a ZFS Snapshot

Create a snapshot using the zfs snapshot command followed by the name of the snapshot. The snapshot name follows this format:

<filesystem>@<snapname> or <volume>@<snapname>

For example, to take a snapshot of the pool2/data file system, the name of the snapshot could be pool2/data@tues_snapshot. Issue the following command to create the snapshot of the /pool2/data file system:

# zfs snapshot pool2/data@tues_snapshot<cr>
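The <filesystem>@<snapname> convention is easy to check mechanically. A sketch; snap_parts is a hypothetical helper that splits a well-formed name and rejects anything without exactly one @:

```shell
# Split a snapshot name into its dataset and snapshot parts, rejecting
# names with zero or multiple "@" characters or an empty part.
snap_parts() {
  name="$1"
  case "$name" in
    *@*@*|@*|*@) return 1 ;;   # malformed: extra/leading/trailing @
    *@*) echo "dataset=${name%@*} snapshot=${name#*@}" ;;
    *)   return 1 ;;           # no @ at all
  esac
}

snap_parts pool2/data@tues_snapshot
# → dataset=pool2/data snapshot=tues_snapshot
```

Plain dataset names such as pool2/data fail the check, which is the distinction the naming format exists to enforce.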

Listing ZFS Snapshots

After creating the snapshot, list all the snapshots on the system by issuing the following command:

# zfs list -t snapshot<cr>
NAME                      USED  AVAIL  REFER  MOUNTPOINT
pool2/data@tues_snapshot  0     -      22K    -

The snapshot is stored in the /pool2/data file system, but you can't see it because the snapdir property is set to hidden. Change that property to visible:

# zfs set snapdir=visible pool2/data<cr>

Now, when you list the contents of the /pool2/data file system, you see the snapshot directory named .zfs:

# ls -la /pool2/data<cr>
total 15
drwxr-xr-x  6 root  root  5 Dec 10 13:01 .
drwxr-xr-x  3 root  root  3 Dec  9 20:14 ..
dr-xr-xr-x  3 root  root  3 Dec  9 20:14 .zfs
drwxr-xr-x  2 root  root  2 Dec 10 12:22 dir1

Change into the snapshot directory:

# cd /pool2/data/.zfs/snapshot/tues_snapshot<cr>

Issue the ls -l command. You see a read-only copy of the /pool2/data file system:

# ls -l<cr>
total 13
drwxr-xr-x  2 root  root  2 Dec 10 12:22 dir1
drwxr-xr-x  2 root  root  2 Dec 10 12:22 dir2
drwxr-xr-x  2 root  root  2 Dec 10 12:22 dir3
-rw-r--r--  1 root  root  0 Dec 10 12:22 foo
-rw-r--r--  1 root  root  0 Dec 10 12:22 foo1
-rw-r--r--  1 root  root  0 Dec 10 12:22 foo2
-rw-r--r--  1 root  root  0 Dec 10 12:22 foo3

This is an exact duplicate of the /pool2/data file system, as it looked when the snapshot was taken earlier. Because it's a read-only snapshot, you can copy data from this directory, but you cannot modify it. As data is added to and changed in the /pool2/data file system, this snapshot does not change or update.

Saving and Restoring a ZFS Snapshot

A snapshot can be saved to tape or to a disk on the local system or a remote system. Use the zfs send command to save the snapshot to tape:

# zfs send pool2/data@tues_snapshot > /dev/rmt/0<cr>

To retrieve the files from tape, use the zfs recv command:

# zfs recv pool2/data@tues_snapshot < /dev/rmt/0<cr>

This restores the snapshot to the storage pool it came from.

Rather than saving the snapshot to tape, you can save the snapshot to disk on a remote system:

# zfs send pool2/data@tues_snapshot | ssh host2 zfs recv newpool/data<cr>

The snapshot is sent to the remote host named "host2" and is saved in the /newpool/data file system.

Compress a ZFS snapshot stream using the following command:

# zfs send pool2/data@tues_snapshot | gzip > backupfile.gz<cr>

Now the backupfile.gz file can be sent via FTP to another system for a remote backup.

Destroying a ZFS Snapshot

To remove the snapshot from the system, use the zfs destroy command:

# zfs destroy pool2/data@tues_snapshot<cr>

NOTE Destruction: A dataset cannot be destroyed if snapshots of the dataset exist. In addition, if clones have been created from a snapshot, they must be destroyed before the snapshot can be destroyed.

Renaming a ZFS Snapshot

You can rename a snapshot within the pool and the dataset from which it came using the zfs rename command:

# zfs rename pool2/data@tues_snapshot pool2/data@backup<cr>

List the snapshots on the system to verify the name change:

# zfs list -t snapshot<cr>
NAME               USED  AVAIL  REFER  MOUNTPOINT
pool2/data@backup  0     -      22K    -
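The send-through-gzip pipeline is ordinary stream plumbing, so the compress-and-restore round trip can be rehearsed with any byte stream. A runnable stand-in that substitutes a plain file for the zfs send stream (all paths are temporary scratch names):

```shell
# Round-trip a stream through gzip the way a snapshot stream would
# travel: producer | gzip > file, then gunzip < file | consumer.
tmpdir=$(mktemp -d)
printf 'snapshot stream stand-in\n' > "$tmpdir/stream"

gzip < "$tmpdir/stream" > "$tmpdir/backupfile.gz"      # like: zfs send ... | gzip
gunzip < "$tmpdir/backupfile.gz" > "$tmpdir/restored"  # like: gunzip | zfs recv ...

cmp -s "$tmpdir/stream" "$tmpdir/restored" && echo "round trip OK"
rm -rf "$tmpdir"
```

Because gzip is a lossless stream filter, the restored bytes compare equal to the original, which is exactly the property the snapshot backup relies on.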

Rolling Back a ZFS Snapshot

Roll back a ZFS snapshot to discard all changes made to a file system since a specific snapshot was created. Using the zfs rollback command, the file system reverts to the state at the time the snapshot was taken. You can only revert a file system to the most recent snapshot. Step By Step 9.4 describes how to revert the /pool2/data file system to the most recent snapshot.

STEP BY STEP 9.4  Roll Back a Snapshot and ZFS File System

In this exercise, we'll use the zfs rollback command to revert the /pool2/data file system to the most recent snapshot.

1. List the snapshots currently available on the system:

# zfs list -t snapshot<cr>
NAME                      USED  AVAIL  REFER  MOUNTPOINT
pool1/docs@tues_snapshot  0     -      18K    -
pool2/data@backup         0     -      22K    -
pool2/data@tues_snapshot  0     -      22K    -
pool2/data@weds_snapshot  0     -      22K    -

Four snapshots are listed.

2. List the contents of the /pool2/data file system:

# ls -la /pool2/data<cr>
total 12
drwxr-xr-x  5 root  root  4 Dec 10 14:31 .
drwxr-xr-x  3 root  root  3 Dec  9 20:14 ..
dr-xr-xr-x  3 root  root  3 Dec  9 20:14 .zfs
drwxr-xr-x  2 root  root  2 Dec 10 12:22 dir1
drwxr-xr-x  2 root  root  2 Dec 10 12:22 dir2

3. Roll back the /pool2/data file system to the tues_snapshot:

# zfs rollback pool2/data@tues_snapshot<cr>
cannot rollback to 'pool2/data@tues_snapshot': more recent snapshots exist
use '-r' to force deletion of the following snapshots: pool2/data@weds_snapshot

The error indicates that there is a more recent snapshot named weds_snapshot. To use the older tues_snapshot, you need to force ZFS to use the tues_snapshot and remove the weds_snapshot. You do this using the -r option:

# zfs rollback -r pool2/data@tues_snapshot<cr>

4. The zfs list command shows that the weds_snapshot was removed:

# zfs list -t snapshot<cr>
NAME                      USED  AVAIL  REFER  MOUNTPOINT
pool1/docs@tues_snapshot  0     -      18K    -
pool2/data@backup         0     -      22K    -
pool2/data@tues_snapshot  0     -      22K    -

5. List the contents of the /pool2/data file system, and you'll see that the file system has changed:

# ls -la /pool2/data<cr>
total 15
drwxr-xr-x  6 root  root  5 Dec 10 13:01 .
drwxr-xr-x  3 root  root  3 Dec  9 20:14 ..
dr-xr-xr-x  3 root  root  3 Dec  9 20:14 .zfs
drwxr-xr-x  2 root  root  2 Dec 10 12:22 dir1
drwxr-xr-x  2 root  root  2 Dec 10 12:22 dir2
drwxr-xr-x  2 root  root  2 Dec 10 12:22 dir3

The dir3 directory, which was missing, has been restored.

ZFS Clones

A snapshot is a read-only point-in-time copy of a file system, and a clone is a writable copy of a snapshot. Clones provide an extremely space-efficient way to store many copies of mostly shared data such as workspaces, software installations, and diskless clients. A clone is related to the snapshot from which it originated; after a clone is created, the snapshot from which it originated cannot be deleted unless the clone is deleted first.

The zfs clone command is used to specify the snapshot from which to create the clone. In the following example, a clone is created from the snapshot named pool2/data@tues_snapshot:

# zfs clone pool2/data@tues_snapshot pool2/docs<cr>

The zfs list command shows that a new ZFS file system named /pool2/docs has been created:

# zfs list<cr>
NAME                      USED  AVAIL  REFER  MOUNTPOINT
pool1                     133K  4.89G  19K    /pool1
pool1/docs                18K   4.89G  18K    /pool1/docs
pool1/docs@tues_snapshot  0     -      18K    -
pool2                     168K  4.89G  21K    /pool2
pool2/data                22K   4.89G  22K    /pool2/data
pool2/data@backup         0     -      22K    -

pool2/data@tues_snapshot  0     -      22K    -
pool2/docs                0     4.89G  22K    /pool2/docs

The contents are exactly the same as /pool2/data:

# ls -la /pool2/docs<cr>
total 15
drwxr-xr-x  5 root  root  5 Dec 10 13:01 .
drwxr-xr-x  4 root  root  4 Dec 10 14:46 ..
drwxr-xr-x  2 root  root  2 Dec 10 12:22 dir1
drwxr-xr-x  2 root  root  2 Dec 10 12:22 dir2
drwxr-xr-x  2 root  root  2 Dec 10 12:22 dir3

The clone must be created in the same storage pool that the snapshot is in. When you try to create the clone outside the pool2 storage pool, the following error is reported:

# zfs clone pool2/data@tues_snapshot pool1/data1<cr>
cannot create 'pool1/data1': source and target pools differ

Destroying a ZFS Clone

Destroy a ZFS cloned file system just like you would destroy any other ZFS file system, by using the zfs destroy command:

# zfs destroy pool2/docs<cr>

Clones must be destroyed before the parent snapshot can be destroyed. In the following example, I'll try to destroy the tues_snapshot before I destroy the file system that was cloned from that snapshot:

# zfs destroy pool2/data@tues_snapshot<cr>
cannot destroy 'pool2/data@tues_snapshot': snapshot has dependent clones
use '-R' to destroy the following datasets: pool2/docs

Replacing a ZFS File System with a ZFS Clone

An active ZFS file system can be replaced by a clone of that file system using the zfs promote command. This feature makes it possible to destroy the "original" file system (the file system that the clone was originally created from). Without clone promotion, you cannot destroy the "original" file system of an active clone. In the preceding section, I created a clone named /pool2/docs. This clone was created from a snapshot of the /pool2/data file system. To replace the /pool2/data file system with the clone named /pool2/docs, follow the steps described in Step By Step 9.5.

STEP BY STEP 9.5  Replace a ZFS File System with a ZFS Clone

In this exercise, the /pool2/data file system will be replaced by its clone, /pool2/docs.

1. Create a snapshot of the /pool2/data file system:

# zfs snapshot pool2/data@tues_snapshot<cr>

2. Create a clone of the snapshot:

# zfs clone pool2/data@tues_snapshot pool2/docs<cr>

3. Promote the cloned file system:

# zfs promote pool2/docs<cr>

4. Rename the /pool2/data file system:

# zfs rename pool2/data pool2/data_old<cr>

5. Rename the cloned file system:

# zfs rename pool2/docs pool2/data<cr>

6. Remove the original file system:

# zfs destroy pool2/data_old<cr>

zpool Scrub

Cheap disks can fail, so ZFS provides disk scrubbing. Like ECC memory scrubbing, the idea is to read all data to detect latent errors while they're still correctable. The simplest way to check your data integrity is to initiate an explicit scrubbing of all data within the pool. A scrub traverses the entire storage pool to read every copy of every block, validate it against its 256-bit checksum, and repair it if necessary. This operation traverses all the data in the pool once and verifies that all blocks can be read. Scrubbing proceeds as fast as the devices allow, although the priority of any I/O remains below that of normal operations. This operation might negatively impact performance, but the file system should remain usable and nearly as responsive while the scrubbing occurs.

To initiate an explicit scrub, use the zpool scrub command:

# zpool scrub pool1<cr>

You can stop a scrub that is in progress by using the -s option:

# zpool scrub -s pool1<cr>

Replacing Devices in a Storage Pool

If a disk in a storage pool fails and needs to be replaced, swap out the disk and use the zpool replace command to replace the disk within ZFS. The steps for replacing a failed disk in a ZFS pool are as follows:

1. Offline the disk using the zpool offline command.
2. Remove the disk to be replaced.
3. Insert the replacement disk.
4. Run the zpool replace command.

In the following example, a zpool status shows that mypool is in a DEGRADED state:

# zpool status -x mypool<cr>
  pool: mypool
 state: DEGRADED
status: One or more devices could not be opened. Sufficient replicas exist for
        the pool to continue functioning in a degraded state.
action: Attach the missing device and online it using ‘zpool online’.
   see: http://www.sun.com/msg/ZFS-8000-2Q
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        mypool      DEGRADED     0     0     0
          mirror    DEGRADED     0     0     0
            c2t2d0  UNAVAIL      0     0     0  cannot open
            c2t3d0  ONLINE       0     0     0

errors: No known data errors

Notice in the output that the storage pool is a mirror but is in a DEGRADED state. This means that the virtual device has experienced a failure but still can function; the mirror continues to operate. The zpool status output shows that c2t2d0 is in an UNAVAIL state, which means that the device cannot be opened. The physical disk is either disconnected or has failed. Step By Step 9.6 describes the process of replacing a failed disk in a mirrored storage pool with another disk.
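The pool and device states shown above can also be checked from a script. Here is a small sketch; the pool_state helper name and the saved sample text are illustrative additions, not part of the book's examples:

```shell
#!/bin/sh
# Extract the overall pool state from saved `zpool status` output.
pool_state() {
  awk -F': *' '$1 ~ /state$/ { print $2; exit }'
}

# Sample text mimicking the book's zpool status example.
sample=' pool: mypool
 state: DEGRADED
status: One or more devices could not be opened.'

printf '%s\n' "$sample" | pool_state
# prints: DEGRADED
```

On a live system you would pipe `zpool status mypool` straight into the helper and act when the state is anything other than ONLINE.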

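The four-step replacement procedure above can be gathered into a single script. This is a sketch using the pool and disk names from the surrounding example; the run helper and the DOIT switch are illustrative additions so the commands can be dry-run first:

```shell
#!/bin/sh
# Dry-run sketch: replace a failed disk (c2t2d0) with a spare (c2t4d0)
# in the mirrored pool mypool. Set DOIT=1 on a live system to execute.
run() { if [ "${DOIT:-0}" = 1 ]; then "$@"; else echo "would run: $*"; fi; }

run zpool offline mypool c2t2d0         # 1. offline the failing disk
# Steps 2-3 (remove the old disk, insert the new one) are physical
# actions and have no command; skip them when a connected spare is used.
run zpool replace mypool c2t2d0 c2t4d0  # 4. resilver onto the spare
run zpool status mypool                 # then watch the resilver progress
```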
STEP BY STEP 9.6 Replace a Disk in a Mirrored Storage Pool

A mirrored storage pool named mypool has a failing disk drive (c2t2d0). A spare disk (c2t4d0) that is already connected to the system can be used as a replacement. Follow these steps to replace the failing disk with the replacement disk:

1. Take the failed disk offline:

# zpool offline mypool c2t2d0<cr>

2. Replace the failed disk with the good disk:

# zpool replace mypool c2t2d0 c2t4d0<cr>

3. Check the pool’s status:

# zpool status mypool<cr>
  pool: mypool
 state: DEGRADED
 scrub: resilver completed after 0h0m with 0 errors on Fri Dec 12 10:28:51 2008
config:

        NAME           STATE     READ WRITE CKSUM
        mypool         DEGRADED     0     0     0
          mirror       DEGRADED     0     0     0
            replacing  DEGRADED     0     0     0
              c2t2d0   OFFLINE      0     0     1
              c2t4d0   ONLINE       0     0     0
            c2t3d0     ONLINE       0     0     0

errors: No known data errors

Note that the preceding zpool status output might show both the new and old disks under a replacing heading. This text means that the replacement process is in progress and the new disk is being resilvered.

After a few minutes, the zpool status command displays the following:

# zpool status mypool<cr>
  pool: mypool
 state: ONLINE
 scrub: resilver completed after 0h0m with 0 errors on Fri Dec 12 10:28:51 2008
config:

        NAME        STATE     READ WRITE CKSUM
        mypool      ONLINE       0     0     0
          mirror    ONLINE       0     0     0

so the disk you choose. and everything is set up automatically. Here are the new features: . In addition. . You can use the Solaris Live Upgrade feature to migrate a UFS root file system to a ZFS root file system. or the slice within the disk must exceed the Suggested Minimum value. . with the following exception: A screen prompts you to select either a UFS or ZFS file system: Choose Filesystem Type Select the filesystem to use for your Solaris installation [ ] UFS [ ] ZFS After you select the software to be installed. you can use Solaris Live Upgrade to perform the following tasks: . This screen is similar to those in previous Solaris releases. multiple disks will be configured as mirrors. The ability to perform an initial installation where ZFS is selected as the root file sys- tem. Create a new boot environment within an existing ZFS root pool. You can select the disk or disks to be used for your ZFS root pool. Select a ZFS file system. the physical disk can be removed from the system and replaced. Create a new boot environment within a new ZFS root pool. A ZFS Root File System New in the Solaris 10 10/08 release is the ability to install and boot from a ZFS root file system. a mirrored two-disk configuration is set up for your root pool. except for the following text: For ZFS. you’re given the option to install on a UFS or ZFS root file system. you are prompted to select the disks to create your ZFS storage pool. The entire installation program is the same as previous releases. If you select two disks. .517 A ZFS Root File System c2t4d0 c2t3d0 ONLINE ONLINE 0 0 0 0 0 0 errors: No known data errors Now that the c2t2d0 disk has been offlined and replaced. During the initial installation of the Solaris OS.

You can also add a ZFS volume as a device to nonglobal zones.

After you have selected a disk or disks for your ZFS storage pool, you also specify the name of the dataset to be created within the pool that is to be used as the root directory for the file system. The following is an example of a ZFS root pool after the OS has been installed:

# zfs list<cr>
NAME                   USED  AVAIL  REFER  MOUNTPOINT
rpool
rpoo